
Big Data basics: What supply chain managers need to know

Keith Haurie  Vice President of Business Development, ONESOURCE Global Trade Management, Tax & Accounting, Thomson Reuters


Big Data and data analytics can make supply chain management simpler and clearer, but before that can happen, Big Data and data analytics themselves have to be understood. Here's our primer.

Here’s a scenario: Your director of trade compliance is notified that there is a 90 percent chance a critical shipment of parts to São Paulo will be subject to inspection by Brazilian customs, delaying receipt of the parts at your factory by up to two days. Your global trade management software recommends routing the parts through another port to speed up entry and keep the original timeline intact.

This circumstance shows how harnessing the power of analytics and Big Data can improve operational efficiency.

A worker loads Indonesian plywood onto a cargo ship headed for France on the southern coast of Kalimantan, Indonesia. REUTERS/Beawiharta

Data analytics and Big Data 101

Data analytics is the set of qualitative and quantitative techniques used to enhance productivity and business gains, while Big Data refers to the voluminous amounts of data – structured or unstructured – that organizations can potentially mine for business value.

As corporations continue to face pressure to increase profit margins and shorten order-to-delivery cycles, the application of data analytics will continue to grow. A Gartner, Inc. study published last year projected 2017 sales of $18.3 billion in the business intelligence and analytics market, while sales of prescriptive analytics software are estimated to grow from approximately $415 million in 2014 to $1.1 billion in 2019.

The container terminal Burchardkai at Hamburg harbor. REUTERS/Christian Charisius

Volume, velocity and variety

Big Data, a broad term for data sets so large or complex that traditional processing applications are inadequate, is defined by the three V’s: volume, velocity, and variety.

  • Volume relates to the sheer magnitude of data now available for analysis. We normally think of data as text or numbers, but it also includes email, tweets, images, audio, and more. The volume of data doubles roughly every two years, and human- and machine-generated data is growing at 10 times the rate of traditional business data. IT World Canada projected that by 2020, the sheer volume of the world’s digital data would fill a stack of iPad Air tablets reaching from the Earth to the Moon.
  • Velocity refers to the frequency with which data changes. Think of how data velocity has accelerated in the past decade, thanks mostly to the expansion of the Internet and social media. Real-time data is projected to grow tenfold by 2025. A cousin of real-time data is near-time data: transmissions with a time delay between the occurrence of an event and the publication of the data. If you have ever accessed a website that publishes stock prices on a five-minute delay, you have used near-time data. Streaming is a term most of us had probably never heard before the advent of consumer services such as Spotify and Netflix. Driven by the availability of cloud-based solutions, the growth of streaming services will only accelerate as younger generations embrace the technology to access movies, music, videos, and television.
  • Variety refers to whether data is structured, semi-structured, or unstructured. Structured data has been organized into a formatted repository, typically a database, so that its elements are accessible for processing and analysis (think of Excel spreadsheets). For an example of semi-structured data, think of CSV (comma-separated value) files: they are not part of a relational database, but they are organized in a format that can easily be loaded into an analytical tool such as Excel. As the name implies, unstructured data is not contained in a database or any other data structure; it may consist of text, numbers, dates, video, images, and so on.
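To make the "variety" distinction concrete, here is a minimal Python sketch (a hypothetical illustration using invented shipment records, not drawn from any particular trade management system) that handles the same kind of information in structured, semi-structured, and unstructured form:

```python
import csv
import io
import json

# Structured data: fixed rows and columns with a known schema,
# ready to load into a spreadsheet or relational database.
csv_text = "shipment_id,port,eta_days\nSH-001,Santos,5\nSH-002,Hamburg,3\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(rows[0]["port"])  # Santos

# Semi-structured data: self-describing fields, but no rigid schema;
# different records may carry different keys.
json_text = '{"shipment_id": "SH-001", "notes": ["customs hold possible"]}'
record = json.loads(json_text)
print(record["notes"][0])  # customs hold possible

# Unstructured data: free text with no predefined fields; extracting
# meaning requires parsing or text analytics.
email_body = "Parts for the Sao Paulo factory may be delayed two days."
print("delayed" in email_body)  # True
```

The point of the sketch is that structured data can be queried directly, semi-structured data needs a parser but still carries its own labels, and unstructured data must be interpreted before it yields anything analyzable.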
This photo illustration depicts an Eikon ship-tracking screen showing dry bulk ships waiting off Hay Point, Australia, on December 15, 2017. REUTERS/Thomas White

Big Data offers large-scale opportunities to organizations, because structured and unstructured data can be consolidated and analyzed from multiple perspectives. These perspectives reveal previously unseen solutions to complex problems, and the resulting insights can guide companies as they scale their programs, combining data analytics with other applications and embedding intelligence in every process.


Learn more

ONESOURCE Global Trade for FTA supports confidence and adds visibility across the entire FTA process.
