
Big Data and the analytics bottleneck

Smarter trading means different things to different people, depending on their perspective within the trading life cycle, but it certainly embraces the imperative for firms to harvest more from technical infrastructure and the increasing “datafication” of everything.

Yet everyone knows that harvesting big analytics from Big Data is a significant challenge today. Every data source requires its own handling, processing and/or technical infrastructure before end users can harvest a meaningful amount of information. Sources may share some commonalities but will still vary by geography, latency, product specifications and a variety of other factors.

A single data source can also differ from itself once temporal issues come into play. The treatment of an end-of-day (EOD) version of a source is likely to differ – due to adjustments for corporate actions – from streaming or intraday versions of that same source. Historical versions of a data source will most likely need to be modified to reflect ongoing changes to the underlying data and to the technical specifications of the feed itself.
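
To make this concrete, the minimal Python sketch below shows one common reason an EOD history diverges from the prices that actually streamed intraday: back-adjusting closes for a stock split. The dates, prices and split ratio are invented for illustration; this is a sketch of the idea, not any vendor's adjustment methodology.

    # Hypothetical illustration: why an end-of-day history differs from the
    # raw prices that printed intraday. A 2-for-1 split on 2024-06-03 means
    # every close recorded before that date is divided by 2 so the history
    # is comparable with post-split prices.
    raw_closes = {                       # prices as they actually printed
        "2024-05-31": 100.0,
        "2024-06-03": 51.0,              # first post-split session
        "2024-06-04": 52.5,
    }
    split_date, split_ratio = "2024-06-03", 2.0

    adjusted = {
        day: (px / split_ratio if day < split_date else px)
        for day, px in raw_closes.items()
    }
    print(adjusted)  # {'2024-05-31': 50.0, '2024-06-03': 51.0, '2024-06-04': 52.5}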

If we then consider that even the simplest trading firm currently may rely on hundreds of data sources – market data, fundamental, transactional and others – and the most complex global banks and asset managers are likely to rely on thousands of data sources (plus the temporal permutations of each of them), then it’s no wonder that the imminent deluge of new digital-era data sources has most capital markets participants wide-eyed and frozen in place.

Data is increasingly viewed as a strategic asset and business enabler, with a value far beyond what was originally anticipated. Technologies that make processing large amounts of data cheaper and faster address only one part of the Big Data challenge. Even with an efficient way to store and process large volumes of data in place, there is still the challenge of making sense of it. The concept of “bigger integration,” collating different data types into a single, rapidly accessible location, is key. Delivering access to large numbers of data sources and analytics “on demand” as a service, rather than through a multitude of separately operated analytics engines, is now essential.
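
As a rough sketch of what “analytics on demand” behind a single access layer might look like, the Python snippet below registers heterogeneous sources behind one interface and computes analytics only when they are requested. The source names, loaders and analytics are invented purely for illustration.

    # Hypothetical sketch of a single access layer over many data sources:
    # each source registers a loader once, and analytics are requested on
    # demand through one interface rather than per-source engines.
    from typing import Any, Callable, Dict

    _loaders: Dict[str, Callable[[], Any]] = {}

    def register_source(name: str, loader: Callable[[], Any]) -> None:
        _loaders[name] = loader

    def run_analytic(source: str, analytic: Callable[[Any], Any]) -> Any:
        data = _loaders[source]()        # fetch the raw data on demand
        return analytic(data)            # compute the requested analytic

    # Illustrative registrations; real sources would be market data feeds,
    # fundamentals, transaction logs, news and so on.
    register_source("eod_prices", lambda: [100.0, 101.5, 99.8])
    register_source("trade_counts", lambda: [1200, 950, 1410])

    print(run_analytic("eod_prices", lambda xs: sum(xs) / len(xs)))  # average close
    print(run_analytic("trade_counts", max))                         # busiest day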

Big Data is one of the key factors in the rapid evolution of electronification and the rise of open platforms. Financial services firms are taking a second look at how they ingest data – from the development of the platform core, to specific domain engines, to enhanced Natural Language Processing and machine learning capabilities that reach beyond structured data and bring structured and unstructured data together.

The end goal is to create an “intelligent engine” that will allow customers to ask fundamental questions, such as: “Which sectors rallied the most following the last five GDP surprises?” and surface relevant content and insights with questions like: “What should I know about my portfolio today?”
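
To show what answering the first of those questions might involve under the hood, the sketch below decomposes it into a structured computation over event dates and sector returns. Every date, sector and return figure here is invented purely for illustration.

    # Hypothetical decomposition of "Which sectors rallied the most following
    # the last five GDP surprises?" into a structured query. All dates,
    # sectors and returns are invented.
    gdp_surprise_dates = ["2024-01-25", "2024-04-25", "2024-07-25",
                          "2024-10-30", "2025-01-30"]

    # Next-session sector returns (%) keyed by the surprise date.
    sector_returns = {
        "2024-01-25": {"Tech": 1.8, "Energy": 0.4, "Financials": 1.1},
        "2024-04-25": {"Tech": 0.9, "Energy": 1.6, "Financials": 0.2},
        "2024-07-25": {"Tech": 2.1, "Energy": -0.3, "Financials": 0.8},
        "2024-10-30": {"Tech": 1.2, "Energy": 0.7, "Financials": 1.5},
        "2025-01-30": {"Tech": 0.6, "Energy": 1.1, "Financials": 0.9},
    }

    # Average each sector's reaction across the five events and rank them.
    totals = {}
    for day in gdp_surprise_dates:
        for sector, ret in sector_returns[day].items():
            totals[sector] = totals.get(sector, 0.0) + ret

    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    for sector, total in ranked:
        print(f"{sector}: {total / len(gdp_surprise_dates):+.2f}% avg next-session return")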

However, according to a new paper from Paul Rowady at TABB Group, “Analytics Bottleneck: Battling the (Unfortunate) Shape of Big Data,” in many ways, most players are still struggling with the management of internally generated data from yesterday and therefore are ill-equipped to even consider the mountains of data that are gathering externally. Rowady argues that what may be a competitive advantage for a few right now – using the latest tools, technologies, infrastructure and processing methods to harvest greater intelligence from increasingly bigger data – is sure to become a competitive necessity sooner than anyone expects.

The TABB paper goes on to examine how the extreme growth of data for the foreseeable future distorts conversion rates of that data into new (and in some cases, desperately needed) analytics; how the increasing complexity of Big Data for capital markets use cases means that virtually no firm will possess the capability to manage Big Data challenges without the help of content aggregation and platform management partners; and how a much more collaborative community of (both internal and external) specialists is representative of a new Enterprise Data Management (EDM) strategy going forward.

Interestingly, Rowady believes that both the challenges of the Big Data phenomenon and the solutions to it are best illustrated with a modified version of the traditional knowledge-transfer pyramid, in which successively higher-order “conversion zones” first translate (raw) data into information, then convert information into knowledge, and finally assimilate knowledge into wisdom.

Infographic: The traditional knowledge transfer pyramid for the Big Data era

However, Rowady has also concluded that the rate of growth of raw data is increasing faster than anyone’s ability to convert it proportionately into information (which would preserve the shape of the traditional knowledge pyramid). In other words, metaphorically speaking, the Big Data phenomenon is causing the base of the traditional knowledge pyramid to grow dramatically faster than the layers above it, creating a skew that leads to what TABB calls the primary Big Data analytics bottleneck, between the data layer and the analytics layer.

Infographic: The Big Data era’s shape-shifting challenge
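
The skew is easy to see with a toy calculation (all growth rates invented for illustration): if raw data compounds faster than the capacity to turn it into information, the share of data that ever becomes information shrinks every year, which is exactly the widening base of the pyramid.

    # Toy model of the analytics bottleneck; every number here is invented.
    # Raw data compounds faster than conversion capacity, so the converted
    # share (the relative size of the information layer) keeps shrinking.
    data, capacity = 100.0, 60.0            # arbitrary starting units
    data_growth, capacity_growth = 1.40, 1.10

    for year in range(6):
        converted_share = min(capacity / data, 1.0)
        print(f"year {year}: {converted_share:.0%} of raw data converted to information")
        data *= data_growth
        capacity *= capacity_growth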

The implications of this analytics bottleneck are profound. Global markets firms are already choking on data. Or they are opting to ignore many of the newer sources of data for fear of adding to the data they are already choking on. Most of these same firms are struggling with the overall costs of regulatory and technology-induced transformation, which means there are significant competitive advantages to be won by improving the analytical conversion rate.

Tools and technologies for high performance computing (HPC) are a critical part of the new arsenal to combat the data analytics conversion bottleneck – and they are already available. Yet adoption remains slow in the current environment.

TABB Group has more than one theory on why this disparity exists, starting with the regulatory juggernaut of the post-Global Financial Crisis era, which has put most of the main buyers of technology, tools and solutions – mainly tier-1 banks – on the defensive. Regulatory hangovers, operational numbness, unprecedented levels of change and budget constraints are just a few of the terms that come to mind to describe the current state of play.

In the end, Rowady’s paper suggests that the typical analytics bottleneck is likely to get worse before it gets better. The shape of the traditional knowledge pyramid is shifting faster than most players can keep pace with it.

The next important question is: Can your firm battle this challenge on its own?


Learn more

Capitalize on Big Data with Open Calais. Download our API and connect your results with Open PermID.
Read more from Exchange Magazine in the Know 360 app
