
The Internet of Me: It’s personal

How mobile technology, artificial intelligence, machine learning, big data, natural language processing, and the Internet of Things might actually change your life.

Earlier this year, I had the opportunity to present the 2016 Turing Lecture across the United Kingdom. I titled the presentation “The Internet of Me: It’s Personal” as a way of boiling down the exceptional growth in mobile, the Internet of Things (IoT) and big data, and of discussing the technologies that make this growth relevant at the individual level.

The raw numbers are truly staggering. Ericsson, the Swedish manufacturer of mobile telephony infrastructure, regularly produces a mobility report on subscriptions and data traffic; in the spring 2016 Turing Lecture I used figures from its November 2015 report, and I draw on that report again here. At the end of 2015, there were approximately 7.4 billion mobile subscriptions worldwide – roughly one for every person on the planet – with a forecast of 9.1 billion by the end of 2021. Just under half of those 7.4 billion subscriptions are for smartphones today, and Ericsson forecasts that 6.4 billion smartphones – 70% of the market – will be in use by 2021.

Figure 1. Smartphone subscriptions set to almost double by 2021

This chart shows smartphone subscriptions per region from 2015 to 2021.
Most mobile broadband devices are, and will continue to be, smartphones. Many consumers in developing markets first experience the Internet on a smartphone, usually due to limited access to fixed broadband. It took more than five years to reach the first billion smartphone subscriptions, a milestone that was passed in 2012, and less than two years to reach the second billion. The 4 billion mark is expected to be reached by the end of this year.
Source: “Ericsson Mobility Report,” November 2015

Figure 2. Mobile subscriptions by technology

This chart shows mobile subscriptions by technology, and projections through 2021.
Source: “Ericsson Mobility Report,” November 2015

A second key point in this incredible mobile growth is the rise of “mobile broadband.” Data speeds continue to increase substantially as the world moves from third-generation to fourth-generation standards, also known as Long Term Evolution (LTE). LTE lets users download and access data at 5-15 megabits per second – quite acceptable for viewing video and for easily sharing and consuming content. The increase in mobile broadband speeds parallels fixed Internet usage in our homes and offices: the faster the speeds, the more data is consumed and produced. Mobile broadband has essentially become the fuel for the mobile data consumption fire.
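
To put those speeds in perspective, here is a back-of-envelope calculation; the clip size and HD stream bitrate below are assumptions chosen only to illustrate why 5-15 Mbps is comfortable for video, not figures from the Ericsson report.

```python
# Illustrative arithmetic only; the clip size and stream bitrate are assumptions.
LTE_SPEEDS_MBPS = [5, 15]   # the download range cited above
HD_STREAM_MBPS = 4          # an assumed bitrate for an HD video stream
CLIP_SIZE_MB = 100          # an assumed size for a short video clip

for speed in LTE_SPEEDS_MBPS:
    seconds = (CLIP_SIZE_MB * 8) / speed   # megabits divided by megabits per second
    headroom = speed / HD_STREAM_MBPS      # rough multiple of an HD stream's bitrate
    print(f"{speed} Mbps: ~{seconds:.0f} s to fetch a {CLIP_SIZE_MB} MB clip, "
          f"~{headroom:.1f}x the assumed HD bitrate")
```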

Ericsson predicts that by 2021, 70% of all mobile data traffic will be video content – a 14X increase over 2015. Quite simply, the mobile device will be an enormous generator and consumer of data in the years to come, as smartphones and their applications continue to expand. And this is only the beginning. Also expanding on the horizon is the IoT: devices that have an Internet address and are capable of communicating with other things. By that same end of 2021, Ericsson forecasts that 28 billion devices – including computers, cars, appliances, machines and mobile phones themselves – will have some sort of Internet connection. This represents another massive source of data creation and consumption.
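
As a rough sanity check on that forecast, a 14X rise over the six years from 2015 to 2021 works out to roughly 55% compound growth per year; the short sketch below shows the arithmetic and is illustrative only.

```python
# A back-of-envelope check on the growth figure above (illustrative only).
growth_multiple = 14        # mobile video traffic, 2015 to 2021, per the forecast
years = 2021 - 2015         # six years

cagr = growth_multiple ** (1 / years) - 1
print(f"A {growth_multiple}x rise over {years} years is roughly {cagr:.0%} "
      "compound growth per year.")
```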

Figure 3. Mobile data traffic by application type (monthly exabytes)

This chart shows mobile data traffic by application type; in 2021, almost 70% of all mobile data traffic will be from video.
1. Based on Ericsson measurements in a selected number of commercial networks in Asia, Europe and the Americas
2. Video is also likely to form a major part of file-sharing traffic, in addition to the identified application type “video”
3. Ericsson ConsumerLab, TV and Media (2015)

With all of these Internet-connected devices, one might think this represents the largest pool of big data creation. There is more coming, though. According to a study by scientists at the University of Illinois at Urbana-Champaign and Cold Spring Harbor Laboratory in New York, human genomics will be producing data at a scale requiring 2,000 to 40,000 petabytes of storage per year by 2025. What does that mean in comparison? Twitter® content will contribute just 1-17 petabytes per year in 2025. Deep-space astronomy research will produce 1,000 petabytes per year. Even YouTube will require only 1,000-2,000 petabytes per year. This is really big data!

Figure 4. Sources of Big Data in 2025

By 2025, the storage needs of new genomics data will far outstrip those of any other data source, according to a study by scientists at the University of Illinois at Urbana-Champaign and Cold Spring Harbor Laboratory in New York. By that year, they predict, 100 million to 2 billion human genomes will have been sequenced.
This chart projects the annual storage needs of big data sources in 2025.
Source: “Big Data: Astronomical or Genomical?” PLOS Biology, July 7, 2015

With all of this data sitting on servers across the world, a simple question arises: How will this data be managed and handled? One of the core technologies at the heart of the answer is cognitive computing. Cognitive computing is defined in many ways, but for simplicity, there are four keys to its value: machines must learn, machines must think, machines must interact with other machines and, finally, machines must interact with people.

There are many parallels between machines and human beings. At birth, we know nothing but immediately start to learn. Machines are similar: they must be trained with specific types of data sets to begin to learn. Because machines do not have the same type of brain as humans, scientists develop sets of rules, built around very specific taxonomies of words, to train them – often for a particular domain such as medicine or law. Machines can be taught, for example, that the name Robert may also have the variants Bob, Rob or Bobby. They can be taught relationships between things, such as Exxon “is related to” oil, refineries and gasoline. However, machine learning is only as good as the data sets on which the machines are trained, and learning comes with a fundamental problem: Just because a machine or a human learns, it does not mean that either one understands.
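
To make that concrete, here is a minimal sketch of the kind of hand-built taxonomy and rules described above. The name variants and “is related to” entries are invented for illustration, not drawn from any real training set.

```python
# A minimal sketch of the rule- and taxonomy-style training described above.
# The variant lists and relations are illustrative, not a real training set.
NAME_VARIANTS = {
    "Robert": {"Bob", "Rob", "Bobby"},
}

RELATED_CONCEPTS = {
    "Exxon": {"oil", "refineries", "gasoline"},
}


def same_person(name_a: str, name_b: str) -> bool:
    """Treat two names as one entity if they fall in the same variant group."""
    for canonical, variants in NAME_VARIANTS.items():
        group = variants | {canonical}
        if name_a in group and name_b in group:
            return True
    return name_a == name_b


def is_related(entity: str, concept: str) -> bool:
    """Look up a hand-built 'is related to' rule, as a domain taxonomy might."""
    return concept in RELATED_CONCEPTS.get(entity, set())


print(same_person("Robert", "Bobby"))   # True
print(is_related("Exxon", "oil"))       # True
print(is_related("Exxon", "football"))  # False: the rules only know what they were given
```

The last line is the fundamental problem in miniature: the machine has learned exactly what it was given and nothing more, and it understands none of it.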

A great demonstration of this concept is the search engine. Google®, for example, knows from my search history and the accompanying machine learning that I am interested in Arsenal Football Club. But Google doesn’t know why – so what should it show me? Perhaps I’m a fan of an opposing club and am searching for Arsenal to read all the bad news ahead of an upcoming match or a run at the Premier League title. As it happens, I am a genuine fan of Arsenal, and it all stems from a friend telling me to read Nick Hornby’s book Fever Pitch back in 1997 before I moved to the United Kingdom. After my first match at Highbury in 1998, I was hooked. Those are facts that Google will not know.

… the data being created is designed to do one thing: Provide relevance to me, the end consumer.

Thus, we need a second layer of cognitive computing – artificial intelligence – to start making sense of this learning. Artificial intelligence (AI), broadly speaking, is the next stage: attempting to apply human-like characteristics to data and information to get results and answers. Recommendation engines (you bought this, so you might like that) are a simple example of marrying learning with intelligence. Another terrific example in action is AlphaGo, from Google’s DeepMind. This AI engine recently beat one of the world’s best Go players, Lee Sedol, four games to one. Go is a massively complicated game to master; in fact, if you ask Go players why they made a particular move, they will often tell you “it just felt right.” Creating machines that can simulate the human brain through neural network technology is at the core of artificial intelligence.
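
The “you bought this, you might like that” idea can be sketched very simply: count which items co-occur in past purchases and suggest the most frequent companions. The baskets below are invented purely for illustration and gloss over everything a production recommendation engine would add.

```python
from collections import Counter

# A minimal "you bought this, you might like that" sketch using co-occurrence
# counts. The purchase baskets below are made up purely for illustration.
PURCHASES = [
    {"fever_pitch", "arsenal_scarf"},
    {"fever_pitch", "arsenal_scarf", "match_tickets"},
    {"fever_pitch", "cookbook"},
]


def recommend(item, top_n=2):
    """Recommend the items most often bought alongside the given item."""
    counts = Counter()
    for basket in PURCHASES:
        if item in basket:
            counts.update(basket - {item})
    return [other for other, _ in counts.most_common(top_n)]


print(recommend("fever_pitch"))  # e.g. ['arsenal_scarf', 'match_tickets']
```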

Unsurprisingly, a machine – like a person – will not have all the answers. So what do we do as humans when we do not know an answer? We ask someone else. Similarly, machines will need to be able to connect to other machines to discover answers. This ties in directly with the large-scale IoT world we will soon be living in. How much rain fell on the farmer’s field in Cape Town? Let me ask the in-ground sensor that is connected to the network.
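
A sketch of that machine-to-machine question might look like the snippet below; the sensor URL and response format are assumptions, standing in for whatever API a real in-ground sensor network would expose.

```python
import requests  # a widely used HTTP client library

# One machine asking another, in the spirit of the rainfall example above.
# The URL and the JSON shape are assumptions, not a real service.
SENSOR_URL = "https://sensor.example.com/fields/cape-town/rainfall"


def rainfall_last_24h() -> float:
    """Ask a (hypothetical) in-ground sensor for the last day's rainfall in mm."""
    response = requests.get(SENSOR_URL, params={"window": "24h"}, timeout=5)
    response.raise_for_status()
    return response.json()["millimetres"]


if __name__ == "__main__":
    print(f"Rainfall over the last 24 hours: {rainfall_last_24h()} mm")
```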

So while the machines get smarter, there is one last piece of the puzzle to solve – the piece of cognitive computing that will be most relevant to the masses: the human-computer interface. One of the key ways this is evolving is through a technology called natural language processing. Today, we glimpse its most basic relevance when we use Apple’s Siri voice search or ask “Alexa” on the Amazon® Echo. We do not need to phrase our requests for information in prescribed terms; rather, we can use “normal” human language to ask and receive answers to our questions. Voice interfaces are but one way to “naturalize” the interface between humans and computers.
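
To show the direction of travel in miniature, here is a toy intent matcher that maps a plain-English question onto a structured request. Real assistants such as Siri and Alexa use far richer models; the keyword patterns below are purely illustrative assumptions.

```python
import re

# A toy illustration of mapping "normal" language onto a structured request.
# Real assistants use far richer models; these keyword patterns are assumptions.
INTENTS = {
    "weather": re.compile(r"\b(rain|weather|umbrella)\b", re.IGNORECASE),
    "score":   re.compile(r"\b(score|result|match)\b", re.IGNORECASE),
}


def parse_intent(utterance: str) -> str:
    """Return the first intent whose pattern matches the utterance."""
    for intent, pattern in INTENTS.items():
        if pattern.search(utterance):
            return intent
    return "unknown"


print(parse_intent("Will I need an umbrella in London tomorrow?"))  # weather
print(parse_intent("How did Arsenal's match finish?"))               # score
```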

Thus, we can now start to see where this is going. All of this data being created, and all of the technology working behind the scenes to model the data and make sense of it, is designed to do one thing: provide relevance to me, the end consumer. As a human, big data might be interesting as a topic in general, but out of all the petabytes of data in the world, I only care about what is relevant to me in any given moment: the one news story, stock quote, baseball score or picture from a friend or family member on holiday. The real value in all of this technology is in recognizing my personal circumstances – my work-life blur – and delivering the “Internet of me”: an extraction of content from all of the big data, optimized for whatever device I happen to be using, that ultimately delivers the following (a small sketch follows the list):

  • A recognition and learning of what I care about
  • A solution of value regardless of screen/device
  • A balance between anticipation, push and pull
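
A minimal sketch of those three criteria, with invented interests and items, might look like this: score each incoming item against learned interests, keep only what clears a relevance threshold, and surface the rest in order.

```python
# A minimal sketch of the three criteria above: score incoming items against
# learned interests and keep only what clears a relevance bar. All of the
# interests and items here are illustrative.
MY_INTERESTS = {"arsenal": 0.9, "markets": 0.6, "baseball": 0.4}

ITEMS = [
    {"title": "Arsenal win late at the Emirates", "topics": ["arsenal"]},
    {"title": "Quarterly earnings round-up",      "topics": ["markets"]},
    {"title": "Celebrity gossip",                 "topics": ["entertainment"]},
]


def relevance(item):
    """Score an item by the strongest matching learned interest."""
    return max((MY_INTERESTS.get(t, 0.0) for t in item["topics"]), default=0.0)


feed = sorted((i for i in ITEMS if relevance(i) >= 0.5), key=relevance, reverse=True)
for item in feed:
    print(f"{relevance(item):.1f}  {item['title']}")
```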

Ultimately, cognitive computing will radically change the way each of us interacts with big data. We will see less user-initiated search and more anticipatory computing that is proactive, understanding what I need and when I need it. That interaction will become conversational and natural as the technology matures. Simply put, cognitive computing will enable personal “oracles” that know you, understand you and assist you in life. Thus, the “Internet of me” is the realization of boiling the big data problem down to what is most important to you.

