As Kirk Borne of Booz Allen Hamilton explains, AI-as-a-service offerings can be transformative for both businesses and employees in serving customers.
If you are inclined to think providing excellent service to your customers isn’t rocket science, think again. In today’s digital landscape, big data reigns supreme, and the ability to intelligently deploy technologies that can make sense of all that information is the key to competitive advantage. Data science as a discipline is moving beyond the classroom and into the boardroom, as more and more executives and directors realize the tremendous opportunities it opens up for their organizations.
To discuss some of the business applications for data science and artificial intelligence, we spoke with Dr. Kirk Borne, Principal Data Scientist and Executive Advisor at management and technology consulting firm Booz Allen Hamilton. A 20-year veteran of NASA as a data scientist and astrophysicist, as well as a professor at George Mason University for 12 years, Dr. Borne is a widely followed technology and industry commentator. He walked us through his views on the ways AI-as-a-service offerings may be used to strengthen the role that human employees play in the relationship with customers, and he also spoke to us about using contextual data in effective modeling and the challenges of proper data management.
“Companies have all this information on their websites which nobody reads. With AI-as-a-service, they can get that information into the right hands of the right people at the right time without burdening people with not-quite-as-important information, freeing up the call center people to solve harder problems. That’s just one example, but I think AI as a whole has many different applications everywhere.”
– Dr. Kirk Borne, Booz Allen Hamilton
ANSWERS: More and more businesses are looking at the power of data science. What do you see as the development curve for data science in predictive business analytics and forecasting in the next 3-5 years?
KIRK BORNE: I think more people will recognize the value of contextual data in prediction. The traditional method of time series forecasting has been largely univariate – that is to say, Y is modeled as a function of time alone. You try to build a model of what that function looks like, whether you’re using Fourier analysis or some seasonal terms or Holt-Winters modeling or whatever. These techniques basically just look at the historical trend of the data and then try to predict what will happen next on that trend.
These techniques are applied to problems like stock market forecasting or customer purchase forecasting. That’s fine, but I think what’s really moving the needle here is the addition of contextual information – the recognition that whatever thing you’re predicting, it’s not based only on its own historical patterns. It’s also shaped by influences that come from external sources. For example, let’s say you want to predict a stock market crash. If you just look at a trend in stock market pricing over a period of several years and there hasn’t been a crash in those years, there’s nothing in the data that shows a crash will happen. Therefore, you’ll never predict one.
That’s called autoregressive modeling – the model looks only at itself. Whereas, as we know, real market crashes do occur because of all these other things happening in the market. For example, the previous crash was primarily due to an overextended credit and housing market and to toxic loans given to people below the traditional cut-off for credit scores. That increased the risk in the market, that risk got overextended, and you reached the tipping point. I think the future of forecasting and predictive analytics is to recognize the value of all these external sources of data and information that give you the context in which something is happening; the thing that you’re trying to predict is not happening in a vacuum, but in a real world scenario. Those external forces will be the ones that actually give you the best insight into which way something is going to go. It’s not necessarily going to tell you the day and the minute, but it might tell you the conditions that would be right for such an event.
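Dr. Borne’s point can be illustrated with a toy sketch (our own, not his code, with made-up numbers): a purely autoregressive model fit to a synthetic series never anticipates a jump driven by an external signal, while the same model extended with that exogenous "contextual" regressor does.

```python
# Toy comparison: autoregressive-only forecast vs. one with an exogenous
# (contextual) variable. All data and coefficients here are illustrative.

def fit_ar_with_exog(y, x):
    """Least-squares fit of y[t] = phi*y[t-1] + beta*x[t] via 2x2 normal equations."""
    a = [y[t - 1] for t in range(1, len(y))]  # lagged value of the series
    b = [x[t] for t in range(1, len(y))]      # contextual regressor
    w = [y[t] for t in range(1, len(y))]      # target
    Saa = sum(u * u for u in a); Sbb = sum(v * v for v in b)
    Sab = sum(u * v for u, v in zip(a, b))
    Saw = sum(u * t_ for u, t_ in zip(a, w)); Sbw = sum(v * t_ for v, t_ in zip(b, w))
    det = Saa * Sbb - Sab * Sab
    phi = (Saw * Sbb - Sbw * Sab) / det
    beta = (Saa * Sbw - Sab * Saw) / det
    return phi, beta

def fit_ar_only(y):
    """Least-squares fit of the univariate model y[t] = phi*y[t-1]."""
    num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

# Synthetic series: y[t] = 0.5*y[t-1] + 2.0*x[t], where x is an external
# on/off condition the univariate model never sees.
x = [0, 1] * 10
y = [1.0]
for t in range(1, len(x)):
    y.append(0.5 * y[-1] + 2.0 * x[t])

phi, beta = fit_ar_with_exog(y, x)
phi_only = fit_ar_only(y)

# Forecast one step ahead under an external shock (x = 1):
truth = 0.5 * y[-1] + 2.0
forecast_contextual = phi * y[-1] + beta * 1.0
forecast_ar_only = phi_only * y[-1]
```

Because the contextual model sees the external condition, it recovers the true dynamics and anticipates the jump; the autoregressive-only model, looking only at itself, blends the two regimes and misses it.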
ANSWERS: Do you think we might see the rise of AI-as-a-service offerings, and if so how might those work?
BORNE: I think this is definitely coming, because so many things are “as-a-service” these days. The advantage of offerings like these is that you basically just call an application programming interface (API), and you don’t necessarily have to do a full development effort yourself. An example is chatbot-as-a-service; such services are now available from some of the cloud service providers, and you don’t need to build your own bots.
If you’re running a small business and you can’t afford a whole customer care call center, what you can do is start with the easy things. That might take the form of the set of questions that people frequently call and ask about. That set could be fed into a chatbot-as-a-service; that’s AI-as-a-service. Your content is the raw fuel that feeds the algorithm, and then you can deploy a conversational AI-as-a-service with very minimal investment because the service is already there for you. You already know what your content is, because it’s your business. You just put the two together.
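The content-plus-algorithm pairing he describes can be sketched in miniature (real chatbot-as-a-service APIs differ from provider to provider; this toy keyword matcher and its FAQ entries are purely illustrative):

```python
# A minimal sketch of feeding a small business's FAQ into a conversational
# service. The FAQ content and matching logic are illustrative assumptions.

def tokens(text: str) -> set:
    """Lowercase word set with trailing punctuation stripped."""
    return {w.strip("?.,!").lower() for w in text.split()}

# The business's own content: the "raw fuel" for the bot.
faq = {
    "What are your business hours?": "We are open 9am-5pm, Monday through Friday.",
    "Do you offer refunds?": "Yes, within 30 days of purchase with a receipt.",
    "Where are you located?": "123 Main Street, Springfield.",
}

def answer(question: str) -> str:
    """Return the FAQ answer whose question best overlaps the query's words."""
    words = tokens(question)
    best_answer, best_overlap = None, 0
    for q, a in faq.items():
        overlap = len(words & tokens(q))
        if overlap > best_overlap:
            best_answer, best_overlap = a, overlap
    # Unmatched questions are the "harder problems" routed to humans.
    return best_answer or "Let me connect you with a human agent."
```

A hosted service would replace the matching with a trained conversational model, but the division of labor is the same: routine questions are answered from the FAQ, and everything else escalates to a person.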
That’s just one example. Another aspect of this is contextual data and what I would call cognitive awareness. A real intelligence (as opposed to an artificial intelligence) is cognitively aware. For instance, when I enter a situation, I instinctively know from a lifetime of experience what things I should pay attention to and what things I should not pay attention to. If you walk into a business meeting and you see there are some ballpoint pens and some pieces of paper on the table, those are not nearly as important as the fact that everyone has a very serious look on their faces. You automatically know without even thinking about it that this meeting must be serious, and you determine the most important things you should pay attention to in this situation. That’s being cognitively aware of context.
AI-as-a-service can help filter the floods of information that are coming to us from so many different directions in our business, whether it’s social media or customer calls or purchase patterns or market conditions or whatever. As humans, we know how to filter, monitor and distill the most important pieces of all that contextual information in a cognitive way so that we can take the best action or make the best decision.
If we can think of an AI-as-a-service offering as a filter for all these different streams of data – one that knows which streams are the most meaningful for your business – then I think you’ve got something going for you. I’ve heard interviews with customer call center workers in which they were asked whether AI is helping or hurting their jobs. They said their jobs are so much nicer now because they don’t have to answer all the trivial, repetitive questions that are already answered on the business’s website. They get to work on hard, challenging problems. The bots make their jobs more interesting and more fulfilling because the people are now able to help customers solve harder and more complex problems.
Companies have all this information on their websites which nobody reads. With AI-as-a-service, they can get that information into the right hands of the right people at the right time without burdening people with not-quite-as-important information, freeing up the call center people to solve harder problems. That’s just one example, but I think AI as a whole has many different applications everywhere.
The greatest beneficiaries of AI-as-a-service offerings are going to be small businesses that just can’t afford to build data science teams to analyze and code all this themselves.
ANSWERS: What do you believe are some of the consequences of data acquisition that are not being explored rigorously enough?
BORNE: Using customer data (purchase patterns and things like that) has led to really great marketing advances, such as personalized recommendation engines. We get more targeted ads for things that are more relevant and interesting to the individual consumer.
An unintended consequence would be if a hacker breaks into the company’s computers and all of a sudden my personal data is out on the dark web somewhere. Of course, we have to hope and trust that the company that collected the data isn’t also abusing and misusing it. There are cases where we need to be more rigorous in our efforts to annotate and tag data with its permitted uses.
I come from a background in astrophysics where I spent nearly 20 years at NASA working with space science data systems. We were making the data from these experiments publicly available, primarily to the scientific research community. School teachers and students and a lot of other people were accessing it. There was no problem with that because your tax dollars paid for these experiments in space, and all the data was open and public.
The idea was that if we could tag the data by when it was used, by whom, and for what use cases, it would actually enrich the data. It’s almost like having your own personal librarian. When you walk into a traditional library and say you are looking for a book about a specific topic, the librarian knows what other people have looked at and which books are most useful, and he or she directs you to the right place.
If we tag the data with who used it, when they used it and why, then it satisfies the compliance and regulatory issues that GDPR and likely future regulation will insist upon. From the privacy side now you can put access controls and use case controls, so to speak – only certain people or applications can use the data.
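The tag-and-log idea can be sketched as a small data structure (a hypothetical schema of our own, not a GDPR implementation): each record carries its permitted purposes, and every access must declare a purpose, which is checked and logged.

```python
# Sketch of use-case tagging: records carry permitted purposes, and each
# access is checked and recorded. Schema and field names are illustrative.
from datetime import datetime, timezone

class TaggedRecord:
    def __init__(self, value, allowed_purposes):
        self.value = value
        self.allowed_purposes = set(allowed_purposes)
        self.audit_log = []  # (who, when, purpose) entries

    def access(self, who: str, purpose: str):
        """Grant access only for a declared, permitted purpose, and log it."""
        if purpose not in self.allowed_purposes:
            raise PermissionError(f"{purpose!r} is not a permitted use of this data")
        self.audit_log.append((who, datetime.now(timezone.utc), purpose))
        return self.value

# A record tagged for billing and support use only:
record = TaggedRecord("jane@example.com", {"billing", "support"})
email = record.access("support-bot", "support")  # permitted, and logged
```

The audit log answers "who used it, when, and why" after the fact, while the purpose check enforces use-case controls up front – the two halves of the compliance story described above.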
Not only that, you can also anonymize the data. Another approach is encrypting the data – the algorithm needs to use the data, but no one ever needs to actually see the raw data. The encrypted data goes into the algorithm to recommend a product purchase to me; no one actually ever sees the things I’ve purchased in the past, but the algorithm decrypts the data in order to make a better-targeted recommendation. Encryption is a good thing.
There is research on anonymization showing it’s not as anonymous as we think. I’m a little leery about anonymization because there are a lot of ways to see through anonymized data. In the data science world, it’s called data leakage – that is, there’s information that reveals the very things you were trying to hide, without your realizing it.
There was a story years ago that made the rounds in the data science community: just three pieces of information – for example, your zip code, your date of birth and your gender – can uniquely identify an individual person about 80% of the time, for just about anybody with access to those bits of information. All they need is your gender, your zip code and your date of birth (and not even your birth year, just the day). How much of that stuff is already out there? Most people have their birthdays on their Facebook page, for instance. Your gender? Pretty easy for a hacker to figure out in most cases. And your zip code? That’s not too hard to figure out, either. Most people identify where they live.
With just that minimal information, nearly 80% of the time you can correctly identify the exact person the information refers to. It is easy to give away some personal information thinking that no one could trace it back to you – but maybe they can. There are all kinds of unintended consequences that we need to be aware of.
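The quasi-identifier point is easy to demonstrate on a toy table (the records below are fabricated for illustration): count how many people in an "anonymized" dataset are the only person with their particular zip code, birth date, and gender.

```python
# Toy re-identification check: what fraction of people have a unique
# (zip code, birth date, gender) combination? Records are fabricated.
from collections import Counter

people = [
    ("10001", "1985-03-12", "F"),
    ("10001", "1985-03-12", "M"),
    ("10001", "1990-07-04", "F"),
    ("94105", "1985-03-12", "F"),
    ("94105", "1972-11-30", "M"),
    ("94105", "1972-11-30", "M"),  # two people sharing all three fields
]

counts = Counter(people)
# People whose three quasi-identifiers match no one else are re-identifiable
# even though no name or ID appears anywhere in the table.
unique = sum(1 for c in counts.values() if c == 1)
share_unique = unique / len(people)
```

In this tiny example, four of the six people are uniquely pinned down by the three fields; the research Dr. Borne alludes to found a similar picture at population scale.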
In our new series, AI Experts, we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as the days race forward towards our AI tomorrow.