
AI Experts

Oracle’s design chief: Thinking about AI? Better re-think your data management

As the old saying goes, “You are what you eat.” As Matt Walton of Oracle describes, if you want a healthy AI, you’d better clean up your data.

As one of the world’s leaders in cloud computing, human capital management (HCM), enterprise resource planning and customer experience, Oracle is in a prime position to participate in the coming artificial intelligence revolution. Devoting over $6 billion to research and development in 2017 alone, the corporation has shown it has the will and the foresight to put its money where its mouth is when it comes to investing in emerging technologies such as AI and machine learning.

We sat down with Matt Walton (Oracle’s Chief Design Officer, Artificial & Adaptive Intelligence) and asked him to share his personal insights on the steps business professionals need to take to capitalize on AI’s transformative power.

“I don’t think a lot of companies are prepared to deal with the capabilities of AI, meaning that their ontologies and their data structures are not in a place where they can be consumed. The reality is that what they’ve got is a bunch of people cranking out reports on a case-by-case basis, or they have very constrained reports. That’s primarily because the way they’re collecting data today isn’t necessarily directly usable without some level of cleansing.” – Matt Walton, Oracle


ANSWERS: Can you share with us a bit about what your role at Oracle entails?

MATT WALTON: At Oracle, I am driving three facets of Artificial Intelligence (AI) development. First, I’m helping to organize and package our AI offering. Second, I’m leading the development of the AI platform architecture and system methodology. And last, I’m creating new ways in which users of cognitive and intelligent systems will interact to accomplish their tasks and goals. I would say that my job is to be forward-thinking while staying rooted in developing value today.

ANSWERS: What innovations do you see machine learning and AI germinating?

WALTON: Ultimately machine learning and AI will change everything we know. From genetics to medicine to retail, everything will have a hyper-personalized focus. Let me give you a simple example. Today, marketers deliver ads based on customer segmentation. You, as a customer, may fall within one of those segments, and so you get certain types of advertisements based on segment attributes. With AI, there are no segments. Now, because I know you as an individual customer, I deliver you ads and offers that pertain only to your individual likes and dislikes. It provides highly fine-grained personalization that could not be accomplished without the use of AI. You can already see some of this behavior occurring in Google and Amazon, but this capability will quickly go beyond just the leaders in retail. It will permeate everything that we touch, visible and invisible.
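To make that contrast concrete, here is a minimal sketch of segment-based targeting versus per-customer scoring. Everything in it (the customer fields, the offers and the predict_affinity call) is hypothetical, standing in for whatever learned relevance model a real system would use.

```python
# Hypothetical sketch only: segment-based targeting vs. per-customer scoring.
# Customer fields, segment rules, offers and `model.predict_affinity` are all
# invented stand-ins, not a description of any real marketing system.
from dataclasses import dataclass, field

@dataclass
class Customer:
    age: int
    region: str
    purchase_history: list = field(default_factory=list)

def segment_offer(customer: Customer) -> str:
    """Classic approach: bucket the customer, then serve the segment's ad."""
    if customer.age < 30 and customer.region == "urban":
        return "streaming-service promo"   # everyone in this bucket sees the same ad
    return "generic seasonal promo"

def individual_offer(customer: Customer, model) -> str:
    """AI approach: score every candidate offer against this one customer."""
    candidates = ["streaming-service promo", "running-shoes promo", "travel promo"]
    # `model` stands in for a learned, per-user relevance model
    return max(candidates, key=lambda offer: model.predict_affinity(customer, offer))
```

The shift Walton describes is in the second function: the offer is chosen by scoring candidates against one person’s own history rather than by membership in a predefined bucket.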

What I believe is the biggest area of change and innovation from AI is the technology platform itself. Systems have been built around streamlining human processes to get data into the system in specific formats. The whole industry of user-centered design and UX has evolved to streamline how people input data into systems.

While systems claimed to be automated, automation was based on rules: if this, then that. Rules constrained the system to do specific tasks, and most of the logic in systems was centered on process and rules management. I view this as a query-based model: the system or user queries based on rules and executes accordingly. In essence, a query is either valid or invalid. But again, it is all centered around the human putting data into the system.
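As an illustration of that query-based, valid-or-invalid model, here is a tiny rules engine. The rules and records are invented examples, not anything drawn from Oracle’s products.

```python
# Hypothetical sketch of a rule-based ("if this, then that") system: the logic
# only does what an explicit rule allows, and a record is simply valid or invalid.
RULES = {
    "expense_over_limit": lambda record: record["amount"] > 5000,
    "missing_receipt":    lambda record: not record.get("receipt"),
}

def evaluate(record: dict) -> str:
    """Apply fixed rules; there are no degrees of 'right', only pass/fail."""
    for name, rule in RULES.items():
        if rule(record):
            return f"invalid: {name}"
    return "valid"

print(evaluate({"amount": 7200, "receipt": "r-1138.pdf"}))  # -> invalid: expense_over_limit
```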

Over the past 20-30 years, these human-centered systems have continually gone through a process of “abstraction.” In essence, it’s like writing a book but in one long paragraph. Over the years, we have broken up the book into paragraphs, chapters, etc. Now when we want to add something to the book, we can either enhance or rewrite a paragraph without having it completely mess with the flow of the book.

Code has evolved much in this way; it is abstracted and organized so that when you make a change or add something, you don’t break the system. Data has evolved alongside code abstraction – first from structured relational databases (cells in an .xls spreadsheet) to unstructured data models today. Removing data constraints was a huge evolution and a precursor to AI and machine learning.

Now, when we talk about AI, there are two fundamental shifts occurring. Humans don’t enter the data; instead, the data is fed to them to validate. And most importantly, there are no rules that confine what the system does. What you are doing is teaching the system to learn a process and to make decisions based on patterns. What makes this process so different is that there is no such thing as valid/invalid. Instead, there are degrees of right and wrong. How users interact with the system determines how the system applies its findings. In essence, when something is pushed to a user, the user becomes a teacher by agreeing or disagreeing with the result the system derived. What makes this so powerful is that it’s individualized; a right decision for you may not be the right decision for me. Systems will evolve to the behaviors of their users – “hyper-personalized.”
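Here is a minimal sketch of that agree/disagree feedback loop. A per-user weight stands in for whatever mechanism a real system would use; the suggestions, confidence scores and 0.1 step are all hypothetical.

```python
# Hypothetical sketch: the system pushes a suggestion with a confidence score
# (degrees of right and wrong, not valid/invalid), and each user's agree/disagree
# feedback nudges a per-user weight, so the same suggestion ranks differently
# for different people.
user_weights = {}   # per-user adjustment, keyed by (user_id, suggestion)

def score(user_id: str, suggestion: str, base_confidence: float) -> float:
    """Blend the model's confidence with what this user has taught us so far."""
    return base_confidence + user_weights.get((user_id, suggestion), 0.0)

def record_feedback(user_id: str, suggestion: str, agreed: bool, step: float = 0.1) -> None:
    """The user acts as teacher: agreement raises the weight, disagreement lowers it."""
    key = (user_id, suggestion)
    user_weights[key] = user_weights.get(key, 0.0) + (step if agreed else -step)

record_feedback("alice", "reorder office chairs", agreed=False)
record_feedback("bob",   "reorder office chairs", agreed=True)
print(score("alice", "reorder office chairs", 0.6))  # 0.5: less right for Alice
print(score("bob",   "reorder office chairs", 0.6))  # 0.7: more right for Bob
```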

When you talk about innovations germinating from AI, the fact that we are evolving to systems that are intelligent has far-reaching consequences on everything we do.

“When one talks about AI going mainstream, I believe the biggest barrier to change is acceptance, and I think we have a ways to go before mainstream unsupervised systems are widely accepted.” – Matt Walton, Oracle

ANSWERS: How can professionals who might see their professions disrupted by automation and AI future-proof their careers? What advice do you have for them? How can they leverage the new technology to their advantage (rather than trying to compete with it)?

WALTON: It’s important to understand what AI is and what it is not. Despite the swirl in the media about AI, the reality is that it’s a leap in our technological evolution, just as robots were when they were used to streamline automobile manufacturing. Did people lose their jobs on the manufacturing line? Yes, they did, but at the same time more opportunities opened up. After all, all of those machines needed to be invented, deployed and serviced.

Whenever you deal with big transformational shifts, there is always disruption, but there is also equal opportunity. It’s never one-sided. Disruption happens over a period of time, slowly at first and accelerating from there. We are just now in the beginning stages. You can see this clearly in the challenges that even very sophisticated data companies such as Tesla, Waymo and Uber are having in trying to create autonomous vehicles.

When you start talking about corporations and businesses that are moving towards full automation as the North Star, we still have a ways to go. Just look at Elon Musk’s desire to create a fully automated manufacturing line. Humans are still exceptionally valuable in the process.

I would challenge professionals to start thinking about their daily jobs and evaluate how much repetition there is in their day. It is the repetitious tasks and jobs that are at the greatest risk of AI disruption. Then, imagine if the repetition were gone. What would it free you up to do that a system couldn’t do? Exercise more creativity.

Tasks and jobs that still require critical thinking, creative problem solving and a lot of experience can’t be moved into an autonomous system, or at least it will take a while before the technology is able to do so. I also think the way we teach and educate people has to evolve as well. We spend too much time teaching standardized methods and not enough time on creativity, critical thinking and creative problem-solving. I believe focusing on creativity is going to be the way in which we all (including me) continue to stay relevant in the world of AI.

ANSWERS: How close are we to seeing unsupervised learning systems proliferate and really taking off?

WALTON: It really depends on how you define an unsupervised system. We have aspects of unsupervised systems integrated into our lives today; aviation is just one example. I believe that the operational side of business is where the majority of these systems will quickly be utilized. For larger consumer adoption, however, an intelligent system has to be accepted by society as a whole. I believe that the buzz around AI (fear being the biggest part of it) creates barriers that slow down acceptance of the technology. But even before the technology is accepted, it has to be clear how everyone benefits from it.

I also believe we have significant legal implications that we have only started to deal with, specifically how data is used and shared, how and where AI will be applied, and larger questions of safety, bias and accuracy in autonomous systems. Just look at what a single accident does to Tesla’s stock.

The implications of fully autonomous systems feed a much bigger societal and human issue – that of relinquishing human control to system control. We have not even begun to experience the ramifications of what this means, not only because it has yet to occur (we still have human intervention) but also because it’s a very big leap for most people to even comprehend. These are profound social impacts that will continually need to be addressed as different technological capabilities are released.

When one talks about AI going mainstream, I believe the biggest barrier to change is acceptance, and I think we have a ways to go before mainstream unsupervised systems are widely accepted.

ANSWERS: What do you see are the most pressing developments for AI that we need to get right, right now?

WALTON: A big one is removing bias from a machine learning/AI system. This is a very hard problem, and it has started to impact areas like HCM, where it can create significant legal exposure.
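As one small illustration of what checking for bias can look like in practice, here is a sketch of a simple disparate-impact ratio of the kind sometimes used in HR analytics. The groups, outcomes and 0.8 threshold are illustrative only, and nothing here describes Oracle’s actual approach.

```python
# Hypothetical illustration of a basic bias check: compare a model's selection
# rates across groups. Groups, sample outcomes and the 0.8 threshold are invented.

def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) pairs produced by a model."""
    totals, selected = {}, {}
    for group, picked in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([("A", True), ("A", True), ("A", False),
                         ("B", True), ("B", False), ("B", False)])
print(disparate_impact_ratio(rates) < 0.8)   # True here, so flag the model for review
```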

Stepping back a bit further to some of the fundamentals, it really comes down to data organization. In order to capitalize on machine learning and AI, ontologies and data structures have to be organized in a way that a system can consume. Companies need to get the fundamentals right and organize their data into formats in which AI can be leveraged. It’s no longer about garbage in, garbage out. It’s now about data as a language, and without the proper structure an ML system can’t understand it.
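To ground that, here is a sketch of the kind of cleansing step Walton points to: report-style records with inconsistent field names and formats are normalized into one structure a learning system can consume. The field names, date formats and sample rows are all invented.

```python
# Hypothetical cleansing sketch: messy report rows with inconsistent field names
# and formats are normalized into one consistent structure a model can learn from.
from datetime import datetime

RAW_REPORT_ROWS = [
    {"Cust": "ACME Corp", "Order Date": "03/14/2017", "Total": "$1,250.00"},
    {"customer": "acme corp", "date": "2017-03-15", "total": "980"},
]

def normalize(row: dict) -> dict:
    customer = (row.get("Cust") or row.get("customer", "")).strip().lower()
    raw_date = row.get("Order Date") or row.get("date", "")
    date = None
    for fmt in ("%m/%d/%Y", "%Y-%m-%d"):
        try:
            date = datetime.strptime(raw_date, fmt).date().isoformat()
            break
        except ValueError:
            continue
    raw_total = str(row.get("Total") or row.get("total", "0"))
    total = float(raw_total.replace("$", "").replace(",", ""))
    return {"customer": customer, "order_date": date, "total": total}

clean = [normalize(r) for r in RAW_REPORT_ROWS]  # consistent fields, ready for a model
```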


Learn more

In our new series, AI Experts, we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as the days race forward towards our AI tomorrow.
