
How the practice of design enhances artificial intelligence

Brian Romer, Data Visualization Lead, Thomson Reuters Labs – Boston

Artificial intelligence is powerful on its own, but it needs thoughtful, imaginative design thinking to reach its highest potential.

Artificial Intelligence (AI) systems can perform amazing feats of problem-solving. But no matter how accurate an AI solution is, it won’t be relevant, insightful or widely adopted without great design work.

The practice of design is about problem-solving. It starts long before the visual look and feel is created and continues long afterward. It creates a vital connection between humans and machines that allows AI systems to perform at their best.

In this article, I’ll focus on discrete cognitive machine tools and systems built for specific tasks, rather than Artificial General Intelligence.

AI systems are built by people to solve problems for people. Even if an AI system’s answers are correct enough to be helpful, very few humans will benefit unless the system is understandable and trustworthy, and actively evolves by learning from its end users. In interpersonal terms, even if a colleague has the right answers, we would probably struggle to take that colleague’s well-meaning advice unless we trust and understand his or her decision-making. The message is heard differently depending upon whom — or what — the messenger is.

A journalist looks at the sculpture “Adam” by Auguste Rodin at the Neue Nationalgalerie in Berlin. REUTERS/Hannibal Hanschke

Transparency

Design has a role in defining how one communicates with an AI system and in discovering the limits and possibilities of that interaction. We can clarify how the user interacts with the system, how it solves problems and how it learns from us.

Who needs to know what?

An AI system has three audiences. In order of priority, they are:

1. End users: Need clear guidance on tasks; indications of what state the machine is in; clear “guard rails” showing the limits of the system and more.

2. Creators: Need feedback on the system and a map of its logic in order to continue building it effectively.

3. General public: Needs a simplified explanation of what the system does and for whom.

Part of an installation called “Campo de Color” by Bolivian artist Sonia Falcone is pictured during the 55th La Biennale of Venice. REUTERS/Stefano Rellandini

Trust

Design Thinking takes into account the hopes and fears that form the cultural context of the people we’re designing for. With proper research and validation of specific use cases, we’re more likely to start with a system that feels relevant and valuable.

By appreciating the sources of anxiety about AI, we can work to address discomfort, lack of trust and fear. Starting with transparency, we can create a sense of understanding, familiarity and, ultimately, confidence, which will allow people to use these systems more effectively.

Scope of power

When machines do low-level mental labor for us, our mental resources are freed up and can be applied to more nuanced problem-solving. We’re much better off using a calculator to tally up our finances, and using our powers of reason and imagination to plan ahead.

By clarifying the domain and abilities of an AI system, we can empower people to think of AI as supporting, not replacing, their thinking. We must provide a uniform visual language through which users can express agreement or disagreement. Machine reasoning should be viewed as an enhancement, not a replacement.
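To make that idea tangible, here is a minimal sketch of what a uniform feedback vocabulary might look like in code. It illustrates the principle rather than any actual product; every type and function name is invented:

```typescript
// A hypothetical, uniform feedback vocabulary: every AI-generated
// suggestion carries the same agree/disagree affordance, so users
// express judgment the same way everywhere in the product.

type Verdict = "agree" | "disagree" | "unsure";

interface Feedback {
  suggestionId: string; // which machine suggestion is being judged
  verdict: Verdict;
  comment?: string;     // optional free-text rationale
  timestamp: number;
}

const feedbackLog: Feedback[] = [];

function recordFeedback(
  suggestionId: string,
  verdict: Verdict,
  comment?: string
): void {
  feedbackLog.push({ suggestionId, verdict, comment, timestamp: Date.now() });
}

// The same three-verdict control appears next to every suggestion,
// keeping the visual language of agreement consistent.
recordFeedback("doc-ranking-42", "disagree", "This case is off-topic.");
```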

Limbic resonance

Building trust requires satisfying both the rational and the emotional brain. People have to understand logically what the system is about, how it works and how accurate it is. A complementary design task is addressing the limbic brain, which relates to feelings and motivations.

Small design elements like the graphic animation of the “listening” or “thinking” state of an AI system do much more than indicate the current cognitive state of the system.

When design elements are emotional, they are more memorable. We more easily create associations with them, and are more likely to return to use them. The design goal for trust is to foster emotional congruence: the experience should be positive and memorable, but also true to the nature of the system.
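As a concrete illustration of those “listening” and “thinking” states, here is a small sketch of how an interface might map each cognitive state to a distinct, consistent visual cue. The state names, animations and messages are all hypothetical:

```typescript
// A hypothetical model of an assistant's cognitive states, each paired
// with a consistent animation and a plain-language message so the user
// always knows what the system is doing.

type SystemState = "idle" | "listening" | "thinking" | "answering" | "unsure";

interface StateCue {
  animation: string; // e.g. a CSS animation class
  message: string;   // the indication shown to the end user
}

const stateCues: Record<SystemState, StateCue> = {
  idle:      { animation: "pulse-slow",  message: "Ready when you are." },
  listening: { animation: "wave-ripple", message: "Listening…" },
  thinking:  { animation: "orbit-dots",  message: "Working on it…" },
  answering: { animation: "fade-in",     message: "Here's what I found." },
  unsure:    { animation: "pulse-amber", message: "I'm not confident about this one." },
};

// Rendering always goes through one lookup, so each state keeps the
// same cue everywhere, building the familiarity that fosters trust.
function cueFor(state: SystemState): StateCue {
  return stateCues[state];
}

console.log(cueFor("thinking").message); // "Working on it…"
```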

People take pictures on their mobile phones of the lost Rubens masterpiece “George Villiers” on show at Kelvingrove Museum, Glasgow, Scotland, Britain, September 28, 2017. REUTERS/Russell Cheyne

Symbiosis

What is an AI system to us? If, as designers, we choose to conceive of it as another creature or sentient thing, we can ascribe desires, goals and needs to it. An AI system should “want” to perform well, and be rewarded for doing so. So how can an AI system evaluate its performance, and what should its reward be?

Active feedback

Passive usage analytics are a standard part of most software, but here active feedback is at least as crucial. It should capture the responses, decision-making processes and thoughts of users, to allow the AI system to learn and evolve over time. The trick is in surfacing these mechanisms unobtrusively, at the right time. We don’t want to interfere with our users’ primary tasks, and we have to ensure a request for feedback is contextually relevant. Interfaces can be made fluid and adaptable, tuning themselves to the people who use them over time.
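What “the right time” means will vary by product, but one plausible heuristic, sketched here with invented names and thresholds, is to ask only at natural stopping points and to cap how often we ask:

```typescript
// A hypothetical heuristic for surfacing a feedback request
// unobtrusively: only after the user completes a task, never more
// often than a fixed interval, and at most a few times per session.

interface SessionContext {
  taskJustCompleted: boolean; // user reached a natural stopping point
  msSinceLastPrompt: number;  // time since we last asked
  promptsThisSession: number; // how many times we have asked already
}

const MIN_GAP_MS = 10 * 60 * 1000; // at most one prompt per 10 minutes
const MAX_PROMPTS_PER_SESSION = 2; // never nag

function shouldAskForFeedback(ctx: SessionContext): boolean {
  return (
    ctx.taskJustCompleted &&
    ctx.msSinceLastPrompt >= MIN_GAP_MS &&
    ctx.promptsThisSession < MAX_PROMPTS_PER_SESSION
  );
}

console.log(
  shouldAskForFeedback({
    taskJustCompleted: true,
    msSinceLastPrompt: 15 * 60 * 1000,
    promptsThisSession: 0,
  })
); // true: a completed task, and we haven't asked recently
```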

Bot-centered design

An AI system successfully designed to capture passive and active feedback has a good chance of staying relevant and useful, even as new events and information are introduced. Ultimately, those that prove useful will get to keep working, and humans will reward them with electricity and code. Those that don’t will join the digital scrapheap.

As much as designers focus on human-centered design, for AI we need to complement it with bot-centered design. An AI “wants” either evidence of good performance or critical feedback that allows us to adapt the model to perform better. How can we build a system that optimally solicits feedback from humans and channels it into what the AI needs?
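One plausible shape for that channeling, again sketched with hypothetical names rather than any real system’s API, is to tally verdicts per model output and flag the weak spots for retraining:

```typescript
// A hypothetical "bot-centered" feedback channel: aggregate human
// verdicts per model output and surface the outputs that most need
// critical feedback acted upon.

interface VerdictCounts {
  agree: number;
  disagree: number;
}

const tally = new Map<string, VerdictCounts>();

function channelFeedback(outputId: string, verdict: "agree" | "disagree"): void {
  const counts = tally.get(outputId) ?? { agree: 0, disagree: 0 };
  counts[verdict] += 1;
  tally.set(outputId, counts);
}

// Outputs whose agreement rate falls below a threshold (given enough
// votes) become candidates for the retraining queue: the "critical
// feedback" the model needs in order to perform better.
function retrainingCandidates(minVotes = 5, maxAgreeRate = 0.5): string[] {
  return [...tally.entries()]
    .filter(([, c]) => {
      const votes = c.agree + c.disagree;
      return votes >= minVotes && c.agree / votes < maxAgreeRate;
    })
    .map(([id]) => id);
}

for (let i = 0; i < 5; i++) channelFeedback("summary-7", "disagree");
console.log(retrainingCandidates()); // ["summary-7"]
```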

"Magnetic bricks" are displayed on a laptop screen at the National Taiwan University's Communication and Multimedia Laboratory in Taipei. "Magnetic bricks" that can be used on tablet computers to create 3D designs, play games and interact with others, can react with the screen for a variety of effects, according to the inventors. REUTERS/Pichi Chuang
“Magnetic bricks” are displayed on a laptop screen at the National Taiwan University’s Communication and Multimedia Laboratory in Taipei. “Magnetic bricks” that can be used on tablet computers to create 3D designs, play games and interact with others, can react with the screen for a variety of effects, according to the inventors. REUTERS/Pichi Chuang

Learn more

In our 2018 AI Predictions report, industry experts share their forecasts for how developing technology will shape our future.

 

 
