

Lessons learned in the AI space

Dr. Khalid Al-Kofahi, Vice President, Research, Thomson Reuters

Three years ago this summer, Thomson Reuters launched its AI Center in Canada, and it has quickly grown into one of the larger AI hubs there. Now, throughout the broader business community, companies of all shapes and sizes are creating similar artificial intelligence-focused hubs to tap the benefits of this technology.

On the anniversary of our launch, I wanted to share some of the lessons we have learned along the way.

You can’t do it

First, ignore the naysayers. A number of leaders in the Canadian tech sector have made inaccurate comments about the failure of Canadian companies to commercialize and scale their solutions. As a Canadian company, we are proof that Canadian firms can create leading-edge solutions and market them globally.

But it is not just about naysayers. Many times you have to win over internal partners and stakeholders and get them to buy into the vision you want to achieve.

Purpose

Before starting an AI venture, you have to ask yourself, “What is my team’s purpose?”

Before we launched our AI initiative, we started with two primary objectives: i) to help the business deliver AI-powered products to clients; and ii) to simplify and transform knowledge work.

The first objective keeps us grounded in creating business value, and the second provides a directional compass so we don’t get distracted by incremental innovations and one-off projects.

Teams

I can’t stress enough the importance of working with the broader business team. Many organizations, Thomson Reuters included, ground their AI strategy in the combination of large amounts of curated data, technology, and subject matter expertise. All three components are needed.

Using this grounding, AI work typically falls to three teams, each addressing a different workstream:

  • The product design team, made up of developers who work with customers, supported by scientists, UX designers, marketing, and sales.
  • The engineering team, which interfaces with the design team to build the product and with the algorithms team to ‘implement the AI’ at scale.
  • The algorithms team, which is responsible for designing the AI components.

Indeed, most of our AI projects were not about applying an algorithm to a data set; instead, they were about capturing the nuances of a domain in a computable way. This requires subject matter experts and scientists working together, which is why our projects often have a one-to-one ratio of scientists to subject matter experts.

Talent

The reality of working in AI is that there is a shortage of talent. Recruitment and retention need to be ongoing activities, built into the organization in a way that keeps leadership focused on talent. At the end of the day, your solutions are only as good as the people who build them.

Time

It may seem obvious, but you need to ensure that you are building in sufficient time for the technology to learn what you need from it. Machine learning is an iterative process, and ‘teaching’ the machine to make quality assessments takes time.
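For illustration only, here is a minimal, self-contained sketch of that iteration; the toy task, model, and numbers are hypothetical, not anything from our products. Quality improves over many train-evaluate-adjust passes rather than in a single step, and in real projects each pass also costs expert labeling and review time.

```python
# Hypothetical sketch: why 'teaching' a model takes time.
# A toy perceptron improves held-out accuracy only over many passes.
import random

random.seed(0)

# Synthetic task: a point (x, y) in the unit square is positive if x + y > 1.
points = [(random.random(), random.random()) for _ in range(500)]
examples = [((x, y), 1 if x + y > 1 else 0) for x, y in points]
train, dev = examples[:400], examples[400:]

w = [0.0, 0.0]  # model weights
b = 0.0         # model bias
lr = 0.1        # learning rate

def accuracy(batch):
    """Fraction of examples the current model labels correctly."""
    return sum(
        (1 if w[0] * x + w[1] * y + b > 0 else 0) == label
        for (x, y), label in batch
    ) / len(batch)

# Each epoch is one 'lesson': the model sees its mistakes and adjusts.
for epoch in range(1, 21):
    for (x, y), label in train:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        err = label - pred  # -1, 0, or +1 (standard perceptron update)
        w[0] += lr * err * x
        w[1] += lr * err * y
        b += lr * err
    print(f"epoch {epoch:2d}: dev accuracy = {accuracy(dev):.2f}")
```

Watching the printed accuracy climb, plateau, and occasionally dip is exactly the iterative behavior that project plans need to budget for.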

Customer- and commercially focused

There are untold interesting and exciting applications for artificial intelligence, but if those applications do not address a client’s pain point or provide them with an advantage, clients won’t buy them. Don’t blame the client for not adopting your technology.

It is a tough lesson to learn, but a failure to involve your clients in the development process can result in a tremendous waste of resources and time.

Ethical and unbiased

We tend to think about bias in terms of the nature of the application. For critical applications, transparency and explainability are key: in some cases, if you can’t cross-examine a system, you shouldn’t be able to use it. In others, having a human in the loop who is ultimately responsible for making decisions serves the purpose.

For non-critical applications the bar is lower, but one still needs to ensure that the AI adheres to certain trust and ethical principles. That means being transparent about which customer data is used and how, and establishing governance and audit processes for data to prevent abuse. Ensuring that solutions are, to the degree possible, free from bias should be a key design objective.
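To make the human-in-the-loop idea concrete, here is a small hypothetical sketch; the routing rules, names, and threshold are assumptions for the example, not a description of any Thomson Reuters product. Critical decisions always go to a person, and low-confidence predictions are escalated rather than auto-applied.

```python
# Hypothetical sketch of a human-in-the-loop gate: the model recommends,
# but a person remains responsible for the final call on critical work.
# The threshold and names below are illustrative assumptions.

CONFIDENCE_FLOOR = 0.85  # below this, never auto-apply a prediction

def route(prediction: str, confidence: float, critical: bool) -> str:
    """Decide whether a model output is auto-applied or sent to a reviewer."""
    if critical:
        # Critical applications: a human always makes the final decision,
        # with the model's suggestion available for cross-examination.
        return f"ESCALATE (critical): model suggests {prediction!r}"
    if confidence < CONFIDENCE_FLOOR:
        return f"ESCALATE (low confidence {confidence:.2f}): {prediction!r}"
    # Non-critical, high confidence: apply, but keep an audit trail.
    return f"AUTO-APPLY {prediction!r} (confidence {confidence:.2f}, logged)"

print(route("privileged document", 0.97, critical=True))
print(route("routine invoice", 0.55, critical=False))
print(route("routine invoice", 0.93, critical=False))
```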

Finally, as we reflect on our past three years (an eon in AI time), we clearly see that AI is not some future narrative; it is already here.

For more than two decades, we and others have been developing solutions to mine, connect, and organize content and data. What has changed is that the technology has matured significantly — the tools are more complex, yet are simpler to use. Coupled with the influx of even more data, more powerful computers, and infrastructure on demand, the potential impact of AI technology is expanding exponentially.

These are exciting developments. But as a ‘practicing scientist’, my formula remains the same: focus on the value you want to create and the problem you want to solve, and make sure these are worthy and impactful.

Focus on the future you want to create and plan for it. Then think about how to get there. Most likely you will need AI — but, remember, unless you are in the AI business, AI is just a tool to help you create your vision.
