
AI Experts

Solving global sustainability challenges with the help of AI

According to James Hodson of the AI for Good Foundation, artificial intelligence can aid society in addressing global issues – but it won’t do all the work for us.

A lot is being said these days about the potential use cases of artificial intelligence. From consumer products such as self-driving cars and personal assistants to commercial applications such as customer-service chatbots and marketing algorithms, there is no shortage of headlines devoted to how AI will improve our personal and professional lives. One aspect of the forthcoming AI revolution that deserves more attention in the public discussion, however, is how these technologies can be deployed against long-standing societal challenges: applying AI to help root out corruption and crime in government and the financial sector; using data science to address environmental issues such as climate change; and making machine learning an agent in the effort to provide sustainable food growth, clean water and energy production.

At the forefront of these explorations is the AI for Good Foundation, a not-for-profit dedicated to promoting the use of artificial intelligence technologies in service of the public weal. We spent some time discussing AI and sustainability with James Hodson, the organization’s Chief Executive Officer, to better understand his views on what an AI-enabled future might look like.

“We desperately need more collaboration between AI researchers and the physical and social sciences in order to drive innovation in the social good sectors. There’s a lot of work that could be using machine learning techniques, that could be leveraging the data that’s being collected and is out there today, and that could be collecting more and better data for tomorrow, but it’s not.” James Hodson, AI for Good Foundation


ANSWERS: How do you foresee artificial intelligence technologies being deployed to assist with sustainability and environmental challenges such as climate change, clean water, responsible food and energy production, etc.?

HODSON: This is exactly the work we’re doing on a daily basis at the AI for Good Foundation: identifying how data-driven technologies under the umbrella of artificial intelligence might contribute to solutions to some of the really big challenges we’re facing today. We’ve taken the framework of 17 global challenges provided by the United Nations: the Sustainable Development Goals (SDGs). They cover everything from food security and the availability of clean water to equitable education, eradicating gender discrimination and ensuring access to fair judicial systems. They’ve been ratified by 150-plus nations, which means these are things that people really agree we need to work on.

The biggest challenge we have today is that, for many of these problems, we simply don’t have enough of the right kind of data to provide all-encompassing solutions through artificial intelligence. In any case, that would be the wrong way to think about it: AI is not going to come in and solve things on its own. Rather, it’s going to help us find solutions.

Having said that, we do have enough data to provide key decision-makers with insights to help them in many ways; for instance, identifying corrupt spending practices in government, or accelerating the development of drought-resistant crop types, or finding appropriate locations to maximize the impact of wind farms for power generation, or even evaluating water quality with fewer samples, lower labor costs and higher accuracy.
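To make that last example concrete, here is a minimal sketch (our illustration, not the Foundation’s actual method) of the general approach: train a regression model on inexpensive in-situ sensor readings so that costly lab samples are needed less often. All data and feature names below are synthetic and hypothetical.

```python
# Minimal sketch: estimate a lab-measured water-quality value from cheap
# sensor readings, so that expensive lab sampling can be reduced.
# All data are synthetic; the four "sensor" features are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical low-cost sensor features: turbidity, pH, conductivity, temperature.
X = rng.normal(size=(n, 4))
# Synthetic "lab-measured" contaminant level: nonlinear in the sensors, plus noise.
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.5 * X[:, 2] * X[:, 3] + 0.1 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
# If the held-out error is acceptable, the learned surrogate can screen sites
# and reserve lab analysis for the cases where its predictions matter most.
print("MAE on held-out samples:", mean_absolute_error(y_test, model.predict(X_test)))
```

In practice the model would be trained on paired sensor/lab measurements from real sites; the point is only that a learned surrogate can trade a one-time labeling cost for cheaper ongoing monitoring.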

As an anecdote, I was disappointed when I read the latest Intergovernmental Panel on Climate Change (IPCC) report (1,500-plus pages). Almost no work cited in the report leverages machine learning techniques against the stockpiles of climate data being collected and generated all around the world. The analysis largely stems from physics-based models (which have a long history and are very good in some cases), or from a handful of observations from a handful of centers around the world.
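For a sense of what even a simple version of that kind of analysis looks like, here is a toy sketch (entirely synthetic data, not drawn from the IPCC report or any real climate archive) that fits an autoregressive model to a monthly temperature-anomaly series and evaluates it on a held-out decade.

```python
# Toy sketch: learn an autoregressive model of a monthly temperature-anomaly
# series directly from data, as a complement to physics-based simulation.
# The series below is synthetic (slow trend + seasonal cycle + noise).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
months = np.arange(600)  # 50 years of monthly data
series = (0.002 * months
          + 0.3 * np.sin(2 * np.pi * months / 12)
          + 0.1 * rng.normal(size=600))

lags = 12  # predict next month's anomaly from the previous 12 months
X = np.stack([series[i : i + lags] for i in range(len(series) - lags)])
y = series[lags:]

# Hold out the final 10 years to test how well the learned model generalizes.
model = GradientBoostingRegressor(random_state=0).fit(X[:-120], y[:-120])
rmse = np.sqrt(np.mean((model.predict(X[-120:]) - y[-120:]) ** 2))
print(f"Held-out RMSE: {rmse:.3f} degrees")
```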

We desperately need more collaboration between AI researchers and the physical and social sciences in order to drive innovation in the social good sectors. There’s a lot of work that could be using machine learning techniques, that could be leveraging the data that’s being collected and is out there today, and that could be collecting more and better data for tomorrow, but it’s not. That’s one of the big areas where we try to come in and build communities that cut across boundaries among the physical sciences, the social sciences, the AI research community, the policymakers and the industry application specialists. We would like them to come together and ideally we would “lock them in a room” for a few days, so they understand each other’s domains better and what the opportunities might be. So far, this hasn’t really been happening.

ANSWERS: It seems like a lot of AI development these days is primarily for commercial or military applications. How do you evolve the incentives model to get people to focus AI development more on addressing sustainability goals?

HODSON: In the U.S. especially, technological development has historically been funded by DARPA, the National Science Foundation and a few other government agencies. To be fair to those programs, they have produced an enormous number of technologies with transformative social impact around the world, ranging from the Internet to water filtration systems to tools that USAID uses all the time. Not everything funded through military channels is “evil.” However, such funding does have a principal aim, which is to support military applications and maintain a balance of power between nations. For most of us in the social good sector, that’s not the first thing we’re trying to achieve.

In my opinion, the funding infrastructure in Europe is better aligned with achieving broad societal objectives than the military-based infrastructure in the U.S. As a result, the EU has been funding a lot of work that interacts with the Sustainable Development Goals, especially under the Horizon 2020 program. These efforts are going to have compounding, positive benefits for solving some big problems. Now, we might not solve them by 2030, which was the ambitious original target of the Sustainable Development Goals, but any incremental improvement we can make is only a good thing.

On the other side of this is industry. Companies are obviously going to spend as much as possible to remain competitive, and artificial intelligence is the latest technology that’s coming of age. There has been a lot of research happening, but not much of that research is breaking down barriers to solutions in the SDG space. This needs to improve.

With that in mind, I have noticed a rather interesting development in the last couple of years. If you go to universities and talk to students and faculty, they really do want to apply their skills to important global challenges. The moment you start showing them the datasets that are available and you start building the bridge from the big problems of the Sustainable Development Goals to the methodologies that they have been developing, things click. There is pressure to work on industrial applications, since that’s where the funding is. However, it is clear that the willingness and motivation exists for social impact, if only we could unlock it with appropriate incentives.

It’s a case of being on the ground and building links between the methods and the challenges, and that’s where we see ourselves doing our most important work. We don’t swoop in to solve the problems ourselves; we seek to change the culture of the research community, harnessing the power of thousands.

Hopefully, if we change enough mindsets, the large tech firms will also make research money available specifically for pushing forward these agendas, and the government may magnify existing efforts.

ANSWERS: Who or what do you see as best equipped to function as a governance entity to ensure the proper and ethical development of AI in a world where such development might be accelerated in a “first to market” rush among competing developers and nations?

HODSON: To answer the part about the pace of development (which I think is an important variable in this space): what we’re seeing today is an industry in its infancy. It’s like the Gold Rush period in the Western U.S. of the 1850s. Pretty much every company under the sun is jumping on board and wondering how machine learning can help drive efficiencies in its business.

Everybody takes a slightly different approach to exploring and implementing it. The same thing happened with the rush to digitize key business processes in the 1950s and 1960s. There’s a great 1957 movie with Katharine Hepburn called Desk Set (based on the eponymous 1955 play by William Marchant), which I would recommend to anybody interested in these questions. It features the same fears, fear-mongering and discussions that we’re seeing in today’s wave of artificial intelligence mania.

In the digitization process, there were no GitHub repositories, there was no formal testing, there were no agile methodologies; but eventually the software engineering industry matured and converged on practices that are most efficient for getting results in business (and even in the research community). I think we’ll see the same thing happen in AI over the coming years. Of course there is a lot of investment, but there is also a lot to be done to bring the benefits of data to business, government and the world at large. My biggest concern on that front is that there are loud voices making unrealistic claims about the scientific capabilities we’ll develop in the next few years. I would hate for the resulting disappointment to cause a decrease in investment or innovation.

When it comes to the use of statistical and symbolic learning in weapons development – the rush to be first, nation against nation, with money pouring into systems intended to harm human beings, the environment or other key global systems – I take a slightly different view.

There are many nonprofits that have been started around the idea of protecting the human race by signing petitions, lobbying the White House and inviting “experts” with little or no economic training to make strong, unsubstantiated statements. I don’t see the need to go beyond the templates that have already been used to regulate, say, chemical weapons in the 1990s through the Chemical Weapons Convention. Signed by dozens of nations, it does a great job of regulating what nations agree to, how they will be audited and what they do with those technologies within their borders (including allowing technological development to continue, while restricting certain uses of that technology so that they don’t cause international harm, or suspicion of international harm). Of course, there have been isolated occasions when nations have tried to use chemical weapons against their own people or against other nations, but by and large this is no longer a burning topic on international political agendas.

Another good thing to come out of previous legislation is that we now have mechanisms at the international legal level to counterbalance the probability of infringement. We’re obviously always going to need to be on guard and know that there are threats from all directions. Whether it’s chemical weapons, advanced technologies, or poison in water supplies (or thousands of other threats), the point is that the international community has dealt with similar concerns many times in the past, and these types of technology products may well already be covered in existing international conventions (under appropriate interpretations, of course).

This is not a question of ethics; these are game-theoretic mechanisms meant to keep opposing powers balanced and in check. There is a dangerous narrative game being played to magnify the perceived risks of technology development out of all proportion. Those playing it should instead focus their energy and attention on solving the real problems we’re facing today.


Learn more

In our new series, AI Experts, we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as the days race forward towards our AI tomorrow.

