Compliance & Risk

AI in the regulatory state: Stanford project maps the use of machine learning and other AI technologies in federal agencies

Thomson Reuters Institute Insights, Thought Leadership & Engagement

· 5 minute read

The vast US federal administrative apparatus, consisting of hundreds of agencies and sub-agencies that make thousands of decisions each day, sits atop a formidable pile of data. Some of those agencies are putting machine learning to work to make sense of that data and support those decisions.

Within a new policy lab at Stanford University, called Administering by Algorithm: Artificial Intelligence in the Regulatory State, four professors and a multidisciplinary group of 25 students are taking a deep dive to discover the scope, prospects, and limitations of using various forms of AI in public administration.

The client for the project is the Administrative Conference of the United States (ACUS), an independent agency charged with overseeing improvements to administrative process and procedure. The lab will document its findings in a report to ACUS. “Our hope is that the report will land on the desks of agency heads and agency general counsels and help them think about how to deploy these potentially transformative tools,” says David Freeman Engstrom, Associate Dean for Strategic Planning and Professor of Law at Stanford and one of the project’s co-instructors.

A systematic approach

The primary task was simply to identify instances of AI and machine learning in use at federal agencies. The first step was a deep look at 100 of the most important agencies, searching the public record: websites, news coverage, privacy notices published in the Federal Register, and procurement records. In addition, the teams worked the phones to collect examples from beyond that first 100.

After identifying the most interesting use cases, student teams conducted in-depth case studies, including extensive interviews with agency officials, resulting in about a dozen deep studies of exactly how agencies are deploying these technologies. The use cases fall into several buckets, such as adjudication support and enforcement, and reveal the many initiatives that various agencies are undertaking.

David Freeman Engstrom

For example, the Social Security Administration (SSA) has developed tools to address problems that many adjudicatory agencies face, including backlogs and inter-judge disparities in decision-making. On the enforcement side, the US Securities and Exchange Commission (SEC) and the Internal Revenue Service (IRS) are using interesting tools to look for patterns of violations in their data. “These tools help agencies target their resources by conducting what we call predictive targeting to carry out enforcement mandates,” says Daniel Ho, Stanford law professor and another co-instructor for the lab.
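The mechanics behind predictive targeting can be pictured with a short sketch. The example below is purely illustrative and assumes a generic supervised-learning setup on synthetic data; the features, labels, and model are hypothetical stand-ins, not a description of how the SEC or IRS actually builds these systems.

```python
# Illustrative sketch of predictive targeting (hypothetical data and features).
# Train on past examination outcomes, then rank new filings by predicted risk
# so limited enforcement resources go to the highest-scoring cases first.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic history: each row is a filing, each column a derived feature
# (e.g., late-filing flags, ratio anomalies). y = 1 means a past examination
# found a violation. A real system would use agency-specific features.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_new, y_train, _ = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank unexamined filings by predicted violation risk; examiners review
# only the top slice that staff capacity allows.
risk = model.predict_proba(X_new)[:, 1]
print("Highest-priority filings:", np.argsort(risk)[::-1][:25])
```

The point of the final step is that the model produces a prioritized queue for human examiners, not an automated enforcement decision.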

Other key applications the teams have explored help agencies engage with citizens and streamline the procurement and contracting process. Still another application under heavy development would help agencies analyze the crush of comments the public often submits in response to proposed regulations as part of the notice-and-comment process.
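As a concrete illustration of what such comment analysis might involve, the sketch below assumes a simple TF-IDF-plus-clustering pipeline, a common technique rather than any agency's actual tooling: near-duplicate submissions, such as mass form letters, collapse into clusters that a reviewer can sample instead of reading one by one.

```python
# Hypothetical sketch of triaging notice-and-comment submissions: vectorize
# comment text and cluster it so near-duplicate form letters group together.
# The comments and cluster count here are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "I oppose this rule because it burdens small businesses.",
    "This rule burdens small businesses, and I oppose it.",
    "Please strengthen the emissions limits in section 3.",
    "The emissions limits in section 3 should be stricter.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Each cluster approximates one "message"; reviewers read a sample per cluster.
for cluster, text in sorted(zip(labels, comments)):
    print(cluster, text)
```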

Progress to date

How far along are federal agencies in adopting AI? “It depends on the agency,” Engstrom says. “Well-resourced agencies like the SEC have several tools that are already fully deployed; other agencies have none, and many others have projects in the pipeline that aren’t deployed just yet. Overall penetration is still modest, but some of the use cases are substantial, and there’s no question that these tools will significantly alter the way the federal government does its work in the coming years.”

Some agencies are doing state-of-the-art work, while a fair number face resource challenges in recruiting top technologists. Ho also noted that several initiatives were led by entrepreneurs and first movers who pushed their agencies to consider adopting these kinds of techniques.

At the SSA, for example, a judge who would later head the agency’s Appeals Council pushed it to capture more data and then begin analyzing that data, which laid the foundation for engaging in machine learning down the road.

Trade-offs & tensions

The idea of applying machine learning and algorithms to support high-risk decisions creates tensions and trade-offs. “Government use of AI creates a profound collision,” says Engstrom. “On the one hand, administrative law is grounded in transparency, accountability, and reason-giving. When government takes actions that affect our rights, it has to explain why. On the other hand, the AI tools that many agencies use are not, by their structure, fully explainable.”

Daniel Ho

A big part of the lab’s work, and of the resulting report, will be addressing those normative issues. “The procedural due process principles that are familiar to administrative lawyers don’t readily apply to automated decision-making tools,” he adds. “And how we monitor for potential impacts on participants is also not straightforward.”

Both Engstrom and Ho are especially grateful for the mix of technologists and lawyers on the student teams. “This is a truly rewarding teaching model because the lawyers can draw on the engineers to develop a deeper understanding of the technology and the legal questions it raises, while the engineers can see how technical solutions can help address the legal challenges and constraints,” Ho says. “In addition, the technologists benefit from seeing how their toolkit can be of use in law, public policy, and the public sector. Seeing the complex social problems that agencies like the SSA are grappling with is moving some of them to work on these kinds of problems beyond the course.”
