Hey, Congress, want to know how to regulate AI? Just ask it!

AI can be a powerful tool to support self-regulation. Designed properly, it can help us identify the risks associated with AI applications, develop potential regulatory solutions, and evaluate the effectiveness of proposed regulations.

Greg Wallig

1/30/2024 · 3 min read

Artificial Intelligence (AI) is rapidly transforming our world, and it is essential that the United States Congress respond to this powerful technology to ensure that it is used for good and that its benefits are shared by all.

But regulating AI is like herding cats on a treadmill. It's complex, it's ever-evolving, and it can be hard to keep up. That's where AI itself comes in.

Congressman Ted Lieu (D-CA 36) agrees: "As one of just three members of Congress with a computer science degree, I am enthralled by AI and excited about the incredible ways it will continue to advance society," Lieu wrote in a recent New York Times Op Ed. "And as a member of Congress, I am freaked out by AI, specifically AI that is left unchecked and unregulated." He has also called for Congress to "regulate AI before it regulates us."

And the best way to do that?

Ask AI for help.


To paraphrase Congressman Lieu: technology is fast and flexible, while laws are slow and static. We need a flexible, risk-based approach to regulating AI so that all Americans can benefit from this technology while being safeguarded against its risks.

Frameworks for Governing AI

There are a number of existing (human-developed) frameworks that AI can use as inputs, including:

  • COSO Enterprise Risk Management (ERM) Framework - A comprehensive approach to identifying and managing enterprise risks, including those introduced by AI.

  • NIST Cybersecurity Framework (CSF) - Guidance for managing cybersecurity risks, including the cybersecurity risks of AI systems.

  • ISO/IEC 27001:2022 Information Security Management System (ISMS) - An international standard for managing information security risks, including those related to AI.

  • O-RM Risk Maturity Model - A model for assessing and improving an organization's risk management capabilities, including its AI risk management capabilities.

These frameworks can be used to help Congress develop and implement effective AI governance policies and procedures.

How Congress Can Ask AI for Help

Leveraging these existing frameworks, Congress could:

  • Develop an AI regulatory strategy. This strategy could identify the government's goals for AI regulation and the risks of failing to achieve them. Those goals may include protecting consumers and citizens, promoting innovation, and maintaining US global leadership in AI.

  • Establish an AI regulatory oversight committee. The committee could be composed of experts from industry, academia, and government. The committee could also develop a code of ethics for AI regulation to guide the development and use of AI in regulation.

  • Use AI to communicate relevant information about the government's AI regulatory efforts to the public, industry, and stakeholders. This could be done through reports, websites, and other channels.

  • Use AI to regularly review and revise the government's AI regulatory framework. This would help to ensure that the framework is effective and up-to-date.

  • Use AI to monitor the performance of the government's AI regulatory efforts. This could involve tracking metrics such as the number of AI regulations passed, the time it takes to pass AI regulations, and the effectiveness of AI regulations in mitigating AI risks.

Imagine a world where…

Is it possible to imagine a world where AI is used to regulate AI? Could AI systems identify and mitigate bias, privacy, and security risks in real time, in a way that is flexible and adaptable to new technology?

The opposite is certainly easy to imagine: a world where AI systems are biased against certain groups of people, where personal data is collected and sold without people's consent, and where AI is used to launch cyberattacks and cause other harms.

We need flexible, risk-based regulation to ensure that AI is used for good and that the benefits of AI are shared by all. And the best way to do that is to ask AI itself for help.

Like I did in crafting this article!

Interested in this topic? Let’s continue the conversation: gwallig@gmail.com. I’ll also use LinkedIn to keep you updated on the AI symposium I and my fellow researchers are developing in collaboration with Georgetown University.