Regulating artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and news conferences and the White House announcing voluntary AI safety commitments by seven technology companies on Friday.
But a closer look at the activity raises questions about how meaningful the actions are in setting policies around the rapidly evolving technology.
The answer, so far, is: not very. The United States is only at the beginning of what is likely to be a long and difficult path toward creating AI rules, lawmakers and policy experts said. Despite the hearings, the meetings with top tech executives at the White House and the speeches introducing AI bills, it is too early to predict even the roughest outlines of regulations to protect consumers and contain the risks the technology poses to jobs and security, as well as its potential to spread disinformation.
“It’s still early days, and nobody knows what a law will look like yet,” said Chris Lewis, president of the consumer group Public Knowledge, which has called for the creation of an independent agency to regulate AI and other tech companies.
The United States remains far behind Europe, where lawmakers are preparing to enact AI legislation this year that would place new restrictions on what are seen as the most dangerous uses of the technology. In contrast, much disagreement remains in the United States over the best way to handle a technology that many American lawmakers are still trying to understand.
That suits many of the tech companies, policy experts said. While some of the companies have said they would welcome rules around AI, they have also argued against tough regulations like those being created in Europe.
Here’s a rundown on the state of AI regulations in the United States.
At the White House
The Biden administration has been on a quick listening tour with AI companies, academics and civil society groups. The effort began in May, when Vice President Kamala Harris met at the White House with the chief executives of Microsoft, Google, OpenAI and Anthropic and pushed the tech industry to take safety more seriously.
On Friday, representatives of seven tech companies appeared at the White House to announce a set of principles to make their AI technologies safer, including third-party security reviews and watermarking of AI-generated content to help prevent the spread of misinformation.
Many of the announced practices are already in place at OpenAI, Google and Microsoft, or are on track to take effect. They do not amount to new regulation, and the promises of self-regulation fell short of what consumer groups had hoped for.
“Voluntary commitments are not enough when it comes to Big Tech,” said Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators must put in place meaningful, enforceable guardrails to ensure that the use of AI is fair, transparent and protects the privacy and civil rights of individuals.”
Last fall, the White House introduced a Blueprint for an AI Bill of Rights, a set of guidelines for consumer protections in the use of the technology. The guidelines, too, are not regulations and are not enforceable. This week, White House officials said they were working on an executive order on AI but did not disclose its details or timing.
In Congress
The loudest drumbeat on regulating AI has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include the creation of an agency to oversee AI, accountability for AI technologies that spread disinformation and licensing requirements for new AI tools.
Lawmakers have also held hearings on AI, including a hearing in May with Sam Altman, the chief executive of OpenAI, which makes the ChatGPT chatbot. Some lawmakers floated ideas for other regulations during the hearings, including nutrition-style labels to notify consumers of AI risks.
The bills are in their earliest stages and so far lack the support needed to move forward. Last month, Senator Chuck Schumer, Democrat of New York and the majority leader, announced a monthslong process for creating AI legislation that included educational sessions for members in the fall.
“In many ways we are starting from scratch, but I believe Congress is up to the challenge,” he said at the time in a speech at the Center for Strategic and International Studies.
In federal agencies
Regulatory agencies have already begun to act, policing some of the issues arising from AI.
Last week, the Federal Trade Commission opened an investigation into OpenAI’s ChatGPT and sought information on how the company secures its systems and how the chatbot could harm consumers by creating false information. The chair of the FTC, Lina Khan, said she believes the agency has enough power under consumer protection and competition laws to control problematic behavior by AI companies.
“Waiting for Congress to act is not ideal given the usual timeline for congressional action,” said Andres Sawicki, a law professor at the University of Miami.