OpenAI CEO Sam Altman told Congress on Tuesday that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.
Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”
Driving the news: Altman’s San Francisco-based startup, OpenAI, gained attention after it released ChatGPT, a free chatbot tool that answers questions with convincingly human-like responses. Concerns have arisen about the ability of “generative AI” tools like ChatGPT to mislead people, spread falsehoods, violate copyright protections, and upend jobs.
- While there’s no immediate sign Congress will craft sweeping new AI rules, those societal concerns brought Altman and other tech CEOs to the White House earlier this month and prompted U.S. agencies to promise crackdowns on harmful AI products that violate existing civil rights and consumer protection laws.
- Altman proposed that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild” — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.
- Altman is planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public.
What they’re saying: Also testifying were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who joined other AI experts in calling on OpenAI and other tech firms to pause development of more powerful AI models for six months to give society time to weigh the risks.
- The panel’s ranking Republican, Sen. Josh Hawley of Missouri, said the technology has big implications for elections, jobs, and national security. He said Tuesday’s hearing marked “a critical first step towards understanding what Congress should do.”
- A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. IBM’s Montgomery instead urged Congress to take a “precision regulation” approach.
- “We think that AI should be regulated at the point of risk, essentially,” Montgomery said — by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.