ChatGPT's chief calls for new agency to regulate artificial intelligence

Should we be worried about artificial intelligence following warnings from the boss of ChatGPT that AI systems could spread misinformation and influence voters? Robert Moore reports

The CEO of the artificial intelligence company behind ChatGPT has told US politicians that government intervention "will be critical to mitigate the risks of increasingly powerful" AI systems.

Speaking before Congress, Sam Altman said: "As this technology advances, we understand that people are anxious about how it could change the way we live. We are too."

Mr Altman proposed the formation of a US or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

In addition to focusing on the potential threats posed by AI systems, the OpenAI co-founder highlighted the good that technological advancements could bring to humanity.

Chief among the positive impacts that ChatGPT could bring about, Mr Altman said, was "potentially finding the cure for cancer."

His San Francisco-based startup rocketed to public attention after it released the chatbot late last year.

Senator Richard Blumenthal speaking during a hearing on artificial intelligence. Credit: AP

What started out as a panic among educators about ChatGPT's use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

There is no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing.

But the societal concerns brought by Mr Altman and other tech CEOs to the White House earlier this month have led US agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Senator Richard Blumenthal, chairman of the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator.

The speech was actually delivered by an AI voice clone trained on Mr Blumenthal's floor speeches, reciting remarks written by ChatGPT after he asked the chatbot to compose his opening statement.

Think tanks believe coursework over exams will deliver 'less trustworthy' grades due to AI systems. Credit: PA

Mr Blumenthal asked: “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

Mr Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them.

He also expressed particular concern about how future AI systems could destabilise the job market.

Pressed on his own worst fear about AI, Mr Altman mostly avoided specifics.

But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could "self-replicate and self-exfiltrate into the wild", hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

Co-founded by Mr Altman in 2015, with backing from tech billionaire Elon Musk and a mission focused on safety, OpenAI has evolved from a nonprofit research lab into a business.

Its other popular AI products include the image-maker DALL-E.

Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Also testifying were IBM's chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks.

The letter was a response to the March release of OpenAI's latest model, GPT-4, described as more powerful than ChatGPT.

“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel's ranking Republican, Senator Josh Hawley of Missouri.

“This hearing marks a critical first step towards understanding what Congress should do.”

A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules.

In a copy of her prepared remarks, Ms Montgomery asked Congress to take a "precision regulation" approach, and disagreed with proposals by Mr Altman and Mr Marcus for an AI-focused regulator.

"This means establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself,” Mr Montgomery said.
