As the UK prepares to host the global AI summit and crack the enigma of whether artificial intelligence is a force for good or ill, Carl Dinnen reports
I spent part of Thursday watching a robot sorting old milk bottles at a recycling plant. The Recycleye robot uses artificial intelligence to sort plastic milk bottles from other plastics.
It leaves FCC Environment in Reading with bales of old HDPE plastic milk bottles ready for recycling.
AI is developing incredibly quickly and the government - and some AI developers - have become concerned that artificial intelligence is in urgent need of proper regulation.
In 10 days' time Rishi Sunak will host an international summit at Bletchley Park.
The AI Safety Summit aims to address two types of risk: loss of control, where an AI starts doing things that are unintended, and the risk of 'bad actors' - criminals, terrorists or rogue states - using AI for their own ends.
But some researchers and campaigners think these targets are too narrow. There are problems of bias and discrimination in existing AI systems.
The government says these are being dealt with in other international forums and in domestic legislation.
Fran Bennett from the Ada Lovelace Institute says all the AI risks involve technology doing something that harms society, so regulators must deal with them all at once; they can't be separated out.
The discussion on how to regulate AI is only just getting started. The development of AI is racing ahead.