Are we ready to regulate AI?

The conversation about the best way to regulate artificial intelligence (AI) has been bubbling for years, but the release of generative AI (GenAI) models at the end of 2022 sparked more urgent discussion throughout 2023 about the need for regulation. In the UK, Parliament launched a number of inquiries into various aspects of the technology, including autonomous weapon systems, large language models (LLMs), and general AI governance.
Computer Weekly’s coverage of AI regulation focused on these and other developments in the UK, including the government’s publication of its long-awaited AI whitepaper, its insistence that specific AI legislation is not yet needed, and its convening of the world’s first AI Safety Summit at Bletchley Park in November, which Computer Weekly attended alongside press from across the globe. Coverage also touched on developments with the European Union’s (EU) AI Act, and on the efforts of civil society, unions and backbench MPs in the UK to centre regulation on the needs of the workers and communities most affected by AI’s operation.
1. MPs warned of AI arms race to the bottom
In the year’s first Parliamentary session on AI regulation, MPs heard how the flurry of LLMs deployed by GenAI firms at the end of 2022 prompted an “arms race” to the bottom among big tech firms in terms of safety and standards.

Noting that Google founders Larry Page and Sergey Brin had been called back into the company (after leaving their daily roles in 2019) to consult on its AI future, Michael Osborne – a professor of machine learning at Oxford University and co-founder of responsible AI platform Mind Foundry – said the release of ChatGPT by OpenAI in particular placed “competitive pressure” on big tech firms developing similar technology, which could prove dangerous.

“Google has said publicly that it’s willing to ‘recalibrate’ the level of risk … in any release of AI tools due to the competitive pressure from OpenAI,” he said. “The big tech firms are seeing AI as something very, very valuable, and they’re willing to throw away some of the safeguards … and take a much more ‘move fast and break things’ perspective, which brings with it enormous risks.”
2. Lords Committee investigates use of AI-powered weapons systems
Established on 31 January 2023, the Lords Artificial Intelligence in Weapon Systems Committee spent the year exploring the ethics of developing and deploying lethal autonomous weapons systems (LAWS), including how they can be used safely and reliably, their potential for conflict escalation, and their compliance with international law. In its first evidence session, Lords heard about the dangers of conflating the military use of AI with better compliance with international humanitarian law (IHL), given, for example, the extent to which AI speeds up warfare beyond the pace of human cognition, as well as dubious claims that AI would reduce loss of life (with witnesses asking, “for whom?”).