Preparatory papers on state of frontier AI and potential risks for UK AI Safety Summit published

October 27, 2023

On 26 October, the UK prime minister gave a speech on the AI Safety Summit to be hosted in the UK on 1 and 2 November. The summit will bring together representatives from leading AI companies, world governments and civil society to discuss safety risks arising from frontier AI. Frontier AI is “highly capable general purpose AI models that can perform a wide variety of tasks that match or exceed the capabilities present in today’s most advanced models” (cutting-edge foundation models), and the safety risks are cast quite widely, ranging from artificial general intelligence threatening our existence to deepfakes degrading trust in digital information sources.

The UK is setting up an AI Safety Institute that will be charged with examining, evaluating and testing new types of AI so that we understand what each new model is capable of. It will publish research on these risks so that governments can make appropriate regulatory policy responses. Its ambition is to make this body an international centre of scientific research similar to the Intergovernmental Panel on Climate Change.

To frame the discussion and establish its credentials as host of such an important resource, the government also published Frontier AI: capabilities and risks. This is made up of three documents:

The discussion paper – this provides a clear explanation of the current state of frontier AI capabilities and how they might improve in the future, and sets out the cross-cutting risk factors (including the difficulty of designing safety into models that can be used in any domain and of tracking how they are used, insufficient incentives for AI developers to invest in safety measures, and the absence of established AI safety standards), societal harms (including degradation of the information environment, labour market disruption and unfairness), misuse risks (including dual-use science, cyber, disinformation and influence) and loss of control risks (including where humans hand over control to misaligned AI systems or AI systems actively reduce human control).

An annex “Safety and security risks of Generative Artificial Intelligence to 2025” – this concludes that by 2025 generative AI is more likely to amplify existing safety and security risks than create wholly new ones, but that it will sharply increase the speed and scale of some threats and there will almost certainly be unanticipated threats. Less sophisticated threat actors will be able to conduct previously unattainable attacks. Impacts in the digital sphere (cyber-attacks, fraud, scams, impersonation, child abuse images) are most likely; fake news and impersonation risk eroding trust in institutions and democratic engagement; and physical security risks will rise as generative AI becomes embedded in critical infrastructure.

An annex “Which capabilities and risks could emerge at the cutting edge of AI in the future” – this looks at five scenarios for where frontier AI’s capabilities and impacts could have got to by 2030 and the key policy issues each presents. It also provides more detail on current and future frontier AI capabilities (across content creation, computer vision, planning and reasoning, theory of mind, memory, mathematics, accurately predicting the physical world, robotics, autonomous agents and trustworthy AI) and the critical uncertainties in how these capabilities might develop (looking at the technical inputs of data, compute and skills, model ownership and access, safety aspects, the level of distribution and use, and the geopolitical context), to support the plausibility of, and to assist in the evaluation of, the scenarios. The scenarios are:

  • Unpredictable Advanced AI – open source frontier AI using smaller compute takes off as big private labs run out of compute capacity; open source AI knitted together can be accidentally misused or deliberately exploited by bad actors for cyber-attacks and bioweapons; skills inequalities widen between those who use AI and those who do not; global conflicts make transnational cooperation difficult. This presents the following key policy issues: How to hold open source developers to account? If AI attacks are rife – can authorities keep up? Beneficial applications exist, but how to ensure safety first?
  • AI Disrupts the Workforce – narrow but capable AI provides effective automation in some domains (eg autonomous cars) by 2030; big players concentrate scarce compute on these narrow domains; systems are safe but the impact on the workforce is rising unemployment and poverty. The key policy issues are: How to address unemployment? How to reskill workers? Should we tax gains from high automation? Can high productivity and lower demand for human labour support a better work life balance?
  • AI Wild West – a diverse range of moderately capable systems emerges, operated by different actors; vibrant new sectors emerge; but widespread safety concerns and malicious use (eg an internet polluted with fake news and unreliable content) and unemployment worries reduce social enthusiasm; authoritarian states’ breakthroughs are used for espionage or interference; authorities struggle to regulate; there is no global consensus on how to regulate. The key policy issues are: How to regulate so many different types of system? Is AI-powered enforcement required? Some organisations are overwhelmed by attacks, fraud etc – how does the state assist?
  • Advanced AI on a knife edge – a big lab has achieved near artificial general intelligence (AGI); access is restricted to paying users and businesses; other labs are some way behind; a suspicion exists that the big lab’s safety measures are being gamed by the AGI and there are concerns that the system could fall into the hands of bad actors; other AI has been developed responsibly and society is generally positive about it and the rules that are applied to it, but the AGI is another matter. The key policy issues are: How do you regulate AGI? How do you prevent existential threats if society becomes dependent on such a system? Where huge power sits with private companies – how should governments respond? How to respond to such rapid and huge societal disruption?
  • AI Disappoints – progress not much better than today; breakthroughs in narrow cases like healthcare, but businesses have struggled to use it; investors disappointed; some progress in safety but much misuse. Most people are indifferent: some have benefitted, but others have been defrauded or lost jobs. The key policy issues are: How will ongoing productivity stagnation be solved if AI isn’t the solution? Does the shift in attention away from AI make it harder to regulate safety issues?

The papers were reviewed by an expert external panel and do not represent UK government policy.

Our Take: the papers provide a very accessible window into the potential capabilities and the security, safety and societal harms that might emerge over the next 2 to 7 years. Even if the UK government’s AI Safety Institute fails to become the central international hub for this research material, the work done to synthesise these concepts and the event itself should move policy ideas and options forwards. Although the focus is on safety, all deployers of generative artificial intelligence would be well advised to work through the materials and consider their impact on their current AI governance processes.