
Reflections on the AI Safety Summit


Perhaps the most important aspect of the recent AI Safety Summit (Bletchley Park, 1 & 2 November) was its theatre. Senior policymakers from 28 countries plus the EU attended – roughly half the world’s substantial economies. There were some notable absences, for example Argentina, Russia and Qatar, and some A-listers, such as Joe Biden, did not appear, but by and large both cast and audience were impressive. The coup de théâtre was holding the show at Bletchley Park, arguably the stage where AI began – perhaps significantly, at a time of war.

The topic of AI is huge

The topic of AI is huge – far beyond the scope of a single conference – and subject to destabilising scare tactics from self-publicists: Elon Musk (‘Hope for the best, but prepare for the worst’) and Nick Clegg, warning that governments must prepare for AI being used to interfere with upcoming elections and must cooperate “right now” on the role it will play.

Sunak’s team did well to arrive at five coherent objectives within this vast space:

  • A shared understanding of the risks posed by frontier AI and the need for action.
  • A forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks.
  • Appropriate measures which individual organisations should take to increase frontier AI safety.
  • Areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance.
  • A showcase of how ensuring the safe development of AI will enable it to be used for good globally.

These objectives are all pertinent, if bland.

The Guardian (2 November) also identified five ‘takeaways’:

  1. It was undoubtedly a diplomatic coup to assemble such a cast.
  2. Nevertheless, the real power lies with the US.
  3. Musk grabbed the glitzy headlines.
  4. There is no consensus on the existential risk: prepare for a disinformation glut.
  5. There is a need for global action, but countries are moving at their own pace.

Undoubtedly the UK has shown leadership here. However, in the long term, is there really a need for yet another global institution? The G20 already has a greater reach and more global support. Developing technology is well within its remit. The Declaration at the end of the Bletchley summit affirms support for the UN Sustainable Development Goals. So do most global entities. Would it not be better to work together as a common body? A global, rather than a western voice?

No one knows what’s coming

Of course, part of the trouble with pondering AI is that no one knows what’s coming, or even agrees on what intelligence is. Musk likes to give the impression that he knows more than we do: but remember that this is the man who offered a submarine to a cave rescue in Thailand. It is the last of the Guardian’s takeaways that carries the real force. AI, and the developers of AI, are not constrained by national boundaries. Attempts at some form of licensing or regulation in one country are countered by a shift in location to another. Any constraints on AI must be international in nature. Hence the need for collaboration, not duplication.

The summit was wise to avoid technical issues – the pace of change is just too rapid. But as a result, the Declaration sounds vague, unspecific and hopelessly loquacious, as for example:

To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.

One might as well throw in motherhood and apple pie.

A recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection need to be addressed.

However, the Declaration recognises the risks, and calls for collaborative activity – a balance between as yet unspecified threats and opportunities. It states:

Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation

and

All actors have a role to play in ensuring the safety of AI: nations, international fora and other initiatives, companies, civil society and academia will need to work together.

My point precisely.

Not far from where the conference was held is a world leader in college-based AI – a rare proximity, if only in geographical terms, between policymakers and people who know what they are talking about. To coincide with the summit, the South Central Institute of Technology, Bletchley Campus, hosted an Inclusive Global AI Ethics and Safety conference on 1 November, in collaboration with the Milton Keynes Artificial Intelligence (MKAI) community.

Alex Warner, Principal of the South Central Institute of Technology, presented four takeaways to the recent AoC conference:

  1. Adaption rather than adoption: AI and large language models are here to stay. We need to learn to live in harmony and adapt… in the same way we did with the internet and smartphones.
  2. Imagination and opportunity: We should be excited. In the right hands it has the potential to save time, improve productivity, reduce workload and increase well-being.
  3. [The caveat to the above] There are ethical and safety considerations. Some we are aware of, others that we will discover in time.
  4. And to summarise, AI has the potential to have a profound impact on humans.

Between the global and the local, it is possible to discern a growing consensus:

  1. AI will bring about seismic changes in the nature of work and human interactions. This has happened before – it is in the nature of ‘Industrial Revolutions’.
  2. AI has the potential for good and for harm. This is not intrinsic to the technology but depends, again, on the ethical stance of those developing and using it.
  3. The difference this time is that the nation state has very little power to limit or regulate AI. This is a truly global phenomenon, for good or ill.

By Paul Grainger, Honorary Senior Research Associate, UCL.



