For months, there has been a GIANT buzz around the AI Safety Summit – the first global conference aiming to tackle AI risks and regulation. 

After two days of intense debate among tech experts, global leaders and civil society representatives, the summit, held at Bletchley Park, wrapped up on 2nd November.

Here are some key highlights 👇

Day 1: AI as a societal risk 

The summit began with a GIANT diplomatic win when 28 nations and the EU – including China – signed the Bletchley Declaration. The signatories promised to build a shared scientific understanding of AI and to develop risk-based policies. The roundtables then focused on mitigating the impacts of AI on society.

🔑 Key takeaway? AI safety is now a shared concern – at least on paper. 

💬 Quotes

“The international community has sought to tackle climate change, to light a path to net zero, and safeguard the future of our planet. We must similarly address the risks presented by AI with a sense of urgency.”  – King Charles III

“We reject the false choice that suggests we can either protect the public or advance innovation. We can – and we must – do both.” – Kamala Harris

“Over the next five years or so, we’re going to have to consider that question [a pause in technology’s development] very seriously.” – Mustafa Suleyman

🏆 Winning photo

Elon Musk. Source: Sky News.

Day 2: Testing AI models

The second day set international priorities for AI over the next five years. A landmark moment came when governments and tech giants like Google, Microsoft and OpenAI agreed to have their new AI models tested. In his closing remarks, Sunak announced that the intelligence services will test new AI for national security threats.

The GIANT finale: Sunak interviewing Musk in a live conversation broadcast on X.

🔑 Key takeaway? AI safety will become an expected requirement for tech giants.

💬 Quotes

“There will come a point where no job is needed. You can have a job if you want a job for personal satisfaction, but the AI will be able to do it.” – Elon Musk

“Just as on a motorway: guardrails are not barriers – they allow the traffic to keep to the road and proceed safely.” – Ursula von der Leyen

“Ultimately, if we’ve got a skilled population they will be able to keep up with the pace of change, but it’s still a concern.” – Rishi Sunak

🏆 Winning photo

Ursula von der Leyen, Kamala Harris, Rishi Sunak and Giorgia Meloni. Source: Data Breach Today.

Our GIANT take 

Conference chat aside, what did the summit ACTUALLY signal for the future of AI regulation?

➡️ Step towards regulatory consistency – A consistent approach to AI would give tech leaders more clarity on how to navigate their businesses. But while the Bletchley Declaration is a gesture of unity, each region is approaching AI differently. By launching its own AI Safety Institute in Washington, the US has signalled it won’t blindly follow UK initiatives. The EU is also heading its own way, planning to adopt new legislation through the EU AI Act. Whether governments will unify their approach to a still-evolving technology is a GIANT question.

➡️ Developing ethical AI – In an executive order signed on Monday 30th October 2023, Joe Biden required tech firms to share AI safety test results with the US government. The summit’s voluntary agreement expands this agenda. With ethics becoming a priority, businesses will need to communicate transparently and prove that their AI models meet ethical standards.

➡️ Missed opportunity? – The summit focused on mitigating speculative risks such as AI-powered bioweapons. Critics argued that the agenda sidestepped current harms, including the spread of misinformation and AI-enabled cyberattacks. In tackling the existential threat, the world could ultimately miss the chance to address the here and now.

Are you interested in joining the AI conversation?

Let’s have a chat about your communications strategy in an AI-dominated market! 

Email us at onesmallstep@madebygiants.io or DM us on social media!