Last week, 1,300 experts signed an open letter calling AI a force for good. It was organised by BCS, the Chartered Institute for IT, to combat “AI doom”.
So the GIANT question is: have we already lost control of the AI conversation? Is our mainstream press encouraging technophobia?

There is an irony in all of this: there was a time when PR experts had to fight to get the latest AI developments heard by journalists, and now it looks like parts of the mainstream media are on a crusade to take down AI.
How do we manage the conversation and generate a bit more positivity? After all, in the very early days of the telephone, users feared they were dabbling with the occult. It’s a natural survival response to something new and unknown.
Big social change is difficult to process, and as anxieties creep in, we redefine risks and threats in our minds.
Alongside this outpouring of concern, governance is arriving quickly at the business level as well as at national and international levels. All of these guidelines are centred on managing risk and, by implication, these anxieties.
It is our job as communicators to keep these risk factors in mind and devise communications strategies that understand and address the unacceptable, high and limited risks of AI systems.
For example, the European Union’s AI Act has drawn very specific lines in the sand, and each area of risk it defines poses a different communications challenge:
Limited risk
AI systems in the limited-risk category will still face minimal transparency requirements, so that users can make an informed decision about whether to keep using the product. This includes AI systems that generate or manipulate image, audio or video content, such as deepfakes. Communicating what AI you use, why and how, how it affects what the user is doing, and how their personal information is processed will be a minimum requirement. Companies will be judged on how well they deliver against these transparency requirements and communicate the basics.
High risk
AI systems that negatively affect safety or fundamental rights are more complex, and they fall into different categories depending on the sector, the age of the audience and the application. They will need to be assessed before launch and monitored continuously afterwards. Understanding how to communicate with regulatory audiences on an ongoing basis is a must, and demonstrating that you are an integrity-first brand is critical, particularly if you are categorised as “high risk”.
Unacceptable risks
AI systems deemed an “unacceptable” threat to people will be banned, with only a few exceptions. This area of risk will shape the future of AI innovation and how AI algorithms are built. PR pros need to be included in AI growth teams to help challenge thinking on ethics-based principles, and to communicate how external perception should shape what gets built in the future.
If you need help defining your AI communications strategy, DM us on social or email onesmallstep@madebygiants.io.