AI regulation is coming. But you already knew that.
Authorities across the globe recognise the potential opportunities of AI, but they are also mindful of its significant risks.
Here are the latest developments 👇
🇬🇧 UK – The UK hopes to become a global leader in AI regulation. In March, the government launched a public consultation on its pro-innovation approach to AI policy. The government is also supporting research initiatives and has committed £100m to the AI Foundation Model Taskforce, which will address the safety aspects of AI research and development. This news was followed by rumours that Downing Street plans to replace advisors on the AI Council in a bid to refresh its approach.
🇪🇺 Europe – The EU is a step closer to passing one of the world’s first laws governing AI, as the AI Act enters the final stages of debate before it can be voted on. The European Commission is pushing for the act to be finalised before the end of 2023. Once the law comes into force, it will require AI systems to be submitted for review before commercial release. In 2024, companies looking to engage with the EU market will need to prioritise their commitments to ethical AI.
🇺🇸 US – The US federal government is engaging with industry experts to develop safeguards for the safe use of AI. Sam Altman’s testimony to a Senate subcommittee was just the start of a series of congressional hearings about AI.
At this point you might be thinking: ‘Ok, AI regulation is well on its way, but what does this mean for my business?’
Here are just a few GIANT predictions on the future of AI regulation:
New Copyright Laws
Listening to the conversation at Cannes Lions this year, it is evident the creative industry is excited about using AI to streamline creative content production.
That enthusiasm also raises concerns about intellectual property rights. Who owns content created by AI platforms? Does trademark infringement apply to AI creations? These are important questions regulators will need to answer with a comprehensive intellectual property framework.
Developing Ethical AI
Authorities have already stressed the need to develop ethical AI and eliminate the potential for bias and discrimination.
We expect regulators to set out extensive tests and requirements that AI systems must pass before being released to the public. Businesses using AI will need to prove their use follows ethical standards. A good starting point is to build an AI approach on existing government guidelines; in the UK, for example, the Centre for Data Ethics and Innovation (CDEI) has published its portfolio of AI assurance techniques.
A rise in people-centric policies and comms
Working alongside AI will also have significant implications for employees. AI thrives on data to drive workplace productivity, but where do we draw the line on what data can be collected about employees? How do we make sure AI bias does not introduce discrimination into recruitment? And how do we anticipate which roles will still be a priority in ten years’ time?
Regulators and lawmakers will focus on ensuring employment laws account for the risks and opportunities AI introduces. Developing new internal communications plans with employees and regulations in mind is a must, together with new employment policies.
While regulators are still figuring out AI regulation, the public already expects companies to be transparent and responsible in their use of AI. Now is a good time for businesses to think strategically about how their brand fits into an AI-dominated future and where they stand on regulation. Businesses will need to rethink how they speak to both employees and customers to instil confidence that they are following safe and responsible guidelines, even BEFORE governments catch up with new rules.
Let’s have a chat about your communications strategy in an AI-dominated future. Email us at email@example.com.