The EU AI Act and digital content creation: what you need to know

Alberto Maestri Published on 8/2/2024

This article was conceived and written in collaboration with Federica Morichetti, copywriter and content manager at GreatPixel.

Misleading content, compromised privacy and user manipulation: these are just some of the risks posed by the dizzying rise of artificial intelligence (AI).

Which is why the European Union (EU) has just passed the AI Act, legislation intended to set clear boundaries for the use and development of this technology and to address its dangers.

Originally proposed by the European Commission in 2021, the act was finally approved by the European Parliament on 13 March 2024 and will be phased in over three years. The EU has also created an AI Office: staffed by 140 industry experts, this body is tasked with supervising the implementation of European AI regulations.

Why the AI Act is important

From the outset, the AI Act has been widely acknowledged as the most comprehensive attempt at global regulation of AI systems.

During the debate prior to the act’s approval by the European Parliament, MEP and co-sponsor of the act Brando Benifei enthused:

“We have finally managed to pass the world’s first binding legislation on artificial intelligence.”

The EU has now set an example for others to follow, though being first also brings a potential first-mover disadvantage.

Regulating an area as vast, fast-moving and multi-faceted as AI is a daunting challenge. Legislators have to work within the realm of the probable, not the certain, and try to keep up with technology that changes faster than people can understand it.

Indeed, shortly after the legislation was passed, there were significant shifts in the AI ecosystem. In May 2024, OpenAI generated much excitement with the release of its latest model, GPT-4o. In so doing, the organisation led by Sam Altman consolidated its position as a pioneer in the space. The new model can process users’ text and audiovisual inputs through a single neural network, thereby minimising data loss and accelerating response speeds to an average of 320 milliseconds, comparable to a human’s response time in a conversation.

Who the AI Act affects

An area as complex as AI requires equally complex legislation – and this can be hard to get to grips with. Below we look at the key points of the new European AI legislation, which is part of the EU’s broader digital strategy.

But before we begin, we need to first clarify what we mean by artificial intelligence. In the EU regulation, Article 3 defines an AI system as:

“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The regulation therefore covers all generative AI programs like Google Gemini or ChatGPT. And even though the owners of these systems are based in the US, the law applies to all “providers” (public and private) offering AI services in the European market. What’s more, it also applies to “deployers” – in other words, anyone who uses an AI system for professional purposes and whose output is accessible in the EU. This means businesses that use content created with generative AI will be affected by the law too.

The act exempts AI used for military and national security purposes, and for scientific research, as well as open-source AI (which is nevertheless regulated in high-risk use cases).

The AI Act’s key points

The restrictions imposed by the act depend on the level of risk posed by AI systems, which are classified into four macro categories:

  • Unacceptable-risk AI systems. These are defined as any technology that does not adhere to the EU’s fundamental values, such as human dignity, democracy and rule of law. Anything that falls into this category is prohibited or subject to stringent restrictions.
  • High-risk AI systems. Technology that may have a “significant” negative impact on the fundamental rights of European citizens is considered high risk. This includes any AI systems used to manage and collect sensitive data in areas such as health, education, law enforcement or human resources, as well as general-purpose AI models trained with more than 10^25 floating-point operations (FLOPs) of compute. Technology in this category is subject to strict regulations regarding data collection methods, information provided to users, human supervision, risk assessment and more. It must also be certified and approved by independent bodies.
  • Limited-risk AI systems. The vast majority of generative AI systems used for personal purposes or for the creation of audiovisual content (such as chatbots or image generators) are covered by this category. Deployers of such systems have a duty of transparency towards users. This means, for example, that people must be informed when they are interacting with an AI-powered chatbot.
  • Minimal-risk AI systems. Examples include photo filters or technologies used in videogames. There are no restrictions other than those set out in existing European law on privacy, consumer protection and copyright.

Bear in mind that these categories may well change in coming years as technology advances.

AI-generated content: it’s complicated

Though undoubtedly a step in the right direction, Europe’s new AI law is vague on one particularly thorny issue: intellectual property rights and AI-generated content.

Key questions have yet to be answered. The first has to do with the way in which technologies are “trained”. Before a generative AI system can produce text, images or audio, it first has to be fed vast amounts of human-created content. This involves processing copyrighted material, a practice that has long been condemned by rightsholders, who accuse developers of appropriating their content indiscriminately.

In this regard, the AI Act sets out limitations in Recital 105, which stipulates that:

“Any use of copyright protected content requires the authorisation of the rightsholder concerned unless relevant copyright exceptions and limitations apply.”

Authors therefore have the right to refuse AI systems permission to use their work. But what about the output of these systems? Who actually owns the content generated by AI?

Unfortunately, this is still unclear, and there is no internationally agreed position either. As a result, two divergent approaches have been taken thus far:

  • Limiting intellectual property rights to works created by humans. This is the policy adopted by the United States and the European Patent Office (EPO). A ruling by the EPO’s Legal Board of Appeal in December 2021 established that, for a work to be patented, it must have been created by a “person with legal capacity”.
  • Attributing the originality of the work to the user or developer of the AI system. This is the position taken by the United Kingdom, India, Ireland and New Zealand.

To work around the problem, many businesses prefer to significantly alter AI-generated content, effectively using it for drafting purposes only. In theory, this should enable them to demonstrate human creative input that makes their work original and, therefore, protectable, but doubts remain.

Takeaways and future developments

The AI Act heralds a major legal and cultural shift. Its very existence is a sign of the growing role that AI is playing in people’s everyday lives. Regulating AI cannot be put off any longer, and promises to be a key issue in the global debate around this technology.

The greatest weakness of the EU’s AI regulation is that it’s a regional solution to a global problem. The digital nature of AI allows anyone, anywhere in the world to use it, regardless of where the developer is based. This makes regulating AI a global challenge. Moreover, conflicting interests between countries may lead to significant regulatory divergence. That’s why experts have long argued for a universal approach that involves as many stakeholders as possible.

All that being said, the EU’s AI Act is a major milestone. If nothing else, it shows European institutions realise that while the rise of AI cannot be stopped, if harnessed properly, it can bring enormous benefits. In the words of Dragoş Tudorache MEP, the European Parliament’s co-rapporteur for the legislation: “The future is AI-fuelled and we must continue to shape it.”