The European Audiovisual Observatory publishes a new report on AI’s impact on the audiovisual sector
The study delves into the many complex legal implications that AI presents, ranging from copyright challenges to data protection and labour issues
This week, the European Audiovisual Observatory (EAO) published a new legal report titled "AI in the Audiovisual Sector: Navigating the Current Legal Landscape".
Developed by several European experts, the study was launched at the Lyon Classic Film Market last Friday and is poised to be a key resource for understanding AI's transformative role in the industry.
The document first underscores how AI is rapidly changing how audiovisual content is created, distributed, and consumed, from enhancing creativity and personalising content to streamlining production processes. At the same time, the rise of AI presents serious challenges, including concerns about job displacement and the regulation of AI-generated content.
Part 1 provides an overview of how AI is currently being used in the audiovisual industries, with case studies of tools such as Claude, Midjourney, and DALL-E that illustrate how these technologies boost creativity, customise content, and optimise production workflows.
Part 2 delves into data protection and copyright challenges. Data protection remains a key concern, as AI frequently processes vast amounts of personal data. This part explores how the General Data Protection Regulation (GDPR) and the recently introduced AI Act work to protect personal information. It also examines international data transfers, comparing EU and US approaches to data privacy.
Copyright issues are another major hurdle in this context. AI systems often depend on copyrighted material for training, raising complex intellectual property questions. This section breaks down the legal complexities surrounding AI’s use of copyrighted works, including the creation of derivative content by AI models.
Part 3 outlines five key challenges that AI presents to the audiovisual industry.
As AI-generated content becomes increasingly common, issues of authorship, liability, and transparency take on greater importance. The report examines whether AI-generated works can be attributed to human creators and whether they may infringe on pre-existing works used to train the AI. It also emphasises the need for transparency and raises questions about who should bear responsibility for AI-generated content.
This part also zooms in on the risks AI poses to personality rights. With AI replicating voices and creating digital doubles, actors face new difficulties in safeguarding their image and voice rights. This chapter reviews the legal protections for personality rights, with a focus on the EU's new AI Act and the Council of Europe's Framework Convention on AI.
AI’s transformation of the workforce is another major concern. This chapter explores the impact of AI on the labour market, citing recent strikes in the U.S. and evolving labour policies across the EU. It also analyses how collective management organisations, trade unions, and industry associations are responding to these changes.
The potential for AI to create and spread disinformation is another critical issue. The chapter explains how AI can produce fake content, such as text, deepfakes, and manipulated audio, that can mislead audiences. It reviews existing regulations aimed at preventing disinformation and protecting media integrity, and it explores the possibility of AI models being used for fact-checking.
Finally, AI’s impact on cultural diversity and media pluralism is dissected. While AI can personalise content, it may also reinforce biases and limit exposure to diverse perspectives. This chapter considers how regulatory frameworks can counteract these effects and encourage broader content consumption.
Part 4 looks ahead, exploring the future of regulation in this area and the ethical and societal dilemmas we will face in the coming years. This forward-thinking chapter questions whether current AI regulations are sufficiently robust to meet the challenges posed by AI in the audiovisual sector. It highlights the lack of directly binding, sector-specific regulations and examines the extent to which existing laws indirectly impact AI systems in the audiovisual industry. This part also considers whether newly developed frameworks adequately address the unique risks and challenges within the sector.
The final chapter delves into the broader ethical issues surrounding AI in the AV industry. It explores concerns about authenticity, the potential for AI to distort reality, and the societal consequences of AI-generated content.
The full document is available here.