Hasan R. Sayed Hasan
Managing Director, Master Media
Presented at The “2nd International Expert Conference on AI, Ethics & Society: Media, Information & Technology Literacy for the Public Good”, organised by Université Moulay Ismaïl and UNESCO.
Moulay Ismail University, Meknès, Morocco, 12-13 December 2024
1. Introduction
The media landscape is being rapidly reshaped by Artificial Intelligence (AI), which is fundamentally changing how content is created, managed, distributed, and consumed. From automating journalism to personalizing viewing experiences, AI-driven media technologies present significant opportunities for improved creativity, innovation, efficiency, and scale. Media companies already use AI for content recommendations, news article production, and enhanced video production workflows. However, these developments raise serious ethical issues, including algorithmic bias and the dissemination of misinformation, which risk eroding public trust in media.
Media and Information Literacy accordingly plays a critical role: it equips individuals across the media value chain, from producers to consumers, with the ability to navigate and critically evaluate AI-supported processes and content, and to understand the ethical dimensions of AI.
We will explore the impact of AI on media, its ethical challenges, and the role of Media and Information Literacy as a framework to balance innovation with responsibility and to promote a more transparent media ecosystem that benefits both media professionals and the public.
2. The Impact of AI on Media Technologies
AI in Content Creation
AI is transforming the process of content creation, mainly by increasing its speed and efficiency. By utilising AI tools to automate the generation of news articles, journalists can focus on more creative and complex stories.
AI tools are also greatly improving audiovisual content production by offering real-time editing capabilities, allowing creators to edit and enhance their work with minimal effort and time. Generative AI tools support content creators by automating tasks such as writing and video and image generation, and by offering creative suggestions, enabling faster production of high-quality content and enhanced storytelling.
AI in Media Asset Management
AI is also having a significant impact on media asset management. AI tools are widely used to automate metadata tagging and enrichment, transcription, content localisation (translation and cultural adaptation), archiving and preservation, and other media management processes. For media companies, such automation improves the efficiency of media workflows by reducing the need for manual intervention, enabling faster and more accurate content management.
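The automated tagging and enrichment described above can be illustrated with a minimal sketch. The keyword rules and function names below are hypothetical stand-ins for a real AI model (a production system would use trained vision, speech, or language models), but the enrichment workflow, inferring tags and merging them with existing manual metadata, follows the same shape.

```python
# Minimal sketch of automated metadata tagging for media assets.
# TAG_RULES stands in for a real AI model's output; all names here
# are illustrative, not a reference to any actual product.

TAG_RULES = {
    "interview": ["interview", "q&a", "conversation"],
    "sports": ["match", "goal", "tournament"],
    "news": ["breaking", "report", "headline"],
}

def generate_tags(description: str) -> list[str]:
    """Return metadata tags inferred from an asset's description."""
    text = description.lower()
    return sorted(
        tag for tag, keywords in TAG_RULES.items()
        if any(word in text for word in keywords)
    )

def enrich_asset(asset: dict) -> dict:
    """Attach auto-generated tags while keeping manual tags intact."""
    auto = generate_tags(asset.get("description", ""))
    manual = asset.get("tags", [])
    return {**asset, "tags": sorted(set(manual) | set(auto))}

asset = {"id": "clip-001",
         "description": "Breaking report: post-match interview with the coach",
         "tags": ["archive"]}
print(enrich_asset(asset)["tags"])
```

Note how the sketch preserves manually assigned tags rather than overwriting them, a small illustration of the human oversight the paper argues should accompany automation.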
AI in Distribution and Consumption
AI is not only transforming the production and management of audiovisual content, but also its distribution and consumption. Streaming platforms rely heavily on AI-powered recommendation algorithms to analyse viewer behaviour, curate content, and tailor personalized recommendations that keep audiences engaged. Filter bubbles and echo chambers are two hazards associated with this personalization and curation. When users are exposed only to content that aligns with their interests and supports their views, they become isolated from other viewpoints and perspectives, which perpetuates a cycle of selective exposure and can further polarize societies.
Content moderation is another area where the use of AI is growing. Social media platforms employ AI tools to identify and remove harmful content such as hate speech and misinformation. These tools are not without flaws: they sometimes fail to accurately understand context and cultural and linguistic nuances, resulting in the deletion of legitimate content or allowing harmful material to slip through.
3. Ethical Challenges of AI in Media
Misinformation and Disinformation
One of the most pressing ethical concerns with AI in media is its potential to amplify misinformation and disinformation.
AI tools can produce "hallucinations", instances where the AI generates information that is fabricated, incorrect, or not grounded in facts or reality. These hallucinations are problematic because they can mislead the public, damage trust, and spread false information.
Generative AI has also made it easier to fabricate videos or audio clips, often with harmful intent, such as realistic but false videos of political figures and celebrities that cause confusion and can influence public opinion[i]. Deepfakes, highly realistic fake videos that deceive audiences by altering people's appearances or speech, have raised concerns about disinformation campaigns and the manipulation of public opinion, highlighting the dangers of such tools when placed in the wrong hands.
AI also plays a significant role in generating and spreading disinformation, especially on social media platforms. Automated AI-powered bots can generate content at scale, manipulating narratives and amplifying false information. Specifically during elections, bots have been used to push disinformation campaigns aimed at swaying voter opinion[ii]. AI can also exploit algorithmic biases in social media platforms, reinforcing misleading content through recommendation engines that prioritize user engagement over information accuracy[iii].
On a positive note, AI can also be used to combat misinformation. Social media platforms are employing AI algorithms to detect and flag inaccurate information[iv]. However, the sheer and growing volume of content published on social media makes it difficult for AI tools to filter out harmful content without occasional errors.
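The trade-off noted above, scale versus occasional errors, can be made concrete with a toy flagging rule. In the sketch below, posts carry a hypothetical model confidence score and a single threshold decides what gets flagged: lowering the threshold removes legitimate content (false positives), while raising it lets harmful items slip through (false negatives). The data and scores are invented for illustration.

```python
# Toy misinformation flagger. Each post carries a hypothetical model
# confidence score; a single threshold decides what gets flagged.
# Moving the threshold trades false positives for false negatives,
# which is why fully error-free automated moderation remains elusive.

POSTS = [
    {"id": 1, "score": 0.95, "is_misinfo": True},
    {"id": 2, "score": 0.80, "is_misinfo": True},
    {"id": 3, "score": 0.75, "is_misinfo": False},  # looks suspicious but is fine
    {"id": 4, "score": 0.40, "is_misinfo": True},   # subtle, low model score
    {"id": 5, "score": 0.10, "is_misinfo": False},
]

def flag(posts, threshold):
    """IDs of posts the model would flag at this confidence threshold."""
    return [p["id"] for p in posts if p["score"] >= threshold]

def errors(posts, threshold):
    """(false positives, false negatives) at a given threshold."""
    flagged = set(flag(posts, threshold))
    false_pos = sum(1 for p in posts if p["id"] in flagged and not p["is_misinfo"])
    false_neg = sum(1 for p in posts if p["id"] not in flagged and p["is_misinfo"])
    return false_pos, false_neg

for t in (0.5, 0.9):
    print(f"threshold={t}: (false positives, false negatives) = {errors(POSTS, t)}")
```

At scale, both error types are unavoidable at any single threshold, which is why human review of borderline cases remains part of platform moderation pipelines.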
Copyright and Intellectual Property Concerns
AI-generated content raises complex questions about copyright and intellectual property. As AI tools are increasingly used to automate the generation of news articles and to generate or support the production of audio, video, and artwork, it becomes unclear who owns the resulting content. For example, when an AI system generates a news article, does the user “asking” the AI tool to generate the content own the rights to that content? Or is it the AI system itself?
The use of Generative AI in content production can easily result in plagiarism and copyright violations, as AI-generated content often mimics existing creative works. Generative AI may produce outputs based on its training data, which could include copyrighted material, without proper attribution. This raises ethical and legal concerns about originality and intellectual property rights.
Over-reliance on AI may also weaken creativity and originality in content production, especially when AI simply builds upon content that has already been produced.
As the line between human and AI authorship blurs, media professionals must consider how to protect intellectual property and ensure proper credit is given. Copyright laws around the world are still grappling with these questions, and media organizations will have to navigate this uncertain landscape for the foreseeable future.
Algorithmic Bias and Public Trust
Algorithms used by online media platforms may unintentionally reinforce biases based on racial, ethnic, religious, socioeconomic, cultural, linguistic, or other characteristics. For example, YouTube's recommendation algorithm has been criticized for promoting extremist content, especially to users who interact with certain types of political or social material[v] [vi]. Similarly, Instagram's AI-powered content moderation system has faced criticism for disproportionately targeting marginalized communities, including censoring posts from people belonging to specific ethnic and cultural groups.[vii]
Restoring public trust in AI-driven media technologies requires transparency and oversight. Media organizations must work to ensure that AI tools are designed with fairness and ethical considerations in mind, and AI systems must be continually monitored and refined to mitigate bias and prevent harm to vulnerable communities.
4. Media and Information Literacy (MIL) as a Framework
MIL for Journalists and Media Producers
To face these ethical challenges, Media and Information Literacy (MIL) emerges as a vital framework for responsible engagement with AI in media. MIL goes beyond traditional literacy by teaching media professionals to critically evaluate the tools they utilize, with a specific focus on the ethics of media technologies[viii]. MIL can equip them with the skills to identify potential biases, detect misinformation, and understand the ethical implications of AI tools, thus ensuring that AI-generated content adheres to journalistic standards of accuracy and fairness.
With MIL, media professionals can identify the limitations of AI-generated content, ensuring that human oversight is maintained in critical reporting areas such as investigative journalism. Similarly, content creators working with AI video editing tools will understand the ethical implications of using automated tools, from privacy concerns and intellectual property rights protection, to deepfake risks.
Promoting AI literacy can also reduce the risks associated with algorithmic bias. When media professionals are trained to recognize how AI systems can perpetuate harmful biases, they can take steps to correct these issues. This could involve revising internal editorial guidelines to account for the potential biases in AI-driven tools, or advocating for more transparent processes.
Moreover, MIL encourages a comprehensive approach to AI, where journalists and media producers are not only users of technology but active participants in shaping its ethical use. Through MIL, they can advocate for human supervision of AI systems, ensuring that technological advancements do not come at the expense of ethical principles or public trust.[ix]
MIL for Other Media Professionals
Given the increasing role of AI-driven technologies in media, and in addition to journalists and media producers, other media professionals should possess advanced Media & Information Literacy. These roles ensure that MIL principles are upheld throughout the media ecosystem. Such profiles include, among others:
- Media Executives: guide the ethical use of AI and ensure compliance with industry standards within their organisations. They can establish internal oversight mechanisms, such as AI Ethics Committees, to provide further safeguards against unethical AI use. Ethical guidelines that dictate how AI may be used within their operations could include key principles like transparency, fairness, and accountability.
- Creative Producers: use AI tools for brainstorming, storytelling, or production planning while upholding ethical standards and ensuring inclusivity and fairness in creative decision-making.
- Video and Audio Editors: critically evaluate AI-assisted tools such as automated colour grading, video editing, noise reduction, and audiovisual effects while maintaining creative control. They also need to be aware of ethical, legal, intellectual property, and copyright considerations when using AI tools. When using voice-cloning tools, they must obtain proper permission from the talent before deploying AI-generated voices, maintain creative integrity, and avoid undermining the talent's original artistic intent or misrepresenting their contributions.
- Graphic Designers: effectively use AI tools like generative design, automated layout creation, and image generation while ensuring originality, avoiding unintentional plagiarism, and addressing ethical concerns around biases in AI-generated visuals, such as stereotyping in image representation.
- IT and Technical Staff: manage AI-driven tools and platforms, and should be literate in the ethical dimensions of data handling and content automation.
MIL for Media Technology Providers
In addition to media production and publishing organisations, professionals within media technology development companies, especially those involved in software development, should also have advanced Media & Information Literacy (MIL). This will support them in ensuring that the technologies they develop and deploy are aligned with ethical and regulatory standards, which is critical in maintaining both innovation and public trust in media:
- Software Engineers and Developers: design AI algorithms and tools that are used in the media industry: MIL helps them integrate ethical considerations, such as preventing bias in AI models and ensuring transparency in automated processes.
- Data Scientists: work with large datasets and machine learning models, and must ensure they handle data responsibly, respect privacy, and avoid unintended consequences in content creation and distribution.
- Product Managers: oversee the development and implementation of AI-driven media technologies, and must ensure that the products they develop align with ethical standards and foster public trust.
- Compliance Officers: ensure that the AI systems their companies develop comply with regulations and ethical guidelines related to data use, automation, and audience manipulation.
- Quality Assurance (QA) Specialists: MIL helps them test AI-driven media technologies with an understanding of potential ethical issues, such as how the software may affect content integrity or user trust.
5. Integration of AI Literacy into Media Education and Professional Development
As AI technologies become increasingly integral to media production, education, and professional development, the need to integrate AI literacy into media academic and professional education is more urgent than ever.
- Academic Curricula Development: Media schools and training programs should incorporate AI literacy into their curricula including both technical skills (such as understanding how AI algorithms are developed and how they work) and critical thinking around AI’s ethical implications. Media students can then graduate with a comprehensive understanding of how to use AI tools responsibly. Courses could include hands-on projects using AI tools, together with discussions on ethical concerns.
- Ongoing Professional Training: AI is a fast-evolving field, so media professionals need continuous training to stay updated on the latest tools and their ethical and legal considerations. AI literacy workshops that focus on emerging AI technologies and their applications in media should be offered within media organisations, highlighting case studies where AI tools were misused, alongside best practices for ensuring compliance and ethical use.
6. Conclusion
Artificial intelligence will continue transforming the media landscape, bringing with it both opportunities for innovation and significant ethical challenges. AI tools are becoming more deeply integrated into content creation, management, distribution, and consumption, and it is crucial that media professionals, educators, and organizations take proactive steps to ensure responsible AI use. Media and Information Literacy (MIL) offers a robust framework for navigating such complexities, equipping media professionals with the knowledge and skills necessary to engage with AI technologies ethically.
The media industry must balance the benefits of technological advancement with the responsibility to uphold ethical standards. Through collaborative efforts between AI developers, media professionals, and educators, the industry can harness the power of AI while preserving its commitment to accuracy, fairness, and the greater social good.
[i] https://spectrum.ieee.org/deepfake-2666142928
[ii] https://www.wired.com/story/microsoft-russia-china-iran-election-disinformation/
[iii] https://neurosciencenews.com/social-media-behavior-misinformation-23752/
[iv] https://www.forbes.com/councils/forbestechcouncil/2022/06/14/the-growing-role-of-ai-in-content-moderation/
[v] https://www.ucdavis.edu/curiosity/news/youtube-video-recommendations-lead-more-extremist-content-right-leaning-users-researchers
[vi] https://foundation.mozilla.org/en/blog/mozilla-investigation-youtube-algorithm-recommends-videos-that-violate-the-platforms-very-own-policies/
[vii] https://dl.acm.org/doi/pdf/10.1145/3637300
[viii] https://files.eric.ed.gov/fulltext/EJ1344750.pdf
[ix] https://iite.unesco.org/publications/artificial-intelligence-media-and-information-literacy-human-rights-and-freedom-of-expression/