Review Article - (2025) Volume 1, Issue 1
Adoption of Artificial Intelligence in Media Organizations: A Comparative Study of Egyptian and European Media Professionals
Received Date: Jun 03, 2025 / Accepted Date: Jul 14, 2025 / Published Date: Jul 24, 2025
Copyright: ©2025 Sally Samy Tayie. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Citation: Tayie, S. S. (2025). Adoption of Artificial Intelligence in Media Organizations: A Comparative Study of Egyptian and European Media Professionals. Int J Digital Journalism, 1(1), 01-09.
Abstract
This article reviews the findings of a study carried out on a purposive sample of experts and media professionals from Egypt and several European countries. The study aims primarily to identify the perceptions and opinions of experts regarding the extent of awareness and knowledge of AI and its uses in media organizations. It also aims to identify the challenges that AI may bring to our lives in general. The study relied on in-depth interviews, an important qualitative technique, and focused on identifying the extent of media professionals' knowledge in these countries of the basics and uses of AI, the efforts made to improve AI skills, and ethical concerns in institutions that adopt AI. The results show that, in general, there is a significant lack of awareness, knowledge, and understanding of the role that AI can play within media institutions, whether in Egypt or in the European countries studied. The results also show that experts in the various countries called for enacting regulations and laws that emphasize the ethical aspect and stressed the urgent need to address ethical concerns, including accuracy, copyright, bias, and privacy violations.
Keywords
Artificial Intelligence, AI Journalism, AI Challenges, AI Knowledge
Introduction
The rapid evolution of Artificial Intelligence (AI) in recent years has significantly impacted various industries, with media organizations being no exception. AI has emerged as a transformative force in the media sector, offering tools that enhance content production, streamline documentation processes, and facilitate engagement with new scientific knowledge. Many national and regional institutions have expressed a keen interest in leveraging AI-based technologies to improve their operations and to participate more effectively in the dissemination of information. However, as the adoption of AI tools continues to expand, a growing concern arises regarding the balance between leveraging the benefits of AI and maintaining journalistic integrity. This tension presents a major challenge for media organizations globally, particularly in the face of rapid technological advancements and the potential social implications of AI integration.
The Growing Role of AI in Media
Artificial Intelligence has become an increasingly vital component of media organizations’ strategies for enhancing content creation, distribution, and audience engagement. Recent advances in AI technologies, especially since 2020, have accelerated their integration into media workflows, driven by the need for efficiency, personalization, and data-driven decision-making in a highly competitive digital environment.
AI’s introduction into media institutions has resulted in significant changes in content production, with terms like “Automated Journalism,” “Algorithmic Journalism,” and “Robot Journalism” emerging to describe these developments. For instance, Algorithmic Journalism involves the use of structured data converted into narratives through Natural Language Generation (NLG), reducing production costs and enhancing operational efficiency [1]. AI’s impact is pervasive throughout the news production cycle, with machine learning aiding in data analysis and story discovery, and AI-driven technologies facilitating automated story creation. Additionally, AI has facilitated personalized content recommendations, increasing reader engagement [2,3]. The increased use of AI in communication and journalism has heightened the need for media professionals to gain a fundamental understanding of AI’s applications in their work [4].
Over the past few years, AI has made significant strides in transforming business operations, including within the media sector. As businesses worldwide embrace AI to gain a competitive edge, its application in media organizations has become indispensable to staying competitive in an increasingly digital landscape. AI applications help media organizations streamline processes, enhance productivity, and address the challenges posed by competition. According to Zhai et al. (2020), AI is a complex concept due to its reliance on intricate algorithms that have far-reaching social effects [5]. Scholars have proposed various definitions of AI, including a characterization of AI as "enabling mechanisms to present smart behaviors similar to human behavior" [6]. Cognitive scientist Marvin Minsky also defined AI as the science of replicating human-like intelligence in machines, a view that underscores its potential to enhance media practices [7].
One of the key developments is the expansion of automated journalism, where AI systems generate news reports directly from structured data. Companies like The Associated Press and Bloomberg have implemented sophisticated Natural Language Processing (NLP) algorithms to produce earnings summaries, sports reports, and financial stories at scale. For instance, Bloomberg’s Cyborg platform automates financial news generation, significantly increasing reporting speed while allowing human journalists to focus on more complex stories [8]. These AI systems employ machine learning models that can analyze large datasets rapidly, translating complex information into accessible narratives that meet journalistic standards.
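To illustrate the template-driven approach that underlies such automated journalism, the following is a minimal sketch of rule-based Natural Language Generation from structured data. It is not the actual Associated Press or Bloomberg Cyborg pipeline, which involves far more sophisticated models; the company name and figures below are invented for illustration.

```python
# Minimal sketch of template-based Natural Language Generation (NLG):
# a structured earnings record is converted into a short news-style summary.
# The company and figures are invented for illustration only.

def earnings_summary(record):
    """Render one structured earnings record as a readable sentence."""
    change = record["revenue"] - record["prev_revenue"]
    pct = 100.0 * change / record["prev_revenue"]
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{record['company']} reported {record['quarter']} revenue of "
        f"${record['revenue'] / 1e6:.1f} million, which {direction} "
        f"{abs(pct):.1f}% from the previous quarter."
    )

data = {
    "company": "Examplecorp",
    "quarter": "Q2",
    "revenue": 125_000_000,
    "prev_revenue": 118_000_000,
}
print(earnings_summary(data))
```

Because the narrative is assembled deterministically from verified structured fields, this style of generation scales to thousands of reports while keeping factual claims traceable to the source data, which is why it suits routine earnings and sports coverage.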
In addition to automation, AI-powered recommendation engines and personalization algorithms have become fundamental to digital media platforms. Platforms like Netflix, YouTube, and news aggregators utilize deep learning models to analyze user behavior, preferences, and engagement patterns, enabling tailored content delivery. This personalization enhances user retention and satisfaction, which are critical for competitive advantage. Recent studies highlight how AI-driven personalization increases content engagement and fosters long-term audience loyalty, particularly when combined with real-time data processing [9]. Adaptive recommendation systems also assist media organizations in managing content discoverability amid the overwhelming volume of available digital information.
Audience analytics has also been transformed through AI tools that analyze sentiment, trends, and user-generated content. Sentiment analysis algorithms, leveraging advanced NLP techniques, enable media outlets to gauge public reactions to stories and adapt their strategies accordingly. Moreover, AI-driven trend detection tools can identify emerging topics faster than traditional methods, providing journalists with timely insights to cover breaking news or develop feature stories. This capacity for real-time analysis facilitates more responsive journalism and effective content planning [10].
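The sentiment-analysis idea described above can be sketched in its simplest, lexicon-based form. Production newsroom tools rely on trained language models rather than word lists; the tiny vocabularies below are invented purely to show the scoring mechanic.

```python
# Minimal sketch of lexicon-based sentiment scoring, the simplest form of
# the NLP sentiment analysis used for audience analytics. Real systems use
# trained language models; these tiny word lists are illustrative only.

POSITIVE = {"praise", "success", "support", "trust", "growth"}
NEGATIVE = {"criticism", "failure", "outrage", "distrust", "decline"}

def sentiment_score(text):
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

comments = [
    "Widespread praise and support for the new investigation.",
    "Public outrage and distrust after the report's failure.",
]
for c in comments:
    print(f"{sentiment_score(c):+.2f}  {c}")
```

Aggregating such scores over thousands of reader comments is what lets an outlet gauge the reaction to a story in near real time, though lexicon methods miss negation and sarcasm, which is why modern tools replace them with learned models.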
These curation and recommendation systems typically employ collaborative filtering alongside deep learning models to decipher intricate patterns within user data, thereby optimizing content delivery and increasing the likelihood of audience satisfaction. Platforms such as Facebook apply the same techniques to assemble personalized content feeds, and the resulting gains in engagement and retention are critical metrics for media organizations competing in saturated digital markets [11].
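A minimal sketch of user-based collaborative filtering may make the mechanism concrete. Real platforms run deep models over enormous interaction logs; the toy ratings matrix, user names, and item names below are invented for illustration.

```python
# Minimal sketch of user-based collaborative filtering: recommend items a
# user has not seen, weighted by how similar other users' tastes are.
# The ratings, users, and items are invented for illustration only.
import math

ratings = {  # user -> {item: rating}
    "ana":  {"doc_film": 5, "news_quiz": 3, "sports_show": 1},
    "ben":  {"doc_film": 4, "news_quiz": 3, "sports_show": 1, "cook_show": 4},
    "cara": {"sports_show": 5, "cook_show": 2},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    nu = math.sqrt(sum(u[i] ** 2 for i in shared))
    nv = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (nu * nv)

def recommend(user):
    """Rank unseen items by similarity-weighted ratings from other users."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for item, r in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana"))  # ana's unseen items, best match first
```

Deep-learning recommenders generalize this idea by learning dense embeddings for users and items instead of comparing raw rating vectors, which is what allows them to capture the "intricate patterns" the literature describes.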
Despite these opportunities, the deployment of AI in media is accompanied by growing ethical concerns and practical challenges. The proliferation of deepfake technology and synthetic media raises questions about authenticity and misinformation, especially as AI-generated content becomes more sophisticated. Studies have demonstrated that deepfakes can be used maliciously to spread false information, eroding public trust in media [12]. Furthermore, issues of algorithmic bias persist, where AI systems may inadvertently reinforce stereotypes or unfairly marginalize certain groups, complicating efforts to uphold journalistic objectivity and diversity.
Transparency and accountability are increasingly emphasized in recent discourse on AI ethics in media. Organizations are urged to adopt transparent criteria for AI decision-making and to implement oversight mechanisms to mitigate bias. Additionally, policymakers and industry stakeholders are exploring regulatory frameworks to ensure responsible AI use, emphasizing the importance of ethical standards and human oversight in automated content production [13].
Beyond the deepfake and bias concerns already noted, AI deployment raises practical risks rooted in training data: datasets may inadvertently reflect existing biases, leading to skewed content recommendations or automated reporting that lacks objectivity [14]. The potential for AI to automate not only benign tasks but also sensitive editorial decisions therefore necessitates transparent practices and robust regulatory oversight.
In conclusion, AI's role in media organizations has expanded rapidly post-2020, offering significant benefits in automation, personalization, and analytical capacity. However, these advancements must be balanced with ethical considerations and safeguards to uphold trust and integrity in journalism. The ongoing development of responsible AI frameworks and regulatory measures will be crucial in shaping the future of AI in media.
Statement of the Study Problem
The understanding of AI is crucial in determining how effectively media professionals can leverage its capabilities. AI literacy involves not only the ability to understand and use AI technology but also to comprehend its applications, ethical implications, and social consequences, allowing individuals to make informed decisions. Furthermore, media professionals must possess the ability to evaluate the credibility of information and actively participate in discussions about AI's impact on society. As Eslami et al. (2019) point out, understanding AI's complex algorithms is challenging, particularly for those with little or no experience with the technology. This lack of understanding may hinder the effective use of AI, especially when media professionals must make critical decisions [15]. In some instances, over-reliance on AI could lead to illogical or even misleading decisions, as seen in certain policy formulations [16].
This gap in understanding is particularly relevant in the media industry, where the ethical use of AI and its potential societal consequences must be carefully considered. The widespread adoption of AI has introduced a significant divide in media professionals’ understanding of how to navigate this technological shift, which highlights the need for greater awareness of AI’s potential uses and its ethical implications in media practices. This study aims to fill this gap by assessing the extent of knowledge of AI among media professionals in Egypt and selected European countries, focusing on their awareness of AI’s applications, ethical considerations, and social impacts, as well as the challenges they face in using AI in their work.
Significance of the Study
The significance of this study is multifaceted. First, AI technologies have become a prevalent trend in various sectors worldwide, and their integration into media organizations is reshaping industry practices. By examining the extent to which AI is adopted by media organizations, the study sheds light on how media professionals in Egypt and several European countries are engaging with AI technologies. This exploration provides critical insights into the challenges faced by media professionals in navigating this technological shift. Furthermore, the findings of this study can contribute to the broader conversation on the ethical and social implications of AI, particularly in the context of journalistic integrity. As an exploratory study, the research paves the way for future studies that can build upon its findings and further investigate the evolving role of AI in media practices.
Objectives of the Study
This study seeks to provide an in-depth understanding of how media professionals in Egypt and Europe are engaging with AI technologies, with a particular focus on their knowledge, ethical considerations, and the challenges they face. As AI continues to transform the media landscape, understanding its impact on the profession is crucial for ensuring that AI tools are used responsibly and in ways that preserve the integrity of journalism. Through this research, the study contributes to the ongoing discourse on the intersection of AI and media, offering valuable insights into the future of journalism in the age of AI. Therefore, the study aims to achieve the following objectives:
• To assess the extent of knowledge of AI in media institutions in Egypt and selected European countries.
• To examine the extent of media professionals' use of AI technology in media organizations in Egypt and Europe.
• To explore the perspectives of Egyptian and European media professionals regarding the use of AI applications in their media practices.
• To identify the most significant challenges faced by media organizations in utilizing AI technologies.
• To examine the ethical and social impacts of AI usage in the media, focusing on its effects on journalistic integrity and societal concerns.
Review of Previous Studies
This section examines relevant literature in the field of Artificial Intelligence (AI) in journalism and media organizations. The review is organized into two main areas of focus: (1) the use of AI in journalism and newsrooms, and (2) the impact of AI on the future of journalism.
Use of AI in Journalism and Newsrooms
Recent studies have explored how AI technologies are being integrated into journalistic practices, with particular attention to the benefits and challenges posed by AI adoption in newsrooms. One common theme in these studies is the mixed sentiment among media professionals regarding the potential of AI to enhance productivity and creativity, while also expressing concerns about its ethical implications, accuracy, and potential for job displacement. These concerns reflect a broader anxiety about AI's influence on the integrity of journalistic work.
Brigham et al. (2024) conducted a study investigating the use of large language models (LLMs) in news production. Their analysis, based on interactions between journalists and LLMs, found that journalists often use AI tools to generate content with minimal human intervention, providing sensitive materials to AI models before publication [17]. This practice highlights the need for responsible AI use, particularly when AI-generated content is employed in a journalistic context.
Similarly, Cools et al. (2024) examined the use of generative AI tools, such as ChatGPT, in newsrooms across the United States [18]. Their study revealed a cautious optimism among journalists, who acknowledged the potential of AI tools while advocating for clear editorial guidelines to govern their use. This sentiment was echoed by Fletcher & Nielsen (2024), who conducted a multinational study of 134 journalists from Europe and the United States [19]. Their findings suggested that AI is increasingly used to automate news writing, enhance data analysis, and personalize content. However, concerns were raised about the reduced nuance and context in AI-generated news, as well as the emergence of hybrid "journalist–programmer" roles, which underscored the need for AI literacy among journalists.
A study by Hafied et al. (2024), focusing on journalists in Denmark and the Netherlands, found that while generative AI tools like ChatGPT and Bard offered significant potential to enhance creativity and efficiency, journalists expressed apprehension about ethical concerns, the accuracy of AI-generated content, and the threat of job displacement [20]. Similar concerns emerged in an examination of AI use in the media sector that highlighted skepticism regarding AI's reliability in news production [21]. This skepticism underscores the importance of developing ethical frameworks to address the challenges AI poses to journalistic practices [22]. The need for such frameworks is further emphasized by studies highlighting ethical issues such as bias, transparency, and the accountability of AI systems in newsrooms [23,24].
Impact of the Use of AI on the Future of Journalism
The impact of AI on the future of journalism is a central concern for scholars exploring the evolving role of technology in media. A few studies have examined the broader implications of AI for the journalistic profession, focusing on its influence on education, editorial independence, and the workforce.
Johnson and Davis (2024) conducted a study on the incorporation of AI-driven tools into journalism education [25]. Their findings revealed that AI tools, particularly those used in data journalism courses, improved students' analytical skills and their ability to interpret complex datasets. However, the study also pointed to challenges related to preparing future journalists for an AI-integrated newsroom environment. Similarly, Fletcher (2021) highlighted concerns among media executives and journalists about maintaining journalistic integrity in the face of increasing reliance on AI tools [26]. These concerns were shared by Abu et al. (2024), who stressed that ethical issues such as bias, privacy, and transparency remain critical challenges in the integration of AI into newsrooms [27]. The study also emphasized the need for best practices in AI implementation, which involve careful planning and ongoing evaluation to address these concerns.
Wilczek et al. (2024) further explored the challenges of integrating AI into media organizations, noting concerns about the costs of implementation, the technical expertise required, and the potential impact on editorial independence [28]. Job displacement, a recurring theme in the literature, was also highlighted by Hafied et al. (2023) and Saheb et al. (2024), who examined the social implications of AI in the media sector [29,30].
In another important study, De-Lima and Ceron (2021) focused on the use of AI in the news industry, examining three key areas: machine learning, computer vision, and automated planning and scheduling [31]. Their study found that AI technologies are increasingly used to boost public engagement, optimize business strategies, and manage user-generated content. However, they also noted that natural language processing (NLP) models are less frequently used in newsrooms, primarily due to language barriers and the need for more straightforward instructions.
Newman (2024) analyzed trends shaping the future of journalism, particularly AI's disruptive potential and its implications for newsrooms, media business models, and audience behavior [32]. His study concluded that AI’s rapid adoption will challenge media sustainability, particularly as AI-driven tools redefine content creation and distribution. He also recommended that media organizations prioritize maintaining trust amidst growing concerns about misinformation and the ethical implications of AI.
Summary of Key Findings from Previous Studies
The literature reviewed reveals that AI applications in media organizations are increasing rapidly, with potential benefits including enhanced editorial accuracy, increased efficiency, and the automation of content creation. AI-driven tools such as automated fact-checking systems and grammar-checking algorithms offer the possibility of improving content quality while streamlining workflow. However, a recurring concern across studies is the ethical challenges posed by AI, particularly regarding accuracy, transparency, and the potential for bias. Job displacement is another significant issue raised by many scholars, as media professionals worry about the impact of AI on employment within the industry.
Furthermore, most of the studies reviewed rely on qualitative methodologies, particularly interviews with media professionals, to assess the use and perceptions of AI in newsrooms. These studies highlight the complexity of integrating AI into journalistic practices and emphasize the need for careful ethical considerations and ongoing evaluation to address the challenges AI presents to the media sector.
To conclude, the review of previous studies highlights the growing significance of AI in journalism and media organizations, as well as the challenges and concerns that accompany its adoption. While AI offers substantial benefits in terms of productivity and creativity, it also raises important ethical, social, and economic issues that must be addressed to ensure responsible and transparent use in journalistic practices. As AI continues to shape the future of journalism, further research is needed to explore its long-term impact on the industry and the workforce.
Theoretical Framework
This study adopts an integrated theoretical framework combining Rogers' Diffusion of Innovations Theory (2003) with Hallin and Mancini's Comparative Media Systems Theory to explore the adoption of Artificial Intelligence (AI) within media institutions across Egypt and selected European countries [33]. Rogers' Diffusion of Innovations Theory offers a valuable lens to investigate how AI technology is perceived, adopted, and integrated into media workflows. The theory emphasizes five key attributes that influence the adoption process: relative advantage, compatibility, complexity, trialability, and observability. Rogers posits that the innovation adoption process unfolds in five stages: knowledge, persuasion, decision, implementation, and confirmation. This framework facilitates the analysis of varying levels of AI awareness, skillsets, and usage among media professionals in both regions, alongside the challenges and ethical concerns associated with the integration of AI into journalistic practices.
Complementing this theoretical perspective, Hallin and Mancini's Comparative Media Systems Theory provides a broader understanding of the sociopolitical and cultural contexts that shape AI adoption in journalism. By examining structural variables such as media regulation, journalistic autonomy, professional norms, and market dynamics, this theory illuminates how external factors influence the implementation and use of AI within media organizations. The integration of both Rogers' and Hallin and Mancini's theories offers a comprehensive framework that accounts for both the internal dynamics of AI adoption and the external contextual factors that affect its integration. This dual-theoretical approach enables a nuanced comparative analysis of experts' perceptions and the evolving role of AI in contemporary journalism across different geopolitical settings.
Methodology
This study employs a qualitative research approach, which is particularly effective for gaining an in-depth understanding of the phenomenon under investigation. Qualitative research methodologies are increasingly favored in media studies as they provide rich, detailed insights into the underlying reasons and motivations behind the observed phenomena, distinguishing them from quantitative approaches [34]. The qualitative nature of this study facilitates the exploration of participants' experiences, perceptions, and attitudes, allowing for a nuanced examination of the adoption of Artificial Intelligence (AI) in media organizations.
To gather primary data, the study utilizes in-depth interviews, a key qualitative research technique that allows for a comprehensive exploration of participants' perspectives. A purposive sample of 32 media professionals and experts was selected for this study, with 19 participants from Egypt and the remaining 13 from several European countries, including Spain (5), Portugal (3), Finland (3), and Italy (2). Data collection took place during November and December 2024, primarily through Zoom interviews, with some conducted via telephone for respondents located in Egypt. On average, each interview lasted approximately 30 minutes. All interviews were audio-recorded with the consent of the participants, transcribed verbatim, and subsequently analyzed.
The interview protocol was designed to address five key themes:
• Knowledge and understanding of artificial intelligence.
• Awareness of the significance of AI applications in media work.
• The importance of acquiring AI-related skills.
• Potential ethical concerns and challenges associated with the use of AI.
• Strategies for mitigating the negative impacts of AI adoption in media organizations.
These themes provided a structured framework for investigating the diverse perspectives of media professionals on the role of AI in their work, as well as the challenges and opportunities they perceive in its integration into media practices.
Findings and Discussion
This section presents the findings of the study, which examines the integration of Artificial Intelligence (AI) in media organizations, with a focus on the perspectives of media professionals across Egypt and several European countries. The findings are discussed in relation to the key themes identified in the in-depth interview guide.
Knowledge and understanding of AI
A critical factor influencing the adoption of AI in media organizations is the level of knowledge and awareness among journalists and editorial staff. The results of interviews with Egyptian media experts and professionals reveal that a significant gap exists in the understanding of AI's potential applications in journalism. One editor-in-chief reported that “more than half of the journalists in his organization have modest levels of awareness regarding AI and its applications,” yet they are open to integrating these technologies into their work. Several respondents echoed this sentiment, acknowledging that while their understanding of AI is basic, they are committed to expanding their knowledge in the future.
Additionally, there is a noticeable apprehension among some Egyptian journalists about the potential for AI to replace human roles in journalism, with one expert noting that “there is fear among some journalists of losing their jobs due to the rapid spread of AI.” However, junior media professionals exhibited less concern, viewing AI more as a tool that complements rather than threatens their work, which aligns with the perspective of AI experts who argue that AI technologies may create new job opportunities in the sector.
In Europe, responses varied, with media professionals in Portugal and Finland displaying a higher degree of awareness and understanding of AI applications in media. Portuguese respondents recognized the growing importance of AI in journalistic workflows but noted a lack of necessary AI skills among journalists. One respondent emphasized the importance of professional training programs to address this skill gap. In contrast, Finnish media professionals exhibited a more advanced understanding of AI and its role in enhancing journalistic practices. In Spain, while there was a general recognition of AI’s significance, the depth of knowledge varied significantly across organizations, with larger institutions showing more advanced understanding than smaller ones. Spanish universities and media companies have increasingly partnered with technology firms to improve AI literacy, signaling a growing commitment to integrating AI into the media industry.
Awareness of the Importance of AI Applications in Professional Practices
The results of the interviews with Egyptian media professionals revealed a consensus that there is insufficient awareness of the importance of AI in media work. While there is recognition of AI's ability to automate certain tasks—such as the generation of routine content—there remains an understanding that human input is irreplaceable in many areas of journalism. Notably, several experts pointed out that AI could have positive applications beyond the media sector, such as in critical situations (e.g., during the COVID-19 pandemic) or in conflict zones where AI-assisted technologies could help safeguard journalists and other professionals.
Conversely, European respondents, particularly from Mediterranean countries (Spain, Italy, and Portugal), acknowledged the importance of AI, particularly in automating routine media tasks. Spanish respondents identified four key areas where AI is making a significant impact in media production: content automation, data analysis, personalized news delivery, and fact-checking. These applications enhance efficiency, support investigative journalism, and help combat misinformation. However, respondents also expressed concerns about the ethical implications of AI, such as the potential for filter bubbles and polarization resulting from algorithmic recommendations.
In Finland, media unions have introduced guidelines for AI usage, reflecting a more structured approach to integrating AI into journalistic practices. Finnish journalists generally perceive AI as a supportive tool that aids in the generation of ideas and the gathering of background information, marking a contrast with the more cautious attitudes observed in Egypt.
Media professionals generally have a range of perspectives on the use of AI in their organizations, influenced by factors such as technological benefits, ethical considerations, and industry trends. Here are some common viewpoints:
• Efficiency and Productivity: Many see AI as a tool that can streamline workflows, automate routine tasks (like editing, transcription, headline generation), and enable faster content production, thereby saving time and resources.
• Content Personalization: AI enables more targeted and personalized content delivery, which can enhance audience engagement and satisfaction.
• Data Analysis and Audience Insights: AI tools assist in analyzing large datasets to understand audience preferences, track trends, and optimize content strategies.
• Creativity and Innovation: Some professionals view AI as an aid to creativity, helping generate ideas or produce preliminary drafts, but often emphasize the importance of human oversight.
• Ethical and Trust Concerns: There are concerns about bias, misinformation, and the ethical implications of AI-generated content. Media organizations worry about maintaining journalistic integrity and avoiding manipulation or fake news.
• Job Impact and Skills: Some professionals are cautious about AI replacing certain roles or skills, advocating for retraining and adaptation rather than replacement.
• Regulatory and Legal Issues: Ongoing debates about intellectual property, accountability, and transparency influence attitudes towards AI adoption.
Overall, media professionals tend to see AI as a powerful and promising tool that, when used responsibly, can enhance media production and distribution—but they also emphasize the need for ethical guidelines and human oversight.
The Importance of Acquiring AI Skills
A shared conclusion across all countries studied was the unanimous agreement on the importance of acquiring AI-related skills for media professionals. Egyptian respondents emphasized the urgency of incorporating AI training into the professional development of journalists. One expert suggested integrating AI courses into university curricula, recommending that all media students should take a compulsory AI course. This recommendation underscores the recognition that AI literacy is essential for future journalists to remain competitive in an increasingly digital media landscape.
In Europe, similar sentiments were expressed. A respondent from Spain highlighted the necessity of continuous professional development to keep pace with rapidly evolving AI tools. Several professional organizations, including the Federación de Asociaciones de Periodistas de España (FAPE), have started organizing seminars and workshops focusing on AI, data literacy, and algorithmic transparency. These initiatives aim to promote dialogue between journalists, media managers, and technologists, fostering a more informed and ethical approach to AI integration in the media.
One may conclude from the interviewees’ comments that the integration of artificial intelligence in news organizations is becoming increasingly essential, as it improves efficiency, accuracy, and audience engagement in a rapidly evolving digital environment. AI technologies can automate time-consuming tasks such as transcribing interviews, fact-checking, and curating personalized content for readers, allowing journalists to focus on in-depth reporting and analysis. Additionally, AI-powered tools can detect breaking news in real-time from vast data sources, enabling quicker and more comprehensive coverage. By leveraging natural language processing and machine learning, newsrooms can also identify trends, uncover hidden insights, and combat misinformation more effectively. Overall, AI empowers news organizations to deliver timely, reliable, and relevant information to their audiences while adapting to the evolving demands of modern journalism.
Potential Ethical Concerns and Challenges Brought by AI
The integration of AI in media practices introduces significant ethical concerns, as highlighted by interviewees from both Egypt and Europe. In Egypt, the need for human oversight in AI-driven processes was repeatedly emphasized. Respondents underscored that AI-generated content must be carefully reviewed to ensure accuracy and adherence to editorial standards. One participant remarked, “A lot of technology means a lot of responsibility,” stressing the importance of maintaining high ethical standards in AI-assisted media production. Similarly, experts pointed out that AI systems could produce inaccuracies, particularly when algorithms do not align with the editorial policies of the organization.
European respondents also voiced concerns about the ethical implications of AI, particularly the potential for AI to perpetuate bias and disinformation. A Portuguese respondent expressed fears that AI technologies, particularly deepfakes, could contribute to the spread of misinformation. These concerns were echoed by other European respondents, who noted that while AI holds great promise for increasing productivity, it also raises significant challenges related to the control and management of the content it generates.
From what the interviewees mentioned, it may be concluded that the use of AI in media organizations can lead to significant drawbacks, including the spread of misinformation through unchecked automated content generation. As AI tools prioritize speed and volume, they may sacrifice accuracy and context, leading to sensationalism or biased reporting. Additionally, reliance on AI can displace human journalists, reducing diverse perspectives and critical investigative work. There is also the risk of deepfakes and manipulated media being used maliciously, decreasing public trust in news sources.
Despite these concerns, there was a broad consensus among respondents that with proper regulation and human oversight, the ethical risks associated with AI in media can be mitigated.
How to Counter the Negatives of Using AI
Experts and media professionals from Egypt and Europe largely agreed on the need for regulatory frameworks to govern the use of AI in media organizations. Many respondents suggested that clear guidelines should be established to ensure responsible AI use, emphasizing the importance of transparency in AI-generated content. Several interviewees advocated for the disclosure of AI as a source in content production, which would promote accountability and maintain trust with audiences.
Moreover, interviewees from both Egypt and Europe highlighted the importance of ongoing education and collaboration between media professionals and AI technologists. By fostering a culture of continuous learning and dialogue, media organizations can mitigate the negative consequences of AI integration and ensure that the technology is used ethically and effectively.
It may be concluded from what the interviewees mentioned that countering the negative impacts of AI in media organizations requires a multi-faceted approach. First, transparency should be prioritized by clearly disclosing AI-generated content and ensuring human oversight to maintain accuracy and ethical standards. Second, media organizations should invest in media literacy programs to educate both employees and audiences about AI's capabilities and limitations, fostering critical engagement with automated content. Third, ethical guidelines should be established to address biases, privacy concerns, and misinformation, aligning AI use with journalistic integrity. Finally, collaboration between technologists and journalists should be promoted to design AI tools that enhance, rather than replace, human judgment, creativity, and accountability in storytelling. By balancing innovation with responsibility, media organizations can harness AI's potential while mitigating its risks.
To sum up, the findings of this study reveal a nuanced picture of AI integration in media organizations across Egypt and Europe. While there is widespread recognition of AI’s potential to enhance journalistic practices, significant disparities in knowledge and awareness persist, with Egyptian media professionals generally exhibiting lower levels of understanding compared to their European counterparts. Despite these challenges, there is a broad consensus on the importance of acquiring AI-related skills and implementing ethical guidelines to ensure responsible AI use. Moving forward, it is essential for media organizations to invest in AI training, establish clear regulatory frameworks, and foster collaboration between journalists and technologists to fully harness the benefits of AI while addressing its ethical implications.
Conclusion, Recommendations, and Directions for Future Research
The integration of Artificial Intelligence (AI) into newsrooms has precipitated a fundamental transformation in the media landscape. This technological shift has markedly improved efficiency and productivity, revolutionizing the processes of news gathering, production, and dissemination. AI-powered tools enable media organizations to streamline routine tasks, thus allowing journalists to concentrate on interpretive, investigative, and analytical reporting. The utilization of data mining and algorithmic tools facilitates the uncovering of hidden narratives and enables more responsive engagement with societal issues. Furthermore, personalization engines enhance user experience, helping media outlets maintain audience engagement in an increasingly saturated digital environment.
Nevertheless, the adoption of AI is not without its challenges. Disparities in resource availability, uneven levels of technological literacy among journalists, and concerns about job displacement highlight the complexities of AI integration [35,36]. Additionally, regulatory and legal frameworks—particularly in the European Union and the Arab world—remain in flux, necessitating scrutiny of data protection practices, algorithmic transparency, and ethical accountability in AI-driven journalism. The potential for algorithmic bias also raises critical concerns regarding public trust and the integrity of journalistic content.
Ethical considerations must be foregrounded as AI continues to influence editorial processes. The opaque nature of AI decision-making—often described as a "black box"—poses risks in terms of transparency, fairness, and accountability. Media organizations must therefore develop robust ethical guidelines and governance frameworks that ensure AI systems operate in ways that are comprehensible, just, and aligned with journalistic values.
From a societal perspective, the widespread application of AI within media institutions necessitates the establishment of clear ethical standards and accountability mechanisms. Editorial transparency, particularly in AI-assisted decision-making, must be emphasized. Moreover, fostering dialogue with civil society, policymakers, and audiences is essential to building consensus around the responsible use of AI and maintaining journalism's vital role in democratic societies.
To support the responsible and equitable adoption of AI in the media sector, the following recommendations are proposed:
• Enhancing AI Literacy: It is imperative to promote AI literacy among audiences to mitigate the risks associated with misinformation and overreliance on AI-generated content.
• Comprehensive Training Programs: Continuous professional development for journalists, editors, and media personnel is essential to ensure a nuanced understanding of AI’s capabilities and limitations.
• Cross-sector Collaboration: Establishing partnerships between academia, technology developers, and media organizations can foster innovation, knowledge exchange, and the development of best practices.
• Algorithmic Transparency: News organizations should adopt transparent frameworks explaining how AI systems influence content creation and personalization, thereby reducing concerns about bias and manipulation.
• Ethical Governance: Existing codes of journalistic ethics should be updated to incorporate guidelines on AI usage, with clear delineations of responsibility and accountability in cases of error or ethical breaches.
• Bridging the Digital Divide: Efforts must be made to ensure equitable access to AI technologies and prevent further marginalization of vulnerable communities.
• Curriculum Integration: Universities should introduce AI education, particularly within media and communication programs, to prepare future journalists for an AI-augmented media landscape.
• Public Awareness Campaigns: Promoting a broader cultural understanding of AI and its implications will foster responsible consumption and critical engagement with AI-mediated content.
• Further Empirical Research: There is a pressing need for continued interdisciplinary research exploring the applications, implications, and challenges of AI in media and other sectors of society.
In conclusion, AI is reshaping journalistic practices in profound ways. While it offers substantial benefits—from content personalization to enhanced editorial accuracy—it also introduces a range of ethical, technical, and social concerns that must be carefully managed. As AI capabilities continue to evolve, the central question is not whether journalists should adopt AI, but how it can be implemented ethically, effectively, and inclusively to serve the public interest. Balancing innovation with accountability will be critical in ensuring that AI enhances, rather than undermines, the core values of journalism.
Future research should continue to investigate the broader societal impacts of AI, both within and beyond the media industry. Expanding the scope of inquiry to encompass diverse domains will provide a more comprehensive understanding of how AI influences public life and inform the development of frameworks that guide its responsible use.
References
- Dorr, K. N. (2016). Mapping the field of algorithmic journalism. Digital Journalism, 4(6), 700–722.
- Underwood, C. (2017). Automated journalism: AI applications at New York Times, Reuters, and other media giants. Emerj.
- Goni, Md. A., & Tabassum, M. (2020). AI in journalism: Is Bangladesh ready for it? A study on journalism students in Bangladesh. Athens Journal of Mass Media and Communications, 6(4), 1–15.
- Jamil, S. (2021). AI and journalism practice: The crossroads of obstacles and opportunities for the Pakistani journalists. Journalism Practice, 15(10), 1400–1422.
- Zhai, Y., Jiaqi, Y., Hezhao, Z., & Wei, L. (2020). Tracing the evolution of AI: Conceptualization of AI in mass media discourse. Information Discovery and Delivery, 48(3), 245–255.
- McCarthy, J., Minsky, M. L., & Shannon, C. E. (1955). A proposal for the Dartmouth Summer Research Project on AI. AI Magazine, 27(4), 12–14.
- Fjelland, R. (2014). Why general AI will not be realized. Humanities and Social Sciences Perspective on Algorithmic Media Production and Consumption, Communication Theory, 24(3), 204–213.
- Feldman, R., & Sango, R. (2021). Automating Financial Journalism: The Role of AI in Transforming News Production. Digital Journalism, 9(4), 517–534.
- Kumar, S., et al. (2022). Personalization and Engagement in Media: The Impact of Deep Learning Algorithms. Computers in Human Behavior Reports, 6, 100174.
- Wang, Y., & Liu, Z. (2021). Real-Time Trend Detection Algorithms for News Media. Journal of Big Data, 8, 81.
- Ghahramani, Z., et al. (2019). Deep learning for personalized content recommendation. IEEE Transactions on Neural Networks and Learning Systems, 30(7), 1824–1837.
- Chesney, R., & Citron, D. K. (2020). Deepfakes and the New Disinformation War. Foreign Affairs, 99(1), 28–36.
- Deuze, M., & Witschge, T. (2020). Beyond Digital: How Media Organizations Can Rethink Their Strategies for the Post-Pandemic Future. Journal of Media Innovation, 7(1), 45–63.
- Crawford, K., & Paglen, T. (2019). Excavating AI: The politics of images in machine learning training sets. The New Inquiry.
- Eslami, M., Vaccaro, K., Lee, M. K., Elazari Bar On, A., Gilbert, E., & Karahalios, K. (2019). User attitudes towards algorithmic opacity and transparency in online reviewing platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–13).
- Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., et al. (2016). AI and life in 2030: One hundred-year study on artificial intelligence: Report of the 2015–2016 study panel. Stanford University.
- Brigham, N., Gao, C., Kohno, T., Roesner, F., & Mireshghallah, N. (2024). Breaking news: Case studies of generative AI's use in journalism. arXiv.
- Cools, H., & Diakopoulos, N. (2024). Uses of generative AI in the newsroom: Mapping journalists’ perceptions of perils and possibilities. Journalism Practice, 1–19.
- Fletcher, R., & Nielsen, R. (2024). What does the public in six countries think of generative AI in news? Reuters Institute for the Study of Journalism.
- Hafied, H., Irwanto, I., & Latuheru, R. (2024). Digital newsroom transformation: A systematic review of the impact of AI on journalistic practices, news narratives, and ethical challenges. Journalism and Media, 5, 1554–1570.
- Aissani, R., Abdallah, R., Taha, S., & Al Adwan, M. N. (2023). AI tools in media and journalism: Roles and concerns. IEEE Access, 19, 19–26.
- Porlezza, C., & Schapals, A. (2024). AI ethics in journalism: An evolving field between research and practice. Emerging Media, 2, 1–12.
- Chen, X., & Lee, S. (2024). Domain-specific evaluation strategies for AI in journalism. AI & Ethics, 9(1), 77–92.
- Rostamian, S., & Moradi, M. (2024). AI in broadcast media management: Opportunities and challenges. AI and Tech in Behavioral and Social Sciences, 2(3), 21–28.
- Johnson, M., & Davis, T. (2024). AI and the impact on journalism education. Journalism and Mass Communication Educator, 79(3), 310–325.
- Fletcher, R. (2021). AI and the future of news. Reuters Institute for the Study of Journalism.
- Abu, S., Samy, B., & Nasser, A. (2024). AI in digital media: Opportunities, challenges, and future directions. International Journal of Academic and Applied Research (IJAAR), 8, 1–10.
- Wilczek, B., Haim, M., & Thurman, N. (2024). Transforming the value chain of local journalism with artificial intelligence. AI Magazine, 45(2), 200–211.
- Saheb, T., Sidaoui, M., & Schmarzo, B. (2024). Convergence of AI with social media: A bibliometric & qualitative analysis. Telematics and Informatics Reports, 14, 100146.
- De-Lima-Santos, M.-F., & Ceron, W. (2021). AI in news media: Current perceptions and future outlook. Journalism and Media, 3(1), 13–26.
- Newman, N. (2024). Journalism, media, and technology trends and predictions 2024. Reuters Institute for the Study of Journalism.
- Hallin, D. C., & Mancini, P. (2004). Comparing media systems: Three models of media and politics. Cambridge University Press.
- Tayie, S. (2019). Media research. Dar El Nahda Publishing House.
- Cai, M., & Sachita, N. (2023). Motivations, goals, and pathways for AI literacy for journalism. In CHI '23 Workshop on AI Literacy: Finding Common Threads between Education, Design, Policy, and Explainability, April 2023.
- Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.
- Simon, F. (2024). AI in the news: How AI retools, rationalizes, and reshapes journalism and the public arena. Tow Center for Digital Journalism, Columbia University.