
Advancing Media Production III

Professional Use of Generative Artificial Intelligence in Media: Standards, Challenges, and Solutions

August 30, 2024


In knowledge professions, there must be standards that form the cornerstone of content production, which cannot be ignored in favor of the allure of new technological advancements. The risks that generative artificial intelligence poses to these standards in journalism and media are significant, especially if technology is adopted without a clear strategy or user guide. Such guidelines should impose a clear methodology on media institutions for integrating generative AI into newsrooms.

Global Perspectives on AI in Newsrooms

Globally and in the Arab world, the use of generative AI is still in its infancy. Yet, discussions about its challenges frequently arise whenever the topic of integrating this technology into media institutions is broached. Although there are safe models for using AI in newsrooms worldwide, generative AI has altered the equation, presenting a greater challenge than previous, well-tested methods employed by many media outlets. With the advancements brought by generative AI, there is a need to explore new integration approaches that align with the media profession's ethical standards and codes.

These integration approaches must extend beyond initial or superficial uses, such as drafting articles with tools like ChatGPT or Google Gemini, publishing dialogues with chatbots, using avatars or virtual broadcasters to read news summaries, or generating synthetic audio for voiceovers.

While we discuss specific strategies, it is crucial to first acknowledge the risks associated with generative AI. Future articles will delve into these risks and conclude with a clear strategy for usage that avoids professional pitfalls or technological missteps.

This article aims to shed light on efforts to understand generative AI through global opinion polls, which attempt to capture its impact from the perspective of media institutions and its role in content production.

In light of the recent emergence of generative artificial intelligence tools and their application in journalism, a poll conducted by the World Association of News Publishers (WAN-IFRA) in May 2023, in collaboration with Schickler Consulting, stands as one of the first global surveys to explore the forms, features, and usage rates of these tools, as well as the concerns they raise in newsrooms.

Published on May 25, 2023, the survey involved more than 100 officials and publishers worldwide. It revealed the weekly usage rates of ChatGPT or similar tools among journalists: 39% of respondents indicated that no more than 5% of their journalists use these tools, while 31% reported a usage rate of 5-15%. Additionally, 8% believed the usage rate to be 15-30%, 6% estimated it to be 30-50%, and 3% noted that more than 50% of their journalists employed such tools. Meanwhile, 13% of the newsroom leaders and officials surveyed said their journalists do not use these tools at all.

The survey further identified the key drivers of generative AI integration in newsrooms: editorial management (32%), technology and data teams (31%), journalists (27%), and others.

Concerns and Regulatory Needs

Regarding concerns about the use of generative AI in journalism, the following issues were highlighted:

1. Accuracy of information and quality of content (85%)

2. Theft and infringement of intellectual property rights (67%)

3. Privacy considerations and protection of personal data (46%)

4. Threat to job security (38%)

Accuracy emerged as the primary concern, emphasizing the importance of maintaining quality and adherence to professional standards in content production. Newsrooms are particularly worried about the potential decline in content accuracy and quality due to generative AI tools. This concern is followed by fears of intellectual property infringement, privacy issues, and the threat to job security in the media industry.

In the Reuters Digital News Report 2024, there is a divide in opinion regarding the influx of AI-generated content. Some believe it could bolster journalism and enhance its visibility, while others fear, as the report highlights, that it might erode public trust in published information, leading to adverse political and social repercussions globally. Media professionals are advocating for worldwide AI regulation, as the potential dangers of its varied applications are urging governments to establish a regulatory framework. This framework would enable media organizations to integrate AI into their newsrooms responsibly, leveraging its benefits while ensuring rational use.

Journalists express concern over the rapid technological advancements that media personnel struggle to fully grasp and manage. This underscores the necessity for developing specialized media models for content production, tailored to the unique needs of each institution, making them distinctive and non-replicable. This is particularly pressing amidst discussions about AI potentially dominating news production by 2026.

According to the report's survey, which involved 300 media leaders from 50 countries, over half of the participants identified news automation as the most significant application of the new technology, followed by content recommendation, and then commercial applications. However, half of the survey participants also view AI-generated content as the greatest threat to the reputation of media institutions if adopted.

Another concern for the media is the impact of AI on search. Many media outlets depend heavily on referral traffic from search engines, and AI-driven changes to search threaten the traffic models of news platforms and media sites.

In today's world, generative search experiences, such as Google's Search Generative Experience (SGE), are rapidly spreading across the Internet. These experiences, along with AI-powered chat models, promise a faster and more intuitive way to access information. This development suggests that users might increasingly bypass the traditional search engines that direct them to news sites and media platforms.

Navigating the Future of AI in Media

Rejecting technology outright is not feasible; instead, we must shape how it is used. Technology should therefore be embraced to enhance media work, increasing both effectiveness and speed. By leveraging technology more broadly, media can reach larger audiences and explore new methods for distributing and customizing content. However, it is crucial to consider scenarios that mitigate risks and address the impact on existing business models in the media industry.

This year marks a pivotal moment in the discussion of generative artificial intelligence's role in media. Clarifying the media's stance on this technology will provide an essential educational framework for media organizations worldwide and serve as a model for media innovation; in the process, the specific contours of AI use in media will be defined. Before that can happen, however, global efforts to regulate AI must resolve the complex legal issues involved, particularly those concerning intellectual property and copyright, as well as the use of media-produced content to train AI models.

Currently, only major international media outlets are negotiating lucrative agreements to use their content and share benefits with technology companies in developing chatbots or integrating AI into newsrooms. Meanwhile, most media organizations worldwide remain excluded from these agreements, highlighting the stark disparity between Western media and the rest of the world. This ongoing technological evolution continues to perpetuate inequality and discrimination.