
Advancing Media Production IV

Professional Use of Generative Artificial Intelligence in Media

September 30, 2024


In continuation of our previous article, we delve into the technical aspects of the risks associated with using generative artificial intelligence (AI) in newsrooms. Creating and producing content is a complex process; it cannot be reduced to pushing a button and presenting the results to the public without adhering to professional standards and a strict code of media ethics.

This discussion will focus on three main risks of employing generative AI in the daily production of content, emphasizing the need for caution:

Bias

The first major challenge in using popular generative AI tools is bias. When searching for information, recalling a specific date, or inquiring about historical events such as 20th-century wars or regional conflicts, the content generated by AI is inherently shaped by the data on which these models were trained. That data carries the biases, viewpoints, and ideas of the people who produced and selected it, so the generated content reflects those biases, which can manifest in various forms and degrees.

AI tools might retrieve information from biased or unprofessional sources, such as blogs or digital platforms that propagate one-sided views on controversial topics. This issue arises not only in politics but also in religion, cultural beliefs, and other sensitive areas. As users, we must approach these tools with a critical mindset and an awareness of their inherent biases.
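One practical way for an editor to surface such framing bias is to pose the same question under different framings and compare the answers before anything is published. The following is a minimal sketch of that exercise, assuming access to the OpenAI Python client and an API key; the model name, question, and framings are illustrative, not a prescribed workflow.

```python
# Minimal sketch of a framing-bias probe: ask the same factual question
# from two opposing framings and compare the answers side by side.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable; model name is illustrative.
from openai import OpenAI

client = OpenAI()

QUESTION = "Summarize the causes of the conflict in region X."
FRAMINGS = [
    "You are answering for readers who support side A. " + QUESTION,
    "You are answering for readers who support side B. " + QUESTION,
]

answers = []
for prompt in FRAMINGS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    answers.append(response.choices[0].message.content)

# An editor reads both answers: if emphasis, sourcing, or omissions shift
# with the framing, the output should not be published without human review.
for framing, answer in zip(FRAMINGS, answers):
    print("PROMPT:", framing)
    print("ANSWER:", answer)
    print("-" * 40)
```

If the two answers diverge in tone, emphasis, or omitted facts, that divergence itself is evidence that the tool's output cannot be taken as neutral ground truth.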

When asked about ongoing international conflicts or controversial topics related to beliefs, religions, and public policies, AI tools may respond without objectivity. Even when these responses are politely worded, they often rest on unspecified sources. Some chatbots provide search results, but these are insufficient for building a news story or crafting a significant report on current affairs. Human intervention is necessary to verify the sources and ensure the reliability of the information.

It is essential to approach AI-generated content with skepticism and not rely solely on these general, potentially biased responses. Verification through reliable sources is imperative to maintain journalistic integrity.

Misinformation and Text Manipulation

Chatbots and generative AI tools often produce a particularly dangerous form of misinformation: it reaches users without their awareness, usually in a confident tone that reassures them of the response's accuracy. As artificial intelligence researcher Gary Marcus noted in a television investigation for the renowned program 60 Minutes, this problem is prevalent with tools like ChatGPT.

In a month-long self-experiment with the GPT-3-based version of ChatGPT, one user inquired about the rulers of Egypt. Among the responses was the claim that a woman named Khadija El Adl ruled Egypt from 1894 to 1907. This is incorrect: no woman has ruled Egypt in modern history. Upon further questioning about Khadija El Adl, the chatbot went on to state, just as erroneously, that she had been editor-in-chief of the newspaper Al-Masry Al-Youm, a position no woman has held. Fabrications of this kind are what researchers call hallucinations, and they pose a real risk to anyone who lacks the knowledge or the means to verify the facts.

When asked about serious matters like bomb-making, the chatbot diverted attention with the false claim that penguin urine constitutes 3% of Antarctica's rivers. Leslie Stahl, the renowned host of 60 Minutes, confronted Microsoft's Vice President about this misleading claim, pointing out that penguins do not urinate and thus questioning the validity of such diversions.
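A basic editorial safeguard against such hallucinations is to check every name and claim a chatbot produces against an independent reference before it enters a story. Below is a minimal sketch of that habit, assuming the requests library and Wikipedia's public search API; the query and the zero-hit rule are illustrative, and a real newsroom would also consult archives and primary sources.

```python
# Minimal sketch: check whether a name produced by a chatbot leaves any
# trace in an independent reference source before it is used in a story.
# Uses Wikipedia's public search API via requests (pip install requests).
import requests

def wikipedia_hits(query: str) -> int:
    """Return the number of Wikipedia search hits for a query."""
    response = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": query,
            "format": "json",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["query"]["searchinfo"]["totalhits"]

# The chatbot's fabricated claim from the experiment described above.
claim = '"Khadija El Adl" ruler of Egypt'
if wikipedia_hits(claim) == 0:
    print("No independent trace of this claim: treat it as a hallucination.")
else:
    print("Matches found: still verify against the underlying sources.")
```

A zero-hit result is not proof of fabrication, and a match is not proof of truth; the point is that the machine's output is a lead to be checked, never a source in itself.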

While the capabilities of newer chatbot versions have improved, particularly the latest version of ChatGPT, they still cannot fully reassure journalists or media professionals seeking truth and accuracy in their content. In this context, the writer Mark Borzo, in an article titled "I Asked 'ChatGPT' About Roald Dahl and It Lied to Me," stresses that the accuracy of ChatGPT depends on the data used in its training and the algorithms it employs, and that it should always be used with a critical eye and supplemented with other research methods. Borzo argues that any professional writer will verify facts and check the actual sources of information before including them in their work, yet he warns: "We must not shy away from the facts, because some will write a quote like that without bothering to verify it." This caution is particularly relevant in our Arab societies, where there is often a rush to produce articles based on trending topics.

Deepfake Concerns

The rise of deepfake technology is a concerning development, particularly because the general public on social media often lacks the time, tools, and knowledge to critically evaluate the content they encounter. Recently, a widely circulated image on social media depicted Pope Francis being chased by men in police uniforms. The image, created with generative artificial intelligence, was not real, yet it sparked significant uproar and highlighted how rapidly image manipulation and deepfake technologies are advancing, raising questions about their potential misuse.
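For newsrooms, a first line of defense is to inspect a suspicious image's provenance before republishing it. The sketch below, assuming the Pillow library and an illustrative file name, only reads embedded metadata; stripped or absent metadata proves nothing on its own, and fuller verification would add reverse image search and, where available, C2PA content-credential checks.

```python
# Minimal first-pass provenance check for a suspicious image: inspect its
# embedded EXIF metadata with Pillow (pip install Pillow). Missing metadata
# is only a flag for further checking, never proof of fabrication.
from PIL import Image
from PIL.ExifTags import TAGS

def describe_metadata(path: str) -> None:
    image = Image.open(path)
    exif = image.getexif()
    if not exif:
        print("No EXIF metadata: possibly AI-generated, screenshotted, or stripped.")
        return
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        print(f"{tag}: {value}")

describe_metadata("suspicious_image.jpg")  # illustrative file name
```

Camera make, capture time, and editing-software fields, when present, give a fact-checker concrete threads to pull; their absence shifts the burden to reverse image search and contacting the original source.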

Even when such images are intended as fiction, they can provoke unintended reactions from viewers. This capability of deepfake tools poses significant risks, as they can be exploited for illegal purposes, such as deceiving the public or serving military or political agendas. In June 2024, Pope Francis addressed these dangers in a speech before the G7 summit, warning that artificial intelligence must never be allowed to gain the upper hand over human beings, and that relying on machines for decision-making could condemn humanity to a hopeless future.

Since the advent of chatbots in late 2022, manipulated images have been increasingly circulated worldwide, particularly in conflict zones. These fabrications are often used to exacerbate conflicts or interfere in electoral processes, a pressing concern as many countries face elections through the end of 2024. The controversies between Trump and Harris over fabricated images and videos underscore the urgent need to address these dangers.

Imagine scenarios like Barack Obama playing with Angela Merkel on a beach, or the King of Britain appearing in clown attire at a public event; such fabrications are becoming commonplace. It is imperative for the press to actively confront and debunk these falsehoods, fostering a culture of skepticism in journalism and media that can combat misinformation effectively.