
Advancing Media Production V

Professional Use of Generative Artificial Intelligence in the Media

October 27, 2024


In our previous article, we delved into the various risks associated with employing generative artificial intelligence in media. Understanding these risks is paramount for fostering a rational approach among all newsroom users. Building upon that foundation, this article presents the second part of our analysis, shifting our focus to the risks that directly impact the professional and ethical standards governing journalism and media.

Intellectual Property Rights Violations

In the fast-paced world of newsroom production, journalists gather information from various sources, including human contacts, news agencies, and the internet. However, when it comes to online sources, ethical concerns arise as professional standards are often overlooked. Many journalists fail to properly attribute their sources or respect intellectual property rights when copying and collecting content in different forms.

The advent of generative artificial intelligence (AI) further complicates this issue. AI tools amass vast amounts of information and images without crediting original sources, posing a significant threat to intellectual property rights. This lack of attribution extends to the content generated by AI, which is then used without proper acknowledgment.

Digital platforms that utilize such unattributed or copied content face potential consequences. Search engines prioritize originality in their ranking algorithms, meaning these platforms may see their search result rankings decline. This drop in visibility can lead to decreased website traffic and reduced engagement with their content.

Production companies have long grappled with preserving the intellectual property rights of their products, but the rise of generative artificial intelligence has introduced new challenges. In April 2023, songs generated by AI from simple text prompts made headlines, prompting industry giants like Spotify and Universal to take action to protect their rights.

A notable incident occurred when Spotify and Apple Music removed a song from their libraries that was created through artificial intelligence, mimicking the voices of artists Drake and The Weeknd. Universal Music Group, representing these artists, asserted that using their voices without permission constitutes a violation of copyright and broadcasting laws.

The issue of intellectual property rights extends beyond music. On digital media platforms, free music clips from the Facebook and YouTube libraries are readily available, but all other music is fiercely protected. Google and Meta have implemented strict rules to pursue stolen clips and violations of fair use, with penalties as severe as account or channel closure for infringements.

The problem is not limited to audio content. According to NewsGuard, a news review organization, some websites have employed AI chatbots to remix stories published elsewhere, skirting the edge of plagiarism by merely adding source links at the bottom of articles. For instance, Biz Breaking News used AI tools to summarize articles from The Financial Times and Fortune magazine, prefacing each with "3 key points" generated by AI.
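
To illustrate how little effort this kind of remixing takes, consider the minimal sketch below. It assumes the OpenAI Python SDK and an illustrative model name; the source does not specify which tools Biz Breaking News actually used, and the prompt here is invented.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def remix_article(source_text: str) -> str:
    """Compress someone else's published article into '3 key points' --
    the pattern NewsGuard observed, reproduced here only to show its low cost."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system",
             "content": "Summarize the article below as exactly 3 key points."},
            {"role": "user", "content": source_text},
        ],
    )
    return response.choices[0].message.content

A single loop over scraped article URLs would turn this into the automated republishing pipeline described above, at negligible per-story cost.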

Even established technology publishers are not immune to controversy. CNET, for example, published 71 AI-generated reports, of which 41 were found to be copied from other sites, leading to accusations of intellectual property violations.

The scope of this issue encompasses all forms of content, including images and videos created using AI models based on existing works owned by others. As a result, the entire spectrum of digital content is vulnerable to infringement through generative AI models, treating these materials as if they had no rightful owners or creators.

Deficiencies in Language and Structure

Linguistic discipline and narrative construction in chatbot-generated texts often reveal deficiencies and a certain clumsiness, particularly when tasked with creating content about complex issues or phenomena. The resulting text typically requires revision, modification, and linguistic verification, especially in languages like Arabic, where words and vocabulary carry multiple meanings deeply tied to their contextual usage.

Some digital platforms have experimented with publishing ChatGPT-generated articles, often disclosing this fact at the end of the piece. A careful reading of these texts often unveils their stilted structure and hollow wording, necessitating human intervention to infuse vitality and coherence or, in some cases, a complete rewrite.

Recently, a friend shared the title of an important session from a public political event in a certain country, expressing concern over the poor wording of both the title and the body text. Suspecting the content had been machine-generated, I ran it through an AI detection tool, which scored it as 100% ChatGPT-generated, with no human modification. This raises significant questions about the integrity of official public discourse when chatbots are used carelessly, especially by those who lack the linguistic ability to properly express or formulate ideas in writing.
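
The article does not name the tool involved, and commercial detectors keep their methods proprietary. As a rough illustration only, the sketch below shows one widely cited heuristic: measuring how predictable a passage is (its perplexity) under a reference language model such as GPT-2, since machine-generated text tends to score as unusually predictable. The model choice and the comparison logic are assumptions, and a small English-only model like GPT-2 says nothing reliable about Arabic text.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Reference model for the heuristic; any causal language model could stand in.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower values often correlate
    with machine-generated prose."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing the inputs as labels yields mean cross-entropy per token.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Crude comparison: flag the suspect passage if it is markedly more
# predictable than known human-written text of similar length.
suspect_score = perplexity("The session discussed important topics of importance.")
human_score = perplexity("Delegates clashed over the budget, trading barbs for hours.")
print(suspect_score, human_score)

In practice such heuristics are noisy, which is one reason detector verdicts like "100% AI-generated" should themselves be read with caution.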

Content Farms

In early May 2023, a report from the news review group NewsGuard raised concerns about the proliferation of AI-powered news sites. The study identified 49 websites utilizing chatbots to generate content, sparking discussion about the potential for this technology to amplify existing fraud and forgery tactics.

Bloomberg's independent review of these sites revealed a diverse range of topics. Some masquerade as breaking news outlets with credible-sounding names like News Live 79 and the Daily Business Post, while others focus on lifestyle advice, celebrity gossip, or sponsored content. Notably, none of these sites have openly disclosed their use of AI-powered chatbots such as OpenAI's ChatGPT or Alphabet's Google Bard for content creation.

The emergence of these sites coincides with the widespread adoption of AI tools in 2023. According to the report, many appear to be "content farms" – low-quality websites operated by anonymous sources that churn out vast amounts of content to attract advertising revenue. NewsGuard's findings indicate that these sites are globally distributed and publish in multiple languages.

Over half of the identified sites generate revenue through programmatic advertising, where ad space is automatically bought and sold by algorithms. This trend poses particular challenges for Google, whose advertising technology accounts for half of the company's revenue, especially given the potential use of its own AI chatbot, Bard.
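
For readers unfamiliar with the term, the sketch below shows a toy version of the real-time auction that underlies programmatic advertising: automated bidders compete for each ad impression, and in the classic second-price design the winner pays the runner-up's bid. The advertiser names, prices, and floor are invented for illustration; real exchanges add many layers (targeting, fraud checks, and in Google's case a later move to first-price auctions) that are omitted here.

from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    cpm: float  # bid price per thousand impressions, in USD

def run_auction(bids: list[Bid], floor_cpm: float = 0.10) -> tuple[str, float] | None:
    """Second-price auction: highest bidder wins, pays the runner-up's price."""
    eligible = sorted((b for b in bids if b.cpm >= floor_cpm),
                      key=lambda b: b.cpm, reverse=True)
    if not eligible:
        return None  # impression goes unsold
    winner = eligible[0]
    # Winner pays the second-highest bid (or the floor if there is none).
    clearing = eligible[1].cpm if len(eligible) > 1 else floor_cpm
    return winner.advertiser, clearing

# Example: three automated bidders compete for one impression on a content farm.
print(run_auction([Bid("adnet-a", 1.20), Bid("adnet-b", 0.85), Bid("adnet-c", 2.00)]))
# -> ('adnet-c', 1.2)

Because the whole transaction is algorithmic, no human at the ad exchange ever inspects the page the ad lands on, which is precisely what lets content farms monetize at scale.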

NewsGuard's report also highlights instances where chatbots have propagated falsehoods based on existing news stories. A striking example occurred in April 2023, when CelebritiesDeaths.com published a fabricated article titled "Biden Dies, Acting President Harris Delivers 9 a.m. Address." This piece included a fake obituary with invented details about an architect's life and work, underscoring the potential dangers of unchecked AI-generated content in the news ecosystem.

Gordon Crovitz, co-CEO of NewsGuard, emphasized the importance of cautious AI model training for companies like OpenAI and Google to prevent the spread of misinformation. As a former Wall Street Journal publisher, Crovitz warned, "Utilizing AI models known for fabricating facts to create seemingly legitimate news sites is nothing short of a journalistic scam." While OpenAI did not provide an immediate response to the request for comment, they have previously stated that they employ a combination of human reviewers and automated systems to identify and address potential misuse of their model. This approach includes issuing warnings and, in severe cases, implementing user bans.

Google's stance, by contrast, was conveyed directly by spokesman Michael Aciman, who emphasized that the company prohibits ads from appearing alongside harmful or spammy content, or content copied from other sites. Aciman elaborated in a statement: "Our policy enforcement focuses on content quality rather than creation method. We remove or block ads from services where we discover violations."

Following Bloomberg's inquiry, Google took action by removing ads from specific pages on certain websites. In cases of widespread violations, they removed ads from entire websites. The company clarified that content generated by artificial intelligence doesn't inherently violate their advertising policies; instead, they evaluate content based on their existing publishing guidelines. However, Google stressed that using automation, including AI, to manipulate search engine rankings does breach their policies on search result manipulation.

Noah Giansiracusa, an assistant professor of data science and mathematics at Bentley University, offers an intriguing perspective on this issue. He notes, "While these practices aren't new, artificial intelligence has made them easier, faster, and cheaper to implement." Giansiracusa warns that those exploiting this fraudulent method "will continue to experiment and refine their techniques. As newsrooms become more AI-driven and content creation becomes increasingly automated, we risk creating an online information ecosystem of very limited quality."

He further explains the economic shift in these practices: "Previously, this type of manipulation had associated costs, limiting its scale. Now, it has become virtually cost-free for the fraudsters. The elimination of human labor costs has removed a significant barrier, potentially leading to more widespread manipulation of online content."