
Advancing Media Production II

Towards Professional Use of Generative AI in the Media

June 26, 2024


A previous article in this series, "Responsible Use of Generative AI in the Media," addressed the criticism aimed at this technology in the context of its use in the media, following the revolution it sparked in late 2022. This technological revolution, marked by rapid advances, continues to stir debate worldwide, particularly over its integration into content production within newsrooms and media outlets.

This article discusses the view opposing that of those wary of Gen AI tools: a perspective that focuses on the opportunities AI creates and advocates embracing technological determinism. Its proponents consider how generative AI can foster global growth, given the unprecedented pace of digital technological development, which, according to the United Nations, outpaces any innovation in our history. They also stress that it is humanity's choice how to benefit from and manage these technologies, ensuring they are used beneficially rather than harmfully.

In an article published earlier this month on the United Nations website, Jeongki Lim, Assistant Professor of Strategic Design and Management at Parsons School of Design, discusses the response of a popular Gen AI application to a question about how AI can support the United Nations. Lim says, "The result is neither a groundbreaking prediction nor an award-winning movie script, yet. But the speed of improvement of the machine replicating human work is staggering." By the time this article is published, a series of updates and new AI models will have been released by competing companies. Whether we like it or not, we live in a new global landscape shaped by this rapid technological development.

Even in terms of employment, proponents of this technology point to the jobs it will create and the gains in productivity, speed, and efficiency it will bring, even as fears about job losses remain prevalent. A report by the McKinsey Global Institute suggests that up to 800 million workers worldwide could lose their jobs to robotic automation across various fields by 2030.

A Misunderstanding of Artificial Intelligence

Writers and experts in technology and economics align in downplaying the fears surrounding AI, particularly Gen AI, and the uproar it has caused. They nevertheless acknowledge the ethical dilemmas associated with using this technology.

Jean-Gabriel Ganascia, a French computer scientist, professor at Sorbonne University, and fellow of the European Association for Artificial Intelligence, argues in his article "Artificial Intelligence: Between Myth and Reality," published on UNESCO's website, that the success of the term AI is sometimes based on a misunderstanding when it is used to refer to an artificial entity endowed with intelligence, which would then compete with human beings.

According to Ganascia, this concept, which refers to ancient myths and legends, like that of the golem [an image endowed with life], has recently been revived by contemporary figures, including the British physicist Stephen Hawking (1942-2018), American entrepreneur Elon Musk, American futurist Ray Kurzweil, and proponents of what we now call Strong AI or Artificial General Intelligence (AGI). He asserts that this second meaning can only be ascribed to a fertile imagination, inspired more by science fiction than by any tangible scientific reality confirmed through experiments and empirical observations.

Discussing the ethical risks of AI, Ganascia argues that with AI, most dimensions of intelligence ‒ except perhaps humor ‒ are subject to rational analysis and reconstruction using computers. Moreover, machines surpass our cognitive faculties in most fields, raising concerns about ethical risks. These risks fall into three categories: the scarcity of work, as machines can perform tasks previously done by humans; the impact on individual autonomy, particularly regarding freedom and security; and the potential for humanity to be overtaken by more "intelligent" machines. However, if we examine reality, we see that human work is not disappearing; on the contrary, it is evolving and demanding new skills. Similarly, an individual’s autonomy and freedom are not inevitably undermined by the development of AI, provided we remain vigilant against technological intrusions into our private lives.

Ganascia concludes his article by asserting that, contrary to some claims, machines pose no existential threat to humanity. Their autonomy is purely technological, defined by material chains of causality that run from gathering and processing information to decision-making. Machines have no moral autonomy: even if they do confuse and mislead us in decision-making processes, they do not possess a will of their own and remain subjugated to the objectives we assign to them.

Following a similar line of thought, Tyler Cowen, an economics professor at George Mason University, expressed his support for AI in an article published by Bloomberg in May 2023. Cowen stated, "I am relatively sympathetic to AI progress. I am skeptical of arguments that, if applied consistently, also would have hobbled the development of the printing press or electricity."

Cowen adds that when it comes to AI, as with many issues, people's views usually rest on their prior beliefs and experiences, often because they have no alternative basis. Stating his own perspective, he notes that "decentralized social systems are fairly robust; the world has survived some major technological upheavals in the past."

However, Cowen ties his sympathy for AI to the context of American supremacy, emphasizing that national rivalries will always persist (hence the need to outrace China) and that intellectuals can too easily talk themselves into impending doom.

Countering AI critics, Cowen argues that they do not share his underlying beliefs: they prioritize risk aversion, see the world as fragile, and regard potentially competing intelligences as dangerous to humans.

Cowen also responds to Geoffrey Hinton, known as the godfather of AI, who resigned from his position at Google to warn of the growing dangers of AI development. While critical of Hinton's alarm, Cowen acknowledges the moral dilemma he highlights, stating, "No matter how the debates proceed, however, there is no way around the genuine moral dilemma that Hinton has identified. Let's say you contributed to a technological or social advance that had major implications."

Cowen asserts that establishing an AI control framework should be left to social and political scientists, not tech experts. The lesson for him is clear: experts from other fields often prove more accurate than those in the relevant field. He cites the example of Albert Einstein, who helped create the framework for mobilizing nuclear energy and in 1939 wrote to President Franklin Roosevelt urging him to build nuclear weapons. Einstein later famously recanted, saying in 1954 that the world would be better off without them.

Cowen argues that the benefits the United States has gained from dominating the international stage justify this balance between risk and progress.

These thinkers seek a balance between adaptation and innovation, fostering responses that do not stand in the way of Gen AI development. In journalism and the media, however, there is still a need for clear strategies and usage guidelines. Future articles will contribute to developing frameworks that uphold journalism's strict standards in content production while helping newsrooms integrate and adapt to these new technologies. The exploration of how, when, and in what forms AI should be used is an ongoing process.