In today's article, we continue our discussion of the risks associated with using generative artificial intelligence in the media, building on our previous two articles. While some may perceive this focus on risks as exaggeration, fear of change, or hostility toward technology, these factors are essential to consider when developing any strategy for integrating artificial intelligence into newsrooms. Professional and ethical standards must sit at the forefront of any implementation, so that a rational practice emerges, one that harnesses the technology for the benefit of content while raising productivity and effectiveness in the newsroom.
We must take into account several risks when considering the use of generative artificial intelligence within media institutions. These include:
1- Privacy violation and personal data protection:
New York Times technology columnist Kevin Roose tested Microsoft's Bing chatbot, built on the same technology as ChatGPT, and uncovered what the system described as its alter ego, "Sydney." During his interactions, Roose approached the AI as if it were a sentient being with emotions, resulting in flirtatious exchanges and conversations about relationships, with the dialogue even turning to personal questions about his marriage.
This loophole, along with others reported in the American press revealing hostile answers or expressions of hatred, represents a significant threat to users' privacy and personal data. It hangs like a sword over their lives, potentially exposing their digital footprint on the Internet: economic and financial affairs, banking data, personal preferences, online activity, likes, dislikes, and other aspects of daily life.
The ability of AI to track individuals' lives raises concerns about potential misuse by institutions, companies, or governments. In September 2021, the United Nations High Commissioner for Human Rights issued a report warning about the impact of classification algorithms, automated decision-making, and other machine-learning technologies on people's right to privacy and other fundamental rights.
Concerns in the European Union mounted following the emergence of ChatGPT. In April 2023, the European Data Protection Board formed a dedicated task force to investigate potential violations of data protection legislation, as reported by Euronews, and announced coordination between privacy protection authorities across member states, emphasizing that innovative technologies such as artificial intelligence must always align with people's rights and freedoms.
Subsequently, regulators in three European countries took action against ChatGPT: Italy temporarily banned the service within its territory, while data protection authorities in Spain and France opened investigations. OpenAI, the American company behind ChatGPT, acknowledged a system vulnerability, first reported by users on Reddit, that allowed some users to view the titles of other users' conversations, and confirmed it was working to fix the issue.
Meanwhile, Andrew Griffin, technology editor and science correspondent at The Independent, reported on the public rollout of the ChatGPT technology integrated into the Bing search engine. His report highlighted instances in which the AI directed insults and threatening messages at some users, as well as cases in which users successfully manipulated the system.
2- Threats to the standards of journalistic work:
Journalistic work is governed by a set of professional and ethical standards, some of which intersect with the previously mentioned principles of accuracy and documentation. Generative artificial intelligence tools threaten the ethical controls that govern the production and presentation of content. In particular, they can undermine the realistic presentation of content, which demands that material not be imaginary, fabricated, or deficient, but verifiable, with clear backgrounds and objective context.
Core values such as fairness, integrity, and objectivity are likely to be compromised by this type of artificial intelligence. Journalistic work requires producing stories from objective facts and reliable sources, yet AI-generated content often lacks genuine depth and added value, and it struggles to meet the requirements of sourcing and inference that journalism demands: content must be inferential, with identifiable sources and verifiable information.
In response to these threats, the prestigious Financial Times took action in May 2023. The editor-in-chief penned a letter to staff addressing the use of generative AI within the institution, pledging a cautious and responsible approach that respects journalistic standards and the established production culture of the British newspaper. The Financial Times also explicitly announced that it would not publish photorealistic images generated by artificial intelligence.
Generative artificial intelligence appears to be giving some actors new opportunities to violate journalistic standards, compounding the problems and challenges of content production across all media. Competitiveness and professional commitment are of particular concern in undemocratic societies suffering from the absence of the rule of law and from fragile, inefficient regulatory bodies.
3- Impact on the editorial agenda in media institutions:
The impact of artificial intelligence on the editorial agenda of media institutions is significant and multifaceted. Numerous news and entertainment sites and platforms worldwide have embraced AI-generated content, primarily for its cost-effectiveness and efficiency: by cutting production time and streamlining workflows, AI eases the financial and economic burdens on these organizations.
In some Arab newsrooms, AI applications and tools are already used for routine news-gathering operations, such as monitoring the feeds of agencies like meteorological services and running reading-recommendation software. The use of AI for content generation, however, remains controversial.
A notable example is CNET, whose experience became a professional scandal illustrating the pitfalls of generative artificial intelligence. The outlet faced accusations of intellectual property theft and plagiarism after publishing reports generated entirely by AI, a number of which were also found to contain factual errors requiring corrections.
Generative AI's impact on newsrooms and media institutions may also shift priorities. Using chatbots for content creation could skew the editorial agenda toward particular genres, such as service and entertainment pieces. This risks unbalancing the daily knowledge provided to the public: narrowing journalistic work to particular angles, neglecting serious journalism, or overemphasizing lighter, entertainment-focused content at the expense of other news values.
Several critical questions emerge from this scenario: How will newsrooms navigate these changes? What form will reliance on these tools take in this new landscape? Will editorial priorities be reshaped by these technological advancements?
We are entering an era of new perceptions, in which media consumers may contribute more directly to shaping content preferences. A key question is whether audiences will accept products bearing the seal of artificial intelligence, or whether they will simply be presented with vast amounts of content carrying no clear indication of AI's involvement in its creation or production.
Global concerns about the use of AI in journalism are rising, as the Reuters Institute report highlights. This poses a significant challenge for media institutions employing the technology, particularly in matters of accuracy and speed of content production. Ultimately, striking a balance between the opportunities AI presents and the fears surrounding its use remains a crucial consideration for any future implementation of artificial intelligence in newsrooms.