
Generative AI Regulation and a Stable World Order

October 2, 2024


Like many, I find myself overwhelmed by the ongoing debate surrounding Artificial Intelligence (AI), particularly generative AI. The discussions and debates among technologists, philosophers, official institutions, and even among the public have been nothing short of mind-boggling. This is especially true for a layperson—even one like myself, with a degree in Physics and Mathematics and four decades of public service, where governance and national security were the primary focuses.

Contrasting Views on AI

Some regard AI as merely another phase of the Industrial Revolution, or more precisely, as "algorithmic decision-making" stemming from the widespread use of computers. An algorithm, after all, is simply a finite set of rules designed to solve a specific problem.

Proponents of this view argue that generative AI represents a natural, transformative progression of technology and modernity. Many, though not all, caution against regulating AI, given the potential negative implications of regulation for freedom of speech and innovation.

Others, however, strongly believe that generative AI is fundamentally different from earlier transformative technologies. They emphasize its ability to create new information through cognitive capabilities comparable to those of humans across various fields, without operating on direct instruction. This contrasts sharply with earlier technologies, which merely made machines more efficient by executing algorithms.
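To make the contrast concrete, the following is a minimal illustrative sketch, my own rather than the author's, of what "algorithmic decision-making" means in the classical sense: a finite set of explicit, human-written rules in which every outcome can be traced back to a specific instruction. The function name and thresholds are hypothetical.

```python
# Illustrative only: a "classical" algorithm in the sense described above --
# a finite, human-written rule set that maps an input to a decision.
def loan_decision(income: float, debt: float, credit_score: int) -> str:
    """Every possible outcome is traceable to an explicit rule."""
    if credit_score < 600:
        return "reject"
    if debt > 0.5 * income:
        return "reject"
    return "approve"

print(loan_decision(income=60_000, debt=20_000, credit_score=710))  # approve

# A generative model, by contrast, is given no such rule table; its output
# emerges from parameters learned over large datasets, which is why it cannot
# be audited rule by rule in the same way.
```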

The Regulation Debate

There is widespread recognition that generative AI offers significant potential for progress but also presents far more complex challenges. Many candidly admit they do not fully comprehend the scope of either its positive or negative implications.

Some argue that, as with past technologies, regulating generative AI is crucial, especially given its current unpredictability. However, others believe that generative AI is so transformative, unpredictable, and already unleashed that effective regulation may be impossible. They argue that any attempt to do so might prove futile and could lead to a false sense of security and complacency.

Ethical and Geopolitical Implications

Generative AI poses incredibly difficult technological questions on which even experts disagree. These questions carry significant ethical implications, and the range of ethical concerns is likely to expand as the fields of application evolve. From an international and geopolitical perspective, particularly that of national security, I often shudder at the variables in play.

Globalization was initially perceived as the solution to many of the world's problems. However, it later became the harbinger of a "clash of civilizations" as virtual proximity rapidly exposed the discrepancies and double standards between peoples and nation-states, across borders, and even within individual countries.

Are we witnessing a shift in governance from human to non-human sources, with generative AI increasingly influencing human relations? Is technology—particularly generative AI—an asset or a liability? Can it evolve into an ally or become an adversary? These questions frequently arise, given AI's vast potential, its inherent vagaries, unpredictability, and associated risks.

Is the traditional developed/developing world paradigm transforming into one where states with generative AI capabilities stand apart from those with less advanced technological capabilities? Could this lead to an even wider gap between developed and developing nations?

Many technological breakthroughs, including generative AI, are financed by military industries. Consequently, the potential dangers of AI militarization must not be underestimated, and the possibility of an arms race involving generative AI applications must be taken seriously. How do national security experts assess the security implications of generative AI when its scope of activity is neither predetermined nor fully controlled by the party deploying it, its allies, or its adversaries? How do we account for potential AI errors when the technology does not operate on direct instruction and the role of human judgment in military decisions is diminishing?

It's worth noting that the most militarily advanced nations have attempted to mitigate the irrational or mistaken use of nuclear weapons by implementing a "double key" system, requiring more than one human to authorize their release.

A shared perspective among diverse groups is that generative AI is already here and spreading widely and rapidly—a fact no one disputes. The overwhelming majority leans towards the view that some form of international AI regulation would be beneficial. However, even within this majority, some acknowledge that achieving effective regulation may be technically impossible, at least for now.

The Case for Regulation

I concur that generative AI is here to stay and support the position that some form of international regulation is imperative. While certain aspects of AI may currently be beyond our regulatory reach, regulating it to the best of our ability is preferable to complete chaos. I do not advocate impeding innovation or the free flow of information, but I believe we need a level of regulation that, though imperfect, can mitigate risks.

In essence, some regulation is better than none, provided it is grounded in credible scientific knowledge. At the very least, such regulation can make misuse more challenging, reduce the potential room for error, and ensure that innovation and free speech are not unduly hindered.

Recommendations

For laypersons—and perhaps for experts—much remains to be understood about generative AI. It is urgent to bridge the gap between science and science fiction regarding the benefits and risks of this technology. As the United Nations examines AI through several initiatives this fall, I call upon reputable scientific institutions to collaborate on preparing an orientation document on generative AI, emphasizing what we know and do not know about it.

Based on existing scientific knowledge, it would be beneficial to lay the foundation for voluntary cooperative arrangements that optimally harness generative AI for socio-economic development while safeguarding against its negative ramifications. Similarly, cooperative arrangements should be developed to maximize national security benefits while limiting the technology's negative security implications.

Transparency, connectivity, and rapid response are among the technology's beneficial features, and past transgressions involving lower-end technologies have been partially contained by building virtual firewalls. In potentially more sensitive national security applications, even amid the heightened tensions of today's polarized world, it would serve even the most advanced military powers to develop safety measures against generative AI actions not fully mandated by higher authorities.

These preliminary suggestions are not a complete answer to the question of AI regulation; rather, they aim to foster better understanding and to facilitate efforts to harness the technology's potential while managing its dangers. In an international order rife with inequality and polarization, we are duty-bound to seize every opportunity for progress and to guard against technologies that may exacerbate destructive capacities and increase the risk of national security miscalculations.