
Multiple Policies

How are US institutions addressing the risks of artificial intelligence?

April 12, 2024


On March 28, 2024, the White House released its first government-wide policy on artificial intelligence (AI), containing instructions for reducing the technology's risks. The policy requires federal agencies to take additional steps to improve AI safety and security, protect Americans' privacy, and address the dangers the technology could pose.

This is not the Biden administration's first attempt to frame and codify artificial intelligence. On October 30, 2023, US President Joe Biden signed an executive order establishing new safety and privacy protection regulations in the field of artificial intelligence, as reported by the White House. This aims to secure Americans' private data, promote innovation and competition, and strengthen the US' technological leadership. 

Artificial intelligence is at the top of Biden's agenda in the last year of his first presidential term. Commenting on the October signing, he said: "We are going to see more technological change in the next 10 — maybe the next five years than we have seen in the last 50 years, and that is a fact. Artificial intelligence is accelerating that change. It is the most consequential technology of our time."

Directive or Mandatory Principles? 

Issued under the auspices of US Vice President Kamala Harris, the guidance memorandum from the White House Office of Management and Budget reflects the US administration's success in translating Executive Order No. 14110 of October 2023 into practical executive measures that all US federal agencies must carry out. Reflecting the priority Biden places on artificial intelligence policy, the memorandum puts people and communities at the heart of the government's innovation goals and mandates that federal agencies identify and manage the risks arising from AI's role in society.

By December 1, 2024, federal agencies must implement "concrete guarantees" when using AI. These include a set of mandatory measures to reliably evaluate, test, and monitor the effects of AI on the public, as well as providing transparency on how the government uses AI. This will apply to various AI applications, including health, education, employment, and housing. 

Comprehensive Efforts

This memorandum does not, however, mark the beginning of the US administration's efforts to regulate artificial intelligence and mitigate its risks. Work on AI has accelerated across US federal and security agencies over the last few months, particularly since the issuance of Executive Order No. 14110 on October 30, 2023, in pursuit of three goals. The first is strengthening AI governance: developing plans, policies, and frameworks to foster responsible innovation, protect Americans' privacy, promote justice and civil rights, defend consumers and workers, and encourage innovation and competition. The second is the securitization of AI threats, a role played by US security institutions in protecting society from the technology's negative and discriminatory uses. The third is the administration's ambition for worldwide leadership in this field.

1. Strengthening the governance of artificial intelligence in US society: 

The US administration is pursuing a plan to ensure the integrity of artificial intelligence standards across government, in order to increase public confidence that federal agencies will protect citizens' rights and ensure their safety. The most visible evidence of this includes the following:

A. More regulation, follow-up, and government accountability: Under the recently issued guidance memorandum, each federal agency has 60 days to appoint a chief AI officer, who will ensure that all essential measures are taken to avoid unexpected results from the agency's AI applications; in other words, that the agency's artificial intelligence capabilities do not threaten Americans. The memorandum also requires agencies to submit annual public reports on how they use artificial intelligence, the risks involved, and how they manage those risks. These reports must be submitted to the White House Office of Management and Budget and made public.

B. Establishing an AI governance council: Since December 2023, the White House Office of Management and Budget and the Office of Science and Technology Policy have been gathering the names of the nominated officials at each agency in preparation for the formation of a new Council of Senior AI Officials to coordinate their efforts across the federal government and implement the US administration's directives in this regard. 

C. Enhanced workforce capabilities: The Biden administration announced that it will hire 100 AI specialists by the summer of 2024 as part of the national AI talent surge created by Executive Order 14110 of October 2023. The President's FY 2025 budget includes an additional $5 million to expand the General Services Administration's government-wide AI training program, which had over 7,500 participants from 85 federal agencies last year.

D. Obligations for private sector companies working on artificial intelligence: It is believed that when Biden issued Executive Order No. 14110, one of his goals was to require AI companies such as OpenAI, the maker of ChatGPT, to disclose the results of their safety testing to the US government. The executive order did not, however, require companies to label artificial intelligence-generated content. Instead, it directed the Department of Commerce to develop authentication standards and guidelines for watermarking content, in order to combat deepfakes and mitigate the harms caused by artificial intelligence.

2. Securitization of AI risks: 

While some see AI as a "new technological revolution" that can transform many parts of life, others warn of its perils. It may render some jobs obsolete, raising unemployment rates and marginalizing employees, and it may accelerate the spread of misleading information and imagery beyond human control. Because it can outperform human cognitive capabilities in some tasks, it was unsurprising that over 350 information technology experts, including executives, researchers, and engineers working in the field of artificial intelligence, signed a joint statement to Congress warning of the dangers of AI, which may threaten human safety, help create epidemics, and ignite global wars. They also urged US policymakers to create plans to control the growth and spread of artificial intelligence, similar to those used to prepare for epidemics and nuclear conflicts.

According to the International Monetary Fund's most recent report, published in January 2024, artificial intelligence may affect 40% of global jobs, exacerbating labor market inequality. Goldman Sachs, for its part, has predicted that artificial intelligence could replace an estimated 300 million full-time jobs worldwide.

There is little doubt that these facts have motivated the US security services to press ahead in securitizing artificial intelligence, treating it as a security issue because of the breadth of its potential threat to national security. This requires US institutions, along with the corporate and civil sectors and public opinion, to work together on unprecedented measures to mitigate AI's growing threats. The most notable US federal security efforts in this regard over the last few months include the following:

A. US National Security Agency: In September 2023, the US National Security Agency announced the creation of an Artificial Intelligence Security Center. The center will be merged with the agency's current Cybersecurity Cooperation Center. Its goal is to secure US artificial intelligence systems and address external threats. It is expected to collaborate with the private sector and local partners to fortify US defenses against competitors such as China and Russia.

B. US Department of Homeland Security: On March 17, 2024, the Department of Homeland Security (DHS) issued the "DHS AI Roadmap," a detailed plan that addresses the Department's policy on artificial intelligence and how to protect the people and nation from its risks while ensuring that privacy and civil rights are fully protected.

To improve the safety and security of artificial intelligence, the Department of Homeland Security has done the following:

- Forming the AI Safety and Security Board, which comprises industry specialists from both the private and public sectors who advise the Secretary and the critical infrastructure community on how to strengthen security and respond to incidents involving artificial intelligence.

- Collaborating with the Department of Defense on a pilot program to create artificial intelligence capabilities to repair vulnerabilities in important US government networks.

- Coordinating with the Cybersecurity and Infrastructure Security Agency (CISA) to identify and address cyber threats targeting artificial intelligence systems.

- Creating a program and guidelines to help the private sector avoid the risks of intellectual property theft related to artificial intelligence.

C. US Department of Defense: The Department of Defense's AI Adoption Strategy, released on November 2, 2023, did not overlook the defense dimension of AI risks, even though its primary goal is to capitalize on emerging AI capabilities while ensuring that US warfighters maintain superiority in battlefield decision-making for years to come. The strategy emphasized the importance of creating an appropriate regulatory environment for developing the data, analytics, and artificial intelligence ecosystem, expanding digital talent management, providing business capabilities, addressing the impact of artificial intelligence on warfare, strengthening governance, and removing policy barriers.

3. US global leadership in reducing the risks of artificial intelligence:

A. A model for global action: The US hopes to emerge as an international leader with its new regime for government AI. Vice President Kamala Harris said during a news briefing ahead of the announcement that the administration plans for the policies to "serve as a model for global action." She said the US "will continue to call on all nations to follow our lead and put the public interest first regarding government use of AI."

B. A political declaration on the responsible military use of artificial intelligence: The US administration is pursuing worldwide political efforts and declarations that will encourage the international community to address the risks of artificial intelligence. In February 2023, the United States announced the "Political Declaration on the Responsible Military Use of Artificial Intelligence" during a meeting in the Netherlands. Endorsed by 54 countries, it serves as an international accountability framework, allowing countries to reap the benefits of artificial intelligence while limiting its military risks. It also reflects the US administration's intention to continue collaborating with allies and partners to develop the norms and standards that will shape artificial intelligence globally.

Washington has also held a "regular dialogue" with supportive countries to build international support for these ethical practices and their implementation. From March 19 to 20, 2024, the US Department of State and the US Department of Defense co-hosted the declaration's first plenary meeting at the University of Maryland, gathering representatives of some 60 countries. The discussions examined the steps needed to ensure that military use of artificial intelligence is responsible, ethical, and contributes to international security.

C. Participation in the first global declaration on the risks of artificial intelligence: On November 1, 2023, the US, the European Union, and approximately twenty other countries signed the Bletchley Declaration on the safe development of artificial intelligence during the first international summit on the rapid growth of this technology, held in Britain. Washington prioritizes artificial intelligence, understanding that its rapid development poses an existential threat to the world if left unregulated. It therefore has no reason to leave a void in which other countries, such as China or Russia, might lead the international community in formulating the guarantees and regulations governing this industry.

Artificial intelligence can help the US government improve outcomes for its people. It can help agencies organize, administer, and distribute benefits more effectively while cutting costs and improving the security of government systems. Left without controls and regulatory rules, however, it would jeopardize Americans' safety and privacy. The Biden administration is therefore on the right track in implementing measures, procedures, and executive orders that have shaped and codified the field of artificial intelligence along the three tracks discussed above. This stems from the US' desire to protect its people's privacy while promoting equality and civil rights, thereby advancing societal and national security and making the country a leading global model for the safe, secure, and trustworthy use of AI.