On July 13, 2023, Paris hosted the inaugural UNESCO conference on the ethics of neurotechnology. The conference discussed the development of a global ethical framework to safeguard human rights and fundamental freedoms, balancing the remarkable achievements of these technologies against their potential consequences for mental privacy and the exposure of brain-stored information such as memories and thoughts. This first-of-its-kind global dialogue came weeks after Elon Musk's Neuralink received FDA approval to begin human clinical trials, underscoring the rapid evolution of these technologies, whose capabilities now extend beyond decoding emotions and neuromuscular commands to reading, and even manipulating, ideas and memories. The conference also addressed the use of machine learning and big data, examining various scenarios to investigate the challenges these technologies may raise and to seek solutions to them.
A Roadmap
During the conference, UNESCO announced the development of a global "ethical framework" to address the human rights concerns raised by neurotechnology. Gabriela Ramos, UNESCO's assistant director-general for social and human sciences, said the convergence of neurotechnology and AI was "far-reaching and potentially harmful. We are on a path to a world in which algorithms will enable us to decode people's mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions, posing a threat to mental privacy."
Many warnings were issued regarding the investment in and commercialization of neurotechnologies, along with demands to regulate the handling of users' neural data and to protect vulnerable groups, including children, people with disabilities, and the elderly. Speakers also cautioned against overlooking the risks of non-surgical technologies, whose regulation may appear less urgent but which are also capable of collecting detailed information about their users' lives.
This discussion is part of a roadmap to regulate the technology and to supplement the international organization's efforts on ethical frameworks for high technology, having earlier developed an ethical framework for artificial intelligence in 2021. The International Bioethics Committee is also sparing no effort to investigate the regulation of neurotechnologies, work that resulted in a 2021 report titled "The Ethical Issues of Neurotechnologies." Mental integrity, human dignity, personal identity, psychological continuity, autonomy, mental privacy, accessibility, and social fairness are among the ethical criteria highlighted in the report. The report also dedicates an extensive section to legal issues such as compliance, freedom of speech, and democracy, pointing to governance as the foundation for achieving these ideals by supporting responsible innovation, bringing the public sphere and industry into the discussion, and strengthening public-private collaboration.
In addition to UNESCO's efforts, one of the most recognized organizations in this sector is the Neurorights Foundation, which has identified five rights it seeks to incorporate into international human rights law and ethical standards: protection from bias in the design of neurotechnology algorithms; equitable access to mental-enhancement technologies, promoted through balance and fairness; free will, so that individuals retain full control over their own decision-making without covert manipulation by external neurotechnologies; protection of personal identity, by setting boundaries that prevent technology from disrupting a person's sense of self or blurring the line between a person's consciousness and external technological inputs; and mental privacy, through tight regulation of the sale, transfer, and use of neural data.
At the national level, Chile passed a constitutional amendment in October 2021 that protects "mental activities" amid technological and scientific progress. Chilean lawmakers also approved a law in September of the same year providing for the rights to personal identity, free will, and mental privacy, making Chile the first country in the world to legislate on neurotechnologies capable of manipulating the mind. Brazil is expected to vote on a similar constitutional amendment in the coming months.
A Black Box
Despite the relative youth of neurotechnologies, devices and procedures that monitor or act on the nervous system, interest has surged in recent years: the global neurotechnology market was estimated at USD 12.82 billion in 2022 and is expected to grow to USD 38.17 billion by 2032. This expansion has been driven by a surge in investment in neurotechnology businesses, which reached USD 33.2 billion in 2021, a 700% increase over 2014.
Neurotechnology refers to the combination of technical components with the nervous system. Its goal is to monitor, evaluate, and stimulate neural processes: recording brain signals, analyzing them, and translating them into technical control commands, as well as performing stimulation to restore brain functions. The technologies involved range from wearable devices and non-invasive brain-computer interfaces to brain implants, the most controversial of all.
Neurotechnology applications have long been associated with health care: early disease detection, helping individuals with movement impairments carry out daily activities or infer intended speech and movement, and enabling those with missing or damaged limbs to sense touch, heat, and cold through prosthetics, among other examples. However, ongoing projects focus on non-medical applications ranging from the military to education, games, entertainment, and transportation, raising fears grounded in hypothetical scenarios of these applications and their consequences. This prompted UNESCO to describe the effects of this emerging sector as a "black box," since "we don't know a lot about the effect decoding and manipulating brains has on us."
The United Kingdom's Information Commissioner's Office issued a report in June 2023 warning about companies using brain-monitoring technologies to watch and evaluate their employees, push them to improve their productivity, and identify patterns of undesirable behavior among job candidates. While this has the potential to improve safety and security, particularly in high-risk workplaces, it raises concerns about intrusions on privacy, the subjection of employees to biased screening processes, and the psychological and societal consequences of the pressures such systems impose.
The RAND Corporation published a report in 2020 on the consequences of brain-computer interface applications in the military arena, which examined how such systems could diminish personal accountability for actions: feelings of guilt over decisions, full human control over those decisions and responsibility for their consequences, and the very capacity to be held accountable. Responsibility becomes entangled among many actors, from the company that produces the technology and the one responsible for its programming to the end users and everyone involved in the relevant decisions. This creates fundamental problems for regulation and law, and significant confusion over human control, especially given that some projects already apply these technologies militarily: the US Defense Advanced Research Projects Agency (DARPA) announced in March 2018 that it was developing a non-invasive neural interface system capable of reading from and writing to multiple points in the brain at the same time.
Revolutionary Challenges
Attempts to regulate neurotechnologies focus primarily on analyzing hypothetical scenarios for their applications, which raise a variety of concerns about the revolutionary situation they impose and the consequences of previously unseen developments. These can be identified as follows:
1. Mental privacy:
Neurotechnology, like any other high technology, requires data gathering and processing, which raises the usual concerns about data privacy, misuse, and ownership. Given the nature of the data collected in this case, however, the matter takes on a more dangerous dimension: most of the data generated by the nervous system is produced subconsciously, so the person has no full control over it and cannot meaningfully consent to its use. Added to this is the intimacy of thoughts and feelings, whose disclosure violates human dignity itself.
2. Identity and human control:
Neurotechnologies aim for unprecedented levels of access to mental processes, raising concerns not only about health and safety but also about personal identity, the way we think and feel, which may be thrown into confusion if automated algorithms are involved in decision-making, potentially eroding who we are and calling free will into question.
3. Responsibility limits:
Developing technologies capable of understanding people's mental processes and directly regulating the brain systems underlying their intentions, feelings, and decisions raises questions about individuals' responsibility for the actions of automated systems they control through their thoughts over great distances, when several minds are wired together, or when the system errs by misreading nonverbal commands, upending long-held assumptions about the limits of liability. In a world where people communicate only by thinking, even legal and moral individuality comes into question.
4. Unknown risks:
Some neurotechnologies require implanting a foreign body in the brain, with complications ranging from infection, bleeding, and inflammation to brain damage, in addition to the complete uncertainty of their long-term ramifications, both psychological and physical, and of their effects on specific groups such as children and adolescents, whose normal development they could disrupt. Regulating a technology whose impacts on humans, and on the nervous system in particular, remain unknown is inherently difficult.
5. Integrated systems:
Neurotechnologies do not operate in isolation; they are typically linked to artificial intelligence, the Internet of Things, and big data analytics, compounding the risks of excessive control, bias, discrimination, and privacy violations.
6. Potential crimes:
The development of technological applications is not limited to established companies subject to legal and ethical regulation; the technology may also fall into the hands of criminal gangs, who could steal highly sensitive information about the brain activity of a government official or a group of soldiers, or tamper with the databases and control systems of a firm, government institution, or security agency, among other criminal behaviors already familiar from digital technologies around the world.
7. Imbalance:
The United States accounts for 50% of the roughly 1,400 companies operating in neurotechnology, and Europe for 35%, foreshadowing a highly uneven distribution of the technology's returns, not only between countries but also within the same society: the wealthy will have greater opportunities to benefit from neurotechnology applications, widening existing gaps in access.
The debate over these challenges, and over the ethical frameworks proposed by international and national initiatives to address them, is a difficult one. It reflects efforts to preserve basic human rights to privacy, freedom of thought, and free will, and to strike a balance between the benefits of neurotechnology, particularly in medicine, and the preservation of human rights and identity, so as not to drift toward the superhuman thesis and the marketing of "humans without physical or mental borders," which calls into question the very nature of humanity, with all the flaws that help define what it means to be human.