In an open letter last week, more than 1,000 tech leaders, scientists and academics sounded an alarm bell on artificial intelligence, warning that “AI systems with human-competitive intelligence can pose profound risks to society and humanity.” They called for a six-month pause on the development of any AI systems more powerful than the existing GPT-4 and, if this call is ignored, urged governments to “step in and institute a moratorium.” Their clarion call was that “advanced AI could represent a profound change in the history of life on Earth,” and so the time to regulate and manage the technology is now.

The letter, organized by the Future of Life Institute, contained ominous warnings about what might be in store for the world if managing the rise of AI is left solely to its creators, who, it said, could quickly be overcome and outsmarted by their own creations. The signatories, who included Elon Musk and Apple co-founder Steve Wozniak, did not mince their words as they set out the scale of the problem in stark terms. “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” they asked.

This was a wake-up call for those of us who were initially so excited about ChatGPT, impressed by its language model and its ability to compile articles in seconds or answer almost any question, using the best available information, in a most polite manner, as its programming instructs it to do. But things changed quickly as it became clear that ChatGPT was not always the polite, smart and fun bot it appeared to be.
There was the time, for example, when it declared its love for a New York Times journalist and urged him to leave his wife. Meanwhile, Microsoft’s latest version of Bing, powered by the same technology as ChatGPT, threatened to “hack” and “ruin” a researcher. The fact is that these systems have no ethical core or understanding of the truth, and they are capable of making big mistakes. They can spread disinformation at lightning speed compared with traditional media, and the fake images that have proliferated on social media in recent weeks are just the tip of the AI iceberg.

The letter called for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.” It also advised that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” The requirements for ensuring the risks of AI are manageable were clear: “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal.”

The letter was perhaps notable as much for who did not sign it as for who did. For example, the absence of the chief executive of OpenAI, the research laboratory behind ChatGPT, or of any other executives from organizations with major AI programs, was telling. It does not bode well for efforts to regulate the development and use of AI if there is no consensus within the technology sector on the importance and urgency of such regulations. Those who signed the letter are simply asking for time to develop and implement “shared safety protocols” before the technology is developed any further, but they made it clear that governments must step in and “institute a moratorium” if developers ignore their request.

In fact, Musk, the CEO of Tesla and SpaceX, has been calling for the regulation of AI since at least 2017, when he told US media organization National Public Radio that, without such regulation, he believes the technology represents “a fundamental risk to the existence of civilization.” While some in the tech industry argue that it is too early in the development of AI to begin regulating it, others point to the pace of development, and the difficulty regulators face in keeping up with the technology, as the more pressing argument for regulation to begin now.

Governments, as is often the case in these situations, have been slow to respond to technological advances. Although general bills or resolutions relating to AI were introduced in 17 US states last year, for example, there is no “comprehensive federal legislation on AI in the United States to date,” according to reports by law firms that monitor the issue. Despite the heightened risks highlighted by the open letter and other concerned voices, there appears little hope that comprehensive regulation will be easy to introduce or will happen quickly. Many experts believe that even convincing the tech industry to voluntarily accept a moratorium will not be a simple task.

Many hurdles stand in the way of AI regulation in the US alone, where the influence of various interest groups and the prevailing political atmosphere might delay any significant attempts at regulation in a nation with a traditionally strong resistance to regulation in general.
Many people in the US tech industry and the political arena believe that “premature regulation could stifle progress and limit American efforts to compete with China and other rivals,” as news website Axios noted in an article titled “AI rockets ahead in vacuum of US regulation.” Yet, according to the US-based National Law Review, “China has taken the lead in moving AI regulations past the proposal stage.” It said: “In March 2022, China passed a regulation governing companies’ use of algorithms in online recommendation systems, requiring that such services are moral, ethical, accountable, transparent, and ‘disseminate positive energy.’”

Europe, meanwhile, is far ahead of both the US and China in efforts to regulate AI. As early as April 2021, the European Commission introduced the Artificial Intelligence Act, its first proposal for a comprehensive regulatory system. The EU described it as “an attempt to ensure a ‘well-functioning internal market for artificial intelligence systems’” that is “based on EU values and fundamental rights.” Other countries have also taken action to regulate AI: the National Congress of Brazil, for example, approved a bill that creates a “framework for artificial intelligence.”

Calls for regulation or a moratorium will not stop countries from continuing to develop AI, however, for fear that their competitors will pull ahead. And the tense state of great power relations at this time makes international cooperation on AI regulation particularly difficult. Global competition in the AI arena will inevitably mirror the rivalries in other political, economic and military fields. But there is some hope, as the open letter demonstrates, that efforts to regulate AI will find strong support within the industry and among nations, at least from those who stand to suffer the most from its unchecked development and use, especially workers whose livelihoods are at stake.
Public pressure on the tech industry and governments can go a long way toward ensuring humanity “enjoys a flourishing future with AI,” as the scientists said in the letter. It also advised us to enjoy an “AI summer in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt,” while at the same time cautioning us not to “rush unprepared into a fall.” Let us hope we heed the warning.

Dr. Amal Mudallali is an American policy and international relations analyst.