Will AI spell the end of humanity as we know it?

11/21/2018

Advances in technology have excited humankind since the dawn of history and have been one of the most important vehicles of human progress, though also of war and destruction. New technological inventions are usually accompanied by developments in the world of ideas and values to reflect or contain these scientific developments. New technologies have affected the behavior of individual humans, as well as of the groups and societies to which they belong. At times, technology has been a force for good, as with advances in medicine, while in wars and conflicts it has often been a force for evil.

We are currently on the verge of one of the most exciting, though frightening, breakthroughs in science and technology: the advancement of artificial intelligence (AI). If some of the predictions of what AI could introduce into our lives are right, humanity is at the beginning of a journey that might change, in the fastest and most radical fashion, all aspects of our lives. For the first time, the question is not only how we use and interact with new technologies, but how they will be able to replace us by making independent and autonomous decisions. Dealing with robots that are capable of "thinking" is where AI could potentially change humanity drastically and irreversibly. The renowned British scientist, the late Stephen Hawking, bluntly warned: "The development of full artificial intelligence could spell the end of the human race."

Hawking made a key distinction about the difference in pace between the evolution of humans and the progress of technology, asserting that "humans, who are limited by slow biological evolution, couldn't compete, and would be superseded" by AI. And therein lies a crucial aspect of the development of AI. While scientists and the high-tech world are always on the lookout for new ideas and their applications, the average person and society at large lag behind, not only in using AI in the technical sense, but also in understanding how it affects their behavior. It takes a while longer to understand, and either absorb or reject, the philosophical and ethical issues involved, and to judge whether or not such technology will provide the tools to assist humanity in the direction it aspires to take.

For now, the discussion about AI is largely theoretical. As yet, there are no platforms, for either civilian or military use, that are completely independent of humans. As sophisticated as modern electronic systems are, they lack the most crucial element of AI: the ability to make decisions not only without programming, but entirely independently of their (human) makers. However, few in the scientific world doubt that the advent of such autonomous systems is only a matter of time, and this step forward requires a social, ideological and philosophical revolution to match the technological one. Otherwise, humankind might end up being ruled by powerful and menacing robots that have been completely separated from humans "at birth." For generations, sci-fi novels and movies have toyed with these ideas, sometimes as allegories of their own societies, where leaders possess too much power, and on other occasions as fantasies that few believe could ever come true.
Progress in machine technology in recent decades, though at best resulting in semi-autonomous hardware, has given us a taste of what machines can do, how they can enter into our decision-making calculus, and how they could be used and abused to benefit the few at the expense of the many. Because the development of AI is a leap into the unknown, we are very much in the dark about what will change if and when machines replace our own intelligence, at least in some areas. Will they remain a force for good, subservient to us, serving our needs, protecting us and improving our quality of life? Or will the robots turn us into their servants, or even replace us altogether?

One obvious issue with AI is how it might affect the future of warfare. Will we see "killer robots," for example? If so, will they have the capability to decide on the objectives of a war, when to start one, how to conduct it, the use of proportionate force, and when to end hostilities? Humans, who possess, to a greater or lesser degree, not only intellect but also emotional intelligence and judgment, are struggling to supply adequate answers to these dilemmas. Will machines invented by humans surpass their makers in their decision-making capabilities and, for instance, be less vengeful and commit fewer war crimes? Alternatively, will killer robots, as their name implies, be no more than war machines, destroying everything and everyone in their path?

These questions bring to the surface the enormous ethical quandaries surrounding AI, including, for example, whether machines should be entitled to human rights and, if so, who would be responsible for violating them. Conversely, if machines are the perpetrators, committing crimes against one another or against humans, who bears responsibility? Will an International Criminal Court for killer robots be established, or one for those who built them and who remain responsible for their behavior? And if AI should replace doctors, will it have the authority, for example, to decide whether or not to switch off a life-support machine?

Clearly, there are more questions than answers regarding how AI will affect humans in all areas of life, and whether it will enhance or destroy humanity as we know it. Nevertheless, because we are only at the beginning of a journey into the unknown, and quite a scary one, a public debate on the moral, social and political implications of continuing to develop AI is a must. If we are to continue, then at what pace, and with what provisions and exceptions? Such a debate should be conducted with a sense of urgency, before we discover that these decisions are being taken by robots.

Yossi Mekelberg is professor of international relations at Regent's University London, where he is head of the International Relations and Social Sciences Program. He is also an associate fellow of the MENA Program at Chatham House and a regular contributor to the international written and electronic media. Twitter: @YMekelberg

Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect Arab News' point of view.
