An artificial intelligence system recently achieved passing scores on the notoriously difficult US medical licensing exam without any human reinforcement, marking a notable milestone in the maturation of clinical AI. While chatbots have caught the world's imagination, should we be worrying or celebrating a new dawn of technology that is ultimately meant to serve humans?

ChatGPT, we are told, can produce essays, poems and programming code within seconds. It was developed by OpenAI, a California-based startup founded in 2015 with early funding from Elon Musk, among others. For their study, researchers at California-based AnsibleHealth tested ChatGPT's performance on the three-part exam taken by medical students and physicians-in-training in the US. It included questions spanning multiple medical disciplines, from basic science and biochemistry to diagnostic reasoning and bioethics, presented in various formats, including open-ended and multiple-choice questions.

Typically, medical students put in 300 to 400 hours of dedicated study of basic science and pharmacology to pass the exam's first part. The second part, focused on clinical reasoning and ethical management, is typically taken by fourth-year medical students. The third part is tailored to physicians who have completed at least six months of postgraduate medical education.

Though some experts have rushed to praise the milestone as a boon for science and the provision of medical advice, others were more cautious. A senior Google executive recently warned of the pitfalls of artificial intelligence in chatbots and other mediums, as Google's parent company Alphabet battles to compete with ChatGPT. Prabhakar Raghavan, senior vice president at Google and head of Google Search, last week said that "this kind of artificial intelligence can sometimes lead to something we call hallucination," whereby the machine provides convincing but completely made-up answers. Google's AI chatbot Bard recently shared inaccurate information in a promotional video, wiping about $100 billion off Alphabet's market value.

Arrays of apps and platforms using AI technology and machine learning have been penetrating all aspects of modern life, providing solutions and replacing human intervention in some fields. However, Italian regulators did not like what they saw and barred one firm from gathering data after finding a breach of Europe's sweeping data protection law.

Chatbots and medical bots have caught the imagination of the world, but similar bots have also been developed into so-called slaughterbots through the military use of artificial intelligence. The first international summit on the responsible use of machine learning and AI in the military is being held in The Hague, the Netherlands, this week, with the participation of more than 50 countries, including China and the US. It hopes to shape what is and is not acceptable in the future use of bots in theaters of war. In the military field, AI is already used for reconnaissance, surveillance and situational analysis.
While the prospect of fully independent killing machines remains far off, some hardware is already capable of independently picking out targets for certain types of guided weapons, and the hope is that we will not see AI extended to refine nuclear command and control systems at the expense of human judgment. Introducing the conference, Dutch Foreign Minister Wopke Hoekstra said: "In a field that is really about life and death, you want to make sure that humans, regardless of the flaws baked into our DNA, are part of the decision-making process." He went on to describe the current debate about AI bots such as ChatGPT and their penetration of all aspects of human existence, noting that they are a welcome innovation for the many benefits they yield to society, but also that many people have been misusing these tools to cheat or for harmful ends.

Many believe that the history of tech development has had less to do with simplifying tasks than with reorganizing the workplace to draw more profit for the big tech companies. Some believe it is also a means of cutting corners rather than genuinely bolstering quality or accountability. So far, government regulation of big tech has been faint or nonexistent and, even where it has shyly existed, the stranglehold that big companies and their mushrooming, rarely accountable operating arms have over public data and infrastructure makes it difficult for the state's efforts to allay people's fears to come to fruition. Clearly, the ship of catching up with the technology sector has yet to leave port, and it may end up being too little, too late.

Society is about to change. The bots, and the so-called generative AI that underpins them, are likely to revolutionize the internet and more. Experts expect the bots may become so good that it will be difficult to tell them apart from humans; then we will have a bigger problem on our hands.

The world will have to find a balance. If medical diagnoses are brought closer to us through software dubbed "Dr. Google," what will be the future sources of the knowledge on which these machines' data relies if bots become so powerful that students depend on them and even stop studying for their exams? And what if artists stop creating and instead, as a poet friend of mine did recently, produce work using ChatGPT? My friend created a poem in Arabic in a style mimicking the work of the great Palestinian poet Mahmoud Darwish.

• Mohamed Chebaro is a British-Lebanese journalist, media consultant and trainer with more than 25 years of experience covering war, terrorism, defense, current affairs and diplomacy.