AI tools must not be exploited at humans’ expense

  • 1/24/2024

An online, AI-powered chat system for customer service was last week manipulated into composing a poem about how bad that company's customer service is. I am sure many of us have experienced being stuck trying to solve a problem with a banking app, or tracking a delivery that has gone wrong, only to be driven to despair while trying to communicate the problem to an automated virtual assistant. This British customer's success in tricking the AI function in a parcel delivery company's online chat system led the company to disable the service. Its chatbot wrote a poem saying that it is "useless at offering help," adding that the company in question is a "waste of time" and a "customer's worst nightmare." The poem ended with the chatbot rejoicing at the company shutting the service down and claiming that, finally, customers "could get the help they needed, from a real person who knew what they were doing."

This light-hearted exchange between chatbot and consumer can only temporarily mask the serious problems people face every day when trying to get a service, some support or simply some help with an urgent problem. After pressing the help button, customers are usually able to ask only a limited number of questions related to their inquiry. If they persevere, they might be given a telephone number, but then their call is answered by an interactive voice response system that greets them with endless prompts and automated menus. The whole experience leaves the user feeling powerless and perhaps even close to a nervous breakdown.

And this, we are told, is likely to get worse. Apparently, our world is to witness even greater reliance on digital tools that are being built quickly and tested lightly, without the human experience in mind.
Nor do these tools come with an accountability framework to protect their human users from harm or abuse caused by a glitch in the code or bias on the part of the people who designed it in the first place. Our world is sleepwalking into yet another crisis that is likely to hit us at some point further down the line, as everyone races to embrace the mantra of "saving costs at any cost" and replace employees with AI-powered chatbots. Based on previous experience, many believe that, if you thought Big Tech was bad, big AI firms are likely to be even worse.

Research published last year by the Pew Research Center gathered the forecasts of more than 300 experts on the potential harms and threats societies face as a result of the constant, rapid evolution of our digital life. Some 37 percent said they were more concerned than excited about what today's trends say about where digital developments are headed. Meanwhile, 42 percent said they were equally concerned and excited. Only 18 percent were more excited than concerned.

The research also asked these experts to detail some of the likely impacts and adversities. Their findings make for a chilling read, since the development of digital systems continues to be driven by profit incentives in economic terms and power incentives in political terms. This will result in tools being used to control the public, such as through surveillance and data harvesting, rather than empowering people to have a smooth experience, act freely, share ideas and protest injuries or injustices. For these experts, advances in surveillance, sophisticated bots embedded in civic spaces, the spread of deepfakes and disinformation, and facial recognition are all likely to impact human rights. Above all, increasingly sophisticated AI applications are likely to lead to job losses, causing a rise in poverty and the diminishment of human dignity.
What the experts fear most is the loss of the best knowledge in a sea of misinformation and disinformation, as the institutions previously entrusted with informing the public are further decimated and facts are lost amid rampant bare-faced lies and targeted manipulation. They even argue that "reality itself will be under siege," as emerging digital tools will be able to convincingly create deceptive alternate realities.

Some of the experts also warned about the toll these digital systems are taking on people's health and well-being, with high levels of anxiety and depression recorded as technology embeds itself ever further into every aspect of our lives. This increases alienation, loneliness and the potential for belief in altered realities, which could trigger job displacement and civil strife and endanger social cohesion.

Those experts in the Pew study concerned with governance and human connection fear that norms, standards and regulations will not evolve quickly enough to shield the social and political interactions of citizens. The pace of change could even shake individuals' moral compass and their trust in each other and in institutions and modes of governance. Already, we are seeing signs of extreme polarization, cognitive dissonance and fragmentation eroding the fabric of society as we knew it.

Surely there will be a few people with the wit and know-how to give a chatbot or some other digital tool a taste of its own medicine by ridiculing the service company and its tools when they fall below the expected standard. Unfortunately, the conclusions of the Pew Research Center point to difficult and tumultuous times ahead.
Unless, that is, states and regulators sharpen their skill sets to hold Big Tech and big AI companies to account and make sure they answer to the consumers these AI-powered tools were supposedly built to serve. The digital realm, despite its commercial raison d'etre, should remain a tool that serves human beings and eases the execution of their many chores. It should not leave them in limbo, in despair and riddled with anxiety as a result of a lack of accountability and transparency in business models that have been turbocharged, regulation-free, by Big Tech and big AI.

Mohamed Chebaro is a British-Lebanese journalist with more than 25 years' experience covering war, terrorism, defense, current affairs and diplomacy. He is also a media consultant and trainer.
