Rishi Sunak will set out his views on artificial intelligence next week to an audience of technology industry insiders during a keynote speech at London Tech Week. Twenty-four hours later, Keir Starmer will do the same.

The prime minister and the Labour leader have a habit of speaking at the same venue within a day of each other – they did so at the beginning of the year when setting out their competing visions for the country from the same room at the Olympic Park in east London. The fact they are doing so again, but on the far more technical and detailed question of AI, shows how quickly the issue has rocketed up the political agenda.

“We have been working on AI policy for a long time,” said one government official. “But suddenly the interest in this work has spiked. Everyone wants to weigh in, from cabinet ministers to industry to academia.”

The shift has come from the top of government. Sunak himself, who used to speak enthusiastically about the opportunities AI presented, has gone on something of a re-education course, meeting industry executives and issuing statements about the “existential” risks it poses.

This week, the prime minister has been in Washington DC lobbying Joe Biden to put the UK at the centre of efforts to formulate a global set of principles that will govern how countries regulate the industry.

British officials argue the UK is ideally placed for such a task. London is home to Google DeepMind, and this week the technology company Palantir announced it would make Britain its European headquarters for AI development. Officials also say the UK’s approach of overseeing AI development with broad principles makes more sense than trying to regulate individual technologies, as the EU has done.

Sunak had some success, persuading the US president to sign up to an AI summit to be hosted in the UK later this year. British officials say “like-minded countries” will be invited, a heavy hint that China will not be. Politico revealed on Friday that Sunak had appointed Henry de Zoete, a former special adviser to Michael Gove, to help organise the summit and advise Downing Street on AI more generally.

But experts say it remains highly unlikely the prime minister will succeed in a second mission: to persuade other countries to use the UK as a base for a new global AI regulator, along the lines of the International Atomic Energy Agency. This idea had already been mooted at the G7 summit in Hiroshima, where the US, Japan, Germany, France, Canada, Italy and the EU agreed a framework for working together to advance global governance of AI.

The EU is concerned that although it may produce the world’s first AI laws this year, there will be a gap to bridge between legislation and implementation. This week, the European Commission started preparing companies for digital services legislation coming into force in August, asking 44 companies including Google and Facebook to “immediately” start labelling AI-generated content.

Dragoș Tudorache, a Romanian MEP who is a co-rapporteur on the committee steering the EU’s AI act through the European parliament, said the UK was “late in the game”.

“All jurisdictions are waking up to a reality that we have seen coming and we have been discussing about for quite some time,” Tudorache said. “The idea should not be to start a race as to who hosts what. I think we need to use the political energy of all the leaders … and ask how do we now diligently, responsibly sit around a global table and think what do we do next?”

Prof David Leslie, the director of ethics and responsible innovation research at the Alan Turing Institute, said: “The UK has been a leader in AI policy innovation, but right now there are significant headwinds against setting up a new international regulatory body.”

Privately, British officials admit that securing agreement from a diverse array of countries, especially those in the EU with which the UK until recently had a fractious relationship, is unlikely. “Can you imagine getting the French to sign up to having the UK lead the way on AI regulation?” said one. “It’s not going to happen.”

Officials in the Department for Science, Innovation and Technology (DSIT) are busy speaking to industry figures about the department’s AI white paper, which was published in March but which critics say is already out of date. The department is consulting on the paper’s recommendations, which include a set of principles such as transparency, accountability and innovation but say relatively little about how to regulate individual threats.

Those close to the consultation, which closes on 21 June, acknowledge that the government’s response will have to contain more specific policy proposals than the white paper did. But they say it will not recommend setting up a dedicated AI regulator, something many in the industry have called for.

Labour is hastily working out its policy towards the technology. Last week, Lucy Powell, the shadow digital secretary, told the Guardian she wanted a licensing regime for those building large datasets on which to train AI tools. Such a model, which could work like those for medicines or nuclear power, would allow ministers to insist that developers share their datasets with the government, or that they sell them only to approved buyers.

Shadow ministers also say they would implement some form of centralised AI regulation if they win next year’s election, whether in the form of a coordinating unit between existing regulators or a separate regulator entirely.

But the party is hampered by the fact it does not yet have someone directly shadowing DSIT. Powell’s role covers everything from media regulation to arts funding to technology, and some in the party want Starmer to reshuffle his frontbench to create a science and innovation spokesperson. One Labour MP this week accused Powell of “freelancing” on the issue of AI, causing irritation among those close to her, who say it is a core part of her job.

While the government consults and Labour bickers, the technology is surging ahead. Facebook’s parent company, Meta, announced a new push into AI this week that would allow users of its Messenger app to generate their own artificially created images. And researchers in San Francisco found they could manipulate AI software made by Nvidia into revealing users’ personal information.

“Things have moved on quite quickly even since the white paper,” said Marion Oswald, a professor at Northumbria University who researches the interaction between technology and the law. “We need much more clarity on how you interpret the principles we have been talking about, rather than just leaving this to every regulator. Otherwise I think there is a risk we will end up making a lot of mistakes, and people will suffer as a result.”