Action on sexual abuse images is overdue, but Apple’s proposals bring other dangers

14 August 2021

Last week, Apple announced two backdoors in the US into the encryption that protects its devices. One will monitor iMessages: if any photos sent by or to under-13s seem to contain nudity, the user may be challenged and their parents may be informed. The second will see Apple scan all the images on a phone’s camera roll and flag as suspect any that are similar to known sex-abuse images. If enough suspect images are backed up to an iCloud account, they’ll be decrypted and inspected. If Apple thinks they’re illegal, the user will be reported to the relevant authorities.

Action on the circulation of child sexual abuse imagery is long overdue. Effective mechanisms to prevent the sharing of images and the robust prosecution of perpetrators should both receive the political priority they deserve. But Apple’s proposed measures fail to tackle the problem – and provide the architecture for a massive expansion of state surveillance.

Historically, the idea of scanning customers’ devices for evidence of crime comes from China. It was introduced in 2008, when a system called Green Dam was installed on all PCs sold in the country. It was described as a porn filter, but its main purpose was to search for phrases such as “Falun Gong” and “Dalai Lama”. It also made its users’ computers vulnerable to remote takeover. Thirteen years later, tech firms in China are completely subservient to the state – including Apple, which keeps all the iCloud data of its Chinese customers in data centres run by a state-owned company.

Scanning photos is tricky to do at scale. First, if a program blocks only exact copies of a known illegal image, people can just edit it slightly. Less skilled people might go out and make fresh images, which in the case of sexual abuse imagery means fresh crimes. So a censor wants software that flags up images similar to those on the block list. But there are false alarms, and a small system of the kind that will run on a phone might have an error rate as high as 5%. Applied to the 10bn iPhone photos taken every day, a 5% false-alarm rate could mean 500m images sent for secondary screening.

To prevent this, Apple will act only if the primary screening on a phone detects a certain threshold of suspect images, probably 10 of them. Each photo added to a camera roll will be inspected and, when it’s backed up to iCloud, it will be accompanied by an encrypted “safety voucher” saying whether it’s suspect or not. The cryptography is designed so that once 10 or more vouchers are marked as unsafe, Apple can decrypt the images. If they look illegal, the user will be reported and their account will be locked.

A well-known weakness of machine-learning systems such as the one Apple proposes is that it’s easy to tweak a photo so that it’s categorised incorrectly. Pranksters may tweak photos of cats so that phones mark them as abuse, for example, while the gangs who sell real abuse images work out how to sneak them past the censor.

But images are only part of the problem. Curiously, Apple proposes to do nothing about live streaming, which has been the dominant medium for online abuse since at least 2018. And the company has said nothing about how it will track where illegal images come from.
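To make those numbers concrete, here is a minimal sketch in Python of the mechanism as described above: similarity matching against a blocklist of perceptual hashes, a per-account reporting threshold, and the false-alarm arithmetic. The hash-distance cut-off and the names used are illustrative assumptions for this piece, not Apple’s published NeuralHash or safety-voucher protocol.

# Illustrative sketch only, not Apple's actual design: it models "similar to a
# blocklisted hash" matching, the per-account threshold, and the article's
# back-of-the-envelope sums. The cut-off and threshold values are assumptions.

MATCH_DISTANCE = 8      # Hamming distance at which two hashes count as "similar" (assumed)
REPORT_THRESHOLD = 10   # suspect vouchers needed before an account is inspected


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hash values."""
    return bin(a ^ b).count("1")


def is_suspect(photo_hash: int, blocklist: set[int]) -> bool:
    """Flag a photo whose perceptual hash is close to any blocklisted hash.
    Loosening MATCH_DISTANCE catches lightly edited copies of known images,
    but it also raises the false-alarm rate on innocent photos."""
    return any(hamming(photo_hash, bad) <= MATCH_DISTANCE for bad in blocklist)


def account_flagged(suspect_voucher_count: int) -> bool:
    """The threshold gate: nothing is decrypted until enough vouchers are suspect."""
    return suspect_voucher_count >= REPORT_THRESHOLD


if __name__ == "__main__":
    # The article's arithmetic: a 5% per-image error rate across roughly 10bn photos a day.
    false_alarm_rate = 0.05
    photos_per_day = 10_000_000_000
    print(f"{false_alarm_rate * photos_per_day:,.0f} images a day for secondary screening")
    # Prints 500,000,000, which is why the design gates any report on roughly
    # 10 suspect vouchers per account rather than acting on a single match.

Note that nothing in this gate depends on what the blocklist contains; the same code runs unchanged on whatever set of hashes it is handed.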
But if the technical questions are difficult, the policy questions are far harder. Until now, democracies have allowed government surveillance in two sets of circumstances: first, if it is limited to a specific purpose; second, if it is targeted at specific people. Examples of special-purpose surveillance include speed cameras and the software in photocopiers that stops you copying banknotes. Targeting specific people usually requires paperwork such as a warrant. Apple’s system looks like the first of these – but once it is built into phones, Macs and even watches, in a way that circumvents their security and privacy mechanisms, it could scan for whatever else – or whoever else – a government demands.

These concerns are not abstract, nor are they limited to countries considered authoritarian. In Australia, the government threatened to prosecute a journalist over photos of Australian troops killing civilians in Afghanistan, arguing that the war-crime images were covered by national security laws. In addition, Australian law empowers ministers to compel firms to retrain an existing surveillance system on different images, vastly expanding the scope of Apple’s proposed snooping. Closer to home, the European Union has just updated the law allowing tech firms to scan communications for illegal images, and has announced that a new child-protection initiative will extend to “grooming”, requiring firms to scan text too. In Britain, the Investigatory Powers Act will also enable ministers to order a firm to adapt its systems, where possible, to assist in interception. Your iPhone may be quietly looking for missing children, but it may also be searching for the police’s “most wanted”.

Legally, the first big fight is likely to be in the US, where the constitution forbids general warrants. But in a case about drug sniffer dogs, a court found that a search that finds only contraband is legal. Expect the supreme court to hear privacy advocates claiming that your iPhone is now a bug in your pocket, while Apple and the FBI argue that it’s just a sniffer dog.

Politically, the tech industry has often resisted pressure to increase surveillance, but now that Apple has broken ranks it will be harder for other firms to resist government demands. Child protection online is an urgent problem, but this proposal will do little to prevent these appalling crimes while opening the floodgates to a significant expansion of the surveillance state.

Ross Anderson is professor of security engineering at Cambridge University and at Edinburgh University
