TechScape: How the UK forced global shift in child safety policies


I bring good news: regulation works. The last month has brought a flurry of changes to major tech platforms related to child safety online, and specifically to the use and protection of children’s personal data.

First, there was Instagram. In late July, Facebook announced some sweeping changes to the platform, billed as “giving young people a safer, more private experience”. The company began giving those under 16 private accounts by default, ensuring that kids only share content publicly if they actively dive into settings and change their privacy preferences accordingly.

It also introduced a new set of restrictions for people with “potentially suspicious accounts” – “accounts belonging to adults that may have recently been blocked or reported by a young person for example”. In other words, if you’re a creep who goes around messaging kids, you’ll soon find that young people don’t show up in your algorithmic recommendations; you won’t be able to add them as friends; you won’t be able to comment on their posts; and you won’t be able to read comments others have left.

Finally, the platform announced “changes to how advertisers can reach young people with ads”. People under 18 can now only be targeted on Instagram by “their age, gender and location”: the vast surveillance apparatus that Facebook has built will not be made available to advertisers. Instagram’s rationale is that, while the platform “already [gives] people ways to tell us that they would rather not see ads based on their interests or on their activities on other websites and apps … young people may not be well equipped to make these decisions.”

At the time, I found that last change the most interesting one by far, because of the implicit claim it was making: that it’s bad to target people with adverts if you’re not absolutely certain that’s what they want. Facebook would hardly accept that targeted advertising can be harmful, so why, I wondered, was it suddenly so keen to make sure that young people weren’t hit by it?

Along came Google

Then YouTube announced a surprisingly similar set of changes, and everything started to make a bit more sense. Again, the default privacy settings were updated for teen users: now, videos they upload will be private by default, with users under 18 having to manually dig into settings to publish their posts to the world.

Again, advertising is being limited, with the company stepping in to remove “overly commercial content” from YouTube Kids, an algorithmically curated selection of videos that are supposedly more child-friendly than the main YouTube catalogue. On YouTube proper, it has updated the disclosures that appear on “made for kids” content containing paid promotions. (Paid promotions are banned on YouTube Kids, so despite being officially “made for kids”, such content isn’t allowed on the platform explicitly for kids. Such is the way of YouTube.)

YouTube also introduced a third change, adding to and updating its “digital wellbeing” features. “We’ll be turning take a break and bedtime reminders on by default for all users ages 13-17 on YouTube,” the company said. “We’ll also be turning autoplay off by default for these users.” Both settings can be overruled by users who want to change them, but they will provide a markedly different experience by default for kids on the platform.

And TikTok makes three

A couple of days behind Google came TikTok, and everything clicked into place.
From our story: TikTok will prevent teenagers from receiving notifications past their bedtime, the company said … [It] will no longer send push notifications after 9pm to users aged between 13 and 15. For 16- and 17-year-olds, notifications will not be sent after 10pm. People aged 16 and 17 will now have direct messages disabled by default, while those under 16 will continue to have no access to them at all. And all users under 16 will now be prompted to choose who can see their videos the first time they post them, ensuring they do not accidentally broadcast to a wider audience than intended.

It’s probably not a coincidence that three of the largest social networks in the world all announced a raft of child-safety features in the summer of 2021. So what could have prompted the changes?

Age appropriate

Well, in just over two weeks’ time, the UK is going to begin enforcing the age appropriate design code, one of the world’s most wide-ranging regulations controlling the use of children’s data. We’ve talked about it before on the newsletter, in one of the B-stories in July, and I covered it in this Observer story:

The code, which was introduced as part of the same legislation that implemented GDPR in the UK, sees the Information Commissioner’s Office laying out a new standard for internet companies that are “likely to be accessed by children”. When it comes into force in September this year, the code will be comprehensive, covering everything from requirements for parental controls to restrictions on data collection and bans on “nudging” children to turn off privacy protections.

I asked the platforms whether the changes were indeed motivated by the age appropriate design code. A Facebook spokesperson said: “This update wasn’t based on any specific regulation, but rather on what’s best for the safety and privacy of our community. It’s the latest in a series of things we’ve introduced over recent months and years to keep young people safe on our platforms (which have been global changes, not just UK).”

TikTok declined to comment on whether the changes were prompted by the code, but I understand that they were – though the company is rolling them out globally because, once it had built the features, it felt it was the right thing to do. And according to Google, the updates were core to the company’s compliance with the AADC, though it said it was aiming beyond any single regulation – but it also wouldn’t comment on the record.

I also called up Andy Burrows, the head of child safety online policy at the NSPCC, who shared my scepticism at claims that the timing of these launches could be coincidental. “It is no coincidence that the flurry of announcements that we’ve seen comes just weeks before the age appropriate design code comes into effect,” he said, “and I think it’s a very clear demonstration that regulation works.”

The lack of public acknowledgment from the companies that regulation has influenced their actions is in stark contrast to the response to GDPR three years ago, when even Facebook had to acknowledge that it didn’t suddenly introduce a whole array of privacy options out of the goodness of its heart. And the silence has led to an odd gap at the heart of coverage of these changes: they’ve had widespread coverage in the tech press, as well as in many mainstream American papers, with barely a whisper of acknowledgment that they are almost certainly down to regulation in a mid-sized European market.
That, of course, is exactly how the tech companies would want it. Recognising that even a country as comparatively minor as the UK can still pass regulations that affect how platforms work globally is a shift in the power relationship between multinational companies and national governments, and one that might spark other nations to reassess their own ability to force changes upon tech companies.

Not that everyone is fully compliant with the age appropriate design code. The big unanswered question is around verification, Burrows points out: “The code is going to require age assurance, and so far we haven’t seen publicly many, or indeed any, of the big players set out how they’re going to comply with that, which clearly is a significant challenge.”

In everything I’ve written above – every single restriction on teen accounts – the platforms are fundamentally relying on children to be honest as part of the sign-up process. It’s hard to verify someone’s age online, but very soon UK law isn’t going to take “it’s hard” as a sufficient excuse. The next few weeks are going to be interesting.
