(Apologies for the technical error that meant last week's email was initially sent out in place of this week's one. Here it is, correctly – if a little late – this time!)
Happy 10 February: Safer Internet Day. This year, the internet feels less safe than ever.
Every time the annual event rolls around, it comes with much the same advice, most of it good. It has a focus on children, though the issues it deals with affect everyone, and in some cases adults actually seem more at risk: child sexual abuse, cyberflashing, gaming, misinformation, the dangers of social media. But the threats are changing.
The official celebrations recognise that change. The theme this year is slightly unwieldy, but very much of the moment: "Smart tech, safe choices – Exploring the safe and responsible use of AI". But AI might not just be a topical theme for the year; it might mark an entire change in how we think about the internet and safety.
There are the obvious threats. AI is making it trivially easy to pretend to be someone else, which in turn is useful in all sorts of malicious ways. It is empowering cyber attackers at a nearly unimaginable scale. Just as it might transform every other industry by making it more efficient, it will do the same to cyber crime.
But it also represents an entirely different threat. Children are growing up in a world where one of the primary ways of using the internet is not only dangerous but obliging. Every technology this century has made it easier to find content of all types, and that includes dangerous content. But AI might be the first technology that not only shows you horrible content but explicitly tells you that you're great for seeking it out.
And of course the threat doesn't end with children. Safer Internet Day focuses on young people for obvious reasons, but older people face new kinds of threats, particularly from AI. Young people – and everyone who grew up online, really – are used to the internet lying to them. But I have watched several older people in my life, not equipped with the skepticism required to be online these days, be taken in by the dangerous outputs of artificial intelligence.
Fake videos, claiming to show events that are in fact entirely fictional, shared across Facebook. Chatbots that write both convincingly and charmingly, talking people into dangerous and pseudo-intimate relationships. Lies churned out at the speed of electricity. And the longer you've been around – the longer you've lived in a world where videos were proof of reality, where text was only produced by a writer – the less equipped you might be to spot those fictions.
In 2023, Safer Internet Day had the theme "Want to talk about it? Making space for conversations about life online", and it made it explicitly clear that the point was not to tell children what to do but to engage them in conversation. And some of those conversations were aimed at going the other way, with children and young people offering advice to the older people in their lives.
Three years on, the topic is perhaps more relevant than ever. Now, maybe the only way to stay safe on the internet is to have these kinds of conversations. We are all of us grappling with a threat we don't quite understand, one that in some ways is impossible to understand; the only way to wrestle with it is in collaboration and through connection.
Partly because everyone has something to offer. But partly, and more profoundly, because this danger threatens to undermine exactly that spirit of togetherness itself.