Snapchat must have known what it was doing when it called its photo tool "Memories". Really, it is a photo backup tool like any other: pictures taken with the app can be stored there. But calling them memories was an explicit recognition that those pictures are not just files, but pieces of people's most precious moments.
It meant that when Snapchat recently announced it would start charging for the storage of those memories, the move was both a moral and a grammatical outrage. How can we run out of storage space for memories? How dare anyone ask us to pay a recurring fee to keep our memories safe?
But the recent march of technology suggests that moments like this are going to keep happening. Companies increasingly treat our data as something to own, to fight over, to charge for – anything but give it back to us.
Around the same time as Snapchat's announcement, the fitness social media platform Strava announced that it was suing Garmin. The case is complicated, at once wide-ranging and fairly niche, since it relates to the segments and heat maps that appear on both platforms. But the real heart of the argument seems to be about who gets to show data, and how: Garmin wants Strava to display the fact that a workout was done using one of its devices, and Strava won't.
Notably, in one of Strava's many statements about the fallout, it seemed to recognise the emotional and moral pull of owning your own data. It suggested that it shouldn't be required to show Garmin's information because, in its words, "We consider this to be YOUR data". "If you recorded an activity on your watch, we think that is your data," a company representative wrote. Whether or not that's true, the company knew how it needed to make the argument.
The fitness world is a useful test case for arguments about who owns what data, because it runs on the idea of sharing it around. Runners record a run on a Garmin smartwatch, say; that data gets sent up to Strava so that friends can see it, and to training platforms that use it to recommend future runs, which might in turn be sent back down to the Garmin watch. Until now, there has been something slightly retro about the relatively good relationship between all of those technology companies. But even the utopia of fitness tech appears to be getting muddied by greed, as the rush to own as much data as possible continues.
There is something deeply frustrating about generating that data – especially when doing so requires a long workout – and not feeling like it is yours. But that is happening more and more, especially as AI platforms turn data into the central and most in-demand resource on the internet, required not only to power the offerings of today but to train those of the future as well.
Reddit and Wikipedia, for instance, remain two of the most useful troves of quality training data for AI. But the companies that use them don't actually need to worry about the people who created that data – that learning, information, and much else besides – in the first place, since it was contributed on a voluntary and often anonymous basis. Often they don't even need to worry about the companies that look after that data, though Reddit is increasingly positioning itself precisely as a collection of dependable words for training AI systems.
All of this matters not only because that data can often be a representation of our intimate, important moments, but also because we have no real way of sharing it outside of this system, which so powerfully makes you feel that the data wasn't yours in the first place. The dream of a web powered by a host of decentralised servers all talking to each other is over, even if some people try to revive it. If you're going to share photos with your friends, you're probably going to have to do so on Instagram, at which point they sort of stop being your photos at all.
This might be some of the reason for the retreat into the group chat. Many of the most popular messaging platforms – iMessage, for instance, and even Meta's WhatsApp – make big privacy promises that they actually keep: both are end-to-end encrypted, which means they cannot read the content of messages even if they wanted to. Chats are safe from prying eyes – or AIs.
Privacy has long felt like something of an abstract concern, in part because it has always seemed so impossible on the internet. But as the web becomes a training ground for AIs, we might finally want to take proper control of our data. Until we do, we won't just be paying for our memories; we'll be having them used to train whole new machines too.