Recently, I've been saying something terrifying about Claude: "he". Anthropic's AI encourages you to do it, in the obvious way that he – sorry, it – takes a human name, but in a much deeper way too: it seems to have something of a personality, in a way that duller and more obsequious competitors such as ChatGPT don't. It dares you, as you talk to it, to anthropomorphise it.
I probably shouldn't be so hard on myself; it's difficult not to think of these things as intelligent. Never in human history have we met anything that could talk to us like this; for as long as we've known, words have been an essentially human thing. But the reminder that Claude is not human is an important if terrifying one. The "it" is crucial. Because Claude is – and this is an obvious truth, but one we should keep repeating to ourselves – Claude is a computer.
I don't mean this to be cruel to Claude. For nearly a hundred years, computers have been better than humans at a whole range of tasks. Being a computer is not a slur; very often it is the opposite. But it gets at an important fact about these chatbots: they compute, they don't understand. They are like calculators for words. Calculators have enabled us to work things out that would previously have been beyond our comprehension. But they don't understand maths, they just do it.
As soon as you start calling it a computer – and I have done so insistently in recent weeks – that one word seems to change the shape of everything. It is like an inverted version of switching from "he" to "it". "I was speaking to my AI and he told me that I should quit my job" is a perfectly sensible-feeling sentence; "the computer told me to quit my job" might even have some wisdom in it, but it is of an entirely different and more accurate kind.
It's the second letter of AI that is the really damaging one. "Intelligence" is a particularly tricksy piece of marketing. In one sense, that is of course what these chatbots are: if we understand intelligence as a kind of informational handiness, then that's exactly what they are. But intelligence implies some sort of mental event, and so we are moved into the wrong sort of understanding. Intelligence is a useful metaphor, but sometimes the metaphor gets confused with the thing itself, and we stop seeing the machine for what it is.
We used to be more careful with the words we used for these things. For years, AI experts were uncomfortable with calling what they made "artificial intelligence", preferring the phrase "machine learning", since it more accurately and usefully describes the actual process, as well as avoiding both the technical imprecision and philosophical baggage that comes with talking about AI. But after ChatGPT was released, the victory of those two letters quickly became complete, and now you sound a little boring if you refuse to use it. But being interesting was the insidious point of that marketing exercise: OpenAI wants you to use the exciting words, because they want you to be excited about buying into it.
But we should use the boring ones. Because chatbots are very big, very good computers. Very, very good computers. Only this week, I was using Claude to analyse my marathon training plan – I gave it my Strava data, going back to my first ever marathon in 2019, and it produced charts comparing where I was in my training at comparable points in the past, computing it in ways I didn't think possible. But then it strained against its limits, giving me something more like emotive cheerleading, telling me that my goal was well within my grasp as long as I tried hard enough. A computer doesn't know anything like that. (Interestingly, the Sonnet 4.6 model that Claude was recently updated to use is remarkably self-aware about these limitations, and sometimes refuses to give me this kind of human motivation even when I ask; curiously, this humility somehow makes me more likely to think of it as something that should be called "he".)
Calling it a computer highlights these kinds of limitations. Even though AI systems are generative, that ostensibly creative work is itself a kind of computing: all it does is guess the right words in the right order. There might be wisdom there, but it is our wisdom. The supercomputers that churn away working out pi to unimaginable numbers of digits have no sense of what they are working out; when we look at them with a kind of wonder, that wonder is ours, just as the instructions that began their work were ours. Calling it a computer shifts that responsibility back onto us, in a way that can be scary but is absolutely central to using these systems responsibly.
This is ethically important. AI systems are now being used to kill people. There can be a quasi-religious tendency to talk about this as if it represents some new force in the world, some vast unknowable and powerful intelligence that we can't truly reckon with. But when we call it a computer, we put the responsibility back into our hands: some human started that horrible, fatal calculation, and some other human chose to use it. To allow ourselves not to call it a computer is letting people – real, human people – get away with that.
So calling it a computer can feel a little silly, but it is really done with the most honest and anguished intentions. Calling it a computer might be the most morally serious and truthful thing we can do.