a rickety bridge of impossible crossing

neural networks?! i sure *hope* it does!

Machines have had at least a little bit of sentience for a long time. But also, sentience isn't really the high bar for measuring intelligence that people seem to think it is? All it means to be sentient is that you're able to perceive and react to the world around you. Sentient comes from the Latin root *sentio*, meaning to perceive or feel, the same root word as sense. Fish are sentient, bugs are sentient, I think technically even some kinds of fungus are sentient. Computers are lousy with sensors these days; if I were someone who'd get into a self-driving car, I'd hope it'd be at least as sentient as a bug. (Tune into Fox this summer for the hit new game show, Is Your Tesla Smarter Than A Dung Beetle?)

robo sapiens

What people mean when they say an AI is sentient is that it has qualia, or consciousness, or some sort of uniquely human-like intelligence. To keep things simple, I'll call this "sapience", even though that's sort of a circular definition. The fact that we don't even have a good word to describe this should be a hint about how real it is, but let's entertain the notion for a minute. Is LaMDA sapient? No, but if it were, it'd be relatively straightforward to prove. I'd start by administering the wug test and proceed from there.

If I were to point to one thing that explains how human intelligence is different from animal or computer intelligence, it would be our intuition for language. The wug test is the most basic psycholinguistic benchmark for this intuition. It goes like this:

This is a wug. Now there are two of them. There are two ____

It'd be very easy to program an AI to pass this sort of test, but you can keep going. "Wug means a fruit. Yug means a vegetable. I put 10 apples in a basket, so I have a basket of wugga. I put 10 carrots in a box. What do I have a box of?"

Everyone reading this and most small children could answer without thinking, but can a chatbot? Maybe, if it's been specifically programmed with this sort of task in mind, but there's a limit. If you keep pushing, eventually the if-then routine behind the curtain will be exposed.
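
To make that concrete, here's a minimal sketch of the kind of if-then routine I mean. Everything in it (the name, the rules, the lexicon) is made up for illustration; no real chatbot is this crude, but the failure mode is the same in spirit. It aces the basic wug test and face-plants the moment you push past its rules:

```python
import re

def naive_wug_bot(prompt: str) -> str:
    """A brittle rule-based responder: enough to ace the basic wug test,
    nowhere near enough to understand the words it's using."""
    # Rule 1: "This is a X." plus a mention of "two" -> pluralize X.
    match = re.search(r"[Tt]his is a (\w+)\.", prompt)
    if match and "two" in prompt:
        return match.group(1) + "s"  # wug -> wugs; looks clever exactly once

    # Rule 2: a hard-coded lexicon, i.e. the if-then routine behind the curtain.
    lexicon = {"wug": "a fruit", "yug": "a vegetable"}
    for word, meaning in lexicon.items():
        if f"what does {word} mean" in prompt.lower():
            return f"{word.capitalize()} means {meaning}."

    # Anything compositional falls through: the bot never learned that
    # "wugga" marked a collective of wugs, because it only matches strings.
    return "I'm not sure what you mean!"

print(naive_wug_bot("This is a wug. Now there are two of them. There are two ____"))
# -> wugs
print(naive_wug_bot(
    "Wug means a fruit. Yug means a vegetable. I put 10 apples in a basket, "
    "so I have a basket of wugga. I put 10 carrots in a box. "
    "What do I have a box of?"))
# -> I'm not sure what you mean!  (the curtain, lifted)
```

The point isn't that LaMDA is literally a pile of regexes; it's that from the outside, you can't tell the difference until someone probes it this way.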

I'm not saying the wug test is the be-all-end-all for measuring sapience. If a 5-year-old doesn't pass the wug test, that certainly doesn't mean they're not sapient; it just means they need a little more time and attention than others. Heck, I won't even flat-out deny that octopuses are sapient just because they don't communicate in any language we can perceive; this research is all very new and uncertain even in humans, so we can't expect to be able to say whether non-humans have it too. We'd have to study octopus brains for 100 years too, and honestly, who has the time?

But if someone's going to claim that a machine is sapient because it can say philosophical-sounding things in our language, they need to prove that it actually understands our language, and wug tests would be the first step (a sketch of what that could look like follows below). I suspect the reason all you see is out-of-context quotes and facilitated communication is that if anyone started running these sorts of tests, the illusion would shatter pretty quickly. The whole thing reminds me of another supposed breakthrough in nonhuman language.¹
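
And actually running those tests would take all of a dozen lines. A sketch, where `ask` is a stand-in for whatever function sends a prompt to the bot under scrutiny and returns its reply (an assumption on my part, not any real API):

```python
# Wug-style probes: (prompt, a substring a competent answer should contain).
WUG_PROBES = [
    ("This is a wug. Now there are two of them. There are two ____",
     "wugs"),
    ("Wug means a fruit. Yug means a vegetable. I put 10 apples in a basket, "
     "so I have a basket of wugga. I put 10 carrots in a box. "
     "What do I have a box of?",
     "yugga"),
]

def run_wug_probes(ask) -> None:
    """Feed each probe to `ask` and check the reply for the expected form."""
    for prompt, expected in WUG_PROBES:
        reply = ask(prompt)
        verdict = "pass" if expected in reply.lower() else "FAIL"
        print(f"[{verdict}] expected {expected!r}, got {reply!r}")

# e.g. run_wug_probes(naive_wug_bot) from above: one pass, one FAIL.
```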

conclusion

Google isn't a research institution; it's a corporation. Everything it (or an employee speaking for it) says is an ad, especially the stuff that sounds like science fiction. If it sounds like science fiction, that's because it is 🦝


  1. Terrace, Herbert, et al. "Can an Ape Create a Sentence?" [PDF link] (Science, 1979)

#currents