I for one welcome our new computer overlords.

– Ken Jennings, ex-Jeopardy! grand champion

I’ve been thinking about Watson, the supercomputer that got its 15 minutes of fame last week after leaving microchip tracks all over the backs of humanity’s top quiz whizzes.  We know now, after years of hype, what Watson can do.  But what does Watson mean?  There’s been a lot of ink flying around lately regarding that question, but I’ve yet to see a clear answer emerge from the general splatter.  The mainstream media, it seems, is not going to be much help:

What humans have that [computers] do not have and will not get is the sort of thing that makes song, romance, smiles, sadness and all that jazz.  It’s something the experts…know very well because they can’t figure out how it works in people, much less duplicate it. It’s that indescribable essence of humanity.

In other words, Watson will never be a real boy:  he lacks that, you know, that something — which, whatever it is, is so obvious that it hardly needs to be articulated.  This sort of glib reassurance has been a very popular response to the whole affair, but it sure leaves a lot of questions unanswered (or answers unquestioned, in keeping with the Jeopardy! spirit).  A suggestion for those who are anxious about the prospect of being overthrown:  barricading yourself behind an ‘indescribable essence’ is not the most daunting defense.

If you don’t watch Jeopardy! and you read commentaries like the one quoted above, you could be forgiven for thinking of Watson as nothing but a turbocharged calculator, relying on brute force to crunch the data faster and harder than its human opponents.  Big whoop, you might say.  We’ve been relying on computers to crunch our data for us for decades now.  And if the game had been made up of questions like ‘What is the capital of Burkina Faso?’ or ‘What is the sum of 5,891 and 12,435?’, Watson’s accomplishment would have been underwhelming indeed.  But the reason IBM saw Jeopardy! as such a tantalizing challenge was precisely that it’s not just about crunching data.  (If it had been, they would have been better off bringing in Tianhe-1, the just-unleashed Chinese machine that is to Watson what Usain Bolt is to yours truly.)  Consider the following, each of which Watson answered correctly (though its answer to the first wasn’t specific enough to get credit), and think about all the mental work, conscious and sub-, that goes into parsing each clue and homing in on a solution.  And if it doesn’t seem like much work, think harder.

It was the anatomical oddity of U.S. gymnast George Eyser, who won a gold medal on the parallel bars in 1904.

Wanted for a 12-year crime spree of eating King Hrothgar’s warriors; officer Beowulf has been assigned the case.

In English law, it’s a title above a gentleman & below a knight; in the U.S., it’s usually added to the name of an attorney.

William Wilkinson’s “An Account of the Principalities of Wallachia and Moldavia” inspired this author’s most famous novel.

A Dana Carvey character on “Saturday Night Live”; isn’t that special…

In order to unpack each sentence, correctly interpret each word in context (such as ‘wanted’ or ‘title’), discard the irrelevant bits, assess the unexpected juxtapositions (‘officer Beowulf’), sniff out trails of associations through multiple sources, and quickly whittle those associations down to a single answer that makes sense, Watson had to do a lot more than shuffle digits.  In fact, it had to do something very much like understanding English.  But of course that’s preposterous, because we know that as a computer, all Watson really does is shuffle digits.  Right?  Well, for that matter, all that its hapless opponents really did was shuffle synapses.  What does Watson have to do to earn, as Ken and Brad and you and I have, a higher level of description?  More on this later.

Of course, Watson’s inevitable failures offer plenty of ammunition for artificial-intelligence skeptics. Its biggest blunder last week came, awkwardly enough, during the Final Jeopardy round, when the category was announced as ‘U.S. Cities’ and the clue was:  ‘Its largest airport is named for a World War II hero; its second largest, for a World War II battle.’  Watson is programmed to buzz in only when it reaches a certain threshold of confidence, but since everyone has to put down an answer in Final Jeopardy, it was forced to reveal its wild guess:  ‘Toronto?????’  What a dope!  What a cretin!  Everyone knows that Toronto’s not a U.S. city!  I guess we can all breathe easy about those would-be computer overlords after all.

And yet the more I think about that response, the more intelligent it seems, in a way — a quite human way, in fact. Like me, Watson didn’t know the answer; like me, it must have formulated a guess partly by hunting for clues, and partly by a process of elimination.  For instance:  the answer is going to have to be a large city, with multiple airports that one might have heard of; that narrows the list quite a bit.  In the 30 seconds allotted for responding, Watson, like me, no doubt picked the question apart and scrambled through the various possibilities, hoping for some faint bell to ring.  Unlike me, Watson apparently thought of Toronto, and perhaps dredged up the datum that the city’s Pearson and Bishop airports both have World War associations (both men served in WWI, as it so happens).  That being its strongest lead when time was up, Toronto became Watson’s reluctant answer.

But wait — why was Toronto on the list at all?  Because Watson was thinking outside the box.  Thanks to its training, it was smart enough to recognize that categories in Jeopardy! are very often playfully misleading; ‘Country Clubs’ might include clues like ‘This is the French word for “stick.”’  So when it failed to quickly find a match among U.S. cities in the strict sense, Watson likely tried going down side roads.  Perhaps, as Watson’s lead designer suggested afterwards (he cheekily wore a Blue Jays jacket to his interview), it fixated on the fact that Toronto fields a team in baseball’s American League, and decided this might be a clever reinterpretation of the category.  You and I know this just doesn’t work — the joke isn’t funny — but had that category been ‘American Cities,’ it almost would have.  Watson’s answer was amusingly out of touch, but it wasn’t unreasonable; it strikes me as just the sort of mistake an intelligent extraterrestrial might make, after boning up on Earthlings for a few years via books and newspapers.  The point is that Watson, when backed into a corner, seems to have responded creatively and unexpectedly — qualities we’re not used to attributing to computers.
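Mechanically, none of this is mysterious.  Here is a minimal sketch, in Python, of the answer policy described above: buzz in only when confidence clears a threshold, but surrender the best available guess when an answer is forced.  The threshold, the scores, and the function names are all my inventions, not anything drawn from IBM’s actual DeepQA system.

```python
# A toy version of the answer policy described above. Not IBM's DeepQA
# code; the names, scores, and threshold are all hypothetical.

BUZZ_THRESHOLD = 0.5  # assumed confidence needed to risk buzzing in

def respond(candidates, final_jeopardy=False):
    """candidates: dict mapping candidate answers to confidences in [0, 1]."""
    best_guess, confidence = max(candidates.items(), key=lambda kv: kv[1])
    if final_jeopardy:
        # Everyone must write something down in Final Jeopardy, so even
        # a weak lead gets revealed (question marks and all).
        return best_guess
    # In a regular round, stay silent unless confidence clears the bar.
    return best_guess if confidence >= BUZZ_THRESHOLD else None

# The Toronto moment, roughly: a pile of weak leads, one forced answer.
print(respond({"Chicago": 0.14, "Toronto": 0.21}, final_jeopardy=True))
```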

For some people, the key word here is seems.  You can simulate intelligence all you want, they say; that doesn’t make it the real deal.  Literary and cultural critic Stanley Fish believes this.  In a recent Times Opinionator piece, which brushes off Watson as a glorified spell-checker, Fish plays a philosophical trump card by invoking the hallowed name of Wittgenstein.  Here’s what that ‘indescribable essence’ is, he says, that will always separate us from the machines:  it’s a form of life.  Although Wittgenstein himself (who coined it) never fully fleshed out the term, Fish understands it to mean the ‘situation,’ the ‘context,’ the ‘world’ within which every human intelligence operates.  He finds it all but obvious that such a world is categorically off-limits to the fleshless:  ‘The computer inhabits nothing and has no purposes and because it has no purposes it cannot alter its present (wholly predetermined) “behavior” when it fails to advance the purposes it doesn’t have.’

That’s an awfully dogmatic statement.  Sure, Watson is not going to shoot spitballs at Trebek or walk offstage to grab a sandwich; those particular behaviors are off-limits to this particular machine, just as they are to an ant or a fish or a newborn.  But Watson does inhabit a world, albeit the extremely circumscribed world of one particular game show.  It does have purposes, albeit the extremely trivial purposes of answering quiz questions and amassing prize money.  For that matter, your world and mine would no doubt seem circumscribed, and our purposes trivial, from some loftier vantage.  And to call Watson’s behavior ‘wholly predetermined’ is a stretch:  after all, Watson has read things that its creators haven’t (about 200 million pages’ worth), drawn connections that they’ve never imagined, and surprised them continually with the things it does and doesn’t know.  (Years ago during early training, Watson answered a question about Louis Pasteur by citing the ’70s cannibal flick ‘How Tasty Was My Little Frenchman?’)  Like humans, Watson has a predetermined program — call it a genome — that is designed to help it cope, in often unexpected ways, with an unpredictable world.

But these objections aside, it’s not really fair to pick on Watson, is it?  As its proud parents at IBM readily point out, Watson was obviously not designed to simulate a human; it was designed to simulate a top-level Jeopardy! player, and that’s it.  If they had wanted to, they could have built in a module that improvises marimba solos, or one that composes corny puns, or one that detects opponents’ emotions and sheds tears of empathy.  They could have built in a general decision module, a frontal lobe of sorts, that chose which of these motley behaviors to engage in, depending on an instantaneous assessment of internal states or external stimuli.  They could have given Watson a pair of giant robotic legs and had it shamble around the set, looking for a sandwich.  They could have endowed it, cruelly, with embarrassing sexual urges that clouded its Jeopardy! judgment.  At what point could we say that Watson had entered a ‘situation’?
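And that imagined frontal lobe is, structurally, nothing exotic.  Here is a toy dispatcher that picks a behavior module based on an instantaneous reading of the stimuli; every module and stimulus name below is my own invention, offered only to show how mundane the architecture would be.

```python
# The hypothetical 'frontal lobe' as a dispatcher over behavior modules.
# All names here are invented placeholders; the point is only that
# choosing among motley behaviors is an unremarkable piece of design.

def marimba_solo(stimuli):  return "improvises a marimba solo"
def corny_pun(stimuli):     return "composes a corny pun"
def empathy_tears(stimuli): return "sheds tears of empathy"
def answer_clue(stimuli):   return "buzzes in with an answer"

def frontal_lobe(stimuli):
    # An instantaneous (and crude) assessment of external stimuli.
    if stimuli.get("opponent_distress", 0) > 0.8:
        return empathy_tears(stimuli)
    if stimuli.get("clue_on_screen"):
        return answer_clue(stimuli)
    if stimuli.get("audience_bored"):
        return corny_pun(stimuli)
    return marimba_solo(stimuli)

print(frontal_lobe({"clue_on_screen": True}))  # buzzes in with an answer
```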

Now hold on just a minute! you may say.  Perhaps I’m taking way too many liberties with the language here.  Sure, you could program in some machine-code equivalent of raging hormones; you could devise some algorithm that scrambled Watson’s answers by x amount when those ‘hormones’ raged; but to say that Watson, this machine, this heap of metal, would then ‘feel’ some kind of ‘urges’ is absurd.  A computer can’t truly feel anything, just as a computer can’t truly understand anything.  This is part of Fish’s claim (‘the computer doesn’t know anything in the relevant sense of “know”’), and it’s been echoed by Stephen Baker, the guy who literally wrote the book on Watson:

See, Watson isn’t nearly as smart as it looks on TV. Outside of its specialty of answering questions, the computer remains largely clueless. It knows nothing. When it comes up with an answer, such as “What is ‘Othello’?,” the name of Shakespeare’s play is simply the combination of ones and zeros that correlates with millions of calculations it has carried out. Statistics tell it that there is a high probability that the word “Othello” matches with a “tragedy,” a “captain” and a “Moor.” But Watson doesn’t understand the meaning of those words any more than Google does, or, for that matter, a parrot raised in a household of Elizabethan scholars.

Really?  Watson doesn’t understand the meaning of words any more than a parrot does?  What exactly do we know about Othello that is fundamentally, decisively, categorically unknowable by a computer — even a computer with a computation speed (80 trillion operations per second) and a memory capacity (4 terabytes) that, while fairly puny by the standards of today’s supercomputers, already rival our best estimates for those of the human brain?  How exactly do we ourselves come to understand the words ‘tragedy,’ ‘captain,’ and ‘Moor,’ if not by ‘correlating them with millions of calculations’?  At what point in our education can we claim to have truly grasped these words, as opposed to ‘just’ assessing their relationships?  Anyone with a small child can testify that language learning is a haphazard, probabilistic, indefinite process, not unlike the one that is modeled (however crudely) in Watson’s primitive brain.
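In fact, the ‘correlating’ that Baker describes is easy to make concrete.  Below is a toy version, with invented co-occurrence counts standing in for Watson’s millions of calculations: each candidate answer is scored by how strongly it associates with all of the clue’s key words, and ‘Othello’ wins because it alone correlates with every one of them.

```python
# A toy version of the statistical matching Baker describes. The counts
# are invented; Watson's real evidence scoring is vastly richer.

# Hypothetical counts of how often each candidate answer appears near
# each clue word across a large pile of text.
cooccurrence = {
    "Othello": {"tragedy": 812, "captain": 344, "Moor": 905},
    "Hamlet":  {"tragedy": 951, "captain": 12,  "Moor": 3},
    "Macbeth": {"tragedy": 887, "captain": 55,  "Moor": 1},
}

def score(candidate, clue_words):
    # Multiply per-word association strengths, so a candidate that fails
    # to correlate with even one clue word is heavily penalized.
    total = 1.0
    for word in clue_words:
        total *= 1 + cooccurrence[candidate].get(word, 0)
    return total

clue_words = ["tragedy", "captain", "Moor"]
best = max(cooccurrence, key=lambda c: score(c, clue_words))
print(best)  # 'Othello': the only candidate matching all three words
```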

But there’s a powerful intuition lurking behind Fish’s and Baker’s arguments — one that was best expressed years ago, by the philosopher John Searle, in his notorious thought experiment of the Chinese Room.  Here’s how it goes:  say, first of all, that I don’t understand a bit of Chinese (which I don’t).  Say you were to stick me in a little room, with a door that has a slot in it, and that the only thing in the room is a stack of instructions — detailed, tedious, mind-numbing instructions — for taking questions in Chinese and generating appropriate Chinese answers:  ‘IF INPUT = 你好吗, OUTPUT = 很好谢谢你,’ and so on.  Say you were then to start passing notes in Chinese through the slot.  I could follow the instructions and crank out Chinese all day long.  I could crank out Chinese till I was blue in the face.  You’d think for all the world you were chatting with a native speaker.  But that would never, ever, change the fact that I don’t understand a bit of Chinese.  I’m just computing — following rules, stupidly, blindly, just as Watson does.
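Notice, by the way, that Searle’s instruction book is at bottom just a lookup table.  In code, the room’s entire ‘mind’ could be a few lines; in the sketch below, only the single rule from the example above is filled in, and the fallback reply is my own addition.

```python
# The Chinese Room as a program: a bare lookup table of canned rules.
# Only the one rule from the example above is included; the fallback
# ('Sorry, I don't understand') is an invented extra.

RULEBOOK = {
    "你好吗": "很好谢谢你",  # 'How are you?' -> 'Fine, thanks.'
}

def the_room(note):
    # The person inside blindly matches symbols against the rulebook,
    # understanding nothing.
    return RULEBOOK.get(note, "对不起，我不明白")

print(the_room("你好吗"))  # prints 很好谢谢你
```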

Searle’s paper, one of the most controversial in the recent history of philosophy, has been attacked by a lot of people from a lot of different directions.  The clincher, for me, is the observation that there’s some serious sleight-of-hand going on in his basic analogy.  What’s the computer here?  The line of reasoning above assumes that the computer is me, the person inside the room.  I don’t understand the data I’m processing, so a computer must not either.  But wait — put yourself in the shoes of the person outside the room for a minute, and assume you’re clueless about what’s going on behind that door.  You’re not passing me the Chinese notes; you’re passing them to the room, and the room, thanks to whatever mysterious subprocesses and submodules are housed within, is holding up its end of the conversation.  So the real question is:  does the room as a whole understand Chinese?  Or, if you’d rather:  does the room as a whole constitute a mind?

If we insist on denying that, we put ourselves in a tight spot.  How can I be sure that anyone really understands what I’m saying?  You may be nodding thoughtfully right now; you may even leave a clever and insightful comment; but how can I be sure that you’re really understanding (i.e., that you’re the equivalent of a room with a real Chinese speaker inside), as opposed to ‘just following rules’ (i.e., that you’re a room with a dumb American and a stack of paper)?  This is, of course, a stupid question.  In real life, if a person speaks intelligently, we have no trouble whatsoever calling them intelligent.  Why should we hold computers to a higher standard?

In one sense, none of this matters; it’s all just semantics.  Certain commentators will keep on imposing a glass ceiling on artificial intelligence, insisting on indescribable essences of one sort or another; and AI researchers will keep on ignoring them, and building smarter and smarter computers capable of more and more things.  My children and grandchildren will interact with machines in ways I can’t yet fathom — machines that ‘simulate’ reason and emotion, creation and conversation, more and more convincingly, making it harder and harder to draw a line between their minds and ours.  Last week, the machines took another big step down the road toward intelligence.  They’re still a million miles from us, to paraphrase Stanley Fish; on that much, at least, we agree.  But they’ve built up quite a head of steam.  And if there’s something big blocking that road, I have yet to see it.
