Ulterior Motives

How goals, both seen and unseen, drive behavior

Watson Is So Cool, Part II: Relevance

Watson represents a huge step forward in computer language understanding.

[Image: Watson, the IBM supercomputer that won a Jeopardy tournament.]

Yesterday, IBM's supercomputer Watson won a Jeopardy tournament against the two best human players ever to play the game.  In my last blog entry, I talked about some reasons why people are less impressed with this performance than they should be.  In this entry, I want to talk about some of the hard problems that the designers of Watson had to solve in order to make it play Jeopardy successfully. 

Almost all of these hard problems come down to solving the problem of relevance.

These days, we have conversations with computers all the time.  You call the customer service line of a company, and a computer may answer the phone and ask you a series of questions, using that information to direct your call properly.  Most of these systems ask you focused questions and give you a limited number of answers you can provide.  The computer knows that what you are going to say is relevant, because it has asked you a particular question and has a specific script for deciding what to do with what you tell it.

In Jeopardy, a huge part of the problem that a computer has to solve is to figure out what information is really needed.  To see this, let's go through some of the actual questions from the games that Watson played.  A list of the questions and the answers that the players gave can be found at http://www.j-archive.com.

Let's start with a simple problem that Watson answered correctly.  In a category labeled "Beatles People" the answer was "And any time you feel the pain, hey" this guy "refrain, don't carry the world upon your shoulders." 

On the surface, this seems pretty straightforward.  The lyric is clearly from the Beatles song "Hey Jude."  The question is missing the name Jude, and so the answer is obviously Jude.  If you Google the exact text of this question, all of the pages that come up pick out this lyric right away.

Still, the computer needs to know that the correct answer is "Jude" and not "Hey Jude."  That is, the computer needs to figure out that the question is asking for a name rather than a song title.  That requires figuring out that the question category "Beatles People" should focus you on the names of people rather than the names of songs.  Not an easy thing to program into a computer, even if people do it easily.
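To make that idea concrete, here is a minimal sketch in Python of one way a program might use the category to decide what type of answer a clue is looking for.  The keyword table, the type labels, and the candidate answers are hypothetical illustrations for this example only; they are not a description of Watson's actual system.

    CATEGORY_TYPE_HINTS = {        # hypothetical keyword-to-type table
        "people": "PERSON",
        "person": "PERSON",
        "songs": "SONG_TITLE",
        "titles": "SONG_TITLE",
    }

    def expected_answer_type(category):
        """Guess what type of answer the category is asking for."""
        for word in category.lower().split():
            if word in CATEGORY_TYPE_HINTS:
                return CATEGORY_TYPE_HINTS[word]
        return "UNKNOWN"

    def choose_candidate(category, candidates):
        """Pick the candidate answer whose type matches the category's hint.

        candidates maps possible answers to their types, e.g.
        {"Hey Jude": "SONG_TITLE", "Jude": "PERSON"}.
        """
        wanted = expected_answer_type(category)
        for answer, answer_type in candidates.items():
            if answer_type == wanted:
                return answer
        return next(iter(candidates))  # fall back to the first candidate

    print(choose_candidate("Beatles People",
                           {"Hey Jude": "SONG_TITLE", "Jude": "PERSON"}))
    # prints "Jude", because the category calls for a person, not a song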

Watson also had to be programmed to figure out which pieces of information were the real focus of the question.  Think back to word problems you got in school.  A clever teacher might add a few extra numbers into the story of the problem to try to confuse you.  Similarly, some of the Jeopardy problems used extra information as a way of trying to lead the computer off track.  In the category "Don't Worry About It," the answer was "You just need a little more sun! You don't have this hereditary lack of pigment."  The correct response here (which Watson got) was "Albinism."

Notice, though, that the first part of the question is not at all helpful for answering it; it relates only to the question category "Don't Worry About It."  It is fairly straightforward to get to albinism just from the clue "hereditary lack of pigment," but harder to recognize that this phrase is the only part of the question carrying any real information.  So, the system had to be able to relate the way the question was framed to the question category in order to isolate the focus of the question.

Many of the questions that Watson answered had this character.  As another example, under the category "Etude, Brute" there was the item "An etude is a composition that explores a technical musical problem; the name is French for this."  In this case, the entire first part of the statement is not needed to answer it correctly.
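As an illustration of this kind of filtering, here is a small Python sketch that keeps only the segment of a clue that appears to carry the real question.  The heuristic it uses, that Jeopardy clues mark their focus with "this" or "these," is an assumption made for this example, not an account of how Watson actually does it.

    import re

    def focus_segment(clue):
        """Return the part of the clue most likely to be the real question.

        Assumed heuristic: Jeopardy clues usually point at the thing being
        asked about with "this" or "these" ("this hereditary lack of
        pigment"), so keep the sentence or semicolon-separated segment
        that contains that marker.
        """
        segments = re.split(r"[.;!?]\s*", clue)
        for segment in segments:
            if re.search(r"\b(this|these)\b", segment, flags=re.IGNORECASE):
                return segment.strip()
        return clue  # no marker found, so keep everything

    print(focus_segment("You just need a little more sun! "
                        "You don't have this hereditary lack of pigment."))
    # prints "You don't have this hereditary lack of pigment"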

One way that people figure out what is relevant in a statement is by using the given-new convention. When you are engaged in a conversation, there is some information that the speaker assumes you already know.  That is the given information.  Then, the speaker presents some new information to add to what you know already.  If your friend comes up to you and says, "John and his wife just had a baby," your friend is assuming that you know who John is, and that you didn't know that they had a baby.  If you don't know John (or if you know more than one John that your friend might be talking about), then you have to ask more questions to figure out who your friend means.

We use this given-new convention all the time without realizing it to help us understand sentences.  Watson also had to have some knowledge of this kind of convention.  For example, in the category "Hedgehog-Podge" there was the item "Some hedgehogs enter periods of torpor; the Western European species spends the winter in this dormant condition."  The first part assumes the discussion is about hedgehogs (which is suggested by the title of the category) and then provides some new information (that they enter a period of torpor). 

The next part uses the phrase "The Western European species."  When you read this sentence, you know immediately that it refers back to hedgehogs, but for Watson to figure that out requires solving an interesting language problem.  Any reader of this sentence needs to know that the speaker is assuming that this phrase (Western European species) is part of the given information in the sentence, and so there should be something in the conversation that tells us what kind of species we're talking about.  The only species that is part of the conversation is hedgehogs, so that is how we attach "Western European species" to "hedgehog."  Once Watson has all that, it just needs to look for another name for the dormant state that hedgehogs go through (which is hibernation).
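Here is a rough Python sketch of that reference-resolution step: linking a definite phrase like "the Western European species" back to something already given in the conversation.  The tiny knowledge table and the head-noun heuristic are simplified assumptions for illustration, not Watson's actual machinery.

    KINDS = {                      # hypothetical mini knowledge base
        "hedgehog": {"species", "animal", "mammal"},
        "torpor": {"state", "condition"},
    }

    def resolve_definite(phrase, given_entities):
        """Link a definite description to the most recent compatible entity.

        The given-new convention says the phrase refers to something
        already in the conversation, so search the discourse history
        backwards for an entity whose known kinds include the phrase's
        head noun.
        """
        head_noun = phrase.lower().split()[-1]   # e.g. "species"
        for entity in reversed(given_entities):  # most recent mention first
            if head_noun in KINDS.get(entity, set()):
                return entity
        return None  # nothing given matches; a person would ask for more

    discourse = ["hedgehog", "torpor"]  # entities mentioned so far
    print(resolve_definite("the Western European species", discourse))
    # prints "hedgehog"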

This ability to use language effectively is one of the big reasons why Watson is so cool.

Stepping back from these examples, it is clear that the designers of Watson did an amazing job of solving many difficult problems of determining relevance.  In the long run, solving these problems will help computers do a better job of communicating with us.  In the future, when we call a company to get service, it would be nice to just explain the problem to the computer that answers the phone and be directed to the right place for an answer, rather than having to follow a script designed to tell the computer which information is relevant.

Follow me on Twitter.

Art Markman, Ph.D., is a cognitive scientist at the University of Texas whose research spans a range of topics in the way people think.
