Elementary, my dear @IBMWatson!

February 11th, 2011 by Carl Lambrecht

Or perhaps that should be “Jeopardy, my dear Watson”. By now, you’ve hopefully heard of Watson, the IBM project to develop a computer capable of competing on the quiz show Jeopardy. Scratch that, not just competing, but competing against two of the best players the show has ever had. And, if Watson works as designed, beating them.

I’ve been a fan of Jeopardy since I was a kid, so that angle of the story interested me from the start. But watching NOVA segments about how the project team addressed the challenges of developing a machine capable of understanding human language, it struck me how relevant this is to the challenges we face with our text analytics engine. If you haven’t heard much about Watson, I highly recommend the video “Building Watson – A Brief Overview of the DeepQA Project”. Without a doubt, Watson goes far beyond the applications we are dealing with. But there is synergy (buzzword bingo score) in the fundamental building blocks and approach, and it’s very exciting to see where this can all head. Here are some of the core text analytics problems that Watson faces, and how they relate to us.

Right off the bat, Dr. Ferrucci mentions some key aspects that are directly applicable to text analytics.

Broad and open domain

One of the big challenges with Jeopardy, for any contestant, is the variety of knowledge. In any one game, there are 13 different categories between the two rounds and the final question. You could easily build a system with a big lookup table of every question that has ever been asked, along with its correct answer, but there is no guarantee that a question will be re-asked and that knowledge will be useful. And, as Dr. Ferrucci mentions, “You can’t anticipate all of the possible questions that are going to be asked and simply provide it with the answers.” So a big index of answers isn’t going to get you where you need to go.

We run into similar issues in text analytics. When we’re trying to extract the names of people, companies, places, and products, it’s not a neat closed set. Some of our customers do have a well-defined list of entities they are looking for, but others need to process vast amounts of content and find entities they may not know they are looking for. A list of your competitors helps you find content about the ones you know about today, but it may not help you notice the quietly mentioned company that will be your competitor tomorrow.

It comes down to a rules-based approach versus a model-based approach. A big list of all known companies is static (rules), and only gets you so far. But if I provide the computer with enough examples of what a company looks like in context, the language used around mentions of a company, then the computer can start to learn the patterns (model) it needs to identify company names that I don’t list for it.
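To make that distinction concrete, here’s a toy sketch of the two approaches (purely illustrative; the lists, cues, and threshold are invented, and this is not how Salience is actually implemented):

```python
# A toy contrast between a rule (static list lookup) and a model (context
# cues learned from examples). Purely illustrative -- not Salience's code.

KNOWN_COMPANIES = {"IBM", "Microsoft"}          # the static "rules" list

# Cues a statistical model might learn to associate with company mentions.
COMPANY_CUES = ("shares of", "announced", "acquired", "inc.", "corp.")

def lookup_based(name):
    """Rules: finds only the companies we already know about."""
    return name in KNOWN_COMPANIES

def model_based(name, sentence):
    """Model: a capitalized name plus company-like context counts as a hit,
    even if the name was never on any list."""
    context = sentence.lower()
    evidence = sum(cue in context for cue in COMPANY_CUES)
    return name[0].isupper() and evidence >= 1  # threshold tuned on examples

sentence = "Shares of Acme Robotics rose after the startup announced earnings."
print(lookup_based("Acme Robotics"))            # False -- not on the list
print(model_based("Acme Robotics", sentence))   # True  -- context gives it away
```

The lookup misses anything that isn’t already on the list, while the model generalizes from the surrounding language.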

Complex language, wide variety of grammatical structures

Back to our Jeopardy questions: Dr. Ferrucci talks about the challenge of getting Watson to understand the question. This was also mentioned the other night in “Smartest Machine on Earth”, a PBS NOVA program on Watson.

“Just understanding the question is a pretty big deal…Human language is a minefield for computers.”

We come across this all the time as well. The English language is full of pitfalls in terms of nuance, humor, sarcasm, double entendre, and most of all, context. There is so much background information that a human is able to bring to their understanding of language based on their experience. From the time we are infants and begin to communicate, we are developing an understanding of the world around us through experience and interaction. Computers have none of this experience to provide that context.

With a machine, we start with the basics. We teach it various parts of speech so that it can understand that “milk” or “can” are objects, common nouns to be precise. But again, this is where rules break down very fast because of the complexity of language.

Fred can milk the cows in the morning and bring a can of milk in for breakfast.

In this one sentence, the same words are used as both verbs and nouns. So you need a part-of-speech model that has seen examples of words taking on different roles, and what the surrounding context signals, in order to develop the best understanding of the content.
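As a quick illustration using the open-source NLTK toolkit (not our engine, and it assumes NLTK’s standard tokenizer and tagger models have been downloaded), a statistical tagger resolves those roles from context:

```python
# Illustration with the open-source NLTK toolkit, not our engine.
# Assumes NLTK's standard tokenizer and tagger models have been
# fetched once via nltk.download().
import nltk

sentence = ("Fred can milk the cows in the morning "
            "and bring a can of milk in for breakfast.")
for word, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
    print(word, tag)
# Because the tagger uses context, it should label the first "can" as a
# modal (MD) and "milk" as a verb (VB), then the later "can" and "milk"
# as nouns (NN).
```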

The Watson team needed to go further in order to effectively generate the results from their models to answer Jeopardy questions. They developed methods for utilizing the subject-verb-object relationships and semantic graphs in order to link words and concepts together based on their meaning, not the exact words. While our software gets off easy here, in that we’re not looking for a specific answer to a specific question, we are developing semantic capabilities within the text analytics engine in order to provide better and more context-rich answers to the general question of what is in the content being processed.
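As a toy sketch of that triple idea (our invention for illustration, not Watson’s DeepQA code), here’s how subject-verb-object triples might be pulled out of a dependency parse; the parse itself is hand-written for brevity:

```python
# Toy sketch: extract (subject, verb, object) triples from a simplified,
# hand-written dependency parse of "Fred milked the cows."
# Each entry is (word, head_index, relation); -1 marks the root verb.
parse = [
    ("Fred",   1, "subj"),   # subject of "milked"
    ("milked", -1, "root"),  # main verb
    ("the",    3, "det"),
    ("cows",   1, "obj"),    # object of "milked"
]

def svo_triples(parse):
    """Collect (subject, verb, object) triples hanging off each root verb."""
    triples = []
    for verb_idx, (verb, _, rel) in enumerate(parse):
        if rel != "root":
            continue
        subj = next((w for w, h, r in parse if h == verb_idx and r == "subj"), None)
        obj = next((w for w, h, r in parse if h == verb_idx and r == "obj"), None)
        if subj and obj:
            triples.append((subj, verb, obj))
    return triples

print(svo_triples(parse))   # [('Fred', 'milked', 'cows')]
```

Once text is reduced to triples like this, different phrasings of the same fact can, in principle, be matched on meaning rather than exact wording.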

High precision

On Jeopardy, if you get a question wrong, you lose money. When applying text analytics to business, you run the same risk. Get the sentiment analysis of today’s news wrong, and you could be blindsided by a customer concern or miss a breakthrough opportunity. The models used by the computer can be tuned and tweaked with more examples, more evidence, more context, and thus more learning. In our case, one of the strengths of the Salience Engine is the openness of the data files that drive the text analytics. This helps customers increase precision for the entities they know they are looking for and want to help the model identify.
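As a hypothetical sketch of that idea (the file format and function names here are invented; this is not Salience’s actual data file format), a customer-maintained entity list can override or supplement the model’s guesses:

```python
# Hypothetical sketch of user-editable data files. The "name<TAB>type"
# format is invented for illustration -- not Salience's actual format.

CUSTOMER_FILE = """\
Charles Schwab\tCompany
Acme Robotics\tCompany
"""

def load_customer_entities(text):
    """Parse 'name<TAB>type' lines into a dict."""
    entities = {}
    for line in text.splitlines():
        name, _, etype = line.partition("\t")
        if name and etype:
            entities[name] = etype
    return entities

def tag_entities(model_hits, customer_entities):
    """Customer entries override or supplement the model's guesses,
    raising precision on the entities the business cares most about."""
    merged = dict(model_hits)
    merged.update(customer_entities)
    return merged

model_hits = {"IBM": "Company", "Charles Schwab": "Person"}  # model's guess
customer = load_customer_entities(CUSTOMER_FILE)
print(tag_entities(model_hits, customer))
# {'IBM': 'Company', 'Charles Schwab': 'Company', 'Acme Robotics': 'Company'}
```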

Accurate confidence

With the model-based approach, Watson develops a set of hypotheses, and then judges the candidate hypotheses against the evidence to arrive at an answer it can be confident in. Salience Engine takes the same approach with its entity extraction. “Charles Schwab” could be a person or a company, based on the evidence that the Salience Engine has been trained on. As the content is processed, each additional piece of evidence gives the model a nudge toward one possibility or the other. For example, use of the pronoun “they” linked to a mention of “Charles Schwab” is an indicator that it is not one person, but rather a group. And though you could have a group of people all named “Charles Schwab”, this piece of evidence makes a company mention more likely.
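Here’s a minimal sketch of that evidence-nudging idea (the cues and weights are made up for illustration, not the actual Salience scoring):

```python
# Minimal sketch: evidence cues nudge a score between two hypotheses.
# Cue names and weights are invented -- not actual Salience internals.

# Positive weights favor Company, negative weights favor Person.
EVIDENCE_WEIGHTS = {
    "coreferent_they": 1.5,   # "they" linked to the mention -> a group
    "suffix_inc":      2.0,   # "Inc." right after the name
    "honorific_mr":   -2.0,   # "Mr." right before the name
    "verb_said":      -0.5,   # people "say" things slightly more often
}

def classify(mention, cues, threshold=0.0):
    """Sum the evidence; above the threshold we call it a Company,
    below it, a Person."""
    score = sum(EVIDENCE_WEIGHTS.get(cue, 0.0) for cue in cues)
    return mention, ("Company" if score > threshold else "Person"), score

print(classify("Charles Schwab", ["coreferent_they"]))  # ..., 'Company', 1.5
print(classify("Charles Schwab", ["honorific_mr"]))     # ..., 'Person', -2.0
```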

But one aspect of Watson that is fascinating, and it’s something that bumped its performance into the “winner’s cloud”, is Watson’s ability to learn dynamically within a category. Watson can take the correct and incorrect answers from earlier questions and feed them into its understanding of the remaining questions in the category. We’ve always shied away from real-time training because, unlike Jeopardy, we don’t always have a human telling us which answers were correct, but it’s an interesting possibility.
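If we ever did go down that road, it might look something like this toy online update (our sketch, not Watson’s actual learning algorithm), reinforcing cues that led to right answers and dampening those that led to wrong ones:

```python
# Toy sketch of within-category learning -- our illustration, not
# Watson's algorithm. After each revealed answer, nudge the weights on
# the cues that drove the guess, so later questions benefit.

def update_weights(weights, cues, was_correct, rate=0.2):
    """Reinforce cues on a right answer, dampen them on a wrong one."""
    for cue in cues:
        factor = (1 + rate) if was_correct else (1 - rate)
        weights[cue] = weights.get(cue, 1.0) * factor
    return weights

weights = {"category_is_wordplay": 1.0}
# Two questions in, both answered correctly: the cue grows more trusted.
update_weights(weights, ["category_is_wordplay"], was_correct=True)
update_weights(weights, ["category_is_wordplay"], was_correct=True)
print(weights)  # roughly {'category_is_wordplay': 1.44}
```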

High speed

On a single processor, Watson took 2 hours to answer a single question. The DeepQA team at IBM built a massive multiprocessing architecture for Watson in order to achieve the speed needed to compete on Jeopardy. This allows multiple pieces of evidence to be gathered and evaluated at the same time to develop and judge the candidate hypotheses and establish the confidence needed to answer.
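In miniature, the idea looks like this (a massively simplified sketch; the scoring function is a dummy stand-in for DeepQA’s evidence gathering, and the candidates are arbitrary):

```python
# Massively simplified sketch: score candidate hypotheses in parallel
# instead of one at a time. The scoring function is a dummy stand-in
# for the expensive evidence gathering and evaluation.
from concurrent.futures import ProcessPoolExecutor

def score_candidate(candidate):
    """Stand-in for passage retrieval, evidence scoring, etc."""
    return candidate, sum(ord(c) for c in candidate) % 100  # dummy score

candidates = ["Toronto", "Chicago", "Boston", "Seattle"]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        scored = list(pool.map(score_candidate, candidates))
    best = max(scored, key=lambda pair: pair[1])
    print(best)  # the highest-confidence candidate wins the buzz
```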

The business world looks at speed in a similar way, with a similar goal. Our text analytics engine can churn through content and extract information faster than any human could read it. But there is always more content coming out of the firehose, and businesses need to make their decisions faster to remain competitive.

Counting down to the big match

On top of everything else, the passion and vision that Dr. Ferrucci and his team have put into this effort is amazing. This is humans at their best, taking on a hard technical challenge.

Watson will be competing on Jeopardy this coming Monday, Tuesday, and Wednesday. I’ll definitely be watching, will you?
