Nova (1974– )

Smartest Machine on Earth 

Watson, an IBM computer with a brain the size of 2,400 home computers and a database of 10 million documents, competes on the game show Jeopardy! (1984) against champions Ken Jennings and Brad Rutter. Will Watson win?




Episode credited cast:
Eric Brown ...
Todd Crain ...
David Ferrucci ...
Harry Friedman ...
David Gondek ...
Doug Lenat ...
Charles Lickel ...
Jonathan MacKenzie ...
Himself / Narrator
Marvin Minsky ...
Sajit Rao ...
Sebastian Thrun ...
Luis von Ahn ...







Release Date:

9 February 2011 (USA)


Technical Specs

Aspect Ratio:

16:9 HD


User Reviews

Watson homes in on Jeopardy!
11 February 2014

An interesting documentary concerning the attempt by IBM researchers to create a computer capable of competing on the American game show "Jeopardy!". As these boffins explain, the ability of computers to play chess was once seen as the pinnacle of computer intelligence. However, the highly formalised and rigid nature of chess was something computers mastered relatively quickly, eventually even defeating chess grandmasters...I Wiki'd this topic ("Garry Kasparov" entry, subheading "Chess against computers") and found some interesting information on this, as well as a link to a documentary on that achievement entitled "Game Over: Kasparov and the Machine".

An interesting quote from the first Wiki article, mentioned above, is:

"Kasparov claimed that several factors weighed against him in this match. In particular, he was denied access to Deep Blue's recent games, in contrast to the computer's team, which could study hundreds of Kasparov's.

After the loss Kasparov said that he sometimes saw deep intelligence and creativity in the machine's moves, suggesting that during the second game, human chess players, in contravention of the rules, intervened. IBM denied that it cheated, saying the only human intervention occurred between games. The rules provided for the developers to modify the program between games, an opportunity they said they used to shore up weaknesses in the computer's play revealed during the course of the match. Kasparov requested printouts of the machine's log files but IBM refused, although the company later published the logs on the Internet. Although Kasparov wanted another rematch, IBM declined and ended their Deep Blue program.

Kasparov's loss to Deep Blue inspired the creation of the game Arimaa."

That first paragraph in the quote, above, mirrors something which happens in this current documentary...the computer created to play on "Jeopardy!" improves once it has access to the answers already provided by human contestants in the category currently being played.

For what it's worth, the computer designed to compete on "Jeopardy!" is called "Watson" by the boffins.

The appeal of this documentary is the insight provided into how a computer of this sort approaches providing answers to the sorts of 'questions' given on the game show. It's the nature of those questions (well, the show is sort of bizarro...contestants have to provide a question to the sometimes cryptic 'answer' given in the show) which marks out this experiment by IBM as much more demanding of computer science than building a better chess-playing machine. Ordinary language isn't as rigid and formal as the rules of chess, hence creating a machine capable of competing against the very best players of "Jeopardy!" is a much tougher proposition.

Two approaches to the issue of artificial intelligence are mentioned:

1) Rules approach - this is a kind of "brute force" approach which is suitable for chess programmes. The computer simultaneously thinks through all the possible combinations of moves which may be made by itself or its opponent...a figure of something like 30 moves ahead is mentioned...which absolutely dwarfs what the best human chess grandmasters can manage.

Some of this approach is used in Watson. The computer programmers code millions of items of "common sense" to enable the computer to give logical answers to questions. An example would be the distinctions between "child", "teenager" and "adult", say, or "boy", "man" etc. If you know the meaning of one of those words, then you can make deductions and inferences based on that knowledge. However, a limitation of this approach is that humans aren't really capable of formalising all the "common sense" facts which we all take for granted.
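The hand-coded "common sense" idea can be sketched in a few lines. This is a toy illustration only (the taxonomy and function names are invented, and Watson's actual knowledge encoding is far richer): facts are stored as "is-a" links, and the program deduces category membership it was never told directly.

```python
# Toy "rules approach": hand-coded is-a facts plus a deduction step.
# The taxonomy below is invented for illustration.
IS_A = {
    "boy": "child",
    "girl": "child",
    "man": "adult",
    "woman": "adult",
    "child": "person",
    "adult": "person",
}

def is_a(term, category):
    """Follow is-a links upward to test category membership."""
    while term is not None:
        if term == category:
            return True
        term = IS_A.get(term)
    return False

print(is_a("boy", "person"))  # True: a boy is a child, and a child is a person
print(is_a("boy", "adult"))   # False: nothing links "boy" to "adult"
```

Knowing only the meaning of "boy", the program can infer "person" but reject "adult" — exactly the kind of deduction the review describes, and also exactly the kind of fact base that is impractical to write out in full by hand.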

2) Machine learning - this is more akin to human learning. The computer is allowed to make mistakes and learn from them. Since humans can't code the millions upon millions of "common sense" facts which we all take for granted, this second approach is required to fast-track the process of making Watson competitive against the champions of "Jeopardy!".
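"Learning from mistakes" can be sketched with a perceptron-style update, where feature weights are adjusted only when the program answers wrongly. The features and training data here are invented for illustration; Watson's real answer-scoring models were far more elaborate.

```python
# Minimal "machine learning" sketch: adjust weights on each mistake.
def train(examples, epochs=10, lr=0.1):
    weights = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for features, label in examples:
            score = sum(w * f for w, f in zip(weights, features))
            predicted = 1 if score > 0 else 0
            if predicted != label:  # learn only from mistakes
                for i, f in enumerate(features):
                    weights[i] += lr * (label - predicted) * f
    return weights

# Each example: ([matches_answer_type, keyword_overlap], was_correct)
data = [([1, 1], 1), ([1, 0], 1), ([0, 1], 0), ([0, 0], 0)]
w = train(data)
print(w)  # "matches_answer_type" ends up with the positive weight
```

No one hand-codes the rule "answer type matters more than keyword overlap"; the program discovers it from its own errors.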

There are some laughs to be had in this documentary. In trial runs against humans at IBM headquarters, comedian Todd Crain hosts the game show. He often mocks Watson for incorrect answers. For instance, because computers arrive at answers to "Jeopardy!"-style questions differently from how humans do, Watson proposes that "Richard Nixon" is a First Lady of the U.S.A.! To a human, it's obvious that a man cannot be a First Lady, hence Watson's answer seems extremely stupid. Presumably the IBM programmers didn't add that kind of rule to Watson's database!

Before Watson is ready to tackle "Jeopardy!"'s finest champions for real, on television, some 'bugs' must be addressed. One issue is Watson's inability to hear contestants' correct answers in a given category, which would allow it to work out what the 'question' really entails. Once that data is provided (the human contestants already have this information), Watson gains the ability to "self-correct". In other words, if Watson is not 'getting' the import of a category on "Jeopardy!" and is giving 'stupid' answers, then 'hearing' the correct answers given by the human contestants allows it to work out which feature of their answers is relevant to a correct answer in that category. For instance, if Watson answers a question on dates with a particular day - and gets it wrong - while the human contestants answer with a month, Watson can self-correct and match a month to the clues given in the 'question'.
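The self-correction idea amounts to re-ranking: once the correct answers already revealed in a category are known, candidates of the same type are promoted. Below is a hedged sketch under that reading; the type detector, candidate list and function names are all invented for illustration.

```python
import re

def answer_type(text):
    """Crude classifier: month name, full date, or other (illustrative only)."""
    months = ("January February March April May June July "
              "August September October November December").split()
    if text in months:
        return "month"
    if re.fullmatch(r"\d{1,2} \w+ \d{4}", text):
        return "date"
    return "other"

def rerank(candidates, revealed_answers):
    """Boost candidates whose type matches the answers humans already gave."""
    types = [answer_type(a) for a in revealed_answers]
    dominant = max(set(types), key=types.count)
    return sorted(candidates, key=lambda c: answer_type(c) != dominant)

revealed = ["March", "October"]             # humans answered with months
candidates = ["9 February 2011", "February"]
print(rerank(candidates, revealed)[0])      # "February" is now ranked first
```

Having seen that the category expects months rather than full dates, the machine stops giving 'stupid' answers of the wrong type — the behaviour the documentary demonstrates.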

The documentary does mention the computer programme called "Eliza", versions of which are available on the internet now. I've interacted with it before and discovered its limitations for myself. Search for "Eliza" to try it yourself!
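Those limitations are easy to see once you know the classic ELIZA technique: regex patterns, a pronoun swap, and canned response templates. The tiny version below captures only that skeleton (the patterns and templates here are invented; the original ELIZA script is considerably larger).

```python
import re

# Swap first-person words for second-person ones when echoing input back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r".*"), "Please tell me more."),  # catch-all fallback
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel confused by my computer"))
# Why do you feel confused by your computer?
```

The programme never understands anything — it only mirrors surface patterns, which is precisely why it falls apart so quickly and why "Jeopardy!" demanded something far beyond it.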

P.S. for the latest news on Watson look up the 2014 Mobile World Congress announcement by I.B.M. C.E.O. Ginni Rometty.

1 of 1 people found this review helpful.
