This is not just Watson and IBM. Many, many people in AI make grandiose claims and throw around big terms like "Natural Language Understanding," "scene understanding," or "object recognition."
And it is a very old problem, going back at least to Drew McDermott and "Artificial Intelligence meets Natural Stupidity":
https://homepage.univie.ac.at/nicole.rossmanith/concepts/pap...
From which I quote:
However, in AI, our programs to a great degree are problems rather than solutions. If a researcher tries to write an "understanding" program, it isn't because he has thought of a better way of implementing this well-understood task, but because he thinks he can come closer to writing the _first_ implementation. If he calls the main loop of his program UNDERSTAND, he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself, and enrage a lot of others.