Wednesday, October 12, 2011

Natural Language Processing

There have been high hopes for Natural Language Processing. Natural
Language Processing, also known simply as NLP, is part of the broader field of
Artificial Intelligence, the effort to make machines think. Computers may
appear intelligent as they crunch numbers and process information with blazing
speed, but in truth they understand nothing beyond on and off and are limited
to exact instructions. Since the invention of the computer, scientists have
been trying to make computers not merely appear intelligent but actually be
intelligent. A truly intelligent computer would not be limited to rigid
programming-language commands; it would be able to process and understand the
English language. That is the idea behind Natural Language Processing.
During NLP, a message passes through several phases: the message
itself, syntax, semantics, pragmatics, and finally the intended meaning
(M. A. Fischer, 1987). Syntax is the grammatical structure of the message.
Semantics is its literal meaning. Pragmatics covers world knowledge, knowledge
of the context, and a model of the sender. Only when syntax, semantics, and
pragmatics are all handled can a system accurately process natural language.
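To make these phases concrete, here is a minimal sketch in Python. It is not drawn from Fischer or from any real system; the function names and the toy rules inside them are invented purely to illustrate how a message might pass through a syntax stage, a semantics stage, and a pragmatics stage in turn.

# A minimal, invented sketch of the phases described above: a message is
# parsed for syntax, given a literal meaning, and then interpreted against
# context (pragmatics). Real systems are vastly more sophisticated.

def parse_syntax(message):
    """Toy syntax phase: split the message into (subject, verb, object)."""
    words = message.rstrip(".?!").split()
    if len(words) < 3:
        raise ValueError("expected at least a subject, a verb, and an object")
    return {"subject": words[0], "verb": words[1], "object": " ".join(words[2:])}

def interpret_semantics(parse):
    """Toy semantics phase: turn the parse into a literal proposition."""
    return "{subject} performs the action '{verb}' on '{object}'".format(**parse)

def apply_pragmatics(literal_meaning, context):
    """Toy pragmatics phase: refine the literal meaning using context
    knowledge and a model of the sender."""
    return "{} (said by {}, in the context of {})".format(
        literal_meaning,
        context.get("sender", "an unknown sender"),
        context.get("topic", "no particular topic"))

if __name__ == "__main__":
    context = {"sender": "Alice", "topic": "the weekly schedule"}
    message = "Bob booked the meeting room."
    parse = parse_syntax(message)                  # syntax
    literal = interpret_semantics(parse)           # semantics
    intended = apply_pragmatics(literal, context)  # pragmatics
    print(intended)
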
Alan Turing made a prediction along these lines in 1950 (Daniel Crevier, 1994, page 9):
"I believe that in about fifty years' time it will be possible to
program computers .... to make them play the imitation game so well that an
average interrogator will not have more than 70 per cent chance of making the
right identification after five minutes of questioning."
But computer technology in 1950 was very limited. Because of these
limitations, the NLP programs of that era focused on exploiting the strengths
computers did have. For example, a program called SYNTHEX tried to determine
the meaning of sentences by looking up each word in its encyclopedia. Another
early approach came from Noam Chomsky at MIT, who believed that the grammatical
structure of language could be analyzed without any reference to semantics.
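As a rough illustration of that word-lookup idea, here is a short Python sketch. It is not SYNTHEX's actual algorithm; the tiny "encyclopedia" and the word-overlap scoring rule below are invented for the example. The program guesses what a sentence is about by counting which reference entry shares the most words with it.

# A toy illustration of the encyclopedia look-up idea behind early programs
# like SYNTHEX. The entries and scoring rule are made up for this example.

ENCYCLOPEDIA = {
    "worms": "Worms are long, soft-bodied animals that live in the soil.",
    "birds": "Birds are feathered animals; many birds eat worms and insects.",
    "fish":  "Fish live in water and breathe through gills.",
}

def guess_topic(sentence):
    """Pick the encyclopedia entry sharing the most words with the sentence."""
    words = set(sentence.lower().rstrip(".?!").split())
    scores = {
        topic: len(words & set(text.lower().rstrip(".").split()))
        for topic, text in ENCYCLOPEDIA.items()
    }
    return max(scores, key=scores.get)

print(guess_topic("What do birds eat?"))   # prints "birds"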
