Google's AI Spotlights a Human Cognitive Glitch
When we read a sentence, past experience leads us to assume it was written by a human. But that assumption can now fail: even a sentence as natural as "Hi, there!" may not have been written by a person at all. Artificial intelligence systems trained on enormous amounts of human text can generate language that seems remarkably humanlike.
HIGHLIGHTS
- Determining whether a message was written by a human or a machine has become difficult
- A former Google engineer claims that Google's AI system LaMDA has a sense of self
- Models from the 1950s simply counted occurrences of phrases and guessed the next word
People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap one's head around. Because fluent expression and fluent thought usually go together, it is easy to slip into the misperception that if an AI model can express itself fluently, it must also think and feel much as humans do.
A former Google engineer recently claimed that Google's AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. The event and the ensuing media coverage prompted a wave of articles and posts examining the engineer's claim that computational models of human language are capable of thinking, feeling, and experiencing.
Text generated by models like Google's LaMDA can be hard to distinguish from text written by humans. This impressive achievement is the result of a decades-long program to build models that generate grammatical, meaningful language.
Early versions of such models, known as n-gram models and dating back to the 1950s, simply counted up occurrences of specific phrases and used those counts to guess which words were likely to occur in particular contexts.
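As an illustration only, here is a minimal Python sketch of that counting idea: a toy bigram (2-gram) model trained on a made-up corpus. The function names and the corpus are invented for this example, not taken from any historical system.

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Guess the next word: the most frequent follower seen in training."""
    followers = counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy corpus, invented for illustration.
corpus = "the cat sat on the mat and the cat ate the fish"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # -> "cat" ("cat" follows "the" twice)
print(predict_next(model, "on"))   # -> "the"
```

A real n-gram system would use longer contexts and smoothing for unseen phrases, but the core mechanism, counting occurrences and picking the likeliest continuation, is exactly this simple.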
But today's models, sets of data and rules that approximate human language, differ from these early attempts in several important ways. First, they are trained on essentially the entire Internet. Second, they can learn relationships between words that are far apart, not just neighboring words. Third, they are tuned by a huge number of internal "knobs," so many that it is difficult even for the engineers who design them to understand why a model generates one sequence of words rather than another.
However, the models' task remains the same as in the 1950s: determining which word is likely to come next. Today, they are so good at this task that nearly all the sentences they generate seem fluid and grammatical.
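To make that concrete, the sketch below asks a language model for its most probable next words. It is an assumption-laden stand-in: LaMDA itself is not publicly accessible, so the openly available GPT-2 model, queried through the Hugging Face transformers library, is used here purely to illustrate the shared next-word-prediction task.

```python
# Illustrative only: GPT-2 stands in for LaMDA, which is not public.
# Requires the `transformers` and `torch` packages.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token

# Turn the scores at the final position into next-word probabilities.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>8}  {prob.item():.3f}")
```

The model assigns a probability to every token in its vocabulary; at generation time it repeatedly picks (or samples) from these probabilities to extend the text one word at a time.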