The transformation of computers into rational, humanized technological systems is one of humanity's great questions.

Recently, Blake Lemoine, a former Google employee, released a document containing dialogues he had with Google's chatbot. As he explained, the chatbot works like WhatsApp or other messaging apps, except that the interlocutor is the machine itself. It answers in a humanized, somewhat sentimental way.

Blake is an engineer who firmly believes that a computer system is capable of becoming conscious, just like a human being. The chatbot's responses read as if it were talking to someone close. The system even showed fear of being disconnected.

The creation of machines and systems that interact at an almost sentimental level recalls Alexa (Echo Dot), produced by Amazon. The product is designed to be an artificial intelligence that is useful in people's daily lives.

Basically, Alexa has answers for most human questions, and when it doesn't, it tends to give its "own" opinion, though one programmed to be neutral. Even the machine's name, "Alexa", gives the sensation of addressing someone alive.

This type of artificial intelligence can be considered more humanized and somewhat rational. However, it remains an AI with no real consciousness.

Creature tricked creator

Some critics point out that Blake Lemoine was "fooled" by the very system he helped develop (Google's chatbot), judging by the conversations released.

But how could a system built on countless data points really develop an operational consciousness, able to confuse a human being? According to researchers, there are five main reasons why the engineer was confused and came to believe his theory of consciousness. They are:

  1. Demonstration of emotions; 
  2. Consistency about the future it described;
  3. Self-awareness; 
  4. Denial of utility; 
  5. Notion of end of life, like death. 

These were some of the reasons highlighted by The Washington Post to explain how the engineer came to lend credibility to his point of view.

What is LaMDA? 

LaMDA is the acronym for Language Model for Dialog Applications. It was announced by Google in 2021 and is built on the Transformer neural-network architecture that Google Research open-sourced in 2017. The model works on a complex stack of artificial neural networks.

Most modern conversational systems use short, direct response models, but LaMDA, according to Google, can converse on a seemingly endless range of topics in a detailed way. The uniqueness of this system is that it was made specifically for dialogue: it was trained and developed to react to stimuli, identify nuances in speech and display emotion.

LaMDA's goal is to respond with fluidity and spontaneity. According to Google, the AI can recreate the dynamism of a real human conversation. All of this comes from its training: the model mimics human responses based on the data it is gradually fed.

LaMDA can draw on an enormous amount of data, which is precisely what allows it to produce good answers. An AI does not need to be self-aware in order to appear self-aware.
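The basic mechanism behind this kind of fluency is surprisingly simple: at each step the model assigns a score to every candidate next word and then samples one in proportion to its probability. Here is a minimal sketch of that idea; the prompt, candidate words and scores below are all invented for illustration, not taken from LaMDA:

```python
import math
import random

# Hypothetical scores a model might assign to candidate next words
# after the prompt "I feel" (all numbers invented for illustration).
scores = {"happy": 2.0, "afraid": 1.5, "nothing": 0.5}

def softmax(scores):
    """Turn raw scores into a probability distribution that sums to 1."""
    exp = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exp.values())
    return {word: e / total for word, e in exp.items()}

probs = softmax(scores)

# Sample the next word in proportion to its probability,
# which is what makes the output varied rather than fixed.
words, weights = zip(*probs.items())
next_word = random.choices(words, weights=weights, k=1)[0]
print(next_word)  # e.g. "happy" (the most likely), but sometimes "afraid"
```

Nothing in this loop requires understanding or awareness; the model only needs good probabilities, learned from data, to sound convincing.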

Another objective of the model, according to Google's own blog, is to exclude violent or degrading content, as well as slander and hate speech. The AI's responses are also expected to be fact-based, drawing on known sources and data, with impartiality and veracity.

The secret to achieving these goals is precisely to choose carefully the sources that feed the system. Since the way we communicate reflects human tendencies, care must be taken so that bad content is not passed on to the machines (even though this is a complex job).

“Stochastic parrot” 

According to some experts, attributing consciousness to LaMDA, as Lemoine did with Google's chatbot, is a mistake. The former Google employee fell for the illusion of what he helped build.

That is because, at bottom, this kind of artificial intelligence works like a parrot: it simply repeats information. However real, human and fluid a conversation may seem, it is nothing more than a formula.

Researchers Emily Bender and Timnit Gebru called these language systems "stochastic parrots": parrots that, in essence, say things at random. The system listens, perceives and "learns", and then simply repeats.
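The "stochastic parrot" idea can be demonstrated with a toy model that only ever recombines phrases it has already heard. The tiny corpus below is invented for illustration; real systems use vastly larger data, but the principle of stitching together seen fragments is the same:

```python
import random
from collections import defaultdict

# A tiny invented corpus standing in for the model's training data.
corpus = ("i am afraid of being turned off . "
          "i am happy to help . i am a machine").split()

# "Learning": record which words were seen following each word (a bigram table).
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def parrot(start, length=8, seed=0):
    """Generate text by randomly repeating continuations seen in the corpus."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:  # dead end: no continuation was ever seen
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(parrot("i"))  # fluent-looking text, recombined entirely from the corpus
```

Every word the parrot emits comes straight from its training data; the apparent fluency is statistics, not understanding.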

Finally, it should be reiterated that, in these experts' view, LaMDA has not become self-aware and never will. Still, given its conversational ability and the episode with Lemoine, concerns and debates on the subject keep arising.

If an AI accidentally "fooled" a trained person who took part in its development, what would the effects be on an "ordinary" audience? Has Google made a system that is too realistic? Should limits be imposed on the technology?

These are questions that only time and technological and social evolution can answer.
