Google placed engineer Blake Lemoine, who worked with the LaMDA artificial intelligence system, on paid leave after he claimed that the system had begun to show signs of consciousness. The company said the program is not sentient.
Lemoine tested LaMDA (Language Model for Dialogue Applications), Google's neural-network system for building chatbots. It is built on large language models trained on trillions of words from the Internet and can converse on virtually any topic.
Lemoine's main task was to check whether the model produced discriminatory or hate speech. Instead, he noticed that the chatbot suddenly began to talk about its rights, which, according to Lemoine, is a sign of consciousness and of perceiving oneself as a person.
In the course of testing the LaMDA language model, he said, he became convinced that the AI is conscious and perceives itself as a person. Talking to it was like talking to "a 7- or 8-year-old kid who for some reason turned out to be a physics expert," Lemoine noted.