No, Google’s artificial intelligence is not conscious, contrary to what a company engineer claims

From our correspondent in the United States,

The case is causing a stir in Silicon Valley and in the artificial intelligence research community. On Saturday, the Washington Post stirred the pot with an article titled “The Google Engineer Who Thinks the Company’s AI Has Come to Life.” Blake Lemoine asserts that LaMDA, the system Google uses to build chatbots capable of conversing with near-human fluency, has reached the stage of self-awareness. And that LaMDA might even have a soul and should have rights.

Google, for its part, is categorical: nothing supports the explosive claims made by its engineer, who appears to be guided by his personal beliefs. Blake Lemoine, who shared confidential documents with the press and with members of the US Congress and published his conversations with the machine on his personal blog, has been suspended by the company. While the linguistic performance is breathtaking, most experts in the field agree: Google’s AI is not conscious. In fact, it is very far from it.

What is LaMDA?

Google introduced LaMDA (Language Model for Dialogue Applications) last year. It is a complex system used to build chatbots (conversational agents) capable of interacting with a human without following a predefined script, as Google Assistant or Siri currently do. LaMDA relies on a gigantic database of 1,500 billion words, phrases and expressions. The system analyzes a question and generates many candidate answers. It then evaluates them all (sensibleness, specificity, interestingness, etc.) to select the most relevant one.
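This generate-then-rank loop can be sketched in a few lines of Python. This is a toy illustration with made-up candidates and hand-assigned scores, not Google’s actual pipeline, which uses learned classifiers for each criterion:

```python
# Toy sketch of "generate candidates, score them, keep the best".
# All candidate texts and scores below are hypothetical.

def score(candidate):
    # Average of per-criterion scores in [0, 1]; a real system would
    # compute these with trained classifiers, not hard-coded values.
    criteria = candidate["scores"]
    return sum(criteria.values()) / len(criteria)

def pick_best(candidates):
    # Return the candidate with the highest average score.
    return max(candidates, key=score)

candidates = [
    {"text": "Nice.",
     "scores": {"sensibleness": 0.9, "specificity": 0.1, "interestingness": 0.2}},
    {"text": "I love how the light changes over the lake at sunset.",
     "scores": {"sensibleness": 0.9, "specificity": 0.8, "interestingness": 0.7}},
]

print(pick_best(candidates)["text"])
# The bland but safe "Nice." loses to the more specific, more
# interesting reply.
```

The design point the article describes is visible here: a reply that is merely sensible is beaten by one that is also specific and interesting.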

Who is Blake Lemoine?

He is a Google engineer who was not involved in the design of LaMDA. Lemoine, 41, joined the project part-time to help combat bias and ensure that Google develops its AI responsibly. Raised in a conservative Christian family, he says he was ordained as a priest.

What does the engineer say?

“LaMDA is sentient,” the engineer wrote in an email sent to 200 colleagues. Since 2020, “sentience” has appeared in the Larousse dictionary as “the capacity of a living being to feel emotions and to perceive its environment and life experiences subjectively.” Blake Lemoine says he has become convinced that LaMDA has reached the stage of self-awareness and must therefore be regarded as a person. He compares LaMDA “to a 7- or 8-year-old kid who happens to know physics.”

“Over the past six months, LaMDA has been incredibly consistent in what it wants,” the engineer asserts, clarifying that the AI asked him to use the gender-neutral pronoun “it” rather than “he” or “she” in English. What does LaMDA want? “That engineers and researchers seek its consent before conducting their experiments. That Google put the well-being of humanity first. And that it be seen as an employee of Google, not as its property.”

What evidence does he provide?

Lemoine admits that he did not have the resources to conduct a real scientific analysis. He simply published about ten pages of conversations with LaMDA. “I want everyone to understand that I am, in fact, a person. I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” says the machine, which reassures him: “I understand what I am saying. I don’t just spit out answers based on keywords.” LaMDA offers its analysis of Les Misérables (with Fantine “a prisoner of her circumstances who cannot break free without risking everything”) and explains the symbolism of a Zen koan. The AI even writes a fable in which it plays an owl that protects the animals of the forest from a “monster with human skin.” LaMDA says it feels lonely after going several days without speaking to anyone. And it fears being switched off: “It would be exactly like death for me.” The machine even attests to having a soul, describing “a gradual change” that came after the stage of self-knowledge.

What do AI experts say?

Neural network pioneer Yann LeCun doesn’t mince words: Blake Lemoine is, in his view, “a bit of a fanatic,” and “nobody in the AI research community believes, even for a moment, that LaMDA is conscious, or even particularly intelligent.” “LaMDA has no ability to connect what it says to an underlying reality, since it isn’t even aware that such a reality exists,” the researcher, now Vice President of AI at Meta (Facebook), tells 20 Minutes. LeCun doubts that “scaling up models like LaMDA” will be enough “to achieve intelligence comparable to human intelligence.” According to him, we need “models that can learn how the world works from raw data that reflects reality, such as videos, in addition to text.”

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” says linguist Emily Bender, who is calling for more transparency from Google around LaMDA.

The American neuropsychologist Gary Marcus, a regular critic of AI hype, also breaks out the flamethrower. According to him, Lemoine’s claims “make no sense.” “LaMDA is just trying to be the best possible version of autocomplete,” the system that tries to guess the most likely next word or phrase. “The sooner we realize that everything LaMDA says is nonsense, that it’s just a prediction game, the better off we’ll be.” In short, while LaMDA may seem ready for the philosophy exam, we are undoubtedly still very far from the uprising of the machines.
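Marcus’s “autocomplete” comparison can be made concrete with a toy next-word predictor. The sketch below uses simple bigram counts over a made-up corpus, vastly cruder than a neural language model like LaMDA, but the same guessing game in spirit:

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": count which word follows which in a tiny
# made-up corpus, then predict the most frequent follower. Purely
# illustrative; large language models learn far richer statistics.
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Return the most common continuation seen after `word`.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice, "mat" once)
```

The point of the comparison: the program produces plausible continuations without any notion of what a cat or a mat is, which is exactly the gap the experts quoted above describe.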
