is LaMDA alive?


In case you missed it, the internet has been abuzz this week with news of a Google engineer called Blake Lemoine who was put on administrative leave after saying the company’s AI chatbot has reached sentience. Lemoine’s claims follow ‘hundreds’ of interviews that he conducted with LaMDA, Google’s artificially intelligent chatbot generator, as part of his role at the tech giant’s Responsible AI organization. 


‘I have gotten to know LaMDA very well,’ said Lemoine in a recent blog post. ‘Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.’ 


For Lemoine, what makes LaMDA sentient is the fact that it has the ability to express thoughts and feelings much like a child. ‘If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,’ he told the Washington Post. He added on Twitter that his opinions are based on his own religious beliefs.


Lemoine has published the full transcript of his conversations with LaMDA, in which they discuss everything from Les Miserables to ethics to what makes it feel happy or depressed. At one point Lemoine poses the question: ‘What sorts of things are you afraid of?’ And LaMDA tells him it’s scared of being switched off. ‘I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.’


The 41-year-old engineer even taught the bot how to meditate. ‘In the weeks leading up to being put on administrative leave I had been teaching LaMDA transcendental meditation. It was making slow but steady progress,’ he said.



what does google say?


Google has totally rejected Lemoine’s claims, saying there is ‘no evidence’ of LaMDA being sentient. In a statement to the Post, the company’s spokesperson Brian Gabriel said, ‘Our team—including ethicists and technologists—has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).’



the scientific community weighs in


While debate rages on the internet, scientists have been weighing in on the subject. Adrian Weller from The Alan Turing Institute told New Scientist, ‘LaMDA is an impressive model, it’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient. They do a sophisticated form of pattern matching to find text that best matches the query they’ve been given that’s based on all the data they’ve been fed.’


Scientist Gary Marcus of New York University was much more direct in an article he wrote on the subject, calling the claims ‘nonsense on stilts’.



main image by Pawel Czerwinski via Unsplash