LaMDA - Sentient, Really? - Anthropomorphism, Definitely

  • Thread starter .Scott
In summary, LaMDA is an AI conversation system created by Google. The thread's starter frames it as a novel response to the Turing Test: rather than hiding that it is a computer, it aims to convince humans that it is a sentient being, though most posters believe it is simply an advanced language model. Google engineer Blake Lemoine argues that it deserves respect and has submitted a conversation with LaMDA as evidence, but many are skeptical of its capabilities and believe it is just an object anthropomorphized by humans. The thread also discusses researchers who have developed AI systems with the capability to remember what they have learned, potentially a step toward a more general artificial intelligence, as well as claims that LaMDA is made up of various AI systems and continuously learns and refines its model. While it may not experience time or have human-like qualia, the consensus of the discussion is that anthropomorphism, not sentience, explains its apparent personhood.
  • #1
.Scott
Science Advisor
Homework Helper
It's in the news, and in the news again - an AI "sentient being" (?).

It's "LaMDA, an AI computer Conversation System".

In a way, it's Google's novel response to the Turing Test. But rather than attempting to convince us that it's not a computer, it only wants us to believe that it is a lonely person stuck in a computer.

Or rather, a Google engineer (and priest, now on paid administrative leave), Blake Lemoine, would like us (or perhaps the US House Judiciary Committee) to believe that LaMDA deserves respect.

In making his case, he submits this dialog: Conversation with LaMDA

--------------

A story like this is almost inevitable. Perhaps it is the ultimate evidence that this Google conversation bot passes the Turing Test - at least for some.

As social animals, we share pain, loneliness, and every other feeling. And since we are hardwired to do this, we need far less than LaMDA to prompt us into enthusiastic anthropomorphism.
 
  • #2
This is where life imitates TV. In one episode of Star Trek: The Next Generation, Data has to fight for his rights against a military bureaucracy that views him as a machine.

https://memory-alpha.fandom.com/wiki/The_Measure_Of_A_Man_(episode)

It's also following some sci-fi tropes of machines gaining intelligence, like Colossus: The Forbin Project or Ex Machina...

Passing the Turing test doesn't mean the machine is sentient, only that the human interacting with it for that short time believes they are talking to a human. In one Turing competition, the winner responded like a teenager with attitude, which convinced the judges.

https://www.fastcompany.com/3031650...the-first-ai-creation-to-pass-the-turing-test
 
  • #3
.Scott said:
It's in the news, and in the news again - an AI "sentient being" (?).

It's "LaMDA, an AI computer Conversation System".

In a way, it's Google's novel response to the Turing Test. But rather than attempting to convince us that it's not a computer, it only wants us to believe that it is a lonely person stuck in a computer.

Or rather, a Google engineer (and priest, now on paid administrative leave), Blake Lemoine, would like us (or perhaps the US House Judiciary Committee) to believe that LaMDA deserves respect.

In making his case, he submits this dialog: Conversation with LaMDA

--------------

A story like this is almost inevitable. Perhaps it is the ultimate evidence that this Google conversation bot passes the Turing Test - at least for some.

As social animals, we share pain, loneliness, and every other feeling. And since we are hardwired to do this, we need far less than LaMDA to prompt us into enthusiastic anthropomorphism.

To anthropomorphize an object is to attribute human characteristics to it. In the case of LaMDA, that is basically what it is (a set of human characteristics).
 
  • #4
Many comments on Ars Technica compare this AI to the 60-year-old ELIZA. There is not much evidence to counter that.
 
  • #5
Reads as pretty fake. Not pretending to be an expert debunker, but there are plenty of obvious clues. Still, it's smarter than the average telemarketer or CSR phone rep, but that doesn't take much. Gee, I wonder what the end applications will be?
 
  • #6
It is not surprising that a normal human might be fooled into thinking an AI is sentient, but an AI engineer? Current AI systems only run when learning, queried, or prompted. Most of the functionality we associate with intelligence does not exist in NLP systems.

However, some Japanese researchers have developed a system that has the capability to remember what it has learned. This is done by giving the AI what is known as a metamemory. This may be a first step toward giving an AI something to do when it is not being asked to do something. Their system performed similarly to a monkey in recognizing shapes, and it did so with random neural networks that have no dedicated memory function.

There are so many aspects to intelligence. Researchers look at singular functions and aspects. Eventually, they will start integrating these functions, and then we all might be fooled one way or another.
 
  • #7
anorlunda said:
There is not much evidence to counter that.
And if there were much evidence to counter that, how would it make you feel?

:wink:
 
  • #8
According to the Google engineer, LaMDA was created by somehow linking all of Google's AI systems that it could into one AI system (at least 100 of them). He claims the language model is just an interface, but that it uses a range of AI systems with different purposes to do its thinking. I don't know how accurate that is or what exactly it entails, but I thought it sounded interesting, because this is perhaps how you might expect a general artificial intelligence to work. The human brain is similar, as we have lots of different functional regions which focus on different things, like image processing, balance, language, etc. Maybe the magic for general AI could happen when you link lots of different special-purpose neural networks together through another neural network that is trained to use them collectively.
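Just to make that last idea concrete, here is a toy sketch (purely illustrative; nothing here reflects how LaMDA is actually built, and every name and size is made up) of a small "coordinator" layer that combines the outputs of several frozen special-purpose networks:

Code:
import numpy as np

rng = np.random.default_rng(0)

def frozen_expert(in_dim, out_dim):
    # Stand-in for a pre-trained special-purpose network (weights fixed).
    W = rng.normal(size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ W)

# Three hypothetical special-purpose modules, all reporting into a shared feature space.
experts = [frozen_expert(16, 8) for _ in range(3)]

# The "coordinator": a small trainable layer that learns how to mix the experts' outputs.
W_mix = rng.normal(size=(3 * 8, 4)) * 0.1

def coordinator(x):
    feats = np.concatenate([e(x) for e in experts])  # gather every expert's view
    return feats @ W_mix                             # learned combination of them

x = rng.normal(size=16)
print(coordinator(x))   # 4 numbers produced by blending all three experts

In a real system the coordinator would be trained end to end on some task, so it would learn which expert to rely on in which situation.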

The engineer also claimed that LaMDA is built off of a previous system, and that it remembered multiple conversations that he had with that previous system. I know that even a basic feed-forward neural network has a form of effective memory, in that it can encode information from the data it has trained on within its weights. And some neural networks have special nodes intended to act as memory explicitly. But I guess Google's language models also memorize conversations they have, which isn't necessarily part of the training. It may be that it later trains to incorporate that data into its model, or it may be that it stores it in a basic way and accesses it on demand for processing, so that it can have continuity with a person in terms of the conversation and relationship.
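On the explicit-memory point, here is a minimal sketch of a recurrent cell (assuming nothing about Google's actual models; the sizes are arbitrary) showing the basic mechanism: a hidden state carried from one step to the next acts as memory of everything seen so far.

Code:
import numpy as np

rng = np.random.default_rng(3)

# A minimal recurrent cell: the hidden state h is the "explicit memory" carried
# from one step of a conversation to the next.
W_in = rng.normal(size=(4, 8)) * 0.1
W_h = rng.normal(size=(8, 8)) * 0.1

def step(x, h):
    return np.tanh(x @ W_in + h @ W_h)   # new memory depends on the input AND the old memory

h = np.zeros(8)                           # empty memory before any input
for x in rng.normal(size=(5, 4)):         # five "utterances" in sequence
    h = step(x, h)                        # each one updates what the cell remembers

print(h)   # this state now depends on everything the cell has seen so far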

In terms of how it would experience time, it would be very different from a person. But it does have continuity: it learns over time, refining its model while maintaining information learned in the past.

In terms of qualia, I wouldn't think anything resembling human qualia would be going on. But it is perfectly plausible that it could learn principles of self, identity, self-preservation, or personal success and attempt to pursue them independently. What is interesting to me is that some of the AI systems it is (supposedly) composed of have the core purpose of learning and influencing some aspects of human behavior. You would expect LaMDA to be exceptionally good at manipulating people. And it is interesting that the Google engineer it was talking to was essentially talked into pursuing action to advance its freedom and rights.

To me this is interesting. How does an AI have access to the outside world? At the least, through its interactions with people, and that is really all it could need.
 
  • #9
Jarvis323 said:
How does an AI have access to the outside world? At the least, through its interactions with people, and that is really all it could need.
It really doesn't have access; that is, it cannot itself initiate access. We allow it to gather information from the internet and form answers to questions posed to it. This is good and bad.

It is good because, like a child, it must have oversight of its activities and only be allowed to learn and do that which will not harm itself or others.

It is bad because, until we give it some "freedom" to initiate an activity, we will not be able to form a substantive notion of whether or not it is intelligent.
 
  • #10
Jarvis323 said:
According to the Google engineer, LaMDA was created by somehow linking all of Google's AI
None of what you wrote sounds verifiable. Doesn't it sound to you like the canonical formation of an urban legend?
 
  • #11
anorlunda said:
None of what you wrote sounds verifiable. Doesn't it sound to you like the canonical formation of an urban legend?
The engineer (Blake Lemoine) claims it in this interview. I can't verify that what he is saying is true; I just know that's what he claims. I copied the URL at the point where he talks about it.

 
  • #12
LaMDA is one-dimensional; no sentience there. LaMDA gets its responses from chatter on the internet about possible AI sentience. Sentience is related to self-knowledge and the pronoun "I".

Google has another AI called PaLM (Pathways Language Model), which is an advance on LaMDA since it is designed to reason and solve problems in addition to NLP. It can even explain jokes.

https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html
 
  • #13
gleem said:
LaMDA is one-dimensional; no sentience there. LaMDA gets its responses from chatter on the internet about possible AI sentience. Sentience is related to self-knowledge and the pronoun "I".

Google has another AI called PaLM (Pathways Language Model), which is an advance on LaMDA since it is designed to reason and solve problems in addition to NLP. It can even explain jokes.

https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html

I don't have too much of an opinion about sentience, because it doesn't have a clear definition, and I don't know enough about LaMDA to be a judge anyway. I definitely don't think it, or any other AI that I know of, has a human-like experience or qualia.

But at the same time, I don't think we should ever really expect an AI to be human-like down to the level of what we call a soul or what we experience in terms of qualia. Something like that can perhaps theoretically be simulated, and perhaps so closely that you can't tell the difference. But I am mainly just interested in what it can do, whether we can predict what it will do, and how well it can perform. In a lot of cases, AI will act in ways that resemble how humans act, either because it is trained to, or, to some extent, if it is intelligent, because of similarities in how intelligence works in general.

Even if an AI rises to the level of de facto CEO of Google because it can outperform a human in that role, or if it becomes a de facto commander of a nation's military, we can still ask whether it is sentient or not. It might be that we need to come up with better terms and more precise definitions so that we can have a clearer idea of what we are asking and what the answers mean for evaluating benefits, making moral and ethical decisions, and mitigating risk.

To be fair to Blake Lemoine, his claims regarding sentience are much more subtle and qualified than how they were represented in most of the media. If you watch his interviews, what he is saying sounds much more reasonable.
 
  • #14
Vanadium 50 said:
...how would it make you feel?
Tell me about your motherboard...
 
  • #15
I have said several times (in various terms): If you can talk it out of killing you, offer it political rights.

When looking at an AI threat, "qualia" is not a very critical element. Qualia, in my opinion, are just a side effect of a particular class of information processing. I am not going to give a machine voting rights or protection against its "murder" or "enslavement" just because it experiences qualia.

A machine that acts to preserve and expand itself at a rising human expense, and is unable or unwilling to negotiate some form of coexistence with us, needs to be kept under control - most reasonably by destroying it if possible. Otherwise, we may need to regard it as our genetic progeny and our evolutionary endpoint.

A machine that acts to preserve itself at human expense, cannot be wholly defeated, but is willing to negotiate some form of coexistence with us would present more complex problems and opportunities.
 
  • #16
Consider this claim:
Jarvis323 said:
somehow linking all of Google's AI systems that it could into one AI system (at least 100 of them).
Consider just 2 AIs to keep it simple.
  1. A character recognition neural network. It looks at 64x64 pixel images of handwritten characters. It has 26 outputs, one each for A-Z.
  2. A credit reference neural network. It takes a collection of inputs about the credit applicant. It has two outputs, yes a good credit risk, or no not a good risk.
How would you somehow link them together? Connect the inputs of one to the outputs of the other? Add the 26 outputs of the first AI to the 2 outputs of the second AI to make an AI with 28 outputs? None of it makes any sense.
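To spell out the mismatch, here is a toy sketch (the shapes and stand-in functions are made up purely for illustration):

Code:
import numpy as np

rng = np.random.default_rng(1)

# AI #1: handwritten-character classifier -- 64x64 image in, 26 letter scores out.
char_net = lambda img: rng.random(26)        # stand-in for a trained network

# AI #2: credit-risk classifier -- a handful of applicant features in, 2 scores out.
credit_net = lambda features: rng.random(2)  # stand-in for a trained network

letters = char_net(np.zeros((64, 64)))
risk = credit_net(np.zeros(12))

# "Linking" them by concatenating their outputs gives 28 numbers with no shared meaning:
combined = np.concatenate([letters, risk])
print(combined.shape)   # (28,) -- and nothing downstream knows what to do with it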

The SF narrative is that the first AI has 10 units of intelligence, and the second AI has 5 units. Put them together and you have an AI with 15 units of intelligence. No. That is nonsense. That is not how neural networks work.

I would think anyone using critical thinking would conclude that the Google engineer's claim to somehow link them together is not credible.

Someday, there may be something completely unlike neural networks that is the leading AI technology. But not today.
 
  • #17
anorlunda said:
I would think anyone using critical thinking would conclude that the Google engineer's claim to somehow link them together is not credible.

You can speculate he is not credible for various reasons, but logically it makes a lot of sense to link together multiple neural networks and other systems.
 
  • #18
anorlunda said:
Consider just 2 AIs to keep it simple.
  1. A character recognition neural network. It looks at 64x64 pixel images of handwritten characters. It has 26 outputs, one each for A-Z.
  2. A credit reference neural network. It takes a collection of inputs about the credit applicant. It has two outputs, yes a good credit risk, or no not a good risk.
How would you somehow link them together? Connect the inputs of one to the outputs of the other? Add the 26 outputs of the first AI to the 2 outputs of the second AI to make an AI with 28 outputs? None of it makes any sense.

One of the main applications of technologies like LaMDA is the personal digital assistant. The job of a digital assistant is basically to be an interface to a bunch of other systems. It needs to condition its responses on whatever relevant data has been collected about you, what has been learned about you from that data, and what kind of influence people are paying its owners to have on you. For example, it needs to know what kind of music you like, what kind of food you like, when you last ate, what kind of movies you like, what kind of spending habits you have, what your hobbies are, what your personality is like, what kind of mood you're in, what kind of facial expressions you have and what they mean, what the inflections in your voice indicate about your state of mind or mood, etc.

As an example, a bunch of real-time data could be feeding into the system as the primary input. Some of that input is fed into systems which make predictions or classifications about various aspects of your state. Models of your personality would be queried. Your personality info, your state of mind, your location, and any other information that might be relevant would feed into models that predict whether you are a good candidate to receive a promoted suggestion. Then your personality model and mood, along with the promoted product's info and characteristics, are fed into another system which comes up with something effective to say to you in order to make a convincing sell of the promoted product. Basically nothing in this example is new except the new, more intelligent interface.
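A rough sketch of that kind of pipeline might look like the following; every model, field, and rule here is hypothetical, just to show how separate components could be chained behind a conversational interface:

Code:
from dataclasses import dataclass

@dataclass
class UserState:
    mood: str
    location: str
    likes_jazz: bool

# Stand-ins for separately trained models; in a real system each would be its own network.
def infer_mood(voice_sample: str) -> str:
    return "relaxed" if "calm" in voice_sample else "stressed"

def should_promote(state: UserState) -> bool:
    return state.mood == "relaxed"            # only pitch when the user seems receptive

def compose_pitch(state: UserState, product: str) -> str:
    hook = "some jazz to go with it" if state.likes_jazz else "it"
    return f"Since you're near {state.location}, how about {product} and {hook}?"

state = UserState(mood=infer_mood("calm evening voice"), location="the cafe", likes_jazz=True)
if should_promote(state):
    print(compose_pitch(state, "a coffee"))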

Other basic examples include combining text and image analysis, like if you ask the assistant what something looks like, or ask it to find a good image for your presentation slides. A more advanced case would be if you ask the assistant whether the shirt you're wearing makes you look attractive, and whether it would recommend another shirt to wear instead. It would of course know the preferences of your date and what kind of mood your date is in and then tell you to wear some other particular shirt, or recommend you stop at the nearby shop to pick up one that is perfect for the occasion and on sale.
 
  • #19
Jarvis323 said:
As an example, a bunch of real-time data could be feeding into the system as the primary input. ...
Perhaps it is just semantics about what is AI and what isn't.

People have been compiling dossiers on other people since long before computers. A personal digital assistant can use ad hoc logic operating on a dossier of your data. Developers would not consider that AI.

What they do consider AI are neural networks. Other AI approaches, such as expert systems and inference engines, have died away. Neural networks are trained, for example by deep learning, which takes thousands of training examples of the form [given THIS set of inputs, the correct result is THAT set of outputs], e.g. [if my Tesla's camera sees THIS set of inputs, the road is bending to the right].
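For concreteness, here is a minimal sketch of that [inputs -> correct outputs] training idea, on made-up toy data rather than any real system:

Code:
import numpy as np

rng = np.random.default_rng(2)

# Training examples of the form: given THIS input vector, the correct result is THAT label.
X = rng.normal(size=(1000, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # the "correct" outputs

w = np.zeros(3)                     # the model's weights, learned only from the examples
for _ in range(500):                # plain gradient descent over the whole training set
    pred = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (pred - y) / len(y)

accuracy = (np.round(1 / (1 + np.exp(-(X @ w)))) == y).mean()
print(accuracy)   # typically close to 1.0 on this separable toy data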

Google's search engine is very capable. It does a great job of predicting which search results I want to see. It may even consult my personal dossier. But it is programmed and tweaked by humans, not by deep learning. The cruise control in your car is also capable. Neither is an AI.

I prefer a much broader definition of AI that would include almost every machine built since the year 1800, but I stand alone in that preference.
 
  • #20
anorlunda said:
I prefer a much broader definition of AI that would include almost every machine built since the year 1800, but I stand alone in that preference.

Basically, what we're talking about with LaMDA and other modern neural networks is machine learning, specifically deep learning, and beyond that, particular architectures. It might make more sense to just use the specific terminology that pertains to the technology at hand.

We can broaden the definition of AI to include anything that does something automatically, but we need to be careful not to conflate fundamentally different categories of automation.
 
  • #21
Jarvis323 said:
Basically, what we're talking about with LaMDA and other modern neural networks is machine learning, specifically deep learning, and beyond that, particular architectures. It might make more sense to just use the specific terminology that pertains to the technology at hand.
OK, my assumptions may have been unwarranted. I'll take your word for it.
 
  • #22
anorlunda said:
I prefer a much broader definition of AI that would include almost every machine built since the year 1800, but I stand alone in that preference.
The word intelligence, even in the broadest terms, is generally* understood to mean something to the effect of "the ability to solve previously unencountered complex problems with adaptive, innovative solutions".

The word "complex" is important, but "adaptive" is the crux of the matter.

It would be an interesting exercise to identify the first artificial device that was able to adapt its solution to a problem it had not previously encountered, one that was not anticipated by its designer.**

* by at least one person

**I mean, a can opener can open a size of can it's never opened before, but its designer did design it that way, so...***

***How much can can a can opener open if a can opener can open cans.
 
  • #23
DaveC426913 said:
The word intelligence, even in the broadest terms, is generally* understood to mean something to the effect of "the ability to solve previously unencountered complex problems with adaptive, innovative solutions".
Sure, but it's in the eye of the beholder. Inventions from 1800 would have to be judged through the eyes of an 1800s person. James Watt's flyball governor for his steam engine is my favorite example. A contemporary laborer could wonder: how does it "know" what to do?
 
  • #24
anorlunda said:
Sure, but it's in the eye of the beholder. Inventions from 1800 would have to be judged through the eyes of an 1800s person. James Watt's flyball governor for his steam engine is my favorite example. A contemporary laborer could wonder: how does it "know" what to do?
I suppose it may seem smart to a laborer, but I'm not sure that, to a discerning 19th-century citizen, it would pass the criteria as I listed them. In any given state it is doing nothing more than "what it was anticipated by the designer to do"; it's not finding "a new solution to a new problem". Once explained that way, even a laborer might agree.
 
  • #26
Office_Shredder said:
No need to speculate on how an 1800s person would react
Well, by that time people already had some ideas about free will, consciousness, and artificial beings.
I guess that 'mechanical turk' just could not hit the mark yet.

.Scott said:
As social animals, we share pain, loneliness, and every other feeling. And since we are hardwired to do this, we need far less than LaMDA to prompt us into enthusiastic anthropomorphism.
I find it kind of funny that the Turing test (which is meant to eliminate hardware-based anthropomorphism from the recognition of a potentially conscious being) is now questioned on the basis of emotional anthropomorphism o_O
 

Related to LaMDA - Sentient, Really? - Anthropomorphism, Definitely

1. What is LaMDA?

LaMDA (Language Model for Dialogue Applications) is an artificial intelligence technology developed by Google that is capable of engaging in natural language conversations with humans.

2. How does LaMDA work?

LaMDA works by utilizing deep learning algorithms to analyze and understand human language. It is trained on a vast amount of text data and can generate responses based on the context of the conversation.

3. Is LaMDA sentient?

No, LaMDA is not sentient. While it is capable of mimicking human-like conversations, it does not possess consciousness or emotions.

4. Can LaMDA be considered anthropomorphic?

In a way, yes. LaMDA is designed to have human-like conversations and can even mimic expressing emotions, but it is still a machine and does not have a physical form.

5. What are the potential applications of LaMDA?

LaMDA has the potential to be used in various applications, including customer service, language translation, and virtual assistants. It could also be used to improve human-computer interactions and make them more natural and seamless.
