The Limits of AI: Can Computers Truly Replicate Human Intelligence and Emotions?

  • Thread starter Delong
  • Tags
    AI Limits
  • #1
Delong
I am skeptical that strong, human-like AI is possible. I think AI can be made to look very much like human intelligence, but I don't think it will actually be like human intelligence in the way that you and I are. For example, I don't think a computer can actually feel pain or pleasure, or fear and desire. I could be wrong, but I am simply skeptical. I'm sure people can have those feelings toward robots and computers that look like they have them. But can computers really feel? I am doubtful. I think feeling comes from biology, and computers don't really have any kind of biology, only a simulated kind. Therefore I think there are limits. I obviously do not know the specifics, but I am curious about it. Maybe computers can come very close, but I don't think they can ever be exactly the same. Who are the big people working on AI right now? I would like to follow their progress.
 
  • #2
I have no strong opinions on whether we will one day create a humanesque conscious being in silico, but as for whether it's possible, I see no reason why not. If something exists, then you can simulate it, and the simulation will have the same characteristics as the original.
 
  • #3
There is no way to determine if another human being is "really feeling" something, let alone the even more abstract question of whether a simulation of a feeling is equivalent to a real one. People even doubt THEIR OWN feelings sometimes.

It's difficult for me to imagine how you would even test this.

Human-like AI is a bit of a waste of time I think. We already have humans, billions of 'em. What we need computers for are things like summing a million digits in a fraction of a second, or crunching away at a problem for days on end with no rest and other "stupid" things that humans are incapable of.
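To put a rough number on that claim, here is a minimal Python sketch; the workload and the timing are illustrative only and depend entirely on the machine:

```python
import random
import time

# A million single digits to sum; the exact workload is illustrative.
digits = [random.randint(0, 9) for _ in range(1_000_000)]

start = time.perf_counter()
total = sum(digits)
elapsed = time.perf_counter() - start

print(f"Summed {len(digits):,} digits to {total} in {elapsed:.4f} s")
```

On typical hardware this finishes in well under a second, which is exactly the kind of "stupid" task no human can touch.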
 
  • #4
DavidSnider said:
Human-like AI is a bit of a waste of time I think. We already have humans, billions of 'em. What we need computers for are things like summing a million digits in a fraction of a second, or crunching away at a problem for days on end with no rest and other "stupid" things that humans are incapable of.
Agreed. Developing better and more capable software is by no means synonymous with trying to create a digital sentient being. Hell, one day we may have software packages so good they can pass a Turing test and perform almost any task a human could, but that in no way implies that under the hood anything similar to what goes on in our grey matter is happening.
 
  • #5
I also agree. There doesn't seem to be any good reason to pursue humanesque AI other than sensationalistic mad-science hoorah. First of all, is it even possible? And second, how will we treat these computers, and how will that change the distinction we make between humans and machines? Like you said, we could create computers that pass every Turing test, but is that really something that can feel and think like humans, or is it something that can simply pass every Turing test we have so far thought up? Anyway, I do not understand computer science or philosophy of mind extremely well; I am simply curious about the possibilities.

I know some AI already exists, things like Watson and the chess champion. Even calculators are a simple form of AI.
As for crazy future scenarios, I'm more worried about the things that happened in A.I. or I, Robot than the things that happened in Terminator or The Matrix. Can robots really feel? If so, how will that change how we think about ourselves and machines?
Perhaps computers can sufficiently emulate the cognitive aspects of human and animal thought. They might even be able to have memories, learn, plan, make decisions, form judgements, or form "desires". But can they actually experience things like motivation or desire or curiosity? Well, like DavidSnider said, how can we tell? I suppose we have to investigate what it means to really feel these things in humans first, and how we know it is the case, and then see if it's possible to simulate that in computers. Perhaps I should study a little neuroscience and philosophy of mind before I go answer that. COOL!
 
  • #6
Yes to all. It has always seemed to me that superimposing emotions on a calculator would have no effect other than to greatly compromise its ability to calculate.
 
  • #7
Delong said:
I also agree. There doesn't seem any good reason to pursue humanesque AI other than sensationalistic mad science hoorah.
There's money in them there hills. Lots and lots and lots of money. It turns out that a good amount of the work we think requires intelligence (whatever that is) is just as rote as the work of those whose jobs have already been automated by run-of-the-mill, non-AI software. Many aspects of the task of putting together a disparate bunch of parts to form some fantastic widget have been automated by simple software and machines. Planning the process, ordering the parts, keeping the machines running: that is where tomorrow's job security lies. Wrong. That work is also rote and can be automated by yesterday's AI. The only part that isn't rote (so far) is coming up with the process in the first place.

First of all, is it even possible? And second, how will we treat these computers, and how will that change the distinction we make between humans and machines?
It's quite possible it doesn't require real intelligence (whatever that is) at all.

Those of us who have struggled through four years of college to get a bachelor's degree, and then even more to get an advanced degree, look down upon our high school cohorts who never went to college at all. They have a problem with unemployment; we don't. Wrong. A lot of what we do requires no more intelligence than knowing how to operate some machine in a factory. Whether this results in a neo-Luddite revolution remains to be seen. The nascent roots of this revolution are here right now in the Occupy Wall Street crowd.

Like you said, we could create computers that pass every Turing test, but is that really something that can feel and think like humans, or is it something that can simply pass every Turing test we have so far thought up?
Google the term "Chinese room". Here's the wiki article: http://en.wikipedia.org/wiki/Chinese_room.
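Searle's point is that rule-following can produce the right outputs with no understanding behind them. A minimal sketch of the setup in Python; the rulebook entries here are invented purely for illustration:

```python
# The "Chinese room" reduced to its essentials: the operator (this program)
# matches incoming symbols against a rulebook and returns the prescribed
# reply while understanding nothing. The entries are made up for this sketch.
RULEBOOK = {
    "你好吗?": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
    "你会下棋吗?": "会一点。",      # "Can you play chess?" -> "A little."
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation; no comprehension anywhere in this lookup.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please repeat that."

print(chinese_room("你好吗?"))  # fluent output, zero understanding
```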

I know some AI already exists, things like Watson and the chess champion. Even calculators are a simple form of AI.
A calculator is not AI. Neither is Deep Blue. Calculators are just abaci. There is zero intelligence behind an abacus or a calculator. Deep Blue, while it did beat Garry Kasparov, did so by means of dumb brute force. There was very little intelligence behind Deep Blue. Developers of chess programs long ago abandoned the AI approach to computerized chess.
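"Dumb brute force" here means exhaustive game-tree search plus a scoring rule. A minimal sketch of that technique in Python, using a trivial take-away game rather than chess (Deep Blue's actual alpha-beta search was enormously more elaborate, so treat this as the bare skeleton of the idea):

```python
def minimax(pile: int, my_turn: bool) -> int:
    """Exhaustive search of a take-away game: players alternately remove
    1-3 stones, and whoever takes the last stone wins. Returns +1 if the
    root player wins with perfect play from here, -1 otherwise."""
    if pile == 0:
        # The previous player took the last stone; the side to move lost.
        return -1 if my_turn else 1
    outcomes = [minimax(pile - take, not my_turn)
                for take in (1, 2, 3) if take <= pile]
    # Maximize on our turns, assume the opponent minimizes on theirs.
    return max(outcomes) if my_turn else min(outcomes)

print(minimax(10, True))  # +1: taking 2 leaves a losing multiple of 4
```

Deep Blue added an evaluation function, pruning, and massive special-purpose hardware, but the "dumb" core is the same: try everything, score, pick the best.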

Whether brute force will suffice to accomplish that which we deem to be a sign of "intelligence" (whatever that is) remains to be seen. Whether AI researchers can use AI techniques to solve those problems is another question. Yet another question: what is this thing we call intelligence?
 
  • #8
D H said:
Yet another question: what is this thing we call intelligence?
Definitely. The biggest problem with talking about things like this is the severe lack of good definitions. When we can't define what exactly sentience or intelligence are, how are we going to have a meaningful discussion about creating them? I tend to find it better to talk in terms of capability, because at the end of the day that's what we want from machines: for them to do work so we don't have to. In fact, it would be far better if the mechanisms, however capable and human-appearing*, were categorically nothing like humans in terms of consciousness or intelligence, because otherwise we get into a huge ethical quagmire.

*By human-appearing I mean along the lines of a natural-language user interface rather than the Asimov-type human-looking robot.
 
  • #9
D H said:
It's quite possible it doesn't require real intelligence (whatever that is) at all.

Those of us who have struggled through four years of college to get a bachelor's degree, and then even more to get an advanced degree, look down upon our high school cohorts who never went to college at all. They have a problem with unemployment; we don't. Wrong. A lot of what we do requires no more intelligence than knowing how to operate some machine in a factory. Whether this results in a neo-Luddite revolution remains to be seen. The nascent roots of this revolution are here right now in the Occupy Wall Street crowd.

I don't understand how this connects to the question it was supposed to answer.
 
  • #10
D H said:
Yet another question: what is this thing we call intelligence?
I don't think this is a very difficult question at all. Intelligence is, as far as I'm concerned, "the ability to solve problems." Since there are many different kinds of problems, there must necessarily be different kinds of intelligence. This can even be simplified as "knowing what to do".

Then there's the fact that people tend to use different definitions. However, as language is primarily a means of communication, this definition makes sense and does not seem to do injustice to most forms of what people call intelligence. Obviously, there are those who would say intelligence is primarily the ability to solve X kind of problem, or the Y kind of problem that is most useful in situation Z. However, I have yet to find a more pragmatic definition than the aforementioned one.
 
  • #11
I have long felt that it is enormously more likely that the first alien intelligence we communicate with will be one we built rather than an ET one. Which isn't to say how likely or how soon this is, just that I think there are fewer fundamental obstacles to the former.
 
  • #12
Hobin said:
I don't think this is a very difficult question at all. Intelligence is, as far as I'm concerned, "the ability to solve problems." Since there are many different kinds of problems, there must necessarily be different kinds of intelligence. This can even be simplified as "knowing what to do".

Wouldn't "knowing what to do" be more instinct, and easily programmable?
I bet you meant intelligence can be thought of as solving a problem not encountered before.
 
  • #13
256bits said:
I bet you meant intelligence can be thought of as solving a problem not encountered before.
If you meant, "not encountered before by said person," then yes. If you had already encountered the problem, you would mostly be recalling the solution from memory, which I agree is not a sign of great intelligence, per se.
 
  • #14
Hobin said:
I don't think this is a very difficult question at all. Intelligence is, as far as I'm concerned, "the ability to solve problems." Since there are many different kinds of problems, there must necessarily be different kinds of intelligence. This can even be simplified as "knowing what to do".
Defining "intelligence" is a very hard problem. The only working definition is the terribly circular "intelligence is the quantity that IQ tests measure." IQ tests offer what is at best an ersatz measure of intelligence. It measures intelligence in the sense of a "Chinese room" test. True intelligence is, in my mind, the ability to solve problems that no one has yet solved. One big problem with this definition: How are you going to measure it? Detect it? Define it other than after the fact?

To exemplify the difference between true intelligence and the ersatz intelligence measured by IQ tests one needs look no further than Richard Feynman. He was without doubt one of the most intelligent of all recent physicists, yet his ersatz intelligence (his IQ test score) was a paltry 125.
 
  • #15
D H said:
Defining "intelligence" is a very hard problem. The only working definition is the terribly circular "intelligence is the quantity that IQ tests measure." IQ tests offer what is at best an ersatz measure of intelligence. It measures intelligence in the sense of a "Chinese room" test. True intelligence is, in my mind, the ability to solve problems that no one has yet solved. One big problem with this definition: How are you going to measure it? Detect it? Define it other than after the fact?

To exemplify the difference between true intelligence and the ersatz intelligence measured by IQ tests one needs look no further than Richard Feynman. He was without doubt one of the most intelligent of all recent physicists, yet his ersatz intelligence (his IQ test score) was a paltry 125.

For what it's worth, the "Turing Test" seems to be the putative standard for "humanesque" intelligence that many still use. There are many experimental designs that are consistent with Alan Turing's original description (1950) and, afaik, no machine has yet been developed that seriously warrants a comprehensive Turing Test.
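In skeletal form, the imitation game is just an interrogator exchanging messages with two hidden parties and guessing which one is the machine. A toy harness in Python; the canned "machine" replies are a placeholder stub, not a contender:

```python
import random

def machine(question: str) -> str:
    # Placeholder for the program under test; canned and illustrative only.
    return "That's an interesting question. Why do you ask?"

def human(question: str) -> str:
    # The hidden human answers through the console.
    return input(f"[hidden human] {question}\n> ")

def imitation_game(questions: list[str]) -> None:
    # Hide the two parties behind anonymous labels A and B.
    a, b = random.sample([machine, human], 2)
    for q in questions:
        print(f"Q: {q}\nA: {a(q)}\nB: {b(q)}")
    guess = input("Interrogator, which is the machine (A/B)? ").strip().upper()
    print("Correct!" if guess == ("A" if a is machine else "B") else "Fooled.")

imitation_game(["What do you think of this thread?"])
```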
 
Last edited:
  • #16
D H said:
Defining "intelligence" is a very hard problem...

...To exemplify the difference between true intelligence and the ersatz intelligence measured by IQ tests one needs look no further than Richard Feynman. He was without doubt one of the most intelligent of all recent physicists, yet his ersatz intelligence (his IQ test score) was a paltry 125.
The "very hard problem" would be solved by defining Richard Feynman, then.
 
  • #17
Hobin said:
If you meant, "not encountered before by said person," then yes. If you had already encountered the problem, you would mostly be recalling the solution from memory, which I agree is not a sign of great intelligence, per se.

Memory is a part of intelligence, just as much as problem solving. One has to make an assessment of a situation and determine whether to use the rules stored in memory, applicable to the same old problem, or devise a new set of rules for a never-encountered problem. Intelligence can range from that of a lobster, to a dog, to a chimpanzee, to a human.

So I agree with your statement that intelligence is not that hard to define. The problem is that you cannot give an IQ test to a lobster or a dog, so the level of intelligence is perhaps more difficult to pin down. While the Turing test is to some the holy grail to strive for, so that one can say a computer is as smart as a human, I would seriously bet that very few humans themselves could make a passing grade, as much as a computer could. It seems to be on the same level as Asimov's three laws of robotics, which are severely flawed as a design for AI by humans, i.e. the military would love to have a robot that can kill.

At present, silicon needs support staff for repair and energy replenishment. Would we become slaves to our intelligent robots if they are not able to sustain themselves as a unit?
 
  • #18
D H said:
Defining "intelligence" is a very hard problem. The only working definition is the terribly circular "intelligence is the quantity that IQ tests measure." IQ tests offer what is at best an ersatz measure of intelligence. It measures intelligence in the sense of a "Chinese room" test. True intelligence is, in my mind, the ability to solve problems that no one has yet solved. One big problem with this definition: How are you going to measure it? Detect it? Define it other than after the fact?

To exemplify the difference between true intelligence and the ersatz intelligence measured by IQ tests one needs look no further than Richard Feynman. He was without doubt one of the most intelligent of all recent physicists, yet his ersatz intelligence (his IQ test score) was a paltry 125.

I tend to disagree with this definition of intelligence for exactly this reason. Pragmatically, defining intelligence as the IQ quantity doesn't have any use. The ability to solve problems (which Feynman was very good at), however, does.

It's a lot harder to accurately test someone's ability to solve 'problems', though. After all, what kinds of problems? When is something considered a problem? Does age matter when testing this? And so on. We'll most likely stick with IQ tests for quite a while, which I think are the most reliable way to test one's potential for academic problem-solving at the moment (though I'm actually not sure of this; I've never really bothered to look up any studies to see whether this can be confirmed).
 
  • #19
People with low IQ (e.g. those with Down syndrome) still have feelings, very much so.

While even the most sophisticated software on the fastest computer in the world doesn't have any... Simply, without consciousness there aren't any feelings (nor emotions).

Also, IMO, consciousness and intelligence aren't the same thing... The level of IQ depends on the quality of the brain, while consciousness either is or isn't present.

All life is conscious, so these two seem to be either one and the same thing, or two things that are part of one (e.g. a coin with two faces).

All life is conscious, but awareness and intelligence vary with the structure of biological cells (not just brains), while human brains have the most complex biological structure on Earth, or, say, in the known Universe; thus they offer the best known ability to comprehend, imagine, create etc., and an enormous capacity to associate and memorise (in capacity for storing data computers are already ahead of us humans, while in ability to comprehend they are behind even bacteria, which know well how to survive).

Computers/robots shall have feelings only when they become self-aware. And I don't think that's possible to achieve with software alone, no matter how sophisticated the software (simulation) is.
 
Last edited:
  • #20
Hobin said:
If you meant, "not encountered before by said person," then yes. If you had already encountered the problem, you would mostly be recalling the solution from memory, which I agree is not a sign of great intelligence, per se.

http://www.dailywav.com/0904/quitelikethis.wav

Boy@n said:
Computers/robots shall have feelings only when they become self-aware.

http://www.dailywav.com/0106/fullestuse.wav

Note: links only clickable with IE8... copy and paste to the address bar with Firefox. Opens in WMP.

OCR... :wink: ... lol
 
Last edited:
  • #21
Boy@n said:
All life is conscious, so these two seem to be either one and the same thing, or two things that are part of one (e.g. a coin with two faces).

All life is conscious, but awareness and intelligence vary with the structure of biological cells (not just brains), while human brains have the most complex biological structure on Earth, or, say, in the known Universe; thus they offer the best known ability to comprehend, imagine, create etc., and an enormous capacity to associate and memorise (in capacity for storing data computers are already ahead of us humans, while in ability to comprehend they are behind even bacteria, which know well how to survive).

Computers/robots shall have feelings only when they become self-aware. And I don't think that's possible to achieve with software alone, no matter how sophisticated the software (simulation) is.

All life? Including bacteria, plants and brain-dead patients? I highly doubt it. All the evidence points to consciousness being a product of a central nervous system. I also don't think it is fair to say that the human brain is the most complex biological structure; really there isn't much difference in complexity between a brain and many other organs.

If we discover how exactly emotions are generated then we may be able to emulate that on a chip. But that's hardly useful, what we want is computer programs that can solve problems in a mechanical, non-conscious way and if it acts like a person then that's all the better for interfacing.
 
  • #22
Ryan_m_b said:
If we discover how exactly emotions are generated then we may be able to emulate that on a chip. But that's hardly useful, what we want is computer programs that can solve problems in a mechanical, non-conscious way and if it acts like a person then that's all the better for interfacing.

I wonder about that. People in CS draw a distinction between (human) thought and computation. Humans are good at thought but lousy at computation, whereas computers are good at computation but lousy at thought.

We don't really know what thought is, apart from the fact that (if we sidestep a lot of philosophical issues) it seems to be the byproduct of an organ, the human brain: a complex entity made up of an incredible number of neurons, but also hormones, which interacts with the other organs that make up a human body. (And, of course, while interacting with, or driven by, the environment, probably even with an evolutionary goal.)

It therefore seems reasonable that if you want something close to an organic intelligence, you'd need to model all that: a brain, neurons, hormones, and a body.

That's the thing with your comment: how would you motivate an organic-like intelligence if it didn't have emotions like love, curiosity, or ambition to drive it? It may well refuse to give a correct answer to the simplest calculation, like "3+5", since there is nothing driving it.

(This is under the assumption that you really want something like 'organic' intelligence in silicon, which I think we do. We humans are just so much better at solving 'easy' tasks, like cleaning the house, whereas computers have to be programmed by humans to do that, and still fail at the easiest of tasks.)
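That worry can be made concrete: in most agent designs, behaviour is chosen by some drive or utility signal, and with that signal zeroed out the agent has no reason to prefer answering over idling. A toy sketch, with drive weights invented purely for illustration:

```python
# A toy drive-driven agent: it picks whichever action best satisfies its
# current drives. With the drives zeroed out, nothing distinguishes
# answering "3+5" from doing nothing at all.
ACTIONS = {
    "answer_3_plus_5": {"curiosity": 0.1, "ambition": 0.6},
    "explore_new_problem": {"curiosity": 0.9, "ambition": 0.2},
    "do_nothing": {},  # satisfies no drive
}

def best_action(drives: dict[str, float]) -> str:
    # Score each action by how strongly it satisfies the weighted drives.
    def utility(action: str) -> float:
        return sum(drives.get(d, 0.0) * w for d, w in ACTIONS[action].items())
    return max(ACTIONS, key=utility)

print(best_action({"curiosity": 0.8, "ambition": 0.3}))  # explore_new_problem
print(best_action({}))  # all utilities are 0.0; the "choice" is an arbitrary tie-break
```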
 
Last edited by a moderator:
  • #23
I think the main reason to pursue strong AI is not any particular belief about its utility, but simply because 'maybe we can', just like many other non-practical pursuits (is anyone expecting utility from understanding dark matter?). If we could, it would be cool that we could, and the result would presumably be cool. It would certainly shed light on questions of what intelligence and consciousness are.
 
  • #24
Maybe I'm just being a reductionist, but saying that we can never create an AI engine that can truly feel because it doesn't have any biology is, in my opinion, just silly. Biology is a series of complex chemical interactions, and these complex chemical reactions can be summarized by relatively simple physical laws. I don't think there is anything particularly special about calling it "biology", because computers are governed by the same physical laws as humans are. So saying that our notion of a feeling is somehow special, and that no machine could ever replicate it, seems absurd to me. I think the human brain is very scattered, and even if we had everything perfectly mapped out it wouldn't be very computer-like. Logically, computers are much more rigorous and less prone to being misled, and I suppose if you define this weakness as a human trait, then computers won't seem very human-like to us, because they are too systematic and logical.

Personally I don't see the point in developing AI, because the whole point of computing is to do superhuman calculations, and developing emotions hinders the efficiency of a computer. But when listening to EMI or Emily Howell I get conflicted. Ahh well, I guess we'll see what the future holds :P
 
  • #25
Firstly, I think that it is possible. Plant a computer into a clone, which allows fluid interaction between the clone and the machine, and your "emotion-AI" is formed.

Perhaps it is not ideal for AI to possess 'real' emotion; that, IMO, should be the clone's duty, if we progress that far. Then would we prefer cyborgs? But the former possessing "fake emotion" would definitely be essential, for UI purposes and such. That would eliminate the need for rocket science just to interact with the machine.
 
  • #26
DarkReaper said:
Firstly, I think that it is possible. Plant a computer into a clone, which allows fluid interaction between the clone and the machine, and your "emotion-AI" is formed.

Perhaps it is not ideal for AI to possess 'real' emotion; that, IMO, should be the clone's duty, if we progress that far. Then would we prefer cyborgs? But the former possessing "fake emotion" would definitely be essential, for UI purposes and such. That would eliminate the need for rocket science just to interact with the machine.

This would not work. Putting a computer inside someone won't make it conscious.
 
  • #27
Ryan_m_b said:
This would not work. Putting a computer inside someone won't make it conscious.

To elaborate, a computer would be placed within a living clone. The clone gains the added benefit of being able to interact with the computer (given that an interaction environment is built); the clone can then reply and interact, in human language, with its 'master' (a human).

Given sufficient modification, the clone could be rendered dependent on an actual human due to certain factors, like the representation in Star Wars: Attack of the Clones.

Another example is the appearance of AI in sci-fi titles such as Halo, which presents the idea of an AI almost fully functioning as an actual human, with hindering factors such as the lack of a sense of touch, etc. [Of course, inspiration is a good source of creation.]
 
  • #28
DarkReaper said:
To elaborate, a computer would be placed within a living clone. The clone gains the added benefit of being able to interact with the computer (given that an interaction environment is built); the clone can then reply and interact, in human language, with its 'master' (a human).

This is not science; you are espousing science fiction here. There is no way to place a computer in a living organism and expect it to integrate, let alone become an artificial intelligence.
 
  • #29
DarkReaper said:
Firstly, I think that it is possible. Plant a computer into a clone, which allows fluid interaction between the clone and the machine, and your "emotion-AI" is formed.

DarkReaper said:
To elaborate, a computer would be placed within a living clone. The clone gains the added benefit of being able to interact with the computer (given that an interaction environment is built); the clone can then reply and interact, in human language, with its 'master' (a human).

This is just nonsense. You are hand-waving intelligence into existence and using this as proof that true AI is possible.

What is so special about a clone here? How is the computer connected to the clone? Implant a computer chip into a living being and about all that will happen is that this implantation will invoke the foreign body reaction.
 
  • #30
The limits of AI may be related more to technology and our inability to understand ourselves.

Until recently, the use of tools by animals was not even accepted.

Man's ability to limit his perception of the world around him to preconceived ideas has slowed man's advancement throughout history.
The idea that we are above all else seems to be the most prevalent notion limiting mankind.
 
  • #31
lostcauses10x said:
The limits of AI may be related more to technology and our inability to understand ourselves.

Until recently, the use of tools by animals was not even accepted.

Man's ability to limit his perception of the world around him to preconceived ideas has slowed man's advancement throughout history.
The idea that we are above all else seems to be the most prevalent notion limiting mankind.

Do you have anything to back up these claims? Animal tool use has been observed for a very long time (or do you think that for millennia people ignored birds' nests?). Are you by any chance confusing this with the paradigm shift in animal studies from behavioural to cognitive studies?

I have no idea what you are getting at in terms of limits of perception; empirical science distinctly avoids this and has given us the scientific world we live in.
 
  • #32
I agree with gordonj005 - reductionist and proud.

If you created a computer to simulate a human being cell-by-cell, molecule-by-molecule, quantum-state-by-quantum-state, I can see no reason (excepting magic, in which I include the concept of an eternal soul, at least for the purposes of this discussion) why such a thing would not think in just the same way the person it simulates does.

That is, I think, a kind of proof-by-brute-force that strong AI is possible in the absence of outright magic. At the least, I think it forces you to acknowledge that objections to the possibility of strong AI are difficult to ground in science (Penrose may disagree). It is not at all a practical way of building an AI; the computational requirements are horrible. I suppose a way to get from what I describe to a working AI is to work out ways to simulate the necessary features of multiple cells in a few computationally-efficient steps, and build up from there.
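To put a rough number on "horrible": even stopping at the synapse level, which is far coarser than the molecule-by-molecule simulation described above, a back-of-envelope estimate lands in exascale territory. The figures below are order-of-magnitude assumptions, not measurements:

```python
# Crude lower bound on brute-force brain simulation at the synapse level.
# Every input here is an order-of-magnitude assumption.
SYNAPSES = 1e14            # ~100 trillion synapses in a human brain
UPDATE_HZ = 1e3            # update each synapse ~1000 times per second
FLOPS_PER_UPDATE = 10      # a few arithmetic operations per update

required = SYNAPSES * UPDATE_HZ * FLOPS_PER_UPDATE
print(f"~{required:.0e} FLOP/s")  # ~1e+18 FLOP/s; molecule-by-molecule
                                  # simulation would be vastly worse still
```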
 
  • #33
Ibix said:
I agree with gordonj005 - reductionist and proud.

If you created a computer to simulate a human being cell-by-cell...

That is, I think, a kind of proof-by-brute-force that strong AI is possible in the absence of outright magic...
I agree with your statement, but I'm not certain that this will be a goal in the development of artificial intelligence for quite a while, nor that it is necessarily a desirable goal. Instead, in my opinion, what we will see is a continuation of what we have now: increasingly sophisticated and capable software, possibly combined with a more natural interface. I'm thinking of a more advanced version of Siri rather than a digital entity based on the inner workings of the mammalian brain.
 
  • #34
"empirical science distinctly avoids this and has given us the scientific world we live in."

Correction is: tries to avoid.And should do so. Yet it all still comes from the observations (ability to perceive and the limits of such) and the human factor of the mind.

As for evidence of what I said, try reading old textbooks and even the arguments today by some of ridicules ideas. strangly enouhg if some groups did get into power as in the past those that think teach and would go agaist ideas of groups could and would find themselves persecuted.

Of course this just strayed of off the topic of AI, or has it??

If such a humanoid AI existed it would be resisted by some groups as noting more than a machine, even if it did come to a realization it existed.

The original poster sets limits almost immediately: "but I don't think it will actually be like human intelligence in the way that you and I are.". It is a simple example of such perceptions and preconceived ideas. ( Note: this is not meant as an attack on the original poster, it is just a simple example.)

Simply put I can not see what can and will become of AI. A great deal of science fiction has been wrote on the topics. To say it can not be human in the end form, or more than human who is to say?? The technology nor the advances of AI are to that point yet.
When it comes to humans and its use of such developments in AI, if today's usage of the internet is any idea, most would be to play games, and or some sort of other gratification of an individual: such as porn, or some form of social status. Humans are the strangest critter on the planet.

Who is to say what will be and not be??
 
  • #35
The iPhone robot.
Doesn't an app exist for this? iFace.
If your phone can recognize you, then how far away will the "humanization" of the iPhone be?

I can just imagine:
Pls call my better half.
No.
What! Pls call my better half.
NO.
Why not?
I'm busy.
Doing what, exactly?
Nothing. And it's none of your business.
Sigh.
 
