The Limits of AI: Can Computers Truly Replicate Human Intelligence and Emotions?

  • #71
PAllen said:
Right, a good example being chess programs. While this has little relation to general intelligence, attempts to model human thinking about chess never got very far. Using completely different methods, computers have reached a point where no human would consider a match against a computer competitive, at any time control.

(Caveats: it is virtually undisputed that top humans play some positions better than any computer, and also true that computers play some positions better than any human. Yet the last human-computer match (involving Vladimir Kramnik) demonstrated to everyone's satisfaction that direct human-computer matchups were no longer interesting. Final observation, suggesting the value of cyborgs for the medium term: expert human players (human ratings go, e.g., novice, class player (E, D, C, B, A), expert, master, International Master, Grandmaster, top-20 player) plus medium-strength computer programs beat the strongest computer programs playing with no human assistance.)
Interesting example, but it doesn't hit the point I am making.

Also, computers are not better than humans at quick matches; in, say, a 1-minute match, humans win all the way down.

Of course, you can say that when computers get more powerful than today's, they will someday beat humans at chess even in 1-minute matches.

But my real point here is that humans have a great advantage over computers because of their ability to adapt and learn through motivation (which comes via emotions and feelings). For example, swap some pieces on the board and run the chess program: it will play badly, while a human will be able to play immediately almost as well as before. Or, worse for computers, invent a new way to move a piece, say, instead of the L-shaped move the knight moves in an S shape. Computers on their own (without a dedicated programmer's intervention) would not be able to play at all, while humans wouldn't have much difficulty; they could play right away. (The sketch below shows why.)
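To make both points concrete, the brute-force search and the hardcoded rules, here is a minimal Python sketch. It is an illustrative toy of my own, not any real engine's code; names like KNIGHT_OFFSETS and negamax, and the hook functions they rely on, are assumptions for the example.

```python
# Minimal sketch (illustrative only, not any real engine's code) of how a
# conventional chess program bakes the rules into constants and then searches.

# The knight's L-shaped move exists only as this hardcoded offset table.
# Change the rule (say, to an S-shaped move) and the program is blind to it
# until a programmer edits these constants.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(square, occupied_by_own):
    """Generate knight destinations from `square` on an 8x8 board."""
    file, rank = square
    for df, dr in KNIGHT_OFFSETS:
        f, r = file + df, rank + dr
        if 0 <= f < 8 and 0 <= r < 8 and (f, r) not in occupied_by_own:
            yield (f, r)

# The "completely different method": brute-force lookahead, here a plain
# negamax. `moves`, `apply`, and `evaluate` are assumed hooks that a real
# engine would supply; nothing here models human pattern recognition.
def negamax(position, depth, moves, apply, evaluate):
    if depth == 0:
        return evaluate(position)
    best = float('-inf')
    for move in moves(position):
        best = max(best, -negamax(apply(position, move), depth - 1,
                                  moves, apply, evaluate))
    return best if best != float('-inf') else evaluate(position)
```

The search scales with hardware, but every rule lives in constants like KNIGHT_OFFSETS; change how the knight moves and the program is helpless until a programmer edits the table, while a human simply adapts.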

The game of Go (and similar games such as Gomoku and its professional version, Renju) is another interesting example, one where computers don't stand a chance against good human players.
 
  • #72
Ryan_m_b said:
Well we can and we can't. I would maintain that you are making the claim that an intelligent agent must be an emotive one. I would counter that claim by pointing out that non-emotive objects can perform complex tasks that previously would have been solely in the domain of human beings. For example: software that can analyse speech semantically and respond. I see no reason to suggest that as the complexity of tasks increases, emotion must emerge. I'm quite tired now so I'll give the matter some thought overnight, but I'm pretty sure there are examples of intelligent agents without emotion.
Found any? Performing complex tasks is one thing; dealing with new tasks, or old ones that have changed (e.g., the chess example above), is quite another.

Ryan_m_b said:
Software can evolve; genetic algorithms are a good example of that. As for the rest of your statement, I don't think you can categorically say that it isn't possible to write software capable of learning and adapting. Such things already exist in a limited capacity and I see no reason to believe that this capability cannot scale.
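As a concrete, hedged illustration of the genetic-algorithm idea quoted above: the following minimal Python sketch (all names here, TARGET, fitness, crossover, mutate, are illustrative toys, not any particular library's API) "evolves" random bit strings toward a fixed target by selection, crossover, and mutation.

```python
import random

# Toy genetic algorithm: evolve a population of bit strings toward a target
# string. Illustrative only; real GAs vary widely in encoding and operators.
TARGET = [1] * 20          # the "fittest" genome is all ones
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 100, 0.02

def fitness(genome):
    # Count matching bits against the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Single-point crossover: splice two parents at a random cut.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome):
    # Flip each bit with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:POP_SIZE // 2]   # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(gen, fitness(population[0]))
```

Even this toy shows the sense in which software "evolves": fitness climbs generation by generation with no understanding anywhere in the loop, which is precisely what both sides of this exchange are arguing about.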
I agree that this field has a lot of potential for advancement, but I still maintain (I could be wrong) that without true motivation (a simulated one won't do), computers or robots with AI won't do anything more than what they are programmed to do.

Even if we make an incredible piece of software which tells itself "I want to learn and adapt", it won't really learn by understanding like humans do (just calculating data, no matter how complex, doesn't automatically give rise to consciousness and self-awareness); it will just gather data on its storage medium. And it won't really adapt: it won't know what it is doing, or whether that is the best way to adapt, and thus such simulated adaptation cannot come even close to the efficiency of human adaptation.

I'd say that consciousness cannot be calculated or simulated; instead, I'd dare to say that consciousness is an essential element of the whole physical existence of our Universe. The question then is how to "tap into it"... Brains obviously became capable of doing so, while computers obviously haven't, yet, at least not in the way we are making them today (in the physical and software sense).

So, will we, humans, be able to make computers "tap into consciousness"? Well, I don't see why not; if nature did it, we might too. I just think it will demand radical changes in how we think about it all.

Are feelings and emotions part of consciousness? Well, I guess we'll know when we make computers conscious. But I am ready to bet on it being so ;)
 
  • #73
I'd be wary about attributing characteristics like intelligence to consciousness; there's some interesting evidence emerging in neuroscience that consciousness is a by-product of our mind rather than a controller (some studies have shown conscious awareness of decision making occurring after that decision is enacted). Also, the link between consciousness and emotion is a difficult one to pin down; people with psychopathy, for instance, have extreme trouble feeling emotions or empathising with others and so simulate these things instead.
 
  • #74
Boy@n said:
Also, computers are not better than humans at quick matches; in, say, a 1-minute match, humans win all the way down.

This is false. In fact, the longer the time control, the better humans do against computer chess programs. Top players were helpless against computers at speed chess many years before they succumbed at tournament time controls. In a postal match (days per move), indications are that a top postal player would still be competitive.

Further, if you look back at my post, it had nothing to do with anything you were saying. It was simply amplifying a comment Ryan_m_b made about the eye: that in making a computer do a specific task, we don't need to emulate the way people do it, and can arrive at a very different mix of strengths and weaknesses as a result.

(FYI: I am well aware that methods used successfully in chess have not worked in Go, nor are such methods ever likely to work in Go).
 
  • #75
The issue really depends on what you think humans are. I believe human beings are physical systems. In that sense, we ARE computers: really, really different biological machines that can feel pain, pleasure, love, etc., but physical machines nonetheless. If you believe in a soul, then you would probably argue that any intelligence we attempt to replicate would lack that inner "thing" that makes people... well, people! However, if we look strictly at what we know about the human body, the brain, etc., then there is no reason to believe that we couldn't create and simulate intelligence if we knew more about what it meant to be intelligent! In actuality, every time a human being reproduces, they create a new intelligence. Also, any one of your cells could in principle be used to create a new intelligence. My point is that, physically, the creation of a new intelligence occurs daily. We just don't quite understand it, YET! Perhaps some day...
 
  • #76
PAllen said:
FYI: I am well aware that methods used successfully in chess have not worked in Go, nor are such methods ever likely to work in Go.
Yes and no.

A go-playing program has beaten an 8-dan professional recently. Running on a 15-teraflop, 800-CPU supercomputer. With a nine-stone handicap (chess equivalent: toss both of your rooks). The solution is still essentially brute force, just a different kind of brute force.
 
  • #77
D H said:
Yes and no.

A go-playing program has beaten an 8-dan professional recently. Running on a 15-teraflop, 800-CPU supercomputer. With a nine-stone handicap (chess equivalent: toss both of your rooks). The solution is still essentially brute force, just a different kind of brute force.

A nine-stone handicap is huge, as you've noted (in chess, I believe I could regularly beat Garry Kasparov with an equivalent handicap, and that's not saying much). Unlike chess, for a long time the best Go-playing program was one with essentially no lookahead at all, just pattern recognition algorithms.

It is my opinion (not a rigorous argument) that substantially different methods than were used with chess are required for Go. That does not mean I think progress will not be made, or even that it need be slow; just that a different mix of approaches will be required.
 
  • #78
PAllen said:
It is my opinion (not a rigorous argument) that substantially different methods than were used with chess are required for Go. That does not mean I think progress will not be made, or even that it need be slow; just that a different mix of approaches will be required.
One of those substantially different methods, used with quite a bit of success in computer go, is the Monte Carlo technique: plopping stones down at random. Highly parallelizable, zero intelligence. Combining that technique with some pattern matching against a huge library yields even better play. Still highly parallelizable, and still zero intelligence.
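To make the "zero intelligence" point concrete, here is a minimal Python sketch of Monte Carlo move evaluation. Tic-tac-toe stands in for go only because its rules fit in a few lines (real go legality and scoring would swamp the example); the names and parameters (LINES, n_playouts, and so on) are illustrative assumptions, but the evaluation loop, scoring each move by many random playouts, is the technique described above.

```python
import random

# Monte Carlo move evaluation: score each candidate move by playing many
# completely random games from the resulting position and counting wins.
# Tic-tac-toe is the stand-in game; the evaluation loop is the point.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    # Return 'X' or 'O' if a line is complete, else None.
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    # Play uniformly random moves until the game ends.
    board = board[:]
    while True:
        w = winner(board)
        if w:
            return w
        empties = [i for i, v in enumerate(board) if not v]
        if not empties:
            return None                      # draw
        board[random.choice(empties)] = player
        player = 'O' if player == 'X' else 'X'

def monte_carlo_move(board, player, n_playouts=500):
    # Pick the move whose random playouts win most often for `player`.
    best_move, best_score = None, -1.0
    for move in (i for i, v in enumerate(board) if not v):
        trial = board[:]
        trial[move] = player
        wins = sum(random_playout(trial, 'O' if player == 'X' else 'X') == player
                   for _ in range(n_playouts))
        if wins / n_playouts > best_score:
            best_move, best_score = move, wins / n_playouts
    return best_move

print(monte_carlo_move([''] * 9, 'X'))   # usually picks the centre (index 4)
```

No position is "understood" anywhere in that loop; the statistics of random play alone steer the choice, which is why the method parallelizes so well.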

That's apparently good enough that computer go programs could now beat me. I've played off and on for 40 years, mostly off. The mostly off, occasionally on, means I'm perpetually on the patzer (kyu) scale. I've made it up to 1 or 2 kyu a couple of times. Several computer go programs are at the 1 or 2 dan (amateur) level now.
 
  • #79
Ryan_m_b said:
I brought up the example of a camera because it does a few of the jobs that the eye does; specifically, it does the few jobs that we want to replicate, but in a more convenient way (to the best of our ability). The discussion going on in the thread right now is dealing with the claim that intelligent software will require emotions. I'm contesting that by pointing out that when we want to replicate a human faculty (e.g. vision) we don't go about it by copying how humans work. This led to the examples given by me and others of intelligent software that acts nothing like a human does. Because of this I disagree that a future generally intelligent piece of software will necessarily require emotion.

Since I think I was one of the posters who brought it up, I'll try to play the devil's advocate on this one. My original line of thought:

People act in certain manners, and from a true AI perspective (where I loosely define true AI as, say, a conscious thing, something I could converse with) I would also expect it to act. But why do people act? Because they have intent. And why do they have intent? Because they have emotions: love, hate, greed, etc. And why do people have emotions? Difficult, but let's assume it is similar to pain reflexes and gives a Darwinian edge over zombies.

So a truly intelligent router (one I could also converse with), I assume, would act with the intent to route optimally out of 'love' for moving bits around, not because it does an exhaustive search on a space of solutions. That emotion, love, may not be quantifiable or understandable for us humans, but a conscious router might describe it that way, since I don't see why it would otherwise do what it needs to do, which is to route.

And that's my problem with your example: an eye is not intelligent, and neither is a chess-playing program.
 
  • #80
MarcoD said:
Since I think I was one of the posters who brought it up, I'll try to play the devil's advocate on this one. My original line of thought:

People act in certain manners, and from a true AI perspective (where I loosely define true AI as, say, a conscious thing, something I could converse with) I would also expect it to act. But why do people act? Because they have intent. And why do they have intent? Because they have emotions: love, hate, greed, etc. And why do people have emotions? Difficult, but let's assume it is similar to pain reflexes and gives a Darwinian edge over zombies.
Here is where our discussion breaks down: if you are going to define AI as a conscious thing and then assert that intent requires emotions, then we aren't going anywhere. I see no reason to equate consciousness with intelligence, and no reason to assume that subjective emotions are a required part of the process.
 
  • #81
DROBNJAK said:
Intelligence doesn't require emotions.
That is a bald assertion, a fallacy. Science doesn't have a good handle on what intelligence truly is.

Emotions are simply primitive thoughts, a kind of evolutionary residue from invertebrates.
Another bald assertion. Perhaps you are thinking of instinct, not emotion.

Computers had been as intelligent as humans from the first day when they were switched on.
This is blatantly false by any meaningful definition of "intelligent".

Human intelligence is nothing but finding a path of least resistance in a given context. Simply put, computers are blind; they do not have sensors to establish the context, so somebody from the outside environment needs to translate the context to them. Once the context is translated to them, computers are as capable as humans.
Simply put, this is nonsense.

As well, consciousness doesn't exist. It is simply the egocentricity of animals. A thinking medium doesn't need to be self-conscious to be able to reason.
And you know this how?


This last post is the last straw. This thread has been going nowhere fast because from the very outset it has not been based on science. It has instead been based on beliefs.

Thread closed.
 
