Artificial Intelligence vs Human Intelligence

In summary, computer programs have been developed which have fooled experts in various fields. However, the issue of whether or not a computer can be comparable to a human mind is still up for debate.
  • #1
C0mmie
"A mind differs from a computer in no other respect but complexity"
There is a lot of ongoing debate on this topic. On one side, theologians believe that a mind is supernatural and cannot be reproduced by machines. On the other side, A.I. experts, physicalists and atheists like myself believe that a computer can carry out a thought process just as a brain can; we are simply not yet at the stage where we can build such a computer. What are your views on this subject? How would one go about proving the latter point of view?
 
  • #2
i love asimov; 'i, robot' is very pertinent to this topic. i remember thinking that it was obvious that the brain is just a biological machine, so why then couldn't AI one day be equal to human intelligence and sensibility?

one important thing i have recently come to realize is that the brain does not actually function the way we would like to assume it does. a signal getting from one area of the brain to another doesn't take one obvious and logical path; it often takes multiple paths at the same time, and often a different path each time. the brain then can't be equated to a complex but essentially logical computer. it is not a question of complexity: no matter how advanced the computer, i don't think it is possible to simulate the infinite possibilities within the brain for information transmission, as this transmission doesn't follow logic.
 
  • #3
As carbon-based machines, we are walking, talking examples that complex systems are capable of sentience*. And so, in principle it is possible for machines made out of silicon and steel to be sentient.

And Steppenwolf, perhaps... it "does not follow any logic" that we are yet aware of. There is still a lot about the human brain that is not known.



*If sentience were to be defined as "intelligence, self-awareness and consciousness"
 
  • #4
i wasn't too clear really. when i said 'it's not logical' i didn't mean it doesn't make sense/is random/is too hard for now to understand, but that it's not a sort of a-then-b process, more like a-then-b and/or c with a bit of d thrown in. maybe we can understand this process one day, but i don't see how a computer could be made to function on such a level. how do you write a program with multiple pathways? i know nothing about programming, so enlightenment is welcome; it just seems... logical
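For what it's worth, a program with multiple pathways is not hard to write in principle. Below is a minimal toy sketch in Python (the graph, names and numbers are invented purely for illustration and make no claim about how real brains route signals): a "signal" fans out over several randomly chosen routes at once, so it takes a different set of paths on each run.

```python
# Toy illustration: a "signal" that fans out over several randomly chosen
# routes at once, taking different paths on different runs.
import random

# made-up adjacency list for a small network (a directed acyclic graph)
GRAPH = {
    "a": ["b", "c", "d"],
    "b": ["c", "e"],
    "c": ["d", "e"],
    "d": ["e"],
    "e": [],
}

def propagate(node, target, path=None, spread=2):
    """Recursively fan the signal out along up to `spread` random branches."""
    path = (path or []) + [node]
    if node == target:
        return [path]
    branches = GRAPH[node]
    if not branches:
        return []
    routes = []
    for nxt in random.sample(branches, min(spread, len(branches))):
        routes.extend(propagate(nxt, target, path, spread))
    return routes

if __name__ == "__main__":
    # Two runs will usually print different sets of simultaneous routes.
    for run in range(2):
        print(f"run {run}:", propagate("a", "e"))
```

Whether this kind of branching captures anything essential about real neural signalling is, of course, exactly the question being argued in this thread.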
 
  • #5
Philosophers and theologians have debated such things since time immemorial, comparing the human mind and soul with those of animals, for example. Today, computer programs have already been developed which have fooled experts in various fields. In recent years, a growing application for such programs has been telephone answering services. This suggests that in the very near future we will be hard pressed to tell whether we are talking to a computer on the phone or to a human being. In other words, the philosophers and theologians may spend eternity debating the issue even as they themselves will not be able to tell the difference when confronted with the reality.

This may superficially sound silly and pointless to debate in the face of such evidence; however, the issues are much more profound upon deeper examination. Many of these debating philosophers and theologians claimed at times that blacks, for example, were not human and did not possess souls. Such arguments, taken to extremes, were used to justify the enslavement and brutal treatment of people.
Using less extreme arguments, politicians have supported imperialist policies such as the "white man's burden". If blacks were not necessarily soulless animals, they were still considered heathens in need of salvation, at the end of a gun barrel if necessary.

In the future, distinguishing between a simple toaster oven and a possibly sentient being may become more difficult. Likewise, distinguishing between politics, bias, and reality may be equally difficult. In other words, the debates of the philosophers and theologians will have to take a back seat to the reality we will have to deal with.
 
  • #6
i don't see how artificial intelligence can be compared to human nature when we aren't even sure if complete cloning would be successful... for us to say that human intelligence and artificial intelligence are the same is
#1 degrading to human capability
#2 arrogant in making the claim that we can "replicate" how our mind works when we are still trying to understand it ourselves...
 
  • #7
i don't see how artificial intelligence can be compared to human nature when we aren't even sure if complete cloning would be successful... for us to say that human intelligence and artificial intelligence are the same is
#1 degrading to human capability
#2 arrogant in making the claim that we can "replicate" how our mind works when we are still trying to understand it ourselves...

This is exactly the kind of thing I was talking about. In the west especially, the largely Christian idea of humanity having superiority and dominion over nature and the earth has become established. Anyone who has read the "I, Robot" series or seen the movie with Robin Williams knows that the only way such an issue will ever be settled is through emotional arguments when humanity is finally confronted with the reality.

I agree that it may be arrogance to claim artificial intelligence is a replica of how our mind works, but it may be equally arrogant to claim it is not. All we can humbly do is make comparisons and, perhaps, most humbly, not make comparisons at all.
 
  • #8
Originally posted by Kerrie
i don't see how artificial intelligence can be compared to human nature when we aren't even sure if complete cloning would be successful... for us to say that human intelligence and artificial intelligence are the same is
#1 degrading to human capability
#2 arrogant in making the claim that we can "replicate" how our mind works when we are still trying to understand it ourselves...

Don't you think it somewhat arrogant to assume that the fact that we are made out of proteins (instead of metals) makes us better than other life-forms?
 
  • #9
Indeed, isn't the belief that man is incapable of making an artificial intelligence parallel to our own degrading to our capability?
 
  • #10
Originally posted by FZ+
Indeed, isn't the belief that man is incapable of making an artificial intelligence parallel to our own degrading to our capability?

This echoes my own sentiments. I would add that while we don't know the extent of the challenge, we have seen nothing that would indicate it is impossible. I believe it is a significantly easier challenge, technically, than interstellar flight. Morally, it might be much more challenging. Would it be ethical to keep AIs as slaves?

Njorl
 
  • #11
Originally posted by FZ+
Indeed, isn't the belief that man is incapable of making an artificial intelligence parallel to our own degrading to our capability?

Exactly.

Also, I don't think that its being made of a different material somehow makes it any less alive.
 
  • #12
Interesting

I agree with the above statement that the issue of whether AI equals human intelligence will not be settled until mankind is confronted with reality. But at the same time, when we are confronted with this reality, a new issue will arise: Do the same morals apply to AI as those that apply to human beings and animals? If so, then we may find that disposing of an old computer is a "sin," or, if enslaving machines is okay, then we may one day revert to human slavery. Either way, mankind will not adapt to the change very easily... difficult times await us. :smile:
 
  • #13
Do the same morals apply to AI as those that apply to human beings and animals? If so, then we may find that disposing of an old computer is a "sin," or, if enslaving machines is okay, then we may one day revert to human slavery. Either way, mankind will not adapt to the change very easily... difficult times await us.

What silliness. Pantheists believe the entire universe is divine and everything in it is therefore sacred, but that doesn't mean they stop breathing and eating. However, it might mean they make that extra attempt to recycle those parts of their computer that can be recycled, and sometimes might treat their computer affectionately. It most certainly means they must decide such things for themselves on the basis of both their own feelings and thoughts rather than settling for just someone else's beliefs or a black-and-white view of ethics. :0)
 
  • #14
here's my logic on it:

intelligence comes from the mind

science cannot claim to understand the mind to its complete extent

human intelligence claims to have duplicated our *current*
understanding of the mind/intelligence via artificial intelligence

therefore, artificial intelligence cannot compare to human intelligence because we do not have a complete understanding of the mind...

i think it is apparent that science has yet to completely understand the human capacity of consciousness, and if that is the case, how can we duplicate that in machinery? although it is a true accomplishment of our ability to create artificial intelligence given our range of knowledge in technology, we still should not make the claim that AI is our equal, as we have yet to grasp how and why human intelligence works...

and yes, organic flesh IS better than metal, as we don't rust and we have a much better ability to heal... metal certainly cannot do that on its own... and that is nothing to be arrogant about, but something to be damn grateful for...

as far as physical functions and abilities go, i am all for a housekeeping robot...
 
  • #15
Do you think they could get machines to dream? Or ponder? That would be the true sign of consciousness, i.e., having a subconscious or an unconscious.
 
  • #16
human intelligence claims to have duplicated our *current*
understanding of the mind/intelligence via artificial intelligence

I don't know of anyone who claims this yet.

i think it is apparent that science has yet to completely understand the human capacity of consciousness, and if that is the case, how can we duplicate that in machinery? although it is a true accomplishment of our ability to create artificial intelligence given our range of knowledge in technology, we still should not make the claim that AI is our equal, as we have yet to grasp how and why human intelligence works...

I think we are rapidly gaining an understanding of human consciousness. I've said this at other times: Radical Behaviorists have managed to bridge the gap between cognitive and behavioral sciences. They have brought the quantitative rigor of classical behaviorism to the science of the mind. If this and the growing knowledge of how the human brain itself works does not constitute an understanding of consciousness, then what does?

and yes, organic flesh IS better than metal, as we don't rust and we have a much better ability to heal... metal certainly cannot do that on its own... and that is nothing to be arrogant about, but something to be damn grateful for...

My computer may not live as long as I do in our temporal sense, but then it operates on a very different time scale. It lives at least a month for every second it is turned on. It may not repair itself, but it can route around difficulties and, by its own timescale, "lives" much longer. Of course, many of our own cells die off and are expelled from our bodies and replaced. The same could theoretically be done for computers eventually.
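As a rough back-of-the-envelope check of where a figure like that could come from (the numbers below are my own order-of-magnitude assumptions, not the poster's arithmetic):

```python
# Order-of-magnitude check (assumed numbers): a ~1 GHz processor takes roughly
# 10^7 elementary steps for every single firing of a ~100 Hz neuron.
cpu_steps_per_second = 1e9        # assumed ~1 GHz clock
neuron_spikes_per_second = 1e2    # assumed typical neuron firing rate
ratio = cpu_steps_per_second / neuron_spikes_per_second   # ~1e7
seconds_per_month = 60 * 60 * 24 * 30
print(ratio / seconds_per_month)  # ~3.9 neuron-paced "months" per second of CPU time
```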

as far as physical functions and abilities go, i am all for a housekeeping robot.

Spoken like a true humanist. :0)
 
  • #17
wuli~ i am referring to those who defend AI as it is spoken of here: computerized machinery... and i think we have progressed in our understanding of human consciousness, but it is not always accepted until it is *scientized*... if you know what i mean...

to address Iacchus32's point, some think these human qualities are unnecessary and ridiculous, but i will admit it is a part of our "programming", and it takes conscious free will to enable this...
 
  • #18
In "The Restaurant at the End of the Universe", they had cattle that were genetically engineered to want to be eaten. They were also intelligent enough to tell the patrons of this. Eating any other kind of animal was considered immoral.

If we make artificial minds, it would be awfully tempting to engineer them to be servile. If we intend to keep them as lesser beings, as servants, would it be more moral to design in a love of servility, or just to make them unhappy? The latter seems illogical, but the former makes me cringe.

Njorl
 
  • #19
Hobson's choice, eh? Accept the irrational or the immoral.

The resolution to this problem is already making itself apparent to AI and consciousness researchers. Both have moved beyond Aristotelian and Boolean logic in search of thought patterns and computer programs that don't crash every time they come across a contradiction. The successful implementation of Fuzzy Logic in computer programming and the applications of Relational Frame Theory are just a few of the results.
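To give a flavour of what that looks like in code, here is a minimal sketch of the standard fuzzy connectives (a toy example of my own, not taken from any particular system): truth values range over [0, 1], so a statement and its negation can both be partly true, and a "contradiction" degrades gracefully instead of crashing anything.

```python
# Toy fuzzy truth values: degrees of truth in [0, 1] instead of strict True/False.

def f_and(a, b):      # the common fuzzy AND: take the minimum degree
    return min(a, b)

def f_or(a, b):       # the common fuzzy OR: take the maximum degree
    return max(a, b)

def f_not(a):         # fuzzy negation
    return 1.0 - a

warm = 0.7                          # "the room is warm" is 70% true
print(f_and(warm, f_not(warm)))     # 0.3 -- a partial "contradiction", nothing crashes
print(f_or(warm, f_not(warm)))      # 0.7 -- not forced to exactly 1 either
```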
 
  • #20
Originally posted by Kerrie
here's my logic on it:

intelligence comes from the mind

science cannot claim to understand the mind to its complete extent

human intelligence claims to have duplicated our *current*
understanding of the mind/intelligence via artificial intelligence

therefore, artificial intelligence cannot compare to human intelligence because we do not have a complete understanding of the mind...

Not so. If we don't understand the mind, then who are we to suggest that it cannot be produced by accident? To say that because we don't understand it we can't produce it may be your opinion, but it is not necessarily true. In fact, the fact that we don't understand it makes it that much more likely that we'll make it (even if by accident).

and yes, organic flesh IS better than metal, as we don't rust and we have a much better ability to heal... metal certainly cannot do that on its own... and that is nothing to be arrogant about, but something to be damn grateful for...

So the fact that we are more redundant makes us more alive?
 
  • #21
I really disapprove of the term AI. People are using it a lot, on this thread and in normal conversation, but I don't think people actually stop and think of what it implies. Intelligence is just our ability to make sense of input (this is a simplified definition, but it suffices for now). How can that possibly be artificial?
 
  • #22
First of all, i do not believe that human intelligence will ever be able to be reproduced.
The main logic behind my belief is that humans will never be able to completely understand how humans think.
This is because you need to think to understand how you think, but you need to understand how you think about your thinking before understanding the original 'thinking', so you end up with an endless loop which will never lead you to complete understanding (but will lead you to 'better' understanding).
For those believing that a computer can reach human intelligence, i can rephrase my idea in a way that they may understand.
Is it possible for a running process to change its code while running?
Well, it is obviously impossible, because parts of the process will be in use while making the change, and if you make the change in the process (which may affect this part), this part of the code will no longer be functioning (and therefore will not be able to change any part of the process anymore!).
So you will only be able to change parts of the process that do not get into the part of it that actually does the change, and as soon as you reach the piece that is actually making the change, the change will stop!
Same for the brain.
(maybe i made it too complicated, but i hope i made it clear)

Any comments? (Please comment :smile:)
 
  • #23
Originally posted by STAii
First of all, i do not believe that human intelligence will ever be able to be reproduced.
The main logic behind my belief is that humans will never be able to completely understand how humans think.
This is because you need to think to understand how you think, but you need to understand how you think about your thinking before understanding the original 'thinking', so you end up with an endless loop which will never lead you to complete understanding (but will lead you to 'better' understanding).
For those believing that a computer can reach human intelligence, i can rephrase my idea in a way that they may understand.
Is it possible for a running process to change its code while running?
Well, it is obviously impossible, because parts of the process will be in use while making the change, and if you make the change in the process (which may affect this part), this part of the code will no longer be functioning (and therefore will not be able to change any part of the process anymore!).
So you will only be able to change parts of the process that do not get into the part of it that actually does the change, and as soon as you reach the piece that is actually making the change, the change will stop!
Same for the brain.
(maybe i made it too complicated, but i hope i made it clear)

Any comments? (Please comment :smile:)

Yeah, I just don't get this last part. Are you trying to say that the brain can't change its code while functioning?
 
  • #24
The main logic behind my belief is that humans will never be able to completely understand how humans think.
Erm, why not? We may not think up how we can think, but we can now see with better instruments than ever. A mirror for the mind, if you like.

Is it possible for a running process to change its code while running?
Yes. That's the basis of evolutionary programming and all adaptive systems. Whenever you install a program, you don't always have to reboot, you know...
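A trivial sketch of that point (my own toy Python example, nothing to do with real evolutionary-programming systems): a running program can swap out one of its own functions while it is executing and simply carry on.

```python
# A running program replacing one of its own functions mid-execution.

def respond():
    return "old behaviour"

def upgrade():
    # Build a new version of respond() and install it while the program runs.
    def new_respond():
        return "new behaviour"
    globals()["respond"] = new_respond

print(respond())   # old behaviour
upgrade()          # the process rewrites part of itself...
print(respond())   # new behaviour -- ...and keeps running
```

Rewriting the part that does the rewriting is subtler, but the blanket claim that a running process can never change its own code doesn't hold.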

Hmm... I don't think the fact that we currently cannot understand the mind leads to the idea that we can never replicate the mind. After all, what is the brain but a complex mesh of neurones from which the "mind" emerges? There is, in my opinion, nothing except technical problems between us and completely simulating the brain in every way. (Didn't they recently make a piece of the brain on a chip? It was meant as an implant for people who have suffered brain damage.) As all technical problems can eventually be surmounted, it makes sense that we can replicate the brain. And I see no reason why a good enough copy of the brain cannot exhibit the same characteristics we see from it. And yes, that includes dreams, emotions, etc.

Mind you, we create Intelligences that replicate our own already anyway. We call it birth. What is there to stop us doing with our hands what our bodies can already do instinctively?
 
  • #25
Hmm... I don't think the fact that we currently cannot understand the mind leads to the idea that we can never replicate the mind.

Exactly. First people complain that science and technology are creating things we don't understand, then they complain in the next breath that we can't create things we don't understand. Go figure.

According to what is happening in the field today, we may never be able to create an exact duplicate of the human mind, but no one can prove that is the case. Go ahead, prove I don't have a little green pixie on my shoulder. Then proving AI will never reproduce the human mind will be a cinch.

Likewise, the unstated assumption here is that the human mind is the only form of consciousness possible. My pet mouse may not have much in the way of awareness, but I would never call her unconscious. Dolphins preen themselves in front of mirrors, and one sign-language ape invented her own cuss words and asked for a pet cat (which she adored and took very good care of). If they don't meet the criteria for replicating the human mind, they certainly display a great deal of humanity. :0)
 
  • #26
Greetz,

00. Saying that technology won't be capable of doing something is more likely doomed to become senseless than saying it will.

01. If we assume that a mind is the software loaded on neural hardware, making one is not difficult.

02. From the technical point of view, we needn't understand how a mind works to produce (not replicate) one. The main point with neural networks is that they can learn.

03. I was told there are basically these types of control systems: PID (Proportional-Integral-Derivative), Fuzzy, Neural, Neuro-Fuzzy.

04. PID systems revolve around an equation that defines the system's behavior. If we are to make a mind we have to find the equation governing it. Since a mind is chaotic, its governing equation is too complex to be found.

05. Fuzzy systems' core is rule sets. It will be necessary to define the set of rules governing the system; e.g., a simple thermostat may be described with two rules like "turn on if temp. < 20 °C" and "turn off if temp. > 20 °C." Using fuzzy logic and its truth values, which are chosen over a range of real numbers instead of T/F, it will be able to respond accurately enough in the face of predicted situations. A mind has too many rules for that purpose.

06. Neural networks are the right way for making a mind. In fact, it was the study of the mind that led into the study of neural networks. A neural network can't be represented with a set of rules or an equation. After an artificial neural network is set up (don't ask me how, I merely know it's a hard job), it will be "trained" to do the right thing when it encounters certain situations. All that is necessary is a large enough set of (stimulus, response) pairs and a feedback loop (this is the reward/punishment, just like training a pet) to change the neural interconnection weights according to some rules (advanced mathematics used in Neuroscience and AI); a toy sketch of this kind of training appears at the end of this post. After a while, the network starts to converge to a certain value of accuracy (a high value for sure), which can later be optimized with further training.

07. No one really knows what happens inside the neural network. This is the twist. You don't know what a mind is, only that it is made on a neural network. All you need to make a "human" mind is to set up a neural network of the size and complexity of the brain, embed subtle algorithms to mimic the process of learning (changing weights and/or interconnections and/or other factors), give it "human" input (i.e., it must have a set of sensory input lines equal to those of a human mind) and train it. These are cumbersome tasks but aren't far-fetched.

08. It isn't necessary that AI act the human way. A mind can be set up easily, but what it will turn into can hardly be predicted. It may be anything; it may even have features that human minds lack.

09. Since AI can in no way be compared to human beings, AI ethics also can't be compared to human ethics. An AI entity may perceive what we can't, may "think" in an inhuman context, may lack the slightest similarity to the human mind and finally may "do" something that we can't even imagine. It is surely bound to some rules, but these aren't human rules. Consequently, it is indeed sentient while it may be unable to communicate with human beings and vice versa.

10. Humanoid AI is Sci-Fi. It is much harder to make humanoid AI than to make powerful AI. We already have neural networks used in OCR applications and many decision-making situations, but it is impossible for a human being to understand what an OCR network understands. They do the work, but they're different from us; they aren't human.

11. That AI is not human doesn't mean that human beings are in any way superior/inferior but that human beings and non-humanoid AI are incomparable. They're far too different and lack almost any common points.
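As mentioned in point 06, here is a toy sketch of that kind of training loop (a bare-bones perceptron of my own, nowhere near the size or subtlety described above): show it (stimulus, response) pairs, compare its answer with the desired one, and nudge the interconnection weights by the error.

```python
# Bare-bones perceptron trained on (stimulus, response) pairs by feedback.
import random

# stimulus -> desired response (here: the logical AND of two inputs)
TRAINING = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
RATE = 0.1   # how strongly each reward/punishment adjusts the weights

def output(stimulus):
    total = bias + sum(w * x for w, x in zip(weights, stimulus))
    return 1 if total > 0 else 0

for epoch in range(50):                      # repeat the training set
    for stimulus, desired in TRAINING:
        error = desired - output(stimulus)   # the feedback signal
        for i, x in enumerate(stimulus):
            weights[i] += RATE * error * x   # adjust interconnection weights
        bias += RATE * error

print([output(s) for s, _ in TRAINING])      # typically converges to [0, 0, 0, 1]
```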
 
  • #27
01. If we assume that a mind is the software loaded on neural hardware, making one is not difficult.

02. From the technical point of view, we needn't understand how a mind works to produce (not replicate) one. The main point with neural networks is that they can learn.

I think this is too simple a definition of a mind and does not reflect the heart of this thread. By this definition ants and worms have minds.

05. Fuzzy systems' core is rule sets. It will be necessary to define the set of rules governing the system; e.g., a simple thermostat may be described with two rules like "turn on if temp. < 20 °C" and "turn off if temp. > 20 °C." Using fuzzy logic and its truth values, which are chosen over a range of real numbers instead of T/F, it will be able to respond accurately enough in the face of predicted situations. A mind has too many rules for that purpose.

If the mind you are talking about is that of a worm or ant, then it may not be too many rules for the purpose. Again, you are losing the heart of the thread with details. I brought up fuzzy logic merely as an example of the progress of alternative logics in the area of AI where classical logic fails.

06. Neural networks are the right way for making a mind. In fact, it was the study of the mind that led into the study of neural networks. A neural network can't be represented with a set of rules or an equation.

Not yet they can't; this is the mathematics of the future, which M-theory and other cutting-edge mathematics are attempting to address.

09. Since AI can in no way be compared to human beings, AI ethics also can't be compared to human ethics.

This statement defies the evidence which I stated earlier. AI has proven impossible for experts in various fields to distinguish from other human experts. Thus it can be compared to the human mind and it is only natural to do so.
 
  • #28
1.
This statement defies the evidence which I stated earlier. AI has proven impossible for experts in various fields to distinguish from other human experts. Thus it can be compared to the human mind and it is only natural to do so.

Just because you can't distinguish the way AI performs a specific task from that of a human doesn't mean the same ethics apply to AI as to us. Personally, I would have no problem torturing an intelligent robot for the sole purpose of entertainment, but the same laws don't apply to humans.


2.
It may be appropriate to bring Marvin Minsky's theory into this thread. I am no expert on this subject, but from what I understand his theory centers around the following: the mind as a whole is a "society" composed of smaller entities he calls "agents." Each agent by itself is incapable of thought or consciousness, but it is from the interactions of these different agents that we get thought and self-awareness.
(Correct me if I'm wrong)
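A loose toy sketch of the flavour of that idea (my own illustration, not Minsky's actual architecture): a handful of "agents", each reacting to only one thing, whose pooled votes yield behaviour that none of them produces on its own.

```python
# Several narrow "agents" whose combined votes decide what the whole "society" does.

def hunger_agent(state):
    return {"eat": 1.0} if state["hunger"] > 0.6 else {}

def fatigue_agent(state):
    return {"sleep": 1.0} if state["fatigue"] > 0.6 else {}

def curiosity_agent(state):
    return {"explore": 0.5}          # always mildly in favour of exploring

AGENTS = [hunger_agent, fatigue_agent, curiosity_agent]

def decide(state):
    votes = {}
    for agent in AGENTS:             # each agent contributes only its narrow vote
        for action, weight in agent(state).items():
            votes[action] = votes.get(action, 0.0) + weight
    return max(votes, key=votes.get) # the "society" settles on one action

print(decide({"hunger": 0.8, "fatigue": 0.2}))   # eat
print(decide({"hunger": 0.1, "fatigue": 0.1}))   # explore
```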
 
  • #29
Just because you can't distinguish the way AI performs a specific task from that of a human doesn't mean the same ethics apply to AI as to us. Personally, I would have no problem torturing an intelligent robot for the sole purpose of entertainment, but the same laws don't apply to humans.

This is the same point I brought up earlier. A Pantheist might treat their computer better, while a Theist might see this as silly. The classic Disney film "Herbie the Love Bug" is a case in point. Morality is just not the same from one person to the next, much less one culture to the next, and science does not really have a morality of its own. As machines continue to progress, it looks likely that even their designers and builders will no longer be able to say with any scientific certainty whether they are conscious or not. When that happens, some may view your torture as inhumane while others see it as innocuous, just as occurred with slaves.
 
  • #30
www.kurzweilAI.com [Broken]
 
  • #31
Just think, if we make the machines perfect enough, we may not even need any humans. What would be the point, if they could do everything we could do, but better? What was the name of that movie, The Stepford Wives?

Whose purpose would it really serve to do such a thing? The big manufacturing conglomerates? What would be the point of people hanging around if there was nothing "useful" for them to do?

Just numbers and machines ...
 
  • #32
1. For wuliheron:
I think this is too simple a definition of a mind and does not reflect the heart of this thread. By this definition ants and worms have minds.
If we accept that the mind is somehow firmly related to the brain, that the brain is a neural network, and that a neural network can be studied with Neuroscience, then the definition is a just and fair one. Besides, I don't make definitions to reflect the heart of the thread; I make them to reflect the heart of my idea and to let other participants talk about my idea rationally/mystically/passionately/[beep].

Do you have a problem with an ant or a worm having a mind? What matters is magnitude and complexity. Homo sapiens has one of the largest brains in proportion to its body (dolphins, blue whales, gorillas and other apes are close behind) and one of the most complicated in terms of interconnections and single-neuron behavior. There's absolutely no problem with an ant having a mind, a little one at least. And there's absolutely no problem with a human being having a lesser mind compared to a dolphin. We have only one peculiarity: our new abilities of toolmaking and that's why we're sometimes called Homo Faber (partly because of the flexible oddly-positioned thumb on our hands).
If the mind you are talking about is that of a worm or ant, then it may not be too many rules for the purpose. Again, you are losing the heart of the thread with details. I brought up fuzzy logic merely as an example of the progress of alternative logics in the area of AI where classical logic fails.
You're underestimating a worm! The simplest living being on this planet is unimaginably complex.

Details are necessary here to avoid confusion where there's really no special problem. One says "we can't make a mind," the other says "oh! we can." The way out is a detailed description of how we can do it.

Boolean logic doesn't fail; it stops where it reaches the limits for which it's been designed. What fails is the individual who uses Boolean logic for a purpose it wasn't made to serve.
Not yet they can't; this is the mathematics of the future, which M-theory and other cutting-edge mathematics are attempting to address.
I can't understand your point here. I was told M-theory is the summation of the current five variants of string theory (none of which I know the least about) that is supposed to do the work of all of them; is that wrong?

If that's right, then M-theory is Physics and not mathematics. Even if it is accompanied by a new branch of mathematics, it can't claim dominance in the territory where Chaos theory reigns. Chaos theory implies that a chaotic system is governed by an equation out of our reach. This equation is out of reach either because it has crossed a certain level of complexity that puts it out of reach forever (the determinist view inside chaos theory) or because our current processing power can't meet its demands (the non-determinist view inside chaos theory). These two choices mark the point of divide among Chaos theory users.
This statement defies the evidence which I stated earlier. AI has proven impossible for experts in various fields to distinguish from other human experts. Thus it can be compared to the human mind and it is only natural to do so.
These indistinguishable machines are called Turing machines, I guess. A perfect Turing machine hasn't been built yet. Please give a link to a source that confirms the making of a perfect Turing machine.

After all, I just can't understand exactly what you're opposing in my post. Will you please tell me what part of it I should re-think?
 
  • #33
We have only one peculiarity: our new abilities of toolmaking and that's why we're sometimes called Homo Faber (partly because of the flexible oddly-positioned thumb on our hands).

Humans have the ability to run after a moving target and throw a rock or swing a stick at it. No other animal has this ability, which is only partly due to our opposable thumb. Chimps use sticks all the time, sometimes to sneak up behind each other and bash their brains in, but they can't sprint. We can sprint fast enough to catch a horse. Notably, the opposable thumb and the agile physiology to do these things evolved before the human brain nearly tripled in size during the last ice age, and one third of our brain is devoted to vision.

I can't understand your point here. I was told M-theory is the summation of the current five variants of string theory (none of which I know the least about) that is supposed to do the work of all of them; is that wrong?

M-theory is a purely mathematical theory without a shred of physical evidence to support it. The only reason it is considered a cutting-edge physics theory is that it has swallowed whole the mathematics of every other theory devised to date and is doing the same for mathematics in general.

I've come across some mathematicians who've complained that physicists are allowed much more freedom in their work than mathematicians. Unlike mathematicians, who are constrained to rigorous proofs, physical theorists are allowed significantly more freedom. As I said, physics has diverged from mathematics a great deal in the last century and is only now beginning to converge again thanks to M-theory. If you want to know more I suggest Michio Kaku's book, "Hyperspace".

These indistinguishable machines are called Turing machines, I guess. A perfect Turing machine hasn't been built yet. Please give a link to a source that confirms the making of a perfect Turing machine.


A Turing machine is a whole 'nother animal. All I've said is that programs have passed the Turing Test.

http://cogsci.ucsd.edu/~asaygin/tt/ttest.html [Broken]
 
  • #34
Originally posted by wuliheron
A Turing machine is a whole 'nother animal. All I've said is that programs have passed the Turing Test.

Manuel_Silvio was right regarding the Turing machine. Every year the contestant closest to passing the test is awarded the prize, but the test has yet to be passed.
 
  • #35
Parts of the Turing test have been passed, and contestants are getting closer than ever before. The main point is that people can be fooled by such programs, which are still in an infantile stage of development.
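For a sense of how shallow such a program can be and still occasionally sound engaged, here is a toy sketch in the spirit of Weizenbaum's ELIZA (a few lines of my own, not an actual contest entry):

```python
# ELIZA-style crude pattern matching: no understanding, yet the replies can
# sound as if someone is listening.
import re

RULES = [
    (r"\bi feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
    (r"\byou\b", "We were discussing you, not me."),
]

def reply(text):
    for pattern, template in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(reply("I feel that machines will never think"))
# -> Why do you feel that machines will never think?
print(reply("My computer seems smarter every year"))
# -> Tell me more about your computer.
```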
 
