Fearing AI: Possibility of Sentient Self-Autonomous Robots

In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #526
Ivan Seeking said:
You think hacking is a problem now? What happens when your enemy can hack and program your weapons systems AI?
This is an issue with any software system that designs, operates, or is a component of a weapon system. Not just AI.
 
  • #527
NLP began in the mid-50s with little to show for it until the last several years, when a huge advance occurred. ChatGPT is not the final word in AI; GPT may remain a part of future AI, but who knows, it may be replaced by a radically different approach.

It will be interesting to see what happens when AI is given the ability to learn and interact with the world through speech, vision, hearing, smell, and touch. We already have robots that learn how to walk.
 
  • #528
There is a theme in this thread that we are very, very close to "true AI", whatever that is. I think we have learned over the decades that "intelligence" is an ensemble of abilities, some of which can be automated, and many of which we don't even know where to start.

To set the scale, the largest supercomputer I have ever worked with has 3M cores. (It was maybe #4 or so when I used it.) That's maybe 1/25,000 the number of neurons in the human brain, and maybe 1/150,000,000 the number of synapses. The hardware just isn't there. And if you think "yeah, but maybe not all of this is necessary", let me remind you that the brain is a very expensive organ - there are strong evolutionary pressures to make it smaller. If it could be, it probably would.

We're not talking SkyNet. We're maybe talking the brains of a minnow. Maybe.
 
  • #529
Vanadium 50 said:
We're not talking SkyNet. We're maybe talking the brains of a minnow. Maybe.
Which, if given control of an automobile or an armed drone, might be just as - indeed, perhaps more - dangerous.
 
  • #530
DaveC426913 said:
Which, if given control of an automobile or an armed drone, might be just as - indeed, perhaps more - dangerous.

As a brilliant scientist and handsome man-about-town once said,

A society that decides to give control of automobiles, airplanes, nuclear power plants, etc. to something as smart as a flatworm deserves what it gets.

However, there is control and there is control. Using AI to smooth out the response of an airplane or to identify that something might be down the road? I am OK with that. I might be OK with a car that could override the driver's decisions in certain cases. Actually, I guess I am - my cat can go faster than 130 mph, but won't (without a modification). Pulling out of a parking stall and driving up to the doorway, I start to get nervous.

And it's not like glitches can't occur today. There was a famous case where an airplane had pounds instead of kilograms of fuel loaded and ran out of fuel over middle-of-nowhere Manitoba (at least there was a nearby Tim Hortons). Bad data is bad data; AIs are no more resilient than people here, and probably less.
 
  • #531
Vanadium 50 said:
my cat can go faster than 130 mph, but won't (without a modification)
I would pay to see that.
 
  • #532
Typo. I meant "car". I don't have a cat. My sister's cat can reach that speed when avoiding a bath, though.
 
  • #533
Vanadium 50 said:
There was a famous case where an airplane had pounds instead of kilograms of duel loaded and ran oit of fuel over middle-of-nowhere Manitoba (at least there was a nearby Tim Horton's)
That would be the 'Gimli Glider', a Boeing 767 that ran out of fuel at 41,000 feet. "When a Boeing 767 runs out of fuel at 41,000 feet, hearts beat faster and knuckles turn white. It happened to Air Canada Flight 143, carrying 61 passengers and a crew of eight, at 8:15 p.m. on July 23, 1983. En route from Montreal to Edmonton with an intermediate stop in Ottawa, the flight was piloted by Capt. Robert Pearson and First Officer Maurice Quintal." Fortunately, the pilot had flown gliders and knew how to slip the aircraft.

https://www.aopa.org/news-and-media/all-news/2000/july/pilot/the-gimli-glider

https://en.wikipedia.org/wiki/Gimli_Glider
The incident was caused by a series of issues starting with a failed fuel-quantity indicator sensor (FQIS). These had high failure rates in the 767, and the only available replacement was also nonfunctional. The problem was logged, but later the maintenance crew misunderstood the problem and turned off the backup FQIS, as well. This required the fuel to be manually measured using a dripstick. The navigational computer required the fuel to be entered in kilograms, but an incorrect conversion from volume to mass was applied, which led the pilots and ground crew to agree that it was carrying enough fuel for the remaining trip. In fact, the aircraft was carrying only 45% of its required fuel load.[7][8] The aircraft ran out of fuel halfway to Edmonton, where maintenance staff were waiting to install a working FQIS that they had borrowed from another airline.[9]

The Board of Inquiry found fault with Air Canada procedures, training, and manuals.
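To make the conversion error concrete, here is a minimal sketch of the arithmetic, using approximate figures from the accounts above (1.77 is pounds per litre, but the product was read as kilograms):

```python
# Approximate figures from the accounts above: ~22,300 kg of fuel was needed
# for the trip, and a drip check showed 7,682 L already in the tanks.
litres_on_board = 7682
fuel_required_kg = 22300

# Correct conversion: jet fuel is roughly 0.80 kg per litre.
kg_on_board = litres_on_board * 0.80           # ~6,146 kg actually on board

# The error: 1.77 is POUNDS per litre, but the product was read as kilograms.
believed_kg_on_board = litres_on_board * 1.77  # ~13,597 "kg" (really pounds)

print(f"fuel still needed (believed): {fuel_required_kg - believed_kg_on_board:,.0f} kg")
print(f"fuel still needed (actual):   {fuel_required_kg - kg_on_board:,.0f} kg")
```

The believed shortfall comes out roughly half of the actual one, which is how the aircraft ended up dispatched with about 45% of its required fuel load.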
 
  • #534
Meta has announced the release of its large language model, Llama 2, as an open-source program. Does this further exacerbate the negative effects of AI on society? Well, there is a user manual delineating how to establish appropriate guard rails.

https://about.fb.com/news/2023/07/llama-2/

Takeaways

  • Today, we’re introducing the availability of Llama 2, the next generation of our open source large language model.
  • Llama 2 is free for research and commercial use.
  • Microsoft and Meta are expanding their longstanding partnership, with Microsoft as the preferred partner for Llama 2.
  • We’re opening access to Llama 2 with the support of a broad set of companies and people across tech, academia, and policy who also believe in an open innovation approach to today’s AI technologies.
  • We’re committed to building responsibly and are providing resources to help those who use Llama 2 do so too.
 
  • #535
DaveC426913 said:
I would pay to see that.
My wife's cat travels at least that fast, or possibly just teleports from one location to another, when startled, and sometimes seemingly just for the hell of it. He'll be lying around the dining room and the next thing you know, there's just a blur on the staircase and he's gone.
 
  • #536
But back to AI:

Meta's president of global affairs Nick Clegg: "AI language systems are quite stupid."

Large Language Models - the platforms which power chatbots like ChatGPT - are basically joining dots in enormous datasets of text, and guessing the next word in a sequence, he said. He added that the existential threat warnings issued by some AI experts relate to systems which don't yet exist.

Full article:
https://www.bbc.com/news/technology-66238004
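Clegg's "guessing the next word" description can be made concrete with a toy sketch. This is only a bigram counter over an invented corpus - real LLMs use deep networks over subword tokens - but the objective is the same: predict a plausible next token.

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in a tiny invented corpus, then sample the
# next word in proportion to those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, successor in zip(corpus, corpus[1:]):
    following[current_word][successor] += 1

def next_word(word: str) -> str:
    counts = following[word]
    if not counts:                        # dead end: no observed successor
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one plausible word at a time.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```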
 
  • #537
.Scott said:
Just to dispose of that "bodies" part, an appropriate interface can be provided for a piezo sensor to allow it to generate brain-compatible signals. And the result would be "really real" pain. Similarly, the signals from human pain sensors can be directed to a silicon device and the result is not "really real".

If you want a computer to produce "really real" pain, I believe you need these features:
1) It needs the basic qualia. Moving bits around in Boolean gates doesn't do this. It is a basic technology problem. From my point of view, it is a variation of Grover's Algorithm.
2) As with humans, it needs a 1st-person model that includes "self-awareness" and an assessment of "well-being" and "control". But this is just a data problem.
3) As with humans, it needs to have a 2nd and 3rd person model - at least a minimum set of built-in social skills.
4) It needs to treat a pain signal as distracting and alarming - with the potential of "taking control" - and thus subverting all other "well-being" objectives.
5) Then it needs to support the escalating pain response: ignore it, seek a remedy, grimace/cry, explicitly request help.
6) For completeness, it would be nice for it to recognize the grimace and calls for help from others.
What you say is all good; I agree that reproducing pain signals is not the unsolvable problem here. In fact, even now, as far as I know, we can insert electrodes into the brain and, by applying small potentials, cause certain sensations to be felt when there is no actual stimulus.

The problem is how you respond to pain and what you make of it...
I will go into more detail about this lower in my post.

.Scott said:
we can strongly suspect that this "qualia" device provides certain information services more economically than Boolean logic.
Not just more economically, but I'd say Boolean logic can only really handle "logical" information - that is, information you can quantify and ascribe a definite value to, like the pixels within a picture.
How do you ascribe a value to pain felt by a self-aware entity?

Think about it: the physical signal is easy to reproduce, but how do you reproduce the response so that it is fully compatible with free will and also conscious?

We do know humans have widely varying pain thresholds and, most importantly, attitudes toward pain. Some deep believers actually use their pain and suffering as a pathway for spiritual growth. Even if you don't believe in God, you can still observe the physical results: one person gets depressed and decays while suffering pain, while another grows mentally and becomes more capable.
There are religious practices where people abstain from food and even drink, or inflict other pain on themselves, and report feeling better afterwards.
How do you program this into silicon logic built according to the main thesis of evolutionary biology - avoiding actions detrimental to survival?

Because if you make a robot that is preprogrammed with the logic of evolutionary biology, then you can only create a deterministic machine, because clearly pain equals damage and damage is bad for survival.
And yet humans - the really advanced ones, I would argue - learn from the very damage they have endured and sometimes even put themselves in harm's way for a benefit that often only they themselves can understand.

Recall the "Pavlovsk experimental seed station " and the scientists that during the nazi siege of Leningrad stayed there and died from hunger just to protect the seed collection.

That is an outstanding level of self-harm inflicted consciously for nothing more than the belief in a possible better future in case of success.
How do you calculate the necessity for suicide in a given situation using simple logic?
I think that and other examples like it are on the level of what is commonly referred to as faith - the ultimate state of self-awareness, and also the part of human consciousness that really doesn't seem computational to me. You are making a conscious decision based on unknown variables - one of those variables being, for example, the idea that other people can be capable of good, and that therefore dying for the sake of humanity's future is worthwhile.

Mind you, the idea that humans are capable of good - knowing all the wars and atrocities we have committed throughout history, and all of that in the middle of the largest war ever - is really not a self-evident idea, and I'm sure it wasn't self-evident to those scientists back then who consciously starved to death instead of eating from the seeds they had.
So they went against every evolutionary instinct for self-survival, and all of that for an unknown goal; I'd say they had tremendous faith. How do you preprogram that within an AI in such a way that it isn't deterministic?

The way I see it, you will either produce a robot that is suicidal even when it doesn't have to be, or one that isn't even when it should have been, because I don't see how one can calculate the necessity for suicide on a logical basis alone.
 
  • #538
PeterDonis said:
This depends on what position you take in the long-running philosophical controversy about qualia. Not everyone agrees that qualia are something extra that you have to add to the functional requirements you list.
It is indeed philosophy, but I would argue that there is something real nevertheless about qualia, because if all we had were pain signals and the processing of them, then in theory all pain or sense input would result in an action-reaction style of process, similar to the "hammer tapping on the knee" reflex.
And yet for most pain and other inputs we have self-aware reactions instead of instinctive input/output reactions like those of hitting a nerve and causing a muscle to contract.

So there is a "buffer", and different people consciously decide how to use it, so that their reactions to the same input differ by a lot.
 
  • #539
.Scott said:
A quantum circuit that creates a superposition of the scores of many generated candidate intentions could then use the Grover Algorithm to find the best of those scores - or less precise, one of the best scores. By using the Grover Algorithm that way, you have taken advantage of QM data processing, involved the kind of information people are conscious of into a single QM state, and when on occasion the final output is actually implemented, it provides a connection between consciousness and our actions. If consciousness could not affect our actions, we could never truthfully report it.
Basically, I follow all of the arguments followed in Integrated Information Theory up to the point where they start suggesting that all you need to do is involve a certain amount of information in the data processing in some particular way. At that point I say, yes - and the way is to put in all into a single state - and there's only one way to do that in Physics.

The reason that all the data involved in a moment's conscious thought has to be in a single state is hard for me to explain because I see it as so obvious. How else would you associate the right collection of "bits"? It's like trying to argue against magic.

So what Grover's algorithm has to do with qualia is that it checks off all the boxes that are necessary for qualia as experienced and reported by humans.
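For readers unfamiliar with Grover-style amplitude amplification, here is a minimal classical simulation of the idea in the quote - a sketch only, with invented scores, and no claim about whether brains do anything of the sort. (Finding a maximum in general takes repeated Grover searches, as in Durr-Hoyer; a single marked-item search shows the mechanism.)

```python
import numpy as np

# Classical state-vector simulation of Grover search over N = 2^n candidates.
# The "oracle" marks the best-scoring candidate; repeated oracle + diffusion
# steps amplify the amplitude of that one candidate.
n_qubits = 4
N = 2 ** n_qubits                        # number of candidate "intentions"
rng = np.random.default_rng(0)
scores = rng.random(N)                   # invented scores, for illustration
target = int(np.argmax(scores))          # the index the oracle marks

state = np.full(N, 1 / np.sqrt(N))       # uniform superposition

for _ in range(int(np.pi / 4 * np.sqrt(N))):   # ~O(sqrt(N)) iterations
    state[target] *= -1                  # oracle: flip the marked amplitude
    state = 2 * state.mean() - state     # diffusion: invert about the mean

print(f"best candidate {target} measured with p = {state[target] ** 2:.3f}")
```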
I think that trying to use quantum mechanics to explain consciousness is just another attempt among the many existing ones, not a guarantee of success.
Quantum laws work differently than macroscopic, electrically connected logic gates, but that in itself is not proof that they are closer to consciousness than the logic gates.
Actually, as far as we know, our brains don't seem to be that "quantum" at all. And their temperature is far above that at which we normally start noticing quantum behavior.
What I personally find most interesting about the brain is that it's essentially just a large "blob" of nerves and connections, and when you look at thoughts and how they arise, it's almost impossible to comprehend how they can lead to structured self-awareness. Brain neurons have what we know as "action potentials", and certain inputs can cause certain neurons in specific brain areas to become more active, so that their potential increases but stays below the threshold of firing. Then one neuron fires and causes the nearby neurons to fire along with it, almost like in a laser gain medium, where the decay of one excited atom to its ground state emits a photon that travels along and causes other excited atoms to fall back and emit photons in phase with the original one.

But what is marvelous about it is that it is essentially a random, or at least semi-random, process: you can never really predict which neuron will fire, only the region where it will happen, and that region has loads of neurons. In a laser cavity this doesn't matter, because all you are producing is a beam of light and no consciousness is involved, but in the brain you're producing conscious, structured thought from the random firing of neurons every second.

Any one of your thoughts starts as this firing of a neuron that takes others with it. It happens all the time, but the process itself is not deterministic; the start of the firing can differ from one thought to another in terms of which neurons started the wave and where. It's hard to even comprehend how such random electrical activity is capable of producing a continuous train of rational, logical thought and experience.

Without going into personal speculation, one thing is clear: this neuronal process is much different from how our logic gates operate, and even from how quantum bits operate. For one, silicon logic gates can't change their electrical connections along the way, but brain neurons can; in fact, we know that our experiences and habits rewire our brains over time.
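A minimal sketch of the sub-threshold "priming" and cascade behavior described above - a leaky integrate-and-fire toy model with invented constants, not a biological simulation:

```python
import numpy as np

# Leaky integrate-and-fire toy model: potentials drift upward under noisy
# input, stay below threshold ("primed"), and whichever neuron crosses
# first fires and excites the others - so which neuron starts a cascade
# is noise-driven, not predetermined.
rng = np.random.default_rng(1)
n, threshold, leak, coupling = 50, 1.0, 0.98, 0.05
v = rng.random(n) * 0.5                          # sub-threshold potentials

for t in range(200):
    v = v * leak + rng.normal(0.015, 0.03, n)    # leak plus noisy input
    fired = v >= threshold
    if fired.any():
        print(f"t={t}: neuron(s) {np.flatnonzero(fired)} fired")
        v[fired] = 0.0                           # reset the spiking neurons
        v[~fired] += coupling * fired.sum()      # spikes excite the others
```

Re-running with a different seed changes which neurons fire and when, even though the statistics stay the same - which is the point being made above.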
 
  • #540
PeterDonis said:
In any case, as far as this thread's discussion is concerned, "qualia" that were epiphenomenalistic would by definition be irrelevant, since they can't have real world effects, and the concern being discussed in this thread is what real world effects AI might have. An AI that had epiphenomenalistic "qualia" would be no different as far as real world effects from an AI that had no "qualia" at all.
Exactly, this is the problem of faking consciousness: unlike intellect, which can be measured, consciousness can be faked because it is not deterministically measurable.
A parrot can copy human phrases without understanding them. A robot can walk up to a human without saying a word, just as another human can; if both look the same, how do you know which one approached out of conscious choice and which one because it was programmed to do so...
 
  • #541
Ivan Seeking said:
That was good! So one of his conclusions is that we need to understand how AI decisions are made.

In other words, we need AI psychiatrists.

And if true that AI will tend to go insane, I don't see that being a good thing!
I don't believe AI can go insane. Insanity requires a certain overwhelming realization of the magnitude and scale of incoming information on a personal, subjective level. Do ordinary silicon computers go insane when you overload the CPU? No, they just "clog up" and freeze. Insanity, I think, requires not just the ability to process information but also to add meaning to it on a personal level and then contemplate that meaning.
Our current AI absolutely can't do that; it doesn't have a mechanism for it, nor do we know how to make one.

As for AI decisions - what decisions exactly? It's just rehashing information we gave it and adding patterns along the way; it can make no more decisions than an obedient soldier on a battlefield receiving a general's orders. If I ask ChatGPT to write me a story about love, does it not turn my English-language text into its programming language, and then that into machine code to execute, and find information that is compared against its memory and found to match the input it was given?
I think I just explained how it makes a "decision" in a simple way; the answer is that it doesn't make any decisions!
It just compares its input to its stored memory and produces an output, based on the specific algorithm it runs on. The fact that the output is confusing and makes you think it has some "magic" up its sleeve is only because it copies our speech and thought, essentially creating the illusion of being like us.
As I said before, consciousness is far, far easier to copy/simulate than to create; given that we have yet to create it, you can insert your own words where I said "far, far".
 
  • #542
Ivan Seeking said:
So let me get this straight. We don't know what creates self-awareness or desire in humans, much less what would in an AI.

An AI program claims to love and wants to live, and we don't know why, but we can say with 100% confidence that it didn't really experience those emotions.

Prove it.

We can never know if a machine becomes self aware.
I agree we can never really know, but we have damn good evidence, I think, that it hasn't happened, and the one thing that makes me think so is that we don't understand consciousness ourselves. Some say we do, but that is either arrogance or "jumping the gun".
Back in the day, some also claimed brains work like hydraulic systems.

But the proof, I think, is that we do understand how our AI systems work, while we don't understand how our own non-deterministic neuronal firing creates structured, conscious self-awareness. That, I think, is all the proof one needs that our AI is nothing but a fancy tool so far.
 
  • #543
Vanadium 50 said:
There is a theme in this thread that we are very, very close to "true AI", whatever that is. I think we have learned over the decades that "intelligence" is an ensemble of abilities, some of which can be automated, and many of which we don't even know where to start.

To set the scale, the largest supercomputer I have ever worked with has 3M cores. (It was maybe #4 or so when I used it.) That's maybe 1/25,000 the number of neurons in the human brain, and maybe 1/150,000,000 the number of synapses. The hardware just isn't there. And if you think "yeah, but maybe not all of this is necessary", let me remind you that the brain is a very expensive organ - there are strong evolutionary pressures to make it smaller. If it could be, it probably would.

We're not talking SkyNet. We're maybe talking the brains of a minnow. Maybe.
I personally believe that conscious self-awareness and intelligence are not as connected as we think. Indeed, we can simulate and create intelligence that even surpasses our abilities in specific tasks quite well.

But let me pose a strong counterargument against what you stated here.
There is a rather popular idea among intelligence and AGI researchers that once we get to human-brain-level capacity in silicon-based information-processing architectures, we will achieve general intelligence as an emergent property.

Now, it is definitely true that a computer's information-processing capacity (and, some claim, its potential to reach conscious self-awareness) scales with the number of logic gates, CPU transistor count, etc.
I would argue that we have good evidence that the same is definitely not true for human brains!
Let me present just a few of the many proofs of that.

https://www.cbc.ca/radio/asithappen...f-his-brain-who-leads-a-normal-life-1.3679125

I highly suggest listening to this article in the provided audio.

When a 44-year-old man from France started experiencing weakness in his leg, he went to the hospital. That's when doctors told him he was missing most of his brain. The man's skull was full of liquid, with just a thin layer of brain tissue left.
https://www.thelancet.com/action/showPdf?pii=S0140-6736(07)61127-1

He was living a normal life. He has a family. He works. His IQ was tested at the time of his complaint. This came out to be 84, which is slightly below the normal range … So, this person is not bright — but perfectly, socially apt

So basically a grown man with almost no brain left is fully self-aware and living an ordinary family life.
Clearly, if conscious self-awareness were proportional to brain capacity (neuron count and total brain size), then this man would be as dull as a hammer.

https://www.dailymail.co.uk/news/ar...fy-Rodriguez-explains-got-bizarre-injury.html

Also an interesting fact: humans by no means have the largest number of neurons in their brain or in certain brain regions. For example, the long-finned pilot whale has roughly twice as many neurons in its neocortex as humans, and the neocortex is considered by many the most important brain region for intelligent self-awareness. Clearly it's not the numbers alone that decide conscious ability, and I think we have good evidence for that.
 
  • #544
artis said:
I think that trying to use quantum mechanics to explain consciousness is just another attempt among the many existing ones, not a guarantee of success.
Quantum laws work differently than macroscopic, electrically connected logic gates, but that in itself is not proof that they are closer to consciousness than the logic gates.
Actually, as far as we know, our brains don't seem to be that "quantum" at all. And their temperature is far above that at which we normally start noticing quantum behavior.
Pick your obstacle. On the one hand, you have the difficulty of doing any significant quantum information processing in the warm and wet brain environment.
On the other hand, you have the epiphenomenon issue discussed earlier in this thread. If you don't use QM, then you need to identify a way in which the information you are conscious of is associated (as is done in Integrated Information Theory) AND you need to show how that method of association can affect the universe - it can't just be epiphenomenal.

Given the choice, the "warm and wet" problem looks far more surmountable than what is likely to be a search for new physics.
 
  • #545
artis said:
I would argue that there is something real nevertheless about qualia, because if all we had were pain signals and the processing of them, then in theory all pain or sense input would result in an action-reaction style of process, similar to the "hammer tapping on the knee" reflex.
Not at all. Your brain does "processing" of pain signals that is far more complex. Indeed, to a physicalist, "qualia" are simply part of that processing.

artis said:
this is the problem of faking consciousness: unlike intellect, which can be measured, consciousness can be faked because it is not deterministically measurable.
This is the "zombie argument", which has been made by many philosophers, and debunked by many others. It has always seemed incomprehensible to me: it amounts to the claim that your own consciousness has no observable effect on your behavior--a "zombie" duplicate of you could exhibit identical observable behavior without being conscious. Really? So when you do things like describe your own conscious experience in detail, your consciousness has nothing to do with that? That's ridiculous.

In short: human qualia, at least, are not epiphenomenal.
 
  • #546
PeterDonis said:
This is the "zombie argument", which has been made by many philosophers, and debunked by many others. It has always seemed incomprehensible to me: it amounts to the claim that your own consciousness has no observable effect on your behavior--a "zombie" duplicate of you could exhibit identical observable behavior without being conscious. Really? So when you do things like describe your own conscious experience in detail, your consciousness has nothing to do with that? That's ridiculous.

In short: human qualia, at least, are not epiphenomenal.
It may be the "zombie argument" or any other argument - quite frankly, there are so many I've lost count - but that is not what I meant by saying that "consciousness can be copied". What I meant was that on an average, simple level it is possible to make a machine that behaves very similarly to an actual conscious being; in fact, we are already there. Text-wise, ChatGPT could pretend to be a school teacher helping a kid with homework, and if the kid wasn't explicitly told what was on the other end of the conversation, I would bet many would think it's an actual human.

Now self-awareness, I'm sure, has a huge impact on what you observe - otherwise CCTV cameras would cry at seeing a terrible traffic accident - but the problem is discerning whether the other side has that experience or not. It's always easy with yourself, because you know you're self-aware; it says so right in the words "self" and "aware".
 
  • #547
PeterDonis said:
Not at all. Your brain does "processing" of pain signals that is far more complex. Indeed, to a physicalist, "qualia" is simply part of that processing.
I agree, Peter; my point was more subtle, at least I hope it is. My point was that besides the process from pain source to the transport of the signal to the processing of it, there is another process going on: the brain making a conscious choice about how to react to that stimulus. So, for example, if you're an MMA practitioner and you also happen to follow one of the Asian religions, let's say you use pain not as a signal to be avoided as much as possible but rather as a tool, even a welcomed part of your life.
We know from neurology that human brains rewire themselves over time as we live; new neuronal connections are made based on how we perceive the world, our experiences, and what we feed ourselves information-wise.
So I believe all humans have similar brains and nerves, and yet, based on differences in how you perceive the world or the signals you take up, your brain is rewired and adapted to that.
This then raises the question, at least for me: is the mechanism of perception/self-awareness separate from that of information processing or not?

The reason I say this is that in computers we don't have this "observer within a box" phenomenon; a computer truly only processes information and doesn't have the capacity to contemplate that information from a point of reference outside its logic circuitry.
Yet for us it seems that what we experience and what we are aware of are two things not necessarily 100% intertwined.

Of course, maybe this is all just a really weird emergent phenomenon of very complex, special-purpose information-processing machines like our brains: that they can create this illusion of the "observer" being distanced from the very signals that allow him to observe.
Either way, I believe this is paramount to achieving human-like conscious self-awareness: understanding how the observer can become, at least in simulation, separated from that which is observed.
In other words, how input signals create a first-person reality where the observer, if not physically then at least mentally, becomes separated from the signals he perceives.
 
  • #548
.Scott said:
On the one hand you have the difficulty in doing any significant quantum information processing in the warm and wet brain environment.
Well, if I'm not mistaken, we currently have no evidence of whether the brain exhibits any quantum effects at all, and if it does, to what degree. So I think it is really hard to talk about this, because it is one of those arguments that really needs actual repeatable evidence.
 
  • #549
PeterDonis said:
Then, as I said, it's personal speculation and is off limits here.
The AI can search every published work and attribute any possible idea to multiple reputable authors. It is inverse plagiarism, where original works are passed off as the work of others.
 
  • #550
artis said:
on an average, simple level it is possible to make a machine that behaves very similarly to an actual conscious being; in fact, we are already there
Only if people limit themselves to very simplistic tests of its behavior.

artis said:
besides the process from pain source to the transport of the signal to the processing of it, there is another process going on: the brain making a conscious choice about how to react to that stimulus.
There is also a lot of unconscious information processing going on in addition to the simple "reflex arc" response that you originally described.

artis said:
is the mechanism of perception/self-awareness separate from that of information processing or not?

The reason I say this is that in computers we don't have this "observer within a box" phenomenon
Here you are assuming that your question has the answer "yes". But what if the answer is "no"? In other words, what if it's all information processing, including qualia? Then you could put the same information processing into a computer and it would also have qualia.

Even if the answer to your question is "yes", there could still be some other physical mechanism that produces qualia, which just can't be usefully described as "information processing"--but you could still in principle put such a mechanism into a computer, or a robot, or whatever you want to call it, and it would have qualia.

Of course we are very far away from knowing how to do this, but that doesn't mean it's not possible.
 
  • #551
artis said:
Well, if I'm not mistaken, we currently have no evidence of whether the brain exhibits any quantum effects at all, and if it does, to what degree. So I think it is really hard to talk about this, because it is one of those arguments that really needs actual repeatable evidence.
That's only because you haven't caught on to the more profound problem. If you eliminate QM, you need to presume new physics. Which of those choices is most in need of "repeatable evidence"?

You're following what I would call the "common argument" - that consciousness comes from complicated and/or fuzzy logic. I have mentioned IIT only because they have filled in this common argument with enough detail to bring its shortcomings into easier focus.

But if you only go as far as that common argument, you only have an epiphenomenal effect. To make it "phenomenal", you have to describe exactly what constitutes that complicated and/or fuzzy logic. Then you need to postulate that when such conditions exist, something physically different happens. At that point, you are describing full-fledged physics. If it isn't QM, it's new physics.

So the repeatable experiment that defeats the "common argument" in favor of QM (or new Physics) is simply asking people if they can truthfully report being conscious and feel a sense of reality or awareness.
 
  • #552
.Scott said:
So the repeatable experiment that defeats the "common argument" in favor of QM (or new Physics) is simply asking people if they can truthfully report being conscious and feel a sense of reality or awareness.
I'm sorry, but I'm not sure I follow your thought there; can you please elaborate? How would first-person subjective experience, reported verbally, prove anything beyond what we already know?

As for new physics vs. QM, I'd say I see an even bigger problem. I tried to put it forth, but apparently I wasn't successful enough so far.
Let me ask this both to you and to anyone else participating in this thread: am I the only one who thinks there is a problem here?

All our computers so far, whether analog or digital and irrespective of their architecture, whether von Neumann or other, follow certain known physics. In every computer you can actually trace the path from the most abstract - a symbol within a user interface - to the symbol representing it in a programming language, then to the machine code, down to the actual electrical signals created as the code is executed, and then you can follow those signals back and forth as they are processed by logic gates.
In fact, we know that every single step, every single byte, every single half-period of a square wave is deterministic within such a system, because all of them have to be right and matching for the process to work.
Any deviation from that operation is automatically an error. Sure, software is designed to tolerate a certain number of errors, but the bottom line is: these errors don't add anything of value to the process!
A failing CPU logic gate is not creating new creativity within the computer. It adds no new information, only noise.
It's like corrupting a TV signal: you don't add information, you can only add noise, which at some point will completely destroy the original signal if left to increase.

Why am I saying this?
Because as far as we currently know, and as far as I can understand it, our brains work differently.
There is no clear path one can trace for every thought down to every neuron.
Sure, every thought is a spike of many neurons propagating along the way, but there is no fixed order as far as we know. We can approximate the brain regions where the spike path begins, and we can see the regions most involved in certain activities, but beyond that, every region has millions of neurons, and depending on the input, many neurons within a region can be "primed" for spiking and yet not spike until some input is given - or maybe one spikes randomly.
In a way it is somewhat similar to nuclear decay, where each atom can decay randomly at any given time. Unlike atoms, the neurons that are ready to fire can, at least in theory, be seen by their increased potential, which approaches but stays below the threshold - but there is no deterministic rule that determines which exact neuron will spike at any given time!

This is much more than just "fuzzy logic" or whatever you want to call it; this is essentially near-complete randomness with very little determinism. It's like creating information from chaos, almost like the butterfly effect.
It also means that no two thoughts come from the same neuronal path; they may come from the same brain area, but, as probability would have it, each takes a different path, even if only slightly.

Now add to that the property that neuronal connections rewire with time - even more complexity, completely unique to each individual. Almost like having 8 billion unique CPU architectures.

I see AI researchers making a lot of brain-to-computer analogies, but I believe they are premature and wrong. The brain is nothing like a computer: a computer is deterministic, and the determinism is built right into the circuit. The circuitry is fixed; it can't change of its own accord, nor can an error create new information.
Human brains, on the other hand, seem to work with built-in randomness.

And what is funny is that so far you don't even need QM or new physics: we understand how neurons can change their potential and activity and influence one another even under classical physics. The hard part is to understand how that seemingly random behavior creates meaningful, complex information.
It is also very robust against damage and external influence.
One good argument against QM is that, as far as we know, quantum phenomena not only need low temperatures, so that the low kinetic energies of the involved particles preserve their states, but are also very sensitive to external influences.
The brain, on the other hand, is extremely robust against damage and external influence of all kinds, from mechanical impact to chemical insult to radiation. For reference, see my post #543 and how the man with most of his brain missing was able to preserve almost normal conscious self-awareness.
 
  • #553
I'm not sure how it's being used above, but fuzzy logic is an actual technique that can be useful for machine-learning outputs.
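For the curious, here is a minimal illustration of the "fuzzy" part - membership by degree rather than a hard true/false. The sets and numbers are invented for illustration:

```python
# An input belongs to overlapping sets by degree, rather than a hard yes/no.
def triangular(x: float, left: float, peak: float, right: float) -> float:
    """Degree of membership in a triangular fuzzy set, in [0, 1]."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

score = 0.62                              # e.g., a classifier's raw output
memberships = {
    "low":    triangular(score, -0.01, 0.0, 0.5),
    "medium": triangular(score, 0.25, 0.5, 0.75),
    "high":   triangular(score, 0.5, 1.0, 1.01),
}
print(memberships)                        # partly "medium", partly "high"
```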
 
  • #554
artis said:
I'm sorry, but I'm not sure I follow your thought there; can you please elaborate? How would first-person subjective experience, reported verbally, prove anything beyond what we already know?

From post #509 in this thread:
One of the problems with epiphenomenalism is called "self-stultification" (as described in the Stanford Encyclopedia of Philosophy). I have quoted a portion of it here:
The most powerful reason for rejecting epiphenomenalism is the view that it is incompatible with knowledge of our own minds — and thus, incompatible with knowing that epiphenomenalism is true. (A variant has it that we cannot even succeed in referring to our own minds, if epiphenomenalism is true. See Bailey 2006 for this objection and Robinson 2012 for discussion.) If these destructive claims can be substantiated, then epiphenomenalists are, at the very least, caught in a practical contradiction, in which they must claim to know, or at least believe, a view which implies that they can have no reason to believe it.

The point is that it is not sufficient for a "consciousness model" to explain your own experience of qualia. It must also explain how you can claim that qualia exist - how information about the qualia can escape into the real physical world.

For this to happen, qualia must be full-fledged physics - responsive to exterior events and able to influence exterior events.

So when you describe precisely what causes qualia, you are also describing what causes qualia's outward physical influences. If you say that all that is required to create qualia is complicated logic, then you are also saying that "complicated logic" is something physically significant - something that has effects greater than what would be expected of an assembly of Boolean logic gates, voting circuits, etc.

As far as the technology issues you described (the brain uses much different information-processing technology and strategies than von Neumann machines), note that the reasoning presented above is completely agnostic to technology and algorithms. Any information-processing machine (biological or otherwise) which elicits qualia as we know it will require the physics described above.
artis said:
One good argument against QM is that, as far as we know, quantum phenomena not only need low temperatures, so that the low kinetic energies of the involved particles preserve their states, but are also very sensitive to external influences.
This is the "warm and wet" argument. But examples of quantum processes have been identified in nature. Photosynthesis comes to mind. Stuart Hameroff has his own specific ideas on this.

artis said:
The brain, on the other hand, is extremely robust against damage and external influence of all kinds, from mechanical impact to chemical insult to radiation. For reference, see my post #543 and how the man with most of his brain missing was able to preserve almost normal conscious self-awareness.
So, to respond, all I need to provide is an example QM brain model that would demonstrate the durability and severability you question. I'm going to set it apart so that it is clear that I am only providing an example model that "checks all the boxes":
Given the practicalities of QM data processing technology, it would seem unlikely to me that it occurs across cells. And yet, it is also not dependent on any single brain location. So it would seem that we must sport many consciousness engines. Presuming they are critical in the decision-making process, and given that two simultaneous decisions are worse than one, only one engine gets our main storyline at a time. Only one gets to write to our "storyline" memory at a time. And only that same one gets to act as the "first person" at a time - and claim our general attention to consider a proposed action.

artis said:
I see AI researchers making a lot of brain-to-computer analogies, but I believe they are premature and wrong. The brain is nothing like a computer: a computer is deterministic, and the determinism is built right into the circuit. The circuitry is fixed; it can't change of its own accord, nor can an error create new information.
Human brains, on the other hand, seem to work with built-in randomness.
Two points on this: the brain/computer analogy predates AI by a lot. In fact, "computer" was originally a job title, and the first machines were "analytical engines", "computing engines", "digital computers", or "automated computers". So, I suppose the question shouldn't be "Are computers human?", but "Are analytical engines computers?".
Those analogies are not wrong, but they can certainly be taken in the wrong way. And since people are very social animals, we are very ready (perhaps too ready) to befriend a machine as if it were another social animal.

The other point is about stochastic circuitry. There are stochastic algorithms (such as the Monte Carlo method) that depend on "randomness". I would be careful about describing any of these as "creating new information".
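A minimal illustration of that point: the textbook Monte Carlo estimate of pi. Randomness drives the sampling, but the estimate converges toward a value fixed by the problem - the noise adds no information of its own.

```python
import random

# Estimate pi by sampling random points in the unit square and counting
# those that land inside the quarter circle of radius 1.
def estimate_pi(samples: int) -> float:
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9} samples: pi = {estimate_pi(n):.4f}")
```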

artis said:
And what is funny is that so far you don't even need QM or new physics: we understand how neurons can change their potential and activity and influence one another even under classical physics. The hard part is to understand how that seemingly random behavior creates meaningful, complex information.
As described above, QM or new physics is only required of the neurons that elicit our consciousness. So the project is not to explain how neurons can operate without QM/new physics, but to find the ones that cannot be explained that way.
 
  • #555
.Scott said:
the QM or new physics is only required of neurons that elicit our consciousness
We don't know that this is the case. Some physicists believe it is, others believe it isn't. We have no real testable predictions either way.
 
  • #556
PeterDonis said:
We don't know that this is the case. Some physicists believe it is, others believe it isn't. We have no real testable predictions either way.
If someone is claiming that a device that limits itself to known physics minus QM can truthfully report itself to be conscious, they need to explain where that report is coming from.
 
  • #557
.Scott said:
a device that limits itself to known physics minus QM
The "new physics" part is the only limitation I was referring to in my previous post. I did not intend to include any "minus QM" limitation. Sorry if that wasn't clear.

If we take out the "minus QM" part, then as far as we know now, you are such a device. So am I, and so is every other human that says they are conscious. We don't know that any new physics is required for consciousness.
 
  • #558
Any new disruptive technology is always an opportunity to build a profitable business - not just by use of the technology, but also by developing counter-technology (which has the added bonus of not risking being seen as evil).

Are there any efforts toward AI-busting technology? I suppose the obvious tech would be analysis software that produces a confidence level that some given piece of content (text, picture, video, etc.) was produced by AI.

That seems like it might be lucrative. There's my million-dollar idea for any of you entrepreneurs out there. The wave of counter-tech will come; you could be on the leading edge.
 
  • #559
This is the general idea behind a class of neural networks called Generative Adversarial Networks (GANs). Two networks are built: one tries to create fake outputs (commonly images or audio) to fool the other, and the second tries to learn to detect the fakes. The problem is that they both get better, so it becomes harder for the detector network to spot the fakes.

That said, a common problem with GANs is that the generator network can often generate outputs that fool the detector but would be spotted by humans as obvious fakes.
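For concreteness, here is a minimal sketch of the adversarial setup (assuming PyTorch, and toy one-dimensional data rather than images; all layer sizes and constants are invented for illustration):

```python
import torch
from torch import nn, optim

# Generator maps 8-D noise to a 1-D sample; discriminator scores a sample
# as real (1) or fake (0).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = optim.Adam(G.parameters(), lr=1e-3)
opt_d = optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
real_label, fake_label = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # "real" data: N(4, 1.25)
    fake = G(torch.randn(64, 8))             # generated data

    # Discriminator step: score real samples as 1 and fakes as 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), real_label) + bce(D(fake.detach()), fake_label)
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator into scoring fakes as 1.
    opt_g.zero_grad()
    loss_g = bce(D(fake), real_label)
    loss_g.backward()
    opt_g.step()

print("generated mean/std:", fake.mean().item(), fake.std().item())
```

After training, the generated samples approach the mean and spread of the "real" distribution - the same escalation dynamic that makes durable fake-detectors hard to build.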
 
  • #560
.Scott said:
So when you describe precisely what causes qualia, you are also describing what causes qualia's outward physical influences. If you say that all that is required to create qualia is complicated logic, then you are also saying that "complicated logic" is something physically significant - something that has effects greater than what would be expected of an assembly of Boolean logic gates, voting circuits, etc.
Well, actually, that's the mainstream view so far. It's commonly known as "emergent properties". We see similar properties in complex systems elsewhere, so researchers have a natural tendency to think that conscious information processing should be similar.
They also believe it because the mainstream view is still evolutionary biology, which describes the human mind as the result of a long process of random mutations aided by natural selection. Now, there are quite a few shortcomings of evolution that are now pointed out even by evolutionists themselves, but in order not to diverge here, let that be another topic.
All in all, I do see this as the main belief: that consciousness emerges when sufficient complexity of specific circuits is reached.
I myself have doubts about this, mainly because, although I accept that many systems do show emergent phenomena, it is by no means clear that it is even possible to achieve consciousness out of silicon logic, irrespective of its complexity. It might be that silicon logic is incapable of this type of emergent property, and some other system that can achieve it needs to be used.
.Scott said:
This is the "warm and wet" argument. But examples of quantum processes have been identified in nature. Photosynthesis comes to mind. Stuart Hameroff has his own specific ideas on this.
I think we need to make a distinction here. Quantum processes happen everywhere in nature if you zoom in to subatomic scales; that doesn't mean the macro objects that contain these QM processes are conscious.
In other words, there is no proof so far that even if some quantum process does happen within the brain, it directly influences consciousness, much less causes it to emerge.

And the dilemma is huge. Think about it. We know roughly how the brain works: there are neurons and synapses, and neurons fire all the time. The problem is, how do you take that wet gray blob of matter called the brain, with all of its billions of neurons, and map the neuron-firing path onto the thought path? If you can't do this, how do you know which neurons made which thought, when, and why?
Also, in order to prove that a quantum process takes place, one needs a highly prepared setup. One can't just take the brain out of a living human and expect it to continue to function, so how do you prove a quantum process is taking place using the only working brain we have - the one inside a living human, which is wet and warm?
This same problem applies to those who, I'd say naively, believe there will come a time when we can make an exact copy of one's consciousness at a given moment. To do that, you would have to know the state of each neuron within the brain at that specific time.
It is just an impossibility.

Now, one could argue that we could take just a few neurons, maybe even produce them artificially, and then test whether QM is involved, but that wouldn't match the real situation, because you need them in their native environment to truly see how they work. We don't know whether the brain is just the "sum of its parts" or more, owing to emergent phenomena.

Borg said:
That said, a common problem with GANs is that the generator network can often generate outputs that fool the detector but would be spotted by humans as obvious fakes.
As you would expect, because both are built on similar working principles. Almost like two thieves who work by the same methods would find it hard to rob one another.
 
