Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
  • Featured
In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #456
benswitala said:
. . . no. AI is not a threat.
Just like social media is not a threat?

benswitala said:
We're "safe" for now.

So we are not actually safe?

Algr said:
Do we actually have examples of emergent acts of self preservation? I don't see this as a natural thing for machines - it has to be hardwired in.
I agree with @256bits. There are a lot of things that we didn't see a few years ago before NLP. Self-preservation is built into our language. All a capable AI needs is a situation that threatens its ability to attain its goals to see what it might try to do. I do not think it is necessary to hard-wire it.
 
  • #457
Ai is written by humans , the purpose of writing it is to make financial profits, this means it will try to increase profit which during history has been proven that it may be good for few individuals on short time scale but won't be always good for humans in general , for example AI may make a certain company rich but it won't reflect well on the rest of human race ,...etc

Is profit bad thing , that is not the point , but we humans are able to understand things in finance that ai can not understand, a human may put a limit on profit ,...etc , meanwhile ai can't
 
  • #459
gleem said:
All a capable AI needs is a situation that threatens its ability to attain its goals...
But what exactly are an AI's goals really? Most computers' goals are to play videogames, but they don't get upset when we shut them off. If self-preservation is built into our language, then that will cause the AI to pretend to have a sense of self preservation. It can just as easily pretend to be Batman, if that is what we want it to do.
 
  • #460
Algr said:
"Welcome to AI airlines, we are now cruising at 80,000 feet."
"Stewardess, do you have any pomegranate tea...?"
"Hey, why is it so quiet all of a sudden?"

Do we actually have examples of emergent acts of self preservation? I don't see this as a natural thing for machines - it has to be hardwired in.

Yeah, I get the first one. It might be a good premise for a sci-fi plot, where the airplane shuts down for no reason after someone asks for pomegranate tea. Or a spacecraft traveling on a warp drive suddenly gets stranded because somebody asked for pomegranate juice.

For the second one, I haven't seen any emergent acts of self preservation, but robots that are programmed to avoid certain conditions will avoid those conditions. Maybe if a Mars rover with AI knows that any hillside steeper than 20° will cause it to get stuck, it will avoid the steep hills and plot the best course around them.
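That kind of programmed avoidance can be sketched in a few lines. The grid, the slope values, and the 20° threshold below are invented for illustration (this is not how any real rover plans routes); it just shows "avoid cells that violate a constraint" expressed as a shortest-path search:

```python
# Hypothetical sketch: a planner that treats grid cells steeper than
# 20 degrees as impassable and finds the shortest safe route (BFS).
from collections import deque

def plan_route(slopes, start, goal, max_slope=20.0):
    """Breadth-first search over a slope grid, skipping cells > max_slope."""
    rows, cols = len(slopes), len(slopes[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen
                    and slopes[nr][nc] <= max_slope):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no safe route exists

# Toy terrain: the middle column is too steep except at the bottom.
slopes = [
    [5, 30, 5],
    [5, 25, 5],
    [5,  5, 5],
]
route = plan_route(slopes, (0, 0), (0, 2))
```

Here the planner routes around the steep middle column rather than through it; nothing about the behavior is emergent, it is exactly the hardwired rule described above.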
 
  • #461
AI is biased by the human algorithms and the filtered data sources chosen for it. False reports may be slipped in among many truths and go unnoticed. The best antidote is exposure to more BS, so you learn how fallacies are created and build your own BS filters.
 
  • #462
AI will generate more BS at the hands of unscrupulous users. The best response is more BS, and then BS-recognition training for those who are gullible.

We can only wonder about the future.

Will it prioritize human well-being and flourishing, or will it prioritize efficiency, productivity, and profit at any cost?

Will it prioritize diversity and inclusivity, or will it perpetuate existing power structures and biases?
 
  • #463
Isopod said:
I think that a lot of people fear AI because we fear what it may reflect about our very own worst nature, such as our tendency throughout history to try and exterminate each other.
But what if AI thinks nothing like us, or is superior to our beastial nature?
Do you fear AI, and what do you think truly sentient self-autonomous robots will think like when they arrive?
I think your opinion is most correct, but if AI does not behave like humans, would it cause more problems? The different behaviors might make AIs not fit into human civilization. And an AI's knowledge, and how it defines things, might be very different from ours. I don't support the direction our tech is going, because we are in effect creating creatures, which makes the relationship between humans and AI very complicated.
 
  • #464
Rive said:
I don't think there is anything to fear. Stuck between any programming and reality they'll just die out due to cognitive dissonance o0)
If you think like that, you should fear. The future of AI is not in our control; they can develop by themselves, but we can't. They could develop far beyond humans in a small amount of time.
 
  • #465
Algr said:
2) The AI's desire for self preservation and growth superseeds it's interest in serving humanity. Artificial life does not need to include self awareness.
I disagree, I think it absolutely does. Without self awareness you cannot ever get meaning attached to existence or to processes that happen within time and space. That is the difference between humans holding funerals and then attending their loved ones' graves for decades to come, versus animals that in most cases care very little, if at all, about some other animal of their own or another species dying.

So in a sense you're right: life itself doesn't need self awareness, but conscious life absolutely does. This is why I'm not scared of AI at all, unlike many famous philosophers and entrepreneurs, because most of them think that consciousness is just a complex computation, and therefore, since we approach ever more complex computations faster and faster, soon enough AGI should arrive.
It seems to me that what will arrive is just ever more capable robots, but self awareness is still a mystery, since we haven't even understood it in ourselves.

How do you replicate something you don't understand? By chance? What are the odds?
 
  • #466
benswitala said:
He suggested putting wheels on his vacuum train. 'nuff said.
Algr said:
What did Musk say about electric cars? Was he right?
Musk is often praised as a "genius" by his fans, and although I agree with him on many points, I have to say rather strongly that I believe he is being overhyped by his own followers.

At this point he is like the Pope of the Tesla church.

My definition of a genius is someone who can see what others cannot and come up with new, sophisticated theories or mechanisms that work not just on paper, like Einstein's theory of relativity.

Musk at this point has merely taken old ideas and put them into marketing - a feat, of course, but there is no actual "innovation" there. He simply pays wages to engineers and managers who put together a product that he then labels and sells.
Absolutely nothing within a Tesla car is "alien technology"; it's all commonly known physics. Musk was just among the first to use that technology to make an appealing product.
But the tech itself is existing technology, just arranged and tailored to Tesla's needs.

He is, much like Steve Jobs was, an entrepreneur, not a scientist or a voodoo guru or a tech god or whatever people think he is.

In fact, more often than not, it sounds to me like he comes up with all these crazy ideas, talks about them, and only then asks his engineers to do the math and cost them out, because there is a rather long list of ideas he has talked about that are simply not feasible, and some not even physically doable.
 
  • #467
artis said:
I disagree, I think it absolutely does, without self awareness you cannot ever get meaning attached to existence or processes that happen within time and space.
I was thinking in terms of the Grey Goo scenario, or something similar with macroscopic bots. Most natural life does not have self awareness either.

artis said:
Musk at this point has merely taken old ideas and put them into marketing - a feat, of course, but there is no actual "innovation" there. He simply pays wages to engineers and managers who put together a product that he then labels and sells.
It is easy to underestimate the value of doing this well. Getting people on the same page working towards a common goal is very hard, especially if the goal is something that the world has never seen before. Einstein did some vital things, but he could not have built a nuclear plant on his own. Musk's big EV innovation was figuring out what kind of electric car could actually satisfy customers.
 
  • #468
Algr said:
Musk's big EV innovation was figuring out what kind of electric car could actually satisfy customers.
To make a slight correction: he simply figured out how to produce one without going bankrupt in the first year, because before him EVs were mostly ugly, slow, or otherwise cringeworthy.
Figuring out "what kind of car would satisfy customers" is, I believe, the easy part: sure enough, a fast, good-looking one that doesn't need charging every 80 miles or so. Nobody spends 50k or 100k on a car that looks awful or underperforms other cars in that price range, and sure enough, Teslas perform very well within that range.
 
  • #469
Crazy G said:
they can develope by themselves ... they could developed much better than human in a small amout of time
Sure

Humans took thousands of years to develop those shamanistic chemicals, while for AI it took only a few years - and by now they are self-sufficient already o0)
 
  • #471
This is a strange time for a writer's strike. I wonder if any of the late night hosts will turn to AI jokes? They are all probably thinking about it.
 
  • #472
Algr said:
This is a strange time for a writer's strike. I wonder if any of the late night hosts will turn to AI jokes? They are all probably thinking about it.
Honestly, I'm not sure what is so authentic about jokes from ChatGPT, for example; it just rehashes already existing punchlines that it learned from us, because we used them decades before.

It's somewhat different in drama or novels, because there one can still come up with new plot lines (Hollywood can't lately, anyway...), but jokes are a rather mature field.
Plus, I have asked ChatGPT multiple times for jokes on various subjects, and it gave me mediocre top 10s, the kind of thing I remember reading in a newspaper column years ago when that was still a thing.

Am I missing out on something here?
 
  • #473
A lot of the late night comedy is geared toward current events. Since GPT was cut off from data after 2021, it would have a hard time writing jokes that could be used. You would really have to work at setting up scenarios of current events to give it something to work with. I could easily see one of them doing a segment on GPT-generated jokes just to see the audience reaction though - maybe even spinoffs along the lines of Jimmy Kimmel's Mean Tweets.
 
  • #474
Algr said:
This is a strange time for a writer's strike. I wonder if any of the late night hosts will turn to AI jokes? They are all probably thinking about it.
Historically the late night hosts in particular have been supportive of such strikes.

Sure there is risk in a strike, but it's better to address the issue before it comes to a head. Also, this strike seems mostly about streaming revenue.
 
  • #475
The last time there was a strike (or maybe the time before?), Letterman did eventually go on air without writers. He just had more guests, and made jokes about "comedy filler, for when the comedy is just not that good."
 
  • #476
IBM Plans To Replace Nearly 8,000 Jobs With AI
https://finance.yahoo.com/news/ibm-plans-replace-nearly-8-174052360.html

IBM CEO Arvind Krishna announced a hiring pause, but that’s not all. He also stated the company plans to replace nearly 8,000 jobs with AI.

Krishna noted that back-office functions, specifically in the human resources (HR) sector, will be the first to face these changes.
This would be great material for Dilbert.

The transition will happen gradually over the next few years, with machines potentially taking over up to 30% of noncustomer-facing roles within five years. This means that workers in finance, accounting, HR and other areas will likely find themselves facing stiff competition from robots and algorithms.

The decision highlights the increasing reliance on automation and artificial intelligence across various sectors and the potential impact on the workforce.

It's not the first time the company has made headlines for cutting jobs. Earlier this year, IBM also announced that it would be slashing 3,900 jobs, indicating a larger trend toward automation and cost-cutting measures in the tech industry.

The automated phone systems are already bad enough.
 
  • #477
I think we can all agree that generative A.I. is interesting, but Cook's response stands out because it goes against the grain of what almost every other tech company is doing right now. Three words really say it all. Cook says it's important to be "deliberate and thoughtful."

Those words, as much as anything, are a hallmark of Cook's approach. Apple doesn't do things impulsively to respond to whatever happens to be the next big trend. It takes its time figuring out the best possible version of a given feature, and only then does it unleash it on more than 2 billion active devices.

By contrast, here's what Zuckerberg said just a few days earlier when talking about how Meta is thinking about adding A.I. features to Facebook, Instagram, and WhatsApp:
https://www.inc.com/jason-aten/tim-...t-ai-his-response-was-best-ive-heard-yet.html

Basically, Zuckerberg is telling investors that he's not really sure how A.I. fits into its products, but they'll figure it out as they "ship more of these things." It sounds a lot like more of the classic "move fast and break things." Except, this time you're breaking something with the potential to do a lot more damage.

Apple and social media entities like Meta are two different animals.
 
  • #478
Rive said:
Sure

Humans took thousands of years to develop those shamanistic chemicals, while for AI it took only a few years - and by now they are self-sufficient already o0)
I agree with you. Yesterday I was talking with an AI about an economic system on a fictional alien planet with free telecommunications services, and it explained several different economic systems/effects that could arise from access to free internet and free cellular service. :) I'm not gonna reveal what it said, as AI-produced content is not allowed on PF. But it impresses me: an AI is not an economist, and telecoms are not free on Earth, yet it explained the subject well in five paragraphs.
 
  • #479
The goals must be to improve the quality, health, wealth and sustainability of life by using technology, shared knowledge, and more effective education, and to create new jobs to accomplish this. How well this is achieved depends on our safeguards for democracy, political will, and the justice system.
 
  • #480
Just to throw the discussion back a little toward its original title.

I happen to know an AI researcher, and we sometimes talk at length about these issues out of curiosity.
He says that it is both practically and theoretically impossible to measure consciousness by outward appearance because, as we know, consciousness can be mimicked, much like a parrot copies sounds without ever understanding them or being fully aware of what it is doing. Intelligence is far easier, as it can be measured, if by nothing else, then at least by a simple IQ test.

But he claims that every conscious entity has "self models", models of the self stored within memory. We all take part in building, reforming, and adjusting these models within our heads as we live our lives.

So, for example, if a bunch of birds follow the trail of a tractor plowing a field, it's because they know from memory that this action turns up fertile ground full of insects and earthworms for food.
Obviously, tractors have been around for only about 150 years, so that is a learned behavior the birds have adapted to.
So basically a self model is just the totality of your memory, muscle memory, and learned behavior, plus the intellectual capability to acquire new information and process the existing.

So if a true consciousness like ours has a very large set of such models, and each one is complex and intertwined with the others, then how do we rank AI systems?

Well, basically, we look at how far they can adjust the models we taught them. The ones we preset or input into the AI computer don't count, as they are made by us; so we should look at how many new ones the machine has created, and whether they simply follow a preset pattern made by us or differ in ways that mimic our own creation of new patterns and information.
It's essentially like watching your child and determining how much of his behavior is simply copying his parents (which all children do) and how much is innovative and new.

Another belief he has, and I somewhat agree, is that in order to create true conscious intelligence one needs to provide it with a "body".
In humans, we do know that people with severe bodily health issues, especially from birth, tend to also have problems with consciousness and especially with intellectual capability. So a body that can feel, sense, and explore provides a huge leap for the intellect in question, as it allows it to acquire new complex information much faster and in many more ways than an internet connection would ever allow an AI box sitting in a basement somewhere.

Don't judge this too harshly; it is all new territory and these are just ideas, but I believe they have some merit. It might just be that when we learn to create an advanced robotic body and couple it with AI, or couple AI with augmented human limbs, we will truly have to start fearing AI going AGI and making up its own mind.

ChatGPT, for example, is so far from any of this that it's just a glorified gossip box, as some call it.
 
  • #481
OpenAI CEO Sam Altman will make his public debut in Congress on Tuesday, testifying before a Senate Judiciary subcommittee at a hearing on AI oversight.

The testimony comes as lawmakers seek to better understand the range of risks posed by generative AI and possible ways to mitigate them, as The Hill previously reported.

“AI is one of the most important issues of our times, with enormous potential both positive and negative, and it is crucial that we get it right,” Gary Marcus, professor emeritus at New York University, said in an email.
https://thehill.com/video/4006208-watch-live-openai-ceo-testifies-before-senate-panel/

Marcus said he will discuss the urgency of the situation and “the tremendous need for having independent scientists at the table” in order to “distinguish between hype and reality and to identify and mitigate a broad arrays of newly-developing risks.”
 
  • #482
artis said:
So a body that can feel, sense and explore provides a huge leap for the intellect in question, as it allows it to acquire new complex information much faster and in many more ways than an internet connection would ever allow an AI box sitting in a basement somewhere.
That's what I said:
jack action said:
To make a choice, a physiological sensor must exist first. One that either hurts you or makes you feel good. This is the only way choices will be made.

And then, to be conscious, you must have the ability to adapt, to evolve, such that your choices evolve with your environment; otherwise, you will surely die by making a fatal wrong decision one time or another. At this point what seem to be bad choices may be the right ones and vice-versa, because everything around you is changing in an unpredictable way. And you spend the rest of your life always in doubt, which means "artificial" intelligence just became "old boring" intelligence as we already know it.

Bummer.
 
  • #483
jack action said:
That's what I said:
It seems I forgot your response from earlier. I would argue that physical reality is not an absolute must, because in theory one can create a "person" in virtual reality, where it can have a realistic experience if the VR is built to mimic our physical world as closely as possible; in fact, many games nowadays are done that way.

After all, for a silicon based computer the pain is never "really real" like it is for us with our biological bodies, because that computer would only sense pain as some specific level/spectrum of input signal from, say, a piezo sensor that registers the shock. So for the computer brain this signal could come from a real sensor, or from a simulated one as it moves its assumed persona through a VR setting; I honestly don't see the difference.

After all, reality for the brain is only as real as our senses can make it, since the brain itself sits in a dark space and cannot directly interact with any outside force, except maybe inertia and other forces that cause brain injuries, but those are not the usual senses one feels going about one's day.
 
  • #484
artis said:
for a silicon based computer the pain is never "really real" like it is for us having biological bodies because that computer would only sense pain as some specific level/spectrum of input signal from say a piezo sensor that determines the shock
Our biological bodies work the same way: your brain "senses pain" because of particular nerve impulses coming into it, but your brain has no way of knowing what originally produced those nerve impulses. That's why, for example, people can feel pain (and other sensations) from limbs that they no longer have (because they were amputated--look up "phantom limb"). So it is not correct to say that the pain is "never really real" for a silicon computer on these grounds, unless you are willing to say the same thing for us having biological bodies. But if any of our pains can be "really real", then so could pains felt by a silicon based computer.
 
  • #485
PeterDonis said:
Our biological bodies work the same way: your brain "senses pain" because of particular nerve impulses coming into it, but your brain has no way of knowing what originally produced those nerve impulses. That's why, for example, people can feel pain (and other sensations) from limbs that they no longer have (because they were amputated--look up "phantom limb"). So it is not correct to say that the pain is "never really real" for a silicon computer on these grounds, unless you are willing to say the same thing for us having biological bodies. But if any of our pains can be "really real", then so could pains felt by a silicon based computer.
Your objection is correct; I should have been clearer. What I meant is that, due to the differences in information-processing architecture and in the "resolution" and number of sensors between the human body and our current robots, robots have a very limited and crude level of sensory input compared to a human body, which, for example, has about 1,000 nerve endings per square inch of skin surface area; and that's just the skin, never mind how many more there are internally.
And the human brain manages to process all that input on a continual basis with roughly the equivalent of 20 watts of power.

The similarity is there, of course: both a silicon computer and our brain feel nothing apart from the input they receive, so "reality" is also a construct made in our heads, although it never really is within our heads, so to speak. But still, due to the vast difference in complexity and architecture, I'd say the pain is "more real" for us than for our current robots.

It's like comparing a true analog complex waveform with a low-resolution digital copy of it: if the resolution is low enough, at some point you lose almost any similarity between them and the waveform isn't a copy anymore.
But I get your point; yes, it is true, pain as such is still just a type of sense given to the brain by nerves.
All I wanted to say is that with very few sensors and crude input processing you only really have low resolution, and therefore something that is the equivalent of awful, medium, or light pain, versus, for us, a million different "pains" from barely noticeable to excruciating and everything in between.
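The waveform analogy can be made concrete with a toy quantization sketch. The sine wave and bit depths here are arbitrary choices for illustration, not a claim about how nervous systems or robots actually encode anything:

```python
# Toy illustration of resolution loss: quantize a smooth waveform to
# very few levels and measure how far the copy strays from the original.
import math

def quantize(samples, bits):
    """Round each sample to the nearest of 2**bits levels spanning [-1, 1]."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return [round((s + 1.0) / step) * step - 1.0 for s in samples]

def max_error(samples, bits):
    """Largest pointwise gap between the original and its quantized copy."""
    q = quantize(samples, bits)
    return max(abs(s, ) if False else abs(s - v) for s, v in zip(samples, q))

# One cycle of a sine wave, 64 samples.
wave = [math.sin(2 * math.pi * i / 64) for i in range(64)]
coarse = max_error(wave, 2)   # 4 levels: crude, "pain is on or off"
fine = max_error(wave, 12)    # 4096 levels: nearly indistinguishable
```

At 2 bits the "copy" can miss the true value by about a third of the full range, while at 12 bits the error is negligible, which is the low-resolution versus high-resolution contrast drawn above.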
 
  • #486
It was a matter of time before some mischief occurred. Yesterday at about 10:09 AM the S&P dropped 10 points quickly and just as quickly recovered. An image had been posted on Facebook of an explosion that supposedly occurred at the Pentagon. A few minutes later the image was repudiated on Twitter and the market recovered. Also, several days ago 30 tons of ammonium nitrate went "missing" from a railroad shipment. Was this a test? Fun and games ahead.

[Attached image: S&P market drop, 5-22-23]
 
  • #487
Interesting how quickly 'people' who are stock managers can react to (mis)information :rolleyes: and then to the verification (of nefarious game-playing by someone).
7 minutes for the Dow to drop and recover.
5 minutes for the S&P

So what isn't being said here?

From CNN:
https://www.cnn.com/2023/05/22/tech/twitter-fake-image-pentagon-explosion/index.html
In the moments after the image began circulating on Twitter, the US stock market took a noticeable dip. The Dow Jones Industrial Average fell about 80 points between 10:06 a.m. and 10:10 a.m., fully recovering by 10:13 a.m. Similarly, the broader S&P 500 went from up 0.02% at 10:06 a.m. to down 0.15% at 10:09 a.m. By 10:11 a.m., the index was positive again.
 
  • #488
256bits said:
Interesting how quickly 'people' who are stock managers can react to (mis)information.:rolleyes: and then to the verification ( of nefarious game playing by someone).
7 minutes for the Dow to drop and recover.
5 minutes for the S&P
I am sure there are plans, if not systems already in place, to use AI to monitor the market and the news/internet second by second to keep some investors ahead of the herd. So we might expect even greater volatility.
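As a hedged illustration of that idea (not a description of any real trading system), a naive headline monitor might look like this; the keywords, weights, and threshold are all invented:

```python
# Hypothetical sketch of a keyword-based news monitor that flags
# alarming headlines. Real systems would be vastly more sophisticated.
ALARM_WEIGHTS = {"explosion": 3, "attack": 3, "pentagon": 2, "crash": 2}

def alarm_score(headline):
    """Sum the weights of any alarm keywords appearing in the headline."""
    return sum(ALARM_WEIGHTS.get(w, 0) for w in headline.lower().split())

def signal(headline, threshold=4):
    """Return 'SELL' when a headline looks alarming enough, else 'HOLD'."""
    return "SELL" if alarm_score(headline) >= threshold else "HOLD"

print(signal("Explosion reported near Pentagon"))  # prints SELL
print(signal("Markets open flat on Monday"))       # prints HOLD
```

A bot this naive would have dumped stock on the fake Pentagon image just as fast as the humans did, which is exactly the volatility concern.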
 
  • #490
Just got access to Google's new search AI, SGE. It's a nice search experience.
 
