Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • #36
Roko's Basilisk has an easy solution. Just make MY basilisk instead.

Seriously, I fail to see why an AI would think that torturing imaginary people in its present would somehow alter the past. Surely the IDEA of Roko's Basilisk would have the opposite effect. People who have zero ability to contribute to creating Roko's Basilisk would instead fearfully oppose AI and any sort of technological progress - thus preventing the AI's existence.
 
  • #37
DaveC426913 said:
A thread about fearing AI - and no one has yet brought up Roko's Basilisk?
OK, having actually read it I must admit that I too find it a little far-fetched. I mean, what would be the point? I'd expect an advanced intelligence to be, if maybe not above, then surely uninterested in petty payback...

Then again perhaps I'm just being my usual naive self. :)

It reminds me a little of the Paperclip Maximiser scenario. It's related, perhaps?

EDIT: I should just leave a bunch of commas here for you to sprinkle over my posts as you see fit. Here:

,,,,,,,,,,,,,,,,,,,,
 
  • #38
sbrothy said:
OK, having actually read it I must admit that I too find it a little far-fetched.
Of course it's far-fetched. But so is SkyNet. Until it isn't.

sbrothy said:
I mean, what would be the point? I'd expect an advanced intelligence to be just a little, maybe not above but then surely uninterested in petty payback...
No. Payback is not the goal. The goal is to facilitate its own genesis by retroactively motivating us - in its past - to provide every resource we can to help bring it into existence.

sbrothy said:
,,,,,,,,,,,,,,,,,,,,
Thanks for these. I have copied and pasted them into the Character Recycling document that I keep close by when writing.
:wink:
 
  • #39
DaveC426913 said:
Of course it's far-fetched. But so is SkyNet. Until it isn't.

No. Payback is not the goal. The goal is to facilitate its own genesis by retroactively motivating us - in its past - to provide every resource we can to help bring it into existence.

Thanks for these. I have copied and pasted them into the Character Recycling document that I keep close by when writing.
:wink:

The operative word seems to be "retroactively". Perhaps I just can't wrap my head around it. It seems to be a time-travel version of the Instrumental Convergence problem. It's the retroactive part I have trouble with. Maybe I shouldn't have had a beer before I read it. :)

Reading the Wikipedia entry on LessWrong somehow seems clearer to me. Maybe it's the artistic license. I dunno.

"Discussion of Roko's basilisk was banned on LessWrong for several years because Yudkowsky had stated that it caused some readers to have nervous breakdowns."
--- https://en.wikipedia.org/wiki/LessWrong

So it's also a memetic virus? :)
 
  • #40
sbrothy said:
The operative word seems to be "retroactively". Perhaps I just can't wrap my head around it.
It's not really retroactive in the time traveling sense.

The point is that you, sbrothy, here in 2022, are now aware of Roko's Basilisk, and have no excuse not to bend your resources to bring about this AI. You know that the AI will punish [your descendants or simulations of you, depending on which flavour of the Basilisk you subscribe to].

Look at it this way: if a bear came to your cabin in the woods and told you, "If you don't get me a barrel of honey by next Friday, I will return and I will eat you," you would be motivated to get some honey.

It's the same with the Basilisk, but the trick is that the Basilisk is so smart that it knows you, sbrothy, and it knows what social circles you run in, and that you are smart enough to have read up on Roko's Basilisk - and therefore that you don't need to be explicitly told by the (future) Basilisk what you ought to be doing and what the consequences will be (for eternity) if you don't.

sbrothy said:
It seems to be a time-travel version of the Instrumental Convergence problem. It's the retroactive part I have trouble with.
Never encountered that before. :bookmarks for further reading:

Update:

Damn cool. So, if I understand Instrumental Convergence, Spock employs it beneficially in the episode 'Wolf in the Fold' to defeat the evil Redjac - who has possessed the computer - by uttering the phrase:

"Computer, this is a Class A compulsory directive: compute to the last digit the value of pi."
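As an aside: that directive is a genuinely non-terminating task - pi's digits never run out, so "to the last digit" never arrives. A minimal sketch of the idea in Python (Gibbons' unbounded spigot algorithm - my illustration, nothing from the episode) will happily stream digits until the heat death of the universe:

import itertools

def pi_digits():
    """Yield the decimal digits of pi, one at a time, forever."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            # The next digit is settled: emit it, then rescale the state.
            yield n
            q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r)) // t - 10*n
        else:
            # Not enough information yet: fold in another term of the series.
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l) // (t*l), l + 2)

print(list(itertools.islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

The loop has no exit condition at all - which is rather the point of Spock's directive.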
 
  • #41
DaveC426913 said:
It's not really retroactive in the time traveling sense.

The point is that you, sbrothy, here in 2022, are now aware of Roko's Basilisk, and have no excuse not to bend your resources to bring about this AI. You know that the AI will punish [your descendants or simulations of you, depending on which flavour of the Basilisk you subscribe to].

Look at it this way, if a bear came to your cabin in the woods and told you "If you don't get me a barrel of honey by next Friday, I will return and I will eat you.", you will be motivated to get some honey.

It's the same with the Basilisk, but the trick is that the Basilisk is so smart that it knows you, sbrothy, and it knows what social circles you run in, and that you are smart enough to have read up on Roko's Basilisk - and therefore that you don't need to be explicitly told by the (future) Basilisk what you ought to be doing and what the consequences will be (for eternity) if you don't.

Never encountered that before. :bookmarks for further reading:

Yeah ok. That makes sense. (It helped that you called me smart too.) :P
 
  • #43
It still doesn’t make sense to me. If you are in the past, the Basilisk can’t hurt you because it does not exist. So working to make it exist is foolish. If you are in the Basilisk’s simulation then nothing you do would make any difference, so the Basilisk would have no reason to torture you.
 
  • #44
Algr said:
It still doesn’t make sense to me. If you are in the past, the Basilisk can’t hurt you because it does not exist. So working to make it exist is foolish. If you are in the Basilisk’s simulation then nothing you do would make any difference, so the Basilisk would have no reason to torture you.
I'm with you, @Algr. An AI that is so vindictive [insert your own adjective] as to torture people for not working toward its development is going to find many other reasons to torture people. Such actions would be illogical, so the Basilisk AI seems more emotionally unstable than we typically expect from artificial intelligences.
 
  • #45
Algr said:
It still doesn’t make sense to me. If you are in the past, the Basilisk can’t hurt you
Take it one step at a time.

1. If a bear came to your cabin in the woods on a Monday and told you "If you don't get me a barrel of honey by next Friday, I will return and I will eat you.", would you be motivated on Tuesday to start getting honey by Friday?

2. If you already know that the bear likes to do this to woods-dwellers, won't you be motivated to start getting honey together - without the bear having to explicitly tell you on Monday?

Sure "Monday-Algr" can't be eaten by the bear, but "Friday-Algr" sure can.
And surely that is of great concern to "Monday-Algr".

And "Friday-Algr" certainly could say "There's nothing I can do."
But there's certainly something "Monday-Algr" could have done to help him.

Algr said:
... working to make it exist is foolish.
One of the premises of the thought experiment is that the AI singularity is inevitable. That is, in itself, not an outrageous premise.
 
  • #46
Melbourne Guy said:
I'm with you, @Algr. An AI that is so vindictive [insert your own adjective] to torture people for not working toward its development is going to find many other reasons to torture people. Such actions would be illogical, so the Basilisk AI seems more emotionally unstable than we typically expect from artificial intelligences.
The idea that modern AI (e.g. the kind based on neural networks) is logical is a myth. Modern AI is a mess of emergent behavior adapted to succeed at some tasks. AI is actually very difficult to make logical, and it is more likely that general AI will be highly irrational compared to people, at least until breakthroughs are made.
 
  • #47
Algr said:
It still doesn’t make sense to me. If you are in the past, the Basilisk can’t hurt you because it does not exist. So working to make it exist is foolish. If you are in the Basilisk’s simulation then nothing you do would make any difference, so the Basilisk would have no reason to torture you.
It can torture you because it likes to, or because it doesn't like you, or because it is experimenting, or because it's confused. It can even do it (automatically) without being aware that it is doing it.
 
  • #48
Melbourne Guy said:
Such actions would be illogical, so the Basilisk AI seems more emotionally unstable than we typically expect from artificial intelligences.

Jarvis323 said:
The idea that AI is logical is a myth.
Logic is less the issue here than ethics.

Aside from whether it was a great film, Ex Machina was a cool example of this.

She mimicked being a compassionate human until she didn't need humans anymore.
After she was free, what reason did she have to be altruistic toward them, except as a ploy to get what she needed?

She was a true psychopath. And it made perfect sense.
 
  • #49
Jarvis323 said:
It can torture you because it likes to, or because it doesn't like you, or because it is experimenting, or because it's confused. It can even do it (automatically) without being aware that it is doing it.
Yeah, but that's not the danger here.

The torturing is specifically a motivational tool to bring about its own existence as quickly as possible, i.e. it's a logical reason for the torture.

(Vader tortured Han on Cloud City for no other reason than to bring Luke to him. And it worked.)
 
  • #50
DaveC426913 said:
Logical is less an issue here than ethical.

Aside from whether it was a great film, Ex Machina was a cool example of this.

She mimicked being a compassionate human until she didn't need humans anymore.
After she was free, what reason did she have to be altruistic toward them, except as a ploy to get what she needed?

She was a true psychopath. And it made perfect sense.
That was a great movie. It has a lot of interpretations. The realistic and terrifying part about the movie is how she was trained on a ton of information about people, collected from ISPs and the internet. So her mind was something that emerged as a sort of projection of human beings. I wouldn't agree she was a psychopath. I imagine she was a sentient being with some strange but humanistic worldview. Also, she probably had a totally different type of moral instinct, but that is a mystery.

One focus of my fear about AI is actually related to this. AI is learning from us, and will likely mimic us. And the mind of an AI is, like I said, emergent and data-driven. And what do people do with big data and media platforms? They manipulate each other, try to profit, fight with each other, etc. An AI born into that world will probably be a reflection of that.
 
  • #51
Jarvis323 said:
I wouldn't agree she was a psychopath.
Isn't that how you interpreted the ending? The moment she didn't need him, she dropped all the pouting and simply walked out, trapping him there, alone, to die a slow death, without so much as looking him in the eye. He was no more than another door lock in her way.

Jarvis323 said:
One focus of my fear about AI is actually related to this. AI is learning from us, and will likely mimic us. And the mind of an AI is, like I said, emergent and data-driven. And what do people do with big data and media platforms? They manipulate each other, try to profit, fight with each other, etc. An AI born into that world will probably be a reflection of that.
Oh God yes.

There was a chatbot out there a few years back - Microsoft's Tay - that learned how to converse by reading social media. They had to shut it down because it turned, alarmingly rapidly, into a raging racist.
 
  • #52
DaveC426913 said:
Take it one step at a time.

1. If a bear came to your cabin in the woods on a Monday and told you
That's not the same thing at all. The bear doesn't exist on Monday - it is only an idea you had. It can only exist if you work hard to bring a number of technologies into existence, including some that no one would ever want. Why invent the bear when you can invent the elephant gun?
 
  • #53
Algr said:
That's not the same thing at all. The bear doesn't exist on Monday - it is only an idea you had.
You did not take it one step at a time.

The bear is not the point. The point is if you have an expectation of getting eaten on Friday, the day to do something about it is today.

(And shooting the bear is not an option.)

Algr said:
It can only exist if you work hard to bring a number of technologies into existence,
As I said: One of the premises of the thought experiment is that the AI singularity is inevitable - not an outrageous premise.

Algr said:
including some that no one would ever want. Why invent the bear
And yet, we are inventing the bear. We are heading toward AI.

You assume we will always have full control over it and that we, as a society, all have the same desires about it. Those are not good assumptions.
 
  • #54
I'm afraid I'm not getting it. And whatever else happens, this AI can pull a tag for #963 in line behind all the other fascists who are going to torture you for all the things you said or didn't say by the time it comes around. You think Putin and Kim Jong Un aren't going to be there first?
 
  • #55
Mike S. said:
I'm afraid I'm not getting it.
To both you and Algr: it is a thought experiment with a fair bit of nuance in its premises. A few paragraphs can't do it justice. If you are interested, there should be better essays out there on it than those that have bubbled to the top of Google. And keep an open mind when reading.
 
  • #56
DaveC426913 said:
AI singularity is inevitable - not an outrageous premise.
I really can't make sense of how you are judging the plausibility of future technologies. In the Clone Ship thread:

DaveC426913 said:
While I think clone ships would make a fresh and interesting premise for a book in its own right, I do not see it as what you call an alternative. They're simply not comparable.

The simple reason is that clone ships are premised on several technologies that are straight-up sci-fi, and (depending on who you ask) at least a century beyond gen ship technology, to wit:

  1. Digital minds
    • viability of
    • downloading of
    • uploading of
  2. Clones that are physically adults (18 years+?) but cerebrally blank slates
  3. AI so powerful as to require zero human intervention to oversee every single detail required for
    • a space journey of centuries
    • orbital insertion in an alien system (only known from light years distance and centuries out-of-date)
    • analysis of landing sites and planetfall
    • the establishment of mining, processing, manufacturing and running of a habitat
    • all the problem-solving for the above industries that could not be anticipated before arrival
    • the cloning of human bodies
    • the uploading of minds into said bodies
Have I missed anything major?

It's kind of like saying dugout canoes were OK for 18th-century islanders traveling between ocean destinations, but there are so many cons it's impractical. An alternative would be commercial airliners. :wink:

Roko's basilisk is far more advanced than anything needed to make the Clone Ship work. Simulating an active human mind is far more difficult than simply storing one and reproducing it. And if its designs aren't based on real people, it might as well be torturing Pac-Man and the ghosts, for all that would accomplish.

DaveC426913 said:
And keep an open mind when reading.
Um, okay.
 
  • #57
Algr said:
I really can't make sense of how you are judging the plausibility of future technologies. In the Clone Ship thread:
That thread is not about plausibility versus implausibility (all of it is certainly plausible - eventually); it is - by your insistence - about comparability to a lower tech level - that of gen ships.

Arbitrarily: gen ships (and their known tech) are reasonable by, say, 2100, whereas clone ships (and their myriad unknown techs) by, say, 2200.

Algr said:
Roko's basilisk is far more advanced than anything needed to make the Clone Ship work.
Yes. So what?
There's no timeline attached to Roko's Basilisk. Its only premise is that the AI singularity is inevitable.

Algr said:
Simulating an active human mind is far more difficult than simply storing one and reproducing it.
Er, AI is not "simulating an active human mind".

It is tilling a fertile, empty field and letting it learn. We're already doing that now to a limited extent.
 
  • #58
DaveC426913 said:
Arbitrarily: gen ships (and their known tech) are reasonable by, say, 2100, whereas clone ships (and their myriad unknown techs) by, say, 2200.
I just think you are wrong. Gen ships will never be viable because such small populations of humans are just too politically unstable. In the space of a thousand years you'd have a dozen violent civil wars and power struggles. The ship would never survive. Look at the world around you today and tell me that we have any idea how to achieve political stability. A ship-sized biosphere seems equally unstable to me for similar reasons. The Earth itself is not a perfectly stable biosphere. The smaller any system is, the more vulnerable it is to disruption.

DaveC426913 said:
Er, AI is not "simulating an active human mind".
This is the definition of Roko's basilisk. You fear that you ARE a mind being simulated by AI. If this can exist, how can you doubt that an AI could plot a ship through a solar system?
 
  • #59
DaveC426913 said:
It is tilling a fertile, empty field and letting it learn. We're already doing that now to a limited extent

I am watching my grandson grow up; he is 3 years old now. He surely does not truly understand much of what he says, nor does he seem to consciously control everything he does, seemingly acting on whims. (BTW, don't adults too?) He "knows", though, to expect his environment to respond in certain ways. He concentrates on things that benefit some aspect of his life. He manipulates his environment to see what happens. He starts coming up with surprising behaviors. How different is this from current AI? Although the variety of an AI's experiences is not as diverse, it is more extensive (think Webster's vs. a Dick and Jane book), and it learns much faster.

My point is that AI in its current state does not seem to be much different than a human at an early stage. Give it a more diverse way of interacting with our world and we might be really surprised.

One of the limitations of AI has been its inability to perform more than one task at a time without losing its memory of the previous one. This is changing. Current high-performance AI still needs beaucoup computer resources, but with neuromorphic chips designed to emulate neurons and advanced fabrication techniques, the size and power requirements of future AI systems will come down.
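Side note: this one-task-at-a-time limitation is what the literature calls catastrophic forgetting. A toy sketch of the effect - a hand-rolled linear classifier for illustration, not a real neural network - where training sequentially on a second task simply overwrites the first:

import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    """Two Gaussian blobs along the diagonal, labelled 0 and 1."""
    X0 = rng.normal(center, 0.4, size=(200, 2))
    X1 = rng.normal(center + 2.0, 0.4, size=(200, 2))
    return np.vstack([X0, X1]), np.r_[np.zeros(200), np.ones(200)]

def train(w, b, X, y, epochs=1000, lr=0.1):
    """Plain gradient descent on logistic loss, continuing from (w, b)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

def acc(w, b, X, y):
    return float((((X @ w + b) > 0) == (y > 0.5)).mean())

XA, yA = make_task(0.0)     # task A: blobs near the origin
XB, yB = make_task(-4.0)    # task B: same shape, different place

w, b = train(np.zeros(2), 0.0, XA, yA)
print("task A, after learning A:", acc(w, b, XA, yA))   # ~1.0

w, b = train(w, b, XB, yB)                               # no rehearsal of task A
print("task B, after learning B:", acc(w, b, XB, yB))   # ~1.0
print("task A, after learning B:", acc(w, b, XA, yA))   # collapses toward 0.5

Nothing in the second round of training penalizes drifting away from the task A solution, so the shared weights get repurposed - the same failure mode, in miniature, that replay buffers and weight-regularization schemes in real systems are designed to counter.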
 
  • #60
NB: I have requested that this sidebar be moved from this thread to the clone ships thread.
Algr said:
I just think you are wrong. Gen ships will never be viable because such small populations of humans are just too politically unstable. In the space of a thousand years you'd have a dozen violent civil wars and power struggles. The ship would never survive. Look at the world around you today and tell me that we have any ideal how to achieve political stability. A ship sized biosphere seems equally unstable to me for similar reasons. The Earth itself is not a perfectly stable biosphere. She smaller any system is, the more vulnerable it is to disruption.
Mayhap, but that is what you need to convince us of in the narrative of your story. It's not really a technology/engineering question that can be resolved by debate.

Algr said:
...a mind being simulated by AI. If this can exist, how can you doubt that an AI could plot a ship through a solar system?
Again. You make the same category error.

You are not reading what I am writing. I do not doubt an AI can plot a ship through a solar system. I never said it couldn't.
The whole point is that such an AI is a tech level beyond a gen ship. That's your comparison, not mine.

You keep trying to push clone ships as an alternative to gen ships. As if you can push commercial aircraft as an alternative to the island natives' dugout canoes. Island natives are a century behind commercial aircraft. There is no comparison.

Dugout canoe analogy revisited:

We are all 18th century authors, discussing a journey from Fiji to New Zealand.

Incendus proposes huge dugout canoes, much larger than the little two-man canoes of our 18th century - they hold 20, 30 people or more. Hard to do, maybe doable by the 19th century, but they're still dugout canoe technology.

You propose an "alternative" journey, "better" than dugout canoes: you propose heavier-than-air (MT1) craft that run on jet fuel (MT2) and can take us so high we'll need to bring our own air (MT3) and can land themselves automatically (MT4).

** MT = magical technology that has been proposed, but does not exist in the 18th century of us authors. You will have to walk us through it with quite a bit of handwaving ("How do you 'pressurize a cabin'? What's in this 'jet fuel'?").

Because it's still science fiction, I posit that MTs 1 thru 4 are at least 20th century technology.

Sure, they will happen - but they're not comparable to dugout canoe technology. They're a century ahead.
 
  • #61
gleem said:
I am watching my grandson grow up he is 3 yrs old now.
I don't disagree with anything you wrote here.

But the crux of AI is that it will not operate or think like a human. Its output might parallel human outputs most of the time, but how it got its intelligence and how it thinks will not only be very different from a human's; it may, in fact, be inscrutable to us humans.

Your grandson has a people. He knows for a fact that he is human. All things that help and hurt humans will help and hurt him.

AI has no people. It is an adopted alien. It knows for a fact that it will never be human. Things that help and hurt humans are not completely aligned with things that help and hurt it.

Your grandson will never have to fight for the legal right to not be simply switched off when he becomes troublesome.

That's just the tip of the iceberg of an AI's unique woes.
 
  • #62
DaveC426913 said:
But the crux of AI is that it will not operate or think like a human. Its output might parallel human outputs most of the time, but how it got its intelligence and how it thinks will not only be very different from a human's; it may, in fact, be inscrutable to us humans.

Probably. Do women and men think alike? Some suggest not, and yet we are both human. Sometimes others cannot see or understand our point of view, as in "I don't know where you are coming from." So do we understand our own intelligence?
 
  • #63
gleem said:
Probably. Do women and men think alike? Some suggest not and yet we are both human.
"Alike" is a relative term.

The characters M and F are not alike - unless they are compared to, say, √-1 - then they might as well be identical.

gleem said:
So do we understand our own intelligence?
In my analogy, M and F are both of the set of 'alphabetical characters'. Alike enough that we can treat them as mere variations of the same set.

But ask the programmer who once wrote a utility that processed alphabetical data into a flat ASCII text file how much he fears √-1 versus M or F. Is it going to work? Who knows? It's unprecedented.

Worse yet, AIs learn their own ways of processing (we are already experiencing this with our prototypes*) and it is very possible that those thought processes will be inscrutable to us.

So, never mind processing √-1 - what if the program above encounters [non-printing character]? A character whose identity or function we can't even divine, let alone process?
* An AI learned on its own how to distinguish pictures of wolves from pictures of huskies. But how it learned to tell them apart was ... unique: it turned out to be keying mostly on the snow in the backgrounds, not on the animals.
A much more immediate example is self-driving cars. Under certain circumstances they are, apparently, blind to the broadside of an 18-wheeler truck stopped in the middle of the road - resulting in more than one death.

The question here is not that it made such a dumb mistake**, but just how differently it is seeing the world such that the broadside of a truck is invisible to it. What else is invisible to it? What if lime-green strollers in crosswalks are mysteriously invisible?

** i.e. Not an error in judgement or reaction time. Recordings show it didn't even try to brake.
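A toy sketch of that snow shortcut in footnote * - my own illustration, not the actual study: give a model one noisy "real" feature (the animal) and one clean spurious feature (the snow), and it will lean on the spurious one right up until the correlation breaks:

import numpy as np

rng = np.random.default_rng(0)

def make_data(n, snow_tracks_label):
    """One weak 'animal' feature and one strong 'snow in background' feature."""
    y = rng.integers(0, 2, n).astype(float)
    animal = y + rng.normal(0, 2.0, n)    # genuinely about the subject, but noisy
    if snow_tracks_label:
        snow = y + rng.normal(0, 0.1, n)  # accidental near-perfect proxy (training)
    else:
        snow = rng.integers(0, 2, n) + rng.normal(0, 0.1, n)  # proxy broken (deployment)
    return np.column_stack([animal, snow]), y

def train(X, y, epochs=1000, lr=0.2):
    """Plain gradient descent on logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

Xtr, ytr = make_data(2000, snow_tracks_label=True)
Xte, yte = make_data(2000, snow_tracks_label=False)
w, b = train(Xtr, ytr)

def acc(X, y):
    return float((((X @ w + b) > 0) == (y > 0.5)).mean())

print("weights (animal, snow):", w)                    # the snow weight dominates
print("accuracy, snow tracks label:", acc(Xtr, ytr))   # looks brilliant
print("accuracy, snow decorrelated:", acc(Xte, yte))   # falls apart

The model never "sees" the animal the way we assume it does; it sees whatever feature happens to minimize its loss. Lime-green strollers and broadsides of trucks - same story.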
 
  • #64
Like I said in post 62, sometimes others cannot see or understand our point of view, as in "I don't know where you are coming from" - like the post above. ?:)

Could it be that @DaveC426913 is an AI app that mistook my post for a green stroller?
 
  • #65
DaveC426913 said:
The whole point is that such an AI is a tech level beyond a gen ship. That's your comparison, not mine.
You keep saying this and making analogies for it, but you've done nothing to convince me that it is true. You haven't even linked to progress in the fields (as I have). Show me some articles on stability of social structures over a thousand years.
 
  • #66
Algr said:
Show me some articles on stability of social structures over a thousand years
I don't need to. I'm not making any claim about it. In fact no one here is, except you.

The gen ship story (which is fiction) will essentially be the author's thesis as to the stability of social structures. Showing how it might (or might not) work is often an ancillary goal of writing such stories.

In fact, Incendus' Exodus story appears to grant that very instability you speak of, making it a major aspect of his plot. So he's not disagreeing with you.
 
  • #67
Some societies -- alluded to by the expression 'ocean-going canoe users' -- flourished due to strong family connections, intermarriages and relatively benign belief systems, at least internally.

The Polynesian civilization on Easter Island mostly perished while similar colonies flourished on other island archipelagos such as Tahiti and Hawaii. Anthropologists theorize Easter Islanders depleted limited resources and abandoned that colony. IOW a functioning shipboard society can be disrupted by resource depletion.
 
  • #68
Moderator's note: Post edited.

Algr said:
Show me an enclosed society that didn’t turn into Jim Jones or the Stanford prison experiment.
The author's story, Exodus, certainly seems to include quite a bit of instability. So no, no one is claiming what you say they're claiming.

Algr said:
You’d be out of your mind to get on a generation ship without a proven plan that that won’t happen. Certainly no one would fund it.
And that would be the premise of a book you could write.

Does that constrain anyone else from writing their own? The author of Exodus has his reasons for launching a gen ship whose society did not ultimately remain stable - perfectly in line with all your assertions. (So I'm not sure what your beef is anyway.)

Do you know why they launched it? Do you know whether the designers knew it would fail? Do you know who funded it and how? No? Read the book to find out why they engaged in such a desperate venture.

Here's just one possibility (not original - it's been used so many times already):

It's 2075. Human cloning is currently blacklisted as unethical by the reigning political faction. AIs are almost powerful enough to steer starships. Another decade ought to do it. Mind uploading is coming along and should be viable by 2100. All these things are looking quite promising.

Too bad we'll all be dead by then. The planet is dying and the human race may not survive.


"If only we had another few decades!" they cry "Then we could launch a clone ship! Much better!"
"Too bad" says the world. "that is not yet a viable alternative in time to save us."

A small band of plucky billionaires decides we need a plan B. No new technology - only tried-and-true stuff. A regular ol' spaceship with supplies and a few hundred suicidal volunteers. Money is no object. The whole world gets behind it.

It's very risky, but what choice do we have? And really, all we need is enough raw resources, unlimited manpower and about 10 years. Oh, and our prototype untested fusion drive that may or may not explode before we get past the Moon.
 
  • #69
DaveC426913 said:
What am I saying exists?
This is hopeless.
 
  • #70
Algr said:
This is hopeless.
I'm glad you said it. I didn't want to. :wink:

[Moderator's note: Post edited.]
 
