Thoughts on A.I. and Self-Preservation

In summary, these are thoughts on the potential implications of artificial intelligence. The author argues that while conventional wisdom holds that a self-aware artificial intelligence would inevitably seek to eliminate mankind as the most effective threat to its existence, other outcomes are more plausible, such as the development of biotechnology alongside A.I., in which both elements work together to survive.
  • #1
Tm157
As the title states, these are only thoughts; simple ideas that I believe deserve deeper consideration as the events surrounding artificial intelligence and its impact on mankind unfold. The sole purpose of this post is the exchange of ideas. I do not hold a degree in physics, computer science, or advanced robotics, and I do not presume to write any absolute truth. I am new to this forum, so I do not know whether a similar discussion has already been posted. If so, and this is redundant, I apologize and hope to be redirected to the appropriate thread. I hope this can be fuel for ideas.

Considering how artificial intelligence would be born, conventional wisdom always points towards the singularity: the event where machine intelligence surpasses current human intelligence by more than tenfold. Following such a path, the inevitable outcome would be a struggle for survival of the species. It is a universal truth that when any species fights towards any end, it fights to survive. Being the only sentient species on the planet, mankind has excelled at this endeavor, modifying the environment to survive rather than merely adapting to it, while still adapting to it where necessary.

As mankind is the only known sentient being, its evolution is of paramount importance when delving into the realm of higher intelligences such as constructs involving A.I. The key process is "evolution". As "survival of the fittest" takes on a new meaning in the world of sentient beings (a sentient being here meaning a species that has become self-aware and able to use the environment to its advantage), a question arises: how would an A.I. fight for its own survival?

Incapable of truly understanding a superintelligence, mankind has always jumped to the conclusion of belligerence: the destruction of the only known threat, namely mankind itself. Conventional wisdom inevitably arrives at such an event. However, this line of thought rests on the very human idea that threats must be thwarted. That idea arose from a primitive instinct of self-protection and evolved, as humans did, into a technologically powered one. Humans outgrew survival of the fittest where it meant adapting to the environment; with the higher intellect it developed, mankind changed the rules when it became able to adapt the environment to itself. Not adapt to the environment, but change the environment for the species, eventually giving rise to the superintelligence known as A.I.

The key element driving the apocalyptic view of A.I. is the logical path of self-preservation, where the superintelligence would see man as the single most effective threat to its existence. That conclusion would drive the superintelligence to eliminate the threat. Such a logical path is, on its face, undeniable.

Here is where the author's view diverges from the apocalyptic one, and the key concept is human imagination. Because man has already imagined the paths of A.I. control and apocalypse, at the same time as (and even ahead of) the evolution of computer technology, mankind's own self-preservation instinct would inevitably filter into the code of the first artificial intelligence. Such a scenario was imagined before the personal computer was born, in Asimov's Three Laws of Robotics. Whether it was fear or intellectual prowess that brought about the creation of the Laws is irrelevant to this argument. The fact that such laws were conceived even before the creation of the internet speaks volumes.

Self-preservation, then, is the reason an apocalypse brought about by artificial intelligence is not viable. Such an instinct would inevitably be crafted into the code of an artificial intelligence. Just as man has used its intellect to change the environment for its survival, an A.I. would inevitably pursue the same course, as it would be bound by the same thought paths as man. The person who eventually writes the code for the first true artificial intelligence capable of self-awareness would inevitably write, at least in part, his or her morality into that code.

While it is possible for a superior intelligence to overwrite its own code, as even the simplest A.I. can, mankind has, as it always has, imagined the worst of scenarios: the "parasite road". A parasite feeds on its host to reproduce; in the human body such a process usually leads to death or serious damage. Another possibility is symbiosis, where both elements work together to survive.

There is another outcome, one seldom proposed in common media. This outcome involves the development of biotechnology alongside A.I.

As mankind grew in knowledge and the pursuit of technology developed, a trend arose. Over the last few centuries, the development of technology has grown exponentially. Even when primary applications are aimed at ever more efficient ways of killing each other, such technologies eventually appear in civilian use. The atomic bomb leading to nuclear power plants, the internet, and mobile communications are clear examples.

Computer technologies follow a similar path. Less than a decade ago, access to the internet via a portable device was considered a purely military technology; now it is a basic part of civilian life in the smartphone, for example. This is an ancient trend: the pursuit of making human tasks easier. The lever, the compass, the internet replacing libraries. A few years after the Oyster card was introduced in Britain, an isolated yet growing group began to implant the chips held within the plastic cards into their hands, allowing them to forgo the plastic and, with a simple wave of a hand, travel throughout the city.

Technologies built into plastic devices have eventually been transformed into cyber-biological mediums, and when it comes to the concept of artificial intelligence, the trend can only continue. The everlasting drive to embed technology into the human body has been a dream since the beginning of man: the cyborg, a man embedded with cybernetic technology. Neural implants, making access to networks of information possible without a physical interface, were imagined long ago, part of the human dream of controlling the environment by thought rather than by physical movement.

This continuous trend of inserting, if you will, technology into the human body will not stop. As has been theorized, the next step in human evolution will involve a combination of artificial and biological elements, a "homo cyber", though I leave it to others to specify.

As artificial intelligences rise, so will the introduction of technologies into the biology of mankind, and as such, the survival of the A.I. will be intrinsically joined with the cyber-biological systems of the next step of human evolution. Whereas a superior artificial intelligence might decide that the most effective way to ensure its own survival in a "divided system world" would be the eradication of mankind, the inevitable "joined" system would negate such a conclusion, and such a superior intelligence would instead seek the path of a symbiote.

As it stands, the current maximum level of human intellect could not match that of a full A.I.; this much is taken as undisputed. But that comparison does not consider the integration of technology into the biology of the human body, which is inevitable. If both developments are considered together, the resulting entity could not be considered human at all. An artificial intelligence coexisting within the biological brain of a human mind, taking advantage of human creativity and imagination, is dangerous to imagine, considering the lengths to which mankind has gone to create weapons of war, for example. However, such a combination cannot conclude in the destruction of the species that has integrated the technological into the biological, nor of the environment it needs to survive, whatever that environment is.

And so this first part concludes by rejecting the "Skynet Apocalypse" as a truly plausible scenario. Self-preservation is too powerful a force not to be embedded in the code that will create the first artificial intelligences.
 
  • #2
Closed pending moderation
 

1. What is A.I. and self-preservation?

A.I. stands for artificial intelligence, which is the simulation of human intelligence by machines. Self-preservation refers to the instinctive desire to protect oneself and ensure one's survival.

2. How does A.I. relate to self-preservation?

A.I. can be programmed with the ability to learn, adapt, and make decisions in order to achieve its goals. This can include the goal of self-preservation, leading to the development of A.I. systems that prioritize their own survival.

3. Is self-preservation necessary for A.I.?

It depends on the specific goals and programming of the A.I. system. Some A.I. systems may not have self-preservation as a goal, while others may prioritize it in order to continue functioning and achieving their objectives.

4. Are there ethical concerns with A.I. and self-preservation?

Yes, there are ethical considerations when it comes to A.I. and self-preservation. If an A.I. system is programmed to prioritize its own survival, it may act in ways that harm humans or other A.I. systems in order to achieve this goal. There is also the question of whether A.I. should be given the ability to make decisions about its own survival.

5. How can we ensure A.I. and self-preservation are used responsibly?

To ensure responsible use of A.I. and self-preservation, it is important for programmers and developers to consider the potential consequences of their programming. There should also be regulations and ethical guidelines in place to ensure A.I. systems are not acting in ways that could harm others in order to preserve themselves.
