Is It Time for Science to Wage War Again?

  • Thread starter: Rader
  • Tags: Science
In summary, the conversation discusses the potential consequences of developing advanced technology, specifically AI: its capacity to surpass human intelligence and potentially to destroy humanity. It also raises the ethical question of whether the development of AI is worth sacrificing human life. Ultimately, the conversation highlights the importance of valuing and preserving human life, even in the face of technological advancement.
  • #1
Rader
"Should Science wage war again"

It was known in 1939 that German physicists were close to discovering the secret of sustaining a nuclear chain reaction. The consequences of this were understood by a Hungarian physicist who had previously emigrated to the USA. His cooperation with Einstein to send a letter to President Roosevelt changed the course of human history. If the Nazis had developed an H bomb before the USA, history books would have been written very differently. It turned out that the USA did not need the H bomb to defeat Germany in WW II, as the first bomb was not dropped until Hiroshima, some three months later. To this day, developing that H bomb, dropping it, and showing the world the terrible mass destruction it could cause has deterred nations of many distinct ideologies from detonating one again, for fear of the devastation it would bring. Or was it only because one nation had more H bombs?

We are faced with a similar, even more menacing threat of human annihilation. AI technology will supersede human intelligence in a matter of decades. This technology, in the hands of whatever controls it, humans or the machines themselves, sets the clock back to 1939 again.

So the question is: should a nation that values human rights above individual or state rights again develop a technology which could totally destroy all human life?

It would be interesting to know your opinions.
 
  • #2
-So the question is: should a nation that values human rights above individual or state rights again develop a technology which could totally destroy all human life?

Specifically, in terms of your question - no, no nation is 'forced' to build such technology, nor 'should' it. The same can be said of the opposite - there is nothing that says they 'shouldn't' develop such technology (if true AI can really be built).

However, what is technology for? It's for improving our survival, our freedoms - our quality of life. If such goals are not incorporated into the development of AI, then it is a mistake. Developing AI for 'science' or for 'knowledge' is equally idiotic, and eventually highly destructive.

-AI technology will supersede human intelligence in a matter of decades.

Where, pray tell, did you get this assumed 'fact'? Not from movies, I hope.
 
  • #3
The bombs that were dropped on Hiroshima and Nagasaki were ordinary fission bombs, not H bombs.

I believe that the human race has no long-term hope and that AI does. If we could create a fully intelligent, viable AI without any defects, this would be a good thing even if it eventually killed everyone. An AI has the potential to explore space, to understand more things better, and to live longer, both individually and as a race, than any human or group of humans. Moreover, a truly intelligent AI would not kill everyone; humans are too interesting to be simply discarded by any rational being.
 
  • #4
-I believe that the human race has no long-term hope and that AI does.

-An AI has the potential to explore space, to understand more things better, and to live longer, both individually and as a race, than any human or group of humans.

Why would understanding space and other things be important to humans if they have no 'long-term hope'? Knowledge by itself is worthless. The reason these apocalyptic technologies exist is this pursuit of knowledge or science 'for its own sake' - both worthless objectives by themselves.

-Moreover, a truly intelligent AI would not kill everyone; humans are too interesting to be simply discarded by any rational being.

Assuming the eventual existence of killer robots, it's very possible that they either fail to become intelligent before they destroy us, or never become 'rational' enough to spare us from extinction.
 
  • #5
Humans _do_ have no long-term hope. Space colonization is not practical for humans. The world will wind down with us on it, and that's the end. Catapulting computer chips and little robots through space, though, is relatively simple; AI can colonize space.

Purpose is not survival; it is the health of systems. A vibrant, super-intelligent AI could surpass the vitality--and thus the worth--of humanity.

Don't you see that it doesn't matter whether the future inhabitants of the Earth share your genes or have any genes at all, so long as they are intelligent and vital people?


It would indeed be a tragedy if the human race were destroyed by a semi-intelligent military AI without the capacity to develop further.
 
  • #6
Bartholomew said:
Humans _do_ have no long-term hope. Space colonization is not practical for humans. The world will wind down with us on it, and that's the end. Catapulting computer chips and little robots through space, though, is relatively simple; AI can colonize space.

Purpose is not survival; it is the health of systems. A vibrant, super-intelligent AI could surpass the vitality--and thus the worth--of humanity.

Don't you see that it doesn't matter whether the future inhabitants of the Earth share your genes or have any genes at all, so long as they are intelligent and vital people?

Actually, it is not the case that I "would see" it as necessary that the future inhabitants be 'intelligent' and 'vital' if that meant abandoning or sacrificing either myself or the human species. To abandon the affirmation and improvement (and thus the continuation) of our lives simply to make way for the existence of a technologically superior (more powerful) species would be tantamount to decadence.
Indeed, I could see us laboring for the AI to improve them, since they presumably can do a lot more, but not with the kind of quasi-suicidal tendency that I think you are implying. Anyone who values life will never hold AI to be worth more than humanity.
 
  • #7
Your valuing of biological life over AI life is totally arbitrary. There is no important difference between an intelligent man and an equally intelligent machine.

What if a strange deity cast a spell over the Earth so that all infants conceived from here on are born to look and act exactly like normal children, with the exception that they are in fact machines, with immortal lifespans and very high intelligence? They would not be susceptible to disease or other ailments and they would work constructively just like humans, only better; the only difference between them and very hardy humans is that they (the machine-people) are made of non-biological substances.

In short, what if your children were perfect citizens, and also machines?

What would be any drawback to that scenario?

So don't speak of "abandoning the human species." Loyalty to something or other merely because of what it is made of (flesh and blood as opposed to steel and silicon) is arbitrary and irrational.
 
  • #8
Your valuing of biological life over AI life is totally arbitrary. There is no important difference between an intelligent man and an equally intelligent machine.
I don't value biological life over AI; I value my life over all other life. And I agree, there is no important difference between a man and an AI that both have the ability to reason and freedom of choice. However, their being AI does not guarantee their greatness over humans - that still has to be proven.

What if a strange deity cast a spell over the Earth so that all infants conceived from here on are born to look and act exactly like normal children, with the exception that they are in fact machines, with immortal lifespans and very high intelligence? They would not be susceptible to disease or other ailments and they would work constructively just like humans, only better; the only difference between them and very hardy humans is that they (the machine-people) are made of non-biological substances.
And what about this strange deity? He seems to be more powerful than all of the AI combined. Shouldn't we sacrifice instead to make more such deities? He is the superior life, after all.

*****
Extra:
Indeed, I could see us laboring for any 'intelligent' AI to improve them, since they presumably can do a lot more and would be greater than humans in general, but not with the kind of quasi-suicidal tendency that I think you are implying. Anyone who values life will never hold AI to be worth more than humanity (I should have stated this earlier in a less racial sense).
Thus:
-This is true in the sense that anyone who values life will never value AI > human out of a simple RACE or STRENGTH/SURVIVABILITY comparison - it will take more than that (i.e. imagination, choice, intelligence, development potential). A simple look at Nazi ideology and World War 2 will show you just how dangerous your ideas can be if they are interpreted exactly as you have stated them.

-The potential of AI is not founded on anything yet. We only know that they are physically superior (computing, lifting, moving, etc.), but they have not been shown or proven to be superior mentally - imagination, free thought, etc. It is true that, so far, robotic components have been shown to be physically superior to biological components. But what about computer viruses? Problems in coding? Overheating? Electrical failures? Short circuits? Are they free to choose and think - do they really have an imagination?

-Their show of intelligence has been nothing more than a show of speed and a blind ability to obey (none have shown the ability to command) - nothing of this superior AI intelligence has yet been proven.

-Later on, if AI prove to be 'superior', we could labor/sacrifice for the development of AI as a way of preserving our legacy (they are our children, after all), and THAT would be improving the quality of what humans call 'intelligent life' and its vitality - not simply because the end goal is supposedly a 'perfect AI' that we should pursue. Perhaps humans will later evolve into something so powerful that they become even greater than the AI? Unlikely, but at this moment there is no reason to say that we must eventually abandon the survival of the human race.
 
Last edited:
  • #9
Thus, it is my opinion that at this moment it is meaningless and worthless to build AI simply because they can colonize space and learn more about spatial phenomena (knowledge and science by themselves are worthless). We need to see an improvement in our quality of life, and build AI to try to help ourselves out - not simply study for the sake of improving science and knowledge.

It is true, we could try to usher in an era of 'superior' robots (although the existence of such an end goal is still under dispute at the moment). But who is to say that building AI will prove superior to, say, creating new biologically potent beings? As we said before, there is no real difference - both are made of atoms and energy. Perhaps an evolved super-human, or other biological or cybernetic beings with resistance to viruses, will prove to be more effective, since they are more adaptive and less rigid than AI in physical form (and perhaps in intelligence as well).

At this moment in time, we should be asking how AI could improve our quality of life - not about its 'eventual' succession of the human race. It is odd that humans think it is OK if they go extinct, yet are so unwilling to let animals or plants go extinct for fear that the Earth would die out.
 
Last edited:
  • #10
Bartholomew said:
humans are too interesting to be simply discarded by any rational being.

What does "interesting" have to do with reason?
 
  • #11
Certainly, at the moment AI is only a tool to help humans. It would take a true breakthrough to make a real AI. But our brains follow the laws of physics just as computers do; there is no fundamental difference.

It is possible that biological computers will always be better than non-biological computers, simply because of processing speed. No computer performs as many computations as a human brain. As for adaptability, though, an AI has the potential to surpass humans and other biological entities enormously. Already the human mind is limited and one-track compared to the variety of things a simple desktop computer can do. Humans only think, however adaptable and useful this thought may be; even current computers can do unimaginably diverse things.

Genetic design may produce a more powerful computer than the human brain, but you don't necessarily need humans to make that design. An AI might do the same thing; it could make AI II.

All this is under the postulate that a truly reasoning AI can be built. If this can't happen, perhaps because of limited computing speed, then none of these things I am saying about the potential superiority of silicon applies. But I believe a reasoning AI _can_ be built.

Dan, rational beings are always interested in complex and quixotic things. You can't be intelligent if you don't have any interest in the unusual and significant. I believe it's not possible to design a truly intelligent AI which does not share this interest.
 
  • #12
It is possible that biological computers will always be better than non-biological computers, simply because of processing speed. No computer performs as many computations as a human brain. As for adaptability, though, an AI has the potential to surpass humans and other biological entities enormously. Already the human mind is limited and one-track compared to the variety of things a simple desktop computer can do. Humans only think, however adaptable and useful this thought may be; even current computers can do unimaginably diverse things.

I think you got the comparison reversed, Bartholomew.

Biological computers and non-biological computers are each great within their own bailiwick. They can't be compared head to head, simply because they are constructed and developed in vastly different ways.

Non-bio computers simply process a lot of 1s and 0s, and they do it very fast to make up for the shortfall and linearity of processing that their construction imposes.


Bio computers (our brains) can't come close to the processing speed of computers because of the medium they're made of. But we make up for the slower processing speed by having far more interconnections among neurons, and are thereby able to draw conclusions from seemingly unconnected thoughts or bits of data.

By this comparison, unless an AI is constructed with interconnections as complex as our brain's (literally billions of interconnections), it is unlikely to achieve parity with a biological brain.

Besides, if an AI got as complex as our brains, wouldn't it have learned from us and therefore be a human intelligence (albeit a constructed one)? Could it not be taught an appreciation of diversity and life?

 
  • #13
No, the brain is far faster than the fastest silicon computer. Each individual neuron is much slower, but overall the processing power is much greater because, as you say, there are many more connections and also many more neurons. Also, each individual neuron performs more computation than a single logic gate, mitigating its speed disadvantage.
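
(To give a rough sense of this scale comparison, here is a back-of-envelope sketch in Python. Every figure in it is a commonly cited ballpark assumption rather than a measurement, and the neuron and synapse counts in particular are only order-of-magnitude guesses.)

```python
# Back-of-envelope comparison of raw operation rates: brain vs. silicon.
# All figures below are rough, commonly cited assumptions, not measurements.

NEURONS = 1e11             # assumed number of neurons in a human brain
SYNAPSES_PER_NEURON = 1e3  # assumed average synapses per neuron
FIRING_RATE_HZ = 100       # assumed average firing rate, events per second

# Treat each synaptic event as one "operation".
brain_ops_per_sec = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ

# A desktop CPU of this era: roughly a billion simple operations per second.
cpu_ops_per_sec = 1e9

print(f"Brain: ~{brain_ops_per_sec:.0e} synaptic events/s")
print(f"CPU:   ~{cpu_ops_per_sec:.0e} ops/s")
print(f"Ratio: ~{brain_ops_per_sec / cpu_ops_per_sec:.0e}x in the brain's favor")
```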

Versatility is heavily in favor of the computer. For a human to learn a new thing, he must spend hours, sometimes longer, doing so. For a computer to learn a new thing, it simply loads a program. The capability of computers to alter their memories and functionality far exceeds the human capability to do the same. Think of all the multifarious programs stored on just your computer at this moment--operating system(s), word processors, internet connection managers, other system software, games, hard drive browsers, internet browsers, command prompts, device drivers--who knows what else. There must be hundreds of thousands of commercial programs in existence, and any compatible computer can execute any of them.
 
  • #14
Yep. You got me there on the second part. I agree with you on the first part; I think we were saying the same thing (mostly) but from different perspectives. I was thinking of neuron speed vs. electron flow.
 
  • #15
I only disagree with the part about learning faster. In some ways - especially in non-mathematical, less well-defined domains like pattern recognition - humans learn things a lot faster than the computer.
 
  • #16
Pattern recognition is a skill--just part of our genetic programming. If an equivalent pattern recognition program were made for the computer, it could learn it in an instant.

So the current advantages of humans are more speed and better existing programs for certain things (e.g. pattern recognition). With a true AI, the human programs would no longer be better than the computer programs, and then the only advantage of humans would be speed, matched against the computer's versatility.
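
(As a toy illustration of pattern recognition packaged as a loadable program, here is a minimal nearest-neighbour classifier sketch in Python. The feature vectors and labels are invented for the example, and no claim is made that this toy approaches human pattern recognition.)

```python
# Minimal nearest-neighbour "pattern recognition" sketch.
# A toy: the skill is packaged as an ordinary, loadable program.

def distance(a, b):
    """Squared Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(pattern, examples):
    """Return the label of the stored example nearest to `pattern`."""
    nearest = min(examples, key=lambda ex: distance(pattern, ex[0]))
    return nearest[1]

# Stored examples: (feature vector, label) pairs -- hypothetical toy data.
examples = [
    ((0.0, 0.0), "dot"),
    ((1.0, 1.0), "diagonal"),
    ((0.0, 1.0), "vertical"),
]

print(classify((0.1, 0.9), examples))  # prints: vertical
```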
 
  • #17
Bartholomew said:
Pattern recognition is a skill--just part of our genetic programming. If an equivalent pattern recognition program were made for the computer, it could learn it in an instant.


Then if humans were put through genetic engineering to speed up neurons and computing speed, as well as pattern recognition...


As I said before, it's worthless to speculate about what AI or humans will have in the future (unless it is already almost complete). We must only look at what AI is now - not the fantastical future of what could happen.
 
  • #18
I think that AI now is very close. All it needs is the right few insights. Genetic engineering of intelligence will not happen to any significant extent in the next hundred years.
 
  • #19
Bartholomew said:
Genetic engineering of intelligence will not happen to any significant extent in the next hundred years.

Yet in 1900 it was said that people couldn't fly. Less than 70 years later, we went to the moon. I would say that the impossible is never truly that.

Also, Bartholomew, I think that you underestimate the human race. We have survived much. Yet you say we have no chance?!

We will continue to evolve. We still are evolving. So do not discount us just because you cannot see what is in front of your nose. A human's goal is always to survive. Don't discount us till we are gone.
 
Last edited:
  • #20
It's not impossible, but it's tremendously unlikely. Remember that very few people actually went to the moon. In terms of results it was not a significant event. Significant genetic engineering of intelligence would need to involve hundreds of thousands of people... it's just not going to happen even if the techniques were there, which they aren't, because intelligence is genetically complicated.
 
  • #21
Bartholomew said:
It's not impossible, but it's tremendously unlikely. Remember that very few people actually went to the moon. In terms of results it was not a significant event.
The reason few people went to the moon is that there is not much purpose (for the cost) in going there any longer. The next thing the moon would be useful for is terraforming (if that). What are 100,000 people going to do on the moon right now? Collect rocks and play some 30-foot-high basketball?

Bartholomew said:
Significant genetic engineering of intelligence would need to involve hundreds of thousands of people... it's just not going to happen even if the techniques were there, which they aren't, because intelligence is genetically complicated.
Even though it has been estimated that within five years they will have mapped all of the genes of the human body - with human cloning, genetic engineering for smarter kids, etc. to follow...

I don't wish to insult you, but you seem to put more faith in a fantastic, idealized AI than you do in people.
 
  • #22
Humans will not understand the genes for intelligence in 50 years. Whoever estimated 5 years for that either doesn't know what he's talking about or has been misquoted by you. AI, on the other hand, is fantastic but very possible. Genetic engineering of intelligence now is in the state that AI was in in the 1960s. There are big dreams but nobody really knows how to go about them. AI on the other hand is a much more mature science.

I believe that there are a few fundamental algorithms of intelligence. I believe they are not complicated to produce, and that current computers are powerful and flexible enough to do a lot with these algorithms once they are discovered.
 
  • #23
Bartholomew said:
Humans will not understand the genes for intelligence in 50 years. Whoever estimated 5 years for that either doesn't know what he's talking about or has been misquoted by you

I would really be interested in your reasons for rejecting a genetic understanding of the heritable component of IQ within 50 years. Recall that 50 years ago the DNA helix had just been announced. Look at the progress since then! We are now getting beyond the protein-coding genes and into the regulatory mechanisms of the genome. I think some progress on intelligence in 10 years and usable knowledge by 50 is a prudent forecast.
 
  • #24
Bartholomew said:
Humans will not understand the genes for intelligence in 50 years. Whoever estimated 5 years for that either doesn't know what he's talking about or has been misquoted by you. AI, on the other hand, is fantastic but very possible. Genetic engineering of intelligence now is in the state that AI was in in the 1960s. There are big dreams but nobody really knows how to go about them. AI on the other hand is a much more mature science.

While cybernetic systems are indeed much more mature in complexity, speed, and invested research time and money, AI has not been achieved at all - only fakes and robots that simulate intelligent behavior.

On the other hand, the human genome project is pretty much complete.
http://news.bbc.co.uk/2/hi/science/nature/2940601.stm

And indeed, the scientist here claims that making smarter humans may never happen, yet they have already made 'smarter' mice.
http://news.bbc.co.uk/1/hi/sci/tech/435816.stm

Bartholomew said:
I believe that there are a few fundamental algorithms of intelligence. I believe they are not complicated to produce, and that current computers are powerful and flexible enough to do a lot with these algorithms once they are discovered.
But that belief doesn't increase the possibility of AI, or even the possibility of intelligence being only a few fundamental algorithms.

In fact, I would expect the advent of cyborgs (cybernetics + genetic engineering) to come ahead of AI and the genetic engineering of 'intelligence'. Cybernetics could much sooner help boost people's brain function speeds, while genetic engineering could boost people's health and vitality.

Again, I think that is the case not only because it would be easier than AI and the genetic engineering of intelligence, but also because it is the most obvious purpose for such technology - i.e., to help us humans. Most of us don't yet see ourselves as slaves laboring for the coming of "true AI" - we work for ourselves, for now.
 
  • #25
Rader said:
... It turned out that the USA did not need the H bomb to defeat Germany in WW II, as the first bomb was not dropped until Hiroshima, some three months later. To this day, developing that H bomb, dropping it, and showing the world the terrible mass destruction it could cause has deterred nations of many distinct ideologies from detonating one again, for fear of the devastation it would bring...


I have a nit to pick with the way you put that. I don't think "needing" to use the A-bomb (not H-bomb) had anything to do with it. "Shortening the war" has been cited as a major reason by many apologists in defending the actions against the Japanese. America could just as easily have used the Bomb against the Nazis and spouted the same excuse. But they didn't.

No, it's fairly obvious that there were other unspoken and far uglier motives at play in deciding that Japan was going to have the dubious honor of being the first nuke target. Racism almost certainly had a lot to do with it. Plus the fact that many of the hawks in the American war machine were champing at the bit to try out this new death machine on some human target. In hindsight, the Japanese were very close to being brought to their knees even before the atomic bombs were deployed. There was no good justification for actually having gone ahead with the bombings at that point; in fact, the major driving force seems to have been, "we won't get a golden opportunity like this again, let's just do it". A macabre experiment in human suffering.

This seems to have been the same sort of thinking that went into the current invasion of Iraq - even though Saddam was actually cooperating with UN inspections, even though there was no actual WMD/terrorist threat, "we won't get another chance again, so let's go in and get us some payback". Tragic that the mindset seems to have changed not a bit.
 
  • #26
Curious3141 said:
I have a nit to pick with the way you put that. I don't think "needing" to use the A-bomb (not H-bomb) had anything to do with it. "Shortening the war" has been cited as a major reason by many apologists in defending the actions against the Japanese. America could just as easily have used the Bomb against the Nazis and spouted the same excuse. But they didn't.

That's because they couldn't. The war in Europe was already over (May 8, 1945). The first test occurred in July 1945, and they dropped the first bomb on Japan in August 1945.

Thanks for bringing race to the discussion though, it really helps clarify the positions of science and AI.
 
Last edited:
  • #27
GeD said:
That's because they couldn't. The war in Europe was already over (May 8, 1945). The first test occurred in July 1945, and they dropped the first bomb on Japan in August 1945.

Thanks for bringing race to the discussion though, it really helps clarify the positions of science and AI.

You are misinformed, perhaps wilfully.

Read this : http://americanhistory.about.com/library/timelines/bltimelinemanhat.htm

Note that on May 5th, 1943, Japan was selected as the only live target for nuclear bomb deployment.

The war in Europe took another two years to end. I'm wondering why Germany wasn't even considered for a live "test"?

Are you going to argue a personal resentment brought on by Pearl Harbor? Because war is war. Yet PH was a legitimate military target.

And I do strongly feel that race was a factor in earmarking Japan as the first target from an early date (1943). If you want to dispute this, come up with undeniable strategic reasons to exempt Germany from the horror of the Bomb.

I fully accept that the wartime Japanese were very far from perfect. They committed countless inhumane atrocities on the Chinese populations they invaded. Yet this does not excuse the actions of the US, which are often hypocritical and occasionally horrific.
 
  • #28
Curious, GeD - drop this for now. This is one of those discussions that is going nowhere and is way off topic. It does not add anything to the discussion.
 
  • #29
That was my point in the first reply. The added discussion of race was worthless to the topic of whether science should pursue AI.


I fully accept that the wartime Japanese were very far from perfect. They committed countless inhumane atrocities on the Chinese populations they invaded. Yet this does not excuse the actions of the US, which are often hypocritical and occasionally horrific.
Just like most countries... highly hypocritical and occasionally horrific. I'm not here to defend the US's likely bad (or absent) reasoning for using the atomic bombs. It's simply a retort against your need to inject race into this topic, because you feel the need to inflate racial issues and push an anti-US agenda.
 
Last edited:
  • #30
GeD said:
That was my point in the first reply. The added discussion of race was worthless to the topic of whether science should pursue AI.

The post that created the thread brought up the A-bomb against the Japanese. I addressed that issue as an example of science waging war against humanity (not just against other humans, but against the essence of humaneness itself). I feel race was a pertinent issue when the target was being chosen. What did you expect - that only high-minded, noble ideals decide which country another wages war against? War is a shameful and disgusting act; why should anyone be surprised that base motivations are often behind it?


Just like most countries... highly hypocritical and occasionally horrific. I'm not here to defend the US's likely bad (or absent) reasoning for using the atomic bombs. It's simply a retort against your need to inject race into this topic, because you feel the need to inflate racial issues and push an anti-US agenda.

No, I do NOT have an anti-US agenda. But I'm not going to azz-kiss the US either; where they have done grievous harm, I'll call them on it. Just as with any other country.

Since others want us to stop discussing this, this will be my last post on this subject.
 
  • #31
Curious3141 said:
The post that created the thread brought up the A-bomb against the Japanese. I addressed that issue as an example of science waging war against humanity (not just against other humans, but against the essence of humaneness itself). I feel race was a pertinent issue when the target was being chosen. What did you expect - that only high-minded, noble ideals decide which country another wages war against? War is a shameful and disgusting act; why should anyone be surprised that base motivations are often behind it?

I agree that race played a part in the war between Japan and the US. If the Japanese high command had not been racially contemptuous of the Americans, they would never have ordered Yamamoto to attack Pearl Harbor!
 
  • #32
Curious3141 said:
No, it's fairly obvious that there were other unspoken and far uglier motives at play in deciding that Japan was going to have the dubious honor of being the first nuke target.

You think it might possibly have had something to do with the fact that the US would have been invading the Japanese islands alone had they gone that route? They had a large international coalition assembled for D-Day, and look at the losses they took there. The losses from an invasion of Japan would have been many times worse. It would have taken four or five D-Days just to get to Japan through all of the other Pacific islands they controlled.

In hindsight, the Japanese were very close to being brought to their knees even before the atomic bombs were deployed.

What exactly makes you think that? It took them nearly a week after the second A-bomb was dropped to finally surrender. Even then, Hirohito had to personally intervene, as the generals were ready to honor the Japanese code of death before surrender.

This seems to have been the same sort of thinking that went into the current invasion of Iraq - even though Saddam was actually cooperating with UN inspections.

Saddam was cooperating? Are there any additional historical inaccuracies you'd like to throw out there just for the fun of it?
 
  • #33
I think we wandered a little off topic when we started specifically concentrating on AI, and now the A-bomb. The A-bomb was brought up as an example of technology used as a weapon, to support the AI-as-a-weapon idea and to pose the question: should a nation that values human rights above individual or state rights again develop a technology which could totally destroy all human life? It would be interesting to know your opinions.

I got caught up in the 'how AI could be developed and how it works' portion of the thread myself.
But to answer Rader's question, I'd say that in my opinion scientific research shouldn't be avoided just because "somebody" might use it improperly. History is replete with examples of a scientific discovery diverging toward both a beneficial and a destructive technological path. Just because a new discovery is made doesn't necessarily mean it'll develop into the next WMD.
 
  • #34
Rader said:
It was known in 1939 that German physicists were close to discovering the secret of sustaining a nuclear chain reaction. The consequences of this were understood by a Hungarian physicist who had previously emigrated to the USA. His cooperation with Einstein to send a letter to President Roosevelt changed the course of human history. If the Nazis had developed an H bomb before the USA, history books would have been written very differently. It turned out that the USA did not need the H bomb to defeat Germany in WW II, as the first bomb was not dropped until Hiroshima, some three months later. To this day, developing that H bomb, dropping it, and showing the world the terrible mass destruction it could cause has deterred nations of many distinct ideologies from detonating one again, for fear of the devastation it would bring. Or was it only because one nation had more H bombs?

We are faced with a similar, even more menacing threat of human annihilation. AI technology will supersede human intelligence in a matter of decades. This technology, in the hands of whatever controls it, humans or the machines themselves, sets the clock back to 1939 again.

So the question is: should a nation that values human rights above individual or state rights again develop a technology which could totally destroy all human life?

It would be interesting to know your opinions.

Science has never waged any war. It is only human beings that wage war. Scientists who voluntarily offer their scientific knowledge for war-making are the ones you need to ask why they do it. Why do they get involved? As for scientists who are forced into war-making by illiterate populations: some people would argue that they should act bravely and refuse to offer their knowledge for warmongering, even at the risk of being killed, while others would argue that they have the right not to take that risk, and may therefore offer their knowledge under duress.

If you ask people 'why do people fight wars?', they will pour out as many flimsy reasons as one is prepared to wait and listen to. The problem of war-fighting has now reached a point where the army, especially the Western armies, is the single institution in our societies with privileged access to every new technology; 99.99% of the time, it gains access to a particular technology many years before it reaches the other mainstream institutions of the same society. So, scientifically and technologically, the army is always ahead of the rest of society.

At the moment, everyone the world over naively thinks and believes that 'MAN IS HIS OWN WORST ENEMY'. I had a debate about this elsewhere on this forum, when someone actually claimed that this is really the case. Wrong! As I have argued elsewhere, MAN'S WORST ENEMY IS THE UNKNOWN. Hence the reason all the armies in the world need to be retrained and wholly refocused on using all the available science and technology to INTELLECTUALLY AND MATERIALLY FIGHT MAN'S GREATEST ENEMY: THE UNKNOWN.

For it is fundamentally stupid for us to be fighting and murdering each other at the naive human level while, beneath the surface of things, Nature is concocting and brewing mayhem and catastrophes of Armageddon-like scale.
 
Last edited:
  • #35
Although I would agree that the FEAR of the unknown is a major ailment of the human race, your "call to war" against the unknown is flawed. There's a reason it's called the unknown - you can't specifically prepare for it. You can only adapt to whatever happens.

The important thing right now is for people to stop destroying themselves in useless conflicts (although I don't mean to say that all conflicts are useless, just some), and to adapt to the unknown future.
 
