- #1
Zantra
I didn't find this exact topic, so here we are!
Let me begin by saying that the singularity is inevitable: it's not a question of "if", it's "how soon".
Elsewhere on this board it was mentioned that we're far from autonomous driving. Assuming we mean Level 5 autonomy, that means millions of hours of road testing. Fortunately, every Tesla on the road is gathering data, and once the Model 3 hits full production those hours will grow dramatically. Ten years is not unrealistic. Mass acceptance of the tech is another story.
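Back-of-envelope, the "millions of hours" figure isn't far-fetched. All the numbers below are made-up assumptions for illustration, not actual Tesla fleet figures:

```python
# Rough estimate of fleet data collection per year.
# Every number here is an illustrative assumption, not a real figure.
cars = 250_000          # assumed fleet size before the Model 3 ramps up
hours_per_day = 1.0     # assumed average driving per car per day
days_per_year = 365

fleet_hours_per_year = cars * hours_per_day * days_per_year
print(f"{fleet_hours_per_year:,.0f} fleet-hours per year")
```

Even with these conservative guesses, a single year of fleet driving dwarfs what any test program with human safety drivers could log.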
A.I. is in its infancy. Siri can predict my music taste, but a decision tree and some NLP do not a Turing test make. We're a long way from Asimov's future. Or are we? Quantum computing may help take it to the next level. We may not be able to teach it empathy, but if it can think a million times faster than us, we'll have bigger problems.
I'm most interested in the implications of building something greater than ourselves: something that will view us first as children, then as ants, and finally as an obstacle. A mind a million times smarter than us would have no use for things like empathy and compassion, and would erase those subroutines post-haste.
I'm not saying it happens immediately, but as soon as we build machines capable of improving themselves, the process becomes exponential.
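The "becomes exponential" claim can be sketched as a toy model: if each round of self-improvement multiplies capability by a factor that itself grows with capability, growth is actually faster than exponential. This is a pure illustration with made-up parameters, not a prediction:

```python
# Toy model of recursive self-improvement. A system at "level" L improves
# itself by a factor of rate**L, so smarter systems improve faster.
# All parameters are arbitrary; this only illustrates the growth shape.
def takeoff(rate=1.1, steps=30, cap=1_000.0):
    level = 1.0                 # start at "human level" = 1.0
    history = [level]
    for _ in range(steps):
        level *= rate ** level  # the multiplier grows with the level itself
        history.append(level)
        if level > cap:         # stop once the numbers stop being meaningful
            break
    return history

h = takeoff()
print(f"blew past the cap after {len(h) - 1} rounds: {h[-1]:,.0f}")
```

Early rounds look slow and linear; then the curve bends and the level explodes within a handful of iterations, which is exactly the intuition behind a "hard takeoff".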
So how do we come up with a way to control something like that? It would be like ants trying to walk a dog.