From Elon Musk to Stephen Hawking, many distinguished figures have voiced concerns about the rapid emergence of Artificial Intelligence.
In fact, Musk has even gone so far as to claim there is only a “five to 10 percent chance of success [of making AI safe],” and has advised companies developing AI to slow down.
On the other hand, figures like Mark Zuckerberg have been fairly positive about AI's overall progress, hopeful that it will help eliminate poverty, among other benefits.
Which group is right?
Honestly, a rather boring answer is “both”.
Yes, AI can be a great asset to all of humanity. Think of the things we simply can't do ourselves: AI could take them on, and handle them far more quickly and efficiently. It could solve problems we never could. That promise alone is a compelling reason to support AI development.
However, amid all this optimism, we shouldn't overlook the risks. The first is what happens if this advanced technology ends up in the wrong hands, such as terrorists'. Imagine the destruction it could cause. We already live in constant fear of terrorist attacks; if bad actors acquired such advanced technology, it would be a serious setback for global security.
Now, coming to your exact question: yes, there is a possibility of AI-powered robots taking over the world. If we continue to develop AI thoughtlessly, with little caution and no plan B, we do live in danger. But if that ever happens, it is likely a century or more away. Whether the world will even survive that long is itself an open question, given global warming and the lack of collective effort by developed nations against it. So yes, there is a chance of AI-powered robots taking over, but it is so far in the future that it won't be happening anytime soon.
Likewise, an AI can't influence anything in the real world unless it is connected to peripheral devices. It can't connect itself to anything, since it has no hands; it is just a program running inside a box.
In addition, unless it is explicitly programmed to subjugate humankind (with explicit instructions on what that means and how to do it), it won't. Computer programs, no matter how intelligent, have no motives they are not given. They have no emotions, and therefore they have no desires.
I want to expand on part of this, since it clearly stirs up a great deal of emotion in people. I understand that we all want magic in the world, and life and consciousness seem magical. We know life once arose from non-life, but people need to understand that this process took an astonishingly long time, and was the result of a rare chance event. Moreover, consciousness is an emergent property of the way our brains work, which is extraordinarily complex and built on interacting biological systems.
Just because we WANT computer programs to be able to develop consciousness doesn't mean they can. All those brilliant stories, fears and hopes alike, are fantasies. Great fantasies, comforting ones, but fantasies.
An AI can no more develop genuine consciousness and its own motivations than you can spontaneously grow feathers and wings and fly away. The foundations required for that are missing, and there is no known process by which they could be introduced, nor any motive to do so.
I'm sorry, I truly am. Reality will remain mysterious in other ways, but not in this one.