Stephen Hawking is perhaps the world’s best-known popular scientist today. Not only is he brilliant, but people also admire him for having accomplished so much despite his severe disability. His statements are widely read whenever new discoveries are made in physics, astrophysics, and astronomy. The question is how interested people are in listening to his warnings when it comes to AI. Some might be, but I believe very few have a distinct opinion about it: they think they know too little, they have other things to attend to, and they count on the Government to take care of it, believing the Government works only in the best interest of the people. Whether or not I am correct in suggesting that it is guilt that motivates Hawking to come forward in a big way, he has made his position clear on many occasions. In the same article in which Wozniak, Musk, and Gates are mentioned (above), Hawking lines up with them:
… physicist Stephen Hawking has warned that AI could eventually "take off on its own." It's a scenario that doesn't bode well for our future as a species: "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded," he said. [Ibid.]
In an Ask Me Anything session on Reddit, Prof. Hawking replied to a question about robots becoming violent toward humans:
“The real risk with AI isn't malice but competence,” Professor Hawking said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.
“You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants.” (Independent.co.uk, Oct. 8, 2015: “Stephen Hawking: Artificial intelligence could wipe out humanity when it gets too clever as humans will be like ants”).
What bothers me about Hawking’s comments is that he is, for some strange reason, not taking the Super Brain Computer (SBC) into consideration. It’s well known that the SBC is a real project, built to connect humanity to the new virtual reality. Obviously, Prof. Hawking must know about this.
Moreover, Prof. Hawking warns that if an AI becomes more intelligent than humans, it will also be able to enhance its own intelligence as it pleases, and the difference in intelligence between such an AI and a human will be greater than that between a human and a snail. [Ibid.]
Hawking still believes that we can create benevolent AI if we are careful about what our goals are. He is afraid of the undirected AI under development today and wants us to create only beneficial AI instead. He wants us to start doing that today rather than tomorrow, before it’s too late and computers have become too clever for us humans to handle. What Hawking fails to address is that even if we set a goal to create only beneficial AI, power-hungry psychopaths could quickly turn it into something much more malevolent; they would have no problem infiltrating the entire project and taking it over. Again, Hawking is a smart man; he must know Human Power Hunger 101. He may not know the rest of the story about the ET connection, but what he is suggesting is quite naïve for someone with such an academic background.
Instead, it seems as if the famous scientist, wittingly or unwittingly, is playing his role in the agenda: on the one hand warning people about the downsides of AI, and on the other promoting the idea that we must get into space as fast as science permits. In doing so, he focuses people on the notion that we need to colonize space as quickly as we can in order to save and expand our species, which is exactly what Dr. Kurzweil promotes. And this is a key aspect of the agenda of the AIF (Alien Invader Force).