It is certainly not only Prof. Hawking who is speaking out against AI. Virtually everything that has to do with AI is extremely dangerous to our species, and if it is allowed to progress, it will wipe out humanity as we know it. This is not a science-fiction scenario: it is inevitable. We are talking about Overlord technology, and their technology serves their purposes, not ours.
Multimedia 8-1: Scientists speak to the UN about dangers of Artificial Super Intelligence (ASI).
In 2015, there was a UN meeting attended by Anti-Singularitists, including MIT physicist Max Tegmark and the founder of Oxford’s Future of Humanity Institute, Nick Bostrom. They talked in depth about the possible dangers of Artificial Super Intelligence [L J Vanier, Oct. 27, 2015, “Dangers of Artificial Super Intelligence”]. They postulated that in the beginning, mankind could benefit from these new technologies, but that in the long term, AI would become an uncontrollable machine whose actions no one on this planet could anticipate.
Although prominent voices are being raised against AI and the Singularity, they still have little to no bearing on the final decision as to whether the AI project should continue. The ball is rolling fast, and it can’t be stopped unless enough people refuse to cooperate by not buying any of the smart products on the market, whatever these smart products might be in the near future. In addition, most people on this planet have nanobots in their bloodstream because of chemtrails, vaccines, medications, and other sources, and these can be activated at any time. In order to resist this, we must have the knowledge, inner strength, and high consciousness necessary to keep these nanobots from activating. It can be done, but it requires a focused person with high integrity and awareness.
Another outspoken critic of AI is Apple’s co-founder, Steve Wozniak, who says,
"Computers are going to take over from humans, no question," he told the outlet. Recent technological advancements have convinced him that writer Raymond Kurzweil – who believes machine intelligence will surpass human intelligence within the next few decades – is onto something.
"Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people," he said. "If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently."
"Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know about that …" [Yahoo News, Mar. 23, 2015, “Steve Wozniak: The Future of AI Is ‘Scary and Very Bad for People’”]
It is interesting to note that Apple’s virtual assistant for the iPhone, Siri, uses Artificial Intelligence technology to anticipate users’ needs [Independent.co.uk, Oct. 8, 2015, “Stephen Hawking: Artificial intelligence could wipe out humanity when it gets too clever as humans will be like ants” (slide show)]. It seems as if Wozniak is speaking with a forked tongue. Apart from Hawking, Steve Wozniak is another person I would investigate. He is a Freemason, and his wife is a member of the female division of Freemasonry, the Order of the Eastern Star. Elon Musk would also be on my radar.
Elon Musk, the CEO of Tesla, and (as we already know) Bill Gates of Microsoft have also raised their voices against AI. Although Gates is supposedly still on the fence on this issue, Musk is perhaps the most outspoken antagonist of AI, though his motives might be questioned. He has called AI the “biggest existential threat to mankind,” and it’s hard to disagree with that. Although he is an AI antagonist, he is still an investor in DeepMind and Vicarious, two AI ventures. Why? He claims that,
“…it’s not from the standpoint of actually trying to make any investment return. I like to just keep an eye on what’s going on…nobody expects the Spanish Inquisition, but you have to be careful.” [Ibid.]
In a Reddit “Ask Me Anything” session, Bill Gates agreed with Musk:
"I agree with Elon Musk and some others on this and don't understand why some people are not concerned," he wrote. [Ibid.]
Fig. 8-1: Apple’s Steve Wozniak.
As I’ve mentioned before, and as Dr. Kurzweil also mentions in his books, lectures, and interviews, the Controllers want to hear both positive and negative voices on AI and the Singularity, and even though not many protesting voices are being raised by the public, there are many in academia and in science who speak out against it. Much of it is just a dog-and-pony show, but it still has some value, and people who are interested in finding out more about this can do so and at least take an individual standpoint. Remember that every individual’s standpoint on this is very important; the more people who make up their minds, the greater chance we have to stop this on a global scale.
Remember, as always, to scrutinize everybody in a higher societal position; even those who seem to be speaking our language. This also includes Prof. Stephen Hawking.
Next page: Stephen Hawking