We have all heard of the police arresting somebody who is drunk and has assaulted someone else. Where I work, I see people coming into the Emergency Room all the time after being assaulted; it has become too common for people to think much about it. Now, however, we need to get used to people being arrested for assaulting robots, too.
In September 2015, Independent.co.uk reported that a drunken 60-year-old man in Japan had assaulted a robot by kicking it. The man was actually angry at a staff member, but he lashed out instead at the store's robot, which could read emotions such as anger, joy, and irritation. The man was arrested for destroying someone else's property.
The Independent further reported that this particular robot can tell jokes, read facial expressions and tone of voice, and dance to entertain customers. [Drunk man kicks robot that can read your emotions]
This may sound curious and a bit odd, but it is just one of the first steps in a progression that will lead to robots achieving super-intelligence and legal standing in society.
What I found interesting, as a sideline to this story, is that the company that designed the kind of robot the drunk man attacked, owned by SoftBank, is called Aldebaran. Those readers who recall what I wrote in the Wes Penre Papers perhaps also recall that Aldebaran, a star system in the constellation of Taurus, the Bull, is one of the major outposts for Lord En.ki, the King of the gods.
The Movement does everything in its power to make us adjust to the new virtual reality, even in fashion design. The new trend the textile industry is manipulating people into liking is wearable technology as part of clothing design, making the wearer look more robot-like and futuristic. In fig. 10-4, we see an example showing the chip that makes the clothes blink and glimmer.
Figure 10-4: Robot fashion
Will people actually wear these kinds of clothes? Perhaps not the most extreme designs, but judging by history, young people will probably take some of it to heart.
In August 2015, Sputnik News announced that AI machines now match the IQ of four-year-olds. Obviously, the Controllers are progressing faster and faster as we close in on the Singularity. If we keep August 2015 as a benchmark, we will be able to see how long it takes before they release what they already have: fully functional AI robots that act and look just like humans, so that few would be able to tell the difference.
When we say that AI robots are as intelligent as four-year-olds, we must distinguish between this young AI and robots that can, for example, outsmart humans at algebra and chess. In the first case, we are talking about robots that also mimic human physical behavior, which is not (yet) the case with the chess-playing robots.
Most of us are already used to metallic robots that can be programmed to walk mechanically, lift products off a table, and put them elsewhere, but that is not really intelligence; it is programming. Intelligence, however, begins to develop when a robot can teach itself to do things without being specifically programmed by someone or something outside itself. Self-educating robots already exist and are now being introduced to the public. One example is a robot that is teaching itself to walk like a toddler: it takes its first steps, falls just as a toddler would, and then tries again and again until it can manage its first baby steps on its own. “Like a child’s brain, reinforcement technology invokes the trial-and-error process,” CNBC News reports [A robot teaching itself to walk like a human toddler]. As usual, all of this was tested by the military before being released to the public. This is no exception to the rule, because DARPA is evidently involved. [Ibid.]
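The trial-and-error loop behind this kind of reinforcement learning can be sketched in a few lines of code. The following toy example is purely illustrative (it is not any real robot's software): an agent must discover, by reward and punishment alone, that one of its two actions moves it forward. The reward values, exploration rate, and learning rate are all assumptions chosen for the sketch.

```python
import random

def train(episodes=500, alpha=0.1, epsilon=0.1, seed=0):
    """Toy trial-and-error learner: action 1 ("step forward") is secretly
    rewarded, action 0 ("step back") is penalized. The agent starts with
    no knowledge and learns purely from the rewards it receives."""
    rng = random.Random(seed)
    q = [0.0, 0.0]  # the agent's learned estimate of each action's value
    for _ in range(episodes):
        # Explore a random action occasionally; otherwise exploit what
        # has been learned so far (pick the currently best-valued action).
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = 0 if q[0] >= q[1] else 1
        reward = 1.0 if action == 1 else -1.0  # environment's hidden rule
        # Nudge the estimate toward the observed reward (the "learning").
        q[action] += alpha * (reward - q[action])
    return q

q = train()
print(q)  # q[1] ends up clearly higher than q[0]: "forward" was learned
```

Like the toddler robot, the agent initially stumbles (it even starts by preferring the penalized action), but the repeated feedback loop steers it toward the behavior that works, with no explicit program telling it which action is correct.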
The bottom line is that scientists by now know exactly how the human brain works, and when they introduce “new” technology to the public, they literally do it in baby steps. They try to make us think in terms of the development of a human being; i.e., they first introduce robots that can manifest human baby behavior, and then they go up through the ages until they have fully functional adult robots walking among us, looking just as human as you and I. A good example of this is an article Popular Science published in December 2015, called “Robots could learn the same way babies do.” [Robots could learn the same way babies do]
The article ends with the following (my emphasis):
In the gaze scenario, a simulated robot is taught the mechanics of how its head moves, and watches a human move its head. The robot then uses its new knowledge to move its head too, so it's looking in the same direction as the human. In another test, the robot is taught about blindfolds, and how they make it impossible to see. With that newfound knowledge, the robot decides to not look in the direction where a blindfolded human is "gazing." In the imitation experiment, the robot would watch a human pick something up from a table, and understanding what the goal was, would either mimic the human exactly, or find an easier way to pick up the object. These two different experiments are basic, but the team plans to find a way to teach robots about more complicated tasks as well.
“Babies learn through their own play and by watching others,” says Andrew Meltzoff, psychology professor and collaborator on this research, in the press release. “They are the best learners on the planet—why not design robots that learn as effortlessly as a child?” Well, the dystopian pessimists out there might have a few reasons, but until then, baby robots sound pretty darn cute. [Ibid. op. cit.]
Yes, so long as they are cute…
The learning curve of a new type of robot, allegedly developed by a different team, is already significantly steeper. It takes an infant about four months to go from suckling and sleeping to being able to pick things up from the floor, but this new robot goes from zero to grasping and picking pieces out of a jumble, with 90% accuracy, within eight hours of being activated, and it is entirely self-taught. The robot was developed by Fanuc Corporation, a Japanese company; Japan is one of the forerunners in AI [Bloomberg.com, Dec. 3, 2015, Zero to Expert in Eight Hours: These Robots Can Learn For Themselves].
This is still the Stone Age compared to how far AI research has actually come, but just like the Controllers, I want to expose all of this on a gradient scale, to show you the fast pace at which new technology is released on the market. You can almost feel the impatience of those behind the scenes, who can’t wait to release the next generation of technology, and after that the next, and the next. Almost everything we have discussed thus far happened in 2015-2016. However, before this book is finished, you will notice that we will be far ahead of what we have discussed up to this point; the beginning of this book and its end might seem centuries apart in terms of technological development, but they are not. As I mentioned, most of it happened over a period of one to one and a half years.
To get a picture of how big a part AI will play in humanity’s future, we need to look at how much different corporations are investing in this kind of research. Bloomberg.com writes,
Fanuc earlier this year paid 900 million yen ($7.5 million) for a 6 percent stake in Preferred Network, after rival ABB Ltd. invested several million dollars into AI startup Vicarious. Facebook’s Mark Zuckerberg, Amazon.com Inc.’s Jeff Bezos, actor Ashton Kutcher and Samsung are also among Vicarious’s shareholders. [Ibid. op. cit.]
As you may have noticed, an actor who portrayed Steve Jobs in a movie is thrown into the pot as well. Some Hollywood actors understand what the future will bring and where the money they have earned should be invested.
Figure 10-5: Actor Ashton “Dude, Where’s My Car?” Kutcher is an investor in AI.
Next page: Robots with Five Senses on the Rise!