Scientists At MIT Create A "Psychopath" AI By Feeding It Graphic Images

By Matthew Parizot
[Image: Engineered Arts RoboThespian robots at the company's headquarters in Penryn, Cornwall, England, May 9, 2018. Matt Cardy/Getty Images]
Nothing could possibly go wrong with a murderous AI, right?

While the researchers at MIT are probably much, much smarter than the average person, it seems they could stand to read a book or watch a movie once in a while, or at the very least watch Terminator or The Matrix.

That recommendation comes courtesy of their most recent artificial intelligence project: Norman, an AI that's also a psychopath.

According to CNN, the purpose behind Norman (named after Norman Bates from the Hitchcock film Psycho) isn't to eventually destroy all of humanity, but rather to teach a lesson: the kinds of conclusions an AI draws depend greatly on the data it's given.

Norman was fed a steady diet of blood, guts, and gore from an unnamed Reddit page, and then asked to interpret the inkblots in a typical Rorschach test. Where a typical AI might see a close-up of a wedding cake on a table, Norman looks at that same inkblot and sees a man who has been killed by a speeding driver.

This experiment actually has some practical uses. Microsoft infamously launched "Tay," a Twitter AI designed to learn from the people interacting with it. However, the company underestimated the depravity of the internet, and Tay turned into a racial slur-slinging Nazi within a day, forcing Microsoft to deactivate her account.

The eventual goal is to learn how to get AIs to resist this kind of pressure and return to normalcy. Maybe next time they can find a way to do that without creating a computer that only knows how to kill.
