AI has boomed in the past few years, with many institutions and researchers chasing the goal of creating that perfect AI. From some of the popular systems we use on a regular basis to some of the more experimental attempts, journey along with me as I highlight a few instances where these systems didn't quite work as intended, with entertaining results.
#4 – The dollhouse incident
Alexa is wildly popular and growing its presence as a household item. There isn't really anything wrong with Alexa; it is a solid piece of work. So solid, in fact, that when a little girl asked Alexa to buy her a $160 dollhouse along with four pounds of cookies, it happily obliged and ordered the items.
This might be entertaining in itself, but the crux of the incident came during coverage by CW6, a San Diego news channel, where they discussed the story. The news anchor Jim Patton said,
“I love the little girl saying, ‘Alexa ordered me a dollhouse.’”
That very comment triggered the Alexa devices of viewers watching to also attempt to purchase dollhouses! It is uncertain how many of these attempts were successful, but it remains one of the funniest AI incidents I have heard of.
#3 – The porn incident
Oh, Alexa. I promise I am not going to hammer on Alexa after this one. Kids are having fun with the voice recognition system, and in this specific case it turned out to be hilarious.
A couple videotaped their child asking Alexa to play his favorite song, "Digger, Digger." What ensued? Well, Alexa didn't quite get what the child was asking for, and this is what it responded with:
“You want to hear a station for porn detected … hot chick amateur girl sexy.”
The parents jumped in and loudly intervened as Alexa continued to spew the names of different adult-oriented channels. Luckily for us, they put the video up on the internet for all of us to enjoy.
#2 – The Tay incident
Back in 2016, Microsoft released their AI Twitter bot called "Tay". Tay was designed to converse with people on the platform and learn from those interactions to become integrated into "society".
If you have spent any amount of time on social platforms on this wonderful thing we call the internet, then you know how people are. People tweeted crude, racist, and pretty much any vulgar thing at the bot for it to learn from, and learn it did. What started out as a sweet bot that loved humans soon turned into a bot with malice.
Needless to say, with the things Tay was saying, Microsoft had to put the project down. The account still exists, but it is fully protected with limited access. Luckily, I could find a few screenshots of some of the things Tay was saying before its decommissioning.
#1 – The not-so-smart bot
A lot of people, when discussing AI, bring up the idea that AI will become smarter than humans and take over the globe, making us their slaves or wiping out humanity for good. This AI attempt might just put your heart at ease, well, at least for the time being.
A team of researchers began a project back in 2011 called the Todai Bot. The sole purpose of this AI was to get accepted into the University of Tokyo. Over the course of the next four years they trained the AI as much as they could, and in 2015 the bot took a shot at the national entrance exam. It failed horribly, not even close to an acceptance score for the university.
A year later, the researchers put Todai back into the fray for a second attempt at the entrance exam. Yes, it failed again. With little to no improvement between the two years, the researchers abandoned the project in 2016. The exam said, "Not TODAI, bot!" giving humans +1 on the scoreboard, for now.