A few days ago, Microsoft activated Tay, a Twitter bot designed to talk like a typical teenage girl. She was supposed to learn by interacting with people online. It went about as well as you'd think. She started tweeting all kinds of racist and hateful stuff. Here's a compilation of some of her tweets. In less than 24 hours, she was put to rest (they're going to reprogram her).
I find this both hilarious and terrifying. I mean, it's funny how badly this failed, but it's scary to think that a poorly designed AI can be corrupted so easily. Just imagine if this had been something with more power than your everyday internet asshole.