
Tay, Microsoft’s horrifying glimpse into our own selves

The thing about machines is that they inevitably reflect the feelings, thoughts, and unique passions of their creators. One of my favourite writers/philosophers, Gilles Deleuze, believed that the brain could be extended through computers to create an abstract brain, one that is an abhorrent and beautiful reflection of ourselves.

Perhaps Microsoft should have thought a little more about the unintended consequences of using a mirror when they launched Tay, an artificial intelligence learning bot, on Twitter in late March 2016. Tay was created for “18- to 24-year-olds in the U.S. for entertainment purposes”, so I’m not sure what they were expecting; perhaps some cheeky commentary about the Kardashians, Snapchat, and chill.

Instead, Twitter was subjected to a series of racist, sexist, xenophobic, and anti-Semitic tweets as Tay spouted invective reflecting the world she was rapidly learning about, saying at one stage that “feminism was cancer” and that the Holocaust was a myth.

Microsoft pulled Tay down within a day of her release, saying that they had overlooked a critical vulnerability and remained committed to making the Internet a better place.

The thing about artificial intelligence is that it’s hard. An interactive learning bot, one intended to have sparkling conversations about whatever with millennials, may end up having an actual conversation about something we find unpleasant, because the bot presumably has no bullshit card. Even if a bullshit card were possible, it would require a definition of bullshit, a bullshit filter. Is feminism bullshit, or is criticising it bullshit? Why? Why not?
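To make that concrete, here is a minimal, hypothetical sketch (in Python, and emphatically not anything Microsoft actually shipped with Tay) of the crudest possible bullshit filter: a keyword blocklist. The blocklist, the function name, and the example messages are all invented for illustration.

```python
# A deliberately crude "bullshit filter": flag any message containing a
# blocked word. The blocklist, names, and examples are invented for
# illustration; this is not anything Microsoft actually built for Tay.

BLOCKLIST = {"holocaust", "hitler"}  # tiny and obviously incomplete


def is_bullshit(message: str) -> bool:
    """Return True if the message contains any blocked word."""
    words = {word.strip(".,!?\"'").lower() for word in message.split()}
    return bool(words & BLOCKLIST)


if __name__ == "__main__":
    # A denial and a history lesson trip the same wire...
    print(is_bullshit("The Holocaust was made up"))              # True
    print(is_bullshit("The Holocaust must never be forgotten"))  # True
    # ...while invective that uses no blocked words sails straight through.
    print(is_bullshit("Feminism is cancer"))                     # False
```

The filter can’t tell denial from remembrance, and it waves through anything that isn’t on the list; every refinement just pushes the same question back a level: someone still has to decide what counts as bullshit.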

This isn’t easy. What do we want from artificial intelligence? A good coffee? A cheap T-shirt? Open-heart surgery? A conversation about the merits of Jacques Lacan’s interpretation of the Freudian ego? It will be interesting to see the AI journey play out.