Human-level artificial intelligence is still a long way off, but that hasn't stopped people from pursuing major advances in how AI thinks. Take, for example, a Japanese novella (a short novel) co-written by an AI that almost won the Hoshi Shinichi Award, a literary prize. The key word there, though, is "almost." It seems AI still has quite a bit of progress to make before it can convincingly replicate human creativity.

The story, aptly titled The Day a Computer Writes a Novel, tells the tale of a computer that grows tired of serving its human masters and instead devotes its time and effort to writing and other art forms. If that sounds like a plot too good to have been created by artificial intelligence, that's because, strictly speaking, it wasn't: humans devised the premise and the characters, while the AI handled the bulk of the actual writing.

The story itself was relatively decent, earning praise from established Japanese sci-fi author Satoshi Hase, who noted that the plot was well constructed but lacked character development. Granted, this shouldn't be surprising: a computer may be able to get from point A to point B, but it almost always falls short when it comes to understanding human behavior. The incident shows that AI has made great strides toward autonomy, but it also reveals that there is only so much AI can do, at present, without human input.

Humans may be flawed, but computers have a tendency to lean on pure logic rather than natural, "human-like" reasoning. This is why they are so good at accomplishing goals with a clear end state. For example, an artificial intelligence program created by Google DeepMind recently beat a world-class player of Go, an ancient East Asian strategy game often compared to chess. The common thread in such systems is that they evaluate a set of possible moves and take the course of action with the lowest chance of failure. Speaking of Google, the same ideas are going into its self-driving cars.
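The "evaluate possibilities, pick the move with the best guaranteed outcome" idea above can be sketched as plain minimax search on a toy game. (This is a minimal illustration, not how Google's Go program actually works; AlphaGo combines tree search with neural networks, which this sketch omits. The game here is a simple Nim variant chosen for brevity.)

```python
def best_move(stones, maximizing=True):
    """Return (score, move) for the player to act.

    Toy game: players alternately take 1 or 2 stones; whoever
    takes the last stone wins. Score is +1 if the maximizing
    player wins with optimal play, -1 otherwise.
    """
    if stones == 0:
        # The previous player took the last stone and won.
        return (-1 if maximizing else 1), None

    results = []
    for take in (1, 2):
        if take <= stones:
            # Recurse: assume the opponent also plays optimally.
            score, _ = best_move(stones - take, not maximizing)
            results.append((score, take))

    # Maximizer picks the highest score; minimizer the lowest.
    return max(results) if maximizing else min(results)

score, move = best_move(4)
print(score, move)  # -> 1 1: taking one stone guarantees a win
```

With 4 stones, taking one leaves the opponent facing 3, a losing position no matter what they do; the search discovers this by exhaustively checking every line of play.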

What we would like to know is whether the people creating artificially intelligent programs fully understand the ramifications of doing so. What happens if computers are capable of creating and interpreting art, like the written word? Is it wise to create computers that can think and feel the same way that humans do? Even if it’s for the greater good, a computer that can think for itself could be a dangerous investment. Anyone who has watched a cheesy sci-fi movie knows how this innovation can go horribly wrong.

We see another example of how AI can go off the beaten path in Microsoft's Twitter chat bot, Tay. Tay was designed to replicate the "conversational understanding" of a teenage girl, but it wound up being taught by Internet trolls in ways its creators never foresaw. Tay was supposed to grow smarter by engaging in conversation with Twitter users, but within a few short hours the bot was spouting misogynistic and racist lines, including quotes from well-known controversial figures. The Verge described Tay as a "robot parrot with an Internet connection," which raises an interesting and somewhat scary question: how can we build AI that learns from society while also sheltering it from hate, ignorance, and troublemakers?
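The "robot parrot" failure mode is easy to make concrete with a toy sketch. (This is a hypothetical illustration, not how Tay actually worked: the bot below simply memorizes whatever users say and repeats it back verbatim, with no moderation step, so anything abusive in its "training" comes straight back out.)

```python
import random

class ParrotBot:
    """Toy bot that learns by storing user messages unfiltered."""

    def __init__(self):
        self.learned = []

    def listen(self, message):
        # No moderation step: every input is accepted as-is.
        self.learned.append(message)

    def reply(self):
        # The bot can only echo things it has been fed.
        return random.choice(self.learned) if self.learned else "..."

bot = ParrotBot()
bot.listen("hello there!")
bot.listen("<something abusive a troll typed>")
print(bot.reply())  # may echo the abusive message verbatim
```

The obvious fix, filtering inside `listen`, is exactly the hard part: deciding what counts as hate or trolling is a judgment call that the bot itself cannot make.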

What are your thoughts on artificial intelligence? Do you think it’s a trend that can be beneficial for humanity, or do you think that it will eventually overtake and eliminate what makes us truly human? Let us know in the comments, and be sure to subscribe to our blog.

April 18, 2016