If you’re a writer, you’re naturally concerned about getting your point across and you’re going to write with an audience in mind. You write in English for an English-speaking audience because writing to them in another language would usually be ineffective. Scientific articles begin with an abstract because if your audience is other scientists, that’s what they’ll expect. Children’s books don’t typically include footnotes because most children don’t want them. And so on.
So what if most of our online readers were artificial intelligences, and we had to write in a way that accommodated their idiosyncrasies?
There has been much hoopla and rejoicing over the fact that an AI recently sort-of passed the Turing Test by pretending to be a 13-year-old boy, writing in English, who can’t speak English very well. And while saying this AI “passed” the Turing Test would be sort of like saying you “won” a round of golf by standing over the hole and dropping a ball in it, there is no question that we’ve come a long way since the days of Eliza.
But there are some potential drawbacks to all of this. The Turing Test works if human judges can’t tell the difference between a computer and a human. This means that in order for a computer to pass the Turing Test, a human has to fail it.
As we continue to change our communication style in small but significant ways to accommodate AI technology, and AIs continue to mimic humans more convincingly, we’re headed towards what looks like a hilarious parody of the singularity: a situation where much of what we write looks more artificial than the stuff a computer auto-generates. We’re not there yet (as the New York Review of Bots demonstrates), but when we are there—when most social media accounts, article commenters, and so forth are artificial intelligences—how will that change the way we write, think, and interact?