The latest xkcd proposes an alternative to the captcha anti-bot test: Matt Webb notices that it’s the Voight-Kampff test applied to the web (I have no idea how Harrison Ford managed to sound the second “f”, but apparently other people know it and have used it). I love it: it weeds out the bots by focussing on what makes a human a human, beyond our image-processing and language skills, accepting interaction only with entities capable of empathy and value judgements, ones that can recognise the answer we’re most likely to be after.
But it doesn’t work. For one, a yes/no answer just means that a bot has to try twice instead of once, which reduces it to a problem of bandwidth. But even if there were a greater choice of answers, any replicant capable of landing a job interview would surely have wifi. My phone has wifi. Even cameras have wifi now, and they would not pass many job interviews (“how do you get on with other people?”, “I click well”). The combination of connectivity and Amazon’s HIT service means that given enough time, any net-enabled replicant could just ask an army of skint humans to come up with the statistically probable answer.
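The "bot just tries twice" point can be made concrete. A minimal sketch, with entirely hypothetical names (no real captcha system is being modelled here): a binary question is defeated by a bot that simply enumerates both answers.

```python
import random

def binary_captcha():
    """A hypothetical captcha whose only valid answers are 'yes' or 'no'."""
    return random.choice(["yes", "no"])

def brute_force(correct_answer):
    """A bot passes a binary captcha in at most two submissions:
    it just tries each possible answer in turn."""
    for attempt, guess in enumerate(["yes", "no"], start=1):
        if guess == correct_answer:
            return attempt
    return None  # unreachable for a binary question

# Over any number of trials, the bot never needs more than two attempts.
attempts = [brute_force(binary_captcha()) for _ in range(1000)]
assert 1 <= min(attempts) and max(attempts) <= 2
```

Which is why the obstacle is bandwidth rather than intelligence: doubling the request count is free for a bot.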
Of course, the crucial element is time. For the HIT strategy to work, replicants would have to be questioned in an environment that would allow them to pause for a while before answering: this implies that they’d be best off applying for jobs in the civil service or the media, where a dilatory approach to qualifying their suitability for a role would be acceptable. Soon, Goldsmiths and Millbank would be staffed with replicants dedicated to working against all that true humans stand for, while the private sector looked on aghast and tried to concoct ways of avoiding working with either for as long as possible. So far, the story checks out: maybe wiser heads than mine are already working on a solution.
One way of avoiding the HIT approach might be to ask for responses that could only be answered through a deep knowledge of the milieu of the author: the purpose of the captcha then progresses from just weeding out bots, to weeding out people who aren’t cool enough to understand the question. In this way blogs can manage their appeal in a far more fine-grained way than at present. Serious tech blogs could bar Mac fanboys through judicious probing of their command-line fluency; political blogs could make sure that comments only come from those who articulate their allegiance in an acceptable fashion. No-one need ever hear from live-action roleplayers again.
But more elegant than this crude reification of web cliques would be the inclusion of a “dude this is so a trick question” button, perhaps placed elsewhere in the comments form (“was the question above totally manipulative or a fair chance to express your views?”). Perhaps in addition to the “yes” or “no” options in the two examples above, we might add a po or mu option, giving humans a chance to do what a robot can’t, at present: recognise an absurdity and claim the right to not answer.
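A minimal sketch of how such a comments form might treat a “mu” answer, with all names invented for illustration: a plain question takes yes/no, but a trick question also admits a refusal to answer.

```python
def validate_comment(question, answer):
    """Hypothetical comment-form check: ordinary questions accept only
    'yes' or 'no', but a question flagged as a trick also accepts
    'mu' or 'po' -- the human's right to reject the premise."""
    allowed = {"yes", "no"}
    if question.get("is_trick"):
        allowed |= {"mu", "po"}
    return answer in allowed

trick = {"text": "Was the question above totally manipulative "
                 "or a fair chance to express your views?",
         "is_trick": True}
plain = {"text": "Do you want to subscribe to replies?", "is_trick": False}

assert validate_comment(trick, "mu")      # a human may refuse to answer
assert not validate_comment(plain, "mu")  # 'mu' only applies to trick questions
```

The interesting design choice is that “mu” is not a wrong answer penalised by the form but a first-class option, which is exactly the capacity the robot is assumed to lack.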