Mime types

Researchers at the University of Washington have worked out how to detect the tiny variations that a moving human body makes in a wifi field, and built a gesture interface with it.
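
Roughly, the trick is that a hand or body moving towards or away from a receiver imposes a tiny Doppler-style shift on the wireless signals already bouncing around the room. A toy sketch in Python, purely illustrative and not WiSee’s actual pipeline (the sample rate, the 17 Hz shift and the amplitudes are all invented), of how such a shift could be pulled out of a downconverted signal:

```python
import numpy as np

# Toy illustration, not WiSee's actual pipeline: a body moving towards or away
# from the receiver adds a tiny Doppler-style shift to the reflected signal.
# After downconversion to a narrowband complex baseband, that shift appears as
# a few-hertz tone whose sign says "towards" or "away"; a gesture is then a
# readable sequence of such shifts. All the numbers below are invented.

fs = 500                                   # baseband sample rate, Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)

static_room = 1.0 + 0j                     # reflections off walls and furniture: no shift
push = 0.05 * np.exp(2j * np.pi * 17 * t)  # hand moving towards the antenna: +17 Hz (made up)
noise = 0.01 * (np.random.randn(len(t)) + 1j * np.random.randn(len(t)))
baseband = static_room + push + noise

window = np.hanning(len(t))
spectrum = np.fft.fftshift(np.fft.fft(baseband * window))
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))

# Ignore the strong DC component (the static room) and find the shifted tone.
moving = np.abs(freqs) > 2.0
peak = np.argmax(np.abs(spectrum) * moving)
print(f"dominant Doppler shift: {freqs[peak]:+.1f} Hz")   # positive = towards
```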

WiSee is able to detect and identify nine different gestures with 94% accuracy, and the team says that the next version of the system will be able to recognize sequences of gestures and accept a wider “vocabulary” of commands. “The intent is to make this into an API where other researchers can build their own gesture vocabularies,” said Shwetak Patel, one of the lead researchers.
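
It isn’t clear yet what shape that API will take, but it’s easy to imagine something as simple as registering handlers against a named vocabulary of gestures. A hypothetical sketch, with every name invented here rather than taken from the WiSee project:

```python
# A hypothetical sketch of the kind of gesture-vocabulary API the quote points
# towards; every name here is invented for illustration, none come from WiSee.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class GestureVocabulary:
    """Maps named gestures (e.g. 'push', 'pull', 'circle') to handlers."""
    handlers: Dict[str, Callable[[], None]] = field(default_factory=dict)

    def on(self, gesture: str, handler: Callable[[], None]) -> None:
        self.handlers[gesture] = handler

    def dispatch(self, gesture: str) -> None:
        # Called by the recognition layer once it has classified a movement.
        if gesture in self.handlers:
            self.handlers[gesture]()

# A researcher- or household-defined vocabulary for one room:
living_room = GestureVocabulary()
living_room.on("push", lambda: print("TV: next channel"))
living_room.on("pull", lambda: print("TV: previous channel"))
living_room.on("circle", lambda: print("lights: toggle"))

living_room.dispatch("push")    # -> TV: next channel
```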

The team has successfully used it to control electronic devices, changing the channel on a TV and turning the lights on and off, and anticipates many household applications. They acknowledge that the technology will need further improvements to prevent unauthorized use and to restrict it to a specific area with a ‘geofence’.
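
At its simplest, the ‘geofence’ would be a gatekeeping check: only act on gestures whose estimated source sits inside an allowed region. A hedged sketch of that check, with the region, the coordinates and the question of how a position would ever be estimated all invented or glossed over:

```python
# Hedged sketch of the 'geofence' idea: only act on gestures whose estimated
# source position falls inside an allowed region. How that position would be
# estimated from the radio signal is the hard part and is not shown; the
# coordinates and the region below are entirely made up.

ALLOWED_REGION = {"x": (0.0, 4.0), "y": (0.0, 5.0)}   # the living room, in metres

def inside_geofence(x: float, y: float) -> bool:
    (x0, x1), (y0, y1) = ALLOWED_REGION["x"], ALLOWED_REGION["y"]
    return x0 <= x <= x1 and y0 <= y <= y1

def handle_gesture(gesture: str, x: float, y: float) -> None:
    if not inside_geofence(x, y):
        return  # a neighbour waving through the wall gets ignored
    print(f"acting on {gesture!r} from ({x:.1f} m, {y:.1f} m)")

handle_gesture("push", 2.0, 3.0)   # inside: acted on
handle_gesture("push", 9.0, 3.0)   # outside: dropped
```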

I think it’s huge. I mean, I don’t know if it will be huge, but it could fundamentally reconfigure our relationships with the various outposts of machine intelligence with whom we cohabit. It makes worries about the Kinect’s permanent state of eavesdropping seem a bit quaint. You have to assume that you’re not alone, even when you are. Perhaps it’ll be easier to get used to for people with butlers.

How hard it will be – at the moment, lacking conventions of gesture and positioning – to tell what or who someone flailing their arms around in your living room is trying to speak to. When we communicate, there has historically been a way of indicating who we expect to be listening, either by making some sort of eye contact or by pointing a remote control at them. This technology breaks that connection. Where’s the feedback? How important is it for other people in the room to know what you’re doing? Is it possible to design politely? This makes Timo Arnall’s “no to no UI” message even more urgent.

So it’s worth thinking about the cultural repositories of gesture that we have access to already and that we could draw on when working out how to fit this new magic into our lives. Tai chi and the priestly movements made in places of worship are both existing modes of embodied intervention in an unseen but pervasive medium – perhaps in these traditions there might be some answers to the problems raised by this new kind of interaction.

There’s a whole other set of questions about what might be done with the knowledge embedded in this system, of course. Imagine a room full of machines that don’t function for people who have been behaving in a way that fits the movement profile of an undesirable. Or wifi that’s faster for tall, graceful people. Or localised versions to accommodate dialects that make more use of body language than standard British English. Perhaps there will be more positive things – maybe an entertainment system that recognises toddlers in the room and turns screens off, or an application that recognises unfocussed flitting between distractions, or one that texts you when someone in your care has depressive body language. But however you spin it, there’s a huge potential for invasive and constraining applications that limit individual autonomy. Which seems a high price to pay for a new way to change the CD or turn the aircon up, the applications suggested by the demonstration video. I’m increasingly ready to believe in a future me covered in dazzle facepaint, walking furtively through a tube station dressed in Hyperstealth camo, turning my wave-deflecting rosary over and over in my fingers to ward off the data demons.