A Step Towards Gesture Interface

When the Microsoft Kinect launched, it was met by a wave of developers attempting to hack the system and use the Kinect for everything from 3D cameras to cheap, accessible computer vision. There are a bunch of these hacks available, and the developers working on them are making apps that let you control digital monsters, make a movie in 3D and, my favorite, control your Mac.

http://www.kinect-hacks.com/kinect-guides/2011/03/12/guide-installing-kinect-jesture-mac-os-x

Why is this my favorite? Because, with all the concerns people have about how we interact with our computers (sitting in front of a screen, typing, all day), controlling your computer with your hands seems like the perfect first step toward helping us work the way we're meant to. I imagine a world where we interact with our technology through a combination of touch and gesture. In this future, we don't have people complaining of repetitive stress injuries like carpal tunnel; we have people up and moving around to use their devices.

I've recently been reading several studies on how movement improves memory and cognitive skills. Basically, our brains work best when we can touch and move something directly. Our current computer interfaces, by contrast, are built on indirect interaction: we move the mouse and must mentally translate that motion into a change on the screen. Integrating gesture interfaces into our technology is a good first step toward bridging that internal translation of space. This trend feels like an inevitability to me, both because users clearly want it and because we're beginning to view our physical world as a place full of data we can interact with.

I'm going to go try this hack. Maybe you should too (if you've got photos of it, even better; I'd love to check them out).