Google Glass Eye-Tracking, Web Control, Touch-Sensitive Clothing, and Bananas?


Bananas and Google Glass? Better to watch the video and listen to developer Brandyn White explain what he’s up to.

How we interact with a device largely determines the context that it can be used in (e.g., while driving, during a meeting) and potentially who can use it (e.g., users with disabilities).

Glass supports touch gestures (e.g., swipe, tap, scroll), head gestures (e.g., tilting the head up turns the display on, tilting it down turns the display off), and voice controls (e.g., the "ok glass" hotword and voice input). By reading the IMU sensors directly (as we show here), it's simple to extend the range of head gestures; a small sketch of the idea follows this paragraph. Glass also has a proximity sensor, used to determine whether the device is being worn; it can recognize wink/blink gestures, but it cannot actually track the user's gaze.
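
To give a sense of how extending head gestures might look, here is a minimal Python sketch that classifies a tilt up or tilt down from a short window of IMU pitch samples. The thresholds, and the assumption that pitch angles have already been computed from the raw sensor data, are ours for illustration and not Brandyn's actual implementation.

```python
# A minimal sketch of extending head gestures from IMU pitch samples.
# Assumes a short window of pitch angles (in degrees) is already available
# from the Glass IMU; thresholds are illustrative placeholders.

def detect_head_gesture(pitch_samples, up_threshold=20.0, down_threshold=-20.0):
    """Return 'tilt_up', 'tilt_down', or None for a window of pitch samples."""
    if not pitch_samples:
        return None
    delta = pitch_samples[-1] - pitch_samples[0]  # net pitch change over the window
    if delta >= up_threshold:
        return "tilt_up"      # e.g., turn the display on
    if delta <= down_threshold:
        return "tilt_down"    # e.g., turn the display off
    return None

# Example: a window where the wearer raises their head by about 25 degrees
print(detect_head_gesture([0.0, 8.0, 17.0, 25.0]))  # -> 'tilt_up'
```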

We start by developing a 3D-printed mount that attaches a small webcam to Glass. We remove the IR filter from the webcam and replace its blue LEDs with IR LEDs, which improves pupil contrast (our webcam teardown roughly follows the Pupil project). With the user's eye now visible, we detect the pupil using a custom approach based on MSER, filtering the candidate regions by area, intensity, and eccentricity; this gives us the real-time position and radius of the pupil in the image (a rough sketch is shown below). To use eye position as an input method, we define zones on the screen and run a calibration process in which the user looks at a series of dots, takes Glass off, puts it back on, and repeats. We model each zone as a Gaussian distribution and use the Mahalanobis distance to decide whether the user is looking at it.
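
To make the pupil-detection step concrete, here is a rough Python/OpenCV sketch of the MSER-plus-filtering idea described above. The area, intensity, and eccentricity thresholds are placeholders, not the values used in the project.

```python
import cv2
import numpy as np

# Rough sketch of MSER-based pupil detection, assuming an IR-illuminated
# eye image where the pupil appears as a dark, roughly circular blob.

def find_pupil(gray):
    """Return (cx, cy, radius) of the best pupil candidate, or None."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)

    best = None
    for pts in regions:
        area = len(pts)                          # region size in pixels
        if not (100 < area < 5000):              # reject tiny or huge regions
            continue
        intensity = gray[pts[:, 1], pts[:, 0]].mean()
        if intensity > 60:                       # pupil should be dark under IR
            continue
        (cx, cy), (ax1, ax2), _ = cv2.fitEllipse(pts.reshape(-1, 1, 2))
        if min(ax1, ax2) <= 0:
            continue
        ecc = np.sqrt(1.0 - (min(ax1, ax2) / max(ax1, ax2)) ** 2)
        if ecc > 0.8:                            # keep only near-circular regions
            continue
        if best is None or area > best[0]:       # prefer the largest surviving region
            best = (area, cx, cy, max(ax1, ax2) / 2.0)

    return None if best is None else best[1:]

# Example usage on a captured frame:
# pupil = find_pupil(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
```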
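
And here is a small sketch of the zone calibration: each zone is modeled as a Gaussian fitted to the pupil positions recorded while the user looked at that zone's dot, and a new pupil position is assigned to the zone with the smallest Mahalanobis distance below a cutoff. The data layout and the cutoff value are assumptions for illustration.

```python
import numpy as np

def fit_zone(samples):
    """samples: (N, 2) array of pupil (x, y) positions for one calibration dot."""
    mean = samples.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(samples, rowvar=False))
    return mean, inv_cov

def classify_gaze(pupil_xy, zones, cutoff=3.0):
    """Return the index of the zone being looked at, or None."""
    best_zone, best_dist = None, cutoff
    for i, (mean, inv_cov) in enumerate(zones):
        d = pupil_xy - mean
        dist = np.sqrt(d @ inv_cov @ d)          # Mahalanobis distance to the zone
        if dist < best_dist:
            best_zone, best_dist = i, dist
    return best_zone

# Example with two synthetic calibration zones
rng = np.random.default_rng(0)
zones = [fit_zone(rng.normal(c, 2.0, size=(50, 2))) for c in ([100, 80], [200, 80])]
print(classify_gaze(np.array([101.0, 79.0]), zones))  # -> 0
```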

Read more at Brandyn’s blog linked below.

Brandyn White’s Blog

Source: Engadget

Last Updated on January 23, 2017.
