As more people watch movies, edit videos, read the news, and keep up with social media on their smartphones, these devices have grown to accommodate the bigger screens and higher processing power that such demanding activities require. The problem with these unwieldy phones is that they frequently require a second hand or voice commands to operate, which can be slow and inconvenient.
In response, researchers in the Future Interfaces Group at Carnegie Mellon University’s Human-Computer Interaction Institute (HCII) are developing a gaze tracking tool called EyeMU, which allows users to execute operations on a smartphone by combining gaze control with simple hand gestures.
Features of EyeMU
Using eyes to interact with devices isn’t new. A couple of years ago, Google launched a similar initiative in which users could use their eyes to “select” words and phrases to be spoken aloud. The main difference is that EyeMU will allow users to interact with the entire device.
This includes selecting and opening notifications, returning to apps, selecting photos, and more. To prevent users from accidentally performing actions they didn’t mean to, the researchers have paired eye movements together with hand movements, so different hand gestures will act as confirmations or dismissals.
“The eyes have what you would call the Midas touch problem,” said Chris Harrison, an associate professor in the HCII and director of the Future Interfaces Group. “You can’t have a situation in which something happens on the phone everywhere you look. Too many applications would open.”
About the EyeMU Gaze Tracking Tool
Software that tracks the eyes with precision can solve this problem. Andy Kong, a senior majoring in computer science, had been interested in eye-tracking technologies since he first came to CMU. He found commercial versions pricey, so he wrote a program that used a laptop’s built-in camera to track the user’s eyes, which in turn moved the cursor around the screen — an important early step toward EyeMU.
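The core of such a prototype is mapping an eye position reported by the camera to a cursor position on screen. The sketch below shows one minimal way this mapping could work; the calibration bounds, screen size, and function names are illustrative assumptions, not details of Kong's actual program.

```python
# Minimal sketch: map a normalized iris position (as a face-tracking library
# might report it, in [0, 1] camera coordinates) to a screen cursor position.
# The calibration constants below are hypothetical; a real system would
# estimate them during a calibration pass.

SCREEN_W, SCREEN_H = 1920, 1080

# Range of iris positions observed while the user scans the whole screen.
IRIS_X_MIN, IRIS_X_MAX = 0.40, 0.60
IRIS_Y_MIN, IRIS_Y_MAX = 0.45, 0.58

def iris_to_cursor(ix: float, iy: float) -> tuple:
    """Linearly map a normalized iris position to screen pixels, clamped."""
    def scale(v, lo, hi, out_max):
        t = (v - lo) / (hi - lo)          # normalize to the calibrated range
        return int(min(max(t, 0.0), 1.0) * out_max)  # clamp and rescale
    return (scale(ix, IRIS_X_MIN, IRIS_X_MAX, SCREEN_W - 1),
            scale(iy, IRIS_Y_MIN, IRIS_Y_MAX, SCREEN_H - 1))

print(iris_to_cursor(0.50, 0.515))  # roughly the screen center
```

A real implementation would feed this function per-frame iris coordinates from a face-tracking library and smooth the output to reduce jitter.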
“Current phones only respond when we ask them for things, whether by speech, taps, or button clicks,” Kong said. “If the phone is widely used now, imagine how much more useful it would be if we could predict what the user wanted by analyzing gaze or other biometrics.”
Kong and Ahuja advanced their early prototype by using Google’s Face Mesh tool to study the gaze patterns of users looking at different areas of the screen and render the mapping data. Next, the team developed a gaze predictor that uses the smartphone’s front-facing camera to lock in what the viewer is looking at and register it as the target.
The team made the tool more useful by combining the gaze predictor with the smartphone’s built-in motion sensors to enable commands. For example, a user could look at a notification long enough to lock it in as a target, then flick the phone to the left to dismiss it or to the right to respond to it. Similarly, a user might pull the phone closer to enlarge an image or move the phone away to disengage the gaze control, all while holding a large latte in the other hand.
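The gaze-plus-motion interaction described above can be sketched as a small state machine: a sustained gaze "locks" a target, and only then does a flick of the phone trigger a command. The dwell duration, acceleration threshold, and command names below are assumptions for illustration, not EyeMU's actual parameters.

```python
# Illustrative state machine for gaze-dwell targeting plus motion gestures.
# Thresholds and event names are hypothetical.

DWELL_FRAMES = 30      # ~1 s of steady gaze at 30 fps locks the target
FLICK_THRESHOLD = 5.0  # lateral acceleration (m/s^2) that counts as a flick

class GazeMotionController:
    def __init__(self):
        self.candidate = None  # region currently being looked at
        self.dwell = 0         # consecutive frames on that region
        self.locked = None     # target confirmed by sustained gaze

    def on_gaze(self, region):
        """Call once per camera frame with the predicted gaze region."""
        if region == self.candidate:
            self.dwell += 1
        else:
            self.candidate, self.dwell = region, 1
        if self.dwell >= DWELL_FRAMES:
            self.locked = region

    def on_accel(self, ax):
        """Lateral acceleration sample; returns a command or None."""
        if self.locked is None or abs(ax) < FLICK_THRESHOLD:
            return None  # no locked target, or motion too gentle
        cmd = ("dismiss" if ax < 0 else "respond", self.locked)
        self.locked = None  # one gesture consumes the target
        return cmd

ctrl = GazeMotionController()
for _ in range(30):
    ctrl.on_gaze("notification")   # sustained gaze locks the target
print(ctrl.on_accel(-6.0))         # -> ('dismiss', 'notification')
```

Requiring the explicit motion gesture is what avoids the "Midas touch" problem Harrison describes: looking at something alone never triggers an action.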
Advantages & Disadvantages
Benefits or Advantages of Eye Tracking Technology
- It increases computing and resource efficiency.
- It helps to assess human conditions and behaviors.
- Helps learners acquire skills by observing where experts look.
- It makes technology more intuitive.
- It helps to communicate with machines in order to automate manual tasks.
- Increases user experience and performance in playing games.
Drawbacks or Disadvantages of Eye Tracking Technology
- Expensive technology due to costly hardware requirements.
- It may not work for some users who wear contact lenses or have long eyelashes.
- It requires some calibration time before it gives satisfactory results, which deters some users.
- Some users’ eye movements are often unintentional, resulting in unwanted responses from the system.
- It is difficult to control eye position accurately all the time, unlike a mouse. The eye tracker provides unstable output when it does not get a clear image of the eye in consecutive frames.