Eye gaze trackers have long been studied for their utility in the aviation domain. Numerous studies have investigated gaze-controlled interfaces for electronic flight displays and head-mounted display systems under simulated conditions. In this paper, we present a study on the use of eye gaze trackers in real flight conditions and their failure modes under such operating conditions and illumination levels. We show that commercially available off-the-shelf (COTS) eye gaze trackers with state-of-the-art accuracy fail to provide gaze estimates beyond a certain level of illumination on the eyes. We also show that their limited tracking range prevents them from providing gaze estimates even during the pilots' natural operating behavior. Further, we present three approaches to developing eye gaze trackers that use a webcam instead of infrared illumination and are intended to remain functional under high-illumination conditions. We show that our intelligent tracker, developed using the OpenFace framework, provides results comparable to the COTS eye tracker in terms of interaction speed in both indoor and outdoor conditions.

Eye gaze tracking is the process of estimating where a person is looking. Eye gaze tracking technology has been used to understand gaze scanning, visual search and reading behavior since the late 19th century. With the availability of portable infrared-based eye trackers, researchers have also explored controlling digital user interfaces simply by looking at them. While the technique sounds intuitive, underlying technical limitations, such as the innate nature of eye movements and the constraints of direct manipulation of graphical user interfaces, have so far restricted gaze-controlled interfaces to applications for people with severe disabilities and to use as a binary (on/off) input channel in a few smartphone video-viewing applications.

More recently, a new set of applications has been explored for gaze-controlled interfaces in situations where the operator is prevented from operating traditional physical or touchscreen user interfaces. Examples include undertaking secondary tasks in the automotive domain and performing mission tasks inside a combat aircraft.

Eye gaze tracking technology has been widely explored in the aviation domain for pilot training, understanding pilots' scanning behavior, optimizing cockpit layout and, more recently, estimating pilots' cognitive workload [6, 9]. Although a commercial product is not yet available, defense manufacturers are already investigating the use of gaze-controlled interfaces inside the cockpit. While most research on gaze-controlled interfaces for military aviation concentrates on Head Mounted Display Systems (HMDS), this paper explores gaze-controlled interaction with Head Down Displays. We first evaluated a state-of-the-art wearable eye tracking device in a combat aircraft undertaking representative combat maneuvers, and then proposed and evaluated a set of algorithms for a screen-mounted eye gaze tracker to operate head down displays. These algorithms can also be used in transport and passenger aircraft.

We recorded data from two flights using a COTS eye tracker (Tobii Pro Glasses 2), which uses infrared (IR) illumination-based gaze estimation. The first flight lasted 55 minutes 58 seconds (Flight 1) and the second 56 minutes (Flight 2); the flight profiles are given in Table 1 below. The eye tracker contains a front-facing scene camera that records the pilot's first-person view. It also contains four eye cameras, two per eye, to record eye movements. The tracker estimates gaze points at 100 Hz. The scene camera runs at 25.01 frames per second at 1920 x 1080 resolution, and each eye camera at around 50 frames per second with a resolution of 240 x 240. Each gaze point is recorded with a dedicated identifier called "gidx". We initially used the Tobii Pro Lab tool to analyze the recorded gaze samples and observed that both flight recordings contained gaze samples for only around 50% of their duration. We investigated this loss of data samples using the raw data provided by the manufacturer in JSON format and by correlating the raw data with the eye images.

Table 1. Flight Profiles

Flight #1
Objective: Maneuvering flight with head-mounted eye tracker on the Pilot in Command
Profile: Take-off – climb – level flight to Local Flying Area – constant-G (3G and 5G) level turns to both sides – vertical loop – barrel roll – air-to-ground dive attack training missions – descent – ILS approach and landing

Flight #2
Objective: Non-maneuvering flight with head-mounted eye tracker on the Pilot in Command
Profile: Take-off – climb – level flight to Local Flying Area – straight and level cruise with gentle level turns – descent – ILS approach and landing
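The loss of gaze samples can be quantified directly from the raw export. Below is a minimal Python sketch, assuming the raw data is newline-delimited JSON in which each gaze packet carries the "gidx" identifier, a 2-D gaze position and a validity flag; the field names other than "gidx", and the file name, are assumptions for illustration.

```python
# Sketch: estimate how much of a flight recording actually contains valid gaze samples.
# Assumes a newline-delimited JSON export where gaze-position packets carry a "gidx"
# identifier, a "gp" = [x, y] field and a status flag "s" (0 = valid). Field names
# other than "gidx" are assumptions here.
import json

def gaze_coverage(livedata_path, flight_duration_s, sampling_rate_hz=100):
    valid_gidx = set()
    with open(livedata_path) as f:
        for line in f:
            try:
                pkt = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip truncated packets
            # keep only valid 2-D gaze-position packets
            if "gidx" in pkt and "gp" in pkt and pkt.get("s", 1) == 0:
                valid_gidx.add(pkt["gidx"])
    expected = flight_duration_s * sampling_rate_hz
    return len(valid_gidx) / expected

# e.g. Flight 1: 55 min 58 s at 100 Hz
print(f"coverage: {gaze_coverage('livedata.json', 55 * 60 + 58):.1%}")
```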

Our initial approach to eye gaze estimation was feature based. We used a method that extracts Histogram of Oriented Gradients (HoG) features combined with a linear SVM to detect eye landmarks. These landmarks were used to compute the Eye Aspect Ratio (EAR) feature and to estimate the gaze block on the screen. Even though HoG-based landmark detection has been widely used, we observed that it occasionally failed to detect landmarks for our users, which affected gaze estimation accuracy. Variations in illumination and in facial appearance, such as beards or spectacles, can degrade tracking accuracy when it relies on pre-selected facial features. A minimal sketch of this feature extraction step is shown below.
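The sketch uses dlib's HoG + linear SVM face detector and 68-point landmark predictor to compute the EAR for each eye; how EAR and landmark positions are then mapped to a gaze block on the screen is omitted, since that mapping was specific to our setup.

```python
# Sketch of the feature-based approach: HoG + linear SVM landmarks (via dlib's
# 68-point predictor) and the Eye Aspect Ratio (EAR). Only the feature extraction
# step is shown; the EAR-to-gaze-block mapping is application-specific.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()                 # HoG + linear SVM
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

LEFT_EYE = list(range(36, 42))    # 68-point model indices
RIGHT_EYE = list(range(42, 48))

def eye_aspect_ratio(pts):
    # pts: six (x, y) landmarks around one eye, ordered p1..p6
    a = np.linalg.norm(pts[1] - pts[5])
    b = np.linalg.norm(pts[2] - pts[4])
    c = np.linalg.norm(pts[0] - pts[3])
    return (a + b) / (2.0 * c)

def eye_features(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)
    if not faces:
        return None                                         # landmark detection failed
    shape = predictor(gray, faces[0])
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
    return eye_aspect_ratio(pts[LEFT_EYE]), eye_aspect_ratio(pts[RIGHT_EYE])
```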

The second approach, WebGazer.js, maps the pixel data of eye images directly to gaze locations rather than relying on handcrafted features. It uses a 6 x 10 image patch for each eye, converted into a 120-dimensional feature vector that is fed to a regression model mapping it to gaze points on the screen. This approach still relies on eye landmark detection algorithms, which suffer from the same limitations as the HoG-based approach. Furthermore, it requires users to click at least 40-50 locations on the screen for calibration before it can make predictions, which takes significant time.
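WebGazer.js itself runs in JavaScript in the browser; the sketch below only illustrates the underlying idea in Python with scikit-learn, flattening two 6 x 10 eye patches into a 120-dimensional vector and fitting a ridge regression from click-time calibration samples to screen coordinates. The function names and preprocessing choices are ours, not WebGazer's.

```python
# Illustration of the WebGazer.js principle: two 6x10 grayscale eye patches are
# flattened into a 120-D vector and a ridge regression maps it to screen
# coordinates learned from click-time calibration samples.
import cv2
import numpy as np
from sklearn.linear_model import Ridge

def eye_patch_vector(left_eye_img, right_eye_img):
    """Resize both eye crops to 6x10, equalize and flatten into a 120-D feature."""
    feats = []
    for img in (left_eye_img, right_eye_img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        patch = cv2.resize(gray, (10, 6))            # 10 wide x 6 high = 60 pixels
        feats.append(cv2.equalizeHist(patch).flatten())
    return np.concatenate(feats).astype(np.float32) / 255.0

def fit_gaze_model(calib_eye_pairs, calib_clicks_px):
    """calib_eye_pairs: list of (left_crop, right_crop); calib_clicks_px: list of (x, y)."""
    X = np.stack([eye_patch_vector(l, r) for l, r in calib_eye_pairs])
    y = np.asarray(calib_clicks_px, dtype=np.float32)
    return Ridge(alpha=1.0).fit(X, y)

def predict_gaze(model, left_crop, right_crop):
    return model.predict(eye_patch_vector(left_crop, right_crop)[None, :])[0]
```

In WebGazer the regression is updated online as the user keeps clicking; the offline fit above is simply the most compact equivalent of that idea.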

OpenFace uses a state-of-the-art deep learning approach for landmark detection and gaze estimation. It uses Constrained Local Neural Fields (CLNF) for eye landmark detection and tracking. Unlike the HoG and linear SVM approach, which was based on handcrafted features and trained on a relatively small dataset, OpenFace uses a larger dataset and deep learning to estimate a 3D gaze vector from the eye images. Even though OpenFace is not highly accurate in predicting gaze points on screen, its eye landmark detection is far less sensitive to illumination and appearance. Furthermore, OpenFace is implemented in C++, which makes real-time gaze estimation possible even on CPUs. In addition, its reported state-of-the-art cross-validation accuracy prompted us to test it for a gaze block detection application.
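In practice, OpenFace's FeatureExtraction tool can be run on the webcam stream and its per-frame output post-processed. The sketch below shows one way to turn the reported gaze angles into a coarse gaze block; the column names follow the OpenFace 2.x CSV output, while the angle ranges, grid size and linear mapping are illustrative placeholders rather than our exact calibration.

```python
# Sketch: turning OpenFace's per-frame gaze output into a coarse "gaze block" on a
# head-down display. Column names (gaze_angle_x, gaze_angle_y, in radians) follow the
# OpenFace 2.x FeatureExtraction CSV; the angle-to-block mapping and the calibration
# bounds below are simplifications, not the paper's exact method.
import numpy as np
import pandas as pd

def gaze_blocks(csv_path, n_cols=3, n_rows=2,
                x_range=(-0.5, 0.5), y_range=(-0.4, 0.4)):
    """Map gaze angles (radians) to an n_cols x n_rows grid of screen blocks."""
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()              # OpenFace pads header names
    gx = np.clip((df["gaze_angle_x"] - x_range[0]) / (x_range[1] - x_range[0]), 0, 1 - 1e-9)
    gy = np.clip((df["gaze_angle_y"] - y_range[0]) / (y_range[1] - y_range[0]), 0, 1 - 1e-9)
    col = (gx * n_cols).astype(int)
    row = (gy * n_rows).astype(int)
    return row * n_cols + col                        # block index per frame

# Typical use: run OpenFace first, e.g.
#   ./FeatureExtraction -f pilot_webcam.avi -out_dir processed
# then: blocks = gaze_blocks("processed/pilot_webcam.csv")
```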

The response times were lowest for the COTS tracker compared to the webcam-based approaches. The COTS tracker uses an on-board ASIC to run its image processing algorithms, which has lower latency than the general-purpose processor used in the webcam-based eye trackers [5]. Nevertheless, the difference in response times between the COTS tracker and the intelligent tracker was not significant indoors. Our future work investigates further reducing this latency and making eye tracking work in bright lighting conditions.

This paper presented a case study of testing and developing a bespoke eye gaze tracker for operating Head Down Displays in an aircraft cockpit. Our study showed that present COTS eye gaze trackers are not yet ready to be integrated into combat aircraft, in terms of tracking eye gaze under different lighting conditions and across the required vertical field of view. We presented a set of algorithms that can be configured for operating multi-function displays inside the cockpit, and a particular intelligent algorithm using the OpenFace framework worked better than classical computer-vision-based algorithms.

Link to the presentation video: https://drive.google.com/file/d/1SLjsTOhP0SPq8Ysf83PHRXwY9t0Q-oqp/view?usp=sharing

Link to the paper: DOI: 10.13140/RG.2.2.33700.30082

Presented at the European Test and Telemetry Conference (ETTC) – 24 June 2020 (online)


✈Thank you for viewing this post. Please give a ‘thumbs-up’👍 if you liked the post. Happy Landings!
