Eye Centre Localisation: An Unsupervised Modular Approach

Purpose – This paper introduces an unsupervised modular approach for eye centre localisation in images and videos following a coarse-to-fine, global-to-regional scheme. The design of the algorithm aims at excellent accuracy, robustness and real-time performance for use in real-world applications.
Design/methodology/approach – A modular approach has been designed that makes use of isophote and gradient features to estimate eye centre locations. This approach comprises two main modalities that progressively reduce global facial features to local levels for more precise inspections. A novel Selective Oriented Gradient (SOG) filter has been specifically designed to remove strong gradients from eyebrows, eye corners and self-shadows, which sabotage most eye centre localisation methods. Tested on the BioID database, the proposed algorithm has shown superior accuracy.
Findings – The eye centre localisation algorithm has been compared with 11 other methods on the BioID database and 6 other methods on the GI4E database. The proposed algorithm outperformed all the other algorithms in comparison in terms of localisation accuracy while exhibiting excellent real-time performance. The method is also inherently robust against head poses, partial eye occlusions and shadows.
Originality/value – The eye centre localisation method utilises two mutually complementary modalities as a novel, fast, accurate and robust approach. In addition to assisting eye centre localisation, the SOG filter can resolve general tasks regarding the detection of curved shapes. From an applied point of view, the proposed method has great potential to benefit a wide range of real-world HCI applications.


Introduction
Eye centre localisation from images and videos has received a considerable amount of attention in the area of HCI through the utilisation of computer vision techniques. Its study is still rapidly expanding due to the increased availability of HCI devices and systems. The ability to accurately localise eye centres promises significant benefits to an HCI system designed to observe its users and capture user attentiveness. With knowledge of user attention and predicted user intentions, an HCI system can make informed decisions and therefore respond to a user in a more intelligent and natural way. Compared to other facial cues of a particular user, such as age and gender, that remain unchanged during an HCI session, eye analysis provides a constant stream of information that can address the dynamic nature of HCI. Furthermore, eye analysis excels in remote and contactless HCI, which provides an ideal channel for elderly people and those with motor disabilities to access HCI systems.
Generally, eye centre localisation methods fall into two main categories: passive methods and active methods. The former are based on inherent features of periocular appearance or geometry, while the latter are based on additive features created by active lighting. More specifically, an additive feature based method projects near-infrared illumination toward the eyes, producing reflections on the corneas commonly referred to as 'glints' (Zhu and Ji, 2005). This type of method has been widely employed in commercial eye trackers (Holmqvist, 2011). Being highly reliant on dedicated devices, it essentially reduces the primary task of eye centre detection to the simpler task of corneal reflection detection. A passive, inherent feature based method is therefore more generalisable, since it employs characteristic features from the eye region itself; this is the type of method we explore in this paper.
Both types of methods have witnessed real-world applications in different areas. For example, smart solutions are available that monitor the gaze directions of a driver in order to identify driver distraction/drowsiness and provide timely alerts (Tawari, Chen and Trivedi, 2014). These driver assistance systems, capable of detecting and acting on driver inattentiveness, are of great value to road safety. Eye/gaze tracking has also been employed for psychological and medical applications (Mele and Federici, 2012). Its ability to reflect human processing of visual information and to indicate levels of cognitive load can contribute to reading comprehension and presentation design. For example, statistics such as eye saccade, gaze direction and gaze duration are valuable user feedback and can therefore lead to adaptive e-learning systems (Rosch and Vogel-Walcutt, 2013) that react and adapt to users' psychological responses. However, several issues still limit eye centre localisation in practice:
1) Lack of accuracy in real-world scenarios. Many research works are tested on controlled databases where ideal illumination, high-resolution images and desirable viewpoints are available. When tested in various types of real scenes with dynamic environmental factors, their performance drops severely.
2) Undesirable real-time performance. As reliable as they might be, sophisticated algorithms often incur large computational cost, rendering them unsuitable for real-time implementations.
3) High dependence on expensive or inconvenient hardware configuration. A high cost and complexity of algorithm implementation will limit the usability and applicability of any method. Inexpensive yet effective methods are in high demand in order to boost real-world applications.
Therefore, resolving these issues can bridge the gaps in the field of eye centre localisation under realistic scenes and will give rise to HCI applications that are more accessible and robust. To this end, our method for eye centre localisation aims to maintain high accuracy on low-resolution images by utilising two types of features, and to increase robustness to head pose and to decrease computational cost by following a global-to-regional scheme. We also design a Selective Oriented Gradient (SOG) filter specifically to strengthen robustness to self-shadowing and interfering facial edges. Moreover, our method is unsupervised and only requires a webcam to function. Therefore it is of great practical value due to its ease of use and implementation.

Related works
Eye centre localisation methods are commonly based on the analysis of geometrical/morphological features (Cuong and Hoang, 2010) that conform to a pre-designed eye model, or on machine learning techniques that train classifiers with appearance features. Others combine these two types of methods to increase the accuracy and robustness (Hansen and Ji, 2010) of eye centre localisation in images containing a wide range of variations (e.g. illumination changes, different head poses, shadows and specularities). Timm and Barth (2011) proposed to localise eye centres by means of gradients. In this approach, periocular geometry was expressed by an objective function that peaked at the centre of a circular object. Despite its capability in dealing with deformation of circular pupil/iris contours, its performance declines in the presence of strong gradients from eyelids, eyebrows, shadows and occluded pupils/irises. This remains an unsolved problem shared by most eye centre localisation methods. Another unsupervised method using geometrical features investigated the self-similarity space, where image regions that maintain particular characteristics under geometric transformations receive high self-similarity scores (Leo et al., 2014). This eye model is derived from the rotational invariance of a pupil/iris region. As a result, extraction of a pupil/iris region directly affects computation of self-similarity scores, because the inclusion of eyebrows and other interfering sharp edges can produce high self-similarity scores that surpass those from the pupil/iris. In addition, isophote patterns have also proved effective as a type of geometrical feature (Valenti and Gevers, 2008). Characterising contours of equal pixel intensity, isophotes are invariant to linear lighting changes as well as in-plane rotations and can thus give excellent eye centre localisation results even under challenging experimental conditions.
Furthermore, isophote features have been combined with the shape regression model by Wei, Pang and Chen (2014) for improved accuracy and robustness.
On the other hand, with regard to machine learning based methods (Alpaydin, 2014), training data exert a critical influence on the performance of the algorithms (Zhu and Ramanan, 2012). More specifically, variations caused by illumination and head rotation have a huge impact on the accuracy and robustness of most algorithms. Inspired by Fisher Linear Discriminant (FLD) analysis (Duda, Hart and Stork, 2012), Kroon, Hanjalic and Maas (2008) designed a linear filter trained on image patches extracted from normalised face images. This method not only considers the high response from a filtered face image, but also examines a rectangular neighbourhood around the estimated eye centre positions, based on the observation that a pupil in an image is formed by a collection of dark pixels within a small region. Another machine learning based method (Niu et al., 2006) focuses on the design of a novel classifier rather than the extraction of representative features. This method introduces a 2D cascade AdaBoost classifier that combines bootstrapping of positive samples and bootstrapping of negative samples (Viola and Jones, 2001). The final localisation of an eye is achieved by fusion of multiple classifiers. More recently, invariant eye patterns were investigated by Ren et al. (2014) in order to facilitate localisation of eyes at arbitrary angles in face images. They constructed a codebook of Scale Invariant Feature Transform (SIFT) features and validated its relative scale and rotation invariance on near-frontal face images rotated from 0 to 180 degrees. In general, unsupervised methods have the advantage of being independent of training data and are therefore less biased toward a certain type of environmental setting. For unsupervised methods, the models characterising eye regions are vital in determining the accuracy and robustness of the resulting algorithms.
Many eye centre localisation methods are designed for frontal faces only and thus deteriorate in the presence of head rotations and/or eye movements. Asadifard and Shanbezadeh (2010) employed a cumulative distribution function (CDF) for adaptive detection of the centre of the pupil in frontal face images. Their approach first extracts the top-left and top-right quarters of a face image as the regions of interest and then filters each region of interest with a CDF. An absolute threshold is defined for the filtering process, given that the pixels in the pupil region are darker than the rest of the eye region. Beyond the limitation to frontal faces, the eye model in this method considers only the intensity values of an eye region in a greyscale image. Only when a complete pupil can be extracted by means of erosion does this method give an accurate estimation of the eye centre. However, in realistic scenes, specularities on a pupil or those caused by a pair of spectacles will split the pupil into several disconnected regions, while self-cast shadows easily change the values calculated by a CDF. Another study on frontal faces (Türkan, Pardas and Cetin, 2007) explored edge projections for eye localisation. With a face image available, their method first defines a rough horizontal position for the eye region according to facial anthropometric relations. After the eye band is cropped, it gathers eye candidate points that are extracted by a high-pass filter of a wavelet transform. A Support Vector Machine (SVM) based classifier (Chang and Lin, 2011) is then used to estimate the probability value for every eye candidate.
This type of method normally requires that all face images are perfectly aligned so that the facial geometry agrees with facial anthropometric relations as the prior knowledge. Any misalignment will cause inconsistency to the features and will thus lead to poor results.
In summary, the attention that eye analysis receives has never faded despite the variety of observed facial features. The ability to localise eye centres, and the exploitation of this ability in practice, offers huge potential for HCI applications. The challenges of this research area largely arise from poor illumination conditions that create shadows and specularities around the eye region. Further complications arise from changes in head pose and eye movement, long-distance scenarios and dependency on dedicated devices or complex system structures. Apart from the studies reviewed in this section, we provide a detailed summary of 11 state-of-the-art methods in section 3 and a comparison with our eye centre localisation method.

Eye centre localisation – an unsupervised modular approach
We propose a hybrid method that can perform accurate and efficient localisation of eye centres in low-resolution images and videos in real time. An overview of the proposed method can be found in Figure 1. The algorithm includes two main modalities, using global isophote features and regional gradient features, respectively. The first modality performs a global estimation of eye centres over a face image and extracts eye regions. Results from the first modality are then fed into the second modality as prior knowledge, which lead to a local and more precise estimation of eye centres. The two energy maps generated by the two main modalities are fused to enable the final estimation of eye centres.

Isophote based global centre voting and eye detection
A pupil and an iris in an image can be represented by contours of equal intensity values, i.e. isophotes (Lichtenauer, Hendriks and Reinders, 2005). Equation (1) is then formulated (Valenti and Gevers, 2008) to calculate displacement vectors which point from pixels to the centres of isophotes they belong to.
$$D(x, y) = -\frac{\{I_x, I_y\}\,(I_x^2 + I_y^2)}{I_y^2 I_{xx} - 2 I_x I_{xy} I_y + I_x^2 I_{yy}} \qquad (1)$$

where $I_x$, $I_y$, $I_{xx}$, $I_{xy}$ and $I_{yy}$ are the first-order and second-order derivatives of the luminance function $I(x, y)$. The importance of each vote is indicated by the curvedness of the isophote, since the circular iris and pupil edges obtain high curvedness values as opposed to flat isophotes. The curvedness (Koenderink and Doorn, 1992) can be calculated as:

$$curvedness = \sqrt{I_{xx}^2 + 2 I_{xy}^2 + I_{yy}^2} \qquad (2)$$

We also consider the brightness of the isophote centre in the voting process, based on the fact that the pupil is normally darker than the iris and the sclera. Therefore, an energy map $E(x, y)$ is constructed that collects all the votes to reflect the eye centre position following equation (3), where $\alpha$ is the maximum greyscale value in the image ($\alpha = 255$ in the experiments).
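To make the voting scheme concrete, the following NumPy sketch accumulates isophote-centre votes over a whole image. It is a minimal illustration, not the paper's exact implementation: the darkness weighting $(\alpha - I)/\alpha$ applied at the vote destination and the suppression of near-flat isophotes are assumptions chosen for clarity.

```python
import numpy as np

def isophote_energy_map(img, alpha=255.0):
    """Accumulate isophote-centre votes (a sketch of equations (1)-(3))."""
    # First- and second-order derivatives of the luminance function I(x, y).
    I = img.astype(float)
    Iy, Ix = np.gradient(I)          # np.gradient returns axis-0 (y) first
    Ixy, Ixx = np.gradient(Ix)
    Iyy, _ = np.gradient(Iy)

    # Equation (1): displacement from each pixel to its isophote centre.
    denom = Iy**2 * Ixx - 2.0 * Ix * Ixy * Iy + Ix**2 * Iyy
    denom = np.where(np.abs(denom) < 1e-9, np.nan, denom)  # flat isophotes: no vote
    mag2 = Ix**2 + Iy**2
    dx = -Ix * mag2 / denom
    dy = -Iy * mag2 / denom

    # Equation (2): curvedness favours curved (pupil/iris) edges over flat ones.
    curvedness = np.sqrt(Ixx**2 + 2.0 * Ixy**2 + Iyy**2)

    h, w = I.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx = np.rint(xs + dx)
    cy = np.rint(ys + dy)
    valid = np.isfinite(cx) & (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
    cxv = cx[valid].astype(int)
    cyv = cy[valid].astype(int)

    # Vote weight: curvedness of the voter times darkness of the targeted
    # centre (the pupil is darker than the iris and sclera).
    weight = curvedness[valid] * (alpha - I[cyv, cxv]) / alpha
    E = np.zeros_like(I)
    np.add.at(E, (cyv, cxv), weight)
    return E
```

On a synthetic dark blob, the votes concentrate at the blob centre, which is where the energy map peaks.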
Our isophote based modality differs from other isophote feature based methods in that it extracts isophote features for the whole face instead of periocular regions. Pixels with intensities below 30% of the maximum value are removed. The lower half of $E(x, y)$ is simply removed, since it is unlikely to contain any eye region information under normal head rotations. For the remaining left and right halves of the energy map, $E_l(x, y)$ and $E_r(x, y)$, we further calculate the energy centre, i.e. the first moment divided by the total energy, which is selected as the optimal eye centre.
Taking $E_l(x, y)$ as an example, this can be formulated as equation (4):

$$C_l = \{cx_l, cy_l\} = \frac{\sum_{x=1}^{n}\sum_{y=1}^{m} (x, y)\, E_l(x, y)}{\sum_{x=1}^{n}\sum_{y=1}^{m} E_l(x, y)} \qquad (4)$$

where $C_l = \{cx_l, cy_l\}$ is the optimal estimation of the left eye centre, and $m$ and $n$ are the maximum row and column numbers in $E_l$. The eye region to be analysed by our second modality is then selected, centred at the optimal eye centre estimation (its width being 1/10 of the face size and its height 1/15 of the face size). As a result, our method does not require an eye detector and is robust to head rotations, since global isophotes are investigated. This process is shown in Figure 2. The isophote features employed by the first modality are extracted at the global level, i.e. from a whole face image rather than a pre-detected eye region. Despite its computational efficiency, it is deemed a coarse estimation due to global variations such as uneven lighting and strong edges from outside the eye region. Therefore, a second modality is needed to analyse gradient features iteratively at the local level, i.e. from within a refined eye region, for enhanced precision. The modality introduced in Section 2.2 is further boosted by an iris radius constraint (Section 2.3) and a SOG filter (Section 2.4).
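The equivalent centroid of equation (4) reduces to a first-moment computation over the energy map, which can be sketched as:

```python
import numpy as np

def energy_centroid(E):
    """Equivalent centroid of an energy map (equation (4)):
    first moment divided by total energy. Returns (cx, cy)."""
    total = E.sum()
    ys, xs = np.mgrid[0:E.shape[0], 0:E.shape[1]]
    return float((xs * E).sum() / total), float((ys * E).sum() / total)
```

For a single-peak map the centroid coincides with the peak; for a uniform map it sits at the geometric centre.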

Gradient based eye centre estimation
A pupil and an iris can also be characterised by radial vectors pointing inward to an eye centre. Specifically, prominent gradient vectors on a circular iris/pupil boundary should agree with the radial directions, so the dot product of each gradient vector with its corresponding radial vector is maximised. This is formulated by Timm and Barth (2011) as:

$$c^{*} = \arg\max_{c} \left\{ \frac{1}{N} \sum_{x=1}^{n} \sum_{y=1}^{m} w_c \left( d(x, y)^{\top} g(x, y) \right)^2 \right\} \qquad (5)$$

where $c$ denotes the centre candidates, $c^{*}$ is the optimal centre, $N$ is the number of pixels in the eye region to be analysed, $d(x, y)$ is the displacement vector connecting a centre candidate $c$ and $(x, y)$, which is a pixel different from $c$, $g(x, y)$ is the gradient vector, $w_c$ is a weight derived from the intensity value $I$ at an isophote centre, and $m$ and $n$ have the same definitions as in the preceding subsection. The displacement vectors and gradient vectors are both normalised to unit vectors. We only preserve the gradients whose directions are opposite to the corresponding displacement vectors, based on the fact that the pupil should be darker than its neighbouring regions and thus generates inward gradients. A sample implementation of this approach is illustrated in Figure 3. To resolve the challenges posed by strong shadows inside an eye region and sharp edges outside it, we introduce a radius constraint and design a SOG filter that can effectively deal with the circularity measure and automatically remove strong gradients from eyebrows and eyelids.
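A brute-force sketch of this dot-product objective follows. The percentile-based gradient pruning is an illustrative choice, and the sign convention is one of the two possible readings: here the displacement points from the candidate outward, so a dark pupil produces boundary gradients aligned with it.

```python
import numpy as np

def gradient_energy_map(img, keep_quantile=70):
    """Score every centre candidate by the mean squared dot product of unit
    displacement and unit gradient vectors (in the style of equation (5))."""
    I = img.astype(float)
    gy, gx = np.gradient(I)
    mag = np.hypot(gx, gy)
    keep = mag > np.percentile(mag, keep_quantile)   # prominent gradients only
    ys, xs = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    px, py = xs[keep], ys[keep]
    ux, uy = gx[keep] / mag[keep], gy[keep] / mag[keep]  # unit gradients
    E = np.zeros_like(I)
    for cy in range(I.shape[0]):
        for cx in range(I.shape[1]):
            dx, dy = px - cx, py - cy
            n = np.hypot(dx, dy)
            n[n == 0] = 1.0
            # Dark pupil: intensity increases outward, so the gradient aligns
            # with the outward displacement; discard the opposite-sign dots.
            dot = np.maximum((dx / n) * ux + (dy / n) * uy, 0.0)
            E[cy, cx] = np.mean(dot ** 2)
    return E
```

At the true centre every boundary gradient is radial, so the squared dot products approach 1 and the map peaks there.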

Iris radius constraint
We introduce an iris radius constraint based on the assumption that shadows and eyebrow segments have random radius values, while the iris radii are relatively constant with respect to the size of a face. This provides a way to differentiate circular clusters of various radii and to determine their weights in energy map accumulation. The function for the significance measure emulates the frequency response of a Butterworth low-pass filter, as in equation (7):

$$w_r(x, y) = \frac{1}{1 + \left( \frac{\|d(x, y)\|_2 - D}{\omega} \right)^{2\sigma}} \qquad (7)$$

where $\|d(x, y)\|_2$ is the $L_2$ norm of the displacement vector without being normalised to a unit vector, $D$ is the estimated radius of the iris, and $\sigma$ and $\omega$ correspond to the order and the cutoff frequency of the filter. The curves corresponding to varying $\sigma$ and $\omega$ following equation (7) are shown in Figure 4. The radius weight function is maximally flat around the estimated radius $D$ and drops rapidly when the radius falls outside the flatness band. Increasing $\omega$ while decreasing $\sigma$ will enhance the rigidity of the constraint, which is desirable in circumstances where strong shadows are present. $D$ is a value relative to the size of the face region. This objective function allows adjustable coefficients that effectively alleviate the problems posed by edges around eyelids, eye corners and shadows.
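One plausible reading of this Butterworth-style weight, with $D$, $\omega$ and $\sigma$ as defined above, can be sketched as follows (the exact form is given by equation (7); the absolute value is used here so that non-integer orders remain well defined):

```python
import numpy as np

def radius_weight(r, D, omega, sigma):
    """Butterworth-style significance weight: maximally flat around the
    expected iris radius D; omega sets the cutoff width of the flatness
    band and sigma the order (steepness of the roll-off)."""
    return 1.0 / (1.0 + (np.abs(r - D) / omega) ** (2 * sigma))
```

The weight is 1 at the expected radius and decays monotonically with distance from it, so circular clusters of implausible radius contribute little to the energy map.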

SOG filter
We specifically design a novel SOG filter that discriminates gradients of rapid change in orientation from those of little change. It is tailored to reinforce the two main modalities despite its versatile applicability. The basic idea takes the form of a statistical analysis of gradient orientations within a window centred at a pixel position. For each $D_x \times D_y$ window centred at pixel $i$, the gradients in the $x$ and $y$ directions are calculated, whose orientations follow:

$$\theta(x, y) = \arctan\left( \frac{g_y(x, y)}{g_x(x, y)} \right) \qquad (8)$$

We then stack gradient orientations into $k$ ($k < 360$) orientation bins, where bin $b$ contains the count of the orientations from $b \cdot \frac{360}{k}$° to $(b + 1) \cdot \frac{360}{k}$° ($0 \le b \le k - 1$) within the window. If the count recorded in a bin exceeds a threshold, the corresponding pixels that accumulate the bin will have their gradient vectors halved, i.e. their weights reduced. As a result, the objective function becomes:

$$c^{*} = \arg\max_{c} \left\{ \frac{1}{N} \sum_{x=1}^{n} \sum_{y=1}^{m} w_s(x, y) \left( d(x, y)^{\top} g(x, y) \right)^2 \right\} \qquad (9)$$

where $w_s(x, y)$ is the weight of a gradient vector adjusted by the SOG filter.
The threshold for the counts is determined by an absolute value as well as a value relative to the number of pixels with nonzero gradients within the window. As a result, the pixels that maintain similar gradient orientations to their neighbours will have their weights reduced and they are referred to as 'impaired pixels' in the rest of the paper. When a curve has low local curvature, it comprises more 'impaired pixels'. Figure 5 demonstrates the effectiveness of a SOG filter applied to an image containing irregular curves and an image of an eye region. It is shown in Figure 5 that the SOG filter has successfully distinguished curves with low and high curvatures and that it is effective in dealing with intersected and occluded curves or curve segments. In the eye image, the gradients in the eyelid and shadowed eye pouches are detected as 'impaired pixels' whose weights are to be reduced in the accumulation of the energy map while gradients around the iris and the pupil maintain their original weights. This approach resolves challenges brought by shadows, facial makeup and edges on eyelids, eyebrows and other facial parts outside the iris, which are extremely problematic for most geometrical feature based eye centre localisation approaches.
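A sketch of the SOG weighting follows. The window size, bin count and the purely relative threshold are illustrative parameters; as described above, the paper combines an absolute threshold with a relative one.

```python
import numpy as np

def sog_weights(img, win=7, k=36, thresh=0.5):
    """SOG filter sketch: within each window, histogram gradient orientations
    into k bins; pixels whose orientation falls in an over-populated bin
    (low local curvature) get their gradient weight halved."""
    I = img.astype(float)
    gy, gx = np.gradient(I)
    nonzero = np.hypot(gx, gy) > 1e-9
    theta = np.degrees(np.arctan2(gy, gx)) % 360.0
    bins = (theta // (360.0 / k)).astype(int)
    h, w = I.shape
    weights = np.ones_like(I)
    r = win // 2
    for y in range(h):
        for x in range(w):
            if not nonzero[y, x]:
                continue
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            nb = bins[y0:y1, x0:x1][nonzero[y0:y1, x0:x1]]
            count = np.count_nonzero(nb == bins[y, x])
            if count > thresh * nb.size:    # relative threshold on the bin count
                weights[y, x] = 0.5         # 'impaired pixel': weight halved
    return weights
```

On a straight step edge, every gradient in the window shares one orientation bin, so those pixels are impaired, while pixels in flat regions are left untouched.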

Energy map integration
In the final stage, the two energy maps produced by the two modalities are integrated into a final map $E(x, y)$, so that both contribute to the election of the eye centre. It is critical, prior to the integration, to determine the confidence of each modality, to estimate the complexity of the eye image, and thus to determine their weights in the fusion mechanism.
The left eye region is taken as an example to illustrate the fusion mechanism. If the equivalent centroid $C_l$ calculated by equation (4) is close to the pixel position $C_{maxl}$ that has the maximum value in the first energy map $E_l(x, y)$, $C_l$ is considered confident, since the isophote centre and the equivalent centroid coincide. In this case, more gradients from the pupil/iris are present, allowing the second modality to be more robust and precise. The two main modalities are then utilised and fused following equation (10), where $0 < \|C_l - C_{maxl}\|_2 < \epsilon$, with $\epsilon$ taking a value relative to the width of the eye region. In our experiments, $\epsilon$ is set to 0.3 of the eye region width in pixels; the coefficient of 0.3 is an empirical value found to be suitable, and the resulting $\epsilon$ corresponds to the average diameter of the iris in our experiments. When $C_l$ and $C_{maxl}$ disagree with a large Euclidean distance, the first energy map will have high-energy clusters sparsely distributed, potentially caused by severe shadows and specularities, and the second modality will be influenced by 'impaired pixels' and produce erroneous centre estimates. Therefore, only the equivalent centroids $C_l$ and $C_r$ are selected to be the final eye centres.
The maximum response in the final energy map will represent the estimated eye centre. Estimation of the final right eye centre follows the same approach.
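The fusion logic can be sketched as below. The equal-weight sum of the normalised maps stands in for the exact weighting of equation (10), and is an assumption made for illustration.

```python
import numpy as np

def fuse_and_pick(E1, E2, centroid, eps):
    """Fusion sketch: trust both modalities only when the equivalent centroid
    of the first map agrees with that map's maximum; otherwise fall back on
    the centroid alone. Returns the (x, y) eye centre estimate."""
    my, mx = np.unravel_index(np.argmax(E1), E1.shape)
    cx, cy = centroid
    if np.hypot(cx - mx, cy - my) < eps:
        # Agreement: fuse the normalised maps and take the maximum response.
        F = E1 / max(E1.max(), 1e-9) + E2 / max(E2.max(), 1e-9)
        fy, fx = np.unravel_index(np.argmax(F), F.shape)
        return int(fx), int(fy)
    # Disagreement suggests shadows/specularities: keep the centroid.
    return int(round(cx)), int(round(cy))
```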

Eye centre localisation experiments and results
We employed two publicly available datasets for algorithm evaluation, i.e. the BioID dataset and the GI4E dataset. The BioID dataset (BioID Technology Research, 2001) consists of 1520 images with ground truth data of eye centre coordinates. It has been popular in the literature for the evaluation of eye centre localisation algorithms. The variations in this dataset include illumination, face scale, head pose and the presence of glasses. The GI4E dataset (Villanueva et al., 2013) contains images of 103 subjects with 12 different gaze directions, so the impact of eye and head movement on the proposed algorithm can be evaluated. The relative error measure proposed by Jesorsky, Kirchberg and Frischholz (2001) was used to evaluate the accuracy of the algorithm. This approach first calculates the absolute error, i.e. the Euclidean distance between the centre estimates and the ground truth provided by the dataset, and then normalises this distance by the pupillary distance. This is formulated by equation (12):

$$e = \frac{\max\left( \| \tilde{C}_l - C_l \|_2,\; \| \tilde{C}_r - C_r \|_2 \right)}{\| C_l - C_r \|_2} \qquad (12)$$

where $\tilde{C}_l$ and $\tilde{C}_r$ are the estimated left and right eye centres and $C_l$ and $C_r$ are the corresponding ground truth positions.
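The relative error measure can be computed directly from the two estimates and the two ground truth positions:

```python
import numpy as np

def jesorsky_error(est_l, est_r, gt_l, gt_r):
    """Normalised error of Jesorsky et al. (2001): the worse of the two
    eye-centre errors divided by the true pupillary distance."""
    d_l = np.hypot(est_l[0] - gt_l[0], est_l[1] - gt_l[1])
    d_r = np.hypot(est_r[0] - gt_r[0], est_r[1] - gt_r[1])
    return max(d_l, d_r) / np.hypot(gt_l[0] - gt_r[0], gt_l[1] - gt_r[1])
```

With a pupillary distance of 100 pixels, a 5-pixel error on the worse eye gives e = 0.05, the strictest threshold commonly reported.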
Evaluation results on the BioID dataset are shown in Figure 6, where representative examples of accurately and inaccurately localised eye centres are demonstrated, as well as the overall accuracy curve on this dataset. The proposed algorithm is further compared with 11 other methods in the literature, summarised in Table 1. Note: Values with a '*' notation are not explicitly provided by the author(s) but are measured from the accuracy curves available. Those with a '\' notation are neither explicitly nor implicitly provided by the author(s). The numbers in bold are the highest accuracy in their corresponding ranges and those underlined the second highest. The accuracy measure for $e \le 0.25$ does not contribute to scores since very similar results have been achieved by all the methods.
The proposed method gains the best results for the accuracy measures $e \le 0.10$ and $e_{max} \le 0.25$, and the second best for $e_{max} \le 0.05$ and $e_{max} \le 0.10$. Except for the accuracy measure $e \le 0.25$, where very similar results have been achieved by all the listed methods, a score of 2 is assigned to every first rank and a score of 1 to every second rank. The proposed method gains a total score of 6, outperforming all the other methods in comparison. Similar to the experiments by Villanueva et al. (2013) and Baek et al. (2013), we also evaluated the proposed method on the GI4E dataset and compared it to 6 other methods. As shown by Figure 7, the proposed method outperforms all the other methods in comparison and proves to be robust against eye/head movement, achieving 97.9% accuracy for $e_{max} \le 0.05$. Tests on video data also validated the superior accuracy and efficiency of the proposed method. Results for a few representative image frames from a video recording can be found in Figure 8, which further reflect the accuracy and robustness of gradient features against facial accessories, dynamic eye morphology and occlusion. In addition, we further evaluated the robustness of the proposed method against head poses and eye occlusions in challenging illumination conditions. In this test, an under-illuminated environment was created by two near-infrared (NIR) lamps while a NIR bandpass filter was mounted on the camera lens. An over-exposed condition was simulated by adjusting camera settings (i.e. exposure and gain). Results can be found in Figure 9, which displays representative frames captured during this test.
We further demonstrate the simplicity and efficiency of the proposed method by comparing it with Timm and Barth (2011), which claims excellent real-time performance as one of its key features. Take the image containing a 41 × 47 eye region, i.e. 1927 pixels, as an example (Figure 3): the method in comparison performs per-pixel estimation of the eye centre, assuming that every pixel is an eye centre candidate, so 1927 iterations are needed before the optimal candidate can be selected. The proposed method, on the other hand, utilises the prior knowledge from the first modality which, through an initial estimation, avoids the per-pixel candidate assumption. The removal of low-energy pixels in the first modality largely reduces the number of candidates, i.e. the number of iterations in the second modality. For the same sample image of the 41 × 47 eye region, the iterations decrease to only 67, making our algorithm 29 times faster. With a webcam running at 30 frames per second, the proposed algorithm is capable of localising eye centres in detected faces accurately in real time.

Discussion and conclusion
This paper introduces an unsupervised modular approach for eye centre localisation, suitable for HCI system implementations. The novelty of this approach can be summarised as follows: 1) The proposed modular scheme is different from the majority of existing approaches that follow the conventional three stages, i.e. pre-processing, feature extraction and classification. In fact, the two main modalities in our method are based on different eye models and employ different types of features. As a result, the proposed method can benefit from the relative rotational invariance brought by the isophote features, as well as the iterative circularity measurements by the gradient features.
2) Most existing methods require the eye regions to be located prior to eye centre localisation. This is commonly achieved by image cropping according to facial anthropometrics or pre-trained eye detectors. While the former type of approach cannot deal with different head poses, the latter type is complex and often yields inaccurate results in undesirable environments. In contrast, the proposed method first analyses features at the global level such that no eye detector is needed. Subsequently, global features are processed for regional analysis, meaning that the local features are naturally robust to head poses.
3) The designs of the iris radius constraint and the SOG filter are tailored to this modular scheme. They effectively deal with interfering edges and shadows, which severely sabotage most eye centre localisation algorithms.
4) The interaction of the two main modalities not only results in increased accuracy and robustness, but also reduces the computational load by filtering eye centre candidates before performing iterative analysis.
A number of experiments have been carried out to demonstrate the effectiveness of the proposed method on different datasets, under different illuminations, and in the presence of head poses and eye occlusions. Tested on the BioID dataset, the proposed method outperformed the 11 other methods in comparison with the highest localisation accuracy. Tested on the GI4E dataset, the proposed method achieved the highest accuracy compared to 6 other methods. The proposed method offers great potential for implementations of a variety of HCI systems and will benefit the development of assistive technologies for elderly people and those with motor disabilities.