Real-time face tracking for video streams

Glazfit face tracking

The core of Glazfit products is 3D face tracking, a cutting-edge technology that detects and tracks facial points (or keypoints) and estimates the 6-Degrees-of-Freedom (DoF) pose, that is, the three-dimensional position and three-dimensional orientation of faces in images or video frames. Unlike other face tracking solutions, Glazfit technology targets Web-based applications and is optimized for virtual glasses try-on.

Face tracking features


Standard Camera

The algorithms run with any standard camera type (including cheap webcams and mobile-phone cameras) and any common image or video format
Accurate Results

The algorithms return accurate results, for example 49 facial points (optional) and the 6-Degrees-of-Freedom (DoF) pose, i.e. the 3D orientation and translation of the face

Wide Angle

The algorithms work over a very wide angle, up to 180 degrees: users are free to move from left to right or top to bottom while tracking remains stable

Robust and stable

The algorithms perform robustly under various challenging conditions, such as low resolution (faces as small as 20x20 pixels), difficult illumination, partial occlusion, and far-away faces
State-of-the-Art Algorithms

The tracker builds on state-of-the-art computer vision algorithms, such as automatic face detection, face alignment, face tracking, and 2D/3D fitting solutions

Real-time on Multiple Platforms

The algorithms perform in real time on the most common platforms: web browsers (Chrome, Firefox, Microsoft Edge, Safari) and mobile devices
Face detection example

68 landmarks for an image in the 300-W dataset (source: ibug)

3D face modeling example

3D face modeling: 51 facial points on a synthetic image, and its renderings at different orientations.


Glazfit detection/tracking algorithms rely on a large face dataset (about 10K images). Starting from public datasets available in the computer vision community, we apply 3D face modeling algorithms to build a 3D face model from each 2D image, then render the model at different orientations (yaw, pitch and roll) to create a much larger synthetic dataset.
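The augmentation step above can be sketched as follows. This is a hypothetical illustration, not the Glazfit pipeline: the 3D face model is reduced to a handful of 3D landmarks, and the angle grid, orthographic projection, and function names are all assumptions made for the example.

```python
import itertools
import numpy as np

def euler_to_matrix(yaw, pitch, roll):
    """Rotation matrix from yaw (y axis), pitch (x axis), roll (z axis), in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Ry @ Rx @ Rz

def synthesize_views(landmarks3d, angle_grid):
    """Rotate 3D landmarks over an angle grid and project each copy to 2D."""
    views = []
    for yaw, pitch, roll in angle_grid:
        rotated = landmarks3d @ euler_to_matrix(yaw, pitch, roll).T
        views.append(rotated[:, :2])  # drop depth: simple orthographic projection
    return views

# A toy "face model" of 5 3D points; a real model would have 51+ vertices.
model = np.array([[0, 0, 0], [-1, 1, 0.5], [1, 1, 0.5],
                  [-1, -1, 0.5], [1, -1, 0.5]], dtype=float)
# 3 yaw x 3 pitch x 3 roll angles -> 27 synthetic 2D views from one model.
grid = list(itertools.product(np.deg2rad([-30, 0, 30]), repeat=3))
views = synthesize_views(model, grid)
```

Each source image thus yields many training views, which is how rendering at varied orientations inflates a 10K-image dataset.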

We combine machine learning (regression-based learning) and computer vision (local feature representation) techniques to learn detection/tracking models from this large dataset; this helps avoid overfitting and keeps the tracking system consistent across face orientations. A Kalman Filter (KF) is also used to smooth the trajectory of facial movements and stabilize the tracking system.
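As an illustration of the smoothing step, a constant-velocity Kalman filter applied independently to one landmark coordinate might look as follows. This is a minimal sketch, not Glazfit's actual filter; the noise parameters q and r are assumptions.

```python
import numpy as np

class Kalman1D:
    """Constant-velocity Kalman filter for a single landmark coordinate."""

    def __init__(self, q=1e-3, r=4.0):
        self.x = np.zeros(2)                 # state: [position, velocity]
        self.P = np.eye(2)                   # state covariance
        self.F = np.array([[1.0, 1.0],       # position += velocity each frame
                           [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])      # we observe position only
        self.Q = q * np.eye(2)               # process noise (assumed)
        self.R = np.array([[r]])             # measurement noise (assumed)
        self.initialized = False

    def step(self, z):
        if not self.initialized:             # start at the first measurement
            self.x[0] = z
            self.initialized = True
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the new measurement z.
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R    # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return float(self.x[0])                    # smoothed position

# Smooth the noisy x-coordinate of one landmark across 50 frames.
rng = np.random.default_rng(0)
truth = np.linspace(100.0, 120.0, 50)        # landmark drifting to the right
noisy = truth + rng.normal(0.0, 2.0, size=truth.size)
kf = Kalman1D()
smoothed = np.array([kf.step(z) for z in noisy])
```

In a tracker, one such filter would run per landmark coordinate (or a joint multi-dimensional filter over all points), trading a little latency for a visibly steadier overlay.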