How It Works

Humans use many non-verbal cues, such as facial expressions, gestures, body postures, and vocal tone, to express emotions. The emotions conveyed by these non-verbal cues tend to be more genuine and more accurate than those expressed verbally, because they are less filtered and less subject to bias.

The human face in particular provides a rich canvas of emotion. Opsis measures and reports facial expressions and their associated emotions using computer vision algorithms and machine learning techniques.

Our software tracks the head and a large number of facial features responsible for expressions. These features are then mapped to emotions in real time.
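As a rough illustration of the tracking step, the sketch below extracts facial landmark coordinates from a video frame using the open-source MediaPipe Face Mesh model. This is only an analogy under the assumption that tracking produces per-frame feature coordinates; it is not Opsis's own tracker or feature set.

```python
import cv2
import mediapipe as mp

# Open-source landmark tracker used purely for illustration;
# Opsis's own tracking models are not shown here.
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)

def extract_landmarks(frame_bgr):
    """Return a list of (x, y) facial landmark coordinates, or None if no face is found."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    results = face_mesh.process(rgb)
    if not results.multi_face_landmarks:
        return None
    # Normalised image coordinates of the tracked facial features;
    # these would feed the emotion-mapping stage.
    return [(p.x, p.y) for p in results.multi_face_landmarks[0].landmark]
```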

Opsis's unique approach does not assign expressions to discrete emotion categories; instead, it represents them in a continuous space of emotions. This enables a much more precise and fine-grained representation of expressions.
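To make the contrast concrete, the sketch below represents an expression as a point in a continuous valence-arousal space and shows how collapsing it to the nearest discrete label discards nuance. The choice of valence-arousal coordinates and the category anchors are illustrative assumptions, not Opsis's actual emotion space or output.

```python
import math
from dataclasses import dataclass

@dataclass
class EmotionPoint:
    """A facial expression as a point in a continuous valence-arousal space."""
    valence: float  # -1 (negative) .. +1 (positive)
    arousal: float  # -1 (calm)     .. +1 (excited)

# Illustrative category anchors (not Opsis's taxonomy).
CATEGORY_ANCHORS = {
    "happy":   EmotionPoint(0.8, 0.5),
    "sad":     EmotionPoint(-0.7, -0.4),
    "angry":   EmotionPoint(-0.6, 0.7),
    "relaxed": EmotionPoint(0.6, -0.5),
}

def nearest_category(p: EmotionPoint) -> str:
    """Collapse a continuous estimate to a single discrete label (loses information)."""
    return min(
        CATEGORY_ANCHORS,
        key=lambda name: math.dist(
            (p.valence, p.arousal),
            (CATEGORY_ANCHORS[name].valence, CATEGORY_ANCHORS[name].arousal),
        ),
    )

# A mildly positive, low-energy expression keeps its nuance in continuous form...
estimate = EmotionPoint(valence=0.3, arousal=-0.2)
# ...but is flattened to one label when forced into categories.
print(nearest_category(estimate))  # "relaxed"
```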

All that is required is a standard webcam or other video source.
We plan to add speech and gesture analysis in the near future.
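As a minimal sketch of the video input step, the example below reads frames from a webcam or any video file with OpenCV; the device index and the placeholder where per-frame analysis would run are assumptions for illustration only.

```python
import cv2

def stream_frames(source=0):
    """Yield frames from a webcam (device index 0) or any video file/stream OpenCV can open."""
    capture = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            yield frame
    finally:
        capture.release()

for frame in stream_frames(0):  # 0 = default webcam; a file path or URL also works
    # The landmark extraction and emotion mapping sketched above would run here.
    pass
```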

Find out more about our measurements and solutions.