In this work, we propose a framework that enables the use of
facial motion capture data as a means of user interface.
Advances in facial motion capture technology have enabled real-
time, markerless facial tracking. Although these systems were
originally designed to drive the motions of a virtual character,
the fairly extensive data they capture could just as well be used
to drive other applications.
By utilizing motion capture
data as user interface signals, we provide a more general means of
hands-free application control, allowing a wider variety of facial
signals to be used as input.
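
As a rough illustration of the idea, the sketch below is a minimal Python example that maps a single frame of facial-tracking output (here, hypothetical normalized blendshape-style weights) onto application commands by thresholding. The signal names, thresholds, and commands are invented for illustration and do not reflect the framework's actual interface.

from typing import Dict, List

# Hypothetical mapping from a facial signal name to a trigger
# threshold and the application command it should fire.
SIGNAL_TO_COMMAND = {
    "mouth_open": (0.7, "select"),
    "brow_raise": (0.6, "scroll_up"),
    "smile":      (0.8, "confirm"),
}

def frame_to_commands(signals: Dict[str, float]) -> List[str]:
    """Return the commands triggered by one frame of tracking data."""
    commands = []
    for name, (threshold, command) in SIGNAL_TO_COMMAND.items():
        if signals.get(name, 0.0) >= threshold:
            commands.append(command)
    return commands

if __name__ == "__main__":
    # One simulated frame of markerless tracking output.
    frame = {"mouth_open": 0.82, "brow_raise": 0.15, "smile": 0.4}
    print(frame_to_commands(frame))  # -> ['select']

In practice, the same pattern generalizes: any stream of per-frame facial signals can be routed through a user-defined mapping to drive arbitrary application controls rather than a virtual character.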