Online learning has gained popularity in recent years. However, with online learning, teacher observation and intervention are lost, creating a need for technologically observable characteristics that can compensate for this limitation. The present study used a wide array of sensing mechanisms, including eye tracking, galvanic skin response (GSR) recording, facial expression analysis, and summary note-taking, to monitor participants while they watched and recalled an online video lecture. We explored the link between these human-elicited responses and learning outcomes as measured by quiz questions. Results revealed GSR to be the best indicator of the challenge level of the lecture material. Yet eye tracking and GSR remain difficult to capture when monitoring online learning, as the requirement to remain still impacts natural behavior and leads to more stoic and unexpressive faces. Continued work on methods ensuring naturalistic capture is critical for broadening the use of sensor technology in online learning, as are ways to fuse these data with other input, such as structured and unstructured data from peer-to-peer or student-teacher interactions.

Smart spaces are
typically augmented with devices capable of sensing various inputs
and reacting to them. Data from these devices can be used to
support system adaptation, reducing user intervention; however,
mapping sensor data to user intent is difficult without a large
amount of human-labeled data. We leverage the capabilities of
head-mounted immersive technologies to actively capture users’
visual attention in an unobtrusive manner. Our contributions are
three-fold: (1) we developed a novel prototype that enables
studies of user intent in an immersive environment, (2) we
conducted a proof-of-concept experiment to capture internal and
external state data from smart devices together with head
orientation information from participants to approximate their
gaze, and (3) we report on both quantitative and qualitative
evaluations of the data logs and pre- /post-study survey data
using machine learning and statistical analysis techniques. Our
results motivate the use of direct user input (e.g., gaze inferred from head orientation) in smart home environments to infer user intent, allowing us to train better activity recognition algorithms. In addition, this initial study paves the way for repeatable experimentation with smart space technologies at a lower cost.
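Contribution (2) approximates gaze from head orientation; the sketch below illustrates one way such a mapping could work. It is a minimal, hypothetical Python illustration, not the study's actual pipeline: the device positions, the 15-degree threshold, and all names are assumptions. It treats the headset's forward vector as a gaze proxy and labels the smart device with the smallest angular offset.

import math

# Hypothetical device positions (x, y, z) in the room's coordinate frame;
# a real deployment would obtain these from the smart space itself.
DEVICES = {
    "lamp":       (2.0, 1.0, 0.5),
    "thermostat": (-1.5, 1.4, 0.0),
    "speaker":    (0.0, 0.8, 3.0),
}

def forward_vector(yaw_deg, pitch_deg):
    """Unit vector for the head's forward direction (yaw/pitch in degrees)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def angle_between(u, v):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    # Clamp to guard against floating-point drift outside acos's domain.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (norm_u * norm_v)))))

def attended_device(head_pos, yaw_deg, pitch_deg, threshold_deg=15.0):
    """Return the device nearest the gaze proxy, or None if all fall outside the threshold."""
    gaze = forward_vector(yaw_deg, pitch_deg)
    best, best_angle = None, threshold_deg
    for name, pos in DEVICES.items():
        to_device = tuple(p - h for p, h in zip(pos, head_pos))
        offset = angle_between(gaze, to_device)
        if offset < best_angle:
            best, best_angle = name, offset
    return best

# Example: a user standing at the origin, head turned right and tilted down,
# resolves to the lamp.
print(attended_device(head_pos=(0.0, 1.6, 0.0), yaw_deg=70.0, pitch_deg=-15.0))

Labels produced this way could serve as weak supervision for an activity recognition model, reducing the amount of hand-labeled data the abstract identifies as a bottleneck.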