Project 1 - HVI: What, When, How: A Sensor-based Driver Awareness System to Improve Human-Computer Interaction (CHI LBW 2016, T-SET UTC)
The goal of this project is to help drivers safely interact with ubiquitous HCI demands and benefit from proactive information services in cars. Our prior and ongoing projects have primarily explored the ‘interruptive’ nature of ubiquitous HCI demands in cars, rigorously addressing the issue of when to intervene by using our sensor-based assessment technologies that estimate drivers’ cognitive load in near real-time. In this project, we extend the key technologies from those projects to help users understand how an in-car system is interpreting the state of the world around them. We aim to support intelligibility of system behavior in cars, including the issues of what intervention to make and how to intervene.
Figure 5. A single Android application that collects real-time sensor data streams from a range of sensors (sensing module), estimates driver/driving states in real-time (computing module), and delivers multimodal information to drivers (feedback module)
Description
In this project, we collect large sensor data streams from a minimally intrusive set of wearable or Internet-of-Things sensors, worn by vehicle users and/or embedded in vehicles, including everyday smart devices (Figure 5, left). During a set of human-subject experiments in naturalistic field driving situations, we are investigating how drivers interact with proactive adjustments of HCI demands initiated by system intelligence rather than user demand. We consider presentation methods and types of interaction schemes across human visual, auditory, and haptic sensory channels (Figure 5, right). The near-term goal is to create a smarter, contextually intelligent cyber-physical system that supports intelligibility of system behavior. These experiments will provide a set of sensor-based real-time models of drivers’ cognitive load, user interruptibility, and user experience of proactive information services. Ultimately, these technologies will help drivers safely interact with proactive interventions of machine intelligence in futuristic cars (e.g., intelligent automotive physical systems such as self-driving cars).
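As an illustration (not the actual implementation), the sensing, computing, and feedback modules in Figure 5 could be wired together in a single once-per-second loop roughly as sketched below; the class names, the heuristic load estimate, and the modality choice are assumptions made for the example.

```python
# Minimal sketch, under assumed interfaces: one loop chaining the sensing,
# computing, and feedback modules described in Figure 5.
import time
import random


class SensingModule:
    """Collects one synchronized frame of sensor readings (simulated here)."""
    def read_frame(self):
        return {
            "heart_rate": random.gauss(75, 5),      # wearable sensor (bpm)
            "steering_angle": random.gauss(0, 2),   # OBD / vehicle sensor (deg)
        }


class ComputingModule:
    """Estimates a coarse driver state from the latest frame."""
    def estimate_state(self, frame):
        # Placeholder heuristic: elevated heart rate plus large steering
        # corrections serve as a proxy for high cognitive load.
        load = 0.6 * (frame["heart_rate"] > 85) + 0.4 * (abs(frame["steering_angle"]) > 5)
        return "high_load" if load >= 0.5 else "low_load"


class FeedbackModule:
    """Chooses a presentation modality based on the estimated state."""
    def deliver(self, state, message):
        modality = "haptic" if state == "high_load" else "visual+auditory"
        print(f"[{modality}] {message}")


def main_loop(seconds=5):
    sensing, computing, feedback = SensingModule(), ComputingModule(), FeedbackModule()
    for _ in range(seconds):
        frame = sensing.read_frame()
        state = computing.estimate_state(frame)
        feedback.deliver(state, "Upcoming exit in 500 m")
        time.sleep(1)  # run once per second


if __name__ == "__main__":
    main_loop()
```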
Project 2: Sensor-based Assessment of In-Situ Driver Interruptibility (CHI 2015, T-SET UTC)
Figure 1. Detecting driver interruptibility every second through monitoring of peripheral interaction states
Description
The goal of this project is to create an in-car information system that adapts the delivery timings of HCI demands to drivers based on in-situ driving and cognitive load models for safe navigation. Our latest work examined situations in which drivers spontaneously enter high cognitive load states and then identified when a driver can be interrupted (e.g., push notifications can be delivered to the driver). We estimated driver/driving states in real-time by examining a broad range of sensor data streams (e.g., motion capture, peripheral interaction monitoring, psycho-physiological responses, etc.). The study presented a model-based driver/driving assessment to mediate interruptions safely and with minimal increase in driver workload.
In our latest work, we collected sensor data from 25 drivers during naturalistic driving (approx. 1.25 hours per driver). To obtain the data, we equipped participants’ cars with an on-board diagnostics (OBD) device. Participants wore five sensor devices: four accelerometer sensors for capturing body motion and one chest belt sensor for tracking physiological responses (see Figure 1a). We installed two smartphones in each car – one on the front windshield to record traffic in adjacent lanes and the other on the passenger seat headrest to detect drivers’ activities. We obtained information from the OBD device and body-worn sensors via Bluetooth and logged sensor data streams in real-time. In total, we extracted 152 sensor features (OBD: 72; accelerometer sensors: 40; physiological sensor: 40) and 5 manually annotated features related to traffic from videos (one car driving state and four traffic states around the vehicle, i.e., front, right, left, and oncoming traffic).
We used the instances of drivers engaging in peripheral interactions as moments of ground truth for drivers’ split attention while managing the interruption. As a result, we demonstrated that the sensor data can be used to build a machine learning classifier that determines interruptibility every second with 94% accuracy. The average classification accuracy of the individual models of our drivers was 94.9% (SD=2.6%, range: 90.2% to 98.2%), which was similar to the population model (94.3%). Accuracy for each driver was greater than 90% (see Figure 1b). We also identified sensor features that best explained the states in which drivers performed peripheral interactions and contributed to high system performance. Based on our findings, we proposed a classifier that could be used to build systems that mediate when drivers use technology to self-interrupt and when technology interrupts drivers.
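To make the modeling step concrete, the sketch below shows how a per-second interruptibility classifier of this kind could be trained and evaluated; the synthetic data, the random-forest model, and the cross-validation scheme are assumptions for illustration and do not reproduce the study’s exact pipeline.

```python
# Hedged sketch: training a per-second interruptibility classifier from windowed
# sensor features. The data are synthetic stand-ins; the study itself used 152
# OBD / accelerometer / physiological features plus 5 annotated traffic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_seconds, n_features = 4500, 157          # ~1.25 h of driving, 157 features total
X = rng.normal(size=(n_seconds, n_features))
# Ground-truth labels: 1 = driver engaged in a peripheral interaction
# (proxy for an interruptible moment), 0 = otherwise.
y = rng.integers(0, 2, size=n_seconds)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")

# Feature importances can then be inspected to see which sensor channels best
# explain moments of peripheral interaction.
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("Top-10 feature indices:", top)
```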
Project 3 - HCI: Adaptive Cyber-learning with a Sensor Support (CHI WIP 2014, QoLT Center)
This project aims to better support student learning by adapting computer-based tutoring to individual learning phases and real-time capabilities. In this manner, computer-based tutors may be more effective in supporting robust learning.
Figure 8. Understanding Expert-Novice Differences in Geometry Problem-Solving Tasks: A Sensor-based Approach
Description
The specific research goal is to explore a method for automated sensor-based learner/learning assessment in intelligent tutoring systems. In this project, we apply rigorous analytics and machine learning techniques to sensor data to build models that predict, in real time, transaction-level implications related to lack of knowledge (e.g., errors) and mental workload. In particular, we study a learner’s expertise level in cognitive skill application as a key factor that shapes cognitive attention-switching strategies and instructional effects across individuals. We then assess to what degree expertise reversal effects are manifested in eye movements and psycho-physiological measures.
In a Work-In-Progress study, I investigated differences in the patterns that novice and expert learners use to manage their visual attention (Figure 8). We collected data from 21 novices and 20 experts during geometry problem-solving tasks. Initial results showed transactional and perceptual correlations between geometry expertise and task complexity. The results further suggested that eye tracking could reveal distinguishable patterns in perceptual and cognitive activities between expert and novice learners and could help identify quantifiable metrics for future learner modeling. I have obtained research grants from a Carnegie Mellon institutional fund called ProSEED (“Sensor-based Assessment of Student In-situ States in Attention and Cognition during Computer-based Geometry Problem-Solving Tasks”), and I am preparing to resubmit an Institute of Education Sciences (IES) proposal (“Supporting Computer-based Geometry Tutoring through Learners’ Eye Tracking Patterns and Real-time Cognitive Load”) about this project.
Project 4 - HVI for the Elderly: Multimodal Route Guidance and Its Reversal Effects, Elder vs. Younger (Pervasive 2012, MTAP 2016, QoLT Center)
While in-car navigation systems enhance situational awareness, they also increase drivers’ visual distraction and cognitive load. This project explores the efficacy of multi-modal route guidance cues for ‘safer’ driving. We studied the impact of modality combinations on driving performance and cognitive load for elder and younger drivers (Figure 3 Left).
Figure 3. Simulated driving test-bed with multisensory cues and wearable sensors
Description
This study highlighted a number of ‘safer’ navigation differences between elder and younger drivers. In general, the most effective way to reduce way-finding errors was the full combination of visual, auditory, and haptic feedback (Figure 3, right). However, for elder drivers, adding more modalities strained their already high workload. Thus, we found that a personalized navigation system can enhance the benefit of auditory feedback for elder drivers without increasing the number of sensory channels. In contrast, for younger drivers, adding haptic feedback to traditional audio and visual feedback led to more attentive driving. Furthermore, we found that safer navigation for younger drivers incorporates new non-visual feedback to minimize the distraction caused by visual feedback. These results were demonstrated through task performance-based measures, eye-tracking measures, subjective workload measures (e.g., NASA-TLX), and objective workload measures based on psycho-physiological responses. We predicted a driver’s cognitive load in near real-time by using machine learning techniques.
A number of psycho-physiological responses significantly differed with the intervention of sensory augmentation (i.e., with haptic augmentation vs. without haptic augmentation). Additionally, many of them strongly interacted with the specific type of intervention (i.e., replacement or addition) as well as the age group (i.e., younger or elder adults). This provides early evidence that psycho-physiological assessment may make it possible to track the expertise reversal effect at higher sampling rates during interaction with sensory augmentation systems, and even to detect the inflection points of the reversal (e.g., changes from ‘benefits > costs’ to ‘costs > benefits’ or vice versa).
Project 5 - HCI: Tracking Real-Time Mental Workload during Elementary Cognitive Process (UbiComp 2010, SSCI 2014, QoLT Center)
This project seeks to develop a sensor-based method for tracking variation in cognitive processing loads (Figure 6 left).
Figure 6. Sensor-based assessment of real-time mental workloads during a set of elementary cognitive tasks in psychology and cognitive science.
Description
As a preliminary study, I explored six elementary cognitive tasks (ECTs) to assess how cognitive load varies according to task difficulty. We recruited more than 70 participants (including 27 people aged 60+) and studied mental processes associated with handling interruptions, dual-task processing (e.g., way-finding requiring spatial attention switching or cognitive mapping), and task integration (e.g., comparing an ambient display with a mental legend that indicates its meaning). In terms of human cognitive abilities, we focused on visual perception and cognitive speed and explored three major first-order factors: flexibility of closure, speed of closure, and perceptual speed. The ECTs were manipulated to induce either high or low cognitive load (Figure 6, right), and their differentiability was validated based on participants’ task performance and NASA-TLX-based subjective ratings. The results showed that we can build a real-time cognitive load tracker that discriminates between the two levels of mental workload every second with roughly 80% accuracy on average using our sensor data and quadratic discriminant analysis (QDA): approximately 79% for younger adults and 86% for older adults.
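Since the paragraph above names QDA, the following is a minimal sketch of a two-level (high vs. low) cognitive load discriminator built with quadratic discriminant analysis; the synthetic features, class separation, and hold-out split are assumptions made for the example.

```python
# Minimal sketch, under assumed data: a per-second, two-level cognitive load
# classifier using quadratic discriminant analysis (QDA). The features are
# synthetic stand-ins for psycho-physiological measures.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Two synthetic classes with slightly shifted means (low vs. high load).
X_low = rng.normal(loc=0.0, scale=1.0, size=(500, 12))
X_high = rng.normal(loc=0.8, scale=1.2, size=(500, 12))
X = np.vstack([X_low, X_high])
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
qda = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
print(f"Hold-out accuracy: {accuracy_score(y_te, qda.predict(X_te)):.3f}")
```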
Project 6 - HVI for the Elderly: Aesthetics and Usability of Automotive User Interfaces for Elder Drivers (CHI 2010, QoLT Center)
Figure 4. Simulated dual-task driving test-bed with an eye tracking system
Description
The purpose of this project was to design features for car dashboard displays that are both functional and aesthetically pleasing. We evaluated six designs according to two clutter types and three design properties (Figure 4). In the evaluation study, thirty-two participants (18 senior citizens, 14 adults younger than 65) were asked to use dashboard information (e.g., speed, fuel level, odometer reading) to verbally respond to voice commands while playing a coin-collecting driving game as a primary task. The apparatus in the user study included a Java-based driving simulator, a Bluetooth-based Wii-wheel joystick, and an eye/gaze tracker (Figure 4). We found that contrast in size and reduced clutter are instrumental in enhancing driving performance, particularly for the elder population. Surprisingly, our results showed that color elements had a negative effect on driving performance for elders, while color elements and fills slightly improved performance.
Project 7 - HCI: Augmented Reality User Interfaces for Seamless Interaction with Virtual Spaces (JCAD 2010, GIST)
This project explored the effects of AR technology, when combined with a range of 3D prototype applications.
Figure 9. Incorporating AR-interfaces for providing intuitive observation, informative visualization, and immersive interaction
Description
The first part of this project explored an AR-interfaced 3D computer-aided engineering (CAE) simulation test-bed. The purpose of this part was to support intuitive observation of CAE simulations by incorporating AR interfaces. We embedded an AR interface into a series of custom-built 3D simulators used in robotics and aerospace engineering (Figure 9ab). To better manage multiple coordinate systems in an AR environment, we demonstrated collaboration among virtual robots oriented in different coordinate systems within the physical space. In addition, we examined the reliability of augmentation when graphical occlusion interrupts image processing, which frequently happens as end-users interact with or manipulate AR-interfaced graphical visuals.
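Managing multiple coordinate systems in AR typically comes down to chaining homogeneous transforms; the short sketch below, with hypothetical marker/robot/world frames and values, illustrates the kind of composition involved and is not the project’s actual code.

```python
# Illustrative sketch: expressing a point defined in one virtual robot's local
# frame in the shared physical (world) frame by chaining 4x4 homogeneous
# transforms. Frame names and numeric values are hypothetical.
import numpy as np


def transform(rotation_z_deg, translation_xyz):
    """Build a 4x4 homogeneous transform: rotation about Z, then translation."""
    t = np.radians(rotation_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(t), -np.sin(t), 0],
                 [np.sin(t),  np.cos(t), 0],
                 [0,          0,         1]]
    T[:3, 3] = translation_xyz
    return T


# world <- marker (from the tracked fiducial) and marker <- robot (authored offset)
T_world_marker = transform(30, [1.0, 0.5, 0.0])
T_marker_robot = transform(-90, [0.2, 0.0, 0.1])
T_world_robot = T_world_marker @ T_marker_robot

p_robot = np.array([0.1, 0.0, 0.0, 1.0])   # point in the robot's local frame
p_world = T_world_robot @ p_robot
print("Point in world frame:", p_world[:3])
```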
In the second part of this project, I presented a traveler guidance system (TGS) test-bed that provides macro- and micro-level services in a two-stage system. In the system, a web-based TGS simulates 3D GIS information about a metropolitan road network and a matrix of paths, and then directs end-users to an AR-based TGS that allows fingertip interaction for detailed information about subsets of the destination area (Figure 9c). This part explored how AR-incorporated visualization can facilitate spatial awareness of the streets as well as better understanding of the geospatial information.
In the last part of this project, I explored communication between a haptic interaction platform and AR-based information media. The purpose of this part was to test the feasibility of a haptically enhanced broadcasting test-bed for 3D interactive media (Figure 9d). We adapted AR techniques for broadcasting productions and installed a six-degree-of-freedom haptic device (Phantom) at the client site. In a validation study, we created a home-shopping scenario to demonstrate TV viewers’ immersion and interactivity.
Project 8 - HVI for the Elderly: Cognitive Mapping Aid for Elderly Navigation (CHI 2009, MTAP 2016, QoLT Center)
Figure 2. Simulated AR-windshield display system for navigation aid
Description
In this project, I explored a novel navigation display system that uses an augmented reality (AR) projection to minimize cognitive distance by overlaying driving directions on the windshield and road. The projected display makes it easier for users to focus on one location and to translate information between the virtual/information space of the navigation system and the real world (Figure 2).
The navigation display visualizes the driving area via a computer-generated map that appears as if it is sliding down the upper portion of the windshield and merging into the road. By synchronizing this movement with the car movement, drivers experience seamless integration of the display and its information into the real road.
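As a rough illustration of the synchronization described above, the overlaid map can be made to slide down the windshield at a rate tied to the vehicle’s speed; the pixel-per-meter scale and frame rate below are illustrative assumptions, not values from the project.

```python
# Minimal sketch, under assumed constants: sliding the projected map down the
# windshield in step with vehicle motion so the overlay appears to merge into
# the road. PIXELS_PER_METER and the 30 Hz update rate are assumptions.
PIXELS_PER_METER = 12.0
FRAME_RATE_HZ = 30.0


def update_overlay_offset(offset_px, speed_mps):
    """Advance the vertical offset of the map overlay for one rendering frame."""
    distance_m = speed_mps / FRAME_RATE_HZ          # metres travelled this frame
    return offset_px + distance_m * PIXELS_PER_METER


offset = 0.0
for speed in [10.0, 10.0, 12.5, 15.0]:              # vehicle speed (m/s) per frame
    offset = update_overlay_offset(offset, speed)
    print(f"overlay offset: {offset:.1f} px")
```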
We evaluated this display system by examining eye-tracking measures, subjective ratings, and dual-task performance measures. We found that our navigation display is especially supportive for elder drivers when they choose which road to take at an intersection and respond to traffic incidents.
Project 9 - HCI: Modality Fusion during Touch-based Interaction (Samsung Electronics)
The goal of this project is to improve perception and performance during touch-based interaction in personal electronic devices. Specifically, we have identified the appropriate fusion of visual, audio, and haptic cues during fingertip interaction with touch screen images (Figure 10a).
Figure 10. Modality fusion for more appealing/engaging touch-based interaction
Description
This project was initiated by Samsung Telecommunications America based on our prior work in planning, designing, and executing user-specific studies to understand the effects of multimodal fusion. During this project, our research team provided consultation to Samsung on study design, the user interface of an Android-based test-bed, and evaluation methods, and conducted three user studies (pilot, main, and confirmation) concerning modality fusion. In the main study, I presented more than 100 participants with a series of multimodal effects for 26 images with varying textures. An Android application provided combinations of visual, auditory, and haptic cues that varied according to the texture of the image region the user’s finger was touching or hovering over (Figure 10b). We collected participants’ evaluations of the given effects (Figure 10c) and asked them to build their own best effects for another 15 images. The study found that perceptual engagement was greatest when vibro-tactile intensity was linearly combined with the visual effects, and the analysis yielded quantifiable proportions of sensory cues for inducing more natural and more appealing/engaging touch-based interactions with image textures.
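As a small illustration of the kind of linear fusion the study points to, the sketch below maps an image-texture roughness value to a vibro-tactile intensity alongside a matching visual effect; the gains and value ranges are assumptions, not the study’s calibrated parameters.

```python
# Illustrative sketch: deriving a vibro-tactile intensity that varies linearly
# with the roughness of the texture under the user's fingertip, alongside a
# matching visual highlight. Gains and ranges are illustrative assumptions.
def fuse_modalities(texture_roughness, base_intensity=0.1, gain=0.8):
    """texture_roughness in [0, 1] -> (vibration intensity, visual highlight alpha)."""
    vibration = min(1.0, base_intensity + gain * texture_roughness)
    visual_alpha = texture_roughness          # visual effect tracks the same value
    return vibration, visual_alpha


for roughness in (0.0, 0.3, 0.7, 1.0):
    vib, alpha = fuse_modalities(roughness)
    print(f"roughness={roughness:.1f} -> vibration={vib:.2f}, highlight alpha={alpha:.2f}")
```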
Project 10 - HCI: The Quality of HCI in Connected Environments (KETI, DRAPER)
This project aims to understand users-on-the-go in connected environments and to improve the quality of their ubiquitous HCI experience by enhancing machine intelligence to be more human-centered. Much of the preliminary work for this project has been completed, and the ongoing work for this project will develop in two phases:
(a) Usability testing of the novel wearable UI prototypes that the project partners (i.e., KETI and Celluon) have developed
(b) Verification of our contextual framework against the 26 scenarios that the project sponsor (i.e., Draper Lab) illustrated.
Figure 11. Preliminary work for Phase 1 and Phase 2 to explore the proposed scope of this project.
Description
In Phase I, I study the Internet of Things in combination with wearable technologies. The goal of this phase is to understand user interaction with connected devices that are either worn by users (e.g., smartwatches) or embedded in systems (e.g., tiny versatile sensors). Based on this understanding, I explore user-interface schemes that can enhance the quality of interaction. The specific tasks for this area include the development and fusion of contactless and contact user interfaces (especially input devices), the creation of a conceptual framework, the design of multisensory interaction schemes, the construction of a user-interface framework, and the development of test applications.
As preliminary research for this first phase, I currently serve as the principal investigator on an international project about wearable technologies (“Development of UI/UX Technology to Overcome the Limitations of Wearable Device UIs”, Project period: 2014 Nov – 2017 Oct). In this project, we test and evaluate the usability of a series of advanced-concept user-interface prototypes for wearable devices (e.g., smartwatches), improve their usability through iterative design, and identify the strengths and weaknesses of each prototype in order to suggest design implications for a more usable user interface and user-interaction method (Figure 11a).
In Phase II, I study Human-in-the-Loop Cyber-Physical Systems combined with context-aware and user-aware machine intelligence. The goal of this phase is to present machine intelligence that can understand how the states of various end-users change in real time and that is robust to various situational events, thereby helping people interact seamlessly with connected systems in connected infrastructure.
As preliminary research for this second phase, I participated in a research and development project about immersive situation awareness (“User-Centric Study of Human Interaction with Visual, Audio, and Haptic Presentation Systems”, 2015 Jul – 2016 Jun). In this project, we proposed a generic contextual framework that can support immersive situation awareness for 26 different scenarios (Figure 11b). The framework consists of modules related to sensing; a context identifier; an information provider; presentation mapping; processors for value estimation, cost estimation, prioritization, intervention determination, and filtration (candidate info combo); and an actuator controller. We have verified and refined the operation of the framework through thorough literature reviews and a set of human subject experiments.
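To make the module list above concrete, the following is a hedged sketch of how a few of those stages (value and cost estimation, prioritization, intervention determination, and filtration) might be chained for a single candidate piece of information; the interfaces and scoring rules are assumptions, not the framework’s actual design.

```python
# Illustrative sketch of several framework stages applied to candidate
# information items. Scoring rules and thresholds are placeholder assumptions.
from dataclasses import dataclass


@dataclass
class Candidate:
    message: str
    relevance: float      # estimated value to the user (0..1)
    intrusiveness: float  # estimated interruption cost (0..1)


def prioritize(candidates):
    """Rank candidates by estimated value minus estimated cost."""
    return sorted(candidates, key=lambda c: c.relevance - c.intrusiveness, reverse=True)


def determine_intervention(candidate, user_busy):
    """Decide whether (and how) to present the top candidate."""
    score = candidate.relevance - candidate.intrusiveness
    if user_busy and score < 0.5:
        return None                      # filtered out for now
    return "haptic" if user_busy else "visual+auditory"


candidates = [
    Candidate("Road hazard ahead", relevance=0.9, intrusiveness=0.2),
    Candidate("New podcast episode", relevance=0.3, intrusiveness=0.4),
]
top = prioritize(candidates)[0]
print(top.message, "->", determine_intervention(top, user_busy=True))
```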
Project 11 - HCI: Driver-Aware Interruptions of Dialog-based HCI Demands in Cars (Hyundai, DRAPER)
This project investigates how dialog-based HCI demands interact with driver interruptibility. We are refining our key technology, obtained from Project 1 in this document, to predict the duration of driver interruptibility. Understanding duration will allow us to design intelligent in-vehicle systems that respond to how drivers behave across moments of interruptibility. We will examine sensor data streams collected around drivers and in vehicles to identify the most appropriate interruptive information type and modality according to context.
Figure 12. Dual-task test-bed - Primary task (PT) is vigilance task with field driving video clips (obtained from the 1st stage); and Secondary task (ST) is inspection task with a structured sequence of simulated visual, auditory, and haptic interruptions.
Description
Our experiment includes computer-driven interruptions of dialog-based HCI demands and/or context-sensitive information (e.g., route guidance or road hazard information). We have drivers receive this information at opportune interruptible moments and in manners that support perception and cognitive interpretation. The experiment consists of two stages:
In the first stage of the experiment (the most complete to date), we are conducting a field driving experiment followed by a ground-truth labeling session. In this stage, we collect a broad range of multimedia data, on-board diagnostic data, wearable sensor data, and participants’ ESM responses during naturalistic field driving. The collected data have helped us build a test-bed for the next stage that simulates situations in which driver interruptibility and various dialog-based HCI demands conflict with one another.
In the second stage, we will conduct a lab-based human-subject experiment to examine the contextual attributes of interruptible timings and durations, driver interruptibility in conversational or dialog-based scenarios, and the effects of information content type and presentation modality on driver interruptibility. The test-bed simulates situations in which our human subjects continuously engage in a dual-task paradigm that includes a primary ongoing vigilance task (i.e., PT in Figure 12, left) and a set of computer-driven dialog-based interruptions as secondary HCI demands (i.e., ST and their sequence in Figure 12, left and right, respectively). In this stage, we test the interaction between driver interruptibility and in-car HCI tasks and discuss its impacts on the driver’s attention and cognition.
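For illustration only, a structured sequence of secondary-task interruptions across the three modalities could be generated for the test-bed roughly as sketched below; the onset spacing, jitter, and counterbalancing scheme are assumptions rather than the experiment’s actual protocol.

```python
# Minimal sketch, under assumed timing parameters: generating a counterbalanced
# sequence of secondary-task (ST) interruptions across three modalities for the
# dual-task test-bed.
import itertools
import random

MODALITIES = ["visual", "auditory", "haptic"]


def build_interruption_schedule(repetitions=2, spacing_s=45, jitter_s=10, seed=0):
    """Return a list of (onset_seconds, modality) tuples for one session."""
    rng = random.Random(seed)
    sequence = list(itertools.chain.from_iterable(
        rng.sample(MODALITIES, len(MODALITIES)) for _ in range(repetitions)))
    schedule, onset = [], 60                      # first interruption after 60 s
    for modality in sequence:
        schedule.append((onset, modality))
        onset += spacing_s + rng.randint(-jitter_s, jitter_s)
    return schedule


for onset, modality in build_interruption_schedule():
    print(f"t={onset:4d} s  ->  {modality} interruption")
```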
Project 12 - HVI: Driver-Centered Interaction in Intelligent Automotive Physical Systems (Cyber-Physical Systems, Smart and Autonomous Systems)
This proposed project aims to make machine intelligence-driven physical and non-physical interventions in computer-assisted driving more acceptable and dependable.
In this project, we will investigate what impacts an intelligent automotive physical system that delivers relevant interactions can have on drivers. This project will use a motion platform and a VR headset to simulate various driver/driving scenarios in which computers autonomously determine the vehicle’s physical actions (i.e., self-driving or computer-assisted driving situations) (Figure 13). Also, in this project, we will employ a range of sensors to track driver cognition, attention, and behavior in real time in order to detect and predict a driver’s condition and experience.
Figure 13. Simulated autonomous-driving test-bed with a motion platform and a VR headset for computer-assisted driving scenarios
Description
In particular, this project deals with issues that arise when an intelligent automotive physical system’s machine intelligence cannot exercise sufficient capacity to decide how to interact with other vehicles and in-situ contextual events for various reasons (e.g., multiple jaywalkers in different directions, vehicles with an emergency light on, damaged road signs, broken signal lights, or heavy fog or mist). More specifically, we focus on how the system itself can act to improve safety by regulating the vehicle’s physical behaviors (e.g., slowing down, keeping the lane, increasing the distance to the vehicle in front, or turning on an emergency light), by requesting the human driver’s intervention in the given situation, and, if necessary, by taking control of the vehicle’s physical actions in order to manage the situation as safely as possible until the human driver takes over.
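A hedged sketch of the kind of graded fallback policy described above is shown below, in which the system first regulates vehicle behavior, then requests driver intervention, and otherwise manages the situation until handover; the thresholds and condition names are assumptions made for the example, not the project’s decision logic.

```python
# Illustrative sketch: a graded fallback policy for situations the machine
# intelligence cannot resolve on its own. Thresholds and condition names are
# illustrative assumptions.
def fallback_action(system_confidence, driver_ready):
    """Map the system's decision confidence and driver readiness to an action."""
    if system_confidence >= 0.8:
        return "continue autonomous driving"
    if system_confidence >= 0.5:
        # Regulate the vehicle's own physical behavior first.
        return "slow down, keep lane, increase following gap, turn on hazard light"
    if driver_ready:
        return "request driver intervention and prepare handover"
    # Driver not ready: manage the situation as safely as possible until takeover.
    return "minimal-risk maneuver until the driver can take over"


for conf, ready in [(0.9, True), (0.6, False), (0.3, True), (0.3, False)]:
    print(f"confidence={conf}, driver_ready={ready} -> {fallback_action(conf, ready)}")
```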
Thus, this project aims to improve the system’s intelligence so that it can help the driver quickly and accurately understand and address both the difficulties the system is experiencing and the conditions of the driving situation. To this end, we will explore how fast and accurate the intelligent physical system needs to be to enhance driver performance and satisfaction, given the critical nature of the driving context, and we will identify which sensor features and contextual events allow interactions to be delivered effectively to a driver of an intelligent automotive physical system. The project deliverables include a design guideline for mixed-initiative vehicle control between a human driver and a computer driver, and a series of sensor-based models that actively adapt the physical or non-physical behaviors of such intelligent systems to drivers of different demographics and to drivers whose cognitive abilities change over time.
In a user study, I examined the advantages and disadvantages of types of visual feedback that indicate the motion of the computer car and its decision-making states. I then discussed the expected effects of feedback type combinations with respect to intelligibility in a simulated autonomous driving environment. I will revise “Supporting Mobility Independence for Elderly Drivers Using Semi-Autonomous Vehicular Technologies Enhanced by Human-Centered Situational Awareness,” which we submitted to NSF Cyber-Physical Systems, and resubmit it to NSF Smart and Autonomous Systems.