Multi-modal Sensing for Human Perception and Memory Augmentation
- Indrajeet Ghosh
Memory retention involves holding and preserving relevant information for short periods. This capacity enables individuals to recall and use valuable information over brief intervals, facilitating various cognitive tasks and activities and enhancing human perception of the surrounding environment. Inherent limits on an individual's capacity to hold information mean that people often fail to remember or recall important specifics during such tasks. While prior work has successfully used wearable and assistive technologies to improve longer-term memory functions (e.g., episodic memory), how such technologies can aid in daily activities remains under-explored. Additionally, incorporating eye gestures into human-centered studies offers a window into visual information processing and enables the development of hands-free, human-centered applications. This approach enhances interaction in daily activities and bridges the gap between cognitive processing and assistive technologies, offering intuitive solutions for users engaged in tasks that require real-time memory support.
Research Objectives:
The purpose of this research is to develop human-machine integration techniques that explore the feasibility of machine-augmented human intelligence. The study aims to understand the neural correlates of attention during everyday visuospatial working memory tasks (visual search and wayfinding navigation) using commercial off-the-shelf wearable sensors. These correlates are then used to automatically extract highlights of those tasks (e.g., important notes from a meeting) to augment the user's recall after the tasks are completed.
Research Questions:
- When fewer EEG electrodes are available, how much does inference accuracy degrade on downstream tasks such as cognitive load assessment and event-evoked potential (EEP) detection? (A minimal sketch of such a channel-subset evaluation follows this list.)
- Can fewer EEG electrodes capture the relevant temporal visuospatial EEPs during more complex tasks such as visual search and wayfinding navigation, where noise artifacts are more prevalent?
- Can functional connectivity-based signal reconstruction, which yields a higher pseudo-resolution from fewer electrodes, be performed efficiently enough to support real-time applications? (See the reconstruction sketch after this list.)
- Can non-invasive technologies such as multi-modal wearables effectively capture important highlights of high-attention EEP episodes during working memory tasks?
- Will capturing personalized highlights of such short-term windows (EEP episodes) and cueing them intelligently enable mementos from those episodes to be remembered and recalled better?
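To make the first question concrete, the following is a minimal sketch of how accuracy degradation with fewer electrodes could be measured. It is an illustrative assumption rather than the study's actual pipeline: synthetic data stands in for real recordings, the channel counts and frequency bands are placeholders, and a simple band-power feature set with a linear classifier is scored on progressively smaller channel subsets.

```python
# Sketch only: channel-subset evaluation for cognitive load classification
# (synthetic placeholder data; not the study's actual pipeline).
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 256                                   # sampling rate (Hz), assumed
N_TRIALS, N_CH, N_SAMP = 200, 32, FS * 2   # 2-second epochs, assumed

rng = np.random.default_rng(0)
epochs = rng.standard_normal((N_TRIALS, N_CH, N_SAMP))  # placeholder EEG
labels = rng.integers(0, 2, N_TRIALS)                   # low vs. high load

def band_power(x, fs, lo, hi):
    """Mean power spectral density of each channel inside a frequency band."""
    freqs, psd = welch(x, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)

# Theta (4-7 Hz) and alpha (8-12 Hz) power per channel form the feature matrix.
theta = band_power(epochs, FS, 4, 7)
alpha = band_power(epochs, FS, 8, 12)

for n_ch in (32, 16, 8, 4):                # progressively fewer electrodes
    keep = np.arange(n_ch)                 # assumed channel ordering
    X = np.hstack([theta[:, keep], alpha[:, keep]])
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
    print(f"{n_ch:2d} channels -> accuracy {acc:.2f}")
```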
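Similarly, the reconstruction question can be illustrated with a hedged sketch of functional connectivity-based channel reconstruction. The approach below is an assumption for illustration, not the author's exact method: a linear reduced-to-full mapping (a ridge regression, which effectively encodes between-channel covariance, i.e., functional connectivity) is fit on a short high-density calibration recording and then applied to new reduced-montage windows to obtain a pseudo high-resolution signal. All channel counts and data here are placeholders.

```python
# Sketch only: reconstructing a full EEG montage from a reduced electrode set
# via a connectivity-like linear mapping learned during calibration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
N_FULL, N_REDUCED, FS = 64, 8, 256         # assumed montage sizes and rate

# Calibration: a short high-density recording (synthetic placeholder).
calib_full = rng.standard_normal((FS * 60, N_FULL))        # 60 s, 64 channels
reduced_idx = np.linspace(0, N_FULL - 1, N_REDUCED).astype(int)
calib_reduced = calib_full[:, reduced_idx]

# Fit the reduced -> full mapping; the learned weights capture how each
# missing channel covaries with (is functionally connected to) the kept ones.
mapping = Ridge(alpha=1.0).fit(calib_reduced, calib_full)

# Online use: only the reduced montage is recorded in real time.
new_reduced = rng.standard_normal((FS * 2, N_REDUCED))      # a 2 s window
reconstructed_full = mapping.predict(new_reduced)           # pseudo 64-channel

print(reconstructed_full.shape)   # (512, 64)
```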