Current interactive, ubiquitous, and wearable technology has the potential to revolutionize patient care and improve quality of life by assisting people in need. Many of the sensors and tracking devices that are now easily accessible can support a wide range of people and patients by providing easier access to information. In our research we investigate ubiquitous and multimodal technology such as digital pens, augmented reality devices (Google Glass, Microsoft HoloLens), depth cameras (Microsoft Kinect), eye tracking, and more, to support users facing specific health issues or needing assistance in their everyday life.

Our research in this space investigates how a variety of wearable and ubiquitous computing devices can enhance personal information spaces and improve the condition of both individuals in need and health professionals. We believe that many conditions can benefit from these technologies and that, by working closely with users, clients, and patients, we have the extraordinary opportunity to make a difference and increase their quality of life. We exploit multimodal interaction and ubiquitous computing to enhance assistive technology in the setting of aphasia, color-blindness, bruxism, locked-in syndrome, and others. We also create technology that can help specific populations such as older adults, children, and infants.

Augmented and Mixed-Reality for Surgery and Radiology

We are designing, implementing, and evaluating Mixed-Reality solutions for Surgery and Radiology. Working closely with the Simulation Training Center and the Center for the Future of Surgery, we co-design, develop, and assess technologies that rethink the way doctors and residents interact with patients and medical images.

The goal of this project is to identify and evaluate the use of Mixed-Reality technologies in surgical training, guidance, and execution. We currently work with a broad range of technologies such as Microsoft HoloLens, HTC Vive, Microsoft Kinect, OpenCV, Vuforia, and ARToolKit.
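As one small building block in this space, the sketch below shows how fiducial markers can be detected in a live camera feed with OpenCV's ArUco module, the kind of tracking that mixed-reality overlays are commonly registered against. It is a minimal illustration assuming opencv-contrib-python 4.7 or later, not the pipeline used in our systems.

```python
import cv2

# Open the default camera (index 0); in a real setup this would be the
# headset camera or an external camera aimed at the operating field.
cap = cv2.VideoCapture(0)

# Predefined 4x4 ArUco dictionary with 50 markers (OpenCV >= 4.7 API).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        # Draw the detected markers; a guidance system would instead use
        # the marker poses to anchor holographic overlays on the patient.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("marker tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```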

Funding

Ubiquitous Computing for Assistive Technology
(Moxie Foundation, January 2016 – December 2017)

Publications

Multimodal and Ubiquitous Technology for Healthcare

We are investigating a variety of ubiquitous and multimodal technology such as digital pens, augmented reality glasses (Google Glass, Microsoft HoloLens), depth cameras (Microsoft Kinect), eye tracking, interactive surfaces, and more, to support users facing specific health issues such as aphasia, color-blindness, bruxism, locked-in syndrome, autism, and others.

  • Chroma
  • Write-n-Speak
  • EyeHome
  • BendableSound
  • CocoonCam
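To give a flavor of this kind of assistive processing, the sketch below remaps live camera frames so that red-green differences become more salient, loosely in the spirit of Chroma. The mixing rule and its strength parameter are simplified assumptions made for illustration, not Chroma's actual algorithm.

```python
import cv2
import numpy as np

def remap_for_red_green(frame_bgr, strength=0.7):
    """Shift some red-green contrast into the blue channel, where it
    remains visible to many red-green color-deficient viewers.
    Illustrative only; not a validated daltonization model."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    diff = r - g                          # red-green difference signal
    b = np.clip(b + strength * diff, 0, 255)
    return cv2.merge([b, g, r]).astype(np.uint8)

cap = cv2.VideoCapture(0)                 # stand-in for the glasses' camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("remapped view", remap_for_red_green(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```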
Funding

Towards a Multimodal Paper-Digital Infrastructure for Content Sharing and Collaboration for Health Care, Education and Social Interaction
(Swiss SNF  PA00P2_142066, August 2012 – January 2013)
A Multimodal Paper-Digital Infrastructure For Health Care Applications
(Swiss SNF PA00P2_136434, August 2011 – July 2012)
Ubiquitous Computing for Assistive Technology
(Moxie Foundation, January 2016 – December 2017)
Cocoon Cam, a Wearless Smart Baby Monitor
(NSF I-Corps Teams, June 2015 – October 2016)

Publications

Information Capture and Display in Time-critical Healthcare Settings

The goal of this project is to study the introduction of digital capture and display technology during trauma resuscitation. We aim to study the information currently recorded on paper forms and to explore mechanisms that support situation awareness by capturing this information with digital pens and visualizing it on large room-based displays.

  • TraumaPen
  • TraumaGlass
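As a rough illustration of the capture-to-display path, the sketch below forwards pen samples as JSON messages over UDP to a room display process. The endpoint address, message fields, and coordinate conventions are assumptions made for this example, not the protocol used by TraumaPen.

```python
import json
import socket
import time

# Hypothetical display endpoint; in a real deployment the wall display
# would run a listener that renders incoming strokes over the flowsheet.
DISPLAY_ADDR = ("192.168.1.50", 9000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_pen_sample(page_id, x, y, pressure):
    """Forward one pen sample (paper coordinates) to the wall display."""
    msg = {
        "page": page_id,       # which paper form is being written on
        "x": x, "y": y,        # position on the form, in millimeters
        "pressure": pressure,  # pen-tip pressure, 0.0 to 1.0
        "t": time.time(),      # capture timestamp
    }
    sock.sendto(json.dumps(msg).encode("utf-8"), DISPLAY_ADDR)

# Example: replay a short synthetic stroke.
for i in range(20):
    send_pen_sample("vitals_form_p1", 42.0 + i * 0.5, 118.0, 0.8)
    time.sleep(0.01)
```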
Funding

Introducing Digital Pen and Wall Displays for Improving Resuscitation Team Performance. (NIH R21, LM011320-01A1)
(September 2012 – August 2014)

Publications

Pen and Paper Language Learning

The goal of this project is to study multimodal paper-digital interactions for health care, education, and social interaction, and to develop an infrastructure that supports educators, end-users, therapists, and health professionals during their interactions, as well as content sharing across them.

  • Tap-and-Play
  • Audio-enhanced Paper Photos
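The sketch below illustrates the core idea behind these pen-and-paper interactions: mapping the paper coordinates of a pen tap to a region of the printed page, and from there to an audio recording. The region layout, coordinates, and file names are illustrative assumptions, not the actual Tap-and-Play content.

```python
REGIONS = {
    # region name: (x_min, y_min, x_max, y_max) in mm on the printed page
    "photo_grandma": (10.0, 10.0, 90.0, 70.0),
    "word_hello":    (10.0, 80.0, 60.0, 95.0),
}

AUDIO = {
    "photo_grandma": "recordings/grandma_story.wav",
    "word_hello":    "recordings/hello_es.wav",
}

def region_at(x, y):
    """Return the name of the paper region containing a pen tap, if any."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def handle_tap(x, y):
    """Resolve a tap to its audio file; playback is left to the pen/app."""
    name = region_at(x, y)
    return AUDIO.get(name) if name else None

print(handle_tap(30.0, 85.0))   # -> recordings/hello_es.wav
print(handle_tap(150.0, 5.0))   # -> None (tap outside any region)
```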
Funding

Towards a Multimodal Paper-Digital Infrastructure for Content Sharing and Collaboration for Health Care, Education and Social Interaction
(Swiss SNF  PA00P2_142066, August 2012 – January 2013)
A Multimodal Paper-Digital Infrastructure For Health Care Applications
(Swiss SNF PA00P2_136434, August 2011 – July 2012)

Publications

Archived Projects

Publications – Multimodal and Ubiquitous Technology for Healthcare
