Telestration: How Helena Mentis Applies Design Thinking to Surgery

Helena Mentis is the director of the Bodies in Motion Lab at the University of Maryland, Baltimore County (UMBC), with research spanning human-computer interaction (HCI), computer-supported cooperative work (CSCW), and medical informatics. During a recent visit to the Design Lab at UC San Diego, Mentis talked about her research on surgery in the operating room.

She examines the medical world through surgical instruments and the workflow inside the operating room, homing in on minimally invasive surgery and its reliance on imaging. She is particularly interested in how medical professionals see and share visual information collaboratively, a practice that has grown over the past several years. She asks, “What happens if surgeons were given greater control over the image? What would happen to the workflow? Would it change anything?”

In one study at St Thomas' Hospital in London, surgeons used a great deal of pointing to direct the operation. Confusion would arise, and the surgeon would have to restate his exact intention to the rest of the team. This break in the workflow inspired Mentis’ team to ask: what if we built a touchless illustration system that responded to the surgeon’s gestures? Her team set out to build what she calls “telestration,” which enables surgeons to use gestures to illustrate their intentions on an interactive display.
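To give a concrete, if simplified, sense of the concept, here is a minimal hypothetical sketch in Python of how a stream of hand-tracking samples might be turned into annotation strokes drawn over the surgical image. The HandSample fields and the pinch-to-draw convention are illustrative assumptions, not details of Mentis’ actual system.

# Hypothetical sketch: hand-tracking samples (e.g., from a depth sensor)
# become annotation strokes overlaid on the display. Illustration only.

from dataclasses import dataclass, field

@dataclass
class HandSample:
    x: float          # normalized screen coordinates (0..1)
    y: float
    pinching: bool    # True while the tracked hand holds a "draw" gesture

@dataclass
class Telestrator:
    strokes: list = field(default_factory=list)   # finished strokes
    _current: list = field(default_factory=list)  # stroke being drawn

    def update(self, sample: HandSample) -> None:
        """Append points to a stroke while the draw gesture is held."""
        if sample.pinching:
            self._current.append((sample.x, sample.y))
        elif self._current:
            # Gesture released: commit the stroke to the overlay.
            self.strokes.append(self._current)
            self._current = []

# Example: a short pinch traces a line the whole team can see on the display.
if __name__ == "__main__":
    t = Telestrator()
    for i in range(5):
        t.update(HandSample(0.2 + 0.1 * i, 0.5, pinching=True))
    t.update(HandSample(0.7, 0.5, pinching=False))   # releasing ends the stroke
    print(f"{len(t.strokes)} stroke(s), first has {len(t.strokes[0])} points")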

During another operation, the surgeon encountered soft bone and had to stop the procedure and take off their gloves to re-examine the tissue on the visual display. Mentis notes, “There is a tight coupling between images on display and feeling with the instrument in hand.” If the image on the display could be more closely integrated with the workflow, would this save time in the operating room?

After publishing her findings, Mentis heard enthusiastic claims that voice narration, rather than gesture, was what aided imaging and collaboration in surgery. So she asked, “If given the opportunity would doctors use voice or gesture?” The ensuing observations revealed that while doctors stated a preference for voice, gesture was used more frequently to shape telestration images. Voice narration and gestures gave surgeons richer interaction with the image; as Mentis reasons, “There is more opportunity for collaborative discussion with the information.” This added time to the overall operation, but it also created more opportunities to uncover and discuss critical information.

About Helena Mentis, Ph.D.

Assistant Professor, Department of Information Systems
University of Maryland, Baltimore County

Helena Mentis, Ph.D., is an assistant professor in the Department of Information Systems at the University of Maryland, Baltimore County. Her research contributes to the areas of human-computer interaction (HCI), computer supported cooperative work (CSCW), and health informatics. She investigates how new interactive sensors can be integrated into the operating room to support medical collaboration and care. Before UMBC, she was a research fellow at Harvard Medical School, held a joint postdoctoral fellowship at Microsoft Research Cambridge and the University of Cambridge, and was an ERCIM postdoctoral scholar at Mobile Life in Sweden. She received her Ph.D. in Information Sciences and Technology from Pennsylvania State University.
