Telestration: How Helena Mentis Applies Design Thinking to Surgery

Helena Mentis is the director of the Bodies in Motion Lab at the University of Maryland, Baltimore County (UMBC), with research spanning human-computer interaction (HCI), computer-supported cooperative work (CSCW), and medical informatics. During a recent visit to the Design Lab at UC San Diego, Mentis talked about her research on surgery in the operating room.

She examines the medical world through surgical instruments and the workflow inside the operating room. Mentis homes in on minimally invasive surgery and its reliance on images. She is particularly interested in how medical professionals see and share visual information collaboratively, a practice that has grown over the past several years. She asks, “What happens if surgeons were given greater control over the image? What would happen to the workflow? Would it change anything?”

In one study at Thomas Hospital in London, surgeons relied heavily on pointing gestures to direct the operation. Confusion would arise, and the surgeon would have to restate his exact intention to the rest of the team. This break in the workflow inspired Mentis’ team to ask: what if we built a touchless illustration system that responded to the surgeon’s gestures? Her team set out to build what she calls “telestration,” which enables surgeons to use gestures to illustrate their intentions on an interactive display.
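
The core idea can be expressed as a minimal sketch: a stream of tracked hand positions is collected into strokes and drawn over the live surgical video. The code below is purely illustrative; the gesture events, stroke model, and rendering step are hypothetical stand-ins, not a description of Mentis’ actual system.

from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]  # normalized screen coordinates in the range 0.0-1.0

@dataclass
class TelestrationOverlay:
    """Collects gesture strokes and converts them to pixels for the video display."""
    strokes: List[List[Point]] = field(default_factory=list)

    def begin_stroke(self) -> None:
        # Called when the tracker reports a "start drawing" gesture (e.g., a pinch).
        self.strokes.append([])

    def add_point(self, point: Point) -> None:
        # Append the latest tracked hand position to the active stroke.
        if self.strokes:
            self.strokes[-1].append(point)

    def clear(self) -> None:
        # A dismiss gesture wipes all annotations from the display.
        self.strokes.clear()

    def render(self, frame_size: Tuple[int, int]) -> List[List[Tuple[int, int]]]:
        # Map normalized points to pixel coordinates of the operating-room display.
        width, height = frame_size
        return [[(int(x * width), int(y * height)) for x, y in stroke]
                for stroke in self.strokes]

# Simulated use: three hand-tracking samples become one stroke on a 1080p display.
overlay = TelestrationOverlay()
overlay.begin_stroke()
for sample in [(0.40, 0.50), (0.42, 0.52), (0.45, 0.55)]:
    overlay.add_point(sample)
print(overlay.render((1920, 1080)))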

During another operation, the surgeon encountered a soft bone and had to stop the procedure. As a result, the surgeon had to take off their gloves to re-examine the tissue on the visual display. Mentis notes, “There is a tight coupling between images on display and feeling with the instrument in hand.” If the image on display could be more closely integrated with the workflow, would this save time in the operating room?

After publishing her findings, Mentis heard enthusiasm for how voice narration, rather than gesture, aided imaging and collaboration in surgery. Consequently, she asked, “If given the opportunity, would doctors use voice or gesture?” The ensuing observations revealed that while doctors stated a preference for voice, they more frequently used gesture to shape telestration images. Voice narration and gestures gave surgeons greater interaction with the image, but they also spent more time in surgery. Mentis reasons, “There is more opportunity for collaborative discussion with the information.” The added interaction lengthened the overall operation, but it also yielded greater opportunities to uncover and discuss critical information.

About Helena Mentis, Ph.D.

Assistant Professor, Department of Information Systems
University of Maryland, Baltimore County

Helena Mentis, Ph.D., is an assistant professor in the Department of Information Systems at the University of Maryland, Baltimore County. Her research contributes to the areas of human-computer interaction (HCI), computer supported cooperative work (CSCW), and health informatics. She investigates how new interactive sensors can be integrated into the operating room to support medical collaboration and care. Before UMBC, she was a research fellow at Harvard Medical School, held a joint postdoctoral fellowship at Microsoft Research Cambridge and the University of Cambridge, and was an ERCIM postdoctoral scholar at Mobile Life in Sweden. She received her Ph.D. in Information Sciences and Technology from Pennsylvania State University.
