Definition
Modern healthcare is constantly discovering and exploring the contributions that virtual reality can make. The adoption of virtual human characters has grown successfully over the past decades in education, movies, games, and embodied conversational agents (ECAs). However, the growing interest in virtual reality in healthcare might stimulate a renewed interest in virtual human characters. Therefore, this paper introduces the potential application areas of the virtual human character (VHC) in healthcare from the point of view of realism and draws attention to the technology behind the VHC as it pertains to realism.
Introduction
Modern healthcare is constantly discovering and exploring the contributions that virtual reality can make. The usefulness of virtual reality in healthcare was recognized as far back as the 1990s, during surgical planning, when a team of medical practitioners needed to visualize complex data for a surgery (Chinnock 1994). Whenever the term "virtual reality" is used, attention is usually directed toward a three-dimensional (3D) imaginary world that allows for integration of the human senses and interaction within a computer-generated environment using devices such as a head-mounted display, DataGlove™, or a joystick (Satava 1995). However, there is another aspect of virtual reality that is minimally discussed in the literature: the virtual human character. A virtual human character (VHC) is a computer-generated human look-alike that can act as a replacement for a human in virtual environments. The VHC this paper is particularly concerned with is the one that has to do with the human face. The adoption of virtual human characters has grown successfully in other areas of life, such as education, movies, games, and embodied conversational agents (ECAs).
A virtual human character, as a human look-alike, should be able to express facial expressions and facial emotions and exhibit interaction characteristics; these are fundamental components in achieving realism. Currently, there is a growing interest in virtual humans in healthcare. Though the "uncanny valley" has in some ways slowed the focus on VHCs in other application areas (Kokkinara and McDonnell 2015), in healthcare it should never be an object of concern. The reason is this: imagine being a medical student whose virtual agent instructor lacks human characteristics; would you have a committed interest and engagement during the training? Realism for virtual humans is key in such a scenario. Therefore, VHCs should be developed in such a way that the "uncanny valley" effect is minimized or made negligible as much as possible. Simply put, the uncanny valley refers to the unease evoked by human look-alike characters that lack real human emotions.
A great deal of healthcare systems requiring virtual humans need realism to sustain the goals of virtual human adoption. A study in Seyama and Nagayama (2007) shows that virtual faces with higher levels of animation are perceived as more appealing. The question is: does this apply to all applications? This paper serves to assemble in one place the application areas of virtual humans in healthcare, with emphasis on the dependency of the VHC on realism in each application area. The current state of VHC research and technology relevant to virtual-human-assisted healthcare is also highlighted.
The rest of the paper is organized as follows. The section "Virtual Human Application in Healthcare" presents the application areas in healthcare where virtual humans require realism. The technology behind virtual humans as it pertains to realism is discussed, through the literature, in the section "Virtual Human Character Technology". The section "Virtual Human Character Challenges" highlights observable challenges of virtual characters regarding realism. The conclusion is presented in the last section.
Virtual Human Application in Healthcare
Virtual humans in healthcare have several applications, some of which are identified and discussed independently below, as listed in Magnenat-Thalmann and Thalmann (2004).
- Virtual patients for surgery and plastic surgery
- Virtual humans for treatment of social phobia and virtual psychotherapies
- Virtual teachers for distance learning, interactive assistance, and personalized instructions
- Virtual humans for simulation-based learning and training
Virtual Patients for Surgery and Plastic Surgery
During surgery planning, surgeons usually simulate how a procedure will play out on a given patient's face by using a simulated image of the patient. The simulation offers an avenue for the patient to build confidence in the doctor and can also help doctors understand and practice the procedure that best fits a given patient. Virtual humans are used in surgery to practice aesthetic procedures and to conduct maxillofacial surgery, oral surgeries, etc. Since the interest here is mainly in the face of the virtual human character, the discussion is limited to literature with emphasis on the face of the virtual character. The research work by Patel et al. shows that high face fidelity of virtual humans helps surgery patients interact easily with the VHC (Patel et al. 2013). The use of virtual humans in patient surgery is one of the oldest areas of virtual human application in healthcare (Raibert et al. 1998; Altobelli et al. 1993; Rosen et al. 1996). The study in Karaliotas (2011) used virtual patients to provide normal and pathological conditions in the simulation of human anatomy, with emphasis on realism. In Smith et al. (2005) a virtual reality model was created for simulating aesthetic surgery procedures. The adoption of virtual humans in surgery helps patients know and experience what can be expected from a surgery (Khor et al. 2016). Thus, the patient and doctor can have a good understanding of the successes and implications of certain procedures.
Virtual Humans for Treatment of Social Phobia and Virtual Psychotherapies
In Sjolie (2014) virtual characters are proposed as a tool for helping dementia patients and persons who are experiencing cognitive decline. However, the extent to which realism must be attained for this use case is not yet clear. What is clear is that helping socially phobic individuals fight their phobia requires scenes of commonly experienced social difficulties constructed to mimic real-world scenarios (Didehbani et al. 2016). The scene alone is not enough; rather, manipulating the virtual human's verbal and behavioral responses is an avenue to control the level of therapy measures invoked with the use of VHCs (Brinkman et al. 2008).
Anxiety is one of the medical disorders that can be treated through psychotherapy. The treatment process involves psychological counseling, which comprises cognitive-behavioral therapy (CBT), psychotherapy, or a combination of therapies. Pan et al. (2012), through their study, showed that exposing socially anxious participants to a VHC helps in dealing with situations that might trigger anxiety. The VHC does not necessarily have to be a human look-alike; nevertheless, it should be able to exhibit human facial gestures and expressions, give and maintain eye contact, and talk and interact with the human participant (Pan et al. 2012; Grillon et al. 2006).
Virtual Teachers for Distance Learning, Interactive Assistance, and Personalized Instructions
A teacher's role in a learning environment is void without interaction, eye contact, and the exhibition and expression of emotions. A virtual agent should easily assume these behaviors and characteristics within a learning environment to be able to meet the expectations and acceptance of its human users.
Healthcare, particularly medical and nursing education and training, often involves the acquisition of knowledge through voluminous study materials that are constantly being updated, thus presenting a challenge to learning. Virtual reality has stimulated increasing excitement about the integration of virtual humans into the medical training process. According to a 2010 study on support for technology in healthcare education (Kron et al. 2010), 98% of over 200 medical students supported the inclusion of virtual teachers in healthcare training. Another study, Gromala et al. (2015), suggests that training in mindfulness meditation, a healthcare intervention program which has been used for decades to reduce addictive behaviors, is more effective with the use of virtual humans than with humans themselves.
Virtual Human for Simulation-Based Learning and Training
Realism has been our focus so far, but it is even more critical here because simulation-based learning and training for medical practitioners require a high level of realism. Why is this the case? The simulation should mimic the real thing. For instance, imagine that a virtual human suffering from internal pain (depicting a real patient; see Fig. 1) is presented as a practice case for junior doctors: the virtual human should be able to express pain when touched at the locations of the body where pain is experienced. A VHC that lacks the ability to show emotions and express feelings will impart a skill set and knowledge different from what the team of junior doctors being trained requires. Therefore, a VHC in simulation-based learning for healthcare practitioners should be able to simulate the real thing. The applications of VHCs for simulation-based learning and training in healthcare are endless. The study in Bertrand et al. (2010) used virtual humans to simulate hand hygiene practices for the full medical hygiene cycle: doctor, nurse, and patient. According to Zielke et al. (2010), a huge advantage of virtual humans in the simulation of reality for medical education is that the learning environment and design scenario can be adapted to simulate various levels of medical conditions, including high-risk and low-incidence cases. Trainees can also learn how factors such as a patient's regional, social, economic, behavioral, and cultural background can influence diagnosis and treatment sessions.
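To make the touch-to-pain behavior concrete, the following is a minimal Python sketch of the kind of mapping such a training simulator would need; the body-region names, intensity values, and expression commands are illustrative assumptions, not taken from the cited studies.

```python
# A minimal sketch (hypothetical region names and intensities) of a
# touch-to-pain-expression mapping for a virtual patient: the character
# reacts more strongly when palpated near the simulated pathology.

PAIN_MAP = {  # body region -> pain intensity in [0, 1] for this scenario
    "right_lower_abdomen": 0.9,   # e.g., a simulated appendicitis focus
    "upper_abdomen": 0.3,
    "left_lower_abdomen": 0.1,
}

def palpate(region: str) -> dict:
    """Return the facial-expression command for touching a body region."""
    intensity = PAIN_MAP.get(region, 0.0)
    if intensity > 0.6:
        return {"expression": "grimace", "intensity": intensity, "vocalize": True}
    if intensity > 0.0:
        return {"expression": "wince", "intensity": intensity, "vocalize": False}
    return {"expression": "neutral", "intensity": 0.0, "vocalize": False}

print(palpate("right_lower_abdomen"))
# {'expression': 'grimace', 'intensity': 0.9, 'vocalize': True}
```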
Virtual Human Character Technology
The preceding section pointed, through various works in the literature, to the significance of VHC realism as it pertains to healthcare. In this section, we examine the technology identified in the literature for achieving realism for virtual humans. The review is organized around the underlying methodologies currently used for VHC realism, discussed under key topics in the succeeding subsections.
Facial Action Coding and Expression
The facial action coding system (FACS) is widely applied for understanding virtual character models (Kopp et al. 2006). In Shapiro (2011) FACS was used to achieve virtual character simulation, control, and realism. Čereković and Pandžić (2011) employed neural networks together with a manual synthesis of facial expressions, speech, and gestures within a facial animation model. This work was improved by Wang et al. (2010). Their work made use of the action units (AUs), which define 46 points on the face. These units are assumed to cover the regions of the face where facial muscles or groups of muscles contract and reveal facial emotions. Fig. 2 shows the simpler AUs for a typical human face. The arrows show the directions of movement of the AUs, while the red arrow endpoints mark the centers of mass for emotion. To achieve facial expressions alongside continued human dialogue emotions, the latent semantic analysis concept has been adopted for embodied conversation with virtual characters (Zhang et al. 2013).
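As an illustration of how FACS-based control can drive a face rig, the following Python sketch maps basic emotions to AU activations and converts them to blendshape weights. The AU combinations follow common FACS conventions (e.g., happiness as AU6 + AU12), while the blendshape names and activation values are hypothetical and not taken from the cited works.

```python
# Emotion -> {AU id: activation in [0, 1]}; combinations follow common
# FACS-based conventions.
EMOTION_TO_AUS = {
    "happiness": {6: 0.8, 12: 1.0},          # cheek raiser, lip corner puller
    "sadness":   {1: 0.7, 4: 0.6, 15: 0.8},  # inner brow raiser, brow lowerer,
                                             # lip corner depressor
    "surprise":  {1: 0.9, 2: 0.9, 5: 0.7, 26: 0.8},
    "anger":     {4: 1.0, 5: 0.6, 7: 0.7, 23: 0.8},
}

# AU id -> blendshape name on the face rig (hypothetical rig naming).
AU_TO_BLENDSHAPE = {
    1: "browInnerUp", 2: "browOuterUp", 4: "browDown",
    5: "eyeWide", 6: "cheekSquint", 7: "eyeSquint",
    12: "mouthSmile", 15: "mouthFrown", 23: "lipTighten", 26: "jawOpen",
}

def emotion_to_blendshapes(emotion: str, intensity: float = 1.0) -> dict:
    """Scale an emotion's AU activations and map them to rig weights."""
    weights = {}
    for au, activation in EMOTION_TO_AUS.get(emotion, {}).items():
        shape = AU_TO_BLENDSHAPE.get(au)
        if shape is not None:
            weights[shape] = min(1.0, activation * intensity)
    return weights

print(emotion_to_blendshapes("happiness", intensity=0.5))
# {'cheekSquint': 0.4, 'mouthSmile': 0.5}
```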
Character Deformation and the Physics Model
Facial deformation can be achieved by manipulating mesh units on an element-wise basis with deformation functions. Multiple facial expressions can be created by combining muscles, though not the underlying structure, and representing spline surfaces as hierarchical B-splines (Thiebaux et al. 2008). A muscle model was created by Platt and Badler in 1981 using the human facial structure; to generate multiple facial expressions, they applied muscle arcs to flexible meshes. This approach was also adopted in Ali et al. (2018), where the efficiency of synchronizing speech with the virtual character was evaluated using context-dependent visemes. They observed that the number and distribution of fiduciary points affect the quality of viseme presentation. Work on physics-based muscle models (Leigh and Zee 2006) connects to Greta, a virtual agent developed by Pasquariello and Pelachaud (2002) that coordinates and synchronizes gestures with speech.
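A minimal sketch of a linear-muscle deformation in the spirit of such muscle models (not a reproduction of Platt and Badler's implementation) is given below: each vertex near the muscle's insertion point is pulled toward its attachment point with a distance-based falloff. All parameters are illustrative.

```python
import numpy as np

def linear_muscle_deform(vertices, attachment, insertion, contraction, radius):
    """Pull mesh vertices toward a muscle attachment point.

    A simplified radial-falloff muscle: each vertex within `radius` of the
    muscle's insertion point is displaced along the insertion->attachment
    direction, scaled by `contraction` and a cosine falloff.
    """
    attachment = np.asarray(attachment, dtype=float)
    insertion = np.asarray(insertion, dtype=float)
    pull_dir = attachment - insertion
    pull_dir /= np.linalg.norm(pull_dir)

    out = vertices.copy()
    dists = np.linalg.norm(vertices - insertion, axis=1)
    mask = dists < radius
    # Cosine falloff: full pull at the insertion, fading to zero at `radius`.
    falloff = 0.5 * (1.0 + np.cos(np.pi * dists[mask] / radius))
    out[mask] += contraction * falloff[:, None] * pull_dir
    return out

# Example: contract a "zygomatic major"-like muscle on a toy vertex cloud.
verts = np.random.rand(100, 3)
smiled = linear_muscle_deform(verts, attachment=[1.0, 1.0, 0.0],
                              insertion=[0.5, 0.5, 0.0],
                              contraction=0.2, radius=0.4)
```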
Performance-Driven Virtual Character
The performance-driven VHC is useful in circumstances where difficulties in controlling facial animation result from inaccuracies in movement tracking. This is a significant issue in real-time healthcare scenarios, especially in simulation-based training, where the interactivity of characters must be ensured so that motion and expressions combine naturally. Deng and Neumann (2008) proposed synthesizing an expressive speech animation system through control at the phoneme level. In Hofer et al. (2008) an automatic lip-motion trajectory synthesis for ECAs was presented. The work was extended by Zoric et al. (2011), who used animation trajectories to achieve the desired lip-speech synchronization with the ECA.
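The following Python sketch illustrates the trajectory idea in its simplest form: a phoneme timing sequence is converted into a per-frame viseme weight trajectory with a triangular rise and fall, so that adjacent mouth shapes blend. The phoneme-to-viseme table, frame rate, and weighting are assumptions, not the cited systems' actual pipelines.

```python
# Illustrative phoneme -> viseme table; real systems use larger inventories.
PHONEME_TO_VISEME = {"p": "lips_closed", "aa": "jaw_open",
                     "f": "lip_funnel", "sil": "rest"}

def viseme_trajectory(phonemes, fps=30):
    """phonemes: list of (phoneme, start_sec, end_sec).
    Returns one (viseme, weight) pair per frame."""
    total = max(end for _, _, end in phonemes)
    frames = []
    for i in range(int(total * fps) + 1):
        t = i / fps
        for ph, start, end in phonemes:
            if start <= t < end:
                mid = 0.5 * (start + end)
                half = max(0.5 * (end - start), 1e-6)
                # Triangular rise/fall so adjacent visemes blend smoothly.
                weight = max(0.0, 1.0 - abs(t - mid) / half)
                frames.append((PHONEME_TO_VISEME.get(ph, "rest"), weight))
                break
        else:
            frames.append(("rest", 0.0))
    return frames

frames = viseme_trajectory([("p", 0.00, 0.08), ("aa", 0.08, 0.30),
                            ("sil", 0.30, 0.40)])
```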
MPEG-4
Several studies adopt the MPEG-4 technique for facial animation (Parke and Waters 2008). MPEG-4 generates animation based on three kinds of face data: face definition parameters (FDPs), face animation parameters (FAPs), and the FAP interpolation table (FIT), which defines the FAP interpolation rules. The FDPs are used to create the 3D face geometry. The FAPs are designed to encode facial emotions, expressions, and speech pronunciation for animation. After 68 face parameters are identified (see Fig. 3), they are animated through the face animation parameter units (FAPUs), which are calculated as distances between facial features on the neutral face. Balci et al. (2007) created the Xface tool for ECAs using MPEG-4 and achieved key-frame-driven animation using the SMIL-agent scripting language. This method is visually depicted in Fig. 4. The purpose of their method is to be able to mimic human expression and emotion in the absence of gesture.
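The FAPU idea can be illustrated with a short sketch: two example units are measured on the neutral face, and a FAP amplitude is converted into a model-space displacement. The 1/1024 scaling follows the common MPEG-4 convention of expressing FAP amplitudes as fractions of a FAPU; the feature-point coordinates below are illustrative assumptions.

```python
# A minimal sketch of the MPEG-4 FAPU idea: FAPs are expressed in fractions
# of face-specific units measured on the neutral face, so the same FAP
# stream animates differently proportioned faces consistently.

def fapu_from_neutral_face(left_eye, right_eye, mouth_left, mouth_right):
    """Compute two example FAPUs from neutral-face feature points (2D here)."""
    es = abs(right_eye[0] - left_eye[0])      # eye separation (ES)
    mw = abs(mouth_right[0] - mouth_left[0])  # mouth width (MW)
    return {"ES": es, "MW": mw}

def fap_to_displacement(fap_value, fapu):
    """Convert an integer FAP amplitude into a model-space displacement
    (assuming the common 1/1024-of-a-FAPU scaling)."""
    return fap_value * fapu / 1024.0

fapus = fapu_from_neutral_face((-30, 0), (30, 0), (-20, -40), (20, -40))
# Stretch a mouth corner by a FAP of 512 -> half a mouth width.
dx = fap_to_displacement(512, fapus["MW"])
```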
Queiroz et al. (2009) used MPEG-4 to attain a face parameterization with high-level facial actions and behaviors for virtual characters that can interact with human users. Zoric et al. (2011) synchronized facial gestures and speech for a virtual character using MPEG-4, but the result lacked emotions. Their work was improved in Kessous et al. (2010), where a Bayesian classifier was used to identify the different types of emotions for different gestures; the resulting character was human-like but lacked face movements. Arsov et al. (2010) developed an approach that enables a virtual character to interact and to support the learning and practice of cued speech.
Visual Speech Virtual Character
Facial expression alongside recorded speech is difficult to model. The reason is this: human language comprises vocabulary, phonemes, and speech coarticulation, which need to be integrated to achieve realism (Arsov et al. 2010). Skantze and Al Moubayed (2012) presented IrisTK, a toolkit designed for multiparty face-to-face interaction, while the Situation, Agent, Intention, Behavior, and Animation (SAIBA) framework was developed to compute ECA communication and human behavior realization (Bevacqua 2011). Berger et al. (2011) incorporated hissing and buzzing into speech synchronization with a virtual character.
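A common way to model coarticulation is to let each phoneme's viseme target exert a time-decaying influence on neighboring frames and to blend the targets per frame, in the spirit of dominance-function models. The sketch below illustrates this for a single jaw-open parameter; the decay rate and targets are illustrative assumptions.

```python
import math

def dominance(t, center, strength=1.0, rate=12.0):
    """Influence of a phoneme centered at `center` on frame time `t`,
    decaying exponentially with temporal distance."""
    return strength * math.exp(-rate * abs(t - center))

def blended_jaw_open(t, segments):
    """segments: list of (center_sec, jaw_open_target).
    Returns the dominance-weighted average target at time t."""
    num = sum(dominance(t, c) * target for c, target in segments)
    den = sum(dominance(t, c) for c, _ in segments)
    return num / den if den > 0 else 0.0

# 'p' (closed jaw) followed by 'aa' (open jaw): frames between the two
# phoneme centers take intermediate values, i.e., coarticulation.
segments = [(0.05, 0.0), (0.20, 1.0)]
print([round(blended_jaw_open(t / 100, segments), 2)
       for t in range(0, 30, 5)])
```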
Virtual Human Character Challenges
The exhibition of facial expressions, emotions, and interaction by VHCs is not yet as good as that of a real human. However, simply to avoid the "uncanny valley," it is widely acceptable if the VHC can communicate all the important human expressions of emotion, make eye contact, and engage in interactive activities.
To simulate the emotional state of the virtual character, it is important to determine the facial expressions and behavioral characteristics that the model is to mimic prior to modeling, for example, in lip-syncing synthesis animation. Lip-syncing has existed for decades, but the real challenge lies in blending emotions with lip-syncing to produce fuller and richer character animation. For model-based talking-head systems, it is more elegant to work on the facial animation parameters to bring about coarticulation effects. Giving information to a user in the form of a talking head increases the acceptance level of the VHC by humans. Therefore, the integration of speech into an animated character may also make the user experience acceptable, especially for socially phobic patients.
During synchronization of the VHC with the human emotions identified during FAP extraction, conversation should be simulated to obtain the varying emotional states of the human model, and care should be taken to ensure that the lexicon obtained contains enough emotion variables before encoding. This guarantees that the different personalities and dissimilar emotions required in a dialogue are captured. Moreover, eye movements are a very important factor in achieving realism for VHCs. The believability of the virtual human lies not necessarily in being a human look-alike but in its ability to maintain facial behaviors that mimic human behavior, such as eye contact, blinking, and squinting.
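As a small illustration of such behaviors, the following sketch schedules procedural blinks at randomized intervals and computes an eyelid-closure weight per time step. Humans blink roughly every few seconds; the exact distribution and blink duration used here are assumptions.

```python
import random

def schedule_blinks(duration_sec, mean_interval=4.0, blink_len=0.15):
    """Return (start, end) times of blinks over `duration_sec` seconds,
    with Poisson-like gaps between blinks."""
    blinks, t = [], 0.0
    while True:
        t += random.expovariate(1.0 / mean_interval)
        if t + blink_len > duration_sec:
            return blinks
        blinks.append((round(t, 2), round(t + blink_len, 2)))

def eyelid_weight(t, blinks):
    """1.0 = eyelid fully closed at time t, 0.0 = fully open;
    triangular close/open profile within each blink."""
    for start, end in blinks:
        if start <= t <= end:
            mid = 0.5 * (start + end)
            return 1.0 - abs(t - mid) / (0.5 * (end - start))
    return 0.0

blinks = schedule_blinks(30.0)
```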
Conclusion
The future holds great potential for VHCs in healthcare. The benefits of VHCs in healthcare are endless and will serve to enhance medical practice, therapies, and education. The successes of VHCs in other fields of application, such as the military, games, movies, and sports, give a glimpse of the possibilities that will continue to drive the interest of VHC developers, researchers, and medical practitioners, who are the healthcare end users. There remains the challenge of achieving valuable experiences between the virtual human and the various application scenarios in healthcare while limiting the "uncanny valley" effect and attaining realism. Finally, the design features that enable the VHC to assume human behavior and character, which would make virtual humans in healthcare enjoyable and readily adopted, are still lacking. The sole focus, though, should not be on human behavior alone; the VHC should richly capture material content as well, especially when simulating an educational instructor or surgery.
References
Ali, I.R., Kolivand, H., Alkawaz, M.H.: Lip syncing method for realistic expressive 3D face model. Multimed. Tools Appl. 77(5), 5323–5366 (2018)
Altobelli, D.E., Kikinis, R., Mulliken, J.B., Cline, H., Lorensen, W., Jolesz, F.: Computer-assisted three-dimensional planning in craniofacial surgery. Plast. Reconstr. Surg. 92(4), 576–585 (1993)
Arsov, I., Jovanova, B., Preda, M., Preteux, F.: On-line animation system for learning and practice cued speech. In: ICT Innovations, pp. 315–325, Springer (2010)
Balci, K., Not, E., Zancanaro, M., Pianesi, F.: Xface open source project and SMIL-agent scripting language for creating and animating embodied conversational agents. In: Multimedia, pp. 1013–1016 (2007)
Berger, M.A., Hofer, G., Shimodaira, H.: Carnival-combining speech technology and computer animation. Comput. Graph. Appl. IEEE. 31(5), 80–89 (2011)
Bertrand, J., Babu, S.V., Polgreen, P. and Segre, A.: Virtual agents-based simulation for training healthcare workers in hand hygiene procedures. In: International Conference on Intelligent Virtual Agents, pp. 125–131. Springer, Berlin/Heidelberg (2010)
Bevacqua Jr. John F.: System, method and financial product for providing retirement income protection, U.S. Patent 7,908,196, issued March 15 (2011)
Brinkman, W.P., Van der Mast, C.A.P.G., de Vliegher, D.: Virtual Reality Exposure Therapy for Social Phobia: A Pilot Study in Evoking Fear in a Virtual World, vol. 1. Delft University of Technology, pp. 85–88 (2008)
Čereković, A., Pandžić, I.S.: Multimodal behaviour realization for embodied conversational agents. Multimed. Tools Appl. 54, 143–164 (2011)
Chinnock, C.: Virtual reality in surgery and medicine. Hosp. Technol. Ser. 13(18), 1–48 (1994)
Deng, Z., Neumann, U.: Data-Driven 3D Facial Animation. Springer, London (2008)
Didehbani, N., Allen, T., Kandalaft, M., Krawczyk, D., Chapman, S.: Virtual reality social cognition training for children with high functioning autism. Comput Hum Behav. 62, 703–711 (2016)
Grillon, H., Riquier, F., Herbelin, B., and Thalmann, D.: Use of virtual reality as therapeutic tool for behavioural exposure in the ambit of social anxiety disorder treatment. In: Proceedings of the 6th International Conference on Disability, Virtual Reality and Associated Technology, pp. 105–112. Esbjerg, 18–20 Sep, (2006)
Gromala, D., Tong, X., Choo, A., Karamnejad, M. and Shaw, C.D.: The virtual meditative walk: virtual reality therapy for chronic pain management. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 521–524. ACM (2015)
Hofer, G., Yamagishi, J., and Shimodaira, H.: Speech-driven lip motion generation with a trajectory HMM, Interspeech 2008, pp. 2314–2317, Brisbane (2008)
Karaliotas, C.: When simulation in surgical training meets virtual reality. Hell. J. Surg. 83(6), 303–316 (2011)
Kessous, L., Castellano, G., Caridakis, G.: Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis. J. Multimodal User Interfaces. 3, 33–48 (2010)
Khor, W.S., Baker, B., Amin, K., Chan, A., Patel, K., Wong, J.: Augmented and virtual reality in surgery-the digital surgical environment: applications, limitations and legal pitfalls. Ann. Transl. Med. 4(23), 454 (2016)
Kokkinara, E., McDonnell, R.: Animation realism affects perceived character appeal of a self-virtual face. In: Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games, ACM, Paris, France, Nov 16–18, pp. 221–226 (2015)
Kopp, S., Krenn, B., Marsella, S., Marshall, A.N., Pelachaud, C., Pirker, H.: Towards a Common Framework for Multimodal Generation: the Behaviour Markup Language. In: Intelligent Virtual Agents, pp. 205–217 (2006)
Kron, F.W., et al.: Medical student attitudes toward video games and related new media technologies in medical education. BMC Med. Educ. 10(1), 50 (2010)
Leigh, R., Zee, D.: The Neurology of Eye Movements: Book-and-DVD Package. Contemporary Neurology Series, vol. 70. Oxford University Press (2006)
Magnenat-Thalmann, N., Thalmann, D.: Handbook of Virtual Humans. Wiley, Hoboken (2004)
Laursen, M.H.: Partially automated system for synthesizing human facial expressions in interactive media. Aalborg University (2012)
Pan, X., Gillies, M., Barker, C., Clark, D.M., Slater, M.: Socially anxious and confident men interact with a forward virtual woman: an experimental study. PLoS One. 7(4), e32931 (2012)
Parke, F.I., Waters, K.: Computer Facial Animation. AK Peters/CRC Press (2008)
Patel, V., Aggarwal, R., Cohen, D., Taylor, D., Darzi, A.: Implementation of an interactive virtual-world simulation for structured surgeon assessment of clinical scenarios. J. Am. Coll. Surg. 217(2), 270–279 (2013)
Pasquariello, S., Pelachaud, C.: Greta: a simple facial animation engine. In: Soft Computing and Industry, pp. 511–525. Springer, London (2002)
Platt, S.M., Badler, N.I.: Animating facial expressions. ACM SIGGRAPH Comp. Graph. 15(3), 245–252 (1981)
Queiroz, R.B., Cohen, M., Musse, S.R.: An extensible framework for interactive facial animation with facial expressions, lip synchronization and eye behaviour. Comput. Entertain. 7, 58 (2009)
Raibert, M., Playter, R., Krummel, T.M.: The use of a virtual reality haptic device in surgical training. Acad. Med. 73(5), 596–597 (1998)
Rosen, J.M., Lasko-Harvill, A., Satava, R.: Virtual reality and surgery. In: Taylor, R., Lavallee, S., Burdea, G., Moesges, R. (eds.) Computer-Integrated Surgery, pp. 231–244. MIT Press, Cambridge (1996)
Satava, R.M.: Interactive technology and the new paradigm for healthcare, pp. 21–28. IOS Press, Washington, DC (1995)
Seyama, J., Nagayama, R.: The uncanny valley: effect of realism on the impression of artificial human faces. Presence Teleop. Virt. Environ. 16(4), 337–351 (2007)
Shapiro, A.: Building a character animation system. In: Motion in games, pp. 98–109. Springer, Berlin (2011)
Sjolie, D.: Realistic and adaptive cognitive training using virtual characters. In: Proceedings of ICDVRAT, Sept 2014, pp. 385–388 (2014)
Skantze, G., Al Moubayed, S.: IrisTK: a statechart-based toolkit for multi-party face-to-face interaction. In: Proceedings of the 14th ACM international conference on Multimodal interaction, pp. 69–76, ACM (2012)
Smith, D.M., Aston, S.J., Oliker, A., Weinzweig, J.: Designing a virtual reality model for aesthetic surgery. Plast. Reconstr. Surg. 116(3), 893–897 (2005)
Thiebaux, M., Marsella, S., Marshall, A.N., Kallmann, M.: Smartbody: Behaviour Realization for Embodied Conversational Agents, Autonomous Agents and Multiagent Systems, vol. 1, pp. 151–158 (2008)
Wang, L., Qian, X., Han, W., Soong, F.K.: Synthesizing photo-real talking head via trajectory-guided sample selection. In: INTERSPEECH, pp. 446–449 (2010)
Zhang, L., Jiang, M., Farid, D., Hossain, M.: Intelligent facial emotion recognition and semantic-based topic detection for a humanoid robot. Expert Syst. Appl. 40, 5160–5168 (2013)
Zielke, M., LeFlore, J., Dufour, F., and Hardee, G.: Game-based virtual patients–educational opportunities and design challenges. In: Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC). Orlando, Florida (2010)
Zoric, G., Forchheimer, R., Pandzic, I.S.: On creating multimodal virtual humans-real time speech driven facial gesturing. Multimed. Tools Appl. 54, 165–179 (2011)