Bärbel Bissinger, Christian Märtin and Christian Herdin: Applied Emotion AI in Video Conferences
Artificial Emotional Intelligence, Affective Computing, or Emotion AI deals with the ability of machines to recognize human emotions. Our physical signals can be analyzed and categorized, which makes it possible to train machines to recognize emotions and respond to them. This changes how we interact with technology, and it could also change how we interact with each other. Research activity in this field is growing, as is the number of products on the market that apply Emotion AI. According to a recent forecast, emotion detection and recognition is a rapidly growing market that will be worth more than 42 billion USD by 2027. As technology becomes ubiquitous in interpersonal interactions and activities, Emotion AI could make our tool-based interactions more human-like.
Emotions play a central role in our communication and decision-making and should therefore receive more attention, even in business environments. Since Covid-19, more and more meetings have been held virtually, which has advantages but also many disadvantages. For example, the transmission of non-verbal signals becomes more difficult, which changes our interaction behavior. People also report exhaustion caused by the sheer number of video conferences, the so-called Zoom fatigue phenomenon. One issue is the constant self-view: seeing yourself all the time is not natural, and while it can have a positive effect on self-awareness, it can also negatively affect enjoyment and perceived productivity. Alternatives to activated cameras that nevertheless transmit emotional reactions automatically could therefore be useful.
In our previous research, we used Facial Expression Recognition (FER) in video conferences to detect the emotional states of participants. In these small-scale user studies, we analyzed participants' facial expressions with a FER tool and human observers to detect and visualize emotions. Outside the lab, we encountered situations in which facial expressions alone were not sufficient to identify emotions correctly: circumstances such as changing video quality or participant movement made automatic face-based emotion recognition challenging. Moreover, we could only analyze emotions when participants shared their video during the meeting.
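To make the frame-based FER step concrete, the following is a minimal sketch using the open-source Python package fer; this package is chosen purely for illustration, as the abstract does not name the FER tool actually used, and the file name and confidence threshold are invented.

```python
# Illustrative sketch of per-frame facial expression recognition with the
# open-source "fer" package (https://pypi.org/project/fer/); not the authors' tool.
import cv2
from fer import FER

detector = FER(mtcnn=True)  # MTCNN face detection is more robust to movement

frame = cv2.imread("participant_frame.jpg")  # hypothetical captured conference frame
faces = detector.detect_emotions(frame)      # one entry per detected face

for face in faces:
    # 'emotions' maps labels (angry, happy, neutral, ...) to confidence scores
    label, score = max(face["emotions"].items(), key=lambda kv: kv[1])
    if score < 0.5:
        # Low confidence, e.g., from poor video quality or motion blur --
        # exactly the failure cases the study observed outside the lab.
        print("emotion unclear for this face")
    else:
        print(f"dominant emotion: {label} ({score:.2f})")
```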
In this paper, we present a different approach, in which we integrate and analyze a set of physical signals in addition to facial expressions, and in which participants do not share their video in the conference but instead an interface that changes depending on their emotional state. This makes it possible to share emotions without sharing one's own video. For this purpose, we use our SitAdapt system.
SitAdapt is an integrated software system that enables situation-aware real-time adaptations for web and mobile applications. SitAdapt collects data about the user through the APIs of the attached devices and software, such as an eye tracker, a wristband, and facial expression and EEG signal recognition software, as well as through metadata from the application. The included rule editor allows situation rules to be defined and modified, e.g., for specifying different user states and the resulting actions. Rule conditions can be formulated over all input data types and attribute values as well as their temporal changes. At application runtime, the adaptation component triggers every rule whose conditions apply and adapts the user interface accordingly.
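As a rough illustration of this rule mechanism, here is a minimal hypothetical sketch of how a situation rule might be represented and evaluated at runtime; the signal attributes, thresholds, and rule names are invented and do not reflect SitAdapt's actual rule syntax or APIs.

```python
# Hypothetical sketch of a situation rule and its runtime evaluation; the real
# SitAdapt rule editor and adaptation component are not specified in the abstract.
from dataclasses import dataclass
from typing import Callable, Dict, List

Situation = Dict[str, float]  # fused signal values, e.g. from FER, wristband, eye tracker

@dataclass
class SituationRule:
    name: str
    condition: Callable[[Situation], bool]
    action: Callable[[], None]

def adapt(situation: Situation, rules: List[SituationRule]) -> None:
    """Runtime adaptation step: fire every rule whose condition holds."""
    for rule in rules:
        if rule.condition(situation):
            rule.action()

# Example rule: signal rising tension in the shared interface instead of video.
rules = [
    SituationRule(
        name="show_stress_indicator",
        condition=lambda s: s.get("heart_rate", 0) > 100 and s.get("anger", 0) > 0.6,
        action=lambda: print("UI: switch shared interface to 'tense' state"),
    )
]

adapt({"heart_rate": 112, "anger": 0.7}, rules)  # -> triggers the adaptation
```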
Benjamin Roszipal, Sebastian Egger-Lampl and Markus Karlseder: Enhancing Higher Education through XR Technology: An Innovative Approach to Knowledge Acquisition, Skill Development, and Examination
Knowledge acquisition and skill development are fundamental components of effective learning in higher education. Traditional methodologies involving pattern recognition, knowledge reproduction, and hands-on practice have long been used to build competencies in various fields. However, certain fields, such as healthcare and large-scale industry, often face limitations in hands-on training due to safety concerns or resource-intensive simulations.
This contribution explores the potential of experiential learning in digital simulations supported by Extended Reality (XR) technology as a novel and efficient approach to address these challenges. XR provides an advanced platform for creating immersive and engaging experiential learning environments. Leveraging XR, we have developed a didactic concept that enables learners to acquire knowledge, develop skills, and achieve successful test results in a meaningful and effective manner.
The implementation of XR-supported experiential learning (XREL) offers several advantages over traditional methods. It provides learners with a safe, risk-free environment in which to practice complex tasks, promoting higher knowledge retention and skill transfer. XREL can be tailored to diverse educational needs, catering both to beginners seeking foundational knowledge and to seasoned experts looking to enhance their proficiency.
Our research investigates the effectiveness of XREL in learning outcomes across various disciplines. We present a comprehensive analysis of our XREL-based didactic approach, demonstrating its applicability in upskilling employees, students, and experts alike. The study showcases how our XREL solutions facilitate experiential learning, empowering learners to acquire knowledge and skills through immersive experiences.
Through this conference contribution, we aim to shed light on the potential of XREL technology in transforming higher education, bridging the gap between theoretical understanding and practical application. We provide empirical evidence of the positive impact of XR simulations on knowledge acquisition, skill development, and proficiency assessment. Ultimately, this work paves the way for a more efficient and accessible approach to learning and training in various fields, fostering continuous growth and advancement in education and professional development.
XRCONSOLE is an R&D-driven SME focused on incorporating cutting-edge research into interactive educational XR technologies. Engaging in various national and international research projects, we develop XR prototypes and integrate behavioral probes and analytics to create educational and training KPIs. Our interdisciplinary team with scientific, development, and educational backgrounds ensures evidence-based practices, fostering meaningful collaboration with academia, industry, and educators. With a vision for immersive learning experiences, we strive to revolutionize education through XR technology, empowering learners of all backgrounds to acquire knowledge and skills in unprecedented ways.
Dennis Rosenberg: Predictors of the Perceived Health-Related Usefulness of Mobile Devices in Later Life: Results of the Health Information National Trends Survey
Objectives and Research Question: The goal of this study was to assess the impact of older adults' background characteristics on their perceptions of the health-related usefulness of mobile devices. The third-level digital divide approach was employed as the theoretical framework. According to this approach, (older) people tend to benefit differently from using technology even when they have equal access to it and are equally competent or skilled in its use; the differences in these benefits, or in their perception, are attributed to the categories to which (older) people belong. The study addressed the following question: which socio-demographic and health-related characteristics of older adults are associated with their perception of mobile device usefulness in various health domains?

Methods: The data were obtained from the Health Information National Trends Survey (Wave 5, Cycle 4), conducted in February-June 2020. The sample included 1373 U.S. older mobile device owners, i.e., people who reported owning a smartphone, a tablet, or both. In accordance with the definitions of the United Nations and the World Health Organization, the lower age limit was set at 60 years. The items serving as dependent variables asked whether a tablet or smartphone had helped respondents in the following domains: tracking progress on a health-related goal (achieving a health-related goal), making a decision about how to treat a chronic illness or condition (medical decision making), and discussions with a healthcare provider (patient-provider communication). Since each item was dichotomous, the data were analyzed using logistic regression models (a minimal sketch of such a model follows this abstract).

Results: Being 75 years old or older and self-identification as White interacted significantly with respect to perceived mobile device usefulness for achieving a health-related goal and for patient-provider communication. The age and White race variables were negatively and independently associated with perceived mobile device usefulness for medical decision making. Being married, female gender, better self-rated health, and a hypertension diagnosis corresponded to a greater likelihood of perceiving one's mobile device as useful for achieving a health-related goal. A mental health disorder diagnosis was positively related to perceived usefulness for medical decision making and patient-provider communication. Education and income levels also showed some associations, mainly with perceived usefulness for achieving a health-related goal.

Discussion: The findings support the employed theoretical framework by showing that some categories of older adults are more likely than others to perceive their mobile devices as useful in health domains. Health background appears to be less dominant than socio-demographic background in explaining the perceived health-related usefulness of mobile devices, providing additional support for the third-level digital divide approach. The findings may inform programs aimed at encouraging older adults to use their mobile devices for health purposes.
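The following is an illustrative sketch, in Python with statsmodels, of the kind of logistic regression model described in the Methods; the file name and all column names are hypothetical placeholders, since the actual HINTS variable names are not given in the abstract.

```python
# Illustrative re-analysis sketch with statsmodels; column names are invented
# placeholders, not the actual HINTS 5 Cycle 4 variable names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hints5_cycle4_owners.csv")  # hypothetical extract: 1373 device owners

# One model per dichotomous outcome; the age x race interaction term mirrors the
# interaction reported for health-goal tracking and patient-provider communication.
model = smf.logit(
    "useful_health_goal ~ age_75plus * white + female + married"
    " + self_rated_health + hypertension + mental_health + education + income",
    data=df,
).fit()
print(model.summary())  # odds ratios via model.params.apply(np.exp) if desired
```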
Robert Halwaß, Steffen Prowe, Evelyne Becker, Joachim Villwock, Martina Mauch, Linnea Pehl, Clara Simon and Lena Ziesmann: Immersive STEM Laboratories to Improve Teaching and Learning
The project “Interactive Teaching in Virtual STEM Labs | MINT-VR-Labs” (MINT is the German acronym for STEM) takes up the didactic potential of virtualized laboratories in order to anchor innovative interaction formats in virtual space and thereby establish new blended-learning and virtual teaching/learning formats in practical modules at the Berlin University of Applied Sciences (BHT). In these virtual labs, students can, for example, explore biotechnology labs, look behind the scenes of a theater stage, learn basic programming skills in a pizza factory, or experience complex mathematical functions. The virtual laboratory exercises are intended to supplement classroom teaching, reduce the heterogeneity of students' prior knowledge, and increase student success. For this reason, the development and testing of these virtual laboratories will be accompanied didactically, and the effectiveness of using these digital media in university teaching will be evaluated.
(Chair: Georg Vogt)