Integrating Emotion Recognition Tools for Developing Emotionally Intelligent Agents

  1. Samuel Marcos-Pablos (1)
  2. Fernando Lobato Alejano (2)
  3. Francisco José García-Peñalvo (1)

  (1) Universidad de Salamanca, Salamanca, Spain. ROR: https://ror.org/02f40zc51
  (2) Universidad Pontificia de Salamanca, Salamanca, Spain. ROR: https://ror.org/02jj93564
Journal:
IJIMAI

ISSN: 1989-1660

Publication year: 2022

Issue title: Special Issue on New Trends in Disruptive Technologies, Tech Ethics and Artificial Intelligence

Volume: 7

Number: 6

Pages: 69-76

Type: Article

DOI: 10.9781/IJIMAI.2022.09.004


Abstract

Emotionally responsive agents that can simulate emotional intelligence increase user acceptance, as the feeling of empathy reduces negative perceptual feedback. This has fostered research on emotional intelligence over recent decades, and numerous cloud and local tools for automatic emotion recognition are now available, even to inexperienced users. However, these tools usually focus on recognizing discrete emotions sensed from a single communication channel, even though multimodal approaches have been shown to outperform unimodal ones. The objective of this paper is therefore to present our approach to multimodal emotion recognition, which uses Kalman filters to fuse the outputs of available discrete emotion recognition tools. The proposed system has been developed modularly, following an evolutionary approach, so that it can be integrated into our digital ecosystems and new emotion recognition sources can be added easily. The obtained results show improvements over unimodal tools when recognizing naturally displayed emotions.
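To make the fusion idea concrete, the sketch below shows one way a per-emotion scalar Kalman filter could fuse scores reported by several unimodal tools (e.g., a face-based and a voice-based recognizer). This is a minimal illustration, not the authors' implementation: the `ScalarKalman` and `fuse` names, the random-walk process model, and all noise values are assumptions introduced here for clarity.

```python
"""Minimal sketch of Kalman-filter fusion of discrete emotion scores."""


class ScalarKalman:
    """1-D Kalman filter tracking the fused score of one discrete emotion."""

    def __init__(self, x0=0.0, p0=1.0, process_noise=0.01):
        self.x = x0              # current fused score estimate
        self.p = p0              # estimate variance (uncertainty)
        self.q = process_noise   # how fast the true emotion may drift

    def predict(self):
        # Random-walk model: the score is assumed constant between steps,
        # but its uncertainty grows by the process noise.
        self.p += self.q

    def update(self, z, r):
        # z: score reported by one tool; r: its measurement noise
        # (e.g., derived from that tool's confidence output).
        k = self.p / (self.p + r)      # Kalman gain
        self.x += k * (z - self.x)     # correct estimate toward measurement
        self.p *= (1.0 - k)            # reduce uncertainty after fusing


def fuse(filters, readings):
    """Fold one time step of multimodal readings into the per-emotion filters.

    readings: mapping emotion -> list of (score, noise) pairs, one pair per
    unimodal source that reported that emotion during this step.
    """
    for emotion, measurements in readings.items():
        f = filters.setdefault(emotion, ScalarKalman())
        f.predict()
        for score, noise in measurements:
            f.update(score, noise)
    return {emotion: f.x for emotion, f in filters.items()}


if __name__ == "__main__":
    filters = {}
    # Hypothetical step: the face tool is confident (low noise, 0.05),
    # while the voice tool is noisier (0.2).
    step = {"happiness": [(0.8, 0.05), (0.6, 0.2)],
            "sadness": [(0.1, 0.05), (0.2, 0.2)]}
    print(fuse(filters, step))  # fused per-emotion scores
```

In this simple linear case, applying the sources' measurements sequentially is equivalent to fusing them in one batch when their noises are independent, which keeps each recognition source pluggable and matches the modular integration goal described in the abstract.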
