{"id":87203,"date":"2025-08-13T10:00:00","date_gmt":"2025-08-13T08:00:00","guid":{"rendered":"https:\/\/aktuelles.uni-frankfurt.de\/?p=87203"},"modified":"2026-01-30T10:45:29","modified_gmt":"2026-01-30T09:45:29","slug":"when-ai-learns-from-gestures","status":"publish","type":"post","link":"https:\/\/aktuelles.uni-frankfurt.de\/en\/english\/when-ai-learns-from-gestures\/","title":{"rendered":"When AI learns from gestures"},"content":{"rendered":"<h2 class=\"wp-block-heading\">A subproject in the Priority Program \u201cVisual Communication\u201d endeavors to make body language analyzable<\/h2>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" decoding=\"async\" width=\"500\" height=\"333\" data-id=\"85027\" src=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0656-2-Avatare-500x333.jpg\" alt=\"\" class=\"wp-image-85027\" srcset=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0656-2-Avatare-500x333.jpg 500w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0656-2-Avatare-300x200.jpg 300w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0656-2-Avatare-18x12.jpg 18w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0656-2-Avatare.jpg 650w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"500\" height=\"346\" data-id=\"85028\" src=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_VR3-2Avatare-500x346.png\" alt=\"\" class=\"wp-image-85028\" srcset=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_VR3-2Avatare-500x346.png 500w, 
https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_VR3-2Avatare-300x208.png 300w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_VR3-2Avatare-18x12.png 18w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_VR3-2Avatare.png 650w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Gestures help people understand each other. They are also increasingly important for human-machine interaction.<br>Photos: Uwe Dettmar; all other photos: Technology Lab<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><strong>Visual means of communication such as gestures and facial expressions are age-old forms of human understanding. When someone speaks, they not only string words or sentences together but also use their hands, arms, head and face. ViCom, a Priority Program at Goethe University Frankfurt funded by the German Research Foundation (DFG), has been studying these nonverbal channels since 2021. One of the subprojects is examining the significance of body language with the help of virtual reality (VR) recording methods.<\/strong><\/p>\n\n\n\n<p>Transitioning from the office in the Faculty of Computer Science and Mathematics to the virtual world takes only a few seconds: The test person (in this case, the author) just puts on her VR goggles and is suddenly an avatar standing on a street that runs through a small town \u2013 past a chapel, through the park to the church and onwards to the town hall. Her task then is to describe the way to another test person so that they can follow.<\/p>\n\n\n\n<p>For computational linguist and computer scientist Professor Alexander Mehler, linguist Dr. Andy L\u00fccking and computer scientist Dr. Alexander Henlein in Frankfurt, the dialogs taking place in the VR lab can deliver valuable insights: What role do gesticulation and facial expressions play when describing the way? 
How is spoken language combined with pointing, facial cues or other nonverbal instructions? Which route does the second test person take and what helps them to arrive at their destination? These are questions that researchers in the GeMDiS project (Virtual Reality Sustained Multimodal Distributional Semantics for Gestures in Dialog) want to answer.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-group is-nowrap is-layout-flex wp-container-core-group-is-layout-6c531013 wp-block-group-is-layout-flex\">\n<figure class=\"wp-block-image alignleft size-large is-resized\"><img decoding=\"async\" width=\"500\" height=\"333\" src=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt-500x333.jpg\" alt=\"\" class=\"wp-image-85029\" style=\"width:600px\" srcset=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt-500x333.jpg 500w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt-300x200.jpg 300w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt-18x12.jpg 18w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt.jpg 650w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image alignright size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"346\" src=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_VR1-Linke-Hand-500x346.png\" alt=\"\" class=\"wp-image-85030\" style=\"width:578px\" srcset=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_VR1-Linke-Hand-500x346.png 500w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_VR1-Linke-Hand-300x208.png 300w, 
https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_VR1-Linke-Hand-18x12.png 18w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_VR1-Linke-Hand.png 650w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><\/figure>\n<\/div>\n\n\n\n<p class=\"has-text-align-center\">Pointing \u2013 a clear gesture even in early childhood that the computer must first learn with the help of VR technology.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Gestures: Long neglected by linguistics<\/h3>\n\n\n\n<p>Although gestures were long regarded in linguistic research as mere accessories to verbal utterances, we know today that they are much more than that. There is a difference between a guest in a restaurant saying \u201cThe food is too salty\u201d and banging his fist on the table as he says it. \u201cWe use gestures and facial expressions because they enable us to convey more information and make communication more efficient,\u201d says Alexander Mehler, and gives another example: \u201cIf I want to talk to my interlocutor about a certain plant, I automatically look or point in the corresponding direction. This makes it easier to identify the object without wasting a lot of words.\u201d<\/p>\n\n\n\n<p>This is exactly what most of the test persons also do when describing their route in the VR trials. In countless experiments, the GeMDiS team has observed, for example, that the participants use their hands to form a triangle when saying the word \u201cchurch\u201d to describe the roof more expressively. To indicate a pond in the virtual landscape, they draw a circle with their finger. 
The project hypothesizes that the more frequently the first test person couples spoken language with gestures and facial expressions, the more easily the second test person can follow the way.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Restricted gestures \u2013 less understanding<\/h3>\n\n\n\n<p>The GeMDiS team can also use virtual reality technology to explore what happens when body language is restricted. For example, the test person\u2019s auditory and visual ability can be influenced by manipulating the audio and video outputs of the VR goggles. Within the VR environment, it is also possible to control the extent to which test persons can grasp and use virtual objects: In one of the experiments, for example, the test person could no longer pick up a virtual cup from the table. The researchers discovered that restricting a person\u2019s ability to act in this way has a major impact on communication \u2013 an impact not seen when auditory and visual quality are degraded by technical means: Test persons whose possibilities for interaction are curtailed move around far less in the virtual world, talk more about their experiences there, and judge the experiment more negatively overall.<\/p>\n\n\n\n<p>Each test series in experiments of this kind lasts around 25 minutes. The GeMDiS team then looks in detail at the image sequences and analyzes them. The computer delivers data on hand and face movements from this three-dimensional space. These data are used over the course of the project to train artificial intelligence (AI) models. 
The AI recognizes when examples frequently recur \u2013 like when a word such as \u201ctree\u201d or \u201ccrossroads\u201d is often combined with a specific gesture when describing the way.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-group has-background is-layout-constrained wp-block-group-is-layout-constrained\" style=\"background-color:#eeeeee\">\n<p class=\"has-text-align-center\">IN A NUTSHELL<\/p>\n\n\n\n<ul style=\"background-color:#eeeeee\" class=\"wp-block-list has-background\">\n<li>Experiments based on virtual reality (VR) have demonstrated that the combination of verbal language and body language (e.g. gestures) greatly improves comprehensibility and orientation, for example when giving directions.<br><\/li>\n\n\n\n<li>In subsequent experiments, the test persons&#8217; ability to perceive gestures or facial expressions in the VR environment was restricted via technical means. The outcome was that communication was less effective, and participants&#8217; overall perception of how they experienced it was more negative.<br><\/li>\n\n\n\n<li>The GeMDiS project uses AI to recognize patterns in the combination of verbal and nonverbal language. In the process, the different communication signals are mapped in a common, multimodal semantic space.<br><\/li>\n\n\n\n<li>The aim is to establish a form of corpus linguistics that integrates spoken and nonverbal language and to use the resulting semantic space as the basis for multimodal AI, e.g. 
to create a corresponding language-gesture lexicon or for improved human-machine interaction.<\/li>\n<\/ul>\n<\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n\n\n\n<figure class=\"wp-block-image alignleft size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"333\" src=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0668-500x333.jpg\" alt=\"\" class=\"wp-image-85024\" srcset=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0668-500x333.jpg 500w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0668-300x200.jpg 300w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0668-18x12.jpg 18w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0668.jpg 650w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image alignleft size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"333\" src=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0670-500x333.jpg\" alt=\"\" class=\"wp-image-85025\" srcset=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0670-500x333.jpg 500w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0670-300x200.jpg 300w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0670-18x12.jpg 18w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0670.jpg 650w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image alignleft size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"333\" src=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0677-500x333.jpg\" alt=\"\" class=\"wp-image-85026\" 
srcset=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0677-500x333.jpg 500w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0677-300x200.jpg 300w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0677-18x12.jpg 18w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0677.jpg 650w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><figcaption class=\"wp-element-caption\">Getting to the point: Visual communication is sometimes a big help when talking about computational linguistics too. Andy L\u00fccking is seen here explaining the ViCom research project he is working on with Alexander Henlein and Alexander Mehler.<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Once trained, the AI should recognize patterns<\/h3>\n\n\n\n<p>To do this, the AI generates something called a multimodal similarity space, a kind of mathematical or geometric representation of the collected data, which makes it possible to compare different linguistic signs, such as words, sentences or grammar, and nonverbal signs, such as gestures, facial expressions or body posture. \u201cThe advantage is that this similarity space also provides areas for things that are not said or expressed as gestures. The AI does not have to monitor these areas, but they make it easier to recognize more and more new gestures or language data,\u201d explain the scientists. In this way, things that were not detected when the algorithms were trained can nevertheless be \u201crecognized\u201d later.<\/p>\n\n\n\n<p>Here, the computer scientists and linguists in Frankfurt are using a method long common in spoken and written language: corpus-based linguistics. This involves analyzing language data from large collections of texts, referred to as corpora. The aim is to recognize patterns \u2013 for example, at the level of grammatical structure or word frequency. 
To make predictions, they also analyze how linguistic and non-linguistic units relate to each other. This method makes it possible to document even the finest changes in language or to visualize how expressions are used in different contexts. This is not simply a playground for theorists but instead can help in practice to improve communication between humans and machines \u2013 in speech recognition systems, for example, or systems based on gesture control.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Gestures cover a broad spectrum<\/h3>\n\n\n\n<p>Because corpus linguistics has to date dealt far less with the analysis of nonverbal language such as gestures or facial expressions, there is a lack of corresponding corpora. That is why the GeMDiS team wants to contribute to closing this data gap with the help of multimodal corpus linguistics, that is, by analyzing spoken and nonverbal language in combination. Methodologically, this is a challenging task. While in spoken and written language each word can be understood via its assigned meaning, visual communication is far more complex. \u201cGestures that accompany speech are not prescribed or codified, in contrast to sign language, for example. Pointing gestures, for example, have very different meanings depending on the context,\u201d explains Professor Mehler. \u201cUnlike in spoken language, form variance is also vast. Gestures are not always the same. For example, if I have injured my hand, I point differently than I would without that restriction.\u201d In addition, a gesture\u2019s meaning can vary depending on where it is used and by whom. Nodding your head means you agree? Not always. In Greece, for example, it means \u201cNo\u201d and not \u201cYes\u201d.<\/p>\n\n\n\n<p>Let\u2019s get back to gesture research in the VR lab in Frankfurt. 
The GeMDiS team still needs to observe a lot of dialogs in the virtual world as well as collect and analyze multimodal information so that other scientists can also work with the wealth of data in the future. In the long term, the results of the ViCom subproject should contribute to facilitating human-machine interaction. But the research project might also produce a multimodal gesture lexicon, a bit like a dictionary for a foreign language. The idea, for example, is to use images such as shaking hands, bowing, or similar symbols to represent various forms of greeting.<\/p>\n\n\n\n<p>Furthermore, the VR-based recording technology at Goethe University Frankfurt could yield interesting findings for sign language. In collaboration with the University of Cologne, the GeMDiS team wants to shed light on the role of mouth movements when describing a route. For deaf people to understand what is being said, these mouth movements are very important. Mouthings and mouth gestures are an integral part of sign language.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p class=\"has-background\" style=\"background-color:#eeeeee\"><strong>Visual Communication<\/strong><br><br>As philosopher and communication theorist Paul Watzlawick once said: \u201cOne cannot not communicate.\u201d Even when words fail, people convey messages with their body. The Priority Program \u201cVisual Communication\u201d (<a href=\"https:\/\/vicom.info\" target=\"_blank\" rel=\"noreferrer noopener\">ViCom<\/a>) funded by the German Research Foundation (DFG) examines this nonverbal form of communication in greater depth. 
In a collaborative project between Goethe University Frankfurt and the University of G\u00f6ttingen, researchers from various disciplines are investigating, for example, commonalities between gestures and sign language, the effects of gestures in didactic or therapeutic contexts, animal communication, and how human-computer interaction works. The aim of ViCom is to develop a new model that captures the multidimensionality of communication.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p class=\"has-background\" style=\"background-color:#eeeeee\"><strong>About<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"500\" src=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/AndyLueckig_AlexanderMehler_AlexanderHenlein._Dettmar-500x500.jpg\" alt=\"\" class=\"wp-image-85031\" style=\"width:494px;height:auto\" srcset=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/AndyLueckig_AlexanderMehler_AlexanderHenlein._Dettmar-500x500.jpg 500w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/AndyLueckig_AlexanderMehler_AlexanderHenlein._Dettmar-300x300.jpg 300w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/AndyLueckig_AlexanderMehler_AlexanderHenlein._Dettmar-150x150.jpg 150w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/AndyLueckig_AlexanderMehler_AlexanderHenlein._Dettmar-12x12.jpg 12w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/AndyLueckig_AlexanderMehler_AlexanderHenlein._Dettmar.jpg 650w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><figcaption class=\"wp-element-caption\">Left to right: Dr. Andy L\u00fccking, Dr. Alexander Henlein, Prof. Dr. Alexander Mehler<\/figcaption><\/figure>\n\n\n\n<p class=\"has-background\" style=\"background-color:#eeeeee\"><strong>Prof. Dr. 
Alexander Mehler<\/strong> has held the Chair for Computational Humanities\/Text Technology at Goethe University\u2019s Faculty of Computer Science and Mathematics since 2013. He earned his doctoral degree in computational linguistics at Trier University. His research interests include the quantitative analysis, simulative synthesis and formal modeling of textual units in spoken and written communication. In this context, he studies linguistic networks based on contemporary and historical languages (using language evolution models). One of his current research interests is 4D text technologies based on virtual reality (VR), augmented reality (AR) and augmented virtuality (AV).<br><a href=\"mailto:mehler@em.uni-frankfurt.de\">mehler@em.uni-frankfurt.de<\/a><\/p>\n\n\n\n<p class=\"has-background\" style=\"background-color:#eeeeee\"><strong>Dr. habil. Andy L\u00fccking<\/strong> is a private lecturer and principal investigator in Goethe University Frankfurt\u2019s Text Technology Lab. He studied linguistics, philosophy and German philology at the University of Bielefeld and earned his doctoral degree there with a dissertation on multimodal grammar extensions. He worked as a postdoctoral researcher in computational linguistics\/text technology and defended his postdoctoral degree (Habilitation) at the Universit\u00e9 Paris Cit\u00e9 with a dissertation on \u201cAspects of Multimodal Communication\u201d, in particular a \u201cgesture-friendly\u201d semantic theory of plurality and quantification. L\u00fccking is especially interested in the linguistic theory of human communication, that is, the face-to-face interaction within and beyond single sentences, paying particular attention to various kinds of gestures and cognitive modeling.<br><a href=\"mailto:luecking@em.uni-frankfurt.de\">luecking@em.uni-frankfurt.de<\/a><\/p>\n\n\n\n<p class=\"has-background\" style=\"background-color:#eeeeee\"><strong>Dr. 
Alexander Henlein <\/strong>is a postdoctoral researcher in the Text Technology Lab. As a computer science student at Goethe University Frankfurt, his main interest was computer speech recognition. His dissertation explored text-to-3D scene generation and the question of how three-dimensional worlds can evolve from language. He is currently dealing with multimodal language and communication, especially in conjunction with modern language models, and working on the development of a VR-based system to collect and analyze multimodal data.<br><a href=\"mailto:henlein@em.uni-frankfurt.de\">henlein@em.uni-frankfurt.de<\/a><\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-image alignleft size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"333\" src=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt-500x333.jpg\" alt=\"\" class=\"wp-image-85029\" style=\"width:180px\" srcset=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt-500x333.jpg 500w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt-300x200.jpg 300w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt-18x12.jpg 18w, https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt.jpg 650w\" sizes=\"(max-width: 500px) 100vw, 500px\" \/><\/figure>\n\n\n\n<p class=\"has-background\" style=\"background-color:#eeeeee\"><strong>The author<\/strong><br>Katja <strong>Irle<\/strong>, born in 1971, is an education and science journalist, author and moderator.<br><a href=\"mailto:irle@schreibenundsprechen.eu\">irle@schreibenundsprechen.eu<\/a><\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p class=\"has-background\" 
style=\"background-color:#eeeeee\"><a href=\"https:\/\/www.forschung-frankfurt.uni-frankfurt.de\/34831594\/aktuelle_Ausgabe\" target=\"_blank\" rel=\"noreferrer noopener\">Zur gesamten Ausgabe von Forschung Frankfurt 1\/2025: Sprache, wir verstehen uns!<\/a><\/p>\n\n\n\n<p><\/p>","protected":false},"excerpt":{"rendered":"<p>A subproject in the Priority Program \u201cVisual Communication\u201d endeavors to make body language analyzable Gestures help people under-stand each other. They are also increasingly important for human-machine interaction.All photos: Uwe [&hellip;]<\/p>\n","protected":false},"author":8,"featured_media":85029,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_eb_attr":"","_price":"","_stock":"","_tribe_ticket_header":"","_tribe_default_ticket_provider":"","_ticket_start_date":"","_ticket_end_date":"","_tribe_ticket_show_description":"","_tribe_ticket_show_not_going":false,"_tribe_ticket_use_global_stock":"","_tribe_ticket_global_stock_level":"","_global_stock_mode":"","_global_stock_cap":"","_tribe_rsvp_for_event":"","_tribe_ticket_going_count":"","_tribe_ticket_not_going_count":"","_tribe_tickets_list":"[]","_tribe_ticket_has_attendee_info_fields":false,"footnotes":""},"categories":[126,254],"tags":[412,297,273],"post_folder":[],"class_list":["post-87203","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-english","category-research","tag-forschung-frankfurt-1-25","tag-informatics","tag-linguistics"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>When AI learns from gestures | Aktuelles aus der Goethe-Universit\u00e4t Frankfurt<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" 
href=\"https:\/\/aktuelles.uni-frankfurt.de\/en\/english\/when-ai-learns-from-gestures\/\" \/>\n<meta property=\"og:locale\" content=\"en_GB\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"When AI learns from gestures | Aktuelles aus der Goethe-Universit\u00e4t Frankfurt\" \/>\n<meta property=\"og:description\" content=\"A subproject in the Priority Program \u201cVisual Communication\u201d endeavors to make body language analyzable Gestures help people under-stand each other. They are also increasingly important for human-machine interaction.All photos: Uwe [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/aktuelles.uni-frankfurt.de\/en\/english\/when-ai-learns-from-gestures\/\" \/>\n<meta property=\"og:site_name\" content=\"Aktuelles aus der Goethe-Universit\u00e4t Frankfurt\" \/>\n<meta property=\"article:published_time\" content=\"2025-08-13T08:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-30T09:45:29+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/aktuelles.uni-frankfurt.de\/wp-content\/uploads\/2025\/07\/4.2_M1B0633-linke-Hand-zeigt.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"650\" \/>\n\t<meta property=\"og:image:height\" content=\"433\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"-\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"-\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/aktuelles.uni-frankfurt.de\\\/english\\\/when-ai-learns-from-gestures\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/aktuelles.uni-frankfurt.de\\\/english\\\/when-ai-learns-from-gestures\\\/\"},\"author\":{\"name\":\"-\",\"@id\":\"https:\\\/\\\/aktuelles.uni-frankfurt.de\\\/#\\\/schema\\\/person\\\/8e55ea338fb65d1ce87a91565d1f1739\"},\"headline\":\"When AI learns from gestures\",\"datePublished\":\"2025-08-13T08:00:00+00:00\",\"dateModified\":\"2026-01-30T09:45:29+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/aktuelles.uni-frankfurt.de\\\/english\\\/when-ai-learns-from-gestures\\\/\"},\"wordCount\":2023,\"publisher\":{\"@id\":\"https:\\\/\\\/aktuelles.uni-frankfurt.de\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/aktuelles.uni-frankfurt.de\\\/english\\\/when-ai-learns-from-gestures\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/aktuelles.uni-frankfurt.de\\\/wp-content\\\/uploads\\\/2025\\\/07\\\/4.2_M1B0633-linke-Hand-zeigt.jpg\",\"keywords\":[\"Forschung Frankfurt 1.25\",\"Informatics\",\"Linguistics\"],\"articleSection\":[\"English\",\"Research\"],\"inLanguage\":\"en-GB\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/aktuelles.uni-frankfurt.de\\\/english\\\/when-ai-learns-from-gestures\\\/\",\"url\":\"https:\\\/\\\/aktuelles.uni-frankfurt.de\\\/english\\\/when-ai-learns-from-gestures\\\/\",\"name\":\"When AI learns from gestures | Aktuelles aus der Goethe-Universit\u00e4t 