Abstracts

Shaun Gallagher:
Missing in action: Gestures and embodied narratives
Embodied approaches to cognitive development underscore the relevance of narrative, rather than mentalistic explanations, for social cognition. Some developmental psychologists consider narrative an embodied practice, anchored in early social experiences, perceptions, and emotions, which provides children with the means to understand how others act according to reasons. The relevance given to embodied narrative in developing social cognition has led researchers to explore its ontogeny, often resulting in contrasting theories. Some studies take a nativist approach and define narrative as an invariant generative process. This view argues for the continuity of narrative structure beginning in fetal and neonatal movement, through infant pre-verbal communication, and into linguistic meaning-making in childhood and adulthood. Other studies, while upholding that narrative is anchored in pre-verbal actions, suggest that it must be kept distinct from actions in order to avoid pan-narrativism and the overlooking of a significant status change in the nature of content (from non-representational to representational). These contrasting views on embodied narrative raise relevant questions about the relation between action and language: the former suggests an identity in structure, the latter a developmental derivation from action structure to narrative structure. Laura Sparaci and I have offered an analysis of the pivotal role of gestures as communicative forms that join action to language. Close analyses of the emergence of communicative gestures in childhood, as well as of the recent literature on aproprioception in adults, map the path from actions to narrative through gestures. This path shows structural continuity, but also a consistent shift from non-representational to representational processes, moving from functional or instrumental acts to communicative ones, suggesting continuity through change in the passage from action to narrative.


Ping Li:
Sensorimotor Integration and Embodied Language Learning in Virtual Interactive Realities: Technology, Data, and the Brain
In an era of rapid developments in digital technology, we have an opportunity to conduct highly interdisciplinary research that connects human learning experience with digital applications and mind-brain studies. In this talk, I outline an approach that combines emerging technologies with current neurocognitive theories, with particular reference to sensorimotor integration in embodied language learning through virtual interactive realities. I highlight the differences in the context of learning between children (learning a native language, L1) and adults (learning a foreign language, L2), and suggest ‘social learning of L2’ (SL2) as a new framework for thinking about L1-L2 differences and the corresponding neural correlates. If social and affective cues can be made available through virtual interactive realities to L2-learning adults, as they are naturally available to L1-learning children, the success of L2 learning may be enhanced. The neural evidence from our work shows that SL2-based learning, as compared with traditional classroom-based learning, can lead to brain network patterns in L2 that are more similar to those underlying L1.
Theoretically, this approach can also help us to gain a deeper understanding of embodied learning and its neural mechanisms. More broadly, our studies suggest that different patterns of neurocognitive representation can result as a function of different approaches to learning. Combined with the power of big data and machine learning, this approach also sheds light on personalized education, thereby informing pedagogical designs and instructional innovations.


Sofia Teixeira:
Modelling Human Behaviour and Cognition with Network Science
tba


Anna Ciaunica:
Reality + and –
Recent years have seen the rapid development of new technologies advocating Virtual Reality (VR) environments as tools to expand human perception, cognition, and sociality. In this talk I explore and contrast what VR technologies can add to, but also remove from, our day-to-day social interactions. Specifically, I will focus on the neurobiological and phenomenological bases of proximal, multisensory versus distal, audiovisual social cognition. I will argue that while VR technologies can make us travel into extraordinary worlds, they can also detach us from our ‘ordinary’ world and bodies. However, while exploring the extraordinary is a fantastic achievement of the human mind, keeping our bodies safe in the ordinary world is a necessary condition for the survival of biological self-organizing systems such as humans. Hence VR worlds and technologies come with both a ‘+’ (plus) and a ‘–’ (minus), and both should be carefully considered for their potential purposes and impact on our daily lives.


Tania Alexandra Couto:
Immersive realities in human minds and social lives: How immersive and interactive realities impact body ownership and joint-agency during social interactions between humans and artificial agents
tba


Qihui Xu:
Does Human Cognition Require Interactive Realities? Insights from Large Language Models
There has been a long-standing debate regarding the extent to which interactive realities shape human cognition and the degree to which language representation requires grounded cognition (Burgess, 2000). Recent advances in large language models (LLMs), such as GPT-3 and ChatGPT, have the potential to shed light on this debate. Although these models are trained on massive amounts of text without human embodied experience, they have nevertheless demonstrated human-like behaviors and performance in various psychological tasks (Binz & Schulz, 2023). The impressive performance of LLMs, most recently shown by OpenAI’s ChatGPT, may suggest an alternative interpretation: human-like cognition and behavior can emerge from very large data that capture a broad range of linguistic contexts. To address this problem, we analyzed and compared lexical-semantic representations between humans and ChatGPT, including subjective ratings of word concreteness, body-object interaction, imageability, emotionality, and other psycholinguistic attributes. Our findings show that while ChatGPT’s and human ratings are similar in some respects, such as word concreteness, they still differ in aspects more closely tied to embodied experience, such as body-object interaction. These results indicate that there are still gaps between human cognition and representation and what LLMs can derive from large amounts of text with broad linguistic context. As LLMs continue to develop, the picture will become clearer regarding the limits of LLMs, including ChatGPT, and whether interactive realities are an essential component of human cognition.
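The comparison described above can be illustrated with a minimal sketch: given human and model-generated ratings for the same word list, one correlation coefficient per attribute quantifies their agreement. The words, rating values, and the use of Pearson correlation here are illustrative assumptions, not the authors' actual data or pipeline.

```python
# Illustrative sketch of comparing human vs. model-derived word ratings.
# All words and rating values below are invented for demonstration.

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 concreteness ratings for the same five words,
# e.g. "apple", "chair", "idea", "justice", "dance".
human_ratings = [4.9, 4.7, 2.1, 1.5, 3.8]
model_ratings = [4.8, 4.5, 2.4, 1.9, 3.5]

r = pearson(human_ratings, model_ratings)
print(f"human-model concreteness correlation: r = {r:.2f}")
```

In practice each psycholinguistic attribute (concreteness, body-object interaction, imageability, ...) would yield its own correlation, and attributes tied to embodied experience would be the ones expected to show lower human-model agreement.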


Jesper Rørvig:
Embodied Joint Agency in Human–Robot Interactions
Artificial Intelligence-based technologies such as robots are developing at an unprecedented speed in our societies, considerably impacting human lives. In order to better understand this impact, we need to investigate the effect that interacting with these technologies has on human mental and social lives. Previous studies on human–robot interaction have shown that humans anthropomorphize robots: we spontaneously attribute mental and affective states to robots and feel empathy towards them. However, the impact of interacting with humanoid robots on the human sense of self and sense of body ownership remains an open question. In this talk, I will present the Interself project on human–robot interactions, which aims to unravel the link between the sense of body ownership, the sense of agency, and the sense of self under the different conditions of interacting with either a humanoid robot or another human being.