Workshop on Enactive Mind in Design @ Imagis lab Politecnico di Milano

ENACTIVE MIND IN DESIGN

Dr Ilkka Kosunen lectures on affective computing at the Enactive Mind in Design workshop, the second workshop week organised by the Enactive Virtuality Lab at the Department of Design, Politecnico di Milano, at the invitation of Prof. Francesca Piredda.
In this workshop we familiarise ourselves with the concept of the enactive mind and learn through practical work how enactive narrative systems can be applied to designing media projects.
The concepts of enactive cinema (Tikka 2008) and enactive media (Tikka 2010) are discussed against the theoretical foundations of the enactive cognitive sciences (Varela et al. 1991). Accordingly, a holistic first-person experience can be understood as being and playing a part in the world. The approach suggests going beyond the conventional concept of human–computer interaction by emphasising unconscious interaction between the experiencing participant and narrative systems. Instead of directly manipulating the narrative, the participant affects the unfolding of the story through enactive emotional participation, tracked, for instance, by biosensors.
LEARNING MODES:
Lectures with video screenings, reading articles, and discussions on the enactive mind and narratives; instructed/tutored hands-on enactive design exercises in small teams; analysing and reviewing the created projects, demos, and/or concepts.
LEARNING OUTCOMES:
Basic understanding of the enactive mind and enactive narrative systems
Basic understanding of the use of biofeedback in enactive media design
Hands-on designing and producing enactive narrative systems (proof-of-concept)
Collaborative teamwork and presentation skills
TEACHERS:
Prof. Dr Pia Tikka and Dr Ilkka Kosunen, School of Film, Media, Art and Communication & Center of Excellence MEDIT, Tallinn University; several visiting lecturers (t.b.a.)

Meeting @EmpaticaXR and eRMLab/UAM Madrid

 

EmpaticaXR is a Transformative Technology and evolutionary company translating cutting-edge neuropsychological knowledge into experiential narrative XR experiences aimed at human flourishing. The researchers at EmpaticaXR encode concepts from the cognitive sciences, combined with the emotional power of the cinematic narrative storytelling arts, to create audiovisual transformative experiences of wonder and awe that can be massively spread only through the power of XR, AI, and biometric technologies.

Meeting with Alejandro Sacristán, EmpaticaXR Business Development & Operations, Madrid and Jorge Esteban Blein, VR Creative Director, and immersive storytelling consultant (in image).

***

Followed by a visit to Professor Jose Maria de Poveda at the empathic Reactive Media Lab (eRMLab), Universidad Autónoma de Madrid (UAM).

(In the image, at left Professor Jesús Poveda de Agustin, in the middle Professor Jose Maria de Poveda)

 

Neuroadaptive dance project “Trisolde”

“TRISOLDE – Neuroadaptive Gesamtkunstwerk: The Biocybernetic Symbiosis of Tristan and Isolde”

Exploring the final frontier of human–computer interaction with a neuroadaptive opera… performed by the audience, dancers, and computational creativity.

The team of “TRISOLDE” (Tiina Ollesk, Simo Kruusement, Renee Nõmmik, Ilkka Kosunen, Hans-Günther Lock, Giovanni Albini) performed at the festival “IndepenDance” in Göteborg, Nov 29 and Dec 2, 2018.

A symbiotic dance version of Wagner’s “Tristan and Isolde” in which the dancers control the music via body movements and implicit psychophysiological signals. The work explores the next step in this coming-together of human and machine: a symbiotic interaction paradigm in which the computer automatically senses the cognitive and affective state of the user and adapts appropriately in real time. It brings together many exciting fields of research, from computational creativity to physiological computing. Measuring the audience and using the audience’s reactions to modulate the orchestra is a new way of doing “participatory theatre”, in which the audience becomes part of the performance.
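The adaptation paradigm described above can be pictured as a closed biocybernetic loop: sense the performer's state, smooth the noisy estimate, and map it onto a musical parameter. The Python sketch below is purely illustrative; the state estimate, smoothing factor, and intensity mapping are assumptions for exposition, not the TRISOLDE implementation.

```python
# Illustrative biocybernetic loop: a sensed arousal estimate in [0, 1]
# (e.g. derived from heart rate or skin conductance) is exponentially
# smoothed and mapped to a music-intensity parameter adapted in real time.
class BiocyberneticLoop:
    def __init__(self, smoothing=0.8):
        self.smoothing = smoothing   # exponential smoothing to avoid jitter
        self.arousal = 0.5           # running estimate of the user's state

    def update(self, sensed_arousal):
        """Fold a new psychophysiological reading into the running estimate."""
        self.arousal = (self.smoothing * self.arousal
                        + (1 - self.smoothing) * sensed_arousal)
        return self.music_intensity()

    def music_intensity(self):
        """Map the state estimate to an adaptation parameter in [0, 1]."""
        return max(0.0, min(1.0, self.arousal))

# Feed a short stream of (synthetic) sensor readings through the loop.
loop = BiocyberneticLoop()
for reading in [0.2, 0.3, 0.9, 0.95]:
    intensity = loop.update(reading)
print(round(intensity, 2))
```

The smoothing step is the design-relevant choice here: without it, a direct sensor-to-music mapping makes the adaptation feel twitchy rather than symbiotic.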

“Tristan and Isolde” is widely considered both one of the greatest operas of all time and the beginning of modernism in music, introducing techniques such as chromaticism, dissonance, and even atonality. It has sometimes been described as a “symphony with words”: the opera lacks major stage action, large choruses, or a wide range of characters. Most of the “action” of the opera happens inside the heads of Tristan and Isolde. This provides remarkable possibilities for a biocybernetic system: in this case, Tristan and Isolde communicate both explicitly (through the movement of the dancers) and implicitly via the measured psychophysiological signals.

Dance artists: Tiina Ollesk, Simo Kruusement

Choreographer-director: Renee Nõmmik

Dramaturgy and science of biocybernetic symbiosis: Ilkka Kosunen

Composers for interactive audio media: Giovanni Albini, Hans-Gunter Lock

Video interaction: Valentin Siltsenko

Duration: 40’

This performance is supported by: the Cultural Endowment of Estonia, and the Enactive Virtuality Lab and Digital Technology Institute (biosensors), Tallinn University.

Presentation of the project: November 29th–30th and December 1st, 2018, at 3:e Våningen, Göteborg (Sweden), at the festival IndepenDance. The event is dedicated to the centenary of the Republic of Estonia and supported by the programme “Estonia100-EV100”.

PREMIERE IN TALLINN, FEBRUARY 2019 (see more: Fine 5 Theater)

 

The Booth @ Worlding the Brain conference in Aarhus Uni

The Booth

Due to ongoing experiments, more details about this art/science project will be added only after the experimental data collection has been completed. Our presentation at the Worlding the Brain Conference, Aarhus University, 27–29 Nov 2018, showed initial findings.

Team: Pia Tikka, Ilkka Kosunen, Lynda Joy Gerry, Eeva R Tikka, Victor Pardinho, Can Uzer, Angela Kondinska, Michael Becken & Ben Ighoyota Ajenaghughrure, with others.

Finnish Cultural Foundation Huhtamäki Fund; Virtual Cinema Lab, Aalto University School of ARTS; BioLab, Digital Technology Institute, Tallinn University; Tikka & Kosunen: EU Mobilitas Pluss Top Researcher Grant (2017–2022), Estonian Research Council in association with Tallinn University.

Talking about AI & MEDIA with ACE Producers

AI & Media Afternoon by Aalto Studios with ACE Producers / 12.10.2018 / Helsinki
 
Drs Pia Tikka and Ilkka Kosunen (image) gave a joint talk at the AI & Media Afternoon event on Friday the 12th of October, 2018. The event was held at Miltton offices at Vuorikatu 15, Helsinki, from 16:15 until 19:00.
Mika Rautiainen, Valossa Oy: Applying Video Recognition and Content Intelligence to Media Workflows
Pia Tikka & Ilkka Kosunen: Creating Autonomous Behavior of Virtual Humanlike Characters in Interaction with Human Participants

Tallinn Summer School in St. Petersburg August 26–31

Ilkka Kosunen

With the support of the Estonia 100 programme, Tallinn University, Tallinn Summer School, and ITMO University offered a summer school course “Experimental Interaction Design: Physiological Computing Technologies for Performative Arts.”

The main goal of the intensive one-week hands-on course in interaction design was to empower people to shape their digital environments, thus providing a new level of digital literacy. This edition focused on Neurotheatre, a specific type of interactive theatre in which the audience and/or actors can communicate via brain–computer interfaces using multimodal sensors and actuators.

The course introduced core design and interaction design topics from a provocative stance, inviting participants to reflect on ongoing shifts, connections, and re-framings in just about every area of interaction design, and inciting a rebellion against passivity. This was complemented by developing skills in the systematic evaluation of the usability and user experience of interaction designs. The expectation is that participants take ownership of the interaction design process.

http://summerschool.tlu.ee/russia/

 

Neurocinematic publication – Narrative comprehension beyond language

Pia Tikka, Janne Kauttonen & Yevhen Hlushchuk (2018): “Narrative comprehension beyond language: Common brain networks activated by a movie and its script”

A young girl, Nora, stares in shock at her mother, Anu. Anu stands expressionless by the kitchen table and scrapes the left-over spaghetti from Nora’s plate into a plastic bag. She places the plate into the bag, starts putting the other dining dishes in as well, then takes a firm hold of the bag and smashes it against the table. Nora is horrified: “Mother! What are you doing?” Anu continues smashing the bag without paying attention to her daughter. Nora begs her to stop. Anu collapses crying against the table top. Nora approaches, puts her arms around her crying mother, and slowly starts caressing her hair.

The dramatic scene describes a daughter witnessing her mother’s nervous breakdown. Its narrative content remains the same whether one reads it in textual form or views it as a movie. It is relatively well known how narratives are processed in the distinct human sensory cortices depending on the sensory input through which the narrative is perceived (reading, listening, viewing; [15]). However, far less is known about how the human brain processes meaningful narrative content independent of the medium of presentation. To tackle this classical dichotomy between form and content in neuroimaging terms, we employed functional magnetic resonance imaging (fMRI) to provide new insights into the brain networks relating to a particular narrative content while overlooking its form.

In the image: Nora (actress Rosa Salomaa); director Saara Cantell, cinematography Marita Hällfors (F.S.C.), producer Outi Rousu, Pystymetsä Oy, 2010.

Abstract

Narratives surround us in our everyday life in different forms. In the sensory brain areas, the processing of narratives is dependent on the media of presentation, be that in audiovisual or written form. However, little is known of the brain areas that process complex narrative content mediated by various forms. To isolate these regions, we looked for the functional networks reacting in a similar manner to the same narrative content despite different media of presentation. We collected 3-T fMRI whole-brain data from 31 healthy human adults during two separate runs when they were either viewing a movie or reading its screenplay text. Independent component analysis (ICA) was used to separate 40 components. By correlating the components’ time-courses between the two media conditions, we could isolate 5 functional networks that particularly related to the same narrative content. These TOP-5 components with the highest correlation covered fronto-temporal, parietal, and occipital areas, with no major involvement of the primary visual or auditory cortices. Interestingly, the top-ranked network with the highest modality-invariance also correlated negatively with the dialogue predictor, thus pinpointing that narrative comprehension entails processes that are not language-reliant. In summary, our novel experiment design provided new insight into narrative comprehension networks across modalities.
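The core analysis idea in the abstract, decomposing each condition into components and then correlating component time-courses across conditions, can be sketched in a few lines. The Python example below uses PCA on synthetic data as a simple stand-in for the ICA pipeline reported in the paper; all data, dimensions, and names here are illustrative assumptions, not the study's materials.

```python
# Sketch: find components whose time-courses correlate across two
# presentation conditions (movie vs. script) of the same narrative.
# Synthetic stand-in data: a shared "narrative" signal drives the voxels
# in both conditions, plus independent noise.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_voxels, n_comp = 200, 50, 5

narrative = np.sin(np.linspace(0, 12 * np.pi, n_time))   # shared content signal
mixing = rng.normal(size=n_voxels)                        # per-voxel loading
movie = np.outer(narrative, mixing) + 0.5 * rng.normal(size=(n_time, n_voxels))
script = np.outer(narrative, mixing) + 0.5 * rng.normal(size=(n_time, n_voxels))

def component_timecourses(data, k):
    """PCA via SVD: return the first k component time-courses (time x k)."""
    centered = data - data.mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, :k] * s[:k]

tc_movie = component_timecourses(movie, n_comp)
tc_script = component_timecourses(script, n_comp)

# Correlate every movie component with every script component and rank
# pairs: high cross-condition correlation flags "content" components.
corr = np.corrcoef(tc_movie.T, tc_script.T)[:n_comp, n_comp:]
best = float(np.max(np.abs(corr)))
print(f"highest cross-condition time-course correlation: {best:.2f}")
```

Because the narrative signal is shared while the noise is not, the top-ranked component pair correlates strongly across conditions, which is the same logic the study uses to separate content networks from modality-specific sensory responses.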

Conference presentation @Virtual Reality as a Transformative Technology to Develop Empathy June 20

A video conference presentation at the 1st “Virtual Reality as a Transformative Technology to Develop Empathy” Conference, organised by the “empathic Reactive MediaLab Coalition” (eRMLab Coalition) and EmpaticaXR research group.

 

Lynda Joy Gerry: Envisioning Future Technology for Compassion Cultivation

Experiments in cognitive neuroscience have recently dissociated the neural pathways underpinning the experience of empathy from those of compassion. Namely, whereas the experience of empathy involves neural activations reflecting an affective pain resonance with the distress or suffering of the target, compassion is correlated with reward centers and positive-affect regions of the brain. The import of this finding is that empathy may in some instances lead to a withdrawal reflex if there is excessive sharing of distress with the target, whereas compassion appears to involve care for the welfare of another and an approach motivation. Within the last five years, virtual environments (VEs) have increasingly been researched and developed towards the goal of enhancing users’ social intelligence, self-compassion, positive attitudes towards out-groups, and empathy. However, most of these VEs do not help users become more aware of their own emotional and psychological states in response to another person or persons, arguably a crucial step in the recovery from empathic distress or over-arousal. Biofeedback training has also been used for improving social cognition skills, but only a few projects have incorporated biofeedback into empathy-enhancing virtual environments. Recovery from empathic distress, as a skill trained through biofeedback VEs, could enhance interpersonal connectedness, quality of life, and social cohesion.

Compassion Cultivation Training in Bio-Adaptive Virtual Environments Paris, CRI June 8

As part of the Frontiers du Vivant, Lynda Joy Gerry presented and defended her project “Compassion Cultivation Training in Bio-Adaptive Virtual Environments” at the Centre for Research and Interdisciplinarity (CRI) in Paris. This project involves using perspective-taking in virtual environments using biofeedback relating to emotion regulation (heart rate variability) to manage the recovery from empathic distress. Empathic distress is conceived as a step in the empathic process towards the understanding of another person’s affective, bodily, and psychological state, but one that can lead to withdrawal and personal distress for the empathizer. Thus, this project implements instruction techniques adopted from Compassion Cultivation Training guided meditation practices cued by biofeedback to entrain better self-other boundaries and distinctions, as well as emotion regulation.
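As a toy illustration of the heart-rate-variability biofeedback mentioned above, the Python sketch below computes RMSSD, a standard time-domain HRV index, from a series of RR intervals and maps it to a coarse feedback cue a virtual environment could respond to. The threshold and cue labels are hypothetical, not the project's design.

```python
# RMSSD (root mean square of successive differences between RR intervals)
# is a common short-term HRV index; higher values are generally read as
# stronger parasympathetic (calming) activity.
import math

def rmssd(rr_intervals_ms):
    """RMSSD over a list of RR intervals in milliseconds."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def feedback_cue(rr_intervals_ms, calm_threshold_ms=40.0):
    """Map the HRV index to a coarse cue (threshold is illustrative)."""
    return "regulated" if rmssd(rr_intervals_ms) >= calm_threshold_ms else "aroused"

# A near-metronomic rhythm (low HRV, typical of stress) vs. a more
# variable one (higher HRV, associated with regulation).
steady = [800, 805, 798, 802, 800, 803]
variable = [800, 860, 790, 870, 780, 865]
print(feedback_cue(steady), feedback_cue(variable))
```

In a bio-adaptive VE of the kind described, such a cue could gate the guided-meditation instructions: prompting a regulation exercise while the signal reads "aroused" and resuming the perspective-taking scenario once it reads "regulated".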

Lynda also participated in a workshop on VR and empathy led by Philippe Bertrand from BeAnother Lab (BAL, The Machine to Be Another). See Philippe Bertrand’s TEDx talk “Standing in the shoes of the others with VR”.