Due to ongoing experiments, more details about this art/science project will be added only after the experimental data collection has been completed. Our presentation at the Worlding the Brain Conference, Aarhus University, 27–29 Nov 2018, showed initial findings.
Team: Pia Tikka, Ilkka Kosunen, Lynda Joy Gerry, Eeva R Tikka, Victor Pardinho, Can Uzer, Angela Kondinska, Michael Becken & Ben Ighoyota Ajenaghughrure, with others.
Finnish Cultural Foundation Huhtamäki Fund; Virtual Cinema Lab, Aalto University School of ARTS; BioLab, Digital Technology Institute, Tallinn University; Tikka & Kosunen: EU Mobilitas Pluss Top Researcher Grant (2017–2022), Estonian Research Council in association with Tallinn University.
AI & Media Afternoon by Aalto Studios with ACE Producers / 12.10.2018 / Helsinki
Drs Pia Tikka and Ilkka Kosunen (image) gave a joint talk at the AI & Media Afternoon event on Friday the 12th of October, 2018. The event was held at Miltton offices at Vuorikatu 15, Helsinki, from 16:15 until 19:00.
Mika Rautiainen, Valossa Oy: Applying Video Recognition and Content Intelligence to Media Workflows
Pia Tikka & Ilkka Kosunen: Creating Autonomous Behavior of Virtual Humanlike Characters in Interaction with Human Participants
With the support of the Estonia 100 programme, Tallinn University, Tallinn Summer School, and ITMO University offered a summer school course “Experimental Interaction Design: Physiological Computing Technologies for Performative Arts.”
The main goal of the one-week intensive hands-on course in interaction design was to empower people to shape their digital environments, thus providing a new level of digital literacy. This edition focused on Neurotheatre, a specific type of interactive theatre in which the audience and/or actors can communicate via brain and neural computer interfaces using multimodal sensors and actuators.
The course introduced core design and interaction design topics from a provocative stance, inviting participants to reflect upon ongoing shifts, connections, and re-framings in nearly every area of interaction design, and inciting a rebellion against passivity. This was complemented by developing skills in the systematic evaluation of the usability and user experience of interaction designs. The expectation is to see participants take ownership of the interaction design process.
A young girl, Nora, stares in shock at her mother, Anu. Anu stands expressionless by the kitchen table and scrapes the leftover spaghetti from Nora’s plate into a plastic bag. She places the plate into the bag, starts adding the other dining dishes, then takes a firm hold of the bag and smashes it against the table. Nora is horrified: “Mother! What are you doing?” Anu continues smashing the bag without paying attention to her daughter. Nora begs her to stop. Anu collapses, crying, against the table top. Nora approaches, puts her arms around her crying mother, and slowly begins caressing her hair.
The dramatic scene describes a daughter witnessing her mother’s nervous breakdown. Its narrative content remains the same whether one reads it in textual form or views it as a movie. It is relatively well known how narratives are processed in the distinct human sensory cortices depending on the sensory input through which the narrative is perceived (reading, listening, viewing; [1–5]). However, far less is known about how the human brain processes meaningful narrative content independent of the medium of presentation. To tackle this classical dichotomy between form and content in neuroimaging terms, we employed functional magnetic resonance imaging (fMRI) to provide new insights into the brain networks relating to a particular narrative content irrespective of its form.
In the image, Nora (actress Rosa Salomaa); director Saara Cantell, cinematographer Marita Hällfors (F.S.C.), producer Outi Rousu; Pystymetsä Oy, 2010.
Narratives surround us in everyday life in different forms. In the sensory brain areas, the processing of narratives depends on the medium of presentation, be it audiovisual or written. However, little is known about the brain areas that process complex narrative content mediated by various forms. To isolate these regions, we looked for functional networks reacting in a similar manner to the same narrative content despite different media of presentation. We collected 3-T fMRI whole-brain data from 31 healthy human adults during two separate runs in which they were either viewing a movie or reading its screenplay. Independent component analysis (ICA) was used to separate 40 components. By correlating the components’ time-courses between the two media conditions, we isolated five functional networks that related particularly to the same narrative content. These TOP-5 components with the highest correlations covered fronto-temporal, parietal, and occipital areas, with no major involvement of the primary visual or auditory cortices. Interestingly, the top-ranked network with the highest modality-invariance also correlated negatively with the dialogue predictor, indicating that narrative comprehension entails processes that are not language-reliant. In summary, our novel experimental design provided new insight into narrative comprehension networks across modalities.
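The core ranking step of the analysis — correlating each ICA component’s time-course between the movie and reading runs and keeping the most modality-invariant components — can be sketched roughly as follows. This is a minimal illustration with synthetic data, not the study’s actual pipeline; the array names, dimensions, and random data are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_components, n_timepoints = 40, 200  # assumed dimensions for illustration

# Synthetic component time-courses for the two runs; in the study these
# would come from group ICA of the fMRI data in each media condition.
movie = rng.standard_normal((n_components, n_timepoints))
reading = 0.5 * movie + rng.standard_normal((n_components, n_timepoints))

def rowwise_corr(a, b):
    """Pearson correlation of each row of `a` with the matching row of `b`."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    return (a * b).sum(axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )

# Correlate each component's time-course across the two conditions
r = rowwise_corr(movie, reading)

# The five most modality-invariant components, highest correlation first
top5 = np.argsort(r)[::-1][:5]
print(top5, r[top5])
```

In the actual study, the components surviving this ranking would then be mapped back to their spatial brain maps (here, fronto-temporal, parietal, and occipital areas).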
A video conference presentation at the 1st “Virtual Reality as a Transformative Technology to Develop Empathy” Conference, organised by the “empathic Reactive MediaLab Coalition” (eRMLab Coalition) and EmpaticaXR research group.
Lynda Joy Gerry: Envisioning Future Technology for Compassion Cultivation
Experiments in cognitive neuroscience have recently dissociated the neural pathways underpinning the experience of empathy from those of compassion. Namely, whereas the experience of empathy involves neural activations reflecting an affective pain resonance with the distress or suffering of the target, compassion correlates with reward centers and positive-affect regions of the brain. The import of this finding is that empathy may in some instances lead to a withdrawal reflex if there is excessive sharing of distress with the target, whereas compassion appears to involve care for the welfare of another and an approach motivation. Within the last five years, virtual environments (VEs) have been increasingly researched and developed towards the goal of enhancing users’ social intelligence, self-compassion, positive attitudes towards out-groups, and empathy. However, most of these VEs do not help users become more aware of their own emotional and psychological states in response to another person or persons, arguably a crucial step in the recovery from empathic distress or over-arousal. Biofeedback training has also been used for improving social cognition skills, but only a few projects have incorporated biofeedback into empathy-enhancing virtual environments. Recovery from empathic distress as a skill trained through biofeedback VEs could enhance interpersonal connectedness, quality of life, and social cohesion.
As part of the Frontières du Vivant programme, Lynda Joy Gerry presented and defended her project “Compassion Cultivation Training in Bio-Adaptive Virtual Environments” at the Centre for Research and Interdisciplinarity (CRI) in Paris. The project uses perspective-taking in virtual environments together with biofeedback on emotion regulation (heart rate variability) to manage recovery from empathic distress. Empathic distress is conceived as a step in the empathic process towards understanding another person’s affective, bodily, and psychological state, but one that can lead to withdrawal and personal distress for the empathizer. Thus, the project implements instruction techniques adopted from Compassion Cultivation Training guided meditation practices, cued by biofeedback, to entrain better self-other boundaries and distinctions, as well as emotion regulation.
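A common way such a system quantifies heart rate variability is RMSSD, the root mean square of successive differences between heartbeat (RR) intervals. The sketch below is a minimal, hypothetical illustration of that computation; the sample intervals are invented, and a real biofeedback system would derive them continuously from an ECG or PPG sensor stream rather than a fixed array.

```python
import numpy as np

# Hypothetical RR intervals in milliseconds (invented sample values;
# a real system would extract these from a live heartbeat sensor).
rr_ms = np.array([812, 798, 845, 830, 805, 790, 822, 840], dtype=float)

def rmssd(rr):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = np.diff(rr)
    return float(np.sqrt(np.mean(diffs ** 2)))

hrv = rmssd(rr_ms)
print(hrv)
```

In a bio-adaptive VE, a value like this, computed over a sliding window, could serve as the cue that triggers or adjusts the guided-meditation instructions.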
Lynda also participated in a workshop on VR and Empathy led by Philippe Bertrand from BeAnother Lab (BAL, The Machine to Be Another). See Philippe Bertrand’s TEDx talk “Standing in the shoes of the others with VR”.
Examination of Johanna Lehto’s MA thesis “Robots and Poetics – Using narrative elements in human-robot interaction” for the Department of Media, programme New Media Design and Production on the 16th of May 2018.
As a writer and a designer, Johanna Lehto sets out to reflect upon the phenomenon of human-robot interaction through her own artistic work. To illustrate the plot structure and narrative units of the interaction between a robot and a human, she draws on Aristotle’s dramatic principles, applying the Aristotelian drama structure to analyse a human-robot encounter as a dramatic event. Johanna made an interactive video installation presenting an AI character, Vega 2.0 (image). The installation was exhibited in Tokyo at the Hakoniwa exhibition on 22–24 June 2017 and at the Musashino Art University Open Campus festival on 10–11 June 2017.
Since being connected through the Storytek Content+Tech Accelerator in fall 2017, Pia Tikka has consulted for the VFC project, directed by Charles S. Roy, on screenplay and audience interaction.
Charles S. Roy, Film Producer & Head of Innovation at the production company La Maison de Prod, develops his debut narrative film+interactive project VFC as producer-director. VFC has been selected at the Storytek Content+Tech Accelerator, the Frontières Coproduction Market, the Cannes NEXT Cinema & Transmedia Pitch, the Sheffield Crossover Market, and Cross Video Days in Paris. In the vein of classic portrayals of female anxiety such as Roman Polanski’s REPULSION, Todd Haynes’ SAFE and Jonathan Glazer’s BIRTH, VFC is a primal and immersive psychological drama about fear of music (cinando.com). Its main innovation is in bringing brain-computer interface storytelling to the big screen by offering an interactive neurotech experience.
On the premises of the Cannes Film Market, as a grant holder for the Estonian innovation and development incubator Storytek Accelerator, Charles presented his work to the audience of the tech-focused NEXT section (8–13 May).
Testing the viewer’s facial expressions driving the behavior of a screen character with Louise’s Digital Double (under a Creative Commons Attribution Non-Commercial No Derivatives 4.0 license). See Eisko.com.
In the image, Lynda Joy Gerry, Dr. Ilkka Kosunen, and Turcu Gabriel (Erasmus exchange student from the University of Craiova at the Digital Technology Institute, TLU) examine facial expressions of a screen character driven by an AI.