2 TALKS on EMOTIVE VR Film by Marie-Laure Cazin

Freud’s Last Hypnosis, a neuro-interactive 360° movie for EMOTIVE VR

Presentation of the ongoing EMOTIVE VR prototype, an innovative form combining VR and EEG headsets. A neuro-interactive omnidirectional movie has been realized and is viewed through a Virtual Reality (VR) head-mounted display (HMD). During viewing, the EEG signals are recorded and analyzed in real time, and certain visual effects and an interactive music score vary according to the emotional state of the viewer.
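As a rough illustration of such a neuro-adaptive loop, the minimal Python sketch below maps EEG band powers to the intensity of a visual effect. The beta/alpha ratio as an arousal proxy, and the threshold values, are illustrative assumptions, not the project's actual signal pipeline:

```python
def arousal_index(alpha_power, beta_power):
    """Crude arousal proxy: share of beta power (hypothetical mapping)."""
    return beta_power / (alpha_power + beta_power)

def effect_intensity(arousal, low=0.2, high=0.8):
    """Map arousal in [0, 1] onto an effect intensity in [0, 1], clamped."""
    t = (arousal - low) / (high - low)
    return max(0.0, min(1.0, t))

# Toy per-frame loop over simulated (alpha, beta) band-power readings.
for alpha, beta in [(8.0, 2.0), (5.0, 5.0), (2.0, 8.0)]:
    intensity = effect_intensity(arousal_index(alpha, beta))
    print(f"intensity: {intensity:.2f}")
```

In the actual installation the intensity would drive shader parameters or the interactive score rather than a print statement.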

Marie-Laure Cazin is a Fine Arts teacher at the School of Arts and Design ESAD-TALM (France) and at Paris 1 Panthéon-Sorbonne. A member of the Enactive Virtuality Research Group at BFM, Tallinn University, she is currently completing a PhD at Aix-Marseille University (France) on cinema and neurosciences. As an artist and filmmaker, she has developed many experimental cinematic prototypes, using digital tools to create live interaction between the film and performers or spectators. She has collaborated with scientists on art-science projects, working with brain data on emotions in her latest interactive projects.

See projects online (Fr)

Sound designer Matias Harju’s webpages (Eng)

EMOTIVE VR documentation online (Fr)

2 TALKS on EMOTIVE VR by Marie-Laure Cazin (ESAD-TALM, France)

Visiting lecture at Aalto University, Aalto Studios, organised by the Virtual Cinema Lab & Enactive Virtuality Lab, January 30, 14–15

January 30, 14–15. Place: N-416, BFM, Narva mnt 27, Nova Building, Tallinn University.

 

Neuroadaptive dance project “Trisolde”

TRISOLDE – Neuroadaptive Gesamtkunstwerk: The Biocybernetic Symbiosis of Tristan and Isolde

Exploring the final frontier of human–computer interaction with a neuroadaptive opera… performed by the audience, dancers, and computational creativity.

The team of “TRISOLDE” (Tiina Ollesk, Simo Kruusement, Renee Nõmmik, Ilkka Kosunen, Hans-Günther Lock, Giovanni Albini) performed at the festival “IndepenDance” in Göteborg, Nov 29 and Dec 2, 2019.

A symbiotic dance version of Wagner’s “Tristan and Isolde” in which the dancers control the music through body movements and implicit psychophysiological signals. This work explores the next step in the coming-together of human and machine: the symbiotic interaction paradigm, in which the computer automatically senses the cognitive and affective state of the user and adapts appropriately in real time. It brings together many exciting fields of research, from computational creativity to physiological computing. Measuring the audience and using its reactions to modulate the orchestra is a new way of doing “participatory theatre”, in which the audience becomes part of the performance.
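The symbiotic paradigm described above, explicit movement plus implicit physiology steering the score, can be sketched as a simple blending function. The weights, the heart-rate normalization, and the tempo range below are hypothetical placeholders for whatever mapping the performance actually uses:

```python
def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def music_tempo(movement_energy, heart_rate_bpm, base_bpm=60.0):
    """Blend an explicit channel (dancer movement energy, 0..1) with an
    implicit one (heart rate) into a tempo for the adaptive score."""
    implicit = clamp((heart_rate_bpm - 60.0) / 60.0)  # 0 at rest, 1 at 120 bpm
    explicit = clamp(movement_energy)
    drive = 0.6 * explicit + 0.4 * implicit           # movement dominates
    return base_bpm * (1.0 + drive)                   # up to twice the base tempo
```

A real system would smooth both channels over time so the score does not jitter with every beat.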

“Tristan and Isolde” is widely considered both one of the greatest operas of all time and the beginning of modernism in music, introducing techniques such as chromaticism, dissonance, and even atonality. It has sometimes been described as a “symphony with words”; the opera lacks major stage action, large choruses, or a wide range of characters. Most of the “action” in the opera happens inside the heads of Tristan and Isolde. This provides amazing possibilities for a biocybernetic system: in this case, Tristan and Isolde communicate both explicitly (through the movement of the dancers) and implicitly via the measured psychophysiological signals.

Dance artists: Tiina Ollesk, Simo Kruusement

Choreographer-director: Renee Nõmmik

Dramaturgy and science of biocybernetic symbiosis: Ilkka Kosunen

Composers for interactive audio media: Giovanni Albini, Hans-Günther Lock

Video interaction: Valentin Siltsenko

Duration: 40’

This performance is supported by: The Cultural Endowment of Estonia, the Enactive Virtuality Lab, and the Digital Technology Institute (biosensors), Tallinn University.

Presentation of the project: November 29th–30th and December 1st, 2018 at 3:e Våningen, Göteborg (Sweden), at the festival IndepenDance. The event is dedicated to the centenary of the Republic of Estonia and supported by the programme “Estonia100-EV100”.

PREMIERE IN TALLINN, FEBRUARY 2019 (see more at Fine 5 Theater)

 

Neurocinematics @ the Worlding the Brain Conference, Aarhus University

Enactive Virtuality Lab presented the collaborative research with the Brain and Mind Lab of the Aalto University School of Science at the Worlding the Brain Conference at Aarhus University, Nov 27–29.

Image: The son (Juha Hippi) confronting his father (Vesa Wallgren). Short film The Queen (Kuningatar) is directed by Pia Tikka, Production Aalto University in collaboration with Oblomovies Oy 2013.

 


TITLE: Narrative priming of moral judgments in film viewing

Authors: Pia Tikka, Jenni Hannukainen, Tommi Himberg, and Mikko Sams

How does narrative priming influence the moral judgements of film viewers? In two studies we focus on the evaluation of the rightness of the characters’ perceived actions and the acceptability of these actions, in relation to the viewers’ experience of sympathy and filmic tension.
Providing additional narrative information to the viewers beforehand is an effective method for manipulating how they perceive and make sense of the film narrative. Our experimental data come from two different studies, one behavioral and one psychophysiological. In both experimental settings, two groups receive additional background information on either the male or the female character, while a third control group is not primed. All subjects view the same 25-minute drama film and answer post-viewing questionnaires online.
Based on the data collected in the first experiment, using parallel mixed-method analysis, we showed that narrative priming itself does not increase the spectrum of moral judgment statements or the acceptance of the characters’ wrongdoings; a more influential factor seems to be the type of action and its relation to generally accepted moral norms. Yet narrative priming increased the explanatory spectrum of the subjects, which showed to some extent a trend toward accepting, or trying to understand, actions that embody socio-emotionally complex situations. In the second, currently ongoing psychophysiological study (HR, EDA, EEG), we expect the explanatory spectrum collected via online questionnaires to correlate with the results of the first behavioral study. However, we also expect to show more priming-dependent and spatio-temporal film-event-dependent differences in arousal between the groups, indicating the influence of priming on the unconscious emotional and cognitive processes related to moral judgements.

The Booth @ Worlding the Brain conference in Aarhus Uni

The Booth

Due to ongoing experiments, more details about this art/science project will be added only after the experimental data collection has been completed. Our presentation at the Worlding the Brain Conference at Aarhus University, 27–29 Nov 2018, showed initial findings.

Team: Pia Tikka, Ilkka Kosunen, Lynda Joy Gerry, Eeva R Tikka, Victor Pardinho, Can Uzer, Angela Kondinska, Michael Becken & Ben Ighoyota Ajenaghughrure, with others.

Finnish Cultural Foundation Huhtamäki Fund; Virtual Cinema Lab, Aalto University School of ARTS; BioLab by the Digital Technology Institute, Tallinn University; Tikka & Kosunen: EU Mobilitas Pluss Top Researcher Grant (2017–2022), Estonian Research Council in association with Tallinn University.

Talk at Sergej Eisenstein Workshop, The Brandenburg Centre for Media Studies (ZeM), Potsdam, Nov 22-24

Sergej Eisenstein and the Game of Objects 

Workshop at The Brandenburg Centre for Media Studies (ZeM), Potsdam

In the year which marks both the 120th anniversary of Sergej Eisenstein’s birth and the 70th anniversary of his death, the Brandenburg Centre for Media Studies (ZeM) in Potsdam and “Cinepoetics – Center for Advanced Film Studies” at the Free University Berlin are jointly organising a workshop that will take place in Potsdam from the 22nd to the 24th of November 2018. More here.

Title of the talk by Pia Tikka: Simulatorium Eisensteinense: Eisenstein’s legacy in art and science dialogue 

Image from Sergei Eisenstein, Elokuvan muoto, p. 121, ISBN 951-835-004-3.

A talk at Estonian Art Academy conference “The Collaborative Turn in Art”

The two-day conference The Collaborative Turn in Art: The Research Process in Artistic Practice deals with artistic research, in particular the expanded understanding of this term and the questions raised by collaborative creative practices. Venue: Estonian Academy of Arts, Põhja pst 7, room A501.

Image: Julijonas Urbonas, “Talking Doors”, 2009 (Doors Event). For more, click the link to the conference webpage.

Pia Tikka:

My talk “Neurocinematics & Art-Science Collaboration” concerned the first-hand knowledge gained from several collaborative projects in which I have worked as a consulting film expert, and from my own neurocinematic projects in which I have served as the principal investigator. I highlighted the diversity of issues one faces in collaborations between artists and scientists. It was especially interesting to reflect on the conceptual, technological, and methodological differences between the arts and the sciences. The discussion ranged from conceptual to technological issues; however, the focus was on challenges such as finding a shared language and working methods, and the best division of labor, responsibilities, and authorship.

Image shows a view of the lecture room: Chris Hales guides the audience through his talk titled “From Tacit Knowledge to Academic Knowledge”.

Talking about AI & MEDIA with ACE Producers

AI & Media Afternoon by Aalto Studios with ACE Producers / 12.10.2018 / Helsinki
 
Drs Pia Tikka and Ilkka Kosunen (image) gave a joint talk at the AI & Media Afternoon event on Friday the 12th of October, 2018. The event was held at Miltton offices at Vuorikatu 15, Helsinki, from 16:15 until 19:00.
Mika Rautiainen, Valossa Oy:  Applying Video Recognition and Content Intelligence to Media Workflows
Pia Tikka & Ilkka Kosunen: Creating Autonomous Behavior of Virtual Humanlike Characters in Interaction with Human Participants

Neurocinematic publication – Narrative comprehension beyond language

Pia Tikka, Janne Kauttonen & Yevhen Hlushchuk (2018): “Narrative comprehension beyond language: Common brain networks activated by a movie and its script”

A young girl, Nora, stares shocked at her mother Anu. Anu stands expressionless by the kitchen table and scrapes the left-over spaghetti from Nora’s plate into a plastic bag. She places the plate into the bag and starts putting the other dining dishes in as well, then takes a firm hold of the bag and smashes it against the table. Nora is horrified: “Mother! What are you doing?” Anu continues smashing the bag without paying attention to her daughter. Nora begs her to stop. Anu collapses crying against the table top. Nora approaches, puts her arms around her crying mother and slowly starts caressing her hair.

The dramatic scene describes a daughter witnessing her mother’s nervous breakdown. Its narrative content remains the same whether one reads it in textual form or views it as a movie. It is relatively well known how narratives are processed in the distinct human sensory cortices depending on the sensory input through which the narrative is perceived (reading, listening, viewing; [15]). However, far less is known about how the human brain processes meaningful narrative content independent of the medium of presentation. To tackle this classical dichotomy between form and content in neuroimaging terms, we employed functional magnetic resonance imaging (fMRI) to provide new insights into brain networks relating to a particular narrative content while disregarding its form.

In the image Nora (actress Rosa Salomaa); director Saara Cantell, cinematography Marita Hällfors (F.S.C), producer Outi Rousu, Pystymetsä Oy, 2010.

Abstract

Narratives surround us in our everyday life in different forms. In the sensory brain areas, the processing of narratives depends on the medium of presentation, be it audiovisual or written. However, little is known of the brain areas that process complex narrative content mediated by various forms. To isolate these regions, we looked for functional networks reacting in a similar manner to the same narrative content despite the different media of presentation. We collected 3-T fMRI whole-brain data from 31 healthy human adults during two separate runs, in which they were either viewing a movie or reading its screenplay text. Independent component analysis (ICA) was used to separate 40 components. By correlating the components’ time-courses between the two media conditions, we could isolate 5 functional networks that particularly related to the same narrative content. These TOP-5 components with the highest correlation covered fronto-temporal, parietal, and occipital areas, with no major involvement of the primary visual or auditory cortices. Interestingly, the top-ranked network with the highest modality-invariance also correlated negatively with the dialogue predictor, pinpointing that narrative comprehension entails processes that are not language-reliant. In summary, our novel experiment design provided new insight into narrative comprehension networks across modalities.
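The core of the analysis, ranking ICA components by how similarly their time-courses unfold in the movie and reading runs, can be sketched as follows. The three toy time-courses are invented for illustration; the real study worked with 40 components estimated from fMRI data:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length time-courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# component -> (time-course in movie run, time-course in reading run)
components = {
    "IC1": ([1, 2, 3, 4], [2, 4, 6, 8]),  # same shape in both runs
    "IC2": ([1, 2, 3, 4], [4, 3, 2, 1]),  # opposite shape
    "IC3": ([1, 2, 1, 2], [1, 1, 2, 2]),  # unrelated
}
ranked = sorted(components, key=lambda k: pearson(*components[k]), reverse=True)
```

The top of `ranked` plays the role of the study’s TOP-5 modality-invariant networks: components whose dynamics track the narrative content regardless of the medium.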

Conference presentation @Virtual Reality as a Transformative Technology to Develop Empathy June 20

A video conference presentation at the 1st “Virtual Reality as a Transformative Technology to Develop Empathy” Conference, organised by the “empathic Reactive MediaLab Coalition” (eRMLab Coalition) and the EmpaticaXR research group.

 

Lynda Joy Gerry: Envisioning Future Technology for Compassion Cultivation

Experiments in cognitive neuroscience have recently dissociated the neural pathways underpinning the experience of empathy from those underpinning compassion. Namely, whereas the experience of empathy involves neural activations reflecting an affective pain resonance with the distress or suffering of the target, compassion is correlated with reward centers and positive-affect regions of the brain. The import of this finding is that empathy may in some instances lead to a withdrawal reflex if there is excessive sharing of distress with the target, whereas compassion appears to involve care for the welfare of another and an approach motivation. Within the last five years, virtual environments (VEs) have been increasingly researched and developed towards the goal of enhancing users’ social intelligence, self-compassion, positive attitudes towards out-groups, and empathy. However, most of these VEs do not help users become more aware of their own emotional and psychological states in response to another person or persons, arguably a crucial step in the recovery from empathic distress or over-arousal. Biofeedback training has also been used to improve social cognition skills, but only a few projects have incorporated biofeedback into empathy-enhancing virtual environments. Recovery from empathic distress, as a skill trained through biofeedback VEs, could enhance interpersonal connectedness, quality of life, and social cohesion.

Compassion Cultivation Training in Bio-Adaptive Virtual Environments, CRI Paris, June 8

As part of the Frontières du Vivant programme, Lynda Joy Gerry presented and defended her project “Compassion Cultivation Training in Bio-Adaptive Virtual Environments” at the Centre for Research and Interdisciplinarity (CRI) in Paris. The project uses perspective-taking in virtual environments together with biofeedback related to emotion regulation (heart rate variability) to manage recovery from empathic distress. Empathic distress is conceived as a step in the empathic process towards understanding another person’s affective, bodily, and psychological state, but one that can lead to withdrawal and personal distress for the empathizer. Thus, the project implements instruction techniques adapted from Compassion Cultivation Training guided-meditation practices, cued by biofeedback, to entrain better self-other boundaries and distinctions, as well as emotion regulation.
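As a concrete, if simplified, picture of HRV-cued biofeedback, the sketch below computes RMSSD (a standard time-domain heart-rate-variability measure) over recent RR intervals and triggers a guided-breathing cue when it falls below a threshold. The threshold value and the cue logic are illustrative assumptions, not the project's actual protocol:

```python
def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms),
    a common time-domain heart-rate-variability measure."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def feedback_cue(rr_ms, threshold_ms=30.0):
    """Low HRV is treated here as a sign of empathic over-arousal
    and triggers a calming cue (hypothetical threshold)."""
    return "breathe" if rmssd(rr_ms) < threshold_ms else "ok"
```

In the virtual environment, the cue would fade in a guided-meditation prompt rather than return a string.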

Lynda also participated in a workshop on VR and empathy led by Philippe Bertrand from BeAnother Lab (BAL, The Machine to Be Another). See Philippe Bertrand’s TEDx talk “Standing in the shoes of the others with VR”.