“The Queen” project – collaboration with Aalto BioLab

Enactive Virtuality Lab will present its collaborative research with the Brain and Mind Lab of the Aalto School of Science at the forthcoming Worlding the Brain Conference at Aarhus University, Nov 27-29.

Image: The son (Juha Hippi) confronting his father (Vesa Wallgren). The short film The Queen (Kuningatar) is directed by Pia Tikka; produced by Aalto University in collaboration with Oblomovies Oy, 2013.

A talk at Estonian Art Academy conference “The Collaborative Turn in Art”

The two-day conference The Collaborative Turn in Art: The Research Process in Artistic Practice deals with artistic research, in particular the expanded understanding of this term and the questions raised by collaborative creative practices. Venue: Estonian Academy of Arts, Põhja pst 7, room A501.

Image: Julijonas Urbonas, “Talking Doors,” 2009 (Doors Event). For more, follow the link to the conference webpage.

Pia Tikka:

My talk “Neurocinematics & Art-Science Collaboration” drew on the first-hand knowledge gained from several collaborative projects in which I have worked as a consulting film expert, and from my own neurocinematic projects in which I have served as the principal investigator. I highlighted the diversity of issues one faces in collaborations between artists and scientists. It was especially interesting to reflect on the conceptual, technological, and methodological differences between the arts and the sciences. The discussion ranged from conceptual to technological issues; however, the focus was on challenges such as finding a shared language, working methods, the best division of labor and responsibilities, and authorship.

The image shows a view of the lecture room: Chris Hales guides the audience through his talk titled “From Tacit Knowledge to Academic Knowledge.”

Talking about AI & MEDIA with ACE Producers

AI & Media Afternoon by Aalto Studios with ACE Producers / 12.10.2018 / Helsinki
 
Drs Pia Tikka and Ilkka Kosunen (image) gave a joint talk at the AI & Media Afternoon event on Friday the 12th of October, 2018. The event was held at Miltton offices at Vuorikatu 15, Helsinki, from 16:15 until 19:00.
Mika Rautiainen, Valossa Oy:  Applying Video Recognition and Content Intelligence to Media Workflows
Pia Tikka & Ilkka Kosunen: Creating Autonomous Behavior of Virtual Humanlike Characters in Interaction with Human Participants

Enactive VR research project “The State of Darkness”

In the VR-mediated experience of the State of Darkness, the participant will meet face-to-face with a humanlike artificial character in an immersive narrative context.

The human mind and culture rely on narratives people live by every day: narratives they tell to one another, narratives that allow them to learn from others, for instance, in movies, books, or social media. The State of Darkness extends this notion of narrative to the non-human: to the stories experienced by our virtual character, Adam B. Trained on an extensive repertoire of human facial expressions, Adam B has gained control over his facial expressions when encountering humans.

Our concept builds on the idea of a symbiotic interactive co-presence of a human and a non-human. Adam B will be experiencing his own non-human narrative, which draws to some extent on the behavior of the participant, yet is driven mainly by Adam B’s own life story, hidden from the participant and emerging within the complexity of Adam B’s algorithmic mind. The State of Darkness is an art installation where human and non-human narratives coexist, the former experienced and lived by our participant and the latter experienced by our artificial character Adam B, as they meet face-to-face, embedded in the narrative world of the State of Darkness.
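The idea of a character driven mainly by its own hidden narrative, with a smaller contribution from the participant's observed behavior, can be illustrated with a minimal toy sketch. All names, dimensions, and weights below are hypothetical illustrations, not the installation's actual implementation:

```python
def update_expression_state(internal_state, participant_signal,
                            internal_weight=0.8):
    """Toy update rule: blend the character's hidden narrative state with
    a participant-derived signal. The 0.8 weight is illustrative only,
    reflecting that the internal life story dominates the behavior."""
    w = internal_weight
    return [w * i + (1.0 - w) * p
            for i, p in zip(internal_state, participant_signal)]

# Example: a hypothetical 3-dimensional expression state
# (e.g. valence, arousal, gaze intensity)
state = update_expression_state([0.5, 0.2, 0.9], [0.1, 0.8, 0.3])
```

In such a blend, the participant can nudge the character's expressions, but the character's trajectory remains anchored to its own narrative state.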

Team: Idea, Concept, Director Pia Tikka; Script & Dramaturgical supervision Eeva R Tikka; Enactive Character design and production pipeline design Victor Pardinho; Enactive Scenography Tanja Bastamow; Soundscape Can Uzer; Technical 3D Artist Maija Paavola; VR audiovisual spatialization consultation Iga Zatorska; Symbiotic Creativity Ilkka Kosunen; Machine learning and Unreal Engine consultation Paul Wagner; Unreal Engine consultation Tuomas Karmakallio; and others.

Team funding: Finnish Cultural Foundation Huhtamäki Fund; Aalto Studios’ Virtual Cinema Lab; Digidemo, Promotion Center of Audiovisual Culture (Oblomovies Oy); Tikka & Kosunen: EU Mobilitas Pluss Top Researcher Grant (2017-2022), Estonian Research Council & Tallinn University.

For more information, contact: piatikka@tlu.ee

Scheduled premiere: The International Conference on Interactive Digital Storytelling (ICIDS 2018), 5-8 December 2018, Trinity College Dublin, Ireland. The State of Darkness VR installation has been proposed for the ICIDS 2018 Art Exhibition, a platform for artists to explore digital media for interactive storytelling from the perspective of a particular curatorial theme: Non-Human Narratives. See https://icids2018.scss.tcd.ie

Tallinn Summer School in St. Petersburg August 26–31

Ilkka Kosunen

With the support of the Estonia 100 programme, Tallinn University, Tallinn Summer School, and ITMO University offered a summer school course “Experimental Interaction Design: Physiological Computing Technologies for Performative Arts.”

The main goal of the one-week intensive hands-on course in interaction design was to empower people to shape their digital environments, thus providing a new level of digital literacy. This edition focused on Neurotheatre, a specific type of interactive theatre where the audience and/or actors can communicate via brain and neural computer interfaces using multimodal sensors and actuators.

The course introduced core design and interaction design topics from a provocative stance, inviting participants to reflect upon ongoing shifts, connections, and re-framings in just about every area of interaction design, and inciting a rebellion against passivity. This was complemented by developing skills in the systematic evaluation of the usability and user experience of interaction designs. The expectation is to see participants take ownership of the interaction design process.
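The kind of pipeline such a neurotheatre setup rests on, where a physiological reading drives a stage actuator, can be sketched minimally. The signal names, ranges, and mapping below are hypothetical illustrations, not the course's actual materials:

```python
def map_signal_to_actuator(value, lo, hi, out_max=255):
    """Normalize a raw physiological reading (e.g. an EEG band-power or
    heart-rate estimate) from its expected [lo, hi] range into [0, 1],
    then scale it to an actuator range such as a DMX light channel
    (0-255). Out-of-range readings are clamped."""
    norm = (value - lo) / (hi - lo)
    norm = max(0.0, min(1.0, norm))
    return round(norm * out_max)

# Example: a heart-rate-like reading of 72 on an assumed 60-100 scale
# drives a stage light channel
level = map_signal_to_actuator(72, lo=60, hi=100)
```

In a live setting, such a mapping would run continuously over streamed sensor data, with the normalization range calibrated per performer or audience member.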

http://summerschool.tlu.ee/russia/

 

New neurocinematic publication – Narrative comprehension beyond language

Pia Tikka, Janne Kauttonen & Yevhen Hlushchuk (2018): “Narrative comprehension beyond language: Common brain networks activated by a movie and its script”

A young girl, Nora, stares shocked at her mother Anu. Anu stands expressionless by the kitchen table and scrapes the left-over spaghetti from Nora’s plate into a plastic bag. She places the plate into the bag, starts putting other dining dishes in it, takes a firm hold of the bag, and smashes it against the table. Nora is horrified: “Mother! What are you doing?” Anu continues smashing the bag without paying attention to her daughter. Nora begs her to stop. Anu collapses crying against the table top. Nora approaches, puts her arms around the crying mother and starts slowly caressing her hair.

The dramatic scene describes a daughter witnessing her mother’s nervous breakdown. Its narrative content remains the same whether one reads it in textual form or views it as a movie. It is relatively well known how narratives are processed in the distinct human sensory cortices depending on the sensory input through which the narrative is perceived (reading, listening, viewing; [15]). However, far less is known of how the human brain processes meaningful narrative content independent of the medium of presentation. To tackle this classical dichotomy between form and content in neuroimaging terms, we employed functional magnetic resonance imaging (fMRI) to provide new insights into the brain networks relating to a particular narrative content while setting aside its form.

In the image: Nora (actress Rosa Salomaa); director Saara Cantell, cinematography Marita Hällfors (F.S.C.), producer Outi Rousu, Pystymetsä Oy, 2010.

Abstract

Narratives surround us in our everyday life in different forms. In the sensory brain areas, the processing of narratives is dependent on the media of presentation, be that in audiovisual or written form. However, little is known of the brain areas that process complex narrative content mediated by various forms. To isolate these regions, we looked for the functional networks reacting in a similar manner to the same narrative content despite different media of presentation. We collected 3-T fMRI whole brain data from 31 healthy human adults during two separate runs when they were either viewing a movie or reading its screenplay text. Independent component analysis (ICA) was used to separate 40 components. By correlating the components’ time-courses between the two different media conditions, we could isolate 5 functional networks that particularly related to the same narrative content. These TOP-5 components with the highest correlation covered fronto-temporal, parietal, and occipital areas with no major involvement of primary visual or auditory cortices. Interestingly, the top-ranked network with the highest modality invariance also correlated negatively with the dialogue predictor, thus pinpointing that narrative comprehension entails processes that are not language-reliant. In summary, our novel experiment design provided new insight into narrative comprehension networks across modalities.
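The core ranking step described in the abstract, correlating each ICA component's time course between the movie and script runs and picking the components with the highest correlation, can be sketched as follows. The toy data here is synthetic; the actual study's preprocessing and ICA estimation are not reproduced:

```python
import numpy as np

def rank_modality_invariant_components(ts_movie, ts_script, top_k=5):
    """Correlate each component's time course between the movie-viewing
    and script-reading runs, and return the indices of the top_k
    components with the highest correlation (most modality-invariant),
    along with all per-component correlations."""
    # ts_movie, ts_script: arrays of shape (n_components, n_timepoints)
    corrs = np.array([np.corrcoef(m, s)[0, 1]
                      for m, s in zip(ts_movie, ts_script)])
    order = np.argsort(corrs)[::-1]  # highest correlation first
    return order[:top_k], corrs

# Illustrative toy data: 40 components, 200 time points, where both
# "runs" contain the same underlying signal plus independent noise
rng = np.random.default_rng(0)
shared = rng.standard_normal((40, 200))
ts_a = shared + 0.1 * rng.standard_normal((40, 200))
ts_b = shared + 0.1 * rng.standard_normal((40, 200))
top, corrs = rank_modality_invariant_components(ts_a, ts_b)
```

Components whose time courses track the narrative content rather than the sensory input should correlate highly across the two media conditions, which is what this ranking isolates.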

Conference presentation @Virtual Reality as a Transformative Technology to Develop Empathy June 20

A video conference presentation at the 1st “Virtual Reality as a Transformative Technology to Develop Empathy” Conference, organised by the “empathic Reactive MediaLab Coalition” (eRMLab Coalition) and the EmpaticaXR research group.

 

Lynda Joy Gerry: Envisioning Future Technology for Compassion Cultivation

Experiments in cognitive neuroscience have recently dissociated the neural pathways underpinning the experience of empathy from those of compassion. Namely, whereas the experience of empathy involves neural activations reflecting an affective pain resonance with the distress or suffering of the target, compassion correlates with reward centers and positive-affect regions of the brain. The import of this finding is that empathy may in some instances lead to a withdrawal reflex if there is excessive sharing of distress with the target, whereas compassion appears to involve care for the welfare of another and an approach motivation. Within the last five years, virtual environments (VEs) have been increasingly researched and developed towards the goal of enhancing users’ social intelligence, self-compassion, positive attitudes towards out-groups, and empathy. However, most of these VEs do not help the user become more aware of his or her own emotional and psychological states in response to another person or persons, arguably a crucial step in the recovery from empathic distress or over-arousal. Biofeedback training has also been used for improving social cognition skills, but only a few projects have incorporated biofeedback into empathy-enhancing virtual environments. Recovery from empathic distress as a skill trained through biofeedback VEs could enhance interpersonal connectedness, quality of life, and social cohesion.