Embodying Creative Expertise in Virtual Reality Zurich ZHdK May 29–31

In collaboration with BeAnotherLab (The Machine to Be Another), Lynda Joy Gerry taught a workshop, “Embodying Creative Expertise in Virtual Reality,” to Master's students in Interaction Design at Zürcher Hochschule der Künste (ZHdK), as part of the course “Ecological perception, embodiment, and behavioral change in immersive design” led by BAL members.

Image: Poster for the students’ final project presentation and exhibition.

Lynda specifically taught design approaches that use a semi-transparent video overlay of another person's first-person, embodied experience, as in First-Person Squared. The workshop focused on Leap Motion data tracking and measurement: how to compute a match score capturing compatibility and interpersonal motor coordination between two participants, and how to send this data over a network. The system provides motor feedback both for imitative gestures, which are similar in form and position, and for synchronous gestures, which occur at the same time, ideally supporting both types of interpersonal motor coordination. Lynda walked students through the equations and input data needed to compute the different match scores, and showed how to add interaction effects driven by this data. She also demonstrated how to run Leap Motion hand tracking on top of stereoscopic point-of-view video and how to record user hand movements. On the 31st, students premiered their final projects at an event entitled “Scattered Senses.”
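
The workshop's actual equations are not reproduced in this post, but the general shape of such a match score can be sketched. The following is a minimal illustration, assuming a hand pose flattened into a vector, a form term based on pose distance, a synchrony term based on zero-lag correlation of recent hand speeds, and a JSON-over-UDP datagram to the partner's machine; the weights, frame layout, and port number are invented for the example.

```python
# Minimal sketch (not the workshop's actual code) of a two-person
# gesture "match score": a form-similarity term plus a synchrony term,
# streamed to a peer over UDP. All parameters are illustrative.
import json
import socket
import numpy as np

def form_score(pose_a: np.ndarray, pose_b: np.ndarray) -> float:
    """Similarity of two hand poses (e.g., flattened fingertip offsets
    relative to the palm), mapped to [0, 1] via an exponential falloff."""
    return float(np.exp(-np.linalg.norm(pose_a - pose_b)))

def synchrony_score(speed_a: np.ndarray, speed_b: np.ndarray) -> float:
    """Zero-lag correlation of recent hand-speed traces, clipped to [0, 1].
    High values mean the two users are moving at the same time."""
    if speed_a.std() == 0 or speed_b.std() == 0:
        return 0.0
    r = np.corrcoef(speed_a, speed_b)[0, 1]
    return float(max(r, 0.0))

def match_score(pose_a, pose_b, speed_a, speed_b, w_form=0.5, w_sync=0.5):
    """Weighted combination of the imitation (form) and synchrony terms."""
    return w_form * form_score(pose_a, pose_b) + w_sync * synchrony_score(speed_a, speed_b)

def send_score(score: float, addr=("127.0.0.1", 9000)):
    """Ship the current score to the partner's machine as a JSON datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps({"match": score}).encode(), addr)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pose_a = rng.normal(size=15)                        # 5 fingertip offsets (x, y, z)
    pose_b = pose_a + rng.normal(scale=0.1, size=15)    # a close imitation
    speeds = rng.random(30)                             # 30 recent hand-speed samples
    send_score(match_score(pose_a, pose_b, speeds, speeds))
```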

Mimic Yourself: Mo-cap Workshop Zurich ZHdK May 30

On May 30th, Lynda Joy Gerry visited the Innovation Lab at Zürcher Hochschule der Künste (ZHdK) for a workshop entitled “Mimic Yourself.”

This workshop brought together psychologists, motion-tracking and capture experts, and theater performers. The performers wore the Perception Neuron motion capture suit inside an OptiTrack system, and the data from their motion was mapped onto virtual avatars in real time. Specifically, the team had used the Structure Sensor depth camera to create photogrammetry scans of members of the lab; these scans then served as the avatar “characters” in the virtual environment, onto which the mocap actors' movements were retargeted. A virtual screen was also programmed into the Unity environment, so that it could be moved around the real space at different angles and in different three-dimensional planes, showing different views and perspectives of the virtual avatar relative to the human actor's movements. Two actors playfully danced and moved about while their tracked motion drove virtual effects: animating virtual avatars, but also cueing different sound effects and experiences.
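
As a rough illustration of the retargeting step described above (the lab's actual pipeline ran in Unity with Perception Neuron data, which is not reproduced here), the sketch below copies joint rotations from a tracked skeleton onto a matching avatar skeleton and scales the root translation for body-size differences; the joint names, the mapping, and the scaling heuristic are assumptions.

```python
# Minimal sketch of real-time mocap retargeting (not the lab's Unity
# setup): joint rotations transfer directly between corresponding
# joints each frame, and the root position is scaled for height.
from typing import Dict, Tuple

Quaternion = Tuple[float, float, float, float]  # (x, y, z, w)

JOINT_MAP = {  # tracked-skeleton joint -> avatar joint (illustrative names)
    "Hips": "pelvis",
    "Spine": "spine_01",
    "LeftArm": "upperarm_l",
    "RightArm": "upperarm_r",
    "Head": "head",
}

def retarget_frame(
    tracked_rotations: Dict[str, Quaternion],
    tracked_root: Tuple[float, float, float],
    performer_height: float,
    avatar_height: float,
):
    """Map one frame of mocap data onto the avatar skeleton."""
    # Rotations transfer directly between corresponding joints.
    avatar_rotations = {
        JOINT_MAP[j]: rot for j, rot in tracked_rotations.items() if j in JOINT_MAP
    }
    # Root translation is scaled so performers and avatars of different
    # heights keep their feet on the same floor plane (simple heuristic).
    s = avatar_height / performer_height
    avatar_root = tuple(c * s for c in tracked_root)
    return avatar_rotations, avatar_root

if __name__ == "__main__":
    rotations = {"Hips": (0.0, 0.0, 0.0, 1.0), "Head": (0.0, 0.1, 0.0, 1.0)}
    print(retarget_frame(rotations, (0.0, 0.9, 0.0), 1.85, 1.60))
```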

Image above: Motion capture body suit worn by a human actor and tracked onto a virtual avatar. Multiple avatar “snapshots” can be taken to create visual effects and pictures. Images below: Creating a many-armed Shakti pose with avatar screen captures created through mocap.

Image above: Examples of photogrammetry scans taken with the Structure Sensor.

Aalto MA evaluation on Robots & Poetics

Examination of Johanna Lehto's MA thesis “Robots and Poetics – Using narrative elements in human-robot interaction” for the Department of Media, New Media Design and Production programme, on 16 May 2018.

As a writer and a designer, Johanna Lehto sets out to reflect upon the phenomenon of human-robot interaction through her own artistic work. To illustrate the plot structure and narrative units of the interaction between a robot and a human, she draws on Aristotle's dramatic principles, applying Aristotelian drama structure to analyse a human-robot encounter as a dramatic event. Johanna made an interactive video installation in which she created a presentation of an AI character, Vega 2.0 (image). The installation was exhibited in Tokyo at the Hakoniwa exhibition on 22–24 June 2017 and at the Musashino Art University Open Campus festival on 10–11 June 2017.

Collaboration meeting @ Virtual Cinema Lab, Aalto University

Our Enactive Avatar team (Victor Pardinho, Lynda Joy Gerry, Eeva R Tikka, Tanja Bastamow, and Maija Paavola) planning the volumetric video capture of a screen character with a collaborator in Berlin. The team's work is supported by the Finnish Cultural Foundation (Huhtamäki Fund), the Virtual Cinema Lab (VCL) at the School of Film, Television and Scenography, Aalto University, and Pia Tikka's EU Mobilitas Top Researcher Grant.

Testing behavioral strategies for Enactive Avatar

Testing how the viewer's facial expressions drive the behavior of a screen character, using Louise's Digital Double (under a Creative Commons Attribution Non-Commercial No Derivatives 4.0 license). See Eisko.com.

In the image, Lynda Joy Gerry, Dr. Ilkka Kosunen, and Turcu Gabriel (Erasmus exchange student from the University of Craiova, at the Digital Technology Institute, TLU) examine facial expressions of a screen character driven by an AI.

Enactive Virtuality at CNA Sound and Storytelling Conference LA March 22

Pia Tikka and Martin Jaroszewicz gave a joint talk at the CNA Sound and Storytelling Conference.

Image: Dr. Martin Jaroszewicz discussing his ideas on enactive VR soundscapes at the Chapman University conference, Orange, CA, March 22.

“Enactive Virtuality – a framework for dynamically adaptive soundscapes”. 

The novel notion of enactive virtuality is discussed, drawing on the theories of the enactive mind [1] and the concept of enactive cinema [2]. The key attribute ‘enactive’ refers here to a setting in which the human agent is in continuous, feedback-looped interaction with the surrounding world. Enactive virtuality, in turn, refers to the story emerging in the agent's mind in this dynamical setting in order to make sense of the world. This story is based on the agent's current situation and previous experiences [3], and on relating to others through neurally built-in imitation of their actions [4]. Thus, the concept of ‘enactive virtuality’ extends beyond the common techno-spatial buzzword of ‘virtual reality’. While virtual reality technologies provide platforms on which the perception of sound events can be modulated algorithmically, we describe the human agent's experience of a dynamically adaptive soundscape as an expression of enactive virtuality. Theories and techniques of sound transformation in the spectral domain [5,6,7] for this setting are discussed.
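
As one concrete, hedged example of a spectral-domain transformation of the kind referenced above (not the authors' method), the sketch below applies a time-varying spectral tilt to an audio signal, driven by a control value standing in for the listener's ongoing state in the feedback loop; the control signal and the tilt mapping are assumptions.

```python
# Minimal sketch: the soundscape's brightness is re-weighted per STFT
# frame by a control value in [0, 1]. Illustrative only.
import numpy as np
from scipy.signal import stft, istft

def adapt_soundscape(audio: np.ndarray, control: np.ndarray, fs: int = 44100):
    """Apply a time-varying spectral tilt driven by `control`
    (one value per control sample, 0 = dark/muffled, 1 = bright)."""
    f, t, Z = stft(audio, fs=fs, nperseg=1024)
    # Interpolate the control signal onto the STFT frame times.
    c = np.interp(t, np.linspace(0.0, t[-1], len(control)), control)
    # Frequency-dependent gain: high bands fade in with the control value,
    # low bands always pass.
    norm_f = (f / f[-1])[:, None]                 # 0..1 over the spectrum
    gain = (1.0 - norm_f) + norm_f * c[None, :]
    _, out = istft(Z * gain, fs=fs, nperseg=1024)
    return out

if __name__ == "__main__":
    fs = 44100
    noise = np.random.default_rng(1).normal(size=fs * 2)  # 2 s of noise
    sweep = np.linspace(0.0, 1.0, 50)                     # rising "arousal"
    out = adapt_soundscape(noise, sweep, fs)
```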

References:

[1] Varela F, Thompson E, Rosch E. 1991. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.

[2] Tikka P. 2008. Enactive Cinema: Simulatorium Eisensteinense. PhD diss. Helsinki: Univ. Art and Design Publ.

[3] Heyes CM, Frith CD. 2014. The cultural evolution of mind reading. Science 344(6190):1243091. doi:10.1126/science.1243091.

[4] Gallese V, Eagle MN, Migone P. 2007. Intentional attunement: mirror neurons (…). J Am Psychoanal Assoc 55(1):131-76.

[5] Jaroszewicz M. 2015. Compositional Strategies in Spectral Spatialization. PhD thesis, University of California, Riverside.

[6] Jaroszewicz M. 2017. “Interfacing Gestural Data from Instrumentalists.” ART – Music Review, vol. 32.

[7] Kim-Boyle D. 2008. “Spectral spatialization – an overview.” RILM Abstracts of Music Literature. http://hdl.handle.net/2027/spo.bbp2372.2008.086

A 13 min talk on Neurocinematics, Tallinn University Day 2018

See the full program here.

At the 12:15 session: Pia Tikka, Research Professor, Baltic Film, Media, Arts and Communication School, “Neurocinematics.”

My 13 minutes will introduce the multidisciplinary research paradigm of neurocinematics. Combining the methods of cinema, enactive media, and virtual screen characters with those of the cognitive sciences allows us to unravel new aspects of the neural basis of storytelling, creative imagination, and narrative comprehension. In addition to contributing to academic research on the human mind, neurocinematics serves a range of more specifically targeted goals, such as studying the impact of audiovisual media on its audience for artistic, therapeutic, or commercial applications, to name a few of many.

Neurocinematics – viewing “At Land” by Maya Deren (1944) in brain scanning lab

Pia Tikka

The film that all subjects viewed in the fMRI and MEG brain scanning labs at Aalto University was At Land (1944) by Maya Deren. I suggested this experimental film for our brain imaging studies due to its special characteristics.

“The dancer, choreographer, and filmmaker Maya Deren can be seen as one of the pioneers of screendance. Her experimental films have challenged conventional plot-driven mainstream cinema by emphasizing an ambiguous experience, open for multiple interpretations. For Deren film viewing is a socially determined ritual embodying intersubjectively shared experiences of participants. This makes her films particularly interesting for today’s neurocinematic studies. Deren’s ideas also anticipate the recent enactive mind approach, according to which the body-brain system is in an inseparable manner situated and coupled with the world through interaction. It assumes that both private, such as perception and cognition, and intersubjective aspects of human enactment, such as culture, sciences, or the arts, are based on the embodiment of life experience. Reflecting this discourse, Deren’s film At Land is analyzed as an expression of a human body-brain system situated and enactive within the world, with references to neuroscience, neurocinematic studies, and screendance” (Tikka & Kaipainen 2016).

Read more about the film and why it is of interest to cognitive neuroscience studies here:

Pia Tikka and Mauri Kaipainen (2016). “Screendance as Enactment in Maya Deren’s At Land: Enactive, Embodied, and Neurocinematic Considerations.” In The Oxford Handbook of Screendance Studies, edited by Douglas Rosenberg. Oxford University Press. DOI: 10.1093/oxfordhb/9780199981601.013.15

2015

One of the challenges of naturalistic neurosciences using movie-viewing experiments is how to interpret observed brain activations in relation to the multiplicity of time-locked stimulus features. As previous studies have shown less inter-subject synchronization across viewers of random video footage than story-driven films, new methods need to be developed for analysis of less story-driven contents. To optimize the linkage between our fMRI data collected during viewing of a deliberately non-narrative silent film ‘At Land’ by Maya Deren (1944) and its annotated content, we combined the method of elastic-net regularization with the model-driven linear regression and the well-established data-driven independent component analysis (ICA) and inter-subject correlation (ISC) methods. (…)

“In our study, we aimed to go beyond story-driven narratives. What methods could improve the analysis of time-locked interdependence between the brain data and the content of more ambiguous non-narrative films, or video recordings of non-structured events, such as improvised conversation? This question motivated us to study the linkage between our fMRI data collected during viewing of a non-narrative silent film ‘At Land’ directed by Maya Deren (1944, 14′40′′) and its annotated content. The film ‘At Land’ shows an expressionless young woman wandering in her surroundings without any explicit motivation for her behavior, such as collecting stones on the beach, or jumping down from a rock. In addition, according to the director-actress herself, she deliberately avoided emotional expressions, while the cinematographic aspects, such as camera movements and framing, have been carefully composed (Deren, 2005). How to link the cinematic features of such an ambiguous film with the viewers’ brain activation detected while they are trying to make sense of it? As the film ‘At Land’ does not give any tools for inferring the character’s mental state, the goals of her actions, or inner reasons, story-based film structure analysis methods do not necessarily allow adequate interpretation of the resulting linkages. Instead, by annotating all bodily actions and camera-related features specifically pointed out by the director of the film, one might find meaningful linkages between the film content and collected fMRI data” (Kauttonen et al 2015).

Read more here: Kauttonen, J., Hlushchuk, Y., and Tikka, P. (2015). “Optimizing methods for linking cinematic features to fMRI data.” NeuroImage 110C:136–148. doi: 10.1016/j.neuroimage.2015.01.063.
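
For readers curious how such a linkage can look in code, here is a minimal, illustrative sketch (not the study's actual pipeline): annotated content features are convolved with a simple haemodynamic response and regressed against a BOLD time course with an elastic net, which keeps only a sparse subset of features. The HRF shape, TR, and toy data are assumptions.

```python
# Minimal sketch of elastic-net linkage between stimulus annotations
# and one fMRI time course. Illustrative only.
import numpy as np
from sklearn.linear_model import ElasticNetCV

def gamma_hrf(tr: float = 2.0, duration: float = 20.0) -> np.ndarray:
    """Simple single-gamma haemodynamic response function."""
    t = np.arange(0, duration, tr)
    h = (t ** 5) * np.exp(-t)          # gamma-like bump peaking ~5 s
    return h / h.sum()

def fit_feature_weights(annotations: np.ndarray, bold: np.ndarray, tr: float = 2.0):
    """annotations: (time, features) binary/continuous content labels;
    bold: (time,) one voxel's or ROI's signal. Returns feature weights."""
    hrf = gamma_hrf(tr)
    # Convolve each annotation column with the HRF to build regressors.
    X = np.column_stack([
        np.convolve(annotations[:, j], hrf)[: len(bold)]
        for j in range(annotations.shape[1])
    ])
    model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, bold)
    return model.coef_

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ann = (rng.random((300, 8)) > 0.9).astype(float)  # 8 annotated features
    bold = np.convolve(ann[:, 0], gamma_hrf())[:300] + rng.normal(scale=0.1, size=300)
    print(fit_feature_weights(ann, bold))             # feature 0 should dominate
```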

2016

Observation of another person’s actions and feelings activates brain areas that support similar functions in the observer, thereby facilitating inferences about the other’s mental and bodily states. In real life, events eliciting this kind of vicarious brain activations are intermingled with other complex, ever-changing stimuli in the environment. One practical approach to study the neural underpinnings of real-life vicarious perception is to image brain activity during movie viewing. Here the goal was to find out how observed haptic events in a silent movie would affect the spectator’s sensorimotor cortex. (…)

Kaisu Lankinen, Eero Smeds, Pia Tikka, Elina Pihko, Riitta Hari, Miika Koskinen (2016) Haptic contents of a movie dynamically engage the spectator’s sensorimotor cortex. Human Brain Mapping.
DOI: 10.1002/hbm.23295
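
A minimal sketch of the kind of linkage examined here (not the study's actual MEG analysis) might band-pass a sensorimotor-band signal, take its envelope, and correlate it with a time series marking observed haptic events in the film; the band edges and the toy data are assumptions.

```python
# Minimal sketch: correlate a sensorimotor-band envelope with a
# haptic-event time series. Illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope(x: np.ndarray, fs: float, lo: float = 15.0, hi: float = 25.0):
    """Envelope of the beta-band component of a sensor signal."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def haptic_correlation(meg: np.ndarray, haptic_events: np.ndarray, fs: float):
    """Correlate the band envelope with an equally long event series."""
    env = band_envelope(meg, fs)
    return float(np.corrcoef(env, haptic_events)[0, 1])

if __name__ == "__main__":
    fs = 200.0
    rng = np.random.default_rng(4)
    meg = rng.normal(size=int(fs * 10))                  # 10 s of toy data
    events = (rng.random(meg.size) > 0.95).astype(float) # sparse haptic events
    print(haptic_correlation(meg, events, fs))
```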

2018

Another article that harnessed our data collected when people watched At Land by Deren (1944):

Movie-viewing allows human perception and cognition to be studied in complex, real-life-like situations in a brain-imaging laboratory. Previous studies with functional magnetic resonance imaging (fMRI) and with magneto- and electroencephalography (MEG/EEG) have demonstrated consistent temporal dynamics of brain activity across movie viewers. However, little is known about the similarities and differences of fMRI and MEG/EEG dynamics during such naturalistic situations. (…)

Image below: Fig. 6 of the article, showing time courses of the most similar MEG and fMRI signals.

For detailed information, the reader is referred to the original article: Lankinen K, Saari J, Hlushchuk Y, Tikka P, Parkkonen L, Hari R, Koskinen M. (2018). Consistency and similarity of MEG- and fMRI-signal time courses during movie viewing. NeuroImage. https://www.sciencedirect.com/science/article/pii/S1053811918301423
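
Inter-subject correlation, the data-driven consistency measure recurring across these studies, is simple to state in code: for each signal, the pairwise Pearson correlations between all subjects' time courses are averaged. The sketch below is illustrative, with toy data as an assumption.

```python
# Minimal sketch of inter-subject correlation (ISC). Illustrative only.
import numpy as np
from itertools import combinations

def isc(timecourses: np.ndarray) -> float:
    """timecourses: (subjects, time). Mean pairwise Pearson correlation."""
    pairs = combinations(range(timecourses.shape[0]), 2)
    rs = [np.corrcoef(timecourses[i], timecourses[j])[0, 1] for i, j in pairs]
    return float(np.mean(rs))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    shared = rng.normal(size=500)                              # stimulus-driven part
    subjects = shared + rng.normal(scale=1.0, size=(8, 500))   # 8 noisy viewers
    print(f"ISC = {isc(subjects):.2f}")
```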

Enactive Avatar in Time Flies Nordica Spring 2018

You can find an article about Prof. Pia Tikka and her studies on enactive co-presence between the viewer and a screen character in the Nordica in-flight magazine “Time Flies”.

You can also read the article in BFM’s blog:
http://media.tlu.ee/imagine-movies-could-read-your-emotions

Common VR/AR grounds with TTÜ and EKA Jan 25

Pia Tikka

The Enactive Virtuality team in search of common ground with TTÜ and EKA at the Ericsson Connectivity room in Mektory House (Raja 15, 12618 Tallinn), 25.01.2018. In order to cultivate successful research environments to study and develop innovations in virtual reality, augmented reality, and other related audiovisual fields, collaboration across distinct academic institutions, each with its specific expertise, is essential.

Hosts@TTÜ: Aleksei Tepljakov, Research Scientist; Eduard Petlenkov, Associate Professor; and Ahmet Kose, Junior Researcher. Guests: Kristjan Mändmaa, Dean of Design; Ruth Melioranski, Design Researcher; and Tanel Kärp, Interaction Design Program Manager, at EKA; Suk-Jae Chang, AR/VR businessman; Pia Tikka, Professor, TLU.

Link to the Recreation Lab at TTÜ: https://recreation.ee