Hybrid Labs: The Third Renewable Futures Conference

Hybrid Labs Symposium
The Third Renewable Futures Conference
May 30 – June 1, 2018, Aalto University, Otaniemi Campus

May 31 Neurocinematics on narrative experience

Hybrid Labs is the third edition of the Renewable Futures conference, which aims to challenge the future of knowledge creation through art and science. HYBRID LABS took place from May 30 to June 1, 2018 at Aalto University in Espoo, Finland, in the context of Aalto Festival. Celebrating 50 years of the Leonardo journal and community, the HYBRID LABS conference looked back at the history of art and science collaboration, with the intent to reconsider and envision the future of hybrid laboratories, where scientific research and artistic practice meet and interact.
http://hybridlabs.aalto.fi/hls2018-symposium/

Pia Tikka and Mauri Kaipainen:
Triadic epistemology of narrative experience

We consider the narrative experience as a triangular system of relations between narrative structure, narrative perspective, and the physiological manifestations associated with both. The proposal builds on the fundamentally pragmatist idea that no two of these elements are enough to explain each other; a third is always required to explicate the interpretative angle. Phenomenological accounts altogether reject the idea of objective descriptions of experience. At the same time, a holistic understanding must assume that a narrative is shared on some level, an assumption narratology must make, and that even individual experiences are embodied, as is evident to neuroscientists observing brain activity evoked by narrative experience. These accounts cannot remain incompatible forever. Using these elements, we discuss a triadic epistemology, a mutually complementary knowledge-construction system combining phenomenological, narratological, and physiological angles in order to generate integrated knowledge about how different people experience particular narratives.

Our approach assumes a holistic, or even deeper, an enactive perspective on experiencing, that is, it assumes systemic engagement in embodied, social, and situational environmental processes. Consequently, we propose that narrative content needs to be analyzed not only on the basis of the experiencer's subjective reports, but that these reports also need to be related to neurophysiological manifestations of the experience. Conversely, describing the neural activity associated with viewing a film is not enough to relate it to the viewers' subjective reports; the observations also need to be interpreted in relation to the conventions of storytelling. A selection of cases is described to clarify the proposed triadic method.

Keywords: neurophenomenology, narrative experience, narrative perspective, enactive theory of mind, epistemology

eFilm: Hyperfilms for basic and clinical research

Presented by my aivoAALTO collaborator, Professor Mikko Sams, this talk showed highlights of neuroscience findings related to viewing films in fMRI and introduced the concept of eFilm, a novel computational platform for producing and easily modifying films for use in basic and clinical research.

June 1 VR TALKS at Aalto Studios

The VR Research Talks, organised with the Virtual Cinema Lab and the FiVR Track, were dedicated to research in and around VR, with a focus on artistic praxes around sound, alternative narrations, and the self.

Daniel Landau: Meeting Yourself in Virtual Reality and Self-Compassion
Self-reflection is the capacity of humans to exercise introspection and the willingness to learn more about their fundamental nature, purpose, and essence. Between the internal process of self-reflection and the external observation of one's reflection runs a thin line marking the relationship between the private self and the public self. From Narcissus's pond, through reflective surfaces and mirrors, to current-day selfies, the concepts of self, body image, and self-awareness have been strongly influenced by human interaction with physical reflections. In fact, one can say that the evolution of technologies reproducing images of ourselves has played a major role in the evolution of the Self as a construct. With the current wave of Virtual Reality (VR) technology making its early steps as a consumer product, we set out to explore the new ways in which VR technology may impact our concept of self and self-awareness. 'Self Study' aims to critically explore VR as a significant and novel component in the history and tradition of the complex relationship between technology and the Self (—).

See more on Daniel's work here.

Embodying Creative Expertise in Virtual Reality Zurich ZhDK May 29-31

In collaboration with BeAnotherLab (The Machine to Be Another), Lynda Joy Gerry taught a workshop, "Embodying Creative Expertise in Virtual Reality," to Master's students in Interaction Design at Zürcher Hochschule der Künste (ZhDK), as part of the course "Ecological perception, embodiment, and behavioral change in immersive design" led by BAL members.

Image: Poster for the students’ final project presentation and exhibition.

Lynda specifically taught students design approaches using a semi-transparent video overlay of another person's first-person, embodied experience, as in First-Person Squared. The focus of the workshop was on Leap Motion data tracking and measurement: specifically, how to calculate compatibility and interpersonal motor coordination through a match score between the two participants, and how to send this data over a network. The system provides motor feedback both for imitative gestures that are similar in form and position and for gestures that occur synchronously (at the same time), ideally supporting both types of interpersonal motor coordination. Lynda taught students the equations and data inputs needed to compute the different match scores, and how to add interaction effects to this data, as sketched below. She also showed students how to implement Leap Motion hand tracking on top of stereoscopic point-of-view video and how to record user hand movements. On the 31st, students premiered their final projects at an event entitled "Scattered Senses."
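As a rough illustration of how such a match score might be computed, here is a minimal sketch in Python. It is not the workshop's actual Unity/Leap Motion code; the scoring functions, weights, and the distance-to-score mapping are assumptions introduced only to show the idea of combining a form-similarity term with a synchrony term.

```python
# Minimal sketch of an interpersonal gesture "match score" between two
# participants. All names, weights, and scales are hypothetical assumptions,
# not the workshop's actual implementation.
import numpy as np

def form_match(traj_a: np.ndarray, traj_b: np.ndarray) -> float:
    """Similarity of gesture *shape*: mean distance between two palm-position
    trajectories (N x 3 arrays) after removing each participant's offset."""
    a = traj_a - traj_a.mean(axis=0)            # centre each trajectory
    b = traj_b - traj_b.mean(axis=0)
    mean_dist = np.linalg.norm(a - b, axis=1).mean()
    return float(np.exp(-mean_dist / 0.1))      # map distance to a 0..1 score

def synchrony_match(traj_a: np.ndarray, traj_b: np.ndarray) -> float:
    """Similarity of gesture *timing*: correlation of frame-to-frame speeds."""
    speed_a = np.linalg.norm(np.diff(traj_a, axis=0), axis=1)
    speed_b = np.linalg.norm(np.diff(traj_b, axis=0), axis=1)
    if speed_a.std() == 0 or speed_b.std() == 0:
        return 0.0
    r = np.corrcoef(speed_a, speed_b)[0, 1]
    return float(max(r, 0.0))                   # uncorrelated timing scores 0

def match_score(traj_a, traj_b, w_form=0.5, w_sync=0.5) -> float:
    """Combined score that could drive motor feedback for both participants."""
    return w_form * form_match(traj_a, traj_b) + w_sync * synchrony_match(traj_a, traj_b)

# Example: two similar hand-wave trajectories yield a high combined score.
t = np.linspace(0, 2 * np.pi, 120)
wave = np.stack([np.sin(t), 0.5 * np.sin(2 * t), np.zeros_like(t)], axis=1)
mimic = wave + np.random.normal(0, 0.005, wave.shape)
print(round(match_score(wave, mimic), 3))
```

In a live setting, a computation of this kind would run over short sliding windows of the hand-tracking data streamed between the two participants' machines over the network.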

Mimic Yourself: Mo-cap Workshop Zurich ZhDK May 30 

On May 30th, Lynda Joy Gerry visited the Innovation Lab at Zürcher Hochschule der Künste (ZhDK) for a workshop entitled "Mimic Yourself."

This workshop involved collaborations between psychologists, motion-tracking and capture experts, and theatre performers. The performers wore the Perception Neuron motion capture suit within an OptiTrack system, and the data from each performer's motion was mapped onto virtual avatars in real time. Specifically, the team used the Structure Sensor depth-field camera to create photogrammetry scans of members of the lab; these scans were then used as the avatar "characters" in the virtual environment onto which the mocap actors' movements were retargeted (a sketch of this per-frame step follows below). A virtual screen was also programmed into the Unity environment so that it could be moved around the real space at different angles and in different three-dimensional planes, showing different views and perspectives of the virtual avatar relative to the human actor's movements. Two actors playfully danced and moved about while driving virtual effects with their tracked motion: animating virtual avatars, but also cueing different sound effects and experiences.
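The core of the setup described above is the per-frame retargeting of tracked joint rotations onto the scanned avatars. The sketch below shows that step in a deliberately simplified Python form; the joint names, bone names, and data structures are illustrative assumptions, not the workshop's actual Unity implementation.

```python
# Minimal sketch of real-time retargeting: copy each tracked joint's rotation
# from the mocap performer onto the corresponding bone of a scanned avatar,
# frame by frame. All names and the example data are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Bone:
    name: str
    rotation: tuple = (0.0, 0.0, 0.0, 1.0)      # quaternion (x, y, z, w)

@dataclass
class Avatar:
    bones: dict = field(default_factory=dict)   # bone name -> Bone

# Assumed mapping from mocap joint names to avatar bone names (rig-specific).
JOINT_TO_BONE = {"Hips": "pelvis", "Spine": "spine_01", "LeftHand": "hand_l"}

def retarget_frame(mocap_frame: dict, avatar: Avatar) -> None:
    """Apply one frame of mocap joint rotations to the avatar's bones."""
    for joint, rotation in mocap_frame.items():
        bone_name = JOINT_TO_BONE.get(joint)
        if bone_name in avatar.bones:
            avatar.bones[bone_name].rotation = rotation

# Example usage with a single fabricated frame of data.
avatar = Avatar(bones={name: Bone(name) for name in JOINT_TO_BONE.values()})
frame = {"Hips": (0.0, 0.1, 0.0, 0.995), "LeftHand": (0.2, 0.0, 0.0, 0.98)}
retarget_frame(frame, avatar)
print(avatar.bones["pelvis"].rotation)
```

In the Unity setup this mapping runs once per frame for every tracked joint, which is what makes the avatar appear to mirror the performer in real time.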

Image above: Motion capture body suit worn by a human actor and tracked onto a virtual avatar. Multiple avatar "snapshots" can be taken to create visual effects and pictures. Images below: Creating a many-armed shakti pose with avatar screen captures created through mocap.

The image above shows examples of photogrammetry scans taken with the Structure Sensor.

Collaboration meeting @ Virtual Cinema Lab, Aalto University

Our Enactive Avatar team (Victor Pardinho, Lynda Joy Gerry, Eeva R Tikka, Tanja Bastamow, and Maija Paavola) planning the volumetric video capture of a screen character with a collaborator in Berlin. The team's work is supported by the Finnish Cultural Foundation, Huhtamäki Fund, and Virtual Cinema Lab (VCL), School of Film, Television and Scenography, Aalto University, and by Pia Tikka's EU Mobilitas Top Researcher Grant.

Testing behavioral strategies for Enactive Avatar

Testing facial expressions of the viewer driving the behavior of a screen character with Louise's Digital Double (under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 license); see Eisko.com.

In the image, Lynda Joy Gerry, Dr. Ilkka Kosunen, and Turcu Gabriel (Erasmus exchange student from the University of Craiova, at the Digital Technology Institute, TLU) examine facial expressions of a screen character driven by an AI.
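The interaction being tested can be summarised as a simple loop: classify the viewer's facial expression from the camera feed, then select the screen character's next behaviour from that classification. The Python sketch below only illustrates the shape of this loop; the expression labels, the placeholder classifier, and the response states are hypothetical stand-ins for the AI-driven system described above.

```python
# Sketch of a viewer-expression -> character-behaviour loop. The classifier is
# a random placeholder; labels and responses are illustrative assumptions.
import random

RESPONSES = {
    "smile": "character_smiles_back",
    "frown": "character_turns_away",
    "neutral": "character_holds_gaze",
}

def detect_expression(frame) -> str:
    """Placeholder for a real facial-expression classifier on a video frame."""
    return random.choice(list(RESPONSES))

def update_character(frame) -> str:
    """One step of the loop: classify the viewer, pick the character's response."""
    return RESPONSES[detect_expression(frame)]

# Drive the character for a few fabricated frames.
for frame in range(3):
    print(update_character(frame))
```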

Enactive Virtuality at CNA Sound and Storytelling Conference LA March 22

Pia Tikka and Martin Jaroszewicz gave a joint talk at the CNA Sound and Storytelling Conference.

Image: Dr. Martin Jaroszewicz discussing his ideas of enactive VR soundscapes at the Chapman University conference, Orange, CA, March 22.

“Enactive Virtuality – a framework for dynamically adaptive soundscapes”. 

The novel notion of enactive virtuality is discussed, drawing from the theories of the enactive mind [1] and the concept of enactive cinema [2]. The key attribute 'enactive' refers here to a setting in which the human agent is in continuous, feedback-looped interaction with the surrounding world. Enactive virtuality, in turn, refers to the story emerging in the agent's mind in this dynamical setting in order to make sense of the world. This story is based on the agent's current situation and previous experiences [3], and on relations to others through neurally built-in imitation of their actions [4]. Thus, the concept of 'enactive virtuality' extends beyond the common techno-spatial buzzword of 'virtual reality'. While virtual reality technologies provide platforms where the perception of sound events can be modulated algorithmically, we describe the human agent's experience of a dynamically adaptive soundscape as an expression of enactive virtuality. Theories and techniques of sound transformations in the spectral domain [5,6,7] for this setting are discussed (a minimal illustrative sketch follows the reference list below).

References:

[1] Varela F, Thompson E, Rosch E. 1991. Embodied Mind. Cambridge, MA: MIT Press.

[2] Tikka P. 2008. Enactive Cinema: Simulatorium Eisensteinense. PhD diss. Helsinki: Univ. Art and Design Publ.

[3] Heyes CM, Frith CD. 2014. The cultural evolution of mind reading. Science 344(6190):1243091. doi:10.1126/science.1243091.

[4] Gallese V, Eagle MN, Migone P. 2007. Intentional attunement: mirror neurons (…). J Am Psychoanal Assoc 55(1):131-76.

[5] Jaroszewicz M. 2015. Compositional Strategies in Spectral Spatialization. PhD thesis, University of California, Riverside.

[6] Jaroszewicz M. 2017. "Interfacing Gestural Data from Instrumentalists." ART – Music Review, vol. 32.

[7] Kim-Boyle. 2008. "Spectral spatialization – an overview." RILM Abstracts of Music Literature. http://hdl.handle.net/2027/spo.bbp2372.2008.086
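As a minimal illustration of the kind of spectral-domain sound transformation mentioned in the abstract, the Python sketch below tilts the spectral balance of a sound with a single control parameter, which in an enactive setting could be driven by measurements of the agent's ongoing engagement. The function name and parameter values are assumptions for illustration, not techniques taken from the works cited above.

```python
# Minimal sketch of a spectral-domain transformation that could be modulated
# algorithmically in an adaptive soundscape. All names and values are
# illustrative assumptions.
import numpy as np

def spectral_tilt(signal: np.ndarray, sample_rate: int, tilt: float) -> np.ndarray:
    """Boost or attenuate high frequencies; tilt in [-1, 1]."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Linear gain ramp over frequency: tilt > 0 brightens, tilt < 0 darkens.
    gain = 1.0 + tilt * (freqs / freqs.max() - 0.5)
    return np.fft.irfft(spectrum * gain, n=len(signal))

# Example: one second of a 220 Hz tone plus noise, brightened slightly.
sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.randn(sr)
out = spectral_tilt(tone, sr, tilt=0.4)
print(out.shape)
```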

A 13 min talk on Neurocinematics, Tallinn University Day 2018

 

See full program here

At the 12:15 session: Pia Tikka, Research Professor, Baltic Film, Media, Arts and Communication School, "Neurocinematics".

My 13 minutes will introduce the multidisciplinary research paradigm of neurocinematics. Combining methods of cinema, enactive media, and virtual screen characters with those of the cognitive sciences, it allows us to unravel new aspects of the neural basis of storytelling, creative imagination, and narrative comprehension. In addition to contributing to academic research on the human mind, neurocinematics contributes to a range of more specifically targeted goals, such as studying the impact of audiovisual media on its audience for artistic, therapeutic, or commercial implementations, to name a few.

Enactive Avatar in Time Flies Nordica Spring 2018

You can find an article about Prof. Pia Tikka and her studies on enactive co-presence between the viewer and a screen character in Nordica's in-flight magazine "Time Flies".

You can also read the article in BFM’s blog:
http://media.tlu.ee/imagine-movies-could-read-your-emotions

Common VR/AR ground with TTÜ and EKA, Jan 25

Pia Tikka

The Enactive Virtuality team in search of common ground with TTÜ and EKA at the Ericsson Connectivity room in Mektory House (Raja 15, 12618 Tallinn), 25.01.2018. In order to cultivate successful research environments for studying and developing innovations in virtual reality, augmented reality, and other related audiovisual fields, collaboration across distinct academic institutions, each with its specific expertise, is essential.

Hosts at TTÜ: Aleksei Tepljakov, Research Scientist; Eduard Petlenkov, Associate Professor; and Ahmet Kose, Junior Researcher, TTÜ. Guests: Kristjan Mändmaa, Dean of Design; Ruth Melioranski, Design Researcher; Tanel Kärp, Interaction Design Program Manager, EKA; Suk-Jae Chang, AR/VR businessman; Pia Tikka, Professor, TLU.

Link to Recreation Lab at TTU: https://recreation.ee