On March 30th, Lynda Joy Gerry visited the Innovation Lab at Zürcher Hochschule der Künste (ZHdK) for a workshop entitled “Mimic Yourself.”
The workshop brought together psychologists, motion-capture experts, and theater performers. The performers wore Perception Neuron motion capture suits inside an OptiTrack system, and their movements were mapped onto virtual avatars in real time. The team used the Structure Sensor depth camera to create photogrammetry scans of lab members, which then served as the avatar “characters” in the virtual environment onto which the mocap actors’ movements were retargeted. A virtual screen was also programmed into the Unity environment; it could be moved through the real world at different angles and on different three-dimensional planes, showing varying views and perspectives of the virtual avatar relative to the human actor’s movements. Two actors playfully danced and moved about while their tracked motion drove virtual effects – animating avatars, but also cueing different sound effects and experiences.
Image above: Motion capture body suit worn by a human actor and tracked onto a virtual avatar. Multiple avatar “snapshots” can be taken to create visual effects and pictures. Images below: Creating a many-armed Shakti pose with avatar screen captures created through mocap.
Image above: Examples of photogrammetry scans taken with the Structure Sensor.
In May the online magazine Lehtiset published a Finnish Language article on “Cinema, storytelling and the mind” by Pia Tikka.
Elokuva, tarinankertoja ja mieli (“Film, storyteller, and mind”)
On May 17th, Lynda Joy Gerry attended a conference organized by the Berlin School of Mind and Brain entitled “Watch Your Bubble!” The title refers to the information-bubble conceptualization of social dynamics, in which groups of individuals become increasingly nested within a bubble that reinforces only their own worldview. The conference brought in speakers on neuroaesthetics and social neuroscience, specifically Vittorio Gallese, Joerg Fingerhut, Andreas Roepstorff, Olafur Eliasson, and Vincent Hendricks.
Neuroscientist Vittorio Gallese’s lecture explored the ways in which individuals are reciprocally connected and the interdependence of self and other. Embodied simulation when imagining an action activates neural pathways similar to those engaged when actually performing it. This makes film an especially evocative medium, particularly for haptic vision and for what Gallese calls “embodying technesis,” wherein common action representations exist between imagination and action, and between self and other. The conference also addressed the formation and plasticity of personal identity – where identity comes from and how it is formed. Joerg Fingerhut specifically addressed the types of changes that individuals believe would make them a new person: for instance, a change in one’s musical tastes and preferences is perceived as a change in one’s identity.
Examination of Johanna Lehto’s MA thesis “Robots and Poetics – Using narrative elements in human-robot interaction” for the Department of Media, New Media Design and Production programme, on 16 May 2018.
As a writer and a designer, Johanna Lehto sets out to reflect upon the phenomenon of human-robot interaction through her own artistic work. To illustrate the plot structure and narrative units of the interaction between a robot and a human, she draws on Aristotle’s dramatic principles, applying Aristotelian drama structure to analyse a human-robot encounter as a dramatic event. Johanna made an interactive video installation presenting an AI character, Vega 2.0 (image). The installation was exhibited in Tokyo at the Hakoniwa exhibition (22–24 June 2017) and at the Musashino Art University Open Campus festival (10–11 June 2017).
News from our international network.
Since being connected through the Storytek Content+Tech Accelerator in fall 2017, Pia Tikka has consulted for the VFC project, directed by Charles S. Roy, on screenplay and audience interaction.
Charles S. Roy, Film Producer & Head of Innovation at the production company La Maison de Prod, is developing his debut narrative film+interactive project VFC as producer-director. VFC has been selected for the Storytek Content+Tech Accelerator, the Frontières Coproduction Market, the Cannes NEXT Cinema & Transmedia Pitch, the Sheffield Crossover Market, and Cross Video Days in Paris. In the vein of classic portrayals of female anxiety such as Roman Polanski’s REPULSION, Todd Haynes’ SAFE, and Jonathan Glazer’s BIRTH, VFC is a primal and immersive psychological drama about fear of music (cinando.com). Its main innovation lies in bringing brain-computer interface storytelling to the big screen by offering an interactive neurotech experience.
At the Cannes Film Market, as a grant holder of the Estonian innovation and development incubator Storytek Accelerator, Charles presented his work to the audience of the tech-focused NEXT section (8–13 May).
Our Enactive Avatar team Victor Pardinho, Lynda Joy Gerry, Eeva R Tikka, Tanja Bastamow, and Maija Paavola planning the volumetric video capture of a screen character with a collaborator in Berlin. The team’s work is supported by the Finnish Cultural Foundation, Huhtamäki Fund, and Virtual Cinema Lab (VCL), School of Film, Television and Scenography, Aalto University, and by Pia Tikka’s EU Mobilitas Top Researcher Grant.
Testing facial expressions of the viewer driving the behavior of a screen character with Louise’s Digital Double (under a Creative Commons Attribution NonCommercial NoDerivatives 4.0 license); see Eisko.com.
In the image, Lynda Joy Gerry, Dr. Ilkka Kosunen, and Turcu Gabriel (Erasmus exchange student from the University of Craiova, at the Digital Technology Institute, TLU) examine facial expressions of a screen character driven by an AI.
The manifesto for the new field of science, Symbiotic Composing, by Dr. Ilkka Kosunen, was recently published in Teater. Muusika. Kino.
Symbiotic composing connects the topics of deep learning, physiological computing, and computational creativity to facilitate a new type of creative process in which technology and human aesthetic judgment merge into one.
The article is in Estonian.
News from our international network.
Visiting researcher Ellen Pearlman, hosted by MEDIT and DTI at Tallinn University in November 2017, introduced the Enactive Virtuality team to filmmaker Karen Palmer and her team. Karen spent six months at ThoughtWorks in NYC as Artist in Residence, working on her emotionally responsive immersive film RIOT. Our interaction with Karen’s team focused on a shared interest in advanced facial recognition technologies that would allow the narrative to be driven in real time.