Call for NEUROCINEMATIC papers — Baltic Screen Media Review, Special Issue 2023

Cinematic minds in the making – Investigating subjective and intersubjective experiences of storytelling

Guest editors Pia Tikka and Elen Lotman with Maarten Coëgnarts

Contact and submission to NeuroCineBFM@tlu.ee


One of the key foundations of everyday activities in society is intersubjectively shared communication between people. Stories, films, and other audiovisual narratives promote a shared understanding of possible situations in other people’s lives. Narratives expose complex social situations with their ethical, political, and cultural contexts (Hjort and Nannicelli, 2022). They also serve humankind as a means to learn from protagonists: from their positive examples and successes, but also from their mistakes, false motivations, and blind desires that may lead to dramatic situations, sometimes even disasters. The exhaustive range of contextual situatedness that constitutes the essence of narratives serves not only entertainment and education but also scientific studies of the human mind and behavior.

Since the beginning of this millennium, narratives mediated by films have allowed researchers to simulate complex socio-emotional events in behavioral and neuroimaging laboratories, accumulating new insights into human behavior, emotion, and memory, to name a few of many topics. The proponents of so-called naturalistic neuroscience and, in particular, its subfield neurocinematics (Hasson et al. 2008) have shown how experiencing naturally unfolding events evokes synchronized activations in large-scale brain networks across different test participants (see Jääskeläinen et al. 2021 for a review). The tightly framed contextual settings in cinematic narratives have opened a fresh window for researchers interested in understanding the linkages between individual subjective experiences and intersubjectively shared experiences.

So far, neurocinematic studies have focused nearly exclusively on mapping the correlations between narrative events and the observed physiological behaviors of uninitiated test participants. The knowledge accumulated so far tells little about the affective or cognitive functions of the experts of audiovisual storytelling, with few exceptions (e.g. de Borst et al. 2016; Andreu-Sánchez et al. 2021). With this Special Issue we want to extend the scope of studies to the embodied cognitive processes of storytellers themselves. As an example, consider the term “experiential heuristics”, proposed by cinematographer Elen Lotman (2021) to describe the practice-based knowledge accumulation of cinematographers, or the embodied dynamics of the filmmaker in the process of simulating the experiences of the fictional protagonists and/or of imagined viewers, described as “enactive authorship” (Tikka 2022). Another question that merits further understanding is how the embodied decision-making processes of filmmakers lead to the creation of dynamic embodied structures in the cinematic form. Appealing to the shared embodiment of both the filmmaker and the film viewer, these pre-conceptual patterns of bodily experience, or “image schemas”, have been argued to play a significant expressive role in the representation and communication of meaning in cinema (Coëgnarts 2019).

We call for papers that focus on the creative experiential processes of filmmakers, storytelling experts, and their audiences. We encourage authors to discuss how the temporal unfolding of contextual situatedness depicted in narratives manifests in reported subjective experiences, observed body-brain behaviors, and time-locked content descriptions.

TOPICS

We invite boldly multidisciplinary papers contributing theoretical, conceptual, and practical approaches to the experiential nature of filmmaking and viewing. They may draw, for instance, on social and cognitive sciences, psychophysiology, neurosciences, ecological psychology, affective computing, cognitive semantics, aesthetics, or empirical phenomenology.

Accordingly, we encourage papers that discuss the relations between data from these approaches, all concerning the film experience of film professionals and/or film viewers. A submission may also focus on a specific expertise, for example that of the writer, editor, cinematographer, scenographer, or sound designer. The papers may describe, for example: 1) subjective experiences; 2) intersubjectively shared experiences; 3) context or content annotation; 4) semantic description; 5) first-person phenomenal description; and/or 6) physiological observation (e.g. neuroimaging, eye-tracking, psychophysiological measures).

Contributions addressing topics such as (but not limited to) the following are particularly welcome:

  • Social cognition and embodied intersubjectivity
  • Interdisciplinary challenges for methods; annotations linking cinematic features to physiological data
  • First- and second-person methodologies; empirical phenomenological observations
  • Embodied enactive mind, embodied simulation, and theory of mind
  • Embodied metaphors in film; embodied film style; bodily basis of shared film language; semantics
  • Cinematic empathy; emotions; simulation of character experiences
  • Audience engagement; immersion; cognitive identification
  • Temporality of experiences; context-dependent memory coding; story reconstruction; narrative comprehension
  • Experiential heuristics; multisensory and tacit knowledge
  • Storytelling strategies; aesthetics; film and media literacy; genre conventions

GUIDELINES

We will accept long research articles (4000–8000 words excluding references) and short articles and commentaries (2000–2500 words excluding references). Submitted papers need to follow the journal's submission guidelines.

All submissions should be sent via email attachment to Guest editors at NeuroCineBFM@tlu.ee

BSMR embraces visual storytelling; we thus invite authors to use photos and other illustrations as part of their contributions. See the journal info at https://sciendo.com/journal/BSMR


Key dates


01.04.2023 – Submit abstracts of 200–300 words
10.04.2023 – Notification of abstract acceptance
30.06.2023 – Submit full manuscripts for blind peer review
20.09.2023 – Resubmit revisions
31.12.2023 – Special Issue published online


This issue of BSMR will be published both online and in print in December 2023. 

Guest editors 

Maarten Coëgnarts https://www.filmeu.eu/alliance/people/maarten-coegnarts

Elen Lotman  https://www.filmeu.eu/alliance/people/elen-lotman

Pia Tikka https://www.etis.ee/CV/Pia_Tikka/eng

 

References

Andreu-Sánchez, C., Martín-Pascual, M.A., Gruart, A. and Delgado-García, J.M. (2021). The effect of media professionalization on cognitive neurodynamics during audiovisual cuts. Frontiers in Systems Neuroscience, 15: 598383. doi: https://doi.org/10.3389/fnsys.2021.598383

de Borst, A. W., Valente, G., Jääskeläinen, I. P., & Tikka, P. (2016). Brain-based decoding of mentally imagined film clips and sounds reveals experience-based information patterns in film professionals. NeuroImage, 129, 428–438. https://doi.org/10.1016/j.neuroimage.2016.01.043

Hasson, U., Landesman, O., Knappmeyer, B. Vallines, I., Rubin, N., & Heeger, D. (2008). Neurocinematics: The Neuroscience of Film. Projections, 2, 1–26. https://doi.org/10.3167/proj.2008.020102.

Hjort, M. and Nannicelli, T. (Eds.). (2022). The Wiley Blackwell Companion to Motion Pictures and Public Value. Wiley-Blackwell.

Jääskeläinen, I. P., Sams, M., Glerean, E., & Ahveninen, J. (2021). Movies and narratives as naturalistic stimuli in neuroimaging. NeuroImage, 224, 117445. https://doi.org/10.1016/j.neuroimage.2020.117445

Lotman, E. (2021). Experiential heuristics of fiction film cinematography. PhD dissertation, Tallinn University.

Tikka, P. (2022). Enactive authorship: Second-order simulation of the viewer experience – a neurocinematic approach. Projections: The Journal for Movies and Mind, 16(1), 47–66. https://doi.org/10.3167/proj.2022.160104

 

Relationality in VR – Luzern workshop at HSLU 22 March 2023

Programme – Relationality in VR

Organizer: Hochschule Luzern, Design & Kunst; Dr. Christina Zimmermann, Research Project Lead (Projektleiterin Forschung)

hslu.ch/design-kunst

***

 

Tikka Pia

Enactive Co-presence in Narrative Virtual Reality
 

The talk describes our most recent project The State of Darkness (SOD 2.0; work-in-progress).

SOD 2.0 is a virtual reality installation in which human and non-human lives coexist: the former is lived by the participant, the latter by the non-human Other. The narrative VR system is enactive, that is, all elements of the narrative space stand in a reciprocally dependent relation to one another.

The participant's experiential moves are interpreted from their biosensor measurements in real time and fed back to drive the different elements of the enactive narrative system. In turn, the facial and bodily behaviour of the artificial Other feeds back into the participant's experiential states. The scenography and the soundscape adapt to the behaviours of the two beings, and these adaptations in turn shape the atmosphere of the intimate co-presence between the human and the Other.
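The feedback loop described above can be sketched in Python. This is only an illustrative sketch, not the installation's actual implementation: the signal (`raw_gsr`), the `SceneParams` fields, and all scaling factors are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SceneParams:
    """Hypothetical parameters driving the adaptive narrative elements."""
    light_level: float      # scenography brightness, 0..1
    sound_intensity: float  # soundscape dynamics, 0..1
    other_gaze: float       # how directly the Other attends to the participant, 0..1

def interpret_arousal(raw_gsr: float, baseline: float) -> float:
    """Map a raw skin-conductance reading to a normalized arousal estimate."""
    arousal = (raw_gsr - baseline) / max(baseline, 1e-6)
    return min(max(arousal, 0.0), 1.0)

def update_scene(arousal: float) -> SceneParams:
    """One step of the loop: higher participant arousal darkens the scene,
    intensifies the soundscape, and makes the Other more attentive."""
    return SceneParams(
        light_level=1.0 - 0.5 * arousal,
        sound_intensity=0.3 + 0.7 * arousal,
        other_gaze=arousal,
    )

# One tick of the loop: sensor reading in, scene parameters out.
params = update_scene(interpret_arousal(raw_gsr=6.0, baseline=4.0))
```

In a running installation this step would execute continuously, with the resulting parameters feeding the character, scenography, and sound engines.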

The concept of non-human narrative allows State of Darkness 2.0 to set the human-centric perspective against a non-human one. The intriguing question is whether narratives and the narrative faculty should be considered exclusively characteristic of humans, or whether the idea of narrative can be extended to other domains of life, or even to the domain of artificially humanlike beings.

SOD 2.0 is an artistic dissemination of the project Enactive Co-presence in Narrative Virtual Reality: A Triadic Interaction Model.
The MOBTT90 research project combines arts and sciences to explore how the viewer's experience of co-presence can be controlled by parametrically modifying the behavior of a screen character or its context. It is led by Research Professor Pia Tikka, Baltic Film, Media and Arts School (BFM) & Centre of Excellence in Media Innovation and Digital Culture (MEDIT), Tallinn University. The project is funded by the EU Mobilitas Pluss Top Researcher Grant and the Estonian Research Council.
 

Enactive Virtuality Lab hybrid seminar 24 Nov, 2–6 pm – Open for online attendance!

The Enactive Virtuality Lab presents the most recent work by its team members.

The event takes place at Tallinn University, Nova building, N-406 Kinosaal.

Date: 24.11.2022, 14:00–18:00 (EET / CET+1)

Join Zoom Meeting https://zoom.us/j/95032524868

Speakers and Schedule, see below.

 

SPEAKERS

Tanja Bastamow: Virtual scenography in transformation

The performative possibilities of virtual environments

 

Scenography has historically been understood as the illustrative support for a staged drama. In recent decades, however, the field has expanded to encompass the overall design of performance events, actions, and experiences. Positioning this expanded understanding of scenography in dialogue with virtual reality (VR) opens up interesting possibilities for designing virtual scenography that can become an enabler of emerging narratives, unforeseen events, and unexpected encounters. In my research I investigate the question of shifting agencies and the virtual scenographer's role as a co-creator rather than a traditional author-designer. Who – and what – become the creators, performers, and spectators in virtual experiences and encounters in which the real-time responsiveness, transformability, (im)materiality, and immersivity of virtual scenography play a key role? In my talk I will introduce thoughts around these questions by describing the process of building the virtual scenography for the "State of Darkness II" VR experience.

Tanja Bastamow is a virtual scenographer working on experimental projects combining scenography with virtual and mixed reality environments. Bastamow's key areas of interest are immersive virtual environments, the creative potential of technology as a tool for designing emergent spatial narratives, and creating scenographic encounters in which human and non-human elements can mix in new and unexpected ways. Currently, she is a doctoral candidate at Aalto University's Department of Film, Television and Scenography, researching the performative possibilities of real-time virtual environments. In addition, she works as a virtual designer in the LiDiA – Live + Digital Audiences artistic research project (2021–23). She is also a founding member of the Virtual Cinema Lab research group and has previously held the position of lecturer in digital design methods at Aalto University.

Ats Kurvet: Designing virtual characters

Ats Kurvet is a 3D real-time graphics and virtual reality application developer and consultant with over eight years of industry experience. He specialises in lighting, character development and animation, game and user experience design, 3D modeling and environment development, shader and material development, and tech art. He has worked for Crytek GmbH as a lighting artist and runs ExteriorBox OÜ. His focus in working with the Enactive Virtuality Lab is researching the visual aspects of digital human development and implementation.

Matias Harju: Traits of Virtual Reality Sound Design

Whilst sound design for VR borrows a lot from other media such as video games and films, VR sets some unique conditions for sonic thinking and technical approaches. Spatial sound is inarguably one of the most significant areas in this respect, entailing the whole process from narrative decisions to source-material recordings to 3D audio rendering on headphones. Other sound design considerations in VR may relate to the level of sonic realism and the role of sound in general. My talk will briefly outline some of the characteristics of sound in VR from both artistic and technical perspectives.

Matias Harju is a sound designer and musician specialising in immersive and interactive sonic storytelling. He is currently developing Audio Augmented Reality (AAR) as a narrative and interactive medium at WHS Theatre Union in Helsinki. He is also active in making adaptive music and immersive sound design for other projects, including VR, AR, video games, installations, and live performances. Matias is a Master of Music from the Sibelius Academy with a background in multiple musical genres, music education, and audio technology. He also holds a master's degree from the Sound in New Media programme at Aalto University.

Marie-Laure Cazin: Freud's Last Hypnosis – validating emotion-driven enactions in cinematic VR

Dr Marie-Laure Cazin will present the study she has been conducting in the context of her EU Mobilitas Pluss postdoctoral grant with the Enactive Virtuality Lab, BFM, TLU (2021–2022). She is currently developing an artistic prototype of Emotive VR for interactive 360° film. The project aims to achieve emotional interactivity between the viewer and the film's soundtrack. While viewers experience her VR film, Freud's Last Hypnosis, their emotional responses are collected with physio-sensors and eye-tracking, and through personal interviews after the viewing. The project is a collaboration with Dr Mati Mõttus from the Interaction Lab of the School of Digital Technologies (TLU) and Matias Harju, composer and sound designer (Finland).

Marie-Laure Cazin is a French filmmaker and researcher who teaches at the École Supérieure d'Art et de Design TALM, Le Mans (France). She was a postdoctoral researcher at MEDIT, BFM, Tallinn University, with a European postdoctoral Mobilitas Pluss grant (2021–2022). She received a PhD from Aix-Marseille University in 2020 for her thesis "Cinéma et Neurosciences, du Cinéma Émotif à Emotive VR"[1]. The thesis examines the cinema experience as contextualized by neuroscience research, describing the neuronal process of emotions and thinking further about the analogy between cinema and the thinking process.

She comes from an artistic background, having studied at Le Fresnoy, Studio National des Arts Contemporains; at the Jan van Eyck Academy, a postgraduate residency programme (Maastricht, Netherlands); and at the École Nationale Supérieure des Beaux-Arts in Paris. She has conducted many art-science projects together with scientific partners, shown in many exhibitions and festivals, creating prototypes that renew the filmic experience. In her Emotive Cinema (2014) and Emotive VR (2020) projects she applies physiological feedback, such as EEG, to obtain from the viewers' brain activity an emotional analysis that changes the film's scenario.

[1] Cazin, M.-L. (2020). Cinéma et neurosciences : du cinéma émotif à Emotive VR. Available at https://www.theses.fr/2020AIXM0009

Mati Mõttus: Psycho-physiology and eye-tracking of cinematic VR

There are various ways to track our emotions through physiometric signals. The most common of these are the electrical conductance of the skin, facial expressions, cardiography, and encephalography. In my talk I would like to discuss the reliability and intrusiveness of physiological measurements in interactive art. While the reliability of measurements is not too critical in the domain of art, intrusive sensors on the bodies of those enjoying the art can easily spoil the artistic experience.
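As a toy illustration of the kind of signal the talk refers to, skin conductance, a trace can be reduced to simple features such as a mean level and a count of rapid upward deflections. This is not Mõttus's actual method; the function, threshold, and sample values are invented for illustration.

```python
def scr_features(samples, threshold=0.05):
    """Reduce a skin-conductance trace to (mean level, count of upward
    deflections larger than `threshold` between consecutive samples)."""
    mean_level = sum(samples) / len(samples)
    deflections = sum(
        1 for a, b in zip(samples, samples[1:])
        if b - a > threshold
    )
    return mean_level, deflections

# Example trace in microsiemens, sampled at a fixed rate.
level, peaks = scr_features([4.0, 4.0, 4.2, 4.1, 4.4, 4.3])
```

Real pipelines would additionally filter the raw signal and separate its tonic and phasic components, which is precisely where reliability trade-offs of the kind the talk discusses arise.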

Mati Mõttus is a lecturer and researcher at the School of Digital Technologies, Tallinn University. His doctoral degree in computer science, in the field of human-computer interaction, focused on "Aesthetics in Interaction Design" (2018). His current research interest is hedonic experiences in human-computer interaction, with a twofold focus: on the one hand, the use of psycho-physiological signals to detect users' feelings while they interact with technology and to explain emotional behavior; on the other, the design of interactive systems based on psycho-physiological loops.

Robert McNamara: Empathic nuances with virtual avatars – a novel theory of compassion

Technological mediation is the idea that technology affects or changes us as we use it, either consciously or unconsciously. Digital avatars, whether encountered in VR or otherwise, are one such technology, often working in concert with various machine learning applications, that pose potentially unknown and understudied interactional effects on humans, whether through their use in art, video games, or governmental applications (such as iBorderCtrl, the pilot program at EU borders used to detect traveler deception). One area where avatars may pose particular interest for academic study is their perceived ability not only to evoke the uncanny valley effect, but also to promote a "distancing effect". It is hypothesized that interaction with avatars, as an artistic process, may also result in increased expressions of cognitive empathy through the conscious or unconscious process of abductive reasoning.

Robert McNamara has a background in American criminal law as well as degrees related to audiovisual ethnography and Eastern classics. He is from New York State but has lived in Tallinn for the last five years. He currently focuses on research related to the ethical, socio-political, anthropological, and legal aspects of employing human-like VR avatars and related XR technology. He is co-authoring journal papers, in collaboration with other experts on the team, on the social and legal issues surrounding the use of artificial intelligence, machine learning, and virtual reality avatars in governmental immigration regimes. During 05–09/2020 he worked as a visiting research fellow with the MOBTT90 team; from 10/2020 onwards he continues working with the project as a doctoral student. In the Enactive Virtuality Lab Robert has contributed to co-authored writing; the most recent paper accepted for publication is entitled "Well-founded Fear of Algorithms or Algorithms of Well-founded Fear? Hybrid Intelligence in Automated Asylum Seeker Interviews", Journal of Refugee Studies, Oxford University Press.

Debora C. F. de Souza: Self-rating of emotions in a simulated immigration interview

Humans benefit from emotional interchange as a source of information to adapt and react to external stimuli and navigate their reality. Computers, on the other hand, rely on classification methods: they use models to calculate and differentiate affective information from other human inputs, drawing on the emotional expressions that emerge through human body responses, language, and behavior changes. Nevertheless, theoretically and methodologically, emotion is a challenging topic to address in Human-Computer Interaction. During her master's studies, Debora explored methods for assessing physiological responses to emotional experience and for aiding the emotion recognition features of Intelligent Virtual Agents (IVAs). Her study developed an interface prototype for emotion elicitation and the simultaneous acquisition of the user's physiological and self-reported emotional data.

Debora C. F. de Souza is a Brazilian visual artist and journalist. She graduated in Social Communication from the University of Mato Grosso do Sul Foundation in Brazil. Her artwork and research are marked by experiments with different kinds of images and audiovisual media. Having recently graduated with an MA in Human-Computer Interaction (HCI), she is now a doctoral student in Information Society Technologies and a junior researcher at the School of Digital Technologies at Tallinn University. In the Enactive Virtuality Lab, she is researching the implications of anthropomorphic virtual agents for human affective states and the implications of such interactions in collaborative and social contexts, such as medical simulation training.



SCHEDULE 14:00–18:00

14:00–14:10 Pia Tikka: State of Darkness (SoD) – Enactive VR experience

14:10–14:45 Tanja Bastamow – Designing performative scenography (SoD)

14:45–15:00 Ats Kurvet – Designing humanlike characters (SoD)

15:00–15:20 Matias Harju – Traits of Virtual Reality Sound Design

15:20–15:40 Marie-Laure Cazin: Freud's Last Hypnosis – validating emotion-driven enactions in cinematic VR

15:40–16:00 Mati Mõttus: Experimenting with Emotive VR cinema

Break (10 min)

16:15–17:15 Robert McNamara: Empathic nuances with virtual avatars – a novel theory of compassion

17:15–17:35 Debora Souza: Self-rating of emotions in a simulated immigration interview

17:35–18:00 Discussion

****

 

BFM PhD is inviting you to a scheduled Zoom meeting.

Topic: Enactive Virtuality Lab hybrid seminar
Time: Nov 24, 2022 10:00 Helsinki

Join Zoom Meeting
https://zoom.us/j/95032524868

Meeting ID: 950 3252 4868

 

Invited talk at Tehohoitopäivät 01.–02. Nov 2022, Helsinki University Hospital conference

Pia Tikka

The Finnish-language talk "Interfacing Humans and Machines – Challenges for Virtual Simulation Training" was part of the Enactive Virtuality Lab's efforts to apply the scientific findings and innovations gained in the XR field to serve societal wellbeing and to find solutions for challenges in the domain of healthcare and medical training.

https://sthy.fi/sthy-koulutus/tehohoitopaivat-9/

Simulation session, Nov 02, 2022 [in Finnish]

5. THE MANY POSSIBILITIES OF SIMULATION. Chair: Maarit Hult, MD

"What simulation? – a look into the fascinating world of simulations", Taru Kantola, MD, Department of Anaesthesia and Surgery, Meilahti Hospital, HUS (image above)

"Simulation saves lives", Doc. Pekka Aho, Abdominal Center, Meilahti Hospital, HUS

"At the interface of human and machine – challenges of simulation in virtual reality", Research Professor Pia Tikka, Centre of Excellence in Media Innovation and Digital Culture (MEDIT), Tallinn University

***

TRAINING DAYS OF THE FINNISH SOCIETY OF INTENSIVE CARE
Clarion Hotel, Helsinki, Tyynenmerenkatu 2, BYSA, 3rd floor

1.–2.11.2022

Tuesday 1.11.2022

1. QUALITY AND MONITORING OF TREATMENT OUTCOMES. Chair: Doc. Mika Valtonen, TYKS

2. NEW POSSIBILITIES OF EEG. Chair: Ville Jalkanen, MD, TAYS

3. THE LIMITS OF CARE ARE CHANGING – NEW PATIENT GROUPS IN INTENSIVE CARE

Wednesday 2.11.2022

4. SECURITY THREATS AND PREPAREDNESS. Chairs: Doc. Mika Valtonen and Doc. Juha Koskenkari

5. THE MANY POSSIBILITIES OF SIMULATION. Chair: Maarit Hult, MD

 

Invited talk at Elvytys 2022 Helsinki University Hospital 1 day seminar 24 Oct

Pia Tikka

 

My presentation "Hybrid Intelligence – Applications in Virtual Medical Training" was part of the Enactive Virtuality Lab's efforts to apply the scientific findings and innovations gained in the XR field to serve societal wellbeing and to find solutions for challenges in the domain of healthcare and medical training.

https://www.hus.fi/ajankohtaista/elvytys-2022-symposium-kokosi-elvytystoiminnan-kehittajat-yhteen

The Clarion hall was filled with medical experts and healthcare workers, a professional audience that elicits my greatest respect and admiration. Thank you for serving us, every day.

Talk at « Art Interactif Numérique et Sciences Cognitives » Journée d’étude Octobre 14, 2022 l’Université Polytechnique Hauts de France, Valenciennes

« Art Interactif Numérique et Sciences Cognitives »

Journée d’étude Octobre 14, 2022

Au bâtiment Matisse de l’Université Polytechnique Hauts de France, Valenciennes https://www.uphf.fr/en

With exhibitions by the artists

 

Pia Tikka: Designing enactive co-presence with humanlike characters in narrative contexts

The scope of technologies available to filmmakers is expanding, opening new avenues of storytelling. My focus is on applying new findings in the fields of psychophysiological tracking and machine learning to create virtual characters whose behavior resembles that of humans in the most natural ways.

In this talk I will share some recent updates in this fast-developing domain and discuss their possible applications to the co-presence of human participants and humanlike virtual characters in narrative contexts. This implies a range of multidisciplinary challenges. The core research question is what types of roles the filmmaker can give to machine learning and psycho-physiological tracking in the process of creating humanlike behaviors in narrative settings. The discussion draws on the holistic embodied approach to the mind, which in my view provides useful explanatory frames for my claims. The talk aims to inspire discussions on the use of adaptive artificial characters in the future of virtual storytelling.

Ülo Pärnits & Ülemiste City @ Tallinn University

Tallinn University and Ülemiste City signed a cooperation agreement on 12 October to confirm the interest of both parties in jointly promoting innovative education and supporting talents. The signing was preceded by a discussion about what knowledge, talents, and technologies the education system will need in the future. (…) At the heart of the memorandum between Ülemiste City and Tallinn University is the promise not to lose sight of the human being in an ever-technological world and to look for ways to support talents and their development.

In order to sign the memorandum, the founder of Ülemiste City and long-time chairman of the board of Mainor, Ülo Pärnits, was 'brought to life' as an avatar created by Pia Tikka, lead researcher of the Enactive Virtuality Lab of the Baltic Film, Media and Arts School of Tallinn University, and Ats Kurvet, a computer graphics specialist. The avatar is a tribute to his influential work.

The work on the Ülo Pärnits avatar was coordinated by Ermo Säks (BFM).

An article about the whole event can be read in Estonian here: https://www.ulemistecity.ee/uudised-teated/tallinna-ulikool-ja-ulemiste-city-asuvad-koostoos-tulevikuharidust-edendama/ and in English here: https://www.ulemistecity.ee/en/news-messages/tallinna-ulikool-ja-ulemiste-city-asuvad-koostoos-tulevikuharidust-edendama/

Collaborative work with M-L Cazin and Nantes University, fall 2022

Pia Tikka and Mati Mõttus worked on 3–7 October with filmmaker and researcher Marie-Laure Cazin in Saint-Maurice, Paris, on the experimental psycho-physiological study in which Cazin's VR film "Freud's Last Hypnosis" is viewed by French-speaking participants. The ongoing multidisciplinary art-science study is a natural extension of Cazin's EU Mobilitas Pluss postdoc year at the Enactive Virtuality Lab (09/2021–08/2022; a separate report t.b.d.). The target is a joint paper by the Enactive Virtuality team, Cazin, and Nantes University researchers. As Cazin's VR film is a well-designed cinematic story with strong emotional and intellectual components that trigger VR viewers' affective-cognitive processes, it provides the MOBTT90 team with a perfect study case.

During the week in Paris we worked on the analysis of the participant interviews and of the collected data, as well as on the structure of the joint paper (in preparation). In this project we measure eye movements, heart rate, and skin conductance as indicators of the dynamically changing mental states of the viewers, and the collected data are correlated with annotations of the individual eye-tracking data layered on top of the performance video recording. Furthermore, these are related to the in-person interview material collected by Cazin.
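The alignment step described above, relating a physiological signal to time-locked annotations, can be sketched as follows. This is a hypothetical simplification, not the study's actual analysis code: the 1 Hz sampling, the segment labels, and the values are all invented.

```python
def mean_per_segment(signal, annotations):
    """Average a 1 Hz signal within each annotated time segment.
    signal: list of samples, one per second
    annotations: list of (start_s, end_s, label) tuples"""
    result = {}
    for start, end, label in annotations:
        segment = signal[start:end]
        result[label] = sum(segment) / len(segment)
    return result

# Invented heart-rate trace (bpm, one sample per second) and annotations.
hr = [70, 71, 72, 80, 82, 78, 70, 69]
annotations = [(0, 3, "calm scene"), (3, 6, "hypnosis scene")]
means = mean_per_segment(hr, annotations)
```

In practice such segment statistics would then be correlated across participants and related to the interview material.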

We are excited to collect new data with François Bouffard and colleagues at Nantes University, Halle 6 – Pôle Universitaire interdisciplinaire des cultures numériques, in early December.

 

On 6 October, Pia Tikka, Mati Mõttus, and Marie-Laure Cazin gave a three-hour seminar presenting our MOBTT90 research, invited by Prof. Jean-Marie Dallet at the École des arts de la Sorbonne, Université Paris 1 Panthéon-Sorbonne.

State of Darkness meeting diary 2022

Diary for the Fall 2022 work-in-progress.

State of Darkness (SOD II; working title)

Image: Team meeting with script writer Eeva R. Tikka, Tanja Bastamow, designer of the performative scenography, and Pia Tikka, discussing the responses of the artificial character to changes in the VR world.

Images: Enactive Virtuality team planning session with Victor Pardinho, consultant and CEO of the start-up Sense of Space, spring 2022.

Image by Pia Tikka. The Enactive Virtuality team discussing the biofeedback (Ilkka Kosunen) between different adaptive elements in the sound world (Matias Harju), the scenography (Tanja Bastamow), and the character design (Ats Kurvet).

State of Darkness II – drafting the co-presence experience in narrative VR

VR Experience – draft April 2021, Pia
An updated version of the feelings and experience of co-presence with a stranger, as in the early State of Darkness, but with upgraded functions and a MetaHuman character; the participant may perhaps be able to move in the space (t.b.d.).
The story
You (as a participant) put the headset on.
You hear warlike sounds from somewhere above, suggesting you are below ground, maybe in a cellar.
You find yourself at the doorway of a cellar room. It is dark. Maybe some light streams through a window crack at street level…
You may be able to move around (or else the installation is experienced seated).
You are inside the room. Not sure if you are alone.
Then you realise there is some movement around you.
A flashlight is turned on.
You see a person sharing the room with you.
That person is a stranger to you (three different versions of the MetaHuman).
From this point onwards, you need to hide in this cellar with the stranger.
An external soundscape of the warlike situation outside.
The soundscape of the cellar.
The behaviors of the stranger (and the plausible hidden motivations) and events in the cellar will be written in more detail by Eeva, who will join the enactive team starting May 1.
Tasks:
  • Eeva: responsible for creating the situation (story), the behavioural motivations for the "stranger", and choreographing the movements and default facial behavior. This includes working with Ats on motion capture.
  • Tanja: performative environment elements.
  • Ats: UE environment, the characters, visuals, soundscape, etc. – all UE-related functions, including motion capture.
  • Abdallah: tracking of user behavior and interpreting it into the UE environment; finding a suitable way to measure facial expressions, eye-tracking, etc. inside headsets (you proposed one option) – we should check that in more detail.