NEUROCINE WORKSHOP @BFM MAY 5-7, 2023
Enactive Virtuality Research Group, Tallinn University
Guest editors Pia Tikka and Elen Lotman with Maarten Coëgnarts
Contact and submission to NeuroCineBFM@tlu.ee
One of the key foundations of everyday life in society is intersubjectively shared communication between people. Stories, films, and other audiovisual narratives promote a shared understanding of possible situations in other people’s lives. Narratives expose complex social situations together with their ethical, political, and cultural contexts (Hjort and Nannicelli, 2022). They also serve humankind as a means to learn from protagonists: from their positive examples and successes, but also from their mistakes, false motivations, and blinding desires that may lead to dramatic situations, sometimes even disasters. The exhaustive range of contextual situatedness that constitutes narratives serves not only entertainment and education, but also scientific studies of the human mind and behavior.
Since the beginning of this millennium, narratives mediated by films have allowed researchers to simulate complex socio-emotional events in behavioral and neuroimaging laboratories, accumulating new insights into human behavior, emotion, and memory, to name a few of many topics. Proponents of so-called naturalistic neuroscience and, in particular, its subfield neurocinematics (Hasson et al., 2008) have shown how experiencing naturally unfolding events evokes synchronized activations in large-scale brain networks across different test participants (see Jääskeläinen et al., 2021, for a review). The tightly framed contextual settings of cinematic narratives have opened a fresh window for researchers interested in understanding the linkages between individual subjective experiences and intersubjectively shared experiences.
So far, neurocinematic studies have focused nearly exclusively on mapping the correlations between narrative events and the observed physiological behaviors of uninitiated test participants. The knowledge accumulated so far says little about the affective or cognitive functions of experts in audiovisual storytelling, with few exceptions (e.g., de Borst et al., 2016; Andreu-Sánchez et al., 2021). With this Special Issue we want to extend the scope of studies to the embodied cognitive processes of the storytellers themselves. As an example, consider the term “experiential heuristics”, proposed by cinematographer Elen Lotman (2021) to describe the practice-based knowledge accumulation of cinematographers, or the embodied dynamics of the filmmaker simulating the experiences of the fictional protagonists and/or those of imagined viewers, described as “enactive authorship” (Tikka, 2022). Another question that merits further understanding is how the embodied decision-making processes of filmmakers lead to the creation of dynamic embodied structures in the cinematic form. Appealing to the shared embodiment of filmmaker and film viewer alike, these pre-conceptual patterns of bodily experience, or “image schemas”, have been argued to play a significant expressive role in the representation and communication of meaning in cinema (Coëgnarts, 2019).
We call for papers that focus on the creative experiential processes of filmmakers, storytelling experts, and their audiences. We encourage proposed papers to discuss how the temporal unfolding of the contextual situatedness depicted in narratives manifests in reported subjective experiences, observed body-brain behaviors, and time-locked content descriptions.
We invite boldly multidisciplinary papers to contribute with theoretical, conceptual and practical approaches to the experiential nature of filmmaking and viewing. They may draw, for instance, from social and cognitive sciences, psychophysiology, neurosciences, ecological psychology, affective computing, cognitive semantics, aesthetics, or empirical phenomenology.
Accordingly, we encourage papers that discuss the relations between data from these approaches, all concerning the film experience of film professionals and/or film viewers. A submission may also focus on a specific expertise, for example that of the writer, editor, cinematographer, scenographer, or sound designer. The papers may describe, for example, 1) subjective experiences, 2) intersubjectively shared experiences, 3) context or content annotation, 4) semantic description, 5) first-person phenomenal description, and/or 6) physiological observation (e.g., neuroimaging, eye-tracking, psychophysiological measures).
Contributions addressing topics such as (but not limited to) the following are particularly welcome:
We will accept long research articles (4000–8000 words excluding references) and short articles and commentaries (2000–2500 words excluding references). Submitted papers must follow the journal's submission guidelines.
All submissions should be sent via email attachment to Guest editors at NeuroCineBFM@tlu.ee
BSMR embraces visual storytelling; we thus invite authors to use photos and other illustrations as part of their contributions. See the journal info at https://sciendo.com/journal/BSMR
01.04.2023 – Submit abstracts of 200–300 words
10.04.2023 – Acceptance of abstracts
30.06.2023 – Submit full manuscripts for blind peer review
20.09.2023 – Resubmit revisions
31.12.2023 – Special Issue published online
This issue of BSMR will be published both online and in print in December 2023.
Maarten Coëgnarts https://www.filmeu.eu/alliance/people/maarten-coegnarts
Elen Lotman https://www.filmeu.eu/alliance/people/elen-lotman
Pia Tikka https://www.etis.ee/CV/Pia_Tikka/eng
Andreu-Sánchez, C., Martín-Pascual, M.A., Gruart, A. and Delgado-García, J.M. (2021). The effect of media professionalization on cognitive neurodynamics during audiovisual cuts. Frontiers in Systems Neuroscience, 15: 598383. doi: https://doi.org/10.3389/fnsys.2021.598383
de Borst, A. W., Valente, G., Jääskeläinen, I. P., & Tikka, P. (2016). Brain-based decoding of mentally imagined film clips and sounds reveals experience-based information patterns in film professionals. NeuroImage, 129, 428–438. https://doi.org/10.1016/j.neuroimage.2016.01.043
Hasson, U., Landesman, O., Knappmeyer, B., Vallines, I., Rubin, N., & Heeger, D. (2008). Neurocinematics: The neuroscience of film. Projections, 2(1), 1–26. https://doi.org/10.3167/proj.2008.020102
Hjort, M., & Nannicelli, T. (Eds.). (2022). The Wiley Blackwell Companion to Motion Pictures and Public Value. Wiley-Blackwell.
Jääskeläinen, I. P., Sams, M., Glerean, E., & Ahveninen, J. (2021). Movies and narratives as naturalistic stimuli in neuroimaging. NeuroImage, 224, 117445. https://doi.org/10.1016/j.neuroimage.2020.117445
Lotman, E. (2021). Experiential heuristics of fiction film cinematography (PhD dissertation). Tallinn University.
Tikka, P. (2022). Enactive authorship: Second-order simulation of the viewer experience – a neurocinematic approach. Projections, 16(1), 47–66. https://doi.org/10.3167/proj.2022.160104
Programme – Relationality in VR
Organizer: Hochschule Luzern, Design & Kunst; Dr. Christina Zimmermann, Research Project Lead (Projektleiterin Forschung)
The talk describes our most recent project The State of Darkness (SOD 2.0; work-in-progress).
SOD 2.0 is a virtual reality installation in which human and non-human lives coexist. The former is lived by the participant, the latter by the non-human Other. The narrative VR system is enactive, that is, all elements of the narrative space stand in a reciprocally dependent relation to one another.
The participant’s experiential moves are interpreted from their biosensor measurements in real time and then fed back to drive the different elements of the enactive narrative system. In turn, the facial and bodily behaviour of the artificial Other feeds back into the participant’s experiential states. The scenography and the soundscape adapt to the behaviours of the two beings, and these adaptations in turn shape the atmosphere of intimate co-presence between the human and the Other.
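The closed loop described above can be illustrated with a minimal Python sketch. This is not the actual SOD 2.0 implementation; all names (SceneState, estimate_arousal, enactive_step) and the simple smoothing rule are hypothetical, chosen only to show the reciprocal structure: sensor readings yield an experiential estimate, which nudges the Other's behaviour and the scenography, which in turn shape the participant's next measurements.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class SceneState:
    """Parameters the enactive narrative system adapts in real time (hypothetical)."""
    other_expressivity: float  # how animated the non-human Other appears
    light_level: float         # scenography: ambient light
    sound_intensity: float     # soundscape loudness

def estimate_arousal(samples: Iterable[float]) -> float:
    """Toy experiential estimate: mean of (hypothetical) normalized
    skin-conductance samples, clamped to [0, 1]."""
    samples = list(samples)
    if not samples:
        return 0.0
    return max(0.0, min(1.0, sum(samples) / len(samples)))

def enactive_step(state: SceneState, arousal: float, gain: float = 0.5) -> SceneState:
    """One loop iteration: the participant's estimated state nudges the Other's
    expressivity and the scenography via exponential smoothing; the changed
    scene then influences the participant's next biosensor readings."""
    return SceneState(
        other_expressivity=(1 - gain) * state.other_expressivity + gain * arousal,
        light_level=(1 - gain) * state.light_level + gain * (1.0 - arousal),
        sound_intensity=(1 - gain) * state.sound_intensity + gain * arousal,
    )
```

Run repeatedly per sensor frame, such a loop makes every element reciprocally dependent on the others, which is the sense in which the narrative system is called enactive.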
The concept of non-human narrative allows The State of Darkness 2.0 to reflect the human-centric perspective against a non-human one. The intriguing question is whether narratives and the narrative faculty should be considered exclusively characteristic of humans, or whether the idea of narrative can be extended to other domains of life, or even to the domain of artificially humanlike beings.
SOD 2.0 is an artistic dissemination of the project Enactive Co-presence in Narrative Virtual Reality: A Triadic Interaction Model. This research project (MOBTT90) combines arts and sciences to explore how the viewer's experience of co-presence can be controlled by parametrically modifying the behavior of a screen character or its context. It is led by Research Professor Pia Tikka, Baltic Film, Media and Arts School (BFM) & Centre of Excellence in Media Innovation and Digital Culture (MEDIT), Tallinn University. The project is funded by the EU Mobilitas Pluss Top Researcher Grant and the Estonian Research Council. Read more about the project here: http://enactivevirtuality.tlu.ee/the-state-of-darkness-ii/
The CUDAN Show & Tell seminars feature brief (10-minute) presentations!
The field of social robotics, whether physical or virtual, is advancing fast. The challenge, however, continues to be the socially relevant context awareness of these artificial characters, so that human interaction with them becomes meaningful in the varying situations of everyday life.
The research aims at developing novel ways to adopt films and their scripts as training material for social robotics. This would constitute a major breakthrough for modeling the systemically adaptive dynamics of artificial characters with increased human relevance for a variety of societal XR applications, e.g., in the medical or educational fields.
The proposed research focus brings together the multidisciplinary expertise at TLU on audiovisual narratives, semiotics, VR/XR, cultural data analytics, machine learning, psychophysiology, and social robotics. It strengthens TLU's international collaborations within these fields.
Working remotely between Paris and Tallinn on a paper that discusses psychophysiological findings related to Marie-Laure Cazin's 360° VR film "Freud's Last Hypnosis".
Day 3 (14 December, 2022)
10:00–10:05 Conference opening
10:05–11:30 Digital Interaction – Session 1
Chair: Dr. Wiesław Kopeć / Kinga Skorupska, Polish-Japanese Academy of Information Technology, Warsaw, Poland
Seeking Emotion Labels for Bodily Reactions: An Experimental Study in Simulated…
Debora Souza, Pia Tikka and Ighoyota Ajenaghughrure
The Enactive Virtuality Lab presents the most recent work by its team members.
The event takes place at Tallinn University, Nova building, N-406 Kinosaal.
Date: 24.11.2022, 14:00–18:00 (EET / CET+1)
Join Zoom Meeting https://zoom.us/j/95032524868
Speakers and Schedule, see below.
The performative possibilities of virtual environments
Scenography has historically been understood as the illustrative support for a staged drama. In recent decades, however, the field has expanded to encompass the overall design of performance events, actions, and experiences. Positioning this expanded understanding of scenography in dialogue with virtual reality (VR) opens up interesting possibilities for designing virtual scenography that can become an enabler of emerging narratives, unforeseen events, and unexpected encounters. In my research I investigate the question of shifting agencies and the virtual scenographer's role as a co-creator rather than a traditional author-designer. Who – and what – become the creators, performers, and spectators in virtual experiences and encounters in which the real-time responsiveness, transformability, (im)materiality, and immersivity of virtual scenography play a key role? In my talk I will introduce thoughts around these questions by describing the process of building the virtual scenography for the "State of Darkness II" VR experience.
Tanja Bastamow is a virtual scenographer working with experimental projects combining scenography with virtual and mixed reality environments. Bastamow’s key areas of interest are immersive virtual environments, the creative potential of technology as a tool for designing emergent spatial narratives, and creating scenographic encounters in which human and non-human elements can mix in new and unexpected ways. Currently, she is a doctoral candidate at Aalto University’s Department of Film, Television and Scenography, doing research on the performative possibilities of real-time virtual environments. In addition to this, she is working in LiDiA – Live + Digital Audiences artistic research project (2021-23) as a virtual designer. She is also a founding member of Virtual Cinema Lab research group and has previously held the position of lecturer in digital design methods at Aalto University.
Ats Kurvet is a 3D real-time graphics and virtual reality application developer and consultant with over 8 years of industry experience. He specialises in lighting, character development and animation, game and user experience design, 3D modeling and environment development, shader and material development, and tech art. He has worked for Crytek GmbH as a lighting artist and runs ExteriorBox OÜ. His focus in working with the Enactive Virtuality Lab is researching the visual aspects of digital human development and implementation.
Whilst sound design for VR borrows a lot from other media such as video games and films, VR sets some unique conditions for sonic thinking and technical approaches. Spatial sound is inarguably one of the most significant areas in this respect. That entails the whole process from narrative decisions to source material recordings to 3D audio rendering on the headphones. Other sound design considerations in VR may relate to the level of sonic realism and the role of sound in general. My talk will briefly expose some of the characteristics of sound in VR from both artistic and technical perspectives.
Matias Harju is a sound designer and musician specialising in immersive and interactive sonic storytelling. He is currently developing Audio Augmented Reality (AAR) as a narrative and interactive medium at WHS Theatre Union in Helsinki. He is also active in making adaptive music and immersive sound design for other projects including VR, AR, video games, installations, and live performances. Matias is a Master of Music from the Sibelius Academy with a background in multiple musical genres, music education, and audio technology. He also holds a master's degree from the Sound in New Media programme at Aalto University.
Dr Marie-Laure Cazin will present the study she has been conducting in the context of her European Mobilitas+ postdoctoral grant with the Enactive Virtuality Lab, BFM, TLU (2021–2022). She is currently developing an artistic prototype of Emotive VR for interactive 360° film. The project aims at emotional interactivity between the viewer and the film's soundtrack. While viewers experience her VR film Freud's Last Hypnosis, their emotional responses are collected with physio-sensors, eye-tracking, and personal interviews after the viewing. The project is in collaboration with Dr Mati Mõttus from the Interaction Lab of the School of Digital Technologies (TLU) and Matias Harju, composer and sound designer (Finland).
Marie-Laure Cazin is a French filmmaker and researcher. She was a postdoctoral researcher at MEDIT, BFM, Tallinn University, with a European postdoctoral grant Mobilitas+ in 2022–23. She received a PhD from Aix-Marseille University in 2020 for her thesis entitled "Cinéma et Neurosciences, du Cinéma Émotif à Emotive VR". The thesis considers the cinema experience in the context of neuroscience research, describing the neuronal processes of emotions and reflecting further on the analogy between cinema and the thinking process. She is a regular teacher at the École Supérieure d'Art et de Design TALM, Le Mans (France).
She comes from an artistic background, having studied at Le Fresnoy, Studio National des Arts Contemporains, at the Jan van Eyck Academy postgraduate residency programme (Maastricht, Netherlands), and at the École Nationale Supérieure des Beaux-Arts in Paris. She has conducted many art-science projects together with scientific partners, shown in numerous exhibitions and festivals, creating prototypes that renew the filmic experience. In her Emotive Cinema (2014) and Emotive VR (2020) projects she applies physiological feedback, such as EEG, to obtain an emotional analysis from viewers' brain activity that changes the film's scenario.
Cazin, M.-L. (2020). Cinéma et neurosciences : du cinéma émotif à Emotive VR. Available at https://www.theses.fr/2020AIXM0009
There are various ways to track our emotions through physiological signals; the most common are skin conductance, facial expressions, cardiography, and encephalography. In my talk I discuss the reliability and intrusiveness of physiological measurements in interactive art. While the reliability of measurements is not too critical in the domain of art, intrusive sensors on the art-enjoyer's body can easily spoil the artistic experience.
Mati Mõttus is a lecturer and researcher at the School of Digital Technologies, Tallinn University. His doctoral degree in computer science, in the field of human-computer interaction, focused on "Aesthetics in Interaction Design" (2018). His current research interest is hedonic experiences in human-computer interaction, with a two-fold focus: on the one hand, the use of psychophysiological signals to detect users' feelings while interacting with technology and to explain emotional behavior; on the other, the design of interactive systems based on psychophysiological loops.
Technological mediation is the idea that technology affects or changes us as we use it, consciously or unconsciously. Digital avatars, whether encountered in VR or otherwise, and often working in concert with various machine learning applications, are one such technology posing potentially unknown and understudied interactional effects on humans, whether through their use in art, video games, or governmental applications (such as iBorderCtrl, the pilot program at EU borders used to detect traveler deception). One area where avatars may pose particular interest for academic study is their perceived ability not only to evoke the uncanny valley effect, but also to promote a "distancing effect". It is hypothesized that interaction with avatars, as an artistic process, may also result in increased expressions of cognitive empathy through the conscious or unconscious process of abductive reasoning.
Robert McNamara has a background in American criminal law as well as degrees related to audiovisual ethnography and Eastern classics. He is from New York State but has lived in Tallinn for the last five years. He currently focuses on research related to the ethical, socio-political, anthropological, and legal aspects of employing human-like VR avatars and related XR technology. He is co-authoring journal papers, in collaboration with other experts on the team, on the social and legal issues surrounding the use of artificial intelligence, machine learning, and virtual reality avatars in governmental immigration regimes. From May to September 2020 he worked as a visiting research fellow with the MOBTT90 team; from October 2020 onwards he has continued working with the project as a doctoral student. In the Enactive Virtuality Lab, Robert has contributed to co-authored writing; the most recent paper accepted for publication is entitled "Well-founded Fear of Algorithms or Algorithms of Well-founded Fear? Hybrid Intelligence in Automated Asylum Seeker Interviews" (Journal of Refugee Studies, Oxford UP).
Humans benefit from emotional interchange as a source of information to adapt and react to external stimuli and navigate their reality. Computers, on the other hand, rely on classification methods: they use models to calculate and differentiate affective information from other human inputs, drawing on the emotional expressions that emerge through bodily responses, language, and changes in behavior. Nevertheless, both theoretically and methodologically, emotion is a challenging topic to address in human-computer interaction. During her master's studies, Debora explored methods for assessing physiological responses to emotional experience and for aiding the emotion recognition features of Intelligent Virtual Agents (IVAs). Her study developed an interface prototype for emotion elicitation and the simultaneous acquisition of the user's physiological and self-reported emotional data.
Debora C. F. de Souza is a Brazilian visual artist and journalist. She graduated in Social Communication at the University of Mato Grosso do Sul Foundation in Brazil. Her artwork and research are marked by experiments with different kinds of images and audiovisual media. Having recently graduated with an MA in Human-Computer Interaction (HCI), she is now a doctoral student in Information Society Technologies and a junior researcher at the School of Digital Technologies at Tallinn University. In the Enactive Virtuality Lab, she is researching the implications of anthropomorphic virtual agents for human affective states and the implications of such interactions in collaborative and social contexts, such as medical simulation training.
14:00–14:10 Pia Tikka: State of Darkness (SoD) – Enactive VR experience
14:10–14:45 Tanja Bastamow – Designing performative scenography (SoD)
14:45–15:00 Ats Kurvet – Designing humanlike characters (SoD)
15:00–15:20 Matias Harju – Traits of Virtual Reality Sound Design
15:20–15:40 Marie-Laure Cazin: Freud's Last Hypnosis, validating emotion-driven enactions in cinematic VR
15:40–16:00 Mati Mõttus: Experimenting with Emotive VR cinema
Break (10 min)
16:15–17:15 Robert McNamara: Empathic nuances with virtual avatars: a novel theory of compassion
17:15–17:35 Debora Souza: Self-rating of emotions in simulated immigration interview
17:35–18:00 Discussion
BFM PhD is inviting you to a scheduled Zoom meeting.
Topic: Enactive Virtuality Lab hybrid seminar
Time: Nov 24, 2022 10:00 Helsinki
Join Zoom Meeting
Meeting ID: 950 3252 4868
Pia Tikka's supervisee Elen Lotman, cinematographer, will be confirmed as the first doctor from the artistic research line of the doctoral program at the Baltic Film, Media and Arts School, TLU.
Image: Drs Pia Tikka, Elen Lotman, Madis Järvekülg, Indrek Ibrus, Teet Teinemaa, Baltic Film, Media and Arts School, Tallinn University
Time: 18 November 2022 at 3 p.m.
Venue: Tallinn University's Ceremonial Hall (Narva Road 25, 3rd floor)
Gathering of doctors and supervisors: T-307 at 2:15 p.m.
The event will be broadcast live on the university's internal televisions and on the Tallinn University YouTube channel.
We warmly welcome you to Match XR 2022 – the biggest one-night XR & emerging tech expo & networking event in the Nordics!
WHAT: One night XR & emerging tech dedicated expo & networking event
WHERE: Arabia Creative Campus (Metropolia UAS), Hämeentie 135 D, 00560 Helsinki, Finland
WHEN: 16 November 2022 from 4 PM to 8 PM
COST: Free of charge (get tickets)
After a couple of years of virtuality, Match XR, the biggest one-night extended reality (XR) & emerging tech event in the Nordics, is returning to real life as a physical expo & networking event on 16 November 2022 from 4 PM to 8 PM at Hämeentie 135 D, Helsinki, Finland!
What to expect?
· 50+ interesting XR & emerging tech organizations at the Expo Area (Read more: Expo Area)
· 800+ international visitors (from e.g. Slush)
· Pre-organized 1-on-1 Meetings (Read more: 1-on-1 Meetings)
· Casual networking over drinks & snacks
Match XR is an annual Slush pre-event focusing on extended reality (XR), Web3, gaming and everything in between and beyond. The mission of the event is to create an overview of the current Finnish XR & emerging tech scene, uplift these companies, and, first and foremost, bring fresh start-ups, industry veterans, investors, students and other tech enthusiasts together. Read more on Match XR 2022 official website!
Experience dozens of cutting-edge applications & solutions at the Expo Area, meet Finnish XR & emerging tech professionals over pre-organized 1-on-1 sessions, network casually with hundreds of fellow XR & new tech enthusiasts, and, of course: enjoy great drinks, snacks and music!
The Finnish-language talk "Interfacing Humans and Machines – Challenges for virtual simulation training" was part of the Enactive Virtuality Lab's efforts to apply its scientific findings and innovations in the XR field to societal wellbeing and to finding solutions for challenges in healthcare and medical training.
Simulation session, Nov 02, 2022 [in Finnish]
5. THE MANY POSSIBILITIES OF SIMULATION. Chair: Maarit Hult, MD, PhD
What is simulation? – A look into the fascinating world of simulations. Taru Kantola, MD, PhD, Deputy Chief Physician, Anesthesia and Surgery Unit, Meilahti Hospital, HUS (image above)
"Simulation saves lives." Pekka Aho, Chief Physician, Docent, Abdominal Center, Meilahti Hospital, HUS
At the Interface of Human and Machine – Challenges of Simulation in Virtual Reality. Research Professor Pia Tikka, Centre of Excellence in Media Innovation and Digital Culture (MEDIT), Tallinn University
TRAINING DAYS OF THE FINNISH SOCIETY OF INTENSIVE CARE (SUOMEN TEHOHOITOYHDISTYKSEN KOULUTUSPÄIVÄT)
Clarion Hotel Helsinki, Tyynenmerenkatu 2, BYSA, 3rd floor
1. QUALITY AND MONITORING OF TREATMENT OUTCOMES. Chair: Docent Mika Valtonen, TYKS
3. THE CHANGING LIMITS OF CARE – NEW PATIENT GROUPS IN INTENSIVE CARE
4. SECURITY THREATS AND PREPAREDNESS. Chairs: Docent Mika Valtonen and Docent Juha Koskenkari