Call for NEUROCINEMATIC papers — Baltic Screen Media Review, Special Issue 2023

Cinematic minds in the making – Investigating subjective and intersubjective experiences of storytelling

Guest editors Pia Tikka and Elen Lotman with Maarten Coëgnarts

Contact and submission to NeuroCineBFM@tlu.ee


One of the key foundations of everyday activities in society is intersubjectively shared communication between people. Stories, films, and other audiovisual narratives promote a shared understanding of possible situations in other people’s lives. Narratives expose complex social situations together with their ethical, political, and cultural contexts (Hjort and Nannicelli, 2022). They also serve humankind as a means to learn from protagonists: from their positive examples and successes, but also from their mistakes, false motivations, and blinded desires that may lead to dramatic situations, sometimes even disasters. The exhaustive range of contextual situatedness that constitutes the essence of narratives serves not only entertainment and education but also scientific studies of the human mind and behavior.

Since the beginning of this millennium, narratives mediated by films have allowed researchers to simulate complex socio-emotional events in behavioral and neuroimaging laboratories, accumulating new insights into human behavior, emotion, and memory, to name just a few topics. Proponents of so-called naturalistic neuroscience and, in particular, its subfield neurocinematics (Hasson et al. 2008) have shown how experiencing naturally unfolding events evokes synchronized activations in large-scale brain networks across different test participants (see Jääskeläinen et al. 2021 for a review). The tightly framed contextual settings of cinematic narratives have opened a fresh window for researchers interested in understanding the linkages between individual subjective experiences and intersubjectively shared experiences.
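
As an illustration of how such across-viewer synchronization is typically quantified, the sketch below computes a leave-one-out inter-subject correlation (ISC) over simulated regional time courses. This is a minimal, generic sketch of the ISC logic, not the pipeline of any particular study; the array shapes and variable names are illustrative assumptions.

```python
import numpy as np

def inter_subject_correlation(data: np.ndarray) -> np.ndarray:
    """Leave-one-out ISC: correlate each subject's time course for one brain
    region with the mean time course of all other subjects.
    data: array of shape (n_subjects, n_timepoints)."""
    n_subjects = data.shape[0]
    isc = np.empty(n_subjects)
    for s in range(n_subjects):
        others_mean = data[np.arange(n_subjects) != s].mean(axis=0)
        isc[s] = np.corrcoef(data[s], others_mean)[0, 1]
    return isc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shared = rng.standard_normal(200)            # stimulus-driven component shared by all viewers
    noise = rng.standard_normal((10, 200))       # subject-specific noise
    data = 0.6 * shared + noise                  # 10 simulated viewers, 200 timepoints
    print(inter_subject_correlation(data).round(2))
```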

So far, neurocinematic studies have focused almost exclusively on mapping correlations between narrative events and the observed physiological behaviors of uninitiated test participants. The knowledge accumulated so far says little about the affective or cognitive functions of experts in audiovisual storytelling, with few exceptions (e.g. de Borst et al. 2016; Andreu-Sánchez et al. 2021). With this Special Issue we want to extend the scope of studies to the embodied cognitive processes of the storytellers themselves. As an example, consider the term “experiential heuristics” proposed by cinematographer Elen Lotman (2021) to describe the practice-based knowledge accumulation of cinematographers, or the embodied dynamics of the filmmaker simulating the experiences of fictional protagonists and/or of imagined viewers, described as “enactive authorship” (Tikka 2022). Another question that merits further understanding is how the embodied decision-making processes of filmmakers lead to the creation of dynamic embodied structures in the cinematic form. Appealing to the shared embodiment of both the filmmaker and the film viewer, these pre-conceptual patterns of bodily experience, or “image schemas”, have been argued to play a significant expressive role in the representation and communication of meaning in cinema (Coëgnarts 2019).

We call for papers that focus on the creative experiential processes of filmmakers, storytelling experts, and their audiences. We encourage proposed papers to discuss how the temporal unfolding of the contextual situatedness depicted in narratives manifests in reported subjective experiences, observed body-brain behaviors, and time-locked content descriptions.

TOPICS

We invite boldly multidisciplinary papers contributing theoretical, conceptual, and practical approaches to the experiential nature of filmmaking and viewing. They may draw, for instance, from the social and cognitive sciences, psychophysiology, the neurosciences, ecological psychology, affective computing, cognitive semantics, aesthetics, or empirical phenomenology.

Accordingly, we encourage papers that discuss the relations between data from these approaches, all concerning the film experience of film professionals and/or film viewers. A submission may also focus on a specific area of expertise, for example that of the writer, editor, cinematographer, scenographer, or sound designer. The papers may describe, for example, 1) subjective experiences; 2) intersubjectively shared experiences; 3) context or content annotation; 4) semantic description; 5) first-person phenomenal description; and/or 6) physiological observation (e.g. neuroimaging, eye-tracking, psychophysiological measures).

Contributions addressing topics such as (but not limited to) the following are particularly welcome:

  • Social cognition and embodied intersubjectivity
  • Interdisciplinary challenges for methods; annotations linking cinematic features to physiological data
  • First- and second-person methodologies; empirical phenomenological observations
  • Embodied enactive mind, embodied simulation, and theory of mind
  • Embodied metaphors in film; embodied film style; bodily basis of shared film language; semantics
  • Cinematic empathy; emotions; simulation of character experiences
  • Audience engagement; immersion; cognitive identification
  • Temporality of experiences; context-dependent memory coding; story reconstruction; narrative comprehension
  • Experiential heuristics; multisensory and tacit knowledge
  • Storytelling strategies; aesthetics; film and media literacy; genre conventions

GUIDELINES

We will accept long research articles (4000–8000 words excluding references) and short articles and commentaries (2000–2500 words excluding references). Submitted papers need to follow the journal’s submission guidelines.

All submissions should be sent as email attachments to the Guest editors at NeuroCineBFM@tlu.ee.

BSMR embraces visual storytelling; we thus invite authors to use photos and other illustrations as part of their contributions. See the journal info at https://sciendo.com/journal/BSMR


Key dates


01.04.2023 – Submit abstracts of 200–300 words
10.04.2023 – Acceptance of abstracts
30.06.2023 – Submit full manuscripts for blind peer review
20.09.2023 – Resubmit revisions
31.12.2023 – Special Issue published online


This issue of BSMR will be published both online and in print in December 2023. 


Guest editors 

Maarten Coëgnarts https://www.filmeu.eu/alliance/people/maarten-coegnarts

Elen Lotman  https://www.filmeu.eu/alliance/people/elen-lotman

Pia Tikka https://www.etis.ee/CV/Pia_Tikka/eng

 

References

Andreu-Sánchez, C., Martín-Pascual, M.A., Gruart, A. and Delgado-García, J.M. (2021). The effect of media professionalization on cognitive neurodynamics during audiovisual cuts. Frontiers in Systems Neuroscience, 15: 598383. doi: https://doi.org/10.3389/fnsys.2021.598383

de Borst, A. W., Valente, G., Jääskeläinen, I. P., & Tikka, P. (2016). Brain-based decoding of mentally imagined film clips and sounds reveals experience-based information patterns in film professionals. NeuroImage, 129, 428–438. https://doi.org/10.1016/j.neuroimage.2016.01.043

Hasson, U., Landesman, O., Knappmeyer, B. Vallines, I., Rubin, N., & Heeger, D. (2008). Neurocinematics: The Neuroscience of Film. Projections, 2, 1–26. https://doi.org/10.3167/proj.2008.020102.

Hjort, M. and Nannicelli, T. (Eds.) (2022). The Wiley Blackwell Companion to Motion Pictures and Public Value. Wiley-Blackwell.

Jääskeläinen, I. P., Sams, M., Glerean, E., & Ahveninen, J. (2021). Movies and narratives as naturalistic stimuli in neuroimaging. NeuroImage, 224, 117445. https://doi.org/10.1016/j.neuroimage.2020.117445

Lotman, E. (2021). Experiential heuristics of fiction film cinematography. PhD dissertation, Tallinn University.

Tikka, P. (2022). Enactive Authorship: Second-Order Simulation of the Viewer Experience – A Neurocinematic Approach. Projections: The Journal for Movies and Mind, 16(1), 47–66. https://doi.org/10.3167/proj.2022.160104

 

Relationality in VR – Luzern workshop at HSLU 22 March 2023

Programme – Relationality in VR

Organizer: Hochschule Luzern, Design & Kunst; Dr. Christina Zimmermann, Research Project Lead

hslu.ch/design-kunst

***

 

Pia Tikka

Enactive Co-presence in Narrative Virtual Reality
 

The talk describes our most recent project The State of Darkness (SOD 2.0; work-in-progress).

SOD 2.0 is a virtual reality installation in which human and non-human lives coexist. The former is lived by the participant, while the latter is lived by the non-human Other. The narrative VR system is enactive, that is, all elements of the narrative space are reciprocally dependent on one another.

The participant’s experiential moves are interpreted from their biosensor measurements in real time and then fed back to drive the different elements of the enactive narrative system. In turn, the facial and bodily behaviour of the artificial Other feeds back into the participant’s experiential states. The scenography and the soundscape adapt to the behaviours of the two beings, and these adaptations in turn shape the atmosphere of the intimate co-presence between the human and the Other.
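
As a rough illustration of this kind of closed loop, the sketch below reads a biosensor sample, reduces it to a coarse affect estimate, and uses that estimate to drive the Other’s behaviour and the scenography. This is a minimal, hypothetical sketch; the function names (read_biosensors, estimate_affect, update_other, update_scenography) and the affect mapping are illustrative assumptions, not the actual SOD 2.0 implementation.

```python
import time

def read_biosensors():
    """Return the latest raw biosensor sample (here: stub values)."""
    return {"heart_rate": 72.0, "skin_conductance": 0.4}

def estimate_affect(sample):
    """Reduce raw signals to a coarse affective state (assumed arousal/valence)."""
    arousal = min(1.0, sample["skin_conductance"] + (sample["heart_rate"] - 60.0) / 100.0)
    valence = 0.5  # placeholder; a real system would model valence from the signals
    return {"arousal": arousal, "valence": valence}

def update_other(affect):
    """Drive the artificial Other's facial and bodily behaviour from the affect estimate."""
    print(f"Other reacts to arousal={affect['arousal']:.2f}")

def update_scenography(affect):
    """Adapt scenography and soundscape parameters to the shared affective state."""
    print(f"Scene intensity set to {affect['arousal']:.2f}")

if __name__ == "__main__":
    for _ in range(3):                 # in an installation this loop would run continuously
        affect = estimate_affect(read_biosensors())
        update_other(affect)           # the Other's behaviour feeds back to the participant
        update_scenography(affect)     # the environment adapts to both beings
        time.sleep(1.0)
```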

The concept of a non-human narrative allows State of Darkness 2.0 to reflect the human-centric perspective against a non-human one. The intriguing question is whether narratives and the narrative faculty should be considered exclusively characteristic of humans, or whether the idea of narrative can be extended to other domains of life, or even to the domain of artificially humanlike beings.

SOD 2.0 is an artistic dissemination of the project Enactive Co-presence in Narrative Virtual Reality: A Triadic Interaction Model.
The MOBTT90 research project combines arts and sciences to explore how the viewer’s experience of co-presence can be controlled by parametrically modifying the behavior of a screen character or its context. It is led by research professor Pia Tikka, Baltic Film, Media and Arts School (BFM) and Centre of Excellence in Media Innovation and Digital Culture (MEDIT), Tallinn University. The project is funded by the EU Mobilitas Pluss Top Researcher Grant and the Estonian Research Council.
 

MIDI2022 conference: Debora de Souza presentation

Description: https://midi2022.opi.org.pl

Day 3 (14 December, 2022) 
Remote event

10:00 –10:05 Conference opening
10:05 – 11:30 Digital Interaction – Session 1
Chair Dr. Wiesław Kopeć / Kinga Skorupska
Polish-Japanese Academy of Information Technology, Warsaw, Poland
Seeking Emotion Labels for Bodily Reactions: An Experimental Study in Simulated Interviews
Debora Souza, Pia Tikka and Ighoyota Ajenaghughrure

Enactive Virtuality Lab hybrid seminar 24 Nov, 2pm – 6 pm – Open for online attendance!

The Enactive Virtuality Lab presents the most recent work by the team members.

The event takes place at Tallinn University, Nova building, room N-406 (Kinosaal)

Date: 24.11.2022, 14:00–18:00 (EET / CET+1)

Join Zoom Meeting https://zoom.us/j/95032524868

Speakers and Schedule, see below.

 

SPEAKERS

Tanja Bastamow: Virtual scenography in transformation

The performative possibilities of virtual environments

 

Scenography has historically been understood as the illustrative support for a staged drama. However, in recent decades the field has expanded to encompass the overall design of performance events, actions, and experiences. Positioning this expanded understanding of scenography in dialogue with virtual reality (VR) opens up interesting possibilities for designing virtual scenography that can become an enabler of emerging narratives, unforeseen events, and unexpected encounters. In my research I investigate the question of shifting agencies and the virtual scenographer’s role as a co-creator rather than a traditional author-designer. Who – and what – become the creators, performers, and spectators in virtual experiences and encounters in which the real-time responsiveness, transformability, (im)materiality, and immersivity of virtual scenography play a key role? In my talk I will introduce thoughts around these questions by describing the process of building the virtual scenography for the “State of Darkness II” VR experience.

Tanja Bastamow is a virtual scenographer working on experimental projects combining scenography with virtual and mixed reality environments. Bastamow’s key areas of interest are immersive virtual environments, the creative potential of technology as a tool for designing emergent spatial narratives, and creating scenographic encounters in which human and non-human elements can mix in new and unexpected ways. Currently, she is a doctoral candidate at Aalto University’s Department of Film, Television and Scenography, doing research on the performative possibilities of real-time virtual environments. In addition, she works as a virtual designer in the LiDiA – Live + Digital Audiences artistic research project (2021–23). She is also a founding member of the Virtual Cinema Lab research group and has previously held the position of lecturer in digital design methods at Aalto University.

Ats Kurvet: Designing virtual characters

Ats Kurvet is a 3D real-time graphics and virtual reality application developer and consultant with over 8 years of industry experience. He specialises in lighting, character development and animation, game and user experience design, 3D modeling and environment development, shader and material development, and tech art. He has worked for Crytek GmbH as a lighting artist and runs ExteriorBox OÜ. His focus in working with the Enactive Virtuality Lab is on researching the visual aspects of digital human development and implementation.

Matias Harju: Traits of Virtual Reality Sound Design

Whilst sound design for VR borrows a lot from other media such as video games and films, VR sets some unique conditions for sonic thinking and technical approaches. Spatial sound is inarguably one of the most significant areas in this respect. It entails the whole process from narrative decisions to source material recordings to 3D audio rendering on the headphones. Other sound design considerations in VR may relate to the level of sonic realism and the role of sound in general. My talk will briefly outline some of the characteristics of sound in VR from both artistic and technical perspectives.

Matias Harju is a sound designer and musician specialising in immersive and interactive sonic storytelling. He is currently developing Audio Augmented Reality (AAR) as a narrative and interactive medium at WHS Theatre Union in Helsinki. He is also active in making adaptive music and immersive sound design for other projects including VR, AR, video games, installations, and live performances. Matias is a Master of Music from the Sibelius Academy with a background in multiple musical genres, music education, and audio technology. He also holds a master’s degree from the Sound in New Media programme at Aalto University.

Marie-Laure Cazin: Freud’s Last Hypnosis – validating emotion-driven enactions in cinematic VR

Dr. Marie-Laure Cazin will present the study she has been conducting in the context of her European Mobilitas Pluss postdoctoral grant with the Enactive Virtuality Lab, BFM, TLU (2021–2022). She is currently developing an artistic prototype of Emotive VR for interactive 360° film. The project aims at emotional interactivity between the viewer and the film’s soundtrack. While viewers experience her VR film, Freud’s Last Hypnosis, their emotional responses are collected with physio-sensors and eye-tracking, and through personal interviews after the viewing. The project is a collaboration with Dr. Mati Mõttus from the Interaction Lab of the School of Digital Technologies (TLU) and with Matias Harju, composer and sound designer (Finland).

Marie-Laure Cazin is a French filmmaker and researcher. She teaches at the École Supérieure d’Art et de Design TALM, Le Mans (France). She was a postdoctoral researcher at MEDIT, BFM, Tallinn University, with a European Mobilitas Pluss postdoctoral grant in 2022–23. She received her PhD from Aix-Marseille University in 2020 for her thesis entitled “Cinéma et Neurosciences, du Cinéma Émotif à Emotive VR”[1]. The thesis is about the cinema experience, contextualized by neuroscience research, describing the neuronal processes of emotions and thinking further about the analogy between cinema and the thinking process.

She comes from an artistic background, having studied at Le Fresnoy, Studio National des Arts Contemporains, at the Jan van Eyck Academy postgraduate programme in residency (Maastricht, Netherlands), and at the École Nationale Supérieure des Beaux-Arts in Paris. She has conducted many art-science projects together with scientific partners, shown in numerous exhibitions and festivals, creating prototypes that renew the filmic experience. In her Emotive Cinema (2014) and Emotive VR (2020) projects she applies physiological feedback, such as EEG, in order to obtain an emotional analysis from the viewers’ brain activity, which then changes the film’s scenario.

[1] Cazin, M.-L. (2020). Cinéma et neurosciences: du cinéma émotif à Emotive VR. Available at https://www.theses.fr/2020AIXM0009

Mati Mõttus: Psycho-physiology and eye-tracking of cinematic VR

There are various ways to track our emotions through physiometric signals. The most common of these signals are the electrical conductance of the skin, facial expressions, cardiography, and encephalography. In my talk I will discuss the reliability and intrusiveness of physiological measurements in interactive art. While the reliability of the measurements is not too critical in the domain of art, intrusive sensors on the bodies of those enjoying the art can easily spoil the artistic experience.

Mati Mõttus is a lecturer and researcher at the School of Digital Technologies, Tallinn University. His doctoral degree in computer science, in the field of human-computer interaction, focused on “Aesthetics in Interaction Design” (2018). His current research interest is hedonic experience in human-computer interaction, with a two-fold focus: on the one hand, the use of psycho-physiological signals to detect users’ feelings while they interact with technology and to explain emotional behavior; on the other, the design of interactive systems based on psycho-physiological loops.

Robert McNamara: Empathic nuances with virtual avatars – a novel theory of compassion

Technological mediation is the idea that technology affects or changes us as we use it, either consciously or unconsciously. Digital avatars, whether encountered in VR or otherwise, are one such technology, often working in concert with various machine learning applications, and they pose potentially unknown and understudied interactional effects on humans, whether through their use in art, video games, or governmental applications (such as iBorderCtrl, the pilot program at EU borders used to detect traveler deception). One area where avatars may hold particular interest for academic study is their perceived ability not only to evoke the uncanny valley effect but also to promote a “distancing effect.” It is hypothesized that interaction with avatars, as an artistic process, may also result in increased expressions of cognitive empathy through the conscious or unconscious process of abductive reasoning.

Robert McNamara has a background in American criminal law as well as degrees related to audiovisual ethnography and Eastern classics. He is from New York State but has lived in Tallinn for the last five years. Currently, he focuses on research related to the ethical, socio-political, anthropological, and legal aspects of employing human-like VR avatars and related XR technology. He is co-authoring journal papers in collaboration with other experts on the team on the social and legal issues surrounding the use of artificial intelligence, machine learning, and virtual reality avatars in governmental immigration regimes. From May to September 2020 Robert G. McNamara worked as a visiting research fellow with the MOBTT90 team; from October 2020 onwards he has continued working with the project as a doctoral student. In the Enactive Virtuality Lab Robert has contributed to co-authored writing; the most recent paper accepted for publication is entitled “Well-founded Fear of Algorithms or Algorithms of Well-founded Fear? Hybrid Intelligence in Automated Asylum Seeker Interviews”, Journal of Refugee Studies, Oxford UP.

Debora C. F. de Souza: Self-rating of emotions in simulated immigration interview

Humans benefit from emotional interchange as a source of information to adapt and react to external stimuli and navigate their reality. Computers, on the other hand, rely on classification methods: they use models to calculate and differentiate affective information from other human inputs, drawing on the emotional expressions that emerge through bodily responses, language, and behavior changes. Nevertheless, both theoretically and methodologically, emotion is a challenging topic to address in Human-Computer Interaction. During her master’s studies, Debora explored methods for assessing physiological responses to emotional experience and for aiding the emotion recognition features of Intelligent Virtual Agents (IVAs). Her study developed an interface prototype for emotion elicitation and the simultaneous acquisition of the user’s physiological and self-reported emotional data.
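
The kind of simultaneous acquisition described above ultimately requires pairing time-stamped self-reports with a continuously sampled physiological stream. The sketch below illustrates one simple way to do that by averaging the signal in a window around each report; the data structures, window width, and function names are illustrative assumptions, not the prototype’s actual design.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SelfReport:
    t: float      # seconds from session start
    label: str    # e.g. "calm", "tense"

def window_signal(signal: List[Tuple[float, float]], t: float, width: float = 5.0) -> List[float]:
    """Return signal values sampled within +/- width/2 seconds of time t."""
    half = width / 2.0
    return [value for (ts, value) in signal if t - half <= ts <= t + half]

def align(reports: List[SelfReport], signal: List[Tuple[float, float]]) -> List[Tuple[str, Optional[float]]]:
    """Pair each self-report label with the mean signal value in its surrounding window."""
    pairs = []
    for report in reports:
        window = window_signal(signal, report.t)
        mean = sum(window) / len(window) if window else None
        pairs.append((report.label, mean))
    return pairs

if __name__ == "__main__":
    # Toy data: a skin-conductance-like signal sampled at 1 Hz and two self-reports.
    signal = [(float(i), 0.30 + 0.01 * i) for i in range(60)]
    reports = [SelfReport(t=10.0, label="calm"), SelfReport(t=45.0, label="tense")]
    print(align(reports, signal))
```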

Debora C. F. de Souza is a Brazilian visual artist and journalist. She graduated in Social Communication from the University of Mato Grosso do Sul Foundation in Brazil. Her artwork and research are marked by experiments with different kinds of images and audiovisual media. Having recently graduated with an MA in Human-Computer Interaction (HCI), she is now a doctoral student in Information Society Technologies and a Junior Researcher at the School of Digital Technologies at Tallinn University. In the Enactive Virtuality Lab, she is researching the implications of anthropomorphic virtual agents for human affective states and the implications of such interactions in collaborative and social contexts, such as medical simulation training.



SCHEDULE 14:00–18:00

14:00-14:10 Pia Tikka: State of Darkness (SoD) – Enactive VR experience

14:10-14:45 Tanja Bastamow: Designing performative scenography (SoD)

14:45-15:00 Ats Kurvet: Designing humanlike characters (SoD)

15:00-15:20 Matias Harju: Traits of Virtual Reality Sound Design

15:20-15:40 Marie-Laure Cazin: Freud’s Last Hypnosis – validating emotion-driven enactions in cinematic VR

15:40-16:00 Mati Mõttus: Experimenting with Emotive VR cinema

Break (10 min)

16:15-17:15 Robert McNamara: Empathic nuances with virtual avatars – a novel theory of compassion

17:15-17:35 Debora Souza: Self-rating of emotions in simulated immigration interview

17:35-18:00 Discussion

****

 

BFM PhD is inviting you to a scheduled Zoom meeting.

Topic: Enactive Virtuality Lab hybrid seminar
Time: Nov 24, 2022 10:00 Helsinki

Join Zoom Meeting
https://zoom.us/j/95032524868

Meeting ID: 950 3252 4868

 

Round table at the Moscow Neurotechnology and Freedom conference

Pia Tikka, panelist at the Moscow Neurotechnology and Freedom conference, March 18, 2021, 19:00–20:30 (GMT+3)

Image: Panelists

Program

Neuroscience & Art

Will be held on March 18, 2021 16:00-22:00 (Moscow Standard Time: GMT+3)

International online conference «Neurotechnology and Freedom».

Organized by the Centre for Cognition & Decision Making, HSE University

Scientists, philosophers, and artists will discuss ethical, social, and legal issues related to the development of neurotechnologies.

Preliminary online conference program:

16:00 — 16:15 Vasily Klucharev, Director of Institute for Cognitive Neuroscience, HSE University, PhD in Biology

16:15 — 16:45 Video presentation (opinions of experts on neuroscience and freedom)

16:45 — 17:00 Break

17:00 — 19:00 Talks:

17:00 — 17:25 Prof. Danil Raseev, Saint Petersburg University, Russia, expert of the Russian Science Foundation

17:25 — 17:50 Dr. Suzanne Dikker, NYU Max Planck Center for Language, Music, and Emotion, USA

17:50 — 18:15 Prof. Dr. Gabriel Curio, Charité – Universitätsmedizin Berlin, Germany

18:15 — 18:40 Prof. Risto Ilmoniemi, Aalto University, Finland

18:40 — 19:00 Dr. Ksenia Fedorova Leiden University, the Netherlands

19:00 — 20:30 Round table:

Prof. Dr. Gabriel Curio, Charité – Universitätsmedizin Berlin, Germany; Dr. Suzanne Dikker, NYU Max Planck Center for Language, Music, and Emotion, USA; Prof. Risto Ilmoniemi, Aalto University, Finland; Prof. Mikhail Lebedev, HSE University and Skoltech Center for Neuroscience and Neurorehabilitation, Russia; Dr. Ippolit Markelov, ITMO University, «18 Apples», Russia; Dr. Maria Nazarova, HSE University and Centre for Brain Research and Neurotechnologies, FMBA, Russia; Dr. Vadim Nikulin, Max Planck Institute for Human Cognitive and Brain Sciences, Germany, and HSE University, Russia; Prof. Danil Raseev, Saint Petersburg University, Russia; Prof. Dr. Pia Tikka, Enactive Virtuality Lab, Baltic Film, Media and Arts School (BFM) and Centre of Excellence in Media Innovation and Digital Culture (MEDIT), Tallinn University

20:30-21:00 Report: Prof. Patrick Haggard, University College London, UK

Moderators: Prof. Vasily Klucharev, Institute for Cognitive Neuroscience, HSE University, Russia; media art theorist Dr. Ksenia Fedorova, Leiden University, the Netherlands

21:00 — 21:15 Break

21:15 — 22:00 Presentation of art projects: Ippolit Markelov, artist, researcher, PhD in Biology, ITMO, «18 Apples»

Enactive Virtuality Lab presents Dec 1, 2020 

Welcome. Please join us!

The Enactive Virtuality Lab presents its ongoing work in an online seminar.

Date: Tuesday Dec 1, 2020 
Time: 09:30 -12:00 Helsinki
Zoom Meeting

VIDEO recording (not edited) Passcode: &%10sVN$

Program

9:30

Pia Tikka (Enactive Virtuality Team leader, MOBTT90)
Introduction

9:45

Mehmet Burak Yılmaz (doctoral student @BFM)
Emotional impacts of camera movements
Exploring the emotional effects of different camera movement techniques (dolly, Steadicam, handheld) and the direction of the movement. Conducting psychophysiological experiments where the viewers watch cinematographic scenes.

10:00

Robert McNamara (doctor in law; doctoral student @BFM)
The creative potential of cinematic game narratives for evoking empathy for asylum seekers.
Exploring machine learning in asylum seeker narratives in “Refugee Status Determinations”. Measuring types of empathy response to depictions of child separation by immigration enforcement officers in game-engine-based cinematic narratives.

10:15

Debora Conceição Firmino De Souza (MA Thesis @DTI)
Humanizing interactions at the Border Control 
Drawing upon topics of HCI and game development, the study investigates the emotional states elicited by interactions with anthropomorphic Virtual Agents at the Border Control.

10:30

Ats Kurvet (computer graphics specialist)
Creating digital humans on a budget
The challenges and options when creating the visual component for avatars/digital humans.

10:45  

Valentin Siltsenko (research assistant)
Real-time text to speech synthesis 
Synthesizing natural-sounding human speech with the ability to set the emotion of the speaker.

11:00  

Abdallah Sham (doctoral student @DTI)
Machine learning in dyadic human – artificial agent interaction
Exploring the implementation of machine learning for training virtual human behaviour.

 11:15

Ermo Säks (doctoral student @BFM)
Storytelling in Cinematic Virtual Reality: The role of cinematographic techniques in evoking immersion in virtual environments
Using practical research methods, this doctoral project seeks cinematic techniques that can increase perceived immersion in Cinematic Virtual Reality (CVR), where the user’s main agency is to look around within a narrative, story-based CVR drama experience that features a beginning, middle, and end.

11:30-12:00

Discussion
 

Open to the public!

A talk at the Forum on Arts & Social Robotics in Hong Kong

Invited talk by Pia Tikka at the Hong Kong Forum Nov 2, 2019.

Hong Kong Baptist University will be partnering with the Consul General of France and the Alliance Francaise to mount an exhibition and a forum, October 31 – November 2, 2019.

The exhibition will focus on the work of the French artist Yves Gellie, specifically his photographs and films related to social robotics and artificial empathy. (t.b.c.)

Hong Kong offers possibilities to play with Sophia from Hanson Robotics.

 

Affective Computing ACII 2019 conference Cambridge

 

The 8th International Conference on Affective Computing & Intelligent Interaction (ACII 2019), 3–6 September 2019, Cambridge, United Kingdom.

The Enactive Virtuality team has two abstracts accepted for the multidisciplinary Special Tracks:

(1) Neural and Psychological Models of Affect and (2) Technological and Biological Bodies in Dialogue: Multidisciplinary Perspectives on Multisensory Embodied Emotion and Cognition.

Image: Jelena Rosic in action during a TLU-internal pre-presentation of the ACII conference talk “Phenomenology of the Artificial” by Kosunen, Rosic, Kaipainen, and Tikka.

TALK by Professor Iiro P. Jääskeläinen, Brain and Mind Lab, Aalto University

 

Invited lecture and collaboration meeting with Professor Iiro P. Jääskeläinen and the Enactive Virtuality Lab, May 21–22, 2019.

Image: Pia Tikka, Iiro P Jääskeläinen, Jelena Rosic, and Ilkka Kosunen at MEDIT meeting space.

On May 21 at 3–4 pm, Dr. Iiro P. Jääskeläinen, Associate Professor at the Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Finland, gave an open neurocinematic talk on “Using movies as real-life like stimuli during neuroimaging to study the neural basis of social cognition” (room M-134).

Abstract:
Movies and narratives are increasingly used as stimuli in neuroimaging studies. This in many ways helps bridge the gaps between neuroscience, psychology, and even the social sciences by allowing stimulation of, and thus also measurement of the neural activity underlying, phenomena that have been less amenable to study with more traditional neuroimaging stimulus-task designs. Observations of signature patterns underlying discrete emotions across largely shared brain structures have suggested that both basic and dimensional emotion theories are partly correct. Robust differences in brain activity when viewing genetic vs. adopted sisters going through a moral dilemma in a movie clip have shown that knowledge of shared genes shapes the perception of social interactions, thus demonstrating how neuroimaging can offer important measures for the social sciences that complement traditional behavioral ones. Further, more idiosyncratic brain activity has been observed in high-functioning autistic subjects than in neurotypical subjects, specifically in putative social brain regions, when watching a drama movie. Development of data analysis algorithms holds the keys to rapid advances in this relatively new area of research. Modeling the stimulus and recording brain activity is significantly complemented by behavioral measures of how the subjects experienced the movie stimulus.

Image: Jelena Rosic and Ilkka Kosunen engaged in discussing correlations between ‘pheno’-dynamics and ‘neuro’-dynamics for our micro-phenomenological Memento study, a follow-up to Kauttonen et al. 2018.