1ˢᵗ International Workshop on Affective Biosignal Processing

We proudly announce the first hybrid International Workshop on Affective Biosignal Processing, hosted by the UU Cybernetics group. We invite you to join us on Thursday, November 23rd, for an afternoon filled with presentations and fruitful discussions.

The event will feature:

  • a keynote by Professor Albert Salah, UU chair of Affective and Social Computing,
  • a presentation of the Horizon 2020 ERC-funded IM-TWIN project,
  • a set of showcases, and
  • a closing, informal networking session.

We are excited to tell you about our research, and we want to spark your interest with a short explanation of why Affective Biosignal Processing (ABP) is going to play a vital role in the future of our tech-heavy society. We would be honored to welcome you on our campus to share our interest in this field. Please register via this form if you are planning to join the workshop physically.

Schedule

  • 13:30 - 14:00: Walk-in with tea and coffee
  • 14:00 - 14:15: Opening
  • 14:15 - 15:00: Keynote by Professor Albert Salah
  • 15:00 - 16:00: Main presentation by Lukas Arts on the Horizon 2020 ERC-funded IM-TWIN project
  • 16:00 - 18:00: Informal poster session with drinks and bites

If you are unable to visit our campus physically, we are also offering a full livestream experience of our event. Please join us via YouTube Live to access both the keynote and the main presentation online. The livestream will start around 14:00 and end after the main presentation.

Afterwards, a digital poster session will be available online via www.cyberneticslab.info/workshop-posters, where you can take a digital stroll through our online poster fair and ask questions directly via an interactive Slido interface.

Man vs. Machine

Humans and computer machinery differ in vast and numerous ways, one of which is communication. Humans communicate in rich, complex ways—through gestures, speech, and touch. Our evolutionary journey has equipped us with sophisticated biological sensors and a powerful neural network to make sense of the world around us. Computers, on the other hand, live in a black-and-white world of 0s and 1s: a camera translates photon activity into binary code, and microphones turn differences in air pressure into discretized digits.
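To make that contrast concrete, here is a minimal sketch (our own illustration, not code from the workshop) of how a continuous waveform becomes a stream of integers; the sampling rate and bit depth are arbitrary choices for the example.

```python
import numpy as np

fs = 8_000                        # assumed sampling rate in Hz
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of sample times

# "Analog" input: a 440 Hz tone standing in for a pressure wave at a microphone.
analog = np.sin(2 * np.pi * 440 * t)

# Quantize to 8 bits, as an ADC would: continuous amplitude -> 256 integer levels.
levels = 2 ** 8
digital = np.round((analog + 1) / 2 * (levels - 1)).astype(np.uint8)

print(digital[:10])               # to the computer, sound is just these digits
```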

Computers are designed to be fast, sensitive, and precise. They can track signals that humans would never be able to sense (e.g., brain waves or the Earth's magnetic field). However, their ability to sense tiny variations also causes them to struggle with the complex and highly variable nature of human behavior. For humans, this comes naturally: we can easily pick up on a friend's mood from the tone of their voice, or read emotions through body language, simply by paying attention to small changes.

Although perfectly capable of sensing these variations, computers find these tasks incredibly challenging. Recent breakthroughs in machine learning and artificial neural networks have made some progress in understanding images and text, but the core issue remains: human signals aren't built for machine interpretation, and computer signals are largely indecipherable for humans.

Still, the two are not as different as it may seem; in our workshop, we will focus on the computer side of the interaction.

Not so different after all

Despite their obvious differences, humans and computer machinery have more in common than one might think. The "A" in AI is called 'artificial' for a reason—it tries to replicate human cognitive processes. Consequently, in the scope of human-computer interaction, both humans and computers perform some sort of pattern recognition, taking in sensory data and generating a response. Where computers rely on statistical algorithms to learn from input data, humans use conscious (e.g., beliefs, behavioral patterns) and unconscious (e.g., reflexes) methods to come to a conclusion.

So, what sets us apart? A large part boils down to the type of signals we use and how they are represented. For example, human signals are analogue and continuous, while computers work with digital, discretized streams of data. Also, the latent space of human signals is vastly different from that of a computer. Consequently, mapping one onto the other is often far from trivial, making the interaction between humans and computers difficult and often clumsy: it still happens largely via primitive interfaces such as keyboards, mice, and touchscreens.

Both humans and computers perform some sort of pattern recognition, ...

For a truly seamless interaction between humans and computer machinery—covering both factual and emotional ground—a more sophisticated mapping of these signals is needed. Unfortunately, this is hard, because it involves transducing and translating signals while preserving essential information and filtering out noise. Humans do this almost effortlessly (think of the cocktail party problem: following one voice in a noisy room), while computers struggle.
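One classical handle on this problem is blind source separation. As a toy illustration (our own sketch, not a method presented at the workshop), the snippet below mixes two synthetic "voices" into two virtual microphone recordings and then uses scikit-learn's FastICA to unmix them; the signals and mixing matrix are invented for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic sources standing in for two speakers at a cocktail party.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2_000)
s1 = np.sin(2 * np.pi * 5 * t)                      # speaker 1: smooth tone
s2 = np.sign(np.sin(2 * np.pi * 9 * t))             # speaker 2: square-ish wave
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((t.size, 2))

# Each virtual microphone hears a different mixture of the two speakers.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                          # assumed mixing matrix
X = S @ A.T                                         # the microphone recordings

# Independent Component Analysis tries to undo the mixing without knowing A.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)                    # estimated sources
```

Note that the recovered components match the originals only up to sign, scale, and ordering; this ambiguity hints at why automated signal unmixing is harder than it looks.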

Variance is key

With our hybrid workshop, we want to address these challenges from a human-to-computer perspective. Unlike domains like User Experience (UX) and eXplainable AI (XAI), which aim to make machines more comprehensible to humans, we focus on enhancing computers' ability to understand humans. Specifically, our goal is to harness computers' computational power to decode the often obscured but vital physiological signals—often termed biosignals—that our bodies continuously emit. These biosignals do not necessarily play a direct role in conventional human-human communication. However, because of their unconscious origin (i.e., people cannot influence them directly), they provide an important, unbiased view of the body's physical and mental state.

Yet, to unlock biosignals' hidden potential, we must overcome substantial challenges in signal processing and machine learning. Humans live in environments rich with noise that often obscures the subtle signals (e.g., heart rate, breathing, sweat gland activity) we aim to capture. Here, the difficulty lies in training computers to distinguish between 'good' variance—the tiny changes in bodily vitals—and 'bad' variance—noise caused by motion or equipment malfunction. This challenge often results in computers falling into the trap of overfitting, where they adapt to meaningless, noisy patterns rather than the signals that matter. We can reduce the risk of overfitting by differentiating 'good' from 'bad' variance before the machine gets the chance to learn; a sketch of what this can look like follows below. This process, called affective biosignal processing when applied to biosignals in an affective context, is an essential cornerstone for reliable affect-based human-computer interaction.

Robust biosignal processing serves as an essential cornerstone for reliable human-computer interaction.
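As a concrete sketch of what "differentiating variance before learning" can look like, the snippet below band-pass filters a simulated pulse signal to keep the cardiac band and suppress slow motion drift and sensor noise. The sampling rate, cutoff frequencies, and signal model are all assumptions made for this illustration, not the workshop's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100                                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)

pulse = np.sin(2 * np.pi * 1.2 * t)         # ~72 bpm cardiac component ('good' variance)
drift = 0.8 * np.sin(2 * np.pi * 0.05 * t)  # slow baseline/motion drift ('bad' variance)
noise = 0.3 * np.random.default_rng(1).standard_normal(t.size)
raw = pulse + drift + noise                 # what the sensor actually delivers

# Keep only a plausible heart-rate band (0.7-3.5 Hz); the cutoffs are a design choice.
b, a = butter(4, [0.7, 3.5], btype="bandpass", fs=fs)
clean = filtfilt(b, a, raw)                 # zero-phase filtering preserves beat timing
```

A model trained on `clean` rather than `raw` never gets the chance to overfit on the drift and noise components in the first place.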

The possibilities for reliable human-machine interactivity are vast. Imagine empathic digital assistants and personal coaches that enhance productivity and mental well-being, 24/7 at-home patient monitoring that could revolutionize healthcare, or interactive toys that teach children about their feelings.

Join us on November 23rd to explore this and much more.

Event details

Register here!

November 23rd 2023 | 13:30 - 18:30 | Minnaertgebouw Mezzanine, Leuvenlaan 4, 3584 CE Utrecht

Funded by the European Union's Horizon 2020 research and innovation program under grant agreement no. 952095, the IM-TWIN project (from Intrinsic Motivations to Transitional Wearable INtelligent companions for autism spectrum disorder).

Read paper