Emotive Echoes
2024
TouchDesigner, Max MSP, Python
Emotive Echoes is an audio-visual artwork that reacts to the emotions displayed by its viewers.
Pipeline:
1. A Python script processes live video from a webcam, using a pretrained Facial Emotion Recognition (FER) model to detect the viewer's emotions. The detected emotion data is sent to a Max MSP patch via OSC (Open Sound Control).
2. In Max MSP, based on the received emotion data, two FluCoMa models adjust the pitch and panning of the 24 samples that compose the soundscape. The audio is routed to TouchDesigner through a virtual audio cable, while the average pitch and panning values are sent via OSC.
3. In TouchDesigner, these incoming values dynamically control the animation speed and color range of the visuals, so the imagery responds to changes in the audio.
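The glue logic of step 1 can be sketched in plain Python. This is an illustrative reduction, not the artwork's actual script: it assumes the FER model yields a dict of per-emotion probabilities for each frame (as the `fer` package does), and the function name `to_osc_args`, the OSC address `/emotion`, and port 7400 are all placeholders.

```python
# Hedged sketch of step 1's glue logic: reduce per-frame FER scores
# to a single (label, intensity) pair suitable for an OSC message.
# `to_osc_args` is an illustrative name, not from the artwork's code.

def to_osc_args(scores):
    """Reduce a {emotion: probability} dict to (label, intensity)."""
    if not scores:
        return ("neutral", 0.0)  # no face detected: fall back to neutral
    label = max(scores, key=scores.get)
    return (label, round(scores[label], 3))

# In the full pipeline these values would then be sent with an OSC
# client, e.g. python-osc's
# SimpleUDPClient("127.0.0.1", 7400).send_message("/emotion", list(args)).
args = to_osc_args({"happy": 0.72, "sad": 0.05, "angry": 0.23})
print(args)  # -> ('happy', 0.72)
```

Sending one compact message per frame keeps the OSC traffic light enough for Max MSP to track the viewer in real time.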
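The value mappings in steps 2 and 3 can be illustrated with a minimal sketch. In the actual piece the pitch and panning come from two FluCoMa models inside Max MSP; here a simple lookup table stands in for them, and every range and constant below is an illustrative assumption, not taken from the patch.

```python
# Hedged sketch of the steps 2-3 mappings. The emotion -> (pitch, pan)
# table stands in for the two FluCoMa models; the ranges are assumptions.

EMOTION_TARGETS = {
    # emotion: (pitch shift in semitones, stereo pan in [-1, 1])
    "happy": (4.0, 0.5),
    "sad": (-5.0, -0.6),
    "angry": (7.0, 0.8),
    "neutral": (0.0, 0.0),
}

def average_targets(emotions):
    """Average pitch/pan over the samples' current emotion targets,
    mimicking the averaged values Max MSP sends to TouchDesigner."""
    pitches, pans = zip(*(EMOTION_TARGETS[e] for e in emotions))
    return (sum(pitches) / len(pitches), sum(pans) / len(pans))

def visual_params(avg_pitch, avg_pan):
    """Map averaged audio values to the visual controls:
    higher pitch -> faster animation, pan position -> hue offset."""
    speed = 1.0 + avg_pitch / 12.0  # +1 octave doubles the speed
    hue = (avg_pan + 1.0) / 2.0     # pan [-1, 1] -> hue [0, 1]
    return (speed, hue)

avg_pitch, avg_pan = average_targets(["happy", "sad"])
print(visual_params(avg_pitch, avg_pan))
```

In TouchDesigner the same idea would live in an OSC In CHOP or DAT feeding the speed and color parameters, rather than in a standalone function like this.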