Short Bio
I am a PhD researcher at Ghent University (IPEM) working at the intersection of music, embodied cognition, and human–computer interaction. My research focuses on real-time rhythmic synchrony between humans and embodied virtual agents in XR, combining motion capture, audio analysis, and computational modeling. I am also an active musician, songwriter, and audio engineer; my artistic practice directly informs my research on immersive, human-centered musical software.
Doctoral Project
My doctoral project, SynchMuse, is a bilateral project with McGill University; it explores real-time rhythmic synchrony between humans and embodied virtual agents, with a particular emphasis on musical interaction. Drawing on coordination dynamics, signal processing, and music cognition, we investigate how virtual agents can perceive, anticipate, and adapt to human timing, motion, and expressive intent. A central goal of my work is to move beyond purely reactive systems toward dynamically coupled, rhythmically coherent agents that participate meaningfully in joint musical action. This includes designing computational frameworks for temporal alignment and entrainment in immersive XR environments, integrating motion capture, gesture analysis, and dynamical systems modeling.
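To make the notion of dynamical coupling concrete, the sketch below simulates an agent oscillator entraining to a human pacing signal via Kuramoto-style phase coupling, a standard model in coordination dynamics. It is a minimal illustration, not code from SynchMuse; the frequencies and coupling constant are arbitrary assumptions chosen so that the agent phase-locks.

```python
import numpy as np

def simulate_entrainment(human_hz=2.0, agent_hz=1.8,
                         coupling=1.5, dt=0.01, duration=10.0):
    """Kuramoto-style sketch: an agent oscillator entrains to a human beat.

    All parameters are illustrative assumptions, not SynchMuse values.
    """
    steps = int(duration / dt)
    phi_h, phi_a = 0.0, np.pi          # start maximally out of phase
    rel_phase = np.empty(steps)
    for t in range(steps):
        # The agent nudges its instantaneous frequency toward the human's
        # phase; the human is treated as an unperturbed pacing signal.
        dphi_a = 2 * np.pi * agent_hz + coupling * np.sin(phi_h - phi_a)
        phi_h += 2 * np.pi * human_hz * dt
        phi_a += dphi_a * dt
        rel_phase[t] = np.angle(np.exp(1j * (phi_h - phi_a)))  # wrap to (-pi, pi]
    return rel_phase

rel = simulate_entrainment()
# With coupling stronger than the frequency detuning, the relative phase
# settles to a constant lag instead of drifting: phase locking.
print(f"relative phase over last second: "
      f"{rel[-100:].mean():.3f} +/- {rel[-100:].std():.3f} rad")
```

In the actual research setting, the human phase would of course be estimated online from motion capture or audio onsets rather than simulated.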
Methodologically, my research combines experimental data collection (motion capture, audio, behavioral data), computational modeling, and practice-based artistic exploration. I am particularly interested in how rhythmic joint tasks, such as collective tapping, drumming, or ensemble performance, can serve as testbeds for studying social timing, leadership–followership dynamics, and embodied interaction. Alongside empirical studies, I am conducting a scoping synthesis, developing a taxonomy that clarifies the distinctions between synchronization approaches, motion synthesis strategies, and computational frameworks in human–agent interaction research.
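As an example of the kind of analysis such tapping tasks afford, the following sketch computes a simple synchrony index between two tappers from their onset times: each of one tapper's onsets is expressed as a phase within the other's beat cycle, and the circular resultant length summarizes how tightly the pair is locked. The data here are simulated stand-ins, not experimental recordings.

```python
import numpy as np

def relative_phases(ref_taps, taps):
    """Phase of each onset in `taps` within the enclosing beat cycle
    of `ref_taps` (both are sorted onset times in seconds)."""
    phases = []
    for t in taps:
        i = np.searchsorted(ref_taps, t) - 1
        if 0 <= i < len(ref_taps) - 1:
            cycle = ref_taps[i + 1] - ref_taps[i]
            phases.append(2 * np.pi * (t - ref_taps[i]) / cycle)
    return np.asarray(phases)

def synchrony_index(phases):
    """Circular resultant length R in [0, 1]; 1 means perfect phase locking."""
    return np.abs(np.mean(np.exp(1j * phases)))

# Simulated stand-in data: tapper B follows tapper A with a small lag and jitter.
rng = np.random.default_rng(0)
taps_a = np.arange(0.0, 20.0, 0.5)                     # steady 120 BPM pacing
taps_b = taps_a + rng.normal(0.02, 0.01, taps_a.size)  # ~20 ms lag, 10 ms jitter
print(f"R = {synchrony_index(relative_phases(taps_a, taps_b)):.3f}")
```

A leader–follower asymmetry would show up in the circular mean of these phases (a systematic lag) rather than in R itself, which only captures the consistency of the coupling.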
Background
My academic background is tightly interwoven with my identity as a musician and music technologist. I am a guitarist, songwriter, and audio engineer with experience in composition, recording, and live performance, and this practice deeply informs my scientific questions. Rather than treating music as a mere application domain, I approach it as a core epistemic tool: a way to probe timing, expressivity, embodiment, and social interaction under conditions that demand high temporal precision and emotional engagement.
Technically, I work with wearable sensors, motion capture systems, real-time audio frameworks, and machine-learning models, with a strong awareness of latency, robustness, and artistic usability. I am particularly motivated by artist-centered design, aiming to develop intelligent systems that augment rather than constrain human creativity. This perspective reflects my broader interest in creative technologies, immersive performance, and augmented musical instruments, where AI acts as a responsive partner rather than a director.
Overall, my work seeks to bridge scientific rigor and artistic practice, contributing both theoretical insights and practical tools for the future of embodied, rhythmic human–machine interaction, especially within the cultural and creative sector.