Modeling the peripheral auditory processes involved in mimesis
When performing music, musicians interpret scores provided by the composer. In the case of audio scores produced by the composer in real-time interaction with the musician, the interpretation depends heavily on the limits of both the musician's and the composer's auditory systems. In this talk, we will first describe the general concept of a musician as an agent embedded in the environment, using auditory perception to sense its states. We will break down the perception-action cycle into discrete processing stages and discuss their limits, uncovering perceptual effects relevant to the musician and the composer when relying on an audio score. We will then introduce the peripheral auditory models that are good candidates for describing these perceptual effects. We will focus on modeling auditory parameters such as timing, pitch, and spectral descriptors, including complex phenomena arising from non-linear cochlear processing and spectral masking. Finally, we will discuss the relevance of these models in the context of the project “Sound as Score” and of mimesis as a theoretical concept for the artistic aesthetics involved in interpreting audio scores.