Mapping strategies are an essential step when designing real-time musical performance systems, as well as offline digital sound processing. These strategies define how input device parameters are related to sound synthesis or audio effect parameters. This implies the ability both to combine input parameters among themselves (parameter combination) and to produce valid control signals in terms of range, variation type, etc. (signal conditioning). Recent work has highlighted the interest of multi-layer mapping strategies in the context of digital musical instruments, which can also be applied in the context of digital audio effects. In this presentation, three strategies will be discussed in order to illustrate the role of mapping in various contexts. The first example concerns an additive synthesizer called Ssynth, a further development of Escher, a prototyping system aimed at studying the effect of mapping strategies in instrument design. The second example is a general mapping strategy for digital audio effects, allowing for both adaptive and gestural control. The final example concerns the sonification of gestures, used to provide cues about performers' ancillary movements. For each example, the mapping strategies will be explained in terms of their structure and functionality. [Work supported by FQRNT and MDEIE PSR-SIIRI (Québec, Canada), CNRS and PACA (France).]
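The two operations named above, signal conditioning and parameter combination, can be sketched as a minimal two-layer mapping. This is an illustrative sketch only, with hypothetical controller and parameter names (MIDI pressure and breath driving a filter cutoff); it is not the implementation used in Ssynth or the other systems described.

```python
# Two-layer mapping sketch (hypothetical parameters, not the Ssynth code).
# Layer 1 conditions raw controller values into a valid parameter range;
# layer 2 combines several conditioned inputs into one synthesis parameter.

def condition(value, in_lo, in_hi, out_lo, out_hi):
    """Signal conditioning: clamp a raw input, then rescale it to a valid range."""
    v = min(max(value, in_lo), in_hi)
    return out_lo + (v - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def combine(inputs, weights):
    """Parameter combination: normalized weighted mix of conditioned inputs."""
    return sum(w * x for w, x in zip(weights, inputs)) / sum(weights)

# Example: map MIDI pressure and breath (both 0-127) to a cutoff frequency in Hz.
pressure = condition(100, 0, 127, 200.0, 8000.0)
breath = condition(64, 0, 127, 200.0, 8000.0)
cutoff = combine([pressure, breath], [0.7, 0.3])
```

In a multi-layer design, each layer stays independently editable: the conditioning of one controller can be retuned without touching how parameters are combined, which is part of what makes such strategies attractive for instrument prototyping.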