Large-scale distributed arrays can achieve high spatial resolution, but they typically rely on a rigid array structure. To form distributed arrays from mobile and wearable devices, our models must account for motion. The motion of multiple microphones worn by humans can be difficult to track directly, but manifold techniques allow us to learn the movement from its acoustic response. We show that the mapping between the array geometry and its acoustic response is locally linear and can be exploited in a semi-supervised manner for a given acoustic environment. Prior work has shown a similar locally linear mapping between source locations and their spatial cues, and we adapt a semi-supervised model originally developed for source localization to dynamic array geometries.
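
The abstract does not specify the estimator, so the following is a minimal sketch of one standard semi-supervised technique consistent with the locally linear mapping it describes: Laplacian-regularized least squares, in which a linear map from acoustic-response features to array geometry is fit from a few labeled responses while a neighborhood graph over all responses (labeled and unlabeled) enforces local smoothness on the manifold. All names and parameters here (`laplacian_rls`, `k`, `lam`, `gamma`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_rls(X, Y_labeled, n_labeled, k=10, lam=1e-2, gamma=1e-1):
    """Semi-supervised Laplacian-regularized least squares (sketch).

    X         : (n, d) acoustic-response features (e.g., relative
                transfer-function features); the first n_labeled rows
                correspond to known array geometries.
    Y_labeled : (n_labeled, m) known geometry coordinates.
    Returns W : (d, m) locally linear map from features to geometry.
    """
    n, d = X.shape
    # Build a k-nearest-neighbor affinity graph over ALL responses,
    # labeled and unlabeled alike (the semi-supervised ingredient).
    D = cdist(X, X)
    sigma = np.median(D)
    A = np.exp(-D**2 / (2 * sigma**2))
    # Keep only each point's k nearest neighbors, then symmetrize.
    far = np.argsort(D, axis=1)[:, k + 1:]
    for i in range(n):
        A[i, far[i]] = 0.0
    A = np.maximum(A, A.T)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    # Closed-form solution: supervised fit on labeled rows plus a
    # manifold-smoothness penalty gamma * tr(W^T X^T L X W).
    Xl = X[:n_labeled]
    M = Xl.T @ Xl + lam * np.eye(d) + gamma * (X.T @ L @ X)
    return np.linalg.solve(M, Xl.T @ Y_labeled)
```

With a map `W` fitted this way, estimating the geometry for a newly observed acoustic response reduces to a matrix product, `geometry_est = features_new @ W`; in practice, the features and the neighborhood graph would be built from responses measured in the given acoustic environment.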
