Spatial audio reproduction has gained renewed interest in recent years with the growing presence of virtual reality applications. Ambisonics is currently the most widely adopted format for spatial audio content distributed via internet streaming services, which has raised the need for audio compression suited to the format. Perceptual audio coders currently used for stereo content rely heavily on masking thresholds to reduce data rates, but these thresholds do not account for spatial release from masking. This study begins an effort to update these thresholds for spatially separated sources in ambisonics. The listening tests were performed with sounds encoded in ambisonics so that the tests incorporate any inherent limitations of the format. Initial listening tests were carried out for a subset of possible conditions: sound sources were separated along the horizontal plane, with a specific set of separation angles between the masker and maskee. Suggestions are given for extending the work to the full range of possible conditions.
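
To illustrate the kind of horizontal-plane source placement described above, the following minimal sketch encodes a masker and a maskee into a first-order ambisonic (B-format) scene at a chosen separation angle. This is not the paper's actual test code: the first-order B-format convention (W scaled by 1/sqrt(2)), the choice of signals, and the 30-degree separation are illustrative assumptions only.

```python
import numpy as np

def encode_foa_horizontal(signal: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Encode a mono signal into first-order ambisonics (W, X, Y, Z)
    for a source on the horizontal plane (elevation = 0)."""
    az = np.deg2rad(azimuth_deg)
    w = signal / np.sqrt(2.0)   # omnidirectional component (traditional B-format scaling)
    x = signal * np.cos(az)     # front-back component
    y = signal * np.sin(az)     # left-right component
    z = np.zeros_like(signal)   # zero elevation for horizontal-plane sources
    return np.stack([w, x, y, z])

# Hypothetical example: masker at 0 degrees, maskee separated by 30 degrees.
fs = 48000
t = np.arange(fs) / fs
masker = 0.1 * np.random.default_rng(0).standard_normal(fs)  # broadband noise masker
maskee = 0.05 * np.sin(2 * np.pi * 1000 * t)                 # 1 kHz tone maskee
scene = encode_foa_horizontal(masker, 0.0) + encode_foa_horizontal(maskee, 30.0)
```

The masker and maskee are encoded separately and summed channel-wise, so the separation angle can be varied by changing only the maskee's azimuth while keeping the rest of the scene fixed.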