Sound field synthesis systems vary in the number and arrangement of loudspeakers and in the methods used to generate virtual sound environments for studying human hearing perception. While previous work has evaluated the accuracy with which these systems physically reproduce room acoustic conditions, less is known about subjective perception of those conditions, such as how well such systems preserve source localization. This work quantifies the accuracy and precision of perceived localization in a multi-channel sound field synthesis system at Boys Town National Research Hospital, which uses 24 physical loudspeakers and vector-based amplitude panning (VBAP) to generate sound fields. Short bursts of broadband speech-shaped noise were presented from source locations either coinciding with a physical loudspeaker or panned between loudspeakers, under free-field and modeled reverberant-room conditions. Listeners used an HTC Vive remote laser-tracking system to point to the perceived source location. Results are presented on how physical versus panned sources, in dry and reverberant conditions, affect localization accuracy and precision. Similar validation tests are recommended for sound field synthesis systems at other labs being used to create virtual sound environments for subjective testing. [Work supported by NIH GM109023.]
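For readers unfamiliar with the panning method named above, the following is a minimal sketch of pairwise 2-D vector-based amplitude panning, not the Boys Town implementation: the source direction is expressed as a linear combination of the unit vectors of the two loudspeakers bracketing it, and the resulting gains are normalized to unit energy. The function name and loudspeaker angles are illustrative assumptions.

```python
import numpy as np

def vbap_2d(source_deg, spk1_deg, spk2_deg):
    """Illustrative 2-D VBAP sketch (not the system under test):
    solve L g = p, where the columns of L are the unit vectors of
    the two loudspeakers and p is the source direction, then
    normalize the gains so that g1^2 + g2^2 = 1."""
    def to_vec(deg):
        rad = np.radians(deg)
        return np.array([np.cos(rad), np.sin(rad)])
    L = np.column_stack([to_vec(spk1_deg), to_vec(spk2_deg)])
    g = np.linalg.solve(L, to_vec(source_deg))
    return g / np.linalg.norm(g)

# A source panned midway between loudspeakers at 0 and 15 degrees
# receives equal gains by symmetry.
g = vbap_2d(7.5, 0.0, 15.0)
```

A source coinciding with a physical loudspeaker reduces to a gain of 1 on that loudspeaker and 0 on its neighbor, which is the distinction between "physical" and "panned" sources tested in the abstract.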