Ambisonics is a spatial audio capture and reproduction system that has been in use since the 1970s, primarily for music reproduction, but has recently gained significant interest in broadcast, gaming and interactive immersive video frameworks. It allows soundfield recordings and mixes to be created independently of any particular reproduction format. The system is based on representing the soundfield with spherical harmonic basis functions: the higher the order of spherical harmonics utilised, the more physically accurate the soundfield capture and reproduction will be over a defined area.

Despite major advances in spherical microphone design and higher order Ambisonic decoding methods, the perceptual limitations of Ambisonic reproduction have received comparatively little attention. What order is sufficient for correct perception of depth, immersion or locatedness in higher order Ambisonic soundfields? Are the perceptual limitations the same over loudspeakers as over virtual Ambisonic reproduction on headphones? Are current methods of up-mixing low order microphone signals to higher orders perceptually correct? And finally, can the configuration of reproduction setups be simplified without any degradation in the perception of spatial audio quality? This project looks to answer these questions through acoustic measurement, psychoacoustic modelling and perceptual testing, with the ultimate aim of developing new Ambisonic processing algorithms grounded in validated perceptual data.
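To give a concrete sense of how the representation scales with order, a full-sphere Ambisonic signal of order N comprises (N + 1)² spherical harmonic components; first order therefore needs 4 channels, third order 16, and so on. A minimal sketch (the function name is illustrative, not from any particular library):

```python
def ambisonic_channel_count(order: int) -> int:
    """Channels in a full-sphere (3D) Ambisonic representation of a given order.

    An order-N soundfield is described by all spherical harmonics up to
    degree N, of which there are (N + 1)**2 in total.
    """
    if order < 0:
        raise ValueError("order must be non-negative")
    return (order + 1) ** 2

for n in range(5):
    print(f"order {n}: {ambisonic_channel_count(n)} channels")
# order 0: 1, order 1: 4, order 2: 9, order 3: 16, order 4: 25
```

The quadratic growth is one reason the questions above matter: each additional order improves physical accuracy over a larger listening area, but at a rapidly rising cost in microphone capsules, transmission channels and decoding complexity.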