Developments in measuring the acoustic characteristics of concert halls and opera houses are leading to standardised methods of impulse response capture for a wide variety of auralisation applications, including many surround-sound formats. This project will consider some of the newer methods of rendering a soundfield from a database of acoustic impulse responses, possibly including 5.1, Higher-Order Ambisonics, Spatial Impulse Response Rendering (SIRR), or Wave Field Synthesis techniques. How can large databases of such measurements be effectively parameterised, and potentially hybridised, for multiple-destination, speaker-agnostic, next-generation audio systems? Can interpolation and data-reduction schemes be used to reduce the size of a typical dataset? How might the end user interact with and use such a set of impulse responses in a creative and technically transparent manner? Once rendered, is it possible to enable the listener to become an active part of an interactive soundfield, rather than adopting the more usual role of passive ‘observer’ of a static acoustic image?
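To make the core ideas concrete, the sketch below shows the two basic operations the questions above build on: auralisation by convolving a dry (anechoic) signal with a measured impulse response, and a deliberately naive interpolation between two impulse responses as a stand-in for the more sophisticated parameterisation and data-reduction schemes the project would investigate. This is a minimal illustration using NumPy and synthetic exponential-decay impulse responses; the function names and the linear crossfade are illustrative assumptions, not part of any standard rendering pipeline.

```python
import numpy as np

def auralise(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Auralise a dry signal by convolution with a room impulse response."""
    return np.convolve(dry, ir)

def interpolate_irs(ir_a: np.ndarray, ir_b: np.ndarray, t: float) -> np.ndarray:
    """Naive linear crossfade between two impulse responses (0 <= t <= 1).

    Real interpolation schemes would need to handle time-of-arrival
    alignment and perceptual weighting; this only illustrates the idea
    of synthesising an intermediate response from two measured ones.
    """
    n = max(len(ir_a), len(ir_b))
    a = np.pad(ir_a, (0, n - len(ir_a)))
    b = np.pad(ir_b, (0, n - len(ir_b)))
    return (1.0 - t) * a + t * b

# Toy data: a unit impulse as the "dry" source and two synthetic
# exponentially decaying impulse responses (fast and slow decay).
dry = np.zeros(4800)
dry[0] = 1.0
ir_near = np.exp(-np.arange(2000) / 300.0)
ir_far = np.exp(-np.arange(4000) / 900.0)

wet = auralise(dry, ir_near)              # rendered (reverberant) signal
ir_mid = interpolate_irs(ir_near, ir_far, 0.5)  # intermediate response
```

In practice the convolution would be performed per channel of whatever target format (5.1, Ambisonics, and so on) the database is rendered to, which is where the question of a speaker-agnostic parameterisation arises.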