Designed for Live Sound Spatialization and Immersive Experiences
"Development of the HOLOPHONIX system is undoubtedly the most ambitious and most exciting project we have initiated in a very long time. The HOLOPHONIX processor brings together several spatialization techniques, including Wave Field Synthesis, Higher-Order Ambisonics, Distance-Based Amplitude Panning, and more, enabling intuitive placement and movement of sources in 2D and/or 3D space," comments Michel Deluc, Amadeus’ R&D Manager.
The processor is structured around a powerful multichannel algorithmic reverberation engine. It allows users to combine several different artificial reverberations, homogeneously combining sound materials and fine-tuning the perceived sound depth. Reflection calculators allow the user to create several virtual sound spaces.
"The HOLOPHONIX processor creates an extremely advanced platform which is able to mix, reverberate, and spatialize sound content played from various devices, using several different spatialization techniques in two or three dimensions," says Thierry Coduys, Chief Technology Officer, who was intimately involved in the creation of the HOLOPHONIX processor.
Co-designed with world-renowned French Institutions
Amadeus collaborated with several top engineers from world-renowned musical and theatrical institutions, including Jean-Marc Harel from La Gaîté Lyrique Theater, Marc Piera from Chaillot National Theater, Dominique Bataille and Samuel Maitre from Théâtre du Vieux-Colombier – one of the three theaters of the world-famous Comédie-Française – and Dewi Seignard from Les Champs Libres Cultural Center in Rennes.
"This project brings together a plurality of talents from the most prestigious French musical, theatrical, and scientific institutions – and their experience and knowledge are as rich as they are complementary. Our long-term, close relationship with the teams at the Paris-based IRCAM Institute, and their trust in Amadeus for almost 15 years, inevitably led us to integrate into the HOLOPHONIX processor a large part of their technologies related to the spatialization of sound," says Gaetan Byk, Amadeus’ Marketing Manager.
"Thanks to the HOLOPHONIX project, Amadeus has gathered and utilized the best technological, scientific, and pure-research resources, and some of the top French user-experts in the field of live spatial sound. We wanted to offer our future users a simple, intuitive, and ergonomic tool, perfectly optimized for the needs and demands of the theatrical, musical, and performance fields. The cooperation of contributors from prestigious French institutions – among them the first users and beta-testers of our spatial sound processor – was essential," adds Gaetan Byk.
Engineered by Amadeus
For over twenty years, Amadeus has designed, built, and marketed a wide range of electro-acoustical playback systems and very high-end signal processing equipment, bringing together high technology, style, and emotion. Amadeus was founded in 1992, when designer Bernard Byk and scientist Michel Deluc joined forces, and has since become a reference brand for sound professionals.
Amadeus enjoys close ties with major musical and scientific institutions, both national and international. From the onset, Amadeus has housed a scientific research and conceptualization department, led by Michel Deluc, dedicated to, among other things, the design of custom-made acoustical and electro-acoustical solutions for a wide range of clients, including theaters, opera houses, studios, musicians, producers, composers, multidisciplinary artists, research centers, and more…
Michel Deluc worked closely with software designers Thierry Coduys, and Guillaume Jacquemin, co-inventors of the digital creation-oriented IanniX graphical sequencer, based on a system designed by famed modernist composer Iannis Xenakis.
As a musician, multidisciplinary artist, and producer, Thierry Coduys has acted as sound engineer for some of the world’s most acclaimed modern composers, including Pascal Dusapin, Karlheinz Stockhausen, Philip Glass, Steve Reich, and Luciano Berio, as well as for pianists and performers Marielle and Katia Labèque.
The new system allows the user to select and control a series of highly advanced 2D and 3D sound algorithms designed at IRCAM-based STMS Lab (Sciences et Technologies de la Musique et du Son), located in Paris, and supported by CNRS (National Center for Scientific Research), Sorbonne University, French Ministry of Culture and IRCAM (Institut de Recherche et de Coordination Acoustique/Musique).
Amadeus’ relationships with IRCAM started in the late 1990s. Over the years, Amadeus has designed more than 339 custom loudspeakers installed within IRCAM’s variable acoustics hall (called Espace de Projection) for research on high-end sound field recreation systems, including Wave Field Synthesis 2D and Ambisonics 3D sound.
When Design Embodies Technology
'The superfluous, a very necessary thing...'
Bernard Byk, co-founder of Amadeus, commented on the HOLOPHONIX sound processor's design by quoting the French poet Voltaire. "This unique audiophile-grade sound spatialization processor had to receive a distinguishing ornament, superfluous and essential, embodying its intrinsic properties and features. Machined in three dimensions from an aluminum block over more than twenty hours, its front panel by itself embodies all the technicality embedded and hidden within this unique tool," adds Bernard Byk.
The HOLOPHONIX processor is housed in a 3U-height chassis, machined from aluminum and anodized. Its front panel is machined in three dimensions from an aluminum block, its style drawn from the aesthetic and technical aspects of Amadeus' audiophile Hi-Fi product development.
Unlimited Number of Spatialization Buses Supported
The hardware offers a quasi-unlimited number of spatialization buses, each able to run one of the sound algorithms designed at the IRCAM-based STMS Lab, including: Higher-Order Ambisonics (2D, 3D), Vector-Base Intensity Panning (2D, 3D), Vector-Base Amplitude Panning (2D, 3D), Wave Field Synthesis, Angular 2D, k-Nearest Neighbor, Stereo Panning, Stereo AB, Stereo XY, and Binaural.
"This allows the user to achieve control of the sound sources using different techniques. For each project, the algorithms are evaluated, listened to, and selected on site, according to their coherence with the main electro-acoustic system and the artistic expectations of the composer or performers," says Thierry Coduys.
13 Algorithms Included
The HOLOPHONIX processor allows composers, sound artists, performers, and sound engineers to control the localization of sound sources in 2D and/or 3D auditory spaces.
Here is a non-exhaustive list of supported spatialization algorithms:
WFS (Wave Field Synthesis) 2D
The WFS technology can rebuild a sound field over an extended area. It recreates a wavefront by superimposing secondary sound waves radiated by a speaker array.
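In the simplest point-source case, each loudspeaker replays the source delayed by its distance to the virtual source and attenuated with distance, so the superposed secondary waves approximate the desired wavefront. The sketch below is a toy illustration of that delay-and-attenuate idea, not the actual WFS driving function, which also involves pre-filtering and array tapering:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def wfs_delays_gains(source_xy, speaker_xy):
    """Per-speaker delays and gains for a simple point-source WFS sketch.

    Delay grows with distance (d / c); amplitude falls off roughly
    as 1/sqrt(d) for a line array, per the illustrative model assumed here.
    """
    d = np.linalg.norm(speaker_xy - source_xy, axis=1)
    return d / C, 1.0 / np.sqrt(d)

# Linear array of 8 speakers along the x-axis; virtual source 2 m behind it.
spk = np.stack([np.linspace(-3.5, 3.5, 8), np.zeros(8)], axis=1)
delays, gains = wfs_delays_gains(np.array([0.0, -2.0]), spk)
```

The center speakers fire first and loudest, the outer ones later and softer, which is exactly the curvature of the virtual source's wavefront.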
Higher-Order Ambisonics 3D
The Ambisonics technology recreates an acoustic field by decomposition and recomposition of the sound field based on spherical harmonics, using the position data of each speaker at the decoding stage.
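The spherical-harmonic decomposition can be illustrated at first order (B-format). This is a toy sketch using one common channel layout (W, X, Y, Z); real systems differ in ordering and normalization conventions (ACN/SN3D, FuMa, etc.), and HOLOPHONIX's higher orders involve higher-degree harmonics in the same spirit:

```python
import numpy as np

def foa_encode(signal, az_deg, el_deg=0.0):
    """Encode a mono signal to first-order Ambisonics (B-format).

    Channel order W, X, Y, Z is assumed here for illustration;
    conventions vary between Ambisonics tool chains.
    """
    az, el = np.radians(az_deg), np.radians(el_deg)
    w = signal * 1.0                      # omnidirectional component
    x = signal * np.cos(az) * np.cos(el)  # front-back figure-of-eight
    y = signal * np.sin(az) * np.cos(el)  # left-right figure-of-eight
    z = signal * np.sin(el)               # up-down figure-of-eight
    return np.stack([w, x, y, z])

# A source hard left (azimuth 90°) lands almost entirely in the Y channel.
b = foa_encode(np.ones(4), az_deg=90.0)
```

A decoder would later recombine these channels with per-speaker weights derived from the actual loudspeaker positions.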
VBAP (Vector Base Amplitude Panning) 2D or 3D
The VBAP technology utilizes the position data for each speaker. It drives the two (in 2D) or three (in 3D) speakers closest to the desired position of the source, deriving gains from the directional components of the vectors corresponding to those speakers.
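In 2D the vector base reduces to a 2×2 linear system: the source direction is expressed as a weighted sum of the two nearest speaker directions. A minimal sketch of that idea (not the IRCAM implementation):

```python
import numpy as np

def vbap_2d_gains(source_az_deg, spk_az_deg_pair):
    """2D VBAP gains for a source between two loudspeakers.

    Solves p = L @ g, where p is the source's unit direction vector and
    the columns of L are the unit vectors of the two nearest speakers,
    then normalizes the gains for constant power.
    """
    def unit(az_deg):
        a = np.radians(az_deg)
        return np.array([np.cos(a), np.sin(a)])

    p = unit(source_az_deg)                                   # source direction
    L = np.column_stack([unit(a) for a in spk_az_deg_pair])   # speaker base
    g = np.linalg.solve(L, p)                                 # directional weights
    g = np.clip(g, 0.0, None)                                 # no negative gains
    return g / np.linalg.norm(g)                              # power normalization

# A source dead ahead between speakers at ±30° gets equal gains.
g = vbap_2d_gains(0.0, (-30.0, 30.0))
```

The 3D case is identical in structure, with a 3×3 base built from the three nearest speakers.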
DBAP (Distance-Based Amplitude Panning) 2D
The DBAP technology is based on amplitude panning, applied to a series of speakers. The gain applied to each speaker is calculated according to an attenuation model based on the distance between the sound source and each speaker.
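One common attenuation model from the DBAP literature uses a 1/dᵃ rolloff per speaker followed by power normalization. The sketch below assumes that model and a hypothetical rolloff parameter; the exact model used by HOLOPHONIX is not specified here:

```python
import numpy as np

def dbap_gains(source_xy, speaker_xy, rolloff_db=6.0, r_s=0.1):
    """Distance-based amplitude panning gains (assumed 1/d^a model).

    rolloff_db: attenuation per doubling of distance (6 dB is typical).
    r_s: small spatial-blur term avoiding a singularity when the
         source sits exactly on a speaker.
    """
    d = np.sqrt(np.sum((speaker_xy - source_xy) ** 2, axis=1) + r_s ** 2)
    a = rolloff_db / (20.0 * np.log10(2.0))  # exponent from dB rolloff
    g = 1.0 / d ** a                          # closer speakers get more gain
    return g / np.sqrt(np.sum(g ** 2))        # power-preserving normalization

# Three speakers; a source at the origin favors the two nearest ones equally.
speakers = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, 3.0]])
g = dbap_gains(np.array([0.0, 0.0]), speakers)
```

Unlike VBAP, every speaker contributes, which makes DBAP robust for irregular speaker layouts where no geometric triangulation exists.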
Binaural Rendering for Headphones
The Binaural algorithm has been designed to help engineers and producers prepare their productions using a conventional pair of headphones, giving them the experience of a full 3D image of their mix, and to design sound-object trajectories. The processor also includes around a hundred head-related transfer functions (HRTFs) available in the SOFA file format.
The head-related transfer function (HRTF), also sometimes known as the anatomical transfer function (ATF), is a response that characterizes how an ear receives a sound from a point in space. The Audio Engineering Society (AES) has defined the SOFA file format for storing spatially oriented acoustic data like head-related transfer functions (HRTFs).
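Conceptually, binaural rendering convolves the mono source with a left/right head-related impulse response (HRIR) pair, the time-domain counterpart of the HRTF for one direction. The toy two-tap filters below stand in for data that would normally be read from a SOFA file:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono signal to binaural stereo by HRIR convolution.

    hrir_left / hrir_right: equal-length impulse responses for one
    source direction (in practice, loaded from a SOFA file).
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs mimicking a source on the listener's left: the right ear
# hears it one sample later and quieter (these values are illustrative).
hl = np.array([1.0, 0.3])
hr = np.array([0.0, 0.6])
out = binaural_render(np.array([1.0, 0.0, 0.0, 0.0]), hl, hr)
```

Real HRIRs are hundreds of taps long and are selected (or interpolated) per source direction as objects move.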
The HOLOPHONIX processor also works with show-control software and many popular DAWs that are compatible with the Open Sound Control (OSC) protocol – including Ableton Live, Cubase, Digital Performer, IanniX, Logic Pro, Mandrin, Max, Nuendo, PureData, Pyramix, QLab, Reaktor, REAPER, Reason, and Traktor – allowing composers to add a control layer to existing software, hardware, or network systems used for original in-situ creations, such as installations or performances involving graphic, video, and/or sound content.
"Every HOLOPHONIX processor parameter can be monitored and controlled via the Open Sound Control (OSC) protocol. This data-transmission format between computers, synthesizers, robots, or any other compatible unit or software was designed for real-time control. Data transmission uses the UDP (User Datagram Protocol) network protocol, improving speed and flexibility compared to MIDI," explains Guillaume Jacquemin, Software Designer.
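An OSC message is a compact binary datagram: a NUL-terminated address pattern and type-tag string, each padded to a 4-byte boundary, followed by big-endian arguments. The sketch below builds one by hand with the standard library; the `/source/1/azimuth` address and port number are hypothetical illustrations, not documented HOLOPHONIX parameter paths:

```python
import socket
import struct

def osc_message(address, value):
    """Build a minimal OSC message carrying one 32-bit float argument."""
    def padded(s):
        # OSC strings are NUL-terminated, then zero-padded to 4 bytes.
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    # Address pattern + type tags (",f" = one float) + big-endian payload.
    return padded(address) + padded(",f") + struct.pack(">f", value)

msg = osc_message("/source/1/azimuth", 45.0)

# Sending is a single UDP datagram (host/port are placeholders):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(msg, ("192.168.1.10", 4003))
```

Because UDP is connectionless and the encoding is fixed-size-friendly, messages like this can be fired at control rate without the round-trip overhead of a stream protocol.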
HOLOPHONIX is Dante-compatible and can integrate with standard commercial DAW software as well as Dante-enabled devices providing an added control layer.
"The technological gains offered by the Dante protocol and its widespread adoption by professionals led us to consider its implementation within our systems," says Michel Deluc.
"Besides its standard Dante compatibility, the system can also be configured on request for MADI, RAVENNA, or AES67 formats. The input/output matrix of the HOLOPHONIX processor allows the user to choose the rendering mode for each incoming channel. It natively handles 128 inputs and 128 outputs at 24-bit/96 kHz resolution, but can be extended to 256 or 384 inputs and outputs," Deluc explains.
The HOLOPHONIX processor offers the user a 3D representation of the venue in its web-based application simply by inputting some basic parameters. This simplifies designing the complex scenarios of sound projection within the space. Sound wave ‘projections’ can be viewed within this modeled 3D space to more easily communicate, present and plan the audio processing ideas.
The HOLOPHONIX Controller web-based application is compatible with any device or operating system with a web browser, including iOS, macOS, Windows, and Android-based environments. "It offers a three-dimensional visualization of the venue, easing live monitoring and user interaction with all sound objects, speakers, and other parameters. Regular 2D venue drawings can also be imported into the GUI and shown as axonometric projections, appearing rotated to reveal all three dimensions," adds Guillaume Jacquemin.