
Sunday 18 September 2022

Latest Developments of the Last Decade in the Field of Cinema Sound

Introduction: To capture and store sound was a long-standing dream of humankind. As human society modernized, especially through industrial development, many attempts were made to capture sound. The experiments began during the Industrial Revolution, when several notable figures started experimenting with capturing and storing sound. One notable achievement was the phonautograph, made by the French printer, bookseller and inventor Édouard-Léon Scott de Martinville. But it recorded through physical contact with the sound-producing object, such as a tuning fork. As a culmination of these efforts, in 1877 Thomas Edison invented the phonograph, considered the first recording device that could record sound waves as etches on a cylinder and play them back to reproduce the recorded sound. With refinement, this technology gave birth in the course of time to the gramophone, turntable or record player. Readers might wonder why I am recounting these historical facts in a discussion of the latest developments of the last decade in the field of sound. It is simply because everything is connected, just as we cannot imagine our adulthood without the experiences we go through in childhood. Similarly, it is impossible to absorb anything about sound technology without a clear understanding of the phonograph, which used a pin (stylus), excited by sound waves or sound pressure, to sketch (etch) the sound onto a cylinder. That is the basic idea of sound reproduction. As the development of sound technology is not a discrete affair, I will briefly touch upon its developmental history so that the scenario of the last decade becomes easier to understand.

Evolution of sound technology: The development of sound technology since the invention of the phonograph can be divided into four stages. The first was the Acoustic era (1877–1925). Its defining characteristic was that all recording devices were mechanical; no non-mechanical force such as electricity was used to record sound. The devices of this era could capture only 250 Hz to 2.5 kHz of the sound spectrum, and artists, especially musicians, were forced to generate sound within this range. Next came the Electrical era (1925–1945), which gave us electrical microphones, amplifiers, speakers, mixing consoles and, of course, the electrically powered stylus for cutting the groove. This era could record a wider band (60 Hz to 6 kHz), and notably the first sound film, ‘The Jazz Singer’ (1927), was its product. The third stage, the Magnetic era (1945–1975), introduced magnetic tape as a recording medium in place of the grooves of the gramophone or turntable. Tape-based recording succeeded in capturing and reproducing almost the full audible band (30 Hz to 16 kHz). During this time film sound shifted entirely to magnetic recording, and a method was developed to transfer the sound on magnetic tape optically onto a negative similar to a picture negative, or celluloid. And thus came the new age of sound recording, the Digital era (1975 to present). Until the Magnetic era everything was analogue: sound waves were transformed into an acoustic, electrical or magnetic analogue, or analogous form, to be transcribed on a cylinder, disc or tape. In digital recording, however, everything is converted into a series of two states, full voltage or no voltage, and that series is recorded as ‘data’ optically (CD/DVD/Blu-ray), magnetically (DAT/Hi8/LTO) or electronically (solid-state devices). Software, an algorithm, is then required to extract, or decode, the recorded, encoded signal. But it must be remembered that only the recording and processing have become digital: microphones, speakers and human ears still operate on mechanical, electrical and magnetic principles, creating ‘analogues’ of the original sound wave. By the end of the first decade of the 21st century cinema audio had reached surround sound, but the last decade was completely revolutionary: it brought forth the incredible experience of immersive sound. Both are discussed below, since the surround sound experience whetted the appetite of inventors and audiences for more, namely immersive sound.
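The two-state encoding described above is, at heart, sampling followed by quantization. A minimal sketch in Python may make this concrete; the sample rate, bit depth and test signal here are illustrative choices, not details from the text:

```python
import math

def digitize(signal, sample_rate=8, bit_depth=3, duration=1.0):
    """Sample a continuous-time signal and quantize each sample
    to one of 2**bit_depth discrete levels (a toy PCM encoder)."""
    levels = 2 ** bit_depth
    samples = []
    for i in range(int(sample_rate * duration)):
        t = i / sample_rate                     # sampling instant
        x = signal(t)                           # amplitude in [-1, 1]
        q = round((x + 1) / 2 * (levels - 1))   # map to 0 .. levels-1
        samples.append(q)
    return samples

# Digitize one second of a 1 Hz sine wave: eight 3-bit samples.
pcm = digitize(lambda t: math.sin(2 * math.pi * t))
```

Playing the recording back is the reverse trip: the decoder maps each stored level back to a voltage, and the speaker turns that voltage series into an analogue wave again.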

Surround Sound: Surround sound was a technique that used five or seven speaker channels, known as L (left), C (centre), R (right), LS (left surround), RS (right surround) and sometimes also LBS (left back surround) and RBS (right back surround). Alongside these channels there was a sub-woofer, which took a feed of the low-frequency signal from those five or seven channels. The positions of L, C, R, LS, RS, LBS and RBS were always critical, as they focused on the audience area, the ‘sweet spot’, whereas the position of the sub-woofer was not critical, since it fired low frequencies in all directions. Note that L, C, R, LS, RS, LBS and RBS denoted channels, not speakers: one channel might have more than one speaker, especially in the case of LS, RS, LBS and RBS. As the sub-woofer carried only this narrow low-frequency feed drawn from the main channels rather than a full-range channel of its own, it was counted as ‘.1’ instead of 1. Thus the popular names 5.1 and 7.1 surround sound came into play. This was the technology behind cinema sound until the end of the first decade of this century. But the arrival of immersive sound completely changed the experience of listening to movies, music or any audio-visual material.
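The sub-woofer feed described above can be pictured as simple bass management: sum the main channels, then keep only the low-frequency content of the mix. A toy sketch in Python, where the one-pole filter and its `alpha` coefficient are illustrative assumptions, not part of the original text:

```python
def subwoofer_feed(channels, alpha=0.05):
    """Derive a sub-woofer feed from the main channels: mix them down,
    then low-pass filter the mix with a simple one-pole filter so that
    only low-frequency content reaches the sub. `channels` maps channel
    names (L, C, R, LS, RS, ...) to equal-length lists of samples."""
    length = len(next(iter(channels.values())))
    mixed = [sum(ch[i] for ch in channels.values()) for i in range(length)]
    feed, y = [], 0.0
    for x in mixed:
        y += alpha * (x - y)   # one-pole low-pass: y slowly tracks x
        feed.append(y)
    return feed

# Three main channels, two samples each; the sub gets one smoothed feed.
lfe = subwoofer_feed({'L': [1.0, 0.0], 'C': [0.5, 0.5], 'R': [0.0, 1.0]})
```

A real cinema processor uses far steeper crossover filters, but the principle is the same: the ‘.1’ is derived, band-limited content, not a sixth or eighth full-range channel.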

Immersive sound: This is also called 3D sound. If stereo sound is considered one-dimensional, since sound can move only from left to right and vice versa, then surround sound is two-dimensional, since it can carry the sound of a moving object along the front-to-back axis as well as the left-to-right axis. In 2012 Dolby brought in Atmos, which is capable of giving a true 3D, or 360°, immersive sound experience. Dolby Atmos uses height speaker channels to deliver this experience: it adds down-firing speaker channels on the ceiling of the theatre, so one can clearly hear, for example, rain coming down from above. If there are two height channels in a theatre, it is called 7.1.2, the third number denoting the height channels; if there are four, it is called 7.1.4 Dolby Atmos. The fun does not stop here. In a surround system, all the speakers of a single channel play the same sound assigned to that channel at a given moment. But in Atmos, every single speaker of a channel can be assigned a different sound, or ‘audio object’, to play at a given moment. Yes, that’s true! This is achieved by creating digital audio metadata (location or pan automation data) that tells the amplifiers/speakers which sound to play and when. Another equally popular immersive or 3D sound format is Auro 3D, which uses three layers called Surround (Layer 1 at 0°), Height (Layer 2 at 30°) and Top (Layer 3 at 90°). The top layer of Auro 3D is interestingly called the ‘Voice of God’ layer. The basic Auro 3D configurations are 11.1 and 13.1, and it can go higher depending on the number of speakers used. Besides these, DTS:X also provides similar immersive sound. In the last decade, starting from 2012, most of the RR (re-recording) theatres in India and abroad have been equipped with immersive sound to cater to the taste of the cinema audience, and cinema theatres too are increasingly being equipped with it. The best part of the immersive sound formats is that they can be played in almost any environment, from 3D-sound-enabled headphones to home theatre, using digital algorithms, psychoacoustic principles and acoustical techniques.
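The object metadata idea above can be illustrated with a toy renderer: the position metadata of an ‘audio object’ is turned into per-speaker gains. The speaker layout and the inverse-distance panning law below are purely illustrative assumptions, not how Atmos actually computes its rendering:

```python
import math

# Illustrative 5-main + 2-height speaker layout as (name, x, y, z) positions.
SPEAKERS = [
    ('L', -1.0, 1.0, 0.0), ('C', 0.0, 1.0, 0.0), ('R', 1.0, 1.0, 0.0),
    ('LS', -1.0, -1.0, 0.0), ('RS', 1.0, -1.0, 0.0),
    ('Ltop', -0.5, 0.0, 1.0), ('Rtop', 0.5, 0.0, 1.0),   # height channels
]

def object_gains(x, y, z):
    """Turn an audio object's position metadata into per-speaker gains:
    each speaker is weighted by inverse distance to the object, and the
    weights are normalized so that all gains sum to 1."""
    weights = {}
    for name, sx, sy, sz in SPEAKERS:
        distance = math.dist((x, y, z), (sx, sy, sz))
        weights[name] = 1.0 / (distance + 1e-6)   # nearer speaker, larger gain
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# A rain "object" directly overhead: the height channels dominate the mix.
gains = object_gains(0.0, 0.0, 1.0)
```

Because gains are computed from metadata at playback time, the same mix can be rendered on any speaker layout, which is exactly why one Atmos master can serve a 7.1.2 theatre, a 7.1.4 theatre or a pair of 3D-sound headphones.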

Sync Sound: Another popular phenomenon of the last decade, especially in the Indian film industry, has been the rise of sync sound: using the sound, especially dialogue, recorded directly on location. The reason, as I feel, is more economic than aesthetic. Economic, because a sound recording engineer employed by an audio post-production studio is paid remarkably little compared to a sync sound recording engineer, who can operate independently. Secondly, sync sound professionals get to travel across the globe, depending on the demands of the script. That is why more and more audio professionals are inclining towards sync sound, despite the better audio quality promised by studios. At the same time, dubbing requires excellent acting skills that most new-age actors sorely lack. These factors are creating demand for sync sound professionals, though aesthetically studio sound remains far better, provided it is performed properly by both the actor and the engineer.

Conclusion: In short, the last decade gave us the immersive and sync sound experience. As it is said that history repeats itself, studio sound may yet return, with enough support from passionate engineers. In music too, the demand for good engineers and spacious studios is increasing owing to cutthroat competition, yet with the arrival of digital technology music production has in general shifted to smaller studios. As a result, the quality of ‘resonance’ has been lost, and people keep asking, ‘Why do the old songs sound sweeter?’ The reason is ‘resonance’, my dear! Not everything can be judged by the demand-and-supply equation of the market economy. The last decade has taught us that.

©Saibal Ray

09/05/2022