

Sunday 18 September 2022

Latest Developments of the Last Decade in the Field of Cinema Sound

Introduction: To capture and store sound was a long-standing dream of humankind. As human society modernised, especially through industrial development, many attempts were made to capture sound. The experiments began during the Industrial Revolution, when several notable figures tried to record and store sound. One notable achievement was the phonautograph, made by the French printer, bookseller and inventor Édouard-Léon Scott de Martinville, but it recorded through physical contact with the sound-producing object, such as a tuning fork, and could not play the sound back. As a result of these efforts, in 1877 Thomas Edison invented the phonograph, considered the first recording device that could capture sound waves as etchings on a cylinder and play them back to reproduce the recorded sound. In the course of time this technology, refined further, gave birth to the gramophone, the turntable and the record player. Readers might wonder why I am recounting these historical facts in an essay on the latest developments of the last decade in the field of sound. It is simply because everything is connected: just as we cannot imagine our adulthood without the experiences we go through in childhood, it is impossible to grasp anything about sound technology unless one has a clear understanding of the phonograph, which used a pin (stylus), excited by sound waves or sound pressure, to sketch (etch) the sound onto a cylinder. That is the basic idea of sound reproduction. As the development of sound technology is not a discrete affair, I will touch upon a brief developmental history so that the scenario of the last decade becomes easier to understand.

Evolution of sound technology: The development of sound technology since the invention of the phonograph can be divided into four stages. The first was the Acoustic era (1877–1925). Its defining characteristic was that all recording devices were mechanical: no non-mechanical force such as electricity was used to record sound. The devices of this era could only capture a sound spectrum of 250 Hz to 2.5 kHz, and artists, especially musicians, were forced to generate sound within this range. The next stage, the Electrical era (1925–1945), gave us electrical microphones, amplifiers, speakers, mixing consoles and, of course, the electrically powered stylus to cut the groove. This era could record a wider band (60 Hz to 6 kHz), and notably the first sound film, 'The Jazz Singer' (1927), was a product of it. The third stage, the Magnetic era (1945–1975), introduced magnetic tape as the recording medium, as opposed to the grooves of the gramophone or turntable. Tape-based recording succeeded in capturing and reproducing almost the full audible band (30 Hz to 16 kHz). During this time film sound shifted entirely to magnetic recording, and a method was developed to optically transfer the sound from magnetic tape onto a negative similar to a picture negative, or celluloid. And thus came the new age of sound recording: the Digital era (1975 to present). Until the Magnetic era everything was analogue, meaning sound waves were transformed into an acoustic, electrical or magnetic analogue, an analogous form, to be transcribed on a cylinder, disc or tape. In digital recording, by contrast, everything is converted into a series of two states, full voltage or no voltage, and that series of voltages is recorded as 'data' optically (CD/DVD/Blu-ray), magnetically (DAT/Hi8/LTO) or electronically (solid-state devices).
Software, an algorithm, is then required to extract or decode the recorded, encoded signal. But it has to be remembered that only recording and processing have become digital. Microphones, speakers and human ears still operate on the mechanical, electrical and magnetic principle of creating 'analogues' of the original sound wave. The first decade of the 21st century reached as far as surround sound, but the last decade was completely revolutionary in terms of cinema audio: it brought forth the incredible experience of immersive sound. Both are discussed below, as the surround sound experience is what incited inventors and audiences to want more, which is immersive sound.
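The two-state encoding described above can be sketched in a few lines of Python: a waveform is sampled at regular intervals and each sample is rounded to one of a small set of discrete codes. The 3-bit word used here is a hypothetical choice purely for illustration; real systems use 16 or 24 bits.

```python
import math

def quantize(sample, bits=3):
    """Map a sample in [-1.0, 1.0] to a signed integer code of `bits` bits."""
    levels = 2 ** (bits - 1) - 1          # e.g. 3 bits -> codes -3..+3
    return round(sample * levels)

# Sample a 1 kHz sine wave at 8 kHz (8 samples per cycle).
sample_rate, freq = 8000, 1000
samples = [math.sin(2 * math.pi * freq * n / sample_rate) for n in range(8)]
codes = [quantize(s) for s in samples]
print(codes)  # the stored 'data': a series of discrete states
```

Playing the sound back is the reverse mapping, codes back to voltages, which is exactly what a DAC does before the loudspeaker turns the voltage into an 'analogue' again.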



Surround Sound: Surround sound is a technique that uses five or seven speaker channels, known as L (left), C (centre), R (right), LS (left surround), RS (right surround) and, in the seven-channel case, LBS (left back surround) and RBS (right back surround). Along with these channels there is a sub-woofer that takes the low-frequency content of those five or seven channels as its feed. The positions of L, C, R, LS, RS, LBS and RBS are always critical, as they focus on the audience area or 'sweet spot', whereas the position of the sub-woofer is not, since it fires low frequencies in all directions. It should be noted that L, C, R, LS, RS, LBS and RBS denote channels, not speakers: one channel may drive more than one speaker, especially in the case of LS, RS, LBS and RBS. As the sub-woofer takes only a low-frequency feed derived from these channels rather than a full-range channel of its own, it is counted as .1 instead of 1. Thus the popular names 5.1 and 7.1 surround sound came into play. This was the technology behind cinema sound till the end of the first decade of this century. But the arrival of immersive sound completely changed the experience of listening to movies, music or any type of audio-visual work.
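The sub-woofer feed described above, low-frequency content drawn from the main channels, can be illustrated with a toy sketch: sum the mains, then low-pass filter the sum. The one-pole filter and its `alpha` value here are arbitrary illustrations, not any cinema standard's bass-management specification.

```python
def lfe_feed(channels, alpha=0.1):
    """Derive a sub-woofer feed: sum the main channels, then low-pass filter.

    `channels` maps channel name -> list of samples; `alpha` sets the
    (illustrative) one-pole low-pass filter cutoff.
    """
    n = len(next(iter(channels.values())))
    summed = [sum(ch[i] for ch in channels.values()) for i in range(n)]
    out, y = [], 0.0
    for x in summed:               # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        y += alpha * (x - y)
        out.append(y)
    return out

# 5.1-style main channels, each carrying a short constant test signal
mains = {name: [0.1] * 4 for name in ("L", "C", "R", "LS", "RS")}
print(lfe_feed(mains))  # the filtered sum rises slowly toward the input level
```

The slow rise of the output shows the filter doing its job: fast (high-frequency) changes are smoothed away, and only the slow (low-frequency) content reaches the sub-woofer.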

Immersive sound: This is also called 3D sound. If stereo sound is considered one-dimensional, since sound can move only from left to right and vice versa, then surround sound is two-dimensional, since it can carry the sound of a moving object along the front-to-back axis as well as the left-to-right axis. In 2012 Dolby brought in Atmos, which is capable of giving a true 3D or 360° immersive sound experience. Dolby Atmos adds height speaker channels, i.e. down-firing speaker channels on the ceiling of the theatre, so that one can clearly hear, for example, rain coming down from above. If there are two height channels in a theatre, it is called 7.1.2, the third number denoting the height channels; if there are four, it is called 7.1.4 Dolby Atmos. The fun does not stop here. In a surround system, all the speakers of a single channel play the same sound assigned to that channel at a given moment. But in Atmos, every single speaker of a channel can be assigned a different sound, an 'audio object', to play at a given moment. Yes, that's true! This is achieved by creating digital audio metadata (location or pan automation data) that tells the amplifiers and speakers which sound to play and when. Another equally popular immersive or 3D sound format is Auro-3D, which uses three layers called Surround (Layer 1 at 0°), Height (Layer 2 at 30°) and Top (Layer 3 at 90°). The top layer of Auro-3D is interestingly called the 'Voice of God' layer. The basic Auro-3D configurations are 11.1 or 13.1, and it can go higher depending on the number of speakers used. Besides these, DTS:X also provides similar immersive sound. In the last decade, starting from 2012, most of the RR (re-recording) theatres in India and abroad have become immersive-sound equipped to cater to the taste of the cinema audience, and cinema theatres are also tending to be equipped with the same. The best part of the immersive sound formats is that they can be played in almost any environment, from 3D-sound-enabled headphones to home theatres, utilizing digital algorithms, psychoacoustic principles and acoustical techniques.
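The metadata-driven rendering described above can be sketched with a toy one-dimensional example: the object's position (its 'metadata') is turned into a gain for each speaker. A real Atmos renderer distributes an object across dozens of speakers in three dimensions; this constant-power two-speaker pan is just a stand-in for the principle.

```python
import math

def pan_gains(position):
    """Constant-power gains for one audio object panned between two
    speakers; `position` runs from 0.0 (fully left) to 1.0 (fully right)."""
    theta = position * math.pi / 2
    return math.cos(theta), math.sin(theta)   # (left gain, right gain)

# 'Metadata' for an object moving left -> right over three moments
for t, pos in [(0, 0.0), (1, 0.5), (2, 1.0)]:
    gl, gr = pan_gains(pos)
    print(f"t={t}: left gain {gl:.3f}, right gain {gr:.3f}")
```

The constant-power law keeps the squared gains summing to one, so the object's perceived loudness stays steady while its position moves, which is precisely what pan automation data is for.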

Sync Sound: During the last decade another popular phenomenon, especially in the Indian film industry, has been the rise of sync sound, which means using the sound, especially dialogue, recorded directly on location. The reason, as I feel, is more economic than aesthetic. Economic because a sound recording engineer employed by an audio post-production studio is paid remarkably little compared to a sync sound recording engineer, who can operate independently. Secondly, sync sound professionals get to travel across the globe depending on the demands of the script. That is why more and more audio professionals are inclining towards sync sound, despite the better audio quality promised by studios. At the same time, dubbing requires excellent acting skills that most new-age actors conspicuously lack. These factors are creating demand for sync sound professionals, though aesthetically studio sound is far better, provided it is performed properly by both the actor and the engineer.

Conclusion: In short, the last decade gave us the immersive sound and sync sound experience. As it is said that history repeats itself, studio sound may yet return with enough support from passionate engineers. In music, too, the demand for good engineers and spacious studios is increasing amid cutthroat competition, while with the arrival of digital technology music production has in general shifted to smaller studios. As a result, the quality of 'resonance' has been lost, and people keep shouting in the air, 'Why do the old songs sound sweeter?' The reason is 'resonance', my dear! Everything cannot be judged by the demand-and-supply equation of the market economy. The last decade has taught us that.

©Saibal Ray

09/05/2022

Friday 11 August 2017

THE JOURNEY OF SOUND

It starts right from the screenplay. A screenplay is not only about visuals; it is about audiovisuals. The sound has to be written clearly into the screenplay. Then comes the stage of scripting, i.e. the audiovisual narration of the story. After that the shot division has to be done: ideally, one should draw each shot and write down each sound on paper for the whole movie. Along with many other activities in preparation for shooting, this stage is called preproduction.

Production starts after preproduction, i.e. the preparation for shooting. For sound there are two types of production: sync sound production and non-sync sound production. Sync sound production means the final sound has to be recorded on location, so choosing the location becomes most crucial: it has to be as silent as possible. At the same time, directional microphones should be used to avoid external noise, and high-end wireless microphones are also used. It is better to use both a small, hideable wireless collar (lapel) microphone and a directional boom microphone together, for safety as well as for quality, since the collar mic gives strength and the boom mic gives perspective. Owing to various constraints, one hundred percent sync sound recording is hardly ever possible, but 80-90% is. For the remaining sound, the sound engineer takes the help of a dubbing studio and dubs the missing parts using the same microphone combination; if he or she does not, the sound quality will mismatch. That is how sync sound is done.

For non-sync sound projects, the sound engineer records the sound with a boom microphone and a wireless lapel if necessary. In this case the recorded track is called the 'pilot' or reference track, since it will be used as a guide track for dubbing. For recording a pilot track, the location is not as important as it is for sync sound recording; the only requirement is that the dialogue be properly audible, otherwise dubbing will not be possible. It is important to note that before each shot the clapboard, inscribed with the scene number, shot number and take number, must make a sound in front of the camera, so that it becomes easier and more methodical for the editor to synchronise the sound with the visuals at the edit table.
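The clap works because its sharp transient is easy to line up in both the picture and the sound. Digitally, the same alignment can be found by sliding the clap's waveform along the recorded track and scoring the match at each offset, i.e. brute-force cross-correlation. The sketch below is a toy stand-in for what edit software does when auto-syncing, not any particular editor's algorithm.

```python
def best_offset(reference, recording):
    """Return the sample offset at which `recording` best matches
    `reference`, found by brute-force cross-correlation."""
    best, best_score = 0, float("-inf")
    for lag in range(len(recording) - len(reference) + 1):
        score = sum(r * x for r, x in zip(reference, recording[lag:]))
        if score > best_score:
            best, best_score = lag, score
    return best

clap = [0.0, 1.0, -1.0, 0.0]            # the clapboard transient
track = [0.0] * 5 + clap + [0.0] * 3    # same transient, 5 samples later
print(best_offset(clap, track))          # -> 5
```

Once the offset is known, the editor (or software) simply shifts the sound track by that many samples and picture and sound are in sync.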

After that the shot footage goes to the edit table. This process is known as post-production. In the editing lab the editor synchronises the sound with the visuals and makes a rough cut. He or she then fine-tunes the rough cut, prepares a final cut and locks the edit. The film then goes to the sound studio for dubbing and sound design. As I said before, a sync sound project needs very little dubbing. For a non-sync sound project, the sound engineer loads the visuals with the pilot track into the sound software. The arrangement of the sound studio allows the actor to hear the pilot track through headphones while viewing the visuals on a screen. Watching the screen and listening to the pilot track, he or she tries to utter the dialogue with emotion, in sync with the visuals, standing or sitting in front of a studio microphone. If it is a sync sound project, sound engineers prefer to use the location microphone combination to match the tonal quality. This way each actor or dubbing artist comes to the studio and records his or her dialogue in sync with the visuals. After the dialogue recording is over, the sound engineer saves all the tracks in a project.

Thereafter comes the foley. Foley recording is a very specialised job; the name comes from Jack Donovan Foley, who pioneered the creation of sound effects for motion pictures. Foley recording requires a foley studio, which holds everyday materials: sand, dry leaves, a wooden floor, a concrete floor, various shoes, cups, dishes, glasses, water, knives, clothes and so on. The main purpose of foley recording is to add sound to all the minor details of the movie. For example, a character on screen puts a teacup on a table; during shooting this sound might not have been recorded for various reasons, so the foley artist recreates it in the studio. Or suppose there is a campfire: the sound of the fire can be recreated by crackling a plastic sheet. This way sounds are added to the film. It is not possible to keep every material in the foley studio all the time, so the required materials are often collected from elsewhere, depending on the final cut of the movie. A major example of foley is footsteps recording, which happens in almost every movie. The procedure of foley recording is quite similar to dubbing: the foley artist watches the visuals on a screen inside the studio and performs the action in front of the microphone, and it gets recorded in the software.

After that it is the turn of the ambience sound. The ambience is created in the studio; the recording engineer lays the ambience tracks in two ways, one from stock sound and the other by recording real ambience. The engineer watches the final cut and makes a list of the ambience sounds required, then takes a field recorder, microphone and headphones and travels from place to place to collect them. Back in the studio, he or she lays the sounds in the software according to the visuals.

Music is recorded in a different studio with the help of the musicians, and the tracks are sent to the mixing studio, where the recording engineer lays the music tracks wherever necessary. The next step is balancing, cleaning and adjusting the dialogue, foley and ambience tracks; this stage is known as premixing. For example, the dialogues are put into a group and balanced, and similarly the ambience and foley tracks are grouped and premixed. Music comes as four separate tracks - vocal, rhythm, wind and strings - which are also premixed.
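At its core, premixing a group means applying a balance gain to each track and summing the group into one stem. The sketch below shows this for the four music tracks; the gain values are purely illustrative, not a recipe.

```python
def premix(stems, gains):
    """Sum a group of equal-length tracks after applying a per-track
    balance gain."""
    length = len(next(iter(stems.values())))
    return [sum(gains[name] * stems[name][i] for name in stems)
            for i in range(length)]

music_stems = {                      # the four music tracks: short test signals
    "vocal":   [0.5, 0.5],
    "rhythm":  [0.4, 0.4],
    "wind":    [0.2, 0.2],
    "strings": [0.3, 0.3],
}
gains = {"vocal": 1.0, "rhythm": 0.8, "wind": 0.5, "strings": 0.7}
print(premix(music_stems, gains))    # the balanced music premix
```

The dialogue, foley and ambience groups are premixed the same way, leaving four clean stems for the re-recording stage.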


After premixing, the four final tracks - dialogue, foley, ambience and music - go to the rerecording studio, where the mixing engineer mixes them according to the Dolby, DTS or SDDS standard. The final tracks are then sent to the lab for encoding with the visuals.

Monday 26 July 2010

Published in, CHHAYACHITRO,10th Bardhaman International Film Festival issue (2008/Page 31)

             Art of Sound
        Saibal  Ray, Sound Dept, SRFTI 2005-2008

Audio art or sound art is a new genre of contemporary art that uses various technological sounds, natural ambience and the sounds of our day-to-day life to create a piece of art. It is an interdisciplinary form that draws on many genres of traditional art and many subjects, such as music, acoustic ecology, acousmatic sound (sound that one hears without seeing its originating cause), psychoacoustics and even painting. Painting inspired by sound is nowadays also being called sound art. For example, 'Frozen' (5 Days Off MEDIA: Frozen, Wed 2 through Sat 26 July 2008, Melkweg Mediaroom & Paradiso, Amsterdam) was an exhibition held in Amsterdam in July 2008 to present sound as a space.[1] As the exhibition described itself, "In the Mediaroom at the Melkweg multi-channel sound pieces can be experienced over an advanced speaker setup, accompanied by sound in a "frozen" form: Images and sculptural objects made using sound as input. These artworks use audio analysis and custom software processes to extract meaningful data from the sound signal, creating a mapping between audio and other media. Frozen will feature digital prints as well as four "sound sculptures" created using digital fabrication technology such as rapid prototyping, CNC and laser cutting, which allow for the direct translation of a digital model into physical form."[2] This example clearly shows a new, emerging art form, quite different from the traditional ones, though the main focus of this exhibition was definitely the visual representation of sound. To be very clear, here sound was used as a brush to paint a soundscape, a sort of abstract landscape, and the end result of the exhibition was without doubt beyond expectation. For an overview of the exhibition and its impact, the websites given in the footnotes may be referred to.

This new domain of sound art is not confined to the physical or 'frozen' representation of sound; it has already brought forth many more sub-genres of contemporary art based on our day-to-day life. For instance, there are sonification, soundscape, sound installation, sound sculpture, sound poetry, radio art, noise music, electronic music and many more.

Sonification is described as the use of non-speech audio to convey information in our daily life [3] - for example, the SMS alert or ringtone of a cell phone. This is also known as 'auditory display', which is often essential for maintaining constant awareness of some vital function, for example of the body in an operation theatre. Since every science has an art of invention and application embedded in it, the scientific technique of sonification is naturally also used in traditional art forms like cinema and music. The main problem is that the use of sonification often turns out boring and monotonous, for lack of the research and exhibitions that could motivate traditional as well as contemporary artists to apply sonification more creatively, understanding its psychological and social impact in an updated sense.
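The core idea of sonification, data mapped to an audible parameter, fits in a few lines. The sketch below maps a series of normalised readings to tone frequencies; the base and span frequencies are arbitrary illustrative choices, and a real auditory display might instead map data to loudness, tempo or timbre.

```python
def sonify(readings, base_hz=220.0, span_hz=440.0):
    """Map each data value in [0, 1] to a tone frequency -- the core idea
    of sonification: non-speech audio carrying information."""
    return [base_hz + value * span_hz for value in readings]

# e.g. hourly temperature readings normalised to [0, 1]
readings = [0.0, 0.25, 0.5, 1.0]
print(sonify(readings))   # -> [220.0, 330.0, 440.0, 660.0]
```

A rising series of readings becomes a rising melody; a listener can follow the trend without looking at a screen, which is exactly what makes auditory display useful in an operation theatre.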



Soundscape is another domain that has immense raw material to inspire sound art in various directions, but unfortunately it has long been ignored in India for some unknown reason. The study of soundscape is the subject of acoustic ecology, which defines a space by its sound. Like the history of human society, there is also a history of the soundscape. Earlier the world was dominated by the natural sounds of babbling water, rustling leaves, chirping birds, wind and so on. Slowly these soundscapes began to be replaced by human-made sounds or noises; the pure natural soundscape of each place was gradually invaded by human noise. For example, with the development of agriculture and the dependence on modern technology, the sound of the electric motorized pump has become so prevalent in Indian villages that it could easily be used in a cinema to hint at a villagescape. In the same way, Indian cities are dominated by the sound of 'auto-rickshaws', i.e. three-wheeler mass transport vehicles. Moreover, from village to city the whole soundscape is becoming uniform because of unplanned urbanization in the countries of the 'third world'. Whether this loss of diversity is good or bad is certainly a matter of debate, but the reality is that the soundscape is changing, and naturally sound artists will feel the urge to capture this aural history and display it to audiences. Not only for this urge, but also to save the environment from noise pollution, the study of soundscape or acoustic ecology is extremely important. To draw people's attention to the sonic environment, an educational and research group called the WSP (World Soundscape Project) was established by R. Murray Schafer at Simon Fraser University during the late 1960s and early 1970s.[4] This group studied the sonic environments of various places and published its findings; one of its important publications is 'The Vancouver Soundscape'. Its influence later helped to form the 'World Forum for Acoustic Ecology'[5], which works to connect all the scholars and people concerned with the sonic environment of this world. The WSP also published soundscape compositions, served on the City of Vancouver's Urban Noise Task Force and contributed to the educational recommendations of its final report, 'Urban Noise'.[6] From the activity of the WSP it is clearly visible how the study of a sonic environment can be helpful both to human society and to the arena of art. Soundscape compositions reveal a new and larger context of sound art, of which music is only a part.

Sound installation is an extension of the traditional installation art form: it incorporates a sound element into a space so that the space is experienced in a different way. The main difference between traditional installation art and sound installation is that the latter includes a temporal element, since sound requires time to be perceived; a sound installation therefore demands time from its audience. Its advantage is that it engages the audience within the space. At the same time this art is site-specific, meaning it involves the space within its periphery, so if the installation is removed from the space it loses its value or charm. In a sound installation, musicians or performers may perform or produce the sound, or it may be designed with sound sources (e.g. speakers) placed at different points in the space. Precisely put, sound installation is a time-based art form that incorporates installation art, sound art and sound sculpture. Many contemporary artists and scholars have already experimented with this form.[7] In India there are a few academic institutions (e.g. SRFTI) and private organizations that practise and pioneer serious installation and soundscape design, though it is essential to create more such institutions for academic practice.

Sound sculpture is another popular term in sound art, again a time-based, intermedia art form. A sound sculpture is any kind of art object produced by the manipulation of sound, or vice versa. This form may or may not be site-specific. Cymatics (the study of wave phenomena) and kinetic art (art inspired by the study of movement, or by movement itself) have mostly influenced this form. The exhibition 'Frozen', mentioned at the beginning of this essay, had a few sound sculptures installed; for details refer to the footnotes.

Sound poetry, interestingly, is not at all a contemporary phenomenon. In the 20th century the Futurist and Dadaist vanguards were the first to introduce a form of poetry that emphasizes the phonetic aspect of human speech rather than the semantic and syntactic aspects. Sound poetry was thus precisely defined as 'verse without words'. This type of poetry has actually found its proper place in the performing arts, i.e. it is meant to be performed, and so it can also be seen as an effort to use the human voice as a sound-generating instrument. This form of poetry is said to have influenced the musique concrète movement of the late 1940s, which was mainly focused on electro-acoustic music.[8]

Radio art is one of the most popular sound arts. It is art produced by using sound in such a way that it can be transmitted. Though any form of audio art can be transmitted through radio, contemporary popular radio art is heavily dominated by human speech and, hence, songs.

 
Besides these, the abstract energy of sound can be manipulated in many other ways to produce different genres of audio art: for example noise music[9], electronic music, NIME (New Interfaces for Musical Expression)[10], text-sound art and intermedia arts like cinema that incorporate sound art, as well as forms like installation and sculpture that force (read: influence) sound art to break through into their periphery.
Unfortunately, the art of audio is not at all properly recognized in India. Proper research and academic practice in this field could easily open up a new arena of art, as well as bring recognition to the sound artists who are working silently across the country as mere sound recording and mixing engineers or sound designers, mostly for film and television. At the same time, this exploration of sound art can help us build a soothing sonic environment, something essentially required amid the rapid urbanization of Indian villages if development towards a more comfortable lifestyle is to continue without creating a detrimental sonic environment around us.


Friday 17 April 2009

Audio Mastering



The debate and discussion over mastering are really heating up. Mastering can seem a strange and very confusing audio technique, necessary but confined to a niche of a very few specialized mastering engineers. Is it really so difficult to understand? Let us first deal with the question, 'What is mastering?' or 'What is the definition of mastering in relation to audio?' The online Merriam-Webster dictionary (http://www.merriam-webster.com/dictionary/mastering%20/17-04-2009/1:02 am) says mastering is "3. to produce a master recording of (as a musical rendition)." Wikipedia (http://en.wikipedia.org/wiki/Audio_mastering 17-04-2009/1:05 am), with all the debate over whether it is too democratic to guarantee the authenticity of knowledge, puts it as "Mastering, a form of audio post-production, is the art of preparing and transferring recorded audio from a source containing the final mix to a data storage device (the master); the source from which all copies will be produced (via methods such as pressing, duplication or replication). The format of choice these days is digital masters, although analog masters, such as audio tapes, are still being used by the manufacturing industry and a few engineers who specialize in analog mastering." Another very interesting definition of mastering has been given by the online journal Music Biz Academy.com (http://www.musicbizacademy.com/articles/gman_mastering.htm /17-04-2009/1:10 am). It reads, in a very attractive, informal way: "Once you have finished recording and mixing your songs, the tracks are shaped, sculpted, scooped, equalized, compressed, and finessed into sonic splendor (well, you hope) through the audio process known as mastering. Mastering is what gives depth, punch, clarity and volume to your tracks. It is part science, part craft, and part alchemy. . . just like songwriting, singing, performing and recording."
There are many more ways in which audio 'mastering' is being defined every day across the world, but from these three popular definitions it is very clear that mastering is a technique for producing a master copy, a final track for replication, as well as for getting the best, most continuous sound out of the recorded material. Achieving a sense of continuity throughout a compilation of different tracks, or a single track, is also an essential goal of mastering.

Naturally the next point of curiosity brings in the question, "How is mastering done?" When recorded audio material arrives at the desk of the mastering engineer, there are a few steps he or she can take to achieve the goals stated above, usually the following manipulations:
Raise the overall level.
Raise the punch and warmth of the whole track.
Even out the track levels and EQ individual tracks (if available) for cohesion.
Correct minor mix deficiencies with EQ and plug-ins.
Manipulate the space between different tracks as required.
Eliminate noises between tracks.
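The first manipulation in the list above, raising the overall level, is in its simplest form peak normalisation: scale the whole track so its loudest sample reaches a chosen ceiling. The target peak of 0.9 below is an arbitrary illustrative choice, and real mastering chains use far more sophisticated limiters and loudness metering.

```python
def normalize(track, target_peak=0.9):
    """Scale a whole track so its loudest sample hits `target_peak` --
    the simplest form of 'raise the overall level'."""
    peak = max(abs(s) for s in track)
    if peak == 0:
        return list(track)        # silent track: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in track]

quiet_mix = [0.1, -0.3, 0.2, -0.15]
mastered = normalize(quiet_mix)
print(mastered)   # every sample scaled; the loudest now sits at 0.9
```

Because one gain is applied to the entire track, the balance between its elements is untouched; only the overall level changes, which is why this step is safe to do before any per-track EQ or compression.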

Of course, all these steps require the support of dedicated, high-quality, high-end gear to achieve the best sound. For a basic mastering studio the required gear is as follows.

EQUALIZERS
COMPRESSORS
CONSOLE
CONVERTERS
TUBE GEAR
DIGITAL AUDIO WORKSTATIONS
MONITORS.

There are many high-end professional audio devices on the market that could be bought to put together a good mastering set-up. For your convenience, here is a set of good equipment that could make up a high-quality mastering set-up.

1. EQUALIZERS
Massenburg Labs (http://www.massenburg.com/)
Avalon (http://www.avalondesign.com/)
Sontec
Weiss (http://www.weiss.ch/)
Millennia Media (http://www.mil-media.com/)
Pultec

2. COMPRESSORS
Daniel Weiss (http://www.weiss.ch/)
Focusrite Blue 330 (http://www.focusrite.com/)
Tube-Tech (http://www.tube-tech.com/)
Millennia Media (http://www.mil-media.com/)

3. CONSOLE
Crookwood (http://www.crookwood.com/)

4. CONVERTERS
Apogee (http://www.apogeedigital.com/)
Mytek (http://www.mytekdigital.com/)

5. TUBE GEAR
Giltronics

6. DIGITAL AUDIO WORKSTATIONS
SADiE (http://www.sadie.com/sadie_home.php)
Sonic Solutions (http://www.sonic.com/)

7. MONITORS
B&W (http://www.bowers-wilkins.com/display.aspx?infid=2441)
Dynaudio Acoustics (http://www.dynaudio.com/flash-index.php)

Courtesy: http://www.discmakers.com/

Useful Links:
1. http://www.discmakers.com/soundlab/masteringgear/index.asp
2. http://www.discmakers.com/soundlab/advice/index.asp
3. http://www.discmakers.com/soundlab/whatismastering/index.asp
4. http://www.musicbizacademy.com/articles/gman_mastering.htm

Copyright©SaibalRay