Audio Editing Explained – Common Functions And What They Do
Like all technology, audio editing software is packed with jargon – here’s what the most common functions do…
When first using the audio editing features of common DAW software (like Sonar or Cubase), the terminology used can be a bit confusing. As a musician, all you want to do is get the ideas in your head translated to a recording at the best possible quality.
If you’re scratching your head at every step, however, these obstacles can quickly sap your creative energy…
To help, here’s an explanation of what some of the most commonly used audio editing functions actually do.
Normalize
This is one of the most frequently used functions in an audio editor, and perhaps one of the most confusingly named.
Put simply, ‘normalize’ will make an audio file as ‘loud’ as it can possibly be. When recording in a digital environment, there is a limit to the maximum level at which audio can be played back (the top of the dynamic range).
On an analogue, tape-based recording system, exceeding this limit would result in mild distortion; in a digital system, exceeding the dynamic range results in very unpleasant digital clipping.
Normalizing scans an audio file and finds the peaks, then raises the amplitude of the whole file so that these peaks are right at the upper limit of the dynamic range, without clipping.
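In code terms, that scan-then-scale process is very simple. Here's a minimal sketch in Python, assuming mono audio stored as a list of float samples between -1.0 and 1.0 (a common convention, not something specific to any particular DAW):

```python
def normalize(samples, ceiling=1.0):
    """Scale the whole buffer so its loudest peak sits exactly at the ceiling."""
    peak = max(abs(s) for s in samples)  # scan: find the biggest peak
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    scale = ceiling / peak
    return [s * scale for s in samples]  # raise everything by the same amount

# A hypothetical quiet recording (illustrative values).
audio = [0.1, -0.5, 0.25, -0.05]
normalized = normalize(audio)
# The loudest peak (-0.5) now sits at -1.0, and the relative
# levels between samples are unchanged.
```

Note that every sample is multiplied by the same factor, which is why normalizing makes a file louder without changing its dynamics.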
Crop/ Truncate/ Trim
If you are recording live instruments, you’ll usually have periods of near silence when the instrumentalist isn’t playing. Typically, these will most often occur at the beginning and end of an audio file.
To make your recording as ‘neat’ and professional sounding as possible, you’ll probably want to remove these ‘noisy’ intro and outro sections.
This is easily done with the ‘crop’ function (sometimes called ‘truncate’ or ‘trim’). Select the area you want to keep, and crop will remove everything outside of it.
This process is sometimes called ‘topping and tailing’.
Reverse
This one is pretty straightforward – it plays the audio file from end to start, rather than start to end. Great for experimental effects, adding reversed reverbs, and other ‘out there’ techniques…
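Under the hood there's nothing mysterious about this: the editor just reads the samples back to front. A quick sketch, again assuming audio as a plain list of float samples:

```python
# A hypothetical short clip (illustrative values).
audio = [0.0, 0.3, 0.6, 0.2]

# Reversing simply reads the sample order backwards.
reversed_audio = audio[::-1]
```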
Invert
Though it might sound similar, ‘invert’ is very different from reverse. You might have tried this function and heard no audible difference on playback.
To understand this, we need to understand how sound works, to an extent. Essentially, sound is a vibration of air particles. This means that as the air particles vibrate, the air pressure increases and decreases, according to the vibration.
There are two distinct parts to these changes in air pressure: compression and rarefaction. To make understanding this process a bit simpler, think of a typical speaker. A speaker works by using a speaker cone to move air particles, causing them to vibrate in a way that reproduces the original sound.
When the speaker is extending, it compresses the air, and when it recoils, it rarefies the air. When it isn’t playing anything, this resting state corresponds to the ‘zero’ centre line of an audio file, with compression above the line, and rarefaction below.
Still with me?
Invert flips this so that compression becomes rarefaction and vice versa.
Why would you want to do this? When playing back different audio tracks at the same time, tracks that are compressing at the same moment as others are rarefying can cancel each other out.
If this happens, inverting the audio (also known as flipping the phase) can fix it.
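Both the inversion and the cancellation are easy to see in code. A sketch, assuming a hypothetical snare hit stored as float samples:

```python
# A hypothetical snare hit (illustrative values).
snare = [0.0, 0.4, -0.2, 0.1]

# Inverting negates every sample: compression becomes rarefaction.
inverted = [-s for s in snare]

# Mixing a wave with its own inverted copy cancels to digital silence,
# which is exactly the problem phase cancellation causes between tracks.
mixed = [a + b for a, b in zip(snare, inverted)]
```

This is also why invert alone sounds identical on playback: your ear can't hear absolute polarity, only the interaction between tracks.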
Zero crossing/ snap to zero crossing
So, we know that sound waves compress and rarefy air particles, but this means that as they move from one state to the other, they must cross a point where they do neither.
This is known as a zero crossing, and is the point at which a wave crosses the zero line when viewed in an editor.
A ‘snap to zero crossing’ feature means that the playback head or selection line/ tool will always move to the nearest zero crossing of a wave.
Why is this useful? If you think back to how sound works, starting an audio file anywhere other than at a zero line means that the speaker will immediately have to jump from a ‘zero’ resting phase, to either a compressed or rarefied state. This will be audible as a pronounced ‘click’.
Starting your audio files at zero, however, will mean that playback will always start with the speaker at its normal ‘resting’ point. Simple.
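To make the idea concrete, here's a minimal sketch of how a snap-to-zero-crossing feature might work. The helper names and sample values are hypothetical, not taken from any particular editor:

```python
def zero_crossings(samples):
    """Indices where the wave moves from one side of the zero line to the other."""
    return [i for i in range(1, len(samples))
            if (samples[i - 1] < 0) != (samples[i] < 0)]

def snap_to_zero_crossing(samples, index):
    """Move an edit point to the nearest zero crossing, if any exist."""
    crossings = zero_crossings(samples)
    if not crossings:
        return index
    return min(crossings, key=lambda c: abs(c - index))

# A hypothetical wave: positive, then negative, then positive again.
wave = [0.3, 0.1, -0.2, -0.4, 0.05, 0.3]
# The wave crosses zero between samples 1 and 2, and between 3 and 4,
# so an edit point anywhere in the file snaps to index 2 or 4.
```

Cutting or starting playback at one of these snapped indices avoids the sudden jump in speaker position that causes the click.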
Stay posted for more jargon-busting guides…