Amplifier Simulator

A device, piece of software, or plug-in that emulates the coloration added to a signal by an amplifier, particularly an instrument amplifier such as those used with guitars and bass guitars. Amplifier simulators will typically emulate the effects of an amp’s preamp section, power amp section, any built-in effects (such as spring reverb), and the connected speaker and its enclosure or cabinet. Convolution or modeling is often used to generate the most accurate emulation.

BPM

Abbreviation for Beats Per Minute, it is the standard way in which musical tempos are denoted, especially for use in electronic music composition tools like sequencers. 120 BPM means that in one minute there will be 120 musical beats regardless of any other variables such as time signature.
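
Since tempo math comes up constantly in sequencing, here is a minimal sketch (the function name is my own) of the conversion from BPM to the length of one beat:

```python
def beat_duration_seconds(bpm: float) -> float:
    """Duration of one beat: 60 seconds per minute divided by beats per minute."""
    return 60.0 / bpm

print(beat_duration_seconds(120))  # 0.5 -> at 120 BPM, each beat lasts half a second
```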

Breath Controller

Breath controller (a.k.a. controller number 2) is a MIDI continuous controller command set aside for parameters lending themselves to breath control. To fully understand why something seemingly this obscure has a designated controller number, one has to go back to the early days of MIDI, when the Yamaha DX-7 came out. The DX-7 utilized a breath control device to add realism to certain types of sounds such as brass and woodwind instruments. The breath controller itself was a small device that connected to a proprietary port on the back of the DX-7. A musician could place it in the mouth (like a whistle) and blow through it. The air velocity was measured and turned into control data inside the DX. The control data could then be used to open a filter or drive some other assigned parameter, letting the player manipulate the sound without having to do anything special with his/her hands or feet. MIDI was in its infancy at the time, and the DX-7 was an extremely popular and groundbreaking instrument in a number of respects. As such, it seemed likely that breath control would become a common way of manipulating synth parameters in real time, and so it made sense for a controller to have this function. In reality there is nothing unique about controller #2 compared to most of the other controllers. It can be used for any common continuous controller command so long as you set up the transmitting and receiving devices accordingly. You will simply see it referred to as breath controller pretty frequently in documentation. While breath controllers aren’t as popular today as we once thought they would be, there are quite a few players who use them.

Eurorack

A standardized modular synthesizer format, developed by Doepfer, that is somewhat analogous to the popular 500-series modular audio processor standard. The Eurorack format consists of a chassis and various chassis-mountable modules. In addition to a framework for mounting the modules, the chassis also provides +/- 12-volt and +/- 5-volt power. A number of manufacturers make analog synthesizer modules (oscillators, filters, envelope generators, and more) in the Eurorack format.

Event List

In MIDI sequencers an event list is a way to look at a written index of all the recorded MIDI messages or events. While not used often in today’s graphic-heavy software sequencer environments, event lists provide users with the ability to edit MIDI events precisely and comprehensively. Event lists are one of three ways to view or edit messages; the others are graphic editing, which is the most common by today’s standards, and notation editing, which is not available in all sequencers.

Granular Synthesis

A sophisticated (and esoteric) form of additive synthesis (see WFTD archive Additive Synthesis) combining sound elements called “grains,” which are used to make up sonic “events.” Events are time-sliced into “screens” that end up containing the amplitude and frequency dimensions of hundreds of events. Very complex sounds can be created using this technique, but the computational power required to generate them is so great that it has not been practical to use this form of synthesis in any commercially available hardware machines.
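
As an illustration of the basic grain idea only (not the full “screen”/“event” formalism described above; all names and values here are my own), a short NumPy sketch that overlap-adds hundreds of windowed sine grains into a sound cloud:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def grain(freq, dur, sr=SR):
    """One 'grain': a very short sinusoid shaped by a Hann window."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def grain_cloud(freqs, grain_dur=0.03, hop=0.015, sr=SR):
    """Overlap-add one grain per entry in freqs, spaced 'hop' seconds apart."""
    hop_n = int(hop * sr)
    out = np.zeros(hop_n * len(freqs) + int(grain_dur * sr))
    for i, f in enumerate(freqs):
        g = grain(f, grain_dur, sr)
        start = i * hop_n
        out[start:start + g.size] += g
    return out

# a six-second cloud of 400 grains at randomly scattered frequencies
cloud = grain_cloud(np.random.uniform(200.0, 2000.0, size=400))
```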

Harmonic

In audio a harmonic is sort of the opposite of a fundamental, though technically the fundamental is also considered a harmonic. Pretty confused? Harmonics of a particular waveform are multiples of its fundamental frequency. The first multiple is obtained by multiplying the fundamental frequency by one (1). Therefore in a strict sense the first harmonic is the same value (frequency) as the fundamental. The rest of the “harmonic series” (2x, 3x, 4x, etc.) of a sound make up the basic character, or timbre, of the sound based upon all of their relative amplitudes (levels).
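
The arithmetic is simple enough to show directly; this small sketch (function name my own) lists the first eight harmonics of a 110 Hz fundamental:

```python
def harmonic_series(fundamental_hz, count=8):
    """Harmonics are integer multiples of the fundamental,
    which is itself the first harmonic (1 x fundamental)."""
    return [n * fundamental_hz for n in range(1, count + 1)]

print(harmonic_series(110.0))
# [110.0, 220.0, 330.0, 440.0, 550.0, 660.0, 770.0, 880.0]
```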



In the discourse of guitar playing (though this concept applies to all stringed instruments) a harmonic is a technique where a string is made to sound at some multiple of its fundamental frequency. This is achieved by applying light pressure at some point along the length of the string and exciting it into vibration (usually with a pick). With this technique the fundamental frequency of the string is (nearly) muted by the pressure, but depending upon where along the length of the string pressure is applied, the harmonics are excited differently. This has the effect of changing the apparent pitch of the note played, but the notes always have some relationship to the fundamental frequency of that string at its given tension and length. This action is fundamentally (no pun intended) different from fretting a string, which actually changes the length of the string and creates a new fundamental frequency.

Keymap

Refers to a function in modern keyboards and synthesizers that use sample data for raw sounds. The keymap is what defines or assigns each sample to a particular key or key range (or each key to a sample, depending on how you look at it). This is sometimes confused with a zone, but in most keyboards zones are distinct and separate from keymaps. It depends on the architecture of the specific instrument, but keymaps are usually at a much lower level of the hierarchy than zones. If you were to make a sample-based piano program, for example, one of the first steps would be to assign your individual samples to the specific set of keys that will trigger them. This will cause the proper samples to play over the range of the keyboard. In most modern instruments this “sound” can then be layered with other sounds or routed through effects and filters to create the final program or patch. To make things confusing, keymaps are not always called keymaps, though the word keymap is by far the most descriptive of what they are. Some brands of keyboards refer to it as a Key Group, Voice, Multisample, or Wave.

MIDI Clock

A MIDI timing reference signal used to synchronize pieces of equipment together. MIDI clock runs at a rate of 24 ppqn (pulses per quarter note). This means that the actual speed of the MIDI clock varies with the tempo of the clock generator (as contrasted with time code, which runs at a constant rate). Also note that MIDI clock does not carry any location information – the receiving device does not know what measure or beat it should be playing at any given time, just how fast it should be going.
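
Because the pulse rate is tied to tempo, the interval between clock pulses can be computed directly; a minimal sketch (function name my own):

```python
def midi_clock_interval_ms(bpm: float, ppqn: int = 24) -> float:
    """Milliseconds between MIDI clock pulses: one quarter note lasts
    60/BPM seconds, and MIDI clock subdivides it into 24 pulses."""
    return (60.0 / bpm) / ppqn * 1000.0

print(midi_clock_interval_ms(120))  # ~20.83 ms between pulses at 120 BPM
```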

MIDI Delay

This is one of those terms that has been bandied about in the industry over the years and has come to have several subtly different meanings. The original meaning of MIDI delay refers to the time it takes for any active MIDI circuit to handle the signal. Just passing MIDI into, and then directly out of, any device (even without doing anything to it) takes some finite amount of time because of the electronics involved in managing and buffering the signal. This is MIDI delay, and in most cases it is well under 5 ms. The delay is cumulative, though. So if you pass your signal through several devices it may be significantly delayed by the time it gets to the last device. Some people also refer to the time it takes an instrument to respond to MIDI commands as MIDI delay. While true MIDI delay is one component of this, there are other factors, such as the speed of the processor in the device. Some instruments react more slowly as they are asked to do more (for example, play more notes at once), but this is technically not MIDI delay. Some musicians claim to be able to hear/feel MIDI delay and do not like performing in situations where MIDI is used. While it’s pointless to dispute what a person says they can perceive, it is important to note that, given the speed of sound in air, the sound leaving a speaker cabinet on one side of a 20-foot-wide stage would take about 20 ms to reach the ear of a player on the other side.
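
That last comparison is easy to verify; a quick sketch of the acoustic-delay arithmetic (constant and function name my own, using the approximate speed of sound in air):

```python
SPEED_OF_SOUND_FT_PER_S = 1125.0  # approximate speed of sound in air at room temperature

def acoustic_delay_ms(distance_ft: float) -> float:
    """Time for sound to travel a distance through air, in milliseconds."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

print(acoustic_delay_ms(20))  # ~17.8 ms across a 20-foot stage (the 'about 20 ms' above)
```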

MIDI Implementation Chart

MIDI implementation refers to the specific MIDI messages and signals a piece of gear can recognize; a MIDI implementation chart is therefore a listing of the messages a particular device can transmit and recognize. This can be very useful when attempting to determine if a device can send and/or receive various types of channel or system messages. Normally found in the back of the device’s manual, its MIDI implementation chart will consist of a list of available MIDI messages, whether the device incorporates those messages, and any special notes or limitations on how it deals with those messages. For example, the chart will list the MIDI channels and modes, note numbers, and continuous controllers the device can respond to. Support for aftertouch, velocity, pitch bend (often with bit resolution), and program change will be indicated. Also listed will be recognition of system exclusive, system real time (clock commands), system common (song position, song select, etc.) and aux messages (local on/off, all notes off, active sensing, and so on).

MIDI Mode

One of several ways in which a device can respond to incoming MIDI information. There are two parts to each mode, one defining whether it is monophonic or polyphonic, and the other determining whether it is multitimbral or not. Four modes are included in the MIDI spec, and two others, Multi Mode and Mono Mode (for MIDI guitar), were developed later.



Mode 1: Omni On/Poly – Device responds to MIDI data regardless of channel, and is polyphonic. (See WFTD “Polyphonic“)


Mode 2: Omni On/Mono – Device responds to MIDI data regardless of channel, and is monophonic. This mode is rarely, if ever, used.


Mode 3: Omni Off/Poly – Device responds to MIDI data only on one particular channel, and is polyphonic. This is the normal mode for most keyboards that are not functioning multitimbrally.


Mode 4: Omni Off/Mono – Device responds to MIDI data only on one particular channel, and is monophonic.


Multi Mode – Used by many devices for multitimbral operation. An expanded version of Mode 3, Multi Mode allows the device to respond to several independent MIDI channels at once, with each being polyphonic. (See also WFTD “Multitimbral“)



Mono Mode – Used for MIDI guitar applications, Mono Mode is an expanded version of Mode 4, allowing for six Omni Off/Monophonic channels to be used at once, one for each string of the controller. This allows for better tracking, independent pitch bend per channel, and a separate sound or patch assignment per channel.

MIDI Polyphonic Expression (MPE)

MPE, which stands for MIDI Polyphonic Expression, is a method of using MIDI that allows multidimensional controllers (MDCs), such as the ROLI Seaboard and the LinnStrument, to control multiple parameters (such as pitch bend, vibrato, timbre, volume) of every single note separately when using MPE-compatible software. Normally, channel-wide MIDI messages like pitch bend are applied to all notes on a single channel. MPE allows each note to have its own MIDI channel so that those parameters can operate independently per note.
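
A minimal sketch of that one-channel-per-note idea (class and method names are my own; this ignores voice stealing and zone configuration, and assumes the common “lower zone” layout where channel 1 is the master channel and channels 2–16 are per-note member channels):

```python
class MPEChannelAllocator:
    """Sketch of MPE's core mechanism: each sounding note is assigned its own
    MIDI channel so per-channel messages (pitch bend, pressure, CC74 'timbre')
    affect that note alone."""

    def __init__(self, member_channels=range(2, 17)):
        self.free = list(member_channels)
        self.active = {}  # MIDI note number -> channel

    def note_on(self, note):
        ch = self.free.pop(0)  # a real allocator would handle running out of channels
        self.active[note] = ch
        return ch  # send note-on, and later bend/pressure for this note, on ch

    def note_off(self, note):
        ch = self.active.pop(note)
        self.free.append(ch)
        return ch

alloc = MPEChannelAllocator()
print(alloc.note_on(60), alloc.note_on(64))  # 2 3 -> the two notes bend independently
```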

Overtone

Similar in concept to a harmonic. Overtones are tones produced by an instrument (or sound source) that are higher in frequency than the fundamental. They may or may not coincide with the frequencies of a harmonic series (harmonics), but they usually do. The difference is that harmonics are always musically related to the fundamental in that they are integer multiples of it. Overtones of a sound are often exactly the same as its harmonics except the first overtone is considered the second harmonic because the first harmonic is the fundamental. Overtones are also sometimes called partials (more on them later).

Native Kontrol Standard (NKS)

An extended plug-in format introduced in late 2015 by Native Instruments. This format allows KONTAKT instrument and plug-in developers to integrate their plug-ins with Native Instruments KOMPLETE KONTROL and MASCHINE software and hardware.

Partial

Any one of a series of tones which usually accompany the prime tone (fundamental) produced by a string, an organ-pipe, the human voice, etc. The fundamental is the string tone produced by the vibration of the whole string, or the entire column of air in the pipe; the partial tones are produced by the vibration of fractional parts of that string or air-column. Harmonic tones such as these are also obtained, on any stringed instrument which is stopped (guitar, violin, zither), by lightly touching a nodal point of a string.

Physical Modeling Synthesis

A type of sound synthesis performed by computer models of instruments. These models are sets of complex equations that describe the physical properties of an instrument (such as the shape of the bell and the density of the material) and the way a musician interacts with it (blow, pluck, or hit, for example).
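
The Karplus-Strong plucked-string algorithm is a classic, very simple example of the approach, shown here as an illustrative sketch (not the method used by any particular commercial instrument):

```python
import numpy as np

def karplus_strong(freq, dur, sr=44100):
    """Karplus-Strong: a burst of noise circulating through a delay line with
    a two-point average (a crude low-pass filter) behaves like a plucked,
    naturally decaying string whose pitch is set by the delay length."""
    n = int(sr / freq)                  # delay-line length determines pitch
    buf = np.random.uniform(-1, 1, n)   # the 'pluck' excitation
    out = np.empty(int(dur * sr))
    for i in range(out.size):
        out[i] = buf[i % n]
        # averaging adjacent samples models energy loss in the string
        buf[i % n] = 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out

note = karplus_strong(220.0, 1.0)  # one second of a plucked A
```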

Ring Modulator

A type of audio mixer combining two audio signals and outputting the sum and difference of their frequencies. The frequencies found in the original signals are not passed through to the output. For example, if two sine waves (single-frequency waveforms containing no overtones) are input, one with a frequency of 1000 Hz and the second at 400 Hz, the ring modulator will output two frequencies: 600 Hz and 1400 Hz. With more complex waveforms (which contain many more overtone frequencies) ring modulators produce a clangorous, “metallic” result often used for special effects, in synth programming, and so on. One popular use has been to process vocals, which produces sci-fi-sounding “robotic” voices.
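
Ring modulation is literally multiplication of the two signals, and the trigonometric product-to-sum identity predicts the 600 Hz and 1400 Hz outputs in the example above. A short NumPy sketch verifying this:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                    # one second of time values

carrier = np.sin(2 * np.pi * 1000 * t)    # 1000 Hz sine
modulator = np.sin(2 * np.pi * 400 * t)   # 400 Hz sine
ring = carrier * modulator                # ring modulation = multiplication

# sin(a)sin(b) = 0.5cos(a-b) - 0.5cos(a+b): energy lands at 600 Hz and 1400 Hz
spectrum = np.abs(np.fft.rfft(ring))
peaks = np.argsort(spectrum)[-2:]         # bin index equals Hz for a 1 s signal
print(sorted(int(p) for p in peaks))      # [600, 1400]
```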

Sample and Hold

Sample and Hold is a circuit that is used to take a changing analog signal and literally hold it so that a following circuit or system, such as an ADC (Analog to Digital Converter), has the time it needs to process it. At its simplest, a sample and hold circuit is a capacitor and a switch. The capacitor is used to store the analog voltage for a short time, and an electronic switch is used to alternately connect and disconnect the analog input to the capacitor. When the switch closes, the capacitor charges up to, or discharges down to, the input voltage. This is the sampling function. Once the switch opens, the voltage across the capacitor remains constant, since no current can flow due to the infinite resistance created by the open switch (the hold). However, the voltage across the output is still measurable. In the real world, resistance can never be infinite, so the voltage stored in the capacitor will decay slowly. The quality of a sample and hold circuit is measured by the rate of the voltage decay. The rate at which the switch opens and closes is the sampling rate of the system.



Sample and Hold sections are found on some of the older synthesizers made by Moog and ARP. Taking a random input from a noise generator and turning it into a variety of musically useful effects was the most common use of the S & H section.
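
A minimal, idealized sketch of that sampled-noise use (function name my own; a real circuit's held voltage would also droop slowly as the capacitor discharges):

```python
import numpy as np

def sample_and_hold(signal, sr, rate_hz):
    """Idealized S&H: read the input every 1/rate_hz seconds (switch closed)
    and hold that value until the next read (switch open)."""
    period = int(sr / rate_hz)
    out = np.empty_like(signal)
    for start in range(0, signal.size, period):
        out[start:start + period] = signal[start]   # hold one value per period
    return out

sr = 44100
noise = np.random.uniform(-1, 1, sr)         # one second of noise
stepped = sample_and_hold(noise, sr, 8.0)    # the classic random stepped control signal
```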

Standard MIDI File (SMF)

A standardized file format for saving MIDI sequences independent of the platform they were created on. Standard MIDI Files allow musicians with completely different types of computers or sequencers to exchange MIDI sequences. There are two types, Type 0 (single track) and Type 1 (multitrack). Each type contains the same information, but in a Type 0 file all MIDI channels are combined into one track (MIDI channel assignments and other information are not lost), while in a Type 1 file each track is kept separate.
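
For a concrete example, here is how a one-note Type 1 file might be written with the third-party Python library mido (assuming it is installed; any SMF-capable library would do):

```python
# requires the third-party 'mido' package (pip install mido)
import mido

mid = mido.MidiFile(type=1)   # Type 1: multitrack
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('program_change', program=0, time=0))
track.append(mido.Message('note_on', note=60, velocity=64, time=0))
track.append(mido.Message('note_off', note=60, velocity=64, time=480))  # delta time in ticks

mid.save('example.mid')       # playable in any SMF-aware sequencer
```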

System Exclusive

One of the categories of MIDI messages, System Exclusive (Sys Ex) is data intended for, and understood by, only one particular piece of gear. Normally, this data is used to communicate with and control parameters specific to that item. For example, all of the proprietary data in a Roland D-110 synthesizer representing RAM patches might be sent as a “sys ex dump” to a computer librarian. When the computer sends this data back out over MIDI, the only device recognizing and responding to it will be a D-110; all other synths and MIDI devices will ignore it. Other uses for sys ex? MIDI control of parameters not supported by continuous controllers, remote patch editing, patch bank select, and more – uses depend on, and can be tailored for, each specific piece of MIDI gear – that’s the beauty of sys ex!

Virtual Analog

A digital synthesizer that mimics the circuitry found in an analog synthesizer. A Virtual Analog synth emulates analog characteristics by implementing mathematical models of analog circuitry. Analog modeling is a type of physical modeling, which imitates the electronic properties of circuit components rather than the mechanical or acoustical qualities of some device.



It’s important to understand that digital synthesizers as they are currently implemented don’t exactly model the changes in voltage an analog synth uses to operate. Analog’s voltage fluctuations are smooth, continuous, and infinitely variable, and the interactions that take place between all the components under different conditions are highly variable and dynamic. Digital synths, on the other hand, represent signal changes as numbers. Digital signals and parameter values are quantized into a finite number of discrete steps. How these steps ultimately become manifested as an analog signal (or control how that signal is generated), and ultimately the quality of that signal, will depend upon the implementation of the software and the D/A converter at the end. Even where this implementation may be “perfect,” and can produce a perfect replica of an analog signal at a moment in time under one set of conditions, it may fall short in the next moment under a slightly different set of conditions just due to the enormous complexity of all the possible variables of an analog system as the components interact with each other and the environment. In theory every single aspect of a device’s operation can be modeled, but in practice they are not, and this is one place inaccuracies in the final result can creep in. There are many ongoing advances in this type of modeling, so expect to see better and more realistic implementations in the future.

Virtual Instrument

A computer program that emulates the performance of an analog or digital synthesizer, a sampler, or an acoustic instrument. Virtual instruments earn this name because they appear to operate entirely as software with no physical “box.” Strictly speaking, though, they simply utilize the host computer’s CPU and internal or external audio hardware to generate sounds in place of the dedicated, proprietary hardware of most of the keyboards and synthesizers we’ve been used to over the years. Virtual instruments can be of relatively simple design, such as a collection of samples with a playback engine, or they can use complex modeling algorithms to emulate analog synths of the past (called “virtual analog” synths). Most of these instruments will respond to MIDI continuous controller messages in the same manner as a hardware synthesizer.



Virtual instruments often can operate in two modes. First, they function as a plug-in in compatible host programs such as Pro Tools, Digital Performer, SONAR, or other audio/MIDI sequencers. To do so, the virtual instrument must be written to support the audio format used by the host program, such as VST, MAS, DirectX or Audio Units. In addition many virtual instruments can function in standalone mode, which means they can be played and programmed without requiring a host program to be open.

VST Instrument

A software-based musical instrument, such as a synthesizer or sampler, that works in Steinberg’s VST environment. These are referred to as VST Instruments in Steinberg and other software applications.

Wavetable Synthesis

A method of sound synthesis in which waveforms are generated by loading their characteristics from a special set of parameters stored in a lookup table in computer memory. Advanced wavetable synthesizers are able to crossfade between different waveforms while notes are sounding, which can produce very complex sounds. The resulting complex waveforms are often further modified by other filtering techniques and envelope generators.
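
A minimal sketch of table-lookup playback with the crossfading described above (names and values are my own; real instruments would interpolate between table samples rather than truncating):

```python
import numpy as np

SR = 44100
N = 2048                                        # table length in samples
saw = 2.0 * np.arange(N) / N - 1.0              # one cycle of a sawtooth
sine = np.sin(2 * np.pi * np.arange(N) / N)     # one cycle of a sine

def wavetable(freq, dur, table_a, table_b, sr=SR):
    """Phase-accumulator playback that crossfades from table_a to table_b
    over the course of the note (truncating lookup for brevity)."""
    n = int(dur * sr)
    phase = (np.arange(n) * freq * N / sr) % N  # fractional table position
    idx = phase.astype(int)
    mix = np.linspace(0.0, 1.0, n)              # 0 = all table_a, 1 = all table_b
    return (1.0 - mix) * table_a[idx] + mix * table_b[idx]

tone = wavetable(220.0, 2.0, sine, saw)         # a sine that morphs into a sawtooth
```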

Battle Mixer

Battle mixers are usually small, 10-inch-wide, two-channel mixers. They have very uncluttered faces, and the faders are slick, smooth, and made for lots of heavy use. The faders have sharp cut-in points or adjustments that allow you to choose the sharpness of the cut-in.

Booth

Isolation rooms and smaller iso-booths are acoustically sealed areas built into and (hopefully) easily accessible from the main studio area and/or control room. These areas provide improved separation between loud and soft instruments such as guitars and vocals.

Cartridge

A cartridge is a small device attached to the end of a cantilever, which contains the stylus and electro-magnetic system needed to play a vinyl record on a turntable. The two main types of cartridges are moving magnet and moving coil.

EDM

Abbreviation for “Electronic Dance Music.”

Flight Case

A road case, ATA case or flight case is a shipping container specifically built to protect musical instruments, motion picture equipment, audio and lighting production equipment, properties, firearms, or other sensitive equipment when it must be frequently moved between locations by ground or air.

Ground cable on turntable

A ground cable connects the chassis of a turntable to the grounding post on a mixer, amplifier, or phono preamp. Because the signal from a phono cartridge is very weak and heavily amplified, an ungrounded turntable typically produces an audible low-frequency hum; attaching the ground cable eliminates it.

House

House is a genre of electronic dance music characterized by a repetitive four-on-the-floor beat and a tempo of 120 to 130 beats per minute. It has spawned numerous subgenres, such as acid house, deep house, hip house, ghetto house, progressive house, tech house, electro house, and many more.

Phono Preamp

Short for phonograph preamplifier, a special type of preamplifier designed to handle the output of phonograph cartridges, which are transducers designed to turn the grooves in a phonograph record into electrical energy that can be amplified for a playback system. The phono preamp is a circuit that boosts the very weak signal coming out of the aforementioned cartridges up to something closer to line level so it can be properly handled by the other components in a hi-fi system. Additionally, the preamp’s job is to apply equalization to the signal to restore it to its original form. In order to make it easier to manufacture phonograph records, and to make them more universally playable, it was determined years ago that special equalization would be applied during mastering. The RIAA came up with an equalization curve – now known as RIAA Equalization – that, among other things, lowered the level of low frequency information relative to other frequencies (see WFTD RIAA Equalization for more background). In order for records to play back properly, the opposite EQ has to be employed in the phono preamp. Not all phonograph cartridges require the same amount and type of equalization, though. High-end or audiophile preamps allow the user to set certain parameters to better tailor the response of the preamp to the cartridge being used.

Slip mat

A slipmat is a circular piece of slippery cloth or synthetic material that disc jockeys place on the turntable platter instead of the traditional rubber mat.

Stylus

The element in a phonograph cartridge that rides in the groove of a record. It consists of a small arm called the cantilever and the stylus tip. They are designed to be able to move in two orthogonal directions at once. (Orthogonal refers to two [or more] phenomena that can exist in the same medium at the same time and not interfere with one another. The vertical and horizontal motion of a properly working stylus tracking a record groove is an example of orthogonal motion.) Over the years many different shapes and materials have been tried with varied results, and of course, like most things audio, there is little agreement about what the ‘best’ system is.

Techno

Techno is a form of electronic dance music that became popular in Detroit, Michigan during the mid-1980s. At the same time, the word "techno" is commonly used when talking about all forms of electronic and dance music, especially in Europe, the Americas, and Australia.

Absorption

In acoustics (as opposed to paper towels), the opposite of reflection. Sound waves are “absorbed” or soaked up by soft materials they encounter. Studio designers put this fact to work to control the problem of reflections coming back to the engineer’s ear and interfering with the primary audio coming from the monitors. The absorptive capabilities of various materials are rated with an “Absorption Coefficient,” which is a measure of the relative amount of sound energy absorbed by that material when a sound strikes its surface. (See also WFTD “Anechoic“)

Acoustic Treatment

Acoustically treating a room is necessary in audio production because very few “spaces” have the physical qualities that make for accurate monitoring or desired recording. There are many things that can be done to a space before and during construction to optimize its acoustic behavior. These include the shape of the space, its isolation, and the surface materials. Once a room is already constructed, acoustic treatment mostly consists of treating the surfaces. There are two primary elements to consider: absorption and diffusion. Acoustic foam is well suited to alleviating slap and flutter echo, the two most common problems in rooms not specifically designed for music recording and performance. In fact, foam can turn even the most cavernous warehouse or gymnasium into a suitable acoustic environment. Diffusion keeps sound waves from grouping, so there are no hot spots or nulls in a room. In conjunction with absorption, diffusion can effectively turn virtually any space into one that is appropriate and useful for the purpose of recording or monitoring sound with a high degree of accuracy.

Bass Trap

A device used to help acoustically tune a room. Enclosed spaces all have resonant frequencies based upon the various dimensions of the space. As a room becomes energized with sound certain frequencies will build up or be cancelled at various locations around the room based upon its shape and dimensions. A bass trap is a low frequency sound absorber used to reduce the effects of standing waves in a room. They are usually placed in corners or along wall joints where low frequency energy tends to build up. The absorption qualities of bass traps prevent low frequencies from interfering with each other throughout the rest of the room, which results in much more accurate response in the listening area. Bass traps come in many shapes and sizes and employ a variety of construction techniques. Some are tuned to kill a narrow band of frequencies while others are designed to cover a broad range.

Diffraction

A phenomenon in the propagation of waves where the direction of a wave front (either a sound wave or an electromagnetic [light] wave) is altered when passing by an object or through a small aperture in a large surface. At shorter wavelengths relative to the obstacle, sound (and light) will tend to reflect off the surface more and bend around it less (which partially explains why you can hear, but not see, the stage at a concert when someone is standing in front of you). Waves will also bend to fill the space behind an opening in a surface (which partly explains why you can hear someone talking in the next room through an open door even though you can’t see them).

Diffusion/Diffuser

Diffusion is the process of spreading or dispersing radiated energy so it is less direct or coherent. A diffuser is a device that does this. The plastic covers over fluorescent lights in many office environments are diffusers. They make the light spread out in a more randomized way so it is less harsh. In audio, diffusion is a characteristic of any enclosed (or partially enclosed) space. It is caused by sound waves reflecting off many complex surfaces. For example, a flat concrete wall produces a pretty distinct echo when sound reflects off of it. However, a brick wall, while still pretty reflective, tends to diffuse the sound reflections and produces a much less distinct echo. This is due to both the surface of the brick itself and the mortar between the bricks (more specifically, the edge diffraction at the joint between the two). All surfaces will of course differ, and it is usually a variety of surfaces that creates the most randomized diffusion of sound. Diffusion is a very important consideration in acoustics because it minimizes coherent reflections that cause problems. It also tends to make an enclosed space sound larger than it is. Diffusion is an excellent alternative or complement to absorption in acoustic treatment because it doesn’t really remove much energy, which means it can be used to effectively reduce reflections while still leaving an ambient or live-sounding space.

Early Reflections

According to standard definitions, early reflections are sounds that arrive at the listener after being reflected once or twice from parts of the listening space, such as the walls, ceiling, and floor. They arrive later than the direct sound, often in a range from 5 to 100 milliseconds, but before the onset of full reverberation. Early reflections give your brain information about the size of the room and the distance of sounds within it. They have an important role in determining the general character and sound of the room.



There are those who disagree with the assumption that reverb is based on two discrete components, “early reflections” and the “reverberant field.” The early reflections are often recreated by using a bunch of taps off a delay line, supposedly representing the sound reflected for the first time from all the walls and ceiling. The reverberant field is a diffuse scrambling of the early reflections, with some kind of feedback to keep it going.
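
A minimal sketch of that tapped-delay-line approach (the tap times and gains here are arbitrary illustration values, not taken from any real unit):

```python
import numpy as np

def early_reflections(dry, sr, taps=((0.011, 0.6), (0.023, 0.45), (0.041, 0.3))):
    """Add a few delayed, attenuated copies of the dry signal, one per
    (delay_seconds, gain) tap -- the 'bunch of taps off a delay line'."""
    out = dry.copy()
    for delay_s, gain in taps:
        d = int(delay_s * sr)
        out[d:] += gain * dry[:-d]   # a copy arriving d samples late
    return out

sr = 44100
click = np.zeros(sr)
click[0] = 1.0                        # an impulse makes each tap audible
wet = early_reflections(click, sr)    # impulse plus reflections at 11, 23, and 41 ms
```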



Both Lexicon and Quantec assert that early reflections are a purely academic concept, if not a myth altogether – at least in terms of designing their signal processors, which stand, by far, as some of the best reverb units in the world.



Quantec’s argument against early reflections is simple and philosophical: A room is just one signal processor of sound. It doesn’t have the intelligence to separate out the two concepts or even to care about whether a sound is direct or reflected. It just bounces and diffuses all sound, no matter what its source.



Lexicon’s argument against early reflections is a little more descriptive: they also argue that a room just reflects and diffuses sound, irrespective of source, but point out as well that you could be hearing second- or third-generation reflections from the areas around you before you even get the first reflection off the back wall. Reverb, according to Lexicon, is an extremely complex reflection and diffusion pattern that builds up to a dense thickness from the moment you hear the original dry sound onwards. That’s why the controls on a Lexicon are quite different from those on other reverb units, including parameters like “spread” and “shape” to control how the reverb thickens and builds up before decaying.



Nevertheless, many artificial reverb units do treat the early reflections separately and have a separate group of parameters to adjust accordingly. And regardless of what terminology we want to apply, or whether we wish to address specific parameters in signal processing equipment, it is known that there can be some discrete delays or echoes reaching a listener in a space that could potentially be characterized as separate from the overall reverberation. That’s what people refer to when speaking of early reflections.

Aspect Ratio

This term is used to describe an image on a TV or movie theater screen, and is defined as the width of the image divided by the height. In the case of a standard TV with a full-screen image, it is 4:3 or 1.33:1 (once the mathematical division is calculated). Movie theater images are usually 1.85:1 or 2.35:1, sometimes called “widescreen” or “letterbox.” When the widescreen images are shown on a regular TV in their original aspect ratio, they leave a blank area at the top and bottom of the screen.
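
The arithmetic in code form (function name my own):

```python
def aspect_ratio(width: float, height: float) -> float:
    """Aspect ratio is simply image width divided by image height."""
    return width / height

print(round(aspect_ratio(4, 3), 2))      # 1.33 -> standard TV full-screen
print(round(aspect_ratio(2.35, 1), 2))   # 2.35 -> widescreen film
```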

HDMI

Acronym for “High-Definition Multimedia Interface.” HDMI is the first and only industry-supported, uncompressed, all-digital audio/video interface. It was designed to deliver crystal-clear, digital audio and video via a single cable, thereby dramatically simplifying cabling and providing consumers with the highest-quality home theater experience.



HDMI provides an interface between any audio/video source, such as a set-top cable or satellite box, DVD player, or A/V receiver and an audio and/or video monitor, such as a digital television (DTV), over a single cable.



HDMI supports standard, enhanced, or high-definition video, plus multi-channel digital audio. It transmits all ATSC HDTV standards and supports 8-channel, 192kHz, uncompressed digital audio and all currently-available compressed formats (such as Dolby Digital and DTS). HDMI 1.3 adds additional support for Dolby TrueHD and DTS-HD lossless digital audio formats with bandwidth to spare to accommodate future enhancements.