58 Cards in this Set

Additive Synthesis

The process of constructing a complex sound from a series of pure tones (sine waves). Each of these sine-wave partials usually has its own amplitude envelope, which allows independent control of each partial (harmonic). Pipe organs and Hammond organs are both instruments based on additive synthesis. Some modern synthesizers have employed additive synthesis techniques, but other techniques such as FM (see WFTD archive FM Synthesis) and physical modeling (see WFTD archive Physical Modeling Synthesis) have proven easier to develop and still very effective at producing a wide variety of sounds.
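
As an illustration, a minimal Python/NumPy sketch that sums eight sine partials, each with its own decay envelope; the fundamental, partial count, and decay rates are all arbitrary choices for the example.

    import numpy as np

    sr = 44100                            # sample rate in Hz
    t = np.arange(sr) / sr                # one second of sample times

    tone = np.zeros_like(t)
    for n in range(1, 9):                 # partials 1..8 of a 220 Hz fundamental
        envelope = np.exp(-3.0 * n * t)   # higher partials decay faster (arbitrary)
        tone += envelope * np.sin(2 * np.pi * 220 * n * t) / n

    tone /= np.max(np.abs(tone))          # normalize to avoid clipping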

ADSR

Abbreviation for Attack, Decay, Sustain, and Release. These are the four parameters found on a basic synthesizer envelope generator. An envelope generator is sometimes called a transient generator and is traditionally used to control the loudness envelope of sounds, though some modern designs allow for far greater flexibility. The Attack, Decay, and Release parameters are rate or time controls; Sustain is a level control. When a key is pressed, the envelope generator begins to rise to its full level at the rate set by the attack parameter. Upon reaching peak level, it begins to fall at the rate set by the decay parameter until it reaches the level set by the sustain control. The envelope remains at the sustain level as long as the key is held down. When the key is released, the envelope returns to zero at the rate set by the release parameter.
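
A sketch of that behavior in Python/NumPy, assuming linear segments and illustrative times (real envelope generators often use exponential curves):

    import numpy as np

    def adsr(attack, decay, sustain, release, hold, sr=44100):
        """Linear ADSR. attack/decay/release are times in seconds,
        sustain is a 0-1 level, hold is how long the key stays down
        after the attack and decay phases complete."""
        a = np.linspace(0, 1, int(attack * sr))          # rise to peak
        d = np.linspace(1, sustain, int(decay * sr))     # fall to sustain level
        s = np.full(int(hold * sr), sustain)             # hold while key is down
        r = np.linspace(sustain, 0, int(release * sr))   # fall to zero on release
        return np.concatenate([a, d, s, r])

    env = adsr(attack=0.01, decay=0.2, sustain=0.6, release=0.5, hold=1.0)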

Aftertouch

Aftertouch is MIDI data sent when pressure is applied to a keyboard after the key has been struck, and while it is being held down or sustained. Aftertouch is often routed to control vibrato, volume, and other parameters. There are two types: The most common is Channel Aftertouch (also known as Channel Pressure, Mono Aftertouch, and Mono Pressure), which looks at the keys being held and transmits only the highest aftertouch value among them. Less common is Polyphonic Aftertouch, which allows each key being held to transmit a separate, independent aftertouch value. While polyphonic aftertouch can be extremely expressive, it can also be difficult for the unskilled to control, and can result in the transmission of a great deal of unnecessary MIDI data, eating bandwidth and slowing MIDI response time.

Amplitude modulation

This term refers to any periodic change in the amplitude (volume) of a signal. When the modulating signal is in the audible range (above 20 Hz), amplitude modulation can produce additional harmonics, somewhat like those produced by FM (frequency modulation). More often, the frequency of the modulating signal is below the audible range; with a sine or triangle wave, this produces the effect more commonly referred to as tremolo.
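
A short NumPy sketch of the sub-audio case (tremolo); the 440 Hz carrier, 5 Hz rate, and 0.5 depth are illustrative values:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    carrier = np.sin(2 * np.pi * 440 * t)   # audible tone
    lfo = np.sin(2 * np.pi * 5 * t)         # 5 Hz modulator, below audibility
    depth = 0.5
    tremolo = carrier * (1 + depth * lfo) / (1 + depth)   # periodic volume change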

Arpeggiator

A device that electronically creates an arpeggio. An arpeggio is the playing of the tones of a chord in rapid succession rather than simultaneously. Many synthesizers over the years have had arpeggiators built into them, which have been used to create all manner of variations on the basic theme. Some merely play the arpeggio in ascending or descending note order, while others can apply very complex algorithms to the note order structure.
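
A toy Python sketch of the note-ordering idea; the mode names are made up for illustration:

    def arpeggiate(chord, mode="up"):
        """Return the note order an arpeggiator would cycle through.
        chord is a list of MIDI note numbers."""
        notes = sorted(chord)
        if mode == "up":
            return notes
        if mode == "down":
            return notes[::-1]
        if mode == "updown":
            return notes + notes[-2:0:-1]   # up, then back down without repeats
        raise ValueError(mode)

    print(arpeggiate([60, 64, 67], "updown"))   # C major triad: [60, 64, 67, 64]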


Continuous Controller

In MIDI terms, a continuous controller (CC) is a MIDI message capable of transmitting a range of values, usually 0-127. The MIDI spec makes 128 different continuous controllers available for each MIDI channel, although some of these have been pre-assigned to other functions. CCs are commonly used for things like controlling MIDI volume (#7), pan (#10), data slider position (#6), mod wheel (#1), and other variable parameters.



Use of continuous controllers in performance and sequencing can be a major factor in adding life to MIDI music – but beware: over-use of CC messages can result in MIDI log-jam, where the amount of data being sent is more than the bandwidth of MIDI can support. (Most sequencers support commands for “thinning” CC data if this becomes an issue.)



Interestingly, pitch bend is technically NOT a continuous controller. Because of the greater resolution wide bends require (to prevent “stair-stepping”), pitch bend has been assigned its own dedicated MIDI message type (see Pitch Bend below).
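
For the curious, a Control Change message is three bytes on the wire. This Python sketch builds one; the channel, controller, and value are illustrative:

    def control_change(channel, controller, value):
        """Build a 3-byte MIDI Control Change message.
        channel is 0-15, controller and value are 0-127."""
        status = 0xB0 | (channel & 0x0F)   # 0xB0 = Control Change status byte
        return bytes([status, controller & 0x7F, value & 0x7F])

    msg = control_change(channel=0, controller=7, value=100)   # volume (#7) to 100
    print(msg.hex())   # 'b00764'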

DCA

Abbreviation for Digitally Controlled Amplifier. The DCA abbreviation has been used on and off, mostly by synthesizer manufacturers, to distinguish these designs from the amplifiers historically found in synthesizer architectures, which were analog VCAs. A DCA performs the same function, only its gain is under digital control.


DCF

Abbreviation for Digitally Controlled Filter. The DCF abbreviation has been used on and off, mostly by synthesizer manufacturers, to distinguish these designs from the filters historically found in synthesizer architectures, which were analog VCFs. A DCF performs the same function, only it is under digital control.

DCO

Abbreviation for Digitally Controlled Oscillator. A DCO serves the same purpose as a VCO in synthesizers, only it is under digital control instead of being controlled by an analog voltage. DCOs tend to be much more stable and less susceptible to environmental conditions – especially with regard to tuning – than their analog counterparts, but some synthesists complain they are too sterile and perfect-sounding.

Envelope

In sound and synthesis, the envelope is the variation that a sound exhibits over time – basically how a sound starts, continues, and disappears. It comprises concepts such as attack and decay, though other sonic distinctions such as transient and sustain may also apply in some circumstances. Pitch and harmonic content (which is basically timbre) can also change over time and in some cases are considered part of the overall envelope making up a sound.

Envelope generator (EG)

The envelope of a sound can be explained as a variation that occurs to it over time. How a sound starts, continues, and disappears in terms of pitch, harmonic content, and loudness is a function of its envelope. An envelope generator is a circuit or algorithm, found in most synthesizers, that provides a means to apply these kinds of changes to a sound over time.

FM Synthesis

The generation of complex signal waveforms in electronic music by frequency modulation of one or more sine wave signals by other sine waves (or other waveforms). FM synthesis as a method of generating complex musical waveforms was pioneered by John Chowning at Stanford University, who showed that an extremely wide variety of waveforms may be made this way. The method also requires significantly less hardware than other similar methods, such as additive synthesis. One of the first commercial synthesizers to use FM synthesis was the Synclavier, produced by the now-defunct New England Digital Corp. Easily the most famous FM synthesizer, however, is the Yamaha DX-7. This keyboard brought FM synthesis to the masses and is still renowned for its pure, bell-like tones and electric piano sounds.
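
A two-operator sketch in NumPy, written in the phase-modulation form (equivalent in effect to FM); the frequency ratio and modulation index here are arbitrary:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    carrier_freq = 440.0
    ratio = 2.0    # modulator frequency = ratio * carrier frequency
    index = 3.0    # modulation index; higher values add more sidebands

    modulator = np.sin(2 * np.pi * carrier_freq * ratio * t)
    fm_tone = np.sin(2 * np.pi * carrier_freq * t + index * modulator)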


Frequency Modulation (FM)

The changing of the frequency of a “carrier” in response to a “modulating” signal, usually an audio waveform. As the modulating signal voltage (amplitude) varies up and down, the frequency of the carrier varies up and down from its nominal unmodulated value. In music, vibrato is a form of frequency modulation because it is a periodic variation in frequency (pitch). In FM broadcasting, the audio signal is used to modulate a high-frequency carrier that is then transmitted. At the receiving end, a special circuit called an FM detector, or “discriminator,” is used to recover the audio from the modulated signal. FM is considered a better method than AM (amplitude modulation) for transmitting radio and TV signals because the FM signal is not as sensitive to amplitude variations caused by atmospheric interference. FM is also used as a sound synthesis technique (see FM Synthesis).


General MIDI (GM)

A set of requirements for MIDI devices aimed at ensuring consistent playback performance on all instruments bearing the GM logo. Some of the requirements include 24-voice polyphony, a standardized group (and location) of sounds, as well as defining a limited number of controllers. For example, patch #17 will always be a drawbar organ sound on all General MIDI instruments. Continuous controller number 7 will control its volume. Music written and sequenced for General MIDI should play back with the same instrument sounds on any General MIDI (GM) sound source.

Layer/Layering

Playing two or more sounds together to achieve a fuller, richer sound. Many modern keyboards and synthesizers have layering functions that allow you to create a composite sound from several individual components. Layers can also be used to allow a sound to change in real time based on velocity or some continuous controller input, which provides a simple mechanism to create more expressive and realistic sounds. It is quite common, for example, to have the same instrument sampled at three different dynamic levels to capture the unique timbre of each. Those samples can then be applied to layers that switch or crossfade into one another based on the player’s velocity.

LFO

Abbreviation for Low Frequency Oscillator. An oscillator primarily used as a modulator for other things. Low frequency oscillators may or may not operate exclusively below 20 Hz (the lower limit of typical human hearing), but by definition they are not designed to be used as sound generating elements, even though they can be in some synthesizers. When you bring in modulation (vibrato) on a keyboard, or a chorus in an effects processor, you are using an LFO to generate the waveform that produces the variance. In synthesizers and more advanced effects units, LFOs can often be routed to many different parameters (often simultaneously). They can sometimes generate different waveforms (sine, sawtooth, square, random, etc.) for different types of modulation effects.
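
A NumPy sketch of an LFO doing vibrato: a 6 Hz sine wobbles the pitch of a 440 Hz tone by +/-5 Hz (all values illustrative):

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    lfo = 5.0 * np.sin(2 * np.pi * 6 * t)           # 6 Hz LFO, +/-5 Hz depth
    inst_freq = 440.0 + lfo                         # pitch wobbles around 440 Hz
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr   # integrate frequency to phase
    vibrato_tone = np.sin(phase)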

Local control

A parameter found in many MIDI keyboards. Local control determines whether a MIDI device responds to its own keyboard and controllers (local control on) or only to incoming MIDI messages (local control off).

MIDI

MIDI is an acronym for Musical Instrument Digital Interface. MIDI was developed back in the early 1980s as a standardized protocol for communication between electronic musical instruments and peripherals. It allows MIDI devices to transmit and receive almost every aspect of a musical performance. Today MIDI is used in all sorts of applications, including synchronization, sequencing, lighting control, automation systems, and more. There are many different types of MIDI messages used for various applications. A typical MIDI connection is made with a MIDI cable, which has a 5-pin DIN type connector of which only three pins are used (except in some special applications). MIDI can also be transferred via USB cables.

MIDI channel

Analogous to a television or radio channel. MIDI communication is digital, and the MIDI signals contain several types of information, often including the MIDI channel. Most MIDI messages are intended to be “picked up” by only one of perhaps many available devices that can be connected together. The channel provides an easy way to differentiate these devices. A message intended for the device on channel one, for example, will have that MIDI channel number present in its data. Only devices assigned to listen on channel one will respond to any messages with this encoding. The current MIDI specification calls for 16 MIDI channels. These 16 channels provide a way to transmit and receive 16 different musical parts all on one MIDI cable, creating a convenient way to play sequences back through several keyboards, or one multitimbral keyboard.

MIDI Controller

The term MIDI Controller refers to any device that sends MIDI Control Change data from the device to a hardware or software sound module. A MIDI controller may take the form of a piano-style keyboard, a MIDI-equipped guitar, a wind controller, trigger pads, etc. MIDI controllers are often equipped with additional objects that send their own MIDI control change information, such as modulation and pitch wheels and assignable knobs and sliders.

MIDI In

Short for “MIDI input.” A type of MIDI connection that accepts MIDI data from an external device.

MIDI Interface

A device that allows MIDI equipment to be connected to and work with a computer. Over the years MIDI interfaces have come in many different sizes, shapes, capabilities, and price ranges. The simplest interface has just one MIDI input and one MIDI output, providing the most basic way to get a MIDI instrument connected to a computer. More modern and sophisticated designs may have many discrete inputs and outputs as well as ports for synchronization of MDMs and other technologies. Some have the ability to resolve MIDI data to word clock, LTC, or video sync, and some even have Superclock capabilities. A few have been able to provide MIDI routing and patch bay features as well as MIDI processing functions (like changing one type of continuous controller data to another), but most newer models have forgone these features since modern software is so sophisticated with these kinds of tasks. Early models had to be built specifically for each type of computer (PC, Mac, Atari, Amiga, etc.), but recently, with the emergence of standards like USB and the decline of other computing platforms, most MIDI interfaces are cross-platform and work equally well on Mac or PC.

MIDI out

Short for “MIDI output.” A type of MIDI connection that sends out MIDI information generated within the device. This differs from a MIDI thru, which sends out a copy of the MIDI information arriving at the device’s MIDI input.

MIDI Thru

Short for MIDI Through. The MIDI Thru is a connection available on many MIDI devices. Its purpose is to pass on (or through) an exact copy of the data present at the MIDI In of the device. This is distinct from a MIDI Out, which can sometimes pass on a copy of the input, but usually carries other information generated by the device in question. MIDI Thru allows many MIDI devices to have their MIDI connections daisy-chained together, all driven by a common source or controller, which makes building complex systems much easier.

Modulation

Literally, modulation is change. In music technology, the term normally applies to a control signal being used to change some aspect or parameter of another signal. For example, a regularly repeating sine waveform might be applied to a note's pitch to produce vibrato, or a control voltage might be used to change (modulate) a filter cutoff frequency. A whole category of synthesis (and radio broadcasting), FM (frequency modulation), is based around using one signal (the modulator) to change the frequency of another, audible signal (the carrier). Likewise, AM radio works because of amplitude modulation: using one signal's volume to modulate another signal.

Multi-Sample

A group of samples organized in a musically relevant way. For example, most piano samples are actually made up of many different samples. There are usually at least 8 (sometimes many more) different samples across the keyboard, and in many cases there are samples of different velocities and so on. This is necessary due to how sensitive our ears are to timbral changes that result when sampled audio is pitched up or down. Similarly, our ears usually reveal to us many other things about a performance. You can record a sample of a hard piano strike and play it back at a low volume, perhaps even with a filter on it to limit the high frequencies, but our ears can still tell something isn’t right. Therefore the sounds that we consider good or believable in sampling instruments are usually made up of many samples combined together to more closely approach the original instrument. Once these sounds are tied together they become known as a multi-sample. The exact terminology used by your instrument may vary, but multi-sample is the most commonly used term.

Multitimbral

A synthesizer or sampler is multitimbral if it is capable of producing more than one type of sound or timbre (pronounced tam bur) at a time. Usually this is described as the number of “parts” a unit can play at once. For example, a Kurzweil K2500 is 16-part multitimbral, meaning it can produce 16 different sounds at once (a sound being defined as a single patch or preset; part one might be piano, part two strings, part three trombone, part four flute, and so on. Generally these parts are assigned to different MIDI channels for independent control). This is distinct from the amount of polyphony, or number of actual notes the unit can simultaneously generate. Using the K2500 example again, a 16-part multitimbral K2500 can produce up to 48 notes of polyphony distributed dynamically across those 16 multitimbral parts.

Oscillator

An electronic device which generates a periodic signal of a particular frequency, usually a sine wave, but other waveforms (square, sawtooth, triangle) are often used. Oscillators are common in audio devices such as synthesizers and test signal generators. Early synthesizers used oscillators as the basic component for all of the sounds of the machine. All of the filters and envelopes modified the sound created by the oscillator to produce the desired sound. Nowadays most keyboards produce sounds by playing back samples recorded on chips or by more modern synthesis techniques such as Physical Modeling (see WFTD archive Physical Modeling Synthesis), FM, LA, or any number of other methods that have been employed in the past 10 years.

Patch list

Basically, a patch list is just what it sounds like: a list of patches. Patch lists are an important part of making software sequencers easier to use. A list of all the patch names for each keyboard in a particular setup is stored in a file (the patch list) where the sequencer can refer to it. When set up properly, it enables the user to select keyboard programs by name from the sequencer. So instead of choosing your sound on a track by looking at a list of bank names and/or patch numbers, you would see the actual names of the programs. Most patch lists are simple text files with all of the patch names listed, which means the user can easily edit them if necessary. The software maps them sequentially to the proper patch numbers of the keyboards in question: bank one, patch one maps to the first name on the list, and so forth.


Pitch Bend

A special MIDI control message specifically designed to produce a change in pitch in response to the movement of a pitch bend wheel or lever. Pitch bend data can be recorded and edited, just like any other MIDI controller data, even though it isn't part of the Controller message group. Behind the scenes, at a bits-and-bytes level, pitch bend messages are actually fairly complex and appear to break a lot of the conventional rules of MIDI data protocol. Pitch bend messages were designed to be able to hold and transmit a lot more data than most MIDI messages, primarily because it can take a lot of data to produce a truly smooth (as in unquantized) bending of pitch over a potentially broad range. If pitch bend messages were handled like most other continuous controller messages, you would often hear a noticeable stair-stepping quality to the bends. Without going into too much detail, pitch bend messages have a conventional status byte, which is followed by two data bytes. Some MIDI instruments make use of only one of these, while others use both, so the data is formatted in a specific way that lets all instruments communicate and discern the intended amount of pitch bend from one platform to another. Fortunately, all of this madness goes on behind the scenes for the most part; most software sequencers handle it invisibly. An exception would be older sequencers that may show all of this data in an event list view. In those cases editing pitch bend data can get pretty challenging and requires a deeper understanding of the subtleties. Another element to consider is that a lot of complex pitch bend data can potentially cause MIDI log-jam problems, especially when there is a lot of other controller and/or MIDI clock or MTC data on the same cable.
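
A Python sketch of those two data bytes: the 14-bit bend value (0-16383, with 8192 meaning no bend) is split into a 7-bit LSB and a 7-bit MSB:

    def pitch_bend(channel, value):
        """Build a 3-byte MIDI Pitch Bend message.
        channel is 0-15; value is 0-16383, where 8192 = no bend."""
        status = 0xE0 | (channel & 0x0F)   # 0xE0 = Pitch Bend status byte
        lsb = value & 0x7F                 # low 7 bits
        msb = (value >> 7) & 0x7F          # high 7 bits
        return bytes([status, lsb, msb])

    print(pitch_bend(0, 8192).hex())   # 'e00040' -- wheel at rest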

Polyphony

In general, polyphony describes music with two or more parts playing at the same time. More specifically, the term refers to the number of actual notes an electronic instrument may play at one time. For instance, the original MiniMoog synthesizer was monophonic (it could only play one note at a time), while the ARP Odyssey could play two, making it duophonic. Most early samplers were capable of playing only eight notes at any time (or four if the samples being played were stereo, since each stereo note consumes two voices of polyphony). When instruments can play multiple notes at one time, they are considered to be polyphonic. Today, most synthesizers and samplers can play far more notes, in some cases up to 128 (and even more if a personal computer is being used as the sound source).

PPQN

The timing resolution of a MIDI sequencer. PPQN indicates the number of divisions a quarter note has been split into, and directly relates to the ability of the sequencer to accurately represent fine rhythmic variations in a performance, or to recreate the “feel” of a performance. Older sequencers were capable of 96 PPQN (sometimes even less), which often resulted in a stiff, “quantized” feel to the music (even if it hadn't actually been quantized). Current versions can reach 768 PPQN or even higher resolutions, which is more than adequate for most musical applications. Note that the resolution of the sequencer is especially important at slower tempos. If your sequencer is limited to a lower resolution, one trick is to double the tempo of the song, then perform the parts in half time. This effectively results in a doubling of resolution.
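
The arithmetic, as a small Python sketch: one tick lasts 60/tempo/PPQN seconds, so doubling the tempo halves the tick length just as doubling the PPQN would:

    def tick_seconds(bpm, ppqn):
        """Duration of one sequencer tick: a quarter note lasts
        60/bpm seconds and is divided into ppqn ticks."""
        return 60.0 / bpm / ppqn

    print(tick_seconds(120, 96))    # ~0.0052 s per tick
    print(tick_seconds(120, 768))   # ~0.00065 s -- eight times finer
    print(tick_seconds(240, 96))    # doubled tempo: same as 192 PPQN at 120 bpm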

Program change

Also known as Patch Change, a type of MIDI message used for sending data to devices to cause them to change to a new program. Program Change messages are channelized, so they will only affect a device on a specific MIDI channel. These commands are used in all sorts of MIDI applications, ranging from simply changing patches on a synth or reverb to controlling lighting systems. Software sequencers that appear to have the programs of your keyboard in them by name are in fact using program change commands that are known to pull up those programs in your keyboard.

Quantization

The division of a continuous event (such as an analog signal) into a series of discrete steps. To quantize or quantify something. In digital audio recording this takes the form of “sampling” (another word for quantizing) the analog signal a specified number of times per second (sampling rate) with each sample made up of some known amount of information (how many bits, or bit depth – i.e. 16-bit, 24-bit, etc.). In MIDI it pertains to the timing resolution of a sequencer or drum machine and is measured in Pulses Per Quarter Note (PPQN). For example, a 480 PPQN sequencer has greater timing resolution than a 96 PPQN sequencer. Also in MIDI the verb quantize means to perform an operation to the MIDI data that will bring notes closer to a specified grid of acceptable timing values. For example, you could quantize a performance where someone played inconsistently to make all of their note events land on even quarter notes. Over-quantization results when such correction is so extreme that the resulting sequence becomes stiff or robotic sounding.
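
The MIDI sense of the verb, as a Python sketch: note positions (in ticks) snap toward the nearest grid line. The strength parameter is an assumption of this sketch, not part of any spec, included to show how over-quantization is avoided:

    def quantize(ticks, grid, strength=1.0):
        """Move a note's tick position toward the nearest grid line.
        strength 1.0 snaps fully; lower values correct only part way."""
        nearest = round(ticks / grid) * grid
        return ticks + strength * (nearest - ticks)

    # A 480 PPQN sequencer quantizing to sixteenth notes (480/4 = 120 ticks):
    print(quantize(497, 120))        # 480.0 -- snapped to the grid
    print(quantize(497, 120, 0.5))   # 488.5 -- half-corrected, keeps some feel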

Resample

The process of sampling an already digital signal again. Normally once a signal is in digital form all sorts of things can be done to it with DSP. But in some situations it is desirable to sample the signal again. For example, in some hardware based sampling instruments there is a limited amount of DSP available. Users will sometimes apply a host of effects to that signal (possibly even analog effects) and resample it with them so the new sample has the desired sound while freeing up DSP to do other things. It’s also common to take a series of audio bites (like an arranged drum groove) and resample it all as one performance that can be triggered. Some devices also use resampling to change the sample rate, rather than use a sample rate conversion algorithm.

Resonance/feedback

This is a function on a filter in which a narrow band of frequencies (also known as the resonant peak) becomes relatively more pronounced. If the resonance is set high enough, the filter will begin to self-oscillate, producing a sine wave audio output even without a note being played on the keyboard. Filter resonance is also sometimes referred to as emphasis or Q. In some of the oldest synthesizers, this effect is referred to as regeneration or even feedback, for the simple reason that feedback was used in the circuit to produce the resonant peak.


ROM

Acronym for Read Only Memory. ROM is a type of memory chip where the data is programmed (sometimes called masked) into the chip as it is manufactured. Unlike RAM and other technologies it cannot later be changed, reprogrammed or rewritten (which is where the Read Only name comes from). The data stored in a ROM is often referred to as “firmware” since it cannot be changed unless the physical chip is swapped with one having different programming. ROM chips are used in all kinds of electronic computing devices. They often carry the instructions that define the basic operating properties or fixed sets of data required by the device that don’t often change. If it is believed that the data in a ROM chip will need to be changed from time to time the reprogrammable EPROM has often been used, though today there are many other viable solutions.

Sample

A frequently used word these days. In music/audio production a sample is a digitally recorded piece of audio. To sample (sampling) is the act of recording said samples. In order to take (record) a sample, a device known as an A/D converter measures the instantaneous amplitude (voltage) of an analog waveform at periodic intervals known as the sample rate. Each of these samples is converted into a digital “word” that is represented by some number of digital bits (ones and zeros), with the number of bits per sample (known as bit depth) determining the resolution of the sample, or how closely it matches the exact voltage value of the waveform at the time of the sample. The bit depth also determines the theoretical dynamic range that can be captured per sample (with more bits allowing greater dynamic range). The sample rate, however, determines the frequency range that can be captured per sample. Both bit depth and sample rate have a significant impact on the overall accuracy of the sample as compared to the analog waveform.
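
Two standard rules of thumb implied by the paragraph above, as Python arithmetic: theoretical dynamic range grows about 6.02 dB per bit, and the capturable frequency range tops out at half the sample rate (the Nyquist limit):

    def dynamic_range_db(bits):
        """Theoretical dynamic range: roughly 6.02 dB per bit of depth."""
        return 6.02 * bits

    def max_frequency_hz(sample_rate):
        """Nyquist limit: the highest frequency a sample rate can capture."""
        return sample_rate / 2

    print(dynamic_range_db(16), max_frequency_hz(44100))   # 96.32 dB, 22050 Hz
    print(dynamic_range_db(24), max_frequency_hz(96000))   # 144.48 dB, 48000 Hz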

Sample-based synthesis

A type of audio synthesis that employs sampled sounds or instruments as the basis for its sounds. Sample-based synthesis should not be confused with wavetable synthesis. An advantage of this approach is the relatively modest processing power required (compared to physical modeling or other types of synthesis), since the tonal characteristics of each instrument are “built in” to the samples.



Early samplers were severely limited by the expense of memory, and therefore utilized the shortest samples possible, augmenting their length by looping and achieving pitch changes via “stretching” one sample across several notes. Later samplers offered much more memory and storage capability, allowing sound designers to employ multisamples to provide more realistic changes in dynamics and timbre.



A number of manufacturers offer sample-based synthesizers under a variety of names. Korg’s HI (Hyper-Integrated), Yamaha’s AWM (Advanced Wave Memory), and others differ mainly in the number and types of filters and modulation sources that can be applied to the samples. Software instruments such as TASCAM GigaStudio and Native Instruments Kontakt also provide a wide range of sample-editing features.

Sawtooth Wave

A waveform in which voltage rises gradually to a peak and then falls off rapidly, or vice versa, during each cycle. Its shape on a screen or oscilloscope is similar to the tooth of a saw blade. The shape of any waveform is produced by a combination of the fundamental frequency and the presence (or lack thereof) of various harmonics. The sawtooth wave contains a fundamental and all other harmonics, which gives it a characteristically bright, buzzy sound. It approximates the waveform produced by bowed string instruments. Sawtooth waves are also sometimes called “ramp” waves.
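
“A fundamental and all other harmonics” can be demonstrated by summing them in NumPy, with amplitudes falling off as 1/n; the harmonic count here is truncated for practicality:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    f0 = 110.0                    # fundamental frequency

    saw = np.zeros_like(t)
    for n in range(1, 40):        # fundamental plus harmonics 2, 3, 4, ...
        saw += np.sin(2 * np.pi * f0 * n * t) / n   # each at 1/n amplitude
    saw *= 2 / np.pi              # scale the sum to roughly the -1..1 range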


Semi-weighted action

A keyboard key type developed by synthesizer manufacturers. Semi-weighted keyboards combine the spring-loaded mechanism of synth actions with the addition of light weights attached to each key, similar to those found in weighted action or hammer action keyboards. The result is a key that has light-to-moderate resistance when you press it, with a rebound to the “up” position that is a little slower than the springiness of a synth action. Semi-weighted actions appeal to players who don’t need or want the full resistance of a weighted action. Players accustomed to the feel of a Hammond organ also like semi-weighted actions, with one important exception: while the Hammond (and other electronic organs) uses a waterfall-style key that is slightly rounded at the top edge, almost all semi-weighted actions have a piano-style overhang, or lip, on top, making some organ techniques like smears and glissandos difficult and often painful.

Sequencer

In music production, a sequencer is a hardware or software device designed to record performance data and play it back in sequence, or in a specific order of events. Early sequencers were analog and programmed by setting a series of voltages (representing pitch) with potentiometers that triggered VCOs. The playback involved having a clock step through or trigger each of these “events” in sequence. Modern sequencers have evolved well beyond that original concept and now have very sophisticated editing and performance features. Nowadays a large percentage of music composition and arrangement is done with sequencers and MIDI instruments. We also now have sequencers with digital audio recording and editing capabilities built in.


Sine Wave

A continuous, cyclic waveform in which the amplitude (or instantaneous voltage) varies according to the sine (a trigonometric function) of time. It is unique in that it has no overtones whatsoever. Since it contains only the fundamental pitch, it gives a smooth, rounded tone. Test tones used to calibrate tape machines and other equipment are generally sine waves. Among acoustic instruments, the flute sometimes has a nearly sinusoidal output. On an oscilloscope a sine wave looks like a symmetrical wavy line.

Subtractive synthesis

One of several types of sound synthesis. In the subtractive method of sound synthesis the sound is tailored by using filters to selectively remove certain harmonics from an initial waveform. That waveform may be a complex sound, like a sample, or a simple shape created by an oscillator. Most analog synthesizers use the subtractive method.
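
A minimal NumPy sketch of the idea: start with a harmonically rich sawtooth, then use a crude one-pole low-pass filter to subtract the upper harmonics. The cutoff and pitch are illustrative, and real synth filters are steeper and resonant:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    saw = 2 * (110 * t % 1) - 1        # bright, harmonically rich source wave

    def one_pole_lowpass(x, cutoff, sr=44100):
        """Very simple low-pass: progressively removes upper harmonics."""
        a = np.exp(-2 * np.pi * cutoff / sr)
        y = np.zeros_like(x)
        for i in range(1, len(x)):
            y[i] = (1 - a) * x[i] + a * y[i - 1]
        return y

    darker = one_pole_lowpass(saw, cutoff=800)   # the "subtracted" result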

Synthesizer

An electronic musical instrument that uses sound generating elements (such as oscillators or the like) to create audio waveforms. These waveforms are then combined with others and/or manipulated in specific ways to “synthesize” a unique sound character. Over the years many, many different types of synthesis architectures have been developed and used. Of those a relatively small number have become popular and seen widespread usage. Many modern synths provide digital control over analog parameters, such as frequency, amplitude, filtering and so on to create different timbres. Some generate these waveforms digitally.

VCA

A feature found on many high-end live mixing boards. A VCA group provides the same type of control over signal levels that a mute group provides for muting. Basically, VCA groups allow the sound engineer to control the volumes of several independent sources through one control fader without having to route them all through a common subgroup. It is called a VCA group because Voltage Controlled Amplifiers are used. In fact, every controllable channel in the desk has its volume controlled by a VCA (as opposed to audio passing through a resistive fader) in order for this to work. Some more modern (and expensive) designs have employed a motorized fader scheme (also known as Moving Fader), but these sometimes aren't referred to as VCA groups since there may no longer be VCAs involved (see the Technical Tip of the Day from 04/09/2002 for more background on that).

VCF

Abbreviation for Voltage Controlled Filter. The VCF is to filtering what the VCA is to amplifiers. Actually, many filters are amplifiers in which the gain of the amp is manipulated by other components such that certain frequencies are filtered out of the final output, and this is exactly what a VCF is. The user has control over the cutoff frequency of the filter and whether it is low-pass, high-pass, or band-pass. More advanced designs allow you to add resonance and/or vary the Q of the filter.

VCO

Abbreviation for Voltage Controlled Oscillator. It is an oscillator whose pitch (or frequency) is controlled by an input voltage. In a keyboard, for example, pressing different keys produces different voltages, which then drive the oscillator circuit to produce specific pitches (notes). Modern (digital) keyboards don't work this way anymore, but back in the days of analog synthesizers it was all done with voltage. A lot of the old gear was one volt per octave: if it took one volt to go from a low C to the C an octave higher, it took an additional volt to reach the next C up, and so on.
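
The one-volt-per-octave rule as a one-line formula in Python; the 0 V base pitch is an arbitrary choice for the example:

    def volts_to_freq(volts, base_freq=32.70):
        """One volt per octave: each added volt doubles the frequency.
        base_freq is the pitch at 0 V (32.70 Hz, a low C, is illustrative)."""
        return base_freq * 2 ** volts

    print(volts_to_freq(0.0))   # 32.70 Hz
    print(volts_to_freq(1.0))   # 65.40 Hz -- one octave up
    print(volts_to_freq(2.0))   # 130.80 Hz -- two octaves up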


Waterfall key

A type, or style, of keys found on a keyboard that has no protruding lips or edges. Waterfall-style keys (also known as “Square Front”) are best recognized as the keys found on the famous Hammond B3 organ. Many B3 players perform using the palm of their hands on the base of the keys in a “wipe” motion up the keyboard (glissando). This, and other performance styles, are made possible by waterfall keys. Common, sharp-edged synth keys would make this performance style difficult or impossible.

Weighted Action

A type of keyboard assembly used in electronic keyboards and synthesizers. During manufacturing, weights are added to the plastic keys, usually by gluing pieces of metal to the underside of the part of the key where a player's fingers make contact. The extra mass, combined with stronger springs, makes the key harder to set in motion; the spring and the amount of weight added determine the speed and force with which it returns to rest. The result is a keyboard action that feels or “plays” much more like mechanical piano keys, which some players prefer. A further development is the hammer action key assembly, where the key actually moves a mechanical hammer, making it feel even more like a real piano key.


Zone

In MIDI, a zone is a defined area of a keyboard, or a range or layer of MIDI notes. Zones are the common method of splitting a keyboard, controller, or sound module into ranges that can each play different sounds. For example, one might have a bass guitar on the bottom of a keyboard, a mellow pad sound in the middle, and a sax sound for solos on the top range. It is also sometimes possible to have multiple zones that span the entire keyboard range but are accessed with different levels of playing velocity. Different products allow for varied degrees of flexibility with regard to zones, not only in terms of how many zones are allowed, but which parameters they manage.
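
The bass/pad/sax example above, as a Python sketch; the note ranges are made-up split points:

    ZONES = [                      # (low note, high note, sound)
        (0,   47, "bass guitar"),
        (48,  71, "mellow pad"),
        (72, 127, "sax"),
    ]

    def sound_for_note(note):
        """Pick the sound whose key range contains a MIDI note number."""
        for low, high, sound in ZONES:
            if low <= note <= high:
                return sound

    print(sound_for_note(40))   # 'bass guitar'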


Beam angle

The easiest way to think of beam angle is the angle at which light is emitted. The more complete answer is that beam angle is the full width at half maximum. In other words, since we can’t measure the “edge” of light, but we can measure the intensity of light, we measure the beam angle from where the light is at 50% intensity.


DMX

The DMX512 standard was developed in 1986 by the Engineering Commission of United States Institute for Theatre Technology (USITT). “DMX512” stands for “Digital Multiplex with 512 pieces of information.”



The original aim was to provide standardized control over lighting dimmer packs, which at the time each used proprietary control systems. But the standard has expanded and been adopted for many control applications for stage, theater, and architectural lighting.



DMX512 uses 5-pin XLR connectors for communication between devices.
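
A sketch of the data model in Python: one DMX512 “universe” is a start code followed by up to 512 one-byte level slots. The channel assignments below are made up; actual transmission is asynchronous serial with a break before each packet:

    universe = bytearray(513)   # slot 0 = start code, slots 1-512 = levels
    universe[0] = 0x00          # null start code: standard dimmer-level data
    universe[1] = 255           # channel 1 to full
    universe[10] = 128          # channel 10 to half

    print(len(universe) - 1, "channels; channel 10 =", universe[10])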