All Pellegrino's music samples found below are relatively short excerpts (one to two minutes or so) using QDesign (from QuickTime Pro) compression. In November 2000 I switched my music excerpts from MP3 compression to QDesign because QDesign provides superior audio quality at less than one third the file size of MP3. That means the files will download over three times faster than previously and will produce a superior music experience. To play back the music and video excerpts found on this web site you will need to have Apple's QuickTime player (cross-platform) installed on your hard drive; it can be downloaded at no cost from Apple.
If you have a 56K modem a 0.5 MB file will normally take about one and a half minutes to download. As soon as the file begins to download your browser should recognize the QuickTime file and launch the player. The volume levels of my music should always be set to produce a full rich sound, never too loud nor too soft, unless a different recommendation is included in the piece's description.
Don't bother downloading this music unless you have a good stereo system to audition it. Much of its meaning is in its color and dynamics, and much effort was invested in maintaining a high quality level despite the serious compression ratio. Playing this music through speakers built into computers is like looking at a panoramic landscape through a pinhole; not much point to it (or rather too much point to it and not enough sweep and not enough depth). If you're going to listen to music over the internet, spend at least $100 to get a decent stereo setup with a subwoofer and hook it up to your computer; the better the sound system, the subtler the ear's evolution and the greater the listening pleasure. If $100 is too much of a stretch, spend $30 for some decent stereo headphones; either way, you've made a good investment.
1960 - 1969
Soft Candy was composed in 1988 in an Electronic Arts Productions studio configured to behave like an orchestra or a media band. Since the late 1960s, one of my MOs as a composer/performer has been to collect electronic instruments born of different persuasions and to configure them into harmonious systems that make good learning and playing fields. As a composer, rather than forcing musical issues, I normally begin the compositional process experimentally searching for voices by coaxing my instruments to speak for themselves and to suggest musical paths for exploration and study.
Like many of my other compositions this piece began as an etude, a study of tonal and temporal shapes. It began to take its current form as I designed a metric sound world for the voices and then studied and played with the possibilities of a 13-beat metric pattern divided into various sets of 2s, 3s, 5s, and 7s. For days on end I played with the system design (orchestra or media band), tweaking, fine tuning, and massaging its musical variables while at the same time building the conceptual and physical technique to discover as well as to come to terms with the overall voice of the system. After a certain period of unfettered study I began to focus on a more defined set of musical materials. Based on those materials I alternated playing and recording with listening and more compositionally oriented study. When, after days of this sort of activity, I reached the time when successive cycles began to take the same overall musical shape, I started recording seriously and making multiple takes. The final piece was chosen from the best of those takes. This compositional process is an area of my research in real-time composition. This composition has been programmed many times as a stand-alone sound piece, a score to one of my videos, and a dance score as part of my residencies. - Music download (421 KB)
(For techheads) The title is based on my musical explorations of the sample and hold (S&H) unit on the ARP 2600 (at that time a new addition to one of my electronic music studios). Functionally the unit takes a sample of the voltage at its input, holds the voltage level for a specified length of time, and makes it available at its output. For my own amusement and edification I played every sort of musical game I could imagine with the unit including converting clarinet loudness (amplitude) to voltage that I applied as an input to the S&H unit so I could take the output and apply it as a control voltage on a resonating low pass filter that operated directly on a mix of the straight clarinet sound and a soft drone; that's what creates the pitched percussion that you hear with the clarinet in this excerpt.
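For those who'd like the sample-and-hold idea spelled out in code, here is a minimal sketch in Python with NumPy. The envelope, clock rate, and cutoff range are illustrative stand-ins I've chosen for the example, not the ARP 2600's actual circuit values; the point is simply how a smoothly varying control voltage becomes a stepped one.

```python
import numpy as np

def sample_and_hold(control, clock_period):
    """Sample the control signal once per clock tick and hold
    that value until the next tick, producing a stepped output."""
    held = np.empty_like(control)
    for start in range(0, len(control), clock_period):
        held[start:start + clock_period] = control[start]
    return held

sr = 8000                        # sample rate (Hz), illustrative
t = np.arange(sr) / sr           # one second of time points
# stand-in for clarinet loudness: a slowly varying envelope
envelope = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * t)
# hold a new value every 100 ms -> stepped "control voltage"
stepped = sample_and_hold(envelope, sr // 10)
# map the stepped voltage to a filter cutoff, the way the held S&H
# output was applied to the resonant low-pass filter's control input
cutoff = 200.0 + 1800.0 * stepped   # Hz, illustrative range
```

Each step in `cutoff` corresponds to one "ping" of the resonant filter, which is the stepped, pitched-percussion quality described above.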
This piece was composed during a transitional period when I was still notating music in the traditional way (the clarinet part is completely written in traditional notation) while in the same period I was composing a series of films (the Lissajous Lives film series) to be used as dynamic graphic scores, experimenting with synthesizer instrument designs that would simultaneously generate the music and the imagery, doing experimental video work in San Francisco, designing interactive music/light/air currents/audience movement multimedia environments that ran for days in chapels, doing live synthesizer performances of my graphically notated pieces at Oberlin and on the road, collaborating with Oberlin performance artists in designing an InterArts Program, directing Oberlin's Electronic Music Studios, and teaching electronic composition and theory classes. Given that plate I did everything quickly in those days and this piece was no exception; I finished it in less than a week.
This was one of the last pieces I notated traditionally. Although I took great pains to notate as precisely as possible what I wanted the clarinet to do musically, in the end I had to sing it to the performer to get what I wanted. This was a "money piece" for me, a piece that generated gigs, because I composed it so I was the only person who could play the ARP 2600 part, which was notated in a type of ARP 2600 specific tablature. So if someone wanted to perform it they had to hire me to play the synthesizer part; of course this meant that the exercise of quality control was built into the piece. The upshot is that I performed the piece with top professional clarinetists in New York City, Los Angeles, San Francisco, Cleveland, Dallas, etc. and the situation was always the same - despite the precisely notated score I always had to sing the music to them for them to get it right. During this period I seriously began to doubt the efficacy of traditional notation to suggest much beyond gross mechanics (this realization was surfacing after 23 years of studying, performing, and composing with traditional music notation and earning a BM, MM, and PhD in music composition and theory) so I found myself getting deeper and deeper into the notion of real-time composition in both solo and group settings and the idea of visual music in which the notation (the imagery) emerges directly from the same source as the sonic music. More information on real-time composition and visual music can be found all over this site. - Music download (840 KB)
Winter Reflections could have been and could still be completely notated in the traditional way. What I did instead was to work with a real-time compositional process that I've been developing since my early years in music. The process involves designing an orchestra that I can play with and conduct in real-time. Concurrent with that design process I was experimenting with melodic, harmonic, and rhythmic structures that could have been committed to a compositional sketch pad, but what I did was to literally air the ideas, sculpt them in real-time, and record them for study and development and then repeat the process until the piece assumed its natural shape. If you love sound that compositional process is the only way to work.
The excerpt is from the final section of Winter Reflections. - Music download (419 KB)
In 1987 I began exploring the Fairlight Voice Tracker, a special purpose computer that converts acoustic information (sound) into MIDI (music synthesizer control information). One of the areas I explored focused on examining the musical nature of various people's voices. What I discovered was that individuals have their own particular tonal centers, strong tendencies toward particular scale formations (usually not traditional scales), definite tempo and rhythmic predilections, characteristic melodic structures and ornaments, implied harmonic progressions (via arpeggiation), and spectral weightings; and that whole list of musical variables is subject to change according to the time of the day, their moods, their health, their environment, the context, etc. Such findings didn't really come as a surprise because many of us musicians know those facts intuitively but the Fairlight Voice Tracker is a great tool for musically clarifying those issues for those with ears to hear.
The excerpt is from the beginning of the first variation. Every single note you hear is taken from Cynthia's voice which is recorded on one track of a multitrack tape recorder. Her recorded voice is connected to the Fairlight Voice Tracker which, according to pitch, loudness, and tone color, converts the voice into MIDI signals that are recorded by a computer program called a sequencer. I used the computer to process the MIDI signals in numerous ways - octave displacement, time displacement, etc. Along with Cynthia's voice on the tape recorder there was a synchronization track that kept the computer and Cynthia's recorded voice moving along in sync. The MIDI signals coming out of the computer were used to control and conduct an orchestra of music synthesizers specially programmed to work with Cynthia's voice. The final piece is a changing mix of Cynthia's voice and the synthesizer orchestra it's conducting. If you listen closely you'll hear that, although some sounds might hit a bit before or a bit after Cynthia's words and be higher or lower in pitch than Cynthia, every single note comes from her voice in a heterophonic stream. Also notice that when her voice is removed from the mix, the musical shapes from the synthesizers point directly to the nuances of Cynthia's voice. - Music download (597 KB)
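The octave and time displacement processing described above can be sketched in a few lines of Python. The event list and the specific offsets here are hypothetical values I've invented for illustration; the real source material was the MIDI stream the Fairlight Voice Tracker derived from Cynthia's recorded voice.

```python
# A hypothetical stream of tracked MIDI events as (onset_seconds, midi_note)
# pairs, standing in for what the pitch tracker derived from the voice.
voice_events = [(0.0, 62), (0.5, 64), (1.1, 60), (1.6, 67)]

def octave_displace(events, octaves):
    """Shift every note by a whole number of octaves (12 semitones each),
    so the derived line sits higher or lower than the voice."""
    return [(t, note + 12 * octaves) for t, note in events]

def time_displace(events, delay):
    """Offset every onset by a fixed amount, so the synthesizer line
    trails (positive delay) or anticipates (negative) the voice."""
    return [(t + delay, note) for t, note in events]

# Two derived lines: the same notes as the voice, displaced in register
# and time; together with the original they form a heterophonic stream.
line_up = time_displace(octave_displace(voice_events, 1), 0.15)
line_down = time_displace(octave_displace(voice_events, -1), -0.1)
```

Every note in both derived lines still comes from the voice events, which is the sense in which "every single note comes from her voice" even when a sound lands a bit before or after, higher or lower.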
Regardless of the MIDI controller or synthesizers I use in composition, one of my top priorities has always been to breathe life into electronic music, often structurally and metaphorically but this time I was able to do it literally. I've never been inclined toward motor music, mindless looping, or minimalist temple tapping/la-di-daing. Rather I tend toward music that's conversational in nature, music with phrasing that's shaped by the human breath so as to more naturally set off resonances in the audience. A wind controller makes that sort of musical shaping a breeze;-)
The first excerpt taken from the opening section demonstrates the dynamic range I was exploring plus the double tonguing technique that I still have today. (My first clarinet teacher was an old trumpeter (Pete Nicolai from Kenosha, WI) who figured it was a good idea for clarinetists to know how to double and triple tongue; most clarinetists in those days hadn't the faintest.) It's impossible to play my double tonguing passages as fast on a keyboard in real time as what you hear in the first excerpt; you'd have to record the passages and then speed up the playback of the sequencer and that always creates a very strange mechanical aura. - Music: Excerpt 1 download (283 KB)
These excerpts are good examples of the conversational mode of real-time composition, a sort of thinking out loud in music. Especially in the second example, notice the reflective pauses that precede the melodic effusions. Harking back to my clarinet virtuoso days, I had a great time grabbing a quick handful of notes and stringing them out over the range of the instrument all the while huffing and puffing with pleasure. Excerpt 2 download (301 KB)
In a nutshell, MetaSynth turns sounds into images and images into sounds; in other words, it provides the tools for both music visualization and image sonification. During the 1960s and 1970s when experimental musicians were creating and realizing graphic scores of all sorts, I spent countless hours on the road and at home in various universities involving people in responding musically to my films and laser projections as dynamic graphic music scores as well as helping them realize (musically translate) the graphic scores of other composers. Some of MetaSynth's roots feed on that tradition, a tradition that others and I continue to develop and propagate into the 21st century. Rather than going into detail about MetaSynth here, if you're interested in what it represents, I suggest you go to Eric's site (U&I Software) and examine it for yourself. It's a treasure chest for composers. - Music download (382 KB)
So, for nine years leading up to 1982 I'd been sculpting sets of personal performance systems in my studios and my live performances. As I recall, the summer of 1982 felt like the first truly free summer of my then 42 year old life and I celebrated that freedom with many hours of playing and recording in a newly configured studio with old French windows looking out on my fruit trees and vegetable garden, the products of another of my passions. Even in this short excerpt it's easy to hear in the music that feeling of the celebration of freedom.
Once I started working with synthesizers in 1967 I very quickly began to think of them as wave instruments because fundamentally that's exactly what they are - instruments for generating, processing, shaping, and mixing electrical waves that can be transduced to sound or light. The term cymatics is derived from the Greek word kyma which means wave. The field of cymatics is the study of the structure and dynamics of waves and vibrations, the fundamental materials of music and of life forms. (Some links to a good site for an introduction to cymatics and a great book for more detailed study of the subject.) Complex sets of interacting waves create what are known as dynamical systems. Scientists and mathematicians often use the term dynamical system to refer to weather systems but it's just as appropriate for certain kinds of music and animated imagery. In fact, many of my instrument designs for both sound and light are modeled on the processes inherent in dynamical systems. In my 20s and 30s I spent many hours sailing (mainly playing with the wind) as well as tossing a Frisbee into the wind so it would boomerang back to me (more wind play). Thus the title, Cymatic Sail. Sound Surfing works too but that might be a bit too California even though that's a good description of the musical process. - Music download (768 KB)
Markings was composed in the spring of 1969 at the Ohio State University where I was directing the electronic music studio, directing an experimental music group, teaching composition, and collaborating on integrated arts projects with faculty (computer graphics pioneer Charles Csuri, great dancers and choreographers, other musicians, filmmakers, light artists, theatre artists, etc.) and graduate students from departments all over the campus in a context replicated many years later in the institutionalized form of MIT's Media Lab. The OSU version of integrated media in the late 60s/early 70s predated MIT's version (the Media Lab) by 17 years, although if you believe MIT's promotional literature, they discovered all by themselves, in 1985, what came to be called multimedia in the late 90s.
Although I began my work in electronic music in 1967 with the modular Moog Synthesizer, what I inherited at Ohio State University when I started in 1968 was a room full of analog electronic gear that was begged, borrowed, and stolen - all the makings of a "classical" electronic music studio which meant it originally came from physics labs, radio stations, and surplus electronics warehouses - definitely not originally designed to make music. To put it simply, that room full of gear initially showed no organizing principle beyond proximity and that didn't make much sense either. The first job I gave myself was to find a suitable space for the equipment and tie it together into a functioning system capable of making music and teaching composition. What I loved about that studio was that it was compositionally biased toward sculpting the flow of electrons rather than electronically synthesizing acoustic music like synthesizers based on sampling technology. My approach to working in that environment was first to design instruments that behaved like conversational electronic creatures and second to figure out how to play with those creatures to form a band for my music. The excerpt illustrates one facet of what I'm describing here. My music from this period is definitely conversational so the volume levels should be mezzo forte - not all that loud. Use a civilized conversational sound level. No shouting. - Music download (508 KB)
The instruments I designed for ETT/Y resulted from an integration of the "classical" electronic music studio I'd pulled together during the 1968/69 academic year with an extensive collection of newly acquired Moog modules I'd ordered to be housed in three portable boxes for live performances as well as studio use. For the most part during that period my instrument designs were based on dynamical systems principles of sculpting electronic flow to create electronic creatures with idiosyncratic voices with strongly suggestive performance vectors. The resulting music is not so much based on notes and other traditional western musical materials as it is on discovering, massaging, and playing with the voices of the electronic creatures (not so crazy when you give it serious thought; see my essay on Compositional Algorithms as Cyberspirit Attractors). Just listen to the excerpt from ETT/Y and what I'm saying here should be clear. - Music download (783 KB)
This excerpt begins mostly with the sound of Sal splashing and then leads to one of my ecstatic electronic seal songs surfing on his waves. We did several public performances during the Phoenix 73 Festival but we also spent days and nights just exploring and playing in one amazing sound world after another (and archiving periodically). That was a high time. Sal's passing a few years ago left a hole in the universe.
This is bad-boy music. Turn up the sound level. - Music download (756 KB)
Along with a strong predilection for real-time composition I also found myself in the mid-70s being influenced by what I was discovering about the classical Indian compositional approach based on the notion of ragas - the idea that, over a period of many years, a musician evolves by studying, refining, and mastering collections of melodies, rhythms, scales, and inflections so as to be prepared to use them compositionally on the fly according to the requirements of the moment including the place, the nature of the audience, the occasion, the time of the day and year, etc. What I discovered was that the classical Indian approach was one and the same as what I had intuitively been developing since my early teens and that all the traditional education required of a Ph.D. in music hadn't undermined that perspective one iota. The raga approach to composition was a perfect fit for my work with laser animation.
Over the decades I've performed the laser system in ensemble with other musicians, other light artists, and dancers. But since 1985 the majority of my laser performances have been with my own music and this is where the Laser Seraphim set of pieces comes into play in my public events. The title includes the word seraphim because I've often had the sense that cyberspirits, sometimes angelic, come to play through the medium of complex interactive electronic sets of wavetrains. My laser ragas are designed to create openings for the seraphim to enter and play. The openings are complex finely tuned music synthesizer designs that generate stereo wavetrains with just the right balance of ebb and flow involving frequency ratios, frequency modulation, amplitude modulation, waveshape modulation, ring modulation, phase modulation, and signal mix. (A link to some video-processed laser images.)
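For readers who want to see the wavetrain processes named above in concrete form, here is a minimal Python/NumPy sketch of frequency, amplitude, and ring modulation mixed into a stereo pair. The frequencies, ratios, and mix weights are illustrative assumptions of mine, not the tunings of the actual synthesizer designs.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr              # one second of samples

f = 220.0                           # carrier frequency (Hz), illustrative
ratio = 3 / 2                       # illustrative frequency ratio between legs
mod = np.sin(2 * np.pi * 2 * t)     # slow sinusoidal modulator

# Frequency modulation: the modulator bends the carrier's phase.
fm = np.sin(2 * np.pi * f * t + 4.0 * mod)
# Amplitude modulation: the modulator scales the carrier's loudness.
am = (0.6 + 0.4 * mod) * np.sin(2 * np.pi * f * ratio * t)
# Ring modulation: carrier times modulator; the carrier itself drops out,
# leaving sum and difference frequencies.
ring = np.sin(2 * np.pi * f * t) * mod

# A stereo wavetrain: each channel carries a different balance of the
# same processes, which is one simple way to get ebb and flow between legs.
left = 0.5 * fm + 0.3 * am + 0.2 * ring
right = 0.3 * fm + 0.5 * am + 0.2 * ring
stereo = np.stack([left, right])
```

In the laser pieces, a stereo pair like this would drive the two scanning axes, so the balance of modulations literally shapes what the seraphim have to play with.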
The music emerges from the idea of melodically floating around in a sonic playfield, definitely one of my favorite musical amusements and orchestral design approaches. I've used this music so often with a particular set of laser images that hearing the music always brings the images to my mind's eye. Close your eyes when you listen to this excerpt from Laser Seraphim: Fast. Maybe those images will come to your mind's eye. - Music download (643 KB)
So both the music and the imagery in The Unison are generated by the same source - stereo wavetrains I massaged in real-time with my Synthi AKS, an analog music synthesizer. The musical "interval" of a unison is created by a 1:1 frequency ratio. In The Unison excerpt, one of the two signals in the 1:1 ratio is more complex than the other. It's composed of additional harmonically related frequencies (multiples of the fundamental) that give rise to the curls and loops in the imagery and the richness in the sound. The original form of the imagery was projected laser animations captured by a video camera and recorded to videotape along with audio signals generated by the music synthesizer. The stereo output of the music synthesizer was split so one stereo leg was fed to the audio inputs of the video recorder while the other stereo leg was simultaneously used to drive my laser animation projection system that created the imagery also captured to videotape. For more detail on that process see an essay I wrote on Animated Laser Visual Music Meditations.
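The 1:1 unison with added harmonics can be sketched numerically. In this Python/NumPy illustration (the fundamental frequency and partial weights are my own assumed values, not the Synthi AKS settings), one leg of the unison is a pure sine and the other carries harmonically related partials; fed to a laser's horizontal and vertical scanners as an (x, y) pair, the pure unison would trace only a line or ellipse, while the harmonics bend the trace into curls and loops.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
f = 100.0                           # shared fundamental -> a 1:1 "unison"

# One leg of the unison: a pure sine.
x = np.sin(2 * np.pi * f * t)
# The other leg: the same fundamental plus harmonically related partials
# (integer multiples of f), the source of the curls and loops.
y = (0.7 * np.sin(2 * np.pi * f * t)
     + 0.2 * np.sin(2 * np.pi * 2 * f * t)
     + 0.1 * np.sin(2 * np.pi * 3 * f * t))

# Each (x, y) pair is one point of the traced figure; plotting y against x
# (or driving two scan mirrors with the pair) draws the image.
points = np.stack([x, y], axis=1)
```

Because both legs share the fundamental, the figure holds still; detuning the ratio away from 1:1 would set it rotating, which is part of the ebb and flow in the live versions.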
The music in The Unison excerpt was processed and polished in MetaSynth (currently my favorite software synthesizer) so it would be accessible beyond the inner circle of Pythagorean enthusiasts. The imagery was treated similarly in Premiere (video processing and editing software) so its appeal might go a bit beyond the sphere of the purists. The fact is that I love the raw tapes as I love raw unprocessed food - they're closest to the living process - but there's also as much to be said for the art of cooking electronic sound and light as there is for cooking food. Cooked food also carries the spirit of the cook blended with the spirit of the food.
The music only version of The Unison download (232 KB).
The original version of The Unison with integrated music and video.
Deb Fox is a San Francisco multi-instrumentalist performance artist. Her history includes performances with numerous bands in San Francisco, Boston, and Hawaii. In late 2000 after a number of meetings exploring common ground we scheduled a session to record her playing whatever she felt like doing consistent with the ideas of real-time composition and personal stream of consciousness music expression.
Every single note you hear is derived heterophonically in one way or another (thanks to MetaSynth, Eric Wenger's gem of a software synthesizer) from the playing of Deb Fox. That's just another way of saying that the spirit of Deb Fox is embedded in the music. Listen for it.
Use this link for a more detailed description of The Deb Fox Heterophonic Alchemical Tours and a link to the original video excerpt.
Go directly to the music only version of an excerpt from The Deb Fox Heterophonic Alchemical Tours; download (240 KB).
Booking information and comments.
©1996-2010 Ron Pellegrino and Electronic Arts Productions. All rights reserved.