A Digital Signal Processing Instrument

for Improvised Music

Lawrence Casserley

(This paper was first published in Volume 11 of the Journal of Electroacoustic Music)

 

Abstract

Over the past two years the author has been working on Digital Signal Processing techniques for use in improvised music. Unlike many others, he has taken as the basis for development the view that the processing system should be an instrument in its own right. Working with the IRCAM Signal Processing Workstation and a number of MIDI controllers, he has developed a 'first generation' instrument which has been used in a number of performances and recordings. The concept and design of this instrument will be discussed, along with ideas for further development.

 

Introduction

A number of approaches to the use of computers in performance have been implemented. Many of these focus on the use of gestural, pitch or rhythmic information derived from a musician's performance to control electroacoustic elements generated by the computer. Others have placed the emphasis on the computer as a special kind of performer, usually using some form of artificial intelligence techniques. Some have combined these concepts (there are too many references to quote, but useful overviews of such work may be found in Rowe, 1994, and Chadabe, 1997, among others). My own approach has been to view the computer processing system as an instrument which is played in performance by the computer musician. In creating such an instrument there are a number of problems to be overcome.

First is the problem of gestural control and its relation to the resulting sounds. In general an electroacoustic instrument has no inherent mapping of control gesture to sonic result, unlike most conventional instruments, where the method of sound production usually dictates this mapping to a great extent. The advantage of this is the possibility of choosing controllers to best suit both the performer and the instrument, and of fine tuning their behaviour in software. The disadvantage is that a gesture can easily seem unconnected with, or inappropriate to, the resulting sound. In a signal processing instrument, where the sound produced will also include the gestural information of the source sound, these problems are increased.

A second, and related, issue is the relationship between the acoustic and computer performers, and more particularly, which performer is perceived to be the source of the sounds. Even if the intent here is to play on ambiguities of source it is vital to understand how the sources are perceived if the ambiguities are to be well articulated. In discussing this issue I am grateful to Simon Emmerson for his useful terminology of 'local/field' (Emmerson, 1994). His discussion was chiefly concerned with problems of amplification and diffusion in works involving both pre-recorded and live sounds, but the same criteria apply, and arguably even more critically, to live signal processing. I have adopted Emmerson's terminology in the subsequent discussion.

Thirdly, there is the problem common to all electroacoustic music of choosing appropriate sounds, or in this case transformations. It is necessary in an instrument for improvisation to give oneself both sufficient freedom and sufficient limitation. The palette of available processes must be rich enough to encompass many situations and controllable enough to adapt to them quickly.

Finally, in improvised music there is the problem that you do not know in advance what sounds will be input to the system. In pre-composed music careful crafting of the source sounds and the processes applied to them can resolve many of these problems. In improvised music it is necessary to resolve these issues in the performance. This is, perhaps, the greatest challenge in designing such an instrument.

Over the last two years I have worked on some new approaches to signal processing which are finding solutions to some of these problems.

 

The Musical Context

For many years improvisation has played an important role in my music. I have improvised on instruments, usually some combination of voice, percussion and invented instruments, on synthesisers, and on sound processing equipment. Processing the sound of others has always seemed my most natural home, but I have been acutely aware of the limitations in the ways in which these processes could be controlled. While I could frequently contribute much to the musical development, I envied my collaborators' capabilities of articulation and nuance. It was always a challenge to find a satisfactory middle ground between a series of 'effects', a secondary layer of the musical argument, and an over-dominating texture of complex sound. In addition, limitations of the available equipment and interfaces restricted my capability for real gestural input.

In attempting to design a signal processing instrument for improvisation I have divided the available processes into three broad areas: those which are 'local' to the source musician, those which are 'local' to the computer musician, and those which create 'fields'.

In the first category I include processes where the main articulation is controlled directly by the source musician. These include various forms of timbral manipulation, e.g. modulation, filtering, phasing, etc., and pitch-shifting. While another layer of control may be added by the computer musician, they remain, in general, 'attached' to, or extensions of, the sound of the source instrument. We may also include here simple short delays, up to a few seconds, where the causality of the statement and echo is still clearly apparent.

The second category involves capturing the sound of the performer and replaying it in such a way that the articulation and causality are clearly generated by the computer musician. Developing techniques for this has been one of the most important elements of this work.

The third category includes longer delays, where the connection between the original sound and the echo is more tenuous, or even entirely obscured. This is particularly the case with complex multi-tapped delays, or where another transformation process has been applied to the delayed sound. Reverberation and space echo processes may also be included in this category.

Clearly many processes can fall into more than one category according to how they are used. In addition, these are not discrete conditions; there is a continuum between them, and there are many areas of ambiguity. Figure 1 illustrates the relationship between the three categories. The heavy lines indicate movement between pairs of categories. The inner triangle joins the points of greatest ambiguity. A successful instrument would allow the player to move around this inner triangle with control and clarity.

 

The Technical Context

Most of my current work is based around the IRCAM Signal Processing Workstation (Lindemann et al. 1991). The graphic programming environment of MAX (Puckette 1991) allows constant experiment and development without the need for conventional programming. It also allows mapping of MIDI messages to any desired parameter so that standard MIDI devices may control any part of the instrument. I have found the ISPW to be an excellent platform for the development of signal processing instruments (Casserley, 1993). At present there is no other adequate platform for work of this kind.
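
As an illustration of this kind of mapping, the following is a minimal sketch in Python; the actual mapping is of course done graphically in MAX, and the scaling curve and example parameter are assumptions, not part of the real patch.

    def map_cc(value, lo, hi, curve=1.0):
        """Map a MIDI controller value (0-127) onto the range [lo, hi]."""
        norm = (value / 127.0) ** curve      # curve > 1 gives finer control at the low end
        return lo + norm * (hi - lo)

    # e.g. a fader steering a filter's centre frequency (hypothetical assignment):
    # centre_hz = map_cc(fader_value, 100.0, 5000.0, curve=2.0)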

A more difficult problem is the selection of suitable controllers. My experience as a percussionist led me to consider MIDI drum pad units, and I found the drumKAT to be the most suitable. I was already using a Peavey PC1600 fader unit and a Yamaha MFC1 foot controller with extra footpedals and switches. These became the starting point for my prototype instrument.

I approached Stichting STEIM, Amsterdam for assistance with my work, and they invited me to work there in January, 1996. At STEIM I worked on two models, both of which became key elements in the first version of the instrument. One was a long multi-tapped delay line with a single control over all the output levels. The other was also a delay line, but with a means of controlling its behaviour from the drumKAT. These were demonstrated in workshops at STEIM and at the Bonner Entwicklungswerkstatt für Computermedien in January, 1996. At this stage they were two separate programs, but they became the core elements of the first instrument.

 

Instrument Design

Figure 2 shows an alternative view of the signal processing instrument. This indicates a signal flow from 'preprocessing', processes local to the source, through processes local to the computer musician (DSP) to 'postprocessing', the 'field' part of the instrument. This creates the possibility of combining one or more elements as required. This is the general structure adopted for the first version of the instrument.
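
As a rough illustration of this structure, here is a minimal sketch (in Python rather than MAX) of a chain in which any stage can be bypassed or blended; the stage names in the usage comment are placeholders for the processes described below.

    def run_chain(x, stages):
        """stages: list of (process, wet) pairs; wet = 0.0 bypasses that stage."""
        for process, wet in stages:
            x = (1.0 - wet) * x + wet * process(x)
        return x

    # e.g. pre-processing fully in, the pad delays bypassed, a light 'field':
    # y = run_chain(src, [(preprocess, 1.0), (pad_delays, 0.0), (multitap, 0.3)])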

For preprocessing this instrument uses a convolution filter based on the design of Settel and Lippe (1994). This convolves the source with the output of the FM generator shown in Figure 3. The particular features of this generator are: 1) the carrier is modulated by a set of odd harmonics of itself; 2) the harmonics are passed through a variable band-pass filter before modulating the carrier; 3) control of the fundamental (carrier) frequency, the filter bandwidth and the band centre frequency is available to the computer musician. This has proved to be a fruitful source of transformations, varying from subtle colouring of the instrument to extreme distortions. The variable parameters are mapped to the Peavey PC1600 faders.
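
The following is a minimal sketch, in Python rather than the actual ISPW patch, of the two ideas in this stage: the FM generator whose modulator is a band-pass filtered sum of odd harmonics of the carrier, and an FFT-based spectral multiplication of source and generator in the spirit of Settel and Lippe. The frame size, filter order, modulation index and other constants are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, lfilter

    SR = 44100  # sample rate

    def fm_generator(f0, band_centre, bandwidth, dur, n_harm=8, index=2.0):
        """Carrier at f0, modulated by band-pass filtered odd harmonics of itself."""
        t = np.arange(int(dur * SR)) / SR
        # sum of odd harmonics of the carrier (1st, 3rd, 5th, ...)
        harmonics = sum(np.sin(2 * np.pi * (2 * k + 1) * f0 * t) for k in range(n_harm))
        # variable band-pass filter applied to the modulator
        lo = max(band_centre - bandwidth / 2, 10.0)
        hi = min(band_centre + bandwidth / 2, SR / 2 - 10.0)
        b, a = butter(2, [lo / (SR / 2), hi / (SR / 2)], btype="band")
        modulator = lfilter(b, a, harmonics)
        # phase-modulate the carrier with the filtered harmonics
        return np.sin(2 * np.pi * f0 * t + index * modulator)

    def spectral_convolve(source, generator, frame=1024, hop=256):
        """Frame-by-frame spectral multiplication with overlap-add."""
        win = np.hanning(frame)
        out = np.zeros(len(source) + frame)
        for i in range(0, len(source) - frame, hop):
            S = np.fft.rfft(source[i:i + frame] * win)
            G = np.fft.rfft(generator[i:i + frame] * win)
            out[i:i + frame] += np.fft.irfft(S * G) * win
        return out / np.max(np.abs(out) + 1e-12)

    # For example, colouring a two-second source with the generator:
    # src = 0.1 * np.random.randn(2 * SR)
    # gen = fm_generator(f0=110.0, band_centre=800.0, bandwidth=400.0, dur=2.0)
    # wet = spectral_convolve(src, gen)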

The second stage of the instrument (Figure 4) utilises a delay line which is controlled by the drumKAT. An incoming signal triggers a timer; a note from the drumKAT sets a delay tap to the current time, thereby replaying the captured sound. A polyphonic system has been implemented, so up to four delays are available at once. There are also provisions for a sustain pedal, a pedal to interrupt triggers so that the same sound can continue to be replayed, and control of feedback. Each voice also contains a Hilbert transform frequency shifter (an IRCAM library patch), so each pad played produces a new pitch and timbre. Several shift ranges can be selected by footswitches, varying from subtle to drastic changes. A footpedal can be used to vary the portamento rate of the shifts.
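
A minimal sketch of two elements of this stage may help: a frequency shifter realised, as in a Hilbert-transform shifter, by single-sideband modulation of the analytic signal (scipy's offline hilbert() stands in for the real-time IRCAM patch), and the pad logic by which an input attack starts a timer and a pad hit freezes a delay tap at the elapsed time. Feedback, the sustain and trigger-interrupt pedals, and the per-voice shifter wiring are not shown; buffer sizes and thresholds are assumptions.

    import numpy as np
    from scipy.signal import hilbert

    def frequency_shift(x, shift_hz, sr=44100):
        """Shift every component of x by shift_hz Hz (not a pitch transposition)."""
        analytic = hilbert(x)                          # x + j*H{x}
        t = np.arange(len(x)) / sr
        return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

    class PadDelay:
        """Ring buffer whose taps are set by pad hits to the time since the last input attack."""
        def __init__(self, sr=44100, max_delay_s=20.0, voices=4):
            self.buf = np.zeros(int(max_delay_s * sr))
            self.write = 0            # ring-buffer write index
            self.trigger_pos = 0      # write index at the most recent input attack
            self.taps = [None] * voices

        def input_attack(self):
            """Called when the incoming signal crosses the trigger threshold."""
            self.trigger_pos = self.write

        def pad_hit(self, voice):
            """A drumKAT note: freeze this voice's tap at the current elapsed time."""
            self.taps[voice] = (self.write - self.trigger_pos) % len(self.buf)

        def process(self, sample):
            """Write one input sample and return the sum of all active taps."""
            self.buf[self.write] = sample
            out = sum(self.buf[(self.write - d) % len(self.buf)]
                      for d in self.taps if d is not None)
            self.write = (self.write + 1) % len(self.buf)
            return out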

At an early stage an important decision was made to use a delay-line model, rather than a sampler model. This decision was taken both for musical and practical reasons. Improvised music is very much a music of the 'now', and I felt that longer term storage of material would detract from the immediacy of the performance. The discipline of working with the present, or at least the very recent past, was felt to be an asset. In addition, I felt that having to make decisions about which material to store for future use, and when to replay it, would be a serious distraction in performance.

The third stage utilises a twenty-second multi-tapped delay line with 22 taps. The output levels of all the taps are controlled by a single footpedal. There are also controls for changing the proportions of the delay times, and a foot control to vary all the delays in proportion. Figure 5 shows one of the delay taps. The abstraction 'declicker.abs' briefly mutes a tap while its delay time is changed; a pipe delays each tap's change by a different amount, so that not all taps are muted simultaneously; 'flushmute~.abs' also responds to a command to 'flush' the whole delay line, muting each tap for the period of its delay. The result is a very simple structure that allows great flexibility in use.
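
Again, a minimal sketch (Python, not the actual patch) may clarify the structure: a long ring buffer read by 22 taps, a single master level standing in for the footpedal, a brief per-tap mute while a tap is re-timed (the 'declicker' idea), and a flush that mutes each tap for the period of its own delay. The staggering of re-time commands by the pipe is not shown, and the constants are illustrative.

    import numpy as np

    class MultiTapDelay:
        def __init__(self, sr=44100, length_s=20.0, n_taps=22, mute_s=0.02):
            self.buf = np.zeros(int(length_s * sr))
            self.write = 0
            self.mute_len = int(mute_s * sr)
            # each tap is [delay_in_samples, samples_still_muted]
            self.taps = [[int((i + 1) * len(self.buf) / (n_taps + 1)), 0]
                         for i in range(n_taps)]
            self.level = 0.0          # master output level (the footpedal)

        def set_tap_time(self, i, delay_samples):
            """Re-time one tap, muting it briefly to avoid a click."""
            self.taps[i][0] = delay_samples
            self.taps[i][1] = max(self.taps[i][1], self.mute_len)

        def flush(self):
            """Mute every tap for the period of its own delay, emptying the line."""
            for tap in self.taps:
                tap[1] = tap[0]

        def process(self, sample):
            self.buf[self.write] = sample
            out = 0.0
            for tap in self.taps:
                if tap[1] > 0:
                    tap[1] -= 1       # this tap is still muted
                else:
                    out += self.buf[(self.write - tap[0]) % len(self.buf)]
            self.write = (self.write + 1) % len(self.buf)
            return self.level * out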

During my work with Evan Parker at STEIM in January, 1997 I decided to try using the same delay line for both the pad system and the multi-tapped delay. This has proved to be very successful and was adopted for the first generation instrument.

Signal paths through the instrument can be controlled from the Peavey PC1600. These include instrument inputs to the convolution filter, inputs to the delay system, convolution direct output and convolution to delay system. The complete layout of the PC1600 is shown in Figure 6. This is the version used for trio performances with Evan Parker and Barry Guy. In this version I used a different 'local' process for Barry, a multiple flanger patch, and provided a means of exchanging Barry's and Evan's inputs. Note that some of the PC1600 buttons are used to control aspects of the program.

The complete MIDI setup is shown in Figure 7. The Macintosh PowerBook runs a small MAX program which relays useful information about the state of the instrument to a more convenient viewing position.

 

The Instrument in Use

The instrument described has been used for performances and recordings with a number of musicians. Some examples are:

- Solo performances in which I generate all the sounds, using voice, amplified percussion and/or self-built monochords.

- Duo with poet/vocalist Bärbel Nolden. I use voice, percussion and monochords to complement her voice.

- Duo with saxophonist Evan Parker. I sometimes insert processed vocal sounds into the texture.

- Trios with Parker and bassist Barry Guy, with pianist Roland Bürck and cellist Roland Graeter, and with flautist Simon Desorgher and guitarist Richard Durrant.

In general the instrument has proved to function well and to meet the design criteria. It has allowed me to respond effectively to the challenges presented by these very different musicians. The three stages of the instrument appear to fulfil effectively the functions of 'local to source', 'local to computer musician' and 'field', without setting overly rigid demarcation lines between them. The combination of convolution and frequency shifting gives the instrument a clear 'character', and the way in which these transformed sounds can be mixed with untransformed sound in the delays helps to integrate the sound picture. Control of the signal paths allows quite fluid movement between the areas and, in particular, a variety of combinations that move into the zone of my central triangle.

In addition, parts of the instrument have been used in other work. In 'The Garden of Forking Paths' (1996) for guitar and computer I utilised a development of the drum pad instrument. In 'PanDemonic 3' for the giant panpipes I used versions of the drum pad instrument and the multi-tapped delay (Casserley, 1996).

 

Future Directions

The first generation instrument grew gradually by combining various elements until a structure emerged. This has led to a situation where further experiment has become more and more difficult. The first task for a second generation instrument is to create a clear framework into which experimental modules may be inserted, so encouraging further development. This structure will also clarify, and make more flexible, the signal paths through the instrument.

Given such a structure I intend to work on increasing the variety of spectral models for the convolution filter. I also intend to extend the control possibilities through the development of some new custom controllers, and to experiment with new processes.

 

Conclusion

A musical and practical consideration of experience with real time processing in improvised music has led to a viable instrument structure, both as a basis for current performance and as a platform for further experiment. The application of Emmerson's 'local/field' concept has been a valuable stimulus to analysing the requirements and to developing the design. Encouraging initial results are leading to further development and refinement of the concept.

 

Acknowledgements

I am greatly indebted to Stichting STEIM, Amsterdam for my residencies there. The stimulating atmosphere at STEIM has been a crucial catalyst to the development of my ideas. In particular I would like to thank Michel Waisvisz, Joel Ryan, Steina Vasulka, Nicolas Collins, Ray Edgar and Tom Demeyer for their invaluable advice and encouragement. The performers (listed above) who have acted as 'guinea pigs' for the new system have provided essential feedback as well as giving me the chance to gain experience in using the instrument. In particular, I am indebted to Evan Parker, without whose abundant enthusiasm and infinite patience much of my progress would have been very much more painful.

 

References

Casserley, L., 1993 - "The IRCAM Signal Processing Workstation: A Composer/Performer's View", Journal of Electroacoustic Music, Vol 7.

Casserley, L., 1996 - "Report on Tube Sculpture Spanish Tour", Diffusion 1.

Chadabe, J., 1997 - "Electric Sound: The Past and Promise of Electronic Music", Prentice-Hall, Upper Saddle River, NJ.

Emmerson, S., 1996 - "'Local/Field': Towards a Typology of Live Electronic Music", Journal of Electroacoustic Music, Vol 9.

Lindemann, E. et al. 1991 - "The Architecture of the IRCAM Music Workstation", Computer Music Journal, Vol 15 No 3.

Puckette, M., 1991 - "Combining Event and Signal Processing in the MAX Graphical Programming Environment", Computer Music Journal, Vol 15 No 3.

Rowe, R., 1994 - "Interactive Music Systems: Machine Listening and Composing", The MIT Press, Cambridge, MA.

Settel, Z. and Lippe, C., 1994 - "Real-Time Musical Applications using FFT-based Resynthesis", Proceedings of the International Computer Music Conference, Aarhus, Denmark: International Computer Music Association.

 

Discography

Solar Wind

Evan Parker, Soprano Saxophone

Lawrence Casserley, Signal Processing Instrument

Touch, TO:35

 

Lawrence Casserley

August, 1997