Javier Alejandro Garavaglia

Gegensaetze (gegenseitig) for alto flute, 4-Track Tape and live-electronics (1994)

Genre: Instrument + Tape + Interactive

Duration: 33 Min.

 

PARTS OF THE FOLLOWING TEXT APPEARED IN THE PROCEEDINGS OF THE 2nd BRAZILIAN SYMPOSIUM ON COMPUTER MUSIC (SCBII) IN AUGUST 1995.

PAPER READ DURING THE CONFERENCE HELD IN CANELA (BRAZIL)

 

 

THE NECESSITY OF COMPOSING WITH LIVE-ELECTRONICS

A short account of the piece "Gegensaetze (gegenseitig)" and

of the hardware (AUDIACSYSTEM) used to produce the real-time processes in it.

 

Javier Alejandro Garavaglia

Hendrik-Wittestr. 11

45128 Essen

Germany

Tel: (0049-201-229831)

E-Mail Address: gara@folkwang.uni-essen.de

ICEM - Folkwang-Hochschule Essen

Klemesborn 39 / 45239 Essen, Germany

 

 

ABSTRACT

 

The aim of this paper is to present my piece Gegensaetze [gegenseitig] (in English: Contraries [reciprocally]) for alto flute, 4-channel tape and live electronics (1994), giving an account of how and why the work was conceived. The hardware and software environments responsible for the real-time processes will be described: the AUDIACSYSTEM, a project carried out by the ICEM (Institute for Computer Music and Electronic Media) at the Folkwang Hochschule Essen and the company Micro-Control GmbH & Co KG, both in Germany. Finally, some examples and passages of the piece will be explained.

 

"CONTRARIES"

 

Gegensaetze (gegenseitig) was the result of an idea that I had had for a long time: to compose a piece in which contraries are shown not only against each other (in a negative way), but are also able to build some kind of unity by creating something completely new, constructive and positive.

 

My first challenge was how to put this into music without using a text about the subject. At the beginning I simply wanted to make a contrast between a normal instrument and a pre-recorded tape, but this did not seem to solve the problem: it could only show the contraries themselves, not the reciprocal action of both elements. The instrument should create something really new together with the electronics, and this should happen in real time, not with recorded material. For that reason I first began to work on the tape itself, making sounds with two Yamaha synthesizers (TX802 and TG77) that should have no relation to conventional instruments. I then composed a preliminary piece for stereo tape alone, from which I took the materials for the definitive version of the work.

 

Once the tape materials were selected, I already knew that the instrument had to be a very soft one, and the choice fell on the alto flute. How, then, should the "reciprocal action" look? I was by now quite sure that it had to be performed with live electronics. This decision led me to the next problem: what type of live electronics did I really want and, furthermore, which kind of system should I use? There are basically two ways of working with live electronics: on one side, those approaches whose aim is a new conception of how the live instrument can be projected into a particular space or room, normally using only echoes and delay lines; on the other side, the more complicated ones, in which the sound is actually processed in real time (through FM, AM, filters, envelope generators, envelope followers, transpositions, etc.) up to the point where the instrument itself may no longer be recognizable.

 

At the ICEM of the Folkwang Hochschule in Essen (Germany) there was no ISPW, but a completely different project, which had been carried on for eight years at the ICEM by a group of German composers and engineers. This project is the AUDIACSYSTEM, about which I shall say more later in this paper.

 

Once I had the three instrumental groups (alto flute, 4-channel tape and the 4-channel live electronics), I wanted to continue composing each parameter (from the micro- up to the macro-structures) with the same concept of THESIS and ANTITHESIS working together to create something new, so that the main idea could be shown at any point of the piece. For this purpose I chose two principles opposed to each other:

- "single-principle"

- "totality-principle".

 

Both principles are the main generators of every event throughout the work and are represented throughout the piece by two objects: a "glissando-object", representing the "totality-principle", and a "single-note-object", representing the "single-principle".

 

For the whole structure of the work a numerical row was chosen. Its first four components were selected explicitly by myself; from the 5th component on, each value is the sum of the last three numbers (that is, the next figure in the row is constituted by the reciprocal action of the former three). The result is a larger new value standing as a contrary to the first ones. For example, the row begins with (1 1 3 5), the numbers I selected arbitrarily; the next value is 9 (1+3+5), the next 17 (9+5+3), and so on. Each single element contributes to a new partial totality. This row plays an extremely important role in the composition of the pitches, rhythms, metronomic values and form, as well as in the stage production.
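The rule for generating the row can be sketched in a few lines of Python (the function name is my own):

```python
# Numerical row of the piece: the first four values (1, 1, 3, 5) are
# chosen freely; every further value is the sum of the preceding three,
# the "reciprocal action" of the former elements.
def numerical_row(n, seed=(1, 1, 3, 5)):
    row = list(seed)
    while len(row) < n:
        row.append(sum(row[-3:]))
    return row[:n]

print(numerical_row(8))  # [1, 1, 3, 5, 9, 17, 31, 57]
```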

 

The form of the piece consists of 5 sections, each one showing the principles already mentioned:

 

1 - Solo alto flute ("single-principle")

2 - Alto flute + Tape (as opposites)

3 - Only Tape ("single-principle")

4 - Alto flute + Tape + live-electronics ("totality-principle": reciprocal action of all three)

5 - Only live-electronics ("single-principle" as result of the reciprocal action of all three)

 

Rhythms have also been composed with the numerical row: there is a unit value (the sixteenth, or semiquaver), which is multiplied and divided by the numbers 1, 3, 5 and 9 in all possible combinations of these four numbers (for example, the ratio 9:5 means that 9 equal durations are played in the time of 5 sixteenths; the ratio 1:3 results in a dotted eighth, etc.).
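The duration arithmetic implied by these ratios can be sketched as follows (assuming, as described above, that a ratio a:b places a equal durations in the time of b sixteenths):

```python
from fractions import Fraction

# Length of each note in a ratio a:b, measured in sixteenths:
# a equal durations fill the time of b sixteenth notes.
def note_length(a, b):
    return Fraction(b, a)

print(note_length(9, 5))  # 5/9 -> each of the 9 notes lasts 5/9 sixteenth
print(note_length(1, 3))  # 3 -> three sixteenths, i.e. a dotted eighth
```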

 

The stage production also works with contraries: the stage is illuminated only while the flautist plays (parts 1, 2, 4 and 5). In part 3, where only the 4-channel tape is present, the whole stage and, if possible, the whole hall should be dark.

 

The materials for the pitches were derived from a chromatic scale beginning on the pitch G3 (the lowest note of the alto flute in G), representing a whole or totality object, a meta-symbol of the "glissando-object". This object plays one of the most important roles throughout all parameters of the piece, not only in the flute part but also, and mainly, in the tape and the live electronics. The pitches for the flute part are generated by an algorithm that eliminates notes in such a way that at the end only one pitch is left. The result is a process going from the whole (all 12 tones) down to ONE SINGLE element, generating a tension between the two main principles mentioned above. The eliminated pitches are used later in part 4, in the form of three improvisations in which only the rhythms are totally free. These improvisations make a counterpoint to the live electronics and even modulate them, as happens in the third one.
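The elimination algorithm itself is not documented here; the following hypothetical sketch only illustrates the general idea of reducing the chromatic whole to a single element while collecting the eliminated pitches for later reuse (as in the three improvisations of part 4):

```python
# Hypothetical sketch: the score's actual elimination rule is not given
# in this text; this version simply removes one pitch per pass.
CHROMATIC = ["G3", "G#3", "A3", "A#3", "B3", "C4",
             "C#4", "D4", "D#4", "E4", "F4", "F#4"]

def reduce_to_one(pitches):
    pool, eliminated = list(pitches), []
    while len(pool) > 1:
        eliminated.append(pool.pop(1))  # arbitrary rule for this sketch
    return pool[0], eliminated

last, removed = reduce_to_one(CHROMATIC)
print(last, len(removed))  # one pitch remains, 11 were eliminated
```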

 

The whole 4-channel tape part was produced and composed using different programmes, procedures and methods, such as Common Music, transpositions and filters (mostly with Sound Designer II), echoes, and even the AUDIACsystem itself. The roughly twenty-minute 4-channel tape begins as a counterpoint to the alto flute, then develops on its own, and finally fades out very slowly as the live electronics start.

 

 

 

THE AUDIACSYSTEM

 

The AUDIACsystem is a project developed at the Folkwang Hochschule in Essen (Germany) by the ICEM (Institute for Computer Music and Electronic Media) and Micro-Control GmbH & Co KG. The people involved in its design are Dr. Helmut Zander, Dipl. Ing. Gerhard Kümmel, Prof. Dirk Reith and the German composers Markus Lepper and Thomas Neuhaus. The project began in 1987 and comprises not only the hardware architecture, whose specially designed Audio Processor Unit has roughly the audio-processing power of 2.5 first-generation Pentiums, but also the software itself, which was created exclusively for this particular environment.

 

The hardware configuration employed in my piece should be regarded today as a completed stage of the system's development, because almost the whole of it is about to be updated: the current design will be replaced by a new one, resulting in a chain of Pentiums or, more probably, P6s in a RISC-processor configuration, making the whole somewhat smaller than today's one cubic metre and possibly also compatible with a PowerPC.

 

HARDWARE CONFIGURATION

 

The hardware configuration of the AUDIACSYSTEM is shown in the following schematic representation:

 

 

The hardware architecture of the AUDIAC has been conceived on the principle of specialized subsystems. It was made not only to generate organized forms for musical production, but also to incorporate the generation and processing of sounds in real time. This implies a wide range of different demands on its computing potential, which can only be met with the above-mentioned subsystems and their communication capacities.

 

The whole system can be described as the cooperation of a "von Neumann" unit on the one side and a signal-processing unit on the other. The former provides configuration (devices), control and driving functions, which steer the latter's processes of sound generation and processing. Communication is guaranteed with the help of the Multibus II. The "von Neumann" part consists of a manager (the APM) and one or more control units, the APCs; the two communicate via SCSI.

 

The APM (Audio Processor Manager) is a 486 computer with a 66 MHz clock rate, on which the software specially designed for the AUDIAC is implemented. This software is the language APOS (Audio Processing Operating System), created especially for this purpose by the German composer Markus Lepper. APOS pursues three goals:

(1) a monolithic system architecture, in the sense that every hardware and software level can be described with the same language, from an individual bit of the hardware up to very complicated abstract compositional models;

(2) an extensible anthropomorphic surface, in the sense that each composer can not only use already defined algorithms but can also implement his own language for a particular use;

(3) an abstraction from technical necessities, meaning that composing should be possible on a symbolic level, without caring about technical details.

 

APOS is an object-oriented language that works with two levels of interpreter: an outer interpreter, which receives information in ASCII code, and an inner interpreter, which reads a row of object references, i.e. references to objects that already exist and can be recognized as such. The software runs in protected mode for memory-management reasons, which makes it possible to carry out the kinds of tasks necessary for the actual configuration of the system.

 

Regarding the APCs (Audio Processor Controllers), the system can hold from one up to four units. These are all "186" computers which, thanks to the ATOS kernel (a real-time operating-system kernel specially developed for musical applications), have at their disposal the many functions needed for multitasking operations. The ATOS configurations are created on the APM in APOS and are later called by the APC, generating or processing sounds. The APC and the signal processor run asynchronously. The heart of the APC is the APU (Audio Processor Unit), the actual audio processor. Beside it there are a number of auxiliary units: the AOC (a unit capable of transferring data and time code between the APUs, also from one APU to two others simultaneously, and which can be programmed separately); the CIN (a low-level control interface with a 16-fold multiplexed A-D converter, through which up to 16 control-voltage units can be brought in); and the AIF (the A-D and D-A converters).

The APU consists of a memory unit (MAU = Memory Address Unit) and an arithmetic unit (AAU, a multiplier). Up to 4 APUs plus one AOC can be combined, connected through a z-bus. Data can be read from and written to the Multibus II. The two memories of the APU (XMY and YMY) can be addressed singly or in parallel. The in- and out-sample ports work on the FIFO principle and connect the APU with the outside world through the A-D and D-A converters. The interface has 2 inputs and 4 outputs, which can be enlarged up to 32 and 64 respectively. The computing processes run in parallel, which means the APU can perform up to two additions (or subtractions), one multiplication, and two reads and writes from and to the D-RAM (or four from the S-RAM) at once. The flexible handling of the signal-processing unit is guaranteed by its completely free programmability. The synthesis or processing of sounds results from micro-programmes specially developed for this APU.

 

The Parameter Function Generator (PFG), a computing unit in itself, works within the APU. It is coupled to the APU but can, owing to its complexity, be seen as an independent unit. Its multiple possibilities of application can be summarized as the provision of control instruments for the manipulation of sound: envelopes, spectral control, sound intensity, etc. For each parameter to be controlled, one "value pair" plus a control bit can be placed per time unit, and every fourth sample can take a new PFG value. Altogether there are 128 PFGs free for each APU. The PFG has basically two operating modes: in the first, a value pair INC/FIN performs a linear interpolation, building an envelope that continuously alters the y values along the time axis; the second interprets a value pair y-dt, where y takes one value and dt represents its duration, building discrete values. The control bits allow a flexible and interactive influence on the corresponding value rows, for example: back to the first value, mode switch, segment switch-interrupt and hold function (fermata). Interrupts are possible in the first operating mode at each FIN value; in the second mode, at the moment of any new y value. Through the use of the interrupt features, new support values can be called, resulting in more support values for a single parameter function.
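The first operating mode can be illustrated with a small sketch. The function name and the stepping are my own reading of the INC/FIN description above (a real PFG delivers one new value every fourth sample):

```python
# Linear-interpolation mode of the PFG: an INC/FIN value pair ramps the
# current value y by INC per step until the target FIN is reached,
# building a continuous envelope along the time axis.
def pfg_ramp(y, inc, fin):
    values = [y]
    while (inc > 0 and y + inc <= fin) or (inc < 0 and y + inc >= fin):
        y += inc
        values.append(y)
    return values

print(pfg_ramp(0.0, 0.25, 1.0))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```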

 

Microcodes

 

The biggest time unit to take account of is that of the sample rate. The time between two samples is called a "MINICYCLE". Within such a MINICYCLE there are multiple "cycle calculations", which are assigned to different process channels (PROK). One cycle calculation can be divided into a given number of microcycles, corresponding to the machine rate, which is normally set at 10 MHz. All calculations necessary for the generation of a sample must occur within a single MINICYCLE. The cycle is finished with a reset signal, which leads to the next step, the D-A conversion. With a sample rate of 48 kHz, the duration of a MINICYCLE comes to around 20 microseconds.
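The timing arithmetic above can be checked directly:

```python
# With a 48 kHz sample rate, a MINICYCLE lasts 1/48000 s (about 20.8
# microseconds); a 10 MHz machine rate leaves about 208 microcycles
# for all the calculations of one sample.
SAMPLE_RATE = 48_000       # Hz
MACHINE_RATE = 10_000_000  # Hz

minicycle_us = 1e6 / SAMPLE_RATE
microcycles = MACHINE_RATE // SAMPLE_RATE

print(round(minicycle_us, 1), microcycles)  # 20.8 208
```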

 

 

WORKING WITH THE AUDIACSYSTEM

 

The input data can be programmed in two different ways: on the one hand algorithmically; on the other, specially pre-composed material can be imported into the system. The two possibilities do not exclude each other but can be mixed throughout a composition, which is actually the case in this piece. The resulting score can again be defined in two different ways: statically, creating discrete values for the structure, or dynamically, where the beginning and end of each event are particularly significant, because any kind of process can be programmed between the two extremes (for example transpositions, dynamic filters, etc.). These data are then translated, resulting in a row of orders to be interpreted and fulfilled.

 

Coming back to the piece, the ca. 13-minute live-electronics part is divided into three groups: "LA", "LB" and "LC", "L" meaning in this case "live". For the programming in APOS I had the invaluable help of Markus Lepper, whom I have already mentioned as the creator of this language.

 

In the first part, "LA" (tempo: quarter = 50, metre 3/4), the AUDIAC system records three different types of single events played by the alto flute: a breath-out noise, one multiphonic, and a row of slap tones played separately. They are played back at intervals of 9, 5 and 3 quarters (proportions taken from the numerical row discussed in the first part of this paper), rotating from one channel to the next anticlockwise, in opposition to the tape's channel distribution: 1 front-left, 2 front-right, 3 right-rear, 4 left-rear.
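The anticlockwise rotation can be sketched as follows (the starting channel here is an assumption of this sketch, not taken from the score):

```python
# Tape order (clockwise): 1 front-left, 2 front-right, 3 right-rear,
# 4 left-rear. The live events move through the same ring of speakers
# in the opposite (anticlockwise) direction.
CHANNELS = ["front-left", "front-right", "right-rear", "left-rear"]

def anticlockwise(start=0, n=8):
    return [CHANNELS[(start - i) % 4] for i in range(n)]

print(anticlockwise(0, 4))
# ['front-left', 'left-rear', 'right-rear', 'front-right']
```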

 

The recording of the different sounds is made through an object defined in APOS as "RECORDER", which works like a normal recorder: it has a begin and an end buffer time, an amplitude value, etc. The recorded samples are played by another object, the "PLAYER", which also has a begin and an end buffer time, an amplitude, an input to vary its frequency ("FINC", transposing the sample) and an input to loop the sample from a given buffer-time point. There are four players, one for each of the four channels. Each recorded sound is given a different memory address, so that it can be called at any time. For the time allocation of these events, the computer was asked to find the best possible distribution over all four channels in space and time, in order to force the events to meet quite often on the same channel. When this happens, one event multiplies the other, so that they modulate each other (amplitude modulation). When all events (once the breath-out noise, once a multiphonic, and five times different slap tones) have been played and recorded, the computer begins to transpose the information of 3 of the 4 players with different ratios (taken from the numerical row). This transposition, made through the "FINC" input of each player, takes place dynamically: within the time limits given in the score, the frequency is varied every fourth sample, producing "glissando structures".
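The amplitude modulation that occurs when two events meet on one channel is simply a sample-by-sample multiplication, as this minimal sketch shows (the test signals are arbitrary placeholders, not material from the piece):

```python
import math

# When two recorded events coincide on a channel, one multiplies the
# other sample by sample (amplitude modulation).
def amplitude_modulate(a, b):
    return [x * y for x, y in zip(a, b)]

carrier = [math.sin(2 * math.pi * 440 * t / 48000) for t in range(64)]
modulator = [math.sin(2 * math.pi * 110 * t / 48000) for t in range(64)]
am = amplitude_modulate(carrier, modulator)
print(len(am))  # 64
```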

 

For part "LB", there are two moments to be recorded, both 12 seconds long. This part makes a formal "crossfade" with "LA" and consists entirely of transpositions, on all 4 channels, of both recorded materials. These transpositions, however, are not dynamic but discrete. The first of the two recorded materials of "LB" must also be stored, because it will be used in the next part, "LC".

 

The score for "LB" is programmed half algorithmically and half pre-composed, as the following APOS example demonstrates:

 

new plstarts "pls2" 200

* open pls-kanal 0

*;       ANZAHL   ABSTAND

*;       EINSAETZE

* put pls2   1  *  17

* put pls2   3  *  9

* put pls2   5  *  5

* put pls2   9  *  3

* put pls2  17  *  1

*

* apl pls2 110 to 150 [ if [[_1 mod 21] ?eq (110 mod 21)] ['@ .p0 _0 ok] ]

* apl pls2 111 to 150 [ if [[_1 mod 19] ?eq (111 mod 19)] ['@ .p1 _0 ok] ]

* apl pls2 112 to 150 [ if [[_1 mod 15] ?eq (112 mod 15)] ['@ .p2 _0 ok] ]

* apl pls2 113 to 150 [ if [[_1 mod 9] ?eq (113 mod 9)] ['@ .p3 _0 ok] ]

 

 

All these lines describe the starting points of each of the four players. The last four lines give an explicit (pre-composed) indication of how the structure should finish; the "put" lines, on the other hand, create the starting points automatically, using a special syntax implemented for this purpose. This syntax is implemented with the following APOS source text:
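A hedged Python reading of the "put" lines (ANZAHL EINSAETZE = number of entries, ABSTAND = spacing): each "put pls2 N * M" line appears to add N starting points spaced M units apart, rotating round-robin through the four player channels. The exact APOS semantics may differ in detail; this sketch only shows the general mechanism:

```python
# Each (count, spacing) pair adds `count` starting points `spacing`
# units apart, cycling over the four players. The pairs below are taken
# from the "put" lines of the score excerpt; note that the counts
# 1, 3, 5, 9, 17 are again values of the numerical row.
def put_entries(pairs, channels=4):
    events, channel, position = [], 0, 0
    for count, spacing in pairs:
        for _ in range(count):
            position += spacing
            events.append((channel, position))
            channel = (channel + 1) % channels
    return events

rows = [(1, 17), (3, 9), (5, 5), (9, 3), (17, 1)]
print(len(put_entries(rows)))  # 35 starting points altogether
```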

 

 new latch "pls-kanal"

*  new latch "pls-Position"

*

*; 4 new methods are going to be defined for this purpose (dm)

*

*  dm [ put (any plstarts) @ (any integer) .p (any integer)

*>              (any integer) * (any integer) ]

*>   [ apl 0 to [pred _6]

*>        ['@ pls-Kanal  ['['_5 + __0] MOD 4] ;

*>         @ pls-Position [' _3 + ['_8 * ['SUCC __0]]] ;

*>         ~do _0_1 ]]

*;

*  dm [ ~do put (any plstarts) ]

*>   [ @ .p [pls-kanal] [@ _2 [pls-position]] OK ]

*;

*

*  dm [ open pls-kanal (any integer) ]

*>   [ @ _1_2 ; @ pls-position (INTEGER 0) ]

*;

*  dm [ put (any plstarts) (any integer) * (any integer) ]

*>   [ _0_1 @ [pls-position] .p [[SUCC[pls-kanal]]MOD 4] _r2 ]

 

 

The evaluation of both texts results in an abstract time structure which can be edited either manually or automatically. In the latter case it can be submitted to different processes of automatic transformation and interpretation, the actual generation of sound being only one of the many possible steps in such a chain.

 

The last part, "LC", begins with three eight-measure statements of the alto flute, which are recorded and played back by each speaker at intervals of 3, 5 and 1 quarters (the tempo is now quarter = 90; the metre remains 3/4), making a canon that wanders over all 4 speakers. Between these eight-measure statements, the alto flute plays three improvisations, lasting 9, 17 and 31 seconds respectively. The third one modulates the frequency of the recorded signal of all three parts of the canon; the result of this modulation is recorded anew and again modulated by the alto flute. From this point on the flute no longer plays; the resulting modulation is amplitude-modulated with the first element of "LB" and then transposed dynamically. The transpositions are gradually filtered with a notch filter whose centre frequency lies around the pitch F#5 and whose bandwidth is dynamically narrowed towards this pitch. At the end only a filtered F#5 is left, standing in opposition to the first note of the piece, G3 (the last and first notes, respectively, of the chromatic row used as pitch material).

 

Gegensaetze (gegenseitig) was the first piece of music to use the AUDIACsystem in a real-time live performance. Up to its premiere, the system had only been used to steer other types of events (all within electronic music production); no pieces with live instruments had yet been composed especially for and with the system.

 

Although the general conception of the work can be interpreted from several points of view, my intention was to show the "thesis-antithesis" concept in the light of human, social and naturally also political relationships. I think that nowadays, at a time when neo-Nazi ideas and deeds are spreading again (mostly in Europe and the USA, but not only there), the concept of a reciprocal action of opposite elements may be considered the contrary of intolerance, racism and discrimination. This does not mean that my piece has a secret programme or that it is a political work (as is mostly the case with Luigi Nono's music, to take only one example), but it may be able to recall these types of implications.

 

The piece was first performed on June 18, 1994, in the city of Dortmund (Germany) by the German flautist Christianne Schulz.

 

--------------------------------------------------------

June 1995 – revised 2006-07-28

 © Javier Alejandro Garavaglia