of Electroacoustic Music
Agostino Di Scipio
Modes of Interference / 2
Modes of Interference / 2 was created between 2005 and 2006 and is dedicated to Gianpaolo Antongirolami. It belongs to a series of four pieces with the same title, composed between 2005 and 2011. They share the same live-electronic approach, consisting of
“a particular materialization of the double feedback loop […] and [developing] a specific composed dynamical system with its own ways to create and manage the real-time and real-space conditions for the emergence of sound.” [Di Scipio 2014, p. 89]
Di Scipio first became interested in feedback as a constitutive feature of non-linear systems in the context of non-real-time, algorithmic composition. His interest in live electronics was triggered by access to a Kyma system in 1994. In 4 Variations on the rhythm of the wind for double bass, Paetzold recorder and electronics (1995), the question of interaction was investigated as a by-product of two parallel processes, instrumental playing and deterministic signal processing. Such experiences, together with a critical attitude towards interaction concepts restricted to action and reaction, led to the development of autonomous non-linear real-time systems including audio feedback loops in the late 1990s. This was inspired by the theories of cybernetic researchers such as Heinz von Foerster, Humberto Maturana and Francisco Varela. [Di Scipio 2019b, 6:00–6:57]
Accordingly, Di Scipio conceived interaction as a network of interdependences between the components of the system. The task was to compose such networks of interactions and to listen to their sound traces:
“This is a substantial move from interactive music composing to composing musical interactions, and perhaps more precisely it should be described as a shift from creating wanted sounds via interactive means, towards creating wanted interactions having audible traces.” [Di Scipio 2003, p. 271]
The group of works entitled Modes of Interference is a further development of ideas previously explored in the four studies belonging to Audible Ecosystemics (2002/05): Impulse Response Study, Feedback Study, Background Noise Study and Background Noise Study, with Mouth Performer.
One fundamental aspect of Audible Ecosystemics is the integration of the real space into the system, along with its cultural resonances and social implications. The system consists of three elements: performer, DSP system and the acoustic environment. The listener is part of the system as well:
“Listeners are a very special kind of external observer or hearer, because their mere physical presence in the room acts as an element of acoustical absorption. Hence they are rather an internal component of the ecosystemic dynamics.” [Di Scipio 2003, p. 274]
The focus is on the multiple relations of the system’s components rather than the complexity of the algorithms. The intended economy of means becomes evident in the use of background noise as the only source of energy. In Audible Ecosystemics
“the role of noise is crucial […]. Noise is the medium itself where a sound-generating system is situated, strictly speaking, its ambience. In addition, noise is the energy supply by which a self-organizing system can maintain itself and develop.”
[Di Scipio 2003, p. 271].
The DSP system is based on feature extraction from the signals of the instrument and the room resonances. The sound has a double function, as a trace of interactions and as a control signal. Control signals are then mapped onto the parameter space. Di Scipio calls the link between the sound and its transformations “adaptive digital signal processing”. Here, sound acts
“not as just a raw material to forge and deliver to listeners, but as the source of dynamical behaviour within the network, i.e. the vehicle of information transferred from any one node to another. Sound then becomes the interface, the medium itself for control and interaction among agencies participating in a performance – either human or machine agencies.”
[Di Scipio 2006, p. 1]
After several chamber pieces, Di Scipio reduced the role of the performer in Audible Ecosystemics to that of an active listener, directly involved only in the fourth study.
In Modes of Interference Di Scipio further explores new modes of agency for performers, mechanical devices, DSP systems and even the audience itself. Modes of Interference /1 for audio feedback system with trumpet and electronics (2005/06) and Modes of Interference /2 for audio feedback system with sax and live electronics (2005/06) are concert pieces and share the same idea and setup, based on the production of Larsen tones and granular transformations. Modes of Interference /3 for autonomous feedback system with electric guitars, combo amps and computer (2007) and Modes of Interference /4 for autonomous feedback system with disassembled combo amplifiers and other analogue electronics (2011) are installations. Here the performer disappears, again acting only indirectly. In Modes of Interference /3 the audience becomes involved by producing acoustical shadows.
In Modes of Interference /2 the player actively interacts with the system:
“The saxophonist enters the loop in a way to explore the margin of maneuver, the range of possibilities he or she has in that system, to make it express a potential that by itself would not be able to express, but at the same time won’t be entirely dependent on you as an external entity.” [Di Scipio 2019b, 20:34–21:05]
The system has both a synchronic and a diachronic dimension: the performer
“is always confronted with situations that are the result of previous interactions.”
[Di Scipio 2019b, 21:42–21:52]
Thus, the performative situation corresponds not only to a generic live performance paradigm (the double feedback loop of action/perception and loudspeaker/microphone) but also to a communication paradigm. Echoing Edgar Morin’s idea of an “ecology of action”, Di Scipio pursues a performative attitude with social implications:
“Although you don’t know exactly how you are going to affect the process, you know you are going to affect it to some extent, so you as a performer, you as a composer, you as a listener, you as a subject living in this world take responsibility for the consequences of your actions. […]” [Di Scipio 2019b, 11:15–11:47]
Nevertheless, with some exceptions Di Scipio’s work consciously remains within the boundaries of the audible, avoiding visual elements and programmatic approaches.
From 1997 onwards, Di Scipio’s scores have contained very precise descriptions of the system, including the algorithms. According to Di Scipio, the score thus became the description of a performance practice [Di Scipio 2019a, 20:26]. The collaboration with the Brazilian saxophonist Pedro Bittencourt, which led to the recording of the piece released in 2015, contributed to the development of this practice.
In Modes of Interference /2 the primary task of the performer, on an operative level, is to allow for the emergence of a stable stream of Larsen tones of various frequencies throughout the piece. These, in turn, are first triggered by background noise or by sound events inadvertently produced by the audience.
At the core of the DSP system is a self-regulating unit called AmpScaler, based on amplitude following of the incoming signal. It controls bandwidth and dynamics by inversely relating input and output levels: if the input level goes up, the bandwidth and gain of the output signal decrease, and vice versa. The system thus prevents itself from producing unwantedly loud and high tones or from collapsing, and reverts to a stable state, provided the right setting of hardware, DSP system and instrument in the given space is achieved.
In addition, the incoming signal is transformed by a unit performing a “cascaded granular resampling”. Its gain is also inversely scaled by the AmpScaler: soft input sounds produce louder grains, and vice versa. The granular scan speed is scaled by a unit called MemScaler, which, in opposition to the AmpScaler, performs direct scaling: soft input sounds produce more static granular textures, while louder input sounds result in “more articulated and gestural” events. [cf. Di Scipio 2006, p. 6]
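The opposition between the two mappings can be summarized in a minimal Python sketch (illustrative only: the names, the normalized 0–1 range and the linear curves are assumptions, not values from the patch):

```python
def amp_scaler(level):
    """Inverse mapping (AmpScaler-like): louder input -> lower control value.
    A linear curve is used here for simplicity."""
    return 1.0 - level  # level normalized to 0..1

def mem_scaler(level):
    """Direct mapping (MemScaler-like): louder input -> higher control value."""
    return level

loud = 0.8                      # a loud input
grain_gain = amp_scaler(loud)   # low gain: loud playing yields softer grains
scan = mem_scaler(loud)         # high scan speed: more articulated textures
```

The same opposition governs the whole system: dense instrumental activity lowers the gain of the granular layer while speeding up its internal motion.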
Modes of Interference /1 and /2 require a “minimal, portable electroacoustic setup” that can be fully controlled by the performer.
The piece is structured in two main sections containing four sub-sections each, marked off by fermatas. Each sub-section contains three sequences with durations of 10, 20 or 30 seconds. Di Scipio calls these 32 sections “time windows” [Score, introduction]. The continuous stream of Larsen tones is represented in the score by an uninterrupted line.
The score specifies the keys used to control the frequencies of the Larsen tones. In the first section the keys are fixed; in the second, the player is free to choose a passage from the saxophone repertoire, or scales, that include the keys specified in the corresponding time window.
Each “time window” prescribes one or several layers of playing techniques such as key noise, whistles, tongue ram, pulse tongue, blowing across the side holes and other actions on the instrument. The instrument is not used to produce any ordinario saxophone sounds.
Towards the end of each section the number of action layers increases, strongly interfering with the Larsen tones, whose stability is challenged by the inverse scaling of the AmpScaler. Since they should sound throughout, the performer needs to find the right degree of activity in order not to “kill” them.
“The feedback tone has two roles: one is providing the basis to go with and the other is monitoring how strong, how inappropriate or how invasive your action in the system [is]. If you realise you are killing it then you understand you went too far, and you recede a little bit.” [Di Scipio 2019b, 26:17–26:45]
The time and layer structure of both sections is identical. The differences lie in the prescribed keys and in the instrumental actions performed at the fermatas. The ranges of the other granular parameters (transposition, pitch deviation and grain size) also move in contrary motion, increasing in the first section and decreasing in the second.
In Modes of Interference /2 the musical form results from the tension between the emergent and fragile nature of the system and a clearly structured temporal and operative framework. This framework delimits the field of action to be explored by the performer. The Larsen tones act as a sensor, audibly informing the performer where the limits of action are. For him or her, listening must remain in the foreground. This mirrors Di Scipio’s attitude as a composer:
“A composer is a listener, but a kind of listener that takes responsibility to act upon that, which he wants to listen to in the sonic world and who takes responsibility for […] bringing forth things that he wants or other people may want to listen to, and share those things with others.” [Di Scipio 2019b, 50:54–51:20]
1) Performance Materials
The score was received as a PDF file from the composer (composer’s own edition). According to the score, it was sketched at ZKM Karlsruhe in April 2006 during preliminary work with Gianpaolo Antongirolami, and revised, likewise at ZKM Karlsruhe, in February 2013 on the occasion of recording sessions with Pedro Bittencourt.
The actual score is two pages long; the individual “time windows” are notated in a space-notation manner. Both the actions to be realised on the saxophone and the electronic procedures are described in the score.
The score also contains a detailed introduction describing the feedback system and its relation to the microphone setup; notes on the technical setup, with directions on placing the microphones; diagrams of the audio signals and directions on sound projection; playing instructions and explanations; as well as notes on realising the electronic part and on the Pd patch.
Source: Agostino di Scipio
Author: Agostino di Scipio
Software: Pure Data
Source: Agostino di Scipio
Author: Agostino di Scipio, update by Leandro Gianini
Date: 27 February 2019
Software: Pure Data
Update by Leandro Gianini: MIDI pedal implementation to trigger fade-in, Part 1, Part 2 and fade-out
File: editMODE2-zhdk v3.pd
Source: Agostino di Scipio
Author: Agostino di Scipio, update by Joan Jordi Oliver
Date: 20 February 2020
Software: Pure Data
Update by Joan Jordi Oliver: Processes 1, 2 and 3 now start only when Part 1 is triggered. In this way the musician has more time to obtain the first feedback.
File: Di Scipio – Joan Edits.pd
2) Other Materials
Pertinent information on patch and performance practice
– [Di Scipio 2006] contains a detailed description of the Pd patch, as written for Modes of Interference /1 and also used for Modes of Interference /2.
– [Bittencourt 2014] contains a detailed account of the performer’s experience with the piece.
3) Reference Recording
Wergo (2015): Enlarge Your Sax. Kompositionen für Saxofon und Elektronik, Pedro Bittencourt (sax.).
1. Editorial Instructions
Equipment and disposition
Two microphones are prescribed to feed the patch, one mounted inside the neck of the instrument and one mounted on the cup. The microphones should preferably be condenser capsules, omni-directional and very sensitive, such as the DPA 4061. Depending on where precisely they are mounted, it may be necessary to use a less sensitive model, as distortion might occur. [cf. Score, p. 4]
The position of the inner microphone is particularly important. The instruction in the score reads: “mic1 (inside the neck, or a little lower, at the beginning of the body pipe, with the capsule facing downwards)”. Pedro Bittencourt emphasises this issue: “Lower sensitivity miniature microphones may be preferred, but much attention must anyway be paid when placing and fixing the microphone within the instrument.” [Bittencourt 2015, p. 52]
The score describes a setup of at least two loudspeakers placed behind the player, forming an equilateral triangle with sides of 3 to 5 metres. The composer recommends nearfield loudspeakers placed at the player’s height. Subwoofers should not be used.
In large venues, four additional speakers for sound reinforcement can be placed at the sides of and behind the audience. Distance ratios and delay times for the additional speakers are provided. These loudspeakers should not be part of the feedback loop.
Figure I. Disposition [Score, Technical setup, p. 4]
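The score’s own distance ratios and delay times should be used. As a hedged illustration of the underlying principle only, the delay that time-aligns a reinforcement speaker corresponds to its extra path length divided by the speed of sound:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def alignment_delay_ms(extra_distance_m):
    """Delay (in ms) that time-aligns a reinforcement speaker placed
    extra_distance_m farther from the listener than the stage speakers.
    Illustrative only: the actual ratios and times are given in the score."""
    return extra_distance_m / SPEED_OF_SOUND * 1000.0

# A speaker 6.86 m farther away needs about 20 ms of delay:
print(round(alignment_delay_ms(6.86), 1))  # 20.0
```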
The score includes an overview of the system and a detailed description of the processing modules. For an implementation in different software, however, the variable values for the envelope of the granulation are not given in the score and must be taken from the Pd patch; they appear only in a screenshot of the patch reproduced in the score [Score, p. 9]. In addition, some of the mathematical transformations used are not described in detail in the score.
The patch operates only at 44.1 kHz. In the granulation modules, some mathematical operations use values expressed in samples. If it is necessary to run the patch at a different sampling rate, these values have to be adapted.
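As an example of such an adaptation, a value given in samples at 44.1 kHz can be rescaled proportionally (a sketch; the actual parameters in the patch and their names differ):

```python
REFERENCE_RATE = 44100  # Hz; the rate the patch is written for

def adapt_samples(n_samples, target_rate):
    """Rescale a parameter given in samples at 44.1 kHz to another
    sampling rate, preserving its duration in seconds."""
    return round(n_samples * target_rate / REFERENCE_RATE)

# A 2-second buffer (as used by the gr2 module) moved to 48 kHz:
print(adapt_samples(88200, 48000))  # 96000 samples
```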
Control signal extraction
In this module the signal of the microphone placed inside the neck of the instrument is transformed to obtain a low-frequency signal that is used as a control signal in other parts of the patch. This parameter is called AmpScaler and varies exponentially between 1 and 0, with 1 corresponding to no input and 0 to maximum amplitude. Its complement is called MemScaler.
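A short sketch can paraphrase this description. The exponential shaping below is an assumption (one plausible reading of the text, not the patch’s actual formula); the exponent argument mimics the patch’s Ctrl_sig_exponent parameter:

```python
def control_signals(amp, exponent=48):
    """Derive the AmpScaler/MemScaler pair from a normalized input
    amplitude (0 = silence, 1 = maximum). The shaping is assumed,
    not taken from the patch."""
    amp = min(max(amp, 0.0), 1.0)
    amp_scaler = (1.0 - amp) ** exponent  # 1 at no input, 0 at full level
    mem_scaler = 1.0 - amp_scaler         # complement value
    return amp_scaler, mem_scaler

print(control_signals(0.0))  # (1.0, 0.0): silence
print(control_signals(1.0))  # (0.0, 1.0): maximum amplitude
```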
Dynamical low-pass filtering and dynamical amplitude scaling
These two modules use the parameter AmpScaler to reduce or control the input signal.
Dynamical low-pass filtering
If the signal is loud, the cutoff frequency of the low-pass filter is decreased; if the signal is soft, it is increased. The cutoff frequency varies between a maximum of 15 kHz and a minimum of 5 kHz.
Dynamical amplitude scaling
The output of the low-pass filter is scaled by this module. If the signal is loud, the gain is reduced; if the input signal is soft, there is no reduction.
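Taken together, the two modules can be sketched as follows (the linear mappings are assumptions; the patch may use different curves):

```python
def dynamic_lowpass_cutoff(amp_scaler, f_min=5000.0, f_max=15000.0):
    """Soft input (AmpScaler near 1) opens the filter towards 15 kHz;
    loud input (AmpScaler near 0) closes it down towards 5 kHz."""
    return f_min + amp_scaler * (f_max - f_min)

def dynamic_gain(sample, amp_scaler):
    """Loud input reduces the gain; soft input passes unchanged."""
    return sample * amp_scaler

print(dynamic_lowpass_cutoff(1.0))  # 15000.0 Hz (no input)
print(dynamic_lowpass_cutoff(0.0))  # 5000.0 Hz (maximum input)
```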
There are three granulation modules: gr8, gr4 and gr2. The numbers refer to the time interval at which the buffer of each module is written/updated: gr8 every 8 seconds, gr4 every 4 seconds and gr2 every 2 seconds. In the performance interface of the patch, Process1 corresponds to gr2, Process2 to gr4 and Process3 to gr8.
Each module has three inlets for variable parameters: frequency, grain size and scan speed. The first and second inlets, controlling frequency and grain size, are set to initial parameter values and then fully automated during the four minutes of each part. Part 1 and Part 2 have different initial values: in Part 1 the pitch transpositions increase and the grain sizes decrease, while in Part 2 the pitch transpositions decrease and the grain sizes increase.
Figure II. [Score, “Other Controls over the Granular Resampling”]
The third inlet, controlling the scan speed, is fed by the MemScaler: if the input signal gets louder, the scan speed gets faster, and vice versa.
The mono output of the granulation modules is also scaled according to the AmpScaler value, then split in two to feed the stereo output. One of the two paths (L or R) is delayed by a short amount (20, 25 or 35 ms).
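The control logic of this stage can be sketched as follows (illustrative only: the names and linear scaling are assumptions, and the actual Pd granular resampling is far more elaborate):

```python
# Buffer update intervals of the three granulation modules, in seconds
# (Process1..Process3 in the performance interface):
GRAIN_MODULES = {"gr2": 2, "gr4": 4, "gr8": 8}

def scan_speed(mem_scaler, base_speed=1.0):
    """Direct scaling: louder input (MemScaler near 1) yields faster,
    more gestural scanning; soft input yields near-static textures."""
    return base_speed * mem_scaler

def stereo_split(mono, amp_scaler=1.0, delay_ms=25, sample_rate=44100):
    """Scale the mono granular output by the AmpScaler value, then feed
    both channels, delaying one by a short amount (20, 25 or 35 ms)."""
    scaled = [s * amp_scaler for s in mono]
    delay_samples = delay_ms * sample_rate // 1000  # 25 ms -> 1102 samples
    return scaled, [0.0] * delay_samples + scaled

left, right = stereo_split([0.5, 0.25], amp_scaler=0.5, delay_ms=25)
```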
The patch operator can change the following parameters:
Ctrl_sig_exponent: This parameter affects the control signal extraction for the AmpScaler and MemScaler. The default is 48; smaller exponents produce lazier (slower) control signal variation.
Gain: This parameter regulates the signal after the auto-gate function and determines the level of the input signal fed into the granular cascade.
Figure III. [Score, Digital signal-processing methods, Overview of the full process]
3. Oral remarks by the composer on performance issues:
Di Scipio emphasises listening as the main action of the performer:
“[…] the first thing to do is listening to what happens around you, then taking action to contribute or to prevent things from happening or to interfere with the process […]”
“I have the saxophonist realise little by little that by way of adding to the process he is actually killing the feedback loop, at least preventing resonances from emerging from the feedback loop. So I don’t make it too clear, well, it is clear, but I don’t stress it too much, because it has to do with the way he or she discovers it through the performance. Eventually the feedback tones could even disappear at the end of […] the two sections. If he is playing a little too much it will disappear. […] If he is aware of what he is doing […] he will eventually adjust himself.” [Di Scipio 2019b, 25:19–26:34]
Additional live electronic performer
While the piece is conceived to be performed autonomously by the saxophonist, who controls the patch on stage, this task can be assigned to a second performer. During the rehearsals, Di Scipio agreed that some gain levels (in this case at the beginning and in the middle of the piece) might be changed by the electroacoustic performer if necessary (ultimately, this was not done) [Di Scipio 2019b, 35:54–36:13]; however, such actions should be carefully rehearsed:
“It’s not like following up every single action with the mixing, it’s more like adjusting to see in the next stretch of […] a few minutes, if that improves or not. It should be within a range of possibilities that have been […] rehearsed before, […] you may not try gain values you have never tried so far in the concert” [Di Scipio 2019b, 36:14–36:38]
“These are overall dynamical signs meant to describe what happens [rather] than to prescribe what should be, because he is using many more techniques and […] the saxophonist is providing more events than at the beginning, so eventually that would be the result.” [Di Scipio 2019b, 37:12–37:33]
Key sequences from literature passages or scales in Part II
“By way of easing and relaxing on the fingering pattern I provide him a solution to focus on more complex things of which he might not have any experience.” [Di Scipio 2019b, 43:48–44:05]
4. Remarks by the composer on the DSP System
Impact of automated bandwidth and gain control [Di Scipio 2006, p. 4]
– To prevent the audio feedback loop from growing “to the point of clipping or distorting.”
– To provide “enough energy in the system across a large frequency range, so that Larsen tones of any frequency can in principle come out”.
– To ensure that “very high frequencies are dampened before they come out too strong (and painful for the ears).”
– “When a perfect balance is found, the process gives way to the emergence of Larsen tones, keeps them at a rather constant amplitude, and eventually lets them fade out slowly.”
Differences in latency between different computers lead to different results; it is possible to adjust time parameters in the patch (e.g. the window time of env~, normally 32 ms) or to adjust the feedback gain [Di Scipio 2006, p. 3].
“If the system dynamics seems [sic] to significantly suffer from latency or delay values, at least the delay lines and the window size in the envelope following should be adjusted, as already suggested above.” [Di Scipio 2006, p. 5]
Possible source of unexpected glitches
The 32 ms latency introduced by the envelope follower can be problematic “if Larsen tones with fast or very fast onset transients occur, as in that case the signal may peak and get distorted for few instants before the self-gating mechanism is driven to scale the gain down enough to dampen the feedback oscillations.” [Di Scipio 2006, pp. 4–5]
5. Remarks by the composer on performance issues
“It is important that you find as many chances as possible to get various Larsen tones, sustained and not peaking, no[t] only when all sax keys are released, but also when some are depressed. When that is fixed, you are in the position to do a good performance” (Di Scipio, personal communication, 23 February 2010) [Bittencourt 2014, p. 57]
“As a general attitude to this piece, you understand that the denser and louder your activity becomes on the sax, the softer the sound texture becomes… in a sense you have to ‘let it go’, and add only as little as possible with keys and/or reed/lips, everything on your side could be like ‘suggesting’ some materials, rather than overtly ‘speaking’ them. Listening and refraining from doing is just as important as doing, and when the score requires action on your side, things you do should not be overstated.” (Di Scipio, personal communication, 23 February 2010) [Bittencourt 2014, p. 52]
6. Remarks by Pedro Bittencourt on performance issues:
“[the soprano sax] is smaller, so it is easily handled without fastening it to the players’ neck (that cuts off some noise that is otherwise difficult to avoid), and the key mechanics are not as noisy as on bigger instruments.” [Bittencourt 2014, p. 55]
“I carefully followed all the fingering indications in my earlier approaches to Modes of Interference n.2. However, it did not always work. Knowing that the different fingerings should allow for a range of different Larsen sounds, I experimented with alternative fingerings, still carefully following all the other score indications as to the number of keys to play with, etc. Results varied, depending on several contextual circumstances. I think there is further work to do concerning the fingering possibilities.” [Bittencourt 2014, p. 55]
“The goal is to allow for as many stable, not peaking Larsen situations as possible (and also as relative to sax fingering positions).” [Bittencourt 2014, p. 52]
“Once the Larsen is there, then the rest of the piece can follow. Note that it is impossible to start the piece from any other point than the beginning, even in recording sessions, because any event in the piece makes sense only in a context that follows from the history of previous events and interactions.” [Bittencourt 2014, p. 54]
2. Performance report
The concert took place on 27 June 2019 at the Zurich University of the Arts, Toniareal, main concert hall (GKS 3) with Joan Jordi Oliver, saxophone, and Leandro Gianini, sound engineer.
Choice of the instrument
In this performance a soprano saxophone was used. During the rehearsals an alto saxophone was also tried but, in line with Bittencourt’s remark (cf. above), the instrument proved much more difficult to handle. The feedback and electronics also reacted better with the soprano saxophone.
Two Genelec 1038 BP speakers were used.
The microphones were a DPA 4061 inside the instrument and a DPA 4099 outside pointing to the inside of the cup.
Six K&F Gravis speakers were set up around the audience (used only during the rehearsals).
The following soundcheck procedure was used:
– Set the patch gain value. We used 0.8.
– Start the patch (fade in)
– Switch all the processes off (1,2,3)
– Set the feedback gain on the preamp
– Use a scale on the instrument to check if equalisation is needed
The feedback should not cover the live electronics all the time but should almost always be audible. The performer should be able to move the instrument to increase or reduce the intensity of the feedback.
It was at first not entirely clear what kind of sound the second microphone should deliver.
It should be used only to slightly amplify the saxophone and should not directly affect the feedback. Nevertheless, the amplification will in any case affect the feedback and the microphone should be placed carefully to avoid too much collateral noise due to the different playing techniques.
Similarly, and contrary to the prescription in the score, the loudspeakers used for sound reinforcement will also affect the feedback. They enhance the acoustical activity of the space and facilitate the feedback process. For this reason, they have to be mixed in carefully. They can be useful for compensating for changing performance conditions, such as audience absorption and changes in temperature and humidity.
Sometimes, random glitches would be audible. They occurred at different times, in soft as well as loud passages. While the cause could not be identified, Pedro Bittencourt indicated that he had also encountered this problem at times. A possible cause is indicated above (cf. Remarks by the composer on the DSP System).
A MIDI pedal was used, allowing the musician to switch between the different sections of the piece without having to use his hands.
No audio monitoring is needed, but it is important for the player to see the patch, or to have a video monitor showing the clock with the time for Parts I and II.
The piece is difficult to rehearse because the electronics react to the room acoustics. It is essential for the player to have enough time for soundcheck and practice at the actual concert venue, as recommended by the composer.
Distance between player and loudspeakers
Different distances between the speakers on stage and the player were tested. In the concert space, where the stage is quite large, five metres proved to be an optimal distance. Taking into account this team’s and the composer’s experiences, the distance will generally have to be greater with large speakers and smaller with small speakers.
Remarks by Joan Jordi Oliver
Agostino Di Scipio’s work Modes of Interference / 2 challenges the performer to completely redefine his or her role in the execution of a piece of music. Whereas most pieces demand the active participation of instrumentalists to directly generate the sound materials and shape the musical discourse, Di Scipio places them in a completely new situation, still participative and relevant, but no longer the essential source of the music. Instead, the performer is asked to be part of an ecosystem in which every single component (the instrument, the computer and its electronic processes, the equipment used, the ambience, and the objects and bodies that occupy the space of the performance) is equally important and necessary, and communicates with every other element that, intentionally or unintentionally, will shape the development of the piece.
Understanding our position as performers in this system of relationships is not as simple as it might seem. Our training as instrumentalists tends to make us acutely conscious of our responsibility in the execution of a piece, of the necessity to achieve the highest degree of control our abilities allow. As the composer mentioned on several occasions, this is not the case in Modes of Interference / 2. Not only is absolute control of the sound parameters impossible, since external factors like the ambience and the DSP system will shape the piece beyond the control of the performer, but any insistence by the instrumentalist on making their actions predominant, or on leading the musical discourse, might adversely affect the development of the piece.
In my case, understanding the whole philosophy behind the piece was the real challenge. During the first stages of the rehearsal process I tried to adjust my understanding of the piece to the conventional demands of most of the music I have performed. I tried to imagine some sort of hidden dramaturgy behind the piece, and I invested all my technical actions on the instrument with some sort of performative presence in order to make the piece as performative as possible. It took some time for me to understand the necessity to step back, to reconsider my function within the work and, especially, to accept my role as another agent inside a system of relationships where there is not supposed to be a predominant element.
This acceptance challenges the performer to understand interpretation in a completely different way. As Di Scipio stresses, listening is the most essential activity for the instrumentalist: participating inside the system, not forcing its development in a concrete direction, but accompanying it and subtly shaping it through the indications in the score – presented not in the traditional, prescriptive way but rather as a set of possibilities that the performer has to use according to their intuition and listening in every given moment of the piece. The performance of Modes of Interference / 2 is in this sense extremely challenging, as it demands constant attention, sensitivity, intuition and a great deal of flexibility in a piece that, performance after performance, will never turn out the same way.
Indeed, the degree of control the instrumentalist has over their main actions (controlling the feedback tones and the instrumental actions) will also vary according to the equipment used and the space where the piece is performed. The experience of trying the piece in different spaces accentuates the necessity of this flexibility, and of understanding the importance and function of every technical element: the placement of the microphones, the model and distance of the speakers, and the resonant frequencies of the room, which will dramatically alter the results of the different fingerings used in the piece. The work cannot be learned in such a way that it can be reproduced identically in any other space. Instead, it has to be learned almost as a training that keeps the performer sensitive and flexible with respect to all the conditions of a new performance, since any alteration of any component of the system will affect the result and determine a successful or unsuccessful performance of the piece.
Anderson, Christine (2005): Dynamic Networks of Sonic Interactions: An Interview with Agostino Di Scipio. In: Computer Music Journal, 29 (3), pp. 11–28.
Bittencourt, Pedro (2015): Interpretation musicale participative – La médiation d’un saxophoniste dans l’articulation des compositions mixtes contemporaines. PhD thesis (unpublished), Université Paris 8 – Vincennes – Saint-Denis.
Bittencourt, Pedro (2014): The Performance of Agostino Di Scipio’s Modes of Interference / 2: A Collaborative Balance. In: Contemporary Music Review, 33 (1), pp. 46–58.
Di Scipio, Agostino (2019a): Talk. Held at ZHdK, 26 June 2019.
Di Scipio, Agostino (2019b): Interview. Conducted by Germán Toro Pérez, 28 June 2019.
Di Scipio, Agostino (2018): Dwelling in a field of sonic relationships. In: Sallis, Friedemann; Bertolani, Valentina; Burle, Jan & Zattra, Laura (eds): Live-electronic Music. Composition, performance and study. London & New York: Routledge, pp. 17–45.
Di Scipio, Agostino (2014): A Constructivist Gesture of Deconstruction. Sound as a Cognitive Medium. In: Contemporary Music Review, 33 (1), pp. 87–102.
Di Scipio, Agostino (2006): Using PD for Live Interactions in Sound. An Exploratory Approach. In: Proceedings of the 4th International Linux Audio Conference 2006, ZKM Karlsruhe. Online: http://lac.zkm.de/2006/abstracts.shtml#agostino_di_scipio [last accessed 28 February 2020]
Di Scipio, Agostino (2003): Sound is the interface—from interactive to ecosystemic signal processing. In: Organised Sound, 8 (3), pp. 266–277.
Döbereiner, Luc (2014): Resonances of Subjectivity: Hegel’s Concept of Sound and Di Scipio’s Audible Ecosystemics. In: Contemporary Music Review, 33 (1), pp. 19–30.
Green, Owen (2014): Audible Ecosystemics as Artefactual Assemblages: Thoughts on Making and Knowing Prompted by Practical Investigation of Di Scipio’s Work. In: Contemporary Music Review, 33 (1), pp. 59–70.
Meric, Renaud & Solomos, Makis (2014): Analysing Audible Ecosystems and Emergent Sound Structures in Di Scipio’s Music. In: Contemporary Music Review, 33 (1), pp. 4–17.
Schröder, Julia H. (2014): Emergence and Emergency: Theoretical and Practical Considerations in Agostino Di Scipio’s Works. In: Contemporary Music Review, 33 (1), pp. 31–45.
Solomos, Makis (2013): De la musique au son. L’émergence du son dans la musique des XXe-XXIe siècles. Rennes: Presses Universitaires de Rennes.
Zattra, Laura (2014): Points of Time, Points in Time, Points in Space. Agostino Di Scipio’s Early Works (1987–2000). In: Contemporary Music Review, 33 (1), pp. 72–85.