Previous Master’s Research Projects

Guzmán Calzada Llorente (2018–2020)

Musical Explorations Through Spaces

With site-specific locations in mind, I am expanding the aural conception of what a room is and how it operates. As a general strategy, I plan to understand acoustic spaces as energetic places: locations with inherent autobiographies that can be manifested by articulating and uttering their resonances. This primarily occurs when working with room reverberation and electromagnetic activity, and even by exciting objects that inhabit a room through different vibrational methods. Within this framework, a piece of music may stand as a sort of adaptive sculpture, articulating a room’s history.

One branch of my research focuses on electroacoustic pieces for specific venues, working with audio sources that trigger fixed oscillators when certain coincidences occur in their frequency spectra. These electronic instruments, or a particular audio source, are related to a venue by expressing a perspective on both its acoustic-physical properties and its poetic dimensions. Another branch of my work involves approaching the process described above by filtering and re-synthesizing an original audio source, where the transformations of a musical or audio source can be understood through the way a room affects them.
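
As a minimal sketch of this triggering mechanism (in Python; the oscillator frequencies, tolerance and threshold are illustrative assumptions, not the actual implementation), an incoming frame is analysed with an FFT, and a fixed oscillator counts as excited whenever a sufficiently strong spectral peak lands near its frequency:

import numpy as np

SR = 44100
FFT_SIZE = 16384                       # ~2.7 Hz resolution at 44.1 kHz
OSC_FREQS = [61.5, 123.0, 245.7]       # fixed oscillators, e.g. tuned to room modes
TOLERANCE_HZ = 3.0                     # how close a peak must be to 'coincide'
THRESHOLD = 0.01                       # minimum relative peak magnitude

def coincident_oscillators(frame):
    """Return the fixed-oscillator frequencies 'excited' by this audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame.size)))
    spectrum /= spectrum.max() + 1e-12
    freqs = np.fft.rfftfreq(frame.size, 1.0 / SR)
    excited = []
    for f_osc in OSC_FREQS:
        band = np.abs(freqs - f_osc) < TOLERANCE_HZ
        if spectrum[band].max(initial=0.0) > THRESHOLD:
            excited.append(f_osc)
    return excited

# A test tone near the second 'resonance' excites only that oscillator.
t = np.arange(FFT_SIZE) / SR
frame = 0.5 * np.sin(2 * np.pi * 123.4 * t)
print(coincident_oscillators(frame))   # -> [123.0]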

Practically, investigating this will involve realizing several solo and ensemble pieces — ones that emerge directly from different ways of filtering and re-synthesizing audio and also graphic sources (e.g. scores). Overall, I expect this project to address how understanding a space can reveal many different spheres of meaning.

Eunji Kim (2018–2020)

A Game Environment as Algorithm to Generate a Musical Structure

Given that computer games are increasingly highlighted as a platform for interactive media, I am conducting research that examines connections between the hierarchical nature of such games and many examples of algorithmic art. Algorithms used in computer games reveal a hierarchy, in which the control of a system manages objects and records data associated with these objects. In short, I see a similarity between many of the algorithms used in games and those I use to make my own algorithmic music. Thus, I believe it is possible to take the systems used in a game and turn them into a musical composition, thereby allowing musical parameters to be derived from the numeric data of game objects.

My compositional approach uses algorithms to move beyond the fixed idea of the musical work. The algorithm, seen through an art game, presents a model that determines how to generate sound structures in real time. Designing musical structures in this way means that one algorithm can make thousands of potential choices. It does, however, mean that the type of data used as input has a massive impact on the generated music. Also, when using rapidly changing data, it is necessary to translate the movements of the data into appropriate sonic movements (a type of data processing similar to that found in sonification).
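
A minimal sketch of the kind of mapping this implies (the ‘game’, its object and the chosen mappings are all hypothetical): numeric state such as position and velocity becomes pitch, pan and loudness:

# A minimal, hypothetical 'game object' whose numeric state drives the music.
class Ball:
    def __init__(self):
        self.x, self.y = 0.5, 0.5      # normalised position
        self.vx, self.vy = 0.01, 0.007

    def step(self):
        self.x += self.vx
        self.y += self.vy
        if not 0 <= self.x <= 1: self.vx = -self.vx   # bounce off the walls
        if not 0 <= self.y <= 1: self.vy = -self.vy

def to_music(ball):
    """Map object state to musical parameters (one of many possible mappings)."""
    pitch = 48 + round(ball.y * 24)            # height -> MIDI pitch, C3..C5
    pan = ball.x                               # horizontal position -> stereo pan
    speed = (ball.vx**2 + ball.vy**2) ** 0.5
    amp = min(1.0, speed * 40)                 # velocity -> loudness
    return {"pitch": pitch, "pan": round(pan, 2), "amp": round(amp, 2)}

ball = Ball()
for _ in range(4):          # four frames of the 'game' -> four musical events
    ball.step()
    print(to_music(ball))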

I am additionally trying to deal with data that is not part of commercial games. For instance, I want to visualize various types of data that can be encountered in real life, designing a model that allows users to play with data and convert it into artistic output. The aim is to allow the user/audience to interact more actively with musical works, making it possible to manipulate music by adding ‘time’ as a dimension. This mode of appreciation can be extended further by adding a dimension of ‘direct experience’, whereby the audience (user) can directly manipulate the music.

Michael Kraus (2019–2020, double degree with TU Berlin)

Solarsonics – A Theoretical and Artistic Investigation

My research raises the question of how solar energy can be used as a contemporary leitmotiv for sound creation. Smallwood (2011) describes recent developments in the creation of sound art powered by photovoltaic technologies, and Smallwood & Bielby (2013) refer to solarsonics as a pattern of ecological praxis. Taking into consideration the latest developments in the climate crisis, philosopher Michel Serres describes the sun as our energetic horizon and as the ultimate capital (ngbk, 2014).

Therefore, I investigate how morphing forms of solar energy can be used to articulate this relationship in the age of the Capitalocene and climate change. Donna Haraway (2016) suggests the Chthulucene, and David Schwartzman suggests “Solar Communism”, as seminal directions for our societies. Within my work I draw connections between them and align aspects of them with Vilém Flusser’s (2000) model of human communication.

A leitmotiv is used here as a metaphor for human communication and thus needs to be interpreted. It is grounded on a deeper level of understanding than a paradigm. It should encourage sonic experiments that do not reduce to purely human categories and rational thought, but that inherit the traditions of storytelling, meditation and ecstatic encounters, and that place human practices as belonging among myriad others.

Keywords: Solarsonics, Energy, Communication, Ecology, Experimental Futures 

Flusser, V. (2000) Kommunikologie 
Haraway, D. (2016) Staying with the Trouble – Making Kin in the Chthulucene 
ngbk (2014) The Ultimate Capital is the Sun 
Smallwood, S. (2011) Solar Sound Arts: Creating Instruments and Devices Powered by Photovoltaic Technologies 
Smallwood, S. & Bielby J. (2013) Solarsonics: Patterns of Ecological Praxis in Solar-powered Sound Art

Toby Kruit (2018–2020, Instruments & Interfaces)

Bodily Awareness in Electronic Music Performance

As a musician working with digital electronics, there comes a point in the creative process where expression has to be quantified in order to enter a digital model. Composing interactions with computers/software traditionally (and possibly inevitably) resorts to ‘mapping’: connecting signals from outside the system to functions in code. However, the digital is inherently limited and discrete, in such a way that any interaction with it is oppressed by approximation – using digital media for music means translating ‘human being’ to ‘computer data’, and back.

The focus of my research is on how this translation can be approached from outside the digital oppression. Taking the human as the starting point for human-computer interaction, I am working on methods of interaction that are based on material and bodily properties. The goal is to evoke states of enhanced bodily awareness, placing participants in a ‘moment’ that is concerned not with representation but with experience. Existing in this sensorial state means simultaneously accepting all stimuli as a continuous, chaotic signal from the world to the body, and continuously (re-)adjusting the body based on the perceived forms of these stimuli.

Because traditional hardware and software are based on models and discretisation, they may face problems in quantifying the noisy, imprecise, reflexive, semi-automatic conditions of the human body. The challenge of picking up or amplifying these profoundly human movements presents opportunities in all domains of the interaction: the performer’s body, material properties, electronic signals, and digital conversions. Practically, this involves making electronic textiles and fabric sensors, activating physical materials using transducers, and involving the whole body in performance.
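
As a rough illustration of the signal side of this, a minimal sketch in which a simulated noisy stream (standing in for a fabric stretch sensor; in practice the values would arrive from a microcontroller) is smoothed into a usable continuous control:

import numpy as np

rng = np.random.default_rng(0)

# Simulated fabric-stretch sensor: a slow bodily gesture buried in noise.
t = np.linspace(0, 4, 400)                       # 4 s at 100 Hz
gesture = 0.5 + 0.4 * np.sin(2 * np.pi * 0.5 * t)
raw = gesture + rng.normal(0, 0.08, t.size)      # noisy, 'imprecise' body signal

def smooth(signal, alpha=0.05):
    """One-pole (exponential) smoothing: keeps the gesture, drops the jitter."""
    out = np.empty_like(signal)
    acc = signal[0]
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)
        out[i] = acc
    return out

control = smooth(raw)               # usable as a continuous synthesis control
print(raw[:3].round(3), "->", control[:3].round(3))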

Simone Sacchi (2018–2020)

“Can you hear that?” — Amplifying Discreet Sounds for Live Performances and Installations

Technology allows us to extend the limits of the human senses, and my research aims to give the listener a new perspective on what can be perceived aurally. In essence, my work builds on this interest by exploring soundscapes generated by electromagnetic fields. My current research extends this by bringing the act of hearing into the realm of the “microscopic”, rescaling the amplitude of hidden sounds, from the almost imperceptible to the truly inaudible.

My present work falls into categories where I work with a variety of materials, living beings and mechanical objects (e.g. animals, humans and plants, or machinery such as recording devices and studio equipment). Additionally, I plan to work with musicians who will prepare performances for “mute instruments”, in which only the tiniest sounds of the instruments and of the performers’ movements (or their bodies) are amplified. This approach gives the audience an idea of what happens inside an instrument, even while it is not being played.

The micro-scale in sound is usually attributed to the time domain, yet there is another side of this scale to consider; in this respect my work originates from sounds that are on the verge of the audible, encouraging the listener to hear fainter and fainter sounds through different and unorthodox technological approaches to microphone placement and amplification. Consequently, my work also focuses on minimising the noise floor and avoiding unwanted feedback in order to explore different ambiences and materials.

I plan to use the above strategies for installations, where discreet sounds occur all the time without our senses being able to perceive them. In this sense, installations can be understood as sound lenses for examining intimate worlds we cannot normally access. For example, a miniature anechoic chamber may act as a controlled environment from which sounds can be projected into the external world.
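
As a sketch of one standard tactic for the feedback problem mentioned above (the approach and values are illustrative, not necessarily those used in the project): find the loudest spectral component in a block and notch it out before it starts to ring:

import numpy as np
from scipy.signal import iirnotch, lfilter

SR = 48000

def suppress_feedback(block, fs=SR, q=30.0):
    """Find the loudest spectral component and notch it out: a crude version
    of the feedback suppression needed when amplifying very quiet sources."""
    spectrum = np.abs(np.fft.rfft(block))
    freqs = np.fft.rfftfreq(block.size, 1.0 / fs)
    f_peak = freqs[spectrum[1:].argmax() + 1]     # skip the DC bin
    b, a = iirnotch(f_peak, q, fs=fs)
    return lfilter(b, a, block), f_peak

# Example: a quiet texture with a loud 1 kHz 'howl' building up.
t = np.arange(SR) / SR
block = 0.01 * np.random.randn(SR) + 0.5 * np.sin(2 * np.pi * 1000 * t)
cleaned, f = suppress_feedback(block)
print(f"notched {f:.0f} Hz")        # -> notched 1000 Hz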

Jad Saliba (2018–2020)

Stations of Exception: Revisiting Analog Radio for Live Performances

My present research primarily builds upon experiments with circuit-bending radios. A central focus of this involves a live performance setup made out of several receivers and transmitters continually interfering with one another to generate new sounds. This includes discovering new random combinations of inharmonic tones whose frequency spectrum shifts completely when modifying the tuning frequency of a given radio as well as employing micro samples of local radio broadcasts. However, these sonic processes or results are not solely dependent on the idiosyncratic elements of a circuit, as electromagnetic transmissions in the microenvironment inherently have an effect.

My initial reason for wanting to use circuit bending was the unpredictability of the radio artifacts that emerge in the frequency band between stations. Likewise, the intricate sound patterns evolving from static noise frequently feature abstract voices in the background. These qualities are often heard on shortwave transmissions, long-distance AM broadcasts, and other types of satellite radio broadcasts. However, tuning into these frequencies/artifacts is largely dependent on ecological factors such as weather, natural sunlight and electromagnetic interference, as these factors all shape the final result of the information received.

Ernests Vilsons (2018–2020)

Well-Structured Vocalisations: An Attempt to Imitate Birdsong

Birds. Thousands of different species, their songs and calls varying in kind and complexity. Both within the individual acts of vocalization and in the way these vocalizations succeed one another, patterns can be observed. Bird vocalizations are produced within, and influenced by, their immediate environment – flora, fauna, light, wind, etc. – and they are a means of communication. But from the recognition of bird vocalizations as fascinating sonic structures to a composition of sound and its organization derived from them, a series of intermediary steps must be taken. My research is concerned with these ‘intermediary steps’ as much as with the bird vocalizations themselves.

The research originates within the aural, within the experiential. Hearing as a mode of being, from which a network of relations unfold to become re-contextualised, taken apart, qualified. Through this unfolding, the aural — the fleeting origin — is exceeded while being preserved in the unfolded; a temporary move away from hearing/listening to their ‘product’ — the actions (analysis, classification, re-synthesis, etc.) and material reconfigurations (recordings, scores, programs, etc.) they instigate.

Analysis and formalization act as a reduction of sound (birdsong) to a limited number of parameters; a reduction that eventually determines the synthesis of the imitation. The parameter space, shaped by, yet not limited to, that which is analyzed and formalized, provides the possibility of gradated movement away from the object of analysis (a specific birdsong) toward sound structures situated anywhere between close resemblance to the object represented and complete non-resemblance.
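
A minimal sketch of such gradated movement (the parameter set and its values are invented for illustration): a single factor x moves a synthesised ‘syllable’ from rough resemblance to a birdsong towards an arbitrary, non-resembling target:

import numpy as np

SR = 44100

def chirp(params, dur=0.15):
    """Synthesise one 'syllable' from a small parameter set."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    f = params["f_start"] + (params["f_end"] - params["f_start"]) * t / dur
    phase = 2 * np.pi * np.cumsum(f) / SR
    env = np.sin(np.pi * t / dur) ** params["env_shape"]   # arched amplitude
    return env * np.sin(phase)

# Analysed (hypothetical) birdsong syllable vs. an arbitrary 'non-bird' target.
bird     = {"f_start": 4200.0, "f_end": 2800.0, "env_shape": 2.0}
abstract = {"f_start": 300.0,  "f_end": 9000.0, "env_shape": 0.3}

def interpolate(a, b, x):
    """x = 0 -> close resemblance, x = 1 -> complete non-resemblance."""
    return {k: (1 - x) * a[k] + x * b[k] for k in a}

syllables = [chirp(interpolate(bird, abstract, x)) for x in (0.0, 0.5, 1.0)]
print([len(s) for s in syllables])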

Through the research, a limit of imitation is pursued. A double endeavour: a striving to become birdsong without becoming a bird, and a reflection on the abundance of by-products (ideas, experiences, insights, etc.) this striving generates.

Anna-Lena Vogt (2018–2019, double degree with TU Berlin)

Domestic Spatial Investigations: Expanding Perception of Space through Experiments with Sound

Architecture and building acoustics pursue acoustics as a functional objective in the design process, but not the auditory aesthetic quality of a space. This is partly because there are no suitable design tools for this process, as existing tools focus mainly on visual analogies. With this in mind, my research focuses on finding a method that allows me to record and present the perceived auditory environment and to integrate it into my design and artistic practice. The interiors of apartments are of particular interest to me, since the invisible encounters we have made there over the years accompany and influence us in our daily experience of these spaces. Through studies of intimate auditory instants, conclusions can be drawn about the general vernacular experience.

As an approach to spatial sound investigations, experiments are conducted in three different apartments to explore 1) how to observe and capture the aural experience, 2) which aural qualities occur in a given space, 3) which aural categories recur between the apartments, and 4) how to reproduce the experience and bring the aural to the foreground.

The decisive factor in answering these questions is a phenomenological approach that places the experiencing body at the center of perception, creating a dialogue with the everyday situations found in the living spaces. We are physically influenced by and actively change these spaces, but we are not aware of the manners in which this occurs. In order to approach the horizon of our experience and to sharpen our consciousness, this study layers complementary methods: in-situ listening to become aware of what shapes the local, taking on different body positions (walking, standing, sitting and lying) to get close to the everyday instances in the apartments, parallel sound recording through binaural techniques to stay as close as possible to the experience in context, and writing down accounts to capture the moments.

These experiments with everyday auditory living situations are a preliminary step towards making spatial ambiances tangible. They shape awareness and enable the appropriation of visual design methods. The outcome is a collection of the acquired daily aural situations, to be presented in an installation.

Keywords: Atmosphere, Sonic Effects, Sound, Space, Architecture


Laura Agnusdei (2017–2019)

Combining Wind Instruments and Electronics within the Timbral Domain

My research at the Institute of Sonology focuses on composing with wind instruments and electronics. The starting point is my background as a saxophone player, and my compositional process aims to enhance the timbral possibilities of my instrument while still preserving its recognisability. In line with my artistic interest in blurring the perceptual difference between acoustic and electronic sounds, I process acoustic sounds from my instrument using digital software like SPEAR, CDP and Cecilia, carefully selecting procedures based on analysis and re-synthesis techniques.
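
The generic shape of such an analysis/re-synthesis procedure can be sketched outside those tools; a minimal, illustrative example (not the actual SPEAR/CDP workflow) transforms to the spectral domain, alters the spectrum, and re-synthesises:

import numpy as np
from scipy.signal import stft, istft

SR = 44100

def spectral_filter(x, keep=lambda f: f < 3000):
    """Analyse, alter the spectrum, and re-synthesise: the generic shape of
    an analysis/re-synthesis procedure."""
    f, t, Z = stft(x, fs=SR, nperseg=2048)
    Z[~keep(f), :] = 0          # e.g. keep only the lower partials
    _, y = istft(Z, fs=SR, nperseg=2048)
    return y

# Stand-in for a saxophone sample: a tone with a few partials.
n = np.arange(SR) / SR
sax = sum(np.sin(2 * np.pi * 220 * k * n) / k for k in (1, 2, 3, 4, 5))
processed = spectral_filter(sax)
print(processed.shape)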

My musical points of reference are chosen from different contexts – such as free jazz, electroacoustic composition and experimental rock – and my interest in timbre has also encouraged me to explore many extended techniques on my instrument. Additionally, when composing for saxophone I consider the place the instrument occupies in music history, straddling influences between African-American culture, pop music and contemporary classical music. Aside from the saxophone, I also plan to explore the timbral peculiarities of other wind instruments and to use their sonorities in a personal way.

More precisely, I am interested in working with acoustic instruments because their sound can be translated into the electronic domain. My research will therefore take place in the studio as well as in live performance, and the final outcome of my studies should incorporate the discoveries of both. Moreover, in my improvisational practice (both solo and in groups) I want to expand my research by combining live-processed sounds with purely acoustic ones, as well as by experimenting with different amplification techniques.

Görkem Arıkan (2017–2019, Instruments & Interfaces)

ARMonic: A Wearable Interface For Live Electronic Music 

In live computer music, many notable works have been created that utilize new sound sources and digital sound-synthesis algorithms. However, in live computer music concerts we may encounter a lack of visual/physical feedback between the musical output and the performer, since looking at a screen does not convey much to an audience about a performer’s actions. Therefore, in my concerts I have been looking for ways to minimize the need to look at the screen by using various kinds of setups, largely those consisting of MIDI controllers, self-made physical sound objects, and sensor-based interfaces that transform physical acts into sound.

With this in mind, from the outset of my research my goal has been to build a performance system that can be interacted with through body movements. “ARMonic” is a performance piece I created during my study and continue to develop. Making the piece is a pursuit of an unconventional performance vehicle that lets me explore my inner world and learn new ways of expressing myself through sound and movement. At the same time, one of my main concerns is to remain active in social music-making besides solo shows, ranging from various improvised-music formations to written pieces.

In the written part of my research, I explain the technical details of my work and discuss personal considerations regarding the preparation and after-effects of my performance practice. In addition, I build on the works of Jacques Attali (Noise: The Political Economy of Music) and Johan Huizinga (Homo Ludens), which are my guides for the philosophical and socio-economic thinking in my work.

Matthias Hurtl (2017–2019)

DROWNING IN ÆTHER
software-defined radio – a tool for exploring live performance and composition

In my practice I am often fascinated by activities happening in outer space. Currently, I am interested in the countless signals and indeterminate messages from the many man-made objects discreetly surrounding us. Multitudes of satellites transmit different rhythms and frequencies, spreading inaudible and encoded messages into what was once known as the æther. Whether we hear them or not, such signals undeniably inhabit the air all around us. FM radio, WiFi, GPS, cell-phone conversations: all of these radio waves remain unheard by human ears as they infiltrate our cities and natural surroundings. Occasionally, though, such signals are heard by accident, emerging as a ghostly resonance, a silent foreign voice, or as interference in our hi-fi systems. Yet aside from these accidental occurrences, tuning into these frequencies on purpose requires a range of tools, such as FM radios, smartphones, wireless routers and navigation systems.

Presently, my research at the Institute of Sonology involves placing machine transmissions into a musical context, exploring what inhabits the mysterious and abstract substance once referred to as æther. This exploration fundamentally delves into how one might capture these bodiless sounds in a tangible system, so that they can be transformed into sound sources, like an oscillator or any other noise source. Additionally, I employ methods of indeterminacy: a methodology assisting the emergence of unforeseeable outcomes. Specifically, this includes using external parameters that engage with chance and affect details or the overall form of my work.

Principally, this research project will focus on grabbing sound out of thin air and using it in a performative setup or within a composition. I expect this to involve capturing signals that are as concealed as possible, finding methods for listening to patterns in the static noise, and using tools to generate sound from bursts of coded or scrambled signals.
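
As a hedged illustration of what this can look like in code, assuming an RTL-SDR dongle and the pyrtlsdr package (the frequency and the crude AM demodulation are chosen purely for the example):

import numpy as np
from rtlsdr import RtlSdr
from scipy.signal import decimate

sdr = RtlSdr()
sdr.sample_rate = 2.048e6
sdr.center_freq = 137.1e6            # e.g. near the weather-satellite band
sdr.gain = 'auto'

iq = sdr.read_samples(256 * 1024)    # complex baseband samples from the air
sdr.close()

envelope = np.abs(iq)                # crude AM demodulation
envelope -= envelope.mean()
# ~2.048 MHz -> ~48.8 kHz, in two stages to keep the decimation filters stable
audio = decimate(decimate(envelope, 7), 6)
audio /= np.abs(audio).max() + 1e-12 # normalise before playback / processing
print(audio.shape)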

Slavo Krekovic (2017–2019, Instruments & Interfaces)

An Interactive Musical Instrument for an Expressive Algorithmic Improvisation

The aim of this research is to explore design strategies for a touch-controlled hybrid (digital-analogue) interactive instrument for algorithmic improvisation. In order to achieve structural and timbral complexity in the resulting sound output while maintaining a high degree of expressiveness and intuitive ‘playability’, I will examine the possibilities of simulating complex systems that draw inspiration from natural systems, and of influencing them externally via touch-sensor input. The sound engine should take advantage of the specific timbral qualities of a modular hardware system, with an intermediate software layer capable of generating complex, organic behaviour, influenced in real time by touch-based input from the player. The system should manifest an ongoing approach of finding the balance between deterministic and more chaotic algorithmic sound generation in a live performance situation.

The research focuses on the following questions: What are the best strategies for designing a ‘composed instrument’ capable of autonomous behaviour while at the same time being responsive to external input? How can the limitations of traditional parameter control of hardware synthesizers be overcome? How can the deterministic and more unpredictable attributes of a gesture-controlled interactive music system be balanced for an expressive improvisation performance? The goal is to use the specific characteristics of various sound-generation approaches but to push their possibilities beyond the common one-to-one parameter-mapping paradigm, thus allowing more advanced control leading to interesting musical results.
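
One way past the one-to-one paradigm, sketched minimally (inputs, parameter names and weights are illustrative): a small weight matrix fans a few gestural inputs out to many synthesis parameters at once:

import numpy as np

# Three touch inputs fan out to six synthesis parameters through a weight
# matrix: one gesture moves several parameters at once, in different amounts,
# instead of the usual one-knob-one-parameter control.
inputs = np.array([0.7, 0.2, 0.9])          # x, y, pressure (normalised)

weights = np.array([                        # rows: parameters, cols: inputs
    [0.9, 0.0, 0.1],    # filter cutoff   mostly follows x
    [0.0, 1.0, 0.0],    # grain size      follows y
    [0.3, 0.3, 0.4],    # feedback        blends all three
    [0.0, 0.2, 0.8],    # amplitude       mostly pressure
    [0.5, 0.5, 0.0],    # modulation idx  x and y together
    [0.1, 0.0, 0.9],    # chaos amount    mostly pressure
])

params = weights @ inputs                   # few-to-many, in one step
for name, v in zip(["cutoff", "grain", "feedback", "amp", "mod", "chaos"], params):
    print(f"{name:8s} {v:.2f}")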

Hibiki Mukai (2017–2019)

An Interactive and Generative System Based on Traditional Japanese Music Theory 

Notation in western classical music originated from the need for scores to be in a form that was easy for the general public to understand and that could be passed on to future generations. Since then, this western notation system has been widely used as a universal language throughout the world. However, its popularity does not mean that no important sonic information is lost. In contrast, most traditional Japanese music was handed down to the next generation orally, but it was also accompanied by original scores that conveyed subtle musical expressions. These ‘companion scores’ were written with graphical drawings and Japanese characters.

In my research at the Institute of Sonology, I plan to reinterpret traditional notation systems of Japan (i.e. Shōmyō 声明 and Gagaku 雅楽) and to design real-time interactive systems that analyse relationships between these scores and the vocal sounds made by performers. In doing this, I plan to generate a score that exists in digital form. This will allow me to control parameters such as pitch, rhythm, dynamics and articulation by analyzing intervals in real time. Furthermore, this research will culminate in using this system to realize a series of pieces for voice, western musical instruments (e.g. piano, guitar, harp) and live electronics.
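
A minimal sketch of the interval-analysis step (the pitch detector is a crude stand-in for a real tracker, and the mapping from interval to a generative parameter is purely illustrative):

import numpy as np

SR = 44100

def detect_pitch(frame, fmin=80.0, fmax=1000.0):
    """Rudimentary autocorrelation pitch detector (stand-in for a real tracker)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    lo, hi = int(SR / fmax), int(SR / fmin)
    lag = lo + ac[lo:hi].argmax()
    return SR / lag

# Two sung tones, roughly a perfect fourth apart.
t = np.arange(2048) / SR
tone_a = np.sin(2 * np.pi * 220.0 * t)
tone_b = np.sin(2 * np.pi * 293.7 * t)

interval = 12 * np.log2(detect_pitch(tone_b) / detect_pitch(tone_a))
print(f"interval: {interval:.1f} semitones")   # ~5: a fourth
# The interval, rather than the absolute pitch, then drives the generated
# material, e.g. wider intervals -> sparser events (an invented mapping):
density = 1.0 / (1.0 + abs(interval))
print(f"event density: {density:.2f}")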

Overall, I believe it is possible to adapt this traditional Japanese music theory to a western system by using electronics as an intermediary that processes data. In addition, by re-inventing traditional Japanese notation I expect it to become easier to access expressive ideas within western notation. Using this type of score, I also aim at extending many of the techniques of western musicians and composers, designing interactive relationships between instruments, the human voice, and the computer.

Yannis Patoukas (2017–2019)

Exploring Connections Between Electroacoustic Music and Experimental Rock Music of the Late 60s and Early 70s

From the late 60s until the late 70s, the boundaries of music genres and styles, including popular music, were blurred. This situation resulted from numerous rapid socio-economic changes and an unprecedented upheaval in the music industry. Rock music, later renamed “progressive rock” or “art rock”, separated itself aesthetically from “mass pop” and managed to blend art, avant-garde and experimental genres into one style. Also, the shift from merely capturing a performance to using the recording as a compositional tool led to increasing experimentation that created many new possibilities for rock musicians.

Undoubtedly, many bands were aware of the experiments in electroacoustic music of the 1950s (elektronische Musik, musique concrète) and drew influences from the compositional techniques of avant-garde and experimental composers. However, many questions arise about why and how art rock was connected to experimental and avant-garde electroacoustic music, and about whether it is possible to trace common aesthetic approaches and production techniques between the two genres.

My intention during my research at the Institute of Sonology is to elucidate and exemplify possible intersections between experimental rock and the field of electroacoustic music, especially in terms of production techniques and aesthetic approaches. The framework of this research will include a historical overview of the context from which experimental rock emerged, exploring why and how certain production techniques were used in rock music at that time, and investigating whether the aesthetic outcome of these techniques relates to experiments in the field of electronic music.

Parallel to this theoretical research, I also plan to attempt a reconstruction of some production techniques which I will explore for the sake of developing my own aesthetic and compositional work. The driving force behind this type of reconstruction will include exploring tape manipulation, voltage control techniques, and their application to different contexts (such as fixed media pieces, live electronics and free improvisation).
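
As one example of such a reconstruction, a digital sketch of a tape-style feedback echo, where each repeat is fed back through a low-pass so that successive ‘generations’ get quieter and duller (parameter values are illustrative):

import numpy as np

SR = 44100

def tape_echo(x, delay_s=0.3, feedback=0.6, tone=0.4):
    """Sketch of a tape-style echo: each repeat is fed back through a one-pole
    low-pass, so successive 'generations' get quieter and duller."""
    d = int(delay_s * SR)
    out = np.zeros(x.size + 5 * d)
    out[: x.size] = x
    lp = 0.0
    for i in range(d, out.size):
        lp += tone * (out[i - d] - lp)     # darken the repeat slightly
        out[i] += feedback * lp
    return out

click = np.zeros(SR // 4)
click[0] = 1.0                             # a single impulse as test input
echoes = tape_echo(click)
idx = np.where(np.abs(echoes) > 0.05)[0]
print((idx / SR).round(3)[:6])             # impulse at 0 s, smeared repeats from ~0.3 s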

Orestis Zafiriou (2017–2019)

Mental Images Mediated Through Sound

I’m interested in researching correlations between physical and perceptual space in the process of composing music. Concentrating on the ability of human perception to create images and representations of the phenomena it encounters, I propose a compositional methodology in which the behaviour and the spatiotemporal properties of matter are translated into musical processes and represented through electronic music.

Sounds in my work represent objects and events, as well as their movement in space-time. This implies that a musical space is formed by a succession of mental images unfolded in the perceptual space of the composer when encountering these events. With this method I also aim to point out the importance of music as a means to communicate objective physical and social processes through a subjective filter, both from the standpoint of a composer as well as from the perception of a listener.

Orestis Zafiriou studied mechanical engineering at the Technical University of Crete, completing his thesis on acoustics, specifically the physical properties of sound and its movement through different media (active noise control using the finite-element method). In addition to his present and past studies, he actively composes music with the band Playgrounded, who released their first album (Athens) in 2013 and their second (In Time with Gravity) in October 2017.

Chris Loupis (2016–2018)

Bridging Isles: Dynamically Coupled Feedback Systems for Electronic Music

My research at the Institute of Sonology has been principally involved with the investigation of coupled and recursive processes, with the final goal of applying the derived techniques to modular synthesis through the (re)implementation – or appropriation – of circuits. Audio and control feedback systems present a non-hierarchical way of interacting with an emergent, unforeseeable and unrepeatable output. In such systems, individual components share energy mutually; they can therefore be considered coupled, their resulting sonic behaviour being one of synergetic or conflictual relations. The central theme this project builds upon is the idea of musical systems designed to be operating – or better yet, operated – within reciprocally affected and variably intertwined structures.

Specifically, these ideas include an investigative trajectory through the mechanics of coupling in feedback control systems, which arrived at the Van der Pol equation (Balthasar van der Pol, 1927).
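
For reference, the equation in question, with μ controlling the strength of the non-linear damping (small μ yields a near-sinusoidal oscillation; large μ a relaxation oscillation):

\ddot{x} - \mu\,(1 - x^{2})\,\dot{x} + x = 0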

This oscillator has significant musical value, as it is characterised by behavioural versatility and by rhythmic and timbral complexity and richness; its non-linear response to external signals is also particularly interesting. While well documented in physics and mathematics, the Van der Pol oscillator has not been widely adopted as an analogue model in modular synthesis. With these observations in mind, I am currently developing an extended implementation of the circuit as a set of analogue coupled Van der Pol electronic oscillators. This will culminate in the creation of a series of modules to be used for the composition and performance of electronic music, in the studio and live.
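
The musical behaviour of such coupling can be previewed numerically before committing to circuitry; a minimal simulation sketch of two Van der Pol units with different μ values, exchanging energy through a simple coupling term k (all values illustrative):

import numpy as np
from scipy.integrate import solve_ivp

def coupled_vdp(t, state, mu1=1.0, mu2=4.0, k=0.3):
    """Two Van der Pol units, each weakly driven by the other's displacement.
    Different mu values give different characters (near-sinusoidal vs.
    relaxation-like); k sets how strongly the units share energy."""
    x1, v1, x2, v2 = state
    a1 = mu1 * (1 - x1**2) * v1 - x1 + k * (x2 - x1)
    a2 = mu2 * (1 - x2**2) * v2 - x2 + k * (x1 - x2)
    return [v1, a1, v2, a2]

sol = solve_ivp(coupled_vdp, (0, 100), [0.1, 0.0, -0.2, 0.0], max_step=0.01)
x1, x2 = sol.y[0], sol.y[2]   # the two audio/control signals
print(x1.shape, round(float(abs(x1).max()), 2), round(float(abs(x2).max()), 2))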

In parallel, and alongside the notion of the user as an integrated part of the musical system, my work has also focused on redesigning the interface of a computer-aided switch-matrix mixer developed by Lex van den Broek in 2016, known as the Complex. The redesigned version allows users to actively and dynamically participate in the recursive architecture of sensitive interwoven systems.

Riccardo Marogna (2016–2018)

CABOTO: A Graphic Notation-based Instrument for Live Electronics

The main idea behind my research project is to explore ways of composing and performing electronic music by means of scanning graphic scores. Drawing inspiration from the historical experiments on optical sound by Arseny Avraamov, Oskar Fischinger, Daphne Oram and Norman McLaren, and from the computer-based interface conceived by Xenakis (UPIC, 1977), the project has evolved into an instrument/interface for live electronics called CABOTO. In CABOTO, a graphic score sketched on a canvas is scanned by a computer vision system. The graphic elements are then recognised following a hybrid symbolic/raw approach: they are interpreted by a symbolic classifier (according to a vocabulary), but also as waveforms and raw optical signals. All this information is mapped into the synthesis engine. The score is viewed according to a map metaphor, and a set of independent explorers is defined which traverse the score-map along paths generated in real time. In this way I have a kind of macro-control over how the composition develops, while the explorers are programmed to exhibit semi-autonomous behaviour. CABOTO tries to challenge the boundaries between the concepts of composition, score, performance and instrument.
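
A toy version of the ‘raw’ half of that scanning process (the canvas here is synthesised rather than camera-captured, and the pitch/amplitude mapping is invented for illustration):

import numpy as np

# A toy 'canvas': a synthetic grayscale score, 200 px tall x 400 px wide.
H, W = 200, 400
canvas = np.zeros((H, W))
xs = np.arange(W)
curve = (H / 2 + 60 * np.sin(2 * np.pi * xs / W * 3)).astype(int)
canvas[curve, xs] = 1.0                       # a hand-drawn-like wavy line

def scan_column(col):
    """Read one pixel column as raw signal: vertical position -> 'pitch',
    total ink -> 'amplitude'."""
    ink = canvas[:, col]
    if ink.sum() == 0:
        return None
    centroid = (np.arange(H) * ink).sum() / ink.sum()
    pitch = 1.0 - centroid / H                # higher on the page = higher pitch
    amp = min(1.0, ink.sum() / 10)
    return pitch, amp

# An 'explorer' traversing the score-map from left to right:
events = [scan_column(c) for c in range(0, W, 100)]
print(events)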

Sohrab Motabar (2015–2018)

Non-standard Synthesis and Non-standard Structure

The starting point of my research is to explore the structural possibilities of sound material generated by non-standard synthesis, namely the jey.noise~ object in Max. My research also proceeds from a consideration of the technique used by Dick Raaijmakers in Canon 1, where the time interval between two simple impulses becomes the fundamental parameter on which the music is composed. I have investigated the possibilities of replacing these impulses with more complex materials, and these microscopic time intervals with different timescales.
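
A minimal sketch of that substitution: a pair of events separated by a composable time interval, first as bare impulses, then with a more complex material in their place (values illustrative):

import numpy as np

SR = 44100

def canon_pair(interval_s, material=None):
    """Two events separated by a time interval - the parameter the music is
    composed on. 'material' replaces the bare impulse with something richer."""
    if material is None:
        material = np.array([1.0])                 # a simple impulse
    gap = int(interval_s * SR)
    out = np.zeros(gap + 2 * material.size)
    out[: material.size] += material
    out[gap : gap + material.size] += material
    return out

impulse_version = canon_pair(0.010)                # two impulses, 10 ms apart
burst = np.random.randn(2048) * np.hanning(2048)   # a more complex 'impulse'
complex_version = canon_pair(0.010, burst)
print(impulse_version.size, complex_version.size)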

Julius Raskevičius (2016–2018)

PolyTop: Map-oriented Sound Design and Playback Application for Android

My work currently focuses on creating an Android application that can act as a universal translator from a parameter space to a meaningful 2D map of sounds. The application can control virtual instruments in SuperCollider or any other program that accepts MIDI as parametric control. The goal of this research is to create an app that positions a musical piece as a type of network, enabling gradual gestural transformation of material. This research was prompted by the fact that a majority of touchscreen instruments have inherited the looks of older-generation mouse-oriented interfaces, and thus still rely on the old paradigm of pointing and clicking. Consequently, users of this type of traditional mouse-based interface have been limited to a single interaction with a given virtual instrument at a time. Simply put, gestures involving multiple fingers have not been common in professional sound-design programs, even with the widespread adoption of touchpads as a primary input device for portable personal computers.

However, with the growing computing power of touchscreen devices and acceptance of touch as a mode of interaction, new sound-design possibilities are emerging. These developments, combined with visual and sonic input, are allowing touch to become a powerful way of intuitively generating sound. Additionally, the continuous nature of touch gestures promotes the design of sounds encouraging uninterrupted modulation. This also promotes a holistic perspective on sound design, by way of using multi-touch to make possible the simultaneous adjustments of sonic details.  

Overall, this suggests that the possibilities of touch input can be combined with 2D maps of sound parameters. Similar to a scrollable digital map representing a geographical area, a 2D map on a tablet can represent all possible permutations of a virtual instrument’s parameters. Given that such parametric combinations are nearly infinite, the user takes on the role of an explorer, wandering through the space of parameters and pursuing directions that lead to interesting sonic results, or conversely, avoiding areas on the map that are less intriguing. Multi-touch gestures can speed up and simplify this process, largely by adding intuitive control to the randomisation of parameters.
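
One simple way such a map can be realised, sketched with inverse-distance weighting over a few hand-placed parameter ‘anchors’ (the layout and values are illustrative, not PolyTop’s actual scheme):

import numpy as np

# A few hand-placed 'anchor' sounds on the 2D map, each a full parameter set.
anchors = {
    (0.1, 0.2): np.array([0.9, 0.1, 0.0, 0.3]),   # e.g. dark drone
    (0.8, 0.3): np.array([0.2, 0.9, 0.5, 0.1]),   # bright texture
    (0.5, 0.9): np.array([0.5, 0.5, 1.0, 0.8]),   # noisy cloud
}

def params_at(x, y, power=2.0):
    """Inverse-distance weighting: any touch position yields a parameter set
    that blends nearby anchors, so a finger drag becomes a gradual timbral
    transformation."""
    num = np.zeros(4)
    den = 0.0
    for (ax, ay), p in anchors.items():
        d = ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5
        if d < 1e-9:
            return p                       # exactly on an anchor
        w = 1.0 / d ** power
        num += w * p
        den += w
    return num / den                       # send out as MIDI CCs, OSC, etc.

print(params_at(0.45, 0.40).round(2))      # somewhere between all three anchors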

Edgars Rubenis (2016–2018)

Adventures in Temporal Field 
pushing past clock-based notions of temporality and the self

While sounding material has always been of central interest in my musical practice, in the course of this master’s project I am directing my attention towards the experiential side of musical interactions – the various types of perceptual material that are received while engaging in the act of listening to music.

While I am interested in music that exists on its own terms, free from obligations towards the listener, I am also aware that in the act of musical listening an inevitable overlapping of worlds takes place. In the course of a musical event our human sphere enters into relations with, and becomes affected by, the principles of the musical world. In some cases it can even be said that these worlds temporarily merge.

Therefore, for the course of this research, the focus is on raising awareness of how the “thing that I interact with” not only fills my perceptual space but also shapes its boundaries. Considering that our perception informs us of who/what we are, such musical experiences shape our notions of what our human realm is.

Building strongly on my bachelor’s thesis “Use of Extended Duration in Music Composition” (which focused on works by Éliane Radigue, La Monte Young and Morton Feldman), and on an even earlier personal musical practice of a related kind, I am currently gathering insights into how musical experiences shape our notions of who we are – how they draw the borders of our humanness and legitimise certain types of experiences and states over others.

Notions of perception and temporality are informed by reading Edmund Husserl’s On the Phenomenology of the Consciousness of Internal Time and related academic texts.

Timothy S.H. Tan (2016–2018)

Spatialising Chaotic Maps with Live Audio Particle Systems

Chaotic maps have already been used to drive many parameters in algorithmic music, but they have very rarely been applied to spatialisation. Chaotic maps are sensitive to tiny changes yet still retain distinctive shapes, thus providing strong gestures and metaphors. This allows for effective control of spatialisation during real-time performances.

Particle systems, on the other hand, provide a novel and effective means for sound design, often using regular and random shapes for visual effects like smoke, clouds and liquids. However, up to now chaotic maps have not been incorporated into particle systems, and both hold promising potential for the choreography of sounds. In my research, I seek to explore this crossroads between chaotic spatialisation and audio particle systems. This involves probing and evaluating the use of chaos and particle systems in music, then spatialising selected chaotic maps with particle systems in upcoming works for performance, and finally documenting my findings.
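
A minimal sketch of the chaotic half of this: points from the Hénon map, normalised into a square of four speakers and converted to per-speaker gains (the choice of map and panning law are illustrative):

import numpy as np

def henon(n, a=1.4, b=0.3):
    """Classic Hénon map: chaotic, yet with a distinctive recognisable shape."""
    pts = np.empty((n, 2))
    x, y = 0.0, 0.0
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x
        pts[i] = x, y
    return pts

def quad_gains(x, y):
    """Equal-power-ish panning of one source over a square of 4 speakers,
    from normalised coordinates in [0, 1]^2."""
    g = np.array([(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y])
    return np.sqrt(g)                     # FL, FR, RL, RR

traj = henon(1000)
# Normalise the attractor into the speaker square.
traj = (traj - traj.min(0)) / np.ptp(traj, 0)
for x, y in traj[:3]:
    print(quad_gains(x, y).round(2))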

Vladimir Vlaev (2016–2018)

Real-time Processing of Instrumental Sound and Limited Source Material Composition

My research at the Institute of Sonology focuses on real-time digital processing of acoustic instrumental sound. It is an extension of my background as a composer and an instrumentalist and it is a result of my interest in applying these two activities in the real-time electroacoustic domain. 

At the core of my project lies a compositional approach which I call “limited source material composition”. In this approach, ‘material’ or ‘sound material’ has the broad meaning of pitch, timbre or rhythm. This principle is one I have applied extensively in many of my previous works, and indeed it is a concept whose implementations can be traced from early polyphonic music to certain examples of contemporary instrumental and electronic music. My aim is to implement this non-real-time compositional approach in real time, so that a concept once used for composing a score now serves as a performative and improvisational tool. Time thereby also becomes one of the parameters subjected to limitation or restriction. I have sought to accomplish this by designing a real-time sound-processing system which uses instrumental sound as a source or ‘material’. In other words, I compose certain digital sampling processes, which then treat the acoustic sound in real time during a performance in order to create an ‘instant composition’. Additionally, this computer-based interface is intended as a tool for both scored composition and live electronic improvisation.

Therefore, in order to accomplish these ideas, I distinguish two main directions in my work:

1) Composing a piece for a solo instrument (prepared piano) and live electronics in which I apply the above-mentioned principles of constraint.

2) Another particular implementation of the proposed real-time DSP system involves instrument building as an additional activity and is based on the use of a hexaphonic-pickup guitar as an acoustic sound source with multichannel output. The ability to apply individual processing to each string of the guitar, and thus to create complex polyphonic textures, is one of the major advantages of this implementation.

More particularly, the desired interface itself is a set of modules, each representing a real-time audio process: ring modulation, delay lines, granulation, filtering, pitch shifting, distortion, a buffer module, etc. Each module has a number of parameters whose values determine its behaviour. The signal flow between the distinct modules is flexible: the system is capable of switching, adding, or removing processes from the chain during performance, as well as reversing or changing their order. Achieving smooth control over the parameters is also an essential task of this project.
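
A minimal sketch of such a flexible chain (the modules are simplified stand-ins, not the project’s actual DSP): processes as plain functions in an ordered list, so they can be swapped, removed or reordered mid-performance:

import numpy as np

SR = 44100

def ring_mod(block, freq=150.0):
    t = np.arange(block.size) / SR
    return block * np.sin(2 * np.pi * freq * t)

def soft_clip(block, drive=4.0):
    return np.tanh(drive * block)

def simple_delay(block, delay_s=0.05, mix=0.5):
    d = int(delay_s * SR)
    out = np.copy(block)
    out[d:] += mix * block[:-d]
    return out

# The signal flow is just an ordered list of processes ...
chain = [ring_mod, simple_delay, soft_clip]

def process(block, chain):
    for module in chain:
        block = module(block)
    return block

# ... so during performance the chain can be reordered, thinned or extended:
chain.reverse()                    # reverse the order of processes
chain.remove(simple_delay)         # drop a module
chain.append(lambda b: 0.5 * b)    # add a gain stage

guitar_string = np.sin(2 * np.pi * 196.0 * np.arange(4096) / SR)  # one string
print(process(guitar_string, chain)[:4].round(3))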