Amit Dagim (2022–2024)
Composing synthetic creatures, instruments, and environments – polyphony, color, and space
In synthesis and sound design that drives toward the acoustic-like, the pseudo-organic, the animalistic, the surreal or hyper-real, the physical and esoteric, there is an unmapped territory that is laden with uncanny impressions of supposed species, instruments, objects, and worlds. It is an aesthetic approach that brings forth the thing becoming—animate, dynamic and alive—as opposed to the thing become, supposedly clear, objective, and neat, though in fact frozen and lifeless. I would like to explore these areas of synthesis through a study of polyphony, color, and space, and their relevance and application for creating aesthetically physical or acoustic sound and music.
The polyphonic is the multi-voiced, multi-perspective narrative, image, or structure. It is a way of thinking in arrays, networks, multitudes, and interactions. It is the basic quality of a multi-voiced, dynamic system made of individual—yet connected and interdependent—voices that play and interact.
Color and space are almost inseparable in the context of “pseudo-acoustic” synthesis: spatial and time-based processing is an essential factor in the timbre of the object/organism, as it is in our perception of it as a live, vibrant, physical thing that inhabits a world or space of some sort. Color would be the overall timbre and texture of sound, though it ultimately leads to questions of synesthetic phenomena and perception. In feedback patching and various methods of audio signal processing, I find a deep connection to these questions of aesthetics and composition.
Researching methods of digital and analog feedback patching/routing; dynamic, spectral, and spatial processing; electroacoustic feedback systems; and concepts of polyphony, arrangement, and rhythm, I would like to make a study of the applications of these techniques for the composition of supposed creatures, instruments, and habitats. I will then compile a catalogue of recordings that will act as field recordings of these worlds and the voices/objects that inhabit them, and serve as a possible basis for several compositions reflecting these ideas and aesthetics.
Gyuchul Moon (2022–2024)
Organic Algorithmic Composition
Self-organization is a process in which some form of overall order arises from local interactions between the parts of an initially disordered system.
Noise, which is not only sound but also, in computational terms, two-dimensional, contains information at all frequencies and the implicit possibility of being structured and composed of tomographic layers. At this point a question arises: if we design a sonic system that is controlled by feedback and self-regulation, like a Generative Adversarial Network (GAN), would it generate a form of music?
This research project aims to create a program and algorithm for generating musical forms. For this process, cybernetics theory and the concept of neural networks will be referenced. The core concept of cybernetics is circular causality or feedback—where the observed outcomes of actions are taken as inputs for further action in ways that support the pursuit and maintenance of particular conditions or their disruption. Consider a situation in which waves occur in water: entropy increases with the movement of the water, and afterwards, if there are no other external forces, the water enters an equilibrium state through the characteristics of the fluid. In general, all substances are defined in terms of their equilibrium state. A system in which feedback exists has the potential of a natural, organic network. This research aims to create music with a generative structure that has feedback, self-regulation, and self-organization. I will explore the possibility that the form of the music is composed as an organic form, including the process of reaching physical and chemical equilibrium within these structures.
The project will implement a contemporary computational methodology to program the whole system. A bottom-up architecture will be considered, including the detailed nodes involved in sound generation. The notion of an organic network of networks refers to the idiosyncratic research of the British cybernetician Gordon Pask, whose work looks at a theory of conversation and electrochemical learning mechanisms (“Physical Analogues to the Growth of a Concept,” “The Natural History of Networks”).
The algorithms of the project consist of a set of small-scale structures. Each substructure is designed as a single organism, based on interdisciplinary cybernetics research, that joins a whole system as a musical form. The correlation between individual units is kept low for the sake of the organicity of the overall structure. Each unit of the procedurally generated sound structure maintains a kind of relationship with the others and goes through processes of creation, extinction, and transformation. This creates a form that has the potential to develop by giving and receiving feedback in a self-organized and evolving system.
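As a thought experiment, the dynamics described above can be prototyped in a few lines of Python. The sketch below is purely illustrative (the unit states, coupling rules, and extinction/creation mechanics are assumptions, not the project's actual algorithm): a population of weakly coupled units exchanges feedback, drifts toward local consensus, and is continually renewed.

    import random

    class Unit:
        """A single sound-generating node with a frequency-like state."""
        def __init__(self, state):
            self.state = state
            self.energy = 1.0

        def feedback(self, neighbours, coupling=0.1):
            # Drift slightly toward the mean state of a few neighbours:
            # weak coupling keeps the correlation between units low.
            if neighbours:
                mean = sum(n.state for n in neighbours) / len(neighbours)
                self.state += coupling * (mean - self.state)
            self.energy -= random.uniform(0.0, 0.1)  # decay toward extinction

    def step(population):
        for u in population:
            u.feedback(random.sample(population, k=min(3, len(population))))
        # Extinction and creation keep the system in flux.
        population = [u for u in population if u.energy > 0]
        while len(population) < 8:
            population.append(Unit(random.uniform(100.0, 1000.0)))
        return population

    population = [Unit(random.uniform(100.0, 1000.0)) for _ in range(8)]
    for _ in range(50):
        population = step(population)
    print(sorted(round(u.state) for u in population))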
New auditory imagination will arise through the programmatic connection of the real and the artificial. Based on an engineering approach that links traditional machine algorithms with sound and uses the principles of self-organization, this project raises fundamental questions about digital, technical, and complex systems and their entanglement with the aspect of sound.
Adomas Palekas (2022–2024)
Microbial and Molecular Sonifications
Microbial and Molecular Sonifications is an interdisciplinary master’s research project that aims to search for novel connections between microbiology and sound. My particular interest is in the sonic potential of metabolic processes and molecular structures found in microorganisms. Although we can’t hear microbes or organic molecules directly, sonification can be used to audibly display certain microbial qualities, such as growth curves and amino-acid or DNA sequences. I seek to explore sonification as a method for interdisciplinary practice linking the natural sciences and composition.
Humans are partly composed of microbes: the trillions of bacteria living inside our gut have an impact on our physical and even mental wellbeing. But apart from this physiological influence, could microorganisms influence us cognitively or spiritually as well? What could the voice of a microorganism be? Could a sonic-incorporeal interaction invoke new sensations or modes of listening, inspiring us to stray outside our anthropocentric viewpoint? These questions form the conceptual basis of my research.
My master’s research will comprise two practical stages. First, I plan to experiment with sonic mapping of biological datasets such as amino and nucleic acid sequences, including their secondary and tertiary structures, and then compose a series of pieces using these methods. Further, I plan to experiment with sonification of living organisms in micro-ecosystems and ecospheres. Through these experiments I aim to develop a bio-sonic interface that represents real-time metabolic processes in the audible domain. I will also experiment with an auditory feedback loop that directly vibrates the micro-ecosystem. I believe this could lead to the formation of an autopoietic bio-sonory system, in which sound becomes not just an output but a major factor contributing to life, death, and adaptive evolution.
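As a minimal illustration of the first stage, an amino-acid sequence can be mapped to pitches. The sketch below assumes an arbitrary assignment of the 20 standard residues to degrees of a pentatonic scale; the project's actual mappings would be developed experimentally.

    # Map an amino-acid sequence to pitches: each of the 20 standard
    # residues is assigned a degree of a multi-octave minor pentatonic
    # scale. The assignment is arbitrary and purely illustrative.
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"          # one-letter codes
    PENTATONIC = [0, 3, 5, 7, 10]                  # semitone offsets

    def residue_to_midi(residue, base_note=48):
        index = AMINO_ACIDS.index(residue)
        octave, degree = divmod(index, len(PENTATONIC))
        return base_note + 12 * octave + PENTATONIC[degree]

    def midi_to_hz(note):
        return 440.0 * 2 ** ((note - 69) / 12)

    # Start of human insulin's B chain, as an example input.
    sequence = "FVNQHLCGSHLV"
    melody = [round(midi_to_hz(residue_to_midi(r)), 1) for r in sequence]
    print(melody)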
Oscar Peters (2022–2024)
New Perspectives for Organ Music
Nowadays the pipe organ is not a popular choice amongst composers of any kind, and because of its relationship with the sacred space, more and more generations grow up without any sonic memory of the instrument. Combined with the inevitable and rapidly increasing closure of churches, this prompts the question of whether the instrument’s future is in danger. New Perspectives for Organ Music orbits around the future of organ music, its physical identity, and the current limitations of the instrument. This project is a continuation of my BMus research project, and tries to unveil new sonic potential and artistic possibilities through speculative research and practice.
This project can be described as a continuous feedback dialogue between my roles as composer-performer and instrument-builder. These two agencies—both motivated by artistic innovation—will be in dialogue with each other and will operate simultaneously throughout the process. As an instrument-builder, I aim to develop several technical approaches that deal with the excitation of organ pipes. These different types of excitation should extend the timbral domain and dynamic behaviour characteristic of the contemporary organ. Within this context, I attempt to answer questions such as: What defines an organ? Which musical parameters of the contemporary organ should we keep, and which should we replace? What can we learn from other wind instruments, and how can we apply these instrument-specific behaviours to the organ?
Ege Şahin (2022–2024)
Sonic Transfrontiers: Agency of sound in border conditions
The border, as a territorial, political, social, and juridical concept, is often researched by sociologists, anthropologists, architects, and historians, but rarely through the ears of music artists and sound theorists. Doing so can open up new ways of dealing with the agency of sound (how it acts and reacts, engages and disengages) in various modes of bordering: national borders, gendered bathrooms, dis(con)sonance, loudness, mountains, rivers, and so on.
This research is twofold: first, ‘sound in borders’, and second, ‘borders in sound’. The first will deal with the potentiality of sound as a border-crossing agent, facilitating sonic transfrontiers from a geo-cultural context. It will unfold through onsite research involving field recordings, experiment design, and consultation with experts and inhabitants at the historically and politically contested borders that Turkey shares with Armenia, Syria, and Cyprus.
The second aspect will question sound as a border-bearing phenomenon, interpreting physical properties of sound—such as threshold, instability, noise, and distortion—as possible borders. In order to refer to these notions as borders, it is necessary to investigate, both through experiments (including subject-specific physical and digital instrument preparation and source manipulation) and theoretical research, whether a conceptual framework for borders in sound can be established and, if so, where the emerging sound material stands compositionally.
Elif Gülin Soğuksu (2022–2024)
Emancipating the Voice as an Instrument in Electroacoustic Music
The voice is a universal sound source that can be controlled and moulded in an exceptionally malleable and direct way. It is an instrument capable of producing and sculpting complex sound structures within its range in changing dynamics, gestures, and behaviors. Musical ideas and imagined/inner sonorities can be expressed immediately without the utilization of other tools. It can facilitate flexible and convenient expressiveness in improvisation, where it can be employed in the development of different compositional strategies.
Using the voice in one’s work provokes an inquiry into meaning in conjunction with the frameworks of identity, gender, and culture; it evokes associations, connotations, and significations. Hence, it can influence the perception and interpretation of the listener; it can be so prominent and distinctive that it grabs attention. In musical forms, the voice can be immensely dependent on the historical aspects, social-cultural norms, and conditioning factors in which the performer is brought up. These can thereby become limiting and restrictive factors on vocal expression.
My project aims to push the expressive boundaries of the voice and oral materiality by investigating its sound-making capacity and potentialities in electroacoustic music practices. The overarching focus is to emancipate the voice, by means of technology, from the potential inferences embedded within it, while maintaining its intrinsic instrumentality, expressiveness, directness, and malleability.
The outcome will be an interactive music system performed through the voice, capable of analyzing the incoming sonic data and parameters to synthesize sound events in real time during performance. The project will revolve around the question of how to preserve the directness, morphological identity, and peculiarities of the voice as an instrument even when the voice quality is radically abstracted by the system. Consequently, the system will react to and make substantial use of the morphological aspects of the voice to organize musical events and produce sonic structures.
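By way of illustration, the analysis side of such a system might reduce each incoming frame of the voice to a few low-level descriptors and map them onto synthesis parameters. The sketch below is a minimal, assumption-laden example, not the project's system; the parameter names and the mapping are invented.

    import numpy as np

    def voice_descriptors(frame, sample_rate=44100):
        """Low-level features of one audio frame that a real-time
        system could map onto synthesis parameters."""
        rms = np.sqrt(np.mean(frame ** 2))                     # loudness
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2     # noisiness
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), 1 / sample_rate)
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
        return rms, zcr, centroid

    # Illustrative mapping: loudness -> grain density, noisiness ->
    # filter bandwidth, spectral centroid -> oscillator frequency.
    frame = np.random.randn(1024) * 0.1          # stand-in for live input
    rms, zcr, centroid = voice_descriptors(frame)
    params = {
        "grain_density": 10 + 200 * rms,
        "filter_bw": 50 + 5000 * zcr,
        "osc_freq": centroid,
    }
    print(params)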
Hugo Ariëns (2021–2024)
The sonic potential of electric guitar preparation
The introduction of preparations to the electric guitar has opened up a new world of sonic possibilities. The prepared guitar forces us to rethink our relationship to the guitar and its limits, offering a vast array of sounds that gives new meaning to Aguado’s idea of the guitar as a “miniature orchestra.” Preparations transform the electric guitar into an amorphous object—a platform for different materials and textures to meet. It becomes a magnifying glass, able to amplify the tiniest details of a sound. The potential is undeniable—but how do we deal with it?
The prepared guitar field is typified by an individualistic mindset; guitarists are often reluctant to share their techniques or discuss their practice in a meaningful way. I aim to break out of this mindset and open the guitar up to an awareness and acknowledgment of community and collaboration. Part of this awareness is the exploration of the (historical) context of the prepared guitar, tracing its development from its origins to the multitude of approaches in the contemporary field. The individual languages developed by prepared guitar practitioners are the key to understanding the nature of the instrument, the possible preparations, and the practical challenges one encounters in prepared guitar practice.
Examining the underlying technical principles of the electric guitar can help us understand how guitar preparations work in terms of the whole of the instrument. In the context of this research, “the whole of the instrument” means everything involved in the signal chain that contributes to the sound production: the strings, the pickup, the amplifier, and the effects pedals are all integral parts of the instrument. All these parts have certain possibilities and limits that define what is possible when preparing the guitar. Instead of trying to fit the preparations into the framework of a traditional guitar setup (one designed for a band setting), I will take the opposite route. I aim to shape the instrument to the preparations; the purpose of the instrument becomes to let the preparations blossom.
Studying the context and the workings of the prepared guitar will allow me to refine my approach to my instrument and artistic practice. This personal outlook will require a custom set of preparations and a setup for live performance that supports them. The aim is for these three things—the preparations, the setup, and the approach—to have a reciprocal relationship that coalesces in a live performance setting.
Paolo Piaser (2021–2024)
Towards a Whole. Systemic Theory and Cybernetics in Music: Searching for Self-regulating Musical Forms
In the same way that a system is a group of interconnected parts that influence and interact with each other, Systemic Theory is an interdisciplinary field of study that comprehends and connects different disciplines and approaches, including biology, mathematics, sociology, and cybernetics. The history of Systemic Theory is connected to music, particularly through the cybernetic movement, many of whose protagonists authored theories used in electronic music (the theory of sampling being perhaps the most important) or were in close contact with the musical world (in 1966, Heinz von Foerster organised a conference dedicated to the applications of the computer in music, from which he would later extract Music by Computers).
With the idea of further cultivating this connection through the means of composition, the aim of this research is to create a self-regulating music-based system by simulating an autopoietic net – a living system whose parts are interconnected, influence each other in various ways, and continuously ‘create’ themselves, the others, and the relationships that occur between them, as conceptualised in the Santiago Theory of H. Maturana and F. Varela.
In order to obtain this result, various aspects need to be addressed and explored. For the sake of clarity, the research can be divided into parts, regardless of chronology or order of importance. One part is the study of the literature related to Systemic Theory, to aid the conceptualisation of the entire project, from the epistemological to the most practical aspects (how the elements are related and connected, for example). Another is the creation of both a sound world (coming equally from acoustic and synthetic means and instruments) and a ‘movement world’, the latter inhabited by performers able to move in the space of the performance. A third part is the collection and elaboration of data through the analysis of audio signals and movements, in order to control DSP parameters and communicate specific information to the performers through sonic, visual, and physical cues. And a fourth part is the use of the space of the performance, not only for the diffusion of the sound, but also as a field in which to collect and perpetuate data.
More specifically, the intentions are: to use machine learning in conjunction with DSP algorithms as an assistant for the analysis, control, and synthesis of sounds; to create wearable hardware to help the performers convey and receive information (particularly the non-musicians); to use unconventional spatialisation systems (for example the WFS system) together with more conventional ones; and lastly, to create a relationship of trust with an ensemble specialised in my music, in order to achieve the best musical outcome. In this ensemble I would include other students who are exploring something complementary to the project, who believe in it and are excited by it, and who are ready to help and exchange competences for the best outcome: components of an enriching collaborative system.
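By way of illustration, one minimal form of "machine learning as assistant" is a nearest-neighbour mapping from movement features to DSP parameters, trained on demonstration pairs. The sketch below is a toy under stated assumptions (the feature and parameter values are invented), not the project's intended system.

    import numpy as np

    # Demonstration pairs: movement features (e.g. speed, height) ->
    # DSP parameters (e.g. reverb mix, filter cutoff). Values invented.
    gestures = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9]])
    dsp_params = np.array([[0.2, 400.0], [0.9, 2000.0], [0.5, 800.0]])

    def map_gesture(features, k=2):
        """Interpolate DSP parameters from the k nearest demonstrations."""
        dists = np.linalg.norm(gestures - features, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nearest] + 1e-9)
        weights /= weights.sum()
        return weights @ dsp_params[nearest]

    print(map_gesture(np.array([0.6, 0.5])))  # -> [reverb_mix, cutoff_hz]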
To conclude and sum up in a few terms: the aim of this research is the creation and exploration of an interdisciplinary semi-aleatoric system, a whole, where the range of possibilities is defined by the influences occurring between the interconnected elements.
Riccardo Ancona (2021–2023)
Organising Sonic Materialities
There is a mode of listening whose aim is to recall the materiality of objects that emit sonic vibrations. Humans’ capability to infer material properties from sound is based on a set of perceived material features, for which a complex interaction of percepts, memories, and context-based information is constantly devised and rearranged. Our mental representations of perceived materialities raise questions regarding the epiphenomenal nature of listening, its neurophysiological development, and its close relationship with tactility.
Perceived materiality does not necessarily correspond to a physical actuality. It is an interplay between experience, imagination, and desire. Being a qualitative aspect of sonic interpretation, it eludes any attempt at formalisation: it is inherently incommensurable. Yet, despite their ineffability, the qualia of materiality take form out of shared embodied conditions; they are grounded in our understanding of objectual physical properties – such as state of matter, surface texture, density, weight, elasticity, and so on – in such a way that the physicality of sound is projected onto a set of commonly understood schemata.
Therefore we can still try to define a non-exhaustive taxonomy of perceived material features as a heuristic map for analysis and composition. Once a set of archetypal categories of materiality is circumscribed, it is possible to conceive a compositional system based on a syntax of metamorphoses. Arches, trajectories, and complex movements in the field of perceived materiality can provide a process-based approach to a sonic exploration of the transmigration of matter.
Francesco Corvi (2021–2023)
Programming as a cognitive extension for improvisation in time-based media
My research starts from a vision of programming as a performative medium and explores how, through computational creativity and human-computer interaction, programming languages become an extension of the performer’s mind. This perspective not only sees technology as a creative tool, but raises questions about the role of human beings in this relationship and about how to exploit black boxes without losing an understanding of hidden computational processes.
In live coding, this cognitive augmentation has the potential to enable the interaction of processes occurring on different time scales, and to define form and material by direct control of the temporal dimension itself. As in improvised instrumental music, there is a strong extemporaneous component that makes such performances unpredictable, but the cognitive process allows one to act both on the present and on the future without the constraint of an immediate cause-and-effect relationship. Considering the ability of time-based media to transform absolute time into inner time—duration perceived, as opposed to time occupied—this framework aims to influence inner time by a direct transformation of virtual time, the time represented in a digital system. Ultimately, by changing the flow of virtual time, the performer will define how material is shaped and distributed in absolute time, consequently influencing how time is perceived by the listeners.
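As a small illustration of this idea (the rate curve and event list are invented), events scheduled in virtual time can be rendered into absolute time by integrating the reciprocal of a variable rate function: reshaping the rate reshapes the material's distribution in absolute time.

    import numpy as np

    def virtual_to_absolute(virtual_times, rate):
        """Map virtual-time stamps to absolute time by integrating
        1/rate: where rate(t) > 1, virtual time flows faster, so events
        are compressed in absolute time."""
        grid = np.linspace(0, max(virtual_times), 1000)
        # Absolute time elapsed per unit of virtual time = 1 / rate.
        absolute_grid = np.cumsum(1.0 / rate(grid)) * (grid[1] - grid[0])
        return np.interp(virtual_times, grid, absolute_grid)

    events = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # beats in virtual time
    accelerando = lambda t: 1.0 + 0.5 * t            # virtual time speeds up
    print(virtual_to_absolute(events, accelerando).round(2))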
Building on my previous work in the field of live coding, I propose to extend the widely used event-oriented framework inspired by the sequencer by defining two further approaches: process-oriented and mapping-oriented. In these, reprogramming occurs in symbiosis with other agents, establishing a complex feedback of interactions with emerging behaviors inside an autonomous cognitive system, which is not necessarily limited to the programmer and the computer or to the act of typing.
Nils Davidse (2020–2023)
Spatial Composition Using Game Audio Engines
Video games have a sonic landscape typically including utterances of speech, music, sound effects, and ambience (e.g. field recordings). These sounds often provide feedback for orientation and visual cues; more traditionally they were produced by a programmable sound generator (PSG), content that enhances the playability and liveliness of the game. In recent decades, PSGs have evolved into engines that offer endless possibilities but still mostly assist the visual aspects of a game. I intend, however, to explore the possibilities of using the capabilities of these audio engines in a leading role. To do this, I plan to compose virtual environments whose alternate physical and acoustical properties can make an audience experience a composition in ways that would not be possible in real life.
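As a minimal illustration of such alternate physics (the numbers and formula are invented and not tied to any particular engine), per-source gain and delay can be computed with a slowed speed of sound and a gentler rolloff than any real room allows:

    import math

    def render_params(source_pos, listener_pos,
                      speed_of_sound=34.0,      # 10x slower than air: audible delays
                      rolloff=0.5):             # gentler than the inverse-square law
        """Gain and delay for one sound source under altered physics."""
        distance = math.dist(source_pos, listener_pos)
        delay_s = distance / speed_of_sound
        gain = 1.0 / (1.0 + distance) ** rolloff
        return gain, delay_s

    # A source 20 m away arrives more than half a second late, yet is far
    # less attenuated than real-world physics would predict.
    print(render_params((20.0, 0.0, 0.0), (0.0, 0.0, 0.0)))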
As a point of departure, my compositions will refer to sound art installations and works by Bernhard Leitner, Dick Raaijmakers, and Steve Reich. Their ideas about phasing, movement through space, and minimalistic approaches will be central to my compositional experiments. These influences will be informed by my background in music and installation art, as attempts to transform my compositions into a virtual space will help me to discover new visual, sonic, and immersive experiences.
Ida Hirsenfelder (2021–2023)
Empathic Atmospheres: Sonic stories for a sensitive cohabitation
Empathic Atmospheres engages various methods of tracing environmental processes, and uses them as scores for sonic storytelling. The aim of this composition is to trigger empathic neural pathways and to nurture a more sensitive relationship with the environment, promoting rewilded ecological restoration and biodiversity while staying with the trouble of extractivist logic in late capitalism.
The central method is observational field recording, supplemented by data collection of biotic/atmospheric processes, psychoacoustics, and random processing. These methods are complementary, and look at the world from different non-human-biased perspectives. With these diverse approaches, I contemplate a multitude of simultaneously present sonic possible worlds, as theorised by Salomé Voegelin, and use the capacity of sound to create atmospheres and generate moods entangling the layers of such possibilities.
The idea of sonic worlds corresponds to the ecological paradigm shift from the ideal of antiutilitarian deep ecology to the troubled dark ecology of Timothy Morton. I would like to examine how this ecological turn creates a shift from the deep listening of Pauline Oliveros to a dark listening that contests listening as an essentially anthropocentric act, and how sonic worlds can surpass a cynical nature-culture divide to produce the nature-culture-techne binding. The condition of this binding is to unlearn the divide and give agency not just to the animals that use language and display consciousness similar to humans, but also to non-living-others such as the lithosphere, as in the pagan practices of my ancestry.
The vital bond between all the things thinging in the world is the core of their generative powers, as exemplified by Rosi Braidotti in the affirmative ethics of co-production and the acknowledgement of the immanent interconnection of the multiple ecologies that constitute all systems. The depletion of biodiversity and the ongoing terraforming have displayed the fragility and vulnerability of entities in this system, and the deeply affective and relational nature of all entities. I use sound manipulation to mimic such ecological conditions, and attempt to create an expanded perception in which the listener is transposed to a specific layer of the sonic world. In sound, the kinship between entities evolves in ever-changing processes of behaviours, rhythmic structures, cycles, and randomness, with an interchange of noise, silence, and serendipitous flux. Everything is connected to everything else.
Anna Khvyl (2021–2023)
Sound in Spaces of Remembrance and Commemoration
The intangible physicality of sound is capable of expressing a more-than-graspable message to a listener. The invisible presence of sound waves balances between individual imagination and socially constructed reality. Our shared ability to listen to the environment builds a sense of community, while leaving a space for personal sensorial experiences. We listen to be with someone, and we listen to come to ourselves.
Places of remembering are meant to prescribe a specific value to a site, both personal and collective. Commemoration practices exist in every culture to allow communities to overcome traumatic memories through sustained mutual experiences. A moment of silence as a radical sonic presence is used to express something beyond words, something “more-than-graspable”.
In my project I explore commemoration practices via aural experience to create a sound work that interacts with human perception and site, and facilitates collective memory through listening and sound making.
Farzaneh Nouri (2021–2023)
Improvisation with Énacteur: an AI-driven collaborator
Énacteur will be an AI-driven collaborator for use in both live electroacoustic music improvisation and algorithmic composition. The design will be focused on the communication between artist and machine, resulting in a synergetic human-AI sonic network with emergent behaviors. The outcome will be a complex system that spontaneously produces temporal, spatial, and functional sonic structures. It will be an example of a cybernetic network, maintaining features such as feedback, system perspective, agency, and symmetry.
Énacteur will consist of three main components: an audio analyzer (or machine-listening system), a real-time sound processor, and a decision-maker / compositional strategist. The machine-listening system will analyze various parameters of the sound produced by the artist; the processor will use the analysis data to synthesize and transform the sound in real time; and the decision-maker will follow a compositional strategy extracted from previous demonstrations, creating sonic textures and musical structures during the improvisation process. By analyzing structural combinations provided by the musician, Énacteur will be trained on the stylistic preferences of the artist. Learning methods will include generative grammars, evolutionary algorithms, and imitation learning. The object of this enquiry is to explore the emergence of human-machine musical interaction via a self-organized structure of collaboration, and to investigate how AI models as composition tools could influence new aesthetics in electroacoustic music composition.
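As a purely structural sketch (the class names and the trivial stand-in logic are assumptions, not Énacteur's design), the three components and their feedback loop could be wired together as follows:

    import random

    class Analyzer:
        """Machine listening: reduce incoming audio to a few descriptors."""
        def analyze(self, audio):
            return {"loudness": sum(abs(x) for x in audio) / len(audio)}

    class DecisionMaker:
        """Compositional strategist: choose an action from the analysis."""
        def decide(self, features):
            return "thicken" if features["loudness"] > 0.5 else "thin"

    class Processor:
        """Real-time processor: transform sound according to the decision."""
        def process(self, audio, action):
            gain = 1.5 if action == "thicken" else 0.5
            return [gain * x for x in audio]

    analyzer, strategist, processor = Analyzer(), DecisionMaker(), Processor()
    audio = [random.uniform(-1, 1) for _ in range(512)]   # stand-in input
    for _ in range(4):                                    # the feedback loop
        action = strategist.decide(analyzer.analyze(audio))
        audio = processor.process(audio, action)
        print(action, round(max(abs(x) for x in audio), 3))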
Kaðlín Sara Ólafsdóttir (2021–2023)
What is the Icelandic aesthetic in electronic music?
While many ‘schools of electronic music’ (Cologne, New York, Paris, The Hague, etc.) can be identified by their connections to institutions, as well as by well-documented publications and recordings, the history of Icelandic electronic music is comparatively scattered. Electronic music composers had no access to a well-equipped studio in Iceland until the 1990s, so prior to that the government provided funding for Icelandic composers to travel and study at studios across the world. (1) Thus, the first Icelanders who studied at the Institute of Sonology and other such institutions were educated in different techniques and could not consciously form a single ‘Icelandic School’ with which to identify themselves. The only commonality was their nationality – something so strong in Icelanders that it may well have left its mark on their compositions. My goal is to discover if there is an Icelandic identity that unifies the work of these composers.
My master’s project will take as its starting point and focus the history of Icelandic electronic music and composers. Essentially, I am interested in what unifies them in their music and inspirations, and whether I can identify a distinct Icelandic aesthetic. By going through archives, listening to pieces, and interviewing composers and key figures, I will gain insight into the history and culture of my country’s electronic music. This in turn will inform my own compositions, which already exist as ‘Icelandic electronic music’, but which I would like to place more firmly within this tradition.
The artistic output of my research will be fixed media pieces inspired by my findings about the Icelandic identity in electronic music. My aim is to engage with the material I collect in a similar way as the composers I am researching did; working on my own compositional process in parallel with researching theirs will lead to a better understanding of their techniques and inspirations. An important part of the research and archiving phase will be to keep a log of my findings, and to make a website that gathers all information and links. Finally, I am hoping to organize and/or curate concerts of Icelandic music in Reykjavík and in The Hague.
(1) Bjarki Sveinbjörnsson, “Tónlist á Íslandi á 20. öld með sérstakri áherslu á upphaf og þróun elektrónískrar tónlistar á árunum 1960–90” (dissertation, Aalborg Universitet, Institut for Musik og Musikterapi, 1998), http://www.musik.is/BjarkiSve/Phd/phd.html.
David Petráš (2021–2023)
Song and Site: Listening to The Environment of Traditional Music
This research aims to explore the possibilities of working with sound recordings of traditional music through the disciplines of ethnomusicology, anthropology, field recording, and soundscape composition. My main motivation is to look for ways to present audio recordings through several compositions, based not only on recordings of music and oral history, but also on the sounds of the environment and activities from the lives of people who are part of the research. This creative approach can bring new possibilities to work with the sonic narrative by clarifying the essential circumstances of the origin of the songs and the environment in which they are performed, as well as the cultural context in which this happens. The practical part of the thesis will be based on a case study of research led by visual artist and ethnographer Lucia Nimcova in the Carpathian Mountains (in Zakarpathian Ukraine and Slovakia), on which I am collaborating as a sound artist.
Andrejs Poikāns (2020–2023)
Investigating the Phenomena of Paracusia and Inner Auditory Experience
Investigating the phenomena of paracusia (auditory hallucinations) and inner auditory experience, my work deals with the ways a computer system can gain ‘knowledge’ of these psychoacoustic processes by means of machine listening, and with how such data can be used artistically. My aim is to explore the latter through these practical and theoretical approaches: working with field recordings and sound synthesis based on the documentation of these phenomena, case studies, and an analysis of speech.
The potential result of this research will bring new knowledge to the field of sound perception, leading to either an acousmatic musical composition or a sound art installation that incorporates George E. Lewis’s notion of computer improvisation [1]. Conceptually, my work will deal with unconscious and conscious modes of listening—referencing Pauline Oliveros’s distinction between hearing and listening—as well as situations of over-hearing and auditory hallucinations occurring in those with certain mental illnesses [2]. In contrast to making an objective study of sound, my goal is instead to explore the subjective intimacies of inner auditory experience to reflect on processes of thought.
[1] Lewis, George E. “Why Do We Want Our Computers to Improvise?” Oxford University Press, 2018.
[2] Oliveros, Pauline. Deep Listening: A Composer’s Sound Practice. iUniverse, 2005.
Ranjith Hegde (2020–2022)
Electronic Music in Context of Interdisciplinary Performance
Throughout the long history of individual artistic development, there have been few attempts at procuring an expressive and intelligent dialogue between the various disciplines of art. In attempting to start this dialogue, several potential problems concerning integration appear, one of them being an ambiguous system of communication. Part of my project, then, will be dedicated to creating or exploring a common language to facilitate communication between disciplines.
This, however, entails re-examining broader concepts and smaller parameters related to music through the prism of other disciplines. For example, does the term “dynamics” only mean variation in volume, or can it also mean intensity, speed of execution, or quantity, as a dancer would commonly understand the term? Also, how can the spatial distribution of events/ideas be fundamentally rethought and expanded? These questions, along with many other sub-questions about the interdisciplinary facets of art, will be explored in my research by using a conjunction of artist pairs working through specific restrictions.
The second part of the project examines the idea of dependencies. The most important element for an artist in any ensemble, be it composed, improvised, single- or multidisciplinary, is listening. While such an operation is simple enough between people of the same discipline, there is a need to reinforce this concept in multidisciplinary setups. One option is to ascribe part of one’s control to other artists. Making artistic choices and decisions purely based on events in the other discipline is one way (e.g. mapping the development of one idea into the next based on when or how a movement artist executes a certain pattern). Another is to literally hand over control, such as OSC data from motion capture mapped to a control parameter in SuperCollider. This also explores the interesting concept of substituting the activities of other artists for parameters controlled by incidental and aleatoric systems. Inherently, this involves sound artists building setups that enable mobility and choosing concepts flexible enough to accommodate such dependencies.
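To make that last route concrete, the sketch below hand-encodes a single-float OSC 1.0 message using only the Python standard library and sends a (fake) motion-capture value to SuperCollider's default language port, 57120. The address name and value are invented for illustration; on the receiving side, an OSCdef could map the float to any synthesis parameter.

    import socket
    import struct

    def osc_message(address, value):
        """Encode a single-float OSC 1.0 message by hand."""
        def pad(b):                       # OSC strings are null-padded to 4 bytes
            return b + b"\x00" * (4 - len(b) % 4)
        return (pad(address.encode()) +
                pad(b",f") +              # type tag: one float32
                struct.pack(">f", value)) # big-endian float

    # Send a (fake) motion-capture height value to sclang's default port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/mocap/height", 0.73), ("127.0.0.1", 57120))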
The third focus of the project is space. Interdisciplinary setups afford a unique opportunity to reconsider space not only in the context of loudspeaker configuration, but as the canvas onto which artistic events are distributed, executed, and witnessed. This will lead to exploring spatial tensions created by movement (or by static configurations), exploring vantage points from which to witness localized events, and experimenting with artist-audience placement. Seeing as dance, theatre, and the visual arts use different varieties of space for performance, exhibition, or installation, this project will exploit these differences to find new ways to perform and witness electronic music (with or without other disciplines).
Kim Ho (2020–2022)
WAT(ER), AM I? | Listen… but where should we begin?
WAT(ER), AM I? | Listen… but where should we begin? will explore the intrinsic links between sound and identity, focusing on water as its principal subject matter. The quest for “identity”, seeking an answer to the fundamental question “what am I?”, has become a vital issue in the modern era, characterised by convenient mobility, greater levels of migration, personal relocation, and the normalisation of a nomadic lifestyle. Noticeably, while the topic of “musical identity” has been extensively studied in current scholarly literature, the notion of “sonic identity” is seldom explored. To address this gap in research, this project will examine how various sonic environments interact with our personal process of identification.
For a cogent investigation into the topic, this research will focus on the themes mentioned above and on the sonic properties of a universal entity that constitutes a principal part of our everyday sonic environment—water. Water is the first sound we hear in our mothers’ womb; throughout our lives it embeds itself in our sonic memory because we simply cannot live without it. Focusing on this fundamental and ephemeral sound source, I will investigate the connections between sound and identity, such as how the sonic environment can influence one’s process of identification and how sound can be conceived and recognised by human perception. The research will employ a cross-disciplinary approach, combining perspectives from acoustic ecology, psychoacoustics, and ethnomusicology. The findings will be presented in one or more forms of sound art: an interactive sound installation and/or a surround-sound composition performed live. Ultimately, this project aims to awaken people’s awareness of their sonic surroundings, encouraging them to listen more attentively and inspiring them to beautify their sonic environment.
Martin Hurych (2020–2022)
Development of Listening: Recording Sounds of Daily Activities in the Acoustic Environment
This project considers how society’s acoustic environment affects individuals. There will be an emphasis on investigating how people can learn through their daily activities and interactions with the public environment. Additionally, my work will focus on discovering how various experimental methods of listening, and their associated technology, can act as tools that extend and facilitate new sonic experiences.
Overall, this research seeks to develop the capacity of listening to one’s surroundings, and to use this faculty more generally in life and in artistic practice. The subject of the analysis will be recordings made of selected daily activities, particularly those that are habitual and often unconsciously lead us to avoid encountering new experiences. In contrast to this, my goal is to extend limited perceptions of reality through the actual content and context of the recordings, thereby placing everyday life into an experimental learning process.
Lucie Nezri (2020–2022)
indeterminate — incomputable
Indeterminacy is one of the most important notions in twentieth-century science and contemporary music. Its emergence can be traced back to discoveries in the field of quantum mechanics, which had a decisive influence on musical practices. The early ‘indeterminate’ experiments found in the music of John Cage and Iannis Xenakis are exemplary of the development of different compositional strategies—along with, sometimes, the radical and polarized philosophies that resulted from their respective understandings of indeterminacy. With time, this notion has revealed its paradoxical facets and numerous nuances, both in science and in music. In particular, new light has been shed on indeterminacy and its potential expressions by recent evolutions of computability theory.
The latter will be central to this research and will serve to reveal a compositional, and perhaps ethical, standpoint in the face of indeterminacy. While this notion was initially used as a means for composers to generate more complexity in sounds and macro-compositional structures, here indeterminacy will be examined from its limits. Specifically, an aspect of this research will consist of approaching indeterminacy from computational limits, considered as interstices of a particular, compositional indeterminacy. The inherent logical and mathematical dimensions of computation will be regarded as inspirational starting points for composing. They will be explored as different gradations and loci of indeterminacy, imbued with various degrees of determinacy.
Wilf Amis (2019–2021)
After the Twelve Tones: Using Post-12-TET Tunings in Electronic Music
Since 12-tone equal temperament was enforced as a global tuning standard, there have been considerable barriers to entry, in all kinds of pitch- and harmony-based musics, for any musicians interested in exploring alternatives: from the lack of education at young ages to the incompatibility of instruments like guitars and pianos.
In my own research I address the incompatibility of electronic instruments and software. This is done directly through the building of synthesizers and the writing of text guides to some of the existing possibilities for electronic musicians; the research also encompasses the development of new tuning systems, as well as compositions and performances using these systems.
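The arithmetic behind any equal temperament generalizes directly: dividing the octave into N equal steps gives a step ratio of 2^(1/N). A minimal sketch, with 19-TET and an A440 reference as arbitrary choices:

    def edo_freq(step, divisions=19, reference_hz=440.0):
        """Frequency of a scale step in an N-division-per-octave
        equal temperament: each step multiplies by 2**(1/N)."""
        return reference_hz * 2 ** (step / divisions)

    # First octave of 19-TET above A440: 19 equal steps of ~63.2 cents.
    scale = [round(edo_freq(s), 2) for s in range(20)]
    print(scale)           # 440.0 ... 880.0
    print(1200 / 19)       # step size in cents: ~63.16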
I will write a manifesto justifying the dismantling of the 12-TET hegemony and the diversification of tuning. The manifesto will attempt to reframe the “xenharmonic”/“alternative tuning” movement as “Post-12-TET”: a turn away from Harry Partch’s ideal of appropriating from historically and geographically other cultures, and towards a movement motivated by an interest in the future. Post-12-TET tuning practice, rather, encompasses the movement from Helmholtz onwards as a modern phenomenon, one that will continue to grow and shape the future of experimental and popular music.
Margherita Brillada (2019–2021)
Radio Art: An Expression of Social Relatedness
This research is conceived as active participation in social reality, aiming at the development of a musical language and a compositional approach intended as tools for critical reflection on current issues. By raising the awareness of audiences, and by thinking of Radio Art as an expression of social relatedness, the project focuses on the production of radio artworks and podcasts.
Historical and theoretical research on the body of Radio Art was fundamental to understanding how radio itself and its audience have expanded and changed in recent times. Radio, podcast, and music streaming platforms each have their own public and their own ways of listening. Questions about the context of new media and its audience are cardinal points for the choice of compositional methods and for how to shape the sound accordingly.
Podcasts are on-demand listening platforms that allow a conscious and intimate way of listening. In such formats, my compositional methods deal with the concept of the linearity of time, resulting in a sonic narrative structure. FM radio and related online streaming platforms, on the other hand, are usually listened to in everyday situations where it is difficult to predict when the audience is tuned in. When composing for the radio, it is possible to overlook the concept of the linearity of time. I argue that Radio Art should avoid dealing with finite temporal objects with a beginning, a middle, and an end, and should instead allow each radio listener to perceive a different piece and create the final version from a framework of possibilities. The compositional approach for Radio Art should be an open acoustic end result, welcoming the idea of losing control of a temporal structure.
Francesco Di Maggio (2019–2021, Instruments & Interfaces)
Drawing Inferences: Designing Interactive Music Systems for Real-time Composition
My research at the Institute of Sonology aims to design an interactive music system capable of capturing, analysing, and modelling the incoming musical data created by the musician, and of using those data as musical agents to drive the live performance in real time. Due to its interactive nature, the system will establish a ‘feedback loop’ between the performer interpreting an open, graphically notated musical score and the digital sound processes running on the computer. The outcomes will be recorded and, after a phase of troubleshooting, performed by trained musicians in the form of live exhibitions.
Having approached live electronics in which both the music produced and the electronic processes were written in a precise, linear manner, I felt limited by the expression possible when the performer reads the score and the sound processes must be synchronised. The initial need to achieve greater control of these musical aspects gradually shifted in favour of more natural and even unexpected musical outcomes. The use of graphic scores and reactive systems started to nourish a sense of openness to non-linearity, welcoming the possibility of accepting human ‘failures’ and system errors as part of the process.
Thanks to a close connection with STEIM, and drawing on its mentorship and expertise, I will be able to sketch the musical insights and build the necessary bridges in the form of custom hardware and software: motion tracking and pitch- and gesture-following techniques will be used for selecting, mapping, and synchronising continuous sound processes to gestures. On the basis of these premises, new musical inferences will be drawn in the form of mixed-music compositions for instruments and live electronics, where the emphasis will be placed on defining compositional strategies for real-time human-computer interaction.
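Pitch-following is one of the techniques named above; a common approach (though not necessarily the one this project will use) is a naive autocorrelation estimate over each incoming frame:

    import numpy as np

    def estimate_pitch(frame, sample_rate=44100, fmin=60.0, fmax=1000.0):
        """Naive autocorrelation pitch estimate for one audio frame."""
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
        lag = lo + np.argmax(corr[lo:hi])        # strongest periodicity
        return sample_rate / lag

    # Test on a synthetic 220 Hz tone.
    t = np.arange(2048) / 44100
    print(round(estimate_pitch(np.sin(2 * np.pi * 220 * t)), 1))  # close to 220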
Giulia Francavilla (2019–2021)
Immersive Sound: In-Between Spaces
The overarching focus of my research centres upon immersive sound: investigating some of its possible ramifications through sound composition and theoretical research.
Namely, the research focuses on three main aspects related to the topic: Presence, Distance, and Transformation. These aspects shape my practical investigation and my path through theoretical reflections, underlining the foundation of immersive art in the relationship between the individual and the external world. With respect to this, the practical research starts from wind as a sound source belonging to the non-human environment, investigated through the perspective of algorithmic composition: field recordings of wind are used as a source of control for the creation and manipulation of synthetic sounds, through a step-by-step process of analysis, data mapping, and manipulation. The process is applied to the frameworks of live coding and fixed-media composition, constituting an outcome that oscillates between extremes: “familiarity” and “abstraction”, “data-driven” composition and “human” intervention.
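A minimal sketch of that analysis, data mapping, and manipulation chain, with synthetic noise standing in for a wind recording and an arbitrary envelope-to-frequency mapping:

    import numpy as np

    sr = 44100
    # Stand-in for a wind field recording: noise with slow gusts.
    t = np.arange(sr * 2) / sr
    wind = np.random.randn(len(t)) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.3 * t))

    # Analysis: RMS envelope in 50 ms windows.
    hop = int(0.05 * sr)
    env = np.array([np.sqrt(np.mean(wind[i:i + hop] ** 2))
                    for i in range(0, len(wind) - hop, hop)])

    # Data mapping: envelope -> frequency of a synthetic oscillator,
    # an arbitrary choice between "familiarity" and "abstraction".
    freqs = 200 + 600 * env / env.max()

    # Manipulation: render the oscillator with the wind-driven glissando.
    phase = np.cumsum(2 * np.pi * np.repeat(freqs, hop) / sr)
    synth = 0.2 * np.sin(phase)
    print(len(synth), freqs.round(1)[:8])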
Within this perspective, a connotation of virtuality takes form when referring to the space created by a sonic composition, and its dialogue within the physical space contains multiple potential relationships with the perceiving self.
The research is shaped through the use of different media such as multichannel speaker setups, headphones, and binaural technology, alongside experimentation with VR technology. The latter is being investigated as a distinct side of the research and put in relation to its musical aspects by keeping the focus on simulation, abstraction, and the perception of space.
Tornike Karchkhadze (2019–2021)
Sound Synthesis and Music Generation with Artificial Neural Networks
My research at the Institute of Sonology concerns sound synthesis and music generation with Artificial Intelligence (AI) and Machine Learning (ML). My focus is on using AI and ML with the most low-level means of sound, digital audio samples (in the time and/or frequency domain), seeing audio as a database. Music and sound in general have a long history of applying computing technology, the roots of which can be traced back to the origins of electronic music at the beginning of the 20th century and even before. Today, computer software is used to deal with all the basic aspects of music and sound, among other things: recording, sound synthesis and design, composition, and music programming. These techniques have been around for a long time and are familiar to professional musicians and amateurs alike. However, new developments in AI and ML, particularly the recently resurgent Artificial Neural Networks (ANNs), have now entered the field. The data-driven approach of ANNs to music and sound is unprecedented and unconventional in comparison to existing approaches; this has huge implications and promises to change the game. In short, Artificial Neural Networks are already blurring the lines between music composition, sound synthesis, and audio generation, opening up completely new horizons for experimentation, as well as for artistic and scientific research.
In response to these developments, the purpose of my project is to create a software tool for audio generation with ANNs. The tool will be capable of listening to and learning from music, as well as from whole databases of any audio. In doing so, the ANN will acquire ‘knowledge’ and generate sound in response to input material, simulating and mixing its musical and/or sonic characteristics. One of my interests, however, will be to experiment with ANNs that mix input materials, which will hopefully give insightful outcomes for composing. From this, my main research question examines the extent to which a sample-by-sample approach (i.e. the waveform as database) can work and give ‘meaningful’ results. In other words, how can the capabilities of ANNs render high-level abstractions, like musical meaning or representative sound content, from low-level sound essentials?
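To make the sample-by-sample idea concrete, the toy sketch below trains a tiny one-hidden-layer network to predict the next sample of a sine wave from the previous 32, then generates audio by feeding its own predictions back in. It is a stand-in for the far larger architectures (such as WaveNet-style models) that sample-level generation of real audio requires.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 32                                   # receptive field in samples
    signal = np.sin(2 * np.pi * 5 * np.arange(2000) / 1000)

    # Dataset: predict sample t from the N samples before it.
    X = np.stack([signal[i:i + N] for i in range(len(signal) - N)])
    y = signal[N:]

    # One-hidden-layer network, trained by plain gradient descent.
    W1, b1 = rng.normal(0, 0.1, (N, 64)), np.zeros(64)
    W2, b2 = rng.normal(0, 0.1, 64), 0.0
    lr = 0.01
    for epoch in range(200):
        h = np.tanh(X @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - y
        # Backpropagation of the mean squared error.
        gh = np.outer(err, W2) * (1 - h ** 2)
        W1 -= lr * X.T @ gh / len(y)
        b1 -= lr * gh.mean(axis=0)
        W2 -= lr * h.T @ err / len(y)
        b2 -= lr * err.mean()

    # Generation: feed predictions back in, sample by sample.
    window = list(signal[:N])
    out = []
    for _ in range(100):
        nxt = np.tanh(np.array(window) @ W1 + b1) @ W2 + b2
        out.append(nxt)
        window = window[1:] + [nxt]
    print(np.round(out[:10], 3))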
Aleksandar Koruga (2019–2021)
Compositional Spaces
The goal of my research is to find a framework of artistic expression starting from a reinterpretation of Xenakis’s Symbolic Music theory. This aim will first involve a redefinition of time and duration in compositional space. By taking into account an independent time measure, I intend to extend Xenakis’s proposition regarding uniqueness, thus enabling a different framework for the relationships between amplitude, frequency, and time as fundamental descriptors of a musical system.
Secondly, by inferring heuristics on such systems I intend to use and implement this theory as a lens through which I will compose and describe my own musical material. The objective of this enquiry is to have a system of mathematical descriptors, which can be used and related to functional concepts in live performance and composition.
The potential outcome of this research then is not to realize an all-encompassing mechanism but rather to create a series of musical works exploring possible relationships within a given context of constraints.
Kamilė Rimkutė (2019–2021)
Listening Inside the Network of the Brain
My research focuses on network systems, aspects of universality, and their applicability in artistic approaches. Network theories are widely used by scientists in many fields, such as neuroscience, mathematics, sociology, and many other areas of study. Currently, I am merging two of my major passions into one holistic concept: the science of the human brain and sound. For the most part this means I am researching functional brain connectivity, its visualisation using graph theory, and its sonification via software and electronics. I am delving into human-computer interaction as a reaction to the presence of these phenomena in modern society, as well as examining human brain interactivity for the purpose of raising awareness about the most mysterious organ in the human body. Based on the neurofeedback method, where subjects are expected to change their neural activity in response to real-time displays of brain activity, it can be postulated that humans have far more control and influence over their own minds than they usually believe.
Interactions between the brain’s two hemispheres, its different networks, and neural network interconnectivity can serve as a hierarchical structure for systematic musical compositions, using graph theory as one of the intermediate tools for structural analysis of the brain.
Keywords: complex systems, small-world networks, hierarchical modular networks, non-linear systems, stochastic processes, connectome, functional connectivity, graph theory.
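As a toy illustration of the mapping described above (the graph model, scale, and degree-to-pitch rule are all arbitrary assumptions, not derived from real connectome data), a small-world graph can be traversed to yield a sequence of musical events:

    import networkx as nx

    # Toy stand-in for a functional connectivity network: a small-world
    # graph (Watts-Strogatz), a model often applied to brain networks.
    graph = nx.watts_strogatz_graph(n=16, k=4, p=0.1, seed=1)

    # One illustrative mapping: traverse nodes in breadth-first order;
    # node degree chooses the pitch and sets the duration.
    scale = [60, 62, 63, 65, 67, 68, 70, 72]        # MIDI, C-minor-ish
    events = []
    for node in nx.bfs_tree(graph, source=0):
        degree = graph.degree[node]
        events.append((scale[degree % len(scale)], 0.25 * degree))
    print(events[:8])   # (pitch, duration) pairs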
Mihalis Shammas (2019–2021, Instruments & Interfaces)
Primitive Electronic / Materiality in Creative Tools
Creative tools are the objects through which we explore ideas and emotions, gently sculpting them into form as we project them into the outside world. These tools are both the medium and the channel that guides this inchoate information from the subconscious to the conscious in the process of creation. In music, we call these instruments.
More often than not, creative tools, like most objects, are man-made and have evolved through time to closely follow contemporary technology. In this sense, they can be viewed — as can the rest of the technical world — as a constant and continuous manifestation of knowledge. Borrowing Lyotard’s distinction (in The Postmodern Condition) between “narrative” and “scientific” forms of knowledge, a duality that characterizes the shift from the pre-industrial to the industrial era, we will form a theoretical context in which creative tools may be reduced to symbolic combinations, either conflicting or complementary, of these two conceptual spheres. In their narrative/scientific nature, they constitute technology that is created to produce stories. In this framework of thought, they can be examined and constructively analyzed in terms of their architecture, their materiality, their ways of interacting with human players and eventually — and most importantly — the creative potentials they contain. On this basis, my hardware research and development will try to assimilate a blend of primitive and electronic technologies into one single object, in an effort to extract the essence of each realm’s creative attributes. In the design of such a “hybrid” instrumental apparatus, the interaction interface could be seen as a combination of a “gestural system of effort” with a “gestural system of control” (as described by Baudrillard in The System of Objects). This structure ought to be material yet its technology transparent: a visible, exposed, and large architecture that can be visually deconstructed into its component parts and thoroughly comprehended. This is a search for a technology that has a physical substance and a bold presence in space; that can physically blend with a performer and be communicated visually and emotionally to an audience, at the same time bypassing the limitations imposed by its own materiality.
Mai Sukegawa (2019–2021)
Audience as Art: Interactive Audiovisual Work to Be Completed by the Audience
Relationships between artworks and audiences have long been questioned by modern artists and philosophers. Challenges to these paradigms led to the emergence of interactive art and relational art, where audiences can participate rather than just wait for works to bring an effect. However, there is still room for questioning the role audiences play in this context, specifically whether or not they are acting on their own initiative. In connection to this, my research aims to define what influences an audience’s behaviour. At this point, I suggest there are five main points to consider: multisensoriality, interactivity, embodiment, subjectivity (as described by Paul Valéry in his Œuvres and Cahiers), and ambiguity or openness (as discussed by Umberto Eco in The Open Work). Moreover, because of the development of technology, multisensory work has become standard in contemporary art, most often combining visual and auditory elements. However, an obstacle to audience participation in this field often occurs when the visual and auditory elements are imbalanced or even disconnected, as this makes audiences feel less engaged.
To resolve this issue I have hypothesised that there are two main approaches to interacting with an audience: relational and immersive interaction. Relational interaction means that an audience’s participation and actions are necessary elements of a work. On the other hand, immersive work directly involves audiences and can sometimes cause cross-modal perception, where an audience may experience an illusion of touch, smell or taste. Additionally, interpretation of this illusory perception depends on our personal experiences. Therefore, it can be said that immersiveness is strongly relevant to the themes of embodiment and subjectivity. Finally, openness is a term that Eco used in his essay “The Open Work”, and it is roughly understood as implying ambiguity. He summarised this by saying that “[o]pen works are characterised by the invitation to make the work together with the author.” Therefore, if a work is not ‘open’ enough, there is no room for an audience to participate.
As I have a musical background, and also have a synaesthetic sense of connection between the visual (colour) and sound (pitch), it is feasible for me to examine the field of audiovisual art with both a theoretical and an intuitive approach. This will include practical experimentation with works that apply colour theory, relationships between colour spectra and audio frequencies, as well as phenomena exhibiting sound-colour synaesthesia.
Atte Olsonen (2018–2021)
For the purposes of sound design and stage performance, I am researching the concepts of presence of mind and listening as a compositional strategy. This research delves into how active listening and an acute awareness of the present moment can be used to reform sound design. Ultimately, sound design can then become a more fluid, dynamic, reactive, and even central or leading part of a performance.
Presence exists in moments where the performer is aware of the current situation of a performance—following her/his co-performers closely, knowing what possible paths can be proposed with their individual output, and then simply feeling what is the right thing to do next. This experience can be referred to as a kind of flow experience: a non-verbal connection with both the performing team and the space the performance takes place in; a state of mind where decisions happen more intuitively than at a conscious level.
My research takes form in practical trials where these different modes of presence of mind and listening as a compositional strategy are applied to my creative process. The theoretical aspect of this research therefore combines theatre research, sound studies, studies in improvisation, DSP coding, phenomenology and the philosophy of presence.
Guzmán Calzada Llorente (2018–2020)
Musical Explorations Through Spaces
With site-specific locations in mind, I am expanding the aural conception of what a room is and how it operates. As a general strategy, I plan to understand acoustic spaces as energetic places, locations with inherent autobiographies that can be manifested by articulating and uttering their resonances. This primarily occurs when working with room reverberation and electromagnetic activity, and even by exciting objects which inhabit a room through different vibrational methods. Within this framework, a piece of music may stand as a sort of adaptive sculpture, articulating a room’s history.
One branch of my research focuses on electroacoustic pieces for specific venues, working with audio sources that trigger fixed oscillators when certain coincidences occur in their frequency spectra. These electronic instruments, or a particular audio source, are related to a venue by expressing a perspective on both its acoustic-physical properties and its poetic dimensions. Another branch of my work approaches the process described above by filtering and re-synthesizing an original audio source, so that transformations of a musical or audio source can be understood through the way in which a room affects them.
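As an illustrative aside, a minimal Python sketch of one way such a spectral ‘coincidence’ trigger could work (all names and thresholds below are hypothetical, not the project’s own implementation):

```python
import numpy as np

def coincident_oscillators(frame, sr, oscillator_bank, tolerance_hz=5.0):
    """Return the fixed oscillator frequencies that coincide with a
    spectral peak of the incoming audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    # crude peak picking: bins louder than both neighbours and above a floor
    floor = spectrum.max() * 0.1
    peaks = [freqs[i] for i in range(1, len(spectrum) - 1)
             if spectrum[i] > floor
             and spectrum[i] > spectrum[i - 1]
             and spectrum[i] > spectrum[i + 1]]
    return [f for f in oscillator_bank
            if any(abs(f - p) < tolerance_hz for p in peaks)]

# hypothetical usage: trigger whichever oscillators a room resonance excites
sr = 44100
frame = np.sin(2 * np.pi * 220 * np.arange(2048) / sr)  # stand-in for a live input frame
print(coincident_oscillators(frame, sr, oscillator_bank=[110.0, 220.0, 330.0]))
```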
Practically, investigating this will involve realizing several solo and ensemble pieces — ones that directly emerge from different filtering and re-synthesizing of audio and also graphic sources (e.g. scores). Overall, I expect this project to address notions of how understanding a space can reveal many different spheres of meaning.
Tony Guarino (2018–2020)
Tapping into Place
Practicing percussion tunes our experience of vibrant, graspable [things] in the world. They appear charged with instrumentality, reflecting our compositional aims and tactile fluency. However, this projected musicalization may restrict the potential agency of objects we find.
By prolonging periods of experimentation — suspending crystallization — we can unearth distinct relationships to materials beyond timbral extraction.
Objects then take the lead while remaining vitally rooted in their found situation or purpose. Gradually, site-specific performances, installations, and document-artifacts emerge through these personal moments of discovery.
Each work requires particular methods for transferring energy between assembled elements. To expand upon conventional percussion techniques, I develop indirect approaches (electrostatic conduction, wind-powered resonance, rain collection, etc.) to facilitate negotiations between intended action and material response. Railings, glass bottles, and office trays become animated associates that inform rhythmic, spatial, and formal decisions.
Participants are invited to slip through their immediate identification of the sounding object and remain continuously attentive to intersensorial differences. Attempting to comprehend the totality of each social-material-acoustic encounter generates unique states of dissociative listening. My intention is to tune into this affective exchange between cooperative bodies — recalibrating the experience of a beach, a city street, a concert hall.
Eunji Kim (2018–2020)
A Game Environment as Algorithm to Generate a Musical Structure
Given that computer games are being highlighted as a platform for interactive media, I am proceeding with research examining the hierarchical nature of such games and its similarity to many examples of algorithmic art. Algorithms used in computer games reveal a hierarchy, where the control of a system manages objects and records data associated with these objects. In short, I see a similarity between many of the algorithms used in games and those I use to make my own algorithmic music. Thus, I believe it is possible to take the systems used in a game and turn them into a musical composition, thereby allowing musical parameters to be derived from the numeric data of game objects.
My compositional approach uses algorithms to move beyond the fixed idea of the musical work. The algorithm, when seen through an art game, presents a model that determines how to generate sound structures in real time. Designing musical structures in this way means that one algorithm can make thousands of potential choices. This does, however, mean that the type of data used as input has a massive impact on the generated music. Also, when using rapidly changing data, it is necessary to map the movements of the data onto appropriate sound movements (a type of data processing similar to that seen in the area of sonification).
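By way of illustration, a minimal sketch of such a derivation (all game-object fields and musical ranges are hypothetical):

```python
# Each game object's state is read once per tick and mapped onto note parameters.
game_objects = [
    {"x": 120, "y": 45, "speed": 3.2, "health": 80},
    {"x": 300, "y": 200, "speed": 0.5, "health": 15},
]

def scale(value, lo, hi, new_lo, new_hi):
    """Linearly rescale a game value into a musical range."""
    value = max(lo, min(hi, value))
    return new_lo + (value - lo) * (new_hi - new_lo) / (hi - lo)

def object_to_note(obj):
    return {
        "pitch": round(scale(obj["y"], 0, 480, 36, 84)),           # screen height -> MIDI pitch
        "velocity": round(scale(obj["health"], 0, 100, 20, 127)),  # health -> loudness
        "duration": scale(obj["speed"], 0, 5, 1.0, 0.05),          # faster objects -> shorter notes
    }

for obj in game_objects:
    print(object_to_note(obj))
```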
I am additionally trying to deal with data that is not part of commercial games. For instance, I want to visualize various types of data that can be encountered in real life, designing a model that allows users to play with data and to convert it into artistic output. The aim of this is to allow the user/audience to interact more actively with musical works, making it possible to manipulate music by adding ‘time’ as a dimension. This area of appreciation can also be extended by adding a dimension of ‘direct experience’, whereby the audience (user) can directly manipulate the music.
Michael Kraus (2019–2020, double degree with TU Berlin)
Solarsonics – A Theoretical and Artistic Investigation
My research raises the question of how solar energy can be used as a contemporary leitmotiv for sound creation. Smallwood (2011) describes recent developments in the creation of sound art powered by photovoltaic technologies, and Smallwood & Bielby (2013) refer to Solarsonics as a pattern of ecological praxis. Taking into consideration the latest developments in the climate crisis, philosopher Michel Serres describes the sun as our energetic horizon and as the ultimate capital (ngbk, 2014).
Therefore I investigate how morphing forms of solar energy can be used to articulate this relationship in the age of the Capitalocene and climate change.
Donna Haraway (2016) suggests the Chthulucene, and David Schwartzman “Solar Communism”, as seminal directions for our societies. Within my work I draw connections between them and align aspects of them with Vilém Flusser’s (2000) model of human communication.
A leitmotiv is used as a metaphor for human communication and thus needs to be interpreted. It is grounded in a deeper level of understanding than paradigms. It should encourage sonic experiments that do not reduce to purely human categories and rational thought, but instead inherit the traditions of storytelling, meditation and ecstatic encounters, and place human practices as belonging among myriad others.
Keywords: Solarsonics, Energy, Communication, Ecology, Experimental Futures
Flusser, V. (2000) Kommunikologie
Haraway, D. (2016) Staying with the Trouble – Making Kin in the Chthulucene
ngbk (2014) The Ultimate Capital is the Sun
Smallwood, S. (2011) Solar Sound Arts: Creating Instruments and Devices Powered by Photovoltaic Technologies
Smallwood, S. & Bielby J. (2013) Solarsonics: Patterns of Ecological Praxis in Solar-powered Sound Art
Toby Kruit (2018–2020, Instruments & Interfaces)
Bodily Awareness in Electronic Music Performance
As a musician working with digital electronics, there comes a point in the creative process where expression has to be quantified in order to enter a digital model. Composing interactions with computers/software traditionally (and possibly inevitably) resorts to ‘mapping’ connections between signals from outside the system to functions in a code. However, the digital is inherently limited and discrete, in such a way that any interaction with it is oppressed by approximation – using digital media for music means translating ‘human being’ to ‘computer data’, and back.
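A tiny sketch of the translation described above: a continuous gesture is clipped and quantised into a discrete controller range (the signal here is simulated, and the 7-bit resolution is only an assumption):

```python
import numpy as np

# A finely sampled gesture, standing in for a continuous bodily signal.
t = np.linspace(0, 1, 1000)
gesture = 0.5 + 0.4 * np.sin(2 * np.pi * 1.5 * t) + 0.02 * np.random.randn(1000)

# 'Mapping' into the digital model: clip to the valid range, then
# quantise to 7-bit controller steps.
clipped = np.clip(gesture, 0.0, 1.0)
quantised = np.round(clipped * 127) / 127

# The residue is what the model cannot represent: the approximation
# the text refers to.
print("max approximation error:", np.max(np.abs(clipped - quantised)))
```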
The focus of my research is on how this translation can be approached from outside the digital oppression. Taking the human as the starting point for human-computer interaction, I am working on methods of interaction that are based on material and bodily properties. The goal is to evoke states of enhanced bodily awareness, placing participants in the ‘moment’ that is not concerned with representation, but with experience. Existing in this sensorial state means simultaneously accepting all stimuli as a continuous, chaotic signal from the world to the body, and continuously (re-) adjusting the body based on the perceived forms of these stimuli.
Because traditional hardware and software are based on models and discretisation, they may face problems in quantifying the noisy, imprecise, reflexive, semi-automatic conditions of the human body. The challenge of picking up or amplifying these profoundly human movements presents opportunities in all domains of the interaction: the performer’s body, material properties, electronic signals, and digital conversions. Practically, this involves making electronic textiles & fabric sensors, activating physical materials using transducers, and involving the whole body in performance.
Simone Sacchi (2018–2020)
“Can you hear that?” — Amplifying Discrete Sounds for Live Performances and Installations
Technology allows us to extend the limits of human senses, and my research aims to give the listener a new perspective on what can be perceived aurally. In essence, my work builds out of this interest by exploring soundscapes generated by electromagnetic fields. However, my current research extends this by bringing the act of hearing into the realm of the “microscopic” — rescaling the amplitude of hidden sounds, ranging from almost imperceptible ones to those that are truly inaudible.
My present work falls into categories where I work with a variety of materials, living beings and mechanical objects (i.e. animals, humans and plants, or machinery such as recording devices and studio equipment). Additionally, I plan to work with musicians who will prepare performances for “mute instruments”. The latter refers to performances where only the tiniest sounds of instruments and performers’ movements (or their bodies) are amplified. This approach gives the audience an idea of what happens inside an instrument, even while it is not being played.
The micro-scale in sound is attributed to the time domain, yet there is another side of this scale to consider; in this respect my work originates from sounds that are on the verge of the audible, encouraging the listener to hear fainter and fainter sounds by using different and unorthodox technological approaches for microphoning and amplifying. Consequently, my work also focuses on minimising the noise-floor and avoiding unwanted feedback in order to explore different ambiences and materials.
I plan to use the above strategies for installations, where discreet sounds occur all the time despite our senses being unable to perceive them. In this sense, installations can be understood as sound lenses for examining intimate worlds we cannot normally access. For example, a miniature anechoic chamber may act as a controlled environment from which sounds can be projected into the external world.
Jad Saliba (2018–2020)
Stations of Exception: Revisiting Analog Radio for Live Performances
My present research primarily builds upon experiments with circuit-bending radios. A central focus of this involves a live performance setup made out of several receivers and transmitters continually interfering with one another to generate new sounds. This includes discovering new random combinations of inharmonic tones, whose frequency spectrum shifts completely when the tuning frequency of a given radio is modified, as well as employing micro-samples of local radio broadcasts. However, these sonic processes or results are not solely dependent on the idiosyncratic elements of a circuit, as electromagnetic transmissions in the microenvironment inherently have an effect.
My initial reasoning for wanting to use circuit bending was inspired by the unpredictability of radio sonic artifacts emerging in the frequency band between stations. Likewise, the intricate sound patterns evolving from static noise frequently feature abstract voices in the background. These qualities are often heard on shortwave transmissions, long-distance AM broadcasts, and other types of satellite radio broadcasts. However, tuning into these frequencies/artifacts is largely dependent on ecological factors such as weather, natural sunlight, and electromagnetic interference, as these factors all shape the final result of the information received.
Ernests Vilsons (2018–2020)
Well-Structured Vocalisations: An Attempt to Imitate Birdsong
Birds. Thousands of different species, their songs and calls varying in kind and complexity. Both within the individual acts of vocalization and in the way these vocalizations succeed one another, patterns can be observed. Bird vocalizations are produced within and are influenced by their immediate environment – flora, fauna, light, wind, etc. – and they are a means of communication. But from the recognition of bird vocalizations as fascinating sonic structures to a composition of sound and its organization that would derive from them, a series of intermediary steps must be taken. My research is concerned with these ‘intermediary steps’ as much as with bird vocalizations.
The research originates within the aural, within the experiential. Hearing as a mode of being, from which a network of relations unfold to become re-contextualised, taken apart, qualified. Through this unfolding, the aural — the fleeting origin — is exceeded while being preserved in the unfolded; a temporary move away from hearing/listening to their ‘product’ — the actions (analysis, classification, re-synthesis, etc.) and material reconfigurations (recordings, scores, programs, etc.) they instigate.
Analysis and formalization as a reduction of sound (birdsong) to a limited amount of parameters; a reduction that eventually will determine the synthesis of the imitation. The parameter space, shaped by, yet not limited to, that which is analyzed and formalized, provides a possibility for gradated movement away from the object of analysis (a specific birdsong) toward sound structures that are situated anywhere between close resemblance to the object represented and a complete non-resemblance.
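One could sketch this ‘gradated movement away’ in a few lines of Python: a song reduced to (time, frequency) breakpoints is resynthesized, with a resemblance coefficient interpolating between the analysed contour and an arbitrary, unrelated one (all breakpoint values below are invented for illustration):

```python
import numpy as np

sr = 44100

# Analysed birdsong reduced to (time, frequency) breakpoints -- values hypothetical.
song   = np.array([[0.00, 3800], [0.05, 5200], [0.10, 4100], [0.20, 4600]])
# An arbitrary parameter set standing in for 'complete non-resemblance'.
target = np.array([[0.00,  300], [0.05,  310], [0.10,  900], [0.20,  150]])

def synthesize(duration=0.2, resemblance=1.0):
    """Resynthesize a frequency contour; resemblance=1 imitates the
    analysed song, resemblance=0 reaches the unrelated target."""
    pts = resemblance * song + (1 - resemblance) * target
    t = np.arange(int(duration * sr)) / sr
    freq = np.interp(t, pts[:, 0], pts[:, 1])
    phase = 2 * np.pi * np.cumsum(freq) / sr
    return np.sin(phase)

close_imitation = synthesize(resemblance=0.9)
in_between      = synthesize(resemblance=0.5)
```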
Through the research, a limit of imitation is pursued. A double endeavour: a striving to become birdsong without becoming a bird, and a reflection on the abundance of by-products (ideas, experiences, insights, etc.) this striving generates.
Anna-Lena Vogt (2018–2019, double degree with TU Berlin)
Domestic Spatial Investigations: Expanding perception of Space through Experiments with Sound
Architecture and building acoustics pursue acoustics as a functional objective in the design process, but not the auditory aesthetic quality of a space. This is partly because there are no suitable design tools for this process, as existing tools focus mainly on visual analogies. With this in mind, my research focuses on finding a method that allows me to record and present the perceived auditory environment and to integrate it into my design and artistic practice. The interior of apartments is of particular interest to me, since the invisible encounters we have made over the years accompany and influence us in our daily experience of these spaces. Through studies of intimate auditory instants, conclusions become possible about the general vernacular experience.

As an approach to spatial sound investigation, experiments are conducted in three different apartments to explore 1) how to observe and capture the aural experience, 2) which aural qualities occur in a given space, 3) which aural categories recur between the apartments, and 4) how to reproduce the experience and bring the aural to the foreground. The decisive factor in answering these questions is a phenomenological approach that places the experiencing body at the centre of perception, creating a dialogue with the everyday situations found in the living spaces. We are physically influenced by and actively change these spaces, but we are not aware of the manners in which this occurs.

In order to approach the horizon of our experience and to sharpen our consciousness, this study layers complementary methods: in-situ listening to become aware of what shapes the local; taking on different body positions, such as walking, standing, sitting and lying, to get close to the everyday instances in the apartments; parallel sound recording through binaural techniques to stay as close as possible to the experience in context; and writing down accounts to capture the moments. These experiments with everyday auditory living situations are a preliminary step towards making spatial ambiances tangible. They shape awareness and enable the appropriation of visual design methods. A collection of the acquired daily aural situations is the outcome and will be presented in an installation.
Keywords:
Atmosphere, Sonic Effects, Sound, Space, Architecture
Laura Agnusdei (2017–2019)
Combining Wind Instruments and Electronics within the Timbral Domain
My research at the Institute of Sonology is focused on composing with wind instruments and electronics. The starting point for this is my background as a saxophone player, and my compositional process aims to enhance the timbral possibilities of my instrument while still preserving its recognisability. In line with my artistic interest in blurring the perceptual difference between acoustic and electronic sounds, I process acoustic sounds from my instrument using digital software like Spear, CDP and Cecilia – carefully selecting procedures based on analysis and re-synthesis techniques.
My musical points of reference are chosen from different contexts – free jazz, electroacoustic composition, experimental rock – and my interest in timbre has also encouraged me to explore many extended techniques on my instrument. Additionally, when composing for saxophone I consider the position the instrument occupies in music history, straddling influences between African-American culture, pop music and contemporary classical music. Aside from the saxophone, I also plan to explore the timbral peculiarities of other wind instruments and to use their sonorities in a personal way.
More precisely, I am interested in working with acoustic instruments because their sound can be translated into the electronic domain. My research will therefore take place in the studio as well as in live performance, and its final outcome should incorporate the discoveries made in both. Moreover, in my improvisational practice (both solo and in groups) I want to expand my research by combining live processed sounds with purely acoustic ones, as well as experimenting with different amplification techniques.
Görkem Arıkan (2017–2019, Instruments & Interfaces)
ARMonic: A Wearable Interface For Live Electronic Music
In live computer music, many notable works have been created that utilize new sound sources and digital sound synthesis algorithms. However, in live computer music concerts, we may encounter a lack of visual/physical feedback between the musical output and the performer, since looking at a screen does not really convey much to an audience about a performer’s actions. Therefore, in my concerts, I have been looking for ways to minimize the need to look at the screen by using various kinds of setups, largely those consisting of MIDI controllers, self-made physical sound objects, and sensor-based interfaces that transform physical acts into sound.
With this in mind, from the onset of my research, my goal has been to build a performance system that can be interacted with through body movements. “ARMonic” is a performance piece I created during my study and continue to develop. The making of the piece is a pursuit of an unconventional performance vehicle that enables me to explore my inner world and learn new ways of expressing myself through sound and movement. Alongside solo shows, one of my main concerns is to remain active in social music-making, ranging from various improvised music formations to written pieces.
In the written part of my research, I explain the technical details of my work and discuss personal considerations regarding the preparation and after-effects of my performance practice. In addition to this, I build on the works of Jacques Attali (Noise: The Political Economy of Music) and Johan Huizinga (Homo Ludens), which guide the philosophical and socio-economic thinking in my work.
Matthias Hurtl (2017–2019)
DROWNING IN ÆTHER
software-defined radio – a tool for exploring live performance and composition
In my practice I am often fascinated by activities happening in outer space. Currently, I am interested in the countless signals and indeterminate messages from the many man-made objects discreetly surrounding us. Multitudes of satellites transmit different rhythms and frequencies, spreading inaudible and encoded messages into what was once known as the æther. Whether we hear them or not, such signals undeniably inhabit the air all around us. Radio waves, FM radio, WiFi, GPS, cell phone conversations: all of these signals remain unheard by human ears as they infiltrate our cities and natural surroundings. Occasionally though, such signals are heard by accident, emerging as a ghostly resonance, a silent foreign voice, or as something creating interference in our hi-fi systems. Yet aside from these accidental occurrences, tuning into these frequencies on purpose requires a range of tools, such as FM radios, smartphones, wireless routers and navigation systems.
Presently, my research at the Institute of Sonology includes placing machine transmissions into a musical context, exploring what inhabits the mysterious and abstract substance once referred to as æther. This exploration fundamentally delves into how one might capture these bodiless sounds into a tangible system, so they can be transformed into audible frequencies and treated like an oscillator or any other noise source. Additionally, I employ methods of indeterminacy; a methodology assisting the emergence of unforeseeable outcomes. Specifically, this includes using external parameters that engage with chance and affect details or the overall form of my work.
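As a hedged illustration of one common capture path (not necessarily the one used here), software-defined radio delivers complex IQ samples whose phase differences yield an FM-demodulated, audio-rate signal; the samples below are simulated rather than received:

```python
import numpy as np

# Complex IQ samples as delivered by a software-defined radio front end;
# simulated here, but they could equally be read from a capture file.
sample_rate = 250_000
t = np.arange(sample_rate) / sample_rate
message = np.sin(2 * np.pi * 440 * t)                  # stand-in for a hidden signal
phase = 2 * np.pi * 5000 * np.cumsum(message) / sample_rate
iq = np.exp(1j * phase)                                # frequency-modulated carrier at baseband

# FM demodulation: the instantaneous frequency is the phase difference
# between consecutive IQ samples.
demodulated = np.angle(iq[1:] * np.conj(iq[:-1]))

# 'demodulated' can now be treated like an oscillator or noise source
# (after decimation towards an audio sample rate).
audio = demodulated[::5]                               # crude decimation to 50 kHz
```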
Principally, this research project will focus on grabbing sound out of thin air and using it in a performative setup or within a composition. I expect this to include revealing signals that are as concealed as possible, finding methods for listening to patterns in the static noise, and using tools that generate sound from bursts of coded or scrambled signals.
Slavo Krekovic (Instruments & Interfaces, 2017–2019)
An Interactive Musical Instrument for an Expressive Algorithmic Improvisation
The aim of the research is to explore design strategies for a touch-controlled hybrid (digital-analogue) interactive instrument for algorithmic improvisation. In order to achieve structural and timbral complexity in the resulting sound output while maintaining a great degree of expressiveness and intuitive ‘playability’, the possibilities of simulating complex systems inspired by natural systems, and of influencing them externally via touch-sensor input, will be examined. The sound engine should take advantage of the specific timbral qualities of a modular hardware system, with an intermediate software layer capable of generating complex, organic behaviour, influenced by touch-based input from the player in real time. The system should manifest an ongoing approach of finding the balance between deterministic and more chaotic algorithmic sound generation in a live performance situation.
The research focuses on the following questions: What are the best strategies to design a ‘composed instrument’ capable of autonomous behaviour and at the same time being responsive to the external input? How to overcome the limitations of the traditional parameter control of hardware synthesizers? How to balance the deterministic and more unpredictable attributes of a gesture-controlled interactive music system for an expressive improvisation performance? The goal is to use the specific characteristics of various sound-generation approaches but to push their possibilities beyond the common one-to-one parameter-mapping paradigm, thus allowing a more advanced control leading to interesting musical results.
Hibiki Mukai (2017–2019)
An Interactive and Generative System Based on Traditional Japanese Music Theory
Notation in Western classical music originated from the need for scores to be in a form that was easy for the general public to understand and that could be passed on to future generations. Since then, this Western notation system has been widely used as a universal language throughout the world. However, its popularity does not mean that no important sonic information is lost. In contrast, most traditional Japanese music was handed down to the next generation orally, but was also accompanied by original scores that conveyed subtle musical expressions. These ‘companion scores’ were written with graphical drawings and Japanese characters.
In my research at the Institute of Sonology, I plan to reinterpret traditional notation systems of Japan (i.e. Shōmyō 声明 and Gagaku 雅楽) and to design real-time interactive systems which analyse relationships between these scores and the vocal sounds made by performers. In doing this, I plan to generate notation that exists in digital form. This will allow me to control parameters such as pitch, rhythm, dynamics and articulation by analyzing intervals in real time. Furthermore, this research will culminate in using this system to realize a series of pieces for voice, Western musical instruments (e.g. piano, guitar, harp) and live electronics.
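A toy sketch of the kind of interval-driven mapping this implies (the pitch values and parameter choices are hypothetical, not the system’s actual design):

```python
# Successive pitches detected from a vocalist are reduced to intervals,
# and each interval selects parameters for the live electronics.
detected_pitches = [62, 64, 62, 67, 65]   # MIDI note numbers from a pitch tracker

def interval_to_parameters(interval):
    return {
        "grain_rate": 5 + abs(interval) * 2,        # wider interval -> denser texture
        "transpose": interval,                      # electronics mirror the gesture
        "amplitude": 0.3 if interval == 0 else 0.6, # repeated notes stay quiet
    }

for prev, curr in zip(detected_pitches, detected_pitches[1:]):
    print(interval_to_parameters(curr - prev))
```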
Overall, I believe it is possible to adapt this traditional Japanese music theory to a Western system by using electronics as an intermediary that processes data. In addition, by re-inventing traditional Japanese notation, I expect its expressive ideas to become easier to access through Western notation. Using this type of score, I also aim to extend many of the techniques of Western musicians and composers – designing interactive relationships between instruments, the human voice, and the computer.
Yannis Patoukas (2017–2019)
Exploring Connections Between Electroacoustic Music and Experimental Rock Music of the Late 60s and early 70s
From the late 60s until the late 70s, the boundaries of music genres and styles, including popular music, were blurred. This situation resulted from rapid and numerous socio-economic changes and an unpredicted upheaval in the music industry. Rock music, later labelled “progressive rock” or “art rock”, separated itself aesthetically from “mass pop” and managed to blend art, avant-garde and experimental genres into one style. Also, the shift from merely capturing the performance to using the recording as a compositional tool led to increasing experimentation that created many new possibilities for rock musicians.
Undoubtedly, many bands were aware of the experiments in electroacoustic music of the 1950s (elektronische Musik, musique concrète) and drew influences from the compositional techniques of avant-garde and experimental composers. However, many questions arise about why and how art rock was connected to experimental and avant-garde electroacoustic music, and, secondly, whether it is possible to trace common aesthetic approaches and production techniques between the two genres.
My intention during my research at the Institute of Sonology is to elucidate and exemplify possible intersections between experimental rock and the field of electroacoustic music, especially in terms of production techniques and aesthetic approaches. The framework of this research will include a historical overview of the context that experimental rock emerged from, exploring why and how certain production techniques were used at that period in rock music, and investigating whether the aesthetic outcome of these techniques relates to experiments in the field of electronic music.
Parallel to this theoretical research, I also plan to attempt a reconstruction of some production techniques which I will explore for the sake of developing my own aesthetic and compositional work. The driving force behind this type of reconstruction will include exploring tape manipulation, voltage control techniques, and their application to different contexts (such as fixed media pieces, live electronics and free improvisation).
Orestis Zafiriou (2017–2019)
Mental Images Mediated Through Sound
I’m interested in researching correlations between physical and perceptual space in the process of composing music. Concentrating on the ability of human perception to create images and presentations of the phenomena it encounters, I propose a compositional methodology where the behaviour and the spatiotemporal properties of matter are translated into musical processes and represented through electronic music.
Sounds in my work represent objects and events, as well as their movement in space-time. This implies that a musical space is formed by a succession of mental images unfolded in the perceptual space of the composer when encountering these events. With this method I also aim to point out the importance of music as a means to communicate objective physical and social processes through a subjective filter, both from the standpoint of a composer as well as from the perception of a listener.
Orestis Zafiriou studied mechanical engineering at the Technical University of Crete, completing his thesis on acoustics, specifically the physical properties of sound and its movement through different media (active noise control using the finite-element method). In addition to his present and past studies, he also actively composes music in the band Playgrounded, who released their first album (Athens) in 2013 and their second (In time with Gravity) in October 2017.
Chris Loupis (2016–2018)
Bridging Isles: Dynamically Coupled Feedback Systems for Electronic Music
My research at the Institute of Sonology has been principally involved with the investigation of coupled and recursive processes, with the final goal of applying the derived techniques to modular synthesis, through the (re)implementation – or appropriation – of circuits. Audio and control feedback systems present a non-hierarchical way of interacting with an emergent, unforeseeable and unrepeatable output. In such systems, individual components share energy mutually. They can therefore be considered as coupled, their resulting sonic behaviour being one of synergetic or conflictual relations. The central theme this project builds upon is the idea of musical systems designed to be operating – or better yet, operated – within reciprocally affected and variably intertwined structures.
Specifically, the above ideas include an investigative trajectory through the mechanics of coupling in feedback control systems, which arrived at the Van der Pol equation (Balthasar Van der Pol, 1927).
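For reference, the equation in its standard (optionally forced) form can be written as

$$\ddot{x} - \mu\,(1 - x^{2})\,\dot{x} + x = F(t)$$

where $\mu$ sets the strength of the non-linear damping and $F(t)$ is an external or coupling signal ($F(t) = 0$ for the free-running oscillator).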
This oscillator has a significant musical value, as it is characterised by behavioural versatility, rhythmical and timbral complexity and richness; its non-linear response to external signals is also particularly interesting. While being well-documented in physics and mathematics, the Van der Pol oscillator has not been widely adopted as an analogue model in modular synthesis. With these observations in mind, I am currently developing an extended implementation of the circuit as a set of analogue-coupled Van der Pol electronic oscillators. This will culminate in the creation of a series of modules to be used for the composition and performance of electronic music, in the studio and live.
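A minimal numerical sketch of the idea (forward-Euler integration of two mutually driven Van der Pol oscillators; all constants hypothetical, and a digital stand-in rather than the analogue circuit itself):

```python
import numpy as np

def coupled_van_der_pol(mu1=1.0, mu2=3.0, k=0.2, w1=1.0, w2=1.3,
                        dt=0.001, steps=100_000):
    """Forward-Euler integration of two Van der Pol oscillators,
    each weakly driven by the other's output."""
    x1, v1, x2, v2 = 0.1, 0.0, -0.1, 0.0
    out = np.empty((steps, 2))
    for i in range(steps):
        a1 = mu1 * (1 - x1 ** 2) * v1 - (w1 ** 2) * x1 + k * x2
        a2 = mu2 * (1 - x2 ** 2) * v2 - (w2 ** 2) * x2 + k * x1
        v1 += a1 * dt; x1 += v1 * dt
        v2 += a2 * dt; x2 += v2 * dt
        out[i] = (x1, x2)
    return out

signals = coupled_van_der_pol()   # two quasi-periodic, mutually entangled voices
```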
In parallel, and alongside the notion of the user as an integrated part of the musical system, my work has also focused on redesigning the interface of a computer-aided switch matrix mixer developed by Lex van den Broek in 2016, known as the Complex. This version allows users to actively and dynamically participate in the recursive architecture of sensitive interwoven systems.
Riccardo Marogna (2016–2018)
CABOTO: A Graphic Notation-based Instrument for Live Electronics
The main idea behind my research project is to explore ways of composing and performing electronic music by means of scanning graphic scores. Drawing inspiration from the historical experiments on optical sound by Arseny Avraamov, Oskar Fischinger, Daphne Oram and Norman McLaren, and from the computer-based interface conceived by Xenakis (UPIC, 1977), the project has evolved into an instrument/interface for live electronics called CABOTO. In CABOTO, a graphic score sketched on a canvas is scanned by a computer vision system. The graphic elements are then recognised following a symbolic/raw hybrid approach: that is, they are interpreted by a symbolic classifier (according to a vocabulary) but also as waveforms and raw optical signals. All this information is mapped into the synthesis engine. The score is viewed according to a map metaphor, and a set of independent explorers is defined, which traverse the score-map according to real-time generated paths. In this way I can have some kind of macro-control over how the composition develops, while at the same time the explorers are programmed to exhibit semi-autonomous behaviour. CABOTO tries to challenge the boundaries between the concepts of composition, score, performance, and instrument.
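A toy version of the ‘raw’ half of such a scanner might read each canvas column as an optical waveform (file path hypothetical; the symbolic classifier is omitted here):

```python
import numpy as np
from PIL import Image

# Load a sketched score as a grayscale canvas (path hypothetical).
canvas = np.asarray(Image.open("score.png").convert("L"), dtype=float) / 255.0

def scan_column(canvas, x):
    """Read one vertical column of the canvas as a raw optical signal,
    ink marks becoming positive amplitude (as on an optical soundtrack)."""
    column = 1.0 - canvas[:, x]        # ink = high value
    return column - column.mean()      # centre around zero

# An 'explorer' traversing the score-map left to right, one column per frame.
frames = [scan_column(canvas, x) for x in range(canvas.shape[1])]
```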
Sohrab Motabar (2015–2018)
Non-standard synthesis and non-standard structure
The starting point of my research is to explore the structural possibilities of sound material generated by non-standard synthesis, namely the jey.noise~ object in Max. My research also proceeds from a consideration of the technique used by Dick Raaijmakers in Canon 1, where the time interval between two simple impulses becomes the fundamental parameter on which the music is composed. I have investigated the possibilities of replacing these impulses with more complex materials, and these microscopic time intervals with different timescales.
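The Canon 1 idea can be illustrated in a few lines: a pair of impulses whose separation is the single compositional parameter, heard as rhythm at large intervals and as pitch or timbre at microscopic ones (a sketch, not Raaijmakers’ own procedure):

```python
import numpy as np

sr = 44100

def impulse_pair(interval_seconds):
    """Two unit impulses separated by a single time interval --
    the one parameter on which the music is composed."""
    gap = int(interval_seconds * sr)
    signal = np.zeros(gap + 1)
    signal[0] = 1.0
    signal[-1] = 1.0
    return signal

rhythmic = impulse_pair(0.5)      # heard as two distinct events
timbral  = impulse_pair(0.002)    # fuses into a single coloured click
```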
Julius Raskevičius (2016–2018)
PolyTop: Map-oriented Sound Design and Playback Application for Android
My work is currently focused on creating an Android application that can act as a universal translator of parameter space to a meaningful 2D map of sounds. The application can control virtual instruments in SuperCollider or any other program that accepts MIDI as parametric control. The goal of this research is to create an app positioning a musical piece as a type of network, which enables a gradual gestural transformation of material. This research was prompted by the fact that a majority of touchscreen instruments have inherited the looks of older-generation mouse-oriented interfaces, and thus they still rely on the old paradigm of pointing and clicking. Consequently, users of this type of traditional mouse-based interface have been limited to a single interaction with a given virtual instrument at a time. Simply put, gestures involving multiple fingers have not been commonly found in professional sound design programs — even with the widespread adoption of touchpads as a primary input device for portable personal computers.
However, with the growing computing power of touchscreen devices and acceptance of touch as a mode of interaction, new sound-design possibilities are emerging. These developments, combined with visual and sonic input, are allowing touch to become a powerful way of intuitively generating sound. Additionally, the continuous nature of touch gestures promotes the design of sounds encouraging uninterrupted modulation. This also promotes a holistic perspective on sound design, by way of using multi-touch to make possible the simultaneous adjustments of sonic details.
Overall, this suggests that the possibilities of touch input can be combined with 2D maps of sound parameters. Similar to a scrollable digital map representing a geographical area, a 2D map on a tablet can represent all possible permutations of a virtual instrument’s parameters. Given that such parametric combinations are nearly infinite, the user takes on the role of an explorer: wandering through the space of parameters and pursuing directions that lead to interesting sonic results, or conversely, avoiding areas of the map that are less intriguing. Multi-touch gestures can speed up and simplify this process, largely by adding intuitive control to the randomisation of parameters.
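One simple way such a map can address a higher-dimensional parameter space is bilinear interpolation between corner presets (all preset values hypothetical; PolyTop’s actual mapping may differ):

```python
import numpy as np

# Four corner presets of a virtual instrument; each vector is
# [cutoff_hz, resonance, grain_size_s, reverb_mix].
corners = {
    (0, 0): np.array([200.0, 0.1, 0.01, 0.0]),
    (1, 0): np.array([8000.0, 0.9, 0.01, 0.2]),
    (0, 1): np.array([500.0, 0.3, 0.50, 0.9]),
    (1, 1): np.array([3000.0, 0.7, 0.20, 0.5]),
}

def position_to_parameters(x, y):
    """Bilinear interpolation: any finger position on the 2D map
    yields one point in the full parameter space."""
    return ((1 - x) * (1 - y) * corners[(0, 0)]
            + x * (1 - y) * corners[(1, 0)]
            + (1 - x) * y * corners[(0, 1)]
            + x * y * corners[(1, 1)])

print(position_to_parameters(0.25, 0.75))   # a touch in one region of the map
```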
Edgars Rubenis (2016–2018)
Adventures in Temporal Field
pushing past clock-based notions of temporality and the self
While sounding material has always been of central interest in my musical practice, in the course of this master’s project I am directing my attention towards the experiential side of musical interactions – the various types of perceptual material that are being received while engaging in the act of listening to music.
While I am interested in music that exists on its own terms, free from obligations towards the listener, I am also aware that in the act of musical listening an inevitable overlapping of worlds takes place. In the course of a musical event, our human sphere enters into relations with, and becomes affected by, the principles of the musical world. In some cases it can even be said that these worlds temporarily merge.
Therefore, for the course of this research, the focus is on raising awareness of how the “thing that I interact with” not only fills my perceptual space but also shapes its boundaries. Considering that our perception informs us of who/what we are, such musical experiences shape our notions of what our human realm is.
Building strongly on my bachelor’s thesis “Use of Extended Duration in Music Composition” (which focused on works of Eliane Radigue, La Monte Young and Morton Feldman), and on earlier personal musical practice of a related kind, I am currently gathering insights into how musical experiences shape our notions of who we are – how they draw the borders of our humanness and legitimise certain types of experiences and states over others.
Notions of perception and temporality are informed by a reading of Edmund Husserl’s On the Phenomenology of the Consciousness of Internal Time and related academic texts.
Timothy S.H. Tan (2016–2018)
Spatialising Chaotic Maps with Live Audio Particle Systems
Chaotic maps have already been used for many parameters in algorithmic music, but have very rarely been applied to spatialisation. Furthermore, chaotic maps are sensitive to tiny changes yet still retain distinctive shapes, thus providing strong gestures and metaphors. This allows for effective control of spatialisation during real-time performances.
On the other hand, particle systems provide a novel and effective means for sound design, borrowing the regular and random shapes used for visual effects like smoke, clouds and liquids. However, up to now chaotic maps have not been included in particle systems, and together they hold promising potential for the choreography of sounds. In my research, I seek to explore this crossroads between chaotic spatialisation and audio particle systems. This involves probing and evaluating the use of chaos and particle systems in music, then spatialising selected chaotic maps with particle systems in upcoming works for performance, and finally documenting my findings.
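As a small sketch of this crossroads (mapping choices hypothetical), a chaotic map’s orbit can place the grains of a particle system in a stereo field:

```python
import numpy as np

def henon_orbit(n, a=1.4, b=0.3, x0=0.0, y0=0.0):
    """Iterate the Henon map; the orbit is sensitive to tiny changes
    in its parameters yet keeps a distinctive shape."""
    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x
        xs[i], ys[i] = x, y
    return xs, ys

# Each iterate spatialises one grain: x -> stereo pan, y -> distance/gain.
xs, ys = henon_orbit(1000)
pan  = (xs - xs.min()) / (xs.max() - xs.min())    # 0 = left, 1 = right
gain = 1.0 / (1.0 + 2.0 * (ys - ys.min()))        # nearer orbits are louder
```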
Vladimir Vlaev (2016–2018)
Real-time Processing of Instrumental Sound and Limited Source Material Composition
My research at the Institute of Sonology focuses on real-time digital processing of acoustic instrumental sound. It is an extension of my background as a composer and an instrumentalist and it is a result of my interest in applying these two activities in the real-time electroacoustic domain.
At the core of my project lies a compositional approach which I call “limited source material composition”. In this approach, ‘material’ or ‘sound material’ has the broad meaning of pitch, timbre or rhythm. This principle is one I have applied extensively in many of my previous works, and indeed it is a concept whose implementations can be traced from early polyphonic music to certain examples of contemporary instrumental and electronic music. My aim is to implement this non-real-time compositional approach in real time, so that a concept once used for composing a score now serves as a performative and improvisational tool. Time thereby also turns into one of the parameters subjected to limitation or restriction. I have sought to accomplish this by designing a real-time sound processing system which uses instrumental sound as a source or ‘material’. In other words, I compose certain digital sampling processes, which then treat the acoustic sound in real time during a performance in order to create an ‘instant composition’. Additionally, this computer-based interface is intended as a tool for both scored composition and live electronic improvisation.
Therefore, in order to accomplish these ideas, I distinguish two main directions in my work:
1) Composing a piece for a solo instrument (prepared piano) and live electronics in which I apply the above-mentioned principles of constraint.
2) Another particular implementation of the proposed real-time DSP system involves the area of instrument building as an additional activity and is based on the use of a hexaphonic-pickup guitar as an acoustic sound source with multichannel output. The ability to apply individual processing to each string of the guitar, and thus to create complex polyphonic textures, is one of the major advantages of this implementation.
More particularly, the desired interface itself is a set of modules, each representing a real-time audio process: ring modulation, delay lines, granulation, filter, pitch shifter, distortion, buffer module, etc. Each module has a number of parameters whose values determine the behaviour of the module. The signal flow between the distinct modules is flexible: the system is capable of switching, adding, or removing processes from the chain during performance, as well as reversing or changing their order. Achieving smooth control over the parameters is also an essential task of this project.
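A stripped-down sketch of such a reorderable module chain (module implementations simplified to a few lines; not the project’s actual system):

```python
import numpy as np

class RingMod:
    def __init__(self, freq, sr=44100):
        self.freq, self.sr, self.phase = freq, sr, 0.0
    def process(self, block):
        t = self.phase + np.arange(len(block)) / self.sr
        self.phase = t[-1] + 1.0 / self.sr
        return block * np.sin(2 * np.pi * self.freq * t)

class Delay:
    def __init__(self, samples):
        self.buffer = np.zeros(samples)
    def process(self, block):
        stream = np.concatenate([self.buffer, block])
        out = stream[:len(block)]                  # delayed samples leave first
        self.buffer = stream[-len(self.buffer):]   # keep the most recent history
        return out

def run(block, chain):
    for module in chain:
        block = module.process(block)
    return block

chain = [RingMod(55.0), Delay(4410)]
live_input = np.random.randn(512)    # stand-in for instrumental sound
out = run(live_input, chain)
chain.reverse()                      # reorder processes during performance
out_reordered = run(live_input, chain)
```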
Giuliano Anzani (2015–2017)
Dynamic Stochastic Synthesis: A Performative Approach
Based on the model of dynamic stochastic synthesis (GENDY) created by Iannis Xenakis, my master’s research aimed to investigate the timbral possibilities of this approach through the development of a dedicated live-performance environment. During the initial period of this research, a hybrid interface was developed, consisting of specialised software and a physical controller for interacting and performing with the dynamic stochastic synthesis model.
Following this path, the research carried out during the master’s programme describes the development of a practice aimed at controlling the GENDY algorithm in real time. Along these lines, the result of this study was a real-time instrument named ExGen, designed as an environment for using GENDY as the main source for further sound manipulations.
The stochastic nature of this synthesis technique became the inspiration and the antagonist of the performer who, through the ExGen system, can control and vary the behaviour of the stochastic synthesis.
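At its core, dynamic stochastic synthesis draws a waveform from a handful of breakpoints whose amplitudes and durations take bounded random walks on every cycle. A much-simplified sketch of that core (parameters hypothetical, omitting Xenakis’s full probability distributions and barriers):

```python
import numpy as np

rng = np.random.default_rng(1)

n_points = 12
amps = rng.uniform(-0.5, 0.5, n_points)    # breakpoint amplitudes
durs = rng.uniform(20, 60, n_points)       # breakpoint durations in samples

def next_cycle():
    """One waveform period; amplitudes and durations both drift as
    bounded random walks (the 'dynamic stochastic' part)."""
    global amps, durs
    amps = np.clip(amps + rng.normal(0, 0.02, n_points), -1.0, 1.0)
    durs = np.clip(durs + rng.normal(0, 1.0, n_points), 4, 200)
    segments = [np.linspace(amps[i], amps[(i + 1) % n_points],
                            int(durs[i]), endpoint=False)
                for i in range(n_points)]
    return np.concatenate(segments)

sound = np.concatenate([next_cycle() for _ in range(400)])   # a few seconds of audio
```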
Using the knowledge and the practice achieved from the master’s research, the next step of this project is focused on the further development of this instrument.
An optimisation of the audio synthesis used in the software appears as a next step. This implementation includes the development of different audio plugins that will be made publicly available in order to receive feedback and suggestions regarding the utility of these tools.
In the previous version, a variety of commercial MIDI controllers was used for the physical interaction between the performer and ExGen. This resulted in a series of limitations. A further step in the development of this instrument is therefore the creation of dedicated hardware that will overcome them. Using the practice developed for the master’s thesis, the objective of this implementation is to design and build a specific physical surface for the real-time use of this environment. A simplification of the controls is also planned in order to obtain a more efficient interaction between the controller and the performer.
Simultaneously with the development of the instrument, the software and hardware developments gathered in this research will be documented, collected and released through a dedicated website under a Creative Commons (CC) licence, in order to provide freedom of use and allow further modifications of these tools.
Kyriakos Charalampides (2015–2017)
Rhythmanalysis: an expressive tool for environment-aesthetics relationship
Just before the dawn of the 21st century, Henri Lefebvre envisioned an act that sought to analyse the world as a moving complexity. In 1992, the publication of Rhythmanalysis aspired to transform the abstract concept of rhythm into a method. By studying periodic temporalities through subjective prisms, Rhythmanalysis carries the vision of allowing its practitioners to listen to a town or a street in the same way as an audience listens to a symphony. The present research aims to study Lefebvre’s ideas as an alternative way of musical expression focused on the environment/subject intersection. During the first part of this investigation, micro-periodic relations between the observer and the observed were studied as a compositional method. Based on the findings of this period, the second part of this research focuses on macro time scales. Current experiments aspire to retrieve coherent rhythmical relations from large sets of data, in order to transform complex sequences of events into musical structures.
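A hedged sketch of one way such rhythmical relations might be retrieved from event data: bin timestamped events into an activity signal and autocorrelate it (the data below is synthetic, with a hidden 60-second cycle):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical urban event timestamps (seconds): a hidden ~60 s cycle plus noise.
events = np.concatenate([np.arange(0, 3600, 60) + rng.normal(0, 2, 60),
                         rng.uniform(0, 3600, 200)])

# Bin the events into a 1 Hz activity signal and autocorrelate it.
activity, _ = np.histogram(events, bins=np.arange(0, 3601))
activity = activity - activity.mean()
acf = np.correlate(activity, activity, mode="full")[len(activity) - 1:]

# Strongest periodicity between 30 s and 10 min.
lag = np.argmax(acf[30:600]) + 30
print(f"dominant rhythm: one event cluster every ~{lag} s")
```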
Kyriakos Charalampides is a sound engineer and composer from Greece. His interest orbits around environmentally emergent aesthetics. He has been involved as a post-production engineer in several music and film productions. In recent years, he has been occupied with applications of Rhythmanalysis in sonification. He holds a BSc in Sound Engineering and Music Technology and an MMus in Sonology from the Royal Conservatoire.