Dasein: On Live Performance with Interactive Electronic Improvisation

Dasein: Improvisation + Generative + Interactive

A New Spirit in Live Performance with Interactive Electronic Improvisation

In February and March 2019, I hosted two live shows, at Ham & Eggs Tavern in downtown Los Angeles and at 191 Space in Guangzhou, presenting my album Turn Right. I rearranged all my tracks, turning them into re-mixable sonic/phrase/speech/sample-based elements controlled by pads. The structure of a song changes when the order of the samples changes. Elements are transformed live in real time (distorted, echoed, stretched…). Accompanied by this 'flow', I improvised on piano and guitar. I call this workflow the Dasein system.

Three Foundations of Contemporary Electronic Music:

(1) Timbral Freedom: The revolutions of Luigi Russolo, Iannis Xenakis, John Cage, and Edgard Varèse expanded the definition of "usable sonic materials": pitch was the first to be disenchanted. Beyond harmonic characteristics, a vast spectrum of noises, clusters, streams, clouds, socio-politically charged sampled materials, and microtonal grains is now acknowledged and manipulated. Composers are freed from timbral restriction by the prevalence of grain-based synthesis techniques, sampling, and live mixing.

(2) Formal Freedom: Similarly, form is no longer an internal yet alienated element of music: algorithmic music languages set up a foundation for combining improvisation, interactivity, and the planning of compositional structure in live performance, which varies and develops performativity. From a rigid carrier for developing linear themes, or for articulating motivic phrases horizontally and vertically, form grows into a free network where all mesostructural sonic elements interact. For example, Gottfried Michael Koenig's Project 1 (PR1) indicated that form can be a pre-determined factor in composition. Computer/electronic music is therefore capable of embracing far more structural complexity of content.

(3) Spatial Freedom: Spatialization, in both physical and psychoacoustically virtual sonic space, expands the auditory dimension of music. Structures are mapped onto physical space to create energy flow, while the virtual space remains a potential still to be explored.

Timbral modification planned at multiple scales, structural development changing in real time, and spatial counterpoint: surprises are delivered, creating an auditory experience of romanticized irrationality. But it is not free enough. One concern is that music still suffers from creative waste in the process of preparation. Improvisation and indeterminacy partially address this concern, but do currently existing improv performance systems fully exploit the three sonic possibilities above? Can improvisers be fully free, with the ability to interact with timbral modification, formal development, and spatialization in real time? Can Jacques Attali's conclusion, that composition blurs the boundary between musical consumers and producers, become reality? Is it possible for the undifferentiated phenomenological freedom of consciousness to be created in real time? Can freedom of creativity be born?

The answer lies in Dasein-style live interactive improvisation.

I performed my electronic music album Turn Right at Ham & Eggs Tavern (Los Angeles) and at 191 Space (Guangzhou), as the sole performer of each event. Having long been interested in ways for a musician to be fully free, with individual command of a live performance, I designed my one-woman-band practice as follows:

(1) Each song is reorganized into patterns and pre-recorded sonic materials, which I mapped from Ableton Live to a controller.

(2) Specific sequences are designed for triggering those patterns, so a song can have endless combinations of structural possibilities; this freed my motives from traditional linear theme development (a minimal sketch of this re-sequencing follows the list).

(3) I use audio effects to modify the sonic properties of patterns in real time (frequency shifting, echo, noise…), creating a sound installation that is constantly changing.

(4) Most importantly, I improvise on individual instruments (guitar, piano, keyboard, voice), accompanied by the previously generated sonic installation.
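To make step (2) concrete, here is a minimal Python sketch of the re-sequencing idea; the pattern names and sequence length are hypothetical placeholders, not the actual Turn Right session data.

```python
import random

# Hypothetical pre-recorded patterns for one song, as they might be
# mapped from an Ableton Live session onto controller pads.
patterns = ["intro_pad", "verse_loop", "speech_sample", "chorus_stab", "noise_swell"]

def trigger_sequence(patterns, length=8, seed=None):
    """Draw a playback order for the pads; each draw yields a new song structure."""
    rng = random.Random(seed)
    return [rng.choice(patterns) for _ in range(length)]

# Two runs of the same song material produce two different structures.
print(trigger_sequence(patterns, seed=1))
print(trigger_sequence(patterns, seed=2))
```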

Traditionally, a composer's job is to develop motives; in electronic music, sound itself becomes a vast continuum, and a composition as a whole can become a motive for a bigger picture, or for a process of new development. This experience led me to ask: how can we truly release and exploit the ultimate creativity behind not just a single musical motive, but a sound continuum, or even a whole composition? Traditional music systems and programming languages focus on note-based rather than continuum-based processes. A composition has its own universe of CHANCES, and it is the musician's freedom to choose among them on the fly.

Live Interaction Flow: Musical Representation, Spatialization, and Control of Devices

  • Musical Representation

Developing a new, unique notation system that accurately describes the elements constructing the architecture of Dasein is essential for accurate future music representation and generative improv. Factors to consider include:

  1. The re-characterization of sound, time, and sonic qualities at multiple scales.
  2. A new set of symbols specifying spatialization and articulating generative patterns in improvisation.
  3. Calculation of the micro-variance of interactive sonic phenomena.
  4. Clarification of the nonlinear nature of sound when it is assembled across n-dimensional networks.

  • Spatialization

  1. Geometric sound-space modelling.
  2. Multiple-stream reverberation.
  3. Granular reverberation: a time-splattering effect (see the sketch below).
  4. Use of Spatium for Max to articulate the spatialization.
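To illustrate the time-splattering idea behind granular reverberation, a minimal NumPy sketch follows; the grain size, density, and spread values are assumed for illustration, and a real implementation would live in Max/MSP or Spatium rather than Python.

```python
import numpy as np

def granular_splatter(x, sr=44100, grain_ms=60, density=200, spread_s=1.5, seed=0):
    """Scatter short grains of the input across time: a crude time-splattering reverb."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    out = np.zeros(len(x) + int(sr * spread_s) + grain_len)
    env = np.hanning(grain_len)                          # smooth each grain's edges
    for _ in range(density):
        src = rng.integers(0, len(x) - grain_len)        # where the grain is read from
        dst = src + rng.integers(0, int(sr * spread_s))  # where it is splattered to
        out[dst:dst + grain_len] += x[src:src + grain_len] * env * 0.3
    out[:len(x)] += x                                    # keep the dry signal
    return out / max(1e-9, np.abs(out).max())            # normalize

dry = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)  # 1 s test tone
wet = granular_splatter(dry)
```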
  • Functional Controllers for a Unique Live Performing System

A control system consists of four groups of functional keys (one possible key map is sketched after this list):

(1) Manipulating mesoscale sonic-transformation parameters, modeled after personalized clusters, streams, and masses.

(2) From the macroscale perspective, modifying the stochastic generation, function-based derivation, and real-time restructuring of any previously improvised multidimensional compositional texture.

(3) Articulating the final audio output with virtual spatialization, representable in subjective psychoacoustic space.

(4) Triggering the generative patterns for different acts from the keyboard.
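As one possible shape for this four-group key map, here is a Python dispatch sketch; the MIDI note ranges and handler names are assumptions for illustration, not a fixed Dasein specification.

```python
# Hypothetical dispatch table: controller keys -> the four functional groups.
def meso_transform(key):    print(f"key {key}: mesoscale transform (cluster/stream/mass)")
def macro_restructure(key): print(f"key {key}: macroscale stochastic restructuring")
def spatialize(key):        print(f"key {key}: virtual spatialization of output")
def trigger_act(key):       print(f"key {key}: trigger generative pattern for an act")

KEY_GROUPS = {
    range(36, 44): meso_transform,     # group 1: mesoscale sonic transformation
    range(44, 52): macro_restructure,  # group 2: macroscale restructuring
    range(52, 60): spatialize,         # group 3: spatial articulation
    range(60, 68): trigger_act,        # group 4: act-level generative triggers
}

def on_key(key):
    """Route an incoming controller key press to its functional group."""
    for keys, handler in KEY_GROUPS.items():
        if key in keys:
            return handler(key)

on_key(38)   # -> mesoscale transform
on_key(61)   # -> trigger generative pattern
```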

I aim to keep maturing and developing this motive.

For the live performance control interface:

  • Score analysis and prescribed pattern constructions.
  • Generative rules: deterministic and stochastic algorithms (embedded in SuperCollider, ChucK, and Max for Live); a minimal sketch follows this list.
  • Pattern matching and rule design for live triggering.
  • Signal-processing patch, waveform-generation patch, single-event processing (voice-assignment, event-scheduling, and resource-allocation subprograms), and event connections.
  • Using ChucK for generating real-time patterns.
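A minimal contrast between the deterministic and stochastic generative rules named above, written in Python for brevity (the actual targets are SuperCollider, ChucK, and Max for Live); the pitch set and weights are hypothetical.

```python
import itertools, random

SCALE = [60, 62, 64, 67, 69]            # hypothetical pentatonic pitch set (MIDI)

def deterministic_pattern(length=8):
    """Fixed rule: cycle the scale up and down; same output every call."""
    cycle = itertools.cycle(SCALE + SCALE[-2:0:-1])
    return [next(cycle) for _ in range(length)]

def stochastic_pattern(length=8, weights=(5, 1, 3, 2, 1), seed=None):
    """Weighted random choice: a new realization every call."""
    rng = random.Random(seed)
    return rng.choices(SCALE, weights=weights, k=length)

print(deterministic_pattern())   # always [60, 62, 64, 67, 69, 67, 64, 62]
print(stochastic_pattern())      # differs on each run
```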

For Recording and Mixing

Record the final outcome of the performance, then conduct multiscale, multitrack mixing and mastering in Ableton Live 10 Suite for online publication and streaming distribution.

Factors to consider include:

  1. Spectra and spectrum processing.
  2. Dynamic-range processing (a minimal sketch follows this list).
  3. Accuracy with respect to the source.
  4. Cleanliness.
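As a small illustration of factor 2, here is a minimal hard-knee compressor sketch; the threshold and ratio are hypothetical values, since the actual dynamic-range processing happens inside Ableton Live.

```python
import numpy as np

def compress(x, threshold_db=-12.0, ratio=4.0):
    """Hard-knee compressor: reduce the gain of samples above the threshold."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)          # per-sample level in dBFS
    over = np.maximum(level_db - threshold_db, 0.0)    # amount above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)              # 4:1 ratio -> keep 1/4 of overshoot
    return x * 10 ** (gain_db / 20)

x = np.sin(2 * np.pi * 220 * np.arange(4410) / 44100)  # loud test tone at 0 dBFS
y = compress(x)                                        # peaks reduced toward -9 dBFS
```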
  • Device Communication

The MIDI Association introduced MIDI 2.0 to the market in 2020. It not only brings higher-resolution sonic data and more efficient property exchange for controllers, but also changes how devices communicate: two-way communication becomes possible, so devices can negotiate capabilities and remain compatible with one another. I plan to dive into this technology and into MIDI 2.0 configuration with Max/MSP, ChucK, and new musical interfaces.
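As a sketch of what the higher resolution looks like at the byte level, the following packs a MIDI 2.0 Note On as a 64-bit Universal MIDI Packet with 16-bit velocity, following the published UMP layout as I understand it; verify the field positions against the MIDI Association specification before relying on them.

```python
def ump_note_on(group, channel, note, velocity16, attr_type=0, attr_data=0):
    """Pack a MIDI 2.0 Note On as a 64-bit Universal MIDI Packet (two 32-bit words)."""
    word0 = (0x4 << 28) | (group << 24) | (0x9 << 20) | (channel << 16) \
            | (note << 8) | attr_type
    word1 = (velocity16 << 16) | attr_data
    return word0, word1

# Middle C at full 16-bit velocity: far finer than MIDI 1.0's 7-bit range.
w0, w1 = ump_note_on(group=0, channel=0, note=60, velocity16=0xFFFF)
print(f"{w0:08X} {w1:08X}")   # 40903C00 FFFF0000
```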

Generative Strategies:

Markov Chains: Hierarchical Model
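A minimal sketch of a two-level (hierarchical) Markov model: a top chain walks over section labels, and a per-section chain generates pitches; all transition tables below are hypothetical. Swapping the tables live would correspond to the Conductor choosing a different Algorithm.

```python
import random

# Top level: transitions between sections.
SECTIONS = {"A": {"A": 0.5, "B": 0.5}, "B": {"A": 0.7, "B": 0.3}}
# Bottom level: per-section pitch transition tables (MIDI note numbers).
PITCHES = {
    "A": {60: {62: 0.6, 64: 0.4}, 62: {60: 0.5, 64: 0.5}, 64: {60: 1.0}},
    "B": {67: {69: 0.7, 72: 0.3}, 69: {67: 1.0}, 72: {67: 0.5, 69: 0.5}},
}

def walk(table, state, steps, rng):
    """Generic weighted Markov walk over a nested-dict transition table."""
    out = [state]
    for _ in range(steps - 1):
        state = rng.choices(list(table[state]), weights=list(table[state].values()))[0]
        out.append(state)
    return out

rng = random.Random(0)
for section in walk(SECTIONS, "A", 4, rng):                # top-level chain
    start = rng.choice(list(PITCHES[section]))
    print(section, walk(PITCHES[section], start, 8, rng))  # bottom-level chain
```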

Generative Grammars: Music Analysis

Chaotic System and Lindenmayer System
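For the Lindenmayer side, a minimal rewriting sketch; the two-symbol alphabet and rules are hypothetical stand-ins for long and short note values.

```python
# Hypothetical L-system: "a" = long note, "b" = short note.
RULES = {"a": "ab", "b": "a"}   # the classic Fibonacci rewriting rules

def lsystem(axiom, rules, generations):
    """Rewrite every symbol in parallel, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Each generation yields a longer, self-similar rhythm string.
for g in range(5):
    print(g, lsystem("a", RULES, g))
# 0 a / 1 ab / 2 aba / 3 abaab / 4 abaababa
```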

Cellular Automata: Polyrhythmic structures
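A sketch of how a one-dimensional cellular automaton can yield polyrhythmic layers: each generation is read as one rhythmic voice, with live cells marking onsets; Rule 90 and the 16-step width are assumed choices.

```python
def rule90_step(cells):
    """One step of elementary CA Rule 90: each cell becomes the XOR of its neighbors."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

# Start with a single live cell; each generation becomes one rhythmic voice.
row = [0] * 16
row[8] = 1
for voice in range(4):
    print("voice", voice, "".join("x" if c else "." for c in row))
    row = rule90_step(row)
```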

Neural Networks: Adaptive Resonance Theory

AI: Context-Dependence, Machine-Learning, Rule-based System

The Basic Terms, Functions, and Workflow Design of the Dasein System

– Terms and Functions

This section presents the terms constructing the logic flow of Dasein. I hope to constantly develop, mature, and finalize this motive in the first year of research.

Skeleton: the composition, the sound continuum (equivalent to a mature movement in traditional musical form), a well-developed motive. It comprises the musical score including all sonic materials, data on sound properties (timbre, pitch, amplitude, duration), and primary structural information (the variations and theme developments composed). In Dasein, a Skeleton is performed first, for live processing.
Conductor: the operator of the Dasein system, capable of selecting combinations of micro materials in the Skeleton and of manipulating the algorithm by which the selected materials develop.

Pattern: live-processed sonic material produced by a selected combination of instrument types, with pitch information converted to MIDI and the modified sound-morphological information (waveform, spectra, spatial path, harmonicity, effects) of each instrument, analyzed in real time in units of fixed duration.
Mesos: meso-structural information (cluster, mass, stream, cloud) from the live-processed, modified sonic materials of any combination of instruments, with the overall morphological information analyzed in real time over a fixed duration.
Phrases: fixed-duration composite compositional structures (theme, development, variations). Micro-level spatial articulation is included in Phrases.
Macros: multiscale compositional structure comprising Pattern, Phrase, and Mesos information as a sonic continuum (for example, a pitch-rhythm continuum).
Algorithm: generative models applicable to any scope of information (from parameters and MIDI sequences to clusters and formal variations).

Context: the product of generative sound based on the chosen Algorithm. Grand-level spatial articulation can be modified in a Context.
Action: real-time improvisation accompanied by a Context; it is re-processed to form the second-derivative Context.
Scale: a basic unit. Units in Dasein include Patterns, Mesos, Phrases, and Macros; any combination of units chosen by the Conductor is called a Scale.
Turn: modification of parameters at any Scale level, mostly sonic transformation of timbral information.
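One way to make these terms concrete is as a data model; the dataclasses below are an illustrative assumption in Python, not the system's actual implementation (which targets Max/MSP and related environments).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pattern:
    """Live-processed material for one instrument over a fixed duration."""
    instrument: str
    midi_notes: List[int]
    duration_s: float

@dataclass
class Context:
    """Generative output of applying an Algorithm to chosen units (a Scale)."""
    derivative: int              # 1st, 2nd, ... derivative context
    source_units: List[Pattern]
    algorithm: Callable[[List[Pattern]], List[int]]

    def render(self) -> List[int]:
        return self.algorithm(self.source_units)

# Hypothetical usage: the Conductor selects a Scale and an Algorithm.
p = Pattern("guitar", [60, 62, 64], duration_s=4.0)
ctx = Context(derivative=1, source_units=[p],
              algorithm=lambda units: [n + 12 for u in units for n in u.midi_notes])
print(ctx.render())   # [72, 74, 76]
```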

– Workflow Design
1. Performers play the Skeleton, using new instrument interfaces.
2. Live processing of Patterns, Mesos, Phrases, and Macros in multiple channels (Max/MSP multichannel connections).
3. While the Skeleton is performed, the Conductor chooses from lists of Algorithms to develop the first-derivative Context.
4. Turn in real time, creating timbral variance.
5. Action with guidance: performers improvise under instructions fixing duration, amplitude, and harmony, accompanied by the first-derivative Context.
6. Live processing of the Action plus the first-derivative Context, during which sonic transformations can be made by performers at the interface and by the Conductor at the macroscale.
7. The second-derivative Context is formed by these modifications of the Action plus the first derivative.
8. The Conductor chooses from the lists of Algorithms for the third-derivative Context.
9. Action ×2: free improvisation with only duration data in the instructions.
10. The possibility of X further rounds of live processing and generation; each round can start from either a deterministic or a stochastic algorithmic model. (A schematic rendering of this loop follows.)
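Read as a loop, the ten steps might be schematized as below; every function name and numeric stand-in here is hypothetical, intended only to show the derivative-context cycle.

```python
import random

def improvise(context, rng):
    """Stand-in for a live performer: echo the context with random deviation."""
    return [x + rng.uniform(-3, 3) for x in context]

def run_dasein(skeleton, algorithms, rounds=2, seed=0):
    """Schematic Dasein loop: each round derives a new context from the last action."""
    rng = random.Random(seed)
    context = skeleton                       # steps 1-2: perform and live-process
    for r in range(1, rounds + 1):
        algo = rng.choice(algorithms)        # steps 3/8: Conductor picks an Algorithm
        context = algo(context)              # derive the r-th derivative context
        context = [x + rng.uniform(-1, 1) for x in context]  # step 4: Turn
        action = improvise(context, rng)     # steps 5/9: guided or free improv
        context = context + action           # steps 6-7: reprocess into next context
    return context

# Deterministic and stochastic algorithm stand-ins (step 10).
algorithms = [lambda c: [x * 1.5 for x in c],        # deterministic scaling
              lambda c: random.sample(c, len(c))]    # stochastic reordering

print(run_dasein(skeleton=[60.0, 64.0, 67.0], algorithms=algorithms))
```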