This is the project website & blog of MIRLCAuto: A Virtual Agent for Music Information Retrieval in Live Coding, a project funded by the EPSRC HDI Network Plus Grant - Art, Music, and Culture theme.

An Interview with John Richards

15 Sep 2021 • Anna Xambó and Sam Roig

This is an interview with John Richards, and a follow-up to the performance with the Dirty Electronics Ensemble led by John Richards, in collaboration with Jon.Ogara (also interviewed on the MIRLCa blog here) and Anna Xambó, at the closing concert of the MIRLCAuto project on the 17th of May 2021. The concert was hosted by MTI^2, De Montfort University in Leicester, UK. The video and live album of the concert will be premiered soon; stay tuned!

John Richards.

John Richards explores the idea of Dirty Electronics, which focuses on shared experiences, ritual, gesture, touch and social interaction. He is primarily concerned with the performance of large-group electronic music and DIY electronics, and with the idea of creating music inside electronics. His work also pushes the boundaries between performance art, electronics and graphic design; it is transdisciplinary and has a socio-political dimension. Dirty Electronics has been commissioned to create sound devices for various arts organisations and festivals and has released a series of hand-held synths on Mute Records.
https://www.dirtyelectronics.org

Tell us a bit about yourself and your artistic practice...

Dirty Electronics. It mostly says it all in the name. Crawling about on the floor with lots of people making noise. Sometimes very serious, often humorous. Absurdist noise! I like how making stuff and performance can overlap.

Did you have any prior hands-on experience with machine learning before this collaboration with the MIRLCa project?

Well, actually no, or at least not directly. Although I’ve thought a lot about human-machine interaction, and the role of machines in performance.

How did you plan the collaboration between the Dirty Electronics Ensemble with Jon.Ogara (trombone, flute, Kinect sensor, and MIRLCa) and Anna Xambó (MIRLCa)? What was the role of the Dirty Electronics Ensemble members?

Originally, we wanted to create some contrasting material, a pool of noise and sound as a basis for MIRLCa. Essentially, Dirty Electronics would become the Freesound source from which Jon.Ogara and Anna Xambó could draw. In terms of the Ensemble, there was the intention to introduce a level of detachment from the decision- and sound-making, to think of our range of sound circuits as autonomous to a certain extent. There was a focus on attending to sound and the machines, rather than performance in the traditional sense. We had these guidelines: think assembling, disassembling, listening, observing and performing uncontrollable instruments. The Ensemble were involved in the process of learning from and listening to the machine, as well as from the other performers in the room.

How do you envision integrating machine learning into your future work? How do you imagine machine learning being applied to the practice of DIY in the future?

I was recently asked to give a talk about play, sound-making and performance. When I think of play, I also think of the process of finding. Through play, we can explore and from this learn. Find out things. And in my music, this play- and find-thing is fundamental. Relationships with and to technology also play important roles. This encompasses human-machine and human-computer interaction. Or on the flip side, human-machine non-interaction. These interests are often manifest through instrument design or exploring sound-making stuff and technological artefacts.

Machine learning in my practice of DIY sound-making has been one-directional in many respects: me learning from the machine, playing, finding, etc. But this is not strictly true. After all, as a DIYer, I’m the one designing or building the machine, writing the code and algorithms; so the relationship with the machine becomes more complex. Control of the machine, its behaviour, and, dare I say, the spirit of the machine is set and defined by its maker. For example, recently I’ve been working with microprocessors, wavetable synthesis and algorithms (see Radical Chip/Nails) to exaggerate this play- and find-thing in music and sound-making, creating an environment where ‘learning’ takes place de facto. The material nature of the technology is also prioritised and celebrated, a kind of materialism if you like: it’s about finding the idiosyncrasies, character and limitations of these materials, and thus more scope to play and find.

I don’t see it as a massive step to what might be thought of as bi-directional machine learning between the machine and its user - the user learning from the machine and its limitations, and the machine learning from the user - as the user is often playing ‘God’ in that they set the parameters and conditions from which the machine learns. In this regard, the machine is not, or never is, autonomous. But the general principle of ‘the machine can learn too’ extends the possibilities of human-machine interaction. This is for sure something I envision having a big impact on my work and DIY sound-making in the future.
