This is the project website & blog of MIRLCAuto: A Virtual Agent for Music Information Retrieval in Live Coding, a project funded by the EPSRC HDI Network Plus Grant - Art, Music, and Culture theme.

An Interview with Jon Ogara

14 May 2021 • Anna Xambó and Sam Roig

This is an interview with Jon Ogara about his follow-up work using MIRLCa after attending the workshop hosted by Leicester Hackspace in January 2021. Jon Ogara will participate in the final concert of the project, Dirty Dialogues, featuring the Dirty Electronics Ensemble (John Richards, Amit D Patel aka Dushume, Zach Dawson, Robin Foster, Ben Middle, Jacob Myer Braslawsce, Audrey Riley, Matt Rogerson, Harry Smith, Sam Topley, Samuel Warren) together with Jon Ogara, Anna Xambó and Sam Roig (interviewer).

Jon Ogara.

Jon Ogara started his musical path by learning the flute at school, where he discovered the delights of classical music. Jon studied electronics at the University of Manchester with a focus on radio communications. At university, he learnt the guitar and started to create more independent rock music, influenced by bands such as The Fall and Cabaret Voltaire. He then discovered the saxophone and jazz and began to bring ideas of jazz improvisation into his composition. Jon has studied jazz with Nik Weldon at the JazzSchool in Rushden. With the development of the internet and connected devices, he began to explore the world of experimental music and to share ideas and compositions.
https://allopenelectrics.com | https://soundcloud.com/allopenelectrics

Video "Drowning in and Struggling Out" #

Tell us a bit about you and your artistic practice in live coding...

I am interested in creating music in the moment using a variety of analogue and digital devices. The pieces I create are spontaneous and unedited. I spend a lot of time creating music with laptops and iPads. I like to improvise over the top of these pieces with sax, flute, guitar, or trombone. I upload these recordings to my SoundCloud page.

My initial practice with live coding was to use Sonic Pi on a Raspberry Pi. I then created a Python program, linked to a MIDI controller, to create musical rhythms which can be adjusted in real time. The result was the piece Analogue algorithms in the mind.
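To make this concrete, here is a minimal sketch of that kind of setup (not Jon's actual program): a repeating rhythm whose tempo is adjusted live from a MIDI controller knob. It assumes the mido library, the system's default MIDI ports, and an arbitrary choice of CC 1 for the tempo control.

```python
# Hypothetical sketch: a looping rhythm whose tempo follows a MIDI knob.
import time
import mido

RHYTHM = [60, 0, 62, 0, 64, 62, 0, 60]  # MIDI note numbers; 0 = rest
tempo_bpm = 120.0

with mido.open_input() as controller, mido.open_output() as synth:
    while True:
        for note in RHYTHM:
            # Poll the controller: map CC 1 (0-127) onto 60-180 BPM.
            for msg in controller.iter_pending():
                if msg.type == "control_change" and msg.control == 1:
                    tempo_bpm = 60 + (msg.value / 127) * 120
            step = 60.0 / tempo_bpm / 2  # eighth-note steps
            if note:
                synth.send(mido.Message("note_on", note=note, velocity=96))
                time.sleep(step)
                synth.send(mido.Message("note_off", note=note))
            else:
                time.sleep(step)
```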

I have mainly used Max/MSP to create new types of instruments. I have started to investigate the use of SuperCollider, as well as thinking about integrating this technology with modular synthesis.

What motivated you to sign up for the workshop?

I am always seeking new ways of integrating new technologies into music creation and performance.

Did you have any prior hands-on experience with machine learning before attending this workshop?

I used Google Magenta to create a self-learning loop, where the AI was seeded with a sound and then fed this sound back to itself. I used this in a live recorded performance named live experiment 6.
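In outline, the loop works like the sketch below, where a hypothetical generate() function stands in for the trained Magenta model (this is not the Magenta API itself; the placeholder just keeps the loop runnable):

```python
# Schematic seed-and-feedback loop: each output becomes the next seed.
import numpy as np

def generate(seed: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained model continuing the seed audio."""
    return seed + 0.05 * np.random.randn(*seed.shape)  # placeholder behaviour

audio = np.random.randn(16000)  # stand-in for a one-second recorded seed
outputs = []
for _ in range(8):
    audio = generate(audio)  # feed the model's output back as its next input
    outputs.append(audio)
```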

How have you approached the production of your video "Drowning in and Struggling Out" using the tool?

I like the idea of bringing the physical into the performance. For example, I connect a Kinect sensor to Max and have it generate MIDI notes and control messages for a synth. I also use the Kinect to generate a visual representation of my movements as I create these notes. Linking this with some live coding to set up the musical backdrop, I can improvise over it. I use iPad music apps a lot; I find them very flexible and quick to use, and I can integrate samplers, external MIDI instruments and effects with them. Examples of this include interpreting a graphical score from Ying Lui, nonsense note, and the video I created for The Outlands Network, "Drowning in and Struggling Out".
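As a rough illustration of the mapping stage only, here is a sketch that turns sensed (x, y) positions into MIDI note and control messages, assuming the mido library and that hand positions in the range [0, 1] are already being extracted (e.g. by Max or a Kinect tracking library); the positions below are fabricated sample data.

```python
# Hypothetical mapping from body position to MIDI note and CC messages.
import mido

def to_midi(x: float, y: float):
    note = 48 + int(x * 24)   # x position picks a pitch between C3 and C5
    cc_value = int(y * 127)   # y position drives a controller (CC 74 here)
    return (mido.Message("note_on", note=note, velocity=100),
            mido.Message("control_change", control=74, value=cc_value))

positions = [(0.1, 0.9), (0.5, 0.4), (0.8, 0.2)]  # placeholder gesture data
with mido.open_output() as synth:
    for x, y in positions:
        for msg in to_midi(x, y):
            synth.send(msg)
```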

How do you envision integrating MIRLCa into your future work?

I am currently working on several projects that use MIRLCa. The first is to integrate it within a whole performance for the Manchester Fringe in September 2021. The focus is on the emergence of myself and my friend and collaborator as artists over the pandemic and lockdown. I have created two neural network instances using MIRLCa, one for myself and one for my friend and collaborator. I will seed these with a particular word and generate different sounds for each of us, one to the right and one to the left. Once I have adjusted the sounds to create the backdrop, we will improvise over the top of this. The piece is titled “Who are we?” and fits within the overall narrative of “From Where? To here”.

The other main project is longer-term. I am creating a history of myself in terms of how I react to events. Each week, I find a word that sums up my feelings over that week. I then use MIRLCa to create a neural net recording of my reaction to sounds that sum up that word. I will also take an image or video with words that link all this together. Over a year, and perhaps longer, I will generate a history which I can then interrogate at a later stage within performances.

Give us a (speculative) example of how you imagine machine learning being applied to the practice of live coding in the future.

I see machine learning becoming normalised within performance, even giving credit to these “alternative intelligences” as performers. I see live coding becoming integrated as part of live performance, and I see many opportunities to integrate this with jazz.
