This is the project website & blog of MIRLCAuto: A Virtual Agent for Music Information Retrieval in Live Coding, a project funded by the EPSRC HDI Network Plus Grant - Art, Music, and Culture theme.
Presentation at Coding Literacy, Practices & Cultures Colloquia – April 29, 2021
29 Apr 2021 • Anna Xambó
Today, 29 April 2021, I presented the MIRLCAuto project on Zoom at Coding Literacy, Practices & Cultures, a networked series of research colloquia organized by Dan Verständig (University of Magdeburg, Germany) and Angela Brennecke (Film University KONRAD WOLF in Potsdam-Babelsberg, Germany). You can find more info about the colloquia series here.
The talk, entitled "Insights into MIRLCAuto: A virtual agent for music information retrieval in live coding", introduced the project in the context of collaborative live coding and the three Human-Data Interaction (HDI) Network Plus tenets of legibility, agency and negotiability, focusing on why these tenets matter in AI-powered live coding environments. The project contributes to the HDI Network Plus theme "Art, Music & Culture" with insights into live coding and machine learning (ML). It was a good opportunity to reflect on the project's achievements and to outline what remains to be done before the project ends on 30 June 2021.
The three overarching HDI tenets relate to the project in the following way:
- Legibility (making the processes of sharing data about a person, and others’ analysis and use of that data, comprehensible to that person): This tenet connects with the TOPLAP Manifesto and its principle that "Obscurantism is dangerous. Show us your screens." In addition, code and processes should be legible, a priority for both the MIRLC and MIRLCa languages.
- Agency (giving a person the capacity to interact with their systems so as to control and correct the above-mentioned processes): Here the role of interactive machine learning is fundamental, enabling live coders to create 'situated musical actions' (Xambó et al. 2021) that can, in turn, influence the MIRLCa system (see the sketch after this list).
- Negotiability (giving a person the capacity to interact with the people who do the above-mentioned analysis and use, so as to change and correct what those people do): The workshops, concerts and GitHub repository have been instrumental in promoting a participatory design approach, treating the design of the tool as an ongoing conversation.
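To make the agency tenet more concrete, below is a minimal, hypothetical sketch in Python (not the actual MIRLCa implementation, which is built in SuperCollider) of the interactive machine learning loop it alludes to: the live coder rates retrieved sounds, and a small classifier is retrained on the spot to filter what the agent retrieves next. The function names (`rate`, `keep`), the feature values, and the use of scikit-learn's k-nearest-neighbours classifier are illustrative assumptions, standing in for whatever model the real system uses.

```python
# Illustrative sketch only (not the MIRLCa codebase): an interactive machine
# learning loop in which the live coder labels retrieved sounds as good or bad,
# and a 1-nearest-neighbour classifier is retrained on the fly to filter
# future retrievals. Audio feature values are mocked.
from sklearn.neighbors import KNeighborsClassifier

features, labels = [], []                  # training set grows during the session
model = KNeighborsClassifier(n_neighbors=1)

def rate(sound_features, is_good):
    """Live coder rates a retrieved sound; the model is retrained immediately."""
    features.append(sound_features)
    labels.append(1 if is_good else 0)
    if len(set(labels)) > 1:               # fit only once both classes are present
        model.fit(features, labels)

def keep(sound_features):
    """Ask the trained agent whether a newly retrieved sound is worth playing."""
    if len(set(labels)) < 2:
        return True                        # untrained agent accepts everything
    return model.predict([sound_features])[0] == 1

# Example session: rate two sounds, then let the agent filter a third one.
rate([0.8, 0.1], is_good=True)             # mocked audio features
rate([0.2, 0.9], is_good=False)
print(keep([0.7, 0.2]))                    # True: closest to the "good" example
```

The point of the sketch is the interaction pattern rather than the model: retraining happens live, at the coder's pace, which is what turns sound retrieval into a 'situated musical action'.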
Take-away message: there are as many approaches to machine learning and live coding as there are practices in live coding. To make research and tools meaningful beyond personal use, these three tenets help in finding a versatile solution suited to the nature of the live coding community, which should align with open culture and DIY practices. They are also a way of bringing ML concepts to the live coding community, adapted to their respective live coding environments. This is just the beginning of a promising area of research and practice.
Feedback: We had a round of Q&A at the end, where we discussed topics such as the legal boundaries of repurposing sound samples in live coding and how to properly attribute them, how live coding practices can inform educational contexts and techniques, how to get started with machine learning for music, and to what extent (and how) MIRLCa has changed my practice.
The slides of my presentation are available here. You can find the reference list below.
Acknowledgments #
Thanks to Dan Verständig and Angela Brennecke for their invitation, to the attendees for their interest, and to the University of Magdeburg and Film University KONRAD WOLF in Potsdam Babelsberg for co-hosting the event.
Reference List #
- Barbosa, Á. (2003). Displaced Soundscapes: A Survey of Network Systems for Music and Sonic Art Creation. Leonardo Music Journal, 53–59.
- Bernardo, F., Zbyszyński, M., Grierson, M., & Fiebrink, R. (2020). Designing and Evaluating the Usability of a Machine Learning API for Rapid Prototyping Music Technology. Frontiers in Artificial Intelligence, 3(13), 1–18.
- Bullock, J., & Momeni, A. (2015). ml.lib: Robust, Cross-platform, Open-source Machine Learning for Max and Pure Data. In E. Berdahl & J. Allison (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 265–270). Baton Rouge, Louisiana, USA: Louisiana State University.
- Collins, N. (2015). Live Coding and Machine Listening. In Proceedings of the First International Conference on Live Coding (pp. 4–11). Leeds, UK: ICSRiM, University of Leeds.
- Collins, N., McLean, A., Rohrhuber, J., & Ward, A. (2003). Live Coding in Laptop Performance. Organised Sound, 8(3), 321–330.
- Fails, J. A., & Olsen, D. R. (2003). Interactive Machine Learning. In Proceedings of the 8th International Conference on Intelligent User Interfaces (pp. 39–45). Miami, Florida, USA: Association for Computing Machinery.
- Fiebrink, R., & Caramiaux, B. (2018). The Machine Learning Algorithm as Creative Musical Tool. In R. T. Dean & A. McLean (Eds.), The Oxford Handbook of Algorithmic Music (pp. 181–208). Oxford University Press.
- Fiebrink, R., & Sonami, L. (2020). Reflections on Eight Years of Instrument Creation with Machine Learning. In R. Michon & F. Schroeder (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 237–242). Birmingham, UK: Birmingham City University.
- Fiebrink, R., Trueman, D., & Cook, P. R. (2009). A Meta-Instrument for Interactive, On-the-Fly Machine Learning. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 280–285). Pittsburgh, PA, United States.
- Font, F., Roma, G., & Serra, X. (2013). Freesound Technical Demo. In Proceedings of the 21st ACM International Conference on Multimedia (pp. 411–412).
- Lorway, N., Jarvis, M., Wilson, A., Powley, E., & Speakman, J. (2019). Autopia: An AI Collaborator for Gamified Live Coding Music Performances. In 2019 Artificial Intelligence and Simulation of Behaviour Convention. Falmouth, UK.
- Navarro, L., & Ogborn, D. (2017). Cacharpo: Co-performing Cumbia Sonidera with Deep Abstractions. In Proceedings of the International Conference on Live Coding. Morelia, Mexico.
- Ordiales, H., & Bruno, M. L. (2017). Sound Recycling From Public Databases: Another Bigdata Approach to Sound Collections. In Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences (pp. 48:1–48:8).
- Paz, I. (2015). Live Coding Through Rule-Based Modelling of High-Level Structures: Exploring Output Spaces of Algorithmic Composition Systems. In Proceedings of the First International Conference on Live Coding (pp. 83–86). Leeds, UK: ICSRiM, University of Leeds.
- Reppel, N. (2020). The Mégra System - Small Data Music Composition and Live Coding Performance. In Proceedings of the 2020 International Conference on Live Coding (pp. 95–104). Limerick, Ireland.
- Roma, G., Green, O., & Tremblay, P. A. (2019). Adaptive Mapping of Sound Collections for Data-driven Musical Interfaces. In M. Queiroz & A. X. Sedó (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 313–318). Porto Alegre, Brazil: UFRGS.
- Roma, G., Xambó, A., & Freeman, J. (2018). User-independent Accelerometer Gesture Recognition for Participatory Mobile Music. Journal of the Audio Engineering Society, 66(6), 430–438.
- Stewart, J., Lawson, S., Hodnick, M., & Gold, B. (2020). Cibo v2: Realtime Livecoding A.I. Agent. In Proceedings of the 2020 International Conference on Live Coding (ICLC2020) (pp. 20–31). Limerick, Ireland: University of Limerick.
- Subramanian, S., Freeman, J., & McCoid, S. (2012). LOLbot: Machine Musicianship in Laptop Ensembles. In Proceedings of the International Conference on New Interfaces for Musical Expression. Ann Arbor, Michigan: University of Michigan.
- Suchman, L. A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.
- Xambó, A., Lerch, A., & Freeman, J. (2019). Music Information Retrieval in Live Coding: A Theoretical Framework. Computer Music Journal, 42, 9–25.
- Xambó, A., Roma, G., Lerch, A., Barthet, M., & Fazekas, G. (2018). Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases. In L. Dahl, D. Bowman, & T. Martin (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 364–369). Blacksburg, Virginia, USA: Virginia Tech.
- Xambó, A., Roma, G., Roig, S., & Solaz, E. (2021). Live Coding with the Cloud and a Virtual Agent. In Proceedings of the International Conference on New Interfaces for Musical Expression. Shanghai, China: New York University Shanghai.