Automatic for the people: Crowd-driven generative scores using Manhattan and machine vision

Nash, Chris

Authors

Chris Nash Chris.Nash@uwe.ac.uk
Senior Lecturer in Music Tech - Software Development



Abstract

This paper details a workshop and optional public installation based on the development of situational scores that combine music notation, AI, and code to create dynamic interactive art driven by the realtime movements of objects and people in a live scene, such as crowds on a public concourse. The approach presented here uses machine vision to process a video feed from a scene, from which detected objects and people are input to the Manhattan digital music notation [1], which integrates music editing and programming practices to support the creation of sophisticated musical scores that combine static, algorithmic, or reactive musical parts. This half- or full-day workshop begins with a short description and demonstration of the approach, showcasing previous public art installations, before moving on to practical explorations and applications by participants. Following a primer in the basics of the tools and concepts, attendees will work with Manhattan and a selection of pre-captured scenes to develop and explore techniques for dynamically mapping patterns, events, structure, or activity from different situations and environments to music. For the workshop, scenes are pre-processed to support any Windows or Mac machine. Practical activities will support discussions on technical, aesthetic, and ontological issues arising from the identification and mapping of structure and meaning in non-musical domains to analogous concepts in musical expression. The workshop could additionally supplement or support a performance or installation based on the technology, either showcasing work developed by participants or presenting a more sophisticated, semi-permanent live exhibit for visitors to the conference or Elbphilharmonie, building on previous installations.
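
The abstract describes a two-stage pipeline (machine vision on a live or pre-captured video feed, with detections mapped into Manhattan) but does not specify the interface between the stages. As a minimal illustrative sketch, assuming detections are forwarded as generic OSC control messages, the vision front end could resemble the Python below; the /scene/* addresses, the port, and the filename are hypothetical placeholders, not Manhattan's documented input API.

```python
# Illustrative sketch only: detect people in a scene with OpenCV's
# built-in HOG pedestrian detector and forward simple crowd statistics
# over OSC. The OSC addresses, port, and filename are assumed
# placeholders; the paper does not document Manhattan's input interface.

import cv2
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # hypothetical OSC receiver

# OpenCV ships a default HOG+SVM model trained for pedestrian detection.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("concourse.mp4")       # pre-captured scene, as in the workshop
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 360))     # downscale for near-realtime detection
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))

    # Map crowd activity to control data: headcount could drive musical
    # density; the normalised horizontal centroid could drive pitch or pan.
    client.send_message("/scene/count", int(len(boxes)))
    if len(boxes) > 0:
        cx = sum(x + w / 2.0 for (x, y, w, h) in boxes) / len(boxes)
        client.send_message("/scene/centroid_x", cx / frame.shape[1])

cap.release()
```

Any scene statistic exposed this way (crowd density, centroid, motion) could then be bound to pitch, rhythm, or structural parameters within the score, in the spirit of the mappings explored in the workshop.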

Citation

Nash, C. (2021). Automatic for the people: Crowd-driven generative scores using Manhattan and machine vision. In Proceedings of the 2020/2021 Conference on Technologies for the Notation and Representation of Music (TENOR), Hamburg, Germany.

Conference Name Technologies for the Notation and Representation of Music (TENOR)
Conference Location Hamburg, Germany
Start Date May 10, 2021
End Date May 13, 2021
Acceptance Date Mar 11, 2021
Publication Date 2021
Deposit Date May 3, 2021
Publicly Available Date Mar 28, 2024
Volume 2021
Series Title Proceedings of the 2020/2021 Conference on Technologies for the Notation and Representation of Music (TENOR)
Public URL https://uwe-repository.worktribe.com/output/7336888
Publisher URL https://www.tenor-conference.org/proceedings.html
Additional Information Presentation of the Manhattan-based crowd-driven music system used in BBC Music Day 2018; originally planned as a paper, workshop, and new installation in the Elbphilharmonie (Hamburg, Germany), but revised due to COVID-19 to a paper and video performance of Bristol footage.
