This paper details a workshop and optional public installation based on the development of situational scores that combine music notation, AI, and code to create dynamic interactive art driven by the real-time movements of objects and people in a live scene, such as crowds on a public concourse. The approach presented here uses machine vision to process a video feed from a scene, from which detected objects and people are input to the Manhattan digital music notation, which integrates music editing and programming practices to support the creation of sophisticated musical scores that combine static, algorithmic, or reactive musical parts. This half- or full-day workshop begins with a short description and demonstration of the approach, showcasing previous public art installations, before moving on to practical explorations and applications by participants. Following a primer in the basics of the tools and concepts, attendees will work with Manhattan and a selection of pre-captured scenes to develop and explore techniques for dynamically mapping patterns, events, structure, or activity from different situations and environments to music. For the workshop, scenes are pre-processed so that activities can be run on any Windows or Mac machine. Practical activities will support discussions on technical, aesthetic, and ontological issues arising from the identification and mapping of structure and meaning in non-musical domains to analogous concepts in musical expression. The workshop could additionally supplement or support a performance or installation based on the technology, either showcasing work developed by participants, or presenting a more sophisticated, semi-permanent live exhibit for visitors to the conference or Elbphilharmonie, building on previous installations.
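As an illustration of the scene-processing stage described above, the following minimal sketch shows one plausible way a video feed could be analysed and its activity forwarded to a notation environment. It assumes OpenCV's stock HOG pedestrian detector and an OSC link; the file name, OSC addresses, and mappings are hypothetical and stand in for the pre-processed scene data and Manhattan interface used in the workshop, which the abstract does not specify.

```python
# Sketch: detect people in a captured scene and stream simple scene
# descriptors over OSC for a generative score to react to.
# Assumptions: OpenCV's default HOG people detector; python-osc for output;
# /scene/* addresses and the mappings are illustrative only.
import cv2
from pythonosc.udp_client import SimpleUDPClient

OSC_HOST, OSC_PORT = "127.0.0.1", 9000          # assumed receiver (e.g. notation host)
client = SimpleUDPClient(OSC_HOST, OSC_PORT)

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("concourse.mp4")         # hypothetical pre-captured scene
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Detect people in the current frame.
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))

    # Crowd density -> could drive texture or dynamics of the score.
    client.send_message("/scene/count", len(rects))

    # Normalised horizontal position of each person -> could drive pitch
    # or part selection in a reactive musical part.
    width = frame.shape[1]
    for (x, y, w, h) in rects:
        client.send_message("/scene/person/x", (x + w / 2) / width)

cap.release()
```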