
Research Repository


Automatic for the people: Crowd-driven generative scores using Manhattan and machine vision (2021)
Conference Proceeding
Nash, C. (2021). Automatic for the people: Crowd-driven generative scores using Manhattan and machine vision.

This paper details a workshop and optional public installation based on the development of situational scores that combine music notation, AI, and code to create dynamic interactive art driven by the realtime movements of objects and people in a live...

Creativity in children's digital music composition (2021)
Conference Proceeding
Ford, C., Bryan-Kinns, N., & Nash, C. (2021). Creativity in children's digital music composition. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME). https://doi.org/10.21428/92fbeb44.e83deee9

Composing is a neglected area of music education. To increase participation, many technologies provide open-ended interfaces to motivate child autodidactic use, drawing influence from Papert’s LOGO philosophy to support children’s learning through pl...

Was that me?: Exploring the effects of error in gestural digital musical instruments (2020)
Conference Proceeding
Brown, D., Nash, C., & Mitchell, T. J. (2020). Was that me?: Exploring the effects of error in gestural digital musical instruments. In AM '20: Proceedings of the 15th International Conference on Audio Mostly (pp. 168-174). https://doi.org/10.1145/3411109.3411137

Traditional Western musical instruments have evolved to be robust and predictable, responding consistently to the same player actions with the same musical response. Consequently, errors occurring in a performance scenario are typically attributed to...

Crowd-driven music: Interactive and generative approaches using machine vision and Manhattan (2020)
Conference Proceeding
Nash, C. (2020). Crowd-driven music: Interactive and generative approaches using machine vision and Manhattan. In F. Schroeder, & R. Michon (Eds.), Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 259-264)

This paper details technologies and artistic approaches to crowd-driven music, discussed in the context of a live public installation in which activity in a public space (e.g. a busy railway platform) is used to drive the automated composition and pe...

Understanding user-defined mapping design in mid-air musical performance (2018)
Conference Proceeding
Brown, D., Nash, C., & Mitchell, T. (2018). Understanding user-defined mapping design in mid-air musical performance. https://doi.org/10.1145/3212721.3212810

Modern gestural interaction and motion capture technology is frequently incorporated into Digital Musical Instruments (DMIs) to enable n...