Next Generation Performance Rendering -- Exploiting Controllability
Keiji Hirata and Rumi Hiraga
Abstract:
We believe that the next-generation performance rendering system
should be able to refine and improve a generated performance
interactively, incrementally and locally through direct instructions
in the natural language of a musician. In addition, the generated
performance must reflect the musician's intention properly. For these
purposes, we propose a new framework called two-stage performance
rendering. The first stage translates a musician's instruction in
natural language into the deviations of the onset time, duration and
amplitude of structurally important notes, and the second stage spreads the
deviations over the surrounding notes. We demonstrate sample sessions
using a prototype system that contains a grouping editor and a
performance rendering engine.
In Proc. of ICMC 2000, pp.360-363, ICMA.
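The second stage described in the abstract can be pictured with a minimal sketch: a deviation assigned to a structurally important note is propagated to its neighbours with decreasing weight. This is not the authors' algorithm; the function name, the linear decay scheme, and the `radius` parameter are all hypothetical illustrations of the general idea.

```python
# Hypothetical sketch of "spreading deviations over surrounding notes".
# The linear-decay weighting below is an assumption for illustration,
# not the method of the ICMC 2000 paper.

def spread_deviation(num_notes, anchor, deviation, radius=2):
    """Return per-note deviations: the full value at the anchor
    (structurally important) note, decaying linearly to zero over
    `radius` neighbouring notes on each side."""
    devs = [0.0] * num_notes
    for i in range(num_notes):
        dist = abs(i - anchor)
        if dist <= radius:
            devs[i] = deviation * (1 - dist / (radius + 1))
    return devs

# e.g. an onset-time deviation of +30 ms at note 3 of an 8-note group
print(spread_deviation(8, 3, 30.0))
```

The same helper could be applied independently to onset time, duration, and amplitude, matching the three deviation types named in the abstract.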