Symbolic music generation has made significant progress, yet achieving fine-grained and flexible control over composer style remains challenging. Existing training-based methods for composer style conditioning depend on large labeled datasets. Moreover, these methods typically support only a single composer per generation, limiting their applicability to more creative or blended scenarios. In this work, we propose Composer Vector, an inference-time steering method that operates directly in the model's latent space to control composer style without retraining. Through experiments on multiple symbolic music generation models, we show that Composer Vector effectively guides generations toward target composer styles, enabling smooth and interpretable control through a continuous steering coefficient. It also enables seamless fusion of multiple styles within a unified latent-space framework. Overall, our work demonstrates that simple latent-space steering provides a practical and general mechanism for controllable symbolic music generation, enabling more flexible and interactive creative workflows.
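The general idea of latent-space steering can be sketched as follows. This is a minimal illustration, not the repository's implementation: it assumes a composer direction is derived from mean hidden-state differences, and that steering adds a coefficient-weighted sum of such directions to a hidden state (the function names and NumPy setup are ours).

```python
import numpy as np

def composer_vector(target_acts, baseline_acts):
    # Hypothetical construction: difference of mean hidden activations
    # between a target composer's pieces and a baseline corpus.
    return target_acts.mean(axis=0) - baseline_acts.mean(axis=0)

def steer(hidden, vectors, coeffs):
    # Add a weighted sum of composer vectors to a hidden state.
    # One coefficient gives single-style control; several blend styles.
    out = hidden.copy()
    for v, a in zip(vectors, coeffs):
        out = out + a * v
    return out

rng = np.random.default_rng(0)
d = 8  # toy hidden dimension
bach = composer_vector(rng.normal(size=(16, d)), rng.normal(size=(16, d)))
chopin = composer_vector(rng.normal(size=(16, d)), rng.normal(size=(16, d)))
h = rng.normal(size=d)

h_bach = steer(h, [bach], [1.5])                  # single-style steering
h_blend = steer(h, [bach, chopin], [0.7, 0.3])    # fusion of two styles
```

Because the operation is a simple vector addition, the steering coefficient varies continuously, and setting it to zero recovers the unsteered hidden state.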
```shell
# Clone the repository
git clone https://github.com/JiangXunyi/Composer-Vector.git
cd Composer-Vector

# Create a virtual environment
python -m venv env
source env/bin/activate

# Install dependencies
pip install -r requirements.txt
```

If you use this work in your research, please cite:
```bibtex
@inproceedings{composervector2025,
  title={Composer Vector: Style-steering Symbolic Music Generation in a Latent Space},
  author={Jiang, Xunyi and Xu, Xin},
  year={2025}
}
```