
Counterfactuals in music generation

From CCRMA Wiki
Revision as of 18:11, 12 May 2021 by Agrawalk (talk | contribs) (Reorganized page with updates by week. Still filling in)

Introduction

[describe high-level goal of the project: human-AI co-creation, refinement of system outputs through counterfactuals]

Updates

Weeks 1 & 2

These two weeks were primarily spent on literature review. I learned a lot of background on causal inference, chiefly through Pearl's Causal Hierarchy: association, intervention, and counterfactuals.
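Since counterfactuals are the focus of this project, a tiny worked example of the hierarchy's third rung may be useful. The structural model and the numbers below are invented purely for illustration:

```python
# A tiny illustration of the third rung of Pearl's Causal Hierarchy
# (the counterfactual) in a structural causal model. The model and
# all numbers here are invented for illustration.

def scm(u_x, u_y, do_x=None):
    """Structural model: X := U_x, Y := 2*X + U_y (X overridable via do())."""
    x = u_x if do_x is None else do_x
    y = 2 * x + u_y
    return x, y

# Counterfactual via abduction-action-prediction:
# 1. Abduction: observe (X=1, Y=5) and infer the noise U_y = 5 - 2*1 = 3.
x_obs, y_obs = 1, 5
u_y = y_obs - 2 * x_obs
# 2. Action: intervene, setting X to 2 in the same "world" (same noise).
# 3. Prediction: recompute Y under the intervention.
_, y_cf = scm(u_x=x_obs, u_y=u_y, do_x=2)
print(y_cf)  # → 7: "had X been 2, Y would have been 7"
```

The key point for this project is step 1: a counterfactual reuses the noise inferred from the factual outcome, rather than resampling it.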

Week 3

This week, I tentatively narrowed the scope of the project to music generation. I have been experimenting with music generation models from Google Magenta, including Music Transformer and Coconet.

Week 4

Still playing around with the generative models, trying to build intuition for how they work and which parameters I can adjust. The orderless composition property of Coconet is particularly interesting: its sampling strategy is non-deterministic, so we could run a counterfactual in the vein of "what would have happened if the notes had been generated in a different order?"
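A toy sketch of that counterfactual (this is not Coconet's actual sampling code): a stand-in "model" fills one masked cell of a pianoroll at a time, and the fill order is itself a variable we can intervene on. Holding the random seed fixed while changing only the order is exactly the "different order" question:

```python
import random

# Toy stand-in for orderless infilling, invented for illustration.
# The point is only that generation order is an interveneable variable.

def fill_cell(pianoroll, pos, rng):
    """Pretend model call: pick a pitch class conditioned on filled cells."""
    filled = [p for p in pianoroll if p is not None]
    base = (sum(filled) + pos) % 12 if filled else pos % 12
    return (base + rng.randint(0, 2)) % 12

def generate(fill_order, seed=0):
    rng = random.Random(seed)
    pianoroll = [None] * len(fill_order)
    for pos in fill_order:
        pianoroll[pos] = fill_cell(pianoroll, pos, rng)
    return pianoroll

factual = generate([0, 1, 2, 3], seed=42)
counterfactual = generate([3, 2, 1, 0], seed=42)  # same seed, new order
print(factual, counterfactual)
```

With the seed held fixed, any difference between the two outputs is attributable to the generation order alone.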

Week 5

I am now using Google's DDSP (Differentiable Digital Signal Processing) framework in my project. My project abstract can be viewed here.

TODO:

- Intuitive control knobs

- Bayesian optimization

- Gaussian Processes
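The three TODO items above fit together: a Gaussian process can serve as the surrogate in a Bayesian optimization loop that tunes an intuitive control knob against some preference score. A minimal self-contained sketch, with a hypothetical preference function standing in for listener feedback (nothing here is DDSP's API):

```python
import math

# Bayesian optimization of one control knob with a Gaussian-process
# surrogate. The preference function is a hypothetical stand-in for a
# listener rating of the synthesized output.

def preference(knob):
    return -(knob - 0.7) ** 2  # hypothetical: listeners prefer knob near 0.7

def rbf(a, b, length=0.2):
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, jitter=1e-6):
    """GP posterior mean and variance at query point xq."""
    K = [[rbf(a, b) + (jitter if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    k_star = [rbf(x, xq) for x in xs]
    mean = sum(k * a for k, a in zip(k_star, alpha))
    v = solve(K, k_star)
    return mean, max(rbf(xq, xq) - sum(k * w for k, w in zip(k_star, v)), 0.0)

xs = [0.0, 1.0]
ys = [preference(x) for x in xs]
grid = [i / 50 for i in range(51)]

def ucb(x):  # upper-confidence-bound acquisition
    m, v = gp_posterior(xs, ys, x)
    return m + 2.0 * math.sqrt(v)

for _ in range(8):
    x_next = max(grid, key=ucb)  # most promising knob setting to try
    xs.append(x_next)
    ys.append(preference(x_next))

best = xs[ys.index(max(ys))]
print(round(best, 2))  # best-rated knob setting observed so far
```

Each iteration trades off exploring high-uncertainty knob settings against exploiting ones the surrogate already rates highly, so relatively few (expensive) listening evaluations are needed.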

Week 6

TODO:

- Design Adjectives

- knob prototype (link colab)

- idea of higher-level and lower-level control knobs

- talked to Tobi Gerstenberg; describe convo
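One way to picture the higher-/lower-level knob idea: a single high-level knob fans out to several low-level synthesis parameters. All parameter names below are invented for illustration and are not actual DDSP fields:

```python
from dataclasses import dataclass

# Hypothetical low-level synthesis parameters; the names are invented
# for illustration, not taken from any real DDSP API.

@dataclass
class LowLevelParams:
    harmonic_tilt: float   # relative energy of upper harmonics
    noise_amount: float    # filtered-noise level
    cutoff_hz: float       # post-filter cutoff

def brightness_knob(value):
    """Map one high-level knob in [0, 1] to several low-level parameters."""
    value = min(max(value, 0.0), 1.0)
    return LowLevelParams(
        harmonic_tilt=0.2 + 0.8 * value,
        noise_amount=0.05 + 0.10 * value,
        cutoff_hz=500.0 + 7500.0 * value,
    )

print(brightness_knob(0.5))
```

The design question is then which level to expose to the user: one perceptually meaningful knob, or the finer-grained parameters underneath it.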

Week 7

TODO:

- discussion about building intuition for which control parameters to expose, and at what granularity: using envelopes.

- improved knob prototype
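The envelope idea can be sketched as follows: instead of a single static knob value, a parameter follows a time-varying breakpoint envelope, and the granularity of control is the number of breakpoints. The (time, value) breakpoint format with linear interpolation is an assumption for illustration, not any particular DDSP construct:

```python
# A parameter controlled by a breakpoint envelope rather than one static
# knob. Breakpoints are (time, value) pairs, linearly interpolated; this
# format is assumed for illustration.

def envelope(breakpoints):
    """Return f(t) that linearly interpolates the (time, value) breakpoints."""
    def f(t):
        if t <= breakpoints[0][0]:
            return breakpoints[0][1]
        for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        return breakpoints[-1][1]
    return f

# A coarse attack-decay envelope on, say, a loudness parameter.
loudness = envelope([(0.0, 0.0), (0.1, 1.0), (0.5, 0.6), (1.0, 0.0)])
print(loudness(0.05), loudness(0.3), loudness(1.5))
```

Coarse control is a handful of breakpoints; fine-grained control is many breakpoints, approaching a free-form curve.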