Counterfactuals in music generation
'''Introduction'''
[describe high-level goal of the project: human-AI co-creation, refinement of system outputs through counterfactuals]
'''Updates'''
''Weeks 1 & 2'' were spent primarily on [https://docs.google.com/document/d/1X88qML-T1Zzw7A9GvK_O57yqetIu0MOr1AsSPyS0LeQ literature review]. I learned a lot of background on causal inference, guided mainly by Pearl's Causal Hierarchy.
''Week 3''
This week I tentatively narrowed the scope of the project to music generation. I have been experimenting with several music generation models from Google Magenta, including the Music Transformer and Coconet.
''Week 4''
Still experimenting with the generative models, trying to build intuition for how they work and which parameters I can adjust. The [https://magenta.tensorflow.org/coconet orderless composition] property of Coconet is particularly interesting: the sampling strategy is non-deterministic, so we could run a counterfactual in the vein of "what would have happened if the notes had been generated in a different order?"
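A toy sketch of this order counterfactual (this is NOT the real Coconet sampler or Magenta API, just a stand-in infilling model where each note is conditioned on the notes already placed). Holding the random draws fixed and intervening only on the visitation order is the twin-network flavor of counterfactual: same exogenous noise, different action.

```python
import numpy as np

def toy_fill(slots_to_fill, order, context, rng):
    """Fill pitches into `slots_to_fill`, visiting them in `order`.

    Hypothetical "model": each new pitch is drawn near the mean of the
    notes placed so far, so every step conditions on earlier steps and
    the visitation order is itself a variable we can intervene on.
    """
    notes = dict(context)  # slot -> pitch, starting from the fixed context
    for slot in order:
        mean = np.mean(list(notes.values()))
        notes[slot] = int(rng.normal(loc=mean, scale=2.0))
    return [notes[s] for s in sorted(slots_to_fill)]

context = {0: 60, 7: 67}       # two fixed context notes (C4, G4)
slots = [1, 2, 3, 4, 5, 6]     # masked slots to infill

# Factual run: fill left to right. Counterfactual: same seed (same
# exogenous noise), but visit the slots in the reverse order.
factual = toy_fill(slots, slots, context, np.random.default_rng(0))
counterfactual = toy_fill(slots, slots[::-1], context, np.random.default_rng(0))

print(factual)
print(counterfactual)  # same pitch draws, assigned to different time slots
```

In this toy model the set of pitches is unchanged but their placement in time flips, which is exactly the kind of "what if the order differed" comparison described above; a real Coconet experiment would intervene on the Gibbs-sampling visitation order instead.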
''Week 5''
I am now using the DDSP framework in my project. My project abstract can be viewed [https://docs.google.com/document/d/1tbPepxLPPV2MJjuzKfh2UqA_vHr0eMc81PPHPzi0mUg here].
TODO:
- Intuitive control knobs
- Bayesian optimization
- Gaussian processes
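To sketch how Bayesian optimization with a Gaussian process could tune a control knob: the GP models a listener's preference over knob settings from a few ratings, and an acquisition function picks the next setting to try. Everything below is a minimal NumPy-only illustration under stated assumptions; the `preference` function stands in for real listener feedback on DDSP outputs, which of course has no closed form.

```python
import numpy as np

def rbf(a, b, length=0.1):
    """Squared-exponential (RBF) kernel between 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """GP posterior mean and std at x_test given observations."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def preference(knob):
    """Hypothetical listener preference, peaked at knob = 0.7."""
    return np.exp(-((knob - 0.7) ** 2) / 0.02)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=3)      # initial random knob settings
y = preference(x)                  # their (simulated) ratings
grid = np.linspace(0, 1, 200)

for _ in range(10):                # Bayesian-optimization loop
    mean, std = gp_posterior(x, y, grid)
    ucb = mean + 2.0 * std         # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]  # most promising setting to try next
    x = np.append(x, x_next)
    y = np.append(y, preference(x_next))

best = x[np.argmax(y)]
print(f"best knob setting found: {best:.2f}")
```

The UCB rule trades off exploring uncertain knob regions against exploiting settings already rated highly, so samples tend to concentrate near the preferred setting after a handful of queries.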
''Week 6''
TODO:
- Design Adjectives
- knob prototype (link colab)
- idea of higher-level and lower-level control knobs
- talked to Tobi Gerstenberg; describe convo
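One way to make the higher-level vs lower-level knob idea concrete: a single semantic knob drives several low-level synthesis parameters through a fixed map. The parameter names and the linear map below are entirely hypothetical placeholders, just to illustrate the shape of the idea.

```python
import numpy as np

# Hypothetical low-level synth parameters, all normalized to [0, 1].
LOW_LEVEL = ["harmonic_tilt", "noise_mix", "filter_cutoff"]
BASELINE = np.array([0.5, 0.5, 0.5])

# Hand-chosen direction each low-level parameter moves per unit of the
# high-level "brightness" knob (assumed values, for illustration only).
BRIGHTNESS_DIRECTION = np.array([0.8, -0.2, 0.5])

def apply_high_level(brightness):
    """Map one high-level knob in [-1, 1] to low-level settings in [0, 1]."""
    return np.clip(BASELINE + 0.5 * brightness * BRIGHTNESS_DIRECTION, 0.0, 1.0)

settings = apply_high_level(0.8)
print(dict(zip(LOW_LEVEL, settings.round(2))))
```

The appeal for co-creation is that a user can ask a coarse counterfactual ("what if it were brighter?") with one gesture, while fine-grained edits remain available on the low-level knobs.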
''Week 7''
TODO:
- discussion about building intuition for which control parameters to expose (and how fine-grained), using envelopes
- improved knob prototype
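The envelope idea can be sketched in a few lines: instead of exposing every frame of a control signal (e.g. a DDSP loudness curve at some frame rate), expose a handful of breakpoints and interpolate between them, so a long curve collapses into a few intuitive numbers. The breakpoint values below are made-up examples.

```python
import numpy as np

def breakpoint_envelope(times, values, n_frames, frame_rate=250):
    """Piecewise-linear control curve through (time, value) breakpoints."""
    t = np.arange(n_frames) / frame_rate
    return np.interp(t, times, values)

# Hypothetical loudness envelope over 2 s: fast attack, slow decay.
env = breakpoint_envelope(times=[0.0, 0.1, 1.5, 2.0],
                          values=[0.0, 1.0, 0.4, 0.0],
                          n_frames=500)

print(env.shape, round(float(env.max()), 3))
```

A counterfactual edit then becomes a breakpoint tweak ("what if the attack were slower?") followed by resynthesis, rather than hand-editing 500 frame values.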
Revision as of 18:11, 12 May 2021