Counterfactuals in music generation

From CCRMA Wiki
Revision as of 06:43, 1 May 2021

4/29/21: I am now using the DDSP framework in my project. My project abstract can be viewed here: https://docs.google.com/document/d/1tbPepxLPPV2MJjuzKfh2UqA_vHr0eMc81PPHPzi0mUg

4/20/21: Still playing around with the generative models, trying to get some intuition into their workings and what parameters I can adjust. The [https://magenta.tensorflow.org/coconet orderless composition] property of Coconet is particularly interesting — it seems like the sampling strategy is non-deterministic, and we could run a counterfactual in the vein of "what would have happened if the notes were generated in a different order..."
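
To make the orderless-generation counterfactual concrete, here is a minimal toy sketch of the idea. This is not Coconet's actual API or training setup: the `toy_model` scorer is an invented stand-in for a trained network, and the pitch-preference heuristic is purely illustrative. The point is only the control flow — fill masked positions one at a time in some order, so that changing the order changes the context each position is conditioned on.

```python
# Toy sketch of Coconet-style orderless infilling.
# Assumption: `toy_model` is an invented stand-in for a trained model,
# not the real Coconet; the scoring heuristic is for illustration only.
import random

PITCHES = list(range(60, 72))  # one octave of MIDI note numbers


def toy_model(sequence, position):
    """Stand-in model: score candidate pitches from already-filled
    neighbors, then sample from the top-scoring candidates."""
    context = [p for p in sequence if p is not None]
    anchor = context[0] if context else 60
    # Crude "harmonic" preference: pitches near the anchor score higher.
    top = sorted(PITCHES, key=lambda p: abs(p - anchor))[:3]
    return random.choice(top)


def orderless_generate(length, order, seed):
    """Fill a fully-masked sequence one position at a time, in `order`."""
    random.seed(seed)
    seq = [None] * length
    for pos in order:
        seq[pos] = toy_model(seq, pos)
    return seq


# Counterfactual: same seed, same "model", different fill orders.
a = orderless_generate(4, order=[0, 1, 2, 3], seed=7)
b = orderless_generate(4, order=[3, 2, 1, 0], seed=7)
print(a, b)
```

Because each step conditions on whatever has been filled so far, the fill order is part of the sampling procedure, and the counterfactual "what if the notes were generated in a different order?" corresponds to re-running with a different `order` while holding everything else fixed.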

4/15/21: This week, I tentatively narrowed the scope of the project to music generation. I have been experimenting with various music generation models from Google Magenta, including the Music Transformer and Coconet.