320C 2021 Joss Project
Project Proposal
Create a plugin that performs "color sonification," using the colors of a video to modulate an accompanying piece of music/audio. The RGB values of each frame's average color will be used as control inputs to audio effects, effectively creating a parametric mapping between video color and audio manipulation.
Structure & Timeline
- (By End of Week 4): Create a script (most likely in Python) that takes a video file as input and outputs three "audio" tracks to be imported into a DAW for use with the plugin (a rough sketch appears after this timeline).
  - Each frame of the video will be subjected to a "color averaging" algorithm.
  - The average color will be broken down into its R, G, and B values, one value per track.
  - Numerical mapping: color value 0 maps to audio sample value -1.0, color value 255 maps to 1.0, and color value 127.5 maps to 0.0.
  - For initial purposes, the project is restricted to 24 fps video and a 48000 Hz sampling rate for best compatibility (2000 audio samples per video frame).
- (By End of Week 6): Design a plugin that uses a single track's information as a "side chain" to control a multiband EQ setting specified by the user.
  - As one example, the "Green" track's values might control how strongly the user's EQ preset is applied at any given time. A very green frame of video would fully apply the preset, a neutrally colored frame would leave the EQ flat, and a very magenta frame (likely to have a low green value) would apply a "negative" or inverse of the green-associated preset (see the scaling illustration after this timeline).
  - Will likely build off of the Faust EQ demo.
  - Need to figure out how to get the audio input to "automate" between settings in a non-destructive way.
- (By End of Week 8): Implement a structure that takes the three input tracks as side chains to three effect plugins, all contained within a single plugin (see the configuration sketch after this timeline).
  - Create two layers of GUI options:
    - For each color, which effect (straight from the Faust library or with small modifications) does the user want to apply?
      - What is the most effective way to load plugins into a plugin?
    - For a given effect that a user has paired with a color, how will the user select which exposed parameter of that effect to "side chain" the color information to?
      - Will the user set a max (and/or min) value for the audio effect parameter to correspond with the max (and/or min) RGB value(s)? This may only sometimes be necessary, depending on the type of parameter...
- (By End of Week 10 / Presentation Date): Refine the GUI, test, and create demonstrations/documentation.
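A minimal sketch of the Week 4 conversion script, assuming OpenCV (cv2) for frame decoding and the soundfile package for WAV output (both library choices are assumptions, not part of the proposal). Each frame's average color value is mapped to [-1.0, 1.0] and held for 2000 samples:

```python
import cv2               # assumed: OpenCV for reading video frames
import numpy as np
import soundfile as sf   # assumed: soundfile for writing WAV output

VIDEO_FPS = 24                                # restricting to 24 fps video for now
SAMPLE_RATE = 48000                           # 48 kHz audio output
SAMPLES_PER_FRAME = SAMPLE_RATE // VIDEO_FPS  # 2000 audio samples per video frame

def video_to_color_tracks(video_path):
    """Return three arrays (R, G, B) of sample values in [-1.0, 1.0]."""
    cap = cv2.VideoCapture(video_path)
    tracks = {"r": [], "g": [], "b": []}
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV frames are BGR uint8; average over every pixel in the frame
        b, g, r = frame.reshape(-1, 3).mean(axis=0)
        # Map color value 0 -> -1.0, 127.5 -> 0.0, 255 -> 1.0
        for key, value in (("r", r), ("g", g), ("b", b)):
            tracks[key].extend([value / 127.5 - 1.0] * SAMPLES_PER_FRAME)
    cap.release()
    return np.array(tracks["r"]), np.array(tracks["g"]), np.array(tracks["b"])

if __name__ == "__main__":
    r, g, b = video_to_color_tracks("input.mp4")
    for name, track in (("red", r), ("green", g), ("blue", b)):
        sf.write(f"{name}_track.wav", track.astype(np.float32), SAMPLE_RATE)
```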
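For the Week 6 milestone, the intended green-to-EQ scaling can be illustrated numerically. This is only an illustration of the mapping, not the Faust implementation, and the per-band preset gains are hypothetical:

```python
import numpy as np

# Hypothetical user preset: per-band EQ gains in dB (low, mid, high)
GREEN_EQ_PRESET_DB = np.array([6.0, -3.0, 4.0])

def applied_eq_gains(green_control):
    """Scale the preset by the green track's value in [-1.0, 1.0].

    +1.0 (very green frame)   -> the preset is fully applied
     0.0 (neutral frame)      -> flat EQ (0 dB on every band)
    -1.0 (very magenta frame) -> the "inverse" of the preset
    """
    return green_control * GREEN_EQ_PRESET_DB

print(applied_eq_gains(1.0))    # [ 6. -3.  4.]
print(applied_eq_gains(0.0))    # [ 0. -0.  0.]
print(applied_eq_gains(-1.0))   # [-6.  3. -4.]
```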
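For the Week 8 milestone, a rough sketch of the kind of per-color configuration the two GUI layers would produce; the effect and parameter names here are purely hypothetical placeholders:

```python
# Hypothetical output of the two GUI layers: each color is paired with an effect,
# one exposed parameter of that effect, and the min/max range that the
# -1.0..1.0 color track should be mapped onto.
COLOR_MAPPINGS = {
    "red":   {"effect": "lowpass", "parameter": "cutoff_hz", "min": 200.0, "max": 8000.0},
    "green": {"effect": "peak_eq", "parameter": "gain_db",   "min": -6.0,  "max": 6.0},
    "blue":  {"effect": "reverb",  "parameter": "wet_mix",   "min": 0.0,   "max": 1.0},
}

def control_to_parameter(color, control):
    """Map a side-chain value in [-1.0, 1.0] onto the chosen parameter's range."""
    m = COLOR_MAPPINGS[color]
    normalized = (control + 1.0) / 2.0     # -1..1 -> 0..1
    return m["min"] + normalized * (m["max"] - m["min"])

# Example: a fully blue frame pushes the reverb's wet mix to its maximum.
print(control_to_parameter("blue", 1.0))   # 1.0
```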