Melody Generator
Idea/Premise
- To *aid* musicians in creating melodies
- To build modified melodies out of existing ideas
Motivation
- Personal interest in software that 'uses' 'intelligence'
Design
- At its core, it is a challenge of placing notes in time
- The code is almost exclusively about assigning time values to different pitches as the melody proceeds (a minimal illustration follows this list)
- Several attributes are taken from the user input.
- The attributes considered for creating the output: the scale and the 'speed' of the input
- The user interacts initially by giving an input; the engine then immediately takes over control, computes the output, and plays it.
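To make the idea of "placing notes in time" concrete, here is a minimal sketch (not the project's actual code; the struct and field names are made up for illustration) of a melody represented as pitches with time values attached:

 // Illustrative only: a melody as a sequence of pitches, each with a time value.
 #include <cstdio>
 #include <vector>

 struct TimedNote {
     int    midiPitch;   // e.g. 60 == C4
     double durationSec; // how long this pitch sounds before the next one
 };

 int main() {
     std::vector<TimedNote> melody = {
         {60, 0.5}, {62, 0.5}, {64, 1.0}, {67, 0.5}, {64, 1.5}
     };
     for (const TimedNote& n : melody)
         std::printf("pitch %d for %.2f s\n", n.midiPitch, n.durationSec);
     return 0;
 }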
Design Considerations
- The program uses the RtAudio and RtMidi libraries for computing real-time audio and accepting real-time MIDI input from the user.
- The Karplus Strong synth model is the 'voice' of the project.
- The number of notes that the user would like to give as the input can be set within the program. (This is somewhat clumsy, so I am currently working on a different way to gauge the input.)
- A large amount of information is drawn from the input:
- For example, if the user enters an input starting on C4, the output is in C Major. The system currently looks for the C Major, D Minor, and G Major scales (see the analysis sketch after this list).
- Another piece of information used from the input is the timing. If the average delta time between the notes of the input is below a certain threshold, the song is considered 'fast-paced'; otherwise, the overall contour of the song, in terms of speed, is mild.
- For inputs with a smaller average delta time, the AABA model is used to generate the melody. The program builds patterns, since melody is all about repetition and patterning (see the AABA sketch after this list).
- Software Design/Architecture
- The system basically consists of the Karplus Strong synth class, a timer section, a scales section and a melody generation section.
- The Karplus Strong synth class is mostly derived from HW2 of this course, save a few changes.
- The timer section is implemented within the RtAudio callback function (the audio_callback function); see the callback sketch after this list.
- The scales and melody are implemented in separate header files for clear delineation of functionality and modularity.
- The scales header file is called 'Scales.h' and the melody header file is called 'Melody.h'.
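As referenced above, here is a rough sketch of the input analysis: pick a scale from the first input note and classify the speed from the average delta time. The NoteEvent layout, the pitch-class sets, and the 0.3-second threshold are illustrative assumptions, not the project's actual values:

 // Sketch of the input analysis: scale from the first note, speed from timing.
 #include <array>
 #include <cstddef>
 #include <string>
 #include <vector>

 struct NoteEvent {
     int    midiPitch;  // MIDI note number, e.g. 60 == C4
     double deltaTime;  // seconds since the previous note
 };

 // Pitch classes (0 = C) of the three scales the system currently looks for;
 // the chosen set would later feed the melody generator.
 const std::array<int, 7> C_MAJOR = {0, 2, 4, 5, 7, 9, 11};
 const std::array<int, 7> D_MINOR = {2, 4, 5, 7, 9, 10, 0};
 const std::array<int, 7> G_MAJOR = {7, 9, 11, 0, 2, 4, 6};

 // Choose the scale whose root matches the first note of the input.
 std::string detectScale(const std::vector<NoteEvent>& input) {
     if (input.empty()) return "C Major";
     int pc = input.front().midiPitch % 12;
     if (pc == C_MAJOR[0]) return "C Major";   // e.g. an input starting on C4
     if (pc == D_MINOR[0]) return "D Minor";
     if (pc == G_MAJOR[0]) return "G Major";
     return "C Major";                          // default when no root matches
 }

 // Classify the input as 'fast-paced' or 'mild' from the average delta time.
 bool isFastPaced(const std::vector<NoteEvent>& input, double thresholdSec = 0.3) {
     if (input.size() < 2) return false;
     double sum = 0.0;
     for (std::size_t i = 1; i < input.size(); ++i) sum += input[i].deltaTime;
     double avg = sum / static_cast<double>(input.size() - 1);
     return avg < thresholdSec;   // small average delta time => fast-paced
 }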
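A sketch of the AABA patterning idea referenced above: generate an 'A' phrase and a 'B' phrase from the chosen scale's pitch classes, then repeat them as A-A-B-A. The phrase length and the use of rand() are illustrative choices, not the project's exact logic:

 // Sketch of AABA pattern building: repeating phrases is the 'patterning'.
 #include <cstdlib>
 #include <initializer_list>
 #include <vector>

 // Build a short phrase of random scale degrees placed above a root note.
 std::vector<int> makePhrase(const std::vector<int>& scalePitchClasses,
                             int rootMidi, int numNotes) {
     std::vector<int> phrase;
     for (int i = 0; i < numNotes; ++i) {
         int degree = std::rand() % static_cast<int>(scalePitchClasses.size());
         phrase.push_back(rootMidi + scalePitchClasses[degree]);
     }
     return phrase;
 }

 // Assemble the melody as A A B A, e.g. with {0, 2, 4, 5, 7, 9, 11} and
 // rootMidi 60 for a C Major melody starting around C4.
 std::vector<int> makeAABA(const std::vector<int>& scalePitchClasses, int rootMidi) {
     std::vector<int> A = makePhrase(scalePitchClasses, rootMidi, 4);
     std::vector<int> B = makePhrase(scalePitchClasses, rootMidi, 4);
     std::vector<int> melody;
     for (const std::vector<int>* phrase : {&A, &A, &B, &A})
         melody.insert(melody.end(), phrase->begin(), phrase->end());
     return melody;
 }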
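Finally, a sketch of the timer idea inside the audio callback: count samples, and once the current note's duration has elapsed, move on to the next note and re-pluck the string. The KarplusStrong class below is a generic minimal plucked-string model, not the project's HW2-derived class; a mono 32-bit float stream at an assumed 44.1 kHz is used, and only the callback signature follows RtAudio's documented RtAudioCallback form:

 // Sketch of the sample-counting timer inside the audio callback.
 #include <cstddef>
 #include <cstdlib>
 #include <vector>
 #include "RtAudio.h"

 class KarplusStrong {
 public:
     explicit KarplusStrong(double sampleRate) : srate(sampleRate), index(0) {}
     // Start a new note: fill a delay line (sized by the pitch) with noise.
     void pluck(double frequencyHz) {
         delayLine.assign(static_cast<std::size_t>(srate / frequencyHz), 0.0f);
         for (float& s : delayLine)
             s = 2.0f * (std::rand() / static_cast<float>(RAND_MAX)) - 1.0f;
         index = 0;
     }
     // One output sample: averaging adjacent samples makes the 'string' decay.
     float tick() {
         if (delayLine.empty()) return 0.0f;
         std::size_t next = (index + 1) % delayLine.size();
         float out = 0.5f * (delayLine[index] + delayLine[next]);
         delayLine[index] = 0.996f * out;   // small loss so the note dies away
         index = next;
         return out;
     }
 private:
     double srate;
     std::vector<float> delayLine;
     std::size_t index;
 };

 struct TimedNote { double frequencyHz; double durationSec; };

 struct EngineState {
     KarplusStrong voice{44100.0};       // assumed sample rate
     std::vector<TimedNote> melody;      // the generated melody to be played
     std::size_t noteIndex = 0;          // which note we are currently on
     unsigned long samplesIntoNote = 0;  // the 'timer': samples since note start
 };

 // RtAudio callback (RtAudioCallback signature): the per-note timer is just a
 // sample counter that advances to the next note when its duration has elapsed.
 int audio_callback(void* outputBuffer, void* /*inputBuffer*/, unsigned int nFrames,
                    double /*streamTime*/, RtAudioStreamStatus /*status*/, void* userData) {
     EngineState* st = static_cast<EngineState*>(userData);
     float* out = static_cast<float*>(outputBuffer);
     for (unsigned int i = 0; i < nFrames; ++i) {
         if (st->noteIndex < st->melody.size()) {
             const TimedNote& n = st->melody[st->noteIndex];
             if (st->samplesIntoNote == 0)
                 st->voice.pluck(n.frequencyHz);            // a new note begins
             if (++st->samplesIntoNote >= static_cast<unsigned long>(n.durationSec * 44100.0)) {
                 st->samplesIntoNote = 0;                   // duration elapsed:
                 ++st->noteIndex;                           // move to the next note
             }
         }
         out[i] = st->voice.tick();                         // let the string ring
     }
     return 0;                                              // keep the stream open
 }

In the actual program, a callback like this would be registered through RtAudio's openStream(), with a pointer to the engine state passed in as the userData argument.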
Testing
- A good measure of testing is how *tasteful* the output sounds. This is a matter of preference, taste, and a little bit of intuition about the next possible note.
Team
- I worked on the project by myself (Ravi Kondapalli, ravik at ccrma)
Milestones
- Get an OOP version of Karplus Strong working (3rd Week, Nov)
- Implement the timer logic (3rd Week, Nov)
- Implement a pitch logic into the timer (3rd Week, Nov)
- Design a way so that RtAudio waits until RtMidi is done with the user input (3rd Week, Nov)
- Build the scales and check the user input (4th Week, Nov)
- Build the melody (4th Week, Nov and 1st Week, Dec)
- Graphics? A 2D graph-like plot? (4th Week, Nov and 1st Week, Dec)
Possible improvements
- A better way to set up timing
- STK/Fluidsynth instead of a Karplus Strong voice
- Graphical, interactive output
- More and more intuition matching
- More and more pattern building
Download
- Please download this project from here: https://ccrma.stanford.edu/~ravik/melody_generator.zip and provide some feedback! Thanks!!