MIR workshop 2015
Revision as of 12:36, 10 July 2015
Intelligent Audio Systems: Foundations and Applications of Music Information Retrieval
Logistics
- Monday, July 13, through Friday, July 17, 2015. 9 AM to 5 PM every day.
- Location: The Knoll, CCRMA, Stanford University. http://goo.gl/maps/nNKx
- Instructors:
Abstract
How would you "Google for audio", provide music recommendations based on your MP3 files, or have a computer "listen" and understand what you are playing?
This workshop will teach such underlying ideas, approaches, technologies, and practical design of intelligent audio systems using music information retrieval (MIR) algorithms.
MIR is a highly interdisciplinary field bridging the domains of digital audio signal processing, pattern recognition, software system design, and machine learning. Simply put, MIR algorithms allow a computer to listen to, understand, and make sense of audio data such as MP3s in a personal music collection, live streaming audio, or gigabytes of sound effects, in an effort to reduce the semantic gap between high-level musical information and low-level audio data. In the same way that listeners can recognize the characteristics of sound and music -- tempo, key, chord progressions, genre, or song structure -- MIR algorithms are capable of recognizing and extracting this information, enabling systems to sort, search, recommend, tag, and transcribe music, possibly in real time.
This workshop is intended for students, researchers, and industry audio engineers who are unfamiliar with the field of Music Information Retrieval (MIR). We will demonstrate exciting technologies enabled by the fusion of basic signal processing techniques with machine learning and pattern recognition. Lectures will cover topics such as low-level feature extraction, generation of higher-level features such as chord estimates, audio similarity clustering, search and retrieval techniques, and the design and evaluation of machine classification systems. The presentations will be an applied, multimedia-rich overview of the building blocks of modern MIR systems. Our goal is to make the understanding and application of these highly interdisciplinary technologies and complex algorithms approachable.
Knowledge of basic digital audio principles is required. Familiarity with Python is desired but not required. Students are highly encouraged to bring their own audio source material for course labs and demonstrations.
Workshop Structure: The workshop will consist of half-day lectures, half-day supervised lab sessions, demonstrations, and discussions. Labs will allow students to design basic ground-up "intelligent audio systems", leveraging existing MIR toolboxes, programming environments, and applications. Labs will include creation and evaluation of basic instrument recognition, transcription, and audio analysis systems.
Schedule
Instructional material can be found at musicinformationretrieval.com (read only) or on GitHub (full source).
Day 1: Introduction to MIR, Signal Analysis, and Feature Extraction
Lecture
Introductions
- CCRMA Introduction (Nette, Fernando)
- Introduction to MIR (What is MIR? Why MIR? Commercial applications)
- Basic MIR system architecture
- Timing and Segmentation: Frames, Onsets
- Classification: Instance-based classifiers (k-NN)
Overview: Signal Analysis and Feature Extraction for MIR Applications
- Windowed Feature Extraction
- Feature-vector design (Overview: http://www.create.ucsb.edu/~stp/PostScript/PopeHolmKouznetsov_icmc2.pdf)
- Time-domain features
- Frequency-domain features
MFCCs sonified
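To make the "windowed feature extraction" idea concrete, here is a minimal sketch in plain Python (no external libraries assumed; the frame and hop sizes are illustrative defaults, not values prescribed by the labs). It slices a signal into overlapping frames and computes two of the time-domain features from the list above, RMS energy and zero-crossing rate:

```python
import math

def frames(signal, frame_size=1024, hop=512):
    """Slice a signal into overlapping frames (sizes are illustrative)."""
    for start in range(0, len(signal) - frame_size + 1, hop):
        yield signal[start:start + frame_size]

def rms(frame):
    """Root-mean-square energy of one frame (a basic time-domain feature)."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return crossings / (len(frame) - 1)

def extract_features(signal, frame_size=1024, hop=512):
    """One feature vector [RMS, ZCR] per frame: windowed feature extraction."""
    return [[rms(f), zero_crossing_rate(f)]
            for f in frames(signal, frame_size, hop)]
```

For a unit-amplitude sine, each frame's RMS comes out near 1/√2 ≈ 0.707, while the ZCR tracks the frequency; noisy percussive frames would score much higher ZCR, which is why these cheap features already separate some instrument classes.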
Lab
Understanding Audio Features Through Sonification
- Background for students needing a refresher: Fundamentals of Digital Audio Signal Processing (lecture slides from Juan Bello)
- Reminder: Save all your work, because you may want to build on it in subsequent labs.
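The instance-based k-NN classifier from the Day 1 lecture fits in a few lines of plain Python. This is a generic sketch, not the lab's implementation; the training data below is invented purely for illustration:

```python
import math
from collections import Counter

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train, query, k=3):
    """Predict the majority label among the k nearest training examples.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda ex: euclidean(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D feature space (values are made up, e.g. normalized ZCR vs. centroid)
train = [([0.10, 0.20], "kick"), ([0.12, 0.25], "kick"),
         ([0.80, 0.90], "snare"), ([0.85, 0.80], "snare")]
print(knn_classify(train, [0.15, 0.20]))  # -> kick
```

There is no training phase at all: the "model" is the stored instances, which is exactly what "instance-based" means.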
Day 2: Beat, Rhythm, Pitch and Chroma Analysis
List of beat tracking references
Onset Detection
- Time-domain differences
- Spectral-domain differences
- Perceptual data-warping
- Adaptive onset detection
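The "time-domain differences" approach above can be sketched as a half-wave-rectified difference of a frame-energy envelope, followed by naive peak picking. A fixed threshold is assumed here for simplicity; as the list notes, real detectors use adaptive thresholds:

```python
def onset_novelty(energies):
    """Half-wave-rectified first difference of a frame-energy envelope:
    rises in energy count as novelty, drops are ignored."""
    return [max(e2 - e1, 0.0) for e1, e2 in zip(energies, energies[1:])]

def pick_onsets(novelty, threshold):
    """Report indices where novelty is a local maximum above a fixed
    threshold (a stand-in for the adaptive schemes used in practice)."""
    onsets = []
    for i in range(1, len(novelty) - 1):
        if (novelty[i] > threshold
                and novelty[i] >= novelty[i - 1]
                and novelty[i] > novelty[i + 1]):
            onsets.append(i)
    return onsets
```

Swapping the energy envelope for frame-to-frame spectral magnitude differences turns this same skeleton into the spectral-flux detector from the second bullet.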
Beat and Tempo
- IOIs and Beat Regularity, Rubato
- Tatum, Tactus and Meter levels
- Tempo estimation
- Onset-detection vs Beat-detection
- The Onset Detection Function
- Beat Histograms
- Fluctuation Patterns
- Joint estimation of downbeat and chord change
Approaches to Beat Tracking and Meter Estimation
- Autocorrelation
- Beat Spectrum measures
- Multi-resolution (Wavelet)
Pitch and Chroma
- Features:
- Monophonic Pitch Detection
- Polyphonic Pitch Detection
- Pitch representations (Tuning Histograms, Pitch and Pitch Class Profiles, Chroma)
- Analysis:
- Dynamic Time Warping
- Hidden Markov Models
- Harmonic Analysis/Chord and Key Detection
- Applications
- Audio-Score Alignment
- Cover Song Detection
- Query-by-humming
- Music Transcription
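Dynamic time warping, which underpins the audio-score alignment and cover-song applications above, is a short dynamic program. Scalar features are used here to keep the sketch tiny; in practice each element would be a chroma vector and `dist` a vector distance:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic DTW: cost of the cheapest monotonic alignment of a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Match a[i-1] with b[j-1]; come from insertion, deletion, or match.
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # -> 0.0 (same contour, stretched)
```

The zero cost for a time-stretched copy of the same contour is the whole point: DTW compares what was played, not when.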
Lab
Part 1: Tempo Extraction
Part 2: Add MFCCs to the classification system and test with cross-validation
Bonus Slides: Temporal & Harmony Analysis
- Temporal Analysis (lecture slides from Juan Bello)
- Harmony Analysis (lecture slides from Juan Bello)
- Chord recognition using HMMs (Kyogu Lee)
- Genre-specific chord recognition using HMMs (Kyogu Lee)
Day 3: Machine Learning, Clustering and Classification
Lecture
Classification: Unsupervised vs. Supervised, k-means, GMM, SVM
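Of the methods listed, k-means is the easiest to sketch from scratch. This is Lloyd's algorithm in plain Python; it is initialised from the first k points purely for determinism (real implementations use random restarts or k-means++), and the 2-D points stand in for per-song feature vectors:

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm: alternate assigning each point to its nearest
    centroid and moving each centroid to the mean of its assigned points."""
    centroids = [list(p) for p in points[:k]]  # deterministic init (assumption)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        centroids = [
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centroids[j]
            for j, cl in enumerate(clusters)]
    return centroids, clusters
```

The unsupervised/supervised split in the lecture title is visible here: unlike the k-NN lab classifier, k-means never sees a label, yet on separated feature clouds it recovers the same groups.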
Lab
- MFCC, K-Means Clustering: http://musicinformationretrieval.com/kmeans_instrument_classification.html
- K-Means (2012): http://ccrma.stanford.edu/workshops/mir2012/2012-ClusterLab.pdf
Day 4: Music Information Retrieval in Polyphonic Mixtures
Lecture 6: Steve Tjoa, Lecture 6 Slides
- Music Transcription and Source Separation
- Nonnegative Matrix Factorization
- Sparse Coding
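Nonnegative matrix factorization can be sketched with the Lee-Seung multiplicative updates. In the source-separation setting, V would be a (frequency x time) magnitude spectrogram, W the spectral templates, and H their activations over time; here V is just a small nonnegative matrix, and all helpers are plain Python (no NumPy assumed):

```python
import random

def matmul(A, B):
    """Naive matrix product of two lists-of-rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, rank, iters=200, seed=0):
    """Lee & Seung multiplicative updates for V ~= W.H, all entries >= 0.
    The updates never introduce negative values, so nonnegativity is kept."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        # H <- H * (W^T V) / (W^T W H)
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps)
              for j in range(m)] for i in range(rank)]
        # W <- W * (V H^T) / (W H H^T)
        VHt = matmul(V, transpose(H))
        WHHt = matmul(matmul(W, H), transpose(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps)
              for j in range(rank)] for i in range(n)]
    return W, H
```

With rank equal to the number of sources, each column of W tends toward one source's spectral shape, and masking the mixture with each W-column's reconstruction separates the sources.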
Guest Lecture 7: Andreas Ehmann, MIREX
Lecture 8: Evaluation Metrics for Information Retrieval - Leigh Smith Slides
Lab 4
References:
- IR Evaluation Metrics (precision, recall, f-measure, AROC,...)
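The set-based metrics above reduce to a few lines; the document IDs below are made up to show the arithmetic:

```python
def precision_recall_f1(retrieved, relevant):
    """Set-based IR metrics: precision = hits / |retrieved|,
    recall = hits / |relevant|, F-measure = their harmonic mean."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    p = hits / len(retrieved) if retrieved else 0.0
    r = hits / len(relevant) if relevant else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# 4 documents returned, 3 of them relevant; 6 relevant documents exist overall
p, r, f = precision_recall_f1({1, 2, 3, 4}, {2, 3, 4, 5, 6, 7})
print(p, r, f)  # -> 0.75 0.5 0.6
```

The harmonic mean is deliberate: returning everything gives perfect recall but terrible precision, and F-measure punishes that trade-off, which is why MIREX-style evaluations report it.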
Day 5: Deep Belief Networks and Wavelets
Lecture 10: Steve Tjoa, Introduction to Deep Learning Slides
Lecture 11: Leigh Smith, An Introduction to Wavelets Slides
Neural Networks made easy: https://ccrma.stanford.edu/workshops/mir2014/fann_en.pdf
Lunch at The Oasis
Klapuri eBook: http://link.springer.com/book/10.1007%2F0-387-32845-9
Afternoon: CCRMA Lawn BBQ
Software, Libraries, Examples
Applications & Environments
Machine Learning Libraries & Toolboxes
- Netlab Pattern Recognition and Clustering Toolbox (Matlab)
- libsvm SVM toolbox (Matlab)
- MIR Toolboxes (Matlab)
- UCSD CatBox
Optional Toolboxes
- MA Toolbox
- MIDI Toolbox
- (see also the references below)
- Marsyas
- CLAM
- Genetic Algorithm: http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/
- Spider http://www.kyb.tuebingen.mpg.de/bs/people/spider/
- HTK http://htk.eng.cam.ac.uk/
Supplemental papers and information for the lectures...
- Explanations, tutorials, code demos, recommended papers here - for each topic....
- A list of beat tracking references cited
Past CCRMA MIR Workshops and lectures
- CCRMA MIR Summer Workshop 2014
- CCRMA MIR Summer Workshop 2013
- CCRMA MIR Summer Workshop 2012
- CCRMA MIR Summer Workshop 2011
- CCRMA MIR Summer Workshop 2010
- CCRMA MIR Summer Workshop 2009
- CCRMA MIR Summer Workshop 2008
References for additional info
Recommended books:
- Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, Ian H. Witten and Eibe Frank (includes software)
- Netlab, Ian T. Nabney (includes software)
- Signal Processing Methods for Music Transcription, A. Klapuri and M. Davy (editors)
- Computational Auditory Scene Analysis: Principles, Algorithms, and Applications, DeLiang Wang and Guy J. Brown (editors)
- Speech and Audio Signal Processing: Processing and Perception of Speech and Music, Ben Gold and Nelson Morgan, Wiley, 2000
Prerequisite / background material:
- http://140.114.76.148/jang/books/audioSignalProcessing/
- The Mathworks' Matlab Tutorial
- ISMIR2007 MIR Toolbox Tutorial
Papers:
- ISMIR 2011 Proceedings: http://ismir2011.ismir.net/program.html
- Check out the references listed at the end of the Klapuri & Davy book
- Check out the papers listed on pp. 136-137 of the MIR Toolbox user guide: http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/mirtoolbox/userguide1.1
Other books:
- Pattern Recognition and Machine Learning (Information Science and Statistics), Christopher M. Bishop
- Neural Networks for Pattern Recognition, Christopher M. Bishop, Oxford University Press, 1995
- Pattern Classification, 2nd edition, R. Duda, P. Hart and D. Stork, Wiley Interscience, 2001
- Artificial Intelligence: A Modern Approach, Second Edition, S. Russell and P. Norvig, Prentice Hall, 2003
- Machine Learning, Tom Mitchell, McGraw Hill, 1997
Interesting Links:
- http://www.ifs.tuwien.ac.at/mir/howtos.html
- http://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials
- http://www.music-ir.org/evaluation/tools.html
- http://140.114.76.148/jang/matlab/toolbox/
- http://htk.eng.cam.ac.uk/
Audio Source Material
OLPC Sound Sample Archive (8.5 GB) [3]
http://www.tsi.telecom-paristech.fr/aao/en/category/database/
RWC Music Database (n DVDs) [available in Stanford Music library]
RWC - Sound Instruments Table of Contents
http://staff.aist.go.jp/m.goto/RWC-MDB/rwc-mdb-i.html
University of Iowa Music Instrument Samples
MATLAB Utility Scripts
- Reading MP3 Files
- Low-Pass Filter
- Steve Tjoa: Matlab code (updated July 9, 2009)
http://ccrma.stanford.edu/~kglee/kaist_summer2008_special_lecture/
Bonus Lab Material from Previous Years (Matlab)
- Harmony Analysis Slides / Labs
- Overview of Weka & the Wekinator
- A brief history of MIR
- Notes
- CAL500 decoding
for i in *.mp3; do echo "$i"; afconvert -d BEI16@44100 -f AIFF "$i"; done
- Extract CAL500 per-song features to .mat or .csv using the features from today. This will be used in Friday's lab. Copy it from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500.tar (beware: it's a 2 GB .tar file!) or grab the AIFF versions from ccrma-gate.stanford.edu:/usr/ccrma/workshops/mir2011/cal500_aiffs.tar (that's 16 GB)