What if a computer could accompany a human performer automatically?
Automatic accompaniment is a research topic first proposed about 35 years ago, and it remains attractive today. For example, a musician plays the main melody of "Twinkle, Twinkle Little Star", and the computer performs the accompaniment part, adapting to the musician's tempo. In other words, the system must be able to "listen to" the musician in real time.
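To make that real-time requirement concrete, here is a minimal sketch (plain Python; the function names and the single-ratio tempo model are my own simplifying assumptions, not the system described in the talk) of one naive way to follow a performer: compare the musician's recent note onsets with the score's expected onsets, estimate a tempo ratio, and rescale the upcoming accompaniment events.

```python
def estimate_tempo_ratio(score_onsets, played_onsets):
    """Estimate how fast the musician plays relative to the score.

    Both lists hold onset times (in seconds) of the notes matched so
    far, each starting at 0. A ratio > 1.0 means the musician is
    playing slower than the notated tempo.
    """
    # Use the span of matched notes; fall back to 1.0 if too few notes.
    if len(played_onsets) < 2 or score_onsets[-1] == 0:
        return 1.0
    return played_onsets[-1] / score_onsets[-1]


def schedule_accompaniment(acc_onsets, ratio):
    """Rescale the remaining accompaniment onsets by the tempo ratio."""
    return [t * ratio for t in acc_onsets]


# The musician plays the first three beats 20% slower than notated.
score = [0.0, 0.5, 1.0]
played = [0.0, 0.6, 1.2]
ratio = estimate_tempo_ratio(score, played)
print(schedule_accompaniment([1.5, 2.0], ratio))  # upcoming events, slowed down
```

Real score followers are considerably more robust than this (they must handle wrong notes, skipped notes, and gradual tempo changes), but the core loop — match, estimate, reschedule — is the same.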
In this talk, I will cover the background of the topic, the problem definition, and the approaches I tried, and then demonstrate the accompaniment results with a live performer.
This is an entry-level session, so everyone is welcome to attend — especially if you are interested in music tech.
A demo video from Roger Dannenberg (Professor of Computer Science, Carnegie Mellon University), who pioneered this topic:
- [Computer Accompaniment](https://www.youtube.com/watch?v=RnjoxwY3RfA)
The system uses the following Python libraries:
- MIDI processing: pretty_midi, music21
- digital signal processing: librosa
- SoundFont synthesis: pyFluidSynth
- sound recording: sounddevice