Real-Time Automatic Accompaniment Implementation in Python

Abstract

What would it be like if a computer could accompany a human performer automatically?

Automatic accompaniment is a research topic that was first proposed 35 years ago, and it remains an attractive problem today. For example, a musician plays the main melody of "Twinkle, Twinkle Little Star," and the computer performs the accompaniment part, adapting to the musician's tempo. In other words, the system must be able to "listen to" the musician in real time.

In this talk, I will share the background of the topic, the problem definition, and the approaches I tried, and then demonstrate the accompaniment results together with a live performer.

This is an entry-level session, and everyone is welcome to attend. It will be even better if you are interested in music technology.

Description

Demo video from the issue sponsor, Roger Dannenberg (Professor of Computer Science, Carnegie Mellon University):

- [Computer Accompaniment](https://www.youtube.com/watch?v=RnjoxwY3RfA)

The system uses the following Python libraries:

- MIDI processing: pretty_midi, music21
- Digital signal processing: librosa
- SoundFont synthesis: pyFluidSynth
- Sound recording: sounddevice
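To give a feel for how these libraries could fit together, here is a minimal sketch of one "listen, then accompany" cycle: record the soloist with sounddevice, estimate tempo with librosa, read the accompaniment score with pretty_midi, and play it back through pyFluidSynth at the estimated tempo. This is only an illustration under assumptions, not the talk's actual system: the file names `accompaniment.mid` and `FluidR3_GM.sf2` are placeholders, and the naive "estimate once, then replay" strategy is far simpler than real-time score following.

```python
"""Minimal sketch of a tempo-following accompaniment cycle (illustrative only)."""
import time

import fluidsynth          # pyFluidSynth: SoundFont synthesizer
import librosa             # digital signal processing / tempo estimation
import numpy as np
import pretty_midi         # MIDI processing
import sounddevice as sd   # sound recording

SR = 22050                 # sample rate for recording and analysis
LISTEN_SECONDS = 8         # how long to listen to the soloist

# 1. Record a short excerpt of the soloist with sounddevice.
audio = sd.rec(int(LISTEN_SECONDS * SR), samplerate=SR, channels=1, dtype="float32")
sd.wait()
y = audio[:, 0]

# 2. Estimate the soloist's tempo with librosa.
tempo, _ = librosa.beat.beat_track(y=y, sr=SR)
tempo = float(np.atleast_1d(tempo)[0])
print(f"Estimated tempo: {tempo:.1f} BPM")

# 3. Load the accompaniment score with pretty_midi (placeholder file name).
score = pretty_midi.PrettyMIDI("accompaniment.mid")
reference_tempo = score.estimate_tempo()      # tempo the score suggests
stretch = reference_tempo / tempo             # >1 means the soloist plays slower

# 4. Play the accompaniment through pyFluidSynth, stretched to the soloist's tempo.
fs = fluidsynth.Synth()
fs.start()                                    # default audio driver for the platform
sfid = fs.sfload("FluidR3_GM.sf2")            # placeholder SoundFont path
fs.program_select(0, sfid, 0, 0)              # channel 0, bank 0, program 0 (piano)

events = []                                   # (time in seconds, pitch, is_note_on)
for note in score.instruments[0].notes:
    events.append((note.start * stretch, note.pitch, True))
    events.append((note.end * stretch, note.pitch, False))
events.sort()

start = time.time()
for when, pitch, note_on in events:
    delay = when - (time.time() - start)
    if delay > 0:
        time.sleep(delay)
    if note_on:
        fs.noteon(0, pitch, 100)
    else:
        fs.noteoff(0, pitch)

fs.delete()
```

A real accompanist has to re-estimate the soloist's position and tempo continuously while playing, which is exactly the score-following problem the talk addresses; the sketch above only shows the plumbing between recording, analysis, MIDI data, and synthesis.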

Speaker

范斯越 Szu-Yueh Fan

Szu-Yueh pursues a lifestyle that balances theory and engineering.

He embraces complexity and uncertainty, which is why he prefers probabilistic models to deterministic ones.
He loves to create, which is why generative models interest him more than discriminative ones.

Amateur book translator, tech+music enthusiast, cellist.