Exciting Times: Hugo Sonnery's AI blog

Episode 2: Music demixing with the sliCQ transform

About the speaker

Bio: Sevag is a second-year research master's student advised by Prof. Ichiro Fujinaga at McGill University. He is interested in signal processing, time-frequency analysis, and music source separation.

Do not hesitate to check out Sevag Hanssian's personal website for the latest updates!

Sevag Hanssian


Video recording of the speaker's presentation:


Paper: Music demixing with the sliCQ transform [arXiv:2112.05509]

Music demixing with the sliCQ transform

Abstract: Music source separation is the task of extracting an estimate of one or more isolated sources or instruments (for example, drums or vocals) from musical audio. The task of music demixing or unmixing considers the case where the musical audio is separated into estimates of all of its constituent sources, which can be summed back to the original mixture. The Music Demixing Challenge was created to inspire new demixing research. Open-Unmix (UMX) and its improved variant CrossNet-Open-Unmix (X-UMX) were included in the challenge as baselines. Both models use the Short-Time Fourier Transform (STFT) as the representation of music signals. The time-frequency uncertainty principle states that the STFT of a signal cannot have maximal resolution in both time and frequency, and this tradeoff in time-frequency resolution can significantly affect music demixing results. Our proposed adaptation of UMX replaced the STFT with the sliCQT, a time-frequency transform with varying time-frequency resolution. Unfortunately, our model xumx-sliCQ achieved lower demixing scores than UMX.
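The time-frequency tradeoff mentioned in the abstract is easy to observe directly. The sketch below (my own illustration, not code from the paper) compares two STFTs of the same signal using SciPy: a short analysis window gives finely spaced time frames but coarse frequency bins, while a long window does the opposite. A transform like the sliCQT aims to escape this fixed tradeoff by varying the resolution across frequency.

```python
import numpy as np
from scipy.signal import stft

# Synthetic test signal: a sustained low tone plus a single-sample click,
# sampled at 44.1 kHz for one second.
fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 110 * t)   # steady 110 Hz tone (needs frequency resolution)
click = np.zeros_like(t)
click[fs // 2] = 1.0                 # transient (needs time resolution)
x = tone + click

# Short window: fine time resolution, coarse frequency resolution.
f_short, t_short, Z_short = stft(x, fs=fs, nperseg=256)
# Long window: coarse time resolution, fine frequency resolution.
f_long, t_long, Z_long = stft(x, fs=fs, nperseg=4096)

# Frequency-bin spacing is fs / nperseg: it shrinks as the window grows...
print(f"short-window bin spacing: {f_short[1] - f_short[0]:.1f} Hz")
print(f"long-window bin spacing:  {f_long[1] - f_long[0]:.1f} Hz")
# ...while the hop between analysis frames grows, smearing the click in time.
print(f"short-window frame hop:   {t_short[1] - t_short[0] * 1:.4f} s")
print(f"long-window frame hop:    {t_long[1] - t_long[0]:.4f} s")
```

No single `nperseg` resolves both the tone and the click well, which is the motivation for varying-resolution transforms like the sliCQT.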

Supplementary material

Useful resources: Courtesy of the speaker, the interested reader may find additional information in the references below:
