
Seminar Announcement


Generative AI Meets Music Theory

  • 2025-12-15 (Mon.), 10:30 AM
  • B1 Auditorium, Institute of Statistical Science; tea reception at 10:25 AM.
  • Held in person with simultaneous online streaming.
  • Dr. Stephen Ni-Hahn
  • Leader of the Structural Music Lab, Duke University

Abstract

The intersection of artificial intelligence and music is vast and rapidly evolving, with models such as Suno enabling amateur music enthusiasts to generate impressive compositions from a simple text prompt. At the same time, there is an urgent need for AI systems that humans can control. Unfortunately, the vast majority of recent systems rely on black-box models trained on massive datasets, constraining them to what the available data can teach. Furthermore, most models are expected to learn complex notions such as music-theoretical structure without any guidance, leading to poor consistency and a lack of large-scale structure. My research addresses these limitations by integrating domain knowledge from music theory to enhance the interpretability and human controllability of AI music generation, enabling the generation of coherent and enjoyable music that outperforms much larger state-of-the-art deep learning models. In this talk, I will present three methods that facilitate this integration: SchenkComposer, a theory-based framework for hierarchical melody generation; AutoSchA, which builds on recent developments in Graph Neural Networks to let AI models interpret deeper musical connections in a more human way; and E-Motion Baton, a real-time human-in-the-loop conducting simulator that incorporates human emotion and gesture for responsive music generation.

Please click the link to join the online session.

Last updated: 2025-12-08 09:43