Seminar Announcement

Large-scale Distributed Machine Learning

  • 2019-01-14 (Mon.), 10:30 AM
  • Room 6005, Institute of Statistical Science, Academia Sinica (Environmental Changes Research Building, Block A)
  • Tea reception: 10:10 AM, Room 6005, Institute of Statistical Science (Environmental Changes Research Building, Block A)
  • Mr. Ching-pei Lee (李靜沛先生)
  • Department of Computer Sciences, University of Wisconsin-Madison, WI, USA.

Abstract

Machine learning frequently deals with data whose volume exceeds the capacity of a single machine. To keep the training data in memory for faster access, multiple machines are often used to train machine learning models in a distributed manner. In this scenario, the per-machine computational burden is reduced, but the expensive inter-machine communication becomes the bottleneck for further accelerating the training of machine learning models. In this talk, I will introduce state-of-the-art distributed training algorithms for various machine learning tasks broadly covered by the regularized empirical risk minimization problem, including but not limited to binary and multi-class classification, regression, outlier detection, and feature selection.
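As a hedged illustration (not taken from the talk itself), the setting the abstract describes can be sketched as data-parallel gradient descent on a regularized empirical risk minimization problem: each simulated "machine" holds a shard of the data, computes a local gradient, and one gradient exchange happens per iteration. That per-iteration exchange is the inter-machine communication the abstract identifies as the bottleneck. The logistic loss and all parameter choices below are illustrative assumptions.

```python
import numpy as np

def local_gradient(w, X, y, lam, n_total):
    """Gradient of the L2-regularized logistic loss on one machine's shard.
    The regularizer is split proportionally to shard size so that the shard
    gradients sum to the full-data gradient."""
    z = (X @ w) * y
    s = 1.0 / (1.0 + np.exp(z))                      # sigmoid(-z)
    grad_loss = -(X * (y * s)[:, None]).sum(axis=0) / n_total
    return grad_loss + lam * w * (len(y) / n_total)

rng = np.random.default_rng(0)
n, d, k = 400, 10, 4                                 # samples, features, machines
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true)                              # linearly separable labels

# Partition the data across k simulated machines.
shards = list(zip(np.array_split(X, k), np.array_split(y, k)))

w, lam, lr = np.zeros(d), 0.1, 1.0
for it in range(200):
    # One communication round per iteration: every machine sends its
    # d-dimensional local gradient to be summed. At scale, this exchange,
    # not the local computation, dominates the running time.
    g = sum(local_gradient(w, Xi, yi, lam, n) for Xi, yi in shards)
    w -= lr * g

acc = np.mean(np.sign(X @ w) == y)
```

Because the shard gradients sum exactly to the full-data gradient, this produces the same iterates as single-machine gradient descent; communication-efficient methods of the kind the talk covers aim to reach comparable solutions with fewer or cheaper exchange rounds.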
