Postdoc Seminars

Decomposing End-to-End Backpropagation into Small Components with Independent Objective Functions

  • 2021-04-28 (Wed.), 14:00
  • R6005, Research Center for Environmental Changes Building
  • The reception will be held at 15:00 in R6005, Research Center for Environmental Changes Building
  • Prof. Hung-Hsuan Chen
  • Computer Science & Information Engineering, National Central University

Abstract

Backpropagation (BP) is the cornerstone of today’s deep learning algorithms, but it is inefficient partly because of backward locking: the weights of one layer cannot be updated until the gradient has propagated back through all the layers after it. Consequently, it is challenging to apply parallel computing or a pipeline structure to update the weights in different layers simultaneously. In this talk, I will introduce a learning structure called associated learning (AL), which modularizes the network into smaller components, each of which has a local objective. Because the objectives are mutually independent, AL can learn the parameters in different layers independently and simultaneously, so it is feasible to apply a pipeline structure to improve the training throughput. Surprisingly, even though most of the parameters in AL do not directly interact with the target variable, training deep models by this method yields accuracies comparable to those from models trained using typical BP methods, in which all parameters are used to predict the target variable. Our implementation is available at https://github.com/SamYWK/Associated_Learning.
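
To make the idea concrete, below is a minimal sketch of layer-local training in PyTorch. It illustrates only the general pattern the abstract describes (each component trains against its own local objective on a detached input, so no gradient crosses component boundaries and updates can be pipelined); it is not the authors' exact AL formulation, which is in the repository linked above. The names (LocalBlock, local_step) and dimensions are illustrative assumptions.

```python
# Sketch of training with independent layer-local objectives.
# Assumption: each block gets a simple local classification head;
# the actual AL loss design differs (see the linked repository).
import torch
import torch.nn as nn

class LocalBlock(nn.Module):
    """One component: a feature transform plus its own local head and loss."""
    def __init__(self, in_dim, out_dim, num_classes):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
        self.head = nn.Linear(out_dim, num_classes)  # local objective head
        self.loss_fn = nn.CrossEntropyLoss()

    def local_step(self, x, y, optimizer):
        # Detach the input so no gradient flows back to earlier blocks:
        # this block's update depends only on its own local loss.
        h = self.f(x.detach())
        loss = self.loss_fn(self.head(h), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Pass the feature forward without building a global graph.
        return h.detach()

# Usage: three components whose objectives are mutually independent,
# so their updates could run in parallel or in a pipeline.
blocks = [LocalBlock(784, 256, 10), LocalBlock(256, 128, 10), LocalBlock(128, 64, 10)]
opts = [torch.optim.SGD(b.parameters(), lr=0.1) for b in blocks]

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
h = x
for b, opt in zip(blocks, opts):
    h = b.local_step(h, y, opt)
```

Because every block's loss is computed from detached inputs, updating one block never waits on gradients from the blocks after it, which is the property that removes backward locking and makes pipelined training feasible.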
