
Postdoc Seminars

Confidence Training with Logically-lucid Pattern

  • 2024-05-15 (Wed.), 14:00
  • Auditorium, B1F, Institute of Statistical Science. The tea reception will be held at 13:40.
  • Online live streaming through Cisco Webex will be available.
  • Dr. Yu-Cheng Li
  • Institute of Statistical Science, Academia Sinica

Abstract

A wrong prediction is bad. For users, having high confidence in a wrong prediction is even worse. Since even the best-trained class-label predictor has some chance of making mistakes, users, especially in AI application areas such as personalized medicine, may want to tell high-quality predictions from low-quality ones. In convolutional neural networks (CNN), confidence in a prediction is associated with the softmax output layer, which gives a probability distribution over the class labels. But even a prediction with 95% of the probability concentrated on one class may still turn out wrong many times more often than the anticipated rate of 5%. Here, we take a different approach, termed post-prediction confidence training (PPCT), to guide users in discerning high-quality predictions from low-quality ones. An enhancement to the CNN configuration is required during network training. We propose a blueprint that couples each logit node (T channel) in the layer feeding the softmax with an additional node (C channel) and uses maxout to link the pair to the softmax layer. The C channel is introduced to counter the T channel as a contrastive feature against the feature of the target class. A high-quality prediction must follow a logically-lucid pattern between T and C for every class. Successful implementations of our methods on popular image datasets are reported.
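The architectural idea above can be sketched in a few lines. The following is a minimal, hedged illustration, not the speaker's implementation: it assumes each class has a T-channel logit paired with a C-channel logit, combines each pair with elementwise maxout before softmax, and encodes one plausible reading of the "logically-lucid pattern" (T dominates for the predicted class, C dominates elsewhere) as a confidence check. The function names and the exact pattern test are hypothetical.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def maxout_predict(t, c):
    """Combine paired T and C channels with elementwise maxout,
    then apply softmax and return (predicted class, probabilities)."""
    logits = [max(a, b) for a, b in zip(t, c)]
    probs = softmax(logits)
    pred = max(range(len(probs)), key=probs.__getitem__)
    return pred, probs

def lucid_pattern_ok(t, c, pred):
    """Hypothetical reading of the 'logically-lucid pattern':
    the T channel should dominate for the predicted class, and the
    contrastive C channel should dominate for every other class."""
    if t[pred] <= c[pred]:
        return False
    return all(c[k] > t[k] for k in range(len(t)) if k != pred)

# Toy example with 3 classes: class 0 shows the lucid pattern.
t = [3.0, 0.5, 0.2]   # T-channel (target-feature) logits
c = [1.0, 1.5, 1.2]   # C-channel (contrastive) logits
pred, probs = maxout_predict(t, c)
# pred == 0; lucid_pattern_ok(t, c, pred) == True
```

In this sketch a prediction would be flagged as high-quality only when the T/C relationship is consistent across all classes, which is the kind of post-prediction screen the abstract describes; the precise pattern used by PPCT is given in the talk itself.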

Please click here to participate in the talk online.

Download

1130515_Dr. Yu-Cheng Li.pdf
Update:2024-05-08 13:59