Abstract

The natural history of a human disease comprises several sequential
milestone events, each of which is time-to-event in nature. For example,
between hepatitis C infection and death, patients may experience
intermediate events such as liver cirrhosis and liver cancer. The events
of hepatitis, cirrhosis, cancer, and death occur in a sequential order
and are subject to right censoring; moreover, later
events may mask earlier ones. By casting the natural history of human
diseases in the framework of causal mediation modeling, we set up a mediation
model with intermediate events and a terminal event, respectively as the
mediators and the outcome. By introducing counterfactual counting processes, we
define the intervention analogue of path-specific effects (iPSEs) as the effect
of the exposure on the terminal event mediated by any combination of the
intermediate events, including the effect not mediated through any of them. We
derive an explicit expression for the counterfactual hazard, construct a
composite nonparametric likelihood, and derive a Nelson-Aalen-type estimator
for the counterfactual hazard. We establish asymptotic unbiasedness, uniform
consistency, and weak
convergence for the proposed estimators. Numerical studies, including
simulations and an application to data from a hepatitis study conducted in
Taiwan, are presented to illustrate the finite-sample performance and utility
of the proposed method.

Abstract

Backpropagation (BP) is the cornerstone of
today’s deep learning algorithms, but it is inefficient partly because of
backward locking: the weights of one layer cannot be updated until the
weight updates of the other layers have completed. Consequently, it is
challenging to apply parallel computing or a pipeline structure to update the
weights of different layers simultaneously.
In this talk, I will introduce a learning structure called associated learning (AL),
which modularizes the network into smaller components, each of which has a
local objective. Because the objectives are mutually independent, AL can learn
the parameters in different layers independently and simultaneously, so it is feasible
to apply a pipeline structure to improve the training throughput. Surprisingly,
even though most of the parameters in AL do not directly interact with the
target variable, training deep models by this method yields accuracies
comparable to those from models trained using typical BP methods, in which all
parameters are used to predict the target variable. Our implementation is
available at https://github.com/SamYWK/Associated_Learning.