Projects

Interpretable Computational Phenotyping

A deep learning framework for discovering meaningful data-driven representations and learning interpretable phenotype features.

Exponential growth in electronic health care data has created new opportunities and urgent needs to discover meaningful data-driven representations and patterns of diseases, i.e., computational phenotyping. Deep learning models have shown superior performance on many computational phenotyping tasks, but they lack the interpretability that is crucial for wide adoption in medical research and clinical decision-making. In this project, we introduce a simple yet powerful knowledge-distillation approach, interpretable mimic learning, which learns interpretable phenotype features while achieving prediction performance comparable to deep learning models. Our framework uses gradient boosting trees to learn interpretable models from the predictions of deep learning models. We also develop an efficient hierarchical multimodal learning framework that learns hierarchical shared representations across the multiple modalities in health care data to further improve prediction performance. Both approaches are well suited to health care applications and can be easily generalized to other domains. Experimental results on two real-world health care datasets show that our proposed methods outperform state-of-the-art approaches in prediction accuracy while remaining interpretable to clinicians.
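To illustrate the mimic-learning idea, here is a minimal sketch in Python using scikit-learn. A small MLP stands in for the deep teacher model, and a gradient boosting tree "student" is trained to regress on the teacher's soft predictions rather than the original hard labels. The synthetic data, architecture, and hyperparameters below are illustrative assumptions, not the models or datasets used in this project.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Toy stand-ins for EHR features X and a binary phenotype label y.
rng = np.random.RandomState(0)
X = rng.randn(1000, 20)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: train the "teacher" model on the hard labels
# (a small MLP here; a deep network in the actual framework).
teacher = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                        random_state=0).fit(X_train, y_train)

# Step 2: extract the teacher's soft predictions (class probabilities),
# which carry more information than the 0/1 labels alone.
soft_labels = teacher.predict_proba(X_train)[:, 1]

# Step 3: train an interpretable gradient boosting tree "student"
# to mimic the teacher by regressing on the soft labels.
student = GradientBoostingRegressor(n_estimators=100, max_depth=3,
                                    random_state=0).fit(X_train, soft_labels)

# The student approximates the teacher's decision function while
# remaining inspectable, e.g., via feature importances or its trees.
print("Student feature importances:", student.feature_importances_)
print("Student test AUC:", roc_auc_score(y_test, student.predict(X_test)))

In this sketch the student's feature importances (and the individual decision trees themselves) provide the interpretable phenotype features, while the soft labels transfer the teacher's learned decision function to the student.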


Principal Investigator

Yan Liu

Team Members

Sanjay Purushotham

Zhengping Che

Tanachat Nilanon

Guangyu Li

Peter Lillian

About Melady Lab

The USC Melady Lab develops machine learning and data mining algorithms for solving problems involving data with special structure, including time series, spatiotemporal data, and relational data. We work closely with domain experts to solve challenging problems and make a significant impact in computational biology, social media analysis, climate modeling, health care, and business intelligence.