Biography

I am a tenure-track assistant professor in the Department of Computer Science and UMIACS at the University of Maryland, College Park. My research interests are in machine learning, optimization, and natural language processing. I have published ~100 papers at venues including NeurIPS, ICML, ICLR, AISTATS, ECML/PKDD, ACL, EMNLP, NAACL, COLING, CVPR, ICCV, KDD, ICDM, AAAI, IJCAI, ISIT, Machine Learning (Springer), and IEEE TPAMI/TIP/TNNLS/TKDE.

Our recent work studies (1) how, why, and when to transfer human learning strategies (e.g., curriculum, retention, sub-tasking, curiosity, exemplar selection, and collaboration) to improve machine learning in the wild (e.g., with unlabeled, biased, noisy, redundant, or distributed data, unseen tasks/environments, and distribution shift); (2) controllable AI in both training and inference/adaptation; and (3) human-AI alignment and AI personalization. Yes, we are developing these methods for LLMs, multi-modality foundation models, and RL. Our goal is to develop efficient, versatile, trustworthy, and environmentally friendly hybrid intelligence based on the coevolution of humans and machines. A list of our research topics can be found at the bottom of this webpage.

I was a visiting research scientist at Google from 2021 to 2022. Before that, I received my Ph.D. (thesis) from the Department of Computer Science at the University of Washington, where I was a member of the MELODI lab led by Prof. Jeff A. Bilmes. Earlier, I worked with Prof. Dacheng Tao as a research assistant at the University of Technology Sydney (UTS) and Nanyang Technological University. I was a research intern at Yahoo! Labs, mentored by Dr. Hua Ouyang (Apple) and Prof. Yi Chang (Jilin University), and a research intern at Microsoft Research, mentored by Dr. Lin Xiao (Meta AI).

News

Research Topics

  • Machine Learning (2008-present)
    1. Learning over time: Curriculum Learning, Continual Learning
    2. Learning via interactions: Reinforcement Learning, Online Learning
    3. Learning across tasks/domains: Multi-task Learning, Meta-Learning, Domain Adaptation/Generalization
    4. Learning multiple models: Mixture-of-Experts (MoE), Collaborative/Cooperative Learning, Federated/Decentralized Learning
    5. Learning under noise: Noisy-Label Learning, Adversarial Learning
    6. Learning representations: Self-Supervised Learning, Dimension Reduction
    7. Sparse Learning: Compressed Sensing, Matrix Factorization, Spectral Methods
    8. Optimization: Continuous, Combinatorial, Multi-Objective, Black-Box
  • Natural Language Processing (2016-present)
    1. Attention mechanisms
    2. Toxicity and Bias in NLP models
    3. Adversarial textual attack and defense
    4. Large language models (LLMs), including Reflection-Tuning, Alpagasus, and Cherry LLM created by our group
    5. Personalization
  • Multi-modality Models (2021-present)
    1. Vision-Language Models
    2. Human-AI alignment
    3. VLM/LLM + RL and Multi-modality Embodied-AI