Biography
I am a tenure-track assistant professor in the Department of Computer Science and UMIACS at the University of Maryland, College Park. My research interests are in machine learning, optimization, and natural language processing. I have published ~100 papers at ML (NeurIPS, ICML, ICLR), NLP (ACL, EMNLP, NAACL), CV (CVPR, ICCV, ECCV), DM (KDD, ICDM), and AI (AAAI, IJCAI) conferences, and in journals such as Machine Learning (Springer) and IEEE TPAMI/TIP/TNNLS/TKDE.
Our recent work studies (1) how, why, and when to transfer human learning strategies (e.g., curriculum, retention, sub-tasking, curiosity, exemplar selection, collaboration) to improve machine learning and generalization in the wild (e.g., with unlabeled, biased, noisy, redundant, or distributed data, or in unseen tasks/environments); (2) controllable generative AI in both training and inference/adaptation; (3) synthetic data, self-evolving AI, and auto-benchmarking; and (4) human-AI teaming and hybrid agents with personalization. We are developing these methods with LLMs, multi-modality foundation models, and RL. Our goal is to develop efficient, versatile, trustworthy, and environmentally friendly hybrid intelligence based on the coevolution of humans and machines. Code, data, and models can be found at Tianyi Lab’s GitHub and HF.
I was a visiting research scientist at Google from 2021 to 2022. Before that, I received my Ph.D. (thesis) from the Department of Computer Science at the University of Washington, where I was a member of the MELODI Lab led by Prof. Jeff A. Bilmes. Earlier, I worked with Prof. Dacheng Tao as a research assistant at the University of Technology Sydney (UTS) and Nanyang Technological University. I was also a research intern at Yahoo! Labs, mentored by Dr. Hua Ouyang (Apple) and Prof. Yi Chang (Jilin University), and a research intern at Microsoft Research, mentored by Dr. Lin Xiao (Meta AI).
News
- 2024/09: Five papers (3 main + 2 findings) have been accepted by EMNLP 2024.
- 2024/09: I will serve as an Area Chair of ICLR 2025.
- 2024/07: We launched TurningPoint AI, a research team spanning multiple universities and industry (UMD+UCLA+PSU+Google) investigating Multimodal Agents, with the goals of building Trustworthy Embodied AI, Self-Evolving Machines, Compositional Agents, and Controllable AIGC. We have already released 8 projects, with several ICML and ECCV publications and new datasets.
- 2024/07: Two papers on diffusion models (analysis of negative prompts, extracting discriminative features from generative models) have been accepted by ECCV 2024.
- 2024/05: 4 ICLR + 4 ICML + 6 ACL + 2 NAACL + 2 CVPR papers have been accepted, featuring our work on controllable AIGC, personalized AI, data-efficient training of LLMs, RLHF, prompt optimization, multi-modal hallucinations, multi-modal and embodied agents, and curriculum reinforcement learning.
- 2024/02: We released a survey on knowledge distillation of LLMs, with a GitHub repo.
- 2023/11: I will give a talk, “Towards Controllable and Personalized AI Models,” at the UMD CS department seminar on 11/03.
- 2023/10: We release HallusionBench, focusing on language hallucination and visual illusion in GPT-4V(ision), LLaVA-1.5, and other multi-modality models. Analyses and insights can be found in the preprint.
- 2023/10: Data recycling and filtering improve instruction tuning of LLMs: recycled LLMs outperform larger LLMs trained on new data and RLHF. We release the Reflection-Tuning preprint, codebase, and model.
- 2023/10: Two papers (How Many Demonstrations Do You Need for In-context Learning, Merging Mixture-of-Experts into One) have been accepted by EMNLP 2023.
- 2023/09: Two papers (multi-modality model distillation for task adaptation, clustered additive modeling for structured federated learning) have been accepted by NeurIPS 2023.
Research Topics
- Machine Learning (2008-present)
- Learning over time: Curriculum Learning, Continual Learning
- Learning via interactions: Reinforcement Learning, Online Learning
- Learning across tasks/domains: Multi-task Learning, Meta-Learning, Domain Adaptation/Generalization
- Learning multiple models: Mixture-of-Experts (MoE), Collaborative/Cooperative Learning, Federated/Decentralized Learning
- Learning under noise: Noisy-Label Learning, Adversarial Learning
- Learning representations: Self-Supervised Learning, Dimension Reduction
- Sparse Learning: Compressed Sensing, Matrix Factorization, Spectral Methods
- Optimization: Continuous, Combinatorial, Multi-Objective, Black-Box
- Natural Language Processing (2016-present)
- Attention mechanisms
- Toxicity and Bias in NLP models
- Adversarial textual attack and defense
- Large language models (LLMs) (Reflection-Tuning, Alpagasus, and Cherry LLM created by our group)
- Personalization
- Multi-modality Models (2021-present)
- Vision-Language Models
- Human-AI alignment
- VLM/LLM + RL and Multi-modality Embodied-AI