Research

Current Research Topics

Controllable Generative AI (LLMs, MLLMs, Diffusion Models, etc.)


Large Language Models (LLMs)

  1. Ziyue Li, Tianyi Zhou, “Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free”, arXiv:2410.10814, 2024. PDF, CODE
  2. Siyu Zhou, Tianyi Zhou, Yijun Yang, Guodong Long, Deheng Ye, Jing Jiang, and Chengqi Zhang, “WALL-E: World Alignment by Rule Learning Improves World Model-based LLM Agents”, arXiv:2410.07484, 2024. PDF, CODE
  3. Ming Li, Pei Chen, Chenguang Wang, Hongyu Zhao, Yijun Liang, Yupeng Hou, Fuxiao Liu, and Tianyi Zhou, “Mosaic IT: Enhancing Instruction Tuning with Data Mosaics”, arXiv:2405.13326, 2024. PDF, CODE
  4. Lichang Chen, Jiuhai Chen, Chenxi Liu, John Kirchenbauer, Davit Soselia, Chen Zhu, Tom Goldstein, Heng Huang, and Tianyi Zhou, “OPTune: Efficient Online Preference Tuning”, arXiv:2406.07657, 2024. PDF
  5. Jiuhai Chen, Rifaa Qadri, Yuxin Wen, Neel Jain, John Kirchenbauer, Tianyi Zhou, Tom Goldstein, “GenQA: Generating Millions of Instructions from a Handful of Prompts”, arXiv:2406.10323, 2024. PDF, DATA
  6. Ming Li, Han Chen, Chenguang Wang, Dang Nguyen, Dianqi Li, Tianyi Zhou, “RuleR: Improving LLM Controllability by Rule-based Data Recycling”, arXiv:2406.15938, 2024. PDF, CODE
  7. Siyuan Wu, Yue Huang, Chujie Gao, Dongping Chen, Qihui Zhang, Yao Wan, Tianyi Zhou, Xiangliang Zhang, Jianfeng Gao, Chaowei Xiao, Lichao Sun, “UniGen: A Unified Framework for Textual Dataset Generation Using Large Language Models”, arXiv:2406.18966, 2024. PDF, CODE
  8. Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, Tianyi Zhou, “A Survey on Knowledge Distillation of Large Language Models”, arXiv:2402.13116, 2024. PDF, CODE
  9. Linxin Song, Jieyu Zhang, Lechao Cheng, Pengyuan Zhou, Tianyi Zhou, and Irene Li, “NLPBench: Evaluating Large Language Models on Solving NLP Problems”, arXiv:2309.15630, 2023. PDF, CODE
  10. Jiuhai Chen, Lichang Chen, Heng Huang, and Tianyi Zhou, “When do you need Chain-of-Thought Prompting for ChatGPT?”, arXiv:2304.03262, 2023. PDF
  11. Maharshi Gor, Hal Daumé III, Tianyi Zhou, Jordan Lee Boyd-Graber, “Do great minds think alike? Investigating Human-AI Complementarity for Question Answering”, The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024. PDF
  12. Hanchi Sun, Tianyi Zhou, Xun Chen, Lichao Sun, “SpecHub: Provable Acceleration to Multi-Draft Speculative Decoding”, The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024. PDF, CODE
  13. Yue Huang, Chenrui Fan, Yuan Li, Siyuan Wu, Tianyi Zhou, Xiangliang Zhang, Lichao Sun, “1+1>2: Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators?”, The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024. PDF
  14. Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, Cho-Jui Hsieh, “DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers”, The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP) Findings, 2024. PDF, CODE
  15. Lilly Kumari, Shengjie Wang, Tianyi Zhou, Nikhil Sarda, Anthony Rowe, and Jeff Bilmes, “BumbleBee: Dynamic KV Cache Summarization in Transformers using Submodular Optimization”, First Conference on Language Modeling (COLM), 2024. PDF
  16. Ming Li, Yong Zhang, Shwai He, Zhitao Li, Hongyu Zhao, Jianzong Wang, Ning Cheng, and Tianyi Zhou, “Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning”, Annual Meeting of the Association for Computational Linguistics (ACL), 2024. PDF, CODE
  17. Yibin Lei, Di Wu, Tianyi Zhou, Tao Shen, Yu Cao, Chongyang Tao, and Andrew Yates, “Meta-Task Prompting Elicits Embedding from Large Language Models”, Annual Meeting of the Association for Computational Linguistics (ACL), 2024. PDF, CODE
  18. Dang Nguyen, Jiuhai Chen, and Tianyi Zhou, “Multi-Objective Linguistic Control of Large Language Models”, Annual Meeting of the Association for Computational Linguistics (ACL) Findings, 2024. PDF, CODE
  19. Ming Li, Jiuhai Chen, Lichang Chen, and Tianyi Zhou, “Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements”, Annual Meeting of the Association for Computational Linguistics (ACL) Findings, 2024. PDF, CODE
  20. Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Jiuxiang Gu, and Tianyi Zhou, “Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning”, Annual Meeting of the Association for Computational Linguistics (ACL) Findings, 2024. PDF, CODE
  21. Tao Shen, Guodong Long, Xiubo Geng, Chongyang Tao, Yibin Lei, Tianyi Zhou, Michael Blumenstein, and Daxin Jiang, “Large Language Models are Strong Zero-Shot Retriever”, Annual Meeting of the Association for Computational Linguistics (ACL) Findings, 2024. PDF
  22. Lichang Chen*, Jiuhai Chen*, Heng Huang, Tom Goldstein, and Tianyi Zhou. “InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models”, International Conference on Machine Learning (ICML), 2024. PDF, CODE
  23. Ruochen Wang, Sohyun An, Minhao Cheng, Tianyi Zhou, Sung Ju Hwang, and Cho-Jui Hsieh. “One Prompt is not Enough: Automated Construction of a Mixture-of-Expert Prompts”, International Conference on Machine Learning (ICML), 2024. PDF, CODE
  24. Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan Catanzaro. “ODIN: Disentangled Reward Mitigates Hacking in RLHF”, International Conference on Machine Learning (ICML), 2024. PDF, MODEL
  25. Lichao Sun, Yue Huang, et al. “TrustLLM: Trustworthiness in Large Language Models”, International Conference on Machine Learning (ICML), 2024. PDF
  26. Ming Li, Yong Zhang, Zhitao Li, Jiuhai Chen, Lichang Chen, Ning Cheng, Jianzong Wang, Tianyi Zhou, and Jing Xiao. “From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning”, Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2024. PDF, CODE
  27. Lilly Kumari, Shengjie Wang, Arnav Mohanty Das, Tianyi Zhou, and Jeff Bilmes. “An End-to-End Submodular Framework for Data-Efficient In-Context Learning”, Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) Findings, 2024. PDF
  28. Yibin Lei, Yu Cao, Tianyi Zhou, Tao Shen, Andrew Yates, “Corpus-Steered Query Expansion with Large Language Models”, The 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2024. PDF
  29. Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin, “Alpagasus: Training a Better Alpaca Model with Fewer Data”, International Conference on Learning Representations (ICLR), 2024. PDF
  30. Jiuhai Chen, Lichang Chen, Chen Zhu, Tianyi Zhou, “How Many Demonstrations Do You Need for In-context Learning?”, The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP) Findings, 2023. PDF

Multimodal Large Language Models (MLLMs)

  1. Xirui Li, Hengguang Zhou, Ruochen Wang, Tianyi Zhou, Minhao Cheng, Cho-Jui Hsieh, “MOSSBench: Is Your Multimodal Language Model Oversensitive to Safe Queries?”, arXiv:2406.17806, 2024. PDF, CODE
  2. Xiyao Wang, Jiuhai Chen, Zhaoyang Wang, Yuhang Zhou, Yiyang Zhou, Huaxiu Yao, Tianyi Zhou, Tom Goldstein, Parminder Bhatia, Furong Huang, Cao Xiao, “Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement”, arXiv:2405.15973, 2024. PDF
  3. Sen Li, Ruochen Wang, Cho-Jui Hsieh, Minhao Cheng, Tianyi Zhou, “MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion”, arXiv:2402.12741, 2024. PDF, CODE
  4. Kaiwen Yang, Tao Shen, Xinmei Tian, Xiubo Geng, Chongyang Tao, Dacheng Tao, and Tianyi Zhou, “Good Questions Help Zero-Shot Image Reasoning”, arXiv:2312.01598, 2023. PDF
  5. Davit Soselia, Khalid Saifullah, and Tianyi Zhou, “Learning UI-to-Code Reverse Generator Using Visual Critic Without Rendering”, arXiv:2305.14637, 2023. PDF
  6. Xiyang Wu*, Tianrui Guan*, Dianqi Li, Shuaiyi Huang, Xiaoyu Liu, Xijun Wang, Ruiqi Xian, Abhinav Shrivastava, Furong Huang, Jordan Lee Boyd-Graber, Tianyi Zhou, Dinesh Manocha, “AUTOHALLUSION: Automatic Generation of Hallucination Benchmarks for Vision-Language Models”, The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP) Findings, 2024. PDF, CODE
  7. Tianrui Guan*, Fuxiao Liu*, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. “HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models”, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. PDF, CODE+DATA
  8. Yijun Yang, Tianyi Zhou, Kanxue Li, Dapeng Tao, Lusong Li, Li Shen, Xiaodong He, Jing Jiang, and Yuhui Shi. “Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld”, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. PDF, CODE
  9. Chen Liang, Jiahui Yu, Ming-Hsuan Yang, Matthew Brown, Yin Cui, Tuo Zhao, Boqing Gong, Tianyi Zhou, “Module-wise Adaptive Distillation for Multimodality Foundation Models”, Advances in Neural Information Processing Systems 36 (NeurIPS), 2023. PDF

Diffusion Models

  1. Yuanhao Ban, Ruochen Wang, Tianyi Zhou, Boqing Gong, Cho-Jui Hsieh, and Minhao Cheng, “The Crystal Ball Hypothesis in diffusion models: Anticipating object positions from initial noise”, arXiv:2406.01970, 2024. PDF
  2. Yuanhao Ban, Ruochen Wang, Tianyi Zhou, Minhao Cheng, Boqing Gong, Cho-Jui Hsieh, “When and How do negative prompts take effect?”, European Conference on Computer Vision (ECCV), 2024. PDF
  3. Sen Li, Ruochen Wang, Cho-Jui Hsieh, Minhao Cheng, Tianyi Zhou, “MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion”, arXiv:2402.12741, 2024. PDF, CODE
  4. Soumik Mukhopadhyay, Matthew Gwilliam, Yosuke Yamaguchi, Vatsal Agarwal, Namitha Padmanabhan, Archana Swaminathan, Tianyi Zhou, Jun Ohya, Abhinav Shrivastava, “Do text-free diffusion models learn discriminative visual representations?”, European Conference on Computer Vision (ECCV), 2024. PDF

Curriculum Learning and Data Selection


  1. Yucheng Yang, Tianyi Zhou, Lei Han, Meng Fang, Mykola Pechenizkiy, “Automatic Curriculum for Unsupervised Reinforcement Learning”, International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2024. PDF
  2. Songhua Wu, Tianyi Zhou, Yuxuan Du, Jun Yu, Bo Han, Tongliang Liu, “A Time-Consistency Curriculum for Learning from Instance-Dependent Noisy Labels”, IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2024.
  3. Chengkai Hou, Jieyu Zhang, Tianyi Zhou, “When to Learn What: Model-Adaptive Data Augmentation Curriculum”, International Conference on Computer Vision (ICCV), 2023. PDF
  4. Shuang Ao, Tianyi Zhou, Guodong Long, Xuan Song, Jing Jiang, “Curriculum Reinforcement Learning via Morphology-Environment Co-Evolution”, arXiv:2309.12529, 2023. PDF
  5. Shuang Ao, Tianyi Zhou, Jing Jiang, Guodong Long, Xuan Song, Chengqi Zhang, “EAT-C: Environment-Adversarial sub-Task Curriculum for Efficient Reinforcement Learning”, International Conference on Machine Learning (ICML), 2022. PDF
  6. Tianyi Zhou*, Shengjie Wang*, and Jeff A. Bilmes, “Curriculum Learning by Optimizing Learning Dynamics”, International Conference on Artificial Intelligence and Statistics (AISTATS), 2021. PDF, Appendix
  7. Tianyi Zhou*, Shengjie Wang*, and Jeff A. Bilmes, “Robust Curriculum Learning: from clean label detection to noisy label self-correction”, International Conference on Learning Representations (ICLR), 2021. PDF, Slides
  8. Yuchen Jin, Tianyi Zhou, Liangyu Zhao, Yibo Zhu, Chuanxiong Guo, Marco Canini and Arvind Krishnamurthy, “AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly”, International Conference on Learning Representations (ICLR), 2021. PDF, Slides, Code
  9. Shuang Ao, Tianyi Zhou, Guodong Long, Qinghua Lu, Liming Zhu, Jing Jiang, “CO-PILOT: COllaborative Planning and reInforcement Learning On sub-Task curriculum”, Advances in Neural Information Processing Systems 34 (NeurIPS), 2021. PDF, Appendix
  10. Tianyi Zhou*, Shengjie Wang*, and Jeff A. Bilmes, “Curriculum Learning with Dynamic Instance Hardness”, Advances in Neural Information Processing Systems 33 (NeurIPS), 2020. PDF, Appendix, Slides, Code
  11. Tianyi Zhou*, Shengjie Wang*, and Jeff A. Bilmes, “Time-Consistent Self-Supervision for Semi-Supervised Learning”, International Conference on Machine Learning (ICML), 2020. PDF, Appendix, Slides+Talk
  12. Meng Fang, Tianyi Zhou, Yali Du, Lei Han, and Zhengyou Zhang, “Curriculum-guided Hindsight Experience Replay”, Advances in Neural Information Processing Systems 32 (NeurIPS), Vancouver, BC, Canada, 2019. PDF, Appendix, Code, Poster
  13. Tianyi Zhou, Shengjie Wang, and Jeff A. Bilmes, “Diverse Ensemble Evolution: Curriculum Data-Model Marriage”, Advances in Neural Information Processing Systems 31 (NeurIPS), Montreal, QC, Canada, 2018. PDF, Appendix, Poster
  14. Tianyi Zhou and Jeff A. Bilmes, “Minimax Curriculum Learning: Machine Teaching with Desirable Difficulties and Scheduled Diversity”, Sixth International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 2018. PDF
  15. Tianyi Zhou, Jeff A. Bilmes and Carlos Guestrin, “Divide-and-Conquer Learning by Anchoring a Conical Hull”, Twenty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS), Montreal, Canada, 2014. PDF
  16. Tianyi Zhou, Wei Bian, and Dacheng Tao, “Divide-and-Conquer Anchoring for Near Separable Nonnegative Matrix Factorization and Completion in High Dimensions”, IEEE International Conference on Data Mining (ICDM), Dallas, TX, USA, 2013. ( Best Student Paper Award ) PDF, Slides

Reinforcement Learning


  1. Shuang Ao, Tianyi Zhou, Guodong Long, Xuan Song, Jing Jiang, “Curriculum Reinforcement Learning via Morphology-Environment Co-Evolution”, arXiv:2309.12529, 2023. PDF
  2. Yucheng Yang, Tianyi Zhou, Qiang He, Lei Han, Mykola Pechenizkiy, Meng Fang, “Task Adaptation from Skills: Information Geometry, Disentanglement, and New Objectives for Unsupervised Reinforcement Learning”, International Conference on Learning Representations (ICLR), 2024. ( Spotlight ) PDF
  3. Qiang He, Tianyi Zhou, Meng Fang, Setareh Maghsudi, “Adaptive Regularization of Representation Rank as an Implicit Constraint of Bellman Equation”, International Conference on Learning Representations (ICLR), 2024. PDF
  4. Qiang He, Tianyi Zhou, Meng Fang, Setareh Maghsudi, “Eigensubspace of Temporal-Difference Dynamics and How It Improves Value Approximation in Reinforcement Learning”, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD), 2023.
  5. Shuang Ao, Tianyi Zhou, Jing Jiang, Guodong Long, Xuan Song, Chengqi Zhang, “EAT-C: Environment-Adversarial sub-Task Curriculum for Efficient Reinforcement Learning”, International Conference on Machine Learning (ICML), 2022. PDF
  6. Shuang Ao, Tianyi Zhou, Guodong Long, Qinghua Lu, Liming Zhu, Jing Jiang, “CO-PILOT: COllaborative Planning and reInforcement Learning On sub-Task curriculum”, Advances in Neural Information Processing Systems 34 (NeurIPS), 2021. PDF, Appendix
  7. Yijun Yang, Jing Jiang, Tianyi Zhou, Jie Ma, Yuhui Shi, “Pareto Policy Pool for Model-based Offline Reinforcement Learning”, International Conference on Learning Representations (ICLR), 2022. PDF
  8. Meng Fang, Tianyi Zhou, Yali Du, Lei Han, and Zhengyou Zhang, “Curriculum-guided Hindsight Experience Replay”, Advances in Neural Information Processing Systems 32 (NeurIPS), Vancouver, BC, Canada, 2019. PDF, Appendix, Code, Poster
  9. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang and Chengqi Zhang, “Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling”, International Joint Conferences on Artificial Intelligence (IJCAI), Stockholm, Sweden, 2018. PDF, Code

Natural Language Processing (pre-LLM)


  1. Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao, “Merging Experts into One: Improving Computational Efficiency of Mixture of Experts”, The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023. PDF
  2. Yu Cao, Dianqi Li, Meng Fang, Tianyi Zhou, Jun Gao, Yibing Zhan, Dacheng Tao, “TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack”, The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
  3. Le Hou*, Richard Yuanzhe Pang*, Tianyi Zhou, Yuexin Wu, Xinying Song, Xiaodan Song, Denny Zhou, “Token Dropping for Efficient BERT Pretraining”, Annual Meeting of the Association for Computational Linguistics (ACL), 2022. PDF
  4. Yibin Lei, Yu Cao, Dianqi Li, Tianyi Zhou, Meng Fang, Mykola Pechenizkiy, “Phrase-level Textual Adversarial Attack with Label Preservation”, Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL Findings), 2022. PDF
  5. Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Yi Chang, “Eliminating Sentiment Bias for Aspect-Level Sentiment Classification with Unsupervised Opinion Extraction”, The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP) Findings, 2021. PDF
  6. Bo Wang, Tao Shen, Guodong Long, Tianyi Zhou, Yi Chang, “Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion”, The Web Conference (WWW), 2021. arXiv
  7. Yang Li, Tao Shen, Guodong Long, Jing Jiang, Tianyi Zhou, Chengqi Zhang, “Improving Long-Tail Relation Extraction with Collaborating Relation-Augmented Attention”, The 28th International Conference on Computational Linguistics (COLING), 2020. PDF, Code
  8. Yang Li, Guodong Long, Tao Shen, Tianyi Zhou, Lina Yao, Huan Huo and Jing Jiang, “Self-Attention Enhanced Selective Gate with Entity-Aware Embedding for Distantly Supervised Relation Extraction”, The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, USA, 2020. PDF
  9. Yujia Xie, Tianyi Zhou, Yi Mao, and Weizhu Chen, “Conditional Self-Attention for Query-based Summarization”, arXiv:2002.07338, 2020. PDF
  10. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang and Chengqi Zhang, “Tensorized Self-Attention: Efficiently Modeling Pairwise and Global Dependencies Together”, Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019. PDF, Code
  11. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang and Chengqi Zhang, “Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling”, International Joint Conferences on Artificial Intelligence (IJCAI), Stockholm, Sweden, 2018. PDF, Code
  12. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang and Chengqi Zhang, “Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling”, Sixth International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 2018. PDF, Code
  13. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan and Chengqi Zhang, “DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding”, The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), New Orleans, Louisiana, USA, 2018. ( Most cited student paper, 808 citations ) PDF, Code

Distributed and Collaborative (Federated, Decentralized) Learning


  1. Shutong Chen, Tianyi Zhou, Guodong Long, Jie Ma, Jing Jiang, Chengqi Zhang, “Multi-Level Additive Modeling for Structured Non-IID Federated Learning”, arXiv:2405.16472, 2024. PDF
  2. Ziyue Li, Tian Li, Virginia Smith, Jeff Bilmes, Tianyi Zhou, “Many-Objective Multi-Solution Transport”, arXiv:2403.04099, 2024. PDF
  3. Zhiwei Li, Guodong Long, Tianyi Zhou, “Federated Recommendation with Additive Personalization”, International Conference on Learning Representations (ICLR), 2024. PDF
  4. Jie Ma, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang, “Structured Federated Learning through Clustered Additive Modeling”, Advances in Neural Information Processing Systems 36 (NeurIPS), 2023. PDF
  5. Shuangtong Li, Tianyi Zhou, Xinmei Tian, and Dacheng Tao. “Structured Cooperative Learning with Graphical Model Priors”, International Conference on Machine Learning (ICML), 2023. PDF
  6. Shuangtong Li, Tianyi Zhou, Xinmei Tian, Dacheng Tao, “Learning to Collaborate in Decentralized Learning of Personalized Models”, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. PDF
  7. Yue Tan, Guodong Long, Jie Ma, Lu Liu, Tianyi Zhou, Jing Jiang, “Federated Learning from Pre-Trained Models: A Contrastive Learning Approach”, Advances in Neural Information Processing Systems 35 (NeurIPS), 2022.
  8. Chunxu Zhang, Guodong Long, Tianyi Zhou, Peng Yan, Zijian Zhang, Chengqi Zhang, and Bo Yang, “Dual Personalization on Federated Recommendation”, International Joint Conference on Artificial Intelligence (IJCAI), 2023. PDF
  9. Ravikumar Balakrishnan*, Tian Li*, Tianyi Zhou*, Nageen Himayat, Virginia Smith, Jeff A. Bilmes, “Diverse Client Selection for Federated Learning via Submodular Maximization”, International Conference on Learning Representations (ICLR), 2022. PDF, Code
  10. Fengwen Chen, Guodong Long, Zonghan Wu, Tianyi Zhou, Jing Jiang, “Personalized Federated Learning With Structural Information”, International Joint Conference on Artificial Intelligence (IJCAI), 2022. (long presentation) PDF
  11. Yue Tan, Guodong Long, Lu Liu, Tianyi Zhou, Qinghua Lu, Jing Jiang, Chengqi Zhang, “FedProto: Federated Prototype Learning across Heterogeneous Clients”, The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI), 2022. PDF, Code
  12. Guodong Long, Ming Xie, Tao Shen, Tianyi Zhou, Xianzhi Wang, Jing Jiang, “Multi-Center Federated Learning: clients clustering for better personalization”, World Wide Web Journal (Springer), 2022. PDF

Data Augmentation and Synthesis


  1. Divya Kothandaraman, Tianyi Zhou, Ming Lin, Dinesh Manocha, “AerialBooth: Mutual Information Guidance for Text Controlled Aerial View Synthesis from a Single Image”, arXiv:2311.15478, 2023. PDF
  2. Divya Kothandaraman, Tianyi Zhou, Ming Lin, Dinesh Manocha, “Aerial Diffusion: Text Guided Ground-to-Aerial View Translation from a Single Image using Diffusion Models”, The 16th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia Pacific (SIGGRAPH Asia), 2023. PDF, CODE
  3. Lilly Kumari, Shengjie Wang, Tianyi Zhou, Jeff A. Bilmes, “Retrospective Adversarial Replay for Continual Learning”, Advances in Neural Information Processing Systems 35 (NeurIPS), 2022. PDF, CODE
  4. Kaiwen Yang, Yanchao Sun, Jiahao Su, Fengxiang He, Xinmei Tian, Furong Huang, Tianyi Zhou, Dacheng Tao, “Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach”, Advances in Neural Information Processing Systems 35 (NeurIPS), 2022. PDF, CODE
  5. Kaiwen Yang, Tianyi Zhou, Xinmei Tian, Dacheng Tao, “Identity-Disentangled Adversarial Augmentation for Self-supervised Learning”, International Conference on Machine Learning (ICML), 2022. PDF

Continual Learning, Plasticity-Stability Trade-off


  1. Jiangtao Kong, Zhenyu Zong, Tianyi Zhou, Huajie Shao, “Condensed Prototype Replay for Class Incremental Learning”, arXiv:2305.16143, 2023. PDF
  2. Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. “Does Continual Learning Equally Forget All Parameters?”, International Conference on Machine Learning (ICML), 2023. PDF
  3. Yijun Yang, Tianyi Zhou, Jing Jiang, Guodong Long, and Yuhui Shi. “Continual Task Allocation in Meta-Policy Network via Sparse Prompting”, International Conference on Machine Learning (ICML), 2023. PDF, Code
  4. Lilly Kumari, Shengjie Wang, Tianyi Zhou, Jeff A. Bilmes, “Retrospective Adversarial Replay for Continual Learning”, Advances in Neural Information Processing Systems 35 (NeurIPS), 2022. PDF, CODE

Transfer Learning, Multi-task Learning, Meta-Learning


  1. Ziyue Li, Tian Li, Virginia Smith, Jeff Bilmes, Tianyi Zhou, “Many-Objective Multi-Solution Transport”, arXiv:2403.04099, 2024. PDF
  2. Haiyan Zhao, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang, “Voting from Nearest Tasks: Meta-Vote Pruning of Pre-trained Models for Downstream Tasks”, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD), 2023. PDF
  3. Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong and Chengqi Zhang, “Isometric Propagation Network for Generalized Zero-shot Learning”, International Conference on Learning Representations (ICLR), 2021. PDF
  4. Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang, “Attribute Propagation Network for Graph Zero-shot Learning”, The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, USA, 2020. PDF, Code, Poster
  5. Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang, “Learning to Propagate for Graph Meta-Learning”, Advances in Neural Information Processing Systems 32 (NeurIPS), Vancouver, BC, Canada, 2019. PDF, Appendix, Code, Poster, Slides
  6. Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Lina Yao and Chengqi Zhang, “Prototype Propagation Networks (PPN) for Weakly-supervised Few-shot Learning on Category Graph”, International Joint Conferences on Artificial Intelligence (IJCAI), Macau, China, 2019. PDF, Code
  7. Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang, “Many-Class Few-Shot Learning on Multi-Granularity Class Hierarchy”, IEEE Transactions on Knowledge and Data Engineering (T-KDE), 2020. PDF, Code
  8. Tianyi Zhou and Dacheng Tao, “Multi-task Copula by Sparse Graph Regression”, ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), NYC, USA, 2014. PDF

Submodular Optimization


  1. Shengjie Wang*, Tianyi Zhou*, Chandrashekhar Lavania, Jeff A. Bilmes, “Constrained Robust Submodular Partitioning”, Advances in Neural Information Processing Systems 34 (NeurIPS), 2021. ( Spotlight ) PDF, Appendix
  2. Tianyi Zhou, Hua Ouyang, Jeff A. Bilmes, Yi Chang and Carlos Guestrin, “Scaling Submodular Maximization via Pruned Submodularity Graphs”, Twentieth International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, FL, 2017. PDF, Appendix
  3. Tianyi Zhou and Jeff A. Bilmes, “Stream Clipper: Scalable Submodular Maximization on Stream”, arXiv:1606.00389, 2016. PDF

Matrix Factorization and Feature Disentanglement


  1. Kaiwen Yang, Tianyi Zhou, Yonggang Zhang, Xinmei Tian, Dacheng Tao, “Class-Disentanglement and Applications in Adversarial Detection and Defense”, Advances in Neural Information Processing Systems 34 (NeurIPS), 2021. PDF, Appendix
  2. Tianyi Zhou, Wei Bian, and Dacheng Tao, “Divide-and-Conquer Anchoring for Near Separable Nonnegative Matrix Factorization and Completion in High Dimensions”, IEEE International Conference on Data Mining (ICDM), Dallas, TX, USA, 2013. ( Best Student Paper Award ) PDF, Slides
  3. Tianyi Zhou and Dacheng Tao, “Shifted Subspaces Tracking on Sparse Outlier for Motion Segmentation”, International Joint Conferences on Artificial Intelligence (IJCAI), Beijing, China, 2013. PDF
  4. Tianyi Zhou and Dacheng Tao, “Greedy Bilateral Sketch, Completion & Smoothing”, International Conference on Artificial Intelligence and Statistics (AISTATS), Journal of Machine Learning Research - Proceedings Track, Scottsdale, Arizona, USA, 2013. PDF
  5. Tianyi Zhou and Dacheng Tao, “Multi-label Subspace Ensemble”, International Conference on Artificial Intelligence and Statistics (AISTATS), Journal of Machine Learning Research - Proceedings Track 22: 1444-1452, La Palma, Canary Islands, Spain, 2012. PDF
  6. Tianyi Zhou and Dacheng Tao, “Bilateral Random Projections”, IEEE International Symposium on Information Theory (ISIT), MIT, Boston, USA, 2012. PDF
  7. Tianyi Zhou and Dacheng Tao, “GoDec: Randomized Low-rank & Sparse Matrix Decomposition in Noisy Case”, Twenty-Eighth International Conference on Machine Learning (ICML), Bellevue, WA, USA, 2011. ( IEEE TCSC Most Influential Paper Award, Most cited first-author paper, 808 citations ) PDF, Code, Demo Videos, Talk

Compressed Sensing, Sparse Dimension Reduction


  1. Tianyi Zhou and Dacheng Tao, “k-bit Hamming Compressed Sensing”, IEEE International Symposium on Information Theory (ISIT), Istanbul, Turkey, 2013. PDF
  2. Tianyi Zhou and Dacheng Tao, “1-bit Hamming Compressed Sensing”, IEEE International Symposium on Information Theory (ISIT), MIT, Boston, USA, 2012. PDF
  3. Tianyi Zhou and Dacheng Tao, “Double Shrinking for Sparse Learning”, IEEE Transactions on Image Processing (T-IP), 22(1): 244-257, 2013. PDF
  4. Tianyi Zhou, Dacheng Tao and Xindong Wu, “Compressed Labeling on Distilled Labelsets for Multi-label Learning”, Machine Learning (Springer) (MLJ) 88(1-2): 69-126, 2012. PDF
  5. Tianyi Zhou, Dacheng Tao and Xindong Wu, “Manifold elastic net: a unified framework for sparse dimension reduction”, Data Mining and Knowledge Discovery (Springer) (DMKD) 22(3): 340-371, 2011. PDF
  6. Tianyi Zhou and Dacheng Tao, “Hamming Compressed Sensing”, arXiv:1110.0073, 2011. PDF