Research
My recent research focuses on reasoning and efficiency of LLMs and MLLMs.
[ICLR 2026]
SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs
Dachuan Shi, Abedelkadir Asi, Keying Li, Xiangchi Yuan, Leyan Pan, Wenke Lee, Wen Xiao
[Paper]
[Code]
[Website]
[ICML 2025]
LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models
Dachuan Shi, Yonggan Fu, Xiangchi Yuan, Zhongzhi Yu, Haoran You, Sixu Li, Xin Dong,
Jan Kautz, Pavlo Molchanov, Yingyan Celine Lin
[Paper]
[Code]
[ICML 2024]
CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers
Dachuan Shi, Chaofan Tao, Anyi Rao, Zhendong Yang, Chun Yuan, Jiaqi Wang
[Paper]
[Code]
[ICML 2023]
UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers
Dachuan Shi, Chaofan Tao, Ying Jin, Zhendong Yang, Chun Yuan, Jiaqi Wang
[Paper]
[Code]
[Website]
[Preprint 2026]
Behavior Knowledge Merge in Reinforced Agentic Models
Xiangchi Yuan, Dachuan Shi, Chunhui Zhang, Zheyuan Liu, Shenglong Yao, Soroush Vosoughi, Wenke Lee
[Paper]
[Code]
[Website]
[Preprint 2025]
Mitigating Forgetting Between Supervised and Reinforcement Learning Yields Stronger Reasoners
Xiangchi Yuan, Xiang Chen, Tong Yu, Dachuan Shi, Can Jin, Wenke Lee, Saayan Mitra
[Paper]
[EMNLP 2025]
Superficial Self-Improved Reasoners Benefit from Model Merging
Xiangchi Yuan, Chunhui Zhang, Zheyuan Liu, Dachuan Shi, Leyan Pan, Soroush Vosoughi, Wenke Lee
[Paper]
[Code]
[Preprint 2024]
Supervised Fine-tuning in turn Improves Visual Foundation Models
Xiaohu Jiang, Yixiao Ge, Yuying Ge, Dachuan Shi, Chun Yuan, Ying Shan
[Paper]
[Code]
[ECCV 2022]
Masked Generative Distillation
Zhendong Yang, Zhe Li, Mingqi Shao, Dachuan Shi, Zehuan Yuan, Chun Yuan
[Paper]
[Code]
Experience
2017–21: Tsinghua, Outstanding Bachelor's Thesis
2021–24: Tsinghua, Outstanding Master's Thesis
2022–24: Research Intern, Shanghai AI Lab (Multimodal LLMs, Inference Optimization)
2025: Research Intern, Microsoft (Reasoning LLMs, Math & Coding)