Suyuchen Wang (王苏羽晨)

Graduating Ph.D., Computer Science

About Me

Suyuchen Wang is a graduating Ph.D. candidate at Mila - Quebec AI Institute and Université de Montréal, supervised by Bang Liu. His research spans the full stack of making language models more capable, from efficient long-context modeling (Resonance RoPE, ACL 2024) to retrieval-augmented reasoning (CARE, EMNLP 2025) and, more recently, vision-language understanding (VCR, ICLR 2025; co-first author, with Yoshua Bengio).

He has published 20+ papers at venues including ICLR, EMNLP, ACL, NeurIPS, and The Web Conference. His open-source contributions include model checkpoints on HuggingFace and a Chrome extension for arXiv-to-Markdown conversion (⭐90+). He has held research positions at ServiceNow Research, Huawei Noah's Ark Lab, and Tencent Jarvis Lab.

Download CV
Interests
  • Large Language Models
  • Vision-Language Models
  • Long-Context & Efficient Reasoning
  • Retrieval-Augmented Generation
  • Open-Source ML Tools
Education
  • Ph.D., Computer Science

    Mila - Quebec AI Institute / Université de Montréal

  • B.Eng. (Hons.), Computer Science

    Beihang University

Recent Publications
(2025). System-1.5 Reasoning: Traversal in Language and Latent Spaces with Dynamic Shortcuts. Advances in Neural Information Processing Systems (NeurIPS 2025).
(2025). Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems. arXiv preprint arXiv:2504.01990.
(2025). AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding. CoRR, abs/2502.01341.
(2025). BigDocs: An Open Dataset for Training Multimodal Models on Document and Code Tasks. The Thirteenth International Conference on Learning Representations (ICLR 2025).
(2025). CARE: Improving Context Fidelity via Native Retrieval-Augmented Reasoning. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025).