Xingchen Wan

Research Scientist


About me

I am a Research Scientist at Google based in the San Francisco Bay Area.

I did my DPhil (the Oxford way of saying PhD) in the Machine Learning Research Group, Department of Engineering Science, University of Oxford, where I was supervised by Professor Michael A. Osborne. I was also a Clarendon Scholar and a member of St John’s College, both at the University of Oxford. I previously interned at Google and Meta.


Academic Services

Reviewer/program committee member for ACL (2023-24), ICML (2023-24), AutoML-Conf (2023-24), CVPR (2024), ECCV (2024), ICLR (2024), WACV (2022-24), NeurIPS (2022-23), and EMNLP (2023), as well as journals including JMLR and Machine Learning.

Area chair at NeurIPS (2024).

Interests

  • Large language models
  • Bayesian optimization & AutoML
  • Machine learning on graphs and networks

Education

  • DPhil (PhD) in Machine Learning, 2019 - 2023

    University of Oxford

  • MEng (integrated bachelor's and master's degree) in Engineering Science, 1st-class Honours (top graduate of the class), 2015 - 2019

    University of Oxford


Publications

Selected publications are listed below. For a complete list including preprints and working papers, see my Google Scholar profile.
(2024). Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering. International Conference on Learning Representations (ICLR).

(2024). Adaptive Batch Sizes for Active Learning: A Probabilistic Numerics Approach. International Conference on Artificial Intelligence and Statistics (AISTATS).

(2024). Working Memory Capacity of ChatGPT: An Empirical Study. AAAI Conference on Artificial Intelligence (AAAI).

(2024). AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning. Transactions of the Association for Computational Linguistics (TACL).

(2024). Iterate Averaging in the Quest for Best Test Error. Journal of Machine Learning Research (JMLR).

(2023). Universal Self-Adaptive Prompting. Empirical Methods in Natural Language Processing (EMNLP).

(2023). Survival of the Most Influential Prompts: Efficient Black-Box Prompt Search via Clustering and Pruning. Findings of the Association for Computational Linguistics: EMNLP 2023.

(2023). Bayesian Optimisation of Functions on Graphs. Advances in Neural Information Processing Systems (NeurIPS).

Experience

Google Research, Cloud AI Team
Research Scientist
February 2024 – Present Sunnyvale, CA, US
Google Research, Cloud AI Team
Research Intern
October 2022 – June 2023 Sunnyvale, CA, US & London, UK
Meta Research
Research Intern
May 2022 – September 2022 London, UK
Oxford-Man Institute of Quantitative Finance, University of Oxford
Research Intern
August 2018 – September 2018 Oxford, UK
Morgan Stanley
Sales and Trading Summer Analyst
June 2018 – August 2018 London, UK

Awards


Department of Engineering Science, University of Oxford
Maurice Lubbock Prize for Best Performance in the Honour School of Engineering Science
Deutsche Börse Group
Deutsche Börse Scholarship