Rui Wang (王瑞)

Computer Science & Artificial Intelligence

I am a third-year undergraduate student at The Hong Kong University of Science and Technology (HKUST), pursuing a Bachelor of Engineering in Computer Science with an Extended Major in Artificial Intelligence. Currently, I am on a semester exchange at The University of Texas at Austin (UT Austin). This summer, I will be joining AQUMON (Shenzhen) as a research intern through the HKUST CSE Co-op Program.

My research interests broadly lie in Large Language Models (LLMs) and their reasoning capabilities. Specifically, I focus on:

  • LLM Decision-making
  • Uncertainty Quantification and Confidence Calibration
  • RAG Robustness
  • Rule Induction

👈 Feel free to reach out — my contact information is available in the sidebar.

🔥 News

Jun 2026
💼 Accepted into the HKUST CSE Co-op Program. Starting an internship at AQUMON (Shenzhen) as an AI Large Language Model Research Intern under the supervision of Prof. Yangqiu Song.
Apr 2026
📄 Updated the arXiv version of my paper "Rethinking Prospect Theory for LLMs: Revealing the Instability of Decision-Making under Epistemic Uncertainty", and submitted it to COLM 2026. [arXiv]
Apr 2026
🚀 Planning to start HKUST UG AI Lab with collaborator Jiayu Liu.
Jan 2026
📄 One paper on confidence calibration in RAG systems submitted to ACL 2026. [arXiv]
Jan 2026
✈️ Started semester exchange at The University of Texas at Austin.

📝 Research


Rethinking Prospect Theory for LLMs: Revealing the Instability of Decision-Making under Epistemic Uncertainty

Under Review (COLM 2026) Apr 2026

First Author

  • Designed a three-stage workflow to evaluate LLM decision-making under uncertainty, estimating Prospect Theory parameters and testing whether the framework meaningfully fits model behavior
  • Showed that Prospect Theory is not consistently reliable for interpreting LLMs, and that the inferred behavior is especially unstable under epistemic uncertainty expressed through linguistic markers

Noise-Aware Verbal Confidence Calibration for LLMs in RAG Systems

Under Review (ACL 2026) Jan 2026

Co-first Author

  • Developed NAACL, a framework that mitigates overconfidence in RAG systems by training models to explicitly identify retrieval noise and calibrate their verbal confidence accordingly

🎓 Education

The Hong Kong University of Science and Technology

Sep 2023 - May 2027

Bachelor of Engineering in Computer Science - Extended Major in AI

Cumulative GPA: 3.983 / 4.30
Major CGA: 4.073 / 4.30
Honors: Dean's List (All Semesters)
Scholarship: University Continuing Scholarship (Total 60,000 HKD)
Transcript: View PDF Here

The University of Texas at Austin

Jan 2026 - May 2026

Semester Exchange, Electrical and Computer Engineering

Coursework: Software Engineering, Machine Learning & Data Analytics for Edge AI

University College London

Jul 2024

Summer School

Programme: Short-term summer programme

🌟 Services

Teaching Assistant

HKUST

COMP1021: Introduction to Computer Science

Designed coding assignments and conducted tutorials for 50+ first-year undergraduates
Received positive feedback for clarifying complex CS concepts

Peer Mentor

HKUST

CSE Peer Mentor Program

Provided academic guidance and career planning advice to junior students
Fostered a collaborative learning environment

💻 Technical Skills

Machine Learning & Programming

Python PyTorch NumPy Scikit-Learn C++

Software Development

Git Bash Scripting GitHub Copilot Cursor

LLM Operations

vLLM LlamaFactory

Relevant Coursework

Data Structures (A+) Knowledge Discovery (A) Operating Systems Algorithms Probability (A+) Linear Algebra (A+)

📬 Get In Touch
