1 Robust and Trustworthy AI for Software Engineering

We study how large language models (LLMs) and AI-based tools can be made more resilient, consistent, and trustworthy when reasoning about source code. This includes understanding how AI systems respond to imperfect or buggy code and designing strategies that improve their reliability in practical software development contexts.
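One common way to probe such resilience is to perturb a snippet with a small semantic mutation and check whether the model's judgment changes accordingly. The sketch below is a minimal, hypothetical illustration of that idea; the function name and the subject snippet are invented for the example, not taken from any specific study.

```python
def mutate_comparison(source: str) -> str:
    """Produce a buggy variant by weakening the first '<=' to '<'.

    A single-token relational mutation: a robust code model should
    change its verdict (e.g., "correct" vs. "buggy") when the
    semantics of the snippet change this way.
    """
    return source.replace("<=", "<", 1)

# Hypothetical subject program for the probe.
original = "def in_range(x, lo, hi):\n    return lo <= x <= hi"
buggy = mutate_comparison(original)

assert buggy != original            # the probe really altered the program
assert "lo < x" in buggy            # only the first comparison was weakened
```

Pairs of (original, mutated) snippets like this make it possible to measure whether a model's answers track program semantics rather than surface form.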

2 Automated Quality Assurance and Test Intelligence

This area focuses on advancing automation in software testing, particularly in how systems learn to identify correct versus incorrect program behaviors. We investigate intelligent agents that can reason about program outputs, supported by high-quality datasets and evaluation frameworks that reflect real-world testing practices.
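A basic instance of such reasoning is a differential oracle: the program under test is labeled correct or incorrect per input by comparison against a trusted reference. The sketch below is illustrative only; the function names and the toy subject are assumptions made for the example.

```python
def differential_oracle(program, reference, inputs):
    """Label each input by comparing the program under test
    against a trusted reference implementation."""
    return ["pass" if program(x) == reference(x) else "fail"
            for x in inputs]

# Hypothetical subject: a broken absolute-value routine that
# forgets to negate negative inputs.
buggy_abs = lambda n: n

verdicts = differential_oracle(buggy_abs, abs, [-2, 0, 3])
# → ["fail", "pass", "pass"]
```

Learned oracles generalize this pattern to settings where no reference implementation exists and correctness must be inferred from data.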

3 Data-Centric Foundations for Reliable Testing

High-quality datasets are critical to developing effective AI for testing and validation. Our research examines how to curate, synthesize, and structure test data derived from human-written tests and other software artifacts, enabling systematic exploration of correctness and coverage in diverse settings.

4 Cross-Domain Software Assurance

Emerging computing paradigms, such as quantum and hybrid quantum-classical software, pose new challenges for testing and analysis. We explore how classical software engineering principles can enhance the testing of quantum systems, and conversely, how quantum-inspired methods can inform next-generation testing strategies for traditional software.
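Because quantum programs produce probabilistic outputs, a classical-style oracle must check outcome frequencies against expected probabilities rather than exact values. The sketch below illustrates this idea in plain Python under assumed names and a made-up measurement histogram; it is not tied to any particular quantum SDK.

```python
def frequency_oracle(counts: dict, expected: dict, tolerance: float) -> bool:
    """Statistical oracle for a probabilistic (e.g. quantum) program:
    every basis state's observed frequency must lie within `tolerance`
    of its theoretically expected probability."""
    shots = sum(counts.values())
    return all(abs(counts.get(state, 0) / shots - p) <= tolerance
               for state, p in expected.items())

# Hypothetical Hadamard-on-|0> run: the ideal distribution is 50/50.
observed = {"0": 512, "1": 488}   # e.g. 1000 shots from a simulator
ok = frequency_oracle(observed, {"0": 0.5, "1": 0.5}, tolerance=0.05)
# → True: both frequencies are within 0.012 of 0.5
```

Choosing the tolerance (or replacing it with a proper statistical test) is itself a testing design question that classical assertion techniques do not answer directly.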