Department of Computer Science and Technology
Nanjing University
163 Xianlin Avenue, Qixia District
Nanjing, Jiangsu Province, China, 210023
I am currently a professor in the Department of Computer Science and Technology at Nanjing University. I received my Ph.D. degree in computer science from Southeast University in 2003. From January 2003 to December 2004, I was a researcher at Tsinghua University. From February 2005 to February 2008, I was a researcher at Hong Kong Polytechnic University.
Open positions (New!)
Positions available for highly motivated Ph.D. students with a major in computer science, mathematics, or related fields
Positions available for master's students with a major in computer science, mathematics, or related fields
Current interests
My research focuses on software quality assurance in software engineering, especially software testing, defect prediction/detection, and program analysis.
Software testing: cost-effective mutation testing, testing for/with AI
Program analysis: data-driven program analysis, selective program analysis, program analysis for/with AI
Our objective is to provide strong (i.e., simple yet effective) baseline approaches for important problems in software quality assurance (see the examples below). A baseline approach defines a meaningful point of reference and hence allows a meaningful evaluation of any new approach against previous approaches. The ongoing use of strong baseline approaches would help advance the state of the art more reliably and quickly. If you are interested in our “SEE” (Simple yEt Effective) group, please contact me.
Teaching
Software metrics
Mathematical modelling in computer science
Awards/honors
2018: Advisor for an Excellent Ph.D. Dissertation in Jiangsu Province
2013: "Deng Feng" Distinguished Scholars Program, Nanjing University
2012: First Prize of Jiangsu Science and Technology Award
2010: China Computer Federation Young Computer Scientist Award
2008: Program for New Century Excellent Talents in University, Ministry of Education
2007: First Prize of Jiangsu Science and Technology Progress Award
Research highlights
2024: How to conduct a reliable performance evaluation in defect prediction?
Suggestion: Use MATTER (a fraMework towArd a consisTenT pErformance compaRison) to conduct the evaluation
2024: How to evaluate the accuracy of test effectiveness metrics in a reliable way?
Suggestion: Use ASSENT (evAluating teSt Suite EffectiveNess meTrics) to conduct the evaluation
2023: The test program's inherent control flow is a better oracle for testing coverage profilers
Suggestion: Use DOG (finD cOverage buGs) to uncover bugs in code coverage profilers
2023: Does your CLBI (code-line-level bugginess identification) approach really advance the state-of-the-art in identifying buggy code lines?
Suggestion: Use GLANCE (aiminG at controL- ANd ComplEx-statements) to examine the practical value of your CLBI approach
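For intuition, here is a minimal Python sketch of the control-and-complex-statements idea: rank source lines so that control statements are inspected first and, within each group, lines with more tokens come earlier. The keyword list and the tokenizer are illustrative assumptions, not GLANCE's exact rules.

import re

# Illustrative keyword list for control statements (an assumption).
CONTROL_KEYWORDS = {"if", "else", "for", "while", "switch", "case", "do"}

def rank_lines(lines):
    def score(line):
        tokens = re.findall(r"\w+", line)
        is_control = any(t in CONTROL_KEYWORDS for t in tokens)
        # Control statements outrank plain lines; ties broken by token count.
        return (1 if is_control else 0, len(tokens))
    # Highest-priority lines first.
    return sorted(lines, key=score, reverse=True)

src = ["int x = 0;", "if (x > 0 && ready) {", "return compute(x, y);"]
for line in rank_lines(src):
    print(line)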
2023: Existing label collection approaches are vulnerable to inconsistent defect labels, which negatively affects defect prediction
Suggestion: Use TSILI (Three Stage Inconsistent Label Identification) to detect and exclude inconsistent defect labels before building and evaluating defect prediction models
2022: Measuring the order-preserving ability is important but missing in mutation reduction evaluation
Suggestion: Use OP/EROP (Order Preservation/Effort-aware Relative Order Preservation) to evaluate the effectiveness of a mutation reduction strategy
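A sketch of what an order-preservation style measure can look like, assuming it is computed as the fraction of test-suite pairs whose mutation-score ordering on the full mutant set is kept on the reduced set; this formulation is an assumption inferred from the metric's name, not the paper's exact definition of OP/EROP.

from itertools import combinations

def order_preservation(full_scores, reduced_scores):
    # full_scores / reduced_scores: dict suite_name -> mutation score.
    pairs = list(combinations(full_scores, 2))
    kept = sum(
        1
        for s1, s2 in pairs
        # The pair is preserved when both score differences have the
        # same sign; ties count as not preserved in this sketch.
        if (full_scores[s1] - full_scores[s2])
        * (reduced_scores[s1] - reduced_scores[s2]) > 0
    )
    return kept / len(pairs)

full = {"T1": 0.90, "T2": 0.75, "T3": 0.60}
reduced = {"T1": 0.85, "T2": 0.88, "T3": 0.55}
print(round(order_preservation(full, reduced), 2))  # 0.67: one pair flipped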
2022: An unsupervised model dramatically reduces the cost of mutation testing while maintaining accuracy
Suggestion: Use CBUA (Coverage-Based Unsupervised Approach) as a baseline in predictive mutation testing
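As a rough illustration of coverage-based kill prediction, the following sketch predicts a mutant as killed whenever its mutated statement is covered by the test suite; this single rule is a simplifying assumption for illustration, and the actual CBUA model is more refined.

def predict_killed(mutants, covered_lines):
    # mutants: dict mutant_id -> line number of the mutated statement.
    # covered_lines: set of line numbers executed by the test suite.
    # Assumed rule (a simplification): covered implies predicted killed.
    return {mid: (line in covered_lines) for mid, line in mutants.items()}

mutants = {"m1": 10, "m2": 42, "m3": 77}
covered = {5, 10, 42}
print(predict_killed(mutants, covered))  # {'m1': True, 'm2': True, 'm3': False}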
2021: Matching task annotation tags is competitive with, or even superior to, state-of-the-art approaches for identifying self-admitted technical debt
Suggestion: Use MAT (Matches task Annotation Tags) as a baseline in SATD identification
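A minimal Python sketch of the tag-matching idea follows, assuming the common TODO/FIXME/HACK/XXX task tags and whole-word matching; these details are assumptions for illustration and may differ from the MAT implementation.

import re

# Common task annotation tags (an assumed list, not necessarily MAT's).
TASK_TAGS = ("todo", "fixme", "hack", "xxx")

def is_satd(comment: str) -> bool:
    # Predict a comment as self-admitted technical debt (SATD)
    # if it contains any task annotation tag as a whole word.
    words = re.findall(r"[a-z]+", comment.lower())
    return any(tag in words for tag in TASK_TAGS)

comments = [
    "TODO: handle the timeout case",
    "computes the weighted average",
    "HACK - works around a driver bug",
]
for c in comments:
    print(is_satd(c), "|", c)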
2019: Simple multi-source information fusion can find dozens of bugs in mature code coverage tools
Suggestion: Use C2V (Code Coverage Validation) as a baseline in testing code coverage tools
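The core idea can be sketched as a differential comparison of per-line execution counts reported by two coverage tools (e.g., gcov and llvm-cov) on the same program; any disagreement hints at a bug in at least one tool. The report format assumed below, a mapping from line number to execution count, is for illustration only.

def diff_reports(report_a, report_b):
    # Return the lines where the two coverage reports disagree.
    inconsistencies = {}
    for line in sorted(set(report_a) | set(report_b)):
        a, b = report_a.get(line), report_b.get(line)
        if a != b:
            inconsistencies[line] = (a, b)
    return inconsistencies

gcov_like = {1: 1, 2: 1, 3: 0}
llvmcov_like = {1: 1, 2: 0, 3: 0}
print(diff_reports(gcov_like, llvmcov_like))  # {2: (1, 0)}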
2018: Very simple size models can outperform complex learners in defect prediction
Suggestion: Use ManualDown/ManualUp on the test set as the baselines in defect prediction
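A minimal sketch of the two size-based baselines, assuming each module is a (name, lines-of-code) pair; ManualDown inspects larger modules first, while ManualUp inspects smaller modules first for effort-aware evaluation. The interface is an assumption for illustration, not the authors' exact code.

def manual_down(modules):
    # Rank modules from largest to smallest: larger modules are
    # predicted as more defect-prone (non-effort-aware setting).
    return sorted(modules, key=lambda m: m[1], reverse=True)

def manual_up(modules):
    # Rank modules from smallest to largest: smaller modules are
    # inspected first (effort-aware setting).
    return sorted(modules, key=lambda m: m[1])

modules = [("Parser.java", 1200), ("Util.java", 90), ("Net.java", 450)]
print([m[0] for m in manual_down(modules)])  # ['Parser.java', 'Net.java', 'Util.java']
print([m[0] for m in manual_up(modules)])    # ['Util.java', 'Net.java', 'Parser.java']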
We hope to see real advances in software quality assurance. We hope to see you in SEE in NJU (Now Join Us).
Last updated: June 2024