I am a postdoctoral researcher at KAIST working on software engineering: specifically, I use machine learning to build tools that help developers remove bugs from software. My research focuses on facilitating the debugging process from the perspective of the developer. For example, I have worked on automatically reproducing bug reports and on explainable automated debugging. I have also contributed to the theory of the field by analyzing existing techniques with a novel Bayesian statistics framework.
Fault Localization (FL), in which a developer seeks to identify which part of the code is malfunctioning and needs to be fixed, is a recurring challenge in debugging. To reduce developer burden, many automated FL techniques have been proposed. However, prior work has noted that existing techniques fail to provide rationales for the suggested locations, hindering developer adoption. With this in mind, we propose AutoFL, a Large Language Model (LLM)-based FL technique that generates an explanation of the bug along with a suggested fault location. AutoFL prompts an LLM to use function calls to navigate the repository, so that it can effectively localize faults over a large codebase despite the limits of the LLM context length. Extensive experiments on 798 real-world bugs in Java and Python show that AutoFL improves method-level acc@1 by up to 233.3% over baselines. Furthermore, in interviews about AutoFL-generated explanations, developers generally appreciated the natural-language explanations and preferred reading a few high-quality explanations rather than many.
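To give a concrete sense of what an LLM-driven, repository-navigating FL loop can look like, here is a minimal Python sketch of function calling against a failing test, assuming an OpenAI-style chat API. The tool names (get_covered_methods, get_method_body), the repo backend, and the model name are illustrative stand-ins rather than AutoFL's actual interface.

```python
# Minimal sketch of an LLM function-calling loop for fault localization, in the
# spirit of AutoFL. Tool names and the `repo` backend are hypothetical
# stand-ins, not AutoFL's actual interface; the model name is illustrative.
import json
from openai import OpenAI

client = OpenAI()

TOOLS = [
    {"type": "function", "function": {
        "name": "get_covered_methods",
        "description": "List methods covered by the failing test.",
        "parameters": {"type": "object", "properties": {}}}},
    {"type": "function", "function": {
        "name": "get_method_body",
        "description": "Return the source code of a method.",
        "parameters": {"type": "object",
                       "properties": {"signature": {"type": "string"}},
                       "required": ["signature"]}}},
]

def call_tool(name, args, repo):
    # Dispatch a tool request to a (hypothetical) repository-inspection backend.
    if name == "get_covered_methods":
        return repo.covered_methods()
    if name == "get_method_body":
        return repo.method_body(args["signature"])
    return "unknown tool"

def localize(failing_test_info, repo, max_steps=10):
    messages = [
        {"role": "system", "content": "You are a fault localization assistant."},
        {"role": "user", "content": (
            f"A test failed:\n{failing_test_info}\n"
            "Inspect the covered code and name the most likely culprit method, "
            "with a short explanation of the bug.")},
    ]
    for _ in range(max_steps):
        response = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=TOOLS)
        message = response.choices[0].message
        messages.append(message)
        if not message.tool_calls:       # no more lookups: final answer reached
            return message.content       # explanation plus suspected location
        for call in message.tool_calls:  # execute each requested repository lookup
            result = call_tool(call.function.name,
                               json.loads(call.function.arguments), repo)
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": json.dumps(result)})
    return None  # gave up within the step budget
```

The loop ends either when the model stops requesting lookups (its answer doubles as the explanation) or when the step budget runs out, which keeps repository exploration bounded.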
Debugging takes up a significant portion of developer time. As a result, automated debugging techniques including Fault Localization (FL) and Automated Program Repair (APR) have garnered significant attention for their potential to aid developers in debugging tasks. Given recent advances in techniques that treat the two tasks as closely coupled, such as Unified Debugging, a framework that formally expresses the two tasks together would deepen our understanding of automated debugging and provide a way to formally analyze techniques and approaches. To this end, we propose a Bayesian framework for understanding automated debugging. We find that the Bayesian framework, along with a concrete statement of the objective of automated debugging, can recover maximal fault localization formulae from prior work, as well as analyze existing APR techniques and their underlying assumptions.
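As a rough illustration of the kind of statement such a framework manipulates (not the paper's exact formulation), Bayes' rule lets the suspiciousness of a program element be written as a posterior probability given the observed test results:

```latex
% Illustrative application of Bayes' rule to fault localization, where
% \theta ranges over program elements and T denotes the observed test results.
P(\theta~\text{faulty} \mid T)
  = \frac{P(T \mid \theta~\text{faulty})\, P(\theta~\text{faulty})}
         {\sum_{\theta'} P(T \mid \theta'~\text{faulty})\, P(\theta'~\text{faulty})}
```

Read this way, an FL formula roughly corresponds to a choice of prior and likelihood model over program elements and test outcomes.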
As a means of empirically demonstrating our framework, we further propose BAPP, a Bayesian Patch Prioritization technique that incorporates intermediate program values to analyze likely patch locations and repair actions, with its core equations derived from our Bayesian framework. We find that incorporating program values allows BAPP to identify correct patches more precisely: the rankings produced by BAPP reduced the number of required patch evaluations by 68% and consequently reduced repair time by 34 minutes on average. Further, our Bayesian framework suggests a number of changes to the way fault localization information is used in program repair, which we validate are useful for BAPP. These results highlight the potential of value-cognizant automated debugging techniques, and further verify our theoretical framework.
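As a toy illustration of value-cognizant, Bayes-style patch prioritization (a deliberately simplified stand-in, not BAPP's actual equations), one can score each candidate patch by the product of a fault-localization prior for its location and a likelihood that the intermediate values observed there are consistent with that location being faulty, then rank by normalized posterior:

```python
# Toy sketch of Bayesian patch prioritization; the scoring is a simplified
# stand-in for BAPP's equations. Priors and value likelihoods are assumed to
# be supplied by upstream fault localization and value analysis.
from dataclasses import dataclass

@dataclass
class Candidate:
    patch_id: str
    prior: float             # P(location faulty), e.g. normalized FL suspiciousness
    value_likelihood: float  # P(observed intermediate values | location faulty)

def rank_candidates(candidates):
    # Posterior up to a constant: P(faulty | values) ∝ P(values | faulty) * P(faulty)
    scores = [(c.prior * c.value_likelihood, c) for c in candidates]
    total = sum(score for score, _ in scores) or 1.0
    return sorted(((score / total, c) for score, c in scores),
                  key=lambda pair: pair[0], reverse=True)

if __name__ == "__main__":
    patches = [
        Candidate("add-null-check", prior=0.5, value_likelihood=0.8),
        Candidate("swap-operator",  prior=0.3, value_likelihood=0.2),
        Candidate("widen-bound",    prior=0.2, value_likelihood=0.6),
    ]
    for posterior, candidate in rank_candidates(patches):
        print(f"{candidate.patch_id}: posterior = {posterior:.2f}")
```

Evaluating patches in this order is what shrinks the number of patch validations: higher-posterior candidates are compiled and tested first.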
Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction
Many automated test generation techniques have been developed to aid developers with writing tests. To facilitate full automation, most existing techniques aim to either increase coverage or generate exploratory inputs. However, existing test generation techniques largely fall short of achieving more semantic objectives, such as generating tests to reproduce a given bug report. Reproducing bugs is nonetheless important: our empirical study shows that the number of tests added to open source repositories due to issues was about 28% of the corresponding project test suite size. Meanwhile, due to the difficulty of transforming the expected program semantics in bug reports into test oracles, existing failure reproduction techniques tend to deal exclusively with program crashes, a small subset of all bug reports. To automate test generation from general bug reports, we propose LIBRO, a framework that uses Large Language Models (LLMs), which have been shown to be capable of performing code-related tasks. Since LLMs themselves cannot execute the target buggy code, we focus on post-processing steps that help us discern when LLMs are effective and rank the produced tests according to their validity. Our evaluation of LIBRO shows that, on the widely studied Defects4J benchmark, LIBRO can generate failure-reproducing test cases for 33% of all studied cases (251 out of 750), while ranking a bug-reproducing test first for 149 bugs. To mitigate data contamination, we also evaluate LIBRO against 31 bug reports submitted after the LLM training data collection cutoff: LIBRO produces bug-reproducing tests for 32% of these bug reports. Overall, our results show that LIBRO has the potential to significantly enhance developer efficiency by automatically generating tests from bug reports.
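For intuition, here is a simplified Python sketch of a report-to-test pipeline of this kind, assuming an OpenAI-style chat API. The prompt, the repository helpers (inject_into_suite, run_test), and the agreement-based ranking heuristic are illustrative assumptions rather than LIBRO's actual implementation.

```python
# Simplified sketch of an LLM-based bug-report-to-test pipeline in the spirit
# of LIBRO. The prompt, the repository helpers, and the ranking heuristic are
# illustrative assumptions, not LIBRO's actual implementation.
from collections import Counter
from openai import OpenAI

client = OpenAI()

PROMPT = """You will be shown a bug report. Write a JUnit test method that
reproduces the described bug.

# Bug report
{title}

{body}

# Reproducing test
"""

def generate_candidate_tests(report, n=5):
    # Sample several candidate tests for the same bug report.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is illustrative
        messages=[{"role": "user",
                   "content": PROMPT.format(title=report["title"],
                                            body=report["body"])}],
        n=n, temperature=0.7)
    return [choice.message.content for choice in response.choices]

def select_and_rank(tests, repo):
    # Keep only tests that compile and fail on the buggy version (a failing
    # test is a candidate reproduction), then rank by how often the same
    # failure message recurs across samples: agreement between independently
    # sampled tests serves as a proxy for validity.
    failing = []
    for test in tests:
        test_file = repo.inject_into_suite(test)   # hypothetical helper
        outcome = repo.run_test(test_file)         # hypothetical helper
        if outcome.compiled and outcome.failed:
            failing.append((test, outcome.failure_message))
    votes = Counter(message for _, message in failing)
    return sorted(failing, key=lambda pair: votes[pair[1]], reverse=True)
```

The key point is that the LLM only drafts the tests; whether a draft actually reproduces the reported bug is decided by compiling and running it against the buggy version, and cross-sample agreement is used to push more trustworthy tests to the top.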