Fuzzing Deep Learning Frameworks
With growing interest in building intelligent systems using machine learning techniques,
various libraries such as TensorFlow and PyTorch have been released to allow developers to easily integrate machine learning algorithms in their applications.
However, these libraries contain bugs, which harm not only development but also the accuracy and performance of the resulting models.
Therefore, the libraries need to be well-tested for better reliability.
Testing machine learning libraries is challenging because many of these library functions expect structured inputs that follow machine learning-specific constraints.
In this talk, I will present several testing approaches that address this challenge by mining function constraints from API documents and synthesizing predicates on execution paths.
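To illustrate the idea behind constraint-driven fuzzing, the sketch below generates structured inputs for a matrix-multiplication API (such as tf.matmul or torch.matmul). The constraints shown (both operands must have rank at least 2, and their inner dimensions must match) are hypothetical examples of what such a tool might mine from API documentation, not the actual constraints extracted by the approaches presented in the talk; the generator and predicate names are likewise illustrative.

```python
import random

# Hypothetical constraints mined from the documentation of a
# matmul-style API: both inputs must have rank >= 2, and the
# inner dimensions must agree (a[-1] == b[-2]).

def generate_matmul_shapes(rng, max_rank=4, max_dim=8):
    """Generate a pair of tensor shapes satisfying the mined constraints."""
    rank = rng.randint(2, max_rank)
    a = [rng.randint(1, max_dim) for _ in range(rank)]
    b = list(a)                       # copy batch dimensions unchanged
    b[-2] = a[-1]                     # force inner dimensions to match
    b[-1] = rng.randint(1, max_dim)   # free output dimension
    return a, b

def satisfies_constraints(a, b):
    """Predicate form of the same constraints, usable as a filter or oracle."""
    return len(a) >= 2 and len(b) >= 2 and a[-1] == b[-2]

rng = random.Random(0)
pairs = [generate_matmul_shapes(rng) for _ in range(100)]
assert all(satisfies_constraints(a, b) for a, b in pairs)
```

A fuzzer built this way spends its budget on inputs that pass the API's validity checks, so it exercises the library's deeper computation paths rather than its input-validation error handling.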
Mijung Kim is an Assistant Professor in the Department of Computer Science and Engineering at UNIST, South Korea. Before joining UNIST, she was a postdoc at Purdue University.
She received her Ph.D. in Computer Science from Hong Kong University of Science and Technology.
She also studied at Georgia Tech and UIUC for her master’s and bachelor’s degrees, respectively.
She is broadly interested in software engineering, with a focus on improving the reliability and security of AI systems via software testing, fuzzing,
and large language models. She has published her work in major software engineering and AI conferences such as FSE, ISSTA, and EMNLP.