Trustworthy AI: A Compositional Perspective

2024-03-26

[Abstract]
Artificial Intelligence (AI) systems are increasingly deployed in practical and safety-critical systems that interact with humans, owing to their impressive performance. However, careless deployment raises concerns about untrustworthy behaviors of AI systems. In particular, a single untrustworthy AI component can lead to catastrophic consequences for a larger AI system.

In this talk, I will discuss my efforts to learn and quantify the uncertainty of predictions from individual AI components and from compositions of AI components, in order to enhance the trustworthiness of entire AI systems. In particular, we consider a large AI system consisting of AI components, each of which returns a conformal set. Here, a conformal set quantifies the uncertainty of predictions via a set of possible label predictions, where the set size reflects the predicted uncertainty and the set comes with a correctness guarantee. We mainly consider how to obtain a correctness guarantee for system-wide conformal set predictions given guarantees on the component-wise conformal set predictions. We demonstrate the feasibility of this compositional guarantee on retrieval-augmented generation (RAG) and on the consensus problem of blockchain price oracles. In closing, I will discuss interesting research directions related to compositional conformal prediction.
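
To make the notion of a conformal set concrete, here is a minimal sketch of standard split conformal prediction for classification; it is illustrative only, not the speaker's compositional construction, and the names softmax_cal, y_cal, softmax_test, and alpha are assumed for this example.

```python
import numpy as np

def conformal_set(softmax_cal, y_cal, softmax_test, alpha=0.1):
    """Split conformal prediction for classification.

    Returns one label set per test input; under exchangeability of
    calibration and test data, the true label is contained in the set
    with probability at least 1 - alpha.
    """
    n = len(y_cal)
    # Nonconformity score: 1 - softmax probability of the true label.
    scores = 1.0 - softmax_cal[np.arange(n), y_cal]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, q_level, method="higher")
    # A label enters the prediction set if its score is below the threshold;
    # a larger set signals higher predicted uncertainty.
    return [np.where(1.0 - p <= q)[0] for p in softmax_test]
```

The talk's question can then be read as: if every component emits a set with such a per-component guarantee, what coverage guarantee holds for the set produced by the composed system?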

[Biography]
Sangdon Park is an assistant professor at POSTECH GSAI/CSE. Previously, he was a postdoctoral researcher at the Georgia Institute of Technology, mentored by Taesoo Kim. He earned his Ph.D. in Computer and Information Science from the University of Pennsylvania in 2021, advised by Insup Lee and Osbert Bastani. His research focuses on designing trustworthy AI systems, spanning theory to implementation, with practical applications in computer security, computer vision, robotics, cyber-physical systems, and natural language processing.
