Seminar Announcement

Dual cone gradient descent for multi-objective optimization and its applications

Prof. Dong-Young Lim (UNIST) / 2025.05.28

[Abstract]

Recent AI models are no longer limited to a single task. They are increasingly designed to integrate various aspects, such as general-purpose capabilities, user needs, social regulations, and technical features. As a result, AI training often involves optimizing multiple objectives simultaneously. However, current optimization methods that rely on weighted sums frequently lead to issues such as gradient conflicts, imbalance across objectives, and lack of controllability. In this seminar, we introduce a new framework for multi-objective optimization called Dual Cone Gradient Descent (DCGD), which ensures that the updated gradient falls within a dual cone. We explore the geometric and theoretical properties of DCGD and demonstrate its potential in applications such as Physics-Informed Neural Networks and Machine Unlearning.
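
The core geometric idea can be sketched briefly. For two objectives with gradients g1 and g2, requiring the update direction to lie in the dual cone of the gradients means it has a non-negative inner product with both g1 and g2, so a descent step does not increase either objective to first order. The Python snippet below is a minimal illustration of this dual cone condition only, not the algorithm presented in the talk; the bisector of the normalized gradients used in the conflicting case is a simplifying choice made here for illustration.

import numpy as np

def dual_cone_update(g1, g2, eps=1e-12):
    # Illustrative sketch only: return a direction d with d @ g1 >= 0 and
    # d @ g2 >= 0, i.e. d lies in the dual cone of {g1, g2}, so the step
    # theta <- theta - lr * d does not increase either objective to first order.
    if g1 @ g2 >= 0:
        # Non-conflicting gradients: the sum already satisfies both inequalities.
        return g1 + g2
    # Conflicting gradients: the bisector of the normalized gradients satisfies
    # b @ g_i = ||g_i|| * (1 + cos(angle between g1 and g2)) >= 0 for i = 1, 2.
    return g1 / (np.linalg.norm(g1) + eps) + g2 / (np.linalg.norm(g2) + eps)

# Toy usage: two conflicting quadratics whose Pareto set is the segment (1,0)-(0,1).
theta = np.array([1.0, 1.0])
for _ in range(100):
    g1 = 2 * (theta - np.array([1.0, 0.0]))  # gradient of ||theta - (1, 0)||^2
    g2 = 2 * (theta - np.array([0.0, 1.0]))  # gradient of ||theta - (0, 1)||^2
    theta = theta - 0.1 * dual_cone_update(g1, g2)
print(theta)  # converges near the Pareto-optimal point (0.5, 0.5)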

[Biography]

Prof. Dong-Young Lim received his B.S., M.S., and Ph.D. degrees from the Department of Industrial Engineering at KAIST, where his doctoral research focused on financial engineering. He was then selected as a Marie Sklodowska-Curie Fellow and studied the theory of AI learning in the School of Mathematics at the University of Edinburgh, UK, and he has been with the Department of Industrial Engineering and the Graduate School of Artificial Intelligence at UNIST since June 2022. In the summer of 2024, he served as an invited researcher in the optimization theory group at the Alan Turing Institute in the UK. His main research interests are the theory and applications of AI grounded in probability and optimization; his work has been published in journals such as JMLR, Mathematics of Operations Research, and TMLR, and presented at major international AI conferences including NeurIPS, ICML, ICLR, and AAAI.
