The KAIST Graduate School of Information Security will hold a colloquium on November 11 at 4:00 p.m. as detailed below. We look forward to your participation.
o Date : Tue, Nov 11, 2025, 16:00~
o Topic : LLM Alignment
o Speaker : Prof. Sangdon Park
o Venue : Bldg. N1, Room 201
※ Please arrive 5 minutes before the start time.
---
♣ Title: LLM Alignment
♣ Abstract
We live in a surprising age of generative AI. Performant generative AI has popularized applications in knowledge bases, web search engines, personalized agents, the arts, and computer security. But the brighter the light, the darker the shadow. Generative AI has been criticized for its misalignment issues, including hallucination, biased generation, and safety/security concerns. In particular, the hallucination of large language models (LLMs) undermines the reliability of widely deployed LLMs, for example by presenting falsified knowledge with high confidence. In this talk, I will share lessons learned from my efforts to build aligned LLMs with theoretical guarantees toward trustworthy AI.
♣ Bio
Sangdon Park is an assistant professor at POSTECH GSAI/CSE. His research focuses on designing trustworthy AI systems, spanning theory to implementation and considering practical applications in robotics and computer security. He serves as an area chair of premier machine learning conferences, including NeurIPS, ICLR, and ICML. Prior to joining POSTECH, he was a postdoctoral researcher at the Georgia Institute of Technology. He received his Ph.D. in Computer and Information Science from the University of Pennsylvania in 2021.
