|[Seminar] Research on AI Security Issues and Adversarial Example Attacks - Prof. Hyun Kwon (Korea Military Academy)|
Next week, our Graduate School of Information Security will host Professor Hyun Kwon of the Korea Military Academy for a seminar on "Research on AI Security Issues and Adversarial Example Attacks," as detailed below.
※ To prevent the spread of COVID-19, the seminar will be held remotely via ZOOM.
= Details =
- Tuesday, March 23, 2021, 16:00~
※ Please be ready five minutes before the start time.
Access password: to be announced separately by email
Title: Research on AI Security Issues and Adversarial Example Attacks
Deep neural networks (DNNs) are widely used for image recognition, speech recognition, intrusion tolerance, natural language processing, and game playing. The security and safety of neural networks and machine learning receive considerable attention from the security research community. Adversarial examples arise in image classification: in an evasion attack, slightly transformed images are misclassified by a machine learning classifier, even when the changes are too small for a human to easily notice. Such an attack can cause a self-driving car to perform an unwanted action if a slight change is made to a road sign. Countermeasures against these attacks have been proposed, and subsequently, more advanced attacks were developed to defeat those countermeasures. This presentation will discuss adversarial example attacks and defenses for machine learning security.
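The evasion attack described above can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), a common way to craft adversarial examples. This is not material from the talk itself; the toy linear classifier, weights, and inputs below are illustrative assumptions, chosen only to show how a perturbation bounded by a small epsilon can flip a prediction.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM step: shift every component of x by +/- epsilon,
    following the sign of the supplied gradient."""
    return x + epsilon * np.sign(grad)

# Toy binary classifier (hypothetical): score = w . x, class 1 if score > 0.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])   # clean input, classified as class 1

# For this linear score the gradient w.r.t. x is w itself; to flip the
# prediction we step in the direction that decreases the score, i.e. -w.
epsilon = 0.2
x_adv = fgsm_perturb(x, -w, epsilon)

print(w @ x)                      # positive: original prediction (class 1)
print(w @ x_adv)                  # negative: prediction flipped
print(np.max(np.abs(x_adv - x)))  # perturbation size capped at epsilon
```

On a real DNN the gradient comes from backpropagating the loss through the network rather than from a closed form, but the mechanism is the same: a perturbation no larger than epsilon per pixel can change the classifier's output while remaining hard for a human to notice.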
Hyun Kwon’s research interests include machine learning security, adversarial examples, and intrusion-tolerant systems. He is currently an assistant professor at the Korea Military Academy. He received the B.S. degree in mathematics from the Korea Military Academy, South Korea, in 2010; the M.S. degree from the School of Computing, Korea Advanced Institute of Science and Technology (KAIST), in 2015; and the Ph.D. degree from the School of Computing, KAIST, in 2020.