On the Understanding of Vulnerability of Deep Learning and Beyond

Speaker

Yisen Wang, Department of Computer Science and Engineering, Shanghai Jiao Tong University

Time

2019.12.11 15:00-16:00

Venue

Room 306, No.5 Science Building

Abstract

Deep learning has become increasingly popular in the past few years, largely owing to a family of powerful models called deep neural networks (DNNs). With many stacked layers and millions of neurons, DNNs are capable of learning complex non-linear mappings, and have demonstrated performance approaching or even surpassing human level in a wide range of applications such as image classification, object detection, natural language processing, speech recognition, self-driving cars, game playing, and medical diagnosis. Despite this great success, DNNs have recently been found vulnerable to adversarial examples (or attacks): input instances slightly modified in a way that is intended to fool the model. This surprising weakness of DNNs has raised security and reliability concerns about deploying deep learning systems in safety-critical scenarios such as face recognition, autonomous driving, and medical diagnosis. Since its first discovery, this phenomenon has attracted a large volume of work on either attacking DNNs or defending them against such attacks. In this talk, I will introduce the adversarial phenomenon, explanations that have been proposed for it, and techniques that have been developed for both attack and defense.
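To make the notion of an adversarial example concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), one well-known attack of the general kind the talk covers; the talk itself does not specify this method, and the model, input, and epsilon names here are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x slightly so the model is more likely to misclassify it."""
    x_adv = x.clone().detach().requires_grad_(True)
    # Compute the loss of the model's prediction against the true label y.
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take a small step along the sign of the input gradient: this is often
    # enough to flip the prediction while the change stays visually negligible.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()

Here epsilon bounds the per-pixel perturbation, which is why the adversarial image looks essentially identical to the original while fooling the model.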