Residential College | false |
Status | Published
Title | Adversarial example generation with adaptive gradient search for single and ensemble deep neural network
Authors | Xiao,Yatie; Pun,Chi Man; Liu,Bo
Date Issued | 2020-08-01
Source Publication | Information Sciences |
ISSN | 0020-0255 |
Volume | 528
Pages | 147-167
Abstract | Deep Neural Networks (DNNs) have achieved remarkable success in specific domains, such as computer vision, audio processing, and natural language processing. However, research indicates that deep neural networks face many security issues (e.g., adversarial attacks, information forgery). In the field of image classification, adversarial samples generated by specific adversarial attack strategies can easily fool deep neural classification models into making unreliable predictions. We find that such adversarial attack algorithms induce large-scale pixel modifications in crafted images to maintain the effectiveness of the adversarial attack. Massive pixel modifications change the inherent characteristics of the generated examples and cause large image distortion. To address these issues, we introduce an adaptive gradient-based adversarial attack method named Adaptive Iteration Fast Gradient Method (AI-FGM), which seeks the input's preceding gradient and adaptively adjusts the accumulation of the perturbed entity when performing adversarial attacks. By maximizing a specific loss for generating adaptive gradient-based entities, AI-FGM applies several gradient-based operators to the clean input to map the crafted sample directly to the corresponding prediction. AI-FGM reduces unnecessary accumulation of gradient-based entities when crafting the adversary through its adaptive gradient-seeking strategy. Experimental results show that AI-FGM outperforms other gradient-based adversarial attackers in attacking deep neural classification models, achieving fewer pixel modifications (AMP is 0.0017 with L norm in fooling Inception-v3) and a higher success rate against deep neural classification networks under white-box and black-box attack strategies on public image datasets of different resolutions.
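For intuition, below is a minimal, hypothetical sketch of an iterative gradient-sign attack of the kind the abstract describes. It is not the authors' AI-FGM: the adaptive step-size rule (shrinking the step when the loss stops growing) is an illustrative stand-in for the paper's adaptive gradient-seeking strategy, and the function name, parameters, and the assumed PyTorch `model`, `x`, `y` inputs are placeholders.

import torch
import torch.nn.functional as F

def iterative_gradient_attack(model, x, y, epsilon=0.03, steps=10):
    # Craft an adversarial example for input batch x with true labels y by
    # repeatedly ascending the classification loss along the gradient sign.
    model.eval()
    x_adv = x.clone().detach()
    alpha = epsilon / steps          # initial per-step perturbation size
    prev_loss = None
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)   # loss to maximize (untargeted attack)
        grad, = torch.autograd.grad(loss, x_adv)
        # Hypothetical adaptive rule (assumption, not from the paper): shrink the
        # step once the loss stops growing, limiting unnecessary pixel modification.
        if prev_loss is not None and loss.item() <= prev_loss:
            alpha *= 0.5
        prev_loss = loss.item()
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                            # gradient-sign ascent step
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)  # project into the L-inf ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                           # keep a valid pixel range
    return x_adv.detach()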
Keyword | Deep Neural Networks; Adversarial Attack; Adaptive Gradient; Perturbation
DOI | 10.1016/j.ins.2020.04.022 |
Indexed By | SCIE |
Language | English
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Information Systems |
WOS ID | WOS:000532827200009 |
Scopus ID | 2-s2.0-85083338932 |
Document Type | Journal article |
Collection | DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE |
Corresponding Author | Pun,Chi Man |
Affiliation | Department of Computer and Information Science, University of Macau, Macau, 999078, Macao
First Author Affiliation | University of Macau
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Xiao, Yatie, Pun, Chi Man, Liu, Bo. Adversarial example generation with adaptive gradient search for single and ensemble deep neural network[J]. Information Sciences, 2020, 528, 147-167.
APA | Xiao, Yatie, Pun, Chi Man, & Liu, Bo (2020). Adversarial example generation with adaptive gradient search for single and ensemble deep neural network. Information Sciences, 528, 147-167.
MLA | Xiao, Yatie, et al. "Adversarial example generation with adaptive gradient search for single and ensemble deep neural network". Information Sciences 528 (2020): 147-167.
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.