
RDMAA: Robust Defense Model against Adversarial Attacks in Deep Learning for Cancer Diagnosis


dc.contributor.author A. Abd El-Aziz, Atrab
dc.contributor.author A. El-Khoribi, Reda
dc.contributor.author Eldeen Khalifa, Nour
dc.date.accessioned 2024-03-08T21:17:01Z
dc.date.available 2024-03-08T21:17:01Z
dc.date.issued 2024-03-10
dc.identifier.issn 2210-142X
dc.identifier.uri https://journal.uob.edu.bh:443/handle/123456789/5501
dc.description.abstract Attacks against deep learning (DL) models are considered a significant security threat. Although DL, and deep convolutional neural networks (CNNs) in particular, has shown extraordinary success in a wide range of medical applications, recent studies have proved that these models are vulnerable to adversarial attacks. Adversarial attacks are techniques that add small, crafted perturbations to input images that are practically imperceptible to the human eye yet cause the network to misclassify. To address these threats, this paper introduces a novel defense technique against white-box adversarial attacks for DL-based cancer diagnosis, called the Robust Defense Model against Adversarial Attacks (RDMAA), based on fine-tuning a CNN with the weights of a pre-trained deep convolutional autoencoder (DCAE). Before adversarial examples are fed to the classifier, the RDMAA model is trained to reconstruct the perturbed input samples. The weights of the trained RDMAA are then used to fine-tune the CNN-based cancer diagnosis models. The fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks are applied against three DL cancer modalities (lung nodule X-ray, leukemia microscopy, and brain tumor magnetic resonance imaging (MRI)) for binary and multiclass labels. The experimental results show that, under attack, accuracy dropped to 35% and 40% for X-rays, 36% and 66% for microscopic images, and 70% and 77% for MRI. In contrast, RDMAA exhibited substantial improvement, reaching accuracies of 88% and 83% for X-rays, 89% and 87% for microscopic images, and 93% for brain MRI. RDMAA is also compared with a common alternative defense, adversarial training, and outperforms it. The results show that DL-based cancer diagnosis models are extremely vulnerable to adversarial attacks: even imperceptible perturbations are enough to fool a model. The proposed RDMAA model provides a solid foundation for developing more robust and accurate medical DL models. en_US
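
The abstract names two standard white-box attacks (FGSM and PGD) and a defense built on reusing DCAE encoder weights. The sketches below illustrate these mechanisms in PyTorch; they are minimal sketches based only on the abstract, and every architecture choice, class name, and hyperparameter (epsilon, alpha, step count, channel widths, grayscale input) is an illustrative assumption, not the paper's actual configuration.

```python
# Minimal sketches of the FGSM and PGD attacks described in the abstract.
# Epsilon/alpha/step values are assumed, not taken from the paper.
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, epsilon=0.03):
    """One-step attack: x_adv = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()  # move pixels to raise the loss
    return adv.clamp(0, 1).detach()              # keep pixels in valid range

def pgd_attack(model, images, labels, epsilon=0.03, alpha=0.007, steps=10):
    """Iterated FGSM steps, projected back onto the epsilon-ball around x."""
    orig = images.clone().detach()
    adv = orig.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = orig + (adv - orig).clamp(-epsilon, epsilon)  # project to the ball
        adv = adv.clamp(0, 1)
    return adv.detach()
```

The defense, as we read the abstract, has two stages: a DCAE is first trained to map perturbed images back to their clean versions, and its encoder weights then initialize the convolutional stack of the diagnosis CNN before fine-tuning. A hypothetical sketch of that wiring, with assumed layer sizes:

```python
class DCAE(nn.Module):
    """Illustrative deep convolutional autoencoder, trained on
    (perturbed image -> clean image) pairs, e.g. with an MSE loss."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # assumes grayscale input
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class DiagnosisCNN(nn.Module):
    """Classifier whose feature extractor is initialized from the trained
    DCAE encoder, then fine-tuned end-to-end on the diagnosis task."""
    def __init__(self, dcae, num_classes=2):
        super().__init__()
        self.features = dcae.encoder  # reuse the pre-trained encoder weights
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):
        return self.head(self.features(x))
```

In this reading, the reconstruction pre-training forces the shared encoder to learn perturbation-tolerant features, which is what would make the fine-tuned classifier harder to fool than one trained from scratch; the actual RDMAA architecture and training schedule are given in the paper itself.
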
dc.language.iso en en_US
dc.publisher University of Bahrain en_US
dc.subject Adversarial Attacks, Cancer Diagnosis, Deep Learning, Deep Convolutional Autoencoder en_US
dc.title RDMAA: Robust Defense Model against Adversarial Attacks in Deep Learning for Cancer Diagnosis en_US
dc.identifier.doi http://dx.doi.org/10.12785/ijcds/150190
dc.volume 15 en_US
dc.issue 1 en_US
dc.pagestart 1273 en_US
dc.pageend 1287 en_US
dc.contributor.authorcountry Egypt en_US
dc.contributor.authorcountry Egypt en_US
dc.contributor.authorcountry Egypt en_US
dc.contributor.authoraffiliation Department of Information Technology, Faculty of Computers and Information, KafrelSheikh University en_US
dc.contributor.authoraffiliation Department of Information Technology, Faculty of Computers and Artificial Intelligence, Cairo University en_US
dc.contributor.authoraffiliation Department of Information Technology, Faculty of Computers and Artificial Intelligence, Cairo University en_US
dc.source.title International Journal of Computing and Digital Systems en_US
dc.abbreviatedsourcetitle IJCDS en_US

