University of Bahrain
Scientific Journals

Variance Adaptive Optimization for the Deep Learning Applications

dc.contributor.author Jadhav, Nagesh
dc.contributor.author Sugandhi, Rekha
dc.contributor.author Pawar, Rajendra
dc.contributor.author Shirke, Swati
dc.contributor.author Nalavade, Jagannath
dc.date.accessioned 2024-04-26T16:58:27Z
dc.date.available 2024-04-26T16:58:27Z
dc.date.issued 2024-04-26
dc.identifier.issn 2210-142X
dc.identifier.uri https://journal.uob.edu.bh:443/handle/123456789/5631
dc.description.abstract Deep learning is the branch of artificial intelligence that learns by training deep neural networks. Optimization is the iterative process of improving a deep neural network's performance by lowering its loss. However, optimizing deep neural networks is a non-trivial and time-consuming task. Deep learning is used in many applications, ranging from object detection, computer vision, and image classification to natural language processing, so careful optimization of deep neural networks is an essential part of application development. In the literature, many optimization algorithms, such as stochastic gradient descent, adaptive moment estimation, adaptive gradients, and root mean square propagation, have been employed to optimize deep neural networks. However, optimal convergence and generalization to unseen data remain an issue for most conventional approaches. In this paper, we propose a variance-adaptive optimization (VAdam) technique based on the adaptive moment estimation (Adam) optimizer to enhance convergence and generalization in deep learning. We use gradient variance as a signal to adaptively change the learning rate, resulting in improved convergence time and generalization accuracy. Experiments on various datasets demonstrate the effectiveness of the proposed optimizer in terms of convergence and generalization compared to existing optimizers. en_US
dc.language.iso en en_US
dc.publisher University of Bahrain en_US
dc.subject Deep Neural Networks, Deep Learning, Optimization, Variance, Convergence. en_US
dc.title Variance Adaptive Optimization for the Deep Learning Applications en_US
dc.identifier.doi http://dx.doi.org/10.12785/ijcds/XXXXXX
dc.volume 16 en_US
dc.issue 1 en_US
dc.pagestart 1 en_US
dc.pageend 10 en_US
dc.contributor.authorcountry India en_US
dc.contributor.authorcountry India en_US
dc.contributor.authorcountry India en_US
dc.contributor.authorcountry India en_US
dc.contributor.authorcountry India en_US
dc.contributor.authoraffiliation Department of Computer Science & Engineering, MIT Art, Design and Technology University en_US
dc.contributor.authoraffiliation Department of Information Technology, MIT Art, Design and Technology University en_US
dc.contributor.authoraffiliation Department of Computer Science & Engineering, MIT Art, Design and Technology University en_US
dc.contributor.authoraffiliation SOET, Pimpri Chinchwad University en_US
dc.contributor.authoraffiliation Department of Computer Science & Engineering, MIT Art, Design and Technology University en_US
dc.source.title International Journal of Computing and Digital Systems en_US
dc.abbreviatedsourcetitle IJCDS en_US
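The abstract describes the idea only at a high level: VAdam builds on Adam and uses gradient variance to adapt the learning rate. The record does not give the actual update rule, so the sketch below is a hypothetical interpretation, not the authors' method: the names `vadam_step` and `kappa`, and the damping form `lr / (1 + kappa * Var[g])`, are all assumptions made for illustration on top of the standard Adam moment updates.

```python
import numpy as np

def vadam_step(theta, grad, m, v, t, lr=0.05, beta1=0.9, beta2=0.999,
               eps=1e-8, kappa=1.0):
    """One hypothetical variance-adaptive Adam step.

    Standard Adam first/second moment updates, with the base learning rate
    damped by the variance of the current gradient (assumed form: higher
    gradient variance -> smaller effective step).
    """
    m = beta1 * m + (1 - beta1) * grad        # first moment estimate (Adam)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment estimate (Adam)
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    g_var = np.var(grad)                      # variance across gradient components
    lr_t = lr / (1.0 + kappa * g_var)         # variance-adaptive learning rate (assumption)
    theta = theta - lr_t * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta.
theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 501):
    grad = 2 * theta
    theta, m, v = vadam_step(theta, grad, m, v, t)
```

On this toy quadratic the iterate is driven toward the minimum at the origin; early on, the large spread between gradient components shrinks the effective step, and as the gradient variance decays the step size recovers toward the base rate.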

