University of Bahrain
Scientific Journals

Improved Image Fusion Technique Using Convolutional Neural Networks and The Hybrid PCA-Guided Filter

Show simple item record

dc.contributor.author Jagtap, Nalini S.
dc.contributor.author Thepade, Sudeep D.
dc.date.accessioned 2023-02-28T20:02:48Z
dc.date.available 2023-02-28T20:02:48Z
dc.date.issued 2023-02-28
dc.identifier.issn 2210-142X
dc.identifier.uri https://journal.uob.edu.bh:443/handle/123456789/4765
dc.description.abstract Medical image fusion aims to combine multi-focus and multimodal medical data into a single image so that diagnostic features can be predicted more accurately. The outcomes of deep learning-based image processing can be visually appealing, but the interpretability of these outcomes in medical imaging remains a critical concern. This paper offers feature-level multi-focus, multi-exposure, and multimodal image fusion using a hybrid layer of Principal Component Analysis (PCA) and a Guided Filter (GF) to maximize anatomical detail and eliminate significant noise and artefacts. The proposed method employs a Convolutional Neural Network (CNN) for feature extraction. In the first stage, the original image is decomposed using PCA, which reduces its dimensionality while preserving the essential information in the picture and produces a revised weight map. In the second stage, a Guided Filter is applied to the PCA output to preserve edges and further enhance the features, reducing ringing and blurring effects. In the third stage, a pre-trained CNN creates a new weight map by extracting critical characteristics from the images in the input dataset. The output feature map combines the weight maps produced by the GF and the CNN, and is then fused with the reference picture to create the fused output image. The contributions of the developed method are: • To improve image quality by removing noise, ringing, and blurring. • To increase image quality by using the hybrid mechanism to extract more of the underlying critical features of the images [1]. The evaluation is based on three multimodal imaging datasets: CT-MRI, MRI-PET, and MRI-SPECT. Furthermore, the proposed method outperforms existing state-of-the-art techniques in terms of fusion quality. en_US
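The PCA-plus-guided-filter stages described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the CNN weight-map step is omitted, PCA here yields global (scalar) fusion weights rather than a per-pixel weight map, and the function names, window radius `r`, and regularizer `eps` are all assumptions for the sketch.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, edges replicated."""
    pad = np.pad(x, r, mode="edge")
    k = 2 * r + 1
    h, w = x.shape
    acc = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            acc += pad[i:i + h, j:j + w]
    return acc / (k * k)

def guided_filter(guide, src, r=2, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide` (He et al. style)."""
    m_i, m_p = box_mean(guide, r), box_mean(src, r)
    cov = box_mean(guide * src, r) - m_i * m_p
    var = box_mean(guide * guide, r) - m_i * m_i
    a = cov / (var + eps)            # local linear coefficient
    b = m_p - a * m_i
    return box_mean(a, r) * guide + box_mean(b, r)

def pca_weights(img1, img2):
    """Fusion weights from the leading eigenvector of the 2x2 covariance."""
    c = np.cov(np.stack([img1.ravel(), img2.ravel()]))
    _, vecs = np.linalg.eigh(c)
    v = np.abs(vecs[:, -1])          # eigenvector of the largest eigenvalue
    return v / v.sum()               # normalized so the weights sum to 1

def fuse(img1, img2, r=2, eps=1e-3):
    """PCA-weighted combination, refined by a guided-filter pass."""
    w = pca_weights(img1, img2)
    base = w[0] * img1 + w[1] * img2
    # Guided filtering suppresses ringing/blurring near edges; the first
    # source is used as the guidance image here (an assumption of the sketch).
    return guided_filter(img1, base, r, eps)
```

In the paper's full pipeline, the guided-filter output would be combined with a CNN-derived weight map before producing the fused image; this sketch stops at the hybrid PCA-GF stage.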
dc.language.iso en en_US
dc.publisher University of Bahrain en_US
dc.subject image fusion, multimodal image, PCA en_US
dc.title Improved Image Fusion Technique Using Convolutional Neural Networks and The Hybrid PCA-Guided Filter en_US
dc.type Article en_US
dc.identifier.doi http://dx.doi.org/10.12785/ijcds/130161 en
dc.contributor.authoraffiliation Pimpri Chinchwad College of Engineering, Dr D Y Patil Institute of Engineering Management and Research en_US
dc.contributor.authoraffiliation Pimpri Chinchwad College of Engineering en_US
dc.source.title International Journal of Computing and Digital Systems en_US
dc.abbreviatedsourcetitle IJCDS en_US