Volume 13 | Issue 4
The authors propose a novel multimodal medical image fusion method based on deep convolutional neural networks (CNNs) and local spatial-domain modification. The source images are first fed into a Siamese CNN, which produces a weight map; this map is then processed by the Weighted Sum of Eight-neighborhood-based Modified Laplacian (WSEML) to produce new WSEML images. The CT and MRI images are then processed by the Weighted Local Energy (WLE) method. Finally, an activity-level evaluation based on local energy integrates the new WLE and WSEML images to extract critical information during the reconstruction process. Simulation results suggest that the proposed method captures more relevant information from the source images with increased visibility while minimizing artefacts in the fused image.
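The WSEML focus measure and the activity-based selection step can be illustrated with a minimal sketch. This is not the authors' implementation: the eight-neighborhood weight kernel, the boundary handling, and the simple max-activity fusion rule are all assumptions chosen for illustration; the actual method additionally combines the Siamese CNN weight map and the WLE images.

```python
import numpy as np
from scipy.ndimage import convolve

def modified_laplacian(img):
    """Modified Laplacian: |2I(x,y)-I(x-1,y)-I(x+1,y)| + |2I(x,y)-I(x,y-1)-I(x,y+1)|."""
    kern_x = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], dtype=float)
    kern_y = kern_x.T
    return (np.abs(convolve(img, kern_x, mode='nearest'))
            + np.abs(convolve(img, kern_y, mode='nearest')))

def wseml(img):
    """Weighted sum of the modified Laplacian over the eight-neighborhood.
    The weight kernel here is an assumed choice, not taken from the paper."""
    w = np.array([[1, 1, 1], [1, 2, 1], [1, 1, 1]], dtype=float)
    w /= w.sum()
    return convolve(modified_laplacian(img), w, mode='nearest')

def fuse(img_a, img_b):
    """Pixel-wise selection by higher WSEML activity (an assumed, simplified
    instance of activity-level evaluation)."""
    act_a, act_b = wseml(img_a), wseml(img_b)
    return np.where(act_a >= act_b, img_a, img_b)
```

Under this rule, pixels near strong edges (high local activity) are taken from the source image that carries that detail, while flat regions default to one source; the paper's full pipeline replaces this hard selection with the CNN-derived weight map and WLE-based integration.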