REFERENCES
1. Li L, Wang H, Li C. A review of deep learning fusion methods for infrared and visible images. Infrared Laser Eng. 2022;51:20220125.
2. Shen Y, Huang C, Huang F, Li J, Zhu M, Wang S. Research progress of infrared and visible image fusion technology. Infrared Laser Eng. 2021;50:20200467.
3. Liu B, Dong D, Chen J. Image fusion method based on directional contrast pyramid. J Quantum Electron. 2017;34:405-13. Available from: https://m.researching.cn/articles/OJb55f12fb8e8c310f. [Last accessed on 26 Dec 2024].
4. Meng F, Song M, Guo B, Shi R, Shan D. Image fusion based on object region detection and non-subsampled contourlet transform. Comput Electr Eng. 2017;62:375-83.
5. Zhang Y, Qiu Q, Liu H, Ma X, Shao J. Brain image fusion based on multi-scale decomposition and improved sparse representation. J Shaanxi Univ Technol. 2023;38:39-47. Available from: https://kns.cnki.net/kcms2/article/abstract?v=8XtZWovJaIRKW_m-UDySgTjWyqyco1C29tm9qtlAQukS1yBmvKDlfsLyujV75oXhuqr7_fIir8qYF-i4Vh6zcRFxkf38gN_JP301fxZmMDCamAZzIfynKMzcrepn3ta_QzURcktRLBXYwBhm5QweFEojPKfTZQ3aUF62LXfeTAwfYBi0SoRJzwD8WebuubMP&uniplatform=NZKPT&language=CHS. [Last accessed on 26 Dec 2024].
6. Yang P, Gao L, Zi L. Image fusion of convolutional sparsity and detail saliency map analysis. J Image Graph. 2021;26:2433-49. Available from: https://kns.cnki.net/kcms2/article/abstract?v=8XtZWovJaIQhF4EB97rzeF9qazTDbDP00WW97CVhjFMlUYqfPZElERlDygQxUOVyCEdhfJJfqK-SpxKnGhI8gRrOD41-g36P17UI3EDaxNoeNi_NkrjEJ4YYJFVx-S54oABS3i1gJJJ4sLwLa2QTElcwweP6dI7weqH_sBywZEIcq39PaojJ9iBeJoq1HEu9S4wxRERmaeNvPvURIk72CerLzBqE0KI3vlAQZ5RHbC-1Y6hWdXw16Q==&uniplatform=NZKPT&language=CHS. [Last accessed on 26 Dec 2024].
7. Chen H, Deng L, Zhu L, Dong M. ECFuse: edge-consistent and correlation-driven fusion framework for infrared and visible image fusion. Sensors. 2023;23:8071.
8. Min L, Cao S, Zhao H, Liu P. Infrared and visible image fusion using improved generative adversarial networks. Infrared Laser Eng. 2022;51:20210291.
9. Liu Y, Chen X, Peng H, Wang Z. Multi-focus image fusion with a deep convolutional neural network. Inform Fusion. 2017;36:191-207.
10. Li H, Wu XJ. DenseFuse: a fusion approach to infrared and visible images. IEEE Trans Image Process. 2018;28:2614-23.
11. Li H, Wu XJ, Durrani T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models. IEEE Trans Instrum Meas. 2020;69:9645-56.
12. Chang Z, Feng Z, Yang S, Gao Q. AFT: adaptive fusion transformer for visible and infrared images. IEEE Trans Image Process. 2023;32:2077-92.
13. Goodfellow IJ, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks. Commun ACM. 2020;63:139-44.
14. Ma J, Yu W, Liang P, Li C, Jiang J. FusionGAN: a generative adversarial network for infrared and visible image fusion. Inform Fusion. 2018;48:11-26.
15. Ma J, Zhang H, Shao Z, Liang P, Xu H. GANMcC: a generative adversarial network with multiclassification constraints for infrared and visible image fusion. IEEE Trans Instrum Meas. 2020;70:1-14.
16. Ma J, Xu H, Jiang J, Mei X, Zhang XP. DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans Image Process. 2020;29:4980-95.
17. Zhou H, Hou J, Zhang Y, Ma J, Ling H. Unified gradient- and intensity-discriminator generative adversarial network for image fusion. Inform Fusion. 2022;88:184-201.
18. Rao D, Xu T, Wu XJ. TGFuse: an infrared and visible image fusion approach based on transformer and generative adversarial network. IEEE Trans Image Process. 2023.
19. Li H, Cen Y, Liu Y, Chen X, Yu Z. Different input resolutions and arbitrary output resolution: a meta learning-based deep framework for infrared and visible image fusion. IEEE Trans Image Process. 2021;30:4070-83.
20. Xu X, Shen Y, Han S. Dense-FG: a fusion GAN model by using densely connected blocks to fuse infrared and visible images. Appl Sci. 2023;13:4684.
21. Yi Y, Li Y, Du J, Wang S. An infrared and visible image fusion method based on improved GAN with dropout layer. In: The Proceedings of the 18th Annual Conference of China Electrotechnical Society. Springer; 2024. pp. 1-8.
22. Yin H, Xiao J, Chen H. CSPA-GAN: a cross-scale pyramid attention GAN for infrared and visible image fusion. IEEE Trans Instrum Meas. 2023;72:1-11.
23. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2014. pp. 580-7. Available from: https://openaccess.thecvf.com/content_cvpr_2014/html/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.html. [Last accessed on 26 Dec 2024].
24. He K, Zhang X, Ren S, Sun J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans Pattern Anal Mach Intell. 2015;37:1904-16.
25. Girshick R. Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision; 2015. pp. 1440-8. Available from: https://openaccess.thecvf.com/content_iccv_2015/papers/Girshick_Fast_R-CNN_ICCV_2015_paper.pdf. [Last accessed on 26 Dec 2024].
26. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016. pp. 779-88. Available from: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Redmon_You_Only_Look_CVPR_2016_paper.pdf. [Last accessed on 26 Dec 2024].
27. Liu W, Anguelov D, Erhan D, et al. SSD: single shot multibox detector. In: Computer Vision - ECCV 2016: 14th European Conference; 2016 Oct 11-14; Amsterdam, The Netherlands. Springer; 2016. pp. 21-37.
28. Lin TY, Dollár P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 2117-25. Available from: https://openaccess.thecvf.com/content_cvpr_2017/papers/Lin_Feature_Pyramid_Networks_CVPR_2017_paper.pdf. [Last accessed on 26 Dec 2024].
29. Lin TY, Goyal P, Girshick R, He K, Dollár P. Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision; 2017. pp. 2980-8. Available from: https://openaccess.thecvf.com/content_ICCV_2017/papers/Lin_Focal_Loss_for_ICCV_2017_paper.pdf. [Last accessed on 26 Dec 2024].
30. Zhang Y, Tian Y, Kong Y, Zhong B, Fu Y. Residual dense network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018. pp. 2472-81. Available from: https://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Residual_Dense_Network_CVPR_2018_paper.pdf. [Last accessed on 26 Dec 2024].
31. Gao SH, Cheng MM, Zhao K, Zhang XY, Yang MH, Torr P. Res2Net: a new multi-scale backbone architecture. IEEE Trans Pattern Anal Mach Intell. 2019;43:652-62.
32. Kim Y, Koh YJ, Lee C, Kim S, Kim CS. Dark image enhancement based on pairwise target contrast and multi-scale detail boosting. In: 2015 IEEE International Conference on Image Processing (ICIP); 2015 Sep 27-30; Quebec City, Canada. IEEE; 2015. pp. 1404-8.
33. Xu H, Ma J, Jiang J, Guo X, Ling H. U2Fusion: a unified unsupervised image fusion network. IEEE Trans Pattern Anal Mach Intell. 2020;44:502-18.
34. Zhao Z, Xu S, Zhang C, Liu J, Li P, Zhang J. DIDFuse: deep image decomposition for infrared and visible image fusion. arXiv 2020. arXiv:2003.09210. Available from: https://doi.org/10.48550/arXiv.2003.09210. [Last accessed on 26 Dec 2024].