1. Zhang, Y.; Bai, S.; Jiang, B.; Li, K.; Dong, Z.; Pan, F. Modeling the correlation between texture characteristics and tensile properties of AZ31 magnesium alloy based on the artificial neural networks. J. Mater. Res. Technol. 2023, 24, 5286-97.
2. Bai, J.; Yang, Y.; Wen, C.; et al. Applications of magnesium alloys for aerospace: a review. J. Magnes. Alloys. 2023, 11, 3609-19.
3. Tian, P.; Liu, X. Surface modification of biodegradable magnesium and its alloys for biomedical applications. Regen. Biomater. 2015, 2, 135-51.
4. Yue, X.; Shang, J.; Zhang, M.; Hur, B.; Ma, X. Additive manufacturing of high porosity magnesium scaffolds with lattice structure and random structure. Mater. Sci. Eng. A. 2022, 859, 144167.
5. Abazari, S.; Shamsipur, A.; Bakhsheshi-Rad, H. R.; et al. Magnesium-based nanocomposites: a review from mechanical, creep and fatigue properties. J. Magnes. Alloys. 2023, 11, 2655-87.
6. Wang, S.; Pan, H.; Xie, D.; et al. Grain refinement and strength enhancement in Mg wrought alloys: a review. J. Magnes. Alloys. 2023, 11, 4128-45.
7. V, K.; Kumar, B. N.; Kumar, S. S.; M, V. Magnesium role in additive manufacturing of biomedical implants - challenges and opportunities. Addit. Manuf. 2022, 55, 102802.
8. Katunin, A.; Wronkowicz-Katunin, A.; Dragan, K. Impact damage evaluation in composite structures based on fusion of results of ultrasonic testing and X-ray computed tomography. Sensors 2020, 20, 1867.
9. Fu, Y.; Downey, A. R.; Yuan, L.; Zhang, T.; Pratt, A.; Balogun, Y. Machine learning algorithms for defect detection in metal laser-based additive manufacturing: a review. J. Manuf. Process. 2022, 75, 693-710.
10. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image segmentation using deep learning: a survey. IEEE. Trans. Pattern. Anal. Mach. Intell. 2022, 44, 3523-42.
11. Li, K.; Ma, R.; Qin, Y.; et al. A review of the multi-dimensional application of machine learning to improve the integrated intelligence of laser powder bed fusion. J. Mater. Process. Technol. 2023, 318, 118032.
12. Gao, G.; Xu, G.; Yu, Y.; Xie, J.; Yang, J.; Yue, D. MSCFNet: a lightweight network with multi-scale context fusion for real-time semantic segmentation. IEEE. Trans. Intell. Transport. Syst. 2022, 23, 25489-99.
13. Hong, D.; Yao, J.; Meng, D.; Xu, Z.; Chanussot, J. Multimodal GANs: toward crossmodal hyperspectral–multispectral image segmentation. IEEE. Trans. Geosci. Remote. Sensing. 2021, 59, 5103-13.
14. Wang, L.; Li, R.; Zhang, C.; et al. UNetFormer: a UNet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery. ISPRS. J. Photogramm. Remote. Sens. 2022, 190, 196-214.
15. Lu, W.; Zhang, Z.; Nguyen, M. A lightweight CNN–transformer network with Laplacian loss for low-altitude UAV imagery semantic segmentation. IEEE. Trans. Geosci. Remote. Sensing. 2024, 62, 1-20.
16. Wu, J.; Liu, B.; Zhang, H.; He, S.; Yang, Q. Fault detection based on fully convolutional networks (FCN). J. Mar. Sci. Eng. 2021, 9, 259.
17. Papadeas, I.; Tsochatzidis, L.; Amanatiadis, A.; Pratikakis, I. Real-time semantic image segmentation with deep learning for autonomous driving: a survey. Appl. Sci. 2021, 11, 8802.
18. Chu, P.; Li, Z.; Lammers, K.; Lu, R.; Liu, X. Deep learning-based apple detection using a suppression mask R-CNN. Pattern. Recognit. Lett. 2021, 147, 206-11.
19. Yang, J.; Tu, J.; Zhang, X.; Yu, S.; Zheng, X. TSE DeepLab: an efficient visual transformer for medical image segmentation. Biomed. Signal. Process. Control. 2023, 80, 104376.
20. Lin, K.; Zhao, H.; Lv, J.; et al. Face detection and segmentation based on improved mask R-CNN. Discrete. Dyn. Nat. Soc. 2020, 2020, 1-11.
21. Qiong, L.; Chaofan, L.; Jinnan, T.; Liping, C.; Jianxiang, S. Medical image segmentation based on frequency domain decomposition SVD linear attention. Sci. Rep. 2025, 15, 2833.
22. Banjanović-Mehmedović, L.; Husaković, A.; Gurdić, R. A.; Prlja, N.; Karabegović, I. Advancements in robotic intelligence: the role of computer vision, DRL, transformers and LLMs. 2024.
23. Kolides, A.; Nawaz, A.; Rathor, A.; et al. Artificial intelligence foundation and pre-trained models: fundamentals, applications, opportunities, and social impacts. Simul. Model. Pract. Theory. 2023, 126, 102754.
24. Han, D.; Pan, X.; Han, Y.; Song, S.; Huang, G. FLatten transformer: vision transformer using focused linear attention. In: 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France. Oct 01-06, 2023. IEEE, 2023; pp. 5961-71.
25. Chen, J.; Mei, J.; Li, X.; et al. 3D TransUNet: advancing medical image segmentation through vision transformers. arXiv 2023, arXiv:2310.07781. Available online: https://doi.org/10.48550/arXiv.2310.07781. (accessed on 12 Mar 2025)
26. Ozcan, A.; Tosun, Ö.; Donmez, E.; Sanwal, M. Enhanced-TransUNet for ultrasound segmentation of thyroid nodules. Biomed. Signal. Process. Control. 2024, 95, 106472.
27. Jain, J.; Li, J.; Chiu, M.; Hassani, A.; Orlov, N.; Shi, H. OneFormer: one transformer to rule universal image segmentation. arXiv 2022, arXiv:2211.06220. Available online: https://doi.org/10.48550/arXiv.2211.06220. (accessed on 12 Mar 2025)
28. Chen, J.; Mei, J.; Li, X.; et al. TransUNet: rethinking the U-Net architecture design for medical image segmentation through the lens of transformers. Med. Image. Anal. 2024, 97, 103280.
29. Chen, J.; Lu, Y.; Yu, Q.; et al. TransUNet: transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306. Available online: https://doi.org/10.48550/arXiv.2102.04306. (accessed on 12 Mar 2025)
30. Anand, V.; Kanhangad, V. PoreNet: CNN-based pore descriptor for high-resolution fingerprint recognition. IEEE. Sensors. J. 2020, 20, 9305-13.
31. Al-Zaidawi, S. M. K.; Bosse, S. A pore classification system for the detection of additive manufacturing defects combining machine learning and numerical image analysis. Eng. Proc. 2023, 58, 122.
32. Budd, S.; Robinson, E. C.; Kainz, B. A survey on active learning and human-in-the-loop deep learning for medical image analysis. Med. Image. Anal. 2021, 71, 102062.
33. Zheng, S.; Song, Y.; Leung, T.; Goodfellow, I. Improving the robustness of deep neural networks via stability training. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA. Jun 27-30, 2016. IEEE, 2016; pp. 4480-8.
34. Michaelis, C.; Mitzkus, B.; Geirhos, R.; et al. Benchmarking robustness in object detection: autonomous driving when winter is coming. arXiv 2019, arXiv:1907.07484. Available online: https://doi.org/10.48550/arXiv.1907.07484. (accessed on 12 Mar 2025)
35. Farrukh, Y. A.; Wali, S.; Khan, I.; Bastian, N. D. SeNet-I: an approach for detecting network intrusions through serialized network traffic images. Eng. Appl. Artif. Intell. 2023, 126, 107169.
36. Huang, Y.; Shi, P.; He, H.; He, H.; Zhao, B. Senet: spatial information enhancement for semantic segmentation neural networks. Vis. Comput. 2024, 40, 3427-40.
37. Zhao, Y.; Jiang, Y.; Huang, L.; Xia, K. SEF-UNet: advancing abdominal multi-organ segmentation with SEFormer and depthwise cascaded upsampling. PeerJ. Comput. Sci. 2024, 10, e2238.
38. Cai, Z.; Liu, S.; Wang, G.; Ge, Z.; Zhang, X.; Huang, D. Align-DETR: enhancing end-to-end object detection with aligned loss. arXiv 2023, arXiv:2304.07527. Available online: https://doi.org/10.48550/arXiv.2304.07527. (accessed on 12 Mar 2025)
39. Nong, X.; Luo, X.; Lin, S.; Ruan, Y.; Ye, X. Multimodal deep neural network-based sensor data anomaly diagnosis method for structural health monitoring. Buildings 2023, 13, 1976.
40. Alammar, Z.; Alzubaidi, L.; Zhang, J.; Li, Y.; Lafta, W.; Gu, Y. Deep transfer learning with enhanced feature fusion for detection of abnormalities in X-ray images. Cancers 2023, 15, 4007.
41. Tang, Q.; Liang, J.; Zhu, F. A comparative review on multi-modal sensors fusion based on deep learning. Signal. Process. 2023, 213, 109165.
42. Fang, Y.; Wang, X.; Wu, R.; Liu, W. What makes for hierarchical vision transformer? IEEE. Trans. Pattern. Anal. Mach. Intell. 2023, 45, 12714-20.