REFERENCES
1. Došilović FK, Brčić M, Hlupić N. Explainable artificial intelligence: a survey. In: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). IEEE; 2018. pp. 210–15.
2. Markus AF, Kors JA, Rijnbeek PR. The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. J Biomed Inform 2021;113:103655.
3. Ribeiro MT, Singh S, Guestrin C. Model-agnostic interpretability of machine learning. arXiv preprint arXiv:1606.05386; 2016.
4. Gilad-Bachrach R, Navot A, Tishby N. An information theoretic tradeoff between complexity and accuracy. In: Learning Theory and Kernel Machines. Springer; 2003. pp. 595–609.
5. Linardatos P, Papastefanopoulos V, Kotsiantis S. Explainable AI: a review of machine learning interpretability methods. Entropy 2020;23:18.
6. de Bruijn H, Warnier M, Janssen M. The perils and pitfalls of explainable AI: strategies for explaining algorithmic decision-making. Gov Inf Q 2022;39:101666.
7. Khalid A. A swarm of Cruise robotaxis blocked San Francisco traffic for hours; 2022. Available from: https://www.engadget.com/cruise-driverless-taxis-blocked-san-francisco-traffic-for-hours-robotaxi-gm-204000451.html. [Last accessed on 22 Dec 2022].
8. Djordjević V, Stojanović V, Pršić D, Dubonjić L, Morato MM. Observer-based fault estimation in steer-by-wire vehicle. Eng Today 2022;1:7-17.
9. Pršić D, Nedić N, Stojanović V. A nature inspired optimal control of pneumatic-driven parallel robot platform. Proc Inst Mech Eng C J Mech Eng Sci 2017;231:59-71.
10. Morato MM, Bernardi E, Stojanovic V. A qLPV nonlinear model predictive control with moving horizon estimation. Complex Eng Syst 2021;1:5.
11. Stanton B, Jensen T. Trust and artificial intelligence. Preprint; 2021. Available from: https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=931087. [Last accessed on 22 Dec 2022].
12. U.S. Department of Defense responsible artificial intelligence strategy and implementation pathway. Department of Defense; 2022. Available from: https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF. [Last accessed on 22 Dec 2022].
13. Singh A, Sengupta S, Lakshminarayanan V. Explainable deep learning models in medical image analysis. J Imaging 2020;6:52.
14. Ribeiro MT, Singh S, Guestrin C. "Why should I trust you?" Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016. pp. 1135–44.
15. Dieber J, Kirrane S. Why model why? Assessing the strengths and limitations of LIME. arXiv preprint arXiv:2012.00093; 2020.
16. Lundberg SM, Lee SI. A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems 30; 2017. Available from: https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html. [Last accessed on 22 Dec 2022].
17. Vo TH, Nguyen NTK, Kha QH, Le NQK. On the road to explainable AI in drug-drug interactions prediction: a systematic review. Comput Struct Biotechnol J 2022;20:2112-23.
18. Kha QH, Tran TO, Nguyen VN, et al. An interpretable deep learning model for classifying adaptor protein complexes from sequence information. Methods 2022;207:90-96.
19. Durán JM. Dissecting scientific explanation in AI (sXAI): a case for medicine and healthcare. Artif Intell 2021;297:103498.
20. Shaban-Nejad A, Michalowski M, Buckeridge DL. Explainability and interpretability: keys to deep medicine. In: Explainable AI in Healthcare and Medicine. Springer; 2021. pp. 1–10.
21. Naser M. Deriving mapping functions to tie anthropometric measurements to body mass index via interpretable machine learning. Mach Learn Appl 2022;8:100259.
22. Bhandari M, Shahi TB, Siku B, Neupane A. Explanatory classification of CXR images into COVID-19, pneumonia and tuberculosis using deep learning and XAI. Comput Biol Med 2022;150:106156.
23. Lombardi A, Tavares JMR, Tangaro S. Explainable artificial intelligence (XAI) in systems neuroscience. Front Syst Neurosci 2021;15.
24. Abdel-Zaher AM, Eldeib AM. Breast cancer classification using deep belief networks. Expert Syst Appl 2016;46:139-44.
25. UCI Machine Learning Repository: Breast Cancer Wisconsin (Diagnostic) data set. Available from: https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(diagnostic). [Last accessed on 22 Dec 2022].
26. Ribeiro MT, Singh S, Guestrin C. "Why should I trust you?": Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016. pp. 1135–44.
27. An introduction to explainable AI with Shapley values. Available from: https://towardsdatascience.com/shap-explained-the-way-i-wish-someone-explained-it-to-me-ab81cc69ef30. [Last accessed on 22 Dec 2022].
28. Paszke A, Gross S, Massa F, et al. PyTorch: an imperative style, high-performance deep learning library. In: Wallach H, Larochelle H, Beygelzimer A, d'Alché-Buc F, Fox E, et al., editors. Advances in Neural Information Processing Systems 32. Curran Associates, Inc.; 2019. pp. 8024–35. Available from: http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf. [Last accessed on 22 Dec 2022].
29. Kadam VJ, Jadhav SM, Vijayakumar K. Breast cancer diagnosis using feature ensemble learning based on stacked sparse autoencoders and softmax regression. J Med Syst 2019;43:1-11.
30. Street WN, Wolberg WH, Mangasarian OL. Nuclear feature extraction for breast tumor diagnosis. In: Biomedical Image Processing and Biomedical Visualization. Vol. 1905. SPIE; 1993. pp. 861–70.
31. Hariharan S, Rejimol Robinson R, Prasad RR, Thomas C, Balakrishnan N. XAI for intrusion detection system: comparing explanations based on global and local scope. J Comput Virol Hack Tech 2022:1-23.
32. Visani G, Bagli E, Chesani F, Poluzzi A, Capuzzo D. Statistical stability indices for LIME: obtaining reliable explanations for machine learning models. J Oper Res Soc 2022;73:91-101.