REFERENCES
1. Montavon G, Samek W, Müller KR. Methods for interpreting and understanding deep neural networks. Digital Signal Processing 2018 Feb;73: 1–15. Available from: https://dx.doi.org/10.1016/j.dsp.2017.10.011.
2. Samek W, Montavon G, Vedaldi A, Hansen LK, Müller KR, editors. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Cham: Springer; 2019.
3. Adadi A, Berrada M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018;6: 52138–60. Available from: https://ieeexplore.ieee.org/document/8466590.
4. Letham B, Rudin C, McCormick TH, Madigan D. Interpretable Classifiers Using Rules and Bayesian Analysis: Building a Better Stroke Prediction Model. The Annals of Applied Statistics 2015 Sep 1;9: 1350–71. Available from: https://www.jstor.org/stable/43826424.
5. Katzmann A, Taubmann O, Ahmad S, Mühlberg A, Sühling M, et al. Explaining clinical decision support systems in medical imaging using cycle-consistent activation maximization. Neurocomputing 2021 Oct 7;458: 141–56. Available from: https://dx.doi.org/10.1016/j.neucom.2021.05.081.
6. Xiao W, Kreiman G. Gradient-free activation maximization for identifying effective stimuli. PLOS Computational Biology 2019 May 1;16.
7. Ribeiro MT, Singh S, Guestrin C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD '16. ACM; Aug 13, 2016. pp. 1135–44. Available from: http://dl.acm.org/citation.cfm?id=2939778.
8. Ribeiro MT, Singh S, Guestrin C. Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance; 2016. Available from: https://explore.openaire.eu/search/publication?articleId=od________18::5d14f874a3c1396f9cb09d48afc22423.
9. Lei J, G'Sell M, Rinaldo A, Tibshirani RJ, Wasserman L. Distribution-Free Predictive Inference for Regression. Journal of the American Statistical Association 2018 Jul 3;113: 1094–111. Available from: http://www.tandfonline.com/doi/abs/10.1080/01621459.2017.1307116.
10. Baehrens D, Schroeter T, Harmeling S, Kawanabe M, Hansen K, et al. How to explain individual classification decisions. Journal of Machine Learning Research 2010;11: 1803–31. Available from: http://publica.fraunhofer.de/documents/N-143882.html.
11. Zeiler MD, Fergus R. Visualizing and Understanding Convolutional Networks. In: Computer Vision – ECCV 2014. Cham: Springer International Publishing; 2014. pp. 818–33. Available from: http://link.springer.com/10.1007/978-3-319-10590-1_53.
12. Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A. Learning Deep Features for Discriminative Localization. IEEE; Jun 2016. pp. 2921–29. Available from: https://ieeexplore.ieee.org/document/7780688.
13. Tripathy RK, Bilionis I. Deep UQ: Learning deep neural network surrogate models for high dimensional uncertainty quantification. Journal of Computational Physics 2018 Dec 15;375: 565–88. Available from: https://dx.doi.org/10.1016/j.jcp.2018.08.036.
14. Elith J, Leathwick JR, Hastie T. A Working Guide to Boosted Regression Trees. The Journal of Animal Ecology 2008 Jul 1;77: 802–13. Available from: https://www.jstor.org/stable/20143253.
15. Chipman HA, George EI, McCulloch RE. BART: Bayesian Additive Regression Trees. The Annals of Applied Statistics 2010 Mar 1;4: 266–98. Available from: https://www.jstor.org/stable/27801587.
16. Green DP, Kern HL. Modeling Heterogeneous Treatment Effects in Survey Experiments with Bayesian Additive Regression Trees. Public Opinion Quarterly 2012 Oct 1;76: 491–511. Available from: https://www.jstor.org/stable/41684581.
17. Goldstein A, Kapelner A, Bleich J, Pitkin E. Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation; 2015. Available from: https://search.datacite.org/works/10.6084/m9.figshare.1006469.v2.
18. Aung MSH, Lisboa PJG, Etchells TA, Testa AC, Calster BV, et al. Comparing Analytical Decision Support Models Through Boolean Rule Extraction: A Case Study of Ovarian Tumour Malignancy. In: Advances in Neural Networks – ISNN 2007. Berlin, Heidelberg: Springer Berlin Heidelberg; 2007. pp. 1177–86. Available from: http://link.springer.com/10.1007/978-3-540-72393-6_139.
19. Johansson U, Lofstrom T, Konig R, Sonstrod C, Niklasson L. Rule Extraction from Opaque Models – A Slightly Different Perspective. IEEE; Dec 2006. pp. 22–27. Available from: https://ieeexplore.ieee.org/document/4041465.
20. Yashchenko AV, Belikov AV, Peterson MV, Potapov AS. Distillation of neural network models for detection and description of image key points. Scientific and Technical Journal of Information Technologies, Mechanics and Optics 2020 Jun 1;20: 402–9. Available from: https://doaj.org/article/587a8589d29d48d8b0858eb10db8ae53.
21. Tan S. Interpretable Approaches to Detect Bias in Black-Box Models. AIES '18. ACM; Dec 27, 2018. pp. 382–83. Available from: http://dl.acm.org/citation.cfm?id=3278802.
22. Zhang W, Biswas G, Zhao Q, Zhao H, Feng W. Knowledge distilling based model compression and feature learning in fault diagnosis. Applied Soft Computing 2020 Mar;88: 105958. Available from: https://dx.doi.org/10.1016/j.asoc.2019.105958.
23. Cortez P, Embrechts MJ. Opening black box Data Mining models using Sensitivity Analysis. IEEE; Apr 2011. pp. 341–48. Available from: https://ieeexplore.ieee.org/document/5949423.
24. Cortez P, Embrechts MJ. Using sensitivity analysis and visualization techniques to open black box data mining models. Information Sciences 2013 Mar 10;225: 1–17. Available from: https://dx.doi.org/10.1016/j.ins.2012.10.039.
25. Bien J, Tibshirani R. Prototype Selection for Interpretable Classification. The Annals of Applied Statistics 2011 Dec 1;5: 2403–24. Available from: https://www.jstor.org/stable/23069335.
26. Yang Z, Zhang A, Sudjianto A. Enhancing Explainability of Neural Networks Through Architecture Constraints. IEEE Transactions on Neural Networks and Learning Systems 2021 Jun;32: 2610–21. Available from: https://ieeexplore.ieee.org/document/9149804.
27. Tran L, Dolph C, Zhao D. Enhancing Neural Network Explainability with Variational Autoencoders.
28. Wolf L, Galanti T, Hazan T. A Formal Approach to Explainability. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society; Jan 15, 2020. pp. 255–61.
29. Al-Mahasneh AJ, Anavatti SG, Garratt MA. Evolving General Regression Neural Networks for Learning from Noisy Datasets. IEEE; Dec 2019. pp. 1473–78. Available from: https://ieeexplore.ieee.org/document/9003073.
30. Jifara W, Jiang F, Rho S, Cheng M, Liu S. Medical image denoising using convolutional neural network: a residual learning approach. The Journal of Supercomputing 2019 Feb 6;75: 704–18. Available from: https://search.proquest.com/docview/2187111783.
31. Kao YY, Hsu HP, Hung KH, Lee SK, Lai YH, et al. A study on attention-based objective function in deep denoising autoencoder based speech enhancement. The Journal of the Acoustical Society of America 2019 Oct;146: 2794. Available from: http://dx.doi.org/10.1121/1.5136680.
32. Viaña J, Ralescu S, Cohen K, Ralescu A, Kreinovich V. Single Hidden Layer CEFYDRA: Cluster-first Explainable FuzzY-based Deep self-Reorganizing Algorithm. In: Applications of Fuzzy Techniques: Proceedings of the 2022 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS; May 2022.
33. Viaña J, Ralescu S, Cohen K, Ralescu A, Kreinovich V. Multiple Hidden Layered CEFYDRA: Cluster-first Explainable FuzzY-based Deep self-Reorganizing Algorithm. In: Applications of Fuzzy Techniques: Proceedings of the 2022 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS; May 2022.
34. Viaña J, Ralescu S, Cohen K, Ralescu A, Kreinovich V. Initialization and Plasticity of CEFYDRA: Cluster-first Explainable FuzzY-based Deep self-Reorganizing Algorithm. In: Applications of Fuzzy Techniques: Proceedings of the 2022 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS; May 2022.
35. Bede B, Williams A. Takagi-Sugeno Fuzzy Systems with Triangular Membership Functions as Interpretable Neural Networks. In: Explainable AI and Other Applications of Fuzzy Techniques. Cham: Springer International Publishing; 2021. pp. 14–25. Available from: http://link.springer.com/10.1007/978-3-030-82099-2_42.
36. Murphy A. Breast Cancer Wisconsin Diagnostic Dataset Analysis Using GFS-TSK. In: Explainable AI and Other Applications of Fuzzy Techniques. Cham: Springer International Publishing; 2021. Available from: http://link.springer.com/10.1007/978-3-030-82099-2_42.
37. Murtagh F, Contreras P. Algorithms for hierarchical clustering: an overview. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2012 Jan;2: 86–97. Available from: https://api.istex.fr/ark:/67375/WNG-ZSSZ1C7W-3/fulltext.pdf.
38. Murtagh F, Contreras P. Algorithms for hierarchical clustering: an overview, II. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2017 Nov;7. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/widm.1219.
39. Sharma N, Sharma P, Tiwari K. A Review on Clustering Method based on Unsupervised Learning Approach. International Journal of Computer Applications 2018 Sep 18;181: 20–23.
40. Zhang X, Xu Z. Hesitant fuzzy agglomerative hierarchical clustering algorithms. International Journal of Systems Science 2015 Feb 17;46: 562–76. Available from: http://www.tandfonline.com/doi/abs/10.1080/00207721.2013.797037.
41. Ciaramella A, Nardone D, Staiano A. Data integration by fuzzy similarity-based hierarchical clustering. BMC Bioinformatics 2020 Aug 21;21: 350. Available from: https://www.ncbi.nlm.nih.gov/pubmed/32838739.
42. Aliahmadipour L, Eslami E. GHFHC: Generalized Hesitant Fuzzy Hierarchical Clustering Algorithm. International Journal of Intelligent Systems 2016 Sep;31: 855–71. Available from: https://api.istex.fr/ark:/67375/WNG-3QPVTL6Q-T/fulltext.pdf.
43. Polikar R. Ensemble Learning. In: Ensemble Machine Learning. Boston, MA: Springer US; 2012. pp. 1–34. Available from: http://link.springer.com/10.1007/978-1-4419-9326-7_1.
44. Viaña J, Ralescu S, Cohen K, Ralescu A, Kreinovich V. Why Cauchy Membership Functions: Efficiency. Advances in Artificial Intelligence and Machine Learning 2021;1: 81–88.
45. Viaña J, Ralescu S, Cohen K, Ralescu A, Kreinovich V. Why Cauchy Membership Functions: Reliability. Advances in Artificial Intelligence and Machine Learning (To appear).
46. Dua D, Graff C. UCI Machine Learning Repository; 2017. University of California, Irvine, School of Information and Computer Sciences. Available from: http://archive.ics.uci.edu/ml.
47. Viaña J, Cohen K. Fuzzy-Based, Noise-Resilient, Explainable Algorithm for Regression. In: Explainable AI and Other Applications of Fuzzy Techniques. Cham: Springer International Publishing; 2021. pp. 461–72. Available from: http://link.springer.com/10.1007/978-3-030-82099-2_42.
48. Viaña J, Ralescu S, Cohen K, Ralescu A, Kreinovich V. Extension to Multidimensional Problems of a Fuzzy-based Explainable & Noise-Resilient Algorithm. In: Constraint Programming and Decision Making (CoProd 2021); 2021.
49. Combined Cycle Power Plant Data Set. UCI Machine Learning Repository; 2012. Available from: https://archive.ics.uci.edu/ml/datasets/combined+cycle+power+plant.
50. Tufekci P. Prediction of full load electrical power output of a base load operated combined cycle power plant using machine learning methods. International Journal of Electrical Power & Energy Systems 2014 Sep;60: 126–40. Available from: https://dx.doi.org/10.1016/j.ijepes.2014.02.027.
51. Ekinci S, Celebi UB, Bal M, Amasyali MF, Boyaci UK. Predictions of oil/chemical tanker main design parameters using computational intelligence techniques. Applied Soft Computing 2011;11: 2356–66. Available from: https://dx.doi.org/10.1016/j.asoc.2010.08.015.
52. Azuaje F. Review of: Witten IH, Frank E. Data Mining: Practical Machine Learning Tools and Techniques, 2nd edition. BioMedical Engineering OnLine 2006 Sep 29;5.
53. Simpson DG. Introduction to Rousseeuw (1984) Least Median of Squares Regression. In: Breakthroughs in Statistics. New York, NY: Springer New York; 1997. pp. 433–61. Available from: http://link.springer.com/10.1007/978-1-4612-0667-5_18.
54. Kubat M. Neural networks: a comprehensive foundation by Simon Haykin, Macmillan, 1994, ISBN 0-02-352781-7. The Knowledge Engineering Review 1999 Feb;13: 409–12. Available from: https://dx.doi.org/10.1017/S0269888998214044.
55. Elish MO. A comparative study of fault density prediction in aspect-oriented systems using MLP, RBF, KNN, RT, DENFIS and SVR models. Artificial Intelligence Review 2014 Dec;42: 695–703. Available from: https://search.proquest.com/docview/1624871931.
56. Han J, Kamber M. Data mining: concepts and techniques. Morgan Kaufmann; 2005. Available from: http://cds.cern.ch/record/1991675.
57. Cleary JG, Trigg LE. K*: An Instance-based Learner Using an Entropic Distance Measure. In: Machine Learning Proceedings 1995. Elsevier Inc; 1995. pp. 108–14. Available from: https://dx.doi.org/10.1016/B978-1-55860-377-6.50022-0.
58. Friedman JH. Stochastic gradient boosting. Computational Statistics & Data Analysis 2002;38: 367–78. Available from: http://econpapers.repec.org/article/eeecsdana/v_3a38_3ay_3a2002_3ai_3a4_3ap_3a367-378.htm.
59. D'Haen J, Van den Poel D. Temporary Staffing Services: A Data Mining Perspective. IEEE; Dec 2012. pp. 287–92. Available from: https://ieeexplore.ieee.org/document/6406453.
60. Wang Y, Witten IH. Induction of model trees for predicting continuous classes. Working Paper 96/23. Hamilton, New Zealand: Dept. of Computer Science, University of Waikato; 1996.
61. Portnoy S, Koenker R. The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators. Statistical Science 1997 Nov 1;12: 279–96. Available from: https://www.jstor.org/stable/2246216.
62. Ernest N, Carroll D, Schumacher C, Clark M, Cohen K. Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions. Journal of Defense Management 2016;6.
63. Sathyan A, Ernest ND, Cohen K. An Efficient Genetic Fuzzy Approach to UAV Swarm Routing. Unmanned Systems 2016 Apr;4: 117–27. Available from: http://www.worldscientific.com/doi/abs/10.1142/S2301385016500011.
64. Ernest N, Cohen K, Kivelevitch E, Schumacher C, Casbeer D. Genetic Fuzzy Trees and their Application Towards Autonomous Training and Control of a Squadron of Unmanned Combat Aerial Vehicles. Unmanned Systems 2015 Jul;3: 185–204. Available from: http://www.worldscientific.com/doi/abs/10.1142/S2301385015500120.
65. Sathyan A, Cohen K, Ma O. Comparison Between Genetic Fuzzy Methodology and Q-Learning for Collaborative Control Design. International Journal of Artificial Intelligence & Applications 2019 Mar 31;10: 1–15.
66. Sathyan A, Cohen K, Ma O. Genetic Fuzzy Based Scalable System of Distributed Robots for a Collaborative Task. Frontiers in Robotics and AI 2020;7. Available from: https://www.ncbi.nlm.nih.gov/pubmed/33501362.
67. Sathyan A, Ma J, Cohen K. Decentralized cooperative driving automation: a reinforcement learning framework using genetic fuzzy systems. Transportmetrica B: Transport Dynamics 2021 Jan 1;9: 775–97.