From prediction to practice: a narrative review of recent artificial intelligence applications in liver transplantation

Figure 10. Distribution of interpretation methods across AI models in LT for non-linear neural network and tree-based approaches. Non-linear models, such as neural networks and tree-based approaches like random forests, pose significant interpretability challenges. This figure shows the various methods employed to interpret these models. Neural network-based segmentation often serves as a self-explanatory approach, while a substantial proportion of studies (red) use no interpretation method at all. In contrast, linear approaches, such as linear regression or Cox regression, offer straightforward interpretability through their coefficients. This figure underscores the gap in interpretability strategies for complex AI models in LT. AI: Artificial intelligence; LT: liver transplantation.

Artificial Intelligence Surgery
ISSN 2771-0408 (Online)
Portico

All published articles will be preserved here permanently:

https://www.portico.org/publishers/oae/