Review  |  Open Access  |  20 Sep 2023

AI in colonoscopy - detection and characterisation of malignant polyps

Art Int Surg 2023;3:186-94.
10.20517/ais.2023.17 |  © The Author(s) 2023.

Abstract

The medical technological revolution has transformed the way we deliver care, underpinned by adjuncts such as artificial intelligence and machine learning. The applications to the field of endoscopy are numerous. Malignant polyps represent a significant diagnostic dilemma, as mischaracterisation may mean the difference between an endoscopic procedure and a formal bowel resection. This has implications for patients’ oncological outcomes, morbidity and mortality, especially if post-procedure histopathology upstages disease. We have made significant strides in applying artificial intelligence to colonoscopic detection. Deep learning algorithms can be trained on video and image databases. These have been applied to traditional, human-derived classification methods, such as Paris or Kudo, with up to 93% accuracy. Furthermore, multimodal characterisation systems have been developed that also factor in patient demographics and colonic location to estimate invasion depth and endoscopic resectability with over 90% accuracy. Although the technology is still evolving, and the lack of high-quality randomised controlled trials limits clinical usability, an exciting horizon lies ahead for artificial intelligence-augmented endoscopy.

Keywords

Colonoscopy, artificial intelligence, malignant polyps

INTRODUCTION

Colorectal cancer (CRC) ranks third in incidence and second in mortality among cancers globally[1]. There were around 2 million new cases in 2020[1], and 54% are thought to be preventable[2]. Colorectal polyps, defined as abnormal protrusions of mucosa, are the precursors to CRC and are classified histopathologically. Adenomas are the most common type of colonic polyp, with the adenoma-carcinoma sequence defining the stages of transformation. The gold standard investigation for screening and diagnosis of polyps and CRC is colonoscopy[3]. While established approaches exist for the endoscopic resection of benign polyps and the surgical removal of cancers, the management of malignant polyps (MP) is more complicated. MP are defined as polyps with neoplastic invasion of the submucosa that does not extend into the muscularis propria[4]. They harbour cancer cells with subtle differences along the spectrum of the adenoma-carcinoma sequence. Owing to the subjective nature of identification and categorisation, clinicians face a dilemma in how to manage potential MP. Clinical outcomes can vary greatly depending on the treatment modality: endoscopic treatment offers low morbidity but risks incomplete resection, whereas surgery can offer an oncological benefit at the cost of increased morbidity and even mortality.

Technological advances in medicine are accelerating. Artificial intelligence (AI), through machine learning, has made great strides in clinical translation. Initially, these efforts focused on polyp detection and optical diagnosis of small polyps; in recent years, however, the application of AI in colonoscopy has diversified. In this review, we explore the current evidence in the literature for the application of AI to classify MP.

Malignant polyps

MP were historically classified as Dukes A using the Dukes staging system[5]. In recent years, the TNM staging system has been more widely adopted, with MP classified as pT1. In line with this, the depth of invasion is used to determine the modality of resection. Superficial invasion is classed as less than 1,000 micrometres of submucosal penetration, and such lesions are amenable to curative endoscopic resection. Invasion beyond this depth is deemed deeply invasive and warrants surgical resection. The Association of Coloproctology of Great Britain and Ireland (ACPGBI) and the US Multi-Society Task Force (USMSTF) on Colorectal Cancer have both published consensus guidance on the management of malignant polyps over the past decade[4,6]. Endoscopic assessment is core to both. The central clinical challenge is the identification of MP, as the factors required to make a diagnosis are subjective, resulting in interobserver variability amongst endoscopists. An incorrect optical diagnosis can have significant clinical implications for the patient, leading to delays or making subsequent treatment more difficult. For example, an endoscopist who incorrectly diagnoses a superficially invasive lesion as deeply invasive would only take a biopsy at the index procedure; if the lesion then required endoscopic resection, this would be more challenging and carry a higher risk of complications. Conversely, if a deeply invasive lesion were falsely identified as superficially invasive, timely histopathology would not be available, delaying curative surgery. Concerningly, a recent prospective multicentre study identified that up to 40% of MPs were misclassified[7]. In a similar vein, histopathological analysis over a 14-year period in the USA reported that a quarter of resected specimens were found to be non-malignant[8]. These findings underline the importance of accurate diagnosis in a subjective field, and technological advances are converging to aid endoscopic identification.

Artificial intelligence

The field of AI, alongside machine and deep learning, has paved the way for advances both within and outside medicine. AI and machine learning involve developing algorithms to simulate cognitive functions. An early high-profile application was game playing: using reinforcement learning, without human gameplay data, an AI system mastered the board game Go and surpassed world-champion-level play[9]. Translation to healthcare followed naturally, with radiology an obvious starting point. Image-based recognition flourished, with one study reporting up to 98% accuracy in detecting COVID-19 from chest radiographs[10]. The use of artificial neural networks and convolutional neural networks has extended radiological applications into other fields, including oncology[11,12]. Endoscopy was a logical next step. Humans fatigue, and with that, performance and polyp detection rates fall[13]. AI-based algorithmic solutions offer a mechanism for reducing the number of missed polyps. Several randomised controlled trials in recent years have demonstrated the potential of convolutional neural networks, a deep learning method that applies learned convolutional filters to images, to significantly reduce the miss rate of adenomatous polyps[14-16].
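To make the concept concrete, the following is a minimal sketch of a convolutional neural network image classifier of the kind used for polyp characterisation, written in PyTorch. The architecture, layer sizes and three-class output are illustrative assumptions and do not correspond to any of the networks evaluated in the cited trials.

```python
# Minimal sketch of a convolutional neural network for polyp image
# classification, assuming PyTorch. The architecture and class count are
# illustrative only, not the networks used in the cited trials.
import torch
import torch.nn as nn

class PolypCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Convolutional layers apply learned filters over image patches to
        # extract texture and shape features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = PolypCNN(num_classes=3)
frame = torch.randn(1, 3, 224, 224)      # one RGB video frame (toy data)
probs = model(frame).softmax(dim=1)      # class probabilities
print(probs)
```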

APPLICATION OF AI FOR HUMAN-DERIVED CLASSIFICATIONS

The starting point in the journey of AI in characterising MPs has been to replicate existing human classification systems. These include the Kudo, Japan NBI Expert Team (JNET) and NBI International Colorectal Endoscopic (NICE) classifications, while lesion morphology, which can estimate the risk of lymph node metastasis and deep invasion, is classified according to the Paris classification. Human-directed classification, however, inherently suffers from inter- and intra-observer variability. As such, the early applications of AI for MPs have sought to replicate these classifications in an automated fashion.

Paris classification

This nomenclature was developed to describe superficial lesions. Type I polyps are elevated, type II are flat, and type III are excavated; types I and II have further subdivisions, shown in Table 1. The highest risk of invasion is associated with the ulcerated type III lesions, which carry not only a deeper level of invasion but also a greater chance of lymph node metastasis. Once a lesion is classified, clinicians can use this to determine treatment and prognosticate[17]. However, human interpretation of this categorisation has known difficulties, most notably differentiating between Is and IIa lesions.

Table 1

Paris classification

Type I (elevated): Ip, pedunculated; Isp, semi-pedunculated; Is, sessile
Type II (flat): IIa, slightly elevated; IIb, flat; IIc, slightly depressed
Type III: excavated

To date, only one study has applied AI to the Paris classification. Krenzer et al. developed an automated classification using deep learning methods applied to colonoscopy videos[18]. Around 50,000 white light image frames containing polyps, drawn from two existing Japanese datasets, were evaluated and annotated by two experts to form the foundation of the AI recognition software. The final system crops the polyp from the frame and passes it to a Vision Transformer, which classifies the lesion based on what it has learned from the prior expert annotations. The exact numbers of each polyp class were not reported, and type IIb, IIc and III lesions were excluded due to a paucity of image data, limiting the system to classifying less advanced polyps. The authors reported 93% accuracy for Is lesions and 84% for IIa lesions[18], with a mean accuracy of 89%, which surpasses previously published results and shows initial promise.
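The crop-then-classify pipeline described above can be sketched as follows, assuming torchvision's ViT-B/16 as a stand-in backbone. The bounding box, class list and untrained weights are hypothetical placeholders for illustration, not Krenzer et al.'s actual model or data.

```python
# Sketch of a crop-then-classify pipeline with a Vision Transformer, assuming
# torchvision's ViT-B/16 as a stand-in backbone; the bounding box, labels and
# untrained weights are hypothetical, not Krenzer et al.'s model.
import torch
from torchvision.models import vit_b_16

PARIS_CLASSES = ["Ip", "Isp", "Is", "IIa"]  # types IIb/IIc/III were excluded

model = vit_b_16(weights=None, num_classes=len(PARIS_CLASSES))
model.eval()

def classify_crop(frame: torch.Tensor, box: tuple[int, int, int, int]) -> str:
    """Crop the detected polyp from a frame and classify its Paris type."""
    x1, y1, x2, y2 = box
    crop = frame[:, y1:y2, x1:x2].unsqueeze(0)                     # (1, 3, H, W)
    crop = torch.nn.functional.interpolate(crop, size=(224, 224),
                                           mode="bilinear", align_corners=False)
    with torch.no_grad():
        logits = model(crop)
    return PARIS_CLASSES[int(logits.argmax(dim=1))]

frame = torch.rand(3, 480, 640)          # one white-light frame (toy data)
print(classify_crop(frame, (100, 80, 300, 260)))
```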

NICE

In the same study as the aforementioned AI-based Paris classification, Krenzer et al. developed a convolutional neural network (CNN) to automate characterisation of polyps using the NICE classification[18]. The NICE classification categorises lesions based on their appearance under narrow band imaging (NBI): colour, surface pattern and vascular morphology are used to stratify polyps into hyperplastic, adenomatous or deep submucosal invasive cancers (types 1-3, respectively). The system used deep metric learning to train a CNN that characterises the texture of the polyp based on prior learning. The same polyp database was used as previously mentioned, although only 2,500 images could be incorporated because a minimum image resolution was required to evaluate the polyp surface pattern. On this occasion, 81% diagnostic accuracy was observed[18]. Similar work from Okamoto et al. used over 4,000 images collected at a single centre over a 12-year period to develop a computer-aided diagnosis algorithm. This centred on expert endoscopists identifying regions of interest, with the algorithm cropping random areas of these regions for analysis. The system was then evaluated on 480 non-magnified narrow band images and compared to the performance of two experts. The AI model achieved a mean accuracy of 94% for type 1-3 lesions[19].
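Deep metric learning, the technique named above, trains an embedding so that images of the same class sit close together in feature space. Below is a generic triplet-loss sketch in PyTorch; the small embedding network and random tensors are illustrative assumptions, not the published NICE model or its data.

```python
# Generic sketch of deep metric learning with a triplet loss; the embedding
# network and data are illustrative placeholders, not the published NICE model.
import torch
import torch.nn as nn

embedder = nn.Sequential(                      # maps a polyp crop to a vector
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 64),
)
triplet_loss = nn.TripletMarginLoss(margin=1.0)

# anchor/positive share a NICE category; the negative comes from a different one
anchor = torch.rand(8, 3, 128, 128)
positive = torch.rand(8, 3, 128, 128)
negative = torch.rand(8, 3, 128, 128)

loss = triplet_loss(embedder(anchor), embedder(positive), embedder(negative))
loss.backward()                                # pulls same-class textures together
print(float(loss))
```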

JNET

This system was developed by the Japan NBI Expert Team to distinguish low-grade dysplasia from submucosal shallow invasive lesions. Similar to NICE, JNET type 1 encapsulates hyperplastic polyps and sessile serrated polyps. Type 2 lesions are subdivided into 2A and 2B: 2A lesions are low-grade dysplastic lesions, while 2B lesions include high-grade dysplasia and submucosal shallow invasive lesions. Type 3 lesions represent deep submucosal carcinoma. This is clinically relevant because management changes between type 2B and type 3 lesions: type 3 lesions carry a risk of lymph node metastasis and therefore require surgery, whereas type 2B lesions may be cured with endoscopic resection.

The only data in the literature come from the aforementioned study by Okamoto et al., which also evaluated its computer-aided diagnosis algorithm against the JNET classification[19]. After excluding non-magnified images, 3,670 magnified narrow band images were used for the separate testing system, with the same principle applied for testing and corroboration of the algorithm. On this occasion, a mean accuracy of 90% was reported across JNET types 1-3[19]. For the clinically relevant type 2B lesions, accuracy was 84.1% with a negative predictive value of 96.3%. However, the positive predictive value was only 46.8%, highlighting the need for further training data to optimise the algorithm.
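The accuracy, positive predictive value (PPV) and negative predictive value (NPV) quoted above all follow directly from confusion-matrix counts, as the short calculation below shows. The counts are hypothetical, chosen only so the resulting ratios are of a similar order to the quoted type 2B figures; they are not taken from Okamoto et al.'s dataset.

```python
# How accuracy, sensitivity, specificity, PPV and NPV follow from
# confusion-matrix counts. The counts below are hypothetical and only chosen
# so the ratios resemble the quoted type 2B figures.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # drops when false positives are common
        "npv": tn / (tn + fn),   # stays high when false negatives are rare
    }

print(diagnostic_metrics(tp=37, fp=42, tn=265, fn=9))
```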

Kudo

Kudo et al. first applied pit patterns, assessed with magnifying endoscopy, to denote potential malignancy in polyps in 1996[21]; the approach is now widely used. Polyps are classified into six categories based on pit morphology, structure, and staining. With respect to MP, pit patterns of types I to IV reflect benign polyps that can be endoscopically resected, while type V reflects deep submucosal invasion that warrants surgery.

Pre-dating the application of deep learning to endoscopy, Takemura et al. developed semi-automated computer software capable of identifying polyps based on colour differences between the polyp borders and the surrounding mucosa. Magnified images of polyps, acquired after dye spraying, were collected in a single centre over a four-month period. A total of 72 polyp images were analysed by an expert endoscopist to determine their pit patterns. Using normal pits as a comparator, the software was developed to analyse pit patterns with respect to area, perimeter, circularity, and the distances between pits. The overall accuracy of the software was 98.5%[22].
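The kind of pit-pattern shape analysis described, measuring area, perimeter and circularity of segmented pits, can be sketched with scikit-image's region properties. The toy binary image below is a placeholder standing in for a segmented, dye-sprayed crop; this is not Takemura et al.'s software.

```python
# Sketch of pit-pattern shape analysis (area, perimeter, circularity) with
# scikit-image; the toy binary image is a placeholder, not Takemura et al.'s
# software or data.
import numpy as np
from skimage import draw, measure

# Toy binary image with two "pits" standing in for a segmented crop
image = np.zeros((200, 200), dtype=bool)
rr, cc = draw.disk((60, 60), 20)
image[rr, cc] = True
rr, cc = draw.disk((140, 130), 15)
image[rr, cc] = True

labels = measure.label(image)
for region in measure.regionprops(labels):
    circularity = 4 * np.pi * region.area / region.perimeter ** 2
    centroid = tuple(np.round(region.centroid).astype(int))
    print(f"pit at {centroid}: area={region.area:.0f}, "
          f"perimeter={region.perimeter:.1f}, circularity={circularity:.2f}")
```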

A more recent study by Patino-Barrientos et al. evaluated a deep learning algorithm against the Kudo classification. A total of 600 polyp images from 142 patients were collected from colonoscopy videos in two centres and split into training (68%), validation (16%) and testing (16%) datasets. The images were classified by experts into five Kudo categories, with per-class counts ranging from 47 (type I) to 187 (type IV). Augmentations including rotation and shifting effectively multiplied the number of images in the dataset six-fold. The deep learning model trained on these data achieved an accuracy of 86%[23] [Table 2].
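A sketch of the rotation-and-shift augmentation and the 68/16/16 split described above is given below, assuming torchvision transforms. The random tensors stand in for the two-centre image collection and are purely illustrative.

```python
# Sketch of rotation/shift augmentation and a 68/16/16 split, assuming
# torchvision; the random tensors are placeholders, not the authors' data.
import torch
from torch.utils.data import TensorDataset, random_split
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),                     # rotation
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # shifting
])

# Placeholder dataset: 600 random "images" with Kudo class labels 0-4
images = torch.rand(600, 3, 224, 224)
labels = torch.randint(0, 5, (600,))
dataset = TensorDataset(images, labels)

# 68% / 16% / 16% of 600 images
train, val, test = random_split(dataset, [408, 96, 96],
                                generator=torch.Generator().manual_seed(0))

x, y = train[0]
print(augment(x).shape, int(y), len(train), len(val), len(test))
```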

Table 2

Summary of human-derived application data

Study | Year | Classification system | Centres | Lesions | Endoscopists | Video/still images | Sensitivity (%) | Specificity (%) | Overall accuracy (%)
Krenzer et al.[18] | 2023 | Paris | 1 | 100 | Not stated | Still images | Not stated | Not stated | 89.35
Krenzer et al.[18] | 2023 | NICE | 1 | 61 | 1 | Still images | Not stated | Not stated | 81.13
Okamoto et al.[19] | 2022 | NICE | 1 | 480 | 1 | Still images | Type 1: 100; Type 2: 95.8; Type 3: 61.1 | Type 1: 97.1; Type 2: 80.6; Type 3: 99.5 | Type 1: 97.5; Type 2: 91.2; Type 3: 93.8
Okamoto et al.[19] | 2022 | JNET | 1 | 320 | 1 | Still images | Type 1: 100; Type 2A: 80.3; Type 2B: 80.4; Type 3: 62.5 | Type 1: 96.3; Type 2A: 93.7; Type 2B: 84.7; Type 3: 99.6 | Type 1: 96.9; Type 2A: 86.3; Type 2B: 84.1; Type 3: 94.1
Takemura et al.[22] | 2010 | Kudo | 1 | 72 | 1 | Still images | Not stated | Not stated | 98.5
Patino-Barrientos et al.[23] | 2020 | Kudo | 1 | 600 | Not stated | Still images | Not stated | Not stated | 83

APPLICATION OF AI FOR MULTIMODAL DATA

The AI-based classification methods discussed above were based on retrospectively analysed single frames of video. Furthermore, a single light source was used in the majority of cases, which can omit certain lesion characteristics and increase the likelihood of misdiagnosis. This contrasts with the consensus guidance from both the USMSTF and the ACPGBI on the endoscopic recognition of malignant polyps[4,6], which describes classifying polyps using both white light and image-enhanced endoscopy with magnification.

Lu et al. created a novel convolutional neural network that combines white light images with image-enhanced endoscopy and makes assessments in real time. Their system was trained with data collected from three centres over five years and included the less prevalent large sessile lesions. For training, white light and image-enhanced endoscopic images were paired, yielding over 300,000 image pairs from 268 non-invasive and superficially invasive colorectal cancers and over 180,000 image pairs from 82 deeply invasive colorectal cancers. Almost 300,000 images were used to evaluate the system, with two experienced (> 5 years) endoscopists as the benchmark; additionally, 35 videos were used to test real-time performance. Accuracy was 91.6% and 93.8% in image tests with and without advanced colorectal cancers, respectively. Comparison with expert and senior endoscopists showed comparable accuracies of 88.1%, with significant improvements in 5 out of 7 junior endoscopists (P < 0.05)[24]. Remarkably, the system achieved 100% accuracy when validated on the 35 videos, with real-time processing of 21 image pairs per second.
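One common way to combine paired white-light and image-enhanced frames is a two-branch network whose features are fused before a shared classification head, sketched below. The layer sizes and two-class output are illustrative assumptions; this is not Lu et al.'s published architecture.

```python
# Sketch of fusing paired white-light and image-enhanced frames in a
# two-branch network; layer sizes are illustrative, not Lu et al.'s model.
import torch
import torch.nn as nn

def branch() -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class PairedInvasionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.wli_branch = branch()      # white light imaging
        self.iee_branch = branch()      # image-enhanced endoscopy (e.g. NBI)
        self.head = nn.Linear(64, 2)    # superficially vs deeply invasive

    def forward(self, wli: torch.Tensor, iee: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.wli_branch(wli), self.iee_branch(iee)], dim=1)
        return self.head(fused)

model = PairedInvasionNet()
logits = model(torch.rand(4, 3, 224, 224), torch.rand(4, 3, 224, 224))
print(logits.softmax(dim=1))
```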

Yao et al. expanded the application of deep learning to multimodal data (white light and image-enhanced endoscopy) by incorporating variables such as patient demographics and lesion location for lesions over 1 cm. These factors were combined to create a colorectal cancer invasion calculation, which informs the endoscopist whether the lesion may be invasive and whether endoscopic resection is suitable. Training was performed with two expert endoscopists annotating 339 lesions from three hospitals. Testing was subsequently performed on 198 lesions, with performance compared to 14 endoscopists who were not involved in the training phase. The system achieved a mean accuracy of 90% for both video and image processing, comparable to expert endoscopists, with significant improvements (P < 0.05) in 7 out of 11 senior and junior endoscopists[25] [Table 3].
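The general idea of fusing an image embedding with clinical variables can be sketched as follows. The clinical feature encoding (age, sex, lesion size, colonic segment) and the network itself are hypothetical illustrations of the concept, not Yao et al.'s published system.

```python
# Sketch of combining an image embedding with demographics and lesion
# location; the feature encoding is hypothetical, not Yao et al.'s system.
import torch
import torch.nn as nn

class InvasionCalculator(nn.Module):
    def __init__(self, image_dim: int = 64, clinical_dim: int = 4):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, image_dim),
        )
        self.head = nn.Sequential(
            nn.Linear(image_dim + clinical_dim, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),      # probability of deep invasion
        )

    def forward(self, image: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([self.image_encoder(image), clinical], dim=1))

model = InvasionCalculator()
image = torch.rand(1, 3, 224, 224)
# hypothetical clinical vector: [age/100, sex, lesion size in cm, colon segment index]
clinical = torch.tensor([[0.67, 1.0, 2.5, 3.0]])
print(f"estimated probability of deep invasion: {model(image, clinical).item():.2f}")
```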

Table 3

Summary of multimodal classification data

Study | Publication year | Classification system | Centres | Lesions | Endoscopists | Video validation? | Sensitivity (%) | Specificity (%) | Overall accuracy (%)
Lu et al.[24] | 2022 | Colorectal cancer invasion calculation | 3 | 268 | 2 | No | 50 | 96.2 | 91.6
Yao et al.[25] | 2022 | Colorectal cancer invasion calculation | 2 | 198 | 2 | Yes | 78.79 | 96.21 | Image: 90.4; Video: 89.7

Real-life meaning

While the above data are promising, it is important to consider how they may translate into clinical practice. The largest RCT to date, from Karsenti et al., included over 2,000 patients from a single centre. Strengths of this study included its large cohort and its evaluation in a non-academic centre, unlike previous studies. It reported an improvement in adenoma detection rate from 33.7% to 37.5% with the use of the AI system[26]. Although this improvement is modest, the mean number of adenomas detected per colonoscopy, increasingly appreciated as a more granular metric of an endoscopist’s performance, rose from 0.71 to 0.89 with AI. Similarly, the polyp detection rate improved from 41% to 45%. No significant difference was observed in the detection rate of advanced adenomas or proximal serrated polyps, although it is important to acknowledge that the study was not powered to detect differences for these less prevalent lesions.
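For clarity, the two metrics quoted above are simple ratios over procedure-level counts: adenoma detection rate (ADR) is the proportion of colonoscopies with at least one adenoma, and adenomas per colonoscopy (APC) is the mean count per procedure. The records below are invented purely to illustrate the calculation.

```python
# How ADR and mean adenomas per colonoscopy (APC) are derived from
# procedure-level counts; the records are made up to illustrate the arithmetic.
def adr_and_apc(adenomas_per_procedure: list[int]) -> tuple[float, float]:
    n = len(adenomas_per_procedure)
    adr = sum(1 for a in adenomas_per_procedure if a > 0) / n   # >=1 adenoma
    apc = sum(adenomas_per_procedure) / n                       # mean per procedure
    return adr, apc

procedures = [0, 2, 0, 1, 0, 0, 3, 1, 0, 1]   # adenomas found per colonoscopy
adr, apc = adr_and_apc(procedures)
print(f"ADR = {adr:.0%}, APC = {apc:.2f}")
```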

LIMITATIONS AND FUTURE WORK

Polyp detection was the first clinical translation of AI in endoscopy, with more than 30 randomised controlled trials to date. While these RCTs have been overwhelmingly positive in improving the detection of polyps, efficacy in real-life clinical evaluations[27,28] has been less impressive, for reasons that are currently unclear. For this reason, it is important that future algorithms for MP are evaluated not only in RCTs but also in real-life clinical studies.

A limitation of current AI algorithms for MP is that their classification output focuses on the degree of neoplastic advancement. As the size of training datasets increases, it is important that the classification output transitions from AI-assisted human optical diagnosis towards a more clinically relevant, objective output: whether the MP should be endoscopically resected or referred for surgical resection.

With respect to AI applications of human-derived classifications, the majority of data used to build and train the systems came from a single centre; consequently, the endoscopist validation was also single-centre. Further evaluation with external datasets is warranted to assess their generalisability. Retrospectively collected images were also often used, introducing a degree of bias, as did the common exclusion of out-of-focus images and examinations with poor bowel preparation. Moreover, still images were commonly used for AI training, and therefore performance in vivo, where lesion orientation varies, remains to be seen.

The diagnostic process applied by machine learning requires a certain level of transparency to reassure the end user[29]. Although AI can be a useful tool, without explainable algorithms, we cannot exclude the possibility of the system identifying incorrect features in its classification of lesions, as has previously been demonstrated in the field of radiology[30]. Furthermore, clinicians must have confidence in the technology in order to feel comfortable using it; as such, it is necessary to evaluate both the performance of these AI models in high-quality randomised controlled trials and the human-AI interaction. Progression to randomised controlled trials has been largely limited to polyp detection and characterisation of diminutive and small polyps. This will help to identify some of the generic challenges of human-computer interaction for AI in endoscopy, but each specific application of AI will likely bring unique challenges. Challenges for RCT design also exist in the context of AI and MPs, including the recruitment size required to achieve statistical power given the low prevalence of MP in the population.

Despite the promising results of in-silico studies, rigorous validation in RCTs is still required to confirm efficacy. If efficacy is demonstrated in RCTs, further validation in real-life clinical evaluation studies will be required to understand performance in routine practice. As noted above, real-life clinical studies of computer-aided detection (CADe) have been less impressive than the RCTs, and ongoing algorithm development should help address this. It is paramount that these algorithms have both high sensitivity and high specificity, as low specificity would lead to a higher rate of false positives. The human-AI interface will also play a central role here, yet it is currently poorly understood. For CADe, studies evaluating false positive rates have confirmed that it does not significantly increase the resection of non-polyps or prolong withdrawal and procedural times[31]. In contrast, the efficacy of computer-aided diagnosis (CADx) for optical diagnosis of diminutive polyps has been less impressive, particularly amongst non-expert endoscopists[32]. Given that the application to MP falls under CADx, important lessons can be learnt from CADx for diminutive polyps, which is further along the clinical translation pathway than AI for MPs. Nonetheless, an exciting horizon lies ahead for AI-enhanced endoscopy. Developments in the field are expanding rapidly and will likely reshape the prevention of colorectal cancer for years to come.

DECLARATIONS

Authors’ contributions

Made substantial contributions to the conception and design of the study and performed data analysis and interpretation: Shakir T, Kader R

Provided administrative, technical, and material support: Bhan C, Chand M

Availability of data and materials

Not applicable.

Financial support and sponsorship

None.

Conflicts of interest

All authors declared that there are no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2023. This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

REFERENCES

1. Sung H, Ferlay J, Siegel RL, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 2021;71:209-49.

2. Cancer Research UK. Bowel cancer statistics. Available from: https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/bowel-cancer. [Last accessed on 16 Sep 2023].

3. Kader R, Hadjinicolaou AV, Georgiades F, Stoyanov D, Lovat LB. Optical diagnosis of colorectal polyps using convolutional neural networks. World J Gastroenterol 2021;27:5908-18.

4. Shaukat A, Kaltenbach T, Dominitz JA, et al. Endoscopic recognition and management strategies for malignant colorectal polyps: recommendations of the US multi-society task force on colorectal cancer. Gastroenterology 2020;159:1916-34.e2.

5. Sarma DP. The dukes classification of colorectal cancer. JAMA 1986;256:1447.

6. Williams JG, Pullan RD, Hill J, et al. Management of the malignant colorectal polyp: ACPGBI position statement. Colorectal Dis 2013;15:1-38.

7. Backes Y, Schwartz MP, ter Borg F, et al. Multicentre prospective evaluation of real-time optical diagnosis of T1 colorectal cancer in large non-pedunculated colorectal polyps using narrow band imaging (the OPTICAL study). Gut 2019;68:271-9.

8. Peery AF, Cools KS, Strassle PD, et al. Increasing rates of surgery for patients with nonmalignant colorectal polyps in the United States. Gastroenterology 2018;154:1352-60.e3.

9. Silver D, Schrittwieser J, Simonyan K, et al. Mastering the game of Go without human knowledge. Nature 2017;550:354-9.

10. Ozturk T, Talo M, Yildirim EA, Baloglu UB, Yildirim O, Acharya UR. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput Biol Med 2020;121:103792.

11. Lee JG, Jun S, Cho YW, et al. Deep learning in medical imaging: general overview. Korean J Radiol 2017;18:570-84.

12. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer 2018;18:500-10.

13. Paeck KH, Heo WJ, Park DI, et al. Colonoscopy scheduling influences adenoma and polyp detection rates. Hepatogastroenterology 2013;60:1647-52.

14. Glissen Brown JR, Mansour NM, Wang P, et al. Deep learning computer-aided polyp detection reduces adenoma miss rate: a United States multi-center randomized tandem colonoscopy study (CADeT-CS trial). Clin Gastroenterol Hepatol 2022;20:1499-507.e4.

15. Wang P, Liu P, Glissen Brown JR, et al. Lower adenoma miss rate of computer-aided detection-assisted colonoscopy vs routine white-light colonoscopy in a prospective tandem study. Gastroenterology 2020;159:1252-61.e5.

16. Wallace MB, Sharma P, Bhandari P, et al. Impact of artificial intelligence on miss rate of colorectal neoplasia. Gastroenterology 2022;163:295-304.e5.

17. Endoscopic Classification Review Group. Update on the paris classification of superficial neoplastic lesions in the digestive tract. Endoscopy 2005;37:570-8.

18. Krenzer A, Heil S, Fitting D, et al. Automated classification of polyps using deep learning architectures and few-shot learning. Available from: https://doi.org/10.21203/rs.3.rs-2106189/v1. [Last accessed on 16 Sep 2023].

19. Okamoto Y, Yoshida S, Izakura S, et al. Development of multi-class computer-aided diagnostic systems using the NICE/JNET classifications for colorectal lesions. J Gastroenterol Hepatol 2022;37:104-10.

20. Ozawa T, Ishihara S, Fujishiro M, Kumagai Y, Shichijo S, Tada T. Automated endoscopic detection and classification of colorectal polyps using convolutional neural networks. Therap Adv Gastroenterol 2020;13:1756284820910659.

21. Kudo S, Tamura S, Nakajima T, Yamano H, Kusaka H, Watanabe H. Diagnosis of colorectal tumorous lesions by magnifying endoscopy. Gastrointest Endosc 1996;44:8-14.

22. Takemura Y, Yoshida S, Tanaka S, et al. Quantitative analysis and development of a computer-aided system for identification of regular pit patterns of colorectal lesions. Gastrointest Endosc 2010;72:1047-51.

23. Patino-Barrientos S, Sierra-Sosa D, Garcia-Zapirain B, Castillo-Olea C, Elmaghraby A. Kudo’s classification for colon polyps assessment using a deep learning approach. Appl Sci 2020;10:501.

24. Lu Z, Xu Y, Yao L, et al. Real-time automated diagnosis of colorectal cancer invasion depth using a deep learning model with multimodal data (with video). Gastrointest Endosc 2022;95:1186-94.e3.

25. Yao L, Lu Z, Yang G, et al. Development and validation of an artificial intelligence-based system for predicting colorectal cancer invasion depth using multi-modal data. Dig Endosc 2023;35:625-35.

26. Karsenti D, Tharsis G, Perrot B, et al. Effect of real-time computer-aided detection of colorectal adenoma in routine colonoscopy (COLO-GENIUS): a single-centre randomised controlled trial. Lancet Gastroenterol Hepatol 2023;8:726-34.

27. Levy I, Bruckmayer L, Klang E, Ben-Horin S, Kopylov U. Artificial intelligence-aided colonoscopy does not increase adenoma detection rate in routine clinical practice. Am J Gastroenterol 2022;117:1871-3.

28. Ladabaum U, Shepard J, Weng Y, Desai M, Singer SJ, Mannalithara A. Computer-aided detection of polyps does not improve colonoscopist performance in a pragmatic implementation trial. Gastroenterology 2023;164:481-3.e6.

29. Kader R, Baggaley RF, Hussein M, et al. Survey on the perceptions of UK gastroenterologists and endoscopists to artificial intelligence. Frontline Gastroenterol 2022;13:423-9.

30. Fazal MI, Patel ME, Tye J, Gupta Y. The past, present and future role of artificial intelligence in imaging. Eur J Radiol 2018;105:246-50.

31. Antonelli G, Rizkala T, Iacopini F, Hassan C. Current and future implications of artificial intelligence in colonoscopy. Ann Gastroenterol 2023;36:114-22.

32. Bang CS, Lee JJ, Baik GH. Computer-aided diagnosis of diminutive colorectal polyps in endoscopic images: systematic review and meta-analysis of diagnostic test accuracy. J Med Internet Res 2021;23:e29682.
