Commentary  |  Open Access  |  18 Sep 2023

Surgomics and the Artificial intelligence, Radiomics, Genomics, Oncopathomics and Surgomics (AiRGOS) Project

Art Int Surg 2023;3:180-5.
10.20517/ais.2023.24 |  © The Author(s) 2023.

Abstract

The journal Artificial Intelligence Surgery was established to explore the integration of artificial intelligence (AI) into surgery. It originated from the desire to understand the potential of true robotic surgery, as existing robotic systems are tele-manipulators rather than autonomous robots. AI’s role in surgery involves levels of autonomy and a balance between human expertise and technological advancement. In this regard, a new field of Surgomics emerges, integrating patient data such as genomics, radiomics, and pathomics to enhance surgical decision-making. Overcoming longstanding limitations in surgical data analysis, AI processes vast amounts of data, detects subtle patterns, and explores complex relationships. As Surgomics continues to evolve, it holds the potential to reshape surgical patient management. Initiatives like the Artificial intelligence, Radiomics, Genomics, Oncopathomics and Surgomics (AiRGOS) Project aim to develop AI algorithms for precision therapeutic treatment of cancer patients using radiologic imaging, genomic sequencing, and clinical data. In this commentary, we envision a future where AI technologies revolutionize surgical decision-making and create personalized treatment plans based on comprehensive patient data.

Keywords

Artificial intelligence, radiomics, genomics, oncopathology, pathomics

The age of artificial intelligence (AI) in medicine is upon us, and with it, the fields of Genomics, Radiomics, and Pathomics have exploded[1,2]. Many surgeons jumped on the bandwagon and championed robotic surgery as our contribution to this emerging reality. As we will discuss below, surgeons’ understanding of robotics was significantly limited and perhaps led many of us down a rabbit hole. At the same time, more pragmatic minds with a deeper understanding of AI focused on AI outside of the operating room, working on topics ranging from Ranson’s criteria and trauma scores all the way to augmented reality (AR) in the operating room and predictive algorithms for earlier detection of cancer. We believe that it is time for a fusion of the two; it is time for the adventurous to meet the pragmatic. As many of us became surgeons to combat cancer, we have created a consortium of centers with a passion for advanced surgical technology and a fascination with AI, which has led us to launch the Artificial intelligence, Radiomics, Genomics, Oncopathomics and Surgomics (AiRGOS) Project.

When we were first approached to create a journal, the publishing house wanted it called Intelligent Surgery. We initially agreed, but wanted to create a section that would focus on Artificial Intelligence Surgery Centers Around the World; as a result, the journal Artificial Intelligence Surgery was born. Although we worried that the timing for a journal like this was too early, we soon realized that our timing could not have been better. Our journal officially launched in December of 2021, and by the end of this year, we will apply for official listing in PubMed. The main spark that led to the creation of this journal was the desire to better understand what robotic surgery really is[3,4]. The complete robotic surgical systems used in current Robotic-Assisted Surgery (RAS), such as the da Vinci and the CMR (formerly Cambridge Medical Robotics) robot, are not true robots, but simply tele-manipulators. Perhaps the easiest way to understand this is to remember that the word “robot” derives from a Slavic word meaning servant or, more honestly, slave[5]. Using this etymologic fact, we can see that the only slave during RAS is the surgeon. True robotic surgery should involve a machine or device that operates not simply automatically, but, more importantly, autonomously[6].

The real debate is whether or not we want true robotic surgery at all. The easiest way to understand the two main ways that robotic surgery can develop is to look at perhaps the two largest Sci-Fi franchises, Star Trek and Star Wars. In general, Star Trek depicts the surgeon at the bedside; Star Wars, on the other hand, usually depicts a robot (called a “droid”, short for android) operating independently with no human doctor to be found. Given our conviction that the doctor should be an integral part of the surgical process, we believe that a future emulating Star Trek is the one for which we should be aiming. This leads us to contemplate whether we should advocate for a greater emphasis on non-console complete robotic systems like the Maestro or, alternatively, prioritize handheld collaborative robotic devices (cobots)[7,8].

Many people who do not understand AI think that AI is a zero-sum game, and that a robot would have to perform an entire procedure for it to be considered AI. As we have discussed in previous papers, the road to autonomy is a complex process, with six levels of surgical autonomy defined. Additionally, entire procedures will become more autonomous in a step-wise fashion, with procedures being divided into dexemes and surgemes. Dexemes are numerical representations of individual maneuvers; a grouping of dexemes that accomplishes a larger task is a surgeme. For instance, during a Whipple procedure (a resection of the pancreatic head that involves removal of the head of the pancreas, part of the stomach, the gallbladder, and the distal bile duct), the reconstruction connecting the stomach to the small intestine, known as a gastrojejunostomy, would be an example of a surgeme. The hole created in the stomach (gastrotomy) and the hole created in the small intestine (enterotomy) for insertion of the linear stapler would each be a dexeme; the stapled gastrointestinal anastomosis would be another, and the closure of the remaining common opening yet another. If a powered stapler with a sensor (Signia, Medtronic, Dublin, Ireland) were used to create the staple line between the stomach and small intestine, this would be an example of an autonomous dexeme. Much has been written on what has been done to arrive at more autonomy, with the most visually striking advances being in the field of computer vision[6,9].
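To make the dexeme/surgeme decomposition concrete, the following minimal Python sketch models the gastrojejunostomy example above as a surgeme composed of dexemes. The class names, the three illustrative autonomy grades (which are not the six-level autonomy scale), and the helper method are our own inventions for illustration, not an established ontology or any published implementation.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class Autonomy(IntEnum):
    """Illustrative autonomy grades for a single action (not the six-level surgical scale)."""
    MANUAL = 0       # performed entirely by the surgeon
    ASSISTED = 1     # device assists or supervises part of the action
    AUTONOMOUS = 2   # device performs the action itself, e.g., a sensor-equipped powered stapler


@dataclass
class Dexeme:
    """Smallest unit: a numerically describable maneuver (gastrotomy, enterotomy, ...)."""
    name: str
    autonomy: Autonomy = Autonomy.MANUAL


@dataclass
class Surgeme:
    """A grouping of dexemes that accomplishes a larger task."""
    name: str
    dexemes: list[Dexeme] = field(default_factory=list)

    def autonomous_fraction(self) -> float:
        """Fraction of dexemes performed with some degree of autonomy."""
        if not self.dexemes:
            return 0.0
        return sum(d.autonomy > Autonomy.MANUAL for d in self.dexemes) / len(self.dexemes)


# The gastrojejunostomy example from the text: four dexemes, one of which
# (the stapled anastomosis with a sensor-equipped powered stapler) is autonomous.
gastrojejunostomy = Surgeme(
    name="gastrojejunostomy",
    dexemes=[
        Dexeme("gastrotomy"),
        Dexeme("enterotomy"),
        Dexeme("stapled gastrointestinal anastomosis", Autonomy.AUTONOMOUS),
        Dexeme("closure of the common opening"),
    ],
)

print(f"{gastrojejunostomy.name}: "
      f"{gastrojejunostomy.autonomous_fraction():.0%} of dexemes autonomous")
```

Representing procedures this way would let the autonomous share of an operation be tracked as individual dexemes are automated in the step-wise fashion described above.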

Ultimately, the decision to entrust surgical procedures entirely to autonomous robots requires careful evaluation of the risks and benefits, as well as the development of robust safety protocols, regulatory frameworks, and ethical guidelines. On the one hand, true autonomous robotic surgery holds the potential for enhanced precision, reduced human error, and improved surgical outcomes. It can also address limitations such as fatigue and hand tremor. On the other hand, concerns arise regarding the loss of human judgment, accountability, and the ability to respond to unforeseen circumstances. Another concern about entirely autonomous robots is their reliability: because robots act according to predefined programs, real-world situations can be more complex than those their programming anticipates, and the scientific community must remain aware of the potential for autonomous robot failures[10]. Because of this, collaboration between humans and robots, with surgeons overseeing and guiding the robotic systems, may provide a more pragmatic approach that combines the advantages of robotic technology with human expertise to ensure an optimal surgeon-robot-patient relationship.

Along with advancements in surgical technology, progress in the fields of genomics, radiomics, and pathomics has also provided valuable insights into surgical care. The suffix “-omics” derives from the Greek for “whole”. Since the Human Genome Project characterized the entire human genome (completed in 2003), whole-genome sequencing has become readily available, with the test now costing approximately $1,000. Radiomics is the field in which all radiologic images are analyzed, rather than a single image or a limited number of them; using deep learning architectures, radiomics can more powerfully predict the natural evolution of a benign or premalignant lesion and more accurately diagnose disease. Similarly, pathomics is the field in which all histopathologic images are analyzed. If genomics, radiomics, and pathomics exist, why not Surgomics[11]? What would something like Surgomics entail? Should it be limited to the operating room, or should it include components of genomics, radiomics, and pathomics whenever they impact the management of the surgical patient?
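As a toy illustration of what a radiomic “feature” is, the sketch below computes a few first-order, radiomics-style descriptors (mean, spread, skewness, histogram entropy) from the voxels inside a lesion mask, using NumPy alone. The image and mask are random stand-ins; a real pipeline would read DICOM series and validated segmentations and would typically use a dedicated radiomics toolkit rather than this hand-rolled function.

```python
import numpy as np


def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Toy first-order, radiomics-style features of the voxels inside a lesion mask."""
    voxels = image[mask > 0].astype(float)
    counts, _ = np.histogram(voxels, bins=32)
    p = counts / counts.sum()
    p = p[p > 0]
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "skewness": float(((voxels - voxels.mean()) ** 3).mean() / (voxels.std() ** 3 + 1e-9)),
        "entropy_bits": float(-np.sum(p * np.log2(p))),
        "n_voxels": int(voxels.size),
    }


# Synthetic stand-in for one arterial-phase CT slice and a lesion mask;
# real inputs would come from a DICOM series and a segmentation.
rng = np.random.default_rng(0)
ct_slice = rng.normal(60, 15, size=(128, 128))       # Hounsfield-unit-like intensities
lesion_mask = np.zeros((128, 128), dtype=np.uint8)
lesion_mask[40:70, 50:85] = 1

print(first_order_features(ct_slice, lesion_mask))
```

Deep learning approaches go a step further by learning such descriptors directly from the full image stack rather than pre-specifying them.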

Ideally, Surgomics seeks to integrate all available patient data that influences surgical outcomes, including the other “omics” fields mentioned previously. While each field focuses on a different type of data (genetic, imaging, or histopathological), they all aim to identify patterns, correlations, and predictive markers that can improve patient care. By leveraging AI technologies, we can turn this deluge of data into collective insight into the surgical field and into practical tools for surgical problem solving. By including, for example, genetic variants associated with adverse surgical events, such as those involved in blood clotting or anastomotic wound healing, surgeons can tailor their surgical approach, perioperative care, and medication selection accordingly. By understanding the radiomic landscape of a tumor, we can tailor the extent of surgery, identify potential early metastatic risk, and consider the need for adjuvant therapies, ultimately improving outcomes in cancer patients.
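To illustrate the kind of multimodal integration described here, the following sketch trains a single risk model on a synthetic per-patient table whose columns mix genomic variants, radiomic features, and clinical variables. All column names, effect sizes, and data are fabricated for illustration only; the point is the pattern of pooling heterogeneous “omics” features into one predictor, not any validated Surgomics model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500

# Synthetic per-patient table: one column block per "omics" source plus clinical data.
data = pd.DataFrame({
    "variant_thrombophilia": rng.integers(0, 2, n),     # genomic: clotting-related variant
    "variant_wound_healing": rng.integers(0, 2, n),     # genomic: healing-related variant
    "radiomic_entropy": rng.normal(4.0, 0.8, n),        # radiomic texture feature
    "radiomic_volume_cm3": rng.lognormal(2.0, 0.6, n),  # radiomic size feature
    "age": rng.normal(65, 10, n),                       # clinical
    "asa_class": rng.integers(1, 5, n),                 # clinical
})

# Fabricated outcome: anastomotic complication risk driven by a mix of the sources.
logit = (1.2 * data["variant_wound_healing"] + 0.4 * data["radiomic_entropy"]
         + 0.03 * (data["age"] - 65) - 3.0)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = GradientBoostingClassifier()
auc = cross_val_score(model, data, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC on synthetic data: {auc.mean():.2f}")
```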

AI excels at this kind of integration, applying machine learning techniques to identify optimal classifications and/or regressions and to help uncover hidden correlations. Traditional statistical analysis methods, by contrast, often rely on predefined hypotheses, limiting the exploration of huge amounts of data. Although particular attention must be paid to avoid spurious findings driven by biased training data, AI-based approaches such as deep learning enable data-driven discoveries and could provide a more comprehensive understanding of surgical outcomes, risk factors, and treatment strategies. This comprehensive analysis may enable surgeons to make more evidence-based decisions, personalize treatment plans, and predict potential complications. Nonetheless, the widespread use of genetic data, radiological images, and histopathological information could potentially expose patients to privacy breaches, genetic discrimination, and misuse of sensitive health information. As a result, stricter regulations may need to be enacted to safeguard patient information; at the same time, the need for these regulations should not hinder progress that could potentially help countless patients[12]. It is crucial to create public image databases of patients with specific diseases and make them accessible to researchers. Unfortunately, at the time of this Commentary, there is only one open-access database meeting these criteria for liver diseases, which highlights the perils of overzealous patient privacy legislation[13].

In the past, surgical data analysis faced several limitations that the advent of AI may now surmount, supporting the field of Surgomics. One significant limitation was the sheer volume and complexity of surgical data, which made it challenging for researchers to analyze effectively and to extract meaningful insights; AI overcomes this by leveraging advanced algorithms and computing power to process vast amounts of surgical and non-surgical data quickly and accurately. Another limitation was the difficulty of identifying subtle patterns and correlations within the data; here, machine learning techniques can detect intricate relationships and uncover hidden correlations. Finally, traditional statistical analysis methods often rely on predefined hypotheses, limiting the exploration of complex, non-linear relationships, whereas AI-based approaches such as deep learning enable data-driven discoveries and a more comprehensive understanding of surgical outcomes, risk factors, and treatment strategies. By overcoming these limitations, AI empowers surgeons to leverage the full potential of Surgomics, leading to advancements in surgical practice and enhanced patient outcomes.
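A small, contrived example of the “predefined hypothesis” limitation: in the synthetic data below, the outcome is driven by the interaction of two risk factors rather than by either one alone, so a simple additive logistic model performs near chance while a tree ensemble recovers the pattern without being told to look for it. The variables and effect sizes are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 1000

# Two fabricated binary risk factors whose *interaction* (not either one alone)
# drives the complication: a pattern a pre-specified additive model misses.
x1 = rng.integers(0, 2, n)      # e.g., a genomic variant (present/absent)
x2 = rng.integers(0, 2, n)      # e.g., a high-risk radiomic phenotype
X = np.column_stack([x1, x2])
y = np.where(x1 ^ x2, rng.random(n) < 0.7, rng.random(n) < 0.2).astype(int)

for name, model in [("logistic regression", LogisticRegression()),
                    ("random forest", RandomForestClassifier(n_estimators=200))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")
```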

An additional, and perhaps more damning, limitation is the quality of the data that can be obtained; this was best articulated in an article entitled “Garbage In, Garbage Out - Words of Caution on Big Data and Machine Learning in Medical Practice” that appeared in JAMA Health Forum in 2023[14]. The question is, have we arrived at the inflection point where we can finally obtain quality data? We believe that we have, and that there is no better field in which to test this hypothesis than surgery.

Surgomics is emerging as a new and potentially transformative tool in surgical decision-making. As the field continues to evolve, its integration into routine clinical practice holds tremendous potential to reshape the future of surgical interventions. A group at a Japanese university has recently shown that a deep learning architecture applied to single arterial-phase CT images of hepatocellular carcinoma can differentiate patients who will recur within 2 years from those who will recur after 2 years[15]. Similar methods have been used to predict the probable location of recurrence of colorectal liver metastases. These two examples could provide surgeons with valuable information that would significantly impact the decision to operate and the type of surgery ultimately performed. Interpretation of intraoperative video recordings could help with training, while intraoperative augmented reality could add safety features that reduce the risk of surgical complications such as unintended injuries to adjacent structures and missed injuries. Video analysis of intraoperative recordings with fluorescent indocyanine green (ICG) or cancer-specific dyes could enable intraoperative staging, identification of lymph node metastases, and better assessment of margin status or occult peritoneal/pleural carcinomatosis and sarcomatosis.
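For orientation only, the following PyTorch sketch shows the general shape of a CT-based recurrence classifier like the one described above: a standard ResNet-18 backbone adapted to single-channel CT patches, with a binary early-versus-late recurrence head. This is not the published Japanese model; the input size, preprocessing, and training details are placeholders, and the batch here is random data standing in for real, intensity-normalized tumor patches.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Minimal sketch (not the published model): a ResNet-18 backbone, trained from
# scratch, adapted to single-channel arterial-phase CT patches, with a binary
# head for early (< 2 years) vs late (>= 2 years) recurrence.
model = resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a random stand-in batch.
images = torch.randn(8, 1, 224, 224)     # batch of CT tumor patches
labels = torch.randint(0, 2, (8,))       # 0 = late recurrence, 1 = early recurrence

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"toy training-step loss: {loss.item():.3f}")
```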

In the coming years, we will see new initiatives that incorporate Surgomics into large-scale data to gain valuable insights into disease mechanisms, patient characteristics, and treatment outcomes. For example, the aim of the AiRGOS Project is to develop an AI algorithm that enables clinicians to deliver precision, personalized therapeutic treatments for cancer patients after, and ultimately before, surgery. Such an algorithm would combine radiologic imaging (CT, PET/CT, MRI), whole genomic sequencing (WGS) of tumor tissue, and chemotherapy/immunotherapy/radiation therapy regimen data with therapeutic responses and long-term survival data. Machine learning and deep learning architectures would then link the individual’s entire tumor genomic profile to the radiologic data from all cross-sectional images of the tumor, providing a precision therapeutic plan to aid clinical decision making.
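As the text notes, such an algorithm does not yet exist. One common way to structure it would be late fusion, sketched below, in which an imaging embedding, a per-gene WGS feature vector, and clinical/treatment variables are encoded separately and then concatenated to predict, for example, therapeutic response. Every dimension, layer size, and input here is a placeholder of ours, not an AiRGOS specification.

```python
import torch
import torch.nn as nn


class LateFusionNet(nn.Module):
    """Illustrative late-fusion model: imaging, genomic and clinical branches are
    embedded separately, concatenated, and mapped to a treatment-response
    probability. All dimensions are placeholders, not AiRGOS specifications."""

    def __init__(self, img_dim=512, genomic_dim=25000, clinical_dim=32):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU())
        self.gen_branch = nn.Sequential(nn.Linear(genomic_dim, 128), nn.ReLU())
        self.cli_branch = nn.Sequential(nn.Linear(clinical_dim, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128 + 128 + 32, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, img_emb, genomic, clinical):
        fused = torch.cat([self.img_branch(img_emb),
                           self.gen_branch(genomic),
                           self.cli_branch(clinical)], dim=1)
        return torch.sigmoid(self.head(fused))


# Random stand-in batch: an imaging embedding (e.g., from a CNN over all
# cross-sectional phases), a per-gene feature vector from WGS, and clinical data.
model = LateFusionNet()
p_response = model(torch.randn(4, 512), torch.randn(4, 25000), torch.randn(4, 32))
print(p_response.shape)   # torch.Size([4, 1])
```

For survival endpoints, a time-to-event head (for example, a Cox-style loss) would likely replace the single sigmoid output shown here.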

Currently, some centers analyze panels of up to 505 known genes, but the entire human genome contains approximately 25,000 genes. Additionally, although some centers have begun creating algorithms that analyze single preoperative images, none has begun analyzing all preoperative images, including non-contrast, arterial, and venous phase cuts of the entire tumor. Interestingly, we do not know how computer vision algorithms preoperatively predict which tumors will recur early and which will recur late; this is known as the “black box phenomenon” and remains an area of active concern. Regardless, combining the massive amounts of data from entire human genomes with computer vision analysis of 3-dimensional reconstructions of cross-sectional images (e.g., a preoperative pancreatic-protocol CT scan of the abdomen) and with preoperative and outcome data may allow us to stop guessing which adjuvant regimens to utilize.

It is postulated that deep learning algorithms combining clinical images with clinical data may currently represent a major way to improve survival and cure rates[16]. Initial studies will focus on patients with primary pancreatic cancer, but future studies may enable us to impact neoadjuvant treatment. Ultimately, our hope is that this algorithm can be expanded to secondary tumors of the hepato-pancreato-biliary system, as well as other tumors throughout the body. In effect, this project would create an intelligent Tumor Board that makes decisions based on an analysis of all of the available data, not just some of it: medical decision making that would be truly personalized.

DECLARATIONS

Acknowledgements

The authors would like to thank Stephen Song for his technical support in the conceptualization of this manuscript. Andrew A. Gumbs is the CEO of Talos Surgical, which is developing the AiRGOS project.

Authors’ contributions

Conceptualization, drafting of the manuscript, editing of the manuscript, technical support, administrative support: Gumbs AA

Editing of manuscript, administrative support: Croner R, Abu-Hilal M

Editing of manuscript, technical support: Bannone E

Conceptualization, editing of the manuscript: Ishizawa T

Conceptualization: Frigerio I, Siriwardena A

Conceptualization, editing of manuscript: Spolverato G

Conceptualization, drafting of manuscript, editing of manuscript, administrative support: Messaoudi N

Availability of data and materials

Not applicable.

Financial support and sponsorship

None.

Conflicts of interest

All authors declared that there are no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2023. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

REFERENCES

1. Le VH, Kha QH, Minh TNT, Nguyen VH, Le VL, Le NQK. Development and validation of CT-based radiomics signature for overall survival prediction in multi-organ cancer. J Digit Imaging 2023;36:911-22.

2. Lam LHT, Do DT, Diep DTN, et al. Molecular subtype classification of low-grade gliomas using magnetic resonance imaging-based radiomics and machine learning. NMR Biomed 2022;35:e4792.

3. Gumbs AA, Croner R, Chouillard E. Is robotic pancreatic surgery finally ready for prime-time? Hepatobiliary Surg Nutr 2020;9:650-3.

4. Gumbs AA, De Simone B, Chouillard E. Searching for a better definition of robotic surgery: is it really different from laparoscopy? Mini-Invasive Surg 2020;4:90.

5. Gumbs AA, Perretta S, d’Allemagne B, Chouillard E. What is artificial intelligence surgery? Art Int Surg 2021;1:1-10.

6. Gumbs AA, Frigerio I, Spolverato G, et al. Artificial intelligence surgery: how do we get to autonomous actions in surgery? Sensors 2021;21:5526.

7. Gumbs AA, Abu-Hilal M, Tsai TJ, Starker L, Chouillard E, Croner R. Keeping surgeons in the loop: are handheld robotics the best path towards more autonomous actions? (A comparison of complete vs. handheld robotic hepatectomy for colorectal liver metastases). Art Int Surg 2021;1:38-51.

8. Gumbs AA, Gayet B. Why artificial intelligence surgery (AIS) is better than current robotic-assisted surgery (RAS). Art Int Surg 2022;2:207-12.

9. Gumbs AA, Grasso V, Bourdel N, et al. The advances in computer vision that are enabling more autonomous actions in surgery: a systematic review of the literature. Sensors 2022;22:4918.

10. Misaros M, Stan OP, Donca IC, Miclea LC. Autonomous robots for services-state of the art, challenges, and research areas. Sensors 2023;23:4962.

11. Wagner M, Brandenburg JM, Bodenstedt S, et al. Surgomics: personalized prediction of morbidity, mortality and long-term outcome in surgery using machine learning on multimodal data. Surg Endosc 2022;36:8568-91.

12. Capelli G, Verdi D, Frigerio I, et al. White paper: ethics and trustworthiness of artificial intelligence in clinical surgery. Art Int Surg 2023;3:111-22.

13. Wakabayashi T, Ouhmich F, Gonzalez-Cabrera C, et al. Radiomics in hepatocellular carcinoma: a quantitative review. Hepatol Int 2019;13:546-59.

14. Teno JM. Garbage in, garbage out - words of caution on big data and machine learning in medical practice. JAMA Health Forum 2023;4:e230397.

15. Kinoshita M, Ueda D, Matsumoto T, et al. Deep learning model based on contrast-enhanced computed tomography imaging to predict postoperative early recurrence after the curative resection of a solitary hepatocellular carcinoma. Cancers 2023;15:2140.

16. Taher H, Grasso V, Tawfik S, Gumbs A. The Challenges of deep learning in artificial intelligence and autonomous actions in surgery: a literature review. Art Int Surg 2022;2:144-58.


