Review  |  Open Access  |  7 Jan 2026

AI in hand surgery: hype or helpful? A comprehensive survey of emerging technologies

Plast Aesthet Res. 2026;13:1.
10.20517/2347-9264.2025.80 |  © The Author(s) 2026.

Abstract

Artificial intelligence (AI) is beginning to reshape the landscape of hand surgery, but most clinical evidence still originates from radiology and other surgical specialties. This literature survey provides a comprehensive overview of current and near-term AI applications in the field. Presently, AI contributes mainly to diagnosis and imaging - identifying subtle fractures, nerve compressions, and vascular anomalies that may elude human detection, with adjuncts for nerve and perfusion studies - and to surgical planning via AI-assisted 3D reconstructions, while intraoperative platforms such as augmented reality (AR) microscopes and robotics are largely adapted from neurosurgery and spine surgery and only emerging in hand surgery. While many of these visualization platforms themselves are not AI, they increasingly integrate AI-based modules for image processing and real-time data overlay. Early postoperative risk-stratification models (e.g., stiffness, infection, complex regional pain syndrome) and digital rehabilitation are promising but require prospective, multi-center validation. Additionally, AI-driven tools streamline operative documentation and empower patient education through conversational agents. Looking ahead, developments such as implantable micro-sensors for real-time anastomosis monitoring, AI-guided perforator mapping, and miniaturized AR-assisted visualization promise to further transform practice. However, challenges persist - from limited datasets and the need for external validation, to high costs, regulatory hurdles, and ethical concerns surrounding data privacy and algorithm transparency. Achieving the sub-millimeter precision required for safe surgical implementation remains one of the most critical technical challenges. Emphasizing explainable AI and maintaining the surgeon’s central role in decision-making will be crucial to safe implementation.
Ultimately, the convergence of AI, advanced imaging, robotics, and microsurgical techniques holds significant promise to elevate precision, outcomes, and patient-centered care in hand surgery.

Keywords

Artificial intelligence, hand surgery, microsurgery, diagnostic imaging, surgical planning, robotics, large language models (LLMs), ethical considerations

INTRODUCTION

Artificial intelligence (AI) is no longer a futuristic concept - it has become an integral part of modern medicine. In radiology and pathology, AI already supports diagnoses daily. In surgery, its reach is expanding, from planning complex procedures to guiding hands in the operating room. However, in subspecialties such as hand surgery - where anatomy is intricate, data are scarce, and patient outcomes hinge on tiny structures - the integration of AI is still in its early days. It is also important to distinguish between AI-driven algorithms and other advanced but non-AI technologies such as robotic systems or augmented reality (AR) microscopes, which may incorporate AI-based functions without being AI themselves.

Hand surgery demands an exceptional synthesis of anatomical knowledge, careful planning and highly refined microsurgical technique to restore the intricate coordination of bones, joints, tendons, nerves, vessels, and skin. Even minor inaccuracies can have profound functional consequences, affecting not just motion but also fine sensory feedback and quality of life. Moreover, many hand procedures involve reconstructing structures only millimeters in diameter, underscoring the critical need for precision at every step. AI has the potential to enhance these processes, from interpreting complex imaging to assisting in surgical planning and postoperative rehabilitation, but current implementations remain limited and largely experimental.

Amidst the growing excitement about AI, a thorough and critical examination remains imperative. Hand surgery encompasses a wide spectrum of conditions, many of which are relatively uncommon, posing challenges to developing robust, generalizable AI models. Ethical considerations around data privacy, algorithmic bias, and medicolegal accountability add layers of complexity that must be thoughtfully addressed. Ultimately, AI in hand surgery should be viewed as a promising adjunct rather than a replacement for surgical expertise, requiring continued validation and careful clinical integration[1-3].

This raises an important question: Is AI in hand surgery currently more hype than real help? This literature survey seeks to answer that by providing a comprehensive overview of both current and emerging applications of AI in hand surgery. Additionally, it highlights future directions, practical limitations, ethical considerations, and priorities for research, offering a roadmap for integrating AI responsibly into this demanding and evolving field.

CURRENT APPLICATIONS OF AI IN HAND SURGERY

AI is still in the early stages of integration into hand surgery, yet developments are accelerating across the field. Several of the technologies discussed in this section have not yet been reported in dedicated hand-surgery studies but are included because the underlying concepts are technically transferable and have shown benefit in related surgical fields such as neurosurgery, orthopedics and reconstructive microsurgery. Their inclusion serves to highlight emerging opportunities for translation rather than established clinical practice in hand surgery. Current applications focus on several key areas:

· Diagnosis & imaging: AI supports the detection of fractures, nerve compressions, and vascular pathology, often identifying subtle findings that may be overlooked by the human eye.

· Surgical planning: algorithms reconstruct detailed 3D anatomical models from imaging data, aiding in precise preoperative mapping.

· Intraoperative assistance & robotics: from AR microscopes that overlay critical structures to AI-driven perfusion assessments and robotic systems reducing tremor, these tools assist surgeons directly during procedures.

· Postoperative monitoring & rehabilitation: machine learning helps predict complications such as stiffness or complex regional pain syndrome (CRPS), while virtual reality (VR) and AR platforms are enhancing patient engagement and functional recovery.

· Data-driven decision support & documentation: large language models (LLMs) and predictive analytics support clinical decision-making, quality improvement initiatives, and the automation of operative documentation.

Each of these areas is discussed in detail in the following sections[3,4]. As noted above, some of these technologies have no direct applications reported in hand surgery yet; they appear under current applications because the underlying systems are already established in related surgical fields. Given the anatomical scale and microsurgical demands of hand procedures, only modest adaptations may be needed to translate these technologies effectively, making them immediately relevant for consideration in this specialty.

To provide a concise overview of how AI is presently being integrated into hand surgery, Table 1 summarizes the principal domains discussed in the preceding sections, along with illustrative tools and anticipated clinical benefits. This structured representation highlights the breadth of applications, ranging from diagnostic imaging to documentation.

Table 1

Current applications of AI in hand surgery

Area | Examples/tools discussed | Potential benefits | Current state of evidence | Key representative references
Diagnosis & imaging | CNNs & YOLO for fractures; AI in nerve & perfusion imaging | Improved sensitivity in detecting subtle pathology | Early clinical hand-surgery studies; requires multi-center validation | Ryhänen et al., 2025[4]; Yang et al., 2022[7]; Kalmet et al., 2020[8]; Dababneh et al., 2024[9]; Di Cosmo et al., 2022[10]; Faeghi et al., 2021[11]; Lin et al., 2022[13]
Surgical planning | AI-driven 3D reconstructions from CT/MRI; patient-specific guides for scaphoid fixation | Enhanced pre-op mapping of bones, tendons, vessels; greater precision in osteotomy and implant placement | Demonstrated in research and limited clinical pilots in hand and orthopedic surgery | Schweizer et al., 2016[15]; Wagner et al., 2024[16]; Wirth et al., 2025[17]
Intraoperative assistance & robotics | AI-AR microscopes (KINEVO, ARveo); robotic platforms (Microsure MUSA, Symani) | Enhanced visualization of vessels and nerves; tremor reduction | Primarily validated in neurosurgery and reconstructive microsurgery; early translation to hand surgery underway | Ichikawa et al., 2022[18]; Rodriguez-Unda et al., 2022[19]; Ernst et al., 2023[20]; Brauckmann et al., 2024[21]; Besmens et al., 2024[22]; Cannizzaro et al., 2024[23]; van Mulken et al., 2020[24]; van Mulken et al., 2022[25]; Kueckelhaus et al., 2025[26]; Malzone et al., 2023[27]
Postoperative monitoring & rehabilitation | ML models for predicting stiffness/CRPS; VR/AR rehabilitation platforms and serious games | Personalized rehabilitation, early risk stratification, improved patient engagement | Early retrospective and prospective studies; further validation needed in hand surgery | Pereira et al., 2020[52]; Bressler et al., 2024[31]; Prahm et al., 2025[32]
Decision support & documentation | LLMs (ChatGPT, Gemini); automated operative reports from surgical video analysis; AI chatbots for patient education | Streamlined workflows, decision support, improved patient communication and documentation accuracy | Experimental phase in hand surgery; successful implementation in other surgical specialties | AlShenaiber et al., 2024[33]; Pressman et al., 2024[34]; Demir et al., 2023[35]; Perkins et al., 2024[36]; Khanna et al., 2025[37]; Wyles et al., 2023[38]; Harvey et al., 2025[39]; Will et al., 2025[40]

Table 1 summarizes key areas where AI is actively being explored or implemented, with examples of technologies and their potential contributions to improving diagnosis, planning, intraoperative performance, postoperative care, and documentation.

Diagnosis & imaging

Accurate diagnosis is foundational in hand surgery, yet it remains challenging due to overlapping structures and the fine anatomical detail required. Several studies have investigated the use of convolutional neural networks (CNNs) and object detection models, such as YOLO (You Only Look Once), for detecting distal radius and scaphoid fractures on X-rays and computed tomography (CT) scans[5-8]. Ryhänen et al. and Dababneh et al. reported that these models often achieve sensitivities and specificities comparable to, or in some cases exceeding, those of human observers, particularly in detecting subtle or occult fractures[4,9].
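Object detectors such as YOLO typically emit many overlapping candidate boxes per image, which are reduced by non-maximum suppression (NMS) before any sensitivity or specificity figures are computed. The sketch below illustrates that standard post-processing step; the boxes, scores, and thresholds are invented, not taken from the cited studies.

```python
# Illustrative sketch: non-maximum suppression, the standard post-processing
# step for YOLO-style detectors. Boxes are (x1, y1, x2, y2) in pixels;
# the fracture candidates below are hypothetical.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def non_max_suppression(detections, iou_threshold=0.5):
    """Keep the highest-confidence box among heavily overlapping candidates."""
    detections = sorted(detections, key=lambda d: d["score"], reverse=True)
    kept = []
    for det in detections:
        if all(iou(det["box"], k["box"]) < iou_threshold for k in kept):
            kept.append(det)
    return kept

# Two overlapping candidates for the same suspected scaphoid fracture,
# plus one distinct distal-radius candidate (coordinates are made up):
candidates = [
    {"box": (10, 10, 50, 50), "score": 0.91},
    {"box": (12, 12, 52, 52), "score": 0.85},  # overlaps the first box
    {"box": (100, 80, 140, 120), "score": 0.78},
]
print(non_max_suppression(candidates))  # keeps the first and third boxes
```

Reported per-fracture sensitivities implicitly depend on such thresholds, which is one reason figures are hard to compare across studies.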

AI is not limited to bones. Early work suggests machine learning can trace nerve pathways on magnetic resonance imaging (MRI) or high-resolution ultrasound, potentially catching compressions or injuries earlier. Even for carpal tunnel syndrome, typically diagnosed with nerve conduction studies, combining clinical data with ultrasound and AI pattern recognition could improve diagnostic accuracy[10,11].

AI is also being integrated into perfusion imaging. Quantitative computed tomography angiography (CTA) and intraoperative indocyanine green (ICG) fluorescence imaging provide critical information for assessing vascular integrity. AI algorithms could enhance interpretation by generating objective perfusion maps and reducing reliance on the surgeon’s eye alone[12].

Beyond detection of fractures and nerve pathologies, AI also holds potential in the early recognition of soft tissue injuries, which are frequently underdiagnosed in hand trauma. Machine learning algorithms trained on musculoskeletal MRI have shown promise in delineating partial tendon tears, ligamentous disruptions, and even subtle synovitis patterns[13]. This capability is particularly valuable in cases of chronic wrist pain where conventional imaging often yields inconclusive results. However, these approaches still require rigorous validation. A persistent challenge is balancing sensitivity and specificity - models highly tuned to detect minimal changes may inadvertently increase false positives, leading to unnecessary interventions. Furthermore, ensuring that AI systems generalize across diverse imaging protocols and hardware remains a significant hurdle[14].
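The sensitivity/specificity trade-off described above ultimately comes down to where the decision threshold is placed on a model's output scores. A minimal sketch with invented scores and labels:

```python
# Minimal sketch of the sensitivity/specificity trade-off: the same model
# outputs yield different operating points depending on the threshold.
# Scores and labels below are hypothetical, not data from any cited study.

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of thresholded scores (label 1 = injury)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.20, 0.70, 0.10]
labels = [1,    1,    1,    1,    0,    0,    0,    0]

# A permissive threshold catches every tear but admits false positives:
print(sens_spec(scores, labels, 0.35))  # (1.0, 0.75)
# A strict threshold does the opposite:
print(sens_spec(scores, labels, 0.75))  # (0.5, 1.0)
```

A model "highly tuned to detect minimal changes" corresponds to the permissive setting, which is exactly where unnecessary interventions can originate.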

As multi-institutional datasets grow, algorithms can be trained on more heterogeneous inputs, enhancing their robustness and clinical applicability. These generalizability challenges are particularly relevant in hand surgery, where imaging techniques and scanner parameters vary widely between institutions. Even subtle differences in acquisition settings can impact algorithm performance. Embedding strategies such as cross-center validation, domain adaptation, and synthetic data generation will therefore be critical to achieve consistent diagnostic accuracy across diverse clinical environments.

Surgical planning

Optimal surgical planning is crucial for both function and aesthetics in hand surgery. Given the complex anatomy of the wrist and hand, careful preoperative mapping of bones, tendons, and vessels is essential. Technologies such as AI-driven 3D reconstructions from CT or MRI data - already successfully employed in preoperative planning for spine and orthopedic procedures to automatically delineate bones and vascular structures - demonstrate a high level of technical feasibility for translation to hand surgery. However, their clinical adoption in hand surgery remains limited, with most applications currently in the research or prototype stage and awaiting validation in prospective clinical settings.

Recent hand-surgery studies have already applied 3D reconstructions and patient-specific guides for scaphoid reconstruction and fixation, improving reduction accuracy and screw placement compared with conventional techniques[15-17]. Given the anatomical complexity of the hand, such AI-supported segmentation and reconstruction pipelines could assist in planning osteotomies for scaphoid nonunion or complex carpal malalignments and guiding the precise placement of Herbert screws or custom osteotomy wedges. These examples illustrate that AI-augmented 3D modeling is no longer theoretical but already technically feasible in hand surgery, even if routine clinical use remains limited.

Intraoperative assistance & robotics

Microsurgical procedures are a cornerstone of reconstructive hand surgery, whether repairing digital arteries or performing nerve grafts or free flaps. These fine surgeries require not only steady hands but also precise visualization of small-caliber blood vessels and small neural structures. Advanced imaging technologies such as 3D visualization and AR microscopes - most widely adopted in neurosurgery and spine surgery - offer substantial enhancements in intraoperative visualization. While these platforms themselves are not AI, they often integrate AI algorithms to optimize usability, process imaging data, and highlight critical structures. For example, systems such as the Zeiss KINEVO 900 and Leica ARveo use AI-enhanced modules to project real-time data overlays that can delineate perfusion zones, vessels, or nerve pathways[18].

Although most evidence originates from neurosurgery, early reports in upper-extremity and peripheral-nerve surgery demonstrate the translational potential of these exoscope platforms for hand procedures. Case reports describe exoscope-assisted revision carpal tunnel surgery with epineurolysis and hypothenar fat flap coverage, as well as peripheral-nerve reconstructions and targeted muscle reinnervation performed under a 3D robotic exoscope. These early experiences suggest that 3D exoscopes can provide ergonomic and optical advantages compatible with the fine-scale requirements of hand microsurgery, supporting their prospective evaluation in this field[19-22].

Robotic systems tailored for microsurgery - including MUSA (Microsure), Symani, and adaptations of the Da Vinci platform - have demonstrated the ability to reduce physiological tremor and scale surgeon movements, facilitating suturing of vessels and nerves with diameters below 1 mm[22,23]. Clinical studies have documented successful robot-assisted lymphovenous anastomoses of 0.3-0.8 mm in human patients, with durable one-year outcomes confirming safety and patency[24,25]. More recently, robotic-assisted procedures have also been reported in peripheral nerve coaptation, extremity replantation, and free-flap reconstruction, further supporting translation to hand and reconstructive microsurgery[20,22,26]. Complementary pre-clinical studies demonstrate comparable patency and reduced tissue trauma relative to manual suturing[27]. Collectively, these reports highlight that robotic assistance already enables sub-millimeter suturing in clinical settings and represents a promising adjunct for future hand and digital microsurgical applications.
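The motion scaling and tremor reduction attributed to these platforms can be illustrated conceptually. The sketch below combines a fixed scaling factor with a simple exponential moving average as a stand-in for the proprietary filtering of real systems; all values are arbitrary.

```python
# Conceptual sketch of two core robotic-microsurgery features: motion
# scaling and tremor filtering. An exponential moving average stands in
# for the proprietary filters of platforms like MUSA or Symani.

def scale_and_filter(displacements, scale=0.1, alpha=0.3):
    """Scale hand displacements down and low-pass filter them.

    displacements: raw surgeon hand movement per time step (mm)
    scale:         motion-scaling factor (e.g., 10:1 -> 0.1)
    alpha:         smoothing factor; lower = stronger tremor suppression
    """
    out, smoothed = [], 0.0
    for d in displacements:
        smoothed = alpha * d + (1 - alpha) * smoothed
        out.append(smoothed * scale)
    return out

# Alternating high-frequency tremor superimposed on a small drift:
raw = [1.0, -0.8, 1.1, -0.7, 1.2, -0.6]
tool = scale_and_filter(raw)
# Instrument-tip excursions stay far below the raw hand movement:
assert all(abs(t) < 0.2 for t in tool)
```

Real controllers use far more sophisticated filtering, but the principle - attenuate amplitude, suppress high-frequency components - is the same one that enables suturing of sub-millimeter vessels.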

Postoperative monitoring & rehabilitation

Optimal recovery following hand surgery depends on early detection of complications and individualized rehabilitation. Machine learning models trained on electronic health records (EHRs) have been explored to predict risks such as postoperative stiffness, infections, or CRPS, allowing for targeted early interventions[28]. Although most CRPS prediction models have been developed in non-hand contexts, such as post-stroke cohorts[29], they demonstrate the feasibility of algorithmic risk modeling. In hand surgery, clinical predictors such as early intense pain after distal radius fracture remain the most validated indicators and could inform future hybrid machine learning-clinical frameworks[30].
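The hybrid machine learning-clinical framework suggested above could, in its simplest form, be a logistic model over a few perioperative variables. The sketch below is purely illustrative: the coefficients are invented and carry no clinical validity; it only shows how a validated predictor such as early intense pain would dominate such a score.

```python
import math

# Hypothetical sketch of a hybrid clinical/ML risk model for CRPS after
# distal radius fracture. All coefficients are invented for illustration.
# The literature supports early intense pain as a predictor, which is why
# it carries the largest weight here.

def crps_risk(pain_score_day3, age, immobilization_weeks,
              coef_pain=0.55, coef_age=0.02, coef_immob=0.15, intercept=-6.0):
    """Logistic model: predicted probability of CRPS from simple inputs."""
    z = (intercept + coef_pain * pain_score_day3
         + coef_age * age + coef_immob * immobilization_weeks)
    return 1 / (1 + math.exp(-z))

low = crps_risk(pain_score_day3=2, age=45, immobilization_weeks=4)
high = crps_risk(pain_score_day3=9, age=45, immobilization_weeks=6)
assert high > low  # intense early pain raises the predicted risk
```

A deployed model would learn such coefficients from multi-center EHR data and require external validation before informing any targeted intervention.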

VR and AR tools are also being investigated to enhance rehabilitation adherence and functional outcomes. Bressler et al. demonstrated the benefits of VR-assisted exercises in improving range of motion and patient engagement[31]. Mixed reality approaches have been further employed to address phantom limb phenomena, suggesting potential applications in sensory retraining after complex reconstructions. Importantly, these digital platforms generate data that can be used to iteratively refine AI algorithms, enabling increasingly personalized rehabilitation pathways[31,32].

Data-driven decision support & documentation

AI is not just making a mark in imaging, surgical planning, or rehabilitation - it is also reshaping how decisions are made and documented in hand surgery. By harnessing large volumes of data - from EHRs to published research - AI is providing powerful tools to support clinical judgment and optimize care pathways. Recent studies have explored how LLMs such as ChatGPT and Gemini can help classify hand injuries, generate differential diagnoses, and even suggest treatment options in response to specific clinical scenarios. While still largely experimental, these tools show promise as real-time “second opinions,” particularly for less experienced surgeons or in rare, complex cases[33,34]. Well-documented limitations include the risk of hallucinated or factually incorrect outputs, variable reproducibility, and sensitivity to prompt phrasing, all of which underscore the need for careful clinician oversight and verification. Consequently, current use of LLMs in surgical decision support should be viewed as a research application rather than a clinical tool.

Natural language processing and deep learning systems are advancing automated documentation. For example, systems that analyze intraoperative videos can recognize procedural steps and automatically generate structured operative reports, achieving accuracy comparable to or exceeding human documentation[35-39].

Additionally, AI chatbots and virtual assistants are changing how surgeons communicate with patients: they can simplify patient education, deliver personalized information on surgical procedures, expected recovery timelines, and wound care, and foster adherence to rehabilitation protocols. Will et al. demonstrated how LLMs can make online patient education materials significantly easier to read and understand. This is particularly helpful in hand surgery, where patients often face complex postoperative regimens involving splints and carefully staged mobilization. Having reliable, patient-friendly content available around the clock can reduce anxiety and help patients stick to their therapy plans[40].
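Readability improvements of the kind Will et al. report are usually quantified with standard formulas. The sketch below applies the classic Flesch Reading Ease formula with a crude vowel-group syllable counter (real evaluations use more careful counters); the two sample texts are invented.

```python
import re

# Sketch: quantifying readability of patient-education text with the
# Flesch Reading Ease formula. The syllable heuristic (count vowel groups)
# is deliberately crude; the sample sentences are made up.

def syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Higher score = easier to read (roughly 0-100 for typical prose)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syl / len(words))

original = ("Postoperative immobilization of the digit is mandatory; "
            "noncompliance jeopardizes tendon integrity.")
simplified = "Keep your finger still after surgery. This protects the repair."

# Simplification should raise the score:
assert flesch_reading_ease(simplified) > flesch_reading_ease(original)
```

Scores like these give an objective way to check whether LLM rewrites actually reach recommended patient-education reading levels.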

FUTURE DIRECTIONS

The future of AI in hand surgery is likely to involve increasingly sophisticated integrations across imaging, intraoperative technology, and postoperative care. Given the rapid evolution of technology, Table 2 distills the most promising avenues for future research and clinical integration of AI in hand surgery. These examples underscore how advances in sensors, robotics, data analytics, and regenerative approaches may converge to further refine patient care.

Table 2

Future directions for AI integration in hand surgery

Area/concept | Description & examples | Potential impact | Technology readiness level
Implantable sensors & microchips | Biodegradable implants for real-time perfusion & infection monitoring | Early detection of thrombosis, ischemia | Preclinical/prototype stage
Integrated surgical cockpits | Unified AR overlays, instrument tracking, photoacoustic imaging | Safer, data-rich microsurgery | Concept/early development
AR-guided perforator mapping | AI auto-mapping vessels on imaging, projecting to operative field | Precision in flap planning & osteotomies | Experimental/proof of concept
AI-enhanced robotics & training | Semi-autonomous microsuturing, VR simulators fed by real data | Shorter learning curves, reproducibility | Early clinical evaluation
Remote monitoring & adaptive rehab | Smart splints/gloves, ML-adjusted protocols | Highly individualized recovery paths | Prototype devices under testing
Predictive hospital analytics | System-level ML for complications, staffing optimization | Efficient care delivery, risk mitigation | Limited clinical implementation
Federated learning & regenerative AI | Cross-institutional training, AI-aided bioprinting | Tailored grafts & nerve conduits | Research stage

Table 2 highlights emerging technologies and concepts that may shape the next generation of precision and personalized care in hand surgery. Technology Readiness Level (TRL) classification represents the authors’ estimation of the current maturity of each application based on the available clinical evidence, validation status, and level of real-world implementation reported in the literature. No standardized TRL assignment exists for AI applications in hand surgery.

Several promising directions merit attention:

· Advanced implantable monitoring for perfusion, thrombosis, and infection: This represents a promising but still experimental direction. Early engineering studies describe biodegradable microchips and implantable sensors that, combined with AI algorithms, could one day enable continuous monitoring of critical parameters such as blood flow at microvascular anastomosis sites, tissue oxygenation, and local temperature. This real-time surveillance holds the potential to detect early signs of thrombosis, ischemia, or even post-surgical infections. Previous proof-of-concept studies - such as a biodegradable implant antenna developed for post-surgical infection detection[41] and an implantable pH sensor readable via standard radiography[42] - demonstrate the technical feasibility of integrating micro- and nanotechnology with AI-driven data interpretation. While these innovations remain preclinical and have not yet been evaluated in microsurgical or hand-surgery settings, they illustrate a promising future direction for continuous postoperative monitoring. Such technologies could eventually enable earlier detection of perfusion compromise or infection in complex reconstructions, but their translation into clinical settings will require substantial further validation and interdisciplinary development.

· Integrated multimodal surgical cockpits: A major frontier involves the development of so-called “surgical cockpits” - integrated intraoperative environments that consolidate multiple data streams, including perfusion imaging, AR overlays, instrument tracking, and even emerging techniques such as photoacoustic imaging (PAI), into a unified real-time interface. Beyond large console systems, miniaturized solutions - such as embedding augmented-reality overlays and perfusion data directly into surgical loupes or AR glasses - should be viewed as a forward-looking vision rather than a current development. Although technically conceivable, they remain at an early conceptual stage and will require substantial advances in hardware miniaturization, energy efficiency, and data visualization before clinical implementation becomes feasible. AI-driven platforms could automatically map perforator vessels, overlay planned osteotomies, and issue alerts if dissection approaches critical structures such as the superficial palmar arch or digital nerves[43]. These systems aim not to replace the surgeon’s judgment, but to function as vigilant co-pilots, reducing cognitive load and enhancing safety during complex microsurgical procedures - democratizing access even in settings without expensive microscope setups.

· Perioperative planning and intraoperative guidance in complex hand reconstruction: Looking ahead, AI-driven tools and AR hold substantial potential to transform perioperative planning and intraoperative guidance in complex hand reconstructions. In challenging cases such as malunions or staged tendon transfers, these technologies could assist surgeons in determining precisely where to perform osteotomies, how much bone to resect, which implant to select, and how to position it to achieve optimal stability. Coupled with real-time AR overlays, they may even provide intraoperative visualization of planned cuts, fixation points, and anticipated biomechanical outcomes. As these systems continue to evolve, they could become invaluable in tailoring surgical interventions to the patient’s unique anatomy and functional demands, ultimately improving precision and outcomes. One particularly promising application may lie in thumb carpometacarpal (CMC I) arthroplasty. Accurate placement of the prosthetic cup within the trapezium is technically demanding due to the joint’s oblique axis and limited bone stock. An AI-enhanced AR system projected onto surgical loupes could guide the surgeon in real time, ensuring optimal implant alignment directly within the surgical field. Similarly, in procedures such as scaphoid nonunion fixation, intraoperative AR guidance based on real-time imaging data could assist in the optimal placement of Herbert screws or Kirschner wires. Currently, multiple fluoroscopic attempts are often needed to achieve proper alignment. By integrating live intraoperative imaging into the surgical loupes and aligning it with AI-predicted trajectories, such systems could reduce radiation exposure, improve accuracy, and shorten operative time. This example illustrates how AI-assisted visualization could deliver tangible, near-term benefits in hand surgery, improving procedural safety and efficiency.

· Enhanced perforator mapping and AR-guided surgery: AI-based tools that automate the identification of perforator vessels on preoperative imaging could streamline flap planning and reduce interobserver variability. Coupled with AR overlays, these systems may allow surgeons to visualize critical structures or planned osteotomies projected directly onto the operative field. While such approaches have primarily been investigated in craniofacial and spinal surgery[44], similar applications could have a high impact in hand and forearm reconstruction, particularly for flaps such as the radial forearm, posterior interosseous artery, or ulnar artery perforator flaps. Incorporating AI-assisted vessel mapping could enhance precision in flap design and potentially shorten operative time in these microsurgically demanding procedures.

· Robotics augmented by AI and data-driven training ecosystems: Next-generation microsurgical robotic systems are evolving beyond simple motion scaling to incorporate AI-powered instrument recognition, predictive motion smoothing, and even virtual haptic overlays. These capabilities may one day support semi-autonomous tasks such as AI-guided micro-suturing or dynamic tension adjustments informed by live perfusion feedback. Importantly, data captured during these procedures can fuel high-fidelity VR simulators that replicate actual surgical cases, providing immersive, personalized training environments that dramatically shorten learning curves and refine microsurgical skills. This approach represents more than an incremental improvement - it marks a paradigm shift in surgical education. By continuously capturing and analyzing real operative data, AI systems can create feedback loops that connect clinical performance with simulation training, enabling adaptive curricula tailored to individual surgeons. Such closed-loop systems have the potential to accelerate skill acquisition, standardize microsurgical training, and objectively assess competency across institutions.

· System-level learning and predictive optimization: AI-driven analytics are increasingly being applied to hospital-wide datasets to identify patterns in complications, optimize resource allocation and staffing, and inform quality improvement initiatives. As data aggregation becomes more robust, predictive models could dynamically adjust protocols or resource deployment to meet evolving clinical demands in hand surgery.

· AI-enabled remote monitoring and adaptive rehabilitation: In the realm of postoperative care, AI may also play a pivotal role in remote monitoring. Wearable sensors embedded in splints or gloves are being developed to track metrics such as range of motion, grip strength, and adherence to prescribed exercises. In procedures such as flexor tendon repair, where early yet controlled mobilization is critical to prevent adhesions, machine learning models could analyze continuous motion data to detect subtle deviations from expected recovery trajectories, flagging risks such as tendon adhesions or joint contractures well before they become clinically evident. Similarly, after surgical release for Dupuytren’s contracture, remote monitoring could help identify early signs of functional recurrence. These AI-driven platforms could dynamically adjust rehabilitation protocols in response to individual progress, offering a highly personalized roadmap to recovery. By engaging patients through interactive feedback and gamified exercises, these systems may also help overcome one of the enduring challenges in hand surgery: ensuring compliance with intensive rehabilitation regimens essential for optimal functional outcomes.

· AI-enhanced clinical decision-making and documentation: Looking ahead, AI’s utility is expected to extend far beyond the operating theater into broader management of hand conditions. Predictive analytics platforms trained on large-scale registries could uncover patterns in disorders such as Dupuytren’s contracture, advanced osteoarthritis, and complex nerve injuries, informing counseling and risk stratification. Simultaneously, AI-driven documentation tools that combine speech-to-text with contextual understanding may streamline workflows by automatically generating operative reports, clinic notes, and insurance documents based on verbal narratives and intraoperative feeds - freeing time for direct patient care. LLMs could also assist during consultations by offering plain-language explanations of diagnoses and treatment options, supporting shared decision-making in complex cases where patient values are paramount.

· System-level learning and predictive optimization: Finally, AI-driven analytics applied at the health system level could identify patterns in complications, guide resource allocation, and shape quality improvement initiatives. As data aggregation becomes more robust, predictive models could dynamically adjust protocols and staffing to meet evolving clinical demands in hand surgery.
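To make the remote-monitoring idea above concrete, the sketch below flags postoperative days on which wearable-derived range of motion (ROM) falls behind an expected recovery trajectory. All specifics (the expected ROM curve, the 10° tolerance, the sensor readings) are hypothetical illustrations, not a validated protocol; a deployed system would learn both the trajectory and the tolerance band from outcome registries.

```python
import numpy as np

def flag_recovery_deviations(days, measured_rom, expected_rom, tolerance_deg=10.0):
    """Return the postoperative days on which measured range of motion (ROM)
    falls more than `tolerance_deg` below the expected trajectory.
    All inputs are illustrative; a real system would learn the expected
    curve and tolerance band from outcome registries."""
    days = np.asarray(days)
    measured = np.asarray(measured_rom, dtype=float)
    expected = np.asarray(expected_rom, dtype=float)
    deficit = expected - measured
    return days[deficit > tolerance_deg]

# Hypothetical sensor readings after flexor tendon repair:
days = [7, 14, 21, 28, 35]
expected = [30, 45, 60, 75, 85]   # population-level expected ROM (degrees)
measured = [28, 40, 45, 50, 52]   # this patient's wearable-derived ROM

alert_days = flag_recovery_deviations(days, measured, expected)
print(list(alert_days))  # days on which adhesion risk should be reviewed
```

In this toy example the patient tracks the expected curve for two weeks and then falls progressively behind, so the alert fires from day 21 onward; a clinical system would trigger a therapist review rather than a print statement.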

Realizing these future directions will depend on multi-center collaborations to develop large, high-quality datasets, rigorous prospective validation, and careful attention to regulatory and ethical frameworks. Nevertheless, the convergence of AI, advanced imaging, robotics, and micro-scale monitoring holds significant promise for transforming the practice of hand surgery.

Looking even further ahead, the integration of AI with regenerative medicine may open entirely new paradigms. Bioprinting technologies guided by AI-optimized blueprints could one day fabricate patient-specific osteochondral grafts or nerve conduits tailored precisely to individual anatomical and biomechanical requirements. AI could also help predict how these bioengineered constructs will integrate and remodel over time, informing personalized post-implantation management. In parallel, federated learning approaches - where algorithms are trained across decentralized datasets without exchanging sensitive patient data - could dramatically accelerate insights into rare hand conditions by harnessing knowledge from globally distributed centers. Together, these advances point toward a future in which precision reconstruction and truly personalized care in hand surgery become not just possible, but routine.
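The federated learning approach mentioned above can be sketched in a few lines. The example below implements one aggregation round of federated averaging (FedAvg) over toy model weights from three hypothetical centers; the cohort sizes and coefficients are invented for illustration, and a real deployment would also handle secure communication and repeated local-training rounds.

```python
import numpy as np

def federated_average(local_weights, local_sizes):
    """One round of federated averaging (FedAvg): each center trains
    locally and shares only its model weights; the server aggregates
    them weighted by local cohort size. No patient data leaves a center."""
    sizes = np.asarray(local_sizes, dtype=float)
    stacked = np.stack([np.asarray(w, dtype=float) for w in local_weights])
    fractions = sizes / sizes.sum()
    return (stacked * fractions[:, None]).sum(axis=0)

# Three hypothetical centers with different cohort sizes:
weights_a = [0.2, 1.0]   # e.g., coefficients of a simple local risk model
weights_b = [0.4, 0.8]
weights_c = [0.3, 0.9]

global_w = federated_average([weights_a, weights_b, weights_c], [100, 300, 100])
print(global_w)  # size-weighted consensus model, here [0.34, 0.86]
```

Because the aggregate is weighted by cohort size, the larger center dominates; for rare hand conditions this is exactly the point, since many small cohorts can jointly approximate a model no single center could train.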

CHALLENGES & LIMITATIONS

Despite substantial promise, AI implementation in hand surgery faces notable challenges:

· Data scarcity: Many hand conditions are relatively rare, complicating the development of robust AI models and necessitating multi-institutional collaboration to build sufficiently large, representative datasets. Emerging approaches such as synthetic data generation and federated learning offer potential solutions to this limitation - enabling model development across institutions without direct data sharing and helping to augment training datasets while preserving privacy. These strategies may play a crucial role in overcoming the inherent small-sample challenges of specialized surgical domains such as hand surgery.

· External validation: Algorithms often perform well on internal datasets but may generalize poorly across different patient populations, imaging protocols, or surgical techniques.

· Cost and complexity: Adoption of AI-enhanced microscopes, robotic systems, and sophisticated analytics requires significant investment and dedicated training.

· “Black box” concerns: The opacity of many AI models complicates understanding, acceptance, and legal accountability in surgical decision-making. Developing interpretable and explainable models is therefore essential to ensure clinician trust and regulatory acceptance. Our prior work on interpretable machine learning[45] demonstrates how transparent modeling approaches can improve insight into feature importance and decision logic, supporting safer and more accountable use of AI in surgical contexts.

· Adaptation to individual surgical practice: Most AI systems are designed for general optimization, often overlooking variations in technique, instrumentation, and workflow preferences among surgeons. In hand surgery, where approaches can be highly individualized, failure to align with these differences may limit acceptance and practical integration.

· Requirement for submillimeter precision: The anatomical scale of the hand is uniquely small, with densely packed neurovascular and tendinous structures. In this context, even minimal deviations of 1-2 mm - which may be acceptable in larger anatomical regions - can result in clinically significant errors, such as implant malposition, tendon injury, or neurovascular compromise. Therefore, any AR-based guidance system intended for hand surgery must achieve ultra-precise, real-time adaptation to the surgical field, accounting for intraoperative shifts and anatomical variations. Current technologies have not yet met this threshold, highlighting an urgent need for further refinement before widespread adoption in microsurgical environments becomes feasible.

· Patient-centered individualization: Many current models optimize general outcomes but do not adequately consider individual patient priorities, functional goals, or comorbidities, which are critical in personalized hand surgery decisions.

· Specialty-specific customization: Generic AI frameworks often fail to capture the unique anatomical, biomechanical, and outcome nuances of hand surgery compared to other surgical fields, necessitating specialty-tailored solutions.
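The submillimeter precision requirement above can be quantified. A common summary of overlay accuracy is the root-mean-square distance between where a guidance system renders anatomical landmarks and where they truly lie (akin to target registration error); the sketch below checks a hypothetical overlay against a notional 1 mm threshold. The coordinates and the threshold are illustrative assumptions only, chosen to reflect the 1-2 mm danger zone described in the text.

```python
import numpy as np

def target_registration_error(predicted_pts, true_pts):
    """Root-mean-square distance (in mm) between where a guidance
    system renders anatomical targets and where they actually are."""
    diff = np.asarray(predicted_pts, dtype=float) - np.asarray(true_pts, dtype=float)
    return float(np.sqrt((np.linalg.norm(diff, axis=1) ** 2).mean()))

# Hypothetical AR overlay positions vs. ground-truth landmarks (mm):
true_pts = [[0, 0, 0], [10, 0, 0], [0, 10, 0]]
predicted = [[0.6, 0, 0], [10, 0.6, 0], [0, 10, 0.6]]

tre = target_registration_error(predicted, true_pts)
print(f"TRE = {tre:.2f} mm")               # 0.60 mm in this toy case
print("within threshold:", tre < 1.0)      # vs. an assumed 1 mm limit
```

A 0.6 mm error passes an assumed 1 mm threshold here, but the text's point stands: errors that would be negligible in spine or cranial navigation can be disqualifying at the scale of digital neurovascular bundles.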

Beyond these technical hurdles, integrating AI tools into established clinical workflows often demands substantial infrastructural investments, from high-performance computing systems to secure data environments capable of handling sensitive patient information. Reimbursement pathways for AI-assisted interventions also remain unclear in many healthcare systems, potentially limiting uptake even where clinical benefit is evident. Equally important is fostering digital literacy among surgeons and allied staff to ensure these technologies are deployed safely and effectively. As a community, hand surgeons must engage proactively with data scientists, ethicists, and policymakers to shape how these innovations are developed, validated, and reimbursed, ensuring they ultimately serve to enhance patient care.

ETHICAL CONSIDERATIONS

The integration of AI into hand surgery also raises important ethical considerations, foremost among them patient safety and data privacy. Training AI models demands large, high-quality datasets, often containing sensitive patient information such as imaging, operative details, and outcomes. This necessitates strict adherence to data protection regulations [e.g., General Data Protection Regulation (GDPR), Health Insurance Portability and Accountability Act (HIPAA)], robust anonymization, and advanced security protocols to mitigate risks of breaches or misuse. Failure to safeguard data could undermine patient trust and deter participation in both clinical care and research[46-49].

Transparency and interpretability of AI systems are critical. Many high-performing machine learning models function as “black boxes,” making it difficult to understand how specific predictions or intraoperative recommendations are generated. This opacity can undermine trust, complicate shared decision-making, and raise medicolegal concerns. As highlighted in our earlier work on interpretable machine learning in healthcare, developing and adopting models that provide clear, understandable rationales for their outputs - so-called explainable AI - is essential in clinical contexts where decisions carry significant consequences[45]. Ensuring that AI augments rather than replaces surgical judgment helps maintain accountability and aligns with ethical imperatives for patient-centered care.
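As a minimal illustration of the explainable-AI idea, permutation importance measures how much a model's accuracy drops when one input feature is shuffled - a simple, model-agnostic way to show clinicians which factors a risk predictor actually relies on. Everything below is hypothetical: the toy cohort, the feature names, and the fixed weight vector standing in for a fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: age, smoking status, and a pure-noise feature,
# with complication risk driven mainly by "smoking" in this toy setup.
X = rng.normal(size=(500, 3))
y = (0.2 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

w = np.array([0.2, 1.5, 0.0])  # stand-in for a fitted linear model's weights

def accuracy(X, y, w):
    return ((X @ w > 0).astype(int) == y).mean()

def permutation_importance(X, y, w, col):
    """Drop in accuracy when one feature column is shuffled: large drops
    mean the model genuinely relies on that feature."""
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return accuracy(X, y, w) - accuracy(Xp, y, w)

for name, col in [("age", 0), ("smoking", 1), ("noise", 2)]:
    print(f"{name}: {permutation_importance(X, y, w, col):+.3f}")
```

In this setup shuffling "smoking" degrades accuracy markedly while shuffling the noise feature changes nothing - the kind of transparent, feature-level rationale that explainable-AI approaches aim to surface for surgical risk models.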

In parallel with concerns around explainability, deeper cognitive and educational implications of AI integration in surgery deserve attention. While AI can serve as a valuable support tool - particularly for less experienced surgeons - it also carries the risk of over-reliance, potentially diminishing critical thinking and decision-making skills. Especially among early-career clinicians, there is concern that repeated deferral to algorithmic recommendations may erode clinical autonomy and experiential learning. The human tendency toward technological trust means that surgeons may accept AI-generated suggestions without sufficient scrutiny, which could ultimately hinder the development of nuanced, experience-based surgical judgment. Maintaining a balance between assistance and autonomy will be essential to ensure that AI supports human expertise.

Additionally, current AI models may fail to capture the individualized nature of surgical technique, as well as subtle intraoperative decisions that are guided more by tacit knowledge or intuition than by explicit rules. In hand surgery, where surgeons often adapt approaches to specific anatomical variations, patient goals, or intraoperative findings, such intuitive decisions may defy algorithmic modeling. Moreover, minority patient groups - underrepresented in training datasets - may experience reduced accuracy or biased recommendations, highlighting the need for algorithmic transparency, dataset diversity, and respect for clinical intuition. While AI may excel at pattern recognition, it does not yet replicate the “gut feeling” that often guides seasoned surgeons in ambiguous or borderline cases.

The ethical landscape becomes even more complex when considering the potential for algorithmic bias. If AI systems are trained predominantly on datasets from specific populations - whether defined by ethnicity, age, or socioeconomic status - they may inadvertently propagate disparities in care. A model that underestimates complication risks in underrepresented groups could result in delayed interventions and poorer outcomes. Transparent reporting of training datasets and external validation across diverse populations are therefore essential safeguards, alongside regulatory frameworks that can keep pace with technological innovation.

Medicolegal accountability represents another unresolved frontier. As AI begins to influence surgical planning, intraoperative decisions, and postoperative management, questions arise: Who bears responsibility if AI-guided recommendations contribute to harm - the surgeon, the institution, or the software developer? Current regulatory and legal frameworks are ill-equipped to delineate liability in these scenarios. This uncertainty highlights the urgent need for updated medicolegal guidelines and regulatory pathways that can keep pace with technological innovation[50,51].

Table 3 outlines the primary obstacles that must be addressed to safely and effectively embed AI into hand surgical practice, emphasizing technical, economic, and ethical dimensions.

Table 3

Major challenges and ethical considerations

Challenge | Details
Data scarcity | Many hand conditions are rare; multi-center datasets are needed
External validation | Algorithms may not generalize across hospitals/equipment
Cost & complexity | High costs for AI-enhanced microscopes and robotics
"Black box" issues / lack of explainability | Limited explainability complicates trust and liability; necessitates explainable AI approaches
Medicolegal accountability | Liability for AI-guided errors remains unclear between surgeon, institution, and developer
Adaptation to surgical practice | AI may not align with individual surgeon techniques and workflows
Precision at microsurgical scale | Submillimeter accuracy needed; 1-2 mm error may harm critical structures
Patient-centered individualization | May overlook personal goals, function, comorbidities
Specialty-specific customization | Need for hand surgery-tailored anatomical and functional models
Data privacy & security | Protecting sensitive patient information from breaches
Loss of clinical autonomy | Risk of over-reliance on AI, especially among trainees; erosion of critical thinking and experiential learning
Lack of intuitive decision modeling | AI cannot replicate tacit knowledge or surgical intuition in ambiguous intraoperative situations
Algorithmic bias | Risks underperformance in underrepresented groups

This overview underscores critical hurdles - from data and cost issues to algorithmic bias and transparency - that require proactive strategies to ensure responsible adoption of AI.

CONCLUSION

In summary, AI is beginning to influence many aspects of hand surgery, from enhancing diagnostic imaging and refining surgical planning to supporting intraoperative precision, guiding rehabilitation, and transforming documentation and education. Yet, its current practical use in hand surgery remains limited, with only a few early examples demonstrating real-world benefit. AI appears most helpful in well-defined, data-rich tasks such as image interpretation, planning, and documentation, while its broader clinical adoption is constrained by technical limitations - most notably the requirement for sub-millimeter precision, reliable validation, and seamless integration into surgical workflows.

Looking ahead, several futuristic developments - including real-time perfusion monitoring, AR-guided navigation, and intelligent robotics - illustrate the exciting potential of this field. However, to move from promise to reality, AI must overcome not only technical barriers but also human challenges: excessive reliance may lead to a loss of clinical autonomy, and algorithms lack the surgical intuition or “gut feeling” that guides expert decision-making in ambiguous situations. Earning trust through transparency, rather than functioning as a “black box,” and building large, diverse datasets for reliable model training will be the critical next steps.

AI in hand surgery remains a fascinating and rapidly evolving field. Whether it will mature into a trusted surgical partner or remain more hype than help will depend on our ability to balance innovation with precision, transparency, and the irreplaceable judgment of the surgeon.

DECLARATIONS

Authors’ contributions

Conceptualization, manuscript drafting, literature review, supervision: Atay S

Literature search, manuscript editing: Wenger A

Technical analysis of AI tools and clinical relevance: Bankamp L

Evaluation of surgical applications and review: Krauß S

Contribution to hand surgical workflow mapping and content revision: Illg C

Assistance with visual materials and manuscript organization: Thiel JT

Supervision, review of clinical content and accuracy: Daigeler A

Literature organization, reference management, editing: Rachunek-Medved K

All authors read and approved the final manuscript.

Availability of data and materials

Not applicable.

Financial support and sponsorship

None.

Conflicts of interest

Daigeler A is an Editorial Board Member of the journal Plastic and Aesthetic Research. Daigeler A was not involved in any steps of the editorial process, notably including reviewers’ selection, manuscript handling, or decision-making. The other authors declare that there are no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2026.

REFERENCES

1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25:44-56.

2. Loftus TJ, Tighe PJ, Filiberto AC, et al. Artificial intelligence and surgical decision-making. JAMA Surg. 2020;155:148-58.

3. Hashimoto DA, Rosman G, Rus D, Meireles OR. Artificial intelligence in surgery: promises and perils. Ann Surg. 2018;268:70-6.

4. Ryhänen J, Wong GC, Anttila T, Chung KC. Overview of artificial intelligence in hand surgery. J Hand Surg Eur Vol. 2025;50:738-51.

5. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436-44.

6. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016 Jun 27-30; Las Vegas, NV, USA. Piscataway (NJ): IEEE; 2016. pp. 779-88.

7. Yang TH, Horng MH, Li RS, Sun YN. Scaphoid fracture detection by using convolutional neural network. Diagnostics. 2022;12:895.

8. Kalmet PHS, Sanduleanu S, Primakov S, et al. Deep learning in fracture detection: a narrative review. Acta Orthop. 2020;91:215-20.

9. Dababneh S, Colivas J, Dababneh N, Efanov JI. Artificial intelligence as an adjunctive tool in hand and wrist surgery: a review. Art Int Surg. 2024;4:214-32.

10. Di Cosmo M, Fiorentino MC, Villani FP, et al. A deep learning approach to median nerve evaluation in ultrasound images of carpal tunnel inlet. Med Biol Eng Comput. 2022;60:3255-64.

11. Faeghi F, Ardakani AA, Acharya UR, et al. Accurate automated diagnosis of carpal tunnel syndrome using radiomics features with ultrasound images: a comparison with radiologists’ assessment. Eur J Radiol. 2021;136:109518.

12. Nakao K, Thavara BD, Tanaka R, et al. Surgeon experience of the surgical safety with KINEVO 900 in vascular neurosurgery: the initial experience. Asian J Neurosurg. 2020;15:464-7.

13. Lin KY, Li YT, Han JY, et al. Deep learning to detect triangular fibrocartilage complex injury in wrist MRI: retrospective study with internal and external validation. J Pers Med. 2022;12:1029.

14. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17:195.

15. Schweizer A, Mauler F, Vlachopoulos L, Nagy L, Fürnstahl P. Computer-assisted 3-dimensional reconstructions of scaphoid fractures and nonunions with and without the use of patient-specific guides: early clinical outcomes and postoperative assessments of reconstruction accuracy. J Hand Surg Am. 2016;41:59-69.

16. Wagner GA, Glennon A, Sieberer JM, Tommasini SM, Lattanza LL. A patient-specific three-dimensional-printed surgical guide for dorsal scaphoid fracture fixation: a comparative cadaver study. J Hand Surg Glob Online. 2025;7:158-66.

17. Wirth MA, Maniglio M, Jochum BC, et al. Three-dimensional-planned patient-specific guides for scaphoid reconstruction: a comparative study of primary and revision nonunion cases. J Clin Med. 2025;14:2082.

18. Ichikawa Y, Tobita M, Takahashi R, et al. Learning curve and ergonomics associated with the 3D-monitor-assisted microsurgery using a digital microscope. J Plast Reconstr Surg. 2023;2:1-8.

19. Rodriguez-Unda NA, Wu DS. Exoscope for upper extremity peripheral nerve surgery: revision carpal tunnel release with epineurolysis and hypothenar fat flap. Cureus. 2022;14:e22539.

20. Ernst J, Hahne JM, Markovic M, et al. Combining surgical innovations in amputation surgery-robotic harvest of the rectus abdominis muscle, transplantation and targeted muscle reinnervation improves myocontrol capability and pain in a transradial amputee. Medicina. 2023;59:2134.

21. Brauckmann V, Mayor JR, Ernst L, Ernst J. How a robotic visualization system can facilitate targeted muscle reinnervation. J Reconstr Microsurg Open. 2024;09:e19-26.

22. Besmens IS, Politikou O, Giovanoli P, Calcagni M, Lindenblatt N. Robotic microsurgery in extremity reconstruction - experience with a novel robotic system. Surg Innov. 2024;31:42-7.

23. Cannizzaro D, Scalise M, Zancanella C, Paulli S, Peron S, Stefini R. Comparative evaluation of major robotic systems in microanastomosis procedures: a systematic review of current capabilities and future potential. Brain Sci. 2024;14:1235.

24. van Mulken TJM, Schols RM, Scharmga AMJ, et al; MicroSurgical Robot Research Group. First-in-human robotic supermicrosurgery using a dedicated microsurgical robot for treating breast cancer-related lymphedema: a randomized pilot trial. Nat Commun. 2020;11:757.

25. van Mulken TJM, Wolfs JAGN, Qiu SS, et al; MicroSurgical Robot Research Group. One-year outcomes of the first human trial on robot-assisted lymphaticovenous anastomosis for breast cancer-related lymphedema. Plast Reconstr Surg. 2022;149:151-61.

26. Kueckelhaus M, Nistor A, van Mulken T, et al. Clinical experience in open robotic-assisted microsurgery: user consensus of the European Federation of Societies for Microsurgery. J Robot Surg. 2025;19:171.

27. Malzone G, Menichini G, Innocenti M, Ballestín A. Microsurgical robotic system enables the performance of microvascular anastomoses: a randomized in vivo preclinical trial. Sci Rep. 2023;13:14003.

28. van Boekel AM, van der Meijden SL, Arbous SM, et al. Systematic evaluation of machine learning models for postoperative surgical site infection prediction. PLoS One. 2024;19:e0312968.

29. Katsura Y, Ohga S, Shimo K, Hattori T, Yamada T, Matsubara T. A decision tree algorithm to identify predictors of post-stroke complex regional pain syndrome. Sci Rep. 2024;14:9893.

30. Wang AWT, Lefaivre KA, Potter J, et al. Complex regional pain syndrome after distal radius fracture: a survey of current practices. PLoS One. 2024;19:e0314307.

31. Bressler M, Merk J, Gohlke T, et al. A virtual reality serious game for the rehabilitation of hand and finger function: iterative development and suitability study. JMIR Serious Games. 2024;12:e54193.

32. Prahm C, Eckstein K, Bressler M, et al. PhantomAR: gamified mixed reality system for alleviating phantom limb pain in upper limb amputees-design, implementation, and clinical usability evaluation. J Neuroeng Rehabil. 2025;22:21.

33. AlShenaiber A, Datta S, Mosa AJ, Binhammer PA, Ing EB. Large language models in the diagnosis of hand and peripheral nerve injuries: an evaluation of ChatGPT and the isabel differential diagnosis generator. J Hand Surg Glob Online. 2024;6:847-54.

34. Pressman SM, Borna S, Gomez-Cabello CA, Haider SA, Forte AJ. AI in hand surgery: assessing large language models in the classification and management of hand injuries. J Clin Med. 2024;13:2832.

35. Demir KC, Schieber H, Weise T, et al. Deep learning in surgical workflow analysis: a review of phase and step recognition. IEEE J Biomed Health Inform. 2023;27:5405-17.

36. Perkins SW, Muste JC, Alam T, Singh RP. Improving clinical documentation with artificial intelligence: a systematic review. Perspect Health Inf Manag. 2024;21:1d.

37. Khanna A, Wolf T, Frank I, et al. Enhancing accuracy of operative reports with automated artificial intelligence analysis of surgical video. J Am Coll Surg. 2025;240:739-46.

38. Wyles CC, Fu S, Odum SL, et al. External validation of natural language processing algorithms to extract common data elements in THA operative notes. J Arthroplasty. 2023;38:2081-4.

39. Harvey CJ, Wong V, Huynh W, Lee JP, Woo RK. Ambient AI-assisted clinical documentation in surgical outpatient care: a preliminary study of usability, workflow, and patient experience. World J Pediatr Surg. 2025;8:e001073.

40. Will J, Gupta M, Zaretsky J, Dowlath A, Testa P, et al. Enhancing the readability of online patient education materials using large language models: cross-sectional study. J Med Internet Res. 2025;27:e69955.

41. Ararat K, Altan O, Serbest S, Baser O, Dumanli S. A biodegradable implant antenna detecting post-surgical infection. In: Proceedings of the 14th European Conference on Antennas and Propagation (EuCAP); 2020 Mar 15-20; Copenhagen, Denmark. IEEE; 2020. pp. 1-4.

42. Arifuzzaman M, Millhouse P, Raval Y, et al. An implanted pH sensor read using radiography. Analyst. 2019;144:2984-93.

43. Yangi K, On TJ, Xu Y, et al. Artificial intelligence integration in surgery through hand and instrument tracking: a systematic literature review. Front Surg. 2025;12:1528362.

44. Li CR, Chang YJ, Lin MS, Tsou HK. Augmented reality in spine surgery: a case study of atlantoaxial instrumentation in Os odontoideum. Medicina. 2024;60:874.

45. Atay S. Interpretable machine learning in healthcare: comparison and evaluation of interpretable models. Bachelor Thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany, 2022. Available from: https://www.researchgate.net/publication/393449289_Interpretable_Machine_Learning_in_Healthcare_Comparison_and_Evaluation_of_Interpretable_Models. [Last accessed on 8 Jan 2026].

46. Yadav N, Pandey S, Gupta A, Dudani P, Gupta S, Rangarajan K. Data privacy in healthcare: in the era of artificial intelligence. Indian Dermatol Online J. 2023;14:788-92.

47. European Union. General Data Protection Regulation (GDPR): Regulation (EU) 2016/679 - official legal text. 2016.

48. United States. Health Insurance Portability and Accountability Act of 1996. Public Law 104-191. US Statut Large. 1996;110:1936-2103.

49. Neural Trust. AI in healthcare: protecting patient data. Available from: https://neuraltrust.ai/blog/ai-healthcare-protecting-patient-data. [Last accessed on 29 Dec 2025].

50. Duffourc M, Møllebæk M, Druedahl LC, Minssen T, Gerke S. Surgeons’ perspectives on liability for the use of artificial intelligence technologies in the United States and European Union: results from a focus group study. Ann Surg Open. 2025;6:e542.

51. See B. Paging doctor robot: medical artificial intelligence, tort liability, and why personhood may be the answer. Brook L Rev. 2021;87:417-56. Available from: https://brooklynworks.brooklaw.edu/blr/vol87/iss1/10/. [Last accessed on 8 Jan 2026].

52. Pereira MF, Prahm C, Kolbenschlag J, Oliveira E, Rodrigues NF. Application of AR and VR in hand rehabilitation: a systematic review. J Biomed Inform. 2020;111:103584.


