Teaching oral care via vision-based deformation perception
Abstract
This paper presents a novel, cost-effective sensor platform based on Vision-based Deformation Perception (VBDeformP) for community oral health education. The system integrates a 3D-printed thermoplastic polyurethane soft structure with a rigid resin frame and an ArUco marker to encode interaction information, including the contact region and six-dimensional force and torque. By transforming force estimation into a marker-based pose tracking problem, the VBDeformP sensor achieves accurate and robust force/torque inference under both quasi-static and dynamic conditions, utilizing machine learning models. An adaptive image binarization algorithm extends reliable marker detection across a wide illumination range (10-5,000 lx), ensuring consistent performance of the vision system in realistic community teaching scenarios. Experimental validation involving 10 healthy participants performing standardized brushing tasks demonstrated that the sensor attains measurement accuracies comparable to a commercial ATI Axia80-M20 sensor, with mean absolute errors of 0.55 N (2.19% relative error) and 0.067 N·m (2.68% relative error) for quasi-static forces and torques, and 0.16 N (4.10% relative error) and 0.023 N·m (5.75% relative error) under dynamic conditions. Moreover, the system’s real-time brushing region classification algorithm achieved an overall accuracy of 98.12%, further underscoring its potential to provide immediate and personalized guidance on oral hygiene. Its low cost, rapid initialization, portability, and scalable fabrication render it a promising solution for enhancing oral health education in community settings.
INTRODUCTION
Oral diseases affect approximately 3.5 billion people worldwide throughout their lives[1], making them one of the most pressing public health concerns. While clinical interventions can mitigate immediate symptoms, they often prove insufficient and unsustainable as a long-term preventive strategy[2]. Consequently, effective oral hygiene practices, particularly proper toothbrushing techniques, are crucial in preserving oral health. Research has shown widespread difficulties in implementing appropriate brushing technique, from excessive force application to inadequate coverage and insufficient duration[3-5]. These persistent errors can lead to gum recession, enamel degradation, and elevated plaque levels[6], underscoring the urgent need for more effective oral hygiene education and prevention strategies.
In formal dental education, as summarized in Table 1, the methods and tools used to demonstrate proper brushing techniques significantly influence oral care teaching outcomes[15,16]. Previous research has shown that brushing force is a key factor in optimizing cleaning efficacy[17], and that physical jaw models are notably more effective than video demonstrations for teaching brushing techniques[18]. These insights suggest that integrating force-measurement capabilities with physical models has excellent potential for creating ideal educational simulators[19].
Table 1. Comparison of oral care teaching devices

| Category | Device | Sensing methods | Capabilities | Complexity | Cost level |
|---|---|---|---|---|---|
| Wearable/mobile solutions | Lee et al.[7] | Accelerometer + magnetic sensor | Region only | Medium | Low |
| | Marcon et al.[8] | Camera-based | Region only | Low | Medium |
| | mTeeth[9] | Wrist-worn inertial sensors | Region only | Low | Medium |
| | ToothFairy[10] | Audio sensing | Region only | Low | Low |
| Oral care simulators | Herath et al.[11] | Multiple strain gauges | Region + force | High | Medium |
| | Daigo et al.[12] | Multiple pressure sensors | Region + force | High | High |
| | Matsuno et al.[13] | 6-axis F/T sensor | Region + force | Low | High |
| | Mouri et al.[14] | 6-axis F/T sensor + motion capture | Region + force | High | High |
| | This work | Soft F/T sensor | Region + force | Low | Low |
Although various oral care simulators incorporating advanced technologies have been proposed, ranging from multi-sensor force-sensing systems[11,12] and high-precision force/torque (F/T) measurement devices[13] to motion capture[14] and extended reality systems[20], the complexity and cost of these solutions hinder their adoption in community oral health initiatives. Such programs are highly effective for improving oral hygiene behaviors[21,22], yet they often rely on conventional jaw models and manual demonstrations due to limited resources and technical expertise[23]. This mismatch between cutting-edge simulator technology and practical field requirements underscores the need to develop accessible, low-cost simulation tools tailored to community-based dental education.
In response to these challenges, researchers have investigated various sensing technologies, especially soft force sensors widely used in medical environments[24-26] and various wearable devices[27-29]. Traditional methods measure resistance[30] or capacitance[31], and newer innovations leverage hydrogels[32], liquid metals[33], or optical fibers[34,35]. However, these advanced materials often necessitate intricate fabrication processes, stringent calibration steps, and elaborate wiring systems[36,37], posing additional barriers in low-resource settings.
Vision-based Deformation Perception (VBDeformP), a method that analyzes visual deformation patterns to interpret physical interactions[38], has recently emerged as a compelling sensing alternative, offering superior mechanical flexibility, resistance to electromagnetic interference, and simplified system architecture[39]. Its successful integration has been demonstrated in diverse surgical applications, from triaxial force sensing in ureteroscopy[40] and microneedle force sensing in microsurgery[41] to prostate cancer detection[42]. Particularly promising are VBDeformP solutions employing ArUco markers[43] for force prediction. Such systems boast low-cost components, reduced fabrication complexity, and minimal circuit requirements, making them highly suitable for resource-limited environments[44]. Recent innovations include a soft finger for six-degrees-of-freedom force and moment measurement[45], endoscopic tracking in robotic surgery[46], and gripping force analysis in robotic hands[47]. These successes underline the versatility and cost-effectiveness of VBDeformP, positioning it as a viable approach for bridging the gap between sophisticated sensing technology and the practical needs of community-based dental education.
This study proposes a novel soft F/T sensor that incorporates an ArUco marker into a 3D-printed thermoplastic polyurethane (TPU) structure, capitalizing on the inherent flexibility of TPU and the streamlined architecture offered by VBDeformP. The proposed sensor addresses common barriers to adoption in community oral health programs by employing low-cost manufacturing processes and simple circuit design. The system accurately predicts forces and torques through machine learning under quasi-static and dynamic conditions. Validation experiments with healthy participants performing standard brushing tasks confirm the sensor’s measurement precision across diverse environmental parameters. This work represents a notable breakthrough in practical oral health education tools. It provides a cost-effective, versatile, and dependable platform that enhances brushing technique instruction in diverse real-world environments. The primary contributions of this article are as follows:
• Developed a soft F/T sensor based on VBDeformP, featuring a 3D-printed TPU structure that ensures physical flexibility and simplifies fabrication. The new design provides an innovative F/T and brushing-region sensing solution in dental education.
• Achieved measurement accuracy comparable to an expensive industrial-grade F/T sensor (ATI Axia80-M20), maintaining robustness in both quasi-static and dynamic scenarios. An optimized combination of a simple circuit design and machine learning algorithms enables this performance.
• Demonstrated a cost-effective oral health education platform that effectively bridges the gap between state-of-the-art sensing technology and the practical needs of community-based health education programs. The system’s minimal technical complexity and accessible implementation underscore its potential for widespread adoption in dental education.
EXPERIMENTAL
Sensor design and fabrication
The proposed VBDeformP for soft F/T perception integrates a visual tracking mechanism with a deformable structure, comprising four key components [Figure 1A]: (1) a deformable soft structure that transmits forces; (2) an ArUco marker base with a 16 mm × 16 mm, 4 × 4 ArUco marker (ID: 0) attached to its underside as the primary visual reference; (3) a camera holder containing a compact monocular RGB camera (S-YUE WX605, Weixinshijie); and (4) a base mount.
Figure 1. Design and analysis of the vision-based soft F/T sensor. (A) Exploded view of sensor components; (B) FEM simulation results reveal deformation patterns under six loading conditions: lateral forces (Fx and Fy), vertical force (Fz), bending moments (Tx and Ty), and torsional moment (Tz); (C) Experimental validation of soft structure deformation under corresponding forces and torques in oral care applications. F/T: Force/torque; FEM: finite element method.
The camera is equipped with a wide-angle lens (135° field of view) and captures video at 60 frames/s with a resolution of 1,280 × 720 pixels. The system analyzes this video stream in real-time to accurately track the ArUco marker’s pose, effectively serving as a miniature motion capture unit within the VBDeformP framework.
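For readers reproducing the tracking step, the marker pose can be recovered with standard OpenCV tooling. The sketch below is a minimal illustration, assuming the legacy `cv2.aruco` API from `opencv-contrib-python` (version 4.6 or earlier) and placeholder camera intrinsics; a real deployment would substitute calibrated values.

```python
import cv2
import numpy as np

# Hypothetical intrinsics for illustration; obtain real values via camera calibration.
camera_matrix = np.array([[600.0, 0.0, 640.0],
                          [0.0, 600.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion for this sketch

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

def track_marker(frame_gray, marker_length_m=0.016):
    """Return (rvec, tvec) of the 16 mm marker with ID 0, or None if not detected."""
    corners, ids, _ = cv2.aruco.detectMarkers(frame_gray, aruco_dict, parameters=params)
    if ids is None or 0 not in ids.flatten():
        return None
    idx = list(ids.flatten()).index(0)
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        [corners[idx]], marker_length_m, camera_matrix, dist_coeffs)
    return rvecs[0], tvecs[0]  # rotation (Rodrigues vector) and translation in metres
```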
To balance accurate F/T measurement with design simplicity, we employed a wave-shaped deformable structure inspired by the wave spring washer concept[48]. The structure’s centerline is determined by:

$$x(t) = R_0 \cos t, \quad y(t) = R_0 \sin t, \quad z(t) = A_0 \sin(nt), \quad t \in [0, 2\pi)$$
where R0 is the overall radius, A0 defines wave amplitude, and n controls the number of sinusoidal waves around the circumference. An additional parameter, m, specifies the number of layer pairs, resulting in a total of 2m layers in the final structure. Adjacent layers alternate the sign in z(t), producing an interlocking multi-layer configuration.
For oral care applications, R0 = 33 mm was selected to fit typical jaw model dimensions. Following finite element method (FEM) simulations and tests of multiple 3D-printed prototypes, we chose A0 = 3 mm, n = 4, and m = 3 for the wave structure. Additionally, we invited 10 participants to evaluate prototypes with varying design parameters to determine the optimal structural parameters for comfortable use. Based on participant feedback, we determined h = 1.5 mm and w = 4 mm, where h represents the thickness of the wave structure perpendicular to the wave plane, and w denotes the width of the wave beam along its cross-section. These values were selected as the minimum at which all participants reported that the device’s slight movement did not interfere with their regular brushing. This selection aimed to optimize the balance between measurement sensitivity and user comfort.
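For illustration, the centerline of one wave layer can be sampled directly from the parametric form reconstructed above. A minimal NumPy sketch using the chosen parameters (R0 = 33 mm, A0 = 3 mm, n = 4); the sampling density is an arbitrary choice:

```python
import numpy as np

def wave_centerline(R0=33.0, A0=3.0, n=4, sign=+1, samples=720):
    """Sample one layer of the wave-shaped centerline (units: mm).

    `sign` flips z(t) so adjacent layers interlock, giving the 2m-layer stack.
    """
    t = np.linspace(0.0, 2.0 * np.pi, samples)
    x = R0 * np.cos(t)
    y = R0 * np.sin(t)
    z = sign * A0 * np.sin(n * t)
    return np.stack([x, y, z], axis=1)

# Example: alternate the sign layer by layer for m = 3 layer pairs (6 layers total)
layers = [wave_centerline(sign=(-1) ** k) for k in range(6)]
```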
Figure 1B depicts FEM results from Fusion 360, highlighting characteristic deformation under six load conditions: lateral forces (Fx and Fy), axial force (Fz), bending moments (Tx and Ty), and torsional moment (Tz).
The soft structure was 3D-printed using TPU (Future TPU, Wenext) for its flexibility. Rigid components (marker base, camera holder, and sensor base) were 3D-printed using high-precision resin (R4600, Wenext), with threaded inserts embedded for secure mounting. The ArUco marker base and camera holder feature flanges that accommodate these inserts, ensuring a firm assembly. The sensor base attaches to the camera holder via threaded connectors, providing mounting interfaces to external fixtures. The soft structure connects to the camera holder and marker base using adhesive bonding (K-8818, Kafuter), which facilitates reliable force transfer throughout the system.
All parts were printed at Wenext’s manufacturing facility. Total printing expenses amounted to approximately $29: $7 for the TPU soft structure, $5.50 for rigid resin parts, and $16.50 for threaded insert integration. The cost of embedding threaded inserts can be substantially reduced through batch production, making the approach scalable and cost-effective.
VBDeformP for force and torque inference
After establishing the fabrication methodology and cost structure of the VBDeformP sensor, we now describe its implementation for force and torque measurement. An overview of the system workflow is shown in Figure 2, illustrating both the hardware setup and the data processing pipeline. The VBDeformP soft sensor, equipped with an internal camera, is affixed onto a high-performance F/T sensor (ATI Axia80-M20) that provides ground-truth force and torque measurements [Figure 2A]. The ATI sensor offers a resolution of 1/10 N for Fx/Fy/Fz and 1/200 N·m for Tx/Ty/Tz. Simultaneously, the VBDeformP camera captures 1,280 × 720 images at 60 Hz [Figure 2B], while reference force and torque data are logged [Figure 2C]. We use a multi-layer neural network to infer the six-dimensional force and torque [Fx, Fy, Fz, Tx, Ty, Tz] from the tracked marker pose [Figure 2D].
Figure 2. Overview of force and torque inference workflow. (A) The soft sensor is mounted on the ATI F/T sensor; (B) Real-time image capture from the integrated camera; (C) Reference force and torque measurements from the ATI F/T sensor; (D) Data-driven model architecture for force and torque inference. F/T: Force/torque.
Community dental education occurs in diverse lighting environments, from well-lit indoor spaces to challenging outdoor settings with either excessive sunlight or dim evening illumination. In these varied conditions, reliable marker detection becomes essential for consistent system performance. While deep learning approaches have demonstrated robust marker detection[49], their high computational cost, exceeding 25 ms per frame on an NVIDIA RTX 3090 GPU, renders them unsuitable for broad adoption in community settings. Consequently, we employ a traditional detection pipeline with efficient binarization and pose estimation. Following[43], we first convert captured RGB frames to grayscale using the standard luminosity formula:

$$\text{Gray} = 0.299R + 0.587G + 0.114B$$
where Gray represents the grayscale value, and R, G, and B are the red, green, and blue channel values, respectively. We reduce noise by applying a 5 × 5 Gaussian blur (σ = 1.0).
We tested several binarization methods under diverse lighting conditions. For this purpose, we constructed a test platform: the sensor was mounted at the center of an optical board in a darkened room, with two photography fill lights positioned on either side of the platform, and the illumination at the sensor location was controlled by adjusting the room’s ceiling lights and the fill lights. In particular, uniform and non-uniform lighting environments could be simulated by changing the intensity of the fill lights. A KOMEZ K-8123 illuminance meter (range 200,000 lx, resolution 0.1 lx, accuracy 2%) was used to measure the light intensity at the sensor location in lx. Measurements were taken at four points around the sensor, with each point measured three times and averaged; the final illuminance value was the average of these four points. Allowing for practical variation in light adjustment, a target illuminance was considered achieved when the measured intensity deviated from the target value by less than 10 lx.
Using this experimental setup, we evaluated several standard binarization methods. Although k-means clustering can adapt to varying illumination, it requires over 100 ms per frame, which is too slow for our real-time, 60 Hz pipeline. Simpler global thresholding techniques[50] are fast but proved unreliable at illumination extremes.
To broaden the illumination range for reliable marker detection, we developed a ratio-based thresholding method that capitalizes on the fact that the marker’s black squares consistently have the lowest grayscale values in each image. Let I(x, y) denote the grayscale image of size M × N, and let H(g) be the histogram of gray levels (g ∈ [0, 255]):

$$H(g) = \sum_{x=1}^{M} \sum_{y=1}^{N} \delta\big(I(x, y) - g\big)$$
where the Kronecker delta δ(z) enforces exact matching:

$$\delta(z) = \begin{cases} 1, & z = 0 \\ 0, & \text{otherwise} \end{cases}$$
Given a target black-pixel ratio $r_b$ (set to 0.50 empirically), the desired count of black pixels becomes:

$$P = r_b \cdot M \cdot N$$

The final target pixel count $P_{\text{target}} = \lfloor P \rfloor$ is obtained by discarding the fractional part of the calculated value to ensure an integer quantity.
We then determine the threshold T as the smallest gray level at which the accumulated histogram counts reach the target:

$$\sum_{g=0}^{T} H(g) \geq P_{\text{target}}$$
The binary image B(x, y) is formed by:

$$B(x, y) = \begin{cases} 0, & I(x, y) \leq T \\ 255, & \text{otherwise} \end{cases}$$
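Taken together, the binarization stage reduces to a few array operations. The sketch below is a minimal NumPy/OpenCV rendering of the formulas above, under the stated parameters (5 × 5 Gaussian blur with σ = 1.0, rb = 0.50); it uses the cumulative histogram to find the smallest T satisfying the accumulation condition.

```python
import cv2
import numpy as np

def ratio_binarize(frame_bgr, r_b=0.50):
    """Ratio-based thresholding: keep the darkest r_b fraction of pixels black."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # 0.299R + 0.587G + 0.114B
    gray = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)

    hist = np.bincount(gray.ravel(), minlength=256)      # H(g)
    p_target = int(r_b * gray.size)                      # floor(r_b * M * N)
    cdf = np.cumsum(hist)
    T = int(np.searchsorted(cdf, p_target))              # smallest T with cdf[T] >= P_target

    binary = np.where(gray <= T, 0, 255).astype(np.uint8)
    return binary, T
```

The binarized frame can then be fed to the marker detector in place of the plain grayscale image, which is what extends the usable illumination range.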
By focusing on the darkest areas of the image, where the ArUco marker is reliably visible, this ratio-based thresholding extends the usable lighting range without imposing a substantial computational cost. We validated its performance by gathering 2,000 frames under illuminations from 10 to 5,000 lx. A ≥ 99% detection rate was defined as sufficient for robust marker recognition. As illustrated in Figure 3, our method sustains reliable detection across 10-5,000 lx, whereas standard grayscale and raw image processing achieve stable detection only in narrower ranges (100-2,000 and 100-1,000 lx, respectively). Collectively, these techniques form the foundation of our VBDeformP platform. The system integrates deformable structural design, cost-effective fabrication, and robust image binarization to deliver real-time force and torque sensing that remains resilient to environmental changes.
Figure 3. Marker detection performance under different illumination conditions (10-5,000 lx). Comparison of original captured images, grayscaled images, and our binarized method. Green boxes indicate successful marker detection (detection rate ≥ 99% across 2,000 frames).
We quantitatively evaluated the detection reliability by analyzing the precision of pose parameters. For both translational [X, Y, Z] and rotational [roll, pitch, yaw] components, we calculated the range (maximum minus minimum value) across all 2,000 frames. Tables 2 and 3 present these variation ranges, demonstrating significantly reduced fluctuations using our method across all tested illumination conditions. Notably, under challenging lighting conditions (10 and 5,000 lx), where conventional methods fail to detect markers, our binarization approach shows stable detection with position fluctuations below 0.6 mm and angular variations under 1.5 degrees, suggesting potential improvements in the measurement precision of the F/T sensing system.
Table 2. Position fluctuation ranges under various illuminances (values in mm; “/” indicates failed marker detection)

| Illuminance (lx) | X Gray | X Binary | X Origin | Y Gray | Y Binary | Y Origin | Z Gray | Z Binary | Z Origin |
|---|---|---|---|---|---|---|---|---|---|
| 10 | / | 0.0514 | / | / | 0.0482 | / | / | 0.5548 | / |
| 100 | 0.0147 | 0.0079 | 0.0213 | 0.0191 | 0.0107 | 0.0206 | 0.1040 | 0.0206 | 0.0607 |
| 500 | 0.0072 | 0.0074 | 0.0070 | 0.0070 | 0.0069 | 0.0118 | 0.0142 | 0.0140 | 0.0383 |
| 1,000 | 0.1400 | 0.0163 | 0.0845 | 0.1450 | 0.0170 | 0.0114 | 3.1594 | 0.0232 | 3.1932 |
| 2,000 | 0.4551 | 0.0038 | / | 0.2621 | 0.0048 | / | 4.1110 | 0.0155 | / |
| 5,000 | / | 0.0353 | / | / | 0.0119 | / | / | 0.1707 | / |
Table 3. Angle fluctuation ranges under various illuminances (values in degrees; “/” indicates failed marker detection)

| Illuminance (lx) | Roll Gray | Roll Binary | Roll Origin | Pitch Gray | Pitch Binary | Pitch Origin | Yaw Gray | Yaw Binary | Yaw Origin |
|---|---|---|---|---|---|---|---|---|---|
| 10 | / | 1.450 | / | / | 1.470 | / | / | 0.217 | / |
| 100 | 0.566 | 0.407 | 0.642 | 0.571 | 0.270 | 0.739 | 0.078 | 0.046 | 0.094 |
| 500 | 0.152 | 0.135 | 0.238 | 0.245 | 0.246 | 0.267 | 0.027 | 0.023 | 0.034 |
| 1,000 | 3.633 | 0.405 | 0.928 | 2.201 | 0.602 | 1.085 | 0.384 | 0.055 | 0.059 |
| 2,000 | 9.861 | 0.147 | / | 10.614 | 0.123 | / | 2.514 | 0.019 | / |
| 5,000 | / | 0.487 | / | / | 0.694 | / | / | 0.057 | / |
RESULTS AND DISCUSSION
Brushing regions classification performance
The Bass brushing method segments tooth surfaces into 16 distinct areas for instructional purposes[7]. This study focused on the lower jaw surfaces and divided them into 8 regions, as shown in Figure 4A. Ten healthy, right-handed adults (six men and four women) with no professional dental or caregiving background were recruited. None of the participants had prior familiarity with the Bass technique.
Figure 4. Brushing regions classification performance. (A) Teeth surface categorization of the Bass brushing technique (lower jaw); (B) Experiment setup I: baseline configuration with the dental model directly mounted on the ATI Axia80-M20 F/T sensor (blue box); (C) Experiment setup II: enhanced configuration with VBDeformP soft sensor (yellow box) integrated between the jaw model and the F/T sensor; (D) Confusion matrix showing classification results from the ATI Axia80-M20 setup; (E) Confusion matrix showing classification results from the VBDeformP sensor setup; (F) Comparative F1 scores obtained through leave-one-participant-out cross-validation demonstrated similar performance in recognizing brushing regions between the two sensing approaches. F/T: Force/torque; VBDeformP: Vision-based Deformation Perception.
Two experiments were designed to evaluate brushing region classification. In Experiment I [Figure 4B], each participant used a Philips HX9911 electric toothbrush (set to “clean 2”) to brush each of the eight areas on a lower jaw model (Oral Standard Model, Meikang Medical Model). The model was directly mounted onto an ATI Axia80-M20 F/T sensor via a gray connector plate, providing six-dimensional ground-truth force and torque data [Fx, Fy, Fz, Tx, Ty, Tz]. Each brushing session was initiated and terminated by audio cues and lasted about 20 s (20,000 frames at the F/T sensor’s native 1,000 Hz sampling rate), with a 5-second pause between regions to reset the sensor and minimize zero drift. The F/T data collected in this experiment served as our baseline for classification performance and as a ground-truth reference.
In Experiment II [Figure 4C], the same brushing protocol was replicated, but our VBDeformP soft sensor was integrated between the jaw model and a black connector plate, with the entire assembly still mounted on the ATI Axia80-M20 F/T sensor. An ArUco marker attached to the jaw model was tracked in real-time by a built-in camera, recording six-dimensional pose data [X, Y, Z, roll, pitch, yaw]. For system initialization, the mean pose values [X0, Y0, Z0, roll0, pitch0, yaw0] were computed from 100 unloaded frames and used as the reference, with subsequent measurements expressed as relative changes. Both F/T and pose data were collected simultaneously for distinct purposes. Participants completed three trials in both experiments, but only the latter two were analyzed. A minimum one-week gap separated the two experiments to mitigate learning effects. Before each session, participants only watched a standardized Bass technique video; they received no further instruction or hands-on training.
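The zero-referencing step amounts to a mean and a subtraction. A minimal sketch, assuming `poses` is an (N × 6) NumPy array of [X, Y, Z, roll, pitch, yaw] samples; the variable names are illustrative, not the authors’ code:

```python
import numpy as np

def init_reference(unloaded_poses):
    """Mean pose over ~100 unloaded frames -> [X0, Y0, Z0, roll0, pitch0, yaw0]."""
    return unloaded_poses.mean(axis=0)

def relative_pose(pose, reference):
    """Express a measurement as a change relative to the unloaded reference."""
    return pose - reference

# reference = init_reference(poses[:100]); delta = relative_pose(poses[100], reference)
```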
It is worth noting that in Experiment II, we simultaneously collected both F/T sensor data and ArUco marker pose data, each serving complementary roles in our research. The F/T sensor measurements primarily provided ground truth references for training our machine learning model and offered valuable benchmarks for error assessment during validation. Meanwhile, for the brushing region classification analysis, we utilized the pose data captured from the ArUco marker, which was subsequently processed through our MLP model to estimate forces and identify brushing regions. This methodological approach suggests that our VBDeformP system has the potential to function as a standalone solution in practical applications without requiring the presence of an F/T sensor.
Data preprocessing
In Experiment I, raw F/T data were smoothed via a 9-frame sliding window (to remove high-frequency noise while preserving brushing dynamics) and downsampled to 100 Hz (sufficient for capturing human brushing movements); a 0.5 N force threshold removed data likely representing no brush contact. In Experiment II, the same smoothing was applied, and the F/T data were aligned with the 6D pose data using a 0.05 s synchronization tolerance. Frames with ArUco marker displacements under 2 mm were considered non-contact and excluded.
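A sketch of this preprocessing chain under the stated parameters (9-frame window, 1,000 Hz to 100 Hz, 0.5 N contact threshold); the moving-average smoothing and plain decimation are our assumptions, since the paper does not specify the exact filter implementation:

```python
import numpy as np

def preprocess_ft(ft_raw, window=9, native_hz=1000, target_hz=100, f_min=0.5):
    """Smooth, downsample, and drop frames likely representing no brush contact.

    ft_raw: (N, 6) array of [Fx, Fy, Fz, Tx, Ty, Tz] sampled at native_hz.
    """
    # 9-frame moving average (edge padding keeps the output length unchanged)
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(ft_raw, ((pad, pad), (0, 0)), mode="edge")
    smoothed = np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(6)], axis=1)

    # Downsample 1,000 Hz -> 100 Hz by decimation
    step = native_hz // target_hz
    downsampled = smoothed[::step]

    # Discard frames whose total force magnitude falls below the contact threshold
    f_mag = np.linalg.norm(downsampled[:, :3], axis=1)
    return downsampled[f_mag >= f_min]
```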
Classification models
We trained two classifiers, one per experiment, to identify which of the eight regions was being brushed.
Each model used a neural network with three hidden layers (512, 128, and 32 neurons), employing ReLU activation and dropout (p = 0.1) after each layer. We adopted the Adam optimizer (learning rate 0.001, batch size 32) with a cross-entropy loss function:

$$\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{8} y_{i,c} \log g_{\phi}(x_i)_c$$
where N is the number of samples, yi,c is the ground-truth one-hot label, and gϕ(xi)c is the predicted probability of class c. Experiment I used 6D force and torque data as input, while Experiment II used 6D pose data. The output identified which of the eight dental regions (1-8) was being brushed.
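A minimal PyTorch rendering of this classifier, matching the stated architecture (512-128-32 hidden units, ReLU, dropout p = 0.1, eight-way output); training hyperparameters beyond those quoted in the text are not specified and are omitted here:

```python
import torch
import torch.nn as nn

class RegionClassifier(nn.Module):
    """6D input (F/T or pose) -> logits over 8 brushing regions."""
    def __init__(self, in_dim=6, n_classes=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(512, 128), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(128, 32), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)  # raw logits; CrossEntropyLoss applies softmax internally

model = RegionClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```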
Results
Figure 4D and E compares the classification performance of the commercial ATI Axia80-M20 sensor and the VBDeformP sensor using an 80/20 train-test split of data from all 10 participants. Both sensors achieved high accuracy, with the VBDeformP sensor (98.12%) slightly outperforming the ATI sensor (97.35%). Diagonal confusion-matrix entries exceeded 95%, indicating robust classification.
For the ATI Axia80-M20 sensor, individual region accuracies ranged from 92.8% to 100% in Figure 4D, with perfect recognition in Regions 4 and 5. The main confusion involved Regions 1 and 8 (4.8% misclassification). In contrast, the VBDeformP sensor reduced the misclassification rate between these two regions to 1.5% in Figure 4E, achieving flawless classification in Regions 4 and 5. A leave-one-participant-out cross-validation analysis in Figure 4F confirmed consistent sensor performance across different users: both sensors achieved identical mean F1-scores of 0.96. The ATI sensor scores were calculated from direct force and torque measurements, while the VBDeformP scores were derived from ArUco marker pose data.
F/T perception in quasi-static and dynamic scenarios
We assessed the ability of our VBDeformP sensor to infer forces and torques under both quasi-static and dynamic conditions.
Quasi-static experiments
For the quasi-static test, a push rod applied controlled loads at different angles and depths. The ATI Axia80-M20 F/T sensor was sampled at 1,000 Hz while the internal camera operated at 60 Hz. An asynchronous forward-matching algorithm applied a 50 ms tolerance window to align the data, yielding about 5,000 matched pose-force pairs per round. The maximum applied forces reached 25 N, with torques up to 2.5 N·m.
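The forward-matching step pairs each 60 Hz pose frame with a 1,000 Hz force sample. A sketch under the stated 50 ms tolerance, assuming both streams carry timestamps in seconds; the authors’ exact matching rule may differ in detail:

```python
import numpy as np

def forward_match(pose_t, ft_t, tol=0.050):
    """For each pose timestamp, find the first F/T sample at or after it
    within `tol` seconds. Returns (pose_idx, ft_idx) pairs."""
    pairs = []
    j = np.searchsorted(ft_t, pose_t)  # first ft_t[k] >= pose_t[i] for each i
    for i, k in enumerate(j):
        if k < len(ft_t) and ft_t[k] - pose_t[i] <= tol:
            pairs.append((i, k))
    return pairs
```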
Dynamic experiments
For real-world dental brushing, we used data from Experiment II in Section “Brushing regions classification performance” to train a neural network under typical brushing motions. An MLP with three hidden layers (1,024, 128, and 32 neurons), ReLU activation, and dropout was implemented in PyTorch. Inputs were Z-score–normalized pose data; outputs were the corresponding 6D force and torque values, trained by minimizing the mean squared error:

$$\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left\lVert f_{\theta}(P_i) - F_i \right\rVert^2$$
where Fi is the ground-truth F/T measurement, fθ(Pi) is the predicted F/T, and N is the sample size. Training proceeded via Adam (learning rate 0.001, batch size 32). The final model iteration with the lowest validation MSE was saved for testing. All predictions ran on a laptop with an NVIDIA RTX 3050 GPU, achieving an average processing time of 1.27 ms per inference.
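A compact sketch of this training setup. The 1,024-128-32 layer widths and the lowest-validation-MSE checkpoint rule follow the text; the dropout probability (reused from the classifier) and the epoch count are assumptions, and `train_loader`, `P_val`, and `F_val` are hypothetical data objects:

```python
import copy
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(6, 1024), nn.ReLU(), nn.Dropout(0.1),   # dropout p assumed = 0.1
    nn.Linear(1024, 128), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(128, 32), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(32, 6),                                  # [Fx, Fy, Fz, Tx, Ty, Tz]
)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

best_val, best_state = float("inf"), None
for epoch in range(200):                      # epoch count is an assumption
    model.train()
    for P_batch, F_batch in train_loader:     # batches of 32 pose/F-T pairs
        optimizer.zero_grad()
        loss = criterion(model(P_batch), F_batch)
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        val = criterion(model(P_val), F_val).item()
    if val < best_val:                        # keep the lowest-validation-MSE model
        best_val, best_state = val, copy.deepcopy(model.state_dict())
```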
Figure 5A and B illustrates the predicted and ground-truth six-dimensional F/T data, showcasing close agreement under both quasi-static and brushing conditions. The VBDeformP sensor naturally filters high-frequency noise in Figure 5C while preserving significant low-frequency force trends characteristic of brushing motions in Figure 5D[9].
Figure 5. F/T inference results. (A) F/T inference comparison under quasi-static random load application, showing predicted versus ground truth values across six F/T components; (B) Time-series F/T signal analysis during brushing operations; (C) Demonstration of high-frequency noise attenuation by the sensor; (D) Validation of accurate lower-frequency force variation tracking; (E) Quasi-static force performance assessment showing MAE (bar plots, left y-axis) and relative error (line plots, right y-axis); (F) Dynamic force performance assessment; (G) Quasi-static torque performance assessment; (H) Dynamic torque performance assessment; (I) Scatter plot validation for quasi-static conditions, demonstrating strong linear correlation (R2 > 0.95 for forces); (J) Scatter plot analysis in brushing scenarios shows similarly high R2 values across most axes, except for Tz, due to the minimal real-world torque in this axis. F/T: Force/torque; MAE: mean absolute error.
Quantitative error analysis
We evaluated performance using mean absolute error (MAE) and relative error. For quasi-static tests, the reference loads were 25 N and 2.5 N·m, while a reduced range of 4 N and 0.4 N·m was used for dynamic brushing (reflecting typical oral care loads).
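Concretely, the relative error here is the MAE normalized by the full load range; for instance, a 0.55 N MAE over the 25 N quasi-static range gives roughly 2.2%. A one-line sketch of the metric:

```python
import numpy as np

def mae_and_relative(pred, truth, load_range):
    """MAE and relative error (% of full range) for one F/T component."""
    mae = np.mean(np.abs(pred - truth))
    return mae, 100.0 * mae / load_range

# e.g., mae_and_relative(f_pred, f_true, 25.0) -> (0.55, 2.2) in the quasi-static case
```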
Under quasi-static forces in Figure 5E, Fx, Fy, and Fz attained MAEs of 0.30, 0.28, and 0.38 N, respectively, with a total force MAE of 0.55 N. The relative errors were 1.20%, 1.12%, and 1.50%, respectively, and 2.19% for total force. In dynamic brushing in Figure 5F, the force range was 4 N, so absolute errors dropped with MAE values of 0.15, 0.16, and 0.10 N for individual components and a total force MAE of 0.16 N. However, relative errors increased to 3.73%, 4.05%, and 2.50%, respectively, with a total force relative error of 4.10%. See Supplementary Movie 1 for a video demonstration.
For quasi-static torques in Figure 5G, Tx, Ty, and Tz had MAEs of 0.043, 0.046, and 0.034 N·m, respectively, with a total torque MAE of 0.067 N·m. Relative errors (2.5 N·m range) remained below 2% for individual components and 2.68% overall. Under dynamic conditions in Figure 5H, MAEs decreased further (0.009-0.023 N·m), but relative errors increased (2.25%-5.75%) because of the smaller (0.4 N·m) torque range typical of brushing.
Correlation analyses confirmed the sensor’s effectiveness. In quasi-static tests in Figure 5I, all force components achieved R2 > 0.95, and the torque components reached R2 values of 0.95, 0.96, and 0.82 for Tx, Ty, and Tz, respectively. Dynamic brushing in Figure 5J also showed high R2 values (> 0.92) for the three force axes and for Tx and Ty. The negative R2 for Tz results from the minimal torque applied about this axis during typical brushing, leaving predominantly noise in the ATI sensor measurements.
Further analysis revealed that three participants performed strong lateral brushing motions, which can be potentially harmful to the gums and enamel, as shown in Figure 6A. To capture additional torque data along Tz, these participants repeated their brushing motions without powering the electric toothbrush, generating roughly 15,000 new pose–F/T data points. We retrained the MLP on 60% of this data, with 20% used for validation. For a previously unseen participant (the remaining 20%), torque predictions along Tz achieved an MAE of 0.034 N·m with R2 = 0.78, indicating improved tracking of this specific lateral movement in Figure 6B.
Figure 6. Demonstration of lateral brushing pattern and model prediction performance. (A) Harmful lateral brushing pattern observed during oral care; (B) Model performance on Tz torque for a participant excluded from training.
These quantitative results demonstrate the VBDeformP sensor’s capabilities and validate our soft material and design choices, which were optimized explicitly for oral healthcare applications. The use of 3D-printed TPU as the primary material exemplifies the advantages of soft perception technology in healthcare settings. The sensor’s relatively linear stress-strain response under typical oral care loads[51] contributes to the high R2 values observed in force measurements, while TPU’s intrinsic superelasticity and viscoelasticity[52,53] serve dual purposes: preserving low-frequency force profiles essential to brushing motions and attenuating high-frequency vibrations from electric toothbrushes. This inherent mechanical filtering characteristic eliminates the need for electronic filtering systems, demonstrating how soft materials can simplify medical sensing solutions.
A key element of this VBDeformP platform is the ArUco marker system, which efficiently encodes structural deformations into six-degree-of-freedom pose changes. This approach transforms force and torque estimation into a computationally lightweight pose-tracking problem, enabling rapid processing[38] while reducing system complexity. Although researchers have explored various visual markers for deformation sensing, such as dot arrays[54-56], color blocks[57,58], and LED-illuminated elastomers[42,59,60], these alternatives typically demand intensive computational resources or more complex setups. This computation-efficient approach, utilizing ArUco markers, is coupled with manufacturability enabled by parametric design (Rhino 7 with Grasshopper) and additive manufacturing, providing a practical solution for real-world applications. The combined use of TPU for deformable elements and resin for rigid components optimizes both cost and scalability, establishing a generalizable framework for developing various soft deformation sensors with minimal architectural modifications.
We developed a web-based interface to enhance system usability in real-world applications. The interface provides real-time visualization of classification results and F/T measurements, featuring a majority-voting filter that updates brushing regions only when at least two of the three most recent predictions agree, effectively reducing classification noise while maintaining responsiveness [Supplementary Movie 2]. The system demonstrates significant practical advantages over conventional sensors: while the ATI model requires a 30-minute warm-up period, this system initializes within 2 s by capturing 100 unloaded reference frames. The waterproof design ensures reliable operation despite exposure to toothpaste and water, while its lightweight construction (280 g including the jaw model, excluding counterweights) and standard USB connectivity facilitate easy deployment. Furthermore, our system offers substantial cost benefits. Both the soft structure and 3D-printed components are inexpensive and easily replaceable, eliminating the maintenance concerns associated with traditional force sensors when exposed to high-frequency oscillations from electric toothbrushes. This disposable approach maintains consistent performance while avoiding the degradation issues of conventional sensors. These features collectively enable the delivery of effective feedback in educational settings.
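The majority-voting filter amounts to a short stateful routine. A minimal sketch, assuming integer region labels 1-8 from the classifier; the displayed region updates only when at least two of the three most recent predictions agree:

```python
from collections import Counter, deque

class MajorityVoteFilter:
    """Update the displayed brushing region only on 2-of-3 agreement."""
    def __init__(self, window=3, quorum=2):
        self.history = deque(maxlen=window)
        self.quorum = quorum
        self.current = None

    def update(self, prediction):
        self.history.append(prediction)
        label, count = Counter(self.history).most_common(1)[0]
        if count >= self.quorum:
            self.current = label
        return self.current  # last stable region, or None before convergence
```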
Despite these practical advantages, our error analysis identified several areas for improvement. Error analysis reveals a primary technical challenge in torque estimation around the z-axis (Tz, R2 = 0.82 in quasi-static tests). An attempted solution using an alternative “L”-shaped configuration with three smaller ArUco markers yielded mixed results: while improving Tz predictions (R2 from 0.82 to 0.90), this configuration compromised performance along other axes due to reduced marker size and less reliable localization. Additionally, our current experimental setup, where the soft sensor is placed on the F/T sensor for simultaneous measurement, follows a widely-adopted approach in quasi-static force validation[38,59,61]. However, under high-frequency impact loads, particularly those generated by electric toothbrushes, the force attenuation through the soft sensor layer may affect measurement accuracy. The reliability and accuracy of this conventional setup under such dynamic loading conditions warrant further investigation. Moreover, the limited demographic representation in the current user dataset constrains system generalizability, as participants were restricted to healthy, right-handed adults.
Future work will address these challenges through technical optimization and expanded clinical validation. To address the Tz measurement limitations, technical improvements will focus on more sophisticated ArUco marker arrays and enhanced machine-learning models that achieve robust Tz measurements without compromising performance on the other axes. To better characterize force transmission through the system, we will quantify force attenuation effects via systematic experimental validation, including automated testing platforms that apply precisely controlled loads, particularly the high-frequency impacts generated by electric toothbrushes. To overcome the current demographic limitations, comprehensive clinical validation will involve large-scale user studies across diverse demographics, age ranges, oral health profiles, and brushing techniques. Specifically, we will include children, elderly individuals, left-handed individuals, and those with various oral health conditions to validate system robustness across different user groups and accelerate the adoption of VBDeformP in community-based oral health education.
Oral care teaching evaluation
To assess the effectiveness of our oral care teaching system, we conducted a user study with 10 participants who had previously participated in our data collection phase. The experiment followed a pre-test/post-test design with three distinct phases:
• Phase 1 (Pre-test): Participants performed simulated oral care procedures on a jaw model equipped with an ATI Axia80-M20 F/T sensor for brushing force measurement [Figure 4B]. Each participant completed three trials, systematically brushing all eight oral regions following the standardized Bass method sequence.
• Phase 2 (Intervention): Participants engaged with our oral care teaching system for 5 min, utilizing only the soft sensor (without the ATI sensor). During this phase, they received real-time feedback on brushing force and region via the interactive interface.
• Phase 3 (Post-test): Participants repeated the identical brushing protocol from Phase 1, completing three trials on the instrumented jaw model while force measurements were recorded using the ATI sensor.
Force measurements were acquired and analyzed from the ATI sensor for each of the eight oral regions during both Phase 1 (pre-test) and Phase 3 (post-test).
As shown in Figure 7A, the analysis revealed significant differences in brushing forces across regions before using our system. Region 4 exhibited the highest average force (3.004 ± 1.786 N), while Region 2 showed the lowest (1.926 ± 0.636 N). Overall, the pre-intervention average force was 2.425 ± 1.240 N, substantially exceeding the recommended brushing force range (1.5-2.0 N).
Figure 7. Teaching evaluation. (A) Mean brushing force and standard deviation across all eight oral regions before and after using the teaching system; (B) Box plot comparison of brushing forces before and after using the teaching system.
After using the teaching system, participants demonstrated significant improvements in force control. The force variation across regions decreased, with Region 8 showing the highest average force (1.956 ± 1.573 N) and Region 7 the lowest (1.544 ± 0.840 N). The overall average force decreased to 1.771 ± 1.158 N, representing a 26.97% reduction and bringing the average within the recommended range.
Figure 7B provides additional insight through box plots of the force distribution. Before using the system, forces were symmetrically distributed with a mean and median of 2.425 N. The interquartile range (IQR = Q3 - Q1 = 1.668 N) was relatively large, indicating considerable variability in force. The first quartile (Q1 = 1.501 N) barely reached the lower limit of the recommended range, while the third quartile (Q3 = 3.169 N) significantly exceeded the upper limit (2.0 N).
After the intervention, the force distribution shifted significantly toward the recommended range. The median decreased to 1.569 N, close to the lower limit of the recommended range, and the IQR narrowed to 1.149 N, indicating reduced variability. The right-skewed distribution (median 1.569 N < mean 1.771 N) suggests most participants maintained lower forces, with fewer instances of excessive force application. The third quartile decreased substantially from 3.169 to 2.137 N, further confirming the reduction in excessive brushing forces.
These results demonstrate that our oral care teaching system effectively helps users adjust and control their brushing forces to approach the recommended range, promoting better oral hygiene practices while reducing the risk of gingival trauma from excessive force.
The effectiveness of our system is supported by its optimized design parameters. Through extensive FEM simulations and user tests with 10 participants, we determined structural parameters that optimize measurement sensitivity while maintaining natural brushing experience. In our experiments with the optimized design, when participants applied a typical brushing force of approximately 2 N, the maximum displacement of the ArUco marker was around 3 mm. This controlled deformation achieved our design objectives, with all participants confirming that the slight movement of the jaw model during brushing did not interfere with their regular brushing technique. While these results demonstrate the practical viability of our design, broader validation studies across diverse user groups will be valuable for further parameter refinement.
CONCLUSIONS
This work presents a VBDeformP soft F/T sensor to promote effective oral hygiene education in community settings. The integrated 3D-printed TPU structure, resin frame, and ArUco marker, combined with a robust machine-learning framework, enable accurate six-dimensional force and torque inference under both quasi-static and dynamic scenarios. The implemented adaptive image binarization method ensures reliable marker tracking over various lighting conditions, bolstering the vision system’s versatility. The platform’s key advantages include rapid initialization, portability, real-time feedback, and inherent mechanical filtering of high-frequency vibrations from the toothbrush. Experimental results highlight its feasibility as a low-cost, scalable solution for delivering immediate, personalized guidance in proper brushing techniques. This approach exemplifies how soft perception can bridge the gap between traditional rigid sensors and human-centered healthcare applications, potentially extending to other biomechanical monitoring and training scenarios.
DECLARATIONS
Authors’ contributions
Data curation: equal; formal analysis: equal; investigation: equal; methodology: equal; software: lead; validation: lead; visualization: lead; writing - original draft: lead: Dong, C.
Conceptualization: lead; formal analysis: equal; investigation: equal; methodology: equal; validation: supporting; visualization: supporting; writing - original draft: equal: Dai, X.
Conceptualization: supporting; formal analysis: supporting; investigation: supporting; methodology: supporting; resources: supporting: Pan, Y.
Formal analysis: supporting; investigation: supporting; methodology: supporting; validation: supporting; visualization: supporting: Qiu, W.
Investigation: supporting; validation: supporting; visualization: supporting; writing - review and editing: supporting: Li, S.
Conceptualization: supporting; formal analysis: supporting; investigation: supporting; methodology: supporting; software: supporting; validation: supporting: Wu, T.
Formal analysis: supporting; investigation: supporting; validation: supporting: Jin, Y.
Formal analysis: supporting; methodology: supporting; software: supporting; validation: supporting: Wang, H.
Conceptualization: lead; funding acquisition: supporting; project administration: supporting; resources: supporting; supervision: supporting; writing - original draft: supporting; writing - review and editing: equal: Song, C.
Conceptualization: lead; funding acquisition: lead; methodology: lead; project administration: lead; resources: lead; supervision: lead; writing - review and editing: lead: Wan, F.
Availability of data and materials
All code and data are available at GitHub: https://github.com/ancorasir/VBDeformP4OralCare. Further inquiries can be directed to the corresponding author(s).
Financial support and sponsorship
This work was partly supported by the National Natural Science Foundation of China (62206119 and 62473189), Guangdong Basic and Applied Basic Research Foundation (2025A1515010424), and Shenzhen Long-Term Support for Higher Education at SUSTech (20231115141649002).
Conflicts of interest
All authors declared that there are no conflicts of interest.
Ethical approval and consent to participate
This study was conducted in accordance with the ethical guidelines and principles set forth by the Internal Review Board (IRB) of Southern University of Science and Technology under approval number 2024PES300. All participants were informed about the experimental procedure and signed the informed consent forms prior to participation.
Consent for publication
Not applicable.
Copyright
© The Author(s) 2025.
Supplementary Materials
REFERENCES
1. Bernabe, E.; Marcenes, W.; Hernandez, C. R.; et al.; GBD 2017 Oral Disorders Collaborators. Global, regional, and national levels and trends in burden of oral conditions from 1990 to 2017: a systematic analysis for the global burden of disease 2017 study. J. Dent. Res. 2020, 99, 362-73.
2. Watt, R. G.; Venturelli, R.; Daly, B. Understanding and tackling oral health inequalities in vulnerable adult populations: from the margins to the mainstream. Br. Dent. J. 2019, 227, 49-54.
3. Weik, U.; Shankar-Subramanian, S.; Sämann, T.; Wöstmann, B.; Margraf-Stiksrud, J.; Deinzer, R. “You should brush your teeth better”: a randomized controlled trial comparing best-possible versus as-usual toothbrushing. BMC. Oral. Health. 2023, 23, 456.
4. Van der Weijden, G. A. F.; van Loveren, C. Mechanical plaque removal in step-1 of care. Periodontol. 2000. 2023.
5. Palanisamy, S. Innovations in oral hygiene tools: a mini review on recent developments. Front. Dent. Med. 2024, 5, 1442887.
6. Pindobilowo; Tjiptoningsih, U. G.; Ariani, D. Effective tooth brushing techniques based on periodontal tissue conditions: a narrative review. FJAS 2023, 2, 1649-62.
7. Lee, Y. J.; Lee, P. J.; Kim, K. S.; et al. Toothbrushing region detection using three-axis accelerometer and magnetic sensor. IEEE. Trans. Biomed. Eng. 2012, 59, 872-81.
8. Marcon, M.; Sarti, A.; Tubaro, S. Toothbrush motion analysis to help children learn proper tooth brushing. Comput. Vis. Image. Underst. 2016, 148, 34-45.
9. Akther, S.; Saleheen, N.; Saha, M.; Shetty, V.; Kumar, S. mTeeth: identifying brushing teeth surfaces using wrist-worn inertial sensors. Proc. ACM. Interact. Mob. Wearable. Ubiquitous. Technol. 2021, 5, 1-25.
10. Wang, Y.; Hong, F.; Jiang, Y.; Bao, C.; Liu, C.; Guo, Z. ToothFairy: real-time tooth-by-tooth brushing monitor using earphone reversed signals. Proc. ACM. Interact. Mob. Wearable. Ubiquitous. Technol. 2023, 7, 1-19.
11. Herath, B.; Dewmin, G. H. S.; Sukumaran, S.; et al. Design and development of a novel oral care simulator for the training of nurses. IEEE. Trans. Biomed. Eng. 2020, 67, 1314-20.
12. Daigo, T.; Muramatsu, M.; Mitani, A. Development of the second prototype of an oral care simulator. JRM 2021, 33, 172-9.
13. Matsuno, T.; Yabushita, T.; Mitani, A.; Hirai, S. Measurement algorithm for oral care simulator using a single force sensor. Adv. Robot. 2021, 35, 723-32.
14. Mouri, N.; Sasaki, M.; Yagimaki, T.; Murakami, M.; Igari, K.; Sasaki, K. Development of a training simulator for caregivers’ toothbrushing skill using virtual reality. ABE 2023, 12, 91-100.
15. Anwar, A. I.; Zulkifli, A. The influence of demonstration method education in the knowledge of tooth brushing in children age 10-12 years. Enferm. Clín. 2020, 30, 429-32.
16. Shida, H.; Okabayashi, S.; Yoshioka, M.; et al. Effectiveness of a digital device providing real-time visualized tooth brushing instructions: a randomized controlled trial. PLoS. One. 2020, 15, e0235194.
17. Acherkouk, A.; Götze, M.; Kiesow, A.; et al. Robot and mechanical testing of a specialist manual toothbrush for cleaning efficacy and improved force control. BMC. Oral. Health. 2022, 22, 225.
18. Rajab, L. D.; Assaf, D. H.; El-Smadi, L. A.; Hamdan, A. A. Comparison of effectiveness of oral hygiene instruction methods in improving plaque scores among 8-9-year children: a randomized controlled trial. Eur. Arch. Paediatr. Dent. 2022, 23, 289-300.
19. Haresaku, S.; Miyoshi, M.; Kubota, K.; et al. Current status and future prospects for oral care education in Bachelor of Nursing curriculums: a Japanese cross-sectional study. Jpn. J. Nurs. Sci. 2023, 20, e12521.
20. Li, Y.; Ye, H.; Wu, S.; et al. Mixed reality and haptic-based dental simulator for tooth preparation: research, development, and preliminary evaluation. JMIR. Serious. Games. 2022, 10, e30653.
21. Ponce-Gonzalez, I.; Cheadle, A.; Aisenberg, G.; Cantrell, L. F. Improving oral health in migrant and underserved populations: evaluation of an interactive, community-based oral health education program in Washington state. BMC. Oral. Health. 2019, 19, 30.
22. Chandio, N.; Micheal, S.; Tadakmadla, S. K.; et al. Barriers and enablers in the implementation and sustainability of toothbrushing programs in early childhood settings and primary schools: a systematic review. BMC. Oral. Health. 2022, 22, 242.
23. Nakre, P. D.; Harikiran, A. G. Effectiveness of oral health education programs: a systematic review. J. Int. Soc. Prev. Community. Dent. 2013, 3, 103-15.
24. Qiu, Y.; Ashok, A.; Nguyen, C. C.; Yamauchi, Y.; Do, T. N.; Phan, H. P. Integrated sensors for soft medical robotics. Small 2024, 20, e2308805.
25. Zhu, J.; Zhou, C.; Zhang, M. Recent progress in flexible tactile sensor systems: from design to application. Soft. Sci. 2021, 1, 3.
26. Guess, M.; Soltis, I.; Rigo, B.; et al. Wireless batteryless soft sensors for ambulatory cardiovascular health monitoring. Soft. Sci. 2023, 3, 24.
27. Jiang, Y.; Huang, J.; Liu, H.; Xie, H.; Zhou, S. A vitrimer-like elastomer with quadruple hydrogen bonding as a fully recyclable substrate for sustainable flexible wearables. Adv. Funct. Mater. 2025, 2503128.
28. Liu, P.; Ding, E. X.; Xu, Z.; et al. Wafer-scale fabrication of wearable all-carbon nanotube photodetector arrays. ACS. Nano. 2024, 18, 18900-9.
29. Tian, Y.; Wei, Y.; Wang, M.; et al. Ultra-stretchable, tough, and self-healing polyurethane with tunable microphase separation for flexible wearable electronics. Nano. Energy. 2025, 139, 110908.
30. Aubeeluck, D. A.; Forbrigger, C.; Taromsari, S. M.; Chen, T.; Diller, E.; Naguib, H. E. Screen-printed resistive tactile sensor for monitoring tissue interaction forces on a surgical magnetic microgripper. ACS. Appl. Mater. Interfaces. 2023, 15, 34008-22.
31. Arshad, A.; Saleem, M. M.; Tiwana, M. I.; ur Rahman, H.; Iqbal, S.; Cheung, R. A high sensitivity and multi-axis fringing electric field based capacitive tactile force sensor for robot assisted surgery. Sens. Actuators. A. Phys. 2023, 354, 114272.
32. Vijayakanth, T.; Shankar, S.; Finkelstein-Zuta, G.; Rencus-Lazar, S.; Gilead, S.; Gazit, E. Perspectives on recent advancements in energy harvesting, sensing and bio-medical applications of piezoelectric gels. Chem. Soc. Rev. 2023, 52, 6191-220.
33. Chen, S.; Fan, S.; Chan, H.; et al. Liquid metal functionalization innovations in wearables and soft robotics for smart healthcare applications. Adv. Funct. Mater. 2024, 34, 2309989.
34. Li, T.; Su, Y.; Zheng, H.; et al. An artificial intelligence-motivated skin-like optical fiber tactile sensor. Adv. Intell. Syst. 2023, 5, 2200460.
35. Mun, H.; Diaz Cortes, D. S.; Youn, J. H.; Kyung, K. U. Multi-degree-of-freedom force sensor incorporated into soft robotic gripper for improved grasping stability. Soft. Robot. 2024, 11, 628-38.
36. Dong, K.; Wei, M.; Zhou, Q.; He, B.; Gao, B. Bionic diffractive meta-silk patch for visually flexible wearables. Laser. Photonics. Rev. 2024, 18, 2300972.
37. Gerald, A.; Russo, S. Soft sensing and haptics for medical procedures. Nat. Rev. Mater. 2024, 9, 86-8.
38. Wu, T.; Dong, Y.; Liu, X.; et al. Vision-based tactile intelligence with soft robotic metamaterial. Mater. Design. 2024, 238, 112629.
39. Wong, D. C. Y.; Song, J.; Yu, H. The design of a vision-based bending sensor for PneuNet actuators leveraging ArUco marker detection. IEEE. Sens. J. 2023, 23, 27137-45.
40. Deng, Y.; Yang, T.; Dai, S.; Song, G. A miniature triaxial fiber optic force sensor for flexible ureteroscopy. IEEE. Trans. Biomed. Eng. 2021, 68, 2339-47.
41. Zhang, T.; Chen, B.; Zuo, S. A novel 3-DOF force sensing microneedle with integrated fiber bragg grating for microsurgery. IEEE. Trans. Ind. Electron. 2022, 69, 940-9.
42. Di, J.; Dugonjic, Z.; Fu, W.; et al. Using fiber optic bundles to miniaturize vision-based tactile sensors. IEEE. Trans. Robot. 2025, 41, 62-81.
43. Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.; Marín-Jiménez, M. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern. Recognit. 2014, 47, 2280-92.
44. Yang, Z.; Ge, S.; Wan, F.; Liu, Y.; Song, C. Scalable tactile sensing for an omni-adaptive soft robot finger. In 2020 3rd IEEE International Conference on Soft Robotics (RoboSoft), New Haven, USA. 15 May - 15 Jul, 2020. IEEE; 2020. pp. 572-7.
45. Liu, X.; Han, X.; Hong, W.; Wan, F.; Song, C. Proprioceptive learning with soft polyhedral networks. Int. J. Robot. Res. 2024, 43, 1916-35.
46. Edwards, P. J.; Colleoni, E.; Sridhar, A.; Kelly, J. D.; Stoyanov, D. Visual kinematic force estimation in robot-assisted surgery - application to knot tying. Comput. Methods. Biomech. Biomed. Eng. Imaging. Vis. 2021, 9, 414-20.
47. Fu, J.; Yu, Z.; Guo, Q.; Zheng, L.; Gan, D. A variable stiffness robotic gripper based on parallel beam with vision-based force sensing for flexible grasping. Robotica 2024, 42, 4036-54.
48. Visentin, F.; Naselli, G. A.; Mazzolai, B. A new exploration strategy for soft robots based on proprioception. In 2020 3rd IEEE International Conference on Soft Robotics (RoboSoft), New Haven, USA. 15 May - 15 Jul, 2020. IEEE; 2020. pp. 816-21.
49. Berral-Soler, R.; Muñoz-Salinas, R.; Medina-Carnicer, R.; Marín-Jiménez, M. J. DeepArUco++: improved detection of square fiducial markers in challenging lighting conditions. Image. Vis. Comput. 2024, 152, 105313.
50. Amiriebrahimabadi, M.; Rouhi, Z.; Mansouri, N. A comprehensive survey of multi-level thresholding segmentation methods for image processing. Arch. Computat. Methods. Eng. 2024, 31, 3647-97.
51. Ye, C.; Cang, T.; Zhu, J.; Wang, Z.; Li, X. Soft Thermoplastic polyurethane/silver nanowire membranes with low hysteresis for large strain sensing and joule heating. ACS. Appl. Polym. Mater. 2024, 6, 11149-59.
52. Bek, M.; Betjes, J.; von Bernstorff, B.; Emri, I. Viscoelasticity of new generation thermoplastic polyurethane vibration isolators. Phys. Fluids. 2017, 29, 121614.
53. Zhang, Y.; Hou, F.; Lu, Z.; Ding, H.; Chen, L. Analytical and experimental study of thermoplastic polyurethane inclined beam isolator with quasi-zero stiffness and fractional derivative damping. Mech. Syst. Signal. Process. 2025, 224, 111962.
54. Zheng, H.; Jin, Y.; Wang, H.; Zhao, P. DotView: a low-cost compact tactile sensor for pressure, shear, and torsion estimation. IEEE. Robot. Autom. Lett. 2023, 8, 880-7.
55. Funk, N.; Helmut, E.; Chalvatzaki, G.; Calandra, R.; Peters, J. Evetac: an event-based optical tactile sensor for robotic manipulation. IEEE. Trans. Robot. 2024, 40, 3812-32.
56. Fang, B.; Zhao, J.; Liu, N.; et al. Force measurement technology of vision-based tactile sensor. Adv. Intell. Syst. 2025, 7, 2400290.
57. Lin, X.; Wiertlewski, M. Sensing the frictional state of a robotic skin via subtractive color mixing. IEEE. Robot. Autom. Lett. 2019, 4, 2386-92.
58. Zhang, G.; Du, Y.; Yu, H.; Wang, M. Y. DelTact: a vision-based tactile sensor using a dense color pattern. IEEE. Robot. Autom. Lett. 2022, 7, 10778-85.
59. Yuan, W.; Dong, S.; Adelson, E. H. GelSight: high-resolution robot tactile sensors for estimating geometry and force. Sensors 2017, 17, 2762.
60. Sun, H.; Kuchenbecker, K. J.; Martius, G. A soft thumb-sized vision-based sensor with accurate all-round force perception. Nat. Mach. Intell. 2022, 4, 135-45.