Research Article  |  Open Access  |  29 Nov 2024

An automotive tire visual laser marking robot system based on multi-information fusion

Intell Robot 2024;4(4):422-38.
10.20517/ir.2024.25 |  © The Author(s) 2024.

Abstract

As smart manufacturing technology continues to advance, laser marking robots are being applied to automotive tire marking, replacing the traditional manual process. To further improve productivity, these robots must cope with the challenges posed by tire positioning, environmental variation, and the light-absorbing properties of tire rubber. To address these problems, a robot vision modeling method based on the fusion of 3D point cloud information and 2D image information from the automotive tire surface is presented and used to construct a visual laser marking robot system for automotive tires. Tests of the constructed system show that laser marking is more effective than the traditional manual marking process: the laser marking robot system equipped with the multi-information fusion vision model increases the marking success rate by 8%, raises the marking speed by nearly nine times, reduces the tire scrap rate by 8%, and reduces the economic cost by a factor of nearly 56; compared with a marking robot system using a single source of visual information, the marking success rate increases by 3% and the tire scrap rate falls by 3%.

Keywords

Information fusion, vision modeling, robot, automotive tire, laser marking

1. INTRODUCTION

In recent years, the rapid development of automation and intelligent technology has brought revolutionary change to intelligent manufacturing and the automotive tire industry. In this context, laser marking robots are gradually being applied to laser marking on the surface of automotive tires. The advantages of these robots lie not only in replacing traditional manual processes, such as pre-embedded cycle cards, steel tickets, and vulcanized hollow bar codes, but also in the significant productivity gains they bring to automotive tire manufacturing. By using laser marking robots, tire manufacturers have been able to dramatically increase the level of automation in production, reducing reliance on human labor and thus significantly improving production efficiency. This shift not only helps shorten cycle times but also ensures the stability and consistency of product quality. At the same time, less human involvement means fewer errors due to human factors, further improving the overall quality of automotive tire manufacturing.

The marking accuracy of a laser marking robot depends largely on how accurately its vision system detects and identifies the automotive tire surface. Visual information is one of the main ways humans perceive the world and is a core research object in computer vision and image processing. Wang et al. described the basic principles of digital imaging and machine vision technology, categorized the current uses of machine vision in laser processing equipment, and pointed out the development directions and trends of machine vision in laser fine processing[1]. Zhang et al. designed a laser marking system guided by machine vision positioning, which combined laser marking technology with two-dimensional images to accurately mark arbitrarily placed workpieces on a conveyor belt[2]. Their tests showed that the system identifies regularly shaped workpieces with high precision, but it has clear limitations when applied to automotive tires with complex surface characteristics.

For target detection and recognition from a single information source, both single-stage detection networks, represented by the YOLO[3] series, and two-stage detection networks, represented by Region-based Convolutional Neural Networks (RCNN)[4], Fast RCNN[5], Faster RCNN[6], and Mask RCNN[7], have achieved great success in detecting targets in two-dimensional images. Zhao et al. proposed an automotive tire specification character recognition method based on the YOLOv5[9] network, making three major modifications to it[8]; their experiments showed that the method improved the efficiency and accuracy of tire specification character recognition. Wang et al. proposed a machine vision-based method for recognizing characters on the tire rubber surface, using 2D image denoising, character segmentation, and template matching to complete the recognition[10]; the experimental results showed that the character recognition accuracy met expectations. Kazmi et al. applied convolutional neural network (CNN) classifiers to text recognition, using two independent deep CNNs for character detection and recognition[11]; their results showed that text detection still has room for improvement. Chen et al. proposed an improved 2D image recognition method based on the DALSA image processing software system, using improved image processing and marking techniques to realize laser marking of tires, which improved positioning accuracy, simplified focusing, and reduced the scrap produced by marking[12]. Zheng et al. used 3D data for automotive tire defect detection[13]: a 3D dataset of tire surface scans was created with laser scanning, and a framework for tire defect detection based on 3D point cloud analysis was proposed; the experiments showed that defect types that are challenging for X-ray inspection can be detected effectively. These single-source methods achieved good results. However, their experimental environments are mostly idealized and simple, far from actual marking conditions. Moreover, a single source of information is vulnerable to interference from tire positioning, to the large differences in the marking environments of tire manufacturers, and to the light-absorbing properties of the tire material itself, all of which affect the detection and identification of automotive tires.

For target detection and recognition from multiple information sources, 2D images provide rich texture, character, and semantic information, while 3D point clouds offer depth data; fusing the two can further improve detection and recognition accuracy. Chen et al. proposed multi-view 3D object detection (MV3D), which takes 3D point clouds and 2D images as input, first generates candidate proposals with a region proposal network, then maps the proposals into three views and detects targets by fusing information from those views; because multi-view mapping loses original information, the detection accuracy of MV3D is limited[14]. Ku et al. proposed aggregate view object detection (AVOD), which first uses CNNs to generate feature maps for 2D images and for the top view of 3D point clouds, then generates proposals from the fused feature maps, and finally uses high-confidence proposals together with the feature maps for target classification and bounding-box regression; since the fused feature maps carry multi-dimensional feature information, AVOD is both more accurate and faster than MV3D[15]. However, its accuracy is still lower than that of F-PointNet[16], which processes 3D point clouds directly without multi-view mapping and fuses them with 2D image detection results. Xie et al. first projected 3D point clouds onto 2D images to generate 6D RGB point clouds[17]; they then extracted low-dimensional and high-dimensional feature maps from the 6D RGB point clouds and fused them to build a high-precision, real-time, two-stage deep neural network, PointRGBNet, which was tested on the public KITTI dataset. The results showed that PointRGBNet not only outperformed detection networks using only 2D images or only 3D point clouds, but also surpassed some advanced multi-sensor information fusion networks. Wu et al. proposed Bird-PointNet, a 3D target detection method based on remapping the point cloud top view, combining the strong target recognition ability of 2D image data with the more accurate spatial information of point cloud data[18]; top-view and 3D detection experiments on the KITTI dataset showed higher 3D detection accuracy than the baseline that encodes only the point cloud top view. Zhang et al. proposed a point cloud semantic segmentation method, the fused graph convolutional network (FGCN), based on multi-scale information from 2D images and 3D point clouds[19]; experiments on the SSKIT dataset showed significant improvements in detection, recognition, and segmentation accuracy. These studies effectively fuse 3D point cloud and 2D image information in different ways to improve target detection, recognition, and semantic segmentation accuracy, providing a new approach for detecting and recognizing automotive tire surfaces.

For multi-sensor fusion systems, current advanced systems and methods achieve information complementarity and redundancy by integrating the advantages of multiple sensors, improving the accuracy and reliability of the information. Li et al. proposed a sensor fusion algorithm combined with a global optimization algorithm[20]: based on keyframes, feature points in the local map, sensor information, and loop-closure information, a graph-optimization-based global optimization algorithm was constructed to optimize the position and attitude of the intelligent hardware system and the positions of the spatial feature points. Xue et al. proposed a novel distance indicator based on a spiking neural network (SNN) that integrates multiple sensors[21]; the experimental results show that introducing the SNN enables effective multi-sensor data fusion and thus accurate, fast estimation of the distance to the signal source. Wang et al. proposed a multi-sensor data fusion algorithm that fuses filtered data from three single sensors to effectively address mechanical vibration interference and sensor measurement errors[22]. Chen et al. proposed a novel in-situ monitoring method for rapid defect detection based on a multi-sensor fusion digital twin (MFDT) for localized quality prediction, combined with a machine learning (ML) model for data fusion[23]. These advanced multi-sensor fusion systems and methods provide a theoretical basis for the surface inspection, character recognition, and positioning of automotive tires in this work. However, automotive tires are black, their surfaces are extremely uneven, different types of rubber tires carry different fonts and symbols, and the production environment is complex, so the above systems and methods cannot be used effectively in actual tire marking work.

To improve the accuracy of detecting, identifying, and localizing automotive tire surfaces, this paper fuses the 3D point cloud information and the 2D image information of the tire surface, proposes a robot vision modeling method based on this fusion, and applies it to the construction of an automotive tire visual laser marking robot system. The multi-information fusion makes full use of the spatio-temporal characteristics of multiple visual sensors to detect, identify, and localize the inspection targets; the multiple sensors can simultaneously collect different feature information about the same target, providing support for laser marking. When constructing the multi-information fusion vision detection model, the laser marking robot must also improve the robustness and generalization ability of the vision model while keeping its computational complexity low.

2. ROBOT VISION MODELING BASED ON MULTI-INFORMATION FUSION

2.1. Robot vision modeling flow

The robot vision modeling flow is shown in Figure 1. First, 3D point cloud information and 2D image information on the surface of automotive tires are collected using multiple information sensors. The 2D image acquired by the camera is grayscaled and the "DOT" character is recognized through template matching. At the same time, the 3D point cloud image acquired by the line-scanning laser is also grayscaled, and the "DOT" character is recognized through template matching. The point cloud corresponding to the recognized "DOT" character in the 3D point cloud image is projected into the image through perspective transformation, and target-level information fusion is performed. Finally, the spatial position coordinates of the laser marker are determined by a multi-information fusion method.


Figure 1. Flowchart of robot vision modeling.

2.2. Multi-information acquisition

Multi-information fusion aims to effectively integrate and utilize diverse information resources to provide more accurate, reliable, coordinated, and stable decision-making. As shown in Figure 1, the acquisition of 3D point cloud information and 2D image information on the surface of automotive tires is the first step of robot vision modeling, and its quality directly affects the effect of subsequent fusion.

Each automotive tire surface has a "DOT" logo, as shown in Figure 2, which is an "ID card" and contains the date of production, manufacturer, and other information. In the laser marking process, the position of the "DOT" logo is a reference benchmark to determine the specific location of the target marking. Therefore, "DOT" is defined as a character template. Then, each character candidate region in the surface information of the automotive tire is matched with the character template, and the region with the highest matching degree is selected as the recognition result.


Figure 2. "DOT" marking on the surface of automotive tires.

As shown in Figure 3, the information acquisition equipment consists of a line-scanning laser and a camera, which convert real-world automotive tires into digital images. During acquisition, attention must be paid to the effects of lighting conditions, equipment resolution, focal length, and other factors on the quality of the information. Table 1 lists the detailed parameters of both devices.


Figure 3. Schematic diagram of robot vision information acquisition equipment.

Table 1

Detailed parameter description of information acquisition equipment

Equipment             Parameters
Camera                Model: Canon II D
                      Type: 35 mm focal-plane shutter camera
                      Lens: Serenar 50 mm f/2.8
                      Performance: equipped with clear imaging capability and a stable shutter speed for image capture under various lighting conditions
Line-scanning laser   Model: FU63511L5-GC12
                      Wavelength: 635 nm
                      Output power: 0.4-5 mW
                      Performance: characterized by high performance, good stability, and constant output power; it can generate high-quality point cloud data and is suitable for measuring and marking various complex surfaces and workpieces

2.3. Aberration correction

Due to lens processing and mounting errors, the camera imaging deviates from the ideal pinhole imaging model with radial and tangential distortions. The mathematical expression for the distortion correction is:

$$ \begin{equation} \begin{cases} x=X\left(1+k_1r^2+k_2r^4\right)+\left[2p_1XY+p_2\left(r^2+2X^2\right)\right]\\ y=Y\left(1+k_1r^2+k_2r^4\right)+\left[2p_2XY+p_1\left(r^2+2Y^2\right)\right] \end{cases} \end{equation} $$

where $$ r^2=X^2+Y^2 $$, $$ k_1 $$ and $$ k_2 $$ are the radial distortion coefficients, $$ p_1 $$ and $$ p_2 $$ are the tangential distortion coefficients, $$ (X, Y) $$ are the ideal imaging pixel coordinates, and $$ (x, y) $$ are the pixel coordinates after adding the distortion.
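A minimal numpy sketch of expression (1) follows, mapping ideal coordinates $$ (X, Y) $$ to distorted coordinates $$ (x, y) $$; the coefficient values are placeholders, since the actual values are obtained from camera calibration (Section 2.4).

```python
# Sketch of the distortion model in expression (1). Coefficient values are
# placeholders; in practice they come from camera calibration.
import numpy as np

k1, k2 = -0.12, 0.03      # radial distortion coefficients (placeholders)
p1, p2 = 0.001, -0.0005   # tangential distortion coefficients (placeholders)

def distort(X, Y):
    r2 = X**2 + Y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    x = X * radial + (2 * p1 * X * Y + p2 * (r2 + 2 * X**2))
    y = Y * radial + (2 * p2 * X * Y + p1 * (r2 + 2 * Y**2))
    return x, y

x, y = distort(np.array([0.1]), np.array([0.2]))
```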

2.4. Multi-information processing

In order to ensure that the tire surface information collected by the camera and the line-scanning laser correspond to each other, the data from the two sensors must be spatially matched, which is a prerequisite for multi-sensor data fusion. Spatial matching converts the respective coordinate systems of the camera and the line-scanning laser into a single common coordinate system, ensuring that both sensors observe the same target in the same coordinate system. This requires a joint calibration of the two sensors. The sensor coordinate systems shown in Figure 4 are established, in which the line-scanning laser coordinate system is $$ O_D-X_DY_DZ_D $$, the camera coordinate system is $$ O_C-X_CY_CZ_C $$, the image coordinate system is $$ o-xy $$, and the pixel coordinate system is $$ O-uv $$. To obtain the positional relationship between the two sensor coordinate systems, consider a point $$ P $$ in space with coordinates $$ (X_D, Y_D, Z_D) $$, $$ (X_C, Y_C, Z_C) $$, and $$ (u, v) $$ in the line-scanning laser, camera, and pixel coordinate systems, respectively. The conversion from the line-scanning laser coordinate system to the camera coordinate system is:

$$ \begin{equation} \begin{bmatrix}X_C\\Y_C\\Z_C\\1\end{bmatrix}=\begin{bmatrix}R&T\\0&1\end{bmatrix}\begin{bmatrix}X_D\\Y_D\\Z_D\\1\end{bmatrix} \end{equation} $$


Figure 4. Camera imaging schematic.

where $$ R $$ is the rotation matrix and $$ T $$ is the translation matrix.

Converting from the camera coordinate system to the pixel coordinate system, the mathematical expression is:

$$ \begin{equation} Z_C\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}\frac{1}{dx}&0&u_0\\0&\frac{1}{dy}&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}f&0&0\\0&f&0\\0&0&1\end{bmatrix}\begin{bmatrix}X_C\\Y_C\\Z_C\end{bmatrix}=\begin{bmatrix}f_x&0&u_0\\0&f_y&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}X_C\\Y_C\\Z_C\end{bmatrix} \end{equation} $$

where $$ f $$ is the focal length, and $$ dx $$ and $$ dy $$ are the physical dimensions of a single pixel along the $$ x $$ and $$ y $$ axes of the image plane, so that $$ f_x=f/dx $$ and $$ f_y=f/dy $$.

Combining expressions (2) and (3), the conversion from the line-scanning laser coordinate system to the pixel coordinate system is obtained:

$$ \begin{equation} Z_C\begin{bmatrix}u\\v\\1\end{bmatrix}=N_1\begin{bmatrix}R&T\\0&1\end{bmatrix}\begin{bmatrix}X_D\\Y_D\\Z_D\\1\end{bmatrix}=N_1N_2\begin{bmatrix}X_D\\Y_D\\Z_D\\1\end{bmatrix} \end{equation} $$

Where

$$ \begin{equation} N_1=\begin{bmatrix}f_x&0&u_0&0\\0&f_y&v_0&0\\0&0&1&0\end{bmatrix} \end{equation} $$

$$ \begin{equation} N_2=\begin{bmatrix}R&T\\0&1\end{bmatrix} \end{equation} $$

where $$ N_1 $$ is the internal reference matrix of the camera, and $$ N_2 $$ is the external reference matrix from the coordinate system of the line-scanning laser to the coordinate system of the camera.

Based on expression (4), the two parameter matrices $$ N_1 $$ and $$ N_2 $$ are solved to fuse the 3D point cloud acquired by the line-scanning laser with the image pixels. $$ N_1 $$ is obtained with Zhang's calibration method[24] using the MATLAB camera calibration toolbox. $$ N_2 $$ is obtained by extracting the 3D and 2D coordinates of the corner points of a planar calibration plate and solving with the iterative solvePnP[25] method in OpenCV 3.4; the point cloud can then be projected onto the image.
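The following sketch illustrates this calibration and projection step with OpenCV's solvePnP and projectPoints; the intrinsic parameters and the calibration-board correspondences are placeholder values, not the calibration data used in the paper.

```python
# Sketch of solving N2 (R, T) from calibration-board correspondences and then
# projecting laser-frame 3D points onto the image, as in expression (4).
# All numeric values are placeholders.
import cv2
import numpy as np

# Intrinsic matrix N1 (placeholder values; obtained with Zhang's method in practice)
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume the image has already been undistorted (Section 2.3)

# Calibration-board corners: 3D in the line-scanning laser frame, 2D in the image
obj_pts = np.array([[0.0, 0.0, 0.5], [0.1, 0.0, 0.5],
                    [0.1, 0.1, 0.5], [0.0, 0.1, 0.5]])
img_pts = np.array([[400.0, 300.0], [520.0, 300.0],
                    [520.0, 420.0], [400.0, 420.0]])

# Extrinsics N2 = [R | T] from the laser frame to the camera frame
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)

# Project arbitrary laser-frame points onto the image
cloud = np.array([[0.05, 0.05, 0.5], [0.02, 0.08, 0.5]])
pixels, _ = cv2.projectPoints(cloud, rvec, tvec, K, dist)
```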

After grayscaling the 2D image of the automotive tire surface captured by the camera, the "DOT" character is identified by template matching, as shown in Figure 5. The minimum bounding rectangle of the identified "DOT" character is taken as the "DOT" region, and its area $$ S_1 $$ is calculated as:

$$ \begin{equation} S_1=\sqrt{\left(x_2-x_1\right)^2+\left(y_2-y_1\right)^2}\cdot\sqrt{\left(x_3-x_2\right)^2+\left(y_3-y_2\right)^2} \end{equation} $$


Figure 5. 2D image "DOT" region detection and recognition result.

where $$ (x_1, y_1) $$, $$ (x_2, y_2) $$, $$ (x_3, y_3) $$, and $$ (x_4, y_4) $$ are the coordinates of the four corners of the minimum bounding rectangle. Using the same processing as for the 2D image, the 3D point cloud image of the automotive tire surface acquired by the line-scanning laser is grayscaled and the "DOT" character is recognized by template matching, as shown in Figure 6. The minimum bounding rectangle of the identified "DOT" character is taken as the "DOT" region. This region is projected onto the image to obtain a new region $$ S_2 $$, and the area of the overlapping region of $$ S_1 $$ and $$ S_2 $$ is $$ S_3 $$.


Figure 6. 3D point cloud image "DOT" region detection and identification result.
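A minimal sketch of these area computations is shown below, assuming axis-aligned rectangles and placeholder coordinates: $$ S_1 $$ from the four corners of the image "DOT" rectangle, $$ S_2 $$ for the projected point-cloud rectangle, and $$ S_3 $$ as their intersection.

```python
# Sketch of the S1, S2, S3 computations described above.
# Axis-aligned rectangles and placeholder coordinates are assumed.
import math

def side(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

# Corners of the minimum bounding rectangle of "DOT" in the 2D image (placeholders)
c1, c2, c3, c4 = (350, 200), (520, 200), (520, 260), (350, 260)
S1 = side(c1, c2) * side(c2, c3)          # the S_1 formula above

rect1 = (350, 200, 520, 260)              # same region as (x_min, y_min, x_max, y_max)
rect2 = (355, 203, 525, 262)              # projected point-cloud "DOT" region
S2 = (rect2[2] - rect2[0]) * (rect2[3] - rect2[1])

# Overlapping area S3 of the two axis-aligned rectangles
w = max(0, min(rect1[2], rect2[2]) - max(rect1[0], rect2[0]))
h = max(0, min(rect1[3], rect2[3]) - max(rect1[1], rect2[1]))
S3 = w * h
```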

2.5. Multi-information fusion method

To take full advantage of the robot's sensors, overcome the limitations of a single sensor, and improve detection and identification accuracy in different production environments, a method for detecting and identifying the automotive tire surface is proposed that fuses the 3D point clouds and 2D images collected by the line-scanning laser and the camera, respectively. In this method, the overlap area $$ S_3 $$ is first calculated, and then the ratio of $$ S_3 $$ to $$ S_1 $$ is computed and denoted $$ K $$. In this paper, the judgment threshold for $$ K $$ is set to 0.98. When $$ K $$ is greater than or equal to 0.98, the "with" information fusion is used; i.e., the 3D point cloud coordinates corresponding to the geometric center of the overlapping region are used as the spatial position coordinates of the laser marker. When $$ K $$ is less than 0.98, the "or" information fusion is used; i.e., the spatial coordinates corresponding to the geometric center of the minimum bounding rectangle of the "DOT" character in the 3D point cloud image are used as the spatial coordinates of the laser marker.
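The decision rule can be sketched as follows; the geometric centers and the pixel-to-3D lookup are passed in as inputs, and cloud_lookup is a hypothetical helper, since the paper does not detail how a pixel is mapped back to its 3D point.

```python
# Sketch of the "with"/"or" fusion rule described above.
# cloud_lookup (pixel -> 3D point) is a hypothetical helper; values are placeholders.
K_THRESHOLD = 0.98

def marker_position(S1, S3, overlap_centre, cloud_dot_centre, cloud_lookup):
    K = S3 / S1 if S1 > 0 else 0.0
    if K >= K_THRESHOLD:
        # "with" fusion: 3D point at the centre of the overlapping region
        return cloud_lookup(*overlap_centre)
    # "or" fusion: 3D point at the centre of the point-cloud "DOT" rectangle
    return cloud_lookup(*cloud_dot_centre)

# Example call with placeholder values and a dummy lookup
pos = marker_position(S1=10200.0, S3=10100.0,
                      overlap_centre=(437.0, 231.0),
                      cloud_dot_centre=(440.0, 232.0),
                      cloud_lookup=lambda u, v: (u * 0.001, v * 0.001, 0.5))
```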

To evaluate the performance and advantages of the proposed method in this paper, we compare it with the popular multi-information fusion methods, i.e., Bayes estimation and artificial neural network methods.

The Bayes estimation method is a statistical inference method based on Bayes' theorem. It uses prior information and sample data to update the knowledge of unknown parameters and obtain the posterior distribution. In this paper, the prior distribution is assumed to be normal, and the posterior distribution is calculated from the observed data. The specific steps are: first, calculating the kernel of the posterior distribution from the prior distribution and the likelihood function of the observed data; then, normalizing to obtain the parameters of the posterior distribution; and finally, using the posterior distribution for parameter estimation and prediction.
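For reference, with a normal prior $$ \mathcal{N}(\mu_0, \sigma_0^2) $$ on the parameter and $$ n $$ observations with sample mean $$ \bar{x} $$ drawn from a normal likelihood with known variance $$ \sigma^2 $$, the conjugate posterior is also normal, $$ \mathcal{N}(\mu_n, \sigma_n^2) $$, with the parameters below; this is the textbook form, as the paper does not state its exact parameterization.

$$ \begin{equation} \sigma_n^2=\left(\frac{1}{\sigma_0^2}+\frac{n}{\sigma^2}\right)^{-1},\qquad \mu_n=\sigma_n^2\left(\frac{\mu_0}{\sigma_0^2}+\frac{n\bar{x}}{\sigma^2}\right) \end{equation} $$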

An artificial neural network is an ML algorithm that simulates the neuronal structure of the human brain. It predicts and classifies unknown data by learning the mapping between inputs and outputs. In this paper, a multilayer feed-forward neural network with input, hidden, and output layers is used. The input layer receives data from the different sensors, the hidden layers extract features through nonlinear transformations, and the output layer produces the prediction results. The back-propagation algorithm is used for training, minimizing the prediction error by adjusting the network parameters. The number of neurons, the learning rate, and the number of iterations are selected experimentally.
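A minimal sketch of such a baseline is given below using scikit-learn's MLPClassifier; the feature vectors, labels, layer sizes, and hyperparameters are stand-ins, since the paper only states that they were chosen experimentally.

```python
# Sketch of a multilayer feed-forward network baseline trained with back-propagation.
# Features, labels, and hyperparameters are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # stand-in fused sensor feature vectors
y = rng.integers(0, 2, size=200)    # stand-in labels ("DOT" region / not)

clf = MLPClassifier(hidden_layer_sizes=(32, 16),  # hidden layers (assumed sizes)
                    learning_rate_init=1e-3,
                    max_iter=500,
                    random_state=0)
clf.fit(X, y)
accuracy = clf.score(X, y)
```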

Experiments were conducted using Bayes estimation method, artificial neural network method and the method in this paper, respectively, and the corresponding experimental results were obtained, as shown in Table 2.

Table 2

Experimental results of multi-information fusion methods

Method                             Detection object                       Number of detections   Correct detections   Accuracy rate (%)
Bayes estimation method            Automotive tire surface characters     200                    167                  83.5
Artificial neural network method   Automotive tire surface characters     200                    174                  87
Method of this paper               Automotive tire surface characters     200                    193                  96.5

According to Table 2, the Bayes estimation method shows good accuracy in the detection and recognition experiments on 200 automotive tire surface characters, but its detection accuracy is the lowest of the three, indicating that it has limitations in handling the complex problem of detecting and recognizing characters on tire surfaces. The detection accuracy of the artificial neural network method lies between that of the Bayes estimation method and the method in this paper; it shows strong multi-information fusion ability, but it suffers from overfitting and underfitting problems during training. The method proposed in this paper has the highest detection accuracy, 13 and 9.5 percentage points higher than the Bayes estimation method and the artificial neural network method, respectively. This indicates that the proposed method performs better in character detection and recognition on automotive tire surfaces and is suitable for scenarios requiring higher accuracy and more comprehensive information fusion.

3. AUTOMOTIVE TIRE VISION LASER MARKING ROBOT SYSTEM CONSTRUCTION

3.1. System hardware compositions

The automotive tire vision laser marking robot system comprises several hardware modules, centered on the vision laser marking module. As shown in Table 3, the system includes a robot (controller, laser marker, robotic arm, and information acquisition equipment), power supply, conveyor belt, code reader, detection sensors, power distribution box, and industrial control machine. It is highly integrated and multifunctional. The system is designed around machine vision to meet the needs of intelligent, automated tire manufacturing, and it supports automated conveying, intelligent marking, and autonomous inspection of automotive tires. It is suitable for large, medium, and small automotive tires in different production environments. The deployed hardware system is shown in Figure 7.

Table 3

System hardware components

Designation                         Introduction/parameters/functions
Controller                          Controls the entire laser marking robot, with the embedded visual inspection system for automotive tires
Laser marker                        Marking content: vulcanization code, cycle sign, personalized logo, etc.
Robotic arm                         Robotic arm with the flexibility to move to the desired position
Information acquisition equipment   A line-scanning laser and a camera
Power supply                        Voltage/frequency: 3-380 V / 50 Hz; total power: 4 kW; total current: 32 A
Conveyor belt                       Suitable for transferring large, medium and small automotive tires; speed: 0.5 m/s
Code reader                         Reads automotive tire information
Detection sensors                   Detect the real-time position of automotive tires
Power distribution box              Controls the power switches of individual pieces of equipment
Industrial control machine          Integrated software and hardware controller

Figure 7. Laser marker, conveyor belt, robotic arm, camera, line-scanning laser.

3.2. System software compositions

The system software consists of four core subsystems: the automotive tire visual inspection subsystem, the manufacturing execution subsystem, the decision control subsystem, and the work alarm subsystem. The subsystems interact and communicate with each other to jointly serve the entire laser marking robot system. The deployed software system is shown in Figure 8.


Figure 8. Physical diagram of the deployed software system.

The automotive tire visual inspection subsystem is an embedded software system developed on the basis of the multi-information fusion robot vision model. As shown in Figure 9, it communicates with the camera, the line-scanning laser, the code reader, and the manufacturing execution subsystem as needed to acquire 2D image data, 3D point cloud data, the tire molding barcode, and production information. It interfaces with the programmable logic controller (PLC) in the decision control subsystem, which invokes its functions through commands, and it communicates with the robot to transmit the current marking command.


Figure 9. Subsystem interaction diagram.
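Purely as an illustration of the interaction in Figure 9, the sketch below outlines one inspection-and-marking cycle; every class, method, and field name is hypothetical, as the actual subsystem interfaces are not published in the paper.

```python
# Purely illustrative sketch of one inspection-and-marking cycle from Figure 9.
# All class, method, and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class MarkCommand:
    tire_barcode: str    # tire molding barcode from the code reader
    position_xyz: tuple  # laser-marker coordinates from the vision model
    content: str         # e.g., vulcanization code or cycle sign

def inspection_cycle(camera, line_laser, code_reader, mes, plc, robot, vision_model):
    barcode = code_reader.read()                 # identify the incoming tire
    job = mes.lookup(barcode)                    # production/marking information
    image = camera.grab()                        # 2D image of the tire surface
    cloud = line_laser.scan()                    # 3D point cloud of the tire surface
    xyz = vision_model.locate(image, cloud)      # fusion model, Sections 2.4-2.5
    plc.report_status(barcode, ready=True)       # status to the decision control subsystem
    robot.send(MarkCommand(barcode, xyz, job.content))  # current marking command
```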

The manufacturing execution subsystem is an intelligent management system for tire workshop production. From the moment an automotive tire enters the marking system until marking is completed, the manufacturing execution subsystem transmits marking information in real time to optimize the marking of automotive tires. In the closed-loop marking process, accurate real-time marking information enables efficient guidance and rapid reporting of the marking status. Using the manufacturing execution subsystem has several advantages. First, it enables quick responses to changes in marking status, reduces non-value-added production activities, and improves marking and process efficiency. Second, accurate process status tracking and complete data recording provide more information for standardized tire marking management, which facilitates timely monitoring of defect rates, simplifies quality control, and minimizes waste of human resources and materials. Third, integrating various equipment and intelligent systems ensures a high degree of automation of workshop tire marking operations. Finally, the system supports full-process tracking of marked products, offering traceability of man-machine-material information throughout the entire lifecycle of the finished product.

The decision control subsystem uses a Siemens (Germany) PLC, a digital arithmetic controller for the automated control of automotive tire marking. The PLC consists of a microprocessor, instruction and data memory, input/output interfaces, a power supply, digital-to-analog conversion, and other functional units. Its advantage is that, during tire marking, control instructions from the automotive tire visual inspection subsystem can be loaded into its memory at any time, enabling real-time storage and transmission of operating instructions to other equipment for execution.

The work alarm subsystem monitors and tracks the closed loop of automatic tire marking in real time. When a fault occurs during operation, it responds quickly: the relevant fault information and the faulty parts are displayed on the monitor in real time, and a corresponding solution is given.

4. EXPERIMENTS AND ANALYSIS

Laser marking experiments on the surface of automotive tires were conducted with the constructed multi-information fusion laser marking robot system. To ensure the accuracy and comparability of the results, marking experiments with the same content were performed in the same environment for comparison with manual marking and with a single-information robot. The experiments were carried out and recorded under varying light conditions, i.e., low light, normal light, and strong light. The results are shown in Tables 4-6. In these tables, marking speed refers to the time required to complete the marking of a single tire, and economic consumption denotes the cost incurred in marking one tire.

Table 4

Experimental results (low light)

Method                           Marking speed (s)   Marking accuracy (%)   Abandoned tire rate (%)   Economic consumption (¥)
Manual                           180                 90                     10                        0.5
Single-information robot         21                  92                     8                         0.028
Multi-information fusion robot   21                  95                     5                         0.028
Table 5

Experimental results (normal light)

Method                           Marking speed (s)   Marking accuracy (%)   Abandoned tire rate (%)   Economic consumption (¥)
Manual                           180                 90                     10                        0.5
Single-information robot         21                  95                     5                         0.028
Multi-information fusion robot   21                  98                     2                         0.028
Table 6

Experimental results (strong light)

Method                           Marking speed (s)   Marking accuracy (%)   Abandoned tire rate (%)   Economic consumption (¥)
Manual                           180                 90                     10                        0.5
Single-information robot         21                  92                     8                         0.028
Multi-information fusion robot   21                  96                     4                         0.028

It is worth noting that during the experiments, the information for the single-information robot to detect and identify the surface of the automotive tires comes from the line-scanning laser.

As can be seen in Table 5, under normal lighting conditions, the multi-information fusion robot achieved an 8% improvement in marking accuracy compared to manual marking, with a corresponding 8% reduction in tire scrap; compared to the single-information robot, its accuracy improved by 3% and tire scrap fell by 3%. As can be seen in Table 4, in a low-light environment, the marking accuracy of the multi-information fusion robot increased by 5% compared to manual marking, with a corresponding 5% reduction in tire scrap; compared to the single-information robot, its accuracy improved by 3% and tire scrap fell by 3%. As can be seen in Table 6, in a strong-light environment, the multi-information fusion robot achieved a 6% improvement in marking accuracy and a corresponding 6% reduction in tire scrap compared to manual marking; compared with the single-information robot, its accuracy improved by 4% and tire scrap fell by 4%. In summary, the multi-information fusion robot is more accurate under the same marking conditions and comparable tasks. From manual to single-information to multi-information fusion marking, the effectiveness of the multi-information fusion method in improving marking accuracy and reducing tire scrap is clearly demonstrated. It is also worth noting that, according to the experimental results, different lighting environments have no effect on manual marking.

In addition to accuracy, the multi-information fusion robot also excels in speed. The experimental results show that it marks nearly nine times faster than manual marking. Compared to the single-information robot, the marking speed of the multi-information fusion robot was not affected even though it has to process information from multiple sensors. This suggests that multi-information fusion technology can not only improve marking accuracy but also maintain high marking efficiency. It is worth noting that, according to the experimental results, the marking speed is unaffected by lighting conditions regardless of the marking method.

From an economic point of view, the use of a multi-information fusion robot for tire marking offers significant cost advantages. The cost of marking with a multi-information fusion robot is reduced by a factor of almost 56 compared to manual methods. This cost reduction is mainly attributable to the fact that electrical energy consumption replaces the use of traditional tools and materials. In addition, electricity expenditure is lower compared to the cost of tools and materials for traditional manual marking, further emphasizing the cost-saving and resource-efficient nature of robotic laser marking.

In summary, the use of robots for automotive tire marking offers significant advantages over traditional manual methods in terms of speed, success rate, tire waste and economic outlay. In addition, the multi-information fusion robot integrates 3D point cloud and 2D image information of automotive tires compared to a single-information robot, which further highlights its advantages in improving the success rate and reducing tire waste. Further, the multi-information fusion method is less affected by environmental factors, such as lighting conditions. These aspects strongly demonstrate the superiority of our proposed multi-information fusion method in automotive tire marking.

5. CONCLUSIONS

In order to improve the detection, identification, and positioning accuracy for automotive tire surfaces, reduce the marking waste rate, and improve marking efficiency, a robot vision modeling method based on the fusion of 3D point cloud information and 2D image information from the tire surface is proposed, and an automotive tire visual laser marking robot system is constructed using this method. First, the 3D point cloud information and 2D image information of the tire surface are collected with multiple information sensors and processed. Then, based on the processed information, a "with"/"or" multi-information fusion method is proposed. Finally, a visual laser marking robot system for automotive tires is constructed on the basis of this fusion method.

Laser marking experiments on the surface of automotive tires are conducted based on the constructed automotive tire visual laser marking robot system. During the experimental process, statistics are made for the speed, accuracy, discard rate and economic consumption of marking, and comparative experiments are conducted with manual and single-information robots, respectively, under the same marking experimental environment. The results show that the use of robots for automotive tire marking has significant advantages over traditional manual methods in terms of speed, success rate, tire waste and economic expenditure. Moreover, the multi-information fusion robot integrates 3D point cloud and 2D image information of automotive tires compared to a single-information robot, which further improves the above advantages and proves the superiority of the multi-information fusion method for automotive tire marking.

In this paper, a robot vision modeling method based on the fusion of 3D point cloud information and 2D image information on the surface of automotive tires is investigated, a visual laser marking robot system for automotive tires is constructed according to this method, and the effectiveness of the system is verified. However, rapidly developing deep learning technology can automatically extract features and optimize models by learning from large amounts of data; applying it to the model construction of the automotive tire visual laser marking robot system to improve modeling accuracy and efficiency is a topic for further research.

First, a suitable deep learning model is selected or customized according to the task characteristics, such as the image type and feature complexity. Second, a large amount of high-quality image and point cloud data covering a variety of scenes and labels is acquired with 2D cameras and line-scanning lasers, and preprocessing operations such as data augmentation, normalization, and denoising are performed to improve data quality and reduce the difficulty of model training. Third, the preprocessed data is used to train the deep learning model, and the hyperparameters are tuned to optimize performance; notably, this process may include transfer learning. Fourth, the trained model is used to extract key features from the images, providing an accurate basis for marking. Fifth, based on the extracted features, algorithms are designed to optimize the marking path, reduce redundant operations, and improve marking accuracy and efficiency. Sixth, the deep learning module is seamlessly integrated into the existing robot system to ensure functional integrity and compatibility. Finally, multiple rounds of testing are conducted to verify the effect of integrating deep learning technology, including marking accuracy, efficiency, and system stability.

With the continuous progress of science and technology and the rapid development of various fields, the proposed method shows broad practical value and potential application prospects. The following paragraphs analyze its potential applications in aerospace, energy, and other fields, as well as its possible commercial application prospects, to fully demonstrate this practical value.

In the aerospace field, high-precision and high-stability information acquisition technology is crucial. The method proposed in this study realizes accurate measurement and marking of complex surfaces and workpieces through the combination of a 2D camera and a line-scanning laser, providing strong support for the manufacturing and inspection of aerospace equipment. In the manufacturing process of aerospace equipment, the method can be applied to the dimensional measurement and quality control of precision parts to ensure the manufacturing accuracy and reliability of the equipment. By generating high-quality point cloud data, the method can realize fault detection and maintenance of aerospace equipment, improving the operational efficiency and safety of the equipment.

In the field of energy, especially in the research and development and production of new energy equipment, the method can be used in the structural design and performance evaluation of new energy equipment, providing data support for the optimization and improvement of the equipment. Through real-time monitoring of the operating status of new energy equipment, the method can detect and deal with potential faults in a timely manner, ensuring stable operation and efficient power generation.

In addition, the method has a wide range of commercial applications. In the field of intelligent manufacturing, it can be used for quality inspection and precision control on automated production lines to improve production efficiency and product quality. By accurately measuring and recording the 3D information of cultural relics, it can provide a scientific basis and technical support for the protection and restoration of cultural relics. In the field of medical imaging, it can be used to assist doctors in disease diagnosis and treatment planning, improving the accuracy and efficiency of medical services.

In the future, we will continue to conduct in-depth research on the relevant technologies and applications of this method, and promote its wide application and in-depth development in more fields.

DECLARATIONS

Authors' contributions

Significant contributions to the conceptualization and design of the study and testing and analysis were made by: Ren C, Xu Y, Liu X

Provided technical, data, and material support: Liu H

Availability of data and materials

Data will not be shared because it involves the confidentiality of product information related to the partner company.

Financial support and sponsorship

None.

Conflicts of interest

Liu H is the president of Shanghai Good Intelligent Technology Co., Ltd. During the writing of this paper, Haibo Liu received support and resources, including but not limited to data access and use of experimental facilities, from Shanghai Good Intelligent Technology Co., Ltd. The company is not directly responsible for the content of this paper, but Haibo Liu, as a member of Shanghai Good Intelligent Technology Co., Ltd, worked in accordance with the company's principles of scientific integrity and academic standards. The other authors declare that they have no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2024.

REFERENCES

1. Wang J, Zhang L, Luo G, Zhang T, Jiang Z. Machine vision applied for laser processing system. Appl Laser 2009;29:523. Available from: https://www.opticsjournal.net/Articles/OJ22833ea80a2c84ab/References#art-nav. [Last accessed on 27 Nov 2024].

2. Zhang W, Xiao Z, Yan Z. Design of online laser marking system by vision guided based on template matching. J Phys Conf Ser 2021;1976:012047.

3. Gao C, Cai Q, Ming S. YOLOv4 object detection algorithm with efficient channel attention mechanism. In: 2020 5th International Conference on Mechanical, Control and Computer Engineering (ICMCCE); 2020 Dec 25-27; Harbin, China. IEEE; 2020. pp. 1764-70.

4. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition; 2014 Jun 23-28; Columbus, USA. IEEE; 2014. pp. 580-7.

5. Wang S, Yang N, Duan L, Liu L, Dong J. Small-size pedestrian detection in large scene based on fast R-CNN. In: 9th International Conference on Graphic and Image Processing (ICGIP); Qingdao, China. 2017.

6. Liu B, Zhao W, Sun Q. Study of object detection based on faster R-CNN. In: 2017 Chinese Automation Congress (CAC); 2017 Oct 20-22; Jinan, China. IEEE; 2017. pp. 6233-6.

7. Yayla R, Albayrak E, Yüzgeç U. Vehicle detection from unmanned aerial images with deep mask R-CNN. Comput Sci J Moldova 2022;30:148-69.

8. Zhao Q, Wei H, Zhai X. Improving tire specification character recognition in the YOLOv5 network. Appl Sci 2023;13:7310.

9. Arifando R, Eto S, Wada C. Improved YOLOv5-based lightweight object detection algorithm for people with visual impairment to detect buses. Appl Sci 2023;13:5802.

10. Wang H, Zhang X, Guo Y, Li W. Recognition of characters on tire rubber surface based on machine vision. J Electron Meas Instrum 2021;35:191-9.

11. Kazmi W, Nabney I, Vogiatzis G, Rose P, Codd A. An efficient industrial system for vehicle tyre (tire) detection and text recognition using deep learning. IEEE Trans Intell Transp Syst 2021;22:1267-75.

12. Chen Y, Xia Q, Wang J, Zhang W. Control system for laser marking tires with machine vision. Appl Laser 2010;30:191-9.

13. Zheng L, Lou H, Xu X, Lu J. Tire defect detection via 3D laser scanning technology. Appl Sci 2023;13:11350.

14. Chen X, Ma H, Wan J, Li B, Xia T. Multi-view 3D object detection network for autonomous driving. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017 Jul 21-26; Honolulu, USA. IEEE; 2017. pp. 6526-34.

15. Ku J, Mozifian M, Lee J, Harakeh A, Waslander SL. Joint 3D proposal generation and object detection from view aggregation. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2018 Oct 01-05; Madrid, Spain. IEEE; 2018. pp. 1-8.

16. Qi CR, Liu W, Wu C, Su H, Guibas LJ. Frustum pointNets for 3D object detection from RGB-D data. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2018 Jun 18-23; Salt Lake City, USA. IEEE; 2018. pp. 918-27.

17. Xie D, Xu Y, Lu F, Pan S. Real-time detection of 3D objects based on multi-sensor information fusion. Automot Eng 2022;44:340-9.

18. Wu Q, Li L. 3D object detection based on point cloud bird's eye view remapping. J South China Univ Technol 2021;49:39-46.

19. Zhang K, Chen R, Peng Z, Zhu Y, Wang X. FGCN: image-fused point cloud semantic segmentation with fusion graph convolutional network. Sensors 2023;23:8338.

20. Li X, Li Y. Research on the role of multi-sensor system information fusion in improving hardware control accuracy of intelligent system. Nonlinear Eng 2024;13:20240035.

21. Xue Y, Mou S, Chen C, et al. Rapid distance estimation of odor sources by electronic nose with multi-sensor fusion based on spiking neural network. Sensor Actuat B Chem 2025;422:136665.

22. Wang S, Yi S, Zhao B, et al. Sowing depth monitoring system for high-speed precision planters based on multi-sensor data fusion. Sensors 2024;24:6331.

23. Chen L, Moon SK. In-situ defect detection in laser-directed energy deposition with machine learning and multi-sensor fusion. J Mech Sci Technol 2024;38:4477-84.

24. Zhang Z. Flexible camera calibration by viewing a plane from unknown orientations. In: Proceedings of the Seventh IEEE International Conference on Computer Vision; 1999 Sep 20-27; Kerkyra, Greece. IEEE; 1999. pp. 666-73.

25. Zhang Q, He X, Yao S, Guo Z. Research on the fusion technology of camera and lidar based on ROS intelligent mobile robot. China Meas Test 2021;47:120-3. Available from: http://61.54.243.197:8089/KCMS/detail/detail.aspx?filename=SYCS202112019&dbcode=CJFQ&dbname=CJFD2021 [Last accessed on 27 Nov 2024].

About This Article

© The Author(s) 2024. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
