Open Access  |  Research Article
Intell Robot 2023;3(4):632-46. 10.20517/ir.2023.35 © The Author(s) 2023.

Rotating 3D laser mapping system for multi-rotor drones


School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, Sichuan, China.

Correspondence to: Prof. Hua Zhang, School of Information Engineering, Southwest University of Science and Technology, Room 814, Block 7A East, Mianyang 621010, Sichuan, China. E-mail:

This article belongs to the Special Issue Advances in Robotics, AI and Intelligence Control
© The Author(s) 2023. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Accurate positional estimation is an essential prerequisite for the regular operation of an autonomous rotary-wing Unmanned Aerial Vehicle (UAV). However, the limited field of view (FOV) of lidar makes it more challenging to localize a rotary-wing UAV in an unknown environment. To address the insufficient FOV of rotor drones and the observation blindness of lidar in complex environments, this paper designs a rotorcraft UAV system based on a rotating 3D lidar and proposes a simultaneous localization and mapping algorithm for the rotating 3D lidar. The algorithm first distinguishes between planar and edge features based on the curvature of the point cloud. Then, to reduce the impact of UAV motion and lidar rotation, Inertial Measurement Unit (IMU) messages and real-time rotation angles are used to compensate for these motions twice; the IMU measurements are used for state prediction, and an error-state iterated extended Kalman filter updates the residuals after matching line and surface features with a sub-map. Finally, smoother high-rate odometry is obtained through IMU pre-integration and a first-order low-pass filter. The experimental results show that the proposed rotating lidar unit gives the rotorcraft UAV a larger FOV in both indoor and outdoor conditions, which not only improves the environmental perception capability and positional estimation accuracy of the rotorcraft but also enhances its positioning reliability and flight stability in complex environments.


Unmanned aerial vehicle, rotating 3D lidar, position estimation, environment perception


In recent years, multi-rotor Unmanned Aerial Vehicles (UAVs) have gradually been adopted in several industries, such as aerial photography, agricultural plant protection, power inspection, film and television shooting, and post-disaster search and rescue, because of their autonomous take-off and landing capabilities, hovering capability, and high mobility and agility[1-4]. They have also been widely studied by both academia and industry[5]. However, remote-controlled rotor UAVs can be limited by various factors, for example, distance and the operator's field of view (FOV), so it is necessary to implement autonomous rotor UAVs. In autonomous rotorcraft UAVs, reliable localization information and strong environmental awareness are prerequisites for performing subsequent tasks[6]. Positioning information can be obtained by simultaneous localization and mapping (SLAM) technology, a crucial technology in industrial automation, autonomous driving, and mapping[7,8]. Over its development, cameras, lidar, and Inertial Measurement Units (IMUs) have served as the primary sensors for information acquisition. Although cameras are lightweight and small while providing rich color information, their lack of direct distance information, high computational demands, and sensitivity to light limit their use in SLAM techniques. Meanwhile, with the development of the manufacturing industry, 3D lidar has not only overcome these shortcomings of the camera but has also been widely used in SLAM systems because of its strong environmental awareness, high measurement accuracy, strong anti-jamming ability, and precise mapping details[9].

The pure laser odometry methods LOAM[10] and LEGO-LOAM[11] are classical lidar-based algorithms that match only by extracting line and surface features, which causes inaccurate odometry pose estimation in complex scenes or under intense robot motion and thus degrades the accuracy of the SLAM algorithm. The development of multi-sensor fusion has brought more attention to inertial laser SLAM based on lidar and IMU[12]; lidar-inertial odometry fusion methods can be divided into loosely coupled and tightly coupled. Compared with loosely coupled methods, tightly coupled methods achieve better localization results because they estimate the IMU bias online. The LIOM proposed by Ye et al. utilizes the same features as LOAM while optimizing the IMU pre-integration and inter-plane constraints using factor graphs[13]. Shan et al. proposed LIO-SAM, which uses factor graphs to optimize four different factors, further improving positional estimation accuracy and scalability[14]. As feature points and system dimensions grow, various variants of the Kalman filter are widely used, such as the extended Kalman filter, the unscented Kalman filter, and the iterated Kalman filter. Xu et al. proposed a tightly coupled laser SLAM based on an iterated Kalman filter, which uses a new Kalman gain calculation formula to reduce the computational complexity from the measurement dimension to the state dimension[15]. Although SLAM technology can provide high-frequency and accurate position information to autonomous rotary-wing UAVs, obtaining more map information from the environment and enhancing the environmental awareness of rotary-wing UAVs remain urgent problems to be solved.
The vertical FOV of the lidar in the above methods is limited, which weakens the mobile robot's perception of its surroundings in complex scenes and creates blind areas in the robot's FOV, significantly affecting subsequent work. Conventional remedies are to increase the number of lidar scanning beams to expand the vertical FOV or to add more lidars; these can alleviate the FOV blindness problem to some extent but pose a challenging task for production[16]. Rotating the entire lidar around a fixed axis to obtain a more extensive scanning range in the vertical direction is a more effective solution[17]. Droeschel et al. proposed a small, continuously rotating 3D laser scanner with a 360° × 270° FOV for UAV applications[18]. Bosse et al. proposed a 3D scan matching method based on a rotating 2D lidar, which can generate accurate local maps and reliable positional estimation of the vehicle[19]. However, there remains room for improvement in the carrier's perception of the environment.

Therefore, this paper first designs a UAV system with a rotating 3D lidar, in which the rotation device theoretically extends the lidar's vertical FOV to 360 degrees. Then, this paper proposes a SLAM algorithm for the rotating 3D lidar, built on the FAST-LIO framework, to achieve reliable localization of the UAV in complex environments and real-time building of the surrounding environment map.


2.1. Unmanned aerial system

The rotor-wing UAV system based on a rotating 3D lidar is in Figure 1, and the sensors and their parameters are in Table 1. The system mainly comprises an environment sensing unit, a data processing unit, and a UAV platform. The UAV platform is built based on a four-axis UAV, which is mainly responsible for carrying the rotating 3D lidar device to complete subsequent tasks; the environment sensing unit can realize the UAV's perception of the surrounding environment and provide real-time position information for the UAV; the data processing unit is responsible for various data processing and UAV control command issuance to ensure the UAV's autonomous flight.


Figure 1. Unmanned aerial system. IMU: Inertial Measurement Unit.

Table 1. Sensor parameters

Sensor name | Parameters
Flight controller | Pixhawk
Lidar | Velodyne VLP-16
Inertial measurement unit | Xsens MTi-300
Motor | Pulse tower intelligence
On-board computer | Manifold2-C

The rotating 3D lidar unit added in this paper is the UAV system's environment sensing unit. It mainly consists of a 3D lidar, an IMU, and a motor with an encoder; the periphery is its protective device. By mounting the lidar vertically on the rotating motor and spinning it in one direction, the FOV is extended to 360 × 360 degrees. Compared to conventional rotor UAVs, the proposed rotor UAV can effectively increase the FOV in the vertical direction.

2.2. Rotating external parameter calibration

As mentioned earlier, the rotating 3D lidar sensor platform consists of three main parts, and installation errors, i.e., external parameters, inevitably occur during assembly. Incorrect external parameters can distort the built map, cause localization drift, and so on. To eliminate the effect of installation errors on mapping and localization, the external parameters must be calibrated; in this paper, we utilize the team's previous work[20] for this calibration. This method requires no specific calibration targets and can be performed in any environment. Since the angle about the motor rotation axis is not observable, only the two rotational degrees of freedom other than the motor rotation direction are calibrated. To combine the information from the rotating motor and the 3D lidar, their messages must first be time-synchronized before the subsequent parameter calibration can be performed. Under the Robot Operating System (ROS), the system subscribes to the motor angle and point cloud topics and pairs messages with similar timestamps. This process aligns the current point cloud with the angle of the rotating servo, which can then be used for subsequent three-dimensional mapping and other work. First, a module divides all the scanned points into two half-scans using the rotation angle. Next, a grid-generation module divides all the half-scan points into small grids, ensures that grids at the closest positions but from different half-scans are consistent, and estimates the approximate probability that each grid contains a plane. Finally, based on the estimated probabilities, valuable grids are selected to extract planar points, associate planes, and efficiently estimate the parameters.
The method makes the lidar rotate one revolution at a constant speed, combines the angle information of the rotating motor and the original point cloud of the 3D lidar, maps the scanned point cloud to the 3D space, obtains the external parameters by the method of plane extraction and matching, and establishes the objective function, which is optimized by using the Ceres library. The workflow is shown in Figure 2.
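As a rough illustration of the half-scan construction step, the sketch below splits one motor revolution of points into two half-scans by encoder angle. The function name, inputs, and the 180° split threshold are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def split_half_scans(points, angles, split_angle=180.0):
    """Split one full motor revolution of lidar points into two half-scans.

    points : (N, 3) array of lidar points, one per measurement
    angles : (N,)  motor encoder angle in degrees, time-synchronized
             with each point (hypothetical inputs for illustration)
    """
    mask = (angles % 360.0) < split_angle
    return points[mask], points[~mask]

# toy usage: 8 points spread evenly over one revolution
pts = np.arange(24, dtype=float).reshape(8, 3)
ang = np.linspace(0.0, 315.0, 8)
first, second = split_half_scans(pts, ang)
assert len(first) == 4 and len(second) == 4
```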


Figure 2. External parameter calibration flowchart.

We denote the estimated parameters $$ \hat{w} _{y} ,\hat{w} _{z} $$ as a group $$ \hat{W}=(\hat{w} _{y} ,\hat{w} _{z}) $$. The calibrated parameters align all the plane matches and are calculated by

$$ W^{*}=\underset{\hat{W}}{\operatorname{argmin}} \sum_{\left(p_{f, i}, p_{l, j}\right) \in M_{ij}} F\left(p_{f, i}, p_{l, j} ; \hat{W}\right) $$

where the planes are from half-scans $$ S_{f} $$ and $$ S_{l} $$, respectively, and $$ F $$ is a function measuring the dissimilarity between the planes $$ p_{f,i} $$ and $$ p_{l,j} $$.
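The plane-alignment objective can be sketched with a generic least-squares solver in place of the Ceres library. Here the dissimilarity F is simplified to the misalignment of matched unit plane normals, and all names and numeric values are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R
from scipy.optimize import least_squares

def residuals(w, normals_f, normals_l):
    """Dissimilarity between matched planes from the two half-scans:
    here simply the misalignment of their unit normals after applying
    the candidate extrinsic rotation (w_y, w_z). An illustrative
    stand-in for the paper's plane-dissimilarity function F."""
    rot = R.from_euler("yz", w, degrees=True)
    return (rot.apply(normals_f) - normals_l).ravel()

# synthetic matched plane normals related by a known (y, z) rotation
true = [0.2, 0.3]
nf = np.eye(3)                                   # three plane normals
nl = R.from_euler("yz", true, degrees=True).apply(nf)
sol = least_squares(residuals, x0=[0.0, 0.0], args=(nf, nl))
assert np.allclose(sol.x, true, atol=1e-4)
```

Minimizing the stacked residuals recovers the two calibrated rotation angles from the matched planes.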

2.3. Rotational laser SLAM method

2.3.1. FAST-LIO algorithm

FAST-LIO is a filter-based, tightly coupled laser SLAM algorithm that uses lidar and an IMU as sensors and does not require an external reference between them. First, state prediction is performed by forward propagation of IMU measurements, along with point cloud feature extraction in the same way as in the literature [9]. Second, during backpropagation of the state and residual calculation, all newly formed feature points are transformed to the end moment of each lidar scan frame using the IMU data to remove motion distortion. Then, the error-state iterated extended Kalman filter (ESIEKF) sub-module checks for convergence; if it has not converged, the updated state is used to continue the iteration, obtaining new global points for feature matching again until the optimal state is reached; otherwise, the global map and odometry are updated. This tightly coupled approach significantly improves localization accuracy and map quality.

2.3.2. Rotating 3D laser SLAM algorithm

To enable the multi-rotor UAV to obtain more information about its surroundings, this paper proposes a rotating 3D laser SLAM algorithm based on FAST-LIO, whose workflow is shown in Figure 3. First, line and surface features are extracted from the lidar output, with additional motion compensation using the encoder angle and the IMU measurement data, respectively; the IMU measurements serve as the state prediction of the system. Then, the structural parameters and inter-sensor external parameters are combined to update the state and perform feature matching using the error-state iterated Kalman filter; the optimal state obtained at the end of the iteration is output as the odometry while the global map is updated. Finally, the odometry output frequency is increased using IMU pre-integration, and the odometry data are smoothed by a first-order low-pass filter to meet the conditions for stable flight of the rotorcraft UAV.


Figure 3. Rotational laser SLAM workflow diagram. IMU: Inertial Measurement Unit; SLAM: simultaneous localization and mapping.

(1) Angular and IMU-based motion compensation

Before state estimation, the raw data obtained from the sensors are first preprocessed; the data preprocessing of the proposed algorithm mainly contains three parts. The first part extracts the line and surface features from the original point cloud of the lidar output; the second part obtains the real-time rotation angle of the lidar through the encoder to complete the first motion compensation. Subsequently, the real-time IMU measurements are used for the second motion compensation and for state prediction.

(a) Extraction of point cloud feature

The feature point extraction in the rotating 3D lidar SLAM method is the same as in the literature [10]. Feature extraction is based on the curvature of points: points with larger curvature are considered edge points, and points with smaller curvature are considered plane points, where $$ S_{p} $$ represents a set of points in the same frame as the point $$ X_{(k,i)}^{L} $$, and $$ X_{(k,j)}^{L} $$ are points in $$ S_{p} $$. The curvature $$ c $$ is computed by

$$ c=\frac{1}{|S_{p}|\cdot \left\|X_{(k,i)}^{L}\right\|}\left\|\sum\limits_{j\in S_{p}}\left(X_{(k,i)}^{L}-X_{(k,j)}^{L}\right)\right\| $$
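A minimal sketch of this curvature computation and the edge/planar decision it supports; the window size and function name are assumptions for illustration:

```python
import numpy as np

def curvature(scan, i, window=5):
    """LOAM-style curvature of point i using its neighbours on the
    same scan line (a simplified sketch of the formula above)."""
    nbrs = np.r_[scan[max(0, i - window):i], scan[i + 1:i + 1 + window]]
    diff = scan[i] - nbrs                  # X_i - X_j for each j in S_p
    return np.linalg.norm(diff.sum(axis=0)) / (len(nbrs) * np.linalg.norm(scan[i]))

# points sampled from a straight line have (near) zero curvature,
# so the middle point would be classified as a planar feature
line = np.c_[np.linspace(1, 2, 11), np.zeros(11), np.zeros(11)]
assert curvature(line, 5) < 1e-9
```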

(b) Motion compensation

The rotational 3D laser SLAM algorithm requires two motion compensations of the acquired raw point cloud. First, the real-time rotation angle information is used to compensate for the motion introduced by the lidar's rotation. In each lidar scan frame, the angle values at the start and end moments are obtained, so every feature point within this interval is assigned an angle value obtained by linear interpolation, thus converting the lidar feature points from the rotational coordinate system $$ S $$ to the motor axis coordinate system $$ B $$. The calculation formula is described as

$$ angle_{po}=\left|\frac{et-ct}{et-st}\right|\cdot es_{angle}+\left|\frac{ct-st}{et-st}\right|\cdot last_{angle} $$

where $$ st $$ represents the beginning of the time period, $$ et $$ the end of the time period, $$ ct $$ the time of the current feature point, $$ angle_{po} $$ the angle corresponding to each feature point, $$ es_{angle} $$ the angle at the beginning of the current frame, and $$ last_{angle} $$ the angle at the end of the current frame.
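The interpolation can be sketched as a hypothetical helper that linearly blends the frame's start and end angles by the point's timestamp:

```python
def interp_angle(ct, st, et, es_angle, last_angle):
    """Linearly interpolate the motor angle of a feature point measured
    at time ct inside a lidar scan spanning [st, et]."""
    a = (et - ct) / (et - st)      # weight of the frame-start angle
    b = (ct - st) / (et - st)      # weight of the frame-end angle
    return a * es_angle + b * last_angle

# at the frame start the point gets the start angle; midway, the mean
assert interp_angle(0.0, 0.0, 0.1, 10.0, 20.0) == 10.0
assert interp_angle(0.05, 0.0, 0.1, 10.0, 20.0) == 15.0
```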

Each frame of data acquired by the lidar comes from a period of time rather than a single instant; during this period, the lidar or its carrier usually moves, causing inconsistencies in the origin of the current frame. Therefore, the IMU measurement data are used for the second motion compensation, converting the current frame to the coordinate system of its last moment and thereby transforming the feature points from the motor axis coordinate system $$ B $$ to the IMU coordinate system $$ I $$. The calculation can be described as

$$ q_{slerp} =\left|\frac{et-ct}{et-st}\right|\cdot q_{0} +\left|\frac{ct-st}{et-st}\right|\cdot q_{1} $$

where $$ q_{slerp} $$ represents the quaternion pose of the feature point at the current moment, $$ q_{0} $$ the quaternion pose of the first feature point in the period, and $$ q_{1} $$ the quaternion pose of the last feature point in the period. The quaternion is then converted into a rotation matrix, and the feature points are transformed into the lidar coordinate system. The two motion compensation schematics are in Figure 4.
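For this second compensation, interpolating between the frame's boundary orientations can be sketched with SciPy's spherical linear interpolation (the timestamps and angles below are toy values; the linear quaternion blend in the formula above approximates this interpolation for small rotations):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# IMU orientations at the start (q0) and end (q1) of one lidar frame
st, et = 0.0, 0.1
key = Rotation.from_euler("z", [0.0, 30.0], degrees=True)   # q0, q1
slerp = Slerp([st, et], key)

ct = 0.05                      # feature-point timestamp inside the frame
q = slerp([ct])                # interpolated orientation at ct
R_mat = q[0].as_matrix()       # quaternion -> rotation matrix, as in the text
assert np.allclose(q.as_euler("zyx", degrees=True)[0, 0], 15.0)
```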


Figure 4. Motion compensation process. IMU: Inertial Measurement Unit.

(2) State estimation with rotational external parameters

In this paper, the algorithm of the literature [15] is used as the main framework, and the rotation angle information of the lidar is added to the state equation. The discrete state equation can be described as follows

$$x=[_{}^{W} R_{I}^{T}, _{}^{W} P_{I}^{T}, _{}^{W} V_{I}^{T}, b_{w}^{T}, b_{a}^{T}, g^{T}, _{}^{I} R_{B}^{T}, _{}^{I} t_{B}^{T}, _{}^{S} \varphi _{L}^{T}, _{}^{S} t_{L}^{T}]^{T}\in \mathcal{M}=SO(3)\times \mathbb{R}^{15}\times SO(3)\times \mathbb{R}^{3}\times SO(3)\times \mathbb{R}^{3}$$

where $$ x $$ is the system state; $$ _{}^{W}R_{I} $$ and $$ _{}^{W}P_{I} $$ represent the rotation matrix and position of the IMU in the world coordinate system; $$ _{}^{W}V_{I} $$ represents the linear velocity of the IMU in the world coordinate system; $$ b_{w} $$ and $$ b_{a} $$ represent the gyroscope and accelerometer biases, respectively; $$ g $$ is the gravity vector in the world coordinate system; $$ _{}^{I}R_{B} $$ and $$ _{}^{I}t_{B} $$ represent the rotation and translation between the rotary motor axis and the IMU, respectively; and $$ _{}^{S} \varphi _{L} $$ and $$ _{}^{S} t_{L} $$ represent the rotation and translation between the lidar and the rotary motor, respectively. Note that there are five coordinate systems in the rotary 3D laser SLAM algorithm: the lidar coordinate system $$ L $$, the rotary motor coordinate system $$ S $$, the motor axis coordinate system $$ B $$, the IMU coordinate system $$ I $$, and the world coordinate system $$ W $$.

In this paper, the state is predicted and updated using an error-state iterative extended Kalman filter. State prediction starts when IMU data are received, at which point the $$ i $$-th propagation of the state $$ \hat{x} $$ is performed; $$ f(x,u,w) $$ is the first-order derivative of the state $$ \hat{x}_{i} $$ with respect to time, and the IMU noise is assumed to be zero at the start moment. Both are discrete models and can be described as

$$ \hat{x}_{i+1}=\hat{x}_{i}\oplus \bigtriangleup t*f(\hat{x}_{i},u_{i},0);\hat{x}_{0}=\bar{x}_{i-1} $$

$$ f(x_{i},u_{i},w_{i})=\begin{bmatrix} w_{mi}-b_{wi}-n_{wi}\\ _{}^{G} v_{Ii}\\ _{}^{G}R_{Ii}(a_{mi}-b_{ai}-n_{ai})+_{}^{G}g_{i} \\ n_{bwi}\\ n_{bai}\\ 0_{3\times1}\\ ...\\ \end{bmatrix} $$

where $$ \hat{x} $$ is the propagation value and $$ \bar{x} $$ is the update value. $$ w $$ is the IMU Gaussian white noise, and $$ u $$ is the IMU measurement, which can be written as

$$ w=[n_{w}^{T}, n_{a}^{T},n_{bw}^{T},n_{ba}^{T}]^{T} $$

$$ u=[w_{m}^{T}, a_{m}^{T}]^{T} $$

In this instance, the system state covariance matrix $$ \hat{P}_{i+1} $$ is calculated by

$$ \hat{P}_{i+1}=F_{\tilde{x}}\hat{P}_{i}F_{\tilde{x}}^{T}+F_{w}QF_{w}^{T} $$

where $$ \hat{P}_{i} $$ is the covariance matrix of the previous state and $$ Q $$ is the covariance matrix of the IMU Gaussian white noise; please refer to the literature [21] for the detailed derivation. $$ F_{\tilde{x}} $$ and $$ F_{w} $$ can be calculated by

$$ F_{\tilde{x}}=(\frac{\partial G(\tilde{x}_{i},g(0,0))}{\partial \tilde{x}_{i}})+(\frac{\partial G(0,g(\tilde{x}_{i},0))}{\partial g(\tilde{x}_{i},0)})\frac{\partial g(\tilde{x}_{i},0)}{\partial \tilde{x}_{i}} $$

$$ F_{w}=\frac{\partial G(0,g(0,w_{i}))}{\partial g(0,w_{i})}\frac{\partial g(0,w_{i})}{\partial w_{i}} $$

the error $$ \tilde{x} $$ between the ground-truth $$ x $$ and the propagated value $$ \hat{x} $$ is written as

$$ \tilde{x}_{i+1}=x_{i+1}\ominus \hat{x}_{i+1}=F_{\tilde{x}}\tilde{x}_{i}+F_{w}w_{i} $$

where $$ G(\tilde{x}_{i},g(\tilde{x}_{i},w_{i})) $$ and $$ g(\tilde{x}_{i},w_{i}) $$ are described as

$$ G(\tilde{x}_{i},g(\tilde{x}_{i},w_{i}))=((\hat{x}_{i}+\tilde{x}_{i})+g(\tilde{x}_{i},w_{i}))-(\hat{x}_{i}+g(0,0)) $$

$$ g(\tilde{x}_{i},w_{i})=\bigtriangleup t*f(x_{i},u_{i},w_{i}) $$
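The propagation equations above can be illustrated on a deliberately reduced state: position and velocity only, with a world-frame acceleration input. The full manifold state, the noise Jacobian $$ F_{w} $$, and the bias terms are omitted in this sketch:

```python
import numpy as np

def propagate(x, P, u, dt, Q):
    """One IMU propagation step of a simplified error-state filter:
    state x = [p, v] (world position and velocity), input u = world-frame
    acceleration. A toy stand-in for the full manifold state above."""
    p, v = x[:3], x[3:]
    x_next = np.r_[p + v * dt, v + u * dt]     # x_{i+1} = x_i (+) dt * f
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                 # error-state transition F
    P_next = F @ P @ F.T + Q                   # P_{i+1} = F P F^T + Q
    return x_next, P_next

x, P = np.zeros(6), np.eye(6) * 1e-4
x, P = propagate(x, P, u=np.array([0.0, 0.0, 1.0]), dt=0.01, Q=np.eye(6) * 1e-6)
assert np.isclose(x[5], 0.01)      # velocity integrates acceleration
assert P[0, 0] > 1e-4              # uncertainty grows during propagation
```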

Then, based on the error between the ground truth at the previous moment and the predicted value at the current moment, an iterative state update is performed, and the updated state is output when the update error is less than a threshold or the maximum number of iterations is reached. The residual $$ z $$, Kalman gain $$ K $$, and Jacobian matrix $$ H $$ are calculated during the state update by

$$ z_{j}^{k}=G_{j}(p^{G}-p),G_{j}=\left\{\begin{matrix} u_{j}^{T}, surface\\ \left \lfloor u_{j} \right \rfloor_{\times },edge \end{matrix}\right. $$

$$ K=PH^{T}(HPH^{T}+R)^{-1} $$

$$ H_{i}^{j}=\frac{\partial h_{i}(\hat{x}_{i}^{j}\oplus \tilde{x}_{i}^{j},0)}{\partial \tilde{x}_{i}^{j}} $$

where $$ G_{j} $$ and $$ u_{j} $$ are the normal vector to the plane or edge, $$ p_{}^{G} $$ is the point on the normal vector, and $$ p $$ is the plane or edge point. Additionally, $$ h_{j}(x_{k},_{}^{L_{j}}n_{f_{j}}) $$ is the residual and can be described as

$$h_{j}(x_{k},_{}^{L_{j}}n_{f_{j}})=G_{i}[_{}^{W}T_{I_{k} }^{k}*_{}^{I_{k}}T_{I_{j}}*_{}^{I_{j}}T_{B_{j}}*_{}^{B_{j}}T_{S_{j}}*_{}^{S_{j}}T_{L_{j}}(_{}^{L_{j}}p_{f_{j}}^{}-_{}^{L_{j}}n_{f_{j}}^{})-_{}^{G}q_{j}^{}]$$

where $$ _{}^{L_{j}}n_{f_{j}}^{} $$ is noise.
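As a reduced illustration of how the residual and the Kalman gain formula above drive the state correction, here is a single (non-iterated) update on a toy two-dimensional state; the paper iterates this step until the correction is small:

```python
import numpy as np

def ekf_update(x, P, z, H, R_cov):
    """One Kalman update with the gain formula above (toy linear case)."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_cov)  # K = PH^T(HPH^T+R)^-1
    x_new = x + K @ (z - H @ x)                       # state correction
    P_new = (np.eye(len(x)) - K @ H) @ P              # (I - KH)P
    return x_new, P_new

x = np.zeros(2)
P = np.eye(2)
H = np.array([[1.0, 0.0]])           # observe only the first state
x, P = ekf_update(x, P, z=np.array([1.0]), H=H, R_cov=np.eye(1))
assert np.isclose(x[0], 0.5)         # pulled halfway toward the measurement
assert P[0, 0] < 1.0                 # uncertainty of observed state shrinks
```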

At this point, the updated state $$ \bar{x}_{k} $$, the propagation state $$ \hat{x}_{k}^{k+1} $$, and the covariance matrix $$ \bar{P}_{k} $$ are updated by

$$ \bar{x}_{k}=\hat{x}_{k}^{k+1}=\hat{x}_{k}^{k}\oplus (-Kz_{k}^{k}-(I-KH)(J^{K})^{-1}(\hat{x}_{k}^{k}\ominus \hat{x}_{k}^{})) $$

$$ \bar{P}_{k}=(I-KH)P $$

where $$ J $$ is the derivative of $$ (\hat{x}_{k}^{k}\oplus \tilde{x}_{k}^{k})\ominus \hat{x}_{k}^{k} $$ with respect to $$ \tilde{x}_{k}^{k} $$. At the first iteration, $$ J=I $$ (identity matrix). When the state is updated, each feature point is projected from the lidar coordinate system $$ L $$ to the world coordinate system $$ W $$, and at the same time, these feature points are added to the previously generated map. The conversion formula can be written as

$$ _{}^{W}p_{f_{j}}=^{W}T_{I}*^{I}T_{B}*^{B}T_{S}*^{S}T_{L}*^{L_{k} }p_{f_{j}}$$

where $$ T $$ is the transformation matrix between coordinate systems (e.g., $$ ^{W}T_{I} $$ the transformation matrix between the IMU coordinate system and the world coordinate system).
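The chained transformation can be sketched with homogeneous matrices; the numeric transforms below are toy values, whereas the real ones come from the calibration and the filter state:

```python
import numpy as np

def hom(Rm, t):
    """Build a 4x4 homogeneous transform from rotation Rm and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = Rm, t
    return T

# chain L -> S -> B -> I -> W with identity rotations and toy offsets
T_WI = hom(np.eye(3), [1.0, 0.0, 0.0])
T_IB = hom(np.eye(3), [0.0, 0.1, 0.0])
T_BS = hom(np.eye(3), [0.0, 0.0, 0.05])
T_SL = hom(np.eye(3), [0.0, 0.0, 0.02])

p_L = np.array([1.0, 2.0, 3.0, 1.0])        # lidar point, homogeneous
p_W = T_WI @ T_IB @ T_BS @ T_SL @ p_L       # W_p = W_T_I ... S_T_L * L_p
assert np.allclose(p_W[:3], [2.0, 2.1, 3.07])
```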

(3) High frequency odometry based on IMU and filter

The odometry frequency after the error-state iterated Kalman filter optimization is far below what normal flight of the rotorcraft requires. The IMU has a high output frequency, so this paper first raises the overall output frequency of the odometry through IMU pre-integration, which can be calculated by

$$ \bigtriangleup R_{ij}=\prod\limits_{k=i}^{j-1}Exp((\tilde{w}_{k}-b_{k}^{g}-\eta _{k}^{gd})\bigtriangleup t) $$

$$ \bigtriangleup v_{ij}=\sum\limits_{k=i}^{j-1}\bigtriangleup R_{ik}(\tilde{a}_{k}-b_{k}^{a}-\eta _{k}^{ad})\bigtriangleup t $$

$$ \bigtriangleup p_{ij}=\sum\limits_{k=i}^{j-1}[\bigtriangleup v_{ik}\bigtriangleup t+\frac{1}{2}\bigtriangleup R_{ik}(\tilde{a}_{k}-b_{k}^{a}-\eta _{k}^{ad})\bigtriangleup t^{2}] $$

In this case, $$ R $$ represents the rotation matrix between moments $$ i $$ and $$ j $$, $$ v $$ represents velocity, $$ p $$ represents position, $$ a $$ is the IMU acceleration, $$ w $$ is the IMU angular velocity, $$ b^{a} $$ is the IMU accelerometer bias, $$ b^{g} $$ is the IMU gyroscope bias, $$ \eta^{ad} $$ is the accelerometer white noise, $$ \eta^{gd} $$ is the gyroscope white noise, and $$ \bigtriangleup t $$ is the sampling time. The data obtained after boosting the odometry frequency often fluctuate, so a first-order low-pass filter is used to smooth the odometry data between the formal output odometers, calculated by

$$ \bar{x}_{i}=\lambda x_{i}+(1-\lambda)\bar{x}_{i-1} $$

where $$ \lambda $$ is the filter factor.
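The pre-integration and smoothing steps can be sketched together; noise terms are dropped, and the sample rate and filter factor are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def preintegrate(gyro, accel, dt, bg=0.0, ba=0.0):
    """Accumulate delta-R, delta-v, delta-p between two odometry outputs
    from raw IMU samples (noise terms dropped in this sketch)."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * dR @ (a - ba) * dt**2
        dv += dR @ (a - ba) * dt
        dR = dR @ R.from_rotvec((w - bg) * dt).as_matrix()
    return dR, dv, dp

def low_pass(samples, lam=0.62):
    """First-order low-pass filter smoothing the boosted odometry."""
    out, prev = [], samples[0]
    for x in samples:
        prev = lam * x + (1 - lam) * prev
        out.append(prev)
    return out

# constant 1 m/s^2 forward acceleration, no rotation, 1 s at 100 Hz
dR, dv, dp = preintegrate(np.zeros((100, 3)),
                          np.tile([1.0, 0.0, 0.0], (100, 1)), dt=0.01)
assert np.allclose(dv, [1.0, 0.0, 0.0]) and np.allclose(dR, np.eye(3))

# a step input is smoothed rather than jumping to the new level
sm = low_pass([0.0, 1.0, 1.0, 1.0])
assert sm[0] == 0.0 and 0.0 < sm[1] < 1.0 and sm[-1] < 1.0
```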


To verify the effectiveness of the rotorcraft UAV and the positioning algorithm designed in this paper, rotating external parameter calibration experiments, simulation tests, and indoor and outdoor positioning and map-building tests were conducted. The simulation experiments were conducted under Ubuntu 18.04 to test the map-building effect. Then, the actual tests were carried out indoors and outdoors using the UAV in Figure 1; the sensors and other equipment used are detailed in Table 1.

3.1. External parameter calibration experiment

As mentioned before, the sensing platform consists of a rotating motor, a 3D lidar, and an IMU, and external parameters necessarily link them. As can be seen from the workflow diagram of the algorithm, the point cloud acquired by the rotating 3D lidar is first converted to a certain moment in time by removing distortions, and then the position estimation is carried out using the external parameters between the IMU and the lidar; thus, there are two sets of external parameters in the whole process. One group involves the relationship between the IMU and the 3D lidar, which is estimated in real time in the odometry method; the other pertains to the external parameters between the rotating motor and the 3D lidar, which must be calibrated before running the SLAM algorithm. In this paper, the sensing platform is rotated one full revolution indoors at a constant speed, and the algorithm then estimates the external parameters; the final calibration result is 0.2 for the y-axis and 0.3 for the z-axis. The effect before and after calibration of the external parameters is shown in Figure 5.


Figure 5. The effect of building a map before and after calibration of external parameters.

3.2. Simulation test

To verify the robustness and flight safety of the proposed algorithm, the proposed system and a UAV system without a rotating lidar were first built under Ubuntu 18.04 using the Gazebo simulation platform, as in Figure 6A and B, respectively. Then, simulation map-building experiments were carried out in this scenario, as in Figure 7A and B, respectively. It can be seen from these figures that the FOV of the rotating 3D lidar is improved in the vertical direction compared with the non-rotating 3D lidar, allowing a complete map of the surrounding environment to be built; the lidar device proposed in this paper has a theoretical FOV of 360 degrees in both the vertical and horizontal directions, although there may be partial occlusion due to the installation. Because the FAST-LIO algorithm lacks the external parameters between the lidar and the rotating motor, rotating the lidar renders FAST-LIO invalid. Therefore, the lidar was fixed at a certain angle, independent experiments were conducted in the same scene, and the positioning accuracy was then compared with that of the proposed algorithm.


Figure 6. Simulation environment.


Figure 7. Comparison of simulation mapping effect.

3.3. Indoor positioning and map building test

Because the PX4 flight controller requires a positional input frequency of not less than 30 Hz, to ensure a smooth UAV flight experiment, the odometry output frequency in this paper is 50 Hz. Meanwhile, to get smoother odometry data, low-pass filtering is also performed on the positional results of the SLAM algorithm after pre-integration, with a filtering coefficient of 0.62.

3.3.1. Indoor positioning data validation

Rectangular motion was performed under the indoor motion capture system, and the x-axis, y-axis, and z-axis displacement outputs of the rotating 3D laser SLAM odometry, fixed-angle FAST-LIO, and rotating FAST-LIO (Ours, F-FAST-LIO, and R-FAST-LIO) were compared with the positioning results (Nokov) of the motion capture system, as in Figure 8A and B. From the experimental data, the proposed method has a maximum positioning error of 0.08 m, an average error of 0.04 m, and a minimum of 0.005 m. Fixed-angle FAST-LIO has a maximum positioning error of 0.147 m, an average error of 0.062 m, and a minimum error of 0.005 m. The absolute trajectory error (ATE) and root mean square error (RMSE) are also compared; the results are detailed in Table 2.


Figure 8. Comparison of ground-truth location data.

Table 2. Algorithm positioning error comparison results

Scenes | Algorithm | ATE [m] | RMSE [m]
Indoor | R-FAST-LIO | Failed | Failed
Indoor | F-FAST-LIO | 0.062 | 0.070
Indoor | Ours | 0.034 | 0.028
Outdoor | R-FAST-LIO | Failed | Failed
Outdoor | F-FAST-LIO | 1.25 | 1.302
Outdoor | Ours | 0.599 | 0.654

3.3.2. Indoor positioning and mapping experiment

To further verify the effect of the rotating 3D lidar SLAM algorithm in indoor localization and mapping, the localization and mapping experiments were conducted in the underground parking lot in Figure 9A and compared with the FAST-LIO localization and mapping effect in the same scene with a linear motion. The experimental localization data results are in Figure 9B; the mapping effects are in Figure 10A and B below, from which it can be seen that the proposed algorithm can build a complete environmental map to provide more environmental information for the UAV compared with the unrotated laser mapping.


Figure 9. Indoor environment and comparison of indoor positioning data.


Figure 10. Comparison of indoor mapping effect.

3.4. Outdoor positioning and mapping experiment

To verify the environmental perception capability of this system in complex outdoor scenes, the UAV platform was used to conduct localization and mapping experiments outdoors. The global positioning system (GPS) data of PX4 were used as the reference and compared with the positioning data of the proposed algorithm and of the FAST-LIO algorithm, as in Figure 11A and B, respectively. As can be seen from the chart, the GPS data are not stable in outdoor scenes, which has some influence on the error evaluation. The average absolute trajectory error of the proposed algorithm is 0.599 m, and the absolute trajectory error of the FAST-LIO algorithm is 0.654 m; the outdoor scene is shown in Figure 12A and B correspondingly.

Figure 11. Comparison of GPS location data.

Figure 12. Comparison of mapping effect in outdoor.


The algorithm proposed in this article, built on FAST-LIO, effectively provides reliable pose information for rotor drones in indoor and outdoor scenarios; as shown in Figures 8A and 9B, the experimental results are within an acceptable range of both the ground truth and the comparison algorithm. From Figures 7A and 8B, it can be concluded that, compared to traditional positioning and mapping platforms, our platform has a larger FOV and obtains more environmental information. This also broadens the application scenarios of rotor drones: for example, when exploring unknown environments, they can map more of the surroundings in a short time and have more options for path planning.


To address the observation blindness of rotorcraft in complex environments, which affects both self-position estimation and environment map building, a rotorcraft UAV system based on rotating 3D lidar is first introduced, and a tightly coupled rotating-laser SLAM algorithm is then proposed. The algorithm uses FAST-LIO as the main framework, incorporates the lidar rotation angle information, performs motion compensation twice, and updates the state with an error-state iterated Kalman filter to obtain the odometry result. It also combines IMU pre-integration with a first-order low-pass filter to raise the output frequency of the odometry so that it satisfies the conditions for autonomous rotorcraft UAV flight. The absolute trajectory error is 0.034 m indoors and 0.599 m outdoors; these results show that the system obtains reliable positioning data and high-quality maps in both indoor and outdoor complex scenes and has more robust environment-sensing ability than traditional UAVs, meeting the requirements of various application scenarios. However, the algorithm is prone to degradation in scenes with a single geometric constraint, such as corridors and tunnels, so further research will focus on improving its robustness in such feature-poor scenes; we also plan to apply the system to autonomous drone exploration of underground mines.
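The high-rate odometry smoothing described above, where IMU-propagated poses are passed through a first-order low-pass filter, can be sketched as follows. The recurrence and the coefficient value are a generic first-order filter under assumed tuning, not the authors' exact implementation:

```python
class LowPassOdometry:
    """Smooth high-rate IMU-propagated positions with a first-order low-pass filter.

    y[k] = alpha * x[k] + (1 - alpha) * y[k-1], with alpha in (0, 1];
    smaller alpha smooths more at the cost of added lag.
    """

    def __init__(self, alpha=0.2):  # alpha is a hypothetical tuning value
        self.alpha = alpha
        self.state = None

    def update(self, position):
        if self.state is None:
            self.state = list(position)  # seed the filter with the first sample
        else:
            self.state = [self.alpha * x + (1.0 - self.alpha) * y
                          for x, y in zip(position, self.state)]
        return tuple(self.state)
```

Run at the IMU rate between lidar updates, this yields the smoother high-frequency odometry stream that the flight controller consumes.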


Authors' contributions

Made substantial contributions to the conception and design of the study and performed data analysis and interpretation: Fu M, Zhang H

Provided guidance for the planning and execution of the study and the evolution of the overarching research aims, critical review, and material support; reviewed and revised the original draft: Wang S, Shui Y

Availability of data and materials

Not applicable.

Financial support and sponsorship

This research was funded by the National Natural Science Foundation of China (No.12205245) and the Natural Science Foundation of Sichuan Province (No.2023NSFSC1437).

Conflicts of interest

All authors declared that there are no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.


© The Author(s) 2023.


1. Li Z, Chen Y, Lu H, Wu H, Cheng L. UAV autonomous landing technology based on apriltags vision positioning algorithm. In: 2019 Chinese Control Conference (CCC); 2019 Jul 27-30; Guangzhou, China. IEEE; 2019. p. 8148-53.

2. Takaya K, Ohta H, Kroumov V, Shibayama K, Nakamura M. Development of UAV system for autonomous power line inspection. In: 2019 23rd International Conference on System Theory, Control and Computing (ICSTCC); 2019 Oct 09-11; Sinaia, Romania. IEEE; 2019. p. 762-7.

3. Siean AI, Vatavu RD, Vanderdonckt J. Taking that perfect aerial photo: a synopsis of interactions for drone-based aerial photography and video. In: Proceedings of the 2021 ACM International Conference on Interactive Media Experiences. 2021. p. 275-9.

4. Reinecke M, Prinsloo T. The influence of drone monitoring on crop health and harvest size. In: 2017 1st International Conference on Next Generation Computing Applications (NextComp); 2017 Jul 19-21; Mauritius. IEEE; 2017. p. 5-10.

5. Gargoum S, El-Basyouny K. Automated extraction of road features using LiDAR data: a review of LiDAR applications in transportation. In: 2017 4th International Conference on Transportation Information and Safety (ICTIS); 2017 Aug 08-10; Banff, Canada. IEEE; 2017. p. 563-74.

6. Zhao S, Fang Z, Li H, Scherer S. A robust laser-inertial odometry and mapping method for large-scale highway environments. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2019 Nov 03-08; Macau, China. IEEE; 2020. p. 1285-92.

7. Dissanayake MWMG, Newman P, Clark S, Durrant-Whyte HF, Csorba M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans Robot Autom 2001;17:229-41.

8. Yuan X, Liu H, Qi R. Research on key technologies of autonomous driving platform. J Phys Conf Ser 2021;1754:012127.

9. Wang H, Wang C, Xie L. Lightweight 3-D localization and mapping for solid-state LiDAR. IEEE Robot Autom Lett 2021;6:1801-7.

10. Zhang J, Singh S. LOAM: lidar odometry and mapping in real-time. In: Proceedings of Robotics: Science and Systems; California, USA. 2014. p. 1-9. Available from: [Last accessed on 21 Nov 2023].

11. Shan T, Englot B. LeGO-LOAM: lightweight and ground-optimized lidar odometry and mapping on variable terrain. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2018 Oct 01-05; Madrid, Spain. IEEE; 2018. p. 4758-65.

12. Xu X, Zhang L, Yang J, et al. A review of multi-sensor fusion SLAM systems based on 3D LIDAR. Remote Sens 2022;14:2835.

13. Ye H, Chen Y, Liu M. Tightly coupled 3D lidar inertial odometry and mapping. In: 2019 International Conference on Robotics and Automation (ICRA), 2019 May 20-24; Montreal, Canada. IEEE; 2019. p. 3144-50.

14. Shan T, Englot B, Meyers D, Wang W, Ratti C, Rus D. LIO-SAM: tightly-coupled lidar inertial odometry via smoothing and mapping. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2020 Oct 24-2021 Jan 24; Las Vegas, USA. IEEE; 2020. p. 5135-42.

15. Xu W, Zhang F. FAST-LIO: a fast, robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman Filter. IEEE Robot Autom Lett 2021;6:3317-24.

16. Wang S, Zhang H, Wang G. OMC-SLIO: online multiple calibrations spinning LiDAR inertial odometry. Sensors 2023;23:248.

17. Harchowdhury A, Kleeman L, Vachhani L. Coordinated nodding of a two-dimensional lidar for dense three-dimensional range measurements. IEEE Robot Autom Lett 2018;3:4108-15.

18. Droeschel D, Holz D, Behnke S. Omnidirectional perception for lightweight MAVs using a continuously rotating 3D laser scanner [Omnidirektionale Wahrnehmung für leichte MAVs mittels eines kontinuierlich rotierenden 3D-Laserscanners]. Photogramm Fernerkun 2014:451-64.

19. Bosse M, Zlot R. Continuous 3D scan-matching with a spinning 2D laser. In: 2009 IEEE International Conference on Robotics and Automation; 2009 May 12-17; Kobe, Japan. IEEE; 2009. p. 4312-9.

20. Wang S, Zhang H, Wang G, Liu R, Huo J, Chen B. FGRSC: improved calibration for spinning LiDAR in unprepared scenes. IEEE Sens J 2022;22:14250-62.

21. He D, Xu W, Zhang F. Kalman filters on differentiable manifolds. arXiv 2021; In press.

Cite This Article

OAE Style

Fu M, Zhang H, Wang S, Shui Y. Rotating 3D laser mapping system for multi-rotor drones. Intell Robot 2023;3(4):632-46.

