In this section, a LiDAR and IMU integrated navigation system is proposed. This UAV navigation system includes an existing 2D laser scan matching method for planar localization of the UAV and altitude estimation using a short-range LiDAR sensor. Modifications made to the scan matching process to overcome altitude angle variation during the flight of a UAV are also described in detail.
3.1. Point-to-Point Scan Matching
Planar localization of the UAV is achieved using measurements from the horizontally scanning primary LiDAR sensor mounted on the on-board computer. Scan measurements from the LiDAR scanner can be understood as a momentary snapshot of the UAV's surrounding environment. The point-to-point matching-based Polar Scan Matching (PSM) method [19] is used to determine the relative translation between consecutive LiDAR scans during the flight of the UAV. Each scan covers 270° of the surrounding environment with an angular resolution of 0.25°. The range at each measured angle indicates the distance between the UAV and the nearest obstacle in the corresponding direction. This approach is well suited to portable embedded devices because of its low computational requirements and high accuracy.
As mentioned before, we follow the point-to-point matching of the PSM approach proposed by Diosi and Kleeman [19] in order to determine the relative translation. In this matching method, the current scan is aligned with respect to the reference scan so that the sum of the squared range residuals is minimized.
The current scan is described as $S_c = (x_c, y_c, \theta_c, \{r_{c,i}, \phi_{c,i}\}_{i=1}^{n})$, where $(x_c, y_c, \theta_c)$ describes position and orientation, and $r_{c,i}$ describes the $n$ range measurements at bearings $\phi_{c,i}$, expressed in the current scan coordinate system. The reference scan is described analogously as $S_r = (x_r, y_r, \theta_r, \{r_{r,i}, \phi_{r,i}\}_{i=1}^{n})$.
The scan matching process works as follows: scan pre-processing is followed by scan projection and translation estimation, which are alternated, and finally by orientation estimation.
Prior to matching, spurious measurements and objects that are likely to move are removed from the scan data using a median filter and a scan segmentation filter in order to increase the accuracy and robustness of scan matching. Next, the current scan, acquired at position S, is projected with respect to the reference scan, acquired at position R, so that it reflects how the current scan would look if it were performed from the reference position R. The range bearings of the points from position R are calculated using Equations (1) and (2):

$r'_{c,i} = \sqrt{\left(r_{c,i}\cos(\theta_c + \phi_{c,i}) + x_c - x_r\right)^2 + \left(r_{c,i}\sin(\theta_c + \phi_{c,i}) + y_c - y_r\right)^2}$  (1)

$\phi'_{c,i} = \operatorname{atan2}\left(r_{c,i}\sin(\theta_c + \phi_{c,i}) + y_c - y_r,\ r_{c,i}\cos(\theta_c + \phi_{c,i}) + x_c - x_r\right) - \theta_r$  (2)

Here, atan2 is the four-quadrant version of arctan, and $(r'_{c,i}, \phi'_{c,i})$ represents the projected current scan readings. Next, the aim is to estimate what the laser scanner would measure from reference position R. The range value re-sampled at each sample bearing using linear interpolation is denoted $r''_{c,i}$. Since the association rule is to match the bearings of the points, the ranges $r''_{c,i}$ at the reference scan bearings $\phi_{r,i}$ are calculated using interpolation.
Figure 2 shows the projection of consecutive scan points based on the rule mentioned above.
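The projection and interpolation steps described above can be sketched in Python. This is a minimal illustration of Equations (1) and (2) using NumPy; the function and variable names are our own and not taken from the PSM implementation:

```python
import numpy as np

def project_scan(r_c, phi_c, pose_c, pose_r):
    """Project current-scan ranges/bearings into the reference scan frame.

    r_c, phi_c : arrays of current-scan ranges and bearings
    pose_c, pose_r : (x, y, theta) poses of the current and reference scans
    Returns projected ranges and bearings per Equations (1) and (2).
    """
    xc, yc, thc = pose_c
    xr, yr, thr = pose_r
    # Coordinates of each scan point relative to the reference position
    px = r_c * np.cos(thc + phi_c) + xc - xr
    py = r_c * np.sin(thc + phi_c) + yc - yr
    r_p = np.hypot(px, py)            # Eq. (1): projected range
    phi_p = np.arctan2(py, px) - thr  # Eq. (2): atan2 is the 4-quadrant arctan
    return r_p, phi_p

def resample_at_bearings(r_p, phi_p, phi_ref):
    """Linearly interpolate projected ranges at the reference scan bearings."""
    order = np.argsort(phi_p)  # np.interp requires increasing sample points
    return np.interp(phi_ref, phi_p[order], r_p[order])
```

When the two poses coincide, the projection leaves the scan unchanged, which provides a quick sanity check of the implementation.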
3.1.1. Orientation Estimation
Orientation of the current scan is obtained from a wireless IMU integrated with the LiDAR, as shown in Figure 3. The custom-made wireless IMU includes a tri-axis gyroscope, a tri-axis accelerometer, and a tri-axis magnetometer. All of these sensors are connected to a microcontroller that collects and processes the data. A complementary filter sensor fusion algorithm [26] with drift compensation is implemented to obtain precise and accurate 3D attitude data [27]. The quaternion complementary filter takes the calibrated sensor data as input and provides a normalized quaternion output.
The normalized quaternion output Q from the IMU is a vector with four elements [q1 q2 q3 q4], from which Euler angles are derived in the ZYX sequence using Equations (7)–(9), and the current scan measurement is obtained from the LiDAR.
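For reference, a common scalar-first form of the quaternion-to-ZYX-Euler conversion can be sketched as follows; the element ordering (q1 as the scalar part) is our assumption and should be checked against Equations (7)–(9):

```python
import math

def quat_to_euler_zyx(q1, q2, q3, q4):
    """Convert a unit quaternion (assumed scalar-first: q1 + q2*i + q3*j + q4*k)
    to ZYX Euler angles (yaw, pitch, roll) in radians."""
    yaw = math.atan2(2.0 * (q1 * q4 + q2 * q3),
                     1.0 - 2.0 * (q3 * q3 + q4 * q4))
    # Clamp the asin argument to guard against numerical drift outside [-1, 1]
    s = max(-1.0, min(1.0, 2.0 * (q1 * q3 - q4 * q2)))
    pitch = math.asin(s)
    roll = math.atan2(2.0 * (q1 * q2 + q3 * q4),
                      1.0 - 2.0 * (q2 * q2 + q3 * q3))
    return yaw, pitch, roll
```

The identity quaternion [1 0 0 0] maps to zero yaw, pitch, and roll, and a rotation of 90° about the Z-axis yields a yaw of π/2.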
Let t and t + 1 be the two successive time instants at which two LiDAR scans are acquired, and let $P_t$ and $P_{t+1}$ be the corresponding point clouds generated from the scan measurements with integrated IMU measurements, as described in Equation (3):
where $(x_r, y_r, \theta_r)$ and $(x_c, y_c, \theta_c)$ represent the pose of the scanner at the reference and current locations, respectively, and $r_i$ is the range measurement obtained. Using the range measurements $r_i$, the coordinate points are estimated using Equation (4) and stored in the point clouds $P_t$ and $P_{t+1}$ at the t and t + 1 time stamps, respectively:
The pose of the LiDAR at the reference and current scanner locations can then be represented in the matrix form of Equation (5):
where the Euler angles at times t and t + 1 are computed from the quaternions $Q_t$ and $Q_{t+1}$, respectively, using Equation (9), and $T_t$ and $T_{t+1}$ are the homogeneous transformation matrices. Finally, the transformed point cloud scans are obtained by multiplying the transformation and point cloud matrices, as shown in Equation (6):
where $T_t$ and $T_{t+1}$ are the transformation matrices, and $P_t$ and $P_{t+1}$ are the point clouds obtained at the t and t + 1 time stamps, respectively.
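The construction of the homogeneous transformation and its application to a point cloud (Equations (5) and (6)) can be illustrated with a short sketch; the ZYX rotation order and all names here are assumptions on our part, not the paper's exact formulation:

```python
import numpy as np

def homogeneous_transform(x, y, z, roll, pitch, yaw):
    """Build the 4x4 homogeneous transform from a pose, per Eq. (5),
    assuming a ZYX rotation: Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # combined rotation
    T[:3, 3] = [x, y, z]       # translation
    return T

def transform_cloud(T, points):
    """Apply T to an (n, 3) point cloud via homogeneous coordinates (Eq. (6))."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homo.T).T[:, :3]
```

An identity pose leaves the cloud unchanged; a pure translation shifts every point by the same offset.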
Now, after projecting the current scan line with respect to the reference scan, the angular difference between the reference and current scans can be determined as:
Figure 4 shows two consecutive scans projected according to the orientation of the scanner obtained from the integrated IMU. In the figure, the reference scan and current scan points are represented by blue and red circles, respectively. The right side of the image presents the scanner's Field of View (FOV) with respect to its orientation.
3.1.2. Ideal Scanner Plane
The primary LiDAR mounted on the on-board computer on a UAV detects obstacle information in the 2D scanning plane. In the case of ground robots, which always move in a plane, the altitude angle is maintained at an almost constant level. However, it is very difficult to control a UAV at a constant altitude angle during flight in the indoor environment. Therefore, LiDAR measurement information from the UAV platform cannot accurately describe the obstacle distribution in the horizontal plane.
This leads to an enormous amount of error in the scan matching process during relative position estimation. Therefore, it is necessary to reduce the influence of altitude angle variation during LiDAR measurement.
As shown in Figure 5, the actual measurement plane of the LiDAR is defined in the level scanner coordinate frame, while a second set of coordinates defines the measurement plane after variation in the altitude angle of the UAV.
The angular variation occurring during the flight of the UAV is captured from the roll and pitch angles of the 6DOF IMU integrated with the primary LiDAR. The quaternion output from the IMU sensor is converted to Euler angles using Equations (8)–(10), which follow the ZYX sequence.
Considering a unit quaternion Q = [q1 q2 q3 q4] with norm ‖Q‖ = 1, the Euler angles are obtained as follows, where φ (roll) represents the rotation around the X-axis, θ (pitch) represents the rotation around the Y-axis, and ψ (yaw) represents the rotation around the Z-axis.
Each scan measurement from the LiDAR is associated with the φ (roll) and ψ (yaw) angles acquired using the integrated IMU, as shown in Figure 6. Before performing scan matching between two consecutive scan lines, the vertical angular difference between the scan lines is computed. If the difference is within a pre-defined threshold, the scan lines are considered for the matching process; otherwise, they are rejected. The selection of valid plane measurements for the scan matching process is formulated as follows. Given two 2D laser scans acquired at the relative positions of time steps t and t + 1, the corresponding roll and yaw angles are compared:
In the case of an invalid plane measurement, the current scan is rejected, and the next scan that satisfies Equation (11) is considered for further matching. The roll and yaw thresholds in the above equation are estimated from the average tilting angle observed over multiple flights of the UAV in different indoor environments, such as a long corridor and a closed room. During the test flights, the angular differences between consecutive time steps ranged from 1.14° to 4.23° in roll and from 1.28° to 3.64° in yaw. Based on these observations, we use the average angle as the threshold to identify a valid plane for scan matching.
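The valid-plane selection rule can be sketched as a simple threshold check. The threshold values below are illustrative mid-points of the observed ranges, not the paper's exact constants:

```python
def is_valid_plane(roll_t, yaw_t, roll_t1, yaw_t1,
                   roll_thresh_deg=2.7, yaw_thresh_deg=2.5):
    """Accept a consecutive scan pair for matching only if the inter-scan
    roll and yaw differences stay below the thresholds (cf. Eq. (11)).

    Default thresholds are illustrative averages of the reported
    1.14-4.23 deg (roll) and 1.28-3.64 deg (yaw) ranges, in degrees.
    """
    droll = abs(roll_t1 - roll_t)
    dyaw = abs(yaw_t1 - yaw_t)
    return droll <= roll_thresh_deg and dyaw <= yaw_thresh_deg
```

A scan pair with a 1° roll change passes, while a 5° change is rejected and the matcher waits for the next scan that satisfies the condition.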
3.3. Kalman Filtering
In order to integrate and filter the 2D position data from scan matching and the 1D altitude data from line extraction, we also implemented a linear Kalman filtering algorithm, similar to the work presented in [28]. The block diagram of the integration process is shown in Figure 7. The filter includes the three states shown in Equation (12): two states for the 2D position from the scan matching algorithm and one state for the altitude data from the line extraction algorithm.
where the measurement vector, or current state of the system, comprises:
$x_k$: X coordinate of the 2D position at the kth time interval from the scan matching algorithm.
$y_k$: Y coordinate of the 2D position at the kth time interval from the scan matching algorithm.
$z_k$: Z coordinate (altitude) at the kth time interval from the altitude estimation algorithm.
where A, B, and H are the state, input, and measurement matrices, respectively; x is the state of the system, u is a known input to the system, y is the measured output, w is the process noise, and z is the measurement noise. Vector x contains the present state of the system, which is estimated from the given measurements in vector y. Equations (13) and (14) define the linear model of the filter used [29].
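A minimal version of the linear Kalman filter described above might look as follows; the static-position motion model and the noise covariances are illustrative assumptions on our part, not the paper's tuned values:

```python
import numpy as np

class PositionKalmanFilter:
    """Minimal linear Kalman filter for the 3-element position state
    [x, y, z] (x, y from scan matching, z from altitude estimation).
    Q and R below are illustrative placeholder covariances."""

    def __init__(self, q=1e-3, r=1e-1):
        self.A = np.eye(3)      # state transition (static-position model)
        self.H = np.eye(3)      # measurement matrix (direct observation)
        self.Q = q * np.eye(3)  # process noise covariance
        self.R = r * np.eye(3)  # measurement noise covariance
        self.x = np.zeros(3)    # state estimate
        self.P = np.eye(3)      # estimate covariance

    def step(self, y):
        # Predict
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        # Update with measurement y = [x_k, y_k, z_k]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(y) - self.H @ self.x)
        self.P = (np.eye(3) - K @ self.H) @ self.P
        return self.x
```

Fed a noisy but stationary position, the estimate converges toward the true value as the gain settles, which is the smoothing effect visible in the filtered trajectory.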
Figure 8 shows the displacement of the 3D position data with respect to the planned and measured trajectories. Here, the measured trajectory refers to the output obtained from the scan matching algorithm and is presented in the image with red dots. The planned trajectory is predefined in the known indoor corridor, where the UAV is assigned a set of commands to follow the desired trajectory, and is presented with pink dots. Green dots in the image represent the Kalman-filtered output of the measured trajectory. Even though the accuracy of the measured trajectory is poor compared to the planned trajectory, use of the Kalman filter increases the accuracy of the trajectory by reducing the measurement noise.