
GNSS/IMU/LO integration with a new LO error model and lateral constraint for navigation in urban areas

Abstract

The quest for reliable vehicle navigation in urban environments has led to the integration of Light Detection and Ranging (LiDAR) Odometry (LO) with Global Navigation Satellite Systems (GNSS) and Inertial Measurement Units (IMU). However, the performance of the integrated system is limited by the lack of an accurate LO error model. In this paper, we propose a weighted GNSS/IMU/LO integration-based navigation system with a novel LO error model. The Squared Exponential Gaussian Process Regression (SE-GPR) based LO error model is developed by considering the vehicle velocity and the number of point cloud features. Based on error prediction for GNSS positioning and LO, a weighting strategy is designed for integration in an Extended Kalman Filter (EKF). Furthermore, error accumulation of the navigation state, especially in GNSS-challenging scenarios, is restrained by the LiDAR-Aided Lateral Constraint (LALC) and Non-Holonomic Constraint (NHC). An experiment was conducted in a deep urban area to test the proposed algorithm. The results show that the proposed algorithm delivers horizontal and three-dimensional (3D) positioning Root Mean Square Errors (RMSEs) of 3.669 m and 5.216 m, respectively. The corresponding accuracy improvements are 35.9% and 50.0% compared with the basic EKF based GNSS/IMU/LO integration.

Introduction

Accurate and reliable positioning information underpins Intelligent Transportation Systems (ITS) (Du et al., 2021; Feng & Law, 2002; Sun et al., 2020). As critical positioning sources, Global Navigation Satellite Systems (GNSS) are widely used with Inertial Measurement Units (IMU) in an integrated scheme to facilitate vehicle applications of ITS owing to their superior complementary features (El-Sheimy & Youssef, 2020; Kuutti et al., 2018). In urban areas, however, the signals of GNSS satellites are easily blocked by buildings or contaminated by reflected signals, causing gross errors or even error divergence of GNSS/IMU integrated navigation (Chen et al., 2023; Niu et al., 2023; Sun et al., 2021).

As a perception sensor, Light Detection and Ranging (LiDAR) transmits laser beams and receives the reflected ones to obtain the relative positions of reflection points around it (Chiang et al., 2017). Tens of thousands of reflection points, composing a frame of point cloud data, can be collected in one full rotation of the LiDAR scan (Chang et al., 2020). Besl and McKay (1992) proposed an Iterative Closest Point (ICP) based scan registration method which directly matches the closest points. This method is the basis for LiDAR Odometry (LO). Zhang and Singh (2014) developed a LiDAR Odometry and Mapping (LOAM) framework, which estimates relative transformation by extracting features from point clouds and matching them. Thereafter, several variants of LOAM were proposed (Guo et al., 2023; Wang et al., 2021). Among them, Lightweight and Ground-Optimized LOAM (LeGO-LOAM) exploits the existence of a ground plane in its point cloud segmentation and optimization steps to improve pose estimation for ground vehicles (Shan & Englot, 2018). Besides these point or feature based algorithms, the distribution-based Normal Distribution Transform (NDT) is also used for point cloud registration in LO (Chiang et al., 2023; Magnusson et al., 2009; Zhou et al., 2017). Nevertheless, since LO is a dead reckoning navigation method, positioning error accumulation cannot be avoided.

To deal with the inherent defects of single-sensor navigation, many GNSS/IMU/LO integrated navigation schemes have been proposed (Meng et al., 2017; Wang et al., 2022). Shan et al. (2020) proposed a factor graph-based LiDAR Inertial Odometry via Smoothing and Mapping (LIO-SAM). Measurements from additional sensors, such as GNSS receivers, can be easily incorporated into the framework as new factors. Experiments conducted in various environments demonstrated that LIO-SAM achieved positioning accuracy similar to or better than LOAM. Li et al. (2021a) developed a GNSS Precise Point Positioning (PPP) and LOAM loosely coupled navigation algorithm, namely PPP based LOAM (P3-LOAM). The LiDAR positioning covariance was derived using a Singular Value Decomposition (SVD) based Jacobian model for ICP based LO. A Factor Graph Optimization-based Real-Time Kinematic (RTK) GNSS/Inertial Navigation System (INS)/LiDAR tightly coupled integration scheme (FGO-GIL) was also designed for navigation in urban areas (Li et al., 2023). Besides, there are also many filter based GNSS/IMU/LO integration solutions. Chiang et al. (2020) proposed an Extended Kalman Filter (EKF) based GNSS/IMU/LO integration algorithm, in which LO-derived navigation information was used to validate the reliability of the GNSS based measurement update vector and to facilitate the appropriate implementation of vehicle motion constraints. The LO positioning covariance, constructed from the residuals of the Gauss–Newton model-based scan matching, was used in the integration. Li et al. (2021b) developed a tightly coupled PPP/IMU/LO integration algorithm with a sliding window strategy for point cloud data processing. However, scan matching between each frame in the sliding window and the newest frame incurs a heavy computational burden. Rose et al. (2014) fused LiDAR-based lateral distance measurements from the vehicle to the lane markings with GNSS and IMU information in the filter to improve the robustness of vehicle navigation in GNSS-difficult environments, but a known waypoint-based map is needed.

In summary, current GNSS/IMU/LO integration solutions focus on sensor data processing or filter/graph architecture. However, robust LO error modeling, which is critical for assigning an appropriate weight to the LiDAR information, is still lacking. Although LO error covariance matrices have been established, they are either coarse or computationally expensive (Chang et al., 2020; Chiang et al., 2020; Ju et al., 2019; Li et al., 2021a). An LO drift error model has been proposed that estimates model parameters online from historical GNSS/LiDAR/map fusion-based positioning solutions, but it is suitable only for short-term use in relatively stable environments (Liu et al., 2023). To deal with this issue, this paper proposes a Squared Exponential Gaussian Process Regression (SE-GPR) based LO error model for the weighted integration of GNSS/IMU/LO. A LiDAR based lateral constraint on vehicle motion, which does not require a prior map, is established. To further improve the positioning performance, especially in the vertical direction, the Non-Holonomic Constraint (NHC), a widely used and effective motion constraint in vehicle navigation applications (Niu et al., 2007; Xiao et al., 2021), is also applied. The main contributions of this paper are summarized as follows:

  1. An SE-GPR based LO error model is established by considering the vehicle's velocity and the number of feature points. It is trained with historical datasets and used to predict the real-time LO error.

  2. A weighting strategy for LO and GNSS positioning is proposed based on the GNSS protection level and the predicted LO error. The weighted positioning solution is then used to update the navigation state derived by IMU mechanization.

  3. The LiDAR-Aided Lateral Constraint (LALC) is constructed by extracting the lateral distances between the vehicle and the curbs from point clouds, and is used together with NHC to control error accumulation in the final navigation state.

The remainder of this paper is structured as follows: Section "Proposed algorithm design" describes the algorithm framework, defines the involved coordinate frames, and presents the proposed algorithm in detail. Section "Results and discussion" discusses and analyzes the field test results. Finally, the conclusion is given.

Proposed algorithm design

The framework of the proposed GNSS/IMU/LO integration algorithm is shown in Fig. 1. Firstly, GNSS observations are collected by a GNSS receiver. Then, the GNSS pseudorange-based Single Point Positioning (SPP) solution and the corresponding protection level, a statistical error bound of the GNSS positioning, are calculated with the Least Squares Residuals (LSR) method of Receiver Autonomous Integrity Monitoring (RAIM) (Brown & Chin, 1998). The point cloud data acquired with a three-dimensional (3D) LiDAR are processed with the LO method of LeGO-LOAM, which is specially designed for ground vehicle applications. The SE-GPR based LO error regression model, established by considering the vehicle's velocity and the number of features in the point clouds, is used to estimate the real-time LO error. Then, the weighted fusion of GNSS/LO is achieved by assigning weights according to the GNSS protection level and the LO error. The fused result is used to update the navigation state derived by IMU mechanization with an EKF. To further restrain the system error, the motion constraint in the lateral direction of the road segment, the LALC, is executed along with NHC, which assumes the velocity components orthogonal to the forward direction of the vehicle body frame are zero apart from noise (Dissanayake et al., 2001). Finally, the vehicle navigation state, including the position, velocity, and attitude, is generated as the output. When the predicted horizontal positioning error of LO exceeds the empirical threshold of 10 m, the previous LO state estimate is replaced by the current output navigation state, which is more accurate at that moment, with the lever arm between the IMU and the LiDAR corrected.

Fig. 1

Framework of the proposed GNSS/IMU/LO integrated navigation algorithm. In this framework, the GNSS/LO weighted position based on the error modeling is loosely integrated with the IMU in the EKF. The motion constraints, including the LALC and NHC, are implemented to further suppress positioning error accumulation

Coordinate frame definition

Since the spatial reference systems of GNSS, IMU, and LO are different, the following coordinate frames are defined. The navigation frame (n-frame), inertial frame (i-frame), and body frame (b-frame) are the common coordinate frames used in IMU mechanization. GNSS positioning is implemented in the earth-centered, earth-fixed coordinate frame (e-frame). The point cloud data are collected in the LiDAR frame (l-frame), whose origin is the optical center of the LiDAR sensor and whose three axes point towards the right, front, and up directions, respectively. Additionally, the right-handed road segment frame (r-frame) is defined: its origin is the optical center of the LiDAR sensor, and its three axes point along the lateral, longitudinal, and up directions of the road surface, respectively.

SE-GPR based LO error model

Gaussian Process Regression (GPR) is a popular, interpretable Bayesian model that demonstrates high predictive accuracy across various scenarios (Liu et al., 2020; Schulz et al., 2018). In this section, the proposed SE-GPR based LO error model is designed. Specifically, the LO error model predicts the error of the LO-based vehicle displacement between two adjacent point cloud frames, so that it is not affected by the previous LO state estimate. Originally, the LO-based displacement of the vehicle is calculated in the l-frame; with the estimated attitude, it can be transformed into the n-frame. In theory, the vehicle velocity determines the relative spatial position of two adjacent point cloud frames, and the number of features in a frame reflects whether the structural information of the environment is abundant enough for scan matching. Therefore, the vehicle velocity and the number of features, both of which are related to the accuracy of scan matching, are selected as the input vector:

$${\varvec{x}} = \left( {\begin{array}{*{20}c} {v_{N} } & {v_{E} } & {v_{D} } & {F_{{\text{E}}} } & {F_{{\text{P}}} } \\ \end{array} } \right)^{{\text{T}}}$$
(1)

where \(v_{N}\), \(v_{E}\) and \(v_{D}\) are the north (N), east (E), and down (D) velocity components of the vehicle, respectively; \(F_{{\text{E}}}\) and \(F_{{\text{P}}}\) are the numbers of edge features and planar features, respectively.

The output variable \(y\) is the LO-based displacement error in one direction among north, east, and down. Labelling the error with the reference trajectory, the training dataset \({\varvec{D}}\) can be obtained:

$${\varvec{D}} = \left\{ {({\varvec{x}}_{i} ,y_{i} )|i = 1,2, \ldots ,n} \right\}$$
(2)

where the subscript i indicates the i-th point cloud frame; n is the total number of samples.

It is supposed that the output variable \(y\) is the function of \({\varvec{x}}\):

$$y = f({\varvec{x}}) + \varepsilon$$
(3)

where \(\varepsilon\) is the noise and \(\varepsilon \sim N(0,\sigma_{0}^{2} )\).

In SE-GPR theory (Schulz et al., 2018), it is assumed that there is a prior probability distribution for \({\varvec{y}}\) (\({\varvec{y}} = \left( {\begin{array}{*{20}c} {y_{1} } & {y_{2} } & \ldots & {y_{n} } \\ \end{array} } \right)^{{\text{T}}}\)):

$${\varvec{y}}\sim N\left( {{\mathbf{0}}_{n \times 1} ,T\left( {{\varvec{X}},{\varvec{X}}} \right) + \sigma_{0}^{2} {\varvec{I}}_{n \times n} } \right)$$
(4)

where \({\varvec{X}} = \left( {\begin{array}{*{20}c} {{\varvec{x}}_{1} } & {{\varvec{x}}_{2} } & \ldots & {{\varvec{x}}_{n} } \\ \end{array} } \right)^{{\text{T}}}\); \({\mathbf{0}}_{n \times 1}\) and \({\varvec{I}}_{n \times n}\) are a zero vector of dimension \(n \times 1\) and identity matrix of dimension \(n \times n\), respectively; \(T\left( {{\varvec{X}},{\varvec{X}}} \right)\) is the positive definite covariance matrix of \({\varvec{X}}\) and

$$T\left( {{\varvec{X}},{\varvec{X}}} \right) = \left( {\begin{array}{*{20}c} {t\left( {{\varvec{x}}_{1} ,{\varvec{x}}_{1} } \right)} & {t\left( {{\varvec{x}}_{1} ,{\varvec{x}}_{2} } \right)} & \cdots & {t\left( {{\varvec{x}}_{1} ,{\varvec{x}}_{n} } \right)} \\ \vdots & \vdots & \ddots & \vdots \\ {t\left( {{\varvec{x}}_{n} ,{\varvec{x}}_{1} } \right)} & {t\left( {{\varvec{x}}_{n} ,{\varvec{x}}_{2} } \right)} & \cdots & {t\left( {{\varvec{x}}_{n} ,{\varvec{x}}_{n} } \right)} \\ \end{array} } \right)$$
(5)

where \(t\left( {{\varvec{x}},{\varvec{x}}^{\prime}} \right)\) is the radial basis function kernel, which is defined as:

$$t\left( {{\varvec{x}},{\varvec{x}}^{\prime}} \right) = \sigma_{f}^{2} \exp \left( { - \frac{{\left\| {{\varvec{x}} - {\varvec{x}}^{\prime}} \right\|^{2} }}{{2\lambda^{2} }}} \right)$$
(6)

where \(\sigma_{f}^{2}\) and \(\lambda\) are the signal variance and length scale, respectively. Based on the training dataset, the hyperparameters \(\lambda\), \(\sigma_{f}^{2}\), and \(\sigma_{0}^{2}\) are estimated with the maximum likelihood method.

Let \(f_{*}\) be the function value corresponding to the input vector \({\varvec{x}}_{*}\). The joint prior distribution of \({\varvec{y}}\) and \(f_{*}\) is:

$$\left( {\begin{array}{*{20}c} {\varvec{y}} \\ {f_{*} } \\ \end{array} } \right)\sim N\left( {0,\left( {\begin{array}{*{20}c} {T\left( {{\varvec{X}},{\varvec{X}}} \right) + \sigma_{0}^{2} {\varvec{I}}_{n \times n} } & {T\left( {{\varvec{X}},{\varvec{x}}_{*} } \right)} \\ {T\left( {{\varvec{x}}_{*} ,{\varvec{X}}} \right)} & {t\left( {{\varvec{x}}_{*} ,{\varvec{x}}_{*} } \right)} \\ \end{array} } \right)} \right)$$
(7)

Then the mean and variance of \(f_{*}\) are obtained as:

$$\overline{{f_{*} }} = T\left( {{\varvec{x}}_{*} ,{\varvec{X}}} \right)\left[ {T\left( {{\varvec{X}},{\varvec{X}}} \right) + \sigma_{0}^{2} {\varvec{I}}_{n \times n} } \right]^{ - 1} {\varvec{y}}$$
(8)
$$D\left( {f_{*} } \right) = t\left( {{\varvec{x}}_{*} ,{\varvec{x}}_{*} } \right) - T\left( {{\varvec{x}}_{*} ,{\varvec{X}}} \right)\left[ {T\left( {{\varvec{X}},{\varvec{X}}} \right) + \sigma_{0}^{2} {\varvec{I}}_{n \times n} } \right]^{ - 1} T\left( {{\varvec{X}},{\varvec{x}}_{*} } \right)$$
(9)

According to (3), the mean of \(y_{*}\), which is the optimal estimation of the output variable, is equal to the mean of \(f_{*}\):

$$\overline{{y_{*} }} = \overline{{f_{*} }}$$
(10)

Following the steps above, the prediction of the LO-based displacement error \(\overline{{y_{*} }}\) is obtained.
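
As an illustration only, the sketch below shows how such an axis-wise SE-GPR error model could be trained and queried with scikit-learn's GaussianProcessRegressor. The function names, the use of scikit-learn, and the choice of one independent regressor per axis (north, east, down) are assumptions of this example, not the authors' implementation.

```python
# Minimal sketch of an SE-GPR LO error model (one regressor per axis).
# Assumes feature vectors and reference-labelled errors are already available.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel


def train_lo_error_model(X_train, y_train):
    """X_train: (n, 5) rows of [vN, vE, vD, F_edge, F_planar] per point cloud frame.
    y_train: (n,) LO displacement errors along one axis, labelled with the
    reference trajectory (Eq. 2)."""
    # Squared exponential (RBF) kernel with signal variance sigma_f^2 and length
    # scale lambda, plus additive observation noise sigma_0^2 (Eqs. 3 and 6).
    # The hyperparameters are fitted by maximum likelihood inside fit().
    kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(X_train, y_train)
    return gpr


def predict_lo_error(gpr, x_star):
    """Predicted displacement-error mean and standard deviation for one frame,
    in the spirit of Eqs. (8)-(10)."""
    mean, std = gpr.predict(np.atleast_2d(x_star), return_std=True)
    return mean[0], std[0]
```

Three such regressors, one each for the north, east, and down displacement errors, would be trained on the historical dataset and queried frame by frame at run time.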

The weighting strategy based GNSS/IMU/LO integration

For robust GNSS/IMU/LO integration, a weighting strategy between GNSS and LO is specified. The error estimates of GNSS positioning and LO are the core of the weighting strategy. Since the protection level is a statistical bound that envelops and approximates the GNSS positioning error, the Vertical Protection Level (VPL) and Horizontal Protection Level (HPL) are calculated with LSR RAIM as the estimates of the GNSS positioning errors (Brown & Chin, 1998). The HPL is decomposed into the North Protection Level (NPL) and East Protection Level (EPL):

$$V_{{{\text{NPL}}}} = \sqrt {\frac{{\left( {H_{1,I}^{ + } } \right)^{2} }}{{\left( {H_{1,I}^{ + } } \right)^{2} + \left( {H_{2,I}^{ + } } \right)^{2} }}} \cdot V_{{{\text{HPL}}}}$$
(11)
$$V_{{{\text{EPL}}}} = \sqrt {\frac{{\left( {H_{2,I}^{ + } } \right)^{2} }}{{\left( {H_{1,I}^{ + } } \right)^{2} + \left( {H_{2,I}^{ + } } \right)^{2} }}} \cdot V_{{{\text{HPL}}}}$$
(12)

where \(V_{{{\text{NPL}}}}\), \(V_{{{\text{EPL}}}}\), and \(V_{{{\text{HPL}}}}\) represent the values of the NPL, EPL, and HPL, respectively; I denotes the index of the satellite with the largest SLOPE, which is the coefficient projecting the pseudorange error onto the positioning error domain; \({\varvec{H}}^{ + }\) represents the pseudo inverse of the satellite observation matrix; \(H_{1,I}^{ + }\) and \(H_{2,I}^{ + }\) are the elements of \({\varvec{H}}^{ + }\) in the first and second rows of the I-th column, respectively.

With the SE-GPR based LO error model, the error of LO-based location can be obtained as:

$$\updelta P_{{{\text{LO,}}N}} = \left| {\mathop \sum \limits_{m = p}^{q} \overline{y}_{{N{,}m}} } \right|,\,\,\,\updelta P_{{{\text{LO,}}E}} = \left| {\mathop \sum \limits_{m = p}^{q} \overline{y}_{E,m} } \right|,\,\,\,\updelta P_{{{\text{LO,}}D}} = \left| {\mathop \sum \limits_{m = p}^{q} \overline{y}_{{D{,}m}} } \right|$$
(13)

where \(p\) and \(q\) denote the epoch index corresponding to the last LO error feedback and the current epoch index, respectively; \(\overline{y}_{{N{,}m}}\), \(\overline{y}_{E,m}\), and \(\overline{y}_{D,m}\) are the predicted LO displacement errors in the north, east, and down directions at the m-th epoch, respectively.

Then the positioning solution of GNSS/LO weighted fusion is obtained as:

$${\varvec{P}}_{{\text{GNSS/LO}}} = {\varvec{W}}_{{{\text{GNSS}}}} {\varvec{P}}_{{{\text{GNSS}}}} + {\varvec{W}}_{{{\text{LO}}}} {\varvec{P}}_{{{\text{LO}}}}$$
(14)

where \({\varvec{P}}_{{{\text{GNSS}}}}\) and \({\varvec{P}}_{{{\text{LO}}}}\) are the GNSS based and LO based positions in the n-frame, respectively; \({\varvec{P}}_{{{\text{LO}}}}\) has been transformed with the lever arm between the GNSS antenna and the LiDAR sensor; supposing the weighting coefficients of GNSS positioning and LO are inversely proportional to corresponding predicted errors, the weight matrices for \({\varvec{P}}_{{{\text{GNSS}}}}\) and \({\varvec{P}}_{{{\text{LO}}}}\), namely \({\varvec{W}}_{{{\text{GNSS}}}}\) and \({\varvec{W}}_{{{\text{LO}}}}\), are obtained:

$${\varvec{W}}_{{{\text{GNSS}}}} = \left( {\begin{array}{*{20}c} {w_{{{\text{GNSS}}}}^{N} } & 0 & 0 \\ 0 & {w_{{{\text{GNSS}}}}^{E} } & 0 \\ 0 & 0 & {w_{{{\text{GNSS}}}}^{D} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\frac{{|{\updelta }P_{{{\text{LO,}}N}} |}}{{V_{{{\text{NPL}}}} + |{\updelta }P_{{{\text{LO,}}N}} |}}} & 0 & 0 \\ 0 & {\frac{{|{\updelta }P_{{{\text{LO,}}E}} |}}{{V_{{{\text{EPL}}}} + |{\updelta }P_{{{\text{LO,}}E}} |}}} & 0 \\ 0 & 0 & {\frac{{|{\updelta }P_{{{\text{LO,}}D}} |}}{{V_{{{\text{VPL}}}} + |{\updelta }P_{{{\text{LO,}}D}} |}}} \\ \end{array} } \right)$$
(15)
$$\varvec{W}_{{{\text{LO}}}} = \left( {\begin{array}{*{20}l} {w_{{{\text{LO}}}}^{N} } \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & {w_{{{\text{LO}}}}^{E} } \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & {w_{{{\text{LO}}}}^{D} } \hfill \\ \end{array} } \right) = \left( {\begin{array}{*{20}l} {1 - w_{{{\text{GNSS}}}}^{N} } \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & {1 - w_{{{\text{GNSS}}}}^{E} } \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & {1 - w_{{{\text{GNSS}}}}^{D} } \hfill \\ \end{array} } \right) = \varvec{I}_{{3 \times 3}} - \varvec{W}_{{{\text{GNSS}}}}$$
(16)

where \(w_{{{\text{GNSS}}}}^{N}\), \(w_{{{\text{GNSS}}}}^{E}\), and \(w_{{{\text{GNSS}}}}^{D}\) are the weighting coefficients of north, east, and down GNSS positioning results, respectively; \(w_{{{\text{LO}}}}^{N}\), \(w_{{{\text{LO}}}}^{E}\), and \(w_{{{\text{LO}}}}^{D}\) are the weighting coefficients of north, east, and down LO positioning results, respectively.
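
For illustration, a minimal numpy sketch of the weighting and fusion of Eqs. (13)-(16) is given below. The function names and array layout are hypothetical; the protection levels are assumed to come from the LSR RAIM computation and the per-frame displacement errors from the SE-GPR model.

```python
import numpy as np


def accumulated_lo_error(pred_errors):
    """Eq. (13): absolute sums of the predicted per-frame displacement errors
    (rows of [dN, dE, dD]) since the last LO error feedback."""
    return np.abs(np.sum(np.asarray(pred_errors), axis=0))


def gnss_lo_weights(npl, epl, vpl, dP_lo):
    """Eqs. (15)-(16): diagonal weight matrices for the GNSS and LO positions."""
    pl = np.array([npl, epl, vpl])          # protection levels (m)
    dP = np.abs(np.asarray(dP_lo))          # accumulated LO errors (m)
    W_gnss = np.diag(dP / (pl + dP))
    W_lo = np.eye(3) - W_gnss
    return W_gnss, W_lo


def fuse_position(p_gnss, p_lo, W_gnss, W_lo):
    """Eq. (14): weighted GNSS/LO position in the n-frame."""
    return W_gnss @ p_gnss + W_lo @ p_lo
```

The fused position then replaces the GNSS-only position in the EKF measurement vector described next.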

The measurement vector of the EKF for GNSS/IMU/LO integration is:

$${\varvec{Z}}_{k} = \left( {\begin{array}{*{20}c} {{\varvec{P}}_{{\text{GNSS/LO}}} - {\varvec{P}}_{{{\text{IMU}}}} } \\ {{\varvec{V}}_{{{\text{GNSS}}}} - {\varvec{V}}_{{{\text{IMU}}}} } \\ \end{array} } \right)$$
(17)

where \({\varvec{V}}_{{{\text{GNSS}}}}\) is the velocity solution based on GNSS Doppler shifts; \({\varvec{V}}_{{{\text{IMU}}}}\) is the velocity reckoned by IMU mechanization.

The state vector of the EKF is set as:

$${\varvec{X}}_{k} = \left( {\begin{array}{*{20}l} {\left( {{\updelta }{\varvec{r}}_{{e{\text{,IMU}}}} } \right)^{{\text{T}}} } \hfill & {\left( {{\updelta }{\varvec{v}}_{{e{\text{,IMU}}}} } \right)^{{\text{T}}} } \hfill & {\left( {{\updelta }{\mathbf{\varvec{\varphi} }}_{{e{\text{,IMU}}}} } \right)^{{\text{T}}} } \hfill & {\left( {{\varvec{b}}_{{\text{g}}} } \right)^{{\text{T}}} } \hfill & {\left( {{\varvec{s}}_{{\text{g}}} } \right)^{{\text{T}}} } \hfill & {\left( {{\varvec{b}}_{{\text{a}}} } \right)^{{\text{T}}} } \hfill & {\left( {{\varvec{s}}_{{\text{a}}} } \right)^{{\text{T}}} } \hfill \\ \end{array} } \right)^{{\text{T}}}$$
(18)

where \({\updelta }{\varvec{r}}_{{e{\text{,IMU}}}}\), \({\updelta }{\varvec{v}}_{{e{\text{,IMU}}}}\), and \({\updelta }{\mathbf{\varvec{\varphi }}}_{{e{\text{,IMU}}}}\) are the error vectors of IMU mechanization based position, velocity, and attitude in the e-frame, respectively; \({\varvec{b}}_{{\text{g}}}\) and \({\varvec{s}}_{{\text{g}}}\) are the vectors of three-axis biases and scale factors of gyroscopes, respectively; \({\varvec{b}}_{{\text{a}}}\) and \({\varvec{s}}_{{\text{a}}}\) are the vectors of three-axis biases and scale factors of accelerometers, respectively. After measurement update with \({\varvec{Z}}_{k}\), the EKF based navigation state is obtained.

LiDAR-aided lateral constraint

To control error accumulation of the EKF based navigation state in GNSS-challenging environments, the LiDAR-Aided Lateral Constraint (LALC) is established by extracting the lateral distance between the vehicle and the curbs from point clouds. Let \({\varvec{S}}_{i,k}\) denote the set of reflection points corresponding to the i-th laser at scan k and \(\theta_{i}\) denote the vertical angle of the i-th laser. Since only the lasers pointing below the XY-plane of the l-frame can scan the curbs, only negative \(\theta_{i}\) need to be considered. Firstly, to avoid interference from noisy points in \({\varvec{S}}_{i,k}\), the points that do not satisfy the following condition are excluded:

$$\frac{{H_{{{\text{LiDAR}}}} }}{{{\text{sin}}\theta_{i} }} - l < L < \frac{{H_{{{\text{LiDAR}}}} }}{{{\text{sin}}\theta_{i} }} + l$$
(19)

where \(H_{{{\text{LiDAR}}}}\) is the height of LiDAR from the road surface; l is an allowable value considering the road slope and curb height; the suggested value of \(l\) is 1.5 m; \(L\) is the measured distance from the reflection point to the LiDAR sensor.

Then, the remaining points of \({\varvec{S}}_{i,k}\) are projected onto the XY-plane of the l-frame. The projection is illustrated in simplified form in Fig. 2. Since these points are obtained by one full rotation of the spinning laser, the reflection points on the road surface form two arcs, shown as the red points, and the reflection points on the curbs form short straight lines, shown as the green points.

Fig. 2

The projection of \({\varvec{S}}_{i,k}\) on the XY-plane of the l-frame. Figure shows the projection of the reflection points of one laser scanning for one circle. The red and green squares represent the reflection points on the road surface and curbs, respectively. The blue squares denote the special reflection points on the intersection lines of the road surface and curbs

Based on the above geometric features, the special reflection points on the intersection lines of the road surface and curbs are detected. The detection factor for each reflection point is calculated as:

$${\Delta }\theta_{j} = \arctan \left( {k_{{{\text{F}},j}} } \right) - \arctan \left( {k_{{{\text{B}},j}} } \right)$$
(20)

where the subscript \(j\) indicates the j-th point of \({\varvec{S}}_{i,k}\); \(k_{{{\text{F}},j}}\) and \(k_{{{\text{B}},j}}\) are the slopes of the least-squares lines fitted to the n adjacent points before and after the j-th point, respectively; the suggested value of n is 20, based on experience.
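
The sketch below illustrates the candidate filtering of Eq. (19) and the detection factor of Eq. (20) for one laser ring. It is only an illustrative sketch: the helper names are hypothetical, the magnitude of the (negative) vertical angle is used when evaluating Eq. (19), and the sign of the slope difference is immaterial because only the peaks of the detection factor are of interest.

```python
import numpy as np


def filter_ring_points(points_xy, ranges, theta_i, h_lidar, l=1.5):
    """Eq. (19): keep points of one ring whose measured range L is consistent
    with a flat road at LiDAR height h_lidar; theta_i is the ring's vertical
    angle (negative, in radians) and l is the allowed tolerance in metres."""
    nominal = h_lidar / np.sin(abs(theta_i))   # nominal range to the road surface
    mask = (ranges > nominal - l) & (ranges < nominal + l)
    return points_xy[mask]


def detection_factors(xy, n=20):
    """Eq. (20): for each projected point j, the difference between the
    orientations of least-squares lines fitted to the n points before and the
    n points after it; peaks mark road/curb intersection points."""
    m = xy.shape[0]
    factors = np.zeros(m)
    for j in range(n, m - n):
        k_before = np.polyfit(xy[j - n:j, 0], xy[j - n:j, 1], 1)[0]  # slope before j
        k_after = np.polyfit(xy[j:j + n, 0], xy[j:j + n, 1], 1)[0]   # slope after j
        factors[j] = np.arctan(k_after) - np.arctan(k_before)
    return factors
```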

The points corresponding to the peaks of the detection factor are regarded as the special reflection points. After searching all the reflection point sets at scan k, a set of special reflection points, \({\varvec{I}}_{k}\), is obtained. Line1 and Line2 are the least-squares lines fitted to the points of \({\varvec{I}}_{k}\) on the left and right sides of the vehicle, respectively. The curb segments scanned by the LiDAR are regarded as straight if:

$$\left| {\arctan \left( {k_{{\text{L}}} } \right) - \arctan \left( {k_{{\text{R}}} } \right)} \right| < T_{1}$$
(21)
$$r_{{\text{L}}} < T_{2}$$
(22)
$$r_{{\text{R}}} < T_{2}$$
(23)

where \(k_{{\text{L}}}\) and \(r_{{\text{L}}}\) denote the slope and mean absolute residual of Line1, respectively; \(k_{{\text{R}}}\) and \(r_{{\text{R}}}\) denote the slope and mean absolute residual of Line2, respectively; \(T_{1}\) and \(T_{2}\) are thresholds taking experimental values of \(1^\circ\) and 0.1 m, respectively.

If the above conditions are satisfied, the lateral Euclidean distances from the LiDAR to the left and right curbs at epoch \(k\), \(D_{k}^{{\text{L}}}\) and \(D_{k}^{{\text{R}}}\), are obtained by calculating the distances from the origin \(O\) of the l-frame to Line1 and Line2. In the same way, \(D_{k + 1}^{{\text{L}}}\) and \(D_{k + 1}^{{\text{R}}}\) can be obtained. Then the lateral displacement of the vehicle between epoch \(k\) and epoch \(k + 1\) is obtained as:

$$d_{k}^{{A{^{\prime}}}} = \frac{{\left( {D_{k + 1}^{{\text{L}}} - D_{k}^{{\text{L}}} } \right) - \left( {D_{k + 1}^{{\text{R}}} - D_{k}^{{\text{R}}} } \right)}}{2}$$
(24)

The current r-frame is determined by considering Line1 and Line2 as two longitudinal lines on the surface of the road segment. The EKF navigation state-based vehicle displacement between epoch \(k\) and epoch \(k + 1\) is transformed from the n-frame to the r-frame by:

$$\left( {\begin{array}{*{20}c} {d_{k}^{A} } \\ {d_{k}^{L} } \\ {d_{k}^{V} } \\ \end{array} } \right) = {\varvec{C}}_{n}^{r} \left( {\begin{array}{*{20}c} {d_{k}^{N} } \\ {d_{k}^{E} } \\ {d_{k}^{D} } \\ \end{array} } \right)$$
(25)

where \({\varvec{C}}_{n}^{r}\) is the direction cosine matrix converting a vector from the n-frame to the r-frame; \(\left( {\begin{array}{*{20}c} {d_{k}^{A} } & {d_{k}^{L} } & {d_{k}^{V} } \\ \end{array} } \right)^{{\text{T}}}\) is the EKF navigation state based vehicle displacement in the r-frame; the superscripts A, L, and V denote the lateral, longitudinal, and up axes of the r-frame; \(\left( {\begin{array}{*{20}c} {d_{k}^{N} } & {d_{k}^{E} } & {d_{k}^{D} } \\ \end{array} } \right)^{{\text{T}}}\) is the EKF navigation state based vehicle displacement in the n-frame; the superscripts N, E and D indicate the north, east, and down axes of the n-frame.

Then the lateral displacement constrained by the LALC is obtained as:

$$d_{k}^{{A^{\prime\prime}}} = \left\{ {\begin{array}{*{20}l} {d_{k}^{{A^{\prime}}} } \hfill & {\left| {d_{k}^{{A^{\prime}}} - d_{k}^{A} } \right| > T_{3} } \hfill \\ {(d_{k}^{A} + d_{k}^{{A^{\prime}}} )/2} \hfill & {T_{4} \le \left| {d_{k}^{{A^{\prime}}} - d_{k}^{A} } \right| \le T_{3} } \hfill \\ {d_{k}^{A} } \hfill & {\left| {d_{k}^{{A^{\prime}}} - d_{k}^{A} } \right| < T_{4} } \hfill \\ \end{array} } \right.$$
(26)

where \(T_{3}\) and \(T_{4}\) are two thresholds taking empirical values of 1.2 m and 0.8 m, respectively.

Then the vehicle displacement between epoch \(k\) and epoch \(k + 1\) in the n-frame is updated with:

$$\left[ {\begin{array}{*{20}c} {d_{k}^{N^{\prime}} } \\ {d_{k}^{E^{\prime}} } \\ {d_{k}^{{D^{\prime}}} } \\ \end{array} } \right] = \left( {{\varvec{C}}_{n}^{r} } \right)^{{\text{T}}} \left[ {\begin{array}{*{20}c} {d_{k}^{A^{\prime\prime}} } \\ {d_{k}^{L} } \\ {d_{k}^{V} } \\ \end{array} } \right]$$
(27)

where \(d_{k}^{N^{\prime}}\), \(d_{k}^{E^{\prime}}\), and \(d_{k}^{D^{\prime}}\) are the vehicle displacements constrained by the LALC in the north, east, and down directions. Then the vehicle position estimation is updated.
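
A minimal sketch of the LALC update of Eqs. (24)-(27) is shown below. The function signature is hypothetical; the rotation matrix \({\varvec{C}}_{n}^{r}\) and the curb distances at epochs k and k + 1 are assumed to have been obtained as described above.

```python
import numpy as np


def lalc_update(d_n, C_n_r, dD_left, dD_right, t3=1.2, t4=0.8):
    """Constrain an EKF-derived displacement with the LiDAR-aided lateral constraint.

    d_n:      (3,) displacement [dN, dE, dD] between epochs k and k+1 (n-frame)
    C_n_r:    (3, 3) rotation matrix from the n-frame to the r-frame
    dD_left:  change of lateral distance to the left curb, D_{k+1}^L - D_k^L
    dD_right: change of lateral distance to the right curb, D_{k+1}^R - D_k^R
    """
    d_a_lidar = (dD_left - dD_right) / 2.0        # Eq. (24)

    d_a, d_l, d_v = C_n_r @ d_n                   # Eq. (25): n-frame -> r-frame

    diff = abs(d_a_lidar - d_a)                   # Eq. (26): choose the lateral term
    if diff > t3:
        d_a_con = d_a_lidar
    elif diff >= t4:
        d_a_con = 0.5 * (d_a + d_a_lidar)
    else:
        d_a_con = d_a

    return C_n_r.T @ np.array([d_a_con, d_l, d_v])  # Eq. (27): back to the n-frame
```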

The proposed LALC is a motion constraint in the lateral direction and has little auxiliary effect in the vertical direction. Therefore, NHC is also implemented, with an additional Kalman filter, to constrain the EKF based navigation state. As NHC is a widely used constraint, its procedure is omitted for brevity. With the aid of the LALC and NHC, the final estimated position, velocity, and attitude of the vehicle are output.

Results and discussion

To assess the performance of the proposed algorithm, a field test was carried out in the deep urban areas of Nanjing city, China. The experimental vehicle and related navigation sensors are shown in Fig. 3. A Micro Electromechanical System IMU, STIM300, whose gyroscope bias instability is 0.5 (°)/h, was employed to collect raw inertial measurement data at 125 Hz. A BDStar Navigation C520-AT receiver, equipped with a NovAtel OEM7500 multi-frequency GNSS module, was used to collect raw GNSS data at 10 Hz. A Velodyne VLP-16 LiDAR sensor was used to collect raw point cloud data. The reference trajectories were determined in the RTK/INS tightly coupled post-processing mode of the NovAtel Inertial Explorer software with the raw data of a high-grade GNSS/IMU integrated navigator, Honeywell HGuide N580, whose gyroscope bias instability is 0.25 (°)/h. In addition, antennas 1 and 2 were connected to the BDStar Navigation receiver and the N580, respectively.

Fig. 3

The vehicle and equipment for the field test. The raw GNSS and inertial data were collected with the BDStar Navigation receiver and the STIM300 IMU, respectively. A high-grade navigator, the N580, was employed to provide the reference trajectory. Antennas 1 and 2 were connected to the BDStar Navigation receiver and the N580, respectively

The vehicle was driven in an area near the Nanjing South Railway Station on September 27, 2023, to collect the training dataset. The driving route corresponding to the training dataset is shown in Fig. 4. To evaluate the superiority of the proposed LO error model, the fitting results of SE-GPR and other regression models over the training dataset are compared in Table 1. The candidate fitting models include the coarse tree, fine Gaussian Support Vector Machine (SVM), and ensemble boosted regression tree. The Root Mean Square Error (RMSE) of SE-GPR is the smallest among these candidate models. R-squared is another important evaluation parameter, ranging between 0 and 1, where a higher value indicates better fitting performance. The SE-GPR model provides the highest R-squared in fitting the north, east, and down LO errors.

Fig. 4

The vehicle trajectory of the training dataset. Figure shows the driving route of the experimental vehicle corresponding to the training dataset. The start and end points of this trajectory are marked in this figure

Table 1 Performance analysis of the regression models over the training dataset

The testing dataset was collected in the downtown area of Nanjing city, about 8 km away from the location where the training dataset was collected, on April 29, 2024. The vehicle trajectory of the testing dataset is shown in Fig. 5. The vehicle was driven between dense, tall buildings for the whole trajectory. In several sections of the trajectory, there are thick trees, which cause severe signal attenuation and occlusion for local GNSS positioning. The trained SE-GPR based LO error model was tested with the testing dataset. As Fig. 6 shows, the real LO errors are predicted accurately at most epochs. The north, east, and down RMSEs of the predicted results are 0.0473 m, 0.0510 m, and 0.0330 m, respectively. Some jump errors of LO were not predicted well at a small number of epochs. This is because of the complex effects of environmental structure, such as structural degeneracy, and of surrounding moving objects, such as other vehicles. The LO error model will be improved in our future research by finding inputs that more directly reflect dynamic objects and environmental structure. The visible satellite number is depicted in Fig. 7, and the corresponding Position Dilution of Precision (PDOP) value is in Fig. 8. The number of visible satellites is less than 5 and the PDOP is larger than 3 at around 400 s.

Fig. 5

The vehicle trajectory of the testing dataset. Figure shows the vehicle trajectory of the testing dataset, which was collected in the downtown area of Nanjing city, about 8 km away from the location of the collected training dataset, on April 29, 2024

Fig. 6

Fitting performance of the SE-GPR based LO error model over the testing dataset. Figure shows the performance of the proposed SE-GPR based LO error model. The green lines represent the labelled LO errors obtained by the reference trajectory. And the red lines denote the predicted LO errors

Fig. 7

Number of GNSS satellites. Figure shows the number of visible satellites at each epoch, with the blue points in the upper and lower panels representing the numbers of GPS and BDS satellites, respectively

Fig. 8

PDOP value. Figure shows the values of PDOP at each epoch

The errors of GNSS-only positioning and LO-only positioning are shown in Fig. 9. It should be noted that the LO error feedback strategy of the proposed algorithm was implemented in the LO-only case, and the lever arm between the GNSS antenna and the LiDAR sensor was corrected. There are jump errors of several tens of meters at many epochs for GNSS positioning over the testing dataset, and the LO positioning shows obvious error accumulation. The weighting coefficients of GNSS positioning and LO positioning in the north, east, and down directions produced by the proposed weighting strategy are shown in Fig. 10. The weighting coefficient of GNSS positioning is larger than that of LO when GNSS positioning is more accurate, such as in the period from 620 to 660 s, and smaller when LO is more accurate, such as in the period from 410 to 440 s. Generally, the weighting coefficients vary correctly with the relative magnitudes of the GNSS positioning and LO errors.

Fig. 9

Error comparison of GNSS positioning and LO positioning. The GNSS positioning error and LO positioning error are shown, with green and red lines denoting the GNSS-only and LO-only cases, respectively. Since the vertical axis scale is too large to show the LO error clearly, the part in the blue dashed rectangle is zoomed in, as shown in the panels indicated by the blue arrows

Fig. 10

Results of the proposed weighting strategy. Figure shows the results of the proposed weighting strategy, with green and red lines representing the weighting coefficients of GNSS and LO positions, respectively

To validate the performance improvement of the proposed GNSS/IMU/LO integrated navigation algorithm, four candidate algorithms were also evaluated with the testing dataset. These algorithms are described in Table 2.

Table 2 Description of the tested algorithms

Figure 11 compares the positioning errors of these algorithms in the n-frame. Figure 12 shows their horizontal positioning results in a real map. The accuracy of horizontal positioning is improved significantly with the weighted GNSS/IMU/LO integration, especially from 80 to 130 s and from 240 to 310 s. Comparing the "weighted GNSS/IMU/LO integration" and the "weighted GNSS/IMU/LO integration with LALC" shows that the LALC improves horizontal positioning accuracy in some periods, such as 630–660 s. The peak horizontal positioning error remains almost unchanged at 310 s despite the LALC, possibly because surrounding vehicles occluded the curbs from the LiDAR. The vertical positioning accuracy of the proposed algorithm is improved significantly by NHC, with the maximum vertical position error reduced from 26 m to 10 m.

Fig. 11

Positioning error comparison in the n-frame. Figure shows positioning errors of the candidate algorithms in the n-frame, with green lines representing the “basic GNSS/IMU/LO integration” algorithm, purplish red lines representing the “weighted GNSS/IMU/LO integration” algorithm, light blue lines representing the “weighted GNSS/IMU/LO integration with LALC” algorithm, dark blue lines representing the “weighted GNSS/IMU/LO integration with NHC” algorithm, and red lines representing the proposed algorithm, respectively

Fig. 12

Horizontal positioning results in a real map. Figure shows horizontal positioning results of these candidate algorithms in a real map, with yellow points representing the reference trajectory, green points representing the “basic GNSS/IMU/LO integration” algorithm, purplish red points representing the “weighted GNSS/IMU/LO integration” algorithm, light blue points representing the “weighted GNSS/IMU/LO integration with LALC” algorithm, dark blue points representing the “weighted GNSS/IMU/LO integration with NHC” algorithm, and red points representing the proposed algorithm, respectively

For further analysis, the positioning RMSEs of these algorithms were calculated, as shown in Table 3. The RMSEs in the north and east directions for the basic GNSS/IMU/LO integration are 3.890 m and 4.201 m, respectively. For the weighted GNSS/IMU/LO integration, the corresponding RMSEs are 3.088 m and 3.500 m, improved by 20.6% and 16.7%. These improvements are due to the error prediction based GNSS/LO weighting strategy. The improvement in the down direction is much smaller (9.6%) because the low vertical resolution of the point clouds degrades LO accuracy in the vertical direction. When the LALC is added to the weighted GNSS/IMU/LO integration, the north and east accuracies are improved by 32.5% and 27.7%, respectively. However, the improvement in the down direction is smaller (13.9%), because the LALC constrains vehicle motion on the road surface, which is approximately horizontal in this field test. To further improve positioning accuracy, NHC is utilized in the proposed algorithm to reduce the noise of the vertical velocity, resulting in improvements of 35.9% and 50.0% in horizontal and 3D accuracy, respectively. Comparing the horizontal positioning improvements of the "weighted GNSS/IMU/LO integration with LALC" and the "weighted GNSS/IMU/LO integration with NHC" (29.9% and 27.2%), the proposed LALC provides a greater auxiliary effect on the horizontal positioning accuracy than NHC. When the LALC and NHC are implemented together, the horizontal positioning accuracy improvement (35.9%) is much higher than the ones (29.9% and 27.2%) obtained when only one constraint is used. Compared with the "weighted GNSS/IMU/LO integration with NHC", the slight decrease (0.057 m) in down-direction positioning accuracy of the proposed algorithm is attributed to the noise of the NHC and LALC projected onto the vertical direction. This is acceptable at this scale because the proposed algorithm aims at land vehicle applications, for which the horizontal positioning accuracy is more important.

Table 3 Positioning accuracy comparison in the whole trajectory

Table 4 shows the velocity and attitude RMSEs of these algorithms. For the proposed algorithm, the velocity accuracies in the north and east directions are improved by 18.2% and 28.3%, respectively. The down velocity accuracy is improved by 85.4%, mainly due to the implementation of NHC. The accuracy of the heading angle, which is the most important attitude angle for vehicle navigation, is improved by 36.6%.

Table 4 Comparison of velocity and attitude RMSEs

Conclusion

A GNSS/IMU/LO weighted integration framework with the LALC and NHC motion constraints is proposed for robust vehicle navigation by modeling the LO and GNSS positioning errors. In particular, an SE-GPR based LO error model is established by considering the vehicle velocity and the number of point cloud features. A field test was carried out in a deep urban area to evaluate the performance improvement. The proposed algorithm delivers a horizontal positioning RMSE of 3.669 m and a 3D positioning RMSE of 5.216 m, while those of the basic EKF based GNSS/IMU/LO integration are 5.725 m and 10.430 m, respectively. The proposed algorithm thus achieves improvements of 35.9% and 50.0% in horizontal and 3D positioning accuracy, respectively. Further research on improving the algorithm's performance by using RTK GNSS with LiDAR-aided ambiguity resolution in deep urban environments is ongoing.

Availability of data and materials

The datasets generated during the current study are available from the corresponding author on reasonable request.

Abbreviations

ICP: Iterative closest point
LALC: LiDAR-aided lateral constraint
LeGO-LOAM: Lightweight and ground-optimized LOAM
LIO-SAM: LiDAR inertial odometry via smoothing and mapping
LO: LiDAR odometry
LOAM: LiDAR odometry and mapping
LSR: Least squares residuals
NHC: Non-holonomic constraint
RMSEs: Root mean square errors
RTK: Real-time kinematic
SE-GPR: Squared exponential Gaussian process regression
SPP: Single point positioning
SVD: Singular value decomposition
SVM: Support vector machine


Acknowledgements

Not applicable.

Funding

This work has been supported by the National Natural Science Foundation of China (Grant No. 42222401, 42174025), Natural Science Foundation of Jiangsu Province (Grant No. BK20211569), Postgraduate Research & Practice Innovation Program of Jiangsu Province (Grant No. KYCX24_0601), University Grants Committee of Hong Kong under the scheme Research Impact Fund (Grant No. R5009-21) and the Research Institute of Land and System, The Hong Kong Polytechnic University.

Author information


Contributions

HC and RS proposed the idea; QC, TY and YZ made suggestions for the framework; HC, QC and TY executed the experiment and wrote the original draft; RS, TY, YZ and WYO were involved in formal analysis and review. The final manuscript has been read and approved by all the authors.

Corresponding author

Correspondence to Tong Yin.

Ethics declarations

Competing interests

Washington Yotto Ochieng is an editorial board member for Satellite Navigation and was not involved in the editorial review or decision to publish this article. The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Chen, H., Sun, R., Cheng, Q. et al. GNSS/IMU/LO integration with a new LO error model and lateral constraint for navigation in urban areas. Satell Navig 5, 30 (2024). https://doi.org/10.1186/s43020-024-00151-8

