
GIL: a tightly coupled GNSS PPP/INS/LiDAR method for precise vehicle navigation


Accurate positioning and navigation play a vital role in vehicle-related applications, such as autonomous driving and precision agriculture. With the rapid development of Global Navigation Satellite Systems (GNSS), the Precise Point Positioning (PPP) technique, as a global positioning solution, has been widely applied due to its operational convenience. Nevertheless, the performance of PPP is severely affected by signal interference, especially in GNSS-challenged environments. Inertial Navigation System (INS)-aided GNSS can significantly improve the continuity and accuracy of navigation in harsh environments, but degrades during GNSS outages. LiDAR (Laser Imaging, Detection, and Ranging)-Inertial Odometry (LIO), which performs well in local navigation, can restrain the drift of Inertial Measurement Units (IMUs). However, in long-range navigation, error accumulation is inevitable if no external aids are applied. To improve vehicle navigation performance, we propose a tightly coupled GNSS PPP/INS/LiDAR (GIL) integration method, which tightly integrates the raw measurements from multi-GNSS PPP, a Micro-Electro-Mechanical System (MEMS) IMU, and LiDAR to achieve high-accuracy and reliable navigation in urban environments. Several experiments were conducted to evaluate this method. The results indicate that, in comparison with the tightly coupled multi-GNSS PPP/INS solution, the positioning Root-Mean-Square Errors (RMSEs) of the proposed GIL method are improved by 63.0%, 51.3%, and 62.2% in the east, north, and vertical components, respectively. The GIL method can achieve decimeter-level positioning accuracy in GNSS partly-blocked environments (i.e., environments where GNSS signals are partly blocked) and meter-level positioning accuracy in GNSS-difficult environments (i.e., environments where GNSS is hardly usable). In addition, the accuracy of velocity and attitude estimation is also enhanced with the GIL method.


Accurate and continuous navigation is one of the fundamental prerequisites for a reliable and intelligent driving system. Nevertheless, it is often difficult for a stand-alone sensor to meet the needs of robust navigation in complex scenarios (Groves et al., 2014; Li et al., 2021). Thus, multi-sensor data fusion, which takes full advantage of different sensors (e.g., Global Navigation Satellite Systems (GNSS), Inertial Navigation System (INS), Laser Imaging, Detection, and Ranging (LiDAR), and cameras), has become a research focus in both academia and industry.

GNSS, which delivers accurate Positioning, Navigation, and Timing (PNT) services, is widely adopted in various fields. Compared with differential positioning techniques such as Real-Time Kinematic (RTK), the Precise Point Positioning (PPP) method proposed by Zumberge et al. (1997) overcomes the baseline-distance limitation and offers operational flexibility. Developments in satellite orbit and clock products (Kouba, 2013) further increase the practicality of PPP (Tang et al., 2017; Wright et al., 2012). Along with the rapid development of GNSS, additional systems such as the Galileo navigation satellite system (Galileo) and the BeiDou Navigation Satellite System (BDS) enhance the geometric strength of PPP observations (Afifi & El-Rabbany, 2015). With more GNSS satellites available, GPS (Global Positioning System) + BDS + Galileo PPP has been confirmed to converge faster and achieve higher accuracy than the GPS-only PPP solution (Guo et al., 2018; Li et al., 2018).

Despite its advantages, multi-GNSS PPP is not immune to degradation under poor satellite visibility or weak constellation geometry (Zhang & Li, 2012), which makes continuous and precise navigation in urban areas an intractable problem. To address this problem, much work has been done to aid GNSS with INS (Cui et al., 2019; Gao et al., 2017). As for PPP, Roesler (2009) and Shin (2009) proved that PPP/INS integration can achieve superior positioning accuracy and continuity in both open-sky and GNSS-difficult environments (i.e., environments where GNSS is hardly usable). Compared with loosely coupled approaches, tightly coupled integration has been demonstrated to be more effective and robust when satellite availability is limited (Abd Rabbou & El-Rabbany, 2015a, 2015b). However, when GNSS signals are blocked, the navigation result degrades rapidly owing to error accumulation, especially for Micro-Electro-Mechanical System (MEMS) Inertial Measurement Units (IMUs) (Abd Rabbou & El-Rabbany, 2015a).

Fortunately, the integration of LiDAR and IMU, which shows excellent precision and reliability in local navigation, provides another solution for vehicle navigation. Typical methods, such as Laser Odometry and Mapping (LOAM) (Zhang & Singh, 2014), Cartographer (Hess et al., 2016), and LiDAR-IMU Odometry (LIO)-mapping (Ye et al., 2019), can achieve decimeter-level positioning accuracy for indoor navigation tasks. These methods commonly formulate an optimization problem to determine the best estimate of the LiDAR poses. Besides, substantial research has also tried to realize the fusion through filter-based approaches. For instance, Zhen et al. (2017) proposed a tightly coupled INS/LiDAR method based on a Gaussian particle filter and verified its effectiveness in various challenging conditions, such as a kidnapped robot. Qin et al. (2020) utilized an iterated error-state Kalman filter to achieve real-time ego-motion estimation. Generally, the optimization-based methods require significant computational resources to obtain high accuracy through an iterative process, while the filter-based algorithms have advantages in operational efficiency with comparable accuracy (Qin et al., 2020). However, error accumulation is unavoidable for INS/LiDAR odometry in long-duration navigation without available global corrections.

To take advantage of both local and global navigation and achieve globally drift-free localization, many scholars have conducted in-depth research on the integration of GNSS, INS, and LiDAR. One of the fusion schemes utilizes GNSS/INS results to estimate gravity’s orientation and predict LiDAR poses (Hess et al., 2016). However, this scheme does not consider GPS and IMU data processing and heavily depends on LiDAR registration. Alternatively, by extending the conventional GNSS/INS scheme with LiDAR Simultaneous Localization and Mapping (SLAM), Chiang et al. (2020) presented a lane-level navigation approach for moving vehicles. Other integration strategies aided the tightly coupled LiDAR/IMU odometry with optional GPS position constraints (Shan et al., 2020; Koide, 2019). These strategies take GNSS as a “black box”, describe the poses in a self-defined local frame, and loosely merge local odometry results with GPS positioning results. Soloviev (2008) introduced a tightly coupled solution of GPS, two-Dimensional (2D) laser, and INS to improve the performance of 2D plane positioning in urban areas. Nevertheless, this study only utilized GPS-derived horizontal positions.

In this contribution, we propose a tightly coupled multi-GNSS PPP/INS/LiDAR (GIL) algorithm to perform three-Dimensional (3D) large-scale vehicle navigation in urban environments. Raw observations from multi-GNSS PPP, a MEMS-IMU, and a 16-line LiDAR are integrated through an Extended Kalman Filter (EKF) to enhance the navigation performance in terms of position, velocity, and attitude. Experiments in GNSS-challenged environments around Wuhan University are designed to evaluate this method. The remainder of this paper is organized as follows. First, we introduce the system state description. Then the process model is investigated, followed by the multi-GNSS PPP and LiDAR measurement models. Next, the overall workflow of GIL is presented. Subsequently, the experimental platform is briefly introduced, and the experiments and corresponding analyses are presented. Finally, we summarize this study and outline prospects for follow-up research.

Mathematical model

State description

The state vector \({\updelta }{\mathbf{x}}\) consists of the error states from INS, PPP, and LiDAR, which are represented by \({\updelta }{\mathbf{x}}_{{{\text{ins}}}}\), \({\updelta }{\mathbf{x}}_{{{\text{ppp}}}}\) and \({\updelta }{\mathbf{x}}_{{{\text{lidar}}}}\), respectively. The specific forms can be defined as:

$$\begin{gathered}\updelta {\mathbf{x}}_{{{\text{ins}}}} { = [}\updelta {{\varvec{\uptheta}}}_{b}^{e} {,}\updelta {\mathbf{v}}_{b}^{e} {,}\updelta {\mathbf{p}}_{b}^{e} {,}\updelta {\mathbf{b}}_{a} {,}\updelta {\mathbf{b}}_{g} {]}^{{\text{T}}} \hfill \\\updelta {\mathbf{x}}_{{{\text{ppp}}}} { = }\left[ {\updelta {\mathbf{t}}_{r} ,\updelta Z_{{{\text{wet}}}} ,\updelta {\mathbf{N}}_{{{\text{IF}}}} } \right]^{{\text{T}}} \hfill \\\updelta {\mathbf{x}}_{{{\text{lidar}}}} = \left[ {\updelta {{\varvec{\uptheta}}}_{{l_{1} }}^{e} ,\updelta {\mathbf{p}}_{{l_{1} }}^{e} ,\updelta {{\varvec{\uptheta}}}_{{l_{2} }}^{e} ,\updelta {\mathbf{p}}_{{l_{2} }}^{e} , \ldots ,\updelta {{\varvec{\uptheta}}}_{{l_{N} }}^{e} ,\updelta {\mathbf{p}}_{{l_{N} }}^{e} } \right]^{{\text{T}}} \hfill \\\updelta {\mathbf{x}} \, = {[}\updelta {\mathbf{x}}_{{{\text{ins}}}} ,\updelta {\mathbf{x}}_{{{\text{ppp}}}} ,\updelta {\mathbf{x}}_{{{\text{lidar}}}} {]} \hfill \\ \end{gathered}$$

In \({\updelta }{\mathbf{x}}_{{{\text{ins}}}}\), \({\updelta }{{\varvec{\uptheta}}}_{b}^{e}\) denotes the three-dimensional attitude errors. The symbols \(b\) and \(e\) indicate the body coordinate frame (\(b\)-frame) and earth-centered earth-fixed frame (\(e\)-frame), respectively. Note that the IMU frame coincides with the \(b\)-frame in this study. The vectors \({\updelta }{\mathbf{v}}_{b}^{e}\) and \({\updelta }{\mathbf{p}}_{b}^{e}\) are corrections of velocity and position in \(e\)-frame. Besides, the biases of the accelerometer and gyroscope \({\mathbf{b}}_{a} ,{\mathbf{b}}_{g}\) are modeled as random walks. The corresponding bias error variations \({\updelta }{\mathbf{b}}_{a}\) and \({\updelta }{\mathbf{b}}_{g}\) are included in the state vector and estimated online.

As for \({\updelta }{\mathbf{x}}_{{{\text{ppp}}}}\), the position of the antenna center can be related to the INS position through the lever-arm. Therefore, only the receiver clock offsets of different constellation systems \({\updelta }{\mathbf{t}}_{r}\), zenith tropospheric wet delay \({\delta Z}_{{{\text{wet}}}}\), and Ionospheric-Free (IF) ambiguity \({\updelta }{\mathbf{N}}_{{{\text{IF}}}}\) are stored in the PPP state vector (Li et al., 2015).

Furthermore, the LiDAR state can be represented by the attitude error \({\updelta }{{\varvec{\uptheta}}}_{l}^{e}\) and position error \({\updelta }{\mathbf{p}}_{l}^{e}\). Several LiDAR poses maintained by a sliding-window mechanism are stored in the state vector for the multi-state constraint Kalman filter model (Mourikis & Roumeliotis, 2007). Thus, the LiDAR-correlated state vector at epoch \(k\) can be expressed as:

$${\updelta }{\mathbf{x}}_{{{\text{lidar}}}}^{k} = \left[ {{\updelta }{{\varvec{\uptheta}}}_{{l_{k} }}^{e} ,{\updelta }{\mathbf{p}}_{{l_{k} }}^{e} } \right]^{{\text{T}}}$$
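To make the state composition concrete, the following is a minimal sketch (not the authors' code) of how the GIL error-state vector could be laid out in software. The block sizes follow the definitions above: 15 INS states, PPP states consisting of three receiver clock terms, one zenith wet delay, and `n_amb` IF ambiguities, and 6 states per LiDAR pose in the sliding window; the function name and interface are illustrative assumptions.

```python
# Illustrative layout of the GIL error-state vector (a sketch, not the
# authors' implementation).  Each entry maps a block name to its
# (start index, size) within the full state.
def state_layout(n_amb, n_lidar_poses):
    """Return ({block: (start, size)}, total_dimension)."""
    layout = {}
    idx = 0
    for name, size in [
        ("ins_att", 3), ("ins_vel", 3), ("ins_pos", 3),   # INS errors
        ("bias_acc", 3), ("bias_gyr", 3),                 # IMU biases
        ("clk", 3), ("zwd", 1), ("amb", n_amb),           # PPP states
        ("lidar", 6 * n_lidar_poses),                     # window poses
    ]:
        layout[name] = (idx, size)
        idx += size
    return layout, idx

layout, dim = state_layout(n_amb=12, n_lidar_poses=5)
```

With 12 ambiguities and 5 windowed LiDAR poses, the total error-state dimension is 15 + 16 + 30 = 61.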

Process model

The linearized continuous dynamic INS model (Shin, 2005) can be written as:

$$\left[ {\begin{array}{*{20}c} {{\updelta }{\dot{\mathbf{p}}}_{b}^{e} } \\ {{\updelta }{\dot{\mathbf{v}}}_{b}^{e} } \\ {{\updelta }{\dot{\varvec{\theta }}}_{b}^{e} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\updelta }{\mathbf{v}}_{b}^{e} } \\ {\left( {{\mathbf{R}}_{b}^{e} {\mathbf{f}}^{b} } \right) \times {\updelta }{{\varvec{\uptheta}}}_{b}^{e} - 2{{\varvec{\upomega}}}_{ie}^{e} \times {\updelta }{\mathbf{v}}_{b}^{e} + {\mathbf{R}}_{b}^{e} {\updelta }{\mathbf{f}}^{b} + {\updelta }{\mathbf{g}}^{e} } \\ { - \left( {{{\varvec{\upomega}}}_{ie}^{e} \times } \right){\updelta }{{\varvec{\uptheta}}}_{b}^{e} - {\mathbf{R}}_{b}^{e} {\updelta }{{\varvec{\upomega}}}_{ib}^{b} } \\ \end{array} } \right]$$

where \({\updelta }{\dot{\mathbf{p}}}_{b}^{e}\), \({\updelta }{\dot{\mathbf{v}}}_{b}^{e}\) and \({\updelta }{\dot{\varvec{\theta }}}_{b}^{e}\) refer to the derivatives of the position, velocity, and attitude errors, respectively. \({{\varvec{\upomega}}}_{ie}^{e}\) is the angular rate of the \(e\)-frame with respect to the inertial frame (\(i\)-frame), and \({\mathbf{f}}^{b}\) and \({{\varvec{\upomega}}}_{ib}^{b}\) are the measurements of the accelerometer and gyroscope, respectively. \({\mathbf{R}}_{b}^{e}\) denotes the rotation matrix from the \(b\)-frame to the \(e\)-frame.
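The error dynamics above can be sketched numerically as follows. This is an illustrative transcription of the continuous-time equation, not the authors' code; the function name and argument order are assumptions, and \({{\varvec{\upomega}}}_{ie}^{e}\) is taken as the standard Earth rotation vector in the \(e\)-frame.

```python
import numpy as np

OMEGA_IE = 7.2921151467e-5  # Earth rotation rate (rad/s), WGS-84 value

def ins_error_dot(dtheta, dv, R_be, f_b, df_b, dw_b, dg_e):
    """Continuous-time e-frame INS error dynamics (sketch of the
    linearized model above):
      dp_dot     = dv
      dv_dot     = (R f^b) x dtheta - 2 w_ie x dv + R df^b + dg^e
      dtheta_dot = -(w_ie x) dtheta - R dw_ib^b
    """
    w_ie = np.array([0.0, 0.0, OMEGA_IE])  # e-frame angular rate vector
    dp_dot = dv
    dv_dot = (np.cross(R_be @ f_b, dtheta) - 2.0 * np.cross(w_ie, dv)
              + R_be @ df_b + dg_e)
    dtheta_dot = -np.cross(w_ie, dtheta) - R_be @ dw_b
    return dp_dot, dv_dot, dtheta_dot
```

A quick sanity check: with all error states and noise terms zero, every derivative is zero, as the linearized model requires.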

Specifically, the system model of the GIL method used for the EKF prediction is expressed as:

$$\begin{gathered} \left[ {\begin{array}{*{20}c} {{\updelta }{\dot{\mathbf{t}}}_{r} } \\ {{\updelta }\dot{d}_{{{\text{zwd}}}} } \\ {{\updelta }{\dot{\varvec{\theta }}}_{b}^{e} } \\ {{\updelta }{\dot{\mathbf{v}}}_{b}^{e} } \\ {{\updelta }{\dot{\mathbf{p}}}_{b}^{e} } \\ {{\updelta }{\dot{\mathbf{b}}}_{a} } \\ {{\updelta }{\dot{\mathbf{b}}}_{g} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\mathbf{0}} & 0 & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & \cdots & {\mathbf{0}} & {\mathbf{0}} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\ {\mathbf{0}} & 0 & {\mathbf{0}} & { - {{\varvec{\upomega}}}_{{{\text{ie}} \times }}^{e} } & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & { - {\mathbf{R}}_{b}^{e} } & {\mathbf{0}} & {\mathbf{0}} & \cdots & {\mathbf{0}} & {\mathbf{0}} \\ {\mathbf{0}} & 0 & {\mathbf{0}} & {{\mathbf{R}}_{b}^{e} {\mathbf{f}}_{ \times }^{b} } & { - 2{{\varvec{\upomega}}}_{{{\text{ie}} \times }}^{e} } & {\mathbf{0}} & {{\mathbf{R}}_{b}^{e} } & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & \cdots & {\mathbf{0}} & {\mathbf{0}} \\ {\mathbf{0}} & 0 & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{I}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & \cdots & {\mathbf{0}} & {\mathbf{0}} \\ {\mathbf{0}} & 0 & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & \cdots & {\mathbf{0}} & {\mathbf{0}} \\ {\mathbf{0}} & 0 & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & \cdots & {\mathbf{0}} & {\mathbf{0}} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\updelta }{\mathbf{t}}_{r} } \\ {{\updelta }d_{{{\text{zwd}}}} } \\ {{\updelta }{\mathbf{N}}_{{{\text{IF}}}} } \\ {{\updelta }{{\varvec{\uptheta}}}_{b}^{e} } \\ {{\updelta }{\mathbf{v}}_{b}^{e} } \\ {{\updelta }{\mathbf{p}}_{b}^{e} } \\ {{\updelta }{\mathbf{b}}_{a} } \\ {{\updelta }{\mathbf{b}}_{g} } \\ {{\updelta }{{\varvec{\uptheta}}}_{{l_{1} }}^{e} } \\ {{\updelta }{\mathbf{p}}_{{l_{1} }}^{e} } \\ \vdots \\ {{\updelta }{{\varvec{\uptheta}}}_{{l_{k} }}^{e} } \\ {{\updelta }{\mathbf{p}}_{{l_{k} }}^{e} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {\mathbf{I}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} \\ 0 & 1 & 0 & 0 \\ {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} \\ {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} \\ {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} \\ {\mathbf{0}} & {\mathbf{0}} & {\mathbf{I}} & {\mathbf{0}} \\ {\mathbf{0}} & {\mathbf{0}} & {\mathbf{0}} & {\mathbf{I}} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\mathbf{n}}_{{t_{r} }} } \\ {n_{{{\text{zwd}}}} } \\ {{\mathbf{n}}_{a} } \\ {{\mathbf{n}}_{g} } \\ \end{array} } \right] \hfill \\ \end{gathered}$$

where \({\mathbf{I}}\) is the identity matrix, and \({\mathbf{n}}_{{t_{r} }} ,n_{{{\text{zwd}}}} ,{\mathbf{n}}_{a} ,{\mathbf{n}}_{g}\) represent the driving noise of \({\mathbf{t}}_{r} ,d_{{{\text{zwd}}}} ,{\mathbf{b}}_{a} ,{\mathbf{b}}_{g}\), respectively.

When a new LiDAR frame is available at epoch \(k{ + }1\), the initial LiDAR pose \(\left( {{\hat{\mathbf{R}}}_{{l_{k + 1} }}^{e} ,{\hat{\mathbf{p}}}_{{l_{k + 1} }}^{e} } \right)\) can be derived from the simultaneous IMU pose \(\left( {{\hat{\mathbf{R}}}_{{b_{k + 1} }}^{e} ,{\hat{\mathbf{p}}}_{{b_{k + 1} }}^{e} } \right)\) through the pre-calibrated extrinsic parameters between the IMU and LiDAR \(\left( {{\mathbf{R}}_{l}^{b} ,{\mathbf{p}}_{l}^{b} } \right)\), which can be written as:

$$\begin{gathered} {\hat{\mathbf{R}}}_{{l_{k + 1} }}^{e} = {\hat{\mathbf{R}}}_{{b_{k + 1} }}^{e} {\mathbf{R}}_{l}^{b} \hfill \\ {\hat{\mathbf{p}}}_{{l_{k + 1} }}^{e} = {\hat{\mathbf{p}}}_{{b_{k + 1} }}^{e} + {\hat{\mathbf{R}}}_{{b_{k + 1} }}^{e} {\mathbf{p}}_{l}^{b} \hfill \\ \end{gathered}$$
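The pose prediction above is a direct frame composition. As a minimal sketch (function name and interface are assumptions, not from the paper):

```python
import numpy as np

def lidar_pose_from_imu(R_be, p_be, R_lb, p_lb):
    """Predict the e-frame LiDAR pose from the e-frame IMU pose and the
    pre-calibrated IMU-to-LiDAR extrinsics, as in the equation above:
        R_l^e = R_b^e R_l^b,   p_l^e = p_b^e + R_b^e p_l^b
    """
    R_le = R_be @ R_lb
    p_le = p_be + R_be @ p_lb
    return R_le, p_le
```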

Then, the new LiDAR state \({\updelta }{\mathbf{x}}_{{{\text{lidar}}}}^{{k{ + }1}} = \left[ {{\updelta }{{\varvec{\uptheta}}}_{{l_{{k{ + }1}} }}^{e} ,{\updelta }{\mathbf{p}}_{{l_{k + 1} }}^{e} } \right]^{{\text{T}}}\) at epoch \(k + 1\) will be added into the system state vector, and the corresponding system covariance is also augmented (Mourikis & Roumeliotis, 2007):

$$\begin{gathered} {\updelta }{\mathbf{x}}_{{k{ + }1,k}} = \left[ {{\updelta }{\mathbf{x}}_{{{\text{ins}}}} ,{\updelta }{\mathbf{x}}_{{{\text{ppp}}}} ,{\updelta }{\mathbf{x}}_{{{\text{lidar}}}}^{{{\text{old}}}} ,{\updelta }{\mathbf{x}}_{{{\text{lidar}}}}^{{k{ + }1}} } \right] \hfill \\ {\mathbf{P}}_{{k{ + }1,k}} = \left[ {\begin{array}{*{20}c} {\mathbf{I}} \\ {\mathbf{J}} \\ \end{array} } \right]{\mathbf{P}}_{k,k} \left[ {\begin{array}{*{20}c} {\mathbf{I}} \\ {\mathbf{J}} \\ \end{array} } \right]^{{\text{T}}} \hfill \\ \end{gathered}$$

where \({\updelta }{\mathbf{x}}_{{k{ + }1,k}}\) is the system state vector after \({\updelta }{\mathbf{x}}_{{{\text{lidar}}}}^{{k{ + }1}}\) is added, and \({\mathbf{P}}_{k,k}\) and \({\mathbf{P}}_{{k{ + }1,k}}\) are the covariances before and after the augmentation procedure, respectively. The non-zero elements in the Jacobian matrix \({\mathbf{J}}\) can be expressed as:

$${\mathbf{J}}{ = }\left[ {\begin{array}{*{20}c} { - \left( {{\hat{\mathbf{R}}}_{l}^{e} } \right)^{{\text{T}}} } & {{\mathbf{0}}_{3 \times 3} } & {{\mathbf{0}}_{3 \times 3} } & {{\mathbf{0}}_{3 \times 3} } & {{\mathbf{0}}_{3 \times 3} } & {...} & {{\mathbf{0}}_{3 \times 3} } \\ {\left\lfloor {{\hat{\mathbf{R}}}_{b}^{e} {\mathbf{p}}_{l\; \times }^{b} } \right\rfloor } & {{\mathbf{0}}_{3 \times 3} } & {{\mathbf{I}}_{3 \times 3} } & {{\mathbf{0}}_{3 \times 3} } & {{\mathbf{0}}_{3 \times 3} } & {...} & {{\mathbf{0}}_{3 \times 3} } \\ \end{array} } \right]$$
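The covariance augmentation itself is a single stack-and-propagate operation. A minimal sketch (illustrative, not the authors' code; it takes \({\mathbf{J}}\) as given and works for any number of appended rows):

```python
import numpy as np

def augment_covariance(P, J):
    """MSCKF-style covariance augmentation from the equation above:
        P_new = [I; J] P [I; J]^T
    P is the n x n covariance before augmentation; J is the Jacobian of
    the appended LiDAR pose error w.r.t. the existing state (6 x n in
    the text).  Returns the (n+m) x (n+m) augmented covariance.
    """
    n = P.shape[0]
    T = np.vstack([np.eye(n), J])  # stack identity over the Jacobian
    return T @ P @ T.T
```

Note the block structure of the result: the top-left block is the old \({\mathbf{P}}\), the off-diagonal blocks are \({\mathbf{J}}{\mathbf{P}}\), and the bottom-right block is \({\mathbf{J}}{\mathbf{P}}{\mathbf{J}}^{\text{T}}\), so the new pose starts fully correlated with the states it was predicted from.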

Multi-GNSS PPP measurement model

In PPP processing, the ionospheric-free (IF) model is used to eliminate the first-order ionospheric delay, which can be written as:

$$\begin{gathered} P_{{r,{\text{IF}}}}^{s} = \rho + c\left( {t_{r} - t^{s} } \right) + T_{r}^{s} + e_{{r,{\text{IF}}}}^{s} \\ L_{{r,{\text{IF}}}}^{s} = \rho + c\left( {t_{r} - t^{s} } \right) + \lambda_{{{\text{IF}}}} N_{{r,{\text{IF}}}}^{s} + T_{r}^{s} + \varepsilon_{{r,{\text{IF}}}}^{s} \\ \end{gathered}$$

where \(s\) and \(r\) refer to the satellite and the receiver. \(P_{{r,{\text{IF}}}}^{s}\) and \(L_{{r,{\text{IF}}}}^{s}\) are the ionosphere-free pseudorange and carrier phase observations, respectively, \(\rho { = }\left| {{\mathbf{p}}_{r} - {\mathbf{p}}^{s} } \right|\) stands for the distance between the receiver and the satellite, \(c\) is the speed of light, \(t^{s}\) and \(t_{r}\) represent the offsets of the satellite clock and receiver clock, and \(T_{r}^{s}\) denotes the tropospheric delay. The wavelength and ambiguity of the IF model are represented by \(\lambda_{{{\text{IF}}}}\) and \(N_{{r,{\text{IF}}}}^{s}\). \(e_{{r,{\text{IF}}}}^{s}\) and \(\varepsilon_{{r,{\text{IF}}}}^{s}\) denote the measurement noise and multipath error of the IF pseudorange and carrier phase, respectively.
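The IF observations above are formed from dual-frequency measurements by the standard ionosphere-free combination, which cancels the first-order ionospheric delay because that delay scales with \(1/f^{2}\). The paper does not spell out the combination, so the following sketch uses the standard formula with GPS L1/L2 frequencies as an example:

```python
def iono_free(obs1, obs2, f1, f2):
    """Standard dual-frequency ionosphere-free combination:
        IF = (f1^2 * obs1 - f2^2 * obs2) / (f1^2 - f2^2)
    obs1/obs2 in metres on frequencies f1/f2 (Hz).  First-order
    ionospheric delay (which scales as 1/f^2) cancels exactly.
    """
    g1, g2 = f1 ** 2, f2 ** 2
    return (g1 * obs1 - g2 * obs2) / (g1 - g2)

# GPS L1/L2 carrier frequencies (Hz)
F1, F2 = 1575.42e6, 1227.60e6
```

For example, if both observations contain the same geometric range but ionospheric delays of \(I\) and \((f_1/f_2)^2 I\) respectively, the combination returns the geometric range alone.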

According to Böhm et al. (2006), the tropospheric delay is composed of dry and wet components, which are computed with their zenith components \(\left( {Z_{{{\text{dry}}}} ,Z_{{{\text{wet}}}} } \right)\) and corresponding mapping functions \(\left( {m_{{{\text{dry}}}} ,m_{{{\text{wet}}}} } \right)\). In this article, the dry component is corrected by a prior model, and the wet component is modeled as a random walk. In the multi-constellation case of GPS (G), BDS (C), and Galileo (E), the linearized PPP observation equations can be expressed as:

$$\begin{gathered} \left[ {\begin{array}{*{20}c} {P_{{{\text{IF}}}}^{G} } \\ {P_{{{\text{IF}}}}^{C} } \\ {P_{{{\text{IF}}}}^{E} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\left| {{\mathbf{p}}_{r} - {\mathbf{p}}_{G}^{s} } \right| + {{\varvec{\upmu}}}{\updelta }{\mathbf{p}}_{r} + c\left( {t_{r,G} - t_{G}^{s} + {\updelta }t_{r,G} } \right) + m_{{{\text{dry}}}} Z_{{{\text{dry}}}} + m_{{{\text{wet}}}} \left( {Z_{{{\text{wet}}}} + {\updelta }Z_{{{\text{wet}}}} } \right)} \\ {\left| {{\mathbf{p}}_{r} - {\mathbf{p}}_{C}^{s} } \right| + {{\varvec{\upmu}}}{\updelta }{\mathbf{p}}_{r} + c\left( {t_{r,C} - t_{C}^{s} + {\updelta }t_{r,C} } \right) + m_{{{\text{dry}}}} Z_{{{\text{dry}}}} + m_{{{\text{wet}}}} \left( {Z_{{{\text{wet}}}} + {\updelta }Z_{{{\text{wet}}}} } \right)} \\ {\left| {{\mathbf{p}}_{r} - {\mathbf{p}}_{E}^{s} } \right| + {{\varvec{\upmu}}}{\updelta }{\mathbf{p}}_{r} + c\left( {t_{r,E} - t_{E}^{s} + {\updelta }t_{r,E} } \right) + m_{{{\text{dry}}}} Z_{{{\text{dry}}}} + m_{{{\text{wet}}}} \left( {Z_{{{\text{wet}}}} + {\updelta }Z_{{{\text{wet}}}} } \right)} \\ \end{array} } \right] \hfill \\ \left[ {\begin{array}{*{20}c} {L_{{{\text{IF}}}}^{G} } \\ {L_{{{\text{IF}}}}^{C} } \\ {L_{{{\text{IF}}}}^{E} } \\ \end{array} } \right]{ = }\left[ {\begin{array}{*{20}c} {\left| {{\mathbf{p}}_{r} - {\mathbf{p}}_{G}^{s} } \right| + {{\varvec{\upmu}}}{\updelta }{\mathbf{p}}_{r} + c\left( {t_{r,G} - t_{G}^{s} + {\updelta }t_{r,G} } \right) + m_{{{\text{dry}}}} Z_{{{\text{dry}}}} + m_{{{\text{wet}}}} \left( {Z_{{{\text{wet}}}} + {\updelta }Z_{{{\text{wet}}}} } \right) + \lambda_{{\text{G,IF}}} \left( {N_{{{\text{IF}}}}^{G} + {\updelta }N_{{{\text{IF}}}}^{G} } \right)} \\ {\left| {{\mathbf{p}}_{r} - {\mathbf{p}}_{C}^{s} } \right| + {{\varvec{\upmu}}}{\updelta }{\mathbf{p}}_{r} + c\left( {t_{r,C} - t_{C}^{s} + {\updelta }t_{r,C} } \right) + m_{{{\text{dry}}}} Z_{{{\text{dry}}}} + m_{{{\text{wet}}}} \left( {Z_{{{\text{wet}}}} + {\updelta }Z_{{{\text{wet}}}} } \right) + \lambda_{{\text{C,IF}}} \left( {N_{{{\text{IF}}}}^{C} + {\updelta }N_{{{\text{IF}}}}^{C} } \right)} \\ {\left| {{\mathbf{p}}_{r} - {\mathbf{p}}_{E}^{s} } \right| + {{\varvec{\upmu}}}{\updelta }{\mathbf{p}}_{r} + c\left( {t_{r,E} - t_{E}^{s} + {\updelta }t_{r,E} } \right) + m_{{{\text{dry}}}} Z_{{{\text{dry}}}} + m_{{{\text{wet}}}} \left( {Z_{{{\text{wet}}}} + {\updelta }Z_{{{\text{wet}}}} } \right) + \lambda_{{\text{E,IF}}} \left( {N_{{{\text{IF}}}}^{E} + {\updelta }N_{{{\text{IF}}}}^{E} } \right)} \\ \end{array} } \right] \hfill \\ \end{gathered}$$

where \({\mathbf{p}}_{r}\) and \({\mathbf{p}}^{s}\) are the position vectors of the receiver and the satellite, respectively. \({{\varvec{\upmu}}}\) denotes the unit vector from the satellite to the receiver. \({\updelta }{\mathbf{N}}_{{{\text{IF}}}} { = }\left[ {\begin{array}{*{20}c} {{\updelta }N_{{{\text{IF}}}}^{G} } & {{\updelta }N_{{{\text{IF}}}}^{C} } & {{\updelta }N_{{{\text{IF}}}}^{E} } \\ \end{array} } \right]^{{\text{T}}}\) is the ambiguity correction vector, and \({\updelta }{\mathbf{t}}_{r} = \left[ {\begin{array}{*{20}c} {{\updelta }t_{r,G} } & {{\updelta }t_{r,C} } & {{\updelta }t_{r,E} } \\ \end{array} } \right]^{{\text{T}}}\) is the receiver clock correction vector. In the case of multi-GNSS observations, the different hardware time delays, called Inter-System Biases (ISBs), should be taken into account (Li et al., 2015),

$$\left[ {\begin{array}{*{20}c} {ISB_{G,C} } \\ {ISB_{G,E} } \\ \end{array} } \right]{ = }\left[ {\begin{array}{*{20}c} {t_{r,C} - t_{r,G} } \\ {t_{r,E} - t_{r,G} } \\ \end{array} } \right]$$

where \(ISB_{G,C}\) and \(ISB_{G,E}\) are the ISBs of BDS and Galileo with respect to GPS, which are modeled as random walks. Thus, the receiver clock offsets \({\updelta }{\mathbf{t}}_{r} = \left[ {\begin{array}{*{20}c} {{\updelta }t_{r,G} } & {{\updelta }t_{r,C} } & {{\updelta }t_{r,E} } \\ \end{array} } \right]^{{\text{T}}}\) can be re-parameterized as \({\updelta }{\mathbf{t}}_{r} = \left[ {\begin{array}{*{20}c} {{\updelta }t_{r,G} } & {ISB_{G,C} } & {ISB_{G,E} } \\ \end{array} } \right]^{{\text{T}}}\).

In the integrated system, the simultaneous IMU state is used to predict the GNSS-derived position. However, the antenna phase center does not coincide with the INS reference point, which leads to a lever-arm offset. The lever-arm is precisely measured in advance, and its correction can be expressed as:

$${\mathbf{p}}_{r} = {\mathbf{p}}_{b}^{e} + {\mathbf{R}}_{b}^{e} {\mathbf{l}}^{b}$$

where \({\mathbf{l}}^{b}\) denotes the lever-arm. The relationship between \({\updelta }{\mathbf{p}}_{r}\) and \({\updelta }{\mathbf{p}}_{b}^{e}\) can be obtained by differentiating (11):

$${\updelta }{\mathbf{p}}_{r} = {\updelta }{\mathbf{p}}_{b}^{e} { + }\left\lfloor {{\mathbf{R}}_{b}^{e} {\mathbf{l}}^{b}_{ \times } } \right\rfloor {\updelta }{{\varvec{\uptheta}}}_{b}^{e}$$
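The lever-arm correction of (11) is a one-line frame transformation. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def antenna_position(p_be, R_be, lever_b):
    """GNSS antenna phase-centre position from the INS position and the
    body-frame lever-arm, as in equation (11):
        p_r = p_b^e + R_b^e l^b
    """
    return p_be + R_be @ lever_b
```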

The linearized GNSS observation equations can be written as:

$$\left[ {\begin{array}{*{20}c} {r_{{P_{k} }}^{{}} } \\ {r_{{L_{k} }}^{{}} } \\ \end{array} } \right]{ = }\left[ {\begin{array}{*{20}c} {\hat{P}_{{{\text{ins}}}}^{k} - P_{{\text{gnss,IF}}}^{k} } \\ {\hat{L}_{{{\text{ins}}}}^{k} - L_{{\text{gnss,IF}}}^{k} } \\ \end{array} } \right]{ = }\left[ {\begin{array}{*{20}c} {{\mathbf{H}}_{{P_{k} }}^{{}} } \\ {{\mathbf{H}}_{{L_{k} }}^{{}} } \\ \end{array} } \right]{\updelta }{\mathbf{x}}{ + }\left[ {\begin{array}{*{20}c} {n_{{P_{k} }} } \\ {n_{{L_{k} }} } \\ \end{array} } \right]$$

where \(k\) refers to the time epoch, \({\updelta }{\mathbf{x}}\) denotes the system state vector, \(r_{{P_{k} }}^{{}}\) and \(r_{{L_{k} }}^{{}}\) are the residuals of psedorange and carrier measurements, \(n_{{P_{k} }}\) and \(n_{{L_{k} }}\) represent the corresponding noises. \(\hat{P}_{{{\text{ins}}}}^{k}\) and \(\hat{L}_{{{\text{ins}}}}^{k}\) are the INS-predicted GNSS measurements, and the designed matrix \({\mathbf{H}}_{{P_{k} }} ,{\mathbf{H}}_{{L_{k} }}\) can be determined directly from (9) and (12).

LiDAR measurement model

Before introducing the LiDAR measurements, the sliding window mechanism is described. As shown in Fig. 1, the threshold \(N\) of LiDAR frames stored in the state vector is pre-defined, and the oldest frame is discarded when the window is full. The newly obtained LiDAR frame is matched with the other LiDAR scans in the sliding window to establish the geometric relationships between frames. We note that the value of \(N\) should not be set too large: a small window benefits computational efficiency, and scan matching tends to be inaccurate when the distance between frames is large, particularly when the vehicle moves fast on the road. Considering that some unreliable observations exist in the registration, a Chi-square test (Sun et al., 2018; Zuo et al., 2019) is employed to remove outliers from the scan matching, and only the measurements that pass the test are used for the EKF measurement update.

Fig. 1 The strategy for the LiDAR sliding window
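The window bookkeeping described above can be sketched as follows. This is illustrative only (class and method names are assumptions); the marginalization of the dropped frame's state and covariance is omitted:

```python
from collections import deque

class LidarWindow:
    """Fixed-size sliding window of LiDAR frames: when a new frame
    arrives and the window is full, the oldest frame is discarded,
    matching the strategy in Fig. 1.  N is the pre-defined threshold."""

    def __init__(self, N):
        self.frames = deque(maxlen=N)

    def push(self, frame):
        """Append a frame; return the dropped (oldest) frame, if any."""
        dropped = None
        if len(self.frames) == self.frames.maxlen:
            dropped = self.frames[0]
        self.frames.append(frame)  # deque drops the oldest automatically
        return dropped
```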

When a new LiDAR frame arrives, feature extraction is first performed to select high-quality points that are approximately on edges or planes in the scan (Zhang & Singh, 2014). Then the edge or plane features are de-skew corrected (Ye et al., 2019) and projected to the old LiDAR frames to find the nearest line or plane correspondences. The spatial octree (De Berg et al., 2008) is constructed for fast indexing in this process. For more details on matching the point-line pairs and point-plane pairs refer to (Zhang et al., 2017).

In the case of edge matching, assume that \(p_{i}^{{\text{edge},l_{{k{ + }1}} }}\) is an edge feature in the newest LiDAR frame \(k + 1\), its projection into frame \(k\) is \(p_{i}^{{{\text{edge}},l_{k} }}\), and the corresponding line found in frame \(k\) is represented by two points \(p_{a}^{{{\text{edge}},l_{k} }}\) and \(p_{b}^{{{\text{edge}},l_{k} }}\). The point-to-line distance \(d_{i}^{{{\text{edge}}}}\) is then expressed as:

$$d_{i}^{{{\text{edge}}}} = \frac{{\left| {\left( {{\mathbf{p}}_{a}^{{{\text{edge}},l_{k} }} - {\mathbf{p}}_{b}^{{{\text{edge}},l_{k} }} } \right) \times \left( {{\mathbf{p}}_{i}^{{{\text{edge}},l_{k} }} - {\mathbf{p}}_{b}^{{{\text{edge}},l_{k} }} } \right)} \right|}}{{\left| {\left( {{\mathbf{p}}_{a}^{{{\text{edge}},l_{k} }} - {\mathbf{p}}_{b}^{{{\text{edge}},l_{k} }} } \right)} \right|}}$$

As for plane matching, \({\mathbf{p}}_{i}^{{{\text{plane}},l_{k} }}\) denotes the projection of plane feature \({\mathbf{p}}_{i}^{{{\text{plane}},l_{{k{ + }1}} }}\) from frame \(k + 1\) to frame \(k\), three points \({\mathbf{p}}_{a}^{{{\text{plane}},l_{k} }} ,{\mathbf{p}}_{b}^{{{\text{plane}},l_{k} }} ,{\mathbf{p}}_{c}^{{{\text{plane}},l_{k} }}\) are selected in the frame \(k\) to form the corresponding matching plane. The point-to-plane distance \(d_{i}^{{{\text{plane}}}}\) is expressed as:

$$d_{i}^{{{\text{plane}}}} = \frac{{\left| {\left( {{\mathbf{p}}_{b}^{{{\text{plane}},l_{k} }} - {\mathbf{p}}_{a}^{{{\text{plane}},l_{k} }} } \right) \times \left( {{\mathbf{p}}_{c}^{{{\text{plane}},l_{k} }} - {\mathbf{p}}_{a}^{{{\text{plane}},l_{k} }} } \right) \cdot \left( {{\mathbf{p}}_{i}^{{{\text{plane}},l_{k} }} - {\mathbf{p}}_{a}^{{{\text{plane}},l_{k} }} } \right)} \right|}}{{\left| {\left( {{\mathbf{p}}_{b}^{{{\text{plane}},l_{k} }} - {\mathbf{p}}_{a}^{{{\text{plane}},l_{k} }} } \right) \times \left( {{\mathbf{p}}_{c}^{{{\text{plane}},l_{k} }} - {\mathbf{p}}_{a}^{{{\text{plane}},l_{k} }} } \right)} \right|}}$$
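Both residual distances of (14) and (15) are standard cross-product constructions. A minimal sketch (function names are assumptions):

```python
import numpy as np

def point_to_line(p, a, b):
    """Point-to-line distance d_i^edge of (14): the parallelogram area
    |(a - b) x (p - b)| divided by the base length |a - b|."""
    return np.linalg.norm(np.cross(a - b, p - b)) / np.linalg.norm(a - b)

def point_to_plane(p, a, b, c):
    """Point-to-plane distance d_i^plane of (15): the projection of
    (p - a) onto the plane normal (b - a) x (c - a), normalized."""
    n = np.cross(b - a, c - a)
    return abs(np.dot(n, p - a)) / np.linalg.norm(n)
```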

The LiDAR observation equations can be represented by the distances in formulas (14) and (15):

$$\begin{gathered} r_{i}^{{{\text{edge}}}} = d_{i}^{{{\text{edge}}}} { = }\,\,{\mathbf{H}}_{{{\text{edge}},i}}^{{k,k{ + }1}} \left[ {\begin{array}{*{20}c} {{\updelta }{\mathbf{x}}_{{{\text{lidar}}}}^{k} } & {{\updelta }{\mathbf{x}}_{{{\text{lidar}}}}^{k + 1} } \\ \end{array} } \right]^{{\text{T}}} { + }n_{{{\text{edge}}}} \hfill \\ r_{i}^{{{\text{plane}}}} = d_{i}^{{{\text{plane}}}} { = }\,\,{\mathbf{H}}_{{{\text{plane}},i}}^{k,k + 1} \left[ {\begin{array}{*{20}c} {{\updelta }{\mathbf{x}}_{{{\text{lidar}}}}^{k} } & {{\updelta }{\mathbf{x}}_{{{\text{lidar}}}}^{k + 1} } \\ \end{array} } \right]^{{\text{T}}} { + }n_{{{\text{plane}}}} \hfill \\ \end{gathered}$$

where \(r_{i}^{{{\text{edge}}}} ,r_{i}^{{{\text{plane}}}}\) are the observation residuals, and \(n_{{{\text{edge}}}}\) and \(n_{{{\text{plane}}}}\) represent the noises of the edge and plane observations, respectively. \({\mathbf{H}}_{{{\text{edge}},i}}^{{k,k{ + }1}}\) and \({\mathbf{H}}_{{{\text{plane}},i}}^{{k,k{ + }1}}\) are the design matrices, which can be calculated following the chain rule:

$$\begin{gathered} {\mathbf{H}}_{{{\text{edge}},i}}^{{k,k{ + }1}} = \frac{{\partial {\mathbf{r}}_{i}^{{{\text{edge}}}} }}{{\partial {\mathbf{p}}_{i}^{{{\text{edge}},l_{k} }} }}\left( {\frac{{\partial {\mathbf{p}}_{i}^{{{\text{edge}},l_{k} }} }}{{\partial {\mathbf{x}}_{{{\text{lidar}}}}^{k} }} + \frac{{\partial {\mathbf{p}}_{i}^{{{\text{edge}},l_{k} }} }}{{\partial {\mathbf{x}}_{{{\text{lidar}}}}^{k + 1} }}} \right) \hfill \\ {\mathbf{H}}_{{{\text{plane}},i}}^{{k,k{ + }1}} = \frac{{\partial {\mathbf{r}}_{i}^{{{\text{plane}}}} }}{{\partial {\mathbf{p}}_{i}^{{{\text{plane}},l_{k} }} }}\left( {\frac{{\partial {\mathbf{p}}_{i}^{{{\text{plane}},l_{k} }} }}{{\partial {\mathbf{x}}_{{{\text{lidar}}}}^{k} }} + \frac{{\partial {\mathbf{p}}_{i}^{{{\text{plane}},l_{k} }} }}{{\partial {\mathbf{x}}_{{{\text{lidar}}}}^{k + 1} }}} \right) \hfill \\ \end{gathered}$$

The subterms \(\frac{{\partial {\mathbf{r}}_{i}^{{{\text{edge}}}} }}{{\partial {\mathbf{p}}_{i}^{{{\text{edge}},l_{k} }} }}\) and \(\frac{{\partial {\mathbf{r}}_{i}^{{{\text{plane}}}} }}{{\partial {\mathbf{p}}_{i}^{{{\text{plane}},l_{k} }} }}\) can be calculated according to (14) and (15). The relationship between \({\mathbf{p}}_{i}^{{{\text{edge}},l_{k} }} ,{\mathbf{p}}_{i}^{{{\text{plane}},l_{k} }}\) and the LiDAR states \({\mathbf{x}}_{{{\text{lidar}}}}^{k} ,{\mathbf{x}}_{{{\text{lidar}}}}^{{k{ + }1}}\) can be derived from the projection formulas:

$$\begin{gathered} \mathbf{p}_{i}^{\text{edge},l_{k}} = \hat{\mathbf{R}}_{l_{k+1}}^{l_{k}} \mathbf{p}_{i}^{\text{edge},l_{k+1}} + \hat{\mathbf{p}}_{l_{k+1}}^{l_{k}} = \left( \hat{\mathbf{R}}_{l_{k}}^{e} \right)^{\text{T}} \hat{\mathbf{R}}_{l_{k+1}}^{e} \mathbf{p}_{i}^{\text{edge},l_{k+1}} + \left( \hat{\mathbf{R}}_{l_{k}}^{e} \right)^{\text{T}} \left( \hat{\mathbf{p}}_{l_{k+1}}^{e} - \hat{\mathbf{p}}_{l_{k}}^{e} \right) \hfill \\ \mathbf{p}_{i}^{\text{plane},l_{k}} = \hat{\mathbf{R}}_{l_{k+1}}^{l_{k}} \mathbf{p}_{i}^{\text{plane},l_{k+1}} + \hat{\mathbf{p}}_{l_{k+1}}^{l_{k}} = \left( \hat{\mathbf{R}}_{l_{k}}^{e} \right)^{\text{T}} \hat{\mathbf{R}}_{l_{k+1}}^{e} \mathbf{p}_{i}^{\text{plane},l_{k+1}} + \left( \hat{\mathbf{R}}_{l_{k}}^{e} \right)^{\text{T}} \left( \hat{\mathbf{p}}_{l_{k+1}}^{e} - \hat{\mathbf{p}}_{l_{k}}^{e} \right) \hfill \\ \end{gathered}$$

Note that \(\left\{ \hat{\mathbf{R}}_{l_{k}}^{e}, \hat{\mathbf{p}}_{l_{k}}^{e} \right\}\) and \(\left\{ \hat{\mathbf{R}}_{l_{k+1}}^{e}, \hat{\mathbf{p}}_{l_{k+1}}^{e} \right\}\) are the poses of LiDAR frames \(k\) and \(k+1\), which can be obtained by referring to (5). Thus, the non-zero elements of an edge or plane measurement in \(\frac{\partial \delta \mathbf{p}_{i}^{l_{k}}}{\partial \mathbf{x}_{\text{lidar}}}\) can be expressed as:

$$\begin{gathered} \frac{\partial \delta \mathbf{p}_{i}^{l_{k}}}{\partial \mathbf{x}_{\text{lidar}}^{k}} = \left[ \begin{array}{cc} \frac{\partial \delta \mathbf{p}_{i}^{l_{k}}}{\partial \delta \boldsymbol{\uptheta}_{l_{k}}^{e}} & \frac{\partial \delta \mathbf{p}_{i}^{l_{k}}}{\partial \delta \mathbf{p}_{l_{k}}^{e}} \end{array} \right] = \left[ \begin{array}{cc} \left\lfloor \left( \hat{\mathbf{R}}_{l_{k}}^{e} \right)^{\text{T}} \left( \hat{\mathbf{R}}_{l_{k+1}}^{e} \mathbf{p}_{i}^{l_{k+1}} + \hat{\mathbf{p}}_{l_{k+1}}^{e} - \hat{\mathbf{p}}_{l_{k}}^{e} \right) \right\rfloor_{\times} & -\left( \hat{\mathbf{R}}_{l_{k}}^{e} \right)^{\text{T}} \end{array} \right] \hfill \\ \frac{\partial \delta \mathbf{p}_{i}^{l_{k}}}{\partial \mathbf{x}_{\text{lidar}}^{k+1}} = \left[ \begin{array}{cc} \frac{\partial \delta \mathbf{p}_{i}^{l_{k}}}{\partial \delta \boldsymbol{\uptheta}_{l_{k+1}}^{e}} & \frac{\partial \delta \mathbf{p}_{i}^{l_{k}}}{\partial \delta \mathbf{p}_{l_{k+1}}^{e}} \end{array} \right] = \left[ \begin{array}{cc} -\left( \hat{\mathbf{R}}_{l_{k}}^{e} \right)^{\text{T}} \hat{\mathbf{R}}_{l_{k+1}}^{e} \left\lfloor \mathbf{p}_{i}^{l_{k+1}} \right\rfloor_{\times} & \left( \hat{\mathbf{R}}_{l_{k}}^{e} \right)^{\text{T}} \end{array} \right] \hfill \\ \end{gathered}$$
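As a check on the two Jacobian blocks above, they can be formed in a few lines of NumPy. This is a minimal sketch under a small-angle error-state assumption; the function names are ours, not the authors':

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]x, so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def project_to_frame_k(R_k, p_k, R_k1, p_k1, pt_k1):
    """Project a feature point from LiDAR frame k+1 into frame k,
    given the ECEF poses {R, p} of the two frames."""
    return R_k.T @ (R_k1 @ pt_k1 + p_k1 - p_k)

def pose_jacobians(R_k, p_k, R_k1, p_k1, pt_k1):
    """Non-zero 3x6 Jacobian blocks of the projected point w.r.t. the
    attitude/position error states of frames k and k+1."""
    J_k = np.hstack([skew(R_k.T @ (R_k1 @ pt_k1 + p_k1 - p_k)), -R_k.T])
    J_k1 = np.hstack([-R_k.T @ R_k1 @ skew(pt_k1), R_k.T])
    return J_k, J_k1
```

Each feature contributes one scalar residual, so stacking these 3×6 blocks behind the point-to-line or point-to-plane derivative yields the rows of the design matrices above.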

Measurement update

When pre-processed GNSS or LiDAR observations are available at epoch \(k+1\), the corresponding measurement update is performed according to (13) and (16), following the standard EKF routine (Qin et al., 2020):

$$\begin{gathered} \mathbf{r}_{k+1} = \mathbf{H}_{k+1} \delta \mathbf{x}_{k+1,k} + \mathbf{n}_{k+1} \hfill \\ \mathbf{K} = \mathbf{P}_{k+1,k} \mathbf{H}_{k+1}^{\text{T}} \left( \mathbf{H}_{k+1} \mathbf{P}_{k+1,k} \mathbf{H}_{k+1}^{\text{T}} + \mathbf{R}_{n} \right)^{-1} \hfill \\ \delta \mathbf{x}_{k+1} = \delta \mathbf{x}_{k+1,k} \otimes \mathbf{K}\mathbf{r}_{k+1} \hfill \\ \end{gathered}$$

where \(\mathbf{n}_{k+1}\) is the observation noise, assumed to be zero-mean Gaussian with covariance \(\mathbf{R}_{n}\). \(\mathbf{K}\) denotes the near-optimal Kalman gain, and the symbol \(\otimes\) represents the compensation of the states. Meanwhile, the updated covariance matrix \(\mathbf{P}_{k+1,k+1}\) can be calculated by:

$$\mathbf{P}_{k+1,k+1} = \left( \mathbf{I} - \mathbf{K}\mathbf{H}_{k+1} \right)\mathbf{P}_{k+1,k} \left( \mathbf{I} - \mathbf{K}\mathbf{H}_{k+1} \right)^{\text{T}} + \mathbf{K}\mathbf{R}_{n} \mathbf{K}^{\text{T}}$$
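The update equations and the Joseph-form covariance can be sketched in NumPy as follows. The function name is ours, and the \(\otimes\) compensation operator is approximated here as simple addition on the Euclidean error states:

```python
import numpy as np

def ekf_measurement_update(dx_pred, P_pred, H, r, R_n):
    """One EKF measurement update using the Joseph-form covariance,
    which keeps P symmetric and positive semi-definite."""
    S = H @ P_pred @ H.T + R_n                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    dx = dx_pred + K @ (r - H @ dx_pred)        # error-state compensation
    I_KH = np.eye(len(dx_pred)) - K @ H
    P = I_KH @ P_pred @ I_KH.T + K @ R_n @ K.T  # Joseph form
    return dx, P
```

Because GNSS and LiDAR observations arrive asynchronously, this routine is simply called once per available measurement block, updating the same state and covariance sequentially.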

GIL algorithm implementation

Based on the mathematical model introduced above, the main process of the GIL method is organized as in Fig. 2. After system initialization, the INS dynamic alignment (Han & Wang, 2010) is performed to obtain the initial pose of the vehicle, and the bias-compensated IMU data are then used for the INS mechanization. The EKF time update is performed to predict the states and their covariance. Before constructing the IF combination, to ensure the measurements are reliable for the EKF update, we first check the raw GNSS data for cycle slips and outliers (Fang et al., 2009). Similarly, LiDAR feature detection (Zhang & Singh, 2014) and de-skewing (Ye et al., 2019) are carried out during LiDAR data preparation, and the initial LiDAR pose is added to the system state through the augmentation procedure. When pre-processed GNSS and LiDAR measurements are available, the corresponding measurement updates are performed sequentially according to the mathematical models to update the filter states. Furthermore, the corrected gyroscope and accelerometer biases are fed back to compensate the subsequent IMU data, restraining the INS divergence.

Fig. 2

Flowchart of the GIL method
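The flow in Fig. 2 can be summarized as a sequential per-epoch loop. The sketch below uses hypothetical callback names for each stage (mechanization, prediction, and the two measurement updates); it illustrates the ordering only and is not the authors' implementation:

```python
def gil_epoch_loop(imu_stream, gnss_meas, lidar_meas,
                   mechanize, ekf_predict, update_gnss, update_lidar,
                   x0, P0):
    """Skeleton of the GIL processing loop: INS mechanization drives the
    filter, and GNSS/LiDAR measurement updates are applied sequentially
    whenever pre-processed observations exist for the current epoch."""
    x, P = x0, P0
    trajectory = []
    for k, imu in enumerate(imu_stream):
        x = mechanize(x, imu)      # bias-compensated INS mechanization
        x, P = ekf_predict(x, P)   # EKF time update: states + covariance
        if k in gnss_meas:         # GNSS IF-combination update
            x, P = update_gnss(x, P, gnss_meas[k])
        if k in lidar_meas:        # LiDAR edge/plane feature update
            x, P = update_lidar(x, P, lidar_meas[k])
        trajectory.append(x)       # corrected biases feed the next epoch
    return trajectory
```

In the real system the three sensors run at different rates (IMU at 100 Hz, LiDAR at 10 Hz, GNSS at 1 Hz), so most epochs execute only the mechanization and prediction steps.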

Experimental platform and strategies

To evaluate the performance of the proposed GIL method, experiments under different GNSS conditions were conducted in Wuhan, Hubei, on 27 October 2020. A road flanked by tall trees and buildings was selected for the tests, reflecting the frequent signal interruptions typical of urban areas. Some typical scenarios in the experiments are shown in Fig. 3.

Fig. 3

Some typical scenarios in the experiments

The detailed strategies used in the GNSS PPP and LiDAR processing are listed in Tables 1 and 2, respectively. For GNSS PPP, multiple systems, including GPS (G), BDS (C), and Galileo (E), were considered in the processing. The precise orbit and clock products used were provided by the Center for Orbit Determination in Europe (CODE). As for LiDAR, the size of the sliding window was set to 4, and the observation noise was predefined separately for the two kinds of observations based on the LiDAR resolution.

Table 1 Strategies and models in multiple GNSS PPP processing
Table 2 Strategies and models in LiDAR processing

The experiment data were collected with the mobile platform shown in Fig. 4. A 16-line LiDAR (Velodyne Corporation, 2021) was mounted above a MEMS-IMU (Analog Devices Corporation, 2019) to ensure unimpeded scans during the experiments. A tactical IMU (StarNeto Corporation, 2014) was installed in the middle of the platform. Additionally, a GNSS antenna (NovAtel Corporation, 2015), connected to a GNSS receiver (Septentrio Corporation, 2019), was mounted on the platform. Hardware-level time synchronization unified the timestamps of all collected data to GPS time. For spatial synchronization, pre-calibration was performed to determine the extrinsic parameters between the LiDAR and IMU (Lv et al., 2020), and the lever arms between the IMUs and the GNSS antenna were measured accurately.

Fig. 4

Installation of the experimental platform

Performance specifications of the two different grade IMUs are shown in Table 3. During the experiments, the sampling rate of the GNSS receiver was set to 1 Hz, and that of LiDAR was configured to 10 Hz. Both the MEMS-IMU and Tactical IMU worked at 100 Hz. In addition, another GNSS receiver (Septentrio PolaRx5) was fixed on the rooftop of a tall building as a base station.

Table 3 Technical specifications of the IMU sensors in the experimental platform

The reference solution, calculated with the commercial Inertial Explorer 8.9 software (NovAtel Corporation, 2018), is the smoothed result of tightly coupled multi-GNSS Post-Processing Kinematic (PPK) and INS integration. Its overall positioning accuracy reaches the centimeter level with a fixed rate of more than 90%, and high-precision extrapolated results from the tactical IMU are provided during non-fixed periods (NovAtel Corporation, 2018). The accuracy of the reference solution is therefore an order of magnitude higher than that of the solutions under evaluation, enabling a fair and reliable comparison between the different algorithms.

Result analysis

In this section, we analyze the vehicle navigation results in the GNSS partly-blocked environment and the GNSS difficult environment separately. For each experiment, the multi-GNSS PPP solution and satellite availability are first introduced. Then, the Position, Velocity, and Attitude (PVA) estimates of the GIL method are presented, with the tightly coupled multi-GNSS PPP/INS solution serving as the baseline for assessing GIL performance in urban vehicle navigation.

PPP solution in GNSS partly-blocked experiment

The first experiment focuses on a GNSS partly-blocked environment. Figure 5 shows the top view of the route and the multi-GNSS PPP (G + C + E) positioning result, where colors indicate the error distribution intervals. In some road sections, the harsh environment caused frequent and slow re-convergence of PPP, resulting in positioning discontinuities and large abnormal offsets. Clearly, the multi-GNSS PPP result in this environment can hardly meet the requirements of high-accuracy, continuous navigation.

Fig. 5

Field test trajectory. Left: top view of the trajectory on Google Earth; right: positioning result of PPP when using multi-GNSS data in the GNSS partly-blocked experiment

In this study, the number of visible satellites, satellite elevation, continuity of satellite signal tracking, and the corresponding Position Dilution of Precision (PDOP) are used to characterize GNSS availability. The number of available satellites is shown in Fig. 6a. The enhancement of GNSS observability with multiple GNSS systems is considerable (Li et al., 2015): the average number of satellites for the G + C + E combination is 13.46, which is 3.5, 2.5, and 3.3 times the number of visible satellites for the single GPS, BDS, and Galileo systems, respectively. According to the continuity of available satellites depicted in Fig. 6b, interruptions occur frequently in some segments, particularly for satellites at low elevation, mainly because roadside houses and trees severely obstruct the GNSS signals. As the time series of PDOP values in Fig. 6c shows, dramatic jumps in PDOP accompany severe tracking losses, with a maximum PDOP of 49.25. These facts show that discontinuous satellite tracking is a major challenge for GNSS positioning.

Fig. 6

Number of available GNSS satellites (a), continuity and elevation of satellites (b), PDOP of multi-GNSS PPP (c) in GNSS partly-blocked experiment
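The PDOP values discussed above follow from the satellite geometry alone. A minimal sketch (function name ours; for simplicity it uses a single receiver-clock column, whereas a multi-GNSS design matrix would carry one clock column per constellation):

```python
import numpy as np

def pdop(los_unit_vectors):
    """PDOP from receiver-to-satellite unit line-of-sight vectors
    (one row per tracked satellite). With G = [u | 1], PDOP is the
    square root of the trace of the 3x3 position block of the
    cofactor matrix (G^T G)^-1."""
    u = np.asarray(los_unit_vectors, dtype=float)
    G = np.hstack([u, np.ones((u.shape[0], 1))])
    Q = np.linalg.inv(G.T @ G)
    return float(np.sqrt(np.trace(Q[:3, :3])))
```

A wide spread of satellites yields a small PDOP, while satellites clustered in a narrow cone of sky, as happens when buildings block the low elevations on both sides of the road, drive PDOP up sharply.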

GNSS partly-blocked experiment analysis

Unlike standalone PPP, the PPP/INS integrated system produces high-frequency navigation output and attitude information thanks to the INS. To further overcome INS divergence in GNSS-constrained areas, we extend the fusion with LiDAR measurements. The tightly coupled multi-GNSS PPP/INS solution is used as the baseline for comparison with the GIL solution.

The position errors in the three directions over time are displayed in Fig. 7, and the corresponding RMSEs and maximum errors are summarized in Table 4. The position differences in all three directions improve noticeably with the aid of LiDAR data. The position RMSEs are 0.81, 0.76, and 0.82 m for the PPP/INS method in the east, north, and up components, versus 0.30, 0.37, and 0.31 m for the GIL method. Additionally, the maximum positioning errors are reduced from 7.01, 3.58, and 7.55 m in the east, north, and up directions to 1.39, 0.79, and 1.00 m, improvements of 80.2%, 77.9%, and 86.8%, respectively. Moreover, regions with GNSS signal blockage are usually rich in scene features, which is conducive to LiDAR registration; the LiDAR measurements can thus effectively help the system infer the vehicle movement in environments with no GNSS signals.

Fig. 7

Position offsets of multi-GNSS PPP/INS and GIL methods in GNSS partly-blocked experiment

Table 4 RMSEs and maximum error of position for multi-GNSS PPP/INS and GIL solutions in GNSS partly-blocked experiment

To further analyze the absolute positioning performance of our method, we calculated the distribution of position errors over different intervals, as plotted in Fig. 8. According to the statistics, the percentage of positioning differences within 0.5 m is 19.5% for the PPP/INS solution, which increases to 41.9% with LiDAR aiding. Moreover, the GIL positioning accuracy is within 1.0 m over 98% of the time in the field test. The results indicate that adding LiDAR constrains the divergence of the IMU errors and contributes to accurate positioning in urban areas. These improvements may benefit applications that require continuous and precise dynamic positioning.

Fig. 8

Distribution of absolute position offsets of the multi-GNSS PPP/INS and GIL methods in GNSS partly-blocked experiment
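The statistics reported throughout this section (per-component RMSE, maximum error, and the fraction of epochs within an error threshold) follow the usual definitions. A small sketch, with our own function names:

```python
import numpy as np

def component_stats(err_enu):
    """Per-component RMSE and maximum absolute error.
    err_enu: (N, 3) array of east/north/up position errors in meters."""
    err = np.asarray(err_enu, dtype=float)
    rmse = np.sqrt(np.mean(err ** 2, axis=0))
    max_abs = np.abs(err).max(axis=0)
    return rmse, max_abs

def fraction_within(err_enu, threshold_m):
    """Fraction of epochs whose 3D position error norm is below threshold."""
    norms = np.linalg.norm(np.asarray(err_enu, dtype=float), axis=1)
    return float(np.mean(norms < threshold_m))
```

The same two routines apply unchanged to the velocity series, with errors in m/s instead of meters.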

For vehicle navigation, accurate velocity is another crucial factor. Figure 9 shows the velocity differences of the two methods. For the PPP/INS solution, velocity offsets above 0.5 m/s occur mainly during periods of low GNSS availability. In contrast, after the inclusion of LiDAR data, the velocity offsets stay within 0.1 m/s most of the time. The statistics in Table 5 show that the velocity RMSEs are 0.06, 0.03, and 0.04 m/s for the GIL method in the east, north, and up directions, improvements of 60.0%, 50.0%, and 60.0% over the INS-aided PPP. Similarly, the maximum errors drop from 0.66, 1.08, and 1.10 m/s in the east, north, and up components to 0.23, 0.28, and 0.22 m/s, respectively. Note that velocity is propagated by the INS mechanization, whose performance depends mainly on the IMU. Here, LiDAR observations effectively mitigate INS drift, especially for the MEMS-IMU, and thus contribute greatly to the velocity estimation.

Fig. 9

Velocity offsets of multi-GNSS PPP/INS and GIL methods in GNSS partly-blocked experiment

Table 5 RMSEs and maximum error of velocity for multi-GNSS PPP/INS and GIL solutions in GNSS partly-blocked experiment

Similar to position and velocity, attitude is another vital quantity for vehicle applications. However, when GNSS performance degrades, attitude accuracy also suffers, especially in the yaw component. The attitude differences of the two solutions over time are depicted in Fig. 10, and the corresponding attitude RMSEs are shown in Table 6. In pitch and roll, the two methods show comparable performance, whereas the achievable yaw accuracy is much worse in both. This is because the yaw angle is unobservable in INS or LiDAR/INS systems (Hesch et al., 2013). Specifically, the yaw RMSE is 3.97 degrees for the PPP/INS solution and 1.45 degrees for the GIL method, an improvement of 63.5%. In general, LiDAR aiding helps estimate the attitude-correlated parameters more accurately.

Fig. 10

Attitude offsets of multi-GNSS PPP/INS and GIL methods in GNSS partly-blocked experiment

Table 6 RMSEs of attitude for multi-GNSS PPP/INS and GIL solutions in GNSS partly-blocked experiment

PPP solution in GNSS difficult experiment

To verify the generality of the GIL method, we conducted another onboard vehicle experiment. As illustrated in Fig. 11, the conditions were more challenging than in the previous test, with dense, tall houses and trees beside the road. The track is approximately 1700 m long, and the vehicle traveled for about 7 min. Severe signal blockages caused frequent re-convergence and an insufficient number of observed satellites, resulting in a poor and frequently discontinuous PPP solution. As Fig. 11 shows, even with multi-GNSS observations the PPP solution can hardly maintain meter-level positioning accuracy.

Fig. 11

Field test trajectory. Left: top view of the trajectory on Google Earth; right: positioning result of PPP when using multi-GNSS data in the GNSS difficult experiment

Figure 12 shows the satellite availability over time. Compared with the previous experiment, severe interruptions occur even for satellites at high elevation. The number of available satellites is 50% lower than in the previous test and fluctuates more. The average number of GPS satellites is 2.23, while the corresponding value for the G + C combination is 4.49, which further increases to 7.74 for G + C + E. We also notice that the average PDOP degrades from 3.54 in the previous test to 6.99, and the corresponding Standard Deviation (SD) increases from 3.40 to 5.95. The GNSS observation conditions in this experiment are clearly much worse, which leads to the poor PPP result.

Fig. 12

Number of available GNSS satellites (a), continuity and elevation of satellites (b), PDOP of multi-GNSS PPP (c) in GNSS difficult experiment

GNSS difficult experiment analysis

The position differences of the two methods with respect to the reference solution are shown in Fig. 13, and the corresponding distribution of position offsets is shown in Fig. 14. The absolute RMSE of the PPP/INS positioning in this experiment reaches 4.56 m, about three times the value in the previous field test. Moreover, position errors of more than 10 m occur in both the vertical and horizontal directions.

Fig. 13

Position offsets of multi-GNSS PPP/INS and GIL methods in GNSS difficult experiment

Fig. 14

Distribution of absolute position offsets of the multi-GNSS PPP/INS and GIL methods in GNSS difficult experiment

As shown in Fig. 13, the results improve significantly with the inclusion of LiDAR measurements. Table 7 summarizes the RMSEs and maximum errors of the PPP/INS and GIL methods. Compared with the PPP/INS solution, the GIL method performs much better in positioning: the position RMSEs reduce from 2.82, 1.97, and 2.99 m to 1.41, 1.25, and 1.70 m in the east, north, and up directions, respectively, and the percentage of position errors larger than 4 m decreases by 30.0% with LiDAR aiding. Overall, the positioning performance is significantly enhanced when LiDAR data are added, which means the proposed fusion method can improve positioning even in harsh environments. This improvement is mainly due to the constraints between adjacent epochs provided by the LiDAR measurements, and the improved position accuracy can benefit position-critical tasks.

Table 7 RMSEs and maximum error of position for PPP/INS and GIL solutions in GNSS difficult experiment

The GIL method also outperforms in velocity estimation; the statistics are summarized in Table 8. According to the velocity offsets in Fig. 15, the velocity error stays within 0.3 m/s during most periods when using the GIL method. Specifically, the velocity RMSEs are 0.85, 1.09, and 0.93 m/s for the PPP/INS method in the east, north, and up directions, which improve by 42.4%, 56.9%, and 53.8% after integrating LiDAR. These improvements stem mainly from the geometric constraints established by the LiDAR measurements. As Fig. 15 shows, obvious offsets occur in the PPP/INS velocity estimates. Referring to Fig. 12, these periods usually coincide with insufficient satellite observations, and the estimates deteriorate over time when GNSS cannot provide effective constraints for the PPP/INS integration. For the GIL method, the additional LiDAR observations deliver accurate velocity output, which helps maintain a high-accuracy velocity solution even when few satellites are available.

Table 8 RMSEs and maximum error of velocity for PPP/INS and GIL solutions in GNSS difficult experiment
Fig. 15

Velocity offsets of multi-GNSS PPP/INS and GIL methods in GNSS difficult experiment

The time series of the attitude errors is presented in Fig. 16, and the related RMSEs and maximum errors are listed in Table 9. As before, the roll and pitch estimates show higher accuracy, which may be related to the better observability of the INS in these two directions. For the heading component, the RMSE of the GIL method improves significantly, by 68.9%, compared to the PPP/INS method. Benefits are also visible in the roll and pitch angles, with improvements of 45.5% and 41.2%, respectively.

Fig. 16

Attitude offsets of multi-GNSS PPP/INS and GIL methods in GNSS difficult experiment

Table 9 RMSEs of attitude for PPP/INS and GIL solutions in GNSS difficult experiment


Conclusions

In recent years, the demand for stable, continuous, and accurate navigation has become prominent in autonomous driving applications. In this contribution, we proposed a tightly coupled multi-GNSS PPP/INS/LiDAR method to improve the navigation performance in heavily urbanized areas. In an EKF driven by the INS mechanization, the LiDAR and PPP observations construct the corresponding measurement updates to restrict the rapid error growth of the INS, and an efficient sliding-window strategy handles the LiDAR data processing. Several field experiments in typical urban areas were conducted to validate the model, and the performance of the GIL method was assessed in terms of position, velocity, and attitude.

The statistics demonstrate that, compared to the GPS-only constellation, about three times more satellites are available when the three constellations of GPS, BDS, and Galileo are used. However, the observation quality is still limited by GNSS signal interruptions. The experimental results also illustrate that the GIL method maintains more accurate and stable navigation than the traditional tightly coupled PPP/INS solution. In GNSS partly-blocked environments, the position RMSEs in the east, north, and vertical directions are dramatically reduced, from 0.81, 0.76, and 0.82 m to 0.30, 0.37, and 0.31 m, compared with the PPP/INS solution. Under more difficult GNSS conditions, the GIL method also shows excellent positioning performance, with improvements of 50.0%, 36.5%, and 43.1% in the east, north, and vertical components, respectively. The maximum positioning error is also significantly reduced with the GIL method. In the GNSS partly-blocked experiment, the velocity RMSEs of the PPP/INS solution are 0.15, 0.06, and 0.10 m/s in the east, north, and up directions, which the addition of LiDAR reduces to 0.06, 0.03, and 0.04 m/s. The attitude-related parameters are also estimated more accurately with the integration of multi-GNSS and LiDAR data. In both the GNSS partly-blocked and difficult environments, the GIL method performs better in the determination of roll and pitch angles, with average improvements of 37.5% and 25.9%, while significant improvements of 63.5% and 68.9% in the yaw direction are found in the two experiments compared to the multi-GNSS PPP/INS results.

In conclusion, this paper presented a novel tight fusion method that shows great potential in position, velocity, and attitude determination. The field-test results demonstrate that the GIL fusion outperforms the multi-GNSS PPP/INS integration and achieves better accuracy in dynamic vehicle scenarios. Future work will take fuller advantage of the LiDAR measurements: a point cloud map will be established for registration and loop-closure detection, and more effort will be devoted to eliminating the adverse effects of outliers in each sensor's measurements.

Availability of data and materials

The datasets used and analyzed in this study are available from the corresponding author on reasonable request.


References

  • Abd Rabbou, M., & El-Rabbany, A. (2015a). Tightly coupled integration of GPS precise point positioning and MEMS-based inertial systems. GPS Solutions, 19(4), 601–609.

  • Abd Rabbou, M., & El-Rabbany, A. (2015b). Integration of multi-constellation GNSS precise point positioning and MEMS-based inertial systems using tightly coupled mechanization. Positioning, 6(04), 81.

  • Afifi, A., & El-Rabbany, A. (2015). An innovative dual frequency PPP model for combined GPS/Galileo observations. Journal of Applied Geodesy, 9(1), 27–34.

  • Analog Devices Corporation (2019). ADIS16470 IMU Data Sheet. Accessed 15 June 2021.

  • Böhm, J., Niell, A., Tregoning, P., & Schuh, H. (2006). Global Mapping Function (GMF): A new empirical mapping function based on numerical weather model data. Geophysical Research Letters, 33(7).

  • Chiang, K. W., Tsai, G. J., Chu, H. J., & El-Sheimy, N. (2020). Performance enhancement of INS/GNSS/Refreshed-SLAM integration for acceptable lane-level navigation accuracy. IEEE Transactions on Vehicular Technology, 69(3), 2463–2476.

  • Cui, B., Wei, X., Chen, X., Li, J., & Li, L. (2019). On sigma-point update of cubature Kalman filter for GNSS/INS under GNSS-challenged environment. IEEE Transactions on Vehicular Technology, 68(9), 8671–8682.

  • De Berg, M., Van Kreveld, M., Overmars, M., & Schwarzkopf, O. (2008). Computational geometry: Algorithms and applications (3rd ed., pp. 1–17). Springer.

  • Fang, R., Shi, C., Wei, N., & Zhao, Q. (2009). Real-time cycle-slip detection for quality control of GPS measurements. Geomatics and Information Science of Wuhan University, 34(9), 1094–1097.

  • Gao, Z., Zhang, H., Ge, M., Niu, X., Shen, W., Wickert, J., & Schuh, H. (2017). Tightly coupled integration of multi-GNSS PPP and MEMS inertial measurement unit data. GPS Solutions, 21(2), 377–391.

  • Groves, P. D., Wang, L., Walter, D., Martin, H., Voutsis, K., & Jiang, Z. (2014). The four key challenges of advanced multisensor navigation and positioning. In Proceedings of IEEE/ION PLANS 2014 (pp. 773–792).

  • Guo, J., Li, X., Li, Z., Hu, L., Yang, G., Zhao, C., & Ge, M. (2018). Multi-GNSS precise point positioning for precision agriculture. Precision Agriculture, 19(5), 895–911.

  • Han, S., & Wang, J. (2010). A novel initial alignment scheme for low-cost INS aided by GPS for land vehicle applications. The Journal of Navigation, 63(4), 663–680.

  • Hesch, J. A., Kottas, D. G., Bowman, S. L., & Roumeliotis, S. I. (2013). Consistency analysis and improvement of vision-aided inertial navigation. IEEE Transactions on Robotics, 30(1), 158–176.

  • Hess, W., Kohler, D., Rapp, H., & Andor, D. (2016, May). Real-time loop closure in 2D LIDAR SLAM. In 2016 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1271–1278). IEEE.

  • Koide, K., Miura, J., & Menegatti, E. (2019). A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement. International Journal of Advanced Robotic Systems, 16(2), 1729881419841532.

  • Kouba, J. (2013). A guide to using international GNSS service (IGS) products. In Ocean Surface Topography Science Team (OSTST) Meeting in Boulder, CO.

  • Li, X., Ge, M., Dai, X., Ren, X., Fritsche, M., Wickert, J., & Schuh, H. (2015). Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo. Journal of Geodesy, 89(6), 607–635.

  • Li, X., Li, X., Yuan, Y., Zhang, K., Zhang, X., & Wickert, J. (2018). Multi-GNSS phase delay estimation and PPP ambiguity resolution: GPS, BDS, GLONASS, Galileo. Journal of Geodesy, 92(6), 579–608.

  • Li, X., Wang, X., Liao, J., Li, X., Li, S., & Lyu, H. (2021). Semi-tightly coupled integration of multi-GNSS PPP and S-VINS for precise positioning in GNSS-challenged environments. Satellite Navigation, 2(1), 1–14.

  • Lv, J., Xu, J., Hu, K., Liu, Y., & Zuo, X. (2020). Targetless calibration of lidar-imu system based on continuous-time batch estimation. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 9968–9975). IEEE.

  • Mourikis, A. I., & Roumeliotis, S. I. (2007, April). A multi-state constraint Kalman filter for vision-aided inertial navigation. In Proceedings 2007 IEEE International Conference on Robotics and Automation (pp. 3565–3572). IEEE.

  • NovAtel Corporation. (2015). NovAtel GPS-702-GG Product Sheets. Accessed 15 June 2021.

  • NovAtel Corporation. (2018). Inertial Explorer 8.70 User Manual. Accessed 15 June 2021.

  • Qin, C., Ye, H., Pranata, C. E., Han, J., Zhang, S., & Liu, M. (2020, May). Lins: A lidar-inertial state estimator for robust and efficient navigation. In 2020 IEEE International Conference on Robotics and Automation (ICRA) (pp. 8899–8906). IEEE.

  • Roesler, G., & Martell, H. (2009). Tightly coupled processing of Precise Point Positioning (PPP) and INS data. In Proceedings of ION GNSS 2009 (pp. 1898–1905). Institute of Navigation.

  • Septentrio Corporation. (2019). Septentrio PolaRx5 Product datasheet, Accessed 15 June 2021.

  • Shan, T., Englot, B., Meyers, D., Wang, W., Ratti, C., & Rus, D. (2020, October). Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5135–5142). IEEE.

  • Shin, E. H. (2005). Estimation techniques for low-cost inertial navigation. UCGE report, 20219.

  • Shin, E. H., & Scherzinger, B. (2009, September). Inertially aided precise point positioning. In Proceedings of the 22nd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS 2009) (pp. 1892–1897).

  • Soloviev, A. (2008, May). Tight coupling of GPS, laser scanner, and inertial measurements for navigation in urban environments. In 2008 IEEE/ION Position, Location and Navigation Symposium (pp. 511–525). IEEE.

  • StarNeto Corporation. (2014). StarNeto XW-GI7660 User Manual, Accessed 15 June 2021.

  • Sun, K., Mohta, K., Pfrommer, B., Watterson, M., Liu, S., Mulgaonkar, Y., & Kumar, V. (2018). Robust stereo visual inertial odometry for fast autonomous flight. IEEE Robotics and Automation Letters, 3(2), 965–972.

  • Tang, X., Roberts, G. W., Li, X., & Hancock, C. M. (2017). Real-time kinematic PPP GPS for structure monitoring applied on the Severn Suspension Bridge. UK. Advances in Space Research, 60(5), 925–937.

  • Velodyne Corporation. (2021). VLP-16 User Manual. Accessed 15 June 2021.

  • Wright, T. J., Houlié, N., Hildyard, M., & Iwabuchi, T. (2012). Real-time, reliable magnitudes for large earthquakes from 1 Hz GPS precise point positioning: The 2011 Tohoku-Oki (Japan) earthquake. Geophysical Research Letters, 39(12).

  • Ye, H., Chen, Y., & Liu, M. (2019). Tightly coupled 3d lidar inertial odometry and mapping. 2019 International Conference on Robotics and Automation (ICRA).

  • Zhang, X., & Li, X. (2012). Instantaneous re-initialization in real-time kinematic ppp with cycle slip fixing. GPS Solutions, 16(3), 315–327.

    Article  Google Scholar 

  • Zhang, J., & Singh, S. (2014, July). LOAM: Lidar Odometry and Mapping in Real-time. In Robotics: Science and Systems (Vol. 2, No. 9).

  • Zhang, J., & Singh, S. (2017). Low-drift and real-time lidar odometry and mapping. Autonomous Robots, 41(2), 401–416.

    Article  Google Scholar 

  • Zhen, W., Zeng, S., & Soberer, S. (2017, May). Robust localization and localizability estimation with a rotating laser scanner. In 2017 IEEE International Conference on Robotics and Automation (ICRA) (pp. 6240–6245). IEEE.

  • Zumberge, J. F., Heflin, M. B., Jefferson, D. C., Watkins, M. M., & Webb, F. H. (1997). Precise point positioning for the efficient and robust analysis of GPS data from large networks. Journal of Geophysical Research: Solid Earth, 102(B3), 5005–5017.

    Article  Google Scholar 

  • Zuo, X., Geneva, P., Lee, W., Liu, Y., & Huang, G. (2019). Lic-fusion: Lidar-inertial-camera odometry. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5848–5854). IEEE.

Download references


Acknowledgements

The numerical calculations in this paper were performed on the supercomputing system at the Supercomputing Center of Wuhan University.


Funding

This work was supported by the National Natural Science Foundation of China (Grants 41774030, 41974027, and 41974029), the Hubei Province Natural Science Foundation of China (Grant 2018CFA081), the basic application frontier project of the Wuhan Science and Technology Bureau (Grant 2019010701011395), and the Sino-German mobility programme (Grant No. M-0054).

Author information

Contributions

XL and HW conceived the idea and wrote the paper; SL and SF assisted in data acquisition; XW and JL helped with the writing and revision of the article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xingxing Li.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Li, X., Wang, H., Li, S. et al. GIL: a tightly coupled GNSS PPP/INS/LiDAR method for precise vehicle navigation. Satell Navig 2, 26 (2021).
