
A practical method utilizing multi-spectral LiDAR to aid points cloud matching in SLAM

Abstract

Light Detection and Ranging (LiDAR) sensors are popular in Simultaneous Localization and Mapping (SLAM) owing to their capability of actively obtaining ranging information. Researchers have attempted to use the intensity information that accompanies each range measurement to enhance LiDAR SLAM positioning accuracy. However, before LiDAR intensities can be employed in SLAM, a calibration operation is usually carried out so that the intensity becomes independent of the incident angle and range. The range is determined from the laser beam transmission time; therefore, the key to using LiDAR intensities in SLAM is to obtain the incident angle between the laser beam and the target surface. In a complex environment, it is difficult to obtain the incident angle robustly. This procedure also complicates the data processing in SLAM and, as a result, hampers further application of LiDAR intensity in SLAM. Motivated by this problem, in the present study we propose a Hyperspectral LiDAR (HSL) based, intensity-calibration-free method to aid point cloud matching in SLAM. The HSL employed in this study obtains eight-channel range measurements accompanied by corresponding intensities. Owing to the design of the laser, the eight-channel ranges and intensities are collected with the same incident angle and range. According to the laser beam radiation model, the ratio between the intensities of any two channels at an identical target is independent of the range and incident angle. To test the proposed method, the HSL was employed to scan a wall with differently coloured papers pasted on it (white, red, yellow, pink, and green) at four distinct positions along a corridor (with an interval of 60 cm between consecutive positions). Then, a ratio value vector was constructed for each scan, and the ratio value vectors of consecutive laser scans were employed to match the point clouds. A classic Iterative Closest Point (ICP) algorithm was employed to estimate the HSL motion using the range information from the matched point clouds. According to the test results, we found that the pink and green papers were distinctive at 650, 690, and 720 nm, and a ratio value vector was constructed using the 650-nm spectral information against the reference channel. Compared with the classic ICP using range information only, the proposed method that matches ratio value vectors presented an improved performance in heading angle estimation. For the best case in the field test, the proposed method enhanced the heading angle estimation by 72%, and it showed an average improvement of 25.5% in a spatially featureless testing environment. The results of this preliminary test indicate that the proposed method has the potential to aid point cloud matching in typical SLAM applications in real scenarios.

Introduction

LiDAR sensors are active devices that obtain range information; they have been extensively employed in SLAM applications (Qian et al. 2017; Chen et al. 2018a, b, c; Tang et al. 2015). LiDAR sensors have also become popular in autonomous driving applications that utilize SLAM-based technology to offer a robust positioning solution when Global Navigation Satellite System (GNSS) navigation signals are degraded or not available (Kallasi et al. 2016; El-Sheimy and Youssef 2020). Range information is obtained by measuring the time of flight between the emitted pulse and the reflected laser echoes from targets. The intensity information that accompanies the range information refers to the power strength of the reflected laser echoes (Guivant et al. 2000; Yoshitaka et al. 2006). However, compared with visual sensors, traditional monochromatic LiDAR sensors are unable to acquire abundant textures of targets because they operate at a single spectral wavelength. Researchers attempted to leverage the one-channel intensity information to enhance LiDAR SLAM positioning accuracy (Wolcott and Eustice 2015; Barfoot et al. 2016; Hewitt and Marshall 2015). However, we consider that there are two inherent shortcomings in this method:

  1. The intensity is obtained from only a single spectral wavelength, which is insufficient for feature extraction and for classifying certain targets. In typical camera-based solutions, RGB images provide three channels of information in the visible spectrum; additional spectral information or channels are preferable for this application.

  2. LiDAR intensity values are determined by many extrinsic factors, including the range, laser incident angle, and material reflectivity (Khan et al. 2016; Singh and Nagla 2018; Jeong and Kim 2018; Engelhard et al. 2011). The incident angle of the laser beam has to be calculated from the slope of the target surface or line. Although some line extraction methods have been proposed, it is still difficult to obtain these parameters robustly, and the line slope calculation significantly complicates the data processing.

Despite these problems, efficient and robust use of LiDAR intensity in SLAM is important to the research community. The key issue is to find a robust technique that renders the LiDAR intensity immune to the range and incident angle. In this study, an eight-channel tuneable hyperspectral LiDAR (HSL) with spectral wavelengths ranging from 650 to 1000 nm (650, 690, 720, 760, 800, 850, 905, and 1000 nm, labelled Channel 1–Channel 8, respectively) was designed, and the instrument was employed to generate point clouds accompanied by eight-channel spectral intensity information (Kaasalainen et al. 2007, 2016; Chen et al. 2010; Hakala et al. 2012).

In the HSL, a supercontinuum (SC) laser beam is emitted at regular intervals to obtain point clouds. This guarantees that the laser pulses of all employed channels are reflected by the same target surface. Thus, the ratio between the intensities of two different spectral channels is independent of the range and incident angle: the ratio values are determined only by the power of the emitted laser beam and the target surface reflectivity at the specific wavelengths. Since the power density of the laser source is stable, features related to the target surface reflectivity can be extracted from the intensity measurements. Compared with the complicated intensity calibration required for single-wavelength LiDAR, the ratio method is practical and straightforward. Overall, the contributions of this study are summarized as follows:

  1. More spectral features or textures of the targets are obtained simultaneously with the ranging information. The HSL was the first sensor to collect this information actively and simultaneously without any data registration or coordinate transformation issues. Compared with an RGB-D sensor, data registration between the RGB camera and the depth camera is avoided; moreover, the HSL is insensitive to environmental illumination, unlike an RGB camera.

  2. With the spectral ratio value vectors, features are extracted robustly according to the target surface reflectivity. Since the spectral reflectivity of a target is determined by its surface material, the similarity of the spectral ratio value vectors can determine whether points belong to identical materials. No complicated calibration procedure is needed to process the LiDAR intensity in the proposed method for SLAM.

The remainder of this paper is organized as follows: “System and method” section illustrates the design of the hyperspectral LiDAR, introduces the spectral feature extraction method in detail, and presents the motion estimation procedure. “Field tests and result analysis” section discusses the field test results and analysis. Finally, a conclusion is drawn.

System and method

This section first introduces traditional SLAM, and then the HSL system and the proposed model. “Traditional SLAM” section presents traditional SLAM using a single-wavelength LiDAR as the range sensor. The HSL system design and implementation are described in “Hyperspectral LiDAR” section. “Intensity model” section describes the intensity model for single-wavelength LiDAR, considering parameters such as range, incident angle, and material reflectivity. The proposed spectral ratio method and its calculation are given in detail in “LiDAR intensity calibration free method” section. “Motion estimation” section describes the motion estimation procedure with ratio value vector matching.

Traditional SLAM

SLAM combines positioning and mapping in a single framework. The problem is to build a map of an unknown environment by traversing it with range sensors (laser, sonar, etc.) mounted on a platform: spatial features extracted from two or more consecutive frames of range measurements are matched with various algorithms to obtain the movement of the platform, while the location on the map is determined simultaneously. Figure 1 presents a typical structured indoor map generated by a backpack SLAM system employing a Velodyne VLP-16 laser scanner and an Xsens Micro-electromechanical System (MEMS) Inertial Measurement Unit (IMU). The red points form the point cloud collected by the backpack SLAM system carried by a tester without loop closure correction. The blue points form the reference point cloud collected by a Leica P40 laser scanner operating in terrestrial laser scanning mode. It can be observed that the SLAM-generated map (red) coincides with the reference data. However, SLAM performance deteriorates in featureless environments, where the matching errors significantly increase and the drift errors of the position and heading angle estimates accumulate over time.
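
For readers unfamiliar with the scan-matching step described above, the following minimal sketch (our illustration, not the implementation used by any of the cited systems) shows a classic point-to-point ICP in 2D using only NumPy; the function names icp_2d and best_rigid_transform are ours.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form 2D rigid transform (rotation R, translation t) that maps
    matched points src -> dst in a least-squares sense (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                      # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp_2d(src, dst, n_iter=20):
    """Classic point-to-point ICP: nearest-neighbour association followed by
    the closed-form alignment, repeated n_iter times."""
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(n_iter):
        # nearest neighbour in dst for every point of the current scan
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    heading = np.degrees(np.arctan2(R_total[1, 0], R_total[0, 0]))
    return R_total, t_total, heading
```

With overlapping scans and a reasonable initial pose, the recovered rotation and translation between consecutive scans give the incremental motion of the platform.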

Fig. 1 Typical structured indoor map generated by a backpack SLAM system

Hyperspectral LiDAR

Figure 2 presents the employed components for the HSL. First, an SC laser source is used as the “white” laser source with a spectral band from 450 to 2400 nm (the spectral power intensity of the SC laser source is presented in Fig. 3) (Kaasalainen et al. 2007, 2016; Chen et al. 2010, 2019; Hakala et al. 2012; Li et al. 2019; Jiang et al. 2019). Second, an Acousto-Optic Tuneable Filter (AOTF) is installed after the SC laser source. The filter enables a continuous spectral wavelength selection with a filtering resolution of 2–10 nm in the time domain (Chen et al. 2019; Li et al. 2019; Jiang et al. 2019).

Fig. 2 Schematic design of HSL

Fig. 3 Power density of SC laser source and the selected eight wavelengths for this research

Then, a collimator is employed to collimate the laser beam before a reflector mirror directs it to the target. A Cassegrain optical system is utilized to collect the backscattered laser echoes, and an Avalanche Photodiode (APD) sensor module with an integrated amplifier detects the reflected echoes from the target. All waveforms are sampled and recorded by a connected high-speed oscilloscope, and the range information and spectral intensity are acquired by processing the raw waveforms. The sampling frequency is 5 GHz, which corresponds to a 3-cm ranging resolution. This approximates the resolution of commonly used laser scanners, for example, the HOKUYO® UTM-30LX. With the rotation of the device, the HSL can yield point clouds of the environment.
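
As a quick check of the stated figure, the single-sample range resolution follows from the two-way travel of the pulse at the 5-GHz sampling rate:

$$\Delta R = \frac{c}{2f_{\text{s}}} = \frac{3 \times 10^{8}\,{\text{m/s}}}{2 \times 5 \times 10^{9}\,{\text{Hz}}} = 0.03\,{\text{m}} = 3\,{\text{cm}}$$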

Intensity model

In this paper, LiDAR intensity refers to the received power of the returned pulse from the targets. Some corresponding models have been published (Kallasi et al. 2016; El-Sheimy and Youssef 2020; Guivant et al. 2000; Yoshitaka et al. 2006; Wolcott and Eustice 2015; Barfoot et al. 2016; Hewitt and Marshall 2015). According to these previous investigations, LiDAR intensity is always influenced by certain intrinsic and extrinsic parameters. Specifically, the intrinsic parameters include the power of the emitted laser beam and atmospheric attenuation; the extrinsic parameters include the reflectivity of the target, transmitting range, and incident angle. Considering these factors, a common intensity model is given as Eq. (1):

$$P_{\text{R}} = \frac{{\uppi P_{\text{E}} \rho \cos \left( \theta \right)}}{{4R^{2} }}\eta_{\text{stm}} \eta_{\text{sys}}$$
(1)

where \(P_{\text{R}}\) refers to the received signal power, \(P_{\text{E}}\) is the emitted signal power, \(\rho\) is the reflectance of the target, and \(\theta\) is the incident angle between the target surface normal vector and the laser beam incident on the target. \(R\) is the range between the target and the LiDAR. \(\eta_{{\text{stm}}}\) and \(\eta_{\text{sys}}\) describe the systematic and atmospheric factors, respectively.

In this model, it is assumed that \(P_{\text{E}}\) is unchanged and that the targets are Lambertian reflectors, so that the major portion of the backscattered signal strength returns along the incident beam direction.
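
As an illustration only, Eq. (1) can be evaluated numerically as follows; the function name and the example parameter values are ours and are not taken from the paper.

```python
import numpy as np

def received_power(p_e, rho, theta_deg, r, eta_stm=1.0, eta_sys=1.0):
    """Received power P_R of Eq. (1): proportional to the emitted power and the
    target reflectance, scaled by cos(incident angle) and the inverse squared range."""
    theta = np.radians(theta_deg)
    return np.pi * p_e * rho * np.cos(theta) / (4.0 * r ** 2) * eta_stm * eta_sys

# Example: the same target seen at two ranges/incident angles returns different
# raw intensities even though its reflectance rho is unchanged.
print(received_power(p_e=1.0, rho=0.5, theta_deg=10.0, r=2.0))
print(received_power(p_e=1.0, rho=0.5, theta_deg=40.0, r=3.5))
```

The example shows why the raw intensity of the same material differs across viewpoints, which is exactly what the calibration, or the ratio method below, must remove.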

LiDAR intensity calibration free method

Assuming that there are N spectral channels available for this HSL configuration (considering the data storage and practical hardware investment, eight channels are selected in this research), the received power at wavelength \(\lambda_{i}\) can be written as Eq. (2) according to the intensity model of Eq. (1):

$$P_{{\lambda_{i} }}^{\text{R}} = \frac{{\uppi P_{{\lambda_{i} }}^{\text{E}} \rho_{{\lambda_{i} }} \cos \left( \theta \right)}}{{4R^{2} }}\eta_{{\lambda_{i} }}^{\text{stm}} \eta_{{\lambda_{i} }}^{\text{sys}}$$
(2)

where \(P_{{\lambda_{i} }}^{\text{R}}\) is the received signal power, \(P_{{\lambda_{i} }}^{\text{E}}\) is the emitted signal power, and \(\rho_{{\lambda_{i} }}\) is the reflectance of the target at wavelength \(\lambda_{i}\). \(\theta\) refers to the incident angle between the target surface normal and the laser beam, as in Eq. (1). \(\eta_{{\lambda_{i} }}^{\text{stm}}\) and \(\eta_{{\lambda_{i} }}^{\text{sys}}\) describe the systematic and atmospheric factors at \(\lambda_{i}\), respectively. \(R\) refers to the range between the target and the LiDAR.

Because all spectral channels of the HSL share the same incident angle \(\theta\) and range \(R\), a ratio variable is defined using one channel as the reference (the reference channel can be selected arbitrarily). The new model is written as Eq. (3).

$${\text{ratio}}_{{\lambda_{i} }} = \frac{{P_{{\lambda_{i} }}^{\text{R}} }}{{P_{\text{ref}}^{\text{R}} }} = \frac{{P_{{\lambda_{i} }}^{\text{E}} }}{{P_{\text{ref}}^{\text{E}} }}\frac{{\rho_{{\lambda_{i} }} }}{{\rho_{\text{ref}} }}\frac{{\eta_{{\lambda_{i} }}^{\text{stm}} \eta_{{\lambda_{i} }}^{\text{sys}} }}{{\eta_{\text{ref}}^{\text{stm}} \eta_{\text{ref}}^{\text{sys}} }}$$
(3)

where \({\text{ratio}}_{{\lambda_{i} }}\) is the defined ratio for the channel with wavelength \(\lambda_{i}\); \(P_{\text{ref}}^{\text{R}}\) is the received signal power of the selected reference spectral channel; \(\eta_{{\text{ref}}}^{{\text{stm}}}\) and \(\eta_{{\text{ref}}}^{{\text{sys}}}\) describe the systematic and atmospheric factors of the reference channel, respectively; \(\rho_{{\lambda_{i} }}\) is the reflectance of the target for spectral wavelength \(\lambda_{i}\); \(P_{{\lambda_{i} }}^{\text{R}}\) is the received signal power of the \(\lambda_{i}\) spectral channel; and \(\eta_{{\lambda_{i} }}^{{\text{stm}}}\), \(\eta_{{\lambda_{i} }}^{{\text{sys}}}\) describe the systematic and atmospheric factors for the \(\lambda_{i}\) spectral channel, respectively.

According to Eq. (3), the ratio value is determined by three major elements: the emitted power of the laser beam, the reflectivity of different materials at the distinct spectral wavelengths, and the systematic and atmospheric factors. Figure 3 shows the power density for the spectral wavelengths ranging from 450 to 2400 nm; the emitted signal strength varies with the spectral wavelength. However, the laser source has an almost fixed power density curve, and the emitted power strength is stable. Therefore, \(\frac{{P_{{\lambda_{i} }}^{\text{E}} }}{{P_{{\text{ref}}}^{\text{E}} }}\) is a constant value, denoted \(k_{{\lambda_{i} }}\), which is associated with the specific spectral wavelength.

With regard to the systematic and atmospheric factors, \(\frac{{\eta_{{\lambda_{i} }}^{{\text{stm}}} \eta_{{\lambda_{i} }}^{{\text{sys}}} }}{{\eta_{{\text{ref}}}^{{\text{stm}}} \eta_{{\text{ref}}}^{{\text{sys}}} }}\) can also be regarded as a constant value \(\eta_{{\lambda_{i} }}\), which differs only slightly between spectral wavelengths. Hence, Eq. (3) can be simplified as

$${\text{ratio}}_{{\lambda_{i} }} = k_{{\lambda_{i} }} \frac{{\rho_{{\lambda_{i} }} }}{{\rho_{\text{ref}} }}\eta_{{\lambda_{i} }}$$
(4)

In Eq. (4), \(\rho_{{\lambda_{i} }}\) and \(\rho_{\text{ref}}\) are determined by the laser beam wavelength and the target material, whereas \(k_{{\lambda_{i} }}\) and \(\eta_{{\lambda_{i} }}\) are associated only with the specific spectral wavelength. Thus, for a laser scan containing \(M\) points, the following spectral ratio vector \(\varvec{O}_{{\lambda_{i} }}\) at wavelength \(\lambda_{i}\) can be obtained.

$$\varvec{O}_{{\lambda_{i} }} = \left[ {{\text{ratio}}_{{\lambda_{i} }}^{1} ,{\text{ratio}}_{{\lambda_{i} }}^{2} , \ldots ,{\text{ratio}}_{{\lambda_{i} }}^{M} } \right]$$
(5)

Obviously, the ratio vector can be employed for target classification without LiDAR intensity calibration.
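
A minimal sketch of the ratio-vector construction of Eqs. (3)–(5), assuming the eight-channel intensities of one scan are stored row-wise in an M × 8 array; the function name and the default reference channel are our choices, not part of the paper.

```python
import numpy as np

def spectral_ratio_vectors(intensities, ref_channel=4):
    """Eqs. (3)-(5): divide every channel by the reference channel, per point.
    intensities: array of shape (M, 8), one row of eight spectral intensities
    per scanned point. Index 4 selects Channel 5 (800 nm), the reference used
    later in the field test. Range and incident angle cancel in the ratio
    because all channels share them for a given point."""
    intensities = np.asarray(intensities, dtype=float)
    ref = intensities[:, ref_channel:ref_channel + 1]   # shape (M, 1)
    return intensities / ref

# O_{lambda_1}: ratio vector of Channel 1 (650 nm) over all M points of a scan
# ratios = spectral_ratio_vectors(scan_intensities)
# o_650 = ratios[:, 0]
```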

Motion estimation

In the proposed hyperspectral LiDAR simultaneous localization and mapping (HSL-SLAM), after two consecutive laser scans are obtained, the motion estimation proceeds in the following three steps, which are also presented in Fig. 4 and sketched in code after the figure.

  1. The defined ratio values are calculated from the collected multichannel spectral information according to Eqs. (3) and (4).

  2. The spectral ratio vector constructed in Eq. (5) is applied to match the consecutive scanning data.

  3. A scan-matching algorithm, a classic ICP algorithm in this case study, is employed to estimate the motion and heading angle from the matched range information of Step 2.

Fig. 4 Motion estimation procedure

Field tests and result analysis

After illustrating the HSL design and the proposed calibration-free method, this section aims to demonstrate the feasibility of the method for indoor SLAM through a field test. The section is divided into four parts: (1) “Experimental setup” section introduces the experimental setup, including the HSL settings, experiment scenario description, and operations; (2) “Spectral ratio vector results” section presents the details of the spectral ratio vector values at the selected 650-nm wavelength; (3) “Motion estimation” section presents the motion estimation process and results; and (4) the last subsection discusses the results.

Experimental setup

The field test scenario is presented in Fig. 5. An HSL hardware prototype was constructed and employed to scan a corridor environment. A step motor is utilized to rotate the laser beam horizontally with a half-degree angular resolution. The laser source is capable of emitting a laser beam covering a spectral range from the Visible (VIS) to the Shortwave Infrared (SWIR) band. Eight spectral channels are adopted in this research, with centre wavelengths at 650 nm (Channel 1), 690 nm (Channel 2), 720 nm (Channel 3), 760 nm (Channel 4), 800 nm (Channel 5), 850 nm (Channel 6), 905 nm (Channel 7), and 1000 nm (Channel 8), each with a 5-nm spectral bandwidth.

Fig. 5 HSL hardware prototype and data collection

As illustrated in Fig. 5, a field test was carried out in a corridor with few spatial features, and point clouds with spectral data were collected from four consecutive positions along the corridor with a 60-cm displacement between them. At each position, the HSL collected 24 points horizontally per scan, stepping with an angular resolution of 0.5°. Each point cloud therefore forms a horizontal scan line spanning 11.5° across the coloured paper targets pasted on flat window glass, as Fig. 6 shows. The four positions are named Site 1 to Site 4, ordered from the farthest to the closest to the end of the corridor. During the test, the HSL was moved along the corridor from Site 1 to Site 4 to simulate SLAM operation, while the heading direction was kept constant.

Fig. 6 Colourful papers pasted on scan line

Figure 7 illustrates the scans for Site 1, Site 2, Site 3, and Site 4; the points contained in each scan are named Point 1–Point 24 clockwise. Since all of the papers were pasted on flat glass, few spatial features can be observed in these line scans. Traditional SLAM algorithms cannot extract meaningful spatial features here, and it is difficult to locate the carrier robustly in such a spatially featureless environment. However, the abundant spectral features contained in the HSL measurements can be extracted to provide a more robust positioning solution.

Fig. 7 Laser scanning point clouds for Site #01, Site #02, Site #03, and Site #04 (24 points for each scan)

Spectral ratio vector results

To demonstrate the feasibility of the proposed LiDAR intensity calibration-free method, point clouds acquired from the green paper were adopted as an example and processed with the method. Figure 8a presents the amplitudes of the raw spectral signals of Point 19 and Point 24 from the green paper at Site 1, which were extracted from the raw waveforms. Figure 8b illustrates the ratio values of the LiDAR-derived intensities of these two points. Specifically, in this paper, the intensity refers to the amplitude of the reflected laser echo. The ratio values were calculated by employing the 800-nm wavelength’s spectral intensity as the reference. The calculation is shown in Eq. (6):

$${\text{ratio}}_{{\lambda_{i} }}^{j} = \frac{{r_{{\lambda_{i} }}^{j} }}{{r_{{\lambda_{{\text{ref}}} }}^{j} }};\quad j = 1, \ldots ,24;\;\lambda_{i} = \lambda_{1} , \ldots ,\lambda_{8} ;\;\lambda_{\text{ref}} = 800\,{\text{nm}}$$
(6)

where \(j\) is the points’ index, \(\, \lambda_{i}\) is the spectral channel index, and \(\lambda_{\text{ref}}\) is the selected reference Channel 5 with an 800-nm wavelength.

Fig. 8 Pink and green paper ratio values compared at Site #01

From Fig. 8a, it is difficult to determine whether the two measurements were collected from the same object based merely on the uncalibrated intensity values. Meanwhile, as demonstrated in Fig. 8b, the two measurements are almost identical after being processed by the proposed method. Consequently, for the same target, the spectral ratio vectors are nearly identical even when the signals are collected at distinct angles and ranges.

As presented in Fig. 6, the pink paper was placed next to the green paper. Figure 8c shows the pink and green paper ratio values, which differ at the wavelengths of 650 nm, 690 nm, and 720 nm. Furthermore, from the eight channel-wise groups of 24 points, the ratios of the echo intensities between Channel #01 (650 nm) and Channel #05 (800 nm) were calculated for each site and are presented in Fig. 9. A dramatic decrease in the ratio value curves can be seen between Point #18 and Point #19 at Site #01, Point #19 and Point #20 at Site #02, Point #19 and Point #20 at Site #03, and Point #20 and Point #21 at Site #04, respectively. These drops occur at the border between the pink and green papers and are marked in Fig. 10. The pink and green papers were thus distinctive in their ratio values, and the presented ratio vector could be employed for matching consecutive laser scans.
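
Such a border can also be located automatically from the ratio curve; the sketch below (our illustration, with a hypothetical minimum drop of 0.2) flags the point pairs where the Channel #01/Channel #05 ratio falls sharply.

```python
import numpy as np

def find_ratio_drop(ratio_650_over_800, min_drop=0.2):
    """Return the 1-based index pairs (i, i+1) where the ratio curve falls by
    more than min_drop, e.g. at the border between the pink and green papers.
    ratio_650_over_800: 1-D array of the 24 ratio values of one scan."""
    r = np.asarray(ratio_650_over_800, dtype=float)
    drops = r[:-1] - r[1:]                     # positive value = decrease
    idx = np.nonzero(drops > min_drop)[0]
    return [(i + 1, i + 2) for i in idx]       # 1-based numbering as in Fig. 9

# Example with synthetic values: a drop between Point #18 and Point #19
# find_ratio_drop(np.r_[np.full(18, 0.9), np.full(6, 0.4)])  ->  [(18, 19)]
```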

Fig. 9 Spectral ratio vector for Site #01, Site #02, Site #03, and Site #04 (Channel #01/Channel #05)

Fig. 10 Marked spectral features with wavelength settings

Limited by the spectral wavelength settings in this paper, only the pink and green papers could be classified; the other papers did not present differences at the selected wavelength of Channel #01 (650 nm) as significant as that between the green and pink papers. The optimal spectral channel configuration for indoor SLAM should therefore be investigated in future work.

Motion estimation

As previously mentioned, motion estimation in this paper was conducted in three steps: spectral feature extraction, matching, and estimation. The spectral feature extraction results were presented in “Spectral ratio vector results” section, and the matching was conducted by aligning the spectral ratio vectors. Figure 9 shows the spectral ratio values of Channel #01 against Channel #05. A rule was set to convert the ratio vector into a binary vector, expressed as follows:

$$b_{i} = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {{\text{if ratio}} > \delta } \hfill \\ {0,} \hfill & {{\text{if ratio}} < \delta } \hfill \\ \end{array} } \right.;\quad \left( {i = 1, \ldots ,24} \right)$$
(7)

where \(b_{i}\) is the converted binary value, and \(\delta\) is a threshold that was empirically set to 0.3 in this research.

By matching the converted binary vectors, the point clouds of consecutive laser scans were matched. The matched point clouds were then applied in an ICP algorithm for motion estimation (Rusinkiewicz and Levoy 2001), with the number of iterations set to 20. Table 1 lists the positioning and heading errors of the classic ICP. Specifically, the positioning errors stayed within 0.10 m, and the heading errors were all below 0.5°. The motion estimation was conducted in the local coordinates (illustrated in Fig. 5) with Site 1 as the origin. Table 2 lists the motion results estimated using ratio vector matching combined with the classic ICP.
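
The sketch below illustrates this matching step under our reading of Eq. (7): the ratio curve is binarized with δ = 0.3 and, assuming the two 24-point scans are roughly index-aligned, the points whose binary labels agree are kept as correspondences for the 20-iteration ICP (the icp_2d sketch in the “Traditional SLAM” section would serve as the estimator). The variable names are hypothetical.

```python
import numpy as np

def binarize_ratio(ratio, delta=0.3):
    """Eq. (7): convert a ratio curve (e.g. Channel #01/Channel #05 over the
    24 points of one scan) into a binary vector, b_i = 1 where ratio > delta."""
    return (np.asarray(ratio, dtype=float) > delta).astype(int)

def match_binary_vectors(b_prev, b_curr):
    """Keep the point indices whose binary spectral labels agree in the two
    consecutive scans; these matched points then feed the ICP estimation."""
    b_prev, b_curr = np.asarray(b_prev), np.asarray(b_curr)
    return np.nonzero(b_prev == b_curr)[0]

# Hypothetical usage, with scan_site1/scan_site2 holding the 2-D coordinates of
# the 24 points and ratio_site1/ratio_site2 the corresponding ratio curves:
# matched = match_binary_vectors(binarize_ratio(ratio_site1), binarize_ratio(ratio_site2))
# src, dst = scan_site1[matched], scan_site2[matched]
# R, t, heading = icp_2d(src, dst, n_iter=20)   # sketched in "Traditional SLAM"
```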

Table 1 Positioning and heading errors (ICP)
Table 2 Positioning and heading errors using spectral features (ICP + feature matching)

Discussion

Comparing the results in Tables 1 and 2, the traditional ICP and the ICP aided by spectral features exhibited similar positioning errors, but the proposed method improved the heading estimation by applying the ratio vector matching operation before ICP. In particular, for the heading estimation at Site #03, the two methods showed similar results because the spectral feature points offered identical feature indices for matching at Site #02 and Site #03; thus, the ratio value vector matching did not influence the results at Site #03.

The heading results estimated using spectral features for scan matching are more accurate than those of the classic ICP. In the classic ICP case, the incoming laser scan offers no spatial feature changes for heading estimation since it scans a flat surface, and the LiDAR range measurement noise is the only stochastic noise source affecting the heading estimate. However, when the spectral feature information (Fig. 9) is integrated with the range measurements, the average heading estimation error decreases from 0.110° to 0.082°. For the best case, at Site #02, the proposed method enhances the heading estimation by 72%. However, the positioning error is not efficiently mitigated. The explanation is straightforward: the point-to-point matching strategy of ICP can already detect the movement between consecutive scans, which is 60 cm along the corridor on the X-axis of the local coordinates, as Fig. 5 shows; thus, the enhancement introduced by the spectral information is limited. Because ICP only partly matches consecutive laser scans, the heading estimation drifts quickly, and the accumulated errors cause the position accuracy to decrease with time. Therefore, the proposed method can supply a more robust SLAM solution by using the spectral features inherent to the targets.
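
This corresponds to the average improvement quoted in the abstract:

$$\frac{0.110^{\circ} - 0.082^{\circ}}{0.110^{\circ}} \times 100\% \approx 25.5\%$$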

SLAM performance is poor in spatially featureless environments, where the matching errors increase significantly owing to the lack of spatial features to match. From the test cases, we can draw a preliminary conclusion: spectral-feature-aided SLAM can enhance indoor positioning, especially heading estimation, by utilizing the spectral information collected by the HSL.

Nevertheless, with the current spectral channel configuration, the HSL cannot discriminate between every pair of neighbouring papers. The major reason is that most of the selected wavelengths lie in the near-infrared band, in which the spectral profiles of the different papers are similar. The spectral channel selection should be optimized according to the spectral properties of the targets; for example, channels from 450 to 650 nm should be considered even though the SC source has a weak power density in that spectral range.

Conclusion

This paper presented a new method utilizing LiDAR intensities to aid in point cloud matching. An eight-channel HSL ranging from 650 to 1000 nm was selected, and the objects’ spectral profiles were collected. A ratio value was defined based on multispectral information to exclude the influence of the range and incident angle on the LiDAR intensities. A field test was conducted to demonstrate the effectiveness of the proposed method. According to the results, we arrive at the following conclusions:

  1. The HSL was able to collect the spectral information of the targets, and the defined spectral ratio vector can help classify various objects, which was significant for spectral feature searching.

  2. The ratio vector matching was effective for improving motion estimation. When conducting motion estimation aided by ratio vector matching, although the position errors had minor differences, the heading estimation had improved accuracy.

This study presents preliminary results on the ability of the HSL to assist in point cloud matching, obtained in a very limited scenario. Future work is as follows:

  1. A feasible and optimal channel selection for the HSL will be carried out. Improved channel selection and setting are essential for point cloud classification and ratio vector matching.

  2. Complex indoor datasets will be collected to evaluate the proposed method in detail; we are firmly convinced that HSL has great potential in SLAM. Furthermore, a new advanced method will be designed to process point clouds with abundant spectral information.

Declarations

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

References

  • Barfoot, T. D., McManus, C., Anderson, S., Dong, H., Beerepoot, E., Tong, C. H., et al. (2016). Into darkness: Visual navigation based on a lidar-intensity-image pipeline. In M. Inaba & P. Corke (Eds.), Robotics research (pp. 487–504). Cham: Springer.

  • Chen, Y., Jiang, C., Hyyppä, J., Qiu, S., Wang, Z., Tian, M., et al. (2018a). Feasibility study of ore classification using active hyperspectral LiDAR. IEEE Geoscience and Remote Sensing Letters, 15(11), 1785–1789.

  • Chen, Y., Jiang, C., Zhu, L., Kaartinen, H., Hyyppä, J., Tan, J., Hyyppä, H., Zhou, H., Chen, R. Z., & Pei, L. (2018c). SLAM based indoor mapping comparison: mobile or terrestrial?. In IEEE ubiquitous positioning, indoor navigation and location-based services (UPINLBS) (pp. 1–7). Wuhan: IEEE.

  • Chen, Y., Li, W., Hyyppä, J., Wang, N., Jiang, C., Meng, F., et al. (2019). A 10-nm spectral resolution hyperspectral LiDAR system based on an acousto-optic tunable filter. Sensors, 19(7), 1620.

  • Chen, Y., Räikkönen, E., Kaasalainen, S., Suomalainen, J., Hakala, T., Hyyppä, J., et al. (2010). Two-channel hyperspectral LiDAR with a supercontinuum laser source. Sensors, 10(7), 7057–7066.

  • Chen, Y., Tang, J., Jiang, C., Zhu, L., Lehtomäki, M., Kaartinen, H., et al. (2018b). The accuracy comparison of three simultaneous localization and mapping (SLAM)-based indoor mapping technologies. Sensors, 18(10), 3228.

  • El-Sheimy, N., & Youssef, A. (2020). Inertial sensors technologies for navigation applications: state of the art and future trends. Satellite Navigation, 1, 2. https://doi.org/10.1186/s43020-019-0001-5.

  • Engelhard, N., Endres, F., Hess, J., Sturm, J., & Burgard, W. (2011). Real-time 3D visual SLAM with a hand-held RGB-D camera. In Proceedings of the RGB-D workshop on 3D perception in robotics at the European Robotics Forum (pp. 1–15). Vasteras.

  • Guivant, J., Nebot, E., & Baiker, S. (2000). Localization and map building using laser range sensors in outdoor applications. Journal of Robotic Systems, 17(10), 565–583.

  • Hakala, T., Suomalainen, J., Kaasalainen, S., & Chen, Y. (2012). Full waveform hyperspectral LiDAR for terrestrial laser scanning. Optics Express, 20(7), 7119–7127.

  • Hewitt, R. A., & Marshall, J. A. (2015). Towards intensity-augmented SLAM with LiDAR and ToF sensors. In IEEE/RSJ international conference on intelligent robots and systems (pp. 1956–1961). Hamburg: IEEE.

  • Jeong, J., & Kim, A. (2018). LiDAR intensity calibration for road marking extraction. In Proceedings of the 15th international conference on ubiquitous robots (UR) (pp. 455–460). Honolulu: IEEE.

  • Jiang, C., Chen, Y., Wu, H., Li, W., Zhou, H., Bo, Y., et al. (2019). Study of a high spectral resolution hyperspectral LiDAR in vegetation red edge parameters extraction. Remote Sensing, 11(17), 2007.

  • Kaasalainen, S., Gröhn, S., Nevalainen, O., Hakala, T., & Ruotsalainen, L. (2016). Work in progress: combining indoor positioning and 3D point clouds from multispectral Lidar. In 2016 international conference on indoor positioning and indoor navigation (IPIN). Alcalá de Henares, Spain: IEEE.

  • Kaasalainen, S., Lindroos, T., & Hyyppä, J. (2007). Toward hyperspectral lidar: Measurement of spectral backscatter intensity with a supercontinuum laser source. IEEE Geoscience and Remote Sensing Letters, 4(2), 211–215.

  • Kallasi, F., Rizzini, D. L., & Caselli, S. (2016). Fast keypoint features from laser scanner for robot localization and mapping. IEEE Robotics and Automation Letters, 1(1), 176–183.

  • Khan, S., Wollherr, D., & Buss, M. (2016). Modeling laser intensities for simultaneous localization and mapping. IEEE Robotics and Automation Letters, 1(2), 692–699.

  • Li, W., Jiang, C., Chen, Y., Hyyppä, J., Tang, L., Li, C., et al. (2019). A liquid crystal tunable filter-based hyperspectral LiDAR system and its application on vegetation red edge detection. IEEE Geoscience and Remote Sensing Letters, 16(2), 291–295.

  • Qian, C., Liu, H., Tang, J., Chen, Y., Kaartinen, H., Kukko, A., et al. (2017). An integrated GNSS/INS/LiDAR-SLAM positioning method for highly accurate forest stem mapping. Remote Sensing, 9(1), 3.

  • Rusinkiewicz, S., & Levoy, M. (2001). Efficient variants of the ICP algorithm. International conference on 3-D digital imaging and modelling (pp. 141–152). Quebec City: IEEE.

  • Singh, R., & Nagla, K. S. (2018). Improved 2D laser grid mapping by solving mirror reflection uncertainty in SLAM. International Journal of Intelligent Unmanned Systems, 6(2), 93–114.

  • Tang, J., Chen, Y., Niu, X., Wang, L., Chen, L., Liu, J., et al. (2015). LiDAR scan matching aided inertial navigation system in GNSS-denied environments. Sensors, 15(7), 16710–16728.

  • Wolcott, R. W., & Eustice, R. M. (2015). Fast LIDAR localization using multiresolution Gaussian mixture maps. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 2814–2821). Seattle: IEEE.

  • Yoshitaka, H., Hirohiko, K., Akihisa, O., & Shin’ichi, Y. (2006). Mobile robot localization and mapping by scan matching using laser reflection intensity of the SOKUIKI sensor. In Proceedings of the 32nd annual conference on IEEE industrial electronics (pp. 3018–3023). Paris, France: IEEE.

Acknowledgements

The author (Changhui Jiang) gratefully acknowledges the financial support of the China Scholarship Council (CSC, 201706840087). This research was financially supported by the Academy of Finland projects “Centre of Excellence in Laser Scanning Research (CoE-LaSR) (307362)” and “New Laser and Spectral Field Methods for In Situ Mining and Raw Material Investigations (project 292648).” Additionally, the Chinese Academy of Science (181811KYSB20160113), via the Chinese Ministry of Science and Technology (2015DFA70930) and Shanghai Science and Technology Foundations (18590712600), is acknowledged. The author thanks the Chinese Scholarship Council for covering living expenses in Finland (201706840087). Finally, the author appreciates the guidance received from his supervisor Professor Yuwei Chen and the paper review conducted by Professor Yuming Bo.

Funding

This research was financially supported by Academy of Finland projects “Centre of Excellence in Laser Scanning Research (CoE-LaSR) (307362)” and Strategic Research Council project “Competence-Based Growth Through Integrated Disruptive Technologies of 3D Digitalization, Robotics, Geospatial Information and Image Processing/Computing—Point Cloud Ecosystem (314312)”. Additionally, Chinese Academy of Science (181811KYSB20160113, XDA22030202), Beijing Municipal Science and Technology Commission (Z181100001018036), Shanghai Science and Technology Foundations (18590712600) and Jihua lab (X190211TE190) are acknowledged.

Author information

Contributions

CJ and YC proposed the idea and wrote the paper; WT and WL carried out the experiment; CZ assisted in conducting the experiment; HS and EP discussed the idea and reviewed the paper; and JH reviewed the paper and guided the writing. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Yuwei Chen.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Jiang, C., Chen, Y., Tian, W. et al. A practical method utilizing multi-spectral LiDAR to aid points cloud matching in SLAM. Satell Navig 1, 29 (2020). https://doi.org/10.1186/s43020-020-00029-5
