According to the positioning architecture proposed in Sect. Cluster positioning architecture and model, cluster positioning can be divided into relative positioning and absolute positioning. Absolute positioning builds on relative positioning and is mainly completed by the general leader node. The core cluster completes its relative positioning autonomously and then serves as a set of anchor nodes in the positioning of the follower cluster. The key to cluster positioning is therefore the relative positioning algorithm of the core cluster. To this end, this paper proposes a positioning algorithm that exploits spatial–temporal correlation information and solves the resulting problem with an MDS + MOPSO algorithm.
Core cluster localization algorithm
Taking a core cluster composed of 3 nodes as an example, the algorithm flow, constraint relationships, objective functions, and the number of unknowns of the entire relative positioning process are summarized as follows. As shown in Fig. 3, based on the cluster observation model, we first build the core cluster positioning equations, which comprise six objective functions. To reduce the difficulty of the solution, we use the MDS algorithm to lower the dimensionality: the number of unknown parameters is reduced from 6 (number of nodes × 2) to 1 (the rotation angle), shrinking the search space, and at the same time the number of objective functions is reduced from 6 to 3. We use the reduced-dimension objective functions as the fitness functions of the MOPSO algorithm to solve for the rotation angle, from which the rotation matrix is constructed. Finally, the relative positioning results are obtained by coordinate transformation using the rotation matrix. The localization algorithm is described in detail below.
Constructing the cluster positioning equation
Based on the cluster observation model, we can give a set of equations for calculating the positions of the core cluster consisting of three nodes. By combining the distance observations of the current and previous moments, the equations are obtained as follows:
$$ \begin{gathered} d_{AB} \left( t \right) = \left| {P_{A} \left( t \right) - P_{B} \left( t \right)} \right| \hfill \\ d_{AC} \left( t \right) = \left| {P_{A} \left( t \right) - P_{C} \left( t \right)} \right| \hfill \\ d_{BC} \left( t \right) = \left| {P_{B} \left( t \right) - P_{C} \left( t \right)} \right| \hfill \\ d_{AB} \left( {t - 1} \right) = \left| {P_{A} \left( {t - 1} \right) - P_{B} \left( {t - 1} \right)} \right| \hfill \\ d_{AC} \left( {t - 1} \right) = \left| {P_{A} \left( {t - 1} \right) - P_{C} \left( {t - 1} \right)} \right| \hfill \\ d_{BC} \left( {t - 1} \right) = \left| {P_{B} \left( {t - 1} \right) - P_{C} \left( {t - 1} \right)} \right| \hfill \\ \end{gathered} $$
(3)
Since \({P}_{N}\left(t-1\right)\) can be calculated from \({P}_{N}\left(t\right)\) and \({ins}_{N}(t)\), the equations can be transformed into:
$$ \begin{gathered} d_{AB} \left( t \right) = \left| {P_{A} \left( t \right) - P_{B} \left( t \right)} \right| \hfill \\ d_{AC} \left( t \right) = \left| {P_{A} \left( t \right) - P_{C} \left( t \right)} \right| \hfill \\ d_{BC} \left( t \right) = \left| {P_{B} \left( t \right) - P_{C} \left( t \right)} \right| \hfill \\ d_{AB} \left( {t - 1} \right) = \left| {\left( {P_{A} \left( t \right) - ins_{A} \left( t \right)} \right) - \left( {P_{B} \left( t \right) - ins_{B} \left( t \right)} \right)} \right| \hfill \\ d_{AC} \left( {t - 1} \right) = \left| {\left( {P_{A} \left( t \right) - ins_{A} \left( t \right)} \right) - \left( {P_{C} \left( t \right) - ins_{C} \left( t \right)} \right)} \right| \hfill \\ d_{BC} \left( {t - 1} \right) = \left| {\left( {P_{B} \left( t \right) - ins_{B} \left( t \right)} \right) - \left( {P_{C} \left( t \right) - ins_{C} \left( t \right)} \right)} \right| \hfill \\ \end{gathered} $$
(4)
In the equations constructed from the motion vectors and ranging information, the unknown variables are \({P}_{A}\left(t\right),{P}_{B}\left(t\right),{P}_{C}(t)\). Because the analysis is performed in two-dimensional space, there are 6 unknowns, and it is difficult to converge to the global optimal solution in such a high-dimensional solution space. Therefore, we introduce multidimensional scaling (MDS) for dimensionality reduction, which makes it easier for the positioning result to converge to the optimal solution.
Dimensionality reduction by MDS
The essence of MDS is to map the similarity measures of several analysis objects from a high-dimensional space of unknown dimension to a lower-dimensional space, fitting the similarities between them in the lower-dimensional space (Niu et al., 2010; Yi & Ruml, 2004). In unmanned cluster positioning, this corresponds to mapping the Euclidean distances between nodes from the ranging measurement space to the two-dimensional coordinate space, thereby obtaining the relative coordinates of each node (Chen et al., 2013).
Firstly, the distance matrix D between nodes is constructed from the ranging information between nodes at time \(t\), where \({d}_{AB}(t)\) is abbreviated as \({d}_{AB}\):
$${\varvec{D}}= \left[\begin{array}{ccc}0& {d}_{AB}& {d}_{AC}\\ {d}_{BA}& 0& {d}_{BC}\\ {d}_{CA}& {d}_{CB}& 0\end{array}\right]$$
(5)
$${{\varvec{D}}}^{2}= \left[\begin{array}{ccc}0& {d}_{AB}^{2}& {d}_{AC}^{2}\\ {d}_{BA}^{2}& 0& {d}_{BC}^{2}\\ {d}_{CA}^{2}& {d}_{CB}^{2}& 0\end{array}\right]$$
(6)
Let the coordinates of node \(N\) be \({P}_{N}(t)=({x}_{N}\left(t\right),{y}_{N}(t))\), abbreviated as \({P}_{N}=({x}_{N},{y}_{N})\); then \({d}_{AB}^{2}={x}_{A}^{2}+{y}_{A}^{2}+{x}_{B}^{2}+{y}_{B}^{2}-2{x}_{A}{x}_{B}-2{y}_{A}{y}_{B}\). Letting \({I}_{N}^{2}= {x}_{N}^{2}+{y}_{N}^{2}\), the matrix \({\varvec{R}}\) can be constructed as:
$${\varvec{R}}= \left[\begin{array}{ccc}{I}_{A}^{2}& {I}_{A}^{2}& {I}_{A}^{2}\\ {I}_{B}^{2}& {I}_{B}^{2}& {I}_{B}^{2}\\ {I}_{C}^{2}& {I}_{C}^{2}& {I}_{C}^{2}\end{array}\right]$$
(7)
The coordinate matrix \({\varvec{X}}\) of the nodes is constructed as:
$${\varvec{X}}= \left[\begin{array}{ccc}{x}_{A}& {x}_{B}& {x}_{C}\\ {y}_{A}& {y}_{B}& {y}_{C}\end{array}\right]$$
(8)
Then \({{\varvec{D}}}^{2}={\varvec{R}}+{{\varvec{R}}}^{T}-2{{\varvec{X}}}^{T}{\varvec{X}}\).
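The identity above can be checked numerically. The following NumPy sketch (with arbitrary example coordinates, chosen here purely for illustration) verifies it for three nodes:

```python
import numpy as np

# Numerical check of the identity D^2 = R + R^T - 2 X^T X for three nodes.
X = np.array([[0.0, 3.0, 0.0],    # x_A, x_B, x_C
              [0.0, 0.0, 4.0]])   # y_A, y_B, y_C

# Squared Euclidean distance matrix between the node coordinate columns
D2 = np.sum((X.T[:, None, :] - X.T[None, :, :]) ** 2, axis=-1)

I2 = np.sum(X ** 2, axis=0)               # I_N^2 = x_N^2 + y_N^2
R = np.tile(I2[:, None], (1, 3))          # matrix R of Eq. (7): constant rows

print(np.allclose(D2, R + R.T - 2 * X.T @ X))
```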
Applying double centering to \({{\varvec{D}}}^{2}\) yields a positive semi-definite symmetric matrix \({\varvec{B}}\) (Borg & Groenen, 2005):
$${\varvec{B}}= -\frac{1}{2}{\varvec{J}}{{\varvec{D}}}^{2}{\varvec{J}}$$
(9)
$${\varvec{J}}={\varvec{E}}-{n}^{-1}{\varvec{I}}= \left[\begin{array}{ccc}1-\frac{1}{n}& -\frac{1}{n}& -\frac{1}{n}\\ -\frac{1}{n}& 1-\frac{1}{n}& -\frac{1}{n}\\ -\frac{1}{n}& -\frac{1}{n}& 1-\frac{1}{n}\end{array}\right]$$
(10)
where \(n=3\), \({\varvec{E}}\) is the identity matrix, and \({\varvec{I}}\) is the all-ones matrix.
$${\varvec{B}}=-\frac{1}{2}{\varvec{J}}({\varvec{R}}+{{\varvec{R}}}^{T}-2{{\varvec{X}}}^{T}{\varvec{X}}){\varvec{J}}$$
(11)
Because \({\varvec{R}}{\varvec{J}}=0\) and \({\varvec{J}}{{\varvec{R}}}^{T}=0\), we have
$${\varvec{B}}={\varvec{J}}{{\varvec{X}}}^{T}{\varvec{X}}{\varvec{J}}={\varvec{V}}{\varvec{U}}{{\varvec{V}}}^{T}={\varvec{V}}\sqrt{{\varvec{U}}}{\left({\varvec{V}}\sqrt{{\varvec{U}}}\right)}^{T}$$
(12)
Performing eigenvalue decomposition on matrix \({\varvec{B}}\), and since the target space is two-dimensional, we retain the two largest eigenvalues \({\lambda }_{1},{\lambda }_{2}\) and their corresponding eigenvectors \({q}_{1},{q}_{2}\) to calculate the 2D coordinates of the nodes. The centered relative coordinates of the nodes are given by \({\varvec{J}}{{\varvec{X}}}^{T}={\varvec{V}}\sqrt{{\varvec{U}}}\), where \({\varvec{U}}=diag({\lambda }_{1},{\lambda }_{2})\) and \({\varvec{V}}=[{q}_{1},{q}_{2}]\).
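The classical MDS step described above can be sketched in a few lines of NumPy. This is a minimal illustration assuming a complete, noise-free distance matrix; the function name `classical_mds` is chosen here for illustration only:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover relative node coordinates from a pairwise distance matrix
    via classical MDS: double centering followed by eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix J, Eq. (10)
    B = -0.5 * J @ (D ** 2) @ J               # double-centered matrix B, Eq. (9)
    eigvals, eigvecs = np.linalg.eigh(B)      # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:dim]     # keep the two largest eigenvalues
    L = np.sqrt(np.maximum(eigvals[idx], 0))  # sqrt(U), clipped for numerics
    return eigvecs[:, idx] * L                # rows are node coordinates (n x dim)

# Example: three nodes at known positions. The recovered coordinates match
# the true ones only up to rotation/reflection and translation, but all
# pairwise distances are preserved.
P = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
P_rec = classical_mds(D)
D_rec = np.linalg.norm(P_rec[:, None, :] - P_rec[None, :, :], axis=-1)
print(np.allclose(D, D_rec))
```

The ambiguity up to a rigid transformation is exactly why the remaining unknown in the paper's formulation is a single rotation angle.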
The coordinates obtained after MDS only represent the distance relationships between nodes; their coordinate system changes with each location calculation, has no actual physical meaning, and differs from the NED coordinate system by a rotation angle. Therefore, the original problem of solving 6 unknowns is transformed into solving for one rotation angle, and the dimension of the solution space is significantly reduced.
The coordinates obtained by MDS are denoted \({P}_{N}{^{\prime}}(t)=({x}_{N}{^{\prime}}\left(t\right),{y}_{N}{^{\prime}}(t))\), abbreviated as \({P}_{N}{^{\prime}}=({x}_{N}{^{\prime}},{y}_{N}{^{\prime}})\), and their relationship with the target \({P}_{N}\left(t\right)\) can be expressed as follows.
$$\left[\begin{array}{c}{x}_{N}\\ {y}_{N}\end{array}\right]=\left[\begin{array}{cc}\cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{array}\right]\left[\begin{array}{c}{x}_{N}{^{\prime}}\\ {y}_{N}{^{\prime}}\end{array}\right]$$
(13)
where \(\alpha \) is the rotation angle to be solved, and the objective functions are rewritten as:
$$ \begin{gathered} f_{1} = d_{AB} \left( {t - 1} \right) = \left| {\left( {R_{\alpha } P_{A}^{\prime} \left( t \right) - ins_{A} \left( t \right)} \right) - \left( {R_{\alpha } P_{B}^{\prime} \left( t \right) - ins_{B} \left( t \right)} \right)} \right| \hfill \\ f_{2} = d_{AC} (t - 1) = \left| {\left( {R_{\alpha } P_{A}^{\prime} \left( t \right) - ins_{A} \left( t \right)} \right) - \left( {R_{\alpha } P_{C}^{\prime} \left( t \right) - ins_{C} \left( t \right)} \right)} \right| \hfill \\ f_{3} = d_{BC} (t - 1) = \left| {\left( {R_{\alpha } P_{B}^{\prime} \left( t \right) - ins_{B} \left( t \right)} \right) - \left( {R_{\alpha } P_{C}^{\prime} \left( t \right) - ins_{C} \left( t \right)} \right)} \right| \hfill \\ \end{gathered} $$
(14)
There is only one unknown variable (the rotation angle α) in the objective function.
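As an illustration, the three objective functions of Eq. (14) can be evaluated as residuals between the distances predicted for time \(t-1\) and the measured ones. The helper names (`rotation`, `objectives`) and the dictionary layout below are assumptions made for this sketch, not part of the original formulation:

```python
import numpy as np

def rotation(alpha):
    # Rotation matrix R_alpha of Eq. (13)
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, s], [-s, c]])

def objectives(alpha, P_mds, ins, d_prev):
    """Evaluate residuals of f1, f2, f3 for a candidate rotation angle.

    P_mds : dict of MDS coordinates P'_N(t) per node label
    ins   : dict of INS displacement vectors ins_N(t) per node label
    d_prev: dict of measured inter-node distances at time t-1
    """
    R = rotation(alpha)
    # Previous-epoch positions implied by the candidate angle
    prev = {k: R @ P_mds[k] - ins[k] for k in P_mds}
    residuals = []
    for (i, j) in [("A", "B"), ("A", "C"), ("B", "C")]:
        # |predicted distance at t-1  -  measured d_ij(t-1)|
        residuals.append(abs(np.linalg.norm(prev[i] - prev[j]) - d_prev[(i, j)]))
    return residuals
```

At the true rotation angle (and with noise-free measurements) all three residuals vanish, which is what the MOPSO search exploits.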
Solution using MOPSO
In order to solve this multi-objective problem composed of nonlinear equations, we introduce the MOPSO algorithm.
Particle swarm optimization is an evolutionary algorithm originally inspired by the flocking behavior of birds; swarm intelligence was then used to establish a simplified model (Shi & Eberhart, 1998). It makes the movement of the whole group in the problem-solving space evolve from disorder to order. The advantages of Particle Swarm Optimization (PSO) are that it does not easily fall into local optima, has strong versatility, and can solve complex optimization problems (Marini & Walczak, 2015).
The algorithm randomly distributes a certain number of particles in the feasible region of the problem space, and each particle flies at a certain speed. During the flight, the particle adjusts its own state by combining its current best position and the best position of the population, and then flies to a better area to finally achieve the purpose of searching for the optimal solution (Wei & Li, 2004).
In the single-objective PSO algorithm, since there is only one objective function, the position of the global best particle (\({g}_{best}\)) and the best position of each individual particle (\({p}_{best}\)) can be uniquely determined simply by comparing fitness values. In MOPSO (Reyes-Sierra & Coello, 2006), when selecting \({p}_{best}\), if every objective function value at the new particle position is optimal, it becomes the new personal best; if the two positions cannot be strictly compared, one of them is selected at random. Regarding the selection of \({g}_{best}\), many non-inferior solutions could serve as the global best, so storing these non-inferior solutions and selecting a good one among them is the core problem of multi-objective particle swarm optimization. Coello and Lechuga (2002) proposed a method in which the objective space is divided into hypercubes and each hypercube is assigned a fitness value depending on the number of particles it contains: the more particles, the lower the fitness value. Roulette-wheel selection is then applied to the hypercubes to select one, and finally \({g}_{best}\) is randomly selected from that hypercube.
Meanwhile, MOPSO adopts an external repository (called the Archive) to maintain the diversity of the population; the Archive stores the non-dominated solution set of each iteration (Mostaghim & Teich, 2003). The algorithm flow is as follows.
1) Initialize the particle swarm. Set the population size \(N\), factor parameters, etc., and randomly generate the position \({X}_{i}\) and velocity \({V}_{i}\) of each particle;
2) Divide the target space and calculate the crowding degree according to the number of particles in each grid;
3) Calculate the objective function values of each particle and update its individual optimal position \({p}_{best}\);
4) Calculate the non-dominated solutions of the population and update the Archive set. If the number of non-dominated solutions exceeds the size of the external repository, random deletion is performed according to the degree of congestion;
5) Update the global optimal particle \({g}_{best}\);
6) Update the velocity and position of each particle. The particle velocity and position update equations are as follows (Fallah-Mehdipour et al., 2010):
$$\upsilon (t+1)=\omega \upsilon (t)+{c}_{1}{r}_{1}(p(t)-x(t))+{c}_{2}{r}_{2}(g(t)-x(t))$$
(15)
$$x(t+1)=x(t)+\upsilon (t+1)$$
(16)
Among them, \(\omega \) is the inertia weight; \({c}_{1},{c}_{2}\) are the individual experience coefficient and social experience coefficient, respectively; \({r}_{1},{r}_{2}\) are random numbers in the range [0, 1]; \(p(t)\) and \(g(t)\) are the individual optimal solution and the global optimal solution, respectively.
7) Terminate the program if the termination condition is satisfied, otherwise go to step 3.
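To make the update rules of Eqs. (15) and (16) concrete, the sketch below implements a deliberately simplified, single-objective PSO in one dimension (the rotation angle), assuming the objectives have been scalarized into one fitness value; a full MOPSO would additionally maintain the hypercube-based Archive described above. All function and parameter names are illustrative:

```python
import numpy as np

def pso_minimize(fitness, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Simplified single-objective PSO following Eqs. (15)-(16)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)          # particle positions (angles)
    v = np.zeros(n_particles)                     # particle velocities
    p = x.copy()                                  # personal best positions
    p_val = np.array([fitness(xi) for xi in x])
    g = p[np.argmin(p_val)]                       # global best position
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)   # Eq. (15)
        x = np.clip(x + v, lo, hi)                           # Eq. (16)
        val = np.array([fitness(xi) for xi in x])
        better = val < p_val
        p[better], p_val[better] = x[better], val[better]
        g = p[np.argmin(p_val)]
    return g

# Example: recover a known angle from a smooth surrogate fitness that
# vanishes only at alpha_true (a stand-in for the scalarized residuals).
alpha_true = 0.8
fit = lambda a: (np.cos(a) - np.cos(alpha_true))**2 + (np.sin(a) - np.sin(alpha_true))**2
alpha_hat = pso_minimize(fit, (-np.pi, np.pi))
print(alpha_hat)
```

In the paper's setting the fitness would instead be built from the three residuals of Eq. (14), with \(\alpha\) as the only decision variable.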
Use the objective functions after MDS dimensionality reduction, that is, the equation set (14) finally obtained in the previous subsection, as the fitness functions of MOPSO, where the rotation angle \(\alpha \) is the only unknown to be solved.
Relative coordinates calculation
Finally, the target \({P}_{N}\left(t\right)\) can be obtained from the rotation matrix \({R}_{\alpha }\) constructed with the solved angle and the MDS coordinates \({P}_{N}{^{\prime}}(t)\), using Eq. (13).
So far, we have constructed the relative position relationships within the core cluster. The relative coordinate system is the NED coordinate system with the centroid of the core cluster nodes as the origin.
Summary and discussion
Since the MDS algorithm requires complete ranging information between every pair of nodes to build the distance matrix, this condition is difficult to guarantee when many nodes are involved. In two-dimensional space, the minimum number of nodes in the system is 3, and the number of inter-node ranging links to be maintained is \(C_{n}^{2}=n(n-1)/2\), where \(n\) is the number of nodes.
When the failure of an individual node in the cluster results in the loss or unreliability of ranging information, the common approaches are: 1) discard the observations related to this node; 2) use an algorithm to estimate and recover the lost or damaged measurements, for example, the matrix completion algorithm based on norm regularization (Xiao et al., 2015) or Multidimensional Scaling Map (MDS-MAP) (Shang et al., 2003); 3) increase the weight of reliable nodes in positioning, such as the node reordering and edge reordering algorithms (Hamaoui, 2019). Although approaches 2 and 3 can provide some compensation, they also introduce errors to some extent. Since the positioning result of the core cluster affects the follower cluster, any error in the core cluster should be avoided as far as possible.
Considering the complexity and reliability of engineering implementation, we suggest limiting the number of nodes in the core cluster to about 3–5. More than three nodes provides some redundancy for core cluster positioning: when a node fails to meet the conditions, it and its related observations can be discarded, and the remaining nodes still satisfy the minimum requirement.
Another issue worth discussing is the merging of coordinate systems between multiple core clusters, where coordinate conversion can be performed through common nodes. Traditional methods (Moore et al., 2004) need 2 to 3 common points. Because the coordinate system constructed by each core cluster in this paper is the NED coordinate system, rotation and mirroring need not be considered in coordinate merging, so only one common point is needed to calculate the required translation.
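The one-common-point merging described above amounts to a single vector subtraction; the sketch below uses hypothetical coordinates chosen for illustration:

```python
import numpy as np

def merge_offset(p_common_in_a, p_common_in_b):
    """Translation taking cluster B's frame into cluster A's frame.
    Because both core clusters express coordinates in NED, no rotation
    or mirroring is needed, so one common node fixes the translation."""
    return np.asarray(p_common_in_a) - np.asarray(p_common_in_b)

# A common node at (5, 2) in cluster A's frame and (1, -1) in cluster B's frame:
t = merge_offset([5.0, 2.0], [1.0, -1.0])
# Any point of cluster B can now be mapped into A's frame:
p_b = np.array([2.0, 0.0])
print(p_b + t)  # -> [6. 3.]
```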
Follower cluster localization algorithm
Considering that the number of nodes in the follower cluster is significantly larger than that of the core cluster, a localization algorithm that is insensitive to the number of nodes should be used.
For example, with the help of the robust relative coordinate relationships of the core cluster, the core cluster nodes can serve as anchor nodes that broadcast ranging signals and their own position information. After receiving the position information and relative distances of multiple core cluster nodes, a follower cluster node uses trilateration to calculate its own location. This algorithm is simple, scales well, and is unaffected by the number of nodes to be located.
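A common way to realize trilateration with three or more anchors is the linearized least-squares form sketched below. This is a generic formulation assuming noise-free (or lightly noisy) ranges, not the paper's specific implementation:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares trilateration in 2D.

    Linearizes the range equations |p - a_i|^2 = d_i^2 by subtracting
    the last anchor's equation from the others, giving a linear system
    2 (a_i - a_ref) . p = |a_i|^2 - |a_ref|^2 - d_i^2 + d_ref^2.
    """
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)
    ref, d_ref = anchors[-1], dists[-1]
    A = 2.0 * (anchors[:-1] - ref)
    b = (d_ref**2 - dists[:-1]**2
         + np.sum(anchors[:-1]**2, axis=1) - np.sum(ref**2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

# Example: three anchor nodes (e.g. core cluster positions) and the
# measured ranges from an unlocated follower node at (1, 1).
anchors = [[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]]
p_true = np.array([1.0, 1.0])
d = [np.linalg.norm(p_true - np.array(a)) for a in anchors]
p_est = trilaterate(anchors, d)
print(p_est)
```

With more than three anchors the same least-squares form simply gains rows, which is why the method is insensitive to the number of nodes to be located.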
In addition to the above algorithm, other methods can also be used to locate the nodes in the follower cluster. Given the relative coordinates of the core cluster, positioning the follower cluster is straightforward. Since it is not the focus of this paper, only a brief discussion is given.