
Adaptive cubature Kalman filter based on the variance-covariance components estimation

Abstract

Although the Kalman filter (KF) is widely used in practice, its estimates are optimal only when the system model is linear and the noise statistics of the system are exactly known. These requirements are difficult to satisfy, for example in aided inertial navigation, because of the uncertainty introduced by the inertial instruments and the external environment, and because almost all practical systems are nonlinear. Nonlinear filtering and adaptive filtering therefore need to be considered together. To improve the filtering accuracy, this paper proposes a novel adaptive filter that combines the nonlinear Cubature Kalman Filter (CKF) with Variance-Covariance Components Estimation (VCE): the CKF handles the nonlinearity, while the VCE method estimates the noise covariance matrices of the nonlinear system in real time. Simulation and experimental results show that the proposed adaptive filter based on the CKF yields better state estimates.

Introduction

In modern navigation and data fusion, the Kalman filter (KF) has become one of the most widely used estimation methods because it is simple and effective (Chen 2012; Han & Wang 2012; Feng et al. 2013; Santos 2015). However, the KF still has limitations (Tang et al. 2014; Wang et al. 2010; Wang 2009). For example, it was developed for linear systems, whereas almost all real systems are nonlinear. When the KF is applied to such systems, the nonlinearity may cause large errors or even filter divergence. Nonlinear filtering has therefore become an active area of state estimation research.

In practical engineering, two nonlinear methods, the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), are the most widely used (Gao et al. 2014; Dini et al. 2011; Vaccarella et al. 2013; Masazade et al. 2012). In the EKF, the nonlinear system is linearized with a Taylor series expansion for the covariance propagation, while the state and measurement vectors are predicted with the nonlinear model itself (Gao et al. 2014; Arasaratnam & Haykin 2009). Although this method is used in many nonlinear systems because of its simplicity, its accuracy is limited for strongly nonlinear systems, and the cumbersome Jacobian matrix must be computed, which inevitably increases the computational load. Using the Unscented Transformation (UT), the UKF approximates the mean and covariance of the Gaussian state distribution through the nonlinear function, which effectively avoids local linearization and the Jacobian calculation (Gao et al. 2014; Dini et al. 2011). However, in high-dimensional systems the covariance matrix can easily lose positive definiteness, which leads to filter divergence.

To address these problems, Arasaratnam et al. proposed the Cubature Kalman filter (CKF), based on the cubature transform, which performs the prediction with a set of equally weighted cubature points (Arasaratnam & Haykin 2009; Arasaratnam & Haykin 2010; Zhang et al. 2015). Similar to the UKF, the CKF avoids linearizing the nonlinear system by propagating a point set to predict the state vector and its covariance matrix. However, the CKF has a rigorous theoretical derivation based on the Bayesian framework and the spherical-radial cubature rule, which the UKF lacks. In addition, the cubature point set is obtained by numerical integration and all of its weights are equal and positive, whereas the UKF weights easily become negative in high-dimensional systems, which reduces filtering accuracy and stability. For these reasons the CKF is more widely used in practice to solve nonlinear problems.

In the above filters, the estimates are optimal only when the mathematical model is exactly known and the process and measurement noises are white Gaussian. These conditions are hard to satisfy in practical aided inertial navigation, for example, because of the drift errors of the inertial components, the dynamic errors caused by the carrier's maneuvers, and the uncertainty of the external environment (Wang et al. 2010; Jin et al. 2014; Von Chong & Caballero 2014). The system model and the associated noise statistics are obtained from a priori knowledge and therefore inevitably differ from their true values. An adaptive filter should thus be considered to improve the estimation accuracy.

An adaptive filter can estimate and correct the model parameters and the noise statistics simultaneously, continually reducing the state-estimation errors and thereby effectively improving the estimation accuracy (Hu et al. 2003; Chang & Liu 2015; Mundla et al. 2012).

Various adaptive filters exist at present. One prominent example, the Sage-Husa adaptive filter, can estimate the variance matrix of the process noise or of the observation noise in real time, but only when the other one is known (Mundla et al. 2012). In other words, to estimate the process (or observation) noise variance matrix with the Sage-Husa adaptive filter, the observation (or process) noise variance matrix must already be known; the two variance matrices cannot be estimated simultaneously. This is a limitation when both the process and observation noises are unknown.

By adjusting or restraining the influence of the current observations on the parameter estimation through a fading factor, a fading adaptive filter makes full use of the current observations to improve the filtering accuracy (Chang & Liu 2015). Nevertheless, the optimal fading factor is difficult to obtain and the computational complexity increases accordingly. Furthermore, if the current observation is abnormal, the weights of the reliable a priori information are reduced to a certain extent, and the filtering accuracy is degraded as well.

To estimate the system parameters correctly, the weights of the different observations must be known. For this purpose, Variance-Covariance Components Estimation (VCE) was proposed (Wang et al. 2010; Moghtased-Azar et al. 2014). Various VCE methods are now widely used in statistics and geodesy. One of the best known is the a posteriori Helmert VCE method, in which the weights of the different observation groups are calculated by adaptive iteration. Because the estimated matrices can sometimes become negative during the calculation, several improved methods have been proposed. Wang et al. developed an adaptive Kalman filter (AKF) based on the VCE method and verified its effectiveness with real experiments (Wang et al. 2010; Wang 2009; Wang et al. 2009). Although the VCE-based adaptive filter has many advantages for filter estimation, its application to nonlinear systems has rarely been studied.

Therefore, by combining the CKF with the VCE method, an improved adaptive filter is proposed in this paper. Since the adaptive filter and the nonlinear filter work together, the new method can both estimate the statistical properties of the system noise and be applied to various nonlinear systems, which further improves the accuracy and applicability of the filter. The rest of this paper is organized as follows. The CKF and the VCE method are described in the Basic knowledge section. The Methods section presents the novel adaptive filter based on the CKF. Numerical examples and an experiment, together with detailed analysis, are given in the Results and discussion section, and the Conclusions section closes the paper.

Basic knowledge

Adaptive filter based on the VCE method

The adaptive filter based on the VCE method decomposes the system innovation vector into residual vectors and uses them to estimate the variance components of the process noise and measurement noise vectors in real time (Wang et al. 2010; Wang 2009; Moghtased-Azar et al. 2014). On the basis of the estimated variance components, the weighting matrices of the process and measurement noise vectors, and therefore their influence on the state estimate, can be adjusted accordingly.

Now under the consideration of the standard system model, the state and measurement equations are:

$$ \left\{\begin{aligned}\mathbf{x}(k)&=\boldsymbol{\Phi}\left(k,k-1\right)\mathbf{x}\left(k-1\right)+\boldsymbol{\Gamma}(k)\mathbf{w}(k)\\ \mathbf{z}(k)&=\mathbf{H}(k)\mathbf{x}(k)+\boldsymbol{\Delta}(k)\end{aligned}\right. $$
(1)

where x(k) and z(k) are the state vector and the measurement vector, respectively; Φ(k, k − 1), Γ(k) and H(k) are the state-transition matrix, the coefficient matrix of the process noise vector and the design matrix, respectively; w(k) and Δ(k) denote the process noise vector and the measurement noise vector, respectively. Further, w(k) and Δ(k) are the zero mean Gaussian noises:

$$ \left\{\begin{aligned}\mathbf{w}(k)&\sim N\left(\mathbf{0},\ \mathbf{Q}(k)\right)\\ \boldsymbol{\Delta}(k)&\sim N\left(\mathbf{0},\ \mathbf{R}(k)\right)\end{aligned}\right. $$
(2)

where Q(k) and R(k) are positive definite matrices.

Thus, the two-step update process of the Kalman filter is as follows:

The time update:

$$ \begin{aligned}\widehat{\mathbf{x}}\left(k/k-1\right)&=\boldsymbol{\Phi}\left(k,k-1\right)\widehat{\mathbf{x}}\left(k-1\right)\\ {\mathbf{P}}_{XX}\left(k/k-1\right)&=\boldsymbol{\Phi}\left(k,k-1\right){\mathbf{P}}_{XX}\left(k-1\right){\boldsymbol{\Phi}}^T\left(k,k-1\right)+\boldsymbol{\Gamma}(k)\mathbf{Q}(k){\boldsymbol{\Gamma}}^T(k)\end{aligned} $$
(3)

The measurement update:

$$ \begin{aligned}\widehat{\mathbf{x}}(k)&=\widehat{\mathbf{x}}\left(k/k-1\right)+\mathbf{G}(k)\mathbf{d}(k)\\ {\mathbf{P}}_{XX}(k)&=\left[\mathbf{E}-\mathbf{G}(k)\mathbf{H}(k)\right]{\mathbf{P}}_{XX}\left(k/k-1\right)\end{aligned} $$
(4)

where G(k) is the gain matrix and d(k) is the system innovation vector:

$$ \begin{aligned}\mathbf{G}(k)&={\mathbf{P}}_{XX}\left(k|k-1\right){\mathbf{H}}^T(k){\mathbf{P}}_{dd}^{-1}(k)\\ \mathbf{d}(k)&=\mathbf{z}(k)-\mathbf{H}(k)\widehat{\mathbf{x}}\left(k|k-1\right)\\ {\mathbf{P}}_{dd}(k)&=\mathbf{H}(k){\mathbf{P}}_{XX}\left(k|k-1\right){\mathbf{H}}^T(k)+\mathbf{R}(k)\end{aligned} $$
(5)

When Q(k) and R(k) are exactly known, the estimated state vector is optimal. In practice, however, they are difficult to obtain, which is why the adaptive filter based on the VCE method estimates them in real time.
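For concreteness, the two-step update of Eqs. (3)-(5) can be written as a short NumPy sketch. The function and variable names below (kf_step, Phi, Gamma, and so on) are illustrative choices, not part of the original formulation:

```python
import numpy as np

def kf_step(x_hat, P_xx, z, Phi, Gamma, H, Q, R):
    """One predict/update cycle of the linear Kalman filter, Eqs. (3)-(5).
    All matrices are assumed to be supplied by the user for epoch k."""
    # Time update (prediction), Eq. (3)
    x_pred = Phi @ x_hat
    P_pred = Phi @ P_xx @ Phi.T + Gamma @ Q @ Gamma.T

    # Innovation and its covariance, Eq. (5)
    d = z - H @ x_pred
    P_dd = H @ P_pred @ H.T + R

    # Gain and measurement update, Eqs. (4)-(5)
    G = P_pred @ H.T @ np.linalg.inv(P_dd)
    x_new = x_pred + G @ d
    P_new = (np.eye(len(x_hat)) - G @ H) @ P_pred
    return x_new, P_new, d, P_dd, G, P_pred
```

The returned innovation d, its covariance P_dd, the gain G and the predicted covariance P_pred are exactly the quantities reused by the VCE step described next.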

Three groups of stochastic information are associated with the estimation of the state vector: the observation noise vector Δ(k), the system process noise w(k), and the noise in the predicted state vector \( \widehat{\mathbf{x}}\left( k/ k-1\right) \) introduced by \( \widehat{\mathbf{x}}\left( k-1\right) \) through the propagation of {Δ(1), …, Δ(k − 1)} and {w(1), …, w(k − 1)}. Thus, according to their noise sources, three independent (pseudo-)observation groups can be defined as follows (Wang 2009):

$$ \begin{array}{l}{\mathbf{l}}_x(k)=\boldsymbol{\Phi} \left( k, k-1\right)\widehat{\mathbf{x}}\left( k-1\right)\ \\ {}{\mathbf{l}}_w(k)={\mathbf{w}}_0(k)\\ {}{\mathbf{l}}_z(k)=\mathbf{z}(k)\end{array} $$
(6)

where l_x(k) is the pseudo-observation related to the predicted state vector, l_w(k) is the pseudo-observation related to the system process noise, and l_z(k) is the pseudo-observation related to the observation noise.

Using the residual equations, the system in Eq. (1) can be rewritten as:

$$ \begin{aligned}{\mathbf{v}}_{l_x}(k)&=\widehat{\mathbf{x}}(k)+\boldsymbol{\Gamma}(k)\widehat{\mathbf{w}}\left(k-1\right)-{\mathbf{l}}_x(k)\\ {\mathbf{v}}_{l_w}(k)&=\widehat{\mathbf{w}}\left(k-1\right)-{\mathbf{l}}_w(k)\\ {\mathbf{v}}_{l_z}(k)&=\mathbf{H}(k)\widehat{\mathbf{x}}(k)-{\mathbf{l}}_z(k)\end{aligned} $$
(7)

with their measurement variance matrices as follows:

$$ \begin{array}{l}{\mathbf{P}}_{l_x{l}_x}(k)=\boldsymbol{\Phi} \left( k, k-1\right){\mathbf{P}}_{x x}\left( k-1\right){\boldsymbol{\Phi}}^T\left( k, k-1\right)\\ {}{\mathbf{P}}_{l_w{l}_w}(k)=\mathbf{Q}(k)\\ {}{\mathbf{P}}_{l_z{l}_z}(k)=\mathbf{R}(k)\end{array} $$
(8)

Thus the noise covariance matrices of the system can be estimated once the covariance matrices of the residual vectors are computed.

According to the steps of the Kalman filter, the estimates of the residual vectors can be calculated by Eq. (9):

$$ \left\{\begin{aligned}{\mathbf{v}}_{l_x{l}_x}(k)&={\mathbf{P}}_{l_x{l}_x}(k){\mathbf{P}}_{XX}^{-1}\left(k|k-1\right)\mathbf{G}(k)\mathbf{d}(k)\\ {\mathbf{v}}_{l_w{l}_w}(k)&=\mathbf{Q}\left(k-1\right){\boldsymbol{\Gamma}}^T\left(k-1\right){\mathbf{P}}_{XX}^{-1}\left(k|k-1\right)\mathbf{G}(k)\mathbf{d}(k)\\ {\mathbf{v}}_{l_z{l}_z}(k)&=\left[\mathbf{H}(k)\mathbf{G}(k)-\mathbf{E}\right]\mathbf{d}(k)\end{aligned}\right. $$
(9)

And then the corresponding variance matrices are as below:

$$ \left\{\begin{aligned}{\mathbf{P}}_{v_{l_x{l}_x}}(k)&=\boldsymbol{\Phi}\left(k-1\right){\mathbf{P}}_{XX}\left(k-1\right){\boldsymbol{\Phi}}^T\left(k-1\right){\mathbf{H}}^T(k){\mathbf{P}}_{dd}^{-1}(k)\mathbf{H}(k)\boldsymbol{\Phi}\left(k-1\right){\mathbf{P}}_{XX}\left(k-1\right){\boldsymbol{\Phi}}^T\left(k-1\right)\\ {\mathbf{P}}_{v_{l_w{l}_w}}(k)&=\mathbf{Q}\left(k-1\right){\boldsymbol{\Gamma}}^T\left(k-1\right){\mathbf{H}}^T(k){\mathbf{P}}_{dd}^{-1}(k)\mathbf{H}(k)\boldsymbol{\Gamma}\left(k-1\right)\mathbf{Q}\left(k-1\right)\\ {\mathbf{P}}_{v_{l_z{l}_z}}(k)&=\left[\mathbf{E}-\mathbf{H}(k)\mathbf{G}(k)\right]\mathbf{R}(k)\end{aligned}\right. $$
(10)

Here the innovation vector is epochwise projected into three residual vectors associated with the three groups of the measurements. Hence, we can estimate the variance factors.

Assume that all components of l_z(k) and l_w(k) are uncorrelated, so that both R(k) and Q(k) are diagonal. In this case, the redundancy index of each measurement noise factor is given by (Wang 2009)

$$ {r}_{{\operatorname{z}}_i}(k)=1-{\left\{\operatorname{H}(k)\operatorname{G}(k)\right\}}_{i i} $$
(11)

Similarly, the redundancy index of each process noise factor is equal to

$$ {r}_{w_i}(k)={\left[\mathbf{Q}(k){\boldsymbol{\Gamma}}^T(k){\mathbf{H}}^T(k){\mathbf{P}}_{dd}^{-1}(k)\mathbf{H}(k)\boldsymbol{\Gamma}\left(k-1\right)\right]}_{ii} $$
(12)

Furthermore, the individual group redundancy contributions, i.e., the total group redundancy indices, are equal to (Wang et al. 2010; Wang 2009; Wang et al. 2009):

$$ \left\{\begin{aligned}{r}_x(k)&=\operatorname{trace}\left[\boldsymbol{\Phi}\left(k-1\right){\mathbf{P}}_{XX}(k){\boldsymbol{\Phi}}^T\left(k-1\right){\mathbf{H}}^T(k){\mathbf{P}}_{dd}^{-1}(k)\mathbf{H}(k)\right]\\ {r}_w(k)&=\operatorname{trace}\left[\mathbf{Q}(k){\boldsymbol{\Gamma}}^T(k){\mathbf{H}}^T(k){\mathbf{P}}_{dd}^{-1}(k)\mathbf{H}(k)\boldsymbol{\Gamma}\left(k-1\right)\right]\\ {r}_z(k)&=\operatorname{trace}\left[\mathbf{E}-\mathbf{H}(k)\mathbf{G}(k)\right]\end{aligned}\right. $$
(13)

On the basis of the Helmert VCE method, the individual group variance factors of unit weight are estimated from the residual vector and the corresponding redundancy index as follows:

$$ {\widehat{\sigma}}_{0\operatorname{g}}^2(k)=\frac{{\operatorname{v}}_{\mathrm{g}}^T{\operatorname{P}}_{l_{\mathrm{g}}{l}_{\mathrm{g}}}(k){\operatorname{v}}_{\mathrm{g}}(k)}{r_{\mathrm{g}}(k)}\left(\operatorname{g}= x, w, z\right) $$
(14)

Thus, at time k, the individual variance factors of l_z(k) can be calculated by:

$$ {\widehat{\sigma}}_{{\operatorname{z}}_i}^2(k)=\frac{{\operatorname{v}}_{{\operatorname{z}}_i}^2(k)}{r_{{\operatorname{z}}_i}(k)} $$
(15)

And the covariance matrix of the measurement noise is as follows:

$$ \mathbf{R}(k)=\operatorname{diag}\left({\widehat{\sigma}}_{z_1}^2(k),\ \cdots,\ {\widehat{\sigma}}_{z_p}^2(k)\right) $$
(16)

Similarly, the variance factors of l_w(k) and the variance matrix Q(k) can be calculated by Eqs. (17) and (18):

$$ {\widehat{\sigma}}_{{\operatorname{w}}_j}^2(k)=\frac{{\operatorname{v}}_{{\operatorname{w}}_j}^2(k)}{r_{{\operatorname{w}}_j}(k)} $$
(17)
$$ \mathbf{Q}(k)=\operatorname{diag}\left({\widehat{\sigma}}_{w_1}^2(k),\ \cdots,\ {\widehat{\sigma}}_{w_m}^2(k)\right) $$
(18)
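As a rough illustration of Eqs. (9), (11)-(12) and (15)-(18), the following NumPy sketch updates the diagonal R(k) and Q(k) from the quantities produced by the kf_step sketch above. The helper name vce_update and its argument names are hypothetical, and the assumption that both noise matrices are diagonal is carried over from the text:

```python
import numpy as np

def vce_update(d, G, H, P_pred, P_dd, Q, Gamma, floor=1e-12):
    """Helmert-type VCE update of diagonal R(k) and Q(k), after Eqs. (9),
    (11)-(12) and (15)-(18). Inputs: innovation d, gain G, design matrix H,
    predicted covariance P_pred, innovation covariance P_dd, and the
    current Q and Gamma. Names are illustrative only."""
    P_pred_inv = np.linalg.inv(P_pred)
    P_dd_inv = np.linalg.inv(P_dd)

    # Residual vectors decomposed from the innovation d(k), Eq. (9)
    v_z = (H @ G - np.eye(H.shape[0])) @ d
    v_w = Q @ Gamma.T @ P_pred_inv @ G @ d

    # Individual redundancy indices, Eqs. (11) and (12)
    r_z = 1.0 - np.diag(H @ G)
    r_w = np.diag(Q @ Gamma.T @ H.T @ P_dd_inv @ H @ Gamma)

    # Variance factors and updated diagonal matrices, Eqs. (15)-(18);
    # the floor only guards against division by a vanishing redundancy index
    R_new = np.diag(v_z**2 / np.maximum(r_z, floor))
    Q_new = np.diag(v_w**2 / np.maximum(r_w, floor))
    return Q_new, R_new
```

In each filter cycle the updated Q and R would then be fed back into the next kf_step call, which is the adaptive loop described above.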

CKF algorithm

Since the CKF is not only simple and easy to implement but also highly accurate and well convergent, it is widely used in nonlinear estimation.

Let us consider a nonlinear state-space model:

$$ \left\{\begin{array}{l}{\operatorname{x}}_k=\operatorname{f}\left({\operatorname{x}}_{k-1}\right)+{\mathbf{w}}_{k-1}\\ {}{\operatorname{z}}_k=\operatorname{h}\left({\operatorname{x}}_k\right)+{\boldsymbol{\Delta}}_k\end{array}\right. $$
(19)

where x_k and z_k are the state vector and the measurement vector, respectively; f(·) and h(·) are the nonlinear state and measurement functions; and w_k ~ N(0, Q) and Δ_k ~ N(0, R) denote the process noise and measurement noise vectors, respectively.

For this nonlinear system, 2n equally weighted cubature points are selected to approximate the Gaussian distribution, and the CKF is then implemented through the time and measurement updates. The cubature points are defined as:

$$ \left\{\begin{aligned}{\xi}_i&=\sqrt{\frac{2n}{2}}{\left[1\right]}_i\\ {\omega}_i&=\frac{1}{2n}\end{aligned}\right.,\kern1em i=1,2,\cdots,2n $$
(20)

where n is the dimension of the state vector.
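A minimal sketch of Eq. (20), assuming the usual convention that [1]_i denotes the i-th column of [I_n, −I_n]:

```python
import numpy as np

def cubature_points(n):
    """Generate the 2n equally weighted cubature points of Eq. (20):
    xi_i = sqrt(2n/2) * [1]_i, with [1]_i the columns of [I, -I]."""
    unit_dirs = np.hstack((np.eye(n), -np.eye(n)))   # n x 2n unit directions
    xi = np.sqrt(n) * unit_dirs                      # sqrt(2n/2) = sqrt(n)
    w = np.full(2 * n, 1.0 / (2 * n))                # equal weights 1/(2n)
    return xi, w
```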

And the estimation process is as below (Arasaratnam & Haykin 2009; Arasaratnam & Haykin 2010):

The time update:

$$ {\operatorname{P}}_{k-1\left| k-1\right.}={\operatorname{S}}_{k-1\left| k-1\right.}{\operatorname{S}}_{k-1\left| k-1\right.}^T $$
(21)
$$ {\operatorname{X}}_{i, k-1\left| k-1\right.}={\operatorname{S}}_{k-1\left| k-1\right.}{\xi}_i+{\widehat{x}}_{k-1\left| k-1\right.} $$
(22)
$$ {\operatorname{X}}_{i, k\left| k-1\right.}^{\ast }=\operatorname{f}\left({\operatorname{X}}_{i, k-1\left| k-1\right.}\right) $$
(23)
$$ {\widehat{x}}_{k|k-1}=\frac{1}{2n}\sum_{i=1}^{2n}{\mathbf{X}}_{i,k|k-1}^{\ast} $$
(24)
$$ {\mathbf{P}}_{k|k-1}=\frac{1}{2n}\sum_{i=1}^{2n}{\mathbf{X}}_{i,k|k-1}^{\ast}{\mathbf{X}}_{i,k|k-1}^{\ast T}-{\widehat{x}}_{k|k-1}{\widehat{x}}_{k|k-1}^T+{\mathbf{Q}}_{k-1} $$
(25)

The measurement update:

$$ {\operatorname{P}}_{k\left| k-1\right.}={\operatorname{S}}_{k\left| k-1\right.}{\operatorname{S}}_{k\left| k-1\right.}^T $$
(26)
$$ {\operatorname{X}}_{i, k\left| k-1\right.}={\operatorname{S}}_{k\left| k-1\right.}{\xi}_i+{\widehat{x}}_{k\left| k-1\right.} $$
(27)
$$ {\operatorname{Y}}_{i, k\left| k-1\right.}=\operatorname{h}\left({\operatorname{X}}_{i, k\left| k-1\right.}\right) $$
(28)
$$ {\widehat{z}}_{k|k-1}=\frac{1}{2n}\sum_{i=1}^{2n}{\mathbf{Y}}_{i,k|k-1} $$
(29)
$$ {\mathbf{P}}_{k|k-1}^{zz}=\frac{1}{2n}\sum_{i=1}^{2n}{\mathbf{Y}}_{i,k|k-1}{\mathbf{Y}}_{i,k|k-1}^T-{\widehat{z}}_{k|k-1}{\widehat{z}}_{k|k-1}^T+{\mathbf{R}}_k $$
(30)
$$ {\mathbf{P}}_{k|k-1}^{xz}=\frac{1}{2n}\sum_{i=1}^{2n}{\mathbf{X}}_{i,k|k-1}{\mathbf{Y}}_{i,k|k-1}^T-{\widehat{x}}_{k|k-1}{\widehat{z}}_{k|k-1}^T $$
(31)

And the gain matrix is:

$$ {\operatorname{K}}_k={\operatorname{P}}_{k\left| k-1\right.}^{xz}{\left({\operatorname{P}}_{k\left| k-1\right.}^{zz}\right)}^{-1} $$
(32)

The estimation of the state vector can be obtained:

$$ {\widehat{x}}_{k\left| k\right.}={\widehat{x}}_{k\left| k-1\right.}+{\operatorname{K}}_k\left({\operatorname{z}}_k-{\widehat{z}}_{k\left| k-1\right.}\right) $$
(33)

where d_k = z_k − ẑ_{k|k−1} is the system innovation vector of the nonlinear system.

The variance matrix of the estimated state vector is:

$$ {\operatorname{P}}_{k\left| k\right.}={\operatorname{P}}_{k\left| k-1\right.}-{\operatorname{K}}_k{\operatorname{P}}_{k\left| k-1\right.}^{zz}{\operatorname{K}}_k^T $$
(34)
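The full cycle of Eqs. (21)-(34) can be sketched as follows; cubature_points is the helper defined above, and f and h are the user-supplied nonlinear models of Eq. (19). This is an illustrative sketch rather than a reference implementation:

```python
import numpy as np

def ckf_step(x_hat, P, z, f, h, Q, R):
    """One CKF time/measurement update, Eqs. (21)-(34).
    f, h: functions acting on a single state vector (1-D array)."""
    n = x_hat.size
    xi, w = cubature_points(n)                      # Eq. (20)

    # Time update, Eqs. (21)-(25)
    S = np.linalg.cholesky(P)                       # Eq. (21)
    X = S @ xi + x_hat[:, None]                     # Eq. (22)
    X_prop = np.apply_along_axis(f, 0, X)           # Eq. (23)
    x_pred = X_prop @ w                             # Eq. (24)
    P_pred = (X_prop * w) @ X_prop.T - np.outer(x_pred, x_pred) + Q   # Eq. (25)

    # Measurement update, Eqs. (26)-(31)
    S_pred = np.linalg.cholesky(P_pred)             # Eq. (26)
    X2 = S_pred @ xi + x_pred[:, None]              # Eq. (27)
    Z = np.apply_along_axis(h, 0, X2)               # Eq. (28)
    z_pred = Z @ w                                  # Eq. (29)
    P_zz = (Z * w) @ Z.T - np.outer(z_pred, z_pred) + R               # Eq. (30)
    P_xz = (X2 * w) @ Z.T - np.outer(x_pred, z_pred)                  # Eq. (31)

    # Gain, state and covariance update, Eqs. (32)-(34)
    K = P_xz @ np.linalg.inv(P_zz)
    d = z - z_pred                                  # innovation d_k
    x_new = x_pred + K @ d
    P_new = P_pred - K @ P_zz @ K.T
    return x_new, P_new, x_pred, P_pred, P_xz, P_zz, K, d
```

The predicted covariance, cross-covariance, gain and innovation returned here are the quantities needed by the adaptive (VCE) step of the proposed method.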

Methods

An improved adaptive filter based on the CKF

To solve the nonlinear problem and improve the stochastic model at the same time, we propose a new improved adaptive filter that combines the CKF with the VCE adaptive method.

With the nonlinear system described in Eq. (19), three pseudo measurement groups are defined as follows:

$$ \left\{\begin{aligned}{\mathbf{l}}_x(k)&=\mathbf{f}\left({\widehat{x}}_{k-1|k-1}\right)\\ {\mathbf{l}}_w(k)&={\mathbf{w}}_{k-1}\\ {\mathbf{l}}_z(k)&={\mathbf{z}}_k\end{aligned}\right. $$
(35)

The residual equation of the nonlinear system is:

$$ \left\{\begin{array}{l}{\operatorname{v}}_{l_x}(k)={\widehat{x}}_{k\left| k\right.}+{\widehat{w}}_{k-1}-{\operatorname{l}}_x(k)\\ {}{\operatorname{v}}_{l_w}(k)={\widehat{w}}_{k-1}-{\operatorname{l}}_w(k)\\ {}{\operatorname{v}}_{l_z}(k)=\operatorname{h}\left({\widehat{x}}_{k\left| k\right.}\right)-{\operatorname{l}}_z(k)\end{array}\right. $$
(36)

According to the steps of the CKF, Eqs. (9) and (10) can be rewritten as:

$$ \left\{\begin{array}{l}{\operatorname{v}}_{l_x{l}_x}(k)={\operatorname{P}}_{l_x{l}_x}(k){\left({\operatorname{P}}_{k\left| k-1\right.}\right)}^{-1}{\operatorname{K}}_k{\operatorname{d}}_k\\ {}{\operatorname{v}}_{l_w{l}_w}(k)={\operatorname{Q}}_{k-1}{\left({\operatorname{P}}_{k\left| k-1\right.}\right)}^{-1}{\operatorname{K}}_k{\operatorname{d}}_k\\ {}{\operatorname{v}}_{l_z{l}_z}(k)=\left[{\left({\operatorname{P}}_{k\left| k-1\right.}^{x z}\right)}^T{\left({\operatorname{P}}_{k\left| k-1\right.}\right)}^{-1}{\operatorname{K}}_k-\operatorname{E}\right]{\operatorname{d}}_k\end{array}\right. $$
(37)

The corresponding variance matrices are:

$$ \left\{\begin{aligned}{\mathbf{P}}_{v_{l_x{l}_x}}(k)&={\mathbf{P}}_{l_x{l}_x}(k){\left({\mathbf{P}}_{k|k-1}\right)}^{-1}{\mathbf{P}}_{k|k-1}^{xz}{\left({\mathbf{P}}_{k|k-1}^{zz}\right)}^{-1}{\left({\mathbf{P}}_{k|k-1}^{xz}\right)}^T{\left({\mathbf{P}}_{k|k-1}\right)}^{-1}{\mathbf{P}}_{l_x{l}_x}(k)\\ {\mathbf{P}}_{v_{l_w{l}_w}}(k)&={\mathbf{Q}}_{k-1}{\left({\mathbf{P}}_{k|k-1}\right)}^{-1}{\mathbf{P}}_{k|k-1}^{xz}{\left({\mathbf{P}}_{k|k-1}^{zz}\right)}^{-1}{\left({\mathbf{P}}_{k|k-1}^{xz}\right)}^T{\left({\mathbf{P}}_{k|k-1}\right)}^{-1}{\mathbf{Q}}_{k-1}\\ {\mathbf{P}}_{v_{l_z{l}_z}}(k)&=\left[\mathbf{E}-{\left({\mathbf{P}}_{k|k-1}^{xz}\right)}^T{\left({\mathbf{P}}_{k|k-1}\right)}^{-1}{\mathbf{K}}_k\right]\mathbf{R}(k)\end{aligned}\right. $$
(38)

The individual group redundancy indices are equal to:

$$ \left\{\begin{aligned}{r}_x(k)&=\operatorname{trace}\left[{\mathbf{P}}_{l_x{l}_x}(k){\left({\mathbf{P}}_{k|k-1}\right)}^{-1}{\mathbf{P}}_{k|k-1}^{xz}{\left({\mathbf{P}}_{k|k-1}^{zz}\right)}^{-1}{\left({\mathbf{P}}_{k|k-1}^{xz}\right)}^T{\left({\mathbf{P}}_{k|k-1}\right)}^{-1}\right]\\ {r}_w(k)&=\operatorname{trace}\left[{\mathbf{Q}}_{k-1}{\left({\mathbf{P}}_{k|k-1}\right)}^{-1}{\mathbf{P}}_{k|k-1}^{xz}{\left({\mathbf{P}}_{k|k-1}^{zz}\right)}^{-1}{\left({\mathbf{P}}_{k|k-1}^{xz}\right)}^T{\left({\mathbf{P}}_{k|k-1}\right)}^{-1}\right]\\ {r}_z(k)&=\operatorname{trace}\left[\mathbf{E}-{\left({\mathbf{P}}_{k|k-1}^{xz}\right)}^T{\left({\mathbf{P}}_{k|k-1}\right)}^{-1}{\mathbf{K}}_k\right]\end{aligned}\right. $$
(39)

Here, the variance factors and the variance matrices for the nonlinear system can then be calculated according to Eqs. (15)-(18).
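Putting the two parts together, one possible sketch of the VCE step of the improved adaptive CKF (Eqs. (37)-(39) followed by Eqs. (15)-(18)) is given below. It reuses the outputs of the ckf_step sketch above, and the function name adaptive_ckf_vce is hypothetical:

```python
import numpy as np

def adaptive_ckf_vce(d, K, P_pred, P_xz, P_zz, Q, R, floor=1e-12):
    """VCE step of the improved adaptive CKF: residuals per Eq. (37),
    per-component redundancy indices taken as the diagonal elements of the
    trace arguments in Eq. (39), then diagonal updates per Eqs. (15)-(18)."""
    P_pred_inv = np.linalg.inv(P_pred)
    m = d.size

    # Residual vectors decomposed from the innovation d_k, Eq. (37)
    v_w = Q @ P_pred_inv @ K @ d
    v_z = (P_xz.T @ P_pred_inv @ K - np.eye(m)) @ d

    # Redundancy indices (diagonals of the matrices traced in Eq. (39))
    r_w = np.diag(Q @ P_pred_inv @ P_xz @ np.linalg.inv(P_zz)
                  @ P_xz.T @ P_pred_inv)
    r_z = np.diag(np.eye(m) - P_xz.T @ P_pred_inv @ K)

    # Updated diagonal covariance matrices, Eqs. (15)-(18)
    R_new = np.diag(v_z**2 / np.maximum(r_z, floor))
    Q_new = np.diag(v_w**2 / np.maximum(r_w, floor))
    return Q_new, R_new
```

In a full run, Q_new and R_new would replace Q and R in the next ckf_step call, closing the adaptive feedback loop described above.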

Results and discussion

Simulations and experiments

To show the advantages of the improved CKF, this section compares the solutions of the normal CKF and the improved CKF on two nonlinear models, a bearing-only tracking model and a target tracking model, using simulated data with known reference solutions. A further experiment with real data is also presented and summarized.

Bearing-only tracking method

The bearing-only tracking model has two states. Its nonlinear model is set as follows (Zhang et al. 2015; Julier & Uhlman 1997; Kotecha & Djuric 2003):

$$ {\mathbf{x}}_k=\begin{bmatrix}0.9&0\\ 0&1\end{bmatrix}{\mathbf{x}}_{k-1}+{\mathbf{w}}_{k-1} $$
(40)

where the state vector \( \mathbf{x}={\left[{x}_1\kern0.5em {x}_2\right]}^T={\left[ s\kern0.5em t\right]}^T \) is the position of a moving carrier in the 2D Cartesian coordinate system. The process noise is w_k ~ N(0, Q) with

$$ \mathbf{Q}={10}^{-3}\times \begin{bmatrix}1&0\\ 0&3\end{bmatrix} $$

The observation sensor is an angle observer whose coordinates at epoch k are (cos k, sin k). The measurement equation is as follows:

$$ {\operatorname{z}}_k={ \tan}^{-1}\left(\frac{t_k- \sin k}{s_k- \cos k}\right)+{\operatorname{v}}_k $$
(41)

where the measurement noise is v_k ~ N(0, R) with R = 0.005.

The initial state vector and its covariance matrix were given as:

$$ {\mathbf{x}}_0={\left[\begin{matrix}20&5\end{matrix}\right]}^T $$
$$ {\mathbf{P}}_0=\begin{bmatrix}0.1&0\\ 0&0.1\end{bmatrix} $$

The initial variance matrices of the process noise and measurement noise vectors were set to 3 and 4 times their true values, respectively, and the total simulation duration is 500 s. To compare performance, the results of the normal CKF were compared with those of the improved adaptive CKF. With 20 Monte Carlo runs on the same 500-s simulated data set, the mean square errors (MSE) of the states estimated by the normal CKF and the improved CKF are compared in Fig. 1, while Figs. 2 and 3 show the probability density functions (PDF) of the estimated state errors for the two methods, respectively.
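For reference, a sketch of how this scenario could be simulated and filtered with the ckf_step helper above is shown below; the random seed, the arctan2 convention for tan⁻¹, and the helper names are assumptions made only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)                      # assumed seed, Eq. (40)-(41) model
F = np.array([[0.9, 0.0], [0.0, 1.0]])
Q_true = 1e-3 * np.diag([1.0, 3.0])
R_true = np.array([[0.005]])

def f(x, k=None):
    return F @ x

def h_factory(k):
    # Bearing from a sensor located at (cos k, sin k), Eq. (41)
    def h(x):
        return np.array([np.arctan2(x[1] - np.sin(k), x[0] - np.cos(k))])
    return h

# Generate one 500-step trajectory and its bearing measurements
x = np.array([20.0, 5.0])
xs, zs = [], []
for k in range(1, 501):
    x = f(x) + rng.multivariate_normal(np.zeros(2), Q_true)
    z = h_factory(k)(x) + rng.normal(0.0, np.sqrt(R_true[0, 0]), 1)
    xs.append(x.copy()); zs.append(z)

# Run the normal CKF with deliberately mis-specified noise (3x Q, 4x R);
# no angle wrapping is handled here, this is only an illustrative sketch
x_hat, P = np.array([20.0, 5.0]), 0.1 * np.eye(2)
Q0, R0 = 3 * Q_true, 4 * R_true
for k, z in enumerate(zs, start=1):
    x_hat, P, *_ = ckf_step(x_hat, P, z, f, h_factory(k), Q0, R0)
```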

Fig. 1 MSEs of 20 Monte Carlo simulations

Fig. 2 PDF of state errors with the normal CKF

Fig. 3 PDF of state errors with the improved CKF

To evaluate the performance of the improved CKF, the PDFs of the estimated state errors obtained from the simulated data were computed (Figs. 2 and 3). The PDF obtained with the improved CKF fits the normal distribution much better, which indicates that its estimated states are closer to the true states than those of the normal CKF; the proposed method therefore estimates the stochastic model more accurately. As Fig. 1 shows, the MSEs of the estimated states from the two filters are not very different, although the improved CKF behaves slightly better; in general, one should not expect smaller variances simply because the errors are distributed more reasonably.

Target tracking method

This 2D tracking model has five states, with the nonlinear model below (Arasaratnam & Haykin 2009):

$$ {\mathbf{x}}_k=\begin{bmatrix}1&\dfrac{\sin\Omega{T}_{\Delta}}{\Omega}&0&\dfrac{-1+\cos\Omega{T}_{\Delta}}{\Omega}&0\\ 0&\cos\Omega{T}_{\Delta}&0&-\sin\Omega{T}_{\Delta}&0\\ 0&\dfrac{1-\cos\Omega{T}_{\Delta}}{\Omega}&1&\dfrac{\sin\Omega{T}_{\Delta}}{\Omega}&0\\ 0&\sin\Omega{T}_{\Delta}&0&\cos\Omega{T}_{\Delta}&0\\ 0&0&0&0&1\end{bmatrix}{\mathbf{x}}_{k-1}+{\mathbf{r}}_k $$
(42)

where the state vector is \( \mathbf{x}={\left[ x\kern0.5em \dot{x}\kern0.5em y\kern0.5em \dot{y}\kern0.5em \Omega\right]}^T \), in which (x, y), \( \left(\dot{x},\dot{y}\right) \) and Ω are the position, velocity and angular velocity of the carrier, respectively, and T_Δ is the sampling interval. The process noise is r_k ~ N(0, Q) with

$$ \mathbf{Q}=\operatorname{diag}\left({q}_1\mathbf{M},\ {q}_1\mathbf{M},\ {q}_2{T}_{\Delta}\right),\qquad \mathbf{M}=\begin{bmatrix}{T}_{\Delta}^3/3&{T}_{\Delta}^2/2\\ {T}_{\Delta}^2/2&{T}_{\Delta}\end{bmatrix} $$

The carrier is observed in real time; the observations are the distance s_k and the azimuth θ_k from the observation system to the carrier. The observation equation is therefore:

$$ \begin{pmatrix}{s}_k\\ {\theta}_k\end{pmatrix}=\begin{pmatrix}\sqrt{{x}_k^2+{y}_k^2}\\ {\tan}^{-1}\left({y}_k/{x}_k\right)\end{pmatrix}+{\mathbf{v}}_k $$
(43)

where v_k is the observation noise vector, v_k ~ N(0, R), with \( \mathbf{R}=\operatorname{diag}\left({\sigma}_r^2,\ {\sigma}_{\theta}^2\right) \).

In this subsection, the simulation parameters are set as in Table 1.

Table 1 Simulation Parameters

The initial state

$$ {\mathbf{x}}_0={\left[1000\ \mathrm{m},\ 300\ \mathrm{m/s},\ 1000\ \mathrm{m},\ 300\ \mathrm{m/s},\ -3{}^{\circ}/\mathrm{s}\right]}^T $$

and the associated covariance

$$ {\mathbf{P}}_0=\operatorname{diag}\left(100\ {\mathrm{m}}^2,\ 100\ {\mathrm{m}}^2,\ 100\ {\mathrm{m}}^2,\ 100\ {\mathrm{m}}^2,\ 100\ {\mathrm{mrad}}^2/{\mathrm{s}}^2\right) $$

The root mean square error (RMSE) was used as the evaluation criterion, so the RMSEs of the position, velocity and angular velocity of the carrier are calculated according to Eq. (44):

$$ \begin{aligned}\mathrm{RMSE}_{pos}(k)&=\sqrt{\frac{1}{{n}_{\exp}}\sum_{n=1}^{{n}_{\exp}}\left[{\left({x}_k^n-{x}_{k|k}^n\right)}^2+{\left({y}_k^n-{y}_{k|k}^n\right)}^2\right]}\\ \mathrm{RMSE}_{vel}(k)&=\sqrt{\frac{1}{{n}_{\exp}}\sum_{n=1}^{{n}_{\exp}}\left[{\left({\dot{x}}_k^n-{\dot{x}}_{k|k}^n\right)}^2+{\left({\dot{y}}_k^n-{\dot{y}}_{k|k}^n\right)}^2\right]}\\ \mathrm{RMSE}_{\Omega}(k)&=\sqrt{\frac{1}{{n}_{\exp}}\sum_{n=1}^{{n}_{\exp}}{\left({\Omega}_k^n-{\Omega}_{k|k}^n\right)}^2}\end{aligned} $$
(44)

where n_exp is the number of Monte Carlo runs.
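A small sketch of Eq. (44), assuming the truth and the estimates are stored as arrays of shape (n_exp, n_steps, 5) in the state order [x, ẋ, y, ẏ, Ω]:

```python
import numpy as np

def rmse_curves(truth, est):
    """Position, velocity and turn-rate RMSE over Monte Carlo runs, Eq. (44).
    truth, est: arrays of shape (n_exp, n_steps, 5); the layout is an
    assumption made for this sketch."""
    err = truth - est
    rmse_pos = np.sqrt(np.mean(err[:, :, 0]**2 + err[:, :, 2]**2, axis=0))
    rmse_vel = np.sqrt(np.mean(err[:, :, 1]**2 + err[:, :, 3]**2, axis=0))
    rmse_omega = np.sqrt(np.mean(err[:, :, 4]**2, axis=0))
    return rmse_pos, rmse_vel, rmse_omega
```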

The RMSEs of the estimated position, velocity and angular rate obtained with the normal CKF and the improved CKF are compared in Fig. 4, while Figs. 5 and 6 show the PDFs of the corresponding estimation errors for the two methods, respectively.

Fig. 4 RMSEs of 20 Monte Carlo simulations

Fig. 5 PDF of estimated state errors with the normal CKF

Fig. 6 PDF of estimated state errors with the improved CKF

As shown above, the RMSEs of the two methods are almost the same, while the PDFs obtained with the improved CKF again fit the standard normal distribution much better, similar to the bearing-only tracking case. The feasibility and superiority of the improved adaptive filter based on the CKF are thus verified.

Experiment and analysis

To verify the effectiveness of the novel adaptive filter based on the CKF and VCE, an experiment with real data was carried out in the Zhanjiang area, China (shown in Fig. 7). The ship first sailed straight at uniform velocity, then turned back (a 180° heading change), sailed straight at uniform velocity again, turned back again, and repeated this pattern. An Inertial Measurement Unit (IMU) developed by our own laboratory, composed of three Fiber Optic Gyroscopes (FOGs) and three quartz flexible accelerometers, was used to acquire the ship's inertial information. The performance parameters of the FOG and the accelerometer are listed in Tables 2 and 3, respectively. The IMU was installed on the right side of the ship's deck, as shown in Fig. 8. The sampling frequency and total duration of the experiment were 100 Hz and 900 s, respectively.

Fig. 7 Experiment area

Table 2 Performance Parameters of FOG
Table 3 Performance Parameters of accelerometer
Fig. 8 Installation diagram in the experiment

With the proposed algorithm, the estimated trajectories are compared in Fig. 9, where the red solid line is the trajectory estimated by the improved CKF based on the VCE, the blue dash-dot line is the trajectory estimated by the normal CKF, and the cyan diamond marks the initial position.

Fig. 9 Comparison of the estimated trajectories

The figure shows that both the improved CKF and the normal CKF estimate the trajectory effectively. Since the true trajectory is not available, the corresponding PDFs cannot be provided here; nevertheless, the experiment confirms the feasibility of the improved adaptive filter based on the CKF.

Conclusions

This paper proposed a novel adaptive filter that combines the VCE adaptive method with the nonlinear CKF to improve the stochastic model when the system noises of a nonlinear system are unknown. Built on the framework of the nonlinear CKF, the proposed method estimates the variances of the process and measurement noises in real time through variance component estimation, describing the system more accurately. The simulations, the experiment and the comparisons showed that the noise characteristics can be estimated effectively with the improved adaptive filter based on the CKF and VCE, which demonstrates its effectiveness and superiority.

References

  • Arasaratnam I, Haykin S (2009) Cubature Kalman filters. IEEE Trans Autom Control 54(6):1254–1269
  • Arasaratnam I, Haykin S (2010) Cubature Kalman filtering for continuous-discrete systems: theory and simulations. IEEE Trans Signal Process 58(10):4977–4993
  • Chang G, Liu M (2015) An adaptive fading Kalman filter based on Mahalanobis distance. J Aerosp Eng 229(6):1114–1123
  • Chen SY (2012) Kalman filter for robot vision: a survey. IEEE Trans Ind Electron 59(11):4409–4420
  • Dini DH, Mandic DP, Julier SJ (2011) A widely linear complex unscented Kalman filter. IEEE Signal Processing Letters 18(11):623–626
  • Feng J, Wang Z, Zeng M (2013) Distributed weighted robust Kalman filter fusion for uncertain systems with autocorrelated and cross-correlated noises. Information Fusion 14(1):78–86
  • Gao W, Zhang Y, Wang J (2014) A strapdown inertial navigation system/Beidou/Doppler velocity log integrated navigation algorithm based on a cubature Kalman filter. Sensors 14(1):1511–1527
  • Han S, Wang J (2012) Integrated GPS/INS navigation system with dual-rate Kalman filter. GPS Solutions 16(3):389–404
  • Hu C, Wu C, Chen Y, Liu D (2003) Adaptive Kalman filtering for vehicle navigation. J Global Positioning Systems 2(1):42–47
  • Jin M, Zhao J, Jin J, Yu G, Li W (2014) The adaptive Kalman filter based on fuzzy logic for inertial motion capture system. Measurement 49:196–204
  • Julier SJ, Uhlman JK (1997) A new extension of the Kalman filter to nonlinear systems. Proceedings of the Society of Photo-Optical Instrumentation Engineers 3068:182–193
  • Kotecha JH, Djuric PA (2003) Gaussian particle filtering. IEEE Trans Signal Process 51(10):2592–2601
  • Masazade E, Fardad M, Varshney PK (2012) Sparsity-promoting extended Kalman filtering for target tracking in wireless sensor networks. IEEE Signal Processing Letters 19(12):845–848
  • Moghtased-Azar K, Tehranchi R, Amiri-Simkooei AR (2014) An alternative method for non-negative estimation of variance components. J Geod 88(5):427–439
  • Mundla N, Rangababu P, Samrat L et al (2012) A modified Sage-Husa adaptive Kalman filter for denoising fiber optic gyroscope signal. In: Proceedings of the 2012 Annual IEEE India Conference (INDICON), Kochi, pp 1266–1271
  • Santos MCP (2015) Estimating and controlling UAV position using RGB-D/IMU data fusion with decentralized information/Kalman filter. In: Proceedings of the 2015 IEEE International Conference on Industrial Technology (ICIT), Seville, pp 22–239
  • Tang C, Ao Z, Zhang K, Wang Y (2014) A multi-sensor data fusion algorithm based on improved Kalman filter. Mechatronics and Automatic Control Systems 237:219–229
  • Vaccarella A, de Momi E, Enquobahrie A, Ferrigno G (2013) Unscented Kalman filter based sensor fusion for robust optical and electromagnetic tracking in surgical navigation. IEEE Trans Instrum Meas 62(7):2067–2081
  • Von Chong A, Caballero R (2014) Adaptive Kalman filtering for the estimation of orientation and displacement in submarine systems. In: Proceedings of the 2014 IEEE Central America and Panama Convention (CONCAPAN XXXIV), Panama City, pp 1–6
  • Wang J (2009) Reliability analysis in Kalman filtering. J Global Positioning Systems 8(1):101–111
  • Wang J, Gopaul N, Scherzinger B (2009) Simplified algorithms of variance component estimation for static and kinematic GPS single point positioning. J Global Positioning Systems 8(1):43–52
  • Wang J, Gopaul SN, Guo J (2010) Adaptive Kalman filtering based on posteriori variance-covariance components estimation. In: CPGPS 2010 Technical Forum, Shanghai, pp 1–11
  • Zhang C, Zhao M, Yu X, Cui M, Zhou Y, Wang X (2015) Cubature Kalman filter based on strong tracking. In: Proceedings of the Third International Conference on Communications, Signal Processing, and Systems 322:131–138


Competing interests

The authors declare that they have no competing interests.

Authors’ contribution

YZ wrote this paper and analyzed the results of the experiments; JW proposed the initial idea and revised this manuscript; QS carried out the simulations and revised this manuscript; WG conceived of the experiments. All authors read and approved the final manuscript.

Author information

Corresponding author

Correspondence to Qian Sun.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Zhang, Y., Wang, J., Sun, Q. et al. Adaptive cubature Kalman filter based on the variance-covariance components estimation. J. Glob. Position. Syst. 15, 1 (2017). https://doi.org/10.1186/s41445-017-0006-z

