Robust Kalman filter model
The discrete models of the state equation and measurement equation are:
$$ {X}_k={FX}_{k-1}+{W}_{k-1} $$
(9)
$$ {Z}_k={H}_k{X}_k+{V}_k $$
(10)
Based on these two models, the standard Kalman filter prediction and update equations are:
$$ {\widehat{X}}_k^{-}=F{\widehat{X}}_{k-1}^{+} $$
(11)
$$ {P}_k^{-}={FP}_{k-1}^{+}{F}^T+{Q}_{k-1} $$
(12)
$$ {P}_{e_k}={H}_k{P}_k^{-}{H}_k^T+{R}_k $$
(13)
$$ {K}_k={P}_k^{-}{H}_k^T{P_{e_k}}^{-1} $$
(14)
$$ {\widehat{X}}_k^{+}={\widehat{X}}_k^{-}+{K}_k\left({Z}_k-{H}_k{\widehat{X}}_k^{-}\right) $$
(15)
$$ {P}_k^{+}={P}_k^{-}-{K}_k{H}_k{P}_k^{-} $$
(16)
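To make the cycle of Eqs. (11)-(16) concrete, the following is a minimal NumPy sketch of one prediction and update step; the function name `kf_step` and the array shapes are illustrative assumptions rather than part of the original formulation.

```python
import numpy as np

def kf_step(x_post, P_post, F, Q, H, R, z):
    """One standard Kalman filter cycle, following Eqs. (11)-(16)."""
    # Prediction: Eqs. (11)-(12)
    x_prior = F @ x_post
    P_prior = F @ P_post @ F.T + Q
    # Innovation and its covariance: Eq. (13)
    e = z - H @ x_prior
    P_e = H @ P_prior @ H.T + R
    # Filter gain: Eq. (14)
    K = P_prior @ H.T @ np.linalg.inv(P_e)
    # Measurement update: Eqs. (15)-(16)
    x_post = x_prior + K @ e
    P_post = P_prior - K @ H @ P_prior
    return x_post, P_post, e, P_e
```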
Where \( {\widehat{X}}_k^{-} \) and \( {\widehat{X}}_k^{+} \) are the prior and posterior estimates of the filter state, respectively. \( {P}_k^{-} \) and \( {P}_k^{+} \) are the prior and posterior covariance matrices, respectively. \( {Z}_k \) is the actual observation value, and \( {K}_k \) is the Kalman filter gain matrix. \( {e}_k={Z}_k-{H}_k{\widehat{X}}_k^{-} \) is the difference between the observed and predicted values, usually called the innovation vector, whose covariance matrix is given by Eq. (13). Assume that the state equation and the measurement equation contain no outliers and that the observation noise follows a Gaussian white noise distribution; then the probability density function of \( {Z}_k \) is:
$$ \rho \left({Z}_k\right)=\frac{1}{\sqrt{{\left(2\pi \right)}^m\left|{P}_{e_k}\right|}}\exp \left(-\frac{1}{2}{e}_k^T{\left({P}_{e_k}\right)}^{-1}{e}_k\right) $$
(17)
Where \( m \) is the dimension of \( {Z}_k \). We define the index parameter as follows:
$$ {\gamma}_k(m)={M}_k^2={e}_k^T{\left({P}_{e_k}\right)}^{-1}{e}_k $$
(18)
Where \( {M}_k \) is known as the Mahalanobis distance. If there are no outliers, the index parameter \( {\gamma}_k(m) \) follows a Chi-square distribution with \( m \) degrees of freedom. For a given significance level \( \alpha \) (a small value), applying a Chi-square test to \( {\gamma}_k(m) \) gives the probability of the event:
$$ \Pr \left[{\gamma}_k(m)>{\chi}_{\alpha, m}^2\right]=\alpha $$
(19)
Where \( {\chi}_{\alpha, m}^2 \) is the upper quantile corresponding to the significance level \( \alpha \), and Pr denotes the probability of the event in Eq. (19). This probability is very small, so if the event does occur, the null hypothesis can be rejected, which means the measurement information is affected by anomalous outliers. In this way, the detection of outliers is realized. For the detected outliers, the filter update weight should be reduced, which can be achieved by correcting the innovation vector covariance matrix.
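As an illustration of the test in Eqs. (18)-(19), the sketch below computes the index parameter and compares it with the Chi-square threshold; `scipy.stats.chi2.ppf(1 - alpha, m)` supplies the upper quantile \( {\chi}_{\alpha, m}^2 \), while the helper name `detect_outlier` and the default \( \alpha = 0.01 \) are assumptions chosen for illustration.

```python
import numpy as np
from scipy.stats import chi2

def detect_outlier(e, P_e, alpha=0.01):
    """Chi-square test on the index parameter of Eq. (18)."""
    m = e.shape[0]                                  # dimension of Z_k
    gamma = float(e @ np.linalg.solve(P_e, e))      # squared Mahalanobis distance, Eq. (18)
    threshold = chi2.ppf(1.0 - alpha, df=m)         # upper alpha-quantile of chi^2 with m dof
    return gamma, threshold, gamma > threshold      # True if the event of Eq. (19) occurs
```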
The robust estimation factor can be expressed as:
$$ \lambda =\left\{\begin{array}{ll}\frac{{\gamma}_k(m)}{\chi_{\alpha, m}^2}, & \mathrm{if}\ {\gamma}_k(m)>{\chi}_{\alpha, m}^2\\ 1, & \mathrm{otherwise}\end{array}\right. $$
(20)
Eq. (20) indicates that introducing \( \lambda \) inflates the covariance matrix of the innovation and reduces the filter gain, thereby suppressing the observed outliers. The index parameter defined by Eq. (18) not only accounts for the correlations between the elements of the innovation vector but is also theoretically more rigorous. The detection of outliers follows the standard hypothesis-testing procedure, so the threshold selection has a clear statistical meaning.
The correction formula is:
$$ {P}_{e_k}=\lambda {P}_{e_k} $$
(21)
Substituting the corrected \( {P}_{e_k} \) into Eq. (14), Eq. (15) and Eq. (16) yields the robust Kalman filter algorithm.
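Assembling the pieces, a robust measurement update per Eqs. (13)-(16) with the correction of Eqs. (20)-(21) might look like the following sketch, which reuses the hypothetical `detect_outlier` helper above; when \( \lambda > 1 \), the inflated \( {P}_{e_k} \) automatically shrinks the gain \( {K}_k \).

```python
import numpy as np

def robust_kf_update(x_prior, P_prior, H, R, z, alpha=0.01):
    """Measurement update of Eqs. (13)-(16) with the robust correction (20)-(21)."""
    e = z - H @ x_prior
    P_e = H @ P_prior @ H.T + R
    gamma, threshold, is_outlier = detect_outlier(e, P_e, alpha)
    lam = gamma / threshold if is_outlier else 1.0  # robust factor, Eq. (20)
    P_e = lam * P_e                                 # corrected innovation covariance, Eq. (21)
    K = P_prior @ H.T @ np.linalg.inv(P_e)          # reduced gain when lam > 1
    x_post = x_prior + K @ e
    P_post = P_prior - K @ H @ P_prior
    return x_post, P_post
```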
Sequential robust Kalman filter algorithm
There are different types of observation information when multiple sensors are integrated. If outliers are detected with a single index parameter and the whole innovation vector covariance matrix is then scaled by a single robustness factor, the predicted accuracy of the state parameters may become unreliable, and the weights of observations contaminated by large outliers cannot be reduced effectively.
In order to solve the above problems, we apply the sequential updating method to the robust Kalman filter algorithm to detect and correct outliers in the different observations. The sequential measurement update method (Dan, 2006) reduces one high-dimensional measurement update to multiple low-dimensional updates. In particular, the \( m \)-dimensional observation vector is decomposed into \( m \) scalar measurements. When computing the filter gain, the matrix inversion is replaced by scalar reciprocal operations, which effectively avoids explicit matrix inversion and enhances the numerical stability of the calculation.
If the measurement noise covariance matrix \( {R}_k \) is not diagonal, the observation vector can be decorrelated by means of a Cholesky decomposition:
$$ {R}_k={L}_k{L}_k^T $$
(22)
Where \( {L}_k \) is a non-singular lower triangular matrix.
Left-multiplying Eq. (10) by \( {L}_k^{-1} \) gives:
$$ {L}_k^{-1}{Z}_k={L}_k^{-1}{H}_k{X}_k+{L}_k^{-1}{V}_k $$
(23)
The above formula can be abbreviated as:
$$ {Z}_k^{\ast }={H}_k^{\ast }{X}_k+{V}_k^{\ast } $$
(24)
Where \( {Z}_k^{\ast }={L}_k^{-1}{Z}_k \), \( {H}_k^{\ast }={L}_k^{-1}{H}_k \), \( {V}_k^{\ast }={L}_k^{-1}{V}_k \), and the measurement noise covariance matrix in Eq. (24) is:
$$ {\displaystyle \begin{array}{l}{R}_k^{\ast }=E\left[{V}_k^{\ast }{\left({V}_k^{\ast}\right)}^T\right]=E\left[{L}_k^{-1}{V}_k{\left({L}_k^{-1}{V}_k\right)}^T\right]\\ {}\kern1em ={L}_k^{-1}E\left[{V}_k{V}_k^T\right]{\left({L}_k^{-1}\right)}^T={L}_k^{-1}{R}_k{\left({L}_k^{-1}\right)}^T=I\end{array}} $$
(25)
Where the rows \( {h}_j^{\ast } \) of \( {H}_k^{\ast } \) are defined by
$$ {H}_k^{\ast }={\left[\begin{array}{cccc}{h_1^{\ast}}^T& {h_2^{\ast}}^T& \cdots & {h_m^{\ast}}^T\end{array}\right]}^T $$
(26)
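A minimal sketch of this whitening step, assuming NumPy's `cholesky` routine (which returns the lower-triangular factor \( {L}_k \)):

```python
import numpy as np

def decorrelate(z, H, R):
    """Whiten the measurement model, Eqs. (22)-(25), so that R* = I."""
    L = np.linalg.cholesky(R)          # lower-triangular factor, Eq. (22)
    z_star = np.linalg.solve(L, z)     # Z* = L^{-1} Z, Eq. (24)
    H_star = np.linalg.solve(L, H)     # H* = L^{-1} H; rows are h_j*, Eq. (26)
    return z_star, H_star
```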
For the \( j \)th measurement update in the \( k \)th epoch, we have:
$$ {\widehat{y}}_{k,j}^{\ast }={h}_j^{\ast }{\widehat{x}}_{k,j-1} $$
(27)
$$ {\sigma}_{{\widehat{y}}_{k,j}^{\ast}}^2={h}_j^{\ast }{P}_{k,j}^{-}{h_j^{\ast}}^T+1 $$
(28)
Where \( {\widehat{x}}_{k,j-1} \) is the state estimate after the first \( j-1 \) measurement updates, with \( {\widehat{x}}_{k,0}={\widehat{x}}_{k\mid k-1} \), and \( {\widehat{y}}_{k,j}^{\ast } \) is the predicted value of the \( j \)th element of the transformed observation vector, whose actually observed value is denoted \( {\tilde{Z}}_{k,j} \). A Chi-square test with one degree of freedom is used to judge whether \( {\tilde{Z}}_{k,j} \) contains an outlier; the test statistic is:
$$ \gamma \left({\tilde{Z}}_{k,j}\right)=\frac{{\left({\tilde{Z}}_{k,j}-{\widehat{y}}_{k,j}^{\ast}\right)}^2}{\sigma_{{\widehat{y}}_{k,j}^{\ast}}^2} $$
(29)
For a given significance level \( \alpha \) and the corresponding upper quantile \( {\chi}_{\alpha,1}^2 \), the judgment condition can be expressed as:
$$ \gamma \left({\tilde{Z}}_{k,j}\right)>{\chi}_{\alpha,1}^2 $$
(30)
If \( {\tilde{Z}}_{k,j} \) contains an outlier, the innovation variance is then modified by the robustness factor:
$$ \lambda =\left\{\begin{array}{ll}\frac{\gamma \left({\tilde{Z}}_{k,j}\right)}{\chi_{\alpha,1}^2}, & \mathrm{if}\ \gamma \left({\tilde{Z}}_{k,j}\right)>{\chi}_{\alpha,1}^2\\ 1, & \mathrm{otherwise}\end{array}\right. $$
(31)
$$ {\sigma}_{{\widehat{y}}_{k,j}^{\ast}}^2=\lambda\ {\sigma}_{{\widehat{y}}_{k,j}^{\ast}}^2 $$
(32)
The sequential Kalman filter update equations can then be obtained:
$$ K=\frac{P_{k,j}^{-}{h_j^{\ast}}^T}{\sigma_{{\widehat{y}}_{k,j}^{\ast}}^2} $$
(33)
$$ {X}_{k,j}^{+}={X}_{k,j}^{-}+K\left({\tilde{Z}}_{k,j}-{\widehat{y}}_{k,j}^{\ast}\right) $$
(34)
$$ {P}_{k,j}^{+}={P}_{k,j}^{-}-{\sigma}_{{\widehat{y}}_{k,j}^{\ast}}^2{KK}^T $$
(35)
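Finally, the sequential robust measurement update of Eqs. (27)-(35) can be sketched as a loop over the decorrelated scalar observations; the code assumes the measurements have been pre-whitened as in Eq. (24) (for example by the hypothetical `decorrelate` helper above), so that each scalar noise variance equals 1.

```python
import numpy as np
from scipy.stats import chi2

def sequential_robust_update(x_prior, P_prior, z_star, H_star, alpha=0.01):
    """Sequential robust measurement update, Eqs. (27)-(35); inputs pre-whitened so R* = I."""
    x, P = x_prior.copy(), P_prior.copy()
    threshold = chi2.ppf(1.0 - alpha, df=1)                # one-dof upper quantile
    for j in range(z_star.shape[0]):
        h = H_star[j]                                      # j-th row of H*
        y_pred = h @ x                                     # predicted scalar, Eq. (27)
        sigma2 = h @ P @ h + 1.0                           # innovation variance, Eq. (28)
        gamma = (z_star[j] - y_pred) ** 2 / sigma2         # test statistic, Eq. (29)
        lam = gamma / threshold if gamma > threshold else 1.0  # robustness factor, Eq. (31)
        sigma2 = lam * sigma2                              # corrected variance, Eq. (32)
        K = P @ h / sigma2                                 # scalar-gain column, Eq. (33)
        x = x + K * (z_star[j] - y_pred)                   # state update, Eq. (34)
        P = P - sigma2 * np.outer(K, K)                    # covariance update, Eq. (35)
    return x, P
```

Because each update involves only the scalar \( {\sigma}_{{\widehat{y}}_{k,j}^{\ast}}^2 \), the loop replaces the matrix inversion of Eq. (14) with a simple division, which is the numerical advantage of the sequential method noted above.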