Camera autocalibration in GPS/INS/stereo camera integrated kinematic positioning and navigation system
Nilesh S. Gopaul, Jianguo Wang and Baoxin Hu
https://doi.org/10.1186/s41445-016-0003-7
© The Author(s) 2016
Received: 30 July 2016
Accepted: 7 September 2016
Published: 4 November 2016
Abstract
This paper presents a novel two-step camera calibration method for a GPS/INS/stereo camera multi-sensor kinematic positioning and navigation system. A camera autocalibration is first performed to obtain the lens distortion parameters, the up-to-scale baseline length and the relative orientation between the stereo cameras. Then, a system calibration is introduced to recover the camera lever-arms and boresight angles with respect to the IMU, and the absolute scale of the camera, using the GPS/INS solution. The autocalibration algorithm employs the three-view scale-restraint equations (SRE). In comparison with the collinearity equations (COL), it is free of landmark parameters and ground control points (GCPs); the proposed method is therefore computationally more efficient. Results and a comparison between the SRE and COL methods are presented using simulated and road test data. They show that the proposed SRE method requires fewer computational resources and achieves the same or a better accuracy level than the traditional COL.
Keywords
Camera autocalibration, lens distortion, relative orientation, lever arms, boresight angles, GPS, IMU, scale restraint equation

Introduction
The high demand for low-cost multi-sensor kinematic positioning and navigation systems as the core of the direct-georeferencing technique in mobile mapping is continuously driving research and development activities. The effective and sufficient utilization of images is among the most recent subjects of scientific research and high-tech industry development. In this particular field, York University’s Earth Observation Laboratory (EOL) is engaged in the study of image-aided inertial integrated navigation as the natural continuation of its past research in multi-sensor integrated kinematic positioning and navigation (Qian et al. 2012; Wang et al. 2015).
An image-aided inertial navigation system (IAINS) implies that the errors of an inertial navigator are estimated via a Kalman filter using measurements derived from images. Image-based navigation algorithms, such as visual odometry (VO) (Konolige et al. 2011; Scaramuzza and Fraundorfer 2011; Gopaul et al. 2014, 2015) or visual Simultaneous Localization and Mapping (SLAM) (Durrant-Whyte and Bailey 2006; Williams and Reid 2010; Lategahn et al. 2011; Alcantarilla et al. 2012), usually assume that the camera system is calibrated prior to use and that the calibration parameters do not change over time. The internal camera parameters (focal length, principal point and lens distortion) and the external camera parameters (baseline and relative orientation between the cameras, and lever-arms and boresight angles with respect to the inertial measurement unit (IMU)) are required to relate the image coordinates to the object coordinates in the scene. The process of estimating these parameters is referred to as camera calibration.
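For concreteness, the internal/external split described above can be written as plain containers. This is a hypothetical illustration only; the type and field names are ours, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class InternalParams:
    """Internal (interior-orientation) parameters of one camera."""
    focal_length: float            # [px]
    principal_point: tuple         # (x0, y0) [px]
    radial_distortion: tuple       # (k1, k2, k3)

@dataclass
class ExternalParams:
    """External parameters relating the stereo rig to the IMU."""
    baseline: tuple                # right camera w.r.t. left camera [m]
    relative_orientation: tuple    # Euler angles, right w.r.t. left [deg]
    lever_arm: tuple               # reference camera w.r.t. IMU centre [m]
    boresight: tuple               # camera-to-body Euler angles [deg]
```

The autocalibration step of the paper estimates the first group plus the stereo baseline and relative orientation; the system calibration step estimates the lever-arm, boresight and absolute scale.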
The traditional camera calibration consists of capturing images of an array of reference targets, whose coordinates are accurately known, in a laboratory (Wolf and Dewitt 2000). However, these parameters can be invalidated during in-field operations, e.g., by camera assembly/disassembly, replacement, bumps (Teller et al. 2010) or significant temperature variations. Recently, many developments have focused on in-field camera autocalibration (or self-calibration) for image-inertial systems. Autocalibration refers to the determination of the camera parameters from a sequence of overlapping images without necessarily setting up ground control points (GCPs) or special calibration targets. Typically, the autocalibration process is performed in a bundle adjustment (BA) (Triggs et al. 1999) or in a SLAM framework (Civera et al. 2009; Kelly and Sukhatme 2009; Kelly et al. 2011; Keivan and Sibley 2014). It involves the simultaneous estimation of the positions and orientations of the camera, the positions of the stationary landmarks, and the calibration parameters of the camera. The corresponding mathematical equation system, which models the parameters through the available measurements, is usually solved using nonlinear least-squares, the Levenberg-Marquardt algorithm or a Kalman filter. These methods, however, are computationally expensive due to the very large number of landmark position parameters.
Accordingly, this paper proposes a novel camera calibration method that precisely calibrates the internal and external camera parameters of a GPS/INS/stereo camera system without landmark position parameters. The method applies the three-view scale-restraint equation (Bethel 2003; Ghosh 2005), with which the measurements are processed exclusively in the image space, free of landmark parameters; it therefore does not demand large memory and computation resources. The remainder of the paper is organized as follows. The Related work section overviews the related work. The novel algorithm is then proposed in the Two-step camera calibration method section, followed by test results using the simulated and real data in the Test results and analysis section. The Conclusions section ends the paper with discussion and conclusions.
Related work
Bender et al. (2013) presented an in-flight graph-based BA approach for the system calibration between a rigidly mounted camera and an inertial navigation system. Image point features and the GPS-aided INS position and orientation solution were used as measurements. Their method simultaneously computed the internal camera parameters as well as the 6-DOF transformation (i.e., lever arms and relative orientation) between the two systems. However, their method also required at least one GCP in order to recover the z-component of the lever-arm. Kelly and Sukhatme (2009) proposed a camera-IMU self-calibration method within the SLAM framework, implemented with an unscented Kalman filter. The lever-arms and mounting angles, the IMU gyroscope and accelerometer biases, the local gravity vector and the landmarks could all be recovered from camera and IMU measurements alone. However, they assumed that the internal camera parameters were known beforehand. Mirzaei and Roumeliotis (2008) presented a similar tightly-coupled approach using an iterative extended Kalman filter, but it required known landmark positions.
The methods in Bender et al. (2013), Kelly and Sukhatme (2009) and Kelly et al. (2011) implement structure-from-motion (SfM) and contain stationary landmark parameters. SfM algorithms, which compute 3D coordinates from 2D image correspondences, have some disadvantages. The 3D Cartesian coordinates of distant objects are biased (Sibley et al. 2005) and are not well represented by Gaussian distributions (Civera et al. 2008). Similar problems arise when the baseline length between the stereo cameras is small or, in monocular vision, when the distance between consecutive frames is small (Scaramuzza and Fraundorfer 2011). Furthermore, the inclusion of landmark positions in the parameter vector has two main drawbacks. First, the BA and SLAM implementations require a good initial guess, which can be difficult to obtain, especially in monocular vision and when landmarks are far away. Second, the number of landmark parameters can be very large, which makes the estimation difficult and computationally expensive. Efforts to reduce the computational load were introduced in Dang et al. (2009), where the 3D landmark position was decomposed into a 1D feature depth parameter by algebraically eliminating the x and y components using the equations from the stereo pair. However, it still required the estimation of the landmark depth, a parameter not particularly useful in the calibration procedure.
Autocalibration algorithms require a minimal constraint to define the network datum, which can be imposed through a minimum-constraint or free-network adjustment, or through explicit minimal control points (Remondino and Fraser 2006). In the free-network case, the absolute scale of the camera system cannot be known without additional information. Kelly et al. (2011) focused on determining the absolute scale of both the scene and the baseline in a stereo rig using GPS measurements. Their approach was similar to photogrammetric BA and structure-from-motion algorithms. They could recover the baseline and relative orientation between the two cameras and the lever-arms between the GPS antenna and the reference camera. As in their previous method (Kelly and Sukhatme 2009), they assumed that the internal camera parameters were known beforehand. Three or more overlapping image frames are required to estimate the camera motion on a common scale; structure-free motion algorithms typically rely on three-view constraints (Yu et al. 2006; Indelman et al. 2013) for the same reason. The advantage is obvious: being free of landmark position parameters, they yield more efficient algorithms.
The proposed method consists of two steps. First, the three-view scale-restraint equation is used to perform a free-network autocalibration of the stereo camera system; this places all images on a common scale. Then, the GPS/INS solution is applied to recover the absolute scale, as well as the boresight angles and lever-arms with respect to the IMU.
Methods
The proposed method rests on the following assumptions:

- The object points in the scene are stationary;
- The raw measurements from the sensors are synchronized;
- The GPS/INS blended solution has been processed;
- The GPS/INS position is referenced at the center of the IMU.
Reference frames
The navigation frame (n-frame) is a frame that moves with the vehicle, with its origin located at a predefined point on the vehicle. Its z-axis is normal to the reference ellipsoid and points downwards, while its x- and y-axes point towards geodetic North and East, respectively, forming a right-handed Cartesian coordinate system.
The n′-frame has the same origin as the n-frame. Its orientation is arbitrary but fixed with respect to the n-frame.
The body frame (b-frame) shares the same origin with the n-frame. Its x-axis points along the vehicle’s longitudinal axis and its z-axis points down, while its y-axis completes a right-handed coordinate system.
The camera frame (c-frame) is the frame in which the image measurements are taken. Its origin is at the perspective center of the reference camera. Its x-axis and y-axis are parallel to the columns and rows of the charge-coupled device (CCD) sensor, while its z-axis points away from the CCD sensor to form a right-handed coordinate system. The camera system is assumed to be rigidly mounted on the vehicle. Hereafter, the left camera is taken as the reference camera of the stereo system.
Measurement equations
Collinearity equations
where r is the radial distance from the principal point to the image point \( \left({r}^2={\overline{x}}^2+{\overline{y}}^2={\left({x}_i-{x}_o\right)}^2+{\left({y}_i-{y}_o\right)}^2\right) \), Δf is the focal length error, (Δx_0, Δy_0) is the principal point error, k_i and p_i are the coefficients of radial and decentering distortion, respectively, and A_i are the affine deformation parameters. Most of the radial lens distortion is generally accounted for by the second term k_2 r^4 (Barazzetti et al. 2011). The k_3 term, and even a k_4 term, are typically included for higher-accuracy applications and wide-angle lenses. Here, the decentering distortion and affine deformation models are not applied because they are generally very small; furthermore, their errors are absorbed by other terms, for example the principal point (Fraser 2013).
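As an illustration of the radial model retained here (the k_1 r^2 + k_2 r^4 + k_3 r^6 polynomial about the principal point), a minimal sketch might look as follows. The function name is hypothetical, and the coefficient units follow the autocalibration parameter table (px, px^{-2}, px^{-4}, px^{-6}):

```python
import numpy as np

def radial_distortion_correction(x, y, x0, y0, k1, k2, k3):
    """Correct image coordinates for radial lens distortion.

    Applies the radial polynomial k1*r^2 + k2*r^4 + k3*r^6 about the
    principal point (x0, y0) and removes the resulting shift.
    """
    xb = x - x0            # coordinates reduced to the principal point
    yb = y - y0
    r2 = xb ** 2 + yb ** 2  # squared radial distance
    dr = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x - xb * dr, y - yb * dr

# Example with coefficients of the order reported in the simulation tables
xc, yc = radial_distortion_correction(500.0, 300.0, 320.0, 240.0,
                                      5.0e-7, 4.0e-13, 4.5e-19)
```

Note that the correction moves the point towards the principal point for positive coefficients, consistent with removing barrel distortion.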
Scale restraint equation
where \( {\mathbf{d}}_{12}^n={\mathbf{x}}_1^n\times {\mathbf{x}}_2^n \) and \( {\mathbf{d}}_{23}^n={\mathbf{x}}_2^n\times {\mathbf{x}}_3^n \). Equation (6) is the scale restraint equation, which forces the independent scale factors for the common ray between stereo pair 1–2 and stereo pair 2–3 to be equal (Bethel 2003). This equation is mainly used in the successive relative orientation of image pairs and in scale transfer.
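The condition can be illustrated numerically. The sketch below, with hypothetical helper names, triangulates the shared ray x2 once from pair 1–2 and once from pair 2–3 by least-squares and returns the difference of the two scale factors, which is the quantity Eq. (6) forces to zero; the paper's closed form uses the cross products d12 and d23 instead of an explicit triangulation:

```python
import numpy as np

def ray_scale(b, xa, xb):
    """Scale factor of ray xb within a pair with baseline b.

    Solves la * xa = b + lb * xb in the least-squares sense for
    (la, lb) and returns lb, the scale of the shared ray xb.
    """
    A = np.column_stack([xa, -xb])              # 3x2 design matrix
    la, lb = np.linalg.lstsq(A, b, rcond=None)[0]
    return lb

def scale_restraint_residual(b12, b23, x1, x2, x3):
    """Difference between the two scale factors of the common ray x2.

    The residual vanishes when pairs 1-2 and 2-3 agree on the scale
    of the shared ray, which is the condition the SRE enforces.
    """
    # Pair 1-2: l1*x1 = b12 + l2*x2  ->  scale of x2 seen from pair 1-2
    l2_from_12 = ray_scale(b12, x1, x2)
    # Pair 2-3: l3*x3 - l2*x2 = -b23 ->  scale of x2 seen from pair 2-3
    l2_from_23 = ray_scale(-b23, x3, x2)
    return l2_from_12 - l2_from_23
```

With consistent geometry the residual is zero; any disagreement in scale between the two pairs shows up directly in this quantity.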
Camera autocalibration
Autocalibration parameters (L and R denote left and right camera)
Parameter  Description 

Δf _{ L }, Δf _{ R }  Correction for focal lengths [px] 
Δx_{L,0}, Δy_{L,0}; Δx_{R,0}, Δy_{R,0}  Correction for principal points [px] 
k_{L,1}, k_{L,2}, k_{L,3}; k_{R,1}, k_{R,2}, k_{R,3}  Radial lens distortion parameters [px^{−2}, px^{−4}, px^{−6}] 
\( {\mathbf{b}}_{LR}^c \)  Stereo baseline vector [m] 
\( {C}_{cR}^c \)  Right camera to left camera DCM defined by Euler angles \( {\boldsymbol{\theta}}_{cR}^c\left[ \deg \right] \) 
\( \varDelta {\mathbf{X}}_{L,k,k-1}^{n^{\prime }} \)  Position difference of the left camera between two consecutive frames [m] 
\( {C}_{c,k}^{n^{\prime }} \)  Camera to n’frame DCM defined by Euler angles \( {\boldsymbol{\theta}}_{c,k}^{n^{\prime }}\left[ \deg \right] \) 
Point features can be extracted using the Harris corner detector (Harris and Stephens 1988) and matching can be performed using the Sum of Absolute Differences (SAD) in an 11 × 11 window. To improve the matching results between stereo pairs, the search is performed along the epipolar lines (Bin Rais et al. 2003). Furthermore, to improve the matching between the consecutive frames, the locations of the features in the current frame can be predicted from the previous frame using the inertial sensors (Veth et al. 2006).
where \( {\mathbf{x}}_{L,k}^{n^{\prime }}={C}_{c,k}^{n^{\prime }}{\mathbf{x}}_{L,k}^c \), \( {\mathbf{x}}_{R,k}^{n^{\prime }}={C}_{c,k}^{n^{\prime }}{C}_{cR}^c{\mathbf{x}}_{R,k}^{cR} \) and \( \varDelta {\mathbf{X}}_{R,k,k-1}^{n^{\prime }}=\varDelta {\mathbf{X}}_{L,k,k-1}^{n^{\prime }}+\left({C}_{c,k}^{n^{\prime }}-{C}_{c,k-1}^{n^{\prime }}\right){\mathbf{b}}_{LR}^c \). In autocalibration, the orientation of the camera with respect to the n-frame is not required and can be put aside; at this point the global frame is set to the n′-frame. In order to obtain a free-network adjustment, one component of the baseline vector \( {\mathbf{b}}_{LR}^c \) must be held fixed (leaving 2 dof) and the orientation of one frame must be fixed (ideally \( {\boldsymbol{\theta}}_{c,k=1}^{n^{\prime }}=\mathbf{0} \)). This fixes both the scale and the orientation of the system. Note that \( \varDelta {\mathbf{X}}_{L,k}^{n^{\prime }} \) and \( {C}_{c,k}^{n^{\prime }} \) contain the transport rate error in this formulation; under the assumption that the calibration area spans at most a few hundred meters, this effect is negligibly small.
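The feature extraction and matching described earlier (Harris corners matched by SAD in an 11 × 11 window along epipolar lines) can be sketched for rectified images as follows. The function name and the disparity search range are our assumptions, not from the paper:

```python
import numpy as np

def sad_match(left, right, row, col, half=5, max_disp=64):
    """Find the best horizontal match for a left-image feature via SAD.

    Assumes rectified images, so the epipolar line is the same row in
    the right image; scans disparities 0..max_disp with an 11x11
    window (half = 5) and returns the disparity of the smallest SAD.
    """
    patch = left[row-half:row+half+1, col-half:col+half+1].astype(np.int32)
    best_d, best_score = 0, np.inf
    for d in range(0, max_disp + 1):
        c = col - d                    # candidate column in the right image
        if c - half < 0:               # window would leave the image
            break
        cand = right[row-half:row+half+1, c-half:c+half+1].astype(np.int32)
        score = np.abs(patch - cand).sum()   # sum of absolute differences
        if score < best_score:
            best_d, best_score = d, score
    return best_d
```

In practice, as noted above, the search window between consecutive frames can additionally be narrowed by predicting feature locations from the inertial sensors.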
Camera system calibration
Equations (11) and (12) equate the GPS/INS information \( \left(\varDelta {\mathbf{X}}_{GPS-INS,k}^n,{C}_{b, GPS-INS,k}^n\right) \) with the autocalibration estimates \( \varDelta {\mathbf{X}}_{L,k}^{n^{\prime }} \) and \( {C}_{c,k}^{n^{\prime }} \). All seven parameters can be solved for by least-squares.
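Equations (11) and (12) are not reproduced here, but the scale/rotation part of this seven-parameter fit can be illustrated with a Kabsch/Umeyama-style closed-form alignment between corresponding displacement sequences. This is a simplified sketch under our own assumptions; it omits the lever-arm terms carried by the full model:

```python
import numpy as np

def align_scale_rotation(dx_cam, dx_ins):
    """Estimate scale s and rotation R with dx_ins ~ s * R @ dx_cam.

    dx_cam, dx_ins: (N, 3) arrays of corresponding displacement
    vectors (camera-derived and GPS/INS-derived). Closed-form
    least-squares solution via SVD of the cross-covariance.
    """
    H = dx_cam.T @ dx_ins                      # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # enforce a proper rotation
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = (S * np.diag(D)).sum() / (dx_cam ** 2).sum()   # LS scale
    return s, R
```

The recovered scale plays the role of s_c in the tables below, and the rotation corresponds to the boresight DCM; the lever-arm components require the additional terms of Eqs. (11) and (12).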
Computational complexity of COL and SRE
This section compares the number of parameters and the number of floating-point operations (flops) of the COL and SRE autocalibration algorithms.
Dimension of the parameter vector (COL vs. SRE)
COL  SRE  

Number of image frames  n _{ x }  n _{ x } 
Number of observed landmarks  n _{ lm }  n _{ lm } 
Focal length, principal point  2 × 3  2 × 3 
Lens distortion (k_1, k_2, k_3)  2 × 3  2 × 3 
Stereo baseline and relative orientation  2 + 3  2 + 3 
Camera position and orientation  6(n_x − 1)  6(n_x − 1) 
Landmark parameters  3n _{ lm }  0 
Total  11 + 6n _{ x } + 3n _{ lm }  11 + 6n _{ x } 
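The totals in the table can be checked with a few lines. The frame and landmark counts used below are illustrative values chosen to reproduce the simulated-data totals reported later (563 and 8417 parameters):

```python
def param_counts(n_frames, n_landmarks):
    """Parameter-vector sizes for the COL and SRE adjustments.

    Both share 17 calibration unknowns (2x3 interior, 2x3 distortion,
    2 + 3 baseline/relative orientation) plus 6*(n_frames - 1) pose
    unknowns, i.e. 11 + 6*n_frames in total; COL additionally carries
    3 unknowns per landmark, SRE carries none.
    """
    sre = 11 + 6 * n_frames
    col = sre + 3 * n_landmarks
    return col, sre

# Illustrative counts consistent with the simulated-data totals
col, sre = param_counts(92, 2618)
```

The landmark term 3 n_lm dominates for long sequences, which is exactly the cost the SRE formulation removes.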
The flop count is the total number of textbook multiplication and addition operations required to obtain a least-squares (LS) solution. The factors taken into account in the analysis are the number of matched stereo points (i.e., the number of measurements), the number of image frames, the number of landmarks in view and the percentage overlap between consecutive frames. The percentage overlap reflects the camera rate and the velocity and angular rate of the camera. Furthermore, COL employs the LS algorithm in the explicit form (i.e., z = h(x)) to estimate the parameters, while SRE uses the implicit LS form (i.e., h(x, z) = 0), where x is the parameter vector, z is the measurement vector and h(.) is the functional model. The flop counts of the two therefore differ for a given number of measurements and parameters.
As expected, the plot shows that COL uses more flops than SRE. As the percentage overlap increases, the number of flops in COL decreases because the number of matched stereo pairs per landmark becomes larger; given the same number of measurements, the number of landmark parameters therefore becomes smaller. As the percentage overlap increases, the number of flops in SRE increases because more matrix inversion operations are needed in the implicit LS algorithm. The accuracy analysis is presented in the Autocalibration results section.
Results and discussion
In this section, test results from the simulated, laboratory and real data are presented. Simulations were performed to validate the proposed SRE autocalibration algorithm and to show its performance (in both computation and accuracy) in comparison with the COL autocalibration method. Finally, results from land vehicle data are presented.
Results from the simulated data
Autocalibration accuracy analysis
Autocalibration results
Left camera lens distortion parameters
Parameter  True value  COL  SRE1  SRE2 

Δf (px)  −2  −1.825  −2.713  −2.318 
±0.438  ±0.875  ±0.417  
Δx _{ 0 } (px)  2.5  2.393  2.665  2.432 
±0.230  ±0.309  ±0.142  
Δy _{ 0 } (px)  −3  −3.232  −2.851  −3.061 
±0.219  ±0.266  ±0.122  
k _{ 1 } (px^{−2})  5.0e^{−7}  5.03e^{−07}  5.07e^{−07}  5.06e^{−07} 
±6.47e^{−09}  ±6.97e^{−09}  ±3.21e^{−09}  
k _{ 2 } (px^{−4})  4.0e^{−13}  7.16e^{−14}  6.49e^{−13}  3.92e^{−13} 
±1.03e^{−13}  ±1.04e^{−13}  ±4.75e^{−14}  
k _{ 3 } (px^{−6})  4.5e^{−19}  1.78e^{−18}  4.89e^{−18}  3.72e^{−18} 
±4.86e^{−19}  ±4.89e^{−19}  ±2.20e^{−19} 
Right camera lens distortion parameters
Parameter  True value  COL  SRE1  SRE2 

Δf (px)  +2  2.082  2.534  2.078 
±0.441  ±0.884  ±0.421  
Δx _{ 0 } (px)  −2  −2.003  −2.250  −2.029 
±0.226  ±0.319  ±0.146  
Δy _{ 0 } (px)  1  0.391  1.217  1.233 
±0.217  ±0.243  ±0.112  
k _{ 1 } (px^{−2})  5.0e^{−07}  5.08e^{−07}  5.04e^{−07}  5.06e^{−07} 
±6.72e^{−09}  ±8.02e^{−09}  ±3.76e^{−09}  
k _{ 2 } (px^{−4})  4.0e^{−13}  2.85e^{−13}  2.38e^{−13}  3.60e^{−13} 
±1.08e^{−13}  ±1.28e^{−13}  ±6.03e^{−14}  
k _{ 3 } (px^{−6})  4.5e^{−19}  9.01e^{−19}  5.89e^{−18}  5.82e^{−18} 
±5.21e^{−19}  ±6.22e^{−19}  ±2.93e^{−19} 
Relative orientation of right camera w.r.t left camera
Parameter  True value  COL  SRE1  SRE2 

\( {b}_{LR.x}^c\left(\mathrm{m}\right) \)  0.01  0.011  0.007  0.007 
±0.001  ±0.002  ±0.001  
\( {b}_{LR.y}^c\left(\mathrm{m}\right) \) ^{a}  0.65  0.650  0.650  0.650 
\( {b}_{LR.z}^c\left(\mathrm{m}\right) \)  −0.1  −0.013  −0.013  −0.012 
±0.004  ±0.004  ±0.002  
\( {\theta}_{cR,x}^c\left( \deg \right) \)  −0.25  −0.257  −0.266  −0.260 
±0.007  ±0.008  ±0.004  
\( {\theta}_{cR,y}^c\left( \deg \right) \)  0.5  0.504  0.500  0.494 
±0.008  ±0.014  ±0.006  
\( {\theta}_{cR,z}^c\left( \deg \right) \)  0  0.002  −0.003  −0.002 
±0.002  ±0.002  ±0.001 
Number of points and parameters
COL  SRE1  SRE2  

Number of stereo points  17,074  17,074  77,945 
Number of parameters  8417  563  563 
log_{10} (flops)  12.4  8.8  9.4 
Results from road test data
IMU440CA Specification
Angular Rate  Bias Stability [deg/h]  <10.0 
Angle Random Walk [deg/√hr]  <4.5  
Acceleration  Bias Stability [mg]  <1.0 
Velocity Random Walk [m/s/√hr]  <1.0 
Autocalibration results
Left camera lens distortion parameters
Parameter  COL  SRE1  SRE2 

Δf (px)  −1.651  −1.815  −1.785 
±0.468  ±0.961  ±0.458  
Δx _{ 0 } (px)  0.049  0.157  0.178 
±0.134  ±0.177  ±0.112  
Δy _{ 0 } (px)  −0.245  −0.262  −0.155 
±0.756  ±0.819  ±0.452  
k _{ 1 } (px^{−2})  −3.63e^{−07}  −3.59e^{−07}  −3.46e^{−07} 
±7.42e^{−09}  ±7.51e^{−09}  ±3.73e^{−09}  
k _{ 2 } (px^{−4})  −7.45e^{−12}  −8.61e^{−13}  −6.61e^{−13} 
±1.03e^{−13}  ±1.02e^{−13}  ±5.80e^{−14}  
k _{ 3 } (px^{−6})  8.02e^{−18}  5.97e^{−18}  6.64e^{−18} 
±3.45e^{−19}  ±4.81e^{−19}  ±3.54e^{−19} 
Right camera lens distortion parameters
Parameter  COL  SRE1  SRE2 

Δf (px)  1.141  1.244  1.050 
±0.468  ±0.968  ±0.442  
Δx _{ 0 } (px)  1.652  1.577  1.601 
±0.132  ±0.174  ±0.106  
Δy _{ 0 } (px)  1.451  1.407  1.305 
±0.725  ±0.780  ±0.345  
k _{ 1 } (px^{−2})  −3.34e^{−7}  −3.82e^{−07}  −3.62e^{−07} 
±7.45e^{−09}  ±7.71e^{−09}  ±3.84e^{−09}  
k _{ 2 } (px^{−4})  −1.09e^{−12}  −4.65e^{−13}  −4.46e^{−13} 
±9.84e^{−14}  ±1.04e^{−13}  ±7.14e^{−14}  
k _{ 3 } (px^{−6})  7.26e^{−18}  4.32e^{−18}  6.54e^{−18} 
±4.42e^{−19}  ±4.83e^{−19}  ±3.72e^{−19} 
Relative orientation of right camera w.r.t left camera
Parameter  COL  SRE1  SRE2 

\( {b}_{LR.x}^c\left(\mathrm{m}\right) \)  0.002 ± 0.001  0.004 ± 0.001  0.003 ± 0.001 
\( {b}_{LR.y}^c\left(\mathrm{m}\right) \)  0.65  0.65  0.65 
\( {b}_{LR.z}^c\left(\mathrm{m}\right) \)  −0.002 ± 0.001  −0.001 ± 0.001  −0.001 ± 0.001 
\( {\theta}_{cR,x}^c\left( \deg \right) \)  0.250 ± 0.010  0.247 ± 0.014  0.251 ± 0.009 
\( {\theta}_{cR,y}^c\left( \deg \right) \)  −0.211 ± 0.003  −0.220 ± 0.004  −0.220 ± 0.003 
\( {\theta}_{cR,z}^c\left( \deg \right) \)  −0.093 ± 0.002  −0.092 ± 0.002  −0.091 ± 0.002 
Number of points, parameters and flops
COL  SRE1  SRE2  

Number of stereo points  26,564  26,564  109,652 
Number of parameters  9260  617  617 
log_{10} (flops)  13.0  9.0  9.5 
System calibration results
Lever-arm, scale and boresight
Parameter  COL  SRE1  SRE2 

\( l{a}_{L,x}^b\left(\mathrm{m}\right) \)  −0.037 ± 0.016  −0.025 ± 0.026  −0.065 ± 0.013 
\( l{a}_{L,y}^b\left(\mathrm{m}\right) \)  0.352 ± 0.019  0.334 ± 0.031  0.331 ± 0.018 
\( l{a}_{L,z}^b\left(\mathrm{m}\right) \)  −0.018 ± 0.047  −0.167 ± 0.090  −0.093 ± 0.040 
s _{ c }  0.983 ± 0.001  0.989 ± 0.001  0.981 ± 0.001 
\( {\theta}_{c,x}^b\left( \deg \right) \)  90.445 ± 0.010  90.721 ± 0.013  90.654 ± 0.008 
\( {\theta}_{c,y}^b\left( \deg \right) \)  −0.195 ± 0.014  −0.161 ± 0.029  −0.174 ± 0.011 
\( {\theta}_{c,z}^b\left( \deg \right) \)  −90.401 ± 0.031  −90.365 ± 0.054  −90.452 ± 0.024 
Difference between the estimated and the measured lever-arm components
Measured leverarm (m)  COL (m)  SRE1 (m)  SRE2 (m) 

−0.060  0.023 ± 0.016  0.035 ± 0.026  −0.005 ± 0.013 
0.325  0.027 ± 0.019  0.009 ± 0.031  0.006 ± 0.018 
−0.050  0.032 ± 0.047  −0.117 ± 0.090  −0.043 ± 0.040 
Conclusions
This paper presented a novel two-step camera calibration method for a GPS/INS/stereo camera integrated kinematic positioning and navigation system. The first step performs the camera autocalibration for a stereo system by employing two scale-restraint equations to constrain the matched features from two consecutive stereo pairs. The lens distortion parameters, the up-to-scale baseline length and the relative orientation between the two cameras are estimated using the least-squares method. The second step performs the system calibration, where the autocalibration estimates are fused with the blended GPS/INS solution to recover the camera lever-arms, the absolute scale of the camera and the boresight angles. The main advantage of the proposed method is that it is free of landmark parameters, which results in computation and memory savings. There are two main drawbacks in employing the scale-restraint equation instead of the collinearity equations for stereo autocalibration. First, the accuracy cannot be increased by performing loop closures when the same scene is revisited. Second, the scale-restraint equation is highly nonlinear, so the LS estimator can diverge if a good approximation of the parameters is not available.
The results from the simulated and real road test data showed that the proposed autocalibration method requires fewer computational resources to achieve equal or better accuracy than the traditional collinearity equations, despite using more measurements.
Declarations
Acknowledgement
The authors would like to acknowledge the financial support through research grants provided by the Natural Sciences and Engineering Research Council (NSERC) of Canada under its RGPIN Program. We would also like to thank Applanix Corp. for the use of their POSGNSS software.
Authors’ contributions
NG carried out the camera autocalibration and system calibration research. NG conceived, designed, implemented and tested the algorithms presented in this paper. JW and NG collected the GPS, IMU and image data used for the tests. NG drafted the manuscript. JW reviewed and edited the manuscript. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 Alcantarilla PF, Yebes JJ, Almazàn J, Bergasa LM (2012) On combining visual SLAM and dense scene flow to increase the robustness of localization and mapping in dynamic environments. In: Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, Minnesota, USA, 14–18 May 2012, pp 1290–1297. doi:10.1109/ICRA.2012.6224690
 Barazzetti L, Mussio L, Remondino F, Scaioni M (2011) Targetless camera calibration. In: ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXVIII-5/W16, pp 335–342. doi:10.5194/isprsarchives-XXXVIII-5-W16-335-2011
 Bender D, Schikora M, Sturm J, Cremers D (2013) Graph-based bundle adjustment for INS-camera calibration. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-1/W2, UAV-g 2013, Rostock. doi:10.5194/isprsarchives-XL-1-W2-39-2013
 Bethel JS (2003) Photogrammetry and remote sensing. In: Chen WF, Liew JYR (eds) The Civil Engineering Handbook, 2nd edn, Chapter 56. CRC Press, Boca Raton, p 2170
 Brown DC (1971) Close-range camera calibration. Photogramm Eng 37(8):855–866
 Bin Rais N, Khan HA, Kamran F, Jamal H (2003) A new algorithm of stereo matching using epipolar geometry. In: Proceedings of the 7th International Multi Topic Conference (INMIC 2003), Islamabad, Pakistan, December 2003, pp 21–24. doi:10.1109/INMIC.2003.1416609
 Civera J, Davison AJ, Montiel JMM (2008) Inverse depth parametrization for monocular SLAM. IEEE Trans Robot 24(5):932–945. doi:10.1109/TRO.2008.2003276
 Civera J, Bueno DR, Davison AJ, Montiel JMM (2009) Camera self-calibration for sequential Bayesian structure from motion. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2009), Kobe. doi:10.1109/ROBOT.2009.5152719
 Dang T, Hoffmann C, Stiller C (2009) Continuous stereo self-calibration by camera parameter tracking. IEEE Trans Image Process 18(7):1536–1550. doi:10.1109/TIP.2009.2017824
 Durrant-Whyte H, Bailey T (2006) Simultaneous localization and mapping (SLAM): part I the essential algorithms. IEEE Robot Autom Mag 13(2):99–110. doi:10.1109/MRA.2006.1638022
 Fraser CS (2013) Automatic camera calibration in close-range photogrammetry. Photogramm Eng Remote Sens 79(4):381–388
 Ghosh SK (2005) Fundamentals of Computational Photogrammetry. Concept Publishing Company, New Delhi, pp 117–119
 Gopaul NS, Wang JG, Hu B (2014) Discrete EKF with pairwise time-correlated measurement noise for image-aided inertial integrated navigation. ISPRS Ann Photogramm Remote Sens Spat Inf Sci II-2:61–66. doi:10.5194/isprsannals-II-2-61-2014
 Gopaul NS, Wang JG, Hu B (2015) Multi-frame visual odometry in image-aided inertial navigation system. In: Sun J et al (eds) China Satellite Navigation Conference (CSNC) 2015 Proceedings. Lecture Notes Electr Eng 342(3):649–658. doi:10.1007/978-3-662-46632-2_57
 Harris C, Stephens M (1988) A combined corner and edge detector. In: Proceedings of the Fourth Alvey Vision Conference, pp 147–151
 Indelman V, Melim A, Dellaert F (2013) Incremental light bundle adjustment for robotics navigation. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo. doi:10.1109/IROS.2013.6696615
 Kelly J, Sukhatme GS (2009) Visual-inertial simultaneous localization, mapping and sensor-to-sensor self-calibration. In: Proceedings of the 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), Daejeon, South Korea, 15–18 December 2009, pp 360–368. doi:10.1109/CIRA.2009.5423178
 Kelly J, Matthies LH, Sukhatme GS (2011) Simultaneous mapping and stereo extrinsic parameter calibration using GPS measurements. In: Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011, pp 279–286. doi:10.1109/ICRA.2011.5980443
 Keivan N, Sibley G (2014) Constant-time monocular self-calibration. In: Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO) 2014, Bali, Indonesia, 5–10 December 2014, pp 1590–1595. doi:10.1109/ROBIO.2014.7090561
 Konolige K, Agrawal M, Solà J (2011) Large scale visual odometry for rough terrain. Springer Tracts in Advanced Robotics 66:201–212. doi:10.1007/978-3-642-14743-2_18
 Lategahn H, Geiger A, Kitt B (2011) Visual SLAM for autonomous ground vehicles. In: Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011, pp 1732–1737. doi:10.1109/ICRA.2011.5979711
 Mirzaei FM, Roumeliotis SI (2008) A Kalman filter-based algorithm for IMU-camera calibration: observability analysis and performance evaluation. IEEE Trans Robot 24(5):1143–1156. doi:10.1109/TRO.2008.2004486
 Qian K, Wang JG, Gopaul NS, Hu B (2012) Low cost multi-sensor kinematic positioning and navigation system with Linux/RTAI. J Sens Actuator Netw 1(3):166–182
 Remondino F, Fraser C (2006) Digital camera calibration methods: considerations and comparisons. In: The International Archives of the Photogrammetry and Remote Sensing, Commission V Symposium, Image Engineering and Vision Metrology, IAPRS 36(5):266–272, Dresden, 25–27 September 2006
 Scaramuzza D, Fraundorfer F (2011) Visual odometry Part I & II: the first 30 years and fundamentals. IEEE Robot Autom Mag 18(4):80–92
 Sibley G, Matthies L, Sukhatme G (2005) Bias reduction and filter convergence for long range stereo. In: 12th International Symposium of Robotics Research, San Francisco, pp 285–294. doi:10.1007/978-3-540-48113-3_26
 Teller S, Koch O, Walter MR, Huang AS (2010) Ground robot navigation using uncalibrated cameras. In: Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, USA, 3–7 May 2010, pp 2423–2430. doi:10.1109/ROBOT.2010.5509325
 Triggs B, McLauchlan P, Hartley R, Fitzgibbon A (1999) Bundle adjustment - a modern synthesis. In: Proceedings of the International Workshop on Vision Algorithms (ICCV '99). Springer-Verlag, pp 298–372. doi:10.1007/3-540-44480-7_21
 Veth M, Raquet J, Pachter M (2006) Stochastic constraints for efficient image correspondence search. IEEE Trans Aerosp Electron Syst 42(3):973–982. doi:10.1109/TAES.2006.4439212
 Wang JG, Qian K, Hu B (2015) An unconventional full tightly-coupled multi-sensor integration for kinematic positioning and navigation. In: Sun J et al (eds) China Satellite Navigation Conference (CSNC) 2015 Proceedings. Lecture Notes Electr Eng 342(3):753–765. doi:10.1007/978-3-662-46632-2_65
 Williams B, Reid I (2010) On combining visual SLAM and visual odometry. In: Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, USA, 3–7 May 2010, pp 3494–3500. doi:10.1109/ROBOT.2010.5509248
 Wolf PR, Dewitt BA (2000) Elements of Photogrammetry with Applications in GIS, 3rd edn. ISBN 0072924543
 Yu YK, Wong KH, Chang MMY, Or SH (2006) Recursive camera-motion estimation with the trifocal tensor. IEEE Trans Syst Man Cybern B Cybern 36(5):1081–1090. doi:10.1109/TSMCB.2006.874133