Slide 1: Introduction to Kalman Filters
Michael Williams, 5 June 2003

Slide 2: Overview
- The problem: why do we need Kalman filters?
- What is a Kalman filter?
- Conceptual overview
- The theory of the Kalman filter
- Simple example

Slide 3: The Problem
- System state cannot be measured directly
- Need to estimate it "optimally" from the measurements
[Block diagram: External Controls -> System (a black box, with its own error sources) -> System State (desired but not known) -> Measuring Devices (with measurement error sources) -> Observed Measurements -> Estimator -> Optimal Estimate of System State]

Slide 4: What is a Kalman Filter?
- A recursive data processing algorithm
- Generates the optimal estimate of the desired quantities given the set of measurements
- Optimal? For a linear system with white Gaussian errors, the Kalman filter is the "best" estimate based on all previous measurements; for a non-linear system, optimality is qualified
- Recursive? It doesn't need to store all previous measurements and reprocess all the data at each time step
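The "recursive" point is worth making concrete: each new estimate is formed from the previous estimate and the latest measurement alone, so the full measurement history never needs to be stored. A minimal sketch in Python, using the standard incremental-mean formula as the simplest recursive estimator (this running average is my illustration, not an example from the slides):

```python
def update_mean(prev_mean, k, z_k):
    """Recursive running average: the k-th estimate uses only the
    previous estimate and the k-th measurement, never the history."""
    return prev_mean + (z_k - prev_mean) / k

measurements = [2.0, 4.0, 9.0]
est = 0.0
for k, z in enumerate(measurements, start=1):
    est = update_mean(est, k, z)
print(est)  # 5.0, equal to the batch mean of all three measurements
```

The Kalman filter has exactly this shape, except that the weight on the new measurement (the Kalman gain) is chosen from the error covariances rather than fixed at 1/k.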
Slide 5: Conceptual Overview
- A simple example to motivate the workings of the Kalman filter
- Theoretical justification comes later; for now, focus on the concept
- Important: prediction and correction

Slide 6: Conceptual Overview
- Lost on the 1-dimensional line
- Position y(t)
- Assume Gaussian-distributed measurements

Slide 7: Conceptual Overview
- Sextant measurement at t1: mean = z1, variance = σ²_z1
- Optimal estimate of position: ŷ(t1) = z1
- Variance of the error in the estimate: σ²_x(t1) = σ²_z1
- Boat in the same position at time t2: predicted position is z1

Slide 8: Conceptual Overview
- So we have the prediction ŷ⁻(t2)
- GPS measurement at t2: mean = z2, variance = σ²_z2
- Need to correct the prediction with the measurement to get ŷ(t2)
- Closer to the more trusted measurement: linear interpolation?

Slide 9: Conceptual Overview
- The corrected mean ŷ(t2) is the new optimal estimate of position
- The new variance is smaller than either of the previous two variances

Slide 10: Conceptual Overview
Lessons so far:
- Make a prediction based on previous data: ŷ⁻, σ⁻
- Take a measurement: z_k, σ_z
- Optimal estimate ŷ = prediction + (Kalman gain) × (measurement − prediction)
- Variance of estimate = variance of prediction × (1 − Kalman gain)

Slide 11: Conceptual Overview
- At time t3, the boat moves with velocity dy/dt = u
- Naïve approach: shift the probability distribution to the right to get the prediction ŷ⁻(t3)
- This would work if we knew the velocity exactly (a perfect model)

Slide 12: Conceptual Overview
- Better to assume an imperfect model by adding Gaussian noise: dy/dt = u + w
- The distribution for the prediction ŷ⁻(t3) both moves and spreads out

Slide 13: Conceptual Overview
- Now take a measurement z(t3) at t3
- Once again, correct the prediction, same as before, to get ŷ(t3)
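The prediction/correction blend described above reduces to a few lines of scalar arithmetic: the gain is the ratio of the prediction variance to the total variance, so the estimate lands closer to whichever Gaussian is more trusted. A sketch in Python (the variable names and the sextant/GPS numbers are my own, not from the slides):

```python
def correct(pred_mean, pred_var, z, z_var):
    """Blend a Gaussian prediction with a Gaussian measurement.
    The gain K weights the residual; the corrected variance shrinks."""
    K = pred_var / (pred_var + z_var)        # Kalman gain
    mean = pred_mean + K * (z - pred_mean)   # prediction + K * residual
    var = pred_var * (1 - K)                 # smaller than pred_var
    return mean, var

# Sextant-style prediction: y = 10 with variance 4;
# GPS-style measurement: z = 12 with variance 1
mean, var = correct(10.0, 4.0, 12.0, 1.0)
print(mean, var)  # approximately 11.6 and 0.8
```

Note that the corrected variance (0.8) is smaller than both input variances (4 and 1), exactly the lesson of slide 9.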
Slide 14: Conceptual Overview
Lessons learnt from the conceptual overview:
- Initial conditions: ŷ_{k−1} and σ_{k−1}
- Prediction (ŷ⁻_k, σ⁻_k): use the initial conditions and a model (e.g. constant velocity) to make a prediction
- Measurement: take the measurement z_k
- Correction (ŷ_k, σ_k): use the measurement to correct the prediction by blending the prediction and the residual; always a case of merging only two Gaussians
- Result: an optimal estimate with smaller variance

Slide 15: Theoretical Basis
Process to be estimated:
  y_k = A y_{k−1} + B u_k + w_{k−1}
  z_k = H y_k + v_k
Process noise w has covariance Q; measurement noise v has covariance R.
Kalman filter:
Predicted: ŷ⁻_k is the estimate based on measurements at previous time steps
  ŷ⁻_k = A ŷ_{k−1} + B u_k
  P⁻_k = A P_{k−1} Aᵀ + Q
Corrected: ŷ_k has the additional information of the measurement at time k
  K = P⁻_k Hᵀ (H P⁻_k Hᵀ + R)⁻¹
  ŷ_k = ŷ⁻_k + K (z_k − H ŷ⁻_k)
  P_k = (I − K H) P⁻_k

Slide 16: Blending Factor
- If we are sure about the measurements: the measurement error covariance R decreases to zero, so K increases and weights the residual more heavily than the prediction
- If we are sure about the prediction: the prediction error covariance P⁻_k decreases to zero, so K decreases and weights the prediction more heavily than the residual

Slide 17: Theoretical Basis
Prediction (time update):
  (1) Project the state ahead: ŷ⁻_k = A ŷ_{k−1} + B u_k
  (2) Project the error covariance ahead: P⁻_k = A P_{k−1} Aᵀ + Q
Correction (measurement update):
  (1) Compute the Kalman gain: K = P⁻_k Hᵀ (H P⁻_k Hᵀ + R)⁻¹
  (2) Update the estimate with measurement z_k: ŷ_k = ŷ⁻_k + K (z_k − H ŷ⁻_k)
  (3) Update the error covariance: P_k = (I − K H) P⁻_k
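The time-update and measurement-update equations above translate directly into a few lines of NumPy. A sketch keeping the slides' notation (y for the state, more commonly written x in the literature); the constant-velocity matrices at the bottom are my example setup, not something defined in the slides:

```python
import numpy as np

def kalman_step(y, P, z, u, A, B, H, Q, R):
    """One predict/correct cycle of the Kalman filter equations."""
    # Prediction (time update)
    y_pred = A @ y + B @ u                     # project the state ahead
    P_pred = A @ P @ A.T + Q                   # project the error covariance ahead
    # Correction (measurement update)
    S = H @ P_pred @ H.T + R                   # residual covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    y_new = y_pred + K @ (z - H @ y_pred)      # blend prediction and residual
    P_new = (np.eye(len(y)) - K @ H) @ P_pred  # update error covariance
    return y_new, P_new

# Example (my choice): 1-D position/velocity state, measuring position only
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.zeros((2, 1))
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[1.0]])
y, P = np.zeros(2), np.eye(2)
y, P = kalman_step(y, P, z=np.array([1.2]), u=np.zeros(1), A=A, B=B, H=H, Q=Q, R=R)
```

The corrected position y[0] lands between the prediction (0) and the measurement (1.2), and the position variance P[0, 0] comes out smaller than the predicted one, matching the blending-factor discussion.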
Slide 18: Quick Example (Constant Model)
[Same block diagram as slide 3: External Controls -> System (with error sources) -> System State -> Measuring Devices (with error sources) -> Observed Measurements -> Estimator -> Optimal Estimate of System State]

Slide 19: Quick Example (Constant Model)
Prediction:
  ŷ⁻_k = ŷ_{k−1}
  P⁻_k = P_{k−1}
Correction:
  K = P⁻_k (P⁻_k + R)⁻¹
  ŷ_k = ŷ⁻_k + K (z_k − ŷ⁻_k)
  P_k = (I − K) P⁻_k

Slide 20: Quick Example (Constant Model)
[Figure: filter estimate plotted against the noisy measurements; the plot itself is not recoverable from this text dump]
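The constant-model recursion can be run directly to watch the error covariance P_k shrink. A sketch in Python; the true value, noise level, seed, and sample count are my choices for illustration:

```python
import random

def constant_model_filter(zs, R, y0=0.0, P0=1.0):
    """Scalar Kalman filter for a constant state (A = H = 1, B = 0, Q = 0)."""
    y, P = y0, P0
    history = []
    for z in zs:
        # Prediction is trivial: y_pred = y, P_pred = P (the state is constant)
        K = P / (P + R)        # Kalman gain
        y = y + K * (z - y)    # correct the estimate with the measurement
        P = (1 - K) * P        # error covariance shrinks every step
        history.append((y, P))
    return history

random.seed(1)
true_value = -0.377
zs = [true_value + random.gauss(0.0, 0.1) for _ in range(50)]
hist = constant_model_filter(zs, R=0.1 ** 2)
print(hist[-1])  # final estimate near -0.377, with P far below its initial 1.0
```

Increasing R in this run makes K smaller at every step, so the filter is slower to believe the measurements and P_k converges more slowly, which is exactly the observation on slide 22.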
Slide 21: Quick Example (Constant Model)
[Figure: convergence of the error covariance P_k; the plot itself is not recoverable from this text dump]

Slide 22: Quick Example (Constant Model)
- A larger value of R, the measurement error covariance, indicates poorer-quality measurements
- The filter is then slower to believe the measurements: slower convergence

Slide 23: References
1. Kalman, R. E. 1960. "A New Approach to Linear Filtering and Prediction Problems", Transactions of the ASME, Journal of Basic Engineering, pp. 35-45 (March 1960).
2. Maybeck, P. S. 1979. "Stochastic Models, Estimation, and Control, Volume 1", Academic Press, Inc.
3. Welch, G. and Bishop, G. 2001. "An Introduction to the Kalman Filter", http://www.cs.unc.edu/~welch/kalman/