Kalman Filter
1. What is the Kalman filter
(What is the Kalman Filter?)
Before studying the Kalman filter, let us first see why it is called "Kalman". Like other famous theories (the Fourier transform, the Taylor series, and so on), Kalman is also a person's name; unlike them, however, he is a modern figure!
Kalman's full name is Rudolf Emil Kalman, a Hungarian-born mathematician, born in Budapest, Hungary, in 1930. He received his bachelor's degree in 1953 and his master's degree in 1954, both from MIT, and his Ph.D. from Columbia University in 1957. The Kalman filter we are about to study comes from his doctoral thesis and his 1960 paper "A New Approach to Linear Filtering and Prediction Problems". If you are interested in the original paper, you can download it here:
http://www.cs.unc.edu/~welch/media/pdf/kalman1960.pdf
Simply put, the Kalman filter is an "optimal recursive data processing algorithm". For a large class of problems it is optimal, the most efficient, and even the most useful. It has been widely applied for more than 30 years, including in robot navigation, control, sensor data fusion, and even military radar systems and missile tracking. In recent years it has also been applied to computer image processing, for example face recognition, image segmentation, and image edge detection.
2. Introduction to the Kalman filter
(Introduction to the Kalman Filter)
To make the Kalman filter easier to understand, this article uses an intuitive, descriptive approach rather than the mathematical formulas and symbols found in most reference books. That said, its five formulas are its core content. Combined with a modern computer, a Kalman filter program is actually quite simple, as long as you understand its five formulas.
Before introducing those five formulas, let us first work step by step through the example below.
Suppose the object we want to study is the temperature of a room. Based on your experience, you judge that the temperature of this room is constant, that is, the temperature in the next minute equals the temperature in this minute (assuming we use one minute as the time unit). However, you are not 100% confident in your judgment; the temperature may drift a few degrees up or down. We regard these deviations as white Gaussian noise, that is, the deviations are uncorrelated over time and follow a Gaussian distribution. In addition, we place a thermometer in the room, but the thermometer is not accurate either; its readings deviate from the actual value. We also regard these deviations as white Gaussian noise.
OK, now we have two temperature values for this room: the value you predict from experience (the system's predicted value) and the thermometer's reading (the measured value). Below we will use these two values, together with their respective noise, to estimate the actual temperature of the room.
Suppose we want to estimate the actual temperature at time k. First you predict the temperature at time k from the temperature at time k-1. Because you believe the temperature is constant, your predicted temperature at time k is the same as at time k-1; suppose it is 23 degrees, and the deviation of the Gaussian noise of this prediction is 5 degrees (the 5 is obtained as follows: if the deviation of the optimal temperature estimate at time k-1 was 3, and your prediction itself carries an uncertainty of 4 degrees, then the squares add up and the square root gives 5). Then you read the temperature at time k from the thermometer; suppose it is 25 degrees, with a deviation of 4 degrees. Now we have two temperature values for estimating the actual temperature at time k: the predicted 23 degrees and the measured 25 degrees. What is the actual temperature? Should we trust ourselves or trust the thermometer? Which one to trust more can be decided from their covariances. Because Kg^2 = 5^2 / (5^2 + 4^2), Kg = 0.78, so we estimate the actual temperature at time k as 23 + 0.78 * (25 - 23) = 24.56 degrees. You can see that because the thermometer's covariance is relatively small (we trust the thermometer more), the estimated optimal temperature value is biased toward the thermometer's reading.
Now we have obtained the optimal temperature estimate at time k. The next step is to move on to time k+1 and make a new optimal estimate. So far, nothing particularly new has appeared. Oh, before entering time k+1, we still have to compute the deviation of the optimal value at time k (24.56 degrees). It is computed as follows: ((1 - Kg) * 5^2)^0.5 = 2.35. The 5 here is the deviation of the 23-degree value you predicted at time k above, and the resulting 2.35 is the deviation of the optimal temperature estimate at time k, used when entering time k+1 (it corresponds to the 3 above).
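To make the arithmetic above concrete, here is a minimal Python sketch of this single fusion step, using the same numbers as the text (a 23-degree prediction with deviation 5 and a 25-degree reading with deviation 4). The variable names are mine, chosen only for illustration.

```python
import math

# Numbers taken from the worked example in the text
x_pred, dev_pred = 23.0, 5.0   # predicted temperature and its deviation
z_meas, dev_meas = 25.0, 4.0   # thermometer reading and its deviation

# Gain as defined in the text: Kg^2 = 5^2 / (5^2 + 4^2)
kg = math.sqrt(dev_pred**2 / (dev_pred**2 + dev_meas**2))   # about 0.78

# Optimal estimate: move the prediction toward the measurement by Kg
x_est = x_pred + kg * (z_meas - x_pred)                     # about 24.56 degrees

# Deviation of the optimal estimate, carried into time k+1
dev_est = math.sqrt((1 - kg) * dev_pred**2)                 # about 2.35 degrees

print(f"estimate: {x_est:.2f} degrees, deviation: {dev_est:.2f}")
```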
That is how the Kalman filter works: it keeps recursing on the covariance to estimate the optimal temperature value. It runs very fast, and it only needs to keep the covariance from the previous moment. The Kg above is the Kalman gain. Its value changes from moment to moment. Isn't that magical!
Let us now go a step further and discuss the Kalman filter for real engineering systems.
3. Kalman filter algorithm
(The Kalman Filter Algorithm)
In this section, we will describe the Kalman filter as derived by Dr. Kalman. The description below involves a few basic concepts, including probability, random variables, the Gaussian distribution, and the state-space model. A detailed proof of the Kalman filter, however, cannot be given here.
First, let us introduce a discrete-time controlled process. The system can be described by a linear stochastic difference equation:
X(k) = A X(k-1) + B U(k) + W(k)
together with the system's measurement equation:
Z(k) = H X(k) + V(k)
In the two equations above, X(k) is the system state at time k and U(k) is the control input to the system at time k. A and B are system parameters; for a multi-dimensional system they are matrices. Z(k) is the measurement at time k, and H is the parameter of the measurement system; for a multi-measurement system, H is a matrix. W(k) and V(k) represent the process noise and the measurement noise, respectively. They are assumed to be white Gaussian noise with covariances Q and R, respectively (here we assume they do not change with the system state). Under the above conditions (a linear stochastic difference system with Gaussian white process and measurement noise), the Kalman filter is the optimal information processor. Below we use these quantities, together with their covariances, to estimate the optimal output of the system (similar to the temperature example in the previous section).
First we use the system's process model to predict the next state of the system. Suppose the present state is time k; based on the system model, we can predict the present state from the previous state:
X(k|k-1) = A X(k-1|k-1) + B U(k) ......... (1)
In equation (1), X(k|k-1) is the result predicted from the previous state, X(k-1|k-1) is the optimal result for the previous state, and U(k) is the control input for the present state; if there is no control input, it can be 0.
So far, the system state has been updated, but the covariance corresponding to X(k|k-1) has not yet been updated. We use P to denote the covariance:
P(k|k-1) = A P(k-1|k-1) A' + Q ......... (2)
In equation (2), P(k|k-1) is the covariance corresponding to X(k|k-1), P(k-1|k-1) is the covariance corresponding to X(k-1|k-1), A' denotes the transpose of A, and Q is the covariance of the system process. Equations (1) and (2) are the first two of the five Kalman filter formulas, namely the prediction of the system.
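As an illustration, a minimal NumPy sketch of this prediction step might look as follows; the function name and argument layout are my own choice, not part of the original article.

```python
import numpy as np

def predict(x_prev, P_prev, A, B, u, Q):
    """Time update, equations (1) and (2).

    x_prev, P_prev: optimal estimate X(k-1|k-1) and its covariance P(k-1|k-1)
    A, B: system matrices; u: control input U(k); Q: process noise covariance
    Returns the predicted state X(k|k-1) and covariance P(k|k-1).
    """
    x_pred = A @ x_prev + B @ u        # equation (1)
    P_pred = A @ P_prev @ A.T + Q      # equation (2)
    return x_pred, P_pred
```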
Now that we have the predicted result for the present state, we collect the measurement of the present state. Combining the predicted value and the measured value, we can obtain the optimal estimate X(k|k) of the present state k:
X(k|k) = X(k|k-1) + Kg(k) (Z(k) - H X(k|k-1)) ......... (3)
where Kg is the Kalman gain:
Kg(k) = P(k|k-1) H' / (H P(k|k-1) H' + R) ......... (4)
So far, we have obtained the optimal estimate X(k|k) for state k. However, to keep the Kalman filter running until the system process ends, we also need to update the covariance of X(k|k) for state k:
P(k|k) = (I - Kg(k) H) P(k|k-1) ......... (5)
where I is the identity matrix; for a single-model, single-measurement system, I = 1. When the system enters state k+1, P(k|k) becomes the P(k-1|k-1) of equation (2). In this way, the algorithm keeps running recursively.
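Continuing the sketch above, the measurement update, equations (3) to (5), could be written as follows (again a minimal NumPy illustration with names of my own choosing):

```python
import numpy as np

def update(x_pred, P_pred, z, H, R):
    """Measurement update, equations (3), (4) and (5).

    x_pred, P_pred: predicted state X(k|k-1) and covariance P(k|k-1)
    z: measurement Z(k); H: measurement matrix; R: measurement noise covariance
    Returns the optimal estimate X(k|k) and its covariance P(k|k).
    """
    S = H @ P_pred @ H.T + R                              # innovation covariance
    Kg = P_pred @ H.T @ np.linalg.inv(S)                  # equation (4): Kalman gain
    x_est = x_pred + Kg @ (z - H @ x_pred)                # equation (3)
    P_est = (np.eye(P_pred.shape[0]) - Kg @ H) @ P_pred   # equation (5)
    return x_est, P_est
```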
This covers the basic principle of the Kalman filter: equations (1), (2), (3), (4), and (5) are its five basic formulas. Based on these five formulas, the filter is easy to implement on a computer.
Below, I will use a program to give an example of actual operation.
4. Simple example
(A Simple Example)
Here, combining sections 2 and 3, we give a very simple example to illustrate the working process of the Kalman filter. The example further elaborates on section 2 and also gives a concrete result. Following the description in section 2, we treat the room as a system and model this system. Of course, our model does not need to be very accurate. We know that the room temperature at this moment equals the temperature at the previous moment, so A = 1. There is no control input, so U(k) = 0. Therefore we obtain:
X(k|k-1) = X(k-1|k-1) ......... (6)
Equation (2) can be rewritten as:
P(k|k-1) = P(k-1|k-1) + Q ......... (7)
Because the measured value comes from the thermometer, which corresponds directly to the temperature, H = 1. Equations (3), (4), and (5) can then be rewritten as:
X(k|k) = X(k|k-1) + Kg(k) (Z(k) - X(k|k-1)) ......... (8)
Kg(k) = P(k|k-1) / (P(k|k-1) + R) ......... (9)
P(k|k) = (1 - Kg(k)) P(k|k-1) ......... (10)
Now we simulate a set of measured values as input. Suppose the real temperature of the room is 25 degrees. I simulated 200 measurements; their mean is 25 degrees, but white Gaussian noise with a standard deviation of a few degrees has been added (the blue line in the figure).
To get the Kalman filter started, we need to give it initial values at time zero, namely X(0|0) and P(0|0). Their exact values do not matter too much; just pick something, because as the filter runs, X will gradually converge. However, do not choose 0 for P, as that may make the filter believe your given X(0|0) is already the optimal system value with no error, and the algorithm may then fail to converge. I chose X(0|0) = 1 degree and P(0|0) = 10.
The real temperature of the system is 25 degrees and is shown in the figure. The red line in the figure is the optimal estimate output by the Kalman filter (this result was obtained with Q = 1e-6 and R = 1e-1 in the algorithm).
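Here is a minimal Python sketch of the scalar recursion above, implementing equations (6) to (10) with the settings stated in the text (200 samples, true temperature 25 degrees, X(0|0) = 1, P(0|0) = 10, Q = 1e-6, R = 1e-1). The measurement noise level in the simulation is an assumption on my part; the text only says the readings are noisy.

```python
import numpy as np

# Settings stated in the text
n_samples = 200
true_temp = 25.0
Q, R = 1e-6, 1e-1          # process and measurement noise covariances
x_est, P = 1.0, 10.0       # initial values X(0|0) = 1 degree, P(0|0) = 10

# Simulated thermometer readings: true temperature plus Gaussian noise
# (the noise standard deviation of 0.5 degrees is an assumed value)
rng = np.random.default_rng(0)
measurements = true_temp + rng.normal(0.0, 0.5, n_samples)

estimates = []
for z in measurements:
    # Prediction, equations (6) and (7): the model says the temperature is constant
    x_pred = x_est
    P_pred = P + Q
    # Update, equations (8), (9) and (10)
    Kg = P_pred / (P_pred + R)
    x_est = x_pred + Kg * (z - x_pred)
    P = (1 - Kg) * P_pred
    estimates.append(x_est)

print(f"final estimate after {n_samples} steps: {x_est:.2f} degrees")
```

Plotting the measurements and the estimates against the 25-degree line reproduces the behaviour described above: the estimate quickly leaves the poor initial guess of 1 degree and then smooths the noisy readings around the true temperature.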