ScienceNet (科学网)


Tag: Kalman filtering (卡尔曼滤波)


Related blog posts

Two (adapted) quotes on "Control"
josh 2020-7-21 09:19
What we cannot control, we do not understand. — Adapted from Richard Feynman: “What I cannot create, I do not understand.” The best way to predict the future is to control it. — Adapted from: “The best way to predict the future is to invent it.” (See https://quoteinvestigator.com/2012/09/27/invent-the-future/ )
Category: Engineering Cybernetics | 1834 reads | 0 comments
Personal LinkedIn profile
josh 2020-7-14 06:02
Link: https://www.linkedin.com/in/charlesfangsong/
Category: Engineering Cybernetics | 2002 reads | 0 comments
Adding the slides of the ISIT 2020 presentation (on a connection between feedback channel capacity and Kalman filtering)
josh 2020-7-3 09:09
Adding the slides of the presentation in the earlier post http://blog.sciencenet.cn/home.php?mod=space&uid=286797&do=blog&id=1240136 : Slides_ISIT2020_Song.pdf
Category: Engineering Cybernetics | 1671 reads | 0 comments
ISIT 2020 presentation video (on a connection between feedback channel capacity and Kalman filtering)
josh 2020-7-1 07:50
Song Fang, Quanyan Zhu. A connection between feedback capacity and Kalman filter for colored Gaussian noises. IEEE International Symposium on Information Theory (ISIT), 2020. Video: https://www.bilibili.com/video/BV19v411B7SG/ Paper: https://arxiv.org/abs/2001.03108
Category: Engineering Cybernetics | 1909 reads | 0 comments
A paper on the relation between feedback capacity and the Kalman filter
josh 2020-4-24 08:29
A Connection between Feedback Capacity and Kalman Filter for Colored Gaussian Noises. Link: https://arxiv.org/pdf/2001.03108.pdf
Category: Engineering Cybernetics | 1619 reads | 0 comments
[Repost] An article by Kalman, "father of the Kalman filter": Discovery and Invention
josh 2020-2-26 04:41
Recommended: Kalman's article "Discovery and Invention: The Newtonian Revolution in Systems Technology". Link: https://arc.aiaa.org/doi/abs/10.2514/2.6917
Category: The Art of Learning and Research | 1381 reads | 0 comments
[Repost] The Kyoto Prize lecture of Kalman, "father of the Kalman filter": What is System Theory?
josh 2020-2-22 23:48
Recommended: Kalman's Kyoto Prize lecture "What is System Theory?": https://www.kyotoprize.org/wp-content/uploads/2019/07/1985_A.pdf
Category: Engineering Cybernetics | 1310 reads | 0 comments
[Repost] PhD thesis: Towards Integrating Information and Control Theories: From Information-Theoretic Measures to System Performance Limitations
josh 2019-3-15 18:47
Towards Integrating Information and Control Theories: From Information-Theoretic Measures to System Performance Limitations (信息論與控制論之融合:從信息論測度到系統性能局限)

Link: https://scholars.cityu.edu.hk/en/theses/towards-integrating-information-and-control-theories-from-informationtheoretic-measures-to-system-performance-limitations(8f21be24-127e-495d-a0cc-4a6e728450ee).html

The thesis quotes many interesting passages (see also: http://blog.sciencenet.cn/blog-286797-1022865.html and http://blog.sciencenet.cn/blog-286797-1021452.html ):

Frontispiece

When one submerges the gourd bowl in water, there floats the gourd ladle. — Chinese proverb

Chapter 1

There is an obvious analogy between the problem of smoothing the data to eliminate or reduce the effect of tracking errors and the problem of separating a signal from interfering noise in communications systems. — R. B. Blackman, H. W. Bode, and C. E. Shannon, "Data Smoothing and Prediction in Fire-Control Systems," 1946

(We) become aware of the essential unity of the set of problems centring about communication, control, and statistical mechanics, whether in the machine or living tissue... We have decided to call the entire field of control and communication theory, whether in the machine or the animal, by the name Cybernetics. — N. Wiener, "Cybernetics," 1948

Fundamental limits are actually at the core of many fields of engineering, science and mathematics... Firstly, they evolve from basic axioms about the nature of the universe. Secondly, they describe inescapable performance bounds that act as benchmarks for practical systems. And thirdly, they are recognized as being central to the design of real systems. — M. M. Seron, J. H. Braslavsky, and G. C. Goodwin, "Fundamental Limitations in Filtering and Control," 1997

Chapter 2

The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work. — John von Neumann

The only way of discovering the limits of the possible is to venture a little way past them into the impossible. — Arthur C. Clarke

A theory is the more impressive the greater the simplicity of its premises, the more different kinds of things it relates, and the more generalized its area of applicability. Therefore the deep impression that classical thermodynamics made upon me. It is the only physical theory of universal content which I am convinced will never be overthrown, within the framework of applicability of its basic concepts. — Albert Einstein

Chapter 3

The idea of a statistical message source is central to Shannon's work. The study of random processes had entered into communication before his communication theory. There was a growing understanding of and ability to deal with problems of random noise... Wiener had dealt extensively with the extrapolation, interpolation, and smoothing of time series. Although Wiener's book was published in 1949, it had been available earlier in a wartime version known as the Yellow Peril (the cover was yellow). Shannon and Bode took considerable pains to put Wiener's work in a form more directly useful to them (and to many others). — J. R. Pierce, "The Early Days of Information Theory," 1973

We said before: "It feeds upon negative entropy," attracting, as it were, a stream of negative entropy upon itself, to compensate the entropy increase it produces by living and thus to maintain itself on a stationary and fairly low entropy level. — Erwin Schrödinger, "What is Life?," 1944

If one has really technically penetrated a subject, things that previously seemed in complete contrast, might be purely mathematical transformations of each other. — John von Neumann

Chapter 4

However, by building an amplifier whose gain is deliberately made, say 40 decibels higher than necessary (10000 fold excess on energy basis), and then feeding the output back on the input in such a way as to throw away that excess gain, it has been found possible to effect extraordinary improvement in constancy of amplification and freedom from nonlinearity. — H. S. Black, "Stabilized Feedback Amplifiers," 1934

In control and communication we are always fighting nature's tendency to degrade the organized and to destroy the meaningful; the tendency, as Gibbs has shown us, for entropy to increase. — N. Wiener, "The Human Use of Human Beings," 1950

All stable processes we shall predict. All unstable processes we shall control. — John von Neumann

Chapter 5

What's in a name? In the case of Shannon's measure the naming was not accidental. In 1961 one of us (Tribus) asked Shannon what he had thought about when he had finally confirmed his famous measure. Shannon replied: "My greatest concern was what to call it. I thought of calling it 'information', but the word was overly used, so I decided to call it 'uncertainty'. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name. In the second place, and more importantly, no one knows what entropy really is, so in a debate you will always have the advantage.'" — M. Tribus, E. C. McIrvine, "Energy and Information," 1971

The bottom line for mathematicians is that the architecture has to be right. In all the mathematics that I did, the essential point was to find the right architecture. It's like building a bridge. Once the main lines of the structure are right, then the details miraculously fit. The problem is the overall design. — Freeman Dyson

Far waters fail to quench near fires. — Chinese proverb

Chapter 6

I like to think of Bode's integrals as conservation laws. They state precisely that a certain quantity—the integrated value of the log of the magnitude of the sensitivity function—is conserved under the action of feedback. The total amount of this quantity is always the same. It is equal to zero for stable plant/compensator pairs, and it is equal to some fixed positive amount for unstable ones... This applies to every controller, no matter how it was designed. Sensitivity improvements in one frequency range must be paid for with sensitivity deteriorations in another frequency range, and the price is higher if the plant is open-loop unstable. — G. Stein, "Respect the Unstable," 2003

The average performance of any pair of algorithms across all possible problems is identical. This means in particular that if some algorithm's performance is superior to that of another algorithm over some set of optimization problems, then the reverse must be true over the set of all other optimization problems. — D. H. Wolpert, W. G. Macready, "No Free Lunch Theorems for Optimization," 1997

We know the past but cannot control it. We control the future but cannot know it. — Claude Shannon

Chapter 7

Consider the case where you are the controller and you observe samples of the process output whose average has been satisfactorily close to set point and that suffers only from white noise disturbances. Should you make an adjustment to the control output upon observing a sample of the process output that is not on set point? If the average of the process output is indeed nearly at the set point then any deviation, if it is really white or unautocorrelated, will be completely independent of the previous value of the control output and it will have no impact on subsequent disturbances. Therefore, if you should react to such a deviation, you would be wasting your time because the next observation will contain another deviation that has nothing to do with the previous deviation on which you acted. You, in fact, may make things worse... A feedback controller cannot decrease the standard deviation of the white noise riding on the process output. At best it can keep the average on set point. — D. M. Koenig, "Practical Control Engineering," 2009

In respect of military method, we have, firstly, Measurement; secondly, Estimation of quantity; thirdly, Calculation; fourthly, Balancing of chances; fifthly, Victory. — Sun Tzu, "The Art of War"

All the evidence shows that God was actually quite a gambler, and the universe is a great casino, where dice are thrown, and roulette wheels spin on every occasion. Over a large number of bets, the odds even out and we can make predictions... But over a very small number of rolls of the dice, the uncertainty principle is very important. — Stephen Hawking

Chapter 8

The world is continuous, but the mind is discrete. — David Mumford

Time is defined so that motion looks simple. — John Wheeler

If everything seems under control, you're just not going fast enough. — Mario Andretti

Chapter 9

Essentially, all models are wrong, but some are useful. — George E. P. Box

In theory, theory and practice are the same. In practice, they are not. — Albert Einstein

Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin. — John von Neumann

Chapter 10

An understanding of fundamental limitations is an essential element in all engineering. Shannon's early results on channel capacity have always had center court in signal processing. Strangely, the early results of Bode were not accorded the same attention in control. — K. J. Astrom, in G. Stein, "Respect the Unstable," 2003

If I turn toward a science not for external reasons such as earning an income, or for ambition, and also not — at least not exclusively — for the mere sportive joy and the fun of brain-acrobatics, then I must ask myself the question: what is the final goal that the science I am devoted to will and can reach? To what extent are its general results "true"? What is essential and what is based only on accident in its development? — Albert Einstein

As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired by ideas coming from "reality" it is beset with very grave dangers. It becomes more and more purely aestheticizing, more and more purely l'art pour l'art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities. In other words, at a great distance from its empirical source, or after much "abstract" inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it shows signs of becoming baroque, then the danger signal is up... In any event, whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the re-injection of more or less directly empirical ideas. I am convinced that this was a necessary condition to conserve the freshness and the vitality of the subject and that this will remain equally true in the future. — John von Neumann
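Stein's reading of Bode's integral as a conservation law (Chapter 6 above) can be checked numerically. The following is a sketch under stated assumptions, not part of the thesis: the open loop L(s) = 1/(s+1)^2 is my own illustrative choice; it is stable with relative degree two, so the integral of ln|S(jw)| over all frequencies should come out to zero for S = 1/(1 + L).

```python
import numpy as np

# Open loop L(s) = 1/(s+1)^2 (stable, relative degree 2);
# sensitivity S(s) = 1 / (1 + L(s)). Bode's integral says the
# integral of ln|S(jw)| over w in (0, inf) equals 0 here.
w = np.concatenate([np.linspace(1e-6, 50.0, 500_000),
                    np.linspace(50.0, 5000.0, 500_000)])
s = 1j * w
L = 1.0 / (s + 1.0) ** 2
log_abs_S = np.log(np.abs(1.0 / (1.0 + L)))

# Trapezoidal quadrature; the neglected tail beyond w = 5000 is ~1e-4.
integral = float(np.sum(0.5 * (log_abs_S[1:] + log_abs_S[:-1]) * np.diff(w)))
print(integral)  # close to 0: log-sensitivity is "conserved"
```

The negative area at low frequency (where feedback helps, |S| < 1) is exactly paid for by the positive area near and beyond crossover (where |S| > 1), which is the "waterbed" trade-off Stein describes.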
Category: Engineering Cybernetics | 1767 reads | 0 comments
A new framework for state estimation?!
JRoy 2018-12-14 14:59
From the late 1950s to the early 1960s, the development of aerospace technology raised a large number of optimal control problems for multi-input multi-output systems that classical control theory could no longer handle. The advent of the digital computer made the state-space description of Henri Poincaré (1854-1912) usable as a mathematical model of the controlled plant and as a tool for controller design and analysis. Thus arose modern control theory, centered on the maximum principle, dynamic programming, and the state-space method.

1. The classical state-space approach (State Space Model)

A state-space model consists of two parts: a state equation, describing the state the dynamic system moves to at a given time under the input variables, and an output (measurement) equation, relating the system output at a given time to the state and the inputs. In discrete time:

x_k = f_k(x_{k-1}, u_k),
y_k = h_k(x_k, v_k),

where k is the discrete time index, x_k the state, y_k the observation, u_k and v_k noise terms, f_k(·) the state model and h_k(·) the observation model.

The state-space model provides a convenient, effective framework for recursive Bayesian optimal estimation over time, and hence rests on a solid theoretical foundation. The pioneering work is the Kalman filter; see the review: Approximate Gaussian Conjugacy: Parametric Recursive Filtering under Nonlinearity, Multimodality, Uncertainty, and Constraint, and Beyond, Frontiers of Information Technology & Electronic Engineering, 2017, 18(12):1913-1939, LINK. Particularly worth mentioning: the classic 1964 TAC paper by Harvard professor and academician Yu-Chi Ho was among the earliest to explain the relation between the Kalman filter and Bayesian optimal estimation, which greatly helped the later flourishing of the Kalman filter over what is now nearly sixty years (tie a method to a great theory and it gains wings!): Ho, Y., Lee, R., 1964. A Bayesian approach to problems in stochastic estimation and control. IEEE Trans. Autom. Contr., 9(4):333-339.

The state-space model assumes the dynamic system has the hidden Markov model (HMM) property, i.e., x_k = f_k(x_{k-1}, u_k) above: given the present state of the system, its future is independent of its past. This brings great convenience to modeling and recursive computation. However, the HMM is quite restrictive, and its description of the real world is not necessarily accurate or even adequate; with the arrival of the sensor big-data era its drawbacks have become increasingly prominent. Today's sensors and operating conditions bear no comparison with the 1960s of Kalman and Ho: targets grow ever more cunning and are hard to model with a simple HMM. In particular, when the system statistics are missing (the target motion model is unknown, the process noise or even the observation noise model is unknown, and there are complex correlations, time delays and couplings), one simply cannot build a reasonably accurate, or even usable, state-space model.

2. Discarding the HMM

When sensors are ever more numerous and ever more accurate, can there be new solutions that drop the HMM altogether? See: "If I have hundreds or thousands of sensors, do I still need a dynamic model?" and "Easy multi-sensor multi-target detection and tracking!". These schemes mainly address situations with a completely unknown system background but large amounts of data.

Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful. -- Box, George E. P.; Norman R. Draper (1987). Empirical Model-Building and Response Surfaces, p. 74

3. A new data-driven framework

Since the classical approach lives and dies by the HMM, a better remedy than simply discarding it (too passive) is to find a substitute model that better follows the laws of nature and more accurately describes the real world. The following paper proposes such a framework to replace the HMM: Joint Smoothing, Tracking, and Forecasting Based on Continuous-Time Target Trajectory Fitting, IEEE Trans. Automation Science and Engineering, Oct. 2018. DOI: 10.1109/TASE.2018.2882641. @ IEEE Xplore; pre-print @ arXiv:1708.02196, Joint Smoothing and Tracking Based on Continuous-Time Target Trajectory Function Fitting. Source code is provided with the paper (link).

Abstract: This paper presents a joint trajectory smoothing and tracking framework for a specific class of targets with smooth motion. We model the target trajectory by a continuous function of time (FoT), which leads to a curve fitting approach that finds a trajectory FoT fitting the sensor data in a sliding time-window. A simulation study is conducted to demonstrate the effectiveness of our approach in tracking a maneuvering target, in comparison with conventional filters and smoothers.

The core of the new data-driven estimation framework (compared with the HMM-based classical state-space approach) is to replace the HMM with a continuous-time target trajectory function of time (FoT), x_k = f(t). The traditional estimation problems of filtering, smoothing and forecasting thereby become curve fitting and parameter learning over a continuous time window: the trajectory function is approximated by a parametric function F(t; C_k) ≈ f(t), where C_k are the parameters to be determined. Data-driven tools such as clustering, fitting and machine learning can then be used to solve (multi-)target detection, tracking and forecasting in complex scenarios, with the prospect of overcoming the classical difficulties: heavy dependence on target-model assumptions, delay in maneuver detection, and sensitivity to out-of-sequence data. As the figure shows: on the left, the classical filtering methods (KF: Kalman Filter; AGC: Approximate Gaussian Conjugacy; PF: Particle Filter; MHT: Multiple Hypothesis Tracking; FISST: Finite-Set Statistics; and so on, with nearly sixty years of development producing a wealth of theory and methods); on the right, the new data-driven paradigm (O2: Observation-Only; C4F: Clustering for Filtering; F4S: Fitting for Smoothing; FTC: Flooding-then-Clustering; T-FoT: Trajectory Function of Time). Both use the same observation model y_k = h_k(x_k, v_k), but different state models: the classical state-space approach uses the HMM, the new paradigm a trajectory FoT.

Curve fitting or regression analysis may sound computationally inefficient, inferior to recursive computation and thus unable to meet real-time requirements. In fact: for a linear observation system only a linear fit is needed; with the measurement error defined as a 2-norm Mahalanobis distance, the curve fit reduces to weighted least squares in closed form, and its computational efficiency beats the linear Kalman filter. For a nonlinear observation system, a fit such as a polynomial fit usually requires iterative approximation; there, parameter initialization is crucial, and initializing C_k = C_{k-1} + ρ_k can greatly accelerate the computation (even one or two gradient-descent steps may reach a converged parameter estimate), possibly making the fit faster than the extended Kalman filter (which must compute Jacobians). This may defy intuition; try it and see!

Going further, what if the system has constraints? They can still be handled effectively; see:

4. Constrained SSM and trajectory curve fitting

Single-Road-Constrained Positioning Based on Deterministic Trajectory Geometry, Tiancheng Li, IEEE Communications Letters, Volume 23, Issue 1, Jan. 2019, pp. 80-83. Source code is provided with the paper (link).

Abstract: We consider the single-road-constrained estimation problem for positioning a target that moves on a single, deterministic and exactly known trajectory. Based on the geometry of the trajectory curve, we cast the constrained estimation problem as an unconstrained problem with reduced state dimension. Two approaches are devised based on a Markov transition model for unscented Kalman filtering and a continuous function of time for (weighted) least-squares fitting, respectively. A popular simulation model has been used for demonstrating the performance of the proposed approaches in comparison to existing approaches.

Please see the paper; some screenshots of its key parts are given below.
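The fitting-based paradigm of Section 3 can be sketched in a few lines. This is a hypothetical minimal illustration, not the code released with the papers: the window length, polynomial degree and simulated scene are my own choices, and it covers only the linear-observation case, where the fit is a closed-form least squares.

```python
import numpy as np

def t_fot_estimate(times, obs, window=10, degree=2):
    """Sliding-window polynomial approximation F(t; C_k) of the trajectory FoT f(t).

    With linear observations the fit is an ordinary least-squares problem,
    solved in closed form by polyfit; the estimate at t_k is the fitted
    curve evaluated at t_k.
    """
    est = np.empty_like(obs)
    for k in range(len(obs)):
        lo = max(0, k - window + 1)        # sliding time window
        deg = min(degree, k - lo)          # avoid over-parameterized fits early on
        C_k = np.polyfit(times[lo:k + 1], obs[lo:k + 1], deg)
        est[k] = np.polyval(C_k, times[k])
    return est

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 101)
truth = 0.5 * t**2 - 2.0 * t + 3.0             # smooth 1-D trajectory
obs = truth + rng.normal(0.0, 1.0, t.size)     # noisy linear observations
est = t_fot_estimate(t, obs)
print(np.mean((est - truth)**2) < np.mean((obs - truth)**2))  # fitting reduces error
```

Note that no motion model (HMM) is assumed anywhere; only smoothness of the trajectory within the window is exploited, which is the essence of the T-FoT idea.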
Category: Research Notes | 6298 reads | 1 comment
Kalman filtering - 10 highlights: important and interesting points that are easily overlooked
Popularity 2 | JRoy 2017-9-23 07:55
Approximate Gaussian conjugacy: parametric recursive filtering under nonlinearity, multimodality, uncertainty, and constraint, and beyond

Authors: Tian-cheng Li, Jin-ya Su, Wei Liu, Juan M. Corchado
Affiliation: School of Sciences, University of Salamanca, Salamanca 37007, Spain
Corresponding emails: t.c.li@usal.es, J.Su2@lboro.ac.uk, w.liu@sheffield.ac.uk, corchado@usal.es
Key words: Kalman filter; Gaussian filter; time series estimation; Bayesian filtering; nonlinear filtering; constrained filtering; Gaussian mixture; maneuver; unknown inputs

Abstract: Since the landmark work of R. E. Kalman in the 1960s, considerable efforts have been devoted to time series state space models for a large variety of dynamic estimation problems. In particular, parametric filters that seek exact analytical estimates based on closed-form Markov-Bayes recursion, e.g., recursion from a Gaussian or Gaussian mixture (GM) prior to a Gaussian/GM posterior (termed Gaussian conjugacy in this paper), form the backbone for general time series filter design. Due to challenges arising from nonlinearity, multimodality (including target maneuvers), intractable uncertainties (such as unknown inputs and/or non-Gaussian noises) and constraints (including circular quantities), and so on, new theories, algorithms and technologies are continuously being developed in order to maintain, or approximate to be more precise, such a conjugacy. They have in large part contributed to the prospective developments of time series parametric filters in the last six decades. This paper reviews the state-of-the-art in distinctive categories and highlights some insights which may otherwise be overlooked. In particular, specific attention is paid to nonlinear systems with very informative observations, multimodal systems including GM posteriors and maneuvers, and intractable unknown inputs and constraints, to fill the voids in existing reviews/surveys. To go beyond a pure review, we also provide some new thoughts on alternatives to the first-order Markov transition model and on filter evaluation with regard to computing complexity.

10 highlights presented in the paper:

1. The CRLB (Cramer-Rao lower bound) limits only the variance of unbiased estimators; lower MSE (mean squared error) can be obtained by allowing a bias in the estimation, while ensuring that the overall estimation error is reduced.
2. The KF (Kalman filter) is conditionally biased for a non-zero process-noise realization in the given state sequence and is not an efficient estimator in a conditional sense, even in a linear Gaussian system.
3. Among all possible distributions of the observation noise w with a fixed covariance matrix, the CRLB for x attains its maximum when w is Gaussian, i.e., the Gaussian scenario is the "worst case" for estimating x.
4. For sufficiently precise measurements, none of the KF variants, including the KF itself, is based on an accurate approximation of the joint density. Conversely, for imprecise measurements all KF variants accurately approximate the joint density, and therefore the posterior density. Differences between the KF variants become evident for moderately precise measurements.
5. While the BCRLB (Bayesian Cramer-Rao lower bound) sets a best line (in the MMSE sense) that any unbiased sequential estimator can at best achieve, the O2 inference sets the bottom line that any "effective" estimator shall at worst achieve.
6. Many adaptive-model approaches proposed for MTT (maneuvering target tracking) may show superiority when the target indeed maneuvers but perform disappointingly, or even significantly worse than methods without an adaptive model, when there is actually no maneuver. We call this over-reaction due to adaptability.
7. The theoretically best achievable second-order error performance in target state estimation, namely the CRLB, is independent of knowledge (or the lack of it) of the observation noise variance.
8. Robust filtering is much more related to robustness with respect to statistical variations than to optimality with respect to a specified statistical model. Typically, the worst-case estimation error rather than the MSE is minimized in a robust filter; as a result, robustness is usually achieved by sacrificing performance on other criteria such as MSE and computing efficiency.
9. The standard structure of recursive filtering is based on an infinite impulse response (IIR): all observations prior to the present time affect the state estimate at the present time, so the filter suffers from legacy errors.
10. Computing speed matters!

Open-access page: http://www.jzus.zju.edu.cn/iparticle.php?doi=10.1631/FITEE.1700379
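Highlight 1 (a biased estimator can beat the CRLB in MSE) is easy to check numerically. A sketch under stated assumptions: we estimate the mean mu of a Gaussian from n samples; the sample mean is unbiased and attains the CRLB, while an oracle shrinkage factor (it uses the true mu, purely to expose the bias/variance trade-off) yields a lower MSE. All numbers here are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, trials = 0.1, 1.0, 10, 200_000
samples = rng.normal(mu, sigma, (trials, n))

crlb = sigma**2 / n                       # variance bound for unbiased estimators
xbar = samples.mean(axis=1)               # sample mean: unbiased, attains the CRLB
c = mu**2 / (mu**2 + sigma**2 / n)        # oracle shrinkage factor (assumes mu known)
shrunk = c * xbar                         # biased estimator

mse_unbiased = np.mean((xbar - mu)**2)    # Monte Carlo MSE, matches the CRLB
mse_biased = np.mean((shrunk - mu)**2)    # well below the CRLB
print(mse_unbiased, mse_biased)
```

The biased estimator trades a small squared bias (1 - c)^2 mu^2 for a large variance reduction c^2 sigma^2/n, which is exactly the mechanism the highlight describes.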
Category: Research Notes | 8636 reads | 3 comments
How the Kalman filter works
Popularity 1 | YF2015 2015-11-22 19:45
Before studying the Kalman filter, first look at why it is called "Kalman". Like other famous tools (the Fourier transform, Taylor series, and so on), "Kalman" is a person's name, but unlike them, he is a modern figure! Rudolf Emil Kalman, a Hungarian-born mathematician, was born in Budapest in 1930. He received his bachelor's and master's degrees in electrical engineering from MIT in 1953 and 1954, and his doctorate from Columbia University in 1957. The Kalman filter we are about to study originates from his doctoral work and his 1960 paper "A New Approach to Linear Filtering and Prediction Problems". If you are interested in the paper, it can be downloaded here: http://www.cs.unc.edu/~welch/kalman/media/pdf/Kalman1960.pdf

Simply put, the Kalman filter is an "optimal recursive data processing algorithm". For a large class of problems it is optimal, most efficient, and even most useful. It has been widely applied for more than 30 years, including robot navigation, control, sensor data fusion, and military applications such as radar systems and missile tracking. In recent years it has also been applied to computer image processing, for example face recognition, image segmentation, and edge detection.

2. Introduction to the Kalman Filter

To make the Kalman filter easier to understand, it is described here in intuitive terms rather than with the piles of mathematical formulas and symbols found in most textbooks. Its five equations, however, are the core. Combined with a modern computer, a Kalman filter program is actually quite simple, as long as you understand those five equations.

Before introducing the five equations, let us explore step by step through the following example. Suppose the object of study is the temperature of a room. Based on your experience you judge the temperature to be constant, i.e., the temperature in the next minute equals the temperature in this minute (taking one minute as the time unit). You do not trust your experience 100%: there may be a deviation of several degrees either way. We treat these deviations as white Gaussian noise, that is, deviations that are independent over time and Gaussian distributed. In addition, we place a thermometer in the room, but the thermometer is inaccurate too: the measured value deviates from the actual value. We likewise treat these deviations as white Gaussian noise.

Now, for any given minute, we have two temperature values for the room: your experience-based prediction (the system's prediction) and the thermometer's value (the measurement). We will combine these two values with their respective noises to estimate the actual room temperature.

Suppose we want to estimate the actual temperature at time k. First, predict the temperature at time k from the value at time k-1. Because you believe the temperature is constant, the predicted temperature at time k equals that at time k-1, say 23 degrees, and the standard deviation of its Gaussian noise is 5 degrees (the 5 is obtained as follows: if the deviation of the optimal estimate at time k-1 was 3, and your uncertainty about your own prediction is 4 degrees, then squaring, adding and taking the square root gives 5). Then you obtain the temperature at time k from the thermometer, say 25 degrees, with a deviation of 4 degrees.

We now have two values for estimating the actual temperature at time k: 23 degrees and 25 degrees. What is the actual temperature? Should we trust ourselves or the thermometer, and by how much? We can judge by their covariances. The gain weighs the prediction variance against the total variance: Kg = 5^2/(5^2 + 4^2) ≈ 0.61, so the estimated actual temperature at time k is 23 + 0.61*(25 - 23) ≈ 24.22 degrees. Because the thermometer's variance is smaller (the thermometer is trusted more), the optimal temperature estimate leans toward the thermometer's value.

We now have the optimal temperature estimate at time k; the next step is to enter time k+1 and perform a new optimal estimation. So far nothing recursive has appeared. Right: before entering time k+1, we must still compute the deviation of that optimal value (24.22 degrees) at time k. It is ((1 - Kg)*5^2)^0.5 ≈ 3.12. The 5 here is the deviation of your 23-degree prediction at time k, and the resulting 3.12 is the deviation, used on entering time k+1, of the optimal estimate made at time k (corresponding to the 3 above).

In this way the Kalman filter keeps recursing on the covariance to estimate the optimal temperature. It runs fast, and it keeps only the covariance from the previous time step. The Kg above is the Kalman gain; it changes its own value from moment to moment. Remarkable, is it not?

Now let us get down to business and discuss the Kalman filter for real engineering systems.

3. The Kalman Filter Algorithm

In this part we describe the Kalman filter due to Dr. Kalman. The description involves some basic concepts: probability, random variables, the Gaussian (normal) distribution, and state-space models. A detailed proof of the Kalman filter cannot be given here.

First, we introduce a discrete-time controlled process. The system can be described by a linear stochastic difference equation:

X(k) = A X(k-1) + B U(k) + W(k)

together with the system measurement:

Z(k) = H X(k) + V(k)

In the two equations, X(k) is the system state at time k and U(k) the control input at time k. A and B are system parameters; for multivariable systems they are matrices. Z(k) is the measurement at time k and H the parameter of the measurement system; for multi-measurement systems H is a matrix. W(k) and V(k) represent the process and measurement noise respectively. They are assumed to be white Gaussian noise with covariances Q and R respectively (assumed here not to change with the system state).

For a system satisfying these conditions (a linear stochastic system with white Gaussian process and measurement noise), the Kalman filter is the optimal information processor. We now use the predictions and measurements together with their covariances to estimate the optimal system output (analogous to the temperature example of the previous section).

First, use the system's process model to predict the next state. Let the current state be k; based on the system model, the current state can be predicted from the previous state:

X(k|k-1) = A X(k-1|k-1) + B U(k) ......... (1)

In (1), X(k|k-1) is the prediction from the previous state, X(k-1|k-1) is the optimal result of the previous state, and U(k) is the current control input; if there is no control input it can be 0.

The system result has now been updated, but the covariance corresponding to X(k|k-1) has not. Denoting covariance by P:

P(k|k-1) = A P(k-1|k-1) A' + Q ......... (2)

In (2), P(k|k-1) is the covariance of X(k|k-1), P(k-1|k-1) is the covariance of X(k-1|k-1), A' denotes the transpose of A, and Q is the process covariance. Equations (1) and (2) are the first two of the five Kalman filter equations: the prediction step.

With the prediction of the current state in hand, we collect the measurement of the current state. Combining the prediction and the measurement gives the optimal estimate X(k|k) of the current state k:

X(k|k) = X(k|k-1) + Kg(k) (Z(k) - H X(k|k-1)) ......... (3)

where Kg is the Kalman gain:

Kg(k) = P(k|k-1) H' / (H P(k|k-1) H' + R) ......... (4)

We now have the optimal estimate X(k|k) at state k. But for the Kalman filter to keep running until the process ends, we must also update the covariance of X(k|k):

P(k|k) = (I - Kg(k) H) P(k|k-1) ......... (5)

where I is the identity matrix; for a single-model single-measurement system, I = 1. When the system enters state k+1, P(k|k) becomes the P(k-1|k-1) of equation (2). Thus the algorithm runs recursively.

This essentially describes the principle of the Kalman filter; equations (1) through (5) are its five basic equations, from which a computer program is easy to implement. Next, a practical example run as a program...

4. A Simple Example

Here, combining sections 2 and 3, is a very simple example to illustrate how the Kalman filter works. It elaborates the example of section 2, accompanied by simulation results.

Following section 2, treat the room as a system and model it. The model need not be very precise. We take the room temperature to equal the temperature at the previous instant, so A = 1. There is no control input, so U(k) = 0. Hence:

X(k|k-1) = X(k-1|k-1) ......... (6)

Equation (2) becomes:

P(k|k-1) = P(k-1|k-1) + Q ......... (7)

Because the measured value comes from the thermometer and corresponds directly to the temperature, H = 1. Equations (3), (4) and (5) become:

X(k|k) = X(k|k-1) + Kg(k) (Z(k) - X(k|k-1)) ......... (8)
Kg(k) = P(k|k-1) / (P(k|k-1) + R) ......... (9)
P(k|k) = (1 - Kg(k)) P(k|k-1) ......... (10)

Now simulate a set of measurements as input. Suppose the true room temperature is 25 degrees. I simulated 200 measurements whose mean is 25 degrees, with added white Gaussian noise of a few degrees standard deviation (the blue line in the figure).

To start the Kalman filter we must supply two initial values at time zero, X(0|0) and P(0|0). Their exact values do not matter much, since X gradually converges as the filter runs. But P should generally not be set to 0, since that can make the filter trust the given X(0|0) completely as the optimum and prevent the algorithm from converging. I chose X(0|0) = 1 degree and P(0|0) = 10.

The true temperature of the system, 25 degrees, is shown in black. The red line is the optimal output of the Kalman filter (with Q = 1e-6 and R = 1e-1 in the algorithm).

Appendix: the MATLAB Kalman filter code:

% Script 1: general scalar model x(k) = a*x(k-1) + w(k-1), Y = c*x + V
clear
N = 200;
w = randn(1,N);
x = zeros(1,N);
a = 1;
for k = 2:N
    x(k) = a*x(k-1) + w(k-1);
end
V = randn(1,N);
Rvv = std(V)^2;    % measurement noise variance
Rww = std(w)^2;    % process noise variance
c = 0.2;
Y = c*x + V;
p = zeros(1,N); p1 = zeros(1,N); b = zeros(1,N); s = zeros(1,N);
for t = 2:N
    p1(t) = a^2*p(t-1) + Rww;                    % predicted covariance
    b(t) = c*p1(t)/(c^2*p1(t) + Rvv);            % gain
    s(t) = a*s(t-1) + b(t)*(Y(t) - c*a*s(t-1));  % update
    p(t) = p1(t) - c*b(t)*p1(t);                 % updated covariance
end
t = 1:N;
plot(t,s,'r',t,Y,'g',t,x,'b');

% Script 2: the temperature example, A = H = 1
clear
N = 200;
Qww = 0.01;              % process noise variance Q
x = 25*ones(1,N);        % true temperature (constant 25 degrees)
V = randn(1,N);
Rvv = std(V)^2;          % measurement noise variance R
Y = x + V;               % thermometer readings
p = zeros(1,N); Kg = zeros(1,N); Bs = zeros(1,N);
Bs(1) = 1;  p(1) = 10;   % X(0|0) = 1, P(0|0) = 10
for t = 2:N
    p1 = p(t-1) + Qww;                         % equation (7)
    Kg(t) = p1/(p1 + Rvv);                     % equation (9)
    Bs(t) = Bs(t-1) + Kg(t)*(Y(t) - Bs(t-1));  % equation (8), using the previous estimate
    p(t) = (1 - Kg(t))*p1;                     % equation (10)
end
t = 1:N;
plot(t,Bs,'r',t,Y,'g',t,x,'b');
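The same scalar example can also be run in Python; a minimal sketch, assuming the Q, R and initial values given above (the noise level of the simulated thermometer is my own choice):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200
truth = 25.0                           # constant room temperature
Z = truth + rng.normal(0.0, 0.3, N)    # simulated thermometer readings

Q, R = 1e-6, 1e-1                      # process / measurement covariances
x, P = 1.0, 10.0                       # X(0|0), P(0|0)

estimates = []
for z in Z:
    x_pred, P_pred = x, P + Q          # predict: equations (6), (7)
    Kg = P_pred / (P_pred + R)         # gain: equation (9)
    x = x_pred + Kg * (z - x_pred)     # update: equation (8)
    P = (1 - Kg) * P_pred              # covariance update: equation (10)
    estimates.append(x)

print(estimates[-1])                   # converges near the true 25 degrees
```

Note how the deliberately bad initial guess X(0|0) = 1 is washed out after the first update, because the large P(0|0) makes the first gain close to 1.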
Category: Science | 3507 reads | 2 comments
[Repost] Applying mathematical induction in GPS (a teaching-research paper, or course paper)
dengyh123 2014-7-7 09:06
Applying mathematical induction in GPS (a teaching-research paper, or course paper). Lishui University, Deng Yonghe. (A teaching-research result, equivalent to a student's course paper; it may also be called a teacher's course paper. In "A study of undergraduate course papers in Chinese universities" I argued that teachers should set an example and a model for students, which helps keep students' course papers free of copying and plagiarism. Here I write a model course paper for students, in the hope that they and their teachers will not squander the course paper.) Attachment: 9 数学归纳法在GPS中的应用.pdf
Category: Teaching and Research | 983 reads | 0 comments
[Repost] An explanation of the principle of the Kalman filter
tsingguo 2014-4-13 08:55
Source: http://blog.chinaunix.net/uid-26694208-id-3184442.html
Category: Adaptive Estimation | 8443 reads | 0 comments
Correspondence about "Kalman filtering"
Popularity 1 | TUGJAYZHAB | 2013-11-3 09:15
An anonymous blog friend, yesterday 22:18: Hello, Prof. Bai. I have recently been reading your papers on the hypersphere model. In "Exploring the Application of the Hypersphere Model to Grassland Monitoring Data Analysis" you mention Kalman filtering, and there is something I do not understand: how are the weights in the Kalman filtering formula determined? Should they be based on the prediction accuracy on historical data, or is there another method?

TUGJAYZHAB, yesterday 22:57: Hello, and thank you for the question. I am not used to corresponding with anonymous users; I prefer open discussion with blog friends under their real names. A brief answer: my teacher taught me to set the weight of each data source according to its variance. The larger the variance, the smaller the weight; the smaller the variance, the larger the weight. If the weights are hard to determine, the default is half and half.

Anonymous blog friend, 14 hours ago: Understood, thank you, Prof. Bai! I am currently a graduate student working on grassland monitoring experiments on the Qinghai-Tibet Plateau. Having read about your method, I would like to use your model to analyze and forecast the grassland data I have collected, which is why I took the liberty of messaging you. Please excuse me!

TUGJAYZHAB, just now: Fine. Since you are a graduate student, why not register on ScienceNet under your real name? You are welcome to register, so that we can discuss openly in depth and learn from and check each other.

Additional note: what my 2000 paper called "卡门滤波" should properly be called "卡尔曼滤波" (Kalman filtering); proof enough of my amateur ("民科") credentials, no? It is a mathematically mature method. My American advisor, Professor DONALD JAMESON (d. 1999), contributed to, revised, and developed its application in grassland monitoring. Its application within the multivariate vector multiplicative group can in turn be regarded as a further revision of the DONALD JAMESON version, that is, a multivariate-vector version of Kalman filtering. 2013-11-03. I now generally call it the "Bai-Jie time chain" (白-杰时间链).
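The weighting rule in the reply, heavier weight for the source with smaller variance, is standard inverse-variance (precision) weighting, which is also exactly what the Kalman gain computes when fusing a prediction with a measurement. A minimal sketch (function and variable names are mine, not from the correspondence):

```python
def inverse_variance_fuse(estimates, variances):
    """Fuse independent estimates of the same quantity, weighting each by 1/variance."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    weights = [p / total for p in precisions]
    fused = sum(w * x for w, x in zip(weights, estimates))
    fused_var = 1.0 / total   # the fused variance is smaller than any input variance
    return fused, weights, fused_var

# Two data sources: the noisier one (variance 4) gets the smaller weight.
fused, weights, fused_var = inverse_variance_fuse([10.0, 12.0], [1.0, 4.0])
print(weights)  # [0.8, 0.2]
print(fused)    # 10.4
```

When the two variances are equal, the weights come out as one half each, which matches the "half and half" default mentioned in the reply.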
Category: Lecture 8 | 1611 reads | 5 comments
A Kalman filtering experiment on the C# platform
applehacker 2013-9-24 11:19
Optimal linear filtering theory originated in the 1940s with the work of the American scientist Wiener, the Soviet scientist Kolmogorov (Колмогоров), and others; it is collectively known as Wiener filtering theory. In theory, the greatest drawback of the Wiener filter is that it requires the entire past of the data, so it is unsuited to real-time processing. To overcome this, in the 1960s Kalman introduced the state-space model into filtering theory and derived a recursive estimation algorithm, since known as Kalman filtering theory. The Kalman filter takes the minimum mean-square error as the optimality criterion for a recursive estimation scheme. Its basic idea: using a state-space model of signal and noise, update the estimate of the state variables from the previous estimate and the current observation, yielding the current estimate. It is well suited to real-time processing and computer implementation.

Kalman filtering is very useful in engineering computation, so I wanted to try implementing a Kalman filter myself. On the C# platform, I simulate actual measurements with random numbers within a fixed range, run the Kalman filter computation, and finally plot the result.

Simulating the temperature measurements (figure omitted).

The final result looks like the figure (omitted): light-blue points are the raw measurements, the red curve is the filtered output.

The source code (the pasted copy of this post lost all array subscripts, so the indices and the filter function signature below are a reconstruction; the scalar filter parameters P = 10, Q = 1e-6, R = 1e-1 are assumed values):

    using System;
    using System.Drawing;
    using System.Windows.Forms;
    using System.Windows.Forms.DataVisualization.Charting;

    namespace 折线图
    {
        public partial class Form1 : Form
        {
            public Form1() { InitializeComponent(); }

            // sample data kept from the original post (unused below)
            double[] Observ = { 22, 24, 24, 25, 24, 26, 21, 26 };

            double[] ObsRand = new double[100];
            double Average;

            private void button1_Click(object sender, EventArgs e)
            {
                // Generate the simulated measurements: an integer part in [20, 30)
                // plus a random offset in (-0.99, 0.99). Seed from the system clock
                // and construct Random ONCE, outside the loop; otherwise successive
                // values come out identical (see the note below).
                Random rnd = new Random((int)DateTime.Now.Ticks);
                for (int i = 0; i < 100; i++)
                {
                    double i1 = rnd.Next(20, 30);
                    double offset = rnd.NextDouble() * 1.98 - 0.99;
                    ObsRand[i] = i1 + offset;
                    this.richTextBox2.Text += ObsRand[i].ToString() + "\n";
                }

                chart1.Series.Clear();
                graphPoint(chart1, "测量值", ObsRand, Color.PowderBlue);

                double[] filtered = KalmanFilter(ObsRand);
                graph(chart1, "滤波值", filtered, Color.Red);
            }

            // plot a line series
            void graph(Chart c, string name, double[] data, Color clr)
            {
                Series s = new Series(name) { ChartType = SeriesChartType.Line, Color = clr };
                for (int i = 0; i < data.Length; i++) s.Points.AddY(data[i]);
                c.Series.Add(s);
            }

            // plot a point series
            void graphPoint(Chart c, string name, double[] data, Color clr)
            {
                Series s = new Series(name) { ChartType = SeriesChartType.Point, Color = clr };
                for (int i = 0; i < data.Length; i++) s.Points.AddY(data[i]);
                c.Series.Add(s);
            }

            // scalar Kalman filter for the constant model x(k) = x(k-1)
            double[] KalmanFilter(double[] Observe)
            {
                double KamanP = 10, KamanQ = 1e-6, KamanR = 1e-1;
                double KamanX = Observe[0], KamanSum = 0;
                double[] True = new double[Observe.Length];
                for (int i = 0; i < Observe.Length; i++)
                {
                    double KamanY = KamanX;                            // prediction
                    KamanP = KamanP + KamanQ;                          // predicted covariance
                    double KamanKg = KamanP / (KamanP + KamanR);       // Kalman gain
                    KamanX = KamanY + KamanKg * (Observe[i] - KamanY); // updated estimate
                    KamanP = (1 - KamanKg) * KamanP;                   // updated covariance
                    KamanSum += KamanX;
                    True[i] = KamanX;
                    this.richTextBox1.Text += KamanX.ToString() + "\n";
                }
                Average = KamanSum / Observe.Length;
                return True;
            }
        }
    }

One point deserves special attention: when generating the random numbers, seed from the system clock, and do not construct a new Random object on every loop iteration. Otherwise many consecutive values come out identical, which does not match how real measurements behave. For the algorithm itself, see the Wikipedia article.
8848 reads | 0 comments
How to publish a MATLAB program with LaTeX?
Popularity 2 | luyz23 | 2010-4-16 11:54
MATLAB and LaTeX are tools every researcher knows well: we usually first do the scientific analysis and computation in MATLAB, then publish the paper with LaTeX. Writing up and summarizing the results takes a great deal of time, so here is how to publish a MATLAB program and its analysis results directly through LaTeX and generate a PDF file. (Shared experience, shared progress: I hope this raises everyone's research efficiency.)

The basic procedure is as follows:

1) First create an m-file program in MATLAB (see the attachment). The one provided here is an example of the image Kalman filtering algorithm often used in machine vision.

2) Run the analysis with that program. If you are satisfied with the results, start the write-up: calling MATLAB's built-in publish command generates the tex file automatically:

    publish('kalmanfilter.m', 'latex')

In this example the MATLAB program is named kalmanfilter.m. After the command runs, the system creates an html subdirectory under the current directory containing the tex file of the same name, kalmanfilter.tex, together with the result figures (EPS format).

3) Modify and adjust parts of the tex file (see the attachment), mainly to add Chinese font support to the tex source, e.g.:

    \usepackage{xeCJK}
    \setCJKmainfont{FangSong_GB2312}

You can edit with your favorite emacs or UltraEdit, since both are powerful enough to be configured as an integrated LaTeX build environment (IDE) that automatically creates and previews PDF files. (See the attachment.)

Taking the LaTeX setup in UltraEdit as an example: choose Advanced - Tool Configuration from the menu; the configuration dialog shown below pops up. Click the Insert button, then fill in the menu-item name, command line, and working directory fields as follows:

3.1) Compile (C): latex %F : %P (LaTeX compile)
3.2) Build (B): dvipdfmx %N.dvi : %P (generate the PDF file)
3.3) Read (R): texdoc %N.pdf : %P (view with a PDF reader)

Once this is set up, the Advanced menu gains the three new menu items (shown in the figure below), which can of course also be pinned to the toolbar for convenience. With the LaTeX IDE configured, you can invoke these external commands without leaving the UltraEdit editing environment, which speeds up LaTeX typesetting and editing. (For setting up an integrated LaTeX build environment in emacs, please consult the online forums.)

Notes:

1) I use the texlive2009 environment, which runs directly from the disc without installation (see http://bbs.ctex.org/ ).
* Windows users can mount the ISO disc image directly with a virtual-drive tool such as daemon.
* Linux users may find a VMware virtual machine convenient; mount the ISO image with:

    $ mount -t iso9660 -o ro,loop,noauto texlive.iso /mnt

2) Because of the Chinese display issue, the figure format in this example was converted (from EPS to PNG); MATLAB generates EPS figures by default.

Kalman filtering example (contains the following three files: kalmanfilter.m, kalmanfilter.tex, kalmanfilter.pdf)
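The Chinese-support patch of step 3 amounts to only a couple of preamble lines in the publish-generated tex file. A minimal sketch is shown below; note that the FangSong_GB2312 font must be installed on the system, and that xeCJK requires compiling with xelatex (the latex + dvipdfmx toolchain above works for documents without xeCJK):

```latex
\documentclass{article}
\usepackage{graphicx}   % for the figures that publish exports
\usepackage{xeCJK}      % Chinese support under XeLaTeX
\setCJKmainfont{FangSong_GB2312}
\begin{document}
卡尔曼滤波示例 (Kalman filter example)
\end{document}
```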
Category: Study Notes | 11827 reads | 5 comments
A brief introduction to Kalman filtering + implementation code (repost)
whitesun 2009-10-24 17:35
Optimal linear filtering theory originated in the 1940s with the work of the American scientist Wiener, the Soviet scientist Kolmogorov (Колмогоров), and others; it is collectively known as Wiener filtering theory. In theory, the greatest drawback of the Wiener filter is that it requires the entire past of the data, so it is unsuited to real-time processing. To overcome this, in the 1960s Kalman introduced the state-space model into filtering theory and derived a recursive estimation algorithm, since known as Kalman filtering theory. The Kalman filter takes the minimum mean-square error as the optimality criterion for a recursive estimation scheme. Its basic idea: using a state-space model of signal and noise, update the estimate of the state variables from the previous estimate and the current observation, yielding the current estimate. It is well suited to real-time processing and computer implementation.

Now let the discrete state equation and observation equation of a linear time-varying system be:

    X(k) = F(k,k-1) X(k-1) + T(k,k-1) U(k-1)
    Y(k) = H(k) X(k) + N(k)

where
    X(k) and Y(k) are the state vector and observation vector at time k,
    F(k,k-1) is the state-transition matrix,
    U(k) is the dynamic (process) noise at time k,
    T(k,k-1) is the system control (noise-input) matrix,
    H(k) is the observation matrix at time k,
    N(k) is the observation noise at time k.

The Kalman filtering algorithm then proceeds as follows (X^ denotes the prediction, X~ the updated estimate):

    Prediction:
        X(k)^ = F(k,k-1) X(k-1)~
    Predicted covariance:
        C(k)^ = F(k,k-1) C(k-1)~ F(k,k-1)' + T(k,k-1) Q(k-1) T(k,k-1)',   Q(k) = E[U(k) U(k)']
    Kalman gain:
        K(k) = C(k)^ H(k)' [H(k) C(k)^ H(k)' + R(k)]^(-1),   R(k) = E[N(k) N(k)']
    Updated estimate:
        X(k)~ = X(k)^ + K(k) [Y(k) - H(k) X(k)^]
    Updated covariance (Joseph form):
        C(k)~ = [I - K(k)H(k)] C(k)^ [I - K(k)H(k)]' + K(k) R(k) K(k)'

Then advance to time k+1, with X(k)~ and C(k)~ taking the places of X(k-1)~ and C(k-1)~, and repeat the steps above.

A C implementation follows. (The pasted copy lost every bracketed array subscript; the indices below are a reconstruction of the classic routine, so treat them as such.)

    #include <stdlib.h>
    #include "rinv.c"   /* rinv(): in-place inversion of an m-by-m matrix */

    /* Time-invariant Kalman filter over k steps.
       n: state dimension, m: measurement dimension, k: number of time steps.
       f: n*n transition matrix F, q: n*n process-noise covariance Q,
       r: m*m measurement-noise covariance R, h: m*n measurement matrix H,
       y: k*m observations, x: k*n state estimates (row 0 must hold the
       initial state on entry; rows 1..k-1 are filled in), p: n*n covariance
       (initial covariance on entry), g: n*m Kalman gain.
       Returns 0 if the innovation covariance is singular. */
    int lman(int n, int m, int k,
             double f[], double q[], double r[], double h[],
             double y[], double x[], double p[], double g[])
    {
        int i, j, kk, ii, l, jj, js;
        double *e, *a, *b;
        e = malloc(m * m * sizeof(double));
        l = m; if (l < n) l = n;
        a = malloc(l * l * sizeof(double));
        b = malloc(l * l * sizeof(double));
        /* initial prediction: P = F P F' + Q */
        for (i = 0; i <= n - 1; i++)
            for (j = 0; j <= n - 1; j++) {
                ii = i * l + j; a[ii] = 0.0;
                for (kk = 0; kk <= n - 1; kk++)
                    a[ii] += p[i * n + kk] * f[j * n + kk];     /* a = P F' */
            }
        for (i = 0; i <= n - 1; i++)
            for (j = 0; j <= n - 1; j++) {
                ii = i * n + j; p[ii] = q[ii];
                for (kk = 0; kk <= n - 1; kk++)
                    p[ii] += f[i * n + kk] * a[kk * l + j];     /* P = F a + Q */
            }
        for (ii = 2; ii <= k; ii++) {
            /* a = P H' */
            for (i = 0; i <= n - 1; i++)
                for (j = 0; j <= m - 1; j++) {
                    jj = i * l + j; a[jj] = 0.0;
                    for (kk = 0; kk <= n - 1; kk++)
                        a[jj] += p[i * n + kk] * h[j * n + kk];
                }
            /* e = H P H' + R  (innovation covariance) */
            for (i = 0; i <= m - 1; i++)
                for (j = 0; j <= m - 1; j++) {
                    jj = i * m + j; e[jj] = r[jj];
                    for (kk = 0; kk <= n - 1; kk++)
                        e[jj] += h[i * n + kk] * a[kk * l + j];
                }
            js = rinv(e, m);
            if (js == 0) { free(e); free(a); free(b); return js; }
            /* g = P H' e^{-1}  (Kalman gain) */
            for (i = 0; i <= n - 1; i++)
                for (j = 0; j <= m - 1; j++) {
                    jj = i * m + j; g[jj] = 0.0;
                    for (kk = 0; kk <= m - 1; kk++)
                        g[jj] += a[i * l + kk] * e[kk * m + j];
                }
            /* state prediction: x(ii) = F x(ii-1) */
            for (i = 0; i <= n - 1; i++) {
                jj = (ii - 1) * n + i; x[jj] = 0.0;
                for (j = 0; j <= n - 1; j++)
                    x[jj] += f[i * n + j] * x[(ii - 2) * n + j];
            }
            /* innovation: b = y - H x */
            for (i = 0; i <= m - 1; i++) {
                jj = i * l; b[jj] = y[(ii - 1) * m + i];
                for (j = 0; j <= n - 1; j++)
                    b[jj] -= h[i * n + j] * x[(ii - 1) * n + j];
            }
            /* state update: x = x + g b */
            for (i = 0; i <= n - 1; i++) {
                jj = (ii - 1) * n + i;
                for (j = 0; j <= m - 1; j++)
                    x[jj] += g[i * m + j] * b[j * l];
            }
            if (ii < k) {
                /* a = I - g H */
                for (i = 0; i <= n - 1; i++)
                    for (j = 0; j <= n - 1; j++) {
                        jj = i * l + j; a[jj] = 0.0;
                        for (kk = 0; kk <= m - 1; kk++)
                            a[jj] -= g[i * m + kk] * h[kk * n + j];
                        if (i == j) a[jj] += 1.0;
                    }
                /* b = (I - g H) P */
                for (i = 0; i <= n - 1; i++)
                    for (j = 0; j <= n - 1; j++) {
                        jj = i * l + j; b[jj] = 0.0;
                        for (kk = 0; kk <= n - 1; kk++)
                            b[jj] += a[i * l + kk] * p[kk * n + j];
                    }
                /* a = b F' */
                for (i = 0; i <= n - 1; i++)
                    for (j = 0; j <= n - 1; j++) {
                        jj = i * l + j; a[jj] = 0.0;
                        for (kk = 0; kk <= n - 1; kk++)
                            a[jj] += b[i * l + kk] * f[j * n + kk];
                    }
                /* covariance update combined with the next prediction:
                   P = F (I - g H) P F' + Q */
                for (i = 0; i <= n - 1; i++)
                    for (j = 0; j <= n - 1; j++) {
                        jj = i * n + j; p[jj] = q[jj];
                        for (kk = 0; kk <= n - 1; kk++)
                            p[jj] += f[i * n + kk] * a[kk * l + j];
                    }
            }
        }
        free(e); free(a); free(b);
        return js;
    }

A C++ implementation (using OpenCV 1.x's CvKalman) follows. (Here too the pasted copy lost the "->" arrows and the array subscripts; the indices and matrix set-up are a reconstruction, and the constant-velocity transition matrix and R value are assumed where the original lines were unrecoverable.)

    ============================ kalman.h ================================
    // kalman.h: interface for the kalman class.
    //////////////////////////////////////////////////////////////////////
    #if !defined(AFX_KALMAN_H__ED3D740F_01D2_4616_8B74_8BF57636F2C0__INCLUDED_)
    #define AFX_KALMAN_H__ED3D740F_01D2_4616_8B74_8BF57636F2C0__INCLUDED_

    #if _MSC_VER > 1000
    #pragma once
    #endif // _MSC_VER > 1000

    #include <math.h>
    #include "cv.h"

    class kalman
    {
    public:
        void init_kalman(int x, int xv, int y, int yv);
        CvKalman* cvkalman;
        CvMat* state;
        CvMat* process_noise;
        CvMat* measurement;
        const CvMat* prediction;
        CvPoint2D32f get_predict(float x, float y);
        kalman(int x = 0, int xv = 0, int y = 0, int yv = 0);
        // virtual ~kalman();
    };

    #endif // !defined(AFX_KALMAN_H__ED3D740F_01D2_4616_8B74_8BF57636F2C0__INCLUDED_)

    ============================ kalman.cpp ================================
    #include "kalman.h"
    #include <stdio.h>
    #include <string.h>

    CvRandState rng;
    const double T = 0.1;

    kalman::kalman(int x, int xv, int y, int yv)
    {
        /* state (x, vx, y, vy); state and measurement are both 4-dimensional here */
        cvkalman = cvCreateKalman(4, 4, 0);
        state = cvCreateMat(4, 1, CV_32FC1);
        process_noise = cvCreateMat(4, 1, CV_32FC1);
        measurement = cvCreateMat(4, 1, CV_32FC1);

        /* constant-velocity transition matrix (the T terms restore the
           standard model; assumed, since the posted copy was garbled) */
        const float A[] = { 1, (float)T, 0, 0,
                            0, 1,        0, 0,
                            0, 0, 1, (float)T,
                            0, 0, 0, 1 };
        /* process-noise covariance of the white-acceleration model */
        const float P[] = { (float)(pow(T,3)/3), (float)(pow(T,2)/2), 0, 0,
                            (float)(pow(T,2)/2), (float)T,            0, 0,
                            0, 0, (float)(pow(T,3)/3), (float)(pow(T,2)/2),
                            0, 0, (float)(pow(T,2)/2), (float)T };

        memcpy(cvkalman->transition_matrix->data.fl, A, sizeof(A));
        memcpy(cvkalman->process_noise_cov->data.fl, P, sizeof(P));
        cvSetIdentity(cvkalman->measurement_matrix, cvRealScalar(1));
        cvSetIdentity(cvkalman->measurement_noise_cov, cvRealScalar(1e-1)); /* R: assumed */
        cvSetIdentity(cvkalman->error_cov_post, cvRealScalar(1));

        state->data.fl[0] = x;
        state->data.fl[1] = xv;
        state->data.fl[2] = y;
        state->data.fl[3] = yv;
        cvkalman->state_post->data.fl[0] = x;
        cvkalman->state_post->data.fl[1] = xv;
        cvkalman->state_post->data.fl[2] = y;
        cvkalman->state_post->data.fl[3] = yv;

        cvRandInit(&rng, 0, 1, -1, CV_RAND_NORMAL);
        cvRandSetRange(&rng, 0, sqrt(cvkalman->process_noise_cov->data.fl[0]), 0);
        cvRand(&rng, process_noise);
    }

    CvPoint2D32f kalman::get_predict(float x, float y)
    {
        /* update state with the current position */
        state->data.fl[0] = x;
        state->data.fl[2] = y;

        /* sample measurement noise */
        cvRandSetRange(&rng, 0, sqrt(cvkalman->measurement_noise_cov->data.fl[0]), 0);
        cvRand(&rng, measurement);

        /* x_k = A x_{k-1} + w_k */
        cvMatMulAdd(cvkalman->transition_matrix, state, process_noise, cvkalman->state_post);
        /* z_k = H x_k + v_k */
        cvMatMulAdd(cvkalman->measurement_matrix, cvkalman->state_post, measurement, measurement);

        /* adjust the Kalman filter state:
           K_k = P'_k H^T (H P'_k H^T + R)^{-1}
           x_k = x'_k + K_k (z_k - H x'_k)
           P_k = (I - K_k H) P'_k */
        cvKalmanCorrect(cvkalman, measurement);
        const CvMat* prediction = cvKalmanPredict(cvkalman, 0);
        return cvPoint2D32f(prediction->data.fl[0], prediction->data.fl[2]);
    }

    void kalman::init_kalman(int x, int xv, int y, int yv)
    {
        state->data.fl[0] = x;
        state->data.fl[1] = xv;
        state->data.fl[2] = y;
        state->data.fl[3] = yv;
        cvkalman->state_post->data.fl[0] = x;
        cvkalman->state_post->data.fl[1] = xv;
        cvkalman->state_post->data.fl[2] = y;
        cvkalman->state_post->data.fl[3] = yv;
    }
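The matrix recursion described in this post maps directly onto a few lines of NumPy. The following is my own sketch, not the lman routine itself; it uses the same symbols (F transition, H measurement, Q and R noise covariances) and the Joseph-form covariance update:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the multivariate Kalman filter."""
    # Prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Kalman gain
    S = H @ P_pred @ H.T + R          # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Update (Joseph form, numerically stable)
    x_new = x_pred + K @ (z - H @ x_pred)
    I = np.eye(len(x))
    P_new = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T
    return x_new, P_new

# Track a scalar random walk observed directly (1-dimensional matrices).
rng = np.random.default_rng(0)
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[1e-4]]); R = np.array([[1.0]])
x = np.array([0.0]); P = np.array([[1.0]])
truth = 5.0
for _ in range(300):
    z = np.array([truth + rng.normal(0.0, 1.0)])
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(float(x[0]))
```

With Q small relative to R, the steady-state gain is small and the filter effectively averages over a long window, so after 300 noisy observations the estimate sits close to the true value of 5.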
Category: Research Practice | 10226 reads | 2 comments


Powered by ScienceNet.cn

Copyright © 2007- China Science Daily (中国科学报社)
