科学网


Tag: Pattern


Related blog posts

American Foreign Policy, Pattern and Process
黄安年 2019-2-25 11:52
American Foreign Policy, Pattern and Process [Charles W. Kegley, Jr. and Eugene R. Wittkopf, 1979 edition] [No. 300 in Huang Annian's personal book catalog, English-language titles on American studies]. By Huang Annian, Huang Annian's blog, posted 25 February 2019 (post no. 21037).

Starting in 2019, I will publish on this blog, in installments, the catalog of my entire personal book collection, beginning with the English-language titles on American studies, of which there are already more than 299; each volume is numbered individually, with no ordering by publication date or subject category. Listed here is American Foreign Policy, Pattern and Process by Charles W. Kegley, Jr. and Eugene R. Wittkopf, St. Martin's Press, 1979, 512 pages, ISBN 0-312-02327-8. Fifteen photos were taken from the book.
Category: Personal book catalog | 1631 reads | 0 comments
[From "V个P" to extracting postal addresses: how clear patterns resist sparse data]
liwei999 2016-12-21 08:18
From the examples of a few days ago, V个P (挣个毛, 挣个求, 挣个妹, etc., with P = {P, 屁, 头, 鸟, 吊, jiba, 妹, 鬼, ......}), we can see that a rule system grounded in small data can sometimes beat a system trained on big data: it is more precise, it resists sparse data and thereby raises recall (phenomena with clear patterns can be swept up in one net, with no sparse-data worries at all), and it models the phenomena more directly, which makes it easier to debug and maintain.

In the history of IE, right up to MUC-7, the best-performing NE system of the day, NetOwl, was based on pattern rules, and nearly every statistical contender measured itself against it. NetOwl was spun off from SRA to build a business on NE technology; it initially won some business in classified advertising, but could not stay profitable, was taken back by SRA, and gradually faded from view. Later, following the trend, the system also mixed in machine-learning modules. From then on, rule systems all but vanished from academia, even for the NE tasks that rules suit well, such as time and quantity expressions. Such is the force of fashion, seemingly sweeping all before it. Yet the nature of the problem has not changed: for natural-language phenomena with clear patterns, building a rule system from small data, generalized through a human brain in a data-driven way, remains as efficient and high-quality as described above. The people, teams, and systems quietly practicing this in industry are not rare; everyone simply knows it and does it without saying it.

By contrast, what about mobilizing the crowd to annotate big data and then training a system on it? That is the mainstream's default, honored method. If the data are big enough, its quality can indeed approach or match a rule system's. When the data volume is not ideal, it comes up short: either underkill (sparse data causes many statistically weaker variants to be missed), hurting recall, or overkill (over-smoothing pulls in what should not be captured), hurting precision.

What counts as a language phenomenon with a clear pattern? Take extracting postal addresses, which I once did myself as a fun project. A US address is roughly house number, street, city, state, ZIP code, and finally country name; the patterns are quite clear, yet you might not imagine how many variant forms each component admits. Some variants are absolute long tails, and no volume of data can cover their combinatorially explosive nature. If you collected a huge library of US addresses as a training set (big data), you could certainly design a learning system for the job. On the other side, equally data-driven, you need only a small data sample, develop by human extrapolation from it, and finally validate against large volumes of raw data for feedback. I can pound my chest on this: a system built the second way is high-quality, easy to maintain, and almost innately immune to sparse data. (A toy regex sketch of the address pattern appears at the end of this post.)

Yun: @wei, address parsing is something regular expressions can handle; we do it all the time in big-data analytics. It has little to do with NLP. It is a context-free grammar, and a relatively simple one.

Me: Finite-state; yes, a regex settles it, yet plenty of people still train models for it. That is point one. Point two: natural language's complexity, next to the relatively simple address-recognition task, is just a few more layers; it can all be finite. For instance, a subcat frame that calls for a subject, an object, and an object complement is not so different from an address that calls for a street name, a city name, and a state name.

Yun: Not the same. (1) street name, (2) city, (3) state are mutually independent, while subject, verb, and object have contextual dependencies.

Me: Every analogy limps. Anyway, both can be handled by finite-state devices. Because its components are independent, an address can be done in one layer with macro calls, or in several layers without macros; NL usually takes a multi-layer finite-state device. What I really want to say is that natural language looks hopelessly tangled and complex, but by its nature, in the large, it is a monster with clear patterns behind it. Why does natural language have clear patterns (so-called syntax) behind it? Chomsky attributes this to UG, carried from the womb. Interestingly, when linguists look at natural language they see order, however bewildering that order may be; NLP practitioners with little linguistic training often see only an impenetrable fog.

[Related]
Chinese processing Parsing
[Pinned: an overview of Liwei's NLP posts]
Table of contents of 《朝华午拾》
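The sketch mentioned above: a toy finite-state (regex) grabber for the number/street/city/state/ZIP pattern the post describes. The sub-patterns, the truncated state list, and the sample sentence are illustrative assumptions of mine, not the author's actual grammar or NetOwl's rules:

```python
import re

# Toy finite-state US-address pattern: number, street, city, state, ZIP.
# In a real system each component has many more variants (the long tails
# the post talks about); the state list here is truncated for brevity.
SUFFIX = r"(?:Street|St|Avenue|Ave|Boulevard|Blvd|Road|Rd|Drive|Dr|Lane|Ln)"
STATE = r"(?:AL|AK|AZ|CA|CO|CT|DC|FL|NY|TX|WA)"  # all 50+ in practice

ADDRESS = re.compile(
    rf"(?P<number>\d+)\s+"
    rf"(?P<street>(?:[A-Z][A-Za-z]*\s)+{SUFFIX})\.?,\s*"
    rf"(?P<city>[A-Z][A-Za-z]+(?:\s[A-Z][A-Za-z]+)*),\s*"
    rf"(?P<state>{STATE})\s+"
    rf"(?P<zip>\d{{5}}(?:-\d{{4}})?)"
)

text = "Ship it to 1600 Pennsylvania Ave, Washington, DC 20500 by Friday."
match = ADDRESS.search(text)
if match:
    print(match.groupdict())
# {'number': '1600', 'street': 'Pennsylvania Ave', 'city': 'Washington',
#  'state': 'DC', 'zip': '20500'}
```

Each named group plays the role of a macro for one address component; stacking such layers is exactly the multi-layer finite-state approach described in the dialogue.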
Category: Liwei's popular science | 2789 reads | 0 comments
[Repost] The differences among deep learning, machine learning, and pattern recognition
machinelearn 2015-3-30 16:10
Let's take a close look at three related terms (Deep Learning vs. Machine Learning vs. Pattern Recognition), and see how they relate to some of the hottest tech themes in 2015 (namely Robotics and Artificial Intelligence). In our short journey through jargon, you should acquire a better understanding of how computer vision fits in, as well as gain an intuitive feel for how the machine learning zeitgeist has slowly evolved over time.

Fig. 1: Putting a human inside a computer is not Artificial Intelligence (photo from WorkFusion Blog).

If you look around, you'll see no shortage of jobs at high-tech startups looking for machine learning experts. While only a fraction of them are looking for Deep Learning experts, I bet most of these startups can benefit from even the most elementary kind of data scientist. So how do you spot a future data scientist? You learn how they think.

The three highly related learning buzzwords "pattern recognition," "machine learning," and "deep learning" represent three different schools of thought. Pattern recognition is the oldest (and as a term is quite outdated). Machine learning is the most fundamental (one of the hottest areas for startups and research labs as of today, early 2015). And deep learning is the new, the big, the bleeding-edge -- we're not even close to thinking about the post-deep-learning era. Just take a look at the following Google Trends graph. You'll see that a) Machine Learning is rising like a true champion, b) Pattern Recognition started as synonymous with Machine Learning, c) Pattern Recognition is dying, and d) Deep Learning is new and rising fast.

1. Pattern Recognition: The birth of smart programs

Pattern recognition was a term popular in the 70s and 80s. The emphasis was on getting a computer program to do something "smart," like recognize the character 3. And it really took a lot of cleverness and intuition to build such a program. Just think of 3 vs. B and 3 vs. 8. Back in the day, it didn't really matter how you did it as long as there was no human-in-a-box pretending to be a machine (see Figure 1). So if your algorithm would apply some filters to an image, localize some edges, and apply morphological operators, it was definitely of interest to the pattern recognition community. Optical Character Recognition grew out of this community, and it is fair to call "Pattern Recognition" the "Smart" Signal Processing of the 70s, 80s, and early 90s. Decision trees, heuristics, quadratic discriminant analysis, etc. all came out of this era. Pattern Recognition became something CS folks did, and not EE folks. One of the most popular books from that time period is the infamous Duda & Hart Pattern Classification book, which is still a great starting point for young researchers. But don't get too caught up in the vocabulary; it's a bit dated.

[Figure: the character 3 partitioned into 16 sub-matrices. Custom rules, custom decisions, and custom smart programs used to be all the rage. See OCR Page.]

Quiz: The most popular computer vision conference is called CVPR, and the PR stands for Pattern Recognition. Can you guess the year of the first CVPR conference?
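To make that era concrete, here is a hedged sketch of such a custom smart program: partition a binary 16x16 glyph into 16 sub-matrices and hand-write a rule on the resulting ink densities. The grid size matches the figure caption above, but the features and thresholds are my illustrative inventions, not any specific OCR system's rules:

```python
import numpy as np

# 70s/80s-style hand-crafted character features: split a binary 16x16
# glyph into a 4x4 grid of 4x4 blocks (16 sub-matrices) and apply a
# hand-written rule to the per-block ink densities. Illustrative only.
def zone_densities(glyph: np.ndarray) -> np.ndarray:
    """glyph: 16x16 array of 0/1 pixels -> 4x4 array of ink densities."""
    assert glyph.shape == (16, 16)
    return glyph.reshape(4, 4, 4, 4).mean(axis=(1, 3))

def looks_like_3(glyph: np.ndarray) -> bool:
    d = zone_densities(glyph)
    right, left = d[:, 2:].mean(), d[:, :2].mean()
    # A custom rule in the old style: a "3" is right-heavy, with ink in
    # both the top and bottom bands but an open left side.
    return right > 1.5 * left and d[0].max() > 0.3 and d[3].max() > 0.3
```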
2. Machine Learning: Smart programs can learn from examples

Sometime in the early 90s people started realizing that a more powerful way to build pattern recognition algorithms is to replace an expert (who probably knows way too much about pixels) with data (which can be mined from cheap laborers). So you collect a bunch of face images and non-face images, choose an algorithm, and wait for the computations to finish. This is the spirit of machine learning. Machine learning emphasizes that the computer program (or machine) must do some work after it is given data; the learning step is made explicit. And believe me, waiting one day for your computations to finish scales better than inviting your academic colleagues to your home institution to design some classification rules by hand.

["What is Machine Learning?" diagram from Dr. Natalia Konstantinova's blog. The most important part of the diagram is the gears, which suggest that crunching/working/computing is an important step in the ML pipeline.]

As machine learning grew into a major research topic in the mid 2000s, computer scientists began applying these ideas to a wide array of problems. No longer was it only character recognition, cat vs. dog recognition, and other "recognize a pattern inside an array of pixels" problems. Researchers started applying machine learning to robotics (reinforcement learning, manipulation, motion planning, grasping), to genome data, as well as to predict financial markets. Machine learning was married with graph theory under the brand "Graphical Models," every robotics expert had no choice but to become a machine learning expert, and machine learning quickly became one of the most desired and versatile computing skills. However, "machine learning" says nothing about the underlying algorithm. We've seen convex optimization, kernel-based methods, Support Vector Machines, and Boosting have their winning days. Together with some custom manually engineered features, we had lots of recipes, lots of different schools of thought, and it wasn't entirely clear how a newcomer should select features and algorithms. But that was all about to change...

Further reading: To learn more about the kinds of features that were used in computer vision research, see my blog post: From feature descriptors to deep learning: 20 years of computer vision.

3. Deep Learning: one architecture to rule them all

Fast forward to today, and what we're seeing is a large interest in something called Deep Learning. The most popular kinds of deep learning models, as they are used in large-scale image recognition tasks, are known as Convolutional Neural Nets, or simply ConvNets.

[ConvNet diagram from the Torch Tutorial.]

Deep learning emphasizes the kind of model you might want to use (e.g., a deep convolutional multi-layer neural network) and that you can use data to fill in the missing parameters. But with deep learning comes great responsibility. Because you are starting with a model of the world which has a high dimensionality, you really need a lot of data (big data) and a lot of crunching power (GPUs). Convolutions are used extensively in deep learning (especially computer vision applications), and the architectures are far from shallow.

If you're starting out with deep learning, simply brush up on some elementary linear algebra and start coding. I highly recommend Andrej Karpathy's Hacker's Guide to Neural Networks. Implementing your own CPU-based backpropagation algorithm on a non-convolution-based problem is a good place to start.
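A minimal sketch of that exercise, in the hedged spirit of the recommendation: a tiny fully connected network trained on XOR with hand-written CPU backpropagation. The architecture, learning rate, and step count are my illustrative choices, not from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output

for step in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid output
    # backward pass (mean cross-entropy loss): grad w.r.t. output logits
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dh = (dz2 @ W2.T) * (1 - h**2)                # chain rule through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    # plain gradient descent
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad

print(p.round(2).ravel())  # should approach [0, 1, 1, 0]
```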
There are still lots of unknowns. The theory of why deep learning works is incomplete, and no single guide or book is better than true machine learning experience. There are lots of reasons why deep learning is gaining popularity, but deep learning is not going to take over the world. As long as you continue brushing up on your machine learning skills, your job is safe. But don't be afraid to chop these networks in half, slice 'n dice at will, and build software architectures that work in tandem with your learning algorithm. The Linux kernel of tomorrow might run on Caffe (one of the most popular deep learning frameworks), but great products will always need great vision, domain expertise, market development, and most importantly: human creativity.

Other related buzzwords

Big data is the philosophy of measuring all sorts of things, saving that data, and looking through it for information. For business, this big-data approach can give you actionable insights. In the context of learning algorithms, we've only started seeing the marriage of big data and machine learning within the past few years. Cloud computing, GPUs, DevOps, and PaaS providers have made large-scale computing within reach of the researcher and the ambitious everyday developer.

Artificial Intelligence is perhaps the oldest term, the most vague, and the one that has gone through the most ups and downs in the past 50 years. When somebody says they work on Artificial Intelligence, you are either going to want to laugh at them or take out a piece of paper and write down everything they say.

Further reading: my 2011 blog post, Computer Vision is Artificial Intelligence.

Conclusion

Machine learning is here to stay. Don't think about it as Pattern Recognition vs. Machine Learning vs. Deep Learning; just realize that each term emphasizes something a little bit different. But the search continues. Go ahead and explore. Break something. We will continue building smarter software, and our algorithms will continue to learn, but we've only begun to explore the kinds of architectures that can truly rule them all.

If you're interested in real-time vision applications of deep learning, namely those suitable for robotics and home automation applications, then you should check out what we've been building at vision.ai. Hopefully in a few days, I'll be able to say a little bit more. :-)
Category: Research notes | 4633 reads | 0 comments
[Repost] Understanding Biological Pattern Formation
gcshan 2014-2-20 01:28
Science, 24 September 2010: Vol. 329, no. 5999, pp. 1616-1620. doi:10.1126/science.1179047

Reaction-Diffusion Model as a Framework for Understanding Biological Pattern Formation

Shigeru Kondo (1,*) and Takashi Miura (2)

1. Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka 565-0871, Japan. 2. Department of Anatomy and Developmental Biology, Kyoto University Graduate School of Medicine, Kyoto 606-8501, Japan. *To whom correspondence should be addressed. E-mail: skondo@fbs.osaka-u.ac.jp

Abstract: The Turing, or reaction-diffusion (RD), model is one of the best-known theoretical models used to explain self-regulated pattern formation in the developing animal embryo. Although its real-world relevance was long debated, a number of compelling examples have gradually alleviated much of the skepticism surrounding the model. The RD model can generate a wide variety of spatial patterns, and mathematical studies have revealed the kinds of interactions required for each, giving this model the potential for application as an experimental working hypothesis in a wide variety of morphological phenomena. In this review, we describe the essence of this theory for experimental biologists unfamiliar with the model, using examples from experimental studies in which the RD model is effectively incorporated.
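As a hedged illustration of the abstract's point that very simple RD interactions suffice to generate spatial patterns, here is a minimal one-dimensional Gray-Scott simulation. Gray-Scott is a different RD system from those surveyed in the review, and the parameter values are common demo choices, not from the paper:

```python
import numpy as np

# Minimal 1-D Gray-Scott reaction-diffusion sketch; F, k, and the
# diffusion constants are standard demo values, not from the review.
n, Du, Dv, F, k, dt = 256, 0.16, 0.08, 0.035, 0.065, 1.0
u, v = np.ones(n), np.zeros(n)
v[n // 2 - 5 : n // 2 + 5] = 0.5      # local perturbation seeds the pattern
u -= v

def lap(a):                            # periodic 1-D Laplacian
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(20000):                 # explicit Euler time stepping
    uvv = u * v * v
    u += dt * (Du * lap(u) - uvv + F * (1 - u))
    v += dt * (Dv * lap(v) + uvv - (F + k) * v)

print(np.round(v[::16], 2))  # a stationary spot pattern should emerge
```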
Category: Science notes | 1633 reads | 0 comments
Points to note when submitting source files for a Pattern Recognition journal paper
Popularity 4 | tyfbyfby 2014-1-4 17:41
I have just submitted the revision of a paper to the journal Pattern Recognition and ran into many problems along the way; the instructions on the PR website are not detailed enough. To save everyone the same detours, here are the points to note when submitting to PR:

Note: the first submission does not require the article's source files, only a PDF; LaTeX source is required from the second (revised) submission onward.

1. When uploading, the LaTeX file (.tex extension) and the bibliography file (.bib extension) must be given the item type "Manuscript"; files produced by LaTeX compilation need not be uploaded.
2. Images must not sit in subfolders; put them in the same folder as the LaTeX source, and no two image files may share a name.
3. There is no need to upload your own PDF; the PR website compiles a PDF automatically from the uploaded files and images.
4. All images are required in .eps format, but the website appears to recognize .pdf as images too, so PDF figures may also work; I did not try it this time.
5. Delete the "References" section heading from your LaTeX source but keep the \bibliography command naming the .bib file you use; PR's LaTeX build system automatically appends the "References" heading at the end of the article (a minimal sketch appears at the end of this post).
6. It is best to delete any Chinese comments from your LaTeX source; on systems without Chinese language support they display as "口" (empty boxes).
7. Check the website-generated PDF carefully to avoid another round of corrections later.

Finally, good luck to everyone, and happy research!
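The sketch for item 5: a minimal example of what stays at the end of the .tex source. The style name and the .bib filename are my illustrative assumptions (check PR's guide for the style actually required):

```latex
% Keep these at the end of the .tex file; do NOT add a
% \section*{References} heading -- PR's build system inserts it.
\bibliographystyle{elsarticle-num}  % assumed Elsevier numeric style
\bibliography{mybib}                % mybib.bib uploaded as "Manuscript"
```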
Category: LaTeX tips | 25720 reads | 6 comments
[Repost] First Announcement of DDAP7
bhwangustc 2011-11-10 20:39
Dynamic Days Asia Pacific 7 (DDAP7)
The 7th International Conference on Nonlinear Science
Academia Sinica, Taipei, Taiwan, 6 August (Monday) - 9 August 2012

General Information:
Dynamic Days Asia Pacific (DDAP) is a regular series of international conferences that has in recent years rotated among Asia-Pacific countries every two years. Its purpose is to bring together researchers worldwide to discuss the most recent developments in nonlinear science. It also serves as a forum to promote regional as well as international scientific exchange and collaboration. The conference covers a variety of topics in nonlinear physics, biological physics, nonequilibrium physics, complex networks, econophysics, and quantum/classical chaos. DDAP1 started in 1999 in Hong Kong, and the series continued in Hangzhou (DDAP2, 2002), Singapore (DDAP3, 2004), Pohang (DDAP4, 2006), Nara (DDAP5, 2008), and Sydney (DDAP6, 2010). DDAP7 will take place at Academia Sinica in Taipei, Taiwan, on 6-9 August 2012. The 8th and 9th DDAP are planned for India (2014) and Hong Kong (2016).

Information for some former conferences:
DDAP6: University of New South Wales, Sydney, Australia, 12-14 July 2010, http://conferences.science.unsw.edu.au/DDAP6/DDAP6.html
DDAP5: Nara Prefectural New Public Hall, Nara, Japan, 9-12 September 2008, http://minnie.disney.phys.nara-wu.ac.jp/~toda/ddap5/
DDAP4: Pohang University of Science and Technology, Pohang, Korea, 12-14 July 2006, http://www.apctp.org/topical/ddap4/
DDAP3: National University of Singapore, Singapore, 30 June-2 July 2004, http://www.cse.nus.edu.sg/com_science/story/body.html
DDAP2: Zhejiang University, Hangzhou, China, 8-12 August 2002, http://physics.zju.edu.cn/note/dispArticle.Asp?ID=132
DDAP1: Hong Kong Baptist University, Hong Kong, 13-16 July 1999, http://www.hkbu.edu.hk/~ddap/

Topics of the conference:
Chaos; pattern formation; econophysics; complex networks; protein folding and aggregation; etc.

Organization Committee (OC):
Chin Kun Hu* (huck@phys.sinica.edu.tw), Academia Sinica: Chairperson
Ming-Chya Wu* (mcwu@phys.sinica.edu.tw), National Central University: Secretary
Chi Keung Chan* (ckchan@gate.sinica.edu.tw), Academia Sinica
Cheng-Hung Chang (chchang@mail.nctu.edu.tw), National Chiao Tung University
Chi-Ming Chen (cchen@phy.ntnu.edu.tw), National Taiwan Normal University
Chi-Ning Chen (cnchen@mail.ndhu.edu.tw), National Dong Hwa University
Hsuan-Yi Chen* (hschen@phy.ncu.edu.tw), National Central University
Yeng-Long Chen* (yenglong@phys.sinica.edu.tw), Academia Sinica
Yih-Yuh Chen (yychen@phys.ntu.edu.tw), National Taiwan University
Chung-I Chou (cichou@faculty.pccu.edu.tw), Chinese Culture University
Lin-Ni Hau (lnhau@jupiter.ss.ncu.edu.tw), National Central University
Ming-Chung Ho (t1603@nknucc.nknu.edu.tw), National Kaohsiung Normal University
Tzay-Ming Hong (ming@phys.nthu.edu.tw), National Tsing Hua University
Ding-wei Huang (dwhuang@cycu.edu.tw), Chung-Yuan Christian University
Ming-Chang Huang (ming@phys.cycu.edu.tw), Chung-Yuan Christian University
Kwan-Tai Leung* (leungkt@phys.sinica.edu.tw), Academia Sinica
Sai-Ping Li* (spli@phys.sinica.edu.tw), Academia Sinica
Sy-Sang Liaw (liaw@phys.nchu.edu.tw), National Chung Hsing University
Chai-Yu Lin (lincy@phy.ccu.edu.tw), National Chung Cheng University
Hsiu-Hau Lin (hsiuhau@phys.nthu.edu.tw), National Tsing Hua University
Chun-Yi David Lu (cydlu@ntu.edu.tw), National Taiwan University
Wen-Jong Ma* (mwj@nccu.edu.tw), National Chengchi University
Ning-Ning Pang (nnp@phys.ntu.edu.tw), National Taiwan University
Yuo-Hsien Shiau (yhshiau@nccu.edu.tw), National Chengchi University
Chi-Tin Shih (ctshih@thu.edu.tw), Tunghai University
Hsen-Che Tseng (tseng@phys.nchu.edu.tw), National Chung Hsing University
Wen-Jer Tzeng (wjtzeng@mail.tku.edu.tw), Tamkang University
Zicong Zhou (zzhou@mail.tku.edu.tw), Tamkang University
*Members of the Local Organization Committee

International Advisory Committee (IAC):

Asia-Pacific:
Moo Young Choi (Seoul National University, mychoi@snu.ac.kr)
Robert Dewar (The Australian National University, robert.dewar@anu.edu.au)
Bruce Henry (University of New South Wales, b.henry@unsw.edu.au)
Gang Hu (Beijing Normal University, ganghu@bnu.edu.cn)
Pak Ming Hui (The Chinese University of Hong Kong, pmhui@phy.cuhk.edu.hk)
Byungnam Kahng (Seoul National University, bkahng@snu.ac.kr)
Kunihiko Kaneko (The University of Tokyo, kaneko@complex.c.u-tokyo.ac.jp)
Seunghwan Kim (APCTP, Pohang, swan@postech.ac.kr)
Yuri S. Kivshar (The Australian National University, ysk124@physics.anu.edu.au)
Takahisa Harayama (ATR Wave Engineering Laboratories, harayama@atr.jp)
Yoshiki Kuramoto (Kyoto University, kuramoto@kurims.kyoto-u.ac.jp)
Choy-Heng Lai (National University of Singapore, phylaich@nus.edu.sg)
Baowen Li (National University of Singapore, phylibw@nus.edu.sg)
Bing Hong Wang (Univ. of Science and Technology of China, bhwang@ustc.edu.cn)
Bo Zheng (Zhejiang University, bozheng@zju.edu.cn)
Zhigang Zheng (Beijing Normal University, zgzheng@bnu.edu.cn)
Changsong Zhou (Hong Kong Baptist University, cszhou@hkbu.edu.hk)
Ravindra E. Amritkar (Physical Research Laboratory, amritkar@prl.ernet.in)
Mustansir Barma (Tata Institute of Fundamental Research, Mumbai, barma@theory.tifr.res.in)
Abhishek Dhar (Raman Research Institute, Bangalore, dabhi@rri.res.in)
Ramakrishna Ramaswamy (Jawaharlal Nehru University, New Delhi, r.ramaswamy@mail.jnu.ac.in)

Europe:
Giulio Casati (Center for Nonlinear and Complex Systems, Via Vallegio, Giulio.Casati@uninsubria.it)
Michel Peyrard (ENS de Lyon, Michel.Peyrard@ens-lyon.fr)
Mogens Jensen (University of Copenhagen, mhjensen@nbi.dk)
Celso Grebogi (University of Aberdeen, grebogi@abdn.ac.uk)
Stefano Ruffo (University of Florence, stefano.ruffo@unifi.it)
Tamas Vicsek (Eötvös Loránd University (ELTE), vicsek@hal.elte.hu)

America:
Predrag Cvitanovic (Georgia Tech, predrag@gatech.edu)
Ying-Cheng Lai (Arizona State University, Ying-Cheng.Lai@asu.edu)
Edward Ott (University of Maryland, edott@umd.edu)
Rajarshi Roy (University of Maryland, rroy@umd.edu)
Gene Stanley (Boston University, hes@bu.edu)

Host Institute:
Institute of Physics, Academia Sinica

Sponsors:
APCTP (Pohang, South Korea); Physical Society of the Republic of China (Taipei, Taiwan); National Science Council (Taipei, Taiwan); National Center for Theoretical Sciences (Taipei, Taiwan); Ministry of Education (Taipei, Taiwan)

Lectures:
12 plenary lectures; 12-18 invited talks in 3 parallel sessions; some contributed talks and posters.
* A 1-2 minute short report for each poster will be arranged during the poster session; a 10-15 minute talk on 9 August will be arranged for the presenter who wins the best poster award.

Important dates:
30 November 2011: collecting responses from the International Advisory Committee
2 December 2011: preparing the list of plenary lectures and invited talks
January 2012: applying for the NSC grant

DDAP7 schedule
Category: Conference information | 5551 reads | 0 comments
Period doubling in a periodically forced Belousov-Zhabotinsky reaction
kingroupxz 2010-9-20 15:21
Abstract: Using an open-flow reactor periodically perturbed with light, we observe subharmonic frequency locking of the oscillatory Belousov-Zhabotinsky chemical reaction at one sixth the forcing frequency (6:1) over a region of the parameter space of forcing intensity and forcing frequency where the Farey sequence dictates we should observe one third the forcing frequency (3:1). In this parameter region the spatial pattern also changes from slowly moving traveling waves to standing waves with a smaller wavelength. Numerical simulations of the FitzHugh-Nagumo equations show qualitative agreement with the experimental observations and indicate the oscillations in the experiment are a result of period doubling.

A junior labmate sent me this paper. After reading it and searching the related literature, I found that a group in Germany has already done a great deal of work in this direction and has already analyzed spiral-wave meandering in such driven systems. So I consider the paper's academic value acceptable; what I do not understand is why it does not use the long-established Oregonator model, which has always been regarded as the model closest to the BZ experiment. Of course, approached purely as a model study, the choice is unobjectionable.

I also have a slight reservation about how the forcing is added to this model: since the forcing acts on the second variable, why does the forcing term's timescale track the first variable rather than the second? (A generic forced FitzHugh-Nagumo form is written out below for orientation.)

The structure of the paper is very much in fashion now: experiment first, model simulation after. I will take this model, modify the forcing scheme, and re-examine spiral-wave meandering. As for the network project, let it carry on in the network.
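For orientation, here is a periodically forced FitzHugh-Nagumo system written in a standard two-timescale textbook form so the timescale question is visible. This form is my assumption, not necessarily the paper's exact equations:

```latex
\begin{aligned}
\frac{\partial u}{\partial t} &= \frac{1}{\epsilon}\,\bigl(u - u^{3} - v\bigr) + \nabla^{2}u, \\
\frac{\partial v}{\partial t} &= u - a_{1}v - a_{0} + I\cos(\omega_{f}\,t),
\end{aligned}
```

where $u$ is the fast activator (its kinetics carry the small timescale $\epsilon$), $v$ is the slow inhibitor, and $I$, $\omega_{f}$ are the forcing amplitude and frequency. In this form the forcing added to the $v$ equation evolves on $v$'s own $O(1)$ timescale; the reservation above is that, in the paper's formulation, the forcing term added to the second variable instead carries the first variable's timescale.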
Category: Literature reading | 4588 reads | 0 comments
On biophysics (8): Pangu separating heaven and earth in the world of the cell
sunon77 2009-3-9 05:50
If you cannot see the movie here, go to the source: http://www.molbio.wisc.edu/white/CECDBio.html

In our ancient myths there is a rather heroic story about the origin of the universe. The San Wu Li Ji (《三五历记》) by Xu Zheng, a man of Wu in the Three Kingdoms period, says: heaven and earth were a chaos like a hen's egg, and Pangu was born in its midst. After eighteen thousand years heaven and earth opened apart: the clear yang became the sky, the turbid yin became the earth. Pangu stood between them, transforming nine times a day, divine in the sky and sage on the earth. Each day the sky rose one zhang higher, the earth grew one zhang thicker, and Pangu grew one zhang taller. After another eighteen thousand years, the sky was immensely high, the earth immensely deep, and Pangu immensely tall. In other words, the universe began as undifferentiated chaos without a gap in it, and Pangu, growing a zhang a day and working diligently, finally opened up a broad new heaven and earth. Our ancestors were quite insightful on the question of the universe's origin, because behind this story lies a very deep principle.

A world in a grain of sand

If the universe were symmetric everywhere, the world would be dead silence. Only once symmetry is broken does the universe acquire organization and order, and only then does the inheritance, development, and evolution of complexity become possible: from simple to complex, from elementary particles to thinking human beings. The development of any multicellular organism replays, to some extent, this process. A fertilized egg, initially roughly uniform, polarizes before its first division, breaking the cell's original symmetry and laying the groundwork for the later development of tissues and organs with different functions. The stem cells that form later also undergo such asymmetric division; this is the cornerstone on which the human body builds its precise, functionally complex structure. "A heaven in a wild flower, a world in a grain of sand" is not merely a poet's romantic refrain; there is genuine philosophy in it. The similarity between the development of life and the evolution of the cosmos is, I suspect, no coincidence. It suggests that behind this seemingly bewildering world, pattern formation and the evolution of organization obey the same principles. How does a cell expend energy to produce negative entropy, resist the ever-present second law of thermodynamics, break symmetry, and form ordered structure? The answer to that question also, in a sense, strongly hints at how the ordered structures of the universe self-form and self-organize.

The polarization of the C. elegans cell

The C. elegans cell has, on the inner face of its cell membrane, a dense, elastic layer called the cell membrane cortex, composed of intricately arranged actin and the many molecular motors walking on it. The cell interior is filled with microtubules, like the many poles holding up a tent. None of these microtubules is static: they constantly grow or disassemble as the cell's chemical composition changes, much as a seemingly still glass of water hides countless molecular collisions. Such quasi-random thermal processes are everywhere in biology, which is one reason the methods of statistical physics are among the most important research methods of biophysics today (see Random Walks in Biology, Howard C. Berg). Although the microtubules and the molecular motors walking on them all move randomly, if we fluorescently tag the motor proteins and Par6, we find the motors roughly uniformly distributed in the cell and Par6 uniformly distributed on the membrane.

Fig. 1: The cell membrane cortex layer; Par2 grows as Par6 shrinks. Source: http://raven.zoology.washington.edu/celldynamics/research/worms/images/working_hypothesis.jpg

Then the posterior side of the cell begins to produce Par2. Because Par2 and Par6 inhibit each other, each makes the other more likely to fall off the cortex through protein-domain binding. Meanwhile, the anterior side of the cortex contracts as molecular motors consume ATP, driving a cortical flow toward the anterior (imagine a very viscous gel), which naturally sets the cell's interior fluid in motion. The flow carries more molecular motors toward the anterior, accelerating the polarization: Par2 expands and Par6 retreats. By the time of the first division, the distribution of gene products and organelles on the two sides of the cell has thoroughly broken symmetry, ready for differentiation and organ formation. For more detail, see the Flash animation at the top; nothing explains it more clearly.

Two sets of mathematical equations explaining cell polarization

Why does the cell divide into two segments rather than three or more? How does the mutual reinforcement between fluid flow and the molecular motors on the membrane cortex arise? Verbal description is not enough; we need a more quantitative account of the polarization process. They say every equation scares away half of one's readers, but there is no more concise and clear way to explain cell polarization than mathematics.

The first set of equations describes the nonlinear dynamics of the mutual inhibition of Par2 and Par6. Let A2(x,t) be the spatiotemporal distribution of Par2 in the cell and A6(x,t) that of Par6; we can then write coupled differential equations for their interaction (for details see Ref. 1; a hedged generic sketch of such a pair appears below). These are the most common protein-kinetics equations in biochemistry, so I will not explain each term; the final explanation lies in the dynamical properties of this system. Solving for the steady state and performing a perturbation expansion around it for a stability analysis, the first-order steady-state distribution of Par2 and Par6 turns out to be exactly a sine function, which explains why the cell divides into two segments and not more.

Fig. 2: The first-order steady state.

A similar analysis of the molecular oscillator of prokaryotes shows that the proteins MinC, MinD, and MinE also form a nonlinear system capable of oscillation. The first-order steady-state distribution of MinC, MinD, and MinE is likewise a sine function, so bacteria normally divide into two segments. But the second-order steady state of this system can be crest-trough-crest, and such a solution divides the bacterium into three segments. Remarkably, guided by this conclusion, biologists mutated E. coli genes to change the strength of the MinC/MinD/MinE dynamical interactions and really did produce bacteria that divide into three segments.

Fig. 3: The second-order steady state.

The other set of equations describes the mutual reinforcement of fluid flow and the molecular motors on the membrane cortex. Let v(x,t) be the spatiotemporal distribution of the flow in the cell and let the second field be the spatiotemporal concentration of molecular motors (for details see Ref. 2). Anyone familiar with nonlinear dynamics will recognize this as a generalization of the reaction-diffusion equations first proposed by Turing in 1952, and will then not be surprised that, for suitable parameters, such equations break symmetry as time evolves and form patterns.

Why do reaction-diffusion equations form patterns?

One reason for writing out the equations is that, as a researcher in biophysics, I constantly find that the phenomena of life are not determined by genes alone. Often it is physical and chemical processes that set the boundary conditions for the genes, assisting, and sometimes even deciding, how genes exert their function on life. The clearest example: even bacteria with completely identical genes, under identical environmental conditions, can develop two entirely different patterns of gene expression (Ref. 3). Here, the stochastic physical processes of life play an extremely important role.

Although some phenomena of cell biology can now be described quantitatively by mathematical models, why do reaction-diffusion equations break symmetry as time evolves, produce negative entropy, and form patterns? What is the philosophical meaning behind these equations? Do the emergence and evolution of cosmic order rest on similar principles? Unraveling these great riddles awaits people who will break down the artificial boundaries traditionally drawn between physics, chemistry, and biology, and recover the simple, unified beauty of natural law.
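The equations in the original post were images and did not survive; as a loudly hedged stand-in for the first set, here is a generic mutually inhibitory reaction-diffusion pair of the kind the text describes. The functional forms and rate constants are my illustrative assumptions, not the equations of Ref. 1:

```latex
\begin{aligned}
\frac{\partial A_{2}}{\partial t} &= D_{2}\,\frac{\partial^{2} A_{2}}{\partial x^{2}}
  + k_{2}\,C(x,t) - \gamma_{2}\,A_{2}\,\bigl(1 + \beta_{2} A_{6}\bigr), \\
\frac{\partial A_{6}}{\partial t} &= D_{6}\,\frac{\partial^{2} A_{6}}{\partial x^{2}}
  + k_{6}\,C(x,t) - \gamma_{6}\,A_{6}\,\bigl(1 + \beta_{6} A_{2}\bigr),
\end{aligned}
```

where attachment from a cytoplasmic pool $C$ feeds each species onto the cortex and mutual inhibition enters through each protein raising the other's detachment rate (the $\beta$ terms). Linearizing about the homogeneous steady state on a domain of length $L$ gives harmonic modes; when only the lowest mode (one node, sine-like) is unstable, the cell splits into exactly two domains, matching the first-order steady state described above.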
References:
1. A. Gamba, I. Kolokolov, V. Lebedev and G. Ortenzi, Universal features of cell polarization processes, J. Stat. Mech. (2009) P02019; doi:10.1088/1742-5468/2009/02/P02019.
2. J.-F. Joanny, F. Jülicher, K. Kruse and J. Prost, Hydrodynamic theory for multi-component active polar gels, New J. Phys. 9, 422 (2007).
3. Long Cai, Nir Friedman and X. Sunney Xie, Stochastic protein expression in individual cells at the single molecule level, Nature 440, 358-362 (16 March 2006); doi:10.1038/nature04599.

END
Category: Biophysics | 8218 reads | 6 comments
