Let's take a close look at three related terms (Deep Learning vs. Machine Learning vs. Pattern Recognition), and see how they relate to some of the hottest tech themes in 2015 (namely Robotics and Artificial Intelligence). In our short journey through jargon, you should acquire a better understanding of how computer vision fits in, as well as gain an intuitive feel for how the machine learning zeitgeist has slowly evolved over time.

Fig 1. Putting a human inside a computer is not Artificial Intelligence (Photo from WorkFusion Blog)

If you look around, you'll see no shortage of jobs at high-tech startups looking for machine learning experts. While only a fraction of them are looking for Deep Learning experts, I bet most of these startups can benefit from even the most elementary kind of data scientist. So how do you spot a future data scientist? You learn how they think.

The three highly related learning buzzwords

"Pattern recognition," "machine learning," and "deep learning" represent three different schools of thought. Pattern recognition is the oldest (and, as a term, quite outdated). Machine Learning is the most fundamental (one of the hottest areas for startups and research labs as of today, early 2015). And Deep Learning is the new, the big, the bleeding edge -- we're not even close to thinking about the post-deep-learning era. Just take a look at the following Google Trends graph. You'll see that a) Machine Learning is rising like a true champion, b) Pattern Recognition started as synonymous with Machine Learning, c) Pattern Recognition is dying, and d) Deep Learning is new and rising fast.

1. Pattern Recognition: The birth of smart programs

Pattern recognition was a term popular in the 70s and 80s. The emphasis was on getting a computer program to do something "smart" like recognize the character 3. And it really took a lot of cleverness and intuition to build such a program. Just think of 3 vs. B and 3 vs. 8.
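To make the "cleverness and intuition" of that era concrete, here is a toy example of a handcrafted rule: telling a 3 apart from an 8 (or a B) by counting enclosed holes in a binary glyph. The tiny glyphs and the rule itself are hypothetical illustrations, not anything from a real OCR system.

```python
# A toy 70s-style "custom rule": distinguish a 3 from an 8 (or a B) by
# counting enclosed holes in a binary glyph. The 5x3 glyphs below are
# hypothetical illustrations, not real font data.

THREE = [[1, 1, 1],
         [0, 0, 1],
         [1, 1, 1],
         [0, 0, 1],
         [1, 1, 1]]

EIGHT = [[1, 1, 1],
         [1, 0, 1],
         [1, 1, 1],
         [1, 0, 1],
         [1, 1, 1]]

def count_holes(glyph):
    """Count background regions not connected to the glyph border."""
    rows, cols = len(glyph), len(glyph[0])
    seen = set()

    def flood(r, c):
        stack = [(r, c)]
        while stack:
            y, x = stack.pop()
            if not (0 <= y < rows and 0 <= x < cols):
                continue
            if (y, x) in seen or glyph[y][x] == 1:
                continue
            seen.add((y, x))
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]

    # Flood-fill the outside background from every border cell.
    for r in range(rows):
        flood(r, 0), flood(r, cols - 1)
    for c in range(cols):
        flood(0, c), flood(rows - 1, c)

    # Any background cell left unvisited belongs to an enclosed hole.
    holes = 0
    for r in range(rows):
        for c in range(cols):
            if glyph[r][c] == 0 and (r, c) not in seen:
                holes += 1
                flood(r, c)  # mark the whole hole as counted
    return holes
```

An 8 (and a B) encloses two holes while a 3 encloses none, so a single handcrafted feature separates them. The fragility of such rules (one broken stroke changes the hole count) is exactly what eventually pushed the field toward learning from data.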
Back in the day, it didn't really matter how you did it as long as there was no human-in-a-box pretending to be a machine. (See Figure 1.) So if your algorithm would apply some filters to an image, localize some edges, and apply morphological operators, it was definitely of interest to the pattern recognition community. Optical Character Recognition grew out of this community, and it is fair to call "Pattern Recognition" the "Smart Signal Processing" of the 70s, 80s, and early 90s. Decision trees, heuristics, quadratic discriminant analysis, etc. all came out of this era. Pattern Recognition became something CS folks did, and not EE folks. One of the most popular books from that time period is the infamous Duda & Hart Pattern Classification book, which is still a great starting point for young researchers. But don't get too caught up in the vocabulary; it's a bit dated.

The character 3 partitioned into 16 sub-matrices. Custom rules, custom decisions, and custom smart programs used to be all the rage. See OCR Page.

Quiz: The most popular Computer Vision conference is called CVPR, and the PR stands for Pattern Recognition. Can you guess the year of the first CVPR conference?

2. Machine Learning: Smart programs can learn from examples

Sometime in the early 90s people started realizing that a more powerful way to build pattern recognition algorithms is to replace an expert (who probably knows way too much about pixels) with data (which can be mined from cheap laborers). So you collect a bunch of face images and non-face images, choose an algorithm, and wait for the computations to finish. This is the spirit of machine learning. Machine Learning emphasizes that the computer program (or machine) must do some work after it is given data. The Learning step is made explicit. And believe me, waiting one day for your computations to finish scales better than inviting your academic colleagues to your home institution to design some classification rules by hand.
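The shift in spirit is easy to sketch. Below, a nearest-centroid classifier "learns" a face vs. non-face decision rule from labeled examples alone; the data is synthetic 2D points standing in for image features (a hypothetical stand-in, not a real face detector).

```python
# A minimal "learn from examples" sketch: a nearest-centroid classifier
# trained on synthetic 2D points standing in for face / non-face feature
# vectors. A hypothetical illustration, not a real face detector.
import random

random.seed(0)
faces     = [(random.gauss(2.0, 0.5), random.gauss(2.0, 0.5)) for _ in range(100)]
non_faces = [(random.gauss(-2.0, 0.5), random.gauss(-2.0, 0.5)) for _ in range(100)]

def centroid(points):
    n = float(len(points))
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

# The explicit Learning step: the decision rule is distilled from data,
# not written down by an expert.
c_pos, c_neg = centroid(faces), centroid(non_faces)

def classify(p):
    d2 = lambda c: (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
    return "face" if d2(c_pos) < d2(c_neg) else "non-face"
```

Swap in better features and a better learner (an SVM, a boosted cascade) and you have the recipe that powered a decade of vision systems.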
What is Machine Learning, from Dr. Natalia Konstantinova's Blog. The most important part of this diagram is the Gears, which suggest that crunching/working/computing is an important step in the ML pipeline.

As Machine Learning grew into a major research topic in the mid-2000s, computer scientists began applying these ideas to a wide array of problems. No longer was it only character recognition, cat vs. dog recognition, and other "recognize a pattern inside an array of pixels" problems. Researchers started applying Machine Learning to Robotics (reinforcement learning, manipulation, motion planning, grasping), to genome data, as well as to predicting financial markets. Machine Learning was married with Graph Theory under the brand "Graphical Models," every robotics expert had no choice but to become a Machine Learning expert, and Machine Learning quickly became one of the most desired and versatile computing skills. However, Machine Learning says nothing about the underlying algorithm. We've seen convex optimization, kernel-based methods, Support Vector Machines, and Boosting have their winning days. Together with some custom manually engineered features, we had lots of recipes, lots of different schools of thought, and it wasn't entirely clear how a newcomer should select features and algorithms. But that was all about to change...

Further reading: To learn more about the kinds of features that were used in Computer Vision research, see my blog post: From feature descriptors to deep learning: 20 years of computer vision.

3. Deep Learning: one architecture to rule them all

Fast forward to today, and what we're seeing is a large interest in something called Deep Learning. The most popular kinds of Deep Learning models, as they are used in large-scale image recognition tasks, are known as Convolutional Neural Nets, or simply ConvNets.
ConvNet diagram from Torch Tutorial

Deep Learning emphasizes the kind of model you might want to use (e.g., a deep convolutional multi-layer neural network) and that you can use data to fill in the missing parameters. But with deep learning comes great responsibility. Because you are starting with a model of the world which has a high dimensionality, you really need a lot of data (big data) and a lot of crunching power (GPUs). Convolutions are used extensively in deep learning (especially computer vision applications), and the architectures are far from shallow.

If you're starting out with Deep Learning, simply brush up on some elementary Linear Algebra and start coding. I highly recommend Andrej Karpathy's Hacker's Guide to Neural Networks. Implementing your own CPU-based backpropagation algorithm on a non-convolution-based problem is a good place to start.

There are still lots of unknowns. The theory of why deep learning works is incomplete, and no single guide or book is better than true machine learning experience. There are lots of reasons why Deep Learning is gaining popularity, but Deep Learning is not going to take over the world. As long as you continue brushing up on your machine learning skills, your job is safe. But don't be afraid to chop these networks in half, slice 'n dice at will, and build software architectures that work in tandem with your learning algorithm. The Linux Kernel of tomorrow might run on Caffe (one of the most popular deep learning frameworks), but great products will always need great vision, domain expertise, market development, and most importantly: human creativity.

Other related buzzwords

Big data is the philosophy of measuring all sorts of things, saving that data, and looking through it for information. For business, this big-data approach can give you actionable insights. In the context of learning algorithms, we've only started seeing the marriage of big data and machine learning within the past few years.
Cloud computing, GPUs, DevOps, and PaaS providers have made large-scale computing within reach of the researcher and ambitious everyday developer.

Artificial Intelligence is perhaps the oldest term, the most vague, and the one that has gone through the most ups and downs in the past 50 years. When somebody says they work on Artificial Intelligence, you are either going to want to laugh at them or take out a piece of paper and write down everything they say.

Further reading: My 2011 blog post Computer Vision is Artificial Intelligence.

Conclusion

Machine Learning is here to stay. Don't think about it as Pattern Recognition vs. Machine Learning vs. Deep Learning; just realize that each term emphasizes something a little bit different. But the search continues. Go ahead and explore. Break something. We will continue building smarter software, and our algorithms will continue to learn, but we've only begun to explore the kinds of architectures that can truly rule them all.

If you're interested in real-time vision applications of deep learning, namely those suitable for robotics and home automation applications, then you should check out what we've been building at vision.ai. Hopefully in a few days, I'll be able to say a little bit more. :-)
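As a postscript to the "implement backprop yourself" advice above: a CPU-based backpropagation implementation on a non-convolutional problem (here XOR) fits in a few dozen lines of plain Python. The tiny 2-4-1 sigmoid architecture and the hyperparameters are illustrative choices, not a canonical recipe.

```python
# A from-scratch, CPU-only backpropagation sketch on a non-convolutional
# problem (XOR). Architecture and hyperparameters are illustrative.
import math
import random

random.seed(1)
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
Y = [0.0, 1.0, 1.0, 0.0]

H = 4  # hidden units
W1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_epoch(lr=1.0):
    """One pass of online gradient descent; returns the summed squared error."""
    global b2
    loss = 0.0
    for x, t in zip(X, Y):
        # Forward pass.
        h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(H)]
        y = sig(sum(W2[j] * h[j] for j in range(H)) + b2)
        loss += (y - t) ** 2
        # Backward pass: chain rule through the output and hidden sigmoids.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # uses pre-update W2[j]
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
    return loss

losses = [train_epoch() for _ in range(5000)]
```

If training succeeds, the four predictions approach 0, 1, 1, 0; at minimum the loss curve should fall well below its initial value. Once this feels comfortable, swapping the hand-written gradients for an autodiff framework (or adding convolutions) is the natural next step.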
Science, 24 September 2010: Vol. 329, No. 5999, pp. 1616-1620. doi: 10.1126/science.1179047

Reaction-Diffusion Model as a Framework for Understanding Biological Pattern Formation

Shigeru Kondo 1,* and Takashi Miura 2

Author Affiliations: 1 Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka 565-0871, Japan. 2 Department of Anatomy and Developmental Biology, Kyoto University Graduate School of Medicine, Kyoto 606-8501, Japan. * To whom correspondence should be addressed. E-mail: skondo@fbs.osaka-u.ac.jp

Abstract: The Turing, or reaction-diffusion (RD), model is one of the best-known theoretical models used to explain self-regulated pattern formation in the developing animal embryo. Although its real-world relevance was long debated, a number of compelling examples have gradually alleviated much of the skepticism surrounding the model. The RD model can generate a wide variety of spatial patterns, and mathematical studies have revealed the kinds of interactions required for each, giving this model the potential for application as an experimental working hypothesis in a wide variety of morphological phenomena. In this review, we describe the essence of this theory for experimental biologists unfamiliar with the model, using examples from experimental studies in which the RD model is effectively incorporated.
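For readers who want to see self-regulated pattern formation emerge in code, the sketch below integrates a one-dimensional Gray-Scott system, a standard two-species RD model chosen here for simplicity (it is not one of the specific biological systems reviewed above); the feed/kill parameters are common textbook values.

```python
# A one-dimensional Gray-Scott reaction-diffusion simulation.
# Parameter values are common textbook choices, not taken from the review.
N = 200                      # grid cells, dx = 1, periodic boundaries
Du, Dv = 0.16, 0.08          # diffusion constants (u diffuses faster)
F, k = 0.04, 0.06            # feed and kill rates (pulse-forming regime)

u = [1.0] * N                # the uniform "trivial" state ...
v = [0.0] * N
for i in range(N // 2 - 5, N // 2 + 5):
    u[i], v[i] = 0.5, 0.25   # ... plus a small local perturbation

def lap(a, i):
    """Discrete Laplacian with periodic boundaries."""
    return a[(i - 1) % N] - 2.0 * a[i] + a[(i + 1) % N]

for _ in range(2000):        # forward-Euler steps, dt = 1
    un, vn = u[:], v[:]
    for i in range(N):
        uvv = u[i] * v[i] * v[i]          # the u + 2v -> 3v reaction term
        un[i] = u[i] + Du * lap(u, i) - uvv + F * (1.0 - u[i])
        vn[i] = v[i] + Dv * lap(v, i) + uvv - (F + k) * v[i]
    u, v = un, vn
```

Starting from a nearly uniform state, the perturbation self-organizes into localized structure rather than diffusing away, a small instance of the self-regulated pattern formation the review describes; plotting v over time shows the pulses developing (and, in some parameter regimes, splitting).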
Dynamic Days Asia Pacific 7 (DDAP7)
The 7th International Conference on Nonlinear Science
Academia Sinica, Taipei, Taiwan, 6 August (Monday) - 9 August 2012

General Information: Dynamic Days Asia Pacific (DDAP) is a regular series of international conferences that in recent years has rotated among Asia-Pacific countries every two years. Its purpose is to bring together researchers worldwide to discuss the most recent developments in nonlinear science. It also serves as a forum to promote regional as well as international scientific exchange and collaboration. The conference covers a variety of topics in nonlinear physics, biological physics, nonequilibrium physics, complex networks, econophysics, and quantum/classical chaos. DDAP1 started in 1999 in Hong Kong, then continued in Hangzhou (DDAP2, 2002), Singapore (DDAP3, 2004), Pohang (DDAP4, 2006), Nara (DDAP5, 2008), and Sydney (DDAP6, 2010). DDAP7 will take place at Academia Sinica in Taipei, Taiwan on 6-9 August 2012. The 8th and 9th DDAP are planned for India (2014) and Hong Kong (2016).
Information for some former conferences:
DDAP6: University of New South Wales, Sydney, Australia, 12-14 July 2010, http://conferences.science.unsw.edu.au/DDAP6/DDAP6.html
DDAP5: Nara Prefectural New Public Hall, Nara, Japan, 9-12 September 2008, http://minnie.disney.phys.nara-wu.ac.jp/~toda/ddap5/
DDAP4: Pohang University of Science and Technology, Pohang, Korea, 12-14 July 2006, http://www.apctp.org/topical/ddap4/
DDAP3: National University of Singapore, Singapore, 30 June-2 July 2004, http://www.cse.nus.edu.sg/com_science/story/body.html
DDAP2: Zhejiang University, Hangzhou, China, 8-12 August 2002, http://physics.zju.edu.cn/note/dispArticle.Asp?ID=132
DDAP1: Hong Kong Baptist University, Hong Kong, 13-16 July 1999, http://www.hkbu.edu.hk/~ddap/

Topics of the conference: Chaos; Pattern formation; Econophysics; Complex networks; Protein folding and aggregation; etc.

Organization Committee (OC):
Chin Kun Hu* (huck@phys.sinica.edu.tw), Academia Sinica: Chairperson
Ming-Chya Wu* (mcwu@phys.sinica.edu.tw), National Central University: Secretary
Chi Keung Chan* (ckchan@gate.sinica.edu.tw), Academia Sinica
Cheng-Hung Chang (chchang@mail.nctu.edu.tw), National Chiao Tung University
Chi-Ming Chen (cchen@phy.ntnu.edu.tw), National Taiwan Normal University
Chi-Ning Chen (cnchen@mail.ndhu.edu.tw), National Dong Hwa University
Hsuan-Yi Chen* (hschen@phy.ncu.edu.tw), National Central University
Yeng-Long Chen* (yenglong@phys.sinica.edu.tw), Academia Sinica
Yih-Yuh Chen (yychen@phys.ntu.edu.tw), National Taiwan University
Chung-I Chou (cichou@faculty.pccu.edu.tw), Chinese Culture University
Lin-Ni Hau (lnhau@jupiter.ss.ncu.edu.tw), National Central University
Ming-Chung Ho (t1603@nknucc.nknu.edu.tw), National Kaohsiung Normal University
Tzay-Ming Hong (ming@phys.nthu.edu.tw), National Tsing Hua University
Ding-wei Huang (dwhuang@cycu.edu.tw), Chung-Yuan Christian University
Ming-Chang Huang (ming@phys.cycu.edu.tw), Chung-Yuan Christian University
Kwan-Tai Leung* (leungkt@phys.sinica.edu.tw), Academia Sinica
Sai-Ping Li* (spli@phys.sinica.edu.tw), Academia Sinica
Sy-Sang Liaw (liaw@phys.nchu.edu.tw), National Chung Hsing University
Chai-Yu Lin (lincy@phy.ccu.edu.tw), National Chung Cheng University
Hsiu-Hau Lin (hsiuhau@phys.nthu.edu.tw), National Tsing Hua University
Chun-Yi David Lu (cydlu@ntu.edu.tw), National Taiwan University
Wen-Jong Ma* (mwj@nccu.edu.tw), National Chengchi University
Ning-Ning Pang (nnp@phys.ntu.edu.tw), National Taiwan University
Yuo-Hsien Shiau (yhshiau@nccu.edu.tw), National Chengchi University
Chi-Tin Shih (ctshih@thu.edu.tw), Tunghai University
Hsen-Che Tseng (tseng@phys.nchu.edu.tw), National Chung Hsing University
Wen-Jer Tzeng (wjtzeng@mail.tku.edu.tw), Tamkang University
Zicong Zhou (zzhou@mail.tku.edu.tw), Tamkang University
*Members of Local Organization Committee

International Advisory Committee (IAC)

Asia-Pacific:
Moo Young Choi (Seoul National University, mychoi@snu.ac.kr)
Robert Dewar (The Australian National University, robert.dewar@anu.edu.au)
Bruce Henry (University of New South Wales, b.henry@unsw.edu.au)
Gang Hu (Beijing Normal University, ganghu@bnu.edu.cn)
Pak Ming Hui (The Chinese University of Hong Kong, pmhui@phy.cuhk.edu.hk)
Byungnam Kahng (Seoul National University, bkahng@snu.ac.kr)
Kunihiko Kaneko (The University of Tokyo, kaneko@complex.c.u-tokyo.ac.jp)
Seunghwan Kim (APCTP, Pohang, swan@postech.ac.kr)
Yuri S. Kivshar (The Australian National University, ysk124@physics.anu.edu.au)
Takahisa Harayama (ATR Wave Engineering Laboratories, harayama@atr.jp)
Yoshiki Kuramoto (Kyoto University, kuramoto@kurims.kyoto-u.ac.jp)
Choy-Heng Lai (National University of Singapore, phylaich@nus.edu.sg)
Baowen Li (National University of Singapore, phylibw@nus.edu.sg)
Bing Hong Wang (University of Science and Technology of China, bhwang@ustc.edu.cn)
Bo Zheng (Zhejiang University, bozheng@zju.edu.cn)
Zhigang Zheng (Beijing Normal University, zgzheng@bnu.edu.cn)
Changsong Zhou (Hong Kong Baptist University, cszhou@hkbu.edu.hk)
Ravindra E. Amritkar (Physical Research Laboratory, amritkar@prl.ernet.in)
Mustansir Barma (Tata Institute of Fundamental Research, Mumbai, barma@theory.tifr.res.in)
Abhishek Dhar (Raman Research Institute, Bangalore, dabhi@rri.res.in)
Ramakrishna Ramaswamy (Jawaharlal Nehru University, New Delhi, r.ramaswamy@mail.jnu.ac.in)

Europe:
Giulio Casati (Center for Nonlinear and Complex Systems, Via Vallegio, Giulio.Casati@uninsubria.it)
Michel Peyrard (ENS de Lyon, Michel.Peyrard@ens-lyon.fr)
Mogens Jensen (University of Copenhagen, mhjensen@nbi.dk)
Celso Grebogi (University of Aberdeen, grebogi@abdn.ac.uk)
Stefano Ruffo (University of Florence, stefano.ruffo@unifi.it)
Tamas Vicsek (Eötvös Loránd University (ELTE), vicsek@hal.elte.hu)

America:
Predrag Cvitanovic (Georgia Tech, predrag@gatech.edu)
Ying-Cheng Lai (Arizona State University, Ying-Cheng.Lai@asu.edu)
Edward Ott (University of Maryland, edott@umd.edu)
Rajarshi Roy (University of Maryland, rroy@umd.edu)
Gene Stanley (Boston University, hes@bu.edu)

Host Institute: Institute of Physics of Academia Sinica

Sponsors: APCTP (Pohang, South Korea); Physical Society of the Republic of China (Taipei, Taiwan); National Science Council (Taipei, Taiwan); National Center for Theoretical Sciences (Taipei, Taiwan); Ministry of Education (Taipei, Taiwan)

Lectures:
12 plenary lectures
12-18 invited talks in 3 parallel sessions
Some contributed talks and posters
* A 1-2 minute short report for each poster will be arranged during the poster session. A 10-15 minute talk will be arranged on 9 August for the presenter who wins the best poster award.

Important dates:
30 November 2011: collecting responses from the international advisory committee
2 December 2011: preparing a list of plenary lectures and invited talks
January 2012: applying for the NSC grant

DDAP7 schedule
Abstract: Using an open-flow reactor periodically perturbed with light, we observe sub-harmonic frequency locking of the oscillatory Belousov-Zhabotinsky chemical reaction at one sixth the forcing frequency (6:1) over a region of the parameter space of forcing intensity and forcing frequency where the Farey sequence dictates we should observe one third the forcing frequency (3:1). In this parameter region the spatial pattern also changes from slowly moving traveling waves to standing waves with a smaller wavelength. Numerical simulations of the FitzHugh-Nagumo equations show qualitative agreement to the experimental observations and indicate the oscillations in the experiment are a result of period doubling.

This paper was passed to me by a junior labmate. After reading it and searching the related literature, I found that a German group has already done a great deal of work in this direction, including an analysis of spiral-wave meandering in this kind of driven system. So I consider the academic value of this paper acceptable; what I do not understand is why the authors did not use the long-established Oregonator model, which has always been regarded as the model closest to the experiment. Of course, if one approaches it purely from the standpoint of model studies, there is nothing wrong with the choice either.

I also have a somewhat different view of how the forcing is added to this model. Since the forcing acts on the second variable, why does the forcing term carry the time scale of the first variable rather than that of the second?

The structure of the paper follows the current fashion: experiment first, model simulation after.

I plan to take this model, modify the forcing scheme, and re-examine the meandering of spiral waves. As for the network project, let it carry on within the networks.
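As a starting point for the modified-forcing experiments proposed above, here is a minimal sketch of a FitzHugh-Nagumo oscillator with sinusoidal forcing applied to the second (slow) variable. The parameter values, the constant bias I, the forcing amplitude and frequency, and the choice of a zero-dimensional (non-spatial) model are all my own illustrative assumptions, not the values used in the paper.

```python
# A periodically forced FitzHugh-Nagumo oscillator (non-spatial sketch).
# All parameter values are illustrative assumptions, not the paper's.
import math

eps, a, b, I = 0.08, 0.7, 0.8, 0.5   # classic oscillatory FHN parameters
A, wf = 0.5, 1.0                      # forcing amplitude and frequency
dt = 0.01

u, v = 1.0, 0.0
trace = []
for i in range(60000):                # integrate 600 time units (Euler)
    t = i * dt
    du = u - u ** 3 / 3.0 - v + I
    dv = eps * (u + a - b * v) + A * math.sin(wf * t)  # forcing on the slow variable
    u, v = u + dt * du, v + dt * dv
    trace.append(u)
```

Sweeping A and wf and examining how the oscillation period locks to multiples of the forcing period is the natural next experiment; adding diffusion terms turns this into the spatially extended model in which spiral-wave meandering can be studied.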
If you cannot see the movie here, you can go to the source: http://www.molbio.wisc.edu/white/CECDBio.html

In our ancient mythology there is a rather heroic story about the origin of the universe. Xu Zheng, of the Wu state in the Three Kingdoms era, wrote in the Sanwu Liji: heaven and earth were a chaos like a chicken's egg, and Pangu was born within it. After eighteen thousand years, heaven and earth split apart: the clear yang became the sky, the turbid yin became the earth. Pangu stood between them, transforming nine times a day, more divine than the sky, more sage than the earth. Each day the sky rose one zhang higher, the earth grew one zhang thicker, and Pangu grew one zhang taller. After another eighteen thousand years, the sky was extremely high, the earth extremely deep, and Pangu extremely tall. In other words, the universe began as an undifferentiated chaos without any gaps, and Pangu, growing one zhang a day and working diligently, finally opened up a vast new heaven and earth. Our ancestors showed real insight into the question of the origin of the universe, because behind this story lies a very deep principle.

A world in a grain of sand

If the universe were symmetric everywhere, the world would be dead silent. Only once symmetry is broken does the universe acquire organization and order, from the simple to the complex, from elementary particles to thinking human beings; only then do the inheritance, development, and evolution of complexity become possible. And the development of any multicellular organism replays, to some extent, this very process. A fertilized egg, originally roughly uniform, polarizes before its first division, breaking the original cellular symmetry and laying the groundwork for the later development of tissues and organs with different functions. The stem cells formed later also undergo such asymmetric division, which is the cornerstone of how the body builds its precise structures with complex functions.

"A heaven in a wild flower, a world in a grain of sand" is not merely a poet's romantic refrain; it contains genuine philosophy. The similarity between the development of life and the evolution of the universe is, I believe, no coincidence. It suggests that behind this seemingly bewildering world, pattern formation and the evolution of organization follow the same principles. How does a cell consume energy to produce negative entropy, resist the ever-present second law of thermodynamics, break symmetry, and form ordered structure? The answer to this question, in a sense, also provides a strong hint toward explaining the self-formation and self-organization of ordered structure in the universe.

The polarization of the C. elegans cell

The C. elegans cell has, on the inner side of its cell membrane, a dense and elastic layer called the cell membrane cortex, composed of intricately arranged actin and numerous molecular motors walking along it. The interior of the cell contains countless microtubules, like the many poles holding up a tent. None of these microtubules are static; they constantly grow or disassemble as the chemical composition of the cell changes, much like a seemingly still glass of water in which countless molecules are colliding. Such quasi-random thermal processes are everywhere in biology, which is one reason why the methods of statistical physics are among the most important research tools of biophysics today (see Random Walks in Biology, Howard C. Berg). Although the microtubules and the molecular motors walking on them all move randomly, if we fluorescently label the motor proteins and the protein Par6, we find that the molecular motors are distributed roughly uniformly in the cell, and Par6 is likewise distributed uniformly on the cell membrane.

Fig. 1: The cell membrane cortex layer; Par2 expands as Par6 shrinks. Source: http://raven.zoology.washington.edu/celldynamics/research/worms/images/working_hypothesis.jpg

Subsequently, the posterior side of the cell begins to produce Par2. Because Par2 and Par6 inhibit each other, each makes the other more likely to detach from the cortex through protein domain binding. At the same time, the anterior side of the cortex contracts as the molecular motors consume ATP, causing the cortex to flow toward the anterior (imagine a very viscous gel), which naturally sets the fluid inside the cell into motion. The fluid flow carries more molecular motors toward the anterior, accelerating the polarization of the cell: Par2 expands and Par6 retreats. By the time of division, the distribution of genes and organelles on the two sides of the cell has completely broken the symmetry, preparing the ground for cell differentiation and organ formation. For more detail, see the Flash animation at the top; nothing explains it more clearly.

Two sets of mathematical equations that explain cell polarization

Why does the cell divide into two segments, rather than three or more? How does the mutual reinforcement between fluid flow and the molecular motors on the membrane cortex arise? Verbal description alone is not enough; we need a more quantitative way to explain the polarization process. It is said that every mathematical equation scares away half of one's readers, but there is no more concise and clear way to explain cell polarization than mathematics.

The first set of equations describes the nonlinear dynamics of the mutual inhibition between Par2 and Par6. Let A2(x,t) denote the spatio-temporal distribution of Par2 in the cell and A6(x,t) that of Par6; we can then write down the ordinary differential equations for their interaction (see Ref. 1 for details). These are the protein interaction equations most commonly used in biochemistry, so I will not explain each term one by one, because the final explanation lies in the dynamical properties of the system. If we solve for the steady state and perform a perturbation expansion around it for a stability analysis, the first-order steady-state distribution of Par2 and Par6 is precisely a sin(x) function, which explains why the cell divides into two segments rather than more.

Fig. 2: The first-order steady state.

If a similar analysis is applied to the molecular oscillator of prokaryotes, the proteins MinC, MinD, and MinE also form a nonlinear system capable of oscillation. The first-order steady-state distribution of MinC, MinD, and MinE is likewise a sin(x) function, which is why bacteria normally divide into two segments. But the second-order steady state of this system can be crest-trough-crest, a solution that would split the bacterium into three segments. Interestingly, biologists have used this conclusion to mutate E. coli, altering the strength of the MinC/MinD/MinE interactions, and really did produce bacteria that divide into three segments.

Fig. 3: The second-order steady state.

The other set of equations describes the mutual reinforcement between fluid flow and the molecular motors on the membrane cortex. Let v(x,t) denote the spatio-temporal distribution of the fluid flow in the cell, and let a second field denote the concentration of molecular motors (see Ref. 2 for details). Anyone familiar with nonlinear dynamics will immediately recognize this as a generalization of the reaction-diffusion equations first proposed by Turing in 1952. It should then come as no surprise that, in a suitable parameter regime, these equations break symmetry as time evolves and form patterns.

Why do reaction-diffusion equations form patterns?

One reason I wrote out the mathematical equations is that, as a biophysics researcher, I often find that biological phenomena are not determined by genes alone. Often it is physical and chemical processes that set the boundary conditions for the genes, assisting and sometimes even deciding how genes exert their function on living systems. The clearest example is that even bacteria with completely identical genomes, under identical environmental conditions, can develop two entirely different gene expression patterns (Ref. 3). Here, stochastic physical processes play an extremely important role in the phenomena of life.

Although some phenomena in cell biology can now be described quantitatively by mathematical models, why do reaction-diffusion equations break symmetry as time evolves, produce negative entropy, and form patterns? What is the philosophical meaning behind these equations? Do the emergence and evolution of cosmic order follow similar principles? Unraveling these great mysteries will require us to break down the man-made boundaries between physics, chemistry, and biology, and recover the simple, unified beauty of natural law.

References:
1. A. Gamba, I. Kolokolov, V. Lebedev and G. Ortenzi, Universal features of cell polarization processes, J. Stat. Mech. (2009) P02019. doi:10.1088/1742-5468/2009/02/P02019
2. J.-F. Joanny, F. Jülicher, K. Kruse and J. Prost, Hydrodynamic theory for multi-component active polar gels, New J. Phys. 9, 422 (2007).
3. L. Cai, N. Friedman and X. S. Xie, Stochastic protein expression in individual cells at the single molecule level, Nature 440, 358-362 (2006). doi:10.1038/nature04599
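The mode-selection argument above (first-order mode sin(x), hence division into two segments) can be checked numerically. The sketch below computes the linear growth rate of spatial modes for a generic two-species reaction-diffusion system; the Jacobian entries, diffusion constants, and domain length are hypothetical textbook-style choices, not the actual parameters of the Par or Min models.

```python
# Linear stability of a generic two-species reaction-diffusion system
# around a uniform steady state: perturbations of wavenumber k grow at
# the largest real eigenvalue of J - k^2 diag(D1, D2).
# All numbers below are hypothetical textbook-style choices.
import math

a11, a12, a21, a22 = 1.0, -1.0, 2.0, -1.5   # Jacobian: activator-inhibitor
D1, D2 = 0.01, 1.0                          # inhibitor diffuses much faster

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 diag(D1, D2)."""
    m11 = a11 - D1 * k * k
    m22 = a22 - D2 * k * k
    tr, det = m11 + m22, m11 * m22 - a12 * a21
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        return (tr + math.sqrt(disc)) / 2.0
    return tr / 2.0  # complex pair: growth is the shared real part

# On a domain of length L with no-flux boundaries, the allowed modes are
# cos(n*pi*x/L), i.e. wavenumbers k_n = n*pi/L.
L = math.pi / 7.0          # hypothetical domain length
rates = {n: growth_rate(n * math.pi / L) for n in range(4)}
```

With these numbers only the n = 1 mode grows, so a uniform state on this domain breaks symmetry into a single half-wave, i.e. two segments; with different parameters or a longer domain the n = 2 mode can become unstable as well, the crest-trough-crest, three-segment case mentioned above.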