This code computes the autocorrelation function (ACF) of a given series. Given a lag p, it returns the vector of autocorrelations through lag p. It also produces a bar graph of the autocorrelations, with rejection-region bands for testing (under the white-noise assumption) that each autocorrelation equals 0.

Source code:

function ta = acf(y,p)
% ACF - Compute Autocorrelations Through p Lags
% myacf = acf(y,p)
%
% Inputs:
% y - series to compute acf for, nx1 column vector
% p - total number of lags, 1x1 integer
%
% Output:
% myacf - px1 vector containing autocorrelations
%         (First lag computed is lag 1. Lag 0 not computed)
%
% A bar graph of the autocorrelations is also produced, with
% rejection region bands for testing individual autocorrelations = 0.
%
% Note that the lag 0 autocorrelation is not computed,
% and is not shown on this graph.
%
% Example:
% acf(randn(100,1), 10)

% --------------------------
% USER INPUT CHECKS
% --------------------------

[n1, n2] = size(y) ;
if n2 ~= 1
    error('Input series y must be an nx1 column vector')
end

[a1, a2] = size(p) ;
if ~((a1==1 & a2==1) & (p < n1))
    error('Input number of lags p must be a 1x1 scalar, and must be less than length of series y')
end

% -------------
% BEGIN CODE
% -------------

ta = zeros(p,1) ;
global N
N = max(size(y)) ;
global ybar
ybar = mean(y);

% Collect ACFs at each lag i
for i = 1:p
    ta(i) = acf_k(y,i) ;
end

% Plot ACF
% Plot rejection region lines for test of individual autocorrelations
% H_0: rho(tau) = 0 at alpha = .05
bar(ta)
line([0 p+.5], (1.96)*(1/sqrt(N))*ones(1,2))
line([0 p+.5], (-1.96)*(1/sqrt(N))*ones(1,2))

% Some figure properties
line_hi = (1.96)*(1/sqrt(N))+.05;
line_lo = -(1.96)*(1/sqrt(N))-.05;
bar_hi = max(ta)+.05 ;
bar_lo = -max(ta)-.05 ;

if (abs(line_hi) > abs(bar_hi)) % if rejection lines might not appear on graph
    axis([0 p+.60 line_lo line_hi])
else
    axis([0 p+.60 bar_lo bar_hi])
end
title({' ','Sample Autocorrelations',' '})
xlabel('Lag Length')
set(gca,'YTick',[-1:.20:1])
% set number of lag labels shown
if (p < 28 & p > 4)
    set(gca,'XTick',floor(linspace(1,p,4)))
elseif (p >= 28)
    set(gca,'XTick',floor(linspace(1,p,8)))
end
set(gca,'TickLength',[0 0])

% ---------------
% SUB FUNCTION
% ---------------
function ta2 = acf_k(y,k)
% ACF_K - Autocorrelation at Lag k
% acf(y,k)
%
% Inputs:
% y - series to compute acf for
% k - which lag to compute acf for
%
global ybar
global N

% Numerator, unscaled covariance
cross_sum = zeros(N-k,1) ;
for i = (k+1):N
    cross_sum(i-k) = (y(i)-ybar)*(y(i-k)-ybar) ;
end

% Denominator, unscaled variance
yvar = (y-ybar)'*(y-ybar) ;

ta2 = sum(cross_sum) / yvar ;

Source download: http://page2.dfpan.com/fs/8lcje221e291c679df8/
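For readers without MATLAB, the same estimator can be sketched in Python. This is a minimal sketch assuming NumPy is available; the function name is my own.

```python
import numpy as np

def acf(y, p):
    """Sample autocorrelations of y at lags 1..p (lag 0 is not returned)."""
    y = np.asarray(y, dtype=float)
    ybar = y.mean()
    yvar = np.sum((y - ybar) ** 2)        # denominator: unscaled variance
    ta = np.empty(p)
    for k in range(1, p + 1):
        # numerator: unscaled covariance at lag k
        ta[k - 1] = np.sum((y[k:] - ybar) * (y[:-k] - ybar)) / yvar
    return ta

# Under the white-noise hypothesis each autocorrelation is approximately
# N(0, 1/N), so the alpha = .05 rejection band is +/- 1.96 / sqrt(len(y)).
```

A bar plot of `ta` together with horizontal lines at ±1.96/√N reproduces the figure drawn by the MATLAB code.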
What is DG? An Example: consider
\begin{equation}
\left\{
\begin{aligned}
u_{x}&=\cos(x)\ \text{on}\ I \\
u(0)&=0
\end{aligned}
\right.
\end{equation}
Standard Finite Element Method
Approximate: $u(x)=\sum_j u_j \phi_j(x)$, where the $\phi_{j}$ are 'hat' basis functions. PICTURE
Weak form of the equation:
$$\int_I u_x \phi_i \,dx =\int_I \cos(x) \phi_i \,dx$$
$$-\int_I u\phi_{i,x}\,dx+u\phi_i\mid_{x_0}^{x_4} =\int_I \cos(x)\phi_{i}\,dx$$
Substitute $u(x)=\sum_j u_j \phi_j(x)$; our goal is to solve for the $u_j$:
$$-\sum_j u_j \int_I \phi_j \phi_{i,x}\,dx +u_4\phi_i-u_0\phi_i=\int_I \cos(x) \phi_i \,dx$$
Where DG differs
\item Formulation is the same as standard FEM;
\item The DG difference is the choice of basis: no continuity constraint between elements; PICTURE
\item Numerical flux
\item DG weak form:
$$\int_{I_k} u_x \phi_i \,dx =\int_{I_k} \cos(x)\phi_i \,dx$$
$$-\int_{I_k}u\phi_{i,x}\,dx +u\phi_i \mid_{x^-_{k-\frac{1}{2}}}^{x^-_{k+\frac{1}{2}}}=\int_{I_k} \cos(x)\phi_i \,dx$$
\item $u$ is multi-valued at element interfaces: PICTURE
\item Motivation: choosing $\phi_i=1$ (1-D),
$$u^-_{k+1/2}-u^-_{k-1/2}=u_k-u_{k-1}=\Delta x \cos(x_{k-1/2})$$
DG for a conservation law
$$u_t +f(u)_x=0$$
$$u(x,0) =u_0(x)$$
\item Approximate:
$$u^h(x,t)=\sum_{l=0}^{k}u_i^{(l)}(t)v_l^{(i)}(x)\ \text{for}\ x \in I_i$$
where $v_0^{(i)}(x)=1$, $v_1^{(i)}(x)=\frac{x-x_i}{\Delta x_i/2}$, $v_2^{(i)}(x)=\left(\frac{x-x_i}{\Delta x_i/2}\right)^2-\frac{1}{3}$ is the orthogonal basis over $I_i$; $u^{(l)}_i(t)=\frac{1}{\int_{I_i}(v^{(i)}_l)^2dx}\int_{I_i} u^h(x,t)v^{(i)}_l(x)\,dx$ are the degrees of freedom.
\item DG weak formulation:
$$\int_{I_i} u_t v \,dx -\int_{I_i} f(u)v_{x}\,dx +f(u(x_{i+1/2},t))v(x_{i+1/2})-f(u(x_{i-1/2},t))v(x_{i-1/2})=0$$
(it's not OK, since $u$ is multi-valued at the interfaces!
)
\item Numerical flux $\hat{f}_{i+1/2}=\hat{f}(u^-_{i+1/2},u^+_{i+1/2})$ to replace $f(u_{i+1/2})$:
\begin{description}
\item monotone: $\hat{f}(\uparrow,\downarrow)$, non-decreasing in the first argument and non-increasing in the second;
\item $\hat{f}$ is Lipschitz continuous with respect to both arguments;
\item consistent: $\hat{f}(u,u)=f(u)$.
\end{description}
\item Example of a numerical flux (Lax-Friedrichs):
\begin{description}
\item $$\hat{f}^{LF}(a,b)=\frac{1}{2}\left[f(a)+f(b)-\alpha(b-a)\right],\quad \alpha=\max|f'(u)|$$
In particular, if $f(u)=u \Rightarrow \hat{f}=\hat{f}(u^-_{i+1/2},u^+_{i+1/2})=u^-_{i+1/2}$ (upwind).
\item \
\end{description}
\item Semi-discretization scheme:
$$\frac{d}{dt}u^{(l)}_i +\frac{1}{a_l}\left(-\int_{I_i}f(u^h)\frac{d}{dx}v^{(i)}_l(x)\,dx +\hat{f}_{i+1/2}v^{(i)}_l(x_{i+1/2})-\hat{f}_{i-1/2}v^{(i)}_l(x_{i-1/2})\right)=0$$
We write the above as $$u_t=L(u)$$
\item Time discretization (Runge-Kutta):
\begin{align*}
u^{(1)} &=u^n+\Delta t L(u^n) \\
u^{(2)} &=\frac{3}{4}u^n+\frac{1}{4}u^{(1)}+\frac{1}{4}\Delta t L(u^{(1)}) \\
u^{n+1} &=\frac{1}{3}u^n+\frac{2}{3}u^{(2)}+\frac{2}{3}\Delta t L(u^{(2)})
\end{align*}
\end{enumerate}
\boxed{\emph{\textbf{Cell entropy inequality and $L^2$-stability}}}
\begin{description}
\item $$U(u)_t+F(u)_x \leqslant 0$$ where $U(u)$ is an entropy with $U''(u)\geqslant 0$ and the corresponding entropy flux is $F(u)=\int^u U'(s)f'(s)\,ds$.\\
\item Theorem:\\
(1) The solution $u_h$ of the semi-discrete DG scheme satisfies the following cell entropy inequality
$$\frac{d}{dt}\int_{I_j} U(u_h)\,dx +\hat{F}_{j+1/2} -\hat{F}_{j-1/2}\leqslant 0$$
for the square entropy $U(u) = u^2/2$ and for some consistent entropy flux $\hat{F}_{j+1/2}=\hat{F}(u^-_{h,j+1/2},u^+_{h,j+1/2})$, with $\hat{F}(u,u)=F(u)=uf(u)-\int^u f(s)\,ds$.\\
(2) Furthermore, $u_h$ satisfies the following $L^2$-stability:
$$\frac{d}{dt}\int_0^1 (u_h)^2\,dx \leqslant 0$$
$$\|u_h(\cdot,t)\|_{L^2}\leqslant \|u_h(\cdot,0)\|_{L^2}\leqslant \|u_0\|_{L^2}$$
\end{description}
\boxed{\emph{\textbf{Limiter and total variation stability}}}
\begin{description}
\item Requirements on the limiter:\\
\begin{itemize}
\item Maintain local conservation: keep the cell average;
\item
Do not degrade the accuracy of the scheme.
\end{itemize}
\item Modify the interface values by
\begin{align*}
\tilde{u}_i^{(mod)} &=m(\tilde{u}_i,\Delta_+ \bar{u}_i,\Delta_-\bar{u}_i) \\
\tilde{\tilde{u}}_i^{(mod)} &=m(\tilde{\tilde{u}}_i,\Delta_+ \bar{u}_i,\Delta_-\bar{u}_i)
\end{align*}
with $m$ the minmod function. Then set
$$u_h^{(mod)}(x^-_{i+1/2})=\bar{u}_{i}+\tilde{u}^{(mod)}_i,\qquad u_h^{(mod)}(x^+_{i-1/2})=\bar{u}_{i}-\tilde{\tilde{u}}^{(mod)}_i$$
The new $u_h$ is the polynomial on $I_i$ with the same cell average $\bar{u}_i$ and the modified interface values.\\
\item TVD criterion (Harten's lemma): if a scheme can be written in the form
$$u^{n+1}_i=u^n_i +C_{i+1/2}\Delta_+ u^n_i -D_{i-1/2}\Delta_-u^n_i$$
with periodic or compactly supported boundary conditions, where $C_{i+1/2}$ and $D_{i-1/2}$ may be nonlinear functions of the grid values $u^n_j$ for $j = i-p,\dots,i+q$ with some $p, q\geqslant 0$, satisfying
$$C_{i+1/2}\geqslant 0,\quad D_{i+1/2}\geqslant 0,\quad C_{i+1/2}+D_{i+1/2}\leqslant 1,$$
then the scheme is TVD, namely
$$TV(u^{n+1}) \leqslant TV(u^n)$$
where $$TV(u)=\sum_i |\Delta_+ u_i|$$
\item The solution $u^n_h$ of the DG scheme with the forward Euler time discretization, using the limiter discussed above, is total variation diminishing in the means (TVDM), that is,
$$TVM(u^{n+1}_h) \leqslant TVM(u^n_h)$$
with the semi-norm defined as $TVM(u_h) = \sum_i|\Delta_+\bar{u}_i|$. A similar result extends to high-order SSP time discretizations.
\item Does the limiter degrade accuracy?
\\
\begin{enumerate}
\item In a smooth, monotone region: assume $u_h$ is an approximation to a locally smooth function $u$; then
$$\tilde{u}_i=\frac{1}{2}u_x(x_i)\Delta x_i +o(h^2),\qquad \tilde{\tilde{u}}_i=\frac{1}{2}u_x(x_i)\Delta x_i+o(h^2)$$
while
\begin{align*}
\Delta_+\bar{u}_i &=\frac{1}{2}u_x(x_i)(\Delta x_i+\Delta x_{i+1})+o(h^2) \\
\Delta_-\bar{u}_i &=\frac{1}{2}u_x(x_i)(\Delta x_i+\Delta x_{i-1})+o(h^2) \\
\tilde{u}^{(mod)}_i &=m(\tilde{u}_i,\Delta_+\bar{u}_i,\Delta_-\bar{u}_i)\approx \tilde{u}_i
\end{align*}
so the limiter returns the unmodified value and accuracy is kept.
\item At a smooth extremum: accuracy loss! PICTURE
$$\Delta_+\bar{u}_i\,\Delta_-\bar{u}_i<0,\quad \text{and}\ \tilde{u}^{(mod)}_i =m(\tilde{u}_i,\Delta_+\bar{u}_i,\Delta_-\bar{u}_i)=0$$
\boxed{\text{Error estimate:}}
\item Let $u$ be the smooth exact solution of the conservation law $u_t + u_x = 0$, and let $u_h$ be the numerical solution of the semi-discrete DG method; then $\parallel u-u_h\parallel_{L^2} \leqslant Ch^{k+1}$, where the constant $C$ depends on the exact solution and is independent of $h$.
\end{enumerate}
\end{description}
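The semi-discretization $u_t=L(u)$ and the SSP Runge-Kutta stepping above can be exercised on the simplest instance: piecewise-constant ($k=0$) DG for $u_t+u_x=0$ with periodic boundaries, where the upwind flux $\hat{f}(u^-,u^+)=u^-$ reduces the scheme to first-order upwind finite volumes. A minimal Python sketch, assuming NumPy; all names are illustrative:

```python
import numpy as np

def L(u, dx):
    # k=0 DG for u_t + u_x = 0 with the upwind flux f_hat(u-, u+) = u-:
    # du_i/dt = -(u_i - u_{i-1}) / dx, periodic boundaries via np.roll
    return -(u - np.roll(u, 1)) / dx

def ssp_rk3(u, dt, dx):
    # the three-stage SSP Runge-Kutta scheme quoted above
    u1 = u + dt * L(u, dx)
    u2 = 0.75 * u + 0.25 * u1 + 0.25 * dt * L(u1, dx)
    return u / 3.0 + (2.0 / 3.0) * u2 + (2.0 / 3.0) * dt * L(u2, dx)

def tv(u):
    # total variation TV(u) = sum_i |u_{i+1} - u_i|, periodic
    return np.sum(np.abs(np.roll(u, -1) - u))
```

Advecting a square wave at CFL number 0.5, $TV(u^{n+1})\leqslant TV(u^n)$ holds at every step, as the TVD criterion quoted above predicts, since each Euler stage is a monotone convex combination.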
My new book, Meaning of the Wave Function: In Search of the Ontology of Quantum Mechanics, is now complete and will be published by Cambridge University Press early next year. Below are the book's preface, table of contents, and endorsements.
Mwf book by Shan Gao preface.pdf
A thoughtful survey of the many issues arising from the question: Does the quantum mechanical wave function represent physical reality? Gao's book will provoke stimulating discussions among physicists and philosophers of science.
--- Stephen L. Adler, Institute for Advanced Study, Princeton, author of Quantum Theory as an Emergent Phenomenon
This book discusses in great detail the fundamental problem of the conceptual and philosophical status of the quantum wavefunction. The remarkable deepness and completeness of the analysis and the appreciable and objective way of the author in discussing divergent positions render the book a useful tool of investigation. I unrestrictedly recommend this work to all people interested in contributing to clarify the most intriguing aspects of the measurement problem and the various obscure and debated aspects of quantum mechanics.
--- GianCarlo Ghirardi, University of Trieste and ICTP, Trieste, author of Sneaking a Look at God's Cards
A profound book for a deep question.
--- Nicolas Gisin, University of Geneva, author of Quantum Chance
The meaning of the wave function is a problem encountered by all students of quantum mechanics. The wave function is usually attributed just a probabilistic significance but might it have other characteristics - could it be a physical field? Shan Gao's admirable book is the first to present a comprehensive analysis of this fundamental topic. Drawing upon recent thinking, the author presents a readable up-to-the-minute assessment of the various viewpoints on the significance of the wave function. The book provides an excellent introduction to this key area in the foundations of physics.
--- Peter Holland, University of Oxford, author of The Quantum Theory of Motion
The reality or unreality of the quantum wave function is a topic of lively debate in the foundations of quantum mechanics. In this thoughtful and thought-provoking book, Shan Gao offers nothing less than a novel realist interpretation of the wave function, as describing the propensities of particles undergoing random discontinuous motion. It is a book that everyone interested in the ongoing debates will want to take a look at.
--- Wayne Myrvold, Western University, co-editor of Studies in History and Philosophy of Modern Physics
Postscript: a draft of the book can be downloaded from the following links:
https://arxiv.org/abs/1611.02738
http://philsci-archive.pitt.edu/12608/
https://gucas.academia.edu/ShanGao
Define a function:
% in_arg stands for input arguments, while out_arg stands for output arguments.
Syntax:
function [out_arg1,out_arg2,...] = function_name(in_arg1,in_arg2,...)
    description
end
Define a subfunction (also: local function) in the same file as the function (also: main function):
% Only the main function can be called from outside (from the Command Window or from scripts).
% The input/output arguments within the subfunction are unavailable to the main function and the Command Window; likewise, the input/output arguments within the main function are unavailable to the subfunction and the Command Window.
% See e.g. below: we give input arguments to the function myRand and get a as an output. Then the value of a is copied and passed to M as an input argument of the function sumAllElements, and we get summa as an output of sumAllElements. Finally, the value of summa is copied and passed to s as the other output of the function myRand.
% Make the function polymorphic by accepting a variable number of arguments.
Two built-in functions:
nargin : returns the number of actual input arguments that the function was called with.
nargout : returns the number of output arguments that the function caller requested.
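Python has no nargin, but the same kind of polymorphism can be sketched with *args. The function below is my own illustration, loosely mimicking the myRand example:

```python
import random

def my_rand(*args):
    """Return a matrix of uniform random numbers; the behaviour depends on
    how many input arguments were supplied, as with MATLAB's nargin."""
    nargin = len(args)            # Python analogue of nargin
    if nargin == 0:
        rows, cols = 1, 1         # my_rand()     -> 1x1
    elif nargin == 1:
        rows = cols = args[0]     # my_rand(n)    -> nxn
    else:
        rows, cols = args[:2]     # my_rand(m, n) -> mxn
    return [[random.random() for _ in range(cols)] for _ in range(rows)]
```

There is no direct analogue of nargout: a Python function always returns one object, so callers unpack a tuple instead.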
Darwin wrote: "the mind is function of body". The natural reading is "the mind is a function of the body". But "function" also means a mathematical function, and it is amusing to read it that way: the body we are born with, together with the words we hear and the books we read, is transformed by the function called "mind" into handicrafts, thoughts, books, paintings, music, and all the joys and sorrows of human life.
y = f(x)
output = mind(body, input)
The formula is as follows:
Dm = (N - sqrt(sum(Ni^2))) / (N - sqrt(N))
where Dm is the McIntosh diversity index, N is the total number of individuals in the plant population, and Ni is the number of individuals of the i-th species.
R function as follows:
#An R function for calculating the McIntosh diversity index
mcIntosh <- function(x, MARGIN = 1){
    x <- drop(as.matrix(x))
    if (length(dim(x)) > 1) {
        total <- apply(x, MARGIN, sum)
        Dm <- (total - (apply(x^2, MARGIN, sum))^0.5) / (total - total^0.5)
    } else {
        Dm <- (sum(x) - (sum(x^2))^0.5) / (sum(x) - sum(x)^0.5)
    }
    return(Dm)
}
Example:
library(vegan)
data(BCI)
mcIntosh(BCI)
mcIntosh.txt
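The same index is a direct transcription of the formula in Python; the function name is mine:

```python
import math

def mcintosh(counts):
    """McIntosh diversity index:
    Dm = (N - sqrt(sum Ni^2)) / (N - sqrt(N)), N = total individuals.
    Requires N > 1, otherwise the denominator vanishes."""
    n = sum(counts)
    u = math.sqrt(sum(c * c for c in counts))
    return (n - u) / (n - math.sqrt(n))
```

A single-species sample gives 0 (e.g. counts [4]), while a perfectly even sample gives 1 (e.g. counts [1, 1, 1]), so the index increases with evenness.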
Can "spliceosome" work win a Nobel Prize a second time?
Spliceosome Structure and Function7.pdf
Nobel laureates for the spliceosome: Phillip Sharp and Richard J. Roberts were awarded the 1993 Nobel Prize in Physiology or Medicine for their discovery of introns and the splicing process.
A spliceosome is a large and complex molecular machine found primarily within the splicing speckles of the cell nucleus of eukaryotic cells. The spliceosome is assembled from snRNAs and protein complexes. The spliceosome removes introns from a transcribed pre-mRNA, a kind of primary transcript. This process is generally referred to as splicing. Only eukaryotes have spliceosomes, and metazoans have a second spliceosome, the minor spliceosome.
The dynamic description of spliceosome assembly has not yet been awarded a prize, and may no longer be worth a Nobel Prize.
Below is the paper by Academician Yigong Shi; compare it with the figure above (the process already known) and judge for yourself whether his work is really novel.
Science. 2015 Aug 20. pii: aac8159.
Structural basis of pre-mRNA splicing.
Hang J(1), Wan R(1), Yan C(1), Shi Y(2).
Author information: (1)Ministry of Education Key Laboratory of Protein Science, Tsinghua-Peking Joint Center for Life Sciences, Center for Structural Biology, School of Life Sciences, Tsinghua University, Beijing 100084, China. (2)Ministry of Education Key Laboratory of Protein Science, Tsinghua-Peking Joint Center for Life Sciences, Center for Structural Biology, School of Life Sciences, Tsinghua University, Beijing 100084, China. shi-lab@tsinghua.edu.cn.
Splicing of pre-mRNA is performed by the spliceosome. In the cryo-EM structure of the yeast spliceosome, U5 snRNP acts as a central scaffold onto which U6 and U2 snRNAs are intertwined to form a catalytic center next to Loop I of U5 snRNA. Magnesium ions are coordinated by conserved nucleotides in U6 snRNA. The intron lariat is held in place through base pairing interactions with both U2 and U6 snRNAs, leaving the variable-length middle portion on the solvent-accessible surface of the catalytic center.
The protein components of the spliceosome anchor both 5'- and 3'-ends of the U2 and U6 snRNAs away from the active site, direct the RNA sequences, and allow sufficient flexibility between the ends and the catalytic center. Thus, the spliceosome is in essence a protein-directed ribozyme, with the protein components essential for the delivery of critical RNA molecules into close proximity of one another at the right time for the splicing reaction.
Title: Schur functions (S-functions)
Speaker: Zhou Chichun
Time: 10:15 a.m., Wednesday, June 3, 2015
Place: Room 308, Teaching Building 16
1. Introduction to Schur functions. (S-functions, also known as symmetric functions, are a class of functions that remain invariant under exchange of variables. S-functions bear an important relation to the permutation group and to the group representations of U(n).)
2. The relation between Schur functions and the permutation group.
3. The relation between Schur functions and the representations of U(n).
References:
"The Theory of Group Characters and Matrix Representations of Groups", Dudley E. Littlewood
"Symmetric Functions and Hall Polynomials", I. G. Macdonald
"New S-function Series and Application of Group Theory in Supersymmetries", M. Yang
gnuplot script used to generate the above plot:

# axis ranges were lost in extraction; values inferred from the arrow endpoints
set yrange [-2.7:2.8]
set xrange [-5:5.5]
unset border
set arrow from -5,0 to 5.5,0
set arrow from 0,-2.7 to 0,2.8
set key at -5,2.2
set grid
set size square
plot sin(x) w p pt 6, cos(x) w p pt 7, tan(x) lw 2, 1/tan(x) lw 2
==============
Note that there are no cosecant, secant, and cotangent functions in gnuplot - but you can create them yourself by writing:
gnuplot> cosec(x) = 1 / sin(x)
gnuplot> sec(x) = 1 / cos(x)
gnuplot> cot(x) = 1 / tan(x)
More information about gnuplot available at:
http://faraday.elec.uow.edu.au/subjects/spring/ecte212/gnuplot.pdf
International research progress and bibliometric analysis of resveratrol's anti-aging effects

Search terms used for the analysis: SIRT1 and resveratrol; Aging
Analysis platform: http://www.gopubmed.org/web/gopubmed/1?WEB0fepw17wls3d5I0I1I00h001000j10021000300.y
Term: Aging
Description: The gradual irreversible changes in structure and function of an organism that occur as a result of the passage of time.
Synonyms: Biological Aging, Senescence

The bibliometric results are as follows:

Top Years (Publications): 2010 (29), 2012 (23), 2008 (14), 2009 (12), 2011 (9), 2007 (7), 2005 (4), 2006 (3), 2003 (3), 2013 (2), 2004 (1)
Top Countries (Publications): United States (39), China (10), Japan (8), France (8), Spain (4), South Korea (4), Australia (4), Italy (4), Taiwan (4), United Kingdom (3), Netherlands (2), Thailand (2), Canada (2), Hong Kong (1), Chile (1), Switzerland (1), Greece (1), Finland (1), Poland (1), Sweden (1)
Top Cities (Publications): Farmington (6), Boston (4), Sydney (4), Taipei (4), Barcelona (3), Philadelphia (3), Beijing (3), New York City (3), Rochester (3), Paris (3), Tianjin (2), Bethesda (2), Fukuoka (2), Tokyo (2), Kantharalak (2), Lyon (2), Seoul (2), Cambridge (2), Montréal (2), Hong Kong (1)
Top Journals (Publications): Plos One (4), Biochim Biophys Acta (4), Mech Ageing Dev (3), Aging (3), Biochem Biophys Res Commun (3), Cell (3), Arterioscler Thromb Vasc Biol (3), Nature (3), Age (Dordr) (2), Free Radic Biol Med (2), J Biol Chem (2), Heart Fail Rev (2), Altern Med Rev (2), Arch Biochem Biophys (2), Curr Aging Sci (2), Int J Mol Med (2), Hum Reprod (1), Cell Metab (1), Am J Physiol Heart Circ Physiol (1), Biochem Pharmacol (1)
Top Authors (Publications): Morris B (3), Mukherjee S (2), Das D (2), Guarente L (2), Puigserver P (2), Sinclair D (2), Liu L (1), Liu M (1), Yin Y (1), Ye X (1), Zeng M (1), Zhao Q (1), Keefe D (1), Zhou Z (1), Liu B (1), Ghosh S (1), Yang X (1), Zheng H (1), Liu X (1), Wang Z (1)
Top Terms (Publications): NAD-dependent deacetylase sirtuin-1 (80), Humans (78), Stilbenes (73), Animals (73), Sirtuins (69), Longevity (67), Aging (60), Proteins (50), Mice (42), trihydroxystilbene synthase activity (39), Genes (37), Metabolism (37), metabolic process (37), Caloric Restriction (32), NAD (31), Cell Aging (24), senescence (24), Histone Deacetylases (24)

Top 20 Important Phrases (Publications): 1. sirtuin 1 (85); 2. in vitro (84); 3. oxidative stress (59); 4. histone deacetylase (46); 5. protein kinase (43); 6. gene expression (38); 7. caloric restriction (30); 8. transcription factor (28); 9. insulin sensitivity (27); 10. insulin resistance (25); 11. endothelial cells (25); 12. cell survival (24); 13. cell death (22); 14. stem cells (20); 15. health benefits (19); 16. cell lines (18); 17. adipose tissue (17); 18. skeletal muscle (17); 19. neuroprotective effects (17); 20. animal models (14)

Top MeSH headings (Publications): 1. Stilbenes/pharmacology (267); 2. Sirtuins/metabolism (263); 3. Sirtuin 1/physiology (188); 4. Sirtuin 1/metabolism (157); 5. Sirtuins/genetics (139); 6.
Sirtuin 1/genetics (85); 7. Antioxidants/pharmacology (75); 8. Gene Expression Regulation/drug effects (65); 9. Stilbenes/therapeutic use (61); 10. Sirtuins/antagonists & inhibitors (49); 11. Signal Transduction/drug effects (42); 12. Enzyme Inhibitors/pharmacology (39); 13. Apoptosis/drug effects (34); 14. Enzyme Activation/drug effects (31); 15. Oxidative Stress/drug effects (30); 16. Reactive Oxygen Species/metabolism (27); 17. Sirtuin 1/antagonists & inhibitors (26); 18. Forkhead Transcription Factors/metabolism (25); 19. Acetylation/drug effects (23); 20.
Niacinamide/pharmacology (23)
Data source: http://pubmedpro.cn/Pubmed/Statistic

Resveratrol (Res) is a non-flavonoid polyphenol. Natural resveratrol occurs in many plants as a phytoalexin produced to resist invading pathogens. Studies have found that resveratrol is the active ingredient by which certain herbal medicines treat inflammation, lipid-metabolism disorders, and heart disease. Further research has shown that resveratrol has anticancer, antioxidant, antibacterial, and anti-inflammatory effects and can help prevent food allergies, so it is widely used in medicine, health products, food, and cosmetics.

[A new drug may help extend the human lifespan to 150 years] Extending the human lifespan has long been a goal of many scientists. According to media reports, the pharmaceutical giant GlaxoSmithKline is testing new drugs which, if successful, might extend life expectancy to 150 years. The main ingredient is synthetic resveratrol, which is claimed to delay aging by boosting the activity of the SIRT1 histone deacetylase, the so-called longevity gene. http://t.cn/zjn3LoG
British pharmaceutical giant developing anti-aging drugs hoped to extend lifespan to 150 years.
Animal tests: lifespan of laboratory mice extended by 15%.
Although research on resveratrol has been going on for a decade, its scientific basis remains controversial. Nevertheless, synthetic resveratrol has shown satisfactory results in clinical tests for some conditions, including cancer, cardiovascular disease, heart failure, type 2 diabetes, Alzheimer's disease, Parkinson's disease, fatty liver, cataract, osteoporosis, arthritis, and other age-related conditions. David Sinclair, a Harvard professor of genetics involved in the project, said: "Fundamentally, these drugs can treat one condition, but unlike current drugs, they can also prevent twenty other conditions. In practical terms, they can delay aging."
Reportedly, these synthetic "activators" have already seen limited clinical use in patients with type 2 diabetes and psoriasis. Researchers found that the drugs benefited the metabolism of type 2 diabetes patients and reduced skin reddening in psoriasis patients. In animal tests, overweight laboratory mice given synthetic resveratrol reached twice the exercise capacity of ordinary mice and lived 15% longer.
http://news.sohu.com/20130312/n368468265.shtml
The anti-aging mechanism of resveratrol elucidated; the University of Hong Kong aims at anti-aging drugs
http://www.biodiscover.com/news/hot/research/103322.html
The New York Times reports fraud in a red-wine anti-aging paper
http://www.biodiscover.com/news/hot/research/100452.html
Scientists claim discovery of an "elixir of life": humans could live to 1000
http://society.stnn.cc/qiwen/201010/t20101009_1428943.html
Which journal should we submit our result to?

In recent years we have found three elementary explicit functions that are better than the "Fisher z-transform" commonly used in current Chinese and international textbooks on mathematical statistics.

(1) The first has a particularly simple form and has been accepted by Transactions of Tianjin University, an EI core journal. Its maximum error is 70.7% of that of the Fisher z-transform, and its cumulative error is 141% of that of the Fisher z-transform. This quadratic radical function is very simple and is useful for improving a computer algorithm for an important problem; it may even find its way into textbooks on the design and analysis of computer algorithms. Is that a bit of wishful thinking?

(2) The second has a slightly more complicated form and has been accepted by Communications in Statistics - Theory and Methods, currently a fourth-quartile SCI journal. Its maximum error is 21.4% of that of the Fisher z-transform, and its cumulative error is 8.90% of that of the Fisher z-transform. This sigmoid-like function is the first to beat the Fisher z-transform: it can replace it with better accuracy, and it also offers important insight into understanding the Fisher z-transform itself.

Now the third elementary explicit function is ready for submission, and we do not know where to submit it. Your advice would be appreciated - thank you!

The third function's maximum error is about 18% of that of the Fisher z-transform, and its cumulative error is about 4.7%. Its form "should" be simpler than that of the second function.

Besides our own work, a recent related paper is Yun, Beong In, "Approximation to the cumulative normal distribution using hyperbolic tangent based functions", Journal of the Korean Mathematical Society, 2009, 46(6): 1267-1276, which surveys related work of the last few decades. This is also a fourth-quartile SCI journal.

----------------- Background -----------------

Sir Ronald Aylmer Fisher
Photograph courtesy of Professor A W F Edwards by kind permission of Joan Fisher Box
http://www.galtoninstitute.org.uk/Newsletters/GINL0306/university_of_cambridge_eugenics.htm

In the Encyclopaedia Britannica
http://www.britannica.com/EBchecked/topic/208658/Sir-Ronald-Aylmer-Fisher
Sir Ronald Aylmer Fisher, byname R.A. Fisher (born February 17, 1890, London, England - died July 29, 1962, Adelaide, Australia), British statistician and geneticist who pioneered the application of statistical procedures to the design of scientific experiments.

In The MacTutor History of Mathematics archive
http://www-history.mcs.st-andrews.ac.uk/Biographies/Fisher.html

The Fisher z-transform in the current online version of the Encyclopedia of Mathematics, entry "Correlation (in statistics)"
http://www.encyclopediaofmath.org/index.php/Correlation_(in_statistics)
One usually uses the Fisher z-transform, with the sample correlation r replaced by z according to the formula
z = (1/2) ln((1+r)/(1-r)).
Even at relatively small sample sizes the distribution of z is a good approximation to the normal distribution with mathematical expectation (1/2) ln((1+rho)/(1-rho)) and variance 1/(n-3).
On this basis one can now define approximate confidence intervals for the true correlation coefficient rho. For the distribution of the sample correlation ratio and for tests of the linearity hypothesis for the regression, see the references below.

References:
H. Cramér, "Mathematical Methods of Statistics", Princeton Univ. Press (1946)
B.L. van der Waerden, "Mathematische Statistik", Springer (1957)
M.G. Kendall, A. Stuart, "The Advanced Theory of Statistics", 2. Inference and Relationship, Griffin (1979)
S.A. Aivazyan, "Statistical Research on Dependence", Moscow (1968) (In Russian)

Related links:
"Better than the Fisher z-transform! (1)" http://bbs.sciencenet.cn/home.php?mod=space&uid=107667&do=blog&id=603297
"Better than the Fisher z-transform! (2)" http://blog.sciencenet.cn/home.php?mod=space&uid=107667&do=blog&id=657534
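The encyclopedia recipe above is only a few lines of code. A minimal Python sketch (function name mine) that turns it into an approximate 95% confidence interval for rho:

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """z = atanh(r) = (1/2) ln((1+r)/(1-r)) is approximately normal with
    mean atanh(rho) and variance 1/(n-3); build the interval on the z
    scale and map it back with tanh."""
    z = math.atanh(r)
    half = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)
```

Because tanh is monotone, the back-transformed interval always stays inside (-1, 1), which a naive interval r ± 1.96·se does not guarantee.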
PRO sf,time,rate,lag,sf,evenly=evenly $
  ,delta=delta,logdelta=logdelta,lagsample=lagsample $
  ,mincouples=mincouples
;+
; NAME:
;     sf
;
; PURPOSE:
;     Compute the structure function of a lightcurve, according to the
;     definitions given by Nandikotkur et al., 1997, AIP Conf. Proc. 410,
;     1361P for the evenly sampled case and Paltani et al., 1997, A&A
;     327, 539P for the unevenly sampled case.
;
; CATEGORY:
;     time series analysis
;
; CALLING SEQUENCE:
;     sf,time,rate,lag,sf
;
; INPUTS:
;     time : the times at which the time series was measured
;     rate : the corresponding count rates
;
; OPTIONAL INPUTS:
;     none
;
; KEYWORD PARAMETERS:
;     delta     : width for the lag rebinning in the unevenly sampled
;                 case (if not set, is equal to 1e-6)
;     logdelta  : delta in logarithmic expression
;     evenly    : if set, compute the estimate of the structure
;                 function for evenly sampled data; otherwise
;                 compute the estimate for unevenly sampled data
;     lagsample : sample of time lags; if not set, all possible
;                 lags in the time sample are taken as the lag sample
;     mincouples: minimum number of couples in a bin of the
;                 structure function for the bin not to be ignored;
;                 if not set, is equal to 1
;
; OUTPUTS:
;     sf  : the structure-function values at each time lag
;     lag : sample of time lags
;
; OPTIONAL OUTPUTS:
;     none
;
; COMMON BLOCKS:
;     none
;
; SIDE EFFECTS:
;     none
;
; RESTRICTIONS:
;     none
;
; PROCEDURE:
;     none
;
; EXAMPLE:
;     sf,time,rate,lag,sf,delta=2.
;
; MODIFICATION HISTORY:
;     Version 1.0, 1998.12.21, Sara Benlloch (IAAT)
;     Version 1.1, 1999.01.07, lag sample in the unevenly sampled case
;                  modified. ( benlloch@astro.uni-tuebingen.de )
;     Version 1.2, 1999.01.26, new keywords: logdelta, lagsample,
;                  mincouples.
;-
;;
;; lightcurve (lc) parameters
;;
;; number of points in the time series
npt = n_elements(time)
;; subtract mean from data
rat = rate-mean(rate)
;;
;; Structure Function
;;
IF (keyword_set(evenly)) THEN BEGIN
    ;; for an evenly-sampled discrete time series (ti=i*bint; xi)
    bt = (time(npt-1)-time(0))/(npt-1)
    ;; Estimation of the structure function:
    ;;   sf(lag) = 1/npt(lag) SUM_{i=0,...npt-1-lag} (x(i+lag)-x(i))^2
    ;; the average being taken over all measurements separated by the
    ;; respective lag, and npt(lag) being the number of such pairs
    ;; ( Nandikotkur et al., 1997, AIP Conf. Proc. 410, 1361P )
    sf = dblarr(2,npt-1)
    FOR lag=1L,npt-1 DO BEGIN ; ( lag = 1,...,npt-1 )
        sf(1,lag-1) = total((rat(lag:npt-1)-rat(0:npt-1-lag))^2)/(npt-lag)
        sf(0,lag-1) = npt-lag
    END
    ;; Lag sample
    lag = bt*findgen(npt-1)+bt
ENDIF ELSE BEGIN
    ;; for an unevenly sampled time series (ti,xi), i=1,...npt with
    ;; arbitrary ti
    ;; Estimation of the structure function:
    ;;   sf(lag,delta) = (1/npt(lag,delta)) SUM_{(i,j) in R} (xj-xi)^2
    ;; such that npt(lag,delta) is the number of couples that satisfy
    ;; the relationship
    ;;   R := ( lag-delta/2 < tj-ti < lag+delta/2 )
    ;; ( Paltani et al., 1997, A&A 327, 539P )
    ;; logarithmic case: R := ( | log(|tj-ti|)-log(lag) | <= logdelta )
    ;; time and rate couples : tj-ti and xj-xi for all j>i
    t_couple = time-shift(time,1)
    r_couple = rat-shift(rat,1)
    time_couples = t_couple(1:npt-1)
    rate_couples = r_couple(1:npt-1)
    FOR i=2L,npt-1 DO BEGIN
        t_couple = time-shift(time,i)
        r_couple = rat-shift(rat,i)
        time_couples = [time_couples,t_couple(i:npt-1)]
        rate_couples = [rate_couples,r_couple(i:npt-1)]
    ENDFOR
    ;; lag sample
    IF keyword_set(lagsample) THEN BEGIN
        lag = lagsample
    ENDIF ELSE BEGIN
        lag = time_couples(sort(time_couples))
        lag = lag(uniq(lag,sort(lag)))
    ENDELSE
    ;; width for the lag rebinning
    IF (n_elements(delta) EQ 0) THEN delta=1e-6
    ;; structure function estimate
    sf = dblarr(2,n_elements(lag))
    FOR t=0L,n_elements(lag)-1 DO BEGIN
        IF keyword_set(logdelta) THEN BEGIN
            rel = abs( alog10(time_couples) - alog10(lag(t)) )
            relationship = where( rel LE logdelta , count )
        ENDIF ELSE BEGIN
            relationship = where( (time_couples LT lag(t)+delta/2.)
            $ AND (time_couples GT lag(t)-delta/2.), count )
        ENDELSE
        IF (count NE 0) THEN BEGIN
            sum = 0.
            sum_couples = rate_couples(relationship)
            FOR h=0L,count-1 DO BEGIN
                sum = sum + sum_couples(h)^2.
            ENDFOR
            sf(1,t) = (1./count)*sum
        ENDIF ELSE BEGIN
            sf(1,t) = 0.
        ENDELSE
        sf(0,t) = count
    ENDFOR
    result = where( sf(1,*) NE 0. , count )
    IF count NE 0 THEN BEGIN
        lag = lag(result)
        sf = sf(*,result)
    ENDIF
    IF keyword_set(mincouples) THEN BEGIN
        result = where( sf(0,*) GE mincouples , count )
        IF count NE 0 THEN BEGIN
            lag = lag(result)
            sf = sf(*,result)
        ENDIF
    ENDIF
ENDELSE
return
END
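The unevenly sampled estimator translates directly to Python. A minimal sketch assuming NumPy (function and variable names mine) that bins the pair differences the same way the IDL loop does:

```python
import numpy as np

def structure_function(t, x, lags, delta):
    """SF(tau) = mean of (x_j - x_i)^2 over all pairs (i, j), j > i,
    whose separation t_j - t_i falls in (tau - delta/2, tau + delta/2)."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                       # subtract mean, as in the IDL code
    i, j = np.triu_indices(t.size, k=1)    # all pairs with j > i
    dt = t[j] - t[i]
    dx2 = (x[j] - x[i]) ** 2
    sf = np.empty(len(lags))
    counts = np.empty(len(lags), dtype=int)
    for k, tau in enumerate(lags):
        sel = np.abs(dt - tau) < delta / 2.0
        counts[k] = sel.sum()
        sf[k] = dx2[sel].mean() if counts[k] else 0.0
    return sf, counts
```

Bins whose count falls below a minimum number of couples can then be dropped, mirroring the mincouples keyword.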
PRO acf,time,rate,lag,acf,covf=covf
;+
; NAME:
;     acf
;
; PURPOSE:
;     Compute the autocorrelation function of an evenly sampled lightcurve
;
; CATEGORY:
;     time series analysis
;
; CALLING SEQUENCE:
;     acf,time,rate,lag,acf,covf=covf
;
; INPUTS:
;     time : the times at which the time series was measured
;     rate : the corresponding count rates
;
; OPTIONAL INPUTS:
;     none
;
; KEYWORD PARAMETERS:
;     none
;
; OUTPUTS:
;     lag : sample time lags
;     acf : autocorrelation values at each time lag, given in a
;           two-dimensional array, once without the correction
;           factor and once with it.
;
; OPTIONAL OUTPUTS:
;     covf : autocovariance values at each time lag, given in a
;            one-dimensional array.
;
; COMMON BLOCKS:
;     none
;
; SIDE EFFECTS:
;     none
;
; RESTRICTIONS:
;     evenly sampled time series
;
; PROCEDURE:
;     The autocorrelation function is computed according to the
;     approximation for evenly sampled data
;         covf(lag) = (1/(npt-lag)) SUM_{j=1,..npt-lag} (x_j * x_{j+lag})
;         acf(lag)  = covf(lag)/covf(0)
;     Additionally, a correction factor given by Sutherland et
;     al. 1978, ApJ 219, 1029P, is added.
;
; EXAMPLE:
;     acf,time,rate,lag,acf,covf=covf
;
; MODIFICATION HISTORY:
;     Version 1.0, 1998.12.21, Sara Benlloch IAAT, Joern Wilms IAAT.
;     ( benlloch@astro.uni-tuebingen.de )
;-
;;
;; lightcurve (lc) parameters
;;
;; dimension of lc in bins
npt = n_elements(time)
;; Autocorrelation is defined for series with mean zero
rat = rate-mean(rate)
;;
;; Autocovariance function estimated by
;;   covf(lag) = (1/(npt-lag)) SUM_{j=1,..npt-lag} (x_j * x_{j+lag})
;;
covf = dblarr(npt)
FOR lag=0,npt-1 DO BEGIN ; ( lag = 0,...,npt-1 )
    covf(lag) = total(rat(0:npt-1-lag)*rat(lag:npt-1)) / (npt-lag)
END
lag = (time(1)-time(0)) * findgen(npt)
;;
;; Autocorrelation function
;;   acf(lag) = covf(lag)/covf(0)
;;
acf = dblarr(2,npt)
acf(0,*) = covf / covf(0)
;;
;; correction factors (Sutherland et al. 1978, ApJ 219, 1029P)
;;
k1 = 1. / npt
k2 = 1. / (npt*npt)
FOR i=0,npt-1 DO BEGIN
    acf(1,i) = acf(0,i) + k1 - float(i)*k2
ENDFOR
END
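The autocovariance-based estimator and the correction factor can be sketched in Python as well, assuming NumPy (function name mine):

```python
import numpy as np

def acf_corrected(rate):
    """acf(lag) = covf(lag)/covf(0), where
    covf(lag) = (1/(n-lag)) * sum_j x_j x_{j+lag}, x = rate - mean(rate).
    The second returned array adds the Sutherland et al. (1978)
    correction 1/n - lag/n^2, as in the IDL code above."""
    x = np.asarray(rate, dtype=float)
    x = x - x.mean()
    n = x.size
    covf = np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(n)])
    acf0 = covf / covf[0]
    acf1 = acf0 + 1.0 / n - np.arange(n) / float(n * n)
    return acf0, acf1, covf
```

By construction acf0[0] is exactly 1, and covf[0] is the (biased) sample variance of the series.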
http://www.roguewave.com/Portals/0/products/imsl-numerical-libraries/fortran-library/docs/6.0/math/default.htm?turl=evesf.htm
EVESF (the eigenvalues are output in decreasing order of absolute value)
Computes the largest or smallest eigenvalues and the corresponding eigenvectors of a real symmetric matrix.
Required Arguments
NEVEC — Number of eigenvalues to be computed. (Input)
A — Real symmetric matrix of order N. (Input)
SMALL — Logical variable. (Input) If .TRUE., the smallest NEVEC eigenvalues are computed. If .FALSE., the largest NEVEC eigenvalues are computed.
EVAL — Real vector of length NEVEC containing the eigenvalues of A in decreasing order of magnitude. (Output)
EVEC — Real matrix of dimension N by NEVEC. (Output) The J-th eigenvector, corresponding to EVAL(J), is stored in the J-th column. Each vector is normalized to have Euclidean length equal to the value one.
Optional Arguments
N — Order of the matrix A. (Input) Default: N = SIZE(A,2).
LDA — Leading dimension of A exactly as specified in the dimension statement in the calling program. (Input) Default: LDA = SIZE(A,1).
LDEVEC — Leading dimension of EVEC exactly as specified in the dimension statement in the calling program. (Input) Default: LDEVEC = SIZE(EVEC,1).
FORTRAN 90 Interface
Generic: CALL EVESF (NEVEC, A, SMALL, EVAL, EVEC)
Specific: The specific interface names are S_EVESF and D_EVESF.
FORTRAN 77 Interface
Single: CALL EVESF (N, NEVEC, A, LDA, SMALL, EVAL, EVEC, LDEVEC)
Double: The double precision name is DEVESF.
Description
Routine EVESF computes the largest or smallest eigenvalues and the corresponding eigenvectors of a real symmetric matrix. Orthogonal similarity transformations are used to reduce the matrix to an equivalent symmetric tridiagonal matrix. Then, an implicit rational QR algorithm is used to compute the eigenvalues of this tridiagonal matrix. Inverse iteration is used to compute the eigenvectors of the tridiagonal matrix. This is followed by orthogonalization of these vectors.
The eigenvectors of the original matrix are computed by back transforming those of the tridiagonal matrix. The reduction routine is based on the EISPACK routine TRED2. See Smith et al. (1976). The rational QR algorithm is called the PWK algorithm. It is given in Parlett (1980, page 169). The inverse iteration and orthogonalization computation is discussed in Hanson et al. (1990). The back transformation routine is based on the EISPACK routine TRBAK1.

Comments
1. Workspace may be explicitly provided, if desired, by use of E5ESF/DE5ESF. The reference is:
   CALL E5ESF (N, NEVEC, A, LDA, SMALL, EVAL, EVEC, LDEVEC, WK, IWK)
   The additional arguments are as follows:
   WK — Work array of length 9N.
   IWK — Integer array of length N.
2. Informational errors
   Type  Code
   3     1     The iteration for an eigenvalue failed to converge. The best estimate will be returned.
   3     2     Inverse iteration did not converge. Eigenvector is not correct for the specified eigenvalue.
   3     3     The eigenvectors have lost orthogonality.

Example
In this example, a DATA statement is used to set A to a matrix given by Gregory and Karney (1969, page 55). The largest two eigenvalues and their eigenvectors are computed and printed. The performance index is also computed and printed. This serves as a check on the computations. For more details, see IMSL routine EPISF.

      USE EVESF_INT
      USE EPISF_INT
      USE UMACH_INT
      USE WRRRN_INT
      IMPLICIT NONE
!                                  Declare variables
      INTEGER    LDA, LDEVEC, N
      PARAMETER  (N=4, LDA=N, LDEVEC=N)
!
      INTEGER    NEVEC, NOUT
      REAL       A(LDA,N), EVAL(N), EVEC(LDEVEC,N), PI
      LOGICAL    SMALL
!                                  Set values of A
!
!                                  A = ( 5.0  4.0  1.0  1.0)
!                                      ( 4.0  5.0  1.0  1.0)
!                                      ( 1.0  1.0  4.0  2.0)
!                                      ( 1.0  1.0  2.0  4.0)
!
      DATA A/5.0, 4.0, 1.0, 1.0, 4.0, 5.0, 1.0, 1.0, 1.0, 1.0, 4.0, &
          2.0, 1.0, 1.0, 2.0, 4.0/
!                                  Find eigenvalues and vectors of A
      NEVEC = 2
      SMALL = .FALSE.
      CALL EVESF (NEVEC, A, SMALL, EVAL, EVEC)
!                                  Compute performance index
      PI = EPISF(NEVEC,A,EVAL,EVEC)
!                                  Print results
      CALL UMACH (2, NOUT)
      CALL WRRRN ('EVAL', EVAL, 1, NEVEC, 1)
      CALL WRRRN ('EVEC', EVEC, N, NEVEC, LDEVEC)
      WRITE (NOUT,'(/,A,F6.3)') ' Performance index = ', PI
      END

Output

        EVAL
     1       2
 10.00    5.00

          EVEC
         1        2
1   0.6325  -0.3162
2   0.6325  -0.3162
3   0.3162   0.6325
4   0.3162   0.6325

 Performance index =  0.031
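The Gregory and Karney test matrix is small enough to verify independently. A short NumPy sketch (not part of the IMSL documentation) reproduces the two largest eigenvalues reported in the output above:

```python
import numpy as np

# Gregory & Karney (1969) test matrix from the EVESF example
A = np.array([[5.0, 4.0, 1.0, 1.0],
              [4.0, 5.0, 1.0, 1.0],
              [1.0, 1.0, 4.0, 2.0],
              [1.0, 1.0, 2.0, 4.0]])

# eigh returns eigenvalues of a symmetric matrix in ascending order
w, V = np.linalg.eigh(A)
largest = w[::-1][:2]   # the two largest, like EVESF with SMALL=.FALSE.
```

The full spectrum of this matrix is {10, 5, 2, 1}, so `largest` agrees with the EVAL values 10.00 and 5.00 printed by the IMSL example.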
function [Xeofs,Xpcs,Var,Xrecon]=eof(X,neof)
% function [Xeofs,Xpcs,Var,Xrecon]=eof(X,neof)
% Wrapper function to perform PCA of a field X
% with TWO spatial dimensions. (This code will also
% work with a SINGLE spatial dimension, but it might
% be easier to directly call 'principal_component_analysis'.)
% This function basically transforms the data matrix into
% a standard 2-d data matrix, taking into account NaNs.
% Input:
%   X:      (x,y,t) or (x,t).
%   neof:   number of EOFs/PCs to return
% Output:
%   Xeofs:  3-d (x,y,e) or 2-d (x,e) matrix with spatial
%           patterns.
%   Xpcs:   2-d matrix (t,e) with principal components (scores)
%           in the columns
%   Var:    variance of each principal component
%   Xrecon: 3-d (x,y,t) or 2-d (x,t) matrix with reconstructed X
%           WITHOUT adding back the mean
% Note: Xpcs is the projection of X onto the corresponding EOF.
%       That is, Xpcs(it,ie) = nansum(nansum(X(:,:,it).*Xeofs(:,:,ie)))
%       Use this to project a physical field onto EOF space.

% If X only has 2 dimensions, assume the second dimension is time.
% Insert a 'y' dimension to conform with the rest of the function, which
% assumes X represents multiple realizations of a 2-D field.
if ndims(X)==2
   [nx,nt]=size(X);
   X=reshape(X,[nx 1 nt]);
end

% Flatten 2-d fields into a single vector
[nx,ny,nt]=size(X);
Xd=reshape(X,[nx*ny nt]); % (space,time)

% remove NaNs
inn=find(~isnan(Xd(:,1)));
Xd=Xd(inn,:); % (space,time)
n=size(Xd,1);

% normalize
for i=1:n
%  Xd(i,:)=Xd(i,:)-nanmean(Xd(i,:));
%  Xd(i,:)=detrend(squeeze(Xd(i,:)),'constant');
%  Xd(i,:)=detrend(squeeze(Xd(i,:)));
   Xd(i,:)=zscore(detrend(squeeze(Xd(i,:)),'constant'));
end

Xd=Xd';
Xd=double(Xd);

% PCA
if nargout>3
   [EOFs,Xpcs,Var,XR]=principal_component_analysis(Xd,neof);
else
   [EOFs,Xpcs,Var]=principal_component_analysis(Xd,neof);
end
% Xpcs (t,e)

% Reshape EOFs to physical space
Xeofs=repmat(NaN,[nx*ny neof]);
Xeofs(inn,:)=EOFs;
Xeofs=squeeze(reshape(Xeofs,[nx ny neof]));

% Equivalent loop-based reconstruction:
% Xrecon=zeros(nx,ny,nt);
% for ie=1:neof
%   for it=1:nt
%     Xrecon(:,:,it)=Xrecon(:,:,it)+Xpcs(it,ie)*Xeofs(:,:,ie);
%   end
% end
if nargout>3
   XR=XR'; % (x,t)
   Xrecon=repmat(NaN,[nx*ny nt]);
   Xrecon(inn,:)=XR;
   Xrecon=squeeze(reshape(Xrecon,[nx ny nt]));
end

function Xpcs=eof_project(Xanom,Xeofs,numd)
% function Xpcs=eof_project(Xanom,Xeofs,numd)
% Function to project a physical field (1 or 2 spatial
% dimensions) onto EOFs.
% Input:
%   Xanom: (x,y,t) or (x,t) - physical ANOMALY field
%   Xeofs: 3-d (x,y,e) or 2-d (x,e) matrix with spatial
%          patterns.
%   numd:  if Xanom is (x,t), MUST set numd=1
% Output:
%   Xpcs: 2-d matrix (t,e) with projection coefficients
%         (principal components) in the columns. The number
%         of projection coefficients returned is equal
%         to the number of EOFs passed in the input argument.
% Samar Khatiwala (spk@ldeo.columbia.edu)

if nargin<3 % default is 2-d
   numd=2;
end

if ndims(Xanom)==2 & numd==1 % data is (x,t)
   Xpcs=Xeofs'*Xanom;
   Xpcs=Xpcs';
else
%  nt=size(Xanom,3);
%  neofs=size(Xeofs,3);
%  Xpcs=repmat(NaN,[nt neofs]);
%  for it=1:nt
%    Xpcs(it,:)=squeeze(nansum(nansum(Xeofs.*repmat(Xanom(:,:,it),[1 1 neofs]))))';
%  end
%  for ie=1:neofs
%    Xpcs(:,ie)=squeeze(nansum(nansum(Xanom.*repmat(Xeofs(:,:,ie),[1 1 nt]))));
%  end
%  Flatten 2-d fields into a single vector
   [nx,ny,nt]=size(Xanom);
   neofs=size(Xeofs,3);
   Xanom=reshape(Xanom,[nx*ny nt]); % (x,t)
   Xeofs=reshape(Xeofs,[nx*ny neofs]); % (x,e)
%  remove NaNs
   inn=find(~isnan(Xanom(:,1)));
   Xanom=Xanom(inn,:); % (x,t)
   Xeofs=Xeofs(inn,:); % (x,e)
   Xpcs=Xeofs'*Xanom;
   Xpcs=Xpcs';
end

function [EOFs,PCs,Var,Xrecon]=principal_component_analysis(X,neof)
% function [EOFs,PCs,Var,Xrecon]=principal_component_analysis(X,neof)
% Function to do a principal component analysis of
% data matrix X.
% Input:
%   X:    (t,x) each row corresponds to a sample, each column
%         is a variable. (Each column is a time series of a
%         variable.)
%   neof: number of EOFs/PCs to return
% Output:
%   EOFs:   (x,e) matrix with EOFs (loadings) in the columns
%   PCs:    (t,e) matrix with principal components (scores) in the columns
%   Var:    variance of each principal component
%   Xrecon: (t,x) reconstructed X (WITHOUT adding back the mean)
%           To reconstruct: Xrecon = PCs*EOFs'
% Notes: (1) This routine will subtract off the mean of each
%            variable (column) before performing PCA.
%        (2) sum(var(X)) = sum(Var) = sum(diag(S).^2/(m-1))

% Samar Khatiwala (spk@ldeo.columbia.edu)

if strcmp(class(X),'single')
   disp('WARNING: Converting input matrix X to class DOUBLE')
   X=double(X);
end

% Center X by subtracting off column means
[m,n] = size(X);
%X = X - repmat(mean(X,1),m,1);
r = min(m-1,n); % max possible rank of X

% SVD
if nargin<2
   [U,S,EOFs]=svds(X,r);
else
   [U,S,EOFs]=svds(X,min(r,neof));
end
% EOFs: (x,e)
% U:    (t,e)

% Determine the EOF coefficients
PCs=U*S; % PCs=X*EOFs (t,e)

% compute variance of each PC
% Modified
EV=diag(S).^2/sum( diag(S).^2 );
Var = EV*100;

% Note: X = U*S*EOFs'
% EOFs are eigenvectors of X'*X = (m-1)*cov(X)
% sig^2 (=diag(S)^2) are eigenvalues of X'*X
% So tr(X'*X) = sum(sig_i^2) = (m-1)*(total variance of X)

if nargout>3
   Xrecon = PCs*EOFs'; % (t,x)
end
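The SVD recipe in `principal_component_analysis` carries over directly to NumPy. The sketch below (names are assumptions of this illustration, not the original MATLAB) centers the data, takes the SVD, and forms the scores as PCs = U*S:

```python
import numpy as np

def pca_svd(X, neof):
    """PCA of a data matrix X (t, x): rows are samples, columns variables.

    Returns (EOFs, PCs, Var): EOFs (x, e) loadings, PCs (t, e) scores,
    Var the percentage of variance carried by each component,
    mirroring the MATLAB routine above.
    """
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)                   # remove column means
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    EOFs = Vt[:neof].T                        # (x, e) loadings
    PCs = U[:, :neof] * s[:neof]              # (t, e) scores, PCs = U*S
    Var = 100.0 * s[:neof]**2 / np.sum(s**2)  # percent variance per PC
    return EOFs, PCs, Var
```

As the MATLAB comments note, reconstruction is `PCs @ EOFs.T`; when all components are kept this recovers the centered data exactly.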
function Y = runmean(X, m, dim, modestr) ;
% RUNMEAN - Very fast running mean (aka moving average) filter
%   For vectors, Y = RUNMEAN(X,M) computes a running mean (also known as
%   moving average) on the elements of the vector X. It uses a window of
%   2*M+1 datapoints. M is a positive integer defining (half) the size of the
%   window. In pseudo code:
%     Y(i) = sum(X(j)) / (2*M+1), for j = (i-M):(i+M), and i=1:length(X)
%
%   For matrices, Y = RUNMEAN(X,M) operates along the first non-singleton
%   dimension; Y = RUNMEAN(X,M,DIM) works along dimension DIM, and
%   Y = RUNMEAN(X,M,DIM,MODE) selects how the edges are treated:
%   'edge' (default), 'zero' or 'mean'.
%
%   Examples:
%     runmean(1:5,1)
%       % -> 1.33 2 3 4 4.67
%     runmean(1:5,1,[],'mean')
%       % -> 2 2 3 4 4
%     runmean(ones(10,7),3,2,'zero') ; % along columns, using mode 'zero'
%     A = rand(10,10) ; A(2,7) = NaN ;
%     runmean(A,3,2) ;
%       % -> column 7 is all NaN
%     runmean(1:2:10,100) % window larger than the vector: overall mean
%       % -> 5 5 5 5 5
%
%   This is an incredibly fast implementation of a running mean, since
%   execution time does not depend on the size of the window.
%
%   See also MEAN, FILTER

% for Matlab R13
% version 3.0 (sep 2006)
% Jos van der Geest
% email: jos@jasen.nl

% History:
%   1.0 (2003) created, after a snippet from Peter Acklam (?)
%   1.1 (feb 2006) made suitable for the File Exchange (extended help and
%       documentation)
%   1.2 (feb 2006) added a warning when the window size is too big
%   1.3 (feb 2006) improved help section
%   2.0 (sep 2006) working across a dimension of a matrix.
%   3.0 (sep 2006) several treatments of the edges.

% Acknowledgements: (sep 2006) Thanks to Markus Hahn for the idea of
% working in multi-dimensions and the way to treat edges.
error(nargchk(2,4,nargin)) ;

if ~isnumeric(m) || (numel(m) ~= 1) || (m < 0) || fix(m) ~= m,
    error('The window size (M) should be a positive integer') ;
end

if nargin == 2,
    dim = [] ;
    modestr = 'edge' ;
elseif nargin == 3,
    modestr = 'edge' ;
end

modestr = lower(modestr) ;
% check mode specifier
if ~ismember(modestr,{'edge','zero','mean'}),
    error('Unknown mode') ;
end

szX = size(X) ;
if isempty(dim),
    dim = min(find(szX>1)) ;
end

if m == 0 || dim > ndims(X),
    % easy
    Y = X ;
else
    mm = 2*m+1 ;
    if mm >= szX(dim),
        % if the window is larger than X, average all
        sz2 = ones(size(szX)) ;
        sz2(dim) = szX(dim) ;
        Y = repmat(mean(X,dim),sz2) ;
    else
        % here starts the real stuff
        % shift dimensions so that the desired dimension comes first
        [X, nshifts] = shiftdim(X, dim-1);
        szX = size(X) ;
        % make the rest of the dimensions columns, so we have a 2D matrix
        % (suggested by Markus Hahn)
        X = reshape(X,szX(1),[]) ;
        % pad the ends of the series according to the requested mode
        switch modestr
            case 'edge',
                Y = [X(ones(m,1),:) ; X ; X(end*ones(m,1),:)] ;
            case 'zero',
                Y = [zeros(m,size(X,2)) ; X ; zeros(m,size(X,2))] ;
            case 'mean',
                Y = [repmat(mean(X,1),m,1) ; X ; repmat(mean(X,1),m,1)] ;
        end
        % the cumsum trick (by Peter Acklam ?)
        Y = [zeros(1,size(Y,2)) ; cumsum(Y,1)] ;
        Y = (Y(mm+1:end,:)-Y(1:end-mm,:)) ./ mm ;
        % reshape into original size
        Y = reshape(Y,szX) ;
        % and re-shift the dimensions
        Y = shiftdim(Y,ndims(Y)-nshifts) ;
    end
end

% =====================
%  CODE OF VERSION 1.3
% =====================
% function Y = runmean(X,m) ;
% % RUNMEAN - Very fast running mean filter for vectors
% %   Y = RUNMEAN(X,M) computes a running mean on vector X using a window of
% %   2*M+1 datapoints. X is a vector, and M a positive integer defining
% %   (half) the size of the window. In pseudo code:
% %     Y(i) = sum(X(j)) / (2*M+1), for j = (i-M):(i+M), and i=1:length(X)
% %
% %   If the total window size (2M+1) is larger than the length of the vector,
% %   the overall average is returned.
% %
% %   Example:
% %     runmean(1:10,1)
% %
% %   This is an incredibly fast implementation of a running average, since
% %   execution time does not depend on the size of the window.
% %
% %   X should not contain NaNs (a NaN will result in an all-NaN result)
% %   At both ends the values of Y can be inaccurate, as the first and last
% %   values of X are used multiple times.
% %   See also MEAN

% % for Matlab R13
% % version 1.3 (feb 2006)
% % Jos van der Geest
% % email: jos@jasen.nl

% % History:
% %   1.0 (2003) created, after a snippet from Peter Acklam (?)
% %   1.1 (feb 2006) made suitable for the File Exchange (extended help and
% %       documentation)
% %   1.2 (feb 2006) added a warning when the window size is too big
% %   1.3 (feb 2006) improved help section

% error(nargchk(2,2,nargin)) ;
%
% sz = size(X) ;
%
% if numel(sz) ~= 2 || (min(sz) ~= 1),
%     error('X should be a vector') ;
% end
%
% if any(isnan(X)),
%     error('NaNs cannot be dealt with') ;
% end
%
% if ~isnumeric(m) || (numel(m) ~= 1) || (m < 0) || fix(m) ~= m,
%     error('The window size (M) should be a positive integer') ;
% elseif m == 0,
%     Y = X ;
%     return ;
% end
%
% mm = 2*m+1 ;
%
% if mm >= prod(sz),
%     % if the window is larger than X, average all
%     warning('Window size is larger than the length of the vector.')
%     Y = repmat(mean(X),sz) ;
% else
%     % the cumsum trick ...
%     Y = [X(ones(1,m)) X(:).' X(end*ones(1,m))] ;
%     Y = [0 cumsum(Y)] ;
%     Y = (Y(mm+1:end)-Y(1:end-mm)) / mm ;
%     Y = reshape(Y,sz) ;
% end
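The cumsum trick is language-independent: pad m values on each end, take one cumulative sum, and difference it at distance 2*M+1, so the cost is O(n) regardless of the window size. A hedged Python sketch of the same idea (the function name is invented here):

```python
import numpy as np

def runmean(x, m, mode="edge"):
    """Running mean with window 2*m+1 via the cumulative-sum trick."""
    x = np.asarray(x, dtype=float)
    if mode == "edge":                        # repeat first/last value
        pad = np.concatenate([np.full(m, x[0]), x, np.full(m, x[-1])])
    elif mode == "zero":                      # pad with zeros
        pad = np.concatenate([np.zeros(m), x, np.zeros(m)])
    else:                                     # 'mean': pad with overall mean
        mu = x.mean()
        pad = np.concatenate([np.full(m, mu), x, np.full(m, mu)])
    c = np.concatenate([[0.0], np.cumsum(pad)])
    mm = 2 * m + 1
    return (c[mm:] - c[:-mm]) / mm            # one subtraction per output
```

For `runmean(np.arange(1.0, 6.0), 1)` this reproduces the 1.33 2 3 4 4.67 example from the MATLAB help.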
function p=gkdeb(x,p)
% GKDEB  Gaussian Kernel Density Estimation with Bounded Support
%
% Usage:
% p = gkdeb(d) returns an estimate of the pdf of the given random data d in p,
%       where p.pdf and p.cdf are the pdf and cdf vectors estimated at
%       p.x locations, respectively, and p.h is the bandwidth used for
%       the estimation.
% p = gkdeb(d,p) specifies optional parameters for the estimation:
%       p.h     - bandwidth
%       p.x     - locations to make the estimation
%       p.uB    - upper bound
%       p.lB    - lower bound
%       p.alpha - to calculate inverse cdfs at p.alpha locations
%
% Without output, gkdeb(d) and gkdeb(d,p) will display the pdf and cdf
% (cumulative distribution function) plot.
%
% See also: hist, histc, ksdensity, ecdf, cdfplot, ecdfhist

% Example 1: Normal distribution
%{
gkdeb(randn(1e4,1));
%}
% Example 2: Uniform distribution
%{
clear p
p.uB=1;
p.lB=0;
gkdeb(rand(1e3,1),p);
%}
% Example 3: Exponential distribution
%{
clear p
p.lB=0;
gkdeb(-log(1-rand(1,1000)),p);
%}
% Example 4: Rayleigh distribution
%{
clear p
p.lB=0;
gkdeb(sqrt(randn(1,1000).^2 + randn(1,1000).^2),p);
%}

% V3.2 by Yi Cao at Cranfield University on 7th April 2010

% Check input and output
error(nargchk(1,2,nargin));
error(nargoutchk(0,1,nargout));

n=length(x);
% Default parameters
if nargin<2
    N=100;
    h=median(abs(x-median(x)))/0.6745*(4/3/n)^0.2;
    xmax=max(x); xmin=min(x);
    xmax=xmax+3*h;
    xmin=xmin-3*h;
    dx=(xmax-xmin)/(N-1);
    p.x=xmin+(0:N-1)*dx;
    p.pdf=zeros(1,N);
    p.cdf=zeros(1,N);
    p.h=h;
    dxdz=ones(size(p.x));
    z=p.x;
else
    [p,dxdz,z,x]=checkp(x,p);
    N=numel(p.x);
    h=p.h;
end

% Gaussian kernel function
kerf=@(z)exp(-z.*z/2);
ckerf=@(z)(1+erf(z/sqrt(2)))/2;
nh=n*h*sqrt(2*pi);
for k=1:N
    p.pdf(k)=sum(kerf((p.x(k)-x)/h));
    p.cdf(k)=sum(ckerf((p.x(k)-x)/h));
end
p.x=z;
p.pdf=p.pdf.*dxdz/nh;
dx=[p.x(2)-p.x(1) diff(p.x)]; % grid spacing (assumes p.x is a row vector)
p.cdf=p.cdf/n;
% p.cdf=cumsum(p.pdf.*dx);
if isfield(p,'alpha')
    n=numel(p.alpha);
    p.icdf=p.alpha;
    for k=1:n
        alpha=p.alpha(k);
        ix=find(p.cdf>alpha,1)-1;
        x1=p.x(ix); x2=p.x(ix+1);
        F1=p.cdf(ix); F2=p.cdf(ix+1);
        p.icdf(k)=x1+(alpha-F1)*(x2-x1)/(F2-F1);
    end
end
% Plot
if ~nargout
    subplot(211)
    plot(p.x,p.pdf,'linewidth',2)
    grid
%    set(gca,'ylim',[0 max(p.pdf)*1.1])
    ylabel('f(x)')
    title('Estimated Probability Density Function');
    subplot(212)
    plot(p.x,p.cdf,'linewidth',2)
    ylabel('F(x)')
    title('Cumulative Distribution Function')
    xlabel('x')
    grid
    meanx = sum(p.x.*p.pdf.*dx);
    varx = sum((p.x-meanx).^2.*p.pdf.*dx);
    text(min(p.x),0.6,sprintf('mean(x) = %g\n var(x) = %g\n',meanx,varx));
    if isfield(p,'alpha') && numel(p.alpha)==1
        text(min(p.x),0.85,sprintf('icdf at %g = %g',p.alpha,p.icdf));
    end
end

function [p,dxdz,z,x]=checkp(x,p)
% check structure p
n=numel(x);
if ~isstruct(p)
    error('p is not a structure.');
end
if ~isfield(p,'uB')
    p.uB=Inf;
end
if ~isfield(p,'lB')
    p.lB=-Inf;
end
if p.lB>-Inf || p.uB<Inf
    [x,p,dxdz,z]=bounded(x,p);
else
    if ~isfield(p,'h')
        p.h=median(abs(x-median(x)))/0.6745*(4/3/n)^0.2;
    end
    error(varchk(eps, inf, p.h, 'Bandwidth, p.h is not positive.'));
    if ~isfield(p,'x')
        N=100;
        xmax=max(x); xmin=min(x);
        xmax=xmax+3*p.h;
        xmin=xmin-3*p.h;
        dx=(xmax-xmin)/(N-1);
        p.x=xmin+(0:N-1)*dx;
    end
    dxdz=ones(size(p.x));
    z=p.x;
end
p.pdf=zeros(size(p.x));
p.cdf=zeros(size(p.x));

function [x,p,dxdz,z]=bounded(x,p)
if p.lB==-Inf
    dx=@(t)1./(p.uB-t);
    y=@(t)-log(p.uB-t);
    zf=@(t)(p.uB-exp(-t));
elseif p.uB==Inf
    dx=@(t)1./(t-p.lB);
    y=@(t)log(t-p.lB);
    zf=@(t)exp(t)+p.lB;
else
    dx=@(t)(p.uB-p.lB)./(t-p.lB)./(p.uB-t);
    y=@(t)log((t-p.lB)./(p.uB-t));
    zf=@(t)(exp(t)*p.uB+p.lB)./(exp(t)+1);
end
x=y(x);
n=numel(x);
if ~isfield(p,'h')
    p.h=median(abs(x-median(x)))/0.6745*(4/3/n)^0.2;
end
h=p.h;
if ~isfield(p,'x')
    N=100;
    xmax=max(x); xmin=min(x);
    xmax=xmax+3*h;
    xmin=xmin-3*h;
    p.x=xmin+(0:N-1)*(xmax-xmin)/(N-1);
    z=zf(p.x);
else
    z=p.x;
    p.x=y(p.x);
end
dxdz=dx(z);

function msg=varchk(low,high,n,msg)
% if variable n is between low and high, return an empty matrix,
% otherwise return msg
if n>=low && n<=high
    msg=[];
end
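The core of the estimator, summing a Gaussian kernel and its CDF over the sample with the robust bandwidth rule h = MAD/0.6745 * (4/(3n))^0.2, can be sketched in a few NumPy lines. This is an illustration under the same bandwidth rule (the name `gkde` is invented here), not the original MATLAB code, and it omits the bounded-support transformation:

```python
import numpy as np
from math import erf

def gkde(d, num=100):
    """Gaussian kernel density estimate of pdf and cdf on a regular grid."""
    d = np.asarray(d, dtype=float)
    n = d.size
    # robust bandwidth: MAD/0.6745 * (4/(3n))^0.2
    h = np.median(np.abs(d - np.median(d))) / 0.6745 * (4 / 3 / n) ** 0.2
    x = np.linspace(d.min() - 3 * h, d.max() + 3 * h, num)
    u = (x[:, None] - d[None, :]) / h          # (grid, sample) distances
    pdf = np.exp(-u * u / 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
    cdf = 0.5 * (1 + np.vectorize(erf)(u / np.sqrt(2))).sum(axis=1) / n
    return x, pdf, cdf
```

Because the grid extends three bandwidths beyond the data, the estimated pdf integrates to approximately one and the cdf rises monotonically from near 0 to near 1.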
From the Xinxin Shuidi forum, posted by sea. Brodmann's area 1 Sensory cortical area in the crest of the postcentral gyrus; this area is a component of the primary somatosensory cortex. Location: anterior parietal lobe (postcentral gyrus) Function: processes somatic sensory sensations See Also: Central sulcus Parietal lobe Paracentral lobule Primary somatic sensory cortex Brodmann's area 2 Brodmann's area 3 Brodmann's area 2 Sensory cortical area in the crest and posterior bank of the postcentral gyrus; this area is a component of the primary somatosensory cortex. Location: anterior parietal lobe (postcentral gyrus) Function: processes somatic sensory sensations See Also: Central sulcus Parietal lobe Paracentral lobule Primary somatic sensory cortex Brodmann's area 3 Brodmann's area 1 Brodmann's area 3 Sensory cortical area in the posterior bank of the central sulcus (postcentral gyrus); this area is a principal component of the primary somatosensory cortex. Area 3 is further subdivided into area 3a, which receives proprioceptive signals that originate in deep receptors, and area 3b, which receives discriminative mechanosensory signals that arise from cutaneous receptors. Location: anterior parietal lobe (postcentral gyrus) Function: processes somatic sensory sensations See Also: Central sulcus Parietal lobe Paracentral lobule Primary somatic sensory cortex Brodmann's area 2 Brodmann's area 1 Brodmann's area 4 Motor cortical area in the anterior bank of the central sulcus (precentral gyrus); this area corresponds to the primary motor cortex, which governs the execution of volitional movement.
Location: posterior frontal lobe (precentral gyrus) Function: involved in motor execution See Also: Precentral gyrus Primary motor cortex Central sulcus Thalamus, ventral lateral nucleus Premotor cortex Brodmann's area 5 Associational cortical area in the superior parietal lobe, just posterior to the somatosensory cortex in the postcentral gyrus; this area is involved in maintaining a spatial reference system for goal oriented behavior. Location: superior parietal lobe Function: involved in spatial orientation, among other parietal associational functions See Also: Parietal lobe Somatic sensory cortex Postcentral gyrus Brodmann's area 7 Superior parietal lobule Brodmann's area 6 Motor cortical area in the posterior frontal lobe just anterior to the primary motor cortex; this area contains the lateral and medial divisions of the premotor cortex that participate in the planning and execution of volitional movement. Location: posterior frontal lobe Function: involved in motor planning and execution See Also: Frontal lobe Premotor cortex Primary motor cortex Brodmann's area 7 Associational cortical area in the posterior part of the superior parietal lobe; this area is involved in maintaining a spatial reference system for goal oriented behavior. Location: superior parietal lobe Function: involved in spatial orientation, among other parietal associational functions See Also: Parietal lobe Somatic sensory cortex Postcentral gyrus Brodmann's area 5 Superior parietal lobule Brodmann's area 8 Motor cortical area in the dorsal-lateral prefrontal region of the frontal lobe; this area contains the frontal eye fields, which participate (together with the superior colliculus) in the control of saccadic eye movements. 
Location: frontal lobe Function: involved in governance of eye movements (contains "frontal eye fields") See Also: Frontal lobe Superior colliculus Brodmann's area 9 Associational cortical area in the dorsal-lateral prefrontal region of the frontal lobe; this area participates in prefrontal cortical networks that govern executive functions. Location: frontal lobe Function: participates in prefrontal associational integration See Also: Frontal lobe Prefrontal cortex Brodmann's area 10 Associational cortical area in the anterior-polar prefrontal region of the frontal lobe; this area participates in prefrontal cortical networks that govern executive functions. Location: frontal pole Function: participates in prefrontal associational integration See Also: Frontal lobe Prefrontal cortex Brodmann's area 11 Associational cortical area in the orbital-medial prefrontal region of the frontal lobe; this area participates in prefrontal cortical networks that govern personal and social behavior, emotion, and decision making. Location: ventral frontal lobe (orbitofrontal cortex) Function: participates in prefrontal associational integration See Also: Frontal lobe Prefrontal cortex Orbitofrontal cortex Brodmann's area 12 Associational cortical area in the orbital-medial prefrontal region of the frontal lobe; this area participates in prefrontal cortical networks that govern personal and social behavior, emotion, and decision making. Location: medial frontal lobe (orbitofrontal cortex) Function: participates in prefrontal associational integration See Also: Frontal lobe Prefrontal cortex Orbitofrontal cortex Brodmann's area 13 Associational cortical area in the insula. This area is not visible in medial and lateral views of the hemisphere. Location: insula Function: associational cortex See Also: Insula Brodmann's area 14 Associational cortical area in the insula. This area is not visible in medial and lateral views of the hemisphere. 
Location: insula Function: associational cortex See Also: Insula Brodmann's area 15 Associational cortical area in the insula. This area is not visible in medial and lateral views of the hemisphere. Location: insula Function: associational cortex See Also: Insula Brodmann's area 16 Associational cortical area in the insula. This area is not visible in medial and lateral views of the hemisphere. Location: insula Function: associational cortex See Also: Insula Brodmann's area 17 Sensory cortical area in the banks of the calcarine sulcus (lingual and cuneus gyral formations of the medial occipital lobe); this area corresponds to the primary visual cortex (also known as "striate cortex"). Location: medial occipital lobe Function: processes visual information See Also: Occipital lobe Calcarine sulcus Primary visual cortex Cuneus Lingual gyrus Brodmann's area 18 Sensory cortical area in the medial and lateral aspect of the occipital lobe; this area is part of the extrastriate visual cortex that surrounds the primary visual cortex (area 17 is also known as "striate cortex"). Location: occipital lobe Function: processes visual information See Also: Occipital lobe Primary visual cortex Brodmann's area 19 Sensory cortical area in the medial and lateral aspect of the occipital lobe; this area is part of the extrastriate visual cortex that surrounds the primary visual cortex (area 17 is also known as "striate cortex"). Location: occipital lobe Function: processes visual information See Also: Occipital lobe Primary visual cortex Brodmann's area 20 Associational cortical area in the inferior temporal gyrus; this area participates in the analysis of visual form and the representation of objects. 
Location: ventral temporal lobe (inferior temporal gyrus) Function: processes visual information See Also: Temporal lobe Inferior temporal gyrus Brodmann's area 21 Associational cortical area in the middle temporal gyrus; this area participates in the analysis of visual signals related to object form and motion. Location: lateral temporal lobe (middle temporal gyrus) Function: involved in processing visual information, among other temporal associational functions See Also: Temporal lobe Middle temporal gyrus Brodmann's area 22 Associational cortical area in the lateral aspect of the superior temporal gyrus; this area participates in the analysis of auditory signals and the reception of language (this area is a major component of Wernicke's area). Location: lateral temporal lobe (superior temporal gyrus) Function: involved in auditory processing and language reception See Also: Temporal lobe Superior temporal gyrus Wernicke's area Brodmann's area 23 Associational cortical area in the posterior part of the cingulate gyrus; this area is a cortical component of the limbic system. Location: medial parietal lobe (posterior cingulate gyrus) Function: participates in limbic associational integration See Also: Cingulate gyrus Limbic system Brodmann's area 24 Associational cortical area in the anterior part of the cingulate gyrus; this area is a cortical component of the limbic system that is involved in emotional processing, the control of facial expressions and the affective dimensions of pain. Location: medial frontal lobe (anterior cingulate gyrus) Function: involved in emotional and cognitive processing See Also: Cingulate gyrus Limbic system Brodmann's area 25 Associational cortical area in the medial prefrontal region of the frontal lobe; this area participates in prefrontal cortical networks that govern personal and social behavior, emotion, and decision making.
Location: medial frontal lobe (orbitofrontal cortex) Function: participates in prefrontal associational integration See Also: Frontal lobe Prefrontal cortex Subcallosal area Brodmann's area 26 Associational cortical area in the transitional region between the posterior cingulate gyrus and the medial temporal lobe; this area is a cortical component of the limbic system. Location: medial parietal lobe (posterior cingulate gyrus) Function: participates in limbic associational integration See Also: Cingulate gyrus Limbic system Temporal lobe Brodmann's area 27 Associational cortical area in the medial temporal lobe; this area corresponds to Ammon's horn division of the hippocampal formation, which is subdivided into (cornu ammonis) fields, CA1-CA4. Together with the other parts of the hippocampal formation in the medial temporal lobe, the hippocampus proper is involved in short-term declarative memory processes. This area is not visible in medial and lateral views of the hemisphere. Location: medial temporal lobe: hippocampal formation Function: hippocampal formation: short-term declarative memory See Also: Temporal lobe Hippocampal formation Hippocampus CA1 CA3 Brodmann's area 28 Associational and sensory cortical area in the anterior-medial temporal lobe. This area is part of the olfactory cortex; it also is a component of the entorhinal division of the hippocampal formation. Location: medial temporal lobe Function: involved in olfaction and hippocampal processing See Also: Temporal lobe Entorhinal cortex Primary olfactory cortex Brodmann's area 29 Associational cortical area in the transitional region between the posterior cingulate gyrus and the medial temporal lobe; this area is a cortical component of the limbic system. 
Location: medial parietal lobe (posterior cingulate gyrus) Function: participates in limbic associational integration See Also: Cingulate gyrus Limbic system Temporal lobe Parietal lobe Brodmann's area 30 Associational cortical area in the transitional region between the posterior cingulate gyrus and the medial temporal lobe; this area is a cortical component of the limbic system. Location: medial temporal lobe Function: participates in limbic associational integration See Also: Cingulate gyrus Limbic system Temporal lobe Brodmann's area 31 Associational cortical area in the posterior part of the cingulate gyrus and the posterior banks of the cingulate sulcus; the cingulate part of this area is a cortical component of the limbic system. Location: medial parietal lobe Function: participates in limbic and parietal associational integration See Also: Cingulate gyrus Limbic system Brodmann's area 32 Associational cortical area in the medial prefrontal region of the frontal lobe; this area participates in prefrontal cortical networks that govern personal and social behavior, emotion, and decision making. Location: medial frontal lobe (orbitofrontal cortex) Function: involved in emotional and cognitive processing See Also: Frontal lobe Prefrontal cortex Brodmann's area 33 Associational cortical area in the anterior part of the cingulate gyrus just dorsal to the corpus callosum; this area is a cortical component of the limbic system that is involved in emotional processing and the affective dimensions of pain, among other functions. Location: medial frontal lobe (orbitofrontal cortex) Function: involved in emotional and cognitive processing See Also: Cingulate gyrus Limbic system Corpus callosum Brodmann's area 34 Associational and sensory cortical area in the anterior-medial temporal lobe; this area is a principal division of the olfactory cortex; it also is a component of the entorhinal division of the hippocampal formation.
Location: medial temporal lobe Function: involved in olfaction and hippocampal processing See Also: Temporal lobe Entorhinal cortex Primary olfactory cortex Brodmann's area 35 Associational cortical area in the medial temporal lobe near the position of the rhinal sulcus; this area (also known as the perirhinal cortex) is a component of the hippocampal formation. Location: medial temporal lobe Function: participates in hippocampal associational functions See Also: Temporal lobe Rhinal sulcus Hippocampal formation Parahippocampal gyrus Brodmann's area 36 Associational cortical area in the medial temporal lobe; this area lies at the interface of visual processing systems in the inferior temporal lobe and semantic memory systems in the medial temporal lobe. Location: medial temporal lobe Function: involved in visual and hippocampal associational functions See Also: Temporal lobe Hippocampal formation Brodmann's area 37 Associational cortical area in the temporal lobe that extends from the medial to lateral sides of this lobe; this area participates in the analysis of visual form, motion, and the representation of objects. Location: posterior temporal lobe Function: involved in visual recognition See Also: Temporal lobe Brodmann's area 38 Associational cortical area in the anterior pole of the temporal lobe; this temporal area is related to networks in the amygdala and orbital prefrontal cortex that govern personal and social behavior, emotion, and decision making. Location: temporal pole Function: participates in limbic associational integration See Also: Temporal lobe Temporal pole Amygdala Orbitofrontal cortex Brodmann's area 39 Associational cortical area in the angular gyrus at the interface between the posterior parietal and occipital lobes. 
Location: lateral junction of temporal, parietal and occipital lobes
Function: involved in processing language, spatial orientation and semantic representation
See Also: Angular gyrus, Parietal lobe, Occipital lobe, Inferior parietal lobule

Brodmann's area 40
Associational cortical area in the inferior parietal lobe, including the supramarginal gyrus.
Location: inferior parietal lobe
Function: involved in spatial orientation and semantic representation
See Also: Parietal lobe, Supramarginal gyrus, Inferior parietal lobule

Brodmann's area 41
Sensory cortical area in the superior aspect of the temporal lobe (located in a series of transverse gyri, called Heschl's gyri, that form the inferior bank of the lateral fissure); this area corresponds to the primary auditory cortex.
Location: superior temporal lobe
Function: processes auditory information
See Also: Temporal lobe, Superior temporal gyrus, Primary auditory cortex, Lateral fissure

Brodmann's area 42
Sensory cortical area in the superior aspect of the temporal lobe and the dorsal-lateral margin of the superior temporal gyrus; this area is part of a "belt" of higher-order auditory areas that surround the primary auditory cortex (area 41).
Location: superior temporal lobe
Function: processes auditory information
See Also: Temporal lobe, Superior temporal gyrus, Primary auditory cortex, Lateral fissure

Brodmann's area 43
Sensorimotor cortical area in the inferior margin of the postcentral and precentral gyri where the frontal-parietal operculum merges with the insula just below the inferior termination of the central sulcus; this area may participate in the sensorimotor representation of the mouth and taste reception.
Location: junction of insula, frontal and parietal lobes
Function: involved in sensorimotor representation and taste processing
See Also: Postcentral gyrus, Precentral gyrus, Insula, Central sulcus

Brodmann's area 44
Motor cortical area in the posterior part of the inferior frontal gyrus; this division of the lateral premotor cortex is involved in the production of language, especially in the left hemisphere (also known as Broca's area).
Location: inferior frontal lobe (inferior frontal gyrus)
Function: involved in language production
See Also: Inferior frontal gyrus, Brodmann's area 45, Broca's area, Premotor cortex, Inferior frontal gyrus pars opercularis, Inferior frontal gyrus pars triangularis

Brodmann's area 45
Associational cortical area in the anterior part of the inferior frontal gyrus; the posterior part of this area may contribute (with area 44) to the production of language (Broca's area), while other circuits in this area participate in prefrontal cortical networks that govern executive functions.
Location: inferior frontal lobe (inferior frontal gyrus)
Function: involved in language production and participates in prefrontal associational integration
See Also: Frontal lobe, Inferior frontal gyrus, Brodmann's area 44, Broca's area, Premotor cortex, Inferior frontal gyrus pars orbitalis

Brodmann's area 46
Associational cortical area in the middle frontal gyrus and anterior part of the inferior frontal gyrus; this area participates in prefrontal cortical networks that govern executive functions.
Location: lateral frontal lobe (dorsolateral prefrontal cortex)
Function: participates in prefrontal associational integration
See Also: Frontal lobe, Inferior frontal gyrus, Middle frontal gyrus, Prefrontal cortex

Brodmann's area 47
Associational cortical area in the anterior-ventral part of the inferior frontal gyrus; this area participates in prefrontal cortical networks that govern executive functions.
Location: inferior frontal lobe (inferior frontal gyrus)
Function: participates in prefrontal associational integration
See Also: Frontal lobe, Inferior frontal gyrus, Prefrontal cortex
Puzzlement after a quadratic radical function beat the Fisher Z Transformation

A new discovery by Chinese researchers! It surpasses Sir Ronald Aylmer Fisher, and may be one of the best approximation results in mathematical statistics in nearly 97 years.

Sir Ronald Aylmer Fisher in 1912, aged 22.

We propose a quadratic radical function whose error in approximating the cumulative distribution function of the standard normal distribution is 29% smaller than that of the Fisher Z Transformation (Fisher z-transform, Fisher z transformation, Fisher's z' transformation, Fisher's Z Transformation) commonly used in textbooks at home and abroad. After we submitted the paper to our university's Transactions of Tianjin University, one of the English review comments we received read:

(1) The proposed quadratic radical function is rather elementary and may provide limited if any useful information in applications. Some practical application of the proposed function should be provided.

But I do not understand what "may provide limited if any useful information in applications" means. Could everyone please explain it? I very much hope the paper can be published.

Our quadratic radical function is better even than the Fisher Z Transformation proposed in 1915 by the celebrated founding father of mathematical statistics, Sir Ronald Aylmer Fisher (17 February 1890 – 29 July 1962). An ugly potato may yet surpass a handsome foreigner; a country bumpkin may yet outdo a foreign master!

Sir Ronald Aylmer Fisher, the handsome foreign master:
http://www.npg.org.uk/collections/search/portraitLarge/mw98424/Sir-Ronald-Aylmer-Fisher

Please point out any errors in the above. Thank you for your guidance!

Sir Ronald Aylmer Fisher in The MacTutor History of Mathematics archive:
http://www-history.mcs.st-and.ac.uk/Biographies/Fisher.html

2012-06-11, "Wen Jiabao's speech at the general assembly of the Chinese Academy of Sciences and Chinese Academy of Engineering (full text)":
http://news.xinhuanet.com/politics/2012-07/02/c_112335725.htm
"Basic science is the guide and source of applications of science and technology. Today's basic science is tomorrow's applied science and technology."
http://news.xinhuanet.com/politics/2012-07/02/c_112335725_2.htm
"Truly core technologies cannot be bought; in key areas we must rely on ourselves to solve problems. Strengthening strategic planning for the development of science and technology is very important for a large country like ours. Only by thoroughly working through the 'abstract' can we clarify our direction, grasp the key points, and truly get the 'concrete' done."

Related links:
"About Me (a brief introduction to Zhensha's work)": http://blog.tech110.net/?uid-11851-action-viewspace-itemid-37878 (a rough, partial introduction).
"Mao-Xi-De-Ge's porcelain code: what on earth?": http://blog.sciencenet.cn/blog-107667-518968.html ; http://blog.tech110.net/?11851/viewspace-62350
"Better than the Fisher z transformation! (1)": http://bbs.sciencenet.cn/home.php?mod=spaceuid=107667do=blogid=603297
"Better than the Fisher z transformation! (2)": http://blog.sciencenet.cn/home.php?mod=spaceuid=107667do=blogid=657534
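The post claims a 29% smaller approximation error but does not give the quadratic radical function itself, so it cannot be reproduced here. As a hedged illustration of how such a comparison can be carried out, the sketch below measures the maximum absolute error of a stand-in approximation (the well-known logistic approximation with constant 1.702, not the post's function) against the exact standard normal CDF:

```python
import math

def phi_exact(x):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_logistic(x):
    # A classical simple approximation, used here only as a stand-in
    # for the post's (unstated) quadratic radical function.
    return 1.0 / (1.0 + math.exp(-1.702 * x))

def max_abs_error(approx, lo=-6.0, hi=6.0, steps=2001):
    # Grid search for the largest deviation from the exact CDF.
    xs = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return max(abs(approx(x) - phi_exact(x)) for x in xs)

err = max_abs_error(phi_logistic)  # roughly 0.0095 for this approximation
```

Comparing two approximations then amounts to comparing their `max_abs_error` values on the same grid.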
Google search: lammps mail list, comb
http://lammps.sandia.gov/threads/msg26852.html
http://lammps.sandia.gov/threads/msg26832.html
http://lammps.sandia.gov/threads/msg28484.html
The Heaviside step function is a mathematical function denoted H(x), or sometimes θ(x) or u(x) (Abramowitz and Stegun 1972, p. 1020), and also known as the "unit step function." The term "Heaviside step function" and its symbol can represent either a piecewise constant function or a generalized function.

When defined as a piecewise constant function, the Heaviside step function is given by

    H(x) = 0 for x < 0,  1/2 for x = 0,  1 for x > 0    (1)

(Abramowitz and Stegun 1972, p. 1020; Bracewell 2000, p. 61). The original plot shows this function (left figure) and how it would appear if displayed on an oscilloscope (right figure).

When defined as a generalized function, it can be defined as a function H(x) such that

    ∫ H(x) φ'(x) dx = -φ(0)    (2)

(integral over the whole real line) for the derivative φ' of a sufficiently smooth function φ that decays sufficiently quickly (Kanwal 1998). Mathematica represents the Heaviside generalized function as HeavisideTheta, while using UnitStep to represent the piecewise function Piecewise (which, it should be noted, adopts the convention H(0) = 1 instead of the conventional definition H(0) = 1/2). The shorthand notation

    H_c(x) = H(x - c)    (3)

is sometimes also used.

The Heaviside step function is related to the boxcar function Π(x) by

    Π(x) = H(x + 1/2) - H(x - 1/2)    (4)

and can be defined in terms of the sign function sgn(x) by

    H(x) = (1/2)[1 + sgn(x)].    (5)

The derivative of the step function is given by

    d/dx H(x) = δ(x),    (6)

where δ(x) is the delta function (Bracewell 2000, p. 97). The Heaviside step function is related to the ramp function R(x) by

    R(x) = x H(x)    (7)

and to the derivative of R(x) by

    d/dx R(x) = H(x).    (8)

The two are also connected through

    R(x) = H(x) * H(x),    (9)

where * denotes convolution. Bracewell (2000) gives many identities, one of which is

    H(x) * f(x) = ∫_{-∞}^{x} f(x') dx'.    (10)

(Further Bracewell identities, eqs. 11–16, appear in Bracewell 2000.)

The Heaviside step function can be defined by a number of limits (eqs. 17–27), involving among others the erfc function, the sine integral Si(x), the sinc function, and the one-argument triangle function; one standard such limit is

    H(x) = lim_{t→0+} [1/2 + (1/π) arctan(x/t)].

The first four of these are illustrated in the original figures for t = 1, 0.1, and 0.01.

Of course, any monotonic function with constant unequal horizontal asymptotes is a Heaviside step function under appropriate scaling and possible reflection. The Fourier transform of the Heaviside step function is given by

    F[H(x)](k) = ∫ e^{-2πikx} H(x) dx = (1/2)[δ(k) - i/(πk)],    (28)-(29)

where δ(k) is the delta function.
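As a minimal numerical sketch, the piecewise definition (with the H(0) = 1/2 convention) and its sign-function identity can be checked directly:

```python
def heaviside(x):
    # Piecewise constant definition with H(0) = 1/2
    # (Abramowitz & Stegun convention).
    if x < 0:
        return 0.0
    if x > 0:
        return 1.0
    return 0.5

def sgn(x):
    # Sign function: -1, 0, or +1.
    return float((x > 0) - (x < 0))

# Identity (5): H(x) = (1 + sgn x)/2 holds at every point, including 0.
for x in (-2.0, -0.5, 0.0, 0.5, 3.0):
    assert heaviside(x) == 0.5 * (1.0 + sgn(x))
```

Note that NumPy's `numpy.heaviside(x, h0)` makes the value at zero an explicit second argument, reflecting the same convention question discussed above.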
Lipids Health Dis. 2011 Nov 12;10:210.
Probucol alleviates atherosclerosis and improves high density lipoprotein function.
Zhong JK, Guo ZG, Li C, Wang ZK, Lai WY, Tu Y.
Source: Division of Cardiology, Nanfang Hospital, Southern Medical University, Guangzhou 510515, Guangdong, P.R. China. guozhigang126@126.com.

Abstract
BACKGROUND: Probucol is a unique hypolipidemic agent that decreases high density lipoprotein cholesterol (HDL-C). However, it is not certain whether probucol hinders the progression of atherosclerosis by improving HDL function.
METHODS: Eighteen New Zealand White rabbits were randomly divided into the control, atherosclerosis and probucol groups. The control group was fed a regular diet; the atherosclerosis group received a high fat diet, and the probucol group received the high fat diet plus probucol. Hepatocytes and peritoneal macrophages were isolated for labeled cholesterol efflux rates and expression of ABCA1 and SR-B1 at the gene and protein levels; venous blood was collected for serum paraoxonase 1, myeloperoxidase activity and lipid analysis. Aortas were prepared for morphologic and immunohistochemical analysis after 12 weeks.
RESULTS: Compared to the atherosclerosis group, the paraoxonase 1 activity, cholesterol efflux rates, expression of ABCA1 and SR-BI in hepatocytes and peritoneal macrophages, and the level of ABCA1 and SR-BI in aortic lesions were remarkably improved in the probucol group, while the serum HDL cholesterol concentration, myeloperoxidase activity, the IMT and the percentage plaque area of the aorta were significantly decreased.
CONCLUSION: Probucol alleviated atherosclerosis by improving HDL function. The mechanisms include accelerating the process of reverse cholesterol transport and improving the anti-inflammatory and anti-oxidant functions.
Plotting a table of numbers as an image using R

Problem: How to plot a table of numbers such that the values are represented by color?

Solution: Use the function below by handing it a matrix of numbers. It will plot the matrix with a color scale based on the highest and lowest values in the matrix. Optional arguments are:

usage: myImagePlot(m) where m is a matrix of numbers
optional arguments: myImagePlot(m, xLabels, yLabels, zlim, title=c("my title"))
xLabels and yLabels are vectors of strings to label the rows and columns. zlim is a vector containing a low and high value to use for the color scale.

For example, you might have a table of gene expression values which you want to represent visually. Alternatively, you might want to display a table of correlation coefficients between a set of DNA microarrays.

1.) A panel of gene expression values. 2.) A table of correlation values illustrating how identical samples placed on several different DNA microarrays correlate with one another.

The images above were generated as follows:

myImagePlot(bm[1:25, 1:3], yLabels=c(as.character(baylies[1:25, 2])), title=c("Sig Genes"))

notes: bm is a dataframe with numbers in the first three columns. The code above references the first 25 entries of the first 3 columns. The row labels are taken from the second column of another data frame called baylies.

myImagePlot(cm, xLabels=c(targets$SlideNumber), title=c("stage 12-14 array correlation matrix"), zlim=c(0,1))

notes: cm is a matrix of correlation values.

You can load the function below into your R session by copy and paste, or you can load it directly using the source command:
source("http://www.phaget4.org/R/myImagePlot.R")

image plot function

# ----- Define a function for plotting a matrix ----- #
myImagePlot <- function(x, ...){
  min <- min(x)
  max <- max(x)
  yLabels <- rownames(x)
  xLabels <- colnames(x)
  title <- c()
  # check for additional function arguments
  if( length(list(...)) ){
    Lst <- list(...)
    if( !is.null(Lst$zlim) ){
      min <- Lst$zlim[1]
      max <- Lst$zlim[2]
    }
    if( !is.null(Lst$yLabels) ){
      yLabels <- c(Lst$yLabels)
    }
    if( !is.null(Lst$xLabels) ){
      xLabels <- c(Lst$xLabels)
    }
    if( !is.null(Lst$title) ){
      title <- Lst$title
    }
  }
  # check for null values
  if( is.null(xLabels) ){
    xLabels <- c(1:ncol(x))
  }
  if( is.null(yLabels) ){
    yLabels <- c(1:nrow(x))
  }

  layout(matrix(data=c(1,2), nrow=1, ncol=2), widths=c(4,1), heights=c(1,1))

  # Red and green range from 0 to 1 while Blue ranges from 1 to 0
  ColorRamp <- rgb( seq(0,1,length=256),  # Red
                    seq(0,1,length=256),  # Green
                    seq(1,0,length=256))  # Blue
  ColorLevels <- seq(min, max, length=length(ColorRamp))

  # Reverse Y axis
  reverse <- nrow(x) : 1
  yLabels <- yLabels[reverse]
  x <- x[reverse,]

  # Data Map
  par(mar = c(3,5,2.5,2))
  image(1:length(xLabels), 1:length(yLabels), t(x), col=ColorRamp, xlab="",
        ylab="", axes=FALSE, zlim=c(min,max))
  if( !is.null(title) ){
    title(main=title)
  }
  axis(BELOW<-1, at=1:length(xLabels), labels=xLabels, cex.axis=0.7)
  axis(LEFT<-2, at=1:length(yLabels), labels=yLabels, las=HORIZONTAL<-1,
       cex.axis=0.7)

  # Color Scale
  par(mar = c(3,2.5,2.5,2))
  image(1, ColorLevels,
        matrix(data=ColorLevels, ncol=length(ColorLevels), nrow=1),
        col=ColorRamp,
        xlab="", ylab="",
        xaxt="n")

  layout(1)
}
# ----- END plot function ----- #
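The heart of myImagePlot is the linear color ramp in which red and green rise from 0 to 1 while blue falls from 1 to 0, so low values map to blue and high values to yellow. The same mapping can be sketched in Python (hypothetical helper names, standard library only):

```python
def value_to_rgb(v, vmin, vmax, levels=256):
    # Linear ramp mirroring the R code: red and green rise 0 -> 1
    # while blue falls 1 -> 0 (low values blue, high values yellow).
    t = 0.0 if vmax == vmin else (v - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)            # clamp, like zlim in R
    s = round(t * (levels - 1)) / (levels - 1)  # quantize to 256 levels
    return (s, s, 1.0 - s)

def matrix_to_rgb(m):
    # Map a whole matrix, scaling by its own min and max
    # (the default behavior of myImagePlot when zlim is not given).
    flat = [v for row in m for v in row]
    vmin, vmax = min(flat), max(flat)
    return [[value_to_rgb(v, vmin, vmax) for v in row] for row in m]

rgb = matrix_to_rgb([[0, 1], [2, 3]])
```

In practice one would hand the resulting RGB array to matplotlib's `imshow`, which plays the role of R's `image` here.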
MATLAB Functions

What is a MATLAB function?
A MATLAB “function” is a MATLAB program that performs a sequence of operations specified in a text file (called an m-file because it must be saved with a file extension of *.m). A function accepts one or more MATLAB variables as inputs, operates on them in some way, and then returns one or more MATLAB variables as outputs and may also generate plots, etc. (sometimes a function doesn’t return any output variables but instead just generates plots, etc.).

How do I create a new MATLAB function?
Since an m-file is nothing more than a text file it can be created using any text editor – however, MATLAB provides its own editor that provides some particular features that are useful when writing/editing functions. To open a new m-file: In the MATLAB command window, go to FILE on the toolbar, select NEW, then select M-FILE. This opens the MATLAB editor/debugger and gives an empty file in which you can create whatever m-file you want.

What do I have to put on the first line of a MATLAB function?
The 1st line of a function must contain the “function definition,” which has a general structure like this (see also the specific example below):

function [Out_1, Out_2, …, Out_N] = function_name(In_1, In_2, …, In_M)

where Out_1, Out_2, …, Out_N are the N output variables and In_1, In_2, …, In_M are the M input variables. If there is only a single output variable use:

function Out_1 = function_name(In_1, In_2, …, In_M)

If there is no output variable use:

function function_name(In_1, In_2, …, In_M)

What do I have to put after the 1st line?
After the first line, you just put a sequence of MATLAB commands – with one command per line – just like you are computing in MATLAB in the command line environment. This sequence of commands is what makes up your program – you perform computations using the input variables and other variables you create within the function and in doing so, you create the output variables you desire.

How do I use the input variables in a MATLAB function?
When you are writing the lines that make up your function you can use the names of the input variables defined in the first line just like they are previously created variables. So if In_1 is one of the input variables you could then do something like this:

y=In_1.^2;

This takes the values in In_1, squares them, and assigns the result to the variable y. Note: putting a semicolon at the end of an expression stops the display of the result – a good idea unless you really WANT to see the result (sometimes useful when debugging or verifying a function).

How do I make the output variables in a MATLAB function?
On any line in your function you can assign any result you compute to any one of the output variables specified. For example:

Out_1=cos(y);

will compute the cosine of the values in the variable y and then assign the result to the variable Out_1, which will then be output by the function (assuming that Out_1 was specified as an output variable name).

How do I save a MATLAB function?
Once you have finished writing your function you have to save it as an m-file before you can use it. This is done in the same way you save a file in any other application:
• go to FILE, and SAVE
• type in the name that you want to use
  o it is best to always use the “function name” as the “file name”
  o you don’t need to explicitly specify the file type as *.m
• navigate to the folder where you want to save the function file
  o see below for more details on “Where to save an m-file?”
• click on SAVE

Where to save an m-file?
It doesn’t really matter where you store it… BUT when you want to use it, it needs to be somewhere in “MATLAB’s path” or should be in MATLAB’s present working directory (PWD).
• the path specifies all the folders where MATLAB will look for a function’s file when the function is run
• the PWD specifies a single folder that MATLAB considers its primary folder for storing things – it is generally advisable to specify an appropriate PWD each time you start up MATLAB and specify it to be wherever you have the m-files you are working on
  o Click on FILE, click on SET PATH, click on BROWSE, navigate to the folder you want as PWD and click on it, and then click OK

How do I run an m-file?
Once you have a function saved as an m-file with a name the same as the function name and in a folder that is either the PWD or is in MATLAB’s path, you can run it from the command line:

[Out_1, Out_2, …, Out_N] = function_name(In_1, In_2, …, In_M)

How do I test an m-file?
Once you have a file written you need to test it. When you try to run it the first time there is a good chance that it will have some syntax error that will cause its operation to terminate – MATLAB will tell you on what line the operation was stopped, so you can focus your attention there immediately, and it will give you some idea of what the problem is.
Just keep at it and you’ll eventually get it to run, but here is a tip:
• open the m-file that you are debugging
• click on DEBUG on the toolbar at top
• click on STOP IF ERROR
  o note: there are some other options that can be useful, so try them out
• Now when you run your function and it encounters an error, it will stop and will put you into a mode that will allow you to view all the variables at the point at which the error occurred
  o When you are stopped (i.e., “in debug mode”) you have a different prompt than MATLAB’s standard prompt
  o To quit from the debug mode click on DEBUG and then click on QUIT DEBUGGING
• When you are all done using this feature you can turn it off by clicking on DEBUG on the toolbar and then clicking on STOP IF ERROR (which should be checked to indicate that it was turned on). Then you are returned to having the standard prompt.

Then, once you have it running without any syntax errors or warnings, you need to test it to verify that it really does what you intended it to do. Obviously, for simple functions you may be able to verify that it works by running a few examples and checking that the outputs are what you expect. Usually you need to do quite a few test cases to ensure that it is working correctly. For more complex functions (or when you discover that the outputs don’t match what you expect) you may want to check some of the intermediate results that your function computes to verify that they are working properly.
To do this you set “breakpoints” that stop the operation of the function at a specified line and then allow you to inspect variables from the command line:
• open the m-file that you are debugging
• put the cursor in the line at which you want to set a breakpoint
• click on DEBUG on the toolbar at top and then click SET/CLEAR BREAKPOINT
• Now when you run your function, it will stop at that line and will put you into a mode that will allow you to view all the variables at that point
• When you are stopped (i.e., “in debug mode”) you have a different prompt than MATLAB’s standard prompt
  o When you are all done using this feature you can turn it off by repeating the process used to set the breakpoint

Once you have it stopped, any variable that has been computed up to that point can be inspected (plot it, look at its values, check its size, check whether it is a row or a column, etc.; you can even modify a variable’s contents to correct a problem). Note that the ONLY variables you have access to are those that have been created inside the function or have been passed to it via the input arguments.

Note that you can set multiple breakpoints at a time – once you have stopped at the first one you can click on DEBUG and then click on CONTINUE and it will pick up execution immediately where it left off (but with the modified variables if you changed anything). Note also that once you have a function stopped in debug mode you can “single-step” through the next several lines: click on DEBUG and click on SINGLE STEP.

Comments on Programming Style
1. In many ways, programming in MATLAB is a lot like programming in C, but there are some significant differences. Most notably, MATLAB can operate directly on vectors and matrices whereas in C you must operate directly on individual elements of an array.
Because of this, loops are MUCH less common in MATLAB than they are in C: in C, if you want to add two vectors you have to loop over the elements in the vectors, adding an element to an element in each iteration of the loop; in MATLAB, you just issue a command to add the two vectors together and the vector addition of all the elements is done with this single command.
2. The “comment symbol” in MATLAB is %. Anything that occurs after a % on a line is considered to be a comment.
3. It is often helpful to put several comment lines right after the function definition line. These comments explain what the function does, what the inputs and outputs are, and how to call the function.
4. Putting in lots of comments helps you and others understand what the function does – from a grading point of view you will have a higher probability of getting full credit if you write comments that tell me what your code is doing.

FROM: http://www.ws.binghamton.edu/fowler/fowler%20personal%20page/EE521_files/MATLAB%20Functions.pdf
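The tutorial's examples are MATLAB; for comparison, here is a hedged Python analog of a function with multiple outputs (a hypothetical `stats` function, invented for illustration). Where MATLAB writes `function [m, r] = stats(values)`, Python returns a tuple:

```python
def stats(values):
    """Return the mean and range of a list of numbers.

    Analog of a MATLAB function with two outputs:
        function [m, r] = stats(values)
    """
    m = sum(values) / len(values)   # like m = mean(values); in MATLAB
    r = max(values) - min(values)   # like r = max(values) - min(values);
    return m, r                     # MATLAB instead assigns to m and r

# Calling it unpacks both outputs, like [m, r] = stats(...) in MATLAB:
m, r = stats([1.0, 2.0, 3.0, 6.0])
```

The comment-block-after-the-definition convention from point 3 corresponds to a Python docstring, as shown.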
Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is required, or smoothing, in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis, which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization, to infer values of a function where no data are available, and to summarize the relationships among two or more variables. Extrapolation refers to the use of a fitted curve beyond the range of the observed data, and is subject to a greater degree of uncertainty since it may reflect the method used to construct the curve as much as it reflects the observed data.
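As a minimal illustration of smoothing (as opposed to exact interpolation), here is a closed-form ordinary least-squares line fit; the data values are synthetic, invented for illustration:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x, using the closed-form
    # slope (covariance / variance) and intercept.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Synthetic data lying roughly on y = 2x + 1 with small deviations;
# the fitted line smooths the noise rather than interpolating it.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]
a, b = fit_line(xs, ys)
```

Exact interpolation would instead pass a curve through every point (e.g. a degree-4 polynomial here), illustrating the trade-off the paragraph describes.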
Drug design
From Wikipedia, the free encyclopedia
Not to be confused with Designer drug.

Drug design, also sometimes referred to as rational drug design or structure-based drug design, is the inventive process of finding new medications based on the knowledge of the biological target. The drug is most commonly an organic small molecule which activates or inhibits the function of a biomolecule such as a protein, which in turn results in a therapeutic benefit to the patient. In the most basic sense, drug design involves design of small molecules that are complementary in shape and charge to the biomolecular target with which they interact and therefore will bind to it. Drug design frequently but not necessarily relies on computer modeling techniques. This type of modeling is often referred to as computer-aided drug design. The phrase "drug design" is to some extent a misnomer: what is really meant by drug design is ligand design. Modeling techniques for prediction of binding affinity are reasonably successful. However, there are many other properties such as bioavailability, metabolic half-life, lack of side effects, etc. that first must be optimized before a ligand can become a safe and efficacious drug. These other characteristics are often difficult to optimize using rational drug design techniques.

Contents
1 Background
2 Types
  2.1 Ligand based
  2.2 Structure based
    2.2.1 Active site identification
    2.2.2 Ligand fragment link
    2.2.3 Scoring method
3 Rational drug discovery
4 Computer-assisted drug design
5 Examples
6 See also
7 References
8 External links

Background
Typically a drug target is a key molecule involved in a particular metabolic or signaling pathway that is specific to a disease condition or pathology, or to the infectivity or survival of a microbial pathogen. Some approaches attempt to inhibit the functioning of the pathway in the diseased state by causing a key molecule to stop functioning.
Drugs may be designed that bind to the active region and inhibit this key molecule. Another approach may be to enhance the normal pathway by promoting specific molecules in the normal pathways that may have been affected in the diseased state. In addition, these drugs should also be designed in such a way as not to affect any other important "off-target" molecules or antitargets that may be similar in appearance to the target molecule, since drug interactions with off-target molecules may lead to undesirable side effects. Sequence homology is often used to identify such risks. Most commonly, drugs are organic small molecules produced through chemical synthesis, but biopolymer-based drugs (also known as biologics) produced through biological processes are becoming increasingly common. In addition, mRNA-based gene silencing technologies may have therapeutic applications.

Types
Flow charts of two strategies of structure-based drug design
There are two major types of drug design. The first is referred to as ligand-based drug design and the second, structure-based drug design.

Ligand based
Ligand-based drug design (or indirect drug design) relies on knowledge of other molecules that bind to the biological target of interest. These other molecules may be used to derive a pharmacophore model which defines the minimum necessary structural characteristics a molecule must possess in order to bind to the target. In other words, a model of the biological target may be built based on the knowledge of what binds to it, and this model in turn may be used to design new molecular entities that interact with the target. Alternatively, a quantitative structure-activity relationship (QSAR), in which a correlation between calculated properties of molecules and their experimentally determined biological activity is established, may be derived. These QSAR relationships in turn may be used to predict the activity of new analogs.
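Once derived, a QSAR relationship is simply a predictive equation. As a hedged sketch (the linear model, its coefficients, and the descriptor values are all invented for illustration, not taken from any real QSAR study), new analogs could be ranked like this:

```python
def predict_pic50(logp, hbd):
    # Hypothetical linear QSAR model relating two calculated
    # descriptors (logP, H-bond donor count) to predicted activity.
    # Coefficients are placeholders, not fitted to real data.
    return 0.8 * logp - 0.5 * hbd + 3.0

# Invented descriptor values (logP, HBD) for three candidate analogs.
analogs = {
    "analog_A": (2.0, 1),
    "analog_B": (3.5, 0),
    "analog_C": (1.0, 3),
}

# Rank analogs by predicted activity, highest first.
ranked = sorted(analogs, key=lambda a: predict_pic50(*analogs[a]),
                reverse=True)
```

In a real workflow the coefficients would come from regressing measured activities of known binders on their calculated properties, as the paragraph describes.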
Structure based
Structure-based drug design (or direct drug design) relies on knowledge of the three-dimensional structure of the biological target obtained through methods such as x-ray crystallography or NMR spectroscopy. If an experimental structure of a target is not available, it may be possible to create a homology model of the target based on the experimental structure of a related protein. Using the structure of the biological target, candidate drugs that are predicted to bind with high affinity and selectivity to the target may be designed using interactive graphics and the intuition of a medicinal chemist. Alternatively, various automated computational procedures may be used to suggest new drug candidates.

As experimental methods such as X-ray crystallography and NMR develop, the amount of information concerning 3D structures of biomolecular targets has increased dramatically. In parallel, information about the structural dynamics and electronic properties of ligands has also increased. This has encouraged the rapid development of structure-based drug design. Current methods for structure-based drug design can be divided roughly into two categories. The first category is about "finding" ligands for a given receptor, which is usually referred to as database searching. In this case, a large number of potential ligand molecules are screened to find those fitting the binding pocket of the receptor. This method is usually referred to as ligand-based drug design. The key advantage of database searching is that it saves synthetic effort in obtaining new lead compounds. The other category of structure-based drug design methods is about "building" ligands, which is usually referred to as receptor-based drug design. In this case, ligand molecules are built up within the constraints of the binding pocket by assembling small pieces in a stepwise manner. These pieces can be either individual atoms or molecular fragments.
The key advantage of such a method is that novel structures, not contained in any database, can be suggested. These techniques are generating much excitement in the drug design community.

Active site identification
Active site identification is the first step in this program. It analyzes the protein to find the binding pocket, derives key interaction sites within the binding pocket, and then prepares the necessary data for the ligand fragment link step. The basic inputs for this step are the 3D structure of the protein and a pre-docked ligand in PDB format, as well as their atomic properties. Both ligand and protein atoms need to be classified and their atomic properties defined, basically, into four atomic types:

Hydrophobic atom: all carbons in hydrocarbon chains or in aromatic groups.
H-bond donor: oxygen and nitrogen atoms bonded to hydrogen atom(s).
H-bond acceptor: oxygen and sp2- or sp-hybridized nitrogen atoms with lone electron pair(s).
Polar atom: oxygen and nitrogen atoms that are neither H-bond donors nor H-bond acceptors, plus sulfur, phosphorus, halogen, metal, and carbon atoms bonded to hetero-atom(s).

The space inside the ligand binding region is studied with virtual probe atoms of the four types above, so the chemical environment of every spot in the ligand binding region can be known. Hence it is clear what kinds of chemical fragments can be put into the corresponding spots in the ligand binding region of the receptor.

Ligand fragment link
Flow chart for structure-based drug design
When we want to plant "seeds" into the different regions defined by the previous section, we need a fragment database to choose fragments from. The term "fragment" is used here to describe the building blocks used in the construction process. The rationale of this algorithm lies in the fact that organic structures can be decomposed into basic chemical fragments. Although the diversity of organic structures is infinite, the number of basic fragments is rather limited.
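The four atomic types listed in the active site identification step can be sketched as a toy rule-based classifier (illustrative only; the function name and inputs are invented, and real programs perform full bond and hybridization perception):

```python
def classify_atom(element, neighbors, hybridization=None):
    # Toy classifier following the four atomic types in the text.
    if element == "C":
        # Carbons bonded to hetero-atoms are polar; other carbons
        # (hydrocarbon chains, aromatics) are hydrophobic.
        if any(n not in ("C", "H") for n in neighbors):
            return "polar"
        return "hydrophobic"
    if element in ("O", "N"):
        if "H" in neighbors:
            return "H-bond donor"       # O/N bonded to hydrogen
        if element == "O" or hybridization in ("sp2", "sp"):
            return "H-bond acceptor"    # O, or sp2/sp N with lone pair
        return "polar"
    # Sulfur, phosphorus, halogens, metals, etc.
    return "polar"
```

Running the probe-atom analysis then amounts to asking, for each spot in the pocket, which of these types a nearby protein atom belongs to.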
Before the first fragment (the seed) is put into the binding pocket and other fragments are added one by one, it is useful to identify potential problems. First, the number of possible fragment combinations is huge, and a small perturbation of a previous fragment's conformation can cause great differences in the subsequent construction process. At the same time, in order to find the lowest binding energy on the potential energy surface (PES) between the planted fragments and the receptor pocket, a scoring function calculation would have to be done for every conformational change of the fragments in every possible fragment combination. Since this requires a large amount of computation, one may consider other strategies to make the program work more efficiently.

When a ligand is inserted into the pocket site of a receptor, conformations that let groups on the ligand bind tightly to the receptor should be given priority. Therefore it makes sense to place several seeds at the same time into the regions that have significant interactions with them, adjust their favored conformations first, and then connect those seeds into a continuous ligand in a manner that gives the rest of the ligand the lowest energy. The conformations of the pre-placed seeds, which ensure the binding affinity, decide the manner in which the ligand is grown. This strategy efficiently reduces the calculation burden of fragment construction. On the other hand, it reduces the number of fragment combinations, and hence the number of possible ligands that can be derived from the program. These two strategies are well used in most structure-based drug design programs. They are described as "Grow" and "Link". The two strategies are always combined in order to make the construction result more reliable.
Scoring method
Main article: Scoring functions for docking
Structure-based drug design attempts to use the structure of proteins as a basis for designing new ligands by applying accepted principles of molecular recognition. The basic assumption underlying structure-based drug design is that a good ligand molecule should bind tightly to its target. Thus, one of the most important principles for designing or obtaining potential new ligands is to predict the binding affinity of a certain ligand to its target and use it as a criterion for selection.

A breakthrough was made by Böhm, who developed a general-purpose empirical function to describe the binding energy, raising the concept of the "Master Equation". The basic idea is that the overall binding free energy can be decomposed into independent components which are known to be important for the binding process. Each component reflects a certain kind of free-energy alteration during the binding process between a ligand and its target receptor; the Master Equation is the linear combination of these components. From the Gibbs free energy equation, the relation between the dissociation equilibrium constant, Kd, and the components of the free-energy alteration was established. The sub-models of empirical functions differ according to the considerations of their developers, and it has long been a scientific challenge to design them. Depending on how they are modified, the empirical scoring function is improved and continuously refined.

Rational drug discovery
In contrast to traditional methods of drug discovery, which rely on trial-and-error testing of chemical substances on cultured cells or animals and matching the apparent effects to treatments, rational drug design begins with a hypothesis that modulation of a specific biological target may have therapeutic value. In order for a biomolecule to be selected as a drug target, two essential pieces of information are required.
The first is evidence that modulation of the target will have therapeutic value. This knowledge may come from, for example, disease linkage studies that show an association between mutations in the biological target and certain disease states. The second is that the target is "druggable", meaning that it is capable of binding to a small molecule and that its activity can be modulated by the small molecule.

Once a suitable target has been identified, the target is normally cloned and expressed. The expressed target is then used to establish a screening assay. In addition, the three-dimensional structure of the target may be determined. The search for small molecules that bind to the target begins by screening libraries of potential drug compounds. This may be done by using the screening assay (a "wet screen"). In addition, if the structure of the target is available, a virtual screen of candidate drugs may be performed. Ideally the candidate drug compounds should be "drug-like", that is, they should possess properties that are predicted to lead to oral bioavailability, adequate chemical and metabolic stability, and minimal toxic effects. Several methods are available to estimate druglikeness, such as Lipinski's Rule of Five and a range of scoring methods such as lipophilic efficiency. Several methods for predicting drug metabolism have also been proposed in the scientific literature; a recent example is SPORCalc. Due to the complexity of the drug design process, two terms of interest remain serendipity and bounded rationality. Those challenges are caused by the large chemical space describing potential new drugs without side effects.

Computer-assisted drug design
Computer-assisted drug design uses computational chemistry to discover, enhance, or study drugs and related biologically active molecules. The most fundamental goal is to predict whether a given molecule will bind to a target and, if so, how strongly.
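The link between binding strength and the dissociation constant mentioned in the scoring-method section is the Gibbs relation ΔG° = RT ln Kd, and a Böhm-style "Master Equation" is just a linear combination of free-energy components. A minimal sketch (the weights and component values below are placeholders, not Böhm's fitted parameters):

```python
import math

R = 8.314462618e-3  # gas constant in kJ/(mol*K)

def binding_free_energy(kd_molar, temp_k=298.15):
    # Gibbs relation: deltaG = R*T*ln(Kd). Tighter binding
    # (smaller Kd) gives a more negative free energy.
    return R * temp_k * math.log(kd_molar)

def master_equation(components, weights):
    # Linear combination of free-energy components (e.g. H-bonds,
    # lipophilic contact, frozen rotatable bonds). Placeholder weights.
    return sum(w * c for w, c in zip(weights, components))

dg_nM = binding_free_energy(1e-9)   # a 1 nM binder, about -51 kJ/mol
dg_uM = binding_free_energy(1e-6)   # a 1 uM binder, about -34 kJ/mol
```

This is the sense in which a scoring function's output can be compared against experimentally measured dissociation constants.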
Molecular mechanics or molecular dynamics are most often used to predict the conformation of the small molecule and to model conformational changes in the biological target that may occur when the small molecule binds to it. Semi-empirical or ab initio quantum chemistry methods, or density functional theory, are often used to provide optimized parameters for the molecular mechanics calculations and also to provide an estimate of the electronic properties (electrostatic potential, polarizability, etc.) of the drug candidate that will influence binding affinity. Molecular mechanics methods may also be used to provide a semi-quantitative prediction of the binding affinity. Alternatively, knowledge-based scoring functions may be used to provide binding affinity estimates. These methods use linear regression, machine learning, neural nets, or other statistical techniques to derive predictive binding affinity equations by fitting experimental affinities to computationally derived interaction energies between the small molecule and the target. Ideally the computational method should be able to predict affinity before a compound is synthesized, so that in theory only one compound needs to be synthesized. The reality, however, is that present computational methods provide at best only qualitatively accurate estimates of affinity. Therefore, in practice it still takes several iterations of design, synthesis, and testing before an optimal molecule is discovered. On the other hand, computational methods have accelerated discovery by reducing the number of iterations required and, in addition, have often provided more novel small molecule structures. Drug design with the help of computers may be used at any of the following stages of drug discovery: hit identification using virtual screening (structure- or ligand-based design); hit-to-lead optimization of affinity and selectivity (structure-based design, QSAR, etc.);
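The regression step just described — fitting weights of computed interaction-energy components to experimental affinities — can be sketched as follows. Everything here (the component matrix, the "true" weights, the noise level) is synthetic and only illustrates the fitting procedure, not any published scoring function.

```python
import numpy as np

# Sketch: regress experimental binding affinities onto computed per-complex
# interaction terms (e.g. H-bond, lipophilic, rotatable-bond components).
# All numbers below are synthetic, for illustration only.
rng = np.random.default_rng(0)
terms = rng.uniform(0, 10, size=(50, 3))            # 50 complexes, 3 components
true_w = np.array([-1.2, -0.4, 0.8])                # hypothetical "true" weights
affinity = terms @ true_w + rng.normal(0, 0.1, 50)  # "experimental" affinities

# Least-squares fit of the component weights, with a constant term (Delta G_0)
X = np.column_stack([terms, np.ones(len(terms))])
w, *_ = np.linalg.lstsq(X, affinity, rcond=None)
predicted = X @ w                                   # score for each complex
```

With enough complexes and informative components, the fitted weights recover the underlying contributions; real scoring functions differ mainly in which components are computed and how the training set is curated.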
lead optimization: optimization of other pharmaceutical properties while maintaining affinity.

Flowchart of a Usual Clustering Analysis for Structure-Based Drug Design

To overcome the insufficient prediction of binding affinity by current scoring functions, protein-ligand interaction and compound 3D structure information are used for analysis. For structure-based drug design, several post-screening analyses focusing on protein-ligand interactions have been developed to improve enrichment and effectively mine potential candidates:

Consensus scoring: selecting candidates by the vote of multiple scoring functions; may lose the relationship between protein-ligand structural information and the scoring criterion.
Geometric analysis: comparing protein-ligand interactions by visually inspecting individual structures; becomes intractable as the number of complexes to be analyzed increases.
Cluster analysis: representing and clustering candidates according to protein-ligand 3D information; needs a meaningful representation of protein-ligand interactions.

Examples

A particular example of rational drug design involves the use of three-dimensional information about biomolecules obtained from techniques such as X-ray crystallography and NMR spectroscopy. This approach to drug discovery is sometimes referred to as structure-based drug design. The first unequivocal example of the application of structure-based drug design leading to an approved drug is the carbonic anhydrase inhibitor dorzolamide, which was approved in 1995. Another important case study in rational drug design is imatinib, a tyrosine kinase inhibitor designed specifically for the bcr-abl fusion protein that is characteristic of Philadelphia chromosome-positive leukemias (chronic myelogenous leukemia and, occasionally, acute lymphocytic leukemia).
Imatinib is substantially different from previous drugs for cancer, as most agents of chemotherapy simply target rapidly dividing cells, not differentiating between cancer cells and other tissues. Additional examples include:

Many of the atypical antipsychotics
Cimetidine, the prototypical H2-receptor antagonist from which the later members of the class were developed
Selective COX-2 inhibitor NSAIDs
Dorzolamide, a carbonic anhydrase inhibitor used to treat glaucoma
Enfuvirtide, a peptide HIV entry inhibitor
Nonbenzodiazepines like zolpidem and zopiclone
Probenecid
SSRIs (selective serotonin reuptake inhibitors), a class of antidepressants
Zanamivir, an antiviral drug
Isentress, an HIV integrase inhibitor

Case studies

5-HT3 antagonists
Acetylcholine receptor agonists
Angiotensin receptor blockers
Bcr-Abl tyrosine kinase inhibitors
Cannabinoid receptor antagonists
CCR5 receptor antagonists
Cyclooxygenase-2 inhibitors
Dipeptidyl peptidase-4 inhibitors
HIV protease inhibitors
NK1 receptor antagonists
Non-nucleoside reverse transcriptase inhibitors
Proton pump inhibitors
Triptans
TRPV1 antagonists
Renin inhibitors
Discovery and development of small molecule c-Met inhibitors

See also

Bioinformatics, Cheminformatics, Drug development, Drug discovery, List of pharmaceutical companies, Medicinal chemistry, Molecular Conceptor, Molecular design software

References

^ Madsen, Ulf; Krogsgaard-Larsen, Povl; Liljefors, Tommy (2002). Textbook of Drug Design and Discovery. Washington, DC: Taylor & Francis. ISBN 0-415-28288-8.
^ Cohen, N. Claude (1996). Guidebook on Molecular Modeling in Drug Design. Boston: Academic Press. ISBN 0-12-178245-X.
^ Guner, Osman F. (2000). Pharmacophore Perception, Development, and Use in Drug Design. La Jolla, Calif.: International University Line. ISBN 0-9636817-6-1.
^ Leach, Andrew R.; Jhoti, Harren (2007). Structure-based Drug Discovery. Berlin: Springer. ISBN 1-4020-4406-2.
^ a b Wang R, Gao Y, Lai L (2000).
"LigBuilder: A Multi-Purpose Program for Structure-Based Drug Design". Journal of Molecular Modeling 6 (7-8): 498–516. doi : 10.1007/s0089400060498 . ^ a b Schneider G, Fechner U (August 2005). "Computer-based de novo design of drug-like molecules". Nat Rev Drug Discov 4 (8): 649–63. doi : 10.1038/nrd1799 . PMID 16056391 . ^ Jorgensen WL (March 2004). "The many roles of computation in drug discovery". Science 303 (5665): 1813–8. doi : 10.1126/science.1096361 . PMID 15031495 . ^ Verlinde CL, Hol WG (July 1994). "Structure-based drug design: progress, results and challenges". Structure 2 (7): 577–87. doi : 10.1016/S0969-2126(00)00060-5 . PMID 7922037 . ^ Böhm HJ (June 1994). "The development of a simple empirical scoring function to estimate the binding constant for a protein-ligand complex of known three-dimensional structure". J. Comput. Aided Mol. Des. 8 (3): 243–56. doi : 10.1007/BF00126743 . PMID 7964925 . ^ Gohlke H, Hendlich M, Klebe G (January 2000). "Knowledge-based scoring function to predict protein-ligand interactions". J. Mol. Biol. 295 (2): 337–56. doi : 10.1006/jmbi.1999.3371 . PMID 10623530 . ^ Clark RD, Strizhev A, Leonard JM, Blake JF, Matthew JB (January 2002). "Consensus scoring for ligand/protein interactions". J. Mol. Graph. Model. 20 (4): 281–95. doi : 10.1016/S1093-3263(01)00125-5 . PMID 11858637 . ^ Wang R, Lai L, Wang S (January 2002). "Further development and validation of empirical scoring functions for structure-based binding affinity prediction". J. Comput. Aided Mol. Des. 16 (1): 11–26. doi : 10.1023/A:1016357811882 . PMID 12197663 . ^ Smith J, Stein V (April 2009). "SPORCalc: A development of a database analysis that provides putative metabolic enzyme reactions for ligand-based drug design". Computational Biology and Chemistry 33 (2): 149–59. doi : 10.1016/j.compbiolchem.2008.11.002 . PMID 19157988 . ^ Rajamani R, Good AC (May 2007). 
"Ranking poses in structure-based lead discovery and optimization: current trends in scoring function development". Curr Opin Drug Discov Devel 10 (3): 308–15. PMID 17554857 . ^ de Azevedo WF, Dias R (December 2008). "Computational methods for calculation of ligand-binding affinity". Curr Drug Targets 9 (12): 1031–9. doi : 10.2174/138945008786949405 . PMID 19128212 . ^ Liang S, Meroueh SO, Wang G, Qiu C, Zhou Y (May 2009). "Consensus scoring for enriching near-native structures from protein-protein docking decoys" . Proteins 75 (2): 397–403. doi : 10.1002/prot.22252 . PMC 2656599 . PMID 18831053 . http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2656599 . ^ Oda A, Tsuchida K, Takakura T, Yamaotsu N, Hirono S (2006). "Comparison of consensus scoring strategies for evaluating computational models of protein-ligand complexes". J Chem Inf Model 46 (1): 380–91. doi : 10.1021/ci050283k . PMID 16426072 . ^ Deng Z, Chuaqui C, Singh J (January 2004). "Structural interaction fingerprint (SIFt): a novel method for analyzing three-dimensional protein-ligand binding interactions". J. Med. Chem. 47 (2): 337–44. doi : 10.1021/jm030331x . PMID 14711306 . ^ Amari S, Aizawa M, Zhang J, Fukuzawa K, Mochizuki Y, Iwasawa Y, Nakata K, Chuman H, Nakano T (2006). "VISCANA: visualized cluster analysis of protein-ligand interaction based on the ab initio fragment molecular orbital method for virtual ligand screening". J Chem Inf Model 46 (1): 221–30. doi : 10.1021/ci050262q . PMID 16426058 . ^ Greer J, Erickson JW, Baldwin JJ, Varney MD (April 1994). "Application of the three-dimensional structures of protein target molecules in structure-based drug design". Journal of Medicinal Chemistry 37 (8): 1035–54. doi : 10.1021/jm00034a001 . PMID 8164249 . ^ Hendrik Timmerman; Klaus Gubernator; Hans-Joachim Böhm; Raimund Mannhold; Hugo Kubinyi (1998). Structure-based Ligand Design (Methods and Principles in Medicinal Chemistry) . Weinheim: Wiley-VCH. ISBN 3-527-29343-4 .
^ http://autodock.scripps.edu/news/autodocks-role-in-developing-the-first-clinically-approved-hiv-integrase-inhibitor
Variants of three genes related to clopidogrel metabolism and platelet receptor function – CYP2C19, ABCB1, and ITGB3 – appear to be independent risk factors for early stent thrombosis, beyond the already known clinical and angiographic risk factors, according to a report in the Oct. 26 issue of JAMA. A risk-prediction model that incorporated both genetic and clinical data had greater sensitivity and specificity at predicting early stent thrombosis than did a clinical model alone, said Dr. Guillaume Cayla of Institut de Cardiologie, INSERM Unite Mixte de Recherche, Salpetriere Hospital, Paris, and his associates. The researchers assessed all of the 23 genetic variants that have been reported to correlate with clopidogrel pharmacogenetics and arterial thrombosis to determine which ones contribute most to early stent thrombosis. They used a nationwide French registry of patients who had definite stent thrombosis within 30 days of implantation to identify 123 cases, then matched these for age and sex with 246 control subjects who had no stent thrombosis. Peripheral blood samples from these 369 subjects were genotyped for the suspect genetic variations. Only four variations in three genes were found to be significantly associated with early stent thrombosis. First, the CYP2C19 loss-of-function allele occurred in 49% of cases but only 26% of controls. Second, the CYP2C19 gain-of-function allele occurred in only 20% of cases but in 33% of controls. These findings strengthen the current evidence that CYP2C19 plays a predominant role in clopidogrel metabolism, Dr. Cayla and his colleagues said. Third, an ABCB1 variant occurred in 32% of cases but only 19% of controls. "The ABCB1 gene encodes a drug efflux transporter, P-glycoprotein, that modulates clopidogrel absorption.
It has been previously associated with reduced clopidogrel response, but with variable clinical consequences," they noted. And fourth, an ITGB3 variant occurred in only 16% of cases but 28% of controls. The ITGB3 gene encodes integrin beta-3, "a component of the glycoprotein IIb/IIIa platelet receptor, which mediates the final pathway of platelet aggregation," they said. There was a dose-response relationship: the risk of early stent thrombosis climbed steadily with carriage of an increasing number of these risk alleles, the investigators said. A risk-prediction model that combined genetic data with clinical data had significantly greater power to predict early stent thrombosis than did a clinical model alone. The combined model had 67% sensitivity and 79% specificity in this regard, compared with 60% sensitivity and 70% specificity for the model using only clinical data. "Patients in the highest tertile of risk using the combined clinical and genetic model had a sevenfold increased risk of early stent thrombosis vs. patients in the lowest tertile," Dr. Cayla and his associates said ( JAMA 2011;306:1765-74 ). The researchers also found that two nongenetic factors – loading dose of clopidogrel and the concomitant use of proton pump inhibitors (PPIs) – were significantly related to early stent thrombosis. Cases were much more likely than controls to have received a low loading dose of clopidogrel at stent implantation. And cases also were much more likely than controls to be taking PPIs, which have been suspected of interfering with clopidogrel metabolism. Unlike the genetic risk factors, both clopidogrel dose and PPI use are modifiable risk factors, they noted. This study was limited in that patients with the most severe early stent thrombosis died before they could be included in the study, so the genotype-phenotype relation remains unknown for them.
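The sensitivity and specificity figures used to compare the combined and clinical-only models are simple confusion-matrix ratios. The counts below are invented purely to illustrate the arithmetic (chosen so they reproduce the reported 67% and 79% for the study's 123 cases and 246 controls); they are not taken from the paper.

```python
# Sensitivity/specificity from a 2x2 confusion matrix (illustrative counts only)
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # fraction of thrombosis cases flagged
    specificity = tn / (tn + fp)   # fraction of controls correctly cleared
    return sensitivity, specificity

# Hypothetical counts consistent with 123 cases and 246 controls:
sens, spec = sens_spec(tp=82, fn=41, tn=194, fp=52)
print(round(sens, 2), round(spec, 2))  # 0.67 0.79
```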
"Stent malappositions or underexpansions are other important factors associated with stent thrombosis that were not evaluated," the investigators said. In addition, the study findings may apply only to white patients because virtually all the subjects, who were drawn from the general population in France, were white. "The effects of different genes according to different ethnic groups may warrant dedicated studies," they added. This study was funded by ACTION, the Société Française de Cardiologie, the Fédération Française de Cardiologie, and INSERM. The French registry of patients with early stent thrombosis was partially supported by Eli Lilly and the SGAM Foundation. Dr. Cayla and his associates reported ties to numerous industry sources. Source: http://www.internalmedicinenews.com/news/cardiovascular-disease/single-article/variations-in-three-genes-predict-early-stent-thrombosis/2e03d029e5.html
An Automatic I/O Numeric Reading System
中島翔太, 北園優希, 陸慧敏, 張力峰, 芹川聖一

Recently, measuring instruments that automatically record measurement values by using the communication functions of a PC and RS232C have come into wide use. However, many measuring instruments still cannot communicate with an external instrument, and the ones that do have a communication function are very expensive. There are also systems that read the indicated value from an image of the measuring instrument taken with a camera; however, because the specifications of the target measuring instruments are limited, the versatility of such systems is low. Therefore, this paper proposes a robust numeral recognition system that does not depend on the model of the digital measuring instrument. The experiments showed that the proposed method has the characteristics of fast speed, efficiency, and strong anti-interference. 2-a numeric reading system for digital meter without IO interface.pdf
Passing a Scalar

Let's look at a simple example of C code and its MEX-file equivalent. Here is a C computational function that takes a scalar and doubles it.

#include <math.h>
void timestwo(double y[], double x[])
{
    y[0] = 2.0*x[0];
    return;
}

Below is the same function written in the MEX-file format.

/*
 * =============================================================
 * timestwo.c - example found in API guide
 *
 * Computational function that takes a scalar and doubles it.
 *
 * This is a MEX-file for MATLAB.
 * Copyright (c) 1984-2000 The MathWorks, Inc.
 * =============================================================
 */
/* $Revision: 1.8 $ */
#include "mex.h"

void timestwo(double y[], double x[])
{
    y[0] = 2.0*x[0];
}

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    double *x, *y;
    int mrows, ncols;

    /* Check for proper number of arguments. */
    if (nrhs != 1) {
        mexErrMsgTxt("One input required.");
    } else if (nlhs > 1) {
        mexErrMsgTxt("Too many output arguments");
    }

    /* The input must be a noncomplex scalar double. */
    mrows = mxGetM(prhs[0]);
    ncols = mxGetN(prhs[0]);
    if (!mxIsDouble(prhs[0]) || mxIsComplex(prhs[0]) ||
        !(mrows == 1 && ncols == 1)) {
        mexErrMsgTxt("Input must be a noncomplex scalar double.");
    }

    /* Create matrix for the return argument. */
    plhs[0] = mxCreateDoubleMatrix(mrows, ncols, mxREAL);

    /* Assign pointers to each input and output. */
    x = mxGetPr(prhs[0]);
    y = mxGetPr(plhs[0]);

    /* Call the timestwo subroutine. */
    timestwo(y, x);
}

In C, function argument checking is done at compile time. In MATLAB, you can pass any number or type of arguments to your M-function, which is responsible for argument checking. This is also true for MEX-files. Your program must safely handle any number of input or output arguments of any supported type. To compile and link this example source file at the MATLAB prompt, type

mex timestwo.c

This carries out the necessary steps to create the MEX-file called timestwo with an extension corresponding to the platform on which you're running. You can now call timestwo as if it were an M-function.
x = 2;
y = timestwo(x)

y =
     4

You can create and compile MEX-files in MATLAB or at your operating system's prompt. MATLAB uses mex.m, an M-file version of the mex script, and your operating system uses mex.bat on Windows and mex.sh on UNIX. In either case, typing mex filename at the prompt produces a compiled version of your MEX-file. In the above example, scalars are viewed as 1-by-1 matrices. Alternatively, you can use a special API function called mxGetScalar that returns the values of scalars instead of pointers to copies of scalar variables. This is the alternative code (error checking has been omitted for brevity).

/*
 * =============================================================
 * timestwoalt.c - example found in API guide
 *
 * Use mxGetScalar to return the values of scalars instead of
 * pointers to copies of scalar variables.
 *
 * This is a MEX-file for MATLAB.
 * Copyright (c) 1984-2000 The MathWorks, Inc.
 * =============================================================
 */
/* $Revision: 1.5 $ */
#include "mex.h"

void timestwo_alt(double *y, double x)
{
    *y = 2.0*x;
}

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    double *y;
    double x;

    /* Create a 1-by-1 matrix for the return argument. */
    plhs[0] = mxCreateDoubleMatrix(1, 1, mxREAL);

    /* Get the scalar value of the input x. */
    /* Note: mxGetScalar returns a value, not a pointer. */
    x = mxGetScalar(prhs[0]);

    /* Assign a pointer to the output. */
    y = mxGetPr(plhs[0]);

    /* Call the timestwo_alt subroutine. */
    timestwo_alt(y, x);
}

This example passes the input scalar x by value into the timestwo_alt subroutine, but passes the output scalar y by reference.

Example 2: Passing Strings

Any MATLAB data type can be passed to and from MEX-files. For example, this C code accepts a string and returns the characters in reverse order.

/*
 * =============================================================
 * revord.c
 * Example for illustrating how to copy the string data from
 * MATLAB to a C-style string and back again.
 *
 * Takes a string and returns a string in reverse order.
 *
 * This is a MEX-file for MATLAB.
 * Copyright (c) 1984-2000 The MathWorks, Inc.
 * =============================================================
 */
/* $Revision: 1.10 $ */
#include "mex.h"

void revord(char *input_buf, int buflen, char *output_buf)
{
    int i;

    /* Reverse the order of the input string. */
    for (i = 0; i < buflen-1; i++)
        *(output_buf+i) = *(input_buf+buflen-i-2);
}

In this example, the API function mxCalloc replaces calloc, the standard C function for dynamic memory allocation. mxCalloc allocates dynamic memory using the MATLAB memory manager and initializes it to zero. You must use mxCalloc in any situation where C would require the use of calloc. The same is true for mxMalloc and mxRealloc; use mxMalloc in any situation where C would require the use of malloc, and use mxRealloc where C would require realloc.

Note: MATLAB automatically frees up memory allocated with the mx allocation routines (mxCalloc, mxMalloc, mxRealloc) upon exiting your MEX-file. If you don't want this to happen, use the API function mexMakeMemoryPersistent.

Below is the gateway routine that calls the C computational routine revord.

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    char *input_buf, *output_buf;
    int buflen, status;

    /* Check for proper number of arguments. */
    if (nrhs != 1)
        mexErrMsgTxt("One input required.");
    else if (nlhs > 1)
        mexErrMsgTxt("Too many output arguments.");

    /* Input must be a string. */
    if (mxIsChar(prhs[0]) != 1)
        mexErrMsgTxt("Input must be a string.");

    /* Input must be a row vector. */
    if (mxGetM(prhs[0]) != 1)
        mexErrMsgTxt("Input must be a row vector.");

    /* Get the length of the input string. */
    buflen = (mxGetM(prhs[0]) * mxGetN(prhs[0])) + 1;

    /* Allocate memory for input and output strings. */
    input_buf = mxCalloc(buflen, sizeof(char));
    output_buf = mxCalloc(buflen, sizeof(char));

    /* Copy the string data from prhs[0] into a C string input_buf. */
    status = mxGetString(prhs[0], input_buf, buflen);
    if (status != 0)
        mexWarnMsgTxt("Not enough space. String is truncated.");

    /* Call the C subroutine. */
    revord(input_buf, buflen, output_buf);

    /* Set C-style string output_buf to MATLAB mexFunction output */
    plhs[0] = mxCreateString(output_buf);

    return;
}

The gateway routine allocates memory for the input and output strings. Since these are C strings, they need to be one greater than the number of elements in the MATLAB string. Next the MATLAB string is copied to the input string. Both the input and output strings are passed to the computational subroutine (revord), which loads the output in reverse order. Note that the output buffer is a valid null-terminated C string because mxCalloc initializes the memory to 0. The API function mxCreateString then creates a MATLAB string from the C string, output_buf. Finally, plhs[0], the left-hand side return argument to MATLAB, is set to the MATLAB array you just created. By isolating variables of type mxArray from the computational subroutine, you can avoid having to make significant changes to your original C code. In this example, typing

x = 'hello world';
y = revord(x)

produces

The string to convert is 'hello world'.

y =
dlrowolleh
Examples of Fortran MEX-Files

Example 2 ================================================================================

C This is an adaptation of matsq from the API Guide.
C
C This file has been written to demonstrate the use
C of two functions so that the size of the input
C matrices must not be known at compilation.
C
C As an m-file, this would be written
C   function z=multscalar(x,y)
C   % x is a matrix of any dimension
C   % y is a scalar
C   z=x*y;
C
C Written by Matthew C Roberts, matthewcroberts@hotmail.com
C
C This is the gateway function
      subroutine mexFunction(nlhs,plhs,nrhs,prhs)
      implicit none
      integer nlhs, nrhs, mxGetM, mxGetN
      integer m(nrhs), n(nrhs), mxIsNumeric
C Change this to integer*8 for 64 bit machines
      integer plhs(nlhs), prhs(nrhs)
C Check the proper number of input and output arguments
      if (nlhs .gt. 1) then
         call mexErrMsgTxt('No more than one output argument')
      end if
      if (nrhs .ne. 2) then
         call mexErrMsgTxt('Must be exactly two input arguments')
      end if
C Check to make sure that the arguments are numeric
      if (mxIsNumeric(prhs(1)).eq.0) then
         call mexErrMsgTxt('Non-numeric Data Present')
      end if
      if (mxIsNumeric(prhs(2)).eq.0) then
         call mexErrMsgTxt('Non-numeric Data Present')
      end if
C Get size of input arguments
      m(1)=mxGetM(prhs(1))
      m(2)=mxGetM(prhs(2))
      n(1)=mxGetN(prhs(1))
      n(2)=mxGetN(prhs(2))
      call matmult_1(nlhs,plhs,nrhs,prhs,m,n)
      end
C =================================================================
C This is the computational function
      subroutine matmult_1(nlhs,plhs,nrhs,prhs,m,n)
      implicit none
      integer nrhs, nlhs
      integer m(nrhs), n(nrhs)
C These are the input arguments
      double precision x(m(1),n(1)), y, z(m(1),n(1))
C Change these to integer*8 for 64 bit machines
      integer plhs(nlhs), prhs(nrhs), zpr
      integer mxCreateFull, mxGetPr
      external mxGetPr
C Copy the inputs into FORTRAN
      call mxCopyPtrToReal8(mxGetPr(prhs(1)),x,m(1)*n(1))
      call mxCopyPtrToReal8(mxGetPr(prhs(2)),y,1)
C Now, do the math:
      call matmult_2(x,y,z,m,n)
C Send the answer home
      plhs(1)=mxCreateFull(m(1),n(1),0)
      zpr=mxGetPr(plhs(1))
      call mxCopyReal8ToPtr(z,zpr,m(1)*n(1))
      end
C Computational subroutine
      subroutine matmult_2(x,y,z,m,n)
      implicit none
      integer m(2), n(2)
      double precision x(m(1),n(1)), y, z(m(1),n(1))
      z = x*y
      end

Example 3 ========================= dgemv.f =====================================

Here is an example of a Fortran routine for multiplying a vector by a matrix. Of course, MATLAB can do this already. This example was chosen because it can so easily be verified to work, and also because it contains several examples of important issues in creating a gateway file. In particular, variables of all types (integer, float, character) are passed, and in the case of float, we use scalar, vector, and matrix forms. Also addressed is the issue of an input parameter in Fortran that is modified and serves as output as well. This is distinct from functions in MATLAB, where the inputs and outputs of a function are distinct. This example contains most of the commands that would be needed by someone using numerical algorithms. dgemv.f is a Fortran subroutine for multiplying a vector by a matrix. It is from the BLAS set of routines, downloaded from Netlib. xerbla.f and lsame.f are called by dgemv.f. dgemvg.f is the gateway routine that links the Fortran code to MATLAB. mvmults.m is a MATLAB script that defines the matrix and vector we wish to multiply, and calls the routine for doing the multiplication.

--------------------------------dgemvg.f--------------------------

      subroutine mexfunction(nlhs,plhs,nrhs,prhs)
      integer*4 plhs(*),prhs(*)
      integer nlhs,nrhs
C Define types for mx functions. Recall that pointers used
C in mex files must be of type integer*4
      integer*4 mxgetpr, mxcreatefull, mxgetstring
      real*8 mxgetscalar
      integer*4 status
C Variables (and pointers to variables) used by the Fortran subroutine.
C The floating point variables will be referred to by pointers.
C For the integer and character variables, we will use the actual
C values, so we define the variables accordingly.
      integer*4 a,x,y,alpha,beta
      character*1 trans
      if (nrhs .ne. 11) then
         call mexerrmsgtxt('Wrong number of inputs in sgemv')
      end if
      if (nlhs .ne. 1) then
         call mexerrmsgtxt('Wrong number of outputs in sgemv')
      end if
C Retrieve the dimensions of the input matrix and vector. This is not
C necessary, but it gives us an opportunity to check for errors.
C Moreover, this will help us determine how long to make the output
C vector. (Note: the matrix is argument 5 and the vector argument 7
C in the dgemv call list.)
      ma = mxgetm(prhs(5))
      na = mxgetn(prhs(5))
      mx = mxgetm(prhs(7))
      nx = mxgetn(prhs(7))
      status = mxGetString(prhs(1),trans,1)
      m = int(mxgetscalar(prhs(2)))
      n = int(mxgetscalar(prhs(3)))
      lda = int(mxgetscalar(prhs(6)))
      incx = int(mxgetscalar(prhs(8)))
      incy = int(mxgetscalar(prhs(11)))
C Create the output variable.
      plhs(1) = mxcreatefull(na,1,0)
      alpha = mxgetpr(prhs(4))
      a = mxgetpr(prhs(5))
      x = mxgetpr(prhs(7))
      beta = mxgetpr(prhs(9))
      y = mxgetpr(prhs(10))
      call dgemv(trans,m,n,%val(alpha),%val(a),lda,
     $     %val(x),incx,%val(beta),%val(y),incy)
      plhs(1) = prhs(10)
      return
      end

--------------------- xerbla.f -------------------------------

      SUBROUTINE XERBLA( SRNAME, INFO )
*
*  -- LAPACK auxiliary routine (preliminary version) --
*     Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
*     Courant Institute, Argonne National Lab, and Rice University
*     February 29, 1992
*
*     .. Scalar Arguments ..
      CHARACTER*6        SRNAME
      INTEGER            INFO
*     ..
*
*  Purpose
*  =======
*
*  XERBLA is an error handler for the LAPACK routines.
*  It is called by an LAPACK routine if an input parameter has an
*  invalid value. A message is printed and execution stops.
*
*  Installers may consider modifying the STOP statement in order to
*  call system-specific exception-handling facilities.
*
*  Arguments
*  =========
*
*  SRNAME  (input) CHARACTER*6
*          The name of the routine which called XERBLA.
*
*  INFO    (input) INTEGER
*          The position of the invalid parameter in the parameter list
*          of the calling routine.
*
      WRITE( *, FMT = 9999 )SRNAME, INFO
*
      STOP
*
 9999 FORMAT( ' ** On entry to ', A6, ' parameter number ', I2, ' had ',
     $      'an illegal value' )
*
*     End of XERBLA
*
      END

-------------------- lsame.f -----------------------------------

      LOGICAL FUNCTION LSAME( CA, CB )
*
*  -- LAPACK auxiliary routine (version 2.0) --
*     Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
*     Courant Institute, Argonne National Lab, and Rice University
*     January 31, 1994
*
*     .. Scalar Arguments ..
      CHARACTER          CA, CB
*     ..
*
*  Purpose
*  =======
*
*  LSAME returns .TRUE. if CA is the same letter as CB regardless of
*  case.
*
*  Arguments
*  =========
*
*  CA      (input) CHARACTER*1
*  CB      (input) CHARACTER*1
*          CA and CB specify the single characters to be compared.
*
* =====================================================================
*
*     .. Intrinsic Functions ..
      INTRINSIC          ICHAR
*     ..
*     .. Local Scalars ..
      INTEGER            INTA, INTB, ZCODE
*     ..
*     .. Executable Statements ..
*
*     Test if the characters are equal
*
      LSAME = CA.EQ.CB
      IF( LSAME )
     $   RETURN
*
*     Now test for equivalence if both characters are alphabetic.
*
      ZCODE = ICHAR( 'Z' )
*
*     Use 'Z' rather than 'A' so that ASCII can be detected on Prime
*     machines, on which ICHAR returns a value with bit 8 set.
*     ICHAR('A') on Prime machines returns 193 which is the same as
*     ICHAR('A') on an EBCDIC machine.
*
      INTA = ICHAR( CA )
      INTB = ICHAR( CB )
*
      IF( ZCODE.EQ.90 .OR. ZCODE.EQ.122 ) THEN
*
*        ASCII is assumed - ZCODE is the ASCII code of either lower or
*        upper case 'Z'.
*
         IF( INTA.GE.97 .AND. INTA.LE.122 ) INTA = INTA - 32
         IF( INTB.GE.97 .AND. INTB.LE.122 ) INTB = INTB - 32
*
      ELSE IF( ZCODE.EQ.233 .OR. ZCODE.EQ.169 ) THEN
*
*        EBCDIC is assumed - ZCODE is the EBCDIC code of either lower or
*        upper case 'Z'.
*
         IF( INTA.GE.129 .AND. INTA.LE.137 .OR.
     $       INTA.GE.145 .AND. INTA.LE.153 .OR.
     $       INTA.GE.162 .AND.
     $       INTA.LE.169 ) INTA = INTA + 64
         IF( INTB.GE.129 .AND. INTB.LE.137 .OR.
     $       INTB.GE.145 .AND. INTB.LE.153 .OR.
     $       INTB.GE.162 .AND. INTB.LE.169 ) INTB = INTB + 64
*
      ELSE IF( ZCODE.EQ.218 .OR. ZCODE.EQ.250 ) THEN
*
*        ASCII is assumed, on Prime machines - ZCODE is the ASCII code
*        plus 128 of either lower or upper case 'Z'.
*
         IF( INTA.GE.225 .AND. INTA.LE.250 ) INTA = INTA - 32
         IF( INTB.GE.225 .AND. INTB.LE.250 ) INTB = INTB - 32
      END IF
      LSAME = INTA.EQ.INTB
*
      RETURN
*
*     End of LSAME
*
      END

---------------------------- mvmults.m -------------------------------------

mvmults.m is a MATLAB script that defines the matrix and vector we wish to multiply, and calls the routine for doing the multiplication.

clear
rows = 100;
for i = 1:rows,
    for j = 1:rows,
        amat(i,j) = (i*j-1)/rows;
    end
    xvec(i) = i/rows;
end
xvec = xvec';
yin = zeros(rows,1);
yvec = dgemv('N',rows,rows,1.0,amat,rows,xvec,1,0.0,yin,1);
cudaThreadSynchronize() is a _host_ function that waits for all previous asynchronous operations (i.e. kernel calls, async memory copies) to complete. __syncthreads() is a _device_ function that acts as a thread barrier: all threads in a block must reach the barrier before any can continue execution. It is only of use when you need to avoid race conditions among threads in a block accessing shared memory.
On May 15, 2008, in the rubble of a classroom brought down by the earthquake in Beichuan County, Sichuan Province, a butterfly flutters beside the feet of a student who did not survive. Photo by Jason Lee.

That earthquake of 2008. The butterfly alights on the girl's floral cloth shoe. Who, in the end, does this grieve? The photographer's butterfly is more heartbreaking than the photographer; the scientists are sadder still than the scientists' papers. PNAS says: the brain was hurt, and afterwards it is hard to forget. PLoS ONE says: they are not the ones grieving; the grieving ones are you. Perhaps both are true, and the difference shows up a year later. There is much to worry about, yet few people who worry. Last night's earthquake in Banna was enough to keep one from sleep. But what makes sleep harder still is not the shaking; it is the shaking houses, and, after the shaking, how we should go about shake-testing our own houses.

References:
Su Lui, Qiyong Gong, et al. (2009). High-field MRI reveals an acute impact on brain function in survivors of the magnitude 8.0 earthquake in China. PNAS 106(36): 15412–15417. doi: 10.1073/pnas.0812751106
Li S, Rao L-L, Bai X-W, Zheng R, Ren X-P, et al. (2010). Progression of the "Psychological Typhoon Eye" and Variations Since the Wenchuan Earthquake. PLoS ONE 5(3): e9727. doi: 10.1371/journal.pone.0009727

P.S.: If Xishuangbanna had buildings like these, no earthquake, however large, would frighten us.
http://pre.aps.org/abstract/PRE/v81/i6/e066202

Computation of the drift velocity of spiral waves using response functions

Abstract: Rotating spiral waves are a form of self-organization observed in spatially extended systems of physical, chemical, and biological nature. In the presence of a small perturbation, the spiral wave's center of rotation and fiducial phase may change over time, i.e., the spiral wave drifts. In linear approximation, the velocity of the drift is proportional to the convolution of the perturbation with the spiral's response functions, which are the eigenfunctions of the adjoint linearized operator corresponding to the critical eigenvalues λ = 0, ±iω. Here, we demonstrate that the response functions give quantitatively accurate predictions of the drift velocities due to a variety of perturbations: a time-dependent, periodic perturbation inducing resonant drift; a rotational symmetry-breaking perturbation inducing electrophoretic drift; and a translational symmetry-breaking perturbation (inhomogeneity-induced drift), including drift due to a gradient, stepwise, and localized inhomogeneity. We predict the drift velocities using the response functions in the FitzHugh-Nagumo and Barkley models, and compare them with the velocities obtained in direct numerical simulations. In all cases good quantitative agreement is demonstrated.

This is a summary-type paper covering a series of articles from the authors' research group; the theoretical cornerstone is still the stability analysis developed in the 1990s by the second author, D. Barkley. As the paper itself notes, the complexity of computing the response functions tends to intimidate ordinary researchers who are used to direct simulation; I cannot speak for others, but that is certainly true for me. What interests me most here is the resonant drift.
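The statement in the abstract can be written schematically as follows. This is a restatement in assumed notation (the symbols W, h, Φ, R and the inner product are not taken from the paper itself): a small perturbation εh shifts the rotation phase and the rotation center according to projections onto the critical adjoint eigenfunctions.

```latex
% Response functions W^{(n)}: eigenfunctions of the adjoint linearized
% operator at the critical eigenvalues 0, \pm i\omega
\mathcal{L}^{\dagger} W^{(n)} = \lambda_n W^{(n)},
\qquad \lambda_n \in \{0,\ \pm i\omega\}

% Drift of the fiducial phase and of the rotation center: projections
% (convolutions) of the perturbation h onto the response functions
\dot{\Phi} \propto \varepsilon \left\langle W^{(0)},\, h \right\rangle,
\qquad
\dot{R} \propto \varepsilon \left\langle W^{(\pm 1)},\, h \right\rangle,
\qquad
\left\langle W,\, h \right\rangle
  = \int_{\mathbb{R}^2} W^{+}(\mathbf{r})\, h(\mathbf{r})\, d^2\mathbf{r}
```

Because the response functions of a rigidly rotating spiral are typically localized near the core, only the part of the perturbation felt near the core contributes appreciably to the drift.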
Several curve-fitting methods are summarized here for later use.

1. "Solver" in Excel
Step 1: Define the objective parameters;
Step 2: Calculate the predicted values from the initial parameter estimates and the observation data;
Step 3: Calculate and sum the squared residuals (y_obs - y_pred)^2;
Step 4: Click "Solver" in the "Tools" tab;
Step 5: Select the "objective cell", let "Solver" adjust the "changing cells", and press "Solve".
Advantage: easy to implement without coding.
Disadvantage: time-consuming to find reasonable initial parameter values without prior knowledge.

2. "lsqcurvefit" and "nlinfit"
function F = fun(x, xdata)
F = (x(1)./(xdata-x(2))).^(x(3)+1).*exp(-x(1)./(xdata-x(2)))/(x(1)*gamma(x(3)));

xdata = [ ]; ydata = [ ]; x0 = [ ];   % data and initial guess elided in the original
= nlinfit(xdata, ydata, @fun, x0)     % output arguments elided in the original
% or
= lsqcurvefit(@fun, x0, xdata, ydata)
MaxValue = max(xdata); MinValue = min(xdata);
x1 = [ ];                             % elided in the original
y1 = (x(1)./(xdata-x(2))).^(x(3)+1).*exp(-x(1)./(xdata-x(2)))/(x(1)*gamma(x(3)));
loglog(xdata, ydata, 'ko', xdata, y1, 'k-')

Summary: "lsqcurvefit" does not need a very accurate initial estimate of the parameters; however, its result is much worse than Solver's or than a fit started from a good initial estimate. "nlinfit" needs a good initial estimate to produce results at all.

3. Maximum Likelihood Estimation
The advantage of this method is its loose requirement on the accuracy of the initial parameter estimates, and it is a very good method for fitting data sets that span several orders of magnitude.
Open the M-file "Inverse_Gamma".
Step 1. Prepare the input data:
Copy the bins of landslide area into "A_L" at line 5;
Copy the calculated probability into "y" at line 11;
Copy the frequency of landslides into "y_freq" at line 15;
Step 2.
Change the function 'mlecustom' in MATLAB:
Open 'mlecustom' at "InstallationDrive:\Matlab\toolbox\stats\private\mlecustom.m"; or type 'mlecustom' in the MATLAB Editor, select the string, right-click it, and choose 'Open Selection' from the popup list;
Right after the function header, type "global y_freq";
At line 150, replace "uncensFreq = freq(c);" with "uncensFreq = y_freq;".
Step 3. Output
Run the script (Inverse_Gamma.m) in the MATLAB Editor;
The optimized parameters and the coefficient of determination will be shown in the command window;
The fitted curve is shown in the figure window.
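The least-squares fit of Section 2 and the frequency-weighted likelihood that the "uncensFreq = y_freq" patch produces can both be sketched outside MATLAB. Below is a Python/SciPy sketch under assumed inputs: the synthetic xdata/ydata, the binned inverse-gamma sample, and all parameter values stand in for the post's elided data and are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit, minimize
from scipy.special import gamma
from scipy.stats import invgamma

# Inverse-gamma-style model from the MATLAB snippet above:
# f(x) = (a/(x-b))^(c+1) * exp(-a/(x-b)) / (a*Gamma(c))
def fun(x, a, b, c):
    return (a / (x - b))**(c + 1) * np.exp(-a / (x - b)) / (a * gamma(c))

# Synthetic stand-in data (the post's real xdata/ydata/A_L/y_freq are elided).
rng = np.random.default_rng(0)
xdata = np.linspace(0.5, 10.0, 50)
ydata = fun(xdata, 2.0, 0.0, 1.5) * (1 + 0.01 * rng.standard_normal(50))

# 1) Least squares, the SciPy analogue of lsqcurvefit/nlinfit.
popt, _ = curve_fit(fun, xdata, ydata, p0=[1.0, -0.1, 1.0],
                    bounds=([0.1, -2.0, 0.1], [10.0, 0.4, 10.0]))

# 2) Frequency-weighted MLE, the effect of the 'uncensFreq = y_freq' patch:
# each bin center A_L[i] contributes y_freq[i] times its log-density.
def nll(params, A_L, y_freq):
    a, scale = params
    if a <= 0 or scale <= 0:
        return np.inf
    return -np.sum(y_freq * invgamma.logpdf(A_L, a, scale=scale))

samples = invgamma.rvs(3.0, scale=2.0, size=2000, random_state=rng)
y_freq, edges = np.histogram(samples, bins=80)
A_L = 0.5 * (edges[:-1] + edges[1:])       # bin centers
res = minimize(nll, x0=[1.0, 1.0], args=(A_L, y_freq), method="Nelder-Mead")

print(popt)    # least-squares estimates, close to (2.0, 0.0, 1.5)
print(res.x)   # weighted-MLE estimates, roughly (3.0, 2.0)
```

The bounds keep x - b positive during the least-squares search; in practice the synthetic arrays would be replaced by the real A_L / y_freq bins.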
Xuzhang Shen
Sex: Male. Born: 7/29/1976. Email: shenxzh@gmail.com
Office Address: Lanzhou Institute of Seismology, China Earthquake Administration (CEA). 450 Donggang West Road, Lanzhou, Gansu Province, 730000, China
Tel: +86-09318277661; Cell phone: +86-15009313127
Education
Ph.D. College of Earth Sciences, Graduate University of Chinese Academy of Sciences (GUCAS). Beijing, China. 2008
M.S. Lanzhou Institute of Seismology, China Earthquake Administration (CEA). Lanzhou, China. 2003
Undergraduate. Physics Department, Gansu Educational Institute. Lanzhou, China. 2000
Junior college. Physics Department, Zhangye Teacher's College. Zhangye, China. 1996
Professional Experience
11/2012-present. Researcher, LIS.
11/2008-11/2012. Associate researcher, LIS.
01/2005-12/2008. Assistant researcher, LIS.
05/2013-06/2013. Visiting researcher at GFZ, hosted by Dr. Xiaohui Yuan
04/2012-05/2012. Visiting researcher at GFZ, hosted by Dr. Xiaohui Yuan
09/2011-10/2011. Visiting researcher at GFZ, hosted by Dr. Xiaohui Yuan
12/2009-12/2010. Visiting researcher at ERI, hosted by Prof. Hitoshi Kawakatsu
11/2007-12/2007. Foreign research fellow at ERI, hosted by Prof. Hitoshi Kawakatsu
02/2007-06/2007. Foreign research fellow at ERI, University of Tokyo, hosted by Prof. Hitoshi Kawakatsu
07/2003-12/2004. Research Assistant, Lanzhou Institute of Seismology (LIS), China Earthquake Administration (CEA).
List of academic publications
1. SHEN Xu-Zhang. An analysis of the deformation of the crust and LAB beneath the Lushan and Wenchuan earthquakes in Sichuan province (in Chinese with English abstract). Chinese Journal of Geophysics, 2013, 56(6): 1895-1903. doi:10.6038/cjg20130612
2. SHEN Xu-Zhang. Imaging structures of crust and upper mantle beneath the source of the 14 April 2010 Yushu, Qinghai earthquake using P- and S-wave receiver functions (in Chinese with English abstract). Chinese Journal of Geophysics, 2013, 56(2): 493-503. doi:10.6038/cjg20130213
3.
Shen Xuzhang, Zhou Huilan. Receiver functions of CCDSN and crustal structure of China continent. Earthquake Science, 2012, 25: 3-16. doi:10.1007/s11589-012-0826-6
4. Shen Xuzhang, Zhou Huilan. Peeling linear inversion of upper mantle velocity structure with receiver functions. Earthquake Science, 2012, 25: 65-74. doi:10.1007/s11589-012-0832-8
5. Xuzhang Shen, Xiuping Mei and Yuansheng Zhang. The crustal and upper mantle structures beneath the northeastern margin of Tibet. Bull. Seismol. Soc. Am., 2011, 101(6): 2782-2795. doi:10.1785/0120100112
6. SHEN Xu-Zhang, QIN Man-Zhong, ZHANG Yuan-Sheng et al. Seismological evidence of ultra-crust faults in northeastern margin of Qinghai-Tibet Plateau (in Chinese with English abstract). Chinese J. Geot. Eng., 2011, 33(Suppl.1): 255-260.
7. Shen Xuzhang. Study on the low-velocity discontinuity at the depth of 170 km beneath SSE station in Shanghai, China. Chinese J. Geophys., 2011.
8. SHEN Xu-zhang, MEI Xiu-ping, YANG Hui. Study on the crustal structures beneath Wenchuan earthquake rupture zone. Progress in Geophysics (in Chinese with English abstract), 2011, 26(2): 477-488. doi:10.3969/j.issn.1004-2903.2011.02.012
9. Shen Xuzhang, Zhang Shuzhen, Zheng Zhong and Hao Chunyue. Study on the small-scale heterogeneities below the Hailaer CTBTO seismic array. Chinese J. Geophys. (in Chinese with English abstract), 2010, 53(5): 1158-1166. doi:10.3969/j.issn.0001-5733.2010.05.107
10. Shen Xuzhang and Mei Xiuping. The array response function of Lanzhou Seismic Array and results of FK analysis for small earthquakes of different azimuths (in Chinese with English abstract). Northwestern Seismological Journal, 2010, 32(1): 59-64
11. Xuzhang Shen and Joachim R. R. Ritter. Small-scale heterogeneities below the Lanzhou CTBTO seismic array, from seismic wavefield fluctuations. Journal of Seismology, 2009. doi:10.1007/s10950-009-9177-8
12.
Shen Xuzhang and Zhou Huilan. Locating seismic scatterers at the base of the mantle beneath eastern Tibet with PKIKP precursors. Chinese Science Bulletin, 2009. doi:10.1007/s11434-009-0382-1
13. Shen Xuzhang and Zhou Huilan. The low-velocity layer at the depth of 620 km beneath Northeast China. Chinese Science Bulletin, 2009. doi:10.1007/s11434-008-0559-z
14. Xuzhang Shen, Huilan Zhou, and Hitoshi Kawakatsu. Mapping the upper mantle discontinuities beneath China with teleseismic receiver functions. Earth Planets Space, 2008, 60(7): 713-719
15. Shen Xuzhang and Mei Xiuping. Temporal and spatial evolution of water-tube tiltmeter records of China deformation stations and its implication before and after the Ms8.1 earthquake west of the Kunlun Mountain pass (in Chinese with English abstract). Northwestern Seismological Journal, 2009, 31(1): 57-60
16. Shen Xuzhang and Zhou Huilan. Inversion of velocity structure under HLR seismic station with receiver function and NA method (in Chinese with English abstract). Journal of the Graduate School of the Chinese Academy of Sciences, 2005, 22(3): 322-328
17. Shen Xuzhang, Chang Qianjun and Mei Xiuping. Analysis on precursor effect to earthquake of FSQ water tube tiltmeter in Lanzhou deformation station (in Chinese with English abstract). Northwestern Seismological Journal, 2004, 26(4): 368-370
18. Xiaoqing Xu, Xuzhang Shen*, Chang Ming, et al. Preliminary analysis of teleseismic receiver functions of the Ningxia and its adjacent area. Earthquake Science, 2012, 25: 47-53. doi:10.1007/s11589-012-0830-x
19. Tang Jiuan, Shen Xuzhang*, and Gao Antai et al. Exploration of estimation of observations of confined water level (in Chinese with English abstract). Journal of Geodesy and Geodynamics, 2012, 32(4): 37-40
Meeting Abstracts
Xuzhang Shen, Teh-Ru Alex Song. Determining the lithosphere/asthenosphere boundary (LAB) with multiples on P-receiver functions. “Symposium III: Craton vs.
Orogen” at the International Conference on Craton Formation and Destruction, Beijing, 25-29 April 2011.
Shen Xuzhang. Origin of the geothermal in the Tianshui region in the northeastern margin of Tibet and its implication. JPGU, June 2010, Tokyo, Japan (Talk)
Shen Xuzhang, Zhou Huilan. Peeling hybrid inversion of upper mantle structure with receiver functions. Poster, WPGM, 2006, Beijing, China.
Shen Xuzhang, Zhou Huilan. The influence of low-velocity zones on receiver functions and their treatment in the inversion. China Earthquake Annual, 2005, 393 (Talk)
Shen Xuzhang, Zhou Huilan. Parallel NA inversion of receiver functions. China Earthquake Annual, 2004, 370 (Talk)
Fu Guanyu, Shen Xuzhang, Tang Jiuan, Fukuda Y. Coseismic strain steps of the 2008 Wenchuan earthquake indicate EW extension of the Tibetan Plateau and increased hazard south of the epicenter. American Geophysical Union, Fall Meeting 2008
Research grant record
Project Manager
National Science Foundation of China. Study on the lithospheric anisotropy and deformation of the eastern margin of Tibet and its adjacent area. 2013-2016
National Science Foundation of China. Study of the lithosphere and deep discontinuities of the upper mantle beneath the northeast and east margins of Tibet with P and S receiver functions. 2010-2012
Basic R&D fund of the Institute of Earthquake Prediction, CEA. Full wavefield simulation in 3-D media in the northeastern margin of Tibet. 2009-2010
China Earthquake Administration (CEA) associate fund. Study of the detailed structure of the lithosphere and upper mantle under the Lanzhou Dajianshan array. 2005-2007
Project Participant
China Earthquake Administration (CEA) associated fund. Relationship of seismicity and the spatio-temporal distribution of the China deformation field. 2005-2007
National Science Foundation of China. Study of the peeling inversion method of receiver functions and detection of deep discontinuities of the upper mantle under China. 2003-2006.
Special basic funding of the Ministry of Science and Technology of the P.R.C. Construction of the China tidal database. 2004-2006
Skills
Operating systems: proficient in Windows, Linux
Programming languages: Fortran
Seismic software: SAC, Matseis, SH
Script languages: familiar with Matlab, Perl, Shell, Java, Python
Other software: Word, PowerPoint, Excel, GMT, TauP, Photoshop
Excellent communication skills, including written, verbal and interpersonal.
Biodiversity in a complex world: consolidation and progress in functional biodiversity research
From Ecology Letters. Authors: Helmut Hillebrand, Birte Matthiessen
The global decline of biodiversity caused by human domination of ecosystems worldwide is supposed to alter important process rates and state variables in these ecosystems. However, there is considerable debate on the prevalence and importance of biodiversity effects on ecosystem function (BDEF). Here, we argue that much of the debate stems from two major shortcomings. First, most studies do not directly link the traits leading to increased or decreased function to the traits needed for species coexistence and dominance. We argue that implementing a trait-based approach and broadening the perception of diversity to include trait dissimilarity or trait divergence will result in more realistic predictions on the consequences of altered biodiversity. Second, the empirical and theoretical studies do not reflect the complexity of natural ecosystems, which makes it difficult to transfer the results to natural situations of species loss. We review how different aspects of complexity (trophic structure, multifunctionality, spatial or temporal heterogeneity, and spatial population dynamics) alter our perception of BDEF. We propose future research avenues concisely testing whether acknowledging this complexity will strengthen the observed biodiversity effects. Finally, we propose that a major future task is to disentangle biodiversity effects on ecosystem function from direct changes in function due to human alterations of abiotic constraints.
Source: http://www3.interscience.wiley.com/journal/122659286/abstract
There are many methods for selecting differentially expressed genes and many tools for analyzing time-series expression data; different choices give different results, so the downstream functional analysis can turn out completely differently. Do the two passages below settle the question?

Hosack and colleagues showed that the prevalent biological themes within sets of differentially expressed transcripts derived from the same experiment, but using different transcript selection methods, are a stable representation of the biology underlying the experiment. Therefore, even though differentially expressed transcript lists have only partial overlap, they all represent subsets of transcripts associated with a specific biological event (Figure 2).

The integration of transcription profiles with biology knowledge bases (e.g. GO, KEGG, PUBMED, etc.) is another way of mapping differentially expressed transcripts onto specific biological knowledge domains. A coordinated change among many gene products can produce potent biological effects, while the effect of each individual transcript can be subtle. The identification of pathways distinctively enriched within a set of differentially expressed transcripts can subsequently be used to check whether more subtle transcriptional variations, not considered in the stringent differential expression analysis, could also strengthen the biological meaning of the identified pathway. Another possible application is linking alternative splicing events, detected with the new exon-oriented Affymetrix microarray platform, to functional pathways depicted by conventional differential expression analysis.

GO is, however, marked by flaws of certain characteristic types, due to a failure to address basic ontological principles. Anyone who has done GO annotation has run into this headache: every gene carries several GO IDs, but not all genes have GO IDs at the same level (GO is a directed acyclic graph to begin with, so levels are not well defined). So which GO IDs should be used for classification? Truly baffling! Fortunately, tools already exist that address this problem.

Tools:
Class 1: Singular enrichment analysis (SEA)
The most traditional strategy for enrichment analysis is to take the user's preselected interesting genes (e.g. differentially expressed genes selected between experimental versus control samples by a t-test with P-value < 0.05 and fold change > 1.5), and then iteratively
test the enrichment of each annotation term one-by-one in a linear mode. Thereafter, the individual enriched annotation terms passing the enrichment P-value threshold are reported in a tabular format, ordered by the enrichment probability (enrichment P-value). The enrichment P-value calculation, i.e. whether the genes in the list hit a given biology class more often than expected by pure random chance, can be performed with some common and well-known statistical methods (11,12,76), including the Chi-square test, Fisher's exact test, the Binomial probability and the Hypergeometric distribution, etc. (Table 1). More discussion regarding the enrichment P-value can be found in a later section of this paper. (An excellent summary: the paper I have just finished writing follows exactly this route, which apparently is already out of date.)

Even though the strategy and output format of SEA are simple, SEA is indeed a very efficient way to extract the major biological meaning behind large gene lists, which may be generated from any type of high-throughput genomic study or bioinformatics software package. Most of the earlier tools (such as GoMiner, Onto-Express, DAVID and EASE) and many of the recently released tools (such as GOEAST and GFinder) adopted this strategy and demonstrated significant success in many genomic studies. However, the common weakness of tools in this class is that the linear output of terms can be very large and overwhelming (from hundreds to thousands), so the data analyst's focus and the interrelationships of relevant terms can be diluted. For example, relevant GO terms like apoptosis, programmed cell death, induction of apoptosis, anti-apoptosis, regulation of apoptosis, etc., are spread out at different positions in a large linear output. It is difficult to focus on the interrelationships of relevant biology terms among hundreds or thousands of others.
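The SEA recipe just described (scan annotation terms one by one and score each with an enrichment P-value) can be sketched with a hypergeometric test in Python; the gene counts and term names below are invented for illustration.

```python
from scipy.stats import hypergeom

# SEA-style linear scan: test each annotation term one-by-one (toy numbers).
N = 10000          # genes on the array
n = 300            # genes in the user's pre-selected list
# term -> (genes annotated to the term on the array, hits in the list); assumed values
terms = {"apoptosis": (200, 18), "cell cycle": (150, 4), "ion transport": (400, 13)}

results = {}
for term, (K, k) in terms.items():
    # P(X >= k): chance of seeing k or more hits under random selection
    results[term] = hypergeom.sf(k - 1, N, K, n)

enriched = sorted((t for t, p in results.items() if p < 0.05), key=results.get)
print(enriched)    # only the term with far more hits than expected survives
```

Each P-value is P(X >= k) when drawing n genes from N of which K carry the term; in practice the resulting list would also be corrected for multiple testing.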
In addition, the quality of the pre-selected gene list can largely impact the enrichment analysis, which makes SEA unstable to a certain degree when different statistical methods or cutoff thresholds are used (which brings us back to the situation described at the start of this post).

Class 2: Gene set enrichment analysis (GSEA)
GSEA carries the core spirit of SEA, but uses a distinct algorithm to calculate enrichment P-values (35). People in the field give great attention and expectation to the GSEA strategy. The unique idea of GSEA is its no-cutoff strategy, which takes all genes from a microarray experiment without selecting significant genes (e.g. genes with P-value < 0.05 and fold change > 1.5). This strategy benefits the enrichment analysis in two ways: 1) it reduces the arbitrary factors in the typical gene selection step that could affect the traditional enrichment analysis; and 2) it uses all the information obtained from the microarray experiment by allowing the minimally changing genes, which cannot pass the selection threshold, to contribute to the enrichment analysis in differing degrees. The maximum enrichment score (MES) is calculated from the rank order of all gene members in the annotation category. Thereafter, enrichment P-values are obtained by comparing the MES to randomly shuffled MES distributions (a Kolmogorov-Smirnov-like statistic) (35). Other enrichment tools in the GSEA class using the no-cutoff strategy, such as ErmineJ (31), FatiScan (55), MEGO (36), PAGE (29), MetaGF, Go-Mapper (22) and ADGO (45), etc., employ parametric statistical approaches such as the z-score, t-test, permutation analysis, etc. These approaches directly take the experimental values (e.g. fold change) of all genes into the calculation for each annotation term. Collectively, recent GSEA tools that integrate the total experimental values into functional data mining are an interesting trend with a lot of potential as a complement to traditional SEA (47,77-79).
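The Kolmogorov-Smirnov-like running score at the heart of GSEA's no-cutoff strategy can be illustrated with a minimal, unweighted sketch (real GSEA weights each step by the gene's correlation with the phenotype and assesses significance by permutation; the gene names here are made up).

```python
import numpy as np

# Unweighted running enrichment score: walk down the ranked gene list,
# stepping up on set members and down on non-members; the score is the
# maximum deviation of this walk from zero.
def enrichment_score(ranked_genes, gene_set):
    in_set = np.array([g in gene_set for g in ranked_genes])
    n, n_hit = len(ranked_genes), in_set.sum()
    step_hit = 1.0 / n_hit            # walk up on a hit
    step_miss = 1.0 / (n - n_hit)     # walk down on a miss
    running = np.cumsum(np.where(in_set, step_hit, -step_miss))
    return running[np.argmax(np.abs(running))]

# Toy example (assumed gene names): set members clustered near the top
ranked = ["g1", "g2", "g3", "g4", "g5", "g6", "g7", "g8", "g9", "g10"]
es = enrichment_score(ranked, {"g1", "g2", "g4"})
print(round(es, 3))   # 0.857: the set concentrates at the top of the list
```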
However, tools in the GSEA class also share some common limitations. First, the no-cutoff strategy is the key advantage of GSEA, but it is also becoming its major limitation in many biological studies. The GSEA method requires a summarized biological value (e.g. fold change) for each of the genome-wide genes as input. Sometimes it is difficult to summarize the many biological aspects of a gene in one meaningful value when the biological study and genomic platform are complex. For example, each gene derived from a SNP microarray may be associated with a set of SNPs, which vary in size, P-values, physical distances, disease regions, LD (linkage disequilibrium) strength and SNP-gene locations (e.g. in an exon or in an intron) from gene to gene. It is still a very experimental procedure to summarize such diverse aspects of biology into one comprehensive value. Similar challenges arise in many of the emerging genomic platforms (e.g. SNP, exon and promoter microarrays). Such situations fully or partially fail to meet the input data structure that GSEA requires. As another example, many clinical microarray studies involve multiple factors/variables simultaneously, such as disease/normal, age, sex, drug treatment/control, reagent batch effects, animal batch effects, etc. In such complex situations, sophisticated statistical methods, like ANOVA, time-series analysis, survival analysis, etc., are more powerful for handling multiple variables, multiple time points and batch effects simultaneously when data-mining interesting gene lists. In many similar cases, the upstream data processing and comprehensive gene selection statistics cannot simply be avoided or replaced by GSEA. Moreover, the genes ranked in the higher positions (usually with larger differences, e.g. fold change) are the major force driving (i.e. highly weighting) the enrichment P-values in GSEA. Thus, the underlying assumption is that the genes with large regulation (e.g.
fold changes) contribute more to the biology. Obviously, this is not always true in real biology. Biologists know that small changes in some signal transduction genes can result in large downstream biological consequences. In contrast, some big changes in metabolic genes may be just a consequence of other small, but important, signal regulation events. Depending on the questions the researcher is asking, the mildly changed signal transduction genes may be more interesting/important than the strongly regulated ones.

The GSEA and SEA methods have been available in the community for many years. Surprisingly, no comprehensive and systematic side-by-side comparisons are available yet. A recent study ran the same datasets through DAVID (a SEA/MEA method) versus ErmineJ (a GSEA method) (60). As expected, the results from the two methods were highly consistent with each other. The consistency makes sense, because the major driving force of the enrichment calculation in GSEA is the strongly changing genes, and those genes are most likely to be selected in the traditional gene selection procedures, resulting in very similar results between the SEA and GSEA methods. (A shared weakness, yet it yields consistent results.)

Class 3: Modular enrichment analysis (MEA)
MEA inherits the basic enrichment calculation of SEA and adds extra network discovery algorithms by considering term-to-term relationships. Recent tools, such as Ontologizer (69), topGO (41), GENECODIS (59), ADGO (45) and ProfCom (68), claim to improve discovery sensitivity and specificity by considering the inter-relationships of GO terms in the enrichment calculations, i.e. using genes of composite (joint) annotation terms as the reference background. The key advantage of this approach is that the researcher can take advantage of term-term relationships, in which joint terms may carry unique biological meaning for a given study that is not held by the individual terms.
Moreover, when heterogeneous annotation content is used, the annotation terms are highly redundant and also have strong interrelationships among different aspects of the same biological process. Building such relationships is one step closer to the true nature of biology during data mining. GoToolBox (18) developed functions to cluster related GO terms or genes, which provides gene functional annotation in a network context; however, these functions work only on a small scope and only for GO terms. DAVID (60,61) recently provided a new tool that is able to organize and condense a wide range of heterogeneous annotation content, such as GO terms, protein domains, pathways and so on, into term or gene classes. This organization is accomplished by using Kappa statistics to mine the complex biological co-occurrences found in multiple heterogeneous annotation sources. Combined with traditional enrichment P-value calculations, the new approach allows the enrichment analysis to progress from term-centric or gene-centric to biological module-centric analysis. These methods take into account the redundant and networked nature of biological annotation content in order to build the larger biological picture rather than focusing on an individual term or gene. Such data-mining logic seems closer to the nature of biology, in that a biological process works in a network manner. However, the obvious limitation of MEA is that orphan terms or genes (without strong relationships to neighboring terms/genes) can be left out of the analysis. Thus, it is important to examine the terms or genes left out during analysis when using MEA (60). In addition, the quality of the pre-selected gene list impacts the analytic results, just as it does in SEA analysis.
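The Kappa statistic DAVID uses measures how much two annotation terms agree on gene membership beyond chance. A minimal sketch, with invented membership vectors rather than real annotation data:

```python
import numpy as np

# Cohen's kappa between two terms' gene-membership vectors:
# (observed agreement - chance agreement) / (1 - chance agreement)
def kappa(t1, t2):
    t1, t2 = np.asarray(t1), np.asarray(t2)
    observed = np.mean(t1 == t2)                 # fraction of genes the terms agree on
    p1, p2 = t1.mean(), t2.mean()
    expected = p1 * p2 + (1 - p1) * (1 - p2)     # agreement expected by chance
    return (observed - expected) / (1 - expected)

# 1 = gene annotated to the term, 0 = not; two largely co-occurring terms
apoptosis  = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
prog_death = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]
print(round(kappa(apoptosis, prog_death), 3))    # 0.8
```

Terms whose pairwise kappa exceeds a chosen threshold can then be clustered into one functional module.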
Class 1 example: GOEAST is a web-based software toolkit providing easy-to-use, visualizable, comprehensive and unbiased Gene Ontology (GO) analysis for high-throughput experimental results, especially results from microarray hybridization experiments. The main function of GOEAST is to identify significantly enriched GO terms among given lists of genes using accurate statistical methods.

Class 2 example: topGO (topology-based Gene Ontology scoring). More recently, Alexa proposed a conditional hypergeometric test that computes the significance of a GO term based on its neighbourhood. Using the classical approach, in which each node is scored independently, only a few truly significant nodes remain undiscovered; however, the dependencies between top-scoring nodes yield a high false-positive rate. Alexa introduced the possibility of weighting the genes annotated to a GO term based on the scores of neighboring GO terms, or of iteratively removing the genes mapped to significant GO terms from more general (higher-level) GO terms. The conditional hypergeometric test based on GO term weightings reduces the false-positive rate while not missing many truly enriched nodes. The other conditional test is more efficient at finding the important areas in the GO graph and further reduces the false-positive rate, but with a higher risk of discarding relevant nodes.

To summarize, there are really only two starting points:
1. Start from the differentially expressed genes: select genes first, then analyze their functions; this is constrained by the gene selection method and its parameters.
2. Start from the functions: then look at how the genes under each function behave; this tilts toward the functions whose genes change most. Could it be improved by down-weighting the magnitude of gene changes?

Testing the functional analysis
Routinely, both over- and under-representation of ontology terms can be detected using the standard hypergeometric test. In probability theory and statistics, the hypergeometric distribution is a discrete probability distribution that describes the number of successes in a sequence of n draws from a finite population without replacement. The test based on the hypergeometric distribution is identical to the corresponding one-tailed version of Fisher's exact test (a test of difference).
Reciprocally, the P-value of a two-sided Fisher's exact test (a test of magnitude) can be calculated as the sum of two appropriate hypergeometric tests. Even though ontology enrichment approaches are widely used, only the most significant portion of the gene list is used to compute their statistic. Furthermore, the order of the genes on the significant gene list is not taken into consideration. As a result, simply counting the number of gene set members contained in the short list leads to a loss of information, especially if the list is long and the difference between the more significant and the less significant genes is substantial. Finally, the correlation structure of the gene sets is not considered at all.

Two of the most used statistics for evaluating the association between functional pathways and differential expression are the one-tailed Fisher exact test (FET) and Gene Set Enrichment Analysis (GSEA) (TM4 MeV). FET is a statistical significance test used in the analysis of categorical data where sample sizes are small; it examines the significance of the association between two variables in a 2x2 contingency table. GSEA, on the other hand, evaluates microarray data at the level of gene sets. The gene sets are defined on the basis of prior biological knowledge, e.g. published information about biochemical pathways or co-expression in previous experiments. The goal of GSEA is to determine whether the members of a gene set S tend to occur toward the top (or bottom) of the list L, in which case the gene set is correlated with the phenotypic class distinction. GSEA acts in three steps: (i) calculation of an enrichment score; (ii) estimation of the significance level of the enrichment score; (iii) adjustment for multiple hypothesis testing.
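The identity noted above, that the hypergeometric test equals the one-tailed Fisher's exact test, is easy to verify numerically (the 2x2 counts below are invented):

```python
from scipy.stats import hypergeom, fisher_exact

# Toy 2x2 table: k hits in a list of n genes, drawn from N genes
# of which K carry the annotation (all counts assumed).
k, n, K, N = 12, 50, 100, 1000
table = [[k, n - k], [K - k, N - n - K + k]]

odds, p_one = fisher_exact(table, alternative="greater")
p_hyper = hypergeom.sf(k - 1, N, K, n)   # P(X >= k)
print(p_one, p_hyper)                    # the two one-tailed P-values coincide

# The two-sided Fisher P-value additionally gathers the opposite tail.
odds, p_two = fisher_exact(table, alternative="two-sided")
```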
Since the accurate and rapid identification of perturbed pathways through the analysis of genome-wide expression profiles facilitates the generation of biological hypotheses, Tian proposed a statistical framework for determining whether a specified group of genes for a pathway has a coordinated association with a phenotype of interest. In this framework, the overall objective is to test whether a group of genes has a coordinated association with a phenotype of interest by evaluating the following two null hypotheses: (i) the genes in a gene set show the same pattern of associations with the phenotype as the rest of the genes; (ii) the gene set does not contain any genes whose expression levels are associated with the phenotype of interest. After the test statistics for the two hypotheses are computed, the gene sets are ranked in order of their significance, and a control for the inflated Type I error due to multiple comparisons of gene sets is applied. The authors claim that their approach has more statistical power than currently available methods and can discover statistically significant pathways that other methods miss. Impressive!

Markowetz proposed an algorithm to infer non-transcriptional pathway features based on differential gene expression in silencing assays. The authors' idea is that cellular signalling pathways that are not modulated at the transcriptional level cannot be directly deduced from expression profiling experiments. However, when external interventions occur, i.e. RNA interference or gene knock-outs, even if the expression of the signalling genes themselves is not changed, secondary effects in downstream genes shed light on the pathway and allow partial reconstruction of its topology. The core of Markowetz's approach is the definition of a scoring function that measures how well hypotheses about pathway topology are supported by the experimental data.
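Tian's first null hypothesis (the genes in a set show the same pattern of association with the phenotype as the rest) can be illustrated with a simple permutation scheme; this is a toy sketch with invented association scores, not the authors' actual statistic:

```python
import numpy as np

# Per-gene association statistics (invented): standard normal background,
# with the 20-gene "pathway" genuinely shifted upward.
rng = np.random.default_rng(2)
scores = rng.standard_normal(1000)
set_idx = np.arange(20)
scores[set_idx] += 1.0

# Statistic: difference between the set's mean association and the rest's.
def set_statistic(scores, idx):
    return scores[idx].mean() - np.delete(scores, idx).mean()

observed = set_statistic(scores, set_idx)

# Null distribution: shuffle gene labels, recompute the statistic.
perm = np.array([set_statistic(rng.permutation(scores), set_idx)
                 for _ in range(2000)])
p_value = (1 + np.sum(np.abs(perm) >= abs(observed))) / (1 + len(perm))
print(p_value)   # small: the set is coordinately associated with the phenotype
```

Ranking many gene sets by such P-values would then require a multiple-comparison correction, as the framework described above does.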
References:
Cordero F, Botta M, Calogero RA. Microarray data analysis and mining approaches. Briefings in Functional Genomics and Proteomics 2008, 6(4): 265-281. doi:10.1093/bfgp/elm034 (Corresponding author: Raffaele A. Calogero, Department of Clinical and Biological Sciences, University of Torino, Italy.)
Huang DW, Sherman BT, Lempicki RA. Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists. Nucleic Acids Research 2009, 37(1): 1-13. doi:10.1093/nar/gkn923 (Laboratory of Immunopathogenesis and Bioinformatics, Clinical Services Program, SAIC-Frederick, Inc., National Cancer Institute at Frederick, Frederick, MD 21702, USA.)