grompp note: "leapfrog does not yet support Nose-Hoover chains, nhchainlength reset to 1". Why? According to this mailing-list thread, the note can safely be ignored: http://gromacs.org_gmx-users.maillist.sys.kth.narkive.com/cXo18Y84/dr-lemkul-s-umbrella-sampling-tutorial-grompp-note-on-leap-frog-nose-hoover
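For context, a minimal .mdp sketch of the settings that trigger this note; the values are placeholders, and the keyword spellings follow the current .mdp documentation (grompp treats '-' and '_' in option names interchangeably). With the leap-frog integrator (integrator = md), grompp quietly falls back to a chain length of 1, which is exactly what the note reports:

cat >> md.mdp <<'EOF'
integrator       = md            ; leap-frog
tcoupl           = nose-hoover
tc-grps          = System
tau-t            = 1.0
ref-t            = 300
nh-chain-length  = 10            ; not supported with leap-frog; grompp resets it to 1
EOF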
gmx mdrun -deffnm md_1 -cpi md_1.cpt -maxh 24 -append

Tested: restarting this way extends the MD from md_1.cpt, and the output is appended to the original output files, e.g. the log and xtc files. Very handy when a simulation ends abnormally or the cluster imposes a wall-time limit! http://www.gromacs.org/Documentation/How-tos/Extending_Simulations

It is possible to extend or continue a simulation that has completed, and even one that has crashed (see Doing Restarts). This is actually good practice for handling longer simulations, to reduce the time lost to crashes of the computer(s) being used. It is only possible to continue seamlessly from a checkpoint file, i.e. from the point at which the last full-precision coordinates and velocities are available for the system. Even though some coordinate file formats can contain velocities (e.g. the .gro format), these are not precise enough for an exact restart. Therefore, you have to ensure that full-precision data is written at sufficiently short intervals (several times per day?) to avoid losing too much time to crashes. How to achieve this varies with the version of GROMACS you are using.

Version 4 and Newer

A simulation that has terminated, but not completed (e.g. because of queue limits, power failure, or the use of the -maxh option of mdrun) can be continued without needing to use tpbconv (which is called gmx convert-tpr as of version 5.0). You may or may not wish to use -append in your mdrun command in this case. Otherwise, a simulation that has completed can be extended using tpbconv, mdrun, and checkpoint files (.cpt). First, the number of steps or the time has to be changed in the .tpr file; then the simulation is continued from the last checkpoint with mdrun. This will produce a simulation that is the same as if a continuous run had been made (but see reproducibility for more discussion).

tpbconv -s previous.tpr -extend timetoextendby -o next.tpr
mdrun -s next.tpr -cpi previous.cpt

You might want to use the -append option of mdrun to append the new output to the old files. Note that this will only work when the old output files have not been modified by the user. Appending is the default behavior as of version 4.5. If you would like to change the default filenames while running a lengthy simulation in manageable parts, then cyclically running commands such as the following will work (with suitable values for name and time):

mdrun -deffnm ${name} -cpi ${name} -append
tpbconv -s ${name} -o ${name}_new.tpr -extend ${time}
mv ${name}_new.tpr ${name}.tpr

If you felt the need to archive your checkpoint and run input files, then you could do that, too. If you used -noappend, then mdrun will add numerical suffixes to a series of files based on your name, just as described in mdrun -h.

When running in a queuing system, it is useful to set the number of steps you want for the total simulation with grompp or tpbconv and use the -maxh option of mdrun to gracefully stop the simulation before the queue time ends. With this procedure you can simply continue by letting mdrun read checkpoint files, and no other tools are required. However, if your queuing system permits job suspension, the -maxh mechanism will be unaware of the time spent suspended, and you may simulate for less wall time than you expect. The time can also be extended using the -until and -nsteps options of tpbconv.
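For GROMACS 5.x and later, the same extend-and-continue cycle uses gmx convert-tpr instead of tpbconv. A minimal sketch, reusing the md_1 file names from the command at the top of this note; the 10 ns extension is just an example value:

# extend the finished run by 10 ns (10000 ps), then continue from the last checkpoint
gmx convert-tpr -s md_1.tpr -extend 10000 -o md_1_ext.tpr
gmx mdrun -deffnm md_1 -s md_1_ext.tpr -cpi md_1.cpt -maxh 24 -append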
A simulation can also be continued without the checkpoint file. The continuation will not be binary identical and will have small errors that are negligible for most situations. The reason for the errors is that the trajectory and energy files do not store all the state variables of the thermostats and barostats. In this case you must use the version 3.3.3 procedure below.

Changing .mdp file options

If you wish/need to change .mdp file options, then either

grompp -f new.mdp -c old.tpr -o new.tpr -t old.cpt
mdrun -s new.tpr

or

grompp -f new.mdp -c old.tpr -o new.tpr
mdrun -s new.tpr -cpi old.cpt

should work. The former is necessary under GROMACS 4.x if the thermodynamic ensemble has changed. (Someone said: "If your old.cpt is for a run that has finished, then use tpbconv -extend after grompp and before mdrun." But mabraham disagrees: whether a run has finished is judged by the contents of the .cpt in the context of the .tpr, so if the latter is changed, the run isn't finished.)
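Whether mdrun considers the run "finished" can indeed be checked by comparing the step stored in the checkpoint against the nsteps in the (possibly re-grompp'ed or extended) .tpr. A small sketch, assuming GROMACS 5+ tool names (on 4.x the equivalent tool is gmxdump):

# step reached according to the checkpoint
gmx dump -cp old.cpt | grep -m 1 step
# total number of steps requested by the run input file
gmx dump -s new.tpr | grep -m 1 nsteps
# if the checkpoint step is still below nsteps, mdrun -cpi will continue the run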
Recently I installed an open-source molecular dynamics simulation package, GROMACS, on my desktop and laptop. Since I ran into some trouble during installation and found that some settings strongly affect the final performance, I decided to write it down.

For Ubuntu 14.04 64-bit users:

Step 1: install g++
sudo apt-get install g++

Step 2: install cmake
Here are the instructions for installing cmake on Ubuntu: http://www.cmake.org/cmake/help/install.html
Note that GROMACS requires cmake version 2.8.8 or higher.

Step 3: install FFTW
The official installation guide mentions that you can use cmake -DGMX_BUILD_OWN_FFTW=ON to download and build FFTW from source automatically, and I strongly recommend installing FFTW this way instead of configuring and building it manually.

To be continued
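A minimal build sketch that puts these steps together; the tarball name, version, and install prefix are placeholders, and only the -DGMX_BUILD_OWN_FFTW=ON flag comes from the official guide mentioned above:

tar xzf gromacs-x.y.z.tar.gz && cd gromacs-x.y.z    # replace x.y.z with the version you downloaded
mkdir build && cd build
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs
make -j4
make install
source $HOME/gromacs/bin/GMXRC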
From the Gromacs community:

We've finally got the first beta release of GROMACS 4.6 ready for you to try out! We've put a lot of very hard work into it, and we hope you'll like the good things we've done. Things won't be perfect yet, so we'll be looking forward to your help finding the things we haven't done well enough yet! Remember, if you want the big performance gains that will be available in 4.6, then you'll want to know that things build and work well on your hardware, and the best way of doing that is helping us over the next few weeks. At the same time, we discourage you from doing work with this code whose scientific reliability you need to trust - this is very much a draft version of the software!

You can find the manual here ftp://ftp.gromacs.org/pub/manual/gromacs-manual-4.6-beta1.pdf and the source code here ftp://ftp.gromacs.org/pub/gromacs/gromacs-4.6-beta1.tar.gz

It would be great for us if some of you want to try out the new code on lots of different hardware and operating systems and report build problems, inconsistencies, strange or lacking documentation and, in the worst case, pure bugs. To tempt you to do so, here's a bit of a carrot corresponding to the new features:

* A brand-new native GPU implementation layer. Gromacs now does heterogeneous parallelization using both CPUs and modern NVIDIA GPUs at the same time; the GPU port also works in parallel using multiple cards in a node or multiple nodes, and it's smoking fast. There's lots of heroic work by Szilard Pall and Berk Hess here, and special thanks to NVIDIA and Mark Berger for their assistance in making this happen.
* Gromacs can now use OpenMP parallelization for better scaling inside nodes, in particular when doing the FFT part on the CPU while the GPU does the normal nonbonded interactions.
* Automatic load balancing between direct-space and PME nodes, and lots of improvements in domain decomposition load balancing and scaling.
* We have a brand-new set of classical nonbonded interaction kernels, and Gromacs can now use either SSE2, SSE4.1, 128-bit AVX with FMA support (AMD) or 256-bit AVX (Intel), all of them in both single and double precision. The performance difference depends on your system and parallelization, but it is quite large in many cases - we have seen 40% improvement on ion channels running on modern AMD machines! Did we mention that the classical C kernels are faster too, since we can now do force-only interactions for most steps?
* There are new kernels using analytical switch/shift functions that are quite a bit faster, and a new CPU implementation of Verlet kernels that guarantees buffered interactions (no atoms drifting in/out of the neighbor-list range) and conserves energy extremely well.
* There is a large new module for advanced free energy calculations, thanks to Michael Shirts. Trust us, you need the full manual to decipher all the possibilities…
* Gromacs has switched completely to CMake for configuration and building. To be honest, we do expect some hiccups from this, but it has enabled us to provide much more automation and advanced features as part of the setup - and Gromacs now works on Windows out of the box. Please test as many parts of the build system as you can!
* All raw assembly has been replaced by machine intrinsics in C. This does wonders for readability, but it means the compiler and compiler flags matter. On x86, you will typically get 5-10% better performance from icc than gcc.

Happy simulating!
The GROMACS development team
GPCRs are among the most important membrane proteins, and their MD simulation is a very hot topic nowadays. However, assembling the protein/membrane system and choosing the force field for the whole system are two critical and difficult tasks in GPCR simulations.

1. Assembling the protein/membrane system

Many tools are available for protein/membrane building, including:
VMD (http://www.ks.uiuc.edu/Research/vmd/)
CHARMM-GUI (http://www.charmm-gui.org/)
Desmond System Builder (http://www.deshawresearch.com/resources_desmond.html)
g_membed (http://wwwuser.gwdg.de/~ggroenh/membed.html)
InflateGro (http://moose.bio.ucalgary.ca/index.php?page=Translate_lipdis)

VMD supports POPC and POPE membrane building well with the CHARMM27 and CHARMM36 force fields. However, one has to add additional solvent and ions into the system with a Tcl script from the VMD tutorial, and another script is needed to merge the membrane and the protein. Moreover, since the lipids are not pre-equilibrated, it is necessary to equilibrate the whole system for at least 20 ns before the MD production run.

CHARMM-GUI aims to provide a more convenient route to NAMD or CHARMM simulations. It can generate an embedded protein/membrane system in the OPM position through a web interface, and it can even help to assign CHARMM CGenFF parameters for the ligand. However, it also has some obvious weaknesses: there are so many atom clashes between lipids that one can hardly believe the lipids are pre-equilibrated, as the authors claim; and the input files provided by CHARMM-GUI are not good enough for membrane protein simulation, since the whole equilibration stage is no more than 3 ns, and obvious GPCR helix movement is often observed within such a short time, which should not be expected on this time scale. So one has to improve the protocol oneself.

The Desmond System Builder tool is incorporated in the Schrodinger Maestro GUI and provides a very friendly interface. It can build an OPM-based position for the protein/membrane system very easily with a few clicks, and it can also assign the CHARMM36 force field with the VIPARR tool in Desmond.

Both g_membed and InflateGro are tools within GROMACS, and both can embed the protein into a pre-equilibrated membrane system, which saves a lot of equilibration time. Although g_membed is a little more difficult to use than InflateGro, its output seems to be much better.

2. Force field

It is said that CHARMM36 is the best force field for lipids and currently the only one that can reproduce the lipid gel-phase properties. However, the recent Lipid11 force field from the latest Amber 12 is also claimed to be as good as CHARMM36, although the related paper is still under review. Both all-atom force fields are good options for protein/membrane simulations. There are also other force fields and methods, including the GROMOS force field, which is united-atom, and coarse-grained MD, which uses dummy spheres to represent groups of atoms and accelerates the simulation dramatically (a 24-core workstation can even achieve up to several microseconds per day). The downside of those methods is also obvious: you get what you pay for.

3. Topology for the ligand

This is always a headache for many people in MD simulation. In Desmond, the ligand topology can be recognized automatically with the OPLS_2005 force field. However, OPLS_2005 is only good enough for tens of ns of MD; it is rather poor when submitted to microsecond MD.
If we would like to use the CHARMM36 force field for a protein/membrane system bound with a ligand, we can generate the topology from SwissParam (http://swissparam.ch/) and convert it into Desmond VIPARR format with a script from Desmond 2012 ($SCHRODINGER/desmond-v31023/data/viparr/converters). This tool sometimes does not work well; it is said that it will be fully supported in the next version of Desmond, which should be released at the beginning of next year. It is also worth mentioning that the CHARMM CGenFF is reachable in Desmond 2012 ($SCHRODINGER/desmond-v31023/data/viparr/ff/cgenff_base_v2b7); one can build the topology manually from those molecular templates if the target molecule is not too complicated. CHARMM CGenFF parameters can also be obtained from https://www.paramchem.org/. If the ligand structure is not too complicated, this may work well; however, it sometimes does not recognize ligand bond orders and the like correctly, in which case one has to go to the CHARMM forum for help. Amber GAFF is definitely extremely attractive and the primary choice for a ligand-bound system. Now that the latest Lipid11 force field in Amber 12 is out, Amber should be the first choice for many people, especially those working with ligands.

4. Efficiency

Although computer hardware develops so fast nowadays that CPUs are updated almost every year, the efficiency of MD simulations does not seem to improve that much. For instance, no matter how many CPUs we use, for a typical membrane protein simulation (132 lipids, a 300-residue protein, about 50,000 atoms in all) with an all-atom force field and a typical cutoff (9-10 Å) with PME: GROMACS can reach up to 20 ns/day (double precision), Amber 12 ns/day, and NAMD 4 ns/day. Desmond is an exception, since its parallelization is far superior to any other MD package; it can reach 100 ns/day with 512 CPUs. It can even reach several microseconds per day on Anton with an all-atom force field. It would be wise to use either Desmond or GROMACS if one would like to run hundreds of ns with an all-atom force field. Of course, the CPU performance of Amber and NAMD is also acceptable if the simulation only lasts for tens of ns.

GPU technology has developed quite fast in the last one or two years, and it also brings exciting news for computational work, especially NVIDIA CUDA acceleration. For instance, with two GTX 590 cards, Amber 12 can reach up to 20 ns/day, while a 24-core i7 3.6 GHz machine can only get 4 ns/day (with the Intel compiler; GNU is even much slower). NAMD, on the other hand, can get 5 ns/day with CUDA acceleration, while the same 24-core i7 3.6 GHz machine only gets 0.5 ns/day. Currently, GPU calculation is not supported in Desmond, but this feature is expected to be available in the next version.
1. Download the source code:
git clone git://git.gromacs.org/gromacs.git

2. Install the OpenMM and CUDA libraries into
/soft/gromacs-gpu/openmm (download from https://simtk.org/project/xml/downloads.xml?group_id=506)
/soft/gromacs-gpu/cuda
and configure .cshrc as follows:
setenv OPENMM_ROOT_DIR /soft/openmm
setenv LD_LIBRARY_PATH /soft/gromacs4.6-gpu/cuda/lib64:/soft/gromacs4.6-gpu/cuda/lib:/soft/gromacs-gpu/openmm/lib

3. Compile the GROMACS CUDA build from source:
mkdir cuda
cd cuda
cmake ../. -DGMX_OPENMM=ON -DGMX_THREADS=OFF -DCMAKE_INSTALL_PREFIX=/soft/gromacs4.6-gpu
make -j24
make install

4. Configure .cshrc:
source /soft/gromacs4.6-gpu/bin/GMXRC.csh

Done!
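As a quick sanity check after installation, you can build a small .tpr and launch the GPU binary. This is only a rough sketch: the binary name mdrun-gpu is an assumption carried over from the 4.5.x OpenMM notes further below, and md.mdp/conf.gro/topol.top are placeholder inputs:

source /soft/gromacs4.6-gpu/bin/GMXRC.csh
grompp -f md.mdp -c conf.gro -p topol.top -o md.tpr
mdrun-gpu -deffnm md    # adjust the binary name if your build installs a different one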
The rapid development of computing technology and the continual improvement of algorithms and software now make it possible to simulate relatively large biological macromolecular systems and multimeric proteins on the ordinary computers we have at hand, and this trend will only become more pronounced. In particular, the arrival of multi-core CPUs and the increasing large-scale parallelization of molecular dynamics mean that people doing biological research can run targeted simulations of proteins they are interested in on their own PCs and combine them closely with wet-lab experiments, which is a very interesting and fun thing to do.

Even so, many biomolecular systems are still huge. For example, one of my systems is the adenovirus hexon trimer. I want to run temperature-controlled MD on it to dynamically analyze its overall epitope composition, its heat-denaturation mechanism, and the immunogenic consequences of heat denaturation. The system contains about 940*3 = 2820 amino acid residues; adding a cubic water box brings the total to roughly 200,000 atoms. On a quad-core QX9650 CPU under 64-bit Linux, GROMACS needs about 27 days for every 10 ns of simulation, which is painfully slow; for an overall design of about 500 ns this kind of computing power is unbearable.

GPU-accelerated GROMACS brings very exciting news: the official claim is that with NVIDIA's CUDA technology, MD simulations can run more than ten times faster than on a single CPU. Below is my complete experience of running MD on an NVIDIA GTX460 with 2 GB of memory, shared here for everyone.

Day 1:

According to the GROMACS website (www.gromacs.org/gpu), the NVIDIA GTX460 with its large 2 GB of memory can handle systems of around 200,000 atoms; my system happens to be about 190,000.

Hardware: Dell T3400 workstation, X38 chipset, QX9650 CPU, 4 GB ECC RAM, GTX460 with 2 GB.
Software: Ubuntu 9.10 64-bit, CUDA 3.1, OpenMM 2.0, FFTW 3.2.2, CMake, GROMACS 4.5.1.

Following the official instructions, I installed the CPU version of GROMACS 4.5.1 separately (built with CMake), then downloaded the pre-built mdrun-gpu beta2 for Ubuntu 9.10 64-bit, set the environment variables, and ran it. But it failed with:

Fatal error: reading tpx file (md.tpr) version 73 with version 71 program
For more information and tips for troubleshooting, please check the GROMACS website at http://www.gromacs.org/Documentation/Errors

which roughly means that the pre-built mdrun-gpu cannot run a .tpr file generated by the 4.5.1 grompp program.

Day 2:

I solved the problem above by building from source. The pre-built mdrun-gpu carries a different version number than the 4.5.1 tools, hence the incompatibility. Following the instructions, the 4.5.1 mdrun-gpu built successfully:
——————————————————————————
export OPENMM_ROOT_DIR=path_to_custom_openmm_installation
cmake -DGMX_OPENMM=ON
make mdrun
make install-mdrun
——————————————————————————
But then a new problem appeared at run time:

mdrun-gpu: error while loading shared libraries: libopenmm_api_wrapper.so: cannot open shared object file: No such file or directory

Very strange! The environment variables were set correctly, yet libopenmm_api_wrapper.so cannot be found anywhere in the openmm directory.

Day 3:

I switched the operating system to RHEL 5.5 and, using the same installation procedure, the problem above went away. I don't understand the reason, but I assume it could also be solved on Ubuntu (leaving that aside for now).

Day 4:

A summary of the problems I ran into and their solutions (see the .mdp sketch at the end of this post):

1. Version mismatch
——————————————————————————
Fatal error: reading tpx file (md.tpr) version 73 with version 71 program
For more information and tips for troubleshooting, please check the GROMACS website at http://www.gromacs.org/Documentation/Errors
This means the versions are incompatible.

2. OpenMM does not support multiple temperature-coupling groups
——————————————————————————
Fatal error: OpenMM does not support multiple temperature coupling groups.
For more information and tips for troubleshooting, please check the GROMACS website at http://www.gromacs.org/Documentation/Errors

3. The old .mdp settings cannot be reused as-is
——————————————————————————
Fatal error: OpenMM uses a single cutoff for both Coulomb and VdW interactions. Please set rcoulomb equal to rvdw.
For more information and tips for troubleshooting, please check the GROMACS website at http://www.gromacs.org/Documentation/Errors

4. GPU-accelerated GROMACS currently supports only the AMBER and CHARMM force fields
——————————————————————————
Fatal error: The combination rules of the used force-field do not match the one supported by OpenMM: sigma_ij = (sigma_i + sigma_j)/2, eps_ij = sqrt(eps_i * eps_j). Switch to a force-field that uses these rules in order to simulate this system using OpenMM.

5. GPU-accelerated GROMACS does not support the GROMOS96 interaction types; again a force-field problem
——————————————————————————
Fatal error: OpenMM does not support (some) of the provided interaction type(s) (G96Angle)
For more information and tips for troubleshooting, please check the GROMACS website at http://www.gromacs.org/Documentation/Errors

6. On Ubuntu 9.10, building GROMACS 4.5.1 with cmake leads to the missing libopenmm_api_wrapper.so problem; switching to RHEL 5.5 solves it
——————————————————————————
error while loading shared libraries: libopenmm_api_wrapper.so: cannot open shared object file: No such file or directory

Day 5:

mdrun-gpu is finally running. The .mdp file is the one provided with the official benchmarks, but there are still a few notes and warnings:

It is also possible to optimize the transforms for the current problem by performing some calculations at the start of the run.
This is not done by default since it takes a couple of minutes, but for large runs it will save time. Turn it on by specifying optimize_fft = yes

WARNING: OpenMM does not support leap-frog, will use velocity-verlet integrator.
WARNING: OpenMM supports only Andersen thermostat with the md/md-vv/md-vv-avek integrators.

Pre-simulation ~15s memtest in progress...done, no errors detected
starting mdrun 'Protein in water'
1000000 steps, 2000.0 ps.

               NODE (s)   Real (s)      (%)
       Time:     33.080     99.577     33.2
               (Mnbf/s)   (MFlops)   (ns/day)  (hour/ns)
Performance:      0.000      0.074     47.609      0.504

gcq#330: "Go back to the rock from under which you came" (Fiona Apple)
————————————————————————————————
This final Performance table is a bit hard to interpret. Judging purely from the (ns/day) column, the performance is five times that of the quad-core CPU, but in real runs it is only about twice as fast. The (MFlops) column is, strangely, only 0.074, while the CPU run reports about 12 GFlops. Overall, GPU-accelerated GROMACS gives at least a 2x speedup over a traditional quad-core CPU; for implicit-solvent models the official site claims more than 10x. To be continued…

Day 6:

For the 190,000-atom system I set up a 10 ns run. The reported performance is 5 ns/day, so in theory it should finish in two days, but in fact it will not finish until the 28th (it started at 1 pm on October 18), which clearly does not match the reported performance:

5000000 steps, 10000.0 ps.
step 417300, will finish Thu Oct 28 10:39:59 2010    (started October 18; predicted to finish October 28)
Received the TERM signal, stopping at the next step
step 417378, will finish Thu Oct 28 10:39:46 2010
Post-simulation ~15s memtest in progress...done, no errors detected

               NODE (s)   Real (s)      (%)
       Time:  13633.960  71173.931     19.2
                          3h47:13
               (Mnbf/s)   (MFlops)   (ns/day)  (hour/ns)
Performance:      0.000      0.003      5.290      4.537

gcq#47: "I Am Testing Your Grey Matter" (Red Hot Chili Peppers)

Day 7:

The same system with the same settings, run on the quad-core QX9650 CPU. Its performance is weaker than the GTX460, but it would only take about three days longer:
——————————————————————————————
Back Off! I just backed up md.trr to ./#md.trr.1#
Back Off! I just backed up md.edr to ./#md.edr.1#
WARNING: This run will generate roughly 3924 Mb of data
starting mdrun 'Good gRace! Old Maple Actually Chews Slate in water'
5000000 steps, 10000.0 ps.
step 0
NOTE: Turning on dynamic load balancing
step 500, will finish Mon Nov 1 09:57:48 2010 vol 0.74 imb F 2%    (started on the morning of October 19; predicted to finish November 1)
Received the TERM signal, stopping at the next NS step
step 550, will finish Mon Nov 1 10:08:34 2010
Average load imbalance: 2.4 %
Part of the total run time spent waiting due to load imbalance: 1.1 %
Steps where the load balancing was limited by -rdd, -rcon and/or -dds: Y 0 %
Parallel run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
       Time:    123.856    123.856    100.0
                             2:03
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:    156.395     11.835      0.769     31.220

gcq#358: "Now it's filled with hundreds and hundreds of chemicals" (Midlake)

Day 8:

Running the benchmark from the official site: the GTX460 reaches 102 ns/day, not nearly as far behind the C2050 as I had imagined!

Pre-simulation ~15s memtest in progress...done, no errors detected
starting mdrun 'Protein'
-1 steps, infinite ps.
step 285000 performance: 102.1 ns/day
Received the TERM signal, stopping at the next step
step 285028 performance: 102.1 ns/day
Post-simulation ~15s memtest in progress...done, no errors detected

               NODE (s)   Real (s)      (%)
       Time:    481.290    482.224     99.8
                             8:01
               (Mnbf/s)   (MFlops)   (ns/day)  (hour/ns)
Performance:      0.000      0.002    102.335      0.235

Summary: this new generation of GPU-accelerated GROMACS shows the promising future of GPUs in molecular dynamics, but it is not yet mature. From the tests above we can see that for implicit-solvent MD the GPU-accelerated performance is at least ten times that of a traditional quad-core CPU, but with explicit solvent the GPU acceleration is far less impressive. Most importantly, the current GROMACS 4.5.1 has many restrictions on GPU-accelerated MD: limited force-field support, many features not yet implemented, and poor reproducibility compared with CPU runs. Still, given long enough simulation times it will produce statistically meaningful trajectories with reasonable reproducibility. For more information see www.gromacs.org/GPU
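Putting the Day 4 restrictions together, here is a rough .mdp sketch aimed at keeping mdrun-gpu (OpenMM) happy. This is not an authoritative recipe: the numerical values are placeholders, and only the constraints named in the fatal errors above (single cutoff with rcoulomb = rvdw, a single temperature-coupling group, AMBER/CHARMM-style combination rules, no GROMOS96 interaction types) are taken from this post:

cat > gpu.mdp <<'EOF'
integrator   = md          ; OpenMM will warn and switch to velocity Verlet
dt           = 0.002
nsteps       = 5000000
tcoupl       = berendsen   ; placeholder; OpenMM applies its own Andersen thermostat
tc-grps      = System      ; one temperature-coupling group only (error 2 above)
tau-t        = 1.0
ref-t        = 300
coulombtype  = PME
rlist        = 1.0
rcoulomb     = 1.0         ; OpenMM uses a single cutoff:
rvdw         = 1.0         ; rcoulomb must equal rvdw (error 3 above)
EOF
grompp -f gpu.mdp -c conf.gro -p topol.top -o md.tpr    # use an AMBER or CHARMM topology (errors 4 and 5 above)
mdrun-gpu -deffnm md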
Building Gromacs 4.0.7 in parallel with QM/MM (MOPAC)

Platform: SUSE Linux Enterprise Desktop 10 SP3, gcc 4.1.2, mpich2-1.2.1p1, ifort 10.1, fftw 3.2.2

Unpack mpich2-1.2.1p1.tar.gz, enter the directory, and run:
./configure
make
make install

Then run:
touch /etc/mpd.conf
chmod 700 /etc/mpd.conf
and add the following to mpd.conf:
secretword=secretword (for example, secretword=ltwd)

Unpack fftw 3.2.2, enter the directory, and install it under /soft/fftw:
./configure --enable-float --enable-threads
make
make install

Copy libmopac.a into /soft/fftw/lib and /lib.

Set the environment variables:
setenv CPPFLAGS -I/soft/fftw/include
setenv LDFLAGS -L/soft/fftw/lib

Unpack gromacs 4.0.7, enter the directory:
./configure --prefix=/soft/gromacs --enable-mpi --enable-fortran --with-qmmm-mopac --enable-shared
make
make install

Set the environment variables:
setenv LIBS -lmopac
setenv LD_LIBRARY_PATH /soft/gromacs/lib
source /soft/gromacs/bin/completion.csh
set path=(/soft/gromacs/bin $path)

Other configure options:

CC          C compiler command (usually gcc)
CFLAGS      C compiler flags (usually -O3)
LDFLAGS     linker flags, e.g. -L<lib dir> if you have libraries in a nonstandard directory <lib dir>
LIBS        libraries to pass to the linker, e.g. -llibrary (no quotes needed)
CPPFLAGS    C/C++/Objective C preprocessor flags, e.g. -I<include dir> if you have headers in a nonstandard directory <include dir>
F77         Fortran 77 compiler command (usually gfortran or ifort)
FFLAGS      Fortran 77 compiler flags (usually -O3)
CCAS        assembler compiler command (defaults to CC)
CCASFLAGS   assembler compiler flags (defaults to CFLAGS)
CPP         C preprocessor
CXX         C++ compiler command (usually g++)
CXXFLAGS    C++ compiler flags
CXXCPP      C++ preprocessor
XMKMF       Path to xmkmf, Makefile generator for X Window System

Optional Features:
--disable-FEATURE             do not include FEATURE (same as --enable-FEATURE=no)
--enable-FEATURE              include FEATURE
--enable-shared               build shared libraries
--disable-float               use double instead of single precision
--enable-double               same effect as --disable-float
--enable-fortran              use fortran (default on sgi, ibm, sun, axp)
--enable-mpi                  compile for parallel runs using MPI
--disable-threads             don't try to use multithreading
--enable-mpi-environment=VAR  only start parallel runs when VAR is set
--disable-ia32-3dnow          don't build 3DNow! assembly loops on ia32
--disable-ia32-sse            don't build SSE/SSE2 assembly loops on ia32
--disable-x86-64-sse          don't build SSE assembly loops on X86_64
--disable-ppc-altivec         don't build Altivec loops on PowerPC
--disable-ia64-asm            don't build assembly loops on ia64
--disable-cpu-optimization    no detection or tuning flags for cpu version
--disable-software-sqrt       no software 1/sqrt (disabled on sgi, ibm, ia64)
--enable-prefetch-forces      prefetch forces in innerloops
--enable-all-static           make completely static binaries
--disable-dependency-tracking speeds up one-time build
--enable-dependency-tracking  do not reject slow dependency extractors
--enable-static               build static libraries
--enable-fast-install         optimize for fast installation
--disable-libtool-lock        avoid locking (might break parallel builds)
--disable-largefile           omit support for large files

Optional Packages:
--with-PACKAGE                use PACKAGE
--without-PACKAGE             do not use PACKAGE (same as --with-PACKAGE=no)
--with-fft=                   FFT library to use; fftw3 is the default, fftpack is built in
--with-external-blas          use system BLAS library (add to LIBS); automatic on OS X
--with-external-lapack        use system LAPACK library (add to LIBS); automatic on OS X
--without-qmmm-gaussian       Interface to mod.
                              Gaussian0x for QM-MM (see website)
--with-qmmm-gamess            use modified Gamess-UK for QM-MM (see website)
--with-qmmm-mopac             use modified Mopac 7 for QM-MM (see website)
--with-gnu-ld                 assume the C compiler uses GNU ld
--with-pic                    try to use only PIC/non-PIC objects
--with-tags                   include additional configurations
--with-dmalloc                use dmalloc, as in http://www.dmalloc.com/dmalloc.tar.gz
--with-x                      use the X Window System
--with-motif-includes=DIR     Motif include files are in DIR
--with-motif-libraries=DIR    Motif libraries are in DIR
--without-gsl                 do not link to the GNU Scientific Library; prevents certain analysis tools from being built
--with-xml                    link to the xml2 library, experimental
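To actually use the MOPAC interface built above, the QM region is selected in the .mdp file. A minimal sketch, assuming the standard GROMACS 4.x QM/MM keywords (QMMM, QMMM-grps, QMmethod, ...) and a hypothetical index group named QMatoms; see the QM/MM pages on the GROMACS website for the authoritative option list:

cat >> md.mdp <<'EOF'
; QM/MM via the modified MOPAC 7 interface (semi-empirical methods)
QMMM       = yes
QMMM-grps  = QMatoms    ; hypothetical index group holding the QM atoms
QMmethod   = AM1        ; MOPAC handles semi-empirical methods such as AM1/PM3
QMcharge   = 0
QMmult     = 1
EOF
grompp -f md.mdp -c conf.gro -p topol.top -n index.ndx -o qmmm.tpr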