
Tag: MPI

Related forum posts

Forum | Author | Replies/Views | Last post

No related content

Related blog entries

Installing and using parallel MrBayes on Linux (Mpi_MrBayes)
LovePotato 2018-10-27 09:38
Preface: This is a record of my own installation and use of parallel MrBayes. It is actually quite simple; friends who need it are welcome to use it for reference.

Requirements: a Linux operating system.

Steps:
1. Install MPI
2. Install MrBayes

1. Installing MPI

The commands below can be used as-is without modification; the make/install steps may require root privileges (sudo).

  wget https://www.open-mpi.org/software/ompi/v1.10/downloads/openmpi-1.10.1.tar.gz
  tar -xzvf ./openmpi-1.10.1.tar.gz
  cd openmpi-1.10.1
  ./configure --prefix=/usr/local/mpi
  make -j all
  make install
  ## These two lines are needed in every session before MPI MrBayes will run
  export PATH=/usr/local/mpi/bin:$PATH
  export LD_LIBRARY_PATH=/usr/local/mpi/lib:$LD_LIBRARY_PATH

If running those two lines every time before MrBayes feels tedious, you can also add them to the default startup environment (optional). This step requires root privileges.

  sudo vi /etc/profile
  # add the following lines
  export PATH=/usr/local/mpi/bin:$PATH
  export LD_LIBRARY_PATH=/usr/local/mpi/lib:$LD_LIBRARY_PATH

Save the file after adding the lines. MPI is now installed and configured.

2. Installing MrBayes

I used mrbayes-3.2.6.tar, which can be downloaded from the official website. I will skip the details of unpacking and start from configuration (a make step follows the configure):

  cd src
  autoconf
  ./configure --with-beagle=no --enable-mpi=yes
  make

MrBayes is now built as well; let's test it!

  cd example
  mpirun -np 4 ../src/mb ./codon.nex

The test runs fine with 4 processes in parallel, so it is ready to use. Good luck with your research!
Personal category: Software tutorials | 11961 views | 0 comments
Fixing the "cannot connect to local mpd" error
lemoncyb 2015-6-2 20:14
Today, while running a program with Intel MPI, I got this error:

  mpiexec_cooler: cannot connect to local mpd (/tmp/mpd2.console_cuiyb); possible causes:
  1. no mpd is running on this host
  2. an mpd is running but was started without a "console" (-n option)

In other words, mpd has not been started, or was not started correctly. So what is mpd? mpd stands for multipurpose daemon; it is the process management system in the Intel MPI library used to launch parallel jobs. Before starting a job, an mpd daemon must first be started on each node so that the nodes link up into a ring.

That is, before executing a program with Intel MPI, start mpd first with a command of the form

  $ mpdboot ...

If the job runs on a single node, simply running $ mpdboot is enough.
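For reference, a typical multi-node start-up sequence looks roughly like the following (a sketch only: the host file name, node names and process count are illustrative and not from the original post):

  $ cat mpd.hosts              # one node name per line
  node01
  node02
  $ mpdboot -n 2 -f mpd.hosts  # start the mpd ring on 2 nodes
  $ mpdtrace                   # list the nodes that joined the ring
  $ mpiexec -np 8 ./a.out      # launch the MPI job
  $ mpdallexit                 # shut the ring down when finished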
Personal category: MPI | 9229 views | 0 comments
A brief analysis of Hadoop and MPI
xyxiao 2015-4-20 19:22
Hadoop is a distributed-system infrastructure developed by the Apache Foundation. Users can develop distributed programs without understanding the low-level details of distribution, making full use of a cluster's power for high-speed computation and storage. Simply put, Hadoop is a software platform that makes it easier to develop and run programs that process massive amounts of data.

Hadoop implements a distributed file system, the Hadoop Distributed File System (HDFS). HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suited to applications with very large data sets. HDFS relaxes some POSIX requirements so that data in the file system can be accessed in a streaming fashion.

Hadoop is a distributed computing infrastructure consisting of a family of related subprojects, all hosted by the Apache Software Foundation (ASF), which supports these open-source community projects. The best-known parts of Hadoop are MapReduce and the distributed file system (HDFS); the other subprojects provide additional functionality or higher-level abstractions on top of the core. A brief overview of the subprojects:
- Core: distributed-system and general I/O components and interfaces (serialization, Java remote procedure calls, and so on).
- Avro: a data serialization system supporting cross-language procedure calls and persistent data storage.
- MapReduce: a distributed data-processing model and runtime environment built on clusters of inexpensive PCs.
- HDFS: a distributed file system built on clusters of inexpensive PCs.
- Pig: a data-flow language and runtime environment for processing massive data sets; Pig runs on HDFS and MapReduce.
- HBase: a distributed, column-oriented database. HBase uses HDFS as its underlying storage and supports both batch computation with MapReduce and random queries.
- ZooKeeper: a distributed, highly available coordination service. ZooKeeper provides primitives such as distributed locks that can be used to build distributed applications.
- Hive: a distributed data warehouse. Hive stores data in HDFS and provides an SQL-like query language (translated into MapReduce jobs).
- Chukwa: a distributed data collection and analysis system. It stores data in HDFS and uses MapReduce to produce analysis reports.

MPI works along similar lines. In message-passing-library parallel programming, the program executed by a group of processes is code written in a standard sequential language plus calls to library functions for sending and receiving messages. MPI (Message Passing Interface) is a message-passing interface released in May 1994. It is in fact a standard specification of a library of message-passing functions that absorbs the strengths of many earlier message-passing systems. It is one of the most popular parallel programming environments internationally, especially as a programming paradigm for scalable distributed-memory parallel computers, workstation networks and clusters. MPI has many advantages: portability and ease of use, complete asynchronous communication functionality, and a formal, detailed, precise definition. It thus provided the necessary conditions for the growth of the parallel software industry.

In the MPI programming model, a computation consists of one or more processes that communicate by calling library routines to send and receive messages. In the vast majority of MPI implementations, a fixed set of processes is created at program initialization, normally one process per processor. These processes may execute the same program or different programs (referred to as single program, multiple data (SPMD) or multiple program, multiple data (MPMD) mode, respectively). Communication between processes can be point-to-point or collective.

MPI only provides the programmer with a parallel environment library; the programmer achieves the intended parallelism by calling MPI's library routines. MPI provides C and Fortran programming interfaces.

MPI is a complex system comprising 129 functions (according to the MPI standard released in 1994). In fact the standard revised in 1997, known as MPI-2, has more than 200, and about 30 are in common use today. Nevertheless, a complete MPI program that solves many problems can be written using only the 6 most basic of these functions.

The two, however, differ fundamentally. In my view, different needs call for different platforms: MPI's strengths (for example message passing and information exchange between worker nodes) are something Hadoop does not have, at least not yet, while Hadoop's strengths are likewise what MPI lacks.
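To make the collective style of communication mentioned above concrete, here is a minimal sketch in Fortran (illustrative only; the work size of 100 and the per-rank formula are arbitrary): rank 0 broadcasts a value to all processes, and the partial results are summed back onto rank 0 with MPI_Reduce.

  program collectives
    use mpi
    implicit none
    integer :: ierr, rank, nprocs, n
    double precision :: part, total

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    if (rank == 0) n = 100                                        ! work size chosen on the root
    call MPI_Bcast(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)    ! one-to-many
    part = dble(rank + 1) * n                                     ! each rank computes its own share
    call MPI_Reduce(part, total, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
                    0, MPI_COMM_WORLD, ierr)                      ! many-to-one
    if (rank == 0) print *, 'sum over all ranks =', total

    call MPI_Finalize(ierr)
  end program collectives

Compile and run with, for example, mpif90 collectives.f90 -o collectives and then mpirun -np 4 ./collectives.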
6491 views | 0 comments
Notes from learning the MPI programming framework
businessman 2013-3-19 21:23
March 19, 2013
________________________________________________________________________________________
Epigraph: A good memory is no match for a worn pen! Reading often brings moments of insight, and such insights are usually more valuable than the content of the book itself, because they are your own thinking while reading and represent your personal understanding of the material. Recording them in writing makes them easy to refer back to later.
Personal category: Study notes | 162 views | 0 comments
MPI parallel molecular simulation (basics)
hellojwx 2013-1-27 11:49
In molecular simulation, when the system is very large, computing on a single machine becomes very slow, and multi-core/multi-machine parallel computation is needed. MPI is a good way to implement such parallelism: it connects many computers over the network and distributes the work across many CPUs to achieve a speedup. The following example shows how MPI can be used in a molecular simulation to compute the energy of a system.

Suppose the system has 1,000,000 atoms. Computing the total pairwise interaction then requires about $5\times10^{11}$ evaluations, which is slow on a single machine:

  do iAtom = 1, 1000000-1
    do jAtom = iAtom + 1, 1000000
      uTot = uTot + getEn(iAtom, jAtom)
    end do
  end do

But if we have 10 CPUs, each CPU only needs to compute $\frac{1}{10}$ of the interactions, which in theory gives a tenfold speedup. Each machine only needs to run

  do iAtom = iProc + 1, 1000000-1, nProcs
    do jAtom = iAtom + 1, 1000000
      uTot = uTot + getEn(iAtom, jAtom)
    end do
  end do

where iProc is the index of the current CPU and nProcs is the total number of CPUs to be used. Once each CPU has its own partial result, MPI_Reduce is used to add everyone's results together, giving the total energy of the system.
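A self-contained sketch of this scheme (assumptions: the pair energy getEn below is a dummy placeholder for the real potential, and the atom count is reduced to 1000 so the example runs quickly; iProc and nProcs are obtained from MPI rather than hard-coded):

  program pair_energy
    use mpi
    implicit none
    integer, parameter :: nAtoms = 1000       ! reduced from 1 000 000 for this sketch
    integer :: ierr, iProc, nProcs, iAtom, jAtom
    double precision :: uTot, uGlobal

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, iProc, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nProcs, ierr)

    uTot = 0.0d0
    do iAtom = iProc + 1, nAtoms - 1, nProcs  ! each rank takes every nProcs-th outer index
      do jAtom = iAtom + 1, nAtoms
        uTot = uTot + getEn(iAtom, jAtom)
      end do
    end do

    ! combine the partial sums on rank 0 to obtain the total energy
    call MPI_Reduce(uTot, uGlobal, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
                    0, MPI_COMM_WORLD, ierr)
    if (iProc == 0) print *, 'total energy =', uGlobal

    call MPI_Finalize(ierr)

  contains

    double precision function getEn(i, j)     ! dummy pair "energy", stands in for the real potential
      integer, intent(in) :: i, j
      getEn = 1.0d0 / dble(i + j)
    end function getEn

  end program pair_energy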
0 comments
openmp mpi mix
asksky 2012-11-6 00:02
OpenMP + MPI mixed mode is so cool.

Compile:

  mpif90 -fopenmp *.f90 -o a.out

Run:

  mpirun -np 4 ./a.out
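A minimal hybrid sketch that can be built and launched with exactly the two commands above (the file name hybrid.f90 and the thread count are arbitrary choices; set OMP_NUM_THREADS before running):

  program hybrid
    use mpi
    use omp_lib
    implicit none
    integer :: ierr, rank, nprocs, provided

    ! request "funneled" threading: only the master thread makes MPI calls
    call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    !$omp parallel
    print *, 'MPI rank', rank, 'of', nprocs, &
             ', OpenMP thread', omp_get_thread_num(), 'of', omp_get_num_threads()
    !$omp end parallel

    call MPI_Finalize(ierr)
  end program hybrid

For example, export OMP_NUM_THREADS=2, then mpif90 -fopenmp hybrid.f90 -o a.out and mpirun -np 4 ./a.out gives 4 processes with 2 threads each.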
Personal category: computer | 2788 views | 0 comments
parallel Meep installation
isping 2012-9-9 15:47
Steven G. Johnson has released Meep 1.2, so I decided to install it by compiling from source.

1. Install the BLAS and LAPACK libraries. Following the official tutorial directly produces *.a libraries; as recommended, I installed OpenBLAS instead, but found that after installation I could not proceed to the next step: OpenBLAS did not produce the LAPACK library files (probably a configuration problem, to be investigated later). The reason is that Harminv looks for libraries named libblas and liblapack, so the files need to be renamed.

2. Install harminv, Guile and libctl from source.

3. Open MPI 1.6.1 would not run after compiling and installing; it complained that libopen-pal.so.4 could not be found, so I uninstalled it and installed version 1.6 instead. In fact this can also be handled as described at http://blog.chinaunix.net/uid-7726704-id-2045241.html. Note that the gcc libraries in current Linux distributions long ago dropped the old libc.so.5; if you get an error about it, there is no need to install an old gcc version. Search on Google: there are i386 and x86-64 RPM packages that supply exactly this superseded library file. Pick the version that matches your machine and install it with rpm -ivh.

4. Then build parallel HDF5.

5. Finally install meep-mpi.

6. When installing h5utils, it complained that libhdf5.so.8 could not be found. This is because that library was placed in /usr/local/lib rather than in the standard shared-library search directories /lib or /usr/lib, so the following steps are needed (with administrator privileges):

  # cat /etc/ld.so.conf
  include ld.so.conf.d/*.conf
  # echo /usr/local/lib >> /etc/ld.so.conf
  # ldconfig

After this, programs can find the shared library at run time.
Personal category: meep | 1 view | 0 comments
[Repost] Open MPI, MPICH2
qlm2001 2012-7-18 18:54
Open MPI, MPICH2

Open MPI Information

  ompi_info --all          # show all Open MPI info
  mpif90 -showme           # show command line that would be invoked to compile the program
  mpif90 -showme:link      # show linker flags that would be supplied to the Fortran 90 compiler
  mpif90 -showme:compile   # show compiler flags that would be supplied to the Fortran 90 compiler

Installation Instructions for Open MPI

Mac OS X (via MacPorts)

  sudo port install openmpi

Mac OS X (compile from source)

1. Download openmpi-1.4.3.tar.gz from http://www.open-mpi.org/.

2. Extract the Open MPI source:

  tar zxf openmpi-1.4.3.tar.gz
  cd openmpi-1.4.3/

3. Run the configure script using the following commands. Replace '1.4.3' with the version number of the openmpi .tar.gz/.tgz file which you downloaded from the internet. Replace /opt/openmpi with the folder of your choice. Optionally, you can specify which C/C++ compiler (such as gcc/g++, which comes with Xcode) and its flags, as well as the Fortran 77/90 compiler. The '-m64' flag (gcc compilers only) specifies that Open MPI should be built for a 64-bit architecture.

  ./configure --help   # if you need to check the settings
  mkdir build/
  cd build/
  ../configure --prefix=/opt/openmpi CC=gcc CXX=g++ F77=gfortran FC=gfortran CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64 2>&1 | tee config.out
  #../configure --prefix=/opt/openmpi 2>&1 | tee config.out   # if you wish to use the default settings

4. Compile Open MPI from source using (as a user with write permissions in the build tree; the optional '-j' setting is for a parallel make):

  make -j 2>&1 | tee make.out

5. Install Open MPI with (as a user with write permissions to the install tree):

  sudo make install 2>&1 | tee install.out

6. Set up the environment with:

  export PATH="/opt/openmpi/bin:$PATH"
  export OPAL_PREFIX="/opt/openmpi"   # optional

7. Check the installation with:

  which mpicc; which mpic++; which mpif90   # check that the executables are in the correct path
  ompi_info                                 # to check only the compilers, type: ompi_info | grep compiler

8. Run your code using any of the following three options:

  mpirun --prefix /opt/openmpi -np 4 ./a.out
  /opt/openmpi/bin/mpirun -np 4 ./a.out
  add "export PATH=/opt/openmpi/bin:$PATH" to $HOME/.bash_profile (if using bash)

Additional configuration options from http://www.open-mpi.org/faq/?category=building:

If you need to specify the compiler bindings, change the ./configure option above as follows:

  ./configure --prefix=/opt/openmpi CC=icc CXX=icpc F77=ifort FC=ifort 2>&1 | tee config.out           # for Intel compilers
  ./configure --prefix=/opt/openmpi CC=gcc CXX=g++ F77=gfortran FC=gfortran 2>&1 | tee config.out      # for GNU compilers

To produce 64-bit C, C++, F77, F90 objects (for the GNU compiler suite):

  ./configure --prefix=/opt/openmpi CC=gcc CXX=g++ F77=gfortran FC=gfortran CFLAGS=-m64 CXXFLAGS=-m64 FFLAGS=-m64 FCFLAGS=-m64 2>&1 | tee config.out

Note that in future compiles you'll need to specify '-m64', otherwise you'll get errors like:

  ld warning: in /opt/openmpi/lib/libmpi_f90.a, file is not of required architecture
  ld warning: in /opt/openmpi/lib/libmpi_f77.dylib, file is not of required architecture
  ld warning: in /opt/openmpi/lib/libmpi.dylib, file is not of required architecture
  ld warning: in /opt/openmpi/lib/libopen-rte.dylib, file is not of required architecture
  ld warning: in /opt/openmpi/lib/libopen-pal.dylib, file is not of required architecture

To shift the entire openmpi tree to another folder, add the following to ~/.bash_profile:

  export OPAL_PREFIX=newfolder

Troubleshooting:

1. Problem when mpic++ points to the binary of a different Open MPI installation (in /usr/local/bin) instead of the new Open MPI installation (/opt/openmpi/bin):

  $ mpic++ -m64 hellompi.cpp -o hellompi
  $ which mpic++
  /usr/local/bin/mpic++
  $ mpirun -np 1 ./hellompi
  mca: base: component_find: unable to open /opt/openmpi/lib/openmpi/mca_paffinity_darwin: file not found (ignored)
  mca: base: component_find: unable to open /opt/openmpi/lib/openmpi/mca_paffinity_test: file not found (ignored)
  mca: base: component_find: unable to open /opt/openmpi/lib/openmpi/mca_carto_auto_detect: file not found (ignored)
  mca: base: component_find: unable to open /opt/openmpi/lib/openmpi/mca_carto_file: file not found (ignored)
  --------------------------------------------------------------------------
  It looks like opal_init failed for some reason; your parallel process is
  likely to abort. There are many reasons that a parallel process can fail
  during opal_init; some of which are due to configuration or environment
  problems. This failure appears to be an internal failure; here's some
  additional information (which may only be relevant to an Open MPI developer):
    opal_carto_base_select failed
    -- Returned value -13 instead of OPAL_SUCCESS
  --------------------------------------------------------------------------
  ,INVALID] ORTE_ERROR_LOG: Not found in file runtime/orte_init.c at line 77
  ,INVALID] ORTE_ERROR_LOG: Not found in file orterun.c at line 454

MPICH-2

Installing MPICH-2 with Intel Fortran/C++ (or another compiler)

1. Install the Intel compilers (ifort, icc, icpc), or another compiler.

2. Download the appropriate MPICH2 source file. Extract the tar file and go to the folder:

  wget http://www.mcs.anl.gov/research/projects/mpich2/downloads/tarballs/1.4/mpich2-1.4.tar.gz
  tar -xzf mpich2-1.4.tar.gz
  cd mpich2-1.4/

3. Use environment variables to link MPICH2 to the appropriate Intel compilers (change appropriately if another compiler is to be used):

  export FC=ifort
  export CC=icc
  export CXX=icc
  export RSH=ssh

4. Carry out the usual ./configure, make, make install (and optionally set up the path):

  ./configure --prefix=$HOME/usr
  make
  make install
  vi /etc/bashrc               # as root (or vi ~/.bashrc as a non-root user)
  PATH=$HOME/usr/bin:$PATH

From: https://sites.google.com/site/materialssimulation/home/coding-support/open-mpi
15452 views | 0 comments
[Repost] A short MPI excerpt
yiboliu 2011-7-12 10:20
Some excerpts of basic MPI knowledge. From: http://blog.zhaocong.info/2010/08/mpi-%E7%BC%96%E7%A8%8B%E5%B0%8F%E8%AE%B0/ . Thanks to Zhao Cong.

What is MPI?

MPI, the Message Passing Interface, is, roughly speaking, a standard for passing messages between processes (typically used on Linux systems), which different libraries then implement. MPI has undoubtedly reduced the amount of work programmers have to invest when developing parallel computing code. MPI currently only provides support for C and High Performance Fortran.

A short summary

Since MPI is an interface, it naturally comes with a definition of its functionality, which can be roughly summarized as follows:
- sending data: one-to-many (MPI_Bcast), many-to-one (MPI_Gather), point-to-point (MPI_Send)
- receiving data: one-to-many, many-to-one, point-to-point (MPI_Recv)
- heavy lifting: summation (MPI_SUM), finding extrema (MPI_MAX, MPI_MIN), ...
- odd jobs: defining new data types (MPI_Type_struct), ...
In short, everything is in the service of communication; for the details, consult any of the many tutorials.

What an MPI program looks like

An MPI program looks like this (the example reads a file of doubles on rank 0, sends an equal chunk to each worker rank, accumulates each chunk with compensated summation, and combines the partial sums with MPI_Reduce):

  #include <stdio.h>
  #include "mpi.h"

  #define FILE_NAME "/export/home/zhaocong/zhaocong/MPI/etaana.dat"
  //#define FILE_NAME "/export/home/zhaocong/zhaocong/MPI/test.dat"
  #define ROW 64
  #define COL 120
  #define SIZE 7680
  //#define SIZE 15

  int main(int argc, char *argv[])
  {
      int numtasks, rank, source = 0, tag = 1;
      double dat[SIZE], b[SIZE], sum, sum_p = 0;
      int rc;
      double d, test_sum = 0;
      FILE *pFile;
      int i = 0;
      MPI_Status stat;
      MPI_Datatype rowtype;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

      int shared_task = SIZE / (numtasks - 1);           /* values handled by each worker rank */
      MPI_Type_contiguous(shared_task, MPI_DOUBLE, &rowtype);
      MPI_Type_commit(&rowtype);

      if (rank == 0) {                                   /* rank 0 reads the data file */
          pFile = fopen(FILE_NAME, "r");
          if (pFile == NULL) perror("Error opening file");
          while (fscanf(pFile, "%lf", &d) != EOF) {
              dat[i] = d;
              i++;
          }
          fclose(pFile);
      }

      if (SIZE % (numtasks - 1) == 0) {                  /* ensure each process shares the same load */
          if (rank == 0) {
              for (i = 1; i < numtasks; i++)             /* assign a chunk to each worker process */
                  rc = MPI_Send(&dat[(i - 1) * shared_task], 1, rowtype, i, tag, MPI_COMM_WORLD);
          }
          if (rank != 0) {
              MPI_Recv(b, 1, rowtype, source, tag, MPI_COMM_WORLD, &stat);
              double c = 0, a = 0, error = 0, error1 = 0, temp, sum1;
              for (i = 0; i < shared_task; i++) {        /* compensated (Kahan) summation */
                  c = b[i];
                  /* printf("%d [%d]: %lf\n", rank, i, c); */
                  a = sum_p;
                  temp = c + error;
                  sum1 = a + temp;
                  error1 = temp + (a - sum1);
                  sum_p = sum1;
                  error = error1;
              }
          } else {  /* rank == 0 */
              sum_p = 0;
          }
      } else
          printf("Must specify different number of processors. Terminating.\n");

      /* printf("%d : %lf\n", rank, sum_p); */
      MPI_Type_free(&rowtype);
      MPI_Barrier(MPI_COMM_WORLD);
      /* MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm) */
      MPI_Reduce(&sum_p, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
      if (rank == 0) printf("sum: %.15lf\n", sum);

      MPI_Finalize();
      return 0;
  }

Heh, Hello MPI_COMM_WORLD!
Personal category: Programming study | 2646 views | 0 comments
review: Efficient audit service outsourcing for data integrity in clouds
jiangdm 2011-5-14 12:16
Efficient audit service outsourcing for data integrity in clouds
Yan Zhu, Hongxin Hu, Gail-Joon Ahn, Stephen S. Yau
The Journal of Systems and Software 85 (2012) 1083–1095

Abstract: Cloud-based outsourced storage relieves the client's burden for storage management and maintenance by providing a comparably low-cost, scalable, location-independent platform. However, the fact that clients no longer have physical possession of data indicates that they are facing a potentially formidable risk for missing or corrupted data. To avoid the security risks, audit services are critical to ensure the integrity and availability of outsourced data and to achieve digital forensics and credibility on cloud computing. Provable data possession (PDP), which is a cryptographic technique for verifying the integrity of data without retrieving it at an untrusted server, can be used to realize audit services. In this paper, profiting from the interactive zero-knowledge proof system, we address the construction of an interactive PDP protocol to prevent the fraudulence of prover (soundness property) and the leakage of verified data (zero-knowledge property). We prove that our construction holds these properties based on the computation Diffie–Hellman assumption and the rewindable black-box knowledge extractor. We also propose an efficient mechanism with respect to probabilistic queries and periodic verification to reduce the audit costs per verification and implement abnormal detection timely. In addition, we present an efficient method for selecting an optimal parameter value to minimize computational overheads of cloud audit services. Our experimental results demonstrate the effectiveness of our approach.

Keywords: Security, Cloud storage, Interactive proof system, Provable data possession, Audit service

Efficient audit service outsourcing for data integrity in clouds.pdf
Personal category: CHI | 239 views | 0 comments
Compiling and installing SWAN
Popularity 2 · zhouguidi 2010-6-5 16:16
SWAN is a well-known third-generation ocean wave model, released under the GPL, supporting parallel computation and multiple platforms. There has been a great deal of research on and application of SWAN; here we only discuss how to compile and run SWAN on a Linux system.

Two things need to be prepared before installation: a Perl environment and a Fortran compiler.

A Perl environment is installed by default on most Linux systems. To check, run perl -v in a terminal to show the Perl version; if it is not installed, install it yourself. On Debian and other apt-based systems this is as simple as running sudo apt-get install perl, or searching for perl in Synaptic and marking it for installation. Installation on other systems (such as Red Hat) is equally simple and well documented online, so it is not repeated here.

For the Fortran compiler, gfortran or g95 can be used. gfortran is part of the gcc compiler suite but, like g95, is usually not installed by default. Both can be installed via apt.

With the above preparation done, download the SWAN for Linux source package from http://130.161.13.149/swan/download/info.htm, unpack it, open a terminal in the unpacked folder, and run make config to generate the macros.inc file, which contains some platform-dependent macros needed for compilation.

Then run make ser to compile the serial version of the executable. However, compilation failed with the following error (with gfortran; other compilers not tested):

  swanout1.f:4073.25:
  XC, YC, ((JX(JC), JY(JC), WW(JC)), JC=1,4), SUMWW
                          1
  Error: Expected a right parenthesis in expression at (1)

Open swanout1.f and remove the second level of parentheses in ((JX(JC), JY(JC), WW(JC)), JC=1,4) on line 4073, so that it reads (JX(JC), JY(JC), WW(JC), JC=1,4); the file then compiles.

If you need to build the parallel version, first install the mpif90 parallel compiler: sudo apt-get install mpich2. Then make mpi will do it.

The generated swan.exe is the model's executable. Run chmod +x swan.exe to mark it executable, then ./swan.exe runs the model. The input file the model needs at run time is INPUT (case-insensitive, no extension), and the error-message file it produces is Errfile.
Personal category: numerical simulation | 10783 views | 0 comments
A brief introduction to MPI
guodanhuai 2009-10-12 09:29
Introduction to MPI

In message-passing-library parallel programming, the program executed by a group of processes is code written in a standard sequential language plus calls to library functions for receiving and sending messages. MPI (Message Passing Interface) is a message-passing interface released in May 1994. It is in fact a standard specification of a library of message-passing functions that absorbs the strengths of many earlier message-passing systems. It is one of the most popular parallel programming environments internationally, especially as a programming paradigm for scalable distributed-memory parallel computers, workstation networks and clusters. MPI has many advantages: portability and ease of use, complete asynchronous communication functionality, and a formal, detailed, precise definition. It thus provided the necessary conditions for the growth of the parallel software industry.

In the MPI programming model, a computation consists of one or more processes that communicate by calling library routines to send and receive messages. In the vast majority of MPI implementations, a fixed set of processes is created at program initialization, normally one process per processor. These processes may execute the same program or different programs (referred to as single program, multiple data (SPMD) or multiple program, multiple data (MPMD) mode, respectively). Communication between processes can be point-to-point or collective.

MPI only provides the programmer with a parallel environment library; the programmer achieves the intended parallelism by calling MPI's library routines. MPI provides C and Fortran programming interfaces.

MPI is a complex system comprising 129 functions (according to the MPI standard released in 1994). In fact the standard revised in 1997, known as MPI-2, has more than 200, and about 30 are in common use today. Nevertheless, a complete MPI program that solves many problems can be written using only the following 6 most basic functions:

- MPI_INIT
- MPI_FINALIZE
- MPI_COMM_SIZE: determine the number of processes
- MPI_COMM_RANK: determine this process's own identifier
- MPI_SEND: send a message
- MPI_RECV: receive a message

Quoted from a Tianya blog.
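A minimal sketch using just these six calls (Fortran shown here as one possible illustration; process 1 sends one number to process 0, so it must be run with at least two processes, e.g. mpirun -np 2 ./a.out):

  program six_calls
    use mpi
    implicit none
    integer :: ierr, rank, nprocs, status(MPI_STATUS_SIZE)
    double precision :: x

    call MPI_Init(ierr)                                   ! MPI_INIT
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)      ! MPI_COMM_SIZE: number of processes
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)        ! MPI_COMM_RANK: this process's identifier

    if (rank == 1) then
      x = 3.14d0
      call MPI_Send(x, 1, MPI_DOUBLE_PRECISION, 0, 99, MPI_COMM_WORLD, ierr)          ! MPI_SEND
    else if (rank == 0) then
      call MPI_Recv(x, 1, MPI_DOUBLE_PRECISION, 1, 99, MPI_COMM_WORLD, status, ierr)  ! MPI_RECV
      print *, 'rank 0 received', x
    end if

    call MPI_Finalize(ierr)                               ! MPI_FINALIZE
  end program six_calls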
Personal category: HPC | 7490 views | 0 comments
[Academic activities] The 2009 Max Planck alumni meeting successfully held at China Agricultural University
luyahai 2009-9-16 14:01
The 2009 Max Planck alumni meeting successfully held at China Agricultural University

From 19 to 21 July 2009, a three-day Max Planck alumni meeting was held at the Jinma Building. The meeting was an academic exchange organized by Prof. Yahai Lu of the College of Resources and Environmental Sciences, China Agricultural University, intended to give different research groups the chance to get to know one another, to promote academic exchange, and to seek opportunities for collaboration on scientific questions.

The participants included Prof. Ralf Conrad, director of the Max Planck Institute for Terrestrial Microbiology (MPI) and an internationally renowned microbiologist, Prof. Xiaolei Wu of Peking University, Prof. Yahai Lu of China Agricultural University, and other teachers and students who had worked or studied at the institute. Prof. Lu opened the meeting with a welcome address, after which Prof. Conrad gave a detailed introduction to the current state and future directions of the institute and presented a lecture entitled "Isotope fractionation during the anaerobic consumption of acetate by methanogenic, sulfate- and sulfur-reducing microorganisms"; his deep academic grounding and grasp of the scientific questions left a strong impression on the participants.

The academic session proceeded in the order in which the participants had studied at the MPI. Each speaker first showed a few slides recalling their life and study in Marburg, and then presented their current research directions, progress, results and outlook. Prof. Jianguo Dan of Hainan University described, in a relaxed and cheerful manner, the bond with methane that has run through his scientific career. Prof. Xiaolei Wu of the College of Engineering, Peking University, spoke on "Dynamics of a saline soil bacterial community responding to heavy crude oil pollution and a combined biostimulation treatment", analysing the dynamics of halophilic microorganisms in crude-oil-polluted saline soil and the changes in the bacterial community after a combined biostimulation treatment. Prof. Yahai Lu of China Agricultural University presented his group's work on the mechanisms of carbon cycling in paddy soils, proposing that changes in substrate concentration determine which archaeal and bacterial populations dominate at different temperatures. Associate Prof. Hong Yang of Central China Normal University gave a talk entitled "Primary studies on symbiotic microorganisms in the intestinal tract of Reticulitermes chinensis Snyder", analysing the community structure of symbiotic microorganisms and the phylogenetic diversity of nitrogen-fixation genes in the termite gut, reporting the isolation of culturable aerobic and facultatively anaerobic microorganisms from the gut, and outlining future work. Researcher Fuli Li of the Qingdao Institute of Bioenergy and Bioprocess Technology, Chinese Academy of Sciences, described his group's research directions and progress, emphasizing that wheat bran, which is easily degraded into soluble sugars, can be used to ferment the energy fuel butanol; his group has also isolated, from various sources, several pure yeast (Saccharomyces cerevisiae) strains able to break down and utilize lignin, as well as a variety of oil-producing algae. Researcher Rongbo Guo, also of the Qingdao Institute of Bioenergy and Bioprocess Technology, likewise works on energy: how can hydrogen, a new renewable fuel of the future, be obtained from cyanobacteria? His current work concentrates on screening oxygen-tolerant, active hydrogenases and modifying them with molecular-biology methods, with the aim of obtaining highly active enzymes suitable for large-scale hydrogen production. Prof. Rong Ji of Nanjing University spoke on "Using 14C-Tracer to Study Transformation of Organic Substances in Environment", using 14C labelling to follow the transformation of exogenous substances, including humic substances, catechol and nonylphenol, in the environment. Prof. Zhongjun Jia of the Institute of Soil Science, Chinese Academy of Sciences, Nanjing, whose humorous style left a deep impression, reported the results of his postdoctoral work at the MPI in a talk entitled "Bacteria rather than Archaea dominate ammonia oxidation in agricultural soils", a step-by-step study revealing that bacteria rather than archaea play the main role in ammonia oxidation in agricultural soils; he also offered his views on the future directions of molecular microbial ecology. Ph.D. student Qiongfen Qiu of the molecular microbial ecology laboratory at China Agricultural University presented "Identify Active Methanotrophs in the Rhizosphere and on the Roots of Rice": type I methanotrophs dominate and are active in the rice rhizosphere, whereas type II methanotrophs dominate and are active on the rice roots. Finally, Ph.D. student Yanli Yuan of the same laboratory presented "Responses of methanogenic archaeal community to oxygen exposure in rice field soil", whose main finding is that although methane production in paddy soil is clearly inhibited after exposure to oxygen, the community structure of the methanogenic archaea remains relatively stable, providing a theoretical basis for explaining carbon cycling in paddy soil after oxygen stress.

After the talks, Prof. Conrad gave closing remarks, spoke highly of the success of the meeting, and reviewed with the participants the history of his academic exchanges with Chinese scientists. Finally, several participants visited the molecular microbial ecology laboratory at China Agricultural University and engaged in lively discussion.

Written by Yanli Yuan
Personal category: Academic activities | 5206 views | 0 comments
An Introduction to the Lab of Molecular Ecology – MPI Partner Group, China Agricultural University
luyahai 2009-4-15 15:13
An Introduction to the Lab of Molecular Ecology – MPI Partner Group, China Agricultural University (Yahai Lu's Lab)

The Lab of Molecular Ecology – MPI Partner Group, China Agricultural University (CAU), was co-founded in Sep. 2006 by the Max Planck Institute for Terrestrial Microbiology in Marburg and the Key Laboratory of Plant and Soil Interactions of the Ministry of Education of China (MOE), which is affiliated with the College of Resources and Environmental Sciences at CAU. The faculty of the lab consists of one professor, one chair professor, one assistant professor and one postdoctoral researcher, in addition to eight Ph.D. candidates and seven M.S. candidates.

Prof. Yahai Lu, the director of the lab and a recipient of the National Outstanding Youth Fund, was awarded both the title of New Century Excellent Talent and that of Changjiang Scholar by the MOE in 2006. Prof. Lu worked for years at the International Rice Research Institute in the Philippines, the Japan Science and Technology Agency and the MPI, where he conducted systematic research on both the biochemistry and the microbiology of rice paddy soils. Since 2002 he has carried out cooperative research with Prof. Ralf Conrad of the MPI; together they not only developed and established methodologies for studying microbial diversity at the interface between the rhizosphere and the soil, but also identified the key microbial groups involved and their mechanisms, and these results have been published in leading journals such as Science. In Sep. 2006 the MPI Partner Group was co-founded by the MPI and CAU on this foundation of cooperation and achievements, to continue supporting and promoting leading-edge cooperative research between the MPI and CAU. Prof. Lu was appointed director of the partner group by Peter Gruss, the president of the Max Planck Society. In the appointment letter the president wrote that Dr. Lu's scientific research has earned a very high international reputation, that the Max Planck Society, and especially the Max Planck Institute for Terrestrial Microbiology in Marburg, will support Dr. Lu in developing his partner group over the subsequent five years, and that he expected the group to become an active scientific research centre in China, wishing Dr. Lu successful research at his new post. In the same period, Prof. Ralf Conrad was appointed chair professor of CAU by Zhangliang Chen, then president of CAU.

Prof. Conrad has been the director of the Max Planck Institute for Terrestrial Microbiology in Marburg since 1991, focusing on soil microbiology and biogeochemistry. A prestigious scientist worldwide, he has published more than 250 scientific papers and 37 books, and 35 Ph.D. students have graduated from his lab, an outstanding academic record in his field. He won the lecture award of the American Society for Microbiology in 1997, was the Francis E. Clark Distinguished Lecturer at the 2003 Soil Science Society of America Annual Meetings, and became a board member of the American Geophysical Union in 2005. He serves on ten international scientific committees, including the European Centre of Environmental Studies, and is a former chief editor of FEMS Microbiology Ecology. Prof. Conrad has visited the partner group once every year since 2006, giving lectures and helping with our research.

The main topic of the lab is the microbial mechanisms of environmental processes, which include:
1. Molecular microbial ecology
2. Microbial diversity and its ecological functions in soil
3. The microbial mechanisms of the carbon and nitrogen cycles
4. Microbial remediation

Our tool box mainly consists of stable isotope probing and molecular biological techniques. The 211 and 985 projects of the MOE have provided us with EA-IRMS, a Beckman Coulter ultra-high-speed centrifuge, an ABI 3130 sequencer, an ABI 7900 real-time PCR system, etc. Multiple PCR instruments, DGGE equipment, a gel imaging system and an ultra-low-temperature freezer are available as well for conventional molecular analyses.

(by Zhe Lv)
Personal category: Academic activities | 6511 views | 2 comments
