Written by Christopher Tozzi, April 10, 2017

If you use a free and open source operating system, it's almost certainly based on the Linux kernel and GNU software. But these were not the first freely redistributable platforms, nor were they the most professional or widely commercialized. The Berkeley Software Distribution, or BSD, beat GNU/Linux on all of these counts. So why has BSD been consigned to the margins of the open source ecosystem, while GNU/Linux distributions rose to fantastic prominence? Read on for some historical perspective.

Understanding BSD requires delving far back into the history of Unix, the operating system first released by AT&T Bell Labs in 1969. BSD began life as a variant of Unix that programmers at the University of California at Berkeley, initially led by Bill Joy, began developing in the late 1970s. At first, BSD was not a clone of Unix, or even a substantially different version of it. It just included some extra utilities, which were intertwined with code owned by AT&T.

That all started to change in the early 1980s, however, when AT&T's decision to commercialize Unix created demand for a Unix clone that would be freely redistributable without steep licensing fees. As a result, BSD programmers worked throughout the mid-1980s to separate their code from AT&T's, making slow but steady progress toward a complete Unix-like operating system of their own. They finally achieved their goal in June 1991, when the Net 2 release of BSD became available. In contrast to the Net 1 release that preceded it, which comprised mostly networking code rather than a full operating system, Net 2 was a complete, Unix-like system. Because Net 2 BSD was available under a permissive license that granted access to the source code and the right to redistribute the system or derivatives of it freely, it was effectively the first "open source" operating system to see the light of day.
The term "open source" did not yet exist at the time, and the BSD license did not satisfy the Free Software Foundation's requirements for free-software licensing, but Net 2 was still a major step forward for the free-software community, since it showed that efforts to create a free, Unix-like system could succeed. Net 2 was also an important leap because it was the only free Unix clone that actually worked. At the time, the Linux kernel did not yet exist. (Linus Torvalds released the first version of Linux several months after Net 2 appeared, and it took more than two more years before Linux became fully functional.) And the GNU operating system, which Richard Stallman and his supporters had been working on since 1984, lacked a kernel.

So, if BSD Net 2 was the first free Unix-like system, and at the time by far the best, why did it not end up taking the hacker community by storm and become the open source platform we all use today instead of GNU/Linux?

Fighting the Law

Part of the answer lies in the lawsuit that Unix System Laboratories (USL), which by the early 1990s had acquired the rights to what had been AT&T Unix, filed against Berkeley Software Design, Inc. (BSDI) in early 1992, claiming that BSDI's commercial implementation of BSD infringed USL's copyright. In March 1993, a court dismissed most of these claims, but the legal drama continued when the University of California countersued. It was not until early 1994, by which time Novell had acquired the rights to Unix, that the legal disputes were fully resolved through settlement.

Ultimately, the legal drama did not undercut programmers' ability to use or redistribute BSD. However, it did stunt adoption of the operating system by creating doubts about BSD's legal future. As a result, it arguably forged an opening that allowed Linux to gain ground, despite being developed primarily by an undergraduate in his Helsinki apartment rather than by a team of professional computer scientists at a major American university.
Licenses, Licenses

But the lawsuits do not fully explain BSD's slow adoption. After all, the GNU/Linux community faced its own series of major legal battles in the early 2000s, when the SCO Group sued several major Linux distributors and corporate users. Yet the GNU/Linux community emerged relatively unscathed from those disputes, which were essentially resolved in 2007 in Linux's favor.

Part of BSD's failure to win over hackers, that is, the people who made GNU and Linux what they became, also had to do with the permissiveness of the Net 2 licensing terms. Unlike the GNU GPL, which required the source code of all derivative works of GPL-licensed software to remain publicly available, the BSD license did not force developers who borrowed or tweaked BSD code for their own projects to share their source code publicly. That was good news for commercial companies wary of sharing their code, but bad news for hackers who valued openness and transparency.

The BSD Cathedral

Last but not least, it also mattered that BSD was built by a relatively small, mostly centralized team of professional programmers based in Berkeley. That set it apart from a system like Linux, which Torvalds created in collaboration with a wide network of loosely organized volunteer developers spread across the world. While BSD functioned as what Eric S. Raymond would call a software "cathedral," carefully and elegantly built by a small group of master coders, the Linux development scene looked more like a "bazaar," with code released early and often by a decentralized group of programmers whose only qualification was their ability to get the job done. The cathedral approach (which GNU, for its part, also adopted for the first 15 years of its history) did not lend itself to the rapid innovation that helped make Linux so popular in its early years.
Thus, the fact that Torvalds, more or less by accident, stumbled upon a new and more effective development strategy lent Linux momentum that BSD never saw.

BSD's Legacy

Of course, BSD hardly disappeared once Linux became popular in the mid-1990s. On the contrary, a variety of operating systems based on Net 2, including NetBSD, OpenBSD and FreeBSD, remain alive and well today, with small but passionate communities of users. At the same time, BSD's permissive licensing terms made its derivatives popular with some proprietary-software companies, most notably Apple, which included code derived from BSD in its OS X and iOS operating systems. In this sense, BSD, or some form of it, has a massive following today, although the vast majority of people who own Macs, iPhones and iPods have no idea that their hardware relies partially on "open source" code developed at Berkeley in the 1980s and early 1990s.

Maybe that's sad. After all, Apple software is about as closed as closed can be, the total opposite of the kind of system the BSD developers envisioned when they unveiled Net 2 in 1991. Either way, it's an interesting outcome.
August 3, 2016 • Physics 9, 91

A loophole in a result from classical electromagnetism could allow a simple device on the Earth's surface to generate a tiny electric current from the planet's magnetic field.

[Figure: P. Reid/Univ. of Edinburgh. Tapping into Earth's rotation. Although the Earth's magnetic field is not aligned exactly with the planet's rotation axis, there is a component of the field that is symmetric about this axis. A proposed device would interact with this component.]

It might seem that classical electromagnetic theory would hold few surprises, but two researchers argue that one piece of received wisdom is wrong. They show theoretically that a device sitting passively on the Earth's surface can generate an electric current through its interaction with the Earth's magnetic field. The power from the proposed device would be measured in nanowatts but might, in principle, be scaled up.

A century-old experiment showed that if an electromagnet with cylindrical symmetry (the symmetry of a bar magnet) rotates about its long axis, its magnetic field does not rotate. There is a component of the Earth's magnetic field that is symmetric around the rotation axis (which is not aligned with the magnetic poles), so according to this old principle, that axisymmetric component does not rotate with the planet. Any stationary object on the Earth's surface therefore sweeps through this component of the field, which is constant at any given latitude.

Another basic result from electromagnetism says that no electric current will develop within a conducting object moving through a uniform magnetic field. Charges within the material experience a sideways force that could, in principle, produce a current, but the displacements of the electrons and atomic nuclei quickly set up a static electric field that opposes the magnetic force. Equilibrium between the electric and magnetic forces is quickly established, so there is no net motion of charge after the small, initial rearrangement.
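The equilibrium argument in the preceding paragraph can be written out explicitly. For a charge q inside a conductor moving with constant velocity v through a uniform field B, the net Lorentz force must vanish in steady state:

```latex
\mathbf{F} \;=\; q\,\bigl(\mathbf{E} + \mathbf{v}\times\mathbf{B}\bigr) \;=\; \mathbf{0}
\qquad\Longrightarrow\qquad
\mathbf{E} \;=\; -\,\mathbf{v}\times\mathbf{B}.
```

Because v × B is uniform here, a simple electrostatic field can cancel it everywhere in the conductor, so no current flows. The researchers' loophole hinges on finding field configurations for which no single electrostatic field can provide this cancellation along every closed path.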
This principle seems to squelch any idea that a stationary device on the Earth's surface, moving at constant velocity through the nonrotating part of the Earth's field, could generate electric power. But Chris Chyba of Princeton University and Kevin Hand of the Jet Propulsion Laboratory in Pasadena, California, saw a way forward. To produce a current in a conductor, they needed to create a magnetic force on the electrons that could not be completely canceled by the electric force. In what they call a loophole in the traditional impossibility argument, the theorists show that there are configurations of magnetic fields that cannot be electrically canceled; however, these configurations require special conditions.

The researchers show that such a magnetic field configuration is possible in a conducting cylindrical shell made of a material with unusual magnetic properties. First they point out that (as others have shown) the magnetic field inside such a shell placed on the Earth's surface, say, oriented vertically at the equator, is significantly smaller than the field outside. As this object sweeps through the planet's field, it continually confronts the Earth's uniform field and distorts it into a nonuniform configuration in which the field is suppressed in the interior. If the shell material's magnetic properties prevent the incoming field from adjusting rapidly, then the field never reaches the configuration it would have at rest. Chyba and Hand argue that the resulting magnetic force cannot be canceled by the resulting electric field.

The team shows that in this situation an electric current can flow around certain closed paths within the cylindrical shell. Electrodes could tap this power source, which ultimately comes, Chyba and Hand prove, from the energy of the Earth's rotation. To design their novel device, Chyba and Hand needed a conducting material with this unusual magnetic response, a difficult combination.
As an example of such a material, they found a manganese-zinc ferrite called MN60 that has the right properties while being, as Chyba puts it, "a lousy conductor, with about one-tenth the conductivity of sea water." Largely because of that poor conductivity, the power the team predicts is small: a cylinder 20 cm long and 2 cm across would generate tens of nanowatts at tens of microvolts. Chyba thinks there could be ways to increase those numbers, but he emphasizes that the first order of business is an experimental test to show that the mechanism really works.

Philip Hughes, a radio astronomer at the University of Michigan, Ann Arbor, who studies the magnetohydrodynamics of astrophysical objects, says that Chyba and Hand's mechanism is "based on sound physics" but is less optimistic about the possibility of scaling up. Chyba says that if the mechanism proves correct, and he is adamant that only experiments can say for sure, he hopes engineers will get to work to improve the output. One possibility worth exploring, he suggests, would be a two-layer cylinder in which the slow magnetic material induces a current-generating field geometry in an adjacent material with higher conductivity.

This research is published in Physical Review Applied.

–David Lindley

David Lindley is a freelance science writer in Alexandria, Virginia, and author of Uncertainty: Einstein, Heisenberg, Bohr, and the Struggle for the Soul of Science (Doubleday, 2007).

References

1. S. J. Barnett, "On Electromagnetic Induction and Relative Motion," Phys. Rev. (Series I) 35, 323 (1912).
[Photo: Diamond Foundry. Diamond Foundry says it grows its diamonds layer by layer from a superheated plasma.]

Diamonds are a girl's best friend, but they don't grow on trees. Or do they? The Santa Clara-based startup Diamond Foundry claims it can grow diamonds in a lab that are as high-quality as natural gems, minus the exploitation of the mining industry. Actor Leonardo DiCaprio, along with 10 billionaires, has already invested in the company, which says it can make hundreds of diamonds in two weeks, weighing up to nine carats each.

But how exactly are these diamonds made, and how do they differ from existing synthetic methods? The company's website is short on details, but here's what we know: they start with a real diamond as a seed crystal. (This is what makes their product different from other synthetic diamonds, according to a company spokesperson.) Then, using a superheated plasma, they build more atoms onto this seed, layer by layer, until they have a diamond. The gems are grown in chemical reactors that can reach a scorching 8,000 degrees Celsius (more than 14,000 degrees Fahrenheit), hotter than the surface of the sun, which is about 5,500 degrees Celsius.

We chatted briefly with Catherine McManus, chief scientist of Materialytics, a company that specializes in distinguishing natural, synthetic, and fake diamonds, to find out what separates Diamond Foundry's gems from other synthetic diamonds.

[Photo: Diamond Foundry. Diamonds created by Diamond Foundry.]

How to make a diamond

Diamonds are made of carbon, the same material found in pencil graphite. In nature, geologists believe, diamonds are created over millions of years under intense pressure and temperature in the Earth's mantle, and then regurgitated onto the surface by volcanoes. By contrast, synthetic diamonds are made in a lab. Chemically, natural and synthetic diamonds are almost identical, but they can vary in the trace elements found inside.
The two most common techniques for making synthetic diamonds are known as high-pressure high-temperature (HPHT) and chemical vapor deposition (CVD). In HPHT, a carbon seed crystal is placed inside a device called a press with a metal solvent and subjected to immense pressures at temperatures around 1,400 degrees Celsius (about 2,550 degrees Fahrenheit), which melts the metal. The molten metal dissolves the carbon, which then solidifies into a diamond. In CVD, a carbon-hydrogen gas mixture is deposited on a surface layer by layer. This process usually takes place at about 800 degrees Celsius (about 1,470 degrees Fahrenheit).

Diamond Foundry's method seems to be a combination of HPHT and CVD, said McManus: essentially the latter method, but at much higher temperatures. The result, the company claims, is diamonds that are not only as pure as natural ones, but ethically and morally pure as well.
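Each Celsius figure quoted in the article carries a Fahrenheit conversion, which follows the standard formula F = C × 9/5 + 32. A quick sketch checking the article's numbers:

```python
def c_to_f(celsius):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Temperatures quoted in the article:
print(c_to_f(8000))  # Diamond Foundry reactor: 14432 F ("more than 14,000")
print(c_to_f(1400))  # HPHT press: 2552 F
print(c_to_f(800))   # CVD chamber: 1472 F ("about 1,470")
```

Note that the conversion of the HPHT temperature works out to roughly 2,550 degrees Fahrenheit, not the 2,250 sometimes quoted.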
Links to source code of saliency detection methods

FT: R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, "Frequency-tuned salient region detection," in IEEE CVPR, 2009, pp. 1597–1604.
AIM: N. Bruce and J. Tsotsos, "Saliency, attention, and visual search: An information theoretic approach," Journal of Vision, vol. 9, no. 3, pp. 5:1–24, 2009.
MSS: R. Achanta and S. Süsstrunk, "Saliency detection using maximum symmetric surround," in IEEE ICIP, 2010, pp. 2653–2656.
SEG: E. Rahtu, J. Kannala, M. Salo, and J. Heikkila, "Segmenting salient objects from images and videos," in ECCV, 2010, pp. 366–379.
SeR: H. Seo and P. Milanfar, "Static and space-time visual saliency detection by self-resemblance," Journal of Vision, vol. 9, no. 12, pp. 15:1–27, 2009.
SUN: L. Zhang, M. Tong, T. Marks, H. Shan, and G. Cottrell, "SUN: A Bayesian framework for saliency using natural statistics," Journal of Vision, vol. 8, no. 7, pp. 32:1–20, 2008.
SWD: L. Duan, C. Wu, J. Miao, L. Qing, and Y. Fu, "Visual saliency detection by spatially weighted dissimilarity," in IEEE CVPR, 2011, pp. 473–480.
IM: N. Murray, M. Vanrell, X. Otazu, and C. A. Parraga, "Saliency estimation using a non-parametric low-level vision model," in IEEE CVPR, 2011, pp. 433–440.
IT: L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE TPAMI, vol. 20, no. 11, pp. 1254–1259, 1998.
GB: J. Harel, C. Koch, and P. Perona, "Graph-based visual saliency," in NIPS, 2007, pp. 545–552.
SR: X. Hou and L. Zhang, "Saliency detection: A spectral residual approach," in IEEE CVPR, 2007, pp. 1–8.
CA: S. Goferman, L. Zelnik-Manor, and A. Tal, "Context-aware saliency detection," in IEEE CVPR, 2010, pp. 2376–2383.
LC: Y. Zhai and M. Shah, "Visual attention detection in video sequences using spatiotemporal cues," in ACM Multimedia, 2006, pp. 815–824.
AC: R. Achanta, F. Estrada, P. Wils, and S. Süsstrunk, "Salient region detection and segmentation," in IEEE ICVS, 2008, pp. 66–75.
CB: H. Jiang, J. Wang, Z. Yuan, T. Liu, N. Zheng, and S. Li, "Automatic salient object segmentation based on context and shape prior," in British Machine Vision Conference, 2011, pp. 1–12.
LP: T. Judd, K. Ehinger, F. Durand, and A. Torralba, "Learning to predict where humans look," in IEEE ICCV, 2009.
This new version finally updates the VIPARR tool from 3.x to 4.x, which can handle NBFIX terms in the CHARMM 36 protein FF and the latest CHARMM 36 lipids FF parameter sets.

Desmond Source Release
Desmond 3.4.0.2
April 1, 2013
D. E. Shaw Research

Desmond
=======
Changes to support compilation on more Linux distributions.

3.4.0.1
=======
This section summarizes the significant changes made in Desmond since the Desmond 3.0 release.

Features
--------
Configuration error reporting has been improved. In particular, a key history is internally maintained so that, when there is a complaint about a missing or invalid parameter, the user gets a much better idea about where that parameter is located. Configuration files for 3.4 are backward compatible.

Desmond now uses the new Random123 random number generator. One benefit of this is that random operations across particles return the same results, independent of the number of processes running. For example, Desmond now gets nearly identical results (up to arithmetic error) when running randomized integrators (Langevin, Brownie) on different numbers of processors.

Desmond now uses the MSYS library for all reading of structure files. As a result, a new boot.type flag, optional for backwards compatibility, has been added to allow (experimental) booting of Maestro files in Desmond through MSYS. Currently supported types are 'DMS' and 'MAE' (yes, in theory, you don't need mae2dms -- in theory). Developer infrastructure is in place to support any other type that MSYS can import (i.e., it should be fairly easy to support anything that MSYS supports).

DMS files are read in a more strongly typed fashion.
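To illustrate the boot.type flag described in the release notes: the test commands later in this document set boot.file on the command line (--cfg boot.file=...), so the corresponding section of a config file would plausibly look like the following sketch. The file name is a placeholder and the exact syntax should be checked against the users' guide:

```
boot = {
    file = "system.dms"
    type = "DMS"
}
```

Here 'DMS' is the default structure-file type; 'MAE' would route a Maestro file through MSYS, per the (experimental) feature noted above.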
It used to be that the types of columns in a DMS file were not checked against the types required by Desmond; Desmond would do the right thing in some mismatch cases and the wrong thing in others. As a result, some DMS files that were produced by SQLITE3 hacks will likely cause problems.

Some new functions have been added to enhanced_sampling (see guide). A long-standing bug in which the 'pow' function was not recognized by the parser has been fixed. The enhanced_sampling extension now supports 'whim' descriptors.

A 'Mixed' thermostat has been added to the Multigrator plugin (see guide). An 'Antithetic' thermostat has been added to the cadre of basic thermostats (see guide); both Multigrator and Polarization can make use of it.

The extended energy term records the heat changes that result from the plugins remove_com_motion and randomize_velocities, so conservation of the total energy persists in the presence of such plugins.

New Gibbs types have been added which support a three-charge (chargeA, chargeB, chargeC) alchemical system, as in force fields where the formal charges on all atoms are subject to change, rendering all formal charges alchemical. This is still mostly an experimental feature. Gibbs also has a new alchemical_pair_softcore_es pipeline to do alchemical pair terms with a softcore functional form (see guide). Gibbs nonbonded terms have the option of having their nonbonded correction adjusted to account for the variable presence of ligand terms in FEP binding calculations (see guide).

To support a certain replica-exchange/FEP workflow, an optional 'deltaE' parameter has been added to remd-graph (see guide).

The trajectory plugin writes out a VIRIALTENSOR into its frames. The trajectory extension also provides the special dbl_trajectory plugin, which outputs in DBL_WRAPPED_V_2 format. vrun in ReleaseDbl will read these frames back in their full precision, while vrun in Release will always read them back in single precision.
These features exist mostly to support debugging and Anton/Desmond comparisons.

A new main-loop plugin called virial_breakdown has been added to the additional_output extension. It reports a breakdown of virial contributions across force categories. It is primarily expected to be used as a debugging tool for Anton developers, though a brief description is in the users' guide.

Optimizations were made in both the GSE and PME pipelines. The GSE pipelines now perform much faster than before and have performance comparable to PME with a high interpolation order; the performance gap between GSE and PME is substantially narrower.

Added a force.nonbonded.suppress_exclusions optional flag (default false), which allows exclusion injection to be avoided, as might be needed for certain debugging operations. We do not expect its use outside of this context and do not document it in the guide.

The 'reshake' flag has been enabled for ReleaseDbl (defaulting to true, as in Release). Although ReleaseDbl does not need it for constraints, enabling it makes Desmond a bit more robust against initial constraint violations in the structure file, so Release and ReleaseDbl give closer numerical results in these cases. The performance impact is negligible.

Bugfixes
--------
Reich water constraints now test the water terms to make sure they satisfy certain symmetry assumptions made by the algorithm.

Particle GIDs used to be reported when a particle went missing; we now report them again.

The LCn virtual sites had a very subtle non-reversibility bug which prevented systems with these sites from working reversibly in unconstrained NVE simulations.

The alchemical nonbonded near terms and softcore pair in Gibbs used to generate a NaN if two noninteracting alchemical particles wound up on top of each other.

A long-standing bug in the GSE virial has been corrected.
This bug manifested when the grid size of the FFT was picked too aggressively (it triggers roughly when the grid spacing equals sigma_s or larger).

Checkpointing does in-memory buffering, which speeds up saves and loads at the expense of a bit of RAM at runtime. For some systems this is a dramatic speedup of saving and loading.
Compiling Desmond from source is the most challenging build work I have ever encountered, especially for the latest v3.4. Fortunately, I managed to compile it successfully over the past few days. In a sampling test, two CPUs reached up to 4 ns/day in the NVE ensemble for a typical membrane system, which is really amazing.

Build environment:

OS: SUSE Linux Enterprise 11 SP2
GCC: 4.3.4 (system)
Boost: 1.49 (installed from the SUSE system tool YaST)
SQLite: 3.7.6 (system)
Python: 2.6.7 (self-compiled)
numpy: 1.5.0 (self-compiled)
scons: 1.2 (self-compiled)
ANTLR: 3.2 (self-compiled)
PCRE: 8.21 (self-compiled)
Open MPI: 1.4.3 (self-compiled)

After compiling all of the necessary tools and libraries above, configure a proper Python environment before compiling Desmond:

setenv PYTHONPATH /soft/python-2.6.7
alias scons /soft/python-2.6.7/bin/scons
alias python /soft/python-2.6.7/bin/python

The configuration file user-conf.sample.py is as follows:

# Boost
boost_prefix = '/usr/include/boost'
EXTRA_INCLUDE_PATH += ' /usr/include'
EXTRA_LIBRARY_PATH += ' /usr/lib64'
EXTRA_LINK_FLAGS += ' -Wl,-rpath,/usr/lib64'
EXTRA_LIBS += ' -lboost_iostreams -lboost_thread'

# ANTLR, used in MSYS
EXTRA_INCLUDE_PATH += ' /soft/desmond-3.4/libantlr3c/include'
EXTRA_LIBRARY_PATH += ' /soft/desmond-3.4/libantlr3c/lib'
EXTRA_LINK_FLAGS += ' -Wl,-rpath,/soft/desmond-3.4/libantlr3c/lib'
EXTRA_LIBS += ' -lantlr3c'

# PCRE, used in MSYS
EXTRA_INCLUDE_PATH += ' /soft/desmond-3.4/pcre-8.21/include'
EXTRA_LIBRARY_PATH += ' /soft/desmond-3.4/pcre-8.21/lib'
EXTRA_LINK_FLAGS += ' -Wl,-rpath,/soft/desmond-3.4/pcre-8.21/lib'
EXTRA_LIBS += ' -lpcre'

# SQLITE, used in MSYS
EXTRA_INCLUDE_PATH += ' /usr/include'
EXTRA_LIBRARY_PATH += ' /usr/lib64'
EXTRA_LINK_FLAGS += ' -Wl,-rpath,/usr/lib64'
EXTRA_LIBS += ' -lsqlite3'

# MPI
WITH_MPI = 1
mpi_prefix = '/soft/openmpi-1.4.3'
MPI_CPPFLAGS = " -I%s/include -pthread -DOMPI_SKIP_MPICXX" % mpi_prefix
MPI_LDFLAGS = " -L%s/lib -Wl,-rpath,%s/lib" % (mpi_prefix, mpi_prefix)

# Python
# DESRES installs Python and numpy in separate path locations; your installation
# will likely install numpy somewhere in the Python path hierarchy
python_prefix = '/soft/python-2.6.7'
numpy_prefix = "/soft/python-2.6.7/lib/python2.6/site-packages"
EXTRA_INCLUDE_PATH += ' %s/include/python2.6' % python_prefix
EXTRA_LIBRARY_PATH += ' %s/lib' % python_prefix
EXTRA_LINK_FLAGS += ' -Wl,-rpath,%s/lib' % python_prefix
EXTRA_INCLUDE_PATH += ' %s/numpy/core/include' % numpy_prefix
EXTRA_LIBRARY_PATH += ' %s/numpy/core' % numpy_prefix
EXTRA_LINK_FLAGS += ' -Wl,-rpath,%s/numpy/core' % numpy_prefix
EXTRA_LIBS += ' -lboost_python -lpython2.6'

Now we can compile and install Desmond with the following commands:

scons --user-conf=user-conf.sample.py PREFIX=/soft/desmond-3.4/desmond -j4
scons --user-conf=user-conf.sample.py install PREFIX=/soft/desmond-3.4/desmond -j4

If everything goes well, we should see the following in the terminal:

scons build done

Testing:

setenv DESMOND_PLUGIN_PATH /soft/desmond-3.4/desmond/lib/plugin
set path=(/soft/desmond-3.4/desmond/bin $path)
desmond --include ./share/samples/dhfr.cfg \
    --cfg boot.file=./share/samples/dhfr.dms \
    --cfg mdsim.plugin.eneseq.name=dhfr.eneseq
diff dhfr.eneseq ./share/samples/dhfr.eneseq.reference

The values printed should correspond to steps 160 and 200 and should be identical to the final few lines of values obtained when running the first parallel simulation.

Parallel testing:

setenv DESMOND_PLUGIN_PATH /soft/desmond-3.4/desmond/lib/plugin
set path=(/soft/desmond-3.4/desmond/bin $path)
mpirun -np 2 desmond \
    --include ./share/samples/dhfr.cfg \
    --cfg boot.file=./share/samples/dhfr.dms \
    --cfg mdsim.plugin.eneseq.name=dhfr.eneseq.2 \
    --destrier mpi
diff dhfr.eneseq.2 dhfr.eneseq

The first command sets an environment variable that determines where Desmond will search for its dynamically loaded plugins. Using the instructions above, there are one or two plugins built that contain 'destriers'.
Destriers implement communication protocols in Desmond, the default protocol being one for a single processor. The second command runs two mdsim.exe processes using the Open MPI job launcher orterun and the Desmond 'mpi' destrier. The third command again compares the results obtained from the parallel run with those of the serial run. Note that, in general, these results will be different.
Pro nvss_upfile,ra,dec,radius=radius,filename=filename,format=format
;+
;NAME:
;       nvss_upfile
;PURPOSE:
;       Write an upfile for NVSS matched sources
;CALLING SEQUENCE:
;       nvss_upfile,ra,dec,filename=filename
;
;INPUT:
;       ra       ----- RA for source, in degrees
;       dec      ----- DEC for source, in degrees
;OPTIONAL KEYWORD INPUT:
;       radius   ----- match radius, in arcsec
;                      if not set, defaults to 15 arcsec
;       format   ----- if set, output according to this format
;OPTIONAL KEYWORD OUTPUT:
;       filename ----- upfile name
;EXAMPLE:
;       IDL> nvss_upfile,ra,dec,filename='NVSS_update.dat'
;
;REVISION HISTORY:
;       Original by DL.Wang, Aug-30-2007
;-
if ( N_PARAMS() lt 2 ) then begin
    message,'Syntax: nvss_upfile,ra,dec'
    return
endif
if not keyword_set(radius) then radius=15

openw,lun,filename,/get_lun
n=n_elements(ra)
for i=0L,n-1 do begin
    if not keyword_set(format) then begin
        printf,lun,adstring(ra[i],dec[i],1),' ',radius,' ','0'
    endif else begin
        printf,lun,adstring(ra[i],dec[i],1),radius,'0',format=format
    endelse
endfor
free_lun,lun
End
Pro write_2mass_upfile,ra,dec,radius=radius,name=name, $
    filename=filename,format=format
;+
;NAME:
;       write_2mass_upfile
;PURPOSE:
;       Write an upfile for a source list matched against 2MASS
;CALLING SEQUENCE:
;       write_2mass_upfile,ra,dec,radius=radius,filename=filename,format=format,/name
;INPUT:
;       ra  ----- RA of the matched source list, in degrees
;       dec ----- DEC of the matched source list, in degrees
;OPTIONAL KEYWORD INPUT:
;       radius ----- match radius, in arcsec (default: 7.0 arcsec)
;       name   ----- if set, print the source name; otherwise print
;                    the source sequence number
;       filename --- name of the output file (default: '2mass_update.dat')
;       format ----- format for the output data
;EXAMPLE:
;       IDL> write_2mass_upfile,ra,dec,filename='rgb_2mass_update.dat'
;REVISION HISTORY:
;       Original by DL.Wang, Aug-17-2007
;-
Npar = N_params()
if ( Npar lt 2 ) then message, $
    'Syntax: write_2mass_upfile, ra, dec'
if not keyword_set(radius) then radius=7.0
if not keyword_set(format) then format='(1x,A14,1x,F10.6,1x,F10.6,1x,F3.1)'
if not keyword_set(filename) then filename='2mass_update.dat'
openw,lun,filename,/get_lun
printf,lun,'\ Example of cone search'
printf,lun,"\EQUINOX = 'J2000.0'"
printf,lun,'| objname |    ra    |   dec    | radius |'
printf,lun,'| string  |  double  |  double  | double |'
printf,lun,'|  unit   |   unit   |   unit   | arcsec |'
if not keyword_set(name) then begin
    for i=0L,n_elements(ra)-1 do begin
        printf,lun,strtrim(i+1,2),ra[i],dec[i],radius,F=format
    endfor
endif else begin
    for i=0L,n_elements(ra)-1 do begin
        printf,lun,name[i],ra[i],dec[i],radius,F=format
    endfor
endelse
free_lun,lun
End
As one of the most widely used MD analysis tools, VMD is distributed as pre-compiled binaries for almost every platform. However, the pre-compiled versions are likely to fail if you want to use advanced features such as running Python scripts or rendering scenes with an alternative engine. In such cases, compiling VMD from source becomes necessary. Here are the steps for compiling it under SUSE Linux Enterprise Desktop 11 SP2.

Prerequisites:
python 2.6.8 (system lib)
fltk (system lib)
actc
tcl/tk 8.5
libsball
points
raster3d
stride
surf
tachyon
cuda-4.2

1. Compile tcl/tk 8.5 and set up the proper environment:

setenv TCLINC -I/soft/vmd-1.9.1.src/tcl-8.5/include
setenv TCLLIB -F/soft/vmd-1.9.1.src/tcl-8.5/lib

2. Compile the plugins. Untar the source code somewhere convenient, e.g. /soft/vmd-1.9.1.src, then:

cd /soft/vmd-1.9.1.src/plugins
make LINUXAMD64
make ARCH=LINUXAMD64 PLUGINDIR=/soft/vmd-1.9.1.src/vmd-1.9.1/plugins distrib

3. Compile the VMD main code:

cd /soft/vmd-1.9.1.src/vmd-1.9.1/

Edit /soft/vmd-1.9.1.src/vmd-1.9.1/configure and set up the proper environment for all desired plugins: python, tcl/tk, tachyon, libsball, stride, cuda, and so on.

./configure LINUXAMD64 OPENGL FLTK TK CUDA IMD LIBSBALL XINERAMA XINPUT LIBTACHYON VRPN NETCDF TCL PYTHON PTHREADS NUMPY SILENT
cd src
make veryclean
make

If it goes well, you should see "No resource compiler required on this platform."

make install

You should see "Make sure /soft/vmd-1.9.1/bin/vmd is in your path. VMD installation complete. Enjoy!"

Now you can start VMD with:

/soft/vmd-1.9.1/bin/vmd

Make sure the terminal shows no complaints about missing modules. Good luck.
ENCODE experiments
http://genome.ucsc.edu/ENCODE/protocols/dataStandards/ChIP_DNase_FAIRE_DNAme_v2_2010.pdf

Requirements for DNase-seq and FAIRE-seq experiments

Following an analysis of deeply sequenced DNase-seq and FAIRE-seq datasets, we suggest the following requirements.

Controls. Deeply sequenced reference samples such as input DNA exhibit uneven coverage. For example, peaks in promoters have been observed in some input samples, perhaps as the result of endogenous nuclease activity, or sonication and solubility biases (Auerbach et al., 2009; Giresi et al., 2007). These promoter peaks likely represent real open chromatin and therefore should not be excluded from analysis. Other reasons for uneven input signal are copy number variation and under-representation of repetitive DNA sequence in the reference genome. However, the impact of uneven coverage in input chromatin is limited. Advances in computational methods to correct for such features are being incorporated into the analysis. For example, using reads that are not in peaks, the DNase-/FAIRE-seq data itself can be used to identify regions that exhibit copy number variation in samples. In addition, the true signals from FAIRE and DNase exhibit a unique structure that differs greatly from the type of signal produced by uneven input coverage. While it is always preferable to have deeply sequenced matched input for each sample, for DNase and FAIRE experiments input sequencing from every cell type is not required.

Sequencing depth. Since DNase and FAIRE data represent a continuum of the degree to which chromatin is "open", achieving true saturation may not be practical, or even definable. However, a decision must be made regarding an adequate level of coverage.
We propose that the optimal depth of sequencing be guided by our ability to identify regions that were also identified by other methods such as tiled arrays (Giresi 2009; Giresi et al., 2007; Sabo et al., 2006; Crawford et al., 2006), qPCR (Boyle et al., Cell 2008), or Southern blots (Sabo et al., 2006). For DNase and FAIRE this is typically 20-50 million reads. In general, it is best to sequence replicates to a similar depth. We have found that similar sequencing depth matters most for replicates at the lower end of the recommended read depth.

Number of replicates. By definition, at least two biological replicates are necessary to ensure that the experiment is reproducible. Experiments completed to date indicate that there will not be a significant gain in information beyond two biological replicates when they are in reasonable agreement. For DNase, we recommend that at least 80% of the top 50,000 peaks in one replicate are detected in the top 100,000 peaks in the second replicate, and vice-versa. For FAIRE, we recommend that at least 50% of the top 50,000 peaks in one replicate are detected in the top 200,000 peaks in the second replicate, and vice-versa.

Scoring. Similar to ChIP-seq, a variety of peak calling methods can be used to score peak intensity, including F-seq (Boyle et al., Bioinformatics 2008), Hotspot, and others. A statistically significant cutoff can be identified by one of the following methods: 1) fit the data to a gamma distribution to calculate p-values; contiguous regions where p-values fall below a 0.05 threshold can be considered significant; 2) the irreproducible discovery rate (IDR) analysis described in section IIb above.
=================
ENCODE - Wikipedia, the free encyclopedia
en.wikipedia.org/wiki/ENCODE
The primary assays used in ENCODE are ChIP-seq, DNase I hypersensitivity, RNA-seq, and assays of DNA methylation.
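The replicate-agreement thresholds above are straightforward to compute once each replicate's peaks are ranked by score. The following is a minimal sketch, not part of the standard's own tooling; it assumes peaks have already been reduced to comparable identifiers, whereas a real pipeline would match peaks by genomic interval overlap (e.g. with bedtools):

```python
def replicate_agreement(rep1_ranked, rep2_ranked, top_a, top_b):
    """Fraction of rep1's top_a peaks that also appear among rep2's top_b peaks."""
    top2 = set(rep2_ranked[:top_b])
    return sum(1 for p in rep1_ranked[:top_a] if p in top2) / min(top_a, len(rep1_ranked))

# Toy data: two ranked peak lists shifted by five positions.
rep1 = ["peak%d" % i for i in range(100)]
rep2 = ["peak%d" % i for i in range(5, 105)]

# DNase-style check scaled down 1000x: top 50 of rep1 within top 100 of rep2.
frac = replicate_agreement(rep1, rep2, top_a=50, top_b=100)
print(frac)  # 0.9, which would pass the 80% criterion
```

Because the standard asks for agreement "and vice-versa", the same function would also be run with the replicates swapped.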
========================
Cell, tissue or DNA sample: cell line or tissue used as the source of experimental material. One entry from the controlled vocabulary:

cell:        HEEpiC
Tier:        3
Description: esophageal epithelial cells
Lineage:     endoderm
Tissue:      epithelium
Sex:         U
Documents:   Stam
Vendor ID:   ScienCell 2700
Label:       HEEpiC

http://main.genome-browser.bx.psu.edu/cgi-bin/hgEncodeVocab?ra=encode%2Fcv.raterm=%22HEEpiC%22
Source: http://www.cnblogs.com/feisky/archive/2010/03/31/1701560.html

Defining command aliases on Linux: the alias command

If a command on a Linux system is too long or does not fit your habits, you can give it an alias. Creating a "link" can solve the long-file-name problem, but links are helpless for commands that take arguments; an alias solves all such cases. A few examples say it all:

alias l='ls -l'        # use l for ls -l (Xenix had a similar l command)
alias cd..='cd ..'     # use cd.. for cd .. (a great help for people used to cd.. under DOS)
alias md='mkdir'       # use md for mkdir (likewise for DOS users)
alias c:='mount /dev/hda1 /mnt/c; cd /mnt/c'   # use c: for the sequence: mount the DOS partition, then enter it

We usually put these commands in the .bash_profile file in our home directory; after running source .bash_profile they are ready to use.

Shell programming basics

We can use any text editor, such as gedit, kedit, emacs or vi, to write shell scripts. A script must start with the following line (it must be the first line of the file):

#!/bin/sh

Note: "#!/bin/bash" is usually preferable to "#!/bin/sh"; for tcsh use tcsh instead, and similarly for other shells. The #! symbols tell the system which program executes the script, /bin/sh in this example. After editing and saving, the script must be made executable before it can run:

chmod +x filename

Then, in the script's directory, type ./filename to execute it.

Contents
1 Variable assignment and reference
2 Flow control in the shell
2.1 if statements
2.2 The && and || operators
2.3 case statements
2.4 select statements
2.5 while/for loops
3 Some special characters in the shell
3.1 Quotes
4 Here documents
5 Functions in the shell
6 Example shell scripts
6.1 Binary-to-decimal conversion
6.2 File rotation
7 Script debugging

Variable assignment and reference

In shell programming, variables need no prior declaration. Variable names must obey these rules: the first character must be a letter (a-z, A-Z); no spaces inside (underscores are allowed); no punctuation; no bash keywords (run help to see the reserved keywords). To assign a value to a variable, write:

variable=value

To use a variable's value, just put a $ in front of its name. (Note: no spaces around "=" in an assignment.)

#!/bin/sh
# assign to a variable:
a="hello world"   # no spaces on either side of the equals sign
# print the value of a:
echo "A is:" $a

Pick your favourite editor, type in the above, save it as a file named first, run chmod +x first to make it executable, and finally run ./first. The output is:

A is: hello world

Sometimes a variable name can get mixed up with other text, for example:

num=2
echo "this is the $numnd"

This prints "this is the " rather than "this is the 2nd", because the shell searches for the value of a variable named numnd, which has no value. We can use braces to tell the shell we mean the num variable:

num=2
echo "this is the ${num}nd"

The output is: this is the 2nd

Note that the shell's default assignment is string assignment. For example:

var=1
var=$var+1
echo $var

This prints 1+1, not 2. To get what we want, there are several forms:

let "var+=1"
var=$[$var+1]
var=`expr $var + 1`   # note the spaces around the plus sign; without them this is string assignment again

Note: the first two forms work under bash but fail under sh. let performs arithmetic; expr does integer arithmetic, with each token separated by spaces; $[ ] also evaluates arithmetic.

if statements

The shell uses [ ] for condition tests. The spaces here matter: make sure there is a space before and after each bracket.

[ -f "somefile" ]   # test whether it is a file
[ -x "/bin/ls" ]    # test whether /bin/ls exists and is executable
[ -n "$var" ]       # test whether $var has a value
[ "$a" = "$b" ]     # test whether $a and $b are equal

Run man test to see all the types that test expressions can compare and check. Here is a simple if statement:

#!/bin/sh
if [ "$SHELL" = "/bin/bash" ]; then
    echo "your login shell is the bash (bourne again shell)"
else
    echo "your login shell is not bash but ${SHELL}"
fi

The variable $SHELL contains the name of the login shell; we compare it against /bin/bash to decide whether the current shell is bash.

You can also test with: test option filename, and inspect the result with echo $?. File test options include: -d -f -w -r -x -L. Numeric test options are: -eq (equal), -ne (not equal), -gt (greater than), -lt (less than), -le (less than or equal), -ge (greater than or equal).

The && and || operators

Those familiar with C may like the following expression:

[ -f /etc/shadow ] && echo "This computer uses shadow passwords"

Here && is a shortcut operator: if the expression on the left is true, the statement on the right is executed. You can think of it as logical AND. The line above means: if the file /etc/shadow exists, print "This computer uses shadow passwords". Shell programming also has an OR operation (||), for example:

#!/bin/sh
mailfolder=/var/spool/mail/james
[ -r "$mailfolder" ] || { echo "Can not read $mailfolder" ; exit 1; }
echo "$mailfolder has mail from:"
grep "^From " $mailfolder

This script first tests whether mailfolder is readable. If it is, it prints the "From" lines in the file. If it is not, the OR operation takes effect, and the script prints an error message and exits. There is a subtlety here: we need two commands, echo to print the error message and exit to quit the program, and we group them with braces, as an anonymous function, so they act as a single command. (Ordinary functions are covered later.) Everything achievable with AND and OR operators can also be done with if expressions, but using && and || is often more convenient.

case statements

A case expression matches a given string, not a number (do not confuse it with C's switch...case):

case ... in
...) do something here ;;
...) do something else ;;
esac

The file command can identify the file type of a given file, e.g. file lf.gz prints:

lf.gz: gzip compressed data, deflated, original filename, last modified: Mon Aug 27 23:09:18 2001, os: Unix

We use this to write a script called smartzip, which automatically decompresses bzip2, gzip and zip archives:

#!/bin/sh
ftype=`file "$1"`   # note that ' and ` are different
case "$ftype" in
"$1: Zip archive"*)
    unzip "$1" ;;
"$1: gzip compressed"*)
    gunzip "$1" ;;
"$1: bzip2 compressed"*)
    bunzip2 "$1" ;;
*) echo "File $1 can not be uncompressed with smartzip";;
esac

You may have noticed the special variable $1 above: it contains the first argument passed to the script. That is, when we run:

smartzip articles.zip

$1 is the string articles.zip.

select statements

The select expression is a bash extension, well suited to interactive use. The user can choose from a set of values:

select var in ... ; do
    break;
done
.... now $var can be used ....

A simple example:

#!/bin/sh
echo "What is your favourite OS?"
select var in "Linux" "Gnu Hurd" "Free BSD" "Other"; do
    break;
done
echo "You have selected $var"

If running this script produces "select: NOT FOUND", change #!/bin/sh to #!/bin/bash. The script runs like this:

What is your favourite OS?
1) Linux
2) Gnu Hurd
3) Free BSD
4) Other
#? 1
You have selected Linux

while/for loops

The shell offers the following loop:

while ...; do
    ....
done

The while loop keeps running as long as the test expression is true. The keyword "break" exits the loop, and "continue" skips the rest of the current iteration and jumps straight to the next one.

The for loop walks a list of strings (separated by spaces) and assigns each to a variable:

for var in ....; do
    ....
done

The following example prints A, B and C in turn:

#!/bin/sh
for var in A B C ; do
    echo "var is $var"
done

Here is a practical script, showrpm, which prints summary information for a number of RPM packages:

#!/bin/sh
# list a content summary of a number of RPM packages
# USAGE: showrpm rpmfile1 rpmfile2 ...
# EXAMPLE: showrpm /cdrom/RedHat/RPMS/*.rpm
for rpmpackage in $*; do
    if [ -r "$rpmpackage" ];then
        echo "=============== $rpmpackage =============="
        rpm -qi -p $rpmpackage
    else
        echo "ERROR: cannot read file $rpmpackage"
    fi
done

Here we meet the second special variable, $*, which contains all the command-line arguments. If you run showrpm openssh.rpm w3m.rpm webgrep.rpm, then $* contains the three strings openssh.rpm, w3m.rpm and webgrep.rpm.

Some special characters in the shell

Quotes

Before passing any arguments to a program, the shell expands wildcards and variables: wildcards (such as *) are replaced with the matching file names, and variables are replaced with their values. Quotes can prevent this expansion. First an example; assume the current directory contains two jpg files, mail.jpg and tux.jpg.

#!/bin/sh
echo *.jpg

This prints:

mail.jpg tux.jpg

Quotes (single or double) prevent expansion of the wildcard *:

#!/bin/sh
echo "*.jpg"
echo '*.jpg'

This prints:

*.jpg
*.jpg

Single quotes are stricter: they prevent any variable expansion. Double quotes prevent wildcard expansion but allow variable expansion:

#!/bin/sh
echo $SHELL
echo "$SHELL"
echo '$SHELL'

The output is:

/bin/bash
/bin/bash
$SHELL

There is one more way to prevent expansion: the escape character, a backslash:

echo \*.jpg
echo \$SHELL

The output is:

*.jpg
$SHELL

Here documents

A here document is a nice way to pass several lines of text to a command. It is useful to include a piece of help text in every script; here documents let us do so without echoing line after line. A here document starts with << followed by a string, and the same string must appear again at the end of the here document. In the following example, which renames multiple files, a here document prints the help:

#!/bin/sh
# we have less than 3 arguments. Print the help text:
if [ $# -lt 3 ] ; then
cat <<HELP
ren -- renames a number of files using sed regular expressions

USAGE: ren 'regexp' 'replacement' files...
EXAMPLE: rename all *.HTM files in *.html:
  ren 'HTM$' 'html' *.HTM

HELP
exit 0
fi
OLD="$1"
NEW="$2"
# The shift command removes one argument from the list of
# command line arguments.
shift
shift
# $* contains now all the files:
for file in $*; do
    if [ -f "$file" ] ; then
        newfile=`echo "$file" | sed "s/${OLD}/${NEW}/g"`
        if [ -f "$newfile" ]; then
            echo "ERROR: $newfile exists already"
        else
            echo "renaming $file to $newfile ..."
            mv "$file" "$newfile"
        fi
    fi
done

This example is a little complex, so let us walk through it. The first if expression tests whether fewer than 3 command-line arguments were given (the special variable $# holds the number of arguments). If so, the help text is passed to the cat command, which prints it on the screen, and the program exits. If there are 3 or more arguments, we assign the first argument to the variable OLD and the second to NEW. Next, we use the shift command twice to remove the first and second arguments from the argument list, so the original third argument becomes the first argument in $*. Then the loop begins: the command-line arguments are assigned one by one to the variable $file. For each, we test whether the file exists; if so, the sed search-and-replace produces the new file name, and the result of the backquoted command is assigned to newfile. Now we have both the old and the new name, and we rename the file with mv.

Functions in the shell

Once you write somewhat complex scripts, you will find the same code used in several places, and functions become very handy. A function looks roughly like this:

functionname()
{
    # inside the body $1 is the first argument given to the function
    # $2 the second ...
    body
}

Functions must be declared at the start of each script.

Below is a script called xtitlebar, which changes the title of a terminal window. It uses a function named help, which is called twice in the script:

#!/bin/sh
# vim: set sw=4 ts=4 et:
help()
{
cat <<HELP
xtitlebar -- change the name of an xterm, gnome-terminal or kde konsole

USAGE: xtitlebar "string_for_titelbar"

OPTIONS: -h help text

EXAMPLE: xtitlebar "cvs"

HELP
exit 0
}
# in case of error or if -h is given we call the function help:
[ -z "$1" ] && help
[ "$1" = "-h" ] && help
# send the escape sequence to change the xterm titelbar:
echo -e "\033]0;$1\007"

Providing help in a script is good programming practice; it makes the script easier for other users (and yourself) to use and understand.

Command-line arguments

We have already seen the special variables $* and $1, $2 ... $9, which hold the arguments the user typed on the command line. So far we have only met simple command-line syntax (a few mandatory arguments and a -h option for help). When writing more complex programs, you will want more customisable options. The usual convention is to prefix every optional argument with a minus sign, followed by its value where applicable (such as a file name).

There are many ways to parse the input arguments, but the case-based approach below is a good one:

#!/bin/sh
help()
{
cat <<HELP
This is a generic command line parser demo.

USAGE EXAMPLE: cmdparser -l hello -f -- -somefile1 somefile2

HELP
exit 0
}
while [ -n "$1" ]; do
case $1 in
    -h) help;shift 1;;       # function help is called
    -f) opt_f=1;shift 1;;    # variable opt_f is set
    -l) opt_l=$2;shift 2;;   # -l takes an argument - shift by 2
    --) shift;break;;        # end of options
    -*) echo "error: no such option $1.
-h for help";exit 1;;
    *) break;;
esac
done

echo "opt_f is $opt_f"
echo "opt_l is $opt_l"
echo "first arg is $1"
echo "2nd arg is $2"

You can run the script like this:

cmdparser -l hello -f -- -somefile1 somefile2

It returns:

opt_f is 1
opt_l is hello
first arg is -somefile1
2nd arg is somefile2

How does it work? The script loops over all the input arguments, comparing each against the case expression; on a match it sets a variable and removes the argument with shift. By Unix convention, the options (the arguments starting with a minus sign) come first.

Example shell scripts

General approach

Let us discuss the general steps of writing a script. Any good script should offer help and accept input arguments. It is a very good idea to write a framework script (framework.sh) containing the skeleton most scripts need; then, when starting a new script, you can run:

cp framework.sh myscript

and insert your own functions.

Let us look at two examples.

Binary-to-decimal conversion

The script b2d converts a binary number (e.g. 1101) into the corresponding decimal number. It is also an example of doing arithmetic with expr:

#!/bin/sh
# vim: set sw=4 ts=4 et:
help()
{
cat <<HELP
b2d -- convert binary to decimal

USAGE: b2d binarynum

OPTIONS: -h help text

EXAMPLE: b2d 111010
will return 58
HELP
exit 0
}
error()
{
    # print an error and exit
    echo "$1"
    exit 1
}
lastchar()
{
    # return the last character of a string in $rval
    if [ -z "$1" ]; then
        # empty string
        rval=""
        return
    fi
    # wc puts some space behind the output this is why we need sed:
    numofchar=`echo -n "$1" | wc -c | sed 's/ //g' `
    # now cut out the last char
    rval=`echo -n "$1" | cut -b $numofchar`
}
chop()
{
    # remove the last character in string and return it in $rval
    if [ -z "$1" ]; then
        # empty string
        rval=""
        return
    fi
    # wc puts some space behind the output this is why we need sed:
    numofchar=`echo -n "$1" | wc -c | sed 's/ //g' `
    if [ "$numofchar" = "1" ]; then
        # only one char in string
        rval=""
        return
    fi
    numofcharminus1=`expr $numofchar "-" 1`
    # now cut all but the last char:
    rval=`echo -n "$1" | cut -b -$numofcharminus1`
    # the original rval=`echo -n "$1" | cut -b 0-${numofcharminus1}` failed at
    # runtime: cut counts from 1, so it should be cut -b 1-${numofcharminus1}
}
while [ -n "$1" ]; do
case $1 in
    -h) help;shift 1;;   # function help is called
    --) shift;break;;    # end of options
    -*) error "error: no such option $1.
-h for help";;
    *) break;;
esac
done

# The main program
sum=0
weight=1
# one arg must be given:
[ -z "$1" ] && help
binnum="$1"
binnumorig="$1"

while [ -n "$binnum" ]; do
    lastchar "$binnum"
    if [ "$rval" = "1" ]; then
        sum=`expr "$weight" "+" "$sum"`
    fi
    # remove the last position in $binnum
    chop "$binnum"
    binnum="$rval"
    weight=`expr "$weight" "*" 2`
done
echo "binary $binnumorig is decimal $sum"

The algorithm uses the decimal weights of the binary digits (1, 2, 4, 8, 16, ...). For example, binary "10" converts to decimal as:

0 * 1 + 1 * 2 = 2

To get the individual binary digits we use the lastchar function, which counts the characters with wc -c and then extracts the last one with cut. The chop function removes the last character.

File rotation

You may have this need and handle it this way: saving all outgoing mail to a file. After a few months the file grows so large that access to it slows down. The script rotatefile below solves the problem: it renames the mail archive (say, outmail) to outmail.1, the old outmail.1 becomes outmail.2, and so on.

#!/bin/sh
# vim: set sw=4 ts=4 et:
ver="0.1"
help()
{
cat <<HELP
rotatefile -- rotate the file name

USAGE: rotatefile filename

OPTIONS: -h help text

EXAMPLE: rotatefile out
This will e.g rename out.2 to out.3, out.1 to out.2, out to out.1
and create an empty out-file

The max number is 10

version $ver
HELP
exit 0
}
error()
{
    echo "$1"
    exit 1
}
while [ -n "$1" ]; do
case $1 in
    -h) help;shift 1;;
    --) break;;
    -*) echo "error: no such option $1. -h for help";exit 1;;
    *) break;;
esac
done

# input check:
if [ -z "$1" ] ; then
    error "ERROR: you must specify a file, use -h for help"
fi
filen="$1"
# rename any .1 , .2 etc file:
for n in 9 8 7 6 5 4 3 2 1; do
    if [ -f "$filen.$n" ]; then
        p=`expr $n + 1`
        echo "mv $filen.$n $filen.$p"
        mv $filen.$n $filen.$p
    fi
done
# rename the original file:
if [ -f "$filen" ]; then
    echo "mv $filen $filen.1"
    mv $filen $filen.1
fi
echo touch $filen
touch $filen

How does it work? After checking that the user supplied a file name, the script loops from 9 down to 1: file.9 is renamed to file.10, file.8 to file.9, and so on. After the loop, the original file is renamed to file.1 and an empty file of the original name is created (touch $filen).

Script debugging

The simplest debugging aid is of course the echo command. You can print variable values with echo anywhere you suspect an error; this is where most shell programmers spend 80% of their debugging time. The advantage of a shell script is that no recompilation is needed, and inserting an echo costs almost nothing.

The shell also has a real debug mode. If the script "strangescript" misbehaves, run:

sh -x strangescript

This executes the script and prints each command as it runs, with variables expanded.

The shell also has a mode that checks the syntax without executing the script:

sh -n your_script

This reports all syntax errors.

We hope you can now start writing your own shell scripts. Have fun! :)
Documentation for the Murnaghan Fit Code

Content of this archive:
murn.f - the source code of the fit program
readme.html - this documentation
murn_zero.f - the source code of the fit code, including zero-point vibrations

What does it do?
The murn.x code fits any given pairs of a_lat/energy to the Murnaghan equation of state and calculates the equilibrium lattice constant and the bulk modulus.

How to build the executable
Pretty easy. On AIX: xlf -o murn.x murn.f
The code does not need any external libraries, so it should compile cross-platform; AIX is the one I tested.

How to set up an input file
The general syntax for the murn.f input is:
3          : unit of energy for input values (1=Rydberg, 2=eV, 3=Hartree)
0.25       : i.e. volume of unit cell/cell used
7.3 8 50   : minimal/maximal value for lattice constant, number of points to calculate (30-50 will do)
11         : number of alat/energy pairs
1 0.2      : alat/energy (11 pairs, one pair per line)

How to run it
Pretty easy: murn.x < inputfile > outputfile
The output file is self-explanatory (I hope!).

What's the difference between murn_zero.f and murn.f?
murn_zero.f offers the possibility of calculating the zero-point energy. It needs additional input at line 4 of the input file: you have to provide the Grueneisen constant, the Debye temperature, and the atomic volume at the Debye temperature. The syntax is:
3          : unit of energy for input values (1=Rydberg, 2=eV, 3=Hartree)
0.25       : i.e. volume of unit cell/cell used
7.3 8 50   : minimal/maximal value for lattice constant, number of points to calculate (30-50 will do)
.true. 2.19 428 109.9 : calculate zero-point vibrations (true), Grueneisen constant, T(Debye), volume/atom at T(Debye)
11         : number of alat/energy pairs
1 0.2      : alat/energy (11 pairs, one pair per line)

Note: this code has only been tested for fcc-Al!
Some values for other materials:

       Gamma   T(Debye)   Volume/atom at T(Debye)
Al:    2.19    428        109.9
Fe:    1.66    467        78.95
Cu:    2.00    343        78.92

What kind of strange things can happen?
The code is pretty reliable (in my experience), but with some strange input parameters it cannot properly perform the interpolation. It reports this error by writing NaNQ in the output. This happened while calculating Al with nlcc pseudopotentials at some nlcc radii and at low cutoff energies. Check the form of your parabola: sometimes it has "2 minima" and the min/max finder does not find the global one.

Further comments, questions, ...
Anything else can be understood by examining the source code directly. Any further improvements, questions, and bug reports should go to the mailing list.
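For a quick sanity check of the fit outside Fortran, the Murnaghan equation of state itself is short. The sketch below is not part of murn.f; it uses assumed parameter names (E0, B0, Bp, V0) and made-up illustrative values, evaluates E(V) on a volume grid much like the code's "number of points to calculate", and confirms that the energy minimum falls at the equilibrium volume:

```python
def murnaghan_energy(V, E0, B0, Bp, V0):
    """Murnaghan equation of state:
    E(V) = E0 + B0*V/Bp * ((V0/V)**Bp / (Bp-1) + 1) - B0*V0/(Bp-1)"""
    return E0 + B0 * V / Bp * ((V0 / V) ** Bp / (Bp - 1) + 1.0) - B0 * V0 / (Bp - 1)

# Illustrative parameters (NOT fitted values): equilibrium energy, bulk
# modulus, its pressure derivative, and equilibrium volume.
E0, B0, Bp, V0 = -56.0, 0.005, 4.0, 110.0

# Sample a grid of volumes from 0.8*V0 to 1.2*V0, like the code's point grid:
grid = [V0 * (0.8 + 0.4 * k / 200.0) for k in range(201)]
V_min = min(grid, key=lambda V: murnaghan_energy(V, E0, B0, Bp, V0))
print(round(V_min, 6))  # 110.0: the minimum sits at the equilibrium volume V0
```

Analytically, dE/dV vanishes at V = V0 and E(V0) = E0, which is why the grid search recovers V0; the real code additionally fits E0, B0, Bp and V0 to the supplied alat/energy pairs.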
It's time to eliminate professional bias in China
http://www.scidev.net/en/science-and-innovation-policy/r-d/opinions/it-s-time-to-eliminate-professional-bias-in-china.html
Huafeng Wang, 26 April 2012
Science researchers need engineers to support their work. Flickr/Novartis AG
A culture of favouring science over technology comes at the expense of both technologists and research outputs, writes engineer Huafeng Wang.
Two scientists won China's Top Science and Technology award in February. Eighty-nine-year-old Wu Liangyong and 92-year-old Xie Jialin received a prize of five million yuan (US$800,000) each for achievements in architecture and physics, respectively. But for the eighth time since 2000, no top prize was awarded for the national Natural Science Award, raising the question of why the country seems to lack lasting and important scientific achievements in the natural sciences. One possibility is that researchers pay more attention to publishing articles than to the impact of their work on technology. The relationship between scientific research and technology in China is seriously flawed: not only do basic researchers often ignore technological development, but technologists can also ignore the important role of scientific research. We need to reward the contribution of both scientists and technical personnel, such as engineers, to scientific research programmes.
Scientists rule the roost
The Confucian philosopher Mencius said: "Those who labour with their minds govern others; those who labour with strength are governed by others." This is a fitting description of the research environment in China, where scientists control the work of technicians. Science tends to command more attention than technology, and researchers enjoy a high status. But we must acknowledge that technical staff provide support for researchers, such as by conducting routine experiments.
Scientific ideas cannot be realised without an engineer's assistance, while the work of engineering and technical personnel is best guided by the needs of scientific researchers. Yet the substantial contributions of technicians to scientific achievements are often neglected. And over time, this approach in China's scientific culture has caused a serious imbalance in the development of science and technology.
Technical input is missing
In 2010, nearly 130,000 Chinese papers were included in the Science Citation Index, the second highest number in the world. But China ranked only eighth in the world in the number of citations of these papers. This discrepancy should lead us to consider how much original Chinese research is respected and used by others.
Most research projects require a major portion of their funding to be used to buy laboratory instruments and reagents. But the nature of Chinese research culture has stifled the development of research equipment, which lags far behind that of developed countries. Our scientific ideas are mostly realised with the help of foreign research platforms. Some equipment could easily be constructed by our research teams, but under the current research evaluation system, writing a paper is seen as more valuable. This means that much of China's public research budget is spent on instruments or reagents from abroad.
Finally, the culture of paying more attention to science than technology undermines the professional motivation of technicians, and often leads to a shortfall in their number. As it is easier for a professor to publish a science paper than for an engineer to invent instrumentation, careers tend to advance through the publication route. The appointment system favours research scientists over engineers, and because benefits are linked to titles, many senior engineers lose passion for their work.
Equality needed for engineers
In fact, science and technology complement and enhance one another: technology drives progress in science, and science promotes the birth of new technology. Take the 1986 Nobel Prize in physics for the invention of the electron microscope: it promoted progress in science by allowing us to examine specimens at a fine scale.
Most countries do recognise the value of technology in their research culture. For example, achievement awards at Thompson Rivers University in Canada recognise not only scientific research and teaching staff, but also technical and management staff.
As things stand in China, it is difficult to imagine a researcher and a laboratory technician receiving the same benefits or professional status. This is the product of a social system dominated by a feudal mentality. It is understandable, but unfair, that administrative power has this influence over our professions. Ultimately, it is a loss for China's capacity for lasting and important scientific achievements, which depends on research and technical personnel cooperating with each other. New ideas need to be tested with new methods and instruments developed by technicians.
The most important and easiest reform that we can make to our research system is to remove bias by abolishing benefits linked to titles, and to treat each professional according to their ability and contribution.
Huafeng Wang is an engineer at the Research Center for Eco-Environmental Sciences (RCEES), Chinese Academy of Sciences (CAS). He can be contacted at hfwang@rcees.ac.cn.
Source: Guokr.com, 17 February 2012, 20:00

Why, in everyone's "in the eyes of..." meme, are the self we see and the self others see so different? One reason: when judging ourselves we weight potential, while for others we weight actual ability.

[By 沐沐知雪]

Editor's note: The recently popular "in the eyes of..." meme looks much like last year's "伤不起" catchphrase, but it has moved beyond seeking social recognition through simple ranting to expressing the gap between ideals and reality, between the individual and society. Why is there such a gap between how we see ourselves and how others see us? There are many reasons; one is that we judge from different angles: for ourselves we value potential, for others we value demonstrated ability.

The American poet Henry Wadsworth Longfellow said: "We judge ourselves by what we feel capable of doing, while others judge us by what we have already done." When we assess our own future potential and other people's, we always emphasise how much potential we have, yet find it hard to spot the latent strengths in others.

Why the double standard in assessing our own potential and others'?

What do relatives and friends ask most at New Year? Two questions: do you have a boyfriend or girlfriend, and what is your salary? Salary may be a decent yardstick of ability. If asked what your salary will be in five years, you will think: I have a good temperament, I get along well with my boss and colleagues, I am quick-minded, I learn fast, I work conscientiously... a talent like me will go far. Now assess someone of roughly equal ability: his grades at school were mediocre, he never joined any student societies, he is unremarkable at the company... so he probably will not change much.

Why this double standard? When evaluating ourselves we consider our many strengths and qualities; when evaluating others we become much stricter, judging them from their existing behaviour and everyday performance. This is because we know our own "true self" but only others' "actual self". We know our own past, present and future, what we aspire to, what our ideals are and how much effort we will devote to them; our self-assessment carries a great deal of emotion. We can hardly feel another person's ideals and struggles; we only know how he is now. So we rate our own potential higher. Moreover, when assessing ourselves we anticipate our future peak performance, the best we could possibly do; for other people, we tend to estimate their potential from their average level.

We know far more about ourselves than about others, so it is natural that potential assessments differ. When we fail at something, we find all kinds of excuses for ourselves: I was late because of an unusual traffic jam; he was late because he is lazy and overslept.

My potential is great because I never tried my hardest

Another reason we think our own potential is greater is the belief that we have not given our all, so that our displayed level is about the same as, or slightly below, other people's. Suppose I score 85 on an appraisal and a colleague scores 88. I will tell myself that he worked hard all year to beat me by 3 points, while I matched him effortlessly; if I were as diligent as he is, I would surely do better. We tend to cast ourselves as late bloomers whose real ability far exceeds current performance, while equating others' performance with their ability.

Psychologists Elanor F. Williams of the University of Florida and Thomas Gilovich and David Dunning of Cornell University ran a series of experiments on this question. In one, participants completed a questionnaire that began with this statement: "Many people's skill level does not match the level they display. Some people's performance falls short of their true level; others exceed it." Reading this, participants felt they were the ones who had not shown their true level. They then answered three questions: (1) rate yourself from 1 to 7 (1 = my potential far exceeds my displayed ability; 4 = my potential equals my current ability; 7 = my current ability already far exceeds my potential); (2) how would you like others to rate your potential relative to your current ability; (3) rate an acquaintance on the same scale. Participants' mean self-rating was 3.4; their mean rating of acquaintances was 4.3. Most rated themselves lower, meaning they saw their own potential as greater. In another experiment, the researchers found that a tennis player who kept losing to an opponent still felt he was the better player: not self-mockery, nor self-consolation, but a belief that his potential remained untapped.

Who cares most about my potential?
Williams and her colleagues ran a further experiment on how much we attend to our own potential versus other people's. Participants first played five rounds of a word game and were told that the game reflects a person's academic prospects. Afterwards, one group was told that their score was 71 and that their potential score, after extensive practice, was 93, while another person playing the game had scored 67 with a potential score of 88. The other group was given the reverse: their own scores were 67 and 88, the other person's 71 and 93. Participants were asked to memorise the four numbers; the "other participant" did not actually exist and was invented by the researchers.

The four memorised scores were then shown on a computer screen, two at a time, at random positions on the left and right. When the scores disappeared, a dot appeared at the position of one of them, and participants pressed a key as soon as they saw it: "C" if the dot was on the left, "M" if on the right.

The results showed that participants responded fastest when the dot appeared where their own potential score had been, and slowest when it appeared at the other person's potential score. This suggests we attend closely to our own potential while largely ignoring other people's, focusing instead on their actual ability and past performance. (In the article's figure, the vertical axis is reaction time to the dot; the horizontal axis distinguishes dots at one's own scores from dots at the other person's scores; black bars mark actual scores and grey bars potential scores.)

In everyday life, when job hunting we care about whether our potential can develop and whether the company has prospects; but the company values your present ability: what certificates you hold, what work experience you have, what prizes you have won. Others focus on your current ability. Your potential is invisible to them, and in any case it is only your own subjective estimate; nobody knows whether you will actually reach it. Hence the tragedy.

Reference:
Williams, E. F., Gilovich, T., & Dunning, D. (2012). Being all that you can be: the weighting of potential in assessments of self and others. Personality and Social Psychology Bulletin, 38(2), 143-154.

What is the "in the eyes of..." meme, and how did it catch on? See Guokr's round-up of the meme.
Reprinted with permission from Guokr.com.
Fortran: code for writing and reading binary (unformatted) files

! Write data in unformatted (binary) mode:
open(unit=10, file='restart', form='unformatted', status='unknown')
write(10) nx, ny, cfl, gamma, tcur, cpu
write(10) (((uc(i,j,m,0), i=1,nx), j=1,ny), m=1,4)
close(10)

! For each write statement, Fortran first writes a marker holding the byte
! count of the record, then the data, then the same marker again.
! So if nx and ny are integers and cfl, gamma, tcur, cpu are doubles, the
! payload of the first record is 2*4 + 4*8 = 40 bytes, and both of its
! markers hold the value 40.

! Because each read statement consumes one whole record, the file cannot
! be read back like this:
open(unit=10, file='restart', form='unformatted', status='unknown')
read(10) nx, ny, cfl, gamma, tcur, cpu
read(10) ((uc(i,j,1,0), i=1,nx), j=1,ny)
read(10) ((uc(i,j,2,0), i=1,nx), j=1,ny)   ! runtime error here
close(10)

! Correct read code (mirror the write statements record for record):
open(unit=10, file='restart', form='unformatted', status='old')
read(10) nx, ny, cfl, gamma, tcur, cpu
read(10) (((uc(i,j,m,0), i=1,nx), j=1,ny), m=1,4)
close(10)
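The record-marker layout described above can be inspected from another language. Below is a minimal Python sketch that parses one Fortran sequential unformatted record, assuming 4-byte little-endian record markers (the gfortran default; other compilers and the `-frecord-marker` option can change this). The sample values for nx, ny, cfl, gamma, tcur, cpu are made up for illustration, and the record is built in memory rather than read from a real `restart` file.

```python
import io
import struct

def read_record(f):
    """Read one Fortran sequential unformatted record: a 4-byte byte-count
    marker, the payload, then the same marker again. A 4-byte little-endian
    marker is assumed here; this is compiler-dependent."""
    head = f.read(4)
    if len(head) < 4:
        return None  # end of file
    (n,) = struct.unpack("<i", head)
    payload = f.read(n)
    (tail,) = struct.unpack("<i", f.read(4))
    if n != tail:
        raise ValueError("record markers disagree: %d vs %d" % (n, tail))
    return payload

# Build the header record from the write example above: two 4-byte
# integers (nx, ny) followed by four 8-byte doubles (cfl, gamma, tcur,
# cpu), i.e. 2*4 + 4*8 = 40 payload bytes, so both markers hold 40.
payload = struct.pack("<2i4d", 64, 64, 0.5, 1.4, 0.0, 0.0)
buf = io.BytesIO(struct.pack("<i", len(payload)) + payload
                 + struct.pack("<i", len(payload)))

rec = read_record(buf)
nx, ny, cfl, gamma, tcur, cpu = struct.unpack("<2i4d", rec)
print(nx, ny, cfl, gamma)  # 64 64 0.5 1.4
```

This also makes the failure mode above concrete: a tool that tries to read only part of a record must still skip to the end of it, because the next marker sits after the full 40-byte payload.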
http://books.google.com.hk/books?id=w2jvzWOMWOMCdq=thin+layer+chroma+bernard+friedhl=zh-CNsource=gbs_navlinks_s

Thin-Layer Chromatography, 4th ed. Bernard Fried and Joseph Sherma. CRC Press, 1999, 512 pages. "The fourth edition of this work emphasizes the general practices and instrumentation involving TLC and HPTLC, as well as their applications based on compound types, while providing an understanding of the underlying theory necessary for optimizing these techniques."

Outside the parentheses is my translation; inside the parentheses are my own comments, for reference only.

1# In most cases, substituting toluene for benzene, or n-hexane for n-pentane, in the developing solvent has little effect on Rf.
2# The effect on Rf of adjusting the proportion of the more polar solvent in the developing system is not linear; it has to be determined by testing.
3# In a two-component developing solvent, if the two components differ greatly in polarity, the more polar one should make up 0.05-5%; if the difference is small, 20-50%. (If the polarity difference is large and the more polar solvent already exceeds 10%, consider switching to a different solvent system.)
4# Although many solvent-optimization tools have been developed over the past 20 years, such as computer-assisted methods and two-dimensional plates, they are rarely seen in routine TLC work.
5# Solvent polarity values come from a semi-empirical scale measured on alumina; larger values mean stronger hydrogen-bonding ability. The values carry over to silica systems with some shifts, but the changes are small and the ordering stays the same.
6# (A recommended page on developing solvents and development techniques, well illustrated, systematic, and clear: http://www.chemalink.net/books/C/1342/1.html)
Project introduction: Internet-based distributed software development, an exploration of open source software engineering practice for undergraduates

Software development is no longer bound by time zones or national borders. Projects of all kinds (academic, commercial, and open source) may have their GUI designers in Boston, their database team in Bangalore, and their testers in Budapest and Buenos Aires. Working effectively in such teams is challenging: it requires strong communication skills, and makes proper use of coordination tools such as version control and ticketing systems more important than ever. But it is also an opportunity for students to build ties with peers across the country and around the world, and for instructors to breathe new life into old courses. Since September 2008, undergraduates from several universities in Canada and the US have been taking part in joint capstone projects in order to learn first-hand what distributed development is like. Each team has students from two or three schools, and uses a mix of agile and open source processes under the supervision of a faculty or industry lead. This FAQ describes the program's current incarnation; if you have other questions, please contact Greg Wilson.

What are the learning objectives of this program? For students to gain hands-on experience with real-world development practices in a realistic environment while simultaneously learning and applying some core concepts of computer science.

Is this part of an official university or government program? Not yet; we are still in the pilot phase (which is a professor's way of saying, "We're still learning how to do this").

Who is sponsoring this? Our sponsors page describes and thanks the companies and organizations that have helped to make this program possible.

What projects are available? Our project list for Winter 2010 is available on the Projects page.

Who can enrol? Undergraduates who are in their final four terms of study, have a strong B or A average, and are able to enrol in an appropriate course at their home institution.
(Typically, this course is called a capstone, a senior project, or a directed studies course.) To ensure balance, we limit the number of students per school; please contact your local faculty for details.

What schools are taking part? The following schools are confirmed for the Winter 2010 term: University of Alberta (Eleni Stroulia); Michigan State University (Titus Brown); Minnesota State University (Steven Case); Simon Fraser University (Ted Kirkpatrick); University of British Columbia (Meghan Allen); University of Toronto (Greg Wilson); University of the Virgin Islands (Steven Case); University of Waterloo (Michael Godfrey). See the Fall 2009 page for a list of schools that took part in that term.

Can I still take part if my school is not in this list? Sure, if you can persuade a faculty member at your school; please have them contact Greg Wilson for more information.

Can I take part more than once? Sure, if you can persuade a faculty member at your school. If you do come back, you can either stay with the same project or move to a new one.

Who chooses what projects students work on? The students themselves, on a first-come, first-served basis. We try to make sure that each team has roughly six students drawn from at least three universities. We also try to have at least two students from any university on any one project so that everyone will have a local partner.

How are tasks within a project allocated? This varies from team to team. In general, though, there's enough work for everyone to spend most of their time working on something they find interesting.

What skills do students need? The programming skills required vary widely from project to project: students are more likely to succeed in a Java-based project if they already speak Java, and more likely to do well on a cellphone project if they have some previous experience with handheld devices or wireless networks.
Students should also be familiar with, or willing to learn, version control, bug tracking, and other coordination tools. Keep in mind, though, that being able to set their own goals, manage their own time, and communicate with others is at least as important as knowing any particular programming language or operating system.

What do students find challenging about these projects? Cooperation, communication, and commitment. For many students, this is the first time they have had to set their own goals and deadlines, and some struggle with that freedom during the first few weeks.

What are the benefits to the students? Experience working in a distributed team on a meaningful project; peer contacts (social and professional networking); something cool to demo in interviews for jobs and graduate school.

Do potential employers and graduate supervisors actually look at these projects? Yes.

When do projects start and finish? Projects start at the beginning of term (September or January) and run to its end (December or April). Because different schools' calendars don't line up, some students may start or finish earlier or later than others. We do not currently accommodate students whose schools use a quarter system, and have no plans to run this program during the summer.

How much effort is expected from students? The same as any other course, i.e., about 8-10 hours/week.

Can I do this course at the same time as a co-op job or other industry placement? Only if your school's policy permits it, and even then, only if you can commit 8-10 hours/week.

How are projects managed? The organizers take care of week-by-week project management, though other faculty are very welcome to get involved as well.

How are projects graded? Grades are awarded by the local faculty organizer in consultation with the project lead. Grading schemes are tailored to individual teams and projects, and take into account the requirements of the courses in which students are officially registered.
(For example, a student who is registered in a senior course on Software Architecture may spend more time on design and documentation than on coding.) In many projects, students themselves propose grading schemes once they are familiar with the project. There is usually not a midterm or final exam, but some schools require students to do an end-of-term presentation and/or create a screencast to show what they have accomplished.

What development process do teams use? Standard software development processes are not well suited to students' realities: unlike professionals in industry, students usually have to work on several projects at once, and are almost always new to the technologies they're using and the problem domain they're working in. Based on past experience, the best fit is a mix of open source practices and Scrum:

- Every project keeps its work in a version control repository, uses tickets to track work items, etc.
- Teams are strongly encouraged to do code reviews.
- Each team has an hour-long online meeting each week to review progress, set goals, answer questions, and resolve outstanding issues.
- Work is usually done in two-week iterations. At the end of each iteration, each team member sets their goals for the next one, so that students have a chance to develop planning and estimation skills.
- Each team is required to produce a five-minute screencast demo of their work at the end of the term.

What kind of work do students do? All the things that real software projects need, including design, construction, testing, packaging, and documentation.

Do teams get to meet each other and their project leads? Yes; there is a three-day code sprint in Toronto near the start of term at which teams meet in person to discuss strategies for the term, attend team-building social events, and write lots of code.

How do team members communicate? Via the usual online tools, such as blogs, chat, mailing lists, and Skype.
Team members may also agree on something newer and trendier, such as FriendFeed, Twitter, or Google Wave.

What level of work is expected? (Almost) all of these projects are producing software for real-world use, so standards are high. Remember, 95% correct may be an A academically, but if 5% of an application is buggy, users aren't going to be happy.

Do the projects vary from term to term? Yes, although we try to keep projects rolling for several terms to reduce startup overheads.

Do students get help from students who have worked on the same project in previous terms? Yes, where possible. Unlike most university courses, we strongly encourage students to communicate with each other and with their predecessors.

How do code reviews work? Students post finished work (including tests) to their team's code review site. Another team member then reads it through and gives the author feedback on what needs to be fixed before it can be committed. Once the author has made all the suggested fixes and the reviewer gives the OK, the code is put into the version control repository for others to use.

How can I find out more? Please contact Greg Wilson or the organizer at your university for more details.

Project homepage
Source: http://www.oracle.com/technologies/open-source/index.html

Oracle is committed to offering choice, flexibility, and a lower cost of computing for end users. By investing significant resources in developing, testing, optimizing, and supporting open source technologies such as Linux, PHP, Apache, Eclipse, Berkeley DB, Xen, and InnoDB, Oracle is clearly embracing and offering open source solutions as a viable choice for development and deployment. We cannot stress enough the importance of using open standards, whether in the context of open source or non-open source software. Today, many customers are using Oracle together with open source technologies in mission-critical environments and are reaping the benefits of lower costs, easier manageability, higher availability, and reliability, along with performance and scalability advantages.

KEY OPEN SOURCE INITIATIVES

Linux: Oracle's technical contributions to Linux enhance and extend enterprise-class capabilities, and Oracle Unbreakable Linux delivers enterprise-quality support for Linux at a lower cost.

Xen: Oracle contributes heavily to feature development of the Xen mainline software, is a member of the Xen Advisory Board, and hosted Xen Summit 2009 at Oracle. Oracle VM, Oracle's next-generation server virtualization software, includes the Xen hypervisor.

Eclipse: Oracle is a strategic developer and board member of the Eclipse Foundation, contributing developers and leadership to three Eclipse projects: Dali JPA Tools, JavaServer Faces (JSF), and BPEL. Oracle has also donated Oracle TopLink to the open source community.

Berkeley DB: Oracle Berkeley DB is a family of open source, embeddable databases that allows developers to incorporate within their applications a fast, scalable, transactional database engine with industrial-grade reliability and availability. It is the most widely used open source database in the world, with deployments estimated at more than 200 million.
PHP: Oracle is committed to enabling the open source scripting language PHP for the enterprise with Zend Core for Oracle. Download your free copy of the Underground PHP and Oracle Manual, which explains how to use the PHP scripting language with the Oracle Database, from installation through optimization and management.

Open Source Tooling Projects: Oracle contributes to several open source tooling projects, including Project Trinidad (ADF Faces), Eclipse, and Spring.

InnoDB: Created by Oracle subsidiary Innobase Oy, InnoDB is the leading transactional storage engine for the popular MySQL open source database.

Free and Open Source Software: Everything you need to know about free and open source software from, and for, Oracle, including community projects, downloads, blogs, and more.