Simple Language

In this post I will discuss the importance of simple language in academic writing. Science is complex, so the language describing it should be plain: complex language makes it hard for readers to understand the academic content you want to convey. It is best to use simple, clear, concise language that readers can follow easily. Later in the post I also provide a table to help authors simplify the language in their articles.

Simple is best

The nature of reality and science are complicated enough without English getting in the way.

"Easy writing's curse is hard reading." -- Richard Brinsley Sheridan (1751-1816)

The origins and evolution of the universe, the Earth and life. The climate system and the effects of humans on the environment. Alternative energies and sustainability. The struggle to understand ageing, cancer and neurodegenerative diseases. These are only some of the issues currently being addressed by modern science, and they are all extremely complex. A commonly used approach in science to manage this complexity is to tackle one small part of the problem at a time, and it is through this incremental approach that the modern scientific endeavor has been so successful in helping us understand our universe. Nevertheless, the methods, technologies and language of science are also extremely complex. Our disciplines are replete with their own vocabularies of technical jargon, often with their own specific "grammars" in which only certain combinations of terms make scientific sense. Given this complexity of nature and of our scientific methods and language, clear and concise written scientific English is essential. Unfortunately, the English language is not intrinsically concise, although it is becoming more so. Compare, for example, the following pairs:

a majority of → most
a number of → many, several, some
at a rapid rate → rapidly
as a consequence of → because of
at this point in time → currently
based on the fact that → because
despite the fact that → although
due to the fact that → because
in order to → to
so as to → to
on the basis of → based on

On the left are terms that are perfectly correct, but not concise. The concise versions are on the right. The list on the left is a legacy from when English was more formal and "wordy".
These terms are still used frequently in much English writing, but in technical scientific writing they are increasingly being discarded in favor of the terms on the right. Many native English writers believe that using the wordy terms appears more intellectual or clever, and non-native writers have been taught the same. This is wrong. These wordy terms are fine in creative literature and poetry, and in other types of writing where style may be considered as important as substance, but in scientific writing we should be aiming for concision. In my time here in China, I've twice had Chinese people tell me about their school education in English. A common task would be for the teacher to tell the class to write, for example, 200 words on a particular topic. Frequent use of terms such as "in order to" and "at this point in time" was a good way to reach the word limit, but in scientific writing the objective is usually the opposite: to say as much as you can in as few words as possible. The less you use these wordy terms, the more space you have for the technical and often complex scientific language you should be focusing on. This is especially the case for abstracts, where upper word limits must be strictly adhered to.

Another simple way to improve concision involves the word "of"; or rather, avoiding it. As you know, this word indicates possession, as in "the slope of the mountain". This can be shortened to "the mountain slope", where the word "mountain" is now used essentially as an adjective describing the slope. Now take the short phrase "the slope of the mountain that is covered with forest". This can be shortened to "the forested mountain slope", where the adjective "forested" means the same as "covered with forest". The phrase has been reduced from ten words to four, and means exactly the same thing.
This is a very simple and general example of improving concision, but this process constitutes much of the editing I perform on English manuscripts written by non-native English writers. The opening quote of this post, "easy writing's curse is hard reading", means that writing the first thing that enters your head will usually convey most of what you want to say, but will often contain too many words and too much repetition. This can make it difficult and tiring to read. An initial draft of a piece of English writing can almost always be improved and made more concise. One of the great things about English is that although it has the capacity to be unnecessarily wordy, it is flexible enough for concision to be improved dramatically, as the example above showed. Unfortunately, this is often not easy, especially for non-native speakers and writers. Like native speakers, non-native speakers need to spend time both writing and reading in English to develop the writing (and re-writing) skills needed for optimal concision.

It is not just in English that this is an issue. The 17th-century French mathematician, physicist and theologian Blaise Pascal (1623-1662) famously wrote to a correspondent: "I would have written a shorter letter, but I did not have the time." He meant that improving the concision of his letter would have required much more thought and effort than the initial draft. Pascal wrote this in French, but it applies equally to making things concise in English.

Returning to a theme of an earlier post (Writing and the art of scientific reading), good readers make good writers. The more English text you read in your scientific field, and the greater the familiarity you gain with the scientific vocabulary and grammar of your discipline, the more easily you will be able to use them in your writing. This includes learning the most concise way to write things.
The nature of reality and science are complicated enough without wordy English getting in the way. Try your best to write concisely and clearly, and let your English language be an open doorway to the importance of your research, not an unwieldy barrier. Matthew Hughes, PhD Soil Sciences Editor Edanz Group China
Even a language editor can't edit your manuscript if they can't figure out what you want to express in your paper, or when your paper is ill-organized. Some manuscripts that have passed peer review, been revised by the authors, and been edited by editors are sent out to native speakers for language editing. Today, one language editor told me he had great difficulty continuing the language editing after doing only about two pages. He sent back the edited manuscript with many comments and questions in those two pages, along with a long explanation:

"I only touched the first part of the abstract since I 'smelled' difficulties in the body of the paper. I also did a spell check (US English). The names of reference authors and places should be carefully checked. The authors used inconsistent terminology when referring to the Wenchuan earthquake, indicating that they had not carefully reviewed their own work. The authors should remember that the international readership will not be as familiar with the Wenchuan earthquake as they are. My principal comments are contained in small font between the marks. From these comments, you will see why I believe the text is not acceptable scientific writing."

What follows is our conversation about this issue.

Language editor: I spent a couple of hours attempting to edit the paper. If the English translation is any reflection, even a distorted reflection, of the original Chinese, then the paper is poorly organized. The sections on "systems theory" require either elaboration with references or should be dropped completely. I do not doubt the data and the mathematics, but is the point of the paper to explain the conceptual model of systems thinking or a method of examining risks for debris flow? I'd say the authors are trying to do both in one paper, and this confuses the content. If I were the "professor", I'd call for a re-evaluation of the purpose of the paper and a complete rewrite. I'm afraid I'm the wrong person for this paper.

Waterlily: This paper has been peer-reviewed and primarily revised by one professor. Then the authors were asked to make a revision.
"The paper covers a topic of paramount importance. However, I think it has to be completely revised first because of the English, which is sometimes not understandable. Please note I am not a native speaker and I am the last person who can complain about English, but I believe that the paper has been sent without any care about this aspect." --- reviewer's comments

Language editor: I am happy to help with a paper of "paramount importance". But the English rendition leaves me a little helpless. I'll give you what I did so far and you might be able to see the problems as I encountered them.

Waterlily: Okay. I'll ask the authors to revise it once again. But I am afraid they can't cook a better dish in light of their present English proficiency.

Language editor: I maintain that many of the problems are organizational, with some of the carelessness you outlined. They are scientists, but that does not mean they are translators. Perhaps they should seek help from colleagues who have a better command of English.

Language editor: I just want to acknowledge that my criticisms are harsh. I've learned a lot in the last three years about the writing of Chinese scientists. As we both know, there is vast room for improvement, yet the content of most scientific writing is excellent. Since you have indicated its importance, I truly wish the best for the writers of this paper.
Classical Paper List on Machine Learning and Natural Language Processing, from Zhiyuan Liu

Hidden Markov Models
* Rabiner, L. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. (Proceedings of the IEEE, 1989)
* Freitag, D. and McCallum, A. Information Extraction with HMM Structures Learned by Stochastic Optimization. (AAAI'00)

Maximum Entropy
* Ratnaparkhi, A. A Maximum Entropy Model for POS Tagging. (1994)
* Berger, A., Della Pietra, S., and Della Pietra, V. A Maximum Entropy Approach to Natural Language Processing. (CL'1996)
* Ratnaparkhi, A. Maximum Entropy Models for Natural Language Ambiguity Resolution. PhD thesis, University of Pennsylvania, 1998.
* Chieu, H. L. A Maximum Entropy Approach to Information Extraction from Semi-Structured and Free Text. (AAAI'02)

MEMM
* McCallum, A. et al. Maximum Entropy Markov Models for Information Extraction and Segmentation. (ICML'00)
* Punyakanok, V. and Roth, D. The Use of Classifiers in Sequential Inference. (NIPS'01)

Perceptron
* Collins, M. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. (EMNLP'02)
* Li, Y., Bontcheva, K., and Cunningham, H. Using Uneven-Margins SVM and Perceptron for Information Extraction. (CoNLL'05)

SVM
* Zhang, Z. Weakly-Supervised Relation Classification for Information Extraction. (CIKM'04)
* Han, H. et al. Automatic Document Metadata Extraction Using Support Vector Machines. (JCDL'03)
* Finn, A. and Kushmerick, N. Multi-level Boundary Classification for Information Extraction. (ECML'2004)
* Grandvalet, Y. and Mariéthoz, J. A Probabilistic Interpretation of SVMs with an Application to Unbalanced Classification. (NIPS'05)

CRFs
* Lafferty, J. et al. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. (ICML'01)
* Wallach, H. Efficient Training of Conditional Random Fields. MSc thesis, 2002.
* Taskar, B., Abbeel, P., and Koller, D. Discriminative Probabilistic Models for Relational Data. (UAI'02)
* Sha, F. and Pereira, F. Shallow Parsing with Conditional Random Fields. (HLT/NAACL'2003)
* Taskar, B., Guestrin, C., and Koller, D. Max-Margin Markov Networks. (NIPS'2003)
* Sarawagi, S. and Cohen, W. W. Semi-Markov Conditional Random Fields for Information Extraction. (NIPS'04)
* Roark, B. et al. Discriminative Language Modeling with Conditional Random Fields and the Perceptron Algorithm. (ACL'2004)
* Wallach, H. M. Conditional Random Fields: An Introduction. (2004)
* Kristjansson, T., Culotta, A., Viola, P., and McCallum, A. Interactive Information Extraction with Constrained Conditional Random Fields. (AAAI'2004)
* Lafferty, J., Zhu, X., and Liu, Y. Kernel Conditional Random Fields: Representation and Clique Selection. (ICML'2004)

Topic Models
* Hofmann, T. Probabilistic Latent Semantic Indexing. (SIGIR'1999)
* Blei, D. et al. Latent Dirichlet Allocation. (JMLR'2003)
* Griffiths, T. L. and Steyvers, M. Finding Scientific Topics. (PNAS'2004)

POS Tagging
* Kupiec, J. Robust Part-of-Speech Tagging Using a Hidden Markov Model. (Computer Speech and Language, 1992)
* Schutze, H. and Singer, Y. Part-of-Speech Tagging Using a Variable Memory Markov Model. (ACL'1994)
* Ratnaparkhi, A. A Maximum Entropy Model for Part-of-Speech Tagging. (EMNLP'1996)

Noun Phrase Extraction
* Xun, E., Huang, C., and Zhou, M. A Unified Statistical Model for the Identification of English baseNP. (ACL'00)

Named Entity Recognition
* McCallum, A. and Li, W. Early Results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-enhanced Lexicons. (CoNLL'2003)
* Fresko, M. et al. A Hybrid Approach to NER by MEMM and Manual Rules. (CIKM'2005)

Chinese Word Segmentation
* Peng, F. et al. Chinese Segmentation and New Word Detection Using Conditional Random Fields. (COLING 2004)
Document Data Extraction
* McCallum, A., Freitag, D., and Pereira, F. Maximum Entropy Markov Models for Information Extraction and Segmentation. (ICML'2000)
* Pinto, D., McCallum, A., et al. Table Extraction Using Conditional Random Fields. (SIGIR 2003)
* Peng, F. and McCallum, A. Accurate Information Extraction from Research Papers Using Conditional Random Fields. (HLT-NAACL'2004)
* Carvalho, V. and Cohen, W. Learning to Extract Signature and Reply Lines from Email. In Proc. of the Conference on Email and Anti-Spam. (CEAS'04)
* Tang, J., Li, H., Cao, Y., and Tang, Z. Email Data Cleaning. (SIGKDD'05)
* Viola, P. and Narasimhan, M. Learning to Extract Information from Semi-structured Text Using a Discriminative Context Free Grammar. (SIGIR'05)
* Hu, Y., Li, H., Cao, Y., Meyerzon, D., Teng, L., and Zheng, Q. Automatic Extraction of Titles from General Documents Using Machine Learning. (Information Processing and Management, 2006)

Web Data Extraction
* Quattoni, A., Collins, M., and Darrell, T. Conditional Random Fields for Object Recognition. (NIPS'2004)
* Hu, Y., Xin, G., Song, R., Hu, G., Shi, S., Cao, Y., and Li, H. Title Extraction from Bodies of HTML Documents and Its Application to Web Page Retrieval. (SIGIR'05)
* Zhu, J. et al. Mutual Enhancement of Record Detection and Attribute Labeling in Web Data Extraction. (SIGKDD 2006)

Event Extraction
* Uchimoto, K., Ma, Q., Murata, M., Ozaku, H., and Isahara, H. Named Entity Extraction Based on a Maximum Entropy Model and Transformation Rules. (ACL'2000)
* Zhou, G. and Su, J. Named Entity Recognition Using an HMM-based Chunk Tagger. (ACL'2002)
* Chieu, H. L. and Ng, H. T. Named Entity Recognition: A Maximum Entropy Approach Using Global Information. (COLING'2002)
* Li, W. and McCallum, A. Rapid Development of Hindi Named Entity Recognition Using Conditional Random Fields and Feature Induction. (ACM Trans. Asian Lang. Inf. Process., 2003)

Question Answering
* Srihari, R. K. and Li, W. Information Extraction Supported Question Answering. (TREC'1999)
* Nyberg, E. et al. The JAVELIN Question-Answering System at TREC 2003: A Multi-Strategy Approach with Dynamic Planning. (TREC'2003)

Natural Language Parsing
* Peshkin, L. and Pfeffer, A. Bayesian Information Extraction Network. (IJCAI'2003)
* Lim, J.-H. et al. Semantic Role Labeling Using Maximum Entropy Model. (CoNLL'2004)
* Cohn, T. et al. Semantic Role Labeling with Tree Conditional Random Fields. (CoNLL'2005)
* Toutanova, K., Haghighi, A., and Manning, C. D. Joint Learning Improves Semantic Role Labeling. (ACL'2005)

Shallow Parsing
* Pla, F., Molina, A., and Prieto, N. Improving Text Chunking by Means of Lexical-Contextual Information in Statistical Language Models. (CoNLL'2000)
* Zhou, G., Su, J., and Tey, T. Hybrid Text Chunking. (CoNLL'2000)
* Sha, F. and Pereira, F. Shallow Parsing with Conditional Random Fields. (HLT-NAACL'2003)

Acknowledgement: Dr. Hang Li, for the original paper list.
Scopus TopCited is an evaluation feature from Scopus that lists the 20 most highly cited papers of the past 3-5 years in each of 26 subject areas. For individual subject areas, see http://www.info.sciverse.com/topcited/

Top 20 cited articles in all subject areas (2007-2011):

1. A short history of SHELX. Sheldrick, G.M. (2007), Acta Crystallographica Section A: Foundations of Crystallography, Volume 64, Issue 1, Pages 112-122. Cited by: 13,006
2. MEGA4: Molecular Evolutionary Genetics Analysis (MEGA) software version 4.0. Tamura, K. (2007), Molecular Biology and Evolution, Volume 24, Issue 8, Pages 1596-1599. Cited by: 5,880
3. Cancer statistics, 2008. Jemal, A. (2008), CA: A Cancer Journal for Clinicians, Volume 58, Issue 2, Pages 71-96. Cited by: 4,299
4. Cancer statistics, 2007. Jemal, A. (2007), CA: A Cancer Journal for Clinicians, Volume 57, Issue 1, Pages 43-66. Cited by: 4,061
5. Review of Particle Physics. Amsler, C. (2008), Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, Volume 667, Issue 1-5, Pages 1-6. Cited by: 2,861
6. Structure validation in chemical crystallography. Spek, A.L. (2009), Acta Crystallographica Section D: Biological Crystallography, Volume 65, Issue 2, Pages 148-155. Cited by: 2,810
7. The rise of graphene. Geim, A.K. (2007), Nature Materials, Volume 6, Issue 3, Pages 183-191. Cited by: 2,791
8. Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls. Burton, P.R. (2007), Nature, Volume 447, Issue 7145, Pages 661-678. Cited by: 2,528
9. Three-year Wilkinson Microwave Anisotropy Probe (WMAP) observations: Implications for cosmology. Spergel, D.N. (2007), Astrophysical Journal, Supplement Series, Volume 170, Issue 2, Pages 377-408. Cited by: 2,475
10. Cancer statistics, 2009. Jemal, A. (2009), CA: A Cancer Journal for Clinicians, Volume 59, Issue 4, Pages 225-249. Cited by: 2,367
11. Induction of Pluripotent Stem Cells from Adult Human Fibroblasts by Defined Factors. Takahashi, K. (2007), Cell, Volume 131, Issue 5, Pages 861-872. Cited by: 1,972
12. Clustal W and Clustal X version 2.0. Larkin, M.A. (2007), Bioinformatics, Volume 23, Issue 21, Pages 2947-2948. Cited by: 1,931
13. Iron-based layered superconductor La[O1-xFx]FeAs (x = 0.05-0.12) with Tc = 26 K. Kamihara, Y. (2008), Journal of the American Chemical Society, Volume 130, Issue 11, Pages 3296-3297. Cited by: 1,923
14. Five-year Wilkinson Microwave Anisotropy Probe observations: Cosmological interpretation. Komatsu, E. (2009), Astrophysical Journal, Supplement Series, Volume 180, Issue 2, Pages 330-376. Cited by: 1,538
15. Induced pluripotent stem cell lines derived from human somatic cells. Yu, J. (2007), Science, Volume 318, Issue 5858, Pages 1917-1920. Cited by: 1,531
16. 2007 Guidelines for the Management of Arterial Hypertension: The Task Force for the Management of Arterial Hypertension of the European Society of Hypertension (ESH) and of the European Society of Cardiology (ESC). Mancia, G. (2007), Journal of Hypertension, Volume 25, Issue 6, Pages 1105-1187. Cited by: 1,495
17. PLINK: A tool set for whole-genome association and population-based linkage analyses. Purcell, S. (2007), American Journal of Human Genetics, Volume 81, Issue 3, Pages 559-575. Cited by: 1,476
18. Chromatin Modifications and Their Function. Kouzarides, T. (2007), Cell, Volume 128, Issue 4, Pages 693-705. Cited by: 1,449
19. Effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular causes. Nissen, S.E. (2007), New England Journal of Medicine, Volume 356, Issue 24, Pages 2457-2471. Cited by: 1,436
20. Sunitinib versus interferon alfa in metastatic renal-cell carcinoma. Motzer, R.J. (2007), New England Journal of Medicine, Volume 356, Issue 2, Pages 115-124. Cited by: 1,387
Making the Social World: The Structure of Human Civilization John Searle (Author) There are few more important philosophers at work today than John Searle, a creative and contentious thinker who has shaped the way we think about mind and language. Now he offers a profound understanding of how we create a social reality--a reality of money, property, governments, marriages, stock markets and cocktail parties. The paradox he addresses in Making the Social World is that these facts only exist because we think they exist and yet they have an objective existence. Continuing a line of investigation begun in his earlier book The Construction of Social Reality, Searle identifies the precise role of language in the creation of all "institutional facts." His aim is to show how mind, language and civilization are natural products of the basic facts of the physical world described by physics, chemistry and biology. Searle explains how a single linguistic operation, repeated over and over, is used to create and maintain the elaborate structures of human social institutions. These institutions serve to create and distribute power relations that are pervasive and often invisible. These power relations motivate human actions in a way that provides the glue that holds human civilization together. Searle then applies the account to show how it relates to human rationality, the freedom of the will, the nature of political power and the existence of universal human rights. In the course of his explication, he asks whether robots can have institutions, why the threat of force so often lies behind institutions, and he denies that there can be such a thing as a "state of nature" for language-using human beings. http://www.amazon.com/Making-Social-World-Structure-Civilization/dp/0195396170/ref=sr_1_1?s=booksie=UTF8qid=1299057125sr=1-1 Philosophy in a New Century: Selected Essays John R. Searle (Author) John R. 
Searle has made profoundly influential contributions to three areas of philosophy: philosophy of mind, philosophy of language, and philosophy of society. This volume gathers together in accessible form a selection of his essays in these areas. They range widely across social ontology, where Searle presents concise and informative statements of positions developed in more detail elsewhere; artificial intelligence and cognitive science, where Searle assesses the current state of the debate and develops his most recent thoughts; and philosophy of language, where Searle connects ideas from various strands of his work in order to develop original answers to fundamental questions. There are also explorations of the limitations of phenomenological inquiry, the mind-body problem, and the nature and future of philosophy. This rich collection from one of America's leading contemporary philosophers will be valuable for all who are interested in these central philosophical questions. http://www.amazon.com/Philosophy-New-Century-Selected-Essays/dp/0521731585/ref=sr_1_9?s=booksie=UTF8qid=1299057125sr=1-9 Freedom and Neurobiology: Reflections on Free Will, Language, and Political Power (Columbia Themes in Philosophy) John Searle (Author) "This engaging small volume serves as a token reminder of how masterfully Searle manages to combine philosophical innovation with clarity of prose." -- Constantine Sandis, Metapsychology "Clear and engaging." -- Randall J. Russac, Science Books and Films "Searle is a beacon of accessible expertise, a throwback to a time when philosophy was part of the public debate." -- David Papineau, Times Literary Supplement "[A] slim, elegantly written and intellectually rigorous volume." -- British Journal of Psychiatry "A brief, clearly articulated account by one of the world's foremost philosophers." 
-- Henry Stapp, Journal of Consciousness Studies, Volume 15, No 7 (2008) "Perhaps most importantly, it sets forth a suggestive vision of the systematic connections across various philosophical fields and avenues for their further exploration." -- Daniel K. Silber, Philosophy in Review http://www.amazon.com/Freedom-Neurobiology-Reflections-Political-Philosophy/dp/0231137532/ref=sr_1_10?s=booksie=UTF8qid=1299057125sr=1-10 Mind, Language, and Society: Philosophy in the Real World John R. Searle (Author) John Searle's summation of earlier writings is not just an essential tie-up volume for existing readers; it is also a perfect introduction to the work of one of the clearest heads in the philosophy of mind. Searle's book is a riposte to all those academics who make a career out of contradicting and complicating such default positions as the existence of an external reality, the reality of personal consciousness, and the reasonable fit of language to the perceived world. Certainly, we should examine these positions! But the first duty of philosophy, Searle argues, is that it should attempt to accommodate what is known. As far as we can tell, for example, consciousness is a biological product, but there is a long-running contention between the materialists--whose reductive descriptions of consciousness arrive, finally, at an embarrassed denial that consciousness exists at all--and the dualists, who cannot describe consciousness without evoking some supernatural involvement. Neither position is tenable--each offers some corrective to the other. The good explanation is in there somewhere, but the sheer intractability of the debate won't let it be expressed. In situations like this, Searle argues, it is always the terms that are wrong. Terms, mind you, that in this case include "matter," "mind," "physical," and "mental"! Searle--married as he is to common sense--is of necessity one of our most iconoclastic and creative thinkers. 
--Simon Ings, Amazon.co.uk --This text refers to an out of print or unavailable edition of this title. From Publishers Weekly For years, Searle (Intentionality; The Mystery of Consciousness; Minds, Brains, and Science), a professor of philosophy at UC-Berkeley, has battled against philosophical fashion to insist that the world is, in fact, intelligible to the human mind. This may sound unremarkable to laypeople. But, as Searle remarks, at a time when postmodernism and deconstruction are in vogue, intellectuals, to be taken seriously, often must believe that different cultures have different rationalities and that the world as a whole is unintelligible. Searle, however, defends the naturalistic belief that there does exist a real world, which is perceivable and comprehensible and is not changed by the angle of our observation. Among his most forceful arguments are that consciousness is a genuine phenomenon caused by knowable physical processes; that intention is real, produced by causal mechanisms in the brain; and that language expands the possibilities of intentionality. In an interesting aside, Searle speculates that contemporary thinkers reject an objectivist theory because of "an urge to power." They don't want to be answerable to the world but for the world to be answerable to them. To Searle, however, realism "is not a theory at all but the framework within which it is possible to have theories." Copyright 1998 Reed Business Information, Inc. --This text refers to an out of print or unavailable edition of this title. http://www.amazon.com/Mind-Language-Society-Philosophy-World/dp/0465045219/ref=sr_1_5?s=booksie=UTF8qid=1299057125sr=1-5
Nowadays I am very interested in the animation package in the R language, so today I made a simple animated picture with R to express my best wishes to everyone. The result:

R script:

library(MASS)
library(animation)

pp <- function(N) {
  # random positions for the coloured lights
  x1 <- runif(N, 0, N)
  y1 <- runif(N, 0, N)
  par(ann = FALSE, bg = "darkblue")
  x <- seq(1, 30 * N)
  j <- sample(x, 30)
  plot(1, ann = FALSE, type = "n", axes = FALSE, xlim = c(0, N), ylim = c(0, N))
  points(j, N - 1.5 * rep(1, 30), pch = 19, col = "white", cex = 2)
  for (i in 2:N) {
    # add a new row of white "snowflakes" in each frame
    x <- seq(1, 30 * N)
    j <- c(sample(x, 30), j)
    plot(1, ann = FALSE, type = "n", axes = FALSE, xlim = c(0, N), ylim = c(0, N))
    y <- N - 1.5 * rep(1:i, rep(30, i))
    points(j, y, pch = 19, col = "white", cex = 2)
    z <- sample(N, length(x1), replace = TRUE)  # computed but not used
    # plot(1, ann = FALSE, type = "n", axes = FALSE, xlim = c(0, N), ylim = c(0, N))
    points(x1, y1, pch = 19, col = N - i, cex = 3)
    points(x1, y1, pch = 19, col = N - i - 1, cex = 2.5)
    # rotating, growing greeting in the centre of the plot
    text(N / 2, N / 2, "Merry Christmas", srt = 12 * i, col = rainbow(N), cex = 4.5 * i / N)
    Sys.sleep(0.005)
  }
}

saveMovie(pp(30))
English is a Crazy Language
by Richard Lederer

It is now time to face the fact that English is a crazy language -- the most loopy and wiggy of all tongues. In what other language do people drive in a parkway and park in a driveway? In what other language do people play at a recital and recite at a play? Why does night fall but never break, and day break but never fall? Why is it that when we transport something by car, it's called a shipment, but when we transport something by ship, it's called cargo? Why do we pack suits in a garment bag and garments in a suitcase? Why are people who ride motorcycles called bikers and people who ride bikes called cyclists? Why -- in our crazy language -- can your nose run and your feet smell?

We find that hot dogs can be cold, darkrooms can be lit, homework can be done in school, nightmares can take place in broad daylight while morning sickness and daydreaming can take place at night, tomboys are girls and midwives can be men, hours -- especially happy hours and rush hours -- often last longer than sixty minutes, quicksand works very slowly, boxing rings are square, silverware and glasses can be made of plastic and tablecloths of paper, most telephones are dialed by being punched (or pushed?), and most bathrooms don't have any baths in them. In fact, a dog can go to the bathroom under a tree -- no bath, no room; it's still going to the bathroom. And doesn't it seem a little bizarre that we go to the bathroom in order to go to the bathroom?

Why is it that a woman can man a station but a man can't woman one, that a man can father a movement but a woman can't mother one, and that a king rules a kingdom but a queen doesn't rule a queendom? How did all those Renaissance men reproduce when there don't seem to have been any Renaissance women?

Sometimes you have to believe that all English speakers should be committed to an asylum for the verbally insane: In what other language do they call the third hand on the clock the second hand? 
Why do they call them apartments when they're all together? Why do we call them buildings when they're already built? Why is it called a TV set when you get only one? Why is phonetic not spelled phonetically? Why is it so hard to remember how to spell mnemonic? Why doesn't onomatopoeia sound like what it is? Why is the word abbreviation so long? Why is diminutive so undiminutive? Why does the word monosyllabic consist of five syllables? Why is there no synonym for synonym or thesaurus? And why, pray tell, does lisp have an s in it?

English is crazy. You can read on at http://www.english-zone.com/language/english.html
Nine markup languages that can be used to Sweave together data, code, and results:

* LaTeX: The Sweave utility in base R provides a method for creating LaTeX files which can then be converted to PDF files.
* HTML: The R2HTML, HTMLUtils, hwriter, prettyR, and Hmisc packages provide methods for outputting R code and output to HTML pages.
* asciidoc: The ascii package provides methods for creating asciidoc files which can then be converted to HTML, XML, or LaTeX files.
* txt2tags: The ascii package provides methods for creating txt2tags files which can then be converted to HTML, XML, or LaTeX files.
* reStructuredText: The ascii package provides methods for creating reStructuredText files which can then be converted to HTML, XML, or LaTeX files.
* org: The ascii package provides methods for creating org files which can then be converted to HTML, XML, or LaTeX files.
* textile: The ascii package provides methods for creating textile files which can then be converted to HTML, XML, or LaTeX files.
* OpenOffice: The odfWeave package provides methods for creating files in the Open Document Format (ODF) which can then be opened in OpenOffice and exported to a variety of formats.
* MS Word: The R2wd package provides a markup language for creating MS Word documents from within R. The SWord software provides the ability to use Sweave-like markup to create MS Word documents. Inference for R is proprietary software that allows the user to embed R code and output into a Microsoft Word document.

Some of this content is adapted from the web.
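As a minimal illustration of the first (LaTeX) route above, here is a sketch of a Sweave source file; the file name example.Rnw and the chunk contents are only illustrative. R code sits between the `<<>>=` and `@` markers, and `\Sexpr{}` embeds an inline result:

```latex
\documentclass{article}
\begin{document}

\section*{A tiny Sweave example}

% This chunk is executed by R when the file is processed with Sweave();
% its code and printed output replace the chunk in the generated .tex file.
<<normal-summary, echo=TRUE>>=
set.seed(1)
x <- rnorm(100)
summary(x)
@

Results can also appear inline: the mean of \texttt{x} is \Sexpr{round(mean(x), 3)}.

\end{document}
```

Running Sweave("example.Rnw") in R writes example.tex, which can then be compiled to PDF with pdflatex.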
The following websites on short message service (SMS) language are recommended by my friends in America; I collected a lot of data from them:
http://www.webopedia.com/quick_ref/textmessageabbreviations.asp
http://www.comp.nus.edu.sg/~rpnlpir/
http://www.txtmania.com/messages/text.php
http://www.chinadaily.com.cn/language_tips/auvideo/2009-08/27/content_8624761.htm
http://www.lingo2word.com/index.php
http://en.wikipedia.org/wiki/Text_messaging
http://www.txt2nite.com/
Linguistics and human nature

Any normal person can speak at least one language, and most people have a fairly good knowledge of the language they speak. What is language? Is it unique to humans? What has it contributed to humanity? Does linguistics, the discipline that studies language, count as a science? Such questions come up constantly wherever language is discussed.

For more than a century, linguists have been trying to explain linguistics to others who take an interest in these questions. Many eminent linguists have written books introducing language and linguistics, with the aim of educating lay readers or enlightening scholars in neighboring disciplines. Some of these books have become classics. The distinguished American linguist William Dwight Whitney published The Life and Growth of Language: An Outline of Linguistic Science in 1875. Later, three famous linguists writing in English, Edward Sapir (1921), Otto Jespersen (1922), and Leonard Bloomfield (1933), each published a book entitled Language, and these became the classics of a generation of language study. The leading contemporary American linguist Noam Chomsky published Language and Mind in 1975, and another well-known scholar, Steven Pinker, published The Language Instinct in 1995 (a book that stayed on the bestseller lists for months). Examples of such books could be multiplied at length.

Linguists regard their own subject as a science: the science that specifically studies language. This view was already accepted by scholars in the nineteenth century. In the first chapter of The Science of Language (1869), Max Mueller argued that the science of language is one of the physical sciences.

Language itself, like other human activities, is not a scientific activity. Linguists regard their field as a science because it shares the feature common to all sciences: a well-defined object of research and investigation, namely language. Language can be known objectively and scientifically (or, more accurately, understood across disciplines) by scientific means. Once we accept the view that science is a matter of investigation, we can say that the study of anything that can be understood and explained by scientific means has a scientific character.

We now know that the possibility of scientific understanding depends largely on the complexity and regularity of the object of study. Physics has been so successful because the physical world is, relatively speaking, highly regular and not terribly complex. Human sciences, by contrast, have been much less successful and much slower to produce results, largely because human behavior is so complex and not nearly so regular as is the physical or even the biological world. Language, though, contrasts with other aspects of human behavior precisely in its regularity, what has been called its rule-governed nature.
It is precisely this property of language and language-related behavior that has allowed for fairly great progress in our understanding of this delimited area of human behavior. Furthermore, the fact that language is the defining property of humans, that it is shared across all human communities and is manifested in no other species, means that by learning about language we will inevitably also learn about human nature.

Aronoff, M. & Rees-Miller, J. (eds.), 2001/2003. The Handbook of Linguistics. Oxford: Blackwell Publishers Ltd.
Early words: a few facts about children's acquisition of words
On child language acquisition: the order in which word categories are acquired

When we discuss children's acquisition of words, we will ask this question: what is the order of acquisition for word categories? As far as major lexical categories go, children's early production vocabularies exhibit a preponderance of nouns, typically used to refer to objects in the child's immediate environment (e.g. mummy, daddy, dolly, car). Alongside these, children are often quick to develop a small number of general-purpose verbs (Radford et al 1999/2000: 212). In a study from the 1970s, Roger Brown and his colleagues at Harvard reported the results of their detailed longitudinal work with three children. It covered a number of verbal inflections, within which Brown distinguished between regular and irregular past tense inflections (as in jumped and came) and between regular and irregular third person singular present forms (walks, does). His research shows that the verbal inflections were acquired in the following order:
1) progressive -ing
2) past tense irregular
3) past tense regular
4) third person singular present regular
5) third person singular present irregular
(cf. Radford et al 1999/2000: 215)
The reason the progressive morpheme comes first is its regularity. Unlike the past tense and third person singular morphemes, the progressive morpheme has no variant realizations (allomorphs). As a verbal suffix, it attaches in a fixed form to the vast majority of English verbs, and this, coupled with its relatively transparent semantics in signaling ongoing activities, may be sufficient to account for its accessibility to children. (I leave an open question here: for Chinese learners, is the order in which irregular and regular past tense forms are acquired different from the order for native English learners?) Of course, Radford et al (1999/2000: 216) present their suspicions and observations on this order of acquisition: First, the irregular forms, while relatively small in number, include some of the most frequently occurring verbs in English (was, had, came, went, brought, took, etc.).
Second, the regular pattern does indeed prevail, but only after a period during which the irregular forms are correctly produced. So here arises the phenomenon of overregularization, which needs another chapter to discuss.
An assembly language is a low-level language for programming computers. It implements a symbolic representation of the numeric machine codes and other constants needed to program a particular CPU architecture. This representation is usually defined by the hardware manufacturer, and is based on abbreviations (called mnemonics) that help the programmer remember individual instructions, registers, etc. An assembly language is thus specific to a certain physical or virtual computer architecture (as opposed to most high-level languages, which are usually portable). Assembly languages were first developed in the 1950s, when they were referred to as second-generation programming languages. They eliminated much of the error-prone and time-consuming first-generation programming needed with the earliest computers, freeing the programmer from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. However, by the 1980s (1990s on small computers), their use had largely been supplanted by high-level languages in the search for improved programming productivity. Today, assembly language is used primarily for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems. A utility program called an assembler is used to translate assembly language statements into the target computer's machine code. The assembler performs a more or less isomorphic translation (a one-to-one mapping) from mnemonic statements into machine instructions and data. (This is in contrast with high-level languages, in which a single statement generally results in many machine instructions.
This is done by one of two means: a compiler is used to translate high-level language statements into machine-code executable files as efficiently as possible; an interpreter executes similar statements directly, in its own application environment.) Many sophisticated assemblers offer additional mechanisms to facilitate program development, control the assembly process, and aid debugging. In particular, most modern assemblers (although many have been available for more than 40 years already) include a macro facility (described below), and are called macro assemblers. Some additional roles of assembly language: it is a powerful weapon for meeting a program's non-functional requirements, a key to program-cracking techniques, a key to understanding and building operating-system kernels, a foundation of language-processing technology, an important link in building development tools, and a way into coping with and extending closed systems. University assembly-language courses should introduce 32-bit assembly, or teach and study 16-bit and 32-bit in contrast. Thirty-two-bit assembly can be combined with GCC as a cross-compilation platform to compare languages at different levels (high and low) in terms of correspondence, performance, debugging, and optimization, which helps train professionals without gaps in their knowledge and also develops their latent abilities. Source: Assembly language, Wikipedia.
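The one-to-one mapping described above can be made concrete with a toy assembler. The sketch below handles just three 32-bit x86 instruction forms; the byte encodings (B8 + imm32, 83 C0 + imm8, C3) are the standard ones, but the assemble() function itself is a hypothetical illustration for this post, not a real tool:

```python
import struct

# A toy one-instruction-per-line assembler for a tiny subset of 32-bit x86.
# Each mnemonic statement maps to exactly one machine-code encoding -- the
# "isomorphic translation" the excerpt describes.

def assemble(line: str) -> bytes:
    parts = line.replace(",", " ").split()
    op = parts[0]
    if op == "mov" and parts[1] == "eax":   # mov eax, imm32 -> B8 + 4-byte immediate
        return b"\xB8" + struct.pack("<i", int(parts[2]))
    if op == "add" and parts[1] == "eax":   # add eax, imm8 -> 83 C0 + 1-byte immediate
        return b"\x83\xC0" + struct.pack("<b", int(parts[2]))
    if op == "ret":                         # ret -> C3
        return b"\xC3"
    raise ValueError(f"unknown instruction: {line}")

program = ["mov eax, 5", "add eax, 3", "ret"]
code = b"".join(assemble(line) for line in program)
print(code.hex(" "))   # b8 05 00 00 00 83 c0 03 c3
```

A real assembler adds labels, symbol resolution, and macros on top of exactly this kind of table-driven translation, which is why the output stays a one-to-one image of the source, unlike a compiler's.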