ScienceNet.cn (科学网)


Tag: peer-review


Related blog posts

The influence of the US government shutdown is worldwide
waterlilyqd 2019-2-2 10:33
The US government shutdown is affecting journal peer review. Today I received two letters from an American reviewer: one saying he could not view his email because of the government shutdown, the other saying the review invitation had arrived during the shutdown and that, with so much work piled up during that period, he had no time to review the manuscript.

The reviewer's two letters:
1. "Not able to view my emails because of the government shutdown. I hope you have been able to find another suitable reviewer."
2. "The request arrived during the government shutdown and thus I was not able to see it. Because a lot of work has piled up during the shutdown I do not believe I have time to review the manuscript. I hope you were able to find a suitable replacement."

Globalization has turned the world into a global village. Building high walls, erecting barriers, imposing trade sanctions, and taking hardline measures to force opponents to yield rather than resolving problems through friendly negotiation will make life difficult for every villager in that global village. The largest group of our journal's reviewers comes from the US. This round of the US government shutdown, together with some problems caused by the ScholarOne manuscript system upgrade and the various walls that exist, will have a considerable impact on our journal in the near term.
Category: Social commentary | 1992 reads | 0 comments
A letter to the editorial board -- on inviting reviewers
waterlilyqd 2017-4-26 12:43
Most editorial board members and science editors of the Journal of Mountain Science also serve as handling editors (Editors), responsible for sending manuscripts out for review and for communicating with reviewers and authors. In view of the recent hot discussion in the media and in academic circles of Springer Nature's retraction of 107 papers by Chinese authors published in Tumor Biology, of the many problems that author-recommended reviewers may pose, and of some issues with the reviewers automatically matched and recommended by the ScholarOne manuscript system, I posted an explanation and reminder in the journal's QQ group for board members and editors. Since quite a few board members and science editors have not joined the QQ group, I subsequently wrote to all of them. Here is the letter:

---------------------

Dear colleagues,

Last year Springer-Nature retracted 58 articles written by Iranian authors. These days the whole academic circle and the media in China are hotly discussing the retraction of 107 articles written by Chinese authors and published in Tumor Biology (see the report "107 cancer papers retracted due to peer review fraud" at https://arstechnica.com/science/2017/04/107-cancer-papers-retracted-due-to-peer-review-fraud/ ). The key issue is fake peer review conducted by author-recommended reviewers or manipulated by a third-party company. Thus, as a reminder, please be cautious about inviting reviewers recommended by the authors when you are managing manuscripts and inviting reviewers. The reviewers marked "recommended" in green in the reviewer list are all recommended by the authors, not recommended or preferred by the manuscript system. If you'd like to invite an author-recommended reviewer, please search for the email address on the internet to check its authenticity, check the consistency of the email address with the name of the recommended reviewer, and, most importantly, check the reviewer's research background and experience, and also their publications! Usually we only use author-recommended reviewers as a last resort when it is difficult to find suitable reviewers. Our manuscript system provides Reviewer Locator, which automatically matches the manuscript's topics and keywords with papers in Web of Science and then lists potential reviewers for selection. For these machine-recommended reviewers, I strongly suggest checking whether the email addresses are still valid, and also checking the scholar's research background.
We often receive invited reviewers' complaints that they do not have the expertise to do the peer review. Besides the above-mentioned two channels for finding reviewers, you can also create accounts for reviewers found through other channels; for more information on this operation, you can read the Guide to Editors (ppt) attached here. I am also looking forward to hearing your valuable suggestions on how to make peer review more efficient and effective!

Best regards

QIU Dunlian

Guide to Editors20170418.pdf

-------------------------------------------------------------------------

After the letter was sent, several board members and science editors replied; some said they would pay special attention to checking reviewers' backgrounds when sending manuscripts out for external review. Associate Editor-in-Chief Prof. Iain Taylor sent me a long letter with detailed suggestions on the journal's review process. I paste Prof. Iain Taylor's reply here for colleagues' reference and discussion:

-------------------------------------

Dear Dunlian,

Now the bad news. I am not at all surprised that Springer have unearthed reviewer fraud in the reviewer community. Some countries just count papers listed for promotion or even funding and reward those names as authors. All the wonderful materials in your attachments seem almost certain to open the door for fraud, because JMS has apparently found a remarkably complex process for peer reviewer selection. The assumption is that peer reviewer selection should be run by a computer. I suggest that you ignore the 'automated' systems and rely on one (possibly two) members of the Editorial Board, who are experts in the field, to choose the peer reviewers. The process relies on having Editorial Board members who are active in research. Throughout my 30+ years as author, peer reviewer, editorial board member, and editor-in-chief, I have found that the best papers come from active researchers whose submissions are professionally assessed by active experts in the field. No matter how important an individual may be, or how influential they are seen to be, the journal editor MUST find peer reviewers who can deliver objective comment.
In practice this means that peer reviewers are generally trusted within their scientific community and are able to control their inevitable biases. So, don't task a person who is known to be biased, and avoid bureaucrats who may be famous but are no longer active. Author suggestions may be useful, but it is good to remember that ethical scientists do not wish to see their friends make fools of themselves by giving friendly reviews to bad science. The following process may help the Associate Editor (or whatever title you give to the person who makes the recommendation to publish, revise or refuse a paper) to choose peer reviewers and act as they recommend.
1. A manager should check to see that the paper is written in decent English and conforms generally to the journal's style.
2. The Editor-in-chief now selects a member of the Editorial Board who is generally familiar with the scientific area of the research reported, to oversee the peer review process.
3. The chosen Board member determines who would be appropriate as peer reviewers.
4. The Board member contacts peer reviewers by phone or e-mail and obtains agreement to do the job within 2-3 weeks or an agreed time.
5. The Board member follows up to get professionally responsible timeliness from reviewers.
6. The Board member acts on peer reviewer comments (REMEMBER THIS IS NOT AN ELECTION) and decides whether to ask for revisions BASED ON his/her assessment of the reviews.
7. The Board member sends the submission back to the authors, explaining what changes are required. This may require editing of the reviews to remove unprofessional remarks before they reach the authors.
8. The revised manuscript may be acceptable when the Board Member's demands are met, but may or may not go back to one or both reviewers.
9. If the revised paper is now acceptable, send the paper to the Editor-in-chief and explain why the Board Member is recommending publication/rejection.
10. The Publisher then prepares the paper for publication.
Most publishers and research institutions just want happy researchers. Damage done by fraud can easily be forgotten and politicians can be told that fraud is rare. But we are all human and society at large suffers from false research. Iain
Category: Editorial notes | 6510 reads | 0 comments
How to find the best peer reviewers
WileyChina 2015-5-27 09:34
Reposted from: Exchanges. Original author: Thomas Gaston (Managing Editor, Wiley). Translated by: Wiley China Blog.

As everyone knows, the ideal in peer review, both for guiding editors' decisions and for providing feedback to authors, is to find the best, most qualified, most dedicated peer reviewers. Finding the best reviewers, however, is not a simple matter. As experts in their fields, editors have long lists of contacts to draw on; yet as submission volumes keep growing, editors can no longer confine themselves to the contacts they already have.

1. Build a database
One key to finding the best peer reviewers is having a strong reviewer database in place. Such a database is not built overnight, but constructing and maintaining one is essential. Ask reviewers for keywords describing their areas of expertise (ideally keywords that fit a taxonomy of the relevant field). The clear advantage is that this helps editors match manuscripts to reviewers. Score reviewers within the database; scoring criteria can include the timeliness and quality of completed reviews. Scores provide an effective metric for locating better reviewers, and the metric can also take turnaround time and completion rate into account. Flag reviewers who are slow and unhelpful, so they can be avoided the next time around.

2. Ask the experts
To discover new people to act as reviewers, editors should solicit suggestions from the journal's editorial board. Board members are on the board because of their expertise, so they can serve as reviewers themselves and also know potential reviewers in their fields. Similarly, editors can ask experts who decline a review to suggest potential reviewers.

3. Mine the manuscript's references
Another way to find new reviewers is to mine the manuscript's reference list for relevant authors. The fact that an author's paper is cited by the manuscript does not guarantee that this author is a suitable reviewer for it, but looking at the cited author's other publications and research areas allows a more accurate assessment of their suitability. Beyond the references, editors can also look at the journal's own authors: since these authors have already had papers scrutinized by the journal, their expertise carries some assurance.

4. Beware of fakes
A series of high-profile recent cases (see here, here and here) has shown that some unscrupulous authors attempt to act as peer reviewers of their own papers. The problem stems from some journals allowing authors to recommend reviewers. The author, or a third-party agency acting on their behalf, creates email addresses and claims they belong to the recommended reviewers; if the editor chooses to invite an author-recommended reviewer, the review invitation is then sent to a mailbox the author created. Editors should already be cautious about relying on author-recommended reviewers, since authors are tempted to recommend reviewers sympathetic to their papers; the danger of reviewers with faked identities should make editors warier still of recommended reviewers. One telltale sign of a fake reviewer identity is the use of a free email service, such as Yahoo, Microsoft or Google mail, where an institutional address should really be used. A simple online search is usually enough to identify or verify a reviewer's institutional email address.

5. Trust your instincts
Cases like these are a warning against over-reliance on automation. Electronic systems can provide invaluable help in managing and searching databases, but they can never replace an editor.

Click through to read the original English article and interact with the author.
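The free-email check described in point 4 lends itself to a simple first-pass filter. A minimal sketch, assuming an illustrative (not exhaustive) list of free mail domains and hypothetical reviewer data; flagged addresses are only a cue for the manual checks the post recommends, not proof of anything:

```python
# Flag reviewer email addresses that use free mail services rather than
# institutional domains -- a telltale sign that warrants manual verification.

# Illustrative list of common free email providers (assumption, not exhaustive).
FREE_DOMAINS = {"yahoo.com", "hotmail.com", "outlook.com", "gmail.com", "qq.com", "163.com"}

def flag_suspicious(reviewers):
    """Return the (name, email) pairs whose email domain is a free mail service."""
    flagged = []
    for name, email in reviewers:
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in FREE_DOMAINS:
            flagged.append((name, email))
    return flagged

# Hypothetical example data.
reviewers = [
    ("Dr. A", "a.lee@gmail.com"),
    ("Dr. B", "b.chen@ethz.ch"),
]
print(flag_suspicious(reviewers))  # only Dr. A's free-mail address is flagged
```

A reviewer whose address passes this filter can still be fake, and a legitimate reviewer may use a personal mailbox, so the result should feed into the web search and name/institution cross-check described above rather than replace them.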
Category: Peer review | 2889 reads | 0 comments
Posting another detailed review report
Popularity 8 waterlilyqd 2015-5-13 09:53
In past blog posts I have shown several review reports written for the Journal of Mountain Science. Today I read a report on a submission to our journal about the effects of root morphology, soil morphology and tree type on root anchorage properties. It is very specific and pertinent, and it offers real guidance to the authors on how to improve the paper and on what to watch for in future writing. I also think this report is a useful model for Chinese scientists doing external peer review. A review must not be broad-brush and sweeping, leaving both editor and authors at a loss. It should start with an overall assessment and then give a detailed analysis: whether the data are sufficient, the methods appropriate, the analysis and interpretation of the results clear, and the discussion adequate and focused on the key points; and, on matters of presentation, whether the cited references are relevant, whether important references are missing, whether citations follow the required format, and whether figures and tables are clear and consistent with the text -- all illustrated with concrete examples.

————————————————————————

General Comments

This paper reports some interesting observations of variation in tree root morphology and biomass in the field and, using laboratory tests, assesses variation in root tensile strength in relation to root diameter and root anchorage as a function of soil bulk density and moisture content. The authors are probably correct to point out that this is an understudied topic and the data they have collected are useful. The data they have on belowground biomass and root morphology may have more importance than the authors think, as they provide information on belowground carbon stocks. Though the paper contains some useful data and the overall experimental design appears sound, the manuscript is not suitable for publication in its current form for X key reasons:
1) Statistical analysis methods and results are not described or reported fully or appropriately. I believe the statistical analysis of the data can be greatly improved. The authors should define key objectives for their analysis that should govern the empirical modelling they complete.
2) The paper lacks organization, as currently results are reported in the methods and discussion sections.
3) The results section is poorly written and really just points readers to tables and figures rather than describing patterns in the data.
4) The discussion section frequently just re-states the results, and there is limited effort made to explain the patterns the authors observe or to relate their observations to previous work.
5) While I am always hesitant to criticize the writing of those whose first language isn't English, there is a need for significant improvement in the paper's clarity.

I have provided some specific comments and suggestions below. The authors are welcome to follow my suggestions for improving phrasing but must address the specific comments and queries. Below, P = page and L = line. Line numbers quoted are the authors' own.

Introduction
P1 L27 "reinforcing soil" Suggest "soil stabilization"
P2 L28 Change to "understanding the mechanisms…"
P2 L29 Change to "…and root-anchorage is important…"
P2 L30 Change to "slopes"
P2 L30-31 Change to "…attributed soil shear strength to vegetation through its…"
P2 L33 Change to "…roots' cross sectional area and…"
P2 L34 Change to "…was that it overestimated the…"
P2 L37 Change to "An FBM model…" Note that the convention is that use of a/an is based on the sound of the word, not whether it begins with a vowel or consonant. In this case the sentence sounds like "An eff bee em…"
P2 L39 Change to "…Waldron and Wu et al. models through load…"
P2 L41 Change to "…using parameters such as root…"
P2 L42-43 Change to "Recently, an RBM model…"
P2 L45 Change to "…the estimation of root reinforcement with these models is…"
P2 L47 Change to "In past studies, the effect of soil properties…"
P2 L48 Change to "…in lower shear-strength soil provide…"
P2 L48 Change to "…those in higher shear-strength soils…"
P2 L51 Change to "Root experienced breakage…"
P2 L54 COMMENT "The deformed shape…" This sentence needs clarifying
P2 L55 Change to "…root-reinforced soils increases…"
P2 L56 Change to "Different soil types…"
P3 L59 Delete "(clay soils…etc)"
P3 L60 Change to "conditions"
P3 L61-63 COMMENT The authors should define "winching". Might not differences in resistance to winching be related to differences in root morphology between trees growing at the edge and interior of forest stands? Wind shear stress on trees has previously been shown to influence their root growth.
This section needs careful re-phrasing as it's difficult to follow.
P3 L63 Change to "Soil properties should…"
P3 L64 Change to "…effect of dry weight density and water content on the roots' anchorage properties has been less reported"
P3 L66 Change to "Researchers have done…"
P3 L67 Change to "…in past studies."
P3 L67 Delete "Numerous…laboratory"
P3 L68 Change to "…tests using leaning…"
P3 L69 Change to "…field, and laboratory pullout tests have been conducted…"
P3 L76 Change to "…root anchorage…"
P3 L83 "The root anchorage…"

Methods
GENERAL COMMENTS
· Sampling effort is not described for the assessment of root morphology. What species were sampled? Why were these species selected? How many trees were sampled per species? How old were the sampled trees? Why were these particular tree ages/sizes selected? Were the sampled trees all located in homogeneous conditions (i.e. slope, aspect, soil type, elevation)?
· The authors need to provide some justification for their pullout test methods. Do they feel their results will be more representative of field conditions than the previous studies they cite?
· Sampling effort for the "pullout test" experiment is not defined. How many species were tested? How many tests per species? What was the size of the root fragments tested? What were the soil properties used in the test?
P4 L91 QUERY "Beigou forestry field" I'm not sure what a "forestry field" is; do the authors perhaps mean a "field station"?
P4 L94-95 QUERY Not sure what you mean by "mus"
P4 L97 COMMENT Species list followed by "etc." but the other species the authors refer to will only be apparent to readers familiar with the system.
A more complete description of the forest's composition and structure is needed
P4 L99 Change to "…sampled using the…" and "…excavation method…" Provide a reference for this method
P4 L102-103 Change to "…put into sealed bags, transported to the laboratory and stored in a refrigerator…"
P4 L106 COMMENT "…soil was fine sandy loam, dark brown with light particles…" This is a rather subjective description of the soil's colour. Could the authors provide a colour classification using the Munsell system?
P4 L108 Change to "root morphology was measured…"
P4 L110 Change to "…dry bulk density…"
P4 L111 Change to "following oven drying…"
P4 L111-112 COMMENT How long were soil samples dried for? Dry weights and water contents do not have any measure of variability associated with them; was only a single sample taken? If so, can the authors justify this? On what basis (dry- or wet-weight) is the soil moisture reported? Information on soil properties should probably be placed in the previous paragraph, after the description of soil colour.
P5 L115 COMMENT Again, why are there no errors associated with cohesive force and friction angle? Place the description of soil properties together.

Results
GENERAL COMMENTS
There is little in the results section that actually helps the reader understand what the authors observed. For example, simply stating "The mean root length of five tree species in different layers was calculated as shown in Table 2" does not actually tell the reader anything. The results section should seek to describe patterns in the data (e.g. how did root length vary with depth or between species). The discussion should then seek to explain and compare the patterns one observes. With regard to data analysis, the authors have fitted a large number of different regression equations to predict maximum bond force. For example, bond force is predicted based on root diameter with separate regressions for different soil bulk densities.
In reality the authors only need to complete two analyses:
· Examining the effects of soil properties (bulk density and moisture content) and root diameter on bond force for Pinus. The authors should seek to identify the single best model to predict bond force using all three independent variables whilst accounting for colinearity in the predictors and simplifying the model as necessary.
· Examining the effect of species (fixed factor) and root diameter on bond force.
Both the above models could be constructed using standard linear modelling approaches, should examine the importance of interactions, and can use standard model simplification approaches.
P6 L133-138 COMMENT "To investigate the root…mean slope angle was 8°." All of this information on sampling should be located in the methods section. The authors need to define what a "sunny" slope is. Is sunniness related to topography, aspect etc? The authors need to explain why they chose these sizes of tree. Note that species names can be shortened following first mention (e.g. U. pumila). Note that the authors "estimated" root length; they didn't "calculate" it.
P6 L134 Change to "…and diameter at breast height…"
P6 L140-143 COMMENT "The roots were divided…80-2500px (S5)." This description of the classification of roots into different size groups should be in the methods.
P7 L153-156 COMMENT Most of this section describes the authors' methods rather than results
P7 L157 COMMENT "The regression equation…" Statistical methods should be described in the methods section. The authors should briefly justify their analytical approach on the basis of the objectives of their empirical modeling.
P7 L161 COMMENT "…regression equation was in good agreement with experimental results." This is strange phrasing, since the equation is based on the experimental results. It might be better to state that the regression model was able to describe a substantial proportion of the variance in the experimental dataset.
P7 L162 COMMENT "marked exponent relation" I'm not certain what this means
P8 L167-178 COMMENT Nothing in this section actually describes any of the patterns in the results
P8 L167-170 DELETE "In this study…method in Lab." This should already be apparent from the methods
P8 L170-172 COMMENT "The water content…1.32 g/cm3" This should be in the methods
P8 L179 DELETE The table legend is given twice

Discussion
GENERAL COMMENT
The discussion is disappointing; in general the authors just re-state their results or even present information that should have been contained in the results section itself. Patterns in the data are not explored or interpreted, and there is extremely limited comparison with previously published work. This section needs to be thoroughly revised.
P12 L214-216 COMMENT I'm afraid I couldn't understand this section
P12 L218 "Fig. 3" Elsewhere the word "Figure" has been given in full; check what style the journal requires
P12 L218-223 COMMENT This section does not seek to explain the patterns in the results; instead, as previously, it just points the reader to figures without explaining or discussing them.
P13 L243-246 COMMENT These regression equations are results of the authors' statistical analyses and should not be presented in the discussion
P13 L252 COMMENT "could be 10% - 30% higher…" Can the authors suggest what the implications of this finding are?
P13 L254-259 COMMENT Can the authors suggest why their results are different to those of Fan and Su?
P15 L302-306 COMMENT These regression equations are results of the authors' statistical analyses and should not be presented in the discussion
P16 L327 COMMENT "There was a reasonable…" The authors need to define what they mean by reasonable

References
Check formatting; some references have missing spaces or include journal issue numbers.
Figures and Tables
Table 3: Consideration of the information in this table could be expanded, as it provides useful data on below-ground carbon stocks in tree roots, something for which data is often lacking. The authors should seek to turn their estimates into an estimate of kg C m^-2 held in the trees' roots.
Table 4: Symbols should be explained in the table legend rather than in a footnote
Table 5: The thinking behind the experimental design implicit in this table should be described in the methods. Note that you examined the effect of soil properties using Pinus roots but then also examined species effects using standard soil conditions. Justify why you selected those particular standard conditions
Figure 2: This figure does not provide any useful information and duplicates Table 5. Could the authors use this information to model (logistic regression) the probability of breakage failure based on soil properties and root diameter?
Figure 3: In the key, what does "calculating" mean? Calculated using what and on the basis of what? Explain in the methods
Figure 4: Change "fitting" to "fit". Are the different lines here justified?
Category: JMS information | 5559 reads | 10 comments
"Fake peer review" in academic journals and academic misconduct
Popularity 33 waterlilyqd 2015-4-2 16:15
The BMC retraction affair that has caused an international uproar (BMC retracts 41 fake-reviewed papers from China; 38 institutions embarrassed!) has brought the phenomenon of fake peer review into public view. Last month, at the Springer China partner-journal workshop in Beijing, Dr. Harry Blom, Springer's Vice President of Publishing Development, made a special point about fake peer review, reminding journal editorial offices to be very cautious with author-recommended reviewers and to put oversight mechanisms in place for the associate editors who handle sending manuscripts out for review.

So what is "fake peer review"? In general, papers published in academic journals must pass external peer review before publication, and most journals use blind review. The Journal of Mountain Science uses double-blind review, with no exception for invited papers. Moreover, as a general rule, papers by Chinese authors are sent to experts outside China, papers by Indian authors to experts outside India, and so on, using spatial separation to minimize, as far as possible, review by the authors' acquaintances.

"Fake peer review" mainly takes the following forms:

1. Some OA journals accept everything submitted. To maximize revenue they have no rigorous peer review system: as long as the submission format meets the requirements and the language reads smoothly, the paper is published directly. Some journals with high impact factors are no exception!

2. With the worldwide surge in paper output, difficulty in finding reviewers is a problem shared by many editorial offices, so most journals ask authors to recommend reviewers for their own papers. Many author-recommended reviewers may be acquaintances of the authors, or people the authors have written to in advance. Out of deference to that relationship, they may be reluctant to criticize, and for papers with immature methods, insufficient data, and merely passable writing, they often still recommend Accept, or say that only Minor Revision is needed before publication.

3. The author recommends a well-known expert as reviewer, but the email address is not that expert's; it may well be a mailbox the author registered, or one belonging to the author's classmates or friends. The peer review then becomes the author reviewing himself, or the author's classmates and friends reviewing, rendering peer review an empty formality. We have found similar cases among the reviewer addresses recommended by authors submitting to JMS: many recommended reviewers' mailboxes are QQ mailboxes rather than institutional ones. We pay particular attention to such papers, and of course the editorial office does not send manuscripts to these reviewers.

4. Editorial staff collude with authors and send manuscripts to unqualified reviewers. This kind of problem must be managed and constrained by the editorial office's internal rules, and the editor-in-chief must comprehensively check papers that editors recommend for acceptance.

5. Some people who hold power and control resources in academic institutions lobby or pressure editorial offices so that their students or acquaintances can publish quickly, asking editors to send manuscripts to the reviewers the author recommended.

6. Some language-polishing or pre-review companies act as brokers who review papers, or manipulate the review of papers. I remember that the week before last, Prof. Wu Yishan posted an article, "Pre-review services for journal manuscripts: what do you think?" In my view, some pre-review companies take authors' money to do things that violate academic ethics; of course, that is not to say that all language-polishing and pre-review companies are like this!

Whatever its form, "fake peer review" is academic misconduct. It disrupts the normal system of academic evaluation and damages the academic reputation of both the journal and the author. An event like BMC's retraction of 41 papers by Chinese authors in one stroke is also a loss of face for Chinese academia.

Further reading:
Proliferation of fake peer-review journals
Major publisher retracts 43 scientific papers amid wider fake peer-review scandal
Science's Big Scandal
A paper by Maggie Simpson and Edna Krabappel was accepted by two scientific journals
An author named in the "China's paper mills" article responds: our English is just poor
Category: Sci-tech notes | 36923 reads | 60 comments
Associate Editor-in-Chief Prof. Iain Taylor's assessment of the journal and his suggestions for reviewers and editors
waterlilyqd 2015-3-12 22:15
Comments and suggestions from JMS editorial board member Prof. Iain Taylor on editorial practices

Notes: At the end of 2014, I reviewed the development and challenges of the Journal of Mountain Science, sent the report to all editorial board members, and requested their suggestions and comments. I received responses from about ten members. Most praised the journal's great development over the past years, and many gave concrete suggestions on how to attract high-quality papers and how to raise the journal's international impact. What follows are the comments and suggestions from Prof. Iain Taylor of the University of British Columbia, Canada. I think what he says about how to make decisions based on peer-review comments is quite right: "We always reminded ourselves that we were NOT running an Election. Rarely we got two positive reviews that were badly justified or 2 negatives that were unprofessional. The criteria HAD TO BE THE QUALITY OF THE SCIENCE." I think it can be useful for editors of other journals, so I paste his letter here.

----------------------------

Dunlian:

I have spent some time thinking about your challenge to raise the influence of JMS in the scientific community. I used Volume 11, Number 2, March 2014 as one example for my comments. I still think that worrying about Impact is not the correct path. JMS is already publishing some good papers and we need to see just which ones are cited, say 3 years after publication. The issue in question contained 24 papers; authors were from 15 countries; 10 papers included authors from China mainland and Taiwan, Korea and Pakistan. It seems that JMS is attracting papers from mountainous areas, which suggests that the title is a reasonable choice. Time from submission to acceptance looks good - only one took longer than 10 months. 4 accepted in less than 2 months seems very fast - review and revision in this time is VERY RARE and at Can. J. Bot. always had us wondering if the review was really that professional.
How long is Springer taking to bring a paper from your acceptance note to actual publication? What is your assessment of reviewers? We always checked timeliness, clarity, professionalism and constructive comment to rank reviewers. We also checked to see if serious reviewer comments were actually being acted upon, and whether authors explained their changes AND properly justified, one way or another, the items they did not change. We occasionally used a reviewer who was personally known to one or more of the authors, but always used a 3rd reviewer in these cases. We tried to use at least one reviewer whose first language was English. This was often the way we detected poor language in the paper. Periodically we got conflicting or sloppy reviews. We always reminded ourselves that we were NOT running an Election. Rarely we got two positive reviews that were badly justified or 2 negatives that were unprofessional. The criteria HAD TO BE THE QUALITY OF THE SCIENCE. DO REMEMBER THAT AN EDITOR WITH ANY BEHAVIOUR THAT IS UNPROFESSIONAL (for or against publication) will always add to distrust of the Journal. Enough for now. More if anything comes up later. All the very best, Iain
Category: Editorial notes | 1992 reads | 0 comments
Showcasing a very detailed and thorough review report
Popularity 17 waterlilyqd 2014-6-13 10:27
I once saw review comments that a Chinese EI-indexed journal returned to an author consisting of just two sentences, to the effect of "the paper lacks novelty and the research is not deep enough." What can an author, especially a young scientist, do with such a review? What insight can they draw from it? Can they revise and improve the paper on the basis of it? No! A review report should give directional guidance as well as concrete suggestions, and better still, it should inspire the author's future in-depth research. The report shown here is in fact a second-round review, written after the authors had revised the paper according to the reviewer's first-round comments. It runs to several pages and is extremely specific and pertinent. A reviewer like this deserves our heartfelt respect!

----------------------------------------------------------------------------------

This manuscript is considerably improved over the initial submission, but there needs to be greater clarification of the analysis, more attention to important details, and improved organization. Also, the use of the English language needs improvement in places, but I leave that to the copyeditors to assist the authors. My comments below are organized by section of the manuscript.

Introduction. I wouldn't say that interpolation techniques resolve the shortage of observations, as the suitability of these methods is in part a function of the station density. The sentence that reanalyses most closely estimate the state of the real atmosphere is also problematic, as it is not clear what the reanalyses are being compared against. Also, in the introduction the authors state that their goal is to verify monthly 2 m air temperature in the ERA-Interim for the Tibetan Plateau; however, most of their analysis is at the annual and seasonal (rather than monthly) temporal scale.

Section 1.2. If a time series was not complete, how was this handled? Were the missing data filled in with data from neighboring stations? Also, the 5 consecutive years criterion seems somewhat odd, especially as it appears that only one station had less than 10 years of data. Why not exclude that station and use a stricter criterion? Also, the authors should provide the period of record in Table 1 for each of the stations to help readers interpret the results. In addition, how did the authors deal with the 20 stations that didn't have data for the full period 1979-2010? Were these stations included in the trend analysis and climatological maps, for example?
Section 2.1. Is the ratio of standard deviations for the annual standard deviation? I am assuming that the other parameters, bias, rmse, etc., are also for the annual means of Te and To. Is that correct? At this point, it would also have been helpful to see a plot of bias and/or rmse against the difference in elevation between the observation station and the ERA-Interim gridpoint. It is important for readers to see what the shape of this distribution looks like in order to better understand the impact of the lapse rate correction discussed later in the manuscript. This type of graph would also better support the authors' contention that bias is related to the station elevation, rather than providing examples in the text for only a few stations with small and large biases. Also, are the differences shown in Table 3 statistically significant? I doubt if they are.

Section 2.2. In my earlier review, I had suggested that the annual cycle be removed when correlating the time series of monthly data from observations and ERA-Interim, because the correlation was representing how well the annual cycles agreed between the two datasets rather than how well ERA-Interim was simulating the month-to-month variability seen in the observations. However, the authors say on page 6 that they removed the annual cycle from the values of annual and seasonal mean temperatures rather than the monthly values. I am perplexed as to how and why they did this. What method was used to remove the annual cycle? Or when they say they removed the annual cycle, are they just saying that they averaged the monthly values by year and season? There really isn't any need to remove the annual cycle when calculating and comparing annual and seasonal means. I have some additional concerns regarding Section 2.2 on temporal and spatial variability. Rather than annual variability on line 7, I think the authors mean inter-annual variability, or in other words, year-to-year variability in the annual mean.
And by seasonal variability, are the authors referring to the year-to-year fluctuations in the seasonal means? I suspect so, but they need to make this clear. Figure 3 for station No. 1 is not very useful in demonstrating how well the ERA-Interim is replicating inter-annual variability of the annual and seasonal means. The bias is very large at this station and the vertical resolution of the graphs is coarse, thus the graphs for both observations and ERA-Interim appear very flat. Based on the plot, only the winter averages of ERA-Interim show considerable inter-annual variability, but that is partly because the vertical resolution of the winter graph is finer than that for the plots of the other seasons and the annual mean. Table 4 shows that at station No. 1 the correlation for winter is 0.295 but the annual value is only a little higher at 0.376. But because of the scale of the graph, the curve for the inter-annual variations in the annual mean appears very flat for both the Te and To series when the correlation suggests that the inter-annual variability of the two series is rather dissimilar. Also, perhaps a better station to use to show differences in inter-annual variability is station No. 14, which has low correlations for some seasons, but the elevations of the observation station and the ERA-Interim gridpoint are more similar than for station No. 1 and the bias is much smaller.

In the second paragraph under 2.2 it appears that the authors are now looking at bias in the monthly values rather than the inter-annual variability of the monthly values. This needs to be made clear. The writing of this paragraph can also be tightened and the paragraph shortened to highlight the differences in bias between summer and spring, and between the eastern versus southern TP. Again, I question the focus on station No. 1 given the large difference in the To and Te elevations. There are also some grammar errors in this paragraph.

Section 2.3.
I don't understand what the authors mean when they say that the monthly lapse rates are obtained from Kunkel (1989) and Liston and Elder (2006). These studies were for other parts of the world -- weren't the lapse rates calculated specifically for the Tibetan Plateau? Or do the authors mean that they used the procedures from Kunkel and Liston and Elder to calculate the lapse rates? Whatever the case, it is not clear how the lapse rates were found. The authors need to explicitly describe how the correction was calculated. Table 5 doesn't need to include all the different parameters, as the correlation, standard deviation ratios and standard deviation don't change much for the lapse-rate-corrected temperatures. Why not focus instead on the bias and the rmse and show the difference between the values for the corrected and uncorrected series, in other words the difference between the values in Table 2 and Table 5. Also, are the bias and rmse values given at the top of page 7 the average values across all the stations? Another question is why the variation of Tc would differ from that of Te. After all, for each month a constant is being added to Te to calculate Tc. Even though the constant varies by month, I wouldn't expect much difference in the variation of Te and Tc. Also, what is the correlation model? And why isn't the bias reduced at all the stations -- why just at 57 stations? Does it have something to do with the shape of the relationship between bias and elevation difference? Here is where a graph of bias versus elevation difference would be very useful. Note that at station No. 11, Te and To are at similar elevations and the bias was initially small. What if you stratify the data by elevation? Do you see that the correction was useful for higher-elevation stations but introduced error for lower-elevation stations? And why didn't you also use the Gao et al. method in addition to the Kunkel method?

Section 2.4.
It is not clear from the climatology maps what new information the ERA-Interim provides. What did the ERA-Interim tell you about the temperature climatology that was not previously known from observations alone? The authors also need to be more careful when they say that the ERA-Interim captures the topographic features very well, since the climatology maps are not compared with similar maps prepared from observations. I would also caution against speculating on the causes of the temporal trends; rather, focus on how similar the trends from the ERA-Interim are to the trends calculated from observations, especially given that this paper is an evaluation of the ERA-Interim. On what evidence are you basing your statement that the ERA-Interim does better than ERA-40 at capturing temporal trends?

Section 3. Rather than "whole time series", say instead "... complex TP at annual, monthly and seasonal temporal scales ...". Also, what does "Overall the TP has great temporal and spatial variations" mean?

Figures and Tables. More attention needs to be paid to figure and table captions:

Figure 1. The caption needs to explain the numbers in this figure -- something like "The numbers refer to the station numbers shown in Table 1."

Figure 2. Is this a plot of annual mean temperature? The variable that is plotted needs to be described in the figure caption.

Figure 3. The scales on these plots are not consistent -- note that the scale is finer for the winter plot, and consequently there appears to be greater inter-annual variability. Keep the scales consistent so that it is easier to compare the different plots. I also question whether this plot is even necessary, as it basically shows that, with the exception of some years in winter, Te is considerably warmer than To.

Figure 4. Include units for RMSE (°C).

Figure 5. A better caption would be "Average annual mean temperature (°C) across the Tibetan Plateau for 1979-2010 from the ERA-Interim reanalysis."

Figure 6. Similarly: "Average temperature (°C) across the Tibetan Plateau for 1979-2010 from the ERA-Interim reanalysis for a) spring, b) summer, c) autumn, and d) winter."
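As an aside for readers following the statistics in this review: the evaluation measures (bias, RMSE, inter-annual correlation between To and Te) and the monthly lapse-rate correction under discussion can be sketched as below. All numbers, lapse rates and elevations are invented for illustration; they are not from the manuscript, from Kunkel (1989), or from Liston and Elder (2006).

```python
import math

def evaluation_stats(to, te):
    """Compare an observed series (To) with a reanalysis series (Te):
    bias (mean of Te - To), RMSE, and Pearson correlation."""
    n = len(to)
    bias = sum(e - o for o, e in zip(to, te)) / n
    rmse = math.sqrt(sum((e - o) ** 2 for o, e in zip(to, te)) / n)
    mo, me = sum(to) / n, sum(te) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(to, te))
    so = math.sqrt(sum((o - mo) ** 2 for o in to))
    se = math.sqrt(sum((e - me) ** 2 for e in te))
    return bias, rmse, cov / (so * se)

# Hypothetical monthly lapse rates in degC per km (illustrative only).
MONTHLY_LAPSE_RATE = {1: 4.4, 7: 6.5}

def lapse_rate_correct(te, month, elev_grid_m, elev_station_m):
    """Adjust Te from the grid-point elevation to the station elevation,
    giving the corrected temperature Tc. Because a single constant is
    added per month, Tc keeps the variability of Te, so the correlation
    and the standard-deviation ratio barely change."""
    gamma = MONTHLY_LAPSE_RATE[month]  # degC per km
    return te + gamma * (elev_grid_m - elev_station_m) / 1000.0

# Invented winter means showing a large cold bias alongside a high
# inter-annual correlation -- the combination seen at station No. 1.
to = [-8.1, -7.5, -9.0, -8.4, -7.9]
te = [-13.0, -12.6, -14.1, -13.3, -12.8]
bias, rmse, corr = evaluation_stats(to, te)

# Grid point 800 m above the station in July: warm Te by 0.8 * 6.5 degC.
tc = lapse_rate_correct(te=-2.0, month=7, elev_grid_m=4800, elev_station_m=4000)
```

A uniformly offset series still correlates strongly with the observations, which is why a large bias at a station does not by itself rule out good inter-annual agreement, and why adding a monthly constant leaves the correlations in Table 5 essentially unchanged.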
Initial review by the Scientific Editors of the J Mt Sci
waterlilyqd 2014-4-25 16:09
The Journal of Mountain Science has a team of Scientific Editors (SEs) composed of 35 experienced scientists from around the world. To save manuscript processing time and reduce the later processing workload, after an author submits a paper we add one stage, initial review, before peer review. The initial review by the SEs is a very important step for selecting valuable papers for later peer review. To make things convenient for the SEs and to shorten the manuscript processing time, the system offers three choices for the SE to click -- Reject, Revision, or Sent out for peer-review -- and under each choice the possible reasons for rejecting a paper are listed. Because some SEs can also act as reviewers, we revised the email template provided by ScholarOne to explain more clearly the purpose of the invitation and the initial-review requirements. The following is the invitation letter to a scientific editor.

___________________________________________________

Dear Dr. XXX:

We'd like to invite you as a scientific editor to do an initial review of manuscript ID 14-3XXX, entitled "XXXXXXXXXXXXXXXXXXXXXX". Before making comments, you need to log in to the JMS online manuscript system at http://mc03.manuscriptcentral.com/jmsjournal and enter the Scientific Editor Center. You are kindly requested to finish the review within 4 days. If you can't do the initial review in time, please send an email to jms@imde.ac.cn immediately, so that we can assign a different SE to do the initial review.

If your recommendation is REJECT, please click the corresponding list items and then state concrete reasons for rejecting the paper. If your recommendation is REVISION, please also give revision comments. If your recommendation is SENT OUT FOR PEER-REVIEW, please recommend two to three referees from outside the first author's country.

Sincerely,
Journal of Mountain Science Editorial Office
jms@imde.ac.cn
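The decision rules in the invitation letter above can be read as a small validation protocol: each recommendation obliges the SE to supply something. The sketch below is purely illustrative -- it models those rules in code and is not part of the journal's actual ScholarOne configuration.

```python
from enum import Enum

class Recommendation(Enum):
    """The three choices offered to a Scientific Editor at initial review."""
    REJECT = "Reject"
    REVISION = "Revision"
    SENT_OUT_FOR_PEER_REVIEW = "Sent out for peer-review"

def validate_initial_review(rec, reasons=None, comments=None, referees=None):
    """Check that an SE's submission carries the material the editorial
    office requires for each recommendation."""
    if rec is Recommendation.REJECT and not reasons:
        return "REJECT requires concrete reasons"
    if rec is Recommendation.REVISION and not comments:
        return "REVISION requires revision comments"
    if rec is Recommendation.SENT_OUT_FOR_PEER_REVIEW:
        if not referees or not (2 <= len(referees) <= 3):
            return "peer review requires 2-3 recommended referees"
    return "ok"

# A complete recommendation passes; an incomplete one is flagged.
status = validate_initial_review(Recommendation.SENT_OUT_FOR_PEER_REVIEW,
                                 referees=["Referee A", "Referee B"])
missing = validate_initial_review(Recommendation.REJECT)
```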
What is post-publication peer review?
Bobby 2009-3-13 07:04
Yuan Xianxun's blog post, "Ethics in post peer review": http://www.sciencenet.cn/m/user_content.aspx?id=219925
Wang Zhiming's blog post, "The arrival of the era of post peer review": http://www.sciencenet.cn/blog/user_content.aspx?id=219943

Note: what is loosely called "post peer review" should, strictly speaking, be called post-publication peer review (or post-publication evaluation).

1. Why post-publication peer review is necessary

Readers may consult the following article: "Are journals doing enough to prevent fraudulent publication?" Source: http://www.cmaj.ca/cgi/content/full/174/4/431

Recent warnings by editors of 3 major journals that data contained in published papers were or may have been incomplete,1 falsified2 or fabricated3 have dismayed scientists and scientific editors around the world and added to the public's growing scepticism about the authority of science. How is it that flawed or fraudulent research can slip through the net of peer review and editorial scrutiny? Reputable scientific journals use a systematic approach to reviewing and editing research papers. At CMAJ, submissions that are not intercepted after an initial screening for suitability and relevance are sent for peer review. Reviewers are chosen on the basis of their interest and expertise, publication record, and quality of previous reviews. Peer reviewers devote perhaps a few hours to reading the paper, consulting the existing literature and writing their review. About 20% of the completed reviews we receive are rated as excellent; we generally succeed in obtaining 2 good or excellent reviews for each manuscript. After peer review, submissions are carefully reassessed by the scientific editors, and about 6% are selected for publication. Almost all require substantive editing, guided by a scientific editor working closely with the authors. Once this process of revision is complete, the manuscript is copyedited for clarity, precision, coherence and house style. Problems with the presentation and interpretation of data can come to light at any point in this process, even at the late stage of copyediting.
For the most part, this intensive series of editorial check-points works well. But it is not perfect. In 2005, PubMed received 67 notices of article retractions (Sheldon Kotzin, National Library of Medicine; personal communication, 2006.) This is undoubtedly an underestimate of the total number of flawed, grossly misleading or frankly fraudulent papers. Editors (and peer reviewers) work from the submitted manuscript along with any other material supplied by the authors (e.g., survey instruments or additional tables, graphs and figures). In assessing randomized clinical trials, most editors examine the study protocol to try to ensure that the study report reflects the planned design and analysis. However, it is almost impossible to detect by these processes whether data have been fabricated, or if key elements are missing. Editors, particularly of general journals, rarely have the expertise in the particular topic of the research to enable them even to suspect fabrication when it occurs. Reviewers may have the expertise but not necessarily the time to examine findings in exhaustive detail; moreover, they can assess only those data that the authors actually disclose.    Alarmed by their own experiences with particular manuscripts, some journals are taking further steps to ensure that authors are faithful to their data. For example, the Journal of the American Medical Association (JAMA) now requires independent statistical re-analysis of the entire raw data set of any industry-sponsored study in which the data analysis has been conducted by a statistician employed by the sponsoring company.4 The Journal of Cell Biology ( www.jcb.org ) has specific policies prohibiting the enhancement of images and scrutinizes submitted images for evidence of manipulation. It will be important to evaluate the effectiveness of these measures as time goes on, since their costs in time and resources are not trivial.    
At CMAJ we are contemplating the steps that would be required to allow us to make available, as an online-only appendix, the entire data set on which a research paper is based. Doing so would enable more intensive post-publication peer review. Interested persons with the necessary expertise could confirm the published analyses, conduct further analyses and increase the efficiency of research by making the data more widely used. Fraud might also be detected sooner, and perhaps the knowledge that their data set will be open to public scrutiny will deter some authors from fabricating or falsifying data (if it does not make others more clever in their deceits). Current online publishing systems enable authors to readily supplement their articles with data sets in any file format (spreadsheets, databases, jpegs, etc.) and to index these files for proper attribution and with helpful information (e.g., the open-source Open Journal Systems at http://pkp.sfu.ca/ojs ; Dr. John Willinsky, University of British Columbia; personal communication, 2006). The costs of posting additional data as appendices to manuscripts are trivial, and the ethical and legal obstacles (rendering the data anonymous when they involve patients, and protecting the intellectual property rights of investigators and sponsors) can be overcome.5

No editorial review system will ever be entirely impermeable to human error or deceit. But journals could do more to ensure the integrity of published scientific results; one place to start might be to publish all of the data on which research findings are based. -- CMAJ

REFERENCES

Bombardier C, Laine L, Reicin A, et al. Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. VIGOR Study Group. N Engl J Med 2000;343:1520-8.
Hwang WS, Roh SI, Lee BC, et al. Patient-specific embryonic stem cells derived from human SCNT blastocysts. Science 2005;308:1777-83.
Sudbø J, Lee JJ, Lippman SM, et al.
Non-steroidal anti-inflammatory drugs and the risk of oral cancer: a nested case-control study. Lancet 2005;366:1359-66.
Fontanarosa PB, Flanagin A, DeAngelis CD. Reporting conflicts of interest, financial aspects of research and role of sponsors in funded studies. JAMA 2005;294:110-1.
Walter C, Richards EP. Public access to scientific research data. Available: http://biotech.law.lsu.edu/IEEE/ieee36.htm (accessed 2006 Jan 13).

2. PLoS challenges the traditional model with post-publication peer review

For background on PLoS, readers may see my earlier post introducing the PLoS family of free, open-access online journals: http://www.sciencenet.cn/m/user_content.aspx?id=51127 . Unlike most journals, PLoS ONE publishes any methodologically sound paper regardless of the perceived importance of its results: reviewers only check whether the experimental methods and analyses in a paper contain obvious, serious errors. PLoS ONE holds that a paper's importance shows in the attention and citations it receives after publication. Readers can comment on and rate every PLoS ONE paper online, and the editors use this feedback to identify and recommend important papers. As one PLoS ONE editor put it, "We strive to make the journal's papers the starting point of a discussion, not the end point." ( http://cbb.upc.edu.cn/showart.asp?art_id=84cat_id=4 )

On this point, in the interview "Challenging tradition: publish first, evaluate later" by Wang Danhong and He Jiao, Zhang Shuguang, a senior researcher at the Massachusetts Institute of Technology, called the model a very good idea: publication means that the real evaluation is just beginning, not ending. If a paper is really good, people learn of it faster, saving a great deal of time, effort and money; likewise, if a paper is flawed or fraudulent, that can be discovered quickly. In the long run this benefits the development of science. (2007-2-1, http://www.sciencenet.cn/htmlnews/200721102729932823.html )

3. Can post-publication peer review work?

Readers may read the following article: "Can post publication peer review work? The PLoS ONE report card". Source: http://blog.openwetware.org/scienceintheopen/2008/08/27/can-post-publication-peer-review-work-the-plos-one-report-card/

This post is an opinion piece and not a rigorous objective analysis. It is fair to say that I am on the record as an advocate of the principles behind PLoS ONE and am also in favour of post-publication peer review, and this should be read in that light. To me, anonymous peer review is, and always has been, broken. The central principle of the scientific method is that claims, and the data to support those claims, are placed publicly in the view of expert peers. They are examined and re-examined on the basis of new data, considered and modified as necessary, and ultimately discarded in favour of an improved, or more sophisticated, model.
The strength of this process is that it is open, allowing for extended discussion of the validity of claims, theories, models, and data. It is a bearpit, but one in which actions are expected to take place in public (or at least community) view. To have as the first hurdle to placing new science before the community a process which is confidential, anonymous, arbitrary, and closed is an anachronism. It is, to be fair, an anachronism that was necessary to cope with the rising volume of scientific material in the years after the second world war, as the community increased radically in size. A limited number of referees was required to make the system manageable, and anonymity was seen as necessary to protect the integrity of this limited number of referees. This was a good solution given the technology of the day. Today, it is neither a good system nor an efficient one, and we have in principle the ability to do peer review differently, more effectively, and more efficiently. However, thus far most of the evidence suggests that the scientific community doesn't want to change. There is, reasonably enough, a general attitude that if it isn't broken it doesn't need fixing. Nonetheless there is a constant stream of suggestions, complaints, and experimental projects looking at alternatives. The last 12-24 months have seen some radical experiments in peer review. Nature Publishing Group trialled an open peer review process. PLoS ONE proposed a qualitatively different form of peer review, rejecting the idea of importance as a criterion for publication. Frontiers have developed a tiered approach where a paper is submitted into the system and gradually rises to its level of importance based on multiple rounds of community review.
Nature Precedings has expanded the role and discipline boundaries of pre-print archives, and a white paper has been presented to EMBO Council suggesting that the majority of EMBO journals be scrapped in favour of retaining one flagship journal for which papers would be handpicked from a generic repository where authors would submit, along with referees' reports and authors' responses, on payment of a submission charge. Of all of these experiments, none could be said to be a runaway success so far, with the possible exception of PLoS ONE. PLoS ONE, as I have written before, succeeded precisely because it managed to reposition the definition of peer review. The community has accepted this definition, primarily because the journal is indexed in PubMed. It will be interesting to see how this develops. PLoS has also been aiming to develop rating and comment systems for their papers as a way of moving towards some element of post-publication peer review. I, along with some others (see full disclosure below), have been granted access to the full set of comments and some analytical data on these comments and ratings. This should be seen in the context of Euan Adie's discussion of commenting frequency and practice in BioMedCentral journals, which broadly speaking showed that around 2% of papers had comments and that these comments were mostly substantive and dealt with the science. How does PLoS ONE compare, and what does this tell us about the merits or demerits of post-publication peer review? PLoS ONE has a range of commenting features, including a simple rating system (on a scale of 1-5), the ability to leave free-text notes, comments, and questions, and, in keeping with a general Web 2.0 feel, the ability to add trackbacks, a mechanism for linking up citations from blogs. Broadly speaking, a little more than 13% (380 of 2773) of all papers have ratings and around 23% have comments, notes, or replies to either (647 of 2773, not including any from PLoS ONE staff).
Probably unsurprisingly, most papers that have ratings also have comments. There is a very weak positive correlation between the number of citations a paper has received (as determined from Google Scholar) and the number of comments (R^2 = 0.02, which is probably dominated by papers with both no citations and no comments, which are mostly recent; none of this is controlled for publication date). Overall this is consistent with what we'd expect. The majority of papers don't have either comments or ratings, but a significant minority do. What is slightly surprising is that where there is arguably a higher barrier to adding something (writing a text comment versus clicking a button to rate) there is actually more activity. This suggests to me that people are actively uncomfortable with rating papers versus leaving substantive comments. These numbers compare very favourably to those reported by Euan on comments in BioMedCentral, but they are not yet moving into the realms of the majority. It should also be noted that there has been a consistent programme at PLoS ONE with the aim of increasing the involvement of the community. Broadly speaking, I would say the data we have suggest that that programme has been a success in raising involvement. So are these numbers good? In reality I don't know. They seem to be an improvement on the BMC numbers, arguing that as systems improve and evolve there is more involvement. However, one graph I received seems to indicate that there isn't an increase in the frequency of comments within PLoS ONE over the past year or so, which one would hope to see. Has this been a radical revision of how peer review works? Well, not yet, certainly; not until the vast majority of papers have ratings, and more importantly not until we have evidence that people are using those ratings. We are not yet in a position where we are about to see a stampede towards radically changed methods of peer review, and this is not surprising.
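The headline figures in the post above (the share of papers with comments or ratings, and the R^2 between citations and comments) are straightforward to recompute from a per-paper table of counts. The sketch below shows one way; the counts are invented for illustration and are not the actual PLoS ONE data.

```python
def share(papers, key):
    """Fraction of papers with a nonzero count for the given field."""
    return sum(1 for p in papers if p[key] > 0) / len(papers)

def r_squared(xs, ys):
    """Coefficient of determination for a least-squares line of ys on xs
    (equal to the squared Pearson correlation for simple regression)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Invented per-paper counts; as in the post, most papers have no comments.
papers = [
    {"citations": 0, "comments": 0},
    {"citations": 0, "comments": 0},
    {"citations": 3, "comments": 1},
    {"citations": 1, "comments": 0},
    {"citations": 8, "comments": 2},
]
commented = share(papers, "comments")  # fraction of papers with comments
r2 = r_squared([p["citations"] for p in papers],
               [p["comments"] for p in papers])
```

On the real data set, a distribution dominated by papers with zero citations and zero comments would pull the fitted line through the origin cluster and keep R^2 small, as the post notes.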
Tradition changes slowly -- we are still only just becoming used to the idea of the paper being something that goes beyond a pdf, and embedding that within a wider culture of online rating and the use of those ratings will take some years yet. So, I have spent a number of posts recently discussing the details of how to make web services better for scientists. Have I got anything useful to offer to PLoS ONE? Well, I think some of the criteria I suggested last week might usefully be considered. The problem with rating is that it lies outside the existing workflow for most people. I would guess that many users don't even see the rating panel on the way into the paper. Why would people log into the system to look at a paper? What about making the rating implicit when people bookmark a paper in external services? Why not actually use that as the rating mechanism? I emphasised the need for a service to be useful to the user before there are any social effects present. What can be offered to make the process of rating a paper useful to the single user in isolation? I can't really see why anyone would find this useful unless they are dealing with huge numbers of papers and can't remember which one is which from day to day. It may be useful within groups or journal clubs, but all of these require a group to sign up. It seems to me that if we can't frame it as a useful activity for a single person then it will be difficult to get the numbers required to make this work effectively on a community scale. In that context, I think getting the numbers to around the 10-20% level for either comments or ratings has to be seen as an immense success. I think it shows how difficult it is to get scientists to change their workflows and adopt new services. I also think there will be a lot to learn about how to improve these tools and get more community involvement.
I believe strongly that we need to develop better mechanisms for handling peer review, and that it will be a very difficult process getting there. But the results will be seen in more efficient dissemination of information and more effective communication of the details of the scientific process. For this, PLoS and the PLoS ONE team, as well as other publishers, including BioMedCentral and Nature Publishing Group, that are working on developing new means of communication and improving the ones we have, deserve applause. They may not hit on the right answer first off, but the current process of exploring the options is an important one, and not without its risks for any organisation.

Full disclosure: I was approached, along with a number of other bloggers, to look at the data provided by PLoS ONE and to coordinate the release of blog posts discussing that data. At the time of writing I am not aware of who the other bloggers are, nor have I read what they have written. The data that was provided included a list of all PLoS ONE papers up until 30 July 2008 and the number of citations, CiteULike bookmarks, trackbacks, comments, and ratings for each paper. I also received a table of all comments and a timeline of the number of comments per month. I have been asked not to release the raw data and will honour that request, as it is not my data to release. If you would like to see the underlying data, please get in contact with Bora Zivkovic.
