I enjoyed everything about my first visit to Seattle in 11 years, except for the mess at the Sea-Tac airport. So I was determined to take the Link Light Rail from downtown to Sea-Tac at the end of my visit. (A friend also told me that a rental car picked up at the airport is usually more expensive, another reason to take public transportation to Sea-Tac.)

Finding the entrance to the Light Rail was not easy, because it is not marked. Most people would start the trip by taking a bus into the "bus tunnel," from where it's easy to get to the Light Rail. When we finally found the entrance (next to the Nordstrom store), the adventure continued...

First, I didn't read every small sign on the way down to the platform; I just wanted to get there. I didn't even bother with the elevator at the end, just carried my luggage down the stairs. Once on the platform, I didn't know what the Light Rail looked like, so I asked a guy in uniform. He said this was where you board, but I looked around and didn't see how I could get a ticket. So I was sent back upstairs; this time, I used the elevator. Someone helped me start the conversation with the ticket machine, and the rest was easy. A one-way ticket costs $2.75 and can be purchased with a credit card.

No one was there to check tickets, and we all got on the train. I was curious about the system and started to look around. There were some handouts, one of which announced "A fine of USD 124" for not having a ticket (or for not using the pre-paid card correctly). I wondered if anyone would dare ride a cheap train and risk a USD 124 fine. Midway, two guys in uniform showed up to check tickets. To my surprise, two people were "caught" without proper tickets. One looked like a foreigner (non-Asian), holding dollar bills in her hand. She was lectured, but not fined. The other looked like a college student. He was told that he hadn't activated his pre-paid card properly, and was entered into the city database for the record. Neither was fined.

Clearly, the system is still new, and the fines will not come any time soon. The goal is to educate people!

Link to Sound Transit in Seattle: http://www.soundtransit.org/Rider-Guide/Link-light-rail.xml
Dear Dr. XXXXX,

Thank you once again for reviewing the above-referenced paper. With your help, the following final decision has now been reached: Accept.

We appreciate your time and effort in reviewing this paper and greatly value your assistance as a reviewer for XXX XXX.

If you have not yet activated or completed your 30 days of access to Scopus, you can still access Scopus via this link: http://scopees.elsevier.com/ees_login.xxxxxxxxxx You can use your EES password to access Scopus via the URL above. You can save your 30-day access period, but access will expire 6 months after you accepted to review.

Yours sincerely,
XXX XXX, Dr.
Editor, xxxxxx journal
Title: DATA COMPRESSION FOR RADAR SIGNALS: AN SVD BASED APPROACH
Author: ZHEN ZHOU
Degree: MASTER OF SCIENCE
Institution: STATE UNIVERSITY OF NEW YORK AT BINGHAMTON
Date: May 2001
Abstract: Multiple-platform coherent location systems operate by computing the time difference of arrival (TDOA) and frequency difference of arrival (FDOA) among signals received at geographically separated platforms. The bandwidth of the data link is the major bottleneck in the processing. Previously developed data compression methods cannot satisfy the compression ratio and accuracy requirements, because they were designed for the generic signal case and do not fully exploit the characteristics of the radar signal. The new compression scheme presented in this thesis is built from the ground up with the characteristics of the radar signal in mind. It is based on the idea that a radar pulse train can be modelled as one prototype pulse plus, for each pulse, a parameter vector that transforms the prototype into that specific pulse. A compression ratio of 10-20:1 has been achieved, with minor, if any, degradation in FDOA/TDOA accuracy in most cases. The two major techniques involved are the fractional delay filter and the singular value decomposition (SVD). The thesis starts with some preliminary technical background used later on. Two chapters are then dedicated to the fractional delay filter and the SVD method, respectively. In addition to presenting and verifying the compression scheme, a newly developed LMS adaptive FIR fractional delay filter and an alternative to cross-ambiguity processing based on the parameterization method are developed. Extensive simulation results are presented throughout the thesis. In the last chapter, conclusions and suggestions for future work are given.
This thesis mainly addresses the compression of radar data exchanged among the platforms of a multiple-platform location system, in order to reduce system complexity. The thesis comprises four chapters (67 pages):
1 Background
2 The fractional delay filters
3 The SVD method
4 Conclusions and suggestions for future work
2001-05 Data compression for radar signals an SVD based approach.pdf
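To make the abstract's central idea concrete, here is a minimal sketch of SVD-based pulse-train compression: the pulses are stacked as rows of a matrix and only the dominant components are kept. The toy pulse shape, noise level, and rank are illustrative assumptions; the thesis's actual scheme also involves fractional delay filtering, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pulse train: each pulse is a scaled copy of one prototype pulse
# plus a little noise (a stand-in for real radar returns).
n_pulses, n_samples = 50, 256
t = np.linspace(0, 1, n_samples)
prototype = np.sin(2 * np.pi * 8 * t) * np.exp(-4 * t)   # assumed toy pulse shape
amps = 1.0 + 0.1 * rng.standard_normal(n_pulses)
pulses = np.outer(amps, prototype) + 0.01 * rng.standard_normal((n_pulses, n_samples))

# Truncated SVD: keep only r dominant components. Because every pulse is
# close to a scaled prototype, rank 1 already captures almost everything.
U, s, Vt = np.linalg.svd(pulses, full_matrices=False)
r = 1
reconstructed = U[:, :r] * s[:r] @ Vt[:r, :]

# What would be transmitted: U[:, :r], s[:r], Vt[:r, :].
ratio = pulses.size / (U[:, :r].size + s[:r].size + Vt[:r, :].size)
err = np.linalg.norm(pulses - reconstructed) / np.linalg.norm(pulses)
print(f"compression ratio ~ {ratio:.1f}:1, relative error {err:.3f}")
```

With these toy numbers the matrix is essentially rank one, so the compression ratio is large and the relative error stays small; real radar data would need a higher rank and pulse alignment first.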
NTCIR-9 Cross-lingual Link Discovery Task http://ntcir.nii.ac.jp/CrossLink
############################################################
Introduction:
Cross-lingual link discovery (CLLD) is a way of automatically finding potential links between documents in different languages. It is not unlike traditional cross-lingual information retrieval (CLIR): CLIR can be viewed as a process of creating a virtual link between a given cross-lingual query and the retrieved documents, whereas CLLD actively recommends a set of meaningful anchors in the source document and uses them, together with contextual information from the text, as queries to establish actual links to documents in other languages.
Wikipedia is an online multilingual encyclopaedia containing an enormous number of articles covering most written languages on the planet, and it includes extensive hypertext links between documents of the same language for easy reading and referencing. However, pages in different languages are rarely interrelated, except for the cross-lingual links between pages about the same subject. This can pose serious difficulties for users who try to seek information or knowledge from sources in different languages. Cross-lingual link discovery therefore tries to break the language barrier in knowledge sharing. With CLLD, users can discover documents in languages they may or may not be familiar with, or in languages that have a richer set of documents than their language of choice.
For English there are several link discovery tools that assist topic curators in discovering prospective anchors and targets for a given document. No such tools yet exist that support the cross linking of documents in multiple languages. This task aims to incubate technologies assisting CLLD and to enhance the user experience of viewing or editing documents in a cross-lingual manner.
Language differences, ambiguities, and other language issues such as Chinese segmentation all make this task even more challenging. Researchers interested in cross-lingual link discovery are welcome to join us. In particular, researchers from either the CLIR or the link discovery community are encouraged to participate in this exciting task. To participate, please visit the registration page: http://research.nii.ac.jp/ntcir/ntcir-9/howto.html . You will also have to sign a user agreement form; details will be announced by NII later.
Task Definition:
Generally, a link between documents can be classified as either outgoing or incoming; in this task we focus on outgoing links starting from English source documents and pointing to Chinese, Korean, and Japanese target documents. The CLLD task comprises the following three subtasks:
* English to Chinese CLLD
* English to Japanese CLLD
* English to Korean CLLD
Participants can choose one or more of the above three subtasks. The English topics and the target corpora consist of actual Wikipedia pages in XML format with rich structured information. To submit a run, participants are required to choose the most suitable anchors from the topic document and, for each anchor, identify the most suitable documents in the test corpus. For each topic we will allow up to 50 anchors, each with up to 5 targets, so there is a total of up to 250 outgoing links per topic.
Topic and Document Collections:
Two sets of 25 articles chosen from the English Wikipedia will be used as topics, one set for the dry run and one for the formal run. These topics will be orphaned by removing all links to them (from the collection) and from them (to the collection). The corresponding pages in Chinese, Japanese and Korean will also be removed from those collections. The training and test collections for the three subtasks are exactly the same.
The collections are formed from search-engine-friendly XML files created from Wikipedia MySQL database dumps taken in June 2010. The details of the collections are as follows:
Language   Articles   Size   Dump date
Chinese    318,736    2.7G   27/06/2010
Japanese   716,088    6.1G   24/06/2010
Korean     201,596    1.2G   28/06/2010
Assessment and Evaluation:
There will be two types of assessment: automatic assessment using the Wikipedia ground truth (existing cross-lingual links), and manual assessment done by human assessors. For the latter, all submissions will be pooled, and a GUI tool for efficient assessment will be used. In manual assessment, either the anchor candidate or the target link can be judged relevant (or non-relevant). Once an anchor candidate is assessed as non-relevant, all links associated with that anchor become non-relevant. After the assessment, the performance of each cross-lingual link discovery system will be evaluated using the Precision, Recall and Mean Average Precision metrics.
FOR MORE DETAILS: Please visit http://ntcir.nii.ac.jp/CrossLink
Please also note that the registration deadline is December 20, 2010 (for all NTCIR-9 tasks).
ORGANIZERS:
Shlomo Geva, Queensland University of Technology, Australia
Andrew Trotman, University of Otago, New Zealand
Yue Xu, Queensland University of Technology, Australia
Eric Tang, Queensland University of Technology, Australia
Darren Huang, Queensland University of Technology, Australia
If you have any questions, please contact Eric Tang ( l4.tang@qut.edu.au ) or send an email to the task mailing list: crosslink@lists.otago.ac.nz
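As a rough illustration of how a ranked list of discovered links might be scored with these metrics, here is a minimal sketch. It is not the official NTCIR evaluation tool, and the example run and ground truth below are made up for illustration.

```python
# Score one topic's ranked list of (anchor, target) links against a
# ground-truth set, using Precision, Recall, and Average Precision.

def precision_recall(ranked_links, relevant):
    hits = sum(1 for link in ranked_links if link in relevant)
    precision = hits / len(ranked_links) if ranked_links else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def average_precision(ranked_links, relevant):
    # Average of the precision values at each rank where a relevant
    # link appears, normalised by the number of relevant links.
    hits, score = 0, 0.0
    for rank, link in enumerate(ranked_links, start=1):
        if link in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

# Hypothetical run: links are (anchor text, target article) pairs.
run = [("NTCIR", "zh:123"), ("Wikipedia", "zh:456"), ("Tokyo", "zh:789")]
truth = {("NTCIR", "zh:123"), ("Tokyo", "zh:789")}

p, r = precision_recall(run, truth)
ap = average_precision(run, truth)
print(f"P={p:.3f} R={r:.3f} AP={ap:.3f}")  # P=0.667 R=1.000 AP=0.833
```

Mean Average Precision for a whole run would then simply average `ap` over all topics.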
Many complex systems can be well described by networks, where nodes represent individuals or agents and links denote the relations or interactions between nodes. Recently, link prediction in complex networks has attracted more and more attention from computer scientists and physicists. Link prediction aims at estimating the likelihood of the existence of a link between two nodes, based on the observed links and the attributes of the nodes. For example, classical information retrieval can be viewed as predicting missing links between words and documents, and the process of recommending items to a user can be considered a link prediction problem in the user-item bipartite network. Attached please find two newly published papers on the problem of link prediction. One (EPJB) discusses missing-link prediction via local information. The other (PRE) introduces an efficient and effective similarity index, called the Local Path index, for link prediction. PRE_80_046122 EPJB_71_623
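For readers curious what a local similarity index looks like in practice, here is a minimal sketch in the spirit of the Local Path index: S = A^2 + eps * A^3, i.e. the common-neighbour count plus a damped count of length-three walks. The toy network and the value of eps are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def local_path_scores(A, eps=0.01):
    """Local Path style similarity matrix: S = A^2 + eps * A^3."""
    A2 = A @ A
    return A2 + eps * (A2 @ A)

# Toy undirected network with edges 0-1, 1-2, 0-2, 2-3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

S = local_path_scores(A)

# Score for the non-existent link (0, 3): one length-2 walk (0-2-3)
# plus eps times one length-3 walk (0-1-2-3), i.e. 1 + 0.01.
print(S[0, 3])
```

Ranking all non-existent node pairs by their entries in S, highest first, gives the predicted missing links; eps controls how much the longer paths contribute.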