Thanks to Du Jian for this information. The fourth edition of Biomedical Informatics: Computer Applications in Health Care and Biomedicine, edited by Edward H. Shortliffe, a biomedical informatics expert at the Columbia University medical school, is due out in October 2013 (http://people.dbmi.columbia.edu/~ehs7001/TOC_4th-edition.html). The new edition adds a chapter on translational bioinformatics (see the attached "4th edition - Chapter 1 - complete.pdf").

Biomedical Informatics: Computer Applications in Health Care and Biomedicine, Fourth Edition
E.H. Shortliffe, Editor; J.J. Cimino, Associate Editor

Table of Contents

Front Matter: Front Pages and Dedication; Preface; Acknowledgements; Contributors

I. Recurrent Themes in Biomedical Informatics
1. Biomedical Informatics: The Science and the Pragmatics (Edward H. Shortliffe and Marsden S. Blois)
2. Biomedical Data: Their Acquisition, Storage, and Use (Edward H. Shortliffe and G. Octo Barnett)
3. Biomedical Decision Making: Probabilistic Clinical Reasoning (Douglas K. Owens and Harold C. Sox)
4. Cognitive Science and Biomedical Informatics (Vimla L. Patel and David R. Kaufman)
5. Computer Architectures for Health Care and Biomedicine (Jonathan Silverstein and Ian Foster)
6. Software Engineering for Health Care and Biomedicine (Adam Wilcox, Scott Narus, and David Vawdrey)
7. Standards in Biomedical Informatics (W. Edward Hammond and Stan Huff)
8. Natural Language and Text Processing in Health Care and Biomedicine (Carol Friedman and Noémie Elhadad)
9. Imaging and Structural Informatics (Daniel L. Rubin and James Brinkley)
10. Ethics and Health Informatics: Users, Standards, and Outcomes (Kenneth W. Goodman, Reid Cushman, and Randolph A. Miller)
11. Evaluation and Technology Assessment (Charles P. Friedman and Jeremy C. Wyatt)

II. Biomedical Informatics Applications
12. Electronic Health Record Systems (Paul C. Tang, Clement J. McDonald, and George Hripcsak)
13. The Health Information Infrastructure (William A. Yasnoff)
14. Management of Information in Healthcare Organizations (Lynn Harold Vogel)
15. Patient-Care Systems (Judy Ozbolt, Suzanne Bakken, and Patricia Dykes)
16. Public Health Informatics (Martin LaVenture, David Ross, and William A. Yasnoff)
17. Consumer Health Informatics (Kevin Johnson, Holly Jimison, and Kenneth Mandl)
18. Telemedicine and Telehealth (Justin Starren, Michael Chiang, and Thomas S. Nesbitt)
19. Patient Monitoring Systems (Reed M. Gardner, Terry Clemmer, Scott Evans, and Roger G. Mark)
20. Imaging Systems in Radiology (Bradley Erickson and Robert A. Greenes)
21. Information Retrieval and Digital Libraries (William Hersh)
22. Clinical Decision-Support Systems (Mark A. Musen, Robert A. Greenes, and Blackford Middleton)
23. Computers in Health Science Education (Parvati Dev and Titus Schleyer)
24. Bioinformatics (Russ B. Altman and Sean Mooney)
25. Translational Bioinformatics (Jessica Tenenbaum, Nigam Shah, and Russ B. Altman)
26. Clinical Research Informatics (Philip Payne, Peter Embi, and James J. Cimino)

III. Biomedical Informatics in the Years Ahead
27. Health Information Technology Policy (Robert Rudin, Paul Tang, and David Bates)
28. The Future of Computer Applications in Biomedicine (Mark Frisse, Isaac Kohane, Valerie Florance, and Kenneth Mandl)

Back Matter: Bibliography; Glossary; Name Index; Subject Index
Due to popular request, here is my overview of some of the coolest stuff from Day 2 of CVPR 2012 in Providence, RI. While the lobster dinner was the highlight for many of us, there were also some serious learning/optimization-based papers presented during Day 2 worth sharing. Here are some of the papers which left me with a very positive impression.

Dennis Strelow of Google Research in Mountain View presented a general framework for Wiberg minimization. This is a strategy for minimizing objective functions with multiple variables -- objectives which are typically tackled in an EM-style fashion. The idea is to express one of the variables as a linear function of the other, effectively making the problem depend on only one set of variables. The technique is quite general and has been shown to produce state-of-the-art results on a bundle adjustment problem. I know Dennis from my second internship at Google, where we worked on some sparse-coding problems. If you tackle lots of matrix factorization problems, check out his paper!

Dennis Strelow, "General and Nested Wiberg Minimization," CVPR 2012

Another cool paper which is all about learning is Hossein Mobahi's algorithm for optimizing objectives by smoothing them to avoid getting stuck in local minima. This paper is not about blurry images, but about applying Gaussians to objective functions. In fact, for the problem of image alignment, Hossein provides closed-form versions of the resulting image operators. When you apply these operators to images, you efficiently smooth the underlying cross-correlation alignment objective. You decrease the blur while following the optimum path, and get much nicer answers than you would with naive image alignment.

Hossein Mobahi, C. Lawrence Zitnick, and Yi Ma, "Seeing through the Blur," CVPR 2012

Ira Kemelmacher-Shlizerman, of Photobios fame, showed a really cool algorithm for computing optical flow between two different faces based on learning a subspace (using a large database of faces). The idea is quite simple and allows for flowing between two very different faces, where the underlying operation produces a sequence of intermediate faces in an interpolation-like manner. She shared a video with us during her presentation, but it is on YouTube, so now you can enjoy it for yourself.

Ira Kemelmacher-Shlizerman and Steven M. Seitz, "Collection Flow," CVPR 2012

Now talk about cool ideas! Pyry, of CMU fame, presented a recommendation engine for classifiers. The idea is to take techniques from collaborative filtering (think Netflix!) and apply them to the classifier selection problem. Pyry has been working on action recognition, and the ideas presented in this work are not only quite general but also quite intuitive, and likely to benefit anybody working with large collections of classifiers.

Pyry Matikainen, Rahul Sukthankar, and Martial Hebert, "Model Recommendation for Action Recognition," CVPR 2012

And finally, a super-easy algorithm for metric learning presented by Martin Köstinger had me intrigued! This is a Mahalanobis distance metric learning paper which uses equivalence relationships. This means that you are given pairs of similar items and pairs of dissimilar items. The underlying algorithm is really not much more than fitting two covariance matrices, one to the positive equivalence relations and another to the non-equivalence relations. They have lots of code online, and if you don't believe that such a simple algorithm can beat LMNN (Large-Margin Nearest Neighbor from Kilian Weinberger), then get their code and hack away!

Martin Köstinger, Martin Hirzer, Paul Wohlhart, Peter M. Roth, and Horst Bischof, "Large Scale Metric Learning from Equivalence Constraints," CVPR 2012
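To make the "fitting two covariance matrices" remark concrete, here is a minimal numpy sketch of the recipe as I understand it from the paper. The function name, the eigenvalue clipping at the end, and the raw-feature input are my own simplifications (the authors apply PCA first, and their released code is the real reference):

import numpy as np

def learn_kiss_metric(X, similar_pairs, dissimilar_pairs):
    """Sketch of metric learning from equivalence constraints.

    X: (n, d) array of features; the pair lists hold (i, j) index tuples.
    Returns M for the Mahalanobis distance d(x, y) = (x-y)' M (x-y).
    """
    def diff_covariance(pairs):
        # Covariance of pairwise feature differences.
        D = np.array([X[i] - X[j] for i, j in pairs])
        return D.T @ D / len(pairs)

    cov_sim = diff_covariance(similar_pairs)      # "same" pairs
    cov_dis = diff_covariance(dissimilar_pairs)   # "different" pairs
    M = np.linalg.inv(cov_sim) - np.linalg.inv(cov_dis)

    # Clip negative eigenvalues so M defines a valid (PSD) metric.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

The difference of inverse covariances comes from the paper's likelihood-ratio view of the two pair distributions; everything else is bookkeeping.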
CVPR 2012 gave us many very math-oriented papers, and while I cannot list all of them, I hope you found my short list useful.

Tuesday, June 19, 2012
CVPR 2012 Day 1: Accidental Cameras, Large Jigsaws, and Cosegmentation

Today ended the first day of CVPR 2012 in Providence, RI. Here's a quick recap.

On the administrative end of things, Deva Ramanan received an award for his contributions to the field as a young researcher. This is a new nomination-based award, so be sure to vote for your favorite vision scientists next year! Deva's work has truly influenced the field: he is well-known for being a co-author of the Felzenszwalb et al. DPM object detector, and since then he has pushed his ideas on part-based models to the next level. Congratulations, Deva -- you are the type of researcher we should all strive to be. Secondly, it looks like CVPR 2015 will be in Boston.

Here are some noteworthy papers from the oral sessions of Day 1:

During the first oral session, Antonio Torralba gave an intriguing talk where he showed the world how accidental anti-pinhole and pin-speck cameras are "all around us." In his presentation, he showed how a person walking in front of a window can be used to image the scene outside that window. Additionally, he showed a variant of image-based Van Eck phreaking, where his technique could be used to view what is on a person's computer screen without having to look at the screen directly.

Antonio Torralba and William T. Freeman, "Accidental pinhole and pinspeck cameras: revealing the scene outside the picture," CVPR 2012

Andrew Gallagher gave a really great presentation on using computer vision to solve jigsaw puzzles where not only are the pieces jumbled, but their orientation is unknown. His algorithm was used to solve really large puzzles, ones which are much larger than could be tackled by a human.

Andrew Gallagher, "Jigsaw Puzzles with Pieces of Unknown Orientation," CVPR 2012

Gunhee Kim presented his newest work on co-segmentation. He has been working on this for quite some time, and if you are interested in segmentation in image collections, you should definitely check it out.

Gunhee Kim and Eric P. Xing, "On Multiple Foreground Cosegmentation," CVPR 2012

Sunday, June 17, 2012
Workshop on Egocentric Vision @ CVPR 2012

Today (Sunday 6/17/2012) is the second day of CVPR 2012 workshops, and I'll be going to the Egocentric Vision workshop. The workshop kicks off at 8:50am (come earlier for some CVPR breakfast) and will start with a keynote talk by Takeo Kanade. There will also be a talk by Hartmut Neven of Neven Vision, now a part of Google. Also, during the poster session, my colleague Abhinav Shrivastava will be presenting his work on applying Exemplar-SVMs to detection from a first-person point of view -- yet another super-cool application of Exemplar-SVMs.

Object detection from first person's view using exemplar SVMs

There are plenty of other cool talks during this workshop, including action recognition from a first-person point of view, experience classification, and a study of the obtrusiveness of wearable computing platforms by some fellow MIT vision hackers.

The accuracy-obtrusiveness tradeoff for wearable vision platforms

You might be thinking, "What is egocentric vision?"
Nothing explains it better than the following video from Google about its super-exciting research project, codenamed Project Glass. I'm really hoping Hartmut talks about this... If you're looking for me, you know where I'll be tomorrow. Happy computing.

Wednesday, May 23, 2012
Why your vision lab needs a reading group

I have a certain attitude when it comes to computer vision research -- don't do it in isolation. Reading vision papers on your own is not enough. Learning how your peers analyze computer vision ideas will only strengthen your own understanding of the field and help you become a more critical thinker. And that is why at places like CMU and MIT we have computer vision reading groups.

The computer vision reading group at CMU (also known as MISC-read to the CMU vision hackers) has a long tradition, and Martial Hebert has made sure it is a strong part of the CMU vision culture. Other ex-CMU hackers, such as Sanjiv Kumar, have continued the vision reading group tradition at places such as Google Research in NY (correct me if this is no longer the case). I have brought the reading group tradition to MIT (where I'm currently a postdoc) because I was surprised there wasn't one already! In reality, we spend so much time talking about papers in an informal setting that I felt it was a shame not to do so in a more organized fashion.

My personal philosophy is that, for a vision researcher, the way towards the goal of creating novel, long-lasting ideas is learning how others think about the field. There's a lot of value in being able to analyze, criticize, and re-synthesize other researchers' ideas. Believe me when I say that a lot of new vision papers come out of top-tier vision conferences every year. You should be reading them! But not just reading them -- also criticizing them among your peers. Because once you learn to criticize others' ideas, you will become better at promulgating your own. Do not equate criticism with nasty words for the sake of being nasty -- good criticism stems from a keen understanding of what must be done in science to convince a broad audience of your ideas.

In case you want to start your own computer vision reading group, I've collected some tips, tricks, and advice:

1. You don't need faculty. If you can't find a seasoned vision veteran to help you organize the event, do not worry. You just need 3+ people interested in vision and the motivation to maintain weekly meetings. Who cares if you don't understand every detail of every paper! Nobody besides the authors will ever understand every detail.

2. Be fearless. Ask dumb questions. Alyosha Efros taught me that if you're reading a paper or listening to a presentation and you don't understand something, there's a good chance you're not the only one in the audience with the same question. Sometimes younger PhD students are afraid of "asking a dumb question" in front of an audience. But if you love knowledge, then it is your duty to ask. Silence will not get you far. Be bold, be curious, and grow wise.

3. Choose your own papers to present. Do not present papers that others want you to present -- that is better left for a seminar course led by a faculty member. In a reading group it is very important that you care about the problems you will be discussing with your peers. If you keep this up, then when it comes to "paper writing time" you will be up to date on many relevant papers in your field, and you will know about your lab mates' research interests.

4.
It is better to show a paper PDF up on a projector than to cancel a meeting. Even if everybody is busy and the presenter didn't have time to create slides, it is important to keep the momentum going.

5. After a major conference, have all of the people who attended present their "top K papers." The week after CVPR it is valuable to have such a massive vision brain dump onto your peers, because it is unlikely that everybody got to attend.

6. Book a room every week and try to have the meeting at the same time and place. Have either the presenter or the reading group organizer send out an announcement with the paper they will be presenting ahead of time. At MIT we share a Google doc with information about interesting papers, and the upcoming presenter usually chooses the paper one week in advance so that the following week's presenter doesn't choose the same paper. If somebody has already presented your paper, don't do it a second time! Choose another paper. cvpapers.com is a great resource for finding upcoming papers. At CMU, there is a long rotating schedule which includes every vision student and faculty member. Once it is your turn to present, you can only get off the hook if you swap your slot with somebody else. Being on a schedule months in advance means you'll have lots of time to prepare your slides.

At MIT, we are currently following an object recognition / scene understanding / object detection theme, where we (Prof. Torralba, his students, his postdocs, his visiting students, etc.) choose papers highly relevant to our interests. By keeping such a focus, we can really jump into the relevant details without having to explain fundamental concepts such as SVMs, features, etc. At CMU, however, the reading group is much broader, because on the queue are students and professors interested in all aspects of vision and related fields such as graphics, illumination, geometry, and learning.

Wednesday, April 18, 2012
One Part Basis to Rule them All: Steerable Part Models

Last week, some of us vision hackers at MIT started an Object Recognition Reading Group. The group is currently in stealth mode, but our goal is to analyze, criticize, and re-synthesize ideas from the object detection/recognition community. To inaugurate the group, I covered Hamed Pirsiavash's Steerable Part Models paper from the upcoming CVPR 2012 conference. As background reading, I went over the mathematical basics of learning with tensors (i.e., multidimensional arrays), which were outlined in their earlier NIPS 2009 paper, Bilinear Classifiers for Visual Recognition. After reading up on their work, I have a better grasp of what the trace operator actually does: it is nothing more than a Hermitian inner product defined on the space of linear operators from C^N to C^M (see this post for geometric interpretations of the trace).

Hamed Pirsiavash and Deva Ramanan, "Steerable part models," CVPR 2012

"Our representation can be seen as an approach to sharing parts." -- H. Pirsiavash and D. Ramanan

The idea behind this paper is relatively simple -- instead of learning category-specific part models independently, learn a part basis from which all category-specific part models are built. Consider the different parts learned by a deformable part model (see Felzenszwalb's DPM page for more info about DPMs), depicted below. If you take a close look, you see that the parts are quite general, and it makes sense to assume that there is a finite basis from which these parts come.
Parts from a part model

The model learns a steerable basis by factoring the matrix of all part filters into the product of two low-rank matrices. Because the basis is shared across categories, this performs both dimensionality reduction (likely to help prevent over-fitting, as well as to speed up the final detectors) and sharing (likely to boost performance).

The learned steerable basis

While the objective function is not convex, it can be tackled via a simple alternating optimization algorithm in which the resulting sub-objectives are convex and can be optimized using off-the-shelf linear SVM solvers. They call this property bi-convexity; it doesn't guarantee finding the global optimum, it just makes using standard tools easy.
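To see the bi-convex alternation in miniature, here is a toy numpy sketch. I've swapped the paper's SVM loss for a plain least-squares objective ||W - C B||^2, so this only illustrates the structure of the alternation (fix one factor, solve a convex problem for the other), not their actual training procedure:

import numpy as np

def alternating_factorization(W, k, n_iters=50, seed=0):
    """Factor W (p part filters, each d-dimensional) as C @ B,
    where B is a (k, d) shared basis and C holds steering coefficients."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((k, W.shape[1]))
    for _ in range(n_iters):
        # Fix B: solving for the coefficients C is a convex least-squares problem.
        C = np.linalg.lstsq(B.T, W.T, rcond=None)[0].T
        # Fix C: solving for the basis B is likewise convex.
        B = np.linalg.lstsq(C, W, rcond=None)[0]
    return C, B

In the paper, each convex step keeps the filters discriminative (an SVM problem) rather than merely reconstructive, which is the point of learning the basis jointly with the detectors.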
While the results on PASCAL VOC2007 do not show an improvement in performance (VOC2007 is not a very good dataset for evaluating sharing, since only a few category combinations, e.g., bicycle and motorbike, should in theory benefit significantly), they do show a significant computational speed-up. Below is a picture of the part-based car model from Felzenszwalb et al., as well as the one from their steerable basis approach. Note that the HOG visualizations look very similar.

In conclusion, this is one paper worth checking out if you are serious about object recognition research. The simplicity of the approach is a strong point, and if you are a HOG hacker (like many of us these days) then you will be able to understand the paper without a problem.

Tuesday, April 17, 2012
Using Panoramas for Better Scene Understanding

There's a lot more to automated object interpretation than merely predicting the correct category label. If we want machines to be able to one day interact with objects in the physical world, then predicting additional properties of objects, such as their attributes, segmentations, and poses, is of utmost importance. This has been one of the key motivations in my own research behind exemplar-based models of object recognition.

The same argument holds for scenes. If we want to build machines which understand the environments around them, then they will have to do much more than predict some sloppy "scene category." Consider what happens when a machine automatically analyzes a picture and says that it is from the "theatre" category. The picture could be of the stage, the emergency exit, or just about anything else within a theater -- in each of these cases, the "theatre" category would be deemed correct, but would fall short of explaining the content of the image. Most scene understanding papers either focus on getting the scene category right or strive to obtain a pixel-wise semantic segmentation map. However, there's more to scenes than meets the eye.

There is an interesting paper, to be presented this summer at the CVPR 2012 conference in Rhode Island, which tries to bring the concept of "pose" into scene understanding. Pose estimation has already been well established in the object recognition literature, but this is one of the first serious attempts to bring this way of thinking into scene understanding.

J. Xiao, K. A. Ehinger, A. Oliva, and A. Torralba, "Recognizing Scene Viewpoint using Panoramic Place Representation," Proceedings of the 25th IEEE Conference on Computer Vision and Pattern Recognition, 2012.

The SUN360 panorama project page also has links to code, etc. The basic representation unit of places in their paper is the panorama. If you've ever taken a vision course, then you have probably stitched some of your own. Below are some examples of cool-looking panoramas from their online gallery. A panorama roughly covers the space of all images you could take while centered within a place.

Car interior panoramas from the SUN360 page
Building interior panoramas from the SUN360 page

What the proposed algorithm accomplishes is twofold. First, it acts like an ordinary scene categorization system, but in addition to producing a meaningful semantic label, it also predicts the likely view within a place. This is very much like predicting that there is a car in an image, and then providing an estimate of the car's orientation. Below are some pictures of inputs (left column), a compass-like visualization which shows the orientation of the picture (with respect to a cylindrical panorama), as well as a depiction of the likely image content falling outside of the image boundary. The middle column shows per-place mean panoramas (in the style of TorralbaArt), as well as the input image aligned with the mean panorama.

I think panoramas are a very natural representation for places -- perhaps not as rich as a full 3D reconstruction, but definitely much richer than static photos. If we want to build better image understanding systems, then we should seriously start looking at richer sources of information than static images. There is only so much you can do with static images and MTurk; videos, 3D models, panoramas, etc. are likely to be big players in the upcoming years.
Use bc (the comparison operators below were eaten by HTML escaping in the original; they are reconstructed to match the outputs shown):

$ echo "0.8 > 0.7" | bc
1
$ echo "0.8 < 0.7" | bc
0
$ echo ".08 > 0.7" | bc
0

Use awk:

x="0.80"
y="0.70"
result=$(awk -v x="$x" -v y="$y" 'BEGIN{ print (x >= y) ? 1 : 0 }')
if [ "$result" -eq 1 ]; then
    echo "x more than y"
fi
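A variant worth knowing (my own addition, assuming x and y hold numeric strings as above): awk can report the comparison directly through its exit status, so you can skip capturing the output:

if awk -v x="$x" -v y="$y" 'BEGIN{ exit !(x >= y) }'; then
    echo "x more than y"
fi

Since awk exits 0 (success) when the comparison holds, the shell's if consumes the result directly.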
There are two main ways to create a dynamic-link library from MATLAB code. The first is to use the Add-in that MATLAB provides for the VC++ IDE. This approach is simple and convenient: when creating a project in VC++, select the Matlab Project Wizard, and in Step 1 set the Visual Matlab Application Type to Shared M-DLL. Then add your *.m files and compile. The second is to use MATLAB's mcc command to compile *.m files into a dynamic-link library (*.DLL). Since the Add-in itself invokes the Compiler's mcc command to do the actual compilation, and since the Add-in occasionally fails to work, this article focuses on the mcc approach.

mcc accepts many parameters and supports several usage patterns. The main options are:

-a filename: Add filename to the CTF archive.
-b: Generate an Excel-compatible formula function. Requires MATLAB Builder for Excel.
-B filename: Replace -B filename on the mcc command line with the contents of filename; the file should contain only mcc command-line options. MathWorks-supplied bundle files include -B csharedlib:foo (C shared library) and -B cpplib:foo (C++ library).
-c: Generate C wrapper code. Equivalent to -T codegen.
-d directory: Place output in the specified directory.
-f filename: Use the specified options file, filename, when calling mbuild. Running mbuild -setup is recommended.
-g: Generate debugging information.
-G: Same as -g.
-I directory: Add directory to the search path for M-files. The MATLAB path is automatically included when running from MATLAB, but not when running from a DOS/UNIX shell.
-l: Macro for building a library. Equivalent to -W lib -T link:lib.
-m: Macro for building a standalone C application. Equivalent to -W main -T link:exe.
-M string: Pass string to mbuild. Use to define compile-time options.
-N: Clear the path of all but a minimal, required set of directories.
-o outputfile: Specify the name of the final executable; the appropriate extension is added.
-P directory: Add directory to the compilation path in an order-sensitive context. Requires the -N option.
-R option: Specify run-time options for the MCR, where option = -nojvm | -nojit.
-S: Create a singleton MCR. Requires MATLAB Builder for COM.
-T target: Specify the output stage, where target = codegen | compile:bin | link:bin and bin = exe | lib.
-v: Verbose; display the compilation steps.
-w option: Display warning messages, where option = list | level | level:string and level = disable | enable | error.
-W type: Control the generation of function wrappers, where type = main | cpplib:string | lib:string | none | com:compname,clname,version.
-Y licensefile: Use licensefile when checking out a Compiler license.
-z path: Specify the path for library and include files.
-?: Display help information.
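As a minimal worked example (assuming a single file foo.m containing a function foo; the names here are hypothetical), the library macro above means the following two invocations should build the same C shared library, the second being the bundle-file shorthand:

mcc -W lib:libfoo -T link:lib foo.m
mcc -B csharedlib:libfoo foo.m

On Windows this yields libfoo.dll plus a generated header, and the host VC++ program is expected to call the generated initialization and termination entry points (named along the lines of libfooInitialize and libfooTerminate) around any calls into the library.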
When compute nodes experience a hard reboot (e.g., when the compute node is reset by pushing the power button after a power failure), they will reformat the root file system and reinstall their base operating environment. To disable this feature:

• Log in to the frontend.

• Create a file that will override the default:

# cd /export/rocks/install
# cp rocks-dist/arch/build/nodes/auto-kickstart.xml \
site-profiles/5.3/nodes/replace-auto-kickstart.xml

where arch is "i386" or "x86_64".

• Edit the file site-profiles/5.3/nodes/replace-auto-kickstart.xml and remove the line:

<package>rocks-boot-auto</package>

• Rebuild the distribution:

# cd /export/rocks/install
# rocks create distro

• Reinstall all your compute nodes.

An alternative to reinstalling all your compute nodes is to log in to each compute node and execute:

# /etc/rc.d/init.d/rocks-grub stop
# /sbin/chkconfig --del rocks-grub