ScienceNet (科学网)


Tag: signal


Related Blog Posts

[Repost] 0057: Recommended Journals for Submissions in the Signal Processing Field
cwhe10 2018-1-21 09:35
1. IEEE Signal Processing Magazine (IF: 2.655). Six issues a year. Given that impact factors in signal processing journals are generally low, an IF like this speaks to the magazine's prestige. In other words, at this stage it doesn't seem like a place where ordinary people like me can casually publish, heh, but it's fine to look up to it. It's a bimonthly with color pages, which makes for pleasant reading, and in my experience the articles are all very new and very hot topics. Part of it appears to be an invited-paper section, and the invited authors are of course the big names in signal processing, writing about how they do research, how they write papers, and so on. For example, H.S. Malvar, who proposed the MLT (i.e., the MDCT, used mainly in audio), has published two articles here: "Effective Communication: Tips on Technical Writing" (http://ieeexplore.ieee.org/iel5/ ... mp;isnumber=4490183) and "Leading Research and Innovation" (http://ieeexplore.ieee.org/iel5/ ... isnumber=36051). The rest seems to cover currently popular new research directions, which are too broad for me to summarize; there's plenty I've never even heard of. Two more interesting things about this magazine: (1) The article that left the deepest impression on me has a striking title: "Signal Processing: is the future bright?" (http://ieeexplore.ieee.org/iel5/ ... isnumber=31384). The name is so resounding that I remembered it, which goes to show that titles matter. (2) Browsing the magazine recently, I noticed that the French LTSI laboratory, which collaborates closely with our lab, published a paper here in 2008 (whether they had before, I don't know), which deserves congratulations: "ICA: A Potential Tool for BCI Systems" (http://ieeexplore.ieee.org/iel5/ ... mp;isnumber=4404853).
2. IEEE Transactions on Signal Processing (TSP) (IF: 4.3). Twelve issues a year, occasionally with two parts per issue. Also a very good journal in signal processing. I have some regrets here: I came close to publishing a full paper in it. My very first paper went to this journal; I was too green, and the rejection was understandable. I've dealt with it directly twice (two submissions, both rejected, both transferred to other journals). My overall impression is that it is extremely strict; to put it bluntly, the reviewers pick your paper apart, and their scientific rigor is on full display. One reviewer actually printed our manuscript, marked up the needed corrections in red pen, scanned it to PDF, and sent it to us through the associate editor as an attachment for us to follow; thinking back on it still moves me. My sense is that from the 1960s to the 1990s this journal published many papers on fast transform algorithms, but publishing similar work there now seems difficult (unless the paper is genuinely innovative and its results are the best among all methods). If you're writing a fast-transform paper and don't feel it has strong novelty, better not to submit here, lest the associate editor handling your paper simply suggest you withdraw it; my second submission ran into exactly that awkward situation. Reviewing seems fast: the first round for my first paper took just over a month, though the second round took more than three. There are usually three reviewers; if one objects, a fourth is brought in. With a 2:2 split the paper can still be rejected (that is how my first paper was rejected after the second round).
3. IEEE Transactions on Circuits and Systems-I: Regular Papers (CAS-I) (IF: 1.139). Twelve issues a year. Surprisingly, this circuits-and-systems journal also publishes fast transform algorithm papers. After my first paper was rejected by TSP, I added a good deal of new material and resubmitted it here; it was accepted this August, which made me quite happy, and the author biography even comes with a photo.
The review cycle, though, feels too long: on submission you receive a letter saying it will take at least six months to reach an initial decision, which is honestly discouraging. (Tracking my own manuscript's status, it indeed took five to six months before an associate editor was assigned and normal reviewing began.) This journal seems to assign only two reviewers; with a 1:1 split, I don't know whether the associate editor decides, as I never hit that case. The reviewing feels less strict than TSP, though strictness ultimately depends on the reviewers. One more thing I recall: the submission system looks fairly simple, heh.
4. Signal Processing: Image Communication (IF: 1.109). Ten issues a year. I haven't submitted here, but the few papers I've read seemed quite high quality; it leans toward image-oriented work.
5. IEE Electronics Letters (IF: 1.063). Twenty-five issues a year. The papers are all very short (maybe 2-3 pages in double-column format), briefly reporting the latest progress of your research. The novelty requirement is presumably high; I haven't submitted, so I can't say much.
6. IEEE Transactions on Circuits and Systems-II: Express Briefs (CAS-II) (IF: 0.922). Twelve issues a year. My second TSP-rejected paper was transferred here and accepted last December; it's now in print. This journal also usually assigns two reviewers, and the reviewing felt quite strict. The first round takes 2-3 months. One less pleasant point: the first decision only tells you to revise according to the reviews, without promising acceptance, which leaves you hanging a bit.
7. Digital Signal Processing (IF: 0.889). Six issues a year. Never submitted, but the papers I've read looked pretty good.
8. IEEE Signal Processing Letters (SPL) (IF: 0.722). Twelve issues a year. Although I've yet to have a first-author paper accepted here, it counts as an old friend: three submissions, three rejections. Recently a paper on which I'm second author was accepted. The required format is short (four double-column pages) but the novelty requirement is high, and reviewers have only two possible verdicts on your manuscript: accept or reject. Needing a major revision means rejection, though you can resubmit the revised version as a new paper. The review cycle is short: the stated promise seems to be a decision within three months, but answers usually come within one to two, since reviewers are given three weeks. I like efficient journals like this; despite the many rejections, at least you learn the outcome quickly after submitting.
9. Signal Processing (IF: 0.669). Twelve issues a year. A long-established journal and another old friend: I have one first-author and one second-author paper here. The reviewing feels less strict than SPL; the focus seems to be on whether your paper has a highlight. I like this journal; reviewing is fast, with a first round of roughly two months for Correspondence items and three to four months for Regular Papers. Most importantly, our hit rate here is high (100% so far), which is gratifying and makes us happy to keep submitting.
10. IET Signal Processing. Apparently launched only in 2007, but it is indexed in IEEE Xplore; four issues appeared in 2007. A newly established journal.
In addition, Dr. Chang Lubin (常路宾) has rich experience submitting to this class of journals!
Reposted from: https://zhidao.baidu.com/question/135342853.html
Category: Scientific Research | 11 reads | 0 comments
I Will Chair the 2015 IEEE Signal Processing Cup
Popularity 1 | oliviazhang 2014-6-20 11:36
After a vote by the technical committee chairs of the IEEE Signal Processing Society, my proposed competition plan was selected: next year's IEEE Signal Processing Cup will be chaired by me. The competition topic is: Heart Rate Monitoring During Physical Exercise Using Wrist-Type Photoplethysmographic (PPG) Signals. The contest will start around September or October this year; I will provide further details as they become available. The goal of this 2015 international signal processing competition is to have students apply the signal processing knowledge they have learned to practice. By solving a real problem they deepen that knowledge, and I hope the contest shows students that a real signal processing system is not simply a matter of using the best signal processing algorithm; all kinds of practical conditions must also be taken into account. The competition has far-reaching application scenarios and market relevance, and will attract not only students and faculty in academia but also a large number of companies, since many companies' smart-watch products already use PPG signals to monitor heart rate in real time, for example: Mio Alpha® Heart Rate Sport Watch, Philips Actiwatch®, Atlas®, Samsung Gear Fit® and Samsung Simband Fitness Tracker. Those interested in this competition can refer to the 2014 IEEE Signal Processing Cup: http://www.icassp2014.org/SP_cup.html Please feel free to repost, so that more teachers and students learn about it early and can start preparing the background knowledge.
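As a taste of the contest problem, here is a minimal, hypothetical sketch of the simplest approach: take the dominant spectral peak of a PPG window within a plausible cardiac band. The sampling rate, window length, and band edges below are illustrative assumptions, not contest specifications, and real exercise data is much harder because of motion artifacts.

```python
import numpy as np

def estimate_heart_rate(ppg, fs, f_lo=0.7, f_hi=3.5):
    """Estimate heart rate in BPM as the dominant spectral peak
    inside a plausible cardiac band (f_lo..f_hi Hz)."""
    x = ppg - np.mean(ppg)                  # remove DC so it can't win the peak search
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak                    # Hz -> beats per minute

# Toy demo: 8 s of synthetic "PPG" at an assumed 125 Hz sampling rate,
# with a 1.5 Hz (90 BPM) cardiac component buried in broadband noise
fs = 125.0
t = np.arange(0.0, 8.0, 1.0 / fs)
rng = np.random.default_rng(0)
ppg = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(t.size)
print(round(estimate_heart_rate(ppg, fs)))  # -> 90
```

A real entry would also need motion-artifact removal (e.g., using accelerometer channels) and tracking across windows; this sketch only shows the frequency-domain starting point.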
5544 reads | 1 comment
[Repost] fMRI Signal: Percent Signal Change FAQ
bnuzgy 2013-1-18 13:58
Percent Signal Change FAQ Frequently Asked Questions - Percent Signal Change PercentSignalChangePapers PercentSignalChangeHowTos PercentSignalChangeLinks Check out RoisFaq for more info about region-of-interest analysis in general... 1. What’s the point of looking at percent signal change? When is it helpful to do that? The original statistical analyses of functional MRI data, going way back to '93 or so, were based exclusively on intensity changes. It was clear from the beginning of fMRI studies that raw intensity numbers wouldn't be directly comparable across scanners or subjects or even sessions - average means of each of those things varies widely and arbitrarily. But simply looking at how much the intensity in a given voxel or region jumped in one condition relative to some baseline seemed like a good way to look at how big the effect of the condition was. So early block experiments relied on averaging intensity values for a given voxel in the experimental blocks, doing the same for the baseline block, and comparing the two of 'em. Relatively quickly, fancier forms of analysis became available, and it seemed obvious that correcting that effect size by its variance was a more sensitive analysis than looking at it raw - and so t-statistics came into use, and the general linear model, and so forth. So why go back to percent signal change? For block experiments, there are a couple reasons, but basically percent signal change serves the same function as beta weights might (see RoisFaq for more on them): a numerical measure of the effect size. Percent signal change is a lot more intuitive a concept than parameter weights are, which is nice, and many people feel that looking at a raw percent signal change can get you closer to the data than looking at some statistical measure filtered through many layers of temporal preprocessing and statistical evaluation. For event-related experiments, though, there's a more obvious advantage: time-locked averaging. 
Analyzing data in terms of single events allows you to create the timecourse of the average response to a single event in a given voxel over the whole experiment - and timecourses can potentially tell you something completely different than beta weights or contrasts can. The standard general linear model approach to activation assumes a shape for the hemodynamic response, and tests to see how well the data fit that model, but using percent signal change as a measure lets you actually go and see the shape of the HRF for given conditions. This can potentially give you all kinds of new information. Two voxels might both be identified as "active" by the GLM analysis, but one might have an onset two seconds before the next. Or one might have a tall, skinny HRF and one might have a short but wide HRF. That sort of information may shed new light on what sort of processing different areas are engaging in. Percent signal change timecourses in general also allow you to validate your assumptions about the HRF, correlate timecourses from one region with those from another, etc. And, of course, the same argument about percent signal change being somehow "closer" to the data still applies. Timecourses are rarely calculated for block-related experiments, as it's not always clear what you'd expect to see, but for event-related experiments, they're fast becoming an essential element of a study. 2. How do I find it? Good question, and very platform dependent. In AFNI and BrainVoyager, whole-experiment timecourses are easily found by clicking around, and in the Gablab the same is available for SPM with the Timeseries Explorer. Peristimulus timecourses, though, usually require some calculation. In SPM, you can get fitted responses through the usual results panel, using the plot command, but those are in arbitrary units and often heavily smoothed relative to the real data. The simplest way these days for SPM99 is to use the Gablab Toolbox's roi_percent code.
Check out RoiPercent for info about that function. That creates timecourses averaged over an ROI for every condition in your experiment, with a variety of temporal preprocessing and baseline options. In SPM2, the new Gablab roi_deconvolve is sort of working, although it's going to be heavily updated in coming months. It's based off AFNI's 3dDeconvolve function, which is the newest way to get peristimulus timecourses in AFNI. That's based on a finite impulse response (FIR) model (more on those below). BrainVoyager's ROI calculations will also automatically run an FIR model across the ROI for you. 3. How do those timecourse programs work? The simplest way to find percent signal change is perfectly good for some types of experiments. The basic steps are as follows:
- Extract a timecourse for the whole experiment for your given voxel (or extract the average timecourse for a region).
- Choose a baseline (more on that below) that you'll be measuring percent signal change from. Popular choices are "the mean of the whole timecourse" or "the mean of the baseline condition."
- Divide every timepoint's intensity value by the baseline, multiply by 100, and subtract 100, to give you a whole-experiment timecourse in percent signal change.
- For each condition C, start at the onset of each C trial. Average the percent signal change values for all the onsets of C trials together. Do the same thing for the timepoint after the onset of each C trial, e.g., average together the onset + 1 timepoint for all C trials. Repeat for each timepoint out from the onset of the trial, out to around 30 seconds or however long an HRF you want to look at.
You'll end up with an average peristimulus timecourse for each condition, and even a timecourse of standard deviations/confidence intervals if you like - enough to put confidence bars on your average timecourse estimate.
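The steps above can be sketched in a few lines of framework-free Python; the raw intensity values, onsets, and number of volumes below are toy assumptions, not from any particular package:

```python
import numpy as np

def percent_signal_change(ts, baseline=None):
    """Convert a voxel/ROI timecourse to percent signal change;
    the baseline defaults to the mean of the whole timecourse."""
    b = np.mean(ts) if baseline is None else baseline
    return 100.0 * ts / b - 100.0

def peristimulus_average(psc, onsets, n_points):
    """Time-locked averaging: grab n_points samples after each trial
    onset and average (and take the SD) across trials."""
    trials = np.array([psc[o:o + n_points] for o in onsets
                       if o + n_points <= len(psc)])
    return trials.mean(axis=0), trials.std(axis=0, ddof=1)

# Toy data: 100 volumes at ~1000 raw intensity, one condition with
# onsets every 20 volumes and a small evoked bump after each onset
bump = np.array([0, 5, 8, 5, 2] + [0] * 15, dtype=float)
ts = 1000.0 + np.tile(bump, 5)
psc = percent_signal_change(ts)
mean_tc, sd_tc = peristimulus_average(psc, range(0, 100, 20), n_points=10)
print(np.round(mean_tc, 2))  # peristimulus timecourse peaking (~0.7%) at timepoint 2
```

This is exactly the long event-related recipe: each timepoint contributes to one and only one trial's average, which is why it breaks down at short ISIs.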
This is the basic method, and it's perfect for long event-related experiments - where the inter-trial interval is at least as long as the HRF you want to estimate, so every experimental timepoint is included in one and only one average timecourse. This method breaks down, though, with short ISIs - and those are most experiments these days, since rapid event-related designs are hugely more efficient than long event-related designs. If one trial onsets before the response of the last one has faded away, then how do you know how much of the timepoint's intensity is due to the previous trial and how much due to the current trial? The simple method will result in timecourses that have the contributions of several trials (probably of different trial types) averaged in, and that's not what you want. Ideally, you'd like to be able to run trials with very short ISIs, but come up with peristimulus timecourses showing what a particular trial's response would have been had it happened in isolation. You need to be able to deconvolve the various contributions of the different trial types and separate them into their component pieces. Fortunately, that's just what AFNI's 3dDeconvolve, BrainVoyager QX, and the Gablab's roi_deconvolve all do. SPM2 also allows it directly in model estimation, and Russ Poldrack's toolbox allows it to some degree, I believe. They all use basically the same tool - the finite impulse response model. 4. What's a finite impulse response model? Funny you should ask. The FIR model is a modification of the standard GLM which is designed precisely to deconvolve different conditions' peristimulus timecourses from each other. The main modification from the standard GLM is that instead of having one column for each effect, you have as many columns as you want timepoints in your peristimulus timecourse. If you want a 30-second timecourse and have a 3-second TR, you'd have 10 columns for each condition. 
Instead of having a single model of activity over time in one column, such as a boxcar convolved with a canonical HRF, or a canonical HRF by itself, each column represents one timepoint in the peristimulus timecourse. So the first column for each condition codes for the onset of each trial; it has a single 1 at each TR that condition has a trial onset, and zeros elsewhere. The second column for each condition codes for the onset + 1 point for each trial; it has a single 1 at each TR that's right after a trial onset, and zeros elsewhere. The third column codes in the same way for the onset + 2 timepoint for each trial; it has a single 1 at each TR that's two after a trial onset, and zeros elsewhere. Each column is filled out appropriately in the same fashion. With this very wide design matrix, one then runs a standard GLM in the multiple regression style. Given enough timepoints and a properly randomized design, the design matrix then assigns beta weights to each column in the standard way - but these beta weights each represent activity at a certain temporal point following a trial onset. So for each condition, the first column tells you the effect size at the onset of a trial, the second column tells you the effect size one TR after the onset, the third column tells you the effect size two TRs after the onset, and so on. This clearly translates directly into a peristimulus timecourse - simply plot each column's beta weight against time for a given condition, and voila! A nice-looking timecourse. FIR models rely crucially on the assumption that overlapping HRFs add up in linear fashion, an assumption which seems valid for most tested areas and for most inter-trial intervals down to about 1 sec or so.
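A minimal sketch of such an FIR design matrix and its least-squares fit, with an invented 8-point "HRF" and randomized trial timing purely for illustration:

```python
import numpy as np

def fir_design_matrix(onsets, n_scans, n_points):
    """FIR design: one column per peristimulus timepoint; column k
    holds a 1 at every TR that lies k steps after a trial onset."""
    X = np.zeros((n_scans, n_points))
    for o in onsets:
        for k in range(n_points):
            if o + k < n_scans:
                X[o + k, k] = 1.0
    return X

# Invented ground truth: an 8-point "HRF" and 25 randomized onsets
n_scans, n_points = 200, 8
rng = np.random.default_rng(1)
onsets = np.sort(rng.choice(np.arange(n_scans - n_points), 25, replace=False))
true_hrf = np.array([0.0, 2.0, 5.0, 4.0, 2.0, 1.0, 0.5, 0.0])

# Overlapping responses add linearly (the FIR model's key assumption)
X = fir_design_matrix(onsets, n_scans, n_points)
y = X @ true_hrf + 0.1 * rng.standard_normal(n_scans)

# The multiple-regression betas trace out the peristimulus timecourse
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(betas, 1))  # close to true_hrf
```

Because overlapping responses enter the model additively, the least-squares fit disentangles trials even at short ISIs, which is exactly the deconvolution property described above.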
These timecourses can have arbitrary units if they're used to regress on regular intensity data, but if you convert your voxel timecourses into percent signal change before they're input to the FIR model, then the peristimulus timecourses you get out will be in percent signal change units. That's the tack taken by the Gablab new roi_percent. Some researchers have chosen to ignore the issue and simply report the arbitrary intensity units for their timecourses. By default, FIR models include some kind of baseline model - usually just a constant for a given session and a linear trend. That corresponds to choosing a baseline for the percent signal change of simply the session mean (and removing any linear trend). Most deconvolution programs include the option, though, to add other columns to the baseline model, so you could choose the mean of a given condition as your baseline. There are a lot of other issues in FIR model creation - check out the AFNI 3dDeconvolve model for the basics and more. 5. What are temporal basis function models? How do they fit in? Basis function models are a sort of transition step, representing the continuum between the standard, canonical-HRF, GLM analysis, and the unconstrained FIR model analysis. The standard analysis assumes an exact form for the HRF you're looking for; the FIR places no constraints at all on the HRF you get. But sometimes it's nice to have some kinds of constraints, because it's possible (and often happens) that the unconstrained FIR will converge on a solution that doesn't "look" anything like an HRF. So maybe you'd like to introduce certain constraints on the type of HRFs you'll accept. You can do that by collapsing the design matrix from the FIR a little bit, so each column models a certain constrained fragment of the HRF you'd like to look for - say, a particular upslope, or a particular frequency signature. 
Then the beta weight from the basis function model represents the effect size of that part of the HRF, and you can multiply the fragment by the beta weight and sum all the fragments from one condition to make a nice smooth-looking (hopefully) HRF. Basis function models are pretty endlessly complicated, and the interested reader is referred to the papers by Friston, Poline, etc. on the topic - check out the Friston et al., "Event-related fMRI," here: ContrastsPapers. 6. How do you select a baseline for your timecourse? What are pros and cons of possible options? Do some choices make particular comparisons easier or harder? Good question. Choosing a particular baseline places a variety of constraints on the shape of possible HRFs you'll see. The most popular option is usually to simply take the mean intensity of the whole timecourse - the session mean. The problem with that as a baseline is that you're necessitating that there'll be as much percent signal change under the baseline as over it. If activity is at its lowest point during the inter-trial interval or just before trial onset, then, that may lead to some funny effects, like the onset of a trial starting below baseline, and dramatic undershoots. As well, if you've insufficiently accounted for drifts or slow noise across your timecourse, you may overweight some parts of the session at the expense of others, depending on what shape the drift has. Alternatively, you could choose to have the mean intensity during a certain condition be the baseline. This is great if you're quite confident there's not much response happening during that condition, but if you're not, be careful. Choosing another condition as the baseline essentially calculates what the peristimulus timecourse of change is between the two conditions, and if there's more response at some voxels than you thought in the baseline condition, you may seriously underestimate real activations.
Even if you pick up a real difference between them, the difference may not look anything like an HRF - it may be constant, or gradually increase over the whole 30 seconds of timecourse. If you're interested in a particular difference between two conditions, this is a great option; if you're interested in seeing the shape of one condition's HRF in isolation, it's iffier. With long event-related experiments, one natural choice is the mean intensity in the few seconds before a trial onset - to evaluate each trial against its own local baseline. With short ISIs, though, the response from the previous trial may not have decayed enough to show a good clean HRF. 7. What kind of filtering should I do on my timecourses? Generally, percent signal analysis is subject to the same constraints in fMRI noise as the standard GLM, and so it makes sense to apply much of the same temporal filtering to percent signal analysis. At the very least, for multi-session experiments, scaling each session to the same mean is a must, to allow different sessions to be averaged together. Linear detrending (or the inclusion of a first-order polynomial in the baseline model, for the AFNI users) is also uncontroversial and highly recommended. Above that, high-pass filtering can help remove the low-frequency noise endemic to fMRI and is highly recommended - this would correspond to higher-order polynomials in the baseline model for AFNI, although studies have shown anything above a quadratic isn't super useful (Skudlarski et al., TemporalFilteringPapers). Low-pass filtering can smooth out your peristimulus timecourses, but can also severely flatten out their peaks, and has fallen out of favor in standard GLM modeling; it's not recommended. Depending on your timecourse, outlier removal may make sense - trimming the extreme outliers in your timecourse that might be due to movement artifacts. 8. How can you compare time courses across ROIs? Across conditions? Across subjects? (peak amplitude?
time to peak? time to baseline? area under curve?) How do I tell whether two timecourses are significantly different? How can you combine several subjects' ROI timecourses into an average? What's the best way? All of these are great questions, and unfortunately, they're generally open in the literature. FIR models generally allow contrasts to be built just as in standard GLM analysis, so you can easily do t- or F-tests between particular aspects of an HRF or combinations thereof. But what aspects make sense to test? The peak value? The width? The area under the curve? Most of these questions aren't super clear, although Miezin et al. (PercentSignalChangePapers) and others have offered interesting commentary on which parameters might be the most appropriate to test. Peak amplitude is the de facto standard, but faced with questions like whether the tall/skinny HRF is "more" active than the short/fat HRF, we'll need a more sophisticated understanding to make sense of the tests. As for group analysis of timecourses, that's another area where the literature hasn't pushed very far. A simple average of all subjects' condition A, for example, vs. all subjects' condition B may well miss a subject-by-subject effect because of differing peaks and shapes of HRFs. That simple average is certainly the most widely used method, however, and so fancier methods may need some justification. One fairly uncontroversial method might be simply analogous to the standard group analysis for regular design matrices - simply testing the distribution across subjects of the beta weight of a given peristimulus timepoint, for example, or testing a given contrast of beta weights across subjects.
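That last suggestion, testing the distribution across subjects of a given peristimulus beta weight, can be sketched as a one-sample t-test on the subject-wise contrast. The per-subject peak beta values below are invented for illustration:

```python
import numpy as np

# Hypothetical per-subject beta weights at the peak peristimulus
# timepoint for conditions A and B (one value per subject)
peak_beta_A = np.array([0.9, 1.2, 0.7, 1.1, 1.0, 0.8, 1.3, 0.95])
peak_beta_B = np.array([0.5, 0.7, 0.4, 0.9, 0.6, 0.5, 0.8, 0.55])

d = peak_beta_A - peak_beta_B                  # subject-wise contrast A - B
n = d.size
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))    # one-sample t on the contrast
print(f"t({n - 1}) = {t:.2f}")  # compare against the t critical value (about 2.36 for df=7, two-tailed .05)
```

The same pattern extends to any contrast of FIR betas (onset latency, area under the curve), though as noted above, which parameter is the "right" one to test remains an open question.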
5172 reads | 0 comments
Theory and Evaluation of Single-Molecule Signals
Irasater 2012-6-26 17:32
Single-molecule research: theory and evaluation of single-molecule signals.pdf Alternative download link: http://bbs.bbioo.com/thread-132696-1-1.html Eli Barkai's page: http://faculty.biu.ac.il/~barkaie/ Free papers and TeX sources: http://uk.arxiv.org/find/Computer+Science,Mathematics,Nonlinear+Sciences,Physics,Quantitative+Biology,Quantitative+Finance,Statistics/1/Barkai/0/1/0/all/8/0
Category: Professional Knowledge | 3232 reads | 0 comments
My Favorite Books on Signal Processing
spirituallife 2012-4-25 12:22
The list below includes some of the authors and corresponding books that have influenced me the most in the field of signal processing. The list I provide consists of my top 15 selections. I hope to have time to do a more detailed review of each of them someday in the future:
- Digital Signal Processing, Proakis (DSP Fundamentals)
- Introduction to Linear Algebra, Strang (Math Basics for SSP)
- Intuitive Probability and Random Processes using MATLAB, Kay (Math Basics for SSP)
- Statistical Digital Signal Processing and Modeling, Hayes (Statistical SP)
- Statistical and Adaptive Signal Processing, Manolakis (Statistical SP)
- Applied Optimal Estimation, Gelb (Kalman Filters)
- Linear Estimation, Kailath/Sayed (Kalman Filters)
- Statistical Signal Processing, Kay (Linear Estimation)
- Adaptive Filter Theory, Haykin (Adaptive Filtering)
- Adaptive Signal Processing, Widrow (Adaptive Filtering)
- Fundamentals of Adaptive Filtering, Sayed (Adaptive Filtering)
- Spectral Analysis of Signals, Stoica (Spectrum Estimation)
- Digital Spectral Analysis with Applications, Marple (Spectrum Estimation)
- An Introduction to the Bootstrap, Efron (Bootstrap)
- Bootstrap Techniques for Signal Processing, Zoubir (Bootstrap)
Of these, the most influential books for my research have been Statistical Digital Signal Processing and Modeling by Hayes and Linear Estimation by Kailath.
Category: Signal Processing | 0 comments
A Wavelet Tour of Signal Processing, 3rd Edition (Mallat): Download
Liushli 2011-5-13 15:31
This book is not easy to find online; I've just uploaded it and here are the download links: Link 1: http://bbs.sciencenet.cn/forum.php?mod=viewthreadtid=319560extra=page%3D1 (no points needed, registration required) Link 2: http://ishare.iask.sina.com.cn/f/10028709.html?from=isnom (points and registration required) Link 3: http://download.csdn.net/source/1319864 (uses points, registration required, but very fast)
Category: Writing | 175 reads | 0 comments
Signal Localization in MRI (See attachment)
backswimming 2010-10-13 10:26
The experimental protocol for generating an MRI signal is very simple: place an object in a uniform main magnetic field and then excite it with an oscillating magnetic field (also called an RF pulse) at the resonance frequency. However, the received signal is a sum of the signals from all the pixels. Therefore, in order to differentiate the local signals from different parts of a given object, spatial encoding is necessary. The Fourier transform is the essential imaging equation in MR imaging. Combining the Fourier transform and spatial encoding, we can get the imaging equation of MRI, which is very important for understanding the essence of many mechanisms. This note starts from signal localization, and then derives a different expression of spatial encoding, which is called k-space. The relationship between ordinary spatial encoding and k-space is also described. Based on the Fourier transform equation, we add the concept of k-space to it and obtain the imaging equation of MRI. Finally, we analyse two kinds of k-space filling strategy using the imaging equation. This script is a note on Principles of Magnetic Resonance Imaging: A Signal Processing Perspective by Zhi-pei Liang and Paul C. Lauterbur. If you want the original, you can purchase a copy from Amazon. If you have any comments or suggestions, please feel free to send me an email or leave a message. I attach great importance to any advice from you. Signal Localization in MRI
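The Fourier relationship described in the note can be sketched numerically: a fully sampled k-space is simply the 2-D Fourier transform of the object, and the inverse transform recovers the image. The toy object and matrix size below are assumptions for illustration:

```python
import numpy as np

# Toy "object": a 64x64 image containing a bright square
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# Under the Fourier imaging equation, the spatially encoded MR signal
# samples the 2-D Fourier transform of the object, i.e. k-space
kspace = np.fft.fftshift(np.fft.fft2(img))

# Image reconstruction is the inverse Fourier transform of k-space
recon = np.fft.ifft2(np.fft.ifftshift(kspace))
print(np.allclose(np.abs(recon), img))  # -> True: full k-space recovers the object

# Keeping only the centre of k-space (low spatial frequencies) still
# yields a recognizable but blurred image; one reason the k-space
# filling strategy matters
mask = np.zeros_like(kspace)
mask[24:40, 24:40] = 1.0
blurred = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
```

The center-of-k-space example hints at why different filling strategies trade off contrast (low frequencies, acquired early or late) against edge resolution (high frequencies).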
Category: Basics of MRI | 3244 reads | 1 comment
God Comes Down to Study the Signal Transduction Pathways of Human Society
anny424 2010-4-27 18:26
One day in heaven is a thousand years on earth. The story goes that after God, in the person of Jesus, finished redeeming humanity, he went off to rest. He slept for two days, and by then two thousand years had passed on earth. God got up, looked down at the Earth, and thought he was still dreaming; he rubbed his sleepy eyes and pinched himself before realizing he really was awake. Why does the earth look nothing like it did before I went to sleep? So many
Category: Natural Science Notes | 6 reads | 0 comments
How to submit your article to IEEE Trans. SP
kinglandom 2009-5-14 19:52
Please read carefully! To submit a new manuscript, click on the blue star in the right column. Clicking on the various manuscript status links under My Manuscripts will display a list of all the manuscripts in that status at the bottom of the screen. To continue a submission already in progress, click the Continue Submission link in the Unsubmitted Manuscripts list. To submit a revision, click the Manuscripts with Decisions link in the left navigation. When you click on Create a Revision, your new submission record will be pre-populated with information from your original submission. Please update the manuscript record to reflect the characteristics of your revision.
Please remember to include in a new manuscript submission:
- The required single-column, double-spaced (1-column) version of the manuscript, which may not exceed 30 pages for a regular paper, 12 pages for a correspondence, or 3 pages for a comment correspondence.
- The required double-column, single-spaced (2-column) version of the manuscript, which shall be no more than 10 pages for a regular paper or 6 pages for a correspondence. Manuscripts that exceed these limits will incur mandatory overlength page charges.
- The properly executed and signed IEEE Copyright Form, available online at http://www.ieee.org/web/publications/rights/copyrightmain.html.
VERY IMPORTANT NOTE: In accordance with the Society's Information for Authors, the Signal Processing Society journals do not accept the IEEE Electronic Copyright Form with electronic signature. When you are redirected to the IEEE Electronic Copyright Form wizard at https://ecopyright.ieee.org/ECTT/login.jsp?referer=blank at the end of your submission, simply exit it and submit your hand-signed form to the SPS office either by fax or e-mail. You may also upload the form as a Supporting Document when submitting your manuscript files to the MC system. Alternatively, print the form, complete it (paper number on top, paper title, author names, publication title, handwritten signature and date) and send it by fax to the IEEE Signal Processing Society Publications Office at +1 732 235 1627 / +1 732 562 8905, or e-mail a scanned version to z.kowalski@ieee.org. Please note that your manuscript will not be processed for review if a properly executed copyright form is not received by the publications office; failure to comply with this requirement will result in the immediate rejection of the manuscript.
Please refer to the Instructions and Forms for more detailed information:
- Information for Authors: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04799354
- Author Guide: http://mcv3help.manuscriptcentral.com/tutorials/Author.pdf
- The Online Training Documentation Resources (Online User Guides and Quick Start Guides, such as the Editor Guide, Reviewer Guide, and Author Guide)
Category: Academic Publishing | 10458 reads | 0 comments
