《A Formalisation of Adaptable Pervasive Flows》, Antonio Bucchiarone, Alberto Lluch Lafuente, Annapaola Marconi, and Marco Pistore, WS-FM 2009. Abstract: Adaptable Pervasive Flows is a novel workflow-based paradigm for the design and execution of pervasive applications, where dynamic workflows situated in the real world are able to modify their execution in order to adapt to changes in their environment. In this paper, we study a formalisation of such flows by means of a formal flow language. More precisely, we define APFoL (Adaptable Pervasive Flow Language) and formalise its textual notation by encoding it in Blite, a formalisation of WS-BPEL. The encoding in Blite equips the language with a formal semantics and enables the use of automated verification techniques. We illustrate the approach with an example of a Warehouse Case Study. Personal review: the authors extend WS-BPEL into APFoL and formalise it with Blite (a process algebra), applying it to pervasive computing; the extension rests on two points: 1) flexibility; 2) adaptability. A Formalisation of Adaptable Pervasive Flows.pdf beamer_Formalisation_Adaptable_Pervasive_Flows.pdf
《A Tool for Integrating Pervasive Services and Simulating Their Composition》, Ehsan Ullah Warriach, Eirini Kaldeli, Jaap Bresser, Alexander Lazovik, and Marco Aiello, ICSOC 2010. Abstract: As computation and services are pervading our working and living environments, it is important for researchers and developers to have tools to simulate and visualize possible executions of the services and their compositions. The major challenge for such tools is to integrate highly heterogeneous components and to provide a link with the physical environment. We extend our previous work on the RuG ViSi tool in a number of ways: first, we provide a customizable and interactive middleware based on open standards (UPnP and OSGi); second, we allow any composition engine to guide the simulation and visualization (not only predefined compositions using BPEL); third, the interaction with simulated or physical devices is modular and bidirectional, i.e., a device can change the state of the simulation. In the demo, we use an AI planner to guide the simulation, a number of simulated UPnP devices, a real device running Java, and a two-room apartment. The related video is available at http://www.youtube.com/watch?v=2w_UIwRqtBY . The EU Smart Homes for All (SM4All) project's goal: apply a Service Oriented Computing approach to the smart home. Personal review: an engineering-oriented research paper that uses SOC to add a veneer of theory. Academic papers fall into two kinds. Engineering papers: like Google's GFS and MapReduce papers, they start from a real problem, value practicality, and emphasise integration; from a theoretical standpoint they are not especially novel, instead modelling the problem from mathematics, control theory, and other angles. In other words, theory only serves to abstract and elevate the paper. Academic papers: more often a theory is blindly applied to a hot topic, the so-called "hammer theory": whether or not it works, hammer away first; whether or not the theory fits, publish first. A Tool for Integrating Pervasive Services and Simulating Their Composition.pdf
Survey: A journey to highly dynamic, self-adaptive service-based applications. Abstract: Future software systems will operate in a highly dynamic world. Systems will need to operate correctly despite unexpected changes in factors such as environmental conditions, user requirements, technology, legal regulations, and market opportunities. They will have to operate in a constantly evolving environment that includes people, content, electronic devices, and legacy systems. They will thus need the ability to continuously adapt themselves in an automated manner to react to those changes. To realize dynamic, self-adaptive systems, the service concept has emerged as a suitable abstraction mechanism. Together with the concept of the service-oriented architecture (SOA), this led to the development of technologies, standards, and methods to build service-based applications by flexibly aggregating individual services. This article discusses how those concepts came to be by taking two complementary viewpoints. On the one hand, it evaluates the progress in software technologies and methodologies that led to the service concept and SOA. On the other hand, it discusses how the evolution of the requirements, and in particular business goals, influenced the progress towards highly dynamic self-adaptive systems. Finally, based on a discussion of the current state of the art, this article points out the possible future evolution of the field. Keywords: Service-oriented computing · Services · Adaptive systems · Self-adaptation. The article covers: 1) a historical perspective; 2) current technology and methods for service-based applications, organised around Figure 5, "Layers and aspects relevant for service-based applications"; 3) today's technology and methods vs. application scenarios, presenting two application scenarios: federated organizations, and a pervasive computing scenario; 4) open issues and challenges, discussed from the following aspects: business process management; service composition and coordination; service infrastructure; analysis, design, and development; service quality definition, negotiation, and assurance; run-time adaptation; context. beamer_journey_serivces.pdf A journey to highly dynamic, self-adaptive service-based applications.pdf
《A quick introduction to membrane computing》, The Journal of Logic and Algebraic Programming, Elsevier, 2010. Abstract: Membrane computing is a branch of natural computing inspired from the architecture and the functioning of biological cells. The obtained computing models are distributed parallel devices, called P systems, processing multisets of objects in the compartments defined by hierarchical or more general arrangements of membranes. Many classes of P systems were investigated, mainly from the point of view of computing power and computing efficiency; also, a series of applications (especially in modeling biological processes) were reported. This note is a short and informal introduction to this research area, introducing a few basic notions, research topics, types of results, and pointing to some relevant references. Review: membrane computing is a branch of natural computing which abstracts computing models from the architecture and the functioning of living cells, as well as from the organization of cells in tissues, organs (brain included) or other higher-order structures such as colonies of cells (e.g., of bacteria). The main ingredients of a P system are (i) the membrane structure, delimiting compartments where (ii) multisets of objects evolve according to (iii) (reaction) rules of a biochemical inspiration. The main issues studied concern the computing power (in comparison with standard models from computability theory, especially Turing machines/Chomsky grammars and their restrictions) and the computing efficiency (the possibility of using parallelism for solving computationally hard problems in a feasible time). There are three main types of P systems: (i) cell-like P systems, (ii) tissue-like P systems, and (iii) neural-like P systems. Web site: http://ppage.psystems.eu A quick introduction to membrane computing.pdf Membrane computing and programming.pdf Supplementary Chinese reference: 《自然计算的新分支———膜计算》 (Membrane Computing: A New Branch of Natural Computing), Chinese Journal of Computers, Feb. 2010. Abstract: As a new branch of natural computing, membrane computing is currently a research hotspot at the intersection of computer science, mathematics, biology, artificial intelligence, and other disciplines.
The paper surveys the latest developments in membrane computing; uses a simple membrane system as an example to introduce its basic concepts and principles; reviews theoretical progress in terms of the three classes of membrane systems (cell-like, tissue-like, and neural-like) and their computing power and efficiency; discusses application prospects and directions by summarising application results from China and abroad; and analyses the state of software and hardware implementations of membrane systems against the history of their development. Finally, it lists important web resources, hot research areas, and key open problems of membrane computing research. 自然计算的新分支—膜计算.pdf
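The multiset-rewriting core of a P system described in this entry can be sketched in a few lines of Python. This is a toy, single-membrane approximation (no membrane hierarchy, and a deterministic greedy firing order stands in for nondeterministic maximal parallelism; the names here are illustrative, not from the papers):

```python
from collections import Counter

def step(multiset, rules):
    """One maximally parallel step of a toy single-membrane P system.

    multiset: Counter of objects, e.g. Counter("aab")
    rules:    list of (lhs, rhs) Counter pairs, e.g. (Counter("a"), Counter("bb"))

    Each rule fires as many times as the remaining objects allow
    (greedily, in list order); all products appear only after the step,
    mimicking the synchronized, parallel application of rules.
    """
    available = Counter(multiset)
    produced = Counter()
    for lhs, rhs in rules:
        times = min(available[obj] // n for obj, n in lhs.items())
        for obj, n in lhs.items():
            available[obj] -= n * times
        for obj, n in rhs.items():
            produced[obj] += n * times
    return available + produced  # Counter '+' drops zero counts

# a -> bb doubles-and-renames every 'a' in a single parallel step
print(step(Counter("aaa"), [(Counter("a"), Counter("bb"))]))  # Counter({'b': 6})
```

Iterating `step` until no rule applies gives the halting computations by which P systems compute; the greedy order here picks just one of the many multiset divisions a real nondeterministic system could choose.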
From Wikipedia, criticism of the term: During a video interview, Forrester Research VP Frank Gillett expresses criticism about the nature of and motivations behind the push for cloud computing. He describes what he calls "cloud washing" in the industry, whereby companies relabel their products as cloud computing, resulting in a lot of marketing innovation on top of real innovation. The result is a lot of overblown hype surrounding cloud computing. Gillett sees cloud computing as revolutionary in the long term but over-hyped and misunderstood in the short term, representing more of a gradual shift in our thinking about computer systems and not a sudden transformational change. Larry Ellison, CEO of Oracle Corporation, has stated that cloud computing has been defined as "everything that we already do" and that it will "have no effect except to change the wording on some of our ads". Oracle Corporation has since launched a cloud computing center and worldwide tour. Forrester Research Principal Analyst John Rymer dismisses Ellison's remarks by stating that his comments are "complete nonsense and he knows it". Richard Stallman said that cloud computing was simply a trap aimed at forcing more people to buy into locked, proprietary systems that would cost them more and more over time. "It's stupidity. It's worse than stupidity: it's a marketing hype campaign," he told The Guardian. "Somebody is saying this is inevitable, and whenever you hear somebody saying that, it's very likely to be a set of businesses campaigning to make it true."
Maybe it's time to rethink the cloud. Yeah, I know -- at this point, most IT shops haven't thought through the cloud the first time. But Microsoft's recent troubles keeping its cloud services available to users shine a harsh light on the issue of cloud availability and reliability. Trouble is, those are the wrong things to be thinking about. Sure, it sounds bad when a vendor as big as Microsoft can't keep its cloud network running. It's not comforting to know that Google, Amazon, Rackspace and other cloud providers have had outages too. So has software-as-a-service king Salesforce.com. Look, the cloud involves too many miles of somebody else's wire between users and their applications for networking hiccups to be eliminated completely. But cloud availability will get better. It's a problem that cloud vendors know about -- and know they have to solve. Let's think about a much bigger problem: The cloud is a good place to move a stand-alone virtualized server (or it will be, once vendors get their availability act together). But how much of your current data center falls into that category? Don't answer yet. First, think about all your virtual-server applications that don't really stand alone. They talk to shared data stores or other applications. Their performance literally depends on how far data has to travel. Inside your data center, that's trivial. But up in the cloud, millions of round trips could be necessary between an application in the cloud and data in your IT shop. Even at light speed, that takes time. Maybe you're thinking you could send the whole group of applications that use the same data stores up to the cloud. No more round trips, right? But one key principle of cloud computing is that you never know exactly where an app will run. With some providers, apps and data could end up communicating between New York, Silicon Valley, Seattle and Mumbai -- and total network latency could go from a problem to a catastrophe. You can solve those problems.
But that might mean redesigning how those applications work, how they communicate and how they interact. Now think about this: That's the pretty part of your data center. Then there's the ugly stuff, the part we don't like to think about. Apps that, say, scrape some mainframe screens, combine their contents with data from specialized industry-vertical software, then run the result through legacy business logic that no one has touched in years for fear of breaking a critical piece of some department's business process. Our data centers are littered with that kind of cruft, accumulated over decades as we've moved from one IT paradigm to the next. There's never time or money to fix it because, kludgy as it is, it still does what users need, and untangling it will be all IT cost with no business-side benefit. But without that untangling, it will never work in the cloud. So here's something worth thinking about: How much of what's in your data center is ready for the cloud? How much of it will have to be reconfigured, rebuilt or re-architected before you'll be able to move it up to the cloud? How much will never be cloudworthy? And do you really think you'll have thought that all through before Microsoft learns how to run a network? Source: Computerworld
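The "millions of round trips... even at light speed, that takes time" argument is easy to make concrete. A back-of-the-envelope sketch; the fiber speed and the New York-to-Mumbai distance are my own rough assumptions, not figures from the article:

```python
# Light in fiber travels at roughly 2/3 the speed of light in vacuum,
# i.e. about 200,000 km/s.
FIBER_KM_PER_S = 200_000

def round_trip_s(distance_km):
    """Best-case round-trip time over a straight fiber path, ignoring
    routing, queuing and processing delays (real latency is worse)."""
    return 2 * distance_km / FIBER_KM_PER_S

rtt = round_trip_s(12_500)        # ~12,500 km: rough NY-to-Mumbai distance
hours = 1_000_000 * rtt / 3600    # a million serial round trips
print(f"one round trip: {rtt * 1000:.0f} ms; a million in series: {hours:.1f} hours")
```

Even under these absurdly optimistic assumptions, one round trip costs 125 ms and a million serial round trips cost well over a day, which is why chatty app-to-data traffic across the cloud boundary can turn latency "from a problem to a catastrophe".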
Wang, S., M. K. Cowles, et al. (2008). Grid computing of spatial statistics: using the TeraGrid for Gi* analysis. Concurrency and Computation: Practice and Experience 20(14): 1697-1720. The massive quantities of geographic information that are collected by modern sensing technologies are difficult to use and understand without data reduction methods that summarize distributions and report salient trends. Statistical analyses, therefore, are increasingly being used to analyze large geographic data sets over a broad spectrum of spatial and temporal scales. Computational Grids coordinate the use of distributed computational resources to form a large virtual supercomputer that can be applied to solve computationally intensive problems in science, engineering, and commerce. This paper presents a solution to computing a spatial statistic, Gi*(d), using Grids. Our approach is based on a quadtree-based domain decomposition that uses task-scheduling algorithms based on GridShell and Condor. Computational experiments carried out on the TeraGrid were designed to evaluate the performance of solution processes. The Grid-based approach to computing values for Gi*(d) shows improved performance over the sequential algorithm while also solving larger problem sizes. The solution not only advances knowledge about the application of the Grid in spatial statistics but also provides insights into the design of Grid middleware for other computationally intensive applications. Copyright © 2008 John Wiley & Sons, Ltd.
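For reference, the statistic being parallelised is the Getis-Ord Gi*(d); in its original unstandardised form it is the share of the total attribute value lying within distance d of point i (point i itself included). A minimal sequential sketch, with the point and value arrays invented for illustration; the paper's quadtree decomposition and Grid scheduling are not reproduced here:

```python
import math

def gi_star(points, values, i, d):
    """Unstandardised Getis-Ord Gi*(d) for point i: the sum of values
    at points within distance d of i (i itself included), divided by
    the sum of all values."""
    xi, yi = points[i]
    within = sum(v for (x, y), v in zip(points, values)
                 if math.hypot(x - xi, y - yi) <= d)
    return within / sum(values)

pts = [(0, 0), (1, 0), (5, 0)]
vals = [2.0, 3.0, 5.0]
print(gi_star(pts, vals, 0, 2))    # only (0,0) and (1,0) lie within d=2 -> 0.5
print(gi_star(pts, vals, 0, 100))  # every point lies within d=100 -> 1.0
```

High values of Gi*(d) flag local hot spots. The computation is one independent sum per point i, and that embarrassingly parallel structure is what the quadtree-partitioned Grid approach in the paper exploits.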
Wikipedia's (Wikipedia.org) definition of cloud computing: cloud computing is a form of distributed computing. Its most basic idea is to automatically split a huge computing task, over the network, into countless smaller sub-programs, hand them to a vast system composed of many servers for searching, computation and analysis, and finally return the results to the user. With this technique, a network service provider can process tens of millions or even billions of pieces of information within seconds, delivering network services as powerful as a supercomputer. Michael Armbrust et al. (University of California at Berkeley) define cloud computing as comprising the applications delivered as services over the Internet, plus the hardware and systems software in the data centers that support those services. The application services have long been called Software as a Service (SaaS), while the data-center hardware and software is the so-called cloud; cloud computing is thus SaaS plus utility computing. Clouds divide into public clouds and private clouds. Cloud computing is an Internet-based mode of super-computing whose principle closely resembles grid computing: it integrates the large amounts of data and processor resources stored in many distributed computing products so that they work together. As an emerging shared-infrastructure approach, it can link huge pools of systems together to deliver a variety of IT services, letting enterprises switch resources to the applications that need them and access computers and storage systems on demand. The SPI model proposed in a research report of the Cloud Security Alliance (CSA) divides cloud services into three categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The Open Cloud Manifesto, for its research purposes, further subdivides cloud computing into six patterns: end user to cloud; enterprise to cloud to end user; enterprise to cloud (integrated); enterprise to cloud to enterprise; enterprise to cloud (portable); and private (internal) cloud. As the new computing form of the network era, cloud computing is oriented around three centres: data, users, and services. Functionally, a true cloud computing platform should offer three capabilities. (1) Providing resources, including computing, storage and network resources: the service provider must build globally scaled data and storage centres capable of massive storage with excellent security and a high degree of privacy and reliability; it should also be efficient, low-cost and energy-saving. (2) Providing dynamic data services, covering raw data, semi-structured data and processed structured data: a good cloud architecture must have the intelligence to store, share, manage, mine, search, analyse and serve data at large scale. (3) Providing a cloud computing platform, including software development APIs, environments and tools: cloud computing needs to form a truly viable, sticky, sustainably growing ecosystem.
Social and ethical issues in computer science. Social: issues about computers in society (social, political, and legal). Ethical: making decisions about what is right. Social informatics (Rob Kling): "... is the interdisciplinary study of the design, uses and consequences of information technologies that takes into account their interaction with institutional and cultural contexts." Related readings: "What is Social Informatics and Why Does it Matter?"; "On Ethical Issues in Computing"; "Social and ethical issues in computer science".