There are differences between the various cloud computing solutions and their infrastructure depending on the provider. Still, generally speaking, we can identify some basic similarities. In most cases cloud computing depends on the following: 1. A distributed file system that spreads over multiple hard disks and machines. It provides redundancy, high speed and reliability. Data is rarely stored in one place only, and when one unit fails its place is automatically taken by another. It is worth understanding that user disk space is allocated on the distributed file system. While this technology ensures high reliability for the user's files, it is more expensive than a standard solution without data replication. 2. An algorithm for resource allocation. Cloud computing is a complex distributed environment, and it depends heavily on strong algorithms for properly allocating CPU, RAM and disk operations to end users and core processes in a mutual, shared system. Here comes the matter of resource accounting, and there are two distinct alternatives. The first one is strictly usage-oriented: you have a limited number of units. Such units may be tied to CPU and/or memory usage, to time, or they may be a compound indicator. This generally covers the idea of utility computing. As a whole it offers some flexibility, but it is more expensive in the long run.
A good web hosting example of when this alternative should be preferred: you have a small website with low, constant traffic and resource usage. The key point is that you expect rare peaks, for example once per month. In this case you pay only for the peak over-usage, but do not invest too much money in dedicated capacity. The second alternative is capacity pre-allocation. In this case there are different plans with predefined constant resources: dedicated CPU and memory. This still offers the flexibility to upgrade resources on demand, but it also allows a lower price for higher resource usage in the long run. A web hosting example would show that the capacity pre-allocation alternative is suitable for mid-size and larger sites with constant traffic. It is a more reliable alternative to a dedicated server and lower in price. Since it is similar to a dedicated server, it is sometimes called a virtual dedicated server (VDS) or cloud server.
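The trade-off between the two accounting models above can be sketched numerically. This is a minimal illustration with invented prices, not real provider rates: `usage_based_cost` models the strictly usage-oriented plan, `preallocated_cost` the capacity pre-allocation plan.

```python
# Hypothetical cost comparison of the two accounting models described above.
# All prices are illustrative assumptions, not real provider rates.

def usage_based_cost(unit_hours, price_per_unit=0.05):
    """Pay only for the resource units actually consumed."""
    return unit_hours * price_per_unit

def preallocated_cost(months, monthly_fee=30.0):
    """Pay a flat fee for dedicated CPU/RAM, regardless of usage."""
    return months * monthly_fee

# A small site: mostly idle, one traffic peak per month.
low_usage = usage_based_cost(unit_hours=300)      # ~15 -> usage model wins
# A busy mid-size site with constant traffic.
high_usage = usage_based_cost(unit_hours=2000)    # ~100 -> pre-allocation wins
flat = preallocated_cost(months=1)                # 30.0

print(low_usage < flat < high_usage)  # True
```

Under these assumed rates, the break-even point sits at 600 unit-hours per month; below it the usage-oriented plan is cheaper, above it pre-allocation wins, which matches the hosting examples in the text.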
Providing consistent security services in on-demand provisioned Cloud infrastructure services is of primary importance due to the multi-tenant and potentially multi-provider nature of the Cloud Infrastructure as a Service (IaaS) environment. A Cloud security infrastructure should address two aspects of IaaS operation and dynamic security services provisioning: (1) provide a security infrastructure for secure Cloud IaaS operation, and (2) provision dynamic security services, including the creation and management of dynamic security associations, as part of the provisioned composite services or virtual infrastructures. The first task is a traditional task in security engineering, while dynamic provisioning of managed security services in a virtualized environment remains a problem and requires additional research. In this paper we discuss both aspects of Cloud security and provide suggestions about the security mechanisms required for secure data management in dynamically provisioned Cloud infrastructures. The paper refers to the architectural framework for on-demand infrastructure services provisioning, being developed by the authors, that provides a basis for defining the proposed Cloud security infrastructure. The proposed SLA management solution is based on WS-Agreement and allows dynamic SLA management during the whole lifecycle of the provisioned services. The paper discusses conceptual issues, basic requirements and practical suggestions for a dynamically provisioned access control infrastructure (DACI). It proposes the security mechanisms required for consistent DACI operation, in particular security tokens used for access control, policy enforcement and authorization session context exchange between provisioned infrastructure services and Cloud provider services.
The suggested implementation is based on the GAAA Toolkit Java library developed by the authors, which is extended with the proposed Common Security Services Interface (CSSI) and additional mechanisms for binding sessions and security context between provisioned services and the virtualized platform.
One characteristic of cloud computing infrastructures is their frequently changing virtual infrastructure. New virtual machines (VMs) get deployed, existing VMs migrate to a different host or network segment, and VMs vanish when they are deleted by their users. Classic incident monitoring mechanisms are not flexible enough to cope with cloud-specific characteristics such as frequent infrastructure changes. In this paper we present a prototype demonstration of the Security Audit as a Service (SAaaS) architecture, a cloud audit system which aims to increase trust in cloud infrastructures by giving both user and cloud provider more transparency on what is happening in the cloud. Especially in the event of a changing infrastructure, the demonstration shows how autonomous agents detect the change, automatically re-evaluate the security status of the cloud, and inform the user through an audit report.
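The agent behaviour described above can be sketched in a few lines. This is a toy illustration, not the SAaaS implementation: the event names, the VM attributes, and the "allowed ports" policy are all invented to show the pattern of re-auditing on every infrastructure change.

```python
# Minimal sketch of an autonomous audit agent that re-evaluates a simple
# security status whenever the virtual infrastructure changes.
# Event names and the port policy are hypothetical illustrations.

class AuditAgent:
    def __init__(self):
        self.vms = {}          # vm_id -> attributes (host, open_ports, ...)
        self.reports = []

    def on_event(self, event, vm_id, **attrs):
        if event in ("deployed", "migrated"):
            self.vms.setdefault(vm_id, {}).update(attrs)
        elif event == "deleted":
            self.vms.pop(vm_id, None)
        self.reports.append(self.evaluate())   # re-audit after every change

    def evaluate(self):
        # Toy policy: flag VMs exposing ports outside an allowed set.
        allowed = {22, 443}
        findings = [vm for vm, a in self.vms.items()
                    if set(a.get("open_ports", [])) - allowed]
        return {"vm_count": len(self.vms), "flagged": sorted(findings)}

agent = AuditAgent()
agent.on_event("deployed", "vm-1", host="h1", open_ports=[22, 443])
agent.on_event("deployed", "vm-2", host="h2", open_ports=[22, 8080])
print(agent.evaluate())  # {'vm_count': 2, 'flagged': ['vm-2']}
```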
With the growing popularity of cloud computing, it is important to have guarantees on the quality of protection. This is particularly true for infrastructure as a service (IaaS) and the need to protect applications that are deployed in a cloud. Applications come with different levels of required protection and thus require different levels of protection at the infrastructure level. Quality of protection should be commensurate with the risks. In the cloud, responsibility for protecting the application is split between the cloud user (the one who deploys the application) and the cloud provider that manages the infrastructure. The responsibility for securing the application is still in the hands of the cloud user. However, the responsibility for securing the infrastructure on which the application runs is in the hands of the cloud provider. For the cloud user to be able to guarantee a given level of protection, he must obtain some guarantees of quality of protection from the cloud infrastructure in which the application runs. This article describes and compares a client-side (transparent) and a provider-side (less transparent) model for specifying and monitoring quality of protection in IaaS, and discusses the benefits and pitfalls of the two models. The paper concludes by comparing these models to assess which one is the most adequate for IaaS. The contribution of this paper is to argue the need for more transparent quality of protection management in clouds and to provide a method for moving from non-transparent to more transparent quality of protection models based on risk analysis of threats and the identification of the security controls to be monitored. The approach is illustrated with an example for monitoring VM/data location in a cloud.
Increasing numbers of physical sensors are used for various purposes. Those physical sensors are usually used only by their own applications. Because each application manages both its physical sensors and their sensor data exclusively, other applications cannot easily use physical sensors belonging to a different party. We propose a new infrastructure, called the Sensor-Cloud infrastructure, which can manage physical sensors on IT infrastructure. The Sensor-Cloud infrastructure virtualizes a physical sensor as a virtual sensor in the cloud. Dynamically grouped virtual sensors in the cloud can be automatically provisioned when users need them. This approach enables sensor management capability on cloud computing. Since the resources and capabilities of physical sensor devices are limited, the cloud computing IT infrastructure can take over sensor management tasks such as monitoring the availability and performance of physical sensors. This paper describes the design of the Sensor-Cloud infrastructure, its system architecture and its implementation.
The pathway for the concept of cloud robotics is continually unfolding and revealing new opportunities in science. With this, the research paper is aimed at identifying the progress made towards the development of a full-scale cloud infrastructure to implement formation control on a multi-robot system. A small-scale cloud infrastructure was developed utilizing a single virtual machine operating within the boundaries of a hypervisor's resource pool. A robot with minimal hardware was constructed to work within the control of the cloud. Once the proof of concept on a lower tier has been completed, more advanced robotics concepts, such as Null-Space-based behavior control and advanced neural network control, will be tested by offloading the computational load to the cloud infrastructure. The goal is to demonstrate the ability to simplify the robot hardware and implement control on a global scale utilizing the cloud infrastructure.
Managing cloud computing infrastructures consisting of many servers, each with a massive number of configuration parameters, is quite burdensome for administrators. While some policy-based management approaches have been proposed to maintain system configuration, it is quite difficult for administrators to define proper configuration policies for parameter settings in a large-scale cloud computing infrastructure. To solve this problem, we developed a method of extracting parameter configuration policies from the configuration information of the existing infrastructure using UML/OCL verification. In this method, we first identify the scopes of management from the hierarchical topology of the cloud infrastructure. Next, we verify two types of OCL constraints (regarding parameter configuration patterns) against the configuration of the infrastructure, while changing the range of the scopes we focus on. By determining whether or not these constraints can be satisfied within some scopes, we extract policies that represent patterns satisfied between parameter settings of servers deployed in a certain range of the scope. Finally, we demonstrate through a case study that we can derive configuration policies for all server parameters in an actual cloud service infrastructure.
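The core idea, checking whether a configuration pattern holds within some scope of the hierarchy, can be sketched as follows. This is a simplified illustration: the paper expresses such patterns as OCL constraints over UML models, whereas here a single invented pattern ("all servers in a scope agree on a parameter value") is checked over invented data.

```python
# Simplified sketch of scope-based policy extraction: find the scopes in
# which a configuration pattern holds, and treat each as an extracted policy.
# Scope names, parameters, and values are invented for illustration.

def uniform_in_scope(servers, scope_key, param):
    """Return scopes in which every server agrees on `param`."""
    scopes = {}
    for s in servers:
        scopes.setdefault(s[scope_key], set()).add(s[param])
    return sorted(k for k, vals in scopes.items() if len(vals) == 1)

servers = [
    {"rack": "r1", "dns": "10.0.0.2", "mtu": 1500},
    {"rack": "r1", "dns": "10.0.0.2", "mtu": 9000},
    {"rack": "r2", "dns": "10.0.0.3", "mtu": 1500},
]

# 'dns' is uniform in every rack -> extract it as a rack-level policy;
# 'mtu' disagrees within r1 -> no rack-level policy for it there.
print(uniform_in_scope(servers, "rack", "dns"))  # ['r1', 'r2']
print(uniform_in_scope(servers, "rack", "mtu"))  # ['r2']
```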
Recently, many mobile services have been changing to cloud-based mobile services with richer communications and higher flexibility. We present a new mobile cloud infrastructure that combines mobile devices and cloud services. This new infrastructure provides virtual mobile instances through cloud computing. To commercialize new services on this infrastructure, service providers should be aware of security issues. In this paper, we first define new mobile cloud services enabled by the mobile cloud infrastructure and discuss possible security threats through several service scenarios. Then, we propose a methodology and architecture for detecting abnormal behavior through the monitoring of both host and network data. To validate our methodology, we injected malicious programs into our mobile cloud test bed and used a machine learning algorithm to detect the abnormal behavior that arose from these programs.
Enterprise Architecture Management (EAM), and in particular IT landscape management, tries to model the IT and business elements of a company in order to analyze its efficiency in supporting business goals, optimize business-IT alignment, and plan future IT transformation as well as IT standardization. A major challenge in this field is the elicitation of infrastructure information from run-time systems, e.g., to answer the question of which servers provide services to a specific information system. Capturing this data is a time-consuming manual task which leads to quickly outdated information. Like traditional hardware, cloud infrastructure needs to be documented in an EA model in order to gain insight into its relationships with business information systems and ultimately the business goals. The aim of our research in this area is the automatic integration of various runtime information sources into an EAM view. The overall goal is to minimize the manual work needed to keep enterprise architecture information up to date. This enables enterprise architects to make timely and precise decisions. In this work we focus on how information on the cloud infrastructure can be seamlessly integrated into an EA view. Making the cloud visible to enterprise architects is especially important to meet legal (privacy) requirements on the storage and processing location of data. We present a conceptual approach for the information integration problem, and introduce our prototypical implementation with the open-source infrastructure cloud implementation Eucalyptus and the open-source enterprise architecture management tool iteraplan.
Cloud computing is a paradigm of virtualized distributed environments with virtual machines as their primary building blocks. Being a distributed system, the Cloud computing paradigm needs a new layer of software to provide different aspects of distribution transparency at the virtual machine level. This software layer, called the Cloud infrastructure, is deployed above the virtual machine monitor layer. In this paper, we present some of the Cloud infrastructure technology challenges, such as image management and scheduling in Cloud distributed environments. In addition, we discuss how some prominent Cloud infrastructures such as Eucalyptus and OpenNebula tackle these challenges. Finally, as a proof of concept, we show how the deployment of different transparent network services is possible using a specific Cloud infrastructure service through the mentioned technology.
The recent progress in computer performance and the development of virtualization technologies have led to the prevalence of cloud computing. Data center providers offering public cloud services have to install additional resources and infrastructure continuously to keep up with the increasing demands of cloud users. Since newly installed infrastructure (e.g. servers) usually has a similar structure to the existing infrastructure, the configuration settings of the existing infrastructure can be copied and used for the new one. One of the exceptions is the network settings (e.g. IP addresses), which must be customized for each infrastructure. However, this customization requires manual configuration, which can cause misconfigurations, resulting in communication failures in the new infrastructure. One promising approach to identifying such misconfigurations is to detect the differences between the communication logs recorded in the existing infrastructure and in the new infrastructure being developed. In order to apply this approach, we need to identify pairs of servers that play the same role in the existing and new infrastructures, so that we can verify whether or not the same functions are working properly in both. In this paper, we propose a method that automatically identifies the pairs of servers playing the same role by detecting the common communication patterns observed in both infrastructures. We evaluated our method on an actual cloud infrastructure and confirmed that it identified 94.1% of the corresponding pairs of servers correctly.
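The pairing idea can be sketched simply: represent each server by the set of communication patterns seen in its log, then match servers across the two infrastructures by maximum set similarity. The log representation (peer-role, port pairs), the Jaccard measure, and the threshold are assumptions for illustration, not the paper's actual method.

```python
# Sketch: pair servers across old and new infrastructures by the similarity
# of their observed communication patterns. Data and threshold are invented.

def jaccard(a, b):
    """Set similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def pair_servers(old, new, threshold=0.5):
    """old/new: dicts mapping server name -> set of communication patterns."""
    pairs = {}
    for o, opat in old.items():
        best = max(new, key=lambda n: jaccard(opat, new[n]))
        if jaccard(opat, new[best]) >= threshold:
            pairs[o] = best
    return pairs

# Each pattern is a (peer-role, port) pair extracted from communication logs.
old = {"web-a": {("db", 3306), ("cache", 6379)},
       "db-a":  {("web", 3306)}}
new = {"web-x": {("db", 3306), ("cache", 6379), ("mq", 5672)},
       "db-x":  {("web", 3306)}}

print(pair_servers(old, new))  # {'web-a': 'web-x', 'db-a': 'db-x'}
```

Once pairs are established, the logs of each pair can be diffed to surface patterns present in one infrastructure but missing in the other, i.e. candidate misconfigurations.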
With the development of Cloud Computing technology, more and more latency-sensitive critical operations in the Internet of Things are taken over by Cloud infrastructure. Meeting real-time and power-awareness constraints is important for the deployment of the Cloud. While various strategies have been developed to reduce the power consumption of hardware components by transitioning them to lower power states or hibernating them when the workload is relatively low, they fail to work for latency-sensitive applications in a virtualization environment. This paper presents a novel paravirtualized power-aware hypervisor as a Cloud Computing infrastructure suited to latency-sensitive applications. This hypervisor introduces a latency measurement process on the guest OS side and a CPU Power Modulator on the host OS side. It enables the host OS to modulate the CPU frequency level to save power while guaranteeing latency on the guest OS side. The benchmark results show that this approach solidly improves Cloud Computing infrastructure power efficiency for latency-sensitive services.
Cloud computing has been growing in popularity because of the demand for scalability and extensibility of infrastructure resources. Recent work has shown that dynamically increasing resource capacity by using public cloud infrastructure services such as Amazon's EC2 can achieve higher peak throughput at lower cost. On the other hand, dedicated server infrastructure is still better suited to fronting requests and processing average workloads than a pure cloud infrastructure. In this paper, we propose a novel extensible architecture for high-throughput task processing that self-adapts to peak workloads on a hybrid infrastructure combining a fixed-size system with the public cloud. Usually, the public cloud can only be loosely coupled with a fixed-size system, without access to local resources. Hence, bridged queues and event-based instant storage are proposed to cascade the cloud queue and the local task queue and to decouple the dependency on database access during task processing. Furthermore, an implementation based on the approach of bridged queues and event-based instant storage is presented.
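The bridged-queue idea can be sketched with two in-process queues: a bridge thread drains the "cloud" queue into the local task queue so that workers see one logical task stream. Real systems would use a hosted queue service rather than in-process queues; that substitution, the sentinel shutdown, and the doubling "task" are all assumptions for illustration.

```python
# Sketch of bridged queues: cascade a cloud-side queue onto a local queue.
# Both queues are in-process stand-ins for real queue services.

import queue
import threading

cloud_q, local_q, results = queue.Queue(), queue.Queue(), []

def bridge():
    """Cascade tasks arriving in the cloud queue onto the local queue."""
    while True:
        task = cloud_q.get()
        if task is None:          # sentinel: stop bridging
            local_q.put(None)
            break
        local_q.put(task)

def worker():
    """Consume the local queue, oblivious to where tasks originated."""
    while True:
        task = local_q.get()
        if task is None:
            break
        results.append(task * 2)  # stand-in for real task processing

threads = [threading.Thread(target=bridge), threading.Thread(target=worker)]
for t in threads:
    t.start()
for task in [1, 2, 3]:
    cloud_q.put(task)
cloud_q.put(None)
for t in threads:
    t.join()
print(sorted(results))  # [2, 4, 6]
```

The design point is that the worker never touches the cloud queue directly, which is what lets a fixed-size local system and elastic cloud workers share the workload loosely coupled.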
Cloud computing processes and stores an organization's sensitive data in third-party infrastructure. Monitoring these activities within the cloud environment is a major task for security analysts and the cloud consumer. Cloud service providers may voluntarily conceal from consumers the security threats detected in their infrastructure. Our goal is to decouple Intrusion Detection System (IDS) related logic from individual application business logic and to adhere to Service Oriented Architecture standards. This paper provides a framework for an intrusion detection and reporting service for cloud consumers based on the type of applications and their security needs. Cloud consumers can choose the desired signatures from this framework to protect their applications. The proposed technique is deployed in an existing open source cloud environment with minimal changes. A proof-of-concept prototype has been implemented based on Eucalyptus open source packages to show the feasibility of this approach. Our results show that this framework provides an effective way to monitor the cloud infrastructure in a service-oriented approach.
In the Cloud context, the monitoring of service levels becomes critical because of the conflicts of interest that might occur between provider and customer in case of an outage. Here we focus on Cloud monitoring at the infrastructure level (IaaS), but from the perspective of a Cloud customer. Cloud customers cannot check compliance with the Service Level Agreement (SLA) by trusting the monitoring service of their own Cloud provider: the Cloud provider has a conflicting interest in ensuring the guarantees on the service levels it provides. Besides, Cloud customers need to detect under-utilization and overload conditions to take decisions about load balancing and resource reconfiguration. In this paper we present an agency that is deployed in the Cloud together with the customer's applications and can be used to configure a monitoring infrastructure. Mobile software agents take measurements inside the Cloud resources, which are completely under the customer's administration, collect performance information and compute metrics according to the user's requirements. They implement provider-independent monitoring of the Cloud infrastructure during the execution of applications.
In this paper we present a novel Kerberos-based security concept for heterogeneous distributed e-Science infrastructures. The e-Science infrastructure we have recently developed is currently being tested by the breath gas analysis community, whose activities are based on large-scale collaborations. Many e-Science domains involve person-related data (e.g. patient data), and therefore privacy and security are very important. Several publications have suggested that it is straightforward to add additional security to an existing infrastructure by means of Kerberos. Our experience shows that this is not really true; in our e-Science infrastructure we discovered the following key problems: (a) forwarding Kerberos tickets and (b) using Kerberos within a cloud infrastructure. Exactly these challenges are addressed by this paper. The central aspect of the security concept presented is the authentication of the user down to the lowest level (e.g. the database) and not only to the first level of the e-Science services. We have to consider that our infrastructure involves several research centers with their own private scientific data. The designed security concept was implemented and tested with a cloud-based code execution framework able to concurrently execute problem solving environment codes (e.g. MATLAB, R, Octave). The resulting system supports EC2-compatible cloud infrastructures (e.g. AWS, Eucalyptus), enabling them to be combined to build a hybrid cloud. This paper describes several challenges and their solutions, including how to (a) use client authentication through all levels of the system, (b) guarantee secured execution of time-consuming cloud-based analyses, and (c) inject security credentials into dynamically created VM instances.
Current cloud infrastructures have opaque service offerings where customers cannot monitor the underlying physical infrastructure. This raises concerns about meeting the compliance obligations of critical business applications with data location constraints that are deployed in a Cloud. When federated cloud infrastructures span different countries and data can migrate from one country to another, it should be possible for data owners to monitor the location of their data. This paper shows how an existing federated Cloud monitoring infrastructure can be used for data location monitoring without compromising Cloud isolation. The proposed approach requires collaboration between the cloud infrastructure provider (IP) and the user of the cloud, the service provider (SP): the IP monitors the virtual machines (VMs) on the SP's behalf and makes the infrastructure-level monitoring information available to the SP. With this monitoring information the SP can create the audit logs required for compliance auditing. The proposed logging architecture is validated by an e-Government case study with legal data location constraints.
The Cloud is strongly emerging as the new deal of distributed computing. One of the reasons behind the Cloud's success is its business/commercial-oriented nature, proof of its effectiveness and applicability to real problems. There are actually many open and private Cloud infrastructures aiming to provide dynamic, on-demand resource provisioning according to the IaaS paradigm. To this purpose they usually apply a best-effort policy, without taking into account service level agreements (SLAs) and related quality of service (QoS) requirements. In this context, the main goal of Cloud@Home is to implement a volunteer-Cloud paradigm which allows aggregating Cloud infrastructure providers. In this work we specifically focus on SLA-QoS aspects, describing how to provide SLA-based QoS guarantees through Cloud@Home on top of non-QoS-oriented Cloud providers. The aim of the paper is to demonstrate how Cloud@Home can fulfill this goal, providing and specifying the architecture, the algorithms and the components that implement the SLA-QoS management features.
Cloud Computing has become more and more prevalent over the past few years, and we have seen the emergence of Infrastructure-as-a-Service (IaaS), the most widely adopted Cloud Computing service model. However, coupled with the opportunities and benefits brought by IaaS, its adoption also faces management complexity in the hybrid cloud environments that enterprise users are mostly building up. Monsoon, the cloud management system proposed in this paper, provides enterprise users with an interface and portal to manage cloud infrastructures from multiple public and private cloud service providers. To meet the requirements of enterprise users, Monsoon has key components such as user management, access control, reporting and analytic tools, corporate account/role management, and a policy implementation engine. The Corporate Account module supports enterprise users' subscription to and management of multi-level accounts in a hybrid cloud which may consist of multiple public cloud service providers and a private cloud. The Policy Implementation Engine module in Monsoon allows users to define geography-based requirements, security levels, government regulations and corporate policies, and to enforce these policies on all subscriptions and deployments of the user's cloud infrastructure.
In order to benefit from cloud computing services without risk, a good solution is building a private cloud on premises. This paper analyzes the advantages of a private cloud built with physical machine clusters and Xen virtualization technology. A private cloud topology is designed first; the cache mechanism and load balancing policy of a private cloud infrastructure are then analyzed, and a Xen-based private cloud infrastructure is designed. This private cloud infrastructure provides a unified and simple interface for users.
It is widely accepted that cloud computing technologies will soon have a substantial impact on a broad range of industrial and institutional sectors such as governance, health care, education, agriculture, logistics, manufacturing, and media. Cloud infrastructures are typically based on virtualized environments that allow physical infrastructure to be shared by multiple and diverse end users. However, the efficient sharing of a cloud infrastructure can be achieved only through a user-centered service admission control procedure, which should also be flexible enough to adapt to the various real-market cloud deployment scenarios. Several conflicting parameters, such as the type of hybrid cloud infrastructure being deployed, multiple user priority groups, security, energy efficiency and financial costs, should also be taken into account. Thus, aiming to deal with all these emerging resource management trade-off problems, we propose in this paper a user-oriented, highly customizable infrastructure sharing approach, namely IaaS Request Admission Control (IRAC), designed for Hybrid Cloud Computing Environments.
Power is becoming an increasingly important concern for large-scale cloud computing systems. Meanwhile, cloud service providers leverage virtualization technologies to facilitate service consolidation and enhance resource utilization. However, the introduction of virtualization makes the cloud infrastructure more complex, and thus challenges cloud power management. In a virtualized environment, resources need to be configured at runtime at the cloud, server and virtual machine levels to achieve high power efficiency. In addition, cloud power management should guarantee a high level of user SLA (service level agreement) satisfaction. In this paper, we present an adaptive power management framework for the cloud that achieves autonomic resource configuration. We propose a lightweight, software-based approach to accurately estimate the power usage of virtual machines and cloud servers. It explores hypervisor-observable performance metrics to build the power usage model. To configure cloud resources, we consider both system power usage and SLA requirements, and leverage learning techniques to achieve autonomic resource allocation and optimal power efficiency. We implemented a prototype of the proposed power management system and tested it on a cloud testbed. Experimental results show the high accuracy (over 90%) of our power usage estimation mechanism, and our resource configuration approach achieves the lowest energy usage among the four compared approaches.
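A toy version of the power-modelling idea is a linear function of hypervisor-observable metrics such as CPU utilisation, disk and network activity. The coefficients below are invented for illustration; the framework described above learns the model from measurements rather than fixing coefficients by hand.

```python
# Sketch of a software-only VM power model over hypervisor-observable
# metrics. All coefficients are illustrative assumptions, not measured values.

def estimate_vm_power(cpu_util, disk_ops_per_s, net_mbps,
                      idle_w=10.0, cpu_w=45.0, disk_w=0.002, net_w=0.01):
    """Return estimated power draw in watts for one VM."""
    return (idle_w
            + cpu_w * cpu_util          # CPU share of dynamic power
            + disk_w * disk_ops_per_s   # per-IOPS cost
            + net_w * net_mbps)         # per-Mbps cost

# A mostly idle VM vs. a busy VM:
print(estimate_vm_power(0.05, 10, 1))      # ~12.3 W
print(estimate_vm_power(0.90, 2000, 400))  # ~58.5 W
```

In practice such coefficients would be fitted (e.g. by regression) against a metered server, and the per-VM estimates then feed the resource configuration decisions that trade power against SLA satisfaction.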
A novel approach to implementing cloud computing for smartphone devices is presented based on Eucalyptus, an open source cloud computing framework that provides infrastructure as a service (IaaS). It has full support for virtualization and is compatible with the Amazon Web Services interface. A private cloud has been designed using Eucalyptus to develop a smartphone application store. The architecture and the physical and network implementation of the Eucalyptus private cloud on an Intel-based platform are discussed in detail. We have developed two sample thin mobile applications based on mobile learning that can be downloaded from our private cloud using the Amazon Web Services Platform (PaaS). These thin apps use the private cloud as their computing platform, and perform well even on smartphones with low processing power.
Over the last years, the evaluation of cloud computing infrastructures has received considerable attention as a prominent activity for improving service quality as well as planning modifications to an existing infrastructure. This paper presents the Stochastic Model Generator for Cloud Infrastructure Planning (SMG4CIP), a system for the automatic generation of stochastic models for cloud infrastructure planning. This system indicates feasible cloud infrastructures according to dependability and cost requirements. The proposed system adopts a technique based on the GRASP metaheuristic in order to recommend optimized cloud infrastructures. Furthermore, SMG4CIP adopts stochastic models, such as Petri Nets and Reliability Block Diagrams, to evaluate dependability and cost metrics. A case study based on an Electronic Funds Transfer (EFT) system is used to demonstrate the feasibility of the proposed system.
Migration is one of the useful techniques used in cloud computing. This technique has been used for different purposes such as load balancing, fault tolerance, power management, reducing response time and increasing quality of service. Because the use of this technique is highly dependent on the architecture of the cloud computing infrastructure, in some cloud infrastructures, such as Eucalyptus, the virtual machine migration technique has not been used yet. In this paper, we describe important challenges that prevent implementing the migration technique in Eucalyptus, and then we propose a solution to overcome these challenges. This solution provides a basis for the further development and empowerment of the cloud environment.
The Cloud infrastructure services landscape advances steadily, leaving users with an agony of choice. As a result, Cloud service identification and discovery remains a hard problem due to different service descriptions, non-standardised naming conventions and heterogeneous types and features of Cloud services. In this paper, we present an OWL-based ontology, the Cloud Computing Ontology (CoCoOn), that defines functional and non-functional concepts, attributes and relations of infrastructure services. We also present a system, Cloud Recommender, that implements our domain ontology in a relational model. The system uses regular expressions and SQL for matching user requests to service descriptions. We briefly describe the architecture of the Cloud Recommender system, and demonstrate its effectiveness and scalability through a service configuration selection experiment based on a set of prominent Cloud providers' descriptions, including Amazon, Azure, and GoGrid.
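The request-matching step can be sketched as follows: parse a free-text requirement with regular expressions, then filter a table of service descriptions, mirroring how ontology-backed data in a relational model could be queried with SQL. The offerings, attribute names, and the constraint grammar are invented examples, not CoCoOn's actual schema.

```python
# Sketch: regex-based parsing of a user request and filtering of a small
# table of (invented) infrastructure service offerings, cheapest first.

import re

offerings = [
    {"provider": "ProviderA", "cpu": 2, "ram_gb": 4,  "price": 0.10},
    {"provider": "ProviderB", "cpu": 8, "ram_gb": 32, "price": 0.45},
    {"provider": "ProviderC", "cpu": 4, "ram_gb": 8,  "price": 0.20},
]

def match(request):
    """Extract 'N cpu' and 'N gb' minimums, then filter and rank offerings."""
    cpu = re.search(r"(\d+)\s*cpu", request, re.I)
    ram = re.search(r"(\d+)\s*gb", request, re.I)
    min_cpu = int(cpu.group(1)) if cpu else 0
    min_ram = int(ram.group(1)) if ram else 0
    hits = [o for o in offerings
            if o["cpu"] >= min_cpu and o["ram_gb"] >= min_ram]
    return sorted(hits, key=lambda o: o["price"])  # cheapest first

print([o["provider"] for o in match("need 4 CPU and 8 GB ram")])
# ['ProviderC', 'ProviderB']
```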
Pub/Sub systems permit users to submit subscriptions and notify interested users of detected events in a distributed way. Moving a Pub/Sub system to a cloud infrastructure promises high performance and scalability. This paper describes how to migrate two Pub/Sub systems, PADRES and OncePubSub, to the Xen Cloud Platform, and in particular proposes a black-box method, a grey-box method and a white-box method to take full advantage of cloud mechanisms. The paper then conducts a series of experiments on the Pub/Sub systems in the cloud to evaluate benefits and costs. The experimental results indicate that the black-box method does not always take effect although it can be implemented easily, while the grey-box method is more appropriate for a Pub/Sub system whose workload features and broker roles are known in advance. Further, the results show that the white-box method, combined with the load-balancing mechanisms of both the cloud and the Pub/Sub system, can achieve satisfying performance and scalability, especially when facing workloads with unidentified distributions.
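The core publish/subscribe pattern that such systems implement at scale can be sketched with a minimal in-memory broker. This is a hypothetical illustration only; PADRES-style systems involve distributed brokers, content-based matching and far more machinery:

```python
# Minimal in-memory publish/subscribe broker: subscribers register callbacks
# for a topic, and publishing an event notifies every interested subscriber.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of this topic.
        for callback in self._subs[topic]:
            callback(event)

broker = Broker()
received = []
broker.subscribe("load", received.append)
broker.publish("load", {"cpu": 0.9})   # received now holds the event
```

In a cloud deployment the broker itself would be replicated and the delivery step distributed, which is precisely where the black-, grey- and white-box migration methods above differ.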
Analytical workloads abound in application domains ranging from computational finance and risk analytics to engineering and manufacturing settings. In this paper we describe a Platform for Parallel R-based Analytics on the Cloud (P2RAC). The goal of this platform is to allow an analyst to take a simulation or optimization job (both the code and its associated data) that runs on a personal workstation and, with minimum effort, run it on large-scale parallel cloud infrastructure. If this can be facilitated gracefully, an analyst with strong quantitative but perhaps more limited development skills can harness the computational power of the cloud to solve larger analytical problems in less time. P2RAC is currently designed for executing parallel R scripts on the Amazon Elastic Compute Cloud infrastructure. Preliminary results obtained from an experiment confirm the feasibility of the platform.
This paper presents a novel monitoring architecture addressed to both the cloud provider and the cloud consumers. The architecture offers a Monitoring Platform-as-a-Service to each cloud consumer, allowing the monitoring metrics to be customized. The cloud provider sees a complete overview of the infrastructure, whereas each cloud consumer automatically sees their own cloud resources and can define other resources or services to be monitored. This is accomplished by means of an adaptive distributed monitoring architecture automatically deployed in the cloud infrastructure. The architecture has been implemented and released to the community under the GPL license as “MonPaaS”, open source software integrating Nagios and OpenStack. An intensive empirical evaluation of performance and scalability has been carried out using a real deployment of a cloud computing infrastructure in which more than 3700 VMs were executed.
Today's cloud computing infrastructures usually require customers who transfer data into the cloud to trust the providers of the cloud infrastructure. Not every customer is willing to grant this trust without justification. It should at least be possible to detect that the configuration of the cloud infrastructure -- as provided in the form of a hypervisor and administrative domain software -- has not been changed without the customer's consent. We present a system that enables periodic and necessity-driven integrity measurements and remote attestations of vital parts of cloud computing infrastructures. Building on an analysis of several relevant attack scenarios, our system is implemented on top of the Xen Cloud Platform and makes use of trusted computing technology to provide security guarantees. We evaluate both the security and the performance of this system. We show how it attests the integrity of a cloud infrastructure and detects all changes performed by system administrators in a typical software configuration, even in the presence of a simulated denial-of-service attack.
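The integrity-measurement idea behind such attestation can be illustrated with a minimal sketch: hash the hypervisor configuration and compare the measurement against a known-good value. The configuration strings and the direct use of SHA-256 here are purely illustrative; real trusted-computing stacks use TPM-backed measurement chains rather than a bare hash comparison:

```python
import hashlib

# Sketch of an integrity measurement: a stable hash of the configuration
# acts as the "measurement" that a verifier compares to a golden value.
def measure(config_bytes):
    return hashlib.sha256(config_bytes).hexdigest()

golden = measure(b"xen.cfg: dom0_mem=4096M")               # known-good state
tampered = measure(b"xen.cfg: dom0_mem=4096M; extra=1")    # modified config
changed = golden != tampered   # any change yields a different measurement
```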
Cloud Computing provides an optimal infrastructure to utilise and share both computational and data resources while allowing a pay-per-use model, useful for cost-effectively managing hardware investment or maximising its utilisation. Cloud Computing also offers transitory access to scalable amounts of computational resources, which is particularly important given the time and financial constraints of many user communities. The growing number of communities adopting large public cloud resources such as Amazon Web Services or Microsoft Azure proves the success, and hence the usefulness, of the Cloud Computing paradigm. Nonetheless, the typical use cases for public clouds involve non-business-critical applications, particularly where the security of the applications used or of the data deposited within shared public services is a binding requisite. In this paper, a use case is presented illustrating how the integration of Trusted Computing technologies into an available cloud infrastructure, Eucalyptus, allows the security-critical energy industry to exploit the flexibility and potential economic benefits of the Cloud Computing paradigm for their business-critical applications.
Cloud computing is an emerging technology in the IT world. Features of the cloud, such as low cost, scalability, robustness and availability, are attracting large-scale industries as well as small businesses. A virtual machine (VM) is software that runs its own operating system and applications, just as an operating system does on a physical computer. As the number of users increases, allocation of resources and scheduling become a complex task. Optimizing VM provisioning policies offers improvements such as increasing the provider's profit, saving energy and balancing load in large data centres. In cloud computing, when the resource requirements of users' requests exceed the resource limits of a cloud provider, the provider outsources the requests to the resources of other cloud providers; this concept is known as cloud federation. In this paper we propose an algorithm for VM provisioning in a federated cloud environment that tries to improve the cloud provider's profit. We used CloudSim to obtain our results, which show how cloud federation helps cloud providers improve their profit.
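The federation decision described above can be sketched roughly as follows. All capacities, prices, and the assumption that the cheapest partner has unlimited capacity are hypothetical and serve only to illustrate the profit calculation:

```python
# Profit-aware VM provisioning with cloud federation (toy model):
# serve what fits locally, outsource the overflow to the cheapest partner.
def provision(requests, local_capacity, price_per_vm, partner_prices):
    local = min(requests, local_capacity)
    outsourced = requests - local
    revenue = requests * price_per_vm
    # Overflow goes to the cheapest federated provider (assumed unlimited).
    cost = outsourced * min(partner_prices) if outsourced else 0.0
    return revenue - cost

# 120 VM requests, 100 local slots at 1.0/VM, partners charge 0.8 and 0.7.
profit = provision(requests=120, local_capacity=100,
                   price_per_vm=1.0, partner_prices=[0.8, 0.7])
```

Federation pays off whenever the partner price stays below the provider's own selling price, which is the effect the abstract's algorithm optimizes for.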
Modern cloud-based applications (e.g., Facebook, Dropbox) serve a wide range of edge clients (e.g., laptops, smartphones). The clients' characteristics vary significantly in terms of hardware (e.g., high-end desktops vs. resource-constrained smartphones), operating systems (e.g., Linux, Android, Mac OS, Windows), network connections (e.g., wireless vs. wired, 3G vs. 2G), and software versions (e.g., Firefox 12 vs. Firefox 13), to name a few. Unfortunately, due to misconfiguration, outdated software, faulty hardware, or other reasons, many edge systems operate at suboptimal performance, and identifying the root cause of poor performance is extremely challenging for the client of the cloud system. To address this challenge, the troubleshooting service presented in this paper leverages this heterogeneity to identify and debug performance problems on edge devices. First, by looking at many runs across many different clients, the service groups clients into clusters based on performance. Next, the service enables logging on remote clients to collect runtime traces, and subsequently identifies the root cause by analyzing the logs automatically. We leverage high-level features such as machine/OS type along with lower-level kernel statistics such as I/O rate and system calls. To demonstrate our system, we first introduce a configuration bug artificially injected into a recently built cluster by changing the TCP buffer size. Next, we present two real-life bugs identified using our tool: an I/O inefficiency bug relating to network transfers on Android, and a misconfiguration bug in VirtualBox.
Cloud computing infrastructures are providing resources on demand for tackling the needs of large-scale distributed applications. Determining the amount of resources to allocate for a given computation is a difficult problem, though. This paper introduces and compares four automated resource allocation strategies relying on the expertise that can be captured in workflow-based applications. The evaluation of these strategies was carried out on the Aladdin/Grid'5000 testbed using a real application from the area of medical image analysis. Experimental results show that optimized allocation can help find a trade-off between the amount of resources consumed and application makespan.
In this paper, we present and empirically evaluate the performance of database-agnostic transaction (DAT) support for the cloud. Our design and implementation of DAT is scalable, fault-tolerant, and requires only that the data store provide atomic, row-level access. Our approach enables applications to employ a single transactional data store API that can be used with a wide range of cloud data store technologies. We implement DAT in AppScale, an open-source implementation of the Google App Engine cloud platform, and use it to evaluate DAT's performance and the performance of a number of popular key-value stores.
In this paper, we propose a concept for improving the energy efficiency and resource utilization of cloud infrastructures by combining the benefits of heterogeneous machine instances. The basic idea is to integrate low-power system-on-a-chip (SoC) machines and high-power virtual machine instances into so-called Elastic Tandem Machine Instances (ETMIs). The low-power machine serves low load and is always running to ensure the availability of the ETMI. When load rises, the ETMI scales up automatically by starting the high-power instance and handing traffic over to it. For the non-disruptive transition from low-power to high-power machines and vice versa, we present a handover mechanism based on software-defined networking technologies. Our evaluations show the applicability of low-power SoC machines for serving low load efficiently, as well as the desired scalability properties of ETMIs.
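The scale-up/scale-down decision of an ETMI can be sketched as simple threshold logic with hysteresis. The thresholds below are illustrative only; the paper's actual handover relies on software-defined networking to redirect traffic non-disruptively:

```python
# ETMI scaling sketch: the low-power SoC always runs; the high-power VM is
# started when load crosses an upper threshold and stopped when load falls
# below a lower one. The gap between thresholds avoids rapid flapping.
def active_instance(load, high_power_on, up=0.8, down=0.3):
    if not high_power_on and load > up:
        high_power_on = True     # start high-power VM, hand traffic over
    elif high_power_on and load < down:
        high_power_on = False    # hand traffic back, stop high-power VM
    return high_power_on
```

A load of 0.5 keeps whichever instance is currently active, which is exactly the hysteresis behaviour that prevents oscillating handovers around a single threshold.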
With cloud computing, a cycle of fault diagnosis and recovery becomes the norm. A large amount of monitoring data and log events is available, but it is hard to figure out which events or metrics are critical for fault diagnosis. Other approaches model faults as a deviation from normal behavior, and are thus less applicable in the cloud, where changes in the environment may alter what is considered normal. In this work, we propose an adaptive and flexible fault diagnosis framework to automatically identify the key fault indicators and detect fault patterns. Leveraging ideas from social media, we represent the hierarchical relationships among metrics and events as well as how they relate to faults, and we apply the EdgeRank algorithm to decide which key events contribute to a fault. Our approach works across different environments to detect potential faults. We evaluated our framework on a cloud-based enterprise system using a list of injected faults varying from environmental faults (e.g., virtual machine or network) to application degradation, considering both private and public clouds. Our solution achieves over 90% detection accuracy with modest overhead, and a comparison shows it is more accurate than alternative approaches in the literature.
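An EdgeRank-style score, as popularized by social-media feeds, multiplies an affinity term, an edge weight and a time decay. A hypothetical adaptation to fault events might rank a metric by how often it co-occurred with past faults (affinity), its severity (weight) and how recently it fired (decay); all names and numbers below are illustrative, not the paper's implementation:

```python
import math

# EdgeRank-style scoring for fault events: affinity * weight * time decay.
def edge_rank(affinity, weight, age_seconds, half_life=300.0):
    # Exponential decay with a configurable half-life: an event half_life
    # seconds old contributes half as much as a fresh one.
    decay = math.exp(-math.log(2) * age_seconds / half_life)
    return affinity * weight * decay

# Rank two candidate indicators: a recent, fault-correlated disk I/O spike
# versus a weakly correlated CPU-steal event.
events = [("disk_io_spike", edge_rank(0.9, 1.0, 60)),
          ("cpu_steal", edge_rank(0.4, 0.5, 10))]
top = max(events, key=lambda e: e[1])[0]
```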
Massive Parallel Sequencing is a term used to describe several revolutionary approaches to DNA sequencing, the so-called Next Generation Sequencing technologies. These technologies generate millions of short sequence fragments in a single run and can be used to measure levels of gene expression and to identify novel splice variants of genes, allowing more accurate analysis. The proposed solution provides novelty in two areas: firstly, an optimization of the read-mapping algorithm has been designed in order to parallelize processes; secondly, an architecture has been implemented that consists of a Grid platform composed of physical nodes, a Virtual platform composed of virtual nodes set up on demand, and a scheduler that integrates the two platforms.
Cloud architectures capitalise on the many benefits of virtualisation. The central component of virtualisation is the hypervisor, which plays a fundamental role in the virtualised environment; a hypervisor is thus typically a large and complex piece of software. The NoHype architecture is a new approach to the security problems related to hypervisors, and proposes simply to eliminate the hypervisor. However, as with any new approach to security, it can introduce new threats into the target environment and can have drawbacks that could make the architecture unfeasible to use. In this paper we investigate the NoHype architecture, considering the new data flows, processes, entities, data stores and boundaries it introduces. We point out that this new architecture does not mitigate all the threats a hypervisor is prone to in a cloud architecture, and may even introduce new ones.
Energy consumption has become a key issue due to the pollutants it generates and the steady increase in its rates. Cloud computing is an emerging model for distributed utility computing and is being considered an attractive opportunity for saving energy through central management of computational resources; obviously, a substantial reduction in energy consumption can be made by powering down servers when they are not in use. This work presents a resource provisioning approach based on an unsupervised predictor model in the form of an unsupervised, recurrent neural network built on a self-organizing map. Unsupervised learning has long been considered a desirable goal in computing. Unlike conventional prediction-learning methods, which assign credit by means of the difference between predicted and actual outcomes, the proposed approach assigns credit by means of the difference between temporally successive predictions. We show that the proposed approach gives promising results.
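Assigning credit by the difference between temporally successive predictions is the idea behind temporal-difference learning. A minimal one-step sketch follows; the learning rate, target and update rule are illustrative and simpler than the paper's self-organizing-map model:

```python
# One-step temporal-difference update: nudge the current prediction toward
# the *next* prediction (plus any observed reward) instead of waiting for
# the final outcome, as conventional supervised prediction would.
def td_update(value, next_value, reward, alpha=0.1, gamma=1.0):
    td_error = reward + gamma * next_value - value   # successive-prediction gap
    return value + alpha * td_error

# Repeated updates pull the prediction toward a stable successor estimate.
v = 0.0
for _ in range(100):
    v = td_update(v, next_value=1.0, reward=0.0)
```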
The trusted virtual data center (TVDc) is a technology developed to address the need for strong isolation and integrity guarantees in virtualized environments. In this paper, we extend previous work on the TVDc by implementing controlled access to networked storage based on security labels and by implementing management prototypes that demonstrate the enforcement of isolation constraints and integrity checking. In addition, we extend the management paradigm for the TVDc with a hierarchical administration model based on trusted virtual domains and describe the challenges for future research.
The provision of security services is a key enabler in cloud computing architectures. Focusing on multi-tenancy authorization systems, the main objective of this paper is the provision of different models, including role-based access control (RBAC), hierarchical RBAC (hRBAC), conditional RBAC (cRBAC) and hierarchical objects (HO). Our proposal is based on the Common Information Model (CIM) and Semantic Web technologies, which have been demonstrated to be valid tools for describing authorization models. As the same language is used for the information and authorization models, the two are well aligned, reducing the potential mismatch between their semantics. A trust model enabling the establishment of coalitions and federations across tenants is also covered as part of the research presented in this paper.
Cloud computing is a challenging technology that promises to strongly change the way computing and storage resources will be accessed in the near future. Clouds may demand huge amounts of energy if adequate management policies are not put in place. Optimization strategies are needed to allocate, migrate and consolidate virtual machines and to manage the switch-on/switch-off periods of a data center. In this paper, we present a modeling approach based on stochastic reward nets to investigate the most convenient strategies for managing a federation of clouds, with the final goal of reducing overall energy consumption. Several policies are presented and their impact is evaluated, thus contributing to a rational and efficient adoption of the cloud computing paradigm.
Mobile and cloud computing are converging as the prominent technologies leading the change to the post-personal-computing (PC) era. Computational offloading and data binding are the core techniques that elastically augment the capabilities of low-power devices such as smartphones. Mobile applications may be bound to cloud resources following either a task delegation or a code offloading criterion. In the delegation model, a handset can use the cloud in a service-oriented manner to asynchronously delegate a resource-intensive mobile task by direct invocation of a service. In contrast, in the offloading model, a mobile application is partitioned and analyzed so that the most computationally expensive operations at the code level can be identified and offloaded to a remote cloud-based surrogate. In this paper we compare the mobile cloud computing models for offloading and delegation, using our own frameworks for computational offloading and data binding in the analysis. While in principle offloading and delegation are both viable methods to augment the capabilities of mobile devices with cloud power, they enrich mobile applications from different perspectives and at diverse computational scales.
This paper proposes an architecture for a resilient cloud computing infrastructure that provably maintains cloud functionality against persistent successful corruptions of cloud nodes. The architecture is composed of a self-healing software mechanism for the entire cloud, as well as hardware-assisted regeneration of compromised (or faulty) nodes from a pristine state. Such an architecture aims to secure critical distributed cloud computations well beyond the current state of the art by seamlessly tolerating a continuous rate of successful corruptions up to a certain limit, e.g., 30% of all cloud nodes corrupted within a tunable window of time. The proposed architecture achieves these properties through a principled separation of distributed task supervision from the computation of user-defined jobs. Task supervision and end-user communication are performed by a new software mechanism called the Control Operations Plane (COP), which builds a trustworthy, resilient, self-healing cloud computing infrastructure out of the underlying untrustworthy and faulty hosts. The COP leverages provably secure cryptographic protocols that are efficient and robust in the presence of many corrupted participants: such a cloud regularly and unobtrusively refreshes itself by restoring COP nodes from a pristine state at regular intervals.
Cloud Computing provides flexible and dynamic provisioning of resources, services, and applications. As such, Cloud Computing is the ideal IT infrastructure for the ever-changing workloads of companies and service providers. Although cloud providers offer functionality for usage accounting, this functionality is limited to their own requirements. Companies and service providers, as cloud users, also need usage accounting, but with different requirements than the cloud providers. Additionally, cloud users are not limited to a single cloud, but make use of multiple cloud infrastructures and applications depending on their needs. Cloud users require a usage accounting infrastructure capable of supporting not only billing as an accounting application, but all kinds of applications, for example cost allocation or trend analysis. In order to manage such a complex and adaptable usage accounting infrastructure, we present a policy-based management approach. Policies serve as a high-level description of the intended behavior of the infrastructure and are then used to derive configurations for the infrastructure services. Such a solution ensures the efficient management and administration of complex usage accounting infrastructures.
Cloud platforms are increasingly being used for hosting a broad diversity of services, from traditional e-commerce applications to interactive web-based IDEs. However, we observe that the proliferation of offers by cloud providers raises several challenges. Developers will not only have to deploy applications for a specific cloud, but will also have to consider migrating services from one cloud to another, and manage distributed applications spanning multiple clouds. In this paper, we present our federated multi-cloud PaaS infrastructure for addressing these challenges. The infrastructure is based on three foundations: i) an open service model used to design and implement both our multi-cloud PaaS and the SaaS applications running on top of it, ii) a configurable architecture for the federated PaaS, and iii) infrastructure services for managing both our multi-cloud PaaS and the SaaS applications. We then show how this multi-cloud PaaS can be deployed on top of thirteen existing IaaS/PaaS offerings. We finally report on three distributed SaaS applications developed with and deployed on our federated multi-cloud PaaS infrastructure.
This paper presents a novel evaluation study of Cloud Computing technology, with a focused emphasis on Cloud Storage mechanisms and the way they are affecting the progress of present Cloud Services. Considering the exponential growth of user data and its impact on Cloud Storage infrastructure, this work provides two major contributions through comprehensive performance evaluations. Firstly, it proposes a unique 10-point performance evaluation framework for existing Cloud Storage infrastructure and applies it to six major Cloud Storage Service Providers currently in the market. Secondly, it presents a detailed, insightful assessment of the eighteen most popular Cloud Storage Hardware vendors with respect to the storage technologies they implement. In conclusion, it takes stock of current trends in optimizing storage infrastructure for Cloud Computing and predicts future research possibilities in this rapidly growing technology.
A core idea of cloud computing is elasticity, i.e., enabling applications to adapt to varying load by dynamically acquiring and releasing cloud resources. One concrete realization is cloud bursting, the migration of applications or parts of applications running in a private cloud to a public cloud to cover load spikes. Actually building a cloud-bursting-enabled application is not trivial. In this paper, we introduce a reference model and middleware realization for cloud bursting, thus enabling elastic applications to run across the boundaries of different cloud infrastructures. In particular, we extend our previous work on application-level elasticity in single clouds to multiple clouds, and apply it to implement a hybrid cloud model that combines good utilization of a private cloud with the unlimited scalability of a public cloud. By means of an experimental evaluation we show the feasibility of the approach and the benefits of adopting cloud bursting in hybrid cloud models.
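The bursting placement decision can be sketched as overflow routing from the private to the public cloud. The capacities below are hypothetical, and the actual middleware must additionally handle application migration and state, which this toy model ignores:

```python
# Cloud-bursting placement sketch: requests run in the private cloud until
# it saturates; the overflow "bursts" to the public cloud.
def place(requests, private_capacity):
    private = min(requests, private_capacity)
    public = requests - private           # overflow handled by public cloud
    return {"private": private, "public": public}

# A load spike of 150 requests against 100 private slots bursts 50 requests.
placement = place(requests=150, private_capacity=100)
```

This captures the hybrid model's economics: the private cloud stays fully utilized under normal load, and the pay-per-use public cloud absorbs only the spikes.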
Cloud computing is a rising IT industry, and how to charge for cloud computing is an important problem that urgently needs to be solved. Through analysis and classification, this article designs a charging model based on the infrastructure layer in cloud computing, called the infrastructure cloud. The article also proposes a Petri net that incorporates costs and profits, a problem of common concern in industrial circles. Based on this cost-profit Petri net, the article contributes to the design and analysis of charging models at the cloud computing infrastructure layer.
With the fast advancements in cloud computing and software-as-a-service (SaaS), testing and evaluation of cloud-based software and SaaS applications has become an important task for engineers. Since most existing tools were not developed to support cloud-based software testing and SaaS evaluation, there is a strong demand for a new cloud-based testing infrastructure and evaluation environment for SaaS applications. This paper proposes a testing-as-a-service (TaaS) infrastructure and reports on a cloud-based TaaS environment with tools (known as CTaaS) developed to meet the needs of SaaS testing and performance and scalability evaluation. The paper presents the TaaS concepts and CTaaS, including their infrastructure, design and implementation. In addition, it demonstrates the application of our previously proposed graphic models and metrics for SaaS performance and scalability evaluation, and reports a case study for a selected SaaS (OrangeHRM) using the developed TaaS environment.
Cloud computing is a term used to refer to a model of network computing where a program or application runs on a connected server or servers rather than on a local computing device such as a PC, tablet or smartphone. As in the traditional client-server model or older mainframe computing, a user connects with a server to perform a task. The difference with cloud computing is that the computing process may run on one or many connected computers at the same time, utilizing the concept of virtualization. With virtualization, one or more physical servers can be configured and partitioned into multiple independent "virtual" servers, all functioning independently and appearing to the user to be a single physical device. Such virtual servers are in essence disassociated from their physical server, and with this added flexibility they can be moved around and scaled up or down on the fly without affecting the end user. The computing resources have become "granular", which provides end-user and operator benefits including on-demand self-service, broad access across multiple devices, resource pooling, rapid elasticity and service metering capability. In more detail, cloud computing refers to a computing hardware machine or group of computing hardware machines, commonly referred to as a server or servers, connected through a communication network such as the Internet, an intranet, a local area network (LAN) or a wide area network (WAN). Any individual user who has permission to access the server can use the server's processing power to run an application, store data, or perform any other computing task.
Therefore, instead of using a personal computer every time to run a native application, the individual can now run the application from anywhere in the world, as the server provides the processing power to the application and is connected to a network via the Internet or other connection platforms, allowing it to be accessed from anywhere. All this has become possible due to the increased computer processing power available to humankind at decreased cost, as stated in Moore's law. In common usage the term "the cloud" has become a shorthand way to refer to cloud computing infrastructure. The term came from the cloud symbol that network engineers used on network diagrams to represent the segments of a network unknown to them. Marketers have further popularized the phrase "in the cloud" to refer to software, platforms and infrastructure that are sold "as a service", i.e. remotely through the Internet. Typically, the seller has actual energy-consuming servers which host products and services from a remote location, so end users don't have to; they can simply log on to the network without installing anything. The major models of cloud computing service are known as software as a service, platform as a service, and infrastructure as a service. These cloud services may be offered in a public, private or hybrid network. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services. The cloud also focuses on maximizing the effectiveness of the shared resources: cloud resources are usually not only shared by multiple users but are also dynamically reallocated on demand, which works well for allocating resources to users.
For example, a cloud computing facility that serves European users during European business hours with a specific application (e.g., email) may reallocate the same resources to serve North American users during North America's business hours with a different application (e.g., a web server). This approach maximizes the use of computing power and thus also reduces environmental damage, since less power, air conditioning, rack space, etc. are required for a variety of functions. With cloud computing, multiple users can access a single server to retrieve and update their data without purchasing licenses for different applications. The term "moving to the cloud" also refers to an organization moving away from a traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to an OPEX model (use a shared cloud infrastructure and pay as one uses it). Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs and focus on projects that differentiate their businesses instead of on infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a "pay as you go" model, which can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model. The term "cloud computing" is mostly used to sell hosted services, in the sense of application service provisioning, that run client-server software at a remote location. Such services are given popular acronyms like 'SaaS' (Software as a Service), 'PaaS' (Platform as a Service), 'IaaS' (Infrastructure as a Service), 'HaaS' (Hardware as a Service) and finally 'EaaS' (Everything as a Service).
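The CAPEX-versus-OPEX trade-off mentioned above comes down to simple arithmetic. The following toy comparison uses entirely hypothetical numbers (hourly rate, hardware price and depreciation period) just to show where the break-even lies:

```python
# Monthly cost under pay-as-you-go (OPEX): pay only for hours actually used.
def monthly_cost_cloud(hours_used, rate_per_hour):
    return hours_used * rate_per_hour

# Monthly cost of dedicated hardware (CAPEX): purchase price spread over
# its depreciation period, regardless of utilisation.
def monthly_cost_dedicated(capex, months_depreciation):
    return capex / months_depreciation

cloud = monthly_cost_cloud(hours_used=100, rate_per_hour=0.5)
dedicated = monthly_cost_dedicated(capex=3600, months_depreciation=36)
cheaper = "cloud" if cloud < dedicated else "dedicated"
```

At low utilisation the pay-as-you-go model wins; as usage approaches full-time operation, the dedicated hardware amortises better, which is why administrators must actively watch cloud spending.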
End users access cloud-based applications through a web browser, thin client or mobile app, while the business software and the user's data are stored on servers at a remote location. Examples include Amazon Web Services and Google App Engine, which allocate space for a user to deploy and manage software "in the cloud".