Supporting timely data services using fresh data in data-intensive real-time applications, such as e-commerce and transportation management, is desirable but challenging, since the workload may vary dynamically. To keep the data service delay below a specified threshold, we develop both a predictive and a reactive method for database admission control. The predictive method derives the workload bound for admission control in a predictive manner, making no statistical or queuing-theoretic assumptions about workloads. In addition, our reactive scheme, based on formal feedback control theory, continuously adjusts the database load bound to support the delay threshold. By adapting the load bound in a proactive fashion, we attempt to avoid severe overload conditions and excessive delays before they occur. The feedback control scheme also enhances timeliness by compensating for potential prediction errors due to dynamic workloads. Hence, the predictive and reactive methods complement each other, enhancing the robustness of real-time data services as a whole. We implement the integrated approach and several baselines in an open-source database. Compared to the tested open-loop, feedback-only, and statistical prediction + feedback baselines representing the state of the art, our integrated method significantly improves the average and transient delay as well as real-time data service throughput.
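The reactive half of such an approach can be sketched as a simple feedback loop. The sketch below is a hypothetical proportional-integral (PI) controller, not the paper's actual control law; the gains, the delay threshold, and the initial load bound are illustrative assumptions.

```python
# Hypothetical sketch: a PI feedback controller that adjusts a database
# load bound so the measured service delay tracks a target threshold.
# Gains and units are illustrative, not taken from the paper.

class DelayFeedbackController:
    def __init__(self, delay_threshold, kp=0.5, ki=0.1, initial_bound=100.0):
        self.delay_threshold = delay_threshold  # target delay (e.g., ms)
        self.kp = kp                            # proportional gain
        self.ki = ki                            # integral gain
        self.integral = 0.0                     # accumulated error
        self.load_bound = initial_bound         # current admitted load bound

    def update(self, measured_delay):
        # Positive error means delay is under the threshold: admit more load.
        error = self.delay_threshold - measured_delay
        self.integral += error
        self.load_bound += self.kp * error + self.ki * self.integral
        self.load_bound = max(self.load_bound, 0.0)  # bound cannot go negative
        return self.load_bound

controller = DelayFeedbackController(delay_threshold=50.0)
bound_after_overload = controller.update(measured_delay=80.0)  # delay too high
bound_after_recovery = controller.update(measured_delay=40.0)  # delay back down
```

When the measured delay exceeds the threshold the controller shrinks the load bound, admitting fewer requests; when delay drops below it, the bound grows back.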
The main focus of high-performance computing (HPC) is to complete a time-consuming process in as little time as possible, that is, to finish the task on a tight schedule by performing numerous operations per second. In our project, we create a computing system that implements a queue-based processing technique. It uses a prediction engine that estimates the likely times of job initiation and job completion. Parallel processing systems with clusters enable requests in the queue to be processed faster. Each request from the end user is split and submitted to intermediate nodes, so that processing that would otherwise occur at the client side, far from the HPC servers, instead takes place near them. Hence, the performance and memory load on the HPC server is moderated and balanced by the intermediate nodes.
Web service composition is an important problem in web service based systems: how to build a new value-added web service from existing web services. A web service may have many implementations, all of which offer the same functionality but may differ in their QoS values. Thus, a significant research problem in web service composition is how to select an implementation for each web service such that the composite web service gives the best overall performance; this is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes, when an implementation is selected for one web service, a particular implementation for another web service must also be selected; this is the so-called dependency constraint. Sometimes, when an implementation for one web service is selected, a set of implementations for another web service must be excluded from the composition; this is the so-called conflict constraint. From a computational point of view, optimal web service selection is therefore a typical constrained combinatorial optimization problem. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated, and the evaluation results show that it outperforms two other existing genetic algorithms when the number of web services and the number of constraints are large.
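The two constraint types can be made concrete with a small feasibility check. The sketch below is an illustrative helper, not the paper's hybrid genetic algorithm; the service and implementation names are invented. In a GA, such a check would typically be applied to each chromosome (one implementation chosen per abstract service) during fitness evaluation.

```python
# Illustrative feasibility check for a candidate selection, where a
# selection maps each abstract service to a chosen implementation.

def is_feasible(selection, dependencies, conflicts):
    # dependencies: {(svc_a, impl_a): (svc_b, impl_b)} -- choosing impl_a
    # for svc_a forces impl_b for svc_b.
    for (svc, impl), (dep_svc, dep_impl) in dependencies.items():
        if selection.get(svc) == impl and selection.get(dep_svc) != dep_impl:
            return False
    # conflicts: {(svc_a, impl_a): {(svc_b, impl_b), ...}} -- choosing
    # impl_a for svc_a excludes each listed implementation.
    for (svc, impl), excluded in conflicts.items():
        if selection.get(svc) == impl:
            for ex_svc, ex_impl in excluded:
                if selection.get(ex_svc) == ex_impl:
                    return False
    return True

deps = {("payment", "PayFast"): ("shipping", "ShipNow")}
confs = {("payment", "PayFast"): {("shipping", "SlowShip")}}
ok = is_feasible({"payment": "PayFast", "shipping": "ShipNow"}, deps, confs)
bad = is_feasible({"payment": "PayFast", "shipping": "SlowShip"}, deps, confs)
```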
Currently, an increasingly large number of Geospatial Web Services are being built in Spatial Data Infrastructures (SDIs). Although services make it easy for users to access desired information, the quality of Geospatial Web Services greatly affects users' willingness to access them. Therefore, to improve the use of service-oriented architecture for distributed geospatial data sharing, proper measurement of Geospatial Web Service quality is highly valuable. In this paper, we propose to evaluate Geospatial Web Service quality from both Geospatial Web Service activities and Geospatial Web Service usage. The Geospatial Web Service activities comprise four layers: the Geospatial Web Service commitment, description, process, and outcome layers. To measure the Geospatial Web Service quality score, we consider both objective and subjective measurement. Objective measurement is generated by comparing actual service performance with application requirements; subjective measurement determines users' attitudes towards the consumption of services. In conclusion, this study brings a new perspective to evaluating Geospatial Web Services in SDIs, providing a solution to calculate the Geospatial Web Service quality score from both objective and subjective measurement.
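As a minimal illustration of combining the two kinds of measurement, the sketch below computes a weighted quality score from an objective component (measured response time versus the application's requirement) and a subjective component (average user rating). The weights, attributes, and normalization are assumptions, not the paper's actual formulas.

```python
# Hypothetical quality-score sketch: objective score compares measured
# performance against the requirement; subjective score is an average
# user rating; a weighted sum yields the overall score.

def objective_score(measured_response_ms, required_response_ms):
    # 1.0 when the service meets the requirement; capped so "better than
    # required" does not inflate the score past 1.0.
    return min(required_response_ms / measured_response_ms, 1.0)

def subjective_score(user_ratings, max_rating=5.0):
    # Normalize the average user rating into [0, 1].
    return sum(user_ratings) / (len(user_ratings) * max_rating)

def quality_score(measured_ms, required_ms, ratings, w_obj=0.6, w_subj=0.4):
    return (w_obj * objective_score(measured_ms, required_ms)
            + w_subj * subjective_score(ratings))

score = quality_score(measured_ms=200, required_ms=400, ratings=[4, 5, 3])
```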
Web services provide a completely new way of building loosely coupled service-oriented architectures and facilitate application integration by encapsulating information and existing software. To exploit the true potential of Web services, it is vital to develop methodologies and tools for automatic Web service composition. Numerous Web service composition description languages have been developed for professionals to study and compose Web services. However, current languages such as OWL-S, WSMO, and METEOR-S still have many limitations, such as the lack of elements to describe dynamic transformations among components of Web service compositions. In this paper, we propose a fast Web service composition approach (FWSCA) which adopts an extended OWL-S Web service semantic description language (OWL-ES). OWL-ES is able to describe dynamic transformations among components of Web service compositions by extending the existing process model of OWL-S. We elaborate on the design of FWSCA and discuss its usability using an example of Web service composition. On the tooling side of FWSCA, we develop an automatic Web service composition visualization and formalization tool (VFT) which eases the process of Web service composition for end users.
Utility-infrastructure providers serve as intermediaries between service providers and service consumers. The challenge of selecting the optimal Web service based on a service consumer's preference, using an aggregate, score, or degree, has been tackled to a certain extent, but such selection has been done in the context of a subjective criterion alone. One major remaining challenge is selecting the optimal Web service when there are ties, i.e., when several alternatives have the same score under the chosen criterion. We propose a Quality of Service aware multi-level strategy for selecting optimal Web services for utility-infrastructure providers. To obtain the optimal selection, the service consumer's Quality of Service preference is compared with the Web services' Quality of Service offerings; the offering that best matches the preference is taken to be the optimal one. We consider two alternative e-market services for our Quality of Service driven selection, Information services and Complex services, and concentrate on the Information services; our model uses non-deterministic Quality of Service metrics. In our experiments, we use Quality of Service information from a Web service data set as input, and the results show that our proposed multi-level strategy can satisfy service consumers' requests based on non-functional requirements.
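The tie-breaking idea behind a multi-level strategy can be sketched in a few lines: candidates are first compared on an aggregate score, and ties are broken by successively finer criteria. The attribute choices (response time, then availability), their order, and the data below are illustrative assumptions, not the paper's actual levels.

```python
# Hypothetical multi-level selection: level 1 is the aggregate QoS score,
# level 2 breaks ties by lower response time, level 3 by higher availability.

def select_optimal(candidates):
    # Each candidate: (name, aggregate_score, response_time_ms, availability).
    # Python compares the key tuples element by element, which gives the
    # multi-level behavior; response time is negated so lower wins.
    return max(candidates, key=lambda c: (c[1], -c[2], c[3]))

candidates = [
    ("svcA", 0.90, 120, 0.99),
    ("svcB", 0.90, 100, 0.98),  # ties with svcA on score, wins on latency
    ("svcC", 0.85,  80, 0.99),
]
best = select_optimal(candidates)
```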
Business-to-business commerce will be a considerable market in the near future of Internet e-business. In this future market, several providers need to be able to integrate or exchange information to provide a global service. The problem we tackle in this paper concerns the existing information sources in the current Internet environment: how to integrate existing Web sites with each other to form a new Internet service. The difficulty has a historical cause: Internet Web sites were developed for browsing by human users, so they support neither machine-understandable content nor inter-provider interaction. To overcome this gap, we need a framework to systematically migrate existing presentation-oriented Web sites to service-oriented ones; evidently, redeveloping all of them is an unacceptable solution. In this paper, we propose a Web service gateway mechanism in which existing Web sites are wrapped by Web service wrappers. Thus, without any effort to duplicate the Web sites' code, these services inherit all features of the sites while being enriched with other Web service features such as UDDI publishing and semantic description. As a consequence, they can easily be integrated with each other in a business-to-business schema to provide a more valuable service for users. This Web service gateway was developed at Toshiba with a Web Service Generator that automatically generates Web service wrappers. Using this system, several "real" Web services were generated and made available for use. The Web service gateway and these services are also presented and evaluated in this paper.
Web service selection is an indispensable part of web service composition, as it selects the best web service for a client's requirement. As the number of web services with similar functionality on the Internet grows, determining which service best fits a client's requirement is an elusive task for web service operators. In this paper, we propose a web service selection model that selects the best web service based on QoS constraints. A QoS manager acts as an agent for service providers and clients to perform publish and find operations, and a QoS database (QoSDB) stores the QoS details of each web service. The QoS attributes of a web service, such as response time, throughput, reliability, availability, and cost, are optimized and ranked by our algorithm, and the rank value is stored in the QoSDB. To find a web service, the user specifies the functional details of the desired service and the QoS values required to identify the list of services matching the given criteria. Moreover, the user sets threshold values for response time and throughput, which are used to filter the relevant services from the list. For each web service request, the QoS manager assesses the request, filters the list of pertinent candidate services matching the requirement, and provides it to the client for setting preferences over the QoS attributes. The highest-ranked service is then provided to the client for further processing.
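The filter-then-rank pipeline can be sketched concretely. The snippet below is an illustrative stand-in for the paper's ranking algorithm: services violating the user's response-time or throughput thresholds are dropped first, and the survivors are scored by a weighted sum of normalized attributes, with response time treated as "lower is better". The weights, normalizations, and data are assumptions.

```python
# Hypothetical QoS filter-and-rank sketch (not the paper's algorithm).

def rank_services(services, max_response_ms, min_throughput, weights):
    # Threshold filter: enforce the user's hard QoS limits first.
    eligible = [s for s in services
                if s["response_ms"] <= max_response_ms
                and s["throughput"] >= min_throughput]
    top_tp = max(s["throughput"] for s in eligible)

    def score(s):
        # Response time is inverted (lower is better); throughput is
        # normalized by the best eligible value; availability is in [0, 1].
        return (weights["response"] * (1 - s["response_ms"] / max_response_ms)
                + weights["throughput"] * s["throughput"] / top_tp
                + weights["availability"] * s["availability"])

    return sorted(eligible, key=score, reverse=True)

services = [
    {"name": "S1", "response_ms": 150, "throughput": 40, "availability": 0.99},
    {"name": "S2", "response_ms": 300, "throughput": 90, "availability": 0.95},
    {"name": "S3", "response_ms": 900, "throughput": 95, "availability": 0.99},
]
ranked = rank_services(services, max_response_ms=500, min_throughput=30,
                       weights={"response": 0.4, "throughput": 0.4,
                                "availability": 0.2})
```

Here S3 is filtered out by the response-time threshold before ranking ever happens, which is the role the thresholds play in the model above.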
Web service composition creates new composite Web services from a set of existing ones. There are two common composition patterns: Web service orchestration and Web service choreography. We design a Web service composition language based on actor system theory, within which there is a natural relationship between Web service orchestration and Web service choreography.
The exponential growth of Web services makes building high-quality service-oriented applications an urgent and crucial research problem. User-side QoS evaluations of Web services are critical for selecting the optimal Web service from a set of functionally equivalent service candidates. Since the QoS performance of Web services is highly related to service status and network environments, which vary over time, service invocations at different instants during a long time interval are required to make an accurate Web service QoS evaluation. However, invoking a huge number of Web services from the user side purely for quality evaluation is time-consuming, resource-consuming, and sometimes even impractical (e.g., when service invocations are charged by service providers). To address this critical challenge, this paper proposes a Web service QoS prediction framework, called WSPred, to provide time-aware personalized QoS value prediction for different service users. WSPred requires no additional invocation of Web services. Based on the past Web service usage experience of different service users, WSPred builds feature models and employs them to make personalized QoS predictions for different users. Extensive experimental results show the effectiveness and efficiency of WSPred. Moreover, we publicly release our real-world time-aware Web service QoS dataset for future research, which makes our experiments verifiable and reproducible.
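WSPred itself builds time-aware latent feature models; as a much simpler illustration of the underlying idea of predicting a QoS value one user has never observed from other users' past observations, the sketch below uses a plain bias model (global mean plus user and service offsets). The model and all data are illustrative, not WSPred.

```python
# Toy collaborative QoS prediction: estimate a missing (user, service)
# response time as global mean + user offset + service offset.

def fit_bias_model(observations):
    # observations: list of (user, service, response_time_ms)
    mean = sum(r for _, _, r in observations) / len(observations)
    user_res, svc_res = {}, {}
    for u, s, r in observations:
        user_res.setdefault(u, []).append(r - mean)
        svc_res.setdefault(s, []).append(r - mean)
    user_bias = {u: sum(v) / len(v) for u, v in user_res.items()}
    svc_bias = {s: sum(v) / len(v) for s, v in svc_res.items()}

    def predict(user, service):
        # Unknown users or services fall back to a zero offset.
        return mean + user_bias.get(user, 0.0) + svc_bias.get(service, 0.0)
    return predict

# alice has never invoked ws2; her value is inferred from bob's experience
# of ws2 and from how alice compares to bob on the shared service ws1.
observations = [("alice", "ws1", 100), ("bob", "ws1", 200), ("bob", "ws2", 400)]
predict = fit_bias_model(observations)
estimate = predict("alice", "ws2")
```

No extra invocation of ws2 by alice is needed, which mirrors the "no additional invocation" property claimed above.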
Service-oriented communication (SOC) is a new development in industry that enables communication through Web services and SOA. SOC treats communication as a service and provides a service-oriented architecture to integrate communication into business applications. Recent advances in Web services and SOA have made possible a fully Web service and SOA based communication paradigm over IP. This paper is an overview of recent developments in this area, with a focus on key technologies that can be applied to Web service enablement of communication. In particular, we discuss generic Web service based application session management based on WS-Session, the two-way full-duplex Web service interaction framework, and the development of the Web service initiation protocol (WIP). WIP is a fully Web service and SOA based communication protocol for multimedia and voice communication over IP. The generic Web service approach of WIP overcomes many limitations that would otherwise be difficult to address with the non-Web service based communication methods used today. In addition, we discuss the service composition and orchestration framework based on WS-BPEL, and illustrate the application of this approach through some real use cases.
The Web services paradigm provides organizations with an environment to enhance B2B communications. The aim is to create modularized services supporting the business processes within an organization and also those of the external entities participating in the same business processes. Current Web service frameworks do not include the functionality required for measuring Web service execution performance from an organizational perspective. As such, a shift to this paradigm comes at the expense of the organization's performance knowledge, as this knowledge becomes buried within the internal processing of the Web service platform. This research introduces an approach to reclaim and improve this knowledge for the organization by establishing a framework that enables the definition of Web services from a performance measurement perspective, together with the logging and analysis of Web service enactment. The framework utilizes Web service concepts, DSS principles, and agent technologies to enable feedback on the organization's performance measures through the analysis of the Web services. A key benefit of this work is that the data is stored once but provides information to both the customer and the supplier of a Web service, removing the need to develop internal Web service performance monitoring.
Dynamic Web service selection refers to determining a subset of component Web services to be invoked so as to orchestrate a composite Web service. Previous work on Web service selection usually assumes the invocations of Web service operations to be independent of one another. This assumption, however, does not hold in practice, as both the composite and component Web services often impose orderings on the invocation of their operations. Such orderings constrain the selection of component Web services to orchestrate the composite Web service. We therefore propose to use a finite state machine (FSM) to model the invocation order of Web service operations. We define a measure, called aggregated reliability, to capture the probability that a given state in the composite Web service leads to successful execution in a context where each component Web service may fail with some probability. We show that the computation of aggregated reliabilities is equivalent to an eigenvector computation, and hence adopt the power method to derive aggregated reliabilities efficiently. In orchestrating a composite Web service, we propose two strategies to select component Web services that are likely to successfully complete the execution of a given sequence of operations. Our experiments on a synthetically generated set of Web service operation execution sequences show that our proposed strategies perform better than the baseline random selection strategy.
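The iterative flavor of the power-method computation can be illustrated on a toy state machine. The sketch below is a simplification of the formulation described above, not the paper's exact algorithm: each state either advances with some probability or fails, and a state's aggregated reliability is the probability of eventually reaching the absorbing success state, computed by fixed-point iteration r <- P r. The transition data are invented.

```python
# Fixed-point / power-iteration sketch for aggregated reliability.

def aggregated_reliability(transitions, n_states, success, iters=200):
    # transitions: {state: [(next_state, probability), ...]}; any missing
    # probability mass is treated as failure (reliability contribution 0).
    r = [0.0] * n_states
    r[success] = 1.0
    for _ in range(iters):
        new_r = [0.0] * n_states
        new_r[success] = 1.0  # success is absorbing
        for s, outs in transitions.items():
            new_r[s] = sum(p * r[t] for t, p in outs)
        r = new_r
    return r

# States 0 -> 1 -> 2 (success); each hop succeeds with probability 0.9,
# so state 0's aggregated reliability is 0.9 * 0.9 = 0.81.
trans = {0: [(1, 0.9)], 1: [(2, 0.9)]}
rel = aggregated_reliability(trans, n_states=3, success=2)
```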
Functionally similar Web services are grouped into a community to facilitate and speed up Web service discovery. This paper presents a solution for keeping a Web service community highly available to users and applications. Here, highly available means that the community can continue providing services even when the master Web service (i.e., the coordinator) fails operationally. Our solution customizes a distributed election algorithm, the fast bully algorithm, to identify a temporary master Web service whenever the community's existing master suffers an operational failure. The identified temporary master then handles the community's service provision and management responsibilities, and the permanent master takes these responsibilities back when it resumes operation. To this end, we introduce some additional functions into the existing architectures of master and slave Web services to run the customized fast bully algorithm. Finally, a weather community is developed as a prototype to illustrate our ideas in practice.
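The core election idea can be shown in miniature. The sketch below is a toy reduction, not the fast bully algorithm's actual message exchange: among the community members that are still responsive, the one with the highest identifier becomes the (temporary) master, and mastership reverts when the original master recovers. The member names and identifiers are invented.

```python
# Toy bully-style election: the highest-ID responsive member wins.

def elect_master(members, alive):
    # members: {service_name: numeric_id}; alive: set of responsive names.
    candidates = {name: mid for name, mid in members.items() if name in alive}
    return max(candidates, key=candidates.get)

members = {"weather-a": 1, "weather-b": 2, "weather-c": 3}

# All members up: the permanent master (highest ID) coordinates.
master = elect_master(members, alive={"weather-a", "weather-b", "weather-c"})

# The master fails: the next-highest responsive member takes over temporarily.
temporary = elect_master(members, alive={"weather-a", "weather-b"})
```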
The significant advancement of mobile technologies has made it possible to access web services in a pervasive way, and semantic web service discovery seems to be the most promising approach to semantic matching. The number of web services with similar functionality is increasing rapidly, so differentiating the characteristics of the offered web services has become more crucial. Without considering non-functional properties (NFPs) such as user requirements and the quality standard of web services, discovery may return services irrelevant to users' needs in mobile computing. Mobile users may even invoke unusable web services because of device compatibility issues and a lack of quality-standard information in the service description. This paper proposes WSMO-M (Mobile), an enhancement of WSMO that describes NFPs as context and Quality of Web Service (QoWS) information for mobile computing environments. First, to annotate web service descriptions, the context and QoWS models are specified using the Web Service Modeling Ontology (WSMO). Semantic matchmaking and degree-of-match calculation are also presented to define the importance of non-functional properties in mobile computing during the discovery and selection of web services. Finally, the paper demonstrates the applicability of the enhancement through a simple case study.
E-Government has aggressively promoted the provision of government services. However, current e-Government systems cannot integrate their service channels into workflow processes very well. This paper presents a systematic workflow integration method for e-Government public service channels. There are three main public service channels: the phone service channel, the network service channel, and the field service channel. We first present a Web service based platform and application model for multi-channel e-Government services, and then propose a Web service platform based e-Government system architecture. The framework of e-Government service channels supporting e-Government services is then shown; it has four layers: a data layer, a service layer, a function layer, and an interface layer. The e-Government workflow process is then discussed in detail. After analyzing the public service channel workflows of e-Government based on Web service technology, we propose a mathematical definition of the workflow and a concrete construction method to achieve dynamic e-Government workflow processes, and the resulting model is analyzed and validated using Petri nets. The idea is to use e-Government to guide collaborative workflow management, with advanced workflow technology supporting the collaborative environment, so as to better meet the requirements of standard and efficient information technology. The service channel workflow of e-Government for declaring Hukou is given as an example, demonstrating that this method is clearly helpful for integrating e-Government public service channel workflows.
Nowadays there are many Web services on the network. These freely released Web services cannot be classified effectively, so Web service requesters waste a great deal of time searching for appropriate services. To speed up Web service discovery and enhance the precision of service search, this paper introduces a method that uses the Ontology Web Language (OWL) to classify Web services and thereby accelerate their discovery. Web service classification depends on matching methods that compare the service name, input and output parameters, and service description among the services to be classified. After applying the matching algorithms, we determine the position and meaning of each service in the ontology; querying the OWL ontology then finds the Web services that meet our needs. To validate our study, we design and implement a semantics-based Web service classification and discovery system in a multi-agent environment.
In this paper, we present an approach for Web service based distributed communication systems. Unlike conventional one-way Web services, which are based on a stateless request/response interaction pattern, communication systems typically require stateful, two-way, full-duplex interaction, and in many cases they need to establish an association or context in the form of a "session" before any message can be exchanged. We describe a system-level architecture for distributed communication systems based on two-way, full-duplex Web service interaction that supports stateful, session-based Web service transactions. In addition, we address the issue of enabling two-way Web service interaction across enterprise domains and firewalls, and describe a two-way Web service router gateway, TARGET. TARGET is a generic solution for two-way Web service interaction to traverse legitimately through NATs and strictly configured firewalls. Unlike a conventional Web service access gateway, TARGET is based on a novel combination of two-way SOAP message tunneling, a service local registry, and service routing to enable two-way Web services for distributed communication systems. A research Web service based distributed communication system has been developed that supports Web service based conferencing services across enterprise domains and firewalls. Our experimental results indicate that a fully Web service based distributed communication system is not only feasible but also desirable.
The use of services, especially Web services, has become common practice. Web services rely on standard communication protocols and simple broker-request architectures to facilitate the exchange of services, and this standardization simplifies interoperability. In the coming years, services are expected to dominate the software industry. With an increasing number of Web services being made available on the Internet, an efficient Web service composition algorithm would help integrate different services to provide a wider variety of services. In this paper, we provide a dynamic Web service composition algorithm with Petri net verification. Each Web service is described by the Web Service Definition Language (WSDL), and its interactions with other services are described by the Web Service Choreography Interface (WSCI). Our algorithm composes Web services using the information provided by these two descriptions. After composition, we verify that the composed Web service is deadlock-free by modeling it as a Petri net. We conduct a series of experiments to evaluate the correctness and performance of the composed Web service.
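The deadlock-freedom check can be illustrated on a tiny Petri net. The sketch below is a simplified stand-in for the verification step: it explores the reachability graph of a small net, and any reachable marking with no enabled transition that is not the designated final marking is reported as a deadlock. The net structures are invented examples.

```python
# Minimal Petri-net deadlock search by explicit reachability exploration.

def enabled(marking, pre):
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def find_deadlock(initial, transitions, final):
    # transitions: list of (pre, post) place -> token-count maps.
    seen, stack = set(), [initial]
    while stack:
        marking = stack.pop()
        key = tuple(sorted(marking.items()))
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(marking, pre, post)
                      for pre, post in transitions if enabled(marking, pre)]
        if not successors and marking != final:
            return marking  # stuck in a non-final marking: deadlock
        stack.extend(successors)
    return None  # deadlock-free

final = {"p0": 0, "p1": 0, "p2": 1}

# p0 -> p1 -> p2: a deadlock-free sequence of two transitions.
ok_net = [({"p0": 1}, {"p1": 1}), ({"p1": 1}, {"p2": 1})]
result = find_deadlock({"p0": 1, "p1": 0, "p2": 0}, ok_net, final)

# Dropping the second transition strands a token in p1: a deadlock.
broken_net = [({"p0": 1}, {"p1": 1})]
deadlocked = find_deadlock({"p0": 1, "p1": 0, "p2": 0}, broken_net, final)
```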
When numerous web services are available, it seems natural to reuse existing web services to create a composite web service. A pivotal problem of web service composition is how to model the input and output data dependencies of candidate web services so as to form composite web services efficiently. In a sequential web service invocation model, each web service must wait until the previous web service provides its output before it can be invoked, even if the two have no interdependency; such models are therefore very poor in resource utilization. Peer-to-peer networks provide interesting capabilities, such as fault tolerance and high scalability, which are not possible in traditional client/server architectures. JXTA holds tremendous promise for the P2P world: it defines a set of protocols that developers can use to build almost any P2P application. For P2P applications that perform many web service transactions, a sequential service invocation architecture may become a limiting factor. This research introduces a parallel service invocation model and a web service composition model to JXTA. An XML tree based notation is introduced to represent a composite web service in the JXTA peer environment. The model is implemented using Java, XML-RPC, Axis2, JXTA, and JXTA-SOAP technologies.
In this paper, we present WIP, the Web service initiation protocol for multimedia and voice communication over IP. WIP is an entirely Web service based communication protocol, consisting of a set of Web service operations for initiating and establishing converged (e.g., multimedia, IM, voice) communication services over IP. It inherits the principle of separating signaling and media transmission from SIP (the session initiation protocol), but relies on a single Web service stack to provide a full-featured communication signaling protocol. WIP opens a new paradigm of Web service based VoIP communication, which is extensible and can easily be integrated into end-to-end SOA solutions. The generic Web service approach used in WIP overcomes many limitations that would otherwise be difficult to address with the non-Web service based communication methods used today. WIP is based on two-way, full-duplex Web service interaction: communication signaling is established through Web service interactions, and media negotiation is modeled as a special Web service "event" subscription, which is fully extensible for various media needs. The signaling messages of WIP are encoded in standards-based SOAP message envelopes that can be carried by multiple transport protocols, including HTTP. WIP supports both P2P (peer-to-peer) and B2B (back-to-back) broker-mode communication services. A prototype research system has been implemented, and the results indicate that WIP, as a fully Web service based communication protocol, is both feasible and advantageous.
We provide a novel approach for specifying and relating the non-functional properties of distributed component Web services that can be used to adapt a composite Web service. Our approach uses distributed aspect-oriented programming (AOP) technology to model an adaptive architecture for Web service composition and execution. Existing Web service adaptation mechanisms are limited to the process of Web service choreography, in terms of Web service selection and invocation against pre-specified Service Level Agreement (SLA) constraints. Our system extends this idea by representing the non-functional properties of each Web service, both composite and component, via AOP. Hence our system models a relation function between the aspects of the composite Web service and the individual aspects of the component Web services. This enables mid-flight adaptation of the composite Web service, in response to changes in non-functional requirements, via suitable modifications to the individual aspects of the component Web services. From the end users' viewpoint, such upfront aspect-oriented modeling of non-functional properties enables on-demand composite Web service adaptation with minimal disruption in quality of service.
Quality-of-Service (QoS) is widely employed for describing the non-functional characteristics of Web services. Although the QoS of Web services has been investigated in many previous works, there is a lack of real-world Web service QoS datasets for validating new QoS-based techniques and models of Web services. To study the performance of real-world Web services and to provide reusable research datasets for promoting research on QoS-driven Web services, we conduct several large-scale evaluations of real-world Web services. First, addresses of 21,358 Web services are obtained from the Internet. Then, the invocation failure probability of 150 Web services is assessed by 100 distributed service users. After that, the response time and throughput of 5,825 Web services are evaluated by 339 distributed service users. Detailed experimental results are presented in this paper, and comprehensive Web service QoS datasets are publicly released for future research.
This paper proposes a model to quantify the reliability of web services. Based on their structure, web services can be classified into two kinds: atomic web services, which call no other web services, and composite web services, which consist of atomic web services. The model first evaluates the reliability of atomic web services from the consumers' perspective. The paper then points out that redundancy is a very effective approach to improving the reliability of composite web services, and discusses the reliability of composite web services. A method of computing the aggregate reliability of a composite web service from its structure information is introduced, under the assumption that the reliabilities of the atomic web services within it are available.
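The structure-based aggregation and the benefit of redundancy can be shown with two standard reliability formulas. The sketch below is a generic illustration under the stated assumption that atomic reliabilities are known, not the paper's specific method: sequential composition multiplies reliabilities, while a redundant (parallel) group fails only if every replica fails.

```python
# Standard series/parallel reliability aggregation sketch.

def sequential(*reliabilities):
    # A sequence succeeds only if every step succeeds.
    out = 1.0
    for r in reliabilities:
        out *= r
    return out

def redundant(*reliabilities):
    # A redundant group fails only if all replicas fail.
    fail = 1.0
    for r in reliabilities:
        fail *= (1.0 - r)
    return 1.0 - fail

plain = sequential(0.9, 0.9)                 # two atomic services in sequence
with_redundancy = sequential(redundant(0.9, 0.9), 0.9)  # first step replicated
```

Replicating the first step lifts the composite reliability from 0.81 to about 0.891, which is the quantitative sense in which redundancy helps.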
The advances of communication technologies and standardization efforts have dramatically increased the interactions between organizations in recent years, and Web services have become a de facto standard for organizations to provide information and services. There are two different perspectives for describing Web service composition: orchestration and choreography. This work focuses on the choreography model. While quite a few works verify a choreography model so as to alleviate problems such as deadlock, the verification of implementations against a choreography model has not been fully addressed. In this work, we propose an approach to verifying the conformance of a set of Web services to a given choreography model and pruning those candidate Web services that do not comply with it. The proposed approach is evaluated by simulating 10,000 execution sequences of composite Web services to show how it improves the effectiveness and efficiency of subsequent Web service selection. Preliminary experimental results show that our method improves the success rate of Web service selection by pruning unsuitable candidates at an early stage.
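The pruning idea can be illustrated by abstracting the choreography as a finite automaton over message labels and keeping a candidate service only if all of its observed execution sequences are accepted. This is an illustrative simplification, not the paper's verification technique; the automaton and traces are invented.

```python
# Toy conformance check of observed traces against a choreography automaton.

def conforms(fsm, start, finals, traces):
    # fsm: {(state, message): next_state}; a candidate conforms only if
    # every trace follows defined transitions and ends in a final state.
    for trace in traces:
        state = start
        ok = True
        for msg in trace:
            if (state, msg) not in fsm:
                ok = False
                break
            state = fsm[(state, msg)]
        if not ok or state not in finals:
            return False
    return True

# Choreography: an order must be requested, then paid, then shipped.
fsm = {("s0", "request"): "s1", ("s1", "pay"): "s2", ("s2", "ship"): "s3"}
good = conforms(fsm, "s0", {"s3"}, [["request", "pay", "ship"]])
bad = conforms(fsm, "s0", {"s3"}, [["request", "ship"]])  # skips payment
```

A candidate whose traces fail this check (like the one shipping before payment) would be pruned before selection, which is how early pruning raises the subsequent selection success rate.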
XML Web services, which have recently been used to implement SOA (Service Oriented Architecture), make it possible to build and execute business processes by dynamically calling services on the World Wide Web. To use these services, users must find relevant services from a collection of services dispersed on the web. We currently use UDDI, a distributed registry system for Web services, to find services. However, UDDI only supports exact keyword match and category-based queries over the UDDI data entries representing each Web service, so it is hard to get a ranked query result or to find alternate services when a current one is no longer useful or reachable. Moreover, UDDI does not use the WSDL definitions, which actually describe the service interfaces and message formats of Web services, as a target of queries. In this paper, we introduce a framework for XML Web service retrieval that solves the current problems of UDDI. Our system sits on top of UDDI, using UDDI data entries and WSDL files; it provides ranked query results for Web services and finds alternate Web services by using a similarity measure. In addition, we discuss related work and further features needed to improve the performance of this Web service retrieval framework.
Web services are emerging technologies that enable application-to-application communication and reuse of autonomous services over the Web. Recent efforts, such as OWL-S, model the semantics of Web services, including the capabilities of the service, the service interaction protocol, and the actual messages for service exchanges. However, there is a need to automate the discovery, selection and execution of OWL-S services. Further, a framework that meets the quality of service (QoS) requirements of ad hoc Internet-based services is rarely provided. In this paper, we propose a rule-based framework, called SemWebQ, which manages workflows composed of Semantic Web services. SemWebQ is capable of conducting QoS-based adaptive selection as well as dynamic binding and execution of Web services according to the semantics of the workflow, thereby rendering a resilient and adaptive Web-based service flow. A series of experiments performed on SemWebQ with real Web services confirms the effectiveness of the proposed framework with respect to adaptive selection and execution of Web services in Web-based workflows.
Web services are becoming prevalent nowadays, and finding desired Web services is an emergent and challenging research problem. In this paper, we present WSExpress (Web Service Express), a novel Web service search engine that expressively finds expected Web services. WSExpress ranks publicly available Web services not only by functional similarity to users' queries, but also by the non-functional QoS characteristics of the services. WSExpress provides three searching styles, which can adapt both to the scenario of finding an appropriate Web service and to that of automatically replacing a failed Web service with a suitable one. WSExpress is implemented in Java, and large-scale experiments employing real-world Web services are conducted. In total, 3,738 Web services (15,811 operations) from 69 countries are involved in our experiments. The experimental results show that our search engine can find Web services with the desired functional and non-functional requirements. Extensive experimental studies are also conducted on a well-known benchmark dataset consisting of 1,000 Web service operations to show the recall and precision performance of our search engine.
Nowadays, Web services are considered the de facto and most attractive distributed approach to application/service integration over the Internet. Web services can also operate within communities to improve their visibility and market share. In a community, Web services usually offer competing and/or complementing services. In this paper, we augment the community approach by defining a specific-purpose community to monitor Web services operating in any Web services community. This monitoring community consists of a set of Web services capable of observing other Web services. Clients, providers, as well as managers of communities can use the monitoring community to check whether a Web service is operating as expected. This paper defines the overall architecture of the monitoring community, the business model behind it, the rules and terms to be respected by its members, and the services it offers to its various classes of customers. The paper also presents promising experimental results obtained using the monitoring community.
Web service selection is one of the important steps in many Web service applications, such as Web service composition systems and UDDI registries. As more and more Web services become available on the Internet, there are often a number of Web services that functionally match a service request. QoS plays a crucial role in selecting among these Web services in terms of their quality. According to users' requirements on service quality, candidate Web services are ranked in order to find the best Web services. In many cases, the value of a QoS property may be difficult to define precisely, so fuzzy logic can be applied to represent imprecise QoS constraints. In addition, suitable ranking algorithms that can deal with fuzzy numbers have been proposed. In this paper, we present an overview of current research works that concentrate on developing QoS-based ranking algorithms using fuzzy logic. These works are summarized and compared in order to assess their benefits and limitations. Such analysis contributes to developing more complete solutions in ongoing work in the field of Web service discovery and selection.
With more and more Web services available on the Internet, users reuse these services in their own applications. Because Web services are delivered by third parties and hosted on remote servers, determining the trustworthiness of a Web service becomes a big problem. Many Web service trustworthiness evaluation approaches have been proposed; however, the trustworthy evidences used in these approaches are limited, and the proposed methods lack customizability and extensibility, which makes them difficult to apply. In this paper, we propose a lightweight propositional-logic-based Web service trustworthiness evaluation method that is customizable, extensible, and easy to apply in reality. First, we collect comprehensive trustworthy evidences from the Internet, including both objective evidences (e.g., QoS) and subjective evidences (e.g., reputation). Then we propose a propositional-logic-based Web service trustworthiness evaluation model, which is customizable and extensible, to capture users' trustworthiness requirements. Finally, the trustworthiness of all Web services is evaluated and returned to the users via a Web service search engine. To validate the effectiveness of our approach, two experiments are conducted on a large-scale real-world dataset. The experimental results show that our method is easy to use and can effectively evaluate Web service trustworthiness, which helps users to reuse Web services.
In our previous works, we identified a significant difference between active Web services, which help carry out an action (e.g., buying a ticket to next Saturday's performance at the theater), and informative Web services that merely search for information (e.g., querying some databases to get a list of available CDs). This paper presents a dynamic logic for accurately modeling this distinction among composite Web services. We use a logical formalism allowing a calculus on the preconditions and effects of complex Web services. On the one hand, this formalism shows that modeling active Web services does not reduce to database updates when resources are taken into account (e.g., the number of seats in the theater). On the other hand, it models an active Web service as a composite one: an informative service gathers information, a choice function is applied, and the result feeds the input of an updating Web service.
The Internet has created the foundation for a networked economy: an extended business community in which vendors, partners, and customers interact and collaborate. Web services are self-contained, self-describing, modular applications that can be published, located, and invoked across the Web. One of the most important and promising advantages of Web service technology is the possibility of combining and linking existing services to create new Web processes according to given requirements. We present the motivation for Web service composition. A Web service is a software component invoked over the Web via an XML message that follows the Simple Object Access Protocol (SOAP), a simple XML-based protocol that lets applications exchange information over HTTP and transport messages using open standard protocols. Web services are based on distributed technology and provide standard means of interoperating between different software applications across and within organizational boundaries through the use of XML. Web services are offered with different Quality of Service (QoS) levels, so they should be selected dynamically during composition. Optimized selection is achieved by minimizing a certain objective function, which, according to our study, should always be based on user requirements. We introduce four styles of objective function. Sometimes a single service alone does not meet the user's needs; in that case, it is necessary to compose several services in order to achieve the user's goal. This proposal discusses the QoS-aware composition of Web services. The work is based on the assumption that for each task in a workflow a set of alternative Web services with similar functionality is available, and that these Web services have different QoS parameters and costs.
This leads to the general optimization problem of how to select a Web service for each task so that the overall QoS and cost requirements of each composition are satisfied. Service-oriented architecture (SOA) provides a scalable and flexible framework for service composition. Service composition algorithms play an important role in selecting services from different providers to reach desirable QoS levels according to the performance requirements of composite services, and thus improve customer satisfaction.
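The per-task selection problem described above can be sketched as a weighted-sum minimization over normalized QoS attributes. The service names, attribute values, and weights below are illustrative assumptions, not data from any of the cited papers:

```python
# Sketch of QoS-aware selection: for each workflow task, pick the
# candidate minimizing a weighted objective over normalized response
# time and cost. All names and numbers here are hypothetical.

def select_services(tasks, w_time=0.6, w_cost=0.4):
    """tasks: {task: [(name, response_time_ms, cost_per_call), ...]}"""
    plan = {}
    for task, candidates in tasks.items():
        max_t = max(c[1] for c in candidates)
        max_c = max(c[2] for c in candidates)
        # Normalize each attribute to [0, 1]; lower objective is better.
        plan[task] = min(
            candidates,
            key=lambda c: w_time * c[1] / max_t + w_cost * c[2] / max_c,
        )[0]
    return plan

tasks = {
    "payment": [("PayFast", 120, 0.05), ("PaySecure", 300, 0.01)],
    "shipping": [("ShipNow", 80, 0.10), ("ShipCheap", 200, 0.02)],
}
print(select_services(tasks))
```

Changing the weights shifts the plan toward the cheaper or the faster candidates, which is one simple way to encode the user-requirement-driven objective functions the abstract mentions.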
This paper investigates techniques for supporting individual users in choosing appropriate Web services according to their specific needs and in accessing their favorite Web services. It also examines the provision of visualizations of Web services that can be made available to service consumers as well as to service providers who promote their Web services. To achieve these goals, we propose a three-layer architecture: a data layer for handling measurements of the quality attributes of Web services; a Web service middleware processing layer, which computes quality-attribute weights for ranking Web services; and finally a visualization layer for monitoring the quality of Web services. The proposed architecture is implemented and tested on real-world Web services drawn from the QWS database of Web services. Our Web services visualization technique is more convenient and user-friendly than current basic text-based Web service discovery techniques.
With the surge of Service-Oriented Architecture (SOA) and Web services, service discovery has become increasingly crucial. Public Web services that are available "on the Web" provide unlimited value for a great number of online service consumers and developers. However, the public UDDI Business Registry, the primary service discovery mechanism over the Internet, has been shut down permanently since 2006. This has raised serious problems for public Web service discovery. In this paper, we propose a WSDL-based public Web service discovery approach. It provides a simple-to-use yet effective Web service discovery mechanism that can retrieve relevant Web services directly from the Internet to satisfy various requirements. The major contributions of this paper include: (1) a review of the state-of-the-art WSDL-based Web service discovery approaches; (2) unique WSDL crawling and manipulation algorithms catering to the Vector Space Model; and (3) a proof-of-concept prototype, an online Web services search engine, based on the proposed approach.
Based on real development experience, this paper presents a collection of design techniques for building enterprise Web services. By applying these techniques to Web service development, not only does development gain reusability and productivity, but the Web services also gain agility and compatibility. Enterprise Web services require a high grade of competency in designing Web service contracts. A Web service contract formalizes an agreement between the Web service provider and consumer, in the form of WSDLs, service schemas and policies. Though the contract-first method offers great potential for dealing directly with contracts, and a number of articles have been published on designing Web services and XML schemas, it is still hard for developers to find cookbooks or guidelines concentrated on designing Web service contracts with the contract-first method. To fill this gap, a set of design techniques is introduced and deployed in practice, incorporating best practices scattered across the Web services community. These techniques cover most of the key aspects of Web services, including consolidating service schemas in line with business entities, constructing coarse-grained namespaces, applying versioning over WSDLs and service schemas, and writing fine-grained filters with contracts.
One of the interesting aspects of the Web 2.0 evolution is the wide availability of various Web applications as APIs or Web services. These APIs expose informational services on the Web and take many forms of remote invocation of functions using standard Web protocols and XML for data representation, e.g., REST, SOAP/WSDL, XML-RPC, and other approaches. The services (or APIs) are also usually accompanied by user-facing Web applications for human consumption. Canonical examples are Google Maps, Yahoo! Flickr and del.icio.us, EVDB's Eventful application and API, Amazon.com's S3, ECS, Alexa, and many others. The Ruby programming language and its Rails framework are ideal for programming Web applications and services in Web 2.0. Ruby's modern and dynamic features make it an excellent language for rapid prototyping and integration of various Web services. Rails' superb support for rapid Web application development, database access, and AJAX makes it well suited for creating front ends and back ends for the next generation of Web applications and services. In this tutorial we take a hands-on deep dive into the Ruby and Rails platform and learn how they can be used to: (1) create Web applications backed by a relational database, (2) consume Web services, (3) create and deploy APIs or Web services, and (4) mash up existing Web services and applications. No a priori knowledge of Ruby or Rails is required, although some programming in a modern OO language and Web application development experience are a definite plus.
Web service technologies are becoming increasingly important for integrating systems and services. There is much activity and interest around standardization and usage of Web service technologies. The unified modeling language (UML) and the model driven architecture (MDA)™ provide a framework that can be applied to Web service development. We describe a model-driven Web service development process, where Web service descriptions are imported into UML models; integrated into composite Web services; and the new Web service descriptions are exported. The main contributions are conversion rules between UML and Web services described by Web service description language (WSDL) documents and XML schema.
A Web service runtime is an important infrastructure middleware for service-based applications. It processes exchanged messages according to Web service protocols, and correct implementation of these protocols is critical for ensuring the reliability of the runtime. In this paper, we first introduce a Service-Oriented Description Language (SODL) to precisely and concisely describe the message-processing logic of Web service protocol implementations. Then, we propose a formal model for a verifiable Web service runtime, named FSM4WSR, based on Estelle (an ISO formal description standard). FSM4WSR uses modules and channels to capture the essential components of the runtime architecture. Furthermore, the internal behaviors of each module are formally described using a combination of the extended finite-state machine and SODL. Based on FSM4WSR, we automatically generate the Web service protocol implementations and construct a verifiable Web service runtime system, named XServices SODL Runtime.
With the increasing presence and adoption of Web services on the World Wide Web, Quality of Service (QoS) is becoming important for describing the non-functional characteristics of Web services. In this paper, we present a collaborative filtering approach for predicting the QoS values of Web services and making Web service recommendations by taking advantage of the past usage experiences of service users. We first propose a user-collaborative mechanism for collecting past Web service QoS information from different service users. Then, based on the collected QoS data, a collaborative filtering approach is designed to predict Web service QoS values. Finally, a prototype called WSRec is implemented in Java and deployed to the Internet for conducting real-world experiments. To study the QoS value prediction accuracy of our approach, 1.5 million Web service invocation results are collected from 150 service users in 24 countries on 100 real-world Web services in 22 countries. The experimental results show that our algorithm achieves better prediction accuracy than other approaches. Our Web service QoS dataset is publicly released for future research.
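The user-based collaborative filtering idea above can be sketched in a few lines: a user's missing QoS value for a service is predicted from the observations of similar users. This is a minimal illustration with a toy matrix and cosine similarity, not the actual WSRec algorithm:

```python
# Minimal user-based collaborative filtering for QoS prediction
# (a sketch in the spirit of the approach described above).
# Rows are users, columns are services; None = not yet invoked.

def predict(qos, user, service, k=2):
    """Predict qos[user][service] from the k most similar users."""
    def similarity(u, v):
        # Cosine similarity over services both users have observed.
        common = [s for s in range(len(qos[u]))
                  if qos[u][s] is not None and qos[v][s] is not None]
        if not common:
            return 0.0
        num = sum(qos[u][s] * qos[v][s] for s in common)
        du = sum(qos[u][s] ** 2 for s in common) ** 0.5
        dv = sum(qos[v][s] ** 2 for s in common) ** 0.5
        return num / (du * dv) if du and dv else 0.0

    neighbors = [(similarity(user, v), v) for v in range(len(qos))
                 if v != user and qos[v][service] is not None]
    top = sorted(neighbors, reverse=True)[:k]
    total = sum(sim for sim, _ in top)
    if total == 0:
        return None
    # Similarity-weighted average of the neighbors' observed values.
    return sum(sim * qos[v][service] for sim, v in top) / total

# Toy response times (ms); user 0 has never invoked service 2.
qos = [
    [100, 200, None],
    [110, 190, 330],
    [500, 900, 700],
]
print(predict(qos, user=0, service=2))
```

Real systems weight neighbors more carefully (e.g., Pearson correlation with significance weighting), but the weighted-average structure is the same.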
The sustainable success of service-oriented applications relies on the ability to manage possible service failures. Substituting a failed service with an equivalent service is unavoidable when recovering an application suspended due to the failure of a constituent service. In this paper, we report a rule-based approach to Web service substitution that secures the availability of services. Availability provides delivery assurance for each Web service so that Simple Object Access Protocol (SOAP) messages cannot be lost undetectably, especially in a Web service composition. The rules are written in the Semantic Web Rule Language and are a formal representation of a categorization-based scheme to identify exchangeable Web services. This scheme not only tackles the issue of heterogeneity of domain ontologies in describing Web services; it also adapts itself by learning newly discovered ontology instances. A technical framework for Web service substitution using rule-based deduction is demonstrated. Experiments on service substitution based on the proposed framework achieve a best precision of 85%.
The NERSC Web Toolkit (NEWT) brings High Performance Computing (HPC) to the web through easy-to-write web applications. Our work seeks to make HPC resources more accessible and useful to scientists who are more comfortable with the web than with command-line interfaces. The effort required to get a fully functioning web application is decreasing, thanks to Web 2.0 standards and protocols such as AJAX, HTML5, JSON and REST. We believe HPC can speak the same language as the web by leveraging these technologies to interface with existing grid technologies. NEWT presents computational and data resources through simple transactions against URIs. In this paper we describe our approach to building web applications for science using a RESTful web service. We present the NEWT web service and describe how it can be used to access HPC resources in a web browser environment using AJAX and JSON. We discuss our REST API for NEWT, and address specific challenges in integrating a heterogeneous collection of backend resources under a single web service. We provide examples of client-side applications that leverage NEWT to access resources directly in the web browser. The goal of this effort is to create a model whereby HPC becomes easily accessible through the web, allowing users to interact with their scientific computing, data and applications entirely through such web interfaces.
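The "simple transactions against URIs" pattern above can be illustrated in a few lines: a resource (here, a job in a batch queue) is addressed by a predictable URI and its state comes back as JSON. The base URL, path layout, and response fields below are hypothetical, not the actual NEWT API:

```python
# Sketch of a REST-style client interaction: resources are plain URIs
# and responses are JSON. Endpoint and field names are illustrative.
import json
from urllib.parse import urljoin

BASE = "https://example.org/newt/"  # hypothetical service root

def job_status_uri(machine, job_id):
    # Each resource is addressed by a simple, predictable URI.
    return urljoin(BASE, f"queue/{machine}/{job_id}")

# A browser client would GET this URI via AJAX; here we parse a canned
# JSON reply instead of making a live HTTP call.
reply = '{"jobid": "42", "status": "RUNNING", "timeuse": "00:03:17"}'
job = json.loads(reply)

print(job_status_uri("hopper", job["jobid"]))
print(job["status"])
```

Because both sides only agree on URIs and JSON, the same resource can be consumed from JavaScript in the browser or from any scripting language on the server.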
Industry's increasing demand for business-to-business and application-to-application communication has led to a growing requirement for service-oriented architecture. Web services are based on service-oriented architecture, enabling application-to-application communication over the Internet and easy access to heterogeneous applications and devices. As Web services proliferate and become more sophisticated and interdependent, the issues of their publication and discovery become of utmost importance. This paper proposes the design of a combined discovery and publishing engine for Web services with a refined searching mechanism that uses service-rating techniques for efficient and effective Web service discovery within an optimal response time. We have used data-mining techniques to narrow down the search space in UBRs. Further, the proposed engine can publish or search for a Web service across multiple UBRs. In addition, an extended design of the service registry is proposed to store service-rating data along with the service information. The engine publishes Web services in a UBR following a classification scheme and performs a validation test on discovered Web services. Service reviews and ratings are utilized to help a user select an appropriate service.
The proliferation of web services opens an unleashed opportunity for service providers to compete to satisfy a request. Since numerous functionally similar web services are readily available, identifying the web service pertinent to a client's requirement is a challenge. Furthermore, recommending the pertinent web service while not missing relevant web services are two complex issues to be addressed in the service discovery process. Since keyword search does not capture the underlying semantics and cannot articulate the search query accurately, operation-based search has been introduced. The discovery process matures when the search is incorporated with the client's QoS requirements. To alleviate the aforementioned difficulties, a QoS-driven web service discovery approach has been introduced. The gist of the approach is to assist the user in searching for the pertinent web service based on both functional and non-functional evaluation. The web services' metadata are extracted from the WSDL files and stored in a service pool. The functional and non-functional evaluation is performed by the proposed algorithm, and the aggregated value is stored in the service pool. The services that match the user query are then provided to the client.
In the world of distributed web services, the environment and its features are vital to satisfying users' queries for communicating globally and virtually with the help of DWS (Distributed Web Servers). An organization's datasets can be considered distributed as a service when their components are interconnected in a relatively loose fashion, as in home networks. Sharing techniques and data globally across the Internet requires strong cohesion, which raises dominant issues throughout the global network. In this paper we propose a Distributed Web Service Evaluator (DWSE) that helps evaluate the features of a web service in a DWS environment using traditional testing techniques; this kit helps clients and their dependent tools integrate automatically into the DWS world. It combines SOAP (Simple Object Access Protocol), WSDL (Web Service Description Language) and XML (Extensible Markup Language) in a unique, specific structure to design a lightweight application in the form of a simulator test kit supervised by DWSE. The proposed test kit helps users and providers resolve many service conflicts in a timely and consistent manner.
Today, Web services are pervasive and omnipresent on the Internet and within enterprises. Even though massive Web services specification development is underway, early adoption by developers and tool vendors is becoming a need. The potential growth of this technology is highly predictable because of its universal acceptance and use among the developer community. The industry may expect to grow enormously based on support from the various communities that benefit from this technology. Research is carried out in various standards bodies on aspects of Web services such as definition, architecture, security, discovery, and interoperability. As we are committed to the success of this technology, we need to research service-oriented containers that make the potential of Web services more constructive. This paper proposes a container for Web services that can manage and monitor the state and behavior of Web services, addressing the quality of service (QoS) factor for Web services.
Much effort is being made by the IT industry towards the establishment of a Web services infrastructure and the refinement of its component technologies to enable the sharing of heterogeneous application resources. The traditional roles of the service provider, service requestor and service broker, and their interactions, are now being improved upon to enable more effective services. The implementation of the Web service broker is currently limited to being an interface to the service repository for service registration, browsing and/or programmatic access. In this work, we have extended the functionality of the Web services broker to include constraint specification and processing, which enables the broker to find a good match between a service provider's capabilities and a service requestor's requirements. This paper presents the extension made to the Web Services Description Language to include constraint specifications in service descriptions and requests, the architecture of a constraint-based broker, the constraint matching technique, some implementation details, and preliminary evaluation results.
Web service failures or degradations directly cause operational inefficiencies and financial losses in Web service-based business processes (WSBPs). In today's competitive Web service market, a considerable number of functionally similar Web services offered by different providers are differentiated by competitive QoS levels and pricing structures. Consequently, dynamic and optimal Web service selection is a significant challenge for business organizations that run such WSBPs, and a cost-effective and efficient Web service selection approach is becoming an important necessity for them. Current Web service selection approaches offer a "one-size-fits-all" solution, i.e., one optimal service selection for all running BP instances. However, such solutions are neither efficient nor cost-effective given that the service levels of WSBPs are associated with customer classes/profiles, e.g., Gold, Silver or Bronze. Therefore, we propose a group-based, "one-size-does-not-fit-all" Web service selection approach that optimizes multiple QoS criteria and differentiates the service selection based on the BP customers' classes/profiles. Our approach shows very good improvement over existing "one-size-fits-all" approaches: a 65% average cost reduction and a 30% utility value improvement.
Web service discovery remains a major challenge despite enhanced methods based on information retrieval techniques (word sense disambiguation, stemming, etc.), domain knowledge and ontologies. Unfortunately, the proposed approaches are complex in practice. Despite the addition of extra information to WSDL documents, discovering a required web service is like finding a needle in a haystack. Thus, we propose in this paper a seamless way to discover a more appropriate web service independently of its type (simple, composed or semantic). In fact, we consider the usage of the service and, more particularly, users' requirements, and we invoke the service using a generated GUI. This approach, concerning a Web service search engine based on popularity, is implemented and validated experimentally.
A Web service is programmable application logic accessible using standard Internet protocols. Web services combine the best aspects of component-based development and the Web. Like components, Web services represent black-box functionality that can be reused without knowing how the service is implemented. Unlike current component technologies, Web services are not accessed via an object-model-specific protocol. In this paper, we suggest an architecture development model and a Web integration architecture to develop Web service security embodied by the supplier on a service-oriented architecture (SOA). We also explore and expose the various possibilities by which components may be well integrated with applications using Web services. This paper focuses on integrating component-based architecture modeling and Web services to enable the development and usage of facade and backside components through the Web by dynamically creating Web service security for the functionalities of the components.
Existing Web services standards consider data primarily at the level of input and output specifications, with the major focus on functional aspects of interactions. The majority of applications rely on data sources, but such data sources are not part of the Web service specifications and cannot be accessed directly by clients. The fact that data are treated independently, or as second-class citizens, severely limits the reuse, flexibility, customization and integration options of current Web services. In this paper, we suggest extending the WS specifications by introducing a data-centric Web services model that integrates the functional and data perspectives in one coherent framework. The approach is based on Business Artifacts, and in particular on the declarative, modular Guard-Stage-Milestone (GSM) model. We introduce a Web Data- and Artifact-centric Service (W-DAS) model using GSM at its core which, in addition to the usual application-specific WS operations, defines a set of data access interfaces including CRUD operations, an artifact retrieval interface for querying, filtering and sorting data, and operations for arbitrary custom-defined ad hoc run-time queries. We discuss W-DAS publish-subscribe mechanisms and implementation.
Semantic Web service matchmaking is the process of searching the space of possible matches between demands and supplies to find the best available ones. To achieve this, a semantic Web service matchmaking framework is one of the absolutely necessary components of a service-oriented architecture. This paper proposes a matchmaking framework based on Description Logic for semantic Web services described by the RDF4S model, a novel semantic Web service description that consists of three layers: a service-oriented knowledge base, a semantic description model and a semantic description language. We describe the service description and the service query conditions using the ontologies included in the service-oriented knowledge base. The service resource specification is described according to Description Logic syntax, and a Description Logic reasoner is used to compare service descriptions. By representing the semantics of service descriptions, the matchmaker enables the behavior of an intelligent agent to approach more closely that of a human user trying to locate suitable Web service resources.
The term Web services describes a standardized way of integrating Web-based applications using the XML, SOAP, WSDL, and UDDI open standards over an Internet protocol backbone. XML is used to tag the data, SOAP is used to transfer the data, WSDL is used to describe the available services, and UDDI is used to list which services are available. Used primarily as a means for businesses to communicate with each other and with clients, Web services allow organizations to exchange data without intimate knowledge of each other's IT systems behind the firewall. Unlike traditional client/server models, such as a Web server/Web page system, Web services do not provide the user with a GUI. Web services instead share business logic, data, and processes through a programmatic interface across a network. The applications interface with each other, not with the users. Developers can then add the Web service to a GUI (such as a Web page or an executable program) to offer specific functionality to users.
Web services allow different applications from different sources to communicate with each other without time-consuming custom coding, and because all communication is in XML, Web services are not tied to any one operating system or programming language. For example, Java can talk with Perl, and Windows applications can talk with UNIX applications.
Many organizations use multiple software systems for management. Different software systems often need to exchange data with each other, and a web service is a method of communication that allows two software systems to exchange this data over the internet. The software system that requests data is called a service requester, whereas the software system that would process the request and provide the data is called a service provider. Different software might be built using different programming languages, and hence there is a need for a method of data exchange that doesn't depend upon a particular programming language. Most types of software can, however, interpret XML tags. Thus, web services can use XML files for data exchange.
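The round trip described above can be sketched in a few lines: one system serializes a record to XML before sending it, and the receiving system parses the tags back into its own data structures regardless of the sender's language. This is an illustrative sketch only; the element names (`employee`, `id`, `name`) are hypothetical and belong to no real service contract.

```python
import xml.etree.ElementTree as ET

def to_xml(record):
    """Serialize a dict into an XML string a service requester can consume."""
    root = ET.Element("employee")  # hypothetical record type
    for key, value in record.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def from_xml(payload):
    """Parse the XML payload back into a dict on the receiving side."""
    root = ET.fromstring(payload)
    return {child.tag: child.text for child in root}

payload = to_xml({"id": "42", "name": "Alice"})
print(from_xml(payload))  # {'id': '42', 'name': 'Alice'}
```

Because both sides agree only on the XML structure, the sender and receiver can be written in entirely different languages.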
Rules for communication between different systems need to be defined, such as:
• How one system can request data from another system
• Which specific parameters are needed in the data request
• What the structure of the returned data will be. Normally, data is exchanged in XML files, and the structure of the XML file is validated against an .xsd file.
• What error messages to display when a certain rule for communication is not observed, to make troubleshooting easier.
All of these rules for communication are defined in a WSDL (Web Services Description Language) file, which has the extension .wsdl.
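To make the WSDL file concrete, here is a deliberately minimal, hand-written fragment and a sketch of how a client toolkit might read it to discover which operations the service defines. The service and operation names (`StockService`, `GetQuote`, `ListSymbols`) are hypothetical, and a real WSDL file also declares messages, bindings, and endpoint addresses that are omitted here.

```python
import xml.etree.ElementTree as ET

# Minimal illustrative WSDL fragment (real files carry much more detail).
WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="StockService">
  <portType name="StockPort">
    <operation name="GetQuote"/>
    <operation name="ListSymbols"/>
  </portType>
</definitions>"""

NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}
root = ET.fromstring(WSDL)

# A client-side tool would walk the portType to list the callable operations.
operations = [op.get("name") for op in root.findall(".//wsdl:operation", NS)]
print(operations)  # ['GetQuote', 'ListSymbols']
```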
Web services architecture: the service provider publishes a WSDL file to UDDI. The service requester contacts UDDI to find out who provides the data it needs, and then contacts the service provider using the SOAP protocol. The service provider validates the service request and sends the structured data in an XML file, using the SOAP protocol. This XML file is validated again by the service requester using an XSD file.
A directory called UDDI (Universal Description, Discovery and Integration) defines which software system should be contacted for which type of data. So when one software system needs a particular report or piece of data, it goes to the UDDI and finds out which other system it can contact to receive that data. Once the software system finds out which other system it should contact, it contacts that system using a special protocol called SOAP (Simple Object Access Protocol). The service provider system first validates the data request by referring to the WSDL file, and then processes the request and sends the data over the SOAP protocol.
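The SOAP message itself is just XML wrapped in a standard Envelope/Body structure. The sketch below builds such a request; the operation and parameter names (`GetQuote`, `symbol`) are hypothetical, and a real client would also set HTTP headers (e.g. `Content-Type: text/xml`) and POST the envelope to the provider's endpoint, which is not shown here.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_request(operation, params):
    """Wrap an operation call in the SOAP Envelope/Body structure."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, operation)  # hypothetical operation element
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

request = soap_request("GetQuote", {"symbol": "ACME"})
print(request)
```

The provider parses the Body, checks it against the contract in its WSDL, and returns a similarly wrapped response.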
Automated tools can aid in the creation of a web service. For services using WSDL, it is possible to either automatically generate WSDL for existing classes (a bottom-up model) or to generate a class skeleton given existing WSDL (a top-down model).
• A developer using a bottom-up model writes implementing classes first (in some programming language), and then uses a WSDL generating tool to expose methods from these classes as a web service. This is simpler to develop but may be harder to maintain if the original classes are subject to frequent change.
• A developer using a top-down model writes the WSDL document first and then uses a code generating tool to produce the class skeleton, to be completed as necessary. This model is generally considered more difficult but can produce cleaner designs and is generally more resistant to change. As long as the message formats between sender and receiver do not change, changes in the sender and receiver themselves do not affect the web service. The technique is also referred to as contract-first, since the WSDL (the contract between sender and receiver) is the starting point.
• Representational state transfer (REST) is a way to create, read, update or delete information on a server using simple HTTP calls. It is an alternative to more complex mechanisms like SOAP, CORBA and RPC. A REST call is simply an HTTP request to the server.
• More abstractly, REST is a software architectural style consisting of a coordinated set of architectural constraints applied to components, connectors, and data elements, within a distributed hypermedia system. REST ignores the details of component implementation and protocol syntax in order to focus on the roles of components, the constraints upon their interaction with other components, and their interpretation of significant data elements.
• The term representational state transfer was introduced and defined in 2000 by Roy Fielding in his doctoral dissertation at UC Irvine. REST has been applied to describe desired web architecture, to identify existing problems, to compare alternative solutions, and to ensure that protocol extensions would not violate the core constraints that make the web successful. Fielding used REST to design HTTP 1.1 and Uniform Resource Identifiers (URIs). The REST architectural style is also applied to the development of web services as an alternative to other distributed-computing specifications such as SOAP.
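The create/read/update/delete mapping that REST builds on can be sketched without a real HTTP server: the dispatch table below maps each HTTP verb on a resource path to the matching action against an in-memory store. The paths and payloads are made up for illustration; a real service would run behind an actual HTTP stack.

```python
# In-memory resource store standing in for a server-side database.
store = {}

def handle(method, path, body=None):
    """Dispatch an HTTP-style call to the matching CRUD action."""
    if method == "POST":      # create a resource
        store[path] = body
        return 201, body
    if method == "GET":       # read a resource
        return (200, store[path]) if path in store else (404, None)
    if method == "PUT":       # update (or create at a known URI)
        store[path] = body
        return 200, body
    if method == "DELETE":    # delete a resource
        store.pop(path, None)
        return 204, None
    return 405, None          # method not allowed

handle("POST", "/users/1", {"name": "Alice"})
print(handle("GET", "/users/1"))  # (200, {'name': 'Alice'})
```

Each call carries everything the "server" needs (verb, path, body), mirroring REST's stateless-interaction constraint.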