For an Inplant/Internship training request, please download the registration form, fill in the details, and send it back to kaashiv.info@gmail.com

Semantic-Aware Metadata Organization Paradigm in Next-Generation File Systems

KaaShiv InfoTech, Number 1 Inplant Training Experts in Chennai.


IEEE TITLE


Semantic-Aware Metadata Organization Paradigm in Next-Generation File Systems


IEEE ABSTRACT


Existing data storage systems based on the hierarchical directory-tree organization do not meet the scalability and functionality requirements of exponentially growing data sets and increasingly complex metadata queries in large-scale, exabyte-level file systems with billions of files. This paper proposes a novel decentralized semantic-aware metadata organization, called SmartStore, which exploits the semantics of files' metadata to judiciously aggregate correlated files into semantic-aware groups using information retrieval tools. The key idea of SmartStore is to limit the search scope of a complex metadata query to a single group, or a minimal number of semantically correlated groups, and thus avoid or alleviate brute-force search over the entire system. The decentralized design of SmartStore improves system scalability and reduces query latency for complex queries (including range and top-k queries). Moreover, it is also conducive to constructing semantic-aware caching and serving conventional filename-based point queries. We have implemented a prototype of SmartStore, and extensive experiments based on real-world traces show that SmartStore significantly improves system scalability and reduces query latency compared with database approaches. To the best of our knowledge, this is the first study on the implementation of complex queries in large-scale file systems.
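The abstract above describes the grouping idea only at a high level; a minimal sketch of it, with invented attribute names and a deliberately crude grouping rule, might look like this:

```python
# Sketch of SmartStore's core idea: aggregate files with correlated
# metadata into groups, then answer a range query by examining only
# the groups whose summary (bounding box) intersects the query.
# Attribute names and the grouping rule are illustrative assumptions.

def make_groups(files, size_bucket=1000):
    """Group files by a coarse correlation key (here: size bucket)."""
    groups = {}
    for name, size, mtime in files:
        key = size // size_bucket
        groups.setdefault(key, []).append((name, size, mtime))
    return groups

def group_summary(members):
    """Per-group bounding box over (size, mtime) metadata."""
    sizes = [s for _, s, _ in members]
    times = [t for _, _, t in members]
    return (min(sizes), max(sizes), min(times), max(times))

def range_query(groups, size_lo, size_hi, t_lo, t_hi):
    """Visit only groups whose summary intersects the query range."""
    hits, visited = [], 0
    for members in groups.values():
        s_lo, s_hi, g_lo, g_hi = group_summary(members)
        if s_hi < size_lo or s_lo > size_hi or g_hi < t_lo or g_lo > t_hi:
            continue                      # pruned: no brute-force scan here
        visited += 1
        hits += [n for n, s, t in members
                 if size_lo <= s <= size_hi and t_lo <= t <= t_hi]
    return hits, visited

files = [("a.log", 120, 5), ("b.log", 180, 7),
         ("big.iso", 9000, 6), ("c.tmp", 150, 40)]
hits, visited = range_query(make_groups(files), 100, 200, 0, 10)
print(sorted(hits), visited)   # only the small-file group is scanned
```

The pruning step is what limits the search scope: the large-file group is rejected from its summary alone, without touching its members.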


Next Generation Semantic File Container with Inherent Indexer




SAMPLE PROGRAM

KAASHIV INFOTECH

SCREENSHOT



RELATED URLS FOR REFERENCE



[1] Performance debugging of parallel and distributed embedded systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=847858&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

Validation, a crucial stage in the development cycle of embedded systems, is normally carried out using static analysis based on scheduling techniques. In high-performance embedded systems, where several tasks with high computing requirements operate on input and output signals with high sampling rates, parallel and distributed processing is a valuable design alternative for meeting real-time constraints. When the validation of parallel and distributed embedded systems is considered, many simplifications are made to keep the analysis tractable. This means that even if the system can be statically validated, its real behaviour in execution may differ enough from its theoretical behaviour to make it invalid. Thus, conservative designs that lead to over-dimensioned systems with partially wasted resources are commonly adopted. Although static analysis is the only alternative for critical embedded systems, where fulfillment must always be guaranteed, dynamic analysis, based on measurement, is an interesting alternative for validating non-critical embedded systems. This paper describes a methodology for performance debugging of parallel and distributed embedded systems with non-critical end-to-end deadlines. The methodology is based on measuring a prototype of the system in execution and is supported by a behavioural model.



[2] Specifying parallel and distributed systems in Object-Z: the lift case study



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=596834&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

There has been an increasing emphasis on formality in software system specification in the last few years. A number of standards bodies are recommending the use of formal notations for specifying software systems. Parallel and distributed systems have their own complex features, such as the concurrent interactions between system components, the reactive nature of the systems, and the various message-passing schemes between components. Object-Z is an extension of the Z language designed specifically to facilitate specification in an object-oriented style. Because parallel and distributed systems are typically complex, the extra structuring afforded by the various Object-Z modelling constructs (i.e. the class and object containment constructs, and the various composite operation expressions) enables the hierarchical relationships and the communication between system components to be succinctly specified. Object-Z history invariants allow system temporal properties to be specified as well. The use of Object-Z in the specification of parallel and distributed systems is demonstrated by presenting a case study based on a multi-lift system. To enhance the understandability of the formal model, OMT notation is used to capture the static structure of the system, and a finite state machine diagram is used to highlight the system behavior.


[3] Transient performance model for parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1316133&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

In studying or designing parallel and distributed systems one should have available a robust analytical model that includes the major parameters that determine the system performance. Jackson networks have been very successful in modeling parallel and distributed systems. However, Jackson networks have their limitations. In particular, the product-form solution of Jackson networks assumes steady state and exponential service centers with certain specialized queueing discipline. In this paper, we present a performance model that can be used to study the transient behavior of parallel and distributed systems with finite workload. When the number of tasks to be executed is large enough, the model approaches the product-form of Jackson networks (steady state solution). We show how to use the model to analyze the performance of parallel and distributed systems. We also use the model to show to what extent the product-form solution of Jackson networks can be used.
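The product-form solution referenced above can be made concrete with a tiny open Jackson network; the arrival rates, service rates, and routing matrix below are invented for illustration:

```python
# Steady-state (product-form) solution of a small open Jackson network:
# solve the traffic equations lam = gamma + lam * R by fixed-point
# iteration, then treat each station as an independent M/M/1 queue.
# The arrival rates, service rates, and routing matrix are made up.

gamma = [1.0, 0.0]            # external arrival rate at each station
mu    = [2.0, 1.5]            # service rate at each station
R     = [[0.0, 0.5],          # R[i][j]: fraction routed from i to j;
         [0.2, 0.0]]          # the remaining flow leaves the network

# Traffic equations: lam_j = gamma_j + sum_i lam_i * R[i][j]
lam = gamma[:]
for _ in range(200):          # converges geometrically for stable nets
    lam = [gamma[j] + sum(lam[i] * R[i][j] for i in range(2))
           for j in range(2)]

rho = [lam[i] / mu[i] for i in range(2)]   # utilization per station
L   = [r / (1 - r) for r in rho]           # mean number in each station
print([round(x, 4) for x in lam], [round(r, 4) for r in rho])
```

The product form is visible in the last two lines: once the effective arrival rates are known, each station's marginal behaves like an isolated M/M/1 queue.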



[4] Application-dependent dynamic monitoring of distributed and parallel systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=238299&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

Achieving high performance for parallel or distributed programs often requires substantial amounts of information about the programs themselves, about the systems on which they are executing, and about specific program runs. The monitoring system that collects, analyzes, and makes application-dependent monitoring information available to the programmer and to the executing program is presented. The system may be used for off-line program analysis, for on-line debugging, and for making on-line, dynamic changes to parallel or distributed programs to enhance their performance. The authors use a high-level, uniform data model for the representation of program information and monitoring data. They show how this model may be used for the specification of program views and attributes for monitoring, and demonstrate how such specifications can be translated into efficient, program-specific monitoring code that uses alternative mechanisms for the distributed analysis and collection to be performed for the specified views. The model's utility has been demonstrated on a wide variety of parallel machines.


[5] Perfect difference networks and related interconnection structures for parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1458687&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

In view of their applicability to parallel and distributed computer systems, interconnection networks have been studied intensively by mathematicians, computer scientists, and computer designers. In this paper, we propose an asymptotically optimal method for connecting a set of nodes into a perfect difference network (PDN) with diameter 2, so that any node is reachable from any other node in one or two hops. The PDN interconnection scheme, which is based on the mathematical notion of perfect difference sets, is optimal in the sense that it can accommodate an asymptotically maximal number of nodes with the smallest possible node degree under the constraint that the network diameter be 2. We present the network architecture in its basic and bipartite forms and show how related multidimensional PDNs can be derived. We derive the exact average internode distance and tight upper and lower bounds on the bisection width of a PDN. We conclude that PDNs and their derivatives constitute worthy additions to the repertoire of network designers and may offer additional design points that can be exploited by current and emerging technologies, including wireless and optical interconnects. The performance, algorithmic, and robustness attributes of PDNs are analyzed in a companion paper.
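The diameter-2 property can be sanity-checked with a short script; the edge rule below (link node i to i ± d for each nonzero d in the perfect difference set {0, 1, 3, 9} mod 13) is a simplified stand-in for the paper's exact construction:

```python
# Build a graph from the perfect difference set {0, 1, 3, 9} mod 13:
# node i is linked to (i +/- d) mod n for each nonzero d in the set.
# Because every nonzero residue is a difference of two set elements,
# any two nodes end up at most two hops apart. (This is a simplified
# stand-in for the paper's exact PDN edge set.)
from collections import deque

def pdn_adjacency(n, dset):
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in dset:
            if d % n:                      # skip the zero offset
                adj[i].add((i + d) % n)
                adj[i].add((i - d) % n)
    return adj

def diameter(adj):
    """Longest shortest path over all sources, via plain BFS."""
    worst = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        worst = max(worst, max(dist.values()))
    return worst

adj = pdn_adjacency(13, {0, 1, 3, 9})
print(diameter(adj), len(adj[0]))   # diameter 2 with node degree 6
```

Note the trade-off the abstract describes: 13 nodes are reachable in two hops with degree only 6, rather than the degree 12 a complete graph would need.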



[6] Formal design and performance evaluation of parallel and distributed software systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=668172&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The advantages of parallel and distributed software systems in terms of additional reliability, redundancy, workload balancing, etc. are easily outweighed by the additional complexity that parallelism and distribution introduce into a software architecture. The authors consider an approach to describing the architecture of parallel and distributed software systems. This approach is based on a component model of software which contains special constructs for concurrency control and additional information about distribution. Rather than describing the distribution properties within a component, most of these properties are stated with the use relation between components, which may be local or remote. They describe how this design approach can be implemented on top of CORBA and how performance-related properties of remote use relations are used to quantitatively assess the software architecture. Thus the design of complex, hierarchically structured distributed software systems making full use of parallelism can be assessed with respect to the response time of remote operation invocations.


[7] Enabling requirements-based programming for highly-dependable complex parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1524378&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The manual application of formal methods in system specification has produced successes, but in the end, despite any claims and assertions by practitioners, there is no provable relationship between a manually derived system specification or formal model and the customer's original requirements. Complex parallel and distributed systems present the worst case implications for today's dearth of viable approaches for achieving system dependability. No avenue other than formal methods constitutes a serious contender for resolving the problem, and so recognition of requirements-based programming has come at a critical juncture. We describe a new, NASA-developed automated requirements-based programming method that can be applied to certain classes of systems, including complex parallel and distributed systems, to achieve a high degree of dependability.



[8] Supporting small accesses for the parallel file subsystem on distributed shared memory systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=741171&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The main goal of a parallel file subsystem on Distributed Shared Memory (DSM) systems is to reduce the network traffic in page-based software DSM systems, thereby improving system performance. Our laboratory has built a prototype of the parallel file subsystem on two DSM systems, namely Cohesion and TreadMarks. However, these two prototypes have several limitations: users must read/write the whole parallel file in a single access; users cannot modify an existing parallel file; and the parallel file request must be issued from the root node. In our new parallel file subsystem on Teamster, a new DSM system developed by our laboratory, we eliminate the limitations revealed in the two previous parallel file subsystems. In addition, we have developed two new mechanisms, a software cache mechanism and an asynchronous file offset mechanism, to lessen the performance degradation caused by frequent small accesses.


[9] Performance of software-based MPEG-2 video encoder on parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=611179&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS

Video encoding, due to its high processing requirements, has traditionally been done using special-purpose hardware. Software solutions have been explored but are considered feasible only for non-real-time applications requiring low encoding rates. However, a software solution using a general-purpose computing system has numerous advantages: it is more available and flexible, and allows experimenting with and hence improving various components of the encoder. In this paper, we present the performance of a software video encoder with MPEG-2 quality on various parallel and distributed platforms. The platforms include an Intel Paragon XP/S and an Intel iPSC/860 hypercube parallel computer, as well as various networked clusters of workstations. Our encoder is portable across these platforms and uses a data-parallel approach in which parallelism is achieved by distributing each frame across the processors. The encoder is useful for both real-time and non-real-time applications, and its performance scales with the available number of processors. In addition, the encoder provides control over various parameters such as the size of the motion search window, buffer management, and bit rate. The performance results include comparisons of execution times, speedups, and frame encoding rates on various systems.



[10] Semantic-Aware Metadata Organization Paradigm in Next-Generation File Systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5871607&queryText%3DSemantic-Aware+Metadata+Organization+Paradigm+in+Next-++++Generation+File+Systems  

Existing data storage systems based on the hierarchical directory-tree organization do not meet the scalability and functionality requirements of exponentially growing data sets and increasingly complex metadata queries in large-scale, exabyte-level file systems with billions of files. This paper proposes a novel decentralized semantic-aware metadata organization, called SmartStore, which exploits the semantics of files' metadata to judiciously aggregate correlated files into semantic-aware groups using information retrieval tools. The key idea of SmartStore is to limit the search scope of a complex metadata query to a single group, or a minimal number of semantically correlated groups, and thus avoid or alleviate brute-force search over the entire system. The decentralized design of SmartStore improves system scalability and reduces query latency for complex queries (including range and top-k queries). Moreover, it is also conducive to constructing semantic-aware caching and serving conventional filename-based point queries. We have implemented a prototype of SmartStore, and extensive experiments based on real-world traces show that SmartStore significantly improves system scalability and reduces query latency compared with database approaches. To the best of our knowledge, this is the first study on the implementation of complex queries in large-scale file systems.


[11] A Comparative Study and Analysis of PVM and MPI for Parallel and Distributed Systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1598580&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

Networks of Workstations (NOWs) based on various operating systems are globally accepted as the standard for computing environments in the IT industry. To harness the tremendous potential for computing capability represented by these NOWs, various new tools are being developed. Parallel Virtual Machine (PVM) and Message Passing Interface (MPI) have existed on UNIX workstations for some time and are maturing in their capability for handling Distributed Parallel Processing (DPP). This research explores each of these two vehicles for DPP, considering capability, ease of use, and availability; compares their distinguishing features; and examines their programmer interfaces and their utilisation for solving real-world parallel processing applications. The research conducted herein concludes that each API has its own strengths, and hence each has the potential to remain active into the foreseeable future. This work recommends a potential research issue: to study the feasibility of creating a programming environment that allows access to the virtual machine features of PVM and the message passing features of MPI.



[12] The Third Workshop on Scheduling and Resource Management for Parallel and Distributed Systems (SRMPDS’07)



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4447806&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The following topics are dealt with: multiprocessor architecture; grid and cluster computing; information retrieval and mining; overlay and network architecture; power-aware architecture; mobile computing; parallel and distributed systems; security and trustworthy computing; wireless networks; resource scheduling; fault tolerance; peer-to-peer computing; wireless communication; parallel algorithms.


[13] A binary Particle Swarm Optimization approach to fault diagnosis in parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5586002&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The efficient diagnosis of hardware and software faults in parallel and distributed systems remains a challenge in today's most prolific decentralized environments. System-level fault diagnosis is concerned with the identification of all faulty components among a set of hundreds (or even thousands) of interconnected units, usually by thoroughly examining a collection of test outcomes carried out by the nodes under a specific test model. This task has non-polynomial complexity and can be posed as a combinatorial optimization problem. Here, we apply a binary version of the Particle Swarm Optimization meta-heuristic approach to solve the system-level fault diagnosis problem (BPSO-FD) under the invalidation and comparison diagnosis models. Our method is computationally simpler than those already published in literature and, according to our empirical results, BPSO-FD quickly and reliably identifies the true ensemble of faulty units and scales well for large parallel and distributed systems.
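A minimal binary PSO in this spirit (not the authors' BPSO-FD; the test-model machinery is reduced to matching a known fault vector via a per-bit penalty) might look like this:

```python
# Minimal binary Particle Swarm Optimization in the spirit of BPSO-FD:
# each particle is a candidate fault vector (1 = unit is faulty) and
# the fitness counts disagreements with an observed syndrome. The
# "syndrome" here is simplified to a known target vector -- a toy
# stand-in for the invalidation/comparison test models in the paper.
import math
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]          # true set of faulty units
DIM, PARTICLES, ITERS = len(TARGET), 40, 200

def fitness(bits):                           # disagreements with syndrome
    return sum(b != t for b, t in zip(bits, TARGET))

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

pos = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(PARTICLES)]
vel = [[0.0] * DIM for _ in range(PARTICLES)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=fitness)[:]

for _ in range(ITERS):
    for i in range(PARTICLES):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            vel[i][d] = max(-4.0, min(4.0, vel[i][d]))      # clamp
            # binary PSO: velocity sets the probability of the bit being 1
            pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
            if fitness(pbest[i]) < fitness(gbest):
                gbest = pbest[i][:]

print(gbest, fitness(gbest))
```

The sigmoid step is what makes the swarm "binary": velocities stay continuous, but positions are resampled as bits each iteration, so the swarm keeps exploring even after the bests stabilize.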



[14] Large-Scale Software Testing Environment Using Cloud Computing Technology for Dependable Parallel and Distributed Systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5463684&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

Various information systems are widely used in the information society era, and the demand for highly dependable systems increases year after year. However, software testing for such systems becomes more difficult as systems grow in size and complexity. In particular, it is very difficult to test parallel and distributed systems sufficiently, even though dependable systems such as high-availability servers are usually parallel and distributed systems. To solve these problems, we propose a software testing environment for dependable parallel and distributed systems using cloud computing technology, named D-Cloud. D-Cloud includes Eucalyptus as the cloud management software, FaultVM, based on QEMU, as the virtualization software, and a D-Cloud frontend for interpreting test scenarios. D-Cloud makes it possible not only to automate the system configuration and test procedure but also to perform many test cases simultaneously and to emulate hardware faults flexibly. In this paper, we present the concept and design of D-Cloud and describe how to specify the system configuration and test scenario. Furthermore, a preliminary example of software testing using D-Cloud is presented. The results show that D-Cloud allows the test environment to be set up easily and software for distributed systems to be tested readily.


[15] Middleware infrastructure for parallel and distributed programming models in heterogeneous systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1247671&queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

We introduce a middleware infrastructure that provides software services for developing and deploying high-performance parallel programming models and distributed applications on clusters and networked heterogeneous systems. This middleware infrastructure utilizes distributed agents residing on the participating machines and communicating with one another to perform the required functions. An intensive study of the parallel programming models in Java has helped identify the common requirements for a runtime support environment, which we used to define the middleware functionality. A Java-based prototype, based on this architecture, has been developed along with a Java object-passing interface (JOPI) class library. Since this system is written completely in Java, it is portable and allows executing programs in parallel across multiple heterogeneous platforms. With the middleware infrastructure, users need not deal with the mechanisms of deploying and loading user classes on the heterogeneous system. Moreover, details of scheduling, controlling, monitoring, and executing user jobs are hidden, while the management of system resources is made transparent to the user. Such uniform services are essential for facilitating the development and deployment of scalable high-performance Java applications on clusters and heterogeneous systems. An initial deployment of a parallel Java programming model over a heterogeneous, distributed system shows good performance results. In addition, a framework for the agents' startup mechanism and organization is introduced to provide scalable deployment and communication among the agents.



[16] Fault identification with binary adaptive fireflies in parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5949774&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The efficient identification of hardware and software faults in parallel and distributed systems still remains a serious challenge in today's most prolific decentralized environments. System-level fault diagnosis is concerned with the detection of all faulty nodes in a set of interconnected units. This is accomplished by thoroughly examining the collection of outcomes of all tests carried out by the nodes under a particular test model. Such a task has non-polynomial complexity and can be posed as a combinatorial optimization problem, whose optimal solution has been sought through bio-inspired methods such as genetic algorithms, ant colonies, and artificial immune systems. In this paper, we employ a swarm of artificial fireflies to quickly and reliably navigate the search space of all feasible sets of faulty units under the invalidation and comparison test models. Our approach uses a binary encoding of the potential solutions (fireflies), an adaptive light absorption coefficient to accelerate the search, and problem-specific knowledge to handle infeasible solutions. The empirical analysis confirms that the proposed algorithm outperforms existing techniques in terms of convergence speed and memory requirements, thus becoming a viable approach for real-time fault diagnosis in large systems.


[17] A trace-driven analysis of parallel prefetching algorithms for parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1592279&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

I/O for parallel and distributed systems has drawn increasing attention over the past decade as it has become apparent that I/O performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. Prefetching is the fundamental approach for improving overall read performance. At present, there are several well-known prefetching algorithms for parallel and distributed systems, such as LRU-Lookahead, Fixed Horizon, and the greedy algorithm. In this paper, we study these parallel prefetching algorithms and explore the performance characteristics of each using trace-driven simulation. We find that when performance is limited by I/O stalls, aggressive prefetching helps to alleviate the problem; however, conservative prefetching performs well in compute-bound situations. Moreover, we find that carefully choosing the replacement decision is not necessary for balancing the load across the disks when the data is well laid out.
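A toy trace-driven simulation conveys the Fixed Horizon idea; the trace, the unbounded cache, and the stall counting are all simplifying assumptions:

```python
# Toy trace-driven simulation of Fixed Horizon prefetching: the full
# access trace is known in advance (as in trace-driven studies), and
# after each access the next H upcoming blocks are prefetched into an
# unbounded cache. The trace and cost model are invented for the demo.

def simulate(trace, horizon):
    cache, stalls = set(), 0
    for i, block in enumerate(trace):
        if block not in cache:
            stalls += 1               # demand miss: the CPU stalls on I/O
            cache.add(block)
        # prefetch the next `horizon` known accesses (overlaps compute)
        for nxt in trace[i + 1:i + 1 + horizon]:
            cache.add(nxt)
    return stalls

trace = [0, 1, 2, 3, 0, 4, 5, 1, 6, 7, 8, 2]
no_prefetch = simulate(trace, horizon=0)
fixed_h4    = simulate(trace, horizon=4)
print(no_prefetch, fixed_h4)          # prefetching removes most stalls
```

With no prefetching, every first access to a block stalls; with a horizon of 4, only the very first access does, which is the aggressive-prefetch win the abstract describes for I/O-bound workloads.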



[18] Semi-distributed load balancing for massively parallel multicomputer systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=99188&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

A semidistributed approach is given for load balancing in large parallel and distributed systems which is different from the conventional centralized and fully distributed approaches. The proposed strategy uses a two-level hierarchical control by partitioning the interconnection structure of a distributed or multiprocessor system into independent symmetric regions (spheres) centered at some control points. The central points, called schedulers, optimally schedule tasks within their spheres and maintain state information with low overhead. The authors consider interconnection structures belonging to a number of families of distance transitive graphs for evaluation, and, using their algebraic characteristics, show that identification of spheres and their scheduling points is in general an NP-complete problem. An efficient solution for this problem is presented by making exclusive use of a combinatorial structure known as the Hadamard matrix. The performance of the proposed strategy has been evaluated and compared with an efficient fully distributed strategy through an extensive simulation study. The proposed strategy yielded much better results.
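The Hadamard-matrix ingredient can at least be sketched: Sylvester's standard construction yields mutually orthogonal ±1 rows, the combinatorial property the partitioning scheme builds on (this is not the paper's sphere-identification algorithm itself):

```python
# Sylvester construction of a Hadamard matrix H_n (n a power of two):
# rows are mutually orthogonal +/-1 vectors, the combinatorial property
# the sphere-partitioning scheme relies on. This is only the standard
# construction, not the paper's partitioning algorithm.

def hadamard(n):
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    H = [[1]]
    while len(H) < n:
        # [[H, H], [H, -H]] doubling step
        H = [row + row for row in H] + \
            [row + [-x for x in row] for row in H]
    return H

H = hadamard(8)
# verify orthogonality: H * H^T = n * I
for i in range(8):
    for j in range(8):
        dot = sum(H[i][k] * H[j][k] for k in range(8))
        assert dot == (8 if i == j else 0)
print("H_8 rows are mutually orthogonal")
```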


[19] Run-time detection in parallel and distributed systems: application to safety-critical systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=776517&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

There is growing interest in run-time detection as parallel and distributed systems grow larger and more complex. This work targets run-time analysis of complex, interactive scientific applications for the purpose of attaining scalability improvements with respect to the amount and complexity of the data transmitted, transformed, and shared among different application components. Such improvements are derived from using database techniques to manipulate data streams. Namely, by imposing a relational model on the data streams, constraints on a stream may be expressed as database queries evaluated against the data events comprising the stream. The application in this paper is to a safety-critical system. The paper also presents a tool, dQUOB (Dynamic QUery OBjects), which: (1) offers the means for dynamic creation of queries and for their application to large data streams; (2) permits implementation and runtime use of multiple query optimization techniques; and (3) supports dynamic reoptimization of queries based on a stream's dynamic behavior.
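The stream-as-relation idea can be sketched as follows; the event fields and the constraint are hypothetical:

```python
# Sketch of the dQUOB idea: impose a relational model on a data stream
# and express a constraint as a query evaluated against each event.
# Field names and the threshold are hypothetical.

def compile_query(predicate, projection):
    """Return a query object: select by predicate, project the fields."""
    def run(stream):
        return [{f: e[f] for f in projection} for e in stream if predicate(e)]
    return run

# "SELECT sensor, temp FROM stream WHERE temp > 100" as a query object
overheat = compile_query(lambda e: e["temp"] > 100, ("sensor", "temp"))

stream = [{"sensor": "s1", "temp": 95,  "t": 0},
          {"sensor": "s2", "temp": 130, "t": 1},
          {"sensor": "s1", "temp": 101, "t": 2}]
alerts = overheat(stream)
print(alerts)   # events violating the safety constraint
```

Because the query is an ordinary object built at run time, it can be recreated with a different predicate mid-run, which is the dynamic reoptimization the abstract refers to.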



[20] The robustness of resource allocation in parallel and distributed computing systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1372042&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

This paper gives an overview of the material to be discussed in the invited keynote presentation by H. J. Siegel. Performing computing and communication tasks on parallel and distributed systems involves the coordinated use of different types of machines, networks, interfaces, and other resources. Decisions about how best to allocate resources are often based on estimated values of task and system parameters, due to uncertainties in the system environment. An important research problem is the development of resource management strategies that can guarantee a particular system performance given such uncertainties. We have designed a methodology for deriving the degree of robustness of a resource allocation - the maximum amount of collective uncertainty in system parameters within which a user-specified level of system performance (QoS) can be guaranteed. Our four-step procedure for deriving a robustness metric for an arbitrary system will be presented. We will illustrate this procedure and its usefulness by deriving robustness metrics for some example distributed systems.


[21] Issues on designing server squads for future large-scale distributed and parallel computing systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=138334&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The authors address some of the important issues concerning the construction of server squads in massively parallel and distributed systems. Constructing a system with thousands of processors has become a real possibility in recent years. A server squad is a group of server processes that cooperatively provides services in such systems. Such server squads are necessary in order to avoid possible congestion of services among clients as systems become larger. Issues discussed include the configuration of a server squad, which defines the way individual server processes cooperate with each other; the effects of squad structures on the communication pattern; the relationship between neighboring members of the squad; and information sharing within a squad. Some general rules for the placement of individual servers in a system and the partitioning of clients among members of a squad, and the mechanisms and policies for adaptive growth of a server squad are also discussed.



[22] Edge Scheduling Algorithms in Parallel and Distributed Systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1690615&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

Much research has been done on static DAG-based scheduling algorithms. However, most of this literature assumes that all processors are fully connected and receive communication data concurrently, ignoring the contentions and delays on network links in real applications, which leads to low efficiency. This paper focuses on the issue of edge scheduling for dependent task sets in parallel and distributed environments. Combined with conventionally efficient heuristics, two contention-aware scheduling algorithms are proposed: OIHSA (optimal insertion hybrid scheduling algorithm) and BBSA (bandwidth-based scheduling algorithm). Both algorithms start from the inherent characteristics of the edge scheduling problem and select routing paths with relatively low network workload to transfer communication data using a modified routing algorithm. OIHSA optimizes the start time of communication data transferred on links in the form of theorems. BBSA fully exploits the bandwidth of network links to transfer communication data as soon as possible. The makespan yielded by the algorithms can therefore be reduced efficiently. Moreover, the proposed algorithms adapt not only to homogeneous systems but also to heterogeneous systems. The experimental results indicate that the proposed algorithms clearly outperform previous algorithms in terms of makespan.
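The abstract does not give the algorithms themselves, but the core idea of contention-aware list scheduling can be sketched as follows (a toy model with hypothetical names, not OIHSA or BBSA): tasks are visited in topological order and placed on the processor giving the earliest finish time, with a fixed delay charged whenever a predecessor ran on a different processor.

```python
# Toy communication-aware list scheduler sketch (illustrative names only).
# Task graph: (name, cost, [predecessor names]) given in topological order.

def list_schedule(tasks, n_procs, comm_delay):
    """Greedy earliest-finish-time schedule over a DAG.

    Returns a dict name -> (proc, start, finish) and the makespan.
    """
    proc_free = [0.0] * n_procs          # when each processor becomes idle
    placed = {}                          # name -> (proc, start, finish)
    for name, cost, preds in tasks:
        best = None
        for p in range(n_procs):
            # Data from a predecessor on another processor pays comm_delay.
            ready = max([0.0] + [placed[q][2] +
                                 (comm_delay if placed[q][0] != p else 0.0)
                                 for q in preds])
            start = max(ready, proc_free[p])
            if best is None or start + cost < best[2]:
                best = (p, start, start + cost)
        placed[name] = best
        proc_free[best[0]] = best[2]
    makespan = max(f for _, _, f in placed.values())
    return placed, makespan

# Fork-join graph: A feeds B and C, which feed D.
tasks = [("A", 2, []), ("B", 3, ["A"]), ("C", 3, ["A"]), ("D", 1, ["B", "C"])]
sched, span = list_schedule(tasks, n_procs=2, comm_delay=1.0)
```

Note how D lands on the second processor: paying one communication delay from B is cheaper than waiting behind both B and the delayed C on processor 0.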


[23] A flexible I/O framework for parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=470556&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

We propose a framework for I/O in parallel and distributed systems. The framework is highly customizable and extendible, and enables programmers to offer high level objects in their applications, without requiring them to struggle with the low level and sometimes complex details of high performance distributed I/O. Also, the framework exploits application specific information to improve I/O performance by allowing specialized programmers to customize the framework. Internally, we use indirection and granularity control to support migration, dynamic load balancing, fault tolerance, etc. for objects of the I/O system, including those representing application data.



[24] Performability of integrated software-hardware components of real-time parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=159799&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMSy  

A method for analyzing performability of the integrated software-hardware components of parallel and distributed systems is presented. The technique uses generalized stochastic Petri nets at the system level (analysis of integrated software-hardware systems). The intractable problem of evaluating a parallel software environment consisting of interacting fault-tolerant parallel tasks is addressed. This is accomplished using a decomposition technique at the task-graph level, where the task graph is decomposed into segments. Recovery blocks are effectively modeled in the interacting parallel modules as well as in their supporting hardware. This method greatly facilitates the analysis of performability at the system level. However, the integrated performability model increases the size of the Markov generator matrix. This issue is addressed, and the performability decomposition technique at the task-graph level is illustrated by a simple example of a radar command center.


[25] Fault propagation analysis based variable length checkpoint placement for fault-tolerant parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=625081&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The paper proposes optimal checkpoint placement strategies using failure propagation analysis in a distributed rollback recovery system. The authors' previously proposed idea of failure propagation analysis (FPA) based checkpoint placement strategy is enhanced by incorporating link failures, task grouping/allocation, and loop stabilization aspects. Owing to the empirical observation that a large number of faults occur around message communication instructions, the checkpoint placement strategy places more checkpoints around message send/receive regions of the code. Allocation of tasks (or, threads) onto different processors can lead to varied communication patterns, which in turn can affect the FPA process and the checkpoint placement strategies. Thus, another key contribution of our research is to show the cyclic relationship between checkpointing and task allocation, as well as recursion in parallel or distributed programs. The proposed ideas and FPA approaches are illustrated using a typical parallel algorithm-the fast Fourier transform (FFT).
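As a toy illustration of the placement bias described above (not the paper's FPA algorithm; the `interval` parameter and the instruction encoding are made up for the sketch), one can place a checkpoint after every send/receive and a fallback checkpoint every few other instructions:

```python
# Minimal sketch of communication-biased checkpoint placement: checkpoints
# cluster around message send/receive, where faults are empirically likely,
# with a fallback checkpoint every `interval` instructions elsewhere.

def place_checkpoints(instructions, interval):
    """Return 0-based indices after which a checkpoint is inserted."""
    points, since_last = [], 0
    for i, op in enumerate(instructions):
        since_last += 1
        if op in ("send", "recv") or since_last >= interval:
            points.append(i)
            since_last = 0      # a checkpoint resets the fallback counter
    return points

code = ["comp", "comp", "send", "comp", "comp", "comp", "recv", "comp"]
cps = place_checkpoints(code, interval=4)
```

Here both checkpoints land right after the communication instructions, so no fallback checkpoint is ever needed between them.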



[26] Theory and tool for estimating global time in parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=647195&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

This article presents a method and a tool for estimating a global time base from local event traces. These event traces result from monitoring parallel and distributed systems of arbitrary topology, where the processors are working on a common application. The method relies on stability properties of physical clocks and on the causality relationships of cooperating processes. The tool copes with arbitrary communication structures, event trace formats, and any number of different clocks.
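A minimal sketch of the causality idea, assuming only two processes and a zero lower bound on message delay (a drastic simplification of the method described, with hypothetical names): each message constrains the receive to occur no earlier, in global time, than the send, which bounds the clock offset between the two traces.

```python
# Causality-based clock-offset bounding sketch for two processes A and B.
# B's global time is modeled as B_local + o for an unknown offset o.

def offset_bounds(msgs_ab, msgs_ba):
    """msgs_ab: (send time on A, recv time on B) local-time pairs;
    msgs_ba: (send time on B, recv time on A) local-time pairs.
    Returns (lo, hi) bounding any offset o consistent with causality."""
    # A -> B: send_a <= recv_b + o   =>   o >= send_a - recv_b
    lo = max(s - r for s, r in msgs_ab)
    # B -> A: send_b + o <= recv_a   =>   o <= recv_a - send_b
    hi = min(r - s for s, r in msgs_ba)
    return lo, hi

# B's clock lags A by roughly 5 units in these made-up traces.
ab = [(10.0, 6.0), (20.0, 15.5)]   # sent on A, received on B
ba = [(8.0, 14.0), (18.0, 23.5)]   # sent on B, received on A
lo, hi = offset_bounds(ab, ba)
```

More messages tighten the interval; the tool described above additionally exploits clock drift stability, which this sketch omits.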


[27] Buffered coscheduling: a new methodology for multitasking parallel jobs on distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=846019&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

Buffered coscheduling is a scheduling methodology for time-sharing communicating processes in parallel and distributed systems. The methodology has two primary features: communication buffering and strobing. With communication buffering, communication generated by each processor is buffered and performed at the end of regular intervals to amortize communication and scheduling overhead. This infrastructure is then leveraged by a strobing mechanism to perform a total exchange of information at the end of each interval, thus providing global information to more efficiently schedule communicating processes. This paper describes how buffered coscheduling can optimize resource utilization by analyzing workloads with varying computational granularities, load imbalances, and communication patterns. The experimental results, performed using a detailed simulation model, show that buffered coscheduling is very effective on fast SANs such as Myrinet as well as slower switch-based LANs.
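The buffering half of the methodology can be sketched roughly as follows (an illustrative class; the strobing/total-exchange step is reduced to a single flush call):

```python
# Sketch of communication buffering in buffered coscheduling: sends issued
# during an interval generate no traffic; the strobe at the interval
# boundary delivers the whole buffer in one batched exchange.

class BufferedChannel:
    def __init__(self):
        self.pending = []     # messages accumulated during the interval
        self.delivered = []   # messages handed over at strobe points
        self.flushes = 0

    def send(self, dst, payload):
        self.pending.append((dst, payload))   # buffered, not sent yet

    def strobe(self):
        """Interval boundary: one exchange amortizes all buffered sends."""
        self.delivered.extend(self.pending)
        self.pending.clear()
        self.flushes += 1

ch = BufferedChannel()
ch.send(1, "a"); ch.send(2, "b")
ch.strobe()            # two sends, one exchange
ch.send(1, "c")
ch.strobe()
```

In the real system the strobe also carries the global scheduling information used to coschedule communicating processes; here it only flushes the buffer.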



[28] Design of parallel and distributed systems with high-level Petri nets using CASE technology



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=565523&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The purpose of this paper is to share multi-year educational experience in applying a modern CASE system, Design/CPN from Meta Software Corp., to the design and development of parallel and distributed software using the visual programming methodology of the Petri net model. We put emphasis on understanding fundamental concepts and on their application to the specification and design of software using colored Petri nets. The paper contains a description of the course contents, with project assignments explored over a period of several years. The material presented here is based on an elective graduate course on parallel computation taught at the University of Massachusetts Dartmouth.


[29] A model-based software engineering of parallel and distributed systems using Petri nets



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=725423&pageNumber%3D2%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

This paper describes a sequence of two graduate courses devoted to Software Engineering of Parallel and Distributed Systems (SEPDS) in a software-oriented graduate computer science curriculum. We apply a model-based integrated approach using Petri nets. The two main objectives of these courses are: to present Petri nets as a model to specify, verify, validate, and evaluate the performance of PDSs, and to show examples of Petri net applications in the development of PDSs. Software tools such as Design/CPN, GreatSPN, and UltraSAN are utilized.



[30] Software engineering for parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=605915&pageNumber%3D3%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

What are the main issues in software engineering for parallel and distributed systems? This question, deceptively simple perhaps, produced a range of views, some convergent, others governed by the needs of specialist domains and the writer's particular expertise. These various views are expressed in a series of articles which also appear on the Web.


[31] Submitted to IEEE Transactions on Parallel and Distributed Systems Special Issue on CMP Architectures



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4359393&pageNumber%3D3%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

Dual-core execution (DCE) is an execution paradigm proposed to utilize chip multiprocessors to improve the performance of single-threaded applications. Previous research has shown that DCE provides a complexity-effective approach to building a highly scalable instruction window and achieves significant latency-hiding capabilities. In this paper, we propose to optimize DCE for power efficiency and/or transient-fault recovery. In DCE, a program is first processed (speculatively) in the front processor and then reexecuted by the back processor. Such reexecution is the key to eliminating the centralized structures that are normally associated with very large instruction windows. In this paper, we exploit the computational redundancy in DCE to improve its reliability and its power efficiency. The main contributions include: 1) DCE-based redundancy checking for transient-fault tolerance and a complexity-effective approach to achieving full redundancy coverage and 2) novel techniques to improve the power/energy efficiency of DCE-based execution paradigms. Our experimental results demonstrate that, with the proposed simple techniques, the optimized DCE can effectively achieve transient-fault tolerance or significant performance enhancement in a power/energy-efficient way. Compared to the original DCE, the optimized DCE has similar speedups (34 percent on average) over single-core processors while reducing the energy overhead from 93 percent to 31 percent.



[32] Fast allocation of processes in distributed and parallel systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=207592&pageNumber%3D4%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

MULTIFIT-COM, a static task allocator which could be incorporated into an automated compiler/linker/loader for distributed processing systems, is presented. The allocator uses performance information for the processes making up the system in order to determine an appropriate mapping of tasks onto processors. It uses several heuristic extensions of the MULTIFIT bin-packing algorithm to find an allocation that will offer a high system throughput, taking into account the expected execution and interprocessor communication requirements of the software on the given hardware architecture. Throughput is evaluated by an asymptotic bound for saturated conditions and under an assumption that only processing resources are required. A set of options is proposed for each of the allocator's major steps. An evaluation was made on 680 small randomly generated examples. Using all the search options, an average performance difference of just over 1% was obtained. Using a carefully chosen small subset of only four options, a further degradation of just over 1.5% was observed. The allocator is also applied to a digital signal processing system consisting of 119 tasks to illustrate its clustering and load-balancing properties on a large system.
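MULTIFIT-COM itself is not specified in the abstract, but the underlying MULTIFIT bin-packing heuristic it extends can be sketched: binary-search the processor "capacity" (candidate makespan), testing each candidate by packing task costs first-fit-decreasing into one bin per processor.

```python
# MULTIFIT sketch: the smallest capacity for which first-fit-decreasing
# (FFD) packs all task costs into m processor-bins approximates the
# minimum makespan.

def ffd_fits(costs, m, cap):
    """Can all costs be packed into m bins of size cap, first-fit-decreasing?"""
    bins = []
    for c in sorted(costs, reverse=True):
        for i, load in enumerate(bins):
            if load + c <= cap:
                bins[i] += c
                break
        else:                     # no open bin fits this cost
            if len(bins) == m:
                return False
            bins.append(c)
    return True

def multifit(costs, m, iters=20):
    # Lower bound on the makespan; twice that is always enough for FFD.
    lo = max(max(costs), sum(costs) / m)
    hi = 2 * lo
    for _ in range(iters):
        mid = (lo + hi) / 2
        if ffd_fits(costs, m, mid):
            hi = mid
        else:
            lo = mid
    return hi

span = multifit([5, 4, 3, 3, 2, 2, 1], m=2)   # total work 20 on 2 procs
```

For this instance the perfectly balanced split (10/10) exists, and the search converges to it; the allocator described above layers communication costs and throughput bounds on top of this core.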


[33] Reducing null message traffic in large parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4625703&pageNumber%3D4%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The null message algorithm (NMA) is one of the efficient conservative time-management algorithms that use null messages to provide synchronization between the logical processes (LPs) in a parallel discrete event simulation (PDES) system. However, the performance of a PDES system can be severely degraded if a large number of null messages need to be generated by LPs to avoid deadlock. In this paper, we present a mathematical model based on the quantitative criteria specified in (Rizvi et al., 2006) to optimize the performance of NMA by reducing the null message traffic. Moreover, the proposed mathematical model can be used to approximate the optimal values of some critical parameters such as the frequency of transmission, lookahead (L) values, and the variance of null message elimination. In addition, the performance analysis of the proposed mathematical model incorporates both uniform and non-uniform distributions of L values across the multiple output lines of an LP. Our simulation and numerical analysis suggest that an optimal NMA offers better scalability in a PDES system if it is used with the proper selection of critical parameters.
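A toy version of the null-message mechanism the model optimizes (two logical processes, fixed lookahead; purely illustrative, not the paper's model) shows how lookahead drives the null-message count:

```python
# Null-message sketch: two LPs, each blocked waiting on the other's channel,
# advance purely on null messages stamped clock + lookahead, so the
# simulation never deadlocks.

def run_null_messages(lookahead, horizon):
    """Advance both LPs past `horizon` using only null messages;
    returns how many null messages were sent."""
    clocks = [0.0, 0.0]
    chan = [0.0, 0.0]     # chan[i]: time promised TO lp i by the other lp
    nulls = 0
    while min(clocks) < horizon:
        for i in (0, 1):
            chan[1 - i] = clocks[i] + lookahead   # "nothing earlier than this"
            nulls += 1
            clocks[1 - i] = chan[1 - i]           # neighbor safely advances
    return nulls

n = run_null_messages(lookahead=2.0, horizon=10.0)
```

Halving the lookahead doubles the null-message traffic in this toy, which is exactly the cost the paper's model seeks to reduce by tuning such parameters.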



[34] Distributed predicate detection in series-parallel systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=995818&pageNumber%3D4%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

This paper addresses the problems of state space decomposition and predicate detection in a distributed computation involving asynchronous messages. We introduce a natural communication dependency which leads to the definition of the communication graph. This abstraction proves to be a useful tool to decompose the state lattice of a distributed computation into simpler structures, known as concurrent intervals. Efficient algorithms have been proposed in the literature to detect special classes of predicates, such as conjunctive predicates and bounded sum predicates. We show that more general classes of predicates can be detected when proper constraints are imposed on the underlying computations. In particular, we introduce a class of predicates, defined as separable predicates, that properly includes the above-mentioned classes. We show that separable predicates can be efficiently detected on distributed computations whose communication graphs satisfy the series-parallel constraint.


[35] An implementation framework for HPF distributed arrays on message-passing parallel computer systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=536935&pageNumber%3D4%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

Data parallel languages, like High Performance Fortran (HPF), support the notion of distributed arrays. However, the implementation of such distributed array structures and their access on message passing computers is not straightforward. This holds especially for distributed arrays that are aligned to each other and given a block-cyclic distribution. In this paper, an implementation framework is presented for HPF distributed arrays on message passing computers. Methods are presented for efficient (in space and time) local index enumeration, local storage, and communication. Techniques for local set enumeration provide the basis for constructing local iteration sets and communication sets. It is shown that both local set enumeration and local storage schemes can be derived from the same equation. Local set enumeration and local storage schemes are shown to be orthogonal, i.e., they can be freely combined. Moreover, for linear access sequences generated by our enumeration methods, the local address calculations can be moved out of the enumeration loop, yielding efficient local memory address generation. The local set enumeration methods are implemented by using a relatively simple general transformation rule for absorbing ownership tests. This transformation rule can be repeatedly applied to absorb multiple ownership tests. Performance figures are presented for local iteration overhead, a simple communication pattern, and storage efficiency.
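The block-cyclic distribution at the heart of the paper can be illustrated directly; the key fact exploited by enumeration and storage schemes is that ownership and compact local addresses are simple closed-form functions of the global index (the formulas below are the standard CYCLIC(b) ones, not the paper's full framework):

```python
# Block-cyclic CYCLIC(b) distribution over P processors:
# global index g lives in block g // b, which lands on processor
# (g // b) % P; local storage packs a processor's blocks contiguously.

def owner(g, b, P):
    return (g // b) % P

def local_indices(n, b, P, p):
    """Global indices owned by processor p, in increasing order."""
    return [g for g in range(n) if owner(g, b, P) == p]

def local_address(g, b, P):
    """Offset of global index g within its owner's local array."""
    return (g // (b * P)) * b + (g % b)

# 10 elements, block size 2, 3 processors:
idx0 = local_indices(10, 2, 3, 0)
addrs0 = [local_address(g, 2, 3) for g in idx0]
```

Processor 0 owns blocks {0, 1} and {6, 7}, and its local addresses for them are consecutive (0..3), which is the dense local storage the paper's enumeration and address-generation schemes build on.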



[36] Rapid prototyping of parallel and distributed systems by means of high-level Petri nets



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=638272&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS

The control flow of a parallel or distributed application can be specified by means of high-level Petri nets. One aim of rapid prototyping is to provide a validation of the specification; another is to produce real application code which can be used in the final implementation. We restrict our considerations in this paper to the first goal only. When high-level Petri nets such as colored Petri nets are involved, colored linear invariants help to automatically map a set of processes onto a specific parallel architecture. The purpose of this paper is to present the state of the art of the prototyping methodology and to illustrate it with a suitable example.


[37] Scheduling tool for parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=496067&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

This paper first briefly describes the PARSA (PARallel program Scheduling and Assessment) prototype tool. PARSA is designed to address the efficient partitioning and scheduling of parallel programs on multiprocessor systems. It then presents the scheduling methods that have been implemented in the PARSA prototype and provides a comparative performance evaluation of these schedulers. The PARSA prototype distinguishing features, demonstrated through several examples, include: (1) PARSA simplifies the application development process by eliminating synchronization, scheduling, and machine-dependent concerns; (2) applications developed with PARSA efficiently utilize parallel system resources; (3) PARSA allows development of portable parallel code across a wide range of concurrent systems; (4) applications developed with PARSA can be easily scaled to various sized parallel systems; and (5) PARSA supports fine-tuning of parallel application performance and/or their mappings on the target parallel system.



[38] Proceedings Seventh International Conference on Parallel and Distributed Systems: Workshops



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=883979&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The following topics were dealt with: parallel and multi-computing; communication and protocol; object-oriented systems; network architecture; image processing; multimedia database and application; Internet application; simulation and scientific visualisation; modelling, animation and virtual reality; quality of service; network performance; security; VPN; mobile computing; parallel computing application; interactive applications; multiagent framework, platform and tools; flexible network; physical agents; parallel architectures; and parallel algorithms.


[39] Testing a Web Application Involving Web Browser Interaction



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5286605&pageNumber%3D3%26queryText%3Dweb+application  

Web applications are becoming more and more complex, so systematic approaches for Web application testing are needed. Existing methods take into consideration only those actions provided by the application itself and do not involve actions provided by the browser, such as the use of the back and forward buttons. Based on existing testing techniques, this paper presents an approach to discovering possible inconsistencies caused by interactions with Web browser buttons and the properties of a Web page related to those buttons. A navigation tree that considers the role of the browser buttons while navigating a Web application is constructed. Three adequacy criteria based on user actions are presented for test case selection. For illustration, a simple balance-inquiry Web application is used as an example.



[40] Contention Management Policy in Software Transactional Memory in Parallel and Distributed Systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6821418&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

In parallel programming, transactional memory serves as a better alternative to the conventionally used lock-based programming: it provides fine-grained locking with the ease of coarse-grained programming, and it scales well under low-contention scenarios. A conflict depends on the interactions among the underlying data patterns. Existing solutions handle conflicts in transactional memory in two ways: contention scheduling and contention-management policies. Contention scheduling requires data structures that store the contentions, and the complexity of these data structures grows with the application size. Contention managers, on the other hand, cannot handle high-contention workloads and are suited only to specific data patterns. In this work, conflicts are handled by a novel contention-management approach using time-based and workload-based policies. Experiments were conducted on the proposed contention-management policies, alongside existing contention managers, using the STAMP benchmarks, the Inset benchmarks, and a sample bank application available in the transactional memory library. Experiments are conducted on all the benchmarks, and a review of the policies across the various benchmark programs is reported.
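The abstract does not detail the proposed time-based policy, so the sketch below shows a classic baseline such work is compared against: a timestamp ("older transaction wins") contention manager, run here over a toy workload in which every live transaction conflicts with every other.

```python
# Timestamp contention-manager sketch: on a conflict, the transaction with
# the older start time proceeds; the younger one aborts and retries with
# a fresh (later) timestamp, so no transaction starves forever.

def run_conflicting(txns):
    """txns: {txn id: start time}, all mutually conflicting.
    Returns (commit order, total abort count)."""
    clock = max(txns.values()) + 1
    live = dict(txns)
    commits, aborts = [], 0
    while live:
        oldest = min(live, key=live.get)   # older transaction wins
        commits.append(oldest)
        del live[oldest]
        for t in live:                     # losers abort and restart later
            live[t] = clock
            clock += 1
            aborts += 1
    return commits, aborts

order, n_aborts = run_conflicting({"T1": 5, "T2": 3, "T3": 8})
```

T2, the oldest, commits first; the policy guarantees progress but, as the abstract notes for such managers, it can abort heavily under high contention.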


[41] Parallel and distributed systems for constructive neural network learning



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=263844&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

A constructive learning algorithm dynamically creates a problem-specific neural network architecture rather than learning on a pre-specified architecture. The authors propose a parallel version of their recently presented constructive neural network learning algorithm. Parallelization provides a computational speedup by a factor of O(t) where t is the number of training examples. Distributed and parallel implementations under p4 using a network of workstations and a Touchstone DELTA are examined. Experimental results indicate that algorithm parallelization may result not only in improved computational time, but also in better prediction quality.



[42] A formal study on the fault tolerance of parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1303231&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

This paper analyzes the ability of several bounded degree networks that are commonly used for parallel computation to tolerate faults. Among other things it is shown that an N-node butterfly containing N^(1-ε) worst-case faults (for any constant ε > 0) can emulate a fault-free butterfly of the same size with only constant slowdown. Similar results are proven for the shuffle-exchange graph. Hence, these networks become the first connected bounded-degree networks known to be able to sustain more than a constant number of worst-case faults without suffering more than a constant-factor slowdown in performance.


[43] Study on assessment method for computer network security based on rough set



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5358114&pageNumber%3D2%26queryText%3DNetwork+Security  

Traditional network security assessment methods involve subjective factors when the weights of assessment indexes are identified, which makes accurate and objective assessment difficult. Rough set theory, by contrast, has the advantage of not needing a priori knowledge when dealing with uncertain problems, so its application to network security assessment is quite natural. This paper identifies the principles of an assessment-index system for network security and establishes such an index system, builds a security assessment model and the common steps of network security assessment, both based on rough set theory, and finally analyzes and validates the model with an example. Network security assessment based on rough set theory effectively overcomes the subjectivity of determining index weights by traditional methods, gives more objective results, and enhances the veracity and validity of network security assessment.
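The rough-set mechanics behind such objective index weighting can be sketched (a textbook dependency/significance computation on a toy decision table; the table and column names are invented, and this is not the paper's exact model): the weight of an index is how much the decision's dependency on the data drops when that index is removed.

```python
# Rough-set significance sketch: partition rows by condition attributes,
# measure what fraction of rows sit in blocks that are consistent on the
# decision, and weigh each index by the dependency it contributes.

def partition(rows, attrs):
    """Group row indices by their values on `attrs` (equivalence classes)."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(blocks.values())

def dependency(rows, cond, dec):
    """Fraction of rows whose condition-block is consistent on the decision."""
    ok = 0
    for block in partition(rows, cond):
        if len({rows[i][dec] for i in block}) == 1:
            ok += len(block)
    return ok / len(rows)

# Toy table: condition indexes in columns 0-1, decision in column 2.
table = [("hi", "yes", "risk"), ("hi", "no", "risk"),
         ("lo", "yes", "safe"), ("lo", "no", "safe")]
full = dependency(table, [0, 1], 2)
sig0 = full - dependency(table, [1], 2)   # significance of index 0
sig1 = full - dependency(table, [0], 2)   # significance of index 1
```

In this toy table the decision is fully determined by index 0, so all the weight falls on it and none on index 1, with no expert judgment involved.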



[44] A portable ATPG tool for parallel and distributed systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=512613&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

The use of parallel architectures for the solution of CPU- and memory-critical problems in the electronic CAD area has been limited up to now by several factors, such as the lack of efficient algorithms, the reduced portability of the code, and the cost of the hardware. However, portable message-passing libraries are now available, and the same code runs on high-cost supercomputers as well as on common workstation networks. The paper presents an effective ATPG system for large sequential circuits developed using the PVM library and based on a genetic algorithm. The tool, named GATTO, has been run on a DEC Alpha AXP farm and on a CM-5. Experimental results are provided.


[45] Scalable and Efficient Parallel and Distributed Simulation of Complex, Dynamic and Mobile Systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=841091&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

In this work we illustrate the design and implementation guidelines of a recently developed middleware defined to support the parallel and distributed simulation of large-scale, complex, and dynamically interacting system models. The distributed simulation of complex system models may suffer from the communication and synchronization overhead required to maintain the causality constraints between distributed model components. We designed and implemented the ARTÌS middleware as a new framework incorporating a set of features that allow adaptive optimization by exploiting many complex and dynamic model and distributed simulation characteristics. As an example, a dynamic migration mechanism for the run-time adaptive allocation of model entities has been designed and exploited for dynamic load and communication balancing. Optimizations have been introduced to obtain the maximum advantage from heterogeneous and asymmetric communication systems, from shared memory to LAN and Internet communication. Other optimizations have been introduced by exploiting concurrent replications of parallel and distributed simulations, in order to increase resource utilization and to maximize the speedup of simulation processes. Solutions have been designed, implemented, and tuned to obtain a significant reduction in the communication and synchronization overheads between the physical execution units, and increased model scalability and simulation speedup, even under worst-case modeling assumptions and simulation scenarios.



[46] Parallel distributed real-time systems in manufacturing (an aerospace view)



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=637992&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

An overview of the evolution of distributed real-time controllers in aerospace manufacturing environments is given. Programmable logic controllers (PLCs) and projected PLC market revenues are reviewed, as well as manufacturing deployment strategies and customer support. The hidden costs of PLC technology are identified. Examples of today's state-of-the-art parallel distributed real-time manufacturing systems are given. Large hidden support costs associated with such systems are identified. In conclusion, ideas on how new real-time distributed systems can be supported by the current corporate philosophy of speed, quality and cost, are discussed. Techniques of how to move forward to the implementation of new real-time distributed systems are presented.


[47] Parallel and Distributed Load Flow of Distribution Systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5224800&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

In this chapter we discuss the parallel and distributed computation of load flow for the distribution system, with respect to specific characteristics of the distribution system and based on a distributed computing system. Our discussion focuses on design methods for parallel and distributed load flow algorithms and their convergence analyses. The designed parallel and distributed load flow computations are simulated on a COW (cluster of workstations), and the unpredictability of communication delay is taken into account by designing asynchronous distributed algorithms.



[48] Parallel and Distributed Processing of Power Systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5224913&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

In this chapter we review a few pertinent topics on parallel and distributed algorithms and apply those topics to the parallel and distributed processing of electric power systems. Because the topic of parallel and distributed processing is very broad, we have written this chapter with the assumption that readers have some basic knowledge of parallel and distributed computation.


[49] A New Adaptive Middleware for Parallel and Distributed Simulation of Dynamically Interacting Systems



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1364594&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

In this work we define and test a new framework obtained as the integration of two recently developed middlewares designed to support the parallel and distributed simulation of large-scale, complex, and dynamically interacting system models (like wireless and mobile network systems). In a distributed simulation of highly interacting system models, the main bottleneck may become the communication and synchronization required to maintain the causality constraints between distributed model components. We designed and implemented the ARTÌS middleware as a new framework incorporating a set of features that allow adaptive optimization of the communication layer management in a distributed simulation scenario. ARTÌS has been integrated with GAIA, a dynamic mechanism for the runtime management and adaptive allocation of model entities in a distributed simulation. By adopting a runtime evaluation of causal bindings between model entities, GAIA adapts the dynamic and time-persistent causal effects of model interactions to the dynamic migration of model entities. Preliminary results demonstrate that the combined effect of ARTÌS management and GAIA heuristics leads to a significant reduction in the communication and synchronization overheads between the physical execution units. Simulation performance enhancements have been obtained even under worst-case modelling assumptions and simulation scenarios.



[50] Distributed parallel coordinated control strategy for provincial-regional grid based on subarea division of the power system



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5735867&pageNumber%3D5%26queryText%3DPARALLEL+AND+DISTRIBUTED+SYSTEMS  

Based on subarea division of the power system and multithread technology, a distributed parallel coordinated control strategy for an automatic voltage control (AVC) system is proposed. Using the auxiliary problem principle (APP) method, a complex power system is decomposed geographically into several logically independent subsystems, which are coordinated via restrictions on the joint borders. An iterative computation framework is formed, and the influence of the external system on the internal system is considered by exchanging the information of boundary nodes. The regional power system coordinated control problem is decomposed into several parallel coordinating subsystem optimization problems and solved with the particle swarm optimization algorithm. Considering coordinated control between provincial and regional power systems, each subsystem computes the maximum reactive power that can be added or removed. Considering the correlation between voltage and reactive power in each subsystem, the reserve gateway reactive power capacity is computed and provided to the provincial AVC system. After receiving the adjustable range of gateway power factors, which is computed and sent by the provincial AVC system based on the reserve gateway reactive power capacity, the regional AVC system finds a balance between reactive power flow and minimum network loss. To avoid the bottleneck of data upload and order transmission in centralized parallel computation, each subsystem is encapsulated and computed in parallel using multithread technology. The proposed parallel control algorithm is applied to a practical regional grid, and the results indicate that it reduces calculation complexity with higher efficiency and convergence.

