For Inplant/Internship training requests, please download the registration form, fill in the details, and send it back to kaashiv.info@gmail.com

Parallel And Distributed Computing

KaaShiv InfoTech, Number 1 Inplant Training Experts in Chennai.


Description:


A distributed system is a collection of independent computers, interconnected via a network, capable of collaborating on a task. It can be characterized as a collection of multiple autonomous computers that communicate over a communication network and have the following features: no common physical clock, enhanced reliability, an increased performance/cost ratio, access to geographically remote data and resources, and scalability. Examples of distributed systems include telephone and cellular networks, ATM (bank) machines, computer networks such as the Internet or an intranet, distributed database management systems, mobile computing, and networks of workstations.
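One consequence of having no common physical clock is that event ordering must be established with logical clocks. The sketch below is a minimal, self-contained illustration of Lamport logical clocks in Python; the Node class and the message payloads are invented for the example and are not part of any system described on this page.

# Minimal sketch of Lamport logical clocks: ordering events in a
# distributed system that has no common physical clock.

class Node:
    def __init__(self, name):
        self.name = name
        self.clock = 0  # logical time, not wall-clock time

    def local_event(self, label):
        self.clock += 1  # rule 1: tick before every local event
        print(f"{self.name} clock={self.clock}: {label}")

    def send(self, receiver, payload):
        self.clock += 1  # sending is itself an event
        receiver.receive(self, payload, self.clock)

    def receive(self, sender, payload, timestamp):
        # rule 2: merge the sender's clock, then tick
        self.clock = max(self.clock, timestamp) + 1
        print(f"{self.name} clock={self.clock}: got {payload!r} from {sender.name}")

a, b = Node("A"), Node("B")
a.local_event("compute")
a.send(b, "result")
b.local_event("store")

Every send and receive advances the logical clock, so causally related events are consistently ordered even though no node knows the others' physical time.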




Algorithm:


  • Parallel and Distributed Algorithms for Inference and Optimization
  • Parallel algorithm
  • Parallel and Distributed Computing Series
  • Design and Analysis of Parallel Algorithms
  • Serial Algorithm
  • Concurrent Algorithm
  • Designing Efficient Algorithm
  • Data Parallel Algorithm
  • Control Parallel Algorithm
  • Parallel Matrix Algorithm
  • String Pattern Matching Algorithm

  • Next Generation Semantic File Container with Inherent Indexer


    Abstract:


    A semantic file system is an information storage system that provides flexible associative access to the system's contents by automatically extracting attributes from files using file-type-specific details. Associative access to files is provided by an initial extension to existing tree-structured file system protocols, and by the use of protocols designed specifically for content-based file system access. Accesses to file details such as versions or other concepts are interpreted as queries applied to our container engine, thus providing flexible associative access to files. Indexing of key properties of file system objects, together with indexing/caching on the file system, is one of the key features of our system. The automatic indexing and relativity-based grouping of files is called "semantic" because the user-programmable nature of the system uses information about the semantics of updated file system objects to extract the properties for indexing. Our semantic file system presents a more effective storage abstraction than traditional tree-structured file systems do. In addition, the data is archived after multiple levels of layering: one level of encryption and a second level of compression on the documents.
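    As a rough sketch of the attribute extraction and indexing described above, the toy Python indexer below walks a directory, extracts simple per-file attributes, and builds an attribute-to-files index; a zlib compression step stands in for one layer of the archival layering (the encryption layer is omitted). The attribute choices and function names are assumptions for illustration, not the actual container engine.

# Toy semantic indexer: extract attributes from files and index them.
import os
import zlib
from collections import defaultdict

def extract_attributes(path):
    st = os.stat(path)
    return {
        "ext": os.path.splitext(path)[1].lstrip("."),
        "size_kb": st.st_size // 1024,
    }

def build_index(root):
    index = defaultdict(set)  # (attribute, value) -> set of paths
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            for attr, value in extract_attributes(path).items():
                index[(attr, value)].add(path)
    return index

def archive(path):
    # one layer of the described layering: compress the file contents
    with open(path, "rb") as f:
        return zlib.compress(f.read())

index = build_index(".")
print(index.get(("ext", "py"), set()))  # associative access by attribute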


    IEEE Title:


    Semantic-Aware Metadata Organization Paradigm in Next-Generation File Systems.


    Abstract:


    Existing data storage systems based on the hierarchical directory-tree organization do not meet the scalability and functionality requirements of exponentially growing data sets and increasingly complex metadata queries in large-scale, Exabyte-level file systems with billions of files. This paper proposes a novel decentralized semantic-aware metadata organization, called SmartStore, which exploits the semantics of files' metadata to judiciously aggregate correlated files into semantic-aware groups by using information retrieval tools. The key idea of SmartStore is to limit the search scope of a complex metadata query to a single group or a minimal number of semantically correlated groups and to avoid or alleviate brute-force search through the entire system. The decentralized design of SmartStore improves system scalability and reduces query latency for complex queries (including range and top-k queries). Moreover, it is also conducive to constructing semantic-aware caching and to conventional filename-based point queries. We have implemented a prototype of SmartStore, and extensive experiments based on real-world traces show that SmartStore significantly improves system scalability and reduces query latency compared with database approaches. To the best of our knowledge, this is the first study of the implementation of complex queries in large-scale file systems.
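    A toy illustration of the grouping idea: correlate files by a metadata signature so that a range query searches only the matching groups rather than the whole system. The signature function below (extension plus a coarse size bucket) is an invented stand-in for SmartStore's information-retrieval-based grouping.

# Toy "semantic grouping" of file metadata: queries only search the
# groups whose signature matches, instead of brute-forcing everything.
from collections import defaultdict

files = [
    {"name": "a.log", "ext": "log", "size": 120},
    {"name": "b.log", "ext": "log", "size": 90},
    {"name": "c.mp4", "ext": "mp4", "size": 700_000},
]

def signature(meta):
    # correlate files by extension and coarse size bucket
    return (meta["ext"], meta["size"] // 1000)

groups = defaultdict(list)
for meta in files:
    groups[signature(meta)].append(meta)

def range_query(ext, lo, hi):
    hits = []
    for (g_ext, bucket), members in groups.items():
        if g_ext != ext or not (lo // 1000 <= bucket <= hi // 1000):
            continue  # skip semantically uncorrelated groups entirely
        hits += [m for m in members if lo <= m["size"] <= hi]
    return hits

print(range_query("log", 0, 100))  # finds b.log without scanning c.mp4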


    Implementation:




    Screen shot:




    Related URLs for reference:


    [1] High-performance parallel and distributed computing for the BMI eigenvalue problem


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1015574&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1015574 

    The BMI eigenvalue problem is an optimization problem whose goal is to minimize the greatest eigenvalue of a bilinear matrix function. This paper proposes a parallel algorithm to compute the ϵ-optimal solution of the BMI eigenvalue problem on parallel and distributed computing systems. The proposed algorithm performs a parallel branch-and-bound method to compute the ϵ-optimal solution using the Master-Worker paradigm. Performance evaluation results on PC clusters and a Grid computing system showed that the proposed algorithm reduced the computation time of the BMI eigenvalue problem to 1/91 of the sequential computation time on a PC cluster with 128 CPUs, and to 1/7 on a Grid computing system. The results also showed that tuning the computational granularity on a worker was required to achieve the best performance on a Grid computing system.
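    A minimal sketch of the Master-Worker paradigm used by the paper, applied here to a toy one-dimensional minimization rather than the BMI eigenvalue problem; the objective function and the interval subdivision are invented for illustration.

# Master-worker sketch: the master splits a search domain into
# subproblems, workers minimize locally, and the master keeps the best.
from multiprocessing import Pool

def f(x):
    return (x - 1.7) ** 2 + 0.3  # stand-in objective function

def solve_subproblem(bounds):
    lo, hi = bounds
    step = (hi - lo) / 1000
    xs = [lo + i * step for i in range(1001)]
    return min((f(x), x) for x in xs)

if __name__ == "__main__":
    subproblems = [(i, i + 1) for i in range(-5, 5)]  # split [-5, 5]
    with Pool() as pool:                              # the workers
        best_val, best_x = min(pool.map(solve_subproblem, subproblems))
    print(f"approx minimum {best_val:.4f} at x={best_x:.4f}")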


    [2] Performance Modeling and Prediction of Parallel and Distributed Computing Systems: A Survey of the State of the Art


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4159746&url=http%3A%2 

    Performance is one of the key features of parallel and distributed computing systems. Therefore, significant research effort has been invested in the development of approaches for performance modeling and prediction of parallel and distributed computing systems. In this paper we identify the trends, contributions, and drawbacks of the state-of-the-art approaches. We describe a wide range of performance modeling approaches that spans from high-level mathematical modeling to detailed instruction-level simulation. For each approach we describe how the program and machine are modeled and assess the model development and evaluation effort, the efficiency, and the accuracy. Furthermore, we present an overall evaluation of the presented approaches.


    [3] Performance modeling of parallel and distributed computing using PACE


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=830354&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D830354  

    There is a wide range of performance models being developed for the performance evaluation of parallel and distributed systems. The performance modelling approach described in this paper is based on the layered framework of the PACE methodology. With an initial implementation system, the model, described by a performance specification language, CHIP3S, can rapidly calculate relevant performance information without sacrificing prediction accuracy. An example of the performance evaluation of an ASCI kernel application, Sweep3D, is used to illustrate the approach. The validation results on different parallel and distributed architectures with different problem sizes show that reasonable accuracy (approximately 12% error at most) can be obtained, that cross-platform comparisons can easily be undertaken, and that evaluation is rapid (typically less than 2 s).
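    As a hedged sketch of what a simple layered analytical model looks like, the snippet below predicts run time as computation plus communication; the machine parameters are made up, and this is the generic form of such models, not the CHIP3S language.

# Generic analytical performance model in the spirit of layered
# prediction frameworks: T(p) = compute/p + latency + volume/bandwidth.
# All parameter values below are invented assumptions for illustration.

def predict_time(work_flops, p, flops_per_sec, alpha, beta, msg_bytes):
    compute = work_flops / (p * flops_per_sec)       # perfectly split work
    communicate = p * (alpha + msg_bytes / beta)     # p exchanges per step
    return compute + communicate

for p in (1, 4, 16, 64):
    t = predict_time(1e12, p, 1e9, 1e-5, 1e9, 1e6)
    print(f"p={p:3d} predicted time={t:8.3f} s")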


    [4] Current trends in high performance parallel and distributed computing


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1213452&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1213452 

    Summary form only given. Parallel computing for high performance scientific applications gained widespread adoption and deployment about two decades ago. Computer systems based on shared memory and message passing parallel architectures were soon followed by clusters and loosely coupled workstations, which afforded flexibility and good performance for many applications at a fraction of the cost of MPPs. Such platforms, referred to as parallel distributed computing systems, have evolved considerably and are currently manifested as very sophisticated metacomputing and Grid systems. This paper traces the evolution of loosely coupled systems and highlights specific functional, as well as fundamental, differences between the clusters and NOWs of yesteryear and the metacomputing Grids of today. In particular, semantic differences between Grids and systems such as PVM and MPICH are explored. In addition, the recent trend in Grid frameworks to move away from conventional parallel programming models toward a more service-oriented architecture is discussed. Exemplified by toolkits that follow the OGSA specification, these efforts attempt to unify aspects of Web-service technologies, high performance computing, and distributed systems in order to enable large scale, cross-domain sharing of compute, data, and service resources. The paper also presents specific examples of current metacomputing and Grid systems with respect to the above characteristics, and discusses the relative merits of the different approaches.


    [5] Advances in parallel and distributed computing models – APDCM


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5470826&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5470826 

    The past twenty years have seen a flurry of activity in the arena of parallel and distributed computing. In recent years, novel parallel and distributed computational models have been proposed in the literature, reflecting advances in new computational devices and environments such as optical interconnects, programmable logic arrays, networks of workstations, radio communications, mobile computing, DNA computing, quantum computing, sensor networks, etc. It is very encouraging to note that the advent of these new models has led to significant advances in the resolution of various difficult problems of practical interest. The main goal of this workshop is to provide a timely forum for the exchange and dissemination of new ideas, techniques, and research in the field of parallel and distributed computational models. The workshop is meant to bring together researchers and practitioners interested in all aspects of parallel and distributed computing taken in an inclusive, rather than exclusive, sense. We are convinced that the workshop atmosphere will be conducive to open and mutually beneficial exchanges of ideas between the participants.


    [6] Designing systems for highly parallel and distributed computing


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=183873&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D183873 

    The parallel computational model with dataflow sequencing is introduced. The basic principles and features of mpDF, a massively parallel architecture based on the dataflow operational model and RISC organizational principles, are described. An analytical performance model is developed to evaluate the proposed architectural solution for distributed/parallel computing based on dataflow sequencing of instructions. The algorithmic performance model is extended to include characterization of parallel programs in terms of their average parallelism, and the model is solved for a number of different workloads. The values of the basic architectural characteristics are analyzed, showing that the architecture is well balanced and provides consistent performance for a wide range of parallel applications.
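    Characterizing a program by its average parallelism A yields classical speedup bounds (due to Eager, Zahorjan, and Lazowska): speedup on p processors lies between pA/(p + A - 1) and min(p, A). The small sketch below evaluates both bounds for an assumed A.

# Classical bounds on speedup for a program with average parallelism A.
# The value of A chosen here is an illustrative assumption.

def speedup_bounds(p, A):
    lower = p * A / (p + A - 1)  # guaranteed lower bound
    upper = min(p, A)            # cannot beat p processors or A parallelism
    return lower, upper

A = 12.0  # assumed average parallelism of the workload
for p in (2, 8, 32, 128):
    lo, hi = speedup_bounds(p, A)
    print(f"p={p:4d}: {lo:6.2f} <= speedup <= {hi:6.2f}")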


    [7] Parallel and distributed computing in circular accelerators


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=988018&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D988018 

    There are some problems in accelerator simulation (for example, computer design of new accelerator projects, optimization of working machines, and quasi-real-time accelerator control) which demand high performance computing using parallel and distributed computer systems. In particular, the use of sets of computer clusters is promising for this purpose. In this report we develop the approach suggested in our previous works, which is based on several levels of parallel and distributed computing. The structured physical and mathematical description of beam evolution behavior (including space charge forces) allows one to distribute calculations over several computing clusters with parallel computing inside each structural block. Computer algebra tools play a great role in this approach. On the one hand, the matrix formalism used for Lie algebraic methods admits symbolic computation and storage of the results in corresponding databases. On the other hand, the possibilities of visual 2D and 3D presentation allow one to analyze the studied effects more carefully and intuitively. The multivariate description of beam evolution, space charge forces, and optimization procedures permits one to compute various representative models simultaneously on different clusters.


    [8] A message passing interface for parallel and distributed computing


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=263854&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D263854 

    The proliferation of high performance workstations and the emergence of high speed networks have attracted a lot of interest in parallel and distributed computing (PDC). The authors envision that PDC environments with supercomputing capabilities will be available in the near future. However, a number of hardware and software issues have to be resolved before the full potential of these PDC environments can be exploited. The presented research has the following objectives: (1) to characterize the message-passing primitives used in parallel and distributed computing; (2) to develop a communication protocol that supports PDC; and (3) to develop an architectural support for PDC over gigabit networks.
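    A minimal sketch of the point-to-point message-passing primitives being characterized, written with mpi4py (this assumes a working MPI installation; the task payload and tags are invented). Run with, for example, mpiexec -n 2 python demo.py.

# Blocking point-to-point send/recv between a master (rank 0)
# and a worker (rank 1), assuming mpi4py is installed.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({"task": "negate", "payload": [1, 2, 3]}, dest=1, tag=0)
    result = comm.recv(source=1, tag=1)
    print("master received:", result)
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    comm.send([-x for x in msg["payload"]], dest=0, tag=1)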


    [9] The economy of parallel and distributed computing in the cloud


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5783617&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5783617 

    Large-scale simulation and analysis software is used heavily in the VLSI industry. One would naturally think that it's a job for HPC. However, the cost of such clusters and the existing pricing model of commercially available software packages lead to some interesting tradeoffs and business models. We propose the use of cloud computing and a corresponding pricing model that directly relates to the amount of computing accomplished. This should promote better utilization of both hardware and energy in using such software.


    [10] Parallel and Distributed Computing with Java


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4021901&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4021901 

    The Java language first came to public attention in 1995. Within a year, it was being speculated that Java might be a good language for parallel and distributed computing. Its core features, including being object oriented and platform independent, as well as having built-in network support and threads, have encouraged this view. Today, Java is being used in almost every type of computer-based system, ranging from sensor networks to high performance computing platforms, and from enterprise applications through to complex research-based simulations. In this paper the key features that make Java a good language for parallel and distributed computing are first discussed. Two Java-based middleware systems, namely MPJ Express, an MPI-like Java messaging system, and Tycho, a wide-area asynchronous messaging framework with an integrated virtual registry, are then discussed. The paper concludes by highlighting the advantages of using Java as middleware to support distributed applications.

    [11] Optical interconnections for parallel and distributed computing


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=867698&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D867698 

    This paper reports the application of optical interconnections to a parallel and distributed computing system in the form of a calibration-free 64-Gbps/board parallel optical interconnection subsystem mounted directly on the four-CPU processor board of a newly developed parallel-processing machine, "RWC-1". The subsystem is composed of eight parallel optical module/single-chip link large-scale integrated circuit pairs. The subsystem successfully transmitted parallel data over a variety of link lengths (between 1 m and 1 km), and with deskewing and synchronizing functions, phase-matching calibration for link lengths is automatic. Further, a method is described for the simplified merging of optical interconnections into electronic systems.


    [12] Dome: parallel programming in a distributed computing environment


    http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=508061&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D508061 
    The Distributed object migration environment (Dome) addresses three major issues of distributed parallel programming: ease of use, load balancing, and fault tolerance. Dome provides process control, data distribution, communication, and synchronization for Dome programs running in a heterogeneous distributed computing environment. The parallel programmer writes a C++ program using Dome objects which are automatically partitioned and distributed over a network of computers. Dome incorporates a load balancing facility that automatically adjusts the mapping of objects to machines at runtime, exhibiting significant performance gains over standard message passing programs executing in an imbalanced system. Dome also provides checkpointing of program state in an architecture independent manner allowing Dome programs to be checkpointed on one architecture and restarted on another.
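    A sketch of the architecture-independent checkpointing idea: serialize program state in a portable format so a run can be resumed, possibly on a different machine. This only illustrates the concept; it is not Dome's mechanism, and the state layout is an assumption.

# Sketch of architecture-independent checkpointing: persist program
# state as a portable byte stream rather than a machine core dump.
import os
import pickle

STATE_FILE = "checkpoint.pkl"  # illustrative file name

def checkpoint(state):
    with open(STATE_FILE, "wb") as f:
        pickle.dump(state, f)

def restore():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, "rb") as f:
            return pickle.load(f)
    return {"iteration": 0, "partial_sum": 0.0}

state = restore()
for i in range(state["iteration"], 10):
    state["partial_sum"] += i * 0.5
    state["iteration"] = i + 1
    checkpoint(state)  # after a crash, a rerun resumes from here
print(state)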




    [14] Parallelization of MPEG-2 video encoder for parallel and distributed computing systems


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=510218&pageNumber%3D4%26queryText%3Dparallel+and+distributed+computing 

    In this paper, a parallel implementation of the MPEG-2 video encoder on various parallel and distributed platforms is presented. We use a data-parallel approach and exploit parallelism within each frame, which makes our encoder suitable for real-time applications where the complete video sequence may not be present on disk. The Express environment is employed as the underlying message-passing system, making our encoder portable across a wide range of parallel and distributed architectures. The encoder also provides control over various parameters such as the number of processors, the size of the search window, buffer management, and bit rate. It is flexible and allows fast, new algorithms to replace the current algorithms in different stages of the codec. Comparisons of execution times, speedups, and frame encoding rates using different numbers of processors are provided. In addition, our study reveals the degrees of parallelism and the bottlenecks in the various computational modules of MPEG-2.
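    A sketch of the within-frame data-parallel approach: split a frame into horizontal strips and hand each strip to a worker process. Here encode_strip is a trivial stand-in for real motion estimation and DCT work.

# Data-parallel sketch: process strips of one frame concurrently.
from multiprocessing import Pool

WIDTH, HEIGHT, STRIPS = 64, 48, 4

def encode_strip(strip):
    # placeholder "encoding": sum of pixel values in the strip
    return sum(sum(row) for row in strip)

if __name__ == "__main__":
    frame = [[(x + y) % 256 for x in range(WIDTH)] for y in range(HEIGHT)]
    rows = HEIGHT // STRIPS
    strips = [frame[i * rows:(i + 1) * rows] for i in range(STRIPS)]
    with Pool(STRIPS) as pool:
        encoded = pool.map(encode_strip, strips)  # one strip per worker
    print(encoded)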




    [16] Workshop on Java for parallel and distributed computing [introductory remarks]


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=925076&pageNumber%3D4%26queryText%3Dparallel+and+distributed+computing 

    Summary form only given. The past decades have produced a wide variety of automated techniques for assessing the correctness of software systems. In practice, when applied to large modern software systems, all existing automated program analysis and verification techniques come up short. They might produce false error reports, exhaust available human or computational resources, or be incapable of reasoning about some set of important properties. Whatever their shortcoming, the goal of proving a system correct remains elusive.


    [17] A stochastic model for robust resource allocation in heterogeneous parallel and distributed computing systems


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4536431&pageNumber%3D4%26queryText%3Dparallel+and+distributed+computing 

    This paper summarizes some of our research in the area of robust static resource allocation for distributed computing systems operating under imposed quality of service (QoS) constraints. Often, these systems are expected to function in a physical environment replete with uncertainty, which causes the amount of processing required over time to fluctuate substantially. Determining a resource allocation that accounts for this uncertainty in a way that can provide a probabilistic guarantee that a given level of QoS is achieved is an important research problem. The stochastic robustness metric described in this research is based on a mathematical model in which the relationship between uncertainty in system parameters and its impact on system performance is described stochastically.
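    A hedged sketch of the stochastic idea: estimate by Monte Carlo simulation the probability that total processing time meets a QoS deadline when task times fluctuate. The Gaussian fluctuation model and all numbers are assumptions for illustration.

# Monte Carlo sketch of a stochastic robustness check.
import random

def simulate_makespan(task_means, jitter=0.3):
    # each task's time fluctuates around its mean (assumed model)
    return sum(random.gauss(m, m * jitter) for m in task_means)

def prob_qos_met(task_means, deadline, trials=100_000):
    hits = sum(simulate_makespan(task_means) <= deadline
               for _ in range(trials))
    return hits / trials

tasks = [2.0, 3.5, 1.2, 4.1]
print(f"P(QoS met) ~= {prob_qos_met(tasks, deadline=12.0):.3f}")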


    [18] Scalable optical interconnection network for parallel and distributed computing


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5411917&pageNumber%3D4%26queryText%3Dparallel+and+distributed+computing 

    In this paper, a high-performance, scalable, parallel computing system called RAPID is designed using switchless, passive optical interconnect technology. RAPID outperforms current electrical multiprocessor systems by significantly decreasing the remote memory access latency.


    [19] Dynamic Power-Aware Scheduling Algorithms for Real-Time Task Sets with Fault-Tolerance in Parallel and Distributed Computing Environment


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1419821&pageNumber%3D4%26queryText%3Dparallel+and+distributed+computing 

    At present, reducing the energy consumption of modern processors and providing fault tolerance have become major concerns, because high power consumption increases heat dissipation, which decreases system reliability. Similarly, faults in running tasks also reduce system reliability. The algorithms proposed in this paper are based on a shortest-task-first policy combined with other efficient techniques, such as shared slack reclamation and checkpointing. Consequently, real-time tasks can be completed before their deadlines while the global power consumption is reduced and fault tolerance is satisfied dynamically. In this paper, we present four algorithms to schedule independent task sets and task sets with precedence relationships in homogeneous and heterogeneous systems, respectively. Moreover, we present a dynamic fault-tolerant algorithm. Compared to the efficient algorithms presented so far, our algorithms show lower communication complexity and much better scheduling performance in terms of makespan and energy consumption.
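    A minimal sketch of the shortest-task-first policy on homogeneous machines, using a greedy least-loaded assignment; the paper's power management, slack reclamation, and checkpointing are omitted.

# Shortest-task-first list scheduling: sort tasks by length, assign
# each to the currently least loaded machine.
import heapq

def stf_schedule(task_times, n_machines):
    loads = [(0.0, m) for m in range(n_machines)]  # (load, machine id)
    heapq.heapify(loads)
    assignment = {}
    for t in sorted(task_times):                   # shortest task first
        load, m = heapq.heappop(loads)             # least loaded machine
        assignment.setdefault(m, []).append(t)
        heapq.heappush(loads, (load + t, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

assignment, makespan = stf_schedule([5, 2, 8, 3, 7, 1], n_machines=2)
print(assignment, "makespan:", makespan)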


    [20] A workflow for parallel and distributed computing of large-scale genomic data


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6750194&pageNumber%3D4%26queryText%3Dparallel+and+distributed+computing 

    Workflow management systems are emerging as a dominant solution in bioinformatics because they enable researchers to analyze the huge amount of data generated by modern laboratory equipment. The growth of genomic data generated by next generation sequencing (NGS) results in an increasing need to analyze data on distributed computer clusters. In this paper, we construct a semi-automated workflow system for the analysis of large-scale sequence data sets, describe a pipeline designed with parallel computation to perform the optimal computational steps required to analyze whole genome sequence data, and report the overall execution time of the pipeline using cores on multiple machines.


    [21] A Distributed Parallel Computing Environment for Bioinformatics Problems


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4293834&pageNumber%3D4%26queryText%3Dparallel+and+distributed+computing 

    Certain bioinformatics research, such as sequence alignment, alternative splicing, protein function/structure prediction, gene identification, bio-chip data analysis, and so on, requires massive computing power, which is hardly available in a single computing node. In order to facilitate bioinformatics research, we have designed and implemented a distributed and parallel computing environment with grid technology, in which biologists can solve bioinformatics problems using distributed computing resources in parallel and reduce execution time. In this environment, the computing power and program information of computing nodes are described with XML documents. A web service named Local Resource Management Service is deployed on each computing node so that the distributed resources can be accessed in a uniform manner. With an API suite, biologists can easily use distributed computing resources in parallel in their applications. Furthermore, users can monitor the status of distributed resources dynamically on the portal. A real use case of alternative splicing is also presented, through which we have analyzed the usability, efficiency, and stability of the environment.


    [22] Parallel and distributed computing for cybersecurity


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6814701&pageNumber%3D5%26queryText%3DAndroid+Application 

    Parallel and distributed data mining offer great promise for addressing cybersecurity. The Minnesota Intrusion Detection System can detect sophisticated cyberattacks on large-scale networks that are hard to detect using signature-based systems.


    [23] Self Adaptive Application Level Fault Tolerance for Parallel and Distributed Computing


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4228332&pageNumber%3D4%26queryText%3Dparallel+and+distributed+computing 

    Most application-level fault tolerance schemes in the literature are non-adaptive, in the sense that the fault tolerance schemes incorporated in applications are usually designed without incorporating information from the system environment, such as the amount of available memory and the local or network I/O bandwidth. However, from an application point of view, it is often desirable for fault-tolerant high performance applications to achieve high performance in whatever system environment they execute, with as low a fault tolerance overhead as possible. In this paper, we demonstrate that, in order to achieve high reliability with as low a performance penalty as possible, fault tolerance schemes in applications need to be able to adapt themselves to different system environments. We propose a framework under which different fault tolerance schemes can be incorporated in applications using an adaptive method. Under this framework, applications are able to choose near-optimal fault tolerance schemes at run time according to the specific characteristics of the platform on which the application is executing.
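    One classical quantity such an adaptive scheme can re-evaluate at run time is the checkpoint interval. Young's approximation gives t_opt ≈ sqrt(2 · C · MTBF) for checkpoint cost C; the cost values below are invented to show how the interval adapts as the environment changes.

# Young's classical approximation for the optimal checkpoint interval.
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    return math.sqrt(2 * checkpoint_cost_s * mtbf_s)

for cost in (10, 60, 300):  # e.g., cost grows when I/O bandwidth drops
    print(f"C={cost:4d}s -> checkpoint every "
          f"{young_interval(cost, mtbf_s=24 * 3600):7.0f}s")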




    [25] Implementations of Grid-Based Distributed Parallel Computing


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4673567&pageNumber%3D4%26queryText%3Dparallel+and+distributed+computing 

    Grid computing provides new solutions for numerous complex problems. It is an inevitable trend to implement the distributed parallel computing of large-scale problems with the grid. This paper presents two implementations of distributed parallel computing on Globus Toolkit, a widely used grid environment. The first implementation, loosely coupled parallel services, is used to achieve large-scale parallel computing that can be broken down into independent sub-jobs, using the corresponding implementation framework; the second implementation, the grid MPI parallel program, is able to deal with specialized applications that cannot easily be split into numerous independent chunks, using the proposed implementation framework. Finally, two examples of large-scale parallel computing based on the proposed implementations are presented along with their experimental results. We make a beneficial attempt to implement distributed parallel computing on grid computing environments.


    [26] Parallel performance of domain decomposition method on distributed computing environment


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4803010&pageNumber%3D4%26queryText%3Dparallel+and+distributed+computing 

    Distributed parallel computing, which uses general-purpose workstations connected by a network as a large parallel computing resource, is one of the most promising trends in parallel computing. Over the last two decades, the growth in the use of such parallel computing environments has sustained interest in the parallel finite element method. In this work, the parallel finite element method using the domain decomposition technique has been adapted to the distributed parallel environment of networked workstations and a supercomputer. Using the developed code, several model problems are solved and the parallel performance is analyzed on a cluster of eight Pentium IV PCs and on GP7000F workstations. Finally, a very large problem with over 11 million DOFs has been solved using 1024 processors of an SR8000 supercomputer.


    [27] Distributed parallel computing architecture design philosophy for TTC evaluation with transient stability constraints


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4523512&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    This paper presents a novel calculation framework for TTC evaluation with transient stability (TS) constraints with respect to a usual or a critical contingency set. The basic solution strategy for TTC evaluation is introduced, and the whole system architecture and computing flow are given in this paper. In order to evaluate the security margin of a large-scale interconnected power system efficiently and rapidly, especially for on-line application, a novel distributed parallel computing architecture design philosophy, called the optimal workstation cluster computing model, is proposed. Furthermore, some key module design philosophies are discussed as well. The proposed distributed parallel computing platform can be easily and conveniently embedded into an existing dynamic security assessment (DSA) system or similar power market systems to implement on-line security assessment. On the other hand, the distributed parallel computing platform can perform intensive off-line simulation related to specified case studies.


    [28] RHiNET-3/SW: an 80-Gbit/s high-speed network switch for distributed parallel computing


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=946703&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    We have developed a prototype network switch, RHiNET-3/SW, for the RHiNET high-performance distributed parallel computing environment. It has eight I/O ports, and each port provides high-speed, bi-directional 10-Gbit/s-per-port parallel optical data transmission over a distance of more than 300 m. The aggregate throughput is 80 Gbit/s per board. A switch consists of a one-chip CMOS ASIC 8×8 switch LSI (SW-LSI; a 784-pin BGA package), four deskew LSIs, and eight pairs of 1.25-Gbit/s×12-channel optical interconnection modules on a single board. The switch uses a hop-by-hop retransmission mechanism and credit-based flow control to provide reliable, long-transmission-distance data communication. The deskew LSI has a skew compensation function for 10-bit parallel data channels and an 8B10B encoder/decoder. Its optical transmitter modules use an 850-nm VCSEL and a 12-channel MMF (multi-mode fiber) ribbon. RHiNET-3/SW enables high-throughput, long-distance, and flexible flow-control network communication for distributed parallel computing using commercial PCs.


    [29] A web-based parallel PDE solver generation system for distributed memory computing environments


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=884691&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    The finite element method is widely applied in many domains, such as engineering, atmology, oceanography, biology, etc. The major drawback of the finite element method is that its execution takes a lot of time and memory space. Due to its computation-intensiveness and computation-locality properties, we can use parallel processing to improve the performance of the finite element method on distributed memory computing environments. However, it is quite difficult to program the finite element method on a distributed memory computing environment. Therefore, the development of a front-end parallel partial differential equation solver generation system is important. In this paper, we develop a front-end parallel partial differential equation solver generation system based on the World Wide Web for distributed-memory computing environments, such as PC clusters and workstation clusters. With the system, users who want to use parallel computers to solve partial differential equations can use a web browser to input data and parameters. The system automatically generates the corresponding parallel code and executes it on the distributed memory computing environment. The execution result is shown in the web browser and can also be downloaded by the user.


    [30] Large-scale structural analysis using domain decomposition method on distributed parallel computing environment


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=592211&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 



    [31] Finite difference simulations of the Navier-Stokes equations using parallel distributed computing


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1250333&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    We discuss the implementation of a numerical algorithm for simulating incompressible fluid flows based on the finite difference method and designed for parallel computing platforms with distributed memory, particularly for clusters of workstations. The solution algorithm for the Navier-Stokes equations utilizes an explicit scheme for pressure and an implicit scheme for velocities, i.e., the velocity field at a new time step can be computed once the corresponding pressure is known. The parallel implementation is based on domain decomposition, where the original calculation domain is decomposed into several blocks, each of which is given to a separate processing node. All nodes then execute computations in parallel, each node on its associated subdomain. The parallel computations include initialization, coefficient generation, linear solution on the subdomain, and inter-node communication. The exchange of information across the subdomains, or processors, is achieved using the message passing interface standard, MPI. The use of MPI ensures portability across different computing platforms ranging from massively parallel machines to clusters of workstations. The execution time and speed-up are evaluated by comparing the performance with different numbers of processors. The results indicate that the parallel code can significantly improve prediction capability and efficiency for large-scale simulations.
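    A minimal 1-D domain-decomposition sketch with halo exchange, written with mpi4py (assumes a working MPI installation); a three-point smoothing stencil stands in for the actual Navier-Stokes update. Run with, for example, mpiexec -n 4 python halo.py.

# Each rank owns a block of the domain and exchanges boundary values
# (halos) with its neighbors every iteration.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = [float(rank)] * 8                    # this rank's sub-array
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(10):
    # send my edge values, receive the neighbors' edge values
    halo_l = comm.sendrecv(local[0], dest=left, source=left)
    halo_r = comm.sendrecv(local[-1], dest=right, source=right)
    lo = halo_l if halo_l is not None else local[0]
    hi = halo_r if halo_r is not None else local[-1]
    padded = [lo] + local + [hi]
    local = [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
             for i in range(1, len(padded) - 1)]

print(rank, [round(v, 3) for v in local[:3]])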


    [32] Performance analysis of parallel computing in a distributed overlay network


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6129040&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    There are many data- and computation-intensive applications that generally require very high performance and a lot of computing resources, which leads to an increase in overall execution time. Parallel computing can improve overall execution time by breaking a large program up into smaller pieces that can be executed on multiprocessor systems. Meanwhile, distributed computing offers some advantages for parallel computing, where multiple connected processors can run in parallel by contributing their computing time and memory storage. However, due to the heterogeneity of processing power in distributed computing, the effect of imbalanced workload distribution between processors is an important factor to take into consideration. In this paper, we conducted simulations of Pi value computation for a tree-based distributed system under several types of workload distributions. We measured the overall execution time to study and analyze the effect of different workload distributions. From our simulation results, we found that the increase in waiting time for a processor to receive back a result significantly impacts the overall execution time as well as the scalability of the distributed system.
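    A small parallel Monte Carlo estimate of pi, the same kind of workload simulated in the study above. The split below is balanced; skewing the chunk sizes reproduces an imbalanced workload distribution.

# Parallel Monte Carlo estimate of pi across four worker processes.
import random
from multiprocessing import Pool

def count_hits(n):
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        hits += (x * x + y * y) <= 1.0  # point falls inside the circle
    return hits

if __name__ == "__main__":
    total = 1_000_000
    chunks = [total // 4] * 4            # balanced workload split
    with Pool(4) as pool:
        hits = sum(pool.map(count_hits, chunks))
    print("pi ~=", 4.0 * hits / total)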


    [33] Parallax - A New Operating System for Scalable, Distributed, and Parallel Computing


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6008946&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    Parallax, a new operating system, implements scalable, distributed, and parallel computing to take advantage of the new generation of 64-bit multi-core processors. Parallax uses the Distributed Intelligent Managed Element (DIME) network architecture, which incorporates a signaling network overlay and allows parallelism in resource configuration, monitoring, analysis, and reconfiguration on the fly based on workload variations, business priorities, and latency constraints of the distributed software components. A workflow is implemented as a set of tasks, arranged or organized in a directed acyclic graph (DAG) and executed by a managed network of DIMEs. These tasks, depending on user requirements, are programmed and executed as loadable modules in each DIME. Parallax is implemented using assembly language at the lowest level for efficiency and provides a C/C++ programming API for higher-level programming.


    [34] A performance and portability study of parallel applications using a distributed computing testbed


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=581423&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    A case study was conducted to examine the performance and portability of parallel applications, with an emphasis on data transfer among the processors in heterogeneous environments. Several parallel test programs using MPICH, a message passing interface (MPI) library, and the Linda parallel environment were developed to analyze communication performance and portability. These programs implement loosely and tightly synchronized communication models in which each processor exchanges data with two other processors. This data-exchange pattern mimics communication in certain parallel applications using striped partitioning of the computational domain. Tests were performed on an isolated, distributed computing testbed, a live development network, and a symmetric multiprocessing computer system. All network configurations used asynchronous transfer mode (ATM) network technologies. The testbed used in the study was a heterogeneous network consisting of various workstations and networking equipment. This paper presents an analysis of the results and recommendations for designing and implementing coarse-grained, parallel, scientific applications.


    [35] Parallel computing with distributed shared data


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=47253&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    Summary form only given. The issue of ease of using shared data in a data-intensive parallel computing environment is discussed. An approach is investigated for transparently supporting data sharing in a loosely coupled parallel computing environment, where a moderate to large number of individual computing elements are connected via a high-bandwidth network without necessarily physically sharing memory. A system called VOYAGER is discussed, which serves as the underlying system facility that supervises the distributed shared virtual memory. VOYAGER allows shared-data parallel applications to take advantage of parallel and distributed processing with relative ease. The application program merely maps the shared data onto its virtual address space, replicates itself on distributed machines, and spawns appropriate execution threads; the threads are automatically given coordinated access to the shared data distributed in the network. Multiple computation threads migrate and populate the processors of a number of computing elements, making use of the multiple processors to achieve a high degree of parallelism. The low-level resource management chores are made available once and for all in the underlying facility VOYAGER, usable by many different data-intensive applications.


    [36] PAPC: A Simple Distributed Parallel Computing Framework for Mass Legacy Code Tasks


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6473633&pageNumber%3D5%26queryText%3DAndroid+Application  

    With the fast development of multi-core technology, future high performance computing systems will be systems of many multi-core processors where parallel computing is a must. However, it is not an easy task in such an environment to run legacy code programs that were developed for machines with single-core processors and achieve the desired speedup. To deal with this problem, in this paper we put forward a parallel computing framework named PAPC on the basis of ProActive, an open source middleware that aims to ease the development of parallel and distributed applications and that has excellent support for high performance computing. We explain in detail the design and implementation of PAPC and conduct some experiments to evaluate it. Experimental results show that the PAPC framework has strong scalability and good adaptability and can make full use of the high performance of multi-core processors.


    [37] Research of Distributed Parallel Computing Based on Mobile Agent


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4428202&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    This paper introduces mobile agent technology and presents a new approach to realizing distributed parallel computing based on mobile agents. With the Miller-Rabin primality test as an example, the working process and the realization method are introduced in detail. The numerical experiment shows that the speedup and the parallel efficiency of this program are high enough, and that the stability of the distributed parallel computing is improved, as well as its flexibility, expansibility, and mobility.
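    Miller-Rabin rounds with independent witness bases are naturally parallel, which is what makes the test a good candidate for distributed execution. The sketch below runs the rounds in a process pool; the base selection and round count are typical choices, not the paper's mobile-agent scheme.

# Parallel Miller-Rabin: each round tests one witness base, and the
# rounds are independent, so a pool can run them concurrently.
import random
from multiprocessing import Pool

def is_composite_witness(args):
    n, a = args
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return False                  # this base proves nothing
    for _ in range(r - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return False
    return True                       # n is definitely composite

def probably_prime(n, rounds=16):
    if n < 4:
        return n in (2, 3)
    bases = [(n, random.randrange(2, n - 1)) for _ in range(rounds)]
    with Pool() as pool:
        return not any(pool.map(is_composite_witness, bases))

if __name__ == "__main__":
    print(probably_prime(2**61 - 1))  # a Mersenne prime -> True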


    [38] Utilizing heterogeneous networks in distributed parallel computing systems


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6821695&pageNumber%3D5%26queryText%3DAndroid+Application 

    Heterogeneity is becoming quite common in distributed parallel computing systems, both in processor architectures and in communication networks. Different types of networks have different performance characteristics, while different types of messages may have different communication requirements. In this work, we analyze two techniques for exploiting these heterogeneous characteristics and requirements to reduce the communication overhead of parallel application programs executed on distributed computing systems. The performance-based path selection (PBPS) technique selects the best (lowest latency) network among several for each individual message, while the second technique aggregates multiple networks into a single virtual network. We present a general approach for applying and evaluating these techniques in a distributed computing system with multiple interprocessor communication networks. We also generate performance curves for a cluster of IBM workstations interconnected with Ethernet, ATM, and Fibre Channel networks. As we show with several of the NAS benchmarks, these curves can be used to estimate the potential improvement in communication performance that can be obtained with these techniques, given some simple communication characteristics of an application program.


    [39] The Mathworks Distributed and Parallel Computing Tools for Signal Processing Applications


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4218318&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    As requirements for technical computing applications become more complex, engineers and scientists must solve problems of increasing computational intensity that frequently outstrip the capability of their own computers. Some are distributed applications (also called coarse-grained or embarrassingly parallel applications), where the same algorithm is independently executed over and over on different input parameters. Others consist of parallel (or fine-grained) applications, which contain interdependent tasks that exchange data during the execution of the application. This article introduces the distributed and parallel computing capabilities in The MathWorks distributed computing tools and provides examples of how these capabilities are applied to signal processing applications.


    [40] A Comparative Study on the Performance of the Parallel and Distributed Computing Operation in MatLab


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5474692&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    This study describes the performance results of testing MatLab applications using the parallel computing and distributed computing toolboxes under different platforms with different hardware and operating systems. Each trial was executed keeping the hardware fixed and changing the operating system to obtain unbiased results. To standardize the benchmarking test, Fast Fourier Transform (FFT), discrete cosine transform (DCT), edge detection, and matrix multiplication algorithms were executed. The results show that leveraging multicore platforms can considerably speed up the processing of images through the use of parallel computing tools in MatLab. Two different system hardware platforms (systems 1 and 2) were used in a series of experiments. Four rounds of experiments were performed benchmarking the FFT algorithm using the parallel toolbox, changing the system platform, number of workers, image size, and number of images. The results of the ANOVA test suggest that although there is no statistical significance of the factor represented by the operating system (OS) on system 1, the OS plays a significant role on system 2. Moreover, on both systems there is statistical significance of the factors represented by the number of workers utilized and the number of images processed, yielding more than a 500% performance increase by using 8 MatLab workers on a dual quad-core machine.


    [41] Using parallel distributed reasoning for monitoring computing networks


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5680347&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    We describe a distributed reasoning system called Otto-Mate that is used to detect, reason about, and respond to incidents on a computing network. Events for monitoring computing networks occur at different system levels. Some information might relate to data, some might be operating system specific, some application or service related, and some could be network related; from each there will be compound events that describe incident effects and information about the situation context. All together there can be thousands of events per second. Today's approaches to monitoring networks are typically centralized, sending events over the network to a single engine for analysis. Centralized monitoring ultimately cannot scale to address the volume of events that one would ideally like to monitor, so today's techniques often make severe compromises in the events that they ingest. Centralized monitoring also creates a single point of failure and generates significant network load. To overcome these deficiencies, we have developed a more distributed approach: our reasoner agents can (in theory) be installed on every monitored resource, and the reasoner language (used for programming the reasoners) enables knowledge in a reasoner's working memory to be synchronized over multiple reasoners, enabling them to implement parallel distributed reasoning algorithms that can detect event patterns irrespective of whether the events are local or remote. Distributing the reasoning makes the system extremely resilient. Additionally, since the knowledge shared between the reasoning agents represents summary information, and because many on-line event correlation algorithms suppress reporting once an incident has been reported, the amount of network load needed to support the distributed monitoring can actually be reduced. To demonstrate our approach, we describe its application to the monitoring of a computing network that has been instrumented to protect it against 0-day email virus attacks.


    [42] Automated Deployment Support for Parallel Distributed Computing


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4135271&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing  



    [43] Bluetooth-Based Android Interactive Applications for Smart Living


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6118784&pageNumber%3D5%26queryText%3DAndroid+Application  



    [44] Object Recognition Architecture Using Distributed and Parallel Computing with Collaborator


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4403148&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    These days, object recognition is regarded as a sufficient condition for the essential requirements of an intelligent service robot. Under such demands, object recognition algorithms and methods have been increasing in complexity along with the increase in computational ability. Despite these developments, object recognition still consumes many computational resources, which drags down total throughput. The purpose of this paper is to suggest an object recognition software architecture that reduces processing time by applying the concepts of a component-based approach and COMET (Concurrent Object Modeling and architectural design mEThod), a computational efficiency improvement method. In COMET, the component-based approach reduces total processing time by supporting dynamic distributed and parallel processing. To enable these computations, the surplus computational resources of a nearby collaborator robot can be used for distributed computing via SHAGE, a component management framework based on COMET. Using SHAGE, in order to connect physical operation among components, a software function module should be a componentized component defined by the 'COMET component design guideline'. This paper componentizes the object recognition software function modules via this guideline and presents the object recognition architecture as a connected relationship among these components. The experimental results show a maximum 42% performance improvement compared to the original multi-feature evidence recognition framework.


    [45] A Distributed Parallel Algorithm for Web Page Inverted Indexes Construction on the Cluster Computing Systems


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5231316&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    To address the low indexing speed of the serial algorithm for Web page inverted index construction, and noting that the merge-sort algorithm fits the theory of scheduling divisible loads in a parallel and distributed system, the paper proposes a new parallel algorithm based on a triple sort-merge for Web page inverted index construction. The algorithm deals in a distributed, parallel fashion with the two most time-consuming tasks in inverted index construction: parsing terms and sorting the term postings. Each term is represented as a triple, and the time complexity of the algorithm is analyzed. The paper also applies a Java middleware named ProActive to design and implement a distributed parallel Web page indexer named P_Indexer on cluster computing systems. The algorithm analysis and experimental results show that the parallel algorithm achieves high efficiency and good scalability.
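
    The abstract names the triple representation but does not spell it out; a common reading is (term, docId, position). The following minimal, single-machine sketch of the parse and sort-merge phases assumes that layout, with an ordinary in-memory sort standing in for the distributed merge and ProActive omitted entirely (requires Java 16+ for records).

```java
import java.util.*;
import java.util.stream.*;

// Minimal sketch of inverted-index construction by sorting term triples.
// Assumes the triple layout (term, docId, position); the paper's exact
// triple definition is not given in the abstract above.
public class TripleIndexer {
    record Posting(String term, int docId, int pos) {}

    public static void main(String[] args) {
        String[] docs = { "parallel distributed computing", "parallel indexer" };

        // Parse phase: emit one triple per token occurrence
        // (in the paper, this step is parallelized per document).
        List<Posting> postings = new ArrayList<>();
        for (int d = 0; d < docs.length; d++) {
            String[] toks = docs[d].split("\\s+");
            for (int p = 0; p < toks.length; p++) postings.add(new Posting(toks[p], d, p));
        }

        // Sort-merge phase: sorting by (term, docId, pos) groups each term's
        // posting list contiguously; the paper distributes this sort.
        postings.sort(Comparator.comparing(Posting::term)
                                .thenComparingInt(Posting::docId)
                                .thenComparingInt(Posting::pos));

        // Collapse the sorted run into term -> [docId, pos] posting lists.
        Map<String, List<int[]>> index = new LinkedHashMap<>();
        for (Posting p : postings)
            index.computeIfAbsent(p.term(), k -> new ArrayList<>())
                 .add(new int[]{p.docId(), p.pos()});

        index.forEach((t, l) -> System.out.println(t + " -> " +
                l.stream().map(Arrays::toString).collect(Collectors.joining(", "))));
    }
}
```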


    [46] Parallel distributed computing application for iterative harmonic and interharmonic analysis in large scale multi-bus multi-convertor electrical power systems


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=826660&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    The paper presents an efficient computing system for performing harmonic and interharmonic distortion analysis in large-scale multi-bus, multi-convertor electrical power systems. The iterative harmonic and interharmonic analysis algorithm (IHIA), based on automated ATP-EMTP simulations of single convertors or groups of convertors and operating in a parallel distributed computing environment, is implemented.
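
    As a rough structural sketch only: the real IHIA drives external ATP-EMTP simulations, which cannot be reproduced here, so the placeholder below fakes one per-convertor simulation and keeps just the iterate-until-convergence shape, with each iteration's convertor runs executed in parallel. All names, numbers, and the feedback rule are illustrative assumptions.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the iterative structure only: each iteration runs the (here
// faked) per-convertor simulations in parallel, then checks convergence.
public class IhiaSketch {
    // Hypothetical stand-in for one automated ATP-EMTP convertor run.
    static double simulateConvertor(int id, double busVoltage) {
        return busVoltage * 0.01 / (id + 1); // placeholder harmonic current
    }

    public static void main(String[] args) throws Exception {
        int convertors = 4;
        double busVoltage = 1.0, prevDistortion = Double.MAX_VALUE;
        ExecutorService pool = Executors.newFixedThreadPool(convertors);

        for (int iter = 0; iter < 50; iter++) {
            List<Future<Double>> runs = new ArrayList<>();
            for (int c = 0; c < convertors; c++) {
                final int id = c; final double v = busVoltage;
                runs.add(pool.submit(() -> simulateConvertor(id, v)));
            }
            double distortion = 0;
            for (Future<Double> r : runs) distortion += r.get(); // gather all runs

            if (Math.abs(distortion - prevDistortion) < 1e-9) break; // converged
            prevDistortion = distortion;
            busVoltage -= 0.1 * distortion; // feed harmonic result back into the bus solution
        }
        pool.shutdown();
        System.out.println("final distortion estimate: " + prevDistortion);
    }
}
```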


    [47] A distributed parallel genetic local search in distributed computing environments


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1299811&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing

    We propose a parallel and distributed computation of genetic local search with an irregular topology in distributed environments. The proposed scheme is implemented with tree network topologies in which each computing element carries out genetic local search on its own chromosome set and communicates with its parent whenever the best solution of a generation is improved. We evaluate the proposed algorithm in a grid simulation environment implemented on a PC cluster. We test the algorithm on four topology types: star, line, balanced binary tree and sided binary tree, and find that the topology's depth and the number of independent search nodes influence the evolution process. Furthermore, we observe in the experiments that a 'Reset' mechanism applied to the population after convergence is very useful in grid computing environments.
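
    The abstract describes the communication pattern but not the chromosome encoding or the objective, so the sketch below substitutes a OneMax bitstring objective and single-bit hill climbing, and simulates a balanced binary tree in one process; only the report-improvements-to-parent mechanism reflects the paper's scheme.

```java
import java.util.*;

// Sketch of the tree-topology idea: every node evolves its own chromosome
// and pushes an improved best individual up to its parent. The fitness
// function, local-search move, and topology below are illustrative stand-ins.
public class TreeGls {
    static final Random RNG = new Random(42);

    static int fitness(boolean[] x) { // OneMax as a placeholder objective
        int f = 0; for (boolean b : x) if (b) f++; return f;
    }

    static boolean[] localSearch(boolean[] x) { // single-bit hill climbing
        boolean[] y = x.clone();
        y[RNG.nextInt(y.length)] ^= true;
        return fitness(y) >= fitness(x) ? y : x;
    }

    public static void main(String[] args) {
        int nodes = 7, len = 32;
        int[] parent = {-1, 0, 0, 1, 1, 2, 2};     // balanced binary tree
        boolean[][] best = new boolean[nodes][len]; // each node's best chromosome

        for (int gen = 0; gen < 200; gen++) {
            for (int n = 0; n < nodes; n++) {
                boolean[] improved = localSearch(best[n]);
                if (fitness(improved) > fitness(best[n])) {
                    best[n] = improved;
                    int p = parent[n];              // report improvement upward
                    if (p >= 0 && fitness(improved) > fitness(best[p]))
                        best[p] = improved.clone();
                }
            }
        }
        System.out.println("root best fitness = " + fitness(best[0]));
    }
}
```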


    [48] Smartphone application for fault recognition:


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6305105&pageNumber%3D64%26queryText%3DAndroid+Application  

    This system aims to provide a low-cost means of monitoring a vehicle's performance and tracking it by communicating the obtained data to a mobile device via Bluetooth. The results can then be viewed by the user to monitor fuel consumption and other vital vehicle electromechanical parameters. Data can also be sent to the vehicle's maintenance department, where it may be used to detect and predict faults in the vehicle. This is done by collecting live readings from the engine control unit (ECU) using the vehicle's built-in on-board diagnostics (OBD) system. An electronic hardware unit is built to carry out the interface between the vehicle's OBD system and a Bluetooth module, which in turn communicates with an Android-based mobile device. The mobile device is capable of transmitting data to a server using a cellular Internet connection.
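
    The abstract does not specify the OBD adapter; assuming a common ELM327-style unit that answers mode-01 queries with ASCII hex lines, the standard decodings for two well-known PIDs (0C engine RPM, 0D vehicle speed) look like this. The Bluetooth socket handling is omitted, so the sketch works on raw response lines.

```java
// Minimal decoder for two standard OBD-II mode-01 PIDs, assuming an
// ELM327-style adapter that returns space-separated hex bytes such as
// "41 0C 1A F8". Bluetooth transport is omitted; input is a raw line.
public class ObdDecoder {
    static double engineRpm(String line) {
        String[] b = line.trim().split("\\s+");
        // "41 0C A B": 41 = mode-01 response, 0C = engine-RPM PID
        int a = Integer.parseInt(b[2], 16), bb = Integer.parseInt(b[3], 16);
        return (256 * a + bb) / 4.0;          // standard RPM formula
    }

    static int vehicleSpeedKmh(String line) {
        String[] b = line.trim().split("\\s+");
        return Integer.parseInt(b[2], 16);    // PID 0D: single byte, km/h
    }

    public static void main(String[] args) {
        System.out.println(engineRpm("41 0C 1A F8"));     // 1726.0 rpm
        System.out.println(vehicleSpeedKmh("41 0D 3C"));  // 60 km/h
    }
}
```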


    [49] Distributed computing with hierarchical master-worker paradigm for parallel branch and bound algorithm


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1199364&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing 

    This paper discusses the impact of the hierarchical master-worker paradigm on the performance of an application program that solves an optimization problem with a parallel branch and bound algorithm on a distributed computing system. The application program addressed here solves the BMI Eigenvalue Problem, an optimization problem that minimizes the greatest eigenvalue of a bilinear matrix function. This paper proposes a parallel branch and bound algorithm to solve the BMI Eigenvalue Problem with the hierarchical master-worker paradigm. The experimental results showed that the conventional algorithm with the flat master-worker paradigm significantly degraded performance on a Grid test bed where computing resources were distributed over a WAN behind a firewall, whereas the hierarchical master-worker paradigm sustained good performance.
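
    The BMI eigenvalue computation itself is beyond this sketch, so the example below keeps only the paradigm's shape on a toy 0/1 knapsack: a master-owned queue of subproblems, a worker pool acting as one "cluster", and a shared incumbent used for pruning. In the hierarchical variant of the paper, this drain loop is replicated per cluster behind a top-level master; all data and names here are made up.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

// Structural sketch of master-worker branch and bound on a toy 0/1 knapsack.
public class HierBnB {
    static final int[] W = {2, 3, 4, 5}, V = {3, 4, 5, 6};
    static final int CAP = 8;
    static final AtomicInteger bestValue = new AtomicInteger(0); // shared incumbent
    static final AtomicInteger pending = new AtomicInteger(0);   // unexpanded nodes

    record Node(int depth, int weight, int value) {}

    static void expand(Node n, BlockingQueue<Node> queue) {
        // Optimistic bound: current value plus every remaining item's value.
        int bound = n.value();
        for (int i = n.depth(); i < W.length; i++) bound += V[i];
        if (bound <= bestValue.get()) return;                  // prune

        if (n.depth() == W.length) {
            bestValue.accumulateAndGet(n.value(), Math::max);  // leaf: update incumbent
            return;
        }
        pending.incrementAndGet();
        queue.add(new Node(n.depth() + 1, n.weight(), n.value()));  // branch: skip item
        if (n.weight() + W[n.depth()] <= CAP) {
            pending.incrementAndGet();
            queue.add(new Node(n.depth() + 1, n.weight() + W[n.depth()],
                               n.value() + V[n.depth()]));          // branch: take item
        }
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<Node> queue = new LinkedBlockingQueue<>();
        pending.set(1);
        queue.add(new Node(0, 0, 0));
        ExecutorService cluster = Executors.newFixedThreadPool(2); // one "cluster"

        // Master loop: hand queued subproblems to the cluster until none remain.
        while (pending.get() > 0) {
            Node n = queue.poll(10, TimeUnit.MILLISECONDS);
            if (n == null) continue;                   // workers still expanding
            cluster.submit(() -> { expand(n, queue); pending.decrementAndGet(); });
        }
        cluster.shutdown();
        cluster.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println("best value = " + bestValue.get()); // 10 for this data
    }
}
```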


    [50] A framework for software development for distributed parallel computing systems


    http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=217488&pageNumber%3D5%26queryText%3Dparallel+and+distributed+computing

    Measurements of vital signs and behavioral patterns can be translated into accurate predictors of health risk, even at an early stage, and can be combined with alarm-triggering systems in order to initiate the appropriate actions. The paper presents the design and implementation of a mobile TeleCare system based on a smart wrist-worn device with a non-obtrusive sensing module for cardiac, respiratory and motor activity, a microcontroller platform for primary processing of the sensor data, and wireless communication using the Bluetooth protocol. Advanced data processing, data management, human-computer interfacing and data communication are implemented using a smartphone running the Android operating system (OS). A Web-based TeleCare health information system was implemented, characterized by the following functionalities: data synchronization with the smartphone, advanced data processing, and data presentation assuring comprehensive data analysis and evidence-based health management, as well as remote assistance of patients by doctors and nurses. Experimental results associated with vital-sign sensing and the software implementation are included in the paper.


    More about Parallel and Distributed Computing


    Android is an operating system based on the Linux kernel with a user interface based on direct manipulation, designed primarily for touchscreen mobile devices such as smartphones and tablet computers, with variations designed for the car, wrist, and television. The operating system uses touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching, and reverse pinching to manipulate on-screen objects, along with a virtual keyboard. Despite being primarily designed for touchscreen input, it has also been used in televisions, games consoles, digital cameras, and other electronics. As of 2011, Android has the largest installed base of any mobile OS and, as of 2013, its devices also sell more than Windows, iOS and Mac OS devices combined. As of July 2013 the Google Play store has had over 1 million Android apps published, and over 50 billion apps downloaded. A developer survey conducted in April–May 2013 found that 71% of mobile developers develop for Android.

    Android's source code is released by Google under open source licenses, although most Android devices ultimately ship with a combination of open source and proprietary software. Initially developed by Android, Inc., which Google backed financially and later bought in 2005, Android was unveiled in 2007 along with the founding of the Open Handset Alliance, a consortium of hardware, software, and telecommunication companies devoted to advancing open standards for mobile devices. Android is popular with technology companies which require a ready-made, low-cost and customizable operating system for high-tech devices. Android's open nature has encouraged a large community of developers and enthusiasts to use the open-source code as a foundation for community-driven projects, which add new features for advanced users or bring Android to devices which were officially released running other operating systems. The operating system's success has made it a target for patent litigation as part of the so-called "Smartphone wars" between technology companies.

    The response to user input is designed to be immediate and provides a fluid touch interface, often using the vibration capabilities of the device to provide haptic feedback to the user. Internal hardware such as accelerometers, gyroscopes and proximity sensors are used by some applications to respond to additional user actions, for example adjusting the screen from portrait to landscape depending on how the device is oriented, or allowing the user to steer a vehicle in a racing game by rotating the device, simulating control of a steering wheel.

    Android devices boot to the home screen, the primary navigation and information point on the device, which is similar to the desktop found on PCs. Android home screens are typically made up of app icons and widgets; app icons launch the associated app, whereas widgets display live, auto-updating content such as the weather forecast, the user's email inbox, or a news ticker directly on the home screen. A home screen may be made up of several pages that the user can swipe back and forth between, though Android's home screen interface is heavily customisable, allowing the user to adjust the look and feel of the device to their tastes. Third-party apps available on Google Play and other app stores can extensively re-theme the home screen, and even mimic the look of other operating systems, such as Windows Phone. Most manufacturers, and some wireless carriers, customise the look and feel of their Android devices to differentiate themselves from their competitors.

    Present along the top of the screen is a status bar, showing information about the device and its connectivity. This status bar can be "pulled" down to reveal a notification screen where apps display important information or updates, such as a newly received email or SMS text, in a way that does not immediately interrupt or inconvenience the user. Notifications are persistent until read (by tapping, which opens the relevant app) or dismissed by sliding them off the screen. Beginning with Android 4.1, "expanded notifications" can display expanded details or additional functionality; for instance, a music player can display playback controls, and a "missed call" notification provides buttons for calling back or sending the caller an SMS message. Android provides the ability to run applications which change the default launcher, and hence the appearance and externally visible behaviour of Android. These appearance changes include a multi-page dock or no dock, and many more changes to fundamental features of the user interface.

    KaaShiv InfoTech offers world-class Final Year Projects for BE, ME, MCA, MTech, Software Engineering and other students in Anna Nagar, Chennai.




    Website Details:


    Inplant Training:


    http://inplant-training.org/
    http://www.inplanttrainingchennai.com/
    http://inplanttraining-in-chennai.com/

    Internship:


    http://www.internshipinchennai.in/
    http://www.kernelmind.com/
    http://www.kaashivinfotech.com/