For inplant/internship training requests, please download the registration form, fill in the details, and send it back to kaashiv.info@gmail.com

Testing

KaaShiv InfoTech, Number 1 Final Year Project experts in Chennai.


Description


Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an independent view of the software, allowing the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, executing a program or application with the intent of finding software bugs. Software testing can be stated as the process of validating and verifying that a computer program, application, or product meets the requirements that guided its design and development, works as expected, can be implemented with the same characteristics, and satisfies the needs of stakeholders. Depending on the testing method employed, software testing can be carried out at any time in the software development process. Traditionally, most of the test effort occurs after the requirements have been defined and the coding process has been completed, but in agile approaches most of the test effort is ongoing. As such, the test methodology is governed by the chosen software development methodology.
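As a minimal illustration of the validation-and-verification idea above, the hypothetical sketch below uses Python's unittest module to check that a small function meets a stated requirement; the function and the requirement are invented for the example.

import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_meets_requirement(self):
        # Requirement: a 25% discount on 200.00 yields 150.00
        self.assertEqual(apply_discount(200.00, 25), 150.00)

    def test_rejects_invalid_input(self):
        # Verification: invalid inputs must be reported, not silently accepted
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()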


Algorithm


  •  Fast static compaction algorithm
  •  Branch and Cut algorithm
  •  Probabilistic algorithm for testing
  •  Genetic algorithm
  •  Unit-testing a complex algorithm
  •  Primality test
  •  Clever algorithms
  •  Search algorithm
  •  Orthogonal array testing algorithm
  •  Quantum-inspired genetic algorithm
  •  Comprehensive testing algorithms
  •  Complex algorithms
  •  Node reduction testing algorithm
  •  Software quality testing algorithm
  •  Partitioning algorithm in testing
  •  Cryptographic algorithm testing
  •  Relational symbolic execution testing algorithm
  •  Clonal selection testing algorithm


1. Machine Translated Validation in Testing


Machine Translated Evaluation Techniques for validating a web-based testing framework.



Abstract


Automatic test generation based on the user's specification, providing an appropriate test strategy, performing intelligent evaluation, and updating the tests accordingly is the core idea of this project. The existing system uses a 0-1 integer linear programming technique (which has the sparse matrix property), working from the fractional optimal solution identified from the nature of the user's specification. The proposed approach uses the primal-dual interior point algorithm, one of the most efficient algorithms for solving the linear programming relaxation problem. This technique provides an efficient way of discarding unevaluated nodes during node selection. By filtering selected and unselected nodes, our system provides an effective branching strategy that reduces the size of the branch-and-bound search tree. At each stage, the feasibility of finding an optimized solution for the problem is checked. Based on the finalized state of the problem, an optimized solution is provided for the user to validate the data; in addition, the performance of the candidates is evaluated and reported to the user.
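The abstract above combines a linear programming relaxation with branch-and-bound over 0-1 variables. The sketch below is a minimal, generic illustration of that combination in Python, assuming scipy is available; it solves the relaxation with scipy.optimize.linprog and branches on the most fractional variable. It is not the BAC-STG algorithm itself, only the general technique on an invented toy problem.

import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound_01(c, A_ub, b_ub):
    """Maximize c.x subject to A_ub.x <= b_ub with x in {0,1}^n,
    by solving LP relaxations and branching on fractional variables."""
    n = len(c)
    best_val, best_x = -math.inf, None
    # Each node of the search tree fixes some variables via per-variable (low, high) bounds.
    stack = [[(0, 1)] * n]
    while stack:
        bounds = stack.pop()
        # linprog minimizes, so negate the objective to maximize.
        res = linprog(-np.asarray(c, dtype=float), A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not res.success:
            continue                      # infeasible node: prune
        if -res.fun <= best_val + 1e-9:
            continue                      # relaxation bound cannot beat incumbent: prune
        x = res.x
        frac = [abs(v - round(v)) for v in x]
        j = int(np.argmax(frac))
        if frac[j] < 1e-6:                # relaxation is already integral: new incumbent
            best_val, best_x = -res.fun, [int(round(v)) for v in x]
            continue
        for fixed in (0, 1):              # branch: fix x_j to 0 and to 1
            child = list(bounds)
            child[j] = (fixed, fixed)
            stack.append(child)
    return best_val, best_x

# Toy example: pick items maximizing value under a single capacity constraint.
value = [8, 11, 6, 4]
A = [[5, 7, 4, 3]]
b = [14]
print(branch_and_bound_01(value, A, b))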



IEEE Title


Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming



IEEE Abstract


Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based testing is static test generation (STG), which generates a test paper automatically according to user specification based on multiple assessment criteria. The generated test paper can then be attempted over the web by users for assessment purposes. Generating high-quality test papers under multiobjective constraints is a challenging task. It is a 0-1 integer linear programming (ILP) problem that is not only NP-hard but also needs to be solved efficiently. Current popular optimization software and heuristic-based intelligent techniques are ineffective for STG, as they generally cannot guarantee high-quality solutions when solving the large-scale 0-1 ILP of STG. To that end, we propose an efficient ILP approach for STG, called branch-and-cut for static test generation (BAC-STG). Our experimental study on various data sets and a user evaluation of generated test paper quality have shown that the BAC-STG approach is more effective and efficient than the current STG techniques.
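To make the STG formulation above concrete, here is a small, hypothetical illustration in Python: each question has a time and a relevance score, and the goal is to select a subset whose total time stays within a limit while maximizing relevance. It is enumerated by brute force only because the example question bank is tiny; all data are invented, and real STG instances require an ILP solver.

from itertools import product

# Hypothetical question bank: (question id, time in minutes, relevance score).
questions = [("Q1", 5, 0.9), ("Q2", 8, 0.7), ("Q3", 4, 0.6),
             ("Q4", 10, 0.95), ("Q5", 6, 0.5)]
TIME_LIMIT = 20

best_score, best_paper = -1.0, None
# Brute-force the 0-1 decision vector; each bit says whether a question is included.
for choice in product((0, 1), repeat=len(questions)):
    total_time = sum(x * q[1] for x, q in zip(choice, questions))
    score = sum(x * q[2] for x, q in zip(choice, questions))
    if total_time <= TIME_LIMIT and score > best_score:
        best_score = score
        best_paper = [q[0] for x, q in zip(choice, questions) if x]

print(best_paper, best_score)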


Implementation




ScreenShot




2. Generation Paradigm in Software Testing


An effective test suggestion/generation paradigm for applications that access web services.


Abstract


Web services are XML-based information exchange systems that use the Internet for direct application-to-application interaction. These systems can include programs, objects, messages, or documents. In recent trends, applications play a vital role on smart phones, and they access information from web services exposed on the Internet. A testing strategy to check the correctness of the software under study is introduced here under three major testing aspects: black-box, gray-box and white-box testing. Black-box testing usually tests the external functionality of the software, whereas white-box testing tests its internal functionality; gray-box testing is the combination of the two and tests the integration of the software. In the existing system, the application that accesses the web services is tested under the three testing aspects, producing a problem statement by analysing and identifying the conflicts that occur. The main drawback of the existing system is that it does not suggest any rectification strategy for the problem statements identified by the three aspects of testing. Hence a system is proposed to test the functionality with the three aspects of testing and to provide suggestions for rectifying the identified conflicts, which in turn yields a more effective application for use on smart phones.
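As a small, hypothetical illustration of the black-box aspect described above, the sketch below tests a made-up client function against a stubbed web service using Python's unittest.mock, checking only external behaviour (inputs and outputs) rather than internals; the function, endpoint and response format are all assumptions for the example.

import unittest
from unittest.mock import Mock

def fetch_price(client, item_id):
    """Hypothetical code under test: asks a web service client for an item price."""
    response = client.get(f"/items/{item_id}/price")
    if response["status"] != 200:
        raise RuntimeError("service error")
    return response["price"]

class FetchPriceBlackBoxTest(unittest.TestCase):
    def test_returns_price_on_success(self):
        fake_client = Mock()
        fake_client.get.return_value = {"status": 200, "price": 42.5}
        self.assertEqual(fetch_price(fake_client, "A1"), 42.5)

    def test_raises_on_service_error(self):
        fake_client = Mock()
        fake_client.get.return_value = {"status": 500}
        with self.assertRaises(RuntimeError):
            fetch_price(fake_client, "A1")

if __name__ == "__main__":
    unittest.main()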


IEEE Title


Efficient Storage and Processing of High-Volume Network Monitoring Data



IEEE Abstract


Monitoring modern networks involves storing and transferring huge amounts of data. To cope with this problem, in this paper we propose a technique that allows us to transform the measurement data into a representation format meeting two main objectives at the same time. Firstly, it allows a number of operations to be performed directly on the transformed data with a controlled loss of accuracy, thanks to the mathematical framework it is based on. Secondly, the new representation has a small memory footprint, allowing us to reduce the space needed for data storage and the time needed for data transfer. To validate our technique, we perform an analysis of its performance in terms of accuracy and memory footprint. The results show that the transformed data closely approximates the original data (within 5% relative error) while achieving a compression ratio of 20%; the storage footprint can also be gradually reduced towards that of state-of-the-art compression tools, such as bzip2, if a higher approximation is allowed. Finally, a sensitivity analysis shows that the technique allows the accuracy to be traded off.
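The trade-off described in this abstract, controlled accuracy loss in exchange for a smaller footprint, can be mimicked in a few lines. The sketch below is only a generic illustration and not the paper's technique: it quantizes a list of synthetic measurements with a chosen step and compares the resulting relative error against the compressed size under zlib.

import random
import struct
import zlib

random.seed(0)
measurements = [random.uniform(100, 1000) for _ in range(10_000)]  # synthetic samples

def footprint(values):
    """Size in bytes after packing as doubles and compressing with zlib."""
    return len(zlib.compress(struct.pack(f"{len(values)}d", *values)))

baseline = footprint(measurements)
for step in (0.01, 1.0, 10.0):            # coarser quantization step = more accuracy loss
    quantized = [round(v / step) * step for v in measurements]
    rel_err = max(abs(q - v) / v for q, v in zip(quantized, measurements))
    ratio = footprint(quantized) / baseline
    print(f"step={step}: max relative error={rel_err:.4%}, compressed size ratio={ratio:.2f}")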


Implementation




ScreenShot




Related URLs for reference



[1] Static Testing as a Service on Cloud


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6550468&queryText%3Dtesting



Static testing is a form of software testing in which the software need not be executed, and it can find valuable software defects in the early stages of the software development cycle. In contrast to dynamic testing, static testing performs the code walkthrough automatically instead of in the traditional manual way. It is primarily syntax checking of the code, and this kind of software testing tool can work alone without software deployment under a strict hardware environment, so it is very convenient and hardware-saving. As it might be a waste of cost to own various static testing tools and employ a large number of technicians trained to use them skillfully, many small or medium sized enterprises choose a third-party testing group for the whole or most of the testing work, which is quite common in industry. Since static testing does little harm to the testing environment in contrast to dynamic testing, it is more convenient and easier to build and maintain a public software testing platform offering a static testing service supplied by a built-in static testing tool, which can have many static testing jobs running at one time for different customers who access it via the Internet. In this way, customers can cut their costs on software testing and get testing results of high quality. This paper presents a cloud-based platform architecture offering a static testing service that can be accessed via popular Internet browsers such as IE and Firefox. According to the design, customers compress the software to be tested, upload this package to the platform with the help of a browser, start the testing execution step by step through the web page, and finally download the testing result from the remote server. This architecture now has an implementation in use, which is built on Hadoop and HBase, both deployed on a cluster of servers whose operating systems are Windows, using the MapReduce implementation built into Hadoop to distribute testing tasks over different servers, and with a friendly user-interface web site built on Tomcat. The static testing tool offering the static testing service on this platform is called Defect Testing System (DTS), a mature static testing tool that works on ordinary Windows OS and is usually installed standalone, developed by Beijing University of Posts and Telecommunications (BUPT).
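As a tiny illustration of the kind of automated code walkthrough (syntax checking) that such a static-testing service performs, the sketch below, a generic example rather than the DTS tool described above, parses Python source files with the standard ast module and reports syntax errors without executing anything.

import ast
import sys
from pathlib import Path

def static_check(root):
    """Parse every .py file under root and report syntax errors without running the code."""
    defects = []
    for path in Path(root).rglob("*.py"):
        try:
            ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        except SyntaxError as err:
            defects.append(f"{path}:{err.lineno}: {err.msg}")
    return defects

if __name__ == "__main__":
    problems = static_check(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("\n".join(problems) or "No syntax defects found.")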



[2] A systematic and flexible approach for testing future mobile networks by exploiting a wrap-around testing methodology


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6476882&queryText%3Dtesting



The testing process of state-of-the-art telecommunication equipment is a demanding task requiring multiple testing phases and significant amounts of resources. This article introduces a testing concept that enables the same testing environment to be used throughout the development lifecycle of telecommunication equipment. Moreover, the testing approach is designed to exploit the salient features of the upcoming IMT-Advanced systems in order to make the testing process more efficient. At the heart of this wrap-around testing approach are a few multi-purpose network and radio channel emulators which can be configured to support all the testing phases from unit testing to system testing. By utilizing the same testing tools in all the testing phases, the amount of resources required for test environment maintenance and testing tool training is minimized. In addition, the extensive use of simulations makes the testing environment very flexible when it comes to studying the new features of the most recent telecommunication standards. In order to verify the feasibility of the devised testing approach, this article also contains a case study example, in which an implementation of the wrap-around testing environment is built from commercial testing tools. In the presented case study, the testing environment implementation is used for throughput testing of an LTE eNB in SISO and MIMO configurations. The results obtained are compared to the theoretical performance values of the technology and, together with earlier studies, demonstrate that a testing environment based on the wrap-around testing concept can be used in different testing phases during LTE eNB development.



[3] A Model of Third-Party Integration Testing Process for Foundation Software Platform


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4709144&queryText%3Dtesting



FSP, short for foundation software platform, refers to the supporting foundation of an application system. Third-party integration testing is an important method to ensure the quality of FSP. The integration testing of FSP involves collaboration among many vendors and covers functionality testing, compatibility testing, performance testing, etc. Furthermore, foundation software usually has a very large code size and is written in different languages. As a result, the integration testing of FSP is concerned with numerous and complicated testing objects, methods and processes. So there is an urgent need for an integration testing process model fitting the properties of FSP to guide and organize all the testing tasks and testing activities. Combining the properties of FSP with existing traditional software testing processes, an integration testing process model for FSP is proposed, which can be summarized as "metadata-based, bug-driven, iterative improvement". The architecture of an integration testing engineering environment for FSP is also presented. The model has been used in the integration testing and maintenance of domestic FSP in China.



[4] A Comparison of Structural Testing Strategies Based on Subdomain Testing and Random Testing


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5261018&queryText%3Dtesting



Both partition testing and random testing methods are commonly followed practices for the selection of test cases. For partition testing, the program's input domain is divided into subsets, called subdomains, and one or more representatives from each subdomain are selected to test the program. In random testing, test cases are selected from the entire program's input domain randomly. The main aim of the paper is to compare the fault-detecting ability of partition testing and random testing methods. The results of comparing the effectiveness of partition testing and random testing may be surprising to many people. Even when partition testing is better than random testing at finding faults, the difference in effectiveness is marginal. Using some effectiveness metrics for testing and some partitioning schemes, this paper investigates formal conditions for partition testing to be better than random testing and vice versa.
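A quick way to build intuition for this comparison is a Monte Carlo sketch like the one below; it is an invented toy model, not the paper's analysis. A fault occupies a small region of one subdomain, and we estimate the probability that a fixed test budget detects it under one-test-per-subdomain sampling versus purely random sampling.

import random

random.seed(1)
DOMAIN = 1_000                 # input domain: integers 0..999
SUBDOMAINS = [range(i, i + 100) for i in range(0, DOMAIN, 100)]  # 10 equal subdomains
FAULTY = set(range(430, 440))  # fault triggered by 10 inputs inside one subdomain
BUDGET = 10                    # total test cases per strategy
TRIALS = 20_000

def detects(tests):
    return any(t in FAULTY for t in tests)

partition_hits = random_hits = 0
for _ in range(TRIALS):
    # Partition testing: one representative drawn from each subdomain.
    partition_tests = [random.choice(sub) for sub in SUBDOMAINS]
    # Random testing: the same budget drawn uniformly from the whole domain.
    random_tests = [random.randrange(DOMAIN) for _ in range(BUDGET)]
    partition_hits += detects(partition_tests)
    random_hits += detects(random_tests)

print("partition testing detection rate:", partition_hits / TRIALS)
print("random testing detection rate:", random_hits / TRIALS)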


[5] Biometric Interagency Testing & Evaluation Schema (BITES)


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6459907&queryText%3Dtesting



This paper addresses the concepts of reusable biometric testing in a general sense, and then describes the US Government initiative to establish a mechanism to facilitate sharing of biometric testing information both within the government and with stakeholders. The fundamental motivation for promoting reuse of biometric testing information is to achieve cost avoidance. If a well-defined test has been successfully completed and documented by a trusted party, then the results of that testing should be sufficient to allow other consumers of that product to rely on that test, and thereby avoid the cost of repeating that testing. The extent of reusability depends on the type of testing being conducted. The most straightforward type of testing suited for reuse is conformance testing, such as conformance to American National Standards Institute/National Institute of Standards and Technology (ANSI/NIST) or International Organization for Standardization (ISO) standards. These tests are typically automated and are fully repeatable. Biometric performance testing using the Technology Testing approach is similarly repeatable and easily reused given a fixed set of biometric samples. Biometric performance testing using the Scenario Testing approach is quite different in that it is inherently not repeatable due to the use of human test subjects, and not easily reusable. These tests are also typically expensive. There are several notable examples of testing programs for which the results have demonstrated reusability. One of the first and most visible may be the Federal Bureau of Investigation (FBI) Appendix F Certification of fingerprint image quality, supported by the FBI for procurement of livescan fingerprint devices. There are fundamental prerequisites for reusable testing. First, there is a need for agreement on the method/procedure for conducting the testing and reporting the results. Secondly, the methods must be "open", and additionally, the product must be tested by a trusted party. In order for reusable testing to work, the participants in a test must have a willingness and the authority to share the results, and establish a common level of integration. In order for reusability to succeed, there must be a capability to disseminate the information. The United States Government (USG) has established an effort to develop a repository for biometrics test methods and successfully completed test results - "BITES", the Biometric Interagency Testing and Evaluation Schema - to promote efficient and effective reuse of biometric testing information.


[6] 29119-2-2013 - Software and systems engineering - Software testing - Part 2: Test processes


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6588543&queryText%3Dtesting



The purpose of the ISO/IEC/IEEE 29119 series of software testing standards is to define an internationally-agreed set of standards for software testing that can be used by any organization when performing any form of software testing. ISO/IEC/IEEE 29119-2 comprises test process descriptions that define the software testing processes at the organizational level, test management level and dynamic test levels. It supports dynamic testing, functional and non-functional testing, manual and automated testing, and scripted and unscripted testing. The processes defined in ISO/IEC/IEEE 29119-2 can be used in conjunction with any software development lifecycle model. Since testing is a key approach to risk-mitigation in software development, ISO/IEC/IEEE 29119-2 follows a risk-based approach to testing. Risk-based testing is a common industry approach to strategizing and managing testing. Risk-based testing allows testing to be prioritized and focused on the most important features and functions.



[7] A tool for data flow testing using evolutionary approaches (ETODF)


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6743535&queryText%3Dtesting



Software testing is one of the most important phases of the software development lifecycle. Software testing can be categorized into two major types: white-box testing and black-box testing. Data flow testing is a white-box testing technique that uses both the flow of control and the flow of data through the program for testing. Evolutionary testing selects and generates test data by applying optimizing search techniques. This paper discusses the architecture and implementation of an automated tool for data flow testing that applies a genetic algorithm for the automatic generation of test paths based on selected data flow testing criteria. Our tool generates a random initial population of test paths, and new paths are then generated by applying a genetic algorithm according to the selected data flow testing criteria. A fitness function in the tool evaluates each chromosome (path) based on the selected data flow testing criteria and computes its fitness. We have applied one-point crossover and mutation operators for the generation of new paths based on fitness value. The proposed research tool, called ETODF, is a continuation of our previous research work [6] on data flow testing using evolutionary approaches. The tool ETODF (evolutionary testing of data flow) has been implemented in Java. In experiments, the implemented tool achieved much better results compared to random testing.
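The evolutionary idea in this abstract can be sketched generically: encode a candidate test path as a vector of branch decisions, score it by how many def-use pairs it covers, and evolve the population with one-point crossover and mutation. The toy control-flow coverage model below is invented purely for illustration and is not the ETODF tool.

import random

random.seed(7)
NUM_BRANCHES = 6

def covered_du_pairs(decisions):
    """Toy coverage model: which def-use pairs a path (branch-decision vector) exercises."""
    covered = set()
    if decisions[0] == 1: covered.add("x:def1-use3")
    if decisions[0] == 1 and decisions[2] == 0: covered.add("x:def1-use5")
    if decisions[1] == 0: covered.add("y:def2-use4")
    if decisions[3] == 1 and decisions[4] == 1: covered.add("z:def4-use6")
    if decisions[5] == 0: covered.add("y:def2-use6")
    return covered

def fitness(path):
    return len(covered_du_pairs(path))

def evolve(pop_size=20, generations=30, mutation_rate=0.1):
    population = [[random.randint(0, 1) for _ in range(NUM_BRANCHES)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection of parents, then one-point crossover and bit-flip mutation.
        def pick():
            return max(random.sample(population, 3), key=fitness)
        children = []
        while len(children) < pop_size:
            a, b = pick(), pick()
            cut = random.randint(1, NUM_BRANCHES - 1)
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit for bit in child]
            children.append(child)
        population = children
    best = max(population, key=fitness)
    return best, covered_du_pairs(best)

path, pairs = evolve()
print("best path:", path, "covers", pairs)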


[8] Effect of class testing on the reliability of object-oriented programs


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=630876&queryText%3Dtesting



Although object-oriented programming has been increasingly adopted for software development and many approaches for testing object-oriented programs have been proposed, the issue of reliability of object-oriented programs has not been explored. The objective of this study was to investigate the effectiveness of class testing from the perspective of reliability. The experiments in this study involved testing and measuring the reliability of a C++ program and a Java program. We introduced a class testing technique that exploits the function dependence relationship to reduce the testing effort in subclass testing and in testing polymorphism without degrading the reliability of object-oriented programs. In subclass testing, the impact of function dependence class testing on reliability was compared with two other techniques: exhaustive class testing, which flattens every class and tests every function in the class; and minimal class testing, which tests only new and re-defined functions. The results show that function dependence class testing preserves the same level of program reliability as does exhaustive class testing, while the effort is significantly reduced. In polymorphism testing, we conducted an experiment to observe the relationship between the binding coverage and the reliability of the program. The results suggest that testing possible bindings is necessary, and using the function dependence relationship to determine which bindings to cover in testing is sufficient.



[9] Formal Analysis of the Probability of Interaction Fault Detection Using Random Testing


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5999671&queryText%3Dtesting



Modern systems are becoming highly configurable to satisfy the varying needs of customers and users. Software product lines are hence becoming a common trend in software development to reduce cost by enabling systematic, large-scale reuse. However, high levels of configurability entail new challenges. Some faults might be revealed only if a particular combination of features is selected in the delivered products. But testing all combinations is usually not feasible in practice, due to their extremely large numbers. Combinatorial testing is a technique to generate smaller test suites for which all combinations of t features are guaranteed to be tested. In this paper, we present several theorems describing the probability of random testing to detect interaction faults and compare the results to combinatorial testing when there are no constraints among the features that can be part of a product. For example, random testing becomes even more effective as the number of features increases and converges toward equal effectiveness with combinatorial testing. Given that combinatorial testing entails significant computational overhead in the presence of hundreds or thousands of features, the results suggest that there are realistic scenarios in which random testing may outperform combinatorial testing in large systems. Furthermore, in common situations where test budgets are constrained and unlike combinatorial testing, random testing can still provide minimum guarantees on the probability of fault detection at any interaction level. However, when constraints are present among features, then random testing can fare arbitrarily worse than combinatorial testing. As a result, in order to have a practical impact, future research should focus on better understanding the decision process to choose between random testing and combinatorial testing, and improve combinatorial testing in the presence of feature constraints.
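The core probability that the theorems above reason about can be written down directly: if a fault is triggered by one specific combination of values for t features, and each involved feature i has v_i possible values chosen uniformly at random, then a single random test hits that combination with probability 1 / (v_1 * ... * v_t). The hypothetical snippet below evaluates the resulting detection probability for N independent random tests.

from math import prod

def detection_probability(levels, num_tests):
    """Probability that at least one of num_tests uniformly random tests
    selects the single faulty value combination of the involved features,
    where levels[i] is the number of values of the i-th involved feature."""
    q = 1.0 / prod(levels)          # chance a single random test triggers the fault
    return 1.0 - (1.0 - q) ** num_tests

# Example: a 3-way interaction fault among three binary features, 50 random tests.
print(detection_probability([2, 2, 2], 50))   # about 0.9987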


[10] A Novel Evolutionary Approach for Adaptive Random Testing


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5338642&queryText%3Dtesting



Random testing is a low cost strategy that can be applied to a wide range of testing problems. While the cost and straightforward application of random testing are appealing, these benefits must be evaluated against the reduced effectiveness due to the generality of the approach. Recently, a number of novel techniques, coined Adaptive Random Testing, have sought to increase the effectiveness of random testing by attempting to maximize the testing coverage of the input domain. This paper presents the novel application of an evolutionary search algorithm to this problem. The results of an extensive simulation study are presented in which the evolutionary approach is compared against the Fixed Size Candidate Set (FSCS), Restricted Random Testing (RRT), quasi-random testing using the Sobol sequence (Sobol), and random testing (RT) methods. The evolutionary approach was found to be superior to FSCS, RRT, Sobol, and RT amongst block patterns, the arena in which FSCS, and RRT have demonstrated the most appreciable gains in testing effectiveness. The results among fault patterns with increased complexity were shown to be similar to those of FSCS, and RRT; and showed a modest improvement over Sobol, and RT. A comparison of the asymptotic and empirical runtimes of the evolutionary search algorithm, and the other testing approaches, was also considered, providing further evidence that the application of an evolutionary search algorithm is feasible, and within the same order of time complexity as the other adaptive random testing approaches.
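For context, Fixed Size Candidate Set ART (one of the baselines mentioned above) is straightforward to sketch: each new test is chosen from k random candidates as the one farthest from all previously executed tests. The snippet below is a minimal two-dimensional illustration under that assumption; it is not the paper's evolutionary approach.

import random

random.seed(3)

def fscs_art(num_tests, k=10):
    """Fixed Size Candidate Set adaptive random testing over the unit square."""
    executed = [(random.random(), random.random())]      # first test is purely random
    while len(executed) < num_tests:
        candidates = [(random.random(), random.random()) for _ in range(k)]
        # Pick the candidate whose nearest executed test is farthest away.
        best = max(candidates,
                   key=lambda c: min((c[0] - e[0]) ** 2 + (c[1] - e[1]) ** 2
                                     for e in executed))
        executed.append(best)
    return executed

tests = fscs_art(20)
print(tests[:3])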



[11] Enhancing the Efficiency of Regression Testing through Intelligent Agents


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4426561&queryText%3Dtesting



Software testing is indispensable for all software development. As all mature engineering disciplines need systematic testing methodologies, software testing is a very important subject of software engineering. In software development practice, testing accounts for as much as 50% of total development effort. Testing can be manual, automated, or a combination of both. Manual testing is the process of executing the application and manually interacting with it, specifying inputs and observing outputs. Manually testing the software is inefficient and costly. It is imperative to reduce the cost and improve the effectiveness of software testing by automating the testing process, which contains many testing-related activities using various techniques and methods. In order to automate the process, we have to have some way to generate oracles from the specification, and generate test cases to test the target software against the oracles to decide their correctness. Today we still don't have a full-scale system that has achieved this goal. In general, a significant amount of human intervention is still needed in testing. The degree of automation remains at the automated test script level. This paper therefore provides a timely summary and enhancement of agent theory in software testing, which motivates recent efforts in adapting concepts and methodologies of agent-oriented software testing to complex systems, which has not previously been done. Agent technologies facilitate automated software testing by virtue of their high-level decomposition, independence and parallel activation [4]. Usage of agent-based regression testing reduces the complexity involved in prioritizing the test cases. With the ability of agents to act autonomously, monitoring code changes and generating test cases for the changed version of the code can be done dynamically. Agent-Oriented Software Testing (AOST) is a nascent but active field of research. A comprehensive methodology that plays an essential role in software testing must be robust but easy to use. Moreover, it should provide a roadmap to guide engineers in creating agent-based systems. The agent-based regression testing (ABRT) proposed here offers a definition broad enough to cover the regression-testing phenomena based on agents, yet sufficiently tight that it can rule out complex systems that are clearly not agent based.



[12] Experimental assessment of manual versus tool-based maintenance of GUI-directed test scripts



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5306345&sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5306271%29



Since manual black-box testing of GUI-based applications (GAPs) is tedious and laborious, test engineers create test scripts to automate the testing process. These test scripts interact with GAPs by performing actions on their GUI objects. As GAPs evolve, testers should fix their corresponding test scripts so that they can reuse them to test successive releases of GAPs. Currently, there are two main modes of maintaining test scripts: tool-based and manual. In practice, there is no consensus on which approach testers should use to maintain test scripts. Test managers make their decisions ad hoc, based on their personal experience and the perceived benefits of the tool-based approach versus the manual one. In this paper we describe a case study with forty five professional programmers and test engineers to experimentally assess the tool-based approach for maintaining GUI-directed test scripts versus the manual approach. Based on the results of our case study, considering the high cost of the programmers' time and the lower cost of the time of test engineers, and considering that programmers often modify GAP objects in the process of developing software, we recommend that organizations supply programmers with testing tools that enable them to fix test scripts faster so that these scripts can unit test software. The other side of our recommendation is that experienced test engineers are likely to be as productive with the manual approach as with the tool-based approach, and we consequently recommend that organizations need not provide each tester with an expensive tool license to fix test scripts.



[13] Prioritizing JUnit test cases in absence of coverage information


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5306350&sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5306271%29



Better orderings of test cases can detect faults in less time with fewer resources, and thus start the debugging process earlier and accelerate software delivery. As a result, test case prioritization has become a hot topic in the research of regression testing. With the popularity of using the JUnit testing framework for developing Java software, researchers have also paid attention to techniques for prioritizing JUnit test cases in regression testing of Java software. Typically, most of them are based on coverage information of test cases. However, coverage information may incur extra costs to acquire. In this paper, we propose an approach (named Jupta) for prioritizing JUnit test cases in absence of coverage information. Jupta statically analyzes call graphs of JUnit test cases and the software under test to estimate the test ability (TA) of each test case. Furthermore, Jupta provides two prioritization techniques: the total TA based technique (denoted as JuptaT) and the additional TA based technique (denoted as JuptaA). To evaluate Jupta, we performed an experimental study on two open source Java programs, containing 11 versions in total. The experimental results indicate that Jupta is more effective and stable than the untreated orderings and Jupta is approximately as effective and stable as prioritization techniques using coverage information at the method level.
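The "additional" strategy mentioned above (JuptaA) follows the classic greedy pattern: repeatedly pick the test whose estimated ability covers the most methods not yet covered by tests already selected. The sketch below illustrates that pattern on invented coverage-like data; it is not the Jupta tool itself.

def additional_prioritization(test_ability):
    """Greedy 'additional' ordering: each step picks the test that adds the most
    not-yet-covered methods; exhausted coverage falls back to insertion order."""
    remaining = dict(test_ability)
    covered, order = set(), []
    while remaining:
        name, methods = max(remaining.items(), key=lambda kv: len(kv[1] - covered))
        order.append(name)
        covered |= methods
        del remaining[name]
    return order

# Invented estimates of which methods each JUnit test exercises.
ability = {
    "testLogin":    {"auth.check", "db.read"},
    "testCheckout": {"cart.total", "db.read", "db.write"},
    "testSearch":   {"index.query"},
}
print(additional_prioritization(ability))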



[14] Prioritizing component compatibility tests via user preferences



http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5306357&sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5306271%29



Many software systems rely on third-party components during their build process. Because the components are constantly evolving, quality assurance demands that developers perform compatibility testing to ensure that their software systems build correctly over all deployable combinations of component versions, also called configurations. However, large software systems can have many configurations, and compatibility testing is often time and resource constrained. We present a prioritization mechanism that enhances compatibility testing by examining the "most important" configurations first, while distributing the work over a cluster of computers. We evaluate our new approach on two large scientific middleware systems and examine tradeoffs between the new prioritization approach and a previously developed lowest-cost-configuration-first approach.



[15] Towards a better understanding of software evolution: An empirical study on open source software


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5306356&sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5306271%29



Software evolution is a fact of life. Over the past thirty years, researchers have proposed hypotheses on how software changes, and provided evidence that both supports and refutes these hypotheses. To paint a clearer image of the software evolution process, we performed an empirical study on long spans in the lifetime of seven open source projects. Our analysis covers 653 official releases, and a combined 69 years of evolution. We first tried to verify Lehman's laws of software evolution. Our findings indicate that several of these laws are confirmed, while the rest can be either confirmed or infirmed depending on the laws' operational definitions. Second, we analyze the growth rate for projects' development and maintenance branches, and the distribution of software changes. We find similarities in the evolution patterns of the programs we studied, which brings us closer to constructing rigorous models for software evolution.


[16] Linux kernels as complex networks: A novel method to study evolution


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5306348&sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5306271%29



In recent years, many graphs have turned out to be complex networks. This paper presents a novel method to study Linux kernel evolution - using complex networks to understand how Linux kernel modules evolve over time. After studying the node degree distribution and average path length of the call graphs corresponding to the kernel modules of 223 different versions (V1.1.0 to V2.4.35), we found that the call graphs of the file system and drivers module are scale-free small-world complex networks. In addition, both of the file system and drivers module exhibit very strong preferential attachment tendency. Finally, we proposed a generic method that could be used to find major structural changes that occur during the evolution of software systems.
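The degree distribution analysed above can be computed from a call graph in a few lines; the sketch below uses a tiny invented call graph (the edges are made up for illustration) rather than real kernel data.

from collections import Counter

# Invented call graph: function -> functions it calls.
call_graph = {
    "vfs_read":  ["rw_verify_area", "fsnotify", "file_pos_read"],
    "vfs_write": ["rw_verify_area", "fsnotify"],
    "sys_read":  ["vfs_read", "fget"],
    "sys_write": ["vfs_write", "fget"],
}

# Total degree = out-degree (calls made) + in-degree (times called).
degree = Counter()
for caller, callees in call_graph.items():
    degree[caller] += len(callees)
    for callee in callees:
        degree[callee] += 1

# Degree distribution: how many functions have each degree value.
distribution = Counter(degree.values())
for d in sorted(distribution):
    print(f"degree {d}: {distribution[d]} function(s)")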






[18] On the Benefits of Planning and Grouping Software Maintenance Requests


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5741246&sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5741244%29


Despite its unquestionable importance, software maintenance usually has a negative image among software developers and even project managers. As a result, it is common to consider maintenance requests as short-term tasks that should be implemented as quickly as possible to have a minimal impact for end-users. In order to promote software maintenance to a first-class software development activity, we first define in this paper a lightweight process - called PASM (Process for Arranging Software Maintenance Requests) - for handling maintenance as software projects. Next, we describe an in-depth evaluation of the benefits achieved by the PASM process at a real software development organization. For this purpose, we rely on a set of clustering analysis techniques in order to better understand and compare the requests handled before and after the adoption of the proposed process. Our results indicate that the number of projects created to handle maintenance requests has increased almost three times after this organization adopted the PASM process. Furthermore, we also concluded that projects based on PASM present a better balance between the various software engineering activities. For example, after adopting PASM the developers have dedicated more time to analysis and validation and less time to implementation and coding tasks.



[19] Revealing Mistakes in Concern Mapping Tasks: An Experimental Evaluation


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5741266&sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5741244%29


Concern mapping is the activity of assigning a stakeholder's concern to its corresponding elements in the source code. This activity is primordial to guide software maintainers in several tasks, such as understanding and restructuring the implementation of existing concerns. Even though different techniques are emerging to facilitate the concern mapping process, they are still manual and error-prone according to recent studies. Existing work does not provide any guidance to developers to review and correct concern mappings. In this context, this paper presents the characterization and classification of eight concern mapping mistakes commonly made by developers. These mistakes were found to be associated with various properties of concerns and modules in the source code. The mistake categories were derived from actual mappings of 10 concerns in 12 versions of industry systems. In order to further evaluate to what extent these mistakes also occur in wider contexts, we ran two experiments where 26 subjects mapped 10 concerns in two systems. Our experimental results confirmed the mapping mistakes that often occur when developers need to interact with the source code.



[20] Software Deployment Activities and Challenges - A Case Study of Four Software Product Companies


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5741269&sortType%3Dasc_p_Sequence%26filter%3DAND%28p_IS_Number%3A5741244%29



Software deployment, including both clean installs and updates, is a crucial activity for all software vendors. It starts with a customer's order of a new release and incorporates all steps taken until the customer is satisfied with the deployed product. Using interviews as the main data collection method, we conducted a case study of four companies to discover their software deployment activities and challenges. The studied products were more complicated than pure COTS products. We noticed three product characteristics that make deployment more challenging: 1) the product is tightly integrated to other customer systems, 2) the product offers various configuration options to support different ways of working, and 3) the product requires a pre-created, complex, real-world data model to be usable. We also noticed that software deployment is multifaceted, involving activities related to customer interaction, making integrations, and configuring, installing and testing the products.



[21] Notice of Retraction: The Construction and Practice of the Training Pattern of Practical College Software Testing Talents


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5459039&pageNumber%3D3%26queryText%3Dsoftware+testing



After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper. The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org. Software testing has already developed into a unique market of software industry. Due to the rapid development of software testing industry, large numbers of software testing talents are urgently needed. This paper analyses the current demand situation of software testing talents and the characteristics of practical college software testing talents training in detail. In the light of the requirement of practical software testing talents, it sheds light on the construction and practice of the training pattern of practical college software testing talents and its curriculum system.



[22] Comparing the Effectiveness of Software Testing Strategies


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1702179&pageNumber%3D3%26queryText%3Dsoftware+testing



This study applies an experimentation methodology to compare three state-of-the-practice software testing techniques: a) code reading by stepwise abstraction, b) functional testing using equivalence partitioning and boundary value analysis, and c) structural testing using 100 percent statement coverage criteria. The study compares the strategies in three aspects of software testing: fault detection effectiveness, fault detection cost, and classes of faults detected. Thirty-two professional programmers and 42 advanced students applied the three techniques to four unit-sized programs in a fractional factorial experimental design. The major results of this study are the following. 1) With the professional programmers, code reading detected more software faults and had a higher fault detection rate than did functional or structural testing, while functional testing detected more faults than did structural testing, but functional and structural testing were not different in fault detection rate. 2) In one advanced student subject group, code reading and functional testing were not different in faults found, but were both superior to structural testing, while in the other advanced student subject group there was no difference among the techniques. 3) With the advanced student subjects, the three techniques were not different in fault detection rate. 4) Number of faults observed, fault detection rate, and total effort in detection depended on the type of software tested. 5) Code reading detected more interface faults than did the other methods. 6) Functional testing detected more control faults than did the other methods.



[23] Declarative Testing: A Paradigm for Testing Software Applications


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5070714&pageNumber%3D3%26queryText%3Dsoftware+testing



Traditional techniques to test a software application through the application's graphical user interface have a number of weaknesses. Manual testing is slow, expensive, and does not scale well as the size and complexity of the application increases. Software test automation which exercises an application through the application's UI using an API set can be difficult to maintain. We propose a software testing paradigm called declarative testing. In declarative testing, a test scenario focuses on what to accomplish rather than on the imperative details of how to manipulate the state of an application under test and verify the final application state against an expected state. Declarative testing is a test design paradigm which separates test automation code into conceptual Answer, Executor, and Verifier entities. Preliminary experience with declarative testing suggests that the modular characteristics of the paradigm may significantly enhance the ability of a testing effort to keep pace with the evolution of a software application during the application's development process.


[24] An empirical study on testing and fault tolerance for software reliability engineering


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1251036&pageNumber%3D3%26queryText%3Dsoftware+testing



Software testing and software fault tolerance are two major techniques for developing reliable software systems, yet limited empirical data are available in the literature to evaluate their effectiveness. We conducted a major experiment engaging 34 programming teams to independently develop multiple software versions for an industry-scale critical flight application, and collected faults detected in these program versions. To evaluate the effectiveness of software testing and software fault tolerance, mutants were created by injecting real faults that occurred in the development stage. The nature, manifestation, detection, and correlation of these faults were carefully investigated. The results show that coverage testing is generally an effective means of detecting software faults, but the effectiveness of testing coverage is not equivalent to that of mutation coverage, which is a more truthful indicator of testing quality. We also found that identical faults found among versions are very limited. This result supports software fault tolerance by design diversity as a creditable approach for software reliability engineering. Finally, we applied a domain analysis approach for test case generation, and concluded that it is a promising technique for software testing purposes.


[25] Software Internationalization: Testing Methods for Bidirectional Software


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5331721&pageNumber%3D3%26queryText%3Dsoftware+testing



Testing Global software differs from conventional software testing in that the test design approach and the testing methods must consider the defined and implied issues of specific culture, language, date format, and currency format. In the case of Bidirectional software testing (software targeting Arabic, Hebrew, Farsi, and Urdu markets) there are many unique and crucial aspects that require thorough and carefully planned and executed tests. In this paper we propose testing methods that can be used to test critical aspects of bi-directional software in general and specifically Arabic software. These testing methods are discussed in the context of the Dynamic Systems Development Methodology (DSDM) process lifecycle that integrates internationalization (I18n) and localization (L10n) processes into the software lifecycle.



[26] Optimal allocation and control problems for software-testing resources


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=55878&pageNumber%3D3%26queryText%3Dsoftware+testing



Two kinds of software-testing management problems are considered: testing-resource allocation to best use specified testing resources during module testing, and a testing-resource control problem concerning how to spend the allocated amount of testing-resource expenditures during it. A software reliability growth model based on a nonhomogeneous Poisson process is introduced. The model describes the time-dependent behavior of software errors detected and testing-resource expenditures spent during the testing. The optimal allocation and control of testing resources among software modules can improve reliability and shorten the testing stage. Based on the model, numerical examples of these two software testing management problems are presented
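The NHPP growth model referenced above is commonly written as m(W) = a(1 - e^(-bW)), the expected number of faults detected after spending testing-resource W. Under that assumption, the sketch below greedily allocates a fixed resource budget across modules by always giving the next small increment to the module with the largest marginal gain in expected fault detection; it is a generic illustration with invented parameters, not the paper's optimization.

import math

# Invented module parameters: a = expected total faults, b = detection efficiency.
modules = {"parser": (40.0, 0.020), "scheduler": (25.0, 0.035), "driver": (60.0, 0.010)}
TOTAL_RESOURCE = 300.0
STEP = 1.0

def marginal_gain(a, b, w):
    """Extra faults expected from one more unit of resource: d/dw of a(1 - e^(-b w))."""
    return a * b * math.exp(-b * w)

allocation = {name: 0.0 for name in modules}
spent = 0.0
while spent < TOTAL_RESOURCE:
    # Give the next unit to the module where it currently buys the most detection.
    name = max(modules, key=lambda n: marginal_gain(*modules[n], allocation[n]))
    allocation[name] += STEP
    spent += STEP

for name, w in allocation.items():
    a, b = modules[name]
    print(f"{name}: resource={w:.0f}, expected faults detected={a * (1 - math.exp(-b * w)):.1f}")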


[27] Research on path generation for software architecture testing matrix transform-based


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5974519&pageNumber%3D3%26queryText%3Dsoftware+testing



Software architecture testing is a technology concerning the connections and functions of system components. It is an important research topic in the area of software engineering. Testing at the SA level is a hotspot and a difficulty in the field of software testing. This paper proposes a matrix-transform-based technology for software architecture testing. The software architecture interface connectivity graph (ICG) is used to describe the connection relationship between components and connectors; the paper gives the definitions of the design structure matrix, adjacency matrix, and reachability matrix, and generates testing coverage paths of the ICG according to testing coverage criteria and the reachability matrix. Finally, we verify the testing technology on an example and realize software architecture testing.
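The reachability matrix used above can be derived from the adjacency matrix with Warshall's transitive-closure algorithm; the short sketch below shows that computation on a small invented component-connector graph.

def reachability(adjacency):
    """Warshall's algorithm: transitive closure of a boolean adjacency matrix."""
    n = len(adjacency)
    reach = [row[:] for row in adjacency]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

# Invented 4-node ICG: component 0 -> connector 1 -> component 2 -> connector 3.
adj = [
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
for row in reachability(adj):
    print(row)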



[28] Efficient allocation of testing resources for software module testing based on the hyper-geometric distribution software reliability growth model


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=558884&pageNumber%3D3%26queryText%3Dsoftware+testing



A considerable amount of testing resources is required during software module testing. In this paper, based on the HGDM (Hyper-Geometric Distribution Model) software reliability growth model, we investigate the following optimal resource allocation problems in software module testing: (1) minimization of the number of software faults still undetected in the system after testing, given a total amount of testing resources, and (2) minimization of the total amount of testing resources required, given the number of software faults still undetected in the system after testing. Furthermore, based on the concepts of "average allocation" and "proportional allocation", two simple allocation methods are also introduced. Experimental results show that the optimal allocation method can improve the quality and reliability of the software system much more significantly than these simple allocation methods can. Therefore, the optimal allocation method is very efficient for solving the testing resource allocation problem.


[29] Increasing understanding of the modern testing perspective in software development projects


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1174707&pageNumber%3D3%26queryText%3Dsoftware+testing



Testing can be difficult to integrate into software development. Approaches to software testing in relation to implementing software are based on the V-model of testing. The software process behind the V-model is the traditional waterfall model, and as such the traditional testing approaches cannot take iterative, incremental and agile approaches to developing software into account well enough. In this paper, we describe the use of a general iterative and incremental framework defined for controlling product development - 4CC - from a modern testing perspective. The framework provides a common language in which the implementation details and pacing as well as testing details and pacing in software product development projects can be communicated. Viewing testing through a general iterative and incremental framework adds to understanding how the testing process should be defined and improved in relation to the software development process. Additionally, best practices for testing are identified.



[30] Software reliability accelerated testing method based on mixed testing


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5448017&pageNumber%3D3%26queryText%3Dsoftware+testing



This study is conducted by solving these four key questions: the key points of the whole testing process; the data and information to be collected during the testing process; the model to estimate the software reliability; the verification of the testing method and the estimation model. Software reliability accelerated testing (SRAT) based on mixed testing is proposed. SRAT increases the effectiveness and efficiency of the reliability testing, and it is able to test some operations or program modules with low operation frequency to satisfy the coverage criteria if the number of the test cases is the same as in conventional reliability testing. An improved software reliability model based on order statistics (SRMOS) is presented. It gives more accurate estimation than the classic software reliability models when the SRAT is applied during the reliability growth testing. Finally, an experiment on web-based software used as a case study is carefully designed and implemented to verify the feasibility and effectiveness of SRAT. Meanwhile, the comprehensive performance of SRMOS is evaluated.



[31] Model Based Testing Using Software Architecture


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5501432&pageNumber%3D3%26queryText%3Dsoftware+testing



Software testing is an ultimate obstacle to the final release of software products. Software testing is also a leading cost factor in the overall construction of software products. On the one hand, model-based testing methods are new testing techniques aimed at increasing the reliability of software, and decreasing the cost by automatically generating a suite of test cases from a formal behavioral model of a system. On the other hand, the architectural specification of a system represents a gross structural and behavioral aspect of a system at the high level of abstraction. Formal architectural specifications of a system also have shown promises to detect faults during software back-end development. In this work, we discuss a hybrid testing method to generate test cases. Our proposed method combines the benefits of model-based testing with the benefits of software architecture in a unique way. A simple Client/Server system has been used to illustrate the practicality of our testing technique.


[32] Application software configuration management testing in a pharmaceutical laboratory automation environment


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=10159&pageNumber%3D3%26queryText%3Dsoftware+testing



A discussion is presented of the use of the computer system accounting structure to control software change, provide phased software testing, and handle software and data files in parallel with the software life-cycle. The US Food and Drug Administration, as the federal regulating body for the industry, directly influences software maintenance practices and procedures. Traceability to back versions of the software, as well as software validation, is required. Three categories of testing are discussed: systems testing (for designed functionality), user acceptance testing (for validation), and verification testing (for consistent operation). Testing is introduced into the life cycle according to established procedures. Software and data integrity is ensured throughout the life cycle through strictly controlled user access and file management using the operating system accounting structure.



[33] Optimal test profile in the context of software cybernetics


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=990014&pageNumber%3D3%26queryText%3Dsoftware+testing



Software cybernetics explores the interplay between software theory/engineering and control theory/engineering. Following the idea of software cybernetics, the controlled Markov chains (CMC) approach to software testing treats software testing as a control problem. The software under test serves as a controlled object, and the (optimal) testing strategy determined by the theory of controlled Markov chains serves as a controller. The software under test and the corresponding (optimal) testing strategy constitute a closed-loop feedback system, and the software state transitions behave as a Markov chain. The paper analyzes the behavior of the corresponding optimal test profile determined by the CMC approach to software testing. It is shown that in some cases the optimal test profile is Markovian, whereas in some other cases the optimal test profile demonstrates a different scenario. The analyses presented in the paper deepen our understanding of the CMC approach to software testing and are related to software operational profile modeling.


[34] Developing a customized software engineering testing for Shared Banking Services (SBS) System


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5993436&pageNumber%3D3%26queryText%3Dsoftware+testing



This paper presented the concept of developing a customized software testing process for software engineering in the Shared Banking Services (SBS) System, a banking system developed in Malaysia. Software testing is one of the main activities in the software development life cycle. This process consists of a few activities, which include developing the test plan and strategy, test design, test execution and evaluation of test results. However, a customized testing process is required for a specific domain in order to ensure the correctness and completeness of the test result. This project proposed a customized software testing process for the Shared Banking Services (SBS) system, and the study was conducted in order to identify the specific modules in SBS and the characteristics which differentiate it from other software applications. Besides that, two testing methodologies, the RUP Test Discipline and the Systematic Test and Evaluation Process (STEP), were compared based on criteria such as main activities, roles and responsibilities, artifacts and level of testing. The comparison result was then mapped to the characteristics of SBS to produce the proposed software testing process for SBS.



[35]The Software Quality Evaluation Method Based on Software Testing


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6394607&pageNumber%3D3%26queryText%3Dsoftware+testing



In order to improve the effectiveness, visibility and specification of TT&C software testing, this paper makes an in-depth study according to its characteristics. First, the paper presents a quality assessment model with high reliability and real-time demands, drawing on the analytic hierarchy process (AHP). Then, it brings forward a dedicated simulation test environment and a method of generating software test cases based on fault tree analysis (FTA). Next, it defines the software testing procedure, learning from CMMI demands. Finally, it presents the quantitative assessment results in the form of a radar chart. Practice has proved that the given model can represent software quality objectively, that the generating method can effectively improve the sufficiency of software test cases, and that the defined procedure can ensure that software testing follows the specification.



[36]Issues on software testing for safety-critical real-time automation systems


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1390801&pageNumber%3D3%26queryText%3Dsoftware+testing



Software quality indicates how well the software product complies with the user requirements. The challenge in software testing is how to uncover the difficult-to-find software problems. Networked and embedded real-time automation software in safety-critical applications such as avionics poses unique concerns about software quality due to its demanding requirements on system performance. The benefits of rigorous software development and verification processes include both functional and nonfunctional software performance assurance. The heart of safety-critical software development lies in processes and techniques for software validation and verification. Effective software testing can ensure software quality and earn the developer kudos from the customer for that quality.


[37]RPB in Software Testing


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4137063&pageNumber%3D3%26queryText%3Dsoftware+testing



Software testing is one of the crucial activities in the system development life cycle. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. At Motorola®, Automation Testing is a methodology used by GSG-iSGT (Global Software Group - iDEN™ Subscriber Group Testing) to increase the testing volume and productivity and to reduce test cycle time in cell phone software testing. In addition, automated stress testing can make the products more robust before release to the market. In this paper, the authors discuss one of the automated stress testing tools that iSGT uses, RPB (Record and PlayBack). RPB is a platform-specific desktop rapid test-case creation tool. This tool is able to capture all the information on the phone in order to achieve the highest accuracy in testing. Furthermore, RPB allows users to do testing at any time and anywhere, provided there is an Internet connection. The authors also discuss the value that automation has brought to iDEN™ phone testing, together with the metrics, and look into the advantages of the proposed system and some directions for future work.



[38]Research on the Application of Data Mining in Software Testing and Defects Analysis


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5287758&pageNumber%3D3%26queryText%3Dsoftware+testing



Highly dependable software is not only one of the commanding points of software technology development, but also an essential foundation of the software industry. This paper summarizes the latest research on applying data mining to software reliability testing and evaluation, and elaborates on data mining technology in software defect testing applications, including data mining methods commonly used in defect testing, data mining systems and software testing management systems. It specifically introduces the application of association-rule-based defect analysis techniques to different classifications of software defects, and proposes an association-rule-based software defect evaluation method, the purpose of which is to decrease software defects and to achieve rapid growth in software dependability.


[39]Effectively Testing for a Software Product Line with OTM3 Organizational Testing Management Maturity Model


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5945246&pageNumber%3D3%26queryText%3Dsoftware+testing



A software product line can be effectively tested with a framework that follows Experimental Software Engineering concepts. During this research, a description of an Operational Maturity Structure for Software Testing Management was created for the Operational Domain of the Organizational Testing Management Maturity Model (OTM3) framework. The proposed structure addresses both defect management and the measurement of software testing results through a method using effective metrics. This structure allows the capabilities demanded by software testing maturity models to be identified, established, and maintained. These goals are achieved according to standards, measures, controls, and continuous improvements. In order to verify and validate the application of the OTM3 framework, a case study was developed within a Research & Development project at the Brazilian Aeronautics Institute of Technology (ITA).



[40]Keynote Paper: Search Based Software Testing for Software Security: Breaking Code to Make it Safer


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4976374&pageNumber%3D3%26queryText%3Dsoftware+testing



Ensuring security of software and computerized systems is a pervasive problem plaguing companies and institutions and affecting many areas of modern life. Software vulnerability may jeopardize information confidentiality and cause software failure, leading to catastrophic threats to humans or severe economic losses. Size, complexity, extensibility, connectivity and the search for cheap systems make it very hard or even impossible to manually tackle vulnerability detection. Search based software testing attempts to solve two aspects of the cost-vulnerability problem. First, it is cheaper because it is far less labor intensive when compared to traditional testing techniques. As a result, it can be used to more thoroughly test software and reduce the risk that a vulnerability slips into production code. Also, search based software testing can be specifically tailored to tackle the subset of well known security vulnerabilities responsible for most security threats. This paper is divided into two parts. It examines promising search based testing approaches to detecting software vulnerabilities, and then presents some of the most interesting open research problems.



[41]Software Reliability Growth Models with Testing-Effort


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4335332&pageNumber%3D3%26queryText%3Dsoftware+testing



Many software reliability growth models have been proposed in the past decade. Those models tacitly assume that testing-effort expenditures are constant throughout software testing. This paper develops realistic software reliability growth models incorporating the effect of testing-effort. The software error detection phenomenon in software testing is modeled by a nonhomogeneous Poisson process. The software reliability assessment measures and the estimation methods of parameters are investigated. Testing-effort expenditures are described by exponential and Rayleigh curves. Least-squares estimators and maximum likelihood estimators are used for the reliability growth parameters. The software reliability data analyses use actual data. The software reliability growth models with testing-effort can consider the relationship between the software reliability growth and the effect of testing-effort. Thus, the proposed models will enable us to evaluate software reliability more realistically.
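For orientation, a minimal sketch of the widely cited testing-effort-dependent NHPP formulation is given below; the exact notation and assumptions used in the paper may differ.

\[
W(t) = \alpha\left(1 - e^{-\beta t}\right) \quad\text{(exponential effort curve)}, \qquad
W(t) = \alpha\left(1 - e^{-\beta t^{2}/2}\right) \quad\text{(Rayleigh effort curve)},
\]
\[
m(t) = a\left[1 - e^{-r\,W(t)}\right],
\]

where \(W(t)\) is the cumulative testing effort spent by time \(t\), \(a\) the expected total number of errors, and \(r\) the error detection rate per unit of testing effort; the parameters are then estimated by least squares or maximum likelihood, as the abstract notes.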



[42]Longer is Better: On the Role of Test Sequence Length in Software Testing


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5477052&pageNumber%3D3%26queryText%3Dsoftware+testing



When software has internal state, a sequence of function calls is often required to test it. In fact, to cover a particular branch of the code, a sequence of previous function calls might be required to put the internal state in the appropriate configuration. Internal states are present not only in object-oriented software, but also in procedural software (e.g., static variables in C programs). In the literature, there are many techniques to test this type of software. However, to the best of our knowledge, the properties related to choosing the length of these sequences have received only little attention. In this paper, we analyse the role that length plays in software testing, in particular branch coverage. We show that on “difficult” software testing benchmarks, longer test sequences make testing trivial. Hence, we argue that the choice of test sequence length is very important in software testing.
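As a hypothetical illustration of why sequence length matters (this example is ours, not taken from the paper), consider a unit whose hardest branch becomes reachable only after several earlier calls; short random call sequences almost never reach it, while longer ones do.

import random

class Counter:
    """Toy stateful unit: the 'hard' branch needs several prior calls."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

    def report(self):
        if self.value >= 5:          # reachable only after at least 5 increments
            return "threshold reached"
        return "below threshold"

def random_test_sequence(length, seed=0):
    """Execute a random call sequence and record which report() outcomes were observed."""
    random.seed(seed)
    counter, outcomes = Counter(), set()
    for _ in range(length):
        if random.random() < 0.5:
            counter.increment()
        else:
            outcomes.add(counter.report())
    return outcomes

print(random_test_sequence(5))    # short sequence: only the easy branch is observed
print(random_test_sequence(100))  # longer sequence: both outcomes are observed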


[43]A Model of Third-Party Integration Testing Process Management for Foundation Software Platform


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4681154&pageNumber%3D3%26queryText%3Dsoftware+testing



FSP (foundation software platform) is the underlying foundation for application systems. Integration testing is an important method of ensuring the quality of FSP. The integration testing of FSP involves not only foundation software vendors, such as the vendor of the operating system, the vendors of the database management system and the vendors of middleware, but also the users of FSP, such as application system developers and integrators. Carrying out the integration testing through a third party is a reliable approach. To address the problems that the third-party testing organization faces, including insufficient testing information, organizing collaboration among the software vendors, and building the integration testing environment, this paper proposes a Third-party Integration Testing Process Management model for FSP (TITPM). In the model, we identify the third-party testing metadata, define the testing process for collaboration among the three parties, and give a method of building the testing environment based on open source software. The model has been used in the integration testing and maintenance of domestic FSP in China, and its successful application by the third-party testing center in China has proved its validity.



[44]Software-based self-testing of embedded processors


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1401865&pageNumber%3D4%26queryText%3Dsoftware+testing



Embedded processor testing techniques based on the execution of self-test programs have recently been proposed as an effective alternative to classic external tester-based testing and pure hardware built-in self-test (BIST) approaches. Software-based self-testing is a nonintrusive testing approach and provides at-speed testing capability without any hardware or performance overhead. In this paper, we first present a high-level, functional component-oriented, software-based self-testing methodology for embedded processors. The proposed methodology aims at high structural fault coverage with low test development and test application cost. Then, we validate the effectiveness of the proposed methodology as a low-cost alternative over structural software-based self-testing methodologies based on automatic test pattern generation and pseudorandom testing. Finally, we demonstrate the effectiveness and efficiency of the proposed methodology by completely applying it on two different processor implementations of a popular RISC instruction set architecture, including several gate-level implementations.



[45]A Cross Platform Test Management System for the SUDAAN Statistical Software Package


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5381758&pageNumber%3D4%26queryText%3Dsoftware+testing



Testing software can be particularly challenging for a small or mid-size firm interested in commercially distributing their software to a wide variety of users. Testing is clearly an important part of any software development life cycle (SDLC) because it provides a method for the developers to verify and validate the software. However, testing can be expensive and time-consuming, and creating a testing strategy that ensures a software product is 100% bug-free is unrealistic and impossible. This paper discusses the methodologies used to address one component of software testing by a small group of developers (essentially equivalent to a small firm) responsible for programming and distributing the SUDAAN® Statistical Software product. In addition we discuss SUDAAN's bug management system. Specifically, this paper discusses issues related to testing and debugging software on multiple platforms and operating systems.





[47]Design of a Tool for Checking Integration Testing Coverage of Object-Oriented Software


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6579458&pageNumber%3D4%26queryText%3Dsoftware+testing



Software testing is a necessary process in the software development life cycle to verify that the developed software follows its specification. One important aspect of the testing process is test coverage analysis, because defects may exist in uncovered parts and emerge when users try to use them. Test coverage analysis is performed to measure the comprehensiveness or thoroughness of testing. It is necessary and can be applied at every level of the testing process. Nowadays, object-oriented software is gaining interest in the software industry, and its testing methods differ from those of conventional software, so some of the available techniques and tools cannot be applied to object-oriented software. This paper proposes the design of a tool for checking the integration testing coverage of object-oriented software, which can check the integration testing coverage and generate additional test cases in case the existing test cases cannot cover the code.



[48]Vulnerability Testing of Software Using Extended EAI Model


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5319549&pageNumber%3D4%26queryText%3Dsoftware+testing



Software testing, throughout the development life cycle of software, is one of the important ways to ensure the quality of software. Model-based software testing technologies and tools have a higher degree of automation, as well as efficiency of testing. They can also detect vulnerabilities that other technologies find difficult to detect, so they are widely used. This paper presents an extended EAI model (Extended Environment-Application Interaction Model) and does further research on vulnerability testing based on the model. The extended EAI model inherits the anomaly-simulation methodology of the original one. In order to monitor and control the process under test, we introduce artificial intelligence technology and status feedback into the model, and also try to use virtual execution technology for testing. We use this technique, based on the extended EAI model, to experiment on Internetwork Operating System (IOS) software, and detect that some services of certain protocols running in IOS software have vulnerabilities. The experimental results indicate that our method is feasible.



[49]Guest Editors' Introduction: Software Testing Practices in Industry


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1657934&pageNumber%3D4%26queryText%3Dsoftware+testing



Four papers and a roundtable discussion shed light on the current state of software testing practices. Case studies from industry experience address topics including unit-testing practices, agile testing, and automating software testing. Although many of these approaches show promise, software testing is still one of the more neglected practices within the software development life cycle. Suggestions for improvements in industry and academia are offered.



[50]An Application of Six Sigma and Simulation in Software Testing Risk Assessment


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5477075&pageNumber%3D4%26queryText%3Dsoftware+testing



The conventional approach to Risk Assessment in Software Testing is based on analytic models and statistical analysis. The analytic models are static, so they don't account for the inherent variability and uncertainty of the testing process, which is an apparent deficiency. This paper presents an application of Six Sigma and Simulation in Software Testing. DMAIC and simulation are applied to a testing process to assess and mitigate the risk to deliver the product on time, achieving the quality goals. DMAIC is used to improve the process and achieve required (higher) capability. Simulation is used to predict the quality (reliability) and considers the uncertainty and variability, which, in comparison with the analytic models, more accurately models the testing process. Presented experiments are applied on a real project using published data. The results are satisfactorily verified. This enhanced approach is compliant with CMMI® and provides for substantial Software Testing performance-driven improvements.



[51]Stress testing software to determine fault tolerance for hardware failure and anomalies


http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6334582&pageNumber%3D4%26queryText%3Dsoftware+testing

Today's military systems rely for their performance on combinations of hardware and software. While testing of hardware performance during design, development and operation is well understood, the testing of software is less mature. In particular, the effect of hardware failures in the field on software performance, and therefore system performance, is all too often overlooked or is tested in a far less rigorous manner than that applied to hardware failures alone. Numerous examples exist of major system failures driven by software anomalies but triggered by hardware failures, with consequences that range from degraded mission performance to weapon system destruction and operator fatalities. Measuring software development quality and fault tolerance is a challenging task. Many software test methods focus on a source-code-only approach (unit tests, modular tests) and neglect the impacts caused by hardware anomalies or failures. Such missing test coverage can and will result in degraded software quality, thereby adding to project cost and delaying the schedule. It can also result in far more disastrous consequences for the warfighters. This paper discusses the general nature of the hardware failure - software anomaly - system failure flow-down. It then describes techniques that exist for system software testing and highlights extensions of these techniques to achieve effective and comprehensive software testing that includes performance prediction and hardware-failure fault tolerance. The end result is a suite of test methods that, when properly applied, offer a systematic and comprehensive analysis of prime software behaviors under a range of hardware field failure conditions.


More about Testing


Software Testing Overview


What is testing?


Testing is the process of evaluating a system or its component(s) with the intent of finding out whether it satisfies the specified requirements. The activity produces the actual results, the expected results, and the differences between them. In simple words, testing is executing a system in order to identify any gaps, errors or missing requirements contrary to the actual requirements. According to the ANSI/IEEE 1059 standard, testing can be defined as a process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.


Who does testing?


It depends on the process and the associated stakeholders of the project(s). In the IT industry, large companies have a team responsible for evaluating the developed software in the context of the given requirements. Moreover, developers also conduct testing, which is called Unit Testing. In most cases, the following professionals are involved in testing a system within their respective capacities:


  • Software Tester

  • Software Developer

  • Project Lead/Manager

  • End User

Different companies have different designations for people who test software on the basis of their experience and knowledge, such as Software Tester, Software Quality Assurance Engineer, QA Analyst, etc.

It is not possible to test the software at all times during its life cycle. The next two sections state when testing should be started and when it should end during the SDLC.

    When to Start Testing?


An early start to testing reduces the cost and time to rework and helps deliver error-free software to the client. In the Software Development Life Cycle (SDLC), testing can start from the Requirements Gathering phase and continue until the deployment of the software. However, it also depends on the development model being used. For example, in the Waterfall model formal testing is conducted in the Testing phase, whereas in the incremental model testing is performed at the end of every increment/iteration and the whole application is tested at the end.
Testing is done in different forms at every phase of the SDLC: during the Requirements Gathering phase, the analysis and verification of requirements is also considered testing; reviewing the design in the Design phase with the intent to improve it is also considered testing; and testing performed by a developer on completion of the code is categorized as unit testing.


    When to Stop Testing?


Unlike deciding when to start testing, it is difficult to determine when to stop, as testing is a never-ending process and no one can say that any software is 100% tested. The following aspects should be considered when deciding to stop testing (a minimal sketch of how such exit criteria might be checked automatically follows the list):


  • Testing Deadlines.

  • Completion of test case execution.

  • Completion of Functional and code coverage to a certain point.

  • Bug rate falls below a certain level and no high priority bugs are identified.

  • Management decision.
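A minimal sketch of how such exit criteria might be checked automatically; the threshold values and metric names here are illustrative assumptions, not an established standard.

def ready_to_stop(coverage, open_high_priority_bugs, new_bugs_per_day,
                  coverage_target=0.85, bug_rate_threshold=0.5):
    """Return True when the illustrative exit criteria are all satisfied."""
    return (coverage >= coverage_target                 # functional/code coverage reached
            and open_high_priority_bugs == 0            # no high priority bugs remain
            and new_bugs_per_day <= bug_rate_threshold)  # bug rate has fallen far enough

# Example: 90% coverage, no open high priority bugs, 0.2 new bugs per day.
print(ready_to_stop(coverage=0.90, open_high_priority_bugs=0, new_bugs_per_day=0.2))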

Testing, Quality Assurance and Quality Control


Most people are confused about the concepts of, and differences between, Quality Assurance, Quality Control and Testing. Although they are interrelated and at some level can be considered the same activities, there are indeed differences between them. The definitions and differences are given below:


Quality Assurance vs. Quality Control vs. Testing


1. Quality Assurance: activities which ensure the implementation of processes, procedures and standards in the context of verification of developed software and intended requirements.
   Quality Control: activities which ensure the verification of developed software with respect to documented (or, in some cases, undocumented) requirements.
   Testing: activities which ensure the identification of bugs/errors/defects in the software.

2. Quality Assurance: focuses on processes and procedures rather than conducting actual testing on the system.
   Quality Control: focuses on actual testing by executing the software with the intent to identify bugs/defects through implementation of procedures and processes.
   Testing: focuses on actual testing.

3. Quality Assurance: process-oriented activities.
   Quality Control: product-oriented activities.
   Testing: product-oriented activities.

4. Quality Assurance: preventive activities.
   Quality Control: a corrective process.
   Testing: a corrective process.

5. Quality Assurance: a subset of the Software Test Life Cycle (STLC).
   Quality Control: can be considered a subset of Quality Assurance.
   Testing: a subset of Quality Control.


    Testing and Debugging


    TESTING:


It involves the identification of bugs/errors/defects in the software without correcting them. Normally professionals with a Quality Assurance background are involved in the identification of bugs. Testing is performed in the testing phase.


    DEBUGGING:



It involves identifying, isolating and fixing the problems/bugs. Developers who code the software conduct debugging upon encountering an error in the code. Debugging is part of White Box or Unit Testing. Debugging can be performed in the development phase while conducting Unit Testing, or in later phases while fixing reported bugs.
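A small, hypothetical example of the difference: the test only reports that the result is wrong, while debugging locates the faulty divisor and corrects it.

def average(values):
    return sum(values) / (len(values) - 1)   # defect: off-by-one in the divisor

def fixed_average(values):
    return sum(values) / len(values)         # after debugging: divisor corrected

# Testing identifies the failure without correcting the code:
print(average([2, 4, 6]) == 4)        # False, so a bug is reported
# Debugging isolates the faulty line and verifies the fix:
print(fixed_average([2, 4, 6]) == 4)  # True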


    Testing Types


    Manual testing


This type includes testing the software manually, i.e. without using any automated tool or script. In this type, the tester takes over the role of an end user and tests the software to identify any unexpected behavior or bug. There are different stages of manual testing such as unit testing, integration testing, system testing and user acceptance testing.
Testers use test plans, test cases or test scenarios to test the software and to ensure the completeness of testing. Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.

    Automation testing


Automation testing, which is also known as Test Automation, is when the tester writes scripts and uses other software to test the software. This process involves the automation of a manual process. Automation testing is used to re-run, quickly and repeatedly, test scenarios that were originally performed manually.


    Software Automated Testing


Apart from regression testing, automation testing is also used to test the application from the load, performance and stress point of view. It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing.
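A rough sketch of the load/performance angle: the same scenario is re-run many times and the throughput is measured. Here, process_request is a hypothetical stand-in for the system under test, not a real API.

import time

def process_request(payload):
    """Hypothetical operation standing in for the system under test."""
    return sum(ord(c) for c in payload)

def stress_run(iterations=10_000):
    """Re-run the same scenario repeatedly and report elapsed time and throughput."""
    start = time.perf_counter()
    for i in range(iterations):
        process_request(f"request-{i}")
    elapsed = time.perf_counter() - start
    print(f"{iterations} requests in {elapsed:.3f}s ({iterations / elapsed:.0f} requests/s)")

stress_run()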

    Testing Methods


    Black Box Testing


    The technique of testing without having any knowledge of the interior workings of the application is Black Box testing. The tester is oblivious to the system architecture and does not have access to the source code. Typically, when performing a black box test, a tester will interact with the system's user interface by providing inputs and examining outputs without knowing how and where the inputs are worked upon.
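A tiny, hypothetical example of the black box mindset: the tester knows only the specified behaviour of shipping_fee (free shipping for orders of 50 or more, otherwise a flat fee of 5) and checks inputs against expected outputs without ever reading its implementation.

def shipping_fee(order_total):
    """Implementation details are irrelevant to the black box tester."""
    return 0 if order_total >= 50 else 5

# Specification-driven checks: input -> expected output only.
cases = [(49.99, 5), (50, 0), (120, 0), (0, 5)]
for order_total, expected in cases:
    actual = shipping_fee(order_total)
    assert actual == expected, f"{order_total}: expected {expected}, got {actual}"
print("All black box cases passed")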


    White Box Testing


White box testing is the detailed investigation of the internal logic and structure of the code. White box testing is also called glass box testing or open box testing. In order to perform white box testing on an application, the tester needs to possess knowledge of the internal working of the code.
The tester needs to look inside the source code and find out which unit/chunk of the code is behaving inappropriately.
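A minimal white box sketch using a hypothetical classify function: the tester reads the source and deliberately chooses one input per branch so that every branch is exercised.

def classify(age):
    if age < 0:
        return "invalid"
    elif age < 18:
        return "minor"
    else:
        return "adult"

# Inputs chosen by inspecting the source code above, one per branch.
assert classify(-1) == "invalid"   # covers the 'age < 0' branch
assert classify(10) == "minor"     # covers the 'age < 18' branch
assert classify(30) == "adult"     # covers the final else branch
print("All three branches exercised")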


    Grey Box Testing


Grey box testing is a technique to test the application with limited knowledge of its internal workings. In software testing, the phrase "the more you know, the better" carries a lot of weight when testing an application.
Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black box testing, where the tester only tests the application's user interface, in grey box testing the tester has access to design documents and the database. With this knowledge, the tester is able to better prepare test data and test scenarios when making the test plan.
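A small sketch of the grey box idea, assuming (purely for illustration) that the design documents reveal the application keeps users in a SQLite users table: the tester seeds the database directly and then exercises the public lookup function.

import sqlite3

def find_user(conn, email):
    """Public behaviour under test: look up a user's name by e-mail."""
    row = conn.execute("SELECT name FROM users WHERE email = ?", (email,)).fetchone()
    return row[0] if row else None

# Grey box: knowledge of the schema lets the tester prepare precise test data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("a@example.com", "Asha"))

assert find_user(conn, "a@example.com") == "Asha"
assert find_user(conn, "missing@example.com") is None
print("Grey box checks passed")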


    Functional Testing


This is a type of black box testing that is based on the specifications of the software to be tested. The application is tested by providing input, and then the results are examined; they need to conform to the functionality the software was intended for. Functional testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.


    There are five steps that are involved when testing an application for functionality.


  • I. The determination of the functionality that the intended application is meant to perform.
  • II. The creation of test data based on the specifications of the application.
  • III. The determination of the expected output based on the test data and the specifications of the application.
  • IV. The writing of test scenarios and the execution of test cases.
  • V. The comparison of actual and expected results based on the executed test cases.

An effective testing practice will see the above steps applied to the testing policies of every organization, ensuring that the organization maintains the strictest of standards when it comes to software quality. A minimal sketch of steps II-V is given below.
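A compact, hypothetical sketch of steps II-V for an assumed requirement that a 10% discount applies to orders of 100 or more and no discount otherwise:

def discount(order_total):
    """Function under test for the assumed requirement above."""
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

# Steps II-III: test data and expected outputs derived from the specification.
test_data = [(99.99, 0.0), (100, 10.0), (250, 25.0)]

# Steps IV-V: execute the cases and compare actual against expected results.
for order_total, expected in test_data:
    actual = discount(order_total)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: discount({order_total}) = {actual}, expected {expected}")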


    Unit Testing


This type of testing is performed by the developers before the setup is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team. The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality.
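A minimal unit test, written here with Python's standard unittest module against a hypothetical add function, showing how a single unit is isolated and checked on its own:

import unittest

def add(a, b):
    """Hypothetical unit under test."""
    return a + b

class AddTests(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

    def test_zero(self):
        self.assertEqual(add(0, 7), 7)

if __name__ == "__main__":
    unittest.main()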


    LIMITATIONS OF UNIT TESTING


Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in every software application, and the same is the case with unit testing. There is a limit to the number of scenarios and test data that a developer can use to verify the source code. After exhausting all options, there is no choice but to stop unit testing and merge the code segment with other units.


Integration Testing


The testing of combined parts of an application to determine whether they function correctly together is Integration Testing. There are two methods of doing integration testing: Bottom-up Integration Testing and Top-down Integration Testing.


    Integration Testing Method


1. Bottom-up Integration


This testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.


2. Top-down Integration


In this testing, the highest-level modules are tested first and progressively lower-level modules are tested after that. In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing. The process concludes with multiple tests of the complete application, preferably in scenarios designed to mimic those it will encounter on customers' computers, systems and networks.
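A small sketch of the bottom-up idea under these assumptions: two hypothetical units, an in-memory repository and an order service built on top of it, are unit-tested separately first and then exercised together in one integration scenario.

class InMemoryRepository:
    """Lower-level unit: stores items by id."""
    def __init__(self):
        self._items = {}

    def save(self, item_id, item):
        self._items[item_id] = item

    def load(self, item_id):
        return self._items.get(item_id)

class OrderService:
    """Higher-level unit: combined with the repository during integration."""
    def __init__(self, repository):
        self.repository = repository

    def place_order(self, order_id, amount):
        self.repository.save(order_id, {"amount": amount, "status": "placed"})
        return self.repository.load(order_id)

# Integration test: both units are exercised together through one scenario.
service = OrderService(InMemoryRepository())
order = service.place_order("A-1", 42)
assert order == {"amount": 42, "status": "placed"}
print("Integration scenario passed")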


    System Testing


This is the next level of testing; it tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets the quality standards. This type of testing is performed by a specialized testing team.


    System testing is so important because of the following reasons:


System Testing is the first step in the Software Development Life Cycle where the application is tested as a whole. The application is tested thoroughly to verify that it meets the functional and technical specifications. The application is tested in an environment that is very close to the production environment where the application will be deployed.



KaaShiv InfoTech offers world-class Final Year Projects for BE, ME, MCA, MTech, Software Engineering and other students in Anna Nagar, Chennai.




    Website Details:


    Inplant Training:


    http://inplant-training.org/
    http://www.inplanttrainingchennai.com/
http://inplanttraining-in-chennai.com/

    Internship:


    http://www.internshipinchennai.in/
    http://www.kernelmind.com/
    http://www.kaashivinfotech.com/