Transcript of "7chapter 1 Repaired)" (8/6/2019, 64 pages)

ED3M

    CHAPTER 1

    INTRODUCTION

    1.1 General Information

Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and coding. The following are some of the objectives of testing:

1. Testing is the process of executing a program with the intent of finding an error.

2. A good test case is one that has a high probability of finding an as yet undiscovered error.

3. A successful test is one that uncovers an as yet undiscovered error.

4. Testing cannot show the absence of defects; it can only show that software errors are present.

    Tools of special importance during acceptance testing include:

Test coverage analyzer - records the control paths followed for each test case.

Time analyzer - also called a profiler, reports the time spent in various regions of the source code under different test cases. These regions of the code are the areas to concentrate on to improve system performance.

Coding standards checker - static analysis tools and standards checkers are used to inspect code for deviations from standards and guidelines.
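As a rough illustration of what a time analyzer does, the sketch below profiles a hypothetical function with Python's standard cProfile module. The names slow_sum and run_test_case are made-up stand-ins for the code under test, not part of any tool described above.

```python
import cProfile
import pstats
import io

def slow_sum(n):
    # Hypothetical "region of code" under test.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_test_case():
    return slow_sum(100_000)

profiler = cProfile.Profile()
profiler.enable()
run_test_case()
profiler.disable()

# Report time spent per region (function), most expensive first --
# these hot spots are the areas to concentrate on for performance.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

A real profiler run under different test cases would highlight which regions dominate each workload.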

    1.2 Statement of the Problem

The goal of the project is to estimate the number of defects in a software product. The availability of this estimate allows a test manager to improve his planning, monitoring, and controlling activities; this provides a more efficient testing process. Estimators can achieve high accuracy as more and more data becomes available and the process nears completion.

1.3 Objective of the Study

    Department of CSE, SVCE-2010 1


Here, a new approach, called Estimation of Defects based on Defect Decay Model (ED3M), is presented that computes an estimate of the total number of defects in an ongoing testing process. ED3M is based on estimation theory. Unlike many existing approaches, the technique presented here does not depend on historical data from previous projects or any assumptions about the requirements and/or the testers' productivity. It is a completely automated approach that relies only on the data collected during an ongoing testing process.

First, the technique should be accurate, as decisions based on inaccurate estimates can be

    time consuming and costly to correct. However, most estimators can achieve high accuracy as

    more and more data becomes available and the process nears completion.

The second important characteristic is that accurate estimates need to be available as early as

    possible during the system testing phase. The faster the estimate converges to the actual value

    (i.e., the lower its latency), the more valuable the result is to a test manager.

    Third, the technique should be generally applicable in different software testing processes

    and on different kinds of software products. The inputs to the process should be commonly

    available and should not require extensive expertise in an underlying formalism.
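This chapter does not give the ED3M equations, but the underlying idea of estimating the total defect count from a defect decay curve can be sketched as follows. This is a simplified grid-search fit on synthetic data, not the authors' actual estimator; the model form D(t) = N(1 - e^(-lam*t)) and all numbers are illustrative assumptions.

```python
import math

def fit_defect_decay(times, cum_defects):
    """Fit D(t) = N * (1 - exp(-lam * t)) by a coarse grid search over lam.

    For each candidate decay rate lam, the best-fitting total-defect
    count N has a closed form (linear least squares in N).
    Returns (N_hat, lam_hat).
    """
    best = (float("inf"), None, None)
    for i in range(1, 201):
        lam = i / 100.0
        m = [1.0 - math.exp(-lam * t) for t in times]
        denom = sum(mi * mi for mi in m)
        n_hat = sum(d * mi for d, mi in zip(cum_defects, m)) / denom
        sse = sum((d - n_hat * mi) ** 2 for d, mi in zip(cum_defects, m))
        if sse < best[0]:
            best = (sse, n_hat, lam)
    return best[1], best[2]

# Synthetic testing data: 100 total defects, decay rate 0.3 per week.
weeks = list(range(1, 11))
observed = [100.0 * (1.0 - math.exp(-0.3 * t)) for t in weeks]
n_est, lam_est = fit_defect_decay(weeks, observed)
print(n_est, lam_est)  # estimated total defects and decay rate
```

As more weekly data points arrive, a fit of this kind converges toward the true total, which is the convergence/latency behavior discussed above.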

    CHAPTER 2


    LITERATURE SURVEY

    2.1 Introduction

    The traditional way of predicting software reliability has since the 1970s been the use of

    software reliability growth models. They were developed in a time when software was developed

using a waterfall process model. This is in line with the fact that most software reliability growth

    models require a substantial amount of failure data to get any trustworthy estimate of the

    reliability. Software reliability growth models are normally described in the form of an equation

    with a number of parameters that need to be fitted to the failure data. A key problem is that the

    curve fitting often means that the parameters can only be estimated very late in testing and hence

their industrial value for decision-making is limited. This is particularly the case when development is done, for example, using an incremental approach or other short-turnaround

    approaches. A sufficient amount of failure data is simply not available. The software reliability

    growth models have initially been developed for a quite different situation than today. Thus, it is

not a surprise that they are not really fit for the challenges of today unless the problems can be

    circumvented. This paper addresses some of the possibilities of addressing the problems with

    software reliability growth models by looking at ways of estimating the parameters in software

    reliability growth models before entering integration or system testing.
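As a concrete illustration of the kind of model being discussed, the well-known Goel-Okumoto SRGM describes the expected cumulative failures as mu(t) = a(1 - e^(-bt)); once the parameters a and b have been fitted to failure data, the expected remaining defects are a - mu(t). The parameter values below are illustrative, not fitted to any real data set.

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative number of failures by time t under the
    Goel-Okumoto model: mu(t) = a * (1 - exp(-b * t)).

    a: total expected defects; b: per-defect detection rate.
    """
    return a * (1.0 - math.exp(-b * t))

a, b = 120.0, 0.2          # illustrative fitted parameters
now = 10.0                 # weeks of testing so far
found_so_far = goel_okumoto_mean(now, a, b)
remaining = a - found_so_far
print(round(found_so_far, 1), round(remaining, 1))
```

The practical problem noted above is that reliable values for a and b only emerge once a substantial amount of failure data exists, i.e. late in testing.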

    Construction simulation tools typically provide results in the form of numerical or

    statistical data. However, they do not illustrate the modeled operations graphically in 3D. This

    poses significant difficulty in communicating the results of simulation models, especially to

    persons who are not trained in simulation but are domain experts. The resulting Black-Box

    Effect is a major impediment in verifying and validating simulation models. Decision makers

    often do not have the means, the training and/or the time to verify and validate simulation

models based solely on the numerical output of simulation models and are thus always skeptical

    about simulation analyses and have little confidence in their results. This lack of credibility is a

    major deterrent hindering the widespread use of simulation as an operations planning tool in

    construction. This paper illustrates the use of DES in the design of a complex dynamic earthwork

    operation whose control logic was verified and validated using 3D animation. The model was

    created using Stroboscope and animated using the Dynamic Construction Visualizer.


    Over the years, many defect prediction studies have been conducted. The studies consider

    the problem using a variety of mathematical models (e.g., Bayesian Networks, probability

    distributions, reliability growth models, etc.) and characteristics of the project, such as module

size, file structure, etc. A useful survey and critique of these techniques is available in the literature. Several

    researchers have investigated the behavior of defect density based on module size. One group of

    researchers has found that larger modules have lower defect density. Two of the reasons

    provided for their findings are the smaller number of links between modules and that larger

    modules are developed with more care. The second group has suggested that there is an optimal

    module size for which the defect density is minimal. Their results have shown that defect density

    depicts a U-shaped behavior against module size. Still others have reported that smaller modules

    enjoy lower defect density, exploiting the famous divide and conquer rule. Another line of

studies has been based on the use of design metrics to predict fault-prone modules. Briand et al. have

    studied the degree of accuracy of capture-recapture models, proposed by biologists, to predict the

    number of remaining defects during inspection using actual inspection data. They have also

    studied the impact of the number of inspectors and the total number of defects on the accuracy of

    the estimators based on relevant recapture models. Ostrand and Bell have developed a model to

predict which files will contain the most faults in the next release based on the structure of each

    file, as well as fault and modification history from the previous release. Their research has shown

    that faults are distributed in files according to the famous Pareto Principle, i.e., 80 percent of the

    faults are found in 20 percent of the files. Zhang and Mockus assume that defects discovered and

    fixed during development are caused by implementing new features recorded as Modification

    Requests (MRs). Historical data from past projects are used to collect estimates for defect rate

    per feature MR, the time to repair the defect in a feature, and the delay between a feature

    implementation and defect repair activities. The selection criteria for past similar projects are

    based only on the size of the project while disregarding many other critical characteristics. These

estimates are used as input to a prediction model, based on the Poisson distribution, to predict the

number of defects repaired.
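The Pareto observation above (80 percent of the faults in 20 percent of the files) is easy to check on fault data. A minimal sketch, using hypothetical per-file fault counts:

```python
def pareto_share(faults_per_file, top_fraction=0.2):
    """Fraction of all faults found in the top `top_fraction` of files,
    ranked by fault count - a simple check of the 80/20 observation."""
    counts = sorted(faults_per_file, reverse=True)
    k = max(1, int(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)

# Hypothetical fault counts for ten files of a release.
faults = [40, 38, 5, 4, 3, 3, 3, 2, 1, 1]
print(pareto_share(faults))  # share of faults in the top 20% of files
```

For these invented counts, the two most fault-prone files (20 percent of the files) account for well over half of all faults, consistent with the distribution Ostrand and Bell report.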

    The technique that has been presented by Zhang and Mockus relies solely on historical

data from past projects and does not consider the data from the current project. Fenton et al. have used

    BBNs to predict the number of defects in the software. The results shown are plausible; the

    authors also explain causes of the results from the model. However, accuracy has been achieved


    at the cost of requiring expert knowledge of the Project Manager and historical data (information

    besides defect data) from past projects. Currently, such information is not always collected in

    industry. Also, expert knowledge is highly subjective and can be biased. These factors may limit

    the application of such models to a few companies that can cope with these requirements. This

    has been a key motivating factor in developing the ED3M approach. The only information

    ED3M needs is the defect data from the ongoing testing process; this is collected by almost all

    companies. Gras also advocate the use and effectiveness of BBNs for defect prediction.

    However, they point out that the use of BBN is not always possible and an alternative method,

Defect Profile Modeling (DPM), is proposed. Although DPM does not demand as much

    calibration as BBN, it does rely on data from past projects, such as the defect identifier, release

    sourced, phase sourced, release found, phase found, etc.

Many reliability models have been used to predict the number of defects in a software

    product. The models have also been used to provide the status of the testing process based on the

    defect growth curve. For example, if the defect curve is growing exponentially, then more

    undiscovered defects are to follow and testing should continue. If the growth curve has reached

    saturation, then the decision regarding the fate of testing can be reviewed by managers and

    engineers.
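The growth-curve reasoning above can be mechanized in a simple way: compare recent growth of the cumulative defect curve against its current level. The window size and threshold below are illustrative choices, not part of any published model.

```python
def growth_status(weekly_cum_defects, window=3, threshold=0.02):
    """Classify a cumulative defect curve as 'growing' or 'saturating'.

    If the average growth over the last `window` intervals is below
    `threshold` (relative to the current total), the curve is treated
    as saturated and the fate of testing can be reviewed.
    """
    recent = weekly_cum_defects[-(window + 1):]
    growth = (recent[-1] - recent[0]) / window
    if growth < threshold * weekly_cum_defects[-1]:
        return "saturating"
    return "growing"

print(growth_status([5, 12, 25, 44, 70, 98]))      # still rising steeply
print(growth_status([85, 95, 99, 100, 100, 100]))  # levelling off
```

The first (hypothetical) curve is still growing, so more undiscovered defects are likely to follow; the second has flattened, which is the saturation case discussed above.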

    2.2 Reference papers:

    2.2.1 Bayesian Estimation of Defects based on Defect Decay Model:

    BayesED3M


    The estimation of the total number of defects at early stages of the testing process helps

managers to make resource allocation and deadline decisions. The use of non-Bayesian

    approaches has proven to be accurate but presents a certain latency to achieve a reasonable

    accuracy.

    The major goal of a software testing process is to find and fix, during debugging, as many

    defects as possible and release a product with a reasonable reliability. Unfortunately, many

    companies do not use any prediction technique and the release date is solely based on a given

    deadline and not on the status of the product based on the results of the testing process. Under

such circumstances a product with a high number of defects may be released, incurring high customer support costs and customer dissatisfaction. A trade-off between releasing a product earlier and investing more time in testing is always an issue for the organization. A clear view of the status of the testing process is crucial to weigh the pros and cons of possible alternatives. The time to achieve the established goal and the percentage of the goal achieved so far are important factors to

    determine the status of a software testing process.

    Many techniques, such as software reliability models and coverage analysis, can be used

    to estimate the status of a testing process. Assume the goal of a testing process is to achieve

100% decision coverage. It is easy to see that a product with only 60% coverage is far from

achieving its goal. However, coverage alone does not help to determine the time required to achieve such a goal, much less the consequences of releasing a product with a lower coverage.

    Reliability models improve on coverage analysis as time to completion can be estimated as well

    as side effects of releasing a product with a lower reliability. However, reliability models may be

very sensitive to operational profiles, which are not easily estimated.

    An alternative measure to compute the status of a testing process is the number of

    remaining defects in a software product which clearly depends on the estimation of the total

number of defects. The availability of an accurate estimation of the number of defects at early stages of the testing process allows for proper planning of resource usage, estimation of

    completion time, and current status of the process. Also, an estimate of the number of defects in

the product by the time of release allows the inference of required customer support. This paper

    focuses on the prediction of the total number of defects using Bayesian Estimation. The

    limitations were related mainly to the presence of high noise during the initial phase of system


testing. The estimator described here is based upon two assumptions. The first assumption is that

    system testing presents an exponential or s-shaped decay in the number of defects. This

    assumption is confirmed by the data sets used in this study as well as others available in the

literature. The second assumption is that prior knowledge about the parameter which represents the

    total number of defects in the software is available. This assumption mainly addresses the issue

    of noise in the early phase of system testing.
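The role of the prior can be sketched with a textbook conjugate-Gaussian update, which combines a prior guess of the total defect count with a data-driven estimate in inverse proportion to their variances. This is only an illustration of the principle, not the BayesED3M estimator itself, and all numbers are hypothetical.

```python
def gaussian_posterior(prior_mean, prior_var, estimate, estimate_var):
    """Precision-weighted combination of a Gaussian prior on the total
    defect count with a Gaussian data-driven estimate (conjugate update).
    Returns the posterior mean and variance."""
    w_prior = 1.0 / prior_var
    w_data = 1.0 / estimate_var
    mean = (w_prior * prior_mean + w_data * estimate) / (w_prior + w_data)
    var = 1.0 / (w_prior + w_data)
    return mean, var

# Early in testing: the data estimate is very noisy, so the prior dominates
# and damps the noise.
early, early_var = gaussian_posterior(100.0, 400.0, 160.0, 1600.0)
# Late in testing: the data estimate is precise and dominates the prior.
late, late_var = gaussian_posterior(100.0, 400.0, 130.0, 25.0)
print(round(early, 1), round(late, 1))
```

Early on, the posterior stays near the prior mean despite a wildly different data estimate; late in testing it tracks the data, which is exactly why the prior mainly addresses early-phase noise.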

    2.2.2 A Study of Estimation Methods for Defect Estimation:

    Accurate defect prediction is useful for planning and management of software testing.

The authors discuss the statistical characteristics and requirements of the estimation methods

    available for developing the defect estimators. They have also analyzed the models, assumptions

    and statistical performance of three defect estimators available in the field using data from the

    literature. The study of the estimation methods here is general enough to be applied to other

estimation problems in software engineering or other related fields.

The goal of the paper is to discuss the various estimation methods which are used to develop these defect estimation techniques, including the assumptions that each method makes about the data model, probability distribution, and mean and variance of the data and estimator, as well as the statistical efficiency of the estimators developed from these estimation methods. If there is prior statistical knowledge about the parameter which we are interested to

    estimate, then an estimator based on Bayesian Approach can be found. Note that the prior

    knowledge can be in the form of prior mean and variance and probability distribution of the

parameter. There are several Bayesian estimators available. The main advantage of Bayesian estimators is that they provide better early estimates compared to estimators developed from the Classical Approach. In the initial phase of software testing, not enough test data is available; consequently, in the absence of prior knowledge, the initial performance of the estimator suffers.

An accurate prediction of the total number of software defects helps in evaluating the status of the testing process. The accuracy of the estimator, however, depends on the estimation method which is used to develop it. The authors have tried to provide a general framework of available

estimation methods. Although they have discussed this framework for the defect estimation problem,

    the discussion is general enough to be used for other estimation problems. They have elicited the


    requirements of each method. They have also discussed the statistical efficiency that each

    method offers. Note that even though the discussion is limited to single parameter estimation, it

    can be easily extended to a vector of parameters to be estimated.

Many estimators exist for the prediction of the number of defects and/or failure intensity. However, due to space constraints, the discussion is limited to a brief description of three estimators, which are discussed next. Many reliability models have been proposed which predict the total

    number of defects in the software.

    CHAPTER 3

    HARDWARE AND SOFTWARE REQUIREMENTS

    3.1 Hardware Requirements

    Processor : Intel Pentium 4 or equivalent


    Hard disk : 10 GB

    Memory : 1 GB RAM or greater

    Display : VGA

    3.2 Software Requirements

    Operating System : Windows XP

    Front End : ASP.Net, Visual C#

    Back End : Microsoft SQL Server 2005, ADO.Net

    Platform Used : Microsoft Visual Studio 2005

    Documentation package : Microsoft Word

    CHAPTER 4

    SOFTWARE REQUIREMENT SPECIFICATION

    (SRS)

    4.1 Overview of the .NET Framework

    The .NET Framework is a new computing platform that simplifies application development

    in the highly distributed environment of the Internet. The .NET Framework is designed to

    fulfill the following objectives:


    To provide a consistent object-oriented programming environment whether object code is

    stored and executed locally, executed locally but Internet-distributed, or executed

    remotely.

    To provide a code-execution environment that minimizes software deployment and

    versioning conflicts.

    To provide a code-execution environment that guarantees safe execution of code,

    including code created by an unknown or semi-trusted third party.

    To provide a code-execution environment that eliminates the performance problems of

    scripted or interpreted environments.

    To make the developer experience consistent across widely varying types of applications,

    such as Windows-based applications and Web-based applications.

    To build all communication on industry standards to ensure that code based on the .NET

    Framework can integrate with any other code.

    The .NET Framework has two main components: the common language runtime and

    the .NET Framework class library. The common language runtime is the foundation of the .NET

    Framework. You can think of the runtime as an agent that manages code at execution time,

    providing core services such as memory management, thread management, and remoting, while

    also enforcing strict type safety and other forms of code accuracy that ensure security and

    robustness. In fact, the concept of code management is a fundamental principle of the runtime.

    Code that targets the runtime is known as managed code, while code that does not target the

runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types.

    The .NET Framework can be hosted by unmanaged components that load the common

    language runtime into their processes and initiate the execution of managed code, thereby

    creating a software environment that can exploit both managed and unmanaged features. The

    .NET Framework not only provides several runtime hosts, but also supports the development of

    third-party runtime hosts.


    4.1.1 Features of the Common Language Runtime

    The common language runtime manages memory, thread execution, code execution, code

    safety verification, compilation, and other system services. These features are intrinsic to

    the managed code that runs on the common language runtime.

    With regards to security, managed components are awarded varying degrees of trust,

    depending on a number of factors that include their origin (such as the Internet, enterprise

    network, or local computer). This means that a managed component might or might not be

    able to perform file-access operations, registry-access operations, or other sensitive

    functions, even if it is being used in the same active application.

    The runtime enforces code access security. For example, users can trust that an executable

    embedded in a Web page can play an animation on screen or sing a song, but cannot access

    their personal data, file system, or network. The security features of the runtime thus

enable legitimate Internet-deployed software to be exceptionally feature rich.

    The runtime also enforces code robustness by implementing a strict type- and code-

    verification infrastructure called the common type system (CTS). The CTS ensures that all

    managed code is self-describing. The various Microsoft and third-party language compilers

generate managed code that conforms to the CTS. This means that managed code can consume other managed types and instances, while strictly enforcing type fidelity and type

    safety.

    The runtime also accelerates developer productivity. For example, programmers can write

    applications in their development language of choice, yet take full advantage of the

    runtime, the class library, and components written in other languages by other developers.

    Any compiler vendor who chooses to target the runtime can do so. Language compilers

    that target the .NET Framework make the features of the .NET Framework available to

    existing code written in that language, greatly easing the migration process for existing

    applications.

    Finally, the runtime can be hosted by high-performance, server-side applications, such as

    Microsoft SQL Server and Internet Information Services (IIS). This infrastructure


    enables you to use managed code to write your business logic, while still enjoying the

    superior performance of the industry's best enterprise servers that support runtime hosting.

    4.1.2 Common Type System

    The common type system defines how types are declared, used, and managed in the runtime,

    and is also an important part of the runtime's support for cross-language integration. The

    common type system performs the following functions:

Establishes a framework that enables cross-language integration, type safety, and high-performance code execution.

    Provides an object-oriented model that supports the complete implementation of many

    programming languages.

    Defines rules that languages must follow, which helps ensure that objects written in

    different languages can interact with each other.

In This Section

Common Type System Overview

    Describes concepts and defines terms relating to the common type system.

    Type Definitions

    Describes user-defined types.

    Type Members

    Describes events, fields, nested types, methods, and properties, and concepts such as

    member overloading, overriding, and inheritance.

    Value Types

    Describes built-in and user-defined value types.

    Classes


    Describes the characteristics of common language runtime classes.

    Delegates

    Describes the delegate object, which is the managed alternative to unmanaged function

    pointers.

    Arrays

    Describes common language runtime array types.

    Interfaces

    Describes characteristics of interfaces and the restrictions on interfaces imposed by the

    common language runtime.

    Pointers

    Describes managed pointers, unmanaged pointers, and unmanaged function pointers.

Related Sections

Common Language Runtime

Describes the run-time environment that manages the execution of code and provides

    application development services.

    4.1.3 Cross-Language Interoperability

    The common language runtime provides built-in support for language interoperability.

    However, this support does not guarantee that developers using another programming language

    can use code you write. To ensure that you can develop managed code that can be fully used by

    developers using any programming language, a set of language features and rules for using them

    called the Common Language Specification (CLS) has been defined. Components that follow

    these rules and expose only CLS features are considered CLS-compliant.

    This section describes the common language runtime's built-in support for language

    interoperability and explains the role that the CLS plays in enabling guaranteed cross-language

    interoperability. CLS features and rules are identified and CLS compliance is discussed.


    In This Section

    Language Interoperability

    Describes built-in support for cross-language interoperability and introduces the

    Common Language Specification.

    What is the Common Language Specification?

    Explains the need for a set of features common to all languages and identifies CLS rules

    and features.

    Writing CLS-Compliant Code

    Discusses the meaning of CLS compliance for components and identifies levels of CLS

    compliance for tools.

    Common Type System

    Describes how types are declared, used, and managed by the common language runtime.

    Metadata and Self-Describing Components

    Explains the common language runtime's mechanism for describing a type and storing

    that information with the type itself.

4.2 .NET Framework Class Library

    The .NET Framework class library is a collection of reusable types that tightly integrate

    with the common language runtime. The class library is object oriented, providing types from

    which your own managed code can derive functionality. This not only makes the .NET

    Framework types easy to use, but also reduces the time associated with learning new features of

    the .NET Framework. In addition, third-party components can integrate seamlessly with classes

    in the .NET Framework.


    For example, the .NET Framework collection classes implement a set of interfaces that

    you can use to develop your own collection classes. Your collection classes will blend

    seamlessly with the classes in the .NET Framework.

    As you would expect from an object-oriented class library, the .NET Framework types

    enable you to accomplish a range of common programming tasks, including tasks such as string

    management, data collection, database connectivity, and file access. In addition to these common

    tasks, the class library includes types that support a variety of specialized development scenarios.

    For example, you can use the .NET Framework to develop the following types of applications

    and services:

    Console applications.

    Scripted or hosted applications.

    Windows GUI applications (Windows Forms).

    ASP.NET applications.

    XML Web services.

    Windows services.

    For example, the Windows Forms classes are a comprehensive set of reusable types that

    vastly simplify Windows GUI development. If you write an ASP.NET Web Form application,

    you can use the Web Forms classes.

    4.3 Assemblies Overview

    Assemblies are a fundamental part of programming with the .NET Framework. An assembly

    performs the following functions:

    It contains code that the common language runtime executes. Microsoft intermediate

    language (MSIL) code in a portable executable (PE) file will not be executed if it does

    not have an associated assembly manifest. Note that each assembly can have only one

entry point (that is, DllMain, WinMain, or Main).


    It forms a security boundary. An assembly is the unit at which permissions are requested

    and granted. For more information about security boundaries as they apply to assemblies,

    see Assembly Security Considerations

    It forms a type boundary. Every type's identity includes the name of the assembly in

    which it resides. A type called MyType loaded in the scope of one assembly is not the

    same as a type called MyType loaded in the scope of another assembly.

    It forms a reference scope boundary. The assembly's manifest contains assembly

    metadata that is used for resolving types and satisfying resource requests. It specifies the types

    and resources that are exposed outside the assembly. The manifest also enumerates other

    assemblies on which it depends.

It forms a version boundary. The assembly is the smallest versionable unit in the

    common language runtime; all types and resources in the same assembly are versioned as

    a unit. The assembly's manifest describes the version dependencies you specify for any

    dependent assemblies. For more information about versioning, see Assembly Versioning

    It forms a deployment unit. When an application starts, only the assemblies that the

    application initially calls must be present. Other assemblies, such as localization

    resources or assemblies containing utility classes can be retrieved on demand. This

    allows applications to be kept simple and thin when first downloaded. For more

    information about deploying assemblies, see Deploying Applications

    It is the unit at which side-by-side execution is supported. For more information about

    running multiple versions of the same assembly, see Side-by-Side Execution

    Assemblies can be static or dynamic. Static assemblies can include .NET Framework

    types (interfaces and classes), as well as resources for the assembly (bitmaps, JPEG files,

    resource files, and so on). Static assemblies are stored on disk in PE files. You can also use

    the .NET Framework to create dynamic assemblies, which are run directly from memory and are

    not saved to disk before execution. You can save dynamic assemblies to disk after they have

    executed.


    There are several ways to create assemblies. You can use development tools, such as

    Visual Studio .NET, that you have used in the past to create .dll or .exe files. You can use tools

    provided in the .NET Framework SDK to create assemblies with modules created in other

    development environments. You can also use common language runtime APIs, such as

Reflection.Emit, to create dynamic assemblies.

    CHAPTER 5

    SYSTEM ANALYSIS


    5.1 General Use Cases

(Use case diagram: actors "user" and "Admin"; use cases: Login, Browse, Error estimation, Error correction, Report.)

Figure 5.1 General use case of ED3M

    5.2 Use Case based on each module:

    The Unified Modeling Language (UML) is a standard language for specifying,

    visualizing, constructing, and documenting the artifacts of software systems, as well as for

    business modeling and other non-software systems. The UML represents a collection of best


    engineering practices that have proven successful in the modeling of large and complex systems.

The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects. Using the UML helps project teams communicate, explore potential designs, and validate the architectural design of the software.

Here we provide use cases for the different modules present in the ED3M project.

    5.2.1 ED3M scenario Login Page

    Figure 5.2.1: ED3M Scenario Login Page

    Brief Description:

1. The User is on the Windows Authentication page.

2. The User logs in using his user id and password.

3. The User is redirected to the project home page.

4. If the user does not provide a valid user name and password, an error is shown.


    5.2.2 Use Case: Browse

    Figure 5.2.2 browse scenario

    Brief Description:

1. The user moves from the login page to Browse, where he/she has to select the folder to be tested.

2. The user is redirected to another module, Folder Select, in which a folder is chosen.

3. Once the user selects a folder to test, all the files present in that folder are displayed, along with their names and .exe files.

4. Only .cs files are selected for testing.


    5.2.3 Use case: Folder select

    Figure 5.2.3 folder scenario

    Brief Description:

1. All available drives are displayed.

2. The user selects a folder; while choosing the folder for testing, the corresponding path is displayed and stored.

3. All possible nodes (files/folders) are displayed.

4. Once the user has selected, he is redirected back to Browse for further steps.


    5.2.4 Use Case: Estimator

    Figure 5.2.4 estimator scenario

    Brief Description:

The Estimator is the heart of the project.

1. The Estimator takes the folder/.cs file as input.

2. Enum checks for all possible data types present in the file.

3. Modifier modifies all the related data required during estimation.

4. CSparser takes the modified .cs files, generates tokens, and later parses based on the token numbers.

    5.2.5 Use Case: Corrector


    Figure 5.2.5 Error corrector scenario

    Brief Description:

1. This module displays the error based on the token number.

2. Prior to this, the corrector finds the line number where the defect was encountered and creates a doc tree (specifying in its leaf nodes which token is affected).

3. It shows the defect and finally specifies its line number, so that the user can easily traverse to it.

4. Once the defect has been detected, the user modifies the code, which gets updated in the folder; later the user has to log in to verify whether the files have been updated with the new changes.

5. The user has to run the .cs file explicitly.

    5.2.6 Use Case: Report


    Figure 5.2.6 report scenario

    Brief Description:

1. Finally, the user maintains a record of all occurred defects and their solutions.

2. The user name, email id, project name, and status are also recorded.

3. This record is maintained in order to estimate how well the software works once it has been deployed.

    5.3 Computing the Estimator


The decay of defects in the System Test phase is assumed to have a double exponential format as given below:

    R(n) = Rinit * (λ1*e^(-λ2*n) - λ2*e^(-λ1*n)) / (λ1 - λ2)

Where,

R(n) is the number of remaining defects at day (or any other fixed time interval) n, after removing defects at day n - 1.

Rinit is the total or initial number of defects present in the software product.

The λ1 and λ2 values are calculated by the λ1λ2 Approximator.

Let the defects removed at any day k be given by d(k). The relationship between R(n) and d(k) is given by

    R(n) = Rinit - D(n)

Where

    D(n) = d(1) + d(2) + ... + d(n),    for 1 <= k <= n.

Now, solving for D(n), which represents the total number of defects removed from the product under test in n days, leads to

    D(n) = Rinit * (1 - (λ1*e^(-λ2*n) - λ2*e^(-λ1*n)) / (λ1 - λ2))

    The model defined here is deterministic, i.e., it does not account for random variations.

However, noise and unforeseen perturbations are commonplace in the software testing process

    and a deterministic model cannot capture this stochastic behavior. There are multiple sources for

    these variations, such as work force relocation, noise in the data collection process, test of

    complex or poorly implemented parts of the system, multiple reports of the same error, among

    others. A random variable that consists of the sum of a large number of small random variables

    (various sources of noise) under general conditions can be approximated by Gaussian random


    variable. Because of this, the Gaussian distribution finds its use in a wide variety of man-made

and natural phenomena. Therefore, random behavior is incorporated into the model by adding a random error, defined by white Gaussian noise z[k] ~ N(0, σ²), to the defects removed every day.

    The model is simplified by first assuming that a Gaussian noise affects the whole testing

    process identically. Moreover, the samples taken at different instants are independent of each

    other.

Fig. 5.3.1 Finding d(k) for each day k.

On applying the Gaussian central limit theorem, a simplified problem is now defined as:

    D[n] = Rinit * h[n] + w[n],    where h[n] = 1 - (λ1*e^(-λ2*n) - λ2*e^(-λ1*n)) / (λ1 - λ2)

Writing this in vector form gives D = Rinit*h + w, where D, h, and w are vectors of dimension N x 1. On applying the probability density function of D[n], the maximum likelihood estimator finally results in:

    Rinit_estimate = (h'D) / (h'h)
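As an illustrative sketch (written in Python rather than the project's C#; the parameter values, data, and helper names here are invented for illustration), the double-exponential decay and the least-squares/maximum-likelihood estimate Rinit = h'D / h'h can be exercised on synthetic data:

```python
import math
import random

def h(n, lam1, lam2):
    # Fraction of the initial defects removed by day n (double-exponential decay).
    return 1.0 - (lam1 * math.exp(-lam2 * n) - lam2 * math.exp(-lam1 * n)) / (lam1 - lam2)

def estimate_rinit(D, lam1, lam2):
    # Maximum-likelihood estimate Rinit = h'D / h'h for cumulative defect data D.
    hv = [h(n + 1, lam1, lam2) for n in range(len(D))]
    return sum(a * d for a, d in zip(hv, D)) / sum(a * a for a in hv)

# Synthetic scenario (made-up values): 500 initial defects, lam1 > lam2,
# plus white Gaussian noise on the cumulative defect counts.
rinit, lam1, lam2 = 500.0, 0.30, 0.05
random.seed(1)
D = [rinit * h(n + 1, lam1, lam2) + random.gauss(0, 2) for n in range(60)]

print(round(estimate_rinit(D, lam1, lam2), 1))  # close to 500
```

As more days of defect data become available, the estimate converges toward the true Rinit, which matches the observation that accuracy improves as testing nears completion.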

    5.4 Approximator


As stated before, computing the estimator using three unknowns, Rinit, λ1, and λ2, leads to a very complex solution. To avoid this situation, the λ1λ2 Approximator approximates the values of λ1 and λ2, which are then used in the Estimator. This approach simplifies the problem to determining one unknown, Rinit.

The λ1λ2 Approximator uses two steps:

The first step is to find the initial values of λ1 and λ2 using a technique called Exponential Peeling [24]. These initial values are then provided as input to a nonlinear regression method in the second step.

The nonlinear regression, which is implemented using the Gauss-Newton method with the Levenberg-Marquardt modification [24], results in the values of λ1 and λ2 that are used in the Estimator block.

5.4.1 Exponential Peeling:

The Exponential Peeling technique is used to find the initial values of λ1 and λ2. For large n the slower exponential dominates; on normalizing and taking the logarithm, the resulting curve can be approximated by a straight line whose slope is m2 = -λ2. Peeling this slow component off and fitting the early samples in the same way, the initial approximation for λ1 can be found as well.
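A minimal sketch of the peeling idea (in Python rather than the project's C#, with made-up parameter values): a least-squares line through log R(n) on the tail of the remaining-defect curve gives -λ2, and fitting the residual after peeling that component off gives -λ1:

```python
import math

def linefit(xs, ys):
    # Ordinary least-squares straight line; returns (slope, intercept).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def peel(R):
    # Initial (lam1, lam2) approximations from remaining-defect samples R(1), R(2), ...
    days = list(range(1, len(R) + 1))
    tail = len(R) // 2
    # On the tail the slower exponential dominates: log R is roughly a line of slope -lam2.
    s2, b2 = linefit(days[tail:], [math.log(r) for r in R[tail:]])
    # Peel the slow component off; the early residual decays at the faster rate lam1.
    resid = [abs(r - math.exp(b2 + s2 * d)) for d, r in zip(days, R)]
    head = tail // 2
    s1, _ = linefit(days[:head], [math.log(res) for res in resid[:head]])
    return -s1, -s2

# Noise-free synthetic curve with lam1 = 0.30, lam2 = 0.05 (made-up values).
rinit, lam1, lam2 = 500.0, 0.30, 0.05
R = [rinit * (lam1 * math.exp(-lam2 * n) - lam2 * math.exp(-lam1 * n)) / (lam1 - lam2)
     for n in range(1, 41)]
l1, l2 = peel(R)
print(round(l1, 2), round(l2, 2))  # roughly 0.30 and 0.05
```

These values are only starting points; the nonlinear regression in the next step refines them.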


The values of λ1 and λ2 computed using exponential peeling are not accurate enough to be used directly by the estimator. However, they have been shown to be reasonably good initial values for nonlinear regression methods.

5.4.2 Nonlinear Regression:

The second step is to use a nonlinear regression technique to compute the final approximations of λ1 and λ2, as shown in the figure below.

Fig 5.4.1 The ED3M model composed of an Estimator, λ1λ2 Approximator, and an Error Correction block used to improve the convergence time.

Let θ = [λ1, λ2]', then rewriting h as a function of θ leads to h(θ). The Taylor series expansion of h(θ) is applied around the initial values θ0, which results in the residual error r(θ) and the Gauss-Newton update given by:

    θ(k+1) = θ(k) + (J'J)^(-1) J' r(θ(k))

where J is the Jacobian matrix of the residuals evaluated at θ(k).

Sometimes the matrix J'J is singular or ill-conditioned. To overcome this obstacle, the Levenberg-Marquardt modification is used, which replaces J'J with J'J + μI for a damping factor μ > 0.
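The Gauss-Newton iteration with Levenberg-Marquardt damping can be sketched numerically (Python, with a finite-difference Jacobian; the data and starting values are made up, and the project itself does not necessarily implement it this way):

```python
import math

def model(n, lam1, lam2):
    # Cumulative fraction of defects removed by day n (double-exponential decay).
    return 1.0 - (lam1 * math.exp(-lam2 * n) - lam2 * math.exp(-lam1 * n)) / (lam1 - lam2)

def fit(days, D, rinit, theta0, mu=1e-3, iters=60):
    # Gauss-Newton with Levenberg-Marquardt damping for theta = (lam1, lam2).
    th = list(theta0)
    def resid(t):
        return [d - rinit * model(n, t[0], t[1]) for n, d in zip(days, D)]
    def sse(t):
        return sum(r * r for r in resid(t))
    for _ in range(iters):
        r = resid(th)
        # Jacobian of the residuals by central differences (two columns).
        eps, cols = 1e-6, []
        for j in range(2):
            tp, tm = list(th), list(th)
            tp[j] += eps
            tm[j] -= eps
            cols.append([(a - b) / (2 * eps) for a, b in zip(resid(tp), resid(tm))])
        # Normal equations (J'J + mu*I) delta = -J'r, a 2x2 linear system.
        a11 = sum(c * c for c in cols[0]) + mu
        a22 = sum(c * c for c in cols[1]) + mu
        a12 = sum(u * v for u, v in zip(cols[0], cols[1]))
        g1 = -sum(c * ri for c, ri in zip(cols[0], r))
        g2 = -sum(c * ri for c, ri in zip(cols[1], r))
        det = a11 * a22 - a12 * a12
        trial = [th[0] + (a22 * g1 - a12 * g2) / det,
                 th[1] + (a11 * g2 - a12 * g1) / det]
        try:
            ok = sse(trial) < sse(th)
        except OverflowError:
            ok = False
        if ok:
            th, mu = trial, mu / 2   # accept the step and relax the damping
        else:
            mu *= 10                 # reject the step and damp harder
    return th

days = list(range(1, 41))
D = [500.0 * model(n, 0.30, 0.05) for n in days]    # made-up, noise-free data
l1, l2 = fit(days, D, 500.0, (0.25, 0.06))          # peeling-style initial guess
print(round(l1, 3), round(l2, 3))  # near 0.30 and 0.05
```

The damping term μI keeps the 2x2 system invertible even when J'J is nearly singular, which is exactly the situation the Levenberg-Marquardt modification addresses.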

    CHAPTER 6

    SYSTEM DESIGN


In the design phase, ED3M is divided into two parts, namely:

Front end design

Back end design

    6.1 Front end design

    1. Login:

Two text boxes, one for the user name and one for the password, with a corresponding label for each. Three buttons are used: one for submit, one for change of password, and one for exit.

    2. Browse:

One text box for the folder path, and three list boxes: one for all files present in the folder, one for only the .cs files, and the last for details of the folder. Two buttons, one for next and the other for exit.

    3. Folder select:

One text box for the path, one large list box to show the tree structure of the folders and files present in the system, and one button to proceed.

    4. Estimator :

    One list box for displaying the defects.

    5. Corrector:


Two list boxes, one for traversal of the code and the other for showing all possible nodes (such as methods, classes, and main).

    6. Report:

Text boxes, each with its label.

    6.2 Activity Diagram:


(Activity diagram: Login → Check login; on a successful login, or after a new user registers, the user browses the folder and selects input, followed by Error Estimation, Error Correction, and Report.)

Figure 6.1 Activity Diagram

    6.3 Page layout


Fig 6.2: Page Layout

6.4 Back end design:


The back end consists of tables where the data entered from the front end is selected, inserted, and updated. The back-end database contains the following tables:

1. Login - it contains the username and password.

   Username | Password | Conform | Email

2. Edreport - it contains the file name, project name, and status.

   FileName | ProjectName | UserName | Date | Status

3. Error - it contains error estimation info.

   ErrorEstimator

4. UserReport - it contains the user report.

   UserReport
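As a hedged sketch of how the Login table is used (Python's sqlite3 standing in for the project's SQL Server database; the column names follow the table above and the sample row is invented), the credential check should be a parameterized query:

```python
import sqlite3

# In-memory stand-in for the project's back-end database (schema inferred
# from the table descriptions above; the sample row is made up).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE login (username TEXT, password TEXT, conform TEXT, email TEXT)")
con.execute("INSERT INTO login VALUES ('alice', 'secret', 'secret', 'alice@example.com')")

def check_login(con, user, pwd):
    # Parameterized credential check: the placeholders avoid SQL injection.
    cur = con.execute("SELECT 1 FROM login WHERE username=? AND password=?", (user, pwd))
    return cur.fetchone() is not None

print(check_login(con, "alice", "secret"))   # True
print(check_login(con, "alice", "wrong"))    # False
```

The same parameterized pattern applies to the inserts into the Edreport and UserReport tables.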

    6.5 E-R Diagram:


(E-R diagram: the user logs in, browses and selects the input project, the error is estimated and corrected, and a report is generated.)

Figure 6.3 E-R diagram.

    CHAPTER 7

    IMPLEMENTATION & SAMPLE CODE

    7.1 Sample code for login


    using System;

    using System.Collections.Generic;

    using System.ComponentModel;

    using System.Data;

    using System.Drawing;

    using System.Text;

    using System.Windows.Forms;

    using System.IO;

    using System.Data.SqlClient;

    namespace CodeTreeView

    {

    public partial class regform : Form

    {

    SqlConnection con;

    SqlCommand cmd;

    SqlCommand cmd1;

    SqlDataReader dr;

    public regform()

    {

    InitializeComponent();


    }

private void button1_Click(object sender, EventArgs e)
{
    con = new SqlConnection("Data Source=HOME-40C1749D5D\\SQLEXPRESS;Initial Catalog=estimationcvdc;Integrated Security=True");
    con.Open();
    // Parameterized query: avoids SQL injection from the user name/password fields.
    cmd = new SqlCommand("select * from login where username=@username AND password=@password", con);
    cmd.Parameters.Add("@username", SqlDbType.VarChar, 20).Value = username.Text;
    cmd.Parameters.Add("@password", SqlDbType.VarChar, 20).Value = password.Text;
    dr = cmd.ExecuteReader();
    if (dr.Read())
    {
        dr.Close();
        // Record the user in the report table only after a successful login.
        cmd1 = new SqlCommand("insert into rreport(userreport) values(@userreport)", con);
        cmd1.Parameters.Add("@userreport", SqlDbType.VarChar, 20).Value = username.Text;
        cmd1.ExecuteNonQuery();
        browse bbb = new browse();
        bbb.Show();
    }
    else
    {
        label4.Text = "please enter a correct username and password";


    }

    }

    private void linkLabel1_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)

    {

    registration rrr = new registration();

    rrr.Show();

    }

    private void button2_Click(object sender, EventArgs e)

    {

    Application.Exit();

    }

    private void button3_Click(object sender, EventArgs e)

    {

    registration rrr = new registration();

    rrr.Show();

    }

    private void button4_Click(object sender, EventArgs e)

    {

    Update f=new Update();

    f.Show();


}
}
}

    7.2 Sample code for enum

    using System;

    namespace IvanZ.CSParser

    {

    public enum Types

    {

    Class, Interface, Struct, Enum, Delegate

    }

    public enum Members

    {

Constant, Field, Method, Property, Event, Indexer, Operator, Constructor, StaticConstructor, Destructor, NestedType

    }

    public enum ModifierAttribs

    {

    // Access

    Private = 0x0001,

    Internal = 0x0002,

    Protected = 0x0004,

    Public = 0x0008,


    // Scope

    Abstract = 0x0010,

    Virtual = 0x0020,

    Sealed = 0x0040,

    Static = 0x0080,

    Override = 0x0100,

    Readonly = 0x0200,

    Const = 0X0400,

    New = 0x0800,

    // Special

    Extern = 0x1000,

    Volatile = 0x2000,

    Unsafe = 0x4000

    }

    public enum ParamModifiers

    {

    In,

    Out,

    Ref

}
}

    7.3 Sample code for modifier


    using System;

    using System.CodeDom;

    using System.Reflection;

    namespace IvanZ.CSParser

    {

    public class Modifiers

    {

    public static bool CheckTypeModifiers(ModifierAttribs a, bool classScope, Types

    type)

    {

    bool res= true;

    ModifierAttribs mask=ModifierAttribs.Extern|

    ModifierAttribs.Override|

    ModifierAttribs.Readonly|

    ModifierAttribs.Static|

    ModifierAttribs.Virtual|

    ModifierAttribs.Volatile;

    if (type!= Types.Class)

    mask|=ModifierAttribs.Abstract|

    ModifierAttribs.Sealed;

    if (type==Types.Enum)


    mask|=ModifierAttribs.Unsafe;

    if (!classScope)

    {

    mask |= ModifierAttribs.Private|

    ModifierAttribs.Protected;

    }

    if ((a & mask)!=0) res=false;

    return res;

    }

    public static bool CheckMemberModifiers(ModifierAttribs a, Members type)

    {

    bool res= true;

    ModifierAttribs mask=0;

    switch (type)

    {

    case Members.Constant:

    mask= ModifierAttribs.Abstract|

    ModifierAttribs.Extern|

    ModifierAttribs.Sealed|

    ModifierAttribs.Override|

    ModifierAttribs.Readonly|


    ModifierAttribs.Virtual|

    ModifierAttribs.Volatile|

    ModifierAttribs.Static|

    ModifierAttribs.Unsafe;

    break;

    case Members.Constructor:

    mask= ModifierAttribs.Abstract|

    ModifierAttribs.New|

    ModifierAttribs.Const|

    ModifierAttribs.Sealed|

    ModifierAttribs.Override|

    ModifierAttribs.Readonly|

    ModifierAttribs.Virtual|

    ModifierAttribs.Volatile;

    break;

    case Members.StaticConstructor:

    mask= ModifierAttribs.Abstract|

    ModifierAttribs.Protected|

    ModifierAttribs.Private|

    ModifierAttribs.Public|

    ModifierAttribs.Internal|


    ModifierAttribs.Sealed|

    ModifierAttribs.New|

    ModifierAttribs.Const|

    ModifierAttribs.Override|

    ModifierAttribs.Readonly|

    ModifierAttribs.Virtual|

    ModifierAttribs.Volatile;

    break;

    case Members.Destructor:

    mask= ModifierAttribs.Abstract|

    ModifierAttribs.Protected|

    ModifierAttribs.Private|

    ModifierAttribs.Public|

    ModifierAttribs.Internal|

    ModifierAttribs.Sealed|

    ModifierAttribs.Static|

    ModifierAttribs.New|

    ModifierAttribs.Const|

    ModifierAttribs.Override|

    ModifierAttribs.Readonly|

    ModifierAttribs.Virtual|


    ModifierAttribs.Volatile;

    break;

    case Members.Operator:

    mask= ModifierAttribs.Abstract|

    ModifierAttribs.Protected|

    ModifierAttribs.Private|

    ModifierAttribs.Internal|

    ModifierAttribs.Sealed|

    ModifierAttribs.New|

    ModifierAttribs.Const|

    ModifierAttribs.Override|

    ModifierAttribs.Readonly|

    ModifierAttribs.Virtual|

    ModifierAttribs.Volatile;

    break;

    case Members.Field:

    mask= ModifierAttribs.Abstract|

    ModifierAttribs.Const|

    ModifierAttribs.Sealed|

    ModifierAttribs.Extern|

    ModifierAttribs.Override|


    ModifierAttribs.Virtual;

    break;

    case Members.Indexer:

    mask= ModifierAttribs.Readonly|

    ModifierAttribs.Volatile|

    ModifierAttribs.Static|

    ModifierAttribs.Const;

    break;

    default:

    mask= ModifierAttribs.Readonly|

    ModifierAttribs.Volatile|

    ModifierAttribs.Const;

    break;

    }

    if ((a & mask)!=0) res=false;

return res;
}
}
}

    7.4 sample code for CSparser

    using System.Text;

    using System;


    using IvanZ.CSParser;

    using System.Globalization;

    namespace Mono.CSharp

    {

    using System.Collections;

    using System.CodeDom;

    using Mono.Languages;

    using IvanZ.CSParser;

    public class CSharpParser : GenericParser {

    /** simplified error message.

    @see yyerror

    */

    public void yyerror (string message) {

    yyerror(message, null);

    }

    /** (syntax) error message.

    Can be overwritten to control message format.

    @param message text to be displayed.

    @param expected vector of acceptable tokens, if available.

    */

    public void yyerror (string message, string[] expected) {


    if ((expected != null) && (expected.Length > 0)) {

    System.Console.Write ("(" + lexer.Line+","+lexer.Col+") "+ message+", expecting");

    for (int n = 0; n < expected.Length; ++ n)

    System.Console.Write (" "+expected[n]);

    System.Console.WriteLine ();

    } else

    System.Console.WriteLine (message);

    }

    /** debugging support, requires the package jay.yydebug.

    Set to null to suppress debugging messages.

    */

protected yydebug.yyDebug debug;
}
}

    CHAPTER 8

    TESTING

In the test phase, various test cases intended to find the bugs and loopholes existing in the software are designed. During testing, the program to be tested is executed with a set of test cases, and the output of the program is checked to verify that it is performing as expected.


The software is tested using control structure testing methods under the white box testing techniques. Two tests are done under this approach: condition testing, to check for Boolean operator errors, Boolean variable errors, Boolean parenthesis errors, etc.; and loop testing, to check simple loops and nested loops.

Faults can occur during any phase in the software development cycle. Verification is performed on the output of each phase, but some faults are likely to remain undetected by these methods. These faults will eventually be reflected in the code. Testing is usually relied upon to detect these faults, in addition to the faults introduced during the coding phase. For this, different levels of testing are used, which perform different tasks and aim to test different aspects of the system.

8.1 Testing Levels

    Unit Testing

    Integration Testing

    System Testing

8.1.1 UNIT TESTING

Unit testing focuses verification effort on the smallest unit of software design: the module. Using the detailed design description as a guide, important control paths are tested to uncover errors within the boundary of the module. Unit testing considers the following aspects of a program module while testing:

Interface

Local data structures

Boundary conditions

Independent paths

Error-handling paths


The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in the algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing.

All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once. Finally, all error-handling paths are tested. Tests of data flow across a module interface are required before any other test is initiated: if data do not enter and exit properly, all other tests are wasted.

Unit testing is normally considered an adjunct to the coding step. After source-level code has been developed, reviewed, and verified for correct syntax, unit test design begins. A review of design information provides guidance for establishing test cases that are likely to uncover errors in each category of the program unit. Each test case should be coupled with a set of expected results.

Unit testing is simplified when a module with high cohesion is designed; when a module addresses only one function, the number of test cases is reduced and errors can be more easily predicted and uncovered.
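The categories above can be made concrete with a small sketch (Python's unittest rather than the project's C#; select_cs_files is a hypothetical helper mirroring the Browse module's .cs filter), with one test each for the interface, a boundary condition, and the error-handling path:

```python
import unittest

def select_cs_files(names):
    # Hypothetical helper mirroring the Browse module: keep only .cs files.
    if names is None:
        raise ValueError("no folder contents supplied")
    return [n for n in names if n.lower().endswith(".cs")]

class SelectCsFilesTest(unittest.TestCase):
    def test_interface(self):
        # Interface: normal data flows in and out correctly.
        self.assertEqual(select_cs_files(["a.cs", "b.exe", "C.CS"]), ["a.cs", "C.CS"])

    def test_boundary_condition(self):
        # Boundary: an empty folder yields an empty selection.
        self.assertEqual(select_cs_files([]), [])

    def test_error_handling_path(self):
        # Error-handling path: missing input raises instead of failing silently.
        with self.assertRaises(ValueError):
            select_cs_files(None)
```

Run with python -m unittest to exercise all three paths; a highly cohesive module like this one needs only a handful of such cases.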

    8.1.2 SYSTEM TESTING

System testing is a method of testing applied to the whole system. A classic system-testing problem is "finger pointing", which occurs when a defect is uncovered and each party blames the other. The software engineer should anticipate potential interfacing problems and: design error-handling paths that test all information coming from other elements of the system; conduct a series of tests that simulate bad data or other potential errors at the software interfaces; and record the results of these tests to use as evidence that the software has been adequately tested.


Recovery testing verifies that recovery is properly performed. If recovery is automatic, reinitialization, checkpointing mechanisms, and data recovery are evaluated for correctness; if recovery requires human intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits.

Security testing attempts to verify that the protection mechanisms built into the system will, in fact, protect it from improper penetration. During this testing, the tester plays the role of an individual who desires to penetrate the system. The tester may attempt to acquire passwords through external clerical means.

The tester may attack the system with custom software designed to break down any defenses that have been constructed. He may overwhelm the system, thereby denying service to others. He may purposely cause system errors, hoping to find the key to system entry during recovery. He may even browse through insecure data.

    8.1.3 INTEGRATION TESTING

    Integration testing is a systematic technique for constructing the program structure while at

    the same time conducting tests to uncover errors associated with interfacing. The objective is to

    take unit tested components and build a program structure that has been dictated by design.

    An overall plan for integration of the software and a description of specific tests are

    documented in a test specification. This documentation contains a test plan, and a test procedure,

    is a work product of software process, and becomes part of the software configuration.

The test plan describes the overall strategy for integration. Testing is divided into phases and builds that address specific functional and behavioral characteristics of the software. For

    example, integration testing for a CAD system might be divided into the following test phases:

User interaction (command selection, drawing creation, display representation, error processing and representation).

    Data manipulation and analysis (symbol creation, dimensioning, rotation, computation of

    physical properties).


    Display processing and generation (two-dimensional displays, three-dimensional

    displays, graphs and charts).

    Data base management (access, update, integrity, performance).

Each of these phases and sub-phases (denoted in parentheses) delineates a broad functional category within the software and can generally be related to a specific domain of the

    program structure. Therefore, program builds (group of modules) are created to correspond to

    each phase.

    The following criteria and corresponding tests are applied for all test phases:

Interface integrity: Internal and external interfaces are tested as each module (or cluster) is incorporated into the structure.

    Functional validity: Tests designed to uncover functional errors are conducted.

    Information content: Tests designed to uncover errors associated with local or global

    data structures are conducted.

    Performance: Tests designed to verify performance bounds established during software

    design are conducted.

    CHAPTER 9

    SNAPSHOTS


    Figure 9.1 Front page of ED3M


    Figure 9.2 Windows Authentication Page


    Figure 9.3 change of password


    Figure 9.4 Browse


    Figure 9.5 Folder Select


    Figure 9.6 Estimator of defects display


    Figure 9.7 Corrector


    Figure 9.8 Report


    Figure 9.9 View of report


    CHAPTER 10

    FUTURE ENHANCEMENT

The current project has been developed for estimating the defects present only in .cs (i.e., only C#) files, and thereby uses a C# parser. But the project can be extended to serve its purpose for projects developed in many more languages, such as .java and .vb files. This requires implementing parsers for the different languages and embedding them in the estimator module, which enables the creation of a common, language-independent platform on which projects developed in various languages can be tested during the maintenance period, thereby empowering the software developed.

We have used some static methods to implement this project, but we can also use some bulk files for testing.


    CONCLUSION

The Estimation of Defects based on Defect Decay Model (ED3M) is a novel approach proposed here that has been rigorously validated using case studies, simulated data sets, and

    data sets from the literature. Based on this validation work, the ED3M approach has been shown

    to produce accurate final estimates with a convergence rate that meets or improves upon closely

    related, well-known techniques. The only input is the defect data; system test managers would

    benefit from obtaining a prediction of the defects to be found in ST well before the testing

    begins, ideally in the requirements or design phase. This could be used to improve the plan for

    developing the test cases. The ED3M approach, which requires test defect data as the input,

    cannot be used for this. Alternate approaches which rely on different input data (e.g., historical

    project data and expert knowledge) could be selected to accomplish this. However, in general,

    these data are not available at most companies.

Finally, the ED3M project so far can test only .cs files; it requires neither historical data nor any random values while testing.

BIBLIOGRAPHY


    www.wikipedia.org

    www.codeproject.com

    www.codeguru.com

    www.dotnetspider.com

    www.functionx.com

    www.msdn.com

C# and the .NET Platform, Second Edition, by Andrew Troelsen
