Automated Test Case Generation and Performance Analysis for GUI Application


Ms. A. Askarunisa#1, Ms. D. Thangamari*2

#Assistant Professor, Department of Computer Science and Engineering, Thiagarajar College of Engineering, Madurai, Tamilnadu, Affiliated to Anna University, Thirunelveli, [email protected]

*Department of Computer Science and Engineering, Thiagarajar College of Engineering, Madurai, Tamilnadu, Affiliated to Anna University, Thirunelveli, [email protected]

Abstract: A common method for GUI testing is the Capture and Replay (CR) technique. GUIs are complex pieces of software, and testing their correctness is challenging for several reasons: 1. Tests must be automated, but GUIs are designed for humans to use. 2. Conventional unit testing, involving tests of isolated classes, is unsuitable for GUI components. 3. GUIs respond to user-generated events. 4. Changes in the GUI's layout should not affect robust tests. 5. Conventional test coverage criteria, such as 90 percent coverage of lines of code, do not apply well to GUIs. This paper proposes a GUI automation testing framework to test GUI-based Java programs as an alternative to the CR technique. The framework develops a GUI-event test specification language for GUI applications written using Java Swing APIs, which initiates an automated test engine. A visual editor helps in viewing the test runs. The test engine generates GUI events and captures event responses to automatically verify the results of the test cases. The testing framework includes test case generation, test case execution and test case verification modules. Testing efficiency is measured by determining coverage metrics based on code coverage, event coverage and event interaction coverage, which may be useful during regression testing. The paper uses the Abbot and JUnit tools for test case generation and execution, and the Clover tool for code coverage. We have performed tests on various GUI applications, and the efficiency of the framework is reported.

Keywords: Automated testing, Coverage, GUI Testing, Test Suite Reduction

    I. INTRODUCTION

Test automation of GUIs means mechanizing the testing process: testers use software to control the execution of tests on new products and to compare the expected and actual outcomes of the product application. One advantage of automated testing is that testing tasks can be scheduled on a daily basis and repeated without human supervision. With the mass production of gadgets and electronic GUI devices, the testing period is quite demanding. Electronics companies must ensure quality in order to deliver excellent products and maintain customer preference for their products.

By running automatic tests for a GUI application, the tester saves much time, especially in a large production house where multi-tasking is required. There are four strategies to test a GUI: 1. Window mapping assigns names to each element, so the test is more manageable and understandable. 2. Task libraries factor out the step sequences of a user task when they appear in multiple tests. 3. Data-driven test automation separates the test case data from the test script, so the test script is reusable. 4. Keyword-driven test automation represents tests as spreadsheets or tables, and creates parsers to decode and perform the description of the test. A minimal sketch of the keyword-driven style appears below.
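As an illustration of the fourth strategy only, the sketch below dispatches rows of a keyword table through a tiny interpreter. The keywords (click, type, assertText) and the console-printing actions are illustrative assumptions, not part of the paper's framework.

```java
// A minimal sketch of keyword-driven testing: each row of a test table is a
// keyword plus arguments, decoded and executed by a small interpreter.
import java.util.List;

public class KeywordRunner {
    public static void main(String[] args) {
        // A test expressed as a table, one action per row (hypothetical widget names).
        List<String[]> table = List.of(
                new String[]{"click", "addButton"},
                new String[]{"type", "display", "42"},
                new String[]{"assertText", "display", "42"});
        for (String[] row : table) execute(row);
    }

    // The "parser": maps each keyword to a concrete action. Real frameworks
    // would drive the GUI here; this sketch only reports what it would do.
    static void execute(String[] row) {
        switch (row[0]) {
            case "click"      -> System.out.println("clicking " + row[1]);
            case "type"       -> System.out.println("typing " + row[2] + " into " + row[1]);
            case "assertText" -> System.out.println("asserting " + row[1] + " shows " + row[2]);
            default           -> throw new IllegalArgumentException("unknown keyword: " + row[0]);
        }
    }
}
```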

The remainder of this paper is organized as follows: Section 2 covers the background material for this proposal, Section 3 describes the proposed approach for the GUI testing framework, Section 4 briefly highlights the implementation details, and Section 5 gives the conclusion and future enhancements.

II. BACKGROUND AND RELATED WORK

Existing work on GUI testing is mainly concerned with test automation tools supporting the capture/replay technique. Researchers have considered various techniques for GUI application testing, and coverage metrics for test cases. White et al. [2, 3] and Belli [4] developed model-based testing for the GUI application under test. Each responsibility is simply the desired response for the user and can be specified as a complete interaction sequence (CIS) between the user and the GUI application under test. A finite-state machine is then developed for each CIS, which generates the required tests and materializes the CIS.

In the work of Memon et al. [5], the GUI under test is modelled as a finite-state machine with hierarchical structure. The test case generation problem of GUI testing then follows a goal-oriented philosophy and is treated as an AI (Artificial Intelligence) planning problem. The approach can be viewed as a global one, in the sense that a single, global finite-state machine is constructed for all test cases of interest. In the work of Cai et al. [6], a GUI test case is defined as a word over a finite alphabet whose symbols are the primitive GUI actions or functions of concern.

Meyer [7], describing the capture/replay testing technique, defined a test case as an input with its expected output. Binder [12] defined a test case to consist of a pretest state of the software under test (including its environment), a sequence of test inputs, and a statement of expected test results. Memon et al. [1, 5] defined a test case to consist of an initial state and a legal event sequence.

1) Test Case Generation

Fig 2 Test Case Generation using Abbot Tool

The framework shown in Fig 2 chooses a specific model of the GUI application. This model describes the GUI application under test and is the input to test case generation. Collections of test cases are generated using the Abbot and JUnit tools; each test case contains a sequence of user input events. The test cases are run with a test runner. A test designer interacts with the GUI and generates mouse and keyboard events. A sketch of such a test case follows.
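The sketch below shows what such a generated Abbot/JUnit test case could look like. It assumes Abbot's junit.extensions.abbot.ComponentTestFixture, NameMatcher and JButtonTester APIs are on the classpath; the one-button calculator stand-in and its behaviour are illustrative assumptions, not the paper's application.

```java
// Hedged sketch of an Abbot/JUnit (JUnit 3 style) GUI test case.
import java.awt.BorderLayout;
import java.awt.Component;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JTextField;
import junit.extensions.abbot.ComponentTestFixture;
import abbot.finder.matchers.NameMatcher;
import abbot.tester.JButtonTester;

public class CalculatorButtonTest extends ComponentTestFixture {
    public void testClickAddButton() throws Exception {
        // A stand-in for the calculator GUI under test: one display, one button.
        JTextField display = new JTextField(10);
        JButton add = new JButton("+");
        add.setName("addButton");                      // name used by the finder below
        add.addActionListener(e -> display.setText(display.getText() + "+"));
        JFrame frame = new JFrame("Calculator");
        frame.getContentPane().add(display, BorderLayout.NORTH);
        frame.getContentPane().add(add, BorderLayout.SOUTH);

        showWindow(frame);                             // Abbot displays the frame
        Component button = getFinder().find(new NameMatcher("addButton"));
        new JButtonTester().actionClick(button);       // generate the user input event
        assertEquals("+", display.getText());          // verify the event response
    }
}
```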

2) Test Case Execution

Test cases from the repository are executed one by one, automatically, using the Abbot and JUnit tools, as shown in Fig 3.

    Fig 3 Test Execution by JUnit Tool
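A minimal sketch of such batch execution, assuming JUnit 3's text-based runner (into which Abbot fixtures plug); the suite composition, reusing the test class sketched above, is illustrative.

```java
// Hedged sketch: run the collected GUI test cases as one JUnit 3 suite.
import junit.framework.Test;
import junit.framework.TestSuite;

public class CalculatorTestSuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("Calculator GUI tests");
        suite.addTestSuite(CalculatorButtonTest.class); // from the sketch above
        return suite;
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite()); // prints a pass/failure report per test
    }
}
```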

3) Test Case Verification:

The expected results of the various test cases are manually determined and stored in the GUI model as a testing oracle. The testing oracle contains the expected state sequences for each application. When the test cases start running, the start timer is initialized. The events are generated automatically, and the actual results from the tool are verified against the expected results, as shown in Fig 4. A pass/failure testing report is generated accordingly.

    Fig 4 Test Case Verification
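The oracle comparison itself can be pictured as a step-by-step match of actual states against the expected state sequence. The sketch below is an illustrative assumption (the paper does not prescribe this representation); states are simplified to strings.

```java
// Hedged sketch of the verification step: compare the actual state sequence
// captured during a run against the expected sequence stored in the oracle.
import java.util.List;

public class OracleVerifier {
    public static boolean verify(List<String> expected, List<String> actual) {
        if (expected.size() != actual.size()) return false;   // missing/extra steps
        for (int i = 0; i < expected.size(); i++) {
            if (!expected.get(i).equals(actual.get(i))) {
                System.out.printf("FAIL at step %d: expected %s, got %s%n",
                        i, expected.get(i), actual.get(i));
                return false;                                 // report Failure
            }
        }
        return true;                                          // all steps matched: Pass
    }
}
```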

    B. Performance Analysis

Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to characterize them according to clearly defined rules. Coverage measurement of GUI applications requires two tasks: performance analysis and a coverage report. Coverage measurement also helps to avoid test entropy. As code goes through multiple release cycles, unit tests tend to atrophy: new code may not meet the testing standards put in place when the project was first released. Measuring code coverage keeps testing up to the required standard, giving confidence that there will be minimal problems in production, because the code not only passes its tests but is well tested.

    1) Code Coverage

Code coverage analysis is sometimes called test coverage analysis; the two terms are synonymous. The academic world more often uses the term "test coverage", while practitioners more often use "code coverage". Likewise, a coverage analyser is sometimes called a coverage monitor. Code coverage is not a panacea. Coverage generally follows an 80-20 rule: increasing coverage values becomes progressively harder, with new tests delivering less and less incrementally. If you follow defensive programming principles, where failure conditions are checked at many levels in your software, some code can be very difficult to reach with practical levels of testing. Coverage measurement is not a replacement for good code review and good programming practices. In general, you should adopt a sensible coverage target and aim for even coverage across all of the modules that make up your code; relying on a single overall coverage figure can hide large gaps in coverage.

    2) Code Coverage with Clover

Clover [24] uses source code instrumentation because, although it requires developers to perform an instrumented build, source code instrumentation produces the most accurate coverage measurement for the least runtime performance overhead. As the code under test executes, the code coverage system collects information about which statements have been executed, and this information is then used as the basis of reports. Beyond this basic mechanism, coverage approaches vary in what forms of coverage information they collect: there are many forms of coverage beyond basic statement coverage, including conditional coverage, method entry and path coverage. Clover is designed to measure code coverage in a way that fits seamlessly with your current development environment and practices, whatever they may be. Clover's IDE plug-ins provide developers with a way to measure code coverage quickly without having to leave the IDE, while Clover's Ant and Maven integrations allow coverage measurement to be performed in automated build and continuous integration systems, with the generated reports shared by the team.

    The Clover Coverage Explorer:

The Coverage Explorer allows you to view and control Clover's instrumentation of your Java projects, and shows the coverage statistics for each project based on recent test runs or application runs. The main tree shows coverage and metrics information for the packages, files, classes and methods of any Clover-enabled project in your workspace. Clover auto-detects which classes are your tests and which are your application classes; using the drop-down box above the tree, you can restrict the coverage tree so that you see coverage for application classes, test classes, or both. Summary metrics are displayed alongside the tree for the selected project, package, file, class or method.

[Box labels recovered from the diagrams of Figs 2-4: GUI Model; Test Case Generation; JAR Files; Collection of test JAR files; Source Program; Repository; Test Execution (automatic); Actual State; Expected State; Automated Verification]

    The Clover Coverage Measurement:

Clover uses these measurements to produce a total coverage percentage for each class, file and package, and for the project as a whole; this percentage allows entities to be ranked in reports. The Total Percentage Coverage (TPC) is calculated as follows:

    TPC = (BT + BF + SC + MC) / (2*B + S + M)

where BT is the number of branches that evaluated to "true" at least once, BF the number of branches that evaluated to "false" at least once, SC the number of statements covered, MC the number of methods entered, B the total number of branches, S the total number of statements, and M the total number of methods.
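As a worked example of the formula, the sketch below computes TPC from illustrative counts (not taken from this paper's tables).

```java
// Worked sketch of the Total Percentage Coverage (TPC) formula above.
public class TpcExample {
    static double tpc(int bt, int bf, int sc, int mc, int b, int s, int m) {
        // (BT + BF + SC + MC) / (2*B + S + M), expressed as a percentage
        return 100.0 * (bt + bf + sc + mc) / (2.0 * b + s + m);
    }

    public static void main(String[] args) {
        // e.g. 40 of 50 branches hit true, 35 hit false,
        // 280 of 320 statements covered, 20 of 24 methods entered
        System.out.printf("TPC = %.1f%%%n", tpc(40, 35, 280, 20, 50, 320, 24));
    }
}
```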

    3) Event Coverage

These coverage criteria use events and event sequences to specify a measure of test adequacy. Since the total number of permutations of event sequences in any non-trivial GUI is extremely large, the GUI's hierarchical structure is exploited to identify the important event sequences to be tested. A GUI is decomposed into GUI components, each of which is used as a basic unit of testing. A representation of a GUI component, called an event-flow graph [16], identifies the interaction of events within a component, and intra-component criteria are used to evaluate the adequacy of tests on these events. The hierarchical relationship among components is represented by an integration tree, and inter-component coverage criteria are used to evaluate the adequacy of test sequences that cross components.

    4) Event Interaction Coverage

A sequence in which one event interacts with another event is the subject of event interaction coverage [22]. Event interaction coverage consists of 2-way and 3-way combinations of events; a sketch of both appears below.
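To make these criteria concrete, the sketch below represents an event-flow graph as an adjacency map and enumerates the 2-way and 3-way event sequences it permits. The edges are illustrative assumptions, not the calculator application's actual event-flow graph.

```java
// Hedged sketch of an event-flow graph (EFG) and the enumeration of 2-way
// and 3-way event sequences for event interaction coverage.
import java.util.*;

public class EventFlowGraph {
    // follows.get(e) = set of events that may immediately follow event e
    private final Map<String, Set<String>> follows = new LinkedHashMap<>();

    void addEdge(String from, String to) {
        follows.computeIfAbsent(from, k -> new LinkedHashSet<>()).add(to);
    }

    // All 2-way event sequences (event pairs) the EFG permits.
    List<String> twoWaySequences() {
        List<String> pairs = new ArrayList<>();
        follows.forEach((from, tos) -> tos.forEach(to -> pairs.add(from + "," + to)));
        return pairs;
    }

    // All 3-way event sequences: each pair extended by one more legal event.
    List<String> threeWaySequences() {
        List<String> triples = new ArrayList<>();
        follows.forEach((a, bs) -> bs.forEach(b ->
                follows.getOrDefault(b, Set.of()).forEach(c ->
                        triples.add(a + "," + b + "," + c))));
        return triples;
    }

    public static void main(String[] args) {
        EventFlowGraph efg = new EventFlowGraph();
        efg.addEdge("e2", "e4"); efg.addEdge("e2", "e5");
        efg.addEdge("e4", "e6"); efg.addEdge("e5", "e11");
        System.out.println(efg.twoWaySequences());   // [e2,e4, e2,e5, e4,e6, e5,e11]
        System.out.println(efg.threeWaySequences()); // [e2,e4,e6, e2,e5,e11]
    }
}
```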

    C. Coverage Report

1) Coverage HTML Report

The clover-html-report task generates a full HTML report with sensible default settings. It can also be generated prior to the generation of the full report.

2) Coverage XML Report

The clover-xml-report task generates a full XML report with sensible default settings. It can also be generated prior to the generation of the full report.

3) Coverage PDF Report

The clover-pdf-report task generates a PDF report with sensible default settings. It can also be generated prior to the generation of the full report.

    D. Coverage Metric

The coverage metric CONTeSSi(n) (CONtext Test Suite Similarity) [23] is calculated for each model factor and compared with the original model. CONTeSSi(n) explicitly considers the context of the n preceding events in test cases to develop a new context-aware notion of test suite similarity. The metric is an extension of the cosine similarity metric used in Natural Language Processing and Information Retrieval for comparing an item to a body of knowledge, e.g., finding a query string in a collection of web pages or determining the likelihood of finding a sentence in a text corpus (collection of documents). We evaluate CONTeSSi(n) by comparing four test suites, including suites reduced using conventional criteria, for four open source applications. Our results show that CONTeSSi(n) is a better indicator of the similarity of test suites than existing metrics.

This paper considers different models with varying frequencies of events: all individual events, all two-pair events, all three-pair events, and so on. The coverage metric is calculated for various factors such as statement, branch and method. The sketch below shows how such event-frequency vectors can be built.
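The frequency-vector construction that CONTeSSi(n) relies on counts every contiguous group of n + 1 events across a suite. The two-test suite in main below is an illustrative assumption.

```java
// Hedged sketch: build the frequency table of (n+1)-event sequences for a
// test suite, where each test is a list of event names.
import java.util.*;

public class EventNGrams {
    static Map<String, Integer> frequencies(List<List<String>> suite, int n) {
        Map<String, Integer> freq = new LinkedHashMap<>();
        for (List<String> test : suite) {
            // every contiguous run of (n+1) events within one test case
            for (int i = 0; i + n < test.size(); i++) {
                String gram = String.join(",", test.subList(i, i + n + 1));
                freq.merge(gram, 1, Integer::sum);
            }
        }
        return freq;
    }

    public static void main(String[] args) {
        List<List<String>> suite = List.of(
                List.of("e2", "e5"), List.of("e2", "e4", "e6"));
        System.out.println(frequencies(suite, 1)); // {e2,e5=1, e2,e4=1, e4,e6=1}
    }
}
```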

    IV. IMPLEMENTATION

This paper uses the GUI model of a calculator application written in Java Swing. The scientific calculator application contains a collection of standard buttons, control buttons and scientific buttons. It is used to calculate arithmetic and scientific data values. It performs basic operations such as Add, Sub, Mul and Div, and also scientific operations such as Sin, Cos, Tan, Log and Sqrt. The program also contains radio buttons for Hexadecimal, Octal, Decimal and Binary, which enable the user to switch the number base. The automatic execution of a user input sequence is shown in Fig 5. All the operations are performed by click events. For this application, the test cases are written using Java Swing, and the unit testing is done with the Abbot and JUnit tools.


e1 represents clicking the File menu.
e2 represents the menu selection event.
e4 represents clicking the Basic button after the e2 event, and is similar to e5.

Fig 11 shows the event-based sequences.

Table 1 displays the report title and the time of the coverage contained in the report. The header displays metrics for the currently selected package, files or project overview. Depending on the current selection, the metrics include all or a subset of: number of lines of code (LOC), number of non-commented lines of code (NCLOC), number of methods, number of classes, number of files, and number of packages.

Considering this context, the event pair coverage suite in Table 4 is expected to be more similar to the original suite than the event coverage suite, since the event pair coverage suite is created based on the existence of event pairs. Table 4(b) shows the count of each event pair for each suite; this is the basis of CONTeSSi(n) for n = 1, since we are looking at events in the context of one other (previous) event. Extending this example to compute CONTeSSi(2), we obtain the frequencies shown in Table 4(c). In general, as n increases, the frequencies of the event sequences decrease, as they appear less frequently in the test suites. Intuitively, comparing test suites on longer sequences makes it harder for the test suites to be similar.

Therefore, if two test suites have a high similarity score with a larger n, they are even more similar than two suites being compared with a small n. By treating each row in Table 4(a), (b) or (c) as a vector, CONTeSSi is computed as follows:

    CONTeSSi(A, B) = (A · B) / (|A| |B|)    (1)

where A and B are the vectors corresponding to the two test suites, $A \cdot B = \sum_{i=1}^{j} A_i B_i$ is the dot product of the two vectors, j is the number of terms in the vector, and $|A| = \sqrt{\sum_{i=1}^{j} A_i^2}$.
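Equation 1 is ordinary cosine similarity over the two frequency vectors, as the small sketch below shows. Feeding it the ORIGINAL and EVENT PAIR rows of Table 4(a) reproduces the CONTeSSi(0) value 0.97308 reported in Table 5.

```java
// Sketch of Equation (1): cosine similarity between two frequency vectors.
public class Contessi {
    static double contessi(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];      // A . B
            normA += a[i] * a[i];    // |A|^2
            normB += b[i] * b[i];    // |B|^2
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Rows of Table 4(a): per-event frequencies for n = 0.
        double[] original  = {10, 5, 5, 3, 2, 2, 18, 9, 6, 1};
        double[] eventPair = {3, 1, 2, 1, 1, 1, 9, 3, 2, 0};
        System.out.printf("CONTeSSi(0) = %.5f%n", contessi(original, eventPair)); // 0.97308
    }
}
```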

The value of CONTeSSi lies between 0 and 1, where a value closer to 1 indicates more similarity. Hence, CONTeSSi(n) is computed as shown in Equation 1 by creating a vector for each suite representing the frequencies of all possible groups of n + 1 events; the inclusion of the n previous events increases the number of terms in the vector, thereby increasing j. Table 5 shows the values of CONTeSSi(n) for all our test suites, for n = 0, 1, 2, 3. From these values we observe that if we ignore context, i.e., use n = 0, most of the reduced suites are quite similar to the original, as indicated by the high (> 0.9) values of CONTeSSi(0). However, the similarity between the test suites decreases as more context is considered.

TABLE 1: VIEW OF ALL TEST CASE VALUES USING THE CLOVER COVERAGE TOOL
(Stmt = statements, Cmp = complexity)

Test Case       Stmt  Branches  Methods  Classes  LOC  NCLOC  Total Cmp  Cmp Density  Avg Method Cmp  Stmt/Method  Methods/Class  Total Coverage (%)
Area
  Circle         280   132       21       1       625   483    138        0.49         6.57            13.33        21             54.3
  Rectangle      293   132       24       2       667   513    141        0.48         5.88            12.21        12             53.6
  Parallelogram  291   132       24       2       666   511    141        0.48         5.88            12.12        12             53.6
  Triangle       298   132       24       2       672   518    141        0.47         5.88            12.42        12             54.1
  Trapezoid      297   132       24       2       672   517    141        0.47         5.88            12.38        12             54.1
  Total          339   132       28       2       738   570    145        0.43         5.18            12.11        14             55
Surface
  Rectangle      312   132       25       2       690   535    142        0.46         5.68            12.48        12.5           41.6
  Prism          304   132       25       2       681   527    142        0.47         5.68            12.16        12.5           42.7
  Cylinder       303   132       25       2       676   525    142        0.47         5.68            12.12        12.5           53.3
  Cone           300   132       25       2       671   522    142        0.47         5.68            12.00        12.5           53.3
  Sphere         294   132       24       2       668   513    141        0.48         5.88            12.25        12             51.7
  Total          373   132       28       2       775   602    145        0.39         5.18            13.32        14             53.6
Volume
  Rectangle      291   132       24       2       666   511    141        0.48         5.88            12.12        12             55
  Prism          293   132       24       2       669   513    141        0.48         5.88            12.21        12             55
  Cylinder       296   132       24       2       671   516    141        0.48         5.88            12.33        12             55
  Cone           297   132       24       2       672   517    141        0.47         5.88            12.38        12             55
  Sphere         299   132       24       2       677   519    141        0.47         5.88            12.46        12             55.5
  Pyramid        295   132       24       2       671   515    141        0.48         5.88            12.29        12             55
  Total          351   132       29       2       762   586    146        0.42         5.03            12.1         14.5           55

TABLE 2: VIEW OF ALL TEST CASE VALUES USING THE CODE COVERAGE TOOL

Test Case        Stmt  Branch  Loop  Strict Cond
Basic            59.4   2.3    10.1   6.0
Minus            63.9  10.5     8.7   7.7
Add              63.9   9.3     8.7   7.7
Mul              63.9  11.6     8.7   7.7
Div              63.9  12.8     8.7   7.7
Mod              67.1  15.1    10.1   9.4
Hex              61.6  11.6    14.5  19.7
Dec              60.7   9.3    13.0   7.7
Oct              61.6  11.6    14.5  15.4
Bin              61.6  14.0    13.0   9.4
Area
  Circle         68.9  36.0    10.1  14.5
  Rectangle      63.9  12.8     8.7  11.1
  Parallelogram  63.9  12.8     8.7  10.3
  Triangle       65.3  17.4     8.7  13.7
  Trapezoid      65.8  18.6     8.7  15.4
Surface
  Rectangle      64.4  14.0     8.7  13.7
  Prism          65.3  16.3     8.7  13.7
  Cylinder       68.9  34.9    10.1  16.2
  Cone           68.9  34.9    10.1  15.4
  Sphere         67.6  30.2    10.1   9.4
Volume
  Rectangle      63.9  11.6     8.7   8.5
  Prism          64.8  15.1     8.7  10.3
  Cylinder       68.5  34.9    10.1  14.5
  Cone           68.5  34.9    10.1  12.8
  Sphere         68.5  33.7    10.1  12.8
  Pyramid        64.4  15.1     8.7  11.1
Total            97.7  93.0    34.8  94.9

TABLE 3: VIEW OF ALL TEST CASE EVENT SEQUENCES

Test Plan        Events  Execution (sec)  Execution with Delay (sec)
View
  Basic            1      1.684            2.699
  Scientific       1      1.763            2.714
  Hex              1      1.342            2.371
  Dec              1      1.295            2.309
  Octal            1      2.262            2.356
  Binary           1      1.342            2.324
Area
  Circle          11      3.588            4.555
  Rectangle        7      2.434            3.463
  Parallelogram    6      2.262            3.26
  Triangle        12      3.447            4.446
  Trapezoid       12      3.401            4.415
Volume
  Rectangle        6      2.278            3.291
  Prism            8      2.036            3.65
  Cylinder        11      3.525            4.633
  Pyramid         10      2.995            4.009
  Cone            11      3.588            4.556
  Sphere          14      4.119            5.085
Surface
  Rectangle       26      6.006            6.989
  Prism           18      4.524            5.506
  Cylinder        17      4.68             5.647
  Cone            13      3.916            4.898
  Sphere           9      3.183            4.165

TABLE 4: EXAMPLE TEST CASES YIELDED FROM SEVERAL REDUCTION TECHNIQUES
(each line lists the test cases retained by one suite; events within a test case are comma-separated, test cases are separated by semicolons)

ORIGINAL:           e2,e5; e2,e4,e6; e2,e4,e8; e2,e4,e7; e2,e4,e9,e10,e9; e2,e4,e9; e2,e5,e11,e9,e10,e9; e2,e5,e11,e9; e9,e10,e9; e6,e9,e10,e9; e8,e9; e7,e9,e10,e9; e11,e9,e10,e9; e11,e9,e10; e6,e12; e2,e5,e11,e10; e9,e10,e2,e5,e11,e9
EVENT PAIR:         e6,e9; e7,e9; e8,e9; e2,e5,e11,e9,e10,e9; e2,e4,e9,e10,e9; e9,e10,e2,e5,e11,e9
EVENT:              e2,e4,e6; e2,e4,e8; e2,e4,e7; e2,e5,e11,e10; e9,e12; e8,e9
STATEMENT:          e9,e10,e2,e5,e11,e9; e2,e5; e6,e9; e7,e9; e8,e9; e2,e5,e11,e9,e10,e9
METHOD:             e2,e5,e11,e9; e9,e10,e9
BRANCH:             e6,e9; e7,e9; e8,e9; e2,e5,e11,e9; e2,e4,e9,e10,e9
ILLUSTRATIVE TESTS: e2; e4; e5; e6; e7; e8; e9; e10; e11; e12


TABLE 4(A): FREQUENCY OF UNIQUE EVENTS OCCURRING IN THE TEST SUITE (LENGTH = 0)

TEST SUITE    e2  e4  e5  e6  e7  e8  e9  e10  e11  e12
ORIGINAL      10   5   5   3   2   2  18    9    6    1
EVENT PAIR     3   1   2   1   1   1   9    3    2    0
EVENT          4   3   1   1   1   2   2    1    1    1
STMT           3   0   3   1   1   1   7    2    2    0
METHOD         1   0   1   0   0   0   3    1    1    0
BRANCH         2   1   1   1   1   1   6    1    1    0
ILLUS. SUITE   1   1   1   1   1   1   1    1    1    1

TABLE 4(B): FREQUENCY OF ALL EVENT PAIRS OCCURRING IN THE TEST SUITE (LENGTH = 1)

TEST SUITE    e2,e4  e2,e5  e4,e6  e4,e7  e4,e8  e4,e9  e5,e11  e6,e9  e6,e10  e6,e12  e7,e9  e8,e9
ORIGINAL        5      5      1      1      1      2      4       1      1       1       1      1
EVENT PAIR      1      2      0      0      0      1      2       1      0       1       1      1
EVENT           3      1      1      1      1      0      1       0      0       0       0      1
STMT            0      3      0      0      0      0      2       1      0       0       1      1
METHOD          0      1      0      0      0      0      1       0      0       0       0      0
BRANCH          0      1      0      0      0      1      1       1      0       0       1      1
ILLUS. SUITE    0      0      0      0      0      0      0       0      0       0       0      0

TABLE 4(C): FREQUENCY OF ALL EVENT SEQUENCES OCCURRING IN THE TEST SUITE (LENGTH = 2)

TEST SUITE    e2,e4  e2,e5  e4,e6  e4,e7  e4,e8  e4,e9  e5,e11  e6,e9  e6,e10  e6,e12  e7,e9  e8,e9
ORIGINAL        5      5      1      1      1      2      4       1      1       1       1      1
EVENT PAIR      1      2      0      0      0      1      2       1      0       1       1      1
EVENT           3      1      1      1      1      0      1       0      0       0       0      1
STMT            0      3      0      0      0      0      2       1      0       0       1      1
METHOD          0      1      0      0      0      0      1       0      0       0       0      0
BRANCH          0      1      0      0      0      1      1       1      0       0       1      1
ILLUS. SUITE    0      0      0      0      0      0      0       0      0       0       0      0

TABLE 5: CONTESSI(n) VALUES FOR EACH SUITE COMPARED TO THE ORIGINAL, FOR ALL BUTTONS IN THE CALCULATOR APPLICATION GUI EXAMPLE SUITES

n   EVENT PAIR   EVENT     STMT      METHOD   BRANCH    ILLUS. SUITE
0   0.97308      0.78513   0.95434   0.94405  0.94571   0.7816
1   0.9274       0.60669   0.788116  0.82365  0.685188  0.0000
2   0.9106       0.39509   0.79697   0.79018  0.79018   0.0000
3   0.9428       0.73786   0.82495   0.7071   0.68041   0.0000
4   0.9999       0.0000    0.8660    0.0000   0.5000    0.0000
5   0.9999       0.0000    0.9999    0.0000   0.0000    0.0000

V. CONCLUSIONS AND FUTURE SCOPE

GUIs may introduce new types of errors, increase complexity and make testing more difficult. Automated testing reduces the work intensity of the developer and the tester. The main advantage of test automation is that software developers can run tests more often and find and fix bugs at an early stage of development, before end users face them. The framework suggested in this project simplifies the creation and maintenance of robust GUI tests. Abbot is easy to learn and use, and provides some unique features that can make GUI development more productive.

This project developed a method for (i) automatic test case generation, execution and verification, and (ii) performance analysis, by calculating coverage based on statement, method, branch, event and event interaction for GUI applications. Our results showed that CONTeSSi(n) is a better indicator of the similarity of event interaction test suites than existing metrics. The metric also shows that event pair coverage is the best coverage compared with statement, method, branch and loop coverage.
