Assessing Model-Based Testing: An Empirical Study Conducted in Industry
© 2014 Fraunhofer USA, Inc. Center for Experimental Software Engineering
Assessing Model-Based Testing: An Empirical Study Conducted in Industry
Christoph Schulze, Dharmalingam Ganesan, Mikael Lindvall, Rance Cleaveland
Fraunhofer CESE, Maryland
Daniel Goldman
Global Net Services Inc. (GNSI), Maryland
ICSE 2014 (SEIP Track)
The Big Picture of our Experiment
Manual Testing vs. Model-based Testing (MBT)
Model-based Testing (MBT): Brief Overview
• Generate test cases using models (built for testing)
– Incrementally model the software under test based on requirements
• Usage behavior, expected system response
– Models are state machines in this work (other advanced notations exist)
• Every path through the model is a test case
– Manual work
• Model construction and maintenance
• Mapping of model elements to concrete instructions
• Analysis of test case failures
– Automatic
• Test case generation
• Test case execution and verdict
• MBT fits many types of systems and types of test cases
• Web, APIs, xUnit (e.g. JUnit, CUnit, etc.)
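The generation step above ("every path through the model is a test case") can be sketched in a few lines of Python. The model and event names here are hypothetical, not the study's actual models: a state machine as an adjacency map, where every path up to a bounded length becomes an abstract test case that the adapter layer would then map to concrete instructions.

```python
# Hypothetical state-machine model of a web app under test:
# state -> list of (event, next_state) transitions.
MODEL = {
    "LoggedOut": [("login", "Home")],
    "Home": [("open_add_form", "AddForm"), ("search", "Results")],
    "AddForm": [("submit_valid", "Home"), ("submit_invalid", "AddForm")],
    "Results": [("back", "Home")],
}

def generate_paths(state, depth):
    """Enumerate event sequences (abstract test cases) of up to
    `depth` transitions starting from `state`."""
    if depth == 0:
        return [[]]
    paths = [[]]  # the empty path is always a (trivial) prefix
    for event, next_state in MODEL.get(state, []):
        for tail in generate_paths(next_state, depth - 1):
            paths.append([event] + tail)
    return paths

tests = generate_paths("LoggedOut", 3)
# Each path, e.g. ["login", "open_add_form", "submit_valid"], would be
# mapped to concrete browser actions by the (manually written) adapter.
```

Bounding the path length is one simple way to keep the generated suite finite; real MBT tools offer richer coverage-based stop conditions.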
We have applied MBT to
• Embedded Flight Software
• Ground systems
• Database-driven systems
• Architecture styles:
– Pub-Sub-based systems
– Client-Server-based systems
• API-level:
– Middleware and Operating System wrappers
This presentation: MBT of Web-based systems
Goal of the Project
• Evaluate the costs and benefits of MBT as compared to completely manual testing
• Compare the effectiveness and efficiency of the MBT and manual testing methods
– Compare the number of detected issues
– Compare the effort
• Observe differences between manual and automated testing
Commercial System Under Test (SUT)
• Used by customers of the U.S. FDA
– Allows researchers to exchange findings of laboratory analyses regarding foodborne illnesses
• General Functionality:
– Add/Edit data
– Review data
– Search data
– Sort data into tree structure
System Under Test – Web Interface
Experimental Set-up
• Two testers
– One tester used MBT (@ Fraunhofer)
– Another tester used a completely manual approach (@ GNSI)
• Testers had no prior experience with the SUT
– Manual Tester had 3.5 years of testing experience
– MBT Tester had 2 years of MBT-based testing experience
• Both testers were given the same artifacts
– Use cases, Requirements, Analysts, and SUT
• Two versions
– Version v1: GUI inherited from a previous contractor
– Version v2: New GUI front-end
Overview of the MBT Approach
(Diagram: workflow steps grouped as Manual, Automation Support, and Fully Automated)
Overview of the Manual Approach
• No testing tool used by the manual tester
• The tester manually:
– entered data
– clicked on buttons
– compared actual results to expected
Classification of Issue Types
• Business Logic Issues
– E.g. Functional Issues
• Field Validation
– E.g. Field length violations are not handled correctly
• Naming Discrepancies
– E.g. “Lab or Organization” instead of “Organization”
• Field Discrepancies
– E.g. Extra/Missing Fields
• Usability Issues
– E.g. Broken Layout
Issues Found in Version 1 and Version 2
Category               MBT           Manual        Union
Business Logic         22            12            27
Field Validation       6             1             6
Naming Discrepancies   0             5             5
Extra Fields           1             6             6
Usability              7             5             9
Total                  36 (24 + 12)  29 (17 + 12)  53 (17 + 12 + 24)

Only MBT: 24   Only Manual: 17   Both: 12
Observation 1: MBT better than Manual
• Business Logic (MBT 22 vs. Manual 12)
– Manual testing was accidentally uneven
– Focused on some parts but missed others
• Field Validation (MBT 6 vs. Manual 1)
– MBT always tests the limits of all fields
• Usability (MBT 7 vs. Manual 5)
– Systematic use of the system by MBT is good for finding usability issues
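The field-validation advantage comes from the model systematically exercising every field's limits. A minimal boundary-value sketch in Python (the field names and length limits are hypothetical, not taken from the SUT):

```python
# Hypothetical length limits for two text fields on the add/edit form.
FIELDS = {"lab_name": 50, "sample_id": 20}

def boundary_values(max_len):
    """Classic boundary values for a length-limited text field:
    empty, just under, at, and just over the limit."""
    return ["", "a" * (max_len - 1), "a" * max_len, "a" * (max_len + 1)]

# One test case per (field, value) pair; the generated tests would walk
# each value through the form and check the SUT's validation response.
cases = [(field, value) for field, max_len in FIELDS.items()
         for value in boundary_values(max_len)]
```

A human tester rarely repeats this for every field on every version, which is consistent with the MBT-vs-manual gap (6 vs. 1) observed here.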
Observation 2: Manual better than MBT
• Field Discrepancies (MBT 1 vs. Manual 6)
• Naming Discrepancies (MBT 0 vs. Manual 5)
• Why were they missed by MBT?
– Models focused on functional issues
• MBT found more severe issues than manual testing
– See the paper for the definition of severity levels
Model and Test Infrastructure Metrics
• Model for add/edit/approve features
– States: 166
– Transitions: 250
• Model for user roles / page access
– States: 21
– Transitions: 30
• Size of test infrastructure
– ~2500 Lines of Code
– Most of the code is very simple: filling in forms, reading from forms, and validating the data
Generated Test Suite Metrics
• 100 automatically generated test cases divided into scenarios:
– add method: 20
– edit method: 20
– add/edit method: 20
– approve method: 10
– table of content: 10
– mix of above scenarios: 20
• Average length: ~580 lines of code per test case
Preparation Effort (in person-hours)
Task                              MBT    Manual
Requirement Elicitation           16.5   16
Modeling                          24     N/A
Implementing Test Infrastructure  87     N/A
Test Case Development             N/A    16
Total                             127.5  32

Why was the test infrastructure for MBT so expensive?
• Had to develop utilities to programmatically interact with the web browser
• Same cost for setting up the automated test case execution framework
• Creating models and generating test cases is the smaller cost
• Limitations of Selenium:
– Is the table sorted?
– How many rows are in the table, etc.?
– File uploading and native Windows controls, etc.
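One of the Selenium limitations above, "Is the table sorted?", has no built-in query and must be worked around in the test infrastructure. A hedged sketch: scrape the column's cell texts (the selector below is hypothetical) and do the ordering check in plain Python.

```python
# With Selenium the column values would be scraped roughly like:
#   cells = driver.find_elements(By.CSS_SELECTOR, "table#results td.name")
#   values = [cell.text for cell in cells]
# The sortedness check itself is pure Python and easy to unit-test:

def is_sorted(values, key=str.lower, reverse=False):
    """True if the scraped column values appear in sorted order
    (case-insensitive by default; set reverse=True for descending)."""
    return list(values) == sorted(values, key=key, reverse=reverse)
```

Keeping the check separate from the browser interaction is a deliberate choice: the scraping code is fragile and slow, while the pure function can be verified once and reused across oracles.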
Effort Breakdown for Two Versions
Effort (V1)
Task            MBT   Manual
Test Execution  N/A   26
Issue Analysis  4     N/A
Total           4     26

Effort (V2)
Task                              MBT   Manual
Adapting the test infrastructure  6     N/A
Test Execution                    N/A   7
Issue Analysis                    2     N/A
Total                             8     7

Overall Effort
Task            MBT    Manual
Overall Effort  139.5  65
Benefits of Manual Testing
• Limited initial investment
• No coding experience is necessary
• Exploratory testing is possible
• Good at bringing the system to a particular state and test around it
• Easy to characterize test case failures
• No problem at all when GUI changes
Drawbacks of Manual Testing
• Tester gets tired and ends testing early
– No well-defined stopping criteria
• Time consuming to test all corner cases
• Test execution takes longer
Benefits of MBT
• Business Logic can be encoded in testing models (precise spec. of the system)
• Well-defined stopping criteria
– Various model coverage metrics
• Generated test cases can be (and are) reused
– Applied the same set of tests on multiple versions
– Could reuse the tests for a modified version with moderate changes to the testing infrastructure
– Pays off in the long run (great for regression testing)
• Several corner-case issues were detected
– Manual testing missed many of them
– It is tedious and time-consuming to check corner cases manually
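The "well-defined stopping criteria" point refers to model coverage: generation can stop once, for example, every transition in the model has been exercised. A small sketch of measuring transition coverage (the model transitions below are hypothetical examples, not the study's):

```python
# Hypothetical set of (state, event) transitions in the testing model.
MODEL_TRANSITIONS = {
    ("Home", "open_add_form"), ("AddForm", "submit_valid"),
    ("AddForm", "submit_invalid"), ("Home", "search"),
}

def transition_coverage(executed_steps, model=MODEL_TRANSITIONS):
    """Fraction of model transitions hit by the executed test steps."""
    return len(set(executed_steps) & model) / len(model)

# A suite covering 2 of the 4 transitions yields 0.5 coverage.
coverage = transition_coverage([("Home", "search"), ("Home", "open_add_form")])
```

Unlike a human tester's "I feel done," this criterion is objective and repeatable, which is what makes it usable as a stopping rule across versions.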
Drawbacks of MBT
• Tradeoff between completeness of the models and time spent
– Finding the right level of abstraction
• Analysis of test failures is not always easy
– Long and random test cases
– Multiple tests could fail for the same reason
• Managing data for test cases is not always easy
– Data-intensive systems
– Managing the state of the database
Interested in further details?
• Read the paper:
– MBT for beginners
– GNSI business context and SUT
– Detailed design of the experiment
– Detailed definition of severity of issues
– Lessons learned
– Threats to validity
– Related work
• Send your questions/comments:
– [email protected]
Conclusion
• Performing an empirical study in industry is difficult but possible (management support)
– Too many versions and changes during the study
• MBT and Manual find different types of issues
• MBT is expensive to start but pays off after a couple of versions:
– The test infrastructure/driver for MBT is the bottleneck
– Changes in the GUI break the concrete tests
• MBT is better at detecting functional issues and (most of) the corner cases
Acknowledgement
• GNSI management
– Ori Reiss
– Pino Marinelli
• GNSI engineers, testers, and analysts
– Jangho Ki
– Anjana Sreeram
– Prashant Pandya
– Eyal Rand