Software Testing Overview


About this white paper: This paper was written by David C. Young, an employee of CSC, as supplemental documentation for use by the HPC account holders at the Alabama Supercomputer Center (ASC). It was written in 2015.

Why is software testing important?

Most people have had the experience of purchasing a product that didn't work, didn't work correctly, or didn't do what they felt it should obviously be able to do. In addition to issuing refunds and handling complaints, the manufacturer may lose that dissatisfied customer's business for years to come. This loss of current and future revenue can be devastating to any company. In software development it can be particularly difficult to know what flaws are in the product so that they can be addressed. Software testing encompasses a wide range of techniques for finding those flaws.

To be successful, a number of types of software testing must be done. More importantly, they must be integrated into every stage of the software development process. Too many software development teams have waited until the week before product release to do significant testing, only to find that the software had fundamental flaws that would delay the release by months. Software developers are encouraged to use systematic testing processes, including both unit tests of individual functions and functional tests of the entire program. For most software developers, the success of the software in passing tests (either internally or in customer usage) is a factor in getting jobs and promotions, and quite possibly in losing jobs. A software developer can also be proud to list a robust piece of software on their resume. Consider the following Venn diagram depicting the relationship between successful software development and software testing.

As indicated by the diagram above, there is only a slim chance of successfully developing computer software without any systematic, formal testing strategy or plan. Those cases where ad hoc testing is sufficient tend to be projects that are fairly small and simple by software development standards. Even then, there are often a few remaining bugs that only manifest under less typical usage scenarios. That small problem can snowball into a much larger one if more complex functionality is built on top of the original software at a later date.

The diagram also distinguishes between functional testing and unit testing. Functional testing consists of tests run on the completed program, or on the full program at its current stage of development. Unit tests exercise each program function, procedure, method, or object class outside of the full program. Functional testing is more easily done and identifies the most important bugs, those that will be readily visible to the end user of the software. However, functional testing gives no indication of where in the program a bug is located. Unit testing is much better for finding the specific function, and ultimately the line of code, that needs to be corrected. As software packages get larger and more complex, drilling down from functional tests to the specific line of code to be corrected becomes more difficult and time consuming. Eventually, a large project with only functional testing practically grinds to a halt as it takes weeks to find just a few bugs. At that point either the project stops or unit tests must be implemented. Implementing unit tests after years of development is a tedious process that takes a long time to show significant results. The best practice is that any software development project expected to take more than thirty days should have both functional tests and unit tests built into the development process from day one.
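To make the unit-test idea above concrete, here is a minimal sketch using Python's standard `unittest` module. The `celsius_to_fahrenheit` routine is a hypothetical example (not from this paper); the point is that each test exercises one function in isolation from the full program.

```python
import unittest

# Hypothetical routine under test (illustrative only):
# convert a temperature from Celsius to Fahrenheit.
def celsius_to_fahrenheit(c):
    return c * 9.0 / 5.0 + 32.0

class TestCelsiusToFahrenheit(unittest.TestCase):
    # Each test checks one known input/output pair for the single routine,
    # independent of any larger program the routine might belong to.
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32.0)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212.0)

if __name__ == "__main__":
    # exit=False lets the script continue after the test run.
    unittest.main(exit=False)
```

When a test like this fails, it points directly at the one routine that is broken, which is exactly the drill-down advantage over functional testing described above.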
Opinions vary as to whether that threshold should be thirty days, three months, or up to a year. People pushing for a long threshold usually want to complete the project sooner, or prefer to do only the more interesting work of writing the software. Proponents of unit testing on all projects are usually people who want top-quality results or who have experienced the pain of a long-term project without unit tests.

Software testing (and fixing the identified problems) certainly improves the quality of the software being developed. However, developers should be aware that testing and fixing bugs is an ongoing process. After one full pass of working every known bug, roughly half as many bugs may remain: about 20% of the bugs will not be fixed on the first try, the fixes will inadvertently create new bugs amounting to about 20% of the original count, and about 10% more bugs will be newly discovered. For example, a round of fixes on 100 known bugs might leave about 50 (20 unfixed, 20 introduced by the fixes, and 10 newly discovered). These percentages are from p. 108 of the Hass book listed in the bibliography. This gives rise to a diminishing-returns situation in which more and more detailed testing finds bugs that are fewer in number and less likely to be noticeable in typical usage of the software. There have even been hand-wavy "proofs" that perfect, completely bug-free software is impossible for a project of any significant complexity. While we disagree with the view that perfection is impossible, the reality is that time and labor costs nearly always put an end to testing before a perfect product is obtained. Most software development organizations start testing too late and stop testing too early, before reaching their desired level of software quality. In practical terms, the goal of a good software testing plan and process is to get the best possible result for any given amount of time and resources devoted to testing. Software packages that are more complex and/or developed over a longer period of time tend to need more types of testing, done more thoroughly.
Clearly, more complex programs have more places in the code where a bug could occur. Software packages developed over a long period of time are more likely to have turnover in the development staff, and less likely to have everyone on the project remembering all of the previous testing and the reasons for architectural decisions. In this case, sections of code and design choices are sometimes revisited by later developers, who may make changes that break something that was working previously. A good set of regression tests will ensure that the broken item is found soon rather than months later.

Types of software testing

If a novice software developer or test engineer is asked to test the software, what do they test? The simplest obvious answer is to run the software with some typical set of input options to see if it works. That type of test is called a smoke test. However, a smoke test only finds the biggest, most obvious problems; many other software bugs may remain even if the smoke test passes. In order to have a high degree of confidence that the software is working correctly, a variety of types of tests must be performed. The functional and unit tests described in the previous section are two broad categories of tests, but there are many specific types of things those tests can be designed to find. Some are applicable to any type of software; a few are only applicable to certain types of software. The following table lists various types of software tests. Nearly all software testing falls into one or more of these categories.

Test type | What it tests | When used
Valid outputs | Correct behavior on correct input | Frequently when writing the applicable routine, and periodically (at least nightly) for regression
Invalid inputs | Robust handling of invalid input | During code development, and for regression (perhaps less often)
Boundary values | Valid and invalid inputs likely to cause problems | Development & regression
Trivial cases | Valid inputs likely to cause problems | Development & regression
Permutations of inputs | Thorough selection of inputs | Development & regression
Smoke test | Quick test of major functionality | Development & installation
Sanity check | Check that results are physically reasonable | Development
Data transformation | Data conversions | Development & regression
Interaction | How an object sends and receives data to interact with other objects | Development & regression
Standards | Conformance to established standards | Development & regression
Simulation | Software for systems where the "real" system can't be used for testing | For all tests when testing in the live environment is impractical
Performance | Speed, memory, I/O, etc. | When tuning code performance
Load testing | Behavior under heavy load | Periodically during development
Multithreaded code | Errors due to threading | Periodically during multithreaded code development
Presentation layer | Graphical display | For regression
Localization | Versions for multiple languages | As part of graphic interface testing
Mean time
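Several of the table's categories can be sketched as unit tests. The example below (a minimal sketch; the `safe_sqrt` wrapper is a hypothetical routine, not from this paper) shows one boundary value test, one trivial case test, and one invalid input test for the same routine.

```python
import math
import unittest

# Hypothetical routine under test: a square root that rejects invalid input
# instead of returning garbage or raising an unrelated error.
def safe_sqrt(x):
    if x < 0:
        raise ValueError("input must be non-negative")
    return math.sqrt(x)

class TestSafeSqrt(unittest.TestCase):
    def test_boundary_value(self):
        # Boundary value: zero sits on the edge between valid and invalid input.
        self.assertEqual(safe_sqrt(0), 0.0)

    def test_trivial_case(self):
        # Trivial case: a simple valid input with a well-known answer.
        self.assertEqual(safe_sqrt(4), 2.0)

    def test_invalid_input(self):
        # Invalid input: the routine should fail loudly and predictably.
        with self.assertRaises(ValueError):
            safe_sqrt(-1)

if __name__ == "__main__":
    # exit=False lets the script continue after the test run.
    unittest.main(exit=False)
```

Run during development and again for regression (per the table's "When used" column), the same three tests keep catching these classes of bugs if the routine is later rewritten.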