
Lec 22 Web Driver III - 1 CSCE 747 Fall 2013

CSCE 747 Software Testing and Quality Assurance

Lecture 22 WebDriver

11/11/2013


Lec 22 Web Driver III - 2 CSCE 747 Fall 2013

Last Time: WebDriver again

Today:
– More WebDriver
– The plan
– HW
– Test 2 (take home)
– Beautiful Testing (Rex Black)


Lec 22 Web Driver III - 3 CSCE 747 Fall 2013

The Plan

Nov 13 – WebDriver; Beautiful Testing
Nov 18 – Test 2 assigned, due the Tuesday before Thanksgiving
Nov 20
Nov 25
Dec 2
Dec 4 – Exam

.seleniumhq.org/docs/04_webdriver_advanced.jsp


Lec 22 Web Driver III - 4 CSCE 747 Fall 2013

if (driver.getPageSource().contains("Build my Car - Configuration - Online Chat")) {
    try {
        // Find the Close button on the chat popup window
        // and close the popup by clicking the Close button
        // instead of closing it directly
        WebElement closeButton = driver.findElement(By.id("closebutton"));
        closeButton.click();
        …

Selenium Testing Tools Cookbook by Gundecha – 2012
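The excerpt above is truncated. Below is a minimal, self-contained sketch of the same pattern; the page-source marker and element id come from the excerpt, while the class wrapper, the URL, the FirefoxDriver, and the catch block are illustrative assumptions, not the Cookbook's exact code.

import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ChatPopupSketch {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://www.example.com/build-my-car");  // hypothetical URL

        // Only try to dismiss the chat popup if its marker text is in the page source
        if (driver.getPageSource().contains("Build my Car - Configuration - Online Chat")) {
            try {
                // Dismiss the popup via its Close button rather than killing the window
                WebElement closeButton = driver.findElement(By.id("closebutton"));
                closeButton.click();
            } catch (NoSuchElementException e) {
                // The marker text was present but the button was not; continue the test
                System.out.println("Chat popup close button not found");
            }
        }
        driver.quit();
    }
}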


Lec 22 Web Driver III - 5 CSCE 747 Fall 2013

Homework for this week

MySC

.seleniumhq.org/docs/04_webdriver_advanced.jsp


Lec 22 Web Driver III - 6 CSCE 747 Fall 2013

Test 2

.seleniumhq.org/docs/04_webdriver_advanced.jsp


Lec 22 Web Driver III - 7 CSCE 747 Fall 2013

WebDriver API (from the Javadoc method summary: modifier and type, method, and description)

void close() Close the current window, quitting the browser if it's the last window currently open.

WebElement findElement(By by) Find the first WebElement using the given method.

java.util.List<WebElement> findElements(By by) Find all elements within the current page using the given mechanism.

void get(java.lang.String url) Load a new web page in the current browser window.

java.lang.String getCurrentUrl() Get a string representing the current URL that the browser is looking at.

java.lang.String getPageSource() Get the source of the last loaded page.

http://selenium.googlecode.com/git/docs/api/java/org/openqa/selenium/WebDriver.html
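A short sketch exercising the methods listed above, assuming a Firefox browser; the class name and URL are illustrative only, not part of the API.

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class BasicApiSketch {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();               // open a browser session
        driver.get("http://www.example.com");                 // blocks until the page loads
        System.out.println(driver.getCurrentUrl());           // where the browser ended up
        System.out.println(driver.getPageSource().length());  // source of the loaded page

        WebElement firstLink = driver.findElement(By.tagName("a"));        // first match
        List<WebElement> allLinks = driver.findElements(By.tagName("a"));  // all matches
        System.out.println(allLinks.size() + " links; first: " + firstLink.getText());

        driver.close();  // closes this window; quits the browser if it was the last one
    }
}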


Lec 22 Web Driver III - 8 CSCE 747 Fall 2013

WebDriver API (continued)

java.lang.String getTitle() The title of the current page.

java.lang.String getWindowHandle() Return an opaque handle to this window that uniquely identifies it within this driver instance.

java.util.Set<java.lang.String> getWindowHandles() Return a set of window handles which can be used to iterate over all open windows of this WebDriver instance by passing them to switchTo().window(String).

WebDriver.Options manage() Gets the Option interface.

WebDriver.Navigation navigate() An abstraction allowing the driver to access the browser's history and to navigate to a given URL.

void quit() Quits this driver, closing every associated window.

WebDriver.TargetLocator switchTo() Send future commands to a different frame or window.

http://selenium.googlecode.com/git/docs/api/java/org/openqa/selenium/WebDriver.html
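A sketch of the window-handle methods; handles are opaque strings, so the usual pattern is to remember the current handle, visit the others via switchTo().window(), and switch back. The class name and URL are hypothetical.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class WindowHandlesSketch {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://www.example.com");          // hypothetical URL

        String original = driver.getWindowHandle();    // handle of the current window
        for (String handle : driver.getWindowHandles()) {
            driver.switchTo().window(handle);          // future commands target this window
            System.out.println(handle + " -> " + driver.getTitle());
        }
        driver.switchTo().window(original);            // return to the starting window
        driver.quit();                                 // closes every associated window
    }
}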


Lec 22 Web Driver III - 9 CSCE 747 Fall 2013

Method detail, e.g. get:

void get(java.lang.String url)

Load a new web page in the current browser window. This is done using an HTTP GET operation, and the method will block until the load is complete. This will follow redirects issued either by the server or as a meta-redirect from within the returned HTML. Should a meta-redirect "rest" for any duration of time, it is best to wait until this timeout is over, since should the underlying page change whilst your test is executing the results of future calls against this interface will be against the freshly loaded page.

Synonym for WebDriver.Navigation.to(String).

Parameters: url – The URL to load. It is best to use a fully qualified URL.

http://selenium.googlecode.com/git/docs/api/java/org/openqa/selenium/WebDriver.html
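Since get() is a synonym for navigate().to(), the Navigation interface is mostly useful for history. A brief sketch, with hypothetical URLs and class name:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class NavigationSketch {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.navigate().to("http://www.example.com/a");  // same blocking load as get()
        driver.navigate().to("http://www.example.com/b");
        driver.navigate().back();      // browser Back: now on /a
        driver.navigate().forward();   // browser Forward: back on /b
        driver.navigate().refresh();   // reload the current page
        driver.quit();
    }
}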


Lec 22 Web Driver III - 11 CSCE 747 Fall 2013

Beautiful Testing

Testing Satisfies Stakeholders - Rex Black

When we describe something as beautiful, we mean that it has qualities that give great pleasure or satisfaction. For testing, it is the latter, not the former: testing should provide satisfaction.

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 12 CSCE 747 Fall 2013

For Whom Do We Test?

The test stakeholders.

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 13 CSCE 747 Fall 2013

What Testing Satisfies?

Each stakeholder has a set of objectives and expectations related to testing. They want these carried out effectively, efficiently, and elegantly.

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 14 CSCE 747 Fall 2013

Effectiveness

Effectiveness means satisfying these objectives and expectations.

Unfortunately, the objectives and expectations are not always clearly defined or articulated.

Testers must work with the stakeholder groups to determine their objectives and expectations, and to resolve any unrealistic expectations.

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 15 CSCE 747 Fall 2013

Efficiency

Efficiency means satisfying objectives and expectations in a way that maximizes the value received for the resources invested.

Different stakeholders have different views on invested resources.

Maximize value as defined by your stakeholders.

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 16 CSCE 747 Fall 2013

Elegance

Elegance means achieving effectiveness and efficiency in a graceful, well-executed fashion.

You and your work should impress the stakeholders as fitting well with the overall project.

You should never appear surprised (or worse yet, dumbfounded) by circumstances that stakeholders consider foreseeable.

Elegant testers exhibit what Ernest Hemingway called "grace under pressure."

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 17 CSCE 747 Fall 2013

What's the Focus?

The more bugs the testers find, the more efficient the testers consider themselves. Such testers consider it elegant to construct a particularly devilish, sometimes even tortured, test case that causes a crash, abnormal application termination, computer lock-up, data loss, or a similarly spectacular and severe failure.

At the extreme end of the scale, some test managers even pay bonuses or measure testers on their yearly performance evaluations based on the number of severe bugs found.

But the focus should be quality assurance, not bug hunting.

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 18 CSCE 747 Fall 2013

Bug-focused?

Development managers and project managers generally consider bug-obsessed testers antagonistic, disruptive, and obstructive.

Effectiveness means that testers focus their efforts on important areas and typical workflows, and find whatever bugs exist there.

Efficiency means covering critical and typical scenarios and finding important bugs early in the project.

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 19 CSCE 747 Fall 2013

Questions for Testers

What percentage of the bugs delivered to us do we find?

Do we find a higher percentage of the important bugs?

What is our cost per bug found and fixed during testing compared to the cost of a failure in production?

Develop metrics to answer such questions.

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 20 CSCE 747 Fall 2013

Metrics

Equation 2-1. Defect detection percentage: DDP = bugs-detected / bugs-present

Equation 2-2. Defect detection percentage: DDP = test-bugs / (test-bugs + production-bugs), for the last level of testing prior to UAT and deployment

An independent test team's defect detection percentage for a system test or system integration test averages around 85%.

Beautiful Testing … Goucher & Riley 2009
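A tiny sketch of Equation 2-2; the bug counts are hypothetical, chosen to land on the 85% average mentioned above.

public class DdpSketch {
    public static void main(String[] args) {
        int testBugs = 85;        // hypothetical: bugs found in the last test level
        int productionBugs = 15;  // hypothetical: bugs that escaped to production

        // Equation 2-2: DDP = test-bugs / (test-bugs + production-bugs)
        double ddp = (double) testBugs / (testBugs + productionBugs);
        System.out.printf("DDP = %.0f%%%n", ddp * 100);  // prints: DDP = 85%
    }
}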


Lec 22 Web Driver III - 21 CSCE 747 Fall 2013

Bug-finding focus

Equation 2-3. Bug-finding focus: DDP(all bugs) < DDP(critical bugs), achieved through risk-based testing.

RBCS Library: http://www.rbcs-us.com/software-testing-resources/library

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 22 CSCE 747 Fall 2013

Costs

Cost of detection – The testing costs that we would incur even if we found no bugs. Performing a quality risk analysis, setting up the test environment, and creating test data are activities that incur costs of detection.

Cost of internal failure – The testing and development costs that we incur purely because we find bugs. Filing bug reports, fixing bugs, confirmation-testing bug fixes, and regression-testing changed builds are activities that incur costs of internal failure.

Cost of external failure – (next slide)

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 23 CSCE 747 Fall 2013

Cost of external failure

The support, testing, development, and other costs that we incur because we don’t deliver 100% bug-free, perfect products.

For example, much of the costs for technical support or help desk organizations and sustaining engineering teams are costs of external failure.

Business costs – future sales lost

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 24 CSCE 747 Fall 2013

Equation 2-4. Average cost of a test bug: ACTB = (cost-of-detection + cost-of-internal-failure) / test-bugs

Equation 2-5. Average cost of a production bug: ACPB = cost-of-external-failure / production-bugs

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 25 CSCE 747 Fall 2013

testing return on investment (ROI)

Equation 2-6. Calculating the testing return on investment: Test ROI = (ACPB – ACTB) * test-bugs / cost-of-detection

ROIs range from 25% all the way up to more than 3,500%.

As the cost of external failure goes up relative to the cost of internal failure, the return on the testing investment also goes up.

Beautiful Testing … Goucher & Riley 2009
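A worked sketch chaining Equations 2-4 through 2-6; every dollar figure and bug count below is hypothetical, chosen only to show the arithmetic.

public class TestRoiSketch {
    public static void main(String[] args) {
        double costOfDetection = 50000;        // hypothetical: spent even if no bugs found
        double costOfInternalFailure = 25000;  // hypothetical: spent because bugs were found
        double costOfExternalFailure = 200000; // hypothetical: spent on escaped bugs
        int testBugs = 250;                    // hypothetical
        int productionBugs = 50;               // hypothetical

        // Equation 2-4: ACTB = (cost-of-detection + cost-of-internal-failure) / test-bugs
        double actb = (costOfDetection + costOfInternalFailure) / testBugs;  // 300.0
        // Equation 2-5: ACPB = cost-of-external-failure / production-bugs
        double acpb = costOfExternalFailure / productionBugs;                // 4000.0
        // Equation 2-6: Test ROI = (ACPB - ACTB) * test-bugs / cost-of-detection
        double roi = (acpb - actb) * testBugs / costOfDetection;             // 18.5
        System.out.printf("Test ROI = %.0f%%%n", roi * 100);                 // 1850%
    }
}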


Lec 22 Web Driver III - 26 CSCE 747 Fall 2013

HW: Victor Basili's Goal-Question-Metric (GQM) approach

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 27 CSCE 747 Fall 2013

Questions on regression tests

What percentage of regression tests have we automated?

What percentage of regression-related quality risks do we cover?

How much more quickly can we run our automated regression tests?

Equation 2-7. Regression test automation percentage: RTA = automated-regression-tests / (manual-regression-tests + automated-regression-tests)

Beautiful Testing … Goucher & Riley 2009
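For example (hypothetical counts): with 300 automated and 100 manual regression tests, Equation 2-7 gives RTA = 300 / (100 + 300) = 75%.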


Lec 22 Web Driver III - 28 CSCE 747 Fall 2013

Regression risk coverage

Equation 2-8. Regression risk coverage: RRC = regression-risks-covered / regression-risks-identified

Equation 2-9. Acceleration of regression testing ART =

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 29 CSCE 747 Fall 2013

Know your stakeholders.
Know their objectives and expectations for testing.
Establish metrics and targets for stakeholder objectives and expectations (external beauty).
Establish metrics and targets for testing objectives and expectations (internal beauty).

Beautiful Testing … Goucher & Riley 2009


Lec 22 Web Driver III - 30 CSCE 747 Fall 2013

1. Cost of finding a defect in testing (CFDT):

= Total effort spent on testing / Defects found in testing

Note: Total time spent on testing includes the time to create, review, rework, and execute the test cases and to record the defects. It should not include time spent fixing the defects.

2. Test Case Adequacy: This defines the number of actual test cases created vs. the number estimated at the end of the test case preparation phase. It is calculated as

No. of actual test cases / No. of test cases estimated

3. Test Case Effectiveness: This defines the effectiveness of the test cases, measured as the share of all detected defects that were found using the test cases. It is calculated as

No. of defects detected using test cases * 100 / Total no. of defects detected

4. Effort Variance can be calculated as

{(Actual Efforts-Estimated Efforts) / Estimated Efforts} *100

5. Schedule Variance: It can be calculated as

{(Actual Duration - Estimated Duration)/Estimated Duration} *100


http://www.softwaretestingstuff.com/2007/10/software-testing-metrics.html
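Two worked examples with hypothetical numbers: if documented test cases found 40 of the 50 defects detected in total, Test Case Effectiveness = 40 * 100 / 50 = 80%; if actual effort was 550 hours against a 500-hour estimate, Effort Variance = {(550 - 500) / 500} * 100 = 10%.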


Lec 22 Web Driver III - 31 CSCE 747 Fall 2013

6. Schedule Slippage: Slippage is defined as the amount of time a task has been delayed from its original baseline schedule. The slippage is the difference between the scheduled start or finish date for a task and the baseline start or finish date. It is calculated as

((Actual End Date – Estimated End Date) / (Planned End Date – Planned Start Date)) * 100

7. Rework Effort Ratio:

(Actual rework effort spent in that phase / Total actual effort spent in that phase) * 100

8. Review Effort Ratio:

(Actual review effort spent in that phase / Total actual efforts spent in that phase) * 100

9. Requirements Stability Index:

{1 – (Total No. of changes / No. of initial requirements)}

10. Requirements Creep:

(Total No. of requirements added / No. of initial requirements) * 100

11. Weighted Defect Density:

WDD = (5*Count of fatal defects)+(3*Count of Major defects)+(1*Count of minor defects)

Note: Here the values 5, 3, and 1 correspond to the defect severities: Fatal – 5, Major – 3, Minor – 1.

http://www.softwaretestingstuff.com/2007/10/software-testing-metrics.html
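For example (hypothetical counts): 2 fatal, 4 major, and 10 minor defects give WDD = (5*2) + (3*4) + (1*10) = 32.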