PT Issue 21

Transcript of PT Issue 21

Page 1: PT Issue21

Essential for software testers | PROFESSIONAL TESTER | SUBSCRIBE

It's FREE for testers

Simulation virtualization

June 2013 v2.0 number 21 | £4 / €5

Including articles by:

Howard Osborne, e-Testing

Huw Price and Llyr Jones, Grid-Tools

Vu Lam, QASymphony

Stephen Johnson, ROQ IT

Doug Lawson

Bogdan Bereza, VictO

A bunch of developers, Coverity

THIS ISSUE OF PROFESSIONAL TESTER IS SPONSORED BY HP

Continuous testing and delivery

Including articles by:

Bogdan Bereza, VictO

Roy de Kleijn, Polteq

Mark Lehky

Jessica Schiffmann, Prism Informatics

Eric M.S.P. Veith, Wilhelm Büchner Hochschule/TU Bergakademie Freiberg

Page 3: PT Issue21


From the editor

Stopping stopping

In software development, “continuous” is often used wrongly to mean “very frequent”. Many developers like frequent release lifecycles because they prefer making mistakes and fixing them over trying harder not to make them in the first place. Everyone who writes code knows how much fun that is. And work should be fun, shouldn’t it?

Yes, if you bear the consequences yourself. But developers typically don’t: it is others who suffer, often badly.

It has been said that change is the enemy of quality. In this issue we see that, increasingly, frequent software change is the friend and ally of business. So the first statement needs qualifying: change due to mistakes is the enemy of quality and must be stamped out, even more so because developers try to dress deliberate mistake-making up as something positive by calling it “iterative” and so on.

But rapid change for business reasons is a reality testing must embrace, and that brings us to the correct meaning of continuous: “without stopping”. In software development, that means eliminating the need for human intervention by automating as much as possible.

This issue is about how testing can achieve that. We are very grateful to its sponsor HP for making it possible.

Edward Bishop, Editor

IN THIS ISSUE: Continuous testing and delivery

4 We can no longer beat agile so must join it – Edward Bishop examines the impact of mobile apps on testing

8 Autometrics – Jessica Schiffmann and Eric M. S. P. Veith on extending test automation and its measurement in SAP

12 Behave yourself – BDD for testers with Roy de Kleijn

18 Cross purposes – Bogdan Bereza discusses centralizing mobile app test automation

21 Flash light – Mark Lehky’s comprehensive guide to automating a technology often perceived as hard to automate

Visit professionaltester.com for the latest news and commentary

Contact

Editor: Edward Bishop

[email protected]

Managing Director: Niels Valkering

[email protected]

Art Director: Christiaan van Heest

[email protected]

Sales: Rikkert van Erp

[email protected]

Publisher: Jerome H. Mol

[email protected]

[email protected]

Contributors to this issue: Jessica Schiffmann, Eric M. S. P. Veith, Roy de Kleijn, Bogdan Bereza, Mark Lehky

Professional Tester is published by Professional Tester Inc

We aim to promote editorial independence and free debate: views expressed by contributors are not necessarily those of the editor nor of the proprietors. © Professional Tester Inc 2013. All rights reserved. No part of this publication may be reproduced in any form without prior written permission. “Professional Tester” is a trademark of Professional Tester Inc.

Page 4: PT Issue21


Testers have been arguing about agile development and accelerating iteration frequency, with legitimate reason, for a long time. But the growth of mobile apps has now rendered resistance futile. Testing must eliminate testing cycles and become continuous or development cycles will eliminate testing.

The agile business

When developers say “agile” they usually mean “agile development” and are talking about the working practices of programmers: finding the most efficient and productive ways to spend their time. Testers use the same definition, and testers working with agile developers are concerned with helping to make the developers’ work efficient, productive and safe. Some developers think testing can do that, some don’t. Most are somewhere in the middle and want to use testing, but in times and ways of their own choice.

When business people say “agile” they mean “agile business”: in other words, a business that can change its operations rapidly. In a doomed attempt to avoid confusion, the term “nimble” has been coined to mean the same thing.

Business decision-makers are employed for their ability to make good strategic decisions. But strategy becomes more difficult as things change faster. So successful strategists need their decisions to be applied as immediately as possible. They tend to think of themselves as racing drivers. Whether in F1 (blue chip), go-kart (startup) or anywhere between, they want their car to be as manœuvrable as possible, because that will give their driving skills the best chance of success and make them less likely to crash.

The majority of HP’s clients are actively seeking greater business agility.

Is agile development the best way to deliver an agile business?

A lot of people are convinced it is, or isn’t, but the truth is no-one knows. Like so much in testing, it depends. Much more importantly, soon no-one will care.

Mobile apps have not just changed the game, they’ve ripped it up and started it again. They are more immediately business-critical than anything that has existed before. To business strategists focused on sales at any level, the mobile app is their racing car. It must respond to their controls. That requires frequent change to production and that requires continuous delivery. Web changed mainframe development because it depended upon it, yet became as important as it. In the same way, mobile is changing all other development.

Consider a mobile app developer. He or she must be fluent in Objective-C/iOS

We can no longer beat agile so must join it
by Edward Bishop

PT editor Edward Bishop assesses the impact of mobile on testing

How to stop testing limiting release frequency

Page 5: PT Issue21


and/or Java/Android and expert in design of responsive/adaptive mobile GUIs. Can you find anyone with those skills who knows or cares about any sequential development lifecycle or independent quality control process? There is no such animal.

But consider also a mobile app with many users in which a severe defect is detected in production. The defect is fixed and a new version released. But what about users who don’t update immediately? They will probably experience and be influenced by failure. Worse, what if that failure causes wider failure, for example of data integrity, that impacts more users, damaging the business?

So shall we prevent use of the defective version(s)? Display a message telling users they cannot use their current version and must update? Deliberately cripple some functionality? Hold some transactions in suspense and start an emergency project aimed at consolidating them? How agile is a business that does any of these? Obviously it is better to avoid them by preventing the defect entering production. Testing is the only way to achieve that.

Is agile testing the best way to support agile development?

This depends what “agile testing” means, and I can find no useful definition. Whatever it means, agile development must not require testing itself to change. Although continuous delivery suits agile development and requires testing to be integrated better and to be continuous, good governance requires testing to remain formal and independent.

Many testers use HP Quality Center to deliver superb testing practices which were originally designed for application to sequential development practices. The fact that the basis of that design is now obsolete does not render the practices obsolete. They remain as valid and important as ever. They must be done, but they must be done more rapidly and with more granularity, so their risk-reducing power can be applied to the frequent releases to production demanded by mobile apps.

Agile and business continuity

One of HP’s clients is a multinational retail franchise business. It retails products and services at thousands of outlets. The average retail price per unit is in the hundreds of pounds. The franchisor supplies applications to the franchisee. These are diverse, ranging from POS and retail management to control of onsite manufacturing machinery. One of the largest is used for supply chain management, a critical business function with a close relationship to revenue. Because it knows all projects will soon be agile, the business has started its agile journey by converting this key project to agile development.

The first thing an agile project must have is a viable plan B so that plan A can take more risk. The SCM project took the considerable risk of outsourcing most of the development work to two suppliers, both in India. It paid off as both suppliers quickly proved superb. But what if that should change?

If a business depends on a datacentre, its business continuity plan needs to address that. If a project depends on suppliers, in exactly the same way it must consider supplier failure: what if one of them suffers disruption? If you are not in a position to replace them quickly, your project inherits all the business continuity risk of its suppliers – and that of their suppliers.

HP Application Lifecycle Management records all the information the replacement supplier needs in nearly-real time, and makes it quick and easy to transfer. For that reason alone, ALM is essential to business agility.

Testing in the agile project

Periodic releases of information about what changed in the last iteration and what change is planned for the next are not sufficient. Testing needs to know about change to the product immediately it occurs, so testing can adapt immediately and is always of the current product, achieving the greatest effectiveness

Figure 1: Application Lifecycle Intelligence

Page 6: PT Issue21


possible in the time and with the resources available.

HP’s Application Lifecycle Intelligence links ALM with build servers and software configuration management. That allows testing to see how many lines of code went into a specific change, who wrote them, which build that change is in, what proportion of relevant unit tests the changed components have passed and exactly what was done, enabling strategic allocation of retesting and regression testing effort (see figure 1).
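To make the idea concrete, here is a small illustrative sketch of how per-change data of that kind could be turned into a retest priority. It is not HP’s algorithm; the weights and component names are invented.

public class ChangeRiskScore {

    // Bigger changes and weaker unit-test results push a component up the
    // regression-testing queue.
    static double score(int linesChanged, double unitTestPassRate) {
        double sizeFactor = Math.log1p(linesChanged);
        double confidenceGap = 1.0 - unitTestPassRate; // 0 means all relevant unit tests passed
        return sizeFactor * (0.5 + confidenceGap);
    }

    public static void main(String[] args) {
        // Invented example data: a small, fully unit-tested change versus a
        // large change with a weak unit-test pass rate.
        System.out.printf("PricingService: %.2f, CheckoutFlow: %.2f%n",
                score(12, 1.00), score(480, 0.65));
    }
}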

Developers interact with Quality Center within their IDE (figure 2), seeing easily all test results relevant to their current work, and knowing that changes they make and unit tests they execute will influence those results in nearly-real time. This helps to bring developers onboard with testing and its formal defect management lifecycle.

In October 2012 Professional Tester considered the interface of testing to “DevOps”, a trend closely related to CI and similarly driven by the need for business agility. More frequent app releases are increasingly likely to contain operational as well as code change, therefore continuous testing must be also of the current environment. HP’s Lab Management Automation allows deployment and configuration of test environments to be controlled by the same schedule that executes tests.

Those environments need to be identical to their production equivalents except, in the case of composite applications such as most mobile apps, in one respect: they cannot be interfaced to external services because that would be dangerous and/or expensive. HP’s Service Virtualization keeps testing continuous by enabling testers to create and maintain the interfaces their tests need easily and rapidly.

This divergence of test from development environment creates a danger. What if the behaviour of an external service changes? In PT’s April 2013 issue Huw Price and Llyr Jones explained why it is vital to carry out frequent empirical testing to detect that: it is untraceable, invalidates testing and is likely to cause regression failure. To ensure reliable detection HP has integrated QuickTest Professional with Service Test, adding very easy code-free API testing to the same QTP IDE used for GUI testing. This fully-automated Unified Functional Testing is assimilated seam-lessly to the CI workflow. When change is detected, the test environment is cor-rected easily using Service Virtualization’s visually intuitive interface (figure 3) so it is ready for use to test resulting change to code immediately it takes place.

Contributors to PT’s February 2011 issue noted that even in the most highly automated software organization some manual test execution is needed. HP Sprinter accelerates this by semi-automating it, eliminating human error in data entry, replicating execution across multiple environments and capturing every detail of what is done and its results directly into ALM.

Closing the dev-test-deploy loop using automation means it can be made smaller: frequency of releases to production neither impacts nor is limited by testing. Effective, continuous testing that is integral to continuous delivery maximizes business agility

To learn more about the HP products mentioned in this article please visit http://hp.com/go/alm

Figure 3: editing a Service Virtualization definition

Figure 2: Quality Center interaction within Eclipse

Page 7: PT Issue21

Test Studio
Easily record automated tests for your modern HTML5 apps

Test the reliability of your rich, interactive JavaScript apps with just a few clicks. Benefit from built-in translators for the new HTML5 controls, cross-browser support, JavaScript event handling, and codeless test automation of multimedia elements.

www.telerik.com/html5-testing

Page 8: PT Issue21


Autometrics
by Jessica Schiffmann and Eric M.S.P. Veith

Ideas to improve test automation of SAP and other applications from Jessica Schiffmann and Eric M.S.P. Veith

Applications are changing faster than ever. Advanced IDEs, ultra-flexible enterprise platforms such as SAP, continuous integration, automated deployment, cloud and sequential lifecycles all lead to more frequent releases. Test automation needs to keep up, but is still dogged by the same problems it always has been. For example:

• it takes a lot of effort to vary already-automated tests in order to sustain defect-finding potential

• there is a tendency to create “lab” tests that don’t simulate realistic actual use

• it’s difficult to demonstrate its value in a way that will help sustain management support.

In this article we will offer ideas for improving these areas, especially the last, at the same time relating how functional test automation of SAP applications can be achieved.

Top eCATT

Our automation tool of choice is eCATT (extended computer-aided test tool). This achieves a high rate of success in identifying SAP GUI objects and can also access SAP at the database level which third-party tools typically cannot. Its weakness is that it is limited to the SAP environment. Our example here has no web or other external interfaces: otherwise it might be necessary to use additional tools, or even make a different choice of main tool.

Like most “capture-replay” test automation tools eCATT records manual user interaction with the GUI of the application under test (see figure 1) as a script (figure 2). Here, the section UserChangedState contains the detail of what the user did to an object or objects: for example making a choice from a drop-down control or selecting a radio button. The command GETGUI is used to read the state of objects for validation. Other commands enable internal validation, for example of database table contents. The script is extended and developed by adding further commands from Inline ABAP, a subset of SAP’s Advanced Business Application Programming language, and parameterized, configured and executed using the transaction seCATT (figure 3). Other transactions provide detailed logging of execution and outcomes at GUI, database and system levels.

Deciding which tests to automate

In the early days of test automation standard advice was “automate first or next those tests which will be executed the greatest number of times”. This is logically correct, but in this form is applicable only to sequential development lifecycles with formal test planning. In iterative/incremental lifecycles with rapid change the question becomes meaningless and impossible to answer. All valid tests are essential to

Better automation means automating more, including measurement

Page 9: PT Issue21


preventing regression and must be executed against every release. On the other hand any test can immediately become obsolete due to product change.

Instead, consider the parameterization (called “test data” in eCATT). How many lines will be required in the data file? The tests where this number is largest are the ones to automate first/next, because doing so will reduce the need for manual execution fastest.

In terms of manual effort, the same test script executed with n parameterizations is in fact n tests. Bearing this fact in mind, choosing tests to automate in this way makes true persuasive statements such as “automating these n tests required effort e1, but they can now be executed automatically in time t rather than requiring effort e2 for manual execution”. It is also often worth reminding management – and ourselves – that automatic execution is not prone to human error in action nor in observation.
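As a back-of-envelope illustration of that argument (this sketch is ours, not part of eCATT, and all names and figures are invented): every line in the data file is in effect a separate manual execution, so the manual effort e2 per release cycle and the pay-back point of the automation effort e1 fall out of simple arithmetic.

public class AutomationRoi {

    // Manual effort e2 that one release cycle would cost without automation:
    // each line in the data file is, in effect, a separate manual test run.
    static double manualEffortPerCycle(int dataLines, double manualHoursPerRun) {
        return dataLines * manualHoursPerRun;
    }

    // Release cycles after which the one-off automation effort e1 is repaid,
    // assuming each automated cycle still needs t hours of attention and that
    // t is smaller than the manual effort it replaces.
    static double breakEvenCycles(double e1, double e2PerCycle, double t) {
        return e1 / (e2PerCycle - t);
    }

    public static void main(String[] args) {
        // Invented figures: 120 data lines, 6 minutes per manual run,
        // 16 hours to automate, 0.5 hours of attention per automated cycle.
        double e2 = manualEffortPerCycle(120, 0.1);
        System.out.printf("Manual effort per cycle: %.1f h, break-even after %.1f cycles%n",
                e2, breakEvenCycles(16.0, e2, 0.5));
    }
}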

Test data design starting from historical reality

The structure unit for test data in eCATT is called “test data container”. It’s important to note that any execution run can be done against any number of containers, making it easy to group, categorize and reuse data. Because applications built in SAP are so configurable – for example, they are often rolled out with the GUI in multiple natural languages – some useful test data variants are easily found or obvious.

So how to set about designing test data? The familiar techniques such as equivalence partitioning and boundary value analysis can be used easily of course. But in the case of most SAP applications, we question their defect-finding potential and also the importance of defects they might find. The inputs they generate are unlikely to occur in real use (the reverse might be true of, say, a public-facing web application which users might deliberately wish to try to break). Furthermore, tests created by application of a too-simple technique will become “stale” (that is, their potential to detect important defects even when the AUT changes will diminish) quickly.

So actual inputs extracted from previous transactions may be a superior starting point. The data could then be increased in volume by permutation and by addition of transaction information from other systems obtained using data mining techniques. Thus real-life data is evolved into synthetic data and retains the advantages of both.

Test data design starting from randomness

Although this approach, we believe, works well, in practice it is not always easy to do. Obtaining the real transaction data may be problematical both technically and because of its implications regarding privacy.

In that case an opposite approach, using pure random data as the starting point, may be valuable. What testing really needs to develop that into realistic data

Figure 1: eCATT recorder

Figure 2: eCATT script

Figure 3: seCATT transaction

Page 10: PT Issue21


with good potential to detect defects is not the raw real-life data itself, but the coherences and probabilities that can be found within it. That might be obtained from statistical analysis of the data – perhaps part of the standard functionality of the system that processes it – with no need for sight of the data itself.

For example consider an ordering system. Generate a test case including a random number of items to be ordered. Then determine, from analysis of historical data, the average number of items per order, the probability of an order being for only one item and the probability of an order being for the maximum permitted number of items. Use this information to weight the calculation used to generate the random number. Then apply the same procedure to other inputs and permutate the resulting test cases.
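A minimal sketch of that weighting idea, independent of any particular tool; the probabilities, the maximum order size and the class name are assumptions chosen purely for illustration.

import java.util.Random;

public class OrderSizeGenerator {

    private final Random random = new Random();

    // Assumed, illustrative figures obtained from analysis of past orders.
    private static final double P_SINGLE_ITEM = 0.40;  // probability of a one-item order
    private static final double P_MAX_ITEMS   = 0.02;  // probability of the maximum permitted size
    private static final int    MAX_ITEMS     = 50;
    private static final double AVG_ITEMS     = 4.0;   // average items per order

    // Returns a "number of items ordered" weighted towards what really happens.
    int nextOrderSize() {
        double p = random.nextDouble();
        if (p < P_SINGLE_ITEM) {
            return 1;                                   // boundary: minimum order
        }
        if (p < P_SINGLE_ITEM + P_MAX_ITEMS) {
            return MAX_ITEMS;                           // boundary: maximum permitted order
        }
        // Everything else: draw around the historical average, capped to the valid range.
        int size = (int) Math.round(AVG_ITEMS + random.nextGaussian() * AVG_ITEMS / 2);
        return Math.max(2, Math.min(MAX_ITEMS - 1, size));
    }
}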

Metrics to demonstrate ROI

Much has been written about selecting test metrics. In general a metric should provide quantitative information and be objective, comprehensible, adaptive (that is, for use at different stages of product development) and possible to generate automatically. Peter Liggesmeyer (Software-Qualität, Spektrum der Wissenschaft, 2010) suggests and defines further attributes: elementariness, validity, stability, seasonableness, analyzability, and possibility to reproduce. All these things, considered together, can be used to good effect as metric selection criteria.

Basic effectiveness

One way to quantify the effectiveness of any activity that aims to improve quality of a product is number of defects it detects divided by the effort taken to perform it. The effort to produce a manual test includes the time taken to understand the requirement on which it is based plus that to apply whatever model is used. For automated testing the time to automate must be added. It is possible to consider the time taken to learn how to automate using a given toolset as an additional effort, especially when tools are changed, but as it is approximately a one-time effort we will omit it.

Unfortunately this metric is challenged by one of our key criteria: it is not obvious in SAP development how to measure the time taken to achieve understanding. However SAP comes into its own when measuring the execution and defects: eCATT’s logging facilities provide all the information needed. The answer, as for non-SAP development, is integrated application lifecycle management. We achieve this using SAP Solution Manager which allows the use of effort measurement, incident weighting and categorization optimized for the specific product in hand.

Test case defect density

In their article “What should we measure during testing?” (Testing Experience magazine September 2010), Yogesh Singh and Ruchika Malhotra define this as number of defects found divided by number of test cases executed. This metric indicates the effort needed to detect defects and is easily and entirely generated from eCATT’s logging.

Defect detection percentage (DDP)

Effectiveness of a quality activity depends very much on when defects are detected and this common metric addresses that. It can be calculated at any time as number of defects detected by the activity divided by total number of known defects. Again, eCATT makes it easy to count the defects detected by automated test execution and so to monitor DDP continuously using Solution Manager. Meaningfulness is increased further by weighting the defects as discussed above.

Transaction coverage

An SAP application consists of transactions: some standard built-in ones plus custom, specially-developed ones. The logging functionality of eCATT is based on these transactions and records, by their names, which were performed and when. This enables a simple routine to be written that calculates an SAP-specific metric: number of transactions performed by automated test execution divided by total number of transactions performed.

Execution of data-driven test cases with different parameters can also be logged in detail by the eCATT script. This might raise the possibility of measuring depth of coverage of transactions. Unfortunately this information is in the script log not the test execution log and we have so far found no easy way for Solution Manager to obtain it, making it available for automatic metric calculations
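For clarity, the four metrics discussed above reduce to simple ratios once the counts have been extracted from the eCATT and Solution Manager logs. The sketch below is illustrative only and the parameter names are ours.

public class TestMetrics {

    // Basic effectiveness: defects found by the activity / effort spent on it (hours).
    static double basicEffectiveness(int defectsFound, double effortHours) {
        return defectsFound / effortHours;
    }

    // Test case defect density: defects found / test cases executed.
    static double testCaseDefectDensity(int defectsFound, int testCasesExecuted) {
        return (double) defectsFound / testCasesExecuted;
    }

    // DDP: defects found by automated execution / all known defects, as a percentage.
    static double defectDetectionPercentage(int defectsFoundByAutomation, int totalKnownDefects) {
        return 100.0 * defectsFoundByAutomation / totalKnownDefects;
    }

    // Transaction coverage: transactions touched by automated execution / all transactions performed.
    static double transactionCoverage(int transactionsCoveredByAutomation, int totalTransactionsPerformed) {
        return (double) transactionsCoveredByAutomation / totalTransactionsPerformed;
    }
}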

Jessica Schiffmann is a senior consultant in SAP technology for Prism Informatics (prism-informatics.com). Eric M.S.P. Veith ([email protected]) is a researcher at Wilhelm Büchner Hochschule/TU Bergakademie Freiberg

Page 11: PT Issue21

Any Device. Any Network. Any Region.

www.neotys.com

Records 100% of Mobile Applications

Integrated Network Simulation

Cloud Integrated

30-Day License with Free Technical Support

Free Evaluation of NeoLoad

www.neotys.com/trial

The first complete mobile performance testing solution.


Page 12: PT Issue21


Defects in software are caused by one of three kinds of human error. In the first, the developer understands what is required and attempts to achieve it but fails. In the second, the developer misunderstands the requirement and so succeeds in achieving the wrong thing.

That misunderstanding also causes the third kind of error: working to achieve something that is not required and adds no business value. This also introduces a defect, but typically not one with the potential to cause product failure. More importantly, it wastes effort making project failure more likely.

In this article I will explain an approach to preventing the second and third types of error.

Acceptance-test-first

Behaviour-driven development grew in the mind of its inventor Dan North (see http://dannorth.net/introducing-bdd) from test-driven development. However whereas TDD is a development activity dealing with unit testing, BDD is very definitely a testing approach: it aims to deliver effective acceptance testing based on defined requirements, but in an agile way compatible with lifecycles that develop and deliver business value in short iterations.

Put another way, TDD works from the “inside” of software creation, out. BDD works “outside-in”, assuring that everything done at lower levels is correct according to the high level. So, BDD is an application of the “test-first” principle used in TDD, but it changes the meaning of that term from “unit-test-first” to the greatly superior “acceptance-test-first”. This makes it very suitable for maintenance of legacy software as well as new-build, because knowledge of existing internal design and structure is not needed to do BDD.

BDD in practice

An agile iteration (“sprint” or equivalent) begins with a short meeting defining its objectives, that is the requirements it will deliver. It’s important that all involved contribute to defining these and their underlying requirements: but even then, the understanding of different people and roles tends to vary because of their different viewpoints and the different languages they use to communicate. BDD aims to solve this problem by having everyone work together also on acceptance criteria, called “scenarios”. Steve Watson discussed how to do this in the February 2013 issue of Professional Tester.

Behave yourself
by Roy de Kleijn

Roy de Kleijn’s tutorial and get-started guide to behaviour-driven development for testers

Acceptance-test-driven development

Page 13: PT Issue21

Gothenburg, Sweden 4 - 7 November 2013

6 World-class Keynotes, 7 Tutorials

40+ presentations 50 Exhibitors, 1000 attendees

Early Bird & Group Discounts Available NOW

Europe’s Biggest Gathering of Software Testers

www.eurostarconferences.com

Learn, Network, Engage & Discuss… First Time Delegate Connection Session | Welcome Drinks | Speed Networking | The Test Lab | Taste of Gothenburg Community Dinner | Active Workshops | Special Interest Discussion Tables | Test Clinic & Tip Exchange | The Official EuroSTAR Party | and Much More!


Page 14: PT Issue21


The syntax of a BDD story is shown in figure 1. Obviously its creation is driven mainly by business representatives. The (usually many) acceptance criteria for the story are defined as BDD scenarios in the syntax shown in figure 2, surrounded if necessary by examples set out in a table as shown in figure 3. Notice here that each entry in the table is either a test data item or an expected outcome, and each line in the table is a test.

This defined, uniform syntax makes the acceptance criteria easy to read and understand so that everyone involved can define them together. It also means they can be parsed and managed using tools. The testers present must ensure that they are clear and unambiguous. If everyone agrees that has been achieved, there should be no nasty surprises in store for anyone.

After the meeting the testers can get started on writing functional, executable (and therefore automatable) acceptance tests to implement the scenarios according to the acceptance criteria. Each scenario and each of the test steps of which it consists is taken through a series of states as shown in figure 4. Initially, the scenario fails (state 2) because there is no step. Now the step is written but it too fails because there is no test object (3). Now just enough code is created to make the step pass (4) and if necessary previously-created code is refactored (5). When all steps pass, the scenario passes (6) and if necessary code created for other scenarios is refactored (7).

This process is repeated for all scenarios against all acceptance criteria, so in theory minimum code required to pass them should be written. Whether that is true in practice depends of course on how well the code is written and that is where TDD comes in.

Executable specifications and living documentation

BDD’s most important advantage is that it maps acceptance criteria, expressed as scenarios, directly to code. This can be seen as the ultimate in traceability. Using a BDD framework such as JBehave (also originally invented by Dan North, see http://jbehave.org) the stories can be executed immediately. Only the actual test code need be implemented. This is based upon and analogous to the way developers use TDD frameworks.
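As a sketch of what that “actual test code” can look like with JBehave, loosely following the syntax in figures 1 to 3: the scenario wording, the RegistrationSteps class and the acceptance rule (terms accepted and age 18 or over) are invented here purely for illustration.

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;
import static org.junit.Assert.assertEquals;

// Maps textual scenario steps to executable code. RegistrationService is a
// stand-in for the real application under test.
public class RegistrationSteps {

    private RegistrationService service;
    private String outcome;

    @Given("a visitor who has accepted the terms: $accepted")
    public void givenTermsAccepted(String accepted) {
        service = new RegistrationService("yes".equals(accepted));
    }

    @When("the visitor registers with age $age")
    public void whenTheVisitorRegisters(int age) {
        outcome = service.register(age);
    }

    @Then("the registration is $expectedOutcome")
    public void thenTheRegistrationIs(String expectedOutcome) {
        assertEquals(expectedOutcome, outcome);
    }

    // Minimal stand-in for the application under test.
    static class RegistrationService {
        private final boolean termsAccepted;

        RegistrationService(boolean termsAccepted) {
            this.termsAccepted = termsAccepted;
        }

        String register(int age) {
            return (termsAccepted && age >= 18) ? "accepted" : "not accepted";
        }
    }
}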

Traceability of course must work in both directions. When a story is executed, the results map back to the scenarios and examples: every line is viewed in red or green type depending on the results of tests associated with it. The exact current state of the product is immediately apparent. In theory, any other documentation describing its functionality becomes superfluous.

This information can be interfaced to continuous integration/delivery servers. It provides immediate feedback on the effect on existing functionality of every commit or deployment, in a form that is easily accessible and comprehensible to everyone.

Getting started with BDD

I hope all this will make those PT readers who have not done so already want to try out BDD. To help, I have set up an example project which you can use to test web applications. It supports parallel test execution and takes a screenshot whenever the actual and expected outcome differs. To set it up, please follow the instructions in figure 5

Roy de Kleijn ([email protected]) is a technical test consultant at Polteq where he works with clients on test process and automation, mainly in agile teams, and develops and presents training in Selenium/WebDriver. He is especially interested in web technology, new programming languages and knowledge transfer methods and publishes Selenium-related tutorials at http://selenium.polteq.com and test-related articles at http://rdekleijn.nl. This article is based on one originally published in Dutch by TestNet (see http://testnet.org)

Scenario: <scenario title>
Given <pre-condition>
And <optional additional pre-condition>
[...]
When <action>
And <optional additional action>
[...]
Then <post-condition>
And <optional additional post-condition>
[...]

In order to <receive benefit>
As a <role>
I want to <goal/desire>

Examples:
|input1|input2|outcome|
|yes|20|accepted|
|yes|13|not accepted|
[...]

Figure 1: syntax of a BDD story

Figure 3: example table
Figure 2: syntax of a BDD scenario


Figure 4: sequence of scenario and step states (after P. Creux, see http://eggsonbread.com)

Page 15: PT Issue21


First install the following software:
1 Eclipse IDE for Java Developers from http://eclipse.org/downloads
2 Maven. In Eclipse, choose Help –> Eclipse Marketplace and search for “Maven Integration for Eclipse”
3 JBehave Plugin. Follow the instructions at https://github.com/Arnauld/jbehave-eclipse-plugin

Now import the example project:
1 Download the sourcecode from http://roydekleijn.github.io/Spring-Jbehave-WebDriver-Example/
2 Extract the ZIP file to your chosen folder
3 In Eclipse, right-click in the ‘Package Explorer’ panel and choose Import –> Existing Maven Projects
4 Choose the folder where you stored the files as ‘root Directory’
5 Complete the wizard and click Finish

The project is in three parts (see figure 5a):
• the package org.google.pages contains an abstraction of the application under test
• the package org.google.steps contains the mapping between textual sentences and code
• the folder org/google/web contains the textual stories

To execute the stories:
• Choose Run –> Run Configurations (figure 5b)
• Right-click on Maven Build and choose New
• Select the project using Browse Workspace
• In the Goals field, enter the command integration-test -Dgroup=google -Dbrowser=firefox

Figure 5: setting up the example project

Figure 5a

Figure 5b

Page 16: PT Issue21

Are you on the right path to mobile? What enterprises need to know

Mobile is exploding… In 2014 more users will connect to the Internet over mobile devices than desktop PCs.1

1.2 billion smartphones will enter the market over the next 5 years, about 40 percent of all handset shipments.2

6 billion mobile subscriptions were active at the end of 2011, equivalent to 87% of the world's population.3

Creating increased demand for mobile applications.

The proportion of companies increasing their budgets for mobile applications has almost doubled from 28% to 51%.5

64% of mobile phone time is spent on apps.4

By 2015, the number of mobile app downloads is projected to reach 98 billion.6

These trends are impacting enterprises…

88% of employees are using their personal devices for business.7

30% of organizations have or are implementing a private app store.8

Introducing challenges to IT departments trying to develop, test, and support mobile apps.

7 common operating platforms on the market9

420 active models of Android mobile phones10

29 versions of Android released since 200712

18 versions of iOS released since 200711

Performance can make or break a mobile app…

25% of users abandon a mobile app after three seconds of delay.13


Only 30% of organizations have currently implemented any kind of mobile application performance infrastructure or strategy.14

48% of organizations listed performance as their main priority for mobile testing.15

And the security risk is growing...

Only 18% of organizations are testing mobile apps for security.15

More than 50% of organizations report security or compliance issues in their mobile deployments.16

But, many organizations are unprepared for mobile testing.

2/3 of organizations say they lack the right tools to test mobile apps.15

1/3 of organizations lack mobile testing methodologies and processes.15


In 2016, total global mobile application revenue will reach an estimated $46 billion.19

The overall mobility market will top US $1 trillion by 2014.18

What does this all mean?

Develop a user-centric mobile strategy • Support multiple devices • Integrate with your legacy systems & data • Monitor and manage the full mobile user experience

Give your app dev a boost • Embrace agile dev practices • Automate mobile testing • Account for mobile network conditions • Embed security throughout the mobile lifecycle

Are you prepared?

Read “Change your mobile application challenges into opportunities” for more guidance on how you can address these top mobile issues.

1. Mary Meeker, “Internet Trends,” Apr. 2010, Morgan Stanley 2. ABI Research 3. The International Telecommunication Union, Nov. 2011 4. “The Digital Revolution: A Look Through the Marketer’s Lens,” Apr. 2012, Nielsen 5. Chris Marsh, “Mobility Outlook 2013,” Nov. 2012, Yankee Group 6. "The Mobile Application Market," Oct. 2011, Berg Insight 7. “Global Survey: Dispelling Six Myths of Consumerization of IT,” Jan. 2012, Avenade 8. “2012 State of Mobility Survey,” Symantec 9. “Mobile Operating System,” Wikipedia 10. Count from Android forums as of April 11, 2012 11. “iOS version history,” Wikipedia 12. “Android version history,” Wikipedia 13. “First Class Mobile Application Performance Management,” Aug. 2012, The Aberdeen Group 14. “The Challenge of Application Performance in a Mobile Application World,” Jul. 2012, The Aberdeen Group 15. “World Quality Report, 2012–13”, Capgemini and HP 16. “Worldwide Mobile Security 2010–2014 Forecast and Analysis,” Mar. 2010, IDC 17. “World Quality Report, 2012–13”, Capgemini and HP 18. “Gartner Says Mobility will be a Trillion Dollar Business by 2014,” Oct. 21, 2010, Gartner (press release) 19. “Mobile Application Business Models,” Q1 2012, ABI Research


The opportunity is huge.

Page 17: PT Issue21


Multiple versions of applications have always been bad news for testing. Many PT readers will remember from experience the nightmare caused by variation between web clients: platforms, browsers, plugins etc, all in various releases. Test design was shrunk by having to repeat test execution across a large number of combinations: important tests simply could not be done. The many disgracefully simple failures that occurred at the time were no coincidence.

The browser war was won by standards. Users and support technicians learned that using a more standards-compliant browser gave them a better chance of achieving what they wanted on a wider range of sites. That in turn incentivized businesses to make their sites more compliant. Good browsers like Firefox and Safari rose up the priority list. Bad ones like Internet Explorer 5 and 6 fell down it, so their subsequent versions became more compliant. At that point most of the problem disappeared.

That is just as well because, since then, while the number of important browsers has remained constant, platforms have proliferated in the form of mobile devices. Browser and platform are coupled tightly: the behaviour of browser A for mobile phone B can be very different to its version for tablet C. Thanks to standards, it is feasible to develop and test adequately a single version of a web application to work well on most clients. Even so, there are often two versions, “desktop” and “mobile”, in order to improve the experience of those with small screens and slow connections. Thus testing effort is split in two (perhaps less than two where the two versions share many components). That’s not good, but it’s better than having to split it into several fragments as was common at the height of the browser war.

Here we go again

If all mobile applications were web applications, cross-client web and mobile development would now be in a strong position. Of course they are not: native apps have gained preference. There are many reasons, good and bad, for this which I will not attempt to analyse here. Some people consider building native apps to be retrograde: for example PhoneGap (http://phonegap.com) “wraps” web applications, converting them to native mobile apps: this idea may catch on.

Cross purposes
by Bogdan Bereza

Bogdan Bereza on regression testing without human intervention

Complete automation requires central programmatic control

Page 18: PT Issue21

Agile is going places – are you?


© BCS, The Chartered Institute for IT, is the business name of The British Computer Society (Registered charity no. 292786) 2012

BCS, The Chartered Institute for IT, offers the leading agile testing certification for software testers – Certified Agile Tester.

bcs.org/agiletester

The Certified Agile Tester scheme is a trademark of iSQI.



London Autumn 2013

This highly anticipated event now returns to London on 24th October 2013.

Register by 5th July for your early bird discount!

Just visit: testexpo.co.uk

Page 19: PT Issue21


Until and unless it does it is necessary once again to develop for different proprietary client platforms. That means testers are once again faced with multiple versions of their test items, and they won’t go away any time soon. The economic forces around users’ choice of device are more powerful and complex than those affecting free browser software. There is no incentive for standardization and no chance of one platform achieving dominance.

Worse still, the most business-critical apps are hybrids, in the sense that they have multiple web and mobile clients sharing the same server/mainframe services and data. Web cross-client failures were usually of presentation; native apps have the dangerous potential to cause organization-wide failure. And this is in continuous delivery environments where they are being changed very frequently with high possibility of regression.

Continuous cross-client hybrid app testing

So testing needs to become continuous: the growing and evolving regression suite should be executing automatically and always, to assure against defects introduced by constant change to the test items made by automated integration and deployment.

Although it is very simple, the test script shown in figure 1 could if executed in time have prevented a serious failure: a less-than-one-line change to code prevented the app, when running on one specific and superseded version of the mobile device OS, from accessing the account balance. The rarely-executed code that should have trapped that event also failed. An incorrect balance, the last one retrieved, was displayed and transactions which should have been prevented were allowed.

So how can a script like this, requiring execution and verification on multiple devices, be fully automated? The different approaches to functional testing of mobile apps taken by different tools suggest some possibilities. As so often in test automation, the key difference between them is how they recognize interface objects.

The device executes the test

Most or possibly all tools that do this require the device to be jailbroken or rooted: because of the obvious disadvantages of doing that I won’t consider them here.

It is possible “legitimate” tools which are themselves apps might emerge, possibly supported by a change of policy by the device manufacturers of the kind that is easier to imagine happening on, say, Android than on iOS. If it does happen, whether they can be used for our current purpose will depend on (i) at what level they intrude on the app under test and (ii) whether and how they communicate with the cross-client “master” script so that it can control execution of the test, monitor its progress and verify its outcome.

Remote control the device

It is possible to execute tests on a device from a workstation, using various “desktop sharing” systems. The simplicity of this method is both its strength and its weakness. The recorded script is easy to read and can be controlled and verified by the master script as we require, but it is not robust: and because the only information available about what the application is doing is what is in the pixel image displayed on the device screen, modifying, reusing and maintaining it are visual and therefore very manual tasks.

Remote control the app

The most successful tools such as Ranorex control the app, not the device. This requires its code to be instrumented, a transparent process which makes the app able to automate itself: no other app is needed. User actions on the device are captured and documented by Ranorex Recorder running on a workstation exactly as those performed on any other platform and can then be augmented with additional actions and validations, managed, maintained and reused in all the same ways, with object recognition accomplished in the most appropriate of multiple available ways.

Continuous delivery aspires to automate as many deployment operations as possible. That is not done simply by recording them and playing them back: programmatic control is needed. Continuous testing requires exactly the same. “Easy” approaches designed to appeal to people who cannot understand simple code are not easy except in demonstrations which are always done using trivial example apps. In real testing of real apps they quickly become difficult or impossible. Continuous testing of mobile apps requires not ease, but power: that is what Ranorex offers

Frequent PT contributor Bogdan Bereza is a testing consultant, speaker and trainer, proprietor of VictO (see http://victo.eu) and co-ordinator of the inaugural Good Requirements conference taking place in Warsaw on 3rd and 4th October 2013 (see http://conference.wymagania.org.pl/english.html). A free trial of Ranorex, including all the mobile app testing features described here, is available from http://ranorex.com

Access test account A via the web application

Transfer the entire balance of test account A to test account B

[repeat for all mobile devices and OS versions currently supported]

Access test account A via the mobile application and verify that its balance is displayed as zero

[end repeat]

Figure 1: cross-client test script
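To illustrate what central programmatic control of a script like figure 1 means in practice, here is a tool-neutral sketch. It is not Ranorex’s API: WebBankingClient and MobileBankingClient are hypothetical wrappers around whichever automation library drives each client.

import java.math.BigDecimal;
import java.util.List;

public class CrossClientBalanceTest {

    interface WebBankingClient {
        void transferEntireBalance(String fromAccount, String toAccount);
    }

    interface MobileBankingClient {
        String deviceName();
        BigDecimal displayedBalance(String account);
    }

    static void run(WebBankingClient web, List<MobileBankingClient> supportedDevices) {
        // Access test account A via the web application and empty it into account B.
        web.transferEntireBalance("testAccountA", "testAccountB");

        // Repeat for all mobile devices and OS versions currently supported.
        for (MobileBankingClient device : supportedDevices) {
            BigDecimal balance = device.displayedBalance("testAccountA");
            if (balance.compareTo(BigDecimal.ZERO) != 0) {
                throw new AssertionError("Balance not displayed as zero on " + device.deviceName());
            }
        }
    }
}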

Page 20: PT Issue21


Its impending decline has been rumoured for years, but Flash remains one of the most popular ways to create rich Internet applications. By testers, it is often perceived as difficult to automate. In this article I will try to dispel misconceptions and explain how effective test automation of Flash applications can be achieved easily: and when and why it cannot.

Technology overview

Most web Flash applications can be described very well by the three-tier architecture model (see figure 1). Ideally, none of the tiers knows anything about the platform, technology, or structure of any of the others, and therefore any one can easily be swapped out as new development (or test) needs arise. If the model is adhered to strictly, test automation becomes simple: any of the tiers can be replaced with virtually any tool. If, due to design and/or development decisions, the boundaries between the tiers start to blur, testing (and future development) become more difficult. This is one of the reasons testing should get involved in the design process as early as possible.

The data tier is usually, and should be, unimportant from the test point of view. It should have very little impact on the tiers above, and the choice of database engine is usually dictated by other factors such as performance, security and cost.

The logic tier should be responsible for transmitting, recoding, and processing all messages. Transmission of messages is usually done over classic HTTP or Real Time Messaging Protocol (RTMP). The content of the messages is usually encoded using Action Message Format (AMF). If the application is written in Java, then the logic tier server is typically BlazeDS; if the application is written in C# .NET, the server will likely be FluorineFX. Some logic tiers are very complex and use multiple technologies (web server, message queue, etc). Here I will deal with the most common AMF messages only.

The presentation tier of a Flash application is most often written in ActionScript. This has the look and feel of Java: object-oriented design, compile-once-run-anywhere to produce an SWF file playable by a large number of available player applications on all desktop and mobile platforms. This technology is completely different to HTML, but the player can be made compatible with a web browser via a plug-in: I am often (ie several times a week) asked: “can I automate Flash with Selenium?”. The answer is no. Sort of.

Flash light
by Mark Lehky

Mark Lehky illuminates Flash for testers

Genie and monster: a groovy soap opera

Page 21: PT Issue21


UI automation with Genie

Selenium is not a tool, but a library available in many programming languages that allows the coder to access and manipulate objects (text, controls, images etc) in a web browser’s Document Object Model (DOM). Many technologies, eg HTML5, JavaScript, AJAX and others, generate or manipulate the DOM. Flash does not.

The fact that a web browser requires a plug-in to be able to display Flash applications is the first giveaway that Selenium cannot automate Flash. A Flash application is separate to the DOM, even though it can be displayed in the browser window alongside DOM objects. Selenium was never intended to automate Flash, and in all likelihood never will.

Fortunately, there are other libraries to enable you to manipulate objects in a Flash application. Here I will discuss one of them, Genie (http://sourceforge.net/adobe/genie/). Another example, also often recommended, is flash-selenium (http://code.google.com/p/flash-selenium/). One reason, and probably the most important, why you would want to choose Genie over flash-selenium is that flash-selenium is dependent on Selenium Remote Control, which was deprecated (in favour of WebDriver) in July 2011. Genie depends on nothing and can be used completely standalone, or combined with anything else you may want to use. It can be plugged into almost any existing framework or toolset (Ant, Maven, JUnit, TestNG, etc etc).

Unlike Selenium, Genie is only available in Java and some of its development support tools will only work in Eclipse. This is a barrier for testers using only .NET and Visual Studio. Genie is open source so a port is possible but I don’t foresee it. If you need to test Flash and want to use Genie but don’t know Eclipse I strongly recommend the tutorials at http://eclipse.org/resources/.

Before installing Genie you will need to install Eclipse, the Java Development Kit (JDK) at least version 1.5, and the debug version of Flash Player. The Genie download comes with full instructions. You will also need to run a Genie Socket Server (reminiscent of the old Selenium Server). After that, things get pretty easy.

Genie browser

Genie automation scripts refer to objects by their GenieID. This is similar to the HTML element id or the element locators used by Selenium, but more complex: it is formed from several attributes of the Flash application determined by its developers together with some cryptic portions assigned by Genie itself. The result can look quite cryptic, for example SP^botBarContainer:::FP^userBox:::SE^imgBox::PX^2::PTR^0::IX^2::ITR^0. GenieIDs are difficult to predict, even in close collaboration with developers.

Genie includes a browser (figure 2) that helps you locate elements and discover their GenieIDs and other attributes.

Genie recorder

User actions within the Flash application are recorded as a generic Java script. Genie has no playback functionality; the script is intended to be pasted into Eclipse and run from there, somewhat similar to recording a script with Selenium IDE and then exporting it to Java.

An example Genie script is shown in figure 3. It is somewhat similar to Selenium RC code, and the recorder will help to ease you into Genie programming. There is no contextual menu to help create assertions; they have to be inserted manually, in Eclipse. Once you become proficient at writing code by hand you will almost never use Genie recorder again.

As can be seen from the script, Genie has its own test framework called GenieScript. This is all you need to create and execute extensive test suites,

Figure 1: three-tier architecture (public domain image from http://en.wikipedia.org/wiki/File:Overview_of_a_three-tier_application_vectorVersion.svg)

[Diagram: the presentation tier sends a request (for example GET SALES TOTAL) to the logic tier, which queries the data tier for the individual sales and returns the computed total.]

Presentation tier: the top-most level of the application is the user interface. The main function of the interface is to translate tasks and results to something the user can understand.

Logic tier: this layer coordinates the application, processes commands, makes logical decisions and evaluations, and performs calculations. It also moves and processes data between the two surrounding layers.

Data tier: here information is stored and retrieved from a database or file system. The information is then passed back to the logic tier for processing, and then eventually back to the user.

Page 22: PT Issue21


although it does not have all the features of more mature frameworks such as JUnit or TestNG which are often used instead.

Genie also features built-in custom logging which produces an XML file, with an XSLT provided to convert it to viewable HTML. The file can of course be parsed with any tool, but it does not follow any established format such as JUnit’s. Furthermore, the log is limited to only Genie commands: if your test fails in any other step (for example an assert), this will not be logged automatically. Genie does have extra commands to log your own events, but this is additional overhead.

If you decide to use some other framework such as JUnit or TestNG you will probably use the logging capabilities it provides so the logs can be parsed automatically by your continuous integration server, and once you have finished debugging your test will want to turn off Genie’s logging. This is done with:

LogConfig logger = new LogConfig();
logger.setNoLogging(true);

All your Selenium skills (and frustrations) still apply!

Because Genie, like Selenium, is a Java library, having the framework of your choice run your tests is trivial. Figure 4 shows how to do this with JUnit; figure 5 combines it with Selenium.

I have also found that all my Genie tests run with no problems extended from GroovyTestCase. This is significant because Groovy, a scripting language for the Java platform, is natively understood by SoapUI. I’ll explain this significance further in the section on API automation below.

The Page Object Model

Think of your application as a collection of screens, each with a number of controls and actions that it can perform. If you can describe one screen’s actions as a collection of methods, abstracting away all the controls, and make it independent of any other screen, then you have a PageObject. Alan Richardson described the advantages of this in Professional Tester August 2012 (http://professionaltester.com/magazine/backissue/PT016/ProfessionalTester-August2012-Richardson.pdf).

Genie, with its obscure GenieIDs, is perfect for this approach! Further, because Flash is often embedded in a webpage via a browser plug-in, there is great incentive to combine the automation with Selenium. Using the PageObject pattern, you can effectively hide which actions require accessing Selenium and which are accessing Genie.
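As a sketch of that combination (the GenieIDs are shortened placeholders, and the page class and element names are invented for illustration), a login PageObject might hide both libraries behind its methods:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

import com.adobe.genie.executor.components.GenieDisplayObject;
import com.adobe.genie.executor.components.GenieTextInput;
import com.adobe.genie.genieCom.SWFApp;

// A PageObject for the login screen that hides whether each action goes
// through Selenium (the surrounding HTML page) or Genie (the Flash app).
// The GenieIDs shown are abbreviated placeholders, not real ones.
public class LoginPage {

    private static final String USER_FIELD = "SP^stageContainer:::FP^loginContainer:::SE^userNameTxt::...";
    private static final String PASS_FIELD = "SP^stageContainer:::FP^loginContainer:::SE^passwordTxt::...";
    private static final String LOGIN_BTN  = "SP^stageContainer:::FP^loginContainer:::SE^loginBtn::...";

    private final WebDriver driver;
    private final SWFApp app;

    public LoginPage(WebDriver driver, SWFApp app) {
        this.driver = driver;
        this.app = app;
    }

    public LoginPage acceptCookieBanner() {
        // HTML part of the page: Selenium (element id is hypothetical).
        driver.findElement(By.id("accept-cookies")).click();
        return this;
    }

    public void logIn(String user, String password) throws Exception {
        // Flash controls: Genie.
        new GenieTextInput(USER_FIELD, app).input(user);
        new GenieTextInput(PASS_FIELD, app).input(password);
        new GenieDisplayObject(LOGIN_BTN, app).click();
    }
}

Test code then calls only LoginPage methods, so locators and the Selenium/Genie split can change without touching the tests.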

Page loading delays

As with any web technology, there are issues with page loading delays. Since the Flash application runs in a plug-in you will not be able to query the browser to see if the page is loaded – it is always loaded.

Figure 2: Genie browser

Page 23: PT Issue21


package scripts;

import com.adobe.genie.genieCom.SWFApp;
import com.adobe.genie.executor.GenieScript;
import com.adobe.genie.executor.components.*;
import com.adobe.genie.executor.uiEvents.*;
import static com.adobe.genie.executor.GenieAssertion.*;
import com.adobe.genie.executor.enums.GenieLogEnums;

/**
 * This is a sample Genie script.
 */
// Change name of the class
public class Unnamed extends GenieScript {

    public Unnamed() throws Exception {
        super();
    }

    @Override
    public void start() throws Exception {
        // Turn this on if you want script to exit when a step fails
        EXIT_ON_FAILURE = false;

        // Turn this on if you want a screenshot to be captured on a step failure
        CAPTURE_SCREENSHOT_ON_FAILURE = false;

        SWFApp app1 = connectToApp("[object paMain]");
        (new GenieDisplayObject("SP^stageContainer:::FP^loginContainer:::SE^userNameTxt::PX^6::PTR^0::IX^1::ITR^0", app1)).click(56, 17, 436, 174, 1090, 821, 3, false);
        (new GenieTextInput("SP^stageContainer:::FP^loginContainer:::SE^userNameTxt::PX^6::PTR^0::IX^1::ITR^0", app1)).selectText(0, 0);
        (new GenieTextInput("SP^stageContainer:::FP^loginContainer:::SE^userNameTxt::PX^6::PTR^0::IX^1::ITR^0", app1)).input("147258369");
        (new GenieDisplayObject("SP^stageContainer:::FP^loginContainer:::SE^passwordTxt::PX^6::PTR^0::IX^2::ITR^0", app1)).click(63, 12, 443, 246, 1090, 821, 3, false);
        (new GenieTextInput("SP^stageContainer:::FP^loginContainer:::SE^passwordTxt::PX^6::PTR^0::IX^2::ITR^0", app1)).selectText(0, 0);
        (new GenieTextInput("SP^stageContainer:::FP^loginContainer:::SE^passwordTxt::PX^6::PTR^0::IX^2::ITR^0", app1)).input("1234");
        (new GenieDisplayObject("SP^stageContainer:::FP^loginContainer:::SE^loginBtn::PX^6::PTR^0::IX^4::ITR^0", app1)).click(42, 44, 1011, 218, 1090, 821, 3, false);
    }
}

Figure 3: Genie script


import junit.framework.TestCase;

import com.adobe.genie.executor.Genie;
import com.adobe.genie.executor.LogConfig;
import com.adobe.genie.executor.components.GenieDisplayObject;
import com.adobe.genie.executor.components.GenieTextInput;
import com.adobe.genie.genieCom.SWFApp;

public class MyTestSuite extends TestCase {

    Genie jean;
    SWFApp app1;

    protected void setUp() throws Exception {
        super.setUp();

        LogConfig logger = new LogConfig();
        logger.setLogFolder("log");
        jean = Genie.init(logger);
        app1 = jean.connectToApp("[object paMain]");
        jean.EXIT_ON_FAILURE = true;
        jean.EXIT_ON_TIMEOUT = true;
        jean.CAPTURE_SCREENSHOT_ON_FAILURE = false;
    }

    protected void tearDown() throws Exception {
        jean.stop();
        super.tearDown();
    }

    public void testCase_LogIn() {
        (new GenieDisplayObject("SP^stageContainer:::FP^loginContainer:::SE^userNameTxt::PX^6::PTR^0::IX^1::ITR^0", app1)).click();
        (new GenieTextInput("SP^stageContainer:::FP^loginContainer:::SE^userNameTxt::PX^6::PTR^0::IX^1::ITR^0", app1)).selectText(0, 100);
        (new GenieTextInput("SP^stageContainer:::FP^loginContainer:::SE^userNameTxt::PX^6::PTR^0::IX^1::ITR^0", app1)).input("147258369");
        (new GenieDisplayObject("SP^stageContainer:::FP^loginContainer:::SE^passwordTxt::PX^6::PTR^0::IX^2::ITR^0", app1)).click();
        (new GenieTextInput("SP^stageContainer:::FP^loginContainer:::SE^passwordTxt::PX^6::PTR^0::IX^2::ITR^0", app1)).selectText(0, 100);
        (new GenieTextInput("SP^stageContainer:::FP^loginContainer:::SE^passwordTxt::PX^6::PTR^0::IX^2::ITR^0", app1)).input("1234");
        (new GenieDisplayObject("SP^stageContainer:::FP^loginContainer:::SE^loginBtn::PX^6::PTR^0::IX^4::ITR^0", app1)).click();
    }

    public void testCase2() {
        ...
    }
}

Figure 4: Genie in JUnit


import junit.framework.TestCase;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import com.adobe.genie.executor.Genie;
import com.adobe.genie.executor.LogConfig;
import com.adobe.genie.executor.components.GenieDisplayObject;
import com.adobe.genie.executor.components.GenieTextInput;
import com.adobe.genie.genieCom.SWFApp;

public class MyTestSuite extends TestCase {

    Genie jean;
    SWFApp app1;
    WebDriver driver;

    protected void setUp() throws Exception {
        super.setUp();

        driver = new FirefoxDriver();
        driver.get("http://some.server.test/login/");

        LogConfig logger = new LogConfig();
        logger.setLogFolder("log");
        jean = Genie.init(logger);
        app1 = jean.connectToApp("[object paMain]");
        jean.EXIT_ON_FAILURE = true;
        jean.EXIT_ON_TIMEOUT = true;
        jean.CAPTURE_SCREENSHOT_ON_FAILURE = false;
    }

    protected void tearDown() throws Exception {
        jean.stop();
        driver.quit();

        super.tearDown();
    }

    public void testCase2() {
        // tests now have access to Selenium such as
        // driver.findElement(By...)
    }
}

Figure 5: Genie with Selenium


Just as in Selenium, you will need to use tricks to discover if elements are present, detect busy streamers, and so on.

Genie has methods for all elements, such as .isEnabled(), .isPresent(), and .isVisible(). So, for example, waiting for something like a busy streamer to go away is simple (figure 6).

XPath and PageFactory
Neither XPath nor PageFactory (a design pattern used in a framework to make discovery of elements simpler and more dynamic) is implemented in Genie. However a useful feature of the Genie browser gives Genie the ability to handle both: it can dump an XML representation of the entire application. The code to do this is shown in figure 7 and you can then search for GenieIDs using XPath as shown in figure 8.

The GenieIDs are guaranteed to be consistent from one run of your application to the next. However, it is possible for the developer to create a Flash application in which controls look the same but are instantiated on the fly, making the GenieIDs effectively dynamic. This can be a problem but PageFactory solves it by finding the GenieIDs on the fly.

To build your own PageFactory you will need to use the Java reflection API (java.lang.reflect) in your framework.
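
As a sketch of one way this might look (it is not part of Genie, and only one of many possible designs): a hypothetical @GenieName annotation carries the control’s name attribute, and a factory resolves each annotated field’s GenieID at run time from the application XML, using the same XPath lookup as figures 7 and 8.

import java.io.ByteArrayInputStream;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Node;

import com.adobe.genie.executor.components.GenieDisplayObject;
import com.adobe.genie.genieCom.SWFApp;

// Hypothetical annotation: the control's name attribute as it appears
// in the XML dump of the application (figure 7)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface GenieName {
    String value();
}

public class GeniePageFactory {

    // Fills every @GenieName field of type GenieDisplayObject on the page
    // object, resolving the GenieID on the fly; appXml is the dump obtained
    // with the code in figure 7.
    public static <T> T initElements(T page, SWFApp app, String appXml) throws Exception {
        for (Field field : page.getClass().getDeclaredFields()) {
            GenieName name = field.getAnnotation(GenieName.class);
            if (name == null || !field.getType().equals(GenieDisplayObject.class)) {
                continue;
            }
            String genieId = findGenieIdByName(appXml, name.value());
            field.setAccessible(true);
            field.set(page, new GenieDisplayObject(genieId, app));
        }
        return page;
    }

    // The XPath search of figure 8, wrapped as a helper
    private static String findGenieIdByName(String appXml, String controlName) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(appXml.getBytes()));
        XPath xpath = XPathFactory.newInstance().newXPath();
        Node node = (Node) xpath.evaluate(
                "//*[@name='" + controlName + "']", doc, XPathConstants.NODE);
        return node.getAttributes().getNamedItem("genieID").getNodeValue();
    }
}

A page class would then declare, for example, @GenieName("loginBtn") GenieDisplayObject loginButton; and call GeniePageFactory.initElements(this, app, appXml) in its constructor, where appXml is obtained with the figure 7 code.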

API automation with SoapUI
In many Flash projects the underlying API will be available to testing long before the UI materializes (assuming it ever does). It makes sense to test the API first.

The popular API testing framework SoapUI has limited support for AMF messages and no support for any flavour of the RTMP protocol. To test the APIs you will need to have HTTP enabled on the server: RTMP and HTTP can be enabled simultaneously. Although you may not be doing this in production, bear in mind that (i) you are not testing the protocol nor, probably, the server configuration; (ii) transmission via the RTMP protocol will be validated as part of your end-to-end test, which you will probably do through the UI. Right now we are concerned only with functional testing of the APIs.

There is no call discovery for AMF messages in SoapUI: nothing equivalent to reading in a WSDL for SOAP endpoints, or a WADL for REST endpoints. For AMF everything has to be done by hand. However SoapUI does provide other goodies: test scripting, Groovy extensions and XML parsing of responses. We will now see how to hook up AMF with SoapUI tests. There is a useful, short tutorial on this at http://soapui.org/AMF/calling-amf-services.html.

Method discovery
As there is no way to define an AMF endpoint in SoapUI, one just creates an empty project (unless the project also contains SOAP and/or REST endpoints) and a new test case. When you add an AMF step to your test case you will need to provide three pieces of information: the endpoint, the AMF call, and all the inputs (figure 9). If you are lucky you will be able to get these from a detailed technical specification provided by your developers. If you are like me, you will have to write any documentation you want yourself.

static void waitForStreamer(GenieMovieClip streamer, int seconds) {
    long rightNow = new Date().getTime();
    while (streamer.isVisible()) {
        if (rightNow + (seconds * 1000) < new Date().getTime()) {
            throw new TimeoutException("Page busy? Streamer still visible after " + seconds + " seconds.");
        }
    }
}

Figure 6: waiting for a busy streamer to go away

String appXml = SynchronizedSocket.getInstance().getAppXMLGeneric(app.name);
ByteArrayInputStream inputStream = new ByteArrayInputStream(appXml.getBytes());
DocumentBuilder docBuilder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
Element docElement = docBuilder.parse(inputStream).getDocumentElement();
XPath xpath = XPathFactory.newInstance().newXPath();

Figure 7: dumping an XML representation of the application

Node page = (Node) xpath.evaluate("//*[@name='" + fieldname + "']", pageNode, XPathConstants.NODE);
return page.getAttributes().getNamedItem("genieID").getNodeValue();

Figure 8: searching for a GenieID using XPath


AMF endpoint
This is made up of several parts as follows: {protocol}://{server}/{service}/messagebroker/amf. You will have to find out the parts in curly brackets from development. Remember that for SoapUI the protocol has to be http or https: rtmp will not work. For secure messaging the last part is sometimes “messagebroker/amfsecure”. Note however that this does not make the messaging secure: configured secure transmission (the “s” part of https) does.

Again, remember that for now we are only concerned with functional testing of the APIs and everything else just gets in the way of that.

AMF call
Now that we have an endpoint, we need to know the method calls. The tool BlazeMonster (http://sujitreddyg.wordpress.com/blazemonster/) can be used for, among other things, discovering them. For that we need only its “Existing Services” function. To use it, enter the web application’s root URL, that is the variable parts of the endpoint, and click “Load destinations”. If everything is configured correctly, BlazeMonster fills in the AMF endpoint URL automatically and all the method calls are loaded up (figure 10).

Note that BlazeMonster is not perfect. It shows all the method calls on the server, whether they are public (available to be called) or private (not exposed as callable methods). Later on, when you are actually making the call using SoapUI, if you get a response that such a method does not exist even though BlazeMonster says it does, check with your developers whether that particular method is public or private.

You can now determine the AMF call. It is {destination name}{method} as those are shown in BlazeMonster: I usually have both BlazeMonster and SoapUI open on facing screens, and just copy-paste everything between the two windows.

Now you can click on the “run step” button in SoapUI, and should get an error response saying that you did not supply any inputs (unless that call does not require any).

Simple inputs
These are simply added to the input window of SoapUI. In the example shown, the call is named “findCustomerBySSN” and it takes only one input: probably the SSN (social security number). I say “probably”, because BlazeMonster tells us only that there is one argument and that it is of type java.lang.String. It is able to read only the method signature so knows the number and type of inputs, but not what they represent. We can test our guess from within BlazeMonster using the “Invoke selected” button.

Unfortunately, things are seldom that simple! If the AMF call takes more than one argument, guessing what they are becomes harder. Even if you can do that, guessing which is which becomes very hard, especially if more than one of them is of the same type (figure 11).

The order in which the parameters are displayed now becomes important. You will need to find out where in the source code repository the interface to the service is, then read the code and find the method by name, eg “findCustomerByName(String surname, String givenName)”. Then when creating the AMF step in SoapUI, give the inputs the same or similar names in the same order (figure 12). The name is not actually important, only the order is.

Figure 9: adding an AMF step
Figure 10: method calls in BlazeMonster

Figure 11: two arguments of the same type: which is which?
Figure 12: naming inputs



Complex inputs
Things now (as always in testing) get worse. We are accessing via AMF the underlying methods, right in the technology layer. That means the arguments of some of the calls are probably not simple numerical or string variables. They may be java.util.Collection, or a custom object. Custom objects are easy to identify (their type starts with com.<your_company_name>) but difficult to deal with. Selecting a method with these argument types in BlazeMonster and clicking the “Invoke selected” button gives the error message “Selected operation expects object types which are not supported by this application. Generate code for invoking this operation”.

And so we return to Groovy, the scripting language that can be used to generate the inputs for the call within SoapUI. Luckily for non-Groovers, Groovy is source-compatible with Java. You can write simple Java code to create the needed object(s) and it should work in your SoapUI AMF step with very little modification.

First, create a new AMF step in the way already described: enter the endpoint and AMF call and name the input. There could be more than one input, but in that case you should have a discussion with development about whether they would want to consider redesigning this call.

Next, in BlazeMonster, select the appropriate method and click on “Generate code”. A new window appears (figure 13) containing AS3 (ActionScript) VO (Value Objects) code for Java classes.

Figure 13: AS3 VO code and custom objects

1.  package com.company.dto
2.  {
3.      [Bindable]
4.      [RemoteClass(alias='com.company.dto.CustomerDTO')]
5.      public class CustomerDTO
6.      {
7.          public var dateOfBirth:Date;
8.          public var givenName:String;
9.          public var id:Number;
10.         public var surname:String;
11.
12.         public function CustomerDTO()
13.         {
14.             super()
15.         }
16.     }
17. }

Figure 14: AS3 VO code for a specific custom object

1. import com.company.dto.CustomerDTO
2. def scriptCustomer = new CustomerDTO()
3. scriptCustomer.givenName = "Peter"
4. scriptCustomer.surname = "Parker"
5. scriptCustomer.dateOfBirth = new GregorianCalendar(1962, Calendar.AUGUST, 15).getTime()
6. parameters['customer'] = scriptCustomer

Figure 15: Groovy script


The lower pane lists the input and output custom classes. Select the class you are trying to create and click on “Generate VO”. This opens a new tab with more AS3 code, which gives a lot of hints about what is needed. Copy-paste it into the script pane of the AMF call in SoapUI. An example of what it looks like is shown in figure 14.

Now we will convert this to Groovy code that SoapUI can understand. I usually start by enclosing all the code in a comment block. Now change lines 4 and 5 of the AS3 code (figure 14) to lines 1 and 2 of the Groovy script, as shown in figure 15.

Next, we need the jar file that defines the object: place this in SoapUI’s bin/ext folder and restart SoapUI to load it into the session. Now click the run button. If you get a dialog saying something like “parameters {} amfHeaders {}” you’re OK! If you get errors, don’t move on until you resolve them. Then, continue building the script inputs (lines 3-5 in figure 15, replacing lines 7-9 of figure 14).

You will have to find out, from the source code (which may not be trivial) or from developers, which variables are required: in this example, we’ll assume id is optional. Note also that our three new lines do not introduce the variables in the same order as the old ones: this is fine because the Groovy script assigns each property by name, so the order does not matter.

SoapUI cannot handle the situation where one of the variables is another custom object. Even if you do get all the pieces lined up, the call will probably fail. I don’t have any way around this brick wall at present except that, again, the design might be questioned.

Once the object has been built up, assign it back to the variable name (line 6 in figure 15) and run the script. If you see a dialog showing your inputs (figure 16), you are ready to run the test step.

Figure 16: call inputs confirmed

<flex.messaging.io.amf.ASObject>
  <dateOfBirth>1962-08-15 00:00:00.0 PDT</dateOfBirth>
  <givenName>Peter</givenName>
  <surname>Parker</surname>
</flex.messaging.io.amf.ASObject>

Figure 17: the response as XML (simple objects)

<flex.messaging.io.amf.ASObject serialization="custom">
  <string>dateOfBirth</string>
  <date>1962-08-15 00:00:00.0 PDT</date>
  <string>givenName</string>
  <string>Peter</string>
  <string>surname</string>
  <string>Parker</string>
</flex.messaging.io.amf.ASObject>

Figure 18: the response as XML (custom object)


Complex responses
The AMF response is a binary object. You can see a representation of sorts on SoapUI’s “Raw response” tab. However SoapUI also goes to great lengths to give you a more familiar representation: “Response as XML”.

For simple response objects it usually does a pretty good job of guessing what belongs where so the output is quite readable (figure 17). For example, givenName could be validated using the XPath //givenName.

For complex custom objects, the response can be more verbose (figure 18). In this case an XPath axis is needed, eg //string[text()='givenName']/following-sibling::*[1].
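
If you want to experiment with such expressions outside SoapUI before wiring them into an assertion, the standard Java XPath API evaluates them in exactly the same way. Here is a minimal sketch, with the figure 18 response inlined as a string; the class name and hard-coded XML are for illustration only.

import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

public class ResponseXPathCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical "Response as XML" for a custom object (see figure 18)
        String response =
                "<flex.messaging.io.amf.ASObject serialization=\"custom\">"
              + "<string>dateOfBirth</string><date>1962-08-15 00:00:00.0 PDT</date>"
              + "<string>givenName</string><string>Peter</string>"
              + "<string>surname</string><string>Parker</string>"
              + "</flex.messaging.io.amf.ASObject>";

        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(response.getBytes()));

        // The axis expression from the text: the element immediately after
        // the <string> node whose text is 'givenName'
        String givenName = (String) XPathFactory.newInstance().newXPath().evaluate(
                "//string[text()='givenName']/following-sibling::*[1]",
                doc, XPathConstants.STRING);

        System.out.println("givenName = " + givenName);   // expected: Peter
    }
}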

Dealing with multiple architectures
Unfortunately this method for handling complex inputs will not work for .NET logic tiers, not least because SoapUI cannot import DLLs and serialization of objects is done differently by .NET and Java.

I have seen some workarounds tried: for example publishing the information on a special JMS test queue; duplicating information in the message headers; exposing methods as a SOAP call. All achieved some success but all require additional development work. If you will be testing a Flash application with a .NET logic tier, the only suggestion I can offer at present is that you address this issue as urgently as possible.

Mark Lehky has worked in test automation in multiple industries since 1999. He blogs on the subject at http://siking.wordpress.com and keeps in touch via LinkedIn (http://linkedin.com/in/marklehky).
