Automated Software Testing Magazine, April 2013


Transcript of Automated Software Testing Magazine, April 2013

Page 1

MAGAZINE
April 2013  $10.95

Test Automation on Embedded Systems: The unique challenges associated with "embedded" automation

The Right Tool: Building a Mobile Automation Testing Matrix

Automated Software Testing
An Automated Testing Institute Publication - www.automatedtestinginstitute.com

Automated Testing in Agile Development: Test automation when automation is not your only task

Navigating Continuous Change and Developer Tools

A Tester in an Ocean of Developer Tools: Making the transition from waterfall to agile

Page 2

TestKIT OnDemand

OnDemand Sessions: Anywhere, anytime access to testing and test automation sessions

If you can't be live, be virtual

>>>> Available anywhere you have an internet connection <<<<
>>>> Learn about automation tools, frameworks & techniques <<<<
>>>> Explore mobile, cloud, virtualization, agile, security & management topics <<<<
>>>> Connect & communicate with testing experts <<<<

ONDEMAND.TESTKITCONFERENCE.COM

Page 3

Automated Software Testing, April 2013, Volume 5, Issue 1


Contents

Columns & Departments

Editorial  4
Navigating Continuous Change and Development Tools: the challenge of "fast-paced" environments.

Authors and Events  6
Learn about AST authors and upcoming events.

Shifting Trends  8
Automation Ups and Downs: analyze trends from the ATI Honors.

Open Sourcery  10
Introducing... New Mobile Source: mobile tools in the ATI Honors.

I 'B'log to U  36
Read featured blog posts from the web.

Go on a Retweet  38
Read featured blog posts from the web.

Hot Topics in Automation  40
The Right Tool for the Job: Building a Mobile Automation Testing Matrix.

The AST Magazine is a companion to the ATI Online Reference. http://www.astmagazine.automatedtestinginstitute.com

Continuous Change and Developer Tools

A test automator in a "fast-paced environment" is faced with little time for automation, unclear information about non-standard, non-GUI systems, and no comprehensive tool for dealing with them. This issue is dedicated to approaches necessary for successful automation in these types of environments, including close coordination with developers and the use of tools that may not traditionally be used by testers.

Features

A Tester in an Ocean of Developer Tools  12
This article describes one team's journey from a waterfall environment to an agile-like environment where testers were more greatly exposed to developer processes and tools. By Michael Albrecht

Overcoming Challenges of Test Automation on Embedded Systems  18
This article addresses approaches for effectively adjusting your automation techniques when faced with a non-conventional system such as an embedded system. By David Palm

How Automated Testing Fits into Agile Software Development  28
This article offers a roadmap for test automation implementation when test automation is not your only task. By Bo Roop

Local Chapter News  16
Local chapter training announcement.

Page 4


Working in a "fast-paced environment" is often a daunting task, particularly for a software quality engineer. There are often no straightforward answers for anything; system requirements are spread out in a loose collection of stories contained in a tool organized largely by build, sprint, and/or release. And just when you think you've got things figured out, there is another layer to be peeled back, revealing some fundamental aspects of the system that you, to that point, were never aware of.

The task is even more challenging for a software test automator. Test automators are often faced with non-traditional or non-GUI systems and a lower than normal tolerance for investment compared with their "slower-paced" counterparts - which is scary given the fact that even these so-called "slower-paced" projects typically have a low tolerance for investment themselves. This lower tolerance for investment may find its roots in the fact that "fast-paced" projects are often pretty accepting of whatever the finished product is. Given the frequent delivery of updates, little hesitation is given to moving a feature back several releases. As long as the release contains something of value, no one bats an eye at delaying functionality or even the testing of functionality. In addition, processes are often left undocumented, which makes it difficult to measure the effectiveness of those processes, thus also making it difficult to make a case for investment in process improvement and tools that don't directly affect the software developers' perception of convenience and efficiency.

A test automator is therefore faced with little time for automation and unclear information about non-standard, non-GUI systems, with no comprehensive tool for dealing with them quickly or effectively. Test automation implementation in this type of situation often relies on a little ingenuity, close coordination with developers, and the use of tools that may not traditionally be used by testers. This issue of the magazine focuses on automation under these circumstances.

The first feature, entitled "A Tester in An Ocean of Developer Tools" by Michael Albrecht, describes one team's journey from a waterfall environment to an agile-like environment where testers were more greatly exposed to developer processes and tools. Next, we are plunged even further into an environment that is often less conventional for software testers. Entitled "Overcoming the Unique Challenges of Test Automation on Embedded Systems", this feature written by David Palm addresses how to effectively adjust your automation techniques when faced with a non-conventional system such as an embedded system. Finally, we address adjusting your automation approaches to fit into agile development where multi-tasking is critical. In this article, Bo Roop offers a roadmap for test automation implementation when test automation is not your only task.

Editorial

Navigating Continuous Change and Development Tools
By Dion Johnson


Page 5

ATI Automation Honors

Celebrating Excellence in the Discipline of Software Test Automation

5th Annual

Nominations Begin April 8th!
www.atihonors.automatedtestinginstitute.com

Page 6


Managing Editor: Dion Johnson

Contributing Editors: Donna Vance, Edward Torrie

Director of Marketing and Events: Christine Johnson

A PUBLICATION OF THE AUTOMATED TESTING INSTITUTE

CONTACT US AST Magazine [email protected]

ATI Online Reference [email protected]

Bo Roop is employed as a senior software quality assurance engineer at the world's largest designer and manufacturer of color measurement systems. He is responsible for testing their retail paint matching systems, which are created using an agile-based software development methodology. Bo helps gather and refine user requirements, prototypes user interfaces, and ultimately performs the software testing of the final product. Testing is a passion of Bo's, and he is involved with local software groups as well as a few online forums. He's an advocate for software that meets the customer's needs and expectations and can frequently be heard trying to redirect teams toward a more customer-centric point of view.

Who’s In This Issue?

Darren Madonick is a mobile testing specialist at Keynote Systems. He has six years of experience in the mobile industry, along with a lifetime of experience as a technology enthusiast. As a mobile evangelist, Madonick works closely with Keynote customers to find the right mix of products and services that meet their current needs, as well as future needs for mobile testing and development. He spends many hours both remotely and on site with many customers in various verticals to help plan their mobile testing and development efforts. Madonick's experience in this industry has enabled him to work directly with many Fortune 500 companies on a regular basis, and helps these organizations make important decisions regarding the future of their mobile enterprise.

Authors and Events

Automated Software Testing

ATI and Partner Events

Now Available: TestKIT OnDemand Virtual Conference
http://ondemand.testkitconference.com

David Palm is lead test engineer at Trane, a leading global provider of indoor comfort systems and services and a brand of Ingersoll Rand. After gaining 18 years of experience in embedded software development in the materials testing, automotive, hydraulics, and heating, ventilation and air conditioning industries, Palm transitioned to embedded software testing in 2005. He has extensive experience managing the testing process for complex and interconnected embedded control systems, from concept to production. His greatest expertise is in applying test automation to embedded control systems.

The Automated Software Testing (AST) Magazine is an Automated Testing Institute (ATI) publication.

For more information regarding the magazine visit http://www.astmagazine.automatedtestinginstitute.com

Michael Albrecht has been working within the test profession since the mid 90s. He has been a test engineer, test manager, technical project manager and test strategy/architect manager. From late 2002, Michael has been working intensely with test management, test process improvement and test automation architectures. He has also been teaching ways to adapt testing to agile system development methodologies like Scrum.

June 17-18, 2013: ATI Europe TABOK Training
[email protected]

September 23-25, 2013: TestKIT 2013 Conference
http://www.testkitconference.com

Page 7


TestKIT Conference 2013

www.testkitconference.com

Crowne Plaza Hotel, Arlington, VA

Testing & Test Automation Conference

The KIT is Koming... Back

Save the Date! September 23-25, 2013

Page 8


Shifting Trends

Automation Ups and Downs: The 4th Annual ATI Automation Honors reveal contemporary tool sentiment

GorillaLogic is no stranger to victories in the ATI Honors, with its FlexMonkey tool winning first place in the Best Functional Automated Test Tool – Flash/Flex subcategory in both 2010 and 2011, while also being named the runner up in the Best Functional Automated Test Tool – Overall subcategory in 2011, coming in just behind the ever popular Selenium. This organization seems to have truly hit its stride, however, in the 4th Annual awards with FlexMonkey's successor tool, known as MonkeyTalk. MonkeyTalk not only picked up where FlexMonkey left off by winning the Best Functional Automated Test Tool – Flash/Flex subcategory, it also swept all subcategories in the newly added Best Mobile Automated Test Tool category. This included the Android, iOS, and Overall subcategories.

Google lost. How many times do you get to say that? Well, in this year's ATI Honors, this is an accurate statement, as Google lost the crown it held for two years in the Best Open Source Unit Automated Test Tool – C++ subcategory. ATF, a tool that entered the fray as the runner up in this category last year, pulled an upset by beating Google for the number one spot. I'm sure Google is not too concerned by this minor setback (if they are aware of it at all), but our community has spoken and made their voices clear.

There seems to be no definitive favorite in the Best Open Source Functional Automated Test Tool – Java/Java Toolkits subcategory. FEST, a finalist that made its first appearance this year, has taken the top spot, but it better watch its back, because there has been a different winner each year since this subcategory was introduced. The jury is still out on whether the community will eventually lock into a long-term favorite, but for now, FEST is the champion.

Much like the Best Open Source Functional Automated Test Tool – Java/Java Toolkits subcategory, the Best Commercial Functional Automated Test Tool – Web subcategory has also experienced its fair share of turnover. This seems largely due to the in-and-out nature of QuickTest Professional (QTP), a.k.a. HP Functional Tester. Since this tool seems to only have an eligible release every other year, it has only been named a finalist in the awards every other year. In years that it has been a finalist, it has dominated this and other subcategories. Years that QTP has not been a finalist seem to be open season for multiple other tools to shine. The 2nd Annual awards saw SilkTest win this subcategory in the absence of QTP. This year, it saw newcomer Automation Anywhere win the subcategory. Congratulations to Automation Anywhere… at least for now.

This is the first year that the top performance tool LoadRunner has not been a finalist – due to the fact that it had no eligible release.

SilkPerformer was the clear beneficiary of the high profile absence. SilkPerformer has had a steady ascent to the top over the years, coming in as the Runner Up to LoadRunner in the Best Commercial Performance Automated Test Tool – Overall subcategory in the 2nd Annual and 3rd Annual awards. With LoadRunner out of the picture, SilkPerformer was unrelenting in its quest for number one, and it finally achieved the spot this year.

The ATI Honors tells us a lot about current and future tool trends.

Page 9


"It's OK To Follow the Crowd"

Crowdamation: Crowdsourced Test Automation
Inquire at [email protected]

It will offer the flexibility to use a tool of choice (open source and commercial), have teams operate out of different locations, address the challenges of different platforms introduced by mobile and other technologies, all while still maintaining and building a cohesive, standards-driven automated test implementation that is meant to last.

Page 10


Introducing... New Mobile Source: Mobile Open Source Tools that Made Their First Appearance in the ATI Honors

Mobile automated test tool categories were added for consideration during the 4th Annual ATI Automation Honors, which brought several new tools to the forefront for recognition in the awards. These tools are highlighted in this article.

MonkeyTalk

While MonkeyTalk has not technically been in the ATI Honors before, it is not totally new to the awards. MonkeyTalk was in the awards under one of its former names: FlexMonkey. Its other former name was FoneMonkey. FoneMonkey and FlexMonkey have now combined to form MonkeyTalk, a free and open source, cross-platform functional testing tool from GorillaLogic that supports test automation for native iOS and Android apps, as well as mobile web and hybrid apps. Its name change was apparently well received, as it claimed the top prize in each of the three mobile test subcategories.

Open Sourcery

As the testing community gears up for the 5th Annual ATI Automation Honors, let's take a look at the current open source finalists and winners that made their first entry into the Honors during the 4th Annual Awards.

Table 1: Best Open Source Mobile Test Tool Finalists

Page 11


Frank

The dog inside of a bun trotted into the ATI Automation Honors as the runner-up in the Best Open Source Mobile Automated Test Tool iOS subcategory. Frank is a tool for writing structured acceptance tests and requirements using Cucumber and having them execute against an iOS application.

KIF (Keep It Functional)

Ever heard of K.I.S.S., which stands for "Keep It Simple, Stupid"? It is a principle that asserts the power of simplicity in system design. A test tool came into the ATI Honors with a name that may be inferred to follow a similar principle, but instead of just keeping things simple, this framework touts the power of keeping things functional. KIF, which stands for Keep It Functional, is an iOS integration test framework that leverages undocumented iOS APIs for easy automation of iOS apps.

Zucchini

Let's see if we can link Frank, one of the previously mentioned finalists, with our next finalist, Zucchini, through a series of associations. While discussing the Frank automated tool, a tool by the name of Cucumber was mentioned. As you are probably already aware, a cucumber is not only a tool, but the name of a food as well. A food that is often mistaken for a cucumber is a zucchini.

Tada!

Maybe this association is how our next tool was able to make its first appearance in the ATI Honors. Or maybe it’s because the community likes the way this tool uses natural language for interacting with and developing automated tests for iOS based applications.

Calabash

Most of the mobile tools in the ATI Honors either supported iOS or Android, but not both. Although only nominated in the Android category, Calabash is one of the few tools that supports both mobile platforms. In addition, like a couple of the other tools, this LessPainful-supported tool also supports Cucumber for developing automated scripts.

Robotium

The final mobile tool that entered the ATI Honors in the Best Open Source Mobile Automated Test Tool Android subcategory is Robotium. With Robotium, test automators can write functional, system and acceptance test scenarios for testing multiple Android activities. Robotium was not only a finalist in the Android subcategory, but was also the Runner-up in the Overall subcategory behind MonkeyTalk.


Page 12


A Tester in an Ocean of Developer Tools

By Michael Albrecht

For years we'd been walking in the protected world of waterfall projects, far away from requirement negotiations and acceptance tests, and suddenly awoke to a project with short iterations (two weeks) and in the lap of the customer.

"If you can't beat them, join them."

Page 13


Imagine an organization used to a slow pace, annual deliveries, always with a Graphical User Interface (GUI), and very little test automation. Suddenly that organization is faced with short iterations (two weeks), no GUI, and very high performance requirements. This was the situation my test team faced, forcing us to deal with the fact that our testing approaches were no longer going to be effective. As a result, we formed a small, technical group to assess our approaches and identify new quality assurance tactics. This article follows our journey to agility prior to the modern popularity of agile.

Page 14


The birth of new tactics

The first order of business was to find a way of testing without a GUI. Scary! Due to a lack of tool knowledge, the QA people turned to the developers to find useful tools and experience. To be able to proceed, the project created a few simple rules:

1. Sit together

2. Hold daily status meetings

3. Learn some basic programming (same language as the developers)

4. Select tools already used by the developers for unit testing if possible

5. Create semi-automatic and automatic tests

6. Learn from the past

7. Get going!

Developer automation and tools

As is often the case, an agreement with the customer was made without talking to the test team. The delivery cycles were built strictly upon the developers' delivery capacity.

In addition, the development team began relying on APIs and open source tools for implementing a Test Driven Development (TDD) approach to product development. To adjust to the new development practices and software, the test team identified pertinent tools for low-level test automation, including tools for (a sketch of one such check follows the list):

• XML schema validation

• SOAP testing

• “Check in” and nightly test execution
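As an illustration of the first item, here is a minimal sketch of an XML schema validation check using Java's standard javax.xml.validation API; the file names are hypothetical, and the article does not say which specific tool the team actually used.

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.File;

public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical file names; substitute the schema and the message under test.
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("order.xsd"));
        Validator validator = schema.newValidator();

        // Throws an exception with a descriptive message if the document is invalid.
        validator.validate(new StreamSource(new File("order-response.xml")));
        System.out.println("order-response.xml conforms to order.xsd");
    }
}
```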

Customer requirements and test cycles

For the first time in my life I found the customer requirements very detailed, but at the same time flexible. The requirements were created as use cases from an actor's point of view.

All requirements started with a picture describing the basic flow, followed by short informative text explaining the flow throughout the system. The requirements were divided into groups:

• Basic flow
• Alternative flows
• Error flows
• XML structure expected from the system

Short iterations and continuous delivery

As I mentioned before, a complete system delivery had to take place every fortnight, but we actually delivered every week. A typical fortnight cycle is illustrated in Figure 2. Since the project only had one team, the team focus shifted over time during this two-week cycle.

The birth of the technical automator

The absence of a GUI increased the need for more technical skills to accompany excellent domain knowledge, but finding all that knowledge in one person is very hard. So we discarded the traditional project approach, and got everyone together (both testers and developers). In our team we got two testers with some development skills, and one tester with very good business knowledge.


Figure 1: Meeting Notes

Page 15


The lack of development skills within the test group was cured by getting expert help from the developers within the team.

Traditional testers seeking usable tools!

With little time and money, we abandoned our thoughts of acquiring a new tool. The developers had been using simple, open-source tools for some time in other projects, so we just started to use the same tools for our test scenarios.

Performance test at no cost

The company had no earlier experience with performance tests, and the customer now had very precise and demanding transaction time requirements. No environment except for production was sufficient for executing the tests. Running tests in production was not a big issue since we agreed to limit the performance tests to search functions, and not updates. Doing the tests at night limited outside interference. The challenge was in building our own performance tool. Once again we could thank team spirit for solving this. The developers implemented extended logging in databases and APIs, together with a simple GUI to control parameters such as the number of concurrent users, time intervals between executing tests, and sequence order. As testers, we developed functional tests and scenarios that could be run independently and as part of load/performance scenarios. The test cases were created both in Java code and as saved batches in our SOAP test tools. The tricky part came when we wanted to measure transaction times throughout the system during performance test execution. In Excel, we collected data from the database, API and GUI logging, linked each transaction to its log entries, and created a macro that calculated the duration for each and every one. While we did not spend any money on tools, we did spend a LOT of time. In our situation it was easier to explain (or hide) time costs than tool costs.
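The correlation step the team did in Excel can also be sketched in code. The following is a minimal, hypothetical Java version that joins database, API and GUI log entries by transaction ID and reports the end-to-end duration; the simplified log format and field layout are assumptions, not the team's actual implementation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TransactionTimes {
    // Each entry: "txnId,layer,epochMillis" - a hypothetical, simplified log line format.
    public static Map<String, Long> durations(List<String> logLines) {
        Map<String, Long> first = new HashMap<>();
        Map<String, Long> last = new HashMap<>();
        for (String line : logLines) {
            String[] parts = line.split(",");
            String txnId = parts[0].trim();
            long ts = Long.parseLong(parts[2].trim());
            first.merge(txnId, ts, Math::min);   // earliest timestamp seen for this transaction
            last.merge(txnId, ts, Math::max);    // latest timestamp seen for this transaction
        }
        Map<String, Long> result = new HashMap<>();
        for (String txnId : first.keySet()) {
            result.put(txnId, last.get(txnId) - first.get(txnId));
        }
        return result;
    }

    public static void main(String[] args) {
        // Merged entries from the database, API and GUI logs (invented sample data).
        List<String> merged = List.of(
            "T1,gui,1000", "T1,api,1040", "T1,db,1075",
            "T2,gui,1200", "T2,db,1450");
        durations(merged).forEach((txn, ms) ->
            System.out.println(txn + " took " + ms + " ms end to end"));
    }
}
```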

The conclusions of a success story

Teamwork, teamwork and once again teamwork – combined with continuous improvements and automated test batches – resulted in a successful test effort. Being forced to use the same tools as the developers was a blessing from above, but we probably would've been better off without creating our own performance tool. It was a mess from start to end. I have several other lessons learned; my top three are:

1. Work as a team

2. Use the same tools

3. Involve the customer in continuous feedback loops.

Retrospective

Figure 2: Two-Week Cycle


Page 16


Latest from the Local Chapters: Local Chapter Training Announcement

Local Chapter News

Training Announcement!

ATI Europe is organizing a TABOK training class in Rotterdam from June 17-18! If you are interested in this training please contact us at [email protected]


Page 17


ATI Local Chapter Program

ATI - Meeting Local Needs In Test Automation

ATI's Local Chapter Program is established to help better facilitate the grassroots, global discussion around test automation. In addition, the chapter program seeks to provide a local base from which the needs of automation practitioners may be met.

• Enhance the awareness of test automation as a discipline that, like other disciplines, requires continuous education and the attainment of a standard set of skills
• Offer training and events for participation by people in specific areas around the world
• Help provide comprehensive, yet readily available resources that will aid people in becoming more knowledgeable and equipped to handle tasks related to testing and test automation

Start a Local Chapter Today Email contact(at)automatedtestinginstitute.com to learn more

www.automatedtestinginstitute.com

Page 18


Overcoming the Unique Challenges of Test Automation on Embedded Systems

Challenges of, techniques for, and return on investment from automation of embedded systems

By David Palm

Test automation on an embedded system presents a unique set of challenges not encountered when automating tests in more conventional computing environments. If these differences are recognized and managed, the benefits of such automation—seen in terms of both expanded test coverage and time savings—can be great.

Speaking broadly, an embedded system is a computer system designed to interface with, and control, some sort of electromechanical device(s). That amalgamation of computing power with an interface to external devices creates special challenges when it comes time to test the system software. Most software shares certain potential anomalies in common: incorrect logic, math, algorithm implementation, program flow [branching, looping, etc.], bad data, data boundary issues, initialization problems, mode switching errors, data sharing, etc. Techniques to discover these software anomalies are well documented in the software testing field. Embedded systems, however, are unique.

Page 19


Tools, Interfaces and Techniques for Automation on Embedded Systems

Page 20


What Makes Test Automation on Embedded Systems Unique?

Embedded systems introduce many factors that can result in anomalous system behavior. These factors include:

• Processor loading
• Watchdog servicing
• Power modes (low, standby, sleep, etc.)
• Bad interfacing to external peripherals
• Peripheral loading (e.g. network traffic, user interface requests)
• Signal conditioning anomalies (e.g. filtering)
• Thread priority inversion
• Noise conditions

While it is necessary to address more conventional software defects, it is also necessary to consider these additional factors when creating tests. Otherwise the test coverage will be inadequate and the system will likely ship with an unacceptable number of potentially serious defects.

While most, if not all, of these obstacles can be overcome—given enough time and resources—the fact remains that addressing them requires effort above and beyond what would be required in a more conventional computing system. Test automation on an embedded system requires three things: special tools, a customized interface or test “harness” between the tester and the system under test, and special automation techniques to cover not only common software defects but also those that are unique to embedded systems.

Special Tools

First, choose a tool for test automation on an embedded system. This tool should include provisions to manipulate physical analog and binary inputs and outputs interfaced to the system being tested. And often an embedded system will utilize one or more communications protocols—the automation tool will need to be able to support these as well. This may, in fact, require a separate automation tool.

For example, the tester might evaluate the portions of code that manipulate hardware input/output (I/O) using one automation tool and then utilize a different automation tool to test portions of code that communicate using communications protocols such as BACnet or ModBus. The ideal situation, however, is when a given automation tool can handle all of the system inputs and outputs—whether hard wired or communicated—together.

In a similar vein, if the embedded system includes a user display it may be possible to automate here as well, but frequently this will require a separate tool specifically designed to test user interfaces.

Debugging an automated test script for an embedded system presents many of the same challenges as debugging the system software. So the same kind of tools that the software developers use to manipulate system inputs and view the outputs in real-time will be needed.

An automation tool must also produce manageable and maintainable test artifacts. Test automation is, after all, just software created to test software, so it runs into many of the same maintenance difficulties faced in more conventional software development. Specifically, there are a number of test automation tools that utilize graphical programming languages. These can be extremely useful for rapid prototyping and easy comprehension of specific test steps. But more than one test developer has found that once these graphical programs grow beyond a certain size, the task becomes daunting even for the original developer - let alone somebody else - to understand, modify and extend.

The way inputs and outputs are handled in the test tool should be abstracted from their particular implementation in hardware. A test script should not "care" whether a temperature setpoint, for example, comes from a thermocouple, a thermistor, or an RTD. It should not "care" if a communicated value comes from a BACnet network, a LAN, or the Internet. Otherwise, a change in system implementation will break all of the scripts.
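As a rough illustration of that abstraction, a test script might talk only to a generic interface, with the hardware-specific sources plugged in behind it. This is a minimal sketch; the class and method names are hypothetical and not taken from the article or any particular tool.

```java
// A hypothetical abstraction layer: scripts read a value without knowing its hardware source.
public class SetpointCheck {

    interface TemperatureSource {
        double readCelsius();
    }

    // Hardware-specific sources can be swapped without touching the test scripts.
    static class ThermocoupleSource implements TemperatureSource {
        public double readCelsius() { /* read and linearize the thermocouple channel */ return 21.5; }
    }

    static class BacnetSource implements TemperatureSource {
        public double readCelsius() { /* fetch the communicated value from the network */ return 21.5; }
    }

    // The script only cares about the value, not how it was acquired.
    static boolean withinTolerance(TemperatureSource source, double expected, double tolerance) {
        return Math.abs(source.readCelsius() - expected) <= tolerance;
    }

    public static void main(String[] args) {
        TemperatureSource sensor = new ThermocoupleSource();   // or new BacnetSource()
        System.out.println("Setpoint in range: " + withinTolerance(sensor, 21.0, 1.0));
    }
}
```

If the system implementation later changes from a thermocouple to a communicated value, only the source class changes; the scripts that call withinTolerance stay intact, which is the point the article makes.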

It is also useful for post-test analysis if the tool bundles both the script and the results from a specific run of that test and archives them in a single file. This eliminates any confusion that might arise as the test script is updated or expanded—there will always be a record of exactly what test steps yielded a given set of results.

A number of commercial tools on the market cover some or all of these criteria. But the embedded test automation market has some notable gaps and could be better served with specialized tools.

Page 21



Special Interfaces

Because embedded systems represent an amalgamation of a computer system with external devices, a complete test automation system will require some sort of interface or "harness" between the automated test tool and the system under test.

Developing this test harness can be both complex and expensive. This time and monetary cost has to be factored into the project in order to get an accurate test schedule and return-on-investment calculation. It is necessary to work closely with both software and hardware engineers to design this test harness, particularly if expertise in those disciplines is insufficient.

Be sure to consider one important, overarching principle before planning and beginning work: An automated test harness should not, if at all possible, require any special “hooks” in the software or any special modifications to the hardware. Both software “hooks” and hardware modifications automatically mean that what is being tested is not the same as what the customer will be using.

Special software "hooks" add overhead and therefore affect the performance of the system under test. They also can result in a Catch-22—if the hooks have to be taken out of the software just before shipment, the software has been changed in a fundamental way while the ability to test it has been lost.

And hardware modifications to facilitate interfacing to an automated test system mean that standard, production hardware cannot be used for tests. This can open the door to shipping the product with subtle defects that appear on production hardware but not on the modified system. They also require spending precious time and money acquiring and modifying hardware for use in the test system. This can become especially burdensome if the hardware itself is going through numerous revisions.

Sometimes software “hooks” and hardware modifications cannot be avoided, and the payoff may be more than sufficient to justify their use—as long as the potential pitfalls are fully understood. But in general, try to avoid these technical compromises.

Here are some more challenges that may be encountered when designing a test harness for embedded automation:

• High voltages and currents in the system require due attention to the safety of both human beings and the system under test.

• The interface to each input or output from the system under test may need to be conditioned in order to interface with the available test hardware. For example, an analog voltage may need to be divided down before it is applied to an analog input, or there may need to be optical isolation on some or all connections between the test harness and the system under test.

• Non-linear sensors such as thermocouples can be notoriously difficult to mimic, especially if a very high degree of accuracy is necessary. Achieving accuracy to ± 0.5 ºC over the entire operating range may not be too difficult, but 0.01 ºC is probably going to be very difficult.

• Presenting a system with a simple DC voltage (e.g. 0-10 VDC) or current (e.g. 4-20 mA) is not difficult with off-the-shelf hardware. But presenting it with high voltage, variable resistance, or variable capacitance will be significantly more difficult and will likely require some custom hardware development.

• End-points and extreme values may be difficult to reproduce with the test harness. For example, when using a simple resistor voltage divider to condition an analog output to interface with an analog input, it often is not possible to drive the input all the way to its extremes (especially on the high side) to simulate a “shorted” or “open” input condition.

• Complex and fast communications protocols are a challenge to automate.

• User intervention is often still necessary via key pads, touch screens, etc. You can automate these things, but it may not be cost-effective. On the other hand, the user interface may be the only part of an embedded system that can be cost-effectively automated, and this may be well worth doing.

• While the subsystems may be manageable on a case-by-case basis, the ability to service all of the system inputs and outputs simultaneously can require a prohibitive amount of processing power in the automated test tool.


Page 22



Page 23


Page 24


That last point brings up yet another factor that must be considered when designing a complete embedded test automation system. The automation system will have to run fast enough to sample inputs at a sufficient rate and assert outputs in a timely fashion. What that means varies from system to system.

In the HVAC industry, for example, being able to respond within one second is usually quite sufficient, with many events taking place in the 5 to 10 second range. This makes test automation very feasible. On the other hand, something like an automobile engine controller or a flight guidance system may need to process inputs hundreds or thousands of times per second and assert outputs within milliseconds of detecting a given condition. A test automation system capable of that level of performance may be prohibitively difficult and expensive.

But even faced with such a scenario, can useful testing be done at reduced speeds? If so, some automation may still be possible and warranted.

The bottom line is that it is necessary to factor test harness development, fabrication, and testing of the harness itself into the project schedule. It is an added bonus if the test harness is designed to be generic and/or expandable, so that it can be applied to more than one product. This can enhance the long-term return on investment, so watch for these opportunities.

And given that there may be technical obstacles that would prevent test automation on the entire system, it may still be worthwhile to automate even a portion of a project, provided that the return on the investment of time and effort promises a payoff.

Special Automation Techniques: Some Typical "Gotchas" in Embedded Software Test Automation

Once appropriate automation tools have been selected, and a test harness for the embedded system has been designed and built, it is time to create some test scripts. Here again there are a number of special considerations that should be factored in to the automation effort on an embedded system.

First, embedded systems can be vulnerable to initialization problems. You can write scripts and have them pass ordinarily, just because some system input is typically sitting at a given value. But if a prior test script left that system input at a non-standard value, suddenly a subsequent script may fail. So the same test on a different test set-up, facility, etc. can fail unexpectedly because a less-than-comprehensive initialization has been performed.

To address this, try to have a comprehensive initialization sequence that can be called by all test scripts. Make it a matter of policy that this initialization sub-script is called at the start of each script. Yes, people are going to complain that it seems to be a waste of time to execute all of these steps at the start of every single test script. But in the end the time will be well spent, since chasing errant conditions caused by initialization problems will be avoided.
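One way to enforce that policy is a shared initialization routine that every script invokes first. The sketch below assumes a simple script-level API; the class, method names, channel names, and settle time are all hypothetical illustrations, not the article's actual harness.

```java
// Hypothetical shared initialization called at the top of every automated script.
public class TestRig {
    public static void initialize() throws InterruptedException {
        // Drive every input the scripts depend on to a known, documented baseline,
        // so no script inherits leftover state from a previous run.
        setAnalogInput("outdoor_temp", 20.0);      // degrees C
        setBinaryInput("door_switch", false);
        setCommValue("occupancy_mode", "UNOCCUPIED");
        waitForSteadyState(5_000);                 // give the controller time to settle (ms)
    }

    // Stubs standing in for the real harness calls.
    static void setAnalogInput(String channel, double value) { /* drive the harness output */ }
    static void setBinaryInput(String channel, boolean value) { /* drive the harness output */ }
    static void setCommValue(String point, String value) { /* write over the comm protocol */ }
    static void waitForSteadyState(long millis) throws InterruptedException { Thread.sleep(millis); }
}
```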


Page 25



Managing tolerances is crucial to successful embedded automation. Real-world systems do not lend themselves well to absolutes. It is not useful for a system requirement to say that a system needs to control to a setpoint of 72 F. It is only useful to say that the control must control to the setpoint plus or minus some tolerance. Automated tests need to be written to handle the tolerances rather than absolutes. Otherwise numerous testing errors will be logged when the real world system deviates, even slightly, from those absolutes.
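A minimal sketch of such a tolerance-aware check, with hypothetical helper names and figures, might look like this rather than an exact comparison:

```java
// Hypothetical tolerance-aware assertion for an analog reading.
public class ToleranceCheck {
    static void assertWithinTolerance(String name, double actual, double expected, double tolerance) {
        if (Math.abs(actual - expected) > tolerance) {
            throw new AssertionError(name + " was " + actual
                    + ", expected " + expected + " +/- " + tolerance);
        }
    }

    public static void main(String[] args) {
        double measured = 72.3;   // value read back from the system under test (invented figure)
        assertWithinTolerance("zone temperature", measured, 72.0, 0.5);   // passes: within +/- 0.5
        System.out.println("Setpoint held within tolerance");
    }
}
```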

Race conditions are caused specifically by timing tolerances. A race condition "is a flaw in an electronic system or process whereby the output or result of the process is unexpectedly and critically dependent on the sequence or timing of other events. The term originates with the idea of two signals racing each other to influence the output first" (http://en.wikipedia.org/wiki/Race_condition). In the case of test automation, it most often manifests itself in a condition in which the test script execution gets to a check point first—perhaps even by just a millisecond—and fails the step because the process it's checking has not caught up.

Conversely, the process on the embedded system may have just completed and moved on—so the test script fails to detect the desired process state because it has already moved on. Fortunately, race conditions are relatively easy to avoid. Tests can use a simple "Wait While" followed by a "Wait For" construct. As long as the timing requirements for the event that's being tested are understood, this combination will not only prevent false errors because of the race condition, but will also verify that the system is working inside of its formal timing requirements.
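A rough sketch of that "Wait While" then "Wait For" construct is shown below; the helper names, polling interval, and timing values are hypothetical, and a real harness would read actual controller state instead of the stub.

```java
import java.util.function.BooleanSupplier;

public class WaitHelpers {
    // Poll until the condition becomes false, or fail after the timeout ("Wait While").
    static void waitWhile(BooleanSupplier condition, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("Condition still true after " + timeoutMs + " ms");
            }
            Thread.sleep(50);
        }
    }

    // Poll until the condition becomes true, or fail after the timeout ("Wait For").
    static void waitFor(BooleanSupplier condition, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("Condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(50);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Hypothetical usage: wait while the controller is still leaving its previous state,
        // then wait for the commanded state, both inside the formal timing requirement.
        waitWhile(() -> readControllerState().equals("STARTING"), 2_000);
        waitFor(() -> readControllerState().equals("RUNNING"), 2_000);
        System.out.println("Controller reached the commanded state within its timing requirement");
    }

    static String readControllerState() { return "RUNNING"; }   // stub standing in for a real harness read
}
```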

A very big potential "gotcha" in automated testing on an embedded system occurs when the test is not comprehensive enough to catch an unexpected glitch on a system output that might seem to fall outside of the specific test case. The difficulty is that a given system may have dozens or even hundreds of outputs. It is usually impossible to check the status of all of them in every test step.

At the very least, be sure that each test case explicitly checks the status of all known critical values. But on the flip side, so as not to add unnecessary execution overhead, if it’s truly a “don’t care” then don’t include it. Formal script reviews are the solution here—other test engineers, hardware engineers and software developers may identify system outputs that were not considered, but that really should be included in the test case.

To Automate or Not to Automate: Finding the Return on Investment

The first question is whether the embedded system testing should be fully automated. The answer is no, generally not. At the very least, relying completely on automated tests is probably a bad idea. As mentioned above, in a system of any significant complexity there are simply too many inputs and outputs for the tests to be absolutely comprehensive. Many times manual tests run by individuals with significant understanding of the system will catch defects that would have been missed by a more narrowly scripted automated test.

Total reliance on automated testing will generally not result in sufficient coverage. There are aspects to most embedded systems that will defy full coverage through automation without enormous effort. And in certain embedded systems there are human and machine safety considerations—in these cases, although the safety tests can be automated, they should also be run manually so that a human being verifies the safety of the system.


Page 26


So how does one decide whether to automate or not to automate? Ultimately this will be determined by calculating the return on investment (ROI) for the automation effort.

Remember, first, that almost every obstacle can be overcome: it is purely a function of how much time, money, and effort can be expended for an ROI. The better the metrics available about your automation process, the more information can be provided to management concerning the ROI when automating new systems.

There are many models available to calculate ROI for test automation. Any of these can be applied to test automation on an embedded system. The main difference will be to factor in the time it takes to design, build and troubleshoot the test harness(es). It is also necessary to be aware of any extensions to the test tool that may be required to provide coverage for parts of the embedded system that the tool does not already support, such as a new communications protocol or hardware I/O type.
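As a purely illustrative sketch, not a prescribed model, a simple ROI calculation that folds in the harness effort might look like the following; every figure is invented, and a real project would substitute its own measured metrics.

```java
public class AutomationRoi {
    public static void main(String[] args) {
        // Invented, illustrative figures in hours; substitute real project metrics.
        double toolAndScriptDevelopment = 300;
        double harnessDesignBuildDebug  = 400;   // the extra cost unique to embedded automation
        double manualHoursPerCycle      = 120;
        double automatedHoursPerCycle   = 10;
        int    testCycles               = 12;

        double investment = toolAndScriptDevelopment + harnessDesignBuildDebug;
        double savings    = (manualHoursPerCycle - automatedHoursPerCycle) * testCycles;
        double roi        = (savings - investment) / investment;   // (1320 - 700) / 700, roughly 0.89

        System.out.printf("Savings: %.0f h, Investment: %.0f h, ROI: %.0f%%%n",
                savings, investment, roi * 100);
    }
}
```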

Here are some good rules of thumb to maximize return on investment from embedded test automation:

• First and foremost, regression is the primary key to ROI. Repetition pays the bills. Automate tests that will be run numerous times over multiple test cycles.

• Intelligent selection of the scope of automation is the secondary key to ROI. Don't bite off more than can be handled (or paid for). The low-hanging fruit would be tests that require large amounts of time to execute and where catastrophic results could follow if the software is defective. For example, signal conditioning algorithms such as piece-wise filtering and linearization applied to analog inputs can have bugs at the transition points that are relatively difficult to detect but can throw the input value wildly out of range. It is easy to create a test that sweeps the entire range of analog values in small increments looking for these anomalies (a sketch of such a sweep follows this list). Such a test would be daunting to run manually, is easy to automate, and can catch software defects that could have catastrophic consequences in the embedded system. (But note that, in this example at least, a good code inspection would go pretty far in eliminating the risk of such a software defect.)

• Another way embedded system test automation can have a huge payoff is to reproduce faults that require large numbers of iterations to occur, so many that manual testing would be impractical or impossible. For example, I once worked on a serious field issue that occurred very infrequently and at just a few job sites. The software engineers eventually came up with a set of conditions they thought could reproduce the problem. An automated test was developed to repeatedly present those conditions to the system and it turned out that on average the error would occur approximately every 300 presentations. The ability to reproduce the error, even that infrequently, enabled the software engineers to craft a fix. The test was then run for thousands of cycles and we were able to calculate, to a statistically exact level of confidence, just how certain we were that the remediation actually fixed the problem. The payoff of the automation was a little difficult to quantify in dollar terms, but the payoff in increased management confidence in the competence of the engineering group was very high.

• Remember that partial automation of a given test may still be worthwhile. Even if an automated test has to stop execution to prompt a user for certain intervention, the test might still provide better coverage, better reporting, better consistency, and be less mind-numbing—and therefore more prone to being run accurately during regression—than a fully manual test.
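As promised in the second bullet above, here is a hedged sketch of a range-sweep test. The harness calls and the expected piece-wise transfer function are hypothetical stand-ins; a real test would drive the physical input through the harness and read the conditioned value back from the system under test.

```java
public class AnalogSweepTest {
    static double lastVolts;   // remembers what the harness last drove (stand-in for real hardware)

    public static void main(String[] args) {
        double tolerance = 0.2;                                     // acceptable conditioning error
        for (double volts = 0.0; volts <= 10.0; volts += 0.01) {    // sweep the full 0-10 V input range
            driveAnalogInput("sensor_in", volts);
            double reported = readConditionedValue("sensor_in");
            double expected = expectedLinearization(volts);
            if (Math.abs(reported - expected) > tolerance) {
                throw new AssertionError(String.format(
                        "Glitch at %.2f V: reported %.2f, expected %.2f", volts, reported, expected));
            }
        }
        System.out.println("No conditioning anomalies found across the range");
    }

    // Hypothetical stand-ins for the test harness and the documented transfer function.
    static void driveAnalogInput(String channel, double volts) { lastVolts = volts; }
    static double readConditionedValue(String channel) { return expectedLinearization(lastVolts); }
    static double expectedLinearization(double volts) { return volts * 10.0; }   // e.g. 0-10 V maps to 0-100 units
}
```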

Conclusion

Test automation on an embedded system presents a unique set of challenges not encountered when automating tests in more conventional computing environments. Test automation on embedded systems requires a unique set of software tools. And since embedded systems involve an amalgamation of hardware and software, a specific tester-to-controller interface is required. Developing this interface can be complicated, challenging, and costly. The test professional must factor in the cost and time needed to create the automation interface, or the testing schedule is incomplete.

Because of the real-time nature of embedded systems, the test professional must also employ specific automation techniques. Being aware of these unique challenges will greatly decrease the time needed to debug automated tests, which will result in successful automation attempts and greater likelihood of management satisfaction.

Test automation on an embedded system can greatly expand the scope of testing and eliminate defects that would have been virtually impossible to identify using manual testing alone. Awareness of the unique challenges posed by embedded systems can help the test professional to decide on an appropriate scope of automation, avoid pitfalls during test development, and deliver a successful product.


Page 27


Training That’s Process Focused, Yet Hands On

Software Test Automation Training
www.training.automatedtestinginstitute.com

Public and Virtual Training Available

Come participate in a set of test automation courses that address both fundamental and advanced concepts from a theoretical and hands on perspective. These courses focus on topics such as test scripting concepts, automated framework creation, ROI calculations and more. In addition, these courses may be used to prepare for the TABOK Certification exam.

Public courses
• Software Test Automation Foundations
• Automated Test Development & Scripting
• Designing an Automated Test Framework
• Advanced Automated Test Framework Development
• Mobile Application Testing & Tools

Virtual courses
• Automated Test Development & Scripting
• Designing an Automated Test Framework
• Advanced Automated Test Framework Development
• Mobile Application Testing & Tools

Page 28


How Automated Testing Fits into Agile Software Development

Page 29


How Automated Testing Fits into Agile Software Development

By Bo Roop

Being an Automator When Automation Is Not Your Only Task

Page 30


How does automated software testing fit into the big picture of agile software development? In my case, it was more about the tester than it necessarily was about the testing.

I was the newest member of an existing eXtreme Programming (XP) team. This software team had been working together for a few years, but had just begun its transition into using agile methodologies. The company I worked for had a standardized testing team that was used as a shared resource among the individual software development teams.

Its members (re)learned each software package as it came, but they only followed the developers at the end of the software development cycle. There was no early testing integration, and the software quality was suffering because of the waterfall approach.

I was asked to join this new agile team as a tester, but quickly found that my role would be so much more.

Once I got up to speed on using the software and understanding our customer’s goals, I found that I had a better understanding of the whole system than the developers who were focused on just the small areas for which they were writing code. So I transitioned into a role of product champion and customer advocate. I worried about the whole product and how our customers would use it, like it, and recommend it.

Our development team had a dedicated XP customer in our marketing person, who knew what he wanted the software to do, but he couldn't write a realistic requirement. So I spent many hours each iteration taking his broad requirements and changing them into achievable software tasks for the team to implement.

When we started using him as a customer, we'd receive requirements like, "make the software save faster." This was not truly helpful, nor necessarily achievable. While we could have made it save faster, it still might not have met his desired speed improvement since it was never clearly defined. So I was tasked with morphing that ambiguous requirement into something achievable: "Make the saving of new customer records complete in less than 50 percent of the current rate." We benchmarked results from the existing software to establish a baseline, and then aimed at making the software faster.
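A hedged sketch of how such a benchmark-and-assert check might be automated is shown below; the record-saving call, the baseline figure, and the 50 percent budget derivation are hypothetical illustrations rather than the team's actual test.

```java
public class SaveSpeedCheck {
    public static void main(String[] args) {
        long baselineMillis = 800;                  // measured from the previous release (invented figure)
        long budgetMillis   = baselineMillis / 2;   // requirement: complete in less than 50% of the baseline

        long start = System.nanoTime();
        saveNewCustomerRecord();                    // stand-in for the real operation under test
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        if (elapsedMillis >= budgetMillis) {
            throw new AssertionError("Save took " + elapsedMillis + " ms, budget is " + budgetMillis + " ms");
        }
        System.out.println("Save completed in " + elapsedMillis + " ms (budget " + budgetMillis + " ms)");
    }

    static void saveNewCustomerRecord() { /* hypothetical call into the application under test */ }
}
```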

Once the new software pieces were implemented, I then performed ad-hoc and exploratory testing on the new builds, and I found bugs. This testing was performed within the two-week iteration, and the feedback to the development staff members was almost instantaneous. The developers immediately corrected the problems and moved on to the next task, and I, of course, verified their fixes.

The Beginning

At the beginning of a new iteration, I would spend the first two to three days running the existing automated testing suite to verify we didn't have any regression issues during the previous iteration, and once all the existing tests passed I'd take off automating the new features from the last two weeks. If they didn't all pass, either because of regression issues or changes in the way the code behaved, I'd go through and modify the automated scripts, or I would work with the developers to help correct the regression issues.

By sharing the results of the automated test runs with the developers in the early stages of the new iteration, I could get bugs fixed more quickly. Those regression issues were knocked out before we built on them and made them unmanageable. We sometimes found incomplete areas of the software that automated testing was able to reveal even though the application hid them from the user. Finding "Todo:" comments in the code always raised a red flag, and set me off on a more in-depth hunt.
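A small, hypothetical sketch of the kind of scan that flags those "Todo:" markers during an automated pass follows; the source-tree location is an assumption, and the article does not say whether the team automated this step or spotted the comments by reading the code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class TodoScan {
    public static void main(String[] args) throws IOException {
        Path sourceRoot = Paths.get("src");          // hypothetical source tree location
        try (Stream<Path> files = Files.walk(sourceRoot)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(TodoScan::flagTodos);
        }
    }

    static void flagTodos(Path file) {
        try {
            int lineNo = 0;
            for (String line : Files.readAllLines(file)) {
                lineNo++;
                if (line.toLowerCase().contains("todo:")) {
                    System.out.println(file + ":" + lineNo + "  " + line.trim());  // raise the red flag
                }
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}
```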

Our development team manager also appreciated the prompt feedback, which included metrics like code coverage and pass/fail statistics. Those are the visible items that upper management liked to track, even though we constantly reminded everyone that those were just a few metrics of interest, not all.

Within the first three days of the new iteration I would have automated the last iteration's newly created features, passed the results on to management and the rest of the team, and then begun looking at the new features the developers were working on over the past few days in the current iteration.

We found that running the software automation a week behind gave us a few benefits. The code to be automated had already been manually tested and verified (more on that later), and it was already in a shippable state at the end of the last iteration. Remember that we were following strict XP practices.

Stable software is always easier to automate than software that’s in a state of flux. These automation tasks also filled that gap when the new code hadn’t been developed yet, and manual testing would simply be repeating tests from the previous iteration.


Exploratory & Hostile

After the software automation tasks were completed, I used exploratory testing models to find the bugs in the new code. The new features were tested manually before any of the existing code was regression tested again. The developers needed the feedback on newly implemented features while that code was still fresh in their minds. During our daily stand-up meetings, I received information from the developers telling me which areas of the code were in flux or in need of greater attention. We were working together toward a common goal.

I then performed ad-hoc testing as a hostile user. We had customers who were forced to use our software by their managers, so they would try to find problems with our software or create reasons to not use it. Since our software provides feedback on the quality of their work in a production environment, many viewed the software as a threat to their jobs instead of as a tool to make their jobs easier.

It was a difficult task for our team to improve the perception of our software with those customers. So if they were going to try to break the software, I had to try to beat them to the punch. I had to find the problem areas before they did. By using the software in the same fashion as our destructive customers would, I helped make it very solid.

Requirements

After a few days of performing manual testing of the new builds, I'd change gears and begin working with our internal XP customer, the marketing guy, to refine his requirements into something usable. I had a number of hurdles to overcome with him as our internal customer: he had no background in software development and didn't understand how engineers implemented new code. While he was great at creating brochures and marketing campaigns, he was not as gifted with time management and interpersonal skills. In his haste to get the new requirements completed, he would rush, meaning we would frequently receive concept-only requirements. We'd hear things like, "we need to make the software pretty."

Pretty? Really?

So I would get started with him during the second week of the iteration and figure out what his “make it pretty” requirement really meant.

He didn’t like the way the software looked and he wanted it to be more closely aligned with the Windows operating system. So we changed his requirement into, “make the software use some of the newer Windows look and feel. Stuff like rounded buttons, gradients and transparency.” This was better, but it was still very ambiguous. But by removing some of our developer’s creative license, the software started looking prettier (in his eyes and ours); and by taking baby steps toward a real requirement, we were at least moving in the right direction.

My goal was to get all of the requirements from the marketing guy translated into developer-speak, and ready to share during the next iteration planning meeting. In the beginning, it took about five to eight days to get everything defined. Toward the end, we were able to knock out the requirements much faster.

New Features/Controls

While we were working to get requirements refined, I would still grab the new builds and manually test the new features. On the project board, anything that was moved into the "ready for test" column was fair game. Sometimes it was complete and ready, and other times it was not. So that was another balancing act I had to learn. Some developers have thicker skin than others and appreciate the immediate feedback, while others despise being told their code is broken (especially if it's not 100 percent feature-complete). After the next iteration's requirements were delivered, I'd change over to running the regression test suite, and continue my manual testing of new builds. These last few days of the iteration allowed me to test some of the undocumented requirements that needed attention.

Before beginning automation of the new controls and interfaces, I'd take a dry run at loading the new controls into the automation tool. I needed to ensure that the controls were scriptable. They needed to be properly and consistently named in order to keep the automated scripts as readable as possible. Hotkeys and shortcuts needed to be unique, the code needed to be stable, and the software needed to be ready to release at the end of each iteration.
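
One way to make that dry run repeatable is to audit the new controls against the naming rules before scripting against them. The sketch below assumes the control names can be dumped from the build or the automation tool into a simple list; the naming convention shown is illustrative, not the project's actual one.

import re
from collections import Counter

# Illustrative naming convention; the project's real rules aren't spelled out in the article.
NAME_PATTERN = re.compile(r"^(btn|txt|lst|chk)[A-Z][A-Za-z0-9]*$")

def audit_control_names(control_names):
    """Flag controls that are unnamed, duplicated, or off-convention before scripting them."""
    problems = []
    for name, count in Counter(control_names).items():
        if name and count > 1:
            problems.append(f"duplicate name: {name}")
    for name in control_names:
        if not name:
            problems.append("control with no automation name")
        elif not NAME_PATTERN.match(name):
            problems.append(f"off-convention name: {name}")
    return problems

# Example run against names pulled from a new build (made-up values)
print(audit_control_names(["btnSaveCustomer", "btnSaveCustomer", "textbox1", ""]))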

I would also pick a few specialized types of testing to focus on during the tail end of the iteration. Sometimes I would focus on the ease of use of the overall software, finding out if it was easy to learn and contained clear and useful warning and/or error messages. Other times I would focus on Windows-specific platform support. While much of my automation was done on one or two platforms, the software had to run on many other computer systems. Windows 2000, XP and Vista, as well as a large list of foreign languages, were all officially supported. It was my job to maintain these test environments, and to test the software on each of the platforms with the latest service packs, running each of the languages.

Now throw different hardware configurations into the mix, and my testing matrix continued to grow. More or less RAM, larger hard drives, filled disk space, no virtual memory, authenticated on a corporate domain or not, missing drive letters: none of these things were ever specifically called out in the testing requirements. They were all sourced from customers' desires or from problems reported from the field.
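
A matrix like that is easier to keep honest when it is generated rather than maintained by hand. A small sketch built from the platforms and variables mentioned above (the specific values are illustrative):

from itertools import product

# Values below are illustrative; the real lists came from customer requests and field issues.
OPERATING_SYSTEMS = ["Windows 2000", "Windows XP", "Windows Vista"]
LANGUAGES = ["English", "German", "Japanese"]  # subset of the supported language list
HARDWARE = ["low RAM", "large disk", "disk nearly full", "no virtual memory"]
NETWORK = ["on corporate domain", "standalone"]

def build_test_matrix():
    """Enumerate every environment combination the regression pass should eventually cover."""
    return [
        {"os": os_name, "language": lang, "hardware": hw, "network": net}
        for os_name, lang, hw, net in product(OPERATING_SYSTEMS, LANGUAGES, HARDWARE, NETWORK)
    ]

matrix = build_test_matrix()
print(f"{len(matrix)} configurations to spread across iterations")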

Documentation

If these areas were found to be working at the end of the iteration, I'd switch over to worrying about how the documentation team was being integrated into the development team. Could the software be quickly and easily translated? Would it work within the confines of their translation tools? Could we support field translations? Did our documentation staff understand how the new features worked? If not, I would provide them with training. If we weren't ready for documentation and translation, I could test the software's logging capabilities to ensure that our technical support team would be able to support the product in the field.

These items were specifically called out by the XP customer or the software manager as corporate goals, but were not usually given individual story or task cards. We just had to work them in whenever we could.

Conclusion

So to recap, my two-week iterations (10 working days per iteration) would look something like this:

• Days 1-3: Automate the new features that were coded over the past two weeks

• Days 4-5: Perform manual testing of the newly added features as they are completed

• Days 6-7: Refine requirements with marketing

• Days 8-10: Run regression tests. Continue working toward shippable software at the end of the iteration, focusing on the undefined testing requirements

And then we’d start over again. I loved it as a tester because I had complete

knowledge of the requirements (how should the software behave); I was helping write them! The developers loved it because I went through the hassle of refining the vague, ambiguous or incomplete requirements into something useful. Our development team rule was the software developers didn’t have to work on a story or task if it was ambiguous. The pressure to make difficult decisions was on the marketing guy. You want it faster? How much faster? Would you like it two seconds or 30 seconds faster? What if the team can’t achieve that speed? Shall we time-box this research to a half day? What’s it worth to you? By talking about these issues ahead of time, during the previous iteration, there was time to fill the gaps, make the necessary changes, and get the team useful requirements.

It made for some great requirements and very happy team members.

The marketing guy loved it because he got exactly what he wanted with little-to-no arguing with the developers. I had a calmer demeanor with him and could get the answers out of him without bringing in a lot of people ... which meant the developers could continue developing while I went to the meetings.

Finally, our manager loved it because everyone on his team was happy and engaged and the customers were getting great software from us. Win-win-win.


“It’s OK To Follow the Crowd”

CrowdamationCrowdsourCed TesT AuTomATion

“It’s OK To Follow the Crowd”Inquire at [email protected]

It wIll offer the flexibility to

use a tool of choice (open source

and commercial), have teams

operate out of different locations,

address the challenges of different

platforms introduced by mobile and

other technologies, all while still

maintaining and building a cohesive,

standards-driven automated test

implementation that is meant to last.


i ‘b’log To U

Latest From the Blogosphere

Automation blogs are one of the greatest sources of up-to-date test automation information, so the Automated Testing Institute has decided to keep you up-to-date with some of the latest blog posts from around the web. Read below for some interesting posts, and keep an eye out, because you never know when your post will be spotlighted.

Software changes are more frequent and demand stringent Quality parameters which enforce a highly efficient and automated development and quality process. Test Automation for this reason has seen a sea change in its adoption levels in the recent years. Though there are multiple factors that are responsible for delivering successful test automation, the key is selecting the right approach.

Blog Name: ATI User Blog Post
Post Date: January 2, 2013
Post Title: Top 5 things you should consider
Author: Sudhir G Patil
Read More at: http://www.automatedtestinginstitute.com/home/index.php?option=com_k2&view=item&id=1158:top-5-things-you-should-consider-for-test-automation-investments&Itemid=231

“You think your framework is better than mine?” This is the result of the ‘framework’ stage or those that expand on structured programming to build robustness into their programming efforts. This stage produces the best results by modularizing code into reusable functions, components, and parameterizing test data. User-friendliness is an important characteristic so it can be handed off to system analysts and alleviate the amount of expensive programmers.

Blog Name: ATI User Blog Post
Post Date: February 4, 2013
Post Title: Think Your Automation Framework is Better
Author: Patrick Quilter
Read More at: http://www.automatedtestinginstitute.com/home/index.php?option=com_k2&view=item&id=2643:http-wwwquilmontcom-blog&Itemid=231


If you ever need to disguise a password in a VuGen script, you will no doubt have used the lr_decrypt() function. If you have stopped to think for a second or two, you will have realised “encrypting” the password in your script doesn’t make it more secure in any meaningful way. Anyone with access to the script can decode the password with a single line of code

Blog Name: MyLoadTest
Post Date: December 29, 2012
Post Title: LoadRunner Password Encoder
Author: Stuart Moncrieff

Read More at:

http://www.myloadtest.com/

On one of these (production) servers I typed ‘ci /etc/passwd’ instead of ‘vi /etc/passwd’. This had the unfortunate effect of invoking the RCS check-in command line utility ci, which then moved ‘/etc/passwd’ to a file named ‘/etc/passwd,v’. Instead of trying to get back the passwd file, I panicked and exited the ssh shell. Of course, at this point there was no passwd file, so nobody could log in anymore. Ouch. I had to go to my boss, admit my screw-up

Blog Name: Agile Testing
Post Date: January 29, 2013
Post Title: IT stories from the trenches #1
Author: Grig Gheorghiu

Read More at:

http://agiletesting.blogspot.com/2013_01_01_archive.html


go on A retweet

Paying a Visit to the Microblogs

Microblogging is a form of communication based on the concept of blogging (also known as web logging) that allows subscribers of a microblogging service to broadcast brief messages to other subscribers of the service. The main difference between microblogging and blogging is that microblog posts are much shorter, with most services restricting messages to about 140 to 200 characters. Popularized by Twitter, there are numerous other microblogging services, including Plurk, Jaiku, Pownce and Tumblr, and the list goes on and on. Microblogging is a powerful tool for relaying an assortment of information, a power that has definitely not been lost on the test automation community. Let's retreat into the world of microblogs for a moment and see how automators are using their 140 characters.

Should Programming Classes be Covering Software Testing, Too? http://lnkd.in/6nnyTx

Twitter Name: CaitlinBuxton2
Post Date/Time: Mar 27

Topic: Dev & Testing

Atomic test cases are awesome http://tinyurl.com/aw36bre

Twitter Name: sheg80
Post Date/Time: Mar 12
Topic: Test Controllers


The more I learn about #testing (or anything else really) the more I realise I haven't even scratched the surface #keeplearning

Twitter Name: testchick
Post Date/Time: Mar 30

Topic: Continuous Learning

Parameterizing Selenium Web-Driver Tests using TestNG - A Data Driven Approach http://wp.me/p2RSUo-jx

Twitter Name: KadharMR
Post Date/Time: Feb 22

Topic: Parameterizing Selenium

This demo clip shows step by step how to design a test with different types of virtual users: http://www.youtube.com/watch?v=EjZoXwTAELs

Twitter Name: onloadtesting Post Date/Time: Dec 26

Topic: Load Test with Virtual Users

This is so freaking awesome visualisation of test data coverage. Kind courtesy of @Hexawise at Moolya! pic.twitter.com/CBXWcfmIGL

Twitter Name: shubhi_barua
Post Date/Time: Mar 18

Topic: Test Data Visualization


The Right Tool for the Job: Building a Mobile Automation Testing Matrix
By Darren Madonick

hot Topics in Automation

For desktop-based testing it's a no-brainer: use object-based scripting to maximize reuse across platforms and browsers. In today's mobile world it really isn't that simple. There are many different platforms, OS versions, form factors and carrier/manufacturer customizations. Multiply that by mobile web, native app, or some hybrid in between, and you've got yourself a healthy testing matrix: a daunting task for even the most skilled automation engineer.

In order to tackle this problem, an automation engineer cannot simply take a "one size fits all" approach and create a single set of objects to reuse across all combinations of platforms. For example, there are fundamental differences in how an app behaves on iOS and Android; even something as basic as a "back button" has its quirks. Although these fundamental differences can be grouped together as a step or action, they are unique enough that you cannot simply share an object between the two operating systems.
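
One common way to keep the step shared while letting each operating system keep its quirks is to dispatch on platform inside the step itself. A minimal sketch, where the driver calls are hypothetical placeholders rather than any particular tool's API:

class BackStep:
    """A shared 'go back' action whose implementation differs per platform."""

    def __init__(self, driver, platform):
        self.driver = driver      # hypothetical device/driver wrapper
        self.platform = platform  # "android" or "ios"

    def run(self):
        if self.platform == "android":
            # Android exposes a system-level back button.
            self.driver.press_system_back()  # hypothetical call on the wrapper
        elif self.platform == "ios":
            # iOS apps typically navigate back through an on-screen control.
            self.driver.tap(name="Back")     # hypothetical call on the wrapper
        else:
            raise ValueError(f"unsupported platform: {self.platform}")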

In some cases with mobile testing, you may be able to get to the object level; however, this usually requires that you instrument your app or test on an emulator. While this fulfills a piece of your testing matrix, you will probably need a couple of tools to get this done across all platforms. In other cases, the content you are testing might be HTML-based and you can test through a WebKit profiler. Again, part of your testing matrix is fulfilled, but you aren't quite there.

This may be enough to satisfy a short-term goal, but at some point you need to be testing on real mobile devices. In order to truly automate on mobile, your mobile testing "utility belt" needs to be designed in such a way that it allows testing by object when possible, by element when possible, and can quickly fall back on text or image verification in order to satisfy all areas of your testing matrix and assure the highest quality of your mobile product.
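
That utility belt can be expressed as a verification routine that tries the strongest hook first and falls back in order. A sketch under the assumption that object, element, text and image checks are each wrapped behind simple helper methods on a screen object:

def verify_present(screen, target):
    """Try the strongest verification hook first and degrade gracefully to weaker ones."""
    checks = [
        ("object", screen.has_object),    # instrumented app or emulator
        ("element", screen.has_element),  # HTML content reached through a web view
        ("text", screen.has_text),        # text/accessibility lookup on a real device
        ("image", screen.has_image),      # last resort: reference screenshot match
    ]
    for method, check in checks:
        try:
            if check(target):
                return method             # report which level of the utility belt worked
        except NotImplementedError:
            continue                      # this hook isn't available for this device/app combo
    raise AssertionError(f"'{target}' not found by any verification method")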

Having the flexibility to choose how to get the testing done is paramount, since as an automation engineer you very rarely have a say in how a particular app or mobile web site is developed. The job sometimes requires you to understand functionality without necessarily being privy to the construction, and there is always a tight timeline to achieve results. The right tool for the job is one that takes all of this into consideration and provides a platform to consolidate all of these different testing approaches.

The first step is to determine the type of app you are testing. Is it fully native, fully web, or somewhere in between?

• If it's fully native, you may be able to get to some objects on a per-platform basis, but you will probably be falling back on text- and image-based verification, especially if you are trying to go cross-platform.

• If it’s fully web, a lot of testing can be done up front in a WebKit profiler. When it comes to real devices, element-based testing can be done cross-platform if you want to instrument, or you can fall back on text and image verification.

• If it’s somewhere in between, you’ll need to mix and match.

The second step is to find which pieces or steps of your test cases are reusable across test cases and can accept parameterization to fulfill the task. For instance, take automating the selection of an item or link on the main screen of your app or landing page: maximize reuse by engineering the step to accept different parameter values, and reuse it across each test case. Although you may need to individually determine what type of verification you will use to achieve this at a per-platform or per-device level, you will save time in the long run when you write additional test cases.
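
For the main-screen selection example, the reusable, parameterized piece might look something like this; the screen helpers are again assumed wrappers around whichever tool covers that slice of the matrix:

def select_main_screen_item(screen, item_name):
    """Reusable step: select any item or link on the app's main screen by name."""
    screen.wait_for(item_name)  # assumed helper backed by whichever verification fits the device
    screen.tap(item_name)       # assumed tap/click helper

# The same step serves many test cases by changing only the parameter value.
def test_open_each_section(main_screen):
    for item in ("Orders", "Reports", "Settings"):  # illustrative item names
        select_main_screen_item(main_screen, item)
        main_screen.go_back()                       # e.g. the per-platform back step sketched earlier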

The third step is then to group those pieces or steps together by device screens or pages. This way, as you write the test cases you have an organizational structure that makes it easy to identify where you are within the app or site and where you need to navigate next.
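
Grouping those steps by screen lands you close to the page-object pattern. A sketch of how that organization might look, with illustrative screen and helper names:

class MainScreen:
    """Groups every reusable step that starts from the app's main screen."""

    def __init__(self, driver):
        self.driver = driver              # hypothetical device/driver wrapper

    def open_item(self, item_name):
        self.driver.tap(item_name)        # assumed helper
        return DetailScreen(self.driver)  # navigation hands back the next screen object


class DetailScreen:
    """Groups steps and verifications for the screen the main screen navigates to."""

    def __init__(self, driver):
        self.driver = driver

    def verify_title(self, expected):
        assert self.driver.read_text("title") == expected  # assumed helper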

Following these steps will provide a structure that can be grown to accommodate new features within an app or new sections within a mobile web site. As mobile devices become easier to automate against, this structure can easily adapt to emerging technologies that allow for greater reuse across platforms.


Are You Contributing Content Yet?

The Automated Testing Institute relies heavily on the automated testing community in order to deliver up-to-date and relevant content. That's why we've made it even easier for you to contribute content directly to the ATI Online Reference! Register and let your voice be heard today!

As a registered user you can submit content directly to the site, providing you with content control and the ability to network with like-minded individuals.

>> Community Comments Box - This comments box, available on the home page of the site, provides an opportunity for users to post micro comments in real time.

>> Announcements & Blog Posts - If you have interesting tool announcements, or you have a concept that you'd like to blog about, submit a post directly to the ATI Online Reference today. At ATI, you have a community of individuals who would love to hear what you have to say. Your site profile will include a list of your submitted articles.

>> Automation Events - Do you know about a cool automated testing meetup, webinar or conference? Let the rest of us know about it by posting it on the ATI site. Add the date, time and venue so people will know where to go and when to be there.

Learn more today at http://www.about.automatedtestinginstitute.com


http://www.googleautomation.com


Training That’s Process Focused, Yet Hands On

Software Test Automation Training
www.training.automatedtestinginstitute.com

Public and Virtual Training Available

Come participate in a set of test automation courses that address both fundamental and advanced concepts from a theoretical and hands-on perspective. These courses focus on topics such as test scripting concepts, automated framework creation, ROI calculations and more. In addition, these courses may be used to prepare for the TABOK Certification exam.

Public Courses
Software Test Automation Foundations
Automated Test Development & Scripting
Designing an Automated Test Framework
Advanced Automated Test Framework Development
Mobile Application Testing & Tools

Virtual Courses
Automated Test Development & Scripting
Designing an Automated Test Framework
Advanced Automated Test Framework Development
Mobile Application Testing & Tools
