TEST Magazine - February-March 2013



The February-March 2013 issue of TEST Magazine

Transcript of TEST Magazine - February-March 2013

Inside: Virtual environments | Penetration testing | Test Automation

Getting mobisocial in the test lab with TS Narayanan and Sireesha Jajala

Testing collaboration

Visit TEST online at www.testmagazine.co.uk

Volume 5: Issue 1: February 2013

Innovation for Software Quality



February 2013 | TEST | www.testmagazine.co.uk

Leader



A story caught my eye recently. A respected, high-flying software developer in his 40s, based in Silicon Valley and known only as ‘Bob’, had his secret double life exposed when a security check at his employer revealed that he had been outsourcing his coding duties to a Chinese software company. He reportedly paid just a fifth of his six-figure salary to the company, based in Shenyang, to do his job.

Fearing a security breach, Bob’s employer ordered a security audit. According to Andrew Valentine of Verizon, the infrastructure company asked the operator’s risk team to investigate “anomalous activity” on its virtual private network (VPN) logs.

“This organisation had been slowly moving toward a more telecommuting oriented workforce, and they had therefore started to allow their developers to work from home on certain days. In order to accomplish this, they'd set up a fairly standard VPN concentrator approximately two years prior to our receiving their call,” Valentine was quoted as saying on an internet security website.

The company discovered an open and active VPN connection from Shenyang to Bob’s workstation that had been live for months; further investigation of Bob’s computer revealed PDFs of invoices from the contractor.

Bob, by all accounts an “inoffensive and quiet but talented man versed in several programming languages”, paid less than a fifth of his six-figure salary to the Chinese firm to do his job for him.

Further evidence suggested he had the same scam going across multiple companies. “All told, it looked like he earned several hundred thousand dollars a year, and only had to pay the Chinese consulting firm about $50,000 annually,” commented an exasperated Valentine.

Relieved of his workload, Bob reportedly spent his time on the internet, checking out eBay and Facebook as well as his favourite cat videos, before writing progress reports for his bosses. Perhaps unsurprisingly, Bob’s employer, rather than applauding his initiative and business nous, sacked him.

The question, though, remains: is Bob a hero for our times or a feckless time waster?

On a totally different subject, I would like to say how great it is that following my Leader comment last issue about the first smartphones, we have a pioneer in the app development field in this issue (see page 42). I would also urge you to check out the main news story in this issue for some exciting news for anyone involved in testing (see page 4).

Until next time...

Matt Bailey, Editor

Dodgy developers


Editor: Matthew Bailey, [email protected], Tel: +44 (0)203 056 4599

To advertise contact: Grant, [email protected], Tel: +44 (0)203 056 4598

Production & Design: Toni Barrington, [email protected]; Cook, [email protected]

Editorial & Advertising Enquiries: 31 Media Ltd, Unit 8a, Nice Business Park, Sylvan Grove, London, SE15 1PD

Tel: +44 (0) 870 863 6930, Fax: +44 (0) 870 085 8837, Email: [email protected], Web: www.testmagazine.co.uk

Printed by Pensord, Tram Road, Pontllanfraith, Blackwood. NP12 2YA

© 2013 31 Media Limited. All rights reserved.

TEST Magazine is edited, designed, and published by 31 Media Limited. No part of TEST Magazine may be reproduced, transmitted, stored electronically, distributed, or copied, in whole or part without the prior written consent of the publisher. A reprint service is available.

Opinions expressed in this journal do not necessarily reflect those of the editor or TEST Magazine or its publisher, 31 Media Limited.

ISSN 2040-0160


Contents

1 Leader column – Dodgy developers: when outsourcing is unacceptable.

4 News

6 Mobisocial collaboration in the test lab – Social media are revolutionising many people’s leisure hours, but how are they helping testers to work more effectively? Matt Bailey speaks to TS Narayanan, senior vice president, and Sireesha Jajala, assistant vice president at Tech Mahindra, about how their global testing organisation is benefitting from adopting ‘mobisocial’ collaboration.

10 Can-do with kanban – A kanban approach to visual management within the test section can be an easy-to-implement and cheap route to productivity and efficiency. Derk-Jan de Grood enlists Simon, Suzanne and Maik to help him explain how.

14 Practice makes perfect – Lean and agile software development specialist Paul Wilson reports back from the annual Global Day of Code Retreat, the day when developers gather to hone their skills.

14 Product profile: Great tools make better tests – Becky Wetherill explains what’s happening with Borland’s open test management solution Silk Central.

18 Growing pains – A closed control loop requires a closed testing loop. Simon de Boer and Robin Mackaij tell us how they implement simulation in their test environments.

22 Testing in a virtual world – Former tester and systems engineering lead at virtualisation specialist Embotics, Paul Martin discusses the benefits of testing in a virtualised environment.

24 Making the most of your devtest cloud with service virtualisation – Conventional approaches to software development and testing are costing money and impacting the organisation’s bottom line. Chris Rowett, senior director, technical sales, CA Technologies, says service virtualisation may be a way to tackle this situation.

28 The future is application-based – on your mobile – It’s not just Bring Your Own Device (BYOD) any more; the future is Bring Your Own Software (BYOS), and this is creating a need for extensive testing to guarantee the usability and robustness of apps on mobile devices. Rohit Garg reports.

30 The testing world: It’s 2013 – time to be Agile? – Happy New Year everyone! Angelina Samaroo is hoping that 2013 is our best year yet. We survived the end of the Mayan calendar, so time for a new cycle – time to get Agile.

32 From hardware to software – Ovum analyst Nick Dillon reports back from the Consumer Electronics Show in Las Vegas, where he notes a marked shift from hardware to software.

34 Enhance your security with penetration testing – These days it seems that every organisation should be testing its boundaries and seeing just how secure they are. John Yeo, director of Ethical Hacking Incident Response at Trustwave, makes the case for the ‘white hat’ hackers.

38 It’s only a report – do we need to test it? – Testing evangelist Jackie McDougall makes the case for testing your Business Intelligence solution before jumping to any major decisions.

41 Design for TEST – Bridging the gap – Mike Holcombe assesses the gap between the research universities do and exactly what industry can use.

42 Top quality apps – With the explosion in the smartphone apps market in the last five years, a whole new industry has grown up to satiate the public’s desire for slick, fun, useful and new bits of software. The developers of these products live and die on their quality, and poor performance is quickly translated into negative feedback. Matt Bailey talks software quality with Gary Partington, a pioneer of smartphone software development and CEO of app development company Apadmi.

46 Agile testing: Should you disband the test team? – Should the CIO disband the test team and make the testers part of the development teams? Francis Miers looks at some of the issues which can arise when Agile methods meet real-world situations, and how to organise testing so as to maximise the benefits of Agile without losing visibility of quality.

48 Last word – Humble pie – According to Dave Whalen, Humble Pie is not just a band from the ’70s.


What the testing industry has been waiting for... Announcing The European Software Testing Awards

The testing industry is an indispensable and increasingly important part of the software development business, but historically it hasn’t had much of a profile, and as it grows and its influence extends, testers need a forum to demonstrate their talents and promote their triumphs. To this end, TEST Magazine is launching The European Software Testing Awards (TESTA), an independent awards programme designed to celebrate and promote excellence, best practice and innovation in the software testing and QA community. With headline sponsorship from Borland, a market-leading solutions provider, the awards ceremony will be held in central London in late November 2013.

The levels of dedication, commitment and expertise found in the European testing industry are world class. Reflecting this, TESTA is not only a celebration of the people involved, but also an opportunity for the whole industry to join together and celebrate its outstanding achievements.

With TESTA destined to become a highlight on the software testing industry’s calendar, the rigour of the judging process has to be formidable; therefore an independent panel of judges, drawn from senior and influential positions across the industry, will decide which individuals, organisations, teams and projects have proved their worth to receive an award.

Entries open on 1 March to all within the industry, regardless of vertical sector or business size. So whether entrants are from multi-nationals, medium-sized operations or small businesses, all finalists will have to prove that they are leaders in their field, while being compared to the very best that the European testing community has to offer. If you would like to be one of the first to receive your entry pack, or perhaps you would like to be on the judging panel, please email [email protected] or call the team on +44 (0) 870 863 6930.

DevWeek set for March 2013

DevWeek 2013, the UK’s biggest independent conference for software developers, database professionals and software architects, will run from 4–8 March 2013. Over 100 in-depth technical sessions and workshops will be held at the Barbican Centre in London, with world-class specialists, including keynote speaker Dave Wheeler, and Steve Plank from Microsoft UK, addressing over 600 delegates.

Technical sessions, running through nine simultaneous tracks, include ‘A lap around Windows 8’ by Dave Wheeler, independent software consultant, and ‘Getting serious about the Cloud’, presented by Microsoft Certified Trainer Michael Kennedy. Meanwhile, the day-long workshops, of which there are 18 in total, cover subjects such as ‘Debugging .NET applications’ by Sasha Goldshtein, CTO of SELA Group, and a ‘WCF one-day crash course’ run by DevelopMentor specialist Richard Blewett.

Geri Richards, CEO at Publicis Blueprint, who are helping to organise DevWeek 2013, said: “DevWeek is the UK's biggest independent conference for developers and provides an unrivalled forum for developers worldwide. We’re delighted to help make the 16th event the best yet.”

The European Software Testing Awards – celebrating technical excellence. November 2013, London. Headline sponsor: Borland. www.softwaretestingawards.com

UK faces shortfall in tech workers

The United Kingdom faces a shortfall of 33,300 IT and tech workers by 2050 due to skills shortages, an ageing workforce and a restrictive migration policy, according to Randstad Technologies, a specialist IT recruiter.

The UK workforce as a whole will have a deficit of 3.1m by 2050, a figure which represents nine percent of the required workforce. Using employment rates from the most recent European population analysis from Eurostat, the statistical office of the European Union, as a measure of demand, Randstad analysed the projected changes in UK population and working age rate for 2050 to establish the gap between employment demand and workforce supply.

The analysis showed that, with a total population of 74.5m in 2050, the UK will require a working population of 35.4m to meet demand. However, with a pool of just 45.1m people forecast to be eligible to work in 2050, even if the employment rate matches pre-downturn levels of 71.6 percent, an ageing population will leave the UK with only 32.3m people in employment – 3.1m short of the 35.4m required to meet demand.
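As a quick sanity check of the quoted projection (a sketch, not part of the Randstad report), the figures are mutually consistent:

```python
# Randstad's projection: 45.1m people eligible to work in 2050, at the
# pre-downturn employment rate of 71.6 percent, leaves roughly 32.3m in
# employment - about 3.1m short of the 35.4m needed to meet demand.
eligible_pool_m = 45.1        # millions forecast eligible to work in 2050
employment_rate = 0.716       # pre-downturn employment rate
required_workforce_m = 35.4   # millions required to meet demand

in_employment_m = eligible_pool_m * employment_rate
shortfall_m = required_workforce_m - in_employment_m

print(round(in_employment_m, 1))  # 32.3
print(round(shortfall_m, 1))      # 3.1
```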

Randstad forecasts that IT and tech workers, who represent one percent of the UK workforce, will face a deficit of 33,300 staff by 2050.


The software and IT sector in Britain is missing out on a game-changing £280 million boost to its domestic and international prospects, according to Jumpstart, a leading research and development tax credit specialist.

According to the Edinburgh-based company, £280 million is the amount unclaimed from Her Majesty's Revenue and Customs in R&D tax relief – a sum which could dramatically alter the sector's business advantage in a brutally competitive global market.

The sector's importance to the UK economy is clear in an increasingly digital age – the ICT market is worth £140 billion, or 12 percent of GDP, and directly employs in excess of 600,000 staff. The country is a powerhouse for software development, attracting £930 million a year in software R&D investment from international businesses.

Jumpstart has calculated that an average initial claim size is likely to be in the region of £97,000 per company. Phill Gillespie of Jumpstart explains that the ICT sector is particularly challenging, since the HMRC guidelines for submissions are unusually complex and contradictory, and submissions are complicated by the need for companies to continually update and upgrade their own software to keep it compatible with advances introduced by major platforms such as Windows and Apple.

“As people in the software industry are only too well aware, the only real consistency is change. It is the nature of these changes which dictates whether projects become eligible for R&D purposes,” says Gillespie. “Software is a complex subject area – but it is indicative of Jumpstart's technical analysis expertise that its analysts were recently called upon to share their knowledge and expertise on software submissions with HMRC staff. The government is keen to support innovation, and the cash and credits available through its R&D schemes could make a significant difference. But it is important for companies to be guided through the submission process by professionals, in order to virtually guarantee their chances of success.”


Testing has never been more important. End-users are less tolerant of poorly delivered applications than ever before. Fail to meet user expectations and you risk losing both customer loyalty and business. In the spirit of improving how testing is done, Borland has joined forces with TEST Magazine to find out exactly what the state of the art is for testing as of February 2013.

How do your requirements practices stack up? How Agile is your software delivery? How do you rate your ability to deliver mobile applications on the different device variations? The answers you give to these questions all have an impact on the success of your business, both today and tomorrow. So, to learn more about your testing experiences and concerns in keeping pace with software development trends in 2013, take ten minutes to complete the survey and you will be in with a chance to win one of three Kindle Fires. The results, plus the three lucky winners, will be announced in the April issue of TEST Magazine. The survey closes on 28 February 2013. Click the lead story on the TEST Magazine homepage at www.testmagazine.co.uk and follow the link.

Take the Testing Survey!

£280 million bonus for software industry?

Almost half of businesses have not invested in IT for two years

A new study has revealed that almost half (45 percent) of firms have not invested in business technology for two years, and a third of organisations are choosing to wait until the end of the recession before they implement new IT.

The research, from MSM Software, also warns that a lack of IT investment is placing businesses at risk, with 98 percent of IT managers not convinced that their current IT systems are up to scratch, or capable of supporting the business long-term.

Thomas Coles, managing director at MSM Software, says: “While cutting back on IT investment is understandable, it is a dangerous strategy for organisations to pursue, as no new wave of improvements can be made. This in turn leaves companies vulnerable to being overtaken by competitors and looking outdated in comparison. The recessionary environment has had an impact on many organisations’ approach to IT investment, and I believe this is a major reason why doubt is being cast over the suitability of business technology.

“It is extremely concerning that such a vast majority of IT managers believe technology is unfit for the business. This places companies at great risk of system failure, which could introduce turmoil into the organisation, bringing with it huge repercussions in terms of lost sales, custom and damaged reputation. I urge businesses to reconsider their approach to IT investment; organisations must have systems in place which are robust and fit for the specific requirements of the company. This is the only way to meet long-term business objectives and ensure competitive advantage, both now and when the economic environment improves.”

Meanwhile, Gartner has increased its worldwide IT spending prediction for 2013. According to the analyst firm IT spending in 2013 will increase 4.2 percent compared with last year, reaching £2.28tn.


Social media are revolutionising many people’s leisure hours, but how are they helping testers to work more effectively? Matt Bailey speaks to TS Narayanan, senior vice president, and Sireesha Jajala, assistant vice president at Tech Mahindra, about how their global testing organisation is benefitting from adopting ‘mobisocial’ collaboration.

Mobisocial collaboration in the test lab


When you have a software testing organisation spread across the globe for ‘follow-the-sun’ service, one of the challenges is keeping your testers in touch and engaged with each other. The benefits of sharing experience and best practice, as well as discussing the latest innovations, techniques and tools, cannot be overstated. In this regard testers have often been at the forefront, setting up chat groups and forums to share their knowledge.

Tech Mahindra is a leading global systems integrator and business transformation consulting organisation that has pioneered a ‘mobisocial’ cooperation model for its staff. Focused primarily on the telecommunications industry, the company expanded its IT portfolio in 2009 by acquiring IT services company Mahindra Satyam (earlier known as Satyam Computer Services). Tech Mahindra’s capabilities cover a broad spectrum in IT, including business support systems, operations support systems, network design and engineering, next-generation networks, mobility solutions, security consulting and testing.

In the testing sphere the company specialises in providing outsourced systems and network test solutions to telecom service providers, enterprises and independent software vendors.

Mobisocial testing

TS Narayanan, senior vice president at Tech Mahindra, explains a bit of the company’s history in testing. “From a testing perspective both Tech Mahindra and Mahindra Satyam cover a fair bit of ground. We have somewhere in the region of 8,000 people focussed entirely on testing. On the Tech Mahindra side it is almost entirely dedicated to the telecom operators, but we also do a lot of domain-related testing, functional and otherwise, in the banking/BFSI (banking, financial services and insurance), healthcare, ERP and CRM space.”

Assistant vice president Sireesha Jajala focuses on the company’s utilisation of mobisocial techniques for communication. “In terms of mobisocial, we are seeing a large amount of change coming,” she says, “both from our customers and clients, as ‘Generation Y’ start to make their presence felt in the way they are designing and defining their products, as well as in the usability of their products, which impacts the way we test those products before they go live. From our own employees’ point of view, we are seeing a new generation coming up who have always used social media, who are very much on Facebook and Twitter all the time. They are bringing in behavioural changes in how testing is performed within the organisation.

“These changes have brought about a two-pronged impact on what we are doing and we have seen its effects on our engagement with customers as well,” says Jajala. “It affects both how we test and certify products in terms of usability and cultural experience and on the other hand, the people themselves have been bringing in big changes at work which are redefining how we actually deliver some of these testing products.”

Mobisocial in practice

The mobisocial approach has immediate benefits (and perhaps negative aspects as well) in theory, but how are these applied in the testing organisation and in its interactions with customer organisations?

“From a collaboration point of view, we are seeing new ways of exchanging information,” says Jajala. “We have micro-blogs and blogging for our testers to share information and to allow much easier pooling of knowledge across geographically-distributed teams. The other aspect of mobisocial is how you generate insights from all this shared information that can be used in testing. A good example might be experiential sharing of information about a specific bug.

“Our job is definitely being made easier with these collaboration tools,” argues Jajala. “Within projects we have Facebook pages, we have micro-blogs, and we have blogging facilities in the company which people use extensively to collaborate and share knowledge.”

Company senior vice president TS Narayanan adds that although a lot of testing continues in the traditional way, there have been big changes in the last few years. “The big change I have seen in the last three or four years,” he says, “is that in the old days, when a tester found a defect they recorded it in a defect-tracking tool, and someone would say ‘Hey! The tester has found a bug’, and then someone would send off an email to the developer, who would then go and look at the bug, analyse it and come back with an answer 48 to 72 hours later! This is changing with real-time


collaboration. People are now more inclined to say, ‘Can I fix this here and now?’ We still need the traditional tools for defect tracking, because the statistical analysis is required to see how the software performs. But the act of getting things changed in real time has become a reality because of mobisocial collaboration.”

Mobisocial agility

It is often said that it is more difficult for a subcontracted tester to be Agile than it is for internal testers working cheek by jowl with developers. Narayanan thinks this situation is also changing with the introduction of mobisocial collaboration.

“In many situations, particularly when you have a multi-vendor scenario, there are a number of collaborating contractors working with the customer,” he says, “so from a testing perspective the use of a mix of traditional testing techniques and a collaborative approach is beneficial. You have the outsourcer and a number of systems integrators – some developers, some testers – working both physically co-located and through real-time collaboration. This is making the process more agile, but the most important thing is that the turn-around of defects, from the time they are reported, is speeding up. We are even now seeing a lot of real-time collaboration where people are working together to prevent a defect before it gets to the testing stage. This is all helping to shift the testing process to the ‘left’.”

Onshore/offshore

Tech Mahindra has a mix of onshore and offshore testing staff. “We have a mix for a number of reasons,” says Narayanan. “Firstly, for example, when testing telco networks we need an onshore presence because we use certain types of testing where we may actually need someone to go in and pull out a circuit board, cut a cable or switch off a router; types of testing where we need a physical presence. There are also situations where you need a certain amount of secure access in a number of locations, for reasons of data confidentiality. And we also need follow-the-sun coverage, where onshore teams can carry out a certain level of testing around the clock. The ratio of headcount is skewed in favour of offshore testing though.”

“We probably support about 49 countries with our testing service,” adds Jajala. “And about 12 different languages, so it does help being global and multi-cultural. We are globally distributed, but globally connected as well.”

Looking at testing as a global business, Narayanan says the emerging markets are where there is growth. Sireesha Jajala adds that there are three or four technical areas, from a customer’s point of view, that are becoming more important. “Usability testing is growing,” she says, “and this is being driven by the growth of mobility, mobile devices and apps. Also, because of the growth of cloud implementations and the growth of social media and collaboration tools, we are seeing growth in non-functional and security testing. These are becoming key considerations. There is no single type of testing that is taking the lead, but with the changes that are coming into the market, there are a few areas where we are seeing more traction and interest.”

Testing apps

We have reported the potential for a massive growth in app development giving a boost to the testing industry, but according to TS Narayanan this is not the full picture. “Whenever I talk about testing at forums, the one statement I always use is: ‘The entropy of the universe is always increasing!’ The complexity is always increasing, but fortunately we have learned to build process, structure and tools to cope with this situation, particularly from a device perspective. You have operating systems, middleware and support tools, and then the application itself. Because we have been dealing with this for the last four or five years, we have tried to take complexity out by building the right frameworks and infrastructures that make things easily repeatable across newer technologies.”

The company has found tools on the market that can easily be adapted to every new paradigm in the development world. “We have used a mix of industry-standard tools, as well as our own, to take out complexity,” says Narayanan.

Looking forward

According to TS Narayanan it is an exciting time for testing. “We see the focus of testing shifting ever more towards the ‘left’,” he says. “We see it becoming less commoditised, simply because of the need to integrate with the wider ecosystem, with the business on the left and the operations on the right. How do you manage the design and delivery of a project better and earlier, and how does testing fit into this? From the opposite side, how do you make sure that customer experience, usability, performance and stability are well tested for? From a testing professional’s perspective, you will actually see that the competence expected of the tester will be much higher, because they are not merely people executing test cases; they are actually validating the product from a design perspective. They are building and developing tools to aid their testing, so they are moving more into the design and development area and away from being people who are blindly executing tests.”

Sireesha Jajala sees testers moving away from the traditional quality assurance role and towards becoming change agents within organisations. “Testers are increasingly participating in the decisions about going live with a product and the amount of intelligence that they gather is significant. It is no longer about whether a test case has been passed or not, it is about: has the non-functional been ensured? Is there any security impact? Is there any impact on the business when the software goes live? We include a whole host of things in QA now which weren’t traditionally a part of it. The role is becoming more important and moving more towards being a change agent.

“We are also looking at customer experience management,” adds Jajala, “because once a product goes live we do a lot of analysis that gets fed into the process improvement. We are seeing testing becoming an ever more important part of the process and we are seeing a shift leftwards.”


A kanban approach to visual management within the test section can be an easy-to-implement and cheap route to productivity and efficiency. Derk-Jan de Grood enlists Simon, Suzanne and Maik to help him explain how.

Can-do with kanban



It's Friday afternoon in the testing department; Simon, Suzanne and Maik are meeting at the whiteboard. They want to know whether kanban can contribute value to their test team. Suzanne picks up the pen and writes on the top post-it: ‘Kanban for our test department’. She sticks it in the middle of the whiteboard and looks triumphantly at her colleagues. “How's that?” she asks, but Maik corrects her: “Actually it isn’t kanban, but ‘visual management’. Kanban is a lean process management system aimed at optimising flow. You use a kanban board, which may consist of a whiteboard with different columns on it. Each of the columns represents a step in the process. This process is also called the value stream.

“Post-its represent tasks for the team and are stuck on the board,” he continues. “During its life-cycle, each post-it, or task, makes a tour from the left column on the board (the back-log) to the column furthest to the right (completed tasks).”

“Teams doing kanban have a daily meeting at the board,” adds Suzanne. “Kanban helps to distribute the work, makes clear where the bottlenecks are and ensures an optimal handling of work packages.”

Simon is enthusiastic. “That is just what we want!” he says. “Better control of our test tasks and an increase of our efficiency. Our testing department has many duties. We do reviewing, test design, execution and regression testing. We are simultaneously working on several projects and releases and we have a TPI trajectory. The business demands changes or asks us to assist with production disruptions. A better grip on our activities sounds good and saving valuable test time sounds even better!”

Maik is not done yet though: “In visual management we use kanban, but besides the kanban board we have two other boards. These are the week-view and the improvements board. On the week-view board, we present the steering information. We keep track of the team’s production and use this to evaluate the effects of decisions and improvements the team makes. This way we create a feedback loop that can be used to continuously improve. Monitoring the production also helps to show the value of the team to the stakeholders. On the improvements board, we put down all innovative ideas for improvement. Because the whole team meets around the board to discuss the progress, many bottlenecks and potential improvements are identified. By gathering all ideas we ensure they don’t get lost and the team can implement them in a controlled manner.”

“Well, I'm convinced,” says Suzanne. “We should call it visual management.” She writes a new post-it and sticks it on top of the first: ‘Visual management for our test department’.

Workflow then kanban
Kanban is applicable to different kinds of processes. In fact, it can be used as soon as you have a workflow with a fixed number of steps. Kanban is often applicable to the test process, since many test departments work with a fairly standard workflow. You can think of the process that has been formulated in your general test strategy. It is, however, often difficult to keep track of all activities. Test departments are simultaneously working on a number of projects and are called upon when production problems need to be solved.

Kanban can help to keep a good overview of all these activities. The activities are divided into smaller work packages and are tracked on the kanban board. This gives insight into the activities that are in progress and completed. This overview enables the test team to keep a balance between:
• High and low priority functions;
• Operational and project activities;
• Test improvements and operational activities.

Flow over utilisation
Kanban is based on the theory of constraints (TOC). TOC aims for a maximum throughput by removing bottlenecks. Originally, TOC was applied to manufacturing processes. If you aim for a maximum utilisation of the team resources, as in traditional manufacturing, each team will push their products into the workflow. However, if the workflow contains a bottleneck, for example integration,



partial products will pile up and the process blocks itself. Many traditional managers aim for utilisation; ‘we pay you to work’ is their motto, and they make sure that everybody is fully engaged.

By shifting the emphasis from utilisation to flow, bottlenecks gain attention. The focus shifts to maximising the number of completed products, and it is these completed products that represent business value. There is a pull instead of a push; the integration team indicates the number of products it can handle, and the other teams deliver just enough for the integration team to do so. That’s why within kanban we like to work with WIP (work in progress) limits. The WIP limit indicates how many tasks a team can handle. If a sufficient amount of partial product has been made, a team member might idle for a while. No problem, because we know that our turnaround time is optimal.
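The pull mechanism and WIP limits described above can be sketched in a few lines of Python. This is a toy illustration; the column names, limits and task names are invented:

```python
from collections import OrderedDict

class KanbanBoard:
    """Toy kanban board: columns are process steps, each with a WIP limit.
    Pulling a task into a full column is refused, exposing the bottleneck."""

    def __init__(self, columns):
        # columns: list of (name, wip_limit); wip_limit None = unlimited
        self.columns = OrderedDict((name, {"limit": limit, "tasks": []})
                                   for name, limit in columns)

    def add(self, task):
        # New tasks always enter the backlog (leftmost column).
        first = next(iter(self.columns.values()))
        first["tasks"].append(task)

    def pull(self, task, target):
        # Move a task towards 'done', respecting the target's WIP limit.
        col = self.columns[target]
        if col["limit"] is not None and len(col["tasks"]) >= col["limit"]:
            return False  # column full: nobody pushes more work in
        for c in self.columns.values():
            if task in c["tasks"]:
                c["tasks"].remove(task)
        col["tasks"].append(task)
        return True

board = KanbanBoard([("backlog", None), ("test design", 2),
                     ("execution", 1), ("done", None)])
for t in ["review spec", "smoke test", "regression run"]:
    board.add(t)

board.pull("review spec", "test design")
board.pull("smoke test", "test design")
ok = board.pull("regression run", "test design")  # third task: refused, WIP limit is 2
print(ok)
```

The refused pull makes the bottleneck visible: the third task stays in the backlog until a slot frees up downstream.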

This principle is easily translated to testing activities. The maximum flow yields a maximum number of tested system components, changes, bugs or risks.

Kanban is adaptive
Within kanban nothing is predetermined; this makes you very adaptive as a team. The team can shift priorities when the content of the releases or the functionality changes, or when the team needs to assist with solving production disruptions. Every day, during the standing meeting, the team determines what risks need to be addressed and what activities represent the most business value.

New insights can be taken into the decision-making and the kanban board helps to manage the resources. If required, tasks can be placed in the 'priority lane' and will be handled with greater urgency. This makes the teams work in a truly risk-based manner. The impact of changes to the plan is shown on the board and is clear to everyone. Since management is involved in the daily meetings, they understand the impact of these decisions.

Greater visibility and better cooperation
Kanban triggers greater visibility and better cooperation. With the board as the new place to hang out, testers are challenged to help each other. Together with their teammates, they can discuss what it takes to finish a task as quickly as possible. It also pays to invite other disciplines to the standing meetings.

“I recently spoke to one of our clients about him joining our daily meeting and he said that he would rather let the testers join the daily development meetings,” says Suzanne.

“Great idea,” says Simon, “but only because our organisation is Agile; that wouldn’t work in more traditional organisations. In these organisations it can really pay to invite someone from the business, the release manager or someone from the development team to the testers’ Kanban board. We have seen that this really boosts the commitment and involvement.”

Derk-Jan de Grood
Testing expert
Valori
www.valori.nl

It is vital for a test department to demonstrate the value it adds to the business. Cooperation is a good enabler for this, because the people with whom you work know what difference you make, and visual management seems to be a perfect tool for this. The boards give a permanent status overview and a manager can visit the board at any time.

It is also important to tell a good test story, and great opportunities for telling your story arise during these meetings. Team members exchange information about their insights, challenges and progress. This ensures that others know what is happening. Interested parties can see for themselves what work is being done and what needs to be done. This promotes engagement and involves stakeholders in decision making.

More fun, from workflow to personal flow
“Have you ever been in flow?” This is an unexpected question. Suzanne and Simon love working while in flow; they meet and have brainstorming sessions. Apparently, the question is relevant for many people, and flow is not obvious to everyone.

“Many people are in flow when they are busy with their hobby, but not when at work,” explains Maik. “Lean, kanban and visual management ensure that people are positively challenged. They are part of the team and get into flow much faster.”

“How’s that?” asks Simon.

“When working with kanban, you’re working together,” says Suzanne. “The whole team is working towards a common goal. Team members are asked to perform the things they are good at. The advantage is that you can be yourself, with more responsibility and autonomy. You'll also get immediate feedback on your performance, since the board makes visible what you have accomplished.”

“More results and more fun?” Simon asks. Maik and Suzanne nod affirmatively.

Maik, Simon and Suzanne can’t imagine that any serious manager would not want to invest in this. Satisfied with the outcome of this initial brainstorming, they take a photo of the whiteboard. “We need to involve more people in this discussion,” says Simon. “We have a lot to learn, but with this we can certainly improve our testing process and get recognition for the value we contribute.”

They ‘high five’ and go home for the weekend.


Lean and agile software development specialist Paul Wilson reports back from the annual Global Day of Code Retreat, the day when developers gather to hone their skills...

Practice makes perfect

In December last year, around 3,000 programmers in 160 cities around the world gave up their Saturday to write code – code they deliberately threw away. They were not being weird or frivolous. They were there to take part in the Global Day of Code Retreat, and improve their craft. Musicians do not practise by only playing symphonies. Marathon runners do not train by running marathons. They engage in directed practice such as playing musical scales or interval runs. A Code Retreat is a community event to help programmers engage in directed practice. They work on improving their code, free of the constraints of time pressure or having to finish the task.

Playing the game
The Code Retreat format consists of multiple sessions of implementing ‘Conway's Game of Life’:
• Each session lasts for 45 minutes.
• The code is written in pairs, meaning that two people share one computer and collaborate.
• It is also Test Driven, which means that an automated unit test is written before each piece of code and the code is improved before writing the next test.
• Pairs are swapped after every session.




• Code is deleted after every session.

After each session the group shares their experience in a short retrospective. The retrospective concentrates on anything learnt. The main focus of the retrospective is to make the group think about the extent to which each pair has managed to follow the XP rules of simple design. These are:
1. The code passes all the tests, implying that automated tests have been written and the code does what is intended.
2. The code reveals intent. This is a measure of how easy it is to understand what each piece of code is supposed to do.
3. There is no duplication, also known as the Once and Only Once Rule. Duplicate code can be considered bad code as it is hard to change and understand. Duplication can be as obvious as code copied and pasted from one place to another but can also be subtle, where a rule is expressed in two different places in the code.
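The third rule can be seen in a few lines of Python. The discount rule, numbers and function names here are invented purely to illustrate subtle duplication and its removal:

```python
# Subtle duplication: the same business rule expressed in two places.
def invoice_total(amount):
    if amount > 100:        # rule: orders over 100 get 10 per cent off
        amount *= 0.9
    return amount

def quote_total(amount):
    if amount > 100:        # the same rule again; changing one silently
        amount *= 0.9       # leaves the other behind
    return amount

# Once and Only Once: state the rule in a single place instead.
def apply_discount(amount, threshold=100, rate=0.10):
    return amount * (1 - rate) if amount > threshold else amount

print(invoice_total(200), apply_discount(200))
```

The duplication here is not copied text but a repeated decision; extracting it means a change to the rule happens once and only once.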

Time pressure can be detrimental to this kind of deliberate practice. Throwing the code away and imposing the 45-minute time limit on each session is designed to relieve the pressure of finishing the problem. Most pairs do not manage to implement Conway's Game of Life during a session, and that is desirable: unlike a programmer's day job or even a hack session, the value of Code Retreat lies in improving skills rather than producing useful code.

The problem, Conway's Game of Life, is chosen as it is fairly complex to implement but easy to understand. A problem that was hard to understand would get in the way of concentrating purely on improving code skills.

Tell don’t ask
After the first iteration, the facilitators introduce a constraint for each subsequent iteration. As well as keeping the exercise interesting, the constraints are designed to provoke different ways of thinking and working with code. A common constraint is ‘no return values’, which often leads to a more object-oriented way of working known as Tell Don't Ask.
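To illustrate the distinction, here is a hedged Python sketch using a Game of Life cell; the class and method names are invented. In the ‘ask’ style the caller queries state and decides; in the ‘tell’ style the object is told about its neighbours and reports what happened, with no return values:

```python
# "Ask" style: the caller queries state and decides; the rule leaks out
# of the cell into every caller.
class AskCell:
    def __init__(self, alive):
        self.alive = alive

    def is_alive(self):
        return self.alive

def ask_step(cell, live_neighbours):
    if cell.is_alive() and live_neighbours in (2, 3):
        return True
    return (not cell.is_alive()) and live_neighbours == 3

# "Tell" style: we tell the cell about its neighbours and it tells a
# listener what happened; no values are returned to the caller.
class TellCell:
    def __init__(self, alive):
        self.alive = alive

    def react_to(self, live_neighbours, listener):
        if self.alive and live_neighbours not in (2, 3):
            listener.died(self)
        elif not self.alive and live_neighbours == 3:
            listener.was_born(self)

class Recorder:
    def __init__(self):
        self.events = []

    def died(self, cell):
        self.events.append("died")

    def was_born(self, cell):
        self.events.append("born")

recorder = Recorder()
TellCell(alive=True).react_to(1, recorder)   # under-population
TellCell(alive=False).react_to(3, recorder)  # reproduction
print(recorder.events)  # ['died', 'born']
```

With no return values, the decision logic stays inside the cell and the outcome travels outward as behaviour, which is exactly what the constraint is meant to provoke.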

Code Retreat came out of a discussion at the CodeMash conference in Ohio. Corey Haines, a well-known figure in the software craftsmanship community, helped create the format and has been responsible for popularising it since then. For the last two years there has been a Global Day of Code Retreat: Code Retreats on the same day in different cities around the world.

On December 8, 2012, Neo hosted and facilitated the Edinburgh chapter of Global Day of Code Retreat. After the first zero-constraints session, also known as the warm-up session, we introduced a single constraint per session:
1. Ping Pong: One member of the pair writes a failing test and it is up to the other member to implement the test. Pairs that are more experienced with TDD swap roles, while those who are new to that discipline do not. This is a good ice-breaker and gets people comfortable with pairing.
2. Mute Ping Pong: Like Ping Pong, but the pairs are not allowed to communicate with each other except through the tests or code. This becomes an exercise in writing focussed and expressive unit tests.
3. No returns: After a method performs a calculation, the participants are banned from returning the result from the method; they have to find other ways to pass on the information. This proved a particularly challenging exercise for many of the participants. While many did find value in it, we might have coached the others a little better.
4. Back to back: One straightforward iteration is performed without any constraints. This time the code is not deleted, but the pairs are told to write down the areas of the code they could do better. After the retrospective the pairs remain the same but the code is swapped; other pairs are then given the task of trying to improve the previous pair’s code.

The feedback at the end of the day was overwhelmingly positive. It is amazing how enjoyable failing to implement the same problem several times can be. The most gratifying part was winning over the participants who were initially sceptical about pair programming and Test Driven Development.

Paul Wilson
Managing director
Neo UK
www.neo.com


Conway’s Game of Life
The Game of Life is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves.
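As a rough illustration of why the problem suits a 45-minute session, here is one possible minimal implementation in Python. It is a sketch, not the code written at the event, representing the board as a set of live cells:

```python
from itertools import product

def neighbours(cell):
    # The eight cells surrounding (x, y).
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    # Count live neighbours for every cell adjacent to a live one.
    counts = {}
    for cell in live:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    # Survival with 2 or 3 live neighbours, birth with exactly 3.
    return {cell for cell, c in counts.items()
            if c == 3 or (c == 2 and cell in live)}

# A 'blinker' oscillates between a horizontal and a vertical bar of three.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))           # [(1, 0), (1, 1), (1, 2)]
print(step(step(blinker)) == blinker)  # True: back after two generations
```

The rules fit in a dozen lines, yet choosing the representation and naming things well is where the practice value lies.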

Virtual testing


A closed control loop requires a closed testing loop. Simon de Boer and Robin Mackaij tell us how they implement simulation in their test environments.

Growing Pains


Priva Horticulture develops horticulture environment control systems. Their performance is critical to big business operations. The market value of the plants in a single large greenhouse can be millions of Euros and they are typically very delicate, so the margin of acceptable error is very small. Even a slight software failure could reduce this value dramatically by damaging crop quality and/or quantity.

Therefore testing must be thorough but, as with any control system, it is complicated by the difficulty of predicting and creating realistic external behaviour. Inputs and outputs must be simulated, but so must their interdependence. It is the job of the system under test to react to input (eg a rise in temperature), but the reaction (eg opening windows) is intended to affect input, and can also do so unintentionally. For testing to be effective in preventing failure in production, those effects must be simulated accurately. Here we will explain some of the approaches we have used and intend to use to enable this, and the challenges to them.

Greenhouse system architecture
The system under test is the process computer that executes the control logic: we call it the ‘controller’. It communicates with large numbers of sensors of temperature, humidity etc and actuators (water dispensers, window openers etc) which are distributed throughout the greenhouse and networked using BACnet (a specialised protocol for automation and control systems which transmits analogue as well as digital information) running over proprietary routers, whose number varies with the number of sensors and actuators and the physical space covered.

Where this is very large so that conditions can vary significantly within it, multiple controllers are used which also communicate directly with one another. Although the system provides sophisticated automation, the climate strategy it delivers is set and managed by a person, called the ‘grower’. He or she uses a GUI client, Priva Office, on a workstation connected to a server which logs data from and configures the controller(s) according to the settings made by the grower. Figure 1 depicts all these components in situ, plus an alarm server used to alert others via email, SMS etc should sensor readings move outside parameters set as part of the climate strategy.

Physical vs simulated sensors and actuators during testing
Physical examples of some devices are available for development and testing and of course it makes sense to use them where possible, especially early in the development phase. In the case of other devices this is impossible. For example, some greenhouse installations use very expensive geothermal exchangers which require water flows of up to 100m³ per hour. This and other machinery must be simulated for several reasons:
1. Safety: A high level of assurance provided by testing is required before software can be connected to potentially dangerous machinery.
2. Cost: Apart from the price of the hardware and installations, required physical arrangements create limitations: for example it is not usually practical to accommodate software teams in a field alongside a reservoir.
3. Time: Development and testing must proceed simultaneously with and independently of building and engineering work, not be dependent upon its progress.
4. Scalability: Testing must be of systems with varying numbers of every component. Lab testing with a few devices becomes inadequate early in the development phase; later testing at a specific production installation, even where possible, does not give sufficient assurance against failure at others.

FIGURE 1: GREENHOUSE CONTROL SYSTEM (HARDWARE COMPONENTS)

Simulation in the control loop
Figure 2 depicts the greenhouse system at the abstract level. The controller interacts with the outside world at two points, the user interface and the input from and output to devices. It would be possible to simulate connected devices by replacing the calls made to them by the controller program with calls to functions, similar to ‘stubs’ but whose output is affected by their input, making them ‘simulators’.
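The difference between a stub and such a simulator can be sketched in Python. The sensor, values and window behaviour below are invented for illustration, not taken from Priva's system:

```python
# A stub returns a canned value; a simulator's output depends on what has
# been done to it. Hypothetical temperature sensor whose reading drifts
# towards the outside temperature once the (simulated) window is opened.
class StubSensor:
    def read(self):
        return 21.0  # always the same canned value

class SimulatedSensor:
    def __init__(self, inside=21.0, outside=10.0):
        self.inside, self.outside = inside, outside
        self.window_open = False

    def actuate(self, window_open):
        # Output from the controller (open/close window)...
        self.window_open = window_open

    def read(self):
        # ...affects the subsequent input the controller receives.
        if self.window_open:
            self.inside += 0.5 * (self.outside - self.inside)
        return round(self.inside, 2)

sensor = SimulatedSensor()
print(sensor.read())        # 21.0: window closed, nothing changes
sensor.actuate(window_open=True)
print(sensor.read())        # 15.5: drifting towards the outside temperature
```

The controller's reaction (opening the window) changes the very input it will react to next, which is the interdependence the article says must be simulated.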

This approach is suitable in some cases, but not for the greenhouse system, which is message-based; input is not always requested by the controller. For example a sensor noting a change in reading may report it without being ‘asked’, causing an interrupt of the controller program. A single controller may be connected to up to 1,000 devices, so how it manages and prioritises inputs and outputs to them is a key part of its functionality and must be on the functional test path.

Additionally, non-functional testing (eg for the effects of network latency, gridlock etc) would not be meaningful with this arrangement. Therefore our simulator program needs to run independently of the controller program and pass unsolicited messages to it, as well as responding to its solicitations for messages, all as required by the test design. The problem here is with the design of the I/O cards to which the devices are connected. Typically one sensor or actuator is connected to a physical pin (‘I/O point’) on the card. The card processes signals from the sensors and communicates them to the controller via the network, and does the opposite for instructions from the controller to the devices. A single controller can be connected in this way to up to 1,000 pins. The mapping of pins to inputs and outputs, and configuration for their various types, is done in the control logic, and varies between installations.

Having to manage changing the pin (‘I/O point’) assignments for each test environment configuration would be very unwieldy. This need led us to include a virtual router in our device simulator tool. Since the router communicates directly with the controller at the I/O level, it can determine which channels are and are not in use. So we expanded the functionality of the virtual router beyond that of a real one: it can receive a request from a simulated device for an I/O channel, ascertain which channels are free, assign one of them to the device and register this assignment in the controller. The result is that during test implementation any required input or output can be simulated easily, with the channel/pin assignment taken care of automatically.

FIGURE 2: GREENHOUSE CONTROL SYSTEM (ABSTRACT)
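The automatic channel assignment might be sketched like this in Python; the class and method names are hypothetical, not Priva's actual interface:

```python
# Hypothetical sketch of the virtual router's extra behaviour: a simulated
# device asks for an I/O channel, the router finds a free one and records
# the mapping, so test scripts never manage pin assignments by hand.
class VirtualRouter:
    def __init__(self, n_channels):
        self.free = set(range(n_channels))
        self.assignments = {}           # device name -> channel number

    def request_channel(self, device):
        if not self.free:
            raise RuntimeError("no free I/O channels on this router")
        channel = min(self.free)        # any free channel will do
        self.free.remove(channel)
        self.assignments[device] = channel
        return channel                  # would also be registered in the controller

router = VirtualRouter(n_channels=4)
print(router.request_channel("temp_sensor_1"))   # 0
print(router.request_channel("window_motor_1"))  # 1
```

In the real tool the assignment is also registered in the controller; here it is simply recorded, which is enough to show why testers no longer touch pin numbers.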

Creating and controlling simulators
The simulators for a particular test environment are initialised by an XML script. This contains one XML element for each input: figure 3 shows a simple example, an element for a temperature measurement.

The ‘name’ attribute is arbitrary and used for test configuration management. The ‘type’ refers to an entry in a separate library we have created which defines the technical parameters of the device, in this case ‘compartment’ (identifying a section of the greenhouse) and ‘temperature’, which are set to their initial test values in the script. The same library defines parameters transmitted from and to many other types of device: light and humidity meters, motors, valves etc. Once the script has established the initial test environment, test execution begins.
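Since the figure itself is not reproduced here, the element below is a hypothetical reconstruction of the shape described (the attribute names are assumptions), along with how an initialiser might read it in Python:

```python
import xml.etree.ElementTree as ET

# Hypothetical element: a 'name' for configuration management, a 'type'
# pointing into the device library, and initial values for the device's
# parameters. The exact attribute names in Priva's script may differ.
element = ET.fromstring(
    '<input name="TempComp1" type="TemperatureSensor" '
    'compartment="1" temperature="21.5"/>'
)

# An initialiser would look the type up in the device library and seed the
# simulator with the remaining attributes as initial test values.
device_type = element.get("type")
initial = {k: v for k, v in element.attrib.items() if k not in ("name", "type")}
print(device_type, initial)
```

One element per input keeps the environment definition declarative: adding a simulated device to a test configuration is a one-line change to the script.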

This of course usually requires real-time changes to the behaviour of the simulators. These can be implemented in three ways. First, the simulation program also provides a GUI providing instant control of the simulators, for example by changing the ‘measured’ values, causing ‘detected’ events (including known failure of devices) and non-‘detected’ events (including unknown failure of devices) to be reported to the controller. This is like the GUI used by the grower in production, but in reverse. The grower's GUI displays information from sensors and issues commands (from the controller) to the actuators; the simulation GUI issues commands to the simulated sensors and displays the commands issued by the controller to the simulated actuators. By operating both GUIs simultaneously both systematic and exploratory testing can be done.

In the case of systematic testing, however, this execution method can cause difficulties with test repeatability due to human error, especially in timing. Therefore it is more usual to automate these tests. This is done not by using an automation tool to control the simulation GUI, but directly, by having the simulation program execute further scripts. For example a script can be written to simulate an increase of outside temperature (perhaps causing the controller to open windows) shortly followed by a rapid increase of wind speed and sudden rain. Scripts such as these can be invoked from the simulation GUI as part of exploratory testing or linked together for longer automated systematic test execution.

Sometimes it is better for a simulator to revert to acting as a simple stub. For example the controller may ‘ask’ a servo motor for the current aperture of the window it moves. Attempting to simulate variation in this caused by adjustment of mechanical parts, physical conditions etc may be appropriate for some tests but is not for others: in fact it may disrupt them or skew their results. Therefore we often script a simulator to report a value (or a value derived from a value) already held within the controller: in this case the current ‘desired’ aperture, or a value derived from it. Thus tests with tight focus on certain devices or behaviours can be simplified by making other simulated devices behave exactly as the controller ‘expects’.

Closing the test loop
Where the SUT is the controller, the method described has proven effective and convenient. We are satisfied that we can implement whatever is needed by any test. That of course does not mean the test design is necessarily realistic, but at least we know that continuing work to make it more so is worthwhile.

FIGURE 3: XML ELEMENT IN THE SIMULATION INITIALIZATION SCRIPT

Simon de Boer
Test engineer
Priva BV
www.priva.nl

Testers of business systems might expect testing the grower's GUI to be more straightforward: in fact it is harder, because we currently have no validation method. We can create and execute (via capture/replay and/or manual scripting) tests of it using almost any of the wide range of test execution tools available. But we don't know the expected outcomes, because they depend upon what is reported by the external devices, whether real or simulated, and the values held within the controller program and its data, all of which vary in real time.

We must either halt the script before each validation point while we establish the expected outcome by checking and/or setting simulated values manually using the simulation GUI, or synchronize the GUI test script with the simulation scripts. The first is cumbersome and slows test execution; the second is technically difficult and slows test implementation. Either reduces the benefit of test automation.

We are working on a solution using new SOAP interfaces to the simulation program/virtual router and the controller. Since many test automation tools support SOAP out-of-the-box, this will enable their scripts to discover and change the relevant information. When this is working, the tests will be able to validate that what is being displayed to the grower matches what has been reported by the devices and that the grower's commands are being actioned by the controller. We expect then to be able to deliver true end-to-end testing, in both directions.

Robin Mackaij
Test specialist
Sogeti High Tech
www.sogeti.com/hightech





A former tester and now systems engineering lead at virtualisation specialist Embotics, Paul Martin discusses the benefits of testing in a virtualised environment.

Testing in a virtual world

There are many advantages to using virtualised systems for testing, not the least of which is that any damage the raw code causes is confined to the virtual testing environment. But there are other benefits of productivity, cost, and efficiency too. Paul Martin, systems engineering lead at virtualisation specialist Embotics and a software tester in a previous role, looks at how virtualisation has affected testing since its first use in this field.

“Virtualisation first started to take hold in test and development about a decade ago,” explains Paul Martin. “Up to about ten years ago testers and developers were still mainly running test and dev workloads on physical infrastructure. Over time this gets more and more expensive when you have under-utilised capacity on your servers. Virtualisation, certainly from a consolidation standpoint, seems a lot more attractive.”

As a former tester himself, Martin saw first-hand how virtualisation started to benefit testing. “I actually worked in test and development before getting into virtualisation,” he says. “The main advantage of virtualisation for testing was that from a consolidation standpoint we could make much more efficient use of servers. Beyond that, we found that when we were running test harnesses we could take snapshots of applications as they progressed through the process. In this way we had the snapshot to refer back to if we broke anything. This would have been more difficult to do in a physical environment, but was easy in a virtualised one because the VM is just a file which you can take a snapshot of and copy, back up and pass around to other testers for them to use.”

Virtualisation really came into its own in the early days as a low-risk testing environment. “If something went wrong there weren’t too many people affected,” says Martin, “and it wouldn’t take down your complete IT infrastructure for days or anything like that. It was these benefits that were the real low-hanging fruit of virtualisation.”

Complex testing environments
As the testing world got more complex, virtualisation often provided an ideal solution. “As the requirements for test and dev environments became a little more complex, instead of perhaps a single VM for a specific task you might have multiple VMs or things like linked clones, and you need to manage the storage for these,” says Martin. “With linked clones you have a base VM image with clones linked to it that can be spun up really quickly. If you have someone that needs to prep an application quickly, they can deploy a VM for it in a few seconds. Couple this with things like automation technology, where you can click a button to build a virtual application (vApp) straddling maybe three or four different VMs with the controllers and middleware and a web-based back end, and you start to get a complex, full-blown test environment totally isolated from everybody else. Automation also starts to become much more important when you build in a bit more complexity than a single-image VM to run an application.”

Of course, for many organisations automation is the bedrock of their testing processes. "Absolutely, but when I first started in test and dev in 1996, everything was done on checklists; there wasn't too much you could do with automation in those days. With the advent of testing tools like Mercury Interactive, for example, you start to see more and more things becoming automated: the test harness becomes automated; the chain of VMs becomes automated as well; and automation becomes key. It is pretty much the bedrock of what we do at Embotics as well."

State-of-the-art VM testing
So what constitutes the state of the art in virtual testing environments? "It used to be that Lab Manager was the de facto standard for anybody deploying a VM lab environment, especially on premise, because it offered a number of capabilities that you don't see in a physical environment," says Martin. "Linked clones, for example, give me a way to spin up VMs on demand very quickly, and this kind of capability is absolutely essential in a test and dev environment these days. You don't want to be waiting around for a couple of days for an application to be deployed; you need it pretty much there and then.

"The other thing that you need to take care of is making sure that you don't have a lot of unused VMs clogging things up and wasting storage," adds Martin. "Having VMs with no lifecycle management can be just as big a problem, because if I have a VM inside my infrastructure that nobody needs any more, what happens if we have a sudden surge in demand? One of the aspects we address is lifecycle management. When a VM is created we can attach an owner to it, so we know who owns it and they can give us an indication of how long they need it for, and at the end of its lifecycle we can wrap some policy for decommissioning and backing up in there too. While this is great for management, we also have to ensure that the VMs are still as quick and easy to deploy as possible."
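The lifecycle policy Martin describes, an owner and a lease attached at creation with backup and decommissioning at the end, might look something like this in outline. This is not Embotics' actual product logic; every name and threshold here is invented for illustration.

```python
# Sketch of VM lifecycle policy: every VM gets an owner and a lease;
# expired VMs are backed up and decommissioned so they stop clogging
# storage.
from datetime import date, timedelta

class VM:
    def __init__(self, name, owner, lease_days, today):
        self.name, self.owner = name, owner
        self.expires = today + timedelta(days=lease_days)

def reclaim_expired(vms, today, backup):
    """Return still-live VMs; back up and drop the expired ones."""
    live = []
    for vm in vms:
        if vm.expires < today:
            backup(vm)          # policy: back up before decommissioning
        else:
            live.append(vm)
    return live

today = date(2013, 2, 1)
vms = [VM("ci-runner", "alice", lease_days=30, today=today),
       VM("old-demo", "bob", lease_days=7, today=date(2013, 1, 1))]

backed_up = []
vms = reclaim_expired(vms, today, backed_up.append)
# "old-demo" expired on 8 January, so it is backed up and reclaimed.
```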

Moving to the left
The company Martin works for, Embotics, uses virtualisation in its own test and development departments. "We develop and test our software in an Agile environment, and obviously the key thing about VMs is that they can be deployed very quickly," says Martin. "If someone is prototyping a new feature, for example, and they say they need a VM, you don't want to make them wait a couple of days – development tend to get a bit grumpy if you do that. We use our own solution in house, and they can pretty much provision VMs on demand. The only thing we do is wrap a bit of policy around that: we don't want someone deploying hundreds of VMs at a time and exhausting all the storage, so we have to have some checks and balances."

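Those "checks and balances" around on-demand provisioning could be as simple as a quota check before each request. A hedged sketch, with made-up limits:

```python
# Illustrative only: per-owner caps on VM count and total storage,
# checked before a self-service provisioning request is granted.
MAX_VMS_PER_OWNER = 5
MAX_STORAGE_GB_PER_OWNER = 200

def can_provision(existing, owner, size_gb):
    mine = [vm for vm in existing if vm["owner"] == owner]
    if len(mine) >= MAX_VMS_PER_OWNER:
        return False, "VM count quota exceeded"
    if sum(vm["size_gb"] for vm in mine) + size_gb > MAX_STORAGE_GB_PER_OWNER:
        return False, "storage quota exceeded"
    return True, "ok"

fleet = [{"owner": "dev1", "size_gb": 180}]
ok, reason = can_provision(fleet, "dev1", size_gb=40)   # would hit 220 GB
ok2, _ = can_provision(fleet, "dev1", size_gb=10)       # fine: 190 GB
```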

Paul Martin, systems engineering lead, Embotics, www.embotics.com

TEST | February 2013


Making the most of your DevTest cloud with service virtualisation

Conventional approaches to software development and testing are costing money and hurting the organisation's bottom line, says Chris Rowett, senior director, technical sales, CA Technologies. Service virtualisation may be a way to tackle the problem.

Today, most large enterprises, no matter what industry they serve, are software developers. Applications used by staff or customers are developed internally by the development team and/or with development partners. For many organisations, their applications, particularly those that are customer-facing, are the differentiators between them and their competitors, so getting them right is crucial.

Development and testing teams are under immense pressure to speed up the testing process in order to deliver applications to market as quickly as possible, but one of the main issues is the lack of pre-production infrastructure available to them. In recent years, many organisations have turned to the cloud to help with this issue, but it cannot solve all of their problems, as there are some components that cannot be replicated in the cloud. This is where the ability to create a virtual service, also known as service virtualisation, comes into play.

Service virtualisation
Software developers today face significant constraints, such as waiting for access to shared systems, waiting for other teams to deliver components they depend on, and the cost and time needed to reset and synchronise environments. Service virtualisation enables developers, testers and others to remove these constraints by replacing dependent systems with virtual services.

A virtual service simulates the behaviour, data and performance characteristics of a dependent system while consuming a fraction of the infrastructure, and it is instantly available to each team that needs it. By removing these constraints we allow software to be developed and delivered faster, with lower costs and higher reliability. It is a fundamentally new technique in software development, and it complements existing technologies and methodologies such as the DevTest cloud.

Service virtualisation and DevTest cloud
With the current economic uncertainty, companies are desperately trying to improve their applications in order to retain their customers and win new ones. Some are also trying to reach new markets or sub-sectors, so have to develop new applications from scratch.

This means every large enterprise needs to put as much focus and effort into software development as it does into marketing, customer service, finance and all the other key functions that help to run a successful business.

Today, business leaders want more quality applications, delivered on time and on budget, than ever before. Any project that runs late or over budget is an additional cost to the business. Applications that arrive late also give competitors a chance to get in first.

This obviously has a big impact on the development and testing team. With the number of application releases going up each year, along with the continuous improvement of existing ones, there is increased pressure to get the development and testing stages of the lifecycle over and done with as quickly and efficiently as possible. In the past, this has often meant that the testing stage is reduced in order to meet the deadlines set by the business. The potential risk, of course, is the increased chance of putting a faulty customer-facing application into production, the knock-on effect of which is damage to the company's reputation and the potential loss of customers. In today's competitive landscape, putting an application into production that has not been rigorously tested against real scenarios is not an option.

This added pressure on the development and testing team is compounded by the fact that the budgets are not going up in line with the number of extra releases, making time and staff resources even more precious.

Into the cloud
To test an application, development teams need to be able to test it in real time and understand exactly how it will behave in a real-world environment. To do this, a replica of the production environment has to be created. However, this is easier said than done, and it is one of the primary obstacles when it comes to testing. Recreating a life-like production environment, which has to be done for each incremental release, takes time and money.

The cloud is an obvious solution to this problem. Its elastic nature means that application development teams can provision an environment in minutes, rather than with the huge effort it used to take. However, the cloud is not without considerable issues when it comes to the testing and development of applications. The problem is that many applications depend on infrastructure that is not possible and/or cost-effective to replicate in the cloud, such as a mainframe, third-party fee-based services or full databases. Without these crucial pieces of the puzzle, the development project cannot move forward: if it takes three weeks to get access to a mainframe, then it still takes three weeks of waiting to provision a cloud lab. We call this the "wires hanging out" issue.

Service virtualisation makes cloud real for on-demand development and test environments. Organisations can use virtual services alongside virtual machines to capture and simulate those ‘wires hanging out’ and manage them in a complete DevTest Cloud environment. Preproduction teams can now get complete labs that include stable versions of all the mainframes, data scenarios and services they need to truly realise elastic capacity.
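To make the idea concrete, here is a toy virtual service standing in for one of those 'wires hanging out': an imaginary rate-lookup back end that the team cannot get lab access to. Everything in it, the class, the endpoint and the data, is invented for the example; real service-virtualisation tools record and replay live traffic rather than hand-written responses.

```python
# A toy virtual service: it simulates the responses and latency profile
# of a dependent back end without the real system being present.
import time

class VirtualService:
    def __init__(self, recorded, latency_s=0.0):
        self.recorded = recorded      # request -> canned response
        self.latency_s = latency_s    # simulated performance profile

    def call(self, request):
        time.sleep(self.latency_s)    # mimic the real system's timing
        if request not in self.recorded:
            return {"status": 404}
        return self.recorded[request]

# Stand-in for a mainframe rate-lookup the lab cannot replicate.
rates = VirtualService({"GBP/USD": {"status": 200, "rate": 1.58}},
                       latency_s=0.01)

resp = rates.call("GBP/USD")
miss = rates.call("EUR/JPY")
```

Preproduction code can now exercise both the happy path and the miss case on demand, with no dependency on the real system's availability.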

The result for the development and testing team is increased efficiency, reduced costs, higher quality earlier in the application development lifecycle, also known as the ‘shift-left’ effect, and a better relationship with the business. The benefit to the overall business is a much quicker route to market, greater customer satisfaction, increased revenue, improved market reputation and more competitive products.

Achieving high-performance cloud environments
Cloud is best used when the frequency of demand varies among a variety of uses of a particular infrastructure. Different applications have different capacity needs over time. The ability to share one common resource pool among many development teams gives the appearance of higher capacity on a per-team basis, when in fact organisations are simply utilising the unused capacity of other teams.

For example, one team might peak its usage during performance tuning or a ‘big bang’ release cycle. If other teams are simply doing typical development and test activities they are generating no such peak. This works well if each team plans its peak performance testing times when other teams don’t need that additional capacity. However, if each team is under pressure to get their applications ready for release this can lead to problems.

Using cloud combined with service virtualisation allows for a whole new economy in the development of high-performance applications. This creates a dramatic decrease in the cost structure as the overall development and test infrastructure requirements and costs go down for shared capacity, including on-premise and off-premise cloud.

When virtual services represent out-of-scope systems, they use computing resources far more efficiently than a live system does. For example, it might take several virtual machines (VMs) to represent a back-end application at 25 percent capacity, whereas a virtual service in preproduction will consume only a fraction of the CPU and memory of just one of those machines.
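The resource argument can be made concrete with some back-of-envelope arithmetic. All of the figures below are invented for illustration; the point is only the shape of the comparison.

```python
# Comparing the lab footprint of an out-of-scope back end modelled by
# several VMs against one lightweight virtual service.
def lab_footprint(vm_count, vm_cpu, vm_mem_gb):
    return {"cpu": vm_count * vm_cpu, "mem_gb": vm_count * vm_mem_gb}

# A 25%-capacity replica of the back end: four 2-CPU/8 GB VMs...
as_vms = lab_footprint(vm_count=4, vm_cpu=2, vm_mem_gb=8)

# ...versus a virtual service consuming a fraction of one such VM.
as_virtual_service = {"cpu": 0.5, "mem_gb": 1}

saving = {k: as_vms[k] - as_virtual_service[k] for k in as_vms}
```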

In a typical performance test, the entire architecture has to scale to the load desired, making the most over-used systems become bottlenecks. In the virtual environment world of VMs and virtual services, only the VMs must scale. An almost infinite number of virtual services can be instantly launched and utilised on-demand, with all the elasticity the cloud can offer.

If more scale is needed for performance tests, only a fraction of the entire lab must be scaled up, while the typically larger and more complex systems represented by virtual services will scarcely need to scale at all. When an enterprise's IT management team understands its capacity in this regard, it has a greater ability to make sound economic decisions about how best to use cloud-based infrastructure.

Addressing the issues
In July 2012 CA Technologies commissioned a research study, 'The Business Benefits of Service Virtualization'. The survey, conducted by Coleman Parkes, includes feedback from over three hundred software development managers at large enterprises with revenues of more than $1 billion (or equivalent) in the UK, France and Germany. The vast majority of respondents (90 percent) stated that they had problems with the availability of systems and applications, such as databases or mainframes, for development and test purposes, leading to delayed projects, overspending and a lack of quality control.

The research suggests that organisations are taking steps to address their development and testing issues with nearly half of respondents (44 percent) indicating they are moving to a cloud-based development and test environment. However, as we know, moving to a cloud-based environment is not the only answer as it doesn’t solve the ‘wires hanging out’ issue. Service virtualisation can aid those moving their preproduction environments to the cloud. In simple terms, it is a capability that allows testers to remove the constraints of dependent systems. It does not force those testing an application to choose between the three critical criteria in application development and testing: cost, quality or schedule.

Respondents of the survey also recognised that better processes for developing and testing software applications can result in the following business outcomes: greater customer satisfaction (80 percent), increased revenue (72 percent), improved market reputation (72 percent) and more competitive products (71 percent).

The need from the business is clear. Leaders now expect the IT function to deliver greater operational efficiency, reduce time to market, increase revenues and, to some extent, build and sustain competitive advantage and be a driver for innovation. However, they also need to acknowledge that conventional approaches to software development and testing are costing money and impacting the organisation’s bottom line. Business leaders need to invest in their development and testing teams now to get the results that will drive the business forward in the future.

Chris Rowett, senior director, technical sales, CA Technologies, www.ca.com



It's not just Bring Your Own Device (BYOD) any more; the future is Bring Your Own Software (BYOS), and this is creating a need for extensive testing to guarantee the usability and robustness of apps on mobile devices. Rohit Garg reports.

The future is application-based - on your mobile

Executives and employees already use their personal devices (mobiles and tablets, laptops instead of desktops) at work, and this trend has been hugely influenced by the increasing availability of affordable smart devices on offer to consumers.

The next stage of this trend, however, is the move towards personalised software applications, ie apps and tools the individual has selected for use on their own device. The rise of open source software and the popularity of app stores indicate that not every employee wants to have their software and applications dictated to them, in the same way that we no longer want to be forced to use a specific device.

While this choice and openness is welcomed by employees and individuals, it creates further challenges for the IT department and the business in general. More importantly, however, it also creates a need for extensive testing to guarantee the usability and robustness of apps on mobile devices.

Mobile app testing
Apps are in need of extensive but rapid, effective development and testing. Formerly considered just the domain of teenagers and gamers, they are now making inroads into the business world and therefore need ongoing testing, both for a consistent experience and to ensure they are robust and fit for purpose. There are often problems when apps are updated, for instance, with loss of functionality or new bugs introduced and, therefore, loss of customer confidence.

In addition, while technological advancements and the proliferation of devices across operating systems (Apple iOS, Android and Windows Mobile) and platforms have created more opportunities for personalised apps and software on a mobile device, they have also made it more challenging for hardware manufacturers and application developers to develop and roll out new products.

For peace of mind, mobile applications must be tested to ensure they run on key platforms and across a multitude of networks. Despite the pressures of short mobile development cycles, rapid quality testing of applications across operating systems, device platforms and networks is a necessary but daunting task to ensure long-term success in what is a highly fragmented and competitive global market. Moreover, nonfunctional testing – including usability, security and adaptability – is as critical as functional testing.

Testing criteria
There are three key areas that require in-depth testing when considering rolling out apps for mobile. The first is device diversity. With multiple platforms and browsers available, there will be rendering differences that need to be tested and optimised accordingly. Different devices also mean variations in application run times, which again need to be tested and optimised where possible.

Secondly, even if the mobile and the app function in harmony, there will be network challenges. From GSM/GPRS to WiFi/WiMAX and now 4G in certain regions, multiple network types can lead to different speeds of connectivity across geographies, and apps need to be tested in these different regions to see how they function with each network operator's customised network features.

Finally, challenges can arise in the shape of hardware differences, leading to limitations in processing speeds, as well as in the memory size of the mobile device. And there will be variations in the communication protocols of each device (WAP/HTTP, etc).
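The three criteria above multiply out into a combinatorial test matrix. A sketch, with invented example values:

```python
# Device platforms x network types x hardware profiles = the test matrix
# a mobile app team has to cover.
from itertools import product

devices  = ["iOS", "Android", "Windows Mobile"]
networks = ["GSM/GPRS", "WiFi", "4G"]
memory   = ["low-memory", "high-memory"]

matrix = [{"os": d, "network": n, "hardware": m}
          for d, n, m in product(devices, networks, memory)]

# 3 platforms x 3 networks x 2 hardware profiles = 18 combinations,
# which is why rapid, automated coverage matters so much here.
```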

Mobile testing industry opportunities
There are major opportunities in the testing space, all the more so due to the growing mobile market, which is creating higher demand for specialised mobile testing. Compared with a desktop or notebook environment, mobile device system resources (eg, processing power and memory) are limited. To embrace Bring Your Own Software, or personalised apps on employees' mobile devices, performance testing of mobile applications is crucial.

Thanks to the burgeoning mobile market and the mobile testing industry that has sprung up around it out of necessity, mobile applications are becoming increasingly sophisticated. This has a cyclical nature to it however; it also significantly increases the requirement for functional testing, further driving demand in the mobile application testing space.

Mobile apps in the enterprise
Whereas a consumer mobile app not working might damage the relationship with the consumer, using mobile applications for business-focused activities – working on potentially sensitive and confidential information – raises the bar, demanding more stringent and sophisticated mobile app testing.

Many organisations will not necessarily have this testing capability in-house. In those cases they should turn to trusted partners who have established labs for testing mobile applications. Only then can companies have confidence in their employees working with mobile apps on business matters, ideally with the higher rate of productivity that the BYOD trend has already demonstrated. Such a partnership ensures that organisations can use the infrastructure established by the labs, reducing capital expenditure while ensuring the exhaustive testing required to take the business forward.

Rohit Garg, regional testing practice leader, Cognizant, www.cognizant.com


It's 2013 – time to be Agile?

Happy New Year everyone! Angelina Samaroo is hoping that 2013 is our best year yet. We survived the end of the Mayan calendar, so time for a new cycle – time to get Agile.

Angelina Samaroo, managing director, Pinta Education, www.pintaed.com

As the trend towards using Agile methods for software development continues, I thought I would devote this year's articles to the Agile principles. There are conveniently 12 of them, so two per issue. To set out my stall for the year: I like much of the Agile methodology, and can see where it adds significant value. However, like the V-model, it has a few niggles. In these articles I will try to play devil's advocate, so that when we enter the Agile world, we have both sides of the story (as I see them).

The Agile manifesto
Firstly, a reminder of the Agile Manifesto, which consists of four points. These are that Agile models value:
• Individuals and interactions over processes and tools;
• Working software over comprehensive documentation;
• Customer collaboration over contract negotiation;
• Responding to change over following a plan.

In other words we focus less on verification processes, and more on validation. We move to the beat of the customer, not necessarily striving for the hallmark of quality. The ink on the hallmark may well not have time to dry before we find ourselves sprinting again.

The first principle listed at http://agilemanifesto.org/ is that ‘Our highest priority is to satisfy the customer through early and continuous delivery of valuable software’.

In other words, we bring the customer in early in the project, and keep them there until the bitter happy end. This sounds great on paper, but putting it into practice could be problematic. The first part of this principle concerns project management; specifically priority setting. We prioritise implementation over everything else. This opens up the possibility that we transfer the risk of poor supplier practices to the customer.

We set the expectation that we will deliver what they want when they want it. However, without proper business analysis, system design, people skills and training, systems delivered could well look as they should but not function as well as they should. If we take a leaf out of Darwin’s book, then it is the fittest who survive.

In the race to satisfy the customer we may miss the opportunity to fine-tune our processes so that we can stand the test of time. The customer who needs to be satisfied today may well be the marketing manager but, as HMV and Jessops have shown, some marketing strategies may well not have been thought through early or deeply enough. Marketing may set themselves up as our customer, but it is the consumer who keeps us alive, by buying what we're actually selling, not the dream we had on the storyboard.

Ch-ch-changes
The second principle is that 'We welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.'

The phrase 'changing requirements' conjures up many a risk for testers and developers alike. I would have preferred 'changed requirements': not a wave rolling in the economic tides, but a boat coming in to dock every so often for top-ups and repairs. In other words, the requirements change once we have launched something.

At the end of each and every sprint there must be valuable software, if we are to support principle one (as I'm numbering them). If we're building in sprints, getting agreement from our internal customers as we go, but launching only when we have all the sprints signed off, then we have retained a key risk of the V-model – the consumer sees the product late, when it is just as costly to fix.

This is the end point
The next phrase, 'even late in development', suggests that we have a definite end point. It is difficult to be late without a deadline. If we have a deadline, then changing requirements mean we will always miss it, unless we take remedial action. If we do not miss a deadline due to a changed requirement, then we have allowed for it. If we allowed for it, we must have guessed what it might be, since it wasn't what we started with. And if we guessed at it, then we're planning in the dark, denying ourselves the important quality checkpoints at the end of each sprint.

The second part, on harnessing change, is followed by the sponsor-pleasing phrase 'for competitive advantage'. This should gain us an audience to get budget approval for what we would like to deliver. The 'harnessing change' phrase is rather more troublesome to pin down. If we harness the lessons learned from each change or sprint, then we have something of value to take forward. At the end of each project the hope is that there will be more. The journal of learning is still relevant; our memories are not what they used to be, that's for sure.

The positive spin
The positive spin on these two principles is that we should move with the times and deliver what our customers want, when they want it. The trick is to be able to distinguish between what they say they want, what they actually want and what they really need. If we do not take the time up front, at the business and systems analysis stages, to unravel these, then the distinction between the wants and the real needs may well become blurred.

To be truly Agile requires a little more than putting requirements on sticky notes and holding stand-up meetings in front of the white board. It requires skills in analysis, coding, testing, project management and working with others. Much like the V-model, but packaged differently, a bit at a time. It works, but you have to work at it.



From hardware to software

Ovum analyst Nick Dillon reports back from the Consumer Electronics Show in Las Vegas, where he notes a marked shift from hardware to software.

The annual Consumer Electronics Show (CES) in Las Vegas has dominated the technology news at the beginning of the year, as usual. Despite the attention it receives, the show's continuing focus on hardware is making it look increasingly dated, as the majority of innovation is now happening in software and services rather than in hardware design. However, one area of hardware where innovation is occurring is the smart, or app-enabled, accessory market. This is a market that has the potential to take off in a similar fashion to the mobile app market and become a significant part of the smart devices ecosystem.

CES is traditionally focused on new hardware products, which is understandable as this is where the majority of innovation has occurred in the past. However, the rise of smart devices has caused a fundamental shift in the consumer technology industry, meaning that this is no longer the case. The focus of innovation has now shifted to the software and services running on these devices, rather than the physical hardware.

Smartphone hardware is now generally ‘good enough’ for the majority of users and as a result it has become commoditised and homogenised. The market has settled on large, slab-shaped, touchscreen devices as the optimum design for smart devices, meaning there is very little ability left for device vendors to differentiate on hardware. This point was illustrated neatly at CES, where the main ‘innovation’ was the launch of larger-screened smartphones or ‘phablets’ as they are often called.

Conversely, the market for software and services continues to grow at a phenomenal rate. Apple recently announced that 40 billion applications have now been downloaded from its App Store, illustrating users’ insatiable hunger for new applications and services. For those looking to keep track of the latest innovations in the consumer technology market, the software-focused events such as Google I/O, Apple WWDC, Microsoft Build, and Facebook f8 are the best places to look.

App-enabled accessories
What differentiates a regular accessory from this latest wave of smart, or app-enabled, accessories is that their functionality generally goes beyond simple audio features (as found on Bluetooth headsets, for example) to provide richer functionality and interaction between the smartphone and the accessory, through applications running on the smartphone.

On a basic level, this includes the ability for both the accessory and the smartphone to receive and display notifications. However, more advanced smart accessories can both control and be controlled by a smartphone. There has been particular interest in smart watches at CES, with the Kickstarter-funded Pebble the poster child of this market, but there are a range of other app-enabled accessories either available now or on the horizon, such as thermostats, fitness trackers, and concepts such as Google’s Project Glass personal heads-up display.

This development could be viewed as the natural evolution of mobile apps; having exhausted the functionality of the smartphone itself, they are moving to other devices. This trend has also been driven by the reducing cost and power requirements of adding connectivity and intelligence to accessories, in addition to the availability of the low-power Bluetooth 4.0 standard. While there will be opportunities for the current smart vendors to capitalise on this market, as with the mobile apps market it will be the third-party vendors that will be the driving force.

Market is in need of standardisation
One of the current hurdles in this market is the lack of standardisation. While there are plenty of technologies available to connect devices to each other, such as Wi-Fi, Bluetooth, NFC and ANT+, there is no standardised way to connect the services running on those devices. This means that, as with the mobile apps market, such functionality is difficult to implement and generally siloed by software platform. While users may be happy to buy a $0.99 app, they may be more hesitant about buying a more expensive accessory without assurances that it will work on more than one OS.

Qualcomm's AllJoyn (not to be confused with the GSMA's Joyn initiative) is one project that aims to address this shortcoming, by providing a standard API for discovering and connecting to devices and services over local and personal area networks. Given Qualcomm's leading position in the smartphone chipset market and its involvement in mobile software development, it is in an ideal position to drive the adoption of such a standard.

Nick Dillon
Senior analyst, devices and platforms, Ovum
www.ovum.com

CES is traditionally focused on new hardware products, which is understandable as this is where the majority of innovation has occurred in the past. However, the rise of smart devices has caused a fundamental shift in the consumer technology industry, meaning that this is no longer the case. The focus of innovation has now shifted to the software and services running on these devices, rather than the physical hardware.

These days it seems that every organisation should be testing its boundaries and seeing just how secure they are. John Yeo, director of Ethical Hacking Incident Response at Trustwave, makes the case for the ‘white hat’ hackers.

Enhance your security with penetration testing

TEST | February 2013 www.testmagazine.co.uk


Based on the fundamental principle that prevention is better than cure, penetration testing (pentesting) is essentially an information assurance activity to determine whether information is appropriately secured. It is conducted by penetration testers, sometimes referred to as ‘white hats’ or ethical hackers, who use the same tools and techniques as the bad guys, the ‘black hat’ hackers, but in a controlled manner and with the express permission of the target organisation. The aim of the exercise isn’t simply to determine whether it’s possible to break through an organisation’s defences, but to identify the breadth and depth of its vulnerabilities.

Naturally a major focus is offering detailed and accessible recommendations to improve an organisation’s overall security posture. Aside from assessing the risk of the more technically oriented findings, a report will typically also provide root cause analysis. This tends to be more business-focused, addressing shortcomings in the organisation’s overarching information security strategy; examples include explaining why a password policy is insufficient, or highlighting inconsistent patch management.

Governance & compliance

Any organisation with sensitive information, such as customer data, personally identifiable information, payroll data, payment card data, intellectual property or trade secrets, should probably be incorporating penetration testing within its wider governance, risk and compliance activities. Pentesters test networks and applications handling many different types of sensitive data using a primarily manual testing process.

One of the prominent drivers for conducting regular pentesting is PCI-DSS compliance, which outlines requirements for penetration testing activities to validate the security controls in place. Other drivers include businesses wanting to validate the resilience of a new IT environment, or perhaps following a major change; fundamentally it’s driven by the desire to ensure the company’s assets and data are well protected from attack. We know that being the victim of a data breach can impact a business’s top-line revenue through negative press, and in some industries the risk of regulatory fines is also at play – no one wants to become the next data breach headline.

Those companies with more mature approaches to security will tend to have proactively incorporated the use of pentests into their strategy, and have a relatively clear roadmap at the beginning of the year, commonly including the network environments and most critical web applications that require pentesting, how frequently they should be tested, and when. Others adopt an ad-hoc approach, sometimes just before a new system goes live or as part of their annual PCI review. The latter frequently just focuses on the infrastructure associated with payment card data and may leave the remainder of the network untested.

Vulnerability scans versus pentesting

A common area of confusion is the relationship between vulnerability scanning (automated) and pentesting (expert-driven manual testing). Both involve a proactive and concerted attempt to identify vulnerabilities that could expose the organisation to a potential malevolent attack.

Vulnerability scanners are great at identifying ‘low-hanging’ vulnerabilities, like common configuration mistakes or unpatched systems, which offer an easy target for attackers. What they are unable to determine is the context or nature of the asset or data at risk. They are also less able than humans to identify unknown unknowns – things not already on the risk register, or that the organisation hadn’t theorised as potential security issues. Good pentesting teams, however, do this very well.
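The gap described here can be illustrated with a toy banner-matching check of the sort automated scanners rely on. The product names and "vulnerable versions" below are invented for the demo; real scanners use large, maintained signature databases, but the principle is the same: known signatures are caught, while anything requiring context or judgement is not.

```python
# A toy version-matching check of the kind automated scanners perform.
# Product names and vulnerable versions are invented for illustration.
KNOWN_VULNERABLE = {
    "ExampleHTTPd": {"2.2.1", "2.2.2"},   # hypothetical product/versions
    "DemoSSH": {"1.0"},
}

def scan_banner(banner):
    """Flag a service banner if its product/version match a known issue."""
    product, _, version = banner.partition("/")
    if version in KNOWN_VULNERABLE.get(product, set()):
        return f"VULNERABLE: {product} {version} (unpatched)"
    return f"ok: {banner}"

# The scanner catches the known-bad version, but says nothing about what
# data sits behind either service, or whether its workflow can be abused.
findings = [scan_banner(b) for b in ["ExampleHTTPd/2.2.2", "DemoSSH/2.0"]]
```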

For instance, we’ve had countless engagements where an environment had previously only been vulnerability scanned, and when we conducted a pentest of that same environment we managed to compromise a number of systems, gained unauthorised domain-administrator or root access, and ultimately gained unauthorised access to sensitive data. One final distinction is that vulnerability scans are unable to detect certain types of security issue, such as subtle business logic flaws, which require a human’s understanding of how a particular workflow or process is supposed to work in order to exploit them.

John Yeo
Director, Trustwave SpiderLabs EMEA
www.trustwave.com

In truth both are required: vulnerability scanning as a frequent (e.g. monthly or quarterly) baseline activity, and pentesting as the more detailed exercise, perhaps once or twice per year, depending on the assurance objectives. The point is that an experienced security tester, ethical or not, often finds critical and high-risk vulnerabilities in environments that regularly undergo automated vulnerability scanning.

Different types of pentest

The most common types of test are directed either at network infrastructure or at a specific application. A network pentest typically covers entire networks and many hosts, sometimes crossing geographical boundaries. Testing is usually conducted both externally, against internet-facing servers and supporting infrastructure, and internally, against internal corporate information systems assets including servers, workstations and IP telephony systems.

Application testing, on the other hand, involves a targeted assessment of an individual, usually web-based, application. The application may be accessible just to the company’s own employees, to third parties or partners, or it could be internet-facing and available to all, such as an e-commerce website. Conducting this type of testing requires authentication credentials so that each role or privilege level within the application can be tested. This enables the tester to ensure that no user role can create, read, update or delete data in an unauthorised manner. Most organisations possess numerous web-based applications, not just the corporate website, that could be a potential entry point for attackers. Our recently published Global Security Report, which gleaned results from 2,000 manual pentests globally, revealed that SQL injection and business logic flaws are the most common web-based vulnerabilities we regularly identify.
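The SQL injection class cited in the report can be shown in miniature. The table, data and attack string below are invented for the demo, using Python's built-in SQLite driver; the contrast is between concatenating user input into a query and binding it as a parameter.

```python
# Minimal SQL injection demo against an invented in-memory table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # String concatenation: attacker-controlled input becomes SQL.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterised query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
leaked = find_user_unsafe(payload)   # the OR clause matches every row
safe = find_user_safe(payload)       # no user literally has that name
```

An automated scanner can often spot the first pattern; the business logic flaws the report pairs it with usually cannot be found without a human who understands what the application is supposed to allow.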

Choosing a pentester

Clearly, choosing a trusted partner to conduct pentesting is itself a sensitive matter, and the field of professional penetration testing is still relatively new and somewhat unregulated. For instance, it lacks a central governing body on professional standards when compared with more established professions, such as financial auditing. Some accreditations do exist, such as those offered by CREST (Council of Registered Ethical Security Testers), however these are chiefly UK-centric, at both company and individual level.

Given the relatively low barrier to entry for organisations claiming to be expert penetration testers, reputation and industry standing are of the utmost importance when selecting a provider. And whilst there are a number of high-calibre individuals working for boutique security consultancies, organisations should seek well-established penetration testing providers with well-documented methodologies, careful recruitment policies, established references and a track record of delivering the full spectrum of advanced technical security services.

By incorporating pentesting activities as part of a wider information security strategy, organisations can validate the robustness of their security controls and identify as yet unknown risks to their business. The results of a pentest, and the guidance provided, help organisations better protect sensitive data from falling into the wrong hands.

[Back-issue covers]

Volume 4, Issue 4, August 2012 – Inside: Agile development | Mobile apps testing | Testing techniques. Institutional applications testing: Richard Eldridge explains how to tackle testing in the financial sector.

Volume 4, Issue 5, October 2012 – Inside: Test automation | Agile testing | Mobile app testing. A bright future: what graduates can expect from a career in testing.

Volume 4, Issue 6, December 2012 – Inside: Regression testing | Crowd sourcing | Application complexity. The right stuff? Gojko Adzic asks are we testing the right things?

Subscribe to TEST free! For exclusive news, features, opinion, comment, directory, digital archive and much more visit www.testmagazine.co.uk

Published by 31 Media Ltd
www.31media.co.uk
Telephone: +44 (0) 870 863 6930
Facsimile: +44 (0) 870 085 8837
Email: [email protected]



Testing evangelist Jackie McDougall makes the case for testing your Business Intelligence solution before jumping to any major decisions.

It’s only a report – do we need to test it?

It’s a sad day. The board have met, they’ve had their people do the maths, they’ve reviewed the reports and they’re going to have to lay off 500 staff – it’s tough, but you can’t argue with the figures, and hard decisions have to be made.

Let’s take a step back. Have the reports been tested? Have assumptions been made? How do we know the report is right? Well, the board have invested millions in their IT systems, and they’ve got a great testing team who have found some really nasty defects that could have done significant damage to the business’s reputation had they gone live. They’ve tested the source systems to death, and had thousands of defects raised and fixed, and they passed all the tests. The reports are ‘just’ extracts of the data – do we really need to test them?

Getting the message

As a testing evangelist, I’m delighted that we’ve finally managed to get the testing message across, and today many organisations do test the systems and applications they put in place to meet their day-to-day operational business needs.

Indeed, many of them invest heavily in testing within this space, often as a result of a ‘near-miss’ or worse – a real, catastrophic blunder. You know, one of those disaster stories that’s all over the news giving IT a bad name. Like the computer systems problem that resulted in the number of security staff required to support an international sports event this summer being miscalculated, and as a result, the armed forces being drafted in. Or the failed software upgrade carried out by a large UK bank last year that resulted in hundreds of thousands of customers’ payment transfers being significantly delayed (some by as much as a month). Or an IT services company owing $1.5 billion in penalty payments to the NHS for failing to deliver a patient records contract. Or Apple’s introduction of its own mapping service resulting in a PR catastrophe due to the lack of accuracy, clarity and detail offered by the service.



Naturally, organisations are willing to put a lot of people, time and money into making sure this sort of thing doesn’t happen to them (or happen again, for those who have already hit the news).

And that’s all well and good. But what about the data these systems produce? What happens to it? Well, it’s used to produce key performance indicators (KPIs), which management teams then look at and use to make decisions, perhaps about where the organisation needs to spend more money, where it needs to increase sales targets, where it needs to focus its investments, where it needs to recruit more people. That’s the rosy view. Maybe it isn’t so positive in some cases. Maybe it’s more like: where the organisation is losing money, which sales teams are not performing so well, where it needs to cut back on investment, where it needs to let people go.

The way they do this is to gather data about how the organisation is doing (snapshots of individual transactions at particular points in time) – the ‘now’ – and use this to provide a historical and summarised view of particular attributes of the business – the ‘trends’. Think of a sales processing system: transactional data would include what item was bought, when it was bought, what price it was and how it was paid for; trends might include how many of that item were bought in a month, how the price varied over the course of the year, how many were bought online and how many were paid for by credit card. This is called Business Intelligence (BI).

To get from the ‘Now’ to the ‘Trends’, it’s often necessary to use Data Warehouses. I know... when I first came across this term, I immediately thought of Mulder and Scully, investigating dark and dingy abandoned buildings. But what it actually means is “a copy of transaction data specifically structured for query and analysis” (The Data Warehouse Toolkit - R. Kimball, 1996). It’s the ‘query and analysis’ that makes it distinct from a relational database. Typically data is extracted from a number of source systems, transformed in some way to enable an appropriate structure for analysis and loaded into a single data warehouse – referred to as an ETL (Extract, Transform & Load) solution. From the data warehouse, a variety of outputs can be produced - reports, data cubes, dashboards or scorecards. These form the basis of the Business Intelligence on which strategic management decisions are made.
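The extract-transform-load flow just described can be sketched in a few lines. The source records, transformation rules and warehouse schema below are all invented for illustration; real ETL solutions do the same three steps across many systems and far larger volumes.

```python
# Minimal ETL sketch: extract raw transactional rows, transform them into
# an analysis-friendly shape, load them into an in-memory warehouse table,
# then run the kind of 'trend' query a BI report is built from.
import sqlite3

# Extract: raw records as they might arrive from a sales source system.
sales_source = [
    {"item": "Widget", "price": 9.99,  "paid_by": "card", "month": "Jan"},
    {"item": "widget", "price": 8.99,  "paid_by": "cash", "month": "Feb"},
    {"item": "Gadget", "price": 19.99, "paid_by": "card", "month": "Jan"},
]

def transform(row):
    # Transform: normalise casing and store price as integer pence so the
    # warehouse has a consistent structure for query and analysis.
    return (row["item"].lower(), round(row["price"] * 100),
            row["paid_by"], row["month"])

# Load: write the transformed rows into the warehouse fact table.
warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE fact_sales (item TEXT, pence INT, paid_by TEXT, month TEXT)")
warehouse.executemany("INSERT INTO fact_sales VALUES (?,?,?,?)",
                      [transform(r) for r in sales_source])

# A 'trend' view over the 'now' snapshots: sales count per month.
monthly = warehouse.execute(
    "SELECT month, COUNT(*) FROM fact_sales GROUP BY month").fetchall()
```

Every stage here – the transformation rules, the load, the trend query – is code that can contain defects, which is exactly why the article argues it needs testing.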

Correct configuration

So it’s just as well they’ve spent all that effort, time and money making sure the systems that source this data are well tested, isn’t it? But what about the BI solution – the ETL software and the output reports themselves? Some ETL solutions are commercial off-the-shelf (COTS) packages, and some are not. If they are COTS packages, they have to be configured correctly to an organisation’s requirements, they need to be integrated with other systems, and they may also need customisation specific to that organisation. In other words, they are testable pieces of software that can have defects in them. If the ETL solution is not a COTS package, it is likely to be a series of bespoke scripts or programs that structure, query and analyse the data – again, all of which need to be tested.

And then there are the reports themselves – users can’t edit these, so surely we don’t have to test them too? But what if the report does not display what it says it does? What if, due to some issue with poor data quality much earlier in the chain, it shows a completely inaccurate picture? What if there was some miscalculation in the ETL process which has a knock-on effect on the aggregated values shown in the bottom line? The risks here are not just that there might be some well-publicised blunder – there may well also be implications regarding non-compliance with regulatory reporting; there may be financial penalties; there may be security risks.
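One common way BI testers catch this kind of knock-on miscalculation is a reconciliation check: recompute the report's aggregate directly from the source data and compare. The figures, regions and field names below are invented for illustration.

```python
# Reconciliation sketch: does the report's bottom line match a direct
# recomputation from source data? All figures here are invented.
source_rows = [
    {"region": "north", "revenue": 120_000},
    {"region": "north", "revenue": 80_000},
    {"region": "south", "revenue": 95_000},
]

# The figures the report claims (imagine these came out of the ETL and
# warehouse, rather than being typed in here). 'north' is wrong.
report = {"north": 195_000, "south": 95_000}

def reconcile(report, rows):
    """Return {region: (reported, recomputed)} wherever they disagree."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0) + row["revenue"]
    return {r: (report.get(r), totals.get(r))
            for r in set(report) | set(totals)
            if report.get(r) != totals.get(r)}

mismatches = reconcile(report, source_rows)  # flags the bad 'north' figure
```

A check like this will not tell you why the aggregate is wrong, but it tells you the report cannot be trusted before the board acts on it.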

The way that data influences decisions is set out in the DIKW model (Data, Information, Knowledge, Wisdom; Russell Ackoff, ‘From Data to Wisdom’, Journal of Applied Systems Analysis, 1989): a chain running from Data to Information to Knowledge to Wisdom.



Jackie McDougall
Senior test consultant – Innovations & Solutions Group, Sopra Group
www.sopragroup.co.uk

But this is based on some key assumptions:

• That the starting point is good quality data;
• That the information is accurate;
• That the knowledge facilitated is correctly understood;
• That the wisdom achieved is valid.

Unfortunately, I have worked on some BI testing projects where the quality of the data is not good, the information produced is not accurate, and the anticipated knowledge and wisdom are absent. So I’m starting to believe there is a sub-value chain that corresponds to the traditional DIKW model.

Testing BI solutions

So what does this mean for testing BI solutions? We’ve already accepted that organisations are willing to invest in testing where they appreciate its value and the risk that it mitigates. But this shouldn’t just stop at the delivery of the applications. If the data that’s output from these systems then has a further journey into the world of BI, some investment also needs to be made in testing the ETL solution, the data warehouse, and the reports or dashboards produced for the users at the end of this process. Otherwise the value gained from the investment in testing the source systems can quickly be lost further down the chain.

So the CEO has the report in his hand. Does he let people go or not? The biggest assets of most businesses are their people and the quality of the information available to make the right decisions. Making sure the BI solution is well tested protects and increases the value of both!



Bridging the gap

Mike Holcombe assesses the gap between the research universities do and exactly what industry can use.

As we all know, technology progresses through research and its exploitation. This is as true for testing as it is for everything else. The universities are at the forefront of research and represent a major asset for UK plc in terms of opportunities to develop technology further. However, in Britain we have not been very good at taking this excellent research and adapting it for industry to use. There is a gap between what the universities produce and what industry can usefully use.

Testing is no different from many other areas of technology. How many test professionals read the advanced research papers that are being published by the top university researchers? Probably not many. There are several reasons for this.

1. The papers are written in a formal and hard-to-understand style requiring a deep knowledge of the research literature. This makes them pretty inaccessible to all but the academics.

2. Although the results described in a paper may be impressive, they will generally not have been developed in an industrial context, so there is a question as to how useful they may be to practitioners.

3. Software testing is heavily dependent on automation, which means that high-quality software is needed to apply any new algorithms or techniques in a production context. Much academic research software is experimental and needs a lot of polishing up in order for it to be applicable.

4. The average testing department may not be familiar with the latest ideas from research and may thus be suspicious of them.

Examples

There are examples where universities have been able to interact very closely with industry and produce some remarkable success stories.

The University of Sheffield set up the Advanced Manufacturing Research Centre (AMRC) 11 years ago. It is now heavily funded by Boeing, Rolls-Royce, BAE Systems and others.

The president of Boeing, Sir Roger Bone, said at a recent international meeting: “We believe that the University of Sheffield is one of the few universities that really understands the link between research and industrial application.”

The AMRC model is as follows: companies, large and small, subscribe to the Centre (the fees vary according to the size of the company). This gives them access to a group of highly trained researchers and specialists who can then work with them on problems the companies specify. This usually involves taking research ideas out of the lab and transforming them into a form suitable for use in industry. Typically a company might specify a project lasting three months, say, involving several researchers working full time on the problem. This is something that universities cannot normally do, since the funding environment prefers long-term projects – say, one person working on a problem for three years. This is far too slow for most companies.

We want to do something similar in software testing and data analytics. The Advanced Digital Research Centre would be set up along the lines of the AMRC with University funding as well as industrial money – and Government money if things go well.

As well as building on the research breakthroughs and creating industry-strength testing tools, we will be interested in helping companies to develop their skills and knowledge in the latest techniques. Other areas we are looking at include a certification process for app developers and similar mechanisms for making testing more central to the world of software development.

We are at the planning stages for this initiative and would welcome any expressions of interest from colleagues in industry.

Mike Holcombe
Founder and director, epiGenesys Ltd
www.epigenesys.co.uk

With the explosion in the smartphone apps market in the last five years, a whole new industry has grown up to satiate the public’s desire for slick, fun, useful and new bits of software. The developers of these products live and die on their quality, and poor performance is quickly translated into negative feedback. Matt Bailey talks software quality with Gary Partington, a pioneer of smartphone software development and CEO of app development company Apadmi.

Top quality apps


If you were asked which single technology is currently taking over the world, I’m sure a significant number would answer “apps”. App development is no simple matter though. There is a multitude of devices, platforms, operating systems and formats to deal with, all with widely varying specifications. And the punters like their apps to run smoothly and glitch-free on all of them.

Apadmi is a software development company that has created business and consumer apps for a wide range of industry sectors including media, TV, radio, education, finance, telephony (VoIP/PBX), email and messaging. It is responsible for BBC iPlayer Radio – the new home for BBC radio, available across multiple platforms. The app attracted a million downloads in two months.

Other recent projects include a new range of apps for BT Broadband customers to connect to BT’s wifi hotspots across the UK; an official Olympics tourist guide app for VisitBritain for exclusive use on Samsung devices; and the BlackBerry and Android versions of the official app for the prime-time ITV show, The X Factor. The company has also developed apps for Motorola, Nokia, The Carphone Warehouse, Aviva, Accenture, Dell, Skyscanner and Avaya.

Apadmi was founded in 2009 by its four directors, all of whom had experience developing mobile software for platform providers, handset manufacturers, network operators and software vendors. Indeed the company’s consultants helped to develop the first Symbian OS device, the Ericsson R380, way back in 1998 when mobile was still in its early stages of evolution.

“In a nutshell, we build great mobile apps,” says Apadmi CEO Gary Partington. “We provide innovative and cost-effective integrated mobile applications and solutions and, with long-standing expertise across multiple mobile platforms, we offer in-depth mobile knowledge. If a client is looking for inspiration for a mobile app idea or they want to explore how they can develop an existing concept, we help them, from the initial creation through to design, development and monetisation.”

Partington has worked in the mobile industry since 1999. As CEO his role is to bring new partners to the table and to enhance and build upon the company’s established strong customer relationships, while also ensuring that the Apadmi team is constantly focused on customer service and delivery.

Quality apps

Obviously, a crucial part of offering a quality app development service is a focus on software quality itself. “We have dedicated test teams,” says Partington, “but the way we’ve gone about things in the last few years is building things up as much as we can in the automated test space. We run lots of unit tests – as much as we can. We use a lot of tools that we’ve built in-house. We have a continuous integration process that automatically tests our frameworks and anything else that we can actually test in this way.”

As with any developer in the complex apps environment there are a series of challenges to face. “The range of platforms is one,” says Partington. “The sheer variety of devices available is another. Launching a product into 20 or 30 different countries is a major challenge especially when you might have half a dozen different operators running networks in any one country, not to mention the different languages involved.

TEST | February 2013 www.testmagazine.co.uk

“The challenge also lies in making the right decision about how to test. If I wanted to test every single phone on every single network on every single firmware revision and on every operating system, I’m into the hundreds of thousands of permutations. So you end up with a testing strategy where you try and take a view on what the most important handsets are and you use your experience to decide what the most likely things to fail will be and you try and test those as widely as you can. You try and make those testable on frameworks so that you can run versions of your software using tools like DeviceAnywhere in different countries as much as possible. There are different approaches you have to choose to de-risk your project.”
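The combinatorial explosion described here is easy to make concrete. A small sketch – all device names, figures and weights below are hypothetical – showing how a full test matrix multiplies, and how a team might score and shortlist the combinations it will actually test:

```python
from itertools import product

# Hypothetical test dimensions: each extra axis multiplies the total.
handsets = ["Galaxy S3", "Galaxy Y", "iPhone 4S", "Lumia 800"]
networks = ["Operator A", "Operator B", "Operator C"]
firmwares = ["1.0", "1.1", "2.0"]
os_versions = ["OS 4.0", "OS 4.1"]

full_matrix = list(product(handsets, networks, firmwares, os_versions))
print(len(full_matrix))  # 4 * 3 * 3 * 2 = 72 combinations, even for a tiny matrix

# Prioritise: weight each handset by (assumed) market share, bumped by known
# risk, then test only the highest-scoring combinations as budget allows.
market_share = {"Galaxy S3": 0.4, "iPhone 4S": 0.3, "Galaxy Y": 0.2, "Lumia 800": 0.1}
risk = {"Galaxy Y": 2.0}  # low-end hardware judged more likely to fail

def score(combo):
    handset = combo[0]
    return market_share[handset] * risk.get(handset, 1.0)

top = sorted(full_matrix, key=score, reverse=True)[:10]
```

In practice the weights would come from analytics and field experience, but the shape of the decision – score, rank, cut – is the same.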

“Our test team uses a couple of different tools to automatically run through the user interface and try to simulate user events on the devices,” says Partington. “On Android it’s one that gets built in to another app that gets installed on the phone and that allows us to run tests to ensure that everything is working as expected. But we also have to write test code that sits on the server side because a lot of our applications have a mixture of Cloud and mobile device functionality, so we have to test those at the same time and analyse the results.”
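The server-side half of that testing can be sketched generically. A minimal, hypothetical example: the endpoint, payload and helper names are invented for illustration, and the network call is stubbed so the test runs standalone, as a server-side test in CI would.

```python
import json
import unittest
from unittest import mock
import urllib.request

def fetch_profile(base_url, user_id):
    """Call the (hypothetical) cloud API backing the mobile app."""
    with urllib.request.urlopen(f"{base_url}/users/{user_id}") as resp:
        return json.loads(resp.read().decode("utf-8"))

class ServerSideTests(unittest.TestCase):
    def test_profile_has_fields_the_app_relies_on(self):
        # Stub the network call so the test runs without a live server.
        fake = mock.MagicMock()
        fake.read.return_value = json.dumps({"id": 7, "name": "Alice"}).encode()
        fake.__enter__.return_value = fake
        with mock.patch("urllib.request.urlopen", return_value=fake):
            profile = fetch_profile("https://api.example.com", 7)
        self.assertEqual(profile["id"], 7)
        self.assertIn("name", profile)
```

The point is the division of labour: the device-side harness drives the UI, while tests like this pin down the contract the app expects from the cloud.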

Qualitative vs quantitative

According to Partington it is important to get the widest possible testing spread on a range of devices, as well as the widest possible coverage within the application. “At the end of the day some of this testing does boil down to standard, good old fashioned user-based testing,” he says. “What we try to do is embed tools so that if there is a problem we get as much feedback as possible so we can track the problem. We try to automate as much as possible, obviously, but button-pressing and checking is still an essential part of the process.”

Of course there is still a certain amount of testing that can never be automated. “I’ve not come across a piece of software yet that will tell me whether an animation looks good or bad,” says Partington. “Someone has to sit down and watch it. This is where the challenge comes in, when with more and more applications on mobile devices the desire is for the user interface to be as slick, with as much whizz and as much bang as possible, and getting that to run smoothly across a range of devices. You might have a top-end Samsung Galaxy S3 with dual core processors at one end of the scale and a Samsung Galaxy Y at the other end with a slower single core processor and the app has to perform faultlessly on both. You are into double-checking what works across not just a range of processors, but also a range of screen sizes.”

An app pioneer

Gary Partington is a true pioneer of software development for smartphones, going right back to the start in the late ’90s. “The first one I worked on was based on Symbian, it was called EPOC and I was the team leader for calendar, messaging and a couple of little apps on what was deemed the first real smartphone because it had a touch screen,” remembers Partington. “What was interesting is that I created a game for it as a third-party deliverable. This Othello game for the smartphone was originally tested with an automated test suite which, because the phone had a touch screen, just randomly generated key presses on the phone. Othello became famous because it was the only application that never crashed even under this constant bombardment of key presses. They threw millions of pointer events at the phone and they never managed to crash it. Even back then on the earliest smartphones testing was being done, but it was a bit random.

“Following this experience I set up a company called EMCC Software which developed applications which were embedded in the phone or delivered in the after-market,” says Partington. “Unfortunately with the changes in the Symbian ecosystem we had to close down the company and I started up Apadmi. The iPhone had just come out but the App Store had yet to really take off at the time.

“The thing that Apple did,” says Partington, “was make a commercial model that previously no other handset manufacturer was able to do because the network operators wouldn’t let them and it is a great looking device. We watched the market develop and moved into that field with some iPhone developments but more recently in Apadmi we have focussed across the smartphone field on iPhone, Android and BlackBerry.”

With the constant clamour for quality and an ever-growing range of devices, systems, languages and platforms to deal with, Gary Partington is continuing to ensure that quality is at the top of Apadmi’s agenda.

Gary Partington, a pioneer of smartphone software development and CEO of app development company Apadmi.


Should the CIO disband the test team and make the testers part of the development teams? Francis Miers, director at Automation Consultants, looks at some of the issues which can arise when Agile methods meet real-world situations and at how to organise testing so as to maximise the benefits of Agile without losing visibility of quality.

Agile testing: Should you disband the test team?


So you're all ready to go Agile. The old strictures of the past are being dispensed with. The rigid demarcations, lengthy documentation, project board approvals, waiting for quality gates to be passed, all that will be swept away in favour of more flexible, pragmatic, productive Agile methods. Testers will no longer make up a separate team, but be seeded among the developers. Quality will be 'baked in' from the start and there will be no need for endless cycles of testing, fixing and retesting. So should a CIO disband the test team and make the testers part of the development teams?

The happy path

Before exploring the possible difficulties of Agile, let us describe the 'happy path' of Agile methods. Agile is best suited to small to medium sized development projects where the developers, testers and the customer are all under one roof. The development work is broken down into 'sprints' of two to four weeks, each of which will deliver some working code. The testers are no longer organised as a separate team which tests finished modules of code. Instead they are embedded with the developers and continually test small pieces of code as soon as they are written.

The testers implement a high degree of automation in the compilation and testing of code, so that the whole code set is compiled every evening and automated unit tests run on it. The compilation and running of the tests is typically organised by a tool like Hudson or Jenkins. The automated tests are written in tools like JUnit, Selenium and FitNesse.
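That nightly compile-and-test cycle boils down to "run the whole suite, fail loudly". A minimal sketch of the step a Hudson or Jenkins job would invoke, using Python's unittest runner for illustration; the single inline test stands in for a suite that a real job would discover across the source tree.

```python
import unittest

# One representative automated unit test; a real suite would have hundreds,
# discovered from the repository rather than defined inline.
class SmokeTest(unittest.TestCase):
    def test_arithmetic_sanity(self):
        self.assertEqual(2 + 2, 4)

def run_nightly():
    """Build and run the whole suite, as the CI server does each evening.

    A real Jenkins/Hudson job would use unittest discovery over the source
    tree; the suite is built inline here so the sketch is self-contained.
    """
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()
```

The CI job would call something like `sys.exit(0 if run_nightly() else 1)`: a non-zero exit code is what makes the server mark the overnight build red.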

Methodologies like test-driven development (TDD) are common, and tests are often written before the code. Testers are pro-active; they sit close to the developers and take part in the daily scrum meetings (or equivalent). They also liaise often and closely with the customer team to understand the customer's requirements (stories) as soon as possible and translate them into tests. A traditional tester, by contrast, sits in a silo writing tests based on a functional or non-functional specification document. As code is thrown over the wall from the development team to the test team, the tester tests it and sends any defects back to the developers.
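The test-first pattern can be shown in miniature. A hedged sketch with illustrative names: the customer story "a basket totals its item prices" is written as a failing test first, then the simplest production code that passes it is added.

```python
import unittest

# Step 1: the customer story, captured as a test before any production
# code exists. Run at this point, it fails: Basket is not yet defined.
class BasketStory(unittest.TestCase):
    def test_basket_totals_item_prices(self):
        basket = Basket()
        basket.add("tea", 120)   # prices in pence
        basket.add("milk", 80)
        self.assertEqual(basket.total(), 200)

# Step 2: the simplest production code that makes the test pass.
class Basket:
    def __init__(self):
        self._prices = []

    def add(self, name, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)
```

The discipline scales the same way: each story becomes a test before implementation, so the nightly suite grows in step with the code.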

When all the defects have been fixed and retested, the test team declares the code passed and issues a certificate to that effect. Work does not begin until the entry criteria are met, and does not stop until the exit criteria are met. Agile testers are expected not to conform to this mindset and instead be pro-active in dealing with the developers and the customer. Ideally, this speeds up testing and finds bugs more quickly.

Disbanding the test team

Unfortunately, not all projects follow the Agile happy path. Agile methods were conceived primarily for software development. A great many IT projects, however, do not involve development. A huge amount of the time and resources of most IT departments is taken up with infrastructure upgrades, migrations and systems integration, rather than development.

An IT department which disbands its test team may have problems organising its testing when it comes to a non-development project. In the absence of developers, the testers must come up with a set of functionality to be tested and a way of documenting the tests and results, and there is little alternative but to follow a traditional pattern.

A project board is unlikely to be comfortable in approving the go live of, say, a Citrix upgrade unless it can see evidence of testing in a traditional test completion report. Tools which are common in Agile development, such as Jenkins and JUnit, are simply irrelevant in a migration project. Some Agile testing techniques, however, can be deployed, especially the pro-active mindset and will to pre-empt problems, but the test methodology is likely to be more traditional.

Scale & distance

Agile methodologies also encounter problems with scale and distance. Ideally, an Agile project team is located under one roof and meets daily in a scrum or similar meeting. Stories are written on 'post-its' and stuck onto a board which the whole team can see. As stories are written, worked on and completed, the post-its are moved across the board to show progress. Following this methodology is not so easy if the project is large and requires hundreds of developers.

It is even harder if the developers and testers are spread across several continents. Here project and test management tools become essential. Many tools reproduce the story board electronically, such as Pivotal Tracker, the Rally suite and HP Agile Manager. Some integrate it with bug tracking software to produce an integrated project and quality management tool. For large Agile projects, the development is often broken down into several different Agile teams. Here the challenge is to co-ordinate the activities of the different teams. The risk is that dependencies are created between teams and that with less formal documentation, one team does not understand the outputs produced by another. Again, management tools can help. As regards testing, a defect management tool is especially useful with multiple Agile teams working on the same project.

The challenge of regulation

Another challenging area for Agile testing is in regulated industries such as pharmaceuticals, finance and aerospace. In these industries, regulatory bodies often require formal evidence of testing. If badly managed, software which has been developed using Agile methods must then go through a formal test phase to show that it complies with applicable regulations.

More traditional development methods lend themselves to producing the documents required by the regulators. To meet regulatory requirements without losing the benefits of Agile, using a suitable test management tool can help. HP has adapted Quality Center over the years so that it now caters for Agile projects, and newer tools such as TestWave do so as well. The tools can support Agile development sprints but can also draw out all the tests and results needed for a formal report to regulators.
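The bridge between Agile tooling and regulator-friendly paperwork is usually a results export. A sketch of the idea, assuming JUnit-style XML results in the common `<testsuite><testcase/></testsuite>` shape (real tools vary in field names, and the sample data below is invented):

```python
import xml.etree.ElementTree as ET

def summarise_junit_xml(xml_text):
    """Reduce a JUnit-style results file to the pass/fail evidence
    a formal test report needs."""
    root = ET.fromstring(xml_text)
    rows = []
    for case in root.iter("testcase"):
        failed = (case.find("failure") is not None
                  or case.find("error") is not None)
        rows.append((case.get("classname"), case.get("name"),
                     "FAIL" if failed else "PASS"))
    return rows

# Example results as a CI tool might emit them (content is illustrative).
sample = """<testsuite>
  <testcase classname="login" name="valid_credentials"/>
  <testcase classname="login" name="locked_account"><failure message="timeout"/></testcase>
</testsuite>"""

for classname, name, verdict in summarise_junit_xml(sample):
    print(f"{classname}.{name}: {verdict}")
```

Commercial test management tools do this at much greater depth, but the principle is the same: the sprints produce machine-readable results, and a report generator turns them into the formal record the regulator expects.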

Agile people

Agile works best on small and medium sized development projects and is less well suited to non-development projects, large and dispersed projects and heavily regulated industries. So should a CIO disband the test team? It depends on a number of factors. How much of the department's projects consist of development? How big are the projects? With how much outside regulation does the department have to comply? Ideally an IT department should keep some dedicated test expertise to determine the test strategy for each type of project the department undertakes, and then organise the testers appropriately, with projects having separate test teams or not depending on the type of project.

Even in an Agile project with no separate test team, keeping some reporting lines to a test manager outside the various projects is a good way to maintain testing-specific skills, and to benefit from the resulting product quality. More fundamentally, the appropriate handling of Agile testing comes down to people. If you hire the right people, they will thrive in Agile projects (which require more initiative and flexibility from testers than traditional methods) and have the awareness to see where a different approach is required. Tools can also help by bridging long distances and simplifying reporting.

Francis Miers
Director, Automation Consultants Ltd
www.automation-consultants.com



According to Dave Whalen, Humble Pie is not just a band from the ’70s.

Humble pie

As it is in the contracting world, I once again find myself between contracts. No worries. I am after all – me! It's frustrating but I can typically find another gig pretty successfully. On the plus side, I get a much needed vacation – without pay of course, but it’s still a nice break.

This time I expected a long break due to the current downturn in the economy. Much to my surprise, I found that I was actually in demand. I updated my resume on a few job web sites a couple of weeks before my last contract ended. Within days my phone was ringing off the hook. I was able to add an in-demand skill this time. Soon I was juggling phone interviews, face-to-face interviews, and the latest addition – a video interview via Skype.

Given the increased demand, I started getting a little cocky. Unlike previous jobless spells, where I took the first offer, I was able to play one offer against the other. If I turned down an offer in favour of another, I would get a call asking me to reconsider, usually with a boost in pay. It went straight to my head.

So I walked into a recent interview feeling pretty smug. I was told they were looking for a tester with some Java development experience. The job description said the job was 90 percent manual testing and 10 percent writing tests in Java. Due to my recent new skill acquisition, it described me perfectly. I walked into the interview with a swelled head.

Now, I admit, I'm by no means a professional programmer. My Java skills are self-taught. I've never taken a class specifically on Java, but I have a lot of experience with various programming languages. I can write tests in Java. It may not be the most efficient code, but it will get the job done. During a phone screening, I explicitly told the hiring manager that very thing – 90 percent test, 10 percent Java. As a result, I was asked to come in for a face-to-face interview with the team.

My inflated self-opinion was popped within five minutes. The team drilled me with very specific and very detailed Java programming questions. I answered a few, but most of my answers were “I don't know.” I felt totally stupid. They didn't ask a single test question.

After about 15 minutes I stopped the interview. I told them that I thought I was interviewing for a test job, not a programming job. They assured me that it was indeed a test job with a heavy concentration on programming. 10 percent testing, 90 percent Java programming.

I politely excused myself telling them they had the wrong guy. I could instantly see the relief on their faces. They were going to proceed through the interview, but it was obvious this was a waste of time. So we shook hands and politely said our goodbyes. I walked out with a smile on my face, but inside I was livid. Well 90 percent livid, 10 percent embarrassed and 100 percent humble. The air was out of my balloon. Welcome back to earth!

On the way home, I calmed down a little before calling the staffing company. The recruiter was extremely apologetic but assured me that she was given the incorrect requirements. No big deal. These things happen.

In the meantime offers kept coming in. Some I discounted immediately – they weren't me. Others were very intriguing. Soon I ended up with a couple of really good offers. One short-term contract, one long-term. I opted for the long-term contract even though the drive is much longer. The long-term contract offered more security with a chance to become a permanent employee.

But once again I was in unfamiliar territory. I had tentatively, informally, accepted the short-term contract. I had actually previously interviewed for the long-term contract and came in second. I was told there was a chance they may still come back with an offer. But I couldn't rely on maybe so I moved on.

The short-term contract called me back. I went in for two interviews. It was a really good opportunity with the state government. Unfortunately, they couldn't make me a firm offer because the state had not signed a contract with them yet. This was a process that I knew could take weeks. I tentatively accepted. As luck would have it, the long-term contract came back and wanted me. I decided to accept. Now the hard part: I have to call the first offer, that I tentatively accepted, and decline.

So, I may have been full of myself, but no more. I felt that I had gone back on my word. So I'm eating the pie, and it doesn't taste very good. I've got a week until the new job starts. I'm feeling a little better each day and I'm looking forward to the new opportunity. But I still have a bitter taste in my mouth. Next time I'm having cake!


Dave Whalen
President and senior software entomologist
Whalen Technologies
softwareentomologist.wordpress.com

