Editor-in-Chief
Michael Hackett
Deputy Editor
Christine Paras
Graphic Design
Tran Dang
Worldwide Offices
United States Headquarters
4100 E 3rd Ave, Ste 150
Foster City, CA 94404
Tel +1 650 572 1400
Fax +1 650 572 2822
Viet Nam Headquarters
1A Phan Xich Long, Ward 2
Phu Nhuan District
Ho Chi Minh City
Tel +84 8 3995 4072
Fax +84 8 3995 4076
Viet Nam, Da Nang
346 Street 2/9
Hai Chau District
Da Nang
Tel +84 511 3655 33
www.LogiGear.com
www.LogiGear.vn
www.LogiGearmagazine.com
Copyright 2016
LogiGear Corporation
All rights reserved.
Reproduction without
permission is prohibited.
Submission guidelines are located at:
http://www.LogiGear.com/magazine/issue/news/2015-editorial-calendar-and-submission-guidelines/
This is LogiGear Magazine's first issue on the big world of DevOps, and DevOps is a very large topic.
Just when you thought you were safe from more process improvement for a
while—not so fast. There’s DevOps, Continuous Testing, Continuous Delivery and
Continuous Deployment. In this issue, we are focusing on Continuous Testing, the
part most concerning Test teams.
DevOps is, by one description, Agile for Ops. With closer input and collaboration
from the business side, development and operations are using great tools to help
Ops be more Agile and migrate code to production faster. But this can be
complicated.
Now I am running into organizations that say they do DevOps, or are moving to DevOps, but have very little in place to do so, or worse, have no idea what they are doing. This reminds me of a few companies I knew in the Agile era that said they were Agile, but weren't. I saw firsthand how many teams limped into Agile and then raced to get testing done, working from significantly less documentation and sometimes marginal collaboration while still being expected to achieve high automation coverage; some of them are still trying to get their footing. Without the culture change, empowerment, skills, and tools to make it happen, a team that attempts DevOps is headed for disaster. DevOps will highlight the shortcomings of a team on a larger scale, and faster, than Agile ever could.
DevOps is a minefield! To do anything here, you have to know what you are talking about. It’s not just buzzwords.
Just because you began using Puppet and Docker doesn’t mean you’re ready for Continuous Deployment.
We are at a stage in DevOps that greatly reminds me of the early stages of Agile, when the use of a single tool or a single change led uninformed people to make assumptions about an entire paradigm shift. I'm sad to see this trend continuing.
If, a dozen years ago, you dropped phased quarterly releases in favor of two-week sprints with user stories instead of requirements, and your Dev team started using Jenkins, that by itself did not make you Agile. It was a start. If the team did not have daily access to the business side/Product Owner, did not significantly boost collaboration through early team involvement, and did not achieve significant automation coverage, then that team was Scrumbutt or AgileFalls, still not truly Agile; instead of productivity gains, many such teams felt more pressure and uncertainty.
It took most organizations years to implement, figure out, and tune the new practices. We already have a few anti-names, DevFlops and DevOops, but let's not go there. Let's do it right! There is great progress to be made
here. DevOps, like Agile, is about culture. In DevOps, the whole organization focuses on the business constraints
and needs rather than the whole organization working according to development or Operations schedules.
Product functionality and cycles are delivered when the business needs it rather than being at the mercy of
development and/or IT/Ops.
DevOps is also more about business change and Operations/IT change than Dev and Testing change. Dev and Testing got turned on their heads with Agile; this time it's other groups getting turned upside down: getting Operations involved, shifting their tasks left, earlier in the cycle, and automating as many Ops tasks as possible with tools like Puppet, Chef, and Docker, among many others.
To get all this business-driven product delivered on the business's schedule, there is even more use of task automation. To get a good idea of where DevOps is going: everything that can be automated has to be. Builds using Continuous Integration and test automation are how we became Agile. Test teams have been dealing with these tasks for a long time, but now more tasks, primarily Ops tasks such as building and maintaining environments, build promotion, provisioning, and monitoring, are all becoming automated.
For Test teams, this means a lot. Apart from the team dynamics, tools, and responsiveness to change, it of course means bigger and more intelligent test automation. How we look at test automation has to evolve and grow: its use, when and where to apply it, and how this cycle impacts our regular Dev sprints are all expanding the role of Test teams. Clearly, we have a lot to learn and a lot to change.
To be DevOps and not DevFlops, first we have to know what we are aiming for, why we are aiming for it, what our goals are, and how we can best support the business. It's a challenge.
I hope we can help you.
In this issue, we feature a two-part series by Sanjeev Sharma: Understanding DevOps and Adopting DevOps. Alister Scott discusses testing in production in our Blogger of the Month feature. Sanjay Zalavadia writes about best practices for creating a test-driven development environment, and Tim Hinds discusses where QA fits into DevOps. Steve Ropa reviews the 7 best DevOps books, and we have an interview with Skytap's Sumit Mehrotra.
I’m pleased to announce that in addition to continuing our new TA Corner series, we also have another new
column, Leader’s Pulse, which largely features recommendations on how to manage Test teams.
Continuous Testing, Part 1
DevOps for Test Teams
Michael Hackett

The DevOps Lifecycle Infographic
Christine Paras

Understanding DevOps
Continuous Testing and Continuous Monitoring
Sanjeev Sharma

3 Best DevOps Practices to Create a TDD Environment
How to ensure a successful test-driven environment
Sanjay Zalavadia

Adopting DevOps
Aligning the Dev and Ops Teams
Sanjeev Sharma

An Interview with Skytap's Sumit Mehrotra
On what you need to know before making the transition to Virtualization
Michael Hackett

Struggling with Continuous Testing? Infographic
Christine Paras

Where Does QA Fit In DevOps?
Fitting QA into a modern DevOps group
Tim Hinds

The 7 Best DevOps Books
From adopting the culture to implementing Continuous Delivery
Steve Ropa

Blogger of the Month: Test in Production?
This post is part of the Pride & Paradev series
Alister Scott

Leader's Pulse
The ownership of quality has evolved, don't get left behind
Michael Hackett

TA Corner
How to leverage TestArchitect in DevOps
LogiGear Staff
LOGIGEAR LAUNCHES CONTINUOUS TESTING SOLUTION
AMAZON CEO JEFF BEZOS TEASES AN AMAZON WEARABLE AT CODE CONFERENCE
During his interview with Walt Mossberg at the Code Conference earlier this month, Amazon CEO Jeff Bezos coyly hinted at the roadmap for Amazon hardware, which is extensive. When asked about wearables, Bezos commented, “I think it’s a super interesting market, and I obviously can’t talk about our future roadmap, but I think that’s [the wearables market is] also in its infancy.”
Read more at: http://www.theverge.com/2016/6/8/11879684/walt-mossberg-jeff-bezos-amazon-blue-origin-code-conference-2016
GOOGLE PLANS TO REPLACE SMARTPHONE PASSWORDS WITH TRUST SCORES
At Google’s I/O developer conference, Daniel Kaufman, head of Google’s advanced technology projects, announced that the company plans to phase out password access to its Android mobile platform in favor of a trust score by 2017. Developer kits will be available by the end of 2016.
Read more at: https://www.newscientist.com/article/2091203-google-plans-to-replace-smartphone-passwords-with-trust-scores/
LogiGear is pleased to announce our new Continuous Testing Service offering.
For over two decades, LogiGear has solved complex test automation
challenges for our clients. We’ve gained deep experience and plan to use it
to guide you through your Continuous Testing, Continuous Delivery and
DevOps transformation journey. Our test automation experts and engineers
leverage the leading tools and practices to ensure the optimum time to
market, cost reduction and peace of mind.
Read more at: http://www.logigear.com/services/continuous-testing-solution.html
Continuous Testing, Part 1
By Michael Hackett
Now that Dev teams have had a little time to settle into Agile, the new wave of process optimization has arrived: DevOps. DevOps has been described as Agile on steroids. It has also been described as Agile for Operations/IT. I like both of those descriptions, as well as some others. Many organizations want Development, Test, and Operations teams to move to DevOps now.
DevOps is a big topic, but DevOps is not the focus of this article. We will not be talking about, for example, containers, Docker, Puppet, or infrastructure as code. In this article, we are going to focus on Continuous Testing: what Test teams are responsible for, and the practices and concerns that make testers' work an asset to the team, not a drag, in this exciting new world of DevOps.
This is a two-part series on Continuous Testing. Part 1 covers the rationale, the business goals, and an overview, with some specific testing tasks. Part 2 is a deeper dive into the specific ideas and tasks that Test teams need to update or change to be successful in this next phase of product development.
There are many problems DevOps is trying to solve. The overriding ideas are to:
1) Shift Operations or IT tasks left, earlier in the process.
2) Leverage newer tools and technologies to automate operations tasks, like provisioning and migrating code.
3) Have everyone on the product team communicate and collaborate much more, and earlier.
4) Keep the product in a consistent state of readiness to deploy, so the business can decide when new functionality goes to customers and is not held back by Dev or Ops being unstable or unprepared to go with whatever is ready.
Continuous Testing gets us far along this path and is the part of DevOps most concerning to traditional Test teams: teams that execute everything from black box, to regression, to UI, to workflow scenario tests; that generally do not do security, performance, or unit testing; and that have therefore not executed tests in production.
When I think of Continuous Testing I think of the Lean
principle, quality at every step. When developers
commit new code, test it! When the product gets
integrated, test it! When the product gets moved to
any new environment, like the test environment,
staging environment or production, test it!
These are all Continuous Testing activities.
Over the last few years, with Agile and Continuous Integration, we have already shifted left and partially right. Now we have to learn how to shift testing fully right. To test all the various environments and stages that need validation, and to do it fast enough for rapid delivery and/or deployment, the testing of course needs to be automated.
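One way to make the "shift right" concrete is to point the same automated suite at each environment rather than rewrite it per stage. The sketch below is illustrative only: the environment URLs and the TEST_ENV variable are assumptions, and a real team would pull targets from its own configuration management.

```python
import os

# Illustrative environment map; real targets would come from the
# team's own configuration management, not a hard-coded dict.
ENVIRONMENTS = {
    "ci": "http://ci.example.local",
    "staging": "http://staging.example.local",
    "production": "https://www.example.com",
}

def target_url():
    """Resolve the environment under test from TEST_ENV (default: ci)."""
    return ENVIRONMENTS[os.environ.get("TEST_ENV", "ci")]

def smoke_checks(url):
    """The same checks run unchanged at commit, integration, staging,
    and production; only the target URL differs."""
    return {
        "reachable": url.startswith("http"),
        "secure_in_production": url.startswith("https")
        or os.environ.get("TEST_ENV", "ci") != "production",
    }
```

The design point is that the suite never changes; only the environment variable does, which is what lets one set of tests follow the product all the way to production.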
Another way to look at this is as a system with three parts:
1. The build delivered by Development.
2. The deployment on infrastructure handled by
OPS/IT.
3. The customers who use the apps/system.
Test teams must have strong knowledge of these
three parts, know the goals of testing at each part,
be able to design great tests for each part, and
have the ability to automate these tests. This is
harder and more comprehensive than most
organizations think.
Many Test teams still struggle with Agile, fast test automation, and automation maintenance. These problems will not help the organization in its mission; they will only drag it down. Tests can be designed and built for successful Continuous Integration first, then expanded into Continuous Testing by knowing the right things to do, staying focused on a few key points, and, most importantly, automating as smartly as possible.
Let’s get started with the work.
The first task: know what you are talking about with DevOps.
Go read about DevOps. Familiarize yourself with the terms currently used in discussions about DevOps; you can also reference our Glossary in the following pages. For example, Continuous Delivery is not Continuous Deployment. Find out the details, the tools Ops/IT will be using, and why they will use them. Learn about provisioning, containers, virtualization, and automatic deployment. We need to be familiar with the whole DevOps paradigm to be useful to the team, and to design the right tests for each situation, even though these areas may not be part of our job. For example, understanding containers may impact our test design for integration testing, but the how, when, and where of using containers is not a Test team's task.
For everyone, DevOps means leveraging automation technology; everyone's tasks get automated. From builds, to provisioning, to deployment, and everything in between, it's all automatic. The goal is for Test teams to automate as effectively, thoroughly, and economically as possible. This will not be easy, nor is it an instant change.
DevOps for Testers is focused on Continuous
Testing.
The goal of Continuous Testing is that these test
tasks provide immediate, useful feedback to the
whole team on the stability and consistency of the
product, ultimately giving the team confidence
before delivery and deployment.
Test teams must also move seamlessly through the testing cycles; this is a constant rolling process. Most teams who do DevOps no longer use “phases” to describe where the product is or its readiness. The product can be in Dev, in Test, with tests running on the staging server, and with tests running in production, all at the same time. If the business decides to deliver functionality to customers, it can move delivered code to deployment based on feedback from the tests, instead of on the phase the product is in. Essentially, the product is in a constant state of test. This will be the primary factor in building consistency from coding to deployment.
“Testing” is now about all the test efforts.
Of course, first you need awesome testing. You will need more of it, it will need to be Agile, and it will need to be automated, more than ever before. Testing includes re-running unit tests and running functional, workflow, validation, and bug-finding tests, along with security tests and load and performance testing; there is no separation in terms of running these tests for consistency, monitoring, and reporting. When people describe DevOps as a “shift left,” one reference is to running performance tests early rather than waiting until the end of development. This is an important example because it illustrates the many pieces of DevOps.
The production environment can be virtualized and
spun up at any moment with a VM or in the cloud
and have full performance tests run early in
development. In the very recent past, good
Developers would run performance tests early but
may have had to mock up a limited system and
focus on isolated performance tests.
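As a sketch of that idea, the harness below times an operation against an environment provisioned on demand and fails fast if a latency budget is exceeded. The provisioning call is a stand-in for real VM or cloud APIs, and the budget value is an arbitrary example.

```python
import time

def spin_up_environment(profile="production-like"):
    """Stand-in for provisioning a full environment (VM, cloud, etc.);
    returns a handle describing the instance."""
    return {"profile": profile, "ready": True}

def measure_latency(env, operation):
    """Time a single operation against the provisioned environment."""
    start = time.perf_counter()
    operation(env)
    return time.perf_counter() - start

def performance_gate(env, operation, budget_seconds):
    """Return True only if the operation stays within its latency budget,
    so a performance regression fails early in development."""
    return measure_latency(env, operation) <= budget_seconds
```

Because the environment is spun up programmatically, this kind of gate can run on every build instead of once at the end of a release.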
Different automated suites of combinations of
these tests will be run on various environments
throughout the process. Depending on your
organization, some tests “belong to” the QA or Test
team. You may be expected to take on more test
and monitoring responsibilities.
Testing in production is normal and expected. This may be problematic, so be intelligent and careful. Testing now happens everywhere and is done by everyone. To be successful in DevOps, aside from test automation, test leadership should have Developers, Test, Operations, and QA leadership working as part of the same team. Testing is now a whole, no longer just a part.
Your team has to be great at Agile. Scrumbutts and
AgileFalls will fail in DevOps.
As said above, some people describe DevOps as Agile on steroids. If you aren't great at Agile already, Continuous Testing will fail, and DevOps will fail. If there are problem areas in your Agile/Scrum process, fix them before moving any further.
Also, if you are having a difficult time presently
keeping up with automation in your sprints, any
attempts at Continuous Testing will surely fail. Your
testing, test automation and reporting must already
be smooth, low maintenance, and effective.
Test automation of new functionality has to happen within the sprint: manual testing first, then automation by the end of the sprint.
Don’t even try Continuous Testing until you have
great, meaningful, effective, fast Continuous
Integration.
In every team I have worked with, CI leads to
Continuous Testing. CI is the foundation of
Continuous Testing.
Having a great Agile practice means the organization understands that everyone does Quality Assurance, not just testing. Quality Assurance means quality user stories, quality unit tests, and quality environments. Quality at every step.
You need a better, more comprehensive test
strategy.
Get your test game on! Great testing skills are still
your most important job skill.
Where old-style “QA” tested at the end, Continuous Testing takes Test teams and entire organizations to a whole new level.
Continuous Testing, in theory, frees you from
focusing on big bang deployments. You have
automated suites running constantly to monitor
system health as more work items are delivered.
You will need a strategy for testing in the sprint, a
strategy for what to test in CI, and a strategy for
what to test in the various environments.
Test teams need to stay focused on delivering
Done, potentially shippable User Stories at the end
of each sprint or iteration as well as growing and
optimizing the various regression suites.
You still need to adequately test new functionality.
Don’t focus too much on small “acceptance” tests.
You need workflows and integration tests, user
scenarios and task based tests. Low-level, small
tests may miss workflow/integration/end-to-end
problems.
You need speed, but don't forget exploratory, data-driven, and soap opera tests. Too many teams get overwhelmed and focus only on test automation at the expense of a full bug-finding, validation, design collaboration, and customer experience test strategy.
You may have a separate team building the test
automation framework and another team doing
automation to pick up the pace. Your strategy will
need more low-level tests. Integration tests,
especially in the new “container” world where
containers/components can be easily plugged in,
are high priority.
Another trend in software development over the last few years is Service-Oriented Architecture (SOA). Today, SOA has gone into overdrive, for many reasons not important here, with containers and microservices now a goal of many organizations. Whether your team uses or refers to APIs, SOA, web services, or microservices, or is using a container tool like Docker, more testing is happening on lower-level items, and it is more technical. Testing isolated services is great for finding bugs or breaks fast. Then, separately, integration tests need to run to verify correct integration and consistency.
Testers should know a lot about grey-box testing. This is grey-box, intelligent test automation that uses internal knowledge to design better tests and to do quicker failure analysis.
Testing in DevOps is similar to an assembly line, a car manufacturing line, for example: you need to check and test everywhere, not only at the end. A key difference from a car assembly line, though, is that there the parts are fixed. With software in DevOps, the parts are changeable, and when the parts change, automated testing will need to keep up.
Continuous Testing in DevOps requires culture as
well as process change. Part 1 of this article
focused on culture and understanding. Part 2 will
focus more on process and tasks.
A quick Continuous Testing task overview:
1. Start with an automated smoke test. Move it into the CI build-process tool.
2. Build bigger regression suites. Move these into the CI build-process tool as well.
3. Grow in levels of CI awesomeness: run smoke and/or regression suites on multiple VMs.
4. Report back to the organization easily and effectively.
5. Use containers and/or virtualization for data and full production-like environments.
6. Distribute automated tests into different suites with varying goals on different environments. Use VMs for the various environments to grow automation, coverage, speed, monitoring, and feedback to the team.
Continuous Testing grows from this.
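The first task in the list above can be sketched as a small gate: a suite of smoke checks whose exit code a CI tool such as Jenkins can use to pass or fail the build. The check functions here are placeholders for real checks against the build under test.

```python
# Hypothetical smoke checks; each would really hit an endpoint or
# exercise a critical workflow of the build under test.
def check_app_starts():
    return True

def check_login_workflow():
    return True

SMOKE_SUITE = [check_app_starts, check_login_workflow]

def run_smoke_suite(checks=SMOKE_SUITE):
    """Run every check and return a CI-style exit code:
    0 when all checks pass, 1 when any fail.
    A CI step would call sys.exit(run_smoke_suite()) to fail the build."""
    failures = [check.__name__ for check in checks if not check()]
    for name in failures:
        print(f"SMOKE FAIL: {name}")
    return 1 if failures else 0
```

Growing from step 1 to step 2 is then just a matter of registering larger regression suites in place of the smoke list, with the same pass/fail contract toward the CI tool.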
Part 2 of this topic will define and describe the lower-level practices and automation goals, the environment and data problems to solve, and virtualization, with more detail on testing goals.
Michael Hackett
Michael is a co-founder of
LogiGear Corporation, and
has over two decades of
experience in software
engineering in banking,
securities, healthcare and
consumer electronics. Michael
is a Certified Scrum Master
and has co-authored two books on software testing: Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems (Wiley, 2nd ed. 2003), available in English, Chinese, and Japanese, and Global Software Test Automation (HappyAbout Publishing, 2006).
He is a founding member of the Board of
Advisors at the University of California Berkeley
Extension and has taught for the Certificate in
Software Quality Engineering and
Management at the University of California
Santa Cruz Extension. As a member of IEEE, his
training courses have brought Silicon Valley
testing expertise to over 16 countries. Michael
holds a Bachelor of Science in Engineering
from Carnegie Mellon University.
The DevOps Lifecycle
By Christine Paras
DevOps is really the journey toward the ideals of greater collaboration and immediate feedback, thereby achieving greater productivity and a high-quality product. For DevOps to be successful, Development, Operations, and the Test team must align their duties.
DevOps is an extension of Agile development that aims to streamline the process of software delivery as a whole. By making each development stage interdependent and incorporating Continuous Testing early and throughout the development cycle, this approach enhances efficiency and reduces risk by allowing Developers to work with Operations much earlier in the process.
Understanding DevOps
By Sanjeev Sharma
What is the goal of Continuous Integration?
Is it to enable Continuous Delivery of the code developers produce out to users? Yes, eventually. But first and foremost it is to enable ongoing testing and verification of the code: to validate that the code produced, integrated with that of other developers and with the other components of the application, functions and performs as designed. Once the application has been deployed to a production system, it is also a goal to monitor that application, to ensure that it functions and performs as designed in a production environment as it is being used by end users.
This brings us to the two practices of DevOps that are
enabled by Continuous Integration and Continuous
Delivery.
Namely:
Continuous Testing
Continuous Monitoring
Continuous Integration and Continuous Delivery are both (almost) meaningless without Continuous Testing. And not having monitoring, and hence not knowing how the application is performing in production, makes the whole process of DevOps moot. What good is a streamlined Continuous Delivery process if the only way you find out that your application's functionality or performance is below par is via a ticket opened by a disgruntled user?
Let’s break this down.
Continuous Integration and Continuous Delivery facilitate
Continuous Testing
Individual developers work to create code: to fix defects, add new features, enhance features, or make the code perform faster; any of these could be one of the several tasks (work items) they are working on. When done, they deliver their code and integrate it with the work done by other developers on their team, and with the unchanged code their team owns. Along the way, they run unit tests on their own code.
Once the integration is done, they run unit tests on the integrated code. They may also run other tests, such as white-box security tests and code performance tests. This work is then delivered to the common integration area of the team of teams, integrating the work of all the teams working on the project and all the code components that make up the service, application, or system being developed.
This is the essence of the process of Continuous Integration. What makes the process continuous is that each developer's code is integrated with that of their team as and when they check in the code and it gets delivered for integration. (For more detail, read Part 2 of this Understanding DevOps series.)
The important point to note here is the goal of the Continuous Integration process: to validate that the code integrates at all levels without error and that all tests run by developers pass. Continuous Testing starts right with the developers.
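That commit-stage gate can be modeled in a few lines: code is promoted to the integration area only if the unit tests pass against the integrated result. Purely for illustration, the code base is modeled here as a dict of file names to contents; none of these names come from a real tool.

```python
def integrate(mainline, change):
    """Merge a change set into the mainline (both modeled as dicts)."""
    merged = dict(mainline)
    merged.update(change)
    return merged

def commit_stage(mainline, change, unit_tests):
    """Integrate, run unit tests on the result, and promote only on green."""
    candidate = integrate(mainline, change)
    if all(test(candidate) for test in unit_tests):
        return candidate   # promoted to the team integration area
    return mainline        # rejected: the mainline stays green
```

The point of the model is the invariant: the mainline is only ever replaced by a candidate that has passed its tests, which is what keeps the integration area continuously testable.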
After validating that the complete application (or
service or system) is built without error, the application is
delivered to the QA area. This delivering of code from
the Dev or development environment to the QA
environment is the first major step of Continuous
Delivery.
There is Continuous Delivery happening as developers deliver their code to their team's integration space and to the project's integration space, but this is limited to the Dev space.
There is no new environment to target. When delivering
to QA, we are speaking of a complete transition from
one environment to another. QA would have its own
Continuous Testing and Continuous Monitoring
environment on which to run its suites of functional and
performance tests. DevOps principles preach that this
environment be production-like.
In addition, QA would potentially also need new data sets for each run of its test suites, possibly one or more times every day as Continuous Integration feeds Continuous Delivery in a steady stream. This means the Continuous Delivery process must not only transition the code from Dev to QA, but also ‘refresh’ or provision new instances of QA's production-like environment, complete with the right configurations and the associated test data to run the tests against.
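A minimal sketch of that refresh-per-run idea, under stated assumptions: each cycle provisions a fresh, production-like environment with its own data set and tears it down afterward. The provision and teardown functions are stand-ins for real infrastructure tooling (Chef, Puppet, cloud APIs), and the environment is modeled as a plain dict.

```python
import itertools

# Monotonic run counter, so each provisioned instance is distinct.
_run_ids = itertools.count(1)

def provision_qa_environment(config, test_data):
    """Stand up a fresh QA instance with the right config and its own data."""
    return {"run": next(_run_ids), "config": dict(config), "data": list(test_data)}

def run_test_cycle(config, test_data, suite):
    """Provision, run the suite, then tear down so nothing leaks
    into the next cycle."""
    env = provision_qa_environment(config, test_data)
    try:
        return [test(env) for test in suite]
    finally:
        env.clear()   # teardown: the instance does not survive the run
```

The design choice worth noting is that the environment is a parameter of the test run, not a long-lived shared resource, which is exactly what makes daily (or hourly) delivery into QA tractable.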
This makes Continuous Delivery a more complex process than just FTPing code over. (I will discuss this in more detail in a later post on infrastructure automation and infrastructure as code.) The key point to note is that the goal of Continuous Delivery is to get the code ready for test: to get the application to the right environment, continuously, so that it can be tested continuously.
If one extends the process described above to
delivering the service, application or system to a
staging and eventually a production environment, the
process and goal remains the same. The Ops team
wants to run their own set of smoke tests, acceptance
tests and system stability tests before they deliver the
application to the ‘must-stay-up-at-all-costs’ production
environment. That is done using the Staging
environment. This is a production-like environment that
needs to be provisioned just like the QA Environment. It
needs to have the necessary scripts and test data for
acceptance and performance tests that Ops would
run. Only when this last phase of Continuous Testing is
complete would the application be delivered to
Production. Continuous Delivery processes would
perform these tasks too – provision staging and
production environments and deliver the application.
Continuous Monitoring
In Production, the Ops team manages and ensures
that the application is performing as desired and the
environment is stable via Continuous Monitoring. While
the Ops teams have their own tools to monitor their
environments and systems, DevOps principles suggest
that they also monitor the applications. They need to
ensure that the applications are performing at optimal
levels – down to levels lower than system monitoring
tools would allow. This requires that Ops teams use tools
that can monitor application performance and issues.
It may also require that they work with Dev to build self-
monitoring or analytics gathering capabilities right into
the applications being built. This would allow for true
end-to-end monitoring – continuously.
The Four Continuous Processes of DevOps:
It would not be much of a stretch to say that the practice of DevOps is really made up of the implementation of these four basic processes:
Continuous Integration
Continuous Delivery
Continuous Testing
Continuous Monitoring
Of these, the end goal is really Continuous Testing and
Continuous Monitoring. Continuous Integration and
Continuous Delivery get the code in the right state and
in the right environment to enable these two processes.
Testing and Monitoring are what will validate that you
have built the right application, which functions and
performs as designed. In future posts, I will go into more
detail of the implementation of these four processes
and the associated practices that enable them.
Where Does QA Fit In DevOps?
By Tim Hinds
In a traditional software engineering organization, the
QA group is often seen as separate from the
Development group. Developers and testers have
different roles, different responsibilities, different job
descriptions, and different management. They are two
distinct entities.
However, folks outside the engineering team, say in Operations, generally consider Development and QA to be in the same group. From this perspective, those teams are working together to do a single job, with a single responsibility: deliver a product that works.
So what happens with QA in a DevOps organization?
When Development and Operations merge together,
where does that leave QA? How does the testing team
fit into a modern DevOps group? This article will take a
look at exactly that question.
The Reason Behind DevOps: Automated Deployment
When you look at the trend towards DevOps, it’s pretty
clear that companies are adopting this organizational
model in order to facilitate a practice of automated
software deployment. DevOps provides the structure
that enables teams to push software out as a service on
a weekly, or daily, or even hourly basis. The traditional
concept of a “software release” melts away into a
continuous cycle of service improvement.
DevOps is Agile taken through its logical evolution:
removing all the obstacles to getting high-quality
software in the hands of customers. Once you have a
smooth process for agile development and continuous
integration, automating the deployment process makes
total sense because it’s fulfilling the objectives that
business managers crave:
1. Faster time to market
2. Higher quality
3. Increased organizational effectiveness
But before we move on, let’s ponder that for a moment:
the purpose of DevOps is to get high-quality product out
to the market faster – even automatically. The notion of
“Quality” is built into the fabric of DevOps. If you
couldn’t reliably push high-quality software out the
door, DevOps would fail as a function.
Clearly, there is a critical role for QA in a DevOps
group. So how are people fitting it in?
The Product IS The Infrastructure
We asked Carl Schmidt, CTO of Unbounce, what he
thinks about QA and DevOps. Unbounce runs a SaaS
solution for online marketers, making it really easy to
build, publish, and A/B test landing pages without IT
resources. Unbounce has three development teams,
each with a resident QA team member, and practices
DevOps throughout the organization.
Carl states, “I’m of the mindset that any change at all
(software or systems configuration) can flow through
one unified pipeline that ends with QA verification. In a
more traditional organization, QA is often seen as
being gatekeepers between environments. However,
in a DevOps-infused culture, QA can now verify the
environments themselves, because now infrastructure
is code.”
The infrastructure is code. It’s a game-changing claim
to any traditional development organization.
Historically there was a clear separation between what
constituted the product and what constituted the
operation. You built a product, deployed it into a test
environment where it could go through some quality
control, and then eventually deployed it onto a live
production system where real users could get at it. If
there was a problem in product, the operations team
had to fix it.
But as the lines blur between product and operation
– as the very name DevOps implies – it’s no great
leap to recognize that the environment itself is a part
of the product.
Carl continues, “It’s QA’s responsibility to actually
push new code out to production, so the DevOps
team has been providing them with tools that make
blue-green deployments push-button easy. Our QA
team can then initiate deploys, verify that the
intended change functions as expected, cut over to
the newly deployed code, and also roll back if there
is any reported issue.“
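The blue-green flow Carl describes can be sketched in a few lines. This is a minimal illustration, not Unbounce's actual tooling; the `Router`, `deploy`, and `health_check` names are hypothetical.

```python
class Router:
    """Directs live traffic to one of two identical environments."""
    def __init__(self):
        self.live = "blue"    # environment currently serving users
        self.idle = "green"   # environment available for the next deploy

    def cut_over(self):
        # swap traffic to the freshly deployed environment
        self.live, self.idle = self.idle, self.live

def deploy_and_verify(router, deploy, health_check):
    """Deploy to the idle side; cut over only if QA's checks pass.

    Returns the environment now serving traffic. On failure nothing
    changes, which is the roll-back path: users never saw the build.
    """
    target = router.idle
    deploy(target)              # push-button deploy to the idle side
    if health_check(target):    # QA verifies before any user sees it
        router.cut_over()
    return router.live
```

The key property is that rolling back costs nothing: a failed verification simply leaves traffic where it was.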
DevOps QA Is About Preventing Defects, Not Finding
Them
QA takes a critical role in this organizational structure
because they have the visibility and the directive to
push code out when it is working, and roll it back
when it is not. This is a very different mindset from QA
organizations of 10 years ago, whose primary
responsibilities involved finding bugs. Today QA
groups are charged with preventing defects from
reaching the public site.
This has several implications:
QA owns continuous improvement and quality
tracking across the entire development cycle. They
are the ones who are primarily responsible for
identifying problems not just in the product but also in
the process, and recommending changes wherever
they can.
Tests are code, as any test automation expert will
tell you. It’s a necessity, of course. If your process is
designed to publish a new release every day (or
every hour) there is no room for manual testing. You
must develop automation systems, through code,
that can ensure quality standards are maintained.
Automation rules. Anything that can be
automated, should be automated. When Carl
describes Unbounce’s deployment process as “push-
button easy,” this is what he’s talking about.
Testers are the quality advocates, influencing both
development and operational processes. They don’t
just find bugs. They look for any opportunity to
improve repeatability and predictability.
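To make the "tests are code" point concrete, here is a minimal sketch of an automated quality gate; the check functions are placeholders for real functional or performance tests, and all names are illustrative.

```python
def check_login_page():
    # stand-in for a real functional check (UI, API, etc.)
    return True

def check_checkout_flow():
    # stand-in for an end-to-end or performance check
    return True

def quality_gate(checks):
    """Run every automated check; release only if all of them pass."""
    failures = [check.__name__ for check in checks if not check()]
    return len(failures) == 0, failures

# the gate that decides whether code moves toward production
ok, failures = quality_gate([check_login_page, check_checkout_flow])
```

Because the gate is ordinary code, it can run on every build with no human in the loop, which is what a daily or hourly release cadence requires.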
Beyond Functional Testing: Automation for Load Testing,
Stress Testing, and Performance Testing
Now we are at a very exciting time in the transformation
of QA, because while many organizations have mature
processes for automating functional testing, they are only
just beginning to apply these practices to other areas of
testing like security and stress testing.
In particular, load testing and stress testing are critical
disciplines for DevOps organizations that are moving at
high velocity. A bottleneck introduced to a critical
transaction process on an eCommerce website can bring
a business to its knees. You want to do everything you can
to identify scaling problems before a product is pushed
out to a production environment, and you also want to
keep a close eye on performance after it’s been
released.
If you are curious to understand how your process holds
up, check out this infographic: How Automated Your
Performance Testing Is.
Creating a DevOps Culture
Unbounce CTO Carl Schmidt has some wise advice about
his DevOps group: “we dislike using that term in favour of
saying that we have a DevOps ‘culture.’” DevOps isn’t an
individual, it’s a core value of a development
organization. DevOps is more about trust, people, and
teamwork than it is about process. It's about creating
software as an ongoing service, not a static product.
Although it’s not in the name (DevQuops, anyone?), the
only reason that DevOps works is because quality is built
into the entire system. You can’t get much more
important than that.
Tim Hinds
Product Marketing Manager
Tim Hinds is the Product
Marketing Manager for
NeoLoad at Neotys. He has a
background in Agile software
development, Scrum, Kanban,
Continuous Integration, Continuous Delivery, and
Continuous Testing practices. Previously, Tim was
Product Marketing Manager at AccuRev, a
company acquired by Micro Focus, where he
worked with software configuration management,
issue tracking, Agile project management,
continuous integration, workflow automation, and
distributed version control systems. https://
www.linkedin.com/in/timotheehinds
Struggling with Continuous Testing? You’re not alone...
The problems around automation have become increasingly complex. And now, automation is much more
integrated into the software development process…
We see CI moving onto virtual machines and DevOps running our automation all the time, on all kinds of
environments. It is no longer just the test team that runs automation, and reports results, and these tests are
occurring whenever a build happens. Many teams are still struggling with getting automated tests into their
current sprints, or new sprints. Some teams struggle just to get more tests automated in their development cycle
at all, and end up settling for adding new automation after a release, because they just do not have the time. If
this is your situation, fix it! It may not be an easy fix, but not fixing it has a negative impact on development.
What you need to consider before you make the jump to Continuous Testing…
How to make the jump to
DevOps
To make the true leap to DevOps,
you have to automate not just your
testing, but other tasks that were
the responsibilities of other teams.
This can get burdensome if you
don’t have mature Agile practices
in place, and these 3 steps will help
you get your automation in shape
for the leap.
Once you have efficient Agile practices, good low-maintenance test automation and Continuous Integration
processes in place, then Continuous Testing and DevOps are the next step in achieving hyper efficiency. By shifting your
testing left, you will uncover bugs earlier, saving both time and funds for your company.
By Christine Paras
3 best DevOps practices to create a
test-driven development environment
By Sanjay Zalavadia
In order to ensure a higher quality product is released
in the end, many teams have turned to test-driven
development. Under this scenario, quality assurance
metrics professionals first create various QA tests,
and then software engineers code based on these tests,
typically while using a robust enterprise test
management tool.
It's easy to see the benefits of such an approach, as it
ensures that QA metrics are kept front and center
throughout the entire process. However, it can also
introduce problems even for teams following agile
development best practices and using free agile test
management tools. In particular, test-driven
development requires quality assurance metrics
professionals to have a very clear understanding of what
the final product is supposed to do and all of the steps
they need to take to get there.
In order to make sure QA teams and software engineers
are always on the same page, it can be extremely
helpful to follow DevOps principles. A portmanteau of
Development and Operations, DevOps describes a
paradigm under which all teams always have a clear
understanding of what everyone else is up to throughout
the design and delivery process.
Many ideas and workflows can fall under the DevOps
umbrella, but these three are absolutely critical best
practices within a test-driven environment:
1) Embrace of automation
In any DevOps scenario, a heavy load is placed on the
shoulders of everyone involved. Individuals don't have
the luxury to only focus on one or a small number of
tasks, as they now have to be involved in all parts of the
software development process. There is simply no way
one person or even a small number of people can
manually develop tests, write the code, and run those tests.
Considering how critical the initial tests are in a test-
driven environment, it is usually advisable for the initial
tests to be created by hand. But, once the tests are
established, there's no need for the subsequent steps to
also be so labor intensive. Not all components need to
be coded by hand, and tests can be run
automatically. Relying on automation, especially in the
later stages, is crucial for ensuring that work is
completed quickly without any sacrifices from the
DevOps ideals.
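A minimal sketch of the test-first loop described above, using Python's standard `unittest`; the `apply_discount` feature is an invented example, and the implementation was written only to satisfy the tests specified first.

```python
import unittest

def apply_discount(price, percent):
    """Implementation written after the tests below were specified."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountSpec(unittest.TestCase):
    """The executable specification, authored before the code above."""
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Once tests like these exist, a CI tool can run them automatically on every change, which is the "automation in the later stages" the article recommends.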
2) Placement of end users front and
center
The name DevOps seemingly puts the focus on Dev and
Ops teams, but these are far from the only ones that
should, and do, fall under this paradigm's umbrella.
DevOps solutions are all about fostering a culture of
collaboration and understanding, including with the
ultimate end users. If the people who will actually be
using the final piece of software are not involved in its
development, then it's entirely possible that the final
product falls flat of expectations even if internal teams
are well aligned.
For the quality assurance metrics professionals in charge
of creating the initial tests, this means first gaining an
understanding of what end users really need from the
final software, and what issues will cause them to
abandon it. Armed with this knowledge, they can then
develop the initial tests to ensure that software
engineers are meeting all of the wants and needs on
the end-user side.
3) Adoption of an enterprise test
management tool suitable for the task
Sanjay Zalavadia
As the VP of Client Service for Zephyr, Sanjay brings over 15
years of leadership experience in IT and Technical Support
Services. Throughout his career, Sanjay has successfully
established and grown premier IT and Support Services
teams across multiple geographies for both large and small
companies. Most recently, he was Associate Vice President
at Patni Computers (NYSE: PTI) responsible for the Telecoms IT
Managed Services Practice where he established IT
Operations teams supporting Virgin Mobile, ESPN Mobile,
Disney Mobile and Carphone Warehouse. Prior to this Sanjay
was responsible for Global Technical Support at Bay
Networks, a leading routing and switching vendor, which
was acquired by Nortel. Sanjay has also held management
positions in Support Service organizations at start-up Silicon
Valley Networks, a vendor of Test Management software,
and SynOptics. www.getzephyr.com
In a test-driven development environment, it stands to
reason that tests play a large role in the entire
process. As such, these all-important tests should not
be entrusted to a lackluster test case tool. Only a
robust enterprise test management tool will ensure
that everyone has the information they need at all
stages of the software development lifecycle.
In many scenarios, a test-driven development
environment is an ideal way to ensure the final
product is of the highest caliber. Still, a test-driven
environment can be difficult for those new to the
concept to navigate. By adopting DevOps and a
number of associated key best practices, however,
these teams can get the most out of a test-driven
development environment.
Adopting DevOps
By Sanjeev Sharma
DevOps as a philosophy has had as its
centerpiece the principle that Dev and Ops
teams need to align better. This is a people and
organizational principle, not a process centric
principle. To me this is more important when adopting
DevOps than any other capability or tool. My last post
focused on the need to better align Dev and Ops and
the challenges that such organizational change would
address. This post discusses key guiding principles on
which this Dev-Ops alignment should be based. They are
designed to improve collaboration between these
organizations and to break down the proverbial ‘wall’; to
end the water-SCRUM-fall, when it comes to the
relationship between the Dev and Ops teams.
These guiding principles are:
1. Shift Left:
Organizationally the goals of DevOps are to bring Dev
and Ops closer. Not just at deployment time, as they may
do already, but all through the development cycle. It
requires Ops to allow Developers to take more
responsibility of the operational characteristics of the
applications they are building. In turn, it requires
Developers to keep Operational considerations in mind
while building their applications. This is referred to in the
DevOps world as 'Shift Left': shifting some Ops
responsibilities to earlier in the software delivery
lifecycle, towards Dev.
2. Collaborate across all teams:
The Dev and Ops teams typically use different tools to
manage their projects, for change management and
(outside of email) different collaboration tools. In order to
better collaborate across the organizational boundaries,
these teams should start using the same Change
Management and Work Item Management tools or, in the worst
case, use tools that are integrated, allowing
seamless visibility across tools and traceability between
respective Change Requests. Real-time collaboration
using a common tool is obviously the best scenario.
3. Build ‘Application Aware’ Environments
As we move towards Software Defined Environments, we
have the ability to build, version and manage complex
environments, all as code. All of the benefits of this are
moot if the environments are not a perfect fit for the
applications and more importantly the changes to the
applications being delivered. The goal is hence to build
Environments that are ‘Application Aware’ or are fine-
tuned for the applications they are designed to run. No
more cookie-cutter virtual images for all kinds of
applications. More importantly, one needs to ensure that
the environments are architected in a manner to allow
for the evolution of the applications, both as they are
developed and as they are projected to change or
evolve in the future. This would obviously require
close collaboration between Dev and Ops. Development
Architects and Product Managers need to work with the
IT Architects and Operations Managers to Architect
Environments and their projected evolution to align with
that of the application. Not only that, but also allow
enough resilience in the environments to allow for
unexpected change. For example, massive change
caused by a super successful App. Think Instagram and
how the founders had to keep changing their server
Aligning the Dev and Ops Teams
environment almost daily as millions joined their service.
4. Environment Sprints?
Agile Development Principles prescribe that at the end of
every Sprint, developers have a build. An executable that
runs, even if it does not do much in terms of functionality. It
has been proposed (by the likes of Gene Kim) that this
concept be extended to environments. This would mean
that at the end of each Sprint, the Dev team would have
an executable AND the Ops team would have an
environment, a production-like environment (potentially
with very limited production like capabilities), built that
the executable can be deployed on. This would allow the
build to be tested. That too in a production-like
environment. This would also provide immediate
feedback to the Ops teams on the behavior of the
application in their environment and use that feedback
to improve the environment. This also provides an
opportunity for the Ops and Dev teams to align better.
They will both have to use a single Sprint plan for releases
of their Builds and Environments respectively and will
provide feedback to both at the end of every Sprint,
which Dev can use to enhance their Apps' functionality
and performance, and Ops can use to enhance the
environment.
The Human factor
These principles are designed to foster Dev and Ops
alignment from a teaming perspective, from a people
perspective. These however do not replace the need
for good old team building. Whether it is through face-to-face
get-togethers or, in today's distributed world, through
regular virtual meetings and online collaboration in real
time. The best teams are the ones where the people
know each other, trust each other, look out for each
other and when there are challenges, know who to pick
up the phone (or launch Skype) and talk to. This series
on Adopting DevOps and my earlier series on
Understanding DevOps will continue in future posts.
Sanjeev Sharma
Sanjeev is an IBM
Distinguished Engineer,
and the CTO for DevOps
Technical Sales and
Adoption at IBM,
leading the DevOps
practice across IBM.
Sanjeev is responsible for
leading the worldwide
technical sales
community for DevOps
offerings across IBM’s
portfolio of tools and services, working with IBM’s
customers to develop DevOps solution
architectures for these offerings, and for driving
DevOps adoption. He speaks regularly at
conferences and has developed DevOps IP,
including the DevOps For Dummies book
published in 2013.
Sanjeev is a 20 year veteran of the software
industry. Sanjeev has an Electrical Engineering
degree from The National Institute of Technology,
Kurukshetra, India and a Masters in Computer
Science from Villanova University, United States.
He is passionate about his family, travel, reading,
Science Fiction movies and Airline Miles. He blogs
about DevOps at http://bit.ly/sdarchitect and
tweets as @sd_architect
It is a fundamental role for testing teams to align their test design, test automation, and test case development with
DevOps–not only to verify that code changes work but that the changes do not break the product. A key
differentiator of DevOps is testing maturity. An organization can automate its integration, testing, delivery, and
monitoring, but still have issues with the intelligence of test orchestration and automation, which can lead to a
bottleneck if not resolved beforehand.
In most projects, TestArchitect can be used in a general DevOps process. Following the Action Based Testing method,
TestArchitect offers capabilities to develop and automate test cases quickly in the same sprint as the application under
test, thus allowing team members to become actively involved in the testing and automation process, each from their
own skill perspective. Apart from QA other team skills/roles will typically be development, operations and product
ownership.
There are four continuous processes in DevOps as below.
Once the automated tests are
ready, they can be put into
test suites, and their
execution can be defined in
a batch file, which can be
executed by any scheduling
or continuous integration
tool, and even be used for
testing in production. The
results of the automated
execution can be
generated in xUnit, XML, or
HTML format, which
reporting tools can read and
run reports against.
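As a rough illustration of consuming such results, here is a sketch that reads a JUnit/xUnit-style report with Python's standard library; the XML shape shown is the common JUnit layout, and TestArchitect's exact output format may differ.

```python
import xml.etree.ElementTree as ET

# a small JUnit-style result file, inlined for the example
SAMPLE = """<testsuite name="login" tests="3" failures="1">
  <testcase name="valid_login"/>
  <testcase name="locked_account"/>
  <testcase name="bad_password"><failure message="timeout"/></testcase>
</testsuite>"""

def summarize(xunit_xml):
    """Return (total, failed test names) from a JUnit-style testsuite."""
    suite = ET.fromstring(xunit_xml)
    failed = [case.get("name")
              for case in suite.iter("testcase")
              if case.find("failure") is not None]
    return int(suite.get("tests")), failed

total, failed = summarize(SAMPLE)
```

Because the format is machine-readable, a CI tool can parse it the same way after every scheduled run and surface failures without anyone opening a report by hand.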
You can generate the batch file in the execution
dialog.
The generation of execution information is controlled
in the second tab of the execution dialog:
A file is created with an execution command that
includes which tests to run, and all relevant settings
and options specified in the rest of the execution
dialog.
Allowing the team to create powerful and maintainable
tests and organize their automation as outlined above
can help teams achieve an effective and efficient
development and release process.
By LogiGear Staff
How to leverage TestArchitect in DevOps
An Interview with Skytap’s Sumit Mehrotra
Interviewed by Michael Hackett
1. What are the main differences between cloud-
based environments and cloud infrastructure?
An environment is a collection of infrastructure elements
working in conjunction to enable an application stack to
work.
For example, a simple 3-tier application, with a web front
-end component, a business logic component and a
database component requires, at a minimum, the
following infrastructure components: a few VMs, a few
storage disks, and a network. All of these components
are required, and they need to work together as an
environment for the application to be functional. Note,
that as the complexity of the application stack grows,
the definition of an environment grows bigger than just
infrastructure components. It can include VPNs, public IP
addresses, an object store, and other service end-points.
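The idea of an environment as a unit of work, rather than loose infrastructure parts, can be sketched roughly like this; the `Environment` class and component names are illustrative, not Skytap's API.

```python
from dataclasses import dataclass, field

@dataclass
class Environment:
    """A unit of work: infrastructure pieces that only work together."""
    name: str
    vms: list = field(default_factory=list)
    disks: list = field(default_factory=list)
    networks: list = field(default_factory=list)

    def is_complete(self):
        # the application is only functional when every layer is present
        return bool(self.vms and self.disks and self.networks)

# the simple 3-tier example from the interview
three_tier = Environment(
    name="shop-qa",
    vms=["web-frontend", "business-logic", "database"],
    disks=["db-data"],
    networks=["internal"],
)
```

Treating the collection as one named object is what lets a team deploy, share, or discard "the environment" rather than chasing individual VMs and disks.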
From an application development and testing
perspective, the application environment is the starting
point of productive work. However, we see teams
spending a large amount of time just getting the
infrastructure components to a working state and hence
losing out on a good portion of their productive work.
Working with environments as a unit of work can enable
these teams to be more productive and hence service
customer and business requirements at a faster pace
with higher quality.
2. What is “Environment or Infrastructure as Code?”
How can we use this idea in testing?
As the name suggests, Infrastructure-as-Code is the
practice of representing infrastructure as code blocks
so that they can be executed whenever needed to
build an application stack (environment) in a
repeatable and consistent manner. Depending on the
actual tool used, it can allow you a variety of things
including:
Deploy basic infrastructure elements, VMs, disks,
networks, etc.
Install and configure operating systems
Deploy and configure application components
Populate application data
With a repeatable and consistent process, bugs can be
reproduced, fixed, and validated efficiently.
Developers can recreate the same environment that a
tester is using to find bugs by using the same
Infrastructure-as-Code scripts as the tester is using. This
practice goes a long way in solving the problem of ‘it-
works-on-my-machine’ syndrome.
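A toy sketch of that repeatability claim: the same definition, executed twice, yields identical state, so the developer can rebuild exactly what the tester ran. Every step here is a stub standing in for a real provisioning call.

```python
def build_environment(definition):
    """Execute an ordered list of (step, argument) pairs and record the
    resulting state; the same definition always yields the same state."""
    state = []
    for step, arg in definition:
        # each entry stands in for a real provisioning call
        state.append(f"{step}:{arg}")
    return state

# the environment definition, kept in version control like any code
DEFINITION = [
    ("deploy_vm", "web"),
    ("deploy_vm", "db"),
    ("configure_os", "ubuntu-20.04"),
    ("deploy_app", "shop-v1.4"),
    ("load_data", "seed.sql"),
]

tester_env = build_environment(DEFINITION)
developer_env = build_environment(DEFINITION)  # same code, same result
```

The definition lives in version control, so "the environment where the bug appeared" is a specific, reproducible revision rather than a hand-built machine.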
There are various paradigms available to create
consistent environments in a repeatable manner. Two
such popular paradigms are “patterns” and “clones.”
With patterns, you can create the environment from
scratch each time—allowing you to progress the same
exact code at each point of your software delivery
pipeline. But it takes time for the environment to be
provisioned and can be error-prone especially, for very
complex environments. With clones, you can create
another copy of the environment from a pre-existing
copy. The process can be very fast, even for complex
environments. However, moving the same clone
through your SDLC can be a heavyweight process.
There is no right or wrong paradigm here. Successful
teams have chosen a combination of paradigms and
have made trade-offs between time and tooling
consistency, to achieve the velocity they need in their
software delivery lifecycle.
3. How do companies use cloud-based environments
to facilitate testing in DevOps?
Testing especially in the DevOps world has the following
key requirements:
Instant self-service access to application stacks for
testing
Support the complexity and scale of the application
stack needed at various stages of testing
Consistent and reproducible ways to create test
environments
Ability to collaborate and share environments with
developers and testers as part of a feedback loop.
Constant feedback is a key aspect of Agile
development
Ability to ensure that the right resources are available
at any time and are being utilized judiciously
All of these requirements are hard to meet without a
cloud-based platform. So, cloud-based environments are
essential to facilitate testing workflows.
4. We know that getting the right environments for
testing can be difficult, and sometimes it’s out of a
Test team’s control. How are innovative organizations
solving this challenge?
Organizations are increasingly turning to the cloud for on-
demand and self-service access test environments, but
not every cloud platform is suited for the needs of the
applications in an organization’s portfolio. Innovative
organizations evaluate the environment needs of their
applications, and some of the criteria that we have seen
customers use are as follows:
Configurability - Can the platform deliver test
environments that capture the complexity of my
application at each stage of testing?
Consistency – Can the platform deliver test
environments that are in the exact state that we
need them to be, whenever I want?
Collaboration - Can the platform make it easier for
my team (Devs + QA + Ops) to work together more
productively?
Control – Does the platform have controls to ensure
the right people have the appropriate resources to
do their jobs?
5. How long does it take to build these
environments? Some Test teams do not have
dedicated IT members—does the Test team often do
this?
The time to get a fully configured environment to test
can vary in range from a few minutes to a few weeks.
The availability of cloud platforms, and especially IaaS
has reduced the far-end of the range to a few days
primarily by taking the acquisition of hardware out of
the equation. This is the lowest layer of the environment
and typically, the one that used to take the longest.
Reducing the time to build these environments to
minutes requires more than just infrastructure. It requires
a platform that can support the deployment of fully
configured environments. Paradigms like patterns and
clones, discussed earlier, are essential to achieve that
speed of deployment.
Additionally, these paradigms help with reuse of
expertise within the organization as well. Environments
capture the expertise needed to deploy various
aspects of the application, be it infrastructure (IT team),
presentation layer (UI team), business logic (app server
team), database layer (DBAs). All the tester has to do is
click a button or call an API to deploy the environment.
The individual components are automatically deployed
based on the definition of each component captured
in the environment definition.
6. How does this impact running an automated test
suite?
Automated testing at each stage can be made more
efficient and predictable by using consistent, fully
configured environments that can be deployed using
API calls. Continuous Integration and Continuous
Delivery can actually be realized if the environments of
desired size and configuration can be deployed within
seconds and minutes versus hours and days.
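A rough sketch of that workflow, with a stub standing in for a real platform API client such as Skytap's: deploy a fully configured environment via an API call, run the suite against it, then tear it down.

```python
class StubClient:
    """Stand-in for a real environment-platform API client."""
    def deploy(self, template):
        # a real call would provision a fully configured environment
        return f"{template}-001"

    def destroy(self, env_id):
        pass  # a real call would release the resources

def run_suite_on_demand(client, template, tests):
    """Deploy, run every test against the environment, then tear down."""
    env_id = client.deploy(template)
    try:
        return [(test.__name__, test(env_id)) for test in tests]
    finally:
        client.destroy(env_id)  # environments are disposable

def smoke(env_id):
    # trivial check standing in for a real automated suite
    return env_id.endswith("-001")

results = run_suite_on_demand(StubClient(), "qa-template", [smoke])
```

The `try`/`finally` mirrors the economics described above: because a fresh environment is seconds away, there is no reason to keep one alive after the suite finishes.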
7. Getting the right data to test with, or setting up the
right data is also a problem area for many teams.
What strategies or tools help with this?
A few strategies can help with test data management:
Populating data in base environments which can
then be cloned into new copies of the environment
when needed. Once testing is done, the
environment, along with its data, can be thrown away.
It is cheap to recreate a new environment with the
desired data set.
Creating a database as part of environment
creation. In this model, the data is created by using
some automation code. This slows the set-up of the
environments but ensures that the data creation is
based on the latest desired state of the test data
set.
Using data virtualization and data-masking tools to
continuously keep a close-to-production copy of
test data for testing. This is especially important in
the later stages of testing like pre-prod certification
testing.
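The first strategy above (clone a seeded base, then throw the copy away) can be sketched like this; `copy.deepcopy` stands in for a platform-level environment clone, and the data set is invented for illustration.

```python
import copy

# base data set, seeded once into the base environment
BASE_DATA = {"users": ["alice", "bob"], "orders": []}

def with_cloned_data(test):
    """Run a test against a throwaway clone of the base data set."""
    data = copy.deepcopy(BASE_DATA)  # stands in for a platform clone
    try:
        return test(data)
    finally:
        del data  # the clone is disposable; the base stays pristine

def place_order(data):
    # a destructive test: it mutates the data it is given
    data["orders"].append({"user": "alice", "item": "widget"})
    return len(data["orders"])

order_count = with_cloned_data(place_order)
```

Even though the test mutates its data, the base set is untouched, so every run starts from the same known state.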
8. Are cloud-based environments also a popular
choice for those doing continuous integration and
or continuous testing?
Absolutely. The key requirement for CI/CD—speed of
execution and feedback—cannot be achieved until
test environments can be deployed on-demand, in an
automated fashion, in a consistent state, and with
desired complexity.
9. We know containers are a stripped down wrapper
around software components or services. How
does Skytap use containers?
Skytap views containers as another component of the
application stack just like all other components:
infrastructure, middleware and applications. As
customers modernize parts of their application that are
a good fit for container technology, they are able to use
them in Skytap environments, along with the other parts
of their application that are not containerized. This is a
very common scenario for traditional enterprise
applications that tend to incorporate new technologies
and are modernized over time.
With containers in Skytap, not only are customers able to
use the container toolchain with container hosts in
Skytap, but they are also able to still achieve the same
efficiencies in their SDLC that they have traditionally done
with environments. Additionally, Skytap customers have full
control over their container hosts. This is not available in
many container services, but is often desired by our
enterprise customers. Customers can run containers on host
VMs in their Skytap environments. They can also use the
Skytap driver for Docker Machine to manage containers
running on hosts in Skytap. If you are interested in learning
more about Skytap’s support for containers, please contact
10. How do you see the use of containers impacting
testing?
Container technology can affect testing in many ways.
Some of these are:
Deploying test environments: Application test
environments that are entirely container-based or those
that use containers for some application components
require new ways for deploying the test environments
while still meeting the requirements of consistency.
Testing both code and deployment: Containers
encapsulate both the code changes and the
deployment process by design. Containers deploy fully
configured application components across the testing
pipeline. Testing that traditionally only used to focus on
code changes and deployment changes in isolation will
need to accommodate both.
Integration with tools across the testing toolchain:
Consideration must be given to the choice of build,
testing, and automation tools that can work with
traditional application components side-by-side with
container-based components.
Sumit Mehrotra
is Sr. Director of Product
Management at Skytap,
a role in which he is
responsible for product
strategy and roadmaps.
Prior to Skytap, Sumit
worked at Microsoft in
different roles and has
shipped a number of
products, including Windows Azure and
Windows operating system.
Sumit holds an MBA from University of Chicago
Booth School of Business and a Masters in
Computer Science from Boston University.
The 7 Best DevOps Books
By Steve Ropa
With the relative newness of DevOps, there
are not yet a ton of DevOps books. That’s
why we’ve assembled a list of the 7 best
DevOps books based on four criteria: the
number of ratings from Amazon, the average Amazon
rating, number of ratings from GoodReads and the
average GoodReads rating. Both Amazon and
GoodReads use a scale of 1 to 5 stars with 5 stars
being the best.
We did all the legwork digging through Amazon and
GoodReads to determine how many reviews each
book has as well as the average rating on each site so
that you can quickly find the DevOps book that is just
the right fit for your needs!
DevOps Books List
1. The Phoenix Project: A Novel About IT, DevOps, and
Helping Your Business Win By Gene Kim, Kevin Behr,
George Spafford.
4.6 Average Amazon rating (1,012 ratings).
4.17 Average GoodReads rating (3,350 ratings)
Book Description: Bill is an IT manager at Parts
Unlimited. It’s Tuesday morning and on his drive into
the office, Bill gets a call from the CEO. The
company’s new IT initiative, code named Phoenix
Project, is critical to the future of Parts Unlimited, but
the project is massively over budget and very late.
The CEO wants Bill to report directly to him and fix the
mess in ninety days or else Bill’s entire department will
be outsourced.
With the help of a prospective board member and
his mysterious philosophy of The Three Ways, Bill starts
to see that IT work has more in common with
manufacturing plant work than he ever imagined.
With the clock ticking, Bill must organize workflow,
streamline interdepartmental communications, and
effectively serve the other business functions at Parts
Unlimited.
In a fast-paced and entertaining style, three
luminaries of the DevOps movement deliver a story
that anyone who works in IT will recognize. Readers
will not only learn how to improve their own IT
organizations, they’ll never view IT the same way
again.
2. What is DevOps? By Mike Loukides
From adopting the culture, to implementing Continuous Delivery
3.7 Average Amazon rating (57 ratings)
3.46 Average GoodReads rating (167 ratings)
Book Description: Have we entered the age of NoOps
infrastructures? Hardly. Old-style system administrators
may be disappearing in the face of automation and
cloud computing, but operations have become more
significant than ever. As this O’Reilly Radar Report
explains, we’re moving into a more complex
arrangement known as “DevOps.”
Mike Loukides, O’Reilly’s VP of Content Strategy,
provides an incisive look into this new world of
operations, where IT specialists are becoming part of
the development team. In an environment with
thousands of servers, these specialists now write the
code that maintains the infrastructure. Even
applications that run in the cloud have to be resilient
and fault tolerant, need to be monitored, and must
adjust to huge swings in load. That was underscored
by Amazon’s EBS outage last year.
From the discussions at O’Reilly’s Velocity Conference,
it’s evident that many operations specialists are
quickly adapting to the DevOps reality. But as a
whole, the industry has just scratched the surface. This
report tells you why.
3. Building a DevOps Culture By Mandi Walls
4.2 Average Amazon rating (20 ratings)
3.20 Average GoodReads rating (108 ratings)
Book Description: DevOps is as much about culture as
it is about tools. When people talk about DevOps, they
often emphasize configuration management systems,
source code repositories, and other tools. But, as
Mandi Walls explains in this Velocity report, DevOps is
really about changing company culture—replacing
traditional development and operations silos with
collaborative teams of people from both camps. The
DevOps movement has produced some efficient
teams turning out better products faster. The tough
part is initiating the change. This report outlines
strategies for managers looking to go beyond tools
to build a DevOps culture among their technical
staff.
Topics include:
Documenting reasons for changing to DevOps
before you commit
Defining meaningful and achievable goals
Finding a technical leader to be an evangelist,
tools and process expert, and shepherd
Starting with a non-critical but substantial pilot
project
Facilitating open communication among
developers, QA engineers, marketers, and other
professionals
Realigning your team’s responsibilities and
incentives
Learning when to mediate disagreements and
conflicts
Download this free report and learn how the
DevOps approach can help you create a supportive
team environment built on communication, respect,
and trust.
4. Continuous Delivery: Reliable Software Releases
through Build, Test, and Deployment Automation By
Jez Humble, David Farley
4.4 Average Amazon rating (66 ratings)
Winner of the 2011 Jolt Excellence Award
Book Description: Getting software released to users
is often a painful, risky, and time-consuming
process. This groundbreaking new book sets out the
principles and technical practices that enable rapid,
incremental delivery of high quality, valuable new
functionality to users. Through automation of the
build, deployment, and testing process, and
improved collaboration between developers, testers,
and operations, delivery teams can get changes
released in a matter of hours, sometimes even
minutes, no matter the size of a project or the
complexity of its code base.
Jez Humble and David Farley begin by presenting the
foundations of a rapid, reliable, low-risk delivery
process. Next, they introduce the “deployment
pipeline,” an automated process for managing all
changes, from check-in to release. Finally, they
discuss the “ecosystem” needed to support
continuous delivery, from infrastructure, data and
configuration management to governance.
5. Next Gen DevOps: Creating the DevOps
Organisation By Grant Smith
4.5 Average Amazon rating (2 Amazon ratings)
4.00 Average GoodReads rating (3 GoodReads
ratings)
Book Description: A coherent and actionable
DevOps framework is now available to businesses
through a revolutionary new book, Next Gen
DevOps: Creating the DevOps Organisation. Utilising
nearly two decades’ experience at firms including
AOL, Electronic Arts (EA) and British Gas’ Connected
Homes, the book’s author and pioneer of the
DevOps movement, Grant Smith, has distilled the
essence of DevOps into an easily-implementable
framework. Next Gen DevOps merges behaviour-
driven development, infrastructure-as-code,
automated testing, monitoring and continuous
integration into a single coherent process. The book
presents an implementation strategy that allows
firms large or small, start-up or enterprise to
embrace the move to DevOps.
By presenting a new way to look at the operations
discipline, Next Gen DevOps challenges the old
idea of a team languishing at the end of the
software development lifecycle, forever context-
switching between support tasks, security, data
management, infrastructure and software
deployment. Armed with the lessons learned from
history and the Agile software development
movement, combined with the latest in Software-as-
a-Service (SaaS) solutions, cloud computing and
automated testing, Next Gen DevOps sets out
Grant’s vision for IT in business’ biggest evolution yet.
“Every company is now an internet firm – and that
means changes in the way we work,” Grant Smith
says. “It’s time to drop the silos between our IT teams
and work as organisations to improve and develop
our products. Using simple theories and practices,
Next Gen DevOps: Creating the DevOps
Organisation offers a framework that can transform
any internet company.
6. The IT Manager’s Guide to Continuous Delivery:
Delivering Software in Days By Andrew Phillips,
Michiel Sens
4.2 Average Amazon rating (2 Amazon ratings)
Book Description: Turning good ideas into
marketable software quickly is now a business
imperative for every enterprise. Delivering software
features faster and with high quality is the first critical
step. The subsequent step is to rapidly collect
feedback from users to guide the next set of ideas for
further improvements. Critical software development
objectives such as these set the stage for The IT
Manager’s Guide to Continuous Delivery: Delivering
Software in Days, Instead of Months.
The book champions the concept of Continuous
Delivery in enabling organizations to build automated
software delivery platforms for releasing high-quality
applications faster. The book also presents how
Continuous Delivery is a set of processes and
practices that radically removes waste from the
software production process and creates a rapid and
effective feedback loop with end users.
7. Leading the Transformation: Applying Agile and
DevOps Principles at Scale By Gary Gruver, Tommy
Mouser
Book Description: Software is becoming more and
more important across a broad range of industries,
yet most technology executives struggle to deliver the
software improvements their businesses require.
Leading-edge companies like Amazon and
Google are applying DevOps and Agile principles
to deliver large software projects faster than
anyone thought possible. But most executives don't
understand how to transform their current legacy
systems and processes to scale these principles
across their organizations.
Leading the Transformation is an executive guide,
providing a clear framework for improving
development and delivery. Instead of the
traditional Agile and DevOps approaches that
focus on improving the effectiveness of teams, this
book targets the coordination of work across teams
in large organizations—an improvement that
executives are uniquely positioned to lead.
Conclusion
DevOps is an emerging methodology that is
growing and changing quickly. This relative
newness and rapid change make it difficult to find
great DevOps books. I hope our list has made your
search a little easier and that you have found some
DevOps books you are interested in reading!
Steve Ropa
has more than 25 years of experience in software
development and 15 years of experience working
with agile methods. Steve is passionate about
bridging the gap between the business and
technology and nurturing the change in the nature
of development. As an agile coach and VersionOne
product trainer, Steve has supported clients across
multiple industry verticals including
telecommunications, network security,
entertainment, and education. A frequent presenter
at agile events, he is also a member of the Agile
Alliance and Scrum Alliance. In his personal time,
Steve plays principal trombone in a regional
orchestra and is an avid woodworker.
https://blogs.versionone.com/agile_management/author/sropa/
Direct your organization into DevOps
The ownership of quality has evolved: don't get left behind
By Michael Hackett
Welcome to our new feature in LogiGear Magazine!
We will be doing a column in each issue on current
topics and how to manage, deal with, and support your
team through them.
This first installment of Leader's Pulse is about making the
move to DevOps. This is a large topic and will be covered
over a few magazine issues. What I would like to cover in
this article are two topics: a high-level mindset topic, the
growing and evolving ownership of quality; and a low-level
detail topic, DevOps and its impact on test
environments and data.
First, who owns quality? Of course, it's a trick question:
the answer is that everyone owns quality; there is no
single owner. And certainly the Test team does not own
quality alone. Testers may be running more tests, even
sets of tests that they do not own, like performance tests.
But the speed of DevOps, the reliance on fast and
consistent reporting from running all the automated test
suites, and the reliance on speedy creation of
environments all reinforce the broad understanding that
everyone, at every single step, owns the quality of the
delivered product, an understanding worth repeating.
But if someone felt like arguing that there is a single
team that owns quality, I would say it is whoever
manages the product owners in your organization. The
usual suspects include, but are not limited to, the
Director of Development, the VP of Engineering, and the
Product Manager. Once a product has a product roadmap
and the team is sized, the Dev and Test teams have
their first set of constraints on quality.
But that is not the subject for today.
Today's topic of discussion is the growing and evolving
ownership role in quality. DevOps pushes more people
into the ownership-of-quality discussion. The move to
Agile in the early 2000s showed unmistakably that
Developers have a clear and big impact on quality by
incorporating code reviews, pair programming, and unit
testing, among their many other practices. Test teams
have their practices too: requirements and user story
analysis, exploratory testing, design collaboration, test
automation, and bug finding and reporting among them.
Key to being Agile today is that most companies
have implemented various principles of Lean. Lean
Software Development (LSD) outlines 7 principles; one is
Quality at Every Step. This is sometimes referred to as
"build quality in" or "build integrity in". It means exactly
what it sounds like: you have to build quality in at every
step. That means quality user stories, quality code, quality
unit tests, quality test cases, quality bug reports, quality
automated scripts, quality performance tests, quality
environments, and quality data, among many other
deliverables at every step.
One of the things DevOps does is put a spotlight on
automating every one of the reproducible quality
practices, e.g., re-running unit tests and re-running the
test team's many automated suites. This also means things
that were traditionally done at the end of the delivery
process, e.g., performance and security tests, now have
to happen much earlier. The decision to move those tests
will have an impact and, often, a cost. That decision is a
quality assurance decision. The benefit of moving security
and performance testing earlier into the Continuous
Integration process is that performance or security bugs
can be found and fixed earlier, when they are cheaper to fix.
If Ops is in charge of environments, cloud infrastructure,
containers or whatever virtualized services you are using
for environments and data—then obviously Ops owns
pieces of delivering a quality product.
Now let’s talk about, great test environments and great
test data.
I’ll start with a story of a client LogiGear has been
helping push their development practices into the new
millennium.
The team supported a complex system with both
legacy and new products running in different
environments, all completely integrated through shared
data. The two most important ingredients in testing, the
environments and the data, were a mess.
The builds were "pulled" rather than automated. The
environments were managed by the IT team; the data
was old, rarely scrubbed, and seldom mirrored
production data.
Because the data had so little integrity, the number of
"bugs" the Test team found came nowhere near a level
that inspired confidence. There were also issues, rooted
in the test environments, that the team did not uncover
as such. As a result, Dev went on wild goose chases only
to hit dead ends and throw the "bug" back to the Test
team, essentially undermining the team's credibility.
Clearly there were many problems to fix.
The first problem was that the manager of the team was
fighting for every ounce of help. He was the only person
on the team who really understood how productive,
responsive, and useful Test teams could be; most of the
management team had been in place for too long, had
no idea of the impact test teams can have on the
bottom line, and consequently didn't want to spend
money on them. Instead, his time was mainly spent
educating management (read: hitting his head against
the wall), protecting his team, and making incremental
change; only then could he move on to focusing on
day-to-day tasks.
Ultimately, the environments and data mess was caused
by finger pointing between Dev and IT/Ops, made worse
by management's unwillingness to care about the
problem or dedicate funds to fix it. Briefly, we fixed this
problem by auditing and measuring how many "bugs"
and waste-of-time problems for Developers and testers
were caused by testing on bad environments with bad
data. We presented these findings to management, and
we did not let anyone point fingers or assign blame. We
explained that the fix lay in making sure testing had a
dedicated IT person, that the hardware needed to be
brought into this century, and that the build process
needed to be automated, along with a more detailed
set of fixes for the data. With my guidance, a decade-long
nagging problem was completely fixed in under one month.
This was not 20 years ago. Even more alarming—it was a
fairly recent client. I am happy to say this team is now in
much, much better shape. Everyone is happier—Dev,
testers, and management. I hope you do not have these
problems.
DevOps, or even Agile for that matter, will not tolerate
this. DevOps shines a bright light on environments and
data. If your team has environment or data problems,
fix them now. We have known about these issues for a
long time; they are gone, we hope, from most
organizations, but the solution today is both more
pressing and, luckily, easier.
The reality of DevOps is really the journey toward its
ideals: greater collaboration, immediate feedback,
greater productivity, automating everything possible,
and getting your team the tools and resources to have
easy, fast, production-like environments at all times, as
well as great data to test against. Mirrored, current, live,
whatever you need: the data, like the environment,
should be high quality, reliable, predictable, current,
and as close to production data as is effective.
Collaborate with the IT/Ops teams to solve these
problems. Virtualized environments today are as
common as automated builds: VMs, cloud, platforms
as a service. Fix these problems; there should be no
excuses, especially when there are so many tools
available to help you.
Leading the organization and/or test team into the
DevOps era is a big task. It starts with a change in
culture. Let's make sure everyone is on the same page
with Quality Assurance practices throughout the
development cycle. Also, significant, incremental
change is the key; we can't change the world
overnight. Tackling environment and data problems is
not easy, but it's a great place to start toward making
the Test team much more productive and able to trust
its results and reporting on a more consistent basis.
Test in production?
This post is part of the Pride & Paradev series
With continuous deployment, it is common to
release new software into production multiple times a
day. A regression test suite, no matter how well
designed, may still take over 10 minutes to run, which
can lead to bottlenecks in releasing changes to
production.
So, do you even need to test before going live? Why not
just test changes in production?
Test changes in production
Test changes before production
Test changes in production
The website for The Guardian, the UK's third largest
newspaper, deploys on average 11 times a day, and
all changes are tested in production.
“Once the code is in production, QA can really start.”
“Sometimes deployments go wrong. We expect that;
and we accept it, because people (and machines) go
wrong. But the key to dealing with these kind of
mistakes is not to lock down the process or extend the
breadth, depth and length of regression tests. The
solution is to enable people to fix their mistakes
quickly, learn, and get back to creating value as soon
as possible.”
~ Andy Hume on Real-time QA [The Guardian
Developer Blog]
The key to testing changes as soon as they hit
production is to have real-time, continuous real user
experience monitoring. This includes metrics like page
views and page load time, which directly correlate to
advertising revenue, an incentive to keep these
healthy.
More comprehensive automated acceptance tests
can be written in a non-destructive style that allows
them to be run in production. These tests can be run
immediately following a fresh production deployment,
and as feedback from them is received, any issues can
be fixed, deployed immediately to production, and
tested again.
By Alister Scott
Test changes before production
Not many businesses are able to release software into
production without any form of testing: either there are
legislative requirements mandating testing, or the risk
of introducing errors is too high for their target market.
Whilst automated regression tests do take longer to run
than unit or integration tests, there are ways to manage
them to ensure the quickest path into production. These
strategies include running tests in parallel, only running
business-critical tests, only running against the single most
popular browser, or only running tests that are directly
related to your changes.
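The subset-selection strategy can be sketched in a few lines of Python. Everything here (test names, tags, helper functions) is a hypothetical illustration rather than any particular framework's API, though pytest's marker expressions work in a similar spirit:

```python
# Minimal sketch of tag-based test selection: tag each test, then run
# only the business-critical slice before a deploy.

def make_registry():
    """Return a list of (name, tags, test_fn) tuples."""
    return [
        ("test_checkout", {"critical", "chrome"}, lambda: True),
        ("test_search",   {"critical"},           lambda: True),
        ("test_settings", {"low-priority"},       lambda: True),
    ]

def select(registry, required_tag):
    """Keep only the tests carrying the required tag."""
    return [(name, fn) for name, tags, fn in registry if required_tag in tags]

def run(selected):
    """Run each selected test; return the names of any failures."""
    return [name for name, fn in selected if not fn()]

registry = make_registry()
critical = select(registry, "critical")  # only 2 of the 3 tests run pre-deploy
failures = run(critical)
```

The same shape supports the other strategies the article lists: a "chrome" tag restricts runs to the most popular browser, and tags derived from changed modules restrict runs to tests related to your changes.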
You can set up a deployment pipeline that runs a
selected subset of tests before deploying into production,
then runs the remaining tests (in a test environment)
afterwards. Any issues found by the subsequent tests are
judged to see whether they warrant another immediate
release or whether they can wait for the next set of
changes being deployed into production.
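A rough sketch of such a two-stage pipeline, with placeholder functions standing in for real build and deployment tooling:

```python
# Two-stage pipeline: a fast smoke subset gates the deploy; the full
# suite runs afterwards and only flags follow-up work, not a rollback.

def run_suite(tests):
    """Run tests; return the list of failing test names."""
    return [name for name, fn in tests if not fn()]

def pipeline(smoke_tests, remaining_tests, deploy):
    # Stage 1: the selected subset must pass before the production deploy.
    if run_suite(smoke_tests):
        return "blocked"              # failing smoke tests stop the release
    deploy()
    # Stage 2: remaining tests run after the deploy, in a test environment.
    late_failures = run_suite(remaining_tests)
    # Late failures are triaged: hotfix now, or fold into the next release.
    return "hotfix-needed" if late_failures else "clean"

deployed = []
smoke = [("login", lambda: True)]
rest = [("reports", lambda: False)]
result = pipeline(smoke, rest, lambda: deployed.append("v2"))
```

The key design choice is that only the smoke subset blocks the release; everything after the deploy produces a triage decision rather than an automatic rollback.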
Whilst you definitely should run tests before deploying to
production, it doesn’t mean that this has to drastically
hinder your ability to continuously deploy.
Alister Scott
is an Excellence Wrangler
for WordPress.com at
Automattic. He has
extensive experience in
automated software
testing and establishing
quality engineering
cultures in lean cross-
functional software
development teams. He
lives in Brisbane, Australia
with his wife and three
sons, and writes a popular software testing
blog at watirmelon.com.
Summer 2016
TiD 2016 Conference
Speaking Sessions, July 17th-20th, 2016, Beijing, China
Hans Buwalda will be giving the keynote speech at TiD 2016, where the conference will focus on "The
next generation of Software Development". The conference covers industries including the Internet,
mobile Internet, finance, insurance, telecommunications, safety and environmental protection, and
traditional industry software development.
STPCon Fall
Speaking Sessions, September 19th-22nd, 2016, Dallas, TX
Hans Buwalda will be giving his presentation "Using Keywords to Support Behavior Driven Development
(BDD)" and leading the workshop "What Makes Automated Testing Successful?" Michael Hackett will be
leading two workshops at STPCon Fall this year: "Move into DevOps: Experiences from the Real World"
and "Test Case Design Intensive". The Software Test Professionals Conference is the leading event where
test leadership, management, and strategy converge. The hottest topics in the industry are covered,
including agile testing, performance testing, test automation, mobile application testing, and test team
leadership and management.
Software Testing Club Conference 2016
July 16th, 2016, Saigon, VN
LogiGear Senior Manager of the Testing Center of Excellence Minh Hoang Ngo will be presenting "Test
Design with the Action-Based Testing Methodology" at the Software Testing Club Conference this year,
and LogiGear is exhibiting at the conference. The Software Testing Club Conference 2016 is the third
conference organized by the HCMC Software Testing Club with the co-organization of the HCMC
Computer Association. The conference aims to promote best practices in the software testing industry
and inspire software testing careers.
The following are terms either found in the accompanying articles, or are
related to concepts relevant to those articles.
DevOps
(Term derived from the words “Development” and
“Operations”) A software development practice,
grounded in agile philosophy, that emphasizes
close collaboration between an organization’s
software developers and other IT professionals,
while automating the process of software delivery
and infrastructure changes. It aims at establishing a
culture and organizational workflow in which
building, testing, and releasing software happens
rapidly, frequently, and more reliably.
Source: Wikipedia
Virtualization
Virtualization (or virtualisation) is the simulation of the software and/or hardware
upon which other software runs. This simulated environment is called a virtual
machine. There are many forms of virtualization, distinguished primarily by computing
architecture layer. Virtualized components may include hardware platforms, operating
systems (OS), storage devices, network devices, or other resources.
Source: Wikipedia
Infrastructure as Code (IaC)
Infrastructure as Code is the process of managing and provisioning computing
infrastructure (processes, bare-metal servers, virtual servers, etc.) and their
configuration through machine-processable definition files, rather than physical
hardware configuration or the use of interactive configuration tools. The definition
files may be kept in a version control system. This has been achieved previously
through either scripts or declarative definitions, rather than manual processes, but
developments specifically titled "IaC" are now focused on the declarative approaches.
Infrastructure as Code approaches have become increasingly widespread with the
adoption of cloud computing and Infrastructure as a Service (IaaS). IaC supports
IaaS, but should not be confused with it.
Source: Wikipedia
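As a toy illustration of the declarative approach described above (no real provisioning tool's API appears here; a Python dict stands in for a definition file), the desired state is just data, and a reconcile step derives the actions needed to reach it:

```python
# Desired infrastructure as data: the version-controlled definition,
# not a human at a console, is the source of truth for what should exist.
desired = {"web-1": "small", "web-2": "small", "db-1": "large"}

def reconcile(desired, actual):
    """Compare desired vs actual state; return (to_create, to_delete)."""
    to_create = {name: size for name, size in desired.items() if name not in actual}
    to_delete = [name for name in actual if name not in desired]
    return to_create, to_delete

actual = {"web-1": "small"}                  # what currently exists
to_create, to_delete = reconcile(desired, actual)
```

Real IaC tools are far richer, but the core loop is the same: edit the definition, run the tool, and the tool converges the environment toward the declared state.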
Cloud computing
The use of computing resources (hardware and
software) that are delivered as a service over a
network (typically the Internet). The name comes
from the use of a cloud-shaped symbol as an
abstraction for the complex infrastructure it
contains in system diagrams. Cloud computing
entrusts remote services with a user’s data,
software and computation.
Source: Tagetik
Testing in the Cloud
Cloud Testing uses cloud infrastructure for software testing. Organizations pursuing
testing in general, and load testing, performance testing, and production service
monitoring in particular, are challenged by several problems, such as limited test
budgets and tight deadlines. High costs per test, large numbers of test cases, little
or no reuse of tests, and the geographical distribution of users add to the
challenges.
Source: Wikipedia
EaaS (Environment as a Service)
Environment-as-a-Service, often referred to as IaaS Plus, extends the traditional
IaaS ecosystem into the application development space. Here, advanced automation is
layered on top of the existing IaaS instance not only to configure servers for a
particular application, but also to deploy and test all the other components needed
to run a given application.
For example, in EaaS, automation is in place that will simultaneously trigger a
software deployment and an IaaS provisioning workflow to run in parallel. This means
that while build packages are created, tested, and staged for deployment, the
infrastructure toolchain is provisioning the multi-server test environment requested
by the build server to support the current version of the application. This level of
automation enables organizations to deploy entire application environments with the
same level of consistency and reliability they have come to expect through IaaS.
Source: Infocus.Emc.com
Infrastructure as a Service (IaaS)
A standardized, highly automated offering in which compute resources, complemented by
storage and networking capabilities, are owned and hosted by a service provider and
offered to customers on demand. Customers are able to self-provision this
infrastructure, using a Web-based graphical user interface that serves as an IT
operations management console for the overall environment. API access to the
infrastructure may also be offered as an option.
Source: Gartner.com
Continuous Integration (CI)
A software engineering practice in which the changes made by developers to working
copies of code are added to the mainline code base on a frequent basis, and
immediately tested. The goal is to provide rapid feedback so that, if a defect is
introduced into the mainline, it can be identified quickly and corrected as soon as
possible. Continuous integration software tools are often used to automate the
testing and build a document trail. Because CI detects deficiencies early in
development, defects are typically smaller, less complex, and easier to resolve. In
the end, well-implemented CI reduces the cost of software development and helps
speed time to market.
Source: SearchSoftwareQuality
Continuous Testing (CT)
The process of executing automated tests as part of the software delivery pipeline to
obtain immediate feedback on the business risks associated with a software release
candidate.
For Continuous Testing, the scope of testing extends from validating bottom-up
requirements or user stories to assessing the system requirements associated with
overarching business goals. The goal of continuous testing is to provide fast and
continuous feedback regarding the level of business risk in the latest build or
release candidate. This information can then be used to determine if the software is
ready to progress through the delivery pipeline at any given time.
Source: Wikipedia
Continuous Deployment (CD)
Continuous Deployment is a software development practice in which every code change
goes through the entire pipeline and is put into production automatically, resulting
in many production deployments every day.
With Continuous Delivery, your software is always release-ready, yet the timing of
when to push it into production is a business decision, and so the final deployment
is a manual step. With Continuous Deployment, any updated working version of the
application is automatically pushed to production. Continuous Deployment mandates
Continuous Delivery, but the opposite is not required.
Source: Electric Cloud.com
Continuous Delivery (CD)
A software engineering approach in which teams produce software in short cycles,
ensuring that the software can be reliably released at any time. It aims at building,
testing, and releasing software faster and more frequently. The approach helps reduce
the cost, time, and risk of delivering changes by allowing for more incremental
updates to applications in production. A straightforward and repeatable deployment
process is important for continuous delivery.
Source: Wikipedia