
Guide to the
CASQ COMMON BODY OF KNOWLEDGE
Version 6.2 © 2008


Copyright

Copyright © Quality Assurance Institute 2008 All Rights Reserved

No part of this publication, or translations of it, may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or any other media embodiments now known or hereafter to become known, without the prior written permission of the Quality Assurance Institute.

Visit www.qaiworldwide.org for additional courseware and training seminars.


Table of Contents

Skill Category 1 – Quality Principles
1.1. Vocabulary of Quality
1.2. The Different Views of Quality
1.2.1. The Two Quality Gaps
1.2.2. Quality Attributes for an Information System
1.3. Quality Concepts and Practices
1.3.1. PDCA Cycle
1.3.2. Cost of Quality
1.3.2.1. The Three Key Principles of Quality
1.3.2.2. Best Practices
1.3.3. Six Sigma Quality
1.3.4. Baselining and Benchmarking
1.3.5. Earned Value
1.4. Quality Control and Quality Assurance
1.4.1. Quality Control
1.4.2. Quality Assurance
1.4.3. Differentiating Between Quality Control and Quality Assurance
1.5. Quality Pioneers Approach to Quality
1.5.1. Dr. W. Edwards Deming
1.5.2. Philip Crosby
1.5.3. Dr. Joseph Juran


Skill Category 2 – Quality Leadership
2.1. Leadership Concepts
2.1.1. Executive and Middle Management Commitment
2.1.1.1. Executive Management Commitment
2.1.1.2. Middle Management Commitment
2.1.2. Quality Champion
2.1.2.1. Leadership
2.2. Quality Management Infrastructure
2.2.1. Quality Council
2.2.2. Management Committees
2.2.3. Teams and Work Groups
2.2.3.1. Personal Persuasion
2.2.3.2. Resolving Customer Complaints
2.2.3.3. Written Reports
2.3. Quality Environment
2.3.1. The Six Attributes of an Effective Quality Environment
2.3.2. Code of Ethics and Conduct
2.3.3. Open Communications
2.3.3.1. Guidelines for Effective Communications
2.3.4. Mission, Vision, Goals, Values, and Quality Policy
2.3.4.1. Mission
2.3.4.2. Vision
2.3.4.3. Goals
2.3.4.4. Values
2.3.4.5. Quality Policy


Skill Category 3 – Quality Baselines
3.1. Quality Baseline Concepts
3.1.1. Baselines Defined
3.1.2. Types of Baselines
3.1.3. Conducting Baseline Studies
3.1.3.1. Conducting Objective Baseline Studies
3.1.3.2. Conducting Subjective Baseline Studies
3.2. Methods Used for Establishing Baselines
3.2.1. Customer Surveys
3.2.2. Benchmarking to Establish a Baseline Goal
3.2.3. Assessments against Management Established Criteria
3.2.4. Assessments against Industry Models
3.3. Model and Assessment Fundamentals
3.3.1. Purpose of a Model
3.3.2. Types of Models (Staged and Continuous)
3.3.2.1. Staged models
3.3.2.2. Continuous models
3.3.3. Model Selection Process
3.3.4. Using Models for Assessment and Baselines
3.3.4.1. Assessments versus Audits
3.4. Industry Quality Models
3.4.1. Software Engineering Institute Capability Maturity Model Integration (CMMI®)
3.4.1.1. Maturity Levels
3.4.2. Malcolm Baldrige National Quality Award (MBNQA)
3.4.2.1. Other National and Regional Awards
3.4.3. ISO 9001:2000
3.4.3.1. Model Overview
3.4.4. ISO/IEC 12207: Information Technology – Software Life Cycle Processes
3.4.4.1. Model Overview
3.4.5. ISO/IEC 15504: Process Assessment (Formerly Known as Software Improvement and Capability Determination (SPICE))
3.4.5.1. Model Overview


3.4.6. Post-Implementation Audits


Skill Category 4 – Quality Assurance
4.1. Establishing a Function to Promote and Manage Quality
4.1.1. How the Quality Function Matures Over Time
4.1.1.1. Three Phases of Quality Function Maturation
4.1.2. IT Quality Plan
4.1.2.1. Long-Term Actions
4.1.2.2. Short-Term Actions
4.2. Quality Tools
4.2.1. Management Tools
4.2.1.1. Brainstorming
4.2.1.2. Affinity Diagram
4.2.1.3. Nominal Group Technique
4.2.1.4. Cause-and-Effect Diagram
4.2.1.5. Force Field Analysis
4.2.1.6. Flowchart and Process Map
4.2.1.7. Benchmarking
4.2.1.8. Matrix
4.2.2. Statistical Tools
4.2.2.1. Check Sheet
4.2.2.2. Histogram
4.2.2.3. Pareto Chart
4.2.2.4. Run Chart
4.2.2.5. Control Chart
4.2.2.6. Scatter Plot
4.2.3. Presentation Tools
4.2.3.1. Table
4.2.3.2. Line Chart
4.2.3.3. Bar Chart
4.2.3.4. Pie Chart
4.3. Process Deployment
4.3.1. The Deployment Process
4.3.1.1. Deployment Phase 1: Assessment
4.3.1.2. Deployment Phase 2: Strategic
4.3.1.3. Deployment Phase 3: Tactical
4.3.2. Critical Success Factors for Deployment
4.4. Internal Auditing and Quality Assurance
4.4.1. Types of Internal Audits
4.4.2. Differences in Responsibilities


Skill Category 5 – Quality Planning
5.1. Planning Concepts
5.1.1. The Planning Cycle
5.2. Integrating Business and Quality Planning
5.2.1. The Fallacy of Having Two Separate Planning Processes
5.2.2. Planning Should be a Single IT Activity
5.3. Prerequisites to Quality Planning
5.4. The Planning Process
5.4.1. Planning Process Overview
5.4.2. The Six Basic Planning Questions
5.4.3. The Common Activities in the Planning Process
5.4.3.1. Business or Activity Planning
5.4.3.2. Environment Planning
5.4.3.3. Capabilities and Opportunities Planning
5.4.3.4. Assumptions/Potential Planning
5.4.3.5. Objectives/Goals Planning
5.4.3.6. Policies/Procedures Planning
5.4.3.7. Strategy/Tactics Planning
5.4.3.8. Priorities/Schedules Planning
5.4.3.9. Organization/Delegation Planning
5.4.3.10. Budget/Resources Planning
5.4.3.11. Planning Activities for Outsourced Work
5.5. Planning to Mature IT Work Processes
5.5.1. How to Plan the Sequence for Implementing Process Maturity
5.5.1.1. Relationship between People Skills and Process Definitions
5.5.1.2. Relationship of Do and Check Procedures
5.5.1.3. Relationship of Individuals' Assessment of How They are Evaluated to Work Performed
5.5.1.4. Relationship of What Management Relies on for Success
5.5.1.5. Relationship of Maturity Level to Cost to Do Work
5.5.1.6. Relationship of Process Maturity to Defect Rates
5.5.1.7. Relationship of Process Maturity and Cycle Time
5.5.1.8. Relationship of Process Maturity and End User Satisfaction
5.5.1.9. Relationship of Process Maturity and Staff Job Satisfaction
5.5.1.10. Relationship of Process Maturity to an Organization's Willingness to Embrace Change
5.5.1.11. Relationship of Tools to Process Maturity


Skill Category 6 – Define, Build, Implement, and Improve Work Processes
6.1. Process Management Concepts
6.1.1. Definition of a Process
6.1.2. Why Processes Are Needed
6.1.3. Process Workbench and Components
6.1.4. Process Categories
6.1.5. The Process Maturity Continuum
6.1.5.1. Product and Services Continuum
6.1.5.2. Work Process Continuum
6.1.5.3. Check Processes Continuum
6.1.5.4. Customer Involvement Continuum
6.1.6. How Processes Are Managed
6.1.7. Process Template
6.2. Process Management Processes
6.2.1. Planning Processes
6.2.1.1. Process Inventory
6.2.1.2. Process Mapping
6.2.1.3. Process Planning
6.2.2. Do Processes
6.2.2.1. Process Definition
6.2.3. Check Processes
6.2.3.1. Identify Control Points
6.2.3.2. Process Measurement
6.2.3.3. Testing
6.2.4. Act Processes
6.2.4.1. Process Improvement Teams
6.2.4.2. Process Improvement Process


Skill Category 7 – Quality Control Practices
7.1. Testing Concepts
7.1.1. The Testers' Workbench
7.1.2. Test Stages
7.1.2.1. Integration Testing
7.1.2.2. System Testing
7.1.2.3. User Acceptance Testing
7.1.3. Independent Testing
7.1.4. Static versus Dynamic Testing
7.1.5. Verification versus Validation
7.1.5.1. Computer System Verification and Validation Examples
7.1.6. The Life Cycle Testing Concept Example
7.1.7. Stress versus Volume versus Performance
7.1.8. Test Objectives
7.1.9. Reviews and Inspections
7.1.9.1. Review Formats
7.1.9.2. In-Process Reviews
7.1.9.3. Checkpoint Reviews
7.1.9.4. Phase-End Reviews
7.1.9.5. Post-Implementation Reviews
7.1.9.6. Inspections
7.2. Verification and Validation Methods
7.2.1. Management of Verification and Validation
7.2.2. Verification Techniques
7.2.2.1. Feasibility Reviews
7.2.2.2. Requirements Reviews
7.2.2.3. Design Reviews
7.2.2.4. Code Walkthroughs
7.2.2.5. Code Inspections or Structured Walkthroughs
7.2.2.6. Requirements Tracing
7.2.3. Validation Techniques
7.2.3.1. White-Box
7.2.3.2. Black-Box
7.2.3.3. Incremental
7.2.3.4. Thread
7.2.3.5. Regression
7.2.4. Structural and Functional Testing


7.2.4.1. Structural Testing
7.2.4.2. Functional Testing
7.3. Software Change Control
7.3.1. Software Configuration Management
7.3.2. Change Control Procedures
7.4. Defect Management
7.4.1. Defect Management Process
7.4.2. Defect Reporting
7.4.3. Severity versus Priority
7.4.4. Using Defects for Process Improvement


Skill Category 8 – Metrics and Measurement
8.1. Measurement Concepts
8.1.1. Standard Units of Measure
8.1.2. Metrics
8.1.3. Objective and Subjective Measurement
8.1.4. Types of Measurement Data
8.1.4.1. Nominal Data
8.1.4.2. Ordinal Data
8.1.4.3. Interval Data
8.1.4.4. Ratio Data
8.1.5. Measures of Central Tendency
8.1.6. Attributes of Good Measurement
8.1.6.1. Reliability
8.1.6.2. Validity
8.1.6.3. Ease of Use and Simplicity
8.1.6.4. Timeliness
8.1.6.5. Calibration
8.1.7. Using Quantitative Data to Manage an IT Function
8.1.7.1. Measurement Dashboards
8.1.7.2. Statistical Process Control
8.1.8. Key Indicators
8.2. Measurement in Software
8.2.1. Product Measurement
8.2.1.1. Size
8.2.1.2. Complexity
8.2.1.3. Quality
8.2.1.4. Customer Perception of Product Quality
8.2.2. Process Measurement
8.3. Variation and Process Capability
8.3.1. The Measurement Program
8.3.2. Common and Special Causes of Variation
8.3.2.1. Common Causes of Variation
8.3.2.2. Special Causes of Variation
8.3.3. Variation and Process Improvement
8.3.4. Process Capability


8.4. Risk Management
8.4.1. Defining Risk
8.4.2. Characterizing Risk
8.4.2.1. Situational
8.4.2.2. Time-Based
8.4.2.3. Interdependent
8.4.2.4. Magnitude Dependent
8.4.2.5. Value-Based
8.4.3. Managing Risk
8.4.4. Software Risk Management
8.4.5. Risks of Integrating New Technology
8.5. Implementing a Measurement Program
8.5.1. The Need for Measurement
8.5.2. Prerequisites


Skill Category 9 – Internal Control and Security
9.1. Principles and Concepts of Internal Control
9.1.1. Internal Control and Security Vocabulary and Concepts
9.1.1.1. Internal Control Responsibilities
9.1.1.2. The Internal Auditor's Internal Control Responsibilities
9.1.2. Risk versus Control
9.1.3. Environmental versus Transaction Processing Controls
9.2. Environmental or General Controls
9.3. Transaction Processing Controls
9.3.1. Preventive, Detective and Corrective Controls
9.3.1.1. Preventive Controls
9.3.1.2. Detective Controls
9.3.1.3. Corrective Controls
9.3.2. Cost versus Benefit of Controls
9.4. The Quality Professionals Responsibility for Internal Control and Security
9.5. Building Internal Controls
9.5.1. Perform Risk Assessment
9.5.2. Model for Building Transaction Processing Controls
9.5.2.1. Transaction Origination
9.5.2.2. Transaction Entry
9.5.2.3. Transaction Communications
9.5.2.4. Transaction Processing
9.5.2.5. Database Storage and Retrieval
9.5.2.6. Transaction Output
9.6. Building Adequate Security
9.6.1. Where Vulnerabilities in Security Occur
9.6.1.1. Functional Vulnerabilities
9.6.1.2. IT Areas Where Security is Penetrated
9.6.1.3. Accidental versus Intentional Losses
9.6.2. Establishing a Security Baseline
9.6.2.1. Creating Baselines
9.6.3. Using Baselines
9.6.4. Security Awareness Training
9.6.5. Security Practices


Skill Category 10 – Outsourcing, COTS and Contracting Quality
10.1. Quality and Outside Software
10.1.1. Purchased COTS software
10.1.1.1. Evaluation versus Assessment
10.1.2. Outsourced Software
10.1.2.1. Additional differences if the contract is with an offshore organization
10.1.2.2. Quality Professionals Responsibility for Outside Software
10.2. Selecting COTS Software
10.3. Selecting Software Developed by Outside Organizations
10.3.1. Contracting Life Cycle
10.4. Contracting for Software Developed by Outside Organizations
10.4.0.1. What Contracts Should Contain
10.5. Operating for Software Developed by Outside Organizations
10.5.1. Acceptance Testing
10.5.1.1. Acceptance Testing Concerns
10.5.1.2. Operation and Maintenance of the Software
10.5.1.3. Contractual Relations

Appendix A – Vocabulary


Skill Category 1 – Quality Principles

Before an organization can begin to assess the quality of its products and services, and identify opportunities for improvement, it first must have a working knowledge of quality principles and basic concepts. This category tests the CSQA candidate's ability to understand and apply these principles, which include the following:

• Vocabulary of Quality
• The Different Views of Quality
• Quality Concepts and Practices
• Quality Control and Quality Assurance
• Quality Pioneers Approach to Quality

1.1 Vocabulary of Quality

The quality language is the way quality professionals describe the principles, concepts, and approaches used for improving quality. Until the vocabulary is learned and its use encouraged in the organization, quality becomes a difficult program to achieve. For example, when the words "process" or "defect" are used, there must be a common understanding of what is meant by those terms.

Appendix A provides a glossary of definitions for terminology used in the quality language. This terminology is also referred to as the vocabulary of quality. Some of the more widely used terms are:


Defect

From the producer's viewpoint, a defect is a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product. From the customer's viewpoint, a defect is anything that causes customer dissatisfaction, whether in the statement of requirements or not.

Policy

Managerial desires and intents concerning either processes (intended objectives) or products (desired attributes).

Procedure

The step-by-step method followed to ensure that standards are met.

Process

• The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.

• A statement of purpose and an essential set of practices (activities) that address that purpose. A process or set of processes used by an organization or project to plan, manage, execute, monitor, control, and improve its software-related activities.

Productivity

The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process, to the value of the input resources required (using fair market values for both input and output).

Quality

Operationally, the word quality refers to products. A product is a quality product if it is defect free. For a CSQA professional, quality is further defined as follows:

Quality – Producer View

To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meets requirements. The producer's view of quality has these four characteristics: Doing the right thing, Doing it the right way, Doing it right the first time, and Doing it on time without exceeding cost.

Quality – Customer View

To the customer, a product is a quality product if it meets the customer's needs, regardless of whether the requirements were met. This is referred to as fit for use.


Standard

A requirement of a product or process. For example: 100 percent of the functionality must be tested.

1.2 The Different Views of Quality

Industry accepted definitions of quality are "conformance to requirements" (from Philip Crosby) and "fit for use" (from Dr. Joseph Juran and Dr. W. Edwards Deming). These two definitions are not inconsistent.

Meeting requirements is a producer's view of quality. This is the view of the organization responsible for the project and processes, and the products and services acquired, developed, and maintained by those processes. Meeting requirements means that the person building the product does so in accordance with the requirements. Requirements can be very complex or they can be simple, but they must be defined in a measurable format, so it can be determined whether they have been met. The producer's view of quality has these four characteristics:

• Doing the right thing
• Doing it the right way
• Doing it right the first time
• Doing it on time without exceeding cost

Being fit for use is the customer's definition. The customer is the end user of the products or services. Fit for use means that the product or service meets the customer's needs regardless of the product requirements. Of the two definitions of quality, fit for use is the more important. The customer's view of quality has these characteristics:

• Receiving the right product for their use
• Being satisfied that their needs have been met
• Meeting their expectations
• Being treated with integrity, courtesy and respect

In addition to the producer and customer views of quality, the organizational infrastructure also includes a provider and a supplier view. These views are as follows:

• Provider view – This is the perspective of the organization that delivers the products and services to the customer.

• Supplier view – This is the perspective of the organization (that may be external to the producer's company, such as an independent vendor) that provides either the producer and/or the provider with products and services needed to meet the requirements of the customer.

The infrastructure for quality products and services is illustrated in Figure 1-1. The figure shows the requirements coming from the customer to the producer/provider, who uses them to create the products and services needed by the customer. This process works because of the two-way measurement process established between the involved parties.

Figure 1-1 Infrastructure for Software Quality Products and Services

This infrastructure has been presented simplistically. In reality, the producer is the customer for the supplier, making the supplier the producer for the intermediate producer, and there may be a long chain of producers/providers and their customers. However, the quality characteristics by which an interim producer evaluates supplier products are really producer quality characteristics and not end user/customer quality characteristics.

1.2.1 The Two Quality Gaps

Most Information Technology (IT) groups have two quality gaps: the producer gap and the customer gap as shown in Figure 1-2. The producer gap is the difference between what is specified (the documented requirements and internal standards) versus what is delivered (what is actually built). The customer gap is the difference between what the producers actually delivered versus what the customer wanted.

Closing these two gaps is the responsibility of the quality function (see Skill Category 4). The quality function must first improve the processes to the point where the producer can develop the products according to requirements received and its own internal standards. Closing the producer's gap enables the IT function to provide its customers consistency in what it can produce. This has been referred to as the "McDonald's effect" – at any McDonald's in the world, a Big Mac should taste the same. It doesn't mean that every customer likes the Big Mac or that it meets everyone's needs, but rather, that McDonald's has now produced consistency in its delivered product.


Closing the second gap requires the quality function to understand the true needs of the customer. This can be done by customer surveys, Joint Application Development (JAD) sessions, and more user involvement through the process of building information products. The processes can then be changed to close the customer gap, keeping consistency while producing products and services needed by the customer.

Figure 1-2 Two Quality Gaps

1.2.2 Quality Attributes for an Information System

Quality is a multifaceted concept driven by customer requirements. The level of quality can vary significantly from project to project and between organizations. In IT, the attributes of quality are examined in order to understand the components of quality, and as a basis for measuring quality. Some of the commonly accepted quality attributes for an information system are described in Figure 1-3.

Management needs to develop quantitative, measurable "standards" for each of these quality criteria for their development projects. For example, management must decide the degree of maintenance effort that is acceptable, the amount of time that it should take for a user to learn how to use the system, etc. Skill Category 8 covers this topic in more detail.


Figure 1-3 Commonly Accepted Quality Attributes (Critical Success Factor) for Information Systems

Correctness – Extent to which a program satisfies its specifications and fulfills the user's mission objectives.
Reliability – Extent to which a program can be expected to perform its intended function with required precision.
Efficiency – The amount of computing resources and code required by a program to perform a function.
Integrity – Extent to which access to software or data by unauthorized persons can be controlled.
Usability – Effort required to learn, operate, prepare input for, and interpret the output of a program.
Maintainability – Effort required to locate and fix an error in an operational program.
Testability – Effort required to test a program to ensure that it performs its intended function.
Flexibility – Effort required to modify an operational program.
Reusability – Extent to which a program can be used in other applications, related to the packaging and scope of the functions that programs perform.
Interoperability – Effort required to couple one system with another.

"The Paul Revere Insurance Group believes that if a customer does not perceive quality, the program is not accomplished."

Charles E. Soule, Past Executive Vice President
Paul Revere Insurance Group

1.3 Quality Concepts and Practices

People say they want quality; however, their actions may not support this view for the following reasons:

• Many think that defect-free products and services are not practical or economical, and thus believe some level of defects is normal and acceptable. (This is called acceptable quality level, or AQL.) Quality experts agree that AQL is not a suitable definition of quality. As long as management is willing to "accept" defective products, the entire quality program will be in jeopardy.

• Quality is frequently associated with cost, meaning that high quality is synonymous with high cost. (This is confusion between quality of design and quality of conformance.) Organizations may be reluctant to spend on quality assurance, as they do not see an immediate payback.

• Quality by definition calls for requirements/specifications in enough detail so that the products produced can be quantitatively measured against those specifications. Few organizations are willing to expend the effort to produce requirements/specifications at the level of detail required for quantitative measurement.

• Many technical personnel believe that standards inhibit their creativity, and thus do not strive for compliance to standards. However, for quality to happen there must be well-defined standards and procedures that are followed.

The contributors to poor quality in many organizations can be categorized as either lack of involvement by management, or lack of knowledge about quality. Following are some of the specific contributors for these two categories:

Lack of involvement by management

• Management's unwillingness to accept full responsibility for all defects
• Failure to determine the cost associated with defects (i.e., poor quality)
• Failure to initiate a program to "manage defects"
• Lack of emphasis on processes and measurement
• Failure to enforce standards
• Failure to reward people for following processes

Lack of knowledge about quality

• Lack of a quality vocabulary, which makes it difficult to communicate quality problems and objectives
• Lack of knowledge of the principles of quality (i.e., what is necessary to make it happen)
• No categorization scheme for defects (i.e., naming of defects by type)
• No information on the occurrence of defects by type, by frequency, and by location
• Unknown defect expectation rates for new products
• Defect-prone processes unknown or unidentified
• Defect-prone products unknown or unidentified
• An economical means for identifying defects unknown
• Proven quality solutions are unknown and unused

If achieving quality (i.e., defect-free products and services) were easy, it would have been accomplished years ago. Quality is very difficult to accomplish – it requires the close cooperation of management and staff. Achieving quality requires a commitment and the establishment of an environment in which quality can flourish. Skill Category 2 focuses on management commitment and a quality management environment.

The bottom line is that making quality happen is a monumental challenge. Dr. Ishikawa, Japan's leading quality expert, best expressed this when he stated that accomplishing quality requires "a thought revolution by management." Thought revolutions do not come easy.


As a result of his experiences in turning around the Japanese economy, Dr. W. Edwards Deming found that it takes 20 years to change a culture from an emphasis on productivity to an emphasis on quality. Twenty years might be excessive, but management must be prepared to invest 2-5 years before the really large pay-backs occur. Quality is a long-term strategy, which must be continually nurtured by the quality function and management.

The answer to the question, "Can we afford quality?" is: "You cannot afford to ignore it." Harold S. Geneen, past CEO at ITT, stated that quality "is the most profitable product line we have." What this means is that preventing and/or detecting defects early results in huge savings. Studies by Dr. Barry W. Boehm at GTE, TRW, and IBM in the late 1980s showed geometric escalation in the cost to fix a problem as the software life cycle progressed. Boehm concluded that errors are typically 100 times more expensive to correct in the maintenance phase on large projects than in the requirements phase. Boehm also stated that the total economic impact is actually much larger in operational systems because of the user costs incurred. Recent studies show that with today's more complex systems, Boehm's estimates are conservative.

1.3.1 PDCA Cycle

A major premise of a quality management environment is an emphasis on continuous improvement. The approach to continuous improvement is best illustrated using the PDCA cycle, which was developed in the 1930s by Dr. Shewhart of the Bell System. The cycle comprises the four steps of Plan, Do, Check, and Act as shown in Figure 1-4. It is also called the Deming Wheel, and is one of the key concepts of quality.

Figure 1-4 PDCA Concept

• Plan (P): Devise a plan – Define the objective, expressing it numerically, if possible. Clearly describe the goals and policies needed to attain the objective at this stage. Determine the procedures and conditions for the means and methods that will be used to achieve the objective.

• Do (D): Execute the plan – Create the conditions and perform the necessary teaching and training to ensure everyone understands the objectives and the plan. Teach workers the procedures and skills they need to fulfill the plan and thoroughly understand the job. Then perform the work according to these procedures.

• Check (C): Check the results – As often as possible, check to determine whether work is progressing according to the plan and whether the expected results are obtained. Check for performance of the procedures, changes in conditions, or abnormalities that may appear.

• Act (A): Take the necessary action – If the check reveals that the work is not being performed according to plan, or if results are not what were anticipated, devise measures for appropriate action. Look for the cause of the abnormality to prevent its recurrence. Sometimes workers may need to be retrained and procedures revised. The next plan should reflect these changes and define them in more detail.

Figure 1-5 Ascending Spiral

The PDCA procedures ensure that the quality of the products and services meets expectations, and that the anticipated budget and delivery date are fulfilled. Sometimes preoccupation with current concerns limits the ability to achieve optimal results. Repeatedly going around the PDCA circle can improve the quality of the work and work methods, and obtain the desired results. This concept can be seen in the ascending spiral of Figure 1-5.
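The four steps can be read as a loop that is repeated until the objective set in the Plan step is met. The Python sketch below is only a toy model of that loop; the starting defect rate, target, and improvement assumptions are invented for illustration and are not part of the CBOK.

```python
# Toy model of repeated PDCA cycles (the ascending spiral of Figure 1-5).
# All numbers here are hypothetical; they only illustrate the iteration.

defect_rate = 12.0   # defects per KLOC measured before any improvement cycle
target = 3.0         # numeric objective chosen in the Plan step

cycle = 0
while defect_rate > target:
    cycle += 1
    planned_reduction = 0.25                                # Plan: aim to cut defects by 25%
    achieved = defect_rate * (1 - planned_reduction * 0.8)  # Do: actual work falls short of the plan
    if achieved >= defect_rate:                             # Check: did this cycle improve anything?
        break                                               # Act: rethink the plan before continuing
    defect_rate = achieved                                  # Act: keep the gain and plan the next cycle
    print(f"Cycle {cycle}: {defect_rate:.1f} defects per KLOC")

print(f"Stopped after {cycle} cycles at {defect_rate:.1f} defects per KLOC (target {target})")
```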

1.3.2 Cost of Quality

Quality is an attribute of a product or service. Productivity is an attribute of a process. They have frequently been called two sides of the same coin because one significantly impacts the other. There are two ways that quality can drive productivity. The first, which is an undesirable method, is to lower or not meet quality standards. For example, if testing and rework components of a system development process were eliminated or reduced, productivity as measured in lines of code per hours worked would increase. This is often done under the guise of completing projects on time. The second and more desirable method to improve productivity through quality is to improve processes so that defects do not occur, thus minimizing the need for testing and rework. Quality improvement should be used to drive productivity.

The cost of quality (COQ) is the money spent beyond what it would cost to build a product right the first time. If every worker could produce defect-free products the first time, the COQ would be zero. Since this situation does not occur, there are costs associated with getting a defect-free product produced.

There are three COQ categories:

• Prevention – Money required to prevent errors and to do the job right the first time is considered prevention cost. This category includes money spent on establishing methods and procedures, training workers and planning for quality. Prevention money is all spent before the product is actually built.

• Appraisal – Appraisal costs cover money spent to review completed products against requirements. Appraisal includes the cost of inspections, testing and reviews. This money is spent after the product or subcomponents are built but before it is shipped to the user.

• Failure – Failure costs are all costs associated with defective products. Some failure costs involve repairing products to make them meet requirements. Others are costs generated by failures, such as the cost of operating faulty products, damage incurred by using them and the costs incurred because the product is not available. The user or customer of the organization may also experience failure costs.

"Quality is free, but it is not a gift."

Philip B. Crosby in Quality is Free
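To make the three COQ categories concrete, here is a minimal Python sketch that tallies a set of quality-related costs by category and reports each category's share of the total cost of quality. The activities and dollar amounts are hypothetical examples, not figures from the CBOK.

```python
# Hypothetical cost-of-quality tally using the three categories described above.
# Each entry: (activity, COQ category, cost). All values are illustrative only.
quality_costs = [
    ("Training developers on coding standards", "prevention", 12_000),
    ("Writing standards and procedures",        "prevention",  8_000),
    ("Code inspections and reviews",            "appraisal",  15_000),
    ("System testing",                          "appraisal",  30_000),
    ("Rework of defective modules",             "failure",    45_000),
    ("Production outage caused by a defect",    "failure",    20_000),
]

totals = {"prevention": 0, "appraisal": 0, "failure": 0}
for _activity, category, cost in quality_costs:
    totals[category] += cost

total_coq = sum(totals.values())
for category, cost in totals.items():
    print(f"{category:10s} ${cost:>7,} ({cost / total_coq:.0%} of COQ)")
print(f"{'total':10s} ${total_coq:>7,}")
```

In practice, a shrinking failure share over successive periods is the usual sign that prevention and appraisal spending is paying off.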

1.3.2.1 The Three Key Principles of Quality

Everyone is responsible for quality, but senior management must emphasize and initiate quality improvement, and then move it down through the organization to the individual employees. The following three quality principles must be in place for quality to happen:

1. Management is responsible for quality.

Quality cannot be delegated effectively. Management must accept the responsibility for the quality of the products produced in their organization; otherwise, quality will not happen. A quality function is only a catalyst in making quality happen. The quality function assists management in building quality information systems by monitoring quality and making recommendations to management about areas where quality can be improved. As the quality function is a staff function, not management, it cannot dictate quality for the organization. Only management can make quality happen.


2. Producers must use effective quality control.

All of the parties and activities involved in producing a product must be involved in controlling the quality of those products. This means that the workers will be actively involved in the establishment of their own standards and procedures.

3. Quality is a journey, not a destination.

The objective of the quality program must be continuous improvement. The end objective of the quality process must be satisfied customers.

1.3.2.2 Best Practices

A practice is a specific implementation of a work process. For example, a practice would be one organization's process for estimating the amount of resources required for building a system.

A Best Practice is one of the most effective practices for performing a specific process. Best Practices are normally identified by benchmarking, or by an independent assessment. Best Practices are also identified through winners of quality competitions such as the Malcolm Baldrige National Quality Award, Deming Prize, etc.

1.3.3 Six Sigma Quality

Most people spend twelve or more years in an educational system in which grades of 90% or higher are considered excellent. However, in industry, 90% is not a good quality record. For example, if one out of every ten tires fails, you have a 90% quality rating, but that is totally unacceptable to tire customers.

Motorola developed a concept called "Six Sigma Quality" that focuses on defect rates, as opposed to percent performed correctly. "Sigma" is a statistical term meaning one standard deviation. "Six Sigma" means six standard deviations. At the Six Sigma statistical level, only 3.4 items per million are outside of the acceptable level. Thus, the Six Sigma quality level means that out of every one million items counted, 999,996.6 will be correct and no more than 3.4 will be defective.

Experience has shown that in most systems, a Four Sigma quality level is the norm. At the Four Sigma level there are about 6,210 defects per million parts, or about 6 defects per 1,000 opportunities to do a task correctly.
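These per-million figures come from the tail area of the normal distribution; published Six Sigma tables conventionally assume a 1.5-sigma long-term shift. The sketch below illustrates that conversion (the function name and the shift parameter are assumptions used for illustration):

```python
# Sketch: convert a sigma level to defects per million opportunities (DPMO),
# using the conventional 1.5-sigma long-term shift assumed by Six Sigma tables.
from math import erfc, sqrt

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    # One-sided normal tail area beyond (sigma_level - shift) standard deviations.
    tail = 0.5 * erfc((sigma_level - shift) / sqrt(2.0))
    return tail * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma: {dpmo(level):,.1f} DPMO")
```

Run as written, it reports roughly 66,800 DPMO at Three Sigma, about 6,210 at Four Sigma, and about 3.4 at Six Sigma, matching the figures quoted above.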

The key focus of companies implementing a Six Sigma program is to develop a good business strategy that balances the cost, quality, features and availability considerations for products. A valuable lesson learned is that decisions made must be tied to the bottom line for the company. Companies should take care to use correct measurements for each situation, and to consider measuring output of a process over time (not just a snapshot).

When considering a project to improve using Six Sigma, the following characteristics are desirable. If one or more of these characteristics is missing, there will likely be barriers to success.

• The project should clearly connect to business priorities.


• The problem being solved should be very important, such as a 50% process improvement.

• The importance of the project should be clear to the organization.

• The project should have a limited scope that can be completed in less than six months.

• The project should have clear, quantitative measures that define success.

• Management should support and approve the project to ensure resources are available, barriers are removed, and that the project continues over time.

1.3.4 Baselining and Benchmarking

Baselining is defining the current level of performance. For example, the number of defects currently contained in a thousand lines of code is usually calculated quantitatively and can be used to measure improvement.
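As a minimal illustration of such a baseline (all figures below are invented for the example, not CBOK data), defect density can be expressed per thousand lines of code and a later measurement compared against it:

```python
# Sketch: establish a defect-density baseline (defects per thousand lines of code)
# and compare a later measurement against it. All figures are made up.
def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    return defects / (lines_of_code / 1000)

baseline = defects_per_kloc(defects=120, lines_of_code=40_000)   # 3.0 defects/KLOC
current  = defects_per_kloc(defects=75,  lines_of_code=50_000)   # 1.5 defects/KLOC
improvement = (baseline - current) / baseline * 100              # 50% improvement
print(f"baseline={baseline:.1f}/KLOC, current={current:.1f}/KLOC, "
      f"improvement={improvement:.0f}%")
```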

Benchmarking involves a comparison of one organization's, or one part of an organization's, process for performing a work task to another organization's process, for the purpose of finding best practices or competitive practices that will help define superior performance of a product, service or support process.

Skill Category 3 provides additional details on benchmarking, including types of benchmarking and a four-step process for conducting benchmarking.

1.3.5 Earned Value

It is important that quality professionals be able to demonstrate that their work provides value to their organization. Return-on-investment (ROI), which demonstrates the dollars returned for the dollars invested, is one of the more popular means to demonstrate value returned. However, ROI is not designed to measure subjective values such as customer loyalty. There is no generally accepted best way to measure the "value earned" from quality initiatives. It is recommended that quality professionals use the method(s) recommended by their accounting function for calculating earned value.
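For illustration only, one common ROI form expresses the net dollars returned as a percentage of the dollars invested. The scenario and figures below are assumptions; as noted above, the accounting function's own method should take precedence.

```python
# Sketch: a simple return-on-investment calculation of the kind often used to
# express the value returned by a quality initiative. Figures are illustrative
# only; ROI ignores subjective values such as customer loyalty.
def roi_percent(dollars_returned: float, dollars_invested: float) -> float:
    return (dollars_returned - dollars_invested) / dollars_invested * 100

# e.g., an inspection program costing $50,000 that avoids $130,000 of rework
print(f"ROI = {roi_percent(130_000, 50_000):.0f}%")   # 160%
```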

1.4 Quality Control and Quality Assurance

Very few individuals can differentiate between quality control and quality assurance. Most quality assurance groups, in fact, practice quality control. This section differentiates between the two, and describes how to recognize a control practice from an assurance practice.


Quality means meeting requirements and meeting customer needs, which means a defect-free product from both the producer's and the customer's viewpoint. Both quality control and quality assurance are used to make quality happen. Of the two, quality assurance is the more important.

Quality is an attribute of a product. A product is something produced, such as a requirement document, test data, source code, load module or terminal screen. Another type of product is a service that is performed, such as meetings with customers, help desk activities and training sessions. Services are a form of products, and therefore, also contain attributes. For example, an agenda might be a quality attribute of a meeting.

A process is the set of activities that is performed to produce a product. Quality is achieved through processes. Processes have the advantage of being able to replicate a product time and time again. Even in data processing, the process is able to replicate similar products with the same quality characteristics.

Quality Assurance (QA) is associated with a process. Once processes are consistent, they can "assure" that the same level of quality will be incorporated into each product produced by that process.

1.4.1 Quality Control

Quality Control (QC) is defined as the processes and methods used to compare product quality to requirements and applicable standards, and the action taken when a nonconformance is detected. QC uses reviews and testing to focus on the detection and correction of defects before shipment of products.

Quality Control should be the responsibility of the organizational unit producing the product and should be integrated into the work activities. Ideally the same group that builds the product performs the control function; however, some organizations establish a separate group or department to check the product.

Impediments to QC include the following:

• Quality Control is often viewed as a police action
• IT is often considered an art
• Unclear or ineffective standards and processes
• Lack of process training

Quality Control is the focus of Skill Category 7.

1.4.2 Quality Assurance

Quality Assurance (QA) is the set of activities (including facilitation, training, measurement and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products or services that conform to requirements and are fit for use.


QA is a staff function that prevents problems by heading them off, and by advising restraint and redirection at the proper time. It is also a catalytic function that should promote quality concepts, and encourage quality attitudes and discipline on the part of management and workers. Successful QA managers know how to make people quality conscious and to make them recognize the personal and organizational benefits of quality.

The major impediments to QA come from management, which is typically results oriented, and sees little need for a function that emphasizes managing and controlling processes. Thus, many of the impediments to QA are associated with processes, and include the following:

• Management does not insist on compliance to processes
• Workers are not convinced of the value of processes
• Processes become obsolete
• Processes are difficult to use
• Workers lack training in processes
• Processes are not measurable
• Measurement can threaten employees
• Processes do not focus on critical aspects of products

1.4.3 Differentiating Between Quality Control and Quality Assurance

QC is an activity that verifies whether or not the product produced meets standards. QA is an activity that establishes and evaluates the processes that produce the products. If there is no process, there is no role for QA. Assurance would determine the need for, and acquire or help install, system development methodologies, estimation processes, system maintenance processes, and so forth. Once installed, QA would measure them to find weaknesses in the process and then correct those weaknesses to continually improve the processes.

It is possible to have quality control without quality assurance. For example, there might be a standard that "ALTER GO TO" statements in COBOL should not be used. Regardless of whether a program is produced using a system development process or done by an individual without a process, it could still be checked to determine whether or not "ALTER GO TOs" are in the program.
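A check of this kind can be automated. The sketch below is illustrative only; the file handling, the fixed-format comment convention, and the pattern are assumptions rather than a CBOK-prescribed tool. It reports any line of a COBOL source file that begins an ALTER statement, regardless of what process (if any) produced the program.

```python
# Sketch of an automated quality control check: scan a COBOL source file for
# ALTER statements (the construct the "ALTER GO TO" standard forbids).
import re
import sys

ALTER_PATTERN = re.compile(r"^\s*ALTER\b", re.IGNORECASE)

def find_alter_statements(path: str) -> list[tuple[int, str]]:
    violations = []
    with open(path, encoding="utf-8", errors="replace") as source:
        for line_no, line in enumerate(source, start=1):
            # Skip fixed-format comment lines (an asterisk in column 7).
            if len(line) > 6 and line[6] == "*":
                continue
            if ALTER_PATTERN.match(line):
                violations.append((line_no, line.rstrip()))
    return violations

if __name__ == "__main__":
    for line_no, text in find_alter_statements(sys.argv[1]):
        print(f"line {line_no}: {text}")
```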

The following statements help differentiate QC from QA:

• QC relates to a specific product or service.
• QC verifies whether particular attributes exist, or do not exist, in a specific product or service.
• QC identifies defects for the primary purpose of correcting defects.
• QC is the responsibility of the worker.
• QA helps establish processes.
• QA sets up measurement programs to evaluate processes.
• QA identifies weaknesses in processes and improves them.
• QA is a management responsibility, frequently performed by a staff function.
• QA evaluates whether or not quality control is working for the primary purpose of determining whether or not there is a weakness in the process.
• QA is concerned with all of the products that will ever be produced by a process.
• QA is sometimes called quality control over quality control because it evaluates whether quality control is working.
• QA personnel should not ever perform quality control unless doing it to validate that quality control is working.

1.5 Quality Pioneers Approach to Quality

Many individuals have contributed to the quality movement. Three individuals are highlighted here because they have either organized a business to promote their concepts, or have a following. These three are Dr. W. Edwards Deming, Philip Crosby and Dr. Joseph Juran.

Both Dr. Deming and Mr. Crosby have developed a set of quality principles. Dr. Juran is well known for his trilogy and distinction between "little-Q" quality and "big-Q" quality.

1.5.1 Dr. W. Edwards Deming

Dr. Deming defined 14 principles for quality, which formed the basis for the turnaround of the Japanese manufacturing industry. He believed that all 14 principles must be used concurrently to make quality happen. The 14 principles are discussed briefly below. Additional information can be found in his book Out of the Crisis (see the References in Appendix B).

1. Create constancy of purpose for improvement of product and service

In quality-oriented companies, quality should be the cornerstone of the corporation. All units within the organization should work toward common goals and purposes. Within IT this can be translated to mean that the training department teaches the standards, operations is working for the same goals as systems programming, and systems programming is dedicated to improving application performance. Assuring quality involves:

• Innovating new approaches and allocating resources for long-term planning, such as possible new service, new skills required, training and retraining of personnel, satisfaction of the user.

• Putting resources into research, education and maintenance.

• Constantly improving the design of products and services.

2. Adopt the new philosophy

We are in a new economic age. We can no longer live with commonly accepted levels of mistakes, defects, material not suited to the job, people on the job that do not know what the job is and are afraid to ask, failure of management to understand the problems of the product in use, antiquated methods of training on the job, and inadequate and ineffective supervision. Acceptance of defective systems and poor workmanship as a way of life is one of the most effective roadblocks to better quality and productivity.

3. Cease dependence on mass inspection

Quality does not come from inspection, but from improvement of the processes used to develop a product or service. Inspection does not improve quality, as the quality, good or bad, is already in the product. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.

4. End the practice of awarding business on the basis of price alone

Require meaningful measures of quality along with price. Requiring statistical evidence of quality control in the purchase of hardware and software will mean, in most companies, a drastic reduction in the number of vendors with whom they deal. The aim is to minimize total cost, not merely initial cost.

5. Improve constantly and forever the system of production and service

Constantly improving the system will improve quality and productivity, and thus constantly decrease costs. This obligation never ceases. Most people in management do not understand that the system (their responsibility) is everything not under the governance of a user.

6. Institute training on the job

Training must be totally reconstructed. Statistical methods must be used to learn when training is finished, and when further training would be beneficial. A person once hired and trained and in statistical control of his/her own work, whether it be satisfactory or not, can do no better. Further training cannot help. If their work is not satisfactory, move them to another job, and provide better training there.

7. Adopt and institute leadership

The job of management is not supervision, but leadership.

• Leaders must know the work that they supervise.

• Statistical methods are vital aids to the project leader to indicate whether the fault lies locally or in the system.

• The usual procedure, by which a project leader calls the worker's attention to every defect or to half of them, may be wrong – is certainly wrong in most organizations – and defeats the purpose of supervision.

8. Drive out fear

Most people on a job, and even in management positions, do not understand what the job is, or what is right versus wrong. Moreover, it is not clear to them how to find out. Many of them are afraid to ask questions or to report trouble. The economic loss from fear is appalling. It is necessary, for better quality and productivity, that people feel secure.

Another related aspect of fear is the inability to serve the best interest of the company through necessity to satisfy specified rules, or to satisfy a production quota, or to cut costs by some specified amount.

One common result of fear is seen in inspection. An inspector incorrectly records the results of an inspection for fear of exceeding the quota of allowable defects.

9. Break down barriers between departments

People in user areas must work as a team and learn about the problems encountered with various technologies and specifications in system design and operation. Otherwise, there will be losses in production from necessity of reruns and from attempts to use systems unsuited to the purpose. Why not have users spend time in the IT department to see the problems and hear about them?

10. Eliminate slogans, exhortations, and targets for the work force

Eliminate targets, slogans, exhortations, and posters for the work force that urge them to increase productivity. Such things only create adversarial relationships, as the bulk of the causes of low quality and low productivity belong to the system and thus lie beyond the power of the work force. Posters and slogans never helped anyone do a better job, and numerical goals often have a negative effect through frustration. These devices are management's lazy way out. They indicate desperation and incompetence of management. There is a better way.

11. Eliminate numerical quotas for the workforce

Do your work standards take account of quality, or only numbers? Do they help anyone do a better job? Eliminate work standards that prescribe numerical quotas for the workforce and numerical goals for people in management. Substitute aids and helpful leadership, and use statistical methods for continual improvement of quality and productivity.

12. Remove barriers that rob people of pride of workmanship

Remove barriers that rob people (including those in management and engineering) of their right to pride of workmanship. The responsibility of supervisors must be changed from stressing sheer numbers to quality. This means, among other things, ensuring that the workers have the means necessary to do their jobs effectively and efficiently, and the abolishment of the annual merit rating and of management by objective.

13. Institute a vigorous education and self-improvement program for everyone

What an organization needs is not just good people; it needs people that are improving with education. Advances in competitive position will have their roots in knowledge; therefore, such education should keep up with changes in technology and methods and, if advantageous, new hardware. Finally, recruitment should not be based just on past records but look for people who are improving, and are keen to continue improving.

14. Take action to accomplish the transformation

The transformation is everybody's job. Clearly define top management's permanent commitment to ever-improving quality and productivity, and their obligations to implement all of these principles. Create a structure in top management that will push every day on the preceding thirteen points, and take action in order to accomplish the transformation.

1.5.2 Philip Crosby

Philip Crosby has developed 14 steps for an organization to follow in building an effective quality program. These are:

1. Management Commitment

Clarify where management stands on quality. It is necessary to consistently produce conforming products and services at the optimum price. The device to accomplish this is the use of defect prevention techniques in the operating departments: engineering, manufacturing, quality control, purchasing, sales, and others. Management must ensure that no one is exempt.

2. The Quality Improvement Team

They run the quality improvement program. Since every function of an operation contributes to defect levels, every function must participate in the quality improvement effort. The degree of participation is best determined by the particular situation that exists. However, everyone has the opportunity to improve.

3. Quality Measurement

Communicate current and potential nonconformance problems in a manner that permits objective evaluation and corrective action. Basic quality measurement data is obtained from the inspection and test reports, which are broken down by operating areas of the plant. By comparing the rejection data with the input data, it is possible to know the rejection rates. Since most companies have such systems, it is not necessary to go into them in detail. It should be mentioned that unless this data is reported properly, it is useless. After all, their only purpose is to warn management of serious situations. They should be used to identify specific problems needing corrective action, and the quality department should report them.


4. The Cost of Quality

Define the ingredients of the COQ and explain its use as a management tool. See "Quality Concepts and Practices" on page 6 where COQ was defined and examples provided.

5. Quality Awareness

Provide a method of raising the personal concern felt by all personnel in the company toward the conformance of the product or service and the quality reputation of the company. By the time a company is ready for the quality awareness step, they should have a good idea of the types and expense of the problems being faced. The quality measurement and COQ steps will have revealed them.

6. Corrective Action

Provide a systematic method of permanently resolving the problems that are identified through previous action steps. Problems that are identified during the acceptance operation or by some other means must be documented and then resolved formally.

7. Zero Defects Planning

Examine the various activities that must be conducted in preparation for formally launching the Zero Defects (ZD) program. The quality improvement task team should list all the individual action steps that build up to ZD day in order to make the most meaningful presentation of the concept and action plan to personnel of the company. These steps, placed on a schedule and assigned to members of the team for execution, will provide a clean energy flow into an organization-wide ZD commitment. Since it is a natural step, it is not difficult, but because of the significance of it, management must make sure it is conducted properly.

8. Supervisor Training

Define the type of training supervisors need in order to actively carry out their part of the quality improvement program. The supervisor, from the board chairman down, is the key to achieving improvement goals. The supervisor gives the individual employees their attitudes and work standards, whether in engineering, sales, computer programming, or wherever. Therefore, the supervisor must be given primary consideration when laying out the program. The departmental representatives on the task team will be able to communicate much of the planning and concepts to the supervisors, but individual classes are essential to make sure that they properly understand and can implement the program.

9. ZD Day

Create an event that will let all employees realize, through personal experience, that there has been a change. Zero Defects is a revelation to all involved that they are embarking on a new way of corporate life. Working under this discipline requires personal commitments and understanding. Therefore, it is necessary that all members of the company participate in an experience that will make them aware of this change.

10. Goal Setting

Turn pledges and commitments into action by encouraging individuals to establish improvement goals for themselves and their groups. About a week after ZD day, individual supervisors should ask their people what kind of goals they should set for themselves. Try to get two goals from each area. These goals should be specific and measurable.

11. Error-Cause Removal

Give the individual employee a method of communicating to management the situations that make it difficult for the employee to fulfill the pledge to improve. One of the most difficult problems employees face is their inability to communicate problems to management. Sometimes they just put up with problems because they do not consider them important enough to bother the supervisor. Sometimes supervisors don't listen anyway. Suggestion programs are some help, but in a suggestion program the worker is required to know the problem and also propose a solution. Error-cause removal (ECR) is set up on the basis that the worker need only recognize the problem. When the worker has stated the problem, the proper department in the plant can look into it. Studies of ECR programs show that over 90% of the items submitted are acted upon, and fully 75% can be handled at the first level of supervision. The number of ECRs that save money is extremely high, since the worker generates savings every time the job is done better or quicker.

12. Recognition

Appreciate those who participate. People really don't work for money. They go to work for it, but once the salary has been established, their concern is appreciation. Recognize their contribution publicly and noisily, but don't demean them by applying a price tag to everything.

13. Quality Councils

Bring together the professional quality people for planned communication on a regular basis. It is vital for the professional quality people of an organization to meet regularly just to share their problems, feelings, and experiences with each other. Primarily concerned with measurement and reporting, isolated even in the midst of many fellow workers, it is easy for them to become influenced by the urgency of activity in their work areas. Consistency of attitude and purpose is the essential personal characteristic of one who evaluates another's work. This is not only because of the importance of the work itself but because those who submit work unconsciously draw a great deal of their performance standard from the professional evaluator.


14. Do it Over Again

Emphasize that the quality improvement program never ends. There is always a great sigh of relief when goals are reached. If care is not taken, the entire program will end at that moment. It is necessary to construct a new quality improvement team, and to let them begin again and create their own communications.

1.5.3 Dr. Joseph Juran

Dr. Juran believed that managing for quality required the same attention that other functions typically receive. To ensure that adequate attention was given, he developed a trilogy consisting of three interrelated, basic managerial phases/processes: quality planning, quality control and quality improvement. These are known as "The Juran Trilogy" or "The Quality Trilogy."

• Quality Planning
The purpose of this phase is to create a process that enables goals to be met. In developing the process, quality planning should identify customers and their needs, and then incorporate those needs into the product and process designs. The planning process should also attempt to avoid costly deficiencies, such as rework, and optimize the company performance. This phase occurs before the process is used to produce a product.

• Quality Control
Quality control takes place at all levels in the organization, with everyone using the same feedback loop. Dr. Juran believed that in order to achieve control, processes must have numerical measures and adjustment capabilities (a minimal control-limit sketch follows this list). When products are produced from the process, there will always be some acceptable (inherent) variation; and the occasional spikes (those representing special causes of variation) should be investigated. Management should strive to give process users the capability of making the necessary adjustments to control the process. He refers to this as "self-control". When the process is being designed, control should be part of the planning process. Typically quality control will be performed to prevent defects from worsening; it will not focus on the process.

• Quality Improvement
At some point in the quality control phase, the continuous loop of product deficiencies will be traced back to the planning process, and the problems will be seen as an opportunity to improve. Improvements will be made to revise the process, and problems will become less than originally planned.
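The numerical-control idea in the Quality Control phase above can be illustrated with a minimal sketch; the defect counts, the three-standard-deviation limits, and the function names are assumptions for illustration, not a prescribed CBOK technique. Limits are computed from a stable baseline period, and later measurements falling outside them are flagged as potential special causes to investigate.

```python
# Sketch: flag measurements outside control limits derived from a stable baseline.
from statistics import mean, stdev

def control_limits(baseline: list[float]) -> tuple[float, float]:
    center, spread = mean(baseline), stdev(baseline)
    return center - 3 * spread, center + 3 * spread

baseline_defects = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]   # counts from a stable period
lower, upper = control_limits(baseline_defects)

new_samples = [5, 4, 12, 5]                          # the 12 is an obvious spike
spikes = [x for x in new_samples if not lower <= x <= upper]
print(f"limits=({lower:.1f}, {upper:.1f}), investigate: {spikes}")
```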

Dr. Juran believed that business processes presented a major opportunity for improvement. He developed a structured approach for improvement, which included the following list of responsibilities for senior managers that could not be delegated:


• Create an awareness of the need and an opportunity for improvement
• Mandate quality improvement; make it part of every job description
• Create the infrastructure: establish a quality council, select projects to improve, appoint teams and provide facilitators
• Provide training in how to improve quality
• Review progress regularly
• Recognize winning teams
• Propagandize the results

In addition to his Trilogy, Dr. Juran is also known for his distinction between "little-Q" quality and "big-Q" quality. Prior to total quality management (TQM) there was only "little-Q", which he called the narrow focus on quality. "Little-Q" quality is considered important, but it has a limited scope and impact, such as a team of people and their manager improving a specific work process. Dr. Juran referred to "big-Q" quality as the new focus on quality. An example of "big-Q" quality is cross-functional teams throughout an organization working to prevent problems. While the scope of "little-Q" quality is a specific departmental mission, the scope of "big-Q" quality emphasizes the coordination of various activities conducted in other functions and groups so that all plans contribute to achieving the organization's goals for quality.


Skill Category 2
Quality Leadership

The most important prerequisite for successful implementation of any major quality initiative is commitment from executive management. It is management's responsibility to establish strategic objectives and build an infrastructure that is strategically aligned to those objectives. This category describes the management processes used to establish the foundation of a quality-managed environment:

• Leadership Concepts
• Quality Management Infrastructure
• Quality Environment

2.1 Leadership Concepts

Quality management is a philosophy and a set of guiding principles that represent the foundation of a continuously improving organization. Quality management is the application of quantitative methods and human resources to improve the products and services supplied to an organization, all processes within an organization, and the degree to which the current and future needs of the customer are met. Quality management integrates fundamental management techniques, existing improvement efforts, and technical tools under a disciplined approach focused on continuous improvement. It is a culture change.


2.1.1 Executive and Middle Management Commitment

Management commitment is the single most important requirement for successful implementation of quality management. There is no precedent of successful quality improvement without executive management and the management team leading the effort. Having management commitment does not guarantee quality management success; it only improves the odds for successful implementation. The entire organization must eventually become committed to quality management.

2.1.1.1 Executive Management Commitment

While overall management commitment is necessary to the success of quality management, commitment from the organization's executives is vital. Executive management sets the tone for the whole effort by visibly supporting quality management. Every employee, including other managers, will wait to see where executives prioritize quality management. If quality improvement is not "number one" with executive management, it will not be with anyone else.

2.1.1.2 Middle Management Commitment

As the slowest group to accept the process, middle management is the weakest link in most quality management efforts. Special effort is required to assure them they have a role as important players. They should have input to the statements that executive management prepares, and be included in all aspects of quality management planning and implementation. One way to assure their support is to assign them the task of determining their own role. How to include middle management is an important consideration for executive management, because obtaining quality management support from first-line managers and employees is relatively easy.

2.1.2 Quality Champion

There is a need for one or more people to champion the cause of quality management. Ideally a champion will emerge during the planning for quality management implementation. This is the person who accepts personal responsibility for the success of quality management without being assigned the responsibility. The champion will be emotionally committed to quality management and will see it as a cause. The champion should be respected in the organization, have high quality standards and believe that the organization needs to improve. This "can do" attitude may be the most important consideration. A quality management champion may assume the day-to-day management responsibility for successfully implementing quality management.

Champions happen naturally; they are not appointed. The enthusiasm and energy of champions are important factors in the success of an organization. Ideally, the initial champion would be the top executive, but several managers may assume this role at different times. The need for a champion lasts a minimum of two to three years.

2.1.2.1 Leadership

Leadership and management are two different things. While a manager works within the system following the accepted practices of the system, a leader determines where the organization needs to be, and then does what is necessary to get there. In a business context, leadership is the ability to build the commitment of employees, to endow an organization with a positive perception of itself, and to give employees a positive perception of their role within the business. While programming experience, technical prowess, and management ability may be important qualifications for top-level IT management, leadership ability is the critical element.

2.2 Quality Management Infrastructure

The reason a quality management environment is established is to assure constancy of purpose in promoting quality as a major IT goal. There are two components to that environment: belief and commitment from management and staff, and the organizational structure and quality initiatives to support that environment. The first section of this Skill Category focused on the belief and commitment of the management team. This section focuses on the infrastructure and initiatives.

No organization has a perfect quality management environment. All are striving to achieve the optimum management philosophy, and organizations can be anywhere along the quality management continuum. By forming a quality function, some level of commitment and organizational structure exists that could be called quality management.

2.2.1 Quality Council

A Quality Council is composed of the organization's top executive and his or her direct reports. It may also be referred to as an Executive Council. The Quality Council acts as the steering group to develop the organization's mission, vision, goals, values, and quality policy. These serve as critical input to process mapping, planning, measurement, etc. Some large companies opt for more than one level of Quality Council. When multiple levels exist, each organization's mission and vision tie into that specified by the top council. Specifically, the Quality Council:

• Initiates and personally commits to the quality management philosophies and practices.

• Incorporates this decision into the strategic planning process, allocating resources in the budget for the deployment of quality management, and ensuring resources are available for both ongoing and upcoming IT projects and internal process improvement projects.

• Establishes committees at lower levels to focus on functional and cross-functional improvement efforts, to develop or revise processes, and to oversee and manage the quality management process on a daily basis. They may develop charters to serve as job descriptions for the committees, or approve the committees' charters.

• Defines and deploys policies.

• Recommends critical processes for analysis.

• Makes the decision regarding whether to approve, reject, or table (pending further investigation) new or changed processes.

• Acts on unresolved process problems and issues referred by the committees.

• Provides review and oversight of progress.

2.2.2 Management Committees

Management committees (also called Process Management Committees) are composed of middle managers and/or key staff personnel, and are responsible for deploying quality management practices throughout the organization. One or more committees may be needed depending on the organization's size and functional diversity. Committees should represent all the skills and functions needed to work on the specific processes or activities. They:

• Work with the Quality Council to understand the organization's mission, goals, and priorities. They either review the charter provided by the Quality Council or develop one. They also develop and maintain a deployment plan that identifies and prioritizes which key processes need to be defined and improved.

• Develop, or commission the development of, and maintain a process inventory and process maps (see Skill Category 6).

• Analyze processes, at the direction of the Quality Council, and identify those that need priority attention. This includes proposing new processes and/or revising existing processes.

• Establish teams or work groups (they may participate on the teams) to define and improve processes, and provide support to the teams (training, coaching, facilities, approaches, standards, tools, etc.). They monitor team progress and review/approve the resulting processes.

2.2.3 Teams and Work Groups

Teams are formed under any number of names depending on their purpose. Common names and functions are:

• Process Development Teams develop processes, standards, etc.
• Process Improvement Teams improve existing processes, standards, etc.
• Work Groups perform specific tasks such as JAD, inspection, or testing.

The process teams are composed of a representative group of process owners. Members of work groups will vary depending on the purpose, but suppliers and customers are likely to participate as team members or as reviewers. It may also be desirable for a QA analyst to participate on the team.

Process teams use standard approaches (such as flowcharts, checklists, Pareto analysis) to define, study, and improve processes, standards, procedures, and quality control methods. They may pilot new or revised processes, help deploy them to the rest of the organization, and provide process training. They may also serve as process consultants to process owners and others using the process. Approaches and tools used by the team depend on its purpose.
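As an illustration of one of those standard approaches, the sketch below performs a simple Pareto analysis; the category names, counts, and the 80% threshold are assumptions for the example. It ranks defect categories by frequency and reports the vital few that account for roughly 80% of the total.

```python
# Sketch: Pareto analysis of defect categories (illustrative data only).
def pareto(defect_counts: dict[str, int], threshold: float = 0.80) -> list[str]:
    total = sum(defect_counts.values())
    vital_few, cumulative = [], 0
    for category, count in sorted(defect_counts.items(),
                                  key=lambda kv: kv[1], reverse=True):
        vital_few.append(category)
        cumulative += count
        if cumulative / total >= threshold:
            break
    return vital_few

counts = {"requirements": 42, "design": 18, "coding": 30,
          "documentation": 6, "environment": 4}
print(pareto(counts))   # -> ['requirements', 'coding', 'design']
```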


Guidelines for teams include the following:

• The process development committee selects teams.
• Each team should have a chairperson.
• The core team should be small, containing 3-5 people.
• Each team should have a work plan that outlines the tasks to be performed and assigns tasks to team members.
• The team should meet regularly to review work performed by individual members and to review progress.
• Different team members may draft different portions of the processes and procedures.
• The team must reach consensus before submitting results to the process management committee for approval.

Teams Don't Always Work

When it's definitely determined by a team that something cannot be done, watch somebody go ahead and do it.

2.2.3.1 Personal Persuasion

People receive most information through visual intelligence. Image (visual intelligence about a person) is how others perceive a person. Their perception normally depends on how that person is viewed within the corporation. If management has a high image of that person, the probability of having his or her recommendations accepted and being promoted is significantly higher than when a negative image is projected. A person has an image problem if peers appreciate his or her skills more than superiors do.

Management relies on technical people for their technical capabilities. However, many managers believe it is easy to buy technical capabilities. It is difficult to obtain good managers, but the difference between a good technician and a good manager is frequently image.

Everyone has an image of what an executive looks like. That image is normally shaped through role models. By looking at the dress of corporate officers or other very successful people, an image is developed of how successful people should look and act. Being male or female is irrelevant regarding how an image is projected. One would expect some differences in dress and actions, but basically the same attributes of good image apply to both male and female.

James R. Baehler, author of The New Manager's Guide to Success, defined these six basic attributes of executive management:

• Purposeful – Stands and sits straight. Walks briskly. Knows where he or she is going and how to get there. Looks people directly in the eye.
• Competent – Organizes thoughts before speaking; is brief, simple, specific, and direct. Stays calm. Does not appear rushed or harried.
• Analytical – Does more asking than telling; listens to the answers. Does not accept generalities.
• Decisive – States the problem, then the solution. Always talks straight. Does not waste time.
• Confident – Talks about challenges, not obstacles. Is not tense with superiors. Knows the art of casual conversation.
• Appearance – Dresses up one level.


2.2.3.2 Resolving Customer Complaints

Complaints are the customers' way of indicating they are having a problem. Quality promotes turning problems into opportunities. Thus, while resolving a customer complaint the opportunity can be used to improve customer relationships.

Research shows that complaints must be resolved quickly; within four minutes the customer should be receiving a solution to his or her problem. Dr. Leonard Zunin, a human relations consultant, states in his book Contact: The First Four Minutes that unless a customer is satisfied within four minutes, the customer will give up on you. They will sense that you have not accepted the urgency of the problem they are expressing to you, that you have not accepted the problem as your problem, and that you are not the one to solve their problem.

To resolve the customer's problem, execute the following four-step complaint-resolution process:

1. Get on your customer's wavelength – The first step in the resolution process is to show concern about the customer's problem by performing the following acts:

• Get on the same physical wavelength. Establish a position for mutual discussion. Stand if your customer is standing, or ask the customer to sit and then sit after the customer complies.

• Give undivided attention to the customer. Comments to a secretary or receptionist, such as, "Do not interrupt us," show sincere interest.

• Physically display interest. Assume a body position, gestures, and tone of voice that show concern.

• React positively to the customer's concern, showing empathy. For example, if the customer indicates you have caused great inconvenience to your customer's staff, apologize for causing this type of problem to occur.



2. Get the facts – The problem cannot be dealt with until the problem (not the symptoms) is known. An angry person is more likely to describe symptoms than problems. As a second step:

• Ask probing questions. Request an example of the problem, samples of defective products, and sources of information.

• Take detailed notes. Write down names, amounts in question, order numbers, dates or times at which events happened, and specific products and parts of products where problems occurred.

• Obtain feelings and attitudes. The problem may be more emotional than factual, but emotions need to be dealt with. Find out how a person feels about what has happened; find out what his or her colleagues or boss feels about the problem.

• Listen carefully, through the words, to what is being said so that the real problem can be identified. See "Achieving Effective Listening" on page 14 for details.

3. Establish and initiate an action program – Even if the complaint does not appear reasonable, action still needs to be taken to determine the validity of the facts, and to pacify the complainer. In taking action:

• If you are responsible for the error, admit it and apologize for it. Do not minimize the seriousness of the error.

• Negotiate a satisfactory resolution with the customer by suggesting a solution and getting agreement. State the solution again, to ensure customer agreement. The solution may be to conduct an investigation and follow up with the customer to determine next steps.

• Immediately take the action that was agreed to. Just as it is important to begin communicating a solution within four minutes, it is important to resolve the action quickly.

• Note – if you are not personally responsible for the problem, still be empathetic; talk about the steps that you will take to get the issue resolved. If resolution of the issue requires another person, make sure to communicate the name of that person and his or her contact information to the customer.

4. Follow up with the customer – After the agreed-upon action has been taken, follow up with the customer to ascertain that the result is satisfactory. If the customer remains unsatisfied, return to step 3 and renegotiate a solution. The problem could be a difference in understanding of what the solution was. Words do not always convey exactly what was meant.

2.2.3.3 Written Reports

While QA analysts write many types of documents, reports to management are the focus of this section because written reports are often used to judge the QA analyst's ability to write.

Good ideas are of little value unless they are accepted and implemented. The QA report is designed to convey information and to change behavior. QA analysts write a report, distribute it, and follow up on the recommendations. The value of the quality function can be rated on whether management accepts the report. Thus, the report must be comprehensive, identifying the scope, explaining the factual findings, and suggesting recommendations. The report must be written clearly and effectively enough to cause action to be taken, and must include all information necessary to attain that end.

To write a good report the QA analyst should perform these ten tasks:

1. Establish report objectives and desired management actions

Writing a successful report requires a clear understanding of both the report objectives (what the QA analyst hopes the report will accomplish) and the desired action (what the QA analyst wants management to do after reading the report).

2. Gather factual data (i.e., findings) and recommendations

Ensure that relevant evidence supporting the data and recommendations is incorporated into the report. Failure to include this will adversely affect the credibility of the quality function, and management will almost certainly disagree with the factual information in the report.

3. Develop a report outline

A good report has no more than three objectives and three actions. Too much data or too many requests overwhelm the reader. If several items need reporting to management, rank the objectives according to priority, and report only the three most important. List small items in an appendix or a supplemental letter.

4. Draft the report

General principles of writing any report apply to a QA report. Consider using the presentation tools discussed in Skill Category 4. The QA analyst should also remember the following potential problem areas:

• Keep the quality-oriented language at a level that can be understood by management, and explain any technical jargon of quality.

• Provide enough information to make implementing the recommendations possible.

• Ensure there is adequate time to write the report.

5. Review the draft for reasonableness

The author should review the report to verify that the data gathered adequately supports the findings and recommendations, and that the information is presented clearly.

6. Have the report reviewed for readability

At least one person other than the author should look at the report objectively, from the perspective of the target audience, to assess the impression the report will make on its readers, and the impact it will have in changing managerial behavior. Appearance, wording, and effectiveness of the report are evaluated by considering the following questions:

• Does the report appear to have been developed by a professional and knowledgeable group?

• Do I understand what the report is trying to tell me?

• Would a person associated with the report topic find the information in the report offensive or disparaging? If so, would they be more concerned with developing countermeasures than with implementing the recommendations?

• Does the report adequately build a case for implementing the recommendations?

• Does the report clearly differentiate between important and less critical items?

7. Review the report with involved parties

To recognize the importance of findings and recommendations, they should be discussed with affected parties before issuing the final report so that their support can be solicited.

8. Review the report with management

When the report is complete, the QA analyst should meet with management to explain the report and to obtain their concurrence. Any issues should be addressed and corrected.


9. Finalize the report

After incorporating any review comments, make any final edits to the report.

10. Distribute the report and follow up

Distribute the final report to the appropriate parties, and follow up to ensure that appropriate action is taken.

2.3 Quality Environment

The quality environment is the totality of practices that management uses that affects how workers perform. It is the attitudes, values, ethics, policies, procedures and behavior of management that set the example for work in the organization. For example, if management is ethical and customers overpay their accounts, they will be refunded the overpayment. If IT management recognizes that they would not have any work task to perform if not for the users, then the users will be treated as very important people and their desires will be important to the IT organization. On the other hand, if IT users are viewed as not knowing the requirements and overly demanding, they will be treated as unimportant to the IT organization.

In business and accounting literature, the environment is referred to in many different ways. In accounting literature it is sometimes called the "control environment," other times the "management environment," and sometimes just the environment in which work is performed. For the purpose of this skill category, we will refer to it as the "quality environment." What is important to understand is that the environment significantly impacts the employee's attitude about complying with policies and procedures. For example, if IT management conveys that they do not believe that following the system development methodology is important, project personnel most likely will not follow that system development methodology. Likewise, if IT management conveys a lack of concern over security, employees will be lax in protecting their passwords and securing confidential information.

The quality environment has a pervasive influence on the way business activities are structured, objectives established, and risks assessed.

2.3.1 The Six Attributes of an Effective Quality Environment

Five major accounting associations (Financial Executives International, the American Institute of Certified Public Accountants, the American Accounting Association, The Institute of Internal Auditors, and the Institute of Management Accountants) formed a group known as COSO (Committee of Sponsoring Organizations) to provide guidance on evaluating internal control. They issued this guidance as the COSO Internal Control Framework. The COSO Framework identified the six quality attributes. For each attribute, they listed several control objectives that, if implemented, would define each of the six attributes.



The following six attributes are the key attributes of an effective quality environment:

• Integrity and Ethical Values – Management must convey the message that integrity and ethical values cannot be compromised, and employees must receive and understand that message. Management must continually demonstrate, through words and actions, a commitment to high ethical standards.

• Commitment to Competence – Management must specify the level of competence needed for particular jobs, and translate the desired levels of competence into requisite knowledge and skills.

• Management's Philosophy and Operating Style – The philosophy and operating style of management has a pervasive effect on an entity. These are, of course, intangibles, but one can look for positive or negative signs.

• Organizational Structure – The organizational structure shouldn't be so simple that it cannot adequately monitor the enterprise's activities nor so complex that it inhibits the necessary flow of information. Executives should adequately understand their control responsibilities and possess the requisite experience and levels of knowledge commensurate with their positions.

• Assignment of Authority and Responsibility – The assignment of responsibility, delegation of authority and establishment of related policies provide a basis for accountability and control, and set forth respective roles in the organization.

• Human Resource Policies and Practices – Human resource policies are central to recruiting and retaining competent people to enable the entity's plans to be carried out so its goals can be achieved.

2.3.2 Code of Ethics and Conduct

Employees need to know what is expected from them in the performance of their day-to-day activities. In most corporations, employees are trained in how to perform their job responsibilities, and either have, or are trained in, needed job skills. However, until recently, they were rarely trained in how to react in situations in which ethics and values are involved.

Many employees face situations where ethical conduct guidance is needed. For example, when suppliers invite them to lunch or offer them gifts; when they acquire a second job; when they're dealing with other employees; and when they represent the corporation in outside activities, such as sales and community activities.

The Code of Conduct of the corporation represents the manner in which the corporation expects the employees to act. It attempts to define the types of situations that employees may be faced with and, for each, how they should respond. A Code of Conduct is one of the most important documents involved in corporate governance.


It is not enough to have a Code of Conduct. That Code of Conduct must be taught, and senior officers must live by the Code of Conduct. The Code of Conduct applies to all officers and employees of the organization. However, if a code is to be effective, the senior officers of the corporation must set the example of how to perform.

2.3.3 Open Communications

If the "tone at the top" is to drive corporate governance, then that tone must be communicated to all involved. Communication would not only be to employees, but to partners, agents, suppliers, and other stakeholders involved with the corporation. Communication must not only be downward from senior management, but must include communication upward from the lowest levels to senior management.

Effective communication is planned, not spontaneous. Effective communication is also repeatable, meaning that if the individuals in a specific job change, the same types of communication will occur with the new individuals.

2.3.3.1 Guidelines for Effective Communications

The following are some guidelines that quality assurance personnel can use to improve their communication effectiveness.

2.3.3.1.1 Providing Constructive Criticism

In giving constructive criticism, you should incorporate the following tactics:

• Do it Privately
Criticism should be given on a one-on-one basis. Only the individual being criticized should be aware that criticism is occurring. It is best done in a private location. Many times it is more effective if it is done in a neutral location, for example, in a conference room or while taking someone to lunch, rather than in the boss' office.

• Have the Facts
General statements of undesired performance are not very helpful. For example, statements such as "That proposal is not clear, fix it" or "Your program does not make best use of the language or technology" leave people feeling confused and helpless. Before criticizing someone's performance, have specific items that are causing the deficiency or undesirable performance.

• Be Prepared to Help the Worker Improve Their Performance
It is not good enough to ask the worker to "fix it." You must be prepared to help fix it. Be prepared to train the subordinate in the area of deficiency. For example, in a proposal, indicate that a return-on-investment calculation was not made; or if a program failed to use the language properly, state specifically how it should and should not be used. You should not leave an individual feeling that they have performed poorly or unsure as to how to correct that performance.


• Be Specific on Expectations
Be sure your subordinate knows exactly what you expect from him or her now and in the future. Your expectations should be as clear as possible so there can be no confusion. Again, in a proposal, indicate that you expect a return-on-investment calculation included in all proposals. Most people will try to do what they are expected to do—if they know what those expectations are.

• Follow a Specific Process in Giving Criticism
The specific process that is recommended is:

• State the positive first. Before criticizing, indicate what you like about their performance. Again, be as specific as possible in the things you like.

• Indicate the deficiencies with the products or services produced by the individual. Never criticize the individual, only the work performed by the individual. For example, never indicate that an individual is disorganized; indicate that a report is disorganized. People can accept criticism of their products and services; they have great difficulty when you attack their personal work ethic.

• Get agreement that there is a problem. The individual being criticized must agree there is a problem before proper corrective action can be taken. Avoid accepting agreement just because you are the boss; probe the need for improvement with the subordinate until you actually feel there is agreement that improvement can be achieved. For example, if you believe a report or program is disorganized, get agreement from the individual on specifically why it might be disorganized.

• Ask the subordinate for advice on how to improve their performance. Always try to get the employee to propose what needs to be done. If the employee's suggestion is consistent with what you have decided is a realistic method of improvement, you have finished the process.

• If the subordinate is unable to solve the problem, suggest the course of action that you had determined before performing the actual criticism.

• Make a specific "contract" regarding what will happen after the session. Be very specific in what you expect, and when and where you expect it. If the employee is uncertain how to do it, the "contract" should include your participation as a vehicle to ensure that it happens.

• One last recommendation for criticism:

Avoid making threats about what will happen if the performance does not change. This will not cause any positive behavior change to occur and normally produces negative behavior. Leave the individual with the assumption that he or she has the capability for improvement, and that you know he or she will improve.


2.3.3.1.2 Achieving Effective Listening

Throughout school, students are taught the importance of speaking, reading, writing, and arithmetic, but rarely is much emphasis placed on listening. The shift in society from industrial production to information management emphasizes the need for good listening skills. This is particularly true in the practice of software testing – oral communication is rated as the number-one skill for the quality analyst.

Some facts about listening include:

• Many Fortune 500 companies complain about their workers' listening skills.

• Listening is the first language skill that we develop as children; however, it is rarely taught as a skill. Thus, in learning to listen, we may pick up bad habits.

• Listening is the most frequently used form of communication.

• Listening is the major vehicle for learning in the classroom.

• Salespeople often lose sales because they believe talking is more important than listening (thus, in ads a computer company emphasizes that they listen).

It is also important to understand why people do not listen. People do not listen for one or more of the following reasons:

• They are impatient and have other stimuli to respond to, such as random thoughts going through their mind.

• They are too busy rehearsing what they will say next, in response to someone.

• They are self-conscious about their communication ability.

• External stimuli, for example, an airplane flying overhead, divert their attention.

• They lack the motivation and responsibility required of a good listener.

• The speaker's topic is not of interest to them.

The listener must be aware of these detriments to good listening so they can recognize them and devote extra attention to listening.

2.3.3.1.3 The 3-Step Listening Process

The listening process involves three separate steps: 1) hearing the speaker, 2) attending to the speaker, and 3) understanding the speaker. The practice of listening requires these three listening steps to occur concurrently. Mastering each of these steps will help improve your listening abilities.

Step 1: Hearing the Speaker

Hearing the speaker requires an understanding of the five channels of communication incorporated into speech. Much of listening occurs beyond merely hearing the words. Let's look at the five channels through which a speaker delivers information to his/her audience:

• Information Channel – The speaker's subject.

• Verbal Channel – The words used by the speaker.

• Vocal Channel – The tone of voice associated with the various words.

• Body Channel – The body movements and gestures associated with the information being conveyed.

• Graphic Channel – The pictures, charts, etc., that the speaker uses to emphasize or illustrate the material being discussed.


Speakers normally use the information, verbal, vocal, and body channels in speaking. In some instances, they also use the graphic channel. Listening requires that there is a meeting of the minds on the information channel. Speakers sometimes skip around to different subjects, making it easy to lose the subject being covered on the information channel. In Step 2, attending to the speaker, we will discuss the importance of feedback to confirm the subject being covered on the information channel.

The vocal and body channels impact the importance of the verbal channel. The verbal channel includes the choice of words used to present information, but the vocal and body channels modify or emphasize the importance of those words. For example, the words in the verbal channel may be, "John says he can do it." However, the tone of the vocal channel might indicate that John cannot do it, or the use of a thumbs-down body channel signal will also indicate that John cannot do it.

Hearing the speaker involves an awareness of all five channels, and listening to and watching the speaker to be sure we are receiving what the speaker is saying through all five channels. To master the hearing step, you must pay attention to all five channels. If you miss one or more of the channels, you will not hear what the person is saying. For example, if you are only paying partial attention to the speaker when the words, "John can do it" are stated, you may hear that John can do it, while the speaker said that John could not do it.

Step 2: Attending to the Speaker

Attending to the speaker is sometimes referred to as being an active listener. Devote your full attention to the speaker to confirm that what you heard is what the speaker intended you to hear. You must first understand yourself and your situation. You must evaluate your motivation for wanting to listen to this speaker. If the subject is important to you, but the speaker is boring, it will require significantly more effort on your part to be a good listener.

The most important part of attending to the speaker is establishing an active listening ability. Active listening involves a lot of response and dynamics. Some people view the listening process as a passive skill where you sit back and let the other person talk. This is fine for hearing the speaker, but not for confirming what the speaker has said. Feedback is very important to the listening process, particularly in this step. Feedback can be a nonverbal response, such as nodding your head, or a verbal response such as a question or a statement of confirmation.

It is very important to send the right type of feedback to the speaker. The wrong type of feedback not only doesn't confirm what the speaker said, but also can reduce or terminate the listening process. It is very irritating to a speaker who is providing information to have the listener stray from the subject. For example, the speaker might be describing a quality problem, and the listener changes the subject and asks where the speaker is going to have lunch that day.

Some suggestions to help in attending to the speaker are:

• Free your mind of all other thoughts and concentrate exclusively on the speaker's communication.

• Maintain eye contact with the speaker for approximately 80 percent of the time.

• Provide continuous feedback to the speaker.

• Periodically restate what you heard the speaker say, and ask the speaker to confirm the intent of the information spoken.

• Move periodically to the understanding step to ensure that the information passed has been adequately understood.

Step 3: Understanding the Speaker

There are five types of listening. While people can listen several different ways concurrently, normally listening is limited to one of the five types. The type chosen will have an impact on the ability to understand what the speaker is saying. When one has deciphered the information channel (i.e., what the subject is) and related the importance of that subject to the audience, listening must be adjusted to ensure that we get the message we need.

The five types of listening and their impact on understanding are:

• Type 1: Discriminative Listening

Directed at selecting specific pieces of information and not the entire communication. For example, one may be listening to determine if an individual did a specific step in the performance of a task. To get this, listen more to the nonverbal expressions rather than the verbal channel.

• Type 2: Comprehensive Listening


Designed to get a complete message with minimal distortion. This type of listening requires a lot of feedback and summarization to fully understand what the speaker is communicating. This type of listening is normally done in fact gathering.

• Type 3: Therapeutic Listening

The listener is sympathetic to the speaker's point of view. During this type of listening, the listener will show a lot of empathy for the speaker's situation. It is very helpful to use this type of listening when you want to gain the speaker's confidence and understand the reasons why a particular act was performed or event occurred, as opposed to comprehensive listening where you want to find out what has happened.

• Type 4: Critical Listening

The listener is performing an analysis of what the speaker said. This is most important when it is felt that the speaker is not in complete control of the situation, or does not know the complete facts of a situation. Thus, the audience uses this type of understanding to piece together what the speaker is saying with what has been learned from other speakers or other investigation.

• Type 5: Appreciative or Enjoyment Listening

One automatically switches to this type of listening when a situation is perceived as funny, or when an explanatory example of a situation will be given. This listening type helps in understanding real-world situations.

One must establish which type of understanding is wanted and then listen from that perspective.

2.3.4 Mission, Vision, Goals, Values, and Quality Policy

The mission statement tells why a company or an organization exists. Organizations need to map their course of direction, which is the corporate vision. Goals convey how the vision will be achieved. Values are like an organization's code of ethics – they help establish the corporate culture and shape the foundation for making decisions. A quality policy is a statement of principles, and a broad guide to action.

The statements of mission, vision, goals, values, and quality policy must be what all levels of management truly believe and practice in their day-to-day activities. Developing the statements cannot be delegated, nor is it a quick task.

2.3.4.1 Mission

A mission statement explains why a company, organization, or activity exists, and what it is designed to accomplish. It clearly and concisely describes the work that is done, providing direction and a sense of purpose. The mission should focus on products and services and be customer-oriented. During implementation, the mission is constrained by the vision and values.


Examples of mission statements include:

• From Arco Transportation Company Information Services: "The mission of information services is to provide the appropriate computing network, products, and services and support of the strategies, goals, and objectives of the company."

• From the Ford Motor Company (listed in their Ford Q-101 Quality Systems Standard, January 1986): “Ford Motor Company is a worldwide leader in automotive and automotive-related products and services as well as in newer industries such as aerospace, communications, and financial services. Our mission is to improve continually our products and services to meet our customers’ needs, allowing us to prosper as a business and to provide a reasonable return for our stockholders, the owners of our business.”

2.3.4.2 Vision

Leaders provide a vision, which is a clear definition of the result to be achieved. Organizations without a vision flounder. The vision establishes where the organization desires to move from its current state. It gives everyone a direction to work towards. Senior management should establish the vision, ensuring that how it contributes to the business is clear. A vision is simple and concise, and it should be understood and supported by all.

Examples of visions include:

• From the Quality Assurance Institute: “Our vision is to produce competent and successful quality assurance analysts.”

• From the Eastman Kodak Company: "We see ourselves now and in the future as a company with a strong customer franchise, known for reliability, trust, and integrity in all relationships. Our business will be based on technologies that have evolved over our long history, and which will give us unique advantages over our competition. These technologies will span our core businesses, and will also go beyond boundaries we can see today.”

• From the Ford Motor Company: "A worldwide leader in automotive and automotive-related products and services as well as in newer industries such as aerospace, communications, and financial services."

• President Kennedy had a vision of putting a man on the moon before 1970.

• QA analysts should have a vision of improving quality, productivity, and customer satisfaction.

2.3.4.3 Goals

Goals explain how the vision will be achieved. For example, if an organization's vision is to produce defect-free software, a goal might be to have no more than one defect per thousand lines of code. Goals change as an organization moves closer to accomplishing the vision. Well-developed programs are necessary to achieve the goals.


Goals and objectives are often used interchangeably; however, goals tend to be more global and non-quantitative. Objectives come from goals, and tend to be more specific and quantitative.

Goals:

• Are consistent with the vision

• Are established by operational management (manager of systems and programming, manager of computer operations, etc.)

• Must have management commitment

• Clearly identify the role each individual plays in accomplishing the goal

• Should be linked to programs established to accomplish the goals

Strategic quality management goals must focus on both the producer and the customer. Short-term goals should:

• Reduce defects

• Reduce cycle time (i.e., shorter schedules and fewer resources)

• Provide return on investment from short-term programs

Long-term goals should be customer-oriented. They involve improving customer satisfaction and greater matching of the products and services to the true customer needs. These goals could include, but should not be limited to:

• High customer satisfaction activities

• Management establishing a need for quality, thus creating an environment receptive to quality processes

• Understanding what must be done in order to deploy a new process

• Improving compliance to processes

• Sustaining the quality effort, as a result of managing quality

• Involving all employees in quality processes

• Recognizing the need for the quality analyst

• Establishing a quality infrastructure with adequate resources to perform the assigned mission

• Having adequate resources to perform quality activities such as continuous process improvement

• A doable and measurable plan of action enabling the quality processes to demonstrate accomplishments based on approved objectives

Financial goal statements from the Eastman Kodak Company are:

• “To rank among the top 25 U.S.-based multinational companies in net earnings”


• “To approach a return on equity of 20 percent”

• “To increase worldwide productivity at double the U.S. average for manufacturing companies”

2.3.4.4 Values

Values or guiding principles tell how to conduct business. They help define an organization's culture and personality by clarifying what behavior is expected in order for the organization to achieve its vision and mission. Values are established by senior management and respect the integrity of the individual. Examples of values are: customer-focused, quality management, innovative, employee empowerment, ethical, cooperative relationships, and risk-taking. Values should be consistent with Dr. Deming's 14 quality principles (see Skill Category 1), and they should be integrated into the organization's work program. If really believed, values help focus the organization on a shared behavioral model.

Examples of values include:

• From the Eastman Kodak Company: “Quality – to strive for continuous improvement through personal contributions and teamwork.”

“Integrity – requiring honest relationships with colleagues, customers, shareholders, and suppliers.”

“Trust – characterized by treating everyone with respect and dignity.”

“Ethical behavior – so Kodak can earn and deserve a reputation that is beyond question.”

“Teamwork – through open communication that gives everyone a sense of personal involvement in the company's performance.”

“Job satisfaction – in an environment that encourages people to grow to their full potential.”

“Creativity – fostered by an atmosphere that challenges employees to seek new solutions and to take intelligent risks.”

“Flexibility – recognizing the need to anticipate and respond to changing economic, social, competitive, and market conditions.”

“Winning attitude – in knowing that through hard work, pride and confidence, Kodak people make up a ‘world-class’ team.”

• From the Ford Motor Company (listed in their Ford Q-101 Quality Systems Standard, January 1986):


“People – Our people are the source of our strength. They provide our corporate intelligence and determine our reputation and vitality. Involvement and teamwork are our core human values.”

“Products – Our products are the end result of our efforts and they should be the best in serving customers worldwide. As our products are viewed, so are we viewed.”

“Profits – Profits are the ultimate measure of how efficiently we provide customers with the best products for their needs. Profits are required to survive and grow.”

• From Arco Transportation Company Information Services relating to people: "Information services maintain a productive and challenging work environment to foster personal growth and career development."

2.3.4.5 Quality Policy

Executive management's commitment to quality should be expressed in writing to all employees in the form of a quality policy. Management should work as a team to develop the policy, which must be aimed at the employees and written so they can understand it. The policy should be concise and cover all aspects of quality. Eventually, every existing regulation, procedure, and policy letter should be reviewed to assure that it aligns with the new quality policy.

Examples of quality policies are:

• From Xerox: “Quality is the basic business principle of Xerox. Quality means providing our internal and external customers with innovative products and services that fully satisfy their requirements. Quality improvement is the job of every Xerox employee.”

• From Corning Glass Works: “It is the policy of Corning Glass Works to achieve total quality performance in meeting the requirements of external and internal customers. Total quality performance means understanding who the customer is, what the requirements are, and meeting those requirements without error, on time, every time.”

• From Baxter: “We will reach agreement on requirements with our customers and suppliers, inside and outside the company. We will conform to those requirements and perform defect-free work at all times.”

• Key components from IBM Corporation's quality policy are: “Quality is the cornerstone of the IBM Corporation business.”

“The objective of this policy is to provide products and services, which are defect free.”

“Everyone must learn to do his or her job right the first time (i.e., no rework due to defects).”


“Each stage of a job must be defect free.”

“Quality is everybody's responsibility.”


Skill Category 3
Quality Baselines

Organizations need to establish baselines of performance for quality, productivity, and customer satisfaction. These baselines are used to document current performance and to document improvements by showing changes from a baseline. In order to establish a baseline, a model and/or goal must be established for use in measuring against to determine the baseline.

3.1 Quality Baseline Concepts

3.1.1 Baselines Defined

Developing a baseline is performing an analysis/study to determine the current level of performance in a specific activity. Baselines normally are quantitative and describe multiple attributes of an activity/process. For example, if one were to baseline the software development process, the baseline could include quantitative data on defect rates, resources expended by phase, productivity rates such as function points per person-month, and levels or amounts of documentation (number of words of documentation per line of code). The metrics used for baselining should be metrics that, if improved, would help drive the management-defined results.
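To make the example concrete, the minimal Python sketch below records such attributes for a hypothetical project and derives the rates mentioned above; the field names and figures are illustrative assumptions, not values taken from the CBOK.

    from dataclasses import dataclass

    @dataclass
    class DevelopmentBaseline:
        """Quantitative attributes captured for a software development baseline."""
        defects: int            # defects found during development
        kloc: float             # thousands of lines of code produced
        function_points: int    # size of the delivered functionality
        person_months: float    # effort expended
        doc_words: int          # words of documentation produced

        def defect_rate(self) -> float:
            """Defects per thousand lines of code."""
            return self.defects / self.kloc

        def productivity(self) -> float:
            """Function points delivered per person-month."""
            return self.function_points / self.person_months

        def doc_density(self) -> float:
            """Words of documentation per line of code."""
            return self.doc_words / (self.kloc * 1000)

    # Hypothetical figures used only to illustrate the calculations.
    baseline = DevelopmentBaseline(defects=240, kloc=12.0, function_points=150,
                                   person_months=30.0, doc_words=48000)
    print(f"{baseline.defect_rate():.1f} defects/KLOC, "
          f"{baseline.productivity():.1f} FP/person-month, "
          f"{baseline.doc_density():.1f} doc words/LOC")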



A baseline needs to be established for two reasons: first, to identify perceived quality problems; and second, to establish a baseline from which quality improvement objectives can be established and improvements in quality quantitatively measured. The practices used to improve the process are covered in Skill Category 6.

These quantitative quality baseline studies help establish the need for a quality initiative. Many IT managers do not know the severity of their quality problem. Once a need can be identified from analyzing a baseline, the need for a quality initiative to address the problem normally becomes obvious.

3.1.2 Types of Baselines

A baseline study is one designed to quantitatively show the current status of an activity, program, or attitude. It both shows “how you are doing” and provides a quantitative base from which change can be measured.

The studies included in this manual should be viewed as baseline studies. They are attempting to establish a “base” from which a quality improvement need can be identified and measured. Baseline studies can be conducted in one of the following two manners:

• Evaluate entire population
This means all of the parties or products involved will be surveyed. This method is normally most effective when the information to be analyzed is automated, for example, when looking at factors such as schedule and budget.

• Sample survey
Using this method, only a part of the population of people/products is surveyed. This approach is normally most effective when there is a large population to be surveyed and the data is not automated. While sampling should be done statistically, it is not essential in these studies for valid statistical samples to be drawn. The reason is that quality is attempting to eradicate all defects, and even if a defect is only perceived by a part of the population, it is still a defect and warrants attention.

3.1.3 Conducting Baseline Studies

The three logical groups to conduct baseline studies are:

• Quality assurance groups
If one has been established, they should conduct baseline studies in areas of quality concern.

• Quality task forces
A special task force or committee established to study quality/productivity in an information services function. In many instances these study groups precede the establishment of a quality assurance function. In fact, many of the individuals who chair the study group later become the quality assurance manager.


• IT management
Specific managers that have a concern over quality may wish to perform a baseline study. Note that in many instances the study is actually conducted by a subordinate to that manager.

Baseline studies need not be time-consuming or costly. The objective is to quantitatively identify potential problems. The quantitative data should be subject to analysis and interpretation. If the data appears unreliable, then an additional study might be undertaken to validate the reliability. It is generally good practice to get the data as quickly and cheaply as possible, because in most instances, the data is used to substantiate what people intuitively know.

The following are typical steps needed to perform a baseline study.

1. Identify products/services to be surveyed

This is the requirements phase of the baseline study. Studies should be directed at specific products or services. For example, computer programs as a product, and customer/user interaction as a service.

2. Define conformance and nonconformance
The individuals developing the survey instrument must have defined (at least on a preliminary basis) the expected conformance and nonconformance. Note that in many instances the survey will be used to help establish nonconformance, but the general areas of nonconformance will need to be identified in order to gather nonconformance data.

3. Identify survey population
The population of data/people to be surveyed needs to be identified. This is a critical step because the definition of nonconformance will vary significantly depending upon who is considered the population. For example, programming defects would look significantly different to the programmer than to the end user of the program results. The programmer may only consider variance from specs a defect, but to the end user not meeting needs is a defect.

4. Identify size of population to be surveyed
This step is one that involves economics. It is always cheaper to look at fewer than more. The question that needs to be decided is how few can be examined to give valid results. Statistically, we rarely try to go below a sample size of twenty, but in surveying people we may be able to drop below this limit and still get valid results.


5. Develop survey instrument
A specific survey instrument must be developed to meet the survey objectives. Surveys need to be customized to the specific needs and vocabulary of the population being surveyed.

6. Conduct survey
The survey instruments should be completed by the surveyed population. From a quality perspective, it is helpful to train the population on how to complete the survey questionnaire. This can be done through written instructions, but it is normally better to do it verbally. If the population group is small, they can be called together for a meeting, or the survey instruments can be hand delivered and explained. Generally, the validity of the results will increase when extra time is spent to explain the intent and purpose of the survey.

7. Follow up on incomplete surveys
The survey should have a time limit for response. Normally this should not exceed one week. If the surveys have not been returned by that time, then the surveying group should follow up and attempt to get as many surveys completed as possible. Note that it is generally not realistic to expect every survey to be returned.

8. Accumulate and present survey results
The survey information should be accumulated, consolidated, and put into presentation format. Suggestions on how to do this follow in a later part of this quality initiative.

9. Take action and notify participants of that action
All surveys should result in some specific action. Even if that action is to do nothing, a decision should be made based on the survey results. That decision should be given to the survey participants. Note that whenever a survey is conducted, the participants expect some action. Failing to inform the participants of the action will reduce cooperation in future surveys.

3.1.3.1 Conducting Objective Baseline Studies

Objective baseline studies are ones which are viewed as factual and non-argumentative. There are very few objective studies that can be conducted within information services. Objective means performed by counting, and what can be counted must be considered non-argumentative. For example, if we wanted to know how many lines of executable code were in a program, and let us assume we can define what is meant by a line of executable code, then we could count those lines and have an objective baseline.

We need to note that what may appear as objective may really be more subjective. For example, if we ask people to keep time records, and then record hours worked based on those times, we must make the assumption that the count is accurate. In most instances, the hours count is subjective because many will record their hours worked at the end of each month, and thus the hours count is a subjective measure and not an objective measure.


Objective measures are those which can be accomplished by counting. Examples of objective measures that can be used for baselines include:

• Project completed on schedule

• Lines of code

• Number of programs

• Number of people assigned to a project

• Number of abnormal terminations

Again, the exactness of the counting will actually determine whether the above measures are objective or subjective. It is important to recognize that there are very few objective measures, and thus we are forced to use subjective measures in measuring quality and productivity.

3.1.3.2 Conducting Subjective Baseline Studies

Subjective baseline studies will be the most commonly conducted studies in measuring quality and productivity. Subjective means that judgment is applied in making the measure. We noted in the discussion on objective measures that when the individual involved in recording time has the option of applying judgment, then the measure becomes subjective.

Baselines should be quantitative even if the measure is subjective; that is, quantitatively subjective. For example, because quality conformance and nonconformance must be defined by people, we are looking for ways to put this information into a quantifiable format. A customer may simply state that service is unresponsive. This does not convey a lot of information, but it is indicative of a problem. However, if we develop a five-point scale for unresponsiveness, and ask your dissatisfied customer to complete that scale, we now have conveyed a lot more information. If our scale rates “1” as very poor service, and “5” as very good service, there is a great deal of difference between a “1” rating and a “3” rating for dissatisfaction.
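As a minimal sketch of how such subjective ratings might be turned into one quantified baseline value, assuming hypothetical responses from several customers on the five-point scale above:

    # Hypothetical ratings from dissatisfied customers on the five-point
    # responsiveness scale (1 = very poor service, 5 = very good service).
    responsiveness_ratings = [1, 2, 3, 2, 1, 3, 2]

    # The subjective opinions become a quantified baseline value that can be
    # tracked over time and compared after improvement actions are taken.
    baseline_score = sum(responsiveness_ratings) / len(responsiveness_ratings)
    print(f"Responsiveness baseline: {baseline_score:.2f} on a 5-point scale")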

Examples of products/services that can be measured subjectively for developing a baseline include:

• Customer satisfaction

• Effectiveness of standards/manuals

• Helpfulness of methodologies to solve problems

• Areas/activities causing the greatest impediments to quality/productivity

• Causes for missed schedules/over-budget conditions

• Understandability of training materials

• Value of tools

• Importance of activities/standards/methods/tools to individual activity

Baselines can be conducted for any one of the following three purposes:

3.1.3.2.1 Planning

To determine where a detailed investigation/survey should be undertaken.


3.1.3.2.2 Internal analysis

To identify problems/areas for quality improvement. Once the problem/area has been identified, then no additional effort need be undertaken to formalize the results.

3.1.3.2.3 Benchmarking

Comparison against external organizations.

If you want to develop a baseline of customer satisfaction, you might ask your customer to rate the following factors using “very satisfied” to “very dissatisfied” as a scale:

• Availability – Accessible to the customer, as agreed (e.g., product is available when you need it)

• Correctness – Accurate and meets business and operational requirements (e.g., automated information is accurate)

• Flexibility – Easy to customize for specific needs (e.g., easy to enhance product with new features)

• Usability – Easy to learn, easy to use, and relevant (e.g., the way you interact with components of the product is consistent)

• Caring – Demonstrating empathy, courtesy, and commitment (e.g., project team is sensitive to your needs)

• Competence – Having and applying the appropriate knowledge and skills (e.g., project team provides correct answers to questions)

• Dependability – Meeting cost and schedule commitments (e.g., project team successfully completes work on schedule)

• Responsiveness – Providing prompt turnaround of customer inquiries and requests (e.g., project team responds quickly to your written requests)

Next, to develop a customer satisfaction score you would need to assign importance to each factor. For example, you might assign the percentage of importance to your end user/customer of these eight quality factors as follows: 10% availability, 10% correctness, 5% flexibility, 10% usability, 10% caring, 20% competence, 20% dependability, and 15% responsiveness.
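As an illustration of the arithmetic, the sketch below combines per-factor survey ratings with those importance weights into a single customer-satisfaction score. The 1-to-5 scale (5 = very satisfied) and the individual ratings are hypothetical assumptions used only to show the calculation.

    # Importance weights for the eight quality factors, as suggested above.
    weights = {
        "availability": 0.10, "correctness": 0.10, "flexibility": 0.05,
        "usability": 0.10, "caring": 0.10, "competence": 0.20,
        "dependability": 0.20, "responsiveness": 0.15,
    }

    # Hypothetical average survey ratings per factor on a 1-5 scale
    # (5 = very satisfied, 1 = very dissatisfied).
    ratings = {
        "availability": 4.2, "correctness": 3.8, "flexibility": 3.0,
        "usability": 4.0, "caring": 4.5, "competence": 3.6,
        "dependability": 3.2, "responsiveness": 3.9,
    }

    # Weighted composite: the customer-satisfaction baseline score.
    score = sum(weights[f] * ratings[f] for f in weights)
    print(f"Customer satisfaction baseline: {score:.2f} out of 5")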

3.2 Methods Used for Establishing Baselines

IT organizations have established many different baselines to evaluate current performance and to measure improvement. To help understand the types of baselines that are used in IT, QAI has categorized the four most commonly used baselines as listed below. Each one of these will be discussed individually.

• Customer surveys

• Benchmarking

• Management established criteria


• Industry models

3.2.1 Customer Surveys

The customer needs to be defined as some group or individual receiving IT products and services. These can be internal customers, such as programmers receiving program specifications from systems analysts, or external customers, such as the payroll department using IT products and services to produce payroll.

Skill Category 1 provides two definitions of quality: “meets requirements” and “fit for use.” Customer surveys use the “fit for use” definition of quality. Customer surveys are subjective baselines; they measure the attitude and satisfaction of customers with the products and services they receive. Because they are subjective, it is important that customer surveys are properly constructed.

We can divide customer surveys into report cards and surveys. Report cards ask the customer what they think about something, for example, whether they were satisfied with the user manuals provided by IT. The report card asks them to rate what they think of the user manual on a scale of 1 to 5, with one meaning very pleased and five meaning very displeased. Because the report card is not controlled, it is sometimes difficult to know how the user calculated a particular rating.

Surveys should be constructed around specific factors and attributes of a product or service. Questions are then constructed to support the assessment of that factor and attribute. The subjective response is defined in enough detail so that there is consistency among individuals regarding the response. The factors and attributes are then given weights so that a total customer-satisfaction score can be developed.

3.2.2 Benchmarking to Establish a Baseline Goal

A benchmarking baseline is one determined by another organization. For example, if you wanted to establish a baseline goal for up-time in your computer center, you might want to go to what you believe is a leading organization and use their up-time percentage as your baseline goal.

What is important in benchmarking is that you have a well-established baseline measurement in your organization so that you can ensure the other organization's baseline is comparable. Let us assume that you are baselining the number of defects per thousand lines of code created during development. Through the baseline of your organization you will have to carefully define what a defect is and what a line of code is. Once you have done that, and you look at another organization's defects per thousand lines of code for benchmarking purposes, you will need to confirm that they use the same definition for defects and the same definition for lines of code. If the definitions are different, then the benchmark that you get from that organization will be meaningless as the goal for your organization.

There are many different steps that organizations follow in benchmarking. However, most benchmarking processes have these four steps:


1. Develop a clearly defined baseline in your organization.

This means that all of the attributes involved in your baseline are defined. In our example of defects per lines of code, clearly defining what is meant by a defect and a line of code would meet the objective of this step.

2. Identify the organizations you desire to benchmark against.
Many factors come into this decision, such as whether you want to benchmark within your industry, whether you want to benchmark what you believe are leading organizations, whether you want to benchmark an organization that uses the same tools that are used in your organization, and whether you want to benchmark against organizations with a similar culture.

3. Compare baseline calculations.
Compare how your baseline is calculated versus the baseline calculation in the company you want to benchmark against. Benchmarking is only effective when you benchmark against an organization that has calculated its baseline using approximately the same approach that your organization used to calculate the baseline.

4. Identify the cause of baseline variance in the organization you benchmarked against.
When you find a variance between the baseline calculation in your company and the baseline calculation in the organization you are benchmarking against, you need to identify the cause of the variance. For example, if your organization was producing 20 defects per thousand lines of code, and you benchmarked against an organization that only had 10 defects per thousand lines of code, you would want to identify the cause of the difference. If you cannot identify the cause of the difference, there is little value in benchmarking. Let us assume that the company you benchmarked against had a different process for requirement definition than your organization. For example, assume they use JAD (joint application development) and you did not. Learning this, you may choose to adopt JAD in your organization as a means for reducing your developmental defect rates.
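To make the arithmetic in step 4 concrete, a minimal sketch of the defects-per-KLOC comparison follows; the defect and line-of-code counts are hypothetical, and the sketch assumes both organizations use the same definitions of “defect” and “line of code.”

    def defects_per_kloc(defects: int, lines_of_code: int) -> float:
        """Defect rate normalized to thousands of lines of code (KLOC)."""
        return defects / (lines_of_code / 1000)

    # Hypothetical counts for your organization and the benchmark organization.
    our_rate = defects_per_kloc(defects=400, lines_of_code=20_000)        # 20.0
    benchmark_rate = defects_per_kloc(defects=150, lines_of_code=15_000)  # 10.0

    variance = our_rate - benchmark_rate
    print(f"Our baseline: {our_rate:.1f} defects/KLOC, "
          f"benchmark: {benchmark_rate:.1f}, variance: {variance:+.1f}")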

A less formal method for benchmarking is to visit other organizations. This will provide the quality professionals with these benefits:

• The cost and effort to develop new and innovative quality approaches within IT is prohibitive for most companies. Learn from others and don't “reinvent the wheel”.

• Comparing quality programs to those in other companies can identify gaps in current processes and lead to obtaining more effective quality practices.

• Interfacing periodically with other quality individuals is good for professional development. Those colleagues will not exist internally unless the company is large.

Visiting another company is a five-step process, as follows:

1. Identify discussion areas


As it is important for the visit to be mutually advantageous, get management agreement on what oral and written information can be shared with the company being visited. Determine the objective of the visit.

• Identify a specific area such as conducting software reviews or independent testing.

• If the objective is to gather general information, convert it into a visit objective, such as identifying the three most effective quality practices used by the other company.

2. Identify target companies
Visits can be within divisions or subsidiaries of the corporation, or to other organizations or corporations. Identify constraints that need to be considered as part of the selection process. Consider items such as whether information can be exchanged with competitors, the availability of travel funds, whether the size of the target company should be similar, and whether the maturity of the company's quality function lends itself to the topics selected (starting a quality function requires a company that has gone through the process). Select three target companies, prioritizing them by desirability, and obtain management's approval to schedule a visit with them.

3. Schedule the visit
Contact your peers at the targeted companies. Identify yourself and your company. State the purpose for the visit, what you can share, the information you would like to receive in exchange, and how the visit would be mutually advantageous. Offer to provide a letter requesting the visit if your colleague needs it to get the visit approved. Set a date and time for the visit. One-half to two days is recommended.

4. Conduct the visit
Visits typically begin with introductions, a restatement of the objectives of the visit, and a tour of the other company's facilities to put the size and purpose of the IT function in perspective. Agenda items are then discussed in detail. Written documentation is exchanged and agreement for usage (such as copying, reworking, distributing, etc.) is given. The visit concludes by thanking management of the other company, and, if possible, leaving some memorabilia or some of your company's products as a token of remembrance.

5. Put new practice into use
Select the one best idea from the visit for implementation. Demonstrating a positive outcome from these visits will increase the chance of management approving other visits. Do not try to implement too many things at once.

3.2.3 Assessments against Management Established Criteria

Management can develop a baseline for anything they feel needs to be measured and/or improved. These baselines are organization dependent; there are no industry models or standards against which such a baseline can be determined.


For example, your management may want to develop a baseline on how effective an organization-developed training program is. The organization may have developed a training program for tool x and want to develop a baseline on how the students taking that course perceive the value of the course.

Generally, if management wants to know the efficiency or effectiveness of something in their organization, they should develop a baseline to evaluate performance. The baseline can also measure improvement due to changing that particular item.

The following is an example of a baseline for IT climate. Climate is a component of the environment relating to the individual worker's perception of the “climate” in which they perform their work activities. An example of calculating such a baseline follows.

The organizational climate is the workers' attitude toward their organization. It is a composite of human behaviors, perception of events, responses of employees to one another, expectations, interpersonal conflicts, and the opportunities for growth in the organization. The climate is crucial in creating and maintaining an effective organization, and, therefore, should be periodically evaluated to discover whether the satisfaction level of the employees is positive or negative.

An evaluator can use the six steps below to assess the climate of a specific organization, group, committee, task force, etc.

1. Look at the organization (group, committee, task force, etc.)

Assess the mission and goals of the organization, what it is supposed to produce, and the overriding principles by which it operates.


2. Examine the jobs
Examine each job in the organization. Ask whether the job is necessary, whether it makes full use of the employee's capabilities, and whether it is important in accomplishing the mission and goals of the organization.

3. Assess employees' performance
Evaluate each employee's performance in relation to the organization's mission and goals. For each job being performed, ask if the employee is doing what should be done, is using his or her skills effectively, likes his or her job, and has enthusiasm and interest in performing the job.

4. Evaluate how employees feel about their manager or leader
Good organizational climate requires good leadership. Determine whether each employee within the group likes his or her manager, whether they follow or ignore the requests of their manager, and whether they attempt to protect their manager (i.e., make their manager look good).

5. Create a dialog with the members of the group
Interact with each employee, asking a series of hypothetical questions to identify the employee's true feelings toward the organization. Questions such as, “Do you feel the organization supports your suggestions?”, can help draw out the true feelings of each employee.

6. Rate organizational climate
Based on the responses to steps 1-5, evaluate the climate on the following five-point scale:

• Ideal (5 points)

A fully cooperative environment in which managers and staff work as a team to accomplish the mission.

• Good (4 points)

Some concerns about the health of the climate, but overall it is cooperative and productive.

• Average (3 points)

The organizational climate is one of accomplishing the organization's mission and goals, but no more.

• Below average (2 points)

The individuals are more concerned about their individual performance, development, and promotion than accomplishing the organization's mission.

• Poor (1 point)


There is open hostility in the group and a non-cooperative attitude. As a result, the mission and goals are typically not met.
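As a small, hypothetical sketch of how individual ratings on this five-point scale might be reduced to a single climate baseline value and checked against the three-point threshold discussed below (averaging the ratings is an assumption made for illustration, not a prescribed formula):

    # Hypothetical per-employee climate ratings from the six-step evaluation,
    # expressed on the 1-5 scale defined above (1 = poor, 5 = ideal).
    employee_ratings = [3, 2, 4, 2, 3, 2]

    climate_score = sum(employee_ratings) / len(employee_ratings)
    print(f"Organizational climate baseline: {climate_score:.1f} (1 = poor, 5 = ideal)")

    # A score of three points or less signals a negative climate that warrants
    # the improvement actions described in the text.
    if climate_score <= 3:
        print("Negative climate: plan improvement actions")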

A negative climate of three points or less often results from individuals having too much responsibility without the authority to fulfill those responsibilities, or management's failure to recognize the abilities of the employees. Negative organizational climates can be improved with the following:

• Develop within the organization a shared vision of what needs to be changed. Get feedback from the employees, and through discussion and compromise agree upon the mission and goals for the organization.

• Change the organization's procedures, as the climate rarely improves without procedural changes.

• Develop a plan for accomplishing the organization's mission that is understood and acceptable to its members. This is normally accomplished if the members help develop the plan.

• If the workers' abilities are not being utilized effectively, reassign tasks to take advantage of their capabilities.

• Develop new methods; for example, try an incentive program or one tied to short-term rewards, such as a paid lunch, a day off, etc.

3.2.4 Assessments against Industry Models

An industry model can be used to measure your IT organization against that model. Your current level of performance against the model provides a baseline from which to measure improvement. The following section describes the more common industry models used by the IT industry.

3.3 Model and Assessment Fundamentals

3.3.1 Purpose of a Model

A model is an idealized concept to be accomplished. Models are usually developed under the auspices of national or international standards organizations and may be customized or 'tailored' to meet new or changing needs. Most industry models define the minimum that has to be accomplished for compliance, and allow compliance to be measured in "pass/fail" terms. Compliance assessments can be through first-party, second-party, or third-party audits or other types of evaluation.

Organizations choose to adopt a model for any or all of the following reasons:


• Satisfy business goals and objectives
If the current infrastructure and processes are not meeting business goals and directives, adopting a model can refocus the IT organization to meet those goals and objectives.

• Requirements are imposed by customer(s)
In an effort to improve efficiency and effectiveness, key customers may direct the IT organization to adopt a model.

• For competitive reasons
External customers may only do business with an IT organization that is in compliance with a model.

• As a guide (road map) for continuous improvement
A model usually represents the combined learning of leading organizations, and, as such, provides a road map for improvement. This is particularly true of continuous models.

3.3.2 Types of Models (Staged and Continuous)

The two types of models that exist, staged and continuous, are discussed below.

3.3.2.1 Staged models

Staged models are composed of a number of distinct levels of maturity. Each level of maturity is further decomposed into a number of processes that are fixed to that level of maturity. The processes themselves are staged and serve as foundations for the next process. Likewise, each level of maturity is the foundation for the next maturity level.

The Software Engineering Institute's Capability Maturity Model Integration (CMMI®1) is an example of a staged model, although it now has a continuous representation. See "Software Engineering Institute Capability Maturity Model Integration (CMMI®)" in Section 3.4.1 for more information. ISO/IEC 15504 (SPICE) is another example of a staged model.

3.3.2.2 Continuous models

In the continuous model, processes are individually improved along a capability scale independent of each other. For example, the project planning process could be at a much higher capability level than the quality assurance process.

3.3.3 Model Selection Process

There are a number of software process improvement models publicly available. The most notable are the Capability Maturity Model Integration (CMMI®), the Malcolm Baldrige National Quality Award (MBNQA), ISO 9001, the Deming Prize, and ISO/IEC 12207, used with ISO/IEC 15504 as the assessment model.

1. ® CMMI is registered in the U.S. Patent and Trademark Office.



Quality models are designed for specific purposes. For example, the CMMI was developed to evaluate a software contractor's capability to deliver quality software. The MBNQA was designed to identify world-class quality organizations in the United States.

Having a model that is applicable, and that management is committed to achieve, can be a major step in improving the productivity and quality of an IT organization as it moves toward world-class status. A specific model may or may not fully apply to an IT organization, and selecting a model does not have to be an all-or-nothing decision.

An organization should consider these four criteria when selecting a model:

• Applicability of the model to the organization's goals and objectives
A particular model may include criteria that are inconsistent with the organization's goals or objectives. For example, a model may propose the use of customer and employee surveys. The effort and resources required to do this may not be consistent with the organization's current work plan. In that case, the model may either be rejected or modified. It is important that each criterion in a model be directly related to accomplishing the organization's goals and objectives.

• Management commitment
The implementation of any model may require a significantly different management style than currently exists. The new management style may involve different programs or methods of doing work, independent audits to determine compliance to the model, and so forth. If management does not commit budget, schedule, and resources to implement and sustain the model, the effort is doomed from the beginning.

• Need for baseline assessments
An organization may need to know how effective and efficient it is. Without measurement against specific criteria, any efficiency and effectiveness assessment is likely to be subjective. Measuring the organization against a model improves the objectivity of a baseline assessment.

• Need for measurable goals and objectives
Organizations that desire to improve need goals and objectives to measure that improvement. If a model is accepted as the means for continuous improvement, then measurable goals and objectives can be defined that describe movement toward meeting the model criteria.

3.3.4 Using Models for Assessment and Baselines

A baseline is a current level of performance. An assessment determines the baseline against the model (the goal to be accomplished). In addition to the baseline and the model, the third criterion required for continuous improvement is a method for moving from the current baseline to the desired goal.


The assessment against the model characterizes the current practice within an organization in terms of the capability of the selected processes. The results may be used to drive process improvement activities or process capability determination by analyzing the results in the context of the organization's business needs and identifying strengths, weaknesses, and risks inherent in the processes. Sequencing of the improvements is determined by the organization, and then assessments are used again to show whether or not the desired performance is being accomplished.

Many organizations put new programs into place without a model, and, as a result, have no means of determining whether performance has changed. The effective use of a model, including determining baselines and regular assessments, can help IT management continually improve effectiveness and efficiency.

3.3.4.1 Assessments versus Audits

An assessment is a process used to measure the current level of performance or service. The objective is to establish a baseline of performance. You can measure against an internationally accepted model or an organization's own defined criteria.

An audit is an independent appraisal activity. While assessments may be done by the individuals involved in conducting the work, an audit is performed by someone independent of the activity under audit.

There are two general categories of auditors. These are auditors external to the organization (external auditors), and auditors internal to the organization (internal auditors). The external auditors are normally engaged by the organization's Board of Directors to perform an independent audit of the organization. Internal auditors are employees of the organization that they audit.

Both internal and external auditors are governed by the standards of their profession. In addition, in many countries, the external auditors (frequently referred to as Certified Public Accountants or Chartered Accountants) are covered by stock exchange and governmental regulations.

For the purpose of this study guide, we will define an audit as an independent evaluation of an ITrelated activity. For example, information systems may be audited after they are placed intooperation to determine whether or not the objectives of those systems were accomplished.

Audits assess an activity against the objectives and requirements established for that activity. A report or opinion by the auditors relates to whether or not the activity under audit has met those objectives and requirements. While internal and external auditors are subject to their professional standards, other individuals within an organization who perform audit-like activities are not held to those professional standards.

Many audits within the IT organization are conducted by quality assurance professionals. They frequently perform what are called "test implementation audits." Most of these audits look at a specific activity, such as building a single software system, as opposed to an overview of all the information systems built within a specified period by the organization.


3.4 Industry Quality Models

There are many industry models available against which your organization can establish a baseline. One of the most commonly used models for developing a performance baseline is the CMMI model. Normally, IT management will establish achieving the criteria of a model as a goal, and then establish the baseline. For example, if an IT organization wanted to be at CMMI Level 3, it would determine its current performance baseline against the CMMI model.

Most baselines are determined by, and measured by, IT personnel. However, when industry models are adopted, the organization may choose to bring in outside consultants to establish the baseline. Although this can be costly, it does provide an independent view.

The following sections describe the most commonly used models in the IT industry.

3.4.1 Software Engineering Institute Capability Maturity Model Integration (CMMI®)

In 1986, the Software Engineering Institute (SEI), a federally funded research and development center, began working with the Mitre Corporation to develop a process maturity framework that would help organizations improve their ability to build software solutions. In 1991, the Capability Maturity Model (CMM) version 1.0 was released and became the framework for software development and control. In 1992, CMM version 1.1 was released with improvements from the software community. On March 1, 2002, the integrated model for systems and software engineering, Integrated Product and Process Development, and supplier sourcing (CMMI®-SE/SW, version 1.1) was released. This model has two different representations: staged and continuous.

3.4.1.1 Maturity Levels

The CMMI® influences an organization's culture by instilling discipline and continuous process improvement into the workplace. It is a framework that enables an organization to "build the right products right." In the staged representation, maturity levels organize the process areas and provide a recommended order for approaching process improvement in stages.

Continuous improvement is based upon long-term objectives accomplished through the implementation of short-term evolutionary steps.

As shown in Figure 3-1, the CMMI® framework is a method for organizing these steps into five levels of maturity. Each maturity level defines an evolutionary plateau of process improvement and stabilizes an important part of the organization's processes.


Figure 3-1 The SEI CMMI® Framework

The five maturity levels define an ordinal scale that enables an organization to determine its level of process capability. The framework is also an aid to quality planning as it affords organizations the opportunity to prioritize improvement efforts.

A maturity level is a well-defined evolutionary plateau for achieving a mature software process. Each level contains a set of goals that, when satisfied, stabilizes specific aspects of the software development process. Achieving each level of the maturity model institutionalizes a different component, resulting in an overall increase in the process capability of the organization.

3.4.2 Malcolm Baldrige National Quality Award (MBNQA)

The United States national quality award originated from the Malcolm Baldrige National Quality Improvement Act (Public Law 100-107), signed by President Ronald Reagan on August 20, 1987. That act, named after a former Secretary of Commerce, called for the creation of a national quality award and the development of guidelines and criteria that organizations could use to evaluate their quality improvement efforts. Awards are given in the five categories below, with no more than two awards being presented per category per year.

• Manufacturing
• Service
• Small Business
• Educational Organizations
• Health Care Organizations


3.4.2.1 Other National and Regional Awards

Other countries and regions have also created quality awards. These use similar categories and variations of the scoring methods used by the MBNQA. The Deming Prize is awarded in Japan; there are also the European Quality Award and the Australian Quality Award in their respective regions.

3.4.3 ISO 9001:2000

The ISO 9000 family of standards contains three standards and many supporting documents. Under ISO protocols, all standards must be reviewed at least every five years to determine whether they should be confirmed, revised, or withdrawn.

In 1997, ISO conducted a large global survey to better understand the needs of ISO 9000 users. This survey revealed the need to substantially re-engineer the model itself, with an emphasis on four needs:

• A common structure based upon a Process Model
• Promote use of continuous improvement and prevention of non-conformity
• Simplicity of use, ease of understanding, and use of clear language and terminology
• Consistency with the Plan-Do-Check-Act improvement cycle

In 2000, the ISO 9000 family of standards integrated the five 1994 standards into four primary documents as follows:

• ISO 9000 – Gives an introduction and a guide to using the other standards in the 9000 family.
• ISO 9001 – Describes a quality system model, including quality system requirements applicable to organizations that design, develop, produce, install, and service products.
• ISO 9004 – Helps companies implement quality systems.

These are supported by the following standard:

• ISO 19011 – Includes guidelines for auditing quality systems.

For those interested in applying ISO 9001:2000 to software, the following will be useful:

• ISO/IEC 90003:2004 Software Engineering – Guidelines for the application of ISO 9001:2000 to computer software

3.4.3.1 Model Overview

The ISO 9001:2000 standards are based on the following eight quality management principles:

1. Customer Focus

2. Leadership

3. Involvement of People


4. Process Approach

5. System Approach to Management

6. Continuous Improvement

7. Factual Approach to Decision-Making

8. Mutually Beneficial Supplier Relationships

3.4.4 ISO/IEC 12207: Information Technology – Software Life Cycle Processes

3.4.4.1 Model Overview

ISO/IEC 12207, which was published in 1995, is the international standard that covers the software life cycle from concept through retirement. It contains a framework for managing, controlling, and improving the software life cycle activities. The standard describes the major processes of the software life cycle, how these processes interface with each other, and the high-level relations that govern their interactions. It has subsequently been amended twice; Amendment 1 defines a Software Engineering Process Reference Model for use with ISO/IEC 15504 Process Assessment.

For each process, the standard also describes the activities and tasks involved, defining specific responsibilities and identifying outputs of activities and tasks. Since it is a high-level standard, it does not detail how to perform the activities and tasks. The standard does not assume a specific life cycle model, nor does it specify names, format, or content of documentation. As a result, organizations applying ISO/IEC 12207 should use other standards and procedures to specify these types of details.

3.4.5 ISO/IEC 15504: Process Assessment (Formerly Known as Software Process Improvement and Capability Determination (SPICE))

In June 1991, ISO approved a study period to investigate the needs and requirements for a standard for software process assessments. The resulting conclusion was that there was an international consensus on the needs and requirements for a process assessment standard.

ISO/IEC TR 15504, Information Technology – Process Assessment, was originally a nine-part Technical Report referred to as SPICE. Based on a series of trials that have been held around the world since 1994, this Technical Report has become an International Standard (ISO/IEC 15504:2004) comprising five parts.

3.4.5.1 Model Overview

ISO/IEC 15504 provides a framework for the assessment of software processes that is appropriate across all application domains and organization sizes. This framework can be used by organizations that plan, manage, monitor, control, and improve the acquisition, supply, development, operation, evolution, and support of software.



The SPICE standard provides a structured approach for assessing software processes by, or on behalf of, an organization with the objectives of:

• Understanding the state of the organization's processes for process improvement (establishing process baselines)
• Determining the suitability of the organization's processes for a particular requirement or class of requirements
• Determining the suitability of another organization's processes for a particular contract or class of contracts

3.4.6 Post-Implementation Audits

A post-implementation audit is conducted to determine any, or all, of the following:

• The system objectives were met
• The system specifications were implemented as specified
• The developmental standards were followed
• The IT quality objectives (e.g., maintainability) were achieved

Post-implementation audits are a quality assurance activity. The post-implementation audit is used to assess the ability of the IT organization to perform effective and efficient work. The results of the audit are used both to improve the software system and to improve the process that builds software systems.


Skill Category 4
Quality Assurance

Quality Assurance is a professional competency whose focus is directed at critical processes used to build products and services. The profession is charged with the responsibility for tactical process improvement initiatives that are strategically aligned to the goals of the organization. This category describes the management processes used to establish the foundation of a quality-managed environment:

Establishing a Function to Promote and Manage Quality  page 4-1
Quality Tools  page 4-6
Process Deployment  page 4-28
Internal Auditing and Quality Assurance  page 4-31

4.1 Establishing a Function to Promote and Manage Quality

Inadequate attention to quality in IT normally results in high systems maintenance costs and customer dissatisfaction. Through the promotion of effective quality practices, the quality function can reduce the cost of systems development, operation, and maintenance, and can improve customer satisfaction.

The quality function has been implemented in hundreds of companies. Quality functions have demonstrated that quality can be defined and measured. Experience has shown that effective quality does increase productivity, and pays for itself by actually reducing costs.


The key concept to establishing a quality function is establishing the need for quality. Until management believes there is a need for quality improvement, the real impediment, management itself, cannot be dealt with.

A quality function exists when a specific individual or group is assigned the responsibility to assist in improving quality. While individual workers have responsibility for the quality of their products and services, it is management's responsibility to ensure that the environment is one in which quality can flourish. The ultimate responsibility for quality rests with senior management.

Many people argue that because everyone has some quality responsibility, a staff function for quality is unnecessary. That argument is theoretically correct, but in practice, unless there is a group charged with responsibility for ensuring quality, the pressures of other priorities such as meeting schedules and budgets frequently take precedence over quality. The following example shows how this happens in a typical systems development project.

The four project variables are scope, schedule, resources, and quality. The management challenge in completing the project can be illustrated as a dashboard of four system attribute dials, which get set according to the project criteria as illustrated in Figure 4-1. The four dials are interconnected, so movement of one dial affects one or more of the other dials.

Figure 4-1 The Four Interconnected Variables of IT Development

All goes well until one of the variables needs to be changed. For example, the project team agrees to install a new requirement, but if management holds schedule and resources firm while moving the scope, something must give. Since the dials are interconnected, what must give is quality. Reduced quality takes the form of less testing, less documentation, or fewer controls. One role of the quality function is to hold the quality dial firm, so that scope changes cause the resources or schedule dials to move, and do not cause a reduction in quality.


"Of all men's miseries, the bitterest is this: to know so much and have control over nothing."

Herodotus, 484-424 BC

4.1.1 How the Quality Function Matures Over Time

The MBNQA program has stated that it takes five to seven years to mature a quality management system. The SEI of Carnegie Mellon University states that it takes at least three years to move from SEI Capability Maturity Model Level 1 to Level 3. It is during this maturation period that the role of the QA analyst significantly changes from quality control to quality assurance to quality consulting. The roles of other individuals also change as the organization's quality management philosophy matures.

4.1.1.1 Three Phases of Quality Function Maturation

The maturation of the quality management system can be divided into three phases:

4.1.1.1.1 Initial Phase

The initiation point of the quality management system can be considered the formalization of quality control activities. An organization in this phase is results-driven, focusing on defining and controlling product quality. It normally takes at least two years in this phase of maturity for management and staff to recognize that quality cannot be tested into a product; it must happen through process maturity.

4.1.1.1.2 Intermediate Phase

In this phase an organization's objectives move from control to assurance. The emphasis is on defining, stabilizing, measuring, and improving work processes. While process improvement occurs before and after this phase, it is during this phase that resources are allocated to make process maturity happen. It takes between two and four years to achieve significant process maturity. At this level, products and services achieve consistency in product quality (consistency being the prerequisite to improved quality and productivity).

4.1.1.1.3 Final Phase

During this phase, objectives such as consulting, motivating, and benchmarking move the organization toward optimization. The MBNQA program defines such an organization as a "world-class" organization. World-class begins when work and management processes are well defined, are being continually improved, and are producing high-quality results that yield high customer satisfaction. These processes are integrated to obtain maximum customer satisfaction at minimum cost.


4.1.2 IT Quality Plan

Quality planning is discussed in Skill Category 5. The IT Strategic Business Plan should include the following:

• The mission, giving a detailed description of what business IT is in
• Long-term IT goals giving direction for IT in the next five years
• Short-term objectives for the next business year
• How the objectives will be accomplished, including IT projects for the next business year
• Organizational renewal programs that will assure the long-range success of the organization
• The resources necessary to accomplish the short-term objectives and the organizational renewal activities that will enable the long-term goals to be achieved

An IT Quality Plan has two objectives:

• Supporting the organizational quality policy
• Ensuring the high-quality achievement of the IT Strategic Business Plan

The IT Quality Plan should include the following:

• A reference to the organization's quality policy
• An assessment of where the organization currently stands in relation to accomplishing the quality policy
• Long-term quality goals - these are discussed below
• Short-term quality objectives (i.e., programs) - these are discussed below
• The means of implementing the quality objectives
• Resources required in order to accomplish the short-term objectives and long-term goals

A major reason for the failure of quality initiatives is a lack of action. Many organizations have introduced quality principles through generalized education and other department-wide awareness and motivational sessions. However, at the end of these sessions nothing has changed. In fact, these sessions often demoralize the staff because they recognize the benefits that could be achieved if action was taken.

Quality is a long-term strategy, and any successful quality program must balance the long-term strategy of building a quality management environment with the short-term need for quick payback.

4.1.2.1 Long-Term Actions

The quality function should have a long-term plan of action to become a champion for moving the IT function to a world-class organization. While the short-term plan focuses on specific work tasks, the long-term plan is more complex and should incorporate these three objectives:


• Building a quality management environment
• Supporting the implementation of the IT function's quality policy
• Assisting management and staff in closing the two quality gaps (see Skill Category 1)

One of the best case studies in long-term planning is the Ford Motor Company. In a short period, from the late 1970s to the mid-1980s, Ford Motor Company went from losing hundreds of millions of dollars per year to becoming the most profitable corporation in the world. This happened under the guidance of Dr. W. Edwards Deming. His plan required Ford Motor Company to develop a mission, vision, guiding principles, and corporate values, and then live by them. Dr. Deming placed William Scherkenbach in the Ford Motor Company as the quality manager. His duties were to make sure that the management and staff of the Ford Motor Company were doing those things necessary to accomplish Ford's vision.

4.1.2.2 Short-Term Actions

Each quality manager must assess his or her own organization to identify quick-payback opportunities. The following short-term actions have proven beneficial in demonstrating that positive results can be achieved from quality initiatives.

• Involve Management in Quality
The first of this two-part action is to help management get beyond quality as an abstract concept by educating them in the "how-to" aspects of quality management. The second part gets management involved in performing quality tasks, large or small. For example, management can post a paper on a bulletin board asking IT staff what prevents them from doing their job effectively, and then select tasks to perform from this list.

• Redefine a Problem Process
Process definition should not take longer than just a few days to accomplish. In some organizations a small process can be defined in a day or two. Redefining a process should not usually take longer than five days.

It is best to select a process that is performed in a variety of different ways, none of which appears to be completely successful, and to select a team that consists of individuals with a vested interest in the process. The process definition team chooses the best pieces of all of the processes, and puts them together to create the best of the best practices within the newly defined process. The team reviews and evaluates the process based on the following criteria:

• Its value to the users (criticality of the process)
• How current the process is
• Usability of the process
• Measurability of the process
• Attainability of adherence to the process in the existing environment

• Find and Fix a Problem


The cost of quality in most IT functions is about 50%. This means half of the budget is already devoted to finding and fixing problems. Unfortunately, the fix is usually to the work product and not the process.

Measurement (see Skill Category 8) is essential to determine what problems need to be fixed, and is a prerequisite to improvement. It requires identifying a product, and then counting the number of defects, providing an accurate record of the types and frequencies of product defects. Analyzing this record identifies recurring defects and their root causes. Eliminating the root causes eliminates the defects, and results in process improvement.

• Improve Quality Control
An important step in process definition is to identify the most logical points in the process to add a control. Skill Category 6 discusses this further.

"Overriding all other values is our dedication to quality. We are a market-driven institution, committed to our customers in everything we do. We constantly seek improvement and we encourage the unusual, even the iconoclastic."

Louis V. Gerstner, Jr., CEO, IBM Corporation

4.2 Quality Tools

A tool is defined as a vehicle that assists in performing a task. Some tasks that a quality management organization will perform where quality tools can be used are:

• Defining a mission, vision, goals, and objectives
• Defining Do and Check processes
• Defining measures
• Collecting data
• Problem-solving
• Designing solutions
• Improving processes
• Measuring results

Quality tools can be categorized many different ways. For this presentation, the following three groups have been selected. The tools described in each of these categories are a subset of existing tools. They have been included because they are more common and experience has demonstrated their effectiveness.

• Management Tools


These tools are based on logic rather than mathematics, and address idea generation and organization, decision-making, and implementation.

• Statistical Tools
These tools have a mathematical focus, usually related to data collection, organization, or interpretation. They may also be separated into tools used for counting and tools used with measures.

• Presentation Tools
These tools are used during presentations to summarize or graphically illustrate data. These tools may be used in the development of written materials, such as proposals or reports, or to accompany oral presentations.

The three steps needed to select and use a quality tool are:

1. Select the Tool

The general process for selecting a tool is to first define the objective (what is needed to perform the work task more effectively and efficiently). Next, study the tool description to determine whether the needed objectives match the tool objectives. Finally, assure that the user of the tool believes it meets the objectives.

2. Learn the Tool

If applicable, the person using the tool must receive some training. Reading through the tool's documentation is the minimum. If possible, get classroom training or take a self-study course. Many tools are not only valuable in quality improvement, but can help individuals in the performance of their day-to-day work. Dr. W. Edwards Deming frequently stated that individuals knowledgeable in quality tools tend to be better on-the-job performers.

3. Use the Tool in Performing the Work Practice

The tool should be utilized in the manner in which it is taught. The user should ensure that there is an approach for deploying and using the tool, and that the results meet the objectives.

4.2.1 Management Tools

Tools in this category are used to help generate ideas and information, to organize the ideas or information, to facilitate making decisions about the information, and to aid in the implementation. These tools are broad in nature. While they are not based on statistics, some, such as cause-and-effect diagrams and benchmarking, may be used in conjunction with statistical tools. The tools to generate or organize ideas and information are:

• Brainstorming
• Affinity Diagram
• Nominal Group Technique


• Cause-and-Effect Diagram
• Force Field Analysis
• Flowchart and Process Map
• Benchmarking
• Matrix

4.2.1.1 Brainstorming

Brainstorming is a technique used to quickly generate a quantity of creative or original ideas on or about a process, problem, product, or service. Brainstorming can be used to:

• Develop a vision
• Review inputs, outputs, and flows of existing processes
• Create a list of products or services
• Eliminate wasteful and redundant work activities
• Re-engineer a process, product, or service
• Design a new or improved process
• Establish standards, guidelines, or measures
• Identify the internal and external customers served
• Improve the work environment
• Gather data for use with other tools

A brainstorming session begins with a facilitator establishing basic ground rules and a code of conduct. Typical brainstorming rules state that all members have an equal opportunity to participate, there is no criticism or pulling rank, people should think creatively, no idea will be treated as insignificant, and there should be only one conversation at a time. Members need to be active participants, willing to share their ideas, opinions, concerns, issues, and experiences.

Next the team agrees on the topic to be brainstormed and whether to give ideas verbally or written on individual index cards, or any other easily manipulated medium. Either a structured (round table) or unstructured (free-flowing) approach is selected. Ideas should be generated quickly (5-15 minutes) and recorded clearly on a flip-chart or board. The process stops when ideas become redundant or infrequent. Recorded ideas are reviewed for duplication and clarification, and eliminated when necessary. Remaining ideas are then evaluated with an open mind and may be used with the affinity diagram, nominal group technique, or cause-and-effect diagram.

4.2.1.2 Affinity Diagram

The affinity diagram is an orderly extension of a structured brainstorming session. Teams use this tool to help create order out of chaos, by categorizing large numbers of ideas. Rather than having teams react logically to a group of ideas, this technique helps to identify more creative solutions or to structure ideas for a cause-and-effect diagram.


Possible topics or problem statements where affinity diagrams could help are:

• Why policies don't exist
• Why standards are not adhered to
• Why QA failed
• Why objective measures aren't used
• Understanding the leadership role in quality management
• Why employees are not involved or lack empowerment
• Why quality doesn't work
• Improving teamwork in the workplace
• Understanding the issues related to automation and the use of CASE tools

To generate affinity diagrams, continue with these steps after a brainstorming session:

1. Write each idea on a separate index card.

2. Randomly place each index card on a flat surface, wallboard or flipchart.

3. In silence, team members move the cards into meaningful groups until consensus has been achieved (the group stops moving the cards).

4. As a team, discuss and then label each category with a title.

5. As a team, discuss each category, using cause-and-effect diagrams, if needed.

4.2.1.3 Nominal Group Technique

The nominal group technique is a structured, facilitated technique where all team members participate by individually ranking ideas, issues, concerns, and solutions, and then achieve consensus by combining the individual rankings. Sample ideas that could be ranked with the nominal group technique are:

• Which defect is the greatest?
• Who are our customers?
• What are our impediments to quality improvement?
• What new standards are needed?
• What are our key indicators?
• What tool is not being used effectively and how can we increase usage?
• How do we get a quality tool used?

Nominal grouping uses a round table (verbal) or index card (written) method for equal participation of teams or groups. It is a good technique to gather large amounts of information. The steps for the nominal group technique are:

1. Generate a list of ideas, issues, concerns, or solutions to prioritize. Brainstorming can be used if the list is not readily available.


2. If the list contains more than about 35 items, it may be desirable to shorten it using Pareto analysis to make it more manageable.

3. As shown in Table 4-1, record the remaining listed items in a location visible to the team, prefacing each item with a letter of the alphabet, such as:

A – list item 1

B – list item 2

C – list item 3

Table 4-1 Results from Nominal Grouping

List Items    Member 1   Member 2   Member 3   Total
A – item 1        2          3          1        6
B – item 2        1          1          2        4
C – item 3        3          2          3        8

4. Team members individually rank the list by assigning a number to each line item. One represents the lowest (least important) ranking, and higher numbers signify increasing importance.

5. Total the rankings of all team members. In this example, item "C" is the most important; the short sketch below illustrates the totaling.
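The totaling in step 5 is simple arithmetic and can be sketched in a few lines of Python. The rankings below are the hypothetical values from Table 4-1, used only as an illustration.

    # Total nominal group technique rankings (hypothetical values from Table 4-1).
    rankings = {
        "Member 1": {"A": 2, "B": 1, "C": 3},
        "Member 2": {"A": 3, "B": 1, "C": 2},
        "Member 3": {"A": 1, "B": 2, "C": 3},
    }

    totals = {}
    for member_scores in rankings.values():
        for item, score in member_scores.items():
            totals[item] = totals.get(item, 0) + score

    # The highest total wins, since higher numbers signify greater importance.
    for item, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(item, total)   # prints: C 8, A 6, B 4

In practice the same totaling is usually done on a flip-chart or in a spreadsheet; the point is only that consensus comes from combining the individual rankings.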

4.2.1.4 Cause-and-Effect Diagram

Teams use cause-and-effect diagrams to visualize, clarify, link, identify, and classify possible causes of problems in processes, products, and services. They are also referred to as "fishbone diagrams," "why-why diagrams," or "Ishikawa diagrams" (after Kaoru Ishikawa, a quality expert from Japan).

Through understanding problems within the work processes, teams can identify root causes of a problem. A diagnostic approach for complex problems, this technique begins breaking down root causes into manageable pieces of a process. A cause-and-effect diagram visualizes the results of brainstorming and affinity grouping through major causes of a process problem. Through a series of "why-why" questions on causes, this process can uncover the lowest-level root cause. Figure 4-2 displays a sample cause-and-effect diagram.



Figure 4-2 Cause-and-Effect Diagram

Cause-and-effect diagrams are applicable for:

• Analyzing problems
• Identifying potential process improvements
• Identifying sources of defect causes
• Analyzing improper use of test routines/testing problems
• Scheduling problems/cycle times

Developing a cause-and-effect diagram requires this series of steps:

1. Identify a problem (effect) with a list of potential causes. This may result from a brainstorming session.

2. Write the effect at the right side of the paper.

3. Identify major causes of the problem, which become "big branches". Six categories of causes are often used: Measurement, Methods, Materials, Machines, People, and Environment, but the categories vary with the effect selected.

4. Fill in the "small branches" with sub-causes of each major cause until the lowest-level sub-cause is identified.

5. Review the completed cause-and-effect diagram with the work process to verify that these causes (factors) do affect the problem being resolved.

6. Work on the most important causes first. Teams may opt to use the nominal group technique or Pareto analysis to reach consensus.

7. Verify the root causes by collecting appropriate data (sampling) to validate a relationship to the problem.

Version 6.2.1 4-11

Page 92: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

8. Continue this process to identify all validated causes and, ultimately, the root cause.
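Although a cause-and-effect diagram is normally drawn by hand or with a diagramming tool, its underlying structure is just an effect with categorized causes and sub-causes. The Python sketch below uses entirely hypothetical causes for a "late deliveries" effect, simply to show how the big and small branches from steps 3 and 4 can be captured and reviewed.

    # A fishbone structure: effect -> major cause categories -> sub-causes (hypothetical data).
    effect = "Late deliveries"
    causes = {
        "Methods": ["No documented release checklist", "Manual regression testing"],
        "People": ["Key reviewer overloaded"],
        "Machines": ["Build server frequently down"],
        "Measurement": ["Cycle time not tracked"],
    }

    print("Effect:", effect)
    for category, sub_causes in causes.items():
        print("  Major cause:", category)           # the "big branches"
        for sub_cause in sub_causes:
            print("    Sub-cause:", sub_cause)       # the "small branches"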

4.2.1.5 Force Field Analysis

Force field analysis is a visual team tool used to determine and understand the forces that drive and restrain a change. Driving forces promote the change from the existing state to the desired goal. Opposing forces prevent or fight the change. Understanding the positive and negative barriers helps teams reach consensus faster. Following are sample processes that could benefit from a force field analysis:

• Implementing a quality function
• Implementing quality management in IT
• Developing education and training programs
• Establishing a measurement program/process
• Selecting a new technique or tool
• Implementing anything new
• Establishing meaningful meetings
• Empowering the work force

The steps below show how a team uses force field analysis:

1. Establish a desired situation or goal statement.

2. Brainstorm and list all possible driving forces.

3. Brainstorm and list all possible restraining forces.

4. Determine the relative importance of each force, reaching team consensus.

5. Draw a force field diagram that consists of two columns, driving forces on one side and restraining forces on the other.

6. Select the most significant forces that need to be acted upon, using the nominal group technique to prioritize and reduce the number if there are too many.

7. Proceed to a plan of action on the forces selected in the previous step.

4.2.1.6 Flowchart and Process Map

A flowchart is a diagram displaying the sequential steps of an event, process, or workflow. Flowcharts may be a simple high-level process flow, a detailed task flow, or anywhere in between. They are standard tools for any IT organization. Flowcharts are most useful when applied by a team to obtain knowledge of a process for improvement. The technique is used to develop a common vision of what a process should do or look like. Once the process is documented, inefficiencies and redundancies can be identified and reduced.


A process map is a more detailed flowchart that depicts processes, their relationships, and their owners. The display of relationships and owners helps identify wasted steps, redundant tasks, and events with no trigger activities. Figure 4-3 shows a sample process map.

Figure 4-3 High-Level Process Map for Project Management

Sample processes where flowcharts are useful include:

• Life cycle activities, such as internal or external design, review processes, testing processes, change management, configuration management
• Customer surveys or interviews
• Supplier agreements or contracts
• Service level agreements

A flowchart is constructed using the following steps:

1. Identify the major function or activity being performed or needing to be performed.

2. Identify the tasks to support the function or activity.

3. Determine the steps needed to do the tasks.

4. Sequence the tasks and steps for the function or activity.

5. Connect the tasks and steps for the function or activity.

6. Create a flowchart of the tasks and steps for the function or activity using directional arrows or connections.


7. Reach team consensus on the process depicted in the flowchart.

Flowcharts should reference the following information:

• Process owners
• Suppliers
• Customers
• Key deliverables
• Decision points
• Interfaces
• Tasks and task sequence
• Policies
• Standards
• Procedures
• Tools used

4.2.1.7 Benchmarking

Benchmarking is the process of determining how well a company's products, services, and practices measure up against others. Benchmarking partners can be internal (against other company units), competitive (against competitors within the same industry), or functional (against "best in class" or "best in breed" within any industry). A benchmark is the highest level of performance that fully meets customer requirements.

Benchmarking enables a company to identify the performance gap between itself and the benchmarking partner, and to determine a realistic improvement goal (some set higher goals) based on industry practices. It helps achieve process improvement, measurement, motivation, and a management process for improvement. The use of benchmarking is not normally associated with cost cutting or a quick fix. Benchmarking should be integrated with an organization's process improvement process.

Benchmarking has been used to:

• Evaluate and upgrade the customer requirements process.
• Design a professional career ladder for information technology professionals.
• Identify and install measurements for quality and productivity.

The three types of benchmarking are identified below. The first two types account for about 80% of all benchmarking that is done.

• Process Benchmarking
This benchmark is used to plan for business process improvement, and is documented as part of business plans and quality improvement projects.

• Product Benchmarking


This benchmark is used to help in product planning and development, using product documentation that includes the product performance goals and design practices identified through benchmarking.

• Performance Benchmarking
This benchmark is used to set and validate objectives to measure performance, and to project improvements required to close the benchmark "gap."

Benchmarking is a ten-step process, involving four phases, as described below. Steps 2 through 5 are iterative.

4.2.1.7.1 Planning Phase

Step 1: Identify Benchmarking Subject and Teams

These internal or external candidates come from personal knowledge; interaction with industry groups; studying industry reports; and interviewing consultants, professional groups, etc.

Step 2: Identify and Select Benchmarking Partners

Determine viable candidates for benchmarking; obtain their agreement to participate; and confirm the visit time, agenda, and attendees with the benchmarking partner.

Step 3: Collect Data

Document the organization's process to be benchmarked. Develop questions and meet with the process owners to collect and record the data.

4.2.1.7.2 Analysis Phase

Step 4: Determine Current Competitive Gap

Identify the difference between the attributes of the organization's process, product, or performance and those of the benchmarking partner.

Step 5: Project Future Performance Levels

Based on the competitive gap, make a managerial decision regarding performance goals for the organization.

4.2.1.7.3 Integration Phase

Step 6: Communicate Findings and Gain Acceptance

Describe the benchmarking results to the process owners and involved parties, and communicate the potential future performance levels to gain acceptance to move forward.

Step 7: Establish Functional Improvement Objectives


In conjunction with the process owners, establish specific improvement objectives to be achieved. These are generally short-term objectives not to exceed one year.

4.2.1.7.4 Action Phase

Step 8: Develop an Action Plan

Plan improvement using the organization's process improvement process.

Step 9: Implement Plans and Monitor Progress

Perform the plan, measure progress, and make adjustments as necessary.

Step 10: Re-calibrate and Reset Benchmark Performance Levels

Based on the analysis, set new objectives, benchmark again to find better ways of executing the process, and continue the improvement cycle.

Lessons learned from the benchmarking leaders include the following:

• Focus on a specific objective and process, and facilitate the benchmarking session to keep it on track.

• Prepare objectives, agenda, data, attendees, meeting and benchmarking protocol, and process documentation requirements in advance.

• It is not easy to identify the IT "best of the best" because good performance data is not readily available, and research is required to evaluate opinion versus fact.

• It always takes longer than you think.

4.2.1.8 Matrix

A matrix is a structured, problem-solving technique used to show the relationship between groupings. It is also known as a matrix check sheet or a matrix diagram.

The matrix data analysis is frequently used to identify whether customer needs are being met, are not being met, have changed, or no longer exist. For IT organizational teams, this approach could support the determination of system requirements, such as when teams need to understand and analyze customer preferences to drive out requirements or improvements on a product or service. This tool helps view a problem as a whole for clarity, especially in conjunction with the JAD (joint application development) process.

For multi-dimensional problems, this approach focuses on the essential factors in problem areas. It helps teams sort language information for discussion and consensus, focus on what is important, and clarify the strength of the relationships. The matrix data analysis allows a team to test, evaluate, and develop strategies of multi-dimensional factors in solving problems.

Matrix diagrams can be used to:

• Research or survey customer preferences
• Compare skill levels versus experience on the job


• Evaluate tools available versus usage
• Correlate defect rates, cycle times, effort, and skills
• Understand tasks in a process versus goals and resources

The two common types of matrices are the L-type matrix and the T-type matrix. An L-type matrix compares two sets of items to each other or compares a single set to itself, such as two characteristics of a process or product. A T-type matrix is used to compare two sets of items to a common third set, such as observed attributes between causes and results.

Table 4-2 is an L-type matrix, showing attributes of an improvement objective. The relationship between the attributes and objectives helps clarify how to prioritize the objectives.

Table 4-2 L-type Matrix

Improvement Objective       Contribution (1-5)   Readiness (1-5)   Capability (1-5)   Cost/Benefit (1-5)   Score
Implement Unit Testing              5                   5                 1                  3              14
Define Coding Standards             3                   1                 4                  2              10
Implement Design Reviews            5                   5                 5                  3              18
Build Usability Lab                 2                   1                 1                  4               8

To produce an L-type matrix, use the following steps:

1. Determine the (usually) two lists of items to be compared.

2. Create a matrix (tabular) structure with enough rows and columns to accommodate the two lists. Put one list across the top of the matrix and one down the left side.

3. Determine how the comparison will be symbolized. Typically this is shown with numbers or with relationship symbols such as shaded circles, circles, and triangles (indicating strong, probable, none).

4. Complete the matrix by putting the relevant symbols in the corresponding boxes.

5. Total the scores if applicable.

With the exception of the format, a T-type matrix is generated the same way as an L-type matrix. For the T-type matrix, the common set of items is listed in a row across the middle of the matrix. Listed along the top half of the left side is one set of items (such as causes) and listed along the bottom half of the left side is the other set of items (such as results). The resulting matrix is in the shape of a "T".
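When numbers are used as the comparison symbols, totaling an L-type matrix is straightforward. The Python sketch below simply reproduces the hypothetical scores from Table 4-2 to show how the attribute ratings combine into a score for each improvement objective.

    # L-type matrix: improvement objectives rated against four attributes (values from Table 4-2).
    attributes = ["Contribution", "Readiness", "Capability", "Cost/Benefit"]
    matrix = {
        "Implement Unit Testing":   [5, 5, 1, 3],
        "Define Coding Standards":  [3, 1, 4, 2],
        "Implement Design Reviews": [5, 5, 5, 3],
        "Build Usability Lab":      [2, 1, 1, 4],
    }

    # Total each row; the highest score suggests the objective to prioritize.
    scores = {objective: sum(ratings) for objective, ratings in matrix.items()}
    for objective, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{objective}: {score}")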

4.2.2 Statistical Tools

The tools covered in this section are used to collect, view, and analyze numbers. They are:



• Check Sheet
• Histogram
• Pareto Chart
• Run Chart
• Control Chart
• Scatter Plot

4.2.2.1 Check Sheet

A check sheet (also called a checklist or tally sheet) of events or occurrences is a form used to gather and record data in an organized manner. This tool records the number of occurrences over a specified interval of time to determine the frequency of an event. The data is recorded to support or objectively validate the significance of the event. It may follow a Pareto analysis or cause-and-effect diagram to validate or verify a problem, or it may be used to build Pareto charts or histograms. Figure 4-4 shows a sample check sheet.

Figure 4-4 Check Sheet
(Layout: Daily System Failures for the week of dd/mm/yy, with columns for Day 1 through Day 5 and a Total.)

Check sheets can be used to record the following types of information:

• Project review results, such as defect occurrences, location, or type
• Documentation defects by type or frequency
• Cycle times, such as requirements to design or design to implementation
• Conformance to standards
• End user complaints of all types
• End user surveys
• Late deliveries

To use a check sheet:

1. Clarify what must be collected objectively.

2. Establish a format for the data collection that is easily understood by the collector.

3. Ensure those involved understand the objectives so the collection process is accurate.

4. Establish the sample size and time frame of data collection.

5. Instruct or train data collectors for consistency.

6. Observe, record, and collect data.



7. Tally the results.

8. Depending on the purpose, build a Pareto chart or histogram, or evaluate the results to determine whether the original analysis is supported.

Advantages of check sheets are that they pre-define areas to discuss, limit the scope, and provide a consistent, organized, and documented approach. Disadvantages might include limited applicability and the tendency to restrict other questions.

Questions on check sheets should be organized by topic and tested prior to use. A response of "I don't know" should be allowed for, and bias should be avoided. The person using the check sheet should understand the reason for the questions and be able to anticipate a response.
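The tallying described in steps 6 through 8 is normally done on paper or in a spreadsheet, but it can also be sketched in code. The example below assumes a hypothetical log of daily system failures, in the spirit of Figure 4-4.

    from collections import Counter

    # Hypothetical occurrence log: each entry is the day on which a system failure was observed.
    observations = ["Day 1", "Day 1", "Day 2", "Day 4", "Day 4", "Day 4", "Day 5"]

    # Tally occurrences per day (step 7), then report the weekly total.
    tally = Counter(observations)
    for day in ["Day 1", "Day 2", "Day 3", "Day 4", "Day 5"]:
        print(f"{day}: {tally.get(day, 0)}")
    print("Total:", sum(tally.values()))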

4.2.2.2 Histogram

A histogram (or frequency distribution chart) is a bar graph that groups data by predetermined intervals to show the frequency of the data set. It provides a way to measure and analyze data collected about a process or problem, and may provide a basis for what to work on first. Histograms are also useful for displaying information such as defects by type or source, delivery rates or times, experience or skill levels, cycle times, or end user survey responses. Figure 4-5 shows a simple histogram.

Figure 4-5 Histogram

Interval   Tabulation   Frequency   Cumulative Frequency
0-3        ||           2            2
3-6        ||||||       6            8
6-9        |||          3           11

A histogram requires some understanding of the data set being measured to consolidate or condense it into a meaningful display. To create a histogram, perform the following steps:

1. Gather data and organize it from lowest to highest values.



2. Calculate the range, which is the largest value – smallest value.

3. Based on the number of observations, determine the number of cells, which is normally between 7 and 13.

4. Calculate the interval or width of the cells, which is the range divided by the number of cells.

5. Sort the data or observations into their respective cells.

6. Count the data points of each cell (frequency) to determine the height of the interval, and create a frequency table.

7. Plot the results.

The distribution is normally a bell-shaped curve. Other shapes such as double peak, isolated island, cliff, cogwheel, and skewed can provide insight on the variability of the process.

One variation on the histogram is to create a graph by drawing a line from the midpoints of the bars. Then add the range of acceptable values (e.g., within plus or minus 5 of budget) to show whether the actual values lie within the acceptable range.
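Steps 2 through 6 amount to choosing cells and counting frequencies. The following Python sketch applies those steps to a hypothetical list of cycle-time observations; the choice of seven cells is arbitrary within the 7-to-13 guideline, and the tally marks stand in for plotted bars.

    # Hypothetical observations (e.g., cycle times in days).
    data = [2, 3, 3, 4, 5, 5, 5, 6, 6, 7, 7, 8, 9, 10, 11, 11, 12, 14, 15, 18]

    data.sort()                                  # step 1: organize lowest to highest
    value_range = data[-1] - data[0]             # step 2: range = largest - smallest
    cells = 7                                    # step 3: normally between 7 and 13
    width = value_range / cells                  # step 4: interval (width) of each cell

    # Steps 5 and 6: sort observations into cells and count the frequency of each.
    frequencies = [0] * cells
    for x in data:
        index = min(int((x - data[0]) / width), cells - 1)
        frequencies[index] += 1

    # Step 7 (approximated as text): one tally mark per observation in each cell.
    for i, count in enumerate(frequencies):
        lower = data[0] + i * width
        print(f"{lower:.1f} - {lower + width:.1f}: {'|' * count} ({count})")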

4.2.2.3 Pareto Chart

The Pareto chart is a special type of histogram, used to view causes of a problem in order of severity from largest to smallest. It is a simple statistical tool that graphically shows the 20-80 rule, where 20% of the sources cause 80% of the problems. Joseph Juran refers to this Pareto principle as the separation of the "vital few" from the "trivial many".

A Pareto chart is typically used early in the continuous improvement process when there is a need to order or rank problems or causes by frequency. The vital few problems and their respective root causes can then be focused on. This technique provides the ability to:

• Categorize items, usually by content (type of defect, position, process, time, etc.) or cause (materials, operating methods, manpower, measurement, etc.) factors
• Identify the causes or characteristics that contribute most to a problem
• Decide which basic causes of a problem to work on first
• Understand the effectiveness of the improvement by doing pre- and post-improvement charts

Sample problems for Pareto analysis include:

• Problem-solving for the vital few causes or characteristics
• Defect analysis
• Cycle or delivery time reductions
• Failures found in production
• Employee satisfaction/dissatisfaction

The process for using Pareto charts is described in the following steps:


1. Use brainstorming, affinity diagrams, or cause-and-effect diagrams to define the problem clearly.

2. Collect a sufficient sample size (at least 30 occurrences) of data over the specified time, or use historical data, if available.

3. Sort the data in descending order by occurrence or frequency of causes or characteristics.

4. Construct the Pareto chart and draw bars to correspond to the sorted data in descending order, where the "x" axis is the problem category and the "y" axis is frequency.

5. Determine the vital few causes to focus improvement efforts.

6. Compare and select major causes, repeating the process until the problem's root causes are reached sufficiently to resolve the problem.
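Steps 3 through 5 can be illustrated with a short Python sketch. The defect categories and counts below are hypothetical; the cumulative percentage shows where the "vital few" lie.

    # Hypothetical defect counts by category, e.g., gathered with a check sheet.
    defects = {
        "Missing requirement": 42,
        "Coding error": 27,
        "Interface mismatch": 11,
        "Documentation error": 6,
        "Configuration error": 4,
    }

    total = sum(defects.values())
    cumulative = 0
    # Step 3: sort in descending order of frequency; steps 4-5: identify the vital few.
    for category, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += count
        marker = " <- vital few" if cumulative / total <= 0.8 else ""
        print(f"{category}: {count} (cumulative {cumulative / total:.0%}){marker}")

With this hypothetical data, the first two categories account for roughly three-quarters of all defects, which is where improvement effort would be focused first.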

4.2.2.4 Run Chart

A run chart, as shown in Figure 4-6, is a graph of data in chronological order that displays changes and trends in the central tendency (average). The data represents measures, counts, or percentages of outputs (products or services) from a process. Run charts are often used to monitor and quantify process outputs before a control chart is developed, and can be used as input for establishing control charts.

Figure 4-6 Run Chart

Run charts can track events such as:

• Total failures
• Complaint levels
• End user satisfaction levels
• Suggestion levels
• Training efforts
• Production yields


• Number of invoices
• Number of system errors
• Down time (minutes, percent)

The steps for developing a run chart are as follows:

1. Decide which output of a process to measure.

2. Label the chart both vertically (quantity) and horizontally (time).

3. Plot the individual measurements over time (once per time interval or as they become available), tracking the data chronologically in time.

4. Connect data points for easy use and interpretation.

5. Monitor the data points for any obvious trend.
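A spreadsheet or charting tool is normally used for the plot itself, but the monitoring in step 5 can be sketched in a few lines. The weekly failure counts below are hypothetical; the sketch simply compares each chronological point to the central tendency.

    # Hypothetical weekly failure counts, in chronological order.
    weekly_failures = [7, 5, 6, 8, 6, 9, 10, 11]

    average = sum(weekly_failures) / len(weekly_failures)
    print(f"Average (central tendency): {average:.1f}")

    # A crude text run chart: one mark per failure, flagging points above the average.
    for week, count in enumerate(weekly_failures, start=1):
        flag = " above average" if count > average else ""
        print(f"Week {week}: {'*' * count}{flag}")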

4.2.2.5 Control Chart

Control charts provide a way of objectively defining a process and its variation. They establish measures on a process, improve process analysis, and allow process improvements to be based on facts.

Note: Variation is briefly described here to put control charts in perspective. Skill Category 8 provides additional details on the topic of variation.

The intent of a control chart is to determine if a process is statistically stable and then to monitor the variation of a stable process where activities are repetitive. There are two types of variation:

• Common or random causes of variation
These are inherent in every system over time, and are a part of the natural operation of the system. Resolving common cause problems requires a process change.

• Special causes of variation
These are not part of the system all the time. They result from some special circumstance and require changes outside the process for resolution.

Common causes of variation are typically due to many small random sources of variation. The sum of these sources of variation determines the magnitude of the process's inherent variation due to common causes. From the sum, the process control limits and current process capability can be determined. Accepted practice uses a width of three standard deviations around the population mean (µ ± 3σ) to establish the control limits. A process containing only common causes of variation is considered stable, which implies that the variation is predictable within the statistically established control limits.

Processes containing special as well as common causes of variation are referred to as unstable processes. Figure 4-7 and Figure 4-8 show control charts for stable and unstable processes.

Note: The special cause falls outside of the control limits of the unstable process.


Figure 4-7 Control Chart of a Stable (In Control) Process

Figure 4-8 Control Chart of an Unstable (Out of Control) Process

Control charts are suitable for tracking items such as:

• Production failures
• Defects by life cycle phase
• Complaints/failures by application/software
• Response time to change requests
• Cycle times/delivery times
• Mean time to failure

When there is reason to believe a process is no longer stable, it is typically evaluated first by brainstorming, Pareto analysis, and cause-and-effect diagrams.

The use of control charts follows these steps:


1. From the initial evaluation, identify the characteristics of the process to monitor, such as defects, cycle times, failures, cost, or maintenance.

2. Select the appropriate type of control chart based on the characteristic to monitor. Data that is variable (measured and plotted on a continuous scale such as time, cost, figures, etc.) may require different charts.

3. Determine the methods for sampling, such as how many or over what time frame.

4. Collect sample data. Check sheets can be used to gather data.

5. Analyze and calculate the sample statistics: average, standard deviation, upper control limit, and lower control limit.

6. Construct the control chart based on the statistics.

7. Monitor the process for common and special causes of variation. The process is in control when observations fall within the control limits.

Five rules are used to determine the existence of special causes. If observed, the process needs to be evaluated and analyzed for causes related to the situation. The five rules constituting a special cause are:

• Any point outside the upper or lower control limit.

• Any run of eight or more data points above or below the average value (centerline), indicating the average has changed.

• Six or more consecutive data points, which are increasing (trend up) or decreasing (trend down).

• Two out of three consecutive points in the outer one-third control limit.

• Fifteen consecutive points between the centerline and inner one-third of the chart.
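The calculation in step 5 and the first special-cause rule can be illustrated with a minimal Python sketch. The baseline values and new observations below are hypothetical, and a production control chart would use the chart type and control-limit constants appropriate to the data being monitored; this only sketches the 3-sigma idea.

    import statistics

    # Hypothetical baseline measurements from a period when the process was believed stable.
    baseline = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 12.1, 11.7, 12.2]
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    ucl = mean + 3 * sigma    # upper control limit (centerline + 3 sigma)
    lcl = mean - 3 * sigma    # lower control limit (centerline - 3 sigma)
    print(f"centerline={mean:.2f} UCL={ucl:.2f} LCL={lcl:.2f}")

    # New observations are then monitored against the limits (step 7).
    new_observations = [12.0, 12.3, 13.4, 11.9]

    # Rule 1: any point outside the upper or lower control limit signals a special cause.
    for i, x in enumerate(new_observations, start=1):
        if x > ucl or x < lcl:
            print(f"Observation {i} ({x}) falls outside the control limits")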

4.2.2.6 Scatter Plot

A scatter plot is used for problem solving and understanding cause-and-effect relationships. It shows whether a relationship exists between two variables, by testing how one variable influences the response (the other variable). Scatter plots are also called scatter diagrams or correlation diagrams.


Scatter plots may be used to look for relationships, such as:

• Defect Level versus Complexity
• Defects versus Skill Levels (Training)
• Failures versus Time
• Cost versus Time
• Change Response versus People Availability
• Defect Cost versus Where Found (Life Cycle)
• Preventive Cost versus Failure Cost

The steps for creating scatter plots are:

1. Select the variable and response relationship to be examined.

2. Gather data on the variable and response; determine the sample size of the paired data.

3. Plot the results; determine the appropriate scale to plot the relationship.

4. Circle repeated data points as many times as they occur.

5. The pattern of the plots will suggest correlation: positive, negative, random, linear, curvilinear, cluster, etc.

Figure 4-9 shows a few typical patterns. Be careful when interpreting results – a frequent error in interpreting results is to assume that no relationship exists between a variable and a response because a relationship isn't immediately apparent. It may be necessary to take additional samples, or use specialized axes such as logarithmic scales.

Figure 4-9 Types of Scatter Plots
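As a numerical complement to the visual patterns in Figure 4-9, the sketch below computes the Pearson correlation coefficient for paired data; Pearson correlation is a common general-purpose measure rather than a technique mandated by this guide, and the complexity and defect figures are hypothetical:

```python
import math

# Hypothetical paired samples: module complexity (variable) vs. defects found (response)
complexity = [5, 8, 12, 15, 20, 24, 30, 35]
defects    = [1, 2,  2,  4,  5,  6,  8,  9]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(complexity, defects)
print(f"Pearson r = {r:.2f}")   # values near +1 or -1 suggest a strong linear relationship
```

A value of r near zero does not by itself prove that no relationship exists; as noted above, the relationship may be nonlinear or masked by too small a sample.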

4.2.3 Presentation Tools
Presentations are an integral part of the change process. The involved parties, sometimes called stakeholders, need to be convinced that a proposed change is beneficial, or want to see reports during and after implementation. Stakeholders include management, the individuals that will use the changed process, and the individuals impacted by the changed process.


The four tools below are the most common methods for graphical presentation:

• Table
• Line Chart
• Bar Chart
• Pie Chart

4.2.3.1 Table
Quality reports often use tables as worksheets presented to management. The information is presented in row and column format. Spreadsheet software can prepare these types of graphical presentations.

Table 4-3 shows a sample table, which depicts the dollars spent on maintenance for three different projects of about equal complexity over a four-month period.

Table 4-3 Sample Table

Month      Project 1   Project 2   Project 3
January    $1,000      $2,000      $2,000
February   $2,000      $1,000      $3,000
March      $1,000      $2,000      $1,000
April      $2,000      $3,000      $3,000
Total      $6,000      $8,000      $9,000

4.2.3.2 Line Chart
A line chart is used to show the direction of events. For example, Figure 4-10 shows how maintenance costs for Project 1 fluctuate over time. Line charts can also be used to compare:

Figure 4-10 Line Chart

• Like Units – There could be a line for each of the three projects.



• Related or Fixed Variables - The total or average maintenance could be shown as another line on the chart.

• Like Periods - Maintenance for Project 1 for the first four months of this year could be compared to the same time period last year.

4.2.3.3 Bar Chart
A bar chart is normally a two-dimensional chart using bars to represent values or items. In Figure 4-11, Project 3 maintenance costs are illustrated. Note that the same type of information can be presented in a tabular format, a line chart, or a bar chart.

Figure 4-11 Bar Chart


A bar chart is particularly advantageous when:

• A large graphic is desired
• Specific time frame periods are to be emphasized
• Two or more items are to be included, representing different values or items, so that the bars permit easy distinction between the items

4.2.3.4 Pie Chart
A pie chart graphically presents the components of a total population. The pie chart illustrated in Figure 4-12 uses percentages to show how four categories of information are spread among the population. The size of each pie segment should reflect the portion of the total represented by that piece. It is not necessary to be highly precise if the numbers are placed within the pie. A short plotting sketch at the end of this section illustrates these chart types.

Figure 4-12 Pie Chart

Pie charts can be used to view:

• Segments visually differentiated from one another
• Segments showing the percent of the whole (uses 100% of the total pie)
• Dollar volumes, where each pie piece indicates how many dollars of the total dollars are included
• Items, where each piece shows the number of items, such as claims processed by the processing district
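As a rough illustration of how the Table 4-3 data could be rendered as the chart types described above, the following Python sketch assumes the matplotlib plotting library is available; any spreadsheet or charting tool would serve equally well:

```python
import matplotlib.pyplot as plt

months = ["January", "February", "March", "April"]
maintenance = {                       # dollars from Table 4-3
    "Project 1": [1000, 2000, 1000, 2000],
    "Project 2": [2000, 1000, 2000, 3000],
    "Project 3": [2000, 3000, 1000, 3000],
}

fig, (line_ax, bar_ax, pie_ax) = plt.subplots(1, 3, figsize=(12, 4))

# Line chart: one line per project (like units, as in Figure 4-10)
for project, costs in maintenance.items():
    line_ax.plot(months, costs, marker="o", label=project)
line_ax.set_title("Maintenance cost by month")
line_ax.legend()

# Bar chart: Project 3 costs by month (as in Figure 4-11)
bar_ax.bar(months, maintenance["Project 3"])
bar_ax.set_title("Project 3 maintenance cost")

# Pie chart: each project's share of total maintenance (as in Figure 4-12)
totals = [sum(costs) for costs in maintenance.values()]
pie_ax.pie(totals, labels=list(maintenance), autopct="%1.0f%%")
pie_ax.set_title("Share of total maintenance")

plt.tight_layout()
plt.show()
```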

4.3 Process Deployment
One of the most difficult tasks facing any IT function is changing the way that function operates. In most organizations, stability is the norm and change is abnormal. That cycle needs to be reversed if quality and productivity are to be constantly improved.


People resist change for the following reasons:

• It is significantly more difficult to implement change than to develop the approach that will cause the change.

• People do not like change imposed on them. If they are not actively involved in making the change, there is a natural resistance because the change is not their idea. (This is closely associated with the “not-invented-here” syndrome.)

• Workers know they can be reasonably successful using the current process, but not whether they can be successful using the changed process. Change brings risk, and there is a higher probability that they will make errors using it.

• When people spend time and effort to learn a new process, and then make routine mistakes while learning, it discourages them from wanting to try the changed process.

• The person(s) who developed the current process may resent having that process changed. This resentment sometimes leads to overt action to stop the change.

• Management may be more committed to meeting schedules and budgets than to implementing change.

4.3.1 The Deployment Process
Initiating change is only effective when change is implemented through a process. This change process is called deployment. Deployment is the process of implementing a new or improved approach to completing a work task.

Deployment is normally only effective in an environment that uses processes. If there are no processes, there is no way of effectively implementing a change. In a software engineering environment, compliance to process is enforced; thus deployment is a critical component of making the software engineering environment work.

Dr. Curt Reimann, past director of the U.S. National Quality Award Program, stated that less than one percent of U.S. corporations have an effective quality program. Most quality experts believe the cause of ineffective quality programs is attributable to poorly designed or nonexistent deployment efforts, coupled with the lack of measurement processes to assess the results of quality programs. Starting a quality management program forces management to rethink its quality responsibilities.

There are three deployment phases - assessment, strategic, and tactical. The assessment and strategic deployment phases represent the Planning component of the PDCA cycle. The tactical deployment phase represents the Do, Check and Act components of the PDCA cycle.

4.3.1.1 Deployment Phase 1: Assessment
The first step in the deployment process is to establish the current level of performance for both the environment (via general assessments) and the goal to be accomplished (via specific assessments). This phase answers the question “Where am I?”


4.3.1.2 Deployment Phase 2: Strategic
The deployment strategy establishes the framework in which deployment will occur. Without strategic planning, deployment rarely works. This phase results in a goal, which is normally a step towards accomplishing the vision, and a definition of the process or approach to accomplish that goal. The questions “Where do I want to be (goal or vision)?” and “How am I going to get there (process)?” are answered in this phase.

4.3.1.3 Deployment Phase 3: Tactical
As previously stated, effectively performing the strategic deployment activities helps ensure the success of the deployment tactics. It takes three to ten times more resources to deploy an approach than to develop it. If the deployment resources are inadequate, there is a high probability that the approach will fail, or that its use will be minimal. For example, if a capture/playback testing tool is purchased for $25,000, installation costs should be between $75,000 and $250,000. If those funds are not expended, that tool will normally fall into disuse.

The tactical phase answers three questions:

• When the process is initially implemented, compliance is attempted, answering the implementation question “How do I get people to follow the process?”

• Measurement is performed to answer the question “Does the process work (is it under control) and is the organization moving in the right direction to support the vision or goal?”

• Based on the measurement, the question “Does the process need improvement, and if so, how?” is answered.

4.3.2 Critical Success Factors for Deployment
Deployment is much harder than defining an approach. Developing an approach is an intellectual exercise; deployment is a people-intensive process. There are five intangible attributes called critical success factors that help make deployment work.

These five critical success factors for an effective deployment process are:

1. Deployment is a series of integrated tasks, which together enable approaches to be effectively implemented.

• These integrated tasks are a deployment process that should be customized as needed for each organization and each approach.

2. Deployment champion(s) is in place.

• Someone must take the lead for making the identified approach happen. While the champion can be a highly respected staff member, it is always advantageous for the champion to be a senior manager.

3. Deployment is a team effort.


• A single individual can develop an effective approach, but can rarely deploy that approach. A team of people including instructors, technicians, colleagues and management must implement the deployment process.

4. There is buy-in by the affected parties.

• Tasks that transfer ownership of the approach to the users of the approach involve a buy-in. In this activity an individual accepts the approach as the way business will be done. The individual does not have to like the approach, but does have to wholeheartedly support its use in performing work tasks effectively.

5. Deployment responsibilities are effectively passed between individuals and between teams.

• Deployment is a continuous process that begins prior to developing the approach, and goes on until the approach is discontinued. During that time, the level of enthusiasm for the approach will vary. People involved in ensuring that the approach is followed (i.e., deployed) will likely change over time. It is essential that new people involved in the work tasks have the same enthusiasm and desire that existed in the initial deployment effort.

4.4 Internal Auditing and Quality Assurance
Both internal auditing and QA are professions. It is generally recognized that a profession has the following criteria:

• Code of ethics
• Common body of knowledge
• Statement of responsibilities
• Certification program (including continuing education)

The differences between the auditing and QA professions are in the common body of knowledge and the statement of responsibilities.

4.4.1 Types of Internal Audits
Internal auditing is a management control directed at measuring and evaluating an activity to determine if it is performed in accordance with the policies and procedures of an organization (i.e., meets the intent of management). It is an independent appraisal activity. The specific types of auditing are:

• Financial Auditing
Financial auditing is performed in accordance with generally accepted accounting principles and other applicable laws and regulations to determine that the accounting records are reasonable.


• Operational Auditing
Operational auditing is performed to determine that operations are performed in an efficient, effective and economical manner.

• Program Auditing
Program auditing is performed to determine that the objectives of specific business activities are being properly fulfilled.

There are three important characteristics in the performance of an internal audit:

1. The work of the internal auditor needs to be detached from the regular day-to-day operations of the company. A good practical test is that if the internal auditing activity were temporarily discontinued, the regular company operations would go on in a normal manner for the time being.

2. Internal auditing cannot get involved in developing procedures or standards, nor usurp the roles and responsibilities of other employees.

3. The internal auditor is to evaluate the interaction of all company groups with regard to meeting objectives.

4.4.2 Differences in Responsibilities
The main role of auditing is to identify and report problems, while the role of QA is to find and implement solutions for those problems. QA should be a leadership position, emphasizing the strong interpersonal activities involved in making improvement occur. While QA performs many appraisals, it strives to be independent of the activities being appraised. Auditing, by nature, has a negative role; QA, by practice, should have a positive role. Confusion between the two roles frequently leads to a negative image of QA.

Some of the skills and activities of an internal auditor are not applicable to QA analysts:

• Internal auditors must be knowledgeable of the Standards for the Professional Practice of Internal Auditing and are required to comply with those standards in the performance of their work.

• Internal auditors review the means of safeguarding assets and verify the existence of assets.

• Internal auditors verify compliance to corporate policies, plans, procedures, and applicable laws and regulations.

• Internal auditors normally coordinate their activities and work in conjunction with the organization’s firm of external auditors.

• Internal auditors have direct lines of communication to senior corporate officers and frequently to the organization’s board of directors.

Some key activities performed by QA analysts that are not normally performed by internal auditors are:


• Developing policies, procedures, and standards
• Acquiring and implementing tools and methodologies
• Marketing or creating awareness of quality programs and concepts
• Measuring quality
• Defining, recording, summarizing, and presenting analyses
• Performing process analysis (i.e., statistical process control)

See Skill Category 3 for a discussion of quality audits.


Skill Category 5
Quality Planning

Executive management establishes the vision and strategic goals. Planning is the process that describes how those strategic goals will be accomplished. Quality planning should be integrated into the IT plan so that they become a single plan. In simplistic terms, the IT plan represents the producer, and the quality plan represents the customer.

5.1 Planning Concepts
Planning is the totality of activities that determine, for an individual or organization, what will be done and how it will be done. Quality planning is a component of overall business planning. Quality planning focuses on the policies, processes and procedures which assure that the defined requirements are implemented, and the implemented requirements meet the customer’s needs. The following two concepts epitomize the importance of planning.

• If you do not know where you are going, all roads lead there. This means that without a plan, any action is acceptable.

• If you fail to plan – plan to fail. This means that without a good plan which defines the expectations of work, activities may be performed which provide no benefit and lead to customer dissatisfaction.


Two important components of quality planning are the management cycle and the planning cycle. The management cycle, frequently referred to as the Plan-Do-Check-Act Cycle, is repeated here to emphasize the importance of planning as a management activity.

5.1.1 The Planning Cycle
Planning is a management responsibility. The responsibility commences when management establishes a vision for the IT organization, and works through the development of a tactical plan which defines the detailed work activities to be performed.

The planning cycle is a decomposition of the IT vision into work activities which will help accomplish that vision. Table 5-1 shows that decomposition. It also shows the do, check, and act activities that follow when planning is completed.

The planning cycle must be integrated with the do, check, and act activities because planning is a continuous activity. While the PDCA cycle implies that you plan, then do, then check, and then act, that concept is misleading. While the plan should be complete before work activities commence, business requirements may change, and problems or opportunities may be encountered. These events which affect work activities should be incorporated into a new version of the plan.

These changes to the work activities can have any or all of the following impacts on the plan:

• Change the schedule
• Change the budget
• Change the number of resources allocated
• Change how one implemented component of software will affect other components of the software
• Change in work priorities
• Addition or deletion of work activities to accommodate the needed changed work activities


Table 5-1 Planning Cycle Example to Show Decomposition from Vision to Rework

• Establish IT Vision (Plan) – IT deliverables and service exceed customer satisfaction.
• Define Mission (Plan) – We will work with our customer to assure satisfaction.
• Set Goals (Plan) – On a scale of five to one – very satisfied, satisfied, neither satisfied nor dissatisfied, dissatisfied, very dissatisfied – our goal is 90% of our customers very satisfied or satisfied.
• Strategic Planning (Plan) – Involve users in the software development process.
• Tactical Planning (Plan) – Conduct reviews at the end of each development phase with users as part of the review team.
• Execution (Do) – For project “x” conduct a requirements phase review on November 18, 20xx.
• Monitoring (Check) – Did the requirements phase produce testable requirements?
• Rework (Act) – Make non-testable requirements testable.

The planning cycle is illustrated with a customer satisfaction example showing the decomposition from the first listed planning activity to the last planning activity.

• Establish IT Vision
A vision is broad in nature, probably unachievable, but it is the ultimate goal.

• Define Mission
The responsibility of an organization unit related to achieving the vision.

• Set Goals
A target established by management to be achieved by the staff of the implementing organization.

• Strategic Planning
A description of what must be done in order to meet the goals set by management.

• Tactical Planning
The detailed “how-to” work activities that need to be undertaken to accomplish the strategic planning objectives.

• Execution
The execution of the tactical plan as written.

• Monitoring
An ongoing assessment to assure that the plan is followed, and that problems encountered in following the plan are appropriately addressed.

• Rework
Actions approved by management based on problems uncovered through the monitoring process.



5.2 Integrating Business and Quality Planning
Quality planning should focus on two major activities: process management and quality control. Process management is discussed in Skill Category 6 and quality control is described in Skill Category 7. While the quality professional will perform other activities, most quality activities that require planning are related to these two. Business planning should focus on accomplishing business objectives.

Let’s look at these two planning processes. The IT organization develops a business plan. The purpose of the business plan is to define the work and work activities that will be conducted during the planning period. These work activities are designed to accomplish the business objectives. The quality professionals will develop a quality plan focusing on quality activities that will help assure the outputs from the business plan meet the defined output specifications and meet the needs of the users of those deliverables.

5.2.1 The Fallacy of Having Two Separate Planning Processes
Many IT organizations develop both a business plan and a quality plan. However, they do not integrate these plans and, by not integrating the plans, undesirable results may occur. For example, the quality plan may call for system development reviews to occur prior to the end of a software development phase. However, if the business plan does not allot time and resources to participate in that development review, the review may never occur. In many organizations, the quality professionals who organize the reviews are not informed when a software development phase is concluding.

5.2.2 Planning Should be a Single IT Activity
Both the business staff and the quality staff should be involved in IT planning. Involvement is in both strategic and tactical planning.

The objective of this single planning cycle is to ensure that adequate resources and time are available to perform the quality activities. The net result is that the individuals executing the business plan cannot differentiate the quality planning from the business planning. For example, if business planning calls for a quality review prior to the end of each phase of software development, the business staff will assume that it is a logical part of the software development process. If it is not integrated, the review is owned by the quality professional and may not be adequately supported by the IT business staff, such as system designers and programmers.

Figure 5-1 illustrates the integration of quality planning into IT business planning.


Figure 5-1 Integrating Quality Planning into IT Business Planning

The figure gives an example of business planning and an example of quality planning. While the blocks show that strategic and tactical business planning and strategic and tactical quality planning occur, the end result is a business plan that incorporates quality activities.

5.3 Prerequisites to Quality Planning
Quality planning is a process. Quality planning should be a defined process indicating who is involved in planning and the specific work procedures and deliverables included within the planning process. Individual IT staff members should not create their own planning process.

Before effective quality planning can occur, these prerequisites should be met:

• IT vision, mission and goals documented
Those planning need to know specifically what the plan is to accomplish. The planning process begins by knowing the desired output from the plan. If those performing planning understand the IT vision, the IT mission and the specific goals related to that vision and mission, they can develop a plan that hopefully will meet those goals.

• Defined planning process


The IT organization needs a planning policy, planning standards, and planning procedures. To better understand the attributes of an effective process, refer to Skill Category 6.

• Management support for planning
Effective planning will only occur when management requires the plans be developed using the IT organization’s planning process. Management support means that the plan must be completed and approved before resources will be allocated to accomplish the work defined by the plan.

• Planners competent in the planning process
Those who will plan need to be competent in using the planning process. The competency can be achieved by training, or by working under a mentor to learn effective planning. Normally both should occur.

• Compliance to the plan
If a planning process exists, management should require compliance to that process.

• Maintenance of the planning process
Planning processes should continually be improved. If the process is not working or is not effective, the process should be changed.

• Reliable information required
The plan will be no better than the information used to create the plan. If those performing quality planning cannot rely on the information provided to them, they should take whatever steps are necessary to assure themselves that they are working with valid and reliable information.

5.4 The Planning Process
The planning process is the same for business planning and quality planning. There are literally hundreds of different books on planning. Discussed below are the planning activities that are most frequently identified as important to effective planning, as well as these three areas:

• Planning process overview
• Basic planning questions
• Common planning activities

5.4.1 Planning Process Overview
While there is no standard off-the-shelf plan for planning that can be universally applied to every situation, the systematic planning process described in this section provides a great wealth of experience, concepts and materials. This process can be adapted to most organizations and thus avoid the necessity to “reinvent the wheel.”

The quality of planning and decision-making cannot consistently rise above the quality of the information on which it is based. But planning will still be poor in spite of good information unless each manager is able to coordinate adequately with his/her associates and consolidate his/her planning to support the IT objectives. This planning process, as illustrated in Figure 5-2, provides an easy, economical way to collect the process data, retain it, retrieve it and distribute it on a controlled basis for decision-making.

Figure 5-2 The Planning Process

This planning process is divided into the following ten planning activities:

• Business or Activity Planning
• Environment Planning
• Capabilities and Opportunities Planning
• Assumptions and Potentials Planning
• Objectives and Goals Planning
• Policies and Procedures Planning
• Strengths, Weaknesses and Tactics Planning
• Priorities and Schedules Planning
• Organization and Delegation Planning
• Budgets and Resources Planning

Like most important things in life, it is impossible to describe the scope and purpose of this approach to planning by enumerating the bits and pieces. The primary purpose of planning is not to produce a rigid plan, but to facilitate intelligent delegation with responsible participation by providing a better method of reaching and revising agreements. Most specifically, it minimizes surprise and optimizes performance in a changing environment. The whole must be greater than the sum of its parts. Yet, many managers abhor planning. The reason heard most frequently is that planning systems cannot be installed because “our business is a dynamic one which is changing daily.” Obviously, there is probably not a single viable business today which is not changing. If one does seemingly fall into such a category, it is stagnating, and a stagnating business soon dies.

The practical, systematic approach to planning was best described by Peter Drucker in “The Practice of Management:”

“There is only one answer: the tasks must be simplified and there is only one tool for this job: to convert into system and method what has been done before by hunch or intuition, to reduce to principles and concepts what has been left to experience and ‘rule of thumb,’ to substitute a logical and cohesive pattern for the chance recognition of elements. Whatever progress the human race has made, whatever ability it has gained to tackle new tasks has been achieved by making things simple through system.”

The following material in this section identifies the contents of the planning activities illustrated in Figure 5-2 above.

5.4.2 The Six Basic Planning Questions
The ten planning activities described in Figure 5-2 were designed to answer six basic planning questions, as listed below. The planning process then documents the answers to these six questions:

• Where are we?
• Where do we want to go?
• How are we going to get there?
• When will it be done?
• Who is responsible for what?
• How much will it cost?


Table 5-2 shows how each question links to one or more of the ten planning activities and the information that is needed to answer the question and perform the activities.

Table 5-2 The Six Basic Quality Planning Questions

1. Where are we? (historic and current information, present time, and facts)
• Business or Activity Planning – Nature of the business (purpose, scope, history); management philosophy; profiles of the business (revenues, profits, products, etc.)
• Environment Planning (external to the company) – Organization and IT work environment; economic, social, political, and industry regulations and laws; identify and analyze input on other organizations
• Capabilities and Opportunities Planning – Capabilities (strengths, weaknesses – internal/controllable); problems (external/partially controllable); opportunities; analysis by key result areas

2. Where do we want to go? (dealing with the future, which cannot be predicted with accuracy)
• Assumptions and Potentials Planning – Temporary future estimates of probable developments beyond our control, e.g., populations, interest rates, market potentials, government regulations and the impact of competitive actions
• Objectives and Goals Planning – Temporary estimates of desirable results achieved by our own efforts; quantified, measurable objectives (5-year and fiscal year month-by-month), for example revenue, products, expenses, profits, and productivity objectives

3. How are we going to get there?
• Policies and Procedures Planning – Current policies/procedures hindering performance; required policies/procedures to improve performance
• Strengths, Weaknesses, and Tactics Planning – Strategy is a course of action selected from among alternatives as the optimum way to obtain major objectives; select tactics that maximize strengths and minimize weaknesses; define tactics

4. When will it be done?
• Priorities and Schedules Planning – Assign the order of accomplishment for programs; identify specific milestones to measure progress on a month-by-month basis

5. Who is responsible for what?
• Organization and Delegation Planning – Specify organizational relationships, organization charts, and responsibility profiles; specify who is responsible for each program of action and identify areas of decision-making and the accompanying authority required to accomplish the programs; plan now for the organization required 2-3 years from now so you have the right person, at the right place, doing the right work, in the right way, at the right time

6. How much will it cost?
• Budget and Resources Planning – The operational budget should place price tags on the tactics; monthly operating budgets by department; capital budgets by month and by year; a list of major resources (dollars, facilities, information)

There are a large number of planning processes that are effective. What is important in understanding an effective planning process is the activities that must be performed and the information needed to develop an effective plan. From a quality profession perspective, any planning process that works would be an acceptable response to any question on the CSQA exam about planning. QAI recognizes that quality professionals should follow the planning process adopted by their organization.



5.4.3 The Common Activities in the Planning Process
The ten planning activities listed in Figure 5-2 are common to most planning processes. Some planning processes have more activities, while others will combine the ten into a fewer number. The description of each of the planning activities that follows can be used for two purposes: first, to understand the planning model presented in this section; second, to perform a self-assessment on your organization’s planning process to identify potential improvement opportunities.

5.4.3.1 Business or Activity Planning
The purpose of this planning activity is to both define the planning objectives, and to relate the plan to other documents that involve individuals. Specifically, this planning activity should address:

• Vision, mission and goals
• Who are the customers/users
• What are the business needs of the customers/users
• Interfacing software systems
• Profile/description of customer/user activities

5.4.3.2 Environment Planning
Planners need to know the environment in which the deliverables will be developed as well as the environment in which the deliverables will be operated. Specifically, this planning activity should address:

• The environment established by the organization and the IT function that impacts the means by which work is performed
• Laws and regulations affecting the products produced and operated
• Other organizations and systems that are interfaced or impacted by the products being developed and operated (e.g., payroll systems automatically sending tax information to governmental agencies)

5.4.3.3 Capabilities and Opportunities Planning
This planning activity needs to identify criteria that are required for the success of the project. The criteria need to be ranked so the proper emphasis can be placed on the most important criteria. The planning activity needs to identify the capabilities and competencies of the individuals who will develop the products. The business opportunities that can be achieved by the project also need to be identified. Specifically, this planning activity should address:

• Critical success factors


• Strengths and weaknesses of the assigned staff
• IT’s ability to meet the project goals (e.g., turnaround time, number of clicks to get information, etc.)

5.4.3.4 Assumptions/Potentials Planning
This planning activity identifies those probable developments that will affect the ability to achieve the project objectives. These developments should be as specific as possible in terms of how much and when. In this activity the planners are trying to describe what will happen in the coming months or years so that implementation can seize any new opportunities that may develop. Specifically, this planning activity should address:

• Assumptions which, if not correct, will impact the success of the project
• Current opportunities received from implementing the project
• How future opportunities will be identified during the implementation and operation time frame of the project

5.4.3.5 Objectives/Goals Planning
Setting the project objectives/goals is an important component of the planning cycle. Goals and objectives should be measurable and achievable. If goals are not measurable it will be difficult to determine whether or not the project is successful. If the goals are not achievable the assigned staff are in a “no win” situation. However, some management philosophies believe in using stretch goals to push people to achieve a higher level of performance. Specifically, this activity should address:

• Project objectives and goals expressed in quantitative terms
• Any qualifications for objectives that can impact the sequence of work or allow alternative strategies to be determined
• Quality and productivity goals

5.4.3.6 Policies/Procedures Planning
This planning activity needs to identify all of the policies, procedures and practices that will impact the implementation and operation of the project. This analysis needs to determine whether that impact can be positive or negative. Specifically, this planning activity should address:

• Documenting the processes to be used in implementing and operating the project (i.e., policies, standards, procedures and practices)
• Changes needed to processes
• Existing processes, or parts of processes, not applicable to this project
• Process variances needed and how those variances will be obtained


5.4.3.7 Strategy/Tactics Planning
The strategy defines what to do, and the tactics how to do it. This planning activity is normally the most time-consuming activity as planners need to explore multiple strategies and tactics. Specifically, this planning activity should address:

• Select the preferred strategy among alternatives
• Select the best tactics among alternatives
• Select tactics that maximize strengths and minimize weaknesses
• Document tactics
• Get buy-in from those involved in the project

5.4.3.8 Priorities/Schedules Planning
This activity develops precise milestones that identify steps or activities that are essential to accomplishing the project, in the sequence in which they must occur. Many organizations use critical path planning or an equivalent for this planning activity (a minimal sketch of the idea follows the list below). If the completion date is predetermined, then the milestones must consider what can be accomplished within the available time span. Specifically, this planning activity should address:

• Required and realistic completion date
• Milestones that need to be met to finish by the scheduled completion date
• Sequence in which activities must be performed to determine whether or not the scheduled date can be met
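The following minimal Python sketch shows the idea behind critical path planning: given task durations and dependencies, it computes the earliest the project can finish. The tasks, durations, and dependencies are hypothetical, and a real schedule would normally come from a project-management tool:

```python
# Hypothetical tasks: (duration in days, list of predecessor tasks)
tasks = {
    "Requirements": (10, []),
    "Design":       (15, ["Requirements"]),
    "Code":         (20, ["Design"]),
    "Test Plan":    (5,  ["Requirements"]),
    "Test":         (10, ["Code", "Test Plan"]),
}

finish = {}   # memoized earliest finish day for each task

def earliest_finish(name):
    """Earliest finish = task duration + latest earliest finish among its predecessors."""
    if name not in finish:
        duration, predecessors = tasks[name]
        finish[name] = duration + max((earliest_finish(p) for p in predecessors), default=0)
    return finish[name]

project_length = max(earliest_finish(t) for t in tasks)
print("Earliest finish per task:", finish)
print("Minimum schedule length (critical path length):", project_length, "days")
```

If the predetermined completion date is shorter than the computed critical path length, the milestones or the scope must be revisited.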

5.4.3.9 Organization/Delegation Planning
The only way to develop people is to give them an opportunity to make decisions. Therefore, a responsibility profile should be developed by each staff member and discussed and revised with management so that understanding and agreement are accomplished – specifically, what decisions (and limits) and related responsibilities are delegated to accomplish the task. This planning activity should address:

• Responsibilities for each employee assigned to the project
• Responsibilities of support staff/individuals
• Agreement by the individual that those responsibilities are adequate and reasonable

5.4.3.10 Budget/Resources Planning
The resources needed for projects include employees, supplies, capital items such as software and hardware, information, support staff, education and training, and other monetary needs. These resources can be incorporated into a budget. Specifically, this planning activity should address:

• Monetary resources needed
• Skills/competencies needed
• Hardware/software needed
• Support needed
• Information needed
• Training needed

5.4.3.11 Planning Activities for Outsourced Work
Most of the planning activities that occur for a project developed in-house must also be performed for outsourced projects. However, since the planners do not have direct control over a lot of the resources in the outsourced organization, many of the planning activities need to be incorporated into contract negotiations. To better understand the changes in planning when work is outsourced, refer to Skill Category 10.

5.5 Planning to Mature IT Work Processes
A major component of most quality plans will be defining, deploying, and improving IT work processes. The quality plan should identify specific processes to be defined, deployed and improved, with specific objectives and goals defined for those work processes. Quality assurance should be involved in identifying needed new processes and where existing processes need to be improved.

A quality plan should include a process model whose achievement would be a quality goal. Many organizations have chosen the SEI Capability Maturity Model Integration (CMMI) as that goal. Others have selected the ISO model. For information on those models, refer to Skill Category 3.

Many industry process models are designed for software development and not an entire IT organization. Quality professionals need a model that represents the entire IT organization because it is the environmental components of the IT organization that drive quality.

Specifically, some of the components missing from many industry process maturity models are management processes, leadership processes, testing processes, training processes, motivation processes and customer/user-focused processes, such as handling customer complaints. The Malcolm Baldrige National Quality Award Model includes these, but does not focus heavily on IT processes.

Please note that no one model is appropriate for, or applicable to, all IT organizations. Therefore, quality professionals need to know the logical steps that IT organizations should follow in maturing their IT work processes, assure adequate competency in this area, understand the industry or equivalent models, and know how to implement the chosen model to mature IT work processes.


5.5.1 How to Plan the Sequence for Implementing Process Maturity
The QAI Manage by Processes Tactical model identifies six process categories. As explained earlier, the six were selected because they are frequently managed by six different individuals or groups. This section explains some of the relationships and strategies that are important to understand in maturing an information services function, and thus how to sequence the implementation of the maturity of the six process categories.

Eleven relationships are presented for use in maturing your IT function to meet your specific improvement objectives and timetable. For each relationship, a strategy is provided on the sequencing of maturity, as well as a discussion on skipping levels and reverting back to lower levels. The relationships are:

• People skills and process definitions
• Do and check procedures
• Individuals' assessment of how they are evaluated to work performed
• What management relies on for success
• Maturity level to cost to do work
• Process maturity to defect rates
• Process maturity and cycle time
• Process maturity and end user satisfaction
• Process maturity and staff job satisfaction
• Process maturity to an organization's willingness to embrace change
• Tools to process maturity

5.5.1.1 Relationship between People Skills and Process Definitions
Professional work processes are a combination of the skill sets of the individual performing the work process plus the procedures and standards within the work process. Since there is a continuum of work processes, from well-defined and routine work processes to highly creative work processes, the mix of written procedures and people skills changes as this continuum moves: defined routine processes are comprised primarily of detailed written procedures to be followed with minimal variation, while at the creative level the procedures are very cryptic and generalized, and they are very dependent upon people skills.

5.5.1.2 Relationship of Do and Check Procedures
Work processes are a combination of Do and Check procedures. The worker is told what to do, and then how to check that what was done was done correctly. For the defined routine work processes, the Do procedures are very detailed and are designed to be followed with minimal variation. This is because organizations know how to do the routine processes. As processes move toward creative, there is less knowledge on how to do the work, but the ability to recognize whether it is done correctly exists. Thus, for creative processes there may be minimal guidance on how to do the work, but well-defined processes to check whether it was done right can be developed. Many of these checking processes involve groups of peers.

5.5.1.3 Relationship of Individuals' Assessment of How They are Evaluated to Work Performed
People do what they believe they are evaluated on. If they are evaluated on meeting schedules, they meet schedules; if they are evaluated on producing defect-free code, they produce defect-free code; and so forth. Because immature processes have poorly defined standards of work, most assessments of individual performance at maturity Level 1 are highly subjective. Thus, at low levels of process maturity, people believe they are subjectively evaluated and focus their attention on organizational politics, while at the highest levels of process maturity their emphasis switches to the results they are paid to produce because those results can be measured against specific standards.

5.5.1.4 Relationship of What Management Relies on for Success
Current literature emphasizes the use of teams, empowered teams, and self-directed teams. On the other hand, organizations at low process maturity levels tend to rely much more on individuals, even if those individuals are members of teams.

5.5.1.5 Relationship of Maturity Level to Cost to Do Work
Maturity level has a significant impact on cost. As the maturity level increases, the cost per unit of work decreases. Many organizations now measure cost in dollars to produce a function point. (Note: They could also use cost to produce a thousand lines of code.) It has been estimated that an increase of one level in process maturity doubles an organization's productivity.

5.5.1.6 Relationship of Process Maturity to Defect Rates
There is a strong correlation between process maturity and defect rates. As the process maturity level increases, the defect rate decreases. Many organizations measure defect rate in defects per thousand function points or defects per thousand lines of code. As processes become better defined, controls become more effective, and as people become more motivated, the defect rate drops significantly.

5.5.1.7 Relationship of Process Maturity and Cycle Time
There is a high correlation between process maturity and cycle time. As processes mature, there is a decrease in the cycle time to build software products. The maturity of processes, people, and controls has associated with it a search for the root cause of problems. It is these problems that lead to rework which, in turn, extends the cycle time. Thus, improvements also focus on facilitating transitions of deliverables from work task to work task, which also significantly reduces cycle time.

5.5.1.8 Relationship of Process Maturity and End User Satisfaction
There is significant research to support the premise that end user satisfaction increases as process maturity increases. End users are dissatisfied for many reasons, which are addressed by process maturity. One is variability, meaning that the service or product they receive one time is significantly different from the product received at a later point in time. End users are also dissatisfied by high costs and long cycle times, which are both reduced by process maturity. In addition, end users are dissatisfied when the information system does not provide significant value in accomplishing their mission. Levels 3 through 5 have a significant impact on the value received from the information area.

5.5.1.9 Relationship of Process Maturity and Staff Job Satisfaction
Staff job satisfaction increases significantly as processes mature. The reason for this is that stabilized, effective processes increase an individual's ability to be successful. The facts that process maturity increases end user satisfaction, reduces defect rates, and reduces rework and cost are all contributors to an individual's personal success. Individuals tend to want to work in an organization in which they are successful, and if the organization's processes make them successful they are motivated and anxious to stay with that organization.

5.5.1.10 Relationship of Process Maturity to an Organization's Willingness to Embrace Change
There is a high correlation between process maturity and an organization's willingness to embrace change. People resist change for a variety of reasons. Some of these are personal, but others relate to a lack of confidence that the change will both improve the way work is done and have a positive impact on their career. However, as processes mature, people gain confidence in the processes and the willingness of management to reward for results. The willingness to change parallels process maturity but lags slightly at the lower levels of maturity and accelerates during the higher levels of maturity.

5.5.1.11 Relationship of Tools to Process Maturity
Tools are a vital component of maturing processes. At Level 1, tools tend to be optional and not well taught. The lack of good tools, and the lack of consistent use of those tools, holds organizations at lower maturity levels. There is a strong relationship between the acquisition and integration of tools into the work processes and the movement from process maturity Level 1 to Level 5.


Skill Category 6
Define, Build, Implement, and Improve Work Processes

The world is constantly changing. Customers are more knowledgeable and demanding and, therefore, quality and speed of delivery are now critical needs. Companies must constantly improve their ability to produce quality products that add value to their customer base. Defining and continuously improving work processes allows the pace of change to be maintained without negatively impacting the quality of products and services.

6.1 Process Management Concepts
Process management is a term used by many IT organizations to represent the totality of activities involved in defining, building, deploying and maintaining the work processes used to achieve the IT mission. What is referred to as process management in this skill category is also called process engineering and the standards program.

6.1.1 Definition of a Process
A process is a vehicle of communication, specifying the methods used to produce a product or service. It is the set of activities that represent the way work is to be performed. The level of communication (detail of the process) is normally commensurate with the skill level associated with the job. Table 6-1 shows some sample IT processes and their outputs.

Table 6-1 Sample IT Processes and Their Outputs

Examples of IT Processes            Process Outputs
Analyze Business Needs              Needs Statement
Conduct JAD Session                 JAD Notes
Run Job                             Executed Job
Develop Strategic Plan              Strategic Plan
Recognize Individual Performance    Recognized Individual
Conduct Project Status Meeting      Updated status information

6.1.2 Why Processes Are Needed
Processes add value to both management and the workers, although the reasons differ.

From a management perspective, processes are needed to:

• Explain to workers how to perform work tasks
• Transfer knowledge from more experienced to less experienced workers
• Assure predictability of work activities so that approximately the same deliverables will be produced with the same resources each time the process is followed
• Establish a basic set of work tasks that can be continuously improved
• Provide a means for involving workers in improving quality, productivity, and customer satisfaction by having workers define and improve their own work processes
• Free management from activities associated with "expediting work products" to spend more time on activities such as planning, and customer and vendor interaction

From a worker perspective, work processes are important to:

• Increase the probability that the deliverables produced will be the desired deliverables
• Put workers in charge of their own destiny because they know the standards by which their work products will be evaluated
• Enable workers to devote their creativity to improving the business instead of having to develop work processes to build products
• Enable workers to better plan their workday because of the predictability resulting from work processes



6.1.3 Process Workbench and Components
A quality management approach is driven through processes. As Figure 6-1 shows, the workbench is a graphic illustration of a process, documenting how a specific activity is to be performed. Workbenches are also called phases, steps, or tasks. A process can be viewed as one or more workbenches. Depending on the maturity of the organization, process workbenches may be defined by process management (or standards) committees, QA analysts, or work teams.

Figure 6-1 Components of a Process Workbench

From the perspective of the PDCA cycle, the process workbench is created during the Plan phase, and improved during the Act segment.

The workbench transforms the input to produce the output. The workbench is comprised of two procedures: Do and Check, which correspond to the Do and Check phases of the PDCA cycle. If the Check procedure determines that the standards for the output product are not met, the process engages in rework until the output products meet the standards, or management makes a decision to release a nonstandard product. People, skills, and tools support the Do and Check procedures.

A process is defined by workbench and deliverable definitions. A process is written with the assumption that the process owners and other involved parties possess certain skill and knowledge levels (subject matter expertise).

A workbench definition contains:

• A policy statement (why – the intent)
• Standards (what – the rules)
• Procedures (one or more tasks) in the form of procedural statements (how)

A deliverable definition contains:

• A policy statement (why – the intent)
• Standards (what – the rules)
• Templates that specify document format

Policies, standards, and procedures may refer to people, methods, and tools.

Note that some workbenches and deliverables may not contain standards or procedures. It is assumed that if a defined process contains standards, they will be complied with; and if the process contains task procedures, they will be performed.

The following components of a process represent the vocabulary of process management; a minimal data-structure sketch follows the list.

• Policy
The policy states why a process exists or its purpose. A policy indicates intentions or desirable attributes or outcomes of process performance, and should link to the organization’s strategic goals and support customer needs/requirements.

• Standards
The standards state what must happen to meet the intent of the policy. Standards may relate to a deliverable produced in the process or to the task procedures within the process. Regarding deliverables, the standard is used to determine that the delivered product is what is needed. Regarding the task procedures, standards may specify things such as the time frame or a sequence that must be followed. A standard must be measurable, attainable, and necessary.

• Inputs
Inputs are the entrance criteria or materials needed to perform the work.


• Procedures
Procedures describe how work must be done - how methods, tools, techniques, and people are applied to perform a process (transform the input into the output). Procedures indicate the "best way" to meet standards. There are procedures to Do and Check work. People, skills, and tools are incorporated into the Do or Check procedures, and, therefore, are not considered separate components of the workbench.

  • People or Skills are the roles (such as suppliers, owners, and customers), responsibilities, and associated skill sets needed to execute a process. For example, a programmer may require written communication skills and knowledge of Visual Basic.

  • Manual and automated tools such as CASE tools, checklists, templates, code compilers, capture/playback testing tools, and e-mail may be used to aid in the execution of the procedures.

• Output or Deliverables
Output or deliverables are the exit criteria, products, or results produced by the process. Deliverables can be interim or external. Interim deliverables, such as JAD notes, are produced within the workbench, but never passed on to another workbench. External deliverables, such as a requirements specification, may be used by one or more workbenches, and have one or more customers. Deliverables serve as both inputs to, and outputs from, a process.
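To summarize the vocabulary above, the sketch below models a workbench as a simple Python data structure with Do and Check procedures and the rework loop described earlier; the field names, the rework limit, and the example content are illustrative assumptions, not definitions from this guide:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Workbench:
    policy: str                      # why the process exists
    standards: List[str]             # what must be true of the output
    do: Callable[[str], str]         # Do procedure: transform input into output
    check: Callable[[str], bool]     # Check procedure: does the output meet the standards?
    max_rework: int = 3              # guard so rework cannot loop forever

    def run(self, work_input: str) -> str:
        output = self.do(work_input)
        attempts = 0
        # Rework until the output meets the standards or management must decide
        while not self.check(output) and attempts < self.max_rework:
            output = self.do(output)
            attempts += 1
        return output

# Illustrative example: a hypothetical "document a requirement" workbench
bench = Workbench(
    policy="Requirements shall be testable",
    standards=["statement ends with a measurable criterion"],
    do=lambda text: text.strip().rstrip(".") + " within 2 seconds.",
    check=lambda text: "within" in text,
)

print(bench.run("The system shall respond to queries"))
```

In practice the Do and Check procedures are documents and human activities rather than functions, but the structure makes the relationship among policy, standards, procedures, and rework explicit.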

6.1.4 Process Categories
Approaches for controlling businesses have evolved over many decades. These approaches have been accepted by the American Institute of Certified Public Accountants, chartered accountant societies worldwide, and the U.S. General Accounting Office (which has issued control guidelines for the U.S. Government).

The business control model includes three general categories of control, which are implemented through the processes below. Examples of these controls are given in Skill Category 7.

• Management Processes
These are the processes that govern how an organization conducts business, including human resources, planning, budgeting, directing, organizational controls, and processes governing responsibility and authority. Management processes are referred to as the quality management system. A model for this system is the Malcolm Baldrige National Quality Award model. Models are discussed in Skill Category 3.


• Work Processes
These processes include the standards and procedures that govern the performance of a specific work activity or application, such as systems development, contracting, acquisition of software, and change management.

• Check Processes
These controls assure that the work processes are performed in accordance with the product standards and the customer needs. They also assure that management processes are performed according to organizational standards and needs. Examples include document reviews, program reviews, and testing.

In the context of the PDCA cycle (see Skill Category 1), management processes perform the Plan and Act components; work processes represent the Do component; and the Check processes represent the Check component. This means that management must resolve noncompliance to processes, not punish the people assigned to the work processes. Responsibility for resolving a noncompliance may be enforced automatically through controls.

As processes mature, so do the methods and approaches for managing them. While the Software Engineering Institute (SEI) Capability Maturity Model (see Skill Category 3) is directed at the software development process, the same concepts apply to management processes. Management processes at SEI Level 1 are very unpredictable and have great variability, while those at Level 5 are highly predictable, with minimal variability. For example:

• With management processes at Level 1, organizational politics become extremely important in the management style. Politics can influence unpredictable processes. In contrast, a mature, stable management process is less influenced by organizational politics, and trust increases.

• When organizations have immature management and work processes, management does not know what workers are doing, and workers may not know what management wants, or how management will react to work situations. Immature management processes are usually managed through controls such as budget and schedule, rather than relying on work processes.

• Management processes should be stabilized and matured in conjunction with the work processes. As management and work processes mature, predictability and consistency enter the workplace. The need for status reports and status meetings decreases, and there is more reliance on management by fact. The organization tends to flatten as layers of middle management are eliminated.

• Check processes are typically associated with work processes, but they exist in management processes as well. The maturing of the Check processes helps mature the management and work processes. Check processes are a major source of quantitative data for the tactical dashboards that are used by management (see Skill Category 8). They are also the source of data needed for continuous process improvement. As the Check and Do processes mature, workers need less supervision, leaving management free to perform their planning responsibilities rather than acting as inspectors for the work products and services.


A study of quality models shows that the major factor in maturing the work processes is the addition of check processes.

6.1.5 The Process Maturity Continuum

Work processes, check processes, and customer involvement are interrelated as shown in Figure 6-2. The type of product determines the work and Check processes, and dictates the level of customer involvement.

6.1.5.1 Product and Services Continuum

The IT industry deals with a continuum of products, ranging from similar to professional. For example, many repetitive tasks are performed by computer operations to produce similar products such as invoices and checks. In the middle of the product continuum are job shops that produce one-of-a-kind products that share common characteristics but are customized and unique, such as a software system. At the high end of the continuum the product may also be a service in the form of professional advice or consulting.

Figure 6-2 Process Management Continuum

As the type of product changes on the continuum, so do the work processes. The primary change in work processes is the amount of worker skill and personal contribution to the product or service produced.


6.1.5.2 Work Process Continuum

Continuous work processes are process-dependent. The worker knows the standards that the product or service must meet, and is provided with the procedures and tools needed to complete the job on time and within budget. It is important for the worker to follow the processes precisely so that each product produced is similar to the previous product produced. While individuals use experience and personal skills, the probability of success is significantly increased at this end of the continuum because previous use of the process has proven it to work.

Professional work processes are people-dependent, and may be referred to as crafts or art. They depend as much on the skills of the worker as on the steps of the process. For example, a software developer might have C++ programming skills. Or, in creating a design, the process focuses on the way the design is documented and the constraints rather than the actual design. The process assumes that the designer has certain skills and creativity and utilizes them to create the design.

With professional work processes, management depends upon the creativity and motivation of people, and must try to hire the best people and then encourage them to perform the job tasks needed. This is sometimes called "inspirational management." The inspiration can be positive, with the promise of a reward; or negative, threatening to withhold rewards if the job task is not completed on time, within budget, and to the customer's satisfaction. There are normally few standards at this level, and the worker’s manager or customer judges the quality of the completed product.

6.1.5.3 Check Processes Continuum

The continuum of check processes parallels that of work process methodologies, and represents the types of controls associated with the work processes. Continuous work processes use literal controls. Controls of professional processes focus more on intent, and require a group of peers to assess their effectiveness. In the system design example above, the objective of placing controls on a worker’s skills and creativity is to enhance the probability that the skills and creativity will be used effectively. Intent controls used in design processes would focus on such things as whether the design:

• Uses the hardware configuration effectively
• Can be implemented given the skills of the implementers
• Can be easily maintained

6.1.5.4 Customer Involvement Continuum

The customer involvement continuum shows the level of involvement needed by the customer for the type of product produced. For similar products produced by continuous work processes using literal controls, there is minimal or no customer involvement. For example, in daily computer operations, the customer would rarely be involved unless problems occurred. As an IT function moves towards customized products and services, the customer usually becomes more involved on an iterative or checkpoint basis.

At certain points during the work process, the user becomes still more involved to verify that the products being produced are those wanted by the customer.


When the products become professional, the customer is heavily involved in the work or check processes. In consulting, the work process involves how the customer uses that advice. If the advice is ignored, or used incorrectly, the end product will likely be ineffective. Thus, heavy involvement is not only needed, but the involvement actually shapes the final product or service.

6.1.6 How Processes Are Managed

The infrastructure in a quality management environment supports process management, and is shown in Figure 6-1 in Skill Category 2. Process management is primarily a line (not a staff) responsibility. All levels of the organization should be involved in both establishing and using processes in their daily work. The most effective means for managing and building work processes is to have managerial responsibilities fall to a process management committee, and give teams responsibility for the activities of building and improving processes.

Other approaches have been used, but not as successfully. Generic purchased processes are not customized for the culture in which they are installed. Either they fail completely, or are only partially followed. Some organizations engage a single individual or small group to write processes for the entire organization. This frequently fails because the users do not feel ownership of the process, or they feel that they know better ways to do work than those defined by someone else.

As a supporting staff function, the quality function also has some process management responsibilities. Their primary role should be to help and support the line organization, but not to manage the processes. As discussed in Skill Category 4, the involvement of the quality function varies with the maturity level of the quality management system.

The quality function may:

• Participate on committees, providing process management expertise
• Provide team support, such as training, coaching, and facilitation
• Serve as a centralized resource for measurement analysis and reporting
• Play a "custodial" role for processes - formatting, editing, and publishing and distributing process definitions; controlling access or change to process definitions, etc.

• Audit for process deployment and compliance, when an organization is not able to separate the auditing and quality functions (separate functions are recommended)

Occasional failure is the price of improvement.

6.1.7 Process Template

A process template is a pictorial representation of what is needed to comply with the process requirements. For example, an order entry clerk might have a computer screen of the fields needed to complete the customer order.


The computer screen represents the template used to accomplish the order entry process.

6.2 Process Management Processes

Process management is a PDCA cycle. Process management processes provide the framework from within which an organization can implement process management on a daily basis. Figure 6-3 shows how this set of practices can be viewed as a continuous improvement cycle.

Figure 6-3 Process for Process Management

The process management PDCA cycle includes seven processes, together with the infrastructure group that uses each process. These processes are summarized and then discussed in more detail below.

• Plan Cycle
The Plan cycle is used by the Process Management Committee and includes these processes:

1. Process Inventory defines a list of processes that support an organization in accomplishing its goals.

2. Process Mapping identifies relationships between processes and the organization's mission/goals, its functions (people), and its deliverables (products and services).

3. Process Planning sets priorities for process management projects (defining or improving processes).

• Do Cycle
The Do cycle is used by the Process Development Team and includes these processes:


4. Process Definition defines a process's policies, standards, task procedures, deliverables, people and skill requirements, and tools.

5. Process Controls identifies the level and types of quality controls needed within a process, and incorporates QC procedures into the process.

• Check Cycle
The Check cycle is used by the Process Management Committee and includes this process:

6. Process Measurement determines what measures and metrics are needed to strategically and tactically manage by fact, and incorporates tactical measurement into the appropriate processes.

• Act Cycle
The Act cycle is used by the Process Development Team and includes this process:

7. Process Improvement uses facts (measurement results) to identify root causes of problems and to change the processes in order to improve results, prevent problems, and reduce variation.

The seven process management processes should be used in the sequence in which they are described. Planning should occur before processes are defined to ensure that the most critical processes are defined first. Implemented processes should then be measured to determine first whether they are repeatable (approximately the same product set is produced each time the process is used); and second, to determine where the process could be improved. This enables the process improvement process to focus on those process components that will provide the greatest benefit to the organization when improved.

The process management PDCA cycle is continuously performed. The Check and Act components cause new planning to occur, as will the introduction of different technology and approaches, such as client/server or the Internet. The plan then redefines the sequence in which processes should be defined, checked, and acted upon; and the cycle continues.

6.2.1 Planning Processes

Figure 6-3 showed the following three processes within the Plan cycle of the process management process.

6.2.1.1 Process Inventory

A process inventory is a major process management deliverable containing the "master list" of processes that support an organization in accomplishing its goals. Before producing an inventory, the scope of the effort must be defined, focusing on processes owned and used by the organization. The inventory is developed as part of an overall process management framework, but is also updated and improved on an ongoing basis.

Inventories can be developed by:


• Referencing existing policies, standards, procedures, and system development lifecycle manuals

• Conducting brainstorming and affinity grouping sessions (see Skill Category 4)
• Surveying and interviewing employees
• Starting with existing process inventories (such as other companies' inventories, or Information Systems Process Architecture) and updating to reflect organizational structure and terminology

The inventory should list processes that produce a major outcome or deliverable. Each process listed should contain a brief description and its status or state (e.g., the CMM levels of Undefined, Initial, Repeatable, Defined, Managed, Optimized can be used). Optionally, a high-level “class” could be included to categorize processes, such as “run jobs” or “manage facilities”. Sample processes include:

• Develop Strategic Plan
• Purchase Tools
• Perform Internal Assessment
• Conduct Market Research
• Identify Product Requirements
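
A process inventory such as the sample list above can also be kept as structured data so that status and class can be summarized. The entries below are hypothetical; the status values follow the states mentioned in the text.

    from collections import Counter

    # Hypothetical process inventory: name, class, brief description, and status/state.
    inventory = [
        {"process": "Develop Strategic Plan", "class": "manage organization",
         "description": "Set long- and short-term goals.", "status": "Defined"},
        {"process": "Purchase Tools", "class": "manage facilities",
         "description": "Acquire development and testing tools.", "status": "Repeatable"},
        {"process": "Perform Internal Assessment", "class": "manage quality",
         "description": "Assess processes against the quality model.", "status": "Initial"},
    ]

    # Report how many processes are in each state.
    print(Counter(entry["status"] for entry in inventory))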

6.2.1.2 Process Mapping

Process mapping identifies or "maps" relationships between processes and the organization's mission and goals, its functional units or roles (people), and its deliverables (products and services). The three main objectives of process mapping are to understand how a process contributes to meeting the organization's mission and goals, who is responsible for the process, and how the process interfaces to produce the organization's outcomes.

To map processes, executive management (Quality Council) must have identified the organization's mission, long and short-term goals, organizational structure, and deliverables. If formal strategic planning is not regularly performed, identifying an organization's mission and goals is difficult.

Processes should be mapped to mission and goals, to functional units (people), and to deliverables in separate matrices. The following generic process can be used for each mapping:

1. Create a matrix.

2. List processes across the top of the matrix.

3. On the left side of the matrix, list the goals, functional units or roles, or deliverables.

4. Identify the linkages between the rows and columns, as follows:
• A process may support multiple goals. Put an X in the intersection to acknowledge a linkage only if the stability or quality of the process could influence meeting goals. If all processes contribute to all goals, decompose the mission and goals further. The resulting Manage-by-Process matrix is a process map.

6-12 Version 6.2.1

Page 143: Casq Cbok Rev 6-2

Define, Build, Implement, and Improve Work Processes

• Processes have primary owners, suppliers and customers. Identify a linkage in this matrix by using “O”, “S” or “C” in the intersection to distinguish between the roles.

• Deliverables can be interim such as design specifications, or external such as user manuals. In this matrix indicate the usage of the deliverable by placing a “C”, “R”, “U” and/or “D” in the intersection. “C” is used when the deliverable is created, or the service is provided through the process. “R” indicates the deliverable is referenced or used as input to the process. “U” indicates the deliverable is updated or revised in the process. “D” means the deliverable is deleted, or retired, etc.

5. For mission and goal mapping, look for gaps where goals are not supported by processes. Consider removing any processes that do not support goals, as they do not add value. For functional unit mapping, look for gaps where units do not have processes. For deliverable mapping, look for gaps where deliverables do not have processes or vice versa.

6. If the mapping identified any new processes, add them to the inventory.
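
As a minimal sketch of the deliverable matrix described in step 4, the code below keys C/R/U/D usage codes by (process, deliverable) pairs and then performs the gap check from step 5. The processes and deliverables named here are hypothetical.

    processes = ["Identify Product Requirements", "Design System", "Develop Unit"]
    deliverables = ["Requirements specification", "Design specification", "User manual"]

    # C = created, R = referenced (input), U = updated, D = deleted/retired.
    matrix = {
        ("Identify Product Requirements", "Requirements specification"): "C",
        ("Design System", "Requirements specification"): "R",
        ("Design System", "Design specification"): "C",
        ("Develop Unit", "Design specification"): "R",
    }

    # Gap check: deliverables no process creates, and processes with no deliverables.
    unproduced = [d for d in deliverables
                  if not any(matrix.get((p, d)) == "C" for p in processes)]
    idle_processes = [p for p in processes
                      if not any((p, d) in matrix for d in deliverables)]
    print("Deliverables never created:", unproduced)
    print("Processes with no deliverables:", idle_processes)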

6.2.1.3 Process Planning

Process planning allows priorities to be set for process management projects (defining or improving processes). Priorities are set based on the relative importance of the process to accomplishing the organization's mission and goals, organizational constraints or readiness, and an assessment of the project’s status as follows:

• Assessing mission alignment
The degree to which each process contributes to the organization's mission and helps to accomplish organizational goals should be ranked. This involves weighting goals by importance and then applying the weighting scale to the one to three processes that most strongly align to each goal.

• Assessing organizational capability or readiness
The degree to which an organization is capable of defining or improving each process should be assessed. A score of 1-3 represents the readiness of the organization. Readiness is influenced by three main factors:

• Motivation: Are most of the process owners committed to managing by process and motivated to define or improve it?

• Skills: Is the process understood, and is there subject matter expertise in the methods and tools used to define or improve the process?

• Resources: Are appropriate and adequate resources (people, time and money) allocated to define or improve the process?

• Status of each process
This was determined as part of the inventory process.


After assessing the alignment, readiness, and process status, the process management projects can be prioritized. The alignment and readiness assessments are represented by a numerical value. Process status values can be converted to numbers by setting Undefined/Initial to 3, Repeatable or Defined to 2, and Managed or Optimized to 1. Total the assessment values to provide an overall score, and assign priorities based on the scores. Each company should develop a prioritization scheme that fits its own needs. A simple scheme is to assign the top-scoring process a Priority 1, and so on.
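
As a rough sketch of this scoring, the code below totals the alignment, readiness, and converted status values for each candidate project and ranks the results. The candidate processes and their scores are invented for illustration, and each organization would substitute its own scheme.

    # Convert process status to a numeric value, as described in the text.
    STATUS_SCORE = {"Undefined": 3, "Initial": 3, "Repeatable": 2,
                    "Defined": 2, "Managed": 1, "Optimized": 1}

    candidates = [
        # (process, alignment score, readiness score 1-3, status)
        ("Change Management", 8, 3, "Initial"),
        ("Requirements Definition", 9, 2, "Repeatable"),
        ("Unit Testing", 5, 1, "Defined"),
    ]

    scored = [(name, align + ready + STATUS_SCORE[status])
              for name, align, ready, status in candidates]

    # The highest total score becomes Priority 1, and so on.
    for priority, (name, score) in enumerate(sorted(scored, key=lambda x: -x[1]), start=1):
        print(f"Priority {priority}: {name} (score {score})")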

After establishing priorities, develop the tactical plan by assigning resources and time frames to the highest-priority projects. While each process definition or improvement team should develop its own work plan, the tactical plan is a higher-level plan that the Quality Council and process management committee use to establish and track multiple process management projects. This plan should contain a project title, the resources (manpower and materials) and time frame (start and stop dates) for the project.

Many organizations use a "time box" approach to process management, which specifies a preset period of time, such as six weeks, for the project's duration. The scope of the work effort and the specific tactical plan is then driven by this time frame. The time box approach helps organizations use an incremental, iterative approach to process definition.

6.2.2 Do Processes

The two processes within the Do cycle of the process management process as shown in Figure 6-3 are discussed below.

6.2.2.1 Process Definition

The process that a team uses to scope a single process by defining its policies, standards, procedures, deliverables, people or skill requirements, and tools is called Process Definition. The core activity of Process Definition is defining the process; but other activities include performing walkthroughs of the process before publication, piloting the process, marketing the process, etc. Only the core activity is discussed within the scope of this guide.

The core team should contain 3-5 members (typically process owners). When multiple people or units exist for the same role, a representative group should be used with the others acting as reviewers. Team members should include a process owner, supplier, customer, process administrator, manager of the process owner, and a process auditor. Roles such as team leader, facilitator, and scribe should be assigned and the team should be trained in consensus building (see Skill Category 4).

During Process Definition the following activities occur:

• Define the Scope of the Process
Use the Process Inventory, Process Maps, and existing standards and procedures to clarify the scope of the process. This involves developing a high-level process flow (major workbenches and interfaces with other processes), major inputs and outputs (deliverables), and major customers and their requirements.



• Develop the Workflow
Brainstorm the tasks and deliverables in the process and then group the tasks. Select the current best practices, adding missing tasks or deliverables and correcting flaws in the workflow. Define the workbenches internal to the process, their sequence, and major interim deliverables. Typically processes contain 3-5 workbenches, with each workbench containing 3-7 process steps or task procedures.

• Develop Policies
A policy states why the workbench or deliverable exists, and indicates desired results (desirable attributes of process performance or desirable product quality characteristics). Policies should link to the organization’s strategic goals, and support customer needs or requirements. A policy should be realistic, stating a desire that the organization is currently capable of accomplishing.

• Sample Workbench Policy Statement
A JAD session is conducted to uncover the majority of customer requirements early and efficiently, and to ensure that all involved parties interpret these requirements consistently.

• Sample Deliverable Policy Statement
The requirements specification must reflect the true needs of the ABC organization, and be complete, correct, testable, and easily maintainable so that it can be used throughout application systems development and maintenance.

A process management committee or the process manager usually develops a policy before establishing a process development team. When the team begins scoping and defining the detailed process, they may need to challenge the feasibility and appropriateness of the policy. If the process development team develops the policy, the process management committee and/or process manager should review it.

• Develop Standards
A standard states what must happen to meet the intent of the policy. Standards are more specific than policies in that they convert intentions into specific rules. Workbench standards deal with performance issues related to time frames, prerequisites, or sequencing of tasks while deliverable standards typically specify content.

A standard must be measurable, attainable, and necessary. It is measurable if it can be verified that the standard has or has not been met. A standard is attainable if, given current resources and time frame, the standard can reasonably be complied with every time. The standard is necessary if it is considered important or needed (not a "nice to have") in order to meet the intent of the policy.

• Sample Workbench Standard


Requirements uncovered in each JAD session must be formalized, reviewed, and approved by the JAD participants the following morning.

• Sample Deliverable Standard
Each unit of data or information referenced in the requirements specification must be described in the data dictionary.

Process development teams should consider the following guidelines when developing standards:

• Standards contain the basis on which compliance is determined, not the methods by which compliance is achieved.

• Each time a workbench is executed or a deliverable is produced, quality control procedures must be able to evaluate whether the standard has been met.

• It is easier to control quality with binary standards that require no subjective assessments or judgments to determine compliance; however, this is not always possible. Different types of standards may result in different QC methods. For example:

• Literal or binary: Each dataflow line on a dataflow diagram must contain a dataflow descriptor of five words or less that identifies the data being passed between processes and stores.

• Judgment or intent: Each dataflow line on a dataflow diagram must contain a dataflow descriptor that concisely and accurately identifies the data being passed between processes and stores.

• Since customer requirements often determine standards, interview customers to find out how they use the deliverables, any problems they have, and the most important quality characteristics.

• Consider "internal requirements" - the process owner’s desires for effective, efficient, and consistent processes.

• If information from other processes is a problem, create standards that serve as entrance criteria. It is better to embed these standards in the supplier’s process.

• Policy statements may need to be re-evaluated in light of standards development.

• Develop Procedures
Procedures describe how work is done, and indicate the "best current way" to meet standards. Ideally, if the procedures are followed, the standards will automatically be followed and the intent of the policy will be met.

A task is a single step in a procedure. Procedures may have many tasks. Task procedures should refer to people, accessible tools, known techniques or methods, and templates for deliverables. If appropriate, tasks can also refer to more detailed work instructions, such as training or user’s manuals.


Procedures are often written in play script format (see Skill Category 4). Like a script in a play, each line (task) is identified in its proper sequence, and the actor (role or function) responsible for “saying” each line is noted. A task flow diagram or flowchart may be used to graphically depict the procedure.

A process development team develops procedures using the following guidelines:

• Procedures are not always required and, unless critical, should be minimized.
• Skill set and knowledge (subject matter expertise) requirements or prerequisites for performing the procedure should be identified. The procedures should not be a substitute for skill set development or describe how to do something the person doing the task should know (given the prerequisites).

There are Do procedures and Check procedures. Check procedures are described in the next section on process control. A Do procedure explains in a step-by-step format the tasks needed to produce a product.

• Sample Do procedure for the requirements definition process
1) Scribe: Use the XYZ tool to enter the requirements. Generate a list of any open issues using the XX template. 2) Leader: Walk through the requirements, paraphrasing each item. Address each open issue when its reference column contains the item being covered.

• Sample Do procedure to write a computer program might have the tasks
1) Project manager: get a job number, 2) Programmer: obtain the program specifications, and so forth.
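
The play script format lends itself to a simple ordered structure of (actor, task) pairs. The sketch below is a hypothetical rendering of the requirements definition Do procedure above; it is only an illustration of the format.

    # Each step names the actor (role) and the task, in sequence, as in a play script.
    do_procedure = [
        ("Scribe", "Use the XYZ tool to enter the requirements."),
        ("Scribe", "Generate a list of any open issues using the XX template."),
        ("Leader", "Walk through the requirements, paraphrasing each item."),
        ("Leader", "Address each open issue when its reference item is covered."),
    ]

    for step, (actor, task) in enumerate(do_procedure, start=1):
        print(f"{step}) {actor}: {task}")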

6.2.3 Check Processes

The Check process incorporates the process controls needed to be assured the “Do process” was performed correctly. Process control identifies the appropriate types of quality controls needed within a process, and designs and incorporates them as Check procedures into the process. Check procedures describe how to evaluate whether the right work was done (output meets needs) and whether the work was done right (output meets standards). Controls should be designed based on the criticality of the process and what needs to be checked.

Check procedures are defined in the same way as Do procedures. If the play script format is used, Check procedures should be incorporated into the task flow diagram or flowchart at the appropriate spot. For example, a quality control checklist associated with programming would have questions such as, "Was a flowchart produced that describes the processing of the program?"

How much quality control is enough?

One of the quality management philosophy objectives is to continually improve process capability by reducing variation and rework. With this strategy quality is built into the products rather than tested in. As standards and Do procedures are perfected, the need for extensive quality control is reduced.


Quality control procedures are considered appraisal costs. They add to the overall cost, increase the effort required, and increase the cycle time for building products and delivering services. Where standards and Do procedures are not perfected, quality control is necessary in a process to catch defects before they affect the outcome of downstream processes. Appraisal costs are part of the Cost of Quality, which is covered in Skill Category 1.

The challenge is to install the appropriate amount and types of controls to minimize cost, effort, and cycle time; and to minimize the risk that defects will go undetected and "leak" into other processes.

6.2.3.1 Identify Control Points

Controls are often placed near the end of a process or workbench, but this is not always the most appropriate location. The first step in process control design is to identify the most logical points in the process to add controls. One way to address this issue is to identify and rank process risks. Process risks are those things that could occur during execution of a process that would result in defects and rework. For example, process risks for an "estimate project effort" process may be:

• Use of inaccurate historical data
• Misuse of historical data
• Inaccurate estimation algorithm or mathematical mistakes
• Inadequate contingencies used
• Wrong staffing ratios and loading figures used

The team reviews the process workflow, policies, standards, and procedures, and then brainstorms risks. Risks that are found to be outside the scope and control of the process being defined are best controlled in other processes. If there is any risk that standards may not be followed, that should also be noted.

Ranking risks should be based on two factors: the probability that the risk might occur and the impact (cost of defect and rework) if the risk does occur. A high, medium, low ranking scheme could be used. While this is somewhat subjective without historical defect and cost data, the judgment and knowledge of the process owners on the team is usually accurate.
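
A hedged sketch of that ranking: each brainstormed risk receives a high/medium/low rating for probability and impact, and a combined score orders the list. The numeric weights and the sample ratings are illustrative only.

    # Map the high/medium/low ratings to illustrative numeric weights.
    WEIGHT = {"low": 1, "medium": 2, "high": 3}

    risks = [
        # (risk, probability rating, impact rating)
        ("Use of inaccurate historical data", "high", "high"),
        ("Misuse of historical data", "medium", "high"),
        ("Mathematical mistakes in the algorithm", "low", "medium"),
        ("Inadequate contingencies used", "medium", "medium"),
    ]

    ranked = sorted(risks, key=lambda r: WEIGHT[r[1]] * WEIGHT[r[2]], reverse=True)
    for risk, probability, impact in ranked:
        print(f"{risk}: probability={probability}, impact={impact}, "
              f"score={WEIGHT[probability] * WEIGHT[impact]}")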

Plotting where the top-ranking risks lie can help to determine the most appropriate point to insert controls. Using the ranked list of risks, the team should identify where defects potentially originate. In general, the closer the control is to the point of origin of the defect, the better. The severity of the risk will also influence the selection of the control methods. Skill Category 8 contains additional information on risk.

The control point decisions should be adjusted if needed. Control methods should be considered based on the following:

• Risk severity
• Cost, effort, and cycle time impact
• Availability of appropriate resources and people
• Strength of the control method


• Impact on the overall culture

Figure 6-4 shows five main categories of control methods that are discussed below.

Figure 6-4 Control Methods Categories

6.2.3.1.1 Automatic

When performing a Do procedure, automation is the only way to force compliance. Some tools, such as CASE tools, automatically enforce task completion and sequence, and deliverable standards.

6.2.3.1.2 Self-Checking

This is when the process owner (author) uses a method other than the Do procedure to crosscheck his/her work. Methods in this category are:

• Analysis tools, which parse and analyze deliverables after they have been created, such as spelling checkers, writing analyzers, and code analyzers (e.g., standards compliance, complexity analyzers, and cross-reference tools).

• Checklists, which are worksheets containing a list of questions oriented towards determining whether the standards have been adhered to. They are designed to provide a summary-style self-check for the author. Checklists often mirror policies, standards, and procedures, but address each compliance issue in the form of a question.


A "yes" answer means that the author has followed the process and has produced a result that matches the intent of the policy statement. A "no" answer indicates noncompliance.

• Desk-checks, where the author or owner reviews the product against specifications and standards. It is uncontrolled and subject to the time requirements of each individual.

• One-on-one reviews, which are informal reviews conducted between the author/owner and one other person. The objective is to review against specifications and standards.

• Tests, which validate that the actual results are the expected or desired results.
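
A checklist self-check can be approximated in a few lines of code: each binary question mirrors a standard, and any "no" answer is reported as noncompliance. The questions and answers below are hypothetical.

    # A self-check checklist: each question mirrors a standard and is answered yes/no.
    checklist = {
        "Was a flowchart produced that describes the processing of the program?": True,
        "Does every module have a header comment naming its author and purpose?": True,
        "Is each dataflow descriptor five words or less?": False,
    }

    noncompliance = [question for question, answered_yes in checklist.items()
                     if not answered_yes]

    if noncompliance:
        print("Noncompliance found:")
        for question in noncompliance:
            print(" -", question)
    else:
        print("All checklist items satisfied.")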

6.2.3.1.3 Peer Reviews

One or more process owners review the results of the author. Typically, quality problems are noted, and the author corrects them. Various methods exist, from informal to formal. Methods such as informal walkthroughs, formal inspections, and checkpoint reviews are discussed in Skill Category 7.

6.2.3.1.4 Supervisory

The author’s supervisor reviews the work and ensures that defects found are corrected. This is problematic because it takes responsibility for quality away from the worker, and may be ineffective because the supervisor is not sufficiently skilled or knowledgeable about the work to make intent or judgment calls regarding compliance. It may also influence the supervisor unfairly regarding a worker’s performance. Supervisors may use the informal walkthrough, checklists, or testing for control methods.

6.2.3.1.5 Third Party

An independent group evaluates the product. As with supervisory controls, this is problematic because responsibility for quality is taken from the worker and the third party may not have the skills or knowledge about the work to determine compliance. Examples of independent groups are quality functions and independent test teams. Methods used by third parties include informal walkthroughs, checklists, testing, analysis tools, and sampling.

6.2.3.2 Process Measurement

Process measurement determines what strategic and tactical measures and metrics are needed to manage by fact, and incorporates measurement into the appropriate processes. Measurement provides quantitative feedback to an organization about whether it is achieving its goals - whether it is moving towards its results.

The program starts by building a measurement base, and then identifies goals for the desired business results. To implement measurement in a process, it identifies the relationship of process contributors and results, and how to measure each. The fourth phase discusses measurement by fact.


Results should play a significant role in process definition. Process requirements can be derived from analyzing the factors that contribute to achieving desired results. These factors include identifying:

• The desirable attributes that must be present when processes are performed
• The desirable characteristics that must be included when products are produced in order to achieve desired results

Next, consider which processes should incorporate the requirements to address the contributors. These requirements become policies and standards, and form the basis for tactical measurements. They are needed to ensure that processes are being followed and are also facts that must be analyzed when measuring and interpreting results.

Measurements may already be used to control the process. For example, if "maintainable code" is a desirable outcome for the "develop unit" process, then a standard may have been developed limiting code complexity and size. A self-check QC method to analyze and report code complexity and size may have been incorporated into the process. While the measurement has been selected, how it will be collected in order to evaluate the process’ effectiveness over time may not have been considered.
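
For the "maintainable code" example, a self-check measurement might be no more than counting the size of each unit and flagging anything over the standard's limit. The sketch below assumes a hypothetical limit of 100 lines per function; it is an illustration, not a prescribed CBOK measurement.

    import ast

    MAX_LINES_PER_FUNCTION = 100  # hypothetical standard limiting unit size

    def oversized_functions(source_code: str):
        """Return (name, line count) for each function exceeding the size standard."""
        results = []
        for node in ast.walk(ast.parse(source_code)):
            if isinstance(node, ast.FunctionDef):
                length = node.end_lineno - node.lineno + 1
                if length > MAX_LINES_PER_FUNCTION:
                    results.append((node.name, length))
        return results

    sample = "def tiny():\n    return 1\n"
    print(oversized_functions(sample))  # [] -- the sample unit meets the standard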

One measure of process effectiveness is compliance. Other process measurements must be derived from the policies, standards, and procedures themselves.

If the process being defined will be a measurement collection, analysis, or reporting point, the tasks that describe how to do these things must be defined. Measurement data should be developed as part of executing processes and collected for use by the line management responsible for those processes. Line management uses the data to control the process and the quality of the products produced by the process. Normally, the QA analyst is a secondary recipient of IT data, and uses it to improve the process itself.

Measurement procedures are defined as are Do and Check procedures. If using the play script format, measurement procedures should be incorporated at the appropriate spot, and the procedures added to the task flow diagram or flowchart if those forms are used.

6.2.3.3 Testing

Testing is the process that determines whether or not the actual results of processing equal the expected results. The concepts and practices used to test are covered in Skill Category 7.

6.2.4 Act Processes

The Act cycle of the process management process includes the one process of process improvement, as shown in Figure 6-5. The purpose of process improvement is to reduce the frequency of defects, including process ineffectiveness. Figure 6-5 shows how, without process improvement, there is a continuous cycle of uncovering product defects and removing them from the product. This is because the same defects will likely occur every time that product is built. Process improvement uses facts (measurement results) to identify the root causes of problems and to change processes so that results will improve, problems will be prevented, and variation will be reduced.



Figure 6-5 Concept of Process Improvement

The long-range objective for process improvement is to eliminate the need for quality control activities such as testing, reviews, and inspections. If the processes do not generate defects, then there is no need to search out, find, and correct product defects. For example, if an organization identifies data entry errors as a high-defect activity, then an improvement program can be started to drive down those defects. The objective of the program would be to change the products or processes in order to remove the cause of the data entry defects.

Process improvement is a continuous, iterative process. It involves finding the defects, accumulating the defects in a manner that separates the significant from the insignificant, selecting a single defect, and identifying the root cause of that defect. At that point, an action program is put into place to reduce the frequency of defects or eliminate the root cause of the defect. Then the process selects the next most significant defect and repeats the improvement process.

Process improvement has two components:

• Establishing process improvement teams (PIT), which may or may not include members of the process development team

• Providing the teams with a process to use for process improvement

6.2.4.1 Process Improvement Teams

Process improvement must be accomplished by emphasizing teamwork. The PIT component addresses this need. Every member of the organization should become involved in the process improvement process. Not everyone must be on a team, although as many teams as practical should be organized. The two methods below are recommended for creating the teams.


• Natural Work Group
Natural work groups, such as a system development project team, characterize the way companies currently organize their employees. Using natural work groups for a PIT is the easiest, quickest, and least confusing method. Generally, these teams already exist, and the members have established working relationships. They are also usually aware of improvements that could be made to improve the effectiveness of their work group.

If natural work group teams elect to address problems that impact beyond their team, the improvement process should support the collaboration of multiple teams working on problems with a broad scope. The measurement system would be required to track and report these instances, so appropriate reward and recognition would occur for these shared team efforts.

• Interdepartmental Teams
This method of organizing teams across departmental boundaries promotes interdepartmental teamwork and contributes to breaking barriers (and problems) that exist within organizations. It is also an excellent way to address problems that affect more than one work group or department. Because these types of problems are typically more complex, it is only recommended as a method of organizing when the PITs are mature.

Regardless of the method, each team should have between five and eight members. Smaller or larger groups can lose effectiveness. Team member responsibilities should include:

• Identifying problems and selecting the ones on which to work
• Proposing solutions to problems
• Choosing an appropriate solution and improvement approach
• Implementing the chosen improvement
• Documenting required data regarding team activities in the PIT measurement system
• Ensuring consistent use of a common set of statistical process control (SPC) tools and techniques
• Presenting the implemented improvement to a quality improvement administrator for certification that the PIT processes were followed

Team members should allocate 30-45 minutes per week for their duties. (This time commitment includes both meeting time and individual time spent on PIT activities, such as planning, designing, and implementing improvements.) Some teams might meet weekly and others might meet for an extended session every other week. If everyone participates, it is important to limit the time so there is not a major impact on members’ daily responsibilities. If teams are trained in effective problem-solving and meeting-leading skills, this small amount of meeting time can be very productive. Process improvement and associated cost savings will soar.

Utilizing 30-45 minutes per week, the Paul Revere Insurance Company was able to save $3.25 million in annualized savings in their first year, and $7.5 million in their second (250 teams, 1,200 employees). United Data Services (the IT function for United Telephone System) saved $4.75 million annualized in the first year (60 teams); and McCormack and Dodge saved over $2 million in the first four months (150 teams).


These are impressive figures, but achievable with minimal time commitments.

6.2.4.2 Process Improvement Process

For organizations that do not have a process improvement process (sometimes called quality or continuous improvement), the eight-step process below is recommended. The first three steps focus on process identification and understanding while steps 4 through 8 focus on the improvement aspect.

1. Select process and team

2. Describe current process

3. Assess process for control and capability

4. Brainstorm for improvement

5. Plan how to test proposed improvement

6. Analyze results

7. Compare results

8. Change process or redo steps 4-8

6.2.4.2.1 Identify and Understand the Process

1. Select Process and Team

The process to be improved may be selected by the improvement team or assigned by management. Leaders of improvement teams are usually process owners, and members are process users and may be cross-functional. Organizational improvement team members are usually volunteers; however, if subject experts are required and none volunteer, management will assign them. After the process has been selected and the improvement team formed, customers of, and suppliers to, the process are determined. If not already on the team, customer and supplier representatives should be added when practical. Often the improvement team begins by analyzing data on customer complaints, defects, rework, and cost of quality. The quantitative tools used in this step can include the process flow, Pareto charts, and run charts.
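
Step 1's Pareto analysis can be sketched in a few lines: tally the recorded defect categories and list them from most to least frequent so the team can see where most of the rework originates. The defect log below is invented for illustration.

    from collections import Counter

    # Hypothetical defect log entries, one category per recorded defect.
    defects = ["data entry", "data entry", "missing requirement", "data entry",
               "coding", "missing requirement", "data entry", "coding"]

    # Pareto ordering: most frequent category first.
    for category, count in Counter(defects).most_common():
        share = 100 * count / len(defects)
        print(f"{category}: {count} defects ({share:.0f}%)")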

2. Describe the Process

Two simultaneous actions are initiated in this step: defining customer-supplier relationships and determining actual process flow. The customer's requirements are defined using operational definitions to assure complete understanding between those providing the product or service and the customer. The customer's quality characteristics are defined and the current state of satisfying these is determined. In most instances, the customer referred to is the internal customer. This same idea is applied to the suppliers in the process. The process owner's expectations are defined for suppliers of inputs to the process.


The customer and supplier must then agree on the specifications to be met and how quality will be measured.

While the customer-supplier relationships are being defined, the team is also building a flowchart of the current process (how it is done at this point in time) if one was not completed in Step 1. Questions the team asks include: Does a person do the process the same every time? Does each person, if more than one does the process, do it the same? Is there variation between projects? The flowchart is used to identify "work around" sources of variation, and recycle, rework, and other causes of poor quality or productivity. The flowchart and discussions with the customer determine process measurement points, which monitor process health. Both input and output of the process are monitored. Data that needs to be collected and analyzed can be identified in this step. A discussion of the ideal way to do the process is initiated.

A brainstorming session should be used to develop a cause-and-effect diagram identifying all possible causes of process input variation. Next, categorize the causes. The customer's required quality characteristics are the effects. Separate cause-and-effect diagrams are constructed for each effect. Scatter diagrams are then used to establish the relationship (correlation) between each of the major causes of variation and the process output.

Tools used in this step include measurement and data collection methods, flowcharts, Pareto charts, cause-and-effect diagrams, and scatter diagrams.

3. Assess the Process

To assure that decisions are being made using precise and accurate data, the measurement system used to assess the process must be evaluated. If the measurement is counting, care should be taken to assure that everyone is counting the same things in the same manner. Once it has been determined that the measurement system accurately describes the data of interest, the input and output should be measured and baselines established.

Examples of process output indicators are listed below:

• Amount of rework

• Yield or productivity

• Errors (defects) in products, reports, or services

• Cycle time

• Timeliness

• Number of schedules missed

• Engineering changes per document

• Downtime because of maintenance, parts shortage, or other factors


• Overtime

• Absenteeism or sick leave

Process inputs, determined in Step 2, represent possible sources of variation that must be quantified. Process inputs and outputs can be baselined using Pareto charts, run charts, histograms, scatter diagrams, and control charts. The cause-and-effect diagram developed in the previous step should be reviewed and updated.

Next, the process is assessed for statistical control by constructing a control chart(s) for each important process input. If points are found outside the control limits, these are due to special causes of variation and must be investigated (see Skill Category 4). Control mechanisms may need to be installed in higher-level processes to eliminate them in the future. More data is then gathered to confirm the elimination of the special causes. The improvement team may have to proceed to Steps 4 through 8 to find and eliminate the special causes of variation. Once special causes of variation have been eliminated, the process is stable and, therefore, in a state of statistical control.

Another assessment of the process determines whether it is capable of meeting the customer's expectations (refer also to Skill Category 8). The control limits of the process are compared to the customer's specification limits. If the control limits are inside the specification limits, the process is capable of satisfying the customer. If the control limits are outside the specification limits, the process is not capable and common cause variation must be reduced or the process retargeted, or both. Both process location and variation may be a problem. This change in process location and reduction in variation is accomplished by using Steps 4 through 8 of the continuous improvement strategy.

The tools available for this step are data collection methods, Pareto charts, run charts, control charts, process capability, measurement capability, and advanced statistical techniques.
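
As a rough illustration of the capability comparison described above, the sketch below computes three-sigma control limits from sample output data and checks whether they fall inside hypothetical customer specification limits. A real capability study would use proper control-chart constants and far more data.

    import statistics

    # Hypothetical process output measurements (e.g., cycle time in days).
    data = [8.1, 7.9, 8.4, 8.0, 8.2, 7.8, 8.3, 8.1, 8.0, 8.2]

    mean = statistics.mean(data)
    sigma = statistics.stdev(data)
    lower_control, upper_control = mean - 3 * sigma, mean + 3 * sigma

    # Hypothetical customer specification limits.
    lower_spec, upper_spec = 7.0, 9.0

    capable = lower_spec <= lower_control and upper_control <= upper_spec
    print(f"Control limits: {lower_control:.2f} to {upper_control:.2f}")
    print("Process is capable" if capable else
          "Process is not capable: reduce variation or retarget the process")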

6.2.4.2.2 Improve the Process

4. Brainstorm for Improvement

The team should review all relevant information, such as the process flowchart, cause-and-effect diagrams, and control charts. A brainstorming session may be held to generate ideas for reducing input variation. It may help to visualize the ideal process. Improvement ideas must be prioritized and a theory for improvement developed. The what, how, and why of the improvement should be documented so everyone agrees to the plan. All statistical tools should be used in this step, although brainstorming is used the most.

5. Plan How to Test Proposed Improvement(s)

In this step a plan for testing the proposed improvement developed in the preceding step is created. This process improvement plan specifies what data is to be collected, how the data is to be collected, who will collect the data, and how the data will be analyzed after collection.


The specific statistical tools to be used for analysis are defined. Attention is given to designing data collection forms that are simple, concise, and easy to understand. The forms should be tested before being put into use. The process improvement plan is a formal document that is approved by the improvement team.

6. Analyze Results

The plan developed in the prior step is implemented. The required data is collected using the previously tested forms, and then analyzed using the statistical tools noted in the process improvement plan. The plan is reviewed at improvement team meetings to track progress on process improvement. The tools usually used in this step are data collection methods, flowcharts, run charts, scatter diagrams, histograms, measurement capability, control charts, process capability, and advanced statistical techniques.

7. Compare Results

This step uses the same statistical tools as in Step 3 to compare the test results with those predicted in Step 4. Has the process been improved? Do the test results agree with scientific or engineering theory? If the results are not as expected, can they be explained? When results are not as expected, it is not a failure - something is still learned about the process and its variation. The new information will be used to develop a new theory. Care is taken to document lessons learned and accomplishments. A set of before and after Pareto charts, histograms, run charts, or control charts is generally used in this step.

8. Change Process or Redo Steps 4-8

If the results are not as expected, then based on what was learned, a new method for improvement must be developed by reverting back to Step 4. If the results are as expected, and are practical and cost-effective, the change recommended in the process improvement plan is implemented. The implementation should be monitored to assure that the gains made in reducing variation are maintained. The team then returns to Step 2 to update the process documentation.

A decision is now required concerning the next improvement effort. Should the team continue to improve this process by further reducing variation due to common causes, or start working on another process that is not meeting customer requirements? The answer will probably be based on an economic analysis. If it is economically desirable to continue to reduce the process variation, the team develops another improvement method and repeats Steps 4 through 8. If the decision is made to improve a different process, the process begins again at Step 1 by defining a team.


Skill Category 7
Quality Control Practices

Testing Concepts page 7-1
Verification and Validation Methods page 7-12
Software Change Control page 7-18
Defect Management page 7-20

Quality control practices should occur during product development, product acquisition, product construction at the end of development/acquisition, and throughout product change and operation. During development, the quality control process is frequently called verification and at the conclusion of development, it is called validation. This skill category will address the various types of controls and when they are best used in the process. The quality practitioner should also be familiar with verification and validation techniques, the framework for developing testing tactics, change control and configuration management.

7.1 Testing Concepts

Many testers fail to do testing effectively and efficiently because they do not know the basic concepts of testing. The purpose of this section is to describe those basic testing concepts.

7.1.1 The Testers’ Workbench

The testers’ workbench is the process used to verify and validate the system structurally and functionally. To understand the testing methodology, it is necessary to understand the workbench concept. Process workbenches are discussed in Skill Category 6.




A testers’ workbench is one part of the software development life cycle, which is comprised of many workbenches. Two examples of the workbench concept are given below.

• The programmers’ workbench for one of the steps to build a system is:
• Input (program specifications) is given to the producer (programmer).
• Work (coding and debugging) is performed; a procedure is followed, and a product or interim deliverable (a program, module, or unit) is produced.
• Work is checked to ensure the product meets the specifications and standards, and that the procedure was followed. If the check finds no problems, the product is released to the next workbench. If the check finds a problem, the product is sent back for rework.

• A project team uses the workbench to guide them through a unit test of computer code.The programmer takes the following steps:• Give input products (e.g., program code) to the tester.• Perform work (execute unit tests), follow a procedure, and produce a product or

interim deliverable (e.g., the test results).• Check work to ensure test results meet test specifications and standards and that

the test procedure was followed. If the check finds no problems, release theproduct (test results) to the next workbench. If the check process finds a problem,the product is sent back for rework.

7.1.2 Test Stages
There are four main testing stages in a structured software development process. They are:

7.1.2.1 Unit Testing
These tests demonstrate that a single program, module, or unit of code functions as designed. For example, observing the result when pressing a function key to complete an action. Tested units are ready for testing with other system components such as other software units, hardware, documentation, or users.

7.1.2.2 Integration Testing
These tests are conducted on tasks that involve more than one application or database, or on related programs, modules, or units of code, to validate that multiple parts of the system interact according to the system design. Each integrated portion of the system is then ready for testing with other parts of the system.

7.1.2.3 System Testing
These tests simulate operation of the entire system and confirm that it runs correctly. Upon completion, the validated system requirements result in a tested system based on the specification developed or purchased.

7-2 Version 6.2.1

Page 161: Casq Cbok Rev 6-2

Quality Control Practices

7.1.2.4 User Acceptance Testing
This real-world test is the most important to the business, and it cannot be conducted in isolation. Internal staff, customers, vendors, or other users interact with the system to ensure that it will function as desired regardless of the system requirements. The result is a tested system based on user needs.

7.1.3 Independent Testing
The primary responsibility of individuals accountable for testing activities is to ensure that quality is measured accurately. Often, knowing that quality is being measured is enough to cause improvements in the applications being developed. The existence of a Tester or someone in the organization devoted to test activities is a form of independence, in the loosest definition.

The roles and reporting structure of test resources differ across, and within, organizations. These resources may be business or systems analysts assigned to perform testing activities, or, less beneficially, they may be Testers who report to the project manager. Ideally, the test resources will have a reporting structure independent from the group designing or developing the application in order to assure that the quality of the application is given as much consideration as the project budget and timeline.

The benefits of independent testing can be seen even in the unit testing stage. Often, successful development teams will have a peer perform unit testing on a program or module. Once a portion of the application is ready for integration testing, the same benefits can be achieved by having an independent person plan and coordinate the integration testing.

Where an independent test team exists, they are usually responsible for system testing, the oversight of user acceptance testing, and providing an unbiased assessment of the quality of an application. The team may also support or participate in other phases of testing as well as executing special test types such as performance and load testing.

An independent test team is usually comprised of a Test Manager or team leader, Key Testers, and additional Testers. The Test Manager should join the team by the beginning of the requirements definition stage. Key Testers may also join the team at this stage on large projects to assist with test planning activities. Other testers join later to assist with the creation of test cases and scripts. Additional Testers, including users who will participate in test execution, usually join the test team right before system testing is scheduled to begin.

The Test Manager ensures that testing is performed, that it is documented, and that testing techniques are established and developed. The manager is also responsible for:

• Planning and estimating tests
• Designing the test strategy
• Ensuring tests are created and executed in a timely and productive manner
• Reviewing analysis and design artifacts
• Chairing the test readiness review

Version 6.2.1 7-3

Page 162: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

• Managing the test effort
• Overseeing acceptance tests

Testers are usually responsible for:

• Developing test cases and procedures
• Planning, capturing, and conditioning test data
• Reviewing analysis and design artifacts
• Executing tests
• Utilizing automated test tools for regression testing
• Preparing test documentation
• Tracking and reporting defects

Other Testers primarily focus on test execution, defect reporting, and regression testing. They may be junior members of the test team, users, marketing or product representatives, or others.

The test team should be represented in all key requirements and design meetings, including JAD or requirements definition sessions, risk analysis sessions, prototype review sessions, etc. They should also participate in all inspection or walkthrough reviews for requirements and design artifacts.

7.1.4 Static versus Dynamic Testing
Static testing is another name for in-process reviewing. It means that the test is being performed without executing the code. Static testing occurs throughout the development life cycle; however, a large part of it takes place during the requirements and design phases in the form of walkthroughs, inspections, and system reviews. Other examples of static testing include code analyzers or writing analyzers.

Dynamic testing (also known as program testing) implies that the code is being executed on a machine.

7.1.5 Verification versus Validation
Verification ensures that the system (software, hardware, documentation, and personnel) complies with an organization’s standards and processes, relying on review of non-executable methods. Validation physically ensures that the system operates according to plan by executing the system functions through a series of tests that can be observed and evaluated. Verification answers the question, “Did we build the system right?” while validation addresses, “Did we build the right system?”

Keep in mind that verification and validation techniques can be applied to every element of the computerized system. You’ll find these techniques in publications dealing with the design and implementation of user manuals and training courses, as well as in industry publications.

7-4 Version 6.2.1

Page 163: Casq Cbok Rev 6-2

Quality Control Practices

7.1.5.1 Computer System Verification and Validation Examples
Verification requires several types of reviews, including requirements reviews, design reviews, code walkthroughs, code inspections, and test reviews. The system user should be involved in these reviews to find defects before they are built into the system. In the case of purchased systems, user input is needed to assure that the supplier makes the appropriate tests to eliminate defects. Table 7-1 shows examples of verification. The list is not exhaustive, but it does show who performs the task and what the deliverables are. For purchased systems, the term “developers” applies to the supplier’s development staff.

Table 7-1 Computer System Verification Examples

Requirements Reviews
Performed By: Developers, Users
Explanation: The study and discussion of the computer system requirements to ensure they meet stated user needs and are feasible.
Deliverable: Reviewed statement of requirements, ready to be translated into system design.

Design Reviews
Performed By: Developers
Explanation: The study and discussion of the computer system design to ensure it will support the system requirements.
Deliverable: System design, ready to be translated into computer programs, hardware configurations, documentation, and training.

Code Walkthroughs
Performed By: Developers
Explanation: An informal analysis of the program source code to find defects and verify coding techniques.
Deliverable: Computer software ready for testing or more detailed inspections by the developer.

Code Inspections
Performed By: Developers
Explanation: A formal analysis of the program source code to find defects as defined by meeting computer system design specifications. Usually performed by a team composed of developers and subject matter experts.
Deliverable: Computer software ready for testing by the developer.

Validation is accomplished simply by executing a real-life function (if you wanted to check to see if your mechanic had fixed the starter on your car, you’d try to start the car). Examples of validation are shown in Table 7-2. As in the table above, the list is not exhaustive.

Determining when to perform verification and validation relates to the development, acquisition, and maintenance of software. For software testing, this relationship is especially critical because:

• The corrections will probably be made using the same process for developing the software. If the software was developed internally using a waterfall methodology, that methodology will probably be followed in making the corrections; on the other hand, if the software was purchased or contracted, the supplier will likely make the correction. You’ll need to prepare tests for either eventuality.

Version 6.2.1 7-5

Page 164: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

• Testers can probably use the same test plans and test data prepared for testing the original software. If testers prepared effective test plans and created extensive test data, those plans and test data can probably be used in the testing effort, thereby reducing the time and cost of testing.

Table 7-2 Computer System Validation Examples

Unit Testing
Performed By: Developers
Explanation: The testing of a single program, module, or unit of code. Usually performed by the developer of the unit. Validates that the software performs as designed.
Deliverable: Software unit ready for testing with other system components, such as other software units, hardware, documentation, or users.

Integrated Testing
Performed By: Developers
Explanation: The testing of related programs, modules, or units of code. Validates that multiple parts of the system interact according to the system design.
Deliverable: Portions of the system ready for testing with other portions of the system.

System Testing
Performed By: Developers, Users
Explanation: The testing of an entire computer system. This kind of testing can include functional and structural testing, such as stress testing. Validates the system requirements.
Deliverable: A tested computer system, based on what was specified to be developed or purchased.

User Acceptance Testing
Performed By: Users
Explanation: The testing of a computer system or parts of a computer system to make sure it will work in the system regardless of what the system requirements indicate.
Deliverable: A tested computer system, based on user needs.

7.1.6 The Life Cycle Testing Concept Example
Life cycle testing involves continuous testing of the system during the developmental process. At predetermined points, the results of the development process are inspected to determine the correctness of the implementation. These inspections identify defects at the earliest possible point.

Life cycle testing cannot occur until a formalized SDLC has been incorporated. Life cycle testing is dependent upon the completion of predetermined deliverables at specified points in the developmental life cycle. If information services personnel have the discretion to determine the order in which deliverables are developed, the life cycle test process becomes ineffective. This is due to variability in the process, which normally increases cost.

The life cycle testing concept can best be accomplished by the formation of a test team. The team is comprised of members of the project who may be both implementing and testing the system. When members of the team are testing the system, they must use a formal testing methodology to clearly

7-6 Version 6.2.1

Page 165: Casq Cbok Rev 6-2

Quality Control Practices

distinguish the implementation mode from the test mode. They also must follow a structured methodology when approaching testing, the same as when approaching system development. Without a specific structured test methodology, the test team concept is ineffective because team members would follow the same methodology for testing as they used for developing the system. Experience shows people are blind to their own mistakes, so the effectiveness of the test team is dependent upon developing the system under one methodology and testing it under another.

The life cycle testing concept is illustrated in Figure 7-1. This illustration shows that when the project starts, both the system development process and system test process begin. The team that is developing the system begins the systems development process and the team that is conducting the system test begins planning the system test process. Both teams start at the same point using the same information. The systems development team has the responsibility to define and document the requirements for developmental purposes. The test team will likewise use those same requirements, but for the purpose of testing the system. At appropriate points during the developmental process, the test team will test the developmental process in an attempt to uncover defects. The test team should use the structured testing techniques outlined in this guide as a basis of evaluating the system development process deliverables.

Figure 7-1 The “V” Concept of Software Testing

During the system test process, an appropriate set of test transactions should be developed, to be completed at the same time as the completion of the application system. When the application meets the acceptance criteria, it can be integrated into the operating environment. During this

Version 6.2.1 7-7

Page 166: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

process, the systems development team and the systems test team work closely together to ensure that the application is properly integrated into the production environment. At that point, the teams again split to ensure the correctness of changes made during the maintenance phase. The maintenance team will make whatever changes and enhancements are necessary to the application system, and the test team will continue the test process to ensure that those enhancements are properly implemented and integrated into the production environment.

In the V-testing concept, your project’s “Do” and “Check” procedures slowly converge from start to finish (see Figure 7-1), which indicates that as the “Do” team attempts to implement a solution, the “Check” team concurrently develops a process to minimize or eliminate the risk. If the two groups work closely together, the high level of risk at a project’s inception will decrease to an acceptable level by the project’s conclusion.

7.1.7 Stress versus Volume versus Performance
Many testers define stress and volume testing as testing the system constraints. A stricter definition is that stress testing tests the built-in constraints of the system, such as internal table size, and volume testing tests the system’s ability in an operating environment to process very large amounts of data. Performance testing tests the system’s ability to meet performance standards, such as a maximum three-second response to a user’s request.
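
As an illustration only, a performance standard of this kind can be checked by timing the operation under test and comparing the elapsed time to the stated limit. The function below is a hypothetical stand-in; the three-second limit comes from the example above.

    # Minimal sketch: check an operation against a three-second response standard.
    import time

    def process_user_request():
        # Stand-in for the real operation whose response time is being measured.
        time.sleep(0.5)

    MAX_RESPONSE_SECONDS = 3.0

    start = time.perf_counter()
    process_user_request()
    elapsed = time.perf_counter() - start

    assert elapsed <= MAX_RESPONSE_SECONDS, (
        f"Response took {elapsed:.2f}s, exceeding the {MAX_RESPONSE_SECONDS}s standard")
    print(f"Response time of {elapsed:.2f}s meets the standard")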

7.1.8 Test Objectives
A test objective (goal) is a statement of what the test team or tester is expected to accomplish during a specific testing activity. Test objectives are usually defined during requirements analysis, and guide the development of test cases, test scripts, and test data.

Test objectives enhance communication both within and outside the project team by defining the scope of the testing effort, and enabling the test manager and project manager to gauge testing progress and success.

Each test objective should contain a statement of purpose and a high-level description of the expected results stated in measurable terms. Completion criteria for test objectives define the success measure for the tests. Test objectives can be easily derived using the system requirements documentation, the test strategy, results of the risk assessment, and the test team assignments. Test objectives are not simply a restatement of the system’s requirements, but the actual way in which the system will be tested in order to assure that the system objective has been met. If requirements are lacking or poorly written, then the test team must have a defined method for uncovering and defining test objectives. Techniques to consider include brainstorming, relating test objectives to the system outputs, developing use cases, or relating test objectives to events or system inputs.

The users and project team must prioritize the test objectives. Usually the highest priority is assigned to objectives related to high priority or high-risk requirements defined for the project. In cases where test time is cut short, test cases supporting the highest priority objectives would be executed first.

7-8 Version 6.2.1

Page 167: Casq Cbok Rev 6-2

Quality Control Practices

As a final step, the test team should perform quality control on this activity. This might entail using a checklist or worksheet to ensure that the process to set test objectives was followed, or reviewing the objectives with the system users.

7.1.9 Reviews and Inspections
Reviews are conducted to utilize the variety of perspectives and talents brought together in a team. The main goal is to identify defects within the stage or phase of the project where they originate, rather than in later test stages; this is referred to as “stage containment.” As reviews are generally greater than 65% efficient in finding defects, and testing is often less than 30% efficient, the advantage is obvious. In addition, since defects identified in the review process are found earlier in the life cycle, they are less expensive to correct.

Another advantage of holding reviews is not readily measurable. Reviews are an efficient method of educating a large number of people on a specific product or project in a relatively short period of time. Semiformal reviews (see Review Formats below) are especially good for this, and are often held for just that purpose. In addition to learning about a specific product or project, team members are exposed to a variety of approaches to technical issues (a cross-pollination effect). Finally, reviews provide training in, and enforce the use of, standards, as nonconformance to standards is considered a defect and reported as such.

The timing and the purpose of a review determine what type of review takes place, when it takes place, and how it is conducted. Reviews are performed during the development process, at the end of a phase, and at the end of the project.

7.1.9.1 Review Formats
There are three review formats as follows:

7.1.9.1.1 Informal Review
This review is generally a one-on-one meeting between the producer of a work product and a peer or co-worker, and is initiated as a request for input regarding a particular artifact or problem. There is no agenda, no preparation time, and results are not formally reported. These reviews occur on an as-needed basis throughout each phase of a project.

7.1.9.1.2 Semiformal Review (or Walkthrough)
This review is facilitated by the producer of the material being reviewed (e.g., documentation or code). The participants are led through the material in one of two formats: the presentation is made without interruptions and comments are given at the end, or comments are made throughout. In either case, the issues raised are captured and published in a report distributed to the participants. Possible solutions for uncovered defects are typically not discussed during the review. Semiformal reviews should occur multiple times during a phase for segments or “packages” of work.

Version 6.2.1 7-9

Page 168: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

7.1.9.1.3 Formal Review (or Inspection)
This review is facilitated by a knowledgeable individual called a moderator, who is not the producer or a team member of the product under review. The meeting is planned in advance, and material is distributed to participants before the review so they will be familiar with the topic and arrive prepared. Full participation by all members of the review team is required; therefore, the quality of a formal review is directly dependent on the preparation of the participants. A recorder assists the moderator by capturing issues and action items, and publishing them in a formal report with distribution to participants and management. Defects found are tracked through resolution, usually by way of the existing defect-tracking system. Formal reviews may be held at any time, and apply to both development and test products.

Regardless of the format, three rules apply to all reviews:

1. The product is reviewed, not the producer

2. Defects and issues are identified, not corrected during the session

3. All members of the review team are responsible for the results of the review

7.1.9.2 In-Process Reviews
In-process reviews are used to examine a product during a specific time period of its life cycle, such as during the design activity. They are usually limited to a segment of a project, with the goal of identifying defects as work progresses, rather than at the close of a phase or even later, when they are more costly to correct. These reviews may use an informal, semiformal, or formal review format.

7.1.9.3 Checkpoint Reviews
These are facilitated reviews held at predetermined points in the development process. The objective is to evaluate a system as it is being specified, designed, implemented, and tested. Checkpoint reviews focus on ensuring that critical success factors are being adequately addressed during system development. The participants are subject matter experts on the specific factors to be reviewed against, and could include customer representatives, analysts, programmers, vendors, auditors, etc. For example, if system performance was identified as a critical requirement, three checkpoint reviews might be set up at the end of the requirements, design, and coding phases to ensure there were no performance issues before proceeding to the next phase. Instead of walking team members through a general checklist (as would be done in a phase-end review), a designated performance expert would look specifically at whether performance requirements were being met.

7.1.9.4 Phase-End Reviews
Phase-end reviews (also called Decision-Point or Gate reviews) look at the product for the main purpose of determining whether to continue with planned activities. In contrast to the checkpoint reviews, which focus on critical success factors, phase-end reviews are more general in nature.

Phase-end reviews are held at the end of each phase, in a formal review format. Defects found are tracked through resolution, usually through a defect-tracking system. Although there may be more,

7-10 Version 6.2.1

Page 169: Casq Cbok Rev 6-2

Quality Control Practices

the most common phase-end reviews are listed below. Project status, risks, and non-technical issues are also reviewed.

7.1.9.4.1 Software Requirements Review
This review is aimed at verifying and approving the documented software requirements for the purpose of establishing a baseline and identifying analysis packages. The Development Plan, Software Test Plan, Documentation Plan, Training Plan, and Configuration Management Plan derived from the requirements are also verified and approved.

7.1.9.4.2 Critical Design Review
This review baselines the Detailed Design Specification (the “build to” document). Normally, coding officially begins at the close of this review. Test cases are also reviewed and approved.

7.1.9.4.3 Test Readiness Review
This review is performed when the appropriate application components are near completion. The review determines the readiness of the application or project for system and acceptance testing.

It is important to note that although the completion of a phase-end review signals the formal beginning of the next phase, subsequent phases may have already been started. In fact, in iterative development methodologies, each analysis or design “package” or segment of the application may be in a different phase of the project simultaneously. Careful analysis and planning are critical to ensure that the iterations are sequenced appropriately to minimize the risk of a defect found in one iteration causing excessive rework in previous iterations.

7.1.9.5 Post-Implementation Reviews
Post-implementation reviews (also known as "postmortems") are conducted in a formal format up to six months after implementation is complete, in order to audit the process based on actual results. They are held to assess the success of the overall process after release, and to identify any opportunities for process improvement.

These reviews focus on questions such as: “Is the quality what was expected?” “Did the process work?” “Would buying a tool have improved the process?” or “Would automation have sped up the process?” Post-implementation reviews are of value only if some use is made of the findings. The quality assurance practitioner draws significant insight into the processes used and their behaviors.

7.1.9.6 Inspections
Inspections are formal manual techniques that are a natural evolution of desk checking. This procedure requires a team, usually directed by a moderator. The team includes the developer, but the remaining members and the moderator should not be directly involved in the development effort. Both techniques are based on a reading of the product (e.g., requirements, specifications, or code) in a formal meeting environment with specific rules for evaluation. The difference between inspection and walkthrough lies in the conduct of the meeting. Both methods require preparation and study by the team members, and scheduling and coordination by the team moderator.

Version 6.2.1 7-11

Page 170: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

Inspection involves a step-by-step reading of the product, with each step checked against a predetermined list of criteria. These criteria include checks for historically common errors. Guidance for developing the test criteria can be found elsewhere. The developer is usually required to narrate the reading of the product. The developer finds many errors just by the simple act of reading aloud. Others, of course, are determined because of the discussion with team members and by applying the test criteria.

At the problem definition stage, inspections can be used to determine if the requirements satisfy the testability and adequacy measures as applicable to this stage in the development. If formal requirements are developed, formal methods, such as correctness techniques, may be applied to ensure adherence with the quality factors.

Inspections should be performed at the preliminary and detailed design stages. Design inspections will be performed for each module and module interface. Adequacy and testability of the module interfaces are very important. Any changes that result from these analyses will cause at least a partial repetition of the verification at both stages and between the stages. A reexamination of the problem definition and requirements may also be required.

Finally, the inspection procedures should be performed on the code produced during the construction stage. Each module should be analyzed separately and as integrated parts of the finished software.

7.2 Verification and Validation Methods
Verification and validation represent both static testing (verification) and dynamic testing (validation). Together they comprise the test activities. The methods available for verification and validation are briefly described.

7.2.1 Management of Verification and Validation
Management of software development verification and validation (V&V) activities begins at the start of the project, and is performed for all software life cycle processes and activities. This activity continuously reviews the V&V effort, revises the Software V&V Plan as necessary based upon updated project schedules and development status, and coordinates the results with the project team. The V&V manager assesses each proposed change to the system and software, identifies the software requirements that are affected by the change, and plans the V&V tasks to address the change. Each proposed change must also be assessed to determine whether any new hazards or risks are introduced in, or eliminated from, the software. The V&V plan is revised as necessary by updating tasks or modifying the scope and intensity of existing V&V tasks.

At key project milestones, such as the requirements review, design review, or test readiness review, the V&V manager consolidates the V&V results to establish supporting evidence regarding whether to proceed to the next set of software development activities. Whenever necessary, it must

7-12 Version 6.2.1

Page 171: Casq Cbok Rev 6-2

Quality Control Practices

also be determined whether a V&V task needs to be repeated as a result of changes in the application or work products.

The minimum tasks performed by V&V management include:

• Create the Software V&V Plan
• Conduct Management Review of V&V
• Support Management and Technical Reviews
• Interface with Organizational and Supporting Processes
• Creation of V&V

7.2.2 Verification Techniques
Verification is the process of confirming that interim deliverables have been developed according to their inputs, process specifications, and standards. Verification techniques are listed below.

7.2.2.1 Feasibility Reviews
Tests for this structural element verify the logic flow of a unit of software (e.g., verifying that the software could conceivably perform after the solution is implemented the way the developers expect). Output from this review is a preliminary statement of high-level market requirements that becomes input to the requirements definition process (where the detailed technical requirements are produced).

7.2.2.2 Requirements Reviews
These reviews examine system requirements to ensure they are feasible and that they meet the stated needs of the user. They also verify software relationships; for example, the structural limits of how much load (e.g., transactions or number of concurrent users) a system can handle. Output from this review is a statement of requirements ready to be translated into system design.

7.2.2.3 Design Reviews
These structural tests include study and discussion of the system design to ensure it will support the system requirements. Design reviews yield a system design, ready to be translated into software, hardware configurations, documentation, and training.

7.2.2.4 Code Walkthroughs
These are informal, semi-structured reviews of the program source code against specifications and standards to find defects and verify coding techniques. When done, the computer software is ready for testing or more detailed code inspections by the developer.

Version 6.2.1 7-13

Page 172: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

7.2.2.5 Code Inspections or Structured Walkthroughs
These test techniques use a formal, highly structured session to review the program source code against clearly defined criteria (System Design Specifications, product standards) to find defects. Completion of the inspection results in computer software ready for testing by the developer.

7.2.2.6 Requirements Tracing
At each stage of the life cycle (beginning with requirements or stakeholder needs) this review is used to verify that inputs to that stage are correctly translated and represented in the resulting deliverables. Requirements must be traced throughout the rest of the software development life cycle to ensure they are delivered in the final product. This is accomplished by tracing the functional and non-functional requirements into analysis and design models, class and sequence diagrams, and test plans and code. The level of traceability also enables project teams to track the status of each requirement throughout the development and test process.

7.2.3 Validation Techniques
Validation assures that the end product (system) meets requirements and expectations under defined operating conditions. Within an IT environment, the end product is typically executable code. Validation ensures that the system operates according to plan by executing the system functions through a series of tests that can be observed and evaluated for compliance with expected results.

Table 7-3 illustrates how various techniques can be used throughout the standard test stages. Each technique is described below.

Table 7-3 Validation Techniques Used in Test Stages

Unit Test: White-box, Black-box
String/Integration Test: White-box, Black-box, Incremental, Thread, Regression
System Test: Black-box, Incremental, Thread, Regression
Acceptance Test: Black-box, Regression

7.2.3.1 White-Box
White-box testing (logic driven) assumes that the path of logic in a unit or program is known. White-box testing consists of testing paths, branch by branch, to produce predictable results. Multiple white-box testing techniques are listed below. These techniques can be combined as appropriate for the application, but should be limited, as too many techniques can lead to an unmanageable number of test cases.

• Statement Coverage: Execute all statements at least once.
• Decision Coverage: Execute each decision direction at least once.
• Condition Coverage: Execute each decision with all possible outcomes at least once.
• Decision/Condition Coverage: Execute all possible combinations of condition outcomes in each decision, treating all iterations as two-way conditions exercising the loop zero times and once.
• Multiple Condition Coverage: Invoke each point of entry at least once.
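
To make the difference between the first two coverage levels concrete, consider the small, hypothetical function below (not taken from the CBOK): one test case executes every statement, but decision coverage requires a second case for the other branch direction.

    # Illustrative sketch: statement coverage versus decision coverage.
    def shipping_fee(total_cents, is_member):
        fee = 500                  # default fee, in cents
        if is_member:              # the decision under test
            fee = 0
        return fee

    # Statement coverage: one case taking the True branch executes every statement.
    assert shipping_fee(2000, True) == 0

    # Decision coverage: both directions of the decision must be exercised,
    # so a second case for the False direction is also required.
    assert shipping_fee(2000, False) == 500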

7-14 Version 6.2.1

Page 173: Casq Cbok Rev 6-2

Quality Control Practices

When evaluating the paybacks received from various test techniques, white-box or program-based testing produces a higher defect yield than the other dynamic techniques when planned and executed correctly.

7.2.3.2 Black-Box
In black-box testing (data or condition driven), the focus is on evaluating the function of a program or application against its currently approved specifications. Specifically, this technique determines whether combinations of inputs and operations produce expected results. As a result, the initial conditions and input data are critical for black-box test cases.

Three successful techniques for managing the amount of input data required include:

7.2.3.2.1 Equivalence Partitioning
An equivalence class is a subset of data that represents a larger class. Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class. For example, a program which edits credit limits within a given range (at least $10,000 but less than $15,000) would have three equivalence classes:

• Less than $10,000 (invalid)
• Equal to $10,000 but not as great as $15,000 (valid)
• $15,000 or greater (invalid)
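
A minimal sketch of this example, using a hypothetical edit function and one representative value chosen from each equivalence class, might look like this:

    # Sketch: one representative test value per equivalence class for the
    # credit-limit edit (valid range: at least $10,000 but less than $15,000).
    def credit_limit_is_valid(amount):
        return 10_000 <= amount < 15_000

    equivalence_cases = [          # (representative value, expected result)
        (5_000, False),            # class 1: less than $10,000 (invalid)
        (12_500, True),            # class 2: at least $10,000 but less than $15,000 (valid)
        (20_000, False),           # class 3: $15,000 or greater (invalid)
    ]

    for amount, expected in equivalence_cases:
        assert credit_limit_is_valid(amount) == expected, amount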

7.2.3.2.2 Boundary Analysis
This technique consists of developing test cases and data that focus on the input and output boundaries of a given function. In the credit limit example, boundary analysis would test the:

• Low boundary plus or minus one ($9,999 and $10,001)
• Boundaries ($10,000 and $15,000)
• Upper boundary plus or minus one ($14,999 and $15,001)
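
Continuing the same hypothetical credit-limit edit, the boundary values listed above translate directly into test data:

    # Sketch: boundary-value test data for the credit-limit edit
    # (valid when at least $10,000 and below $15,000).
    def credit_limit_is_valid(amount):
        return 10_000 <= amount < 15_000

    boundary_cases = [
        (9_999, False), (10_001, True),    # low boundary plus or minus one
        (10_000, True), (15_000, False),   # the boundaries themselves
        (14_999, True), (15_001, False),   # upper boundary plus or minus one
    ]

    for amount, expected in boundary_cases:
        assert credit_limit_is_valid(amount) == expected, amount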

7.2.3.2.3 Error Guessing
This is based on the theory that test cases can be developed from the intuition and experience of the tester. For example, in a test where one of the inputs is the date, a tester may try February 29, 2000 or February 29, 2001.


Version 6.2.1 7-15

Page 174: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

7.2.3.3 Incremental
Incremental testing is a disciplined method of testing the interfaces between unit-tested programs and between system components. It involves adding unit-tested programs to a given module or component one by one, and testing each resultant combination. There are two types of incremental testing:

7.2.3.3.1 Top-Down
This method of testing begins testing from the top of the module hierarchy and works down to the bottom using interim stubs to simulate lower interfacing modules or programs. Modules are added in descending hierarchical order.

7.2.3.3.2 Bottom-Up
This method of testing begins testing from the bottom of the hierarchy and works up to the top. Modules are added in ascending hierarchical order. Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display test output.

There are pros and cons associated with each of these methods, although bottom-up testing is generally considered easier to use. Drivers tend to be less difficult to create than stubs, and can serve multiple purposes. Output from bottom-up testing is also often easier to examine, as it always comes from the module directly above the module under test.
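
The stub and driver roles can be sketched in a few lines; the module names below are hypothetical stand-ins for real interfacing components.

    # Sketch: a stub (used in top-down testing) and a driver (used in bottom-up testing).

    # Top-down: the module under test calls a lower-level module that is not yet
    # built, so a stub supplies a canned answer in its place.
    def tax_service_stub(order_total):
        return 0.0                     # canned response standing in for the real tax module

    def price_order(order_total, tax_lookup=tax_service_stub):
        return order_total + tax_lookup(order_total)

    assert price_order(100.0) == 100.0  # exercised with the stub in place

    # Bottom-up: the low-level module exists first, so a driver supplies test
    # input, calls the module, and displays the output.
    def calculate_tax(order_total):
        return round(order_total * 0.07, 2)

    def tax_driver():
        for order_total in (0.0, 10.0, 99.99):
            print(order_total, "->", calculate_tax(order_total))

    tax_driver()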

7.2.3.4 Thread
This test technique, which is often used during early integration testing, demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application. Thread testing and incremental testing are usually used together. For example, units can undergo incremental testing until enough units are integrated and a single business function can be performed, threading through the integrated components.

When testing client/server applications, these techniques are extremely critical. An example of an effective strategy for a simple two-tier client/server application could include:

1. Unit and bottom-up incrementally test the application server components.

2. Unit and incrementally test the GUI or client components.

3. Test the network.

4. Thread test a valid business transaction through the integrated client, server, and network.

7.2.3.5 Regression
There are always risks associated with introducing change to an application. To reduce this risk, regression testing should be conducted during all stages of testing after a functional change, reduction, improvement, or repair has been made. This technique assures that the change will not cause adverse effects on parts of the application or system that were not supposed to change.

7-16 Version 6.2.1

Page 175: Casq Cbok Rev 6-2

Quality Control Practices

Regression testing can be a very expensive undertaking, both in terms of time and money. The test manager’s objective is to maximize the benefits of the regression test while minimizing the time and effort required for executing the test.

The test manager must choose which type of regression test minimizes the impact to the project schedule when changes are made, and still assures that no new defects were introduced. The types of regression tests include:

7.2.3.5.1 Unit Regression Testing
This retests a single program or component after a change has been made. At a minimum, the developer should always execute unit regression testing when a change is made.

7.2.3.5.2 Regional Regression Testing
This retests modules connected to the program or component that have been changed. If accurate system models or system documentation are available, it is possible to use them to identify system components adjacent to the changed components, and define the appropriate set of test cases to be executed. A regional regression test executes a subset of the full set of application test cases. This is a significant timesaving over executing a full regression test, and still helps assure the project team and users that no new defects were introduced.

7.2.3.5.3 Full Regression Testing
This retests the entire application after a change has been made. A full regression test is usually executed when multiple changes have been made to critical components of the application. This is the full set of test cases defined for the application.

When an application feeds data to another application, called the “downstream” application, a determination must be made whether regression testing should be conducted with the integrated application. Testers from both project teams cooperate to execute this integrated test, which involves passing data from the changed application to the downstream application, and then executing a set of test cases for the receiving application to assure that it was not adversely affected by the changes.
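
Where a system model is available, selecting a regional regression suite can be sketched as a lookup of the changed component plus its adjacent components. The component names, adjacency map, and test case IDs below are hypothetical.

    # Sketch: choosing a regional regression suite from a simple adjacency map.
    adjacent_components = {
        "billing": ["invoicing", "payments"],
        "invoicing": ["billing", "reporting"],
    }
    test_cases_by_component = {
        "billing": ["TC-101", "TC-102"],
        "invoicing": ["TC-201"],
        "payments": ["TC-301", "TC-302"],
        "reporting": ["TC-401"],
    }

    def regional_regression_suite(changed_component):
        # Retest the changed component plus every component adjacent to it.
        scope = [changed_component] + adjacent_components.get(changed_component, [])
        suite = []
        for component in scope:
            suite.extend(test_cases_by_component.get(component, []))
        return suite

    print(regional_regression_suite("billing"))   # TC-101, TC-102, TC-201, TC-301, TC-302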

7.2.4 Structural and Functional Testing
Structural testing is considered white-box testing because knowledge of the internal logic of the system is used to develop test cases. Structural testing includes path testing, code coverage testing and analysis, logic testing, nested loop testing, and similar techniques. Unit testing, string or integration testing, load testing, stress testing, and performance testing are considered structural.

Functional testing addresses the overall behavior of the program by testing transaction flows, input validation, and functional completeness. Functional testing is considered black-box testing because no knowledge of the internal logic of the system is used to develop test cases. System testing, regression testing, and user acceptance testing are types of functional testing.

As part of verifying and validating the project team’s solution, testers perform structural and functional tests that can be applied to every element of a computerized system. Both methods

Version 6.2.1 7-17

Page 176: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

together validate the entire system. For example, a functional test case might be taken from the documentation description of how to perform a certain function, such as accepting bar code input. A structural test case might be taken from a technical documentation manual. To effectively test systems, both methods are needed. Each method has its pros and cons, which are listed below:

7.2.4.1 Structural Testing
• Advantages
The logic of the software’s structure can be tested.
Parts of the software will be tested which might have been forgotten if only functional testing was performed.
• Disadvantages
Its tests do not ensure that user requirements have been met.
Its tests may not mimic real-world situations.

7.2.4.2 Functional Testing
• Advantages
Simulates actual system usage.
Makes no system structure assumptions.
• Disadvantages
Potential of missing logical errors in software.
Possibility of redundant testing.

7.3 Software Change Control
Controlling software changes requires both a configuration management process and a change control process. Both are described in this section.

7.3.1 Software Configuration Management
The dynamic nature of most business activities causes software or system changes. Changes require well-formulated and well-documented procedures to prevent the manipulation of programs for unauthorized purposes. The primary objective of configuration management (or change control) is to get the right change installed at the right time. Change control concerns should be identified so that proper control mechanisms can be established to deal with the concerns.

7-18 Version 6.2.1

Page 177: Casq Cbok Rev 6-2

Quality Control Practices

Some key points regarding changes include:

• Each release of software, documentation, database, etc., should have a unique version number. Changes should be incorporated through new versions of the program. There should be a process for moving versions in and out of production on prescribed dates.

• Procedures should exist for maintaining the production and source libraries. They should address when to add to the library and when prior versions should be deleted. Care should be taken to regularly review libraries for obsolete programs, as large libraries can negatively impact operations performance.

• Project documentation such as requirements specifications, design documents, test plans, standards, procedures, and guidelines should also be identified with version numbers and kept under version control to ensure the project team is working with the latest, approved documents.

• Other environmental considerations to keep under version control are the operating system and hardware, as changes to either of these have the potential for impacting the project.

• Testing will not uncover all of the problems. As a result, people should be assigned to review output immediately following changes. If this is a normal function, then those people should be notified that a change has occurred.

• Each time an application is changed, the backup data required for recovery purposes may also have to be changed. Since this step occurs outside the normal change procedures, it may be overlooked. Backup data includes the new program versions, the job control language associated with those programs and other documentation procedures involved in making the system operational after a problem occurs.

• Modifying an application system may also require modifying the recovery procedures. If new files have been established, or if new operating procedures or priorities have been designed, they must be incorporated into the recovery procedures.

7.3.2 Change Control Procedures
Several procedures are necessary to maintain control over program changes.

• The nature of the proposed change should be explained in writing, and formally approved by a responsible individual. Major changes should be approved by the systems-planning steering committee, commonly called the CCB or Configuration Control Board, in the same manner as for new systems. Minor changes may only require the joint approval of the IT manager and senior personnel in the user department. Documenting the proposed change clears up any initial misunderstandings that may arise when only verbal requests are made. In addition, written proposals provide a history of changes in a particular system.

• Developers should make the program changes, not the operations group. Any change should be supported by adequate systems documentation. If the operators were authorized to make minor changes, it would greatly increase the difficulty of controlling versions and of maintaining up-to-date documentation.

Version 6.2.1 7-19

Page 178: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

• Someone independent of the person who designed and made the change should be responsible for testing the final revised program. The results should be recorded on program change registers and sent to the IT manager for approval. Operations should accept only properly approved changes.

• Finally, the documentation system should be updated with all change sheets or change registers and printouts.

7.4 Defect Management
A defect is a variance from expectations. To manage defects properly requires a process that prevents, discovers, tracks, resolves, and improves processes to reduce future defect occurrences.

7.4.1 Defect Management Process
The general principles of a Defect Management Process are as follows:

• The primary goal is to prevent defects. Where this is not possible or practical, the goals are to find the defect as quickly as possible and to minimize the impact of the defect.

• The defect management process, like the entire software development process, should be risk driven, i.e., strategies, priorities, and resources should be based on an assessment of the risk and the degree to which the expected impact of a risk can be reduced (see Skill Category 8).

• Defect measurement should be integrated into the development process. Information on defects should be captured at the source as a natural by-product of doing the job. It should not be done after the fact by people unrelated to the project or system.

• As much as possible, the capture and analysis of the information should be automated. The QA analyst should look for trends and perform a root-cause analysis to identify special and common cause problems.

• Defect information should be used to improve the process. As imperfect or flawed processes cause most defects, processes may need to be altered to prevent defects.

7.4.2 Defect Reporting
Recording the defects identified at each stage of the test process is an integral part of a successful life cycle testing approach. The purpose of this activity is to create a complete record of the discrepancies identified during testing. The information captured is used in multiple ways throughout the project, and forms the basis for quality measurement.

A defect can be defined in one of two ways. From the producer’s viewpoint, a defect is a deviation from specifications, whether missing, wrong, or extra. From the Customer’s viewpoint, a defect is

7-20 Version 6.2.1

Page 179: Casq Cbok Rev 6-2

Quality Control Practices

anything that causes customer dissatisfaction, whether in the requirements or not. It is critical that defects identified at each stage of the life cycle be tracked to resolution.

Defects are recorded for four major purposes:

• To ensure the defect is corrected
• To report status of the application
• To gather statistics used to develop defect expectations in future applications
• To improve the software development process

Most project teams use some type of tool to support the defect tracking process. This tool could be as simple as a white board or a table created and maintained in a word processor, or one of the more robust tools available today on the market. Tools marketed for this purpose usually come with a number of customizable fields for tracking project specific data in addition to the basics. They also provide advanced features such as standard and ad-hoc reporting, e-mail notification to developers or testers when a problem is assigned to them, and graphing capabilities.

7.4.3 Severity versus Priority
Based on predefined severity descriptions, the test team should assign the severity of a defect objectively. For example, a “severity one” defect may be defined as one that causes data corruption, a system crash, security violations, etc. Severity levels should be defined at the start of the project so that they are consistently assigned and understood by the team. This foresight can help test teams avoid the common disagreements with development teams about the criticality of a defect.

In large projects, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed. The priority assigned to a defect is usually more subjective as it may be based on input from users regarding which defects are most important, resources available, risk, etc.
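
A minimal sketch of a defect record that keeps severity (assigned objectively against predefined definitions) separate from priority (assigned later, often more subjectively) might look like the following; the field names are illustrative rather than prescribed by the CBOK or any particular tool.

    # Sketch: a defect record with separate severity and priority fields.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Defect:
        defect_id: str
        summary: str
        severity: int          # 1 = data corruption, crash, security violation, per predefined definitions
        priority: int = 3      # fix order, assigned later from user input, resources, risk
        status: str = "Open"
        reported_on: date = field(default_factory=date.today)

    d = Defect("DEF-042", "Order total corrupted when discount applied", severity=1)
    d.priority = 1             # high severity often, but not always, implies high priority
    print(d)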

7.4.4 Using Defects for Process Improvement
Using defects to improve processes is not done by many organizations today, but it offers one of the greatest areas of payback. NASA emphasizes the point that any defect represents a weakness in the process. Seemingly unimportant defects are, from a process perspective, no different from critical defects. It is only the developer’s good luck that prevents a defect from causing a major failure. Even minor defects, therefore, represent an opportunity to learn how to improve the process and prevent potentially major failures. While the defect itself may not be a big deal, the fact that there was a defect is a big deal.

Based on the research team findings, this activity should include the following:

• Go back to the process that originated the defect to understand what caused the defect
• Go back to the verification and validation process, which should have caught the defect earlier

Version 6.2.1 7-21

Page 180: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

Not only can valuable insight be gained as to how to strengthen the review process, these steps make everyone involved in these activities take them more seriously. This human factor dimension alone, according to some of the people the research team interviewed, can have a very large impact on the effectiveness of the review process.

NASA takes an additional step of asking the question: “If this defect could have gotten this far into the process before it was captured, what other defects may be present that have not been discovered?” Thus, not only is the process strengthened to prevent defects, it is strengthened to find defects which have been created but not yet discovered. This aggressiveness should be mandatory on life-critical systems.

7-22 Version 6.2.1

Page 181: Casq Cbok Rev 6-2

Skill Category 8
Metrics and Measurement

A properly established measurement system is used to help achieve missions, visions, goals, and objectives. Measurement data is most reliable when it is generated as a by-product of producing a product or service. The QA analyst must ensure that quantitative data is valid and reliable, and presented to management in a timely and easy-to-use manner.

Measurement can be used to gauge the status, effectiveness and efficiency of processes, customer satisfaction, product quality, and as a tool for management to use in their decision-making processes. This skill category addresses measurement concepts, the use of measurement in a software development environment, variation, process capability, risk management, the ways measurement can be used and how to implement an effective measurement program.

Measurement Concepts page 8-1
Measurement in Software page 8-7
Variation and Process Capability page 8-11
Risk Management page 8-17
Implementing a Measurement Program page 8-21

8.1 Measurement Concepts
To effectively measure, one needs to know the basic concepts of measurement. This section provides those basic measurement concepts.


Version 6.2.1 8-1

Page 182: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

8.1.1 Standard Units of Measure
A measure is a single quantitative attribute of an entity. It is the basic building block for a measurement program. Examples of measures are lines of code (LOC), work effort, or number of defects. Since quantitative measures can be compared, measures should be expressed in numbers. For example, the measure LOC refers to the “number” of lines and work effort refers to the “number” of hours, days, or months.

Measurement cannot be used effectively until the standard units of measure have been defined. For example, talking about lines of code does not make sense until the measure LOC has been defined. Lines of code may mean LOC written, executable LOC written, or non-compound LOC written. If a line of code contained a compound statement (such as a nested IF statement two levels deep) it could be counted as one or two lines of code. Additionally, organizations may use weighting factors; for example, one verb would be weighted as more complex than other verbs in the same programming language.

Standard units of measure are the base on which all measurement exists. Measurement programs typically have between five and fifty standard units.

8.1.2 Metrics
A metric is a derived (calculated or composite) unit of measurement that cannot be directly observed, but is created by combining or relating two or more measures. A metric normalizes data so that comparison is possible. Since metrics are combinations of measures they can add more value in understanding or evaluating a process than plain measures. Examples of metrics are mean time to failure and actual effort compared to estimated effort.
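
A small sketch, using an assumed definition of LOC as non-blank, non-comment lines, shows the distinction: the line count and the defect count are measures, while defects per KLOC is a metric derived from them.

    # Sketch: two measures (LOC, defect count) and one derived metric (defects per KLOC).
    def count_loc(source_text):
        # Assumed standard unit: non-blank lines that are not comments.
        lines = [line.strip() for line in source_text.splitlines()]
        return sum(1 for line in lines if line and not line.startswith("#"))

    program = """# sample module
    x = 1

    if x > 0:
        print(x)
    """

    loc = count_loc(program)                         # a measure
    defects_found = 2                                # a measure (hypothetical count)
    defects_per_kloc = defects_found / loc * 1000    # a metric combining two measures
    print(loc, round(defects_per_kloc, 1))           # 3 666.7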

8.1.3 Objective and Subjective Measurement
Objective measurement uses hard data that can be obtained by counting, stacking, weighing, timing, etc. Examples include number of defects, hours worked, or completed deliverables. An objective measurement should result in identical values for a given measure, when measured by two or more qualified observers.

Subjective data is normally observed or perceived. It is a person's perception of a product or activity, and includes personal attitudes, feelings and opinions, such as how easy a system is to use, or the skill level needed to execute the system. With subjective measurement, even qualified observers may determine different values for a given measure, since their subjective judgment is involved in arriving at the measured value. The reliability of subjective measurement can be improved through the use of guidelines, which define the characteristics that make the measurement result one value or another.

Objective measurement is more reliable than subjective measurement, but as a general rule, subjective measurement is considered more important. The more difficult something is to measure, the more valuable it is. For example, it is more important to know how effective a person is in

8-2 Version 6.2.1

Page 183: Casq Cbok Rev 6-2

Metrics and Measurement

performing a job (subjective measurement), than knowing they got to work on time (objective measurement). Following are a few other examples of objective and subjective measures:

• The size of a software program measured in LOC is an objective product measure. Any informed person, working from the same definition of LOC, should obtain the same measure value for a given program.

• The classification of software as user-friendly is a subjective product measure. For ascale of 1-5, customers of the software would likely rate the product differently. Thereliability of the measure could be improved by providing customers with a guidelinethat describes how having or not having a particular attribute affects the scale.

• Development time is an objective process measure.• Level of programmer experience is a subjective process measure.

8.1.4 Types of Measurement Data

Before measurement data is collected and used, the type of information involved must be considered. It should be collected for a specific purpose. Usually the data is used in a process model, used in other calculations, or is subjected to statistical analyses. Statisticians recognize four types of measured data, which are summarized in Table 8-1 and described below.

Table 8-1 Types of Measured Data

Data Type   Possible Operations   Description of Data
Nominal     = ≠                   Categories
Ordinal     < >                   Rankings
Interval    + -                   Differences
Ratio       /                     Absolute Zero

Operations for a data type also apply to all data types appearing below it.

8.1.4.1 Nominal Data

This data can be categorized. For example, a program can be classified as database software, operating system, etc. Nominal data cannot be subjected to arithmetic operations of any type, and the values cannot be ranked in any "natural order." The only possible operation is to determine whether something is the same type as something else. Nominal data can be objective or subjective, depending on the rules for classification.

8.1.4.2 Ordinal Data

This data can be ranked, but differences or ratios between values are not meaningful. For example, programmer experience level may be measured as low, medium, or high. For ordinal data to be used in an objective measurement, the criteria for placement in the various categories must be well defined; otherwise, it is subjective.



8.1.4.3 Interval Data

This data can be ranked and can exhibit meaningful differences between values. Interval data has no absolute zero, and ratios of values are not necessarily meaningful. For example, a program with a complexity value of 6 is four units more complex than a program with a complexity of 2, but it is probably not meaningful to say that the first program is three times as complex as the second. T. J. McCabe's complexity metric is an example of an interval scale.

8.1.4.4 Ratio Data

This data has an absolute zero and meaningful ratios can be calculated. Measuring program size by LOC is an example. A program of 2,000 lines can be considered twice as large as a program of 1,000 lines.

It is important to understand the measurement scale associated with a given measure or metric. Many proposed measurements use values from an interval, ordinal, or nominal scale. If the values are to be used in mathematical equations designed to represent a model of the software process, measurements associated with a ratio scale are preferred, since the ratio scale allows mathematical operations to be meaningfully applied.

8.1.5 Measures of Central Tendency

The measures of central tendency are the mean, median, and mode. The mean is the average of the items in the population; the median is the item at which half the items in the population are below this item and half the items are above this item; and the mode is the item that is repeated most frequently.

For example, if a population of numbers is: 1, 2, 2, 3, 4, 5, and 11:

• The mean is "4" because 1 + 2 + 2 + 3 + 4 + 5 + 11 = 28 and 28 ÷ 7 = 4.
• The median is "3" because there are three values lower and three values higher than 3.
• The mode is "2" because that is the item with the most occurrences.
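The same result can be checked with a few lines of Python; the population values come from the example above.

import statistics

population = [1, 2, 2, 3, 4, 5, 11]
print(statistics.mean(population))    # 4
print(statistics.median(population))  # 3
print(statistics.mode(population))    # 2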

8.1.6 Attributes of Good Measurement

Ideally, models should be developed that are capable of predicting process or product parameters, not just describing them. This is facilitated by measures and resulting metrics that are:

• Simple and precisely definable, so it is clear how they can be evaluated
• Objective
• Easily obtainable at reasonable cost
• Valid, measuring what they are intended to measure
• Robust, being relatively insensitive to intuitively small changes in the process or product


Before being approved for use, measures and metrics should be subjected to the six tests described below.

8.1.6.1 Reliability

This test refers to the consistency of measurement. If taken by two people, the same results should be obtained. Sometimes measures are unreliable because of the measurement technique. For example, human error could make counting LOC unreliable, but the use of an automated code analyzer would result in the same answer each time it is run against an unchanged program.

8.1.6.2 Validity

This test indicates the degree to which a measure actually measures what it was intended to measure. If actual project work effort is intended to quantify the total time spent on a software development project, but overtime or time spent on the project by those outside the project team is not included, the measure is invalid for its intended purpose. A measure can be reliable, but invalid. An unreliable measure cannot be valid.

8.1.6.3 Ease of Use and Simplicity

These two tests are functions of how easy it is to capture and use the measurement data.

8.1.6.4 Timeliness

This test refers to whether the information can be reported in sufficient time to impact the decisions needed to manage effectively.

8.1.6.5 Calibration

This test indicates the modification of a measurement so it becomes more valid; for example, modifying a customer survey to better reflect the true opinions of the customer.

8.1.7 Using Quantitative Data to Manage an IT Function

An integral part of an IT function is quantitative management, which contains two aspects: measurement dashboards and statistical process control.

8.1.7.1 Measurement Dashboards

Measurement dashboards (also called key indicators) are used to monitor progress and initiate change. A measurement dashboard is analogous to the dashboard on a car. Various key indicators are presented in comparison to their own desired target value (i.e., speed, oil temperature). Arrayed together, they provide an overall snapshot of the car's performance, health, and operating quality. Measurement dashboards help ensure that all critical performance areas are analyzed in relation to other areas. Indicators evaluated alone can lead to faulty conclusions and decision-making.


Measurable results defined by an organization can be organized into dashboards for different levels of management. Line managers use process or tactical dashboards to manage the process. Senior management uses strategic dashboards to manage the function or organization and track to mission, vision, or goals. Using dashboards is known as "management by fact." Figure 8-1 depicts strategic and tactical dashboards.

Figure 8-1 Strategic and Tactical Measurement Dashboards

8.1.7.2 Statistical Process Control

Statistical process control is used to ensure that the process behaves in a consistent manner. Line managers use the principles of statistical process control to assess consistency of products and services, and as a basis for continuous process improvement.

8.1.8 Key Indicators

Key indicators are the metrics used by management to help them fulfill their management responsibilities. Managers decide what key metrics they need. A dashboard is comprised of the total set of key indicators used by a single manager. The concept of "key" means the metric is considered important by the user in managing job responsibilities. Normally a key indicator is created from many different measures. For example, the Dow Jones Industrial Average is created by combining the stock prices of 30 different stocks.
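As a hedged illustration of how a key indicator might be assembled from several measures, the sketch below combines three hypothetical measures with weights chosen purely for the example; it is not a formula prescribed by the CBOK.

# Hypothetical key indicator: a weighted combination of several measures.
def key_indicator(measures, weights):
    """Combine individual measures into a single indicator value."""
    return sum(value * weights.get(name, 0.0) for name, value in measures.items())

project_health = key_indicator(
    {"schedule_pct_complete": 80, "budget_pct_spent": 70, "open_defects": 12},
    {"schedule_pct_complete": 0.5, "budget_pct_spent": 0.3, "open_defects": -1.0},
)
print(project_health)   # 80*0.5 + 70*0.3 - 12*1.0 = 49.0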


8.2 Measurement in Software

The use of measurement in the software life cycle requires the development and use of software metrics, which are standardized through the use of defined units of measure. This measurement program enables management to control and manage software throughout its entire life.

Both the software product and the process by which it is developed can be measured. The software product should be viewed as an abstract object that evolves from an initial statement of need to a finished software system, including source and object code and the various forms of documentation produced during development. Software metrics are studied and developed for use in modeling the software development process. These metrics and models are used to estimate and predict product costs and schedules, and to measure productivity and product quality. Information gained from the measurements and models can be used in the management and control of the development process, leading to improved results.

There is no clearly defined, commonly accepted way of measuring software products and services. A large number of measures and metrics exist, but only a few have had widespread use or acceptance. Even with those widely studied, such as LOC or T. J. McCabe's cyclomatic complexity, it is not universally agreed what they mean. Some studies have attempted to correlate the measurements with a number of software properties, including size, complexity, reliability (error rates), and maintainability.

Additionally, many measurements are done subjectively, such as product type or level of programming expertise. They are difficult to evaluate because of the potentially large number of factors involved and the problems associated with assessing or quantifying individual factors.

As for the proposed process models, few have a significant theoretical basis. Most are based upon a combination of intuition, expert judgment, and statistical analysis of empirical data. Many software measures and metrics have been defined and tested but used only in limited environments. There is no single process model that can be applied with a reasonable degree of success to a variety of environments. Generally, significant recalibration is required for each new environment in order to produce useful results. Furthermore, the various models often use a wide variety of basic parameter sets.

The above considerations make it difficult to interpret and compare quoted measurement results, especially across different environments, languages, applications, or development methodologies. Even simple measures, such as LOC, have differences in underlying definitions and counting techniques, making it almost impossible to compare quoted results. Metrics involving LOC values across different programming languages can lead to incorrect conclusions and thereby conceal the real significance of the data. For example, the productivity metrics LOC per month and cost per LOC suggest that assembly language programmers are more productive than high-level language programmers (higher LOC per month and lower $ per LOC), even though the total programming cost is usually lower for high-level languages. Similarly, defects per LOC and cost per defect values have been used as quality or productivity indicators. Again, with different levels of programming languages, using these measurements may obscure overall productivity and quality improvements by systematically yielding lower defect per LOC and cost per defect values for lower-level languages, even though total defects and costs are actually higher.

Despite these problems, applying software metrics and models in limited environments can help improve software quality and productivity. Defect density and McCabe's complexity have been found to be reasonably good predictors of other characteristics, such as defect counts, total effort, and maintainability. There are many useful products for measurement and modeling available on the market today. Additional experience with current models and a better understanding of underlying measurements and their application to the software process will improve the results.

8.2.1 Product Measurement

A product can be measured at any stage of its development. For a software product, the requirements, the complexity of the software design, the size of the final program's source or object code, or the number of pages of documentation produced for the installed system can be measured.

Most of the initial work in measuring products has dealt with the characteristics of source code. Experience with measurement and models shows that measurement information available earlier in the development cycle can be of greater value in controlling the process and results. Thus, a number of papers have dealt with the size or complexity of the software design.

The following examples show various ways of measuring a product. These were chosen because of their wide use or because they represent a particularly interesting point of view.

8.2.1.1 Size

LOC is the most common way of quantifying software size; however, this cannot be done until the coding process is complete. Function points have the advantage of being measurable during the design phase of the development process, or possibly earlier.

8.2.1.1.1 Lines of Code

This is probably the most widely used measure for program size, although there are many different definitions. The differences involve treatment of blank lines, comment lines, non-executable statements, multiple statements per line, multiple lines per statement, and the question of how to count reused lines of code. The most common definition counts any line that is not a blank or a comment, regardless of the number of statements per line. In theory, LOC is a useful predictor of program complexity, total development effort, and programmer performance (debugging, productivity). Numerous studies have attempted to validate these relationships.
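A minimal counter built on the most common definition quoted above (count any line that is neither blank nor a comment) might look like the following sketch; the comment prefix is an assumption that would have to match the language being measured.

def count_loc(source, comment_prefix="#"):
    """Count lines that are neither blank nor comments."""
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

sample = """# header comment
x = 1

y = x + 1  # a trailing comment does not disqualify the line
"""
print(count_loc(sample))   # 2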

8.2.1.1.2 Function Points

A. J. Albrecht proposed a metric for software size and the effort required for development that can be determined early in the development process. This approach computes the total function points (FP) value for the project by totaling the number of external user inputs, inquiries, outputs, and master files, and then applying the following weights: inputs (4), outputs (5), inquiries (4), and master files (10). Each FP contributor can be adjusted within a range of ±35% for a specific project complexity.
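The sketch below applies the quoted weights to an illustrative set of counts. For simplicity it applies one overall complexity adjustment to the total rather than adjusting each contributor separately, so treat it as an approximation of Albrecht's approach rather than a faithful implementation.

# Albrecht-style weights quoted in the text; counts and adjustment are illustrative.
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "master_files": 10}

def function_points(counts, adjustment=0.0):
    """Unadjusted FP total, optionally scaled by an overall +/-35% adjustment."""
    if not -0.35 <= adjustment <= 0.35:
        raise ValueError("adjustment must be within +/-35%")
    unadjusted = sum(WEIGHTS[name] * counts.get(name, 0) for name in WEIGHTS)
    return unadjusted * (1 + adjustment)

counts = {"inputs": 20, "outputs": 10, "inquiries": 15, "master_files": 5}
print(function_points(counts))        # 20*4 + 10*5 + 15*4 + 5*10 = 240
print(function_points(counts, 0.10))  # 264.0 with a 10% complexity uplift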


8.2.1.2 Complexity

More metrics have been proposed for measuring program complexity than for any other program characteristic. Two examples of complexity metrics are:

8.2.1.2.1 Cyclomatic Complexity -- v(G)

Given any computer program, draw its control flow graph, G, where each node corresponds to a block of sequential code and each edge corresponds to a branch or decision point in the program. The cyclomatic complexity of such a graph can be computed by a simple formula from graph theory, as v(G) = e - n + 2, where e is the number of edges and n is the number of nodes in the graph.
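The formula is simple enough to compute directly once the edges and nodes of the control flow graph have been counted; the graphs below are hypothetical.

def cyclomatic_complexity(num_edges, num_nodes):
    """v(G) = e - n + 2 for a connected control flow graph."""
    return num_edges - num_nodes + 2

# One if/else: decision node, then-block, else-block, join -> 4 nodes, 4 edges.
print(cyclomatic_complexity(num_edges=4, num_nodes=4))   # 2 (one decision point)
# A strictly sequential program collapses to a single node with no edges.
print(cyclomatic_complexity(num_edges=0, num_nodes=1))   # 1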

8.2.1.2.2 Knots

Calculate program knots by drawing the program control flow graph with a node for every statement or block of sequential statements. A knot is defined as a necessary crossing of directional lines in the graph. The same phenomenon can be observed by drawing transfer-of-control lines from statement to statement in a program listing.

8.2.1.3 Quality

There is a long list of quality characteristics for software, such as correctness, efficiency, portability, performance, maintainability, and reliability. Skill Category 1 lists the commonly accepted quality attributes for an information system. While software quality can theoretically be measured at every phase of the software development cycle, the characteristics often overlap and conflict with one another. For example, increased performance or speed of processing (desirable) may result in lowered efficiency (undesirable). Since useful definitions are difficult to devise, most efforts to find any single way to measure overall software quality have been less than successful.

Although much work has been done in this area, there is still less direction or definition than for measuring software size or complexity. Three areas that have received considerable attention are: program correctness, as measured by defect counts; software reliability, as computed from defect data; and software maintainability, as measured by various other metrics, including complexity metrics.

8.2.1.3.1 Correctness

The number of defects in a software product should be readily derivable from the product itself. However, since there is no easy and effective procedure to count the number of defects in the program, the four alternative measures listed below have been proposed. These alternative measures depend on both the program and the outcome, or result, of some phase of the development cycle.

• Number of design changes
• Number of errors detected by code inspections
• Number of errors detected in program tests
• Number of code changes required


8.2.1.3.2 Reliability

It would be useful to know the probability of a software failure, or the rate at which software errors will occur. Again, although this information is inherent in the software product, it can only be estimated from data collected on software defects as a function of time. If certain assumptions are made, this data can be used to model and compute software reliability metrics. These metrics attempt to indicate and predict the probability of failure during a particular time interval, or the mean time to failure (MTTF) and mean time between failures (MTBF).
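A rough sketch of how MTBF might be estimated from collected failure data follows; it uses the common simplification of dividing total operating time by the number of failures, and the figures are illustrative.

def mean_time_between_failures(failure_times_hours, total_hours):
    """Simplified MTBF: total operating time divided by the number of failures."""
    return total_hours / len(failure_times_hours)

failures = [120.0, 310.0, 470.0, 940.0]               # hours at which failures occurred
print(mean_time_between_failures(failures, 1000.0))   # 250.0 hours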

8.2.1.3.3 Maintainability

Efforts have been made to define ways to measure or predict the maintainability of a software product. An early study by Bill Curtis, and others, investigated the ability of Halstead's effort metric, E, and v(G) to predict the psychological complexity of software maintenance tasks. Assuming such predictions could be made accurately, complexity metrics could then be profitably used to reduce the cost of software maintenance. A carefully detailed experiment indicated that software complexity metrics could effectively explain or predict the maintainability of software in a distributed computer system.

8.2.1.4 Customer Perception of Product Quality

Determining the customer's perception of quality involves measuring how the customer views the quality of the IT product. It can be measured in a number of ways, such as using customer surveys, service level agreements, loyalty, and recommendations to others.

8.2.2 Process Measurement

A process can be measured by either of the following:

• Attributes of the process, such as overall development time, type of methodology used, or the average level of experience of the development staff.

• Accumulating product measures into a metric so that meaningful information about the process can be provided. For example, function points per person-month or LOC per person-month can measure productivity (which is product per resources), the number of failures per month can indicate the effectiveness of computer operations, and the number of help desk calls per LOC can indicate the effectiveness of a system design methodology.

There is no standardized list of software process metrics currently available. However, in addition to the ones listed above, some others to consider include:

• Number of deliverables completed on time
• Estimated costs vs. actual costs
• Budgeted costs vs. actual costs
• Time spent fixing errors
• Wait time
• Number of contract modifications
• Number of proposals submitted vs. proposals won
• Percentage of time spent performing value-added tasks

8.3 Variation and Process Capability

Dr. W. Edwards Deming's quality principles call for statistical evidence of quality. He was a strong proponent of the use of statistics that took into account common and special causes of variation. Common causes are those that can be controlled by improving the work processes. Special causes are those that must be controlled outside the process; typically they need to be dealt with individually. It is generally not cost-effective or practical to deal with special causes in the day-to-day work processes.

The natural changes occurring in organizations are moving systems and processes toward increasing variation. As a result, it is important for the QA analyst to understand the difference between common and special causes. Since the key to quality is process consistency, variation (the lack of consistency) must be understood before any process can be improved. Statistical tools are the only methods available to objectively quantify variation, and to differentiate between the two types. Control charts are the tools used to monitor variation, and they are discussed in Skill Category 4.

8.3.1 The Measurement Program

A measurement program is defined as the entire set of activities that occur around quantitative data. It can be as simple as measuring whether a system is completed on time or completed within budget, or it can be extensive and complex.

Quantitative measurement occurs at all levels of IT maturity. As organizations mature, their use of measurement changes with the maturation of the management approaches. Immature organizations typically measure for budget, schedule, and project status, and management relies on project teams to determine when requirements are done. When work processes are optimized, management relies on the quantitative data produced from the processes to determine whether or not the requirements are complete, and to prevent problems.

There are four major uses of quantitative data (i.e., measurement):

1. Manage and control the process.

A process is a series of tasks performed to produce deliverables or products. IT processes usually combine a skilled analyst with the tasks defined in the process. In addition, each time a process is executed it normally produces a different product or service from what was built by the same process at another time. For example, the same software development process may be followed to produce two different applications. Management may need to adapt the process for each product or service built, and needs to know that when performed, the process will produce the desired product or service.

2. Manage and control the product

Quality is an attribute of a product. Quality level must be controlled from the start of the process through the conclusion of the process. Control requires assuring that the specified requirements are implemented, and that the delivered product is what the customer expects and needs.

3. Improve the process

The most effective method for improving quality and productivity is to improve the processes. Improved processes have a multiplier effect because everyone that uses the improved process gains from the improvement. Quantitative data gathered during process execution can identify process weaknesses and, therefore, opportunities for improvement.

4. Manage the risks

Risk is the opportunity for something to go wrong - for example, newly purchased software will not work as stated, projects will be delivered late, or workers assigned to a project do not possess the skills needed to successfully complete it. Management needs to understand each risk, know the probability of the risk occurring, know the potential consequences if the risk occurs, and understand the probability of success based upon different management actions.

The same database of quantitative data is employed for these four uses, but different measures and metrics may be utilized. Table 8-2 illustrates how the four uses of measurement can be achieved.


Table 8-2 Achieving the Four Uses of Measurement

Manage and Control the Process
• How much have we made? How much is left to make? (Size: lines of code (LOC), boxes, procedures, units of output)
• How much progress have we made? (Status: earned value, amount of scheduled work that is done, % of each activity completed)
• How much effort has been expended? (Effort: labor hours that differentiate requirements, design, implementation, and test)
• When will the product be completed? (Schedule: calendar times (months, weeks) of activity completed)

Manage and Control the Product
• How good is the product? (Quality: number of defects found, mean time to failure, mean time to repair)
• How effectively does the product perform? (Performance: technical performance, measures specified by customers and management)

Improve the Process
• How cost-efficient is the process? What is the current performance? (Time and effort: unit costs, time to complete)

Manage the Risks
• What are the risks? (Risks: probability of exceeding constraints or not meeting requirements)

8.3.2 Common and Special Causes of Variation

8.3.2.1 Common Causes of Variation

All processes contain some inherent variation, or common causes of variation. The amount of variation in a process is quantified with summary statistics (the mean and standard deviation).

A process is defined as stable when its mean and standard deviation remain constant over time. Processes containing only common causes of variation are considered stable. As a stable process is predictable, future process values can be predicted within the control limits with a certain degree of confidence. A stable process is said to be in a state of statistical control. The control chart in Skill Category 4 depicts a stable process.

In a computer operation, abnormal terminations cause variation. Typical common causes of abnormal terminations include invalid data, no available disk space, and errors in operating or job control instructions.
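As a rough sketch of the statistical-control idea (real control charts, covered in Skill Category 4, use chart-specific limit formulas), the example below derives ±3-sigma limits from a baseline period believed to be in control and flags later observations that fall outside them; the weekly abnormal-termination counts are invented for illustration.

import statistics

def control_limits(baseline):
    """+/-3-sigma limits derived from a baseline believed to be in control."""
    mean = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    return mean - 3 * sigma, mean + 3 * sigma

baseline_abends = [4, 5, 3, 6, 4, 5, 4, 5, 3, 5]      # weekly counts, in control
lcl, ucl = control_limits(baseline_abends)
new_weeks = [5, 6, 12, 4]
flagged = [x for x in new_weeks if x < lcl or x > ucl]
print(round(lcl, 2), round(ucl, 2), flagged)          # 1.65 7.15 [12]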

One researcher1 provides the following thoughts on common causes of variation:

"Process inputs and conditions that regularly contribute to the variability of process outputs.”



“Common causes contribute to output variability because they themselves vary.”

"Each common cause typically contributes a small portion to the total variation in process outputs."

"The aggregate variability due to common causes has a 'non-systematic,' random-looking appearance."

"Because common causes are 'regular contributors,' the 'process' or 'system' variability is defined in terms of them."

Joiner outlined this strategy for reducing common causes of variation:

"Talk to lots of people including local employees, other managers, and staff from variousfunctions.”

“Improve the measurement processes if measuring contributes too much to the observedvariation.”

“Identify and rank categories of problems by Pareto analysis.”

“Stratify and desegregate your observations to compare performance of sub-processes.”

“Investigate cause-and-effect relations, running experiments (one factor and multi-factor)."

8.3.2.2 Special Causes of Variation

Special causes of variation are not inherent in a process. They occur because of special or unique circumstances. In the IT example of abnormal terminations in a computer operation, special causes might include operator strikes, citywide power outages, or earthquakes.

If special causes of variation exist, the process may be unpredictable, and therefore unstable. A state of statistical control is established when all special causes of variation have been eliminated (the operator strike ends, citywide power returns, or business returns to normal operations after an earthquake). Illustrated in Skill Category 4 is a process that is unstable because it contains a special cause of variation in addition to the common causes.

Brian Joiner summarized special causes of variation as follows:

"Process inputs and conditions that sporadically contribute to the variability of processoutputs.”

“Special causes contribute to output variability because they themselves vary.”

1. Joiner, Brian, "Stable and Unstable Processes, Appropriate and Inappropriate Managerial Action." From an address given at a Deming User's Group Conference in Cincinnati, OH.


"Each special cause may contribute a 'small' or 'large' amount to the total variation in process outputs."

"The variability due to one or more special causes can be identified by the use of control charts."

"Because special causes are 'sporadic contributors,' due to some specific circumstances, the 'process' or 'system' variability is defined without them."

Joiner then presented this strategy for eliminating special causes of variation:

"Work to get very timely data so that special causes are signaled quickly - use early warningindicators throughout your operation.”

“Immediately search for the cause when the control chart gives a signal that a special cause hasoccurred. Find out what was different on that occasion from other occasions.”

“Do not make fundamental changes in the process.”

“Instead, seek ways to change some higher-level systems to prevent that special cause fromrecurring. Or, if results are good, retrain that lesson."

8.3.3 Variation and Process Improvement

Consistency in all processes from conception through delivery of a product or service is the cornerstone of quality. One of the challenges in implementing quality management is to get process users thinking in terms of sources of variation. Managers must change the way they manage, and use statistical methods when making improvements to processes.

Employees using the process have the lead responsibility for reducing special causes of variation. Management working on the process is responsible for leading the effort to reduce common causes of variation. Improvements to address the common causes of variation usually require process or system changes. It has been widely recognized that at least 85% of problems in any organization are system problems and the responsibility of management to solve. Some sources quote 94%1.

The concept of statistical control makes it possible to determine which problems are in a process due to common causes of variation and which are external to the process due to special causes of variation. Bringing a process into a state of statistical control is not improving the process - it is bringing it back to its typical operation. Reducing variation due to common causes is process improvement and the real essence of continuous process improvement.

By definition, process variation due to common causes is expected and is not a reason for adjusting or changing the process. Tampering is any adjustment made to a process in response to common cause variation. Deming defines tampering as "Action taken on a stable system in response to variation within statistical control, in an effort to compensate for this variation - the results of which will inevitably increase the variation and will increase cost from here on out." Management that does not understand variation continually asks for explanations or corrective action when confronted with variation due to common causes.

1. Deming, W. Edwards, Out of the Crisis, MIT Press, Cambridge, MA, 1986.

8.3.4 Process Capability

As previously stated, variation due to special causes must be removed to create a stable process. However, a stable process may not be an acceptable process. If the variation due to common causes results in a process operating outside of the customer specifications, the process is called "non-capable." The process must be improved by reducing variation due to common causes, retargeting the process, or both.

Figure 8-2 illustrates the transition of a process from non-capable to capable. In this figure, LCL and UCL represent lower and upper control limits. LSL and USL represent lower and upper specification limits. In the first picture, the control limits are outside the specification limits, so a value could be within the control limits but outside the specification limits, making the process non-capable. In the last picture the modified process results in different control limits, which have moved within the specification limits, yielding a process that is both stable and capable.

Figure 8-2 Making a Process Capable
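In the spirit of Figure 8-2, a minimal capability check could compare the control limits of a stable process against the customer's specification limits; the limit values below are illustrative, not taken from the figure.

def is_capable(lcl, ucl, lsl, usl):
    """A stable process is capable when its control limits lie inside the spec limits."""
    return lsl <= lcl and ucl <= usl

print(is_capable(lcl=2.0, ucl=14.0, lsl=4.0, usl=12.0))   # False: non-capable
print(is_capable(lcl=5.0, ucl=11.0, lsl=4.0, usl=12.0))   # True: stable and capable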


8.4 Risk Management

Risk management involves the activities of defining, measuring, prioritizing, and managing risk in order to eliminate or minimize any potential negative effect associated with risk.

8.4.1 Defining Risk

Risk is the possibility that an unfavorable event will occur. It may be predictable or unpredictable. Risk has three components, each of which must be considered separately when determining how to manage the risk.

• The event that could occur – the risk
• The probability that the event will occur – the likelihood
• The impact or consequence of the event if it occurs – the penalty

Risks can be categorized as one of the following:

Technical such as complexity, requirement changes, unproven technology, etc.

Programmatic or Performance such as safety, skills, regulatory changes, material availability, etc.

Supportability or Environment such as people, equipment, reliability, maintainability, etc.

Cost such as sensitivity to technical risk, overhead, estimating errors, etc.

Schedule such as degree of concurrency, number of critical path items, sensitivity to cost, etc.

8.4.2 Characterizing Risk

Risk has five distinguishing characteristics:

8.4.2.1 Situational

Changes in a situation can result in new risks. Examples include replacing a team member, undergoing reorganization, or changing a project's scope.

8.4.2.2 Time-Based

Considering a software development life cycle, the probability of a risk occurring at the beginning of the project is very high (due to the unknowns), whereas at the end of the project the probability is very low. In contrast, during the life cycle, the impact (cost) of a risky event occurring is low at the beginning (since not much time and effort have been invested) and higher at the end (as there is more to lose).


8.4.2.3 Interdependent

Within a project, many tasks and deliverables are intertwined. If one deliverable takes longer to create than expected, other items depending on that deliverable may be affected, and the result could be a domino effect.

8.4.2.4 Magnitude Dependent

The relationship of probability and impact is not linear, and the magnitude of the risk typically makes a difference. For example, consider the risk of spending $1 for a 50/50 chance to win $5, vs. the risk of spending $1,000 for a 50/50 chance of winning $5,000, vs. the risk of spending $100,000 for a 50/50 chance of winning $500,000. In this example, the probability of loss is the same (50%) in each case, yet the opportunity cost of losing grows much larger with the size of the stake.
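One way to see the point is to compute the expected outcome of each wager; the sketch below assumes the stated dollar amounts are the amount won on a winning bet and the stake lost on a losing bet.

def expected_gain(p_win, amount_won, stake):
    """Expected value of a wager: win amount_won with probability p_win, else lose the stake."""
    return p_win * amount_won - (1 - p_win) * stake

for stake, amount_won in [(1, 5), (1_000, 5_000), (100_000, 500_000)]:
    print(stake, expected_gain(0.5, amount_won, stake))
# Expected gains of 2.0, 2000.0, and 200000.0: the odds never change,
# but the amount that can be lost grows from $1 to $100,000.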

8.4.2.5 Value-Based

Risk may be affected by personal, corporate, or cultural values. For example, completing a project on schedule may be dependent on the time of year and the nationalities or religious beliefs of the work team. Projects being developed in international locations where multiple cultures are involved may have a higher risk than those done in one location with a similar work force.

8.4.3 Managing Risk

Risk management is the process used to identify, analyze, and respond to a risk. Identifying, analyzing, and prioritizing risks require knowledge of the business functions, and user involvement. The Project Management Institute's Project Management Body of Knowledge (PMBOK) defines the following four processes to address risk management. The PMBOK also notes that different application areas may use different names for these four processes.

• Risk Identification

• Risk Quantification

• Risk Response Development

• Risk Response Control

This discussion of risk management addresses six processes, which have the following mapping to the PMBOK processes.

Risk Identification

Risk Identification – this process answers the question "What are the risks?"

Risk Quantification

Risk Analysis - this process answers the question "Which risks do we care about?"

Risk Prioritization - this process answers the question "How are the risks prioritized?"

Risk Response Development


Risk Response Planning - this process answers the question "What should be done about the risk?"

Risk Response Control

Risk Resolution – this process executes the plan that was developed in the prior step.

Risk Monitoring – this process evaluates the action taken, documents the risk results, and repeats the cycle of identification, quantification, and response.

8.4.4 Software Risk Management

Within many software development organizations, risk management remains ad hoc and incomplete. Where risk management processes exist, they tend to be used only for large projects, or those perceived to be risky.

Incorporating risk management into the software development life cycle includes planning at the following levels:

• Long-Term or High-Level
This level includes both long-range planning and optimizing the organization's mix of projects in order to obtain a balance.


• Medium-Term or Medium-Level
This level deals with project management strategies, project evaluation and selection, and project portfolio management.

• Short-Term or Low-Level
This level includes development, integration, and testing strategies.

There are several components to consider when incorporating risk management into software project management:

• In a traditional waterfall methodology, most risk management activity occurs close to milestones. In a spiral development model, the risk management activity falls in the explicit risk management portion.

• Risk management is not a separate, independent auditing process. It is a part of a project manager's job that should be explicitly performed, either quantitatively or qualitatively. Large projects should follow a formal risk management process; smaller projects may require less effort. Ultimately, the project manager must decide based on the project's cost, schedule, and performance issues.

• Risk management should be implemented as a series of integral tasks that are inserted into the normal project planning process, not added on to the end of the planning activities.

• The customer, management, development team, and others should be involved with determining project risks. Risks should not be the sole responsibility of the project manager.

8.4.5 Risks of Integrating New Technology

One of the major challenges facing an IT organization is to effectively integrate new technology. This integration needs to be done without compromising quality.

The QA analyst has three roles in integrating new technology:

• Determining the Risks
Each new technology poses new risks. These risks need to be identified and prioritized and, if possible, quantified. Although the QA analyst will probably not perform the actual task, the QA analyst needs to ensure that a risk analysis for the new technology is undertaken and effectively performed.

• Assuring that the Controls are Adequate to Reduce the Risk
The QA analyst needs to assess whether the controls proposed for the new technology are adequate to reduce the risk to an acceptable level. This may be done by line management and reviewed by the QA analyst.

• Assuring that Existing Processes are Appropriately Modified to Incorporate the Use of the New Technology
Work processes that will utilize new technologies normally need to be modified to incorporate those technologies into the step-by-step work procedures. This may be done by the workers responsible for the work processes, but at least needs to be assessed or reviewed by the QA analyst.

8.5 Implementing a Measurement Program

The key to a good measurement program is knowing what results are wanted, and what drives those results. Then metrics need to be developed to measure those results and drivers. This section explains how an effective measurement program is implemented.

8.5.1 The Need for Measurement

Current IT management is often ineffective because the IT function is extremely complex, and has few well-defined, reliable process or product measures to guide and evaluate results. Thus, accurate and effective estimating, planning, and control are nearly impossible to achieve. Projects are often characterized by:

• Schedule and cost estimates that are grossly inaccurate
• Poor quality software
• A productivity rate that is increasing more slowly than the demand for software
• Customer dissatisfaction

Addressing these problems requires more accurate schedule and cost estimates, better-quality products, and higher productivity that can be achieved through improved software management. Improvement of the management process depends upon an improved ability to identify, measure, and control essential parameters of the IT processes. Measurement is an algorithm connecting the desired result (i.e., the effect wanted) with the contributors or causes that will enable that effect to be achieved. The results are what management wants, and the contributors are attributes of the processes that will be used to achieve those results. By measuring processes and products, information is obtained that helps control schedule, cost, quality, productivity, and customer satisfaction. Consistent measurements provide data for the following:

• Expressing requirements, objectives, and acceptance criteria in a quantitative manner
• Monitoring progress of a project and/or product
• Making trade-offs in the allocation of resources
• Evaluating process and product quality
• Anticipating problems
• Predicting deadlines of the current project
• Estimating future projects of a similar nature


Results indicate that implementation and application of a measurement program can help achieve better management results, both in the short run (for a given project) and in the long run (improving productivity on future projects).

8.5.2 Prerequisites

Implementing a measurement program requires four prerequisite steps:

1. Perceive the need for a measurement program and make a commitment to it.

The lack of timely and usable quantitative information to solve major project problems becomes apparent at the senior management level. Seeing the need for better management information (as discussed in the prior section), the site manager, general manager, or division VP sponsors an organizational commitment to a measurement program. As a senior manager, the sponsor has the authority to ensure understanding at all levels in the organization.

2. Identify a champion and change agent, and assign organizational responsibility.

A champion is an advocate for the program by virtue of his/her technical credibility and influence. Champions are managers at the senior, middle, program, or project level, and are assisted by a change agent.

A change agent is a management leader empowered by the sponsor and champions to plan and implement the program. Change agents are most effective when they are members of working groups that will benefit from the measurement program. They have the operational knowledge needed to schedule, monitor, control, and report the accomplishments of the measurement program.

The project or organization selected for the implementation of the measurement program should have the resources, authority, and responsibility to make the program happen. Unless a very limited program is contemplated, responsibility for implementing the program should not be given to a single individual.

During this step, idea generators, idea exploiters, and information gatekeepers should be identified. Idea generators contribute new ideas about the measurement program. Idea exploiters implement the new ideas in the form of pragmatic programs. Information gatekeepers are experts in measurement, and can provide an informed, realistic view of it. These people implement the ideas to form a workable measurement program and ensure developers accept the program.

3. Establish tangible objectives and meaningful measurement program activities.

The change agent guides the planning of the program, including the creation of program objectives and the design of program activities. The planning takes the sponsor's goals for more effective information and defines expected results, needed resources, tasks, and organizations responsible for implementing the program.


4. Facilitate management buy-in at all levels for the measurement program.

The measurement program's sponsor must clearly inform all levels of management of his/her interest in the measurement program and motivate their cooperation. They need to know the implementation team's goals, responsibilities, authority, and interfaces with other organizations. It is also important to work with affected managers to obtain their buy-in, by tailoring the implementation so that most of their needs are met.

For each of the arguments that might be raised against measurement, there is a counter-argument as to its importance.

• Measurement has a high cost; too much investment is required and the return is too low.

Actual experience with measurement suggests that recurring costs of 2 - 3% of direct project costs are adequate for data collection and analysis and for project tracking and monitoring. This small price buys real help in meeting project goals, and in increasing project control through better budgeting, problem anticipation, risk reduction, and incremental process improvement.

• All the data exists to support special studies for the senior management.
Data in many forms is typically scattered throughout an organization, but the data may not be organized, available, or accessible on a timely basis. All levels of management need measurement data in a meaningful form. Lower levels of management may need more detailed, quantitative technical information, but all levels need the information that the measurement function can provide.

• The ability to measure exists if and when it is needed.
Many organizations have the ability to measure their performance, but they only do it when a problem is apparent. At that point, appropriate information, if it exists at all, may not be available in time to solve the problem. System measurement, if practiced in a systematic manner, ensures that information is available at all times, for all projects, over all levels of management, when needed for problem solving and decision-making.

• Our estimates are based on standard industry methods, and our budgeting and planning is sufficient.

To be good enough, estimates, estimating algorithms, metrics, and experience data need to be tailored to an organization's unique environment and processes. Industry-standard estimating algorithms, while useful, must have their parameter values calibrated to reflect the organization's unique environment; otherwise, they produce estimates that are not meaningful or reliable in that environment. Experience shows that controllability of system development projects decreases when budgets and the budgeting process bear little relation to the operating environment.

• If data is collected, the prime contractor may want to see it, take it away, or use it to harm your organization.

The customer has access to all customer contract data, and can require access to a central measurement database, including management data not collected as a part of the contract. The measurement database will contain information from past projects as well as ongoing projects. After a contract is satisfactorily completed, it is unlikely that the old data will be requested. Because this database will prove vital to the management of the business, it should be kept under a reasonable level of security.

• There is no use for a measurement program in the organization.
The bottom line is that a business cannot be successfully managed for long without information, and reliable information about a business requires measurement.


Skill Category 9

Internal Control and Security

Two key issues for quality assurance are internal control and security. Interest in internal control has been highlighted by the passage of the Sarbanes-Oxley Act. Interest in security has been highlighted by publicized penetrations of security and the increased importance of information systems and the data contained by those systems.

The Sarbanes-Oxley Act, sometimes referred to as SOX, was passed in response to numerous accounting scandals such as Enron and WorldCom. While much of the act relates to financial controls, there is a major section relating to internal controls. For Securities and Exchange Commission (SEC)-regulated corporations, both the CEO and the CFO must personally attest to the adequacy of their organization's system of internal control. Because making a misleading attestation statement is a criminal offense, top corporate executives treat internal control as a very important topic.

Principles and Concepts of Internal Control page 9-2
Environmental or General Controls page 9-5
Transaction Processing Controls page 9-6
The Quality Professional's Responsibility for Internal Control and Security page 9-9
Building Internal Controls page 9-9
Building Adequate Security page 9-12



9.1 Principles and Concepts of Internal Control

There are many definitions of internal control. Most of those definitions were developed by accountants. Some of those definitions focus more on financial controls, but others take a much broader view of internal control. Note that security is part of a system of internal control.

In the 1990s, five major accounting organizations developed a framework for internal control. The five members of the group are: Financial Executives International, the American Institute of Certified Public Accountants, the American Accounting Association, The Institute of Internal Auditors, and the Institute of Management Accountants. This group is called the Committee of Sponsoring Organizations and is frequently referred to by the acronym COSO.

The COSO Internal Control Framework has been widely accepted since the passage of the Sarbanes-Oxley Act. This is because the Act requires organizations to have a "framework for internal control" and the SEC, which oversees the Sarbanes-Oxley Act, only recognizes the COSO Internal Control Framework.

9.1.1 Internal Control and Security Vocabulary and Concepts

There is no one generally accepted definition of internal control. Many have developed definitions, some broad, some very specific. However, it is important to have a clear definition of internal control.

The COSO report defines internal control as:

"…A process, effected by an organization's Board of Directors, management and other personnel, designed to provide reasonable assurance regarding the achievement of objectives in the following categories:

• Effectiveness and efficiency of operations

• Reliability of financial reporting

• Compliance with applicable laws and regulations.”

The following four key terms are used extensively in internal control and security:

• Risk – The probability that an undesirable event will occur.
• Exposure – The amount of loss that might occur if an undesirable event occurs.
• Threat – A specific event that might cause an undesirable event to occur.
• Control – Anything that will reduce the impact of risk.

Let's look at an example of these terms using a homeowner's insurance policy and focus on one risk, which is the risk of fire. The exposure associated with a risk of fire would be the value of your home. A threat that might cause that risk to turn into a loss might be an improper electrical connection or children playing with matches. Controls that would minimize the loss associated with this risk would include such things as fire extinguishers, sprinkler systems, fire alarms, and non-combustible material used in construction.

In looking at information technology, we might look at the risk of someone penetrating a banking system and improperly transferring funds to the perpetrator's personal account. The risk is the loss of funds in the account that was penetrated. The exposure is the amount of money in the account, or the amount of money that the bank allows to be transferred electronically. The threat is inadequate security systems, which allow the perpetrator to penetrate the banking system. Controls can include passwords limiting access, limits on the amount that can be transferred at any one time, limits on unusual transactions such as transferring monies to an overseas account, and a control over who can transfer money from the account.

9.1.1.1 Internal Control Responsibilities

Everyone in an organization has some responsibility for internal control. Management, however, is responsible for an organization's internal control system. The chief executive officer is ultimately responsible for the internal control system. Financial and accounting officers are central to the way management exercises control. All management personnel play important roles and are accountable for controlling their units' activities.

Internal auditors contribute to the ongoing evaluation of the internal control system, but they do not have primary responsibility for establishing or maintaining it. The Board of Directors and its audit committee provide important oversight to the internal control system. A number of other parties, such as lawyers and external auditors, contribute to the achievement of the organization's objectives and provide information useful in improving internal control. However, they are not responsible for the effectiveness of, nor are they a part of, the organization's internal control system.

9.1.1.2 The Internal Auditor's Internal Control Responsibilities

Internal auditors directly examine internal controls and recommend improvements. The Institute of Internal Auditors, the professional association representing internal auditors worldwide, defines internal auditing as:

"… an independent, objective assurance and consulting activity designed to add value and improve an organization's operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk management, control, and governance processes."

International Standards for the Professional Practice of Internal Auditing, established by the Institute of Internal Auditors, specify that internal auditors should:

• Assist the organization by identifying and evaluating significant exposures to risk and contributing to the improvement of risk management and control systems.

• Monitor and evaluate the effectiveness of the organization's risk management system.


• Evaluate risk exposures relating to the organization's governance, operations, and information systems regarding the:
  - Reliability and integrity of financial and operational information
  - Effectiveness and efficiency of operations
  - Safeguarding of assets
  - Compliance with laws, regulations, and contracts

• Assist the organization in maintaining effective controls by evaluating their effectiveness and efficiency and by promoting continuous improvement.

All activities within an organization are potentially within the scope of the internal auditors’responsibility. In some entities, the internal audit function is heavily involved with controls overoperations. For example, internal auditors may periodically monitor production quality, test thetimeliness of shipments to customers, or evaluate the efficiency of the plant layout. In other entities,the internal audit function may focus primarily on compliance or financial reporting-relatedactivities.

The Institute of Internal Auditors standards also set forth the internal auditors’ responsibility for theroles they may be assigned. Those standards, among other things, state that internal auditors shouldbe independent of the activities they audit. They possess, or should possess, such independencethrough their position and authority within the organization and through recognition of theirobjectivity.

Organizational position and authority involve such matters as a reporting line to an individual who has sufficient authority to ensure appropriate audit coverage, consideration and response; selection and dismissal of the director of internal auditing only with Board of Directors or audit committee concurrence; internal auditor access to the Board or audit committee; and internal auditor authority to follow up on findings and recommendations.

Internal auditors are objective when not placed in a position of subordinating their judgment on audit matters to that of others. The primary protection for this objectivity is appropriate internal audit staff assignments. These assignments should be made to avoid potential and actual conflicts of interest and bias. Staff assignments should be rotated periodically and internal auditors should not assume operating responsibilities. Similarly, they should not be assigned to audit activities with which they were involved in connection with prior operating assignments.

It should be recognized that the internal audit function does not – as some people believe – have primary responsibility for establishing or maintaining the internal control system. That, as noted, is the responsibility of the CEO, along with key managers with designated responsibilities. The internal auditors play an important role in evaluating the effectiveness of control systems and thus contribute to the ongoing effectiveness of those systems.


9.1.2 Risk versus Control

From an academic perspective, the sole purpose of control is to reduce risk. Therefore, if there is no risk, there is no need for control. The formula for risk is as follows:

Risk = Frequency x Probable Loss

To calculate the loss due to risk, one must first determine:

• The frequency with which an unfavorable event will occur; and
• The probable loss associated with that unfavorable occurrence.

Let’s look at a simple example. There is a risk that products shipped will not be invoiced. If we were to assume that an average of two products will be shipped per day and not be invoiced, and the average billing per invoice is $500, then the risk associated with not invoicing shipments is $1,000 per day.
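As a minimal sketch (not part of the CBOK itself), the risk formula can be applied directly; the frequency and loss figures below simply restate the shipped-but-not-invoiced example, and the 250 shipping days per year is an assumed figure used only for illustration:

```python
# Minimal sketch of the risk formula: Risk = Frequency x Probable Loss.
# The frequency and loss figures restate the shipped-but-not-invoiced example;
# the 250 shipping days per year is an assumption for illustration only.

def risk_exposure(frequency_per_day: float, probable_loss: float) -> float:
    """Expected loss per day for one type of unfavorable event."""
    return frequency_per_day * probable_loss

daily_risk = risk_exposure(frequency_per_day=2, probable_loss=500)  # $1,000 per day
annual_risk = daily_risk * 250                                      # assumed shipping days per year
print(f"Daily risk:  ${daily_risk:,.0f}")
print(f"Annual risk: ${annual_risk:,.0f}")
```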

Management has chosen to use a positive concept in addressing risk, rather than a negative concept. In other words, they recognize that there will be a risk that products will be shipped but not invoiced. To address risks such as this, management has chosen to define control objectives rather than risks.

In our shipped-but-not-billed risk example, management would define a control objective of “All products shipped should be invoiced”. They would then implement controls to accomplish that positive control objective.

9.1.3 Environmental versus Transaction Processing Controls

It is important for the quality professional to know that internal control systems have two components. The first is environmental controls (sometimes called general controls), and the second is the transaction processing controls within an individual business application.

9.2 Environmental or General Controls

Environmental controls are the means which management uses to manage the organization. They include such things as:

• Organizational policies
• Organizational structure in place to perform work
• Method of hiring, training, supervising and evaluating personnel
• Processes provided to personnel to perform their day-to-day work activities, such as a system development methodology for building software systems

Auditors state that without strong environmental controls the transaction processing controls may not be effective. For example, if passwords needed to access computer systems are not adequately protected, the password system will not work. Individuals will either protect, or not protect, their passwords based on environmental controls such as the attention management pays to password protection, the monitoring of the use of passwords, and management’s actions regarding individual workers’ failure to protect passwords.

9.3 Transaction Processing Controls

The object of a system of internal control in a business application is to minimize business risks. Risks are the probability that some unfavorable event may occur during processing. Controls are the totality of means used to minimize those business risks.

There are two systems in every business application. The first is the system that processes business transactions, and the second is the system that controls the processing of business transactions. From the perspective of the system designer, these two are designed and implemented as one system. For example, edits that determine the validity of input are included in the part of the system in which transactions are entered. However, those edits are part of the system that controls the processing of business transactions.

Because these two systems are designed as a single system, most software quality analysts do not conceptualize them as two separate systems. Adding to the difficulty is that the system documentation is not divided into the system that processes transactions and the system that controls the processing of transactions.

When one visualizes a single system, one has difficulty visualizing the total system of internal control. That is, if one looks at edits of input data by themselves, it is difficult to see how the processing of a transaction is controlled in its totality. For example, there is a risk that invalid transactions will be processed. This risk occurs throughout the system and not just during the editing of data. The system of internal controls, when viewed as a whole, must address all of the risks of invalid processing from the point that a transaction is entered into the system to the point that the output deliverable is used for business purposes.

9.3.1 Preventive, Detective and Corrective Controls

This section describes three categories of transaction processing controls (preventive, detective, and corrective) and provides examples of each type. Also provided is a detailed process to follow when building controls within an information system.

The objectives of transaction processing controls are to prevent, detect, or correct incorrect processing. Preventive controls stop incorrect processing from occurring; detective controls identify incorrect processing; and corrective controls correct incorrect processing. Since the potential for errors is always assumed to exist, the objectives of transaction processing controls can be summarized in five positive statements:


• Assure that all authorized transactions are completely processed once and only once.

• Assure that transaction data is complete and accurate.
• Assure that transaction processing is correct and appropriate to the circumstances.
• Assure that processing results are utilized for the intended benefits.
• Assure that the application can continue to function.

In most instances controls can be related to multiple exposures. A single control can also fulfill multiple control objectives. For these reasons transaction processing controls have been classified according to whether they prevent, detect, or correct causes of exposure. The controls listed in the next sections are not meant to be exhaustive but, rather, representative of these control types.

9.3.1.1 Preventive Controls

Preventive controls act as a guide to help things happen as they should. This type of control is most desirable because it stops problems from occurring. Computer application systems designers should put their control emphasis on preventive controls. It is more economical and better for human relations to prevent a problem from occurring than to detect and correct the problem after it has occurred.

Preventive controls include standards, training, segregation of duties, authorization, forms design, pre-numbered forms, documentation, passwords, consistency of operations, etc.

One question that may be raised is, “At what point in the processing flow is it most desirable to exercise computer data edits?” The answer to this question is simply, “As soon as possible, in order to uncover problems early and avoid unnecessary computer processing.” Some input controls depend on access to master files and so must be timed to coincide with file availability. However, many input validation tests may be performed independently of the master files. Preferably, these tests should be performed in a separate edit run at the beginning of the computer processing. Normally, the input validation tests are included in programs that perform data-conversion operations, such as transferring data files from one application to another. By including the tests in programs performing such operations, the controls may be employed without significantly increasing the computer run time.
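A minimal sketch of such input validation edits follows; the record layout and the individual edit rules are hypothetical examples, not prescribed by the CBOK:

```python
# Minimal sketch of input validation edits (a preventive control) applied as early
# as possible in processing. The record layout and edit rules are hypothetical.
from datetime import date

def edit_invoice_transaction(record: dict) -> list[str]:
    """Return a list of edit failures; an empty list means the record passes the edits."""
    errors = []
    if not record.get("customer_id"):
        errors.append("Missing customer identifier")
    if record.get("quantity", 0) <= 0:
        errors.append("Quantity must be greater than zero")
    if record.get("unit_price", 0.0) < 0:
        errors.append("Unit price may not be negative")
    if record.get("ship_date") and record["ship_date"] > date.today():
        errors.append("Ship date may not be in the future")
    return errors

# A record failing one edit would be rejected (or reported) before further processing.
print(edit_invoice_transaction({"customer_id": "", "quantity": 2, "unit_price": 500.0}))
```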

Preventive controls are located throughout the entire IT system. Many of these controls are executed prior to the data entering the computer programs. The following preventive controls will be discussed in this chapter:

• Source-data authorization
• Data input
• Source-data preparation
• Turn-around documents
• Pre-numbered forms
• Input validation
• Computer updating of files
• Controls over processing

9.3.1.2 Detective Controls

Detective controls alert individuals involved in a process so that they are aware of a problem. Detective controls should bring potential problems to the attention of individuals so that action can be taken. One example of a detective control is a listing of all paychecks for individuals who worked over 80 hours in a week. Such a transaction may be correct, or it may be a systems error, or even fraud.
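A minimal sketch of the paycheck listing described above follows; the payroll record layout is a hypothetical example:

```python
# Minimal sketch of a detective control: an exception listing of paychecks for
# employees who reported more than 80 hours in a week. The record layout is hypothetical.
OVERTIME_THRESHOLD_HOURS = 80

def overtime_exception_report(paychecks: list[dict]) -> list[dict]:
    """Flag paychecks that exceed the threshold so someone can investigate them."""
    return [p for p in paychecks if p["hours_worked"] > OVERTIME_THRESHOLD_HOURS]

paychecks = [
    {"employee": "1001", "hours_worked": 42, "gross_pay": 1260.00},
    {"employee": "1002", "hours_worked": 88, "gross_pay": 3520.00},
]
for item in overtime_exception_report(paychecks):
    print(f"Review paycheck for employee {item['employee']}: {item['hours_worked']} hours")
```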

Detective controls will not prevent problems from occurring, but rather will point out a problem once it has occurred. Examples of detective controls are batch control documents, batch serial numbers, clearing accounts, labeling, and so forth.

The following detective controls will be discussed here:

• Data transmission
• Control register
• Control totals
• Documentation and testing
• Output checks

9.3.1.3 Corrective Controls

Corrective controls assist individuals in the investigation and correction of causes of risk exposures that have been detected. These controls primarily collect evidence that can be utilized in determining why a particular problem has occurred. Corrective action is often a difficult and time-consuming process; however, it is important because it is the prime means of isolating system problems. Many system improvements are initiated by individuals taking corrective actions on problems.

It should be noted that the corrective process itself is subject to error. Many major problems have occurred in organizations because corrective action was not taken on detected problems. Therefore, detective controls should be applied to corrective controls.

Examples of corrective controls are audit trails, discrepancy reports, error statistics, backup and recovery, etc. The following two corrective controls will be discussed in greater detail: error detection and resubmission, and audit trails.

9.3.2 Cost versus Benefit of Controls

In information systems there is a cost associated with each control. These costs need to be evaluated, as no control should cost more than the potential loss from the errors it is established to detect, prevent, or correct. Also, if controls are poorly designed or excessive, they become burdensome and may not be used. The failure to use controls is a key element leading to major risk exposures.
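A minimal sketch of that cost/benefit comparison follows; the dollar figures and the loss-reduction rate are hypothetical assumptions:

```python
# Minimal sketch of a control cost/benefit check: a control is justified only when
# the loss it is expected to prevent exceeds its cost. All figures are hypothetical.

def control_is_justified(annual_control_cost: float,
                         annual_expected_loss: float,
                         loss_reduction_rate: float) -> bool:
    """loss_reduction_rate is the fraction of the expected loss the control removes (0.0-1.0)."""
    benefit = annual_expected_loss * loss_reduction_rate
    return benefit > annual_control_cost

# A $40,000/year control that removes 90% of a $250,000/year exposure would be justified.
print(control_is_justified(40_000, 250_000, 0.90))   # True
```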


Preventive controls are generally the lowest in cost. Detective controls usually require some moderate operating expense. On the other hand, corrective controls are almost always quite expensive. As noted above, prior to installing any control, a cost/benefit analysis should be made.

Controls need to be reviewed continually. This is a prime function of the auditor. The auditor should determine if controls are effective. As the result of such a review an auditor may recommend adding, eliminating, or modifying system controls.

9.4 The Quality Professional’s Responsibility for Internal Control and Security

The quality professional is the organization’s advocate for quality. The role of the quality professional involves identifying opportunities to improve quality and facilitating solutions. Because internal control and security are important responsibilities of management, the quality professional should be involved in those areas.

The quality professional should not be responsible for building or assessing the adequacy of internal control and security systems. However, the quality professional can assist in developing the work processes used to build and assess internal control and security. The quality professional can also evaluate the effectiveness and efficiency of those work processes.

The quality professional should support the importance of environmental controls in creating an environment conducive to effective internal control and security. The following section on control models will emphasize the importance of a strong control environment.

9.5 Building Internal Controls

The system of internal control is designed to minimize risk. The control models emphasize the importance of the control environment. Normally quality assurance does not establish the control environment, but it can review the control environment practices in place in the IT organization. This section will focus on building transaction processing controls in software systems.

9.5.1 Perform Risk Assessment

Building controls starts with risk assessment, because reduction in risk is the requirement for a control. Risk assessment allows an organization to consider the extent to which potential events might have an impact on the achievement of objectives. Management should assess events from two perspectives: first, the likelihood of an event occurring and, second, the impact of that event. The assessment normally uses a combination of qualitative and quantitative methods.


The positive and negative impacts of potential events should be examined, individually or by category, across the organization. Potentially negative events are assessed on both an inherent and residual basis.

In risk assessment, management considers the mix of potential future events relevant to the organization and its activities. This entails examining factors, including organization size, complexity of operations, and degree of regulation over its activities, that shape the organization’s risk profile and influence the methodology it uses to assess risks.
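As a minimal sketch of one common way to combine likelihood and impact (the 1-to-5 scales and the example events are hypothetical and not prescribed by the CBOK):

```python
# Minimal sketch of a qualitative risk assessment: score each event on likelihood
# and impact, then rank events by the product of the two scores.
# The 1-5 scales and the example events are hypothetical.

def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs are on a 1 (low) to 5 (high) scale."""
    return likelihood * impact

events = [
    {"event": "Products shipped but not invoiced", "likelihood": 4, "impact": 3},
    {"event": "Unauthorized electronic funds transfer", "likelihood": 2, "impact": 5},
]
for e in sorted(events, key=lambda e: risk_score(e["likelihood"], e["impact"]), reverse=True):
    print(f"{risk_score(e['likelihood'], e['impact']):>2}  {e['event']}")
```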

9.5.2 Model for Building Transaction Processing Controls

System controls for computer applications involve automated and manual procedures. Automated procedures may include data entry performed in user areas, as well as the control of the data flow within a computer system. Manual procedures in user areas are developed to ensure that the transactions processed by IT are correctly prepared, authorized, and submitted to IT.

Manual application control procedures are also required within IT. For example, the IT input/output control section frequently balances and reconciles input to output. File retention and security procedures may be required and specified for individual computer applications. Such controls are unique to the requirements of the application and complement management controls that govern input/output controls and the media library.

Figure 9-1 shows the six steps of a transaction flow through a computer application system. Transaction flow is used as a basis for classifying transaction processing controls because it provides a common framework for users and system development personnel to improve computer application system controls.


Figure 9-1 Model for Building Transaction Processing Controls

The two shaded areas in the figure indicate the steps which require the greatest involvement of the user organization. Each step is described below.

9.5.2.1 Transaction Origination

Transaction origination controls govern the origination, approval, and processing of source documents and the preparation of data processing input transactions, and the associated error detection and correction procedures.

9.5.2.2 Transaction Entry

Transaction entry controls govern data entry via remote terminal or batch, data validation, transaction or batch proofing and balancing, error identification and reporting, and error correction and reentry.

9.5.2.3 Transaction Communications

Transaction communications controls govern the accuracy and completeness of data communications, including message accountability, data protection hardware and software, security and privacy, and error identification and reporting.


9.5.2.4 Transaction Processing

Transaction processing controls govern the accuracy and completeness of transaction processing, including the appropriateness of machine-generated transactions, validation against master files, and error identification and reporting.

9.5.2.5 Database Storage and Retrieval

Transaction database storage and retrieval controls govern the accuracy and completeness of database storage, data security and privacy, error handling, backup, recovery, and retention.

9.5.2.6 Transaction Output

Transaction output controls govern the manual balancing and reconciling of input and output (within the input/output control section and at user locations), distribution of data processing output, control over negotiable documents (within data processing and user areas), and output data retention.

As a general rule, if risks are significant, controls should be strong. If the quality assurance analysts and/or the individual developing the adequacy opinion can match the risks with controls, the opinion can be based on that documentation.

9.6 Building Adequate Security

Security is heavily dependent on management establishing a strong control environment that encourages compliance with security practices. Good security practices, such as protecting passwords, will not be effective unless employees are diligent in complying with password protection practices.

Security can be divided into two parts. First is the security management controls, and second is the security technical controls. Normally, security experts are needed to identify, install and monitor the technical controls, such as anti-virus software. Quality assurance should focus on the security management controls.

To build good security management controls, quality assurance needs to build a security baseline. However, prior to building the baseline the team needs to understand the vulnerabilities that allow security penetrations. This is typically accomplished via security awareness training. The next section identifies some of the more widely used security practices.

9.6.1 Where Vulnerabilities in Security Occur

A vulnerability is a weakness in an information system. It is the point at which software systems are easiest to penetrate. Understanding the vulnerabilities helps in designing security for information systems.


This section describes vulnerabilities that exist in the functional attributes of an information system, identifies the location of those vulnerabilities, and distinguishes accidental from intentional losses.

9.6.1.1 Functional Vulnerabilities

The primary functional vulnerabilities result from weak or nonexistent controls in the following eight categories, listed in order of historic frequency of abuse:

1. Input/Output Data

2. Physical Access

3. IT Operations

4. Test Processes

5. Computer Programs

6. Operating System Access and Integrity

7. Impersonation

8. Media

9.6.1.2 IT Areas Where Security is Penetrated

Data and report preparation areas and computer operations facilities with the highest concentration of manual functions are the areas most vulnerable to having security penetrated. Nine primary functional locations are listed below, ranked according to vulnerability:

1. Data and Report Preparation Facilities

2. Computer Operations

3. Non-IT Areas

4. Online Terminal Systems

5. Programming Offices

6. Online Data Preparation and Report Generation


7. Digital Media Storage Facilities

8. Online Operations

9. Central Processors

9.6.1.3 Accidental versus Intentional Losses

Errors generally occur during labor-intensive detailed work, and errors lead to vulnerabilities. The errors are usually data errors, computer program errors (bugs), and damage to equipment or supplies. Such errors often require rerunning of jobs, error correction, and replacement or repair of equipment and supplies.

Nevertheless, it is often difficult to distinguish between accidental loss and intentional loss. In fact, some reported intentional loss is due to perpetrators discovering and making use of errors that result in their favor. When loss occurs, employees and managers tend to blame the computer hardware first (to absolve themselves from blame and to pass the problem along to the vendor to solve). The problem is rarely a hardware error, but proof of this is usually required before searching elsewhere for the cause. The next most common area of suspicion is users or the source of data generation because, again, the IT department can blame another organization. Blame is usually next placed on the computer programming staff. Finally, when all other targets of blame have been exonerated, IT employees suspect their own work.

It is not uncommon to see informal meetings between computer operators, programmers, maintenance engineers, and users arguing over who should start looking for the cause of a loss. The thought that the loss was intentional is remote because they generally assume they function in a benign environment.

In many computer centers, employees do not understand the significant difference between accidental loss from errors and intentionally caused losses. Organizations using computers have been fighting accidental loss for 40 years, since the beginning of automated data processing. Solutions are well known and usually well applied relative to the degree of motivation and cost-effectiveness of controls. Employees assume, however, that the same controls used in similar ways will also deter people engaged in intentional acts that result in losses. They frequently fail to understand that they are dealing with an intelligent enemy who is using every skill, experience, and access capability to solve the problem or reach a goal. This presents a different kind of vulnerability, one that is much more challenging and that requires adequate safeguards and controls not yet fully developed or realized, let alone adequately applied.

9.6.2 Establishing a Security Baseline

A baseline is a snapshot of the organization’s security program at a certain time. The baseline is designed to answer two questions:

• What are we doing about computer security?
• How effective is our computer security program?


Baseline information should be collected by an independent assessment team; as much as possible, bias for, or against, a security program should be removed from the process. The process itself should measure both factual information about the program and the attitudes of the people involved in the program.

9.6.2.1 Creating Baselines

The establishment of a security baseline need not be time-consuming. The objective is to collect what is easy to collect, and ignore the information that is difficult to collect. In many instances, the needed information may already be available.

The three key aspects of collecting computer security baseline information are as follows (a simple sketch of a baseline record follows the list):

• What to collect.
A determination must be made about what specific pieces of information would be helpful in analyzing the current security program and in building a more effective computer security program.

• From whom will the information be collected?
Determining the source of information may be a more difficult task than determining what information should be collected. In some instances, the source will be current data collection mechanisms (if used by the organization). In other instances, individuals will be asked to provide information that has not previously been recorded.

• The precision of the information collected.
There is a tendency to want highly precise information, but in many instances it is not necessary. The desired precision should be both reasonable and economical. If people are being asked to identify past costs, high precision is unreasonable; and if the cost is large, it must be carefully weighed against the benefit of having highly precise information. In many instances, the same decisions would be made regardless of whether the precision was within plus or minus 1 percent, or within plus or minus 50 percent.
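As a minimal, hypothetical sketch only, a baseline record capturing the three aspects above might look like the following; the field names and example values are assumptions, not part of the CBOK:

```python
# Minimal, hypothetical sketch of a baseline record capturing what is collected,
# from whom, and at what precision. Field names and values are assumptions.
from dataclasses import dataclass

@dataclass
class BaselineItem:
    control_area: str      # what is being measured, e.g. a security practice
    measurement: str       # the fact or attitude being recorded
    source: str            # from whom the information is collected
    precision: str         # how precise the figure needs to be

item = BaselineItem(
    control_area="Security awareness training",
    measurement="Percentage of staff trained in the last 12 months",
    source="Training coordinator records",
    precision="plus or minus 10 percent",
)
print(item)
```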

9.6.3 Using Baselines

The baseline is the starting point for a better security program. It reports the status of the current program and provides a basic standard against which improvements can be measured.

The baseline study serves two primary objectives. First, it reduces computer security discussions from opinion to fact. Even though some of the facts are based on attitudes, they provide a statistical base of data on which analyses and discussion can be focused, as opposed to people’s opinions and prejudices. Second, the baseline helps answer the question of whether an expenditure was worthwhile. For example, if a security software package is acquired, but there is no way to determine whether the environment has been improved, management will wonder whether that expenditure was worthwhile. When the next computer security request is made, the uncertainty about the last expenditure may eliminate the possibility of a new improvement.


9.6.4 Security Awareness Training

IT organizations cannot protect the confidentiality, integrity, and availability of information in today’s highly networked systems environment without ensuring that all the people involved in using and managing IT:

• Understand their roles and responsibilities related to the organizational mission
• Understand the organization’s IT security policy, procedures, and practices
• Have at least adequate knowledge of the various management, operational, and technical controls required and available to protect the IT resources for which they are responsible

• Fulfill their security responsibilities.

As cited in audit reports, periodicals, and conference presentations, it is generally agreed by the IT security professional community that people are the weakest link in attempts to secure systems and networks.

The “people factor” – not technology – is the key to providing an adequate and appropriate level of security. If people are the key, but are also a weak link, more and better attention must be paid to this “component of security”. A robust and enterprise-wide awareness and training program is paramount to ensure that people understand their IT security responsibilities, organizational policies, and how to properly use them to protect the IT resources entrusted to them.

This practice provides a strategy for building and maintaining a comprehensive awareness and training program, as part of an organization’s IT security program. The strategy is presented in a life cycle approach, ranging from designing, developing, and implementing an awareness and training program, through post-implementation evaluation of the program. The security awareness program includes guidance on how IT security professionals can identify awareness and training needs, develop a training plan, and get organizational buy-in for the funding of awareness and training program efforts.

While there is no one best way to develop a security awareness program, the process that follows brings together practices common to effective security awareness training programs. This example includes three steps:

1. Create a security awareness policy.

2. Develop the strategy that will be used to implement that policy.

3. Assign the roles for security and awareness to the appropriate individuals.

9.6.5 Security Practices

When addressing security issues, some general information security principles should be kept in mind, as follows:


• Simplicity
• Fail Safe
• Complete Mediation
• Open Design
• Separation of Privilege
• Psychological Acceptability
• Layered Defense
• Compromise Recording


Skill Category 10
Outsourcing, COTS and Contracting Quality

Quality and Outside Software page 10-1
Selecting COTS Software page 10-5
Selecting Software Developed by Outside Organizations page 10-6
Contracting for Software Developed by Outside Organizations page 10-7
Operating Software Developed by Outside Organizations page 10-8

Organizations can assign software development work responsibilities to outside organizations by purchasing software or contracting services; but they cannot assign the responsibility for quality. Quality of software remains an internal IT responsibility regardless of who builds the software. The quality professional needs to assure that those quality responsibilities are fulfilled through appropriate processes for acquiring purchased software and contracting for software services.

10.1 Quality and Outside Software

There is a trend in the software industry for organizations to move from in-house developed software to commercial off-the-shelf (COTS) software and software developed by contractors. Software developed by contractors who are not part of the organization is referred to as outsourced software, and the contractors are referred to as outsourcing organizations. Contractors working in another country are referred to as offshore software developers.

There are some common differences between software developed in-house and that developed by any outside organization, as well as differences specific to COTS and contractor-developed software. Quality professionals should be familiar with these differences as they impact their quality responsibilities.

Two differences between software developed in-house and software developed by an outside organization are:

• Relinquishment of control
The software is developed by individuals who are not employees of the organization, and thus it is difficult to oversee the development process. The contracting organization cannot direct the employees of the other organization, nor have control over the many day-to-day decisions that are made in developing software.

• Loss of control over reallocation of resources
If work needs to be done to correct problems and/or speed up development, the contracting organization cannot take workers off one project and assign them to another project.

10.1.1 Purchased COTS Software

COTS software is normally developed prior to an organization selecting that software for its use. For smaller, less expensive software packages the software is normally “shrink wrapped” and is purchased as-is. As the COTS software becomes larger and more expensive, the purchasing organization may be able to specify modifications to the software.

Differences or challenges faced with COTS software include:

• Tasks or items missing
• Software fails to perform
• Extra features
• Does not meet business needs
• Does not meet operational needs
• Does not meet people needs

10.1.1.1 Evaluation versus Assessment

Many organizations select COTS software based on an evaluation, which is a static analysis of the documentation and benefits of the software, rather than performing an assessment, which requires that the software be tested in a dynamic mode before selection.


10.1.2 Outsourced Software

The differences in contracted software developed by an outsourcer include:

• Quality factors may not be specified
There are many factors, such as reliability and ease of use, which are frequently not included as part of the contractual criteria. Thus when the software is delivered it may not be as easy to use or as reliable as desired by the purchasing organization.

• Non-testable requirements and criteria
If the requirements or contractual criteria are not stated in measurable and testable terms, the delivered result may not meet the intent of the purchasing organization.

• Customer’s standards may not be met
Unless the contract specifies the operational standards and documentation standards, the delivered product may be more complex to use than desired by the purchasing organization.

• Missing requirements
Unless detailed analysis and contractual specification work is complete, the purchasing organization may realize during the development of the software that requirements are missing, and thus the cost of the contract could escalate significantly.

• Overlooked changes in standards or technology
If changes in standards that the purchasing organization must meet, or the introduction of new, desirable technology, must later be incorporated into the contract, there may be significant cost to modify the software for those new standards or that new technology.

• Training and deployment may be difficult
If software is developed by another organization, there may be inadequate knowledge within the purchasing organization to provide the appropriate training for staff and to ensure that deployment is effective and efficient.

10.1.2.1 Additional differences if the contract is with an offshore organization

Experience has shown that over 50% of the software developed by offshore organizations fails to meet the expectations of the purchasing organization. Since many of the decisions to have software developed offshore are economic decisions, the differences associated with having the software developed offshore negate the economic advantages in many cases. These offshore differences are:


• Cultural differences
There may be a significant difference in the culture and values between the purchasing organization and the offshore organization.

• Communication barriers
The language of the offshore organization may be different or difficult to comprehend, which causes difficulty in communicating the needs and desires of the purchasing organization.

• Loss of employee morale and support
Employees who would like to have developed the software may resent the software being developed offshore and thus make it difficult for the offshore-developed software to be successful.

• Root causes of the purchasing organization’s IT problems not addressed
Frequently, offshoring is chosen because there are problems in the purchasing organization that executives do not want to address. For example, the problems might include a lack of training for employees, or other, perhaps better, options for software development that were not explored.

The above discussions are not meant to be an exhaustive list of the differences between in-house developed software and software developed by outside organizations. The objective is for the quality assurance professional to recognize some potential root causes of software quality problems. If those differences are not adequately addressed in the contract, and with employees of the organization, the probability of the contracted or offshore-developed software not meeting expectations increases.

10.1.2.2 Quality Professional’s Responsibility for Outside Software

While the software may be developed by an outside organization, the responsibility for quality of that software cannot be contracted. The purchasing organization is still responsible for the quality of the software. There must be a process to monitor the development and validate the correct functioning of the software when it is developed by an outside organization.

The quality professional is the individual who should accept the quality responsibility for software developed by an outside organization. This may mean that the quality professional needs to visit the outside organization periodically, or throughout the entire development period of the software, to ensure its quality. Many of the same practices used to assure quality of in-house developed software are applicable to software developed by outside organizations. For example, conducting reviews at specific checkpoints should occur on contracted software. Acceptance testing should be conducted on all software regardless of how it was developed.

The quality professional’s specific responsibility for software developed by outside organizations is to assure that the processes for selecting COTS software and for contracting with an outside organization for software are adequate.


One of the major responsibilities of the quality assurance activity is to oversee the development and deployment of work processes, and then to assure that those work processes are continuously improved.

Without a process for selecting COTS software and a process for contracting for software, those activities would be subject to great variability. One contract may work well, one acquisition of COTS software may work well, while other acquisitions may result in failure.

The quality professional needs to look at these two processes in the same way that they view the SEI Capability Maturity Model Integration (CMMI®). If contracting is done at a Level 1 maturity there will be great variability and thus many disappointments in the delivered products and services. On the other hand, as those processes move to a Level 5 maturity, the probability of getting exactly what is wanted from COTS software and contracted software is very high.

This skill category contains a prototype process for selecting COTS software and a prototype process for contracting for software with outside organizations. The two processes incorporate many of the best practices for acquiring software developed by outside organizations.

10.2 Selecting COTS Software

There is no generally accepted best practice for acquiring COTS software. However, there are many practices in place at organizations that have a process for selecting COTS software. The process proposed includes many of those best practices.

It is important for the quality professional to first understand that a process is needed for acquiring COTS software, and second to understand the key components of that process. Thus, the purpose of presenting a process for selecting COTS software is to help the quality professional understand the type of criteria that should be included in a COTS software selection process.

The following seven-step process includes those activities which many organizations follow in assuring that the COTS software selected is appropriate for the business needs. Each of the steps is discussed below; a minimal checklist sketch follows the list:

• Assure Completeness of Needs Requirements
• Define Critical Success Factor
• Determine Compatibility with Hardware, Operating System, and other COTS Software
• Assure the Software can be Integrated into Your Business System Work Flow
• Demonstrate the Software in Operation
• Evaluate People Fit
• Acceptance Test the Software Process
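The sketch below is a hypothetical illustration of how the seven steps could be tracked as a simple pass/fail checklist for a candidate package; the data structure and the recorded results are assumptions, not part of the CBOK:

```python
# Minimal sketch of a step-by-step COTS selection checklist.
# The step names mirror the seven-step process above; the pass/fail results
# recorded for a candidate package are hypothetical.
SELECTION_STEPS = [
    "Assure completeness of needs requirements",
    "Define critical success factors",
    "Determine compatibility with hardware, operating system, and other COTS software",
    "Assure the software can be integrated into the business system work flow",
    "Demonstrate the software in operation",
    "Evaluate people fit",
    "Acceptance test the software process",
]

def unresolved_steps(results: dict[str, bool]) -> list[str]:
    """Return the steps a candidate package has not yet passed."""
    return [step for step in SELECTION_STEPS if not results.get(step, False)]

candidate_results = {step: True for step in SELECTION_STEPS}
candidate_results["Evaluate people fit"] = False
print(unresolved_steps(candidate_results))   # ['Evaluate people fit']
```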


10.3 Selecting Software Developed by Outside Organizations

Normally, contracting for software is a more significant activity than acquiring COTS software. COTS software has already been developed, and the acquisition challenge is to assure that the developed software will meet the organization’s needs. On the other hand, the contracted software may not be developed, or may be only partially developed, and thus it incorporates many of the aspects of in-house developed software except the actual implementation of the requirements and criteria.

There is no single generally accepted process for contracting for software. Many large organizations have a purchasing section which specifies many of the components of contracting for software. In addition, large organizations’ legal departments may also be involved in writing and approving contracts for software development.

If an offshore organization develops the software, even more attention should be given to the contract because of cultural and communication differences. Offshore contracts may involve the laws of the country in which the software is developed.

The following contracting life cycle incorporates many of the best practices used by organizations in selecting, contracting, and operating software developed by an outside organization. This process commences when the need for software developed by an outside organization is defined, and continues throughout the operation of the contracted software.

10.3.1 Contracting Life Cycle

In order to participate in the various aspects of contracting for software development, it is necessary to establish an acquisition life cycle for contracted software. This life cycle contains the following three activities:

• Selecting an Outside Organization
Selecting a contractor is similar to systems design. It is a time of studying alternative contractors, costs, schedules, detailed implementation design specifications, and the specification of all the deliverables, such as documentation. The selection of an outside organization may involve the following individuals in addition to the quality control reviewer:

• Systems analysts

• User personnel

• Internal auditor

• Purchasing agent

• Legal counsel

• Contract Negotiations


In some organizations, the purchasing agent conducts all the negotiations with the contractor; the other parties are involved only in the needs specification. In other organizations, there is no purchasing agent, so the data processing department deals directly with contractors for the acquisition of application systems.

• Operations and Maintenance
The maintenance and operations of purchased applications may be subject to contractual constraints. For example, some contracts limit the frequency with which an application can be run without paying extra charges, and limit an organization’s ability to duplicate the application system in order to run it in a second location. Some purchased applications can be maintained by in-house personnel, but others do not come with source code and thus are not maintainable by in-house personnel. There may also be problems in connecting a purchased application from one contractor with purchased or rented software from another contractor. It is important that software from multiple contractors be evaluated as to its capability to work in the same operating environment.

The contract lists the obligations assumed by both parties. The contractor may be obligated to meet contractual requirements for such things as updating the application system to be usable with new versions of operating systems, etc. The purchasing organization may have obligations for protecting the application system from compromise, paying for extensive use of the application, etc. These provisions should be monitored and enforced during the life of the contract. Provisions which are violated and not enforced may be unenforceable at a later point in time due to the implied agreement by one party not to enforce a provision of the contract. Therefore, it is important that contracts be reviewed regularly to determine that all the provisions of the contract are being enforced.

10.4 Contracting for Software Developed by Outside Organizations

10.4.0.1 What Contracts Should Contain

Contracts are legal documents. Fully understanding the impact of provisions being included in, or excluded from, the contract may require legal training. However, the following information should be included in all contracts:

• What is done.
The contract should specify the deliverables to be obtained as a result of execution of the contract. Deliverables should be specified in sufficient detail so that it can be determined whether or not the desired product has been received.

• Who does it.


The obligations of both contractual parties should be spelled out in detail in the contract.

• When it is done.
The dates on which the contractual obligations need to be fulfilled should be specified in the contract.

• How it is done.
The contract should specify the methods by which the deliverables are to be prepared if that is important in achieving the contractual needs. For example, the organization may not want certain types of source instructions used in developing an application system because they plan to perform the maintenance with in-house personnel.

• Where it is done.
The location where the deliverables are to be prepared, delivered, operated, and maintained should be specified as part of the contract.

• Penalties for nonperformance.
If the contractual agreements are not followed, the penalties should be specified in the contract. For example, if the contractor is late in delivering the work products, the contract may specify a penalty of x dollars per day.

10.5 Operating Software Developed by Outside Organizations

Three activities occur once the software is ready for delivery. These are acceptance testing, operation and maintenance of the software, and contractual relations.

10.5.1 Acceptance Testing

The acceptance testing phase is necessary to verify that the contractual requirements have been met. During this phase, the customer’s personnel verify that the delivered products are what the organization needs. This requires examination and testing of the deliverables.

The contractor has a responsibility to test the application to determine that it meets contractual requirements. However, the contractor will be testing against what the contractor believes to be the contractual requirements. This may or may not be what the customer needs. The customer tests against what is actually wanted.

In some cases, there will be a difference between what the user wants and what the contractor delivers. If the contract has specified the deliverables in sufficient detail, the customer will have adequate recourse for correction. However, if the contract is loosely worded, the customer may be obligated to pay all or part of the additional costs necessary to meet the customer’s specifications.

Ideally, the acceptance test phase occurs throughout the development process if the application is being developed for the customer. If differences can be uncovered during the development phase, correction is not costly and will usually be made by the contractor. However, problems uncovered after the application system has been developed can be costly, and may result in resistance by the contractor to making the change.

If the application system has been developed prior to contract negotiations, the user can perform acceptance testing prior to signing the contract. This is ideal from a user perspective because they know exactly what they are getting, or what modifications are needed, prior to contract signing. The customer is always in a better position to negotiate changes prior to contract signing than after contract signing.

10.5.1.1 Acceptance Testing Concerns

The primary concern of the user during acceptance testing is that the deliverables meet the user requirements. The specific concerns during this phase are:

• Meets specifications
All of the deliverables provided by the contractor meet the user requirements as specified in the contract.

• On time
The delivered products will be in the hands of the user on the date specified in the contract.

• Adequate test data
The customer should prepare sufficient test data so that the deliverables can be adequately tested. For application systems, this may be test transactions to verify the processing performed by the application. The acceptance test criteria for training courses and manuals may be review and examination. For example, the contractor may be asked to put on a training class in order to determine the adequacy of the material.

• Preparation for operation
Any supporting activities should be completed in time for acceptance testing. This may involve ordering forms, making changes to existing operating systems and other application systems, developing procedures to use the application, etc. The customer should have identified these needs during the feasibility phase, and worked on meeting those requirements while the contractor was preparing the application system.

• User training
The users of the application should be trained prior to the acceptance test phase. This training may be provided by the organization itself, or it may be done by the contractor.

• When can it begin


Acceptance testing should occur on the date which was specified in the contract.

• Conversion to production
The procedures and steps necessary to put the application into production should be tested during the acceptance testing phase. These are normally customer obligations to prepare files, schedule production, etc. These procedures should be tested just as vigorously as the contractor’s application system.

10.5.1.2 Operation and Maintenance of the Software

The contractual arrangements determine the ongoing relationship between the contractor and the customer. This relationship may continue as long as the customer continues to use the application system. It encompasses continued service and maintenance. However, the ongoing relationship may only involve guarantee and warranty of the product.

Frequently, organizations overlook contractual agreements after the application system has gone operational. This is because problems may not occur initially, and when they do occur the organization neglects to go back to the contract to determine the obligation of the contractor for these problems.

The major concern during the operation and maintenance of a purchased or leased application system is that both parties to the agreement comply with the contractual requirements. Contracts should be periodically reviewed to verify that the contractual requirements are being met.

Reviewers should evaluate negotiations over time to determine that the contractor fulfills their part of the contract. There are also instances where the customer is obligated to meet ongoing contractual obligations, and compliance with those obligations should be verified.

10.5.1.3 Contractual Relations

The relationship between the contractor and the customer is an ongoing relationship. Time and effort must be expended to keep it a viable and healthy relationship. The relationship should not be considered fixed at the point in time the contract was signed but, rather, a continually evolving relationship in which the interests of both the contractor and the customer are protected.

The contractor is anxious to sell more applications and services to the customer. Therefore, special needs and interests of the customer are normally handled even if they are above and beyond the contractual negotiations. These are normally performed in an effort to continually improve the relationship in hopes of ongoing business.

The customer hopefully has received a valuable product from the contractor. In most instances, the customer either could not have produced this product, or could not have produced it at an equivalent cost or within an equivalent time span. Thus, it is normally within the best interest of the customer to gain more products of a similar nature.

10.5.1.3.1 Contractual Relation Concerns

The concerns that arise in maintaining a relationship of harmony and good will include:


• Contractor obligations met
The contractor should meet their requirements as specified in the contract.

• Customer obligations met
The customer should meet their requirements as specified in the contract.

• Need met
It is to the benefit of both parties to have the customer satisfied with the application system. Even when the initial deliverables meet the customer’s need, there will be ongoing maintenance required to meet the continually evolving needs of the customer. The methods of doing this should be specified in the contract, and those requirements should form the basis for both parties specifying and delivering new contractual obligations.

• Limits on cost increases
The cost specified in the contract should include provisions for ongoing costs. In an inflationary economy, it may be advantageous to have limits placed on cost increases. For example, if service is provided at an hourly rate, the increases in that rate might be specified in the contract.

• Exercising options (e.g., added features)
Many contracts contain options for additional features or work. When new requirements arise, it should first be determined if they can be obtained by exercising some of the options already available in the contract.

• Renegotiation
Many contracts contain provisions to renegotiate in the event of specified circumstances. For example, if the customer wants to extend the contract, that extension may involve a renegotiation of the terms of the contract. The renegotiation process should be conducted in accordance with the contractual specifications.

• Compensation for error
If the contractor agrees to compensate for problems due to contractor causes, the penalties should be specified in the contract.

• Returns on termination
If the contract is terminated, the contractual termination procedures should be performed in accordance with the contract requirements.


Appendix A
Vocabulary

The organization of this document is primarily alphabetical. Acronyms are grouped at the beginning of each alphabetical section, and are followed by words, terms and phrases. Acronyms are expanded at the beginning of each alphabetical section and defined with the full term or phrase. Four modifications are the grouping of terms and phrases in the domains of specifications, testing, qualification, and validation. Those related terms are located sequentially to assist the user in finding all defined terms in these domains, e.g., functional testing is defined under testing, functional.

The terms are defined, as much as possible, using available standards. The source of such definitions appears immediately following the term or phrase in parentheses, e.g. (NIST). The source documents are listed below.

The New IEEE Standard Dictionary of Electrical and Electronics Terms, IEEE Std. 100-1992.

IEEE Standards Collection, Software Engineering, 1994 Edition, published by the Institute of Electrical and Electronic Engineers Inc.

National Bureau of Standards [NBS] Special Publication 500-75 Validation, Verification, and Testing of Computer Software, 1981.

Federal Information Processing Standards [FIPS] Publication 101, Guideline For Lifecycle Validation, Verification, and Testing of Computer Software, 1983.

Federal Information Processing Standards [FIPS] Publication 105, Guideline for Software Documentation Management, 1984.

American National Standard for Information Systems, Dictionary for Information Systems, American National Standards Institute, 1991.

FDA Technical Report, Software Development Activities, July 1987.

FDA Guide to Inspection of Computerized Systems in Drug Processing, 1983.

FDA Guideline on General Principles of Process Validation, May 1987.


Page 236: Casq Cbok Rev 6-2

Guide to the CASQ CBOK

Reviewer Guidance for Computer Controlled Medical Devices Undergoing 510(k) Review, Office of Device Evaluation, CDRH, FDA, August 1991.

HHS Publication FDA 90-4236, Preproduction Quality Assurance Planning.

MIL-STD-882C, Military Standard System Safety Program Requirements, 19JAN1993.

International Electrotechnical Commission, International Standard 1025, Fault Tree Analysis.

International Electrotechnical Commission, International Standard 812, Analysis Techniques for System Reliability - Procedure for Failure Mode and Effects Analysis [FMEA].

FDA recommendations, Application of the Medical Device GMP to Computerized Devices and Manufacturing Processes, May 1992.

Pressman, R., Software Engineering, A Practitioner's Approach, Third Edition, McGraw-Hill, Inc., 1992.

Myers, G., The Art of Software Testing, Wiley Interscience, 1979.

Beizer, B., Software Testing Techniques, Second Edition, Van Nostrand Reinhold, 1990.

Additional general references used in developing some definitions are:

Bohl, M., Information Processing, Fourth Edition, Science Research Associates, Inc., 1984.

Freedman, A., The Computer Glossary, Sixth Edition, American Management Association, 1993.

McGraw-Hill Electronics Dictionary, Fifth Edition, 1994, McGraw-Hill Inc.

McGraw-Hill Dictionary of Scientific & Technical Terms, Fifth Edition, 1994, McGraw-Hill Inc.

Webster's New Universal Unabridged Dictionary, Deluxe Second Edition, 1979.


- A -

ADC. analog-to-digital converter.

ALU. arithmetic logic unit.

ANSI. American National Standards Institute.

ASCII. American Standard Code for Information Interchange.

abstraction. The separation of the logical properties of data or function from its implementation in a computer program. See: encapsulation, information hiding, software engineering.

access. (ANSI) To obtain the use of a resource.

access time. (ISO) The time interval between the instant at which a call for data is initiated and the instant at which the delivery of the data is completed.

accident. See: mishap.

accuracy. (IEEE) (1) A qualitative assessment of correctness or freedom from error. (2) A quantitative measure of the magnitude of error. Contrast with precision. (CDRH) (3) The measure of an instrument's capability to approach a true or absolute value. It is a function of precision and bias. See: bias, precision, calibration.

accuracy study processor. A software tool used to perform calculations or determine accuracy of computer manipulated program variables.

actuator. A peripheral [output] device which translates electrical signals into mechanical actions; e.g., a stepper motor which acts on an electrical signal received from a computer instructing it to turn its shaft a certain number of degrees or a certain number of rotations. See: servomechanism.

adaptive maintenance. (IEEE) Software maintenance performed to make a computer program usable in a changed environment. Contrast with corrective maintenance, perfective maintenance.

address. (1) A number, character, or group of characters which identifies a given device or a storage location which may contain a piece of data or a program step. (2) To refer to a device or storage location by an identifying number, character, or group of characters.

addressing exception. (IEEE) An exception that occurs when a program calculates an address outside the bounds of the storage available to it.

algorithm. (IEEE) (1) A finite set of well-defined rules for the solution of a problem in a finite number of steps. (2) Any sequence of operations for performing a specific task.

algorithm analysis. (IEEE) A software V&V task to ensure that the algorithms selected are correct, appropriate, and stable, and meet all accuracy, timing, and sizing requirements.

alphanumeric. Pertaining to a character set that contains letters, digits, and usually other characters such as punctuation marks.


American National Standards Institute. 11 West 42nd Street, New York, N.Y. 10036. An organization that coordinates the development of U.S. voluntary national standards for nearly all industries. It is the U.S. member body to ISO and IEC. Information technology standards pertain to programming languages, electronic data interchange, telecommunications and physical properties of diskettes, cartridges and magnetic tapes.

American Standard Code for Information Interchange. A seven-bit code adopted as a standard to represent specific data characters in computer systems, and to facilitate interchange of data between various machines and systems. Provides 128 possible characters, the first 32 of which are non-printing characters used for transmission and device control. Since common storage is an 8-bit byte [256 possible characters] and ASCII uses only 128, the extra bit is used to hold a parity bit or create special symbols. See: extended ASCII.
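
As an informal illustration (not drawn from any cited standard; the sample string is arbitrary), a short Python sketch of the character-to-code relationship:

    # Each ASCII character maps to a code in the range 0..127 (7 bits).
    assert ord('A') == 65 and chr(65) == 'A'
    # Codes 0..31 are non-printing control characters, e.g., line feed:
    assert ord('\n') == 10
    # Stored in an 8-bit byte, the spare high bit can carry parity or
    # select an extended (non-ASCII) character set.
    print(max(ord(c) for c in 'plain ASCII text'))   # always <= 127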

analog. Pertaining to data [signals] in the form of continuously variable [wave form] physical quantities; e.g., pressure, resistance, rotation, temperature, voltage. Contrast with digital.

analog device. (IEEE) A device that operates with variables represented by continuously measured quantities such as pressures, resistances, rotations, temperatures, and voltages.

analog-to-digital converter. An input related device which translates an input device's [sensor] analog signals to the corresponding digital signals needed by the computer. Contrast with DAC [digital-to-analog converter]. See: analog, digital.

analysis. (1) To separate into elemental parts or basic principles so as to determine the nature of the whole. (2) A course of reasoning showing that a certain result is a consequence of assumed premises. (3) (ANSI) The methodical investigation of a problem, and the separation of the problem into smaller related units for further detailed study.

anomaly. (IEEE) Anything observed in the documentation or operation of software that deviates from expectations based on previously verified software products or reference documents. See: bug, defect, error, exception, fault.

application program. See: application software.

application software. (IEEE) Software designed to fill specific needs of a user; for example, software for navigation, payroll, or process control. Contrast with support software; system software.

architectural design. (IEEE) (1) The process of defining a collection of hardware and software components and their interfaces to establish the framework for the development of a computer system. See: functional design. (2) The result of the process in (1). See: software engineering.

architecture. (IEEE) The organizational structure of a system or component. See: component, module, subprogram, routine.

archival database. (ISO) An historical copy of a database saved at a significant point in time for use in recovery or restoration of the database.

archive. (IEEE) A lasting collection of computer system data or other records that are in long term storage.


archive file. (ISO) A file that is part of a collection of files set aside for later research or verification, for security purposes, for historical or legal purposes, or for backup.

arithmetic logic unit. The [high speed] circuits within the CPU which are responsible for performing the arithmetic and logical operations of a computer.

arithmetic overflow. (ISO) That portion of a numeric word that expresses the result of an arithmetic operation, by which the length of the word exceeds the word length of the space provided for the representation of the number. See: overflow, overflow exception.

arithmetic underflow. (ISO) In an arithmetic operation, a result whose absolute value is too small to be represented within the range of the numeration system in use. See: underflow, underflow exception.

array. (IEEE) An n-dimensional ordered set of data items identified by a single name and one or more indices, so that each element of the set is individually addressable; e.g., a matrix, table, or vector.

as built. (NIST) Pertaining to an actual configuration of software code resulting from a software development project.

assemble. See: assembling.

assembler. (IEEE) A computer program that translates programs [source code files] written in assembly language into their machine language equivalents [object code files]. Contrast with compiler, interpreter. See: cross-assembler, cross-compiler.

assembling. (NIST) Translating a program expressed in an assembly language into object code.

assembly code. See: assembly language.

assembly language. (IEEE) A low level programming language, that corresponds closely to the instruction set of a given computer, allows symbolic naming of operations and addresses, and usually results in a one-to-one translation of program instructions [mnemonics] into machine instructions. See: low-level language.

assertion. (NIST) A logical expression specifying a program state that must exist or a set of conditions that program variables must satisfy at a particular point during program execution.

assertion checking. (NIST) Checking of user- embedded statements that assert relationships between elements of a program. An assertion is a logical expression that specifies a condition or relation among program variables. Tools that test the validity of assertions as the program is executing or tools that perform formal verification of assertions have this feature. See: instrumentation; testing, assertion.
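
As an informal illustration (the function and values are invented for the example), user-embedded assertions in Python:

    def average(values):
        # Assertion: a program state that must hold at this point.
        assert len(values) > 0, "average() requires at least one value"
        result = sum(values) / len(values)
        # Assertion on the relationship among program variables.
        assert min(values) <= result <= max(values)
        return result

    print(average([2, 4, 9]))   # runs; both assertions hold
    # average([]) would raise AssertionError, flagging the violated condition.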

asynchronous. Occurring without a regular time relationship, i.e., timing independent.

asynchronous transmission. A timing independent method of electrical transfer of data in which the sending and receiving units are synchronized on each character, or small block of characters, usually by the use of start and stop signals. Contrast with synchronous transmission.

audit. (1) (IEEE) An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria. See: functional configuration audit, physical configuration audit. (2) (ANSI) To conduct an independent review and examination of system records and activities in order to test the adequacy and effectiveness of data security and data integrity procedures, to ensure compliance with established policy and operational procedures, and to recommend any necessary changes. See: computer system audit, software audit.

audit trail. (1) (ISO) Data in the form of a logical path linking a sequence of events, used to trace the transactions that have affected the contents of a record. (2) A chronological record of system activities that is sufficient to enable the reconstruction, reviews, and examination of the sequence of environments and activities surrounding or leading to each event in the path of a transaction from its inception to output of final results.

auxiliary storage. Storage device other than main memory [RAM]; e.g., disks and tapes.

- B -

BIOS. basic input/output system.

bps. bits per second.

band. Range of frequencies used for transmitting a signal. A band can be identified by the difference between its lower and upper limits, i.e. bandwidth, as well as by its actual lower and upper limits; e.g., a 10 MHz band in the 100 to 110 MHz range.

bandwidth. The transmission capacity of a computer channel, communications line or bus. It is expressed in cycles per second [Hz], and also is often stated in bits or bytes per second. See: band.

bar code. (ISO) A code representing characters by sets of parallel bars of varying thickness and separation that are read optically by transverse scanning.

baseline. (NIST) A specification or product that has been formally reviewed and agreed upon, that serves as the basis for further development, and that can be changed only through formal change control procedures.

BASIC. An acronym for Beginners All-purpose Symbolic Instruction Code, a high-level programming language intended to facilitate learning to program in an interactive environment.

basic input/output system. Firmware that activates peripheral devices in a PC. Includes routines for the keyboard, screen, disk, parallel port and serial port, and for internal services such as time and date. It accepts requests from the device drivers in the operating system as well as from application programs. It also contains autostart functions that test the system on startup and prepare the computer for operation. It loads the operating system and passes control to it.

batch. (IEEE) Pertaining to a system or mode of operation in which inputs are collected and processed all at one time, rather than being processed as they arrive, and a job, once started, proceeds to completion without additional input or user interaction. Contrast with conversational, interactive, on-line, real time.


batch processing. Execution of programs serially with no interactive processing. Contrast with real time processing.

baud. The signalling rate of a line; the switching speed, or number of transitions [voltage or frequency changes] made per second. At low speeds baud is equal to bits per second; e.g., 300 baud is equal to 300 bps. However, one baud can be made to represent more than one bit per second.
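
A small worked example (the rates are hypothetical): when each signal transition encodes several bits, the bit rate exceeds the baud rate.

    baud = 2400              # hypothetical signalling rate (transitions per second)
    bits_per_symbol = 4      # each transition encodes 4 bits in this scheme
    bps = baud * bits_per_symbol
    print(bps)               # 9600 bits per second from a 2400 baud line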

benchmark. A standard against which measurements or comparisons can be made.

bias. A measure of how closely the mean value in a series of replicate measurements approaches the true value. See: accuracy, precision, calibration.
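
A small worked example (the readings and true value are invented): bias is the difference between the mean of replicate measurements and the true value.

    true_value = 10.0
    replicates = [10.2, 10.1, 10.3, 10.2]      # hypothetical repeated readings
    mean = sum(replicates) / len(replicates)   # 10.2
    bias = mean - true_value                   # +0.2: the readings run high
    print(round(bias, 3))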

binary. The base two number system. Permissible digits are "0" and "1".

bit. A contraction of the term binary digit. The bit is the basic unit of digital data. It may be in one of two states, logic 1 or logic 0. It may be thought of as a switch which is either on or off. Bits are usually combined into computer words of various sizes, such as the byte.

bits per second. A measure of the speed of data transfer in a communications system.

black-box testing. See: testing, functional.

block. (ISO) (1) A string of records, words, or characters that for technical or logical purposes are treated as a unity. (2) A collection of contiguous records that are recorded as a unit, and the units are separated by interblock gaps. (3) A group of bits or digits that are transmitted as a unit and that may be encoded for error-control purposes. (4) In programming languages, a subdivision of a program that serves to group related statements, delimit routines, specify storage allocation, delineate the applicability of labels, or segment parts of the program for other purposes. In FORTRAN, a block may be a sequence of statements; in COBOL, it may be a physical record.

block check. (ISO) The part of the error control procedure that is used for determining that a block of data is structured according to given rules.

block diagram. (NIST) A diagram of a system, instrument or computer, in which the principal parts are represented by suitably annotated geometrical figures to show both the basic functions of the parts and the functional relationships between them.

block length. (1) (ISO) The number of records, words or characters in a block. (2) (ANSI) A measure of the size of a block, usually specified in units such as records, words, computer words, or characters.

block transfer. (ISO) The process, initiated by a single action, of transferring one or more blocks of data.

blocking factor. (ISO) The number of records in a block. The number is computed by dividing the size of the block by the size of each record contained therein. Syn: grouping factor.
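
For example (illustrative sizes only), the blocking factor is the block size divided by the record size:

    block_size_bytes = 4096      # hypothetical block size
    record_size_bytes = 128      # hypothetical fixed record length
    blocking_factor = block_size_bytes // record_size_bytes
    print(blocking_factor)       # 32 records per block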

blueprint. An exact or detailed plan or outline. Contrast with graph.

bomb. A trojan horse which attacks a computer system upon the occurrence of a specific logical event [logic bomb], the occurrence of a specific time-related logical event [time bomb], or is hidden in electronic mail or data and is triggered when read in a certain way [letter bomb]. See: trojan horse, virus, worm.

boolean. Pertaining to the principles of mathematical logic developed by George Boole, a nineteenth century mathematician. Boolean algebra is the study of operations carried out on variables that can have only one of two possible values; i.e., 1 (true) and 0 (false). As ADD, SUBTRACT, MULTIPLY, and DIVIDE are the primary operations of arithmetic, AND, OR, and NOT are the primary operations of Boolean Logic. In Pascal a boolean variable is a variable that can have one of two possible values, true or false.
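
As a brief informal illustration in Python (the variable names are arbitrary), boolean variables take one of two values and are combined with AND, OR, and NOT:

    in_range = True
    approved = False
    print(in_range and approved)    # False (AND)
    print(in_range or approved)     # True  (OR)
    print(not approved)             # True  (NOT)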

boot. (1) (IEEE) To initialize a computer system by clearing memory and reloading the operating system. (2) To cause a computer system to reach a known beginning state. A boot program, in firmware, typically performs this function which includes loading basic instructions which tell the computer how to load programs into memory and how to begin executing those programs. A distinction can be made between a warm boot and a cold boot. A cold boot means starting the system from a powered-down state. A warm boot means restarting the computer while it is powered-up. Important differences between the two procedures are: 1) a power-up self-test, in which various portions of the hardware [such as memory] are tested for proper operation, is performed during a cold boot while a warm boot does not normally perform such self-tests, and 2) a warm boot does not clear all memory.

bootstrap. (IEEE) A short computer program that is permanently resident or easily loaded into a computer and whose execution brings a larger program, such as an operating system or its loader, into memory.

boundary value. (1) (IEEE) A data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component. (2) A value which lies at, or just inside or just outside a specified range of valid input and output values.

boundary value analysis. (NBS) A selection technique in which test data are chosen to lie along "boundaries" of the input domain [or output range] classes, data structures, procedure parameters, etc. Choices often include maximum, minimum, and trivial values or parameters. This technique is often called stress testing. See: testing, boundary value.
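
For instance, given a hypothetical input field that must accept integers from 1 to 100, boundary value analysis selects test data at and around the edges of the class (the range and values below are invented for illustration):

    low, high = 1, 100                          # hypothetical valid range
    boundary_cases = [low - 1, low, low + 1,    # around the lower boundary
                      high - 1, high, high + 1] # around the upper boundary
    typical_case = [50]                         # a trivial mid-range value
    print(boundary_cases + typical_case)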

box diagram. (IEEE) A control flow diagram consisting of a rectangle that is subdivided to show sequential steps, if-then-else conditions, repetition, and case conditions. Syn: Chapin chart, Nassi-Shneiderman chart, program structure diagram. See: block diagram, bubble chart, flowchart, graph, input-process-output chart, structure chart.

branch. An instruction which causes program execution to jump to a new point in the program sequence, rather than execute the next instruction. Syn: jump.

branch analysis. (Myers) A test case identification technique which produces enough test cases such that each decision has a true and a false outcome at least once. Contrast with path analysis.

branch coverage. (NBS) A test coverage criteria which requires that for each decision point each possible branch be executed at least once. Syn: decision coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage. See: testing, branch.
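
As an informal sketch (the function is invented), two test cases give branch coverage of a single decision because the decision takes both a true and a false outcome:

    def classify(x):
        if x >= 0:          # the decision point
            return "non-negative"
        return "negative"

    # Branch (decision) coverage: each possible branch executed at least once.
    assert classify(5) == "non-negative"   # true branch
    assert classify(-3) == "negative"      # false branch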


bubble chart. (IEEE) A data flow, data structure, or other diagram in which entities are depicted with circles [bubbles] and relationships are represented by links drawn between the circles. See: block diagram, box diagram, flowchart, graph, input-process-output chart, structure chart.

buffer. A device or storage area [memory] used to store data temporarily to compensate for differences in rates of data flow, time of occurrence of events, or amounts of data that can be handled by the devices or processes involved in the transfer or use of the data.

bug. A fault in a program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, defect, error, exception, fault.

bus. A common pathway along which data and control signals travel between different hardware devices within a computer system. (A) When bus architecture is used in a computer, the CPU, memory and peripheral equipment are interconnected through the bus. The bus is often divided into two channels, a control channel to select where data is located [address bus], and the other to transfer the data [data bus or I/O bus]. Common buses are: ISA [Industry Standard Architecture] the original IBM PC 16 bit AT bus; EISA [Extended Industry Standard Architecture] a 32 bit extension of the ISA bus [which provides for bus mastering]; MCA [MicroChannel Architecture] an IBM 32 bit bus; Multibus I & II [advanced, 16 & 32 bit respectively, bus architecture by Intel used in industrial, military and aerospace applications]; NuBus, a 32 bit bus architecture originally developed at MIT [A version is used in the Apple Macintosh computer]; STD bus, a bus architecture used in medical and industrial equipment due to its small size and rugged design [Originally 8 bits, with extensions to 16 and 32 bits]; TURBO Channel, a DEC 32 bit data bus with peak transfer rates of 100 MB/second; VMEbus [Versa Module Eurocard Bus], a 32 bit bus from Motorola, et al., used in industrial, commercial and military applications worldwide [VME64 is an expanded version that provides 64 bit data transfer and addressing]. (B) When bus architecture is used in a network, all terminals and computers are connected to a common channel that is made of twisted wire pairs, coaxial cable, or optical fibers. Ethernet is a common LAN architecture using a bus topology.

byte. A sequence of adjacent bits, usually eight, operated on as a unit.

- C -

CAD. computer aided design.

CAM. computer aided manufacturing.

CASE. computer aided software engineering.

CCITT. Consultative Committee for International Telephony and Telegraphy.

CD-ROM. compact disc - read only memory.

CISC. complex instruction set computer.

CMOS. complementary metal-oxide semiconductor.

CO-AX. coaxial cable.


COTS. configurable, off-the-shelf software.

CP/M. Control Program for Microcomputers.

CPU. central processing unit.

CRC. cyclic redundancy [check] code.

CRT. cathode ray tube.

C. A general purpose high-level programming language. Created for use in the development of computer operating systems software. It strives to combine the power of assembly language with the ease of a high-level language.

C++. An object-oriented high-level programming language.

calibration. Ensuring continuous adequate performance of sensing, measurement, and actuating equipment with regard to specified accuracy and precision requirements. See: accuracy, bias, precision.

call graph. (IEEE) A diagram that identifies the modules in a system or computer program and shows which modules call one another. Note: The result is not necessarily the same as that shown in a structure chart. Syn: call tree, tier chart. Contrast with structure chart. See: control flow diagram, data flow diagram, data structure diagram, state diagram.

cathode ray tube. An output device. Syn: display, monitor, screen.

cause effect graph. (Myers) A Boolean graph linking causes and effects. The graph is actually a digital-logic circuit (a combinatorial logic network) using a simpler notation than standard electronics notation.

cause effect graphing. (1) (NBS) Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect set. (2) (Myers) A systematic method of generating test cases representing combinations of conditions. See: testing, functional.

central processing unit. The unit of a computer that includes the circuits controlling the interpretation of program instructions and their execution. The CPU controls the entire computer. It receives and sends data through input-output channels, retrieves data and programs from memory, and conducts mathematical and logical functions of a program.

certification. (ANSI) In computer systems, a technical evaluation, made as part of and in support of the accreditation process, that establishes the extent to which a particular computer system or network design and implementation meet a prespecified set of requirements.

change control. The processes, authorities for, and procedures to be used for all changes that are made to the computerized system and/or the system's data. Change control is a vital subset of the Quality Assurance [QA] program within an establishment and should be clearly described in the establishment's SOPs. See: configuration control.

change tracker. A software tool which documents all changes made to a program.

check summation. A technique for error detection to ensure that data or program files have been accurately copied or transferred. Basically, a redundant check in which groups of digits, e.g., a file, are summed, usually without regard to overflow, and that sum checked against a previously computed sum to verify operation accuracy. Contrast with cyclic redundancy check [CRC], parity check. See: checksum.

checksum. (IEEE) A sum obtained by adding the digits in a numeral, or group of numerals [a file], usually without regard to meaning, position, or significance. See: check summation.
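
As an informal illustration (a simplified scheme; the data is invented), a checksum computed before and after a transfer, discarding overflow beyond 16 bits:

    def checksum(data: bytes) -> int:
        # Sum the byte values, ignoring overflow beyond 16 bits.
        return sum(data) & 0xFFFF

    original = b"quality record 042"         # hypothetical file contents
    stored_sum = checksum(original)

    received = original                      # the copy to be verified
    print(checksum(received) == stored_sum)  # True only if the copy is intact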

chip. See: integrated circuit.

client-server. A term used in a broad sense to describe the relationship between the receiver and the provider of a service. In the world of microcomputers, the term client-server describes a networked system where front-end applications, as the client, make service requests upon another networked system. Client-server relationships are defined primarily by software. In a local area network [LAN], the workstation is the client and the file server is the server. However, client-server systems are inherently more complex than file server systems. Two disparate programs must work in tandem, and there are many more decisions to make about separating data and processing between the client workstations and the database server. The database server encapsulates database files and indexes, restricts access, enforces security, and provides applications with a consistent interface to data via a data dictionary.

clock. (ISO) A device that generates periodic, accurately spaced signals used for such purposes as timing, regulation of the operations of a processor, or generation of interrupts.

coaxial cable. High-capacity cable used in communications and video transmissions. Provides a much higher bandwidth than twisted wire pair.

COBOL. Acronym for COmmon Business Oriented Language. A high-level programming language intended for use in the solution of problems in business data processing.

code. See: program, source code.

code audit. (IEEE) An independent review of source code by a person, team, or tool to verify compliance with software design documentation and programming standards. Correctness and efficiency may also be evaluated. Contrast with code inspection, code review, code walkthrough. See: static analysis.

code auditor. A software tool which examines source code for adherence to coding and documentation conventions.

code inspection. (Myers/NBS) A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. Syn: Fagan Inspection. See: static analysis.

code review. (IEEE) A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Contrast with code audit, code inspection, code walkthrough. See: static analysis.

code walkthrough. (Myers/NBS) A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. Contrast with code audit, code inspection, code review. See: static analysis.

coding. (IEEE) (1) In software engineering, the process of expressing a computer program in a programming language. (2) The transforming of logic and data from design specifications (design descriptions) into a programming language. See: implementation.

coding standards. Written procedures describing coding [programming] style conventions specifying rules governing the use of individual constructs provided by the programming language, and naming, formatting, and documentation requirements which prevent programming errors, control complexity and promote understandability of the source code. Syn: development standards, programming standards.

comment. (1) (ISO) In programming languages, a language construct that allows [explanatory] text to be inserted into a program and that does not have any effect on the execution of the program. (2) (IEEE) Information embedded within a computer program, job control statements, or a set of data, that provides clarification to human readers but does not affect machine interpretation.

compact disc - read only memory. A compact disk used for the permanent storage of text, graphic or sound information. Digital data is represented very compactly by tiny holes that can be read by lasers attached to high resolution sensors. Capable of storing up to 680 MB of data, equivalent to 250,000 pages of text, or 20,000 medium resolution images. This storage media is often used for archival purposes. Syn: optical disk, write-once read-many times disk.

comparator. (IEEE) A software tool that compares two computer programs, files, or sets of data to identify commonalities or differences. Typical objects of comparison are similar versions of source code, object code, data base files, or test results.

compatibility. (ANSI) The capability of a functional unit to meet the requirements of a specified interface.

compilation. (NIST) Translating a program expressed in a problem-oriented language or a procedure oriented language into object code. Contrast with assembling, interpret. See: compiler.

compile. See: compilation.

compiler. (1) (IEEE) A computer program that translates programs expressed in a high-level language into their machine language equivalents. (2) The compiler takes the finished source code listing as input and outputs the machine code instructions that the computer must have to execute the program. See: assembler, interpreter, cross-assembler, cross-compiler.

compiling. See: compilation.

complementary metal-oxide semiconductor. A type of integrated circuit widely used for processors and memories. It is a combination of transistors on a single chip connected to complementary digital circuits.

completeness. (NIST) The property that all necessary parts of the entity are included. Completeness of a product is often used to express the fact that all requirements have been met by the product. See: traceability analysis.


complex instruction set computer. Traditional computer architecture that operates with large sets of possible instructions. Most computers are in this category, including the IBM compatible microcomputers. As computing technology evolved, instruction sets expanded to include newer instructions which are complex in nature and require several to many execution cycles and, therefore, more time to complete. Computers which operate with system software based on these instruction sets have been referred to as complex instruction set computers. Contrast with reduced instruction set computer [RISC].

complexity. (IEEE) (1) The degree to which a system or component has a design or implementation that is difficult to understand and verify. (2) Pertaining to any of a set of structure based metrics that measure the attribute in (1).

component. See: unit.

computer. (IEEE) (1) A functional unit that can perform substantial computations, including numerous arithmetic operations, or logic operations, without human intervention during a run. (2) A functional programmable unit that consists of one or more associated processing units and peripheral equipment, that is controlled by internally stored programs, and that can perform substantial computations, including numerous arithmetic operations, or logic operations, without human intervention.

computer aided design. The use of computers to design products. CAD systems are high speed workstations or personal computers using CAD software and input devices such as graphic tablets and scanners to model and simulate the use of proposed products. CAD output is a printed design or electronic output to CAM systems. CAD software is available for generic design or specialized uses such as architectural, electrical, and mechanical design. CAD software may also be highly specialized for creating products such as printed circuits and integrated circuits.

computer aided manufacturing. The automation of manufacturing systems and techniques, including the use of computers to communicate work instructions to automate machinery for the handling of the processing [numerical control, process control, robotics, material requirements planning] needed to produce a workpiece.

computer aided software engineering. An automated system for the support of software development including an integrated tool set, i.e., programs, which facilitate the accomplishment of software engineering methods and tasks such as project planning and estimation, system and software requirements analysis, design of data structure, program architecture and algorithm procedure, coding, testing and maintenance.

computer instruction set. (ANSI) A complete set of the operators of the instructions of a computer together with a description of the types of meanings that can be attributed to their operands. Syn: machine instruction set.

computer language. (IEEE) A language designed to enable humans to communicate with computers. See: programming language.

computer program. See: program.

computer science. (ISO) The branch of science and technology that is concerned with methods and techniques relating to data processing performed by automatic means.


computer system. (ANSI) a functional unit, consisting of one or more computers and associated peripheral input and output devices, and associated software, that uses common storage for all or part of a program and also for all or part of the data necessary for the execution of the program; executes user-written or user-designated programs; performs user-designated data manipulation, including arithmetic operations and logic operations; and that can execute programs that modify themselves during their execution. A computer system may be a stand-alone unit or may consist of several interconnected units. See: computer, computerized system.

computer system audit. (ISO) An examination of the procedures used in a computer system to evaluate their effectiveness and correctness and to recommend improvements. See: software audit.

computer system security. (IEEE) The protection of computer hardware and software from accidental or malicious access, use, modification, destruction, or disclosure. Security also pertains to personnel, data, communications, and the physical protection of computer installations. See: bomb, trojan horse, virus, worm.

computer word. A sequence of bits or characters that is stored, addressed, transmitted, and operated on as a unit within a given computer. Typically one to four bytes long, depending on the make of computer.

computerized system. Includes hardware, software, peripheral devices, personnel, and documentation; e.g., manuals and Standard Operating Procedures. See: computer, computer system.

concept phase. (IEEE) The initial phase of a software development project, in which user needs are described and evaluated through documentation; e.g., statement of needs, advance planning report, project initiation memo, feasibility studies, system definition documentation, regulations, procedures, or policies relevant to the project.

condition coverage. (Myers) A test coverage criteria requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage.
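
As an informal sketch (the function and fields are invented), condition coverage requires each individual condition in a compound decision to take both outcomes, which branch coverage alone does not guarantee:

    def ship(order_paid, in_stock):
        if order_paid and in_stock:      # two conditions in one decision
            return "ship"
        return "hold"

    # Each condition is made true and false at least once:
    assert ship(True, True) == "ship"    # paid=True,  stock=True
    assert ship(True, False) == "hold"   # paid=True,  stock=False
    assert ship(False, True) == "hold"   # paid=False, stock=True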

configurable, off-the-shelf software. Application software, sometimes general purpose, written for a variety of industries or users in a manner that permits users to modify the program to meet their individual needs.

configuration. (IEEE) (1) The arrangement of a computer system or component as defined by the number, nature, and interconnections of its constituent parts. (2) In configuration management, the functional and physical characteristics of hardware or software as set forth in technical documentation or achieved in a product.

configuration audit. See: functional configuration audit, physical configuration audit.

configuration control. (IEEE) An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. See: change control.


configuration identification. (IEEE) An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation.

configuration item. (IEEE) An aggregation of hardware, software, or both that is designated for configuration management and treated as a single entity in the configuration management process. See: software element.

configuration management. (IEEE) A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. See: configuration control, change control, software engineering.

consistency. (IEEE) The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a system or component. See: traceability.

consistency checker. A software tool used to test requirements in design specifications for both consistency and completeness.

constant. A value that does not change during processing. Contrast with variable.

constraint analysis. (IEEE) (1) Evaluation of the safety of restrictions imposed on the selected design by the requirements and by real world restrictions. The impacts of the environment on this analysis can include such items as the location and relation of clocks to circuit cards, the timing of a bus latch when using the longest safety-related timing to fetch data from the most remote circuit card, interrupts going unsatisfied due to a data flood at an input, and human reaction time. (2) verification that the program operates within the constraints imposed upon it by requirements, the design, and the target computer. Constraint analysis is designed to identify these limitations to ensure that the program operates within them, and to ensure that all interfaces have been considered for out-of-sequence and erroneous inputs.

Consultative Committee for International Telephony and Telegraphy. See: International Telecommunications Union - Telecommunications Standards Section.

control bus. (ANSI) A bus carrying the signals that regulate system operations. See: bus.

control flow. (ISO) In programming languages, an abstraction of all possible paths that an execution sequence may take through a program.

control flow analysis. (IEEE) A software V&V task to ensure that the proposed control flow is free of problems, such as design or code elements that are unreachable or incorrect.

control flow diagram. (IEEE) A diagram that depicts the set of all possible sequences in which operations may be performed during the execution of a system or program. Types include box diagram, flowchart, input-process-output chart, state diagram. Contrast with data flow diagram. See: call graph, structure chart.

Control Program for Microcomputers. An operating system. A registered trademark of Digital Research.


controller. Hardware that controls peripheral devices such as a disk or display screen. It performs the physical data transfers between main memory and the peripheral device.

conversational. (IEEE) Pertaining to an interactive system or mode of operation in which the interaction between the user and the system resembles a human dialog. Contrast with batch. See: interactive, on-line, real time.

coroutine. (IEEE) A routine that begins execution at the point at which operation was last suspended, and that is not required to return control to the program or subprogram that called it. Contrast with subroutine.

corrective maintenance. (IEEE) Maintenance performed to correct faults in hardware or software. Contrast with adaptive maintenance, perfective maintenance.

correctness. (IEEE) The degree to which software is free from faults in its specification, design and coding. The degree to which software, documentation and other items meet specified requirements. The degree to which software, documentation and other items meet user needs and expectations, whether specified or not.

coverage analysis. (NIST) Determining and assessing measures associated with the invocation of program structural elements to determine the adequacy of a test run. Coverage analysis is useful when attempting to execute each statement, branch, path, or iterative structure in a program. Tools that capture this data and provide reports summarizing relevant information have this feature. See: testing, branch; testing, path; testing, statement.

crash. (IEEE) The sudden and complete failure of a computer system or component.

critical control point. (QA) A function or an area in a manufacturing process or procedure, the failure of which, or loss of control over, may have an adverse effect on the quality of the finished product and may result in an unacceptable health risk.

critical design review. (IEEE) A review conducted to verify that the detailed design of one or more configuration items satisfies specified requirements; to establish the compatibility among the configuration items and other items of equipment, facilities, software, and personnel; to assess risk areas for each configuration item; and, as applicable, to assess the results of producibility analyses, review preliminary hardware product specifications, evaluate preliminary test planning, and evaluate the adequacy of preliminary operation and support documents. See: preliminary design review, system design review.

criticality. (IEEE) The degree of impact that a requirement, module, error, fault, failure, or other item has on the development or operation of a system. Syn: severity.

criticality analysis. (IEEE) Analysis which identifies all software requirements that have safety implications, and assigns a criticality level to each safety-critical requirement based upon the estimated risk.

cross-assembler. (IEEE) An assembler that executes on one computer but generates object code for a different computer.

cross-compiler. (IEEE) A compiler that executes on one computer but generates assembly code or object code for a different computer.


cursor. (ANSI) A movable, visible mark used to indicate a position of interest on a display surface.

cyclic redundancy [check] code. A technique for error detection in data communications used to assure a program or data file has been accurately transferred. The CRC is the result of a calculation on the set of transmitted bits by the transmitter which is appended to the data. At the receiver the calculation is repeated and the results compared to the encoded value. The calculations are chosen to optimize error detection. Contrast with check summation, parity check.
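
As an informal usage sketch with Python's standard 32-bit CRC routine (the transmitted data is invented):

    import zlib

    payload = b"batch 17 results"            # hypothetical transmitted data
    crc = zlib.crc32(payload)                # calculated and appended by the transmitter

    # The receiver repeats the calculation and compares it to the encoded value.
    received = payload
    print(zlib.crc32(received) == crc)       # True when no transmission errors are detected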

cyclomatic complexity. (1) (McCabe) The number of independent paths through a program. (2) (NBS) The cyclomatic complexity of a program is equivalent to the number of decision statements plus 1.
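
A small worked example under definition (2), with an invented routine: count the decision statements and add 1.

    def grade(score):
        if score >= 90:          # decision 1
            return "A"
        elif score >= 75:        # decision 2
            return "B"
        elif score >= 60:        # decision 3
            return "C"
        return "F"

    # 3 decision statements + 1 = cyclomatic complexity of 4, which is also
    # the number of independent paths (A, B, C, F).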

- D -

DAC. digital-to-analog converter.

DFD. data flow diagram.

DMA. direct memory access.

DOS. disk operating system.

data. Representations of facts, concepts, or instructions in a manner suitable for communication, interpretation, or processing by humans or by automated means.

data analysis. (IEEE) (1) Evaluation of the description and intended use of each data item in the software design to ensure the structure and intended use will not result in a hazard. Data structures are assessed for data dependencies that circumvent isolation, partitioning, data aliasing, and fault containment issues affecting safety, and the control or mitigation of hazards. (2) Evaluation of the data structure and usage in the code to ensure each is defined and used properly by the program. Usually performed in conjunction with logic analysis.

data bus. (ANSI) A bus used to communicate data internally and externally to and from a processing unit or a storage device. See: bus.

data corruption. (ISO) A violation of data integrity. Syn: data contamination.

data dictionary. (IEEE) (1) A collection of the names of all data items used in a software system, together with relevant properties of those items; e.g., length of data item, representation, etc. (2) A set of definitions of data flows, data elements, files, data bases, and processes referred to in a leveled data flow diagram set.

data element. (1) (ISO) A named unit of data that, in some contexts, is considered indivisible and in other contexts may consist of data items. (2) A named identifier of each of the entities and their attributes that are represented in a database.

data exception. (IEEE) An exception that occurs when a program attempts to use or access data incorrectly.


data flow analysis. (IEEE) A software V&V task to ensure that the input and output data and their formats are properly defined, and that the data flows are correct.

data flow diagram. (IEEE) A diagram that depicts data sources, data sinks, data storage, and processes performed on data as nodes, and logical flow of data as links between the nodes. Syn: data flowchart, data flow graph.

data integrity. (IEEE) The degree to which a collection of data is complete, consistent, and accurate. Syn: data quality.

data item. (ANSI) A named component of a data element. Usually the smallest component.

data set. A collection of related records. Syn: file.

data sink. (IEEE) The equipment which accepts data signals after transmission.

data structure. (IEEE) A physical or logical relationship among data elements, designed to support specific data manipulation functions.

data structure centered design. A structured software design technique wherein the architecture of a system is derived from analysis of the structure of the data sets with which the system must deal.

data structure diagram. (IEEE) A diagram that depicts a set of data elements, their attributes, and the logical relationships among them. Contrast with data flow diagram. See: entity-relationship diagram.

data validation. (1) (ISO) A process used to determine if data are inaccurate, incomplete, or unreasonable. The process may include format checks, completeness checks, check key tests, reasonableness checks and limit checks. (2) The checking of data for correctness or compliance with applicable standards, rules, and conventions.
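
As an informal sketch of definition (1), combining a completeness check, a format check, and a limit check (the field names and limits are invented):

    def validate(record):
        errors = []
        if not record.get("id"):                     # completeness check
            errors.append("missing id")
        if not str(record.get("id", "")).isdigit():  # format check
            errors.append("id must be numeric")
        qty = record.get("quantity", 0)
        if not (1 <= qty <= 999):                    # limit/reasonableness check
            errors.append("quantity out of range")
        return errors

    print(validate({"id": "1234", "quantity": 5}))   # [] -> data accepted
    print(validate({"id": "", "quantity": 5000}))    # three findings reported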

database. (ANSI) A collection of interrelated data, often with controlled redundancy, organized according to a schema to serve one or more applications. The data are stored so that they can be used by different programs without concern for the data structure or organization. A common approach is used to add new data and to modify and retrieve existing data. See: archival database.

database analysis. (IEEE) A software V&V task to ensure that the database structure and access methods are compatible with the logical design.

database security. The degree to which a database is protected from exposure to accidental or malicious alteration or destruction.

dead code. Program code statements which can never execute during program operation. Such code can result from poor coding style, or can be an artifact of previous versions or debugging efforts. Dead code can be confusing, and is a potential source of erroneous software changes. See: infeasible path.
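
An informal illustration (the routine is invented) of a statement that can never execute:

    def absolute(x):
        if x < 0:
            return -x
        else:
            return x
        print("finished")   # dead code: both branches return before this line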

debugging. (Myers) Determining the exact nature and location of a program error, and fixing the error.

decision coverage. (Myers) A test coverage criteria requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.

decision table. (IEEE) A table used to show sets of conditions and the actions resulting from them.

default. (ANSI) Pertaining to an attribute, value, or option that is assumed when none is explicitly specified.

default value. A standard setting or state to be taken by the program if no alternate setting or state is initiated by the system or the user. A value assigned automatically if one is not given by the user.
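
As a brief Python illustration (the function and parameter names are arbitrary):

    def connect(host, timeout_seconds=30):
        # timeout_seconds falls back to 30 when the caller supplies no value.
        return (host, timeout_seconds)

    print(connect("server01"))       # ('server01', 30)  default assumed
    print(connect("server01", 5))    # ('server01', 5)   user-supplied value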

defect. See: anomaly, bug, error, exception, fault.

defect analysis. See: failure analysis.

delimiter. (ANSI) A character used to indicate the beginning or the end of a character string. Syn: separator.

demodulate. Retrieve the information content from a modulated carrier wave; the reverse of modulate. Contrast with modulate.

demodulation. Converting signals from a wave form [analog] to pulse form [digital]. Contrast with modulation.

dependability. A facet of reliability that relates to the degree of certainty that a system or component will operate correctly.

design. (IEEE) The process of defining the architecture, components, interfaces, and other characteristics of a system or component. See: architectural design, preliminary design, detailed design.

design description. (IEEE) A document that describes the design of a system or component. Typical contents include system or component architecture, control logic, data structures, data flow, input/output formats, interface descriptions and algorithms. Syn: design document. Contrast with specification, requirements. See: software design description.

design level. (IEEE) The design decomposition of the software item; e.g., system, subsystem, program or module.

design of experiments. A methodology for planning experiments so that data appropriate for [statistical] analysis will be collected.

design phase. (IEEE) The period of time in the software life cycle during which the designs for architecture, software components, interfaces, and data are created, documented, and verified to satisfy requirements.

design requirement. (IEEE) A requirement that specifies or constrains the design of a system or system component.

design review. (IEEE) A process or meeting during which a system, hardware, or software design is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include critical design review, preliminary design review, system design review.


design specification. See: specification, design.

design standards. (IEEE) Standards that describe the characteristics of a design or a design description of data or program components.

desk checking. The application of code audit, inspection, review and walkthrough techniques to source code and other software documents usually by an individual [often by the person who generated them] and usually done informally.

detailed design. (IEEE) (1) The process of refining and expanding the preliminary design of a system or component to the extent that the design is sufficiently complete to be implemented. See: software development process. (2) The result of the process in (1).

developer. A person, or group, that designs and/or builds and/or documents and/or configures the hardware and/or software of computerized systems.

development methodology. (ANSI) A systematic approach to software creation that defines development phases and specifies the activities, products, verification procedures, and completion criteria for each phase. See: incremental development, rapid prototyping, spiral model, waterfall model.

development standards. Syn: coding standards.

diagnostic. (IEEE) Pertaining to the detection and isolation of faults or failures. For example, a diagnostic message, a diagnostic manual.

different software system analysis. (IEEE) Analysis of the allocation of software requirements to separate computer systems to reduce integration and interface errors related to safety. Performed when more than one software system is being integrated. See: testing, compatibility.

digital. Pertaining to data [signals] in the form of discrete [separate/pulse form] integral values. Contrast with analog.

digital-to-analog converter. An output related device which translates a computer's digital outputs to the corresponding analog signals needed by an output device such as an actuator. Contrast with ADC [analog-to-digital converter].

direct memory access. Specialized circuitry or a dedicated microprocessor that transfers data from memory to memory without using the CPU.

directed graph. (IEEE) A graph in which direction is implied in the internode connections. Syn: digraph.

disk. Circular rotating magnetic storage hardware. Disks can be hard [fixed] or flexible [removable] and different sizes.

disk drive. Hardware used to read from or write to a disk or diskette.

disk operating system. An operating system program; e.g., DR-DOS from Digital Research, MS-DOS from Microsoft Corp., OS/2 from IBM, PC-DOS from IBM, System-7 from Apple.

diskette. A floppy [flexible] disk.


documentation. (ANSI) The aids provided for the understanding of the structure and intended uses of an information system or its components, such as flowcharts, textual material, and user manuals.

documentation, level of. (NIST) A description of required documentation indicating its scope, content, format, and quality. Selection of the level may be based on project cost, intended usage, extent of effort, or other factors; e.g., level of concern.

documentation plan. (NIST) A management document describing the approach to a documentation effort. The plan typically describes what documentation types are to be prepared, what their contents are to be, when this is to be done and by whom, how it is to be done, and what are the available resources and external factors affecting the results.

documentation, software. (NIST) Technical data or information, including computer listings and printouts, in human readable form, that describe or specify the design or details, explain the capabilities, or provide operating instructions for using the software to obtain desired results from a software system. See: specification; specification, requirements; specification, design; software design description; test plan; test report; user's guide.

drift. (ISO) The unwanted change of the value of an output signal of a device over a period of time when the values of all input signals to the device are kept constant.

driver. A program that links a peripheral device or internal function to the operating system, and provides for activation of all device functions. Syn: device driver. Contrast with test driver.

duplex transmission. (ISO) Data transmission in both directions at the same time.

dynamic analysis. (NBS) Analysis that is performed by executing the program code. Contrast with static analysis. See: testing.

- E -

EBCDIC. extended binary coded decimal interchange code.

EEPROM. electrically erasable programmable read only memory.

EMI. electromagnetic interference.

EPROM. erasable programmable read only memory.

ESD. electrostatic discharge.

ESDI. enhanced small device interface.

editing. (NIST) Modifying the content of the input by inserting, deleting, or moving characters, numbers, or data.

electrically erasable programmable read only memory. Chips which may be programmed and erased numerous times like an EPROM. However an EEPROM is erased electrically. This means this IC does not necessarily have to be removed from the circuit in which it is mounted in order to erase and reprogram the memory.


electromagnetic interference. Low frequency electromagnetic waves that emanate from electromechanical devices. An electromagnetic disturbance caused by such radiating and transmitting sources as heavy duty motors and power lines can induce unwanted voltages in electronic circuits, damage components and cause malfunctions. See: radiofrequency interference.

electronic media. Hardware intended to store binary data; e.g., integrated circuit, magnetic tape, magnetic disk.

electrostatic discharge. The movement of static electricity, e.g. sparks, from a non-conductive surface to an approaching conductive object that can damage or destroy semiconductors and other circuit components. Static electricity can build on paper, plastic or other non-conductors and can be discharged by human skin, e.g. finger, contact. It can also be generated by scuffing shoes on a carpet or by brushing a non-conductor. MOSFETs and CMOS logic ICs are especially vulnerable because the discharge causes internal local heating that melts or fractures the dielectric silicon oxide that insulates gates from other internal structures.

embedded computer. A device which has its own computing power dedicated to specific functions, usually consisting of a microprocessor and firmware. The computer becomes an integral part of the device as opposed to devices which are controlled by an independent, stand-alone computer. It implies software that integrates operating system and application functions.

embedded software. (IEEE) Software that is part of a larger system and performs some of the requirements of that system; e.g., software used in an aircraft or rapid transit system. Such software does not provide an interface with the user. See: firmware.

emulation. (IEEE) A model that accepts the same inputs and produces the same outputs as a given system. To imitate one system with another. Contrast with simulation.

emulator. (IEEE) A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. Contrast with simulator.

encapsulation. (IEEE) A software development technique that consists of isolating a system function or a set of data and the operations on those data within a module and providing precise specifications for the module. See: abstraction, information hiding, software engineering.
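
For illustration only (not part of the original glossary): a minimal Python sketch of encapsulation as defined above, hiding a data item behind a small, precisely specified set of operations. The Counter class and its method names are invented for this example.

    class Counter:
        """Encapsulates a count and the only operations permitted on it."""

        def __init__(self):
            self._count = 0          # internal state, hidden behind the interface

        def increment(self, step=1):
            if step < 1:
                raise ValueError("step must be a positive integer")
            self._count += step

        def value(self):
            return self._count       # read access only through the interface

    c = Counter()
    c.increment()
    c.increment(2)
    print(c.value())                 # 3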

end user. (ANSI) (1) A person, device, program, or computer system that uses an information system for the purpose of data processing in information exchange. (2) A person whose occupation requires the use of an information system but does not require any knowledge of computers or computer programming. See: user.

enhanced small device interface. A standard interface for hard disks introduced in 1983 which provides for faster data transfer compared to ST-506. Contrast with ST-506, IDE, SCSI.

entity relationship diagram. (IEEE) A diagram that depicts a set of real-world entities and the logical relationships among them. See: data structure diagram.

environment. (ANSI) (1) Everything that supports a system or the performance of a function. (2) The conditions that affect the performance of a system or function.


equivalence class partitioning. (Myers) Partitioning the input domain of a program into a finite number of classes [sets], to identify a minimal set of well selected test cases to represent these classes. There are two types of input equivalence classes, valid and invalid. See: testing, functional.
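
A hedged illustration of the definition above, assuming a hypothetical program that accepts an integer age from 18 to 65: the input domain splits into one valid and two invalid classes, and one well-selected representative per class suffices.

    # Hypothetical requirement: valid ages are 18..65 inclusive.
    def accepts_age(age):
        return 18 <= age <= 65

    # One representative test case per equivalence class.
    test_cases = {
        "valid class (18..65)": (40, True),
        "invalid class (< 18)": (10, False),
        "invalid class (> 65)": (70, False),
    }
    for name, (value, expected) in test_cases.items():
        assert accepts_age(value) == expected, name
    print("one representative per class exercised")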

erasable programmable read only memory. Chips which may be programmed by using a PROM programming device. Before programming each bit is set to the same logical state, either 1 or 0. Each bit location may be thought of as a small capacitor capable of storing an electrical charge. The logical state is established by charging, via an electrical current, all bits whose states are to be changed from the default state. EPROMs may be erased and reprogrammed because the electrical charge at the bit locations can be bled off [i.e. reset to the default state] by exposure to ultraviolet light through the small quartz window on top of the IC. After programming, the IC's window must be covered to prevent exposure to UV light until it is desired to reprogram the chip. An EPROM eraser is a device for exposing the IC's circuits to UV light of a specific wavelength for a certain amount of time.

error. (ISO) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. See: anomaly, bug, defect, exception, fault.

error analysis. See: debugging, failure analysis.

error detection. Techniques used to identify errors in data transfers. See: check summation, cyclic redundancy check [CRC], parity check, longitudinal redundancy check.

error guessing. (NBS) Test data selection technique. The selection criterion is to pick values that seem likely to cause errors. See: special test data; testing, special case.

error seeding. (IEEE) The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. Contrast with mutation analysis.
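
As a sketch of how seeding data is commonly used to estimate remaining faults (a proportional, Mills-style estimate; the numbers are invented, and the assumption that seeded faults are as detectable as real ones rarely holds exactly):

    def estimate_indigenous_faults(seeded, seeded_found, real_found):
        # Proportional estimate: real faults ~= real_found * seeded / seeded_found
        if seeded_found == 0:
            raise ValueError("no seeded faults found yet - cannot estimate")
        return real_found * seeded / seeded_found

    total_estimate = estimate_indigenous_faults(seeded=20, seeded_found=15, real_found=30)
    print(total_estimate)            # 40.0 estimated real faults, so ~10 still latent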

event table. A table which lists events and the corresponding specified effect[s] of or reaction[s] to each event.

evolutionary development. See: spiral model.

exception. (IEEE) An event that causes suspension of normal program execution. Types include addressing exception, data exception, operation exception, overflow exception, protection exception, underflow exception. See: anomaly, bug, defect, error, fault.

exception conditions/responses table. A special type of event table.

execution trace. (IEEE) A record of the sequence of instructions executed during the execution of a computer program. Often takes the form of a list of code labels encountered as the program executes. Syn: code trace, control flow trace. See: retrospective trace, subroutine trace, symbolic trace, variable trace.


extended ASCII. The second half of the ASCII character set, 128 through 255. The symbols are defined by IBM for the PC and by other vendors for proprietary use. It is non-standard ASCII. See: ASCII.

extended binary coded decimal interchange code. An eight bit code used to represent specific data characters in some computers; e.g., IBM mainframe computers.

extremal test data. (NBS) Test data that is at the extreme or boundary of the domain of an input variable or which produces results at the boundary of an output domain. See: testing, boundary value.
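
A minimal sketch, reusing the hypothetical 50-to-100 input range that appears under "invalid inputs" later in this glossary: extremal test data sits at and just beyond each edge of the domain.

    LOW, HIGH = 50, 100              # hypothetical specified input range

    def in_range(x):
        return LOW <= x <= HIGH

    boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]
    print([(x, in_range(x)) for x in boundary_values])
    # [(49, False), (50, True), (51, True), (99, True), (100, True), (101, False)]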

- F -

FDD. floppy disk drive.

FIPS. Federal Information Processing Standards.

FMEA. Failure Modes and Effects Analysis.

FMECA. Failure Modes and Effects Criticality Analysis.

FTA. Fault Tree Analysis.

FTP. file transfer protocol.

Fagan inspection. See: code inspection.

fail-safe. (IEEE) A system or component that automatically places itself in a safe operational mode in the event of a failure.

failure. (IEEE) The inability of a system or component to perform its required functions within specified performance requirements. See: bug, crash, exception, fault.

failure analysis. Determining the exact nature and location of a program error in order to fix the error, to identify and fix other similar errors, and to initiate corrective action to prevent future occurrences of this type of error. Contrast with debugging.

Failure Modes and Effects Analysis. (IEC) A method of reliability analysis intended to identify failures, at the basic component level, which have significant consequences affecting the system performance in the application considered.

Failure Modes and Effects Criticality Analysis. (IEC) A logical extension of FMEA which analyzes the severity of the consequences of failure.

fault. An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, bug, defect, error, exception.

fault seeding. See: error seeding.

Fault Tree Analysis. (IEC) The identification and analysis of conditions and factors which cause or contribute to the occurrence of a defined undesirable event, usually one which significantly affects system performance, economy, safety or other required characteristics.


feasibility study. Analysis of the known or anticipated need for a product, system, or component to assess the degree to which the requirements, designs, or plans can be implemented.

Federal Information Processing Standards. Standards published by U.S. Department of Commerce, National Institute of Standards and Technology, formerly National Bureau of Standards. These standards are intended to be binding only upon federal agencies.

fiber optics. Communications systems that use optical fibers for transmission. See: optical fiber.

field. (1) (ISO) On a data medium or in storage, a specified area used for a particular class of data; e.g., a group of character positions used to enter or display wage rates on a screen. (2) Defined logical data that is part of a record. (3) The elementary unit of a record that may contain a data item, a data aggregate, a pointer, or a link. (4) A discrete location in a database that contains a unique piece of information. A field is a component of a record. A record is a component of a database.

file. (1) (ISO) A set of related records treated as a unit; e.g., in stock control, a file could consist of a set of invoices. (2) The largest unit of storage structure that consists of a named collection of all occurrences in a database of records of a particular record type. Syn: data set.

file maintenance. (ANSI) The activity of keeping a file up to date by adding, changing, or deleting data.

file transfer protocol. (1) Communications protocol that can transmit binary and ASCII data files without loss of data. See: Kermit, Xmodem, Ymodem, Zmodem. (2) TCP/IP protocol that is used to log onto the network, list directories, and copy files. It can also translate between ASCII and EBCDIC. See: TCP/IP.

firmware. (IEEE) The combination of a hardware device; e.g., an IC; and computer instructions and data that reside as read only software on that device. Such software cannot be modified by the computer during processing. See: embedded software.

flag. (IEEE) A variable that is set to a prescribed state, often "true" or "false", based on the results of a process or the occurrence of a specified condition. Syn: indicator.

flat file. A data file that does not physically interconnect with or point to other files. Any relationship between two flat files is logical; e.g., matching account numbers.

floppy disk. See: diskette.

floppy disk drive. See: disk, disk drive.

flowchart or flow diagram. (1) (ISO) A graphical representation in which symbols are used to represent such things as operations, data, flow direction, and equipment, for the definition, analysis, or solution of a problem. (2) (IEEE) A control flow diagram in which suitably annotated geometrical figures are used to represent operations, data, or equipment, and arrows are used to indicate the sequential flow from one to another. Syn: flow diagram. See: block diagram, box diagram, bubble chart, graph, input-process-output chart, structure chart.

formal qualification review. (IEEE) The test, inspection, or analytical process by which a group of configuration items comprising a system is verified to have met specific contractual performance requirements. Contrast with code review, design review, requirements review, test readiness review.

FORTRAN. An acronym for FORmula TRANslator, the first widely used high-level programming language. Intended primarily for use in solving technical problems in mathematics, engineering, and science.

full duplex. See: duplex transmission.

function. (1) (ISO) A mathematical entity whose value, namely, the value of the dependent variable, depends in a specified manner on the values of one or more independent variables, with not more than one value of the dependent variable corresponding to each permissible combination of values from the respective ranges of the independent variables. (2) A specific purpose of an entity, or its characteristic action. (3) In data communication, a machine action such as carriage return or line feed.

functional analysis. (IEEE) Verifies that each safety-critical software requirement is covered and that an appropriate criticality level is assigned to each software element.

functional configuration audit. (IEEE) An audit conducted to verify that the development of a configuration item has been completed satisfactorily, that the item has achieved the performance and functional characteristics specified in the functional or allocated configuration identification, and that its operational and support documents are complete and satisfactory. See: physical configuration audit.

functional decomposition. See: modular decomposition.

functional design. (IEEE) (1) The process of defining the working relationships among the components of a system. See: architectural design. (2) The result of the process in (1).

functional requirement. (IEEE) A requirement that specifies a function that a system or system component must be able to perform.

- G -

GB. gigabyte.

gigabyte. Approximately one billion bytes; precisely 2^30 or 1,073,741,824 bytes. See: kilobyte, megabyte.

graph. (IEEE) A diagram or other representation consisting of a finite set of nodes and internode connections called edges or arcs. Contrast with blueprint. See: block diagram, box diagram, bubble chart, call graph, cause-effect graph, control flow diagram, data flow diagram, directed graph, flowchart, input-process-output chart, structure chart, transaction flowgraph.

graphic software specifications. Documents such as charts, diagrams, graphs which depict program structure, states of data, control, transaction flow, HIPO, and cause-effect relationships; and tables including truth, decision, event, state-transition, module interface, exception conditions/responses necessary to establish design integrity.


- H -

HDD. hard disk drive.

HIPO. hierarchy of input-processing-output.

Hz. hertz.

half duplex. Transmissions [communications] which occur in only one direction at a time, but that direction can change.

handshake. An interlocked sequence of signals between connected components in which each component waits for the acknowledgement of its previous signal before proceeding with its action, such as data transfer.

hard copy. Printed, etc., output on paper.

hard disk drive. Hardware used to read from or write to a hard disk. See: disk, disk drive.

hard drive. Syn: hard disk drive.

hardware. (ISO) Physical equipment, as opposed to programs, procedures, rules, and associated documentation. Contrast with software.

hazard. (DOD) A condition that is prerequisite to a mishap.

hazard analysis. A technique used to identify conceivable failures affecting system performance, human safety or other required characteristics. See: FMEA, FMECA, FTA, software hazard analysis, software safety requirements analysis, software safety design analysis, software safety code analysis, software safety test analysis, software safety change analysis.

hazard probability. (DOD) The aggregate probability of occurrence of the individual events that create a specific hazard.

hazard severity. (DOD) An assessment of the consequence of the worst credible mishap that could be caused by a specific hazard.

hertz. A unit of frequency equal to one cycle per second.

hexadecimal. The base 16 number system. Digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, & F. This is a convenient form in which to examine binary data because it collects 4 binary digits per hexadecimal digit; e.g., decimal 15 is 1111 in binary and F in hexadecimal.
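
The grouping of four binary digits per hexadecimal digit mentioned above can be shown in a few lines of Python (illustrative only):

    value = 0b1111                   # decimal 15
    print(bin(value), hex(value))    # 0b1111 0xf

    word = 0b10100111                # eight bits -> two hexadecimal digits
    print(hex(word))                 # 0xa7: 1010 -> a, 0111 -> 7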

hierarchical decomposition. See: modular decomposition.

hierarchy of input-processing-output. See: input-processing-output.

hierarchy of input-processing-output chart. See: input-process-output chart.

high-level language. A programming language which requires little knowledge of the target computer, can be translated into several different machine languages, allows symbolic naming of operations and addresses, provides features designed to facilitate expression of data structures and program logic, and usually results in several machine instructions for each program statement. Examples are PL/1, COBOL, BASIC, FORTRAN, Ada, Pascal, and "C". Contrast with assembly language.

- I -

I/O. input/output.

IC. integrated circuit.

IDE. integrated drive electronics.

IEC. International Electrotechnical Commission.

IEEE. Institute of Electrical and Electronics Engineers.

ISO. International Organization for Standardization.

ITU-TSS. International Telecommunications Union - Telecommunications Standards Section.

implementation. The process of translating a design into hardware components, software components, or both. See: coding.

implementation phase. (IEEE) The period of time in the software life cycle during which a software product is created from design documentation and debugged.

implementation requirement. (IEEE) A requirement that specifies or constrains the coding or construction of a system or system component.

incremental integration. A structured reformation of the program module by module or function by function with an integration test being performed following each addition. Methods include top-down, breadth-first, depth-first, bottom-up. Contrast with nonincremental integration.

incremental development. (IEEE) A software development technique in which requirements definition, design, implementation, and testing occur in an overlapping, iterative [rather than sequential] manner, resulting in incremental completion of the overall software product. Contrast with rapid prototyping, spiral model, waterfall model.

industry standard. (QA) Procedures and criteria recognized as acceptable practices by peer professional, credentialing, or accrediting organizations.

infeasible path. (NBS) A sequence of program statements that can never be executed. Syn: dead code.

information hiding. The practice of "hiding" the details of a function or structure, making them inaccessible to other parts of the program. See: abstraction, encapsulation, software engineering.

input/output. Each microprocessor and each computer needs a way to communicate with the outside world in order to get the data needed for its programs and in order to communicate the results of its data manipulations. This is accomplished through I/O ports and devices.


input-process-output chart. (IEEE) A diagram of a software system or module, consisting of a rectangle on the left listing inputs, a rectangle in the center listing processing steps, a rectangle on the right listing outputs, and arrows connecting inputs to processing steps and processing steps to outputs. See: block diagram, box diagram, bubble chart, flowchart, graph, structure chart.

input-processing-output. A structured software design technique; identification of the steps involved in each process to be performed and identifying the inputs to and outputs from each step. A refinement called hierarchical input-process-output identifies the steps, inputs, and outputs at both general and detailed levels.

inspection. A manual testing technique in which program documents [specifications (requirements, design), source code or user's manuals] are examined in a very formal and disciplined manner to discover errors, violations of standards and other problems. Checklists are a typical vehicle used in accomplishing this technique. See: static analysis, code audit, code inspection, code review, code walkthrough.

installation. (ANSI) The phase in the system life cycle that includes assembly and testing of the hardware and software of a computerized system. Installation includes installing a new computer system, new software or hardware, or otherwise modifying the current system.

installation and checkout phase. (IEEE) The period of time in the software life cycle during which a software product is integrated into its operational environment and tested in this environment to ensure that it performs as required.

installation qualification. See: qualification, installation.

Institute of Electrical and Electronics Engineers. 345 East 47th Street, New York, NY 10017. An organization involved in the generation and promulgation of standards. IEEE standards represent the formalization of current norms of professional practice through the process of obtaining the consensus of concerned, practicing professionals in the given field.

instruction. (1) (ANSI/IEEE) A program statement that causes a computer to perform a particular operation or set of operations. (2) (ISO) In a programming language, a meaningful expression that specifies one operation and identifies its operands, if any.

instruction set. (1) (IEEE) The complete set of instructions recognized by a given computer or provided by a given programming language. (2) (ISO) The set of the instructions of a computer, of a programming language, or of the programming languages in a programming system. See: computer instruction set.

instrumentation. (NBS) The insertion of additional code into a program in order to collect information about program behavior during program execution. Useful for dynamic analysis techniques such as assertion checking, coverage analysis, tuning.
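
A minimal illustration of the definition above: statements are inserted solely to record behaviour (here, how often each branch executes) without changing the program's visible result. The function and counter names are invented for this example.

    branch_counts = {"positive": 0, "non_positive": 0}   # inserted bookkeeping

    def sign(x):
        if x > 0:
            branch_counts["positive"] += 1               # instrumentation
            return 1
        branch_counts["non_positive"] += 1               # instrumentation
        return 0

    for x in (-2, 5, 0, 7):
        sign(x)
    print(branch_counts)             # {'positive': 2, 'non_positive': 2}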

integrated circuit. Small wafers of semiconductor material [silicon] etched or printed with extremely small electronic switching circuits. Syn: chip.

integrated drive electronics. A standard interface for hard disks which provides for building most of the controller circuitry into the disk drive to save space. IDE controllers are functionally equivalent to ST-506 standard controllers. Contrast with ESDI, SCSI, ST-506.


interactive. (IEEE) Pertaining to a system or mode of operation in which each user entry causes a response from or action by the system. Contrast with batch. See: conversational, on-line, real time.

interface. (1) (ISO) A shared boundary between two functional units, defined by functional characteristics, common physical interconnection characteristics, signal characteristics, and other characteristics, as appropriate. The concept involves the specification of the connection of two devices having different functions. (2) A point of communication between two or more processes, persons, or other physical entities. (3) A peripheral device which permits two or more devices to communicate.

interface analysis. (IEEE) Evaluation of: (1) software requirements specifications with hardware, user, operator, and software interface requirements documentation, (2) software design description records with hardware, operator, and software interface requirements specifications, (3) source code with hardware, operator, and software interface design documentation, for correctness, consistency, completeness, accuracy, and readability. Entities to evaluate include data items and control items.

interface requirement. (IEEE) A requirement that specifies an external item with which a system or system component must interact, or sets forth constraints on formats, timing, or other factors caused by such an interaction.

International Electrotechnical Commission. Geneva, Switzerland. An organization that sets standards for electronic products and components which are adopted by the safety standards agencies of many countries.

International Organization for Standardization. Geneva, Switzerland. An organization that sets international standards. It deals with all fields except electrical and electronics which is governed by IEC. Syn: International Standards Organization.

International Standards Organization. See: International Organization for Standardization.

International Telecommunications Union - Telecommunications Standards Section. Geneva, Switzerland. Formerly, Consultative Committee for International Telephony and Telegraphy. An international organization for communications standards.

interpret. (IEEE) To translate and execute each statement or construct of a computer program before translating and executing the next. Contrast with assemble, compile.

interpreter. (IEEE) A computer program that translates and executes each statement or construct of a computer program before translating and executing the next. The interpreter must be resident in the computer each time a program [source code file] written in an interpreted language is executed. Contrast with assembler, compiler.

interrupt. (1) The suspension of a process to handle an event external to the process. (2) A technique to notify the CPU that a peripheral device needs service, i.e., the device has data for the processor or the device is awaiting data from the processor. The device sends a signal, called an interrupt, to the processor. The processor interrupts its current program, stores its current operating conditions, and executes a program to service the device sending the interrupt. After the device is serviced, the processor restores its previous operating conditions and continues executing the interrupted program. A method for handling constantly changing data. Contrast with polling.


interrupt analyzer. A software tool which analyzes potential conflicts in a system as a result of the occurrences of interrupts.

invalid inputs. (1) (NBS) Test data that lie outside the domain of the function the program represents. (2) These are not only inputs outside the valid range for data to be input, e.g., an entry of 101 when the specified input range is 50 to 100, but also unexpected inputs, especially when these unexpected inputs may easily occur; e.g., the entry of alpha characters or special keyboard characters when only numeric data is valid, or the input of abnormal command sequences to a program.

I/O port. Input/output connector.

- J -

JCL. job control language.

job. (IEEE) A user-defined unit of work that is to be accomplished by a computer. For example, the compilation, loading, and execution of a computer program. See: job control language.

job control language. (IEEE) A language used to identify a sequence of jobs, describe their requirements to an operating system, and control their execution.

- K -

KB. kilobyte.

KLOC. one thousand lines of code.

Kermit. An asynchronous file transfer protocol developed at Columbia University, noted for its accuracy over noisy lines. Several versions exist. Contrast with Xmodem, Ymodem, Zmodem.

key. One or more characters, usually within a set of data, that contains information about the set, including its identification.

key element. (QA) An individual step in a critical control point of the manufacturing process.

kilobyte. Approximately one thousand bytes. This symbol is used to describe the size of computer memory or disk storage space. Because computers use a binary number system, a kilobyte is precisely 2^10 or 1,024 bytes.

- L -

LAN. local area network.


LSI. large scale integration.

ladder logic. A graphical, problem oriented, programming language which replicates electronic switching blueprints.

language. See: programming language.

large scale integration. A classification of ICs [chips] based on their size as expressed by the number of circuits or logic gates they contain. An LSI IC contains 3,000 to 100,000 transistors.

latency. (ISO) The time interval between the instant at which a CPU's instruction control unit initiates a call for data and the instant at which the actual transfer of the data starts. Syn: waiting time.

latent defect. See: bug, fault.

life cycle. See: software life cycle.

life cycle methodology. The use of any one of several structured methods to plan, design, implement, test, and operate a system from its conception to the termination of its use. See: waterfall model.

linkage editor. (IEEE) A computer program that creates a single load module from two or more independently translated object modules or load modules by resolving cross references among the modules and, possibly, by relocating elements. May be part of a loader. Syn: link editor, linker.

loader. A program which copies other [object] programs from auxiliary [external] memory to main [internal] memory prior to their execution.

local area network. A communications network that serves users within a confined geographical area. It is made up of servers, workstations, a network operating system, and a communications link. Contrast with MAN, WAN.

logic analysis. (IEEE) (1) Evaluates the safety-critical equations, algorithms, and control logic of the software design. (2) Evaluates the sequence of operations represented by the coded program and detects programming errors that might create hazards.

longitudinal redundancy check. (IEEE) A system of error control based on the formation of a block check following preset rules.
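
One common form of longitudinal redundancy check (a sketch, not the IEEE-specified rules) exclusive-ORs every byte in a block and appends the result; the receiver repeats the XOR over data plus check byte and expects zero.

    from functools import reduce

    def lrc(block):
        return reduce(lambda a, b: a ^ b, block, 0)   # XOR of all bytes

    data = bytes([0x31, 0x32, 0x33])                  # the characters "123"
    check = lrc(data)                                 # 0x30
    assert lrc(data + bytes([check])) == 0            # receiver-side verification
    print(hex(check))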

low-level language. See: assembly language. The advantage of assembly language is that it provides bit-level control of the processor allowing tuning of the program for optimal speed and performance. For time critical operations, assembly language may be necessary in order to generate code which executes fast enough for the required operations. The disadvantage of assembly language is the high level of complexity and detail required in the programming. This makes the source code harder to understand, thus increasing the chance of introducing errors during program development and maintenance.

- M -


MAN. metropolitan area network.

Mb. megabit.

MB. megabyte.

MHz. megahertz.

MIPS. million instructions per second.

MOS. metal-oxide semiconductor.

MOSFET. metal-oxide semiconductor field effect transistor.

MSI. medium scale integration.

MTBF. mean time between failures.

MTTR. mean time to repair.

MTTF. mean time to failure.

machine code. (IEEE) Computer instructions and definitions expressed in a form [binary code] that can be recognized by the CPU of a computer. All source code, regardless of the language in which it was programmed, is eventually converted to machine code. Syn: object code.

machine language. See: machine code.

macro. (IEEE) In software engineering, a predefined sequence of computer instructions that is inserted into a program, usually during assembly or compilation, at each place that its corresponding macroinstruction appears in the program.

macroinstruction. (IEEE) A source code instruction that is replaced by a predefined sequence of source instructions, usually in the same language as the rest of the program and usually during assembly or compilation.

main memory. A non-moving storage device utilizing one of a number of types of electronic circuitry to store information.

main program. (IEEE) A software component that is called by the operating system of a computer and that usually calls other software components. See: routine, subprogram.

mainframe. Term used to describe a large computer.

maintainability. (IEEE) The ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment. Syn: modifiability.

maintenance. (QA) Activities such as adjusting, cleaning, modifying, overhauling equipment to assure performance in accordance with requirements. Maintenance to a software system includes correcting software errors, adapting software to a new environment, or making enhancements to software. See: adaptive maintenance, corrective maintenance, perfective maintenance.


mean time between failures. A measure of the reliability of a computer system, equal to average operating time of equipment between failures, as calculated on a statistical basis from the known failure rates of various components of the system.

mean time to failure. A measure of reliability, giving the average time before the first failure.

mean time to repair. A measure of reliability of a piece of repairable equipment, giving the average time between repairs.
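
A worked illustration of the three measures above (all numbers invented): a repairable unit with 1,000 cumulative operating hours, 4 failures, and 20 hours of total repair time.

    operating_hours = 1000.0         # hypothetical cumulative operating time
    failures = 4
    repair_hours = 20.0

    mtbf = operating_hours / failures        # 250.0 hours between failures
    mttr = repair_hours / failures           # 5.0 hours average repair time
    availability = mtbf / (mtbf + mttr)      # ~0.98 steady-state availability
    print(mtbf, mttr, round(availability, 3))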

measure. (IEEE) A quantitative assessment of the degree to which a software product or process possesses a given attribute.

measurable. Capable of being measured.

measurement. The process of determining the value of some quantity in terms of a standard unit.

medium scale integration. A classification of ICs [chips] based on their size as expressed by the number of circuits or logic gates they contain. An MSI IC contains 100 to 3,000 transistors.

megabit. Approximately one million bits. Precisely 1024 K bits, 2^20 bits, or 1,048,576 bits.

megabyte. Approximately one million bytes. Precisely 1024 K bytes, 2^20 bytes, or 1,048,576 bytes. See: kilobyte.

megahertz. A unit of frequency equal to one million cycles per second.

memory. Any device or recording medium into which binary data can be stored and held, and from which the entire original data can be retrieved. The two types of memory are main; e.g., ROM, RAM, and auxiliary; e.g., tape, disk. See: storage device.

menu. A computer display listing a number of options; e.g., functions, from which the operator may select one. Sometimes used to denote a list of programs.

metal-oxide semiconductor. One of two major categories of chip design [the other is bipolar]. It derives its name from its use of metal, oxide and semiconductor layers. There are several varieties of MOS technologies including PMOS, NMOS, CMOS.

metal-oxide semiconductor field effect transistor. Common type of transistor fabricated as a discrete component or into MOS integrated circuits.

metric based test data generation. (NBS) The process of generating test sets for structural testing based upon use of complexity metrics or coverage metrics.

metric, software quality. (IEEE) A quantitative measure of the degree to which software possesses a given attribute which affects its quality.

metropolitan area network. Communications network that covers a geographical area such as a city or a suburb. Contrast with LAN, WAN.

microcode. Permanent memory that holds the elementary circuit operations a computer must perform for each instruction in its instruction set.

microcomputer. A term used to describe a small computer. See: microprocessor.


microprocessor. A CPU existing on a single IC. Frequently synonymous with a microcomputer.

million instructions per second. Execution speed of a computer. MIPS rate is one factor in overall performance. Bus and channel speed and bandwidth, memory speed, memory management techniques, and system software also determine total throughput.

minicomputer. A term used to describe a medium sized computer.

mishap. (DOD) An unplanned event or series of events resulting in death, injury, occupational illness, or damage to or loss of data and equipment or property, or damage to the environment. Syn: accident.

mnemonic. A symbol chosen to assist human memory and understanding; e.g., an abbreviation such as "MPY" for multiply.

modeling. Construction of programs used to model the effects of a postulated environment for investigating the dimensions of a problem for the effects of algorithmic processes on responsive targets.

modem. (ISO) A functional unit that modulates and demodulates signals. One of the functions of a modem is to enable digital data to be transmitted over analog transmission facilities. The term is a contraction of modulator-demodulator.

modem access. Using a modem to communicate between computers. MODEM access is often used between a remote location and a computer that has a master database and applications software, the host computer.

modifiability. See: maintainability.

modular decomposition. A structured software design technique, breaking a system into components to facilitate design and development. Syn: functional decomposition, hierarchical decomposition. See: abstraction.

modular software. (IEEE) Software composed of discrete parts. See: structured design.

modularity. (IEEE) The degree to which a system or computer program is composed of discrete components such that a change to one component has minimal impact on other components.

modulate. Varying the characteristics of a wave in accordance with another wave or signal, usually to make user equipment signals compatible with communication facilities. Contrast with demodulate.

modulation. Converting signals from a binary-digit pattern [pulse form] to a continuous wave form [analog]. Contrast with demodulation.

module. (1) In programming languages, a self-contained subdivision of a program that may be separately compiled. (2) A discrete set of instructions, usually processed as a unit, by an assembler, a compiler, a linkage editor, or similar routine or subroutine. (3) A packaged functional hardware unit suitable for use with other components. See: unit.

module interface table. A table which provides a graphic illustration of the data elements whose values are input to and output from a module.


multi-processing. (IEEE) A mode of operation in which two or more processes [programs] are executed concurrently [simultaneously] by separate CPUs that have access to a common main memory. Contrast with multi-programming. See: multi-tasking, time sharing.

multi-programming. (IEEE) A mode of operation in which two or more programs are executed in an interleaved manner by a single CPU. Syn: parallel processing. Contrast with multi-tasking. See: time sharing.

multi-tasking. (IEEE) A mode of operation in which two or more tasks are executed in an interleaved manner. Syn: parallel processing. See: multi-processing, multi-programming, time sharing.

multiple condition coverage. (Myers) A test coverage criteria which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once. Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.
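
A hedged sketch of the criterion above for a single decision with two conditions: all four combinations of condition outcomes must be invoked. The discount rule is invented for this example.

    def discount_applies(is_member, total):
        return is_member and total > 100     # one decision, two conditions

    # Multiple condition coverage: every combination of condition outcomes.
    combinations = [(True, 150), (True, 50), (False, 150), (False, 50)]
    for is_member, total in combinations:
        print(is_member, total > 100, "->", discount_applies(is_member, total))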

multiplexer. A device which takes information from any of several sources and places it on a single line or sends it to a single destination.

multipurpose systems. (IEEE) Computer systems that perform more than one primary function or task are considered to be multipurpose. In some situations the computer may be linked or networked with other computers that are used for administrative functions; e.g., accounting, word processing.

mutation analysis. (NBS) A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants [mutants] of the program. Contrast with error seeding.
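
A minimal sketch of the idea: a slight variant (mutant) of the program is produced, and the test set is judged by whether at least one test case distinguishes, or "kills", the mutant. The functions and the single mutant are invented for illustration.

    def original(a, b):
        return a + b

    def mutant(a, b):
        return a - b                 # slight variant: '+' mutated to '-'

    def kills(tests):
        return any(original(a, b) != mutant(a, b) for a, b in tests)

    print(kills([(2, 0), (5, 0)]))   # False - mutant survives, test set is weak
    print(kills([(2, 0), (5, 3)]))   # True  - mutant is killed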

- N -

NBS. National Bureau of Standards.

NIST. National Institute of Standards and Technology.

NMI. non-maskable interrupt.

NMOS. n-channel MOS.

National Bureau of Standards. Now the National Institute of Standards and Technology.

National Institute of Standards and Technology. Gaithersburg, MD 20899. A federal agency under the Department of Commerce, originally established by an act of Congress on March 3, 1901 as the National Bureau of Standards. The Institute's overall goal is to strengthen and advance the Nation's science and technology and facilitate their effective application for public benefit. The National Computer Systems Laboratory conducts research and provides, among other things, the technical foundation for computer related policies of the Federal Government.

n-channel MOS. A type of microelectronic circuit used for logic and memory chips.


network. (1) (ISO) An arrangement of nodes and interconnecting branches. (2) A system [transmission channels and supporting hardware and software] that connects several remotely located computers via telecommunications.

network database. A database organization method that allows for data relationships in a net-like form. A single data element can point to multiple data elements and can itself be pointed to by other data elements. Contrast with relational database.

nibble. Half a byte, or four bits.

node. A junction or connection point in a network, e.g. a terminal or a computer.

noncritical code analysis. (IEEE) (1) Examines software elements that are not designated safety-critical and ensures that these elements do not cause a hazard. (2) Examines portions of the code that are not considered safety-critical code to ensure they do not cause hazards. Generally, safety-critical code should be isolated from non-safety-critical code. This analysis is to show this isolation is complete and that interfaces between safety-critical code and non-safety-critical code do not create hazards.

nonincremental integration. A reformation of a program by immediately relinking the entire program following the testing of each independent module. Integration testing is then conducted on the program as a whole. Syn: "big bang" integration. Contrast with incremental integration.

non-maskable interrupt. A high priority interrupt that cannot be disabled by another interrupt. It can be used to report malfunctions such as parity, bus, and math co-processor errors.

null. (IEEE) A value whose definition is to be supplied within the context of a specific operating system. This value is a representation of the set of no numbers or no value for the operating system in use.

null data. (IEEE) Data for which space is allocated but for which no value currently exists.

null string. (IEEE) A string containing no entries. Note: It is said that a null string has length zero.

- O -

OCR. optical character recognition.

OEM. original equipment manufacturer.

OOP. object oriented programming.

object. In object oriented programming, a self-contained module [encapsulation] of data and the programs [services] that manipulate [process] that data.

object code. (NIST) A code expressed in machine language ["1"s and "0"s] which is normally an output of a given translation process that is ready to be executed by a computer. Syn: machine code. Contrast with source code. See: object program.


object oriented design. (IEEE) A software development technique in which a system or component is expressed in terms of objects and connections between those objects.

object oriented language. (IEEE) A programming language that allows the user to express a program in terms of objects and messages between those objects. Examples include C++, Smalltalk and LOGO.

object oriented programming. A technology for writing programs that are made up of self-sufficient modules that contain all of the information needed to manipulate a given data structure. The modules are created in class hierarchies so that the code or methods of a class can be passed to other modules. New object modules can be easily created by inheriting the characteristics of existing classes. See: object, object oriented design.

object program. (IEEE) A computer program that is the output of an assembler or compiler.

octal. The base 8 number system. Digits are 0, 1, 2, 3, 4, 5, 6, & 7.

on-line. (IEEE) Pertaining to a system or mode of operation in which input data enter the computer directly from the point of origin or output data are transmitted directly to the point where they are used. For example, an airline reservation system. Contrast with batch. See: conversational, interactive, real time.

operating system. (ISO) Software that controls the execution of programs, and that provides services such as resource allocation, scheduling, input/output control, and data management. Usually, operating systems are predominantly software, but partial or complete hardware implementations are possible.

operation and maintenance phase. (IEEE) The period of time in the software life cycle during which a software product is employed in its operational environment, monitored for satisfactory performance, and modified as necessary to correct problems or to respond to changing requirements.

operation exception. (IEEE) An exception that occurs when a program encounters an invalid operation code.

operator. See: end user.

optical character recognition. An information processing technology that converts human readable data into another medium for computer input. An OCR peripheral device accepts a printed document as input, identifies the characters by their shape from the light that is reflected, and creates an output disk file. For best results, the printed page must contain only characters of a type that are easily read by the OCR device and located on the page within certain margins. When choosing an OCR product, the prime consideration should be the program's level of accuracy as it applies to the type of document to be scanned. Accuracy levels less than 97% are generally considered to be poor.

optical fiber. Thin glass wire designed for light transmission, capable of transmitting billions of bits per second. Unlike electrical pulses, light pulses are not affected by random radiation in the environment.

optimization. (NIST) Modifying a program to improve performance; e.g., to make it run faster or to make it use fewer resources.


Oracle. A relational database programming system incorporating the SQL programming language. A registered trademark of the Oracle Corp.

original equipment manufacturer. A manufacturer of computer hardware.

overflow. (ISO) In a calculator, the state in which the calculator is unable to accept or process the number of digits in the entry or in the result. See: arithmetic overflow.

overflow exception. (IEEE) An exception that occurs when the result of an arithmetic operation exceeds the size of the storage location designated to receive it.

- P -

PAL. programmable array logic.

PC. personal computer.

PCB. printed circuit board.

PDL. program design language.

PLA. programmable logic array.

PLD. programmable logic device.

PMOS. positive channel MOS.

PROM. programmable read only memory.

paging. (IEEE) A storage allocation technique in which programs or data are divided into fixed length blocks called pages, main storage/memory is divided into blocks of the same length called page frames, and pages are stored in page frames, not necessarily contiguously or in logical order, and pages are transferred between main and auxiliary storage as needed.

parallel. (1) (IEEE) Pertaining to the simultaneity of two or more processes. (2) (IEEE) Pertaining to the simultaneous processing of individual parts of a whole, such as the bits of a character or the characters of a word, using separate facilities for the various parts. (3) Term describing simultaneous transmission of the bits making up a character, usually eight bits [one byte]. Contrast with serial.

parallel processing. See: multi-processing, multi-programming.

parameter. (IEEE) A constant, variable or expression that is used to pass values between software modules. Syn: argument.

parity. An error detection method in data transmissions that consists of selectively adding a 1-bit to bit patterns [word, byte, character, message] to cause the bit patterns to have either an odd number of 1-bits [odd parity] or an even number of 1-bits [even parity].

parity bit. (ISO) A binary digit appended to a group of binary digits to make the sum of all the digits, including the appended binary digit, either odd or even, as predetermined.

parity check. (ISO) A redundancy check by which a recalculated parity bit is compared to the predetermined parity bit. Contrast with check summation, cyclic redundancy check [CRC].
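
A minimal Python sketch of even parity as described in the three entries above: the parity bit is chosen so that the total number of 1-bits is even, and the check recomputes it on receipt (helper names are illustrative).

    def even_parity_bit(byte):
        return bin(byte).count("1") % 2      # bit that makes the 1-count even

    def parity_ok(byte, received_parity):
        return even_parity_bit(byte) == received_parity

    data = 0b01101001                        # four 1-bits -> parity bit 0
    p = even_parity_bit(data)
    print(p, parity_ok(data, p))             # 0 True
    print(parity_ok(data ^ 0b00000001, p))   # single-bit error detected -> False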


Pascal. A high-level programming language designed to encourage structured programming practices.

password. (ISO) A character string that enables a user to have full or limited access to a system or to a set of data.

patch. (IEEE) A change made directly to an object program without reassembling or recompiling from the source program.

path. (IEEE) A sequence of instructions that may be performed in the execution of a computer program.

path analysis. (IEEE) Analysis of a computer program [source code] to identify all possible paths through the program, to detect incomplete paths, or to discover portions of the program that are not on any path.

path coverage. See: testing, path.

perfective maintenance. (IEEE) Software maintenance performed to improve the performance, maintainability, or other attributes of a computer program. Contrast with adaptive maintenance, corrective maintenance.

performance requirement. (IEEE) A requirement that imposes conditions on a functional requirement; e.g., a requirement that specifies the speed, accuracy, or memory usage with which a given function must be performed.

peripheral device. Equipment that is directly connected to a computer. A peripheral device can be used to input data; e.g., keypad, bar code reader, transducer, laboratory test equipment; or to output data; e.g., printer, disk drive, video system, tape drive, valve controller, motor controller. Syn: peripheral equipment.

peripheral equipment. See: peripheral device.

personal computer. Synonymous with microcomputer, a computer that is functionally similar to large computers, but serves only one user.

physical configuration audit. (IEEE) An audit conducted to verify that a configuration item, as built, conforms to the technical documentation that defines it. See: functional configuration audit.

physical requirement. (IEEE) A requirement that specifies a physical characteristic that a system or system component must possess; e.g., material, shape, size, weight.

pixel. (IEEE) (1) In image processing and pattern recognition, the smallest element of a digital image that can be assigned a gray level. (2) In computer graphics, the smallest element of a display surface that can be assigned independent characteristics. This term is derived from the term "picture element".

platform. The hardware and software which must be present and functioning for an application program to run [perform] as intended. A platform includes, but is not limited to the operating system or executive software, communication software, microprocessor, network, input/output hardware, any generic software libraries, database management, user interface software, and the like.


polling. A technique a CPU can use to learn if a peripheral device is ready to receive data or to send data. In this method each device is checked or polled in-turn to determine if that device needs service. The device must wait until it is polled in order to send or receive data. This method is useful if the device's data can wait for a period of time before being processed, since each device must await its turn in the polling scheme before it will be serviced by the processor. Contrast with interrupt.
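
A hedged sketch of the polling scheme described above: the processor repeatedly asks each device in turn whether it needs service, and transfers data only when it does. The device class and its methods are hypothetical stand-ins for real status and data registers.

    import time

    class FakeDevice:                        # stand-in for a peripheral
        def __init__(self, name):
            self.name = name
            self._ticks = 0
        def ready(self):                     # would normally read a status register
            self._ticks += 1
            return self._ticks % 3 == 0
        def read(self):                      # would normally read a data register
            return "data from " + self.name

    devices = [FakeDevice("keypad"), FakeDevice("sensor")]
    for _ in range(5):                       # the polling loop
        for dev in devices:
            if dev.ready():
                print(dev.read())
        time.sleep(0.01)                     # each device waits its turn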

positive channel MOS. A type of microelectronic circuit in which the base material is positively charged.

precision. The relative degree of repeatability, i.e. how closely the values within a series of replicate measurements agree. It is the result of resolution and stability. See: accuracy, bias, calibration.

preliminary design. (IEEE) (1) The process of analyzing design alternatives and defining the architecture, components, interfaces, and timing and sizing estimates for a system or component. See: detailed design. (2) The result of the process in (1).

preliminary design review. (IEEE) A review conducted to evaluate the progress, technical adequacy, and risk resolution of the selected design approach for one or more configuration items; to determine each design's compatibility with the requirements for the configuration item; to evaluate the degree of definition and assess the technical risk associated with the selected manufacturing methods and processes; to establish the existence and compatibility of the physical and functional interfaces among the configuration items and other items of equipment, facilities, software and personnel; and, as applicable, to evaluate the preliminary operational and support documents.

printed circuit board. A flat board that holds chips and other electronic components. The board is "printed" with electrically conductive pathways between the components.

production database. The computer file that contains the establishment's current production data.

program. (1) (ISO) A sequence of instructions suitable for processing. Processing may include the use of an assembler, a compiler, an interpreter, or another translator to prepare the program for execution. The instructions may include statements and necessary declarations. (2) (ISO) To design, write, and test programs. (3) (ANSI) In programming languages, a set of one or more interrelated modules capable of being executed. (4) Loosely, a routine. (5) Loosely, to write a routine.

program design language. (IEEE) A specification language with special constructs and, sometimes, verification protocols, used to develop, analyze, and document a program design.

program mutation. (IEEE) A computer program that has been purposely altered from the intended version to evaluate the ability of program test cases to detect the alteration. See: testing, mutation.

programmable array logic. A programmable logic chip. See: programmable logic device.

programmable logic array. A programmable logic chip. See: programmable logic device.

programmable logic device. A logic chip that is programmed at the user's site. Contrast with PROM.


programmable read only memory. A chip which may be programmed by using a PROM programming device. It can be programmed only once. It cannot be erased and reprogrammed. Each of its bit locations is a fusible link. An unprogrammed PROM has all links closed establishing a known state of each bit. Programming the chip consists of sending an electrical current of a specified size through each link which is to be changed to the alternate state. This causes the "fuse to blow", opening that link.

programming language. (IEEE) A language used to express computer programs. See: computer language, high-level language, low-level language.

programming standards. See: coding standards.

programming style analysis. (IEEE) Analysis to ensure that all portions of the program follow approved programming guidelines. See: code audit, code inspection, coding standards.

project plan. (NIST) A management document describing the approach taken for a project. The plan typically describes work to be done, resources required, methods to be used, the configuration management and quality assurance procedures to be followed, the schedules to be met, the project organization, etc. Project in this context is a generic term. Some projects may also need integration plans, security plans, test plans, quality assurance plans, etc. See: documentation plan, software development plan, test plan, software engineering.

PROM programmer. Electronic equipment which is used to transfer a program [write instructions and data] into PROM and EPROM chips.

proof of correctness. (NBS) The use of techniques of mathematical logic to infer that a relation between program variables assumed true at program entry implies that another relation between program variables holds at program exit.

protection exception. (IEEE) An exception that occurs when a program attempts to write into a protected area in storage.

protocol. (ISO) A set of semantic and syntactic rules that determines the behavior of functional units in achieving communication.

prototyping. Using software tools to accelerate the software development process by facilitating the identification of required functionality during analysis and design phases. A limitation of this technique is the identification of system or software problems and hazards. See: rapid prototyping.

pseudocode. A combination of programming language and natural language used to express a software design. If used, it is usually the last document produced prior to writing the source code.

- Q -

QA. quality assurance.

QC. quality control.

qualification, installation. (FDA) Establishing confidence that process equipment and ancillary systems are compliant with appropriate codes and approved design intentions, and that manufacturer's recommendations are suitably considered.

qualification, operational. (FDA) Establishing confidence that process equipment and sub-systems are capable of consistently operating within established limits and tolerances.

qualification, process performance. (FDA) Establishing confidence that the process is effective and reproducible.

qualification, product performance. (FDA) Establishing confidence through appropriate testing that the finished product produced by a specified process meets all release requirements for functionality and safety.

quality assurance. (1) (ISO) The planned systematic activities necessary to ensure that a component, module, or system conforms to established technical requirements. (2) All actions that are taken to ensure that a development organization delivers products that meet performance requirements and adhere to standards and procedures. (3) The policy, procedures, and systematic actions established in an enterprise for the purpose of providing and maintaining some degree of confidence in data integrity and accuracy throughout the life cycle of the data, which includes input, update, manipulation, and output. (4) (QA) The actions, planned and performed, to provide confidence that all systems and components that influence the quality of the product are working as expected individually and collectively.

quality assurance, software. (IEEE) (1) A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements. (2) A set of activities designed to evaluate the process by which products are developed or manufactured.

quality control. The operational techniques and procedures used to achieve quality requirements.

- R -

RAM. random access memory.

RFI. radiofrequency interference.

RISC. reduced instruction set computer.

ROM. read only memory.

radiofrequency interference. High frequency electromagnetic waves that emanate from electronic devices such as chips and other electronic components. An electromagnetic disturbance caused by such radiating and transmitting sources as electrostatic discharge [ESD], lightning, radar, radio and TV signals, and motors with brushes can induce unwanted voltages in electronic circuits, damage components and cause malfunctions. See: electromagnetic interference.

random access memory. Chips which can be called read/write memory, since the data stored in them may be read or new data may be written into any memory address on these chips. The term random access means that each memory location [usually 8 bits or 1 byte] may be directly accessed [read from or written to] at random. This contrasts with devices like magnetic tape, where each section of the tape must be searched sequentially by the read/write head from its current location until it finds the desired location. ROM is also random access memory, but it is read only rather than read/write. Another difference between RAM and ROM is that RAM is volatile, i.e. it must have a constant supply of power or the stored data will be lost.

range check. (ISO) A limit check in which both high and low values are stipulated.
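
A minimal sketch of such a limit check; the limits and values below are illustrative, not taken from the standard.

    LOW, HIGH = 0, 100                      # both a low and a high value are stipulated

    def range_check(value):
        """Return True only when value lies within the stipulated limits."""
        return LOW <= value <= HIGH

    print(range_check(55))     # True
    print(range_check(-5))     # False - below the low limit
    print(range_check(101))    # False - above the high limit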

rapid prototyping. A structured software requirements discovery technique which emphasizes generating prototypes early in the development process to permit early feedback and analysis in support of the development process. Contrast with incremental development, spiral model, waterfall model. See: prototyping.

read only memory. A memory chip from which data can only be read by the CPU. The CPU may not store data to this memory. The advantage of ROM over RAM is that ROM does not require power to retain its program. This advantage applies to all types of ROM chips: ROM, PROM, EPROM, and EEPROM.

real time. (IEEE) Pertaining to a system or mode of operation in which computation is performed during the actual time that an external process occurs, in order that the computation results can be used to control, monitor, or respond in a timely manner to the external process. Contrast with batch. See: conversational, interactive, interrupt, on-line.

real time processing. A fast-response [immediate response] on-line system which obtains data from an activity or a physical process, performs computations, and returns a response rapidly enough to affect [control] the outcome of the activity or process; e.g., a process control application. Contrast with batch processing.

record. (1) (ISO) A group of related data elements treated as a unit. [A data element (field) is a component of a record, a record is a component of a file (database)].

record of change. Documentation of changes made to the system. A record of change can be a written document or a database. Normally there are two associated with a computer system, hardware and software. Changes made to the data are recorded in an audit trail.

recursion. (IEEE) (1) The process of defining or generating a process or data structure in terms of itself. (2) A process in which a software module calls itself.
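
A short illustration of sense (2), a software module that calls itself; the factorial routine is just an example.

    def factorial(n):
        if n <= 1:                      # base case ends the recursion
            return 1
        return n * factorial(n - 1)     # the module calls itself

    print(factorial(5))    # 120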

reduced instruction set computer. Computer architecture that reduces the complexity of the chip by using simpler instructions. Reduced instruction set does not necessarily mean fewer instructions, but rather a return to simple instructions requiring only one or a very few instruction cycles to execute, which are therefore used more effectively with innovative architectural and compiler changes. Systems using RISC technology are able to achieve processing speeds of more than five million instructions per second.

region. A clearly described area within the computer's storage that is logically and/or physically distinct from other regions. Regions are used to separate testing from production [normal use]. Syn: partition.

register. A small, high speed memory circuit within a microprocessor that holds addresses and values of internal operations; e.g., registers keep track of the address of the instruction being executed and the data being processed. Each microprocessor has a specific number of registers depending upon its design.

regression analysis and testing. (IEEE) A software V&V task to determine the extent of V&V analysis and testing that must be repeated when changes are made to any previously examined software products. See: testing, regression.

relational database. Database organization method that links files together as required. Relationships between files are created by comparing data such as account numbers and names. A relational system can take any two or more files and generate a new file from the records that meet the matching criteria. Routine queries often involve more than one data file; e.g., a customer file and an order file can be linked in order to ask a question that relates to information in both files, such as the names of the customers that purchased a particular product. Contrast with network database, flat file.

release. (IEEE) The formal notification and distribution of an approved version. See: version.

reliability. (IEEE) The ability of a system or component to perform its required functions under stated conditions for a specified period of time. See: software reliability.

reliability assessment. (ANSI/IEEE) The process of determining the achieved level of reliability for an existing system or system component.

requirement. (IEEE) (1) A condition or capability needed by a user to solve a problem or achieve an objective. (2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents. (3) A documented representation of a condition or capability as in (1) or (2). See: design requirement, functional requirement, implementation requirement, interface requirement, performance requirement, physical requirement.

requirements analysis. (IEEE) (1) The process of studying user needs to arrive at a definition of a system, hardware, or software requirements. (2) The process of studying and refining system, hardware, or software requirements. See: prototyping, software engineering.

requirements phase. (IEEE) The period of time in the software life cycle during which the requirements, such as functional and performance capabilities for a software product, are defined and documented.

requirements review. (IEEE) A process or meeting during which the requirements for a system, hardware item, or software item are presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include system requirements review, software requirements review. Contrast with code review, design review, formal qualification review, test readiness review.

retention period. (ISO) The length of time specified for data on a data medium to be preserved.

retrospective trace. (IEEE) A trace produced from historical data recorded during the execution of a computer program. Note: this differs from an ordinary trace, which is produced cumulatively during program execution. See: execution trace, subroutine trace, symbolic trace, variable trace.

revalidation. Relative to software changes, revalidation means validating the change itself, assessing the nature of the change to determine potential ripple effects, and performing the necessary regression testing.

review. (IEEE) A process or meeting during which a work product or set of work products is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include code review, design review, formal qualification review, requirements review, test readiness review. Contrast with audit, inspection. See: static analysis.

revision number. See: version number.

risk. (IEEE) A measure of the probability and severity of undesired effects. Often taken as the simple product of probability and consequence.

risk assessment. (DOD) A comprehensive evaluation of the risk and its associated impact.

robustness. The degree to which a software system or component can function correctly in the presence of invalid inputs or stressful environmental conditions. See: software reliability.

routine. (IEEE) A subprogram that is called by other programs and subprograms. Note: This term is defined differently in various programming languages. See: module.

RS-232-C. An Electronic Industries Association (EIA) standard for connecting electronic equipment. Data is transmitted and received in serial format.

- S -

SCSI. small computer systems interface.

SOPs. standard operating procedures.

SQL. structured query language.

SSI. small scale integration.

safety. (DOD) Freedom from those conditions that can cause death, injury, occupational illness, or damage to or loss of equipment or property, or damage to the environment.

safety critical. (DOD) A term applied to a condition, event, operation, process, or item whose proper recognition, control, performance or tolerance is essential to safe system operation or use; e.g., safety critical function, safety critical path, safety critical component.

safety critical computer software components. (DOD) Those computer software components and units whose errors can result in a potential hazard, or loss of predictability or control of a system.

security. See: computer system security.

sensor. A peripheral input device which senses some variable in the system environment, such as temperature, and converts it to an electrical signal which can be further converted to a digital signal for processing by the computer.

serial. (1) Pertaining to the sequential processing of the individual parts of a whole, such as the bits of a character or the characters of a word, using the same facilities for successive parts. (2) Term describing the transmission of data one bit at a time. Contrast with parallel.

server. A high speed computer in a network that is shared by multiple users. It holds the programs and data that are shared by all users.

service program. Syn: utility program.

servomechanism. (ANSI) (1) An automatic device that uses feedback to govern the physical position of an element. (2) A feedback control system in which at least one of the system signals represents a mechanical motion.

severity. See: criticality.

side effect. An unintended alteration of a program's behavior caused by a change in one part of the program, without taking into account the effect the change has on another part of the program. See: regression analysis and testing.

simulation. (1) (NBS) Use of an executable model to represent the behavior of an object. During testing the computational hardware, the external environment, and even code segments may be simulated. (2) (IEEE) A model that behaves or operates like a given system when provided a set of controlled inputs. Contrast with emulation.

simulation analysis. (IEEE) A software V&V task to simulate critical tasks of the software or system environment to analyze logical or performance characteristics that would not be practical to analyze manually.

simulator. (IEEE) A device, computer program, or system that behaves or operates like a given system when provided a set of controlled inputs. Contrast with emulator. A simulator provides inputs or responses that resemble anticipated process parameters. Its function is to present data to the system at known speeds and in a proper format.

sizing. (IEEE) The process of estimating the amount of computer storage or the number of source lines required for a software system or component. Contrast with timing.

sizing and timing analysis. (IEEE) A software V&V task to obtain program sizing and execution timing information to determine if the program will satisfy processor size and performance requirements allocated to software.

small computer systems interface. A standard method of interfacing a computer to disk drives, tape drives and other peripheral devices that require high-speed data transfer. Up to seven SCSI devices can be linked to a single SCSI port. Contrast with ST-506, EDSI, IDE.

small scale integration. A classification of ICs [chips] based on their size as expressed by the number of circuits or logic gates they contain. An SSI IC contains up to 100 transistors.

software. (ANSI) Programs, procedures, rules, and any associated documentation pertaining to the operation of a system. Contrast with hardware. See: application software, operating system, system software, utility software.

software audit. See: software review.

software characteristic. An inherent, possibly accidental, trait, quality, or property of software; e.g., functionality, performance, attributes, design constraints, number of states, lines or branches.

software configuration item. See: configuration item.

software design description. (IEEE) A representation of software created to facilitate analysis, planning, implementation, and decision making. The software design description is used as a medium for communicating software design information, and may be thought of as a blueprint or model of the system. See: structured design, design description, specification.

software developer. See: developer.

software development notebook. (NIST) A collection of material pertinent to the development of a software module. Contents typically include the requirements, design, technical reports, code listings, test plans, test results, problem reports, schedules, notes, etc. for the module. Syn: software development file.

software development plan. (NIST) The project plan for the development of a software product. Contrast with software development process, software life cycle.

software development process. (IEEE) The process by which user needs are translated into a software product. The process involves translating user needs into software requirements, transforming the software requirements into design, implementing the design in code, testing the code, and sometimes installing and checking out the software for operational activities. Note: these activities may overlap or be performed iteratively. See: incremental development, rapid prototyping, spiral model, waterfall model.

software diversity. (IEEE) A software development technique in which two or more functionally identical variants of a program are developed from the same specification by different programmers or programming teams with the intent of providing error detection, increased reliability, additional documentation or reduced probability that programming or compiler errors will influence the end results.

software documentation. (NIST) Technical data or information, including computer listings and printouts, in human readable form, that describe or specify the design or details, explain the capabilities, or provide operating instructions for using the software to obtain desired results from a software system. See: specification; specification, requirements; specification, design; software design description; test plan, test report, user's guide.

software element. (IEEE) A deliverable or in-process document produced or acquired during software development or maintenance. Specific examples include but are not limited to:

(1) Project planning documents; i.e., software development plans, and software verification and validation plans.

(2) Software requirements and design specifications.

(3) Test documentation.

(4) Customer-deliverable documentation.

(5) Program source code.

(6) Representation of software solutions implemented in firmware.

(7) Reports; i.e., review, audit, project status.

(8) Data; i.e., defect detection, test.

Contrast with software item. See: configuration item.

software element analysis. See: software review.

software engineering. (IEEE) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; i.e., the application of engineering to software. See: project plan, requirements analysis, architectural design, structured design, system safety, testing, configuration management.

software engineering environment. (IEEE) The hardware, software, and firmware used to perform a software engineering effort. Typical elements include computer equipment, compilers, assemblers, operating systems, debuggers, simulators, emulators, test tools, documentation tools, and database management systems.

software hazard analysis. (ODE, CDRH) The identification of safety-critical software, the classification and estimation of potential hazards, and identification of program path analysis to identify hazardous combinations of internal and environmental program conditions. See: risk assessment, software safety change analysis, software safety code analysis, software safety design analysis, software safety requirements analysis, software safety test analysis, system safety.

software item. (IEEE) Source code, object code, job control code, control data, or a collection of these items. Contrast with software element.

software life cycle. (NIST) Period of time beginning when a software product is conceived and ending when the product is no longer available for use. The software life cycle is typically broken into phases denoting activities such as requirements, design, programming, testing, installation, and operation and maintenance. Contrast with software development process. See: waterfall model.

software reliability. (IEEE) (1) The probability that software will not cause the failure of a system for a specified time under specified conditions. The probability is a function of the inputs to and use of the system as well as of the existence of faults in the software. The inputs to the system determine whether existing faults, if any, are encountered. (2) The ability of a program to perform its required functions accurately and reproducibly under stated conditions for a specified period of time.

software requirements specification. See: specification, requirements.

software review. (IEEE) An evaluation of software elements to ascertain discrepancies from planned results and to recommend improvement. This evaluation follows a formal process. Syn: software audit. See: code audit, code inspection, code review, code walkthrough, design review, specification analysis, static analysis.

software safety change analysis. (IEEE) Analysis of the safety-critical design elements affected directly or indirectly by the change to show the change does not create a new hazard, does not impact on a previously resolved hazard, does not make a currently existing hazard more severe, and does not adversely affect any safety-critical software design element. See: software hazard analysis, system safety.

software safety code analysis. (IEEE) Verification that the safety-critical portions of the design are correctly implemented in the code. See: logic analysis, data analysis, interface analysis, constraint analysis, programming style analysis, noncritical code analysis, timing and sizing analysis, software hazard analysis, system safety.

software safety design analysis. (IEEE) Verification that the safety-critical portion of the software design correctly implements the safety-critical requirements and introduces no new hazards. See: logic analysis, data analysis, interface analysis, constraint analysis, functional analysis, software element analysis, timing and sizing analysis, reliability analysis, software hazard analysis, system safety.

software safety requirements analysis. (IEEE) Analysis evaluating software and interface requirements to identify errors and deficiencies that could contribute to a hazard. See: criticality analysis, specification analysis, timing and sizing analysis, different software systems analyses, software hazard analysis, system safety.

software safety test analysis. (IEEE) Analysis demonstrating that safety requirements have been correctly implemented and that the software functions safely within its specified environment. Tests may include; unit level tests, interface tests, software configuration item testing, system level testing, stress testing, and regression testing. See: software hazard analysis, system safety.

source code. (1) (IEEE) Computer instructions and data definitions expressed in a form suitable for input to an assembler, compiler or other translator. (2) The human readable version of the list of instructions [program] that cause a computer to perform a task. Contrast with object code. See: source program, programming language.

source program. (IEEE) A computer program that must be compiled, assembled, or otherwise translated in order to be executed by a computer. Contrast with object program. See: source code.

spaghetti code. Program source code written without a coherent structure. Implies the excessive use of GOTO instructions. Contrast with structured programming.

special test data. (NBS) Test data based on input values that are likely to require special handling by the program. See: error guessing; testing, special case.

specification. (IEEE) A document that specifies, in a complete, precise, verifiable manner, the requirements, design, behavior, or other characteristics of a system or component, and often, the procedures for determining whether these provisions have been satisfied. Contrast with requirement. See: specification, formal; specification, requirements; specification, functional; specification, performance; specification, interface; specification, design; coding standards; design standards.

specification analysis. (IEEE) Evaluation of each safety-critical software requirement with respect to a list of qualities such as completeness, correctness, consistency, testability, robustness, integrity, reliability, usability, flexibility, maintainability, portability, interoperability, accuracy, auditability, performance, internal instrumentation, security and training.

specification, design. (NIST) A specification that documents how a system is to be built. It typically includes system or component structure, algorithms, control logic, data structures, data set [file] use information, input/output formats, interface descriptions, etc. Contrast with design standards, requirement. See: software design description.

specification, formal. (NIST) (1) A specification written and approved in accordance with established standards. (2) A specification expressed in a requirements specification language. Contrast with requirement.

specification, functional. (NIST) A specification that documents the functional requirements for a system or system component. It describes what the system or component is to do rather than how it is to be built. Often part of a requirements specification. Contrast with requirement.

specification, interface. (NIST) A specification that documents the interface requirements for a system or system component. Often part of a requirements specification. Contrast with requirement.

specification, performance. (IEEE) A document that sets forth the performance characteristics that a system or component must possess. These characteristics typically include speed, accuracy, and memory usage. Often part of a requirements specification. Contrast with requirement.

specification, product. (IEEE) A document which describes the as built version of the software.

specification, programming. (NIST) See: specification, design.

specification, requirements. (NIST) A specification that documents the requirements of a system or system component. It typically includes functional requirements, performance requirements, interface requirements, design requirements [attributes and constraints], development [coding] standards, etc. Contrast with requirement.

specification, system. See: requirements specification.

specification, test case. See: test case.

specification tree. (IEEE) A diagram that depicts all of the specifications for a given system and shows their relationship to one another.

spiral model. (IEEE) A model of the software development process in which the constituent activities, typically requirements analysis, preliminary and detailed design, coding, integration, and testing, are performed iteratively until the software is complete. Syn: evolutionary model. Contrast with incremental development; rapid prototyping; waterfall model.

ST-506. A standard electrical interface between the hard disk and controller in IBM PC compatible computers. Contrast with EDSI, IDE, SCSI.

standard operating procedures. Written procedures [prescribing and describing the steps to be taken in normal and defined conditions] which are necessary to assure control of production and processes.

state. (IEEE) (1) A condition or mode of existence that a system, component, or simulation may be in; e.g., the pre-flight state of an aircraft navigation program or the input state of a given channel.

state diagram. (IEEE) A diagram that depicts the states that a system or component can assume, and shows the events or circumstances that cause or result from a change from one state to another. Syn: state graph. See: state-transition table.

statement coverage. See: testing, statement.

state-transition table. (Beizer) A representation of a state graph that specifies the states, the inputs, the transitions, and the outputs. See: state diagram.
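
A small illustrative table; the states, inputs, and outputs describe a hypothetical device and are not part of Beizer's definition.

    # Each (state, input) pair maps to (next state, output).
    TABLE = {
        ("idle",    "start"): ("running", "motor on"),
        ("running", "stop"):  ("idle",    "motor off"),
        ("running", "fault"): ("idle",    "alarm"),
    }

    state = "idle"
    for event in ("start", "fault"):
        state, output = TABLE[(state, event)]
        print(state, output)    # running motor on, then idle alarm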

static analysis. (1) (NBS) Analysis of a program that is performed without executing the program. (2) (IEEE) The process of evaluating a system or component based on its form, structure, content, or documentation. Contrast with dynamic analysis. See: code audit, code inspection, code review, code walk-through, design review, symbolic execution.

static analyzer. (ANSI/IEEE) A software tool that aids in the evaluation of a computer program without executing the program. Examples include checkers, compilers, cross-reference generators, standards enforcers, and flowcharters.

stepwise refinement. A structured software design technique; data and processing steps are defined broadly at first, and then further defined with increasing detail.

storage device. A unit into which data or programs can be placed, retained and retrieved. See: memory.

string. (IEEE) (1) A sequence of characters. (2) A linear sequence of entities such as characters or physical elements.

structure chart. (IEEE) A diagram that identifies modules, activities, or other entities in a system or computer program and shows how larger or more general entities break down into smaller, more specific entities. Note: The result is not necessarily the same as that shown in a call graph. Syn: hierarchy chart, program structure chart. Contrast with call graph.

structured design. (IEEE) Any disciplined approach to software design that adheres to specified rules based on principles such as modularity, top-down design, and stepwise refinement of data, system structure, and processing steps. See: data structure centered design, input-processing-output, modular decomposition, object oriented design, rapid prototyping, stepwise refinement, structured programming, transaction analysis, transform analysis, graphical software specification/design documents, modular software, software engineering.

structured programming. (IEEE) Any software development technique that includes structured design and results in the development of structured programs. See: structured design.

structured query language. A language used to interrogate and process data in a relational database. Originally developed for IBM mainframes, it now has many implementations for mini and micro computer database applications. SQL commands can be used to work interactively with a database or can be embedded within a programming language to interface with a database.
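
As a rough illustration of SQL embedded in a host programming language: the tables and data below are invented, and Python's built-in sqlite3 module stands in for a database interface.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
    con.execute("CREATE TABLE orders (customer_id INTEGER, product TEXT)")
    con.execute("INSERT INTO customer VALUES (1, 'Acme')")
    con.execute("INSERT INTO orders VALUES (1, 'widget')")

    # SQL text embedded in the program: which customers bought a given product?
    rows = con.execute(
        "SELECT c.name FROM customer c "
        "JOIN orders o ON c.id = o.customer_id WHERE o.product = ?",
        ("widget",)).fetchall()
    print(rows)    # [('Acme',)]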

stub. (NBS) Special code segments that when invoked by a code segment under test will simulate the behavior of designed and specified modules not yet constructed.
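
A minimal sketch: the tax-lookup module is assumed not to exist yet, so a stub supplies a canned response while the calling code is exercised. All names are hypothetical.

    def tax_rate_stub(state_code):
        """Stands in for the unbuilt tax module; always returns a fixed rate."""
        return 0.05

    def price_with_tax(amount, tax_lookup):       # code segment under test
        return amount * (1 + tax_lookup("FL"))

    print(price_with_tax(100.0, tax_rate_stub))   # 105.0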

subprogram. (IEEE) A separately compilable, executable component of a computer program. Note: This term is defined differently in various programming languages. See: coroutine, main program, routine, subroutine.

subroutine. (IEEE) A routine that returns control to the program or subprogram that called it. Note: This term is defined differently in various programming languages. See: module.

subroutine trace. (IEEE) A record of all or selected subroutines or function calls performed during the execution of a computer program and, optionally, the values of parameters passed to and returned by each subroutine or function. Syn: call trace. See: execution trace, retrospective trace, symbolic trace, variable trace.

support software. (IEEE) Software that aids in the development and maintenance of other software; e.g., compilers, loaders, and other utilities.

symbolic execution. (IEEE) A static analysis technique in which program execution is simulated using symbols, such as variable names, rather than actual values for input data, and program outputs are expressed as logical or mathematical expressions involving these symbols.

symbolic trace. (IEEE) A record of the source statements and branch outcomes that are encountered when a computer program is executed using symbolic, rather than actual values for input data. See: execution trace, retrospective trace, subroutine trace, variable trace.

synchronous. Occurring at regular, timed intervals, i.e. timing dependent.

synchronous transmission. A method of electrical transfer in which a constant time interval is maintained between successive bits or characters. Equipment within the system is kept in step on the basis of this timing. Contrast with asynchronous transmission.

syntax. The structural or grammatical rules that define how symbols in a language are to be combined to form words, phrases, expressions, and other allowable constructs.

system. (1) (ANSI) People, machines, and methods organized to accomplish a set of specific functions. (2) (DOD) A composite, at any level of complexity, of personnel, procedures, materials, tools, equipment, facilities, and software. The elements of this composite entity are used together in the intended operational or support environment to perform a given task or achieve a specific purpose, support, or mission requirement.

system administrator. The person who is charged with the overall administration and operation of a computer system. The system administrator is normally an employee or a member of the establishment. Syn: system manager.

system analysis. (ISO) A systematic investigation of a real or planned system to determine the functions of the system and how they relate to each other and to any other system. See: requirements phase.

system design. (ISO) A process of defining the hardware and software architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. See: design phase, architectural design, functional design.

system design review. (IEEE) A review conducted to evaluate the manner in which the requirements for a system have been allocated to configuration items, the system engineering process that produced the allocation, the engineering planning for the next phase of the effort, manufacturing considerations, and the planning for production engineering. See: design review.

system documentation. (ISO) The collection of documents that describe the requirements, capabilities, limitations, design, operation, and maintenance of an information processing system. See: specification, test documentation, user's guide.

system integration. (ISO) The progressive linking and testing of system components into a complete system. See: incremental integration.

system life cycle. The course of developmental changes through which a system passes from its conception to the termination of its use; e.g., the phases and activities associated with the analysis, acquisition, design, development, test, integration, operation, maintenance, and modification of a system. See: software life cycle.

system manager. See: system administrator.

system safety. (DOD) The application of engineering and management principles, criteria, and techniques to optimize all aspects of safety within the constraints of operational effectiveness, time, and cost throughout all phases of the system life cycle. See: risk assessment, software safety change analysis, software safety code analysis, software safety design analysis, software safety requirements analysis, software safety test analysis, software engineering.

system software. (1) (ISO) Application-independent software that supports the running of application software. (2) (IEEE) Software designed to facilitate the operation and maintenance of a computer system and its associated programs; e.g., operating systems, assemblers, utilities. Contrast with application software. See: support software.

- T -

TB. terabyte.

TCP/IP. transmission control protocol/Internet protocol.

tape. Linear magnetic storage hardware, rolled onto a reel or cassette.

telecommunication system. The devices and functions relating to transmission of data between the central processing system and remotely located users.

terabyte. Approximately one trillion bytes; precisely 2^40 or 1,099,511,627,776 bytes. See: kilobyte, megabyte, gigabyte.

terminal. A device, usually equipped with a CRT display and keyboard, used to send and receive information to and from a computer via a communication channel.

test. (IEEE) An activity in which a system or component is executed under specified conditions, the results are observed or recorded and an evaluation is made of some aspect of the system or component.

testability. (IEEE) (1) The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. (2) The degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met. See: measurable.

test case. (IEEE) Documentation specifying inputs, predicted results, and a set of execution conditions for a test item. Syn: test case specification. See: test procedure.

test case generator. (IEEE) A software tool that accepts as input source code, test criteria, specifications, or data structure definitions; uses these inputs to generate test input data; and, sometimes, determines expected results. Syn: test data generator, test generator.

test design. (IEEE) Documentation specifying the details of the test approach for a software feature or combination of software features and identifying the associated tests. See: testing, functional; cause effect graphing; boundary value analysis; equivalence class partitioning; error guessing; testing, structural; branch analysis; path analysis; statement coverage; condition coverage; decision coverage; multiple-condition coverage.

test documentation. (IEEE) Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, test report.

test driver. (IEEE) A software module used to invoke a module under test and, often, provide test inputs, control and monitor execution, and report test results. Syn: test harness.
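
A bare-bones illustration of a driver: add() stands in for the module under test, and the driver supplies inputs, invokes it, and reports results. The routine and test cases are invented.

    def add(a, b):                 # stand-in for the module under test
        return a + b

    def driver():
        cases = [((2, 3), 5), ((-1, 1), 0), ((0, 0), 0)]
        for inputs, expected in cases:
            actual = add(*inputs)
            print(inputs, "PASS" if actual == expected else "FAIL")

    driver()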

test harness. See: test driver.

test incident report. (IEEE) A document reporting on any event that occurs during testing that requires further investigation. See: failure analysis.

test item. (IEEE) A software item which is the object of testing.

test log. (IEEE) A chronological record of all relevant details about the execution of a test.

test phase. (IEEE) The period of time in the software life cycle in which the components of a software product are evaluated and integrated, and the software product is evaluated to determine whether or not requirements have been satisfied.

test plan. (IEEE) Documentation specifying the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning. See: test design, validation protocol.

test procedure. (NIST) A formal document developed from a test plan that presents detailed instructions for the setup, operation, and evaluation of the results for each defined test. See: test case.

test readiness review. (IEEE) (1) A review conducted to evaluate preliminary test results for one or more configuration items; to verify that the test procedures for each configuration item are complete, comply with test plans and descriptions, and satisfy test requirements; and to verify that a project is prepared to proceed to formal testing of the configuration items. (2) A review as in (1) for any hardware or software component. Contrast with code review, design review, formal qualification review, requirements review.

test report. (IEEE) A document describing the conduct and results of the testing carried out for a system or system component.

test result analyzer. A software tool used to test output data reduction, formatting, and printing.

testing. (IEEE) (1) The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. (2) The process of analyzing a software item to detect the differences between existing and required conditions, i.e. bugs, and to evaluate the features of the software items. See: dynamic analysis, static analysis, software engineering.

testing, 100%. See: testing, exhaustive.

testing, acceptance. (IEEE) Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. Contrast with testing, development; testing, operational. See: testing, qualification.

testing, alpha [α]. (Pressman) Acceptance testing performed by the customer in a controlled environment at the developer's site. The software is used by the customer in a setting approximating the target environment with the developer observing and recording errors and usage problems.

testing, assertion. (NBS) A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes. See: assertion checking, instrumentation.
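
A minimal sketch, with assertions about the relationship between program variables inserted into hypothetical code and checked as it executes.

    def transfer(balance, amount):
        assert amount >= 0                          # inserted assertion on an input variable
        new_balance = balance - amount
        assert new_balance <= balance               # relationship between program variables
        return new_balance

    transfer(100, 30)       # assertions hold during execution
    # transfer(100, -5)     # would raise AssertionError, flagging the violation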

testing, beta [β]. (1) (Pressman) Acceptance testing performed by the customer in a live application of the software, at one or more end user sites, in an environment not controlled by the developer. (2) For medical device software such use may require an Investigational Device Exemption [IDE] or Institutional Review Board [IRB] approval.

testing, boundary value. A testing technique using input values at, just below, and just above, the defined limits of an input domain; and with input values causing outputs to be at, just below, and just above, the defined limits of an output domain. See: boundary value analysis; testing, stress.
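
For illustration, assuming an input domain of 1 through 100, the test values sit at, just below, and just above each defined limit; the domain and routine are invented.

    def in_domain(x):            # hypothetical program behavior at the domain limits
        return 1 <= x <= 100

    for x in (0, 1, 2, 99, 100, 101):
        print(x, in_domain(x))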

testing, branch. (NBS) Testing technique to satisfy coverage criteria which require that for each decision point, each possible branch [outcome] be executed at least once. Contrast with testing, path; testing, statement. See: branch coverage.

testing, compatibility. The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems. See: different software system analysis; testing, integration; testing, interface.

testing, component. See: testing, unit.

testing, design based functional. (NBS) The application of test data derived through functional analysis extended to include design functions as well as requirement functions. See: testing, functional.

testing, development. (IEEE) Testing conducted during the development of a system or component, usually in the development environment by the developer. Contrast with testing, acceptance; testing, operational.

testing, exhaustive. (NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.

testing, formal. (IEEE) Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.

testing, functional. (IEEE) (1) Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions. (2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results. Syn: black-box testing, input/output driven testing. Contrast with testing, structural.

testing, integration. (IEEE) An orderly progression of testing in which software elements, hardware elements, or both are combined and tested, to evaluate their interactions, until the entire system has been integrated.

testing, interface. (IEEE) Testing conducted to evaluate whether systems or components pass data and control correctly to one another. Contrast with testing, unit; testing, system. See: testing, integration.

testing, interphase. See: testing, interface.

testing, invalid case. A testing technique using erroneous [invalid, abnormal, or unexpected] input values or conditions. See: equivalence class partitioning.

testing, mutation. (IEEE) A testing methodology in which two or more program mutations are executed using the same test cases to evaluate the ability of the test cases to detect differences in the mutations.
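
A small illustration with an invented discount rule: the same test cases are run against the original routine and a purposely altered mutant to see whether they can tell the two apart.

    def discount_original(total):
        return 0.1 if total >= 100 else 0.0

    def discount_mutant(total):
        return 0.1 if total > 100 else 0.0     # ">=" purposely mutated to ">"

    for total in (150, 100):
        same = discount_original(total) == discount_mutant(total)
        print(total, "mutation undetected" if same else "mutation detected")
    # total=150 leaves the mutant undetected (weak case); total=100 detects it (boundary case)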

testing, operational. (IEEE) Testing conducted to evaluate a system or component in its operational environment. Contrast with testing, development; testing, acceptance. See: testing, system.

testing, parallel. (ISO) Testing a new or an altered data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run.

testing, path. (NBS) Testing to satisfy coverage criteria that each logical path through the program be tested. Often paths through the program are grouped into a finite set of classes. One path from each class is then tested. Syn: path coverage. Contrast with testing, branch; testing, statement; branch coverage; condition coverage; decision coverage; multiple condition coverage; statement coverage.

testing, performance. (IEEE) Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements.

testing, qualification. (IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: testing, acceptance; testing, system.

testing, regression. (NIST) Rerunning test cases which a program has previously executed correctly in order to detect errors spawned by changes or corrections made during software development and maintenance.

testing, special case. A testing technique using input values that seem likely to cause program errors; e.g., "0", "1", NULL, empty string. See: error guessing.

testing, statement. (NIST) Testing to satisfy the criterion that each statement in a program be executed at least once during program testing. Syn: statement coverage. Contrast with testing, branch; testing, path; branch coverage; condition coverage; decision coverage; multiple condition coverage; path coverage.

testing, storage. This is a determination of whether or not certain processing conditions use more storage [memory] than estimated.

testing, stress. (IEEE) Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. Syn: testing, boundary value.

testing, structural. (1) (IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic driven testing.

testing, system. (IEEE) The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. Such testing may be conducted in both the development environment and the target environment.

testing, unit. (1) (NIST) Testing of a module for typographic, syntactic, and logical errors, for correct implementation of its design, and for satisfaction of its requirements. (2) (IEEE) Testing conducted to verify the implementation of the design for one software element; e.g., a unit or module; or a collection of software elements. Syn: component testing.

testing, usability. Tests designed to evaluate the machine/user interface. Are the communication device(s) designed in a manner such that the information is displayed in an understandable fashion, enabling the operator to correctly interact with the system?

testing, valid case. A testing technique using valid [normal or expected] input values or conditions. See: equivalence class partitioning.

testing, volume. Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates a system's ability to handle overload situations in an orderly fashion.

testing, worst case. Testing which encompasses upper and lower limits, and circumstances which pose the greatest chance of finding errors. Syn: most appropriate challenge conditions. See: testing, boundary value; testing, invalid case; testing, special case; testing, stress; testing, volume.

time sharing. (IEEE) A mode of operation that permits two or more users to execute computer programs concurrently on the same computer system by interleaving the execution of their programs. May be implemented by time slicing, priority-based interrupts, or other scheduling methods.

timing. (IEEE) The process of estimating or measuring the amount of execution time required for a software system or component. Contrast with sizing.

timing analyzer. (IEEE) A software tool that estimates or measures the execution time of a computer program or portion of a computer program, either by summing the execution times of the instructions along specified paths or by inserting probes at specified points in the program and measuring the execution time between probes.

timing and sizing analysis. (IEEE) Analysis of the safety implications of safety-critical requirements that relate to execution time, clock time, and memory allocation.

top-down design. Pertaining to design methodology that starts with the highest level of abstraction and proceeds through progressively lower levels. See: structured design.

touch sensitive. (ANSI) Pertaining to a device that allows a user to interact with a computer system by touching an area on the surface of the device with a finger, pencil, or other object, e.g., a touch sensitive keypad or screen.

touch screen. A touch sensitive display screen that uses a clear panel over or on the screen surface. The panel is a matrix of cells, an input device, that transmits pressure information to the software.

trace. (IEEE) (1) A record of the execution of a computer program, showing the sequence of instructions executed, the names and values of variables, or both. Types include execution trace, retrospective trace, subroutine trace, symbolic trace, variable trace. (2) To produce a record as in (1). (3) To establish a relationship between two or more products of the development process; e.g., to establish the relationship between a given requirement and the design element that implements that requirement.

traceability. (IEEE) (1) The degree to which a relationship can be established between two or more products of the development process, especially products having a predecessor-successor or master-subordinate relationship to one another; e.g., the degree to which the requirements and design of a given software component match. See: consistency. (2) The degree to which each element in a software development product establishes its reason for existing; e.g., the degree to which each element in a bubble chart references the requirement that it satisfies. See: traceability analysis, traceability matrix.

traceability analysis. (IEEE) The tracing of (1) Software Requirements Specifications requirements to system requirements in concept documentation, (2) software design descriptions to software requirements specifications and software requirements specifications to software design descriptions, (3) source code to corresponding design specifications and design specifications to source code. Analyze identified relationships for correctness, consistency, completeness, and accuracy. See: traceability, traceability matrix.

traceability matrix. (IEEE) A matrix that records the relationship between two or more products; e.g., a matrix that records the relationship between the requirements and the design of a given software component. See: traceability, traceability analysis.
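
A toy illustration (the requirement and design-element identifiers are invented): each row records which design elements trace to a requirement, and an empty row exposes a gap.

    matrix = {
        "REQ-1": ["DES-A"],
        "REQ-2": ["DES-A", "DES-C"],
        "REQ-3": [],                            # no design element traces to REQ-3
    }
    for req, elements in matrix.items():
        print(req, "->", ", ".join(elements) or "UNTRACED")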

transaction. (ANSI) (1) A command, message, or input record that explicitly or implicitly calls for a processing action, such as updating a file. (2) An exchange between an end user and an interactive system. (3) In a database management system, a unit of processing activity that accomplishes a specific purpose such as a retrieval, an update, a modification, or a deletion of one or more data elements of a storage structure.

transaction analysis. A structured software design technique, deriving the structure of a system from analyzing the transactions that the system is required to process.

transaction flowgraph. (Beizer) A model of the structure of the system's [program's] behavior, i.e., functionality.

transaction matrix. (IEEE) A matrix that identifies possible requests for database access and relates each request to information categories or elements in the database.

transform analysis. A structured software design technique in which system structure is derived from analyzing the flow of data through the system and the transformations that must be performed on the data.

translation. (NIST) Converting from one language form to another. See: assembling, compilation, interpret.

transmission control protocol/Internet protocol. A set of communications protocols developed for the Defense Advanced Research Projects Agency to internetwork dissimilar systems. It is used by many corporations, almost all American universities, and agencies of the federal government. The File Transfer Protocol and Simple Mail Transfer Protocol provide file transfer and electronic mail capability. The TELNET protocol provides a terminal emulation capability that allows a user to interact with any other type of computer in the network. The TCP protocol controls the transfer of the data, and the IP protocol provides the routing mechanism.

trojan horse. A method of attacking a computer system, typically by providing a useful program which contains code intended to compromise a computer system by secretly providing for unauthorized access, the unauthorized collection of privileged system or user data, the unauthorized reading or altering of files, the performance of unintended and unexpected functions, or the malicious destruction of software and hardware. See: bomb, virus, worm.

truth table. (1) (ISO) An operation table for a logic operation. (2) A table that describes a logic function by listing all possible combinations of input values, and indicating, for each combination, the output value.
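
A quick sketch that prints the truth table for the AND operation, listing every combination of input values with the resulting output; the choice of operation is only an example.

    from itertools import product

    for a, b in product((False, True), repeat=2):
        print(a, b, a and b)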

tuning. (NIST) Determining what parts of a program are being executed the most. A tool that instruments a program to obtain execution frequencies of statements is a tool with this feature.

twisted pair. A pair of thin-diameter insulated wires commonly used in telephone wiring. The wires are twisted around each other to minimize interference from other twisted pairs in the cable. Twisted pairs have less bandwidth than coaxial cable or optical fiber. Abbreviated UTP for Unshielded Twisted Pair. Syn: twisted wire pair.

- U -

unambiguous. (1) Not having two or more possible meanings. (2) Not susceptible to different interpretations. (3) Not obscure, not vague. (4) Clear, definite, certain.

underflow. (ISO) The state in which a calculator shows a zero indicator for the most significant part of a number while the least significant part of the number is dropped. For example, if the calculator output capacity is four digits, the number .0000432 will be shown as .0000. See: arithmetic underflow.

underflow exception. (IEEE) An exception that occurs when the result of an arithmetic operation is too small a fraction to be represented by the storage location designated to receive it.
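
A minimal demonstration, assuming NumPy is available (plain Python floats silently flush the result to zero rather than raising):

    import numpy as np

    # Ask NumPy to raise instead of warn when a result underflows float64.
    np.seterr(under="raise")
    try:
        np.array([1e-300]) * np.array([1e-300])   # exact result 1e-600 is too small to represent
    except FloatingPointError as exc:
        print("underflow detected:", exc)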

unit. (IEEE) (1) A separately testable element specified in the design of a computer software element. (2) A logically separable part of a computer program. Syn: component, module.

UNIX. A multitasking, multiple-user (time-sharing) operating system developed at Bell Labs to create a favorable environment for programming research and development.

usability. (IEEE) The ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component.

user. (ANSI) Any person, organization, or functional unit that uses the services of an information processing system. See: end user.

user's guide. (ISO) Documentation that describes how to use a functional unit, and that may include description of the rights and responsibilities of the user, the owner, and the supplier of the unit. Syn: user manual, operator manual.

utility program. (ISO) A computer program in general support of the processes of a computer; e.g., a diagnostic program, a trace program, a sort program. Syn: service program. See: utility software.

utility software. (IEEE) Computer programs or routines designed to perform some general support function required by other application software, by the operating system, or by the system users. They perform general functions such as formatting electronic media, making copies of files, or deleting files.

- V -

V&V. verification and validation.

VAX. virtual address extension.

VLSI. very large scale integration.

VMS. virtual memory system.

VV&T. validation, verification, and testing.

valid. (1) Sound. (2) Well grounded on principles of evidence. (3) Able to withstand criticism or objection.

validate. To prove to be valid.

validation. (1) (FDA) Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes. Contrast with data validation.

validation, process. (FDA) Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality characteristics.

validation, prospective. (FDA) Validation conducted prior to the distribution of either a new product, or product made under a revised manufacturing process, where the revisions may affect the product's characteristics.

validation protocol. (FDA) A written plan stating how validation will be conducted, including test parameters, product characteristics, production equipment, and decision points on what constitutes acceptable test results. See: test plan.

validation, retrospective. (FDA) (1) Validation of a process for a product already in distribution based upon accumulated production, testing and control data. (2) Retrospective validation can also be useful to augment initial premarket prospective validation for new products or changed processes. Test data is useful only if the methods and results are adequately specific. Whenever test data are used to demonstrate conformance to specifications, it is important that the test methodology be qualified to assure that the test results are objective and accurate.

validation, software. (NBS) Determination of the correctness of the final program or software produced from a development project with respect to the user needs and requirements. Validation is usually accomplished by verifying each stage of the software development life cycle. See: verification, software.

validation, verification, and testing. (NIST) Used as an entity to define a procedure of review, analysis, and testing throughout the software life cycle to discover errors, determine functionality, and ensure the production of quality software.

valid input. (NBS) Test data that lie within the domain of the function represented by the program.

variable. A name, label, quantity, or data item whose value may be changed many times during processing. Contrast with constant.

variable trace. (IEEE) A record of the name and values of variables accessed or changed during the execution of a computer program. Syn: data-flow trace, data trace, value trace. See: execution trace, retrospective trace, subroutine trace, symbolic trace.
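
A rough sketch of the idea using Python's sys.settrace hook; the traced function and its variables are invented for illustration.

    import sys

    def trace_locals(frame, event, arg):
        # On each executed line of compute(), record the current variable values.
        if event == "line" and frame.f_code.co_name == "compute":
            print(f"line {frame.f_lineno}: {frame.f_locals}")
        return trace_locals

    def compute(x):
        y = x * 2
        z = y + 1
        return z

    sys.settrace(trace_locals)
    compute(3)
    sys.settrace(None)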

vendor. A person or an organization that provides software and/or hardware and/or firmware and/or documentation to the user for a fee or in exchange for services. Such a firm could be a medical device manufacturer.

verifiable. Can be proved or confirmed by examination or investigation. See: measurable.

verification, software. (NBS) In general the demonstration of consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle. See: validation, software.

verify. (ANSI) (1) To determine whether a transcription of data or other operation has been accomplished accurately. (2) To check the results of data entry; e.g., keypunching. (3) (Webster) To prove to be true by demonstration.

version. An initial release or a complete re-release of a software item or software element. See: release.

version number. A unique identifier used to identify software items and the related software documentation which are subject to configuration control.

very large scale integration. A classification of ICs [chips] based on their size as expressed by the number of circuits or logic gates they contain. A VLSI IC contains 100,000 to 1,000,000 transistors.

virtual address extension. Identifies Digital Equipment Corporation's VAX family of computers, ranging from a desktop workstation to a large scale cluster of multiprocessors supporting thousands of simultaneous users.

virtual memory system. Digital Equipment Corporation's multiprocessing, interactive operating system for the VAX computers.

virus. A program which secretly alters other programs to include a copy of itself, and executes when the host program is executed. The execution of a virus program compromises a computer system by performing unwanted or unintended functions which may be destructive. See: bomb, trojan horse, worm.

volume. (ANSI) A portion of data, together with its data carrier, that can be handled conveniently as a unit; e.g., a reel of magnetic tape, a disk pack, a floppy disk.

- W -

WAN. wide area network.

walkthrough. See: code walkthrough.

watchdog timer. (IEEE) A form of interval timer that is used to detect a possible malfunction.
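
A software-only sketch of the idea using Python's threading.Timer; the timeout value and messages are arbitrary.

    import threading
    import time

    TIMEOUT = 2.0   # seconds; illustrative value

    def alarm():
        print("watchdog expired: possible malfunction")

    # If the task does not cancel ("kick") the timer before TIMEOUT, alarm() runs.
    watchdog = threading.Timer(TIMEOUT, alarm)
    watchdog.start()

    time.sleep(1.0)       # simulated work that finishes in time
    watchdog.cancel()     # reset the watchdog; the alarm never fires
    print("task completed normally")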

waterfall model. (IEEE) A model of the software development process in which the constituent activities, typically a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, and operation and maintenance, are performed in that order, possibly with overlap but with little or no iteration. Contrast with incremental development; rapid prototyping; spiral model.

white-box testing. See: testing, structural.

wide area network. A communications network that covers wide geographic areas such as states and countries. Contrast with LAN, MAN.

word. See: computer word.

workaround. A sequence of actions the user should take to avoid a problem or system limitation until the computer program is changed. A workaround may include manual procedures used in conjunction with the computer system.

workstation. Any terminal or personal computer.

worm. An independent program which can travel from computer to computer across network connections, replicating itself in each computer. A worm does not change other programs, but compromises a computer system through its impact on system performance. See: bomb, trojan horse, virus.

- X -

Xmodem. An asynchronous file transfer protocol initially developed for CP/M personal computers. First versions used a checksum to detect errors. Later versions use the more effective CRC method. Programs typically include both methods and drop back to checksum if CRC is not present at the other end. Xmodem transmits 128 byte blocks. Xmodem-1K improves speed by transmitting 1024 byte blocks. Xmodem-1K-G transmits without acknowledgment [for error free channels or when modems are self correcting], but transmission is cancelled upon any error. Contrast with Kermit, Ymodem, Zmodem.
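
A sketch of the two error checks mentioned above, written in Python; the 128-byte block of zeros is only a placeholder.

    def checksum8(block: bytes) -> int:
        # Original Xmodem check: the arithmetic sum of the block, kept to one byte.
        return sum(block) & 0xFF

    def crc16_xmodem(block: bytes) -> int:
        # CRC as commonly used by Xmodem-CRC: polynomial 0x1021, initial value 0.
        crc = 0
        for byte in block:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
                crc &= 0xFFFF
        return crc

    block = bytes(128)        # Xmodem transmits fixed 128-byte data blocks
    print(hex(checksum8(block)), hex(crc16_xmodem(block)))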

- Y -

Ymodem. An asynchronous file transfer protocol identical to Xmodem-1K plus batch file transfer [also called Ymodem batch]. Ymodem-G transmits without acknowledgement [for error-free channels or when modems are self correcting], but transmission is cancelled upon any error. Contrast with Kermit, Xmodem, Zmodem.

- Z -

Zmodem. An asynchronous file transfer protocol that is more efficient than Xmodem. It sends the file name, date, and size first, and responds well to changing line conditions due to its variable length blocks. It uses CRC error checking and is effective even over high-delay satellite links. Contrast with Kermit, Xmodem, Ymodem.
