SEM_IV_BT0056_Software Testing & Quality Assurance


Subject Code : BT 0056 Assignment No: 01

Subject Name : Software Testing & Quality Assurance Marks: 30

Book ID: B 0649

1. Explain the origin of the defect distribution in a typical software development life cycle.

Ans:- Many organizations want to predict the number of defects (faults) in software systems before they are deployed, to gauge the likely delivered quality and maintenance effort. To help in this, numerous software metrics and statistical models have been developed, with a correspondingly large literature. We provide a critical review of this literature and the state of the art. Most of the wide range of prediction models use size and complexity metrics to predict defects. Others are based on testing data, the "quality" of the development process, or take a multivariate approach. The authors of the models have often made heroic contributions to a subject otherwise bereft of empirical studies. However, there are a number of serious theoretical and practical problems in many studies. The models are weak because of their inability to cope with the, as yet, unknown relationship between defects and failures. There are fundamental statistical and data quality problems that undermine model validity. More significantly, many prediction models tend to model only part of the underlying problem and seriously misspecify it.

To illustrate these points, the "Goldilocks Conjecture," that there is an optimum module size, is used to show the considerable problems inherent in current defect prediction approaches. Careful and considered analysis of past and new results shows that the conjecture lacks support and that some models are misleading. We recommend holistic models for software defect prediction, using Bayesian Belief Networks, as alternative approaches to the single-issue models used at present. We also argue for research into a theory of "software decomposition" in order to test hypotheses about defect introduction and help construct a better science of software engineering.
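As a hedged illustration of the kind of single-issue, size-based model criticized above, the short Python sketch below fits defect counts against module size (KLOC) with ordinary least squares and predicts defects for a new module. The module sizes, defect counts, and the 4 KLOC query are hypothetical; a model of this form still ignores the defect/failure relationship discussed above.

# Minimal sketch: a size-only defect prediction model (hypothetical data).
def fit_linear(xs, ys):
    # Ordinary least squares for y = a + b * x.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

sizes = [1.2, 3.5, 0.8, 5.0, 2.1]   # module sizes in KLOC (hypothetical history)
defects = [4, 11, 2, 17, 7]         # defects found in test for those modules

a, b = fit_linear(sizes, defects)
predicted = a + b * 4.0             # expected defects for a hypothetical 4 KLOC module
print(f"defects ~= {a:.2f} + {b:.2f} * KLOC; prediction for 4 KLOC: {predicted:.1f}")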

The systems development life cycle is a project management technique that divides complex projects into smaller, more easily managed segments or phases. Segmenting projects allows managers to verify the successful completion of project phases before allocating resources to subsequent phases.

Software development projects typically include initiation, planning, design, development, testing, implementation, and maintenance phases. However, the phases may be divided differently depending on the organization involved. For example, initial project activities might be designated as request, requirements-definition, and planning phases, or initiation, concept-development, and planning phases. End users of the system under development should be involved in reviewing the output of each phase to ensure the system is being built to deliver the needed functionality.

Note: Examiners should focus their assessments of development, acquisition, and maintenance activities on the effectiveness of an organization's project management techniques. Reviews should be centered on ensuring the depth, quality, and sophistication of a project management technique are commensurate with the characteristics and risks of the project under review.

INITIATION PHASE 

Careful oversight is required to ensure projects support strategic business objectives and resources are effectively implemented into an organization's enterprise architecture. The initiation phase begins when an opportunity to add, improve, or correct a system is identified and formally requested through the presentation of a business case. The business case should, at a minimum, describe a proposal's purpose, identify expected benefits, and explain how the proposed system supports one of the organization's business strategies. The business case should also identify alternative solutions and detail as many informational, functional, and network requirements as possible.

The presentation of a business case provides a point for managers to reject a proposal before they allocate resources to a formal feasibility study. When evaluating software development requests (and during subsequent feasibility and design analysis), management should consider input from all affected parties. Management should also closely evaluate the necessity of each requested functional requirement. A single software feature approved during the initiation phase can require several design documents and hundreds of lines of code. It can also increase testing, documentation, and support requirements. Therefore, the initial rejection of unnecessary features can significantly reduce the resources required to complete a project.

If provisional approval to initiate a project is obtained, the request documentation serves as a starting point to conduct a more thorough feasibility study. Completing a feasibility study requires management to verify the accuracy of the preliminary assumptions and identify resource requirements in greater detail.

Primary issues organizations should consider when compiling feasibility study support documentation include:

Business Considerations:

Strategic business and technology goals and objectives;

Expected benefits measured against the value of current technology;

Potential organizational changes regarding facilities or the addition/reduction of end users, technicians, or managers;

Budget, scheduling, or personnel constraints; and

Potential business, regulatory, or legal issues that could impact the feasibility of the project.

Functional Requirements:

End-user functional requirements;

Internal control and information security requirements;

Operating, database, and backup system requirements (type, capacity, performance);

Connectivity requirements (stand-alone, Local Area Network, Wide Area Network, external);

Network support requirements (number of potential users; type, volume, and frequency of data transfers); and

Interface requirements (internal or external applications).

Project Factors:

Project management methodology;

Risk management methodology;

Estimated completion dates of projects and major project phases; and

Estimated costs of projects and major project phases.

Cost/Benefit Analysis:

Expected useful life of the proposed product;

 Alternative solutions (buy vs. build);

Nonrecurring project costs (personnel, hardware, software, and overhead);

Recurring operational costs (personnel, maintenance, telecommunications, and overhead);

Tangible benefits (increased revenues, decreased costs, return-on-investments); and

Intangible benefits (improved public opinion or more useful information). A brief worked example of this cost/benefit comparison follows the list.
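As referenced above, the cost/benefit items can be tied together with simple arithmetic. The Python sketch below, using purely hypothetical figures, nets nonrecurring and recurring costs against tangible benefits over an assumed useful life; it illustrates the comparison, not a prescribed financial model.

# Hypothetical cost/benefit comparison over an assumed useful life.
useful_life_years = 5
nonrecurring_costs = 250_000          # personnel, hardware, software, overhead
recurring_costs_per_year = 60_000     # maintenance, telecommunications, overhead
tangible_benefits_per_year = 140_000  # increased revenues and decreased costs

total_cost = nonrecurring_costs + recurring_costs_per_year * useful_life_years
total_benefit = tangible_benefits_per_year * useful_life_years
net_benefit = total_benefit - total_cost  # 700_000 - 550_000 = 150_000

print(f"total cost {total_cost}, total benefit {total_benefit}, net {net_benefit}")
# Intangible benefits (e.g., improved public opinion) are weighed qualitatively.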

The feasibility support documentation should be compiled and submitted for senior management or board study. The feasibility study document should provide an overview of the proposed project and identify expected costs and benefits in terms of economic, technical, and operational feasibility. The document should also describe alternative solutions and include a recommendation for approval or rejection. The document should be reviewed and signed off on by all affected parties. If approved, management should use the feasibility study and support documentation to begin the planning phase.

PLANNING PHASE

The planning phase is the most critical step in completing development, acquisition, and maintenance projects. Careful planning, particularly in the early stages of a project, is necessary to coordinate activities and manage project risks effectively. The depth and formality of project plans should be commensurate with the characteristics and risks of a given project.

Project plans refine the information gathered during the initiation phase by further identifying the specific activities and resources required to complete a project. A critical part of a project manager's job is to coordinate discussions between user, audit, security, design, development, and network personnel to identify and document as many functional, security, and network requirements as possible.

Primary items organizations should address in formal project plans include:

Project Overview – Project overviews provide an outline of the project plan. Overviews should identify the project, project sponsors, and project managers; and should describe project goals, background information, and development strategies.

Roles and Responsibilities – Project plans should define the primary responsibilities of key personnel, including project sponsors, managers, and team members. Additionally, project plans should identify the responsibilities of third-party vendors and internal audit, security, and network personnel.

Communication – Defined communication techniques enhance project efficiencies. Therefore, management should establish procedures for gathering and disseminating information. Standard report forms, defined reporting requirements, and established meeting schedules facilitate project communications.

Management should establish acceptance criteria for each project phase. Management should also establish appropriate review and approval procedures to ensure project teams complete all phase requirements before moving into subsequent phases.

Defined Deliverables – Clearly defined expectations are a prerequisite for successfully completing projects. Representatives from all departments involved in, or affected by, a project should assist in defining realistic project objectives; accurate informational, functional, and interface requirements; and objective acceptance criteria.

Control Requirements – An essential part of the planning process involves designing and building automated control and security features into applications. Identifying all required features, and exactly where they should be placed, is not always possible during initial project phases. However, management should consider security and control issues throughout a project's life cycle and include those features in applications as early as possible.

Risk Management – Managing risks is an important part of the project planning process. Organizations should establish procedures to ensure managers appropriately assess, monitor, and manage internal and external risks throughout a project's life cycle. The procedures should include risk acceptance, mitigation, and/or transfer strategies.

External risks include issues such as vendor failures, regulatory changes, and natural disasters. Internal risks include items that affect budgets, such as inaccurate cost forecasting or changing functional requirements; scheduling difficulties, such as unexpected personnel changes or inaccurate development assumptions; and work flow challenges, such as weak communication or inexperienced project managers.

Change Management – Personnel often request the addition or modification of functional requirements during software development projects. Although the addition or modification of requirements may be appropriate, standards should be in place to control changes in order to minimize disruptions to the development process. Project managers should establish cut-off dates after which they defer requested changes to subsequent versions. Additionally, representatives from the same departments involved in establishing requirements should be involved in evaluating and approving proposed changes. Large, complex, or mission-critical projects should include formal change management procedures.

Standards – Project plans should reference applicable standards relating to project oversight activities, system controls, and quality assurance. Oversight standards should address project methodology selections, approval authorities, and risk management procedures. System controls standards should address functional, security, and automated-control requirements. Quality assurance standards should address the validity of project assumptions, adherence to project standards, and testing of a product's overall performance. Management should review, approve, and document deviations from established standards.

Documentation – Project plans should identify the type and level of documentation personnel must produce during each project phase. For instance, personnel should document project objectives, system requirements, and development strategies during the initiation phase. The documentation should be revised as needed throughout the project. For example, preliminary user, operator, and maintenance manuals created during the design phase should be revised during the development and testing phases, and finalized during the implementation phase.

Scheduling – Management should identify and schedule major project phases and the tasks to be completed within each phase. Due to the uncertainties involved with estimating project requirements, management should build flexibility into project schedules. However, the amount of flexibility built into schedules should decline as projects progress and requirements become more defined.

Budget – Managers should develop initial budget estimations of overall project costs so they can determine if projects are feasible. Managers should monitor the budgets throughout a project and adjust them if needed; however, they should retain a baseline budget for post-project analysis. In addition to budgeting personnel expenses and outsourced activities, it is important to include the costs associated with project overhead such as office space, hardware, and software used during the project.

Testing – Management should develop testing plans that identify testing requirements and schedule testing procedures throughout the initial phases of a project. End users, designers, developers, and system technicians may be involved in the testing process.

Staff Development – Management should develop training plans that identify training requirements and schedule training procedures to ensure employees are able to use and maintain an application after implementation.

DESIGN PHASE 

The design phase involves converting the informational, functional, and network requirements identified during the initiation and planning phases into unified design specifications that developers use to script programs during the development phase. Program designs are constructed in various ways. Using a top-down approach, designers first identify and link major program components and interfaces, then expand design layouts as they identify and link smaller subsystems and connections. Using a bottom-up approach, designers first identify and link minor program components and interfaces, then expand design layouts as they identify and link larger systems and connections.

Contemporary design techniques often use prototyping tools that build mock-up designs of items such as application screens, database layouts, and system architectures. End users, designers, developers, database managers, and network administrators should review and refine the prototyped designs in an iterative process until they agree on an acceptable design. Audit, security, and quality assurance personnel should be involved in the review and approval process.

Management should be particularly diligent when using prototyping tools to develop automated controls. Prototyping can enhance an organization's ability to design, test, and establish controls. However, employees may be inclined to resist adding controls, even though they are needed, after the initial designs are established.

Designers should carefully document completed designs. Detailed documentation enhances a programmer's ability to develop programs and modify them after they are placed in production. The documentation also helps management ensure final programs are consistent with original goals and specifications.

Organizations should create initial testing, conversion, implementation, and training plans during the design phase. Additionally, they should draft user, operator, and maintenance manuals.

Application Control Standards 

Application controls include policies and procedures associated with user activities and the automated controls designed into applications. Controls should be in place to address both batch and on-line environments. Standards should address procedures to ensure management appropriately approves and controls overrides. Refer to the IT Handbook's "Operations Booklet" for details relating to operational controls.

Designing appropriate security, audit, and automated controls into applications is a challenging task. Often, because of the complexity of data flows, program logic, client/server connections, and network interfaces, organizations cannot identify the exact type and placement of the features until interrelated functions are identified in the design and development phases. However, the security, integrity, and reliability of an application are enhanced if management considers security, audit, and automated control features at the onset of a project and includes them as soon as possible in application and system designs. Adding controls late in the development process or when applications are in production is more expensive and time consuming, and usually results in less effective controls.

Standards should be in place to ensure end users, network administrators, auditors, and security personnel are appropriately involved during initial project phases. Their involvement enhances a project manager's ability to define and incorporate security, audit, and control requirements. The same groups should be involved throughout a project's life cycle to assist in refining and testing the features as projects progress.

Input Controls

Automated input controls help ensure employees accurately input information, systems properly record input, and systems either reject, or accept and record, input errors for later review and correction. Examples of automated input controls include:

Check Digits – Check digits are numbers produced by mathematical calculations performed on input data such as account numbers. The calculation confirms the accuracy of input by verifying the calculated number against other data in the input data, typically the final digit.

Completeness Checks – Completeness checks confirm that blank fields are not input and that cumulative input matches control totals.

Duplication Checks – Duplication checks confirm that duplicate information is not input.

Limit Checks – Limit checks confirm that a value does not exceed predefined limits.

Range Checks – Range checks confirm that a value is within a predefined range of parameters.

Reasonableness Checks – Reasonableness checks confirm that a value meets predefined criteria.

Sequence Checks – Sequence checks confirm that a value is sequentially input or processed.

Validity Checks – Validity checks confirm that a value conforms to valid input criteria. (A minimal code sketch of several of these checks follows this list.)
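To make a few of these checks concrete, here is a minimal Python sketch of a check digit test (using the widely known Luhn algorithm as one common scheme), a limit check, and a range check. The account number, limit, and range values are hypothetical, and real systems would apply these controls inside the input-handling application.

# Minimal sketches of common automated input controls (hypothetical values).

def luhn_check_digit_valid(account_number):
    # Check digit control: validates the final digit with the Luhn algorithm,
    # one common check-digit scheme for account and card numbers.
    digits = [int(d) for d in account_number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def limit_check(value, limit):
    # Limit check: the value must not exceed a predefined limit.
    return value <= limit

def range_check(value, low, high):
    # Range check: the value must fall within predefined parameters.
    return low <= value <= high

print(luhn_check_digit_valid("79927398713"))  # True for this well-known test number
print(limit_check(5000, 10000))               # True: within the limit
print(range_check(150, 0, 100))               # False: outside the range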

Processing Controls 

Automated processing controls help ensure systems accurately process and record information and either reject, or process and record, errors for later review and correction. Processing includes merging files, modifying data, updating master files, and performing file maintenance. Examples of automated processing controls include:

Batch Controls – Batch controls verify processed run totals against input control totals. Batches are verified against various items such as total dollars, items, or documents processed (a minimal sketch of batch total verification follows this list).

Error Reporting – Error reports identify items or batches that include errors. Items or batches with errors are withheld from processing, posted to a suspense account until corrected, or processed and flagged for later correction.

Transaction Logs – Users verify logged transactions against source documents. Administrators use transaction logs to track errors, user actions, resource usage, and unauthorized access.

Run-to-Run Totals – Run-to-run totals compiled during input, processing, and output stages are verified against each other.

Sequence Checks – Sequence checks identify or reject missing or duplicate entries.

Interim Files – Operators revert to automatically created interim files to validate the accuracy, validity, and completeness of processed data.

Backup Files – Operators revert to automatically created master-file backups if transaction processing corrupts the master file.
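As referenced under Batch Controls above, the Python sketch below shows, with hypothetical documents and amounts, how batch totals and run-to-run totals might be verified: totals computed after a processing stage are compared with the control totals captured at input.

# Hypothetical batch control: compare processed totals with input control totals.
input_batch = [
    {"doc": "D001", "amount": 120.00},
    {"doc": "D002", "amount": 75.50},
    {"doc": "D003", "amount": 310.25},
]

# Control totals captured when the batch was input.
input_control = {"count": len(input_batch),
                 "amount": sum(r["amount"] for r in input_batch)}

# Simulated processing step (e.g., posting the documents to a master file).
processed = [dict(r) for r in input_batch]
run_totals = {"count": len(processed),
              "amount": sum(r["amount"] for r in processed)}

# Run-to-run / batch verification: totals must agree between stages.
if run_totals == input_control:
    print("Batch accepted:", run_totals)
else:
    print("Batch rejected; control totals do not match:", input_control, run_totals)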

Output Controls 

Automated output controls help ensure systems securely maintain and properly distribute processed information. Examples of automated output controls include:

Batch Logs – Batch logs record batch totals. Recipients of distributed output verify the output against processed batch log totals.

Distribution Controls – Distribution controls help ensure output is only distributed to authorized individuals. Automated distribution lists and access restrictions on information stored electronically or spooled to printers are examples of distribution controls.

Destruction Controls – Destruction controls help ensure electronically distributed and stored information is destroyed appropriately by overwriting outdated information or demagnetizing (degaussing) disks and tapes. Refer to the IT Handbook's "Information Security Booklet" for more information on disposal of media.

DEVELOPMENT PHASE 

The development phase involves converting design specifications into executable programs. Effective development standards include requirements that programmers and other project participants discuss design specifications before programming begins. The procedures help ensure programmers clearly understand program designs and functional requirements.

Programmers use various techniques to develop computer programs. The large transaction-oriented programs associated with financial institutions have traditionally been developed using procedural programming techniques. Procedural programming involves the line-by-line scripting of logical instructions that are combined to form a program.

Primary procedural programming activities include the creation and testing of source code and the refinement and finalization of test plans. Typically, individual programmers write and review (desk test) program modules or components, which are small routines that perform a particular task within an application. Completed components are integrated with other components and reviewed, often by a group of programmers, to ensure the components properly interact. The process continues as component groups are progressively integrated and as interfaces between component groups and other systems are tested.

Advancements in programming techniques include the concept of "object-oriented programming." Object-oriented programming centers on the development of reusable program routines (modules) and the classification of data types (numbers, letters, dollars, etc.) and data structures (records, files, tables, etc.). Linking pre-scripted module objects to predefined data-class objects reduces development times and makes programs easier to modify. Refer to the "Software Development Techniques" section for additional information on object-oriented programming.

Organizations should complete testing plans during the development phase. Additionally, they should update conversion, implementation, and training plans and user, operator, and maintenance manuals.

Development Standards 

Development standards should be in place to address the responsibilities of application and system programmers. Application programmers are responsible for developing and maintaining end-user applications. System programmers are responsible for developing and maintaining internal and open-source operating system programs that link application programs to system software and subsequently to hardware. Managers should thoroughly understand development and production environments to ensure they appropriately assign programmer responsibilities.

Development standards should prohibit a programmer's access to data, programs, utilities, and systems outside their individual responsibilities. Library controls can be used to manage access to, and the movement of programs between, development, testing, and production environments. Management should also establish standards requiring programmers to document completed programs and test results thoroughly. Appropriate documentation enhances a programmer's ability to correct programming errors and modify production programs.

Coding standards, which address issues such as the selection of programming languages and tools, the layout or format of scripted code, and the naming conventions of code routines and program libraries, are outside the scope of this document. However, standardized, yet flexible, coding standards enhance an organization's ability to decrease coding defects and increase the security, reliability, and maintainability of application programs. Examiners should evaluate an organization's coding standards and related code review procedures.

Library Controls 

Libraries are collections of stored documentation, programs, and data. Program libraries include reusable program routines or modules stored in source or object code formats. Program libraries allow programmers to access frequently used routines and add them to programs without having to rewrite the code. Dynamic link libraries include executable code that programs can automatically run as part of larger applications.

Library controls should include:

Automated Password Controls – Management should establish logical access controls for all libraries or objects within libraries. Establishing controls on individual objects within libraries can create security administration burdens. However, if similar objects (executable and non-executable routines, test and production data, etc.) are grouped into separate libraries, access can be granted at library levels.

Automated Library Applications – When feasible, management should implement automated library programs, which are available from equipment manufacturers and software vendors. The programs can restrict access at library or object levels and produce reports that identify who accessed a library and what, if any, changes were made.

Version Controls 

Library controls facilitate software version controls. Version controls provide a means to systematically retain chronological copies of revised programs and program documentation.

Development version control systems, sometimes referred to as concurrent version systems, assist organizations in tracking different versions of source code during development. The systems do not simply identify and store multiple versions of source code files. They maintain one file and identify and store only changed code. When a user requests a particular version, the system recreates that version. Concurrent version systems facilitate the quick identification of programming errors. For example, if programmers install a revised program on a test server and discover programming errors, they only have to review the changed code to identify the error.
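As a hedged illustration of the "store only changed code" behavior described above, the Python sketch below keeps a base version of a file plus per-version line changes and rebuilds any requested version by replaying those deltas. It is a toy model of delta storage, not the mechanism of any particular version control product; the file contents and edits are hypothetical.

# Toy delta-based version store: keep a base file and per-version line changes,
# then reconstruct a requested version by replaying the deltas in order.

base = ["def interest(balance):",
        "    return balance * 0.05"]

# Each delta maps a line index to its replacement text (hypothetical edits).
deltas = {
    2: {1: "    return balance * 0.04"},            # version 2: rate change
    3: {0: "def interest(balance, rate):",          # version 3: parameterize the rate
        1: "    return balance * rate"},
}

def reconstruct(version):
    lines = list(base)                  # version 1 is the stored base file
    for v in range(2, version + 1):
        for index, text in deltas.get(v, {}).items():
            lines[index] = text         # replay only the changed lines
    return lines

print("\n".join(reconstruct(3)))
# Reviewing deltas[3] alone shows exactly what changed in version 3, which is
# why only the changed code needs review when a new error appears.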

Software Documentation

Organizations should maintain detailed documentation for each application and application system in production. Thorough documentation enhances an organization's ability to understand functional, security, and control features and improves its ability to use and maintain the software. The documentation should contain detailed application descriptions, programming documentation, and operating instructions. Standards should be in place that identify the type and format of required documentation, such as system narratives, flowcharts, and any special system coding, internal controls, or file layouts not identified within individual application documentation.

Management should maintain documentation for internally developed programs and externally acquired products. In the case of acquired software, management should ensure (either through an internal review or third-party certification), prior to purchase, that an acquired product's documentation meets their organization's minimum documentation standards. For additional information regarding acquired software distinctions (open/closed code), refer to the "Escrowed Documentation" discussion in the "Acquisition" section.

Examiners should consider access and change controls when assessing documentation activities. Change controls help ensure organizations appropriately approve, test, and record software modifications. Access controls help ensure individuals only have access to sections of documentation directly related to their job functions.

System documentation should include:

System Descriptions – System descriptions provide narrative explanations of operating environments and the interrelated input, processing, and output functions of integrated application systems.

System Documentation – System documentation includes system flowcharts and models that identify the source and type of input information, processing and control actions (automated and manual), and the nature and location of output information.

System File Layouts – System file layouts describe collections of related records generated by individual processing applications. For example, personnel may need system file layouts to describe interim files, such as sorted deposit transaction files, in order to further define master file processing requirements.

 Application documentation should include:

Application Descriptions – Application descriptions provide narrative explanations of the purpose of an application and provide overviews of data input, processing, and output functions.

Layouts – Layouts represent the format of stored and displayed information such as database layouts, screen displays, and hardcopy information.

Program Documentation – Program documentation details specific data input, processing, and output instructions, and should include documentation on system security. Program listings/source code and related narrative comments are the most basic items in program documentation and consist of technical programming scripts and non-technical descriptions of the scripts. It is important that developers update the listings and comment documentation when they modify programs. Many software development tools are available that automatically create source listings and narrative descriptions.

Traditionally, designers and developers have used flowcharts to present pictorial views of the sequencing of procedural programs such as COBOL and Assembler. Flowcharts provide a practical way to illustrate complex programs and routines. Flowcharting software is available that can automatically chart programs or enable programmers to chart programs dynamically without the need to draw them manually.

Programming techniques, such as object-oriented programming, have contributed to the use of dynamic flowcharting products. Maintaining detailed documentation of object-oriented code is particularly important because a primary benefit of the programming technique is the reuse of program objects.

Naming Conventions – Naming conventions are a critical part of program documentation. Software programs are comprised of many lines of code, usually arranged hierarchically into small groups of code (modules, subroutines, or components), that perform individual functions within an application. Programmers should name and document the modules and any related subroutines, databases, or programs that interact with an application. Standardized naming conventions allow programmers to link subroutines into a unified program efficiently and facilitate technicians' and programmers' ability to understand and modify programs.

Operator Instructions – Organizations should establish operator instructions regarding all processing applications. The guidance should explain how to perform particular jobs, including how operators should respond to system requests or interrupts. The documentation should only include information pertinent to the computer operator's function. Program documentation such as source listings, record layouts, and program flowcharts should not be accessible to an operator. Operator instructions should be thorough enough to permit an experienced operator who is unfamiliar with the application to run a program successfully without assistance.

End-User Instructions – Organizations should establish end-user instructions that describe how to use an application. Operation manuals, online help features, and system error messages are forms of instructions that assist individuals in using applications and responding to problems.

TESTING PHASE 

The testing phase requires organizations to complete various tests to ensure the accuracy of programmed code, the inclusion of expected functionality, and the interoperability of applications and other network components. Thorough testing is critical to ensuring systems meet organizational and end-user requirements.

If organizations use effective project management techniques, they will complete test plans while developing applications, prior to entering the testing phase. Weak project management techniques or demands to complete projects quickly may pressure organizations to develop test plans at the start of the testing phase. Test plans created during initial project phases enhance an organization's ability to create detailed tests. The use of detailed test plans significantly increases the likelihood that testers will identify weaknesses before products are implemented.

Testing groups are comprised of technicians and end users who are responsible for assembling and loading representative test data into a testing environment. The groups typically perform tests in stages, either from a top-down or bottom-up approach. A bottom-up approach tests smaller components first and progressively adds and tests additional components and systems. A top-down approach first tests major components and connections and progressively tests smaller components and connections. The progression and definitions of completed tests vary between organizations.

Bottom-up tests often begin with functional (requirements-based) testing. Functional tests should ensure that expected functional, security, and internal control features are present and operating properly. Testers then complete integration and end-to-end testing to ensure application and system components interact properly. Users then conduct acceptance tests to ensure systems meet defined acceptance criteria.

Testers often identify program defects or weaknesses during the testing process. Procedures should be in place to ensure programmers correct defects quickly and document all corrections or modifications. Correcting problems quickly increases testing efficiency by decreasing testers' downtime. It also ensures a programmer does not waste time trying to debug a defect-free portion of a program that is not working only because another programmer has not yet debugged a defective linked routine. Documenting corrections and modifications is necessary to maintain the integrity of the overall program documentation.

Organizations should review and complete user, operator, and maintenance manuals during the testing phase. Additionally, they should finalize conversion, implementation, and training plans.

Primary tests include:

Acceptance Testing – End users perform acceptance tests to assess the overall functionality and interoperability of an application.

End-to-End Testing – End users and system technicians perform end-to-end tests to assess the interoperability of an application and other system components such as databases, hardware, software, or communication devices.

Functional Testing – End users perform functional tests to assess the operability of a program against predefined requirements. Functional tests include black box tests, which assess the operational functionality of a feature against predefined expectations, and white box tests, which assess the functionality of a feature's code.

Integration Testing – End users and system technicians perform integration tests to assess the interfaces of integrated software components.

Parallel Testing – End users perform parallel tests to compare the output of a new application against a similar, often the original, application.

Regression Testing – End users retest applications to assess functionality after programmers make code changes to previously tested applications.

Stress Testing – Technicians perform stress tests to assess the maximum limits of an application.

String Testing – Programmers perform string tests to assess the functionality of related code modules.

System Testing – Technicians perform system tests to assess the functionality of an entire system.

Unit Testing – Programmers perform unit tests to assess the functionality of small modules of code (a minimal example follows this list).
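To ground the Unit Testing item above, here is a minimal, hedged Python example using the standard unittest module: a small module of code (a hypothetical limit_check routine like the input control sketched earlier) is exercised against expected results. The function and test values are illustrative only.

import unittest

def limit_check(value, limit):
    # Small module under test: the value must not exceed a predefined limit.
    return value <= limit

class LimitCheckTest(unittest.TestCase):
    def test_within_limit(self):
        self.assertTrue(limit_check(9999, 10000))

    def test_exceeds_limit(self):
        self.assertFalse(limit_check(10001, 10000))

    def test_boundary_value(self):
        # Boundary condition: a value equal to the limit is accepted.
        self.assertTrue(limit_check(10000, 10000))

if __name__ == "__main__":
    unittest.main()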

IMPLEMENTATION PHASE 

The implementation phase involves installing approved applications into production environments. Primary tasks include announcing the implementation schedule, training end users, and installing the product. Additionally, organizations should input and verify data, configure and test system and security parameters, and conduct post-implementation reviews. Management should circulate implementation schedules to all affected parties and should notify users of any implementation responsibilities.

After organizations install a product, pre-existing data is manually input or electronically transferred to the new system. Verifying the accuracy of the input data and security configurations is a critical part of the implementation process. Organizations often run a new system in parallel with an old system until they verify the accuracy and reliability of the new system. Employees should document any programming, procedural, or configuration changes made during the verification process.

PROJECT EVALUATION 

Management should conduct post-implementation reviews at the end of a project to validate the completion of project objectives and assess project management activities. Management should interview all personnel actively involved in the operational use of a product and document and address any identified problems.

Management should analyze the effectiveness of project management activities by comparing, among other things, planned and actual costs, benefits, and development times. They should document the results and present them to senior management. Senior management should be informed of any operational or project management deficiencies.

MAINTENANCE PHASE 

The maintenance phase involves making changes to hardware, software, and documentation to support their operational effectiveness. It includes making changes to improve a system's performance, correct problems, enhance security, or address user requirements. To ensure modifications do not disrupt operations or degrade a system's performance or security, organizations should establish appropriate change management standards and procedures.

Change management (sometimes referred to as configuration management) involves establishing baseline versions of products, services, and procedures and ensuring all changes are approved, documented, and disseminated. Change controls should address all aspects of an organization's technology environment, including software programs, hardware and software configurations, operational standards and procedures, and project management activities. Management should establish change controls that address major, routine, and emergency software modifications and software patches.

Major modifications involve significant changes to a system's functionality. Management should implement major modifications using a well-structured process, such as an SDLC methodology.

Routine changes are not as complex as major modifications and can usually be implemented in the normal course of business. Routine change controls should include procedures for requesting, evaluating, approving, testing, installing, and documenting software modifications.

Emergency changes may address an issue that would normally be considered routine; however, because of security concerns or processing problems, the changes must be made quickly. Emergency change controls should include the same procedures as routine change controls. Management should establish abbreviated request, evaluation, and approval procedures to ensure they can implement changes quickly. Detailed evaluations and documentation of emergency changes should be completed as soon as possible after changes are implemented. Management should test routine and, whenever possible, emergency changes prior to implementation and quickly notify affected parties of all changes. If management is unable to thoroughly test emergency modifications before installation, it is critical that they appropriately back up files and programs and have established back-out procedures in place.

Software patches are similar in complexity to routine modifications. This document uses the term "patch" to describe program modifications involving externally developed software packages. However, organizations with in-house programming may also refer to routine software modifications as patches. Patch management programs should address procedures for evaluating, approving, testing, installing, and documenting software modifications. However, a critical part of the patch management process involves maintaining an awareness of external vulnerabilities and available patches.

Maintaining accurate, up-to-date hardware and software inventories is a critical part of all change management processes. Management should carefully document all modifications to ensure accurate system inventories. (If material software patches are identified but not implemented, management should document the reason why the patch was not installed.)

Management should coordinate all technology-related changes through an oversight committee and assign an appropriate party responsibility for administering software patch management programs. Quality assurance, security, audit, regulatory compliance, network, and end-user personnel should be appropriately included in change management processes. Risk and security reviews should be done whenever a system modification is implemented to ensure controls remain in place.

DISPOSAL PHASE 

The disposal phase involves the orderly removal of surplus or obsolete hardware, software, or data. Primary tasks include the transfer, archiving, or destruction of data records. Management should transfer data from production systems in a planned and controlled manner that includes appropriate backup and testing procedures. Organizations should maintain archived data in accordance with applicable record retention requirements. They should also archive system documentation in case it becomes necessary to reinstall a system into production. Management should destroy data by overwriting old information or degaussing (demagnetizing) disks and tapes. Refer to the IT Handbook's "Information Security Booklet" for more information on disposal of media.

Large-scale integrated systems are comprised of multiple applications that operate as an integrated unit. The systems are designed to use compatible programming languages, operating systems, and communication protocols to enhance interoperability and ease maintenance requirements.

Effectively implementing large-scale integrated systems is a complex task that requires considerable resources, strong project and risk management techniques, and a long-term commitment from corporate boards and management.

Although the anticipated benefits of integrated systems may be compelling, there are significant challenges associated with the development process. Some organizations underestimate the demands of such projects, incur significant financial losses, and ultimately abandon the projects.

Examiners encountering organizations that are implementing large-scale integrated systems must thoroughly review all life cycle procedures. The reviews should include an assessment of the board's understanding of project requirements and commitment to the project. Examiners must also closely review the qualifications of the project manager, the adequacy of project plans, and the sufficiency of risk management procedures.

OBJECT-ORIENTED PROGRAMMING 

Traditionally, programmers wrote programs using sequential lines of code in a high-level, procedural language such as COBOL. Programmers also write object-oriented programs in high-level languages such as C++ and Java; however, the programs are written less sequentially.

Object-oriented programming centers on the development of small, reusable program routines (modules) that are linked together and to other objects to form a program. A key component of object-oriented programming involves the classification (modeling) of related data types (numbers, letters, dollars, etc.) and structures (records, files, tables, etc.). Modeling allows programmers to link reusable program modules to modeled data classes. Linking pre-developed program modules to defined data classes reduces development times and makes programs easier to modify.
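As a brief, hedged illustration of the reuse described above (written in Python rather than the C++ or Java named in the text, purely for compactness), the sketch below models a simple data class and a reusable routine that works with any object of that class. The Account class and its fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Account:
    # Modeled data class: classifies the related data types of an account record.
    number: str
    holder: str
    balance: float

def apply_interest(account, rate):
    # Reusable program routine: works with any Account object, so it can be
    # linked into several applications without rewriting the logic.
    return Account(account.number, account.holder, account.balance * (1 + rate))

savings = Account("12345", "A. Customer", 1000.00)
print(apply_interest(savings, 0.03))  # Account(number='12345', ..., balance=1030.0)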

Programmers use various methods to define and link reusable objects. Initially, structured programming techniques were developed that focused on the arrangement of lines of procedural code. The techniques enhanced the layout of a program's modules and overall design, making it easier to integrate and reuse program modules. Object-oriented programming employs a form of structured programming and adds methods for defining the data classes that are linked to structured program routines.

A drawback to the use of structured programming, and consequently of methods such as object-oriented programming that use structured programming techniques, has been a lack of standardized coding procedures. The lack of standardized procedures restricts the interoperability of proprietary products, including automated design and development products sometimes referred to as computer-aided software engineering (CASE) tools. However, the software industry is moving towards acceptance of standardized object-oriented modeling protocols.

RAPID APPLICATION DEVELOPMENT (RAD)

Depending on an organization's risk tolerance and the mission-critical designation of an application, organizations may use RAD techniques during the design and development phases within a structured development methodology.

Management should consider the functional complexity and security risks of an application when selecting a development methodology. Management should ensure that the development technique selected is appropriate for managing the complexity and risks of the application being developed.

The RAD process may involve only three phases: initiation, development, and implementation. The short duration of RAD projects necessitates the quick identification of functional requirements, which should remain largely unchanged during the development process. Typically, managers assign end users full time to a project and empower them to make key design decisions. However, they normally consult project specialists, such as database administrators, network technicians, and system programmers, when they make key decisions.

End users and RAD team members usually create functional designs using prototyping tools in an iterative process. They create, review, and revise the prototypes as needed until they approve a final design.

RAD methodologies often employ object-oriented programming techniques that take advantage of reusable program objects. In the case of redesigned applications, designers and developers select objects based on the function they perform and re-link them to other reusable or newly created objects to form a new application.

Testing often occurs concurrently with the development of functional features. Organizations may conduct testing and implementation procedures quickly or in a structured process similar to SDLC testing and implementation phases. The speed and structure of testing and implementation procedures depend on an organization's risk tolerance, an application's mission-critical designation, and a project's established deadlines.

Developers are increasingly creating web applications using RAD techniques. Numerous CASE-type products (for example, application program interfaces) are available that developers can use to combine pre-designed object-oriented functions to form unified applications.

 Appropriate standards and controls should be in place to ensure:

Organizations only use RAD techniques when appropriate;

Management includes adequate security and control features in all developed applications;

Quality assurance personnel verify that security and control features (commensurate with risk levels) exist and function as intended;

End users are appropriately involved throughout RAD projects; and

Project managers closely monitor project activities.

Databases contain stored information and are structured in various ways. Legacy systems often use hierarchical or networked structures. Hierarchical databases are structured similarly to an organizational chart. Lower data elements are dependent on (communicate data with) a single data element above them. Each data element can have multiple elements linked below it, but only one link to a data element above it. Networked databases are similar to hierarchical databases, but data elements can link to multiple elements above or below them.

Relational databases, which are currently the most prevalent type of database, are organized as tables with data structured in rows and columns. (Data paths are not predefined because relationships are defined at the data value level.) Rows (or records) contain information that relates to a single subject, such as a customer, employee, or vendor. Columns (or fields) contain information related to each subject, such as customer identification numbers, dates of birth, or business addresses. Each record (item in a row) links to a corresponding field element (item in a column) that is defined as the primary key. Primary keys are the main identifiers associated with stored information and facilitate access to the information. In certain situations, the primary key may be comprised of data from more than one field.

Relational databases are usually comprised of multiple tables, which may reside on a single server or in a distributed environment. If related records are stored in multiple tables (for example, if a customer's primary information is maintained at a customer service center and the customer's deposit account information is maintained in a second table/database at a local branch), the same primary key must be used in both tables to ensure data integrity. The keys in secondary tables, however, are referred to as foreign keys.

Object-relational databases have been developed that apply ad hoc object-oriented protocols in relational database environments. Definitive standards do not currently exist to support the wide acceptance of strictly object-oriented databases and database management systems. However, various proprietary standards do exist, and organizations are attempting to develop standardized object-oriented database protocols.
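The primary key / foreign key relationship described above can be made concrete. The Python sketch below uses the standard sqlite3 module with hypothetical customer and deposit-account tables: the customer's primary key reappears in the second table as a foreign key so the two tables can be joined consistently.

import sqlite3

# In-memory relational database with two related tables (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""CREATE TABLE customer (
    customer_id TEXT PRIMARY KEY,   -- primary key: main identifier
    name        TEXT,
    address     TEXT)""")
conn.execute("""CREATE TABLE deposit_account (
    account_id  TEXT PRIMARY KEY,
    customer_id TEXT REFERENCES customer(customer_id),  -- foreign key
    balance     REAL)""")

conn.execute("INSERT INTO customer VALUES ('C100', 'A. Customer', '1 Main St')")
conn.execute("INSERT INTO deposit_account VALUES ('D500', 'C100', 2500.00)")

# Joining on the shared key keeps the related records consistent across tables.
rows = conn.execute("""SELECT c.name, d.account_id, d.balance
                       FROM customer c JOIN deposit_account d
                       ON c.customer_id = d.customer_id""").fetchall()
print(rows)  # [('A. Customer', 'D500', 2500.0)]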

DATABASE MANAGEMENT SYSTEMS 

Database management systems (DBMS) are software programs that control a database user's access and modification rights. The systems also facilitate referential integrity (by managing cross references between primary and foreign key relationships), support data import and export functions, and provide backup and recovery services.

Database management systems may also provide access to data dictionaries. Data dictionaries are documentation tools that store descriptions of the structure and format of data and data tables. Advanced data dictionaries may store source code copies of field, record, and code descriptions for use during software design and development activities.

Primary issues to consider when reviewing the design and configuration of database management systems include access controls and auditing features. Management should restrict direct (privileged) access to a database (as opposed to accessing information through an application) to authorized personnel.

Most DBMS have a journaling feature that allows organizations to track data changes. Journaling provides audit trails of data changes and facilitates the safe recovery of data if errors occur. If available, organizations should employ automated auditing tools, such as journaling, that identify who accessed or attempted to access a database and what, if any, data was changed. Many DBMS can validate users at record and row levels and log their activities. The detailed validation levels provide strong security controls. Examiners should consider validation levels when assessing the adequacy of DBMS controls. Strong DBMS controls include data-change logs, input validity checks, locking and rollback mechanisms (the ability to recover a previous database if the database becomes corrupted), and password and data file encryption. System developers should consider incorporating these types of security features when designing databases. If strong controls or auditing features are unavailable, management should implement compensating controls such as segregation of duties or dual controls.

Acquisition projects are similar to development projects because management approves project requests, defines functional, security, and system requirements, and appropriately tests and implements products. Organizations often employ structured acquisition methodologies similar to the SDLC when acquiring significant hardware and software products. However, organizations replace the SDLC design and development phases with a bid solicitation process that involves developing detailed lists of functional, security, and system requirements and distributing them to third parties. The "Acquisition Project Guidance" discussion below centers on the specific activities associated with acquisition projects. Refer to the Project Management and Development sections for additional details relating to general life cycle phase information.

In addition to developing and distributing detailed lists of functional, security, and system requirements, organizations should establish vendor selection criteria and review potential vendors' financial strength, support levels, security controls, etc., prior to obtaining products or services. Additionally, management reviews contracts and licensing agreements to ensure the rights and responsibilities of each party are clear and equitable. Primary risks include inadequately defining requirements, ineffectively assessing vendors, and insufficiently reviewing contracts and agreements.

Contract and licensing issues may arise due to the complexity of contractual requirements. An organization's legal counsel should confirm that performance guarantees, source code accessibility, intellectual property considerations, and software/data security issues are appropriately addressed before management signs contracts.

Financial institutions sometimes acquire software or services from foreign-based third parties. Organizations should appropriately manage the unique risks included in these arrangements. For example, organizations should decide which country's laws will control the relationship and ensure they and their vendors comply with United States laws that restrict the export of software applications employing encryption techniques. Refer to the "Software Development Contracts and Licensing Agreements" discussion for additional details on contracts and licenses. Refer to the IT Handbook's "Outsourcing Technology Services Booklet" for additional information relating to foreign-based third-party relationships.

Management should establish acquisition standards that address the same security and reliability issues as development standards. However, acquisition standards should focus on ensuring security, reliability, and functionality are already built into a product. Acquisition standards should also ensure managers complete appropriate vendor, contract, and licensing reviews and acquire products compatible with existing systems.

Key tools in managing acquisition projects include invitations-to-tender and request-for-proposals. Invitations-to-tender involve soliciting bids from vendors when acquiring hardware or integrated systems of hardware and software. Request-for-proposals involve soliciting bids when acquiring off-the-shelf or third-party developed software. However, the terms are sometimes used interchangeably.

Management should establish acquisition standards to ensure functional, security, and operational requirements are accurately identified and clearly detailed in request-for-proposals and invitations-to-tender. The standards should also require managers to compare bids against a project's defined requirements and against each other; to review potential vendors' financial stability and commitment to service; and to obtain legal counsel reviews of contracts before management signs them.

Note: The risks associated with using general business purpose, off-the-shelf software, such as a word processing application, are typically lower than those associated with using financial applications. Therefore, the acquisition of general business purpose, off-the-shelf software typically requires less stringent evaluation procedures than acquiring hardware or software specifically designed for financial purposes. However, the level of evaluation will depend on how risky the application is and how critical it is to the institution.

Acquisition projects begin with the submission of a project request. Procedures should be in place to facilitate the request process and ensure management systematically reviews all requests. Requests should present a business case for acquiring a product, identify desired system features, and, to the extent possible, describe the information requirements, network interfaces, hardware components, and software applications that will support and interact with a new product. Management should complete a feasibility study to determine if the business case supports the procurement of either customized or off-the-shelf software. All affected parties should document their approval of the overall feasibility of the project.

Determining the feasibility of an acquisition proposal includes consideration of issues such as:

Business objectives;

Technology objectives;

Functional requirements;

Security requirements;

Internal control requirements;

Documentation requirements;

Performance requirements;

Network requirements;

System interface requirements;

Expandability requirements;

Reliability requirements;

Maintenance requirements;

Installation requirements;

Conversion requirements;

Personnel requirements;

Processing requirements;

Product development standards;

Product design standards;

Testing requirements;

Training requirements;

Vendor's financial strength;

Vendor's support levels; and

Cost/benefit analysis.

To determine the feasibility of a project, management should consult with various personnel, including those listed below. These individuals should be involved in all phases of the project as deemed appropriate depending on their role in relation to the specific system being acquired:

 Audit personnel;

Business unit managers;

Database administrators;

End users;

Legal counsel;


 Network administrators;

Network technicians;

Quality assurance personnel;

Security administrators;

Systems analysts;

Technology department managers; and

Vendor personnel.

If a request appears feasible, the feasibility study can help define the functional, system, and organizational requirements included in the request-for-proposals and invitations-to-tender that management distributes to third parties in the bid solicitation process.

After organizations receive bids, they should analyze and compare the bids against each other and against the organization's defined requirements. Vendors' proposals should clearly address all of the organization's requirements and identify any other applicable issues such as:

Software:

Confidentiality standards;

Compatible operating systems;

Copyright standards;

Delivery dates;

Escrow criteria;

Liability limitations;

Licensing restrictions;

Maintenance procedures;

Next release date;

Regulatory requirements;

Software language;

Subcontractor details;

Testing standards;

Training provided; and

Warranty specifications.

Hardware:

Backup options;

Maintenance requirements;

Memory capacities;

Performance capabilities; and

Servicing options.

Procedures should be in place to ensure organizations appropriately review bids. After the selection process narrows the list of potential vendors, management should review the financial stability and service commitment of the remaining vendors. After an organization selects a product and vendor and negotiates a contract, legal counsel should review the contract prior to signing.

Software programs are written using non-proprietary, open source code; proprietary (licensed) open source code; or proprietary, closed source code.

Non-proprietary, open source programs, sometimes referred to as free software, are written in publicly available code and can usually be used, copied, modified, etc., without restriction. Proprietary, open source programs are also written in publicly available code but are copyrighted and distributed through various licensing agreements. Management should carefully consider all licensing agreements to ensure their use, modification, or redistribution of the programs conforms to the agreements.

Proprietary, closed source programs are normally copyrighted trade secrets of the company that wrote or owns the programs. Most vendors do not release closed source code to the organizations that buy or lease the products in order to protect the integrity and copyrights of the software. An alternative to receiving the source information is to install programs in object code and establish a source code escrow agreement. In such an agreement, organizations can only access the source code under specific conditions, such as discontinued product support or financial insolvency of the vendor.

Typically, an independent third party retains the documentation in escrow; however, it is each organization's responsibility to periodically (at least annually) ensure the third party holds a current version of the source information. Often, escrow agents provide services for reviewing and confirming source code version numbers and dates. Some agents also perform automated code reviews to ensure the integrity of the escrowed code.

In addition to ensuring access to current documentation, organizations should consider protecting their escrow rights by contractually requiring software vendors to inform the organization if the software vendor pledges the software as loan collateral.

Provisions management should consider incorporating into escrow agreements include:

Definitions of minimum programming and system documentation;

Definitions of software maintenance procedures;

Conditions that must be present before an organization can access the source information;

Assurances that the escrow agent will hold current, up-to-date versions of the source programs and documentation (escrowed information must be updated whenever program changes are made);

Arrangements for auditing/testing the escrowed code;

Descriptions of the source information media (for example, magnetic tape, disc, or hard copy) and assurances that the media is operable and compatible with an organization's existing technology systems; and

Assurances that source programs will generate executable code.

SOFTWARE LICENSES - GENERAL 

Software is usually licensed, not purchased. Under licensing agreements, organizations obtain no ownership rights, even if the organization paid to have the software developed. Rather, the software developer licenses an organization certain rights to use the software. Vendors typically grant license rights for a specific time period and may require annual fees to use the software. Vendors may also provide maintenance agreements that assure they will provide new versions or releases of the software.

The most important licensing issue is the definition of the precise scope of the license. Organizations should ensure that licenses clearly state whether software usage is exclusive or non-exclusive, who or how many individuals at the organization can use the software, and whether there are any location limitations on its use. Before negotiating a license, organizations should accurately assess current and future software needs and ensure the license will meet their needs.

The license should clearly define permitted users and sites. If an organization desires a site license for an unlimited number of users at its facilities, it should ensure the contract expressly provides for this. If the organization requires other related entities to use the software, such as subsidiaries or contractors, they should also be included in the license. Organizations should also ensure they have an express license to retain and use backup copies of any mission-critical software that they may need to carry out disaster recovery or business continuity programs at remote sites.

Organizations should clearly understand the duration of the licenses to prevent unexpected license expirations. If an organization desires a perpetual license to use the software, it should ensure the contract explicitly grants such a license. Organizations should not assume the failure to specify a fixed term or termination date automatically provides them with a perpetual license.

 At a minimum, organizations should specify in their contract or license the time periods for a non-perpetual license and the minimum amount of notice required for termination.

SOFTWARE LICENSES AND COPYRIGHT VIOLATIONS 

Copyright laws protect proprietary as well as open-source software. The use of unlicensed software or violations of a licensing agreement expose organizations to possible litigation.

Management should take particular caution when purchasing software for use on a network. Some programs are not licensed for shared use, and management may be required to purchase individual copies for each network user. Additionally, some network licenses only allow a predetermined number of persons to use the programs concurrently.

Measures that organizations may employ to protect against copyright violations include obtaining a site license that authorizes software use at all organization locations, informing employees of the rules governing site licenses, and acquiring a software management program that scans for unauthorized software use or copyright violations. While these measures may help prevent copyright violations, the best control mechanism is a strict corporate policy that management and auditors communicate and enforce. Management should have an uncompromising attitude regarding copyright violations. The organization's security administrator should be responsible for monitoring and enforcing the policy.

SOFTWARE DEVELOPMENT SPECIFICATIONS AND PERFORMANCE STANDARDS 

Contracts for the development of custom software should describe and define the expected performance attributes and functionality of the software. The contract should also describe the equipment required to operate the software to ensure appropriate compatibility. Vendors should be required to meet or exceed an institution's internal development policies and standards. Therefore, before opening negotiations or issuing a request-for-proposal on custom software development, organizations should have a clear idea of the essential business needs to be addressed by the software and an adequate understanding of the organization's present and planned system architectures.

Contracts should identify and describe the functional specifications that operational software will perform and may identify functional milestones that vendors must meet during the development process. The development contract should also contain provisions that permit the modification of specifications and performance standards during the development process. Software development contracts should contain objective pre-acceptance performance standards to measure the software's functionality. The contracts may identify particular tests needed to determine whether the software complies with performance standards. The contracts may also address what actions a vendor will take if the software fails one or more tests.

DOCUMENTATION, MODIFICATION, UPDATES, AND CONVERSION 

A licensing or development agreement should require vendors to deliver appropriate software documentation. This should include both application and user documentation.

A license or separate maintenance agreement should address the availability and cost of software updates and modifications. When drafting agreements, organizations should determine if a vendor provides access to source or object code. Regardless of whether vendors limit access to object code or provide access to source code, the permission and willing participation of the vendor may be necessary to make modifications to the software. Modifications to source code may void maintenance agreements.

REGULATORY REQUIREMENTS 

Depending on the function of the specific software, organizations should consider including a regulatory requirements clause in their licensing agreements. The clause requires vendors to maintain application software so that functions are performed in compliance with applicable federal and state regulations.

PAYMENTS 

Software development contracts normally call for partial payments at specified milestones, with final payment due after completion of acceptance tests. Organizations should structure payment schedules so developers have incentives to complete the project quickly and properly. Properly defined milestones can break development projects into deliverable segments so an organization can monitor the developer's progress and identify potential problems.

Organizations should exercise caution when entering into software development contracts that base compensation on the developer's time and materials. A fixed price agreement with specific payment milestones is sometimes preferable because it provides an organization more control over the development process and total project costs.

REPRESENTATIONS AND WARRANTIES 

Organizations should seek an express representation and warranty in the software license that the licensed software does not infringe upon the intellectual property rights of any third parties worldwide. Under some state laws, non-infringement warranties are limited to the United States unless otherwise specifically provided.

Vendors should also represent and warrant that software will not contain undisclosed restrictive code or automatic restraints not specifically authorized in the agreement.

DISPUTE RESOLUTION 

Organizations should consider including dispute resolution provisions in contracts and licensing agreements. Such provisions enhance an organization's ability to resolve problems expeditiously and may provide for continued software development during a dispute resolution period.

AGREEMENT MODIFICATIONS 

Organizations should ensure software licenses clearly state that vendors cannot modify agreements without written signatures from both parties. This clause helps ensure there are no inadvertent modifications through less formal mechanisms some states may permit.

VENDOR LIABILITY LIMITATIONS 

Some vendors may propose contracts that contain clauses limiting their liability. They may attempt to add provisions that disclaim all express or implied warranties or that limit monetary damages to the value of the product itself, consideration paid, or specific liquidated damages.

Generally, courts uphold these contractual limitations on liability in commercial settings unless they are unconscionable. Therefore, if organizations are considering contracts that contain such clauses, they should consider whether the proposed damage limitation bears an adequate relationship to the amount of loss the financial organization might reasonably experience as a result of the vendor's failure to perform its obligations. For mission-critical software, broad exculpatory clauses that limit a vendor's liability are a dangerous practice that could adversely affect the soundness of an organization because organizations could be substantially injured and have no recourse.


 SECURITY 

Organizations should develop security control requirements for information systems and incorporate performance standards relating to security features in their software licensing and development contracts. The standards should ensure software is consistent with an organization's overall security program. In developing security standards, organizations may wish to reference the methodology detailed in the IT Handbook's "Information Security Booklet."

Organizations may also refer to other widely recognized industry standards.

Software development packages may include significant update, modification, training, operational, and support services that require a vendor's access to an organization's customer data. These aspects of the relationship trigger service provider requirements under the federal banking agencies'

RESTRICTIONS ON ADVERSE COMMENTS 

Some software licenses include a provision prohibiting licensees from disclosing adverse information about the performance of the software to any third party. Such provisions could inhibit an organization's participation in user groups, which provide useful shared experience regarding software packages. Accordingly, organizations should resist these types of provisions.

Maintenance activities include the routine servicing and periodic modification of hardware, software, and related documentation. Hardware modifications are periodically required to replace outdated or malfunctioning equipment or to enhance performance or storage capacities. Software modifications are required to address user requirements, rectify software problems, correct security vulnerabilities, or implement new technologies. Documentation maintenance is necessary to maintain current, accurate, technology-related records, standards, and procedures.

Major modification standards should address applicable project activities such as requirements analysis, feasibility studies, project planning, software design, programming, testing, and implementation procedures. Refer to the "Project Management" and "Development" sections for additional project management details. Refer to the "Conversion" discussion later in this section for specific considerations relating to major modifications.

Routine modifications involve making changes to application or operating system software to improve performance, correct problems, or enhance security. Routine modifications can be simple or complex, but are not of the magnitude of major modifications and can be implemented in the normal course of business.

Change request forms should provide an accurate chronological record and description of all changes. The forms should provide sufficient information for affected parties to understand the impact of a change and include:

Request date;

Requestor's name;

Description of change;

Reasons for implementing or rejecting a change;

Justification for change;

Approval signature(s);

Change control number;

Priority information;

Identification of affected systems, databases, and departments;


 Name of individual responsible for making the change;

Resource requirements;

Projected costs;

Projected completion date;

Projected implementation date;

Potential security and reliability considerations;

Testing requirements;

Implementation procedures;

Estimated downtime for implementation;

Backup/Back-out procedures;

Documentation updates (program designs and scripts, network topologies, user manuals, contingency plans, etc.);

Change acceptance documentation from all applicable departments (user, technology, quality assurance, security, audit, etc.); and

Post-implementation audit documentation (comparison of expectations and results).

After program modifications are completed, all program codes (source code, object code, patch code, load module, etc.) should be secured. Securing the codes provides some assurance that the programs cataloged to production environments are unaltered versions of the approved and tested programs. Management should establish program approval standards that include procedures for verifying test results, inspecting modified code, and confirming source and object codes match.
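One way to support the "unaltered versions" assurance described above is to compare a cryptographic hash of the cataloged module against a baseline recorded at approval time. The following Java sketch is illustrative only; the file paths and the idea of a stored baseline hash file are assumptions, not from the original text:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    public class CodeIntegrityCheck {
        // Computes the SHA-256 digest of a program file as a hex string (requires Java 17+ for HexFormat).
        static String sha256(Path file) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return HexFormat.of().formatHex(md.digest(Files.readAllBytes(file)));
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical paths: the approved baseline and the module cataloged to production.
            String approvedHash = Files.readString(Path.of("approved/payroll.jar.sha256")).trim();
            String deployedHash = sha256(Path.of("production/payroll.jar"));

            if (approvedHash.equalsIgnoreCase(deployedHash)) {
                System.out.println("Cataloged module matches the approved, tested version.");
            } else {
                System.out.println("WARNING: cataloged module differs from the approved version.");
            }
        }
    }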

Appropriate backups, established back-out procedures, and detailed documentation enhance management's ability to reverse changes if they cause system disruptions. Detailed documentation also enhances management's ability to analyze the impact of any changes during post-change evaluations. At a minimum, emergency change procedures should require:

Pre-change reviews and authorizations;

Pre-change testing (in segregated testing environments);

Backup/back-out procedures;

Documentation that includes:

Descriptions of a change;

Reasons for implementing or rejecting a proposed change;

The name of the individual who made the change;

 A copy of the changed code;

The date and time a change was made; and

Post-change evaluations.


2. Explain the concept of quality. 

Ans:- Excellence means exhibiting characteristics that are very good and, implicitly, not achievable by all.

The Higher Education and Research Opportunities in the United Kingdom (HERO) site provides the following definition of Research Achievements and Interpretation of the Rating Scale for the RAE: 'International excellence' is defined as a quality of research which makes an intellectually outstanding contribution in either or both of these respects. The definition has no connection with the subject matter of the research in question, which may be of any kind within the UoA boundaries defined above.

'National excellence' is defined as a quality of research which makes an intellectually substantial contribution to new knowledge and understanding and/or to original thought.

In a paper entitled The Power of Quality, the Deutsche Gesellschaft für Qualität e.V., a member and national partner organisation of the European Foundation for Quality Management (EFQM), states: Excellence is defined as exceptionally good performance in all areas of management.

Relating excellence to service provision, University of Louisville (1995) states: "Operational excellence" is defined as a focus on reliability, convenience, and price competitiveness. Customers would expect total availability, security and integrity of the infrastructure and services. Customers expect reliability to be the norm, followed by convenience (delivery of quick, dependable service), and then price competitiveness (lowest price).

Harvey and Green (1993) further develop the notion of quality as excellence as follows:

Quality as exceptional

The exceptional notion of quality sees it as something special. There are three variations on this. First, the traditional notion of quality as distinctive; second, a view of quality as exceeding very high standards (or 'excellence'); and third, a weaker notion of exceptional quality, as passing a set of required (minimum) standards.

1. Traditional notion of quality

Traditionally, the concept of quality has been associated with the notion of distinctiveness, of something special or 'high class'. The traditional notion of quality implies exclusivity: for example, the supposed high quality of an Oxbridge education. Quality is not determined through an assessment of what is provided but is based on an assumption that the distinctiveness and inaccessibility of an Oxbridge education is of itself 'quality'. This is not quality to be judged against a set of criteria but the quality, separate and unattainable for most people.

The traditional notion of quality does not offer benchmarks against which to measure quality. It does not attempt to define quality. One instinctively knows quality (as the butler says in the Croft sherry adverts). The traditional concept of quality is useless when it comes to assessing quality in education because it provides no definable means of determining quality.

2. Exceeding high standards (Excellence 1)

Excellence is often used interchangeably with quality. There are two notions of excellence in relation to quality: excellence in relation to standards, and excellence as 'zero defects' (which is discussed below in section 2). Excellence 1 sees quality in terms of 'high' standards. It is similar to the traditional view but identifies what the components of excellence are, while at the same time ensuring that these are almost unattainable. It is elitist in as much as it sees quality as only possibly attainable in limited circumstances. The best is required if excellence is to result. In the education context, if you are lectured by Nobel prizewinners, have a well equipped laboratory with the most up-to-date scientific apparatus and a well stocked library, then you may well produce excellent results.

Excellence 1 is about excelling in input and output. An institution that takes the best students, provides them with the best resources, both human and physical, by its nature excels. Whatever the process (by which students learn), the excellence remains. Excellence 1, with its emphasis on the 'level' of input and output, is an absolutist measure of quality. The notion of centres of excellence in higher education is based on this notion of quality, and it can be seen in the original conception of the CTCs, for example.

3. Checking Standards

The final notion of quality as exceptional dilutes the notion of excellence. A 'quality' product in this sense is one that has passed a set of quality checks. Rather than unattainable, the checks are based on attainable criteria that are designed to reject 'defective' items.

'Quality' is thus attributed to all those items that fulfil the minimum standards set by the manufacturer or monitoring body. Quality is thus the result of 'scientific quality control'.

At any given moment there will be an 'absolute' benchmark (or standard) against which the product is checked; those that satisfy the criteria will pass the quality threshold. The benchmarks may be set internally or externally; the Which? reports are an example of the latter. Checking for quality may be pass/fail or it may be on a scale. The Which? reports provide a quality rating, as do final degree results in higher education institutions.

The standards approach to quality implies that quality is improved if standards are raised. A product that meets a higher standard is a higher quality product. In education, quality has often been equated with the maintenance and improvement of standards. This approach to quality implicitly assumes that 'standards' are 'objective' and static. However, standards are negotiated and subject to continued renegotiation in the light of changed circumstances.

In an article entitled 'Defining Excellence in Graduate Studies', Laurie Carlson Berg (University of Regina) and Linda Sabatini report a study of definitions of excellence provided by Master's degree and doctoral candidates, identified by their department as "excellent," and by chairs of graduate programmes at two western Canadian universities.

A Quality Management System (QMS) can be defined as a set of policies, processes and procedures required for planning and execution (production / development / service) in the core business area of an organization. A QMS integrates the various internal processes within the organization and intends to provide a process approach for project execution. A QMS enables organizations to identify, measure, control and improve the various core business processes that will ultimately lead to improved business performance.

Concept of quality - historical background

The concept of quality as we think of it now first emerged out of the Industrial Revolution. Previously, goods had been made from start to finish by the same person or team of people, with handcrafting and tweaking of the product to meet 'quality criteria'. Mass production brought huge teams of people together to work on specific stages of production where one person would not necessarily complete a product from start to finish. In the late 1800s, pioneers such as Frederick Winslow Taylor and Henry Ford recognized the limitations of the methods being used in mass production at the time and the subsequent varying quality of output. Taylor established Quality Departments to oversee the quality of production and rectifying of errors, and Ford emphasized standardization of design and component standards to ensure a standard product was produced. Management of quality was the responsibility of the Quality Department and was implemented by inspection of product output to 'catch' defects.


 all overseen by Management Responsibility and Quality Audits.

Because the QS regulation covers a broad spectrum of devices and production processes, it allows some leeway in the details of quality system elements. It is left to manufacturers to determine the necessity for, or extent of, some quality elements and to develop and implement procedures tailored to their particular processes and devices. For example, if it is impossible to mix up labels at a manufacturer because there is only one label to each product, then there is no necessity for the manufacturer to comply with all of the GMP requirements under device labeling.

Quality management organizations and awards

The International Organization for Standardization's ISO 9000:2000 series describes standards for a QMS addressing the principles and processes surrounding the design, development and delivery of a general product or service. Organizations can participate in a continuing certification process to ISO 9001:2000 to demonstrate their compliance with the standard, which includes a requirement for continual (i.e. planned) improvement of the QMS.

(ISO 9000:2000 provides guidance on quality principles and on the common language used by quality professionals. ISO 9004:2000 provides guidance on improvement methods. It can be seen that neither of these standards can be used for certification purposes, as they provide guidance, not requirements.)

The Malcolm Baldrige National Quality Award is a competition to identify and recognize top-quality U.S. companies. This model addresses a broadly based range of quality criteria, including commercial success and corporate leadership. Once an organization has won the award, it has to wait several years before being eligible to apply again.

The European Foundation for Quality Management's EFQM Excellence Model supports an award scheme similar to the Malcolm Baldrige Award for European companies.

In Canada, the National Quality Institute presents the 'Canada Awards for Excellence' on an annual basis to organisations that have displayed outstanding performance in the areas of Quality and Workplace Wellness, and have met the Institute's criteria with documented overall achievements and results.

The Alliance for Performance Excellence is a network of state, local, and international organizations that use the Malcolm Baldrige National Quality Award criteria and model at the grassroots level to improve the performance of local organizations and economies. NetworkforExcellence.org is the Alliance web site; browsers can find Alliance members in their state and get the latest news and events from the Baldrige community.

To remain competitive, software companies must establish practices that enhance quality and advance process management. To this end, they have increasingly turned to software process improvement (SPI) methodologies, of which the ISO 9000 standards and the capability maturity model (CMM) are the best known. The underlying principle of both methodologies is to assess organizational capabilities to produce quality software, but they depend on different underlying processes.

Whether the practices advocated by these methodologies lead to high-quality software has been the topic of ongoing debates. Both scholars and practitioners are looking for hard evidence to justify the time and effort required by such guidelines to improve the software-development process and its end product.

In this paper, we investigate the impact of SPI methodologies on software quality, first by theoretical comparison and then with empirical data. Our findings reveal that each methodology has had a different level of impact on software quality factors. These findings could help software-development organizations select the methodology or combination that best meets their quality requirements.

3. Explain defect prevention.

Ans:- This paper is an experience report on a software process model that helps prevent defects through defect prediction. The paper gives a vivid description of how the model aligns itself to business goals and also achieves various quality and productivity goals by predicting the number and type of defects well in advance, with corresponding preventive action taken to reduce the occurrence of defects. Data have been collected from the case study of a live project in INFOSYS Technologies Limited, India. A project team always aims at zero-defect software, or a quality product with as few defects as possible. To deliver defect-free software, it is imperative that the development process captures and fixes the maximum number of defects before delivery to the customer. In other words, the process model should help detect the maximum number of defects possible through various Quality Control activities. The process model should also be able to predict defects and help detect them early. Defects can be reduced in two ways: (i) by detecting them at each and every stage in the project life cycle, or (ii) by preventing them from occurring.

 Automated Defect Prevention

While reading the latest book written by Adam Kolawa and Dorothy Huizinga, which is titled "Automated Defect Prevention: Best Practices in Software Management," I felt the continuous need to exclaim, "Finally!"

FINALLY, someone took the time to gather and describe well-known principles and fundamental rules that should have been documented years ago. FINALLY, a software development process is presented using true-to-life project implementation circumstances rather than falsely optimistic scenarios. FINALLY, a methodology successfully combined the three different, but co-existing, approaches to automated defect prevention:

Efficient design and code implementation

Assessment, evaluation and improvement of implemented practices, procedures and development processes

Motivation and satisfaction of people involved in the development process

Kolawa & Huizinga's "Automated Defect Prevention: Best Practices in Software Management" does address these questions and provides welcome, invaluable answers for software development professionals.

Just browsing through the contents of the book, nothing extraordinary, surprising, or revolutionary immediately jumps from the pages. It appears to cover basic development phases: from establishment of infrastructure, requirement specifications and management; through architectural and detailed design; up to the testing process and software product implementation.

Here are the goals of Automated Defect Prevention:

1. Satisfied and motivated people: Until now, the factor that human psychology plays in the software development process has been undervalued, or even omitted. At times it has been used as proof that light-weight methods are better than others; for example, XP is a refreshing and successful new approach because it emphasizes customer involvement and promotes teamwork. The most surprising aspect of XP is its simple rules and practices. They seem awkward and perhaps even naive at first, but soon become a welcome change. Customers enjoy being partners in the software process, and developers actively contribute regardless of experience level.

ADP takes this concept further. It includes the human psychology factor as a basis of the overall software development process. The authors seem to know that "achieving a balance between discipline and creativity is difficult" and that "both excessive boredom and excessive anxiety make people ineffective and error-prone." With that, they indicate that implementation of the overall ADP methodology keeps people positively stimulated, but not overwhelmed. The results include performing in an advanced manner and achieving the maximum level of professional satisfaction.

2. High-quality product: This basic factor of any software development methodology is often overlooked during the development process. ADP assigns it high priority--second only to satisfied and motivated people.

3. Increased productivity and operational efficiency of the organization: Do not think about how to name the procedures or artifacts within your organization. Instead, think about how to improve productivity and operational effectiveness.

4. Controlled, improved, and sustainable process: Regardless of your current CMMI level or the formality of your requirements definition process, ADP emphasizes that it is essential to control, improve, and sustain your process to increase productivity and effectiveness.

5. Projects managed through decision-making: The next basic factor that contributes to successful development is project management through decision making and risk management. To make a good decision, you need to have the appropriate data--regardless of your decision process. ADP emphasizes the process of gathering and analyzing project data, as well as process automation.

6. Defect prevention: Many bug detection tools enable developers to easily make changes and improvements to software at any stage in the software development lifecycle (SDL). While such tools are convenient, they merely provide temporary quick-fixes because a problem solved in one build can easily reappear in the next build.

ADP suggests following these steps to remove defects permanently: 1) Perform root-cause analysis to find and eliminate the core of the problem; 2) Automate a defect prevention process so that the detected problem is prohibited from recurring.

The principles of ADP

1. Establishment of infrastructure built through integration of people and technology: Some programming methods, such as CMMI, tell you what should be done, but do not specify how to do it. ADP, however, specifies what tools should be involved in the SDL to streamline the implementation of the methodology.

2. Application of general best practices: While this is standard among similar traditional models and methodologies, ADP separates itself from the others due to its approach. ADP integrates all industry-accepted best practices--from SOA patterns to code construction best practices for efficient defect prevention.

3. Customization of best practices: In addition to general best practices, individual companies have their own specific best practices that apply only to them and their project profiles. A set of best practices for any company should combine self-designed practices with the most applicable general practices, modified according to their needs.

4. Measurement and tracking of project status: Automated measurement and tracking of project status lets you see what is happening with the project throughout its entire lifecycle--enabling you to identify weaknesses and vulnerabilities.

5. Automation: With software systems larger and more complex than ever before, it seems practically impossible to develop, implement, and maintain them manually--without any process automation. ADP emphasizes this strong need for automation, which also prevents overwhelming developers with repetitive tasks.


Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guess work by the tester as to the methods of the function

Data outside of the specified input range should be tested to check the robustness of the program

Boundary cases should be tested (top and bottom of the specified range) to make sure the highest and lowest allowable inputs produce proper output

The number zero should be tested when numerical data is to be input

Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real-time systems

Crash testing should be performed to see what it takes to bring the system down

Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests, to avoid repetition and to aid in software maintenance

Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing

Finite state machine models can be used as a guide to design functional tests

Black-Box Testing

In this technique, we do not use the code to determine a test suite; rather, knowing the problem that we're trying to solve, we come up with four types of test data:

1. Easy-to-compute data
2. Typical data
3. Boundary / extreme data
4. Bogus data

Easy data (discriminant is a perfect square):

a  b  c  Roots 

1 2 1 -1, -1

1 3 2 -1, -2

Typical data (discriminant is positive):

a  b  c  Roots 

1 4 1 -3.73205, -0.267949

2 4 1 -1.70711, -0.292893

Boundary / extreme data (discriminant is zero):

a  b  c  Roots 

2 -4 2 1, 1

2 -8 8 2, 2

Bogus data (discriminant is negative, or a is zero):

a  b  c  Roots 

1 1 1 square root of negative number 


 0 1 1 division by zero

As with glass-box testing, you should test your code with each set of test data. If the answers match, then your code passes the black-box test.
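As an illustrative sketch only (the class and method names are assumptions, not from the original text), the following Java program shows how the four categories of test data above might be run against a quadratic-root routine; the bogus rows are expected to fail with the errors listed in the tables:

    // Minimal black-box test sketch for a quadratic-root routine (hypothetical helper).
    public class QuadraticBlackBoxTest {

        // Returns the two real roots of a*x^2 + b*x + c = 0.
        // Throws for the "bogus" cases: a == 0 or a negative discriminant.
        static double[] roots(double a, double b, double c) {
            if (a == 0) {
                throw new ArithmeticException("division by zero (a must be non-zero)");
            }
            double disc = b * b - 4 * a * c;
            if (disc < 0) {
                throw new ArithmeticException("square root of negative number");
            }
            double sqrtDisc = Math.sqrt(disc);
            return new double[] { (-b - sqrtDisc) / (2 * a), (-b + sqrtDisc) / (2 * a) };
        }

        public static void main(String[] args) {
            double[][] cases = {
                {1, 2, 1},   // easy: perfect-square discriminant, roots -1, -1
                {1, 4, 1},   // typical: positive discriminant
                {2, -4, 2},  // boundary: zero discriminant, repeated root 1
                {1, 1, 1},   // bogus: negative discriminant
                {0, 1, 1}    // bogus: a is zero
            };
            for (double[] t : cases) {
                try {
                    double[] r = roots(t[0], t[1], t[2]);
                    System.out.printf("a=%s b=%s c=%s -> roots %s, %s%n", t[0], t[1], t[2], r[0], r[1]);
                } catch (ArithmeticException e) {
                    System.out.printf("a=%s b=%s c=%s -> %s%n", t[0], t[1], t[2], e.getMessage());
                }
            }
        }
    }

Running the sketch prints the computed roots for the easy, typical, and boundary rows, and reports "square root of negative number" and "division by zero" for the bogus rows, matching the expectations in the tables above.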

5. Explain the use of decision tables in testing.

Ans:- Presenting decision processes in a tabular format goes back to antiquity. Decision tables offer a simple, visual aid, and they can be applied in knowledge-based systems to perform verification processes efficiently. In software development, decision tables help test teams manage complex logic in software applications.

This article introduces a decision-tables-based testing technique and describes an implementation using IBM Rational Functional Tester and IBM Rational Software Modeler. This technique is used to elaborate non-regression test suites that run a collection of reusable test scripts. Each test script described is generated with Functional Tester using the GUI record/playback technique.

My goal here is a "proof of concept." To that end, I developed a Java class library during a five-day period to implement this decision tables technique with IBM Rational tools. While the technique has not yet been deployed to an actual project, I will demonstrate the potential of this approach by using the IBM Rational toolset based on the Eclipse framework. The implementation I propose is entirely based on the standard, documented interfaces that are accessible to any customer.

The problem

Non-regression tests based on data-driven testing techniques, in which a single test script is used repeatedly with varying input data, are popular in the test automation community. This technique can be implemented with Functional Tester using data pools that are associated to a test script during or after its creation. Unfortunately, when testing applications involving complex logic, data-driven testing becomes difficult to implement without hard coding. Generally speaking, the behavior of the application is affected by variations in the data input to the different test scripts that compose the test suite. Equivalence partitioning of input data is required to identify the sets of input data that provide an equivalent behavior of the AUT (application under test). Conditions must be hard coded in each test suite to fork towards the right testing path and to address the data variation issue. This approach, which can be "fun" for developers, is not really appreciated by testers, especially when using a test-automation tool, since the hard coding of the conditions in the test scripts makes the maintenance and the extension of the test scripts more difficult to manage. Furthermore, it is difficult to optimize the test script decomposition without a clear strategy in mind.

The proposed approach 

When a tester manually implements a test procedure, he selects input data according to his testing goal and he makes a decision based on the behavior of the AUT. The question is, how can we help the tester in that decision-making process, when using test automation tools, without the hard coding I described above?

This simple question led us to consider the decision table technique and to develop a Java class library to validate the concept. Decision tables are implemented with Functional Tester data pools. Decision scripts that interpret the testing logic provided by decision tables are incorporated into the test suite architecture, as shown in Figure 1. Other reusable test scripts that compose the test suite are generated with Functional Tester using the GUI record/playback technique. A test segment is defined as a sequence of test scripts between two decision scripts, or between the starting script and a decision script, or between a decision script and an ending script. A data-driven table is linked to the test suite to run the test suite repeatedly with several combinations of input data injected into the different test scripts.


Figure 1: The test suite is composed of test scripts and decision scripts.

This technique provides the following benefits:

A formalized approach to elaborate the test suites and test scripts

The encapsulation of the test suite logic in the decision scripts

A test suite architecture centered around decision points

A testing logic easy to keep track of and to change with the decision tables that can be read and filled out by non-programmers

A more flexible implementation of the data-driven approach.

The decision-table-based testing technique

When a decision point is reached in the testing process, the tester examines the state of the AUT and determines a test action. Each decision point can be specified with a decision table. A decision table has two parts: the conditions part and the actions part. The decision table specifies under what conditions a test action must be performed. Each condition expresses a relationship among variables that must be resolvable as true or false. All the possible combinations of conditions define a set of alternatives. For each alternative, a test action should be considered. The number of alternatives increases exponentially with the number of conditions, and may be expressed as 2^NumberOfConditions. When the decision table becomes too complex, a hierarchy of new decision tables can be constructed. Because some alternatives specified might be unrealistic, a test strategy should 1) verify that all alternatives can actually be reached and 2) describe how the AUT will behave under all alternative conditions. With a decision table, it is easy to add and remove conditions, depending on the test strategy. It is easy to increase test coverage by adding new test actions from iteration to iteration, according to the test strategy.
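The following minimal Java sketch (hypothetical names and conditions; it does not use the article's library) shows a two-condition decision table, which yields 2^2 = 4 alternatives, each mapped to the test script that should run next:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // A minimal decision-table sketch: two boolean conditions give 2^2 = 4 alternatives,
    // each mapped to the name of the next test action (script) to run.
    public class DecisionTableSketch {

        // Key = "condition1,condition2"; value = name of the test script to run next.
        private final Map<String, String> alternatives = new LinkedHashMap<>();

        void addRule(boolean loginSucceeded, boolean accountIsActive, String nextTestScript) {
            alternatives.put(loginSucceeded + "," + accountIsActive, nextTestScript);
        }

        String decide(boolean loginSucceeded, boolean accountIsActive) {
            return alternatives.getOrDefault(loginSucceeded + "," + accountIsActive, "UnreachableAlternative");
        }

        public static void main(String[] args) {
            DecisionTableSketch table = new DecisionTableSketch();
            table.addRule(true,  true,  "RunDepositScript");
            table.addRule(true,  false, "RunReactivationScript");
            table.addRule(false, true,  "RunLoginErrorScript");
            table.addRule(false, false, "RunLoginErrorScript");

            // A decision script would evaluate the AUT state and pick the next script:
            System.out.println(table.decide(true, false));  // prints RunReactivationScript
        }
    }

A decision script would evaluate the two conditions against the AUT at run time and return the mapped script name; adding a third condition simply doubles the number of alternatives to consider.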

As illustrated in Figure 2, decision tables are useful when specifying, analyzing, and testing complex logic. They are efficient for describing situations where varying conditions produce different test actions. They are powerful for finding faults both in implementation and specifications.


Figure 2: Example of a decision table 

Working with decision and data-driven tables

At every decision point, a decision table should specify what needs to be verified regarding the AUT (depending on conditions) as well as the next test action. Since the logic is defined in the decision table, the tester does not need to hard code any testing logic. The decision script just performs the verifications during execution, compares the result of the verifications with the alternatives provided by the decision table, and returns the next test script to run if a solution is found.

A test suite script contains several decision scripts and test scripts. All the elements of a test suite are defined in a driver table that specifies an un-ordered set of test segments. Each test segment consists of a collection of test scripts that are executed sequentially between two decision scripts. For each test segment, the driver table specifies the transition between a source test script and a target test script.

As the decision is computed dynamically by the decision script during execution, a mechanism of notification must be implemented for the test suite script to be notified by the decision script about the next test script to run. When the decision script notifies the test suite script about the next test script to run, the test suite script queries the driver table to find the next test segment to run. The process is illustrated in Figure 3.

Potentially, any test script within a test suite is linked to a data pool for data input. When a data-driven table is linked to the test suite, it is possible to specify several combinations of data records to input to the test scripts and to dynamically change the behavior of the AUT. When the behavior of the AUT changes, the result provided by each decision script changes, and consequently the testing path across the AUT changes. With a single test suite script, it is possible to validate several testing paths. It is easy to consider new combinations of input data and to extend the test suite coverage, adding new conditions in the decision tables.
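To make the driver-table and notification mechanism concrete, here is a simplified Java sketch; the class and script names are stand-ins invented for illustration and are not the Functional Tester library's actual API:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the driver-table/notification idea (hypothetical stand-in types).
    public class TestSuiteDriverSketch {

        // Driver table: for each test segment, source script -> target script.
        private final Map<String, String> driverTable = new HashMap<>();
        private String nextScript;

        void addSegment(String sourceScript, String targetScript) {
            driverTable.put(sourceScript, targetScript);
        }

        // Called by a decision script to notify the suite of the next script to run.
        void notifyNextScript(String scriptName) {
            this.nextScript = scriptName;
        }

        // Runs segments until a decision reports no further target script.
        void run(String startScript) {
            String current = startScript;
            while (current != null) {
                System.out.println("Running test script: " + current);  // stand-in for playback
                // A real decision script would inspect the AUT here and call notifyNextScript(...).
                notifyNextScript(driverTable.get(current));
                current = nextScript;
            }
        }

        public static void main(String[] args) {
            TestSuiteDriverSketch suite = new TestSuiteDriverSketch();
            suite.addSegment("LoginScript", "DepositScript");
            suite.addSegment("DepositScript", "LogoutScript");
            suite.run("LoginScript");
        }
    }

In the real implementation the decision script would evaluate a decision table against the AUT state before notifying the suite, rather than simply reading the next segment from the driver table as this toy loop does.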


Figure 3: The elements of a test suite

This approach clearly separates the testing logic, encapsulated in the decision scripts, from the test actions and verifications performed in the test scripts. The identification of the decision points in the AUT helps formalize and elaborate the decomposition of the test suite into test scripts.

Implementing the technique with Functional Tester

As part of this proof of concept, I developed a Java library to implement the decision-table-based technique with Functional Tester. To create a test suite script, the tester must do the following:

• Reuse the test suite code template and fill the test suite driver table
• Create the decision scripts with a code template and fill the decision tables
• Fill the data-driven table

The decision-table-based testing library

The decision-table-based testing library consists of Java classes that provide the following services:

• Services to initialize and to iterate through the test suite structure defined in the driver table
• Services to explore the decision tables and to compare alternatives with verifications performed on the AUT

The event-listener mechanism between the decision script and the test suite script is transparent for the tester. It is implemented in the library by the DecisionBuilder and the TestSuiteDriver classes shown in Figure 4. To use the services provided by the library, each test suite script and each decision script created by the tester must inherit from the TestSuiteHelper class, which provides an interface to the library. For that purpose, the tester selects this super helper class each time she creates a new test suite or a decision script.
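Figure 4 is not reproduced in this text, so the skeleton below is only a guess at how the collaboration between these classes might be organized; the class names (TestSuiteHelper, TestSuiteDriver, DecisionBuilder) come from the article, but every field and method shown here is an invented placeholder rather than the library's real API.

// Hypothetical skeleton of the library structure described above.

// Super helper class that every test suite script and decision script extends
// (in Functional Tester such a helper class would itself sit on top of the
// product's script base class).
abstract class TestSuiteHelper {
    protected TestSuiteDriver driver() { return TestSuiteDriver.instance(); }
}

// Parses the driver data pool and keeps the test suite structure in memory.
class TestSuiteDriver {
    private static final TestSuiteDriver INSTANCE = new TestSuiteDriver();
    static TestSuiteDriver instance() { return INSTANCE; }

    void loadDriverTable(String driverTableName) { /* read the driver data pool */ }
    String nextSegment(String sourceScript) { /* look up the transition */ return null; }
}

// Explores a decision table and compares its alternatives with verifications
// performed on the AUT, then notifies the test suite script of the result.
class DecisionBuilder {
    void recordVerification(String condition, boolean observedValue) { /* store the result */ }
    String resolveNextScript() { /* match observations against alternatives */ return null; }
}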


Figure 4: The main classes of the decision-table-based testing library

Creating a new test suite

Test suites can be used to implement the automation test strategy at the business level or at the system use-case level. At the system use-case level, a test suite implements the use-case scenarios. At the business level, a test suite traces a business workflow across several use cases. For a given testing level, the test suites can also be organized according to the testing goals defined in the iteration test plan. For example, a test suite can focus on business-rules verification, on services delivery, or on data-integrity checking (creation, modification, deletion). In theory a single test suite could cover everything, but in practice this is unmanageable. The tester must therefore design test suites according to the test architecture, and it makes sense to have a test suite that runs other test suites.

To create a new test suite, the tester must do the following:

• Create an empty test script with Functional Tester
• Insert the code template in the test suite script (the code template for a test suite script is shown in Figure 5)
• Specify the name of the driver table and the data-driven table

 Figure 5: Code template for a test suite script 
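Because Figure 5 itself is not reproduced in this text, the following is only a rough sketch of what such a test suite script template might contain; the entry-point name follows Functional Tester's usual testMain(Object[]) convention, but the data pool names and the three helper methods are no-op placeholders invented here, not the template's real code, and the TestSuiteHelper superclass is omitted so the sketch stays self-contained.

// Hypothetical sketch of a test suite script template (not the actual Figure 5).
public class OrderManagementSuite {

    // Placeholder names for the data pools that describe this suite.
    private final String driverTable     = "OrderManagementSuite_driver";
    private final String dataDrivenTable = "OrderManagementSuite_data";

    // Functional Tester scripts expose a testMain(Object[]) entry point.
    public void testMain(Object[] args) {
        loadDriverTable(driverTable);         // parse the transitions into memory
        loadDataDrivenTable(dataDrivenTable); // choose the input data records
        runTestSuite();                       // iterate over the test segments
    }

    // No-op placeholders standing in for the library services.
    private void loadDriverTable(String name)     { System.out.println("load " + name); }
    private void loadDataDrivenTable(String name) { System.out.println("load " + name); }
    private void runTestSuite()                   { System.out.println("run suite"); }

    public static void main(String[] args) {
        new OrderManagementSuite().testMain(new Object[0]);
    }
}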

As shown in Figure 6, the structure of each test suite is described in the test suite driver table, a data pool that defines the transitions between the test scripts (i.e., the transition from a source script to a target script). The order of the rows does not matter, because the TestSuiteDriver class parses the driver data pool and loads the test suite structure into memory. Nevertheless, you must define the starting and the ending scripts. The tester can fill this table by hand to specify the test suite, or generate this data pool from the UML definition of the test suite (see "Modeling test suites with IBM Rational Software Modeler" farther below).

Figure 6: Example of test suite driver table 
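Since Figure 6 is not reproduced here, the rows below are a purely hypothetical illustration of the kind of transitions such a driver table could hold (all script names are invented); the START and END rows stand for the mandatory starting and ending scripts mentioned above.

SourceScript            TargetScript
START                   Login
Login                   Decision_LoginResult
Decision_LoginResult    CreateOrder
CreateOrder             Decision_OrderStatus
Decision_OrderStatus    CheckOrderStatus
CheckOrderStatus        END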

Creating a data-driven test suite

A data-driven table can be linked to the test suite script in order to 1) control the data input to the different test scripts and 2) create different paths across the AUT. The header of the data-driven table contains the names of the data pools used by the test scripts of the test suite. Each row of the data-driven table specifies a different combination of input data records to be used for each test script data pool. As shown in Figure 7, the first column of the data-driven table is a true/false flag used by the test suite to select or skip a row, depending on the test objectives.

Figure 7: Example of test suite data-driven table

Each test script data pool also contains a flag that indicates whether or not the test script must select a record when iterating through the records of the data pool. When the test suite starts a new iteration, the TestSuiteDriver class reads the test suite driver table and sets the test script data pool selection flags; this is repeated for all the test scripts. Thus, only the record specified in the driver table is considered when iterating through the records of a test script data pool. This mechanism is managed by the library and is completely transparent for the tester. The only constraint is the presence of the SelectRecord flag in all the data pools.
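As a hedged sketch of the selection mechanism just described (not the library's actual code), the loop below returns the one record of a test script data pool whose SelectRecord flag is true; the record structure is simplified to a Map of strings purely for illustration.

import java.util.List;
import java.util.Map;

public class SelectRecordSketch {

    // Return the record whose SelectRecord flag has been set to true for the
    // current iteration; all other records are skipped.
    static Map<String, String> selectedRecord(List<Map<String, String>> dataPool) {
        for (Map<String, String> record : dataPool) {
            if (Boolean.parseBoolean(record.get("SelectRecord"))) {
                return record;
            }
        }
        return null; // no record selected for this iteration
    }

    public static void main(String[] args) {
        List<Map<String, String>> loginPool = List.of(
                Map.of("SelectRecord", "false", "user", "alice", "password", "secret"),
                Map.of("SelectRecord", "true",  "user", "bob",   "password", "pa55word"));

        System.out.println(selectedRecord(loginPool)); // prints bob's record
    }
}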


Creating a decision test script

The tester identifies the decision points in the test suite workflow and creates a decision test script for each decision point. When the test suite workflow is designed via a UML activity diagram, a decision test script is created for each condition (see "Modeling test suites with IBM Rational Software Modeler" farther below).

To implement a decision point, the tester must do the following:

• Create and fill the decision data pool
• Create an empty decision test script and insert the code template
• Register the verification points for each condition of the decision

First, the tester creates a decision table with a Functional Tester data pool. The decision table, an example of which is shown in Figure 8, can also be generated from the UML definition of the test suite (see "Modeling test suites with IBM Rational Software Modeler" farther below). The decision script compares the result of the verifications performed on the AUT with the condition entries in order to identify the test action to perform.

Figure 8: Example of decision table

Second, the tester creates an empty test script, inserts the code template in the test script (illustrated in Figure 9), and specifies the name of the decision data pool. The tester uses Functional Tester to insert the verification points that capture the information needed by the decision table.

Figure 9: Code template for a decision script
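Figure 9 is likewise not reproduced, so the sketch below only suggests the general shape of a decision script: perform verifications on the AUT, compare the observed values with the condition entries, and report the next test script. The decision data pool is simplified to an in-memory map and the verification methods are stubs; none of this is the library's real API.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a decision script for an imaginary login decision point.
public class Decision_LoginResult {

    // Simplified stand-in for the decision data pool:
    // "errorShown,homePageShown" -> next test script to run.
    private static final Map<String, String> DECISION_TABLE = new LinkedHashMap<>();
    static {
        DECISION_TABLE.put("false,true", "CreateOrder");
        DECISION_TABLE.put("true,false", "RetryLogin");
    }

    // In a real script these would be Functional Tester verification points.
    private boolean verifyErrorMessageShown() { return false; }
    private boolean verifyHomePageShown()     { return true;  }

    public String resolveNextScript() {
        String observed = verifyErrorMessageShown() + "," + verifyHomePageShown();
        return DECISION_TABLE.get(observed); // null means no alternative matched
    }

    public static void main(String[] args) {
        System.out.println(new Decision_LoginResult().resolveNextScript()); // CreateOrder
    }
}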


Modeling test suites with IBM Rational Software Modeler

The test suite is designed with a UML activity diagram, in which each action corresponds to a test action (test script) and each decision corresponds to a decision script, as shown in Figure 10. The conditions specified at a decision point are used to generate the corresponding decision table. Activity diagrams are easy for non-developers to use and understand.

Figure 10: The test suite implementation is generated from the UML specification.

An object node with the stereotype "datastore" can be linked to a test action in order to specify that a data pool is required for this test action. It is also possible to specify the structure of the data pool with a class; each column of the data pool corresponds to an attribute of the class. A UML parser generates all the data pools required to run the test suite with Functional Tester, including the driver table, the decision tables, and the structure of the data-driven table and the test script data pools. Both the activity diagram and the class diagram can be organized under a collaboration element, as illustrated in Figure 11. Traceability links can be created between the test suite definition and the use-case model, as illustrated in Figure 12. A more sophisticated approach could be developed with the transformation facilities provided by IBM Rational Software Modeler.


Figure 11: The test suite definition is encapsulated under a collaboration. 

Figure 12: Traceability links between the test suite and other model elements

I developed a UML parser that generates the test suite driver table, the decision tables, and the data pools (i.e., the structure for the data-driven table and the test script input data tables) in an XML format. An Eclipse plug-in with a contextual menu is used to generate the test suite tables when the collaboration is selected, as shown in Figure 13.

Figure 13: Class library to generate the test suite tables from the UML definition

Figure 14: All the possible combinations are generated in the decision table. 

All the possible alternatives are automatically generated in the decision table data pool, as shown in Figure 14. Only the test actions specified in the activity diagram are generated.
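To make the generation step concrete, here is a small, hedged Java sketch (not the author's parser) that enumerates the 2^n true/false combinations a generator has to emit for n conditions; the condition names are invented.

public class AlternativeGenerator {

    // Print every true/false combination for the given condition names:
    // with n conditions there are 2^n alternatives.
    static void printAlternatives(String... conditions) {
        int n = conditions.length;
        for (int mask = 0; mask < (1 << n); mask++) {
            StringBuilder row = new StringBuilder();
            for (int i = 0; i < n; i++) {
                boolean value = ((mask >> i) & 1) == 1;
                row.append(conditions[i]).append('=').append(value).append("  ");
            }
            System.out.println(row.toString().trim());
        }
    }

    public static void main(String[] args) {
        // Example with three hypothetical conditions: 2^3 = 8 rows.
        printAlternatives("userExists", "passwordValid", "accountLocked");
    }
}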

Conclusion

I believe this decision-table-based testing technique greatly improves the tester's ability to manage decisions that must be made during automated testing. Using IBM Rational Functional Tester and IBM Rational Software Modeler, the technique can support non-regression test suites that run a collection of reusable test scripts.

As I noted in the introduction, this technique has not yet been used in a real-life project, but the early implementation using the Java class library built for this purpose suggests that the technique is viable. Further work is currently in progress to extend the test modeling approach introduced here; the model transformation services provided by IBM Rational Software Architect will be used to aid the design of the test automation.