February 12, 2018 Sam Siewert
SE420 Software Quality Assurance
Lecture 6 – Planning and Design Phase of Validation and Verification
Reminders Assignment #3 Posted, due on 2/23
Exam #1 – Tues, 2/27, Review on 2/22 (Thurs) After Completion of Assignment #3 on 2/23
I will Grade Assignment #2 this Week and Turn Around Assignment #3 Quickly, Completing Grading on Monday, 2/26
Remaining Assignments [Top Down]
– #4 – System Design and System Integrated Test
– #5 – Design, Module Unit Tests and Regression Suite
– #6 – Complete Code, Refine and Run all V&V Tests and Deliver
We can have another pre-Exam Quiz on-line, 2/25
Sam Siewert 2
MCDC, MCDC2, MCDC3
Gray-box Improvements to MCDC Test Case for 2 Toggles
– White-box Goal: Full path coverage
– White-box Goal: Full internal state coverage
– Black-box Goal: Call each function
– Black-box Goal: Call MCDC logic N times
Sam Siewert 3
% make cc -Wall -O0 -fprofile-arcs -ftest-coverage -g sclogic.c -o sclogic % ./sclogic 5 argv[0]=./sclogic argv[0]=./sclogic testIterations=5 TEST CASE 1: do function_A, cnt=1, entry=0, exit=1 do function_B, cnt=1, entry=0, exit=1 do function_C, cnt=1 do function_D, cnt=1
TEST CASE 2: MCD TEST 1 of 5 do function_A, cnt=2, entry=1, exit=0 MCD 0 FALSE: rc=0, rca=0, rcb=1 do function_D, cnt=2 MCD TEST 2 of 5 do function_A, cnt=3, entry=0, exit=1 do function_B, cnt=3, entry=1, exit=0 MCD 1 FALSE: rc=0, rca=1, rcb=0 do function_D, cnt=3 MCD TEST 3 of 5 do function_A, cnt=4, entry=1, exit=0 MCD 2 FALSE: rc=0, rca=0, rcb=0 do function_D, cnt=4 MCD TEST 4 of 5 do function_A, cnt=5, entry=0, exit=1 do function_B, cnt=5, entry=0, exit=1 MCD 3 TRUE: rc=1, rca=1, rcb=1 do function_C, cnt=2 MCD TEST 5 of 5 do function_A, cnt=6, entry=1, exit=0 MCD 4 FALSE: rc=0, rca=0, rcb=1 do function_D, cnt=5
Notes on IV&V vs. Integrated SQA
IV&V is Totally Independent from Design & Development
– Might be required by contract
– Might be an organizational standard (internal IV&V)
– Very different from an integrated SQA or Test-Driven Design strategy
– Often limited to acceptance testing (standards compliance)
– Costly if used during Unit, I&T, System testing
– May be used for Mission Critical or Enterprise systems
E.g. Medusa Labs for network protocol compliance
– Fibre Channel, IP networks, SAS/SATA, SAN, NAS
– Medusa test tools, test tool user's guide
– Branding acceptance tests (Windows Logo, VMware Certified, etc.)
Sam Siewert 4
Specification & Design V&V Prior to I&T
Goal: Integrating Well-tested Modules with Well-defined Interfaces into Sub-systems and Systems
Methods: Specification Validation, Design Validation, Design Verification
Sam Siewert
5
https://commons.wikimedia.org/wiki/File:Boeing_787-8_N787BA_cockpit.jpg
Integration and System Testing
Strategy Architected and Designed
– Well-Defined Interfaces and Protocols
– Well-Tested Units Integrated into Sub-systems, Systems, Systems of Systems
– Leads to End-to-End System Test, Soak Tests, Stress Tests, Mission Simulation Testing (E.g. NASA IV&V Facility – JSTAR)
Sam Siewert 6
Regression Testing
Review Black Box Test – RAID Unit Test, Imageproc Unit Test
– Sam Siewert – “Test main Program or Script(s) that drives each function in candidate product code along with stimulation of each feature through an API, CLI, GUI, or HCI if this is available.”
– Wikipedia – “Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied virtually to every level of software testing: unit, integration, system and acceptance. It typically comprises most if not all higher level testing, but can also dominate unit testing as well.”
– SWEBOK – “Sometimes these techniques are classified as … black-box if the test cases rely only on the input/output behavior of the software.”
White Box Test [Glass Box] – MCDC, MCDC2, Enigma-LCOV
– Sam Siewert – “Test cases, procedures, and instrumentation (SWIC, HWIC) used to verify software internal state, source and generated code quality, software/hardware interaction and internal features and characteristics of software by segment [code, data, stack, heap]. Code level use of SWIC/HWIC tools including /proc, JTAG, Visual Debugger, Coverage, and Profile/Trace.”
– Wikipedia - White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
– SWEBOK – “Sometimes these techniques are classified as white-box (also called glass-box), if the tests are based on information about how the software has been designed or coded…”
Gray Box – Black Box Test with White Box Methods to capture metrics (e.g. coverage, clocks per instruction efficiency, heap use, I/O rates, etc.)
Sam Siewert 7
System Test Design & Plans
Focus is End-to-End Testing
– E.g. Send Command, Through Link Emulator, to Remote Device, Return Data, Display Update
– All Software Units (CSUs) and Subsystems (CSCIs) are Integrated, Interfaces and Protocols Exercised
– OOA/OOD OIM (Object Interaction) Sequence Diagrams Model Expectations
Some Elements May Be Emulated or Simulated
– E.g. Instrument Tested with Spacecraft Simulator
– Spacecraft with Integrated Instrument Tested with Ground Station Simulator and Actual Payload Operations Center
– Progression to Fully Operational System from I&T
Plans Include Positive, Negative and Load/Stress/Performance Tests
Sam Siewert 8
University of Colorado, USA Colorado Space Grant College – October 24, 1997
[Figure: DATA-CHASER Testbed – end-to-end system with NASA Communications ACCESS, Customer Ground Support Equipment, Central Workstation, and Eng. & Science Workstations over an IP LAN/WAN link with a WAN backup modem link; University of Colorado – GSFC – JPL distributed mission operations]
[Figure: Basic SCL-SELMON Testbed – SCL RTE and SCL Interpreter with SELMON; TLM/CMD/SCORE DB, Dictionary, Causal Graph, History, and SCORE RECID Map; DataIO/Link and DeviceIO down to Device-1 … Device-N; simulated device I/O, part of demo today]
[Figure: DATA-CHASER End-to-End Software System spanning the STS-85 DATA-CHASER Hitchhiker Payload, CU Boulder, and GSFC – components include GRTE, SELMON, LRDU, SCL, DataIO (server) and DataIO (client), GUIs (Beacon), DCAPS, LRU, LRD, LASIT, DeviceIO, *FRTE, *SELMON, Downlink Buffer SXEE, FARUS, and the O2 DBMS; key: * = not completed, otherwise fully functional]
Integration Test Design (I&T)
Focus is on Integrating Units (CSUs) into Subsystems (CSCIs)
– CSUs Well Tested by Unit Test Drivers (Unit Test Design and Plans)
– Integrated Tests Verify Specific Architectural Interfaces and Protocols One Interface at a Time
– Stress and Performance Testing of Interfaces
– Negative Testing with Fault Injection
– E.g. Command/Response – Good and Bad Commands Issued, Command Rates, Response Rates, Data Transfer Rates and Line Rates Established
Bugs (Faults and Defects) Found
– Isolate to CSU and Interface (Root Cause)
– Regression Test CSUs with Unit Tests in Isolation
– Interpose on Specific Interface with Protocol Analyzer
Sam Siewert 12
I&T Tools
SW to SW Interfaces, SW to HW Interfaces
Physical, Link, Network, Transport Layers [OSI Reference Model]
Protocol Analyzers – Establish Conformance for Protocol Standards
– E.g. CAN Bus, I2C, SPI, Ethernet, USB, Serial (RS232, 485, …), Fibre Channel, SAS/SATA, …
– E.g. Bus Protocols – PCI-Express, ARM AMBA, QPI, HyperTransport
– Custom Command/Response – Synchronous Command/Response, Asynchronous
– HWIC – Hardware In-Circuit Analyzers (Logic, Protocol)
– SWIC – Packet Snooping and Visualization – E.g. Wireshark, Kernelshark
Sam Siewert 13
http://www.totalphase.com/store/
http://data-transit.com/products/
Unit Test Design
Drive Tests with Software Unit in Isolation
– Modify CSU as Little as Possible (Ideally Not at All)
– Test Driver is Dynamically Linked to CSU under Test
– Positive Testing – Used as Expected
– Negative Testing – Unexpected Inputs, System Interface Failures
– Stress Testing – Corner Cases (Input Extreme Values), Maximum Rates, Extreme Resource Use (Memory, CPU, I/O Bandwidth, Storage)
– Soak Testing – Continuous Positive, Negative, Stress Testing over Long Periods of Time
Serves as Regression Test in I&T and System Test
Sam Siewert 14
I&T Test Flow 1. Automatic Checkout of Source Code (git, subversion, etc.)
2. Build each Unit, Unit Test driver and run all Unit tests
3. Build (Link) a Test Configuration (Integrated Test)
4. Run Test Cases with I&T Test Driver (SQA, Development)
5. Run DejaGnu/Expect or Similar Test Reporting
6. Provide Summary (Nightly)
7. Review and File Bug Reports
Sam Siewert 15
Git Example – SSH Cloning
Now that you have your SSH key generated on PRClab and added to your GitHub keyring, you can clone with SSH instead of HTTPS
– Copy the SSH URL from the repository web page, then issue the clone command on PRClab: “git clone [email protected]:siewertserau/Examples-Imageproc-Unit-Test.git”
– You will now be prompted for your SSH passphrase; enter it correctly and you will get an SSH clone sandbox
– It should look like this:
Sam Siewert 16
Use SSH PATH
Git Example – Sandbox Work
Now edit your files, add files, build, and test. When you're happy, do “git add *” to add any new files you copied into your sandbox, and do a “git pull” to make sure you are current. Now do “git commit -a” to commit the change set to your clone sandbox with a meaningful comment, and then do “git push”, which should look like:
Sam Siewert 17
Git Example – Browse Change History Use Browse code and Side-by-side “Split” view to Compare Old and New Versions of a file Browsing Code Change History is One of the MOST Powerful Features of CMVC
Sam Siewert 18
Binary File Example
Make Changes, Build, Test, and Then Commit and Push
Now Run to Generate Chunk Files
Sam Siewert 19
Git RAID Example – Browse
To See What I Changed, Browse
First, 4 New “+” Lines for Includes for Simple File I/O
Next, 2 New Lines to Declare Simple File Descriptors
Sam Siewert 20
Git RAID Example – Test Case Update
New Block of Binary Output for External Verification
Can “diff” and use “od -t x1” on Files to Examine
Sam Siewert 21
Lines 129 to 158 are New; Lines 124 and 160 Sync Up Again
Basic Binary File Dump and Compare
Chunk1.bin, …, Chunk4.bin are the Data Files in the 4+1 RAID-5 Set
ChunkXOR.bin is the XOR Parity File in the 4+1 RAID-5 Set
Chunk4_Rebuilt.bin is the Recovered Data in Test Case #0
Identical Chunks, so XOR=0
Sam Siewert 22
System Test of Integrated Modules
Profiling, Tracing and Workloads for I&T
Sam Siewert
23
Profiling Basics
Gprof, Available with GCC Since the Beginning of Time, Still Very Useful Today
Read the Manual, Then Try the Quick Start Following the Example in These Notes – https://sourceware.org/binutils/docs/gprof/
Numerous Tutorials – https://en.wikipedia.org/wiki/Gprof
Sam Siewert 24
Performance Tests
Profiling [Classic Tools]
– Gprof – Open Source Tool [similar to Gcov, but for Profiling]
– VTune – Commercial Tool from Intel
– Logic Analyzer, Agilent SPA (Statistical Performance Analysis)
Tracing – E.g. Timestamps output to syslog
Statistics – top, htop – iostat – memstat
Workloads – Iometer – stress
Sam Siewert 25
Performance – Sysprof
What is Using CPU on my System, Rather than a Profile of a Single Application
– Sub-System [Service]
Sam Siewert 26
Gprof
Simple -pg Compile Option
Run, Then gprof on gmon.out to Get the Analysis
Sam Siewert 27
%make cc -O3 -Wall -pg -msse3 -malign-double -g -c raidtest.c raidtest.c: In function 'main': raidtest.c:99: warning: format '%d' expects type 'int', but argument 2 has type 'long unsigned int' raidtest.c:68: warning: unused variable 'aveRate' raidtest.c:68: warning: unused variable 'totalRate' raidtest.c:66: warning: unused variable 'rc' raidtest.c:212: warning: control reaches end of non-void function cc -O3 -Wall -pg -msse3 -malign-double -g -c raidlib.c cc -O3 -Wall -pg -msse3 -malign-double -g -o raidtest raidtest.o raidlib.o %./raidtest Will default to 1000 iterations Architecture validation: sizeof(unsigned long long)=8 RAID Operations Performance Test Test Done in 453 microsecs for 1000 iterations 2207505.518764 RAID ops computed per second %ls Makefile gmon.out raidlib.h raidlib64.c raidtest raidtest.o Makefile64 raidlib.c raidlib.o raidlib64.h raidtest.c raidtest64 %gprof raidtest gmon.out > raidtest_analysis.txt
Gprof Analysis 1 million iterations of RAID test XOR and Rebuild
Sam Siewert 28
Flat profile: Each sample counts as 0.01 seconds. % cumulative self self total time seconds seconds calls ns/call ns/call name 82.13 1.54 1.54 main 15.47 1.83 0.29 2000001 145.38 145.38 xorLBA 2.67 1.88 0.05 2000001 25.07 25.07 rebuildLBA % the percentage of the total running time of the time program used by this function. cumulative a running sum of the number of seconds accounted seconds for by this function and those listed above it. self the number of seconds accounted for by this seconds function alone. … calls the number of times this function was invoked, if this function is profiled, else blank. self the average number of milliseconds spent in this ms/call function per call, … total the average number of milliseconds spent in this ms/call function and its descendents per call, … name the name of the function. …
RAID Operations Performance Test Test Done in 206417 microsecs for 1000000 iterations 4844562.221135 RAID ops computed per second
Call Graph Profile from Gprof
Sam Siewert 29
Call graph (explanation follows) granularity: each sample hit covers 2 byte(s) for 0.53% of 1.88 seconds index % time self children called name <spontaneous> [1] 100.0 1.54 0.34 main [1] 0.29 0.00 2000001/2000001 xorLBA [2] 0.05 0.00 2000001/2000001 rebuildLBA [3] ----------------------------------------------- 0.29 0.00 2000001/2000001 main [1] [2] 15.4 0.29 0.00 2000001 xorLBA [2] ----------------------------------------------- 0.05 0.00 2000001/2000001 main [1] [3] 2.7 0.05 0.00 2000001 rebuildLBA [3] ----------------------------------------------- This table describes the call tree of the program, and was sorted by the total amount of time spent in each function and its children… % time This is the percentage of the `total' time that was spent in this function and its children… self This is the total amount of time spent in this function. children This is the total amount of time propagated into this function by its children. called This is the number of times the function was called…
Tracing and Code Coverage Basics
What Code Has Been Executed? – Coverage
What Events Triggered Code Execution (Function Call) and at What Time? – Trace
Read the Manual, Then Try the Quick Start Following the Example in These Notes for Coverage – https://gcc.gnu.org/onlinedocs/gcc/Gcov.html
Numerous Tutorials – https://en.wikipedia.org/wiki/Gcov
Sam Siewert 30
Summary of Gcov Commands
Sam Siewert 31
%make clean
rm -f *.o *.d *.exe sclogic *.gcov *.gcno *.gcda *.info
%ls
Makefile SC-Logic-2-LCOV-results SC-Logic-2-LCOV-results.zip sclogic.c
<<ALWAYS MAKE CLEAN FIRST>>
%make
cc -Wall -O0 -fprofile-arcs -ftest-coverage -g sclogic.c -o sclogic
<<DO NEW BUILD ON YOUR TEST MACHINE AND NOTE THE PROFILE-ARCS and TEST-COVERAGE GCOV INSTRUMENTATION DIRECTIVES>>
%./sclogic
function_A function_B do function_C do function_D function_A do function_D … do function_C function_A do function_D function_A function_B do function_D
<<RUN OF CODE TO BE TESTED WITH SOME PRINTF DEBUG OUTPUT HERE>>
%gcov sclogic.c File 'sclogic.c' Lines executed:100.00% of 29 sclogic.c:creating 'sclogic.c.gcov' <<RUN POST RUN COVERAGE ANALYSIS ON SOURCE FOR LAST RUN OF INSTRUMENTED CODE>> %gcov sclogic File 'sclogic.c' Lines executed:100.00% of 29 sclogic.c:creating 'sclogic.c.gcov' <<RUN POST RUN COVERAGE ANNOTATED TEXT GENERATION>> %cat sclogic.c.gcov -: 0:Source:sclogic.c -: 0:Graph:sclogic.gcno -: 0:Data:sclogic.gcda -: 0:Runs:1 -: 0:Programs:1 -: 1:#include <stdio.h> … 1: 41:int main(void) -: 42:{ … -: 52: // Test Case #2, Test use in logic 11: 53: for(testIdx=0; testIdx < 10; testIdx++) -: 54: { 10: 55: if((rc=(function_A() && function_B()))) 2: 56: function_C(); -: 57: else 8: 58: function_D(); -: 59: -: 60: } -: 61: 1: 62: return(1); -: 63:}
Summary of Lcov Commands
Sam Siewert 32
%lcov -t 'SC LOGIC 2 REPORT' -o sclogic.info -c -d . Capturing coverage data from . Found gcov version: 4.4.7 geninfo: WARNING: invalid characters removed from testname! Scanning . for .gcda files ... Found 1 data files in . Processing sclogic.gcda Finished .info-file creation <<RUN LCOV TOOL ON GCOV RESULTS>> %genhtml -o result2 sclogic.info Reading data file sclogic.info Found 1 entries. Found common filename prefix "/home/facstaff/siewerts/se420/src" Writing .css and .png files. Generating output. Processing file MCDC2/sclogic.c Writing directory view page. Overall coverage rate: lines......: 100.0% (29 of 29 lines) functions..: 100.0% (5 of 5 functions) % <<GENERATE THE WEB PAGES FROM LCOV RESULTS FOR BROWSING>>
What Does Gcov Show? Impact of Short-Circuit Logic in C http://mercury.pr.erau.edu/~siewerts/se420/code/MCDC2/SC-Logic-2-LCOV-results/MCDC2/sclogic.c.gcov.html
Sam Siewert 33
Every Other if-evaluation Reaches function_B (Short-Circuit Skips B When A Returns 0), and B Must Also Return “1” When A Does
Function_C Called 2x in 10 Iterations, Due to the Toggle + Short-Circuit Logic
Tracing Tools for Linux LTTng – http://lttng.org/ Systemtap – https://sourceware.org/systemtap/ Ftrace – http://elinux.org/Ftrace Kernelshark – http://rostedt.homelinux.com/kernelshark/, https://lwn.net/Articles/425777/ Wireshark - https://www.wireshark.org/ (network) Syslog - http://linux.die.net/man/5/syslog.conf, http://linux.die.net/man/2/syslog
Sam Siewert 34
Kernelshark Trace VxWorks System Viewer