So Your Boss Wants You to Performance Test Blackboard
So Your Boss Wants You to Performance Test the Blackboard Learn™ Platform
Steve Feldman
Quick Bio
• Blackboard since 2003
• Performance Engineering from the start
• Platform Architecture in 2005
• Security Engineering in 2010
“Love my job…love my team. If you email me, I will respond.”
http://goo.gl/Z4Rq5
@SEVEN_SECONDS
Problem Statement
So Your Boss Wants You to Performance Test
Expectation Setting
For Yourself
• Does this fall in your job requirements?
• Are you capable of accomplishing this?

For Your Team
• Do they have the skills to contribute?
• Can responsibilities be well divided and distributed?

For Your Boss
• Can the effort be quantified?
• Will he/she be willing to agree on objectives, plan and schedule?
Expectation Setting
• Why are you going through this exercise?
• What do you expect to get out of it?
• Who will be working this effort?
• When will it be accomplished?
• How much will it cost?
• Where do we go next once we accomplish it?
Expectation Setting
Benchmarking: Figuring Out Your Gaps
• Drive Functional Objectives
• Mine and Analyze Data
• Access the Right Tools
• Develop Simulation Scripts
• Capture Appropriate Metrics
• Analyze and Respond to the Data
• Escalate without Confidence
Expectation Setting
Performance Goals are Measurable and Traceable
• Goals need to align to the vision and direction of the institution.
• Goals should be attainable and realistic.

Response Times are Necessary for Performance
• Avoid averages, maximums and minimums unless paired with standard deviation.
• Use percentiles instead (see the sketch below).

Throughput Measurements are Needed for Scalability
• Bytes Transferred (Send and Receive)
• Hits/Time Interval
• Pages/Time Interval (Business Transaction Mapping)
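A minimal sketch of why percentiles beat averages for response-time goals: a few slow outliers barely move the mean but show up clearly at p90/p95. The sample values are made up for illustration; this is not part of any Blackboard tooling.

```python
import statistics

# Hypothetical response-time samples (seconds), including two slow outliers
samples = [0.8, 0.9, 1.0, 1.1, 1.2, 0.9, 1.0, 9.5, 1.1, 12.0]

cuts = statistics.quantiles(samples, n=100)   # 99 percentile cut points
print(f"mean={statistics.mean(samples):.2f}s  "
      f"p90={cuts[89]:.2f}s  p95={cuts[94]:.2f}s")
```

The mean looks tolerable while the p90/p95 values expose the outliers, which is exactly the behavior a response-time goal should be written against.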
Expectation Setting
Attributes of a Performance Goal
• Performance: Response Time Percentile
• Scalability: Concurrency/Parallelism
• Acceptance Criteria
• Workload
• Business Context
• Failure Rates and Errors

Poor Attributes: System Utilization, Ambiguous Words
Everything You Need to Know
• Goals, Targets and Thresholds
• Functional Scenarios
• Test Bed
• Scripting
• Scenario Planning
• Load Test Definition
Everything You Need to Know
• Planning: Goals and Objectives
  – Goals should be measurable and traceable.
  – The best goals align to the vision and direction of the business.
  – Performance requirements are preferred.
• Poor goals involve system utilization metrics, as they don't align to the business.
Everything You Need to Know
• Attributes of a good performance/scale goal (one way to capture these is sketched below)
  – Response time percentile
  – Throughput metric: bytes, hits, pages/transactions served or processed
  – Community/Population
  – Definition of a business transaction
  – Workload/Data Condition
  – Database transaction
  – Failure Rate
  – HTTP Error Codes
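A hedged sketch of capturing a goal with these attributes in one place so it stays measurable and traceable. The field names, class name and example values are illustrative, not anything from Blackboard's tooling.

```python
from dataclasses import dataclass

@dataclass
class PerformanceGoal:
    business_transaction: str       # definition of the business transaction
    workload: str                   # workload/data condition it applies to
    response_time_p95_ms: int       # response-time percentile target
    throughput_pages_per_min: int   # throughput metric (pages/transactions served)
    concurrent_users: int           # community/population and concurrency target
    max_failure_rate_pct: float     # acceptable failure rate
    allowed_http_errors: tuple = () # HTTP error codes tolerated, if any

# Hypothetical example goal
login_goal = PerformanceGoal(
    business_transaction="Student login and land on My Institution page",
    workload="Start-of-term data set at peak enrollment",
    response_time_p95_ms=3000,
    throughput_pages_per_min=1200,
    concurrent_users=500,
    max_failure_rate_pct=0.5,
)
```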
Everything You Need to Know
• Load Test Environment
• Test/Deploy Environment
• Monitoring Infrastructure
• Synthetic Tools
Everything You Need to Know
Analysis Tools
Everything You Need to Know
Scripting Frameworks
• C
• Java and JS
• Python and Perl

Record and Playback Systems
• Commercial Tools: LoadRunner, SilkPerformer, Rational, SOASTA, MSVSTS
• Open Source Tools: Jmeter, Grinder, OpenSTA, Multi-Mechanize, Curl-Loader

Rich Client and Browser
• Browser: Browser-Mob, Selenium, SOASTA, LISA
• Alternative: WebPageTest
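For the "write your own" end of the spectrum, here is a minimal, hypothetical HTTP driver in Python with a handful of threaded virtual users. The base URL, paths, user count and think time are placeholders; this is a sketch of the idea, not a Blackboard-supplied script.

```python
import statistics
import threading
import time
import requests

BASE_URL = "https://learn.example.edu"          # assumption: your own test environment
PATHS = ["/webapps/login/", "/webapps/portal/frameset.jsp"]  # illustrative paths
timings = []
lock = threading.Lock()

def virtual_user(iterations=10, think_time=2.0):
    session = requests.Session()
    for _ in range(iterations):
        for path in PATHS:
            start = time.perf_counter()
            resp = session.get(BASE_URL + path, timeout=30)
            elapsed = time.perf_counter() - start
            with lock:
                timings.append((path, resp.status_code, elapsed))
            time.sleep(think_time)              # crude think time between requests

threads = [threading.Thread(target=virtual_user) for _ in range(25)]
for t in threads:
    t.start()
for t in threads:
    t.join()

durations = [t[2] for t in timings]
print("requests:", len(timings),
      "p95:", round(statistics.quantiles(durations, n=20)[18], 2), "s")
```

A commercial or open-source tool from the list above does the same thing with far better correlation, pacing and reporting; the point of the sketch is only to show what a simulation script fundamentally is.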
Everything You Need to Know
• Planning: Scheduling
  – Defining Objectives
  – Analyzing Behavior
  – Analyzing System Data
  – Functional Script Definition (Coverage Model)
  – Scripting
  – Data Set Construction (Test Bed Data Set)
• Planning: Test Process
  – Monitoring Setup
  – Infrastructure Setup
  – Sample Testing
  – Restore Process
  – Calibration (Testing and Tuning)
  – Scalability Testing
  – Analyzing Results
  – Presenting Results
Everything You Need to Do
• Performance Scenario and Modeling
  – Conduct Functional Interviews
  – Functional Analysis (Review Use Cases)
  – Log Mining
  – Data Mining
  – User Experience and Expectations
• Sequence, Order and Probability
• Modeling Time
  – Time of day, time of year and universal behavior
Everything You Need to Do
Test Bed and Data Conditions
• Create Synthetic: uniform, controlled and simplified for ease of scripting and requirements traceability.
• Leverage Existing Data: real data, but it requires careful planning and evaluation; it complicates scripting and makes requirements traceability difficult.
• Multiple techniques exist for creating the test bed and data conditions:
  – Combination of ContentExchange and Snapshot
  – Use of B2 APIs
  – Direct SQL
• Two pitfalls to avoid:
  – Avoid creating synthetic data with the load test scripts themselves.
  – Avoid trying to use real customer data for actual test conditions.
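One hedged sketch of the synthetic approach: generate a uniform, predictable test-bed inventory as a CSV that the load scripts can parameterize against. The usernames, course IDs and counts are illustrative only, and this file is an input to your data-loading process (Snapshot, B2 APIs or SQL), not a substitute for it.

```python
import csv

# Hypothetical synthetic test bed: 20 courses x 50 students with traceable names
with open("testbed_users.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["username", "password", "course_id"])
    for course in range(1, 21):
        for student in range(1, 51):
            writer.writerow([
                f"perf_c{course:02d}_s{student:03d}",  # predictable, traceable naming
                "Chang3Me!",                           # known test credential
                f"PERF-COURSE-{course:02d}",
            ])
```

Because every name follows a convention, a failed request in a load-test log maps straight back to a specific synthetic user and course, which is what keeps requirements traceability simple.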
Everything You Need to Do
• Two Recommended Synthetic Transactions: production HTTP drivers and true browser rendering.
• HTTP Drivers: interval-based simulators, usually run from external sources.
  – Regulate frequency, define functional paths and verify non-functional requirements (see the sketch below).
• Browser Rendering: execute full browser behavior.
  – Shows the full end-to-end picture, not just first-to-last byte from the server's perspective.
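A small sketch of an interval-based HTTP driver: probe a known functional path on a fixed frequency and verify a non-functional requirement (response time under a threshold). The URL, interval and threshold are assumptions, not recommended values.

```python
import time
import requests

URL = "https://learn.example.edu/webapps/login/"   # hypothetical probe target
INTERVAL_S = 60                                    # run once a minute
THRESHOLD_S = 3.0                                  # non-functional requirement to verify

while True:
    start = time.perf_counter()
    try:
        resp = requests.get(URL, timeout=30)
        elapsed = time.perf_counter() - start
        ok = resp.status_code == 200 and elapsed <= THRESHOLD_S
        print(f"{time.strftime('%H:%M:%S')} status={resp.status_code} "
              f"time={elapsed:.2f}s {'OK' if ok else 'ALERT'}")
    except requests.RequestException as exc:
        print(f"{time.strftime('%H:%M:%S')} ERROR {exc}")
    time.sleep(INTERVAL_S)
```

In practice you would run this from a source outside the data center and feed the results into whatever alerting you already have, rather than printing to stdout.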
Everything You Need to Do
• Record and Playback vs. HTTP Capture
  – R/P acts like a proxy capturing HTTP and allows playback like a video recorder.
  – HTTP Capture: Live HTTP Headers, Fiddler and Firebug
Everything You Need to Do
• Partial Payloads vs. Full Payloads
  – Use partial payloads for code simplicity and manageability.
    • Emphasis on the server-side request.
    • Accelerates scripting delivery time.
  – Use full payloads for total round-trip time.
    • Dynamic, but controlled, content keeps things simple.
    • Still doesn't capture browser time.
• Nugget: introduce automated Selenium or WebPageTest script(s) at sampled intervals during the life of the test to capture browser and end-to-end time (see the sketch below).
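A hedged sketch of that nugget: fire a real-browser timing sample every few minutes while the HTTP-level load test runs. It uses Selenium with Chrome and the Navigation Timing API; the URL, sampling interval and sample count are placeholders.

```python
import time
from selenium import webdriver

URL = "https://learn.example.edu/webapps/login/"   # hypothetical page under test
SAMPLE_INTERVAL_S = 300                            # one browser sample every 5 minutes

def browser_sample():
    driver = webdriver.Chrome()
    try:
        driver.get(URL)
        # Navigation Timing gives end-to-end browser time, not just first-to-last byte
        timing = driver.execute_script(
            "var t = window.performance.timing;"
            "return {total: t.loadEventEnd - t.navigationStart,"
            "        backend: t.responseEnd - t.navigationStart};")
        print(f"total={timing['total']} ms, server+network={timing['backend']} ms")
    finally:
        driver.quit()

for _ in range(12):                # e.g. one hour of samples during the test
    browser_sample()
    time.sleep(SAMPLE_INTERVAL_S)
```

Interleaving these samples with the load-test timeline lets you correlate server-side percentiles with what a real browser actually experienced at the same moments.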
Everything You Need to Do
• Arrival Rates and Load Levels: Approach Overview
  – Activity Accumulator (Event_Type = Login): more precision, and already available for query.
  – Log Analysis (/webapps/login): needs parsing and analysis before it can be queried, but yields much more data, such as host and HTTP statistics (see the sketch below).
  – Changes in the SP8 Authentication Framework: will provide a hybrid of the first and second approaches, without the HTTP statistics.
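A hedged sketch of the log-analysis approach: count hits to /webapps/login per minute to estimate arrival rates. The regex assumes a common/combined web-server log format and the file name is a placeholder; adjust both to your front-end's actual logging.

```python
import re
from collections import Counter

# Capture the timestamp to minute precision and the request path
LOG_LINE = re.compile(r'\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}):\d{2}.*?"(?:GET|POST) (/\S*)')

arrivals = Counter()
with open("access.log") as fh:                     # hypothetical access log
    for line in fh:
        m = LOG_LINE.search(line)
        if m and m.group(2).startswith("/webapps/login"):
            arrivals[m.group(1)] += 1              # key is dd/Mon/yyyy:HH:MM

peak_minute, peak_count = max(arrivals.items(), key=lambda kv: kv[1])
print(f"peak arrival rate: {peak_count} logins/min at {peak_minute}")
```

The per-minute counts become the arrival-rate curve you replay in the load test; the peak value is what your scenario ramp has to reach and sustain.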
Everything You Need to Do
• Analytics: study both as a transparent lens.

Behavioral
• Closest way to show session lengths.
• Build probability models.
• Narrow down the concentration of systems (see reads/writes).
• Network traffic.
• Application behavior.

Volumetric/Data Composition
• Shows writes, but not reads.
• Good for histograms when studying data orientation.
• Study growth and adoption patterns over time.
• Time-stamped data: time clusters.
• Adoption of tools and data.
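A small sketch of the volumetric side: cluster time-stamped rows by hour of day to see when activity concentrates. The input file name and column are assumptions about an export you would produce yourself.

```python
import csv
from collections import Counter
from datetime import datetime

by_hour = Counter()
with open("activity_export.csv") as fh:              # hypothetical time-stamped export
    for row in csv.DictReader(fh):
        ts = datetime.fromisoformat(row["timestamp"])  # e.g. 2012-10-01T14:32:00
        by_hour[ts.hour] += 1

# Crude text histogram of hourly activity
for hour in range(24):
    print(f"{hour:02d}:00  {'#' * (by_hour[hour] // 100)}  {by_hour[hour]}")
```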
Everything You Need to Do
• SLAs: acceptance criteria layered on top of performance and scalability requirements.
  – Objectives: attributes of performance and attributes of scale.
  – Targets: desired resource conditions.
  – Thresholds: variance and the maximum resource conditions you are willing to accept.
Everything You Need to Do
Validation/Verification
• Functional
  – Does it work?
  – Text-check matching
  – Do I see more than I should?
  – User abandonment
• Technical
  – Error rate
  – HTTP failures
  – Post-test log mining
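A hedged sketch of combining the functional and technical checks above in a load-test step: verify a text match, flag unexpected content, and count HTTP failures for the post-test error-rate calculation. The URL, expected text and error string are illustrative placeholders.

```python
import requests

results = {"passed": 0, "failed": 0, "http_errors": 0}

def validate(resp, must_contain, must_not_contain=("error has occurred",)):
    """Text-check matching plus HTTP failure counting for one response."""
    if resp.status_code >= 400:
        results["http_errors"] += 1
        results["failed"] += 1
        return False
    body = resp.text
    ok = must_contain in body and not any(bad in body for bad in must_not_contain)
    results["passed" if ok else "failed"] += 1
    return ok

resp = requests.get("https://learn.example.edu/webapps/login/", timeout=30)
validate(resp, must_contain="Login")
print(results)
```

Most commercial and open-source tools have these checks built in; the point is that every scripted step should carry one, or a test can "pass" while serving error pages at full speed.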
Everything You Need to Do
Areas of Focus and Recommendations
• Key OS Metrics: CPU, Memory, Network (Retransmissions, Bytes Sent/Received) and I/O (IOPS)
• Key Application Metrics: HTTP Codes, Hits/Time, Pages/Time, Failure/Error Rate
• Key Container & JVM Metrics: GC Metrics, Thread Counts, JDBC Counts, Cache Statistics, JK Sockets
• Key DB Statistics: Trans/Sec, Wait Events, SQL Stats (Logical I/O, Physical I/O, Sorts, Parses, Executions), Segment/File Stats, Object Statistics, Cache Stats, Instance Memory Stats
• Key Storage Metrics: Latency and Seek Time, Disk Throughput, Bytes/Time, IOPS, Cache, Packet Issues, Reads vs. Writes
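If you have nothing else in place, here is a hedged sketch of sampling the key OS metrics from that list (CPU, memory, network bytes, disk operation counts) on a node during a test, using the psutil library. The ten-second cadence, one-hour duration and output file name are arbitrary choices.

```python
import csv
import time
import psutil

with open("os_metrics.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["ts", "cpu_pct", "mem_pct",
                     "net_sent", "net_recv", "disk_reads", "disk_writes"])
    for _ in range(360):                          # roughly one hour of samples
        net = psutil.net_io_counters()
        disk = psutil.disk_io_counters()
        writer.writerow([
            int(time.time()),
            psutil.cpu_percent(interval=1),       # 1-second CPU sample
            psutil.virtual_memory().percent,
            net.bytes_sent, net.bytes_recv,
            disk.read_count, disk.write_count,
        ])
        time.sleep(9)
```

Dedicated monitoring (Cacti, SiteScope, Foglight and the like, mentioned later) is the real answer; a throwaway collector like this is only a stopgap so a test is never run blind.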
When You Can’t Do It Yourself
Load Testing Experts
• Key Differentiators?
  – Portability of tools
  – Cost breakdown
  – Reuse of tools
  – Methodologies
  – Sample documentation
  – How do they account for gaps?
  – What is the measure of success?

Technology Experts
• Key Differentiators?
  – What tools will they use?
  – Leave-behinds?
  – Methodologies?
  – Benefits of engagement?
  – Dependency on you and your team?
  – How do they account for gaps?
  – What is the measure of success?

Turn-Key Solutions
• Key Differentiators?
  – Really a load-testing expert, or a technology expert in disguise?
  – What tools will they use?
  – Cost breakdown?
  – Methodologies
  – Dependency on you and your team?
  – Artifacts
  – What is the measure of success?
When You Can’t Do It Yourself
• Lead the objective planning phase: gather requirements and conduct functional interviews.
• Establish a relationship with the Performance Engineering team at Blackboard.
• Produce a detailed test plan.
• Execute the benchmark lifecycle.
• Produce a summary report:
  – Short-term recommendations for configuration
  – Feedback to Blackboard
  – Long-term capacity planning guidance
When You Can’t Do It Yourself
Planning
• Goals, Objectives, Targets and Thresholds
• Scenario Definition and Test Bed Data Set
• Project Plan and Contingency Planning

Test Infrastructure
• Scripting Methodology
• Load Generation Capabilities
• Measurement Tools

Benchmarking Approach
• Forensics Methodology
• Issue Escalation
• Tuning/Optimization
• Conclusion
How We Do It
Modeling Tools
• BbTrends (Custom SQL): histograms, star schema, adoption and growth pattern analysis
• Log Mining: R, Sawmill and MS Log Parser

Measurement Tools
• ALM: dynaTrace and Quest Foglight
• Infrastructure: Cacti, SiteScope and Quest Foglight
• Database: Quest Performance Analysis, SQL Server Nexus, ASH/AWR and DMVs

Load Test Tools
• HP LoadRunner/Performance Center, Jmeter and SOASTA
• Selenium and WebPageTest

Analysis Tools
• Galileo: home-grown analytics engine
How We Do It

State Modeling
• State: read-only operation, usually a navigation.
• Action: write-based operation with transactional context.
• Action State: Form

Script Composition
• Every page and operation is fully componentized for reusability.
• Identify vertical and horizontal state movement in the application.
• Conventions: Naming, Abandonment, Parameters

Simple vs. Complex
• Full vs. partial payloads
• Dynamic data awareness
• Dynamic data submission
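A hedged sketch of the componentization idea: each page or operation becomes its own reusable function, labeled as a read-only state or a write-based action, and scenarios are composed from those pieces. The endpoints, form field names and scenario are illustrative placeholders, not Blackboard's actual URLs.

```python
import requests

def login(session, base, username, password):
    """Action: write-based operation with transactional context."""
    return session.post(base + "/webapps/login/",
                        data={"user_id": username, "password": password})

def view_course(session, base, course_id):
    """State: read-only operation, a navigation."""
    return session.get(base + "/webapps/blackboard/execute/courseMain",
                       params={"course_id": course_id})

def student_scenario(base, username, password, course_id):
    """Compose reusable components into one business transaction."""
    with requests.Session() as s:
        login(s, base, username, password)
        view_course(s, base, course_id)
```

The same structure applies whether the components are JMeter modules, LoadRunner actions or Python functions: one component per operation, reused across every workload that touches it.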
DEMO
How We Do It
• Base Workload
• Abandonment Workload
• Interactive Tuning Process
• Forecast Theoretical
  – Determine Theoretical Load/Performance
  – Model Targets and Thresholds
  – Develop Ratios (see the worked example below)
  – Prove PAR (Discrete Model)
• Workloads 1 through N
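A small worked example of developing ratios: split a target concurrency across workloads using an observed mix, then sanity-check it against a per-node threshold from calibration. Every number here is made up for illustration.

```python
target_concurrency = 2000      # modeled target from the goals
max_per_app_node = 600         # threshold: max concurrency one node sustained in calibration

workload_mix = {               # ratios mined from logs/analytics (hypothetical)
    "login_and_portal": 0.40,
    "course_content":   0.35,
    "assessments":      0.15,
    "discussions":      0.10,
}

per_workload = {name: round(target_concurrency * ratio)
                for name, ratio in workload_mix.items()}
nodes_needed = -(-target_concurrency // max_per_app_node)   # ceiling division

print(per_workload)
print(f"theoretical app nodes needed: {nodes_needed}")
```

The per-workload numbers become the virtual-user allocation for the aggregate test, and the node estimate is the theoretical forecast that the discrete PAR tests then have to prove or disprove.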
How We Do It
• User Abandonment Tests
• Calibration Tests
• Aggregate Workload Tests
• Endurance Tests
• Performance Archetype (PAR) Tests
• Adoption Tests: S-Curve, Normal Distribution and Altruistic L-Curve
• Regression Tests
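For the adoption tests, here is a hedged sketch of computing an S-curve ramp: how many virtual users should be active at each step so load grows the way real adoption does. The logistic shape, user count and step count are illustrative assumptions.

```python
import math

def s_curve_ramp(total_users, steps, steepness=10.0):
    """Return active-user counts per step following a logistic (S-curve) ramp."""
    return [round(total_users / (1 + math.exp(-steepness * (i / (steps - 1) - 0.5))))
            for i in range(steps)]

# e.g. a 12-step ramp toward 1000 users: slow start, steep middle, long plateau
print(s_curve_ramp(total_users=1000, steps=12))
```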
Steve Feldman (@seven_seconds)

Please provide feedback for this session at [email protected].

The title of this session is: So Your Boss Wants You to Performance Test the Blackboard Learn™ Platform Before You Go Live with the Next Release