Measuring Managed Services
IT Infrastructure Partnership Team
December 9, 2008
This document explains the performance measures established to ensure customers receive high-quality, reliable service from the IT Infrastructure Partnership (ITP).
After reading this document, you will understand:
• Service goals and measures established for the enterprise
• Performance results to date
• Plans to continue improving service where necessary
Purpose of Implementing Managed Services
Key to managed services is measuring performance and sharing service results with customers
• Adopt a more business-like approach to managing the state’s information technology resources and to operating associated systems
• Modernize IT service delivery with measurable results at predictable costs
• Ensure customers receive the predictable, reliable IT services needed to carry out the mission of their agency
Managed services result from five transformations:
• Asset management: managing asset changes appropriately in order to present accurate bills
• Technology transformation: ensuring the technology and tools are in place to monitor and manage services from an enterprise perspective
• Mature processes: standardizing and optimizing processes and procedures to ensure service quality and integration of operations; less “fire fighting” and more process rigor
• Workforce transformation: preparing the workforce with the skills to efficiently deliver managed services
• Defined services & measures: well-defined and understood services and performance targets, with continuous improvement initiatives triggered by root cause analysis
In the contract with the state, Northrop Grumman committed to 196 enterprise service measures spanning various IT areas…
SLA examples:
• Network Availability
• Desktop Repairs
• Help Desk Response
• Incident Resolution
• Mainframe Availability
• Server Availability
• Accuracy of Asset Inventory
• Incremental Backup
• Security Scanning
• Disaster Recovery Testing
…the goal of these measures is to ensure quality, reliable service, provide accountability, and identify areas for improvement
• SLAs measure service to ensure quality and reliability
• They identify trends and problem areas to guide improvements
• Payment penalties associated with SLAs bring accountability to the contract and help focus service efforts on areas that matter most to customers
• In general, payment penalties are between VITA and Northrop Grumman
SLA Implementation Timeline
The ITP implemented 49 of the performance measures in July 2008 and will implement all 196 by July 2009

Tower                  Jul 1 '08  Aug 1 '08  Jan 1 '09  Mar 1 '09  May 1 '09  Jun 1 '09
Data Center                20          -          -          -          -         39
Internal Applications      11          -          -          -          -          -
Voice/Video                12          -          -          -          8          -
Help Desk                   4          -          -         18          -          -
Desktop                     1          -          -         13          -          -
Security                    1          -          6          -          -          5
Network                     -          -         25          -          -          -
Messaging                   -          -          -          -          -         10
Other *                     -          6          -          -          -         17
TOTAL                      49          6         31         31          8         71

Note: “Other” includes SLAs related to disaster recovery, chargebacks, end user surveys, asset tracking, and incident resolution; the six column totals sum to all 196 contractual measures. The implementation timeline may be adjusted per discussions between VITA and Northrop Grumman.
SLAs are implemented in phases to ensure that measures are accurate and realistic before each SLA is eligible for penalties
• Interim (non-active SLAs): This is essentially a testing phase in which Northrop Grumman is required to produce initial reports as soon as the reporting mechanisms for an SLA are developed, but is not yet required to meet thresholds. The goal during this phase is to ensure data collection methods are accurate and there is an opportunity to correct initial problem areas.
• Phase I, performance credit eligible, enterprise wide (active SLAs): These SLAs are considered active, and Northrop Grumman is required to report on and meet or exceed thresholds for them or be subject to penalties.
• Phase II, performance credit eligible, per location (active SLAs): These SLAs are also considered active; however, in most cases they are measured by location. When an SLA in this category is missed, Northrop Grumman is in most cases penalized for each location disrupted, as sketched below. Examples include network and voice connectivity.
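To make the two penalty models concrete, here is a minimal Python sketch. It assumes a simplified model in which an enterprise-wide miss triggers a single performance credit while a per-location miss triggers one credit per disrupted location; the class, function, and credit amounts are illustrative assumptions, not the contract's actual formula.

```python
# Illustrative sketch of the two penalty models described above. All names
# and credit amounts are hypothetical; the contract's actual performance
# credit formula is not reproduced here.
from dataclasses import dataclass

@dataclass
class SlaResult:
    sla_id: str
    target: float              # e.g., 0.9970 for a 99.70% threshold
    actual: float              # enterprise-wide measurement for the period
    per_location: bool         # Phase II SLAs are measured by location
    locations_missed: int = 0  # locations disrupted (per-location SLAs only)

def performance_credits(result: SlaResult, credit_per_miss: int = 1) -> int:
    """Return how many performance credits a reporting period triggers."""
    if result.per_location:
        # Phase II: in most cases, one credit for each disrupted location.
        return result.locations_missed * credit_per_miss
    # Phase I: at most one enterprise-wide credit per period.
    return credit_per_miss if result.actual < result.target else 0

# An enterprise-wide miss counts once...
print(performance_credits(SlaResult("1.41", 0.95, 0.77, per_location=False)))  # 1
# ...while a per-location SLA accrues one credit per disrupted location
# (the enterprise-wide 'actual' value is unused in that branch).
print(performance_credits(SlaResult("8.50", 0.997, 0.0, per_location=True,
                                    locations_missed=2)))                      # 2
```

Interim SLAs would simply be skipped by such a calculation, since reporting during that phase carries no penalties.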
In the initial months, reports show that most targets were met for performance credit eligible SLAs… …and improvements were made to address narrow misses

Actual example (monthly results listed oldest to newest):

Tower             SLA #   Measure                                          Target   Monthly Results
Cross Functional  1.33    Full Backup Summary                              99%      99%, 99%, 99%
Cross Functional  1.35    Archive Backup Summary                           99%      100%, 100%, 100%
Cross Functional  1.41    Restore Request – Production Systems at CESC     95%      77%, 100%, 100%
Desktop           5.41    Procurement of New Devices                       95%      Note A
Mainframe Server  7.11    Mainframe OS (Class 1) Availability              99.8%    100%, 100%, 100%
Mainframe Server  7.13    CESC/Mainframe Production Sub-Systems – Unisys   99.5%    100%, 100%, 100%
Mainframe Server  7.16    QA & Test Servers                                98%      Note B
Mainframe Server  7.17    Development Servers                              99%      Note B
Security          3.41    Security – Vulnerability Scanning – Tracking     98%      98%, 95%, 100%

Note A: No instances during the reporting interval. Note B: No performance credit eligible infrastructure to measure.
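The “narrow miss” framing above can be made mechanical: flag any month whose actual falls below target but within a small margin. A minimal sketch; the 5-percentage-point margin is an arbitrary illustration, not a contractual definition.

```python
# Classify a monthly SLA result against its target. The "narrow miss"
# margin is an arbitrary illustration, not a contract term.
def classify(actual: float, target: float, margin: float = 0.05) -> str:
    if actual >= target:
        return "met"
    return "narrow miss" if actual >= target - margin else "miss"

# SLA 3.41 (vulnerability scanning, target 98%) across three months:
for month_actual in (0.98, 0.95, 1.00):
    print(classify(month_actual, 0.98))   # met, narrow miss, met
```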
Interim SLAs for incident management are currently being tracked and we are working to improve problem areas… …in many cases, SLA performance will improve over the next year as the new technology and processes are put in place to improve service

Dates in parentheses show when each SLA is scheduled to become performance credit eligible; monthly results are listed oldest to newest.

Tower: Cross Functional (incidents resolved, critical data center locations)
SLA #          Measure      Target          Monthly Results
1.12 (06/09)   Severity 1   90% < 4 hrs     66%, 40%, 54.5%
1.13 (06/09)   Severity 2   95% < 8 hrs     84%, 64%, 90%
1.14 (06/09)   Severity 3   95% < 16 hrs    51%, 50%, 64%
1.15 (06/09)   Severity 4   95%             100%, 100%, 100%

Tower: Cross Functional (incidents resolved, other locations)
SLA #          Measure      Target          Monthly Results
1.21 (06/09)   Severity 1   85% < 8 hrs     49%, 53%, 68%
1.22 (06/09)   Severity 2   90% < 16 hrs    63%, 63%, 86%
1.23 (06/09)   Severity 3   90% < 18 hrs    57%, 56%, 74%
1.24 (06/09)   Severity 4   95%             100%, 100%, 100%

Tower: Help Desk
SLA #          Measure                           Target                      Monthly Results
4.21 (03/09)   Average Speed to Answer           60 sec                      18s, 22s, 35s, 19s, 54s
4.24 (03/09)   Average Call Abandon Rate         <= 5%                       1%, 2%, 3%, 1%, 6%
4.25 (03/09)   Average Email Response Speed      90% <= 1 hr response time   98%, 97%, 90%, 97%, 93%
4.31 (03/09)   Average First Call Resolve Rate   70%                         86%, 87%, 85%, 84%, 84%
4.32 (03/09)   Average Shrink Wrap Resolutions   90% <= 2 hours              70%, 63%, 61%, 50%, 43%
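For illustration, several of the help desk measures above can be computed from raw call records as sketched below. The definitions assumed here (speed to answer averaged over answered calls, abandon rate as abandoned calls over all offered calls, first call resolve rate over answered calls) are common ones, but the contract's exact definitions may differ.

```python
# Hypothetical computation of three help desk measures from raw call
# records; field names and metric definitions are assumptions.
def help_desk_metrics(calls):
    """calls: dicts with 'answer_seconds' (None if the caller abandoned)
    and 'resolved_first_call' (bool, meaningful for answered calls)."""
    answered = [c for c in calls if c["answer_seconds"] is not None]
    abandoned = len(calls) - len(answered)

    asa = sum(c["answer_seconds"] for c in answered) / len(answered)
    abandon_rate = abandoned / len(calls)
    fcr = sum(c["resolved_first_call"] for c in answered) / len(answered)
    return asa, abandon_rate, fcr

calls = [
    {"answer_seconds": 12, "resolved_first_call": True},
    {"answer_seconds": 40, "resolved_first_call": False},
    {"answer_seconds": None, "resolved_first_call": False},  # abandoned
    {"answer_seconds": 20, "resolved_first_call": True},
]
asa, abandon, fcr = help_desk_metrics(calls)
print(f"Speed to answer {asa:.0f}s (target 60 sec), "
      f"abandon {abandon:.0%} (target <= 5%), "
      f"first call resolve {fcr:.0%} (target 70%)")
```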
Per location SLAs will primarily consist of network and voice availability SLAs

Dates in parentheses show when each SLA is scheduled to become performance credit eligible; monthly results are listed oldest to newest.

Tower: Network
SLA #          Measure                                 Target              Monthly Results
8.40 (01/09)   Router Connectivity – Large             99.95%              0/11, 0/11, 3/11, 0/13, 0/15, 0/22, 0/22, 1/26
8.41 (01/09)   Router Connectivity – Medium            99.95%              0/13, 0/13, 1/22, 0/27, 0/38, 0/45, 2/54, 2/63
8.42 (01/09)   Router Connectivity – Small, Critical   99.95%              0/0 in every month
8.43 (01/09)   Router Connectivity – Small             99.70%              2/567, 0/567, 13/657, 0/717, 4/778, 12/828, 17/874, 12/878
8.50 (01/09)   LAN Connectivity – Large                99.70%              0/11, 2/11, 2/11, 1/13, 0/15, 0/22, 1/22, 1/26
8.51 (01/09)   LAN Connectivity – Medium               99.70%              0/13, 0/13, 0/22, 1/27, 0/38, 0/45, 0/54, 3/63
8.52 (01/09)   LAN Connectivity – Small, Critical      99.70%              0/0 in every month
8.53 (01/09)   LAN Connectivity – Small                99.70%              0/567, 0/567, 0/657, 5/717, 2/778, 1/828, 2/874, 65/878
8.81 (01/09)   Network Transit Delay                   < 80ms RTD          0/591, 54/591, 105/690, Note A, 104/940, 145/895, 202/895, 220/1030
8.82 (01/09)   Network Packet Loss                     <= .05% Data Loss   0/591, 1/591, 3/690, Note A, 3/940, 3/895, 2/895, 7/1030

Note A: No instances during the reporting interval.
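A short worked example of reading the monthly cells, assuming each fraction reports locations that missed the target out of locations measured that month (an interpretation consistent with the per-location phase description, though the deck does not define the cells):

```python
# Reading a per-location cell such as "2/11", under the assumption that it
# means "locations missing the SLA / locations measured" for the month.
def share_of_locations_meeting(missed: int, measured: int) -> float:
    return (measured - missed) / measured

# LAN Connectivity - Large: a month reported as 2/11 means 9 of 11
# measured locations met the target.
print(f"{share_of_locations_meeting(2, 11):.2%}")   # 81.82%
# Router Connectivity - Large: a month reported as 1/26 gives
print(f"{share_of_locations_meeting(1, 26):.2%}")   # 96.15%
```

Once these SLAs become active in January 2009, each location counted in a numerator could be penalty eligible under the Phase II model described earlier.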
Initial metrics reflect performance for the entire enterprise, but Northrop Grumman is working to provide select metrics for each agency
• Initial metrics reflect performance for the entire enterprise
• To produce metrics by agency, Northrop Grumman needs to develop proper coding and reporting mechanisms to extract agency-specific data
• A team is currently identifying which measures could be provided on an agency-by-agency basis and working to develop reporting tools; agencies will be involved in the process
• A limited number of agency-specific metrics will be available later this winter
• CAMs will share these metrics with agencies as they are available
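As an illustration of the coding and reporting work described above, producing per-agency metrics amounts to tagging each service record with an agency code and aggregating by that code. A minimal sketch; the record layout, field names, and agency codes are hypothetical:

```python
# Hypothetical per-agency aggregation: tag each record with an agency code,
# then roll up SLA compliance by agency.
from collections import defaultdict

incidents = [
    {"agency": "VDOT", "met_sla": True},
    {"agency": "VDOT", "met_sla": False},
    {"agency": "DMV",  "met_sla": True},
]

by_agency = defaultdict(lambda: [0, 0])   # agency -> [within SLA, total]
for rec in incidents:
    by_agency[rec["agency"]][0] += rec["met_sla"]
    by_agency[rec["agency"]][1] += 1

for agency, (met, total) in sorted(by_agency.items()):
    print(f"{agency}: {met}/{total} incidents within SLA ({met/total:.0%})")
```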
The Commonwealth will also incorporate SLAs into updated MOUs with customers
• The Commonwealth recently kicked off efforts to update the Memorandums of Understanding to which agencies agreed in 2005
• The new MOUs will reflect the Commonwealth Infrastructure Agreement (CIA) and creation of the IT Partnership
• They will now include significant detail on SLAs and performance expectations
• Drafts are currently being reviewed by the PAC and Auditor of Public Accounts (APA)
• The Commonwealth expects to solicit agency input for base MOUs in Q1 2009
Next Steps
• Continue to post and share enterprise performance measures over the months ahead
• Work with agencies to establish and begin sharing agency-specific metrics
• Establish new MOUs for customer agencies
• Make adjustments to service delivery as needed
• Work toward providing comprehensive performance measures, including P2P, RFS, and SLAs, at a later date