
Transcript of Operational Risk Training Managing Operational Risk & AMA Toronto Nov 3, 2011

Page 1: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

1

Operational Risk Training

Managing Operational Risk & AMA

Toronto Nov 3, 2011

Bogie Ozdemir & Evren Cubukgil

Page 2: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

2

Agenda - Morning

8:30 – 10:30 Introduction

o Principles and Objectives of an effective Operational Risk Framework

o Earlier Practices and Lessons Learned

o Basel II & Solvency II - what lies ahead

Designing a Comprehensive AMA Framework

o Necessary components and their inter-connectivity

o Internal Data

10:30 – 11:00 Coffee Break

11:00 – 12:00 o External Data

o Scaling External Data

o Quantile Regression

12:00 – 13:00 Lunch Break

Page 3: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

3

Agenda - Afternoon

13:00 – 14:30 o Qualitative Elements: RCSA, Scenario Analysis

14:30 – 15:00 Coffee Break

15:00 – 16:15 o Quantitative Elements: Distribution Choices – Frequency and Severity; Fitting to truncated data; Change of Measure; Goodness of Fit, Stability; EVT – Comparison with LDA; False Precision

16:15 – 16:30 Coffee Break

16:30 – 17:30 Governance and Validation; Operationalizing AMA

o Management Buy-In - Roles and Responsibilities

o IT Infrastructure

o Allocation of Operational Risk Capital: appropriate level of granularity, allocation of diversification, allocation of capital generated by shared service centers

o Driving business benefits: Utilizing AMA in effective capital and Risk Management

Page 4: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

4

8:30 – 10:30 Introduction

o Principles and Objectives of an effective Operational Risk Framework

o Earlier Practices and Lessons Learned

o Basel II & Solvency II - what lies ahead

Designing a Comprehensive AMA Framework

o Necessary components and their inter-connectivity

o Internal Data

Page 5: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

5

Principles of an Effective Operational Risk Framework

1. Actionable – facilitate effective operational risk and capital management – Models don’t manage Op Risk, people do

2. Facilitate Institutional Learning – We need to connect the dots

3. Reliable, Consistent, Stable across time and the organization – Relative accuracy is key

4. Defendable

5. Quantifiable – We cannot manage it if we cannot quantify it

6. Complementary, marry Expert Judgment and Analytics - Use all Available Information

Page 6: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

6

Earlier Practices and Lessons Learned

1. SCORE-CARD LIKE APPROACHES

• A score is assigned to each Risk Type based on subjective criteria (perceived risk, effectiveness of controls, audit findings, etc.). Scores are aggregated to arrive at an overall score using weights/multipliers. Top-down capital numbers are assigned

• An attempt to make use of expert judgment, provide some incentive to manage risk, but

• Subjective, Simplistic and not defendable

[Illustrative scorecard table: Risk Types 1-4, each with a Score, Weight, Overall Score and Capital; scores bucketed as High Risk / Medium Risk / Low Risk]

Page 7: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

7

Earlier Practices and Lessons Learned

2. SINGLE EQUIVALENT SCENARIO QUANTIFICATION

• Can have different starting points: a single overall event-based scenario can be quantified directly as Capital, or event-based scenarios per risk type can be aggregated into Capital

• Discussion can be very useful to surface potential risks but

• No complete Op VaR distribution, thus

• No way of knowing if the scenario corresponds to a capital event at the required confidence level

In a workshop a tail scenario is identified, discussed and quantified. The result is used as Capital

Page 8: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

8

Earlier Practices and Lessons Learned

3. Earlier OpVaR approaches

• A complete Op VaR distribution but,

• In the absence of a reliable supporting governance and validation framework, consistency and defendability are not achieved. Workshops turned into a negotiation process,

• The approach quickly lost credibility and faded!

• Overreliance on the quantitative approach was clearly not the solution

Quantitatively similar to today’s AMA, it uses frequency (typically Poisson) and severity (typically log-normal) distributions to obtain an OpVaR distribution. While internal data are used for high frequency - low severity events covering the body of the distribution, workshops are used for the low frequency – high severity events constructing the tail

Page 9: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

9

Back to the Future I - Capital Estimation Process - Example

Management Action:

It allows the LOBs to manage their operational Capital at Risk and make cost/benefit decisions on controls and insurance.

“Monte Carlo Simulations”

[Chart: Loss Distribution - histogram of simulated losses; x-axis: loss bins from $0 to roughly $13M, y-axis: frequency]

INPUTS

For Frequency: Expert Opinion, IT, etc.; Audit Reports; Management Interviews; Management Reports; Loss History; External Reports

For Severity: Management Interviews; Loss History - Internal and External

[Process diagram:

1. RISK ASSESSMENT FRAMEWORK (Parameter Estimation: Qualitative + Quantitative) - Methodology Inputs

2. MODELING - Risk Categories and Nature of Loss (People, Process, Technology, Ext. Dep.) scored for Likelihood and Severity: Fraud, Rogue Trading, Transactions, Models, Legal, Regulatory, Customer Satisfaction, Loss of Assets

3. OPERATING RISK REVIEW COMMITTEE - Review and Validation, Feedback - OUTPUT]

Page 10: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

10

Back to the Future II - Modeling Approach

Methodology Inputs:

• The identification of the risk types found in the line of business through the assessment process facilitated by the Operational Risk unit of RMG

• The average frequency of the operational loss events by risk type: Expected # of occurrences per year

λ = 100 (100 events per year)

λ = 0.2 (1 event every 5 years)

• The average severity of the operational loss events by risk component: Expected $ loss per event

= $100,000 ($100,000 per event)

The assessment process uses loss events from historical data, industry data, management interviews, audit reports, etc.

Page 11: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

11

Back to the Future Methodology Assumptions

Frequency Distribution = Poisson Distribution

Why Poisson:

One parameter distribution, easy to calibrate

Widely used in insurance industry for event risk modeling

Severity Distribution = Lognormal Distribution

Why Lognormal:

Practical: similar to normal, but cannot take negative values (losses)

$p(x = n) = \frac{e^{-\lambda}\,\lambda^{n}}{n!}$

It is not possible to validate these assumptions using historical data, but they are reasonable and provide the simplest model-based solution.

Page 12: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

12

Back to the Future Monte Carlo Simulation

• For each scenario: the # of events per risk class over the next year is simulated; the $ loss per event per risk class is simulated; the $ losses are aggregated per risk class; cumulative losses for all risk classes are aggregated

• Repeat the same procedure 10,000 times

• Then the worst possible operational loss at the required confidence level is used for Capital Estimation (EC); a minimal simulation sketch follows the table below

Risk Type 1: Average Frequency = 1,500 events/year; Average Severity = $1,000 per event

Risk Type 2: Average Frequency = 0.5 events/year (1 event every 2 years); Severity = $1MM per event

Scenario # | RT1 # of events | RT1 cum. loss (log-normal) | RT2 # of events | RT2 $ loss per event | RT2 cum. loss | Cum. loss, all risks
1 | 1,200 | $20,000 | 0 | 0 | 0 | $20,000
2 | 800 | $15,000 | 1 | $1M | $1M | $1,015,000
3 | 1,300 | $25,000 | 0 | 0 | 0 | $25,000
4 | 1,400 | $23,000 | 2 | $1M | $2M | $2,023,000
… | | | | | |
10,000 | 2,200 | $50,000 | 1 | $1M | $1M | $1,050,000
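A minimal Python sketch of the loop described above (not from the original deck): the frequencies and mean severities are the slide's example values, while the lognormal shape parameter sigma is an assumption chosen purely for illustration.

```python
# Minimal sketch of the Monte Carlo loop described above (not from the deck).
# Frequencies and mean severities are the slide's example values; sigma is an
# assumed lognormal shape parameter, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 10_000

# (annual Poisson frequency, mean $ severity) per risk type
risk_types = [(1500.0, 1_000.0), (0.5, 1_000_000.0)]
sigma = 1.0  # assumed severity shape parameter

total_losses = np.zeros(n_scenarios)
for lam, mean_sev in risk_types:
    # choose mu so that E[severity] = mean_sev for a lognormal(mu, sigma)
    mu = np.log(mean_sev) - 0.5 * sigma**2
    n_events = rng.poisson(lam, size=n_scenarios)   # events per scenario-year
    for i, n in enumerate(n_events):                # aggregate severities
        if n > 0:
            total_losses[i] += rng.lognormal(mu, sigma, size=n).sum()

print(f"EL = {total_losses.mean():,.0f}")
print(f"OpVaR(99.95%) = {np.quantile(total_losses, 0.9995):,.0f}")
```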

Page 13: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

13

Back to the Future Loss Distribution

Outcome - Operational Loss Distribution

[Chart: histogram of Potential Annual Operational Losses in bins from $0 to over $110M, with Frequency bars and a Cumulative % line (0% to 120% axis). High Frequency, Low Severity events form the body of the distribution; Low Frequency, High Severity events form the tail. Capital is read at the 99.95% Confidence Level.]

Page 14: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

14

Back to the Future Loss Distribution

Expected Losses:

• Mean events

• High Frequency, Low Severity (Processing Errors, Fraud, etc.)

• Think of internal controls as a filter

• Residual risk is the risk after the filter, contributing towards ELs

Capital:

• Tail events

• Low Frequency, High Severity (Legal Risk, Image and Reputational, etc.)

• Think of internal controls as a filter

• Catastrophic losses if the filter itself breaks down

• Inherent Risk

Page 15: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

15

The Basel Committee definition

“The risk of loss resulting from inadequate or failed internal processes, people and systems or from external events.”

• Lack of loss data makes it impossible to rely solely on quantitative models for Operational Risk Economic Capital

• Financial industry has developed various qualitative models for assessing Operational Risk

• Leading market practice incorporates available loss data with qualitative methods within a robust conceptual framework.

Page 16: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

16

Qualitative Practices for Capturing OpRisk

Although forward looking, capital numbers are directly calibrated based on qualitative assessments. Results are difficult to challenge - lack of discipline in managing risk.

• Risk Control and Self Assessment (RCSA) process

Businesses qualitatively assess their internal controls and inherent risk factors.

Monitor Key Risk Indicators for Businesses.

• Scenario Analysis

Working together, Business and OpRisk experts formulate a set of worst case scenarios meant to capture extreme tail risk loss events

Page 17: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

17

Quantitative Practices for Capturing OpRisk

Historical loss data is not forward looking, and is not sensitive to changes in current risk factors – little incentive to manage risk

• Using Internal Loss Data to directly calibrate a loss distribution to infer a capital number

Lack of internal loss data - will never observe tail loss events

• Use External Data to calibrate a loss distribution and infer a capital number

Much more data available

Risk profiles can be mismatched between internal business units and industry loss experience

Page 18: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

18

AMA Provides a Formalized Framework to Consistently Manage Operational Risk

Formal and Consistent Risk Management

Leading Market Practice All Available Information

Disciplined and Sensitive to Risk

Encouraged by Regulators

Recognized by Rating Agencies

Internal Data

External Data

Loss Scenarios

RCSA

Focus on Horizon: RCSA,

KRI

Grounded with Industry

Experience: External Data

Manage Risk at Business Unit Level

Strong Message to Shareholders

Page 19: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

19

Basel II & Solvency II - what lies ahead

• AMA has emerged as industry best practice from the requirements set out in Basel II for the measurement and management of operational risk.

Regulatory requirements are broadly defined, so as to allow institutions to tailor approaches to organizational needs

Well documented approach for weighting the use of BEICF, Scenario Analysis, Internal and External Data in measuring Economic Capital.

• Banking regulators are familiar with AMA given long history of implementation and approval (BIS June 2011 supervisory guidelines)

• Regulatory requirements for operational risk management for insurance companies have lagged those applying to banks

• Under Solvency II insurance companies are required to demonstrate that their methodology for quantifying operational risk is aligned with their risk profile, and that the output from that methodology is used in strategic decision making and business planning (Use Test). Standardized or formula based approaches will generally fail the Use Test.

• Following Solvency II and the higher standards for operational risk in the banking sector, regulators in North America will require a move towards best practices by insurance companies.

Page 20: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

20

Basel Operational Risk Event Types

1. Internal fraud: intentional misreporting of positions, employee theft and insider trading on an employee’s own account.

2. External fraud: robbery, forgery, check kiting and damage from computer hacking.

3. Employment practices and workplace safety: compensation claims, violation of employee health and safety rules, organized labor activities, discrimination claims and general liability (for example, a customer slipping and falling at a branch office).

4. Clients, products and business practices: fiduciary breaches, misuse of confidential customer information, improper trading activities on the bank’s account, money laundering and sale of unauthorized products.

5. Damage to physical assets: terrorism, vandalism, earthquakes, fires and floods.

6. Business disruption and system failures: hardware and software failures, telecommunication problems and utility outages.

7. Execution, delivery and process management: data entry errors, collateral management failures, incomplete legal documentation, unapproved access given to client accounts, non-client counterparty misperformance and vendor disputes.

Page 21: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

21

Examples in the Insurance Industry

Basel Event Types - Examples in Insurance:

1. Internal Fraud: Employee theft, claim falsification

2. External Fraud: Claims fraud, falsifying application information

3. Employment Practices and Workplace Safety: Repetitive stress, discrimination

4. Client, Products and Business Processes: Client privacy, bad faith, red-lining

5. Damage to Physical Assets: Physical damage to own office or vehicle fleets

6. Business Disruption and Systems Failures: Processing center downtime, system interruptions

7. Execution, Delivery and Process Management: Policy processing, claim payment errors

Page 22: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

22

AMA and the Use Test

4 broad principles, provided by the EBA, that institutions have to consider at a minimum to satisfy the Use Test:

1. The purpose and use of the AMA should not be limited to regulatory purposes

• The framework is used to manage operational risk exposures across different business lines

• How inputs, estimations, predictions or outputs from the risk measurement system are used in the decision making process (strategic or tactical decision making)

2. The AMA should evolve as the institution gains experience with risk management techniques and solutions

• How the institution ensures that the nature and balance of inputs into the framework are relevant and fully reflect the nature of the business

• How the framework becomes more responsive and robust over time

Page 23: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

23

AMA and the Use Test

3. The AMA should support and enhance the management of operational risk within the organization

• How decisions for improving processes and controls are made

• Operational management objectives and activities are communicated within the organization

4. The use of an AMA should provide benefits to the organization in the management and control of operational risk.

• Senior management has considered action on its receipt of information from the framework

• AMA increases transparency, risk awareness and operational risk management expertise, and creates incentives to improve the management of operational risk throughout the organization.

Page 24: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

24

Designing a Comprehensive AMA Framework

• Advanced Measurement Approach (AMA) has emerged as the best practice already used by Basel II banks and being implemented by Solvency II adopting insurers

• AMA was originally based on an actuarial approach to modeling claim severity and frequency. Its earlier, pre-Basel II implementation was not successful due to sole reliance on quantitative models in a data-starved field without the supporting governance and control infrastructure.

• Under Basel II, AMA has matured, with the supporting governance and control infrastructure being mandatory components. With these components in place, AMA provides Financial Institutions with a useful framework to quantify and manage Op Risk with actionable cost/benefit decision capability.

• AMA is not a quantitative model, but a process and framework, which encompasses both quantitative and qualitative elements within a controlled environment.

Page 25: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

25

AMA Framework Brings 4 Approaches Together

• None of the aforementioned approaches (RCSA, Scenario Analysis, Internal and External data) can be satisfactorily used on its own to manage Operational Risk

• AMA provides a framework in which each of the approaches can be brought together; results of each approach can be validated against each other

• Qualitative assessments (Scenarios and RCSA) bring a forward looking perspective of the risk profile to empirical loss experience

• Capital models and empirical loss data ground qualitative assessments in industry experience.

• Common Language within AMA Framework is Frequency and Severity

Page 26: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

26

Using All Available Information

• Internal Loss Data: Establish an internal loss database to consistently record loss events in Business-Unit / Event-Type categories - Mapping and Aggregation matter

• External Loss Data: Must be purchased from an external consortium (if necessary, mapped to internal loss categories)

• Risk Control and Self Assessment (RCSA): Operational risk management indicators that provide forward-looking assessments of business risk factors as well as the internal control environment. Assessment of the expected (mean) impact of risk types on Business Units. Score risk types according to severity and frequency within Business Units.

• Loss Scenarios: Hypothetical loss scenarios defined over a severity range and according to frequency / likelihood of occurring (e.g., 1 event every 5 years). Assessment of extreme tail events of risk types on Business Units. Developed through formalized and replicable workshops with full representation from business units.

Page 27: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

27

Designing a Comprehensive AMA Framework Necessary components and their inter-connectivity

[Framework diagram (common language: Frequency and Severity):

Inputs: RCSA, Scenario Analysis, Internal Data, External Data - Consortium, External Data - Public

Flow: Scaling -> Curve Fitting per Risk Type -> Aggregation -> Op VaR

Oversight: Validation, Enterprise Consistency Check

Spanning Risk Identification & Quantification through Risk Management]

Page 28: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

28

Internal Data

• Internal data Issues:

EL type losses, certainly not tail

Availability (3 years are required under Basel, but no magic number): how many years of history do we need?

Retroactive data collection

Truncated loss collection (Basel limits: collection over $10K) – collection cost vs accuracy

Changing the culture (Admitting to the op risk event)

Near misses are harder to capture – how to quantify?

Capturing the losses which unfold over a long period of time: time of occurrence, continuous updates

Discounting the losses to estimate the severity - discount rate?

Page 29: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

29

Designing an Internal Loss Database

• Database has to be designed to roll up into Business Unit – Event Types (Units of Measure) for which capital will be calculated in the Framework

• Highest level of event types must correspond to Basel categories for compatibility with external data sources, and participation in consortiums

• Sub categories can vary across business groups depending on the nature of the risk profile

• Sub-Categories to be defined in conjunction with business groups based on common underlying causes of loss events

• How granular to go? Implications for Modeling:

Executive Fraud vs Teller Fraud: should they be treated as different loss categories, or just different degrees of the same loss type?

If they are different, how should intermediate event types (middle management fraud) be allocated?

• Additional levels of granularity benefit RCSA and Scenario Analysis workshops by allowing more detailed analysis of internal data.

• The more granular the loss categories the more diluted the data available for analysis within each category.

Page 30: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

30

Units of Measure

[Units of Measure matrix: rows = risk types (Internal Fraud; External Fraud; Employment Practices and Workplace Safety; Client, Products and Business Processes; Damage to Physical Assets; Business Disruption and Systems Failures; Execution, Delivery and Process Management), columns = Business Units 1-5]

• Within the AMA framework, Capital will be calculated and held at the level of Units of Measure (Business Unit - Event Type categories)

Page 31: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

31

Shared Service Centers

• How to address risks of shared internal service centers within the database and in the broader framework

• Businesses will resist recognizing losses incurred by processes which they do not manage directly, or are shared across the enterprise

• Categorization has to link to capital allocation, if the service center cannot hold capital, risks have to be allocated to business units using the service center, in the same way as risks incurred by outsourced services

• Loss exposure of business units to shared service centers has to be managed carefully to avoid double counting across the enterprise

• Joint exposure to shared services has to be recognized in modeling correlations between units of measure

Page 32: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

32

Recording Operational Losses

• ORX data standards:

• An Operational Risk event is an event that leads the actual outcome(s) of a business process to differ from the expected outcome(s), due to inadequate or failed processes, people and systems, or due to external facts or circumstances.

• Includes Legal Risk:

• Legal Risk is the risk of being sued or being the subject of a claim or proceedings due to non-compliance with legal or statutory responsibilities and/or losses due to inaccurately drafted contracts. This includes the exposure to new laws as well as changes in interpretations of existing law by appropriate authorities and exceeding authority as contained in the contract. This applies to the full scope of Group activities and may also include others acting on behalf of the Group. Legal Risk is a component of Operational Risk.

• Basel II requirements:

• Legal Risk includes, but is not limited to fines, penalties, or punitive damages from supervisory actions, or to judgments or private settlements (Basel II Accord section V. A. §644 - Definition of Operational Risk) or to the reduction in asset values or cashflows.

Page 33: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

33

Anatomy of an Operational Loss

• Dates Define a Loss Event:

1. Date of Occurrence: the date when the event happened or first began,

2. Date of Discovery: the date on which the firm became aware of the event, and

3. Date of Recognition / Accounting Date: the date when a loss or reserve/provision was first recognized in the P&L

Date of Resolution - (Duration of loss event)

• This is a standard requirement for recording losses in industry consortiums

• Standard practice is to leave loss dates unchanged, even if further losses are realized over time. If multiple losses are posted at different times in the General Ledger, losses linked to original dates should be updated (Grouped Losses).

Page 34: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

34

Truncation of Recorded Loss Events

• Loss event databases will typically not record all loss events

• As losses become smaller and smaller there is a point at which the benefit of having the loss available in an AMA framework is outweighed by the effort / cost of recording it.

Small losses are not very informative of exposure to significant operational loss events (High Severity Low Frequency)

• Impact on modeling the likelihood of large losses:

With 10 small losses and 10 medium-to-large losses, dropping the small ones from the database means a loss at the 95th percentile of the full sample sits at only the 90th percentile of the recorded sample, so large losses appear more likely than they are.

Page 35: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

35

Conditional Probability

• Probability that a loss L is less than a, based on a data set where only losses greater than b are reported.

• Based on the definition of conditional probability, we only need to adjust the likelihood function when estimating the probability density function of the severity distribution. If we don’t we’d be overstating the severity.

$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$

$P(L \le a \mid L > b) = \frac{P(b < L \le a)}{P(L > b)}$

$F(L \mid L > b) = \frac{F(L) - F(b)}{1 - F(b)}$
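As a concrete illustration of the adjustment, here is a minimal Python sketch, not from the deck, that fits a lognormal severity to data truncated below a known reporting threshold b by maximizing the conditional likelihood f(x)/(1 - F(b)); the simulated parameter values are assumptions for demonstration only.

```python
# Minimal sketch (assumes lognormal severity and a known threshold b) of
# fitting severity to truncated data with the conditional likelihood above.
import numpy as np
from scipy import stats, optimize

def neg_log_lik(params, losses, b):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    # conditional density: f(x | X > b) = f(x) / (1 - F(b))
    log_f = stats.lognorm.logpdf(losses, s=sigma, scale=np.exp(mu))
    log_tail = stats.lognorm.logsf(b, s=sigma, scale=np.exp(mu))
    return -(log_f - log_tail).sum()

rng = np.random.default_rng(0)
full = rng.lognormal(10.0, 2.0, size=5_000)   # assumed "true" severity
b = 10_000.0                                  # Basel-style collection threshold
observed = full[full > b]                     # only losses above b are recorded

res = optimize.minimize(neg_log_lik, x0=[np.log(np.median(observed)), 1.0],
                        args=(observed, b), method="Nelder-Mead")
print(res.x)  # near (10, 2); a naive fit to `observed` would overstate severity
```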

Page 36: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

36

How to Record & Use Near Miss Losses

• Stands between actual loss and hypothetical loss (scenario?)

• Were losses averted by chance, or were losses averted due to controls?

Is there a difference?

• Separate categorization in loss database

• How to incorporate into modeling?

Cannot augment realized loss data with near miss events

Need to assess probability of near miss loss occurrence, because it didn’t actually happen: 1 near miss event in 10 years is not a 1 in 10 year event; how many 10 year periods would we have to experience before the near miss event was actually realized?

Incorporate into modeling with hypothetical loss scenarios.

Page 37: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

37

Grouped losses

• Some operational losses can be characterized by multiple losses over several years.

Several losses occur before their cause is identified and remediated

Regulatory fines, or legal settlements are linked to operational failures in the past

• All losses should be tied to their underlying causal event. This could require previous loss event data points to be updated over time, and calls for careful management of the internal loss event databases.

• ORX Requirements:

An event may have multiple associated losses. In such cases, an investigation may be necessary to identify the “root event”—that is, the initial event without which none of the related losses would have occurred. For ORX purposes, the root event is included in a single record, containing all related losses, and is classified according to its specific event characteristics.

Page 38: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

38

Discounting Grouped Loss Events to the Date of the Causal Event

• Grouped loss events reported to ORX are not discounted, due to complexity and availability of information:

• Which discount rate to use?

Risk free discount rate – at the time of the original event? – at the time of the newly reported event?

Should one use risk adjusted discount factors? Required rate of return / cost of capital?

• What is the impact on modeling of not discounting grouped losses realized over extended time periods?

Page 39: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

39

Discounting Grouped Loss Events to the Date of the Causal Event - Ctd

• Risk adjusted discount rate:

Whose perspective is the operational loss assessed from? Debtholders? Policyholders? Deposit holders? Shareholders?

Magnitude of EC is assessed from the debtholder or policyholder perspective, return on EC is assessed from the shareholder’s perspective.

Operational losses experienced are distinct from Economic Capital held against operational risk, and on which firms must earn a required rate of return for shareholders – this is not the relevant discount rate to apply to grouped losses.

Shareholder’s required rate of return would take into account all un-diversifiable risk in returns generated by the financial institution: including market, credit, operational, etc. (weighted average cost of capital)

• Conceptually, Operational loss events likely have low levels of un-diversifiable / systematic risk

Page 40: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

40

Discounting Grouped Loss Events to the Date of the Causal Event - Ctd

• There are difficulties in obtaining data for estimating the time series correlation of loss severity between observed operational loss events and other risk types and returns. When losses take place over several years, the time horizon is uncertain, and in some cases independent of loss severity (regulatory fines, lawsuits)

• Applying risk adjusted discount factors to operational losses which take place over multiple years would result in less conservative estimates of losses. Given uncertainty over methodology it’s always better to be conservative.

• If operational losses have little or no systematic or un-diversifiable risk, they should be discounted over multiple years based on the risk free interest rate.

• The relevant discount rate is not that prevailing at the time of the initial causal event, nor the rate prevailing at the time the loss impacts a firm’s P&L

• The relevant rate is that prevailing at the time of modeling the capital, as this is the perspective from which the potential loss is being assessed.
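Under that view, discounting reduces to present-valuing each posted loss at the risk-free rate prevailing at modeling time. A minimal sketch, with hypothetical example inputs:

```python
# Minimal sketch: present value of a grouped loss's postings, discounted at a
# risk-free rate r (the rate prevailing at the time of modeling, per above).
def discounted_loss(cash_flows, years_after_event, r):
    """cash_flows: losses posted over time; years_after_event: their timing."""
    return sum(cf / (1 + r) ** t for cf, t in zip(cash_flows, years_after_event))

# hypothetical example: $2M, $1M and $0.5M posted 1, 3 and 5 years after the event
print(discounted_loss([2e6, 1e6, 5e5], [1, 3, 5], r=0.03))
```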

Page 41: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

41

Internal Data and Expected Loss

• Given a sufficiently fat tailed distribution, Internal loss data is only informative of the body of the loss distribution – mean loss events

• Conceivably many thousands of loss data points may be required in order to sample from the tail of the loss distribution of an individual business unit event type

• Fitting a severity distribution to internal loss data can be misleading if we only have points around the body of the distribution, and have not observed any tail loss events.

Lack of tail observations may lead to the selection of an inappropriate probability density function used to approximate the severity distribution. Capital figures can vary substantially based on the choice of severity distribution

For a given density function used to approximate the severity distribution of losses, fitted parameter values may be inappropriate given the lack of tail data.

Page 42: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

42

Collecting Internal Data – How Many Years are Needed?

• Generally regulators expect a minimum of 3 years of internal loss data.

The European Banking Authority requires 3 years of internal loss data when institutions initially apply for AMA, and mandates the use of a minimum of 5 years of internal loss data as it is acquired.

• There is no set minimum amount of data that will ensure more accurate results of the model, or superior management of risk in the framework

• The lower the event frequency, the more years of data are needed

• Depending on how quickly an institution grows or undergoes internal changes, data can quickly become stale.

You can’t drive a car by looking in the rear-view mirror

Page 43: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

43

Retroactively Collecting Data

• Retroactively collecting losses can skew the perspective of the risk profile:

• When collecting historical losses it is uncertain what percentage of losses can be recovered in given years

• The threshold above which losses are recovered is not fixed and is unknown: may only find 30% of losses between $100k and $500k, and 50% of losses between $50k and $100k.

• When retroactively collecting losses, a sufficiently high threshold must be set so as to ensure that all relevant losses are collected: e.g., retroactively record all losses above $20M or $100M.

• Similar to hypothetical loss events or near miss events, retroactively collected internal losses cannot be added directly to the internal loss event database.

• Retroactively collected loss events can be useful in validating loss distributions (back-testing the largest loss in 10 years)

Page 44: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

44

Setting the Tone at the Top

• Populating the internal loss event database requires strong support from senior executives and the board.

• Business Units may not want to appear to have higher or more frequent operational losses, and may resist reporting initiatives

• Negative equilibrium: there is no incentive to be the first business unit to buy into recording operational losses

• Neglecting to record minor loss events places upward bias on the likelihood of larger loss events that are recorded. – Businesses can benefit by being more diligent about recording lower operational losses.

• Incentivize businesses within the framework: RCSA must be supported by internal data. Incentivize progress:

At first RCSA is supported or validated by collection efforts.

As process matures and data is collected, RCSA becomes supported by results of data collection.

Page 45: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

45

10:30 – 11:00 Coffee Break

Page 46: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

46

11:00 – 12:00 External Data

Scaling External Data

Quantile Regression

Page 47: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

47

External Data

• External data sources:

Consortium (ORX, ORIC)

Public (FIRST, SAS)

Private / Proprietary (Aon OpBase)

• How do we use it?

Scaling

In direct calibration

In stimulating RCSA and Scenario Analysis

In Validation

Complementary: external data covering different regions of the distribution

Page 48: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

48

External Data Sources:

• SAS OpRisk Global Data: the largest database of losses over US$100k, with 25,000 loss events

Losses are categorized according to Basel II event types, and by individual business lines

Facilitates data scaling according to firm size by revenue, assets, net income, number of employees and shareholder equity

Database is populated by commercial online information providers and thousands of publications

• Aon OpBase: quantitative and qualitative information on more than 16,500 losses and incidents experienced by more than 1,600 financial institutions worldwide

Based on both proprietary (through reinsurance brokerage business) and publicly available data

Loss amounts range from US$4 to US$798m.

Page 49: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

49

External Data Sources:

• ALGO – FIRST:

Leading database for scenario analysis

Contains approximately 10,000 events

Collected from publicly available sources

Includes detailed qualitative analysis of each event based on control breakdowns and event triggers

Losses categories span corporate governance, strategic issues, market practices, and business risk

• ORX

Leading consortium database for banks

212,282 operational loss events recorded from 57 member firms

Insurance database is in development with an initial planned number of 8 participating firms

Losses collected since 2002

Page 50: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

50

External Data Sources:

• ORIC

Maintained by the Association of British Insurers (ABI)

Leading operational loss data consortium for insurance companies

Includes over 3000 loss events collected over the last 5 years

3 levels of loss categorization with level 1 losses consistent with Basel II

26 members, European insurance firms

Page 51: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

51

Comparing External Loss Data Sources

• External loss data from different providers correspond to different regions of loss distributions.

• Different collection thresholds have implications for use of external data in modeling or validation

[Diagram: FIRST, ORX/ORIC and AON/SAS data sets cover different regions of the loss distribution, reflecting their different collection thresholds]

Page 52: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

52

Scaling Operational Losses

• External data provided from Operational loss consortiums potentially contain a sufficient number of operational losses to calibrate the tail of loss severity distributions.

• However, loss data is collected across many institutions and business lines, which differ significantly in risk profile.

• The quality of existing controls is not equal across institutions

• Absolute size of business unit in terms of gross revenue or net income, as well as geographical differences significantly affect the magnitude of operational losses

• To date there is no satisfactory loss scaling methodology that would allow financial institutions to directly incorporate external loss data into the calibration of severity loss distributions and the calculation of economic capital.

• The use of external loss data is limited largely to validating the outcomes of RCSA and Scenario Analysis, as well as fitted severity distributions

Page 53: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

53

Scaling Loss Data

• Each loss database provider has developed its own scaling methodology

• Generally scaling methodologies are derived and tested based on the proprietary database (each probably works best with its own)

• Scaling methodologies must be mindful of preserving underlying distributional assumptions: linear transformations of lognormally distributed random variables are not lognormally distributed.

• Predominant methodologies for scaling operational loss data:

Log Linear models of losses

Modeling scale and shape parameters for given distributions

Quantile regression

Page 54: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

54

Log-Linear Models of Losses

• Early studies of loss scaling fit linear models of log-losses onto bank characteristics that are correlated with firm size (log-gross income, log-total assets), as well as dummy variables indicating region, business line or type of loss occurred.

• First term is a deterministic function of a vector of business line characteristics, the second term is a random variable corresponding to the loss distribution of a “standardized” business line

• Many studies use a power law function to describe variation in loss sizes between institutions

$L = f(y)$

$L_i = g(y_i) + f(y_i)$

Page 55: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

55

Log-Linear Models of Losses

• Once the scaling relationship has been determined, internal and external data can conceivably be pooled together.

• Given a sufficient number of observations, tail loss events can be used to calibrate loss severity distributions.

• Scaling functions can be estimated by Ordinary Least Squares (OLS).

• This methodology has the advantage of not making any distributional assumptions for base losses prior to scaling

• Empirical studies have found that fitted regressions have very poor explanatory power with R2 values reported in the range of 10%

$\frac{L_1}{L_2} = \left(\frac{y_1}{y_2}\right)^{\lambda}$

$\ln L_i = \beta_1 + \beta_2 \ln y_i + \varepsilon_i$
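A minimal sketch of this OLS fit, assuming a single exposure indicator (gross income) and hypothetical variable names:

```python
# Minimal sketch (hypothetical inputs): OLS fit of the log-linear scaling
# regression ln L_i = b1 + b2 ln y_i + e_i described above.
import numpy as np

def fit_log_linear(losses, gross_income):
    X = np.column_stack([np.ones(len(losses)), np.log(gross_income)])
    y = np.log(losses)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid.var() / y.var()
    return beta, r2   # empirically R^2 is often only around 10%, as noted above
```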

Page 56: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

56

Log-Linear Models of Losses

• Fitting log-linear regression models of operational losses only fits the mean log-loss level given the levels of certain exposure indicators.

• The mean log-loss level that is fit does not correspond to an intuitive statistic in the original loss data, because the exponential of the mean of log losses does not equal the mean loss:

• In OLS regression, R² is of limited significance in assessing the validity of the scaling relationship, as the objective is not to explain variation in losses at different levels of exposure indicators

• The scaling relationship should be assessed based on the degree of similarity among the scaled loss distributions

$E[L] \neq e^{E[\ln L]}$

Page 57: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

57

References for Studies on Log-Linear Models of Loss Severity

• Shih, J., A. Samad-Khan, and P. Medapa. “Is the size of an operational loss related to firm size?” Operational Risk, 2000.

• Chapelle, Ariane, Yves Crama, Georges Hubner, and Jean-Philippe Peters. “Measuring and managing operational risk in the financial sector: an integrated framework.” 2005.

• Na, Heru Stataputera, Jan van den Berg, Lourenco Couto Miranda, and Marc Leipoldt. “An econometric model to scale operational losses.” Journal of Operational Risk 1 (2006): pp. 11-31.

• Dahen, Hela, and Georges Dionne. “Scaling models of severity and frequency of external operational loss data.” Canada Research Chair in Risk Management Working Paper 07-01, 2007.

Page 58: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

58

Modeling Scale and Shape Parameters

• An alternative approach to loss scaling is to assume that loss distributions arise from a given parametric family of distributions, and that shape and scale of those distributions vary with an institution’s exposure indicators.

• Assuming a given family of distributions for loss severities (log-normal), estimate location and scale parameters separately for each institution in the consortium data set.

• Regress the scale and shape parameters on the exposure indicators:

• Likelihood ratio tests can be used to assess whether models including exposure indicators perform better than those which impose constant scale and shape parameters.

• Wei (2007) applies this methodology with the following severity distributions: Generalized Beta of the second kind (GB2), Burr Type XII, Generalized Pareto (GPD) and Lognormal.

$\ln(\text{scale}_i) = \ln\alpha + \beta\,\ln X_i$ (and similarly for the shape parameter)
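A minimal sketch of this two-step approach under a lognormal assumption, with hypothetical inputs: fit a lognormal per institution, then regress the fitted log-scale on log exposure.

```python
# Minimal sketch (hypothetical inputs, lognormal family): per-institution fits,
# then OLS of the fitted log-scale parameter on log exposure, as described above.
import numpy as np
from scipy import stats

def fit_scale_model(losses_by_inst, exposures):
    # losses_by_inst: list of loss arrays, one per institution in the consortium
    log_scales = []
    for losses in losses_by_inst:
        s, _, scale = stats.lognorm.fit(losses, floc=0)  # scale = exp(mu)
        log_scales.append(np.log(scale))
    X = np.column_stack([np.ones(len(exposures)), np.log(exposures)])
    beta, *_ = np.linalg.lstsq(X, np.array(log_scales), rcond=None)
    return beta  # intercept and elasticity of the scale parameter w.r.t. exposure
```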

Page 59: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

59

Aon OpBase Scaling

• Aon propose an alternative scaling approach, by assuming losses are log-normally distributed with mean and standard deviations specified as functions of explanatory variables.

• The approach is based on a function which scales external loss data L to have the same distribution as internal loss data: L*=h(L)

• Where the subscript 0 indicates the exposures of the institution for which data is considered “internal”

$L^{*} = b\,L^{a}, \qquad a = \frac{\sigma(X_0)}{\sigma(X)}, \qquad b = \exp\!\left(\mu(X_0) - \frac{\sigma(X_0)}{\sigma(X)}\,\mu(X)\right)$
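Read as a lognormal-to-lognormal map, the scaling function is a one-liner; the sketch below, not from the deck, assumes μ(·) and σ(·) have already been estimated.

```python
# Minimal sketch of the scaling map reconstructed above: an external loss L from
# LN(mu(X), sigma(X)) is mapped onto the "internal" distribution LN(mu(X0), sigma(X0)).
import numpy as np

def scale_loss(L, mu_x, sigma_x, mu_x0, sigma_x0):
    a = sigma_x0 / sigma_x
    b = np.exp(mu_x0 - a * mu_x)
    return b * L ** a   # equivalently: ln L* = mu(X0) + a * (ln L - mu(X))
```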

Page 60: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

60

Aon OpBase Scaling

• Coefficients in specifications of mean and standard deviation are estimated by maximum likelihood pooling all internal and external data together.

• Once estimated, both the severity distribution and the scaling function are specified.

$\max_{\mu(\cdot),\,\sigma(\cdot)} \sum_{i=1}^{n} \ln f\big(Y_i;\ \mu(X_i), \sigma(X_i)\big) = \max \sum_{i=1}^{n} \left[ -\ln\!\left(\sqrt{2\pi}\,\sigma(X_i)\,Y_i\right) - \frac{\big(\ln Y_i - \mu(X_i)\big)^{2}}{2\,\sigma(X_i)^{2}} \right]$
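A minimal sketch of the pooled MLE under an assumed linear specification for the mean and log standard deviation of log losses as functions of a single exposure indicator x (the functional form here is an assumption for illustration):

```python
# Minimal sketch (assumed specification): pooled lognormal MLE with
# mu(x) = a0 + a1*x and sigma(x) = exp(b0 + b1*x), fitted to all losses y.
import numpy as np
from scipy import optimize

def neg_log_lik(params, y, x):
    a0, a1, b0, b1 = params
    mu = a0 + a1 * x
    sigma = np.exp(b0 + b1 * x)          # keeps sigma positive
    z = (np.log(y) - mu) / sigma
    return np.sum(np.log(sigma * y) + 0.5 * z**2)   # lognormal NLL, up to a constant

def fit_pooled(y, x):
    x0 = [np.log(y).mean(), 0.0, 0.0, 0.0]
    return optimize.minimize(neg_log_lik, x0, args=(y, x), method="Nelder-Mead").x
```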

Page 61: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

61

References for Modeling Scale and Shape Parameters

• Wei, Ran. “Quantification of operational losses using firm-specific information and external database.” Journal of Operational Risk 1 (2007): pp. 3-34.

• Frachot, Antoine, Edlira Kospiri and Fabien Ramaharobandro. “Operational risk: scaling the severity of external operational risk data”. Available for download at www.aon.com

Page 62: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

62

Quantile Regression

• Quantile regression specifies a relationship between quantiles of distributions of random variables as opposed to their means

• Given percentiles of the loss distribution at one institution may differ from those at another

by a constant amount (location shift, figures a and b)

by a constant and a multiple (location-scale shift, figures c and d)

• The location or location-scale shift relationships are parameterized in terms of exposure indicators

Page 63: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

63

Quantile Regression

• Formally the location and location-scale shift model can be expressed respectively as

• The residual term has a fixed distribution Fo

• The distribution of any log loss is a shifted scaled version of the base distribution of Fo

• For a given exposure variable the quantile of the log-loss severity distribution can be expressed in terms of the location and scale shift parameters

$\ln L_i = X_i\beta + \varepsilon_i \quad \text{(location shift)}$

$\ln L_i = X_i\beta + (X_i\gamma)\,\varepsilon_i \quad \text{(location-scale shift)}$

$Q_{\ln L}(\tau \mid X_i) = X_i\beta + X_i\gamma\,F_o^{-1}(\tau)$

Page 64: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

64

Quantile Regression

• The first step is to apply the Khmaladze test, which evaluates the null hypothesis that a given quantile regression model follows a location or a location-scale shift model.

• If the null hypothesis of a location shift model is not rejected its parameters can be estimated using OLS

• If the null hypothesis of a location-scale shift model is not rejected, Fit quantile regression lines at several probability levels

For a fixed exposure level Xo, let the estimated quantile level of the loss distribution be given by

Regress quantile coefficient estimates on the relevant quantile estimates given the base exposure Xo, using OLS to obtain estimates of location and scale parameters

$\hat{F}^{-1}(\tau \mid X_0) = X_0\,\hat{\beta}(\tau)$

$\hat{\beta}_i(\tau) = \hat{\mu}_i + \hat{\sigma}_i\,\hat{F}_0^{-1}(\tau)$

Page 65: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

65

Quantile Regression

• The calculation of quantiles of a given data set can be expressed in the form of a minimization problem:

$\min_{\xi \in \mathbb{R}} \left[ \sum_{i:\,y_i \ge \xi} \tau\,\lvert y_i - \xi\rvert + \sum_{i:\,y_i < \xi} (1-\tau)\,\lvert y_i - \xi\rvert \right]$

Page 66: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

66

Quantile Regression

• Expressing a quantile of a data set as a linear function of exposure variables, quantile regression obtains the parameters of the function from a similar minimization problem:

$\min_{\beta \in \mathbb{R}^{p}} \sum_i \rho_\tau\big(y_i - X_i\beta\big)$

• This is very similar to the OLS estimate, the only difference being that quantile regression estimates a specific quantile as opposed to the mean (see the sketch below):

$\min_{\beta \in \mathbb{R}^{p}} \sum_i \big(y_i - X_i\beta\big)^{2}$
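A minimal sketch, not from the deck, implementing the check-loss minimization directly with scipy; the variable names are hypothetical, and a production fit would use a dedicated quantile regression routine.

```python
# Minimal sketch: quantile regression via direct minimization of the check
# (pinball) loss rho_tau, as in the formulation above.
import numpy as np
from scipy import optimize

def quantile_regression(y, X, tau):
    X1 = np.column_stack([np.ones(len(y)), X])       # add intercept
    def pinball(beta):
        u = y - X1 @ beta
        return np.sum(np.where(u >= 0, tau * u, (tau - 1) * u))
    beta0 = np.linalg.lstsq(X1, y, rcond=None)[0]    # OLS starting point
    return optimize.minimize(pinball, beta0, method="Nelder-Mead").x

# e.g. the 95th percentile of log losses as a function of log exposure:
# beta_hat = quantile_regression(np.log(losses), np.log(exposure), tau=0.95)
```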

Page 67: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

67

Quantile Regression

• Unlike OLS estimates of log linear models of operational losses, which don’t preserve the mean of the loss data, quantiles of the underlying loss data are preserved under all monotonic transformations

• Quantile regressions do not require any distributional assumptions of the underlying loss data.

• A significant drawback is loss reporting thresholds: losses are reported to consortium databases only above a threshold, and it is very difficult to determine which percentile of the underlying loss data the threshold corresponds to at each institution.

• ORX proposes an iterative quantile regression approach to obtaining estimates of location and scale parameters from truncated data.

Page 68: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

68

References For Quantile Regression

• Koenker, Roger. Quantile Regression. Cambridge University Press, 2005.

• Cope, E., and Abderrahim Labbi. “Operational Loss Scaling by Exposure Indicators: Evidence from the ORX Database.” 2005. Available for download at http://www.orx.org.

Page 69: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

69

External Data and Validation

• External data is still vital to an AMA framework even without applying a scaling methodology and combining it with internal data

• Qualitative assessments of operational loss exposures should always reference relevant external data

• Scenario Analysis: workshops should include challenge by review of external loss experience from similar lines of business and similar institutions

• RCSA: Evaluation of control effectiveness and inherent risks should be validated against external loss data (and if available, comparison of external and internal data)

• High percentiles of loss data from external consortiums, and loss scenarios from the FIRST database can be used to validate capital numbers based on fitted severity distributions.

Page 70: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

70

12:00 – 13:00 Lunch Break

Page 71: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

71

13:00 – 14:30 Qualitative Elements: RCSA, Scenario Analysis

Page 72: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

72

Qualitative Elements: RCSA

• RCSA is the point at which the AMA framework is integrated with businesses and management of operational risk.

• Without a sound RCSA process exposures that are identified and measured within the AMA framework cannot be effectively managed.

• 2 major components of an RCSA:

Qualitative self assessment of risk exposures

Identification and review of existing controls and current gaps (responsibility for implementing controls)

• Given an inventory of gaps in a business’ controls, capital figures produced in the AMA framework provide an indication of which to prioritize

• Qualitative risk self assessments are often criticized within risk management

As a process it promotes introspection on risk exposure, and embeds a good risk management culture within the organization

• To the critics: Is it a bad idea for key management and business leaders to get together to discuss and assess weaknesses and risks within their operations?

Page 73: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

73

Who Participates in an RCSA?

• Representation from all areas of the Business Group, discussion of relevant risks and review of controls should be facilitated at all levels of the business in a successful RCSA

• At least some participants in the workshop must be able to understand strategic objectives of the business group as well as the processes in place to meet those objectives

• Workshop must also have members who can either determine or allocate ownership of controls and gaps

• Participation of more senior employees must be managed carefully:

They may turn defensive during discussion of potential failures in the business plans

More junior employees may feel uncomfortable giving their opinions

Political or sensitive issues may not be discussed

• These are cultural issues and must be resolved from the top of an organization by an executive champion

Page 74: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

74

Role of the RCSA Facilitator

• RCSA workshops benefit substantially from a designated objective facilitator from outside the business who can:

Maintain discipline within the group and help to manage the time taken by the workshop

Record decisions and actions in a consistent format

Act as devil’s advocate within group discussions

Manage use of external and internal loss data throughout the workshop

Ensure involvement of the entire group and stimulate discussion by introducing reference material such as internal and external data at appropriate points within the workshop

• High level validation of the output from workshops is required to ensure that they are conducted consistently across different business units.

Page 75: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

75

Qualitative Elements RCSA Process

[RCSA process flow: Business Objectives -> Processes -> Inherent Risks -> Control Assessment -> Residual Risk -> AMA Framework -> Modified Control Portfolio]

• Results are owned and produced by the Business; integrated with the planning process

• Risks assessed based on expected (mean) impact on the business

• Manage risks through the AMA Framework: Severity and Frequency

Page 76: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

76

Impact on Business Objectives RCSA Process

• Identify Risks in the contexts of Business Objectives

Outline Key Objectives of the Business Unit at two levels:

• High level objective in the organization (long term)

• Strategic objectives within the high level mandate (short term)

Define the Processes in place to meet those objectives

Business Objectives

Processes

Inherent Risks

• How do Risk Types impact Processes and Business Objectives?

More granular taxonomy of risk types within Basel categories significantly aids this assessment process

Identification of new risk sub-categories can be added to the database as they emerge

Page 77: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

77

Frequency of Event Types RCSA Process

• Frequency: how often each event type impacts Business Processes and Objectives in the Absence of All Controls (include near miss events)

• Severity: Given occurrence, what is the expected loss in the Absence of All Controls

[Assessment grids: each Basel event type (1. Internal Fraud, 2. External Fraud, 3. Employment Practices and Workplace Safety, 4. Client, Products and Business Processes, 5. Damage to Physical Assets, 6. Business Disruption and Systems Failures, 7. Execution, Delivery and Process Management) is scored for financial impact on two scales:

Frequency: Rare (1 in 10-25 yr), Unlikely (1 in 5-10 yr), Possible (1 in 1-5 yr), Likely (1 in 1 yr), Frequent (>1 in 1 yr)

Severity: Negligible (<$10k), Moderate ($10k-$100k), Material ($100k-$1M), Major ($1M-$10M), Enormous (>$10M)]

Page 78: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

78

Evaluating Controls and Residual Risk RCSA Process

• Develop an inventory of existing controls in the Business Unit

• Map controls to risk types in the Business Unit’s taxonomy (aided by more granular categorization of risk types)

• Develop an inventory of gaps in controls on processes

• Within business units identify owners of controls, and owner of gaps

Control Assessment

• As part of control and gap inventories: estimate costs for maintaining existing controls, as well as costs for closing gaps in controls

• Goal is not only to identify missing controls, but also identify potential areas that may be over-controlled and create inefficiency.

Page 79: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

79

Evaluating Controls and Residual Risk RCSA Process

• Evaluate Controls on 2 dimensions: Development and Effectiveness:

• Weight each dimension according to maturity of controls

In the absence of sound controls, Development is more important than Effectiveness

Control Assessment

1. Development:
 – How are controls developed?
 – Who is involved?
 – How frequently are they reviewed?
 – Calculation of KRIs; recording of internal data

2. Effectiveness:
 – Defined in terms of Severity and Frequency:
  • How much is the loss amount decreased given an event?
  • How much is the likelihood of the event decreased?
  • KRI metrics, internal vs external loss data

Page 80: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

80

Determining Residual Risk RCSA Process

[Diagram: Control Assessment → Residual Risk]

• Inherent Risk is benchmarked against External Data

• Residual Risk is benchmarked against External Data together with the Development/Review Process for controls

[Assessment grids: the same seven event types are re-rated on the severity scale (Negligible <10k, Moderate 10k-100k, Material 100k-1M, Major 1M-10M, Enormous >10M) and the frequency scale (Rare 1 in 10-25 yr, Unlikely 1 in 5-10 yr, Possible 1 in 1-5 yr, Likely 1 in 1 yr, Frequent >1 in 1 yr), this time net of controls, to determine Residual Risk]

Page 81: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

81

Residual Risk Heat Map RCSA Process

• Residual Risk of the Event Types can be placed on a heat map to summarize the results of the RCSA

• Operational Risk is characterized by extreme loss events which, by their nature, are highly unlikely

[Heat map: frequency (Rare to Frequent) against severity (Negligible to Enormous). In this example, event type 6 is plotted at Possible; event types 2 and 7, 1, and 4 at Unlikely; event types 3 and 5 at Rare]

Event types: 1 Internal Fraud, 2 External Fraud, 3 Employment Practices and Workplace Safety, 4 Client, Products and Business Processes, 5 Damage to Physical Assets, 6 Business Disruption and Systems Failures, 7 Execution, Delivery and Process Management

Page 82: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

82

RCSA and Validation

• External and internal data are vital for challenging the results of risk self assessments

• Introducing internal and external data too early can impede the RCSA process

• Business units need to develop their OWN assessment of their risks and control quality, which can then be challenged through comparison with internal and external data.

Exposure to data can prevent introspection of risks, as workshop participants focus on past events

• Differences between inherent risk (external loss data) and residual risk should be substantiated by the development/review process for controls – businesses can’t have effective controls by accident

• Differences between loss scenarios and external (as well as internal) data should be supported by the evaluation of residual risk and the development process for controls

Page 83: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

83

Scenario Analysis - Workshop

• Objective: To create severity and frequency data to populate the tails of the loss distribution per risk type via working sessions with management and front line operational risk people

• External data, discussions with other groups, expert opinions are used to facilitate the discussion

• Scenario data can be combined with internal loss data or used to calibrate the CDF of a severity distribution directly

[Diagram: RCSA and Scenario Analysis produce Frequency and Severity estimates, which feed Curve Fitting per Risk Type; public External Data supports the process]

Page 84: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

84

Framing the questions

• We effectively create synthetic data points for the tail

• Questions should be framed appropriately to accommodate the human mind

Frequency (# of events / year) by $ loss amount (in '000) per event – EXAMPLE:

Risk Type | 0-10 | 10-100 | 100-300 | 300-500 | 500-1000 | 1000-2000 | Maximum
1 | 5 | 2 | 2 | 1 | 0.5 | 0.2 | 0.01
2 | 10 | 5 | 5 | 1 | 1 | 0.5 |
3 | 100 | 50 | 40 | 20 | 5 | 1 |
4 | 200 | 100 | 50 | 5 | 1 | 1 |
5 | 10 | 10 | 10 | 1 | 0.5 | 0.2 |

OR (or both?)

$ Loss Amount (in '000) in a year – EXAMPLE (annual total loss at given return periods; need to rely on internal data for the average frequency, # of events / year):

Risk Type | Annual Total Loss | # of events in year | 1 in 5 year | 1 in 10 year | 1 in 20 year | 1 in 50 year | 1 in 100 year
1 | 1000 | 5 | 50 | 1000 | 1000 | 2000 | 5000
2 | 2000 | 10 | 100 | 500 | 500 | 1000 | 5000
3 | 3000 | 100 | 100 | 100 | 200 | 10000 | 20000
4 | 5000 | 200 | 100 | 150 | 200 | 200 | 1000
5 | 500 | 10 | 100 | 200 | 300 | 500 | 1000

Page 85: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

85

Example Curve Fitting

• MLE fits the distribution by maximizing the joint likelihood of the data – in effect minimizing the cumulative error in curve fitting

• We can give more weight to the tail errors for better tail fit – but extreme weight marginalizes the other data points creating extreme sensitivity to the tail data

• (if we had only used internal data, we would be extrapolating based on distribution assumptions)

[Chart: fitted curve against the data; y-axis: expected # of events per year]
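As a minimal sketch of this idea (the loss values, the lognormal choice and the weight w are all hypothetical), a weighted log-likelihood can up-weight tail observations during fitting:

```python
import numpy as np
from scipy import stats, optimize

losses = np.array([12e3, 25e3, 40e3, 55e3, 90e3, 150e3, 400e3, 1.2e6])  # hypothetical
tail_cutoff = 100e3
w = 3.0   # extra weight on tail points; a very large w marginalizes the body

weights = np.where(losses > tail_cutoff, w, 1.0)

def neg_weighted_loglik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    ll = stats.lognorm.logpdf(losses, s=sigma, scale=np.exp(mu))
    return -np.sum(weights * ll)   # weighted negative log-likelihood

res = optimize.minimize(neg_weighted_loglik, x0=[np.log(50e3), 1.0],
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```

Pushing w much higher quickly marginalizes the body of the data, which is exactly the tail sensitivity the slide warns about.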

Page 86: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

86

Role of Corporate:

• Corporate experts facilitate the sessions to ensure maximum objectivity and consistency across the businesses and over time.

• Corporate plays a critical pull-and-push role in making sure business leaders are sufficiently, and consistently (with other groups), open about potential Op Risk Events.

• The process has a high risk of turning into a negotiation rather than an open, fact-finding and brainstorming exercise.

• Reassuring business leaders that the process is consistent across the enterprise is key.

• Centralized expertise, with the same corporate people attending the sessions over time and for all business groups, is required.

Page 87: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

87

Potential Biases and Controls

• Scenario Biases:

Partition dependence: where respondents’ knowledge is distorted by the discrete choice of buckets within which their responses have to be presented

Availability: where participants only recall recent events

Anchoring: where different starting points yield different estimates

Motivational: where information is misrepresented because respondents’ interests conflict with the goals and consequences of the assessment

Overconfidence: where respondents rely on limited loss experience

• Identification and evaluation of controls within the RCSA process can validate outcomes of scenario analysis against internal and external loss experience.

Page 88: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

88

14:30 – 15:00 Coffee Break

Page 89: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

89

15:00 – 16:15 o Quantitative Elements
 Distribution Choices – Frequency and Severity
 Fitting to truncated data
 Change of Measure
 Goodness of Fit, Stability
 EVT – Comparison with LDA
 False Precision

Page 90: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

90

Loss Distribution Approach

• The standard approach to determining capital for a business unit / event type (unit of measure) is to parameterize frequency and severity distributions

• The frequency distribution determines the likelihood of experiencing different numbers of loss events in a year
 Use annual observations of the number of losses to parameterize the frequency distribution

• The severity distribution determines the likelihood of losing different amounts given that a loss occurs
 Use the magnitudes of all losses experienced to parameterize the severity distribution

• Capital numbers are based on the aggregate annual loss distribution
 A draw from the frequency distribution determines the number of losses in a year (n)
 Taking (n) draws from the severity distribution determines the individual magnitudes of those losses
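A minimal sketch of this simulation loop, assuming a Poisson frequency and a lognormal severity with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 12.0             # Poisson frequency (hypothetical)
mu, sigma = 9.0, 2.0   # lognormal severity parameters (hypothetical)
n_sims = 100_000

n_losses = rng.poisson(lam, size=n_sims)    # number of losses in each simulated year
annual_loss = np.array([rng.lognormal(mu, sigma, n).sum() for n in n_losses])

print("EL  =", annual_loss.mean())
print("VaR =", np.quantile(annual_loss, 0.999))   # capital at the 99.9th percentile
```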

Page 91: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

91

Level of Granularity in Modeling

• Modeling loss distributions for event type sub-categories significantly reduces the amount of internal data available

• Comparing loss sub-categories within Basel Event types, it may not be clear whether losses should be categorized by two separate distributions.

• Goodness of fit tests can be adapted to evaluate the hypothesis that subsamples of data belong to the same distribution.

• Why not model executive fraud and teller fraud as losses from different quantiles of the same distribution for internal fraud? There is certainly a continuum of employees within an organization – or business unit – capable of perpetrating fraud of various magnitudes.

• Evaluate whether there is sufficient data available to calibrate separate distributions, and whether the data rejects the hypothesis that losses from different categories follow the same distribution. If there are limited grounds for rejecting it, a simpler framework is generally better.

Page 92: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

92

OpVar Frequency Distributions

• Poisson Distribution:

Single-parameter distribution, parameterized with the mean annual frequency of losses

Variance is equal to mean

• Binomial

Two-parameter distribution, parameterized by Maximum Likelihood Estimation (MLE) using loss frequency data

Variance is smaller than the mean

• Negative Binomial:

Two-parameter distribution, parameterized with MLE using loss frequency data

Variance is larger than the mean
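A sketch of how two of these choices compare on a small set of hypothetical annual loss counts (the Poisson MLE is just the sample mean; the negative binomial needs a numerical MLE):

```python
import numpy as np
from scipy import stats, optimize

counts = np.array([3, 7, 2, 9, 4, 6, 11, 5])   # hypothetical annual loss counts

lam = counts.mean()   # Poisson MLE: variance is then forced equal to the mean

def nb_nll(params):   # negative binomial: two parameters, allows variance > mean
    r, p = params
    if r <= 0 or not 0 < p < 1:
        return np.inf
    return -stats.nbinom.logpmf(counts, r, p).sum()

res = optimize.minimize(nb_nll, x0=[5.0, 0.5], method="Nelder-Mead")
r_hat, p_hat = res.x
print("sample mean/var:", counts.mean(), counts.var(ddof=1))
print("Poisson mean:", lam, " NB mean:", r_hat * (1 - p_hat) / p_hat)
```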

Page 93: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

93

OpVar Severity Distributions

• There are many options to choose from for modeling the likelihood of loss severity

• Thin tailed Distributions: Beta, Chi-Square, Exponential, Gamma, Log Normal, Weibull

• Fat Tailed Distributions: Burr, Cauchy, Generalized Pareto (GPD), Log-Gamma, Log-Logistic, Pareto

• Parameters of the severity distribution are fit by maximizing the joint likelihood of loss amounts w.r.t. parameters of the likelihood function (distribution)

• Typically the data will not clearly match one severity distribution over the alternatives – a sufficient number of tail loss events is not available to differentiate between these distributions. The likelihood of observing losses around the body or mean is similar across the alternative severity distributions.

• Capital numbers are based on the shape of the tail of the severity distribution – which can vary substantially across fitted distributions

Page 94: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

94

Fitting Severity Distributions to Truncated Data

• Likelihood functions for severity of losses are defined from a loss value of 0. Loss data is only recorded for losses over a threshold b

• Parameters of the likelihood function f() have to be calibrated based on the likelihood of observing loss greater than a threshold b.

$$\max_{\theta}\ \prod_{i=1}^{N} f^{*}(l_i \mid \theta)$$

$$f^{*}(l \mid \theta) \;=\; f(l \mid \theta,\ l \ge b) \;=\; \frac{f(l \mid \theta)}{1 - \int_{0}^{b} f(x \mid \theta)\,dx}$$

Data is only available from the portion of the distribution above the threshold b.
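A minimal sketch of this truncated fit, assuming a lognormal severity and a hypothetical threshold b; the key line subtracts log(1 - F(b|θ)) from each observation’s log-density:

```python
import numpy as np
from scipy import stats, optimize

b = 10_000.0   # collection threshold: losses below b are never recorded
losses = np.array([12e3, 15e3, 22e3, 40e3, 75e3, 210e3, 1.1e6])   # hypothetical

def neg_loglik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    # conditional log-density: log f(l|theta) - log(1 - F(b|theta)), for l >= b
    return -(dist.logpdf(losses) - np.log(dist.sf(b))).sum()

res = optimize.minimize(neg_loglik, x0=[np.log(30e3), 1.5], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```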

Page 95: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

95

Quantitative Elements – Curve Fitting (Evren)

• Calibration to Mixture of Data

Internal + External loss data;

Internal loss data + Scenarios;

Change of measure

Scenarios only

Page 96: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

96

External Data in Modeling

• There is no consensus in the industry on a scaling methodology that would allow external data to be incorporated directly into capital modeling

• Current techniques focus on indicators of business size and location; however, risk profiles will vary substantially between institutions depending on the specific controls that are in place and differences in culture

• Using external data (even if it were accurately scaled) would allocate capital to businesses based on loss experience that was completely out of their control, making it difficult to gain acceptance and support within an institution.

• Percentiles of the empirical distribution of external losses can be compared to percentiles of candidate severity distributions fit with internal data.

• Fit severity distributions using scaled external data combined with internal data, and compare to the fit using only internal data.

Page 97: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

97

Incorporating Scenario Analysis into Modeling

• The outcome of Scenario Analysis can be used to validate the fit of a severity distribution by comparing relevant quantiles of the distribution to the severity and likelihood of the hypothetical scenario

• Scenario data cannot be directly added to internal loss data, as doing so inflates the likelihood of the scenario losses within the sample of internal data: adding a 1-in-50-year loss event to 3 years of internal loss data makes it look far more frequent than it is

• One approach to incorporating scenario data points into capital modeling is to fit two distributions, one with scenario data and one with internal loss data, splicing the two distributions together or drawing independently from each in simulation.
 Significant discontinuities between scenarios and internal data pose problems for this approach
 It is difficult to assess the appropriateness of using two separate distributions when there is little internal data.

Page 98: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

98

Combining Scenario and Internal Loss Data

• Scenarios cannot be added directly to the internal loss data for any unit of measure. The frequency and severity of internal loss data have to be modified before they are combined with scenario events in calculating capital figures.

 Frequencies of the scenario and internal loss data may not be aligned; naively combining scenarios with internal data will place undue weight on the scenario.

 Dutta and Babbel (2010) propose a method for appropriately merging internal data with a set of loss scenarios, each defined by a frequency and a loss interval (e.g., an event with losses between $200 million and $250 million occurring once every 20 years).

 This method involves systematically extending the frequency of historical loss data to match the frequency of each scenario event on a normalized scale. Severity distributions estimated with the underlying loss data are then transformed to accommodate the scenarios.

Page 99: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

99

Combining Scenario and Loss Data (Dutta 2010)

1. Use internal loss data to estimate severity and frequency distributions for a unit of measure (ET/LoB)

2. Formulate a scenario with loss interval [a,b] and frequency m/t: m events are observed every t years

3. If the annual number of losses n is distributed as freq(n), then in t years we would expect to see

$$y = \sum_{i=1}^{t} n_i$$

4. Take y draws from the severity distribution estimated with the internal data. We observe k < m events in the range [a,b]

5. Generate m-k events in the range [a,b] from the severity distribution estimated with the internal data

6. Combine the m-k events from step 5 and the y events from step 4 into a new data set of losses for the unit of measure

7. Re-estimate a new implied severity distribution using the data set from step 6
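A simplified sketch of these seven steps, with hypothetical fitted parameters standing in for step 1 and a Poisson frequency assumed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# step 1: severity and frequency fitted from internal data (hypothetical values)
mu, sigma, lam = 9.0, 1.8, 10.0
sev = stats.lognorm(s=sigma, scale=np.exp(mu))

# step 2: scenario with loss interval [a, b], m events every t years
a, b, m, t = 2.0e6, 2.5e6, 1, 20

# step 3: number of losses expected over t years, y = sum of t frequency draws
y = rng.poisson(lam, size=t).sum()

# step 4: y severity draws; k of them land in the scenario range
draws = sev.rvs(y, random_state=rng)
k = int(((draws >= a) & (draws <= b)).sum())

# step 5: top up with m - k events in [a, b] (inverse-CDF sampling on the range)
if k < m:
    u = rng.uniform(sev.cdf(a), sev.cdf(b), size=m - k)
    draws = np.concatenate([draws, sev.ppf(u)])

# steps 6-7: the merged sample defines the new implied severity distribution
sigma_new, _, scale_new = stats.lognorm.fit(draws, floc=0)
mu_new = np.log(scale_new)
```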

Page 100: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

100

Evaluating Distributions using Change of Measure

Change of Measure (COM) metrics can be used to evaluate the relevance of scenarios in updating a given severity distribution, and also the ability of a given severity density function to predict a scenario

Each scenario that is combined with the internal data will change the implied probability of that event with respect to the historical probability.

• Historical probability: probability of scenario occurring based on severity distribution estimated with the internal data

• Implied probability: probability of a scenario occurring based on the implied severity distribution obtained by combining internal loss data with scenario data

$$\mathrm{COM} \;=\; \frac{\text{implied probability}}{\text{historical probability}}$$

Change of Measure (COM) is informative of the relevance of a given scenario in updating the severity distribution

The appropriateness of distribution choice can be evaluated based on COM values for a set of scenarios. Comparing COM values calculated between different severity density functions, a lower COM implies that a distribution is a better predictor of a scenario.
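A minimal sketch of the COM calculation for a single scenario interval [a, b], with hypothetical lognormal fits standing in for the historical and implied distributions:

```python
import numpy as np
from scipy import stats

a, b = 2.0e6, 2.5e6   # scenario loss interval (hypothetical)

# severity fit to internal data only vs the refit after merging the scenario
hist = stats.lognorm(s=1.8, scale=np.exp(9.0))   # hypothetical historical fit
impl = stats.lognorm(s=1.9, scale=np.exp(9.1))   # hypothetical implied fit

p_hist = hist.cdf(b) - hist.cdf(a)   # historical probability of the event
p_impl = impl.cdf(b) - impl.cdf(a)   # implied probability of the event
com = p_impl / p_hist                # COM near 1: scenario already well predicted
print(com)
```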

Page 101: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

101

Evaluating Alternative Distributions Based on Change of Measure

• Instead of fitting distributions independently with internal and scenario data, COM approach can be used to jointly assess the fit of distributions to both internal data and scenarios

1. For a set of candidate distributions, fit each using available internal data

2. For each candidate distribution, calculate the COM based on the implied probability from refitting the given distribution with scenario data

• The lowest COM implies that a given distributional choice more easily accommodates the fit of both internal data and hypothetical scenarios

• The parameterization of the distribution estimated with the internal and scenario data, and with the lowest COM, would be selected for capital modeling

Page 102: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

102

References

• Dutta, K. and Babbel, D. (2010) Scenario Analysis in the Measurement of Operational Risk Capital: A Change of Measure Approach. Available at http://fic.wharton.upenn.edu/fic/papers/10/10-10.pdf

• Dutta, K. and Perry, J. (2006) A Tale of Tails: An Empirical Analysis of Loss Distribution Models for Estimating Operational Risk Capital. Federal Reserve Bank of Boston, Working Paper No. 06-13

• Embrechts, P., Degen, M. and Lambrigger, D. The Quantitative Modeling of Operational Risk: Between g-and-h and EVT. Available at http://www.math.ethz.ch/%7Edegen/Lausanne.pdf

• Moscadelli, M. (2004) The Modelling of Operational Risk: Experiences with the Analysis of the Data Collected by the Basel Committee. Bank of Italy, Working Paper No. 517

• Makarov, M. (2006) Extreme Value Theory and High Quantile Convergence. Journal of Operational Risk, Volume 1, Number 2, Summer 2006, 51-57

Page 103: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

103

Using Only Scenario Data

• In some instances institutions or individual units of measure will have little to no internal loss data available for modeling (loss data collection has only just commenced)

• It is still meaningful to summarize scenario data into a distribution to calculate capital

Each scenario severity and likelihood corresponds to a point on the severity CDF

Calibrate parameters of the severity distribution to fit its CDF to the scenario severity / likelihood data points.

Choose the weights in the fit so that the tail of the distribution fits the data better.

[Chart: severity CDF, running from 0 to 1, with the scenario severity/likelihood points marked on the curve]
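A sketch of this calibration, assuming a lognormal severity, a hypothetical annual frequency λ and hypothetical scenario points; a 1-in-N-year event with λ events per year sits at CDF level 1 - 1/(Nλ):

```python
import numpy as np
from scipy import stats, optimize

lam = 20.0   # assumed annual event frequency (hypothetical)
# scenario points: (severity, once-in-N-years) -> target CDF level 1 - 1/(N*lam)
scen = [(250e3, 5), (1e6, 20), (5e6, 100)]
sev = np.array([s for s, _ in scen])
target = np.array([1.0 - 1.0 / (N * lam) for _, N in scen])
w = np.array([1.0, 2.0, 4.0])   # heavier weight on the tail points

def loss_fn(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    fitted = stats.lognorm.cdf(sev, s=sigma, scale=np.exp(mu))
    return np.sum(w * (fitted - target) ** 2)   # weighted CDF fitting error

res = optimize.minimize(loss_fn, x0=[np.log(100e3), 1.5], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```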

Page 104: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

104

Goodness of Fit Statistics

• A good way to evaluate the fit of a severity distribution for a given set of loss data is to plot the empirical CDF against the CDF of the fitted distributions (using log losses)

How do the fitted distributions compare to the data? The scenarios?

• Lognormal probability plots are also useful for observing the fit of the tail

 y-axis is the log loss level

 x-axis is a standard normal variable: x = NormSInv(proportion of the sample below exp(y))

• Formal Goodness of Fit Statistics include:

Kolmogorov-Smirnov Test

Anderson-Darling Test

Page 105: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

105

Kolmogorov Smirnov Test

• The test evaluates the null hypothesis that

A sample data set is drawn from a reference distribution

Two sample data sets are drawn from the same distribution

• The test compares the differences between the CDFs of the sample and reference distribution or the two sample CDFs:

• Empirical CDF:

• Test Statistic:

• Two Sample Case:

• Requires a large sample of data to reject the null

• The test statistic is compared to critical values of the Kolmogorov distribution

Empirical CDF:

$$F_n(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} I\left(X_i \le x\right)$$

Test statistic:

$$T_{KS} \;=\; \sup_x \left| F_n(x) - F(x) \right|$$

Two-sample case:

$$T_{KS} \;=\; \sqrt{\frac{n\,n'}{n+n'}}\ \sup_x \left| F_{1,n}(x) - F_{2,n'}(x) \right|$$
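Both variants are available in scipy; a minimal sketch on hypothetical loss data (note that using parameters fitted from the same sample distorts the standard KS p-value):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.lognormal(9.0, 1.8, 60)     # hypothetical loss data

# one-sample: sample vs a fitted reference distribution
s, loc, scale = stats.lognorm.fit(sample, floc=0)
stat, pval = stats.kstest(sample, stats.lognorm(s, loc, scale).cdf)

# two-sample: do two loss sub-categories share one distribution?
sample2 = rng.lognormal(9.2, 1.7, 45)
stat2, pval2 = stats.ks_2samp(sample, sample2)
print(stat, pval, stat2, pval2)
```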

Page 106: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

106

Anderson-Darling Test

• Evaluates the Null Hypothesis that a sample of data came from a population with a specific distribution

• Modified version of the KS test, which gives more weight to the fit of the data in the tail

• A disadvantage is that the distribution of the test statistic under the null depends on the hypothesized distribution

• Fit a distribution with the data, then use the fitted distribution in the Test statistic to evaluate the null hypothesis that the data was drawn from the fitted distribution

• One sided test – smaller values of the test statistic are preferred.

$$T_{AD} \;=\; -n - S,\qquad S \;=\; \sum_{i=1}^{n} \frac{2i-1}{n}\left[\ln F(Y_i) + \ln\left(1 - F(Y_{n+1-i})\right)\right]$$

where the sample values $Y_1 \le \dots \le Y_n$ are sorted in ascending order and $F$ is the hypothesized CDF.
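The statistic above can be computed directly; a minimal sketch against a fitted lognormal on hypothetical data:

```python
import numpy as np
from scipy import stats

def anderson_darling(sample, cdf):
    """T_AD = -n - S with S as defined above; sample sorted ascending."""
    y = np.sort(sample)
    n = len(y)
    i = np.arange(1, n + 1)
    F = cdf(y)
    # F[::-1] supplies F(Y_{n+1-i}) for each i
    S = np.sum((2 * i - 1) / n * (np.log(F) + np.log(1 - F[::-1])))
    return -n - S

rng = np.random.default_rng(3)
x = rng.lognormal(9.0, 1.8, 80)                # hypothetical loss data
s, loc, scale = stats.lognorm.fit(x, floc=0)   # fit, then test the fit
print(anderson_darling(x, stats.lognorm(s, loc, scale).cdf))
```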

Page 107: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

107

Capital Stability

• When modeling severity with fat tailed distributions, the resulting capital numbers can become sensitive to the addition of new data points, producing unstable capital figures over time as new internal data is collected.

• It is important to evaluate the stability of high percentiles of fitted loss distributions with respect to changes in the data:
 For a given set of loss data points, use random subsamples to obtain multiple fits of the same distribution
 Compare the 99th (or higher) percentiles across the fitted distributions

• A distribution that results in a small test statistic will not necessarily provide the most stable capital figures over time.
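A sketch of such a stability check, refitting a lognormal on random 80% subsamples of hypothetical loss data and comparing a high percentile across fits:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
losses = rng.lognormal(9.0, 1.8, 120)     # hypothetical internal loss data

q999 = []
for _ in range(200):
    sub = rng.choice(losses, size=int(0.8 * len(losses)), replace=False)
    s, loc, scale = stats.lognorm.fit(sub, floc=0)    # refit on each subsample
    q999.append(stats.lognorm.ppf(0.999, s, loc, scale))

q999 = np.array(q999)
print("99.9th pct across refits: mean %.0f, cv %.2f"
      % (q999.mean(), q999.std() / q999.mean()))
```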

Page 108: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

108

Generating Capital Numbers:

• Monte Carlo estimates of high quantiles of loss distributions suffer from high variance.

• Deterministic methods such as Fourier transforms can provide more accuracy than Monte Carlo simulation; however, they are not as flexible: insurance benefits, for example, can be modeled in Monte Carlo simulations

• A high number of simulations (10^5 or greater per unit of measure) is required for capital numbers to converge
 The number of simulations required will depend on the severity distribution (kurtosis)
 Increase the number of simulations until the desired variance of capital numbers is achieved, as in the sketch below
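A sketch of this convergence check (Poisson frequency and lognormal severity with hypothetical parameters): repeat the capital simulation at increasing simulation counts and watch the spread of the estimates shrink:

```python
import numpy as np

rng = np.random.default_rng(5)

def capital(n_sims, lam=10.0, mu=9.0, sigma=2.0, q=0.999):
    n = rng.poisson(lam, n_sims)
    annual = np.array([rng.lognormal(mu, sigma, k).sum() for k in n])
    return np.quantile(annual, q)

# repeat the estimate at each simulation count; spread should shrink as sims grow
for n_sims in (10_000, 50_000):
    est = [capital(n_sims) for _ in range(5)]
    print(n_sims, np.mean(est), np.std(est))
```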

Page 109: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

109

Extreme Value Theory (EVT)

• An alternative to fitting separate severity and frequency distributions (LDA) is to model the probability of losses exceeding a given threshold using EVT.

• A common Peaks-over-Threshold model of extreme values employs a Generalized Pareto Distribution (GPD) to model the likelihood of losses exceeding a high threshold.

• For a large class of distributions commonly used in statistics and actuarial sciences (normal, lognormal, χ², t, F, gamma, exponential, uniform, beta, etc.), the distribution of exceedances above a threshold converges to a GPD as the threshold approaches the right endpoint of the distribution.

• Frequency can be incorporated by representing exceedances with a Peaks Over Threshold Point Process (POT-PP)

• Capital numbers can be obtained by multiplying the average severity of exceedances by the average frequency of exceedances when they are modeled by POT-PP

$$\lim_{u \to x_0}\ \sup_{0 \le y \le x_0 - u}\ \left| F_u(y) - \mathrm{GPD}_{\xi,\beta(u)}(y) \right| \;=\; 0$$
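A minimal POT sketch using scipy’s generalized Pareto, with a hypothetical 90th-percentile threshold and hypothetical loss data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
losses = rng.lognormal(9.0, 1.8, 1000)   # hypothetical loss data

u = np.quantile(losses, 0.90)            # threshold choice is the hard part
exceed = losses[losses > u] - u          # peaks over the threshold

xi, loc, beta = stats.genpareto.fit(exceed, floc=0)   # GPD fit to exceedances

# tail estimate: P(X > x) = P(X > u) * P(exceedance > x - u)
p_u = (losses > u).mean()
x = 5e6
print(p_u * stats.genpareto.sf(x - u, xi, loc, beta))
```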

Page 110: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

110

LDA vs EVT

• From a practical perspective it can be difficult to determine the appropriate threshold from which to model loss exceedances using EVT POT-PP. EVT relies on convergence in distribution to the GPD as the threshold converges to the right endpoint of the underlying distribution.

 The higher the threshold, the less data is available to estimate the parameters of the GPD

• “Many different techniques being tested by researchers are centered around EVT. In many of those cases we observe that attempts are made to fit a distribution or apply a method without understanding the characteristics of the loss data or the limitation of the models” (Dutta and Perry, 2006)

• Based on the findings of Dutta and Perry (2006), in which operational risk data seem to be modeled better by the g-and-h distribution than by Peaks Over Threshold (POT) EVT approaches, Embrechts et al. (2006) outline several theoretical grounds on which POT EVT approaches may be unsatisfactory for fitting data from g-and-h density functions.

Page 111: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

111

EVT and Convergence in Quantiles and Shortfall (Makarov (2006))

• EVT POT-PP approximations of the tail of a distribution are based on convergence in distribution to the GPD.

• The convergence in distribution is a weak form of convergence and does not guarantee convergence in mean or convergence for quantiles.

• EVT approximations can have convergence in quantiles (Uniform Relative Quantile convergence), however not for all families of distributions (not log gamma)

• When applying the EVT method to a distribution with finite mean, it can produce an approximation that has infinite mean, or significantly different high quantiles and shortfalls.

• Although EVT can be useful in approximating the tails of unknown distributions without making assumptions about the underlying distribution, LDA provides a more flexible and simpler approach.

Page 112: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

112

False Precision

• Although there are sophisticated methods available for fitting loss distributions and generating capital figures based on historical loss data, these methodologies are not robust without large samples of data (generally more than will be available for operational risk modeling).

• Without a substantial number of tail loss observations capital numbers represent a significant extrapolation based on the assumed severity distribution.

• Even if scenario data is used to fit the tail of the severity distribution, it is still based on an approximation of very remote and unlikely events

• Expert judgment has an important role to play in the AMA framework – it has to be recognized and carefully managed.

• Estimated loss distributions are ultimately extrapolations of available loss experience to estimate highly unlikely events and will always require the overlay of management judgment.

• A successful AMA framework cannot be over-reliant on sophisticated fitting techniques, capital calculations and test statistics – at the end of the day the precision of these numbers is not defendable

• Institutions have fallen into the trap of using complexity to disguise uncertainty in the AMA framework.

Page 113: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

113

Qualitative Assessment of AMA Framework

• Dutta and Perry (2006) provide a qualitative yardstick for evaluating AMA models:

1. Good Fit – Statistically, how well does the method fit the data?

2. Realistic – If a method fits well in a statistical sense, does it generate a loss distribution with a realistic capital estimate?

3. Well specified – Are the characteristics of the fitted data similar to the loss data and logically consistent?

4. Flexible – How well is the method able to reasonably accommodate a wide variety of empirical loss data?

5. Simple – Is the method easy to apply in practice, and is it easy to generate random numbers for the purposes of loss simulation?

Page 114: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

114

References For Capital Modeling

• Dutta, K. and Babbel, D. (2010) Scenario Analysis in the Measurement of Operational Risk Capital: A Change of Measure Approach. Available at http://fic.wharton.upenn.edu/fic/papers/10/10-10.pdf

• Dutta, K. and Perry, J. (2006) A Tale of Tails: An Empirical Analysis of Loss Distribution Models for Estimating Operational Risk Capital. Federal Reserve Bank of Boston, Working Paper No. 06-13

• Embrechts, P., Degen, M. and Lambrigger, D. The Quantitative Modeling of Operational Risk: Between g-and-h and EVT. Available at http://www.math.ethz.ch/%7Edegen/Lausanne.pdf

• Moscadelli, M. (2004) The Modelling of Operational Risk: Experiences with the Analysis of the Data Collected by the Basel Committee. Bank of Italy, Working Paper No. 517

• Makarov, M. (2006) Extreme Value Theory and High Quantile Convergence. Journal of Operational Risk, Volume 1, Number 2, Summer 2006, 51-57

Page 115: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

115

Quantitative Elements – Correlations

• Correlations by [Risk Types x Business Units x Regions]

• Across the risk types
 Nice to have, but almost no data
 Some risk types must be correlated (e.g., fraud and legal). Ad-hoc parameterization of correlations is possible (0%, 25%, 50%) – typically frequency draws are correlated (not severity)

• Utilizing the aggregate op risk losses for the business units:
 Time series of aggregate losses may be available for different business units and regions – from which we can infer the empirical correlations between the aggregate losses
 We first aggregate the losses from different risk types per business unit and region using zero or ad-hoc correlations by means of normal copulas
 We then aggregate the cumulative loss distributions (per region / business unit) using the empirical distributions (by means of normal copulas)
 Note the correlations among the risk types would still be the weakest link and need to be empirically estimated as more data are collected over time

Page 116: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

116

Aggregation

[Diagram: matrix of the seven Basel risk types (Internal Fraud; External Fraud; Employment Practices and Workplace Safety; Client, Products and Business Processes; Damage to Physical Assets; Business Disruption and Systems Failures; Execution, Delivery and Process Management) across Business Units 1-5. Risk types are aggregated with correlations ρ into Standalone Business Unit Capital, and the business units are in turn aggregated with correlations ρ into Enterprise Capital]

Page 117: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

117

Aggregation, across the risk types

• Typically, frequency is correlated (as opposed to severity)

• Normal copulas are used for correlation

• Where:

$$x_{\mathrm{RiskType}_i} = P_i^{-1}\left(\Phi(k_i),\ \lambda_i\right),\qquad x_{\mathrm{RiskType}_j} = P_j^{-1}\left(\Phi(k_j),\ \lambda_j\right)$$

 $P^{-1}$ = inverse of the cumulative Poisson distribution

 $\lambda_i$ = shape parameter

 $\Phi(\cdot)$ = cumulative standard Normal, transforming $k_i$ to a uniform distribution

 $k_i \sim N(0,1)$, with $\mathrm{Corr}(k_i, k_j) = \rho_{i,j}$
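A sketch of these correlated frequency draws for two risk types, using a normal copula with a hypothetical ρ and hypothetical Poisson intensities:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
lam = np.array([10.0, 4.0])    # Poisson intensities for two risk types (hypothetical)
rho = 0.25                     # ad-hoc frequency correlation
n_sims = 100_000

cov = np.array([[1.0, rho], [rho, 1.0]])
k = rng.multivariate_normal(np.zeros(2), cov, size=n_sims)  # Corr(k_i, k_j) = rho
u = stats.norm.cdf(k)                                       # Phi(k): uniforms
n_events = stats.poisson.ppf(u, lam).astype(int)            # inverse Poisson CDF

# empirical check: correlation induced on the frequency draws
print(np.corrcoef(n_events[:, 0], n_events[:, 1])[0, 1])
```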

Page 118: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

118

Aggregation, across the Business Units

• Need to aggregate Standalone Business Unit Loss Distributions, L1 & L2

• Normal copulas are used again: Correlated draws from L1 and L2 and aggregation

$$k_i \sim N(0,1),\qquad \mathrm{Corr}(k_i, k_j) = \rho_{i,j}$$

$$\tilde{L}_{\mathrm{Total}} = \tilde{L}_1 + \tilde{L}_2,\qquad \tilde{L}_1 = \mathrm{Percentile}\left(\Phi(k_1),\ L_1\right),\quad \tilde{L}_2 = \mathrm{Percentile}\left(\Phi(k_2),\ L_2\right)$$
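A sketch of this aggregation for two hypothetical standalone loss distributions, using correlated percentile draws:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_sims = 50_000

# standalone annual loss distributions for two business units (hypothetical sims)
L1 = rng.lognormal(12.0, 1.5, n_sims)
L2 = rng.lognormal(11.5, 1.8, n_sims)

rho = 0.3                                    # ad-hoc inter-unit correlation
cov = [[1.0, rho], [rho, 1.0]]
k = rng.multivariate_normal([0.0, 0.0], cov, size=n_sims)   # k_i ~ N(0,1)
u = stats.norm.cdf(k)                        # Phi(k_i): correlated uniforms

# correlated percentile draws from each standalone distribution, then sum
L_total = np.quantile(L1, u[:, 0]) + np.quantile(L2, u[:, 1])
print("Enterprise 99.9th percentile:", np.quantile(L_total, 0.999))
```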

Page 119: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

119

16:15 – 16:30 Coffee Break

Page 120: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

120

16:30 – 17:30 Governance and Validation
Operationalizing AMA
 Management Buy-In
 Roles and Responsibilities: Corporate vs Front Line
 Allocation of Operational Risk Capital
 IT Infrastructure
 The appropriate level of granularity for allocation among the geographies, business units and business lines
 Allocation of capital generated by shared service centers
 Allocation of diversification
 Driving business benefits: Utilizing AMA in effective capital and Risk Management

Page 121: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

121

Validation

• What is different about OpVaR Validation?

 Backtesting the inputs
 Backtesting the quantiles of the distribution (and thus the distribution assumptions)

[Diagram: comparison of backtesting feasibility across Market VaR, Market Risk EC, Credit Risk EC and OpVaR]

Page 122: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

122

Governance and Validation

• In the absence of data, “full” validation is not possible, therefore we need to rely on Quality Assurance and external benchmarking. However, a comprehensive validation spectrum includes:

1. Quality Assurance embedded in the production cycle
 1. Enterprise relative-consistency check per risk type
 2. Examine the large quantiles of the fitted loss distributions against the available data
 3. Utilizing the available data to examine the RCSA and Scenario Analysis results

2. Model Vetting

3. External benchmarking

4. Data integrity and Maintenance

5. Use Test

• It is important to use the right skill set and expertise for each component of validation

Page 123: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

123

Quality Assurance 1 - Ensuring Consistency

• Subject matter experts examine the relative accuracy across the enterprise per risk type

• For example

Does it make sense that the legal risk for Business Unit A is twice as big as for Business Unit B?

Examined by an enterprise legal risk expert.

[Diagram: RCSA and Scenario Analysis produce Frequency and Severity estimates; an Enterprise Consistency Check is applied to Op VaR by Risk Type]

Page 124: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

124

Quality Assurance 2 – Examine the Fitted Distributions

[Diagram: RCSA and Scenario Analysis feed Frequency and Severity into Curve Fitting per Risk Type; consortium and public External Data are used for validation]

Validation

• Possible to see underestimation

• Out of sample testing with public data: Compare the quantiles of the Fitted Distribution with external data per risk type.

Need to estimate the frequency

Scaling is an issue

• Out of sample (in sample if used in the fitting) testing with consortium data: Compare the quantiles of the Fitted Distribution with external data per risk type.

• Data from individual institutions can be aggregated to create a longer time series to estimate frequency (assuming independence?)

Page 125: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

125

Quality Assurance 3 – Examine the RCSA and Scenario Analysis Data

• Internal and External data are used in validating the RCSA

Inherent risk should align with external loss experience

Differences between external data and residual risk should be explained by the development/review of controls

• Residual Risk and control quality are used to validate Loss Scenarios against external and internal loss data.

• This is valuable anyway as it stimulates business introspection of potential risk

[Diagram: Internal and External Data are used to validate RCSA and Scenario Analysis, which feed Curve Fitting per Risk Type]

Page 126: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

126

Model Vetting

• Vetting the mechanical aspects:

Verification of the code

Monte Carlo simulation

Curve fitting algorithm

Choice of the distributions

Goodness of fit

• Separation of the vettable and the not-vettable: managing the scope to match the vetters’ skill set is very important.

Page 127: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

127

External Benchmarking

• In the absence of sufficient data for full validation, peer benchmarking and industry studies for reasonability and comparability testing are required.

• 12%-20% (Op Risk Capital / Minimum Regulatory Capital) appears to be a current industry benchmark, with banks at the lower end and insurance companies at the higher end.

• Operational risk capital for non-AMA banks is higher than for AMA banks regardless of exposure indicator used for scaling

Ratio of OpRisk Capital to gross income is 10.8% for typical AMA banks

Basic Indicator Approach requires 15% of gross income. Banks using Standardized Approach (TSA) have a similar ratio of 12% - 18%

• Regulators are moving to impose AMA capital floors (AMA ≥ ~85% of TSA)

 Going forward, EC < AMA – implications?

• Is this a top-down calibration?

Page 128: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

128

Operationalizing AMA

• Management Buy-In

• Roles and Responsibilities: Corporate vs Front Line

• Allocation of Operational Risk Capital

• IT Infrastructure

• The appropriate level of granularity for allocation among the geographies, business units and business lines

• Allocation of capital generated by shared service centers

• Allocation of diversification

• Driving business benefits: Utilizing AMA in effective capital and Risk Management

Page 129: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

129

Management Buy-In

Some tips:

• Don’t oversell OpVaR as the correct absolute number – emphasize that it provides an operational risk management framework in which the relative magnitudes of different operational risk types are quantified.

• Get business and management involved during the entire process.

• Express the fact that:
 This is not just a quantitative model but a process and framework, i.e. the business unit’s experience/judgment is used. The model provides a quantitative and consistent framework to formalize this experience/judgment.

It is directionally right, it is obviously better than most benchmark based approaches (expense based, etc.).

It is linked to the risk drivers, thus, gives Business Units control over their Capital charge and allows them to make cost/benefit decisions on controls and insurance.

• Explain the Use Test

• The fact that it is a regulatory requirement helps, but actively seek linkages to the business benefits.

Page 130: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

130

Roles and Responsibilities:

• Corporate: Set up the framework, coordinate, facilitate, govern, allocate, report

EC Group

Operational Risk Group

• Front Line Risk Managers: Manage the risk

• Business: Own the Risk and the corresponding OpVaR

Page 131: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

131

Dividing the Work within Corporate, A Key Partnership

[Diagram: partnership between the EC Group and the Op Risk Group around Op VaR. Responsibilities span: Operational Risk Drivers; RCSA and Scenario Analysis Facilitation; Coordination of Data Capture and Maintenance; Op VaR Framework Design, Calibration and Estimation; Scenario Analysis Design; Op Risk Data Marts Design; Internal + External Data; Enterprise Consistency/Validation; Risk Taxonomy; Governance]

Page 132: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

132

Allocation of Operational Risk Capital

• The appropriate level of granularity for allocation among the geographies, business units and business lines

• Allocation of diversification

• Allocation of capital generated by shared service centres (by volume, usage etc) – do they have control over the controls?

Page 133: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

133

Allocation

[Table: the seven Basel risk types (Internal Fraud; External Fraud; Employment Practices and Workplace Safety; Client, Products and Business Processes; Damage to Physical Assets; Business Disruption and Systems Failures; Execution, Delivery and Process Management) by Business Units 1-5]

• By Risk Type x Business Unit

• Capital numbers are calculated at the Business Unit level, where performance is managed

Page 134: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

134

Diversification

• Total Diversification Benefit = ∑ Standalone – Enterprise Capital

• Allocation of diversification benefits?

[Diagram: risk types by Business Units 1-5, aggregated with correlations ρ into (Standalone) Business Unit Capital and then into Enterprise Capital]

Page 135: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

135

Allocation of diversification benefits

• Pro-rata allocation: Diversified EC allocated to sub-portfolios based on standalone EC

• Marginal capital allocation: the difference between the EC required with the sub-portfolio as a component of the total portfolio and the EC required with the sub-portfolio removed

• Shapley Allocation: the average of the marginal EC when a given portfolio is added to all possible combinations of sub-portfolios in which it can be included (Denault, 2001).

Pro-rata:

$$EC_t \;=\; \frac{EC_t^{SA}}{\sum_{\tau=1}^{m} EC_\tau^{SA}}\; EC^{\mathrm{Diversified}}$$

Marginal:

$$EC_t \;=\; \frac{EC_p - EC_{p \setminus SP_t}}{\sum_{\tau=1}^{m}\left(EC_p - EC_{p \setminus SP_\tau}\right)}\; EC^{\mathrm{Diversified}}$$
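A sketch of the two formulas on hypothetical EC figures (the marginal method additionally needs the portfolio EC with each sub-portfolio removed):

```python
import numpy as np

# hypothetical standalone EC by sub-portfolio and diversified enterprise EC
ec_sa = np.array([40.0, 25.0, 15.0])
ec_diversified = 60.0

# pro-rata: scale standalone EC down to the diversified total
alloc_pro_rata = ec_diversified * ec_sa / ec_sa.sum()

# marginal: EC of the full portfolio minus EC with each sub-portfolio removed
ec_portfolio = 60.0
ec_without = np.array([30.0, 42.0, 50.0])   # hypothetical EC(p \ SP_t)
marginal = ec_portfolio - ec_without
alloc_marginal = ec_diversified * marginal / marginal.sum()

print(alloc_pro_rata, alloc_marginal)
```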

Page 136: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

136

Allocation of diversification benefits

• The Risk Contribution: Based on sub-portfolios’ contribution to the total variance of the loss distribution.

• Tail Risk Contribution Methodology: Based on a sub portfolio’s marginal contribution to the portfolio losses within a defined region of the portfolio loss distribution

Risk Contribution:

$$EC_i \;=\; \frac{\mathrm{cov}(R_i, R_p)}{\sigma_p^2}\; EC^{\mathrm{Diversified}}$$

Tail Risk Contribution:

$$EC_i \;=\; \frac{E\left[L_i \mid L_p \in [VaR_1, VaR_2]\right]}{E\left[L_p \mid L_p \in [VaR_1, VaR_2]\right]}\; EC^{\mathrm{Diversified}}$$

Page 137: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

137

Diversification – No fungibility between the regions

[Diagram: two regional blocks of risk types by business units, each aggregated with correlations ρ into Business Unit Capital; Region 1 Enterprise Capital + Region 2 Enterprise Capital = total Enterprise Capital]

• Diversification is allowed within the region but not between the regions where capital is not fungible

Page 138: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

138

IT infrastructure

• IT infrastructure is essential for supporting the AMA Framework – tracking and coordinating the moving parts.

• The IT system must integrate:
 Loss collection
 Relevant internal and external data must be referenced through the RCSA and Scenario Analysis
 Results of RCSA, Scenario Workshops and internal and external loss data must be integrated with capital modeling and fitting
 Results of the capital modeling must then be available in RCSA and control review to complete the cycle
 Audit tracking and workflow management throughout the entire cycle to demonstrate the “Use-Test” and the use of all 4 elements

• It can be very costly to develop the IT internally. Various components require input and design by different subject matter experts who are usually “siloed” across an enterprise (capital modeling especially)
 How do you get systems to talk to each other if they are developed independently?

• Given the FTEs required for development and the potential pitfalls to achieving coordination, it is often more cost effective to purchase a pre-packaged, off-the-shelf solution

Page 139: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

139

Operational Risk And Regulation Software Rankings

Page 140: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

140

Operational Risk And Regulation Software Rankings

Page 141: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

141

Closing Remarks

• The profile of operational risk is growing in the insurance industry

Solvency II standard formula will be insufficient, requiring the development of internal models

Joint regulators won’t apply a double standard between banks and insurance companies

Rating agencies are increasingly interested in operational risk controls and processes

• The value of an AMA framework is in the governance and validation processes between the 4 inputs: Internal and External data, Scenario Analysis and RCSA.

Distribution fitting and capital modeling is not defendable without transparent governance and validation against Scenario Analysis and RCSA

Page 142: Operational Risk Training Managing Operational Risk  & AMA Toronto Nov 3, 2011

142

Closing Remarks, cont’d

• A well governed AMA has direct business benefits. It provides an effective framework for operational risk quantification and management, facilitating:

increased awareness for operational risk

early identification and thus mitigation of potential operational risk via Scenario Analysis and RCSA discussions, keeping an eye on the horizon rather than on the rear view mirror – being proactive rather than reactive

OpVaR quantification provides a heat map indicating where to focus

• For these benefits to materialize we need to close the cycle of learning – when analyzing the OpVaR results we need to connect the dots

• It separates heavy capital users (Business Units, Products etc.) from others. This more accurate relative quantification of Capital leads to effective capitalization, performance management and pricing.

• Facilitates Cost vs Benefit decisions, such as investing in better controls, insurance vs holding capital