Marsh Analytics - CFO.com


DO THE MATH BEFORE YOU BUY INSURANCE
BY CLAUDE YODER AND DAVE HEPPEN, CONTRIBUTORS

USING BIG DATA TO CAPTURE RISK VOLATILITY
BY CLAUDE YODER AND DAVE HEPPEN, CONTRIBUTORS

Claude Yoder is head of Marsh Global Analytics and Dave Heppen leads the Marsh Global Analytics North American unit.

DO THE MATH BEFORE YOU BUY INSURANCE

Deciding how much insurance to buy can be tough when a company has little or no history of large losses on which to base such decisions.

BY CLAUDE YODER AND DAVE HEPPEN, CONTRIBUTORS

When it comes to deciding how much property-casualty insurance to buy for their companies, CFOs and corporate risk managers face an essential dilemma. On one hand, they don’t want to buy more insurance than their companies need because the cost depletes capital that could be used for other business purposes. On the other hand, if they buy too little insurance, they could put their company’s balance sheet at risk if a significant loss occurs.

Deciding how much insurance to buy is difficult enough on its own; it can be even harder when a company has little or no history of large losses on which to base such decisions. The use of analytics, however, can help executives make more informed decisions about how much insurance coverage is appropriate to buy during their spring renewal.

Although it may be tempting for CFOs and risk managers who have never experienced a large loss at their firms to discount the possibility of one occurring in the future, the exposure still exists. Just as insurers must price coverage to match their exposure to risk if they are to succeed in the long term, executives should consider the possibility of a large event when evaluating the appropriate amounts of coverage to buy from insurers. One need look no further than the aftereffects of recent large retail data breaches and catastrophes like Superstorm Sandy to know why.

Quantifying a Big Exposure

Quantifying a company’s exposure to large losses involves two steps: estimating the likelihood of a large-scale event occurring, and estimating the cost of such an event if it does occur.

For a company with little to no loss history, it’s useful to consult large losses experienced by other companies in the same industry. That requires a robust source of industry loss data. A threshold representing a large event needs to be selected for this exercise, such as all industry loss events above $10 million.

By considering the number of such events across the industry (on an annual basis), in combination with the company’s size relative to the size of the industry, one can estimate the company’s likelihood of a large event occurring. These initial estimates can be further refined based on company-specific considerations that include loss control measures, nature of exposures, and business models.

Using that data, analytics can then help provide a reasonable basis for estimating the potential costs associated with such an event.

It’s important to recognize, however, that although industry losses serve as a useful guide, they’re not a definitive statement of loss potential. For example, if the largest industry event is $100 million, you shouldn’t assume that this is the maximum loss the company could experience. Instead, CFOs and their staffs should consider a range of potential loss outcomes if a large loss occurs, over and above the range suggested by the individual losses.

One way to accomplish this is through loss distributions. Each individual large loss in a data set can be viewed as a result drawn at random from an underlying loss distribution. By fitting a curve to the observed losses, a company can estimate the potential distribution of costs above and beyond what has already been seen in the available data.
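To make the curve-fitting step concrete, here is a minimal Python sketch. The loss figures and the choice of a lognormal distribution are illustrative assumptions, not from the article; in practice the distribution family would be selected by goodness-of-fit testing against a robust industry data set.

```python
import numpy as np
from scipy import stats

# Hypothetical industry large-loss data ($ millions), all above a $10M threshold.
industry_losses = np.array([12, 15, 18, 22, 30, 35, 48, 60, 85, 125])

# Fit a two-parameter lognormal curve to the observed losses (floc=0 pins the
# location parameter so only shape and scale are estimated).
shape, loc, scale = stats.lognorm.fit(industry_losses, floc=0)
fitted = stats.lognorm(shape, loc=loc, scale=scale)

# The fitted curve extends beyond the largest observed loss, which is the point:
# it gives a view of outcomes the data set has not yet shown.
for pct in (0.80, 0.90, 0.99):
    print(f"{int(pct * 100)}th percentile loss: ${fitted.ppf(pct):,.0f}M")
```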

Consider the following example: Company XYZ, which has a 1 percent market share of its industry, is beginning its insurance-renewal discussions. Let’s say the company, which has never experienced a loss above $10 million, wants to figure out if $75 million in general liability insurance is the best amount for it to buy.

Upon further investigation, the finance department learns that 50 general liability losses of more than $10 million have occurred in the industry over the last five years. Insured losses from those events range from $10 million to $125 million, with an average loss of $50 million.

With 50 industry losses of more than $10 million over a five-year period, the number of industry events above $10 million can be estimated as 10 per year (50/5). Since Company XYZ has a 1 percent market share, its likelihood of a loss above $10 million can be estimated at 10 percent per year: 10 losses per year × 1% = 0.10, or 10% per year.
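That frequency estimate reduces to two lines of arithmetic; a minimal sketch in Python:

```python
industry_losses_above_10m = 50  # large industry losses observed
observation_years = 5
market_share = 0.01             # Company XYZ's 1 percent share of the industry

events_per_year = industry_losses_above_10m / observation_years  # 10.0 per year
p_large_loss = events_per_year * market_share                    # 0.10
print(f"Estimated annual likelihood of a loss above $10M: {p_large_loss:.0%}")  # 10%
```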

In this example, we assume that the distribution that best fits the industry data yields the following results:

• Average loss: $50 million.
• 1-in-5 loss (80th percentile) = $75 million.
• 1-in-10 loss (90th percentile) = $100 million.
• 1-in-100 loss (99th percentile) = $150 million.

Evaluating Coverage

Given these results, how should Company XYZ evaluate what buying $75 million in total coverage would mean? The thought process may go as follows (a short sketch of the arithmetic appears after this list):

• If we have a large loss, the likelihood that this amount of coverage would be enough is 80 percent (a 1-in-5 loss).

• But there’s a 10 percent chance that the loss will be at least $25 million above those coverage limits ($100 million – $75 million), and a 1 percent chance that it will be at least $75 million above our limits ($150 million – $75 million).

• Our likelihood of experiencing a large loss is 10 percent.

• By combining the 10 percent likelihood of a large loss with the 80 percent likelihood that our current coverage limits contain large losses, our limits appear to be adequate 98 percent of the time: 100% – (10% × (1 – 80%)) = 98%.

• If we want limits to be sufficient 99 percent of the time, we would need to increase them to the 90th percentile of the loss distribution, or $100 million: 100% – (10% × (1 – 90%)) = 99%.
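Here is that arithmetic as a minimal Python sketch, reusing the 10 percent annual large-loss likelihood estimated earlier (the helper name and percentile table layout are illustrative, not from the article):

```python
# Annual likelihood of a large (> $10M) loss, from the frequency estimate above.
p_large_loss = 0.10

# Fitted loss distribution, as (percentile, loss in $ millions) pairs.
loss_percentiles = {0.80: 75, 0.90: 100, 0.99: 150}

def limit_adequacy(limit_millions: float) -> float:
    """P(coverage sufficient in a year) = 1 - P(large loss) * P(loss exceeds limit)."""
    # Highest percentile whose loss is at or below the limit.
    p_contained = max(
        (pct for pct, loss in loss_percentiles.items() if loss <= limit_millions),
        default=0.0,
    )
    return 1 - p_large_loss * (1 - p_contained)

print(f"$75M limit adequate:  {limit_adequacy(75):.0%}")   # 98%
print(f"$100M limit adequate: {limit_adequacy(100):.0%}")  # 99%
```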

Other Considerations

Aside from insurance limits, there are other important considerations concerning the best use of risk transfer. A company’s risk-bearing capacity and appetite, and its market-based insurance premiums, are important aspects when deciding how to retain and transfer risk. Sophisticated techniques can be used to ensure all relevant factors are part of the decision-making framework.

Analytics can provide the credible supporting documentation needed for discussions of insurance limits and other risk transfer at the executive and board levels. With boards becoming more demanding and specific about the risks facing companies today, the better prepared executives are with data heading into those discussions, the more satisfied a company’s stakeholders will be.

Claude Yoder is head of Marsh Global Analytics and Dave Heppen leads the Marsh Global Analytics North American unit.

ePrinted and posted with permission to Marsh LLC from CFO.com, April 18, 2014. Visit our website at www.cfo.com. © CFO Publishing LLC. All Rights Reserved. Foster Printing Service: 866-879-9144, www.marketingreprints.com.

USING BIG DATA TO CAPTURE RISK VOLATILITY

By factoring in the Economic Cost of Risk, CFOs can capture the ups and downs of their companies’ perils.

BY CLAUDE YODER AND DAVE HEPPEN, CONTRIBUTORS

Companies have traditionally measured their exposure to risks through total cost of risk (TCOR) calculations. Definitions vary, but TCOR represents the sum of these larger elements: the insurance premiums a corporation spends, the cost of the losses it retains, and other items such as administrative costs, brokerage fees, and taxes and assessments.

The shortcoming of this line of thinking is that it places no value on uncertainty, instead treating losses as a known quantity. And yet the amount of losses at any one company fluctuates unpredictably from year to year. In fact, this uncertainty is the main reason that companies buy insurance and create loss control and mitigation programs.

The importance of measuring uncertainty and volatility has led to the creation of a new measure of risk: the economic cost of risk (ECOR).

ECOR is defined as the sum of:

• Expected retained losses.
• Premiums.
• Other expenses (for example, claims-handling fees and the cost of collateral).
• Implied risk charge.

Unlike TCOR, ECOR incorporates an implied risk charge (IRC) that evaluates the severity and likelihood of detrimental outcomes and their associated cost. Because no company is perfectly protected against the unexpected, every organization bears an implied charge for its unexpected risk.

Thus, IRC can be quantified for any insurance or mitigation structure. IRC incorporates a company’s capital costs and provides a direct linkage between insurance purchasing decisions and financial performance metrics. It also creates a necessary and more meaningful way for a company’s finance and risk management functions to engage with each other strategically.

The Metric in Practice

Consider the two hypothetical companies in the charts below, with the following loss history, assuming constant size over the past five years (“Ground-Up Losses” are a measurement of the original losses to a company):

[Charts: five-year ground-up losses for Company A and Company B]

Each of these companies has an average loss of $10 million per year. Traditional TCOR analysis might suggest that there is no difference in the cost of risk for these two companies. But there is much more volatility in Company B’s losses.

Such volatility can lead to unexpected and unpleasant effects on a company’s earnings and performance. Intuitively, it feels as if the cost of risk for Company B should be higher than for Company A because of the higher downside risk. The question then becomes: How do you put a value on this volatility?

ECOR measures this additional cost of risk through the IRC, which is computed as the capital at risk (expected losses above average losses) multiplied by the company’s cost of capital. Stochastic modeling is typically used to measure IRC. For purposes of simplicity, however, we can show how IRC for Company A and Company B can be measured based on their historical losses.

For Company A, average losses are $10 million, and there is one year, 2011, with losses above that amount, of $11 million. Total losses above the Company A average are thus $1 million ($11 million – $10 million). There is a 20 percent chance of experiencing losses above the average (one year out of five). Therefore, expected losses above the average annually are 20 percent multiplied by $1 million, or $200,000.

For Company B, average losses are also $10 million. But there are two years with losses above $10 million: 2009 ($15 million) and 2011 ($18 million). Therefore, total losses above expected are $13 million (the sum of $15 million – $10 million = $5 million, and $18 million – $10 million = $8 million). Average losses above expected are $6.5 million (the average of $5 million and $8 million). There is a 40 percent chance of experiencing losses above the average (two years out of five). Therefore, expected losses above the average are 40 percent multiplied by $6.5 million, or $2.6 million.

The IRC for Company A is $200,000 multiplied by the company’s cost of capital. In relation to ECOR, this is a trivial amount, which should be the case when losses are highly predictable.

The IRC for Company B is $2.6 million multiplied by the company’s cost of capital. That becomes a significant cost component of ECOR and, in this example, is more than 10 times higher for Company B than Company A.

That should be the case when losses are highly volatile, particularly when considering that the capital is at risk for the lifetime of the claims rather than only a short period, such as 12 months. By measuring the cost of risk through the lens of ECOR, companies can now place a value on uncertainty.
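The IRC arithmetic for both companies can be reproduced in a few lines. In this minimal Python sketch, only the $10 million averages and the above-average years ($11 million for Company A; $15 million and $18 million for Company B) come from the article; the remaining annual figures and the 10 percent cost of capital are illustrative assumptions chosen to match those averages.

```python
def implied_risk_charge(annual_losses, cost_of_capital):
    """IRC = expected losses above the average, times the cost of capital."""
    average = sum(annual_losses) / len(annual_losses)
    excess = [loss - average for loss in annual_losses if loss > average]
    if not excess:
        return 0.0
    # Probability of an above-average year times the average excess in such years.
    expected_excess = (len(excess) / len(annual_losses)) * (sum(excess) / len(excess))
    return expected_excess * cost_of_capital

# Hypothetical ground-up losses ($ millions) for 2009-2013.
company_a = [10, 10, 11, 10, 9]  # average $10M; one above-average year ($11M in 2011)
company_b = [15, 6, 18, 5, 6]    # average $10M; two above-average years ($15M, $18M)

cost_of_capital = 0.10  # assumed 10% cost of capital, for illustration

print(f"Company A IRC: ${implied_risk_charge(company_a, cost_of_capital) * 1e6:,.0f}")
print(f"Company B IRC: ${implied_risk_charge(company_b, cost_of_capital) * 1e6:,.0f}")
```

With these inputs, Company A’s expected losses above average are $200,000 and Company B’s are $2.6 million, so Company B’s IRC comes out 13 times Company A’s, consistent with the article’s “more than 10 times” comparison.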

Claude Yoder is head of Marsh Global Analytics and Dave Heppen is Marsh Global Analytics North American Leader.

ePrinted and posted with permission to Marsh LLC from CFO.com, December 2, 2013. Visit our website at www.cfo.com. © CFO Publishing LLC. All Rights Reserved. Foster Printing Service: 866-879-9144, www.marketingreprints.com.
