
The Financial Reporter
The Newsletter of the Life Insurance Company Financial Reporting Section

An International Financial Reporting Standards (IFRS) Phase II Discussion Paper Primer
by Mark J. Freedman and Tara J.P. Hansen

December 2007 Issue No. 71

US GAAP and IFRS are aligning their concepts, principles and rules. This means U.S. insurance companies will almost certainly be impacted by any accounting changes that take place, whether they occur domestically or abroad. The only question is how quickly this will occur.

Recently, the SEC issued for public comment a proposal to accept from foreign private issuers financial statements prepared in accordance with IFRS without any reconciliation to US GAAP. The SEC has also issued a concept release to obtain information as to whether U.S. issuers should have the option to prepare financial statements in accordance with IFRS. In addition, in August 2007, the FASB issued an invitation to comment on the International Accounting Standards Board's (IASB) Phase II Discussion Paper (Discussion Paper), entitled "Preliminary Views on Insurance Contracts," in order to assess whether there is a need for a project on accounting for insurance contracts and whether or not to work with the IASB in a joint project.

The landscape of financial reporting here in the United States is changing rapidly, especially with the issuance of FASB Statements No. 157 and 159. Statement No. 157, "Fair Value Measurements," applies to all existing pronouncements under GAAP that require (or permit) the use of fair value. It also establishes a framework for measuring fair value in GAAP, clarifies the definition of fair value within that framework, and expands disclosures about the use of fair value measurements. Statement No. 159, "The Fair Value Option for Financial Assets and Financial Liabilities," including an amendment of Statement No. 115, "Accounting for Certain Investments in Debt and Equity Securities," permits entities to measure certain other items not included within the scope of Statement No. 157 at fair value. The issuance of Statement No. 159 by the FASB is a big step forward in requiring fair value reporting. Insurers looking to adopt Statement No. 159 for their insurance contracts will be faced with the challenge of determining

continued on page 3 >>


The Financial Reporter

Published by the Life Insurance Company Financial Reporting Section of the Society of Actuaries

475 N. Martingale Road, Suite 600
Schaumburg, Illinois 60173
p: 847.706.3500  f: 847.706.3599  w: www.soa.org

This newsletter is free to section members. Current-year issues are available from the communications department. Back issues of section newsletters have been placed in the SOA library and on the SOA Web site (www.soa.org). Photocopies of back issues may be requested for a nominal fee.

2006-2007 Section Leadership
Henry W. Siegel, Chairperson
Jerry F. Enoch, Vice-Chairperson
Craig Reynolds, Secretary
Mike Y. Leung, Treasurer (Annual Program Committee Rep.)
Errol Cramer, BOG Partner
Rod L. Bubke, Council Member
Susan T. Deakins, Council Member
Jason A. Morton, Council Member
Yiji S. Starr, Council Member
Vincent Y.Y. Tsang, Council Member (Spring Program Committee Rep.)
Kerry A. Krantz, Web Coordinator

Rick Browne, Newsletter Editor
KPMG LLP
303 East Wacker Drive, Chicago, IL 60601
p: 312.665.8511  f: 312.275.8509
e: [email protected]

Carol Marler, Associate Editor
e: [email protected]

Keith Terry, Associate Editor
e: [email protected]

Sam Phillips, Staff Editor
e: [email protected]

Julissa Sweeney, Graphic Designer
e: [email protected]

Mike Boot, Staff Partner
e: [email protected]

Christy Cook, Project Support Specialist
e: [email protected]

Facts and opinions contained herein are the sole responsibility of the persons expressing them and shall not be attributed to the Society of Actuaries, its committees, the Life Insurance Company Financial Reporting Section or the employers of the authors. We will promptly correct errors brought to our attention.

Copyright © 2007 Society of Actuaries. All rights reserved. Printed in the United States of America.

What's Inside
December 2007, Issue No. 71

1  An International Financial Reporting Standards (IFRS) Phase II Discussion Paper Primer
A summary of the provisions of the recently issued IASB Phase II Discussion Paper.

Mark J. Freedman and Tara J.P. Hansen

6  The Siren Call of Models—Beware of the Rocks
Thoughts from the Section Chair.

Henry W. Siegel

8  Potential Implications of IASB's Preliminary Views on Insurance Contracts
What are some of the implications of the recently issued IASB Discussion Paper? This is one of a series of articles by the American Academy of Actuaries' Life Insurance Financial Reporting Committee.

Leonard Reback

11  Risk Margins to the Non-Market Risks under FAS 157: Suggested Approach
A methodology for calculating risk margins related to policyholder behavior risks when measuring fair value under FAS 157.

Vadim Zinkovsky

17  A Critique of Fair Value as a Method for Valuing Insurance Liabilities
A discussion of some of the challenges the insurance industry will face as we move towards fair value accounting for insurance products.

Darin Zimmerman

22  Embedded Value Reporting Redux
The Academy's Life Financial Reporting Committee announces the soon-to-be-released practice note on EV.

Tina Getachew

23  The Lowly Loss Ratio
The mathematical underpinnings of loss ratios.

Paul Margus

32 What’s OutsideArticles appearing recently in other publications that may be of interest to financial reportingactuaries.

Carol Marler

What’s Inside

Financial Reporter | December 20072

Page 3: The Financial Reporter - soa.org · The Financial Reporter ... An International Financial Reporting Standards (IFRS) Phase II Discussion Paper Primer

the fair value of those insurance contract liabilities and are considering the principles in the Discussion Paper.

Developing an understanding of the current direction of IFRS for insurance products is imperative for U.S. accounting and actuarial practitioners. This IFRS Phase II Discussion Paper Primer has been designed to provide a summary of the key proposals and issues that are facing U.S. insurers as this material evolves into authoritative guidance.

The Discussion Paper, issued by the IASB in May 2007, contains the IASB's preliminary views on the various recognition and measurement components considered in accounting for insurance contracts and identifies issues that are still under consideration. The IASB has invited interested parties to comment on the Discussion Paper by November 16, 2007. This process will ultimately lead to an insurance standard that will replace the current IFRS 4, "Insurance Contracts."

This article summarizes the main proposals in the Discussion Paper and, in particular, emphasizes those issues where the views of the IASB are not uniformly accepted.

Objectives
The Discussion Paper reflects a principles-based approach with additional high-level and prescriptive guidance. Insurance contracts are to be subject to the same general principles as those of other financial service entities. This approach seeks to ensure consistency of financial statements for insurance, asset management and banking companies. The revised platform for insurers should lead to increased comparability of financial statements, better identification of key value drivers, and enhanced share values due to improved transparency.

Measurement Model
The proposed measurement model for insurance liabilities rests on three building blocks (a simple numerical sketch follows the list below):

• Explicit, unbiased, market-consistent, probability-weighted, and current estimates of expected future cash flows;

• Discount rates consistent with prices observable in the marketplace; and

• Explicit and unbiased estimates of the margin that market participants require for bearing risk (risk margin) and for providing any services (service margin).
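To make the three building blocks concrete, here is a minimal numerical sketch in Python. All cash flows, discount rates and margins are hypothetical figures chosen for illustration; they are not amounts drawn from the Discussion Paper.

```python
# Illustrative sketch of the three building blocks: probability-weighted expected
# cash flows, discounted at market-consistent rates, plus explicit risk and
# service margins.  All figures below are assumptions for demonstration only.

def building_block_liability(expected_cash_flows, spot_rates, risk_margin, service_margin):
    """PV of expected cash flows (blocks 1 and 2) plus explicit margins (block 3)."""
    pv = sum(cf / (1 + r) ** t
             for t, (cf, r) in enumerate(zip(expected_cash_flows, spot_rates), start=1))
    return pv + risk_margin + service_margin

expected_cash_flows = [100.0, 95.0, 90.0]   # probability-weighted net outflows, years 1-3 (assumed)
spot_rates = [0.045, 0.047, 0.050]          # market-consistent discount rates (assumed)
risk_margin = 12.0                          # compensation market participants require for bearing risk (assumed)
service_margin = 3.0                        # compensation for other services, e.g., investment management (assumed)

print(round(building_block_liability(expected_cash_flows, spot_rates,
                                     risk_margin, service_margin), 2))
```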

The expected future cash flows should be explicit, current and consistent with observable market prices, and exclude entity-specific cash flows. There is a subtle, but important, difference between market-consistent and entity-specific assumptions. Claims assumptions, for example, would presumably be the same for both bases, but expense assumptions, for instance, may not be. This is true because the claims experience relates to the block of business and would be transferred with the block of business upon sale, but any entity-specific expense savings that the current entity enjoys may not be transferred with the block.

The IASB believes that discounting should be applied to all liabilities in an effort to enhance comparability of financial statements. Many U.S. and Japanese insurers believe that most non-life insurance liabilities should not be discounted, and they have already expressed this view to the IASB.

Risk margins convey a level of uncertainty associated with future cash flows. These margins should be market-consistent and reassessed at each reporting date. The IASB has given high-level guidance with respect to risk margins, but has left the details related to their development to the insurance industry. One example of the high-level guidance is that operational risk can only be provided for if it is related directly to the liability itself. Acceptable approaches currently appear to include cost of capital, percentile, Tail Value at Risk, and multiple of standard deviation.
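Because the Discussion Paper leaves the calibration details to the industry, the sketch below simply contrasts three of the approaches named above (a percentile margin, a Tail-Value-at-Risk-style margin and a cost-of-capital margin) on a simulated loss distribution. The lognormal shape, the percentiles and the 6 percent cost-of-capital rate are assumptions for illustration only.

```python
# Contrast candidate risk-margin approaches on a simulated distribution of
# discounted liability outcomes.  All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=np.log(100), sigma=0.25, size=100_000)  # hypothetical PV of claims
best_estimate = losses.mean()

# Percentile approach: margin = 75th percentile less the best estimate
percentile_margin = np.percentile(losses, 75) - best_estimate

# Tail Value at Risk (CTE-90) approach: average of the worst 10% less the best estimate
tvar_margin = losses[losses >= np.percentile(losses, 90)].mean() - best_estimate

# Cost-of-capital approach: capital at the 99.5th percentile, charged at an assumed 6% rate
capital = np.percentile(losses, 99.5) - best_estimate
coc_margin = 0.06 * capital

print(f"best estimate         : {best_estimate:8.1f}")
print(f"percentile margin     : {percentile_margin:8.1f}")
print(f"TVaR (CTE-90) margin  : {tvar_margin:8.1f}")
print(f"cost-of-capital margin: {coc_margin:8.1f}")
```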

Service margins represent what market participants require for providing other services in addition to collecting premiums and paying claims. These margins should also be market-consistent. The investment management function in variable (and presumably other interest-sensitive) contracts is an example of a service for which an explicit service margin may be established under this new framework. This concept is generally not considered in current pricing and embedded value techniques used by insurance companies.


continued on page 4 >>

Mark J. Freedman, FSA, MAAA, is a Principal at Ernst & Young LLP in Philadelphia, Pa. He can be reached at 215.448.5012 or [email protected].

Tara J.P. Hansen, FSA, MAAA, is a Senior Actuarial Advisor at Ernst & Young LLP in New York, N.Y. She can be reached at 212.773.2329 or [email protected].


The primary implementation issue related to both risk and service margins is how to calibrate them.

Current Exit Value
The IASB is proposing that an insurer measure insurance liabilities at what it calls "current exit value." This represents the amount another party would reasonably expect to receive in an arm's-length transaction to accept all contractual rights and obligations of a liability. Under an exit value measurement framework, there are no requirements to break even at issue. Such a framework will prove challenging in terms of determining an appropriate level of risk margin, since the calibration is a very subjective process under an exit value framework. In particular, there is not currently a secondary market to transfer insurance liabilities. While one could look to recent acquisitions and reinsurance arrangements, the specific components rarely, if ever, become public information. However, these, combined with the retail market, could be taken into consideration in a hypothetical model.

A further point to note is that "current exit value" may or may not be equivalent to "fair value," as defined in the IASB's discussion paper on fair value measurement (to be used when fair value must be applied under IFRS). This discussion paper on fair value measurement is almost a word-for-word copy of Statement No. 157, which is FASB's version of fair value that has already been adopted. At the time this article is being written, the IASB has noted that there do not appear to be material differences between these two concepts, but it intends to explore this issue more thoroughly.

"Entry value" is another method of risk margin calibration covered in the Discussion Paper. Under the "entry value" framework, no gain or loss at issue would arise, since risk margins would be calibrated to the premium received less acquisition expenses. The IASB has rejected this approach in favor of current exit value, even though the CFO Forum, a discussion group made up of the Chief Financial Officers of 19 major European insurers, and GNAIE, a group made up of North American and Japanese insurance companies, lobbied for an entry-value type approach.

Own Credit Standing
The measurement of liabilities should include the effects, if any, of the credit characteristics of the liability (insurer's credit standing). In the Discussion Paper, the IASB wants the impact of this item to be disclosed. This issue has been rather controversial to some actuaries and accountants, although the IASB does not believe the impact will be large.

To be consistent with a fair value measurement of other financial instruments, the IASB has said the creditworthiness of the insurance contracts should be reflected in the measurement of the liability. The implicit assumption is that the transfer is to a third party of similar credit standing. This topic has generated much discussion; many do not agree with the rationale of reducing liabilities when a company's financial condition deteriorates. However, in a well-regulated market with guaranty funds, it is not expected that this will have a significant impact on the level of reserves ultimately held under IFRS Phase II. There may be other offsets to this potential reduction of liabilities that still need to be explored.

For example, if a cost of capital approach is used for deriving risk margins, an entity's cost of capital will increase as credit standing declines, implying a higher risk margin and therefore higher liabilities. Also, actuarial models sometimes already reflect the effects on pricing of credit standing indirectly via lapse rates, e.g., for life insurers.

Treatment of Future Renewal Premiums
The Discussion Paper indicates that future premiums would be included in the liability calculation only to the extent that either:

1. The insurer has an unconditional contractual obligation to accept premiums whose value is less than the value of the resulting additional benefit payments; or

2. They are required for the policyholder to continue to receive guaranteed insurability at a price that is contractually constrained.

This is a controversial issue with respect to North American style universal life and flexible annuity products, since current pricing techniques include an assumption for future expected premiums. If future expected premiums cannot be taken into account, there may be a significant loss at issue in today's products, largely due to heaped commission structures.
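As a rough numerical illustration of that point (all figures hypothetical), the sketch below shows how a heaped acquisition cost that pricing expects to recover out of renewal-premium loadings falls straight into a day-one loss when those renewals cannot be recognized.

```python
# Hypothetical decomposition of premium loadings intended to recover acquisition
# costs.  If renewal premiums (and their loadings) cannot be recognized at issue,
# the heaped commission is expensed with nothing to offset it.  All figures are assumptions.

acquisition_costs         = 120.0  # heaped commissions paid and expensed at issue (no DAC)
pv_loadings_first_premium = 20.0   # expense loadings in the premium already received
pv_loadings_in_renewals   = 100.0  # loadings in expected, but unrecognized, renewal premiums

strain_if_renewals_recognized   = acquisition_costs - (pv_loadings_first_premium + pv_loadings_in_renewals)
strain_if_renewals_unrecognized = acquisition_costs - pv_loadings_first_premium

print("day-one strain, renewals recognized  :", strain_if_renewals_recognized)    # roughly break-even
print("day-one strain, renewals unrecognized:", strain_if_renewals_unrecognized)  # loss at issue
```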

Unit of Account
The liability, including the determination of risk margins, should be determined on a portfolio basis. "Portfolio basis" refers to insurance contracts that are subject to similarly broad risks and managed together as a single portfolio. Determining risk margins on a portfolio basis means that one must exclude the benefits of diversification between portfolios.

Deferred Acquisition Costs (DAC)
Under the framework described by the Discussion Paper, there will be no separate DAC asset to account for the investment the insurer makes in the customer relationship. Acquisition costs are to be expensed when incurred, as they play no direct role in determining current exit value.

Discretionary Participating Features and Universal Life Contracts
In the IASB's current view, policyholder participation rights consisting of dividends and excess interest credits for life insurers do not create a liability until the insurer has an unconditional obligation to policyholders. A prior claim without an obligation should not be recognized as a liability; the amount expected to be paid in the future should be treated as part of equity. In assessing whether an insurer has a constructive obligation to pay dividends to participating policyholders, the IASB will rely on its Conceptual Framework and IAS 37, "Provisions, Contingent Liabilities, and Contingent Assets." This could potentially be a large issue for U.S. participating and interest-sensitive life and annuity products, since current pricing takes into account expected credits to policyholders and not just guaranteed payments.

Summary
The Discussion Paper provides for significant changes to financial reporting for insurance contracts and related activity under IFRS.

The anticipated timeline for IFRS Phase II for insurers is as follows:

• An Exposure Draft is expected to be issued in late 2008.

• A final standard is expected to be adopted in late 2009.

• Implementation is projected to become applicable in 2011.

Many European companies have been pilot testing a "current exit" value type standard for several years; therefore, most of these companies will be prepared.

However, as discussed earlier in this article, the accounting changes in Europe are having a domino effect in the United States. While several of the large U.S.-based insurance companies have been in tune with the development of the proposed guidance, others appear to be largely asleep at the wheel, probably because they may not have appreciated the significance of US GAAP aligning with IFRS. The current events should be a wake-up call to U.S. insurers, as preparation for managing under a current exit value approach can take several years. The competitive landscape is changing and action is required.


The Siren Call of Models—Beware of the Rocks
by Henry W. Siegel

As actuaries, we are uniquely concerned with both measuring the past and projecting the future. We look at the past in order to get insight into the future, but we can't assume that the past will always continue into the future.

One of the most common misunderstandings about actuaries is that we forecast the future. In fact, we project the future; we don't forecast it. The problem is, sometimes we forget this.

History 101
Many years ago I was involved in pricing the first GICs. Not directly, but in an oversight capacity. We looked at various scenarios—I think it was the first steps in a stochastic model and I can still remember some of the conversations. We noted early that a hump pattern (an increase in interest rates followed by a decrease) was good, so long as the decrease eliminated capital losses when the GIC ended. We also noted that a straight increase in interest rates was bad. Very bad.

We discussed this: what happens if interest rates go up to 12 percent, half a percent a year from when we priced it? "That can never happen," was the response—and it seemed reasonable. After all, this was around 1975 and interest rates since WWII had been stable, increasing slightly, for many years. Surely interest rates could never go so high—the government would do something first. Well, you know how that worked out. The government decided that curbing inflation was more important than keeping interest rates down.

There have been lots of other times when the crystalball was cloudy:

• Buy Junk Bonds (Executive Life)
• Invest in Mortgages and Real Estate (Confederation Life)
• Assume normal lapses on lapse-supported business (Long-Term Care and Canadian Term-to-100)

But the classic example is Long-Term Capital Management. Every actuary should read "When Genius Failed," the story of this failure. For those unfamiliar with this, the Long-Term Capital Management hedge fund had Nobel Prize winners (among them Scholes, as in Black-Scholes) and many other extremely bright and successful people. Their fatal flaw was they believed their models.

They thought they had diversified their risks. They had invested in securities in many countries; if one went down, the others would provide stability. But then came the Asian Flu and the Russian crisis, and suddenly EVERY country's securities went under. Then their competitors got wind and started moving against them. So the company would have collapsed if the banks hadn't bailed them out. There was a more complete article on this in The Actuary back in August.

The Future
So you might be wondering why this all matters to financial reporting actuaries. It matters because there are proposals in the market that don't seem to have learned these lessons.

For instance, the International Accounting Standards Board is suggesting that it's OK to accrue future expected profits on the day a policy is issued. The American insurance industry generally opposes gains at issue, the Academy Task Force on IFRS generally opposes it in all but the rarest circumstances, but some accountants and regulators on the international front seem less concerned.

"Surely there are times you know you're going to make a profit," some have said, even though the future remains as cloudy as ever. "Our models show that we'll have gains," others say, stubbornly believing their models will turn out to be real.

We've seen gains at issue before. Enron used it. Many sub-prime lenders used it. What makes us think it makes sense for insurance policies? If I were allowed to issue only one required piece of guidance for life accounting, "no gain at issue" would be it.

Another issue involves adjusting for portfolios with diversified risk. "We can reduce our required capital because we have diversified our risks. If we lose money on mortality on our life insurance, our annuities will bail us out!" they claim, ignoring that the people with annuities have different demographics than those with life insurance and that the mortality changes may occur at ages that affect the two businesses differently. "When things go wrong, all correlations approach 1" should be a lesson learned from Long-Term Capital Management, but some regulators seem willing to go along with a diversity credit.

The Society announced on the day I am writing this that they've completed "a research project that explored the covariance and correlation among various insurance and non-insurance risks in the context of risk based capital." I was happy when I saw this announcement since I think it's an area that shrieks for further research, but further examination showed a problem. There's a bibliography and a theoretical mathematical model, but there is no data to actually determine if any covariance or correlation exists! This is not a criticism of the SOA, but how can one argue for a diversity credit if there's no data to determine if it exists?

As actuaries, we need to avoid falling in love with our models. It's easy to become entranced with them because, as actuarial students, our job was often to feed those models, and we don't always step back and look at the whole picture. How many times have you lamented the inability of beginning students to notice when the model has produced something obviously wrong?

So as actuaries we need to be conscious that the future is uncertain. No surprise there, I hope. Also, as actuaries we are uniquely qualified to explain that to those who do believe a model is real. I urge all Section members to adopt this as a personal goal.

Given all the above, it may surprise you that I am confident about one aspect of the future. This is my final column as Section Chair and I am confident that the Section will continue to thrive. As I have said before, we are in a period of exhilarating change, when all the financial reporting paradigms we've known will be changing. Whatever replaces them, financial reporting actuaries will be even more essential in the future.

Finally, I want to thank the intrepid editor of the Financial Reporter, Rick Browne, for putting up with my "just in time" style of writing. If you haven't written for the Financial Reporter, pick up a keyboard and send Rick an article.

And always remember:

Insurance accounting is too important to be left to the accountants!

Henry W. Siegel, FSA, MAAA, is vice president, Office of the Chief Actuary, with New York Life Insurance Company in New York, N.Y. He may be reached at [email protected].


Potential Implications of IASB's Preliminary Views on Insurance Contracts
by Leonard Reback

The IASB Discussion Paper on Insurance Contracts presents the preliminary view that insurance liabilities should be valued at "current exit value" for GAAP financial reporting purposes. See the article in this issue, "An International Financial Reporting Standards (IFRS) Phase II Discussion Paper Primer" by Tara Hansen and Mark Freedman, for a more detailed examination of the preliminary views expressed in the Discussion Paper. Some of the implications of those views may be surprising. A few are enumerated below.

Credit Standing
As the Discussion Paper is currently written, the impact of an insurer's credit standing will be a factor in the discount rate for its liabilities. If the credit standing were to decrease, it would cause liability values to decrease, increasing income and surplus. Conversely, if credit standing were to improve, it would cause liability values to increase, reducing income and surplus. It may be difficult to explain these gains when there is a credit downgrade, generally thought of as a "negative" event, or the losses when there is a credit upgrade, which is generally considered a "positive" event.

Earnings Volatility
Another implication is the potential for earnings volatility. It may not be surprising that if there is duration, convexity or hedging mismatch between the assets and liabilities there could be earnings volatility from changes in interest rates or other capital market fluctuations. However, even where assets and liabilities are well matched, there could be volatility if some assets are accounted for in the income statement on a basis other than fair value (or a similar measure). After all, current accounting does not require changes in fair values of securities or most other financial instruments to flow through net income. And certain invested assets, such as real estate, are not even eligible for a fair value option.

Earnings volatility might also emerge from other sources, even if liabilities are duration matched and well hedged. All of an insurer's liabilities would reflect its own credit standing. Invested assets, however, typically have diverse credit ratings, not necessarily equal to the insurer's. Thus, if there were a non-parallel shift in credit spreads, the asset and liability values would not move consistently, creating earnings volatility.

For example, assume an insurer has an AA rating. And assume that AA credit spreads relative to the risk-free rate increase by 10 basis points. Further, assume that the credit spreads on invested assets increase by an average of 20 basis points. Even if assets and liabilities were otherwise well matched and properly hedged, the non-parallel shift in credit spreads would cause the asset fair values to decrease more than the liability current exit values, generating a potentially large loss.
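The arithmetic behind this example can be sketched with a first-order (duration-only) approximation. The durations and portfolio sizes below are assumptions used purely to reproduce the direction of the effect described in the article.

```python
# First-order (duration-only) illustration of a non-parallel credit spread shift:
# the insurer's own (AA) spread widens 10 bp while the spreads on its invested
# assets widen 20 bp on average.  Starting values and durations are assumptions.

assets = 10_000.0
liabilities = 10_000.0          # assume otherwise well matched
duration = 8.0                  # same duration on both sides for simplicity

asset_spread_shift = 0.0020     # +20 bp on invested assets
own_spread_shift   = 0.0010     # +10 bp on the insurer's own (AA) obligations

change_in_assets      = -duration * asset_spread_shift * assets        # -160
change_in_liabilities = -duration * own_spread_shift   * liabilities   # -80

net_income_impact = change_in_assets - change_in_liabilities           # -80: a loss
print(net_income_impact)
```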

This could occur if there is a perception in the market that an insurer's credit standing has changed, even if no change has actually occurred. An example could be presented from recent events, as follows. Assume that during the recent sub-prime credit crisis some insurers were perceived by the market to be exposed to sub-prime credit risk. The credit spreads for their own obligations would have increased, even if in reality they had no such exposure. This increased credit spread, however, would likely have been considered consistent with observed market cash flows, and thus may have been required to be reflected in discounting those insurers' liabilities. This would have generated a reduction in the current exit value of an insurer's liabilities, generating potentially large gains, whether or not the insurer's creditworthiness was actually impacted. If those credit spreads reversed in subsequent periods, the gains would reverse, producing large losses. In essence, the market's view of a company's credit risk would likely be deemed "correct" for purposes of valuing the liabilities, whether or not that view was accurate.

Also, changes in liability experience and assumptions would create earnings volatility, similar to current DAC unlocking on FAS 97 contracts. For example, if current estimates of future mortality were to change, that change would impact the current exit value of the liabilities immediately, and would impact net income. However, there would be no corresponding change in invested assets (although reinsurance assets might be impacted). Depending on the direction of the change in liability current exit value, a large gain or loss could result. These gains or losses could be greater than the impacts currently seen from DAC unlocking on FAS 97 contracts, because DAC unlocking impacts are mitigated by the amortization ratio, which is typically less than 100 percent. With current exit value changes, there would be no such mitigation. And under current exit value these changes would also impact contracts currently accounted for under FAS 60 under US GAAP.

Gain or Loss at Issue
Gains or losses at issue are another possibly surprising impact. Under current GAAP, gains generally do not occur upon issuance of a contract. Losses upon issue are rare, and are generally limited to situations where acquisition costs are not recoverable, or where the present value of expected benefits and expenses under the contract exceeds the present value of premiums.

Under the IASB preliminary view, gains at issue would be permitted (although the IASB indicates that they expect this to be rare), and losses may occur even if expected premiums are adequate to cover expected benefits and expenses. This may happen if the premiums are adequate to cover benefits and expenses, but not adequate to cover benefits and expenses plus a risk margin consistent with that of other market participants.

Another situation where a loss may occur at issue is when an insurer has a particularly efficient expense structure and builds the resulting low expenses into its pricing, generating a low premium relative to the rest of the market. Because the market level of expenses is higher than that of the insurer, the insurer may need to assume those higher expenses when determining its liability value. And where those expenses are higher than what was assumed in determining the premium, there may be a resulting loss at issue. Of course, if the insurer's own expected low expenses do materialize, the insurer will realize higher profits as those expenses emerge.

Furthermore, the IASB preliminary view restricts the recognition of future premiums on a contract, unless one of three conditions is met:

1. The insurer can compel premium payment;
2. Including the premiums and associated benefits increases the liability value; or
3. The insured must pay the premium to retain guaranteed insurability.

This would seem to preclude recognition of future premiums on most universal life contracts in excess of the minimum amount needed to cover contractual charges. It would also seem to preclude recognition of any future premiums on most variable annuity contracts that are classified as insurance contracts. Any such excess premiums would be considered an unrecognized customer relationship intangible.

This provision appears to make losses on issue of these contracts likely. After all, premiums in excess of the bare minimum to keep the contract in force are generally assumed in pricing the contract. Thus, those premiums typically include elements to recover acquisition costs. If those premiums cannot be recognized, a loss at issue may result.


continued on page 10>>

Leonard Reback, FSA, MAAA, is vice president and actuary, Metropolitan Life Insurance Co. in Bridgewater, N.J. He can be contacted at 908.253.1172 or [email protected].


Similar concerns may apply to lapse-supported products. The beneficial policyholder behavior, in this case lapsation, may not be recognizable under the preliminary views. Therefore, even likely future lapses may need to be excluded from the liability valuation. This could also generate losses at issue.

A mechanism which might have the opposite effect is the IASB's preliminary view on the restriction of recognition of dividends (on participating contracts) and interest credits in excess of minimum guarantees (on UL contracts) to those which the insurer has a legal or constructive obligation to pay. It is not entirely clear how strong the expectation for payment of the dividend or interest credit would need to be in order to be recognized. If dividends and interest credits that were assumed in pricing cannot be recognized in the liability valuation, this could generate a gain at issue.

Business Combinations
Assets and liabilities acquired in a business combination are required to be initially measured at fair value under FASB Statement No. 141, and under IFRS 3 for entities following IASB standards. As of the issuance of the Discussion Paper, the IASB had not identified any significant differences between current exit value as described in the Discussion Paper and fair value. If that remains the case, then current exit value, as described in the Discussion Paper, may become the valuation basis for insurance contracts acquired in business combinations—a development that could have interesting implications.

For example, if a company is acquired in a competitive bidding process, the purchase price may well exceed the fair value of the acquired assets net of the current exit value of the acquired liabilities. After all, all the other companies that participated in the bidding process would have required receiving more assets from the selling company. But in situations where an entire company is acquired, the resulting difference could be considered a goodwill asset.

If only a block of insurance contracts is acquired, however, rather than an entire company, there would be no goodwill. Under the preliminary views expressed in the Discussion Paper, any difference between the purchase price and the net of the fair value of acquired assets over the current exit value of acquired liabilities would be considered a loss (or gain, if applicable) at the time of the transaction.
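The following toy figures (purely hypothetical) contrast the two situations: the same excess of purchase price over net assets is carried as goodwill in a whole-company deal but falls out as a day-one loss when only a block of contracts is purchased.

```python
# Hypothetical acquisition figures.  The same economics produce goodwill in a
# whole-company deal but an immediate loss when only a block of contracts is bought.

fair_value_assets_acquired     = 1_000.0
current_exit_value_liabilities = 880.0
purchase_price                 = 150.0

net_assets_acquired = fair_value_assets_acquired - current_exit_value_liabilities  # 120
difference = purchase_price - net_assets_acquired                                  # 30

print("goodwill if an entire company is acquired :", difference)
print("day-one loss if only a block is acquired  :", difference)
```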

Summary
The preliminary views expressed in the IASB Discussion Paper on insurance contracts may generate some surprising and, perhaps, disturbing results. Because future accounting standards for entities covered by the IASB and by FASB may be impacted by the results of this project, interested actuaries should follow these developments closely and make themselves heard in the process.


Risk Margins to the Non-Market Risks under FAS 157: Suggested Approach
by Vadim Zinkovsky

The recent FASB Statement 157—Fair Value Measurement—describes, among other modifications to the current fair value methodology, the addition of risk margins for the risks other than capital market risks. In particular, for investment guarantees on variable annuities, GMxB's (e.g., Guaranteed Minimum Accumulation Benefit—GMAB, Guaranteed Minimum Withdrawal Benefit—GMWB), which are subject to fair value accounting (FAS 133), one major risk category is policyholder behavior risks.

This paper describes a methodology to calculate risk margins attributable to these types of risks. While examples will focus on policyholder behavior risk for GMxB's, the approach may be extended to a broader range of non-market risks and other insurance products.

For purposes of the risk-neutral liability value calculation (a current, widely used FAS 133 method), certain assumptions are made about future policyholder behavior. Let's call this set of assumptions the baseline. The policyholder behavior risks arise from a chance that corresponding assumptions will be different from the baseline assumptions and will adversely impact the value of guarantees. Such risks create an uncertainty about future liability cash flows and clearly affect the liability transfer price, or "exit value," described by FAS 157.

Systematic vs. Idiosyncratic Risks
One question is what type of risks should be considered: a systematic deviation from the baseline or a random noise with the mean being the baseline assumption.

In the capital markets world, CAPM assigns compensation in the form of a higher risk premium only for non-diversifiable or systematic risks. Systematic risks are the risks simultaneously affecting the entire capital markets. Idiosyncratic risks are risks specific to individual assets.

To extend this approach to policyholder behavior risks, an example of systematic risk would be the risk of a large number of policyholders simultaneously switching to a lower lapse "regime." A random noise type of risk, where lapse experience each period fluctuates around the mean assumed in pricing for each policy/cohort, is an example of an idiosyncratic risk.

In terms of its impact on the fair value of liabilities, the systematic behavior risk of this nature may result in more severe outcomes than a random fluctuation type of risk. Note that a simple application of the CAPM principles suggests that diversifiable risks should be excluded from the margin calculation. Random noise types of risks seem to be of such a nature.

While it's not suggested that idiosyncratic behavior risks be excluded from the margin calculation entirely, the focus here is going to be on systematic risks.

Wang Transform
One possible methodology for the calculation of risk margins is a technique called the Wang transform (Wang 2000; Wang, et al., 1997). For an arbitrary insurance risk X, where X is, for example, a distribution of insurance losses, the Wang transform describes a distortion to the cumulative density function (CDF) F(X). The distorted CDF F*(X) then can be used to determine the price of risk, where the premium equals the expected value of X.

In a context of GMxB-specific risks, e.g., behavior risks, each observation of the variable X can be viewed as a market-consistent value of liabilities corresponding to a certain state, represented by a set of policyholder behavior assumptions. Thus, a range of possible sets of policyholder assumptions will translate into a distribution of liability values, where each of them will correspond to a market-consistent (risk-neutral) value under a given set of behavior assumptions. In particular, the value corresponding to the baseline set of assumptions is the baseline value of liabilities, as determined using the current FAS 133 methodology.

The Wang transform as applied to non-market risks of GMxB is, formulaically, as follows:

F*(x) = Φ[Φ⁻¹(F(x)) − λ]

where:
X – distribution of the fair value of liability resulting from the stochastic nature of non-market risks;
F(x) – original "loss" cumulative density function; X's are ranked from the lowest loss (best result) to the highest;
Φ – cumulative distribution function for the standard normal;
Φ⁻¹ – inverse of Φ;
λ – market price of risk, a parameter;
F*(x) – transformed, or "pricing," distribution. The expected value under the transformed distribution, E*(X), equals the new price adjusted by the risk margin.


Vadim Zinkovsky, FSA, is vice president, Metropolitan Life Insurance Co. in Jersey City, N.J. He can be contacted at 201.761.4603 or [email protected].


For normally distributed risks, the Wang transform applies a concept of the Sharpe ratio from the capital markets world. However, it can also extend the concept to skewed distributions.

Basically, the distribution F(x) describes the real-world set of probabilities attributable to a range of possible outcomes, while F*(x) describes the risk-adjusted probability distribution for the range of outcomes.

Essentially, the Wang transform makes the probability of severe outcomes higher by reducing their implied percentile. This bears similarity with the risk-neutral valuation technique for capital market instruments. Indeed, the Wang transform replicates the results of risk-neutral pricing, the Black-Scholes formula in particular, under a set of additional conditions, such as market completeness, availability of the risk-free asset, etc. The result of CAPM is another special case of the Wang transform.

The Wang transform enables us to calculate a margin to the fair value of liabilities using notions of the price of risk, usually denoted by λ (lambda), and a measure of risk which is determined by the entire distribution. Possible methods to estimate lambda will be discussed later in the article.
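As an illustration of the mechanics (not of the author's production model), the sketch below applies a discrete Wang transform to a simulated sample of risk-neutral liability values. The lognormal sample and the λ of 0.25 are assumptions chosen only to show the calculation.

```python
# Discrete Wang transform applied to a sample of market-consistent liability values.
# The lognormal sample and lambda = 0.25 are illustrative assumptions, not calibrated figures.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
liability_values = np.sort(rng.lognormal(mean=np.log(100), sigma=0.2, size=50_000))

lam = 0.25                                   # market price of risk (assumed)
n = liability_values.size
F = np.arange(1, n + 1) / (n + 1)            # empirical CDF, ranked from best to worst outcome
F_star = norm.cdf(norm.ppf(F) - lam)         # Wang transform of the CDF

# Convert the transformed CDF into probability weights and take the expectation
weights = np.diff(np.concatenate(([0.0], F_star)))
weights /= weights.sum()                     # renormalize the small truncation error
price_with_margin = float(weights @ liability_values)

baseline = liability_values.mean()
print(f"baseline value : {baseline:,.2f}")
print(f"value + margin : {price_with_margin:,.2f}")
print(f"risk margin    : {price_with_margin - baseline:,.2f}")
```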

Case 1: Normal Distribution
Consider two normal loss distributions, Distribution 1 and Distribution 2, with the same means equal to 1 and standard deviations equal to 0.5 and 1, respectively.

[Figure: "Distribution of Losses – Normal," showing the low-sigma and high-sigma loss densities.]

If the price was determined as an expectation of losses, the two risks would have had the same price, equal to the mean.

However, Distribution 2 is clearly more risky and, as a compensation for risk, should command a higher price.

This is consistent with the CAPM efficient frontier theory, where a riskier asset would have a higher expected return. Since both distributions are normal and have the same means, standard deviation can be used as a measure of risk here, similarly to the CAPM approach. An assumption of risk normality is also an underlying assumption of the CAPM.

Now, apply the Wang transform to the original loss distribution in order to get the pricing distribution, i.e., the one that can be used to get the price as the expected value of losses.

For a normal distribution F(X) with mean μ and standard deviation σ, the transformed distribution F*(X) is also normal, however with the parameters μ + λσ and σ.

Thus, the price as an expectation of the transformed distribution is expressed simply as:

E*[X] = E[X] + λσ[X],

where E*[X] is the price including the margin, calculated as the expected value of the transformed distribution.

See the graph below showing the transform for the two normal distributions:

[Figure: "Pricing" – original densities (Normal: Low sigma, Normal: High sigma) and transformed densities (Pricing: Low sigma, Pricing: High sigma); the transformed curves are shifted by λ·σ1 and λ·σ2, respectively.]

Assume that the state with the unshocked policyholder behavior assumptions corresponds to the mean result of the loss distribution: E[X] = Baseline Risk-Neutral Value, the risk-neutral value corresponding to the baseline set of policyholder assumptions.

So the margin for additional non-market risk equals just λσ[X]:

Value with margin = Risk-Neutral Value + λσ[X].
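A quick numerical check of that closed form, using Distribution 1 and Distribution 2 from above and an assumed λ of 0.2 (the value of λ is an assumption for illustration):

```python
# For a normal loss distribution the Wang-transformed mean is mu + lambda*sigma,
# so the margin is proportional to sigma.  Distributions 1 and 2 are the figures
# used in the text; lambda = 0.2 is an illustrative assumption.
lam = 0.2
for name, mu, sigma in [("Distribution 1", 1.0, 0.5), ("Distribution 2", 1.0, 1.0)]:
    margin = lam * sigma
    print(f"{name}: baseline {mu:.2f}, risk margin {margin:.2f}, value with margin {mu + margin:.2f}")
```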

Thus, the risk margin is proportional to the standard deviation of a normally distributed risk. Under the normality assumption, a distribution with a higher standard deviation will have a higher risk margin, which in this example is Distribution 2.

This result is analogous to the CAPM/Sharpe ratio conclusion, which assigns a higher compensation to a risk with a higher standard deviation of return; that is, standard deviation is a measure of risk.

Case 2: Long-Tailed Distribution – Lognormal
The normal distribution, being a two-tailed symmetrical distribution with a range of outcomes from -∞ to +∞, is not a very realistic one for modeling behavioral risks. If one thinks of a range of policyholder behavior outcomes, e.g., rider utilization scenarios, it is reasonable to assume an extremely efficient behavior will increase the value of liabilities quite significantly. Intuitively, perfectly efficient behavior scenarios should be low-probability events, perhaps with lower probabilities than the normal distribution implies. Such scenarios will define the right tail of the liability value distribution due to variability in policyholder behavior.

On the other hand, the left tail of the distribution will be impacted by extremely inefficient behavior scenario types. An example of perfectly inefficient behavior is, for example, an

4

Pricing

0.0%

0.5%

1.0%

1.5%

2.0%

2.5%

3.0%

3.5%

4.0%

-3 -2 -1 0 1 2 3 4Losses

Normal: Low sigmaNormal: High sigmaPricing: Low sigmaPricing: High sigma

Lambda x Sigma1

Lambda x Sigma2

Assume that the state with the unshocked policyholder behavior assumptionscorresponds to the mean result of the loss distribution: E[X] = Baseline Risk-Neutral value, the risk-neutral value corresponding to the baseline set of policyholder assumptions.

So the margin for additional non-market risk equals just � �[X],

Value with margin = Risk-Neutral Value + � � [X].

Thus, risk margin is proportional to the standard deviation of a normally distributed risk.Under the normality assumption, a distribution with higher standard deviation will havehigher risk margin, which in this example is Distribution 2.

This result is analogous to the CAPM/Sharpe ratio conclusion which assigns a higher compensation to a risk with higher standard deviation of return, that is, standard deviation is a measure of risk.

Case 2: Long-tailed distribution – LognormalThe normal distribution, being a two-tailed symmetrical distribution with a range ofoutcomes from -� to +�, is not a very realistic one for modeling behavioral risks. If onethinks of a range of policyholder behavior outcomes, e.g., rider utilization scenarios, it’sreasonable to assume an extremely efficient behavior will increase the value of liabilitiesquite significantly. Intuitively, perfectly efficient behavior scenarios should be lowprobability events, perhaps with lower probabilities than normal distribution implies. Suchscenarios will define the right tail of the liability value distribution due to variability in policyholder behavior.

On the other hand, the left tail of the distribution will be impacted by extremely inefficientbehavior scenario types. An example of perfectly inefficient behavior is for example an

3

Case 1: Normal Distribution.Consider two normal loss distributions, Distribution 1 and Distribution 2, with samemeans equal 1 and standard deviations equal 0.5 and 1, respectively.

Distribution of LossesNormal

0.0%

0.5%

1.0%

1.5%

2.0%

2.5%

3.0%

3.5%

4.0%

-3 -2 -1 0 1 2 3 4Loss

Normal: Low sigmaNormal: High sigma

If the price was determined as an expectation of losses the two risks would’ve had the same price equal to the mean.

However, Distribution 2 is clearly more risky and, as a compensation for risk, should command higher price.

This is consistent with the CAPM efficient frontier theory where a riskier asset would have higher expected return. Since both distributions are normal and have same means,standard deviation can be used as a measure of risk here, similarly with the CAPMapproach. An assumption of risks normality is also an underlying assumption of the CAPM.

Now, apply Wang transform to the original loss distribution in order to get the pricing distribution, i.e., the one that can be used to get price as expected value of losses.For normal distribution, F(X) with mean and standard deviation 7 and �, the transformed distribution F*(X) is also normal, however with the parameters 7+�� and �.

Thus, the price as an expectation of the transformed distribution is expressed simply as:

E*[X] = E[X] + � �[X],

Where E*[X] – is the price of the margin, calculated as expected value of the transformed distribution.

See the graph below showing the transform for the two normal distributions:

Case 1: Normal Distribution
Consider two normal loss distributions, Distribution 1 and Distribution 2, with the same mean, equal to 1, and standard deviations equal to 0.5 and 1, respectively.

[Figure: Distribution of Losses (Normal) - the low-sigma and high-sigma normal loss distributions plotted over the loss range -3 to 4.]

If the price were determined as the expectation of losses, the two risks would have the same price, equal to the mean. However, Distribution 2 is clearly riskier and, as compensation for risk, should command a higher price.

This is consistent with CAPM efficient frontier theory, where a riskier asset has a higher expected return. Since both distributions are normal and have the same mean, standard deviation can be used as the measure of risk here, similarly to the CAPM approach. An assumption of normally distributed risks is also an underlying assumption of the CAPM.

Now, apply the Wang transform to the original loss distribution in order to get the pricing distribution, i.e., the one that can be used to obtain the price as an expected value of losses. For a normal distribution F(X) with mean μ and standard deviation σ, the transformed distribution F*(X) is also normal, with parameters μ + λσ and σ.

Thus, the price, as the expectation of the transformed distribution, is simply:

E*[X] = E[X] + λ σ[X],

where E*[X] is the price including the margin, calculated as the expected value of the transformed distribution.

See the graph below showing the transform for the two normal distributions:
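Before turning to the graph, this result can be checked numerically; the following sketch (assuming numpy and scipy) transforms the normal CDF on a grid and recovers a mean of approximately μ + λσ:

# Sketch: numerically verify that the Wang-transformed normal has mean mu + lam * sigma.
import numpy as np
from scipy.stats import norm

mu, sigma, lam = 1.0, 0.5, 0.3
x = np.linspace(mu - 8 * sigma, mu + 8 * sigma, 20001)

F = norm.cdf(x, loc=mu, scale=sigma)      # original CDF on a grid
F_star = norm.cdf(norm.ppf(F) - lam)      # transformed ("pricing") CDF

pmf = np.diff(F_star)                     # probability mass between grid points
x_mid = (x[1:] + x[:-1]) / 2
print((x_mid * pmf).sum())                # ~1.15, i.e., mu + lam * sigma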

[Figure: the two normal loss distributions and their transformed "pricing" distributions (Normal: low sigma, Normal: high sigma, Pricing: low sigma, Pricing: high sigma), with the shifts λ x σ1 and λ x σ2 indicated.]

Assume that the state with the unshocked policyholder behavior assumptions corresponds to the mean result of the loss distribution: E[X] = Baseline Risk-Neutral Value, the risk-neutral value corresponding to the baseline set of policyholder assumptions.

The margin for the additional non-market risk then equals just λ σ[X], so that

Value with margin = Risk-Neutral Value + λ σ[X].

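As a quick numerical illustration (a sketch; λ = 0.3 is the value used in the examples below), the margin scales directly with the standard deviation:

# Sketch: margin for a normally distributed risk equals lambda * sigma.
lam = 0.3
baseline_value = 1.0    # E[X], assumed equal to the baseline risk-neutral value

# The two normal loss distributions from Case 1.
for sigma in (0.5, 1.0):
    margin = lam * sigma
    print(f"sigma = {sigma}: margin = {margin:.2f}, value with margin = {baseline_value + margin:.2f}")

# Distribution 2 (sigma = 1.0) receives twice the margin of Distribution 1 (sigma = 0.5).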


Thus, the risk margin is proportional to the standard deviation of a normally distributed risk. Under the normality assumption, a distribution with a higher standard deviation will have a higher risk margin, which in this example is Distribution 2.

This result is analogous to the CAPM/Sharpe ratio conclusion, which assigns higher compensation to a risk with a higher standard deviation of return; that is, standard deviation is the measure of risk.

Case 2: Long-tailed distribution - Lognormal
The normal distribution, being a symmetrical two-tailed distribution with a range of outcomes from -∞ to +∞, is not a very realistic one for modeling behavioral risks. If one thinks of a range of policyholder behavior outcomes, e.g., rider utilization scenarios, it is reasonable to assume that extremely efficient behavior will increase the value of liabilities quite significantly. Intuitively, perfectly efficient behavior scenarios should be low-probability events, perhaps with lower probabilities than the normal distribution implies. Such scenarios will define the right tail of the liability value distribution due to variability in policyholder behavior.

On the other hand, the left tail of the distribution will be affected by extremely inefficient behavior scenarios. An example of perfectly inefficient behavior is an assumption of a zero rider utilization rate; under this assumption the liability value is naturally capped by the present value of fees with a minus sign. Thus, a distribution with a longer tail for higher losses and with outcomes limited by the lowest (best result, e.g., no claims) value should be more realistic.

One such distribution with well-behaved properties is the lognormal. Here is how it compares with the normal distribution:

[Figure: normal and lognormal loss distributions with the same mean and standard deviation.]

The distributions shown on the graph have the same mean and standard deviation. However, the lognormal has a fatter right tail for high losses. We will see below that the Wang transform assigns a higher risk margin to the lognormal distribution in this example.

For the lognormal distribution, the transformed CDF per the Wang transform is also lognormal. If the original F(X) is lognormal with parameters μ and σ, the transformed distribution F*(X) is also lognormal, with parameters μ + λσ and σ.

The mean of the lognormal distribution with parameters (μ, σ) equals e^(μ + σ^2/2). So the liability value with the margin, calculated as the mean of the transformed distribution, equals e^(μ + λσ + σ^2/2).

Again, assuming that the expected value under the original, "real world" loss distribution corresponds to the baseline liability value, E[X] = Baseline Risk-Neutral Value, we get the following result:

Value with margin = Risk-Neutral Value x e^(λσ),

and

Lognormal margin = Lognormal value with margin - RN Value = Lognormal mean x (e^(λσ) - 1).

Here is how the transformed distributions look:

[Figure: Pricing Distribution - the normal (μ = 1, σ = 0.5) and lognormal (mean 1, standard deviation 0.5) loss distributions and their transformed "pricing" counterparts, plotted over losses from -2 to 4.]

Assuming λ = 0.3, the expected values of the transformed "pricing" distributions are 1.15 for the normal and 1.152 for the lognormal.
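These figures can be reproduced in a few lines (a sketch assuming numpy; the moment-matching step recovers the lognormal parameters, roughly μ = -0.11 and σ = 0.47, that are quoted below):

# Sketch: reproduce the transformed ("pricing") means quoted above for lambda = 0.3.
import numpy as np

lam = 0.3

# Normal loss distribution with mean 1 and standard deviation 0.5.
price_normal = 1.0 + lam * 0.5                                   # 1.15

# Lognormal with the same mean (1) and standard deviation (0.5),
# obtained by moment matching: sigma_L ~ 0.47, mu_L ~ -0.11.
sigma_L = np.sqrt(np.log(1 + 0.5**2))
mu_L = np.log(1.0) - sigma_L**2 / 2
price_lognormal = np.exp(mu_L + lam * sigma_L + sigma_L**2 / 2)  # ~1.152

print(round(price_normal, 3), round(price_lognormal, 3))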


Thus, lognormally distributed losses resulted in a slightly higher price.

Note that both the mean and the standard deviation of the original loss distributions were the same.

The margin adjustment to the price is as follows:

Normal: New Value = Old Price + λ σ[X]
Lognormal(-0.11, 0.47): New Value = Old Price x e^(λσ), with σ = 0.47 the lognormal parameter

As mentioned before, the Wang transform increases the probabilities of severe outcomes.

As an example, consider the 99.5th percentile of our original lognormal distribution L(-0.11, 0.47), which equals 3.02. Under the original distribution, the probability of an outcome in excess of 3.02 is 0.5 percent. The Wang transform changes the original lognormal into the lognormal with parameters L(0.03, 0.47). The transformed distribution assigns a higher probability, equal to 1.14 percent, to outcomes greater than 3.02.
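This shift in the tail probability can be checked directly (a sketch assuming scipy; the parameters are the rounded values quoted above):

# Sketch: tail probability of the lognormal before and after the Wang transform.
import numpy as np
from scipy.stats import lognorm

mu, sigma, lam = -0.11, 0.47, 0.3
original = lognorm(s=sigma, scale=np.exp(mu))                    # L(-0.11, 0.47)
transformed = lognorm(s=sigma, scale=np.exp(mu + lam * sigma))   # L(0.03, 0.47)

x_995 = original.ppf(0.995)                  # ~3.0, the original 99.5th percentile
print(round(original.sf(x_995), 4))          # 0.005 by construction
print(round(transformed.sf(x_995), 4))       # ~0.0114, i.e., about 1.14 percent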

Recommendation for GMxB's Risk Margin Calculation
In practice, modeling non-market risks stochastically is a difficult task, both computationally and in terms of the underlying assumptions. Take policyholder behavior as a special case. Multiple parameters need to be estimated to describe stochastic processes for lapse, partial withdrawal, rider utilization, cancellation, optional step-ups, etc. Usually there is very little historical information available for this type of calibration. If each of these risks is modeled stochastically, even in its simplest form, at least an estimate of the standard deviation of each risk would be needed. The resulting liability value distribution is likely to be of some arbitrary shape rather than a well-behaved distribution, and there is no reason to believe such a distribution will be any better in terms of its predictive power.

At the same time, a set of "worst case" behavior assumptions may be defined by pricing actuaries as part of sensitivity analysis, and the corresponding liability value estimated.

Fitting Distributions
Moreover, a certain probability may be assigned to the state described by the set of shocked policyholder behavior assumptions. For example, such estimates may be needed for economic capital calculation purposes.

In any case, making an assumption about the probability of such "shocked behavior" defines a point on the tail of the liability value distribution that arises from the uncertain nature of policyholder behavior. Along with the assumption that the baseline set of behavior assumptions corresponds to the mean of the liability distribution, this defines two points on the liability CDF.

For two-parameter distributions, such as the normal or lognormal, two points on the probability distribution curve are sufficient to find the distribution parameters.

Assume, as an example, that the "shocked behavior" liability value corresponds to the 99.5th percentile. We show below how the mean and the 99.5th percentile can be used to uniquely identify the parameters μ and σ of the normal or lognormal distribution.

Normalizing Distribution
Real-life distributions of liability market values may span a wide range of outcomes. However, the lognormal distribution we have considered so far, with parameters μ = -0.11 and σ = 0.47, denoted L(-0.11, 0.47), has its domain on [0, ∞), as does any lognormal distribution. In order to map the results of a real-life distribution onto the domain of the lognormal distribution, a one-to-one relationship needs to be defined. A simple linear transform describes such a relationship:

Xreal = A + B x X,   (1)

where
Xreal – real-life outcome variable;
X – lognormal variable, L(-0.11, 0.47);
A and B – constants to solve for.

Two equations linking the two "observed" liability values, the baseline and the 99.5th percentile, to their mapped values on the lognormal L(-0.11, 0.47) uniquely identify the constants A and B. Here are the main steps of the derivation of A and B. The mean of the lognormal distribution is:

Mean = e^(μ + σ^2/2).

The 99.5th percentile of L(-0.11, 0.47) equals 3.02, or approximately four standard deviations away from the mean, and the variance (standard deviation squared) of the lognormal distribution equals:

Variance = e^(2μ + σ^2) x (e^(σ^2) - 1).

Since the real-life distribution is just a linear transform of the original distribution, the same relationship between the mean and the 99.5th percentile must hold: four standard deviations apart.


Therefore, the standard deviation of the real-life distribution equals:

Sigma = (99.5th percentile - Mean)/4.

Next, the constant B equals 2 x Sigma (the standard deviation of L(-0.11, 0.47) is 0.5, so B = Sigma/0.5 = 2 x Sigma).

Next, recall the result of the Wang transform for the lognormal distribution. The price with the margin equals:

Price with margin = e^(μ + λσ + σ^2/2) = Original Price x e^(λσ).

Going back to the real-life distribution Xreal using formula (1), and remembering that Mean = Original Price = 1:

Margin = A + B x PriceWithMargin - (A + B x LMean) = B x Lognormal margin,

so that

Margin = 2 x Sigma x (e^(λσ) - 1).   (2)
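Putting the steps together, the resulting calculation can be sketched as follows (the function name and the sample inputs are illustrative assumptions; the inputs are the baseline risk-neutral value, the "shocked behavior" value treated as the 99.5th percentile, and λ):

# Sketch: margin for non-market (behavior) risk via formula (2),
# Margin = 2 * Sigma * (e^(lam * sigma_log) - 1), with sigma_log = 0.47 from L(-0.11, 0.47).
import numpy as np

def behavior_risk_margin(baseline_value, shocked_value, lam, sigma_log=0.47):
    sigma_real = (shocked_value - baseline_value) / 4.0   # 99.5th percentile ~ four std devs from the mean
    B = 2.0 * sigma_real                                  # std dev of L(-0.11, 0.47) is 0.5, so B = Sigma / 0.5
    return B * (np.exp(lam * sigma_log) - 1.0)            # formula (2)

# Hypothetical inputs: baseline value 100, shocked (99.5th percentile) value 140, lambda = 0.3.
print(round(behavior_risk_margin(100.0, 140.0, 0.3), 2))  # ~3.03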

8

Normalizing DistributionReal life distributions of liability market values may span over a wide range of outcomes.However, the lognormal distribution we’ve considered so far, with parameters C=-0.11and �=0.47, denote it L(-0.11, 0.47), has its domain between [0; �], as any lognormal distribution. In order to map the results of a real life distribution to the domain oflognormal distribution a one-to-one relationship needs to be defined. A simple linear transform will describe such a relationship:

Xreal = A+B*X, (1)

Where,

Xreal – real life outcome variable,X – lognormal variable, L(-0.11, 0.47).A and B – constants to solve for.Two equations linking the two “observed” values of the liability value, baseline and 99.5th percentile to their mapped values to the lognormal L(-.11, .47) will uniquelyidentify constants A and B.Here are the main steps of the derivation of parameters A and B. The mean of the lognormal distribution:

2

2��+

= eMean

The 99.5th percentile for L(-.11, 0.47) equals 3.02 or approximately four standard deviations away from the mean, and variance (standard deviation squared) of the lognormal distribution equals:

)1(222�=

+ ��� eeVariance

Since the real life distribution is just a linear transform of the original distribution the same relationship between the mean and 99.5th percentile must hold: four standard deviations apart.

Therefore, standard deviation of the original distribution equals:

Sigma = (99.5th percentile - Mean)/4.

Next, constant B equals 2 x Sigma.

Next, recall the result of the Wang transform for the lognormal distribution.Price with the margin equals:

��

����

eiceOrigeinicewithM *PrargPr 2

2

==++

Going back to the real life distribution X using formula (1) and remembering that Mean=Original Price =1:

Margin = A + B*PricewithMargin – (A+ B*LMean)= B*Lognormal margin

}1{**2arg �= ��� einM (2)

8

Normalizing DistributionReal life distributions of liability market values may span over a wide range of outcomes.However, the lognormal distribution we’ve considered so far, with parameters C=-0.11and �=0.47, denote it L(-0.11, 0.47), has its domain between [0; �], as any lognormal distribution. In order to map the results of a real life distribution to the domain oflognormal distribution a one-to-one relationship needs to be defined. A simple linear transform will describe such a relationship:

Xreal = A+B*X, (1)

Where,

Xreal – real life outcome variable,X – lognormal variable, L(-0.11, 0.47).A and B – constants to solve for.Two equations linking the two “observed” values of the liability value, baseline and 99.5th percentile to their mapped values to the lognormal L(-.11, .47) will uniquelyidentify constants A and B.Here are the main steps of the derivation of parameters A and B. The mean of the lognormal distribution:

2

2��+

= eMean

The 99.5th percentile for L(-.11, 0.47) equals 3.02 or approximately four standard deviations away from the mean, and variance (standard deviation squared) of the lognormal distribution equals:

)1(222�=

+ ��� eeVariance

Since the real life distribution is just a linear transform of the original distribution the same relationship between the mean and 99.5th percentile must hold: four standard deviations apart.

Therefore, standard deviation of the original distribution equals:

Sigma = (99.5th percentile - Mean)/4.

Next, constant B equals 2 x Sigma.

Next, recall the result of the Wang transform for the lognormal distribution.Price with the margin equals:

��

����

eiceOrigeinicewithM *PrargPr 2

2

==++

Going back to the real life distribution X using formula (1) and remembering that Mean=Original Price =1:

Margin = A + B*PricewithMargin – (A+ B*LMean)= B*Lognormal margin

}1{**2arg �= ��� einM (2)

8

Normalizing DistributionReal life distributions of liability market values may span over a wide range of outcomes.However, the lognormal distribution we’ve considered so far, with parameters C=-0.11and �=0.47, denote it L(-0.11, 0.47), has its domain between [0; �], as any lognormal distribution. In order to map the results of a real life distribution to the domain oflognormal distribution a one-to-one relationship needs to be defined. A simple linear transform will describe such a relationship:

Xreal = A+B*X, (1)

Where,

Xreal – real life outcome variable,X – lognormal variable, L(-0.11, 0.47).A and B – constants to solve for.Two equations linking the two “observed” values of the liability value, baseline and 99.5th percentile to their mapped values to the lognormal L(-.11, .47) will uniquelyidentify constants A and B.Here are the main steps of the derivation of parameters A and B. The mean of the lognormal distribution:

2

2��+

= eMean

The 99.5th percentile for L(-.11, 0.47) equals 3.02 or approximately four standard deviations away from the mean, and variance (standard deviation squared) of the lognormal distribution equals:

)1(222�=

+ ��� eeVariance

Since the real life distribution is just a linear transform of the original distribution the same relationship between the mean and 99.5th percentile must hold: four standard deviations apart.

Therefore, standard deviation of the original distribution equals:

Sigma = (99.5th percentile - Mean)/4.

Next, constant B equals 2 x Sigma.

Next, recall the result of the Wang transform for the lognormal distribution.Price with the margin equals:

��

����

eiceOrigeinicewithM *PrargPr 2

2

==++

Going back to the real life distribution X using formula (1) and remembering that Mean=Original Price =1:

Margin = A + B*PricewithMargin – (A+ B*LMean)= B*Lognormal margin

}1{**2arg �= ��� einM (2)

8

Normalizing DistributionReal life distributions of liability market values may span over a wide range of outcomes.However, the lognormal distribution we’ve considered so far, with parameters C=-0.11and �=0.47, denote it L(-0.11, 0.47), has its domain between [0; �], as any lognormal distribution. In order to map the results of a real life distribution to the domain oflognormal distribution a one-to-one relationship needs to be defined. A simple linear transform will describe such a relationship:

Xreal = A+B*X, (1)

Where,

Xreal – real life outcome variable,X – lognormal variable, L(-0.11, 0.47).A and B – constants to solve for.Two equations linking the two “observed” values of the liability value, baseline and 99.5th percentile to their mapped values to the lognormal L(-.11, .47) will uniquelyidentify constants A and B.Here are the main steps of the derivation of parameters A and B. The mean of the lognormal distribution:

2

2��+

= eMean

The 99.5th percentile for L(-.11, 0.47) equals 3.02 or approximately four standard deviations away from the mean, and variance (standard deviation squared) of the lognormal distribution equals:

)1(222�=

+ ��� eeVariance

Since the real life distribution is just a linear transform of the original distribution the same relationship between the mean and 99.5th percentile must hold: four standard deviations apart.

Therefore, standard deviation of the original distribution equals:

Sigma = (99.5th percentile - Mean)/4.

Next, constant B equals 2 x Sigma.

Next, recall the result of the Wang transform for the lognormal distribution.Price with the margin equals:

��

����

eiceOrigeinicewithM *PrargPr 2

2

==++

Going back to the real life distribution X using formula (1) and remembering that Mean=Original Price =1:

Margin = A + B*PricewithMargin – (A+ B*LMean)= B*Lognormal margin

}1{**2arg �= ��� einM (2)

8

Normalizing DistributionReal life distributions of liability market values may span over a wide range of outcomes.However, the lognormal distribution we’ve considered so far, with parameters C=-0.11and �=0.47, denote it L(-0.11, 0.47), has its domain between [0; �], as any lognormal distribution. In order to map the results of a real life distribution to the domain oflognormal distribution a one-to-one relationship needs to be defined. A simple linear transform will describe such a relationship:

Xreal = A+B*X, (1)

Where,

Xreal – real life outcome variable,X – lognormal variable, L(-0.11, 0.47).A and B – constants to solve for.Two equations linking the two “observed” values of the liability value, baseline and 99.5th percentile to their mapped values to the lognormal L(-.11, .47) will uniquelyidentify constants A and B.Here are the main steps of the derivation of parameters A and B. The mean of the lognormal distribution:

2

2��+

= eMean

The 99.5th percentile for L(-.11, 0.47) equals 3.02 or approximately four standard deviations away from the mean, and variance (standard deviation squared) of the lognormal distribution equals:

)1(222�=

+ ��� eeVariance

Since the real life distribution is just a linear transform of the original distribution the same relationship between the mean and 99.5th percentile must hold: four standard deviations apart.

Therefore, standard deviation of the original distribution equals:

Sigma = (99.5th percentile - Mean)/4.

Next, constant B equals 2 x Sigma.

Next, recall the result of the Wang transform for the lognormal distribution.Price with the margin equals:

��

����

eiceOrigeinicewithM *PrargPr 2

2

==++

Going back to the real life distribution X using formula (1) and remembering that Mean=Original Price =1:

Margin = A + B*PricewithMargin – (A+ B*LMean)= B*Lognormal margin

}1{**2arg �= ��� einM (2)

7

Note that both mean and standard deviation of the original loss distributions were the same.

Margin adjustment to the price is as follows:

Normal: New Value = Old price + � �[X]

Lognormal (-.11, 0.47): New Value = Old Price x Exp(� �[X])

As mentioned before, Wang transform increases probabilities of severeoutcomes.

As an example, consider the 99.5th percentile of our original lognormal distribution L( -.11, 0.47) which equals 3.02. Under the original distribution, probability of an outcome inexcess of 3.02 is 0.5 percent. The Wang transform changed the original lognormal to the lognormal with parameters L( 0.03, 0.47). The transformed distribution assigns higher probability to the outcomes greater than 3.02, equal 1.14 percent.

Recommendation for GMxB’s Risk Margin CalculationIn practice, modeling non-market risks stochastically is a difficult task, bothcomputationally and in terms of underlying assumptions. Take as a special casepolicyholder behavior. Multiple parameters need to be estimated to describe stochasticprocesses for lapse, partial withdrawal, rider utilization, cancellation, optional step-ups,etc. Usually, there is very little historical information available for such types ofcalibration. If each of these risks is modeled stochastically, even in its simplest form, atleast an estimate for the standard deviation of each risk should be needed. The resulting liability value distribution is likely to be of some arbitrary shape other than a well-behaved distribution. However, there is no reason to believe such a distribution will be any better in terms of its predictive power.

At the same time, a set of “worst case” type of behavior assumptions may be defined aspart of sensitivity analysis by pricing actuaries and the liability value estimated.

Fitting Distributions.Moreover, there may be a certain probability assigned to a state described by a set ofshocked policyholder behavior assumptions. For example, such estimates may beneeded for economic capital calculation purposes.

In any case, making an assumption of the probability of such “shocked behavior” would define a point on the tail of the liability value distribution due to an uncertain nature ofpolicyholder behavior. Along with the assumption that the baseline set of behavior assumptions corresponds to the mean value of the liability distribution, this defines two points on the liability CDF.

For two-parameter distributions such as normal or lognormal, defining two points on the probability distribution curve is sufficient to find the distribution parameters.

Assume as an example that the “shocked behavior” liability value corresponds to the 99.5th percentile. We’ll show below how the mean and the 99.5th percentile, can be usedto uniquely identify parameters of the normal or lognormal distributions C and �.

7

Note that both mean and standard deviation of the original loss distributions were the same.

Margin adjustment to the price is as follows:

Normal: New Value = Old price + � �[X]

Lognormal (-.11, 0.47): New Value = Old Price x Exp(� �[X])

As mentioned before, Wang transform increases probabilities of severeoutcomes.

As an example, consider the 99.5th percentile of our original lognormal distribution L( -.11, 0.47) which equals 3.02. Under the original distribution, probability of an outcome inexcess of 3.02 is 0.5 percent. The Wang transform changed the original lognormal to the lognormal with parameters L( 0.03, 0.47). The transformed distribution assigns higher probability to the outcomes greater than 3.02, equal 1.14 percent.

Recommendation for GMxB’s Risk Margin CalculationIn practice, modeling non-market risks stochastically is a difficult task, bothcomputationally and in terms of underlying assumptions. Take as a special casepolicyholder behavior. Multiple parameters need to be estimated to describe stochasticprocesses for lapse, partial withdrawal, rider utilization, cancellation, optional step-ups,etc. Usually, there is very little historical information available for such types ofcalibration. If each of these risks is modeled stochastically, even in its simplest form, atleast an estimate for the standard deviation of each risk should be needed. The resulting liability value distribution is likely to be of some arbitrary shape other than a well-behaved distribution. However, there is no reason to believe such a distribution will be any better in terms of its predictive power.

At the same time, a set of "worst case" behavior assumptions may be defined by pricing actuaries as part of sensitivity analysis, and the liability value estimated under those assumptions.

Fitting Distributions

Moreover, there may be a certain probability assigned to the state described by a set of shocked policyholder behavior assumptions. For example, such estimates may be needed for economic capital calculation purposes.

In any case, making an assumption about the probability of such "shocked behavior" defines a point on the tail of the liability value distribution arising from the uncertain nature of policyholder behavior. Along with the assumption that the baseline set of behavior assumptions corresponds to the mean value of the liability distribution, this defines two points on the liability CDF.

For two-parameter distributions such as the normal or lognormal, defining two points on the probability distribution curve is sufficient to find the distribution parameters.

Assume as an example that the "shocked behavior" liability value corresponds to the 99.5th percentile. We'll show below how the mean and the 99.5th percentile can be used to uniquely identify the parameters μ and σ of the normal or lognormal distribution.
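For the normal distribution the fit is immediate: the 99.5th percentile lies about 2.576 standard deviations above the mean, so σ = (99.5th percentile - mean)/2.576 and μ is the mean itself. A minimal sketch using hypothetical liability values:

    Z_995 = 2.576   # standard normal 99.5th percentile

    def fit_normal(mean, p995):
        """Back out (mu, sigma) of a normal distribution from its mean and 99.5th percentile."""
        return mean, (p995 - mean) / Z_995

    # Hypothetical liability values: baseline (taken as the mean) and shocked (99.5th percentile)
    mu, sigma = fit_normal(mean=1.0, p995=2.2)
    print(mu, round(sigma, 3))   # 1.0 0.466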


Normalizing Distribution

Real life distributions of liability market values may span a wide range of outcomes. However, the lognormal distribution we've considered so far, with parameters μ = -0.11 and σ = 0.47, denoted L(-0.11, 0.47), has its domain on [0; ∞), as does any lognormal distribution. In order to map the results of a real life distribution to the domain of the lognormal distribution, a one-to-one relationship needs to be defined. A simple linear transform describes such a relationship:

Xreal = A+B*X, (1)

Where,

Xreal – real life outcome variable;
X – lognormal variable, L(-0.11, 0.47);
A and B – constants to solve for.

Two equations linking the two "observed" values of the liability, baseline and 99.5th percentile, to their mapped values on the lognormal L(-0.11, 0.47) uniquely identify the constants A and B. Here are the main steps of the derivation of parameters A and B. The mean of the lognormal distribution:

Mean = e^(μ + σ²/2)

The 99.5th percentile for L(-0.11, 0.47) equals 3.02, or approximately four standard deviations away from the mean, and the variance (standard deviation squared) of the lognormal distribution equals:

Variance = e^(2μ + σ²) × (e^(σ²) - 1)

Since the real life distribution is just a linear transform of the original distribution, the same relationship between the mean and the 99.5th percentile must hold: four standard deviations apart.

Therefore, standard deviation of the original distribution equals:

Sigma = (99.5th percentile - Mean)/4.

Next, constant B equals 2 x Sigma, since the standard deviation of the reference lognormal L(-0.11, 0.47) is approximately 0.5.

Next, recall the result of the Wang transform for the lognormal distribution. The price with the margin equals:

Price with Margin = e^(μ + λσ + σ²/2) = Orig Price × e^(λσ)

Going back to the real life distribution using formula (1), and remembering that Mean = Original Price = 1:

Margin = (A + B × PriceWithMargin) – (A + B × LMean) = B × Lognormal margin

Margin = 2 × St. Deviation × {e^(λ × 0.47) - 1}    (2)


where LMean – mean of the lognormal distribution L(-0.11, 0.47), equal to 1; and standard deviation σ = (99.5th percentile - Mean)/4.
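A minimal sketch of this fit and of formula (2), assuming the shocked value is assigned to the 99.5th percentile and using the approximations above (reference lognormal mean of 1, standard deviation of about 0.5, mean and 99.5th percentile roughly four standard deviations apart):

    from math import exp

    SIGMA = 0.47    # scale parameter of the reference lognormal L(-0.11, 0.47)

    def fit_linear_map(baseline, shocked_995):
        """Solve Xreal = A + B*X (formula (1)): the lognormal mean (~1) maps to the
        baseline value and its 99.5th percentile maps to the shocked value."""
        st_dev = (shocked_995 - baseline) / 4.0   # real life standard deviation
        b = 2.0 * st_dev                          # reference sd ~0.5, so B = st_dev / 0.5
        a = baseline - b                          # A + B*1 = baseline
        return a, b

    def margin_formula_2(baseline, shocked_995, lam):
        """Formula (2): Margin = 2 * St. Deviation * (e^(lam*0.47) - 1)."""
        st_dev = (shocked_995 - baseline) / 4.0
        return 2.0 * st_dev * (exp(lam * SIGMA) - 1.0)

    # Hypothetical unit-scale liability values
    a, b = fit_linear_map(baseline=1.0, shocked_995=3.0)
    print(a, b)                                            # 0.0 1.0
    print(round(margin_formula_2(1.0, 3.0, lam=0.3), 3))   # ~0.151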

GMAB Example

In order to get a better feel for the magnitude of the margin adjustment, consider a generic GMAB block. Risk-neutral valuation under the baseline and the shocked behavior sets of assumptions (the latter corresponding to the 99.5th percentile) produces the following results, respectively:

In $millions:
                      PV Claims   PV Fees   Net
Baseline:                145        171     -26
99.5th percentile:       170        145      25

The graph below illustrates how the distributions could be fit to the two "observed" points. The linear transform described by formula (1) is used to map the domain of the lognormal to the real life range of outcomes:

[Figure: Fitting Distributions – normal and lognormal fits to the two observed points (Baseline Value, RN = mean of distribution; Shocked Value = 99.5th percentile); x-axis: Liability, $mln]

Again, the lognormal distribution looks more realistic for representing variability in policyholder behavior. It has a natural minimum point, while the normal distribution extends far to the left. It also generally has a fatter tail, although this manifests itself only in extremely high percentiles.

Risk Margin Calculation

Assume λ = 0.3.

For the normal distribution: Margin = λ × St. Deviation, where St. Deviation can be calculated as (99.5th percentile - Mean)/2.576 = $19.8 million.

For the lognormal distribution use formula (2):

Margin = 2 × St. Deviation × [e^(λ × 0.47) - 1]

where St. Deviation = (99.5th percentile - Mean)/4 = $12.6 million.

Thus the margin adjustment equals:

Normal: 0.3 x $19.8 million = $5.9 million

Lognormal: 2 × $12.6 million × [e^(0.3 × 0.47) - 1] = $3.8 million

Thus, fitting the lognormal distribution resulted in a lower margin adjustment.
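The two results can be reproduced with a few lines of Python; the small difference on the lognormal side comes from rounding the standard deviation ($12.75 million under the divide-by-four rule versus the $12.6 million quoted above):

    from math import exp

    lam = 0.3
    baseline, shocked = -26.0, 25.0      # net liability values from the GMAB table, $millions

    sd_normal = (shocked - baseline) / 2.576              # ~19.8
    margin_normal = lam * sd_normal                       # ~5.9

    sd_lognormal = (shocked - baseline) / 4.0             # ~12.75
    margin_lognormal = 2.0 * sd_lognormal * (exp(lam * 0.47) - 1.0)   # ~3.9

    print(round(margin_normal, 1), round(margin_lognormal, 1))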

Baseline Assumptions vs. Risk Margin

The margin captures the risk of non-market assumptions being more adverse than those assumed in the baseline pricing. But what if a more conservative set of assumptions is used in the baseline product pricing? One should expect a lower margin. Assume, for example, two companies valuing an identical product using different sets of assumptions. Company 1 uses a set of assumptions consistent with the baseline result above; call it Original. Company 2 uses a more conservative set of non-market assumptions. As a result, its net baseline liability value increases from -$26 million to 0.

In $millions:
                                 PV Claims   PV Fees   Net
Original Baseline:                  145        171     -26
Conservative Company Baseline:      160        160       0
99.5th Percentile:                  170        145      25

Note that the value of the 99.5th percentile would be identical for both companies, since the product is identical. Applying formulas (1) and (2) above produces the following liability value distributions:

[Figure: Fitting Lognormal Distributions – Original Baseline vs. Conservative Company Baseline fits; x-axis: Liability, $mln]


Note how the distribution for the conservative Company 2 is less dispersed. The margin for non-market risks is also lower for Company 2.

Margin (Original Company) = $3.8 million
Margin (Conservative Company) = $1.9 million

As the conservative estimate is approximately halfway between the original baseline and the 99.5th percentile, the margin added to the conservative estimate is also around half of the original margin.
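Reapplying formula (2) with the conservative baseline of zero confirms this (figures in $millions, with the same rounding caveat as before):

    from math import exp

    lam, shocked = 0.3, 25.0   # the 99.5th percentile value is the same for both companies

    for label, baseline in [("Original", -26.0), ("Conservative", 0.0)]:
        st_dev = (shocked - baseline) / 4.0
        margin = 2.0 * st_dev * (exp(lam * 0.47) - 1.0)
        print(label, round(margin, 1))   # Original ~3.9, Conservative ~1.9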

Interaction of Risks

So far, the focus has been on a single non-market risk, behavior risk in most of the examples. Generally, there may be more than one source of non-market risk, for example policyholder behavior and mortality improvement. In this example, an assumption of zero correlation between policyholder behavior and mortality risks seems a reasonable one. To generalize for n uncorrelated non-market risks, the formula for the standard deviation of the joint distribution is as follows:

σJoint = (σ1² + σ2² + … + σn²)^0.5

Strictly speaking, this formula is accurate if each risk is normal. It is suggested that the same formula also be used under the lognormal assumption.

The resulting standard deviation of the joint distribution should then be substituted into formula (2), if the lognormal distribution is assumed.
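A one-line implementation of the combination rule, with hypothetical behavior and mortality standard deviations:

    from math import sqrt

    def joint_st_dev(*sigmas):
        """Standard deviation of the joint distribution of uncorrelated risks."""
        return sqrt(sum(s * s for s in sigmas))

    print(round(joint_st_dev(12.6, 5.0), 1))   # hypothetical $millions inputs, ~13.6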

Summary: Recommended Risk Margin Calculation for GMxB, Non-market Risks

1. Determine market-consistent values of the guarantee corresponding to two sets of non-market assumptions, baseline and shocked. Assign a probability to the shock value, e.g., the 99.5th percentile.

2. Take the difference Shocked – Baseline. This will be equal to X standard deviations, depending on the percentile assignment in step 1.
   a. E.g., under the lognormal with Mean = 1 and St. Dev = 0.5, it equals 4 × Standard Deviation. Therefore, St. Deviation = (Shocked – Baseline)/4.

3. Repeat step 2 for each additional uncorrelated non-market risk (e.g., mortality), if applicable.
   a. Calculate the resulting standard deviation as σJoint = (σbeh² + σmort²)^0.5.

4. Calculate the risk margin (added to the baseline value):
   a. Normal: Margin = λ × St. Deviation
   b. Lognormal: Margin = 2 × St. Deviation × [e^(λ × 0.47) - 1]

Recommended distribution: Lognormal.
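The four steps can be collected into a single routine. The sketch below assumes the lognormal form, the 99.5th percentile shock and the approximations used throughout this section; the inputs are hypothetical.

    from math import exp, sqrt

    def gmxb_risk_margin(baseline, shocked_995, lam, other_st_devs=()):
        """Steps 1-4 of the recommended calculation under the lognormal assumption.

        baseline, shocked_995: market-consistent guarantee values under the baseline and
        shocked non-market assumption sets (shock assigned to the 99.5th percentile).
        other_st_devs: standard deviations of additional uncorrelated non-market risks.
        """
        st_dev = (shocked_995 - baseline) / 4.0                          # step 2
        st_dev = sqrt(st_dev ** 2 + sum(s * s for s in other_st_devs))   # step 3
        return 2.0 * st_dev * (exp(lam * 0.47) - 1.0)                    # step 4, lognormal

    # Hypothetical inputs, $millions: behavior shock plus a mortality st. deviation of 5
    print(round(gmxb_risk_margin(-26.0, 25.0, lam=0.3, other_st_devs=(5.0,)), 1))   # ~4.1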

Estimating Lambda

The meaning of the parameter lambda in the Wang transform is the price of risk. That is, it shows by how much the fair price will increase should the measure of risk increase by one unit. In this general context, lambda is basically independent of the nature of the risk. It is rather a characteristic of the entity bearing the risk and should be closely related to the overall risk tolerance of that entity.

In order to estimate lambda, two approaches are suggested here: company-specific and market-based.

With the company-specific approach, the question should be asked: how much extra return in excess of the risk-free rate does the company require for accepting an additional marginal unit of risk? In a narrower context, this could be answered by the return-on-capital targets for new products. For example, if a company expects a risk-adjusted return on capital (RAROC) of 8 percent in excess of the risk-free rate on an after-tax basis, and the required economic capital equals four standard deviations of the change in economic value, then lambda equals 8 percent × 4 = 32 percent. Here an assumption is made that one standard deviation equals one unit of risk.
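In code form, the company-specific estimate is a single multiplication; the 8 percent excess RAROC and the four-standard-deviation capital rule are the illustrative figures used above:

    def lambda_from_raroc(excess_raroc, capital_in_st_devs):
        """Company-specific lambda: required excess return per standard deviation of risk."""
        return excess_raroc * capital_in_st_devs

    print(lambda_from_raroc(0.08, 4))   # 0.32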

The market approach would extend the definition of lambda to the capital markets, which is essentially the definition of the Sharpe ratio. The Sharpe ratio based on historical returns for a broad range of domestic equity indices falls between 0.3 and 0.4; for example, the Sharpe ratio for the S&P 500 historically is about 0.4. Therefore, a lambda in the range of 0.3 to 0.4 should be reasonable.


References:

1. K. Dowd, 2005, “Risk and Reward,” Financial Engineering News, July/Aug. 2005, Issue 44.

2. S. Wang, 2000, “A class of distortion operators for pricing financial and insurance risks,” Journal of Risk and Insurance, 67, 15-36.

3. S. Wang, V. Young, H. Panjer, 1997, “Axiomatic characterization of insurance prices,” Insurance: Mathematics and Economics, 21: 173-183.


A Critique of Fair Value as a Method for Valuing Insurance Liabilities
by Darin Zimmerman

Fair value will soon be used to value insurance liabilities. It is not a question of "if" but "when." This should not be viewed as the apocalypse or a holy grail. Like virtually every other intractable problem in life, there are no solutions, only trade offs. There are advantages to this development, but also drawbacks. I offer this critique of fair value as a method of valuing insurance liabilities, not as a call to abandon fair value or forestall its implementation, but to highlight the challenges the industry will face in this transition. It is my hope that by bringing these challenges to the forefront of people's minds, we will develop practical valuation techniques that make the most of the advantages of fair value, while diminishing the drawbacks as much as possible.

It is also important to note that the following disclaimer is not perfunctory. I offer these opinions as my own, and they in no way reflect the opinions or positions of my employer, Transamerica Reinsurance, or its parent, AEGON.

Who's Asking?

I've made the point in numerous actuarial presentations that financial statements are a complicated answer to a simple question: "When will I get paid?" Since many different people ask this question, there are many different answers that an insurance enterprise can supply. The regulators ask this question on behalf of policyholders, and so statutory (or regulatory) financial statements present a different picture than US GAAP financial statements, which are, generally, answering that question for corporate debt holders. Equity analysts ask that question on behalf of their clients, common shareholders, and find the information in embedded value financial statements very useful. The insurance enterprise's pensioners are also interested in getting paid and are probably interested in the company as a going concern. And of course, you can't forget everybody's favorite uncle, Sam: the tax man wants to get paid as well, and he is increasingly suspicious of overly conservative valuations that comfort the policyholder.

Insofar as it is at least rude, if not disconcerting, to acknowledge that capital markets reduce the promises we sell to simple figures of dollars and cents, fair value is obviously not a candidate for reporting to policyholders or their proxy, regulators. Solvency and prudent conservatism will and should continue to dominate the financials produced to tell policyholders when they will get paid. But for debt and equity holders, fair value has the potential to provide better information about the risks a company writes and can even make the financial statements more comparable to those of other financial service industry enterprises. A friend of mine made the astute observation that U.S. insurance companies have become very adept at giving away free call and put options, because the accounting rules didn't force companies to value them. Under fair value accounting, "free" options become prohibitively expensive.

Rules versus Principles

There is no better demonstration of the axiom "no solutions; only trade offs" than the question of whether valuation requirements should be rules-based or principles-based. The table below presents some of the advantages and drawbacks of each:



History

Carpe diem, "Seize the day," is the rejoinder of artists around the world. They are constantly trying to sell us on the value of living in the moment, because you never know what the future might bring. Well, most actuaries would characterize this as a dubious claim, to say the least! But apparently, the investment bankers have taken this advice to heart. They feel that all opinions of what the future holds are mere speculation, and no one person's opinion can be trusted more than another's. They feel the price data as it exists today is the best measure of liabilities, because it is an amalgam of all opinions precisely balanced by the invisible hand of The Market.

Well, just as I know night will eventually follow day, and spring will eventually follow winter, I know high interest rates will be followed by low interest rates and low interest rates will be followed by high. Likewise, the property and casualty actuary knows that the underwriting cycle will have a tight market followed by a loose market before returning to tight.

We do not know the frequency of the interest rate and underwriting cycles as precisely as the seasons, but we do know they exist. And even if our opinion of "high interest rates" and "low interest rates" changes over time, mean reversion exists because market actions are a result of human decisions, which cannot be memory-free like the lognormal interest rate and equity return generators we build. Historically, insurance liability valuation has sought to smooth out these cycles as another opportunity for diversification. Graphically it looks like this:

Rules-based

Advantages:
• Easier for regulators to audit
• Easier for companies to implement
• Makes commercial software easier to develop and use
• Makes different companies' results more comparable
• More objective

Drawbacks:
• Does not always make sense
• Easier to manipulate
• New products can cause problems
• Lack of accountability
• Impedes creativity
• Regulators can't admit

Principles-based

Advantages:
• Always makes sense
• No incentive to manipulate product design (PD)
• Encourages PD creativity
• Diminishes systemic risk (mistakes not industry-wide)
• Actuarial focus is on price of risk, not accounting results

Drawbacks:
• More subjective
• Harder for regulator to audit
• Easier to commit fraud
• Easier for actuary to make a bankruptcy-sized mistake
• Requires powerful analysis tools and understanding
• Requires better actuaries


The best way to manage non-hedgable risk is through diversification. Insurance enterprises have always diversified individual instances of risk by combining them into portfolios of risk, and they have been adept at developing products that contained several non-correlated kinds of risk to further the diversification benefit. And we have historically diversified our risks across time as well. The U.S. statutory concepts of the IMR (Interest Maintenance Reserve) and the AVR (Asset Valuation Reserve) are methods of smoothing investment returns (that is, current period income) by amortizing credit gains and default losses, and capital gains and losses, into income. And under US GAAP FAS 97, it has been very common for management's best estimate of future equity returns and interest rates to employ mean reversion to dampen the effects of capital market movements.

This is a perfectly acceptable approach PROVIDED that you have enough capital to ride out the entire cycle. In truth, we can never know this, because these cycles are not as regular as the seasons.

Price versus Prediction

I've often made the point that asking an actuary to put a fair value on an insurance liability is asking him to do something completely foreign to traditional actuarial work. All of our training has been geared toward predicting future benefits. (With call and put options, we assign probabilities to the range of possible future outcomes in order to calculate an expected value of benefits.) We can quibble all day over the discount rate used to calculate the present value of the benefits, but that's the easy part: predicting the future benefits is the hard part. And if you want to know how well an actuary does his job, all you have to do is wait to see how close his predictions of future benefits are.

In placing a fair value on an insurance liability, we are asking for something fundamentally different from a prediction of future benefits, even though it is calculated similarly. With fair value (especially exit value) we are asking the actuary to predict a price at which a hypothetical transaction would take place. It's interesting to note that once the prediction is made, we can never check up on the actuary like we can with traditional actuarial work, where all we need is patience. Once the hypothetical transaction date is past, any subsequent transaction would have a different in-force inventory, different interest rate conditions, and more information for experience studies.

When we speak of a liability's value, investment bankers and insurance actuaries are talking past each other. In placing a value on an insurance liability, the investment banker is trying to predict the unknowable present and the actuary is trying to predict the eventually knowable future. In a fair value world, the actuary must come to appreciate this subtle difference. Before I gained an appreciation for this difference, I used to enjoy pointing out the fact that, generally, market prices don't have predictive value. Forward interest rates don't do a good job of predicting future interest rate levels. Implied volatilities don't do a good job of predicting future actual volatility. And if a country's regulator greatly increases the capital requirements for a certain asset class, spreads on that asset class will widen as companies are forced to divest of that class, but the wider spreads do not increase the probability of default for the issuing companies. They merely reflect a new equilibrium point in the supply and demand for fixed income securities.

This lack of predictive value is not surprising given the fact that investment bankers aren't trying to predict the future; they are trying to predict the present. And in one sense, the investment bankers have the better argument: the most important aspect of any proposed insurance accounting system is that it must value assets and liabilities consistently. For the majority of assets, it is very easy to accurately predict their fair value. If we start on the left side of the balance sheet and we know the fair value of a group of assets that perfectly hedge a liability, then the liability value must match the value of the assets, irrespective of what a good prediction of the future benefit payout is. This is one of those Truths that is truly self-evident.

Now it is fair to point out that in the insurance world, the situation where it is possible to construct a "perfect hedge" is exceedingly rare. It is more common to be able to construct partial hedges. Given this fact, the "art" of fair value will be in deconstructing the various risks so that the hedgable portions tie exactly to the value of financial instruments that would hedge the risk, and the remaining risks are valued consistently with the assets backing them.

Understanding Market Value Risk Margins

Even though one uses risk-free rates (which have no default margins) to calculate risk neutral scenarios,


that does not mean risk neutral prices don't have risk margins embedded in them. The confusion is created when the three risk profiles are placed next to each other. The three risk profiles are risk averse, risk neutral and risk seeker. Someone who is a risk seeker will pay a margin to take risk. (A gambler plays at an expected loss for the privilege/thrill of gambling.) Someone who is risk averse will pay a margin to avoid risk. (Insurance premiums are higher than the present value of loss.) It is incorrect to interpolate between these two profiles and conclude someone who is risk neutral is indifferent to risk and will accept it without a margin.

In fact, risk neutral prices do contain a margin over the actuarially determined expected value. The amount of the margin is the precise amount that The Market has determined will compensate for assuming the risk.

The International Actuarial Association (IAA) has issued a white paper on risk margins at the request of the International Association of Insurance Supervisors (IAIS). In it, they describe four possible approaches for determining margins: 1) Explicit assumption (110 percent of current estimate, for example), 2) Quantile method (enough margin for a 95 percent confidence interval, for example), 3) Cost of Capital method (enough margin to compensate, at the cost of capital, for the amount of capital needed to manage the risk), and 4) Other (implicit conservatism in assumptions or discount rates).

It seems the cost of capital method is the most promising for fair value. In order to get a value that doesn't exaggerate the effects of market movements, the actuary will need to start with the total capital requirement for the product portfolio and will then need to strip out the hedgable risks. If he can estimate the risk margins within the prices for the hedgable risks, he can use his cost of capital to estimate what portion of the total capital is attributable to hedgable risks. After that, he should strip out capital related to asset risk (C3 in the United States). Then he can use his cost of capital with the remaining capital to calculate the market value margin that needs to be added to the current estimate of the non-hedgable risks in order to calculate the total fair value.
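A stylized sketch of that sequence might look as follows; the figures and the single-period cost-of-capital charge are hypothetical simplifications, since a full calculation would project capital over the runoff of the block.

    def market_value_margin(total_capital, hedgable_risk_capital, asset_risk_capital,
                            cost_of_capital_rate):
        """One-period cost-of-capital margin for the remaining non-hedgable risks."""
        non_hedgable_capital = total_capital - hedgable_risk_capital - asset_risk_capital
        return cost_of_capital_rate * non_hedgable_capital

    # Hypothetical $millions: total capital of 100, of which 40 is attributed to hedgable
    # risks (backed out from the risk margins in hedge prices) and 25 to asset (C3) risk
    print(round(market_value_margin(100.0, 40.0, 25.0, cost_of_capital_rate=0.06), 1))   # 2.1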

Risk Free versus Portfolio Yield

Many actuaries believe the current draft of the IASB Phase II exposure draft directs the actuary to discount projected cash flows using risk-free rates (less a spread to reflect the instrument's credit worthiness). Historically, actuaries have discounted at (a slightly conservative estimate of) an appropriate portfolio yield. It seems counter-intuitive that it shouldn't matter if the left side of the balance sheet is a well diversified portfolio of bonds, or if it is a single bond. In reality, the left side will affect the financials through capital. An entity will need less capital to manage insurance risks backed by a well diversified portfolio than it would need to run the company with a single (very large) bond.

I can appreciate why discounting at risk-free rates is desirable for simplicity's sake, but I think a strong case can be made for a higher rate. As described in the previous section, a bond's credit spread should reflect a spread higher than the best estimate of credit losses, because the spread has to reward the risk taker for the capital he needs to post to manage these risks. As was discussed before, by diversifying default risk across a non-correlated portfolio of bonds, we reduce the required capital needed to manage the portfolio, and thus there should be extra spread (margin) available.

Incorporating Credit Worthiness in Discounting

I have encountered many people who feel very strongly that incorporating credit risk into the discounting of liability cash flows is just plain wrong. If we were discussing regulatory valuation, I would agree without reservation; however, since this is GAAP reporting, I will admit to being able to see both sides.

On a theoretical level, incorporating an instrument's credit spread makes sense because it should reduce income volatility. An insurance enterprise that is


Darin G. Zimmerman, FSA, MAAA, is vice president and chief actuary for Transamerica Re in Charlotte, N.C. He may be reached at [email protected]


managing risk prudently should invest the funds to back reserves in a diversified portfolio of fixed income assets that approximates the credit worthiness of the instrument being managed. That way, if there is a credit event or shock in the broader markets, credit spreads on the asset portfolio will decrease the value of those assets, but the credit worthiness of the instrument being valued should have widened as well, also reducing the fair value of the liability being valued. If management did a good job of matching, the decrease in assets should be offset by the decrease in liabilities. Without incorporating credit worthiness into the fair value of the liability, all credit events would produce income equal to the change in fair value of the assets. So on a theoretical level, this makes perfect sense: a well managed company should see earnings volatility if credit spreads widen a little.

On a practical level, it will be exceedingly difficult to monitor and estimate the credit worthiness of instruments each quarter. There is a very real possibility that methods for estimating credit worthiness will be somewhat arbitrary and will create artificial income volatility that could be worse than if no creditworthiness was considered. Here the actuary will need to remember that in predicting a price (not a probability of default), markets move together. A survey of the broader market might be the best method of estimating creditworthiness of the instrument.

Concluding Remarks

Obviously, the thoughts contained herein are in no way exhaustive on the subject, but it is my hope that actuaries will consider these issues I have raised. I think that if they do, a better understanding of the implications of the issues will lead them to develop better valuation methods for putting a fair value on insurance liabilities.


Embedded Value Reporting Redux

Stop us if you've heard this one: embedded value reporting is coming to North America. This has been heard in various forms since the late '90s. But this time it's true. Embedded value (EV) reporting is growing in importance in North America for several reasons. Most European insurance groups (and, by extension, their North American subsidiaries) are reporting embedded value results publicly. In Canada, two of the "Big Three" regularly disclose embedded values as part of their external reporting. The concept behind embedded value is similar in nature to the principles-based approaches being developed by the American Academy of Actuaries, and investment analysts and rating agencies are increasingly turning to economic measures of value other than traditional statutory or GAAP reporting.

One major criticism of embedded values has always been the lack of authoritative guidance. Because EV reporting lies outside the purview of statutory and GAAP accounting, North American standard setters have steered clear of addressing EV (notwithstanding a paper published by the Canadian Institute of Actuaries in 2000). However, with EV gaining popularity outside North America, in 2004 a group comprised of the CFOs of the major European life insurers published a set of principles for reporting embedded values. These principles, and subsequent releases, have begun to form a set of codified guidance for EV reporting that has, in part, filled that void for North American insurers.

With that in mind, the Academy's Life Financial Reporting Committee decided to further enhance the guidance available to actuaries performing EV work. The Committee is currently completing a Practice Note for North American actuaries on the reporting of embedded values. This work comes not a moment too soon. The SOA Embedded Value Webcast was attended by at least 600 people, indicating that interest in EV reporting is on the upswing, with more than half of registrants stating they were currently calculating EV in some form.

The Practice Note focuses on traditional and European Embedded Values, while leaving the thornier issues relating to Market-Consistent Embedded Values for a future Note. Topics covered include:

• Introduction to embedded values
• Mechanics of embedded values
• Non-economic assumptions
• Economic assumptions
• Analysis of movement
• Treatment of options and guarantees
• Disclosure of embedded values

As is characteristic of Academy Practice Notes, the focus is on the current state of practice in North America, although the Committee does attempt to provide some of the theoretical basis underlying certain issues. The target date for the release of the exposure draft is December 2007.

Many thanks to Tina Getachew, Academy Risk Management and Financial Reporting Policy Analyst, and the following Committee members and other volunteers who have devoted time and energy to the development of this new Practice Note:

• Errol Cramer
• Rob Frasca
• Noel Harewood
• Ken LaSorella
• Patricia Matson
• James Norman
• Jack Walton
• Darin Zimmerman

For more information, please contact Tina Getachew at the American Academy of Actuaries, at [email protected], or any of the committee members listed above.


"There are more things in heaven and earth, Loss Ratio,than are dreamt of in your philosophy."

The loss ratio has been around for a long time. A properly formulated loss ratio tells us what portion of premium income is set aside for claims, over a fairly short interval, over many years, or over the lifetime of an entire block. Yet despite its many modifications to accommodate diverse lines of business, it's still a very blunt instrument.

Loss ratios began as a casualty insurance concept. In auto and homeowners' insurance, renewal periods are short. The loss ratio is just the aggregate claims paid, divided by the aggregate premiums collected. This works because the premiums and claims are confined to a short period, usually one year or less. The timing of the premiums collected roughly matches the timing of the cash claim payouts.

Subsequently, the loss ratio was embraced by the group and individual health insurance business. In those lines, claims can extend much longer, necessitating long-tail claim reserves and the "incurred claims" concept.

With Long Term Care and Individual Disability Income, things get more complicated. These lines use issue age and level premiums, an idea borrowed from life insurance. Active life reserves and investment income complicate the picture, requiring further refinements and recognition of the time value of money. Here, our definition is more properly the present value of (expected) future claims, divided by the present value of future premiums. Or equivalently, it's the accumulated value of (actual) past claims, divided by the accumulated value of past premiums. Despite all those elaborations, loss ratio calculations are based solely on aggregate data and are easy to calculate.
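As a sketch of the present-value version, with a hypothetical block and a flat 5 percent discount rate:

    def pv(cash_flows, rate):
        """Present value of end-of-year cash flows."""
        return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

    def lifetime_loss_ratio(expected_claims, expected_premiums, rate):
        """Present value of expected future claims over present value of future premiums."""
        return pv(expected_claims, rate) / pv(expected_premiums, rate)

    # Hypothetical level-premium block: level premiums, back-loaded claims
    premiums = [100.0] * 10
    claims = [20, 30, 40, 50, 60, 70, 80, 90, 100, 110]
    print(round(lifetime_loss_ratio(claims, premiums, rate=0.05), 3))   # ~0.61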

As we will see, these last enhancements have, by necessity, made the loss ratio sensitive to policy persistency. Thus, it's no longer a pure measurement of claims.

The loss ratio is often used for regulatory purposes. For new rate filings in some lines, companies must demonstrate that the loss ratio, calculated under reasonable assumptions, is expected to meet a legal minimum. In addition, they must monitor emerging experience on their existing business in force. If claims are lower than expected, they may have to decrease premiums or increase policyholder dividends. But this article will concentrate on using the loss ratio as an internal management tool. The methods analyzed here may not precisely match the legal definitions.

What makes a good tool for financial analysis?

• Ease of use—The traditional loss ratio is easy to calculate, because it's based solely on aggregate data, namely premiums, claims and reserves. If possible, any refinement should preserve this advantage. But, as we'll see below, we must sacrifice some simplicity to understand loss ratio dynamics and tie it into other financial measures.

• Drill-down capability—If we subdivide our data, it should be possible to get loss ratios for various underwriting and occupation classes, geographical regions and markets. This can help us monitor the experience of important subgroups, and estimate our pricing adequacy.

• Consistency—To be useful, a measurement must be consistent. Having adopted a benchmark, we should be able to judge how we're doing in relation to it. In other words, if our experience is exactly as originally assumed, the loss ratio should remain constant throughout the life of the business.



Paul Margus, FSA, MAAA, is 2nd vice president, pricing actuary, Berkshire Life Insurance Co., in Pittsfield, MA. He may be reached at [email protected].


The remainder of this article will explore alternative definitions and their mathematical underpinnings. We'll discuss the modifications for level premium lines of business, chiefly Long Term Care and Individual Disability Income, harping again and again on the importance of interest adjustments. The loss ratio concept will be extended to expenses and profit. Limited-pay plans and Life Insurance will be briefly explored. Finally, we'll examine the strengths and limitations of the loss ratio, and how they can be remedied. This will entail linking the loss ratio to gain and loss analysis.

In the next two sections, we deal separately with numerator and denominator.

The Numerator: Claims

For very short-duration claims, it may be sufficient simply to use cash payments. But if claim payouts extend beyond the expected period of the loss ratio, we will have to include the claim reserve. We have two methods of addressing this.

1. The simpler method is to include the initial claim reserve at the moment of inception, and ignore all subsequent activity on that claim. For our loss ratio, the entire claim obligation is discharged in one lump sum. Thus, the loss ratio responds to actual claim incidence, but not to deviations from our assumed claim termination.

2. Another method is to count cash payouts, plus the increase in claim reserve.

Initially, Method 2 is equivalent to Method 1. At claim inception, the claim reserve instantaneously jumps from zero to its initial value. And in this infinitesimal span, we haven't had enough time to make any payment. But subsequently, Method 2 makes mid-course corrections as the actual claim terminations deviate from expected. To see this, consider the familiar recursive formula for the annuity function (Equation 1).

Here, is the claim reserve, is the claim termination rate expected in the claim reserve calculation, is the reserve interest rate, and the periodic claim payout is $1.

Actual claim termination always differs from expected. Let the actual termination rate during claim year t be . Then, subtracting from both sides of Equation 1, we get Equation 2.

At the beginning of the period, the aggregate in-force is $1 and the aggregate claim reserve is . At the end of the period, we're left with of aggregate claims in force, bearing an aggregate reserve of .

This difference is the amount of "actuarial gain from claim termination." If aggregate claims in the period are exactly as expected, then the actuarial gain is zero. If terminations are bigger than expected, the gain is positive. If they're less than expected, the gain is negative, and we have a loss from claim termination.

Thus, on any closed block of claims, we can subtract actual claim payouts from the last period's aggregate claim reserve. Then we adjust for interest, taking into account the actual timings. If this overstates the current aggregate claim reserve, then the amount of the overstatement represents the actuarial gains for the period.

So, in many practical applications, the actuarial gain is just the balancing item. But as I will explain later, it may be useful to invest additional effort to calculate it explicitly.

For payout patterns other than annual payouts with one-year loss ratios, the above math is more complex, but the result is the same, as long as we let our interest adjustments reflect the actual timing of the payments.

We now return to our definition of "incurred claims": cash payouts, plus the increase in claim reserve (Equation 3).



To make things work out neatly in Equation 4, I have applied an interest adjustment to the reserve increase in Equation 3. In the real world, the claim payout isn't concentrated at the beginning, so it may need some sort of discounting, too. In practice, that's all there is to calculating the aggregate incurred claims. But let's see what we're actually calculating.

As mentioned above, at the moment of claim inception, Method 2 is identical to Method 1. Thereafter, Method 2 records deviations from expected terminations as actuarial gains. These gains (and losses) serve as mid-course corrections to the initial claim reserve, which occur only as the experience unfolds.

Method 1 is simpler, and it confines the claim experience to the period of the loss ratio, while neglecting the mid-course corrections. Method 2 scrupulously adjusts for under- or over-reserving over time. But it blends prior claims into the calculation of the current loss ratio. Thus, each method has its advantages and disadvantages.

In the above derivations, we have assumed that our claim reserve is a quasi life annuity calculation. But these principles are equally valid for claim triangles. In any event, if claim durations are potentially long, we need an interest element in the reserve and incurred claim calculations. (Loss reserves should be discounted.)

As a practical matter, under Method 2, the incurred claims are calculated using the fundamental definition: cash payouts, plus the (interest-adjusted) increase in claim reserve. The sole purpose of our derivations was to show that incurred claims are exactly:

• the claim reserve at the moment of claim inception, and
• the negative of actuarial gains for any subsequent period.
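
A minimal numerical sketch of the two conventions follows. The reserves, payout amount and 5 percent interest rate are illustrative assumptions of mine, as is the start-of-year payout timing; they are not figures from the article.

    i = 0.05          # assumed reserve interest rate
    payouts = 100.0   # actual cash claim payouts during the year, assumed paid at the start

    # Assumed aggregate claim reserves on a small closed block of claims.
    reserve_at_inception = 450.0   # reserve established when the claims were incurred
    reserve_end_of_year = 370.0    # actual aggregate reserve one year later

    # Method 1: the entire incurred claim is the reserve established at inception.
    incurred_method_1 = reserve_at_inception

    # Method 2, subsequent year: cash payouts plus the interest-adjusted increase in
    # the claim reserve, which is the negative of the actuarial gain from termination.
    expected_end_reserve = (reserve_at_inception - payouts) * (1 + i)
    actuarial_gain = expected_end_reserve - reserve_end_of_year
    incurred_method_2 = -actuarial_gain

    print(incurred_method_1, round(incurred_method_2, 2))   # 450.0 plus a small later correction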

The Denominator: Premiums

Premiums should be recognized only when due. "Incurred Premiums" represent what we'll collect over the period, if everyone pays exactly on time. This is just the Cash Collections over the period, plus the increase in "Premiums Due and Unpaid," minus the increase in "Premiums Paid in Advance" (Equation 5).

A further refinement is to use the "Earned Premium," which represents what we would collect if premiums were paid continuously, and always exactly on time. Thus, over a four-month period, we show 4/12 of an annual premium, regardless of when the policy anniversary occurs. (Otherwise, for a block of policies paying annually in February, we could be dividing by zero if we tried to do a loss ratio for just the summer months. And a first quarter loss ratio would be understated because it would reflect a whole year's premium.) The "earned premium" is the "incurred premium," minus the increase in unearned premiums (Equation 6).

For loss ratios taken over an extended period, we must adjust for interest. This means taking the present or accumulated value of premiums. In addition, it seems appropriate to interest-adjust all of the "Due and Unpaid," "Advance" and "Unearned" accruals. (See the Active Life Reserves section.)
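
As a quick sketch of those accruals (all amounts below are invented for illustration), the incurred and earned premiums for a period can be assembled like this:

    cash_collected = 1000.0      # premiums actually collected during the period
    due_unpaid_increase = 40.0   # increase in "Premiums Due and Unpaid"
    advance_increase = 25.0      # increase in "Premiums Paid in Advance"
    unearned_increase = 15.0     # increase in unearned premiums

    incurred_premium = cash_collected + due_unpaid_increase - advance_increase
    earned_premium = incurred_premium - unearned_increase
    print(incurred_premium, earned_premium)   # 1015.0 1000.0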


Active Life Reserves

The above formulation is consistent with a mid-terminal Active Life Reserve. If t represents the time (in years) since the last anniversary (between 0 and 1), then it's good enough to interpolate the Active Life Reserve linearly between anniversaries (Equation 7).

This differs somewhat from custom as follows:

• The unearned premium is omitted from the reserve;
• In Equation 6 above, we subtracted it from the premiums.

If our reported premiums and reserves follow a different convention, we should adjust them for loss ratio calculations. (In the case of Life Insurance, premiums should exclude any increase in deferred premium. For the casualty and group lines, we end up with no active life reserve at all.)

To get meaningful loss ratios, we'll want our reserves to be as realistic as possible. Usually, GAAP benefit reserves are the best candidate. To the extent possible, the margins for adverse deviation should be removed, perhaps using a simple multiple.

Long Term Care and Individual Disability Insurance specify level premiums, payable for the term of the coverage or for a limited period. Because the premium is level and claim costs are increasing, the premiums and claim costs are mismatched. Without some adjustment for active life reserves, the loss ratios will be meaningless. They will start out unrealistically low, but will ultimately attain astronomical levels. To remedy this, we recognize the increase in active life reserves as part of the claim cost for the current period. Two definitions are popular (Equations 8 and 9).

As always, the 0 subscript refers to the beginning of the period, while 1 means the end.

Equation 9 is the better choice. As we will show in the section entitled Doing the Math, the loss ratio works out to be the valuation net premium for the benefit reserve, divided by the gross premium, minus actuarial gains (as a percent of premium). Thus, it meets the "consistency" criterion discussed in the introduction of this article.

If you accept the previous assertion for now, then Equation 8 fails the "consistency" criterion. Even in the absence of actuarial gains, the loss ratio won't be level. In the early policy years, when the active life reserve is small, it will be almost right. It will increase artificially as the missing interest adjustment becomes significant. It will peak at some point, and then decrease back to normal at the end of time. Back in the real world, when the loss ratio increases, we won't know whether to blame bad claims or chalk it up to the natural behavior of an aging block.
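
Since Equations 8 and 9 are not reproduced here, the sketch below shows one plausible reading of the distinction, under my own assumed figures: both definitions add the change in the active life reserve to incurred claims, but only the second accumulates the opening reserve at the valuation rate before taking the difference, so the reserve's own interest earnings are not counted as claim cost.

    incurred_claims = 180.0                 # claims for the period, before any reserve adjustment
    earned_premium = 1000.0
    alr_start, alr_end = 2000.0, 2260.0     # active life reserves at the period endpoints
    i = 0.05                                # valuation interest rate

    # "Equation 8" style: simple change in the active life reserve, no interest.
    loss_ratio_no_interest = (incurred_claims + (alr_end - alr_start)) / earned_premium

    # "Equation 9" style: accumulate the opening reserve with interest before differencing.
    loss_ratio_interest_adj = (incurred_claims + (alr_end - alr_start * (1 + i))) / earned_premium

    print(round(loss_ratio_no_interest, 3), round(loss_ratio_interest_adj, 3))   # 0.44 0.34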

Over an extended period, Equation 9 is better written in present-value form as Equation 10, where PV(whatever) is the n-year present value at the mth policy year.

Now, we define a few symbols.

= Issue Age.
= Policy Year.
= Active life reserve per unit in force at the end of policy year t.
= Active life reserve on the policy issue date, which is zero.
= Active life reserve at the end of coverage. Px is chosen so that this comes out to zero.
= Valuation interest rate for policy year t.




= Actual Lapse rate for policy year t.
= Actual Mortality rate for policy year t.
$1 = Actual Amount in force at beginning of policy year t.
= Actual Amount in force at the end of policy year t.
= Aggregate Reserve at beginning of policy year t.
= Actual Aggregate Reserve at end of policy year t.

At policy year m, we have $1 in force with an aggregate reserve of . Then n years later, remains in force, and the aggregate reserve is .

Substituting into Equation 10, we get Equation 11, which looks a lot like Equation 9. From this, we draw some conclusions.

• Equation 9 applies over any period of time that we choose, as long as we properly adjust for interest.

• The foregoing derivation does not in any way use the reserve valuation assumptions. But the section Doing the Math does.

• Reserves matter only at the endpoints. Intermediate reserves have no effect on the loss ratio.
  o Within the term, the claim reserve increases (Method 2) telescope in the same way.
  o For m = 0 (new business) and n running to the end of time, the reserve increase becomes zero minus zero. Similarly, the accruals in Equation 5 and Equation 6 go to zero, so the lifetime loss ratio is just the present value of all claims, divided by the present value of all premiums, which is similar to the loss ratio that we file for a new policy form.

  o As mentioned previously, the premium accruals need corresponding interest adjustments.

• For analyzing past results, we calculate retrospective accumulated values rather than present values. Instead of inserting persistency and mortality assumptions into the Equation 11 summations, we simply accumulate the aggregate historical premiums and claims with interest. At the endpoints, we use actual reserves with the interest adjustment.
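
Here is a minimal sketch of that retrospective calculation. The cash flows, the endpoint reserves, and the single 5 percent accumulation rate are all illustrative assumptions of mine, and the endpoint-reserve treatment is my reading of the interest adjustment discussed above.

    i = 0.05
    claims_paid = [120.0, 150.0, 190.0]          # aggregate claims in each year of the period
    premiums_collected = [500.0, 470.0, 440.0]   # aggregate premiums in each year of the period
    alr_start, alr_end = 800.0, 1400.0           # actual aggregate active life reserves at the endpoints

    def accumulate(cash_flows, i):
        # Accumulate end-of-year cash flows with interest to the end of the period.
        n = len(cash_flows)
        return sum(cf * (1 + i) ** (n - t - 1) for t, cf in enumerate(cash_flows))

    years = len(claims_paid)
    numerator = accumulate(claims_paid, i) + (alr_end - alr_start * (1 + i) ** years)
    denominator = accumulate(premiums_collected, i)
    print(round(numerator / denominator, 3))   # retrospective interest-adjusted loss ratio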

Doing the Math

The active life reserve funds benefits over the term of the coverage. During policy year t, it changes as described by Equation 13,

where

= Net level premium.
= Valuation Claim incidence rate for policy year t.
= The present value of benefits at inception, under a claim starting in policy year t. For individual disability income, it's the familiar claim annuity.
= The net annual claim cost for claims starting in policy year t.
= Valuation Lapse rate for policy year t.
= Valuation Mortality rate for policy year t.

Here, the net level premium is set at a level that funds the benefits over the term of the policy, assuming that claim costs, interest, lapse and mortality occur exactly as assumed. Thus, the active life reserve starts and ends at zero. At intermediate times, if the net annual claim cost generally increases with t, the reserve is greater than zero.
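
To make the progression concrete, here is a rough sketch of solving for a net level premium and projecting a per-unit active life reserve. Because Equation 13 is not reproduced here, the recursion below (start-of-year premium, end-of-year claim costs and decrements) and all of the rates are my own assumptions, not the article's.

    term = 5
    i = 0.05
    incidence = [0.010, 0.015, 0.022, 0.030, 0.040]   # valuation claim incidence rates
    claim_pv = [400.0, 420.0, 440.0, 460.0, 480.0]    # PV of benefits per claim at incidence
    lapse = [0.10, 0.08, 0.07, 0.06, 0.05]            # valuation lapse rates
    mortality = [0.002, 0.003, 0.004, 0.005, 0.006]   # valuation mortality rates

    def terminal_reserve(net_premium):
        # Project the per-surviving-unit active life reserve and return its value at expiry.
        v = 0.0
        for t in range(term):
            persisting = 1.0 - lapse[t] - mortality[t] - incidence[t]
            v = ((v + net_premium) * (1 + i) - incidence[t] * claim_pv[t]) / persisting
        return v

    # Solve (by bisection) for the net level premium that makes the terminal reserve zero.
    low, high = 0.0, max(claim_pv)
    for _ in range(100):
        mid = (low + high) / 2
        low, high = (mid, high) if terminal_reserve(mid) < 0 else (low, mid)
    print(round((low + high) / 2, 4))   # the net level premium under these assumptions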

Actual Experience never follows our script.

= Actual Claim incidence rate for policy year t.
= The actual net annual claim cost for claims starting in policy year t. To keep it simple, I'm using the Method 1 definition of "incurred claims."

If 1.0000 = Amount in force at the beginning of policy year t, then:

= Expected Aggregate Reserve at end of policy year t.
= Expected Amount in force at the end of policy year t.

Then, we can transform Equation 13 as follows:

Add (expected decrements) to both sides …

Subtract (actual decrements) from both sides …



Collect terms …

On the right hand side, add and subtract (actual claims) …

Finally, we can write Equation 14.

Equation 14 says, in words, that the period's incurred claims (benefits plus the interest-adjusted increase in the active life reserve) equal the valuation net premiums, minus the actuarial gains.

The actuarial gains represent deviations from expected claims, lapses and mortality. They can be either positive (favorable) or negative (unfavorable).

This motivates a definition for the Interest-Adjusted Loss Ratio (Equation 15).

In practice, the loss ratio is calculated from aggregated data. Therefore, actual calculations use the Equation 15 definition. Equation 16 shows that the loss ratio is our established Net-to-Gross ratio, adjusted for experience over the period.

Expense Ratios, Combined Ratios and Profit Margin

In GAAP accounting, we establish a deferred acquisition cost asset. The DAC asset is simply the negative of an "expense reserve." From year to year, the expense reserve progresses in a manner similar to Equation 13,

where

= Expected incurred expense at the beginning of policy year t.
= Actual incurred expense at the beginning of policy year t.
= Expense Net level premium.
= Expense Reserve per unit in force at the end of policy year t. It's generally negative. Negating it gives us the positive DAC asset.

Then we define an expense ratio (Equation 18).

Skipping a lot of math that's very similar to our transformation of Equation 13, we get Equation 19.


Actual calculations use the Equation 18 definition. Equation 19 shows that it works out to our GAAP amortization percentage, adjusted for experience over the period.

Equation 19 is the deferrable expense analogue of Equation 16. Adding:

• the net-to-gross ratio from Equation 16 (covering benefits),
• the GAAP amortization percentage from Equation 19, and
• some provision for nondeferrable expenses

results in the "combined ratio," another concept borrowed from casualty insurance. And 100 percent minus the combined ratio is the profit margin.
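
For example, with purely hypothetical percentages (my own, not the article's), the arithmetic looks like this:

    benefit_net_to_gross = 0.55    # benefit component, the first term of Equation 16
    gaap_amortization = 0.18       # deferrable-expense component, the first term of Equation 19
    nondeferrable_expense = 0.12   # provision for nondeferrable expenses

    combined_ratio = benefit_net_to_gross + gaap_amortization + nondeferrable_expense
    profit_margin = 1.0 - combined_ratio
    print(f"{combined_ratio:.0%} combined ratio, {profit_margin:.0%} profit margin")  # 85%, 15%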

Lines of Business

The Equation 9 definition of the loss ratio modifies the basic concept to fit issue age level premium plans, chiefly Long Term Care and Individual Disability Income.

In the early years, LTC claims are very small. Even significant percentage deviations will not register significantly in the loss ratio calculation. The actuarial gains from claim experience are small compared to the other components (net-to-gross ratio, and mortality and lapse gains). High early lapses could make a new block of LTC look very profitable. But the remaining insureds may be less healthy, and subsequent claim experience may be unfavorable. You should always examine your loss ratio results critically, and understand what's driving them.

Limited-pay plans (e.g., 10-pay) present special problems. Applying our usual formulas, we're dividing by a very large premium in the early years. Later on, we're faced with the prospect of dividing by zero. None of this matters if we don't have much limited pay in force, or if we have a mature distribution by policy year. But the scheme is fairly popular, and most policies are probably still in their premium-paying period. One solution may be to restate the active-life reserve as lifetime pay, and treat the excess as unearned premium. Thus:

• the interest-adjusted increase in the lifetime pay portion would be added to claims in the numerator, and
• the increase in the interest-adjusted excess would be subtracted from the premiums in the denominator.

This doesn't sound very practical. If the limited-pay block is small, you can spare yourself the effort.

We can apply these concepts to life insurance. The life insurance analog of Equation 16 is Equation 20, where

= Face Amount for policy year t.
= Cash Value for policy year t (usually zero for term insurance).

In life insurance, the law doesn't require loss ratio calculations. There is no consensus on acceptable values, especially for Cash Value Whole Life, although they may be helpful for term insurance. They may also be useful if your parent company is a casualty insurer.

Limitations and Food for Thought

The loss ratio is easy to apply, based solely on aggregate premiums, claims and reserves. Using modern data warehouse technology, we can examine separate loss ratios for various underwriting and occupation classes, geographical regions and markets.

Of course, we must confine our examination to subsets that produce statistically significant results. That entails some combination of choosing sufficiently large subsets or sufficiently long study periods.

But Equation 16 indicates one obvious area where the information is incomplete. We see that the major component of the loss ratio is the ratio of valuation net premium to gross premium (and this ratio may be similar to what we originally filed with the states). In the loss ratio calculation, actuarial gains, expressed as a percentage of premium, are implicitly subtracted.

So, if our loss ratio is higher than expected, is it because of excess claim incidence or insufficient lapses? (And if we're using Method 2 to calculate our incurred claims, are low claim terminations to blame?) Thus, the loss ratio alone gives us an incomplete picture.

The solution is to calculate the individual actuarial gains explicitly. For example, Equation 16 contains , the gain from incidence. The component is the sum of all new claim reserves established during the period. And requires doing a summation over all active lives in force. Once we get those figures, we can divide by the interest-adjusted earned premiums, giving us the gain from incidence as a percentage of premium.

Similar analysis gives us the remaining actuarial gains. Then, we can calculate the aggregate net-to-gross ratio, subtract the actuarial gains percentages, and hope that they add up to the aggregate loss ratio.

This is a bit tedious, because we are required to go beyond merely taking ratios of aggregate quantities. But our reward is that we can split our loss ratio into the expected net-to-gross value and all deviations from expected. For example, assume that we priced for a 55 percent loss ratio, which is the net-to-gross premium ratio. During the period, if incidence gains are +2 percent, lapse gains are -7 percent, and termination gains are +1 percent, then our total loss ratio is

55% - (2% - 7% + 1%) = 59%

The loss ratio is higher than we priced for, and yet claim incidence and termination are fine. The problem is "insufficient" lapses.
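
The same decomposition is trivial to reproduce in code, using the example's figures:

    net_to_gross = 0.55
    gains = {"incidence": 0.02, "lapse": -0.07, "termination": 0.01}
    loss_ratio = net_to_gross - sum(gains.values())
    print(f"{loss_ratio:.0%}")   # 59%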

We can perform similar analysis of expenses (see the Expense Ratios, Combined Ratios and Profit Margin section above). Then the "deferrable" expense ratio will be the GAAP amortization ratio, minus the actuarial gains. Here, the "insufficient" lapses may translate into an actuarial gain, somewhat offsetting the disappointing loss ratio.

All of the foregoing, plus the nondeferrable expenses and profit margin, add up to 100 percent. The moral of the story is that not all deviations are created equal. A higher than expected loss ratio does indicate an unanticipated level of claim payout. But if it's solely because of low lapses, then our expense amortization offers some mitigation.

GAAP reserves generally include some margin for adverse deviation. As mentioned earlier, we can have a more meaningful exhibit of actuarial gains if we devise an adjustment that removes those margins.

If the net-to-gross ratio varies by issue age, sex, underwriting class, etc., then the aggregate net-to-gross will gradually shift with variations in lapses and mortality. This is another pitfall to examining loss ratios in isolation. We can overcome this by splitting the loss ratio into its basic net-to-gross ratio, minus actuarial gains. $


What's Outside

Health Watch
Health Section Newsletter, Sept. 2007
http://www.soa.org/library/newsletters/health-watch-newsletter/2007/september/hsn-0708.pdf

IASB Phase II Insurance Contracts Project: Implications for U.S. Health Insurers
by Rowen B. Bell
An article about how IASB Phase II may affect reporting for health insurance products.


Long-Term Care News, Aug. 2007
http://www.soa.org/library/newsletters/long-term-care/2007/august/ltc-0807.pdf

Random Variation in Claims Reserves
by James Berger
On the challenges of explaining changes in claims reserves.

Taxing Times
Taxation Section Newsletter, Sept. 2007
http://www.soa.org/library/newsletters/taxing-times/2007/september/ttn-0907.pdf

Tax Uncertainty Swirls Around Principles-Based Reserves
by Christian DesRochers
Highlights some potential federal income tax issues that arise when a principles-based approach to reserves is adopted.


$


475 N. Martingale Road, Suite 600
Schaumburg, Illinois 60173
Web: www.soa.org