New Catastrophe Models for Hard Times

By Patricia Grossi and Howard Kunreuther


Driven by the increasing frequency and severity of natural disasters over the past few years, catastrophe models have grown more sophisticated, helping insurers and reinsurers weather the storms.

THE PROBLEM OF PREPARING FOR A NATURAL DISASTER isn’t a new one. Around the world and particularly in the more developed countries, governments, individuals, and businesses know they should be preparing for the “big one,” whether it’s an earthquake, a hurricane, or a flood. But it doesn’t always happen. Only after a disaster occurs does one recognize the importance of preparing for these types of extreme events. There are, however, some individuals who spend a great deal of time and energy modeling natural disasters and enlightening others on ways in which their impact can be managed—catastrophe modelers.

Introduced in the mid-1980s, catastrophe models didn’t gain widespread attention until after Hurricane Andrew hit southern Florida in August 1992. Nine insurers became insolvent as a result of their losses from Hurricane Andrew. Insurers and reinsurers realized that, to reduce the likelihood of a severe loss relative to their surplus, they needed to estimate and manage their natural hazard risk more precisely. Many companies turned to the modelers of catastrophe risk for decision support.

Since then, the catastrophe modeling business has grown considerably. In 2006, catastrophe models are in widespread use throughout the insurance industry, assisting insurers, reinsurers, and other stakeholders in managing their risk from earthquakes and hurricanes in the United States and other regions of the world. In Catastrophe Modeling: A New Approach to Managing Risk, these models are discussed, as is their ability to evaluate alternative strategies for managing insured risk, such as risk transfer or portfolio diversification.

Components of Catastrophe Models

Catastrophe models identify and quantify the likelihood of occurrence of specific natural disasters in a region and estimate the extent of incurred losses. The four basic components of a catastrophe model are hazard, inventory, vulnerability, and loss, as depicted in the figure below.

First, the model characterizes the risk of the hazard phenomenon. In the case of a hurricane, the risk is characterized by storm tracks, landfall locations, and track angles at landfall. More sophisticated approaches, such as physically based numerical weather prediction models, can also be used.

Next, the model characterizes the inventory or portfolio of properties at risk as accurately as possible. Geographic coordinates such as latitude and longitude are assigned to a property based on its street address, ZIP code, or another location descriptor. In addition to its location, each property is characterized by attributes such as construction and occupancy type, building height, and age.

The hazard and inventory components enable the calculation of the vulnerability or damage susceptibility of the structures at risk. This step in the catastrophe model quantifies the physical impact of the natural hazard phenomenon on the property at risk. How this vulnerability is quantified differs from model to model. In most models, damage curves are constructed for the structure, its contents, and time element losses, such as business interruption loss or relocation expenses.

Finally, based on this measure of vulnerability, the loss to the inventory is evaluated. In a catastrophe model, loss is characterized as direct or indirect. Direct losses include the cost to repair and/or replace a structure. Indirect losses include business interruption impacts and relocation costs of residents forced to evacuate their homes. Once total losses are calculated, estimates of insured losses are computed by applying policy conditions (e.g., coverage deductibles and limits) to the total loss estimates.
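
To make the interaction of these four components concrete, the following minimal Python sketch traces a single property through the hazard, inventory, vulnerability, and loss steps. It isn’t drawn from any commercial model; the property attributes, the toy damage curve, and the policy terms are all illustrative assumptions.

```python
# Minimal sketch of the four catastrophe-model components for one property
# and one hazard event. All numbers and functional forms are illustrative
# assumptions, not any commercial model's actual logic.

from dataclasses import dataclass


@dataclass
class Property:
    """Inventory: location and characteristics of the insured structure."""
    latitude: float
    longitude: float
    construction: str         # e.g., "wood frame" or "masonry"
    replacement_value: float  # in dollars


def hazard_intensity(peak_gust_mph: float) -> float:
    """Hazard: intensity of the event at the site (here, simply the peak gust)."""
    return peak_gust_mph


def damage_ratio(intensity: float, construction: str) -> float:
    """Vulnerability: fraction of value damaged as a function of intensity.
    A toy damage curve; real models fit these to claims and engineering data."""
    threshold = 50.0 if construction == "wood frame" else 70.0
    return min(max((intensity - threshold) / 100.0, 0.0), 1.0)


def insured_loss(ground_up_loss: float, deductible: float, limit: float) -> float:
    """Loss: apply policy conditions (deductible and limit) to the total loss."""
    return min(max(ground_up_loss - deductible, 0.0), limit)


# Example: one home exposed to a 120-mph gust.
home = Property(25.8, -80.2, "wood frame", replacement_value=300_000)
ground_up = damage_ratio(hazard_intensity(120.0), home.construction) * home.replacement_value
print(insured_loss(ground_up, deductible=5_000, limit=250_000))  # 205000.0
```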

Exceedance Probability Curves

From the output of a catastrophe model, one can construct an exceedance probability (EP) curve that specifies the probability that a certain level of loss will be exceeded over a given time.

To illustrate with a specific example, suppose an insurer were interested in constructing an EP curve for its portfolio of insurance policies covering wind damage from hurricanes in a Southeastern U.S. coastal community. Using the output of a catastrophe model, the insurer would have a set of events that could produce a given dollar loss for the portfolio and the corresponding probabilities of exceedance. Based on these estimates, an EP curve is constructed, such as the one depicted in the figure above right. The x-axis measures the insurer’s loss in dollars, and the y-axis depicts the probability that losses will exceed a particular level. If the insurer focuses on a specific loss Li, one can see from the figure that the likelihood that insured losses exceed Li is given by pi.
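
To show roughly how such a curve can be assembled from a model’s output, the sketch below builds an EP curve from a hypothetical event loss table. The event rates and losses are invented for illustration, and event occurrences are assumed to be independent Poisson arrivals.

```python
# Sketch: build an exceedance probability (EP) curve from an event loss table
# (annual occurrence rate, portfolio loss). Rates and losses are invented.

import numpy as np

event_rates = np.array([0.020, 0.010, 0.005, 0.002, 0.001])   # events per year
event_losses = np.array([50e6, 120e6, 300e6, 700e6, 1.5e9])   # portfolio loss ($)

# Sort events from largest loss to smallest, then compute the annual probability
# that at least one event with loss >= x occurs, assuming independent Poisson
# occurrences: P(exceed x) = 1 - exp(-sum of rates of events with loss >= x).
order = np.argsort(event_losses)[::-1]
losses_desc = event_losses[order]
exceed_prob = 1.0 - np.exp(-np.cumsum(event_rates[order]))

# Probability that annual losses exceed a critical level L_i, e.g., $500 million.
L_i = 500e6
mask = losses_desc >= L_i
p_i = exceed_prob[mask].max() if mask.any() else 0.0
print(f"P(loss >= {L_i:,.0f}) is about {p_i:.4f}")   # about 0.0030
```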

The insurer can use this EP curve to manage its portfolio and determine the scope of coverage it offers to various properties in the region, given its current risk profile. More specifically, if the insurer wants to keep the probability of exceeding the critical loss level Li below pi, it will have to develop a strategy to reduce its risk. The insurer could reduce the number of policies in force, decide not to offer this type of coverage at all (if permitted by law to do so), or increase the capital available for dealing with future catastrophic events.

Beyond the insurance industry, an EP curve provides an important link between risk assessment by scientists and engineers and risk management by policy analysts and key decision-makers. For a given year, federal and state agencies can use an EP curve to estimate the likelihood that natural disaster losses to specific communities or regions of the country will exceed certain levels, and hence the likelihood that they will have to provide disaster assistance to the stricken areas.

As an example, at the start of the hurricane season, one could have used a catastrophe model to create an EP curve to estimate the likelihood of damage to the Southeastern U.S. coastal community exceeding $22 billion in 2004. This probability would have been extremely low, even though we now know that a confluence of events (i.e., Charley, Frances, Ivan, and Jeanne) was able to produce an outcome that exceeded this dollar value. ISO’s Property Claims Services unit calculated the industry total from these hurricanes as $22.8 billion, as of Dec. 1, 2005.

New Developments

As with any evolving field, there is room for growth. With each new natural disaster, lessons are learned and observations are made that allow catastrophe modelers to strive for the next generation in modeling.

LINKING INSURANCE WITH MITIGATION

Disaster insurance can facilitate cost-effective mitigation measures to reduce losses from future events if it’s linked with other private-public sector initiatives. Well-enforced building codes and land-use regulations that control development in hazard-prone areas become an important part of such a program. If some states and the federal government are providing protection against catastrophic losses, they can also require these risk-reducing measures as part of such a private-public partnership.

One way to encourage the adoption of cost-effective mitigation measures is to have banks provide long-term mitigation loans that could be tied to the property. The bank holding the mortgage on the property could offer a home improvement loan with a payback period identical to the life of the mortgage.

For example, a 20-year loan for $1,500 at an annual interest rate of 10 percent would result in payments of $145 per year. If the annual premium reduction due to the adoption of the mitigation measure is greater than $145 per year, an insured homeowner would have lower total payments by investing in mitigation. In order for such a program to achieve its desired impact, insurance premiums need to be risk-based using data from catastrophe models.

If the insurance premium reduction for undertaking the mitigation measure exceeds the annual home improvement loan payment, the homeowner will have lower total annual payments. The other interested parties will also be better off with the mitigation measure in place. The insurers will have a lower probability of experiencing catastrophic losses, and their costs of reinsurance should decrease; the financial institutions will have a more secure investment because of lower losses from disasters. The general taxpayer will incur lower costs of disaster assistance because of the reduction in losses from homeowners adopting mitigation measures.
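
The comparison reduces to a single check: does the risk-based premium reduction exceed the annual loan payment? The sketch below uses a standard level-payment annuity formula; the exact payment depends on the amortization convention assumed here, and the premium-reduction figure is a hypothetical input.

```python
# Sketch: compare the annual payment on a long-term mitigation loan with the
# annual insurance premium reduction the mitigation earns. Inputs are illustrative.

def annual_loan_payment(principal: float, annual_rate: float, years: int) -> float:
    """Level annual payment on a fully amortizing loan (standard annuity formula)."""
    if annual_rate == 0.0:
        return principal / years
    return principal * annual_rate / (1.0 - (1.0 + annual_rate) ** -years)


def mitigation_lowers_total_payments(principal: float, annual_rate: float,
                                     years: int, premium_reduction: float) -> bool:
    """True if the annual premium reduction exceeds the annual loan payment."""
    return premium_reduction > annual_loan_payment(principal, annual_rate, years)


payment = annual_loan_payment(1_500.0, 0.10, 20)
print(f"Annual loan payment: {payment:.0f}")
print("Worth mitigating:",
      mitigation_lowers_total_payments(1_500.0, 0.10, 20, premium_reduction=200.0))
```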

The 2004 and 2005 U.S. hurricane seasons were no exception to the rule that each new disaster teaches modelers new lessons. Hurricanes Charley, Frances, Ivan, and Jeanne in 2004, and especially Hurricane Katrina in 2005, have had a major impact on catastrophe modeling, not just on how observations from these events are incorporated into existing methodologies but on how models are viewed and used by the insurance industry. Insurers are taking a closer look at their own underwriting practices, adopting a more comprehensive approach to understanding risk, and using a more dynamic approach to managing catastrophe risk.

First, there is an intense focus on the underwriting process. As discussed in Chapter 6, “Insurance Portfolio Management,” in Catastrophe Modeling: A New Approach to Managing Risk, catastrophe modeling is a valuable tool for underwriting and pricing decisions. By quantifying risk in the form of EP curves, the impact of adding another policy to a portfolio becomes transparent.

Consider an underwriter who needs to decide whether to add a homeowners policy to the current portfolio in the Miami/Dade County area of Florida. Currently, the portfolio risk is below the critical level Li on the portfolio’s EP curve. Given the risk of hurricane wind damage in the area, if insuring another home would push the likelihood of losses exceeding the critical level Li above an acceptable probability, the underwriter will decide not to insure the home or expand coverage in this part of Florida.
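
A hedged sketch of that underwriting rule follows: write the additional policy only if the estimated probability of portfolio losses exceeding the critical level Li stays within the tolerated probability pi. The simulated annual losses, the critical level, and the tolerance are placeholder assumptions.

```python
# Sketch of the underwriting rule: write the new policy only if the portfolio's
# probability of exceeding the critical loss level L_i stays at or below p_i.
# The simulated losses and thresholds are placeholder assumptions.

import numpy as np

rng = np.random.default_rng(0)

def prob_exceeding(annual_losses: np.ndarray, critical_loss: float) -> float:
    """Estimated annual probability that portfolio losses exceed critical_loss."""
    return float(np.mean(annual_losses > critical_loss))

# Simulated annual hurricane losses for the current portfolio and for the
# portfolio with one additional Miami-area policy (purely illustrative numbers).
current = rng.lognormal(mean=15.0, sigma=1.5, size=100_000)
with_new_policy = current * 1.03   # the extra policy adds about 3% to each year's loss

L_i = 50e6   # critical loss level ($)
p_i = 0.05   # tolerated exceedance probability

accept = prob_exceeding(with_new_policy, L_i) <= p_i
print("Write the new policy:", accept)
```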

The underwriter can also use a catastrophe model to price risks or to decide that certain types of structures, such as homes with inadequate roof anchorage, are ineligible for coverage. By estimating potential losses and their variability, catastrophe models provide a means to determine the appropriate actuarial premium for a particular insurance policy. Given its concern with aggregate losses, the insurer can examine the impact on portfolio losses of varying the deductibles and coverage limits on insurance contracts.

With the staggering amount of claims from the 2004 and 2005 U.S. hurricane seasons, insurers have begun to take a closer look at their underwriting process and are questioning the input data used in catastrophe models. For any model, recognizing the importance of input data is essential. For the inventory component of a catastrophe model, the “garbage in, garbage out” principle holds irrespective of how advanced or state-of-the-art a model may be. Partial information on a structure’s characteristics can result in an inaccurate measure of risk.

For example, is a residential structure coded as masonry when in fact it’s wood frame? Is a commercial structure in fact a petrochemical refinery when it’s coded as a chemical processing plant? Are the structures’ and contents’ values underestimated? This type of misinformation in the underwriting process results in inaccurate measures of risk.

Second, insurers are taking a more comprehensive approach to understanding catastrophe risk and the role of catastrophe models in their risk management process. There is a growing appreciation for the limitations of models. The science and impacts of natural hazards aren’t completely understood, and this incomplete understanding leads to uncertainty in estimating catastrophe risk.

Chapter 4, on “Sources, Nature and Impact of Uncertainties in Catastrophe Modeling,” in Catastrophe Modeling: A New Approach to Managing Risk examines the complexity in the catastrophe modeling process and provides a detailed analysis of the impact of uncertainty on EP curves. A case study of the earthquake hazard facing Charleston, S.C., illustrates how one can use competing catastrophe models for developing bounds on risk.

A similar analysis is undertaken with respect to hurricane risk in Florida by examining each catastrophe modeler’s submission to the Florida Commission on Hurricane Loss Projection Methodology, which certifies models for residential insurance ratemaking. By recognizing the uncertainty associated with catastrophe models, insurers are more disciplined about managing their exposure and are beginning to take steps to protect themselves against losses they may not have foreseen.

Insurers and reinsurers are also paying more attention to the scope of modeled losses. A loss estimate from a catastrophe model often includes structural damage, contents damage, and other time-based impacts from a disaster, such as additional living expenses or business interruption. However, catastrophe models often don’t explicitly model losses from the impact of an offsite power failure or mold, as seen in Hurricane Katrina.

More specifically, the catastrophe models’ estimates of losses from Hurricane Katrina included direct wind damage and direct storm surge damage but not mold infestation due to the levee failures in New Orleans. As insurers and reinsurers become more sophisticated in their use of models, there is increasing pressure on the modeling firms to develop more complete models that calculate all possible direct and indirect losses from a disaster.

Finally, as the users of catastrophe models become more educated in their understanding of the scope of modeled losses, they’re being proactive in managing their catastrophic risk potential. They’re using a more dynamic approach to managing their catastrophe risk, striving to understand whether and how risk can change over time. Time-dependent rates of earthquake occurrence are scrutinized within catastrophe models, as they can increase or decrease estimated risk depending on when the last major earthquake occurred.

Additionally, the idea of risk changing over time is particularly important for hurricane risk in the Atlantic Basin, given the increase in hurricane numbers and intensities over the past 30 years and the potential of this trend to continue over the next five years or more.

Conclusions

For more than a decade, catastrophe models have helped to advance the culture of risk management within the insurance industry. Models are powerful tools for assessing risk at a site-specific and portfolio level and for evaluating risk management strategies.

Hurricane Katrina and the 2004 and 2005 U.S. hurricane seasons have stimulated a dialogue between the modeling firms and those who use the models. As a result, future models are incorporating more complete estimates of both direct and indirect losses, and users are taking a more comprehensive and dynamic approach to understanding and managing risk.


PATRICIA GROSSI is the manager for earthquake modeling at Risk Management Solutions.

HOWARD KUNREUTHER is the Cecilia Yen Koo Professor of Decision Sciences and Public Policy and co-director of the Risk Management and Decision Processes Center at the Wharton School, University of Pennsylvania.

 



March/April 2006
