How To Avoid a Catastrophe Model Failure

This piece is the third in a BRINK series exploring the global implications of hurricane season. You can find the rest of the series here and here.

Since commercial catastrophe models were introduced in the 1980s, they have become an integral part of the global reinsurance industry.

Underwriters depend on them to price risk; management uses them to set business strategies; and rating agencies and regulators consider them in their analyses. Yet new scientific discoveries and claims insights regularly reshape our view of risk, and a customized model that is fit-for-purpose one day can quickly become obsolete if it is not promptly updated for changing business practices and for advances in our understanding of natural and man-made events.

Surprise Impacts

Despite the sophisticated nature of each new generation of models, new events sometimes expose previously hidden attributes of a particular peril or region.

In 2005, Hurricane Katrina caused economic and insured losses in New Orleans far greater than expected because models did not consider the possibility of the city’s levees failing. In 2011, a previously unknown fault beneath Christchurch and the fact that the city sits on an alluvial plain of damp soil produced unexpected liquefaction in the New Zealand earthquake. And in 2012, Superstorm Sandy exposed the vulnerability of New York City’s underground garages and electrical infrastructure to storm surge, a secondary peril in wind models whose pre-Sandy event sets did not account for the placement of these risks.

Such surprises impact the bottom lines of reinsurers, who price risk largely based on the losses and volatility suggested by the thousands of simulated events analyzed by a model.
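To make that pricing dependence concrete, here is a minimal sketch in Python, using an invented year loss table, of how the losses and volatility implied by a model’s simulated years might be summarized. The figures and table structure are illustrative assumptions, not the output of any particular vendor model.

```python
import statistics

# Hypothetical year loss table: total modeled loss (in millions) for each
# simulated year in a catastrophe model's event set (illustrative values).
year_loss_table = [0.0, 12.5, 0.0, 3.1, 87.0, 0.0, 5.4, 0.0, 41.2, 0.0]

# Average annual loss: the mean across simulated years, a common starting
# point for technical pricing.
average_annual_loss = statistics.mean(year_loss_table)

# Volatility: the standard deviation of annual losses, often used to load
# the price for uncertainty around that mean.
volatility = statistics.stdev(year_loss_table)

print(f"Average annual loss: {average_annual_loss:.1f}m")
print(f"Annual loss volatility: {volatility:.1f}m")
```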

Resilient Modeling

However, there is a silver lining for reinsurers.

These events advance modeling capabilities by improving our understanding of the peril’s physics and damage potential. Users can then incorporate such advances themselves, along with new technologies and best practices for model management, to keep their company’s views of risk current—even if the vendor has not yet released its own updated version—and validate enterprise risk management decisions to important stakeholders.

When creating a resilient internal modeling strategy, reinsurers must weigh cost, data security, ease of use and dependability.

Reconciling any material differences in hazard assumptions or modeled losses, and complementing a core commercial model with regulators’ standard formulas and in-house data and analytics, can help companies of all sizes manage resources. This approach also protects sensitive information, allows access to the latest technology and support networks, and mitigates the impact of a crisis on vital assets, all while developing a unique risk profile.

Value of Customization

To the extent resources allow, reinsurers should analyze several macro- and micro-level considerations when evaluating the merits of a given platform.

On the macro level, unless a company’s underwriting and claims data dominated the vendor’s development methodology, customization is almost always desirable, especially at the bottom of the loss curve, where there is more claims data. If a large insurer with robust exposure and claims data is heavily involved in the vendor’s product development, the model’s vulnerability assumptions and loss payout and development patterns will likely mirror those of the company itself, so less customization is necessary.

Either way, users should validate modeled losses against historical claims from both their own company’s and the industry’s perspective, adjusting for inflation, exposure changes and non-modeled perils, to confirm that the return periods implied by portfolio and industry occurrence and aggregate exceedance-probability curves are reasonable.
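As a rough sketch of that validation step, assuming a hypothetical ten-year loss history and invented inflation and exposure factors, the Python below restates historical annual losses on current terms and reads off empirical return-period losses that can then be compared with the modeled occurrence exceedance-probability curve.

```python
# Hypothetical annual historical losses (in millions) and the factors used
# to restate them on today's terms: inflation and exposure growth.
history = [
    # (year, reported_loss, inflation_factor, exposure_factor)
    (2008, 14.0, 1.30, 1.45),
    (2009,  2.5, 1.27, 1.40),
    (2010,  0.0, 1.24, 1.36),
    (2011, 55.0, 1.20, 1.30),
    (2012,  8.0, 1.17, 1.25),
    (2013,  0.0, 1.14, 1.20),
    (2014, 21.0, 1.10, 1.14),
    (2015,  4.0, 1.07, 1.08),
    (2016,  0.0, 1.04, 1.04),
    (2017, 90.0, 1.00, 1.00),
]

# Restate each year's loss to current cost and exposure levels.
adjusted = sorted(
    (loss * infl * expo for _, loss, infl, expo in history),
    reverse=True,
)

n = len(adjusted)
for rank, loss in enumerate(adjusted, start=1):
    # Empirical return period: with n years of history, the k-th largest
    # annual loss is exceeded roughly once every n / k years.
    return_period = n / rank
    print(f"~1-in-{return_period:.0f}-year historical loss: {loss:.0f}m")

# These empirical points can then be checked against the modeled occurrence
# exceedance-probability curve at the same return periods.
```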

Without this important step, insurers may find their modeled loss curves differ materially from observed historical results, as illustrated below.

The Micro-Level

A micro-level review of a model’s assumptions and shortcomings can further reduce the odds of a “shock” loss. It is therefore critical to identify each risk’s physical location and characteristics precisely, as loss estimates may vary widely within a short distance, especially for flood risk, where elevation is an important factor.

When a model’s geocoding engine or a national address database cannot assign a location, several disaggregation methodologies are available, but each produces different loss estimates. European companies will need to be particularly careful about data quality and integrity as the new General Data Protection Regulation takes effect, which may mean less specific location data is collected.
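For illustration, here is a simple sketch of one possible disaggregation approach, with invented zones and weights: exposure that cannot be geocoded to an address is spread across candidate zones in proportion to an assumed weight, and the choice of weighting scheme directly affects the modeled loss. Actual vendor methodologies differ.

```python
# A risk whose address could not be geocoded, known only to sit somewhere
# within a postal district (all values hypothetical).
unlocated_exposure = 10_000_000  # total insured value

# Candidate zones within the district and an assumed disaggregation weight
# (e.g., share of building stock). Weights must sum to 1.0.
zone_weights = {
    "zone_riverside": 0.20,   # low-lying, higher flood hazard
    "zone_hillside":  0.50,
    "zone_centre":    0.30,
}

# Spread the exposure across zones in proportion to the weights. Because
# hazard (especially flood, where elevation matters) differs by zone, the
# weighting scheme chosen changes the modeled loss.
disaggregated = {
    zone: unlocated_exposure * weight
    for zone, weight in zone_weights.items()
}

for zone, value in disaggregated.items():
    print(f"{zone}: {value:,.0f}")
```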

Year of Construction Matters

Equally important as location are a risk’s physical characteristics, as a model will estimate a range of possibilities when this information is missing. If the model’s assumption regarding year of construction, for example, differs materially from the insurer’s actual distribution, modeled losses for risks with unknown construction years may be under- or overestimated.

The exhibit below illustrates the difference between an insurer’s actual data and a model’s assumed year of construction distribution based on regional census data in Portugal. In this case, the model assumes an older distribution than the actual data shows, so losses on risks with unknown construction years may be overstated.
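A minimal sketch of that comparison, using hypothetical distributions and vulnerability factors rather than the Portuguese data in the exhibit, shows how an older assumed year-of-construction mix can translate into overstated losses for unknown-year risks.

```python
# Hypothetical year-of-construction distributions (shares of total insured
# value). "Model" is what the model assumes for unknown-year risks, e.g.
# derived from regional census data; "actual" is the insurer's own book.
model_assumed = {"pre-1960": 0.40, "1960-1990": 0.40, "post-1990": 0.20}
insurer_actual = {"pre-1960": 0.15, "1960-1990": 0.45, "post-1990": 0.40}

# Assumed relative vulnerability by construction era: older construction is
# treated as more damageable, so an older mix inflates modeled losses.
relative_vulnerability = {"pre-1960": 1.5, "1960-1990": 1.0, "post-1990": 0.7}

def weighted_vulnerability(distribution):
    """Average vulnerability implied by a year-of-construction mix."""
    return sum(share * relative_vulnerability[era]
               for era, share in distribution.items())

model_v = weighted_vulnerability(model_assumed)
actual_v = weighted_vulnerability(insurer_actual)

# If the model assumes an older (more vulnerable) mix than the actual book,
# losses on unknown-year risks will tend to be overstated by roughly this margin.
print(f"Model-implied vulnerability is {model_v / actual_v - 1:.0%} higher "
      f"than the insurer's actual mix suggests.")
```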

There is also no database of agreed property, contents or business interruption valuations, so if a model’s assumed valuations are under- or overstated, the damage function may be inflated or diminished to balance to historical industry losses.

Missing Components

Finally, companies must also adjust “off-the-shelf” models for missing components. Examples include overlooked exposures such as a detached garage; new underwriting guidelines, policy wordings or regulations; or the treatment of sub-perils, such as a tsunami resulting from an earthquake. Loss adjustment difficulties are also not always adequately addressed in models. Loss leakage—such as when adjusters cannot separate covered wind loss from excluded storm surge loss—can inflate results, and complex events can drive higher labor and material costs or unusual delays.
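As an illustrative sketch only, with invented load factors, one simple way to handle such gaps is to apply explicit multiplicative loads to the modeled loss for non-modeled sub-perils, loss leakage and demand surge, so that each adjustment is visible and can be revisited as claims experience emerges.

```python
# Hypothetical modeled gross loss for an event (in millions).
modeled_loss = 120.0

# Illustrative multiplicative loads for components an off-the-shelf model
# may not capture; every factor here is an assumption to be justified
# against the company's own experience.
loads = {
    "non_modeled_sub_perils": 1.05,  # e.g., tsunami following earthquake
    "loss_leakage":           1.08,  # wind vs. excluded surge not separable
    "demand_surge":           1.10,  # higher labor/material costs, delays
}

# Apply the loads in sequence to arrive at an adjusted loss estimate.
adjusted_loss = modeled_loss
for component, factor in loads.items():
    adjusted_loss *= factor
    print(f"After {component}: {adjusted_loss:.1f}m")
```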

Users must also consider the cascading impact of failed risk mitigation measures, such as the malfunction of cooling generators in the Fukushima Nuclear Power Plant after the Tohoku earthquake.

If an insurer performs regular, macro-level analyses of its model, validating estimated losses against historical experience and new views of risk, while also supplementing missing or inadequate micro-level components appropriately, it can construct a more resilient modeling strategy that minimizes the possibility of model failure and maximizes opportunities for profitable growth.

The views expressed herein are solely those of the author and do not reflect the views of Guy Carpenter & Company, LLC, its officers, managers, or employees.

Imelda Powers

Global Chief Catastrophe Modeler at Guy Carpenter & Company, LLC

Imelda Powers is the global chief catastrophe modeler at Guy Carpenter & Company, LLC.
