Which is worse when it comes to policy pricing: underestimating risk, or overestimating it?

There’s a lot of data available to help get it right, but only if you can master it.

October 31, 2017

Our clients tell us that when it comes to P&C policy pricing, there are two key things they want to avoid: underestimating the risks associated with a property — and overlooking a low-risk property in a high-risk area. The first can result in unacceptable losses, the second in lost opportunities for profit.

Insurers know the answers to both problems lie somewhere in the vast amount of location-based, risk- and property-related data they use to support underwriting and pricing. By this, I mean property attributes, crime incidence, distance from shoreline, location and staffing of fire stations, presence in a flood or earthquake zone, history of tornadoes or hurricanes, wildfire risk and the like.

Insurers have long used broad classifications like ZIP codes to assign risk scores to a property. The technology available today can do much better than that: you can use individual addresses and geocoding, for example, to build a much finer risk profile. But this approach has issues. It's difficult to get addresses right, and harder still to have them recorded consistently across an organization. Is it avenue or street? East or west, or neither?
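To make the consistency problem concrete, here is a minimal sketch of address normalization. The abbreviation tables and the `normalize` function are illustrative only; production address cleansing relies on the full USPS standardization tables plus validation against a reference database.

```python
# Minimal address-normalization sketch (hypothetical, not production cleansing).
# Real tools use the complete USPS suffix/directional tables and validate
# each address against an authoritative reference file.
SUFFIXES = {"AVENUE": "AVE", "AV": "AVE", "STREET": "ST", "BOULEVARD": "BLVD"}
DIRECTIONS = {"EAST": "E", "WEST": "W", "NORTH": "N", "SOUTH": "S"}

def normalize(address: str) -> str:
    """Uppercase, strip punctuation, and apply standard abbreviations."""
    tokens = address.upper().replace(".", "").replace(",", " ").split()
    return " ".join(DIRECTIONS.get(t, SUFFIXES.get(t, t)) for t in tokens)

# Two inconsistent records of the same property collapse to one form:
normalize("12 East Shore Avenue")  # → '12 E SHORE AVE'
normalize("12 E. Shore Ave.")      # → '12 E SHORE AVE'
```

Once two differently keyed records normalize to the same string, they can be recognized as the same property instead of being scored twice, or differently.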

Another issue is inaccurate geocoding, where even a small location discrepancy can dramatically change a property’s risk profile. For example, the geocode you’re using for a waterfront property is the latitude/longitude for the end of the driveway, which is 500 feet from the shoreline. The house itself is set back 450 feet from the roadway, putting it just 50 feet from the water. Big difference in flood risk.
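The driveway-versus-rooftop scenario above can be put in numbers. This sketch uses the standard haversine great-circle formula and made-up coordinates (the points and the 100-foot risk threshold are hypothetical) to show how a ~450-foot geocode offset flips the flood-risk classification.

```python
import math

def haversine_ft(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in feet."""
    earth_radius_ft = 20_902_231  # mean Earth radius (~6,371 km) in feet
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_ft * math.asin(math.sqrt(a))

# Hypothetical points: the shoreline, the rooftop ~50 ft inland, and a
# driveway-end geocode ~500 ft inland (1 degree latitude ≈ 364,000 ft).
shore    = (29.0, -90.0)
rooftop  = (29.0 + 50 / 364_000, -90.0)
driveway = (29.0 + 500 / 364_000, -90.0)

HIGH_RISK_FT = 100  # hypothetical "high flood risk" distance threshold
for label, point in [("rooftop", rooftop), ("driveway geocode", driveway)]:
    dist = haversine_ft(*shore, *point)
    print(f"{label}: {dist:.0f} ft from shore, high risk: {dist <= HIGH_RISK_FT}")
```

Same property, two geocodes: the rooftop lands inside the high-risk band while the driveway geocode does not, so the price set from the driveway point understates the exposure.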

When you compound these small errors across dozens of data sets, you can quickly get to the point where up to 40 percent of your records include faulty data. That can put your pricing strategies at risk.

Location Master Data Management offers a new, very different approach that can dramatically increase the precision of your risk assessments while simplifying how you take advantage of the masses of data available today. It’s founded on three principles, the first of which is to start with better data. You need to combine address cleansing with precise geocoding to identify the exact location of any asset or boundary.

The second principle is to apply a unique and persistent ID to every property. Think of it as a Social Security number for a property: the street name can change, or the ZIP code, or even the name of the municipality, but the unique, persistent ID always stays the same.
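A toy registry shows the idea. The ID scheme here (a UUID assigned at registration) and the record fields are assumptions for illustration; commercial location master data products use their own ID formats.

```python
import uuid

# Hypothetical property registry: the persistent ID, not the address, is the key.
registry = {}

def register_property(address, zip_code):
    """Assign a property its permanent ID exactly once."""
    prop_id = str(uuid.uuid4())  # issued at first registration, never reused
    registry[prop_id] = {"address": address, "zip": zip_code}
    return prop_id

def update_address(prop_id, new_address=None, new_zip=None):
    """Address attributes may change over time; the ID itself never does."""
    record = registry[prop_id]
    if new_address:
        record["address"] = new_address
    if new_zip:
        record["zip"] = new_zip

pid = register_property("12 Shore Ave E", "02540")
update_address(pid, new_address="12 Shoreline Blvd", new_zip="02541")
# pid is unchanged, so any risk data keyed to pid still resolves
# to the same physical property after the rename.
```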

With the first two principles in hand, you can execute the third: link that unique ID across every data set you use to evaluate risk. One ID, hundreds of data attributes, from multiple data sources. Without an approach like Location Master Data Management, bringing all that data together so it can be readily analyzed remains a challenge, one that can result in costly losses.
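The linking step is, mechanically, a join on the persistent ID. This sketch merges a few hypothetical attribute tables (the IDs, vendors and fields are invented) into one risk profile per property.

```python
# Hypothetical attribute tables from different data vendors,
# all keyed by the same persistent property ID.
flood = {"P-1001": {"flood_zone": "AE"}, "P-1002": {"flood_zone": "X"}}
crime = {"P-1001": {"crime_index": 3.2}, "P-1002": {"crime_index": 1.1}}
fire  = {"P-1001": {"fire_station_miles": 0.8}}

def link_by_id(*tables):
    """Merge attribute tables into a single record per property ID."""
    merged = {}
    for table in tables:
        for prop_id, attrs in table.items():
            merged.setdefault(prop_id, {}).update(attrs)
    return merged

profiles = link_by_id(flood, crime, fire)
# profiles["P-1001"] → {'flood_zone': 'AE', 'crime_index': 3.2, 'fire_station_miles': 0.8}
```

Because every source keys on the same ID, adding a new vendor's data set is one more argument to the join rather than another round of address matching.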

Learn more about how location precision and Location Master Data Management strategies are helping insurers understand exposures, reduce risk and increase profitability. Read Mastering Location Data: Close, But Not Quite There from Harvard Business Review Analytic Services.
