Hyper-accurate insurance data can improve insurers’ ability to assess risk.
The ability of an insurer to accurately price its premiums is critical to sustainable, long-term profitability. And accurate pricing is dependent on precise and comprehensive location data. Each iterative improvement in the accuracy of location data—from zip codes to parcel-level, and increasingly to building footprint-level data—enables insurers to more precisely factor risk.
What does this mean for your bottom line?
Two recent studies by Perr&Knight, commissioned by Pitney Bowes, quantified the impact of these data differences. The firm looked at typical auto and homeowner insurance policies to compare how premiums would be priced using commonly available vendor data sets, such as zip codes and street segments, versus more precise parcel-level data.
When pricing is inaccurate, insurers bear significant costs
The research found that although most policies would experience no change in pricing, in the 5 to 10 percent of cases that would be affected, the range of under- or overpricing in premiums was significant. The differences are based on comparisons to Pitney Bowes’ more precisely geocoded wildfire risk location data.
Some examples include:
- Some homeowner policies were underpriced by as much as 86.7 percent, or $2,800 annually per policy.
- The state of Florida alone has experienced more than $100 million in lost premiums.
- One top 10 US insurer, using a sample of 100 properties affected by the 2017 California wildfires, found that only 3 percent had previously been identified as high risk from wildfire loss based on zip code data. The company paid out $100 million in claims from a single wildfire event because it did not have good data.
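The scale of these figures can be sketched with some back-of-the-envelope arithmetic. In the sketch below, the affected-policy rate mirrors the 5 to 10 percent range cited above, while the book size and average premium shortfall are invented purely for illustration:

```python
# Hypothetical sketch: estimating annual premium left on the table when a
# fraction of a book of policies is underpriced. The 5% affected rate echoes
# the study's range; the book size and $500 average shortfall are assumptions.

def lost_premium(book_size: int, affected_rate: float, avg_shortfall: float) -> float:
    """Annual premium shortfall across an entire book of policies."""
    return book_size * affected_rate * avg_shortfall

# A 1,000,000-policy book, 5% of policies mispriced, $500 average shortfall:
print(f"${lost_premium(1_000_000, 0.05, 500.0):,.0f}")  # $25,000,000
```

Even a modest per-policy shortfall compounds quickly at portfolio scale, which is why small gains in location precision can translate into material revenue differences.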
Adopting a best practice approach to location data
It only takes small improvements to see a big impact. Key elements to developing hyper-accurate data include:
- Curate data sources: Not all data sources are created equal. Some suppliers may have more authority around a particular data set, while other data sets may be purpose-built for certain uses. Dan Adams, vice president of data product management at Pitney Bowes, recommends using more authoritative sources in order to be more confident about the data quality and the decisions made based on the data sets. Outsourcing to a vendor that specializes in these activities is an alternative to expending in-house resources.
- Ensure interoperability: When selecting data sets, it’s important to “make sure everything—when you bring the two data sets together—behaves the way it should, and that they’re both accurate and precise in relation to each other,” says Adams.
- Regularly monitor and maintain: Data evolves, and with it, quality. Plan for bringing new data sets online, improving the underlying quality of existing data sets, investing to expand coverage territories, and understanding how often new or changed data will be incorporated into your database.
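The interoperability point above can be made concrete with a small lookup sketch. Everything here is invented for illustration (the zip code, parcel identifiers, and risk labels are not from any real data set); the point is that parcel-level data distinguishes properties that zip-level data treats identically, and that a combined lookup should fall back gracefully when the more precise source has no match:

```python
# Hypothetical sketch of the precision gap between zip-level and parcel-level
# wildfire risk lookups. All identifiers and risk labels are invented.

zip_risk = {"95405": "moderate"}  # one score for every property in the zip

parcel_risk = {  # parcel-level data distinguishes properties within a zip
    ("95405", "APN-001"): "low",
    ("95405", "APN-002"): "extreme",  # e.g., adjacent to wildland vegetation
}

def risk_for(zip_code: str, parcel_id: str) -> str:
    # Prefer the more precise source; fall back to zip-level when absent.
    return parcel_risk.get((zip_code, parcel_id), zip_risk.get(zip_code, "unknown"))

print(risk_for("95405", "APN-002"))  # extreme
print(risk_for("95405", "APN-999"))  # moderate (zip-level fallback)
```

A zip-only lookup would price both parcels as "moderate," which is exactly the kind of mispricing the research above quantifies.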
By adopting these guidelines, insurers can see significant improvements in the quality of their location data.
To learn more about how precise geocoded location data can benefit you, read Pinpointing the Issue: Why Hyper-Accurate Location Data Can’t be Overlooked in Insurance, a Forbes Insights report sponsored by Pitney Bowes.