The use of data to model reinstatement costs for insurance is increasingly common, but what are the pitfalls of this approach?
While increased computing power and access to large volumes of location data are adding sophistication to these analytics tools, one key issue we regularly see with these models is their underlying assumptions. In our experience, the assumptions in these models are rarely consistent with the insurance policy terms, or the circumstances, of the policyholder.
Often these insurance valuation models exclude or ignore:
- heritage aspects, e.g. listed building status;
- demolition and debris removal, or, where included, the potentially high cost of shoring up adjoining properties in an urban location;
- architects', engineers' and other professional fees;
- interior fit-out works, e.g. internal partitioning;
- building services, e.g. whether the property is sprinklered;
- site access issues;
- asbestos or other hazardous material removal;
- piled or unusual foundations; and
- cost escalation during the policy term or during reconstruction.
By omitting these essential elements, or failing to reflect the specific policy terms, these models may give brokers and policyholders false assurance that the outputs are suitable as declared values, potentially leaving them severely exposed.
Does the output match the subject assets and, more importantly, the insurance policy terms? If not, you may need to adjust the figures produced by these models to ensure that you arrive at the correct declared values.
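As an illustration only, such an adjustment might take a modelled base figure and apply uplifts for commonly excluded items. The percentages below are purely hypothetical assumptions chosen for the sketch, not market rates or professional advice:

```python
# Illustrative sketch only: the base figure and all percentage allowances
# below are hypothetical assumptions, not market rates or valuation advice.

def adjusted_declared_value(modelled_base: float,
                            professional_fees_pct: float = 0.12,
                            demolition_pct: float = 0.05,
                            escalation_pct: float = 0.06) -> float:
    """Uplift a modelled reinstatement figure for items the model may exclude."""
    fees = modelled_base * professional_fees_pct    # architects, engineers, etc.
    demolition = modelled_base * demolition_pct     # demolition and debris removal
    subtotal = modelled_base + fees + demolition
    # Allow for cost escalation during the policy term and reconstruction.
    return subtotal * (1 + escalation_pct)

value = adjusted_declared_value(1_000_000)
print(round(value))  # → 1240200
```

In practice each allowance would be assessed against the specific property and policy terms, not applied as a flat percentage.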
The use of data to model reinstatement costs for insurance offers insurers, brokers and policyholders the ability to review large portfolios consistently and quickly. It is particularly useful for identifying outliers and unique locations across large numbers of sites.
However, if you are establishing the reinstatement cost of a specific location or locations, and are presented with an ‘answer’ based on modelling or data analysis, always read the small print!