I never properly understood economics until I stumbled upon behavioral economics. Economics should help us make sense of how decisions are made: early on, I sensed that things don't work in the abstract terms of utility maximization, well-defined preferences and perfectly available information. Kahneman and Tversky have aptly depicted the real world of decision making as one of snap judgments and miscalibrated confidence, driven by heuristics and relative utility. In one sentence: a world of messy and imperfect decisions.
Yet in insurance we seem to ignore it. We talk about customer preferences as if customers had a perfect understanding of their actual risk; we divide customers into the more and the less risk averse, as if risk aversion could be consciously and objectively measured on a scale.
It does not work that way. For example, people judge risks through a complex interplay of the availability heuristic (how easily they can recall an example of an accident) and the affect heuristic (how emotionally charged that example is). The topic has been explored academically and is now gaining practical momentum, with reinsurers paving the way (e.g. SwissRe and GenRe).
In commercial insurance we somehow expect our underwriters to be superhuman, supra-emotional agents, who make perfectly rational decisions in a world of perfectly available information. Well, it turns out underwriters are human too.
The Hartford has studied the impact of behavioral economics on underwriting and concluded that commercial underwriters are prone to four types of bias: obedience to authority, loss aversion, outcome bias (overweighting past outcomes as justification for decisions, which can lead to a restricted risk appetite), and snap and stick (quick judgments that can narrow underwriting opportunities).
A study from the University of Oxford outlined additional insights: underwriters seem to rely on modeled output less than one would think, tend to ignore market price when estimating risk, tend to avoid anchoring when they have sufficient evidence on which to base their decisions, and are loss averse, which means that, in the absence of models, they tend to overvalue the impact of past losses on risk assessment.
So, what can we do about it? The first step would be to train underwriters. Training, however, is generic and does not necessarily fit the individual or the context in which transactions happen.
This is where I believe technology can help. Thaler and Sunstein defined the concept of choice architecture as a targeted system of nudges designed to help improve decisions. What if we were to leverage technology to create a personalized underwriting choice architecture? Commercial underwriters make decisions at the intersection of portfolio information, personal judgment on risks and relationships with brokers: what if we kept track of the patterns of individual decisions, understood how they deviate from ideal or successful decisions, and nudged their choices? This could take the shape of a second opinion on risk selection, a post-mortem assessment of specific losses, or self-learning personalized reports outlining recurrent judgment biases and improvement opportunities.
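To make the idea a bit more concrete, here is a minimal sketch of what such a nudge could look like in code. Everything in it is hypothetical (the `Decision` record, the fields, the threshold): it simply compares an underwriter's quoted premiums with a modeled benchmark and flags one possible pattern, a systematically higher loading on accounts with a recent loss, which could hint at loss aversion or outcome bias.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical, simplified record of one underwriting decision: the model's
# suggested premium, what the underwriter actually quoted, and whether the
# account had a recent loss on file.
@dataclass
class Decision:
    account_id: str
    model_premium: float
    quoted_premium: float
    recent_loss: bool

def bias_report(decisions: list[Decision]) -> dict:
    """Compare quoted vs. modeled premiums and flag a possible loss-aversion
    pattern: higher loadings on accounts with a recent loss than on
    comparable loss-free accounts."""
    def avg_loading(subset: list[Decision]) -> float:
        return mean(d.quoted_premium / d.model_premium - 1 for d in subset) if subset else 0.0

    with_loss = [d for d in decisions if d.recent_loss]
    without_loss = [d for d in decisions if not d.recent_loss]
    gap = avg_loading(with_loss) - avg_loading(without_loss)
    return {
        "avg_loading_recent_loss": round(avg_loading(with_loss), 3),
        "avg_loading_no_loss": round(avg_loading(without_loss), 3),
        "possible_loss_aversion": gap > 0.05,  # illustrative threshold, not calibrated
    }

# Toy usage: the nudge could surface this report as a "second opinion"
# before the underwriter releases the next quote.
history = [
    Decision("A1", 10_000, 12_500, recent_loss=True),
    Decision("A2", 8_000, 8_200, recent_loss=False),
    Decision("A3", 15_000, 18_000, recent_loss=True),
    Decision("A4", 9_000, 9_100, recent_loss=False),
]
print(bias_report(history))
```

A real choice architecture would of course look at many more signals and decisions, but the principle is the same: mirror the underwriter's own patterns back to them, in context, at the moment of decision.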
Artificial intelligence can also be used to recommend how a carrier's value proposition can help win an account, perhaps correcting personal biases, or to bring context to the overall risk strategy of an account, for example by outlining how similar accounts are structured. The core value technology brings is its ability to be tailored to the specific underwriter and transaction, to be contextual within the process, and to provide both backward-looking assessment and forward-looking recommendations.
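The "similar accounts" idea can also be sketched very simply. The example below is again purely illustrative (the `Account` fields and the similarity rule are assumptions): it ranks a book of business by a naive similarity to the account being quoted and shows how the closest peers were structured.

```python
from dataclasses import dataclass

# Hypothetical account profile: a few features plus a short description
# of how its programme was structured (limits, layers, deductibles).
@dataclass
class Account:
    name: str
    industry: str
    revenue_m: float  # annual revenue, in millions
    structure: str    # e.g. "primary 5M, 10M xs 5M, 250k deductible"

def similar_accounts(target: Account, book: list[Account], k: int = 3) -> list[Account]:
    """Naive similarity: same industry first, then closest revenue.
    A production system would use far richer features, but the idea is the
    same: show the underwriter how comparable accounts were structured."""
    def score(candidate: Account) -> tuple:
        same_industry = 0 if candidate.industry == target.industry else 1
        revenue_gap = abs(candidate.revenue_m - target.revenue_m)
        return (same_industry, revenue_gap)

    return sorted(book, key=score)[:k]

book = [
    Account("Acme Logistics", "transport", 120, "primary 5M, 10M xs 5M, 250k ded."),
    Account("Beta Foods", "food", 95, "primary 10M, 100k ded."),
    Account("Gamma Haulage", "transport", 140, "primary 5M, 15M xs 5M, 500k ded."),
]
target = Account("New Prospect", "transport", 110, "to be structured")
for peer in similar_accounts(target, book, k=2):
    print(peer.name, "->", peer.structure)
```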
Commercial underwriting will still be driven by human decisions for the foreseeable future. In a world where the speed and complexity of transactions are increasing, there is still a lot that technology can do to help underwriters make better decisions.