Keep the Change: Part Two

Six years ago, we wrote a post called, “Keep the Change”. At the end of that post, we wrote this:

Since we have to keep the change, we may as well manage it effectively.

In the time that’s elapsed, we’ve come to realize two things: (1) We were correct. (2) We didn’t know how correct we were.

Today, of course, we’re likely to hear change referred to as transformation. Digital transformation. Financial transformation. Operational transformation. “We’re going to transform the enterprise!”

Are we really?

Transformation seems too abrupt. It feels like zero to sixty or a snap of the fingers. That’s not at all the kind of change we think about or that our software enables. Rather, it’s more gradual and deliberate. It’s more incremental and better managed. And it involves knowledge, learning, and what used to be called process re-engineering. Maybe it still is, for all we know.

Approach is Everything

Instead of transformation or process re-engineering, we’re more likely to call it change management. By that, we’re referring to a service that comes with our software. More specifically, we mean that — especially when transitioning from fragmented legacy systems to an integrated suite — some processes are likely to change as they become more automated and efficient.

By the same token, people are likely to change in different ways. Some people may resist or have difficulty reconciling themselves to new systems and new processes. On the other hand, other people, who may have been considered marginal performers with the old system, may become stars with a new system and altered processes. And if we accept that best practices are static and determined in hindsight (they are), we can also tout that best practices, modified by experience, can be better practices. That creates the opportunity for people to suggest what’s possible, to build collaborative relationships, and to continuously improve requirements-identifying processes.

In other words, we view change management as a crucial aspect of delivering our software. It’s a systematic approach to supporting people through their transitions to new processes with new tools.

Yes. We have to keep the change. We also have to manage it to keep it accessible and assimilable to everyone who experiences it. We have to ensure every link in the chain is secure.

It’s all part of the service.

The Three Ps

Most software companies define success in a number of common (and predictable) ways: They develop and deliver software specifically for their intended users. They possess a deep understanding of their customers’ needs and experiences. They ensure their software enhances business value and has quantifiable ROI. They establish roles and responsibilities with their customers during implementations. They define and achieve clear project objectives, including development cycles, milestones, and release plans. They count their numbers of users. They measure the satisfaction of their users, largely by staying within project scopes and budgets, completing project tasks on time, meeting milestones, and fulfilling specified requirements.

But we don’t consider ourselves to be common. And the only thing we want to be predictable around here is our reliability.

We’re Different

While predictability does start with p, it’s not one of the Three Ps we’re talking about. We’re talking about the Three Ps on which we built our company and by which we established our reputation.

Those Three Ps are, in this order:

  1. People. We’re not aware of any software organizations that have been built with code. Software is built with code. Organizations are built with people who love what they do, who are dedicated to the success of their teammates, and who are passionate about the satisfaction of their customers.
  2. Process. We don’t think of process as formula. We think of it as ever-evolving. We don’t think of best practices. We’re constantly in search of better practices. And we don’t dictate processes to our people. We encourage them to find them — to try, to experiment, to fail on occasion, to learn from their failures, and to improve their skills and their performance.
  3. Product. We’re very proud of our product. But it doesn’t constitute laurels on which to rest. Rather, the functional capabilities of our product are benchmarks: If we can do this today, with the ingenuity of our people and the input from our customers, we’ll do that tomorrow. That’s the way in which people and process always inform and improve our product.

It’s About Priorities

We didn’t count priorities as a fourth p. We could have, because we learned a long time ago that we become what we count. That’s why we count our most valuable assets. And our most valuable assets are our people. We wouldn’t have a process or a product without them.

What do you count?

The Use in User Groups

Since we concluded our Innovation Advisory Board (IAB) meetings early last month, since the IAB comprises users of our software, and since (like our software) we intend to keep improving our IAB meetings, we did a little reading on the topic. We found a post from April of 2022, published by Forbes — “Why Customer User Groups Are Integral To The Success Of Today’s Technology Organizations” — that proved to be instructive and affirming. It said this, in part:

User groups exist to facilitate knowledge-sharing and communication among individuals who use the same technology, so providing a frictionless forum to share and receive information is critical … Many technology companies decide to form user groups when they have motivated and engaged customers who see the benefit of participating in a community around the company’s particular products or platforms. As groups grow, it’s the relationships — built among members and the vendor — that are the basis of user groups’ value.

We agree with all those points. And we’d add, in our experience, the relationships we build with our customers yield loyalty, which is a significant part of the IAB’s value to us.

There’s More

In addition to the points raised in the Forbes post, we find the IAB yields these things, as well:

  • Expertise and Guidance: Since we’re all working in and sharing knowledge of the insurance domain, the input we get from our customers provides the insights we need to ensure our products and services evolve most beneficially.
  • Stakeholder Engagement: Beyond our users, the IAB allows us to include other stakeholders, including vendors of other systems and data sources with which we integrate to provide the functionality and the information our customers need to do their jobs most efficiently.
  • Improved Governance: Advisory boards can serve as — and our IAB provides — checkpoints for ensuring our accountability, transparency, and responsible decision-making.
  • Effective Leadership: Our IAB also provides leadership by giving us feedback on practice standards and helping to keep us in compliance with ethical codes, as well as regulatory mandates.
  • Resource Accessibility: Since we can’t know everything, our Advisory Board helps us identify and make use of existing vendors, resources, expertise, and networks, keeping us from reinventing wheels, reducing costs, and increasing efficiency.

What’s Next?

We’ve already received feedback from the attendees of this year’s IAB meetings. We’re considering all of it and will include it in the development of next year’s IAB meetings. And we’ll continue to ensure the Finys Suite contains the features our customers want and need. Otherwise, they won’t use it.

That’s how we keep the use in our user group.

The Tell in Artificial Intelligence

tell (noun)

  1. In poker, a tell is a change in a player’s behavior or demeanor that allegedly reveals information about his hand strength. This can include facial expressions, nervous habits, or mannerisms that are believed to be indicative of a player’s assessment of his hand. A tell can be used to gain an advantage by observing and understanding the behavior of other players, but it can also be faked or misinterpreted.
  2. A tell can also refer to any behavior or action that reveals a person’s true intentions, emotions, or thoughts, often unintentionally. This concept can be applied to various aspects of human interaction.

In February of this year, we published a post called, “The Art in Artificial Intelligence”, in response to an article we read in the December edition of Best’s Review. We stand by the balance of optimism and skepticism we struck in that post. And we found that balance validated in the October edition of Best’s Review. That edition ran an article called, “Artificial Intelligence’s Imperfections Become Clearer.” Is it a condemnation of AI? Not by a long shot. But in identifying some of AI’s tells, it does sound cautionary notes worth heeding.

Exhibit A

First, the article offers this slice of sensibility:

How can insurance underwriters, using AI, separate fact from fiction? Could AI be wrong? AI systems can be imperfect and may produce erroneous outcomes if they are trained on biased or inadequate datasets. Add to those false pathways, poor data integration, algorithmic bias and decision-making errors and, yes, AI can be wrong. Examples may include denial of a claim due to not having the correct nomenclature programmed into and recognized by AI. At times, AI will confidently, yet inadvertently, omit information, such as a street address; or list a medical condition that a claimant may not have had; or fail to post electronic fund transfers in a timely manner for premium payments, resulting in a notice of cancellation.

It’s fair to imagine the differences between fact and fiction are seldom considered when it comes to AI. But the fact is AI still requires programming and accurate data. So, it’s still subject to GIGO (garbage in, garbage out).

Exhibit B

Then the article ups the ante, extending its considerations to insurance companies in their entirety:

What about “business decisions” made by insurance carriers when a loss occurs? Can AI make decisions based on business relationships and long-term client loyalty … what data points go into that algorithm? AI … offers a huge amount of promise for the insurance industry. But mitigating uncertainty … has to be at the forefront of asserting the risk decision-making for accuracy by machine and human on paper and online.

Insurance companies may be built on products and services. But they’re sustained by minimizing losses, maximizing and maintaining business relationships, and ensuring the loyalty of policyholders. Those things are not worth risking to AI or anything else.

Our View

We tend to think of AI in terms of the classic technology adoption lifecycle, which mimics the bell curve. We don’t need to innovate with it. We don’t need to be early adopters of it. But we do need to keep our eye on it, to learn about it, and to employ it in ways that will best — and most reliably — serve our customers. Then, especially if it turns out to be anything like the dot-com bubble, neither we nor our customers will pay any undue prices should the bubble burst.

[Image: the technology adoption lifecycle bell curve. Image by Craig Chelius, CC BY 3.0, via Wikimedia Commons.]

We’re not in the business of building every bell and whistle we can think of. And we’re not inclined to weigh our product down with unnecessary functionality. We are, however, very much in the business of giving our customers what they need when — and because — they need it.

As the saying goes, reliability is in the AI of the beholder.

A Study in Contrasts

We asked JoAnna Bennett and Mark O’Brien from O’Brien Communications Group to attend our annual Innovation Advisory Board Meeting. Following that experience, Mark wrote this post.

Back in my corporate days, I had the distinct sense in every meeting I attended that they were conducted for the sole purpose of talking about writing plans for things we were going to do. No such plans ever got written. None of those things ever got done. But people seemed pretty content with the routine. And no one was ever held accountable for the fact that nothing ever got done. As curious as I was about that, I was even more curious about how people seemed so content to be unproductive and about why there was no accountability.

In contrast, at the Finys Innovation Advisory Board (IAB) meeting last week, there was no talk about planning. There were only confirmations of things that had been done and updates about things that would be done. There was no discernible hierarchy. Rather, there were contributions from every level of the organization, recognition for those contributions, and 40 or more attendees from Finys’s various client companies, happy that all those things were being done on their behalf. I felt amazed and naïve.

I felt amazed at the genuine interaction and sincere customer satisfaction I was witnessing. I felt naïve because I’d never imagined how simple creating that kind of environment could be. And the secret to how it’s done is that there’s no secret to how it’s done.

Common Sense

Experiences like ours at the IAB indicate how inscrutably uncommon common sense has become.

At the IAB, I learned you don’t create a culture by talking about it. I learned culture and teamwork are like honesty and integrity. You don’t manifest those things by talking about them, either. You manifest them by being them, by demonstrating them, by making sure every interaction — employees with employees and employees with customers — is driven by them. I learned you find the right people for the culture you’re creating by asking the right questions in the interview process. Rather than asking questions like, “Where do you see yourself in five years?” you ask questions like, “How would you like to contribute to this organization and help us grow over the next five years?”

Once you’ve brought the right people on board, you give them the latitude to contribute and to collaborate. You let them make mistakes, correct them, and learn from them. And you give them support, recognition, and opportunities to advance. If you do that, you prove Richard Branson was correct when he said, “Clients do not come first. Employees come first. If you take care of your employees, they will take care of the clients.”

The Proof in the Pudding

If you doubt the truth of Branson’s statement, all you have to do is sit in a room full of happy clients and the people who are happy to take care of them. From what I witnessed, I don’t believe there is one person at Finys for whom working there is just a job. You can’t fake commitment, dedication, and knowledge. And you can’t earn trust from and enthusiastic collaboration with your clients by faking anything.

There’s a clear distinction to be made between doing things right and doing right things. When a client, as one did, says, “My Finys team is like family,” you can be certain you’re doing right things.

JoAnna and I take our hats off to the entire Finys team. And we thank them for including us in their IAB.

It was a welcome contrast to much of what we’ve seen in our working lives.

The Evolution of BI with AI

With more and more being written about AI — and with more and more applications being developed for its use — we couldn’t help wondering about the role of AI in what we’ve come to consider business intelligence (BI). Obviously enough, BI is synonymous with data analytics and has been for a while. But we really wanted to know about BI’s origins and evolution, which would help us understand and keep pace with where it’s headed now that AI is entering the picture.

One of the first things we came across was a post from DATAVERSITY, a producer of educational resources for business and Information Technology (IT) professionals on the uses and management of data. By way of historical perspective, DATAVERSITY offered this:

In 1865, Richard Millar Devens presented the phrase “Business Intelligence” (BI) in the “Cyclopædia of Commercial and Business Anecdotes.” He used it to describe how Sir Henry Furnese, a banker, profited from information by gathering and acting on it before his competition … in 1958, an article was written by an IBM computer scientist named Hans Peter Luhn, describing the potential of gathering business intelligence (BI) through the use of technology … In 1968, only individuals with extremely specialized skills could translate data into usable information. At this time, data from multiple sources was normally stored in silos, and research was typically presented in a fragmented, disjointed report that was open to interpretation. Edgar Codd recognized this as a problem, and published a paper in 1970, altering how people thought about databases. His proposal of developing a “relational database model” gained tremendous popularity and was adopted worldwide … The number of BI vendors grew in the 1980s, as business people discovered the value of business intelligence. An assortment of tools was developed during this time, to access and organize data in simpler ways. OLAP [online analytical processing], executive information systems, and data warehouses were some of the tools developed.

Given what we know about the analytical and predictive abilities of AI, it’s fairly easy to generalize, then, about the ways in which it will contribute to the evolution of BI.

Here’s What We Think

While we can’t be sure how much of this has actually come to fruition, it seems safe to assume AI helps or will help BI to:

  1. Automatically analyze large datasets, identify patterns, and generate insights (see the toy sketch after this list).
  2. Forecast future trends (given #1), optimize operations, and allow even better data-driven decisions.
  3. Let users interact with BI systems using natural language processing (NLP), making those systems more accessible to and usable for non-technical people.
  4. Use machine-learning algorithms to understand user behavior, adapt to changing business needs, and continuously improve performance.
  5. Enable real-time decision-making, making BI dynamically operational, as opposed to reporting-centric.
  6. Integrate with other technologies, such as big data, the cloud, and IoT to leverage a wide range of data sources and create unified views of businesses.
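
To make the first two items concrete, here’s a toy sketch. It’s purely illustrative, with made-up numbers, and it isn’t drawn from the Finys Suite or any particular BI product; it just shows the kind of automatic pattern-finding and naive forecasting we’re describing.

```python
# A purely illustrative example: flag an unusually large monthly claim
# total (item 1) and produce a naive forecast from the remaining months
# (item 2). All figures are hypothetical.
import statistics

monthly_paid = {1: 12_400, 2: 11_900, 3: 13_100, 4: 12_700, 5: 41_500, 6: 13_300}

values = list(monthly_paid.values())
mean, stdev = statistics.mean(values), statistics.stdev(values)

# Item 1: surface patterns/outliers -- anything more than two standard
# deviations from the mean gets flagged for human review.
flagged = {m: v for m, v in monthly_paid.items() if abs(v - mean) > 2 * stdev}
print("Flagged for review:", flagged)

# Item 2: a deliberately naive forecast -- the average of the unflagged
# months. Real AI-assisted BI would use far richer models; this only
# shows the shape of the idea.
typical = [v for m, v in monthly_paid.items() if m not in flagged]
print("Naive forecast for month 7:", round(statistics.mean(typical)))
```

That’s the sort of grunt work AI-assisted BI can take off an analyst’s plate, at far greater scale and with far better models than this.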

Are we correct about all of that? We don’t know. We’d have to combine AI with a crystal ball and a Ouija board to be sure. But we do know we’ve come a long way since 1865. And the evolution of BI will continue with the further adaptation and inclusion of AI.

That’s why we built the Finys Suite to be ready for it.

Configure This

In the early 2000s, policy admin vendors whose systems had tools were at a distinct advantage. Vendors that didn’t have tools would blanch at the notion of competing for business: “We can’t beat those guys. They have tools!” That was then. This is now. Configuration toolsets have become table stakes. If you’re a vendor — and if you don’t offer a configuration toolset — you’re not in the game. Period. In fact, configuration toolsets have become so ubiquitous, their value is almost overlooked. It shouldn’t be.

These days, configuration toolsets enable insurers to configure their own systems; to modify and maintain products; to create and market new products; to tailor policies to specific customer needs, risks, and requirements; and to do all those things without carrying the overhead of huge IT departments. In addition, configuration toolsets allow insurers to do the following (there’s a brief illustrative sketch after the list):

  • Customize coverages: By selecting from a range of policy features and business rules, insurers can develop policies that address unique exposures, such as specialized equipment or business operations.
  • Mitigate risk: Configuration toolsets help underwriters assess, select, and manage risks more effectively, reducing the likelihood of unexpected losses or claims.
  • Improve customer satisfaction: By offering tailored policies, insurers can better meet the needs and desires of their policyholders, increasing satisfaction, loyalty, and retention.
  • Enhance competitiveness: Because responsiveness is king, insurers that are able to use advanced configuration toolsets are also able to compete more aggressively, to differentiate themselves from their competitors, and to attract customers that want and need flexibility and customized coverages.
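
For readers who prefer seeing to reading, here’s a minimal, hypothetical sketch of what a toolset-style product definition might look like. The names, coverages, and rates are invented for illustration; they don’t reflect the actual format of the Finys Design Studio or anyone else’s toolset.

```python
# Hypothetical example only: a product defined as configuration data
# (coverages plus a plain-language underwriting rule) rather than code,
# so it can be changed without a development release.
from dataclasses import dataclass, field

@dataclass
class Coverage:
    name: str
    limit: int
    deductible: int
    rate_per_thousand: float  # base rate per $1,000 of limit

@dataclass
class ProductConfig:
    product: str
    coverages: list[Coverage] = field(default_factory=list)
    rules: list[str] = field(default_factory=list)

contractors_gl = ProductConfig(
    product="Contractors General Liability",
    coverages=[
        Coverage("Premises Liability", limit=1_000_000, deductible=1_000, rate_per_thousand=0.85),
        Coverage("Tools and Equipment", limit=250_000, deductible=500, rate_per_thousand=1.40),
    ],
    rules=["Refer to underwriting if more than two losses in the past three years"],
)

# Pricing falls out of the configuration -- change the rates or limits and
# the indicated premium changes with them, no code deployment required.
premium = sum(c.rate_per_thousand * c.limit / 1_000 for c in contractors_gl.coverages)
print(f"Indicated premium: ${premium:,.2f}")
```

A real configuration toolset does far more than this, of course (forms, rating algorithms, workflow, state variations), but the principle is the same: the product lives in data the insurer controls rather than in code only the vendor can change.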

What’s Next?

While no one has a crystal ball or a Ouija board, insurance configuration toolsets are likely to evolve towards increased automation and digitalization. (Both are givens at this point.) Here are just a few of the things that are likely to happen:

  • As insurers find ways to take greater advantage of tools like artificial intelligence (AI), augmented reality (AR), and optical character recognition (OCR), manual processes in back-office operations will continue to be reduced, while personal touches in things like strategic and customer-facing activities will continue to increase.
  • The integration of telematics, wearables, and connected devices will give insurers more personalized data on policyholders, enabling underwriters to make more informed decisions and to offer tailored risk coverages and financial offers.
  • Insurance shopping platforms and digital tools will continue to expand distribution channels, letting customers compare products, review testimonials, and find plans that meet their needs.
  • Advanced visual capabilities, including geolocation, OCR, AR, AI, and drones, will let agents collect data more efficiently, reduce unnecessary travel, and speed up claim resolution.

Our inability to predict the future notwithstanding, we can be sure configuration toolsets aren’t going anywhere. That’s why we continue to refine and enhance our Design Studio.

We can’t see the future. But we’re already ready for it.

The Search Is On

Your legacy system is starting to wheeze a little. It’s not as reliable as it once was. It’s nowhere near as flexible as it used to seem to be. It requires a little more coaxing than you’d like to give it. And you, your agents, and your policyholders are starting to wonder if it might be ready for a safe spot in the Old Systems Home.

But where do you start?

Well, you start with the facts, of course, beginning here:

  1. What deployment models does the vendor offer? Do they provide a choice between SaaS and on-premise platforms? You may have more control on-premise, but it likely will require more internal IT resources. SaaS, on the other hand, offers greater security, scalability, and flexibility. And it may offer lower total cost of ownership (TCO).
  2. What’s the vendor’s track record on implementations, migrations, and data conversions? Does it stumble out of the blocks? Does it fade in the late going? Or are its pace and delivery steady from start to finish? Your business operations will depend on the vendor’s performance.
  3. How is the process of implementing the software, migrating your data to the new system, and performing the necessary conversions priced? Will you be on the hook for delays or scope creep?
  4. What’s the vendor’s approach to change management (process re-engineering and getting your employees up to speed on the new software’s ability to facilitate necessary processes) and training (ensuring buy-in from and a smooth transition for your employees)?
  5. How good is the software? How much of it is standardized? How much of it is customizable? How reliant do you have to be on the vendor to configure the system, to configure existing products, and to develop new products?

That Was Fun

All five steps above are necessary, and we highly recommend gathering as many facts as you need to ensure your decisions about a vendor and its software are fully informed. You can find all manner of data to support the contention that the single most important factor in a property/casualty insurer’s selection of a core processing system is integration with existing systems and data. But it’s not.

The single most important factor in any insurance company’s selection of any software is word of mouth. In other words, the most important thing to establish with a vendor is trust, beginning with the trust it’s established with other insurers. Yes, the software has to be good. But a trusting relationship comes first.

If your legacy system is telling you the search is on, remember this: There is no perfect system. But there is a perfect system for you.

That includes your relationship with the vendor.

Insurtechs: Bigger Fish or Red Herrings?

At this point, insurtechs have been around long enough that most of us are familiar with the benefits they typically tout. Here are the Top 10:

  1. Innovation and modernization, based on their belief that new technologies like AI, machine learning, blockchain, and IoT will change the game. Core system vendors are under pressure to integrate these technologies into their offerings to remain competitive.
  2. Legacy system transformation, based on their belief that they know the industry well enough to introduce more flexibility, scalability, and the capability to handle modern insurance demands.
  3. Enhanced user experiences, based on their belief that they’ll equip core system vendors to enhance their systems with user-friendly interfaces and customer-centric features to meet the expectations of modern consumers.
  4. Data-driven personalization, based on their belief that data analytics is the silver bullet for ensuring personalized insurance products, claims handling, and customer service.
  5. Faster implementations, based on their belief that, for the most part, one size will fit all.
  6. APIs and integration capabilities, based on their apparent belief that core system vendors aren’t already including such things to make their systems more open and adaptable to new technologies and partners.
  7. Ecosystem development, based on their apparent belief that core system vendors will be open to partnering with all of them, their value to vendors, their customers, and their customers’ policyholders notwithstanding.
  8. Co-development and white-labeling, based on their apparent belief that core system vendors may not be developers in the first place and that they may be willing to compromise their brand credibility by suggesting they can’t develop capabilities on their own.
  9. Increased competition, based on their apparent belief that they can influence and compete with traditional vendors.
  10. Continuous evolution, based on their apparent belief that the rest of the world, the insurance industry, and technology might remain static without them.

But there’s a bigger reality to take into consideration.

Behind the Veil

Given the fact that some insurtechs have, indeed, proven their value and manifested varying degrees of longevity, we’re not entitled to express opinions about the Top 10. But we should bear this in mind: In May of last year (the most recent data we could find on the topic), Boston Consulting Group reported the following in an article entitled, “Insurtech’s Hot Streak Has Ended. What’s Next?”:

Investments in the fintech sector decreased by 43% year over year, with insurtech registering the largest drop at 50%. After hitting a peak of $4.9 billion in the second quarter of 2021, insurtech funding began its descent. By the fourth quarter of 2022, funding had reached its lowest level of the past 20 quarters, with only $800 million invested. That marked a decrease of 64% from the previous quarter and 78% from the fourth quarter of 2021 … the pace of growth has slowed significantly, and the market shows no signs of a rebound.

Does that mean insurtechs are going away? Nope. Does it mean we can or should ignore them? Nope. Does it mean core system vendors should be prepared to incorporate and integrate the ones that suit their business models and provide discernible value to their customers? Yep.

That’s exactly what we mean when we say the Finys Suite is future-proof.

Social Inflation Goes Nuclear

The July edition of Best’s Review ran an article called, “Social Inflation Remains a Thorn in the Side of Casualty Insurers”. The article reflects the evolving psychology of some policyholders and the corresponding expectations that yield suspicion of corporations and assumptions about corporations’ ability to pay inflated compensatory damages:

Social inflation continues to test the ability of casualty insurers with unpredictable and excessive claim costs … a reflection of shifting social and cultural attitudes toward corporations … when people have claims or file claims … they’re looking at the deep pockets of the corporations and figuring that, “Hey, somebody has to pay for my misfortune” … A lot of that led to an increase in lawsuits … [and] the erosion of tort reform in a number of states.

In other words, we may be facing the proverbial perfect storm of social inflation and nuclear verdicts.

What is Social Inflation?

Social inflation denotes growth in claim severity beyond what general economic inflation and historical claim trends would predict, driven by societal attitudes toward litigation rather than by economics alone. (If, say, economic inflation would account for a 3 percent annual rise in claim severity but severities are actually rising 8 percent, the remaining 5 points are attributed to social inflation.) Those trends include changes in public perception and attitudes toward corporations, liability, and risk-taking that can lead to increased litigation and larger jury awards. They include the involvement of outside parties in funding lawsuits that drive up litigation costs. They include reversals of tort-reform measures that were intended to protect insurers from insolvency. And they include varying demographic makeups of jury pools that can influence jury verdicts and awards. The upshot is that those trends lead to increased claim costs, higher premiums, and reduced profitability for insurers.

What is a Nuclear Verdict?

A nuclear verdict is a verdict in favor of a plaintiff with a damage award that surpasses $10 million. Such verdicts are considered nuclear because they can have devastating effects on defendants, potentially causing financial hardship and bankruptcy. Nuclear verdicts often involve complex cases, such as product liability, medical malpractice, or catastrophic injuries. The increase in nuclear verdicts is attributable to a number of things, including the changing attitudes toward corporations mentioned above, increasingly aggressive plaintiff attorneys, the increasing numbers of class-action lawsuits, the increasing cost of healthcare and medical treatments, and more. The proliferation of nuclear verdicts is a source of concern and consequence for the insurance industry and defense litigators. They lead to increased insurance premiums, reduced coverage options, and a greater risk of financial ruin for defendants. As a result, there is a growing need for effective risk management strategies, litigation tactics, and claim management techniques to mitigate the implications of these verdicts.

Start at the Beginning

We can’t say your core processing suite can save you from all the effects of social inflation and nuclear verdicts. There are regulatory and legal issues to be resolved, as well as social attitudes to be examined and mitigated. But the right suite — one with the flexible configuration capabilities to enable you to anticipate and adapt — will have you better positioned before social inflation goes nuclear.

If you happen to be looking for such a suite, we know some guys.