
Lead Scoring Differentiators with Fenris

We sat down with Fenris’s Chief Technology Officer, Jay Bourland, for a deep dive discussion about how the Fenris machine learning (ML) platform delivers predictive lead scoring for the insurance industry—tuned to each customer’s individual needs—faster and at a much lower cost than companies can achieve on their own.

Q: We say that our ML scoring platform is configurable to each client’s requirements. How is that different from what other vendors offer?
A: Without Fenris, if you want to create an ML model for your business, you have two choices. There’s the one-size-fits-all solution that acts as if your business is the same as everyone else’s. The option at the other end of the spectrum is to hire a team, either internally or outsourced, to create a model specifically for you and the process that you have in mind. That approach is both costly and time-consuming, so even high-performing models have a very poor price-performance ratio.

The Fenris approach is to use a strong core of machine learning capabilities to deliver models that are trained for your business outcome. If you look at the machine learning process, you’ll see there are a lot of steps that apply equally to everyone. There’s one step towards the end of the process—where we train the model and configure scoring parameters—where an individual business’s requirements matter. We leverage a library of core models to create a new model, and we focus on making sure that we can train that model very quickly for individual outcomes. And we can spread the costs of this core library across multiple customers, allowing us to offer customized solutions at a cost far below in-house development.
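
To make that more concrete, here is a minimal sketch of the idea, assuming a scikit-learn style workflow rather than Fenris’s actual code: most of the pipeline is identical for every customer, and only the final training step uses an individual client’s own outcome data. The library choices and step names are illustrative assumptions.

# Minimal sketch: shared preprocessing for every client, a client-specific final fit.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def build_customer_model(X_train, y_train):
    shared_steps = [("scale", StandardScaler())]             # same for every client
    model = Pipeline(shared_steps + [
        ("clf", LogisticRegression(max_iter=1000)),          # trained per client
    ])
    model.fit(X_train, y_train)                              # this client's own outcomes
    return model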

Q: What are some examples of individual outcomes or parameters that would be different from one insurance company to another?
A: The simplest example is car insurance. Everyone wants to know whether this customer is likely to buy insurance. A one-size-fits-all model is very easy to build because everyone is generally required to buy auto insurance, but it’s not very helpful.

The real question is whether the prospect is likely to buy from your company. The customer who buys insurance from Company A, a non-standard carrier, for example, is very different from a customer who buys insurance from Company B, a premium brand. They reach very different markets, they have different products, and they have different price points and underwriting guidelines. Company A appeals to clients with an imperfect driving record, poor credit, or other challenges. Company B’s target market is high-net-worth individuals with high-value vehicles.

So, prospects for Company A are scored very differently from prospects for Company B. The same goes for companies across the standard market. Each brand has a unique identity and target market. In order to create a model that is appropriate for a specific business and for the customer base that brand speaks to, we need to train it on a specific customer profile and outcomes.

Q: We say we can deliver models quickly. Once a contract is signed, how long does it take to go through the configuration process?
A: Of course, it depends on the requirements, but typically we can deliver a first model within two to three weeks. Subsequent models, we can usually turn around within a week.

Q: What are the client business problems that Fenris models address, and how do they work?
A: We have two classes of ML models. One is our “top of the funnel” lead scoring model that customers use during the “ping” stage in purchasing leads. The customer gets a ping in their lead tree and wants to know whether or not to bid on the lead. Because the lead is based on anonymous data, we start with basic rules—is it in the right state? is it in the right ZIP code? how many cars? We want just a few inference questions, a simple decision tree, that we can back up with a lot of machine learning and improve over time.
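
As a rough illustration of what that ping-stage check can look like, here is a short Python sketch; the field names, states, and thresholds are assumptions, not Fenris’s actual rules.

# Hypothetical ping-stage filter: a few hard rules plus a shallow decision tree.
from sklearn.tree import DecisionTreeClassifier

LICENSED_STATES = {"VA", "NC", "SC"}                   # illustrative only

def passes_basic_rules(ping: dict) -> bool:
    # Cheap checks that need no model at all.
    return (
        ping.get("state") in LICENSED_STATES
        and 1 <= ping.get("num_vehicles", 0) <= 4
    )

# A shallow tree keeps inference to a handful of questions and is quick to retrain.
ping_tree = DecisionTreeClassifier(max_depth=3)
# ping_tree.fit(historical_ping_features, historical_bid_outcomes)   # fit offline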

The customer sends us the anonymized “ping” information as they receive it, and we return information about whether or not it’s a good lead to buy based on their parameters of what makes a quality lead. For this stage, we need to be very, very fast. We are a real-time system. We don’t do things “on the back end” or in batches. Rather, we respond as things happen, in time for the customer to take action. For these models, we’re generally able to return the score within 200 milliseconds.
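
To show what that real-time contract implies in code, here is a hedged sketch that times a single scoring call against the roughly 200-millisecond budget; the function and field names are hypothetical.

# Score one ping and report how long the call took.
import time

def timed_score(ping: dict, scorer) -> dict:
    start = time.perf_counter()
    decision = scorer(ping)                             # e.g. rules plus tree from the sketch above
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return {"bid": bool(decision), "latency_ms": round(elapsed_ms, 1)}

# Example usage: timed_score({"state": "VA", "num_vehicles": 2}, passes_basic_rules)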

The other type of model is used after our client has purchased a lead. Here the client wants to know how best to handle the lead in their marketing pipeline. Should this lead go into a call center, which is the highest cost? Or will an email drip campaign be better because it’s likely to take a long time to convert?

The client sends us a name and address, and we append features from our reference dataset, along with any extra variables that we’ve developed to improve the scoring model for this client. We run that improved prospect profile through the model, and the customer gets an answer back. We focus on returning a simple index, rating the prospect as poor, fair, good, very good, or excellent. We’ve found that this is the most actionable kind of insight for people to work with. For these models, where we are enriching the lead with a lot of other data, the response time tends to run somewhere in the neighborhood of 650 to 850 milliseconds.
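
A minimal sketch of that flow, assuming hypothetical field names, a stand-in reference lookup, and illustrative cut points rather than the actual Fenris pipeline:

# Enrich the lead, score it, and collapse the probability into the five-band index.
BANDS = [(0.2, "poor"), (0.4, "fair"), (0.6, "good"), (0.8, "very good"), (1.01, "excellent")]

def score_lead(lead: dict, reference_lookup, model) -> str:
    # Append reference-data features keyed on name and address (lookup is hypothetical).
    feature_vector = reference_lookup(lead["name"], lead["address"])
    prob = model.predict_proba([feature_vector])[0][1]           # probability of converting
    return next(label for cutoff, label in BANDS if prob < cutoff)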

Q: What is model retraining, and how do we do it?
A: We live in a very dynamic environment. Our world is constantly changing, and the ways we attract and communicate with prospects change almost daily. We focus on being able to deliver a very fresh model over time. Retraining means refining the model based on the actual outcomes experienced by our customer, the insurance company, or other factors. Outcomes are a reflection of the accuracy of the model’s predictions—how many leads scored as having a high propensity to buy actually end up buying, for example. The customer sends us their outcomes, and we retrain the models based on those. When you’re looking at the very top of the marketing funnel, we’ve seen campaigns come and go. And other things happen very rapidly to change the marketing landscape. So, it’s very important that we’re keeping the model up to date.

We can accept new training data however the customer wants to send it to us. We can set up an API and have them send us outcomes as they occur. Or, if it’s easier at this stage of the customer’s digital journey, we can take batch files that they send through our SFTP site. We actually do the retraining on whatever schedule makes sense for the individual company, although the more frequently we can do it, the better. We have models that are able to retrain over thousands of outcomes in just a few seconds.
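
As a simple illustration of the batch path, here is a sketch that folds a file of reported outcomes back into a model; the CSV layout and column names are assumptions, not the actual feed format.

# Retrain on a batch of outcomes delivered as a CSV file.
import csv

def retrain_from_batch(model, outcomes_path: str):
    X, y = [], []
    with open(outcomes_path, newline="") as f:
        for row in csv.DictReader(f):
            X.append([float(row["feature_1"]), float(row["feature_2"])])
            y.append(int(row["converted"]))            # 1 = bought a policy, 0 = did not
    model.fit(X, y)                                    # refresh the model on the latest outcomes
    return model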

Q: What do we mean when we say a customer’s model is extensible?
A: Typically, a customer gets started with a propensity to buy model. A next step might be to model a customer’s lifetime value, or, in the non-standard market, to model a prospect’s propensity to cancel a policy quickly. There are a lot of problems around very short policy tenures. Often, we can customize a new model based on their existing model by adding new parameters and outcomes. This allows us to deliver a new model very rapidly, at a very low cost to the client.
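
A hedged sketch of what that extension can look like: the client’s existing feature set stays the same, and only the outcome column changes, here to a hypothetical early-cancellation flag.

# Reuse existing features, train against a new outcome (e.g. cancelled within 90 days).
from sklearn.linear_model import LogisticRegression

def extend_model(existing_features, new_outcome_labels):
    new_model = LogisticRegression(max_iter=1000)
    new_model.fit(existing_features, new_outcome_labels)    # 1 = cancelled early, 0 = retained
    return new_model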

Q: How does Fenris differ from its competitors, and what makes our models special?
A: We have three characteristics that we focus on. The first is explainability, or transparency. We want to help our customers get insight into their business. We tend to favor simpler models. They tend to be faster to retrain, have better performance on the front end, and are easy to understand. That lets us watch out for things like bias, which is always a big concern for us.

Number two is trainability. We’ve talked about that. We want to be able to retrain very quickly. We think the world changes much too fast to try to put a perfect solution, or the absolute best model, in place every single time, because that takes a lot of time to develop and train. We’d rather have a model that we can continually retrain as circumstances change than a slightly more accurate one that can’t keep up.

The third characteristic is performance—being able to get information back to our clients in real time. That means we want the response time for the inference step in the model to be very fast. A model that takes several seconds to come back with an answer is not a good fit for our clients.

Contact us for more information about how lead scoring works, or to schedule a demo.
