Simple tips for better lead scoring
You don’t need machine learning to develop an effective lead scoring approach. In fact, it probably won’t help.
The more complexity you add to your lead scoring, the harder it is to tell what’s working and what’s not. To improve your lead scoring, start simple, experiment with a diversity of tactics, and then iterate, iterate, iterate. That’s what we do at Clearbit.
Start with a simple lead scoring process
It’s common to go over the top with lead scoring from the start, incorporating AI and machine learning at every opportunity. But these tools are only effective when accompanied by a solid strategy. At Clearbit, we’ve learned it’s better to start with a simple lead scoring approach and build on it slowly so we can accurately gauge what is working and what isn’t.
Julie Beynon, head of analytics at Clearbit, remembers when Clearbit’s lead qualification approach was over-engineered. As soon as we realized it was too complicated, we scaled back and simplified the process. Now each lead passes through just one external tool (MadKudu) before it’s scored and sent to the sales team.
Boris Jabes, the CEO of Census, a company that builds data syncing tools, warns organizations not to make the common mistake of getting distracted by the excitement of new data tools when it’s possible to do more with the information already available.
Instead of using dozens of metrics to filter leads right off the bat, Jabes recommends starting with just one. To start, filter out 99% of users based on a single metric. Jabes suggests it could be your most active users, but the specific metric doesn’t really matter. You’re simply establishing a baseline to compare future efforts against. Once you’ve got your baseline, you can try different campaigns and see what works best for you. Successful lead scoring is built up over time, and starting simple (i.e., knowing your baseline rather than grabbing tons of data) is the best way to get there.
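For illustration, here’s a minimal Python sketch of that baseline step, assuming your leads live in a simple list of dictionaries. The metric name (sessions_last_30d) and the 1% cutoff are placeholders, not anything Jabes or Clearbit prescribes; swap in whichever single metric you choose.

```python
# Minimal sketch: establish a baseline by filtering on one metric.
# The field name "sessions_last_30d" and the 1% cutoff are hypothetical.

def baseline_filter(leads, metric="sessions_last_30d", keep_fraction=0.01):
    """Keep roughly the top 1% of leads, ranked by a single metric."""
    ranked = sorted(leads, key=lambda lead: lead.get(metric, 0), reverse=True)
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cutoff]

leads = [
    {"email": "a@example.com", "sessions_last_30d": 42},
    {"email": "b@example.com", "sessions_last_30d": 3},
    {"email": "c@example.com", "sessions_last_30d": 17},
]

# The small slice you measure future campaigns against.
print(baseline_filter(leads))
```

The point isn’t the particular threshold; it’s that one simple, repeatable cut gives you a stable baseline to compare every later experiment against.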
Evolve your lead scoring approach with your business
Just like everything else in your business, lead qualification requires constant adjustment as the business evolves and grows.
After you’ve measured a campaign or two against your baseline metric, Jabes recommends iterating quickly to find the right formula. At first, our lead scoring process at Clearbit filtered out only obvious spam, which let the sales team treat every genuine contact as a potential lead. But that soon became too much to handle.
Three years in, the sales team was handling enough volume that some filtering was required. We compiled an ICP (ideal customer profile): a short list of attributes we were looking for in our leads. As inbound leads kept increasing, we needed a way to keep surfacing the best ones so our sales team wasn’t overwhelmed. Imagine if Clearbit’s lead scoring approach had never changed and our sales team was simply flooded with inbound leads and no way to prioritize them.
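As a rough illustration of what a short ICP checklist can look like in code, here’s a hedged Python sketch. The attributes and thresholds below are hypothetical examples, not Clearbit’s actual ICP.

```python
# Hypothetical ICP check: a short list of attributes a lead must match.
# Attribute names and values are illustrative only.

IDEAL_CUSTOMER_PROFILE = {
    "min_employees": 50,
    "industries": {"software", "fintech", "ecommerce"},
    "countries": {"US", "CA", "GB"},
}

def matches_icp(lead, icp=IDEAL_CUSTOMER_PROFILE):
    """Return True if a lead matches every ICP attribute."""
    return (
        lead.get("employees", 0) >= icp["min_employees"]
        and lead.get("industry") in icp["industries"]
        and lead.get("country") in icp["countries"]
    )

lead = {"employees": 120, "industry": "software", "country": "US"}
print(matches_icp(lead))  # True -> route to sales; False -> nurture instead
```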
Jabes says that from the beginning, Census used Clearbit to enrich its leads, but they, too, had to scale their operation. Early on, pretty much every demo request was a lead. Over time, their lead scoring has become more sophisticated as they’ve learned which metrics matter most for their leads. And they still don’t use machine learning.
At Clearbit, we constantly monitor the performance of our current lead scoring approach. If results fall below our ideal range, we immediately make changes to address the issue, which might mean adjusting our lead scoring tech stack, our processes, or both. Our organization is always changing, and so is the world. Lead scoring has to change along with them.
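One way to picture that monitoring loop is a periodic health check on the score’s downstream results. The sketch below assumes lead-to-opportunity conversion as the watched metric and an invented target range; both are illustrative, not Clearbit’s actual numbers.

```python
# Hypothetical health check: flag the scoring approach for review when
# conversion drifts outside a target range. Numbers are illustrative.

TARGET_CONVERSION_RANGE = (0.10, 0.25)  # acceptable lead -> opportunity rate

def scoring_health_check(leads_scored, opportunities_created):
    rate = opportunities_created / max(leads_scored, 1)
    low, high = TARGET_CONVERSION_RANGE
    if rate < low:
        return f"Conversion {rate:.0%} below target -- revisit scoring criteria"
    if rate > high:
        return f"Conversion {rate:.0%} above target -- scoring may be too strict"
    return f"Conversion {rate:.0%} within range -- no change needed"

print(scoring_health_check(leads_scored=400, opportunities_created=28))
```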
Experiment, especially when data isn’t available or useful
One thing Clearbit and Census definitely agree on: keep an open mind and be prepared to try new approaches. This is important because some of the most effective campaigns start as experiments. And it’s also great preparation for the inevitable times when data is either not available or not helpful.
As Jabes points out, it’s important to get comfortable trying out new campaigns regularly. First of all, it’s the only way to discover surprising connections. There’s a reason experimentation is critical to scientific discovery: Sometimes, you just have to try things to see what works.
There will also be times when data isn’t available, or the available data doesn’t make sense because historical data no longer applies. For example, Clearbit is growing, and as we move from a startup to a large organization, our historical data won’t be as applicable to our current situation: it describes a business size we’ve outgrown.
Be on the lookout for data that doesn’t make sense, and consider whether changing circumstances will affect the accuracy of incoming data. Once Clearbit grows to enterprise size, much of our data from when we were a mid-sized company will likely no longer apply. We’ll need to combine our team members’ experience navigating similar situations with a lot of experimentation to learn the new rules.
While working with data is an exciting chance to learn how much insight is hiding in ones and zeros, it also comes with reminders that some things are unknowable and tech can only do so much. Use data when data is useful, but remember that human curiosity and gut instinct are an important part of effective lead scoring, and you’ll need to rely on them frequently.
Think two-dimensionally: Fit and intent
Two-dimensional (2D) lead scoring considers two aspects: fit and intent. Fit is how well a potential lead matches your ideal customer profile, and intent is how interested they seem in buying from you.
Fit is less flexible, so look at it first to determine whether your contact is the kind of customer you’ve been targeting. You can show a lead something that interests them to increase their intent, but you can’t really increase a lead’s fit. Because the two behave so differently, keep your fit and intent scores separate.
Intent is more malleable. If someone signs up for emails or downloads a report, those are signs of intent, and how a brand interacts with them can increase or decrease that intent. So start with high-fit, high-intent leads and work your way down to high-fit, low-intent leads. Don’t bother with low-fit leads, no matter how high their intent.
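To make the two dimensions concrete, here’s a minimal Python sketch of that prioritization: fit and intent stay separate, low-fit leads are dropped outright, and the rest are worked from high-fit, high-intent down. The scores, threshold, and field names are all hypothetical.

```python
# Hypothetical 2D prioritization: separate fit and intent scores,
# drop low-fit leads regardless of intent, work the rest top-down.

FIT_THRESHOLD = 70  # below this, don't bother, no matter how high the intent

def prioritize(leads):
    """Return qualified leads sorted by fit, then intent, descending."""
    qualified = [lead for lead in leads if lead["fit"] >= FIT_THRESHOLD]
    return sorted(qualified, key=lambda lead: (lead["fit"], lead["intent"]), reverse=True)

leads = [
    {"email": "a@example.com", "fit": 90, "intent": 80},  # work these first
    {"email": "b@example.com", "fit": 85, "intent": 20},  # then these
    {"email": "c@example.com", "fit": 30, "intent": 95},  # dropped: low fit
]

for lead in prioritize(leads):
    print(lead["email"])
```

Keeping the two scores as separate fields, rather than blending them into one number, is what lets you see at a glance whether a weak lead is a fixable intent problem or an unfixable fit problem.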