I’m a marketer who does marketing to marketers. I’m literally my own prospect. You’d think I’d know exactly what my audience wants.
But the fact is, I can never reliably predict how my audience will behave. Audiences are made up of human beings, and even when they share the same business titles, they all behave differently.
Accept this fact, and it’s easier to understand the value of experimentation.
Experimentation can mean different things to different people. At Metadata, we define it as the act of putting out combinations of audience, creative and offer types to different channels to discover which have the most positive impact on revenue.
A recent survey by Facebook Marketing Science Research showed that advertisers who ran 15 experiments in a given year (versus one) saw about 30% higher ad performance that year; those who had run 15 experiments the prior year saw about a 45% increase in performance, highlighting the positive long-term impact of experimentation.
Whether your goal is to boost website performance, improve lead nurturing or refine your landing page copy, you need to continually experiment with new messaging and design to understand what works for your evolving audience.
Experimentation and testing provide proof of marketing ROI and proof that you’re not wasting money. They help you identify campaign mistakes and learn from them.
Sure, your marketing campaigns may still perform fine without experiments, but you’re leaving too many unknowns in the equation.
Could the campaign have performed better with different copy? Are you leaving revenue on the table by running the same campaigns?
Without experimentation, it’s hard to know if your marketing efforts are actually helping the business grow.
With all these benefits, one would assume marketing teams are experimenting regularly. But in my experience and according to research, they’re not. How come?
“We don’t have the time and resources.”
The time issue is easy to resolve: plan better.
Marketing teams often don’t consider experimentation important enough to be a project task.
But teams could easily adjust campaigns to bake in experimentation two weeks before launch and start the “learning” part of the campaign two weeks early.
Lack of resources is a more valid argument. However, experimentation doesn’t have to increase resource load. It can actually save resources.
You still have the same offer, audience, and desired outcome. But with experimentation, you’re making subtle campaign changes to get better performance.
Experimentation isn’t a resource drain if you consider that it reduces how many campaigns you need. When you experiment, test, and optimize one campaign effectively, you can adjust it to promote new audiences, events or product features as they come up.
You don’t have to create new campaigns each time because you’ve refined the current campaign to where you know what works.
Budget is another common objection. Maybe you can only spend $10,000 a month total on marketing campaigns, and it feels like you’d need all $10,000 just to get answers from experimentation.
One method is to front-load: put 30% of your monthly $10,000 toward experimentation and dedicate the remaining 70% to reliable lead-gen channels and campaigns, such as paid search, where you already know your unit economics.
Another approach is to borrow from next month’s budget to experiment this month. You’ll spend $13,000 this month; ideally, you’ll learn enough that you can spend $7,000 next month and get the same performance, because you learned from the experimentation.
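The two budgeting approaches are simple arithmetic. Here’s a minimal sketch: the $10,000 budget, the 70/30 split, and the $3,000 borrow all come from the examples above, while the helper functions and their names are just illustrative.

```python
# Two ways to carve out an experimentation budget, per the article's examples.
# The numbers are the article's; the helpers are a hypothetical sketch.

MONTHLY_BUDGET = 10_000

def front_load(budget: int, experiment_share: float = 0.30) -> dict:
    """Split one month's budget between proven channels and experiments."""
    experiments = round(budget * experiment_share)
    return {"proven_channels": budget - experiments, "experiments": experiments}

def borrow_forward(budget: int, borrowed: int = 3_000) -> dict:
    """Overspend on experiments this month, underspend next month."""
    return {"this_month": budget + borrowed, "next_month": budget - borrowed}

print(front_load(MONTHLY_BUDGET))      # 70% proven channels, 30% experiments
print(borrow_forward(MONTHLY_BUDGET))  # $13,000 now, $7,000 next month
```

Either way, total spend over the period stays the same; what changes is how early you buy your learning.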
Last month, Metadata experimented with two homepage variants. One featured a stock image of a person, the other featured an actual Metadata customer with a quote.
We were looking at how often visitors abandon the page and how far down the page they scroll.
For the variant with the stock image, visitors scrolled deeper and the bounce rate was 5% lower than for the variant with the customer and quote. I felt like I had the data to declare the stock image the winner. But on meetings booked, the stock-image variant converted at 0.5% while the customer-image variant converted at 2.7%, more than five times higher on a metric that matters far more for business growth.
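The comparison above boils down to relative lift on the metric you actually care about. This sketch uses the two meetings-booked rates from our test; the helper function is just an illustration, not part of any tool we used.

```python
# Compare two variants on the metric that matters: meetings booked.
# Rates are from the article's homepage test; the helper is illustrative.

def relative_lift(baseline_rate: float, variant_rate: float) -> float:
    """How many times higher the variant's conversion rate is vs. the baseline."""
    return variant_rate / baseline_rate

stock_image = 0.005     # 0.5% of visitors booked a meeting
customer_photo = 0.027  # 2.7% of visitors booked a meeting

print(round(relative_lift(stock_image, customer_photo), 1))  # 5.4
```

A variant can lose on vanity metrics like bounce rate and scroll depth yet win by a wide margin where revenue is made, which is exactly why the next lesson matters.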
Lesson learned: Always keep your experimentation focused on the performance metrics that matter to your business.
Speaking of metrics … proper measurement is vital to experimentation. I’m not talking about just viewing a Google Analytics dashboard, but doing regular experiments to verify that your data reflects real-life customer engagement.
Learn how to use Google Optimize to test and measure website variants. And use a tool like Metadata to test and optimize your ads.
Also, be patient with experimentation. Revenue doesn’t happen when users click. If your sales cycle is 90 days, you may not know how a campaign truly performed for months. So don’t judge a campaign until people have moved through the funnel.
And finally, there has to be cultural support for experimentation within the company. You’ll be asking for longer deadlines and additional creative resources.
If the company culture doesn’t support learning from failure then it’s going to be harder to implement experimentation.
You’ll never fully know your audience’s psychology. But Metadata’s AI-based technology can get you as close as possible. Our platform generates hundreds of marketing experiments based on the campaign goals you set and at a pace not humanly possible. If you’re looking to drive more revenue through experimentation, we’d love to help.