After a trillion‑dollar sell‑off in software stocks and the so‑called “SaaSpocalypse”, it is not just founders who are nervous – CMOs and marketing leaders are questioning the future of their products and positioning.
In this episode of the FINITE Podcast we sit down with Antony Cousins, VP Product at Meltwater, to interrogate the death‑of‑SaaS narrative head‑on. We unpack how investor decks, agentic AI and vibe‑coded tools like Claude Cowork have fuelled the story that AI will replace subscriptions.
Ant brings a rare perspective, combining a career in Ministry of Defence tech roles with frontline communications work in Iraq, Afghanistan and the Arab Spring, leadership in AI startups and now product leadership at Meltwater. He has spent the last decade at the intersection of AI, media intelligence and reputation.
If you are a B2B marketing leader wondering how AI will affect your revenue, this conversation will help you separate existential risk from lazy narrative – and design for growth, not just survival.
Inside you’ll find…
- What CMOs and agencies should change now: commercial models, junior hiring and how they collaborate with software providers.
- Why “SaaS is dead” is an oversimplified narrative – and where AI agents and vibe‑coded tools genuinely threaten software.
- How data moats, long‑term memory and UX become the defensible edge in an AI‑native SaaS ecosystem.
Listen below, on Apple Podcasts or Spotify
Or watch on YouTube
And once you’re done listening, find more of our B2B marketing podcasts here!
The FINITE Podcast is sponsored by Clarity, a full-service digital marketing and communications agency. Through ideas, influence and impact, Clarity empowers visionary technology companies to change the world for the better.
Find the full transcript here:
Jodi (00:01)
Hi Ant, how are you doing today?
Ant Cousins (00:05)
Thank you for having me.
Jodi (00:07)
It’s a pleasure to have you on the FINITE Podcast. I previously heard you speak at another event and was so blown away by the breadth of your knowledge I thought, “We need to get you on the show.”
I’m really looking forward to hearing your take on the dreaded “death of SaaS” – or the “SaaSpocalypse”, or whatever you want to call it. I have a feeling there’ll be some myth‑busting today, and that we’ll be really challenging the assumptions of our listeners, who are all marketers and marketing leaders in SaaS. So it’s a very front‑of‑mind topic for them at the moment.
Before we dive in, could you give us a bit of a lowdown on who you are, how you got into tech and SaaS and marketing, and your journey so far?
Ant’s journey: from MOD tech to AI and SaaS
Ant Cousins (00:56)
I’ll take “breadth of knowledge” as a compliment and not just “you’ve been around a long time”!
I started out at the Ministry of Defence in 2000 and spent seven years in tech roles. I’ve always had a background in tech, building applications many years ago. After seven years, I got onto a leadership scheme at the MOD and they basically said, “If you do one more job in tech, you’re screwed. You’ve got to do something different.”
This was back in the day when people still read newspapers. I remember saying: “In the newspaper it always says, ‘an MOD spokesperson said…’ – how do I become an MOD spokesperson? That sounds like fun.”
This was at the height of the Iraq and Afghanistan conflicts. The MOD was front‑page news most days. It was probably the busiest press office in Whitehall at the time. They said, "You've got zero background in communications. This is not the time to join the press office. You're not ready." And I thought, "How hard can it be?"
Fortunately, I’d fixed the PC of one of the press officers who was recruiting at the time. He managed to get me into the list of about 20 people applying for the press officer role. I got the job and became a press officer – and I found my second passion, which was communications.
From there my career went off on a tangent: first into Iraq and Afghanistan, offering on‑the‑ground media advice; then into counter‑terrorism, which was more about countering Al‑Qaeda’s single narrative on the streets in London; then into the Middle East during what turned out to be the Arab Spring – all the “social media revolutions”.
That was a turning point for me: seeing this new technology – social media – and its impact in non‑democratic countries, where people were effectively getting themselves new governments because enough of them wanted change, and they were using social media to help achieve it.
I remember thinking, “This technology is amazing. Look how much better it’s going to make our world. All the power is now with the people.” That was obviously naive. We then had Cambridge Analytica, the Brexit campaign, misinformation, disinformation, fake news, filter bubbles – all the issues we’re still grappling with now were new ideas back then.
But it gave me a passion for social media as a specific part of communications.
In 2014, I jumped into an AI startup and fell back in love with technology as my primary role. Since 2014, my job has been either AI, communications, or both. For the last ten years I’ve been in AI startups.
Most recently I was at a startup called Factmata, which I sold to Cision. I stayed at Cision for a couple of years advising on AI strategy, then moved to Meltwater after reviewing the market and asking: "Who's really spending on AI? Who really understands where it's going?" For me, Meltwater was the only company in that top‑right mental box. I was very happy to find a role here.
At Meltwater I lead on AI strategy and how AI integrates across our product suite – riding the wave of the “year of AI”. Those of us who’ve been in it for a decade kept saying, “This is the year, this is the year everyone gets AI.” I think 2023 was that year, and you realise: that’s what it feels like when everyone is into AI. So I’m riding the wave and enjoying it.
Where did the “death of SaaS” narrative come from?
Jodi (04:46)
Really interesting story – and you clearly had an early understanding of the impact AI would have, before everyone else was talking about it. I also can’t believe someone told you that you needed to get out of technology – that you shouldn’t have another tech role. That just would not happen today.
It sounds like you have a strong grasp, through your career, of the broader narrative of tech, innovation and AI, and how it interacts with contextual factors like terrorism and global crises. It’s a really interesting perspective.
I’d love you to set the scene for us and give your view on where the “death of SaaS” narrative has come from. What were the triggers and signals that SaaS could be under threat?
Ant Cousins (05:51)
There’s a melting pot of factors – a lot of different influences. One of the primary influences is the large language model providers themselves. The Anthropics and OpenAIs of the world have raised huge amounts of money on the hypothesis that their models will be ubiquitously available for everyone to use, for everything.
Part of their narrative is: “Invest in us because we’re taking over the world. Invest in us because everyone’s going to use this technology and generate so much value.”
Between November 2022, when ChatGPT first hit the masses, and 2023, when the hype really ramped up, that narrative kept escalating. I think 2024 was probably peak hype – especially towards the end of 2024 – around “this technology is going to change the world”. But people were asking: where is the value? Where is it actually changing our roles and our lives? At that point we hadn’t really seen it.
So the hype train had to escalate. It went from “people will use it in their daily lives” to “work is going to fundamentally change; the way you do your job is going to fundamentally change; we’re going to save so much money because we’ll automate so much work”.
That hype train continued to escalate. Now I think it’s at its peak, with people publishing reports left and right saying all these jobs are going to disappear.
So in part, expectations around the “death of SaaS” have come from the LLM providers’ own need to drive that story – to justify valuations and investment. They’re on that train and can’t get off; it has to keep escalating to keep the money train going.
Initially, SaaS providers themselves weren’t really caught up in that. In 2023–2024, most SaaS providers were saying, “This is great for us. We can increase the quality of our products, add features, automate painful tasks.” Most SaaS companies were on the train, happy to pay the ticket to the LLM providers.
That escalated to the point where SaaS providers were talking about automating huge chunks of their products, which actually played into the broader narrative of “automation of work”, because where do people do their work? In software. So SaaS providers became part of that hype train.
What changed – and this is where the “SaaSpocalypse” comes in – is that it reached a point where you didn’t necessarily need a SaaS provider to build software you could use to get value.
You can now go to tools like Claude, Cowork, Lovable and others and essentially “vibe code”. You can prompt: “Give me an application that lets me generate a press release.” You get a thin application on top of Claude or ChatGPT. That part of the hype train – “you can just build your own apps” – is less than a year old, but the capabilities have become good enough that people started to imagine:
“Hang on, it’s not just thin wrapper applications anymore. People will vibe‑code new HR software. They’ll vibe‑code their Salesforce. They’ll vibe‑code their HubSpot.”
That was the start of the “SaaSpocalypse” – when people realised it might not just be simple tools users can build for themselves, but entire applications and suites. That took a chunk of cash out of SaaS valuations: more than a trillion dollars of value wiped off for those companies combined.
That’s the narrative – the story behind the impact. But there’s a lot of nuance and a lot of myth in there.
The first myth is this: yes, you can go to Claude or Lovable and vibe‑code yourself an application – and it’s very easy to get to the first 80%. It’s very easy to produce something that looks great. You can say, “It took me 20 minutes to build this app that does this task for me, amazing.”
If all you needed was that simple task, then great. Use it, you’ve saved some time and lived the dream.
But no one wants to deal with the edge cases – the nuance of “it doesn’t quite work when I try to do this”.
No one wants to listen to other people’s requirements. If you want your team to work in a consistent way, if you want your team to benefit from that app, you need to share it. And other people will have different views on how they want to use the software.
Then you have compliance, governance, workflows, permissions. There is a lot of challenging detail in building software. A SaaS provider will tell you that’s exactly why they exist – but it isn’t always well understood by end users, and clearly hasn’t been well understood by the market.
I think there’s still plenty of room for people to build software, because I don’t think anyone wants to spend their budget training staff to build software and deal with edge cases, governance, workflows, compliance and regulatory requirements. That’s what software providers are for.
So there’s a big myth to burst in the idea that “we’ll just vibe‑code a new Salesforce”. It’s not that easy.
Is there any truth in the “SaaS is dead” narrative?
Jodi (11:38)
Honestly, it all sounds a little bit ridiculous when you put it like that.
The sheer resource and nuanced understanding required – not just of engineering and infrastructure, but of UX and platform optimisation – even in “simple” platforms like project management tools. It’s about deeply understanding and anticipating user needs. I don’t know how your project managers would feel if you just vibe‑coded them a platform that wasn’t suited to them, and then didn’t help with edge cases or synchronising the entire team.
To take a contrarian stance: is there any part of it you do think is valid? Could the “death of SaaS” at least lead to a long‑term devaluation of some SaaS platforms?
Ant Cousins (12:54)
There are definitely some use cases.
The challenge for a SaaS provider is that you have to build software everyone can use. In our industry, we have the classic challenge of building for in‑house teams, for agencies, for small agencies, mid‑sized agencies, huge agencies.
All of those users have slightly different requirements and different levels of process maturity. That’s the classic challenge: we have to build one interface, and we can adjust it and give you ways to manage that within the product.
But the more we account for lots of personas, the more complex the software becomes. Then smaller, less mature organisations – or small agencies – look at it and think, “This doesn’t feel like my kind of software. I just want something quick and easy.”
SaaS providers are always trying to balance UI, complexity and maturity with the size and type of organisations they’re selling to.
Where this gets exciting for companies like us is that, instead of fighting that trend, we can enable it. We have a huge amount of proprietary data.
We’ve done the hard work of building contracts and relationships with around 700 media organisations worldwide to access their data and put it into our database. We have about two and a half trillion data points. We have around two billion documents coming in each day.
We’ve built scalable processes to enrich every single one of those documents consistently – sentiment, entity detection, reach, engagement, views, and so on. You can’t vibe‑code that. No one is vibe‑coding that level of hardcore engineering. That’s relatively safe.
On top of that, at the user‑engagement layer, if you have a very specific use case – “this is how my team wants to work; we want to do this one thing” and it’s relatively simple – then by all means, go and vibe‑code it. You absolutely should.
And we’ll support you, because we have an API, an MCP server, and the agentic enablement you’d expect – which is already leading edge in our industry. We’re leaning into that opportunity. We want you to build more stuff, because it means you use more of our product.
So while you still have our application for the things you can’t vibe‑code, if you have niche use cases you want to cover, go ahead – and we’ll support you.
In our specific industry, because of that moat of hardcore data engineering and scale, we see this as a growth opportunity: people wanting to build their own software. The more we work with clients on that, the more we understand: if three‑quarters of our clients have just vibe‑coded an app that does the same thing, maybe we should build that into the platform.
I see it as an opportunity for truly agile, collaborative software development.
Where I do feel some sympathy is for companies that don’t have that moat. If you don’t have proprietary data, and your proposition is basically “we’re good at process, and we’re good at thinking things through”, that’s a weaker proposition now.
We still do that UX thinking – understanding what clients actually want to do, what creates value for them, where the AI capability is, what the bleeding edge looks like, and how the human interaction fits around it. We’re still investing in that. But if you don’t have unique data and hard engineering problems, you’re in a different position. That’s where some devaluation is real.
Agencies, AI and vulnerable business models
Jodi (17:00)
It sounds like a really creative and innovative response: enabling vibe‑coding rather than resisting it.
I’ve heard a lot of CMOs say they’re repositioning or moving towards being “AI agencies”, but it sounds like those solutions aren’t really tackling the core problem – which is that users want flexibility and want to collaborate with software providers to better suit their needs.
I’d love to talk about the companies that don’t have as much inherent or long‑standing value. What do they look like? How do you spot them? Which kinds of companies do you think are genuinely threatened by this shift?
Ant Cousins (18:05)
Let’s be honest: the traditional agency model is fundamentally “we’ve done this a hundred times, so we know how to do it really well, and we’ll sell you our time so you can benefit from our experience and expertise, because we’re always learning and staying on the bleeding edge”.
That model was already transitioning – from “pay us for our time” to “pay us for outcomes”. AI has really accelerated that shift.
If you’re still basing your commercial model on time, you’re already in trouble. You absolutely need to move towards an outcome‑based model.
If anyone's listening and wants a concrete example: there's a small agency called Hard Numbers who came out very early saying, "We will guarantee you this outcome." It requires an honest agreement – they won't commit to something unrealistic – but they'll have the hard conversations to agree what's possible and then commit to delivering it. Every agency should be moving toward that kind of model.
Because in a world of automation, if your model is time‑based, the more AI you use, the less money you make.
So I’d look at any company with a time‑based model and say: you’re in trouble. If you haven’t started the transition, you’re already behind the curve. It’s not too late, but you need to speed up.
The other factor is access to data. If your secret sauce and IP lives entirely in the minds of the people in your business, that’s under threat in a way many people haven’t realised yet – because the technology hasn’t quite got there.
When you use Claude or ChatGPT and you prompt “give me a press release” or “write a social media post” or “think of a campaign plan”, the more context you give, the better the output.
If you just say “give me a social media post”, it’ll be poor. But if you say “give me a social media post, here are the last 50 posts we did, here are the ones that got the most engagement, here’s the audience, here’s the call to action”, then the more context you add, the better the result.
If you give enough relevant context, the output can actually be very good. But it’s hard to give that context right now. That’s what we as software providers are working on – making it easier to provide context.
We’ve aligned all our assets – saved searches, source lists, filter sets, author lists – so you can start to use those as context. That’s where we are right now; we’re already building that, and much of it is already out.
But that’s context, not memory – and memory is becoming the new battleground for user interaction.
If you spend time on LinkedIn (and maybe this is just my feed), you’ll see constant talk about context graphs. That’s about AI’s increasing capability to build long‑term memory: how you do a job, what worked, what didn’t. That’s the battleground we’re seeing between SaaS providers.
I don’t think many people have realised this is coming for what they see as their “human secret sauce”. People think:
“I remember what worked two years ago; I know where things went wrong; I remember when we tried that and why it failed.”
That’s seen as an inherently human trait. But that, too, will become part of AI.
Another issue for agencies is that when someone walks out the door, all that memory walks out with them. The value of that experience is gone.
Agencies need to build long‑term memory they can monetise over time. It’s not enough to say “our IP is in our people’s heads”; you need to consolidate that collective experience into the best possible version of the process.
I don’t think many people have started that process, or even understood it. But that’s where things are going.
Data, agencies and the “AI hiring freeze”
Jodi (23:08)
It makes sense. We’ve had the “data is oil” conversation since what feels like the Tesla era – “it’s not a car company, it’s a data company”, and that’s why it’s so highly valued.
Hopefully most SaaS companies are already supporting their inherent value with data production, and it sounds like they can find creative ways to start thinking about that if they haven’t already.
It’s interesting, because agencies almost seem to be getting a short‑ to mid‑term boost: there are hiring freezes, people are outsourcing, and they don’t want the long‑term financial commitment of full‑time in‑house hires. It will be interesting to see how that relationship evolves.
Ant Cousins (24:03)
You’ve hit a really important point. For me, one of the most painful parts of this AI revolution is that the result has often been hiring freezes or reductions in junior hires.
My argument is: you should be hiring more junior people right now, not fewer.
AI is an abundance technology, not a scarcity technology.
The winners in the dot‑com bubble were not the people who said, “This internet thing is great – how do we achieve the same thing we did before, but with less?” That’s not the mindset that wins in an AI revolution.
The winning mindset is: how do we scale? How do we do more? How do we attract more clients, win more clients, serve more clients? How do we give them better quality than we ever could before?
Raising the top line, raising quality, stealing market share, increasing the size of the market overall – those are the attitudes of the winners in the AI revolution.
If you’re saying, “We want to use AI to cut our costs,” what you’re really saying is that your business model is capped – there’s no more money to make, you’re already winning all the market you can, and all that’s left is to shave the bottom line.
If that’s your model, you’re already in trouble. Your business model needs to change if you believe AI is only useful for cutting costs, and you can’t imagine how it could help you win more share or grow your market.
If everyone in your market feels the same way, that market will disappear.
So it’s not about reducing junior hires. Most of them are AI natives: if they’re coming out of university now, their AI skill set is much stronger because they’ve used it throughout their studies.
You should hire more juniors – but also different juniors. It’s not about more of the same profile you hired before, because the skills you need in future will be different.
Empathy, relationship‑building, critical thinking, genuinely novel creativity – those are the skills we expect to need. The coding and mechanical tasks will be automated.
So: hire more juniors, but hire different juniors. I hope we get over this bump soon – the mindset that AI is only a cost‑cutting tool. I hope we move past that quickly.
Why are leaders defaulting to cost‑cutting with AI?
Jodi (26:47)
You put that so well. It makes the “capped growth” mindset around AI seem almost ridiculous.
Why do you think the immediate instinct for so many leading companies, at the intersection of automation and teams, has been to focus on cost cutting?
Ant Cousins (27:08)
It’s a bit of a vicious cycle.
Remember the narrative: AI will automate everything, therefore you won’t need as many people. The technology is actively being sold as a cost‑cutting tool.
That message comes through investors, boards, leadership teams, CFOs – it’s ingrained. The pitch is: “Use AI and you can cut costs.”
I think that’s a problem created by the LLM providers themselves. Their narrative is wrong.
Their story is: “AI will automate, so you won’t need as many humans and you can cut costs.”
OpenAI, Anthropic and others should be talking about increasing wealth, increasing share and improving quality and volume – not just saving money. If they did that, there’d be less friction and resistance to adoption.
The challenge is that the reason people are buying AI – at board and leadership level – is: “If I buy this, I can cut a bunch of people.” But you need those people to use the technology. You need them to invest time, learn and adapt. They won’t do that if they’re worried they’ll be cut.
We need to change the methodology and messaging – to get people thinking about what more they can do with AI, not what they’re going to lose.
Right now, the underlying thought is: “If I implement this, I can probably cut a few heads, and the CFO will be happy because the bottom line looks better.”
But the resistance you meet in trying to implement those cuts – because you need people to use the AI to get any value – will slow down your ability to realise them.
Everyone is in favour of top‑line growth, though. If you say: “Use this technology really well and we’ll have ten more of you,” everyone benefits.
At the individual level, when people are asked to change and adopt new tech, the first question is: “What does this do to me? Does it increase my status? Does it make my job more fun? Does it make my life easier? Do I look forward to work more?” Those are the reasons people use new technology.
If the fear markers are there – “If I use this, do I still have a job? Do I lose half my team? Half my budget?” – those are reasons not to lean in.
It’s strange more people aren’t talking about this. I wrote a LinkedIn post about it recently and people said, “This makes a lot of sense.”
I don’t understand why LLM providers aren’t messaging this way. Maybe too many people in the hype cycle – analysts, investors – are “money people”. They understand cuts.
“If you’re spending 10 million, I can get you down to 9” is tangible and easy to grasp. “Keep your 10 million, increase it to 10.5 and I’ll double your top line” feels like make‑believe to them. Cutting a known budget feels easier to understand.
Closing thoughts for CMOs and leaders
Jodi (30:43)
That’s exactly what I was thinking – this weird preference for certainty at all costs, and this very logic‑driven era. Marketers feel this too: growth is intangible, you don’t know how much you’ll grow, but you do know how much you can cut.
I think people are waking up to the idea that some AI leaders may be saying things that do not really make sense. Hopefully there’ll be a turn soon, ideally coming from them, because as you say, they should want more people to use AI, not fewer. Fundamentally they should be showcasing a growth mindset.
This has been so inspiring and thought‑provoking. I’m sure you’ve challenged a lot of assumptions and hopefully given the CMOs and marketing leaders listening to the FINITE Podcast some hope. Thank you so much for coming on.
Ant Cousins (31:52)
If those are the people listening, I’d say this: what got you into the business in the first place?
If you created an agency, it wasn’t because you thought, “I’m getting really good at managing costs.” You started an agency because you thought, “I know how to develop value. I think I can scale. I think I can build a business.”
That same attitude is what you should keep when it comes to AI. Ask: how do I get more of the thing I set out to do in the first place?
That would be my parting comment.
Jodi (32:23)
Lovely. Thanks so much, Ant.
Ant Cousins (32:25)
Thanks for having me.