What's up, what's uppp!
Why you're getting this: I'm sharing some tools and approaches we use at Sidetool and personally. I think you'll find them useful.
Zero pressure to stick around, just click unsubscribe if this isn't for you.
Let's get into it.
What we've been building
We built a system that stopped us from losing leads
If you run any kind of sales operation, you know the feeling. A lead comes in, someone's supposed to follow up, and then life happens. A week goes by, the lead goes cold, and nobody noticed until it's too late.
It's not that the team is bad. It's that things pile up. Tasks don't get completed, follow-ups slip, and stuff falls through the cracks. Multiply that by 50 or 100 leads a week and you have a real problem.
We had that problem at Kleva. So we built a system to fix it.
I listened to Lenny's Podcast interviewing Jeanne DeWitt, Vercel's COO, where she talked about building internal systems with AI to solve exactly these kinds of operational gaps. That got us thinking. You can check it out here: lennysnewsletter.com/p/what-the-best-gtm-teams-do-differently
The system does two things.

First, qualification.
Every time a new lead comes into our CRM, an AI pipeline kicks in automatically.
We connected Apollo and Firecrawl to enrich the lead with company data and hiring signals before anyone on the team even looks at it.
Then Claude evaluates everything: the company, the activity history, the source. And qualifies it.
Is this an MQL? A bad fit? Worth pursuing?
By the time a sales rep opens the lead, the answer is already there.
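To make the shape of that pipeline concrete, here's a minimal sketch in TypeScript. In our setup the verdict comes from Claude after Apollo and Firecrawl enrich the lead; the rule-based stand-in below just shows the data flowing through, and every field name and threshold is illustrative, not our actual schema.

```typescript
// Shape of an enriched lead. These fields are hypothetical stand-ins
// for what Apollo (company data) and Firecrawl (hiring signals) return.
type Lead = {
  company: string;
  employees: number;       // assumed Apollo enrichment field
  hiringSignals: boolean;  // assumed Firecrawl scrape result
  source: string;
};

type Verdict = "MQL" | "nurture" | "bad_fit";

function qualifyLead(lead: Lead): Verdict {
  // Placeholder heuristic. In the real pipeline, the enriched lead is
  // sent to Claude and a structured verdict is parsed from its reply.
  if (lead.employees >= 20 && lead.hiringSignals) return "MQL";
  if (lead.employees >= 5) return "nurture";
  return "bad_fit";
}

console.log(
  qualifyLead({ company: "Acme", employees: 40, hiringSignals: true, source: "website" })
);
// → "MQL"
```

The point isn't the heuristic, it's that the verdict is computed and written back to the CRM before a human ever opens the record.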
Second, follow-ups.
This is the part that actually changed our operation.
A cron job runs every hour checking for changes across every lead. If a task gets missed, if someone who was active suddenly goes quiet, if a follow-up didn't happen when it should have, the system catches it and flags it automatically.
It also detects no-shows. If a meeting was scheduled but nobody showed up, it doesn't count as a real interaction.
Nothing sits there rotting without anyone knowing.
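Here's a sketch of what that hourly check looks like. Again, the field names and the seven-day quiet threshold are assumptions for illustration, not our actual Close CRM schema.

```typescript
// Per-lead activity snapshot. All fields are hypothetical stand-ins
// for what the CRM sync would provide.
type LeadActivity = {
  id: string;
  lastContactAt: Date;
  nextTaskDueAt: Date | null;  // null if no task is scheduled
  meetingScheduled: boolean;
  meetingAttended: boolean;
};

// Returns the ids of leads that need a flag: an overdue task, a
// no-show meeting, or a lead that has simply gone quiet.
function findStaleLeads(leads: LeadActivity[], now: Date): string[] {
  const QUIET_DAYS = 7; // assumed threshold
  const flagged: string[] = [];
  for (const lead of leads) {
    const quietMs = now.getTime() - lead.lastContactAt.getTime();
    const overdueTask = lead.nextTaskDueAt !== null && lead.nextTaskDueAt < now;
    // A scheduled meeting nobody attended doesn't count as an interaction.
    const noShow = lead.meetingScheduled && !lead.meetingAttended;
    if (overdueTask || noShow || quietMs > QUIET_DAYS * 86_400_000) {
      flagged.push(lead.id);
    }
  }
  return flagged;
}
```

On Vercel, a check like this would live in an API route and get scheduled hourly via the `crons` entry in `vercel.json`, with the flagged ids pushed back into the CRM as tasks or alerts.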
The stack is Next.js, Supabase, Close CRM, Claude for the AI layer, Apollo and Firecrawl for enrichment, and Instantly for nurturing campaigns. Everything deployed on Vercel.
Here's what I want you to take from this.
One person on our team built this in a couple of days using Claude Code.
One person.
Not a team of engineers, not a six-month project, not an enterprise build.
You describe what you want, go back and forth with Claude, and ship.
Because the building part took days instead of months, we got to spend the rest of the time actually using it, testing it, improving it.
After a week of iterations and real usage, the system grew into something way more solid than the original version.
If you have a repeatable problem in your operation, something that depends on people remembering to do things correctly every single time, you can probably build a system that handles it.
And you can probably do it this week.
Every lead that doesn't fall through the cracks is revenue that would otherwise have been lost. Every follow-up that happens on time is a deal that stays warm.
Whether you're running sales, ops, support, onboarding, whatever, the ability to build internal tools like this used to require a dev team and months of work.
I'm not saying everyone should become a developer. I'm saying everyone now has the ability to solve their own problems with product.
And if you have that ability, you should use it.
If you want to see how we set this up, the architecture and the workflow, reply and I'll share it.
What caught my attention this week
Sam Altman admitted OpenAI "screwed up" GPT-5.2
During a developer town hall in San Francisco on January 27, Altman did something unusual. He was honest about a mistake.
When asked about user complaints that GPT-5.2 writing is "unwieldy" and "hard to read," he said: "I think we just screwed that up."
He explained that they deliberately focused on intelligence, reasoning, and coding because that's where the enterprise money is. Writing quality took a back seat. "We did decide, and I think for good reason, to put most of our effort in 5.2 into making it super good at intelligence, reasoning, coding, engineering, that kind of thing."
But the quote that stuck with me was aimed at founders: "If GPT-6 is extremely powerful, will your company be excited or desperate?"
Think about that. If the next model makes your product better, you're in a good spot. If it makes your product irrelevant, you have a problem. That's the question every founder building with AI should be asking themselves right now.
He also said OpenAI is planning to "dramatically slow down hiring" because they can do more with fewer people. And that he expects GPT-5.2 level intelligence to cost 100x less by the end of 2027.
"By summer, people working with frontier AI will feel like they live in a parallel world"
Jack Clark, co-founder of Anthropic, wrote something that I think nails where we are right now.
His point: the biggest limitation on benefiting from AI is no longer the technology. It's whether you have the time, curiosity, and access to actually engage with it. The advances are already here, but most of them are invisible to most people.
He compared it to the crypto economy, which moved at a weird speed relative to everything else.
But with a key difference: the AI economy already touches way more of our regular economic reality than crypto ever did. So by summer 2026, parts of the digital world will be moving with counter-intuitive speed relative to everything else.
The gap between people who use AI daily and people who don't is getting wider fast. And it's not about intelligence or technical skill, it's about engagement.
The fact that you're reading this means you probably know someone who feels way behind compared to you.
Only 5% of people can tell AI video from real video now
I've talked before about how I was using HeyGen to create videos for our social media, and most people couldn't tell it was AI.
Well, that gap just got a lot smaller.
Runway just published a study called "The Turing Reel" alongside their Gen-4.5 model. They showed 1,043 people 20 videos, half real, half AI-generated, and asked them to identify which was which.

Only 5% could do it consistently.
The model still has limitations. Objects sometimes disappear between frames, effects sometimes happen before their causes, and actions almost always succeed even when they shouldn't.
But the fact that 95% of people can't tell the difference is a threshold we weren't supposed to cross this fast.
The implications go beyond video production. If you can't tell what's real, you need new systems for trust. Runway includes C2PA metadata in all their outputs to certify origin, but that's a standard that barely anyone checks yet.
If you want to test yourself, you can take the test here: runwayml.com/research/theturingreel
That's it for this week.
If any of this was useful, reply and let me know. I read everything.
-Ed
Did you enjoy this newsletter? Say thanks by checking out one of my businesses:
Liked this? Sign up here to get more.
