
April 2026

Why Your Domain Expertise Is More Valuable Than Your AI Prompt


Most business owners are quietly paying for software that almost fits their business.

A platform that locks them in. A vendor that ships features they didn't ask for. A line item that climbs every renewal whether the team uses it or not.

Last month, an owner in a room I was in mentioned, almost as an aside, that he'd built his own property management operating system with AI. In seven days. The kind of thing his industry rents from vendors for around $200,000 a year.

The room went quiet.

You could feel everyone recalibrate.

Here's the shift: it's not about software getting cheaper. It's about your unfair advantage finally becoming buildable.

Ben already wrote the software-side argument from this same session: the cost of software is moving toward zero. He's right.

But the lesson isn't "I should build all my own software now."

The real lesson is simpler.

AI makes domain expertise executable.


The Build Was Impressive. The Builder Mattered More.

Everyone asks one question: "What tool did he use?"

The answer is the trap.

You'll copy the tool. You won't copy the understanding.

The build worked because he knew the business. He knew which fields mattered and which ones only existed because a vendor needed them. He knew which reports helped people make decisions. He knew the weird exceptions that break clean workflows. He knew the handoffs that quietly fail near renewal time.

That's the part a generic software product never has on day one.

It's also the part a junior "AI hire" doesn't have. A smart kid can learn the tools. They can't walk in and know why a dashboard is lying, which tasks create risk, or why the same customer record shows up three different ways.

The AI didn't replace business knowledge.

It finally gave business knowledge a build surface.


The Prompt Is the Visible Part. The Judgment Is the Asset.

Most companies get this backwards. They look at a story like the seven-day build and decide they need to find the person who can prompt the best.

That's the wrong hire.

If you ask AI to build a workflow and you can't explain how the workflow should actually work, you get noise. Maybe polished noise. Maybe useful-looking noise. But still noise.

The model isn't the bottleneck. The clarity is.

The owner in that room didn't succeed because he had a magic prompt. He succeeded because he could look at an AI-built version of his business and say:

This field is wrong.

This report is missing the decision.

This handoff will fail at renewal.

This is how my team actually works.

This step needs to happen before the manager ever sees it.

That's not technical knowledge. That's operator knowledge. And right now, operator knowledge is wildly underpriced.


Why This Shift Happens Now

Three years ago, this story would have started with a developer. Scoping took weeks. The first useful version cost real money before anyone knew whether the workflow was right.

Today, the operator can build the rough first version himself. Then someone with software judgment hardens what works: permissions, data quality, security, reliability, and edge cases.

That changes the economics. The first question is no longer, "Can we afford custom software?" It is, "Can we prove this workflow is worth owning?"

That is where JOV fits. We are not trying to replace operator judgment with tools. We are trying to turn operator judgment into working systems.


Your Unfair Advantage Is the Stuff Vendors Never Learn

Every business has an operating layer vendors never learn: the Friday spreadsheet, the client report stitched together from three systems, the intake exception, the renewal workflow carried by a veteran employee.

That stuff is invisible. It's also where AI gets real.

Not "write me a strategy." Build the thing that removes drag from the business you already understand.

That's why the seven-day build matters. Not because every owner should spend the weekend replacing their software stack. Most shouldn't. Security, permissions, data quality, and production reliability still matter.

It matters because it proves the bottleneck moved. The bottleneck used to be whether you could afford software built around your business. Now it is whether you can describe the work clearly enough for AI to help build it.


What This Looks Like at SMB Scale

The seven-day build is the dramatic version. Here's the everyday one.

A multi-location wellness operator I work with runs the business on a vertical-specific accounting and booking platform that costs around $150,000 a year. The kind that keeps everything in a closed ecosystem because that's how they upcharge you. Every adjacent feature is a paid line item.

The person handling operations isn't a developer, but he knows the business. Over the last several months, he has been using AI to build small apps that work around the closed platform. Each one replaces one paid upsell. He's stacking these.

Ben is helping turn the working ones into production-grade tools. That's the maturity arc: an operator builds the first version because he knows the business. Someone with software judgment hardens what works for security, reliability, and the edge cases. Neither half does it alone.

That's the model. It scales down to one retired line item at a time, and up to a $200K system in seven days.


Two Bad Reactions, One That Actually Works

We see two predictable reactions to this shift.

The first is DIY bravado. "Great, software is free. We'll build everything ourselves."

Result? A prototype becomes one person's private system. No permissions model. No audit trail. No one else trusts the data, so the old vendor stays live in parallel.

The second is vendor reflex. "Great, let's hire an AI person and have them figure it out."

Result? Someone learns the tools but never the business. They automate the obvious stuff and miss the expensive stuff: the exception, the handoff, the renewal risk, the report that actually drives a decision.

The version that works is a partnership.

One operator inside who can say what good looks like. One implementation partner outside who can turn that into AI workflows, software, automations, and guardrails.

Start with the workflow that already has a number attached to it: the tool you keep renewing reluctantly, the report that takes six hours, the handoff that creates rework, or the process your best employee carries in their head.

If you can't put a dollar, hour, risk, or revenue number on it, it can wait.


The Next Seven Days

Don't start by asking, "What app should we build?"

Start with three questions:

  1. What part of the business do we understand better than any vendor ever will?
  2. What recurring workflow is expensive because it's trapped in people, spreadsheets, or rented software?
  3. Who inside the business can judge whether the AI-built version is actually right?

That third question is the one most people skip.

Without that person, AI produces demos. With that person, AI produces operating leverage.

That's the shift. For SMB owners, it's the opportunity.

Start here: which of these is costing you the most?

  • [ ] A vendor subscription for features your team doesn't use
  • [ ] A manual report or process your best employee carries in their head
  • [ ] A workflow trapped between three systems with no clean handoff
  • [ ] A renewal price that climbs every year

Pick one. Then Let's Talk. We'll use it to scope the first useful build: the smallest workflow that can free capacity, retire spend, reduce risk, or unblock revenue.



Only 11% of Companies Are Scaling AI. The Rest Keep Starting Over.


You automated a workflow. It worked. Maybe you saved six hours a week on reporting, or cut your meeting prep in half. Your team noticed. You felt the win.

Then you tried to do it again in another department and hit a wall.

That wall is where most businesses get stuck. Not at the start. After the start.

Here's the number I can't get past: only 11% of companies qualify as AI leaders in KPMG's Q1 2026 Global AI Pulse.

Not 11% using AI. Almost everyone is using AI.

Eleven percent getting coordinated results across the business.

The other 89% aren't failing. They're just not compounding.

And yes, KPMG's sample is larger companies. But I see the same pattern even faster in SMBs, where smaller teams have less room for vague ownership and broken handoffs.


The Word Nobody Wants to Hear

Here's the word most business owners don't want to hear: governance.

I know. You heard "governance" and thought: lawyers, compliance, red tape.

That's not what I mean.

McKinsey made the point clearly in March 2026. Governance isn't the thing that slows AI down. It's the thing that lets it expand.

Think about it practically. If nobody owns the output, if nobody defined what success looks like before rollout, and if there's no plan for when the system gets it wrong — you can't responsibly give AI more responsibility inside the business.

So it stays at the edges. Drafting a few emails. Summarizing a few notes. Maybe saving some time.

But it never changes the way the business actually runs.

That's not a governance failure. That's a governance vacuum. And it's the reason most companies stay stuck after their first win.


Why the Second Workflow Is Harder Than the First

Here's the pattern I keep seeing.

First workflow works great. Owner gets excited. Tries to roll out three more at once. Each one gets managed by a different person, with different tools, different standards, and a different definition of success.

Six months later you have four AI "projects" and no AI "system."

KPMG's Global Tech Report puts a number on the operational problem: 51% of tech executives say legacy processes are contributing to poor ROI on their tech investments. In smaller companies, Goldman Sachs found the same gap — 93% of small businesses using AI say it's had a positive impact, but only 14% have fully integrated it into core operations (Goldman Sachs 10KSB, March 2026).

That's not about old software. It's about old handoffs, unclear ownership, and workflows nobody fixed before layering AI on top.

The first win doesn't require a system. You just need one person, one problem, and one tool. But the second win? The third? Those require something the first one didn't: a repeatable model for how AI gets deployed, measured, and improved inside the business.

Without that model, every new workflow starts from scratch. And starting from scratch every time is how you stay in the 89%.


What the 11% Do Differently

When I look at what separates the companies getting coordinated results from the ones collecting standalone wins, three things show up every time.

1. One person owns AI outcomes, not AI tools.

Not a Chief AI Officer. Not whoever is most "tech-savvy." The person who owns the business outcome the workflow is supposed to improve. If AI is supposed to cut reporting time, the person who owns reporting owns the AI outcome too. Ownership follows the metric, not the technology.

2. Success is defined before rollout, not after.

The 11% don't launch AI and then check if it helped. They name the metric first. Time saved. Errors reduced. Revenue influenced. If you can't name the metric before you start, you're not ready to scale.

3. Guardrails are built into the workflow, not bolted on later.

What data can the tool touch? What requires human review? What happens when the output is wrong? The companies scaling AI answer these questions before the second workflow launches. Not after the first incident.
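
If it helps to see those three practices as one artifact, here's a hypothetical sketch of a per-workflow record that answers them before launch. The field names and sample values are illustrative, not a standard:

```python
# One record per AI workflow, filled in before rollout.
# Field names and values are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class WorkflowGuardrails:
    owner: str            # who owns the business outcome, not the tool
    success_metric: str   # named before rollout, not after
    data_allowed: list    # what data the tool can touch
    human_review: str     # what requires review before it ships
    failure_plan: str     # what happens when the output is wrong

reporting = WorkflowGuardrails(
    owner="Reporting lead",
    success_metric="Weekly report prep: 6 hours down to 30 minutes",
    data_allowed=["sales figures", "job tickets"],
    human_review="Owner signs off before the report is distributed",
    failure_plan="Flag it, revert to the manual report, log the miss",
)
```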

None of this is complicated. But it's the part almost everyone skips.


What Happens When This Clicks

When those three things are in place, something changes.

The second AI workflow deploys in days, not months. The third is faster still. Each one inherits the ownership model, the measurement framework, and the guardrails from the last.

That's what "AI leader" actually means in the KPMG data. Not more tools. Not bigger budgets. 82% of AI leaders report meaningful business value; among non-leaders, 62%. Same market. Same access to tools.

Different operating discipline.

The gap isn't about adoption anymore. It's about whether your AI efforts compound or just coexist.


This is the part most businesses skip. Not because they don't believe in it, but because nobody walks them through it while the work is happening.

That's how we operate. We start with the bottleneck. Find the quick win. But while we're building that first workflow, we're asking the questions that make the second one faster: who owns this, how do we measure it, and what are the guardrails?

We built this framework with a Dallas foundation from scratch: policy, data classification, ownership model. It's the same approach we bring to every engagement, because it's how AI stops being a project and starts being the way you run the business.

If you've had your first AI win and want the next five to stick, let's figure out how to pair these tools with the people who actually know your business. Let's Talk.



AI Slop Is a Confession


You've seen it. The LinkedIn post that reads like a robot summarized a robot. The sales email that opens with "Dear [First Name]" and goes downhill from there. The blog post so generic it could be about any company in any sector on any planet.

"Slop" became Merriam-Webster's Word of the Year in 2025. The American Dialect Society picked it too. Everyone agrees the problem is real.

But here's the question nobody's asking: why does the slop exist?

The Amplifier, Not the Problem

I was in a room full of business owners recently when my CTO reframed the whole conversation. He compared AI to a guitar amplifier.

If you're a great guitarist, an amplifier lets you fill a stadium. If you're a terrible guitarist, it just makes you louder and noisier to more people.

Same tool. Same technology. Completely different outcomes. The variable is the person plugging in.

That's AI right now. The same AI subscription that produces thoughtful, specific, useful content for one person produces pure garbage for the person sitting next to them. The technology didn't change between those two desks. The expertise did.

And this doesn't change just because the AI gets more sophisticated. If you hook a powerful system up to a broken process managed by someone who doesn't understand the domain, you don't fix the problem. You just automate the creation of slop at scale.

The Confession Nobody Hears

Here's where it gets uncomfortable.

When someone says "it made AI slop," they're making a confession. They're telling you, without meaning to, that they couldn't coach AI into producing good work. Not because the AI can't do it. Because they didn't know what good looked like in the first place.

Think about that. If you can't recognize bad output, you can't fix it. If you can't define what good looks like, you can't direct the tool toward it. The slop isn't an AI failure. It's a skills gap wearing a technology mask.

I've been guilty of this too. I'm not a natural writer, so the amplifier didn't work for me out of the gate. I had to build the expertise first. The AI only got good once I learned what good looked like.

This isn't just a hot take from a room. Harvard Business School published research in March 2026 that backs it up: AI helps people generate ideas and frame problems, but it can't help them execute when they lack the experience to know what good execution looks like. The researchers put it plainly: when a task requires "concrete application and context-bound nuances," the person without lived experience stays at a disadvantage, AI or not.

AI doesn't close the expertise gap. It highlights it.

The difference between slop and signal isn't the software. It's the driver steering the prompt.

The Tale of Two Prompts

The Junior Hire (No Domain Expertise)

Prompt: "Write a warranty claim for this HVAC repair: 'unit dead. capacitor blown. replaced it.'"

Result: A vague, three-paragraph letter that no warranty clerk would approve. Missing the model number, the failure code, the diagnostic readings, the part specs. Slop.

The 15-Year Ops Lead (Deep Domain Expertise)

Prompt: "Draft a warranty submission for a Carrier 48TCED06 RTU. Use Condition/Cause/Correction format. Field data: 70/7.5 mfd 440V dual run cap vented with oil leak. Contactor points pitted from high-amp draw. Compressor windings verified good (megohm test >500M). Replaced cap and 3-pole 30A contactor. 2 hours total: 0.5 diagnostic, 1.0 repair, 0.5 system test. Tone: clinical, no fluff. Address to Carrier National Warranty Dept."

Result: A clean, compliant, ready-to-submit warranty claim that gets the business paid.

Same AI. Same field notes. One person had the domain knowledge to direct it. The other didn't.

What This Means for Your Business

If your team is producing mediocre AI output, don't cancel the subscription. Look at who's driving.

Last September, HBR reported that 41% of workers are already dealing with "workslop," memos and reports that create more rework than they save. Every incident costs about two hours to clean up.

The answer is pairing AI with someone who actually knows the work. Someone who can look at what the AI produced and say, "No, that's wrong. Here's why, and here's what right looks like."

In most SMBs, that person already exists. Your operations lead who's been doing the work for fifteen years. Your sales manager who can spot a bad proposal in two sentences. Your controller who knows which numbers actually matter.

They don't need to become AI experts. They need to become AI editors. The person who knows the work is the difference between slop and signal.

The Real Question

The next time someone shows you AI slop (a terrible email, a generic blog post, a report that says nothing), don't blame the technology.

Ask who was driving.


If your team is bleeding hours cleaning up mediocre AI output, we should map out your bottlenecks. Let's figure out how to pair these tools with the people who actually know your business. No pitch, just shop talk. Let's Talk.

What Block Gets Right and Wrong About AI-Driven Organizations

Block recently published an essay arguing that AI will replace organizational hierarchy — that the span-of-control constraint governing every large organization since the Roman legions can finally be broken. The essay, introduced with an endorsement from Sequoia, spends considerable time on military history before arriving at Block's vision: a company organized as "an intelligence" rather than a hierarchy, where AI maintains a "world model" of operations and coordinates work that previously required layers of human management.

The piece is ambitious. It is also roughly 80% historical context, 15% vision, and 5% acknowledgment that none of this exists yet. Let's extract what's actually useful.

They Don't Want to Learn AI. They Want the Easy Button.


We recently hosted an AI session for a group of business owners. We had slides. We planned for a 30-minute presentation and 30 minutes of Q&A.

They kept us for almost three hours.

Not because the slides were great. Because every question opened another question. The room included owners, investors, and executives from financial services, construction trades, property management, and protection services. Industries that have nothing in common except this: they all know AI matters, and none of them are sure what to do about it.


"I'm Not Creative Enough to Know What Problems to Bring to AI"

One exec said this out loud. Nobody laughed. Everyone nodded.

This was a successful, experienced leader being honest about a specific blind spot: he couldn't picture what AI does for his specific role. Not "AI can improve efficiency." He needed to know what it looks like on a Tuesday morning when he sits down at his desk. He couldn't even frame the right questions to ask.

These are leaders with vision — that's how they built what they built. The gap is translating "AI matters" into a picture of what it actually does inside their business. And that gap is everywhere. Kellogg just published research naming this exact pattern. They call it Stage 1. Your people are using ChatGPT for the stuff they find annoying, but there's no strategy. No structure. Nobody connecting it to business outcomes. The tools exist. The vision doesn't.


"In Six Months, Will There Be a Product That Eliminates the Need to Do All This Learning?"

A different exec asked this one.

Another founder in the room took it further. He'd already decided to hire someone junior to start digging in. His real question was whether he could then hire us to train that person. He'd even framed the ROI on the spot — $1,000 for a week to sit with his operations team and find the savings. He wasn't looking for a vendor. He was describing the model without knowing it had a name.

Two different people. Same request. Give me the easy button.

Here's the thing: that instinct is exactly right. The smartest thing a busy CEO can do is recognize what they're great at (running their business) and find someone to handle the rest. You don't build your own accounting software. You hire a CPA.

The problem isn't wanting the easy button. The problem is that most of the "easy buttons" on the market don't actually work.


Why the DIY Approach Stalls

The most advanced AI user in the room, someone who'd built a full property management operating system in seven days using AI, pushed back hard on the "hire someone" instinct.

His point: you can't hire a kid right out of college to figure this out for you. AI requires domain expertise. A junior hire doesn't know your business. They don't know which processes are bleeding money, which reports take six hours that should take six minutes, or which customer touchpoints are quietly falling apart.

Here's the number that tells the story: 56% of CEOs investing in AI still haven't seen revenue or cost benefits (PwC, January 2026). Not because the technology failed. Because the implementation did. They bought the tool without connecting it to a business problem. Or they handed it to someone who didn't understand the business well enough to know where to point it.

That's the pattern we see on almost every discovery call. Someone bought a tool, or assigned it to the most "tech-savvy" employee, and six months later the tool is gathering dust and the employee is back to doing things the old way. Not because anyone failed. Because the approach was wrong from the start.


What It Looks Like When It Works

Here's where the conversation turned. My CTO drew the distinction that stuck with everyone: the difference between an AI implementer and an AI champion. An implementer installs the tool and moves on. A champion is someone inside the business who changes how the team actually works. That's the role that matters — and it's not a role you can hire off a job board.

The model that came out of the room was simple. Don't hire an AI person. Find someone already in your business who's curious, give them time and permission to experiment, and pair them with someone who actually knows the tools. Not an IT project. A business operations project. One founder put it simply: it's the same reason companies hire an MSP instead of building an internal IT team, or a fractional CFO instead of a full-time hire. You need the result. You don't want to manage the complexity.

That's the AI champion model. One person inside who knows the business. One partner outside who knows AI. The inside person spots the problems worth solving. The outside partner builds the solution and trains the team to use it.

We use this model ourselves. Our own AI systems handle daily briefings, prospect research before meetings, and coordination across our delivery team. Meeting prep that used to take 30 minutes now takes 5. We built them the same way — started with the bottleneck, pointed AI at it, and trained ourselves to use it. We're our own first client.


The Easy Button Exists. It Just Doesn't Look Like Software.

That's not laziness. That's leadership. The CEO's job is to run the business, set the vision, and make the calls. Not to spend weekends watching YouTube tutorials about AI agents.

The easy button isn't a product you buy. It's a partner who already knows the tools, pairs with someone who knows your business, and builds systems your team can actually use. No more handing it to whoever seems most tech-savvy and hoping for the best. Just someone who's done this before, paired with someone inside who knows where the problems are.

If you're the person in that room nodding along, thinking "that's exactly what I want," that's what we built JOV AI to be.

If you want to talk through what this looks like for your business, reach out. I'll send you the three questions we use to find where AI saves the most time. Just a starting point.



The Cost of Software Is Now Zero

A survival rubric for software and SaaS entrepreneurs in the era of vibe coding.


In February 2025, we published The AI-Driven Transformation of Software Development. Our central thesis: AI would trigger a fundamental shift in the build-versus-buy calculus, accelerating a "Cambrian explosion of software" and driving development costs toward zero. We predicted that businesses would find building tailored solutions increasingly cost-effective and strategically superior to purchasing off-the-shelf software.

The thesis has played out. The cost of code is, for most practical purposes, zero.


What's Actually Happening Out There

We sat with two business owners last week. The conversations were different in detail but identical in conclusion: both had stopped buying software.

One is building a complete property management operating system: property records, CRM, fleet tracking, risk management, financials, task management, and more. Not a subscription he configured — a system his company owns outright, built for exactly how his operation works. He built it in two weeks — what would have cost $200,000 a year to rent from a vendor.

The other runs a retail chain. Someone on his team has been working through the software stack systematically — not one big build, but a rolling replacement of every tool they'd been renting. He's already cut $300,000 in annual costs. He's roughly halfway through. When the last subscription is gone, he's asked us to review the whole thing before it goes live — security, scalability, and production robustness.

Operators are replacing project management tools, CRMs, inventory systems, client portals — the entire layer of workflow software that SMBs have been renting for decades. Not because they became developers. Because describing software and building software are now the same thing.

The savings compound at exit. At a typical acquisition multiple, a $300,000 annual reduction in software costs adds over a million dollars to the sale price.
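
A back-of-the-envelope version of that math, assuming a 4x EBITDA multiple (the multiple is an illustration, not a figure from these deals):

```python
# Retired software spend drops straight to EBITDA, and acquirers
# price the business at a multiple of EBITDA.
annual_savings = 300_000       # subscriptions retired per year
ebitda_multiple = 4            # assumed mid-market acquisition multiple
uplift = annual_savings * ebitda_multiple
print(f"${uplift:,} added to the sale price")  # $1,200,000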

Now look at the same picture from the other side — the side trying to sell software to these operators.


One Million Vibecoders Writing the Same Thing

[Illustration: a massive crowd lined up for "Vibe Coders" and one person in line for "Users"]

A million people are building ERP systems. A million people are building project management tools. A million people are building CRMs. They're all working on the same categories, pouring effort into software they intend to sell — and none of them have a market. Because anyone who wants that software will just build their own.

The vibecoders building products to sell are wasting their time. Their potential customers have the same tools they do.

The only vibecoders whose code actually gets used are the ones who are also the users: owner/operators building custom software for their own businesses. That ERP built specifically for one company's workflows, by the person running that company — it doesn't need to find a customer. It already has one.

This is the dividing line. Vibe coding is not a new software business model. It's the tool that lets operators stop being software customers.

The businesses in trouble aren't failing because they have bad products. They're failing because the people who used to buy from them have a better option: build it themselves, tailored to their exact needs, with no recurring subscription.


The Question That Follows

If code is free to produce, software businesses that sell code lose their moat.

The value proposition was never really the software itself. It was the arbitrage: someone already built this, so you don't have to pay a developer. That arbitrage is gone. The operator with a weekend and a capable AI assistant can now build exactly what they need, perfectly suited to their workflow, with no recurring subscription cost.

Not all software businesses face this. The ones selling code packaged as a product are in trouble. The ones that were always selling something else — using software as the delivery mechanism — are fine. Some are better than ever.

The question every founder needs to answer honestly: if code were free, would anyone still buy from us?


What Survives

Twenty years ago my colleague John Cage introduced me to Treacy and Wiersema's Value Disciplines. Operational Excellence, Product Leadership, Customer Intimacy — pick one to dominate, maintain threshold in the others. I've applied it to every strategic engagement since. Vibe coding just took one of the three off the table.

Operational Excellence. Competing on lowest cost and highest efficiency has been the dominant strategy for SMB SaaS. It's no longer defensible. When an operator can build exactly what they need at zero recurring cost, "cheaper than building it yourself" isn't a position.

Product Leadership survives — if the complexity is real. Feature-rich workflow software doesn't qualify. Genuine product leadership means ML models, optimization systems, domains that require years of specialized expertise to build correctly. A vibe-coded app can approximate a dashboard. It can't approximate a decade of algorithmic research.

Customer Intimacy not only survives, it wins. Anywhere the deliverable is judgment, accountability, or trusted expertise — with software as the delivery mechanism rather than the product. Cheap code helps these businesses. They deliver faster, operate leaner, and take on more clients with the same team. The operators winning here aren't the ones handing everything to AI — they're the domain experts who can supervise it. That's precisely why they're winning.

Two additional categories fall outside the disciplines but are equally defensible:

Regulatory and compliance moats. Healthcare software, financial systems, anything requiring liability acceptance, certifications, or audit trail requirements. A vibe-coded replacement might replicate the features. It won't replicate the compliance posture.

Infrastructure position. The picks-and-shovels layer that vibe-coded applications depend on: authentication, payments, deployment, APIs, databases. Network effects live here too — platforms where years of data and an embedded partner ecosystem make migration genuinely expensive. Vibe coding expands this market, not shrinks it.


The Rubric

Score your business across seven dimensions. Add them up.

Value Delivery
1 — Exposed: Software is the product. Customers pay for features.
2 — Mixed: Software enables a service. Code and expertise blend.
3 — Defensible: Judgment, trust, or accountability is the product. Software is delivery.

Switching Cost
1 — Exposed: Data is portable. No integrations, no ecosystem.
2 — Mixed: Meaningful friction: data history, integrations, learned workflows.
3 — Defensible: Network effects or regulatory data residency. Migration is genuinely expensive.

Compliance Moat
1 — Exposed: No requirements. Anyone can build a replacement.
2 — Mixed: Compliance matters, but a determined operator could manage it.
3 — Defensible: Certifications, liability acceptance, audit trails. Vibe coding can't satisfy these.

Problem Complexity
1 — Exposed: Forms, dashboards, CRUD. Buildable in a weekend.
2 — Mixed: Non-trivial integrations or moderate algorithmic depth.
3 — Defensible: ML, optimization, real-time systems. Years of specialized expertise required.

Buyer Profile
1 — Exposed: SMB operators — the people now building their own tools.
2 — Mixed: Mid-market with some IT governance.
3 — Defensible: Regulated enterprises, governments. Procurement and legal sit between you and replacement.

Layer
1 — Exposed: End-user application for a specific use case.
2 — Mixed: Platform with some application features.
3 — Defensible: Infrastructure that vibe-coded apps depend on.

Proprietary Data / Content / IP
1 — Exposed: No proprietary data or IP. Anyone starting from scratch would reach feature parity quickly.
2 — Mixed: Some accumulated data advantage — user history, transaction data — but replicable with time and effort.
3 — Defensible: Proprietary datasets, content licenses, or IP that cannot be recreated from scratch. The asset is the moat.

Reading Your Score

• 7–12: Pivot urgently. You're in Operational Excellence territory — the discipline vibe coding just ended.
• 13–17: Reinforce or reposition. You have assets but meaningful exposure. Identify which dimensions can be strengthened.
• 18–21: Press the advantage. You're operating in Customer Intimacy, Product Leadership, or infrastructure. Double down.
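
To make the mechanics concrete, here's a minimal sketch of the scoring in Python. The dimension names come from the rubric above; the sample scores are hypothetical, not an assessment of any real product:

```python
# Score each dimension 1 (Exposed), 2 (Mixed), or 3 (Defensible),
# sum the seven scores, and map the total to an action.
DIMENSIONS = [
    "Value Delivery", "Switching Cost", "Compliance Moat",
    "Problem Complexity", "Buyer Profile", "Layer",
    "Proprietary Data / Content / IP",
]

def read_score(total: int) -> str:
    if total <= 12:
        return "Pivot urgently"
    elif total <= 17:
        return "Reinforce or reposition"
    return "Press the advantage"

# Hypothetical generic workflow-SaaS tool, scored dimension by dimension.
scores = dict(zip(DIMENSIONS, [1, 2, 1, 1, 1, 1, 2]))
total = sum(scores.values())
print(total, read_score(total))  # 9 Pivot urgently
```

The two examples that follow are this exact exercise run end to end.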

Two Examples

Monday.com scores a 10. It's a $10 billion company. It's also a work management application — forms, boards, and status columns with a clean interface. No compliance requirements. No proprietary data. No algorithmic depth that requires years to build. Its switching cost scores a 2 because workflows and integrations create some friction, but nothing that survives a determined replacement effort. The rubric doesn't care about revenue multiples. A tool called Zapta already lets teams feed in their Monday.com API token and vibe-code a custom replacement — database, authentication, and all — for $29 a month.

Stripe scores a 21. Every dimension is defensible, and most reinforce each other. The compliance posture is what creates the enterprise buyer. The enterprise buyer generates the transaction data. The transaction data trains the fraud models. The fraud models deepen the moat. A vibe coder building a payments app doesn't compete with Stripe — they depend on it.

The M&A market is already pricing this divergence in. Q1 2026 data shows that in vertical software acquisitions, revenue growth carries 2.4 times the predictive weight of EBITDA margins in explaining valuation outcomes. Buyers are paying for stickiness — which is another way of saying they're paying for defensibility.


What This Means

Most software businesses were built on the assumption that code was scarce. It isn't anymore.

The question in the middle of this article — if code were free, would anyone still buy from us? — isn't rhetorical. Run the rubric. If you're scoring in the 7–12 range, the answer is no, and your replacement isn't a competitor. It's your customer.


JOV AI helps technology businesses navigate this shift. If your rubric score raised questions about your position — or if you're building the thing that replaces someone else's and want it done right — let's talk.

92% of Nonprofits Use AI. Only Half Have a Policy. Here's What One Foundation Built.

Your staff is already using AI. You probably know that. What you might not know is which tools, on which data, with what guardrails.

The 2026 Nonprofit AI Adoption Report put a number on it: 92% of nonprofits are using AI in some capacity. But 47% have no governance policy at all. And 81% are using AI individually: no shared workflows, no documentation, no organizational learning.

That's not an AI problem. That's a risk management problem hiding in plain sight.

At the end of last year, we helped The Catholic Foundation in Dallas build an AI governance framework from scratch. Policy, training, board approval, the whole thing. Here's what the process looked like and what we learned doing it.



The Problem Isn't AI. It's What You Don't Know About.

The risk that keeps me up at night for organizations like this isn't a sophisticated cyberattack. It's a well-meaning staff member pasting sensitive information into a free AI tool to draft an email.

That's not hypothetical. Last July, IBM's Cost of a Data Breach Report found that one in five organizations experienced a breach tied to shadow AI: tools employees use without IT approval. Those breaches cost an average of $670,000 more than standard incidents. UpGuard confirmed what we see on every engagement: 81% of employees are already using unapproved AI tools at work. Including the security professionals.

For foundations built on donor trust, "we didn't know" is not a sufficient answer. Neither is "we're working on a policy."


What the Process Actually Looks Like

When The Catholic Foundation reached out, they weren't reacting to an incident. They were getting ahead of one. AI features were showing up in the tools their team already used, whether anyone asked for them or not. People wanted to use AI the right way. They just didn't have a playbook. Leadership decided to build the framework before that ambiguity became a problem.

Here's what we mapped out in a few weeks:

Data classification. Not every piece of information carries the same risk. The work starts with drawing clear lines: what never touches an AI tool under any circumstances, what can be used with explicit approval, and what's fair game. The test is simple: if it would be devastating on the front page of a newspaper, it stays out of AI completely.
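
To make the tiers concrete, here's an illustrative sketch; the category names and example items are hypothetical, not the Foundation's actual policy:

```python
# Three tiers from the paragraph above: never, with approval, fair game.
# Example items are hypothetical, for illustration only.
CLASSIFICATION = {
    "prohibited": ["donor records", "grant financials", "personnel files"],
    "needs_approval": ["internal drafts", "board meeting summaries"],
    "permitted": ["published reports", "public event copy"],
}

def ai_tier(item: str) -> str:
    """Return the AI-use tier for an item; default to the cautious middle."""
    for tier, examples in CLASSIFICATION.items():
        if item in examples:
            return tier
    return "needs_approval"

print(ai_tier("donor records"))  # prohibited
```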

Tool evaluation. Not all AI tools are created equal. Enterprise tools with contractual data protection agreements are fundamentally different from free consumer tools that may use your data for training. The policy needs a clear approved and prohibited list.

Staff training. Not a lecture about AI theory. Scenario-based: "This situation just came up. What do you do?" The questions that surface are the kind you can't anticipate from a desk: AI features appearing unprompted in existing software, third parties on calls running AI recorders, voice assistants on personal phones.

Board alignment. The governance committee reviewed the policy before the full board. Their board brought the right questions and the experience to evaluate the framework on its merits. The full board approved it, no revisions needed.

The core framework was built in weeks. Review and board approval added a few months to the calendar, but that's governance working the way it should. No dedicated AI team required. No year-long compliance project. Just a decision to be intentional about it.


Why This Matters Beyond One Foundation

Last September, CEP confirmed what we were already seeing: almost two-thirds of foundations and nonprofits are using AI, and data security remains the top concern among foundation leaders, cited by more than 80%. But concern alone doesn't build a framework.

Foundations won't get forced into this conversation by strategy. They'll get forced into it by a board question, a compliance review, or a staff member asking what's allowed.

The Catholic Foundation won't be explaining why they don't have a policy. They'll be pointing to the one they built.


Start With Three Questions

If you lead a foundation or nonprofit, here's where the work begins:

  1. What information does your organization handle that would be catastrophic to expose?
  2. What AI tools are your staff using right now, with or without your knowledge?
  3. Do you have a written policy that answers question two in light of question one?

If the answer to question three is no, that's the gap. And closing it doesn't require a dedicated AI team or a six-figure consulting engagement. It requires a decision to be intentional about how your organization uses AI before the decision gets made for you.

If you want to talk through what a nonprofit AI governance policy looks like for your organization, reach out. Not a sales pitch, just a straight conversation about your situation. We'll tell you if you need a formal policy yet. And if you do, we'll show you exactly where to start.


Sources:

• 2026 Nonprofit AI Adoption Report, Virtuous/Fundraising.AI, February 2026
• CEP "AI With Purpose" Report, September 2025
• IBM Cost of a Data Breach Report 2025, July 2025
• UpGuard "State of Shadow AI" Research, November 2025

70% of Small Business Leaders Are Betting on AI. Here's What Successful AI Implementation Looks Like.

The Execution Gap

If you run a small business, you've probably had some version of this conversation in the last six months:

"We should be doing something with AI."

Maybe your office manager started using ChatGPT for emails. Maybe a competitor posted about their "AI-powered" workflow on LinkedIn. Maybe you sat through a vendor demo that promised to "transform your operations."

And then nothing happened. Or worse, something happened, but you can't point to what actually changed. Your AI implementation stalled before it started.

You're not alone. And your skepticism isn't a weakness. It's the right instinct.

The AI Implementation Optimism Is Real. The Results Aren't Yet.

The ECI AI Readiness Report came out this week. 550+ owners in manufacturing, field service, and distribution. These are the people in our world.

The headline: more than 70% of SMB leaders are positive about AI. That's not Silicon Valley hype. That's owners like you and me saying, "I think this thing can help my business."

But here's where it gets interesting. Despite that optimism, roughly 40% of those same businesses report zero measurable results from their AI efforts so far.

Seventy percent believe. Forty percent can't prove it's working.

That gap is the whole story.

And it's not just SMBs. Here's the kicker: PwC's latest CEO survey shows 56% of CEOs actively investing in AI haven't seen revenue or cost benefits yet. Only one in eight reported gains on both. If large companies with dedicated AI budgets are still struggling to show ROI, budget alone clearly isn't enough.

The will is there. The execution isn't.

What the Winners Are Actually Doing

So what separates the 60% getting results from the 40% who can't point to measurable ones?

It's not budget. It's not team size. It's not which tool they picked.

It's where they started.

The ECI report found that 60% of SMBs using or planning AI are focused on data analysis and reporting. Back-office work. Not chatbots. Not customer-facing AI. The boring stuff: pulling reports, reconciling data, tracking jobs.

That tracks with everything I've seen over the past two years. The wins don't come from flashy demos. They come from finding the one process that eats six hours a week and cutting it to thirty minutes.

Not "let's see what AI can do." Instead: "We spend 12 hours a week manually routing service calls. Can we cut that in half?"

That's the difference between experimenting and operating.

Why Most DIY AI Implementation Projects Stall

Here's a pattern I keep seeing. An owner gets excited about AI, assigns it to someone on their team, usually whoever seems most "tech-savvy," and says, "Figure out how we can use this."

Three months later, that person has tested a dozen tools, built a few clever prompts, and can't point to a single process that actually changed. Not because they're not smart. Because they're learning from scratch while still doing their real job.

Every time, the fix is the same. Stop leading with the technology. Start with the problem. That's what drives everything we do at JOV AI.

We run our business on the same AI systems we build for clients. It's the fastest way to find out what actually works, and the fastest way to kill what doesn't.

Why SMBs Have the Real Advantage

Here's what the big consultancies miss when they publish these reports: small businesses can move faster than anyone.

I wrote about this in The Blue-Collar AI Advantage. A 50-person HVAC company doesn't need a change management committee. The owner can decide on Tuesday, implement on Wednesday, and see results by Friday.

That speed is a structural advantage. Shorter decision chains. Closer to the actual work. Less bureaucracy between "this is a good idea" and "let's do it."

But it cuts both ways. When every dollar matters more, you can't afford to experiment blindly. A Fortune 500 company can burn a quarter-million on a failed AI pilot and write it off. You can't.

That's why the bottleneck-first approach matters even more for SMBs. You don't need an AI strategy. You need to fix one expensive problem and prove ROI before you touch anything else.

Stop Running AI Projects. Start Operating Your Business.

The companies getting results from AI aren't "doing AI." They're not running innovation labs or hiring prompt engineers.

They're doing what they've always done: finding inefficiencies and fixing them. AI just happens to be the tool that works right now.

The ECI report named the barriers holding most SMBs back, and none of them are surprising: no in-house expertise, messy data, and no idea where to start. Those aren't technology problems. They're AI implementation problems.

And that's exactly where the gap lives, between "AI can do amazing things" and "here's what it's doing for your P&L this quarter."

The testing phase is over. Seventy percent of your peers are ready to move. The question isn't whether AI works for small business. It's whether you'll be in the 60% getting results or the 40% still unable to point to what changed.

Start Here

What's your most expensive bottleneck this week? The process that eats the most hours, causes the most errors, or keeps you from focusing on growth?

Start there. Not with a chatbot. Not with a strategy deck. With one problem, one measurement, and one fix.

That's how the winners are doing it.

If you want to talk through where AI implementation fits in your operations, reach out. Not a sales pitch, just a straight conversation about your bottleneck. We'll tell you if AI isn't the answer. And if it is, we'll show you exactly where to start.

The Blue-Collar AI Advantage Nobody's Talking About


Your best tech is losing two to three hours a day to bad routing. Your estimator is rebuilding the same spreadsheet for the third time this week. Your office manager is chasing invoices instead of chasing growth.

None of that is a technology problem. It's operational drag. And it's capping how fast your business can grow.

Most trades owners assume AI isn't for them yet. That's exactly why the ones adopting it now are pulling ahead so fast. HVAC. Plumbing. Construction. Manufacturing. Field services. Where almost no one has started, even basic AI puts you a generation ahead.