
AI Strategy

They Don't Want to Learn AI. They Want the Easy Button.


We recently hosted an AI session for a group of business owners. We had slides. We planned for a 30-minute presentation and 30 minutes of Q&A.

They kept us for almost three hours.

Not because the slides were great. Because every question opened another question. The room included owners, investors, and executives from financial services, construction trades, property management, and protection services. Industries that have nothing in common except this: they all know AI matters, and none of them are sure what to do about it.


"I'm Not Creative Enough to Know What Problems to Bring to AI"

One exec said this out loud. Nobody laughed. Everyone nodded.

This was a successful, experienced leader being honest about a specific blind spot: he couldn't picture what AI does for his specific role. Not "AI can improve efficiency." He needed to know what it looks like on a Tuesday morning when he sits down at his desk. He couldn't even frame the right questions to ask.

These are leaders with vision -- that's how they built what they built. The gap is translating "AI matters" into a picture of what it actually does inside their business. And that gap is everywhere. Kellogg just published research naming this exact pattern. They call it Stage 1. Your people are using ChatGPT for the stuff they find annoying, but there's no strategy. No structure. Nobody connecting it to business outcomes. The tools exist. The vision doesn't.


"In Six Months, Will There Be a Product That Eliminates the Need to Do All This Learning?"

A different exec asked this one.

Another founder in the room took it further. He'd already decided to hire someone junior to start digging in. His real question was whether he could then hire us to train that person. He'd even framed the ROI on the spot -- $1,000 for a week to sit with his operations team and find the savings. He wasn't looking for a vendor. He was describing the model without knowing it had a name.

Two different people. Same request. Give me the easy button.

Here's the thing: that instinct is exactly right. The smartest thing a busy CEO can do is recognize what they're great at (running their business) and find someone to handle the rest. You don't build your own accounting software. You hire a CPA.

The problem isn't wanting the easy button. The problem is that most of the "easy buttons" on the market don't actually work.


Why the DIY Approach Stalls

The most advanced AI user in the room, someone who'd built a full property management operating system in seven days using AI, pushed back hard on the "hire someone" instinct.

His point: you can't hire a kid right out of college to figure this out for you. AI requires domain expertise. A junior hire doesn't know your business. They don't know which processes are bleeding money, which reports take six hours that should take six minutes, or which customer touchpoints are quietly falling apart.

Here's the number that tells the story: 56% of CEOs investing in AI still haven't seen revenue or cost benefits (PwC, January 2026). Not because the technology failed. Because the implementation did. They bought the tool without connecting it to a business problem. Or they handed it to someone who didn't understand the business well enough to know where to point it.

That's the pattern we see on almost every discovery call. Someone bought a tool, or assigned it to the most "tech-savvy" employee, and six months later the tool is gathering dust and the employee is back to doing things the old way. Not because anyone failed. Because the approach was wrong from the start.


What It Looks Like When It Works

Here's where the conversation turned. My CTO drew the distinction that stuck with everyone: the difference between an AI implementer and an AI champion. An implementer installs the tool and moves on. A champion is someone inside the business who changes how the team actually works. That's the role that matters -- and it's not a role you can hire off a job board.

The model that came out of the room was simple. Don't hire an AI person. Find someone already in your business who's curious, give them time and permission to experiment, and pair them with someone who actually knows the tools. Not an IT project. A business operations project. One founder put it simply: it's the same reason companies hire an MSP instead of building an internal IT team, or a fractional CFO instead of a full-time hire. You need the result. You don't want to manage the complexity.

That's the AI champion model. One person inside who knows the business. One partner outside who knows AI. The inside person spots the problems worth solving. The outside partner builds the solution and trains the team to use it.

We use this model ourselves. Our own AI systems handle daily briefings, prospect research before meetings, and coordination across our delivery team. Meeting prep that used to take 30 minutes now takes 5. We built them the same way -- started with the bottleneck, pointed AI at it, and trained ourselves to use it. We're our own first client.


The Easy Button Exists. It Just Doesn't Look Like Software.

Wanting the easy button isn't laziness. It's leadership. The CEO's job is to run the business, set the vision, and make the calls. Not to spend weekends watching YouTube tutorials about AI agents.

The easy button isn't a product you buy. It's a partner who already knows the tools, pairs with someone who knows your business, and builds systems your team can actually use. No more handing it to whoever seems most tech-savvy and hoping for the best. Just someone who's done this before, paired with someone inside who knows where the problems are.

If you're the person in that room nodding along, thinking "that's exactly what I want," that's what we built JOV AI to be.

If you want to talk through what this looks like for your business, reach out. I'll send you the three questions we use to find where AI saves the most time. Just a starting point.



The Cost of Software Is Now Zero

A survival rubric for software and SaaS entrepreneurs in the era of vibe coding.


In February 2025, we published The AI-Driven Transformation of Software Development. Our central thesis: AI would trigger a fundamental shift in the build-versus-buy calculus, accelerating a "Cambrian explosion of software" and driving development costs toward zero. We predicted that businesses would find building tailored solutions increasingly cost-effective and strategically superior to purchasing off-the-shelf software.

The thesis has played out. The cost of code is, for most practical purposes, zero.


What's Actually Happening Out There

We sat with two business owners last week. The conversations were different in detail but identical in conclusion: both had stopped buying software.

One is building a complete property management operating system: property records, CRM, fleet tracking, risk management, financials, task management, and more. Not a subscription he configured — a system his company owns outright, built for exactly how his operation works. He built it in two weeks — a system that would have cost $200,000 a year to rent from a vendor.

The other runs a retail chain. Someone on his team has been working through the software stack systematically — not one big build, but a rolling replacement of every tool they'd been renting. He's already cut $300,000 in annual costs. He's roughly halfway through. When the last subscription is gone, he's asked us to review the whole thing before it goes live — security, scalability, and production robustness.

Operators are replacing project management tools, CRMs, inventory systems, client portals — the entire layer of workflow software that SMBs have been renting for decades. Not because they became developers. Because describing software and building software are now the same thing.

The savings compound at exit. At a typical acquisition multiple, a $300,000 annual reduction in software costs adds over a million dollars to the sale price.
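The arithmetic behind that claim is worth making explicit. Annual subscription savings fall straight to EBITDA, and a buyer prices the business as a multiple of EBITDA. A back-of-the-envelope sketch, assuming a 4x multiple (the specific multiple is an assumption for illustration, not a quote from the article):

```python
# Annual software savings flow straight to EBITDA, so at acquisition
# they get multiplied by the buyer's EBITDA multiple.
annual_savings = 300_000   # subscriptions replaced, per the retail-chain example
ebitda_multiple = 4        # assumed typical SMB acquisition multiple

added_sale_price = annual_savings * ebitda_multiple
print(f"${added_sale_price:,} added to the sale price")  # $1,200,000
```

At any multiple above roughly 3.3x, a $300,000 reduction clears the million-dollar mark.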

Now look at the same picture from the other side — the side trying to sell software to these operators.


One Million Vibecoders Writing the Same Thing

(Illustration: a massive crowd lined up for "Vibe Coders," one person in line for "Users.")

A million people are building ERP systems. A million people are building project management tools. A million people are building CRMs. They're all working on the same categories, pouring effort into software they intend to sell — and none of them have a market. Because anyone who wants that software will just build their own.

The vibecoders building products to sell are wasting their time. Their potential customers have the same tools they do.

The only vibecoders whose code actually gets used are the ones who are also the users: owner/operators building custom software for their own businesses. That ERP built specifically for one company's workflows, by the person running that company — it doesn't need to find a customer. It already has one.

This is the dividing line. Vibe coding is not a new software business model. It's the tool that lets operators stop being software customers.

The businesses in trouble aren't failing because they have bad products. They're failing because the people who used to buy from them have a better option: build it themselves, tailored to their exact needs, with no recurring subscription.


The Question That Follows

If code is free to produce, software businesses that sell code lose their moat.

The value proposition was never really the software itself. It was the arbitrage: someone already built this, so you don't have to pay a developer. That arbitrage is gone. The operator with a weekend and a capable AI assistant can now build exactly what they need, perfectly suited to their workflow, with no recurring subscription cost.

Not all software businesses face this. The ones selling code packaged as a product are in trouble. The ones that were always selling something else — using software as the delivery mechanism — are fine. Some are better than ever.

The question every founder needs to answer honestly: if code were free, would anyone still buy from us?


What Survives

Twenty years ago my colleague John Cage introduced me to Treacy and Wiersema's Value Disciplines. Operational Excellence, Product Leadership, Customer Intimacy — pick one to dominate, maintain threshold in the others. I've applied it to every strategic engagement since. Vibe coding just took one of the three off the table.

Operational Excellence. Competing on lowest cost and highest efficiency has been the dominant strategy for SMB SaaS. It's no longer defensible. When an operator can build exactly what they need at zero recurring cost, "cheaper than building it yourself" isn't a position.

Product Leadership survives — if the complexity is real. Feature-rich workflow software doesn't qualify. Genuine product leadership means ML models, optimization systems, domains that require years of specialized expertise to build correctly. A vibe-coded app can approximate a dashboard. It can't approximate a decade of algorithmic research.

Customer Intimacy not only survives, it wins. Anywhere the deliverable is judgment, accountability, or trusted expertise — with software as the delivery mechanism rather than the product. Cheap code helps these businesses. They deliver faster, operate leaner, and take on more clients with the same team. The operators winning here aren't the ones handing everything to AI — they're the domain experts who can supervise it. That's precisely why they're winning.

Two additional categories fall outside the disciplines but are equally defensible:

Regulatory and compliance moats. Healthcare software, financial systems, anything requiring liability acceptance, certifications, or audit trail requirements. A vibe-coded replacement might replicate the features. It won't replicate the compliance posture.

Infrastructure position. The picks-and-shovels layer that vibe-coded applications depend on: authentication, payments, deployment, APIs, databases. Network effects live here too — platforms where years of data and an embedded partner ecosystem make migration genuinely expensive. Vibe coding expands this market rather than shrinking it.


The Rubric

Score your business across seven dimensions. Add them up.

Each dimension scores 1 (Exposed), 2 (Mixed), or 3 (Defensible).

Value Delivery
  1 — Exposed: Software is the product. Customers pay for features.
  2 — Mixed: Software enables a service. Code and expertise blend.
  3 — Defensible: Judgment, trust, or accountability is the product. Software is delivery.

Switching Cost
  1 — Exposed: Data is portable. No integrations, no ecosystem.
  2 — Mixed: Meaningful friction: data history, integrations, learned workflows.
  3 — Defensible: Network effects or regulatory data residency. Migration is genuinely expensive.

Compliance Moat
  1 — Exposed: No requirements. Anyone can build a replacement.
  2 — Mixed: Compliance matters, but a determined operator could manage it.
  3 — Defensible: Certifications, liability acceptance, audit trails. Vibe coding can't satisfy these.

Problem Complexity
  1 — Exposed: Forms, dashboards, CRUD. Buildable in a weekend.
  2 — Mixed: Non-trivial integrations or moderate algorithmic depth.
  3 — Defensible: ML, optimization, real-time systems. Years of specialized expertise required.

Buyer Profile
  1 — Exposed: SMB operators — the people now building their own tools.
  2 — Mixed: Mid-market with some IT governance.
  3 — Defensible: Regulated enterprises, governments. Procurement and legal sit between you and replacement.

Layer
  1 — Exposed: End-user application for a specific use case.
  2 — Mixed: Platform with some application features.
  3 — Defensible: Infrastructure that vibe-coded apps depend on.

Proprietary Data / Content / IP
  1 — Exposed: No proprietary data or IP. Anyone starting from scratch would reach feature parity quickly.
  2 — Mixed: Some accumulated data advantage — user history, transaction data — but replicable with time and effort.
  3 — Defensible: Proprietary datasets, content licenses, or IP that cannot be recreated from scratch. The asset is the moat.

Reading Your Score

7–12: Pivot urgently. You're in Operational Excellence territory — the discipline vibe coding just ended.
13–17: Reinforce or reposition. You have assets but meaningful exposure. Identify which dimensions can be strengthened.
18–21: Press the advantage. You're operating in Customer Intimacy, Product Leadership, or infrastructure. Double down.
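If you want to run the scoring mechanically, the whole rubric reduces to a sum and three bands. A minimal sketch; the dimension names and the `rubric_verdict` function are illustrative, not an actual JOV AI tool:

```python
# Hypothetical scorer for the seven-dimension rubric above.
DIMENSIONS = (
    "value_delivery", "switching_cost", "compliance_moat",
    "problem_complexity", "buyer_profile", "layer", "proprietary_ip",
)

def rubric_verdict(scores: dict) -> str:
    """Score every dimension 1 (Exposed) to 3 (Defensible), then sum."""
    missing = set(DIMENSIONS) - set(scores)
    if missing or any(s not in (1, 2, 3) for s in scores.values()):
        raise ValueError("score every dimension 1, 2, or 3")
    total = sum(scores[d] for d in DIMENSIONS)
    if total <= 12:
        return f"{total}/21: pivot urgently"
    if total <= 17:
        return f"{total}/21: reinforce or reposition"
    return f"{total}/21: press the advantage"

# A Stripe-like profile: defensible on every dimension.
print(rubric_verdict({d: 3 for d in DIMENSIONS}))  # 21/21: press the advantage
```

A profile of all 1s lands at 7 ("pivot urgently"); straight 2s land at 14, the middle band.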

Two Examples

Monday.com scores a 10. It's a $10 billion company. It's also a work management application — forms, boards, and status columns with a clean interface. No compliance requirements. No proprietary data. No algorithmic depth that requires years to build. Its switching cost scores a 2 because workflows and integrations create some friction, but nothing that survives a determined replacement effort. The rubric doesn't care about revenue multiples. A tool called Zapta already lets teams feed in their Monday.com API token and vibe-code a custom replacement — database, authentication, and all — for $29 a month.

Stripe scores a 21. Every dimension is defensible, and most reinforce each other. The compliance posture is what creates the enterprise buyer. The enterprise buyer generates the transaction data. The transaction data trains the fraud models. The fraud models deepen the moat. A vibe coder building a payments app doesn't compete with Stripe — they depend on it.

The M&A market is already pricing this divergence in. Q1 2026 data shows that in vertical software acquisitions, revenue growth carries 2.4 times the predictive weight of EBITDA margins in explaining valuation outcomes. Buyers are paying for stickiness — which is another way of saying they're paying for defensibility.


What This Means

Most software businesses were built on the assumption that code was scarce. It isn't anymore.

The question in the middle of this article — if code were free, would anyone still buy from us? — isn't rhetorical. Run the rubric. If you're scoring in the 7–12 range, the answer is no, and your replacement isn't a competitor. It's your customer.


JOV AI helps technology businesses navigate this shift. If your rubric score raised questions about your position — or if you're building the thing that replaces someone else's and want it done right — let's talk.

92% of Nonprofits Use AI. Only Half Have a Policy. Here's What One Foundation Built.

Your staff is already using AI. You probably know that. What you might not know is which tools, on which data, with what guardrails.

The 2026 Nonprofit AI Adoption Report put a number on it: 92% of nonprofits are using AI in some capacity. But 47% have no governance policy at all. And 81% are using AI individually: no shared workflows, no documentation, no organizational learning.

That's not an AI problem. That's a risk management problem hiding in plain sight.

At the end of last year, we helped The Catholic Foundation in Dallas build an AI governance framework from scratch. Policy, training, board approval, the whole thing. Here's what the process looked like and what we learned doing it.



The Problem Isn't AI. It's What You Don't Know About.

The risk that keeps me up at night for organizations like this isn't a sophisticated cyberattack. It's a well-meaning staff member pasting sensitive information into a free AI tool to draft an email.

That's not hypothetical. Last July, IBM's Cost of a Data Breach Report found that one in five organizations experienced a breach tied to shadow AI: tools employees use without IT approval. Those breaches cost an average of $670,000 more than standard incidents. UpGuard confirmed what we see on every engagement: 81% of employees are already using unapproved AI tools at work. Including the security professionals.

For foundations built on donor trust, "we didn't know" is not a sufficient answer. Neither is "we're working on a policy."


What the Process Actually Looks Like

When The Catholic Foundation reached out, they weren't reacting to an incident. They were getting ahead of one. AI features were showing up in the tools their team already used, whether anyone asked for them or not. People wanted to use AI the right way. They just didn't have a playbook. Leadership decided to build the framework before that ambiguity became a problem.

Here's what we mapped out in a few weeks:

Data classification. Not every piece of information carries the same risk. The work starts with drawing clear lines: what never touches an AI tool under any circumstances, what can be used with explicit approval, and what's fair game. The test is simple: if it would be devastating on the front page of a newspaper, it stays out of AI completely.
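Those three tiers can be captured in something as simple as a lookup table that defaults to the most restrictive tier. A minimal sketch; the category names and tier assignments below are hypothetical illustrations, not The Catholic Foundation's actual policy:

```python
from enum import Enum

class AITier(Enum):
    PROHIBITED = "never enters any AI tool"
    RESTRICTED = "explicit approval required, approved tools only"
    PERMITTED = "fair game in approved tools"

# Hypothetical category assignments for illustration.
CLASSIFICATION = {
    "donor_financial_records": AITier.PROHIBITED,  # fails the front-page test
    "internal_meeting_notes": AITier.RESTRICTED,
    "published_newsletters": AITier.PERMITTED,
}

def tier_for(category: str) -> AITier:
    # Anything not explicitly classified defaults to the strictest tier.
    return CLASSIFICATION.get(category, AITier.PROHIBITED)
```

The design choice that matters is the default: unclassified data is treated as prohibited, so the policy fails safe when someone encounters information nobody thought to list.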

Tool evaluation. Not all AI tools are created equal. Enterprise tools with contractual data protection agreements are fundamentally different from free consumer tools that may use your data for training. The policy needs a clear approved and prohibited list.

Staff training. Not a lecture about AI theory. Scenario-based: "This situation just came up. What do you do?" The questions that surface are the kind you can't anticipate from a desk: AI features appearing unprompted in existing software, third parties on calls running AI recorders, voice assistants on personal phones.

Board alignment. The governance committee reviewed the policy before the full board. Their board brought the right questions and the experience to evaluate the framework on its merits. The full board approved it, no revisions needed.

The core framework was built in weeks. Review and board approval added a few months to the calendar, but that's governance working the way it should. No dedicated AI team required. No year-long compliance project. Just a decision to be intentional about it.


Why This Matters Beyond One Foundation

Last September, CEP confirmed what we were already seeing: almost two-thirds of foundations and nonprofits are using AI, but data security remains the top concern among foundation leaders, cited by more than 80%. But concern alone doesn't build a framework.

Foundations won't get forced into this conversation by strategy. They'll get forced into it by a board question, a compliance review, or a staff member asking what's allowed.

The Catholic Foundation won't be explaining why they don't have a policy. They'll be pointing to the one they built.


Start With Three Questions

If you lead a foundation or nonprofit, here's where the work begins:

  1. What information does your organization handle that would be catastrophic to expose?
  2. What AI tools are your staff using right now, with or without your knowledge?
  3. Do you have a written policy that answers question two in light of question one?

If the answer to question three is no, that's the gap. And closing it doesn't require a dedicated AI team or a six-figure consulting engagement. It requires a decision to be intentional about how your organization uses AI before the decision gets made for you.

If you want to talk through what a nonprofit AI governance policy looks like for your organization, reach out. Not a sales pitch, just a straight conversation about your situation. We'll tell you if you need a formal policy yet. And if you do, we'll show you exactly where to start.


Sources:
- 2026 Nonprofit AI Adoption Report, Virtuous/Fundraising.AI, February 2026
- CEP "AI With Purpose" Report, September 2025
- IBM Cost of a Data Breach Report, July 2025
- UpGuard "State of Shadow AI" Research, November 2025

70% of Small Business Leaders Are Betting on AI. Here's What Successful AI Implementation Looks Like.

The Execution Gap

If you run a small business, you've probably had some version of this conversation in the last six months:

"We should be doing something with AI."

Maybe your office manager started using ChatGPT for emails. Maybe a competitor posted about their "AI-powered" workflow on LinkedIn. Maybe you sat through a vendor demo that promised to "transform your operations."

And then nothing happened. Or worse, something happened, but you can't point to what actually changed. Your AI implementation stalled before it started.

You're not alone. And your skepticism isn't a weakness. It's the right instinct.

The AI Implementation Optimism Is Real. The Results Aren't Yet.

The ECI AI Readiness Report came out this week. 550+ owners in manufacturing, field service, and distribution. These are the people in our world.

The headline: more than 70% of SMB leaders are positive about AI. That's not Silicon Valley hype. That's owners like you and me saying, "I think this thing can help my business."

But here's where it gets interesting. Despite that optimism, roughly 40% of those same businesses report zero measurable results from their AI efforts so far.

Seventy percent believe. Forty percent can't prove it's working.

That gap is the whole story.

And it's not just SMBs. Here's the kicker: PwC's latest CEO survey shows 56% of CEOs actively investing in AI haven't seen revenue or cost benefits yet. Only one in eight reported gains on both. If large companies with dedicated AI budgets are still struggling to show ROI, budget alone clearly isn't enough.

The will is there. The execution isn't.

What the Winners Are Actually Doing

So what separates the 60% getting results from the 40% who can't point to measurable ones?

It's not budget. It's not team size. It's not which tool they picked.

It's where they started.

The ECI report found that 60% of SMBs using or planning AI are focused on data analysis and reporting. Back-office work. Not chatbots. Not customer-facing AI. The boring stuff: pulling reports, reconciling data, tracking jobs.

That tracks with everything I've seen over the past two years. The wins don't come from flashy demos. They come from finding the one process that eats six hours a week and cutting it to thirty minutes.

Not "let's see what AI can do." Instead: "We spend 12 hours a week manually routing service calls. Can we cut that in half?"

That's the difference between experimenting and operating.

Why Most DIY AI Implementation Projects Stall

Here's a pattern I keep seeing. An owner gets excited about AI, assigns it to someone on their team, usually whoever seems most "tech-savvy," and says, "Figure out how we can use this."

Three months later, that person has tested a dozen tools, built a few clever prompts, and can't point to a single process that actually changed. Not because they're not smart. Because they're learning from scratch while still doing their real job.

Every time, the fix is the same. Stop leading with the technology. Start with the problem. That's what drives everything we do at JOV AI.

We run our business on the same AI systems we build for clients. It's the fastest way to find out what actually works, and the fastest way to kill what doesn't.

Why SMBs Have the Real Advantage

Here's what the big consultancies miss when they publish these reports: small businesses can move faster than anyone.

I wrote about this in The Blue-Collar AI Advantage. A 50-person HVAC company doesn't need a change management committee. The owner can decide on Tuesday, implement on Wednesday, and see results by Friday.

That speed is a structural advantage. Shorter decision chains. Closer to the actual work. Less bureaucracy between "this is a good idea" and "let's do it."

But it cuts both ways. When every dollar matters more, you can't afford to experiment blindly. A Fortune 500 company can burn a quarter-million on a failed AI pilot and write it off. You can't.

That's why the bottleneck-first approach matters even more for SMBs. You don't need an AI strategy. You need to fix one expensive problem and prove ROI before you touch anything else.

Stop Running AI Projects. Start Operating Your Business.

The companies getting results from AI aren't "doing AI." They're not running innovation labs or hiring prompt engineers.

They're doing what they've always done: finding inefficiencies and fixing them. AI just happens to be the tool that works right now.

The ECI report named the barriers holding most SMBs back, and none of them are surprising: no in-house expertise, messy data, and no idea where to start. Those aren't technology problems. They're AI implementation problems.

And that's exactly where the gap lives, between "AI can do amazing things" and "here's what it's doing for your P&L this quarter."

The testing phase is over. Seventy percent of your peers are ready to move. The question isn't whether AI works for small business. It's whether you'll be in the 60% getting results or the 40% still unable to point to what changed.

Start Here

What's your most expensive bottleneck this week? The process that eats the most hours, causes the most errors, or keeps you from focusing on growth?

Start there. Not with a chatbot. Not with a strategy deck. With one problem, one measurement, and one fix.

That's how the winners are doing it.

If you want to talk through where AI implementation fits in your operations, reach out. Not a sales pitch, just a straight conversation about your bottleneck. We'll tell you if AI isn't the answer. And if it is, we'll show you exactly where to start.

The Blue-Collar AI Advantage Nobody's Talking About


Your best tech is losing two to three hours a day to bad routing. Your estimator is rebuilding the same spreadsheet for the third time this week. Your office manager is chasing invoices instead of chasing growth.

None of that is a technology problem. It's operational drag. And it's capping how fast your business can grow.

Most trades owners assume AI isn't for them yet. That's exactly why the ones adopting it now are pulling ahead so fast. HVAC. Plumbing. Construction. Manufacturing. Field services. Where almost no one has started, even basic AI puts you a generation ahead.

OpenClaw — Why AI Governance Can't Wait

If you've been on LinkedIn this week, you've seen the posts. Someone's AI agent just booked their flights, summarized their inbox, and scheduled their week—all while they slept.

OpenClaw (formerly Clawdbot, then Moltbot) is being downloaded thousands of times a day. It's the fastest-growing AI agent in history, and it's likely already installed on a laptop in your office.

Here's the problem: this isn't a polished product from Google or Microsoft. It's an experimental tool that, once granted access, can read and write files on your computer. It can execute scripts. It can connect to your email, your calendar, your customer data. And the creator himself says the security is "a work in progress" and it's "not meant for non-technical users."

That hasn't stopped non-technical users from installing it anyway. And most of them have no AI governance in place.


The AI Advantage Isn't About Which Model You Pick. It's How You Run It.

Every few months, a new AI model drops and the internet loses its mind. GPT-5-whatever. Gemini-some-number. The next thing.

It's like the iPhone when it was new—each new number was a big deal. I used to get excited too. But can you honestly tell me the kid fresh out of college is getting more ROI out of a new iPhone than the business operator on one that's three years old?

Now, look at the guy who doesn't have a cell phone and is still faxing documents—how's he doing?

That's what I mean. Debating which AI model to use is like worrying about which carrier to get cell service through. Meanwhile, most business owners haven't figured out what to do with the last model.

Here's what's easy to miss in the noise: the models got good. Really good. Three years ago, AI couldn't write a decent email. Now it can draft proposals, analyze contracts, and handle first-line customer support. Small and medium-sized businesses have access to the same capabilities enterprise companies are spending millions to deploy.

So why are some businesses pulling ahead while others spin their wheels?

It's not which AI they picked. It's how they run it.


What Are You Waiting For?

82% of enterprise decision-makers now use AI weekly. 46% use it daily. These aren't employees experimenting on the side—these are the people running things. VPs, C-suite, the ones setting strategy and making decisions. (Wharton, 2025)

Meanwhile, only about 9% of small businesses have adopted AI. (SBA, 2025) That's SBA's cut of Census data on firms reporting active AI use, not just experimentation.

That's the gap. The models are the same. You have access to the same AI enterprise leaders are using daily.

4 Years in AI: Trial by Fire


In 2022, I walked away from a 15-year career in bond trading to build AI models. I thought I knew exactly how this would play out.

I was right about the opportunity. I was wrong about the timeline.

That gap between promise and payoff is the real story, and it's the reason most small businesses still haven't figured out how to make AI work for them.