Your Next AI Hire Isn't a Person
David Hajdu
February 2026
I read Every's piece yesterday about the next chapter of their consulting practice. My first reaction was: we've been living this same reality at Edge8, just on the other side of the world. And we've done a terrible job of talking about it.
So let me fix that.
For the past two years, our team in Ho Chi Minh City has been building AI programs for companies that most Silicon Valley consultancies would never touch. Textile factories in Vietnam. Venture capital firms in Seattle. Hotel chains. Scrappy startups with more ambition than bandwidth.
Here's what we've learned: the gap between companies experimenting with AI and companies actually running on AI is massive. And closing it takes more than a few workshops and a prompt library.
It takes building an actual team of AI agents. And then (this is the part nobody talks about) staffing the humans who can manage them.

The AI Hire Most Companies Forget to Make
An MIT study found that 95% of corporate AI pilots deliver zero return on the investment. Zero. Not bad returns. Not marginal. Nothing.
That used to surprise me. It doesn't anymore.
Here's what happens: a company signs up for some AI tools, runs a few experiments, gets inconsistent results, and quietly shelves the whole thing. Six months later someone at the leadership table asks, "Whatever happened to that AI thing we were doing?"
I've seen this pattern play out dozens of times now. The technology is never the problem. It's that the strategy, the people, and the tools aren't aligned. They're trying to get AI to do something useful without first figuring out what "useful" actually means for their business.
That's the problem Edge8 was built to solve.
Most companies don't fail at AI because the tools don't work. They fail because no one owns the AI hire. Treating AI like software instead of a hire means no clear goals, no performance management, and no accountability: exactly the conditions where even great hires fail.
What This Actually Looks Like
Every's Natalia Quintero made a point I really liked. The best place to start with AI isn't always the most obvious team. Sometimes it's the bottleneck nobody's talking about.
We've seen this across wildly different industries.
Venture Capital: Making the CRM Actually Smart
One of our VC clients in Seattle was drowning in deal flow. Their analysts were spending 60% of their time on data entry. Copying company info from pitch decks into spreadsheets. Cross-referencing Crunchbase. Formatting things for the investment committee. The actual analysis, the part that requires a human brain, was getting squeezed into whatever time was left.
We built an AI program that normalizes deal flow automatically, enriches it from public sources, scores opportunities against the firm's thesis, and populates the CRM with clean, structured data. Their analysts went from spending most of their week on data hygiene to spending most of their week on actual investing. The CRM went from a graveyard of half-filled records to something the partners actually open every morning.
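To make the scoring step concrete, here's a minimal sketch of what a thesis-scoring layer can look like. Everything in it is hypothetical (the field names, target sectors, and weights are made up for illustration, not the client's actual model):

```python
from dataclasses import dataclass

# Hypothetical thesis weights: how much the firm cares about each signal.
THESIS_WEIGHTS = {"sector_fit": 0.4, "stage_fit": 0.3, "traction": 0.3}

TARGET_SECTORS = {"fintech", "logistics"}
TARGET_STAGES = {"seed", "series a"}

@dataclass
class Deal:
    company: str
    sector: str
    stage: str
    arr_usd: float  # annual recurring revenue pulled from the deck or public sources

def normalize(raw: dict) -> Deal:
    """Coerce messy pitch-deck fields into one clean, typed record."""
    return Deal(
        company=raw["company"].strip(),
        sector=raw.get("sector", "").strip().lower(),
        stage=raw.get("stage", "").strip().lower(),
        arr_usd=float(raw.get("arr_usd") or 0),
    )

def score(deal: Deal) -> float:
    """Score a deal 0-1 against the (hypothetical) investment thesis."""
    signals = {
        "sector_fit": 1.0 if deal.sector in TARGET_SECTORS else 0.0,
        "stage_fit": 1.0 if deal.stage in TARGET_STAGES else 0.0,
        "traction": min(deal.arr_usd / 1_000_000, 1.0),  # cap credit at $1M ARR
    }
    return sum(THESIS_WEIGHTS[k] * v for k, v in signals.items())

raw = {"company": " Acme Pay ", "sector": "Fintech", "stage": "Seed", "arr_usd": "500000"}
deal = normalize(raw)
print(deal.company, round(score(deal), 2))
```

The real system does the enrichment from public sources before this step; the point is that once records are normalized, scoring against a thesis is a small, auditable function rather than an analyst's afternoon.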
Factories: Gut Feel to Real Data
Kyungbang is a Korean textile company with a factory here in Vietnam. Their quality control data was trapped in PDFs. Test results from different suppliers, different machines, different fiber types, all in slightly different formats. Pricing decisions were basically vibes because nobody could see the full picture.
We built a platform that parses every quality test report, standardizes everything across suppliers and machines, flags anomalies in real time, and lets anyone on the team search their quality history in plain English. When you can see that Supplier A's fiber consistency dropped 12% over six months while Supplier B's improved, the pricing conversation gets a lot more interesting.
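The anomaly-flagging idea is simpler than it sounds. Here's a toy sketch of the drift check, assuming test reports have already been parsed into per-supplier time series (the readings, window, and threshold below are invented for illustration):

```python
import statistics

# Hypothetical normalized records: one fiber-consistency reading per test report.
readings = {
    "Supplier A": [95, 94, 93, 88, 84, 80],  # drifting down over six months
    "Supplier B": [84, 85, 86, 88, 89, 90],  # improving
}

def drift(series, window=3):
    """Percent change between the first and last `window`-sized averages."""
    head = statistics.mean(series[:window])
    tail = statistics.mean(series[-window:])
    return (tail - head) / head * 100

def flag_anomalies(data, threshold=-10.0):
    """Flag any supplier whose consistency dropped past the threshold."""
    return {s: round(drift(v), 1) for s, v in data.items() if drift(v) <= threshold}

print(flag_anomalies(readings))
```

Comparing windowed averages instead of single readings keeps one noisy test report from triggering a false alarm; the production version layers per-machine and per-fiber-type baselines on top of the same idea.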
Startups: The Founder Can't Write Everything
Early-stage startups have a different problem. No legacy systems. No data warehouse. Just a founder with a vision and not enough hours in the day. Content is inconsistent. Brand voice changes depending on who wrote it that week. Marketing feels permanently reactive.
We build automations that capture the founder's voice, keep brand consistency across every channel, and turn one idea into a full distribution plan. Blog, LinkedIn, email, social. One founder told me this gave him back ten hours a week. That's ten hours of fundraising and customer conversations that just weren't happening before.
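The fan-out step is mostly structure, not magic. A sketch of the idea, with made-up channel constraints (the real briefs carry the founder's voice profile too):

```python
# Hypothetical channel constraints for fanning one idea out to every channel.
CHANNELS = {
    "blog": {"max_words": 1200, "tone": "long-form, first person"},
    "linkedin": {"max_words": 200, "tone": "punchy, one takeaway"},
    "email": {"max_words": 300, "tone": "personal, single call to action"},
}

def distribution_plan(idea: str) -> list[dict]:
    """Turn one idea into a per-channel brief for a drafting agent to fill."""
    return [
        {"channel": channel, "idea": idea, **rules}
        for channel, rules in CHANNELS.items()
    ]

for brief in distribution_plan("Why your next AI hire isn't a person"):
    print(brief["channel"], "->", brief["max_words"], "words")
```

The agent that drafts each piece changes monthly as models improve; the briefs, with their channel rules and voice constraints, are the part that stays stable.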
Our Process: Five Steps, Not Four
Every describes a four-step process: set strategy, build workflows, train teams, stick around as chief AI officer. We do all of those. But we've added a fifth that I think is the real differentiator.
It comes from a pretty simple observation: AI agents need human managers.
Step 1: AI Capabilities Audit. Before we build anything, we figure out where you actually are. Not where you think you are. We interview people, survey teams, map processes. We find what's documented, what's tribal knowledge, and what's being held together by that one person who "just knows how things work." One factory was convinced they needed AI for demand forecasting. The audit showed their underlying data was so messy that no model would give them anything useful. We started with cleanup instead and saved them six months.
Step 2: Build the Automations. This is where agents get built. Not demos. Not proofs of concept. Production systems that do real work. Deal flow normalization running 24/7. A quality data platform replacing weeks of PDF parsing. Travel Buddy for Wink Hotels, a multilingual AI companion handling guest recommendations across every property. AI dev tools for Abound Health that boosted engineering productivity 30%. Each one compounds. The more it runs, the smarter it gets.
Step 3: Train Your Team. Tools without training are shelfware. We meet people where they are. Execs who need to understand what AI can and can't do. Ops managers working alongside agents every day. Skeptics who've heard every tech promise before and aren't buying it. The goal isn't turning everyone into a prompt engineer. It's building AI fluency. Knowing what an agent is doing, whether it's doing it well, and when to step in. Like managing a new hire. You don't do their job. You manage their output.
Step 4: Stay On as CAIO. AI isn't a project with a start and end date. The tech moves monthly. We stick around as your fractional Chief AI Officer. Watching performance, spotting new opportunities, keeping things from going stale.
Step 5: Staff AI Engineers. This is where we go further. Most consultancies do steps one through four and wave goodbye. But agents need ongoing care. Someone has to monitor them, update them when processes change, build new ones, fix the ones that break. That someone is an AI Engineer, and most companies don't have one. We staff them. Embedded with your team or managed from ours. This is the step that turns an AI project into an AI capability.
Why Now
A year ago, building a custom AI agent required serious engineering talent. That's not true anymore. The tools have caught up. The bottleneck now isn't technical. It's organizational. Can your team actually adopt this stuff? Do you have someone who'll keep it running after the consultants leave?
The companies that win from here aren't the ones with the fanciest tools. They're the ones that figured out how to wire AI into how they actually work. And staffed the people to sustain it.
That's what we build.
If you want to start on your own, here's the simplest thing you can do this week: walk through your team's calendar and find the three tasks that are most tedious, most repetitive, and most clearly documented. Those are your first automations. Everything else follows.
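That triage is literally just a scoring pass. If a spreadsheet helps, here's the whole exercise in a few lines (the task list and 1-5 scores below are made up; yours come from the calendar walk):

```python
# Hypothetical weekly tasks scored 1-5 on the three criteria from the text.
tasks = [
    {"name": "CRM data entry",      "tedious": 5, "repetitive": 5, "documented": 4},
    {"name": "Weekly status email", "tedious": 4, "repetitive": 5, "documented": 5},
    {"name": "Client negotiation",  "tedious": 2, "repetitive": 1, "documented": 1},
    {"name": "Invoice formatting",  "tedious": 5, "repetitive": 4, "documented": 5},
]

def automation_score(task):
    """Higher score = better first automation candidate."""
    return task["tedious"] + task["repetitive"] + task["documented"]

top_three = sorted(tasks, key=automation_score, reverse=True)[:3]
print([t["name"] for t in top_three])
```

Notice what falls out: the judgment-heavy, poorly documented work stays human. That's the point.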
And if you want help, I'm at dave@edge8.ai.
Dave Hajdu is the founder of Edge8 and the AI Officer Institute. Co-founder and board member of TINYpulse. Based in Ho Chi Minh City and Seattle.