Date: 2026-02-16
Context: Analysis of ZTAG dev meetings + OpenClaw interview insights
Objective: Get Malachi bought in through real-world outcomes, not philosophy
Malachi Burke is AI-skeptical but not AI-hostile. His limited openness to agentic approaches (CI/CD pipeline, acceptance testing) provides entry points. Ryan Summers is already an effective AI user whose work Malachi respects. The junior devs (UTF LABS) are untapped potential. Strategy: proof over philosophy, controlled wins, let Malachi own his ideas.
Leadership Style:
AI Stance:
His Stated Openness (from your notes):
"The best fit for agentic approach seems to be in assisting us maintaining a CI/CD pipeline. Once that is in place, the next best gamble would be an acceptance testing assist. The way that would work would be you would use a live feed camera to hear and watch the output on a ztagger device and the agent could make crude judgements about how well that fit the requirement. You'd have to filter it a LOT and that's OK because QA is the front lines."
Key Insight: He's already designed an agentic use case. He just needs to see it work.
Current AI Usage:
Relationship with Malachi:
Profile:
From OpenClaw interview (Peter Steinberger):
"It needs to be experienced. And from that time on... it clicked for people."
"You have to approach it like a conversation with a very capable engineer who... sometimes needs a little help."
Malachi won't be convinced by arguments. He needs to SEE it work in HIS domain.
Goal: Make Ryan's AI-assisted productivity undeniable
Actions:
Why It Works:
Timeline: Weeks 1-2
Goal: Let Malachi own the agentic CI/CD initiative
Actions:
Why It Works:
Implementation Notes:
Timeline: Weeks 2-4
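One possible shape for the first CI/CD assist (a sketch, not ZTAG's actual pipeline): a post-test step that only summarizes failures, so the agent never touches the pass/fail verdict and Malachi keeps ownership of the gate. The pytest invocation, the Anthropic SDK call, and the model name below are assumptions; swap in whatever runner and model client the team already uses.

```python
"""Illustrative only: a CI step that runs the existing tests and asks a model to triage failures.

Assumes pytest and the Anthropic Python SDK; nothing here is ZTAG's real pipeline.
"""
import subprocess

import anthropic  # pip install anthropic


def run_tests() -> subprocess.CompletedProcess:
    # Run the suite exactly as CI already does; capture output for triage.
    return subprocess.run(
        ["pytest", "-q", "--maxfail=20"],
        capture_output=True,
        text=True,
    )


def triage_failures(test_output: str) -> str:
    # Ask for a short, skeptic-friendly summary: what failed, likely cause,
    # and which failures look flaky. A human still reads the raw log;
    # this only saves the first pass.
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whatever model the team prefers
        max_tokens=800,
        messages=[{
            "role": "user",
            "content": (
                "Summarize these test failures for a reviewer: group by probable "
                "cause, flag anything that looks flaky, and do not propose code "
                "changes.\n\n" + test_output[-20000:]  # keep only the tail within limits
            ),
        }],
    )
    return message.content[0].text


if __name__ == "__main__":
    result = run_tests()
    if result.returncode != 0:
        print(triage_failures(result.stdout + result.stderr))
    raise SystemExit(result.returncode)  # the agent never changes the pass/fail verdict
```

Keeping the model out of the merge decision mirrors the "review workflow, not replacement" insight and keeps the scope small enough for Malachi to own end to end.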
Goal: Quietly uplevel UTF LABS with AI tools
Actions:
Why It Works:
Training Approach:
From OpenClaw interview: "If you drive it right, Opus can make more elegant solutions, but it requires more skill."
Teach the juniors to: drive the model rather than accept its first output, stay on bounded tasks, cross-reference suggestions against a second model, and leave architectural decisions to humans.
Timeline: Month 2
Goal: Build Malachi's vision and show him HIS idea working
His Vision:
"You would use a live feed camera to hear and watch the output on a ztagger device and the agent could make crude judgements about how well that fit the requirement."
Actions:
Why It Works:
Timeline: Month 2-3
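One possible prototype shape for the demo (a sketch under heavy assumptions, covering only the video half of his "hear and watch" idea): grab a frame from the live feed, ask a vision-capable model for a crude pass/fail judgement against the stated requirement, and filter aggressively so only confident failures ever reach QA. OpenCV, the Anthropic SDK, the requirement text, the model name, and the 0.8 threshold are all placeholders, not Code5 decisions.

```python
"""Illustrative only: a crude camera-based acceptance check, per Malachi's sketch.

Assumes OpenCV for frame capture and the Anthropic Python SDK for the vision call;
the requirement text, capture device, and threshold are placeholders.
The point is heavy filtering: the agent flags, QA decides.
"""
import base64
import json

import cv2  # pip install opencv-python
import anthropic  # pip install anthropic

# Placeholder requirement text; the real one comes from the spec under test.
REQUIREMENT = "When the ztagger is tagged, the LED ring flashes red within one second."


def grab_frame(device_index: int = 0) -> bytes:
    # One frame from the live feed; a real harness would sample around the test stimulus.
    cap = cv2.VideoCapture(device_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera read failed")
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("jpeg encode failed")
    return jpeg.tobytes()


def crude_judgement(jpeg: bytes) -> dict:
    # Ask for a rough verdict plus confidence; a real harness would validate
    # the JSON instead of trusting the model to format it correctly.
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image", "source": {
                    "type": "base64", "media_type": "image/jpeg",
                    "data": base64.b64encode(jpeg).decode(),
                }},
                {"type": "text", "text": (
                    "Requirement: " + REQUIREMENT + "\n"
                    'Reply with JSON only: {"verdict": "pass"|"fail"|"unclear", '
                    '"confidence": 0-1, "reason": "..."}'
                )},
            ],
        }],
    )
    return json.loads(msg.content[0].text)


if __name__ == "__main__":
    result = crude_judgement(grab_frame())
    # Filter a LOT: only a confident "fail" gets surfaced, and only as a suggestion to QA.
    if result["verdict"] == "fail" and result["confidence"] >= 0.8:
        print("Flag for QA review:", result["reason"])
```

The aggressive filter is the point: it operationalizes his "filter it a LOT" caveat, so QA stays the front line and the agent only suggests.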
| Avoid | Why |
|---|---|
| Philosophical debates about AI | He's a skeptic by nature; arguments entrench him, demos move him |
| Forcing tool adoption | He prefers tools he's familiar with |
| Letting AI make architectural decisions | He cares about code integrity |
| Hiding AI usage | Transparency builds trust |
| Over-promising AI capabilities | He'll catch the gaps and lose trust |
From OpenClaw interview:
"Partly why I find it quite easy to work with agents is because I led engineering teams before... you have to understand and accept that your employees will not write code the same way you do. Maybe it's also not as good as you would do, but it will push the project forward."
Malachi already manages juniors who don't code exactly like him.
AI is just another junior that needs: clear direction, bounded tasks, and review before anything merges.
Frame it this way and it maps to his existing mental model.
| Phase | Timeframe | Actions | Success Metric |
|---|---|---|---|
| Now | Weeks 1-2 | Ryan documents AI wins | 3+ documented instances |
| Short | Weeks 2-4 | Malachi scopes CI/CD automation | First automated test passing |
| Medium | Month 2 | One junior tries Cursor on bounded task | Task completed, Malachi approves output |
| Medium | Month 2-3 | Acceptance testing prototype | Demo to Malachi, captures one real issue |
| Outcome | Month 3+ | Malachi becomes selective AI advocate | He recommends AI for specific use cases |
Not: Malachi becomes an AI evangelist
Yes: Malachi selectively recommends AI for validated use cases
Not: Everyone uses AI for everything
Yes: AI used where it demonstrably improves velocity without quality loss
Not: Philosophy wins
Yes: Results win
"I actually put in that API call because, you know, AI can slop in some things."
— Jan 2, 2026 [00:04:17]
"AI had suggested about a month ago that I do this TaskNotify Wait trick. And I really pummeled AI about it."
— Jan 2, 2026 [00:05:49]
"Some of the upsides that have come from it being hard to do is that I've had to do a lot of testing. And I followed unit test practices."
— Feb 6, 2026 [00:12:35]
"On Cursor, I've been using Claude, and then I also use ChatGPT alongside it, just to kind of have them cross-reference each other."
— Feb 7, 2026 [00:04:08]
"You can jump in there, too, although your proof of concepting is really very valuable. Much appreciated."
— Feb 6, 2026 [00:18:00]
| OpenClaw Insight | Application to Code5 |
|---|---|
| "It needs to be experienced" | Don't argue — demo |
| "Approach it like conversation with capable engineer" | AI as junior dev needing direction |
| "Employees won't code same way you do" | Malachi already accepts this with UTF LABS |
| "Playing is the best way to learn" | Let juniors experiment on bounded tasks |
| "I don't read the boring parts of code" | Use AI for boilerplate, humans for architecture |
| "Every time I merge, I ask what can we refactor" | Build AI into review workflow, not replacement |
Report generated from analysis of 202 ZTAG dev meeting transcripts and OpenClaw/Lex Fridman interview.