Code5 AI Adoption Strategy: Accelerating Without Alienating

Date: 2026-02-16
Context: Analysis of ZTAG dev meetings + OpenClaw interview insights
Objective: Get Malachi bought in through real-world outcomes, not philosophy


Executive Summary

Malachi Burke is AI-skeptical but not AI-hostile. His limited openness to agentic approaches (CI/CD pipeline, acceptance testing) provides entry points. Ryan Summers is already an effective AI user whose work Malachi respects. The junior devs (UTF LABS) are untapped potential. Strategy: proof over philosophy, controlled wins, let Malachi own his ideas.


Current State Analysis

Malachi Burke - Technical Lead

Leadership Style:

AI Stance:

His Stated Openness (from your notes):

"The best fit for agentic approach seems to be in assisting us maintaining a CI/CD pipeline. Once that is in place, the next best gamble would be an acceptance testing assist. The way that would work would be you would use a live feed camera to hear and watch the output on a ztagger device and the agent could make crude judgements about how well that fit the requirement. You'd have to filter it a LOT and that's OK because QA is the front lines."

Key Insight: He's already designed an agentic use case. He just needs to see it work.

Ryan Summers - Support Engineer

Current AI Usage:

Relationship with Malachi:

UTF LABS Junior Devs (Faisal, Shan, Basim)

Profile:


Strategic Framework

Core Principle: Proof Over Philosophy

From OpenClaw interview (Peter Steinberger):

"It needs to be experienced. And from that time on... it clicked for people."

"You have to approach it like a conversation with a very capable engineer who... sometimes needs a little help."

Malachi won't be convinced by arguments. He needs to SEE it work in HIS domain.


Three-Track Tactical Plan

Track 1: The Ryan Showcase (Immediate)

Goal: Make Ryan's AI-assisted productivity undeniable

Actions:

  1. Have Ryan briefly note when AI helped in PRs/commits
  2. Document specific wins: "AI caught this edge case," "AI wrote these unit tests in 10 min"
  3. Let results speak — no evangelizing needed

Why It Works:

Timeline: Weeks 1-2


Track 2: The CI/CD Play (Malachi's Own Idea)

Goal: Let Malachi own the agentic CI/CD initiative

Actions:

  1. Frame it as Malachi's initiative (because it IS his idea)
  2. Have an agent help set up GitHub Actions / automated testing
  3. Let Malachi validate and "pummel" the AI suggestions
  4. When it works, he owns the win

Why It Works:

Implementation Notes:
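One low-risk way to start is a single agent-maintained merge gate: the agent proposes the pipeline step, Malachi pummels it, and the gate only ships once he signs off. A minimal sketch in Python, assuming a pytest-based suite; the command, the `gate` helper, and the verdict strings are illustrative assumptions, not Code5's actual tooling:

```python
# Hypothetical CI gate script: run the unit test suite and fail the
# pipeline on any regression. An agent can draft and maintain this,
# while Malachi reviews every change before it lands.
import subprocess
import sys


def run_suite(cmd: list[str]) -> int:
    """Run the test command and return its exit code."""
    result = subprocess.run(cmd)
    return result.returncode


def gate(exit_code: int) -> str:
    """Translate an exit code into a pipeline verdict."""
    return "pass" if exit_code == 0 else "fail"


if __name__ == "__main__":
    code = run_suite(["pytest", "-q"])
    print(f"CI verdict: {gate(code)}")
    sys.exit(code)
```

Keeping the gate this small means Malachi can audit every line, which matters more for buy-in than pipeline sophistication.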

Timeline: Weeks 2-4


Track 3: Junior Acceleration (Medium-term)

Goal: Quietly uplevel UTF LABS with AI tools

Actions:

  1. Introduce Cursor or Claude Code to one junior (Shan or Basim) for a bounded task
  2. Have them report results factually: "Built X in Y time"
  3. No philosophy — just output metrics

Why It Works:

Training Approach:
From OpenClaw interview: "If you drive it right, Opus can make more elegant solutions, but it requires more skill."

Teach the juniors to:

Timeline: Month 2


Bonus Track: Acceptance Testing Prototype

Goal: Build Malachi's vision and show him HIS idea working

His Vision:

"You would use a live feed camera to hear and watch the output on a ztagger device and the agent could make crude judgements about how well that fit the requirement."

Actions:

  1. Build crude prototype using vision model + ztagger output
  2. Demonstrate filtering approach (he expects to filter heavily)
  3. Position QA team as "front lines" with AI assist
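The heavy filtering Malachi expects can be prototyped before any camera or model is wired up. A minimal sketch, assuming frames are periodically sampled and judged pass/fail; `judge_frame`, the 0.6 ratio, and the verdict labels are all illustrative assumptions, not Code5 decisions:

```python
# Sketch of the acceptance-testing filter: sample many crude per-frame
# judgements from a vision model watching a ztagger device, and only
# surface a failure to QA when most frames agree, so the noisy model
# output is filtered "a LOT" before humans see it.
from collections import Counter


def judge_frame(frame) -> str:
    """Placeholder: send one frame to a vision model, get 'pass'/'fail'."""
    raise NotImplementedError("wire up a camera feed and vision model here")


def filter_judgements(judgements: list[str],
                      min_fail_ratio: float = 0.6) -> str:
    """Aggregate per-frame verdicts into a single filtered verdict.

    Only flags a failure when at least min_fail_ratio of sampled
    frames agree, which is the aggressive filtering step.
    """
    if not judgements:
        return "no-data"
    counts = Counter(judgements)
    fail_ratio = counts.get("fail", 0) / len(judgements)
    return "flag-for-qa" if fail_ratio >= min_fail_ratio else "pass"
```

Separating the filter from the model makes the prototype easy to demo: Malachi can tune the ratio himself and see exactly how much noise gets suppressed before QA is involved.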

Why It Works:

Timeline: Month 2-3


What NOT to Do

| Avoid | Why |
|---|---|
| Philosophical debates about AI | Malachi is a skeptic by nature |
| Forcing tool adoption | He prefers tools he's familiar with |
| Letting AI make architectural decisions | He cares about code integrity |
| Hiding AI usage | Transparency builds trust |
| Over-promising AI capabilities | He'll catch the gaps and lose trust |

Key Reframe

From OpenClaw interview:

"Partly why I find it quite easy to work with agents is because I led engineering teams before... you have to understand and accept that your employees will not write code the same way you do. Maybe it's also not as good as you would do, but it will push the project forward."

Malachi already manages juniors who don't code exactly like him.

AI is just another junior: it needs clear direction, close review, and correction before its work merges.

Frame it this way and it maps to his existing mental model.


Timeline Summary

| Phase | Timeframe | Actions | Success Metric |
|---|---|---|---|
| Now | Week 1-2 | Ryan documents AI wins | 3+ documented instances |
| Short | Week 2-4 | Malachi scopes CI/CD automation | First automated test passing |
| Medium | Month 2 | One junior tries Cursor on bounded task | Task completed, Malachi approves output |
| Medium | Month 2-3 | Acceptance testing prototype | Demo to Malachi, captures one real issue |
| Outcome | Month 3+ | Malachi becomes selective AI advocate | He recommends AI for specific use cases |

Success Definition

Not: Malachi becomes an AI evangelist
Yes: Malachi selectively recommends AI for validated use cases

Not: Everyone uses AI for everything
Yes: AI used where it demonstrably improves velocity without quality loss

Not: Philosophy wins
Yes: Results win


Supporting Evidence from Dev Meetings

Malachi's Skepticism (with nuance)

"I actually put in that API call because, you know, AI can slop in some things."
— Jan 2, 2026 [00:04:17]

"AI had suggested about a month ago that I do this TaskNotify Wait trick. And I really pummeled AI about it."
— Jan 2, 2026 [00:05:49]

Malachi's Quality Focus

"Some of the upsides that have come from it being hard to do is that I've had to do a lot of testing. And I followed unit test practices."
— Feb 6, 2026 [00:12:35]

Ryan's AI Usage

"On Cursor, I've been using Claude, and then I also use ChatGPT alongside it, just to kind of have them cross-reference each other."
— Feb 7, 2026 [00:04:08]

Malachi Valuing Ryan's Work

"You can jump in there, too, although your proof of concepting is really very valuable. Much appreciated."
— Feb 6, 2026 [00:18:00]


Appendix: OpenClaw Interview Insights Applied

| OpenClaw Insight | Application to Code5 |
|---|---|
| "It needs to be experienced" | Don't argue, demo |
| "Approach it like conversation with capable engineer" | AI as junior dev needing direction |
| "Employees won't code same way you do" | Malachi already accepts this with UTF LABS |
| "Playing is the best way to learn" | Let juniors experiment on bounded tasks |
| "I don't read the boring parts of code" | Use AI for boilerplate, humans for architecture |
| "Every time I merge, I ask what can we refactor" | Build AI into review workflow, not replacement |

Report generated from analysis of 202 ZTAG dev meeting transcripts and OpenClaw/Lex Fridman interview.