Malachi Burke is ZTAG's lead architect and primary quality gate. His skepticism toward AI/automation is not ideological but pragmatic: he's seen brittle auto-generated code.
The win: Frame agentic workflows as force multipliers for tedious work, not replacements for thinking. Let Malachi control adoption, and he becomes the tool's strongest advocate.
| Dimension | Pattern | Implication |
|---|---|---|
| Communication | Direct, question-driven, document-focused | He wants to understand WHY, not just get results |
| Decision-making | Evidence-based (testing, prototypes, logs) | "Show, don't tell": demos beat explanations |
| Quality Standards | Extremely high (edge cases, regression testing, maintainability) | Can't ship "good enough" with him as architect |
| Leadership | Heavy mentoring of juniors, high expectations | Invests in team growth |
| AI Skepticism | Pragmatic (wants tools to PROVE value) | He'll use AI if it clearly helps without introducing risk |
- Too autonomous → he loses oversight
- "Move fast, fix later" → accumulates tech debt he resists
- Black-box solutions → he can't debug or maintain them
- Minimal documentation → violates his standards
- "Trust me" → he needs evidence
Phase 1
Goal: Prove value on tedious work Malachi hates
Tasks:
Success = Zero regressions + ≥10% velocity gain + Malachi approval
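The Phase 1 exit criteria only carry weight if they are measured the same way every sprint. Below is a minimal sketch of that measurement, assuming sprint story points as the velocity proxy; the point totals are placeholder data, and only the zero-regression and 10% thresholds come from this plan (Malachi's sign-off is tracked separately).

```python
# phase1_check.py - rough sketch of how the Phase 1 exit criteria could be scored.
# Sprint point numbers are placeholders; pull real figures from whatever tracker
# ZTAG already uses.

BASELINE_SPRINTS = [34, 31, 36]   # story points in the sprints before the pilot (example data)
PILOT_SPRINTS = [38, 37, 40]      # story points during the pilot (example data)
REGRESSIONS = 0                   # production regressions traced to agent-generated code

baseline = sum(BASELINE_SPRINTS) / len(BASELINE_SPRINTS)
pilot = sum(PILOT_SPRINTS) / len(PILOT_SPRINTS)
velocity_gain = (pilot - baseline) / baseline

phase_1_passed = REGRESSIONS == 0 and velocity_gain >= 0.10
print(f"Velocity gain: {velocity_gain:.1%}, regressions: {REGRESSIONS}, "
      f"Phase 1 {'passed' if phase_1_passed else 'not passed yet'}")
```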
Phase 2
Goal: Reduce the Malachi bottleneck while maintaining quality
Tasks:
Success = Team confidence + Sustained quality + Malachi championing the tools
Phase 3
Goal: Free Malachi for architecture, mentoring, and vision
Tasks:
Success = Malachi focused on hard problems, team more independent
"Right now, you're the bottleneck on code review, architecture decisions, and documentation sync. That won't scale as we grow Code 5. I want tools that handle routine work so you can focus on the decisions only you can make. We'll start small, prove it works, and you control the dial. Deal?"
"Will this replace my team?"
→ No. It eliminates boilerplate. They'll still implement the logic (where they learn), and you'll have more time to mentor their thinking.
"What if it breaks something?"
→ Same risk as a junior. You review before it ships. AI just handles the legwork faster.
"How do I know it won't introduce subtle bugs?"
→ We start with low-risk work (boilerplate, not logic). Phase 1 succeeds only if there are zero regressions. If it doesn't work, we stop.
Risks and mitigations:
| Risk | Mitigation |
|---|---|
| Over-automation erodes quality | Code review requires "Why is this here?" verification + test coverage before merge |
| Malachi feels de-valued | Frame as HIS productivity multiplier; shift his work to bigger problems |
| AI produces "works but breaks later" code | Every agentic module needs ≥80% test coverage + architecture comments + manual review by Malachi (see the merge-gate sketch after this table) |
| Juniors stop thinking | AI suggests → Human reviews → Human implements → Human tests (Malachi spot-checks decisions where the AI was wrong) |
| Team resistance | Start with pilots; celebrate wins; Malachi becomes advocate (if he approves, team trusts it) |
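To keep the coverage and review requirements from depending on anyone's memory, the gate can be mechanical. Here is a minimal pre-merge check, assuming the project uses pytest with the pytest-cov plugin; the package name `ztag` is a hypothetical placeholder, and the 80% floor is the one from the table above.

```python
# merge_gate.py - minimal sketch of the pre-merge quality gate described above.
# Assumes pytest + pytest-cov; "ztag" is an illustrative package name.
import subprocess
import sys

COVERAGE_FLOOR = 80  # per the risk table: agentic modules need >=80% coverage

def main() -> int:
    # --cov-fail-under makes pytest exit non-zero when coverage drops below the
    # floor, so CI blocks the merge and routes the change back for human review.
    result = subprocess.run(
        [
            "pytest",
            "--cov=ztag",                      # hypothetical package name
            f"--cov-fail-under={COVERAGE_FLOOR}",
            "--quiet",
        ]
    )
    if result.returncode != 0:
        print("Merge blocked: tests failed or coverage below "
              f"{COVERAGE_FLOOR}%; escalate to the reviewing architect.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

A gate like this keeps "manual review by Malachi" focused on design judgment rather than on catching missing tests.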
Malachi's skepticism is your quality-control mechanism, so introduce agentic workflows through him, not around him.
In 6 months, Malachi will likely say: "These tools save us 20% on routine work. That's a win."
Timeline: 6 months to full integration
Investment: ~20% of Malachi's time for oversight (decreases over time)
Payoff: ≥15% velocity gain, better code quality, sustainable growth
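As a sanity check on those last two numbers, a back-of-envelope calculation shows the payoff clears the oversight cost even before oversight tapers off. Team size and weekly hours below are assumptions; only the 20% oversight share and the 15% velocity target come from this plan.

```python
# roi_sketch.py - back-of-envelope check that the payoff exceeds the oversight cost.
TEAM_SIZE = 5            # assumption, not from the plan
HOURS_PER_WEEK = 40      # assumption

team_capacity = TEAM_SIZE * HOURS_PER_WEEK       # 200 engineer-hours/week
oversight_cost = 0.20 * HOURS_PER_WEEK           # Malachi's review time: 8 h/week, tapering later
velocity_gain = 0.15 * team_capacity             # the >=15% target: 30 h/week

net_gain = velocity_gain - oversight_cost
print(f"Net weekly gain: {net_gain:.0f} engineer-hours "
      f"({net_gain / team_capacity:.0%} of team capacity)")
# With these assumptions the plan nets about 22 h/week (~11% of capacity),
# and the margin widens as Malachi's oversight time decreases.
```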