| Aspect | Description |
|---|---|
| Title | Lead Architect, Code 5 (ZTAG) |
| Seniority | Veteran (80s/90s dev background) |
| Core Value | Quality + Maintainability over speed |
| Communication | Direct, evidence-based, document-focused |
| Decision Style | Slow but thorough; wants data before approving |
| AI Stance | Pragmatic skeptic: "Show me, don't tell me" |
✅ Working code + test coverage
✅ Clear architecture documents
✅ Mentoring juniors (he invests heavily)
✅ Root-cause analysis (not band-aids)
✅ Rigorous PR reviews
✅ Tools that reduce toil
❌ "Trust me, it works" (without tests)
❌ Shortcuts that accumulate tech debt
❌ Auto-generated code that's unmaintainable
❌ Black-box solutions
❌ Missing edge case handling
❌ Unclear or undocumented assumptions
"You're a bottleneck on routine tasks (code review, doc sync, test analysis). I want tools that eliminate toil so you focus on architecture & mentoring. We start small (boilerplate, analysis), prove it works, you decide if we expand. No surprises."
| Task | What AI Does | What Malachi Does | Success Metric |
|---|---|---|---|
| Boilerplate | Generate FreeRTOS scaffold + test template (see the sketch after this table) | Review structure, approve or modify | Time saved, zero rework |
| Test Gaps | Analyze coverage, flag untested paths | Decide which tests actually matter | Coverage increases, relevant gaps closed |
| Doc Sync | Flag where code & docs diverge | Decide if code or docs is wrong, assign fixes | Discrepancies decrease |
Win Condition: Zero regressions + ≥10% time savings + Malachi says "This adds value"
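To make the Phase 1A deliverable concrete, here is a minimal sketch of the kind of scaffold the AI might generate and Malachi would review. Everything in it is an illustrative assumption, not a project value: the task and helper names, queue depth, stack size, and ADC range limits are all hypothetical.

```c
/*
 * Illustrative sketch only: an AI-generated FreeRTOS task scaffold of the
 * kind Malachi reviews in Phase 1A. Names, queue depth, stack size, and
 * range limits are assumptions, not project values.
 */
#include <stdint.h>
#include <stdbool.h>

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

#define SENSOR_QUEUE_DEPTH    8u                     /* assumed ISR burst size */
#define SENSOR_TASK_STACK     256u                   /* stack depth in words   */
#define SENSOR_TASK_PRIORITY  (tskIDLE_PRIORITY + 2)
#define SENSOR_MIN_RAW        16u                    /* assumed ADC floor      */
#define SENSOR_MAX_RAW        4080u                  /* assumed ADC ceiling    */

static QueueHandle_t xSensorQueue;

/* Pure helper, kept free of RTOS calls so it is testable on the host. */
bool bReadingInRange(uint32_t ulReading)
{
    return (ulReading >= SENSOR_MIN_RAW) && (ulReading <= SENSOR_MAX_RAW);
}

/* Consumer task: blocks on the queue and handles one reading at a time. */
static void vSensorTask(void *pvParameters)
{
    uint32_t ulReading;

    (void)pvParameters;

    for (;;) {
        if (xQueueReceive(xSensorQueue, &ulReading, portMAX_DELAY) == pdPASS) {
            if (!bReadingInRange(ulReading)) {
                continue; /* out-of-range handling: a decision for review */
            }
            /* TODO: process the reading */
        }
    }
}

/* Called once from main() before vTaskStartScheduler(). */
void vSensorScaffoldInit(void)
{
    xSensorQueue = xQueueCreate(SENSOR_QUEUE_DEPTH, sizeof(uint32_t));
    configASSERT(xSensorQueue != NULL);

    BaseType_t xCreated = xTaskCreate(vSensorTask, "Sensor", SENSOR_TASK_STACK,
                                      NULL, SENSOR_TASK_PRIORITY, NULL);
    configASSERT(xCreated == pdPASS);
}
```

A matching host-side test template can target the pure helper, so no RTOS is needed to run it. This sketch assumes the Unity test framework as the harness; the specific edge cases shown (floor and ceiling bounds) are the kind of untested paths Phase 1B would flag:

```c
/*
 * Host-side test template (assumes the Unity test framework).
 * Exercises only the pure helper from the scaffold above.
 */
#include <stdint.h>
#include <stdbool.h>

#include "unity.h"

extern bool bReadingInRange(uint32_t ulReading);

void setUp(void) {}
void tearDown(void) {}

static void test_reading_below_floor_is_rejected(void)
{
    TEST_ASSERT_FALSE(bReadingInRange(15u));  /* one below the assumed floor */
}

static void test_readings_at_bounds_are_accepted(void)
{
    TEST_ASSERT_TRUE(bReadingInRange(16u));   /* assumed floor   */
    TEST_ASSERT_TRUE(bReadingInRange(4080u)); /* assumed ceiling */
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_reading_below_floor_is_rejected);
    RUN_TEST(test_readings_at_bounds_are_accepted);
    return UNITY_END();
}
```

Keeping bReadingInRange() free of RTOS calls is the design choice that makes this split work: the queue and task wiring stay in the scaffold, while the logic Malachi actually wants tested runs on the host.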
If Phase 1 succeeds:
"I've been looking at bottlenecks. You're gating on code review, doc sync, and architecture decisions. That won't scale. I want to try tools that handle routine work. 6-week pilot. You control it. If it doesn't work, we stop."
"Great. For Phase 1, you'll spend ~3 hours reviewing 3 scaffolds, flagging test gaps, and marking doc discrepancies. In return, you save ~10 hours on routine work. Worth it?"
Weekly:
Monthly:
🚩 Deploying AI tools without his approval
🚩 "The AI decided X" (he decides, AI suggests)
🚩 Cutting corners on testing to ship faster
🚩 Black-box tools he can't understand or customize
🚩 Skipping code review because "AI reviewed it"
🟢 "Here's what we tried, here's what it saved, here's your call"
🟢 Tools he can control/customize
🟢 Clear ROI (time saved, quality maintained)
🟢 Respecting his veto (he says no, you stop)
🟢 Making him the champion (if Malachi approves, team trusts it)
Primary: Claude Code (Extended)
Secondary: Cline (task-based workflows) + GitHub Actions (automated checks)
Malachi's skepticism isn't a weakness; it's your quality control. Let him gate the agentic workflows at every phase of the rollout:
Week 1-2: Phase 1A (boilerplate)
Week 2-3: Phase 1B (test gaps)
Week 3-4: Phase 1C (doc sync)
Week 4: Phase 1 review + Phase 2 decision
Week 5-8: Phase 2 (PR review, refactoring, field issues)
Week 9-12: Phase 3 (scaling)
Let Malachi decide. Don't surprise him. Don't force it. Show him data. Respect his "no." If he approves, he becomes your strongest advocate.
In 6 months: "These tools save us 20% on routine work while I focus on the hard decisions. That's a win."
Subject: Agentic Workflows Pilot - Your Input Needed
Body:
Malachi,
I've analyzed our bottlenecks. You're the gating factor on:
- Code review (every PR waits for you)
- Documentation sync (manually tracking changes)
- Architecture decisions (juniors need your input)
I want to explore tools that eliminate routine toil so you focus on decisions
only you can make. Start small, prove it works, you decide if we expand.
6-week pilot:
- Week 1-4: Low-risk proof (boilerplate, test gaps, doc sync)
- Week 4: Phase 1 review + decision on Phase 2
- If Phase 1 succeeds: Expand to PR review, refactoring, field diagnosis
Your involvement: ~3 hours Phase 1, reviewing scaffolds and approving tools.
In return: You save ~10 hours/month on routine work.
Interested? Let's talk.
- Quan
Document Version: 1.0
Last Updated: Feb 16, 2026
Purpose: Quick reference for implementing agentic workflows with Malachi