
Malachi's Development Style & Agentic Workflow Integration

Comprehensive Analysis for Code 5 (ZTAG System)

Prepared for: Quan Gan, ZTAG Leadership
Analysis Period: 120+ meetings (May 2025 - Feb 2026)
Subject: Lead Architect Malachi Burke
Context: Senior veteran developer (industry experience dating back to the 1980s/90s), does not trust AI, requires detail & accuracy


Executive Summary

Malachi Burke is ZTAG's lead architect and principal technical decision-maker for the Code 5 system. Analysis of 120+ meetings reveals a highly disciplined, detail-oriented, collaborative developer with three critical characteristics:

  1. Manual-first, verification-heavy approach - Prefers hands-on testing, skeptical of automation
  2. Mentorship-driven leadership - Invests heavily in junior developer growth through direct feedback
  3. Quality-over-velocity mindset - Resists rushing, emphasizes robust architecture over speed

Key Insight

Malachi's skepticism toward AI is not ideological but pragmatic: he has seen brittle code come out of auto-generation, and he demands that ANY tool (including AI) prove its value through rigorous testing and maintainability.


Part I: Malachi's Development Style

1. Communication Pattern

Style Characteristics:

Example Topics from Meeting Summaries:

When He Accepts Suggestions:

When He Rejects Suggestions:


2. Detail & Accuracy Requirements

He Demands:

Red Flags He Raises:

His Question Patterns:


3. Decision-Making Flow

Typical Pattern (from meeting analysis):

  1. Define the problem precisely (IR packet corruption, MQTT sync issues, state machine clarity)
  2. Explore the landscape (What approaches exist? What did we try before?)
  3. Prototype & test (Small working example > lengthy discussion)
  4. Gather evidence (Logs, traces, measurable improvements)
  5. Document the decision (Why this approach? Trade-offs? Reversibility?)
  6. Implement with review (Code review focusing on clarity, testing, edge cases)

What Works:

What Doesn't Work:


4. Collaboration Style with Team

With Junior Developers (Ryan, Basim, Shan, Faisal, Sean):

With Leadership (Quan):

Recurring Themes in Team Interactions:


5. Concerns About AI/Automation

What He's Observed:

His Position (Inferred from Meetings):

Where He Sees AI Value:

Where He Resists:


Part II: Current Agentic Workflow Landscape

Analysis of Current Tools (Cursor, Windsurf, Cline, Claude Code, GitHub Copilot)

How Malachi Views Them Today:

GitHub Copilot (Line-level completion)

Cursor & Windsurf (AI-assisted IDE)

Cline & Claude Code (Code generation & planning)


Why Current Agentic Workflows Conflict with Malachi's Approach

| Agentic Workflow Trait | Malachi's Preference | Conflict |
|---|---|---|
| Speed | Quality & correctness | Rushing introduces bugs he has to fix later |
| Autonomous execution | Human oversight & review | Loses opportunity to learn & maintain code |
| Black-box solutions | Transparent, traceable logic | Can't debug or modify if requirements change |
| "Move fast, fix later" | "Get it right, maintain forever" | Accumulates technical debt he resists |
| Minimal documentation | Explicit specs & rationale | Hard to onboard or modify later |

Part III: Recommended Agentic Workflow Integration

Principle: "Augment, Don't Replace"

Malachi's skepticism isn't about rejecting innovation; it's about maintaining quality standards. The path forward is to introduce agentic workflows as force multipliers for tedious work, not as replacements for thinking.


Phase 1: Pilot (Months 1-2)

Goal: Prove value on low-risk, well-defined tasks
Success Metric: Measurable velocity improvement with zero regression bugs

1A: Agentic Boilerplate Generation

Why It Works for Malachi:
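
To make the pilot concrete, the following is a minimal sketch of the kind of FreeRTOS/ESP-IDF boilerplate the tool would be asked to generate and hand off for human review. The names here (ir_event_t, ir_rx_task, the queue and stack sizes) are illustrative placeholders, not existing Code 5 code.

/*
 * Sketch of tool-generated FreeRTOS plumbing: a task that drains a queue of
 * received IR events. All identifiers are hypothetical placeholders.
 */
#include <stdint.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/queue.h"

#define IR_RX_QUEUE_LEN   16
#define IR_RX_TASK_STACK  4096
#define IR_RX_TASK_PRIO   5

typedef struct {
    uint32_t timestamp_us;   /* capture time of the packet */
    uint8_t  payload[32];    /* raw packet bytes (size is an assumption) */
    uint8_t  length;         /* valid bytes in payload */
} ir_event_t;

static QueueHandle_t s_ir_rx_queue;

static void ir_rx_task(void *arg)
{
    ir_event_t evt;
    for (;;) {
        /* Block until the driver posts a received packet. */
        if (xQueueReceive(s_ir_rx_queue, &evt, portMAX_DELAY) == pdTRUE) {
            /* Application logic stays human-written; the tool only
             * scaffolds the task/queue plumbing around it. */
            // handle_ir_packet(&evt);
        }
    }
}

void ir_rx_start(void)
{
    s_ir_rx_queue = xQueueCreate(IR_RX_QUEUE_LEN, sizeof(ir_event_t));
    configASSERT(s_ir_rx_queue != NULL);
    xTaskCreate(ir_rx_task, "ir_rx", IR_RX_TASK_STACK, NULL, IR_RX_TASK_PRIO, NULL);
}

The generated scaffold is reviewed like any other diff; the packet-handling logic stays human-written and goes through the normal review path.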

1B: Test Coverage Analysis

Why It Works for Malachi:

1C: Documentation Sync Checker

Why It Works for Malachi:


Phase 2: Expansion (Months 3-4)

Goal: Extend agentic workflows to higher-value tasks while maintaining quality
Prerequisite: Phase 1 pilots show zero regressions and measurable time savings

2A: PR Review Assistant (Pre-human review)

Why It Works for Malachi:

2B: Refactoring Proposals

Why It Works for Malachi:

2C: Field Issue Analysis

Why It Works for Malachi:


Phase 3: Scaling (Months 5-6)

Goal: Autonomous workflows for routine tasks; agentic support for decision-making
Prerequisite: Phases 1-2 show sustained improvement and team confidence

3A: Automated Testing Scaffolding

Why It Works for Malachi:
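
To illustrate, here is a sketch of what a generated scaffold could look like using Unity, the test framework bundled with ESP-IDF. The case names and the unit under test (ir_packet_parse) are hypothetical; the bodies are deliberately left failing so a bare scaffold can never merge as a "passing" test.

/*
 * Tool-generated test scaffolding (Unity, as used in ESP-IDF test apps).
 * Test bodies are TODOs for the developer; names are illustrative only.
 */
#include "unity.h"

/* Hypothetical unit under test. */
// #include "ir_packet.h"

TEST_CASE("ir_packet_parse rejects truncated V2 header", "[ir][regression]")
{
    /* TODO: build a deliberately short buffer and assert the parser
     * returns an error instead of reading past the end. */
    TEST_FAIL_MESSAGE("scaffold only - implement before merging");
}

TEST_CASE("ir_packet_parse accepts minimal valid V2 packet", "[ir]")
{
    /* TODO: construct the smallest legal packet and assert the decoded
     * fields match the spec. */
    TEST_FAIL_MESSAGE("scaffold only - implement before merging");
}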

3B: Dependency & Compatibility Checker

Why It Works for Malachi:


Phase 4: Strategic (Ongoing)

4A: Architecture Advisors (Claude Code)

4B: Knowledge Capture


Part IV: Risk Assessment & Mitigation

Risk #1: Over-Automation Erodes Code Quality

Symptom: Junior devs trust AI suggestions without thinking
Mitigation:

Risk #2: Malachi Feels Sidelined or De-valued

Symptom: "If AI can do this, why do you need me?"
Mitigation:

Risk #3: AI Produces Code That "Works But Breaks Later"

Symptom: Unmaintainable auto-generated code that passes tests but has hidden issues
Mitigation:

Risk #4: Team Becomes Dependent on "AI Thinks"

Symptom: Junior devs stop learning and become order-takers
Mitigation:

Risk #5: Malachi's Skepticism Spreads (Team Resistance)

Symptom: Team rejects tools before giving them fair shot
Mitigation:


Part V: Phased Rollout Plan

Timeline & Milestones

Month 1-2: PILOT
├─ 1A: Boilerplate generation (FreeRTOS, IDF templates)
├─ 1B: Test coverage analysis
└─ 1C: Documentation sync checker
   Milestone: Zero regressions, ≥10% velocity gain, Malachi approval

Month 3-4: EXPANSION
├─ 2A: PR review assistant (pre-human review)
├─ 2B: Refactoring proposals
└─ 2C: Field issue analysis
   Milestone: Team confidence, sustained quality, Malachi championing tools

Month 5-6: SCALING
├─ 3A: Automated test scaffolding
├─ 3B: Dependency checker
└─ 3C: Knowledge capture (decision records)
   Milestone: Reduced Malachi bottleneck, juniors more independent, sustained quality

Month 6+: STRATEGIC
├─ 4A: Architecture advisors (brainstorming partners)
├─ 4B: Tribal knowledge preservation
└─ 4C: Continued process optimization
   Milestone: Malachi focused on vision/mentoring, AI handles routine work

Success Criteria

✅ Code Quality: Zero net increase in bugs from agentic-assisted code
✅ Velocity: ≥15% improvement in time-to-merge for routine tasks
✅ Maintainability: Code written with AI assistance remains understandable 6+ months later
✅ Team Adoption: ≥70% of junior devs using agentic tools by Month 4
✅ Malachi Satisfaction: "These tools let me focus on the hard decisions"


Part VI: Recommended Tools & Configuration

Primary Recommendation: Claude Code (Extended) + Cline

Why:

Configuration for Code 5 Project

# System Prompt for ZTAG Code 5 Agentic Workflows

You are an assistant helping the ZTAG Code 5 development team (an embedded systems 
project using ESP-IDF, FreeRTOS, and custom IR/RF protocols).

## Your Role
- Suggest, don't decide
- Explain trade-offs explicitly
- Identify edge cases and risks
- Require test coverage before approval
- Respect existing architecture decisions

## Code 5 Standards You Must Know
- IR packets: V2 format with CSMA backoff
- MQTT sync: Document assumptions about ordering
- State machines: Prefer explicit (not implicit) states
- Testing: Every feature needs unit + regression tests
- Naming: Descriptive (no abbreviations that don't appear in spec)
- Comments: Explain WHY, not WHAT

## When to Say "No"
- If test coverage would be <70%
- If breaking legacy IR protocol compatibility
- If adding undocumented assumptions
- If the change touches architecture that would require Malachi's re-review

## Key Phrase
When in doubt, suggest: "This might work, but let's verify with [test/prototype] first."
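
For reference, here is a minimal sketch of what the "explicit states" standard looks like in practice: every state and event is a named enum value, and all transitions live in one switch, so a reviewer can trace behavior without decoding flag combinations. The transmit states and events below (including the CSMA-style WAIT_CLEAR backoff state) are hypothetical illustrations, not the actual Code 5 state machine.

/* Explicit-state transmit machine: illustrative names only. */
typedef enum {
    TX_STATE_IDLE,          /* nothing queued */
    TX_STATE_WAIT_CLEAR,    /* CSMA: channel busy, waiting out backoff */
    TX_STATE_SENDING,       /* packet on the wire */
    TX_STATE_DONE,          /* success, notify caller */
    TX_STATE_ERROR          /* retries exhausted */
} tx_state_t;

typedef enum {
    TX_EVENT_SUBMIT,
    TX_EVENT_CHANNEL_CLEAR,
    TX_EVENT_CHANNEL_BUSY,
    TX_EVENT_SEND_COMPLETE,
    TX_EVENT_TIMEOUT
} tx_event_t;

/* Pure transition function: every state/event pair is handled in one place. */
static tx_state_t tx_step(tx_state_t state, tx_event_t event)
{
    switch (state) {
    case TX_STATE_IDLE:
        return (event == TX_EVENT_SUBMIT) ? TX_STATE_WAIT_CLEAR : state;
    case TX_STATE_WAIT_CLEAR:
        if (event == TX_EVENT_CHANNEL_CLEAR) return TX_STATE_SENDING;
        if (event == TX_EVENT_TIMEOUT)       return TX_STATE_ERROR;
        return state;                        /* stay and keep backing off */
    case TX_STATE_SENDING:
        return (event == TX_EVENT_SEND_COMPLETE) ? TX_STATE_DONE : state;
    default:
        return state;                        /* DONE / ERROR are terminal */
    }
}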

Secondary Tools


Part VII: Conversation Starter with Malachi

Here's how to frame this with Malachi (respecting his style):


Opening

"Malachi, I've been analyzing where our bottlenecks are. Right now, you're the gating factor on:

That's not sustainable as we grow Code 5. I want to explore tools that handle routine work so you can focus on the hard problems."

The Ask

"Would you be willing to try a structured 6-week pilot? Specific tasks:

Goal: Show that agentic tools reduce toil while maintaining your quality bar. Zero regressions, measurable time saved."

Addressing His Concerns

"Will this replace junior devs?"

No. It handles boilerplate. They'll implement logic, which is where they learn. You'll have more time to mentor thinking, not just fix sloppy code.

"What if the AI generates something subtle wrong?"

Same risk as a junior dev. We'll have the same review process: submit → review → test → merge. You still gate it. AI just does the legwork faster.

"How do I know it won't break things?"

We start with low-risk tasks (boilerplate, analysis, not logic). Phase 1 succeeds only if we have zero regressions. If that doesn't work, we stop and re-evaluate.


Part VIII: Metrics to Track

During Pilot (Phase 1)

During Expansion (Phase 2-3)

Long-term (Phase 4+)


Part IX: Special Consideration β€” Malachi's Trust Model

Why His "No AI" Stance Works in His Favor

Malachi's skepticism isn't a weakness; it's a strength. In a team of 4+ junior devs, one senior developer with high standards is the quality control. By introducing agentic workflows through him (not around him), we:

  1. Preserve quality (his judgment gates every tool)
  2. Build institutional knowledge (he learns the boundaries of what tools can do)
  3. Maintain team trust (if Malachi approves, juniors will use it)
  4. Future-proof the team (when Malachi trains the next senior, they'll inherit vetted workflows)

The Real Conversation

The goal isn't to convince Malachi that AI is amazing. It's to show him:

"Here's tedious work humans shouldn't do. Here's a tool that eliminates it. You control it. You decide if it's worth it. If not, we stop."


Conclusion

Malachi Burke is an asset in adopting agentic workflows because his skepticism forces rigor. Rather than fighting his caution, leverage it.

The playbook:

  1. Start small (boilerplate, analysis, documentation)
  2. Prove value (measurable time savings, zero regressions)
  3. Let him decide (never surprise him with automation)
  4. Make him the champion (his approval builds team confidence)
  5. Shift his workload (free him from toil β†’ more mentoring & architecture)

If you follow this approach, in 6 months, Malachi will likely say: "These tools save us 20% on routine work while I focus on decisions that actually matter. That's a win."


Appendix A: Key Meetings Analyzed

| Meeting | Topic | Malachi's Key Insight |
|---|---|---|
| 2/12/26 | Code5 OTA & debugging | Detail-oriented: "Why is this happening?" over "Let's patch it" |
| 2/2/26 | RLGL algorithm updates | Process focus: "Define the algorithm clearly before coding" |
| 1/29/26 | Code 5 sync update | Mentorship: Guides team on kconfig and reference implementation |
| 1/26/26 | FreeRTOS integration | Architecture: "Cartridge system for modularity" (thinks in systems) |
| 12/16/25 | User stories planning | Standards: "Clarify 'why' in stories; separate technical from functional" |
| 11/25/25 | Trade show learnings | System design: "Marco Polo" mesh model for scalability |
| 11/13/25 | Code 5 planning | Vision: "Collaborative, research-driven approach" |
| 10/23/25 | External meeting | Code quality: "How do we keep this maintainable?" |
| 9/4/25 | New developer intro | Mentorship: "Explains architecture to new team members" |
| 8/21/25 | V2 packet progress | Technical excellence: "Byte-by-byte analysis, robust testing" |

Document Version: 1.0
Last Updated: Feb 16, 2026
Author: Analysis Team
Status: Ready for Malachi Review