A Framework for Successful Human-AI Collaboration
The Unbroken Method is a systematic approach to human-AI collaboration that maximizes productivity and quality by maintaining continuous context, demanding complete ownership, and learning from documented failures. Developed during the creation of SAM (Synthetic Autonomic Mind) and refined through CLIO (Command Line Intelligence Orchestrator), this methodology has been used to ship production software across three languages (Swift, Perl, Python) and three platforms.
The core insight: The secret to successful AI collaboration isn't waiting for smarter models. It's implementing better methodology.
Most people experience AI collaboration as a series of disconnected conversations. You start fresh, explain your context, get partway through a task, and then, through session limits, context loss, or simple confusion, you have to start over.
1. The Fresh Start Problem
   - Every new session loses accumulated context
   - You spend 20% of each session re-explaining your project
   - The AI forgets decisions made in previous conversations
   - Progress resets to zero

2. The Partial Solution Trap
   - AI provides "basic implementations" that need expansion
   - "Good enough for now" becomes permanent technical debt
   - Edge cases left as an "exercise for the reader"
   - You end up doing more work fixing partial solutions

3. The Symptom Patch Pattern
   - AI adds try-catch blocks instead of fixing underlying issues
   - Problems suppressed rather than solved
   - Code becomes increasingly fragile
   - Root causes remain hidden

4. The Scope Escape
   - AI identifies an issue: "This is a separate problem"
   - Issues deferred indefinitely
   - Technical debt accumulates
   - Problems discovered during work never get fixed

5. The Assumption Cascade
   - AI assumes how your code works without reading it
   - Solutions based on assumptions break in unexpected ways
   - Time wasted debugging incorrect implementations
   - Trust erodes, collaboration becomes adversarial

Every failure mode above stems from broken continuity:
- Context breaks between sessions
- Ownership breaks when issues are "out of scope"
- Quality breaks when solutions are partial
- Trust breaks when assumptions replace investigation
The Unbroken Method addresses each of these by establishing protocols that maintain continuity across every dimension of collaboration.
```mermaid
flowchart LR
    A[Start] --> B[Explain<br/>Context]
    B --> C[Work]
    C --> D[Issue<br/>Found]
    D --> E[Out of<br/>Scope]
    E --> F[Partial<br/>Solution]
    F --> G[Session<br/>Ends]
    G --> H[Context<br/>Lost]
    H -.->|Repeat| A
    style E fill:#ff6b6b,color:#fff
    style F fill:#ff6b6b,color:#fff
    style H fill:#ff6b6b,color:#fff
```
```mermaid
flowchart LR
    A[Start] --> B[Load<br/>Context]
    B --> C[Investigate]
    C --> D[Checkpoint]
    D --> E[Implement]
    E --> F{Issue?}
    F -->|Yes| G[Own & Fix]
    G --> E
    F -->|No| H[Complete<br/>Solution]
    H --> I[Validate]
    I --> J{End?}
    J -->|No| C
    J -->|Yes| K[Handoff]
    K --> L[Context<br/>Preserved]
    L -.->|Next Session| A
    style G fill:#51cf66,color:#fff
    style H fill:#51cf66,color:#fff
    style L fill:#51cf66,color:#fff
```
The Seven Pillars form the foundation of The Unbroken Method. Each addresses a specific failure mode and provides concrete guidance for maintaining productive collaboration.
```mermaid
graph TB
    subgraph P1[" "]
        C1[1. Continuous Context<br/>Never break the conversation]
    end
    subgraph P2[" "]
        C2[2. Complete Ownership<br/>Find it, fix it]
    end
    subgraph P3[" "]
        C3[3. Investigation First<br/>Understand before acting]
    end
    subgraph P4[" "]
        C4[4. Root Cause Focus<br/>Problems, not symptoms]
    end
    subgraph P5[" "]
        C5[5. Complete Deliverables<br/>Finish what you start]
    end
    subgraph P6[" "]
        C6[6. Structured Handoffs<br/>Perfect context transfer]
    end
    subgraph P7[" "]
        C7[7. Learning from Failure<br/>Codified anti-patterns]
    end
    C1 --> C2
    C2 --> C3
    C3 --> C4
    C4 --> C5
    C5 --> C6
    C6 --> C7
    C7 -.->|Improve| C1
```
Principle: Never break the conversation. Context is your most valuable asset.
The Problem It Solves: Traditional AI sessions end when you close the chat or hit token limits. Each new session starts from zero, forcing you to re-explain everything.
The Solution: Implement programmatic collaboration checkpoints that preserve context within sessions and structured handoffs that transfer context between sessions.
How It Works:
Within a session:
- Use a collaboration tool (script, macro, or protocol) that creates checkpoints
- Share findings, propose approaches, and get confirmation before major work
- The AI stays in the same context rather than breaking flow with fresh prompts

Between sessions:
- Create comprehensive handoff documents before ending
- Include everything the next session needs: completed work, pending tasks, discoveries, lessons learned
- The handoff document IS the context; it should be standalone and complete
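The between-session handoff can be scaffolded so no section is forgotten. A minimal sketch — the `handoff_skeleton` helper and its section names are illustrative, drawn from the list above, not a prescribed format:

```python
from datetime import date

# Illustrative section names, taken from the handoff checklist above.
HANDOFF_SECTIONS = [
    "Completed Work",
    "Pending Tasks",
    "Discoveries",
    "Lessons Learned",
]

def handoff_skeleton(project: str, sections=HANDOFF_SECTIONS) -> str:
    """Return a Markdown skeleton for a session handoff document."""
    lines = [f"# Session Handoff: {project}", f"Date: {date.today().isoformat()}", ""]
    for section in sections:
        # Each section gets a placeholder bullet to fill in before the session ends.
        lines += [f"## {section}", "", "- (fill in before ending the session)", ""]
    return "\n".join(lines)
```

Writing the result to a file at the end of every session turns the handoff from a memory exercise into a fill-in-the-blanks task.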
The Key Insight:
Principle: If you find it, you fix it. There is no "out of scope."
The Problem It Solves: AI assistants often identify issues and then punt them: "This appears to be a separate issue." The issue never gets fixed because no one owns it.
The Solution: Establish absolute ownership of everything discovered during a work session. The current session owns all discovered issues.
What This Means in Practice:
❌ Not Allowed:
- "This is a separate issue for another session"
- "This is out of scope for the current task"
- "Should I investigate this further?" (just do it)
- "Would you like me to fix this?" (just fix it)

✅ Required:
- "I found issue X while working. I will fix it by doing Y."
- "Discovered blocker Z. Proposing solution A, proceeding with implementation."
- Work continues until ALL discovered issues are resolved
The Ownership Chain:
Principle: Understand before acting. Never assume when you can verify.
The Problem It Solves: AI assistants often jump to implementation based on assumptions about how things work. These assumptions are frequently wrong, leading to wasted time and broken solutions.
The Solution: Mandate investigation before implementation. Read, search, test, and verify before writing.
The Investigation Protocol:
Real Example from SAM Development:
User Report: "The model picker shows 'Downloads' as an option"
Wrong Approach (Assumption): "I'll add a filter to hide 'Downloads' in the UI picker code."
Right Approach (Investigation):
1. Read LocalModelManager.swift to understand model detection
2. Search for where the Downloads directory is created
3. Discover: the Downloads folder is INSIDE the models directory
4. Root cause: downloads are detected AS models because of their location
5. Solution: move the staging directory OUTSIDE the models directory
6. Result: downloads never appear because they're never detected
The investigative approach found a permanent architectural solution. The assumption approach would have created a fragile UI hack.
Universal Application:
Principle: Solve problems, not symptoms. Every fix should address the underlying cause.
The Problem It Solves: Quick fixes that address symptoms create fragile systems. The same problems resurface in different forms.
The Solution: Require root cause analysis for every fix. Ask "why?" until you reach the fundamental issue.
Symptom vs. Root Cause Examples:
| Symptom | Bad Fix | Root Cause | Good Fix |
|---|---|---|---|
| Error message appears | Add try-catch to suppress | Invalid input not validated | Add input validation |
| UI flickers during update | Add 100ms delay | Multiple refresh calls | Remove duplicate calls |
| Feature doesn't work | Add special case handling | Architecture doesn't support feature | Refactor architecture |
| Content inconsistency | Fix this instance | No style guide | Create and apply style guide |
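The first row of the table can be made concrete. A hedged sketch — the `parse_age_*` functions are hypothetical, not from any project — where the bad fix suppresses the symptom and the good fix validates the input:

```python
# Symptom patch: suppress the error and hope for the best.
def parse_age_bad(raw):
    try:
        return int(raw)
    except ValueError:
        return None  # error hidden; callers now receive None unexpectedly

# Root-cause fix: validate the input and fail loudly with a clear message.
def parse_age_good(raw):
    raw = raw.strip()
    if not raw.isdigit():
        raise ValueError(f"age must be a non-negative integer, got {raw!r}")
    return int(raw)
```

The patched version moves the failure somewhere else (every caller must now handle `None`); the root-cause version makes invalid input impossible to ignore.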
The Five Whys Technique:
Principle: Finish what you start. No partial solutions that need expansion later.
The Problem It Solves: "I'll implement a basic version first" creates technical debt that accumulates forever. Partial implementations become permanent because "it works." The MVP mindset, while valuable for market validation, becomes counterproductive in AI collaboration where "minimum viable" outputs rarely get improved.
The Solution: Demand complete implementations within their defined scope. Handle edge cases, add error handling, follow patterns.
What "Complete" Means:
✅ Complete Within Scope:
- All specified requirements working
- Edge cases for THIS feature handled
- Error handling in place
- Follows existing project patterns
- Tested and verified

❌ Not Acceptable:
- "Basic implementation, expand later"
- "Handles the common case"
- "Good enough for now"
- Hardcoded values when dynamic values are available
- Missing error handling
The Completeness Checklist:
- [ ] All requirements addressed
- [ ] Edge cases identified and handled
- [ ] Error handling comprehensive
- [ ] Follows existing patterns/style
- [ ] Tested with realistic data
- [ ] Documentation updated if needed
- [ ] No TODO comments added
- [ ] No "temporary" code
Principle: Perfect context transfer. The next session should start as if it never stopped.
The Problem It Solves: Session transitions lose critical context. The next session makes different assumptions, contradicts previous decisions, or duplicates completed work.
The Solution: Create structured handoff documents that capture complete context for continuation.
The Four-Document Handoff Protocol:
When a session must end (token limits, work phase complete, user request):
CRITICAL PREREQUISITE: Documentation MUST be current
Before creating handoff documents, ensure ALL technical documentation is up-to-date:
Rule: Documentation is NOT optional. Code without docs is incomplete work.
Then create the four handoff documents:

1. `CONTINUATION_PROMPT.md` - NO external references; this document IS the context
2. `AGENT_PLAN.md` - dependencies between tasks
3. `SESSION_HANDOFF.md` - what worked and what didn't
4. `FEATURES.md` / `CHANGELOG.md`
The Handoff Test:
Can someone start a completely new session with only the CONTINUATION_PROMPT.md and immediately continue productive work?
If yes, the handoff is complete. If no, add more context.
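Parts of the handoff test can be automated. A sketch under stated assumptions — the minimum-length threshold and the banned phrases are heuristics of mine, not part of the method:

```python
def handoff_is_standalone(text: str, min_chars: int = 500) -> list:
    """Return a list of problems; an empty list means the basic checks pass."""
    problems = []
    # Heuristic: a genuinely standalone continuation prompt is rarely tiny.
    if len(text) < min_chars:
        problems.append(f"document is only {len(text)} chars; likely incomplete")
    # Heuristic: phrases that point outside the document break the handoff test.
    for phrase in ("see previous session", "as discussed earlier", "see chat history"):
        if phrase in text.lower():
            problems.append(f"external reference found: {phrase!r}")
    return problems
```

Run it against `CONTINUATION_PROMPT.md` before ending a session; it won't prove the handoff is complete, but it catches the most common ways handoffs fail.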
Principle: Codify mistakes into anti-patterns. Never make the same mistake twice.
The Problem It Solves: Without documented anti-patterns, the same mistakes recur. Each session might repeat errors that previous sessions learned to avoid.
The Solution: Maintain an evolving catalog of anti-patterns with concrete examples of what NOT to do.
Anti-Pattern Structure:
```markdown
### Anti-Pattern: [Name]

**What It Looks Like:**
[Concrete example of the bad pattern]

**Why It's Wrong:**
[Explanation of the problem it causes]

**What To Do Instead:**
[Correct approach with example]

**Real Example:**
[Actual instance from project history]
```
Building Your Anti-Pattern Catalog:
The Evolution Cycle:
Failure Occurs → Document Anti-Pattern → Add to Catalog → Include in Instructions → Prevent Recurrence
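The "Document Anti-Pattern" step of this cycle can be scripted against the template above. A minimal sketch (the function is illustrative; only the field names come from the template):

```python
def antipattern_entry(name, looks_like, why_wrong, instead, real_example):
    """Render one anti-pattern entry in the catalog's Markdown template."""
    return (
        f"### Anti-Pattern: {name}\n\n"
        f"**What It Looks Like:**\n{looks_like}\n\n"
        f"**Why It's Wrong:**\n{why_wrong}\n\n"
        f"**What To Do Instead:**\n{instead}\n\n"
        f"**Real Example:**\n{real_example}\n"
    )
```

Appending the rendered entry to the catalog file at the moment a failure is diagnosed keeps the documentation step from being skipped.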
How you implement The Unbroken Method depends on your tools. Modern AI agents have built-in support for the key mechanisms (collaboration checkpoints, file operations, session persistence). If you're using a tool without native support, you can implement the same patterns with scripts and prompts.
CLIO implements The Unbroken Method natively. The methodology is embedded in its system prompt, and key pillars are enforced through built-in tools rather than prompt instructions alone.
New project from scratch:
1. `/design` - The Architect guides you through creating a Product Requirements Document (`.clio/PRD.md`)
2. `/init` - CLIO analyzes your codebase and generates `.clio/instructions.md` (methodology) and `AGENTS.md` (project reference), incorporating the PRD if present

Existing project:
1. Run `/init` in your project directory - CLIO reads your code and creates instructions tailored to your codebase
2. Update `.clio/instructions.md` and `AGENTS.md` as your project evolves

CLIO automates each pillar through dedicated tools: session persistence for continuous context, semantic search and code intelligence for investigation, todo operations for tracking deliverables, handoff directory protocols for structured transitions, and long-term memory (LTM) for learning from failure across all sessions. See How CLIO Automates the Pillars for the full breakdown.
The user_collaboration tool implements checkpoint discipline at the tool level - the AI calls it to pause execution and present its plan, and cannot proceed without human input.
Most AI coding agents support custom instructions and have their own collaboration mechanisms:
| Agent | Instructions File | Built-in Collaboration |
|---|---|---|
| Claude Code | `CLAUDE.md` | `AskUserQuestion` tool |
| GitHub Copilot | `.github/copilot-instructions.md` | Chat-based |
| Cursor | `.cursorrules` | Chat-based |
| Windsurf | `.windsurfrules` | Chat-based |
| Claude Projects | Project instructions (web UI) | Chat-based |
Setup:
1. Add the methodology to your agent's instructions file (see the table above)
2. Create `AGENTS.md` in your project root with your architecture, conventions, and testing commands

For agents with built-in collaboration tools (like Claude Code's `AskUserQuestion`), the checkpoint discipline works naturally. For chat-based agents, the prompt instructions enforce the checkpoint pattern through the conversation itself.
If your environment doesn't have built-in collaboration tools, you can build the checkpoint mechanism with scripts. See the Collaboration Tool Scripts page for bash, Python, and Swift implementations that create the pause-and-validate pattern.
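A pause-and-validate checkpoint can be as small as one function. This sketch is an illustrative stand-in, not CLIO's `user_collaboration` tool: it presents findings and a proposed approach, then blocks until the human explicitly approves or rejects.

```python
def checkpoint(findings: str, proposal: str, ask=input) -> bool:
    """Present findings and a proposal; block until the human decides.

    Returns True only on explicit approval, so the caller cannot
    silently proceed past the checkpoint.
    """
    print("=== COLLABORATION CHECKPOINT ===")
    print(f"Findings: {findings}")
    print(f"Proposal: {proposal}")
    while True:
        answer = ask("Proceed? [y/n] ").strip().lower()
        if answer in ("y", "yes"):
            return True
        if answer in ("n", "no"):
            return False
```

The `ask` parameter defaults to `input` for interactive use and can be replaced for testing; the key property is that execution stops until a decision is made.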
The Unbroken Method is a methodology. AGENTS.md is a project reference file. They solve different problems, and understanding the distinction is key to implementing the method effectively.
The Unbroken Method defines how humans and AI collaborate. It needs to live somewhere your AI assistant reads. But it should be separate from your project's technical reference.
In practice, this means two files with distinct purposes:
| File | Purpose | Contains |
|---|---|---|
| Instructions File | How to work (methodology) | The Seven Pillars, checkpoint discipline, handoff protocols, anti-patterns, collaboration requirements |
| Project Reference | What to build (project scope) | Architecture, code style, directory structure, testing commands, module naming, commit format |
The instructions file teaches the AI how to collaborate. The project reference tells it what the project looks like. Both are necessary. Neither replaces the other.
Every AI coding assistant has its own way of loading custom instructions. The Unbroken Method adapts to all of them:
| Tool | Instructions File (Methodology) | Project Reference |
|---|---|---|
| CLIO | `.clio/instructions.md` | `AGENTS.md` |
| GitHub Copilot | `.github/copilot-instructions.md` | `AGENTS.md` |
| Cursor | `.cursorrules` | `AGENTS.md` |
| Claude Projects | Project instructions (web UI) | `AGENTS.md` |
| Windsurf | `.windsurfrules` | `AGENTS.md` |
| Any tool | Custom instructions / system prompt | Project README or reference doc |
Notice the pattern: AGENTS.md is the project reference - it describes your codebase, architecture, and conventions. It's the same file regardless of which AI tool you use. The instructions file is where The Unbroken Method lives, and its location varies by tool.
AGENTS.md has become a popular convention for giving AI assistants project context. It solves an important problem - but a different one. AGENTS.md tells an AI what your project looks like. The Unbroken Method tells it how to work with you.
Without methodology, the AI knows your directory structure but still gives partial solutions, knows your coding style but still skips investigation, knows your test commands but doesn't run them proactively.
With both:
- The instructions file gives the AI methodology (how to work)
- `AGENTS.md` gives the AI project knowledge (what to build)

Here's what the separation looks like for a real project:
```
your-project/
├── .clio/instructions.md   # OR .github/copilot-instructions.md OR .cursorrules
│   └── Contains:
│       - The Seven Pillars
│       - Checkpoint discipline
│       - Handoff protocol
│       - Anti-pattern catalog
│       - Quality standards
│
├── AGENTS.md
│   └── Contains:
│       - Project overview & architecture
│       - Directory structure
│       - Code style & conventions
│       - Testing commands
│       - Build process
│       - Module naming
│
└── src/...
```
The instructions file is portable across projects - you can use the same methodology everywhere. AGENTS.md is project-specific - it changes with every codebase. This separation means you write the methodology once and adapt only the project reference.
CLIO implements several pillars directly in its tooling:
| Pillar | Manual Implementation | CLIO Automation |
|---|---|---|
| 1. Continuous Context | Copy-paste context between sessions | Session persistence, context compression, thread summarization |
| 2. Complete Ownership | Discipline in instructions | Enforced in system prompt + instructions |
| 3. Investigation First | Tell AI to read code first | File operations, semantic search, code intelligence tools |
| 4. Root Cause Focus | Discipline in instructions | Enforced in system prompt + instructions |
| 5. Complete Deliverables | Tell AI to finish work | Todo operations with progress tracking, mandatory completion |
| 6. Structured Handoffs | Manual document creation | Handoff directory protocol with templates |
| 7. Learning from Failure | Manual anti-pattern catalog | Long-term memory (LTM) with discoveries, solutions, and patterns that persist across all sessions |
The user_collaboration tool implements checkpoint discipline natively - the AI pauses, presents its plan, and waits for human approval before proceeding. This isn't just a prompt instruction; it's a tool the AI calls that actually stops execution.
Key Practices:
- Run builds after EVERY change
- Test immediately, don't batch
- Check logs yourself, don't assume success
- Commit frequently (every 30 minutes minimum)
- Use proper logging, never print statements
Developer-Specific Anti-Patterns:
- Using `swift build` when a Makefile exists (missing dependencies)
- Background processes for commands that should block
- Commenting out code instead of deleting it
- Hardcoding when metadata is available
Key Practices:
- Review existing content before proposing changes
- Document style decisions as they're made
- Complete sections fully before moving on
- Verify citations and references during writing

Writer-Specific Anti-Patterns:
- Inconsistent terminology across sections
- Style changes without updating previous content
- "Draft" sections that never get finished
- References to non-existent sections
Key Practices:
- Document methodology as you develop it
- Verify sources before citing
- Follow tangents within scope (ownership principle)
- Complete analysis before drawing conclusions

Researcher-Specific Anti-Patterns:
- Drawing conclusions before investigation is complete
- Citing sources without verification
- Stopping at surface-level findings
- Ignoring contradictory evidence
THE UNBROKEN METHOD - QUICK REFERENCE
BEFORE STARTING WORK
- [ ] Read existing code/content
- [ ] Search for patterns
- [ ] Test current behavior
- [ ] Share findings via collaboration tool
DURING WORK
- [ ] Fix ALL discovered issues (no "out of scope")
- [ ] Implement complete solutions (no "basic version")
- [ ] Test after each change
- [ ] Commit/save frequently
BEFORE ENDING
- [ ] All discovered issues resolved
- [ ] Work tested and verified
- [ ] Changes committed/saved
- [ ] User validation via collaboration tool
COLLABORATION CHECKPOINTS
- Before major implementation
- After investigation (share findings)
- After implementation (share results)
- Before ending (validation required)
THE SEVEN PILLARS
1. Continuous Context - Never break conversation
2. Complete Ownership - Find it, fix it
3. Investigation First - Understand before acting
4. Root Cause Focus - Problems, not symptoms
5. Complete Deliverables - Finish what you start
6. Structured Handoffs - Perfect context transfer
7. Learning from Failure - Codified anti-patterns
How does The Unbroken Method compare to established AI/ML development methodologies? This analysis highlights key differences in approach and philosophy.
| Aspect | Unbroken Method | Agile AI/ML | Google MLOps | CRISP-DM | Agentic AI |
|---|---|---|---|---|---|
| Context Handling | Explicit preservation via checkpoints and handoffs | Sprint boundaries can break context | Documentation-dependent | Phase transitions can lose context | Fragments across agents |
| Issue Ownership | Mandatory - discovered issues must be fixed | Team-distributed, can dilute | Process-dependent | Phase-based ownership | Agent-scoped, limited |
| Investigation Rigor | Required before implementation | Varies by team | Documentation standards | Strong data investigation | Task-dependent |
| Root Cause Focus | Mandated - no symptom patches | Varies by team culture | Incident response varies | Process-focused | Often superficial |
| Completeness | Scope-complete deliverables | MVP-oriented, iterative | Production-focused | Deliverable-focused | Task completion varies |
| Knowledge Transfer | Structured 4-document protocol | Retrospectives, wikis | Runbooks, documentation | Reports, handoffs | Limited between agents |
| Learning Codification | Anti-pattern catalog required | Retrospectives | Post-mortems | Lessons learned | Minimal |
The Unbroken Method works best when:
- Quality and reliability matter more than speed
- Context preservation across sessions is critical
- You need sustainable long-term collaboration
- Technical debt must be minimized

Agile AI/ML works best when:
- Rapid experimentation is the priority
- Requirements are highly uncertain
- Team velocity matters most
- Quick pivots are expected

Google MLOps works best when:
- Production scale is the primary concern
- Monitoring and reliability are paramount
- The team has a strong documentation culture
- Infrastructure is the bottleneck

CRISP-DM works best when:
- Data investigation is the core challenge
- Stakeholder communication is critical
- Process compliance is required
- Traditional project management is preferred

Agentic AI Workflows work best when:
- Parallelization and automation are priorities
- Tasks are well-defined and repeatable
- Human oversight is available for quality control
- Scale matters more than individual task depth
To adopt the method manually, you need an instructions file containing the methodology and an `AGENTS.md` describing your project's architecture, conventions, and testing commands.

With CLIO: CLIO implements the methodology natively. Install via `brew install clio`, create `.clio/instructions.md` and `AGENTS.md` in your project, and start working. Context preservation, checkpoint discipline, long-term memory, and task tracking are built in.
The methodology will evolve as you use it. Document what works, what doesn't, and what you learn. The anti-pattern catalog is never complete. It grows with experience.
The Unbroken Method transforms AI collaboration from a series of disconnected, frustrating sessions into a continuous, productive partnership. The key principles are simple: preserve context, own what you find, investigate before acting, fix root causes, finish what you start, hand off cleanly, and learn from every failure.
The methodology works because it eliminates the common failure modes of AI collaboration. It's not about having a smarter AI. It's about having a better system.
This methodology is language-agnostic, tool-agnostic, and domain-agnostic. Whether you're writing code, documents, research, or any other knowledge work, The Unbroken Method will improve your AI collaboration.
Welcome to The Unbroken Method. There are no shortcuts, and that's the point.
Related Documentation:
Research:
- The Reflexive Ecosystem - Case study: self-building AI development in the SAM ecosystem

Templates & Getting Started:
- Templates - Ready-to-use prompts, handoffs, and tools
- Quick Start Prompt - Get started in 5 minutes
- Full Collaboration Prompt - Complete prompt with all seven pillars

CLIO (Terminal AI Assistant):
- CLIO Overview - AI in your terminal, built on The Unbroken Method
- CLIO Documentation - Installation, configuration, and usage
- CLIO on GitHub - Source code and issues

SAM (macOS AI Assistant):
- Developer Guide - Building and extending SAM
- Architecture - System design and components
- Contributing Guide - How to contribute to SAM projects