Build an AI assistant that actually ships code.
SAM was developed using a novel human-AI collaboration methodology called The Unbroken Method, and you can use the same approach. This guide covers everything you need to contribute to SAM, extend its capabilities, or learn from its architecture.
What makes SAM's development unique:
- Structured AI collaboration methodology
- Architecture designed for extensibility and testing
- Complete tool system you can extend
- Zero architectural debt carried forward
This guide covers:
- Setting up your development environment
- Building SAM from source
- Architecture and code organization
- Creating custom tools and extending SAM
- Testing, debugging, and the release process
SAM was developed using The Unbroken Method, a systematic approach to human-AI collaboration that maximizes productivity and quality. This methodology is documented separately and applies to all SAM development work.
Key Principles:
- Continuous Context: Never break the conversation - maintain context across sessions
- Complete Ownership: Find it, fix it - no "out of scope" escapes
- Investigation First: Understand before acting - read code before modifying
- Root Cause Focus: Fix problems, not symptoms
- Complete Deliverables: Finish what you start - no partial implementations
- Structured Handoffs: Perfect context transfer between sessions
- Learning from Failure: Document anti-patterns to prevent recurrence
These principles enable consistent, high-quality development with features complete on first implementation.
Read More: The Unbroken Method - Complete methodology documentation
# Install Homebrew (if not installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install ccache (optional but recommended)
brew install ccache
# Clone the repository
git clone https://github.com/SyntheticAutonomicMind/SAM.git
cd SAM
# Initialize submodules (llama.cpp, etc.)
git submodule update --init --recursive
# Select Xcode command line tools
sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer
# Accept license if needed
sudo xcodebuild -license accept
CLIO is the recommended terminal AI assistant for SAM development.
CLIO is a terminal-based AI code assistant built by the SAM team specifically for developers who prefer command-line workflows. It's particularly valuable when working on SAM projects.
Why Use CLIO for SAM Development:
- Terminal-native interface complements SAM's GUI
- Quick file operations, code reviews, and git workflows
- Custom instructions support (.clio/instructions.md) enforces SAM coding standards automatically
- Same privacy-first philosophy as SAM
- Zero dependencies (just Perl 5.20+)
- Perfect for remote development and SSH sessions
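Custom instructions live in a project-local `.clio/instructions.md` file that CLIO reads automatically. As a sketch, such a file might look like this (the contents below are illustrative, not SAM's actual coding standards):

```markdown
# SAM Coding Standards (illustrative example)

- Use Swift async/await for all network and file I/O
- Run `make build-debug` and `make test` before committing
- Follow the Provider pattern for any new AI backend
- Keep tool descriptions in the USE FOR / PARAMETERS format
```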
Quick Install:
git clone https://github.com/SyntheticAutonomicMind/CLIO.git
cd CLIO
sudo ./install.sh
Usage Examples:
# Code review
clio --new
: Review the changes in Sources/UserInterface/ChatView.swift
# Quick refactoring
clio --resume
: Refactor this function to use async/await properly
# Git workflow
: Show git status and create a commit for these changes
For complete CLIO documentation: See CLIO Developer Guide
ccache - Speeds up Swift compilation:
brew install ccache
SwiftLint - Code style enforcement:
brew install swiftlint
SF Symbols - Browse Apple's SF Symbols for UI icons (download from Apple Developer).
# Build debug version (faster, includes debug symbols)
make build-debug
# Run debug build
open .build/Build/Products/Debug/SAM.app
SAM uses a custom Makefile that handles:
1. Building the llama.cpp framework (local GGUF models)
2. Compiling Metal shaders (MLX support)
3. Building the Swift code
4. Creating the app bundle with frameworks
# Clean previous builds
make clean
# Build llama.cpp framework
make llamacpp
# Build Metal library
make metallib
# Build full debug version
make build-debug
# Or build optimized release version
make build-release
After building:
.build/
└── Build/
└── Products/
├── Debug/
│ ├── SAM # Executable
│ ├── SAM.app/ # App bundle
│ └── PackageFrameworks/ # Dependencies
└── Release/
└── (same structure)
The Makefile provides several targets:
| Target | Description |
|---|---|
| `make build-debug` | Debug build with symbols |
| `make build-release` | Optimized release build |
| `make clean` | Remove build artifacts |
| `make llamacpp` | Build llama.cpp only |
| `make metallib` | Build Metal shaders only |
| `make test` | Run unit tests |
SAM/
├── Sources/
│ ├── SAM/ # App Entry Point
│ │ ├── main.swift # @main entry
│ │ ├── AppDelegate.swift # App lifecycle
│ │ ├── SAMCommands.swift # App commands
│ │ └── Resources/ # App resources
│ │
│ ├── UserInterface/ # UI Layer
│ │ ├── Chat/ # Chat components
│ │ │ ├── ChatWidget.swift # Main chat interface
│ │ │ └── MessageView.swift # Individual messages
│ │ ├── MainWindowView.swift # Main window
│ │ ├── PreferencesView.swift # Settings UI
│ │ ├── Help/ # Help system
│ │ │ └── HelpView.swift # Help documentation
│ │ └── ...
│ │
│ ├── APIFramework/ # AI Provider Layer
│ │ ├── AIProvider.swift # Provider protocol
│ │ ├── OpenAIProvider.swift # OpenAI implementation
│ │ ├── GitHubCopilotProvider.swift
│ │ ├── MLXProvider.swift # Local MLX models
│ │ ├── LlamaProvider.swift # Local GGUF models
│ │ ├── AgentOrchestrator.swift # LLM coordination
│ │ ├── EndpointManager.swift # Provider management
│ │ ├── UniversalToolRegistry.swift # Tool registration
│ │ └── ...
│ │
│ ├── MCPFramework/ # MCP Tool System
│ │ ├── MCPManager.swift # MCP protocol manager
│ │ ├── MCPTypes.swift # MCP data types
│ │ └── Tools/ # MCP tool implementations
│ │ ├── ThinkTool.swift
│ │ ├── FileOperationsTool.swift
│ │ ├── TerminalOperationsTool.swift
│ │ ├── MemoryOperationsTool.swift
│ │ ├── RunSubagentTool.swift
│ │ └── ...
│ │
│ ├── ConversationEngine/ # Conversation Management
│ │ ├── ConversationManager.swift # Conversation state
│ │ ├── MemoryManager.swift # Vector memory system
│ │ ├── YaRNContextProcessor.swift # Context compression
│ │ ├── AppleNLEmbeddingGenerator.swift # 512-dim embeddings
│ │ └── ...
│ │
│ ├── ConfigurationSystem/ # Settings & Configuration
│ │ ├── SystemPromptConfiguration.swift
│ │ ├── PersonalityManager.swift # Personality system
│ │ ├── PersonalityTrait.swift # Built-in personalities
│ │ ├── EndpointConfigurationModels.swift # ProviderType enum
│ │ └── ...
│ │
│ ├── MLXIntegration/ # MLX Support
│ │ ├── AppleMLXAdapter.swift
│ │ ├── MLXModelCache.swift
│ │ └── HuggingFaceAPIClient.swift
│ │
│ ├── StableDiffusionIntegration/ # Image Generation
│ │ └── ...
│ │
│ ├── SharedData/ # Shared Topics
│ │ └── SharedStorage.swift
│ │
│ ├── VoiceFramework/ # Voice Features
│ │ └── SpeechRecognitionService.swift
│ │
│ └── Utilities/ # Helper utilities
│ └── ...
│
├── external/
│ └── llama.cpp/ # Submodule for GGUF support
│
├── Info.plist # App metadata
├── Package.swift # SPM dependencies
├── Makefile # Build system
└── README.md
SAM follows these core principles:
┌─────────────────────────────────────┐
│ UserInterface Layer │
│ (SwiftUI Views, ChatWidget) │
└────────────────┬────────────────────┘
│
┌────────────────▼────────────────────┐
│ Business Logic Layer │
│ (AgentOrchestrator, ToolService) │
└────────────────┬────────────────────┘
│
┌────────────────▼────────────────────┐
│ Provider Layer │
│ (OpenAI, MLX, Copilot) │
└────────────────┬────────────────────┘
│
┌────────────────▼────────────────────┐
│ Infrastructure Layer │
│ (Database, Network) │
└─────────────────────────────────────┘
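The dependency direction in the diagram can be sketched in Swift: each layer depends only on a protocol exposed by the layer below it, so the UI never talks to a provider directly. The type names below are illustrative stand-ins, not SAM's exact API:

```swift
import Foundation

// Provider layer: the abstract boundary the business logic depends on
protocol Provider {
    func complete(_ prompt: String) async throws -> String
}

// A stubbed concrete provider (infrastructure would live behind this)
struct EchoProvider: Provider {
    func complete(_ prompt: String) async throws -> String { "reply to: \(prompt)" }
}

// Business logic layer: holds only the Provider protocol, never a concrete type
final class Orchestrator {
    private let provider: Provider
    init(provider: Provider) { self.provider = provider }

    func handle(userMessage: String) async throws -> String {
        try await provider.complete(userMessage)
    }
}

// The UI layer would hold an Orchestrator and stay unaware of providers
let orchestrator = Orchestrator(provider: EchoProvider())
let reply = try await orchestrator.handle(userMessage: "Hello")
// reply == "reply to: Hello"
```

Because `Orchestrator` only sees the protocol, swapping OpenAI for a local MLX backend is a one-line change at construction time.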
1. Provider Pattern
All AI providers implement AIProvider protocol:
protocol AIProvider {
func sendMessage(
_ message: String,
context: [Message],
tools: [Tool]?,
stream: Bool
) async throws -> ChatResponse
}
2. Tool System
Tools are discovered, registered, and executed dynamically:
protocol ToolService {
func register(_ tool: Tool)
func execute(_ tool: ToolCall) async throws -> ToolResult
}
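A minimal sketch of how a registry conforming to this protocol is used - register tools by name, then dispatch incoming calls dynamically. The `Tool`, `ToolCall`, and `ToolResult` shapes here are assumptions for illustration, not SAM's exact types:

```swift
import Foundation

// Simplified stand-ins for SAM's real types (assumed shapes)
struct Tool { let name: String; let run: (String) -> String }
struct ToolCall { let name: String; let argument: String }
struct ToolResult { let output: String }

// An in-memory registry demonstrating register/execute dispatch
final class SimpleToolRegistry {
    private var tools: [String: Tool] = [:]

    func register(_ tool: Tool) {
        tools[tool.name] = tool
    }

    func execute(_ call: ToolCall) async throws -> ToolResult {
        guard let tool = tools[call.name] else {
            throw NSError(domain: "ToolRegistry", code: 404)
        }
        return ToolResult(output: tool.run(call.argument))
    }
}

// Usage: register a tool, then dispatch a call to it by name
let registry = SimpleToolRegistry()
registry.register(Tool(name: "echo") { "echo: \($0)" })
let result = try await registry.execute(ToolCall(name: "echo", argument: "hi"))
// result.output == "echo: hi"
```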
3. Streaming
Server-Sent Events (SSE) for real-time responses:
for try await chunk in provider.streamMessage(...) {
await updateUI(with: chunk)
}
4. State Management
SwiftUI's @State, @StateObject, @EnvironmentObject:
@StateObject private var conversationManager = ConversationManager()
@State private var messages: [Message] = []
Step 1: Define Tool
Create Sources/MCPFramework/Tools/MyNewTool.swift:
import Foundation
import Logging
public final class MyNewTool: MCPTool {
// MARK: - Protocol Conformance
public let name = "my_new_tool"
public let description = """
Brief description of what the tool does.
USE FOR:
- List the use cases
PARAMETERS:
- param1: Description of first parameter
"""
public var parameters: [String: MCPToolParameter] {
return [
"param1": MCPToolParameter(
type: .string,
description: "First parameter description",
required: true
),
"param2": MCPToolParameter(
type: .number,
description: "Optional second parameter",
required: false
)
]
}
// MARK: - Execution
private let logger = Logger(label: "com.sam.mcp.MyNewTool")
public func execute(
arguments: [String: Any],
conversationManager: ConversationManager?,
conversationId: UUID?
) async throws -> MCPToolResponse {
// 1. Validate required arguments
guard let param1 = arguments["param1"] as? String else {
return MCPToolResponse(
success: false,
output: MCPOutput(content: "Error: param1 is required", mimeType: "text/plain"),
toolName: name
)
}
// 2. Get optional arguments
let param2 = arguments["param2"] as? Double ?? 0.0
// 3. Perform tool operation
let result = processParameters(param1, param2)
// 4. Return success result
return MCPToolResponse(
success: true,
output: MCPOutput(content: result, mimeType: "text/plain"),
toolName: name
)
}
private func processParameters(_ p1: String, _ p2: Double) -> String {
// Your tool logic here
return "Processed: \(p1) with value \(p2)"
}
}
Step 2: Register Tool
In Sources/SAM/main.swift, add the tool registration:
// In the tool registration section
conversationManager.mcpManager.registerTool(MyNewTool(), name: "my_new_tool")
Step 3: Build and Test
make build-debug
# Test by asking SAM to use your new tool
Step 1: Create Provider Class
Create Sources/APIFramework/MyProvider.swift:
import Foundation
class MyProvider: AIProvider {
let apiKey: String
let baseURL: URL
init(apiKey: String) {
self.apiKey = apiKey
self.baseURL = URL(string: "https://api.myprovider.com")!
}
func sendMessage(
_ message: String,
context: [Message],
tools: [Tool]?,
stream: Bool
) async throws -> ChatResponse {
// Implement API call
let request = createRequest(message: message, context: context)
let (data, _) = try await URLSession.shared.data(for: request)
return try JSONDecoder().decode(ChatResponse.self, from: data)
}
}
Step 2: Register Provider
In Sources/APIFramework/EndpointManager.swift:
func registerProviders() {
register("openai", OpenAIProvider.self)
register("copilot", GitHubCopilotProvider.self)
register("myprovider", MyProvider.self) // Add your provider
}
Step 3: Add UI Configuration
In Sources/UserInterface/PreferencesView.swift, add settings UI for your provider.
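As a hedged sketch, a settings section for the new provider might look like the following (the view name, storage keys, and bindings are illustrative; match them to the existing PreferencesView structure):

```swift
import SwiftUI

// Illustrative settings section for a hypothetical custom provider
struct MyProviderSettingsView: View {
    // Hypothetical storage keys - align these with SAM's actual configuration system
    @AppStorage("myprovider.apiKey") private var apiKey: String = ""
    @AppStorage("myprovider.enabled") private var isEnabled: Bool = false

    var body: some View {
        Section("My Provider") {
            Toggle("Enable My Provider", isOn: $isEnabled)
            SecureField("API Key", text: $apiKey)
        }
    }
}
```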
Tools register with MCPManager.registerTool() and return an MCPToolResponse to the LLM. A complete tool template:

import Foundation
import Logging
/// Custom MCP tool implementation
public final class MyCustomTool: MCPTool {
// MARK: - Required Protocol Properties
public let name = "my_custom_tool"
public let description = """
Brief description of what the tool does.
USE FOR:
- Use case 1
- Use case 2
"""
public var parameters: [String: MCPToolParameter] {
return [
"arg1": MCPToolParameter(
type: .string,
description: "Description of arg1",
required: true
),
"arg2": MCPToolParameter(
type: .number,
description: "Description of arg2",
required: false
)
]
}
// MARK: - Execution
private let logger = Logger(label: "com.sam.mcp.MyCustomTool")
public func execute(
arguments: [String: Any],
conversationManager: ConversationManager?,
conversationId: UUID?
) async throws -> MCPToolResponse {
// 1. Validate required arguments
guard let arg1 = arguments["arg1"] as? String else {
return MCPToolResponse(
success: false,
output: MCPOutput(content: "Error: arg1 is required", mimeType: "text/plain"),
toolName: name
)
}
let arg2 = arguments["arg2"] as? Double ?? 0.0
// 2. Perform tool operation
do {
let result = try await performOperation(arg1, arg2)
// 3. Return success result
return MCPToolResponse(
success: true,
output: MCPOutput(content: result, mimeType: "text/plain"),
toolName: name
)
} catch {
// 4. Return error result
return MCPToolResponse(
success: false,
output: MCPOutput(content: "Error: \(error.localizedDescription)", mimeType: "text/plain"),
toolName: name
)
}
}
// MARK: - Helper Methods
private func performOperation(
_ arg1: String,
_ arg2: Double
) async throws -> String {
// Your tool logic here
return "Result"
}
}
Use async/await for network and file I/O.

# Run all tests
make test
# Run specific test file
swift test --filter MyToolTests
# Run with coverage
swift test --enable-code-coverage
import XCTest
@testable import SAM
final class MyFeatureTests: XCTestCase {
var sut: MyFeature!
override func setUp() {
super.setUp()
sut = MyFeature()
}
override func tearDown() {
sut = nil
super.tearDown()
}
func testFeatureBehavior() async throws {
// Given
let input = "test"
// When
let result = try await sut.process(input)
// Then
XCTAssertEqual(result, "expected")
}
}
Test full workflows:
func testFullChatFlow() async throws {
// Create conversation
let conversation = Conversation()
// Send message
let response = try await provider.sendMessage(
"Hello",
context: [],
tools: nil,
stream: false
)
// Verify response
XCTAssertFalse(response.content.isEmpty)
}
Use LLDB's po command to inspect variables.

SAM uses structured logging:
import os.log
let logger = Logger(subsystem: "com.sam", category: "MyFeature")
logger.debug("Debug message")
logger.info("Info message")
logger.error("Error: \(error.localizedDescription)")
View logs:
# Console.app → Search "SAM"
# Or use log command:
log stream --predicate 'subsystem == "com.sam"' --level debug
Use Charles Proxy or Proxyman to inspect HTTP/HTTPS traffic.
Use Xcode Instruments:
# Build with profiling enabled
xcodebuild -scheme SAM -configuration Release
# Launch Instruments
open /Applications/Xcode.app/Contents/Applications/Instruments.app
Key instruments:
- Time Profiler: CPU usage
- Allocations: Memory usage
- Leaks: Memory leaks
- Network: Network activity
1. Async/Await
// ❌ Bad: Synchronous
let result = expensiveOperation()
// ✅ Good: Asynchronous
let result = await Task.detached {
expensiveOperation()
}.value
2. Lazy Loading
// ❌ Bad: Load all upfront
let models = loadAllModels()
// ✅ Good: Lazy load
lazy var models = { loadAllModels() }()
3. Caching
// Cache expensive computations
private var cache: [String: Result] = [:]
func compute(_ input: String) -> Result {
if let cached = cache[input] {
return cached
}
let result = expensiveComputation(input)
cache[input] = result
return result
}
SAM uses semantic versioning: MAJOR.MINOR.PATCH
1. Update Version
Edit Info.plist:
<key>CFBundleShortVersionString</key>
<string>1.0.25</string>
2. Build Release
make build-release
3. Create Archive
cd .build/Build/Products/Release
zip -r SAM.app.zip SAM.app
4. Sign Archive
# Sign with Sparkle EdDSA key
./bin/sign_update SAM.app.zip
# Output:
# sparkle:edSignature="..." length="..."
5. Update Appcast
Edit appcast.xml:
<item>
<title>SAM 1.0.25</title>
<sparkle:version>1.0.25</sparkle:version>
<sparkle:edSignature>PASTE_SIGNATURE_HERE</sparkle:edSignature>
<enclosure url="https://github.com/SyntheticAutonomicMind/SAM/releases/download/v1.0.25/SAM.app.zip" />
</item>
6. Create GitHub Release
gh release create v1.0.25 SAM.app.zip \
--title "SAM 1.0.25" \
--notes "Release notes here"
7. Commit Appcast
git add appcast.xml
git commit -m "chore: Update appcast for v1.0.25"
git push origin main
SAM uses Metal for GPU acceleration (MLX):
# Compile Metal shaders
xcrun metal -c shader.metal -o shader.air
xcrun metallib shader.air -o default.metallib
For distribution outside App Store:
# Sign app bundle
codesign --deep --force --verify --verbose \
--sign "Developer ID Application: YOUR NAME" \
SAM.app
# Verify signature
codesign --verify --deep --strict --verbose=2 SAM.app
Required for macOS 10.15+:
# Create archive
ditto -c -k --keepParent SAM.app SAM.zip
# Submit for notarization
xcrun notarytool submit SAM.zip \
--apple-id your@email.com \
--team-id TEAMID \
--password "app-specific-password"
# Staple ticket
xcrun stapler staple SAM.app
Happy Developing! 🚀