
Advanced Workflows

Build systems, not just prompts.

SAM isn't limited to single conversations. With subagents, shared topics, and MCP tools, you can orchestrate entire projects: specialized agents working in parallel, sharing knowledge, and building on each other's work.

This guide shows you how to:
- Spawn subagents that handle complex subtasks autonomously
- Design multi-conversation patterns for large projects
- Automate workflows with SAM's REST API
- Optimize performance for enterprise-scale work

Real-world results:
- Research projects spanning hundreds of sources
- Full-stack development from planning to deployment
- Document analysis across massive corpora
- Automated pipelines that run while you sleep

Who this is for: Users who've mastered the basics and want to push SAM to its limits. Build systems that use SAM's full capabilities.


Table of Contents

  1. Subagent Workflows
  2. Multi-Conversation Patterns
  3. Complex Project Templates
  4. Automation & Scripting
  5. Performance Optimization
  6. Advanced Tool Usage

Subagent Workflows

What Are Subagents?

Subagents are specialized AI agents spawned by the main conversation to handle specific subtasks.

Benefits:
- Fresh iteration budget for each subagent
- Isolated context for focused, specialized work
- Parallel execution of multiple tasks simultaneously
- Specialized expertise - each subagent focuses on one aspect

When to Use Subagents

Complex Multi-Part Tasks:

Main: "Research and write a report on remote work trends"
├── Subagent 1: Find recent statistics and studies
├── Subagent 2: Interview summaries and expert quotes
├── Subagent 3: Draft the executive summary
└── Subagent 4: Format and proofread final document

Budget Planning:

Main: "Plan our family budget for 2025"
├── Subagent 1: Analyze current spending categories
├── Subagent 2: Research cost-saving opportunities
├── Subagent 3: Build the monthly budget plan
└── Subagent 4: Create tracking spreadsheet template

Research:

Main: "Research AI safety"
├── Subagent 1: Literature review
├── Subagent 2: Current developments
├── Subagent 3: Expert opinions
└── Subagent 4: Synthesis and summary

Subagent Best Practices

Clear Instructions:

❌ Bad: "Help with the budget"
✅ Good: "Analyze our last 3 months of spending. Identify the top 3 categories where we can realistically cut 15% or more, and explain why."

Shared Topic Integration:

1. Enable shared topic in your main conversation
2. Spawn subagents - they automatically inherit the topic workspace
3. All subagents collaborate in the same directory
4. Results persist and are available to the main conversation

Iteration Budgeting:
- Each subagent gets a fresh iteration budget
- Can request increases via increase_max_iterations
- Main conversation tracks overall progress


Multi-Conversation Patterns

Pattern 1: Specialist Team

Concept: Multiple persistent conversations, each specialized in one area

Shared Topic: "Home Purchase 2025"

Conversations:
├── "Market Researcher" (finds listings, pricing trends)
├── "Financial Planner" (budget, mortgage scenarios)
├── "Neighborhood Scout" (schools, commute, amenities)
├── "Inspector Prep" (what to look for, checklists)
└── "Document Tracker" (paperwork, deadlines)

Workflow:
1. Start with Market Researcher to understand the market
2. Financial Planner runs mortgage and budget scenarios
3. Neighborhood Scout evaluates shortlisted areas
4. Inspector Prep reviews each property before viewings
5. Document Tracker stays on top of paperwork and deadlines

Benefits:
- Deep specialization per conversation
- Context maintained per domain
- All conversations access the shared workspace

Pattern 2: Pipeline Processing

Setup: Sequential conversations for workflow stages

Shared Topic: "Content Pipeline"

Pipeline:
"Research" → "Drafting" → "Editing" → "Publishing"

Workflow:

Research Conversation:
- Gathers information
- Stores findings in shared memory
- Saves sources to shared workspace

Drafting Conversation:
- Retrieves research from memory
- Reads source documents
- Creates draft in shared workspace

Editing Conversation:
- Reads draft
- Refines content
- Applies style guidelines

Publishing Conversation:
- Reads final draft
- Formats for publication
- Handles deployment
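The pipeline stages above can also be driven from a script using SAM's REST API (covered under Automation & Scripting below). This is a minimal sketch, not a definitive implementation: the conversation IDs are hypothetical placeholders, and it assumes each stage's conversation already exists and has the shared topic enabled.

```python
import requests

BASE_URL = "http://localhost:8080"  # assumed local SAM instance

def chat_payload(conv_id: str, prompt: str, model: str = "gpt-4") -> dict:
    """Build the request body for /api/chat/completions."""
    return {
        "model": model,
        "conversationId": conv_id,
        "messages": [{"role": "user", "content": prompt}],
    }

# Hypothetical conversation IDs for each pipeline stage
STAGES = [
    ("research-conv", "Gather sources on the topic and save them to the shared workspace"),
    ("drafting-conv", "Write a first draft from the research in the shared workspace"),
    ("editing-conv", "Refine the draft and apply our style guidelines"),
    ("publishing-conv", "Format the final draft for publication"),
]

def run_pipeline():
    """Run each stage in order; a failed request stops the pipeline."""
    for conv_id, prompt in STAGES:
        resp = requests.post(f"{BASE_URL}/api/chat/completions",
                             json=chat_payload(conv_id, prompt))
        resp.raise_for_status()

if __name__ == "__main__":
    run_pipeline()
```

Running the stages sequentially matters here: each conversation reads what the previous one wrote to the shared workspace, so the order encodes the pipeline.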

Pattern 3: Review & Iterate

Setup: Primary + Review conversations

Shared Topic: "Code Project"

Conversations:
├── "Implementation" (primary development)
└── "Code Review" (analysis and feedback)

Workflow:

Implementation:
1. Writes code
2. Commits to shared workspace
3. Requests review

Code Review:
1. Reads code from workspace
2. Analyzes for issues
3. Stores feedback in memory

Implementation:
1. Retrieves feedback from memory
2. Implements improvements
3. Cycle repeats
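The review-and-iterate cycle above can likewise be automated over the REST API. A sketch under stated assumptions: the conversation IDs are hypothetical, the response is assumed to follow the OpenAI-compatible `choices[0].message.content` shape, and the stopping check is deliberately naive (it just looks for the phrase "no issues" in the reviewer's reply).

```python
import requests

BASE_URL = "http://localhost:8080"  # assumed local SAM instance

def ask(conv_id: str, prompt: str, model: str = "gpt-4") -> str:
    """Send one message and return the assistant's reply text
    (assumes an OpenAI-compatible response shape)."""
    resp = requests.post(f"{BASE_URL}/api/chat/completions", json={
        "model": model,
        "conversationId": conv_id,
        "messages": [{"role": "user", "content": prompt}],
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def review_done(feedback: str) -> bool:
    """Naive stopping check: the reviewer reports a clean pass."""
    return "no issues" in feedback.lower()

def review_cycle(impl_conv: str, review_conv: str, max_rounds: int = 3):
    """Alternate implementation and review until clean or out of rounds."""
    for _ in range(max_rounds):
        ask(impl_conv, "Implement the next task and commit it to the shared workspace")
        feedback = ask(review_conv, "Review the latest code in the shared workspace")
        if review_done(feedback):
            break
        ask(impl_conv, f"Address this review feedback:\n{feedback}")

if __name__ == "__main__":
    review_cycle("implementation-conv", "code-review-conv")  # hypothetical IDs
```

A real version would want a more robust completion signal than string matching, such as asking the reviewer to answer with a structured verdict.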

Complex Project Templates

Personal Project Management

Shared Topic Setup:

Topic: "Home Buying 2025"

Directory Structure:
~/SAM/Home Buying 2025/
├── research/
├── finances/
├── neighborhoods/
├── viewings/
└── documents/

Conversations:

1. "Market Research"
   - Personality: Scholar (analytical, thorough)
   - Model: GPT-4 (complex analysis)
   - Working Dir: research/

2. "Budget & Finances"
   - Personality: Professional (direct, business-focused)
   - Model: Claude 3.5 Sonnet
   - Working Dir: finances/

3. "Neighborhood Comparison"
   - Personality: Scholar
   - Model: GPT-4
   - Working Dir: neighborhoods/

4. "Document Checklist"
   - Personality: Professional
   - Model: GPT-3.5-turbo (sufficient for lists)
   - Working Dir: documents/

Workflow:

Week 1:
- Market Research: Average prices, trends by area
- Budget & Finances: Pre-approval estimate, monthly payment scenarios

Week 2:
- Neighborhood Comparison: Schools, commute, amenities for shortlist
- Market Research: Dig into specific listings

Week 3:
- Viewings: Log notes from each visit
- Document Checklist: Track what's needed for offer

Week 4:
- All conversations: Final comparison, make offer

Research & Writing Project

Setup:

Topic: "Research Paper on AI Ethics"

Conversations:
1. "Literature Review"
2. "Data Collection"
3. "Analysis"
4. "Writing"
5. "Citations"

Advanced Pattern:

Literature Review:
- Imports multiple PDFs
- Uses Vector RAG for semantic search
- Stores summaries in shared memory
- Tags: "methodology", "findings", "critique"

Data Collection (spawns subagents):
├── Subagent: Web research (latest developments)
├── Subagent: Expert interviews (contact info)
└── Subagent: Dataset analysis

Writing (uses all previous work):
- Retrieves summaries from memory
- Accesses imported papers
- References data collection results
- Synthesizes into coherent paper

Citations:
- Scans all references in paper
- Generates bibliography
- Verifies citation format

Automation & Scripting

API-Based Workflows

Use SAM's REST API for automation:

#!/bin/bash
# Automated code review script (requires curl and jq)

# Start conversation
CONV_ID=$(curl -X POST http://localhost:8080/v1/conversations \
  -H "Content-Type: application/json" \
  -d '{"title":"Automated Review"}' \
  | jq -r '.id')

# Submit code for review
curl -X POST http://localhost:8080/api/chat/completions \
  -H "Content-Type: application/json" \
  -d "{
    \"model\": \"gpt-4\",
    \"conversationId\": \"$CONV_ID\",
    \"messages\": [{
      \"role\": \"user\",
      \"content\": \"Review this PR for security and performance issues\"
    }]
  }"

Scheduled Workflows

Daily Summary:

# cron: 0 18 * * * /path/to/daily_summary.sh

#!/bin/bash
# Generate daily summary of project progress
# (PROJECT_CONV_ID must be exported in the cron environment)

curl -X POST http://localhost:8080/api/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "conversationId": "'$PROJECT_CONV_ID'",
    "messages": [{
      "role": "user",
      "content": "Summarize what I worked on today and list any open questions"
    }]
  }' > ~/daily-summary.txt

Batch Processing

Process Multiple Files:

import requests

files = ['chapter1.md', 'chapter2.md', 'chapter3.md']
endpoint = 'http://localhost:8080/api/chat/completions'
conv_id = 'YOUR_CONVERSATION_ID'  # ID of an existing SAM conversation

for file in files:
    response = requests.post(endpoint, json={
        'model': 'gpt-4',
        'conversationId': conv_id,
        'messages': [{
            'role': 'user',
            'content': f'Analyze {file} for code quality issues'
        }]
    })
    response.raise_for_status()
    print(f"Results for {file}:", response.json())
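For larger batches, the loop above can be parallelized with a thread pool. A sketch, reusing the same endpoint and payload shape; `CONV_ID` is a placeholder, and whether SAM serializes concurrent requests to a single conversation is not specified here, so one conversation per worker may be safer in practice.

```python
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "http://localhost:8080/api/chat/completions"
CONV_ID = "YOUR_CONVERSATION_ID"  # placeholder - an existing conversation's ID

def payload_for(file: str) -> dict:
    """Request body for one file-analysis message."""
    return {
        "model": "gpt-4",
        "conversationId": CONV_ID,
        "messages": [{"role": "user",
                      "content": f"Analyze {file} for code quality issues"}],
    }

def analyze(file: str) -> tuple:
    """Submit one analysis request; returns (file, response JSON)."""
    resp = requests.post(ENDPOINT, json=payload_for(file))
    resp.raise_for_status()
    return file, resp.json()

def analyze_all(files, workers=3):
    """Run analyses concurrently; a small pool avoids flooding the server."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(analyze, files))

if __name__ == "__main__":
    for name, result in analyze_all(["chapter1.md", "chapter2.md", "chapter3.md"]).items():
        print(name, result)
```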

Performance Optimization

Context Management

YaRN Profile Selection:

Small Tasks       → Default (low scaling)
Long Conversations → Extended (medium scaling)
Document Analysis  → Universal (high scaling, default)
Enterprise Docs    → Mega (enterprise scaling)

How to Set: Click the Parameters button in the toolbar to expand Advanced Parameters, then select Context Size.

Manual Context Pruning:

When context fills:
1. Clear less important messages
2. Summarize earlier parts
3. Store summaries in memory
4. Continue with clean context

Memory Optimization

Regular Cleanup:

Every few weeks:
1. Review memory statistics
2. Clear low-importance memories (< 0.4)
3. Remove duplicate entries
4. Archive completed project memories

Efficient Storage:

❌ Don't store everything
✅ Store decisions, requirements, and critical information

❌ Don't duplicate across conversations
✅ Use shared topics for related work

❌ Don't use high similarity thresholds for documents
✅ Lower the threshold for document search (0.15-0.25)
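The threshold advice is easiest to see with a toy example: similarity scores for document chunks often cluster well below 1.0, so a strict cutoff silently discards relevant material. A conceptual sketch - the chunks and scores here are invented for illustration, not real SAM output:

```python
def filter_chunks(scored_chunks, threshold):
    """Keep chunks whose similarity score meets the threshold."""
    return [chunk for chunk, score in scored_chunks if score >= threshold]

# Invented scores: relevant document chunks often land in the 0.2-0.4 range
scored = [
    ("budget summary table", 0.38),
    ("mortgage scenario notes", 0.27),
    ("unrelated meeting notes", 0.08),
]

print(filter_chunks(scored, threshold=0.5))  # [] - too strict, everything dropped
print(filter_chunks(scored, threshold=0.2))  # keeps both relevant chunks
```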

Model Selection Strategy

By Task Type:

Quick Q&A          → GPT-3.5-turbo (fast, cheap)
Complex Logic      → GPT-4 (best reasoning)
Creative Writing   → Claude 3.5 Sonnet (creative)
Code Generation    → GitHub Copilot GPT-4 (code-optimized)
Privacy-Critical   → Local MLX/GGUF models (offline)

Cost Optimization:

1. Use cheaper models (GPT-3.5-turbo) for initial drafts
2. Refine with expensive models (GPT-4) when needed
3. Use local models for sensitive or privacy-critical data
4. Enable streaming for better user experience
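If you script against the API, the selection strategy above can be encoded as a small routing helper. A sketch: the model identifiers follow the table, but the local-model name is a placeholder, not a real SAM model ID.

```python
# Task-to-model routing, following the selection table above
MODEL_BY_TASK = {
    "quick_qa": "gpt-3.5-turbo",
    "complex_logic": "gpt-4",
    "creative_writing": "claude-3.5-sonnet",
    "code_generation": "gpt-4",        # GitHub Copilot GPT-4 in the table
    "initial_draft": "gpt-3.5-turbo",  # cheap first pass; refine with GPT-4 later
}

def pick_model(task_type: str, privacy_critical: bool = False) -> str:
    """Choose a model for a task; privacy-critical work stays local."""
    if privacy_critical:
        return "local-mlx-model"  # placeholder for an MLX/GGUF model name
    return MODEL_BY_TASK.get(task_type, "gpt-3.5-turbo")
```

The privacy check comes first on purpose: sensitive data should never be routed to a cloud model, no matter what the task type suggests.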

Advanced Tool Usage

Chaining Multiple Tools

Pattern: Research → Store → Retrieve → Create

Step 1: Web Research
Tool: web_operations (research)
Result: Stores findings in memory

Step 2: Memory Retrieval
Tool: memory_operations (search_memory)
Result: Retrieves relevant research

Step 3: Document Creation
Tool: document_operations (create)
Result: Creates final document with citations

Terminal Workflows

Complex Build Pipeline:

Session 1: "build"
- run_command: "make clean"
- run_command: "make build"
- get_output: Check for errors

Session 2: "test"
- run_command: "pytest tests/"
- get_output: Verify all passed

Session 3: "deploy"
- run_command: "docker build ."
- run_command: "docker push ..."

Persistent Sessions:

Main conversation maintains multiple sessions:
- "research": Gathering sources and data
- "writing": Drafting sections
- "review": Fact-checking and editing
- "output": Formatting and export

Switch between sessions as needed

File Operation Optimization

Bulk Operations:

Instead of:
- create_file (10 times)

Use:
- multi_replace_string (one operation)
- Apply templates efficiently

Smart Search:

Semantic: Find by meaning
"notes about our budget constraints"

Regex: Find by pattern
"Total:.*\$[0-9]+"

Glob: Find by filename
"**/budget-*.md"
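The three modes differ in how they match. Semantic search relies on SAM's embedding index, but the regex and glob patterns above behave like their standard counterparts, which you can verify locally - noting that Python's fnmatch lets `*` cross directory separators, unlike a strict shell glob:

```python
import re
from fnmatch import fnmatch

# Regex: find a money total, as in the pattern above
assert re.search(r"Total:.*\$[0-9]+", "Total: $1250")
assert not re.search(r"Total:.*\$[0-9]+", "Total: unknown")

# Glob: match budget files anywhere in the tree
# (fnmatch's '*' crosses '/'; strict recursive globbing uses pathlib's rglob)
assert fnmatch("finances/budget-2025.md", "**/budget-*.md")
assert not fnmatch("finances/notes.md", "**/budget-*.md")

print("all patterns behave as expected")
```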

Example: Complete Research Project

Setup (Day 1):

1. Create shared topic: "AI in Healthcare Report"
2. Create four conversations:
   - "Literature Review" (Scholar personality, GPT-4, working dir: sources/)
   - "Data Analysis" (Professional personality, Claude 3.5, working dir: data/)
   - "Writing" (Creative personality, GPT-4, working dir: drafts/)
   - "Fact-Checking" (Scholar personality, GPT-4, working dir: sources/)
3. Enable shared topic in all conversations

Day 1-2: Literature Review

You: Find and summarize 10 key papers on AI diagnostics published since 2021
SAM: [Searches web for papers]
     Summarized 10 papers - saved to sources/literature-review.md
     Key themes: accuracy improvements, FDA approval challenges, bias concerns

Day 3: Data Analysis

You: Read the literature review and identify the key statistics to highlight
SAM: [Reads sources/literature-review.md from shared workspace]
     Found 8 compelling statistics - saved to data/key-stats.md
     Strongest: "AI matched radiologist accuracy in 94% of cases (Smith et al, 2023)"

Day 4-5: Writing

You: Write the executive summary using our research and key stats
SAM: [Reads literature-review.md and key-stats.md]
     Draft saved to drafts/executive-summary.md
     ~800 words, covers all major themes

Day 6: Fact-Checking

You: Check every statistic in the executive summary against our sources
SAM: [Reads drafts/executive-summary.md and sources/literature-review.md]
     All 6 statistics verified. One citation needed a correction - fixed in draft.

Result: A well-sourced, fact-checked report built across four specialized conversations - each one staying focused, all sharing the same files and memory.

Troubleshooting Advanced Workflows

Subagents Not Finishing:
- Increase max iterations in settings
- Break large tasks into smaller, focused subtasks
- Check for circular dependencies in the workflow

Memory Conflicts:
- Use clear, distinct tags for different project aspects
- Assign higher importance to critical information
- Review and remove duplicate memories regularly

Context Overflow:
- Switch to a higher YaRN profile (Extended, Universal, or Mega)
- Clear less important messages from the conversation
- Summarize old context and store summaries in memory

Performance Issues:
- Use appropriate models for each task type
- Optimize context size settings
- Clear terminal session history periodically
- Remove old files from the workspace


Next Steps

Related:
- Memory & RAG - Memory mastery
- Shared Topics - Collaboration basics


Level up your SAM workflows and build complex projects efficiently!