Documentation / Use Cases

Real-World Use Cases

See how SAM solves real problems for real people.

This guide shows practical scenarios where SAM helps. Each example includes step-by-step setup and what to expect. Find the scenario closest to your needs and follow along.


Table of Contents

  1. Getting Started Quickly
  2. Understanding SAM's Five Layers
  3. Document Analysis
  4. Medical Records
  5. Legal Documents
  6. Financial Statements
  7. Developer Workflows
  8. Code Project Assistant
  9. Multi-Team Project
  10. Research & Writing
  11. Academic Research
  12. Content Creation
  13. Diagram Generation
  14. LoRA Model Fine-Tuning
  15. Customer Support Bot
  16. Domain Expert Specialization
  17. Writing Style Adaptation
  18. Code Style Assistant
  19. Business Analysis
  20. Customer Analysis
  21. Competitive Research
  22. Choosing the Right Model

Getting Started Quickly

Most users should start here. Connect a cloud provider and start chatting. You can always switch to local models later.

The Fastest Path

```mermaid
graph LR
    A[Add API Key] --> B[Start Chatting]
    B --> C[Get Results]

    style A fill:#00d4ff,color:#000
    style B fill:#141822,color:#fff
    style C fill:#00ff88,color:#000
```
  1. Go to Preferences → Remote Providers
  2. Add your OpenAI, Anthropic, or GitHub Copilot credentials
  3. Create a new conversation
  4. Enable Tools toggle
  5. Start asking questions

That's it. SAM handles the rest.

Which Provider Should I Use?

| If you want... | Use this |
|----------------|----------|
| Best overall quality | GPT-4 or Claude 3.5 |
| Already have Copilot | GitHub Copilot (free if subscribed) |
| Budget-conscious | GPT-4 Turbo or DeepSeek |
| Complete privacy | Local MLX model |

About Privacy

Cloud models send your messages to the provider's servers. This is fine for most tasks.

Local models process everything on your Mac. Use these for sensitive data like medical records, legal documents, or personal journals.

You can mix both: Start with a cloud model for general work, switch to local when handling private information.


Understanding SAM's Five Layers

Before diving into use cases, understand how SAM builds your AI experience. SAM uses five layers, each adding more customization.

Layer 1: System Prompt

The system prompt defines SAM's core behavior. Think of it as SAM's job description.

```mermaid
graph LR
    A[System Prompt] --> B[SAM's Behavior]

    style A fill:#00d4ff,color:#000
    style B fill:#141822,color:#fff
```

Two built-in options:

| Prompt | Best For |
|--------|----------|
| SAM Default | General use, most users |
| SAM Minimal | Local models, speed |

How to change: Click the system prompt picker in the conversation toolbar.


Layer 2: Personality

Personality changes HOW SAM communicates. Same capabilities, different style.

```mermaid
graph LR
    A[You pick a personality] --> B[SAM adjusts tone and style]
    B --> C[Same power, different approach]

    style A fill:#00d4ff,color:#000
    style B fill:#141822,color:#fff
    style C fill:#141822,color:#fff
```

Example personalities:

| Category | Personality | Style |
|----------|-------------|-------|
| General | Assistant | Balanced, helpful |
| Tech | Tinkerer | Hands-on, practical |
| Creative | Wordsmith | Encouraging writer |
| Domain | Doctor | Clinical methodology |
| Fun | Pirate | Arr, nautical flair! |

How to change: Click the personality picker next to the model selector.


Layer 3: Mini-Prompts

Mini-prompts are reusable context snippets. Create once, enable when needed.

```mermaid
graph TD
    A[Create mini-prompts once] --> B[Enable per conversation]
    B --> C[SAM remembers your context]

    style A fill:#00d4ff,color:#000
    style B fill:#141822,color:#fff
    style C fill:#141822,color:#fff
```

Example mini-prompts:

How to create: Go to Preferences → Mini-Prompts → Add New


Layer 4: Tools

Tools define what SAM can DO. Enable or disable per conversation.

```mermaid
graph LR
    A[Conversation Settings] --> B{Tools Enabled?}
    B -->|Yes| C[SAM can take action]
    B -->|No| D[Chat only]

    style A fill:#00d4ff,color:#000
    style B fill:#141822,color:#fff
    style C fill:#00ff88,color:#000
    style D fill:#141822,color:#fff
```

Tool categories:

How to configure: Click the Tools toggle in the conversation toolbar.


Layer 5: Conversation History

Everything discussed in the current conversation becomes context.

```mermaid
graph TD
    A[Your messages] --> D[Context Window]
    B[SAM's responses] --> D
    C[Attached documents] --> D
    D --> E[SAM sees all of this]

    style A fill:#00d4ff,color:#000
    style B fill:#00d4ff,color:#000
    style C fill:#00d4ff,color:#000
    style D fill:#141822,color:#fff
    style E fill:#141822,color:#fff
```

Context window sizes by model:

| Model | Context Size |
|-------|--------------|
| GPT-4 Turbo | Large |
| Claude 3.5 | Large |
| Gemini 1.5 | Very Large |
| Local MLX | Variable |

How the Layers Work Together

Each layer builds on the previous one:

```mermaid
graph TD
    A[1. System Prompt<br/>Core behavior] --> B[2. Personality<br/>Communication style]
    B --> C[3. Mini-Prompts<br/>Your context]
    C --> D[4. Tools<br/>Available actions]
    D --> E[5. Conversation<br/>Current discussion]
    E --> F[SAM's Response]

    style A fill:#00d4ff,color:#000
    style B fill:#0099cc,color:#000
    style C fill:#007799,color:#fff
    style D fill:#005566,color:#fff
    style E fill:#003344,color:#fff
    style F fill:#00ff88,color:#000
```

Key insight: You control each layer independently. Mix and match to create your ideal AI assistant.
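
The stacking described above can be pictured as assembling a single request payload. Here is a minimal sketch of the idea (function and field names are illustrative, not SAM's internals):

```python
def build_request(system_prompt, personality, mini_prompts, history):
    """Collapse layers 1-3 into one system message, then append
    the conversation (layer 5) as user/assistant messages."""
    system_parts = [system_prompt, personality, *mini_prompts]
    messages = [{"role": "system", "content": "\n\n".join(system_parts)}]
    messages.extend(history)
    return messages

payload = build_request(
    "You are SAM, a helpful assistant.",                      # Layer 1
    "Tone: hands-on and practical.",                          # Layer 2
    ["Project: React 18 + TypeScript"],                       # Layer 3
    [{"role": "user", "content": "Add a dark mode toggle"}],  # Layer 5
)
```

Layer 4 (tools) would travel alongside as a separate field in the request rather than inside the message text.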


Document Analysis

SAM excels at reading and understanding your documents. Import PDFs, Word docs, or text files and ask questions about them.

Medical Records

The situation: You have lab results and medical records. You want to understand trends before your next appointment.

```mermaid
flowchart LR
    A[Import PDFs] --> B[SAM analyzes]
    B --> C[Trends identified]
    C --> D[Questions prepared]

    style A fill:#00d4ff,color:#000
    style D fill:#00ff88,color:#000
```

Setup options:

| Your preference | Configuration |
|-----------------|---------------|
| Quick start | Any cloud model + Doctor personality |
| Privacy-first | Local MLX model + Doctor personality |

Step 1: Configure

  1. Select your model (cloud or local)
  2. Select Doctor personality
  3. Enable Tools toggle

Step 2: Import your documents

Drag your PDF lab results into the chat window.

Step 3: Ask questions

You: I have lab results from 2020-2024. Help me understand 
my cholesterol trends before my cardiology appointment.

SAM: I've analyzed your lipid panels. Here's what I found:

TRENDS:
- LDL: Started at 142, now at 118 (improved)
- HDL: Stable around 52-55 (borderline low)
- Triglycerides: Spike in 2022 (285), now 168

QUESTIONS FOR YOUR CARDIOLOGIST:
1. LDL improved after starting medication in 2022. 
   Should we continue current dose?
2. HDL has been consistently in the 50s. What lifestyle 
   changes could help raise it?

DISCLAIMER: This is for informational purposes. 
Always discuss with your doctor.

What you get:
- Analysis of trends over time
- Prepared questions for your doctor

Privacy tip: Use a local model if you prefer medical data stays on your Mac.


Legal Documents

The situation: You're reviewing an employment contract. You need to understand complex terms before signing.

```mermaid
flowchart LR
    A[Import contract] --> B[SAM reviews clauses]
    B --> C[Flags concerns]
    C --> D[Questions to ask]

    style A fill:#00d4ff,color:#000
    style D fill:#00ff88,color:#000
```

Setup options:

| Your preference | Configuration |
|-----------------|---------------|
| Quick start | Any cloud model + Counsel personality |
| Privacy-first | Local MLX model + Counsel personality |

Step 1: Configure

  1. Select your model
  2. Select Counsel personality
  3. Enable Tools toggle

Step 2: Import the contract

Drag the PDF into chat.

Step 3: Ask for analysis

You: Review this employment contract. Explain the 
non-compete clause and flag anything unusual.

SAM: I've analyzed your employment agreement.

NON-COMPETE ANALYSIS (Section 8.2):
- Duration: 18 months post-employment
- Geographic scope: "Anywhere employer does business"
- Industry restriction: "Competing technology companies"

POTENTIAL CONCERNS:
1. California generally does NOT enforce non-competes.
   This clause may be unenforceable.
2. The geographic scope is extremely broad and 
   typically disfavored by courts.
3. 18 months is on the longer end of typical durations.

QUESTIONS TO ASK:
- Will they modify the non-compete given CA law?
- Can the scope be narrowed to direct competitors only?

DISCLAIMER: This is educational analysis, not legal advice.

Privacy tip: Use a local model for confidential contracts.


Financial Statements

The situation: You need to organize investment statements and transactions for your tax accountant.

```mermaid
flowchart LR
    A[Import statements] --> B[SAM categorizes]
    B --> C[Summary created]
    C --> D[Ready for accountant]

    style A fill:#00d4ff,color:#000
    style D fill:#00ff88,color:#000
```

Setup options:

| Your preference | Configuration |
|-----------------|---------------|
| Quick start | Any cloud model + Finance Coach personality |
| Privacy-first | Local MLX model + Finance Coach personality |

Step 1: Configure

  1. Select your model
  2. Select Finance Coach personality
  3. Enable Tools toggle

Step 2: Import your statements

Drag brokerage PDFs and transaction exports into chat.

Step 3: Request organization

You: I have statements from Fidelity and Schwab, plus 
crypto from Coinbase. Help me organize for my accountant.

SAM: I've analyzed your statements.

SUMMARY FOR YOUR ACCOUNTANT:

INVESTMENT INCOME:
- Dividends (qualified): $3,247
- Dividends (ordinary): $892
- Interest: $156

CAPITAL GAINS/LOSSES:
- Short-term gains: $4,521
- Long-term gains: $12,847
- Short-term losses: $2,103

CRYPTO (Needs Attention):
- 47 taxable events identified
- Net gain: ~$8,200 (needs Form 8949)
- 3 transactions may qualify as wash sales

I've created a summary document. Would you like me 
to organize by account or transaction type?

Privacy tip: Use a local model for sensitive financial data.
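
The short-term versus long-term split in the summary above hinges on a one-year holding period. Here is a simplified sketch of that classification (field names are illustrative; real tax prep also needs lot matching and wash-sale handling):

```python
from datetime import date

def classify_gains(sales):
    """Split realized gains into short-term (held <= 1 year) and long-term."""
    summary = {"short_term": 0.0, "long_term": 0.0}
    for s in sales:
        held_days = (s["sold"] - s["bought"]).days
        gain = s["proceeds"] - s["cost_basis"]
        bucket = "long_term" if held_days > 365 else "short_term"
        summary[bucket] += gain
    return summary

sales = [
    {"bought": date(2023, 1, 10), "sold": date(2024, 6, 1),
     "proceeds": 5000.0, "cost_basis": 3500.0},   # held > 1 year
    {"bought": date(2024, 2, 1), "sold": date(2024, 5, 1),
     "proceeds": 1200.0, "cost_basis": 1000.0},   # held 3 months
]
print(classify_gains(sales))  # {'short_term': 200.0, 'long_term': 1500.0}
```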


Developer Workflows

Code Project Assistant

The situation: You're working on a React/TypeScript project and want SAM to understand your codebase.

Step 1: Configure your environment

```mermaid
graph TD
    A[Create mini-prompts for your stack] --> B[Enable Tools and Terminal]
    B --> C[Select Tinkerer personality]
    C --> D[Ready to code]

    style A fill:#00d4ff,color:#000
    style B fill:#141822,color:#fff
    style C fill:#141822,color:#fff
    style D fill:#00ff88,color:#000
```
  1. Create mini-prompts:
     - "Project: React 18 + TypeScript 5.3 + Vite + Tailwind"
     - "Code style: Functional components, named exports, strict TypeScript"
     - "Working directory: ~/projects/my-app/"
  2. Select Tinkerer or Crusty Coder personality
  3. Enable Tools and Terminal toggles
  4. Enable the relevant mini-prompts for this conversation

Step 2: Start coding

You: Add a dark mode toggle to the header component

SAM: [Reads your existing Header.tsx]
[Checks your Tailwind config]
[Creates useTheme hook following your patterns]
[Updates Header.tsx with toggle button]
[Adds dark: variants to affected components]

Done. Here's what I did:
1. Created src/hooks/useTheme.ts with localStorage persistence
2. Updated Header.tsx with a toggle button (matches your UI patterns)
3. Added dark: variants to components

Run 'npm run dev' to test.

Multi-Team Project

The situation: You're building a complex app that needs frontend, backend, and testing expertise.

Step 1: Create a Shared Topic

```mermaid
graph TD
    A[Create Shared Topic] --> B[Create Backend conversation]
    A --> C[Create Frontend conversation]
    A --> D[Create Testing conversation]
    B --> E[All share same files and memory]
    C --> E
    D --> E

    style A fill:#00d4ff,color:#000
    style B fill:#141822,color:#fff
    style C fill:#141822,color:#fff
    style D fill:#141822,color:#fff
    style E fill:#00ff88,color:#000
```
  1. Go to Preferences → Shared Topics
  2. Create topic: "My Web App"
  3. Create three conversations:
     - "Backend API"
     - "Frontend UI"
     - "Testing"
  4. Enable shared topic in each conversation

Step 2: Work across conversations

In Backend conversation:

You: Create a Flask API with user authentication
SAM: Created backend/app.py, backend/auth.py, backend/models.py

In Frontend conversation:

You: Read the backend API and create React components
SAM: [Reads backend/app.py from shared workspace]
Found endpoints: /login, /register, /products
Creating React components...

In Testing conversation:

You: Write integration tests for the API
SAM: [Accesses both backend and frontend code]
Creating tests in tests/

What you get:
- Each conversation specialized in its domain
- All see the same files
- Shared memory of architecture decisions


Research & Writing

Academic Research

The situation: You're writing a thesis with hundreds of sources to organize and synthesize.

Step 1: Configure for research

```mermaid
graph LR
    A[Import papers] --> B[Search semantically]
    B --> C[Synthesize themes]
    C --> D[Write with citations]

    style A fill:#00d4ff,color:#000
    style B fill:#141822,color:#fff
    style C fill:#141822,color:#fff
    style D fill:#00ff88,color:#000
```
  1. Use Claude 3.5 (excellent for long documents)
  2. Select Scientist personality
  3. Create mini-prompts for your research topic and citation style

Step 2: Import and analyze

You: I have 47 papers on machine learning in healthcare. 
Help me identify main themes and gaps.

SAM: I've analyzed all 47 papers.

MAJOR THEMES:

Theme 1: Diagnostic Imaging (18 papers)
- Primary focus: radiology, pathology
- Key finding: CNN architectures dominate
- Gap: Few studies on deployment in resource-limited settings

Theme 2: Clinical Decision Support (14 papers)
- Primary focus: risk prediction, treatment selection
- Key finding: Explainability is major concern
- Gap: Longitudinal validation studies rare

Theme 3: NLP for Clinical Notes (11 papers)
- Primary focus: EHR analysis
- Key finding: Privacy-preserving methods emerging
- Gap: Limited non-English language research

RESEARCH GAPS (Potential thesis directions):
1. Intersection of explainability AND deployment
2. Cross-institutional validation studies
3. Patient-facing applications (only 3 papers!)

Content Creation

The situation: You need to research a topic and create content from multiple sources.

```mermaid
flowchart LR
    A[Request topic] --> B[SAM researches web]
    B --> C[Synthesizes sources]
    C --> D[Polished content]

    style A fill:#00d4ff,color:#000
    style D fill:#00ff88,color:#000
```

Step 1: Configure for content

  1. Use any cloud model (GPT-4, Claude)
  2. Select Wordsmith personality
  3. Enable Tools toggle

Step 2: Research and write

You: Research AI developments this week and create a 
newsletter summary

SAM: [Searches multiple sources]
[Scrapes relevant articles]
[Synthesizes information]

AI WEEKLY DIGEST

HEADLINE: Major breakthrough in reasoning models...

[Continues with well-structured content]

Diagram Generation

SAM renders mermaid diagrams inline. Just describe what you want and SAM creates a visual diagram.

Supported Diagram Types

SAM supports 13 mermaid diagram types:

| Type | Best For |
|------|----------|
| flowchart | Process flows, decision trees |
| sequenceDiagram | API calls, user interactions |
| classDiagram | Object-oriented design |
| stateDiagram | State machines, workflows |
| erDiagram | Database schemas |
| gantt | Project timelines |
| pie | Data distribution |
| journey | User journey mapping |
| mindmap | Brainstorming, idea organization |
| timeline | Historical events, roadmaps |
| quadrantChart | Priority matrices |
| requirementDiagram | Requirements traceability |
| gitGraph | Branch and merge visualization |

How to Create Diagrams

Step 1: Just ask SAM to create a diagram:

You: Create a flowchart showing user login flow

SAM: Here's the login flow:

```mermaid
flowchart TD
    A[User visits site] --> B{Has account?}
    B -->|Yes| C[Enter credentials]
    B -->|No| D[Sign up]
    C --> E{Valid?}
    E -->|Yes| F[Dashboard]
    E -->|No| G[Try again]
    D --> C
```

**Step 2:** SAM renders the diagram inline. You'll see a visual flowchart, not code.

**Step 3:** Ask for modifications:

You: Add 2FA step after valid credentials

SAM: Updated: [Diagram with 2FA step added]

### Example Requests

**Architecture diagram:**

"Create a sequence diagram showing how the API handles authentication"

**Database schema:**

"Draw an ER diagram for a blog with users, posts, and comments"

**Project timeline:**

"Create a Gantt chart for our 3-month MVP development"

**Process visualization:**

"Make a state diagram for order processing: created, paid, shipped, delivered"

### Tips for Better Diagrams

1. **Be specific**: "flowchart with 5 steps" is better than "make a diagram"
2. **Name your elements**: "boxes for Frontend, Backend, Database" gives clearer results
3. **Iterate**: Ask SAM to add, remove, or restyle elements
4. **Request different types**: Try "show this as a sequence diagram instead"

---

## LoRA Model Fine-Tuning

### Customer Support Bot

**The situation:** You want to train a model on your product documentation to answer customer questions accurately.

**Why LoRA:** Train the model to understand your product terminology, common questions, and company voice without sending data to cloud providers.

**Setup:**

1. **Prerequisites**:
   - Local MLX model installed (e.g., Qwen 2.5 7B)
   - Product documentation or FAQ documents
   - macOS with Apple Silicon

2. **Export Training Data**:
   - Go to SAM → Documents
   - Import your product documentation
   - Click **Export for Training**
   - Choose **Semantic (Paragraphs)** chunking
   - Enable PII redaction
   - Save JSONL file

3. **Configure Training**:
   - Go to Preferences → Model Training
   - Click **Train LoRA Adapter** tab
   - Choose JSONL file
   - Select base model (Qwen 2.5 7B)
   - Name: "Customer Support Bot"
   - Parameters:
     - Rank: 16 (good capacity for product info)
     - Epochs: 5
     - Batch Size: 4
     - Learning Rate: 0.0001 (default)

4. **Train**:
   - Click **Start Training**
   - Wait for completion (15-30 minutes typical)
   - Watch loss decrease (should reach ~1.5-2.0)
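
Before training, it can be worth sanity-checking the exported JSONL file. A small stdlib-only sketch (assumes a chat-style `messages` schema; adjust if your export format differs):

```python
import json

def check_jsonl(path):
    """Count lines that parse as JSON with a well-formed messages list."""
    ok, bad = 0, 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            try:
                rec = json.loads(line)
                msgs = rec["messages"]
                assert msgs and all(m["role"] and m["content"] for m in msgs)
                ok += 1
            except (ValueError, KeyError, TypeError, AssertionError):
                bad += 1
    return ok, bad
```

Lines that fail to parse, or that lack a role/content pair, count as bad; a clean export returns `(total, 0)`.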

**Using the Adapter**:

1. Create new conversation
2. Select `lora/Customer Support Bot` in model picker
3. Ask customer questions
4. Compare responses to base model

**Example Results**:

You: What is our warranty coverage?

Base model: I don't have specific information about your warranty.

Trained adapter: We provide a comprehensive 5-year warranty covering manufacturing defects. For warranty claims, contact support@company.com with your order number and photos of the defect. We'll arrange free return shipping and process replacement within 5 business days.

**Benefits**:
- Accurate, company-specific responses
- No hallucination of features
- Consistent brand voice
- Complete privacy (local training and inference)

---

### Domain Expert Specialization

**The situation:** You need a specialized assistant in a technical field (medical, legal, engineering).

**Why LoRA:** Train the model on domain-specific textbooks, papers, and terminology to create an expert assistant.

**Example: Medical Terminology Assistant**

**Setup:**

1. **Collect Training Data**:
   - Medical textbooks (PDFs)
   - Clinical guidelines
   - Medical terminology references
   - Case studies (with PII removed)

2. **Export Documents**:
   - Import all documents to SAM
   - Export with **Page Aware (PDFs)** chunking
   - Enable FULL PII redaction (critical for medical data)
   - Save JSONL file

3. **Configure Training**:
   - Base Model: Llama 3.1 8B or Qwen 2.5 7B
   - Name: "Medical Terminology Assistant"
   - Parameters:
     - Rank: 32 (higher for complex medical reasoning)
     - Epochs: 10
     - Batch Size: 2 (slower, more thorough)
     - Learning Rate: 0.00005 (lower for stability)

4. **Train and Validate**:
   - Start training (may take 1-2 hours)
   - Test with medical terminology questions
   - Compare accuracy to base model

**Example Results**:

You: Explain the pathophysiology of myocardial infarction

Base model: [Generic explanation]

Trained adapter: Myocardial infarction occurs when coronary artery occlusion (typically from plaque rupture and thrombus formation) results in ischemia to myocardial tissue. The affected region undergoes necrosis within 20-40 minutes without reperfusion. The infarction progresses in a wave-front pattern from endocardium to epicardium...

**⚠️ Important**: For educational/research purposes only. Never use for actual medical diagnosis.

---

### Writing Style Adaptation

**The situation:** You want the model to write in a specific style or voice.

**Why LoRA:** Train on examples of the desired writing style so the model consistently produces content matching that voice.

**Example: Company Brand Voice**

**Setup:**

1. **Gather Style Examples**:
   - Approved marketing copy
   - Blog posts
   - Email templates
   - Social media content
   - Company communication guidelines

2. **Create Training Examples**:
   - Format as question-answer pairs:
   ```jsonl
   {"messages": [{"role": "user", "content": "Write a product announcement for our new feature"}, {"role": "assistant", "content": "[Example in your brand voice]"}]}
   ```
   - Include 50-100 examples covering different content types

3. **Configure Training**:
   - Base Model: Qwen 2.5 3B (smaller model fine for style)
   - Name: "Brand Voice - Marketing"
   - Parameters:
     - Rank: 8 (style doesn't need huge capacity)
     - Epochs: 5
     - Batch Size: 8
     - Learning Rate: 0.0001

4. **Test Style Consistency**:
   - Generate multiple pieces of content
   - Verify tone, vocabulary, formatting match brand guidelines
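
The question-answer pairs from step 2 can also be written out programmatically. A minimal sketch (file name and sample pair are illustrative):

```python
import json

def write_training_file(pairs, path):
    """Write (prompt, response) pairs as chat-format JSONL, one record per line."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt, response in pairs:
            record = {"messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": response},
            ]}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

pairs = [
    ("Write a product announcement for our new feature",
     "Hey there! Big news: ..."),  # stand-in for copy in your brand voice
]
write_training_file(pairs, "brand_voice.jsonl")
```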

**Example Results**:

You: Write a product update email about our new dashboard feature

Base model: [Generic corporate tone]

Trained adapter: Hey there! 👋

We're excited to share something we've been working on just for you. Your new dashboard is here, and it's going to change how you work.

What's new? Everything you need, right where you need it. No more clicking through menus or hunting for data. It's all there, beautifully organized and ready to go.

[Continues in consistent brand voice...]

**Use Cases**:
- Marketing content generation
- Email drafting
- Blog posts
- Social media content
- Internal communications

---

### Code Style Assistant

**The situation:** You want code suggestions that match your team's coding standards.

**Why LoRA:** Train on your codebase so suggestions follow your architecture patterns, naming conventions, and style guide.

**Setup:**

1. **Export Code Examples**:
   - Export well-written code files from your project
   - Include code review comments showing corrections
   - Include documentation of standards
   - Use **Fixed Size** chunking for consistent context

2. **Create Training Examples**:
   - Format as "implement feature X" → "code following your standards"
   - Include examples of common patterns
   - 100-200 examples recommended

3. **Configure Training**:
   - Base Model: Qwen 2.5 Coder 7B or Llama 3.1 8B
   - Name: "Team Code Style - Python FastAPI"
   - Parameters:
     - Rank: 16
     - Epochs: 5
     - Batch Size: 4
     - Learning Rate: 0.0001

4. **Validate Code Quality**:
   - Test with real feature requests
   - Verify code follows team standards
   - Check naming conventions, error handling, documentation

**Example Results**:

You: Implement a user authentication endpoint

Base model: [Generic code structure]

Trained adapter:

```python
@router.post("/auth/login", response_model=AuthResponse)
async def authenticate_user(
    credentials: LoginCredentials,
    db: Database = Depends(get_database),
    logger: Logger = Depends(get_logger),
) -> AuthResponse:
    """
    Authenticate user and return JWT token.

    Args:
        credentials: User login credentials
        db: Database connection
        logger: Application logger

    Returns:
        AuthResponse with JWT token and user info

    Raises:
        HTTPException: If credentials invalid
    """
    # [Follows team's error handling, logging, and structure patterns]
```

**Benefits**:
- Consistent code style across team
- Follows architecture patterns
- Includes proper error handling
- Matches documentation standards

---

## Business Analysis

### Customer Analysis

**The situation:** You have customer data and need insights for retention strategy.

```mermaid
flowchart LR
    A[Import data] --> B[SAM analyzes patterns]
    B --> C[At-risk identified]
    C --> D[Action recommendations]

    style A fill:#00d4ff,color:#000
    style D fill:#00ff88,color:#000
```

Setup options:

| Your preference | Configuration |
|-----------------|---------------|
| Quick start | Any cloud model + Professional personality |
| Privacy-first | Local MLX model + Professional personality |

Step 1: Configure

  1. Select your model
  2. Select Professional personality
  3. Enable Tools toggle

Step 2: Analyze your data

You: Analyze my customer purchase history and identify 
who I should focus retention efforts on.

SAM: Based on your customer database:

HIGH-VALUE AT-RISK CUSTOMERS (Priority Outreach):

| Customer    | Lifetime Value | Last Purchase | Risk Level |
|-------------|---------------|---------------|------------|
| Acme Corp   | $24,500       | 89 days ago   | HIGH       |
| Beta LLC    | $18,200       | 67 days ago   | MEDIUM     |
| Delta Inc   | $15,800       | 112 days ago  | CRITICAL   |

RECOMMENDED ACTIONS:
1. Delta Inc: Immediate outreach. They haven't ordered in 
   112 days but historically order every 45 days.
2. Acme Corp: Schedule check-in. Order frequency dropped 
   from monthly to quarterly in Q3.

Would you like me to draft outreach emails?
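
The at-risk logic in this example boils down to comparing each customer's gap since their last purchase with their historical ordering cadence. A simplified sketch (thresholds and field names are illustrative):

```python
from datetime import date

def flag_at_risk(customers, today):
    """Flag customers whose gap since last purchase far exceeds their usual interval."""
    flagged = []
    for c in customers:
        gap = (today - c["last_purchase"]).days
        ratio = gap / c["avg_interval_days"]  # how overdue, relative to cadence
        if ratio >= 2.0:
            risk = "CRITICAL"
        elif ratio >= 1.5:
            risk = "HIGH"
        elif ratio >= 1.2:
            risk = "MEDIUM"
        else:
            continue  # on schedule, no flag
        flagged.append((c["name"], gap, risk))
    return sorted(flagged, key=lambda x: -x[1])  # longest gap first

customers = [
    {"name": "Delta Inc", "last_purchase": date(2025, 1, 1), "avg_interval_days": 45},
    {"name": "Acme Corp", "last_purchase": date(2025, 2, 1), "avg_interval_days": 60},
]
print(flag_at_risk(customers, date(2025, 4, 23)))
```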

Privacy tip: Use a local model for confidential customer data.


Competitive Research

The situation: You need to gather intelligence on competitors.

```mermaid
flowchart LR
    A[Request research] --> B[SAM searches web]
    B --> C[Analyzes findings]
    C --> D[Intelligence report]

    style A fill:#00d4ff,color:#000
    style D fill:#00ff88,color:#000
```

Step 1: Configure

  1. Use any cloud model (best results with web access)
  2. Select Professional personality
  3. Enable Tools toggle

Step 2: Research and organize

You: Research our top 3 competitors' recent product announcements

SAM: [Searches company websites, news, press releases]

COMPETITIVE INTELLIGENCE REPORT:

COMPETITOR A:
- Announced: New enterprise tier (March 2025)
- Pricing: $99/seat vs our $79/seat
- Key feature: Advanced analytics dashboard

COMPETITOR B:
- Announced: Mobile app launch (Q2 2025)
- Strategy: Focus on small business market
- Weakness: No API access in base tier

COMPETITOR C:
- No major announcements
- Note: Recent layoffs suggest budget constraints

STRATEGIC IMPLICATIONS:
1. Our pricing remains competitive
2. Mobile app should be prioritized
3. Opportunity to capture Competitor C's customers

Choosing the Right Model

Different models excel at different tasks:

Quick Reference

```mermaid
graph TD
    A[What's your priority?] --> B{Privacy?}
    B -->|Yes| C[Local MLX or GGUF]
    B -->|No| D{Task type?}
    D -->|Long documents| E[Claude 3.5]
    D -->|General chat| F[GPT-4]
    D -->|Huge files| G[Gemini 1.5]
    D -->|Fast + cheap| H[GPT-4 Turbo or DeepSeek]

    style A fill:#00d4ff,color:#000
    style B fill:#141822,color:#fff
    style C fill:#00ff88,color:#000
    style D fill:#141822,color:#fff
    style E fill:#00ff88,color:#000
    style F fill:#00ff88,color:#000
    style G fill:#00ff88,color:#000
    style H fill:#00ff88,color:#000
```

Model Comparison

| Model | Speed | Context | Privacy | Cost |
|-------|-------|---------|---------|------|
| GPT-4 | Medium | Large | Cloud | $$ |
| Claude 3.5 | Medium | Large | Cloud | $$ |
| Gemini 1.5 | Medium | Very Large | Cloud | $$ |
| Local MLX | Varies | Variable | 100% Local | FREE |
| Local GGUF | Varies | Variable | 100% Local | FREE |

Model Switching

You can switch models mid-conversation. SAM maintains context across switches.

Start: Local model for sensitive discussion
Switch: Cloud model for complex reasoning
Return: Local model for final sensitive summary

How to switch: Click the model picker and select a different model.


Next Steps

Pick a scenario and try it:

  1. Choose the use case closest to your needs
  2. Follow the setup steps
  3. Start with a simple request
  4. Build from there

Learn more:


Ready to get started? Open SAM and try one of these scenarios today.