The Intelligence Layer

From Hierarchy
to Intelligence

Hierarchy exists because humans were the only option for routing information. AI changes that equation. The companies that understand this first will be unrecognizable in three years.

Jack Dorsey is restructuring Block around this idea. Foresight delivers the same intelligence layer — for teams of 5 to 50.

Two world models. One operating truth. No more humans routing information that a machine already understands.

The Thesis

Hierarchy Is a Compression Algorithm
for Human Limits

Every org chart in the world exists for the same reason: humans can’t hold the full context of an operation in their heads. So we built layers. Managers who summarize upward. Directors who translate strategy downward. VPs who broker between departments. Each layer exists to route information that the layer above can’t process directly.

This was the best available design — when humans were the only option. It isn’t anymore.

Jack Dorsey published From Hierarchy to Intelligence, laying out Block’s future: replace the management coordination layer with an AI intelligence layer that understands the whole company. Not AI as copilot — AI as the coordination mechanism itself.

The question isn’t whether this shift happens. It’s whether you’re the one building it — or the one disrupted by it.

Jack Dorsey

“The fundamental structure of a company should be: ICs do the work. DRIs own 90-day problems. Player-coaches develop people. Everything else is coordination — and coordination is what AI does best.”

Jack Dorsey — From Hierarchy to Intelligence, 2025

Two World Models

Company Worldview.
Customer Worldview.

Dorsey’s framework has two pillars. So does Foresight. An intelligence layer that understands your operation from the inside, and one that understands your customers from the outside. Together, they replace the coordination hierarchy with something faster, cheaper, and more honest.

๐Ÿข

Company Worldview

How your company understands itself

The Company Worldview is the living model of your operation. Not a dashboard that shows what happened — a system that understands what’s true right now. What’s moving, what’s stalled, where dependencies are real vs. claimed, who owns what, and what changed since yesterday.

→

Morning Brief compresses the full operating state so the founder stops being the human router of context.

→

Execution Health replaces hierarchy’s status reporting with actual signal — credible vs. unproven dependency, visible proof gaps.

→

Decision Weighting routes 2-way door decisions to speed and 1-way door decisions to judgment.

→

Closeout writes the operating truth to persistent memory so tomorrow doesn’t start from scratch.

In Dorsey’s framework, the Company World Model eliminates the need for managers to summarize upward. Foresight’s Morning Brief already does this — every morning, automatically.

👥

Customer Worldview

How your company understands its customers

The Customer Worldview is the living model of what your customers actually need — built from real signal, not surveys and NPS scores. It reads patterns in support tickets, churn signals, deal velocity, product usage, and market movement to build a continuously updated map of customer reality.

→

Pattern recognition surfaces what customers are actually doing vs. what they say they’re doing.

→

Churn signal detection identifies at-risk relationships before they become cancellation emails.

→

Capability gap mapping — when the intelligence layer can’t compose a solution for what customers need, that gap becomes the product roadmap.

→

Market context layers competitive movement and industry shifts into the customer model so opportunities surface before they’re obvious.

Dorsey’s Customer World Model means the company stops guessing what to build. Foresight’s Customer Worldview turns real usage and support signal into prioritized intelligence.

Three Roles Survive. Everything Else Is Coordination.

Dorsey’s framework reduces the entire org to three human roles. The intelligence layer handles everything in between.

⚡

ICs

Individual contributors do the actual work. Building, designing, selling, supporting. The intelligence layer feeds them context so they start sharp instead of spending the first hour reconstructing yesterday.

🎯

DRIs

Directly Responsible Individuals own 90-day problems. Not permanent fiefdoms — time-bounded missions. The intelligence layer tracks their execution health and surfaces blockers before they become crises.

๐Ÿ‹๏ธ

Player-Coaches

The only “managers” left are player-coaches who develop people and exercise judgment on 1-way door decisions. They don’t route information — the intelligence layer does that.

This Isn’t Theory. Block Is Doing It. So Are We.

Dorsey is building this intelligence layer for 12,000 employees at a $40B+ company. Foresight delivers the same architecture for teams of 5–50. The Morning Brief is your Company Worldview. Customer Worldview turns signal into roadmap. The coordination hierarchy that used to require a manager for every 8 people — Foresight replaces it with software that never forgets, never filters, and never plays politics.

Your Best People Are Drowning
in Coordination Work

If You Run a Small Business

Your team is talented but drowning. Everyone wears 4 hats. The work that matters — strategy, relationships, creative thinking — gets buried under status updates, context switching, and information routing. The founder becomes the human API: every question flows through one person because no one else has the full picture.

If You Lead a Corporate Team

Your department is headcount-capped but scope keeps growing. You have a VP who summarizes for a Director who summarizes for a Manager who asks an IC what actually happened. Three layers of lossy compression between the truth and the decision-maker. That’s not management. That’s a broken telephone with a headcount budget.

Not a Chatbot. An Intelligence Layer.

Foresight is a coordinated system of specialized AI that understands your company and your customers — then routes the right context to the right person at the right time. No more manager-as-middleware. No more status meetings that exist so someone upstream can feel informed.

Your team doesn’t get replaced. The coordination tax does.

ICs get context delivered. DRIs get execution truth. The founder gets the full picture without being the router. That’s the intelligence layer.

The AI Department Architecture

Your department head stays in charge. The AI Orchestrator manages specialized agents across every function. Everything runs inside an enterprise security boundary.

graph TB
    DH["👤 DEPARTMENT HEAD / OWNER<br/>Sets Priorities · Makes Decisions · Provides Judgment"]
    ORC["🧠 AI ORCHESTRATOR<br/>Business Context · Task Routing · Performance Tracking"]
    MKT["📣 MARKETING<br/>Content · SEO · Social · Analytics"]
    OPS["⚙️ OPERATIONS<br/>Automation · Inventory · Vendors · Reports"]
    FIN["💰 FINANCE<br/>Invoices · Reconciliation · Forecasting · Auditing"]
    CS["🎧 CUSTOMER SUCCESS<br/>Triage · Responses · Sentiment · Churn"]
    ENG["🛠️ ENGINEERING<br/>Code Review · Bug Triage · Docs · Testing"]
    INT["🔌 YOUR EXISTING TOOLS<br/>CRM · ERP · Email · Slack · Cloud Storage"]
    SEC["🛡️ SECURITY: Encryption · RBAC · Audit Trails · Data Residency"]
    DASH["📊 EXECUTIVE DASHBOARD: ROI · Agent Performance · Cost Savings · Security Logs"]
    DH -->|"Strategic Direction"| ORC
    ORC -->|"Tasks & Context"| MKT
    ORC -->|"Tasks & Context"| OPS
    ORC -->|"Tasks & Context"| FIN
    ORC -->|"Tasks & Context"| CS
    ORC -->|"Tasks & Context"| ENG
    MKT --> INT
    OPS --> INT
    FIN --> INT
    CS --> INT
    ENG --> INT
    style DH fill:#1a1a2e,stroke:#AB7522,stroke-width:3px,color:#fff
    style ORC fill:#16213e,stroke:#AB7522,stroke-width:2px,color:#fff
    style MKT fill:#0f3460,stroke:#533483,color:#fff
    style OPS fill:#0f3460,stroke:#533483,color:#fff
    style FIN fill:#0f3460,stroke:#533483,color:#fff
    style CS fill:#0f3460,stroke:#533483,color:#fff
    style ENG fill:#0f3460,stroke:#533483,color:#fff
    style INT fill:#16213e,stroke:#0f3460,color:#fff
    style SEC fill:#900000,stroke:#fff,stroke-width:2px,color:#fff
    style DASH fill:#533483,stroke:#fff,color:#fff
The Hidden Problem

Why AI Gets Worse the Longer
It Runs

Every AI model has a fixed working memory - its context window. Give it a long enough task and something critical always gets pushed out. Early instructions vanish. Decisions contradict each other. The model starts filling gaps with guesses instead of facts.

This isn't a bug you can patch. It's a physics constraint. The question is whether your AI architecture accounts for it - or ignores it until production breaks.

Traditional Approach

One Long Session ยท Everything In Memory

Task 1
✓ Correct
Task 2
✓ Mostly correct
Task 3
⚠ Drifting from spec
Task 4
✗ Contradicts Task 1
Task 5
✗ Original context lost
Task 6
✗ Compounding errors
Working memory 97% full - critical context pushed out

The model doesn't know what it's forgotten. It fills gaps with plausible-sounding guesses.

Our Architecture

Ingress Guardrails ยท Isolated Tasks ยท Validation-Before-Done ยท Persistent Memory

Task 1
✓ Validated → committed
Task 2
✓ Validated → committed
Task 3
✓ Validated → committed
Task 4
✓ Validated → committed
Task 5
✓ Validated → committed
Task 6
✓ Validated → committed
Working memory per task: always fresh - reloaded from persistent memory

Each iteration starts clean. What was learned is written down. What matters is loaded back in.
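That loop can be sketched in a few lines of Python. This is a toy illustration of the pattern, not Foresight's implementation: the memory file name, the task runner, and the validator are all stand-ins.

```python
import json
from pathlib import Path

MEMORY = Path("memory.json")  # hypothetical persistent-memory store

def load_memory() -> dict:
    """Reload durable context at the start of each task; never rely on in-session recall."""
    return json.loads(MEMORY.read_text()) if MEMORY.exists() else {"lessons": []}

def run_task(name: str, memory: dict) -> dict:
    """Stand-in for an isolated agent run: fresh context in, one atomic result out."""
    return {"task": name, "used_context": list(memory["lessons"])}

def validate(result: dict) -> bool:
    """Stand-in for validation-before-done; a real check would demand evidence."""
    return "task" in result

MEMORY.unlink(missing_ok=True)  # start the demo from a clean slate

for name in ["task-1", "task-2", "task-3"]:
    memory = load_memory()            # each iteration starts clean...
    result = run_task(name, memory)   # ...in its own isolated run
    if not validate(result):          # nothing commits without validation
        raise RuntimeError(f"{name} failed validation; not committed")
    memory["lessons"].append(f"{name}: done")  # write down what was learned
    MEMORY.write_text(json.dumps(memory))      # persist it for the next task
```

The point of the sketch is the shape, not the contents: context enters through a durable store, not through an ever-growing session, so task 40 starts as fresh as task 1.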

The Six Mechanisms That Make This Work

🚧

Ingress Guardrails

Risky actions and oversized requests are blocked before they reach an agent - at the control-plane layer, not the prompt layer. The system defaults to deny, returns structured reroute guidance, and fails closed. No hoping the model follows instructions.

⚡

Task Isolation

Each agent handles one atomic, well-defined task - not an open-ended session. Isolated runs execute in their own context. Fresh every time. No accumulated drift.

🔒

Validation-Before-Done

Nothing can be marked complete without validation evidence. An agent can't self-certify its own work - the system requires proof of success before closing the loop. Bad output gets caught at the task level, not after 40 tasks of compounding errors.

🧠

Persistent Memory

Agents don't rely on in-session recall. Decisions, context, and lessons are written to structured memory files - reloaded fresh at the start of each task.

🎯

Specialized Routing

The orchestrator routes to the right specialist. No single agent carries the full context of every function - each expert knows its domain deeply.

🧭

Governed Approval Flow

Critical actions enter a durable approval queue visible in the dashboard. Approved items dispatch through the real execution path with scoped tokens and expiry. Every item carries its own transition history - an audit trail per work item, not just a global log.
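Of the six, the ingress guardrail is the easiest to show concretely. A minimal default-deny sketch in Python — the size cap, action names, and reroute strings are invented for the example:

```python
from dataclasses import dataclass
from typing import Optional

MAX_REQUEST_CHARS = 20_000                       # illustrative size cap
ALLOWED_ACTIONS = {"query", "draft", "analyze"}  # explicit allow-list; everything else is denied

@dataclass
class Decision:
    allowed: bool
    reason: str
    reroute: Optional[str] = None  # structured guidance returned to the caller

def ingress_check(action: str, payload: str) -> Decision:
    """Control-plane gate: runs before any agent sees the request, and fails closed."""
    if len(payload) > MAX_REQUEST_CHARS:
        return Decision(False, "oversized request", reroute="split into smaller tasks")
    if action not in ALLOWED_ACTIONS:
        return Decision(False, f"'{action}' is not on the allow-list",
                        reroute="submit for human approval")
    return Decision(True, "permitted")

print(ingress_check("draft", "write the weekly summary").allowed)  # True
print(ingress_check("deploy", "push hotfix to prod").allowed)      # False: default deny
```

Note that the allow-list is the whole mechanism: anything not explicitly permitted is denied before a model is ever invoked, so there is no prompt for an attacker to override.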

Security Architecture

The Rule We Never Break

Don't give an AI permission and then tell it not to use it.
Remove the permission.

Most AI deployments rely on prompt-level guardrails: "Don't send emails without approval." "Don't delete records." "Don't access production data." That's a policy document, not a security boundary. Prompt instructions can be overridden by injection attacks, ignored during hallucinations, or simply forgotten when context windows fill up. We architect differently.

How Most People Do It
✗

Email Agent

Given full email access. System prompt says "always draft, never send without approval." A prompt injection or context overflow - and the guardrail vanishes.

✗

Database Agent

Given read-write credentials. System prompt says "only run SELECT queries." One hallucinated DROP TABLE and your weekend is ruined.

✗

Code Agent

Given production deploy keys. System prompt says "only push to staging." Model decides staging IS production. Nobody catches it until Monday.

The security boundary is a sentence in a prompt. That's not a boundary - it's a suggestion.

How We Architect It
✓

Email Agent

Can draft to a staging queue. Cannot send. The API credential literally doesn't have send permission. No prompt injection can escalate what the key doesn't allow.

✓

Database Agent

Runs on a read-only replica. The connection string points to a follower. Write queries return permission errors at the database level. The agent can't even try.

✓

Code Agent

Creates pull requests. Cannot merge or deploy. CI runs tests automatically. A human reviews and merges. Production deploy is a separate credential the agent never touches.

The security boundary is the credential itself. No prompt, no hallucination, no injection can override what the system doesn't allow.
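The database case is easy to demonstrate with SQLite, which supports read-only connections natively, so the boundary can live in the connection string itself. A sketch (the file path and schema are invented for the demo):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "crm.db")  # illustrative database

# The owner provisions data over a normal read-write connection.
admin = sqlite3.connect(db_path)
admin.execute("CREATE TABLE customers (name TEXT)")
admin.execute("INSERT INTO customers VALUES ('Acme')")
admin.commit()
admin.close()

# The agent is only ever handed a read-only connection string.
agent = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
print(agent.execute("SELECT name FROM customers").fetchall())  # reads succeed: [('Acme',)]

try:
    agent.execute("DROP TABLE customers")  # a hallucinated destructive query...
except sqlite3.OperationalError as err:
    # ...is refused by the database engine itself, not by a prompt instruction
    print("blocked at the credential level:", err)
```

The same idea scales up: in Postgres it is a connection to a read-only replica; in AWS it is an IAM policy that simply omits the write actions.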

This Isn't New. It's Ignored.

The Principle of Least Privilege has been standard in enterprise security for 20 years. Every IAM policy at AWS, every RBAC system, every zero-trust architecture is built on the same idea: give each actor the minimum permissions required for its job. Nothing more.

What's remarkable isn't the principle - it's that almost nobody applies it to AI agents. They hand the model a God Mode API key and hope the system prompt holds. Then they're shocked when it doesn't.

We don't hope. We constrain.

The Full Security Stack

🔐

End-to-End Encryption

AES-256 at rest, TLS 1.3 in transit. Your data is encrypted everywhere it lives and everywhere it moves.

🛡️

Least-Privilege Credentials

Every agent gets the minimum permissions for its job. Email agents can't send. Database agents can't write. Code agents can't deploy. Enforced at the credential level, not the prompt level.

📋

Immutable Audit Trail

Every agent action logged with timestamp, input, output, and decision rationale. Logs cannot be modified or deleted. Full replay capability.

🚫

No Model Training

Your data is never used to train foundation models. Ever. Your business intelligence stays yours.

📍

Data Residency

Choose where your data lives - US cloud, EU cloud, or on-premises. Customer-managed encryption keys available.

✅

SOC 2 Aligned

Controls mapped to SOC 2 Type II. GDPR/CCPA ready. HIPAA BAA available. Monthly security posture reports.

"The biggest risk in AI isn't that it doesn't work. It's that it works - and someone deploys it without thinking about who has access to what. We design the blast radius before we write the first agent prompt."

Human-in-the-Loop: The Governed Control Plane

The security stack above handles data protection and access control. But there's a harder problem: what happens when an AI agent needs to do something dangerous?

Not malicious. Dangerous. Deploy code. Send an email to a client. Modify a production database. Run a migration. These are legitimate actions that your AI workforce will need to perform - and the question is whether your architecture handles them with a prayer or a protocol.

Most AI operations tools handle this with prompt-level instructions: "Don't deploy without asking." That's the equivalent of putting a "Please Don't Steal" sign on an unlocked door. It works until it doesn't - and in AI, "doesn't" means a hallucination, a context window overflow, or a prompt injection attack that overwrites the instruction entirely.

We built a different answer: a governed control plane with three enforcement layers. Ingress guardrails that block risky requests before they reach an agent. A durable approval queue where anything that requires action sits until a human explicitly approves it. And scoped execution tokens that auto-expire after the approved action completes. This isn't a prompt instruction - it's architecture.

The sudo Model for AI Agents

If you've administered a Linux server, you already understand this architecture. A regular user can read files, run processes, and navigate the system. But anything that modifies system state - installing packages, editing configs, restarting services - requires sudo. The user doesn't have root access. They request escalation, authenticate, and the system logs every elevated action to /var/log/auth.log.

// Foresight HITL Architecture
┌─────────────────────────────────────────────────┐
│  AI AGENT (read-only service account)           │
│  Can: query, analyze, draft, recommend          │
│  Cannot: write, deploy, send, modify            │
│                                                 │
│  Agent identifies action needed:                │
│  "Deploy hotfix to production server"           │
│                                                 │
│  ┌───────────────────────────────────────────┐  │
│  │  ESCALATION REQUEST                       │  │
│  │  Action: deploy commit abc123             │  │
│  │  Target: prod-web-01                      │  │
│  │  Reason: fix null pointer in /api/v1/...  │  │
│  │  Risk: medium (production deployment)     │  │
│  │  Requested: 2026-03-05 14:32:07 CST       │  │
│  └───────────────────────────────────────────┘  │
│                    │                            │
│                    ▼                            │
│  ┌───────────────────────────────────────────┐  │
│  │  HUMAN OPERATOR                           │  │
│  │  Reviews request → Issues OTP             │  │
│  │  OTP: a8f3k2 (expires: 14:47:07 CST)      │  │
│  │  Scope: deploy commit abc123 to prod only │  │
│  └───────────────────────────────────────────┘  │
│                    │                            │
│                    ▼                            │
│  ┌───────────────────────────────────────────┐  │
│  │  ELEVATED EXECUTION                       │  │
│  │  Agent executes with OTP credential       │  │
│  │  Action logged to audit trail             │  │
│  │  OTP consumed → access revoked            │  │
│  └───────────────────────────────────────────┘  │
│                                                 │
│  AUDIT LOG ENTRY:                               │
│  [2026-03-05 14:32:41] agent=soc-deploy         │
│  action=deploy scope=prod-web-01                │
│  approver=nathan.rone otp=a8f3k2                │
│  result=success duration=34s                    │
│  auto_revoke=14:47:07                           │
└─────────────────────────────────────────────────┘
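Reduced to code, the flow in the diagram is a one-time token with a scope and an expiry, consumed on use. A simplified in-memory sketch (a real system would persist the queue, authenticate the approver, and write the audit log; the class and method names here are illustrative):

```python
import secrets
import time

class ApprovalQueue:
    """Durable queue in spirit; an in-memory dict for the sketch."""

    def __init__(self):
        self.tokens = {}

    def approve(self, scope: str, ttl_s: int = 900) -> str:
        """Human approver issues a one-time, scoped, expiring credential."""
        otp = secrets.token_hex(3)
        self.tokens[otp] = {"scope": scope, "expires": time.time() + ttl_s}
        return otp

    def execute(self, otp: str, action: str) -> str:
        grant = self.tokens.pop(otp, None)    # consumed on first use
        if grant is None:
            return "denied: no such token"    # never approved, or already used
        if time.time() > grant["expires"]:
            return "denied: token expired"    # auto-expiry: no lingering access
        if action != grant["scope"]:
            return "denied: out of scope"     # approval was for one specific action
        return f"executed: {action}"          # real dispatch + audit write go here

queue = ApprovalQueue()
otp = queue.approve("deploy abc123 to prod-web-01")
print(queue.execute(otp, "deploy abc123 to prod-web-01"))  # executed
print(queue.execute(otp, "deploy abc123 to prod-web-01"))  # "denied: no such token" - consumed
```

Notice there is no branch in which an unapproved action runs: either a human issued a matching, unexpired token, or the call returns a denial.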

Amazon's Operational Break-Glass

This pattern has a name at Amazon: Operational Break-Glass. When an engineer needs elevated access to a production system - to fix an outage, roll back a deployment, or access sensitive data - they don't just SSH in with root. They:

  1. Open a ticket describing what they need to do and why
  2. Get approval from an on-call manager
  3. Receive time-bounded elevated credentials
  4. Execute the action - logged and auditable
  5. Credentials auto-expire

The system fires alerts. The action is recorded. Post-incident review covers every break-glass event. This is how you operate production systems at scale without hoping that people follow the rules. You build the rules into the credential system.

Foresight implements the same pattern for AI agents. The agent is the engineer. The operator is the on-call manager. The OTP is the break-glass credential. The audit log captures everything.

Why This Matters for SOC 2

SOC 2 Type II auditors evaluate whether your controls operate effectively over time. The single hardest control to demonstrate for AI systems is authorization - proving that a human authorized each significant action.

With most AI tools, the best you can offer is: "We told the AI not to do things without asking." That's a policy control, not a technical control. Auditors know the difference.

With our governed control plane, every elevated action has:

  • A human authorization event with a named approver
  • A scoped execution token proving the approval was for this specific action
  • A durable per-item transition history - not just a log entry, but a full state machine from request through validation
  • Validation evidence required before the item can be marked complete
  • Auto-expiry on the execution token - no lingering elevated access

That's a technical control. That's what passes audits.

The Attack Surface Most People Miss

The conversation about AI security usually focuses on data: "Is my data encrypted? Does the model train on my data? Where is my data stored?" Those are real concerns - and we handle all of them.

But the attack surface that actually keeps security teams up at night is action authority. An AI agent with write access to your production database doesn't need to leak your data to cause damage. It just needs to run the wrong query. And if your security model is "we told it not to," you're one hallucination away from an incident with no audit trail.

HITL approval gates eliminate this category of risk. The agent cannot run the query without an OTP. The OTP requires a human. The human requires context. The context is logged. If the query runs, someone approved it. If no one approved it, the query doesn't run. There is no third state.

This runs in production. Today. Nathan's own AI team - the agents operating his infrastructure - run under this exact control plane. Ingress guardrails, durable approval queues, scoped execution tokens, validation-before-done, per-item audit trails. Same architecture across demo and production surfaces. Same governed behavior everywhere. We didn't design this in a meeting - we built it because we needed it at 2am when an agent wanted to push a hotfix.

What's Live

✓ Ingress guardrails: fail-closed blocking of risky and oversized requests before agent execution

✓ Durable approval queue: structured request flows with dashboard visibility across all surfaces

✓ Scoped execution tokens: approve → execute flow with auto-expiry and real dispatch path

✓ Validation-before-done: items cannot close without evidence. Per-item transition history and audit trail

What's Coming

→ Role-based approver tiers (RBAC): different approval chains for different risk levels and environments

→ Webhook integrations: ServiceNow, Jira, PagerDuty - approval requests route through your existing enterprise workflows

→ Compliance export layer: external reporting and audit export for regulatory requirements

→ Approval delegation: designate backup approvers for after-hours escalations with configurable chains

Question 1 of 2 - Infrastructure

Where Does Your
AI Actually Run?

From a laptop on your desk to redundant iron across multiple data centers - every AI deployment lives somewhere. The spectrum is wider than most people realize, and where you land changes everything about cost, control, and capability.

โ† Lower Cost ยท More DIY More Scale ยท More Managed โ†’
Laptop Cloud Private HPC

How We Evaluate the Right Infrastructure For You

Four questions that drive the decision:

1

Data Sensitivity

Is your data regulated? PHI, PII, financial records, legal? The answer immediately narrows the field - some compliance frameworks require you to own the hardware.

2

Task Volume & Cost Curve

Low-volume, variable? Cloud. High-volume, predictable steady-state? Owned compute amortizes. We run the math before you spend a dollar.
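"Run the math" means a break-even comparison. A toy version of it, with invented prices and volumes (the real inputs come from your vendor quotes and measured usage):

```python
def monthly_cloud_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Pay-per-use API spend scales linearly with volume."""
    return tokens_per_month / 1_000_000 * price_per_million

def monthly_owned_cost(hardware_cost: float, amortize_months: int, power_and_ops: float) -> float:
    """Owned compute is roughly flat: amortized hardware plus fixed running costs."""
    return hardware_cost / amortize_months + power_and_ops

# Invented example figures - not a quote:
api = monthly_cloud_cost(tokens_per_month=2_000_000_000, price_per_million=3.00)
owned = monthly_owned_cost(hardware_cost=60_000, amortize_months=36, power_and_ops=2_000)
print(f"API: ${api:,.0f}/mo   Owned: ${owned:,.0f}/mo")  # API: $6,000/mo   Owned: $3,667/mo
```

With these made-up numbers, owned compute wins at steady-state volume; halve the token volume and the API wins. The crossover point is the whole decision, which is why the audit measures your volume before anything is recommended.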

3

Team Capacity

Do you have an infrastructure team? On-prem and colo require people to manage them. Cloud and API minimize operational overhead dramatically.

4

Timeline

Need production in 30 days? API or cloud, full stop. Have 6 months and a compliance mandate? We design the right owned-infrastructure stack from scratch.

This is covered in week one of your AI Readiness Audit. We don't push a default - we audit your situation and give you the honest answer, even if it means less work for us.

Question 2 of 2 - AI Models

Which Models Actually
Run Your AI?

Infrastructure tells you where the compute lives. Models tell you what's doing the thinking. These are two separate decisions - and any infrastructure tier can support any model approach.

The right model strategy usually isn't one of these three in isolation. It's a deliberate blend based on what each task actually requires.

🌐

Pure API

Frontier models - data goes out, answers come back

Call OpenAI, Anthropic, xAI, or Google. No hardware, instant access to the most capable models in the world.

✓ Frontier capability, zero infra, always latest version, pay-per-use

✗ Data leaves your environment, per-token costs compound, model behavior can change

Best for: any task where data sensitivity allows it

Most Common
⚡

Hybrid

Right model for the right task - local + cloud

Sensitive or high-volume tasks run on local models. Complex, creative, or judgment-heavy tasks go to frontier APIs. The orchestrator routes intelligently.

✓ Data governed per task, cost-optimized, frontier where it matters

✗ More complex to architect, requires clear data classification

Best for: most businesses once they've thought it through

🔒

Pure On-Premises

Open-source models - your hardware, your data, always

Models like Qwen, Llama, Mistral, DeepSeek run entirely on your infrastructure. Nothing leaves. The gap vs frontier APIs is narrowing fast.

✓ Absolute data sovereignty, no per-token costs, fine-tunable

✗ Still lags frontier on complex reasoning, hardware required

Best for: regulated industries, high-volume steady-state

These Two Decisions Are Independent - But They Interact

Any infrastructure tier can technically support any model approach. The combination you choose sets your cost floor, your capability ceiling, and your data risk profile all at once.

We model all three dimensions - infrastructure, model approach, and task requirements - before recommending anything. The audit is where this gets figured out right.
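The hybrid strategy reduces to a routing rule: classify each task, then send it to the cheapest tier that satisfies its constraints. A deliberately compressed sketch, with placeholder task names and only two signals where a real classifier would use many:

```python
def route(task: str, contains_sensitive_data: bool, needs_frontier_reasoning: bool) -> str:
    """Send each task to the cheapest tier that satisfies its constraints."""
    if contains_sensitive_data:
        return "local"   # sovereignty first: the data never leaves your environment
    if needs_frontier_reasoning:
        return "api"     # complex, creative, judgment-heavy work goes to frontier models
    return "local"       # routine high-volume work stays cheap on local models

print(route("summarize patient notes", contains_sensitive_data=True, needs_frontier_reasoning=True))    # local
print(route("draft investor narrative", contains_sensitive_data=False, needs_frontier_reasoning=True))  # api
print(route("tag support tickets", contains_sensitive_data=False, needs_frontier_reasoning=False))      # local
```

The ordering of the checks is the policy: sensitivity outranks capability, which is why the first example stays local even though it would benefit from a frontier model.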

Execution Architecture

The Harness Is the Product

The AI industry keeps promising that the next model will be the breakthrough. More parameters. Higher benchmarks. Bigger context windows. But in practice, the teams getting 100x results and the teams getting marginal value are using the same models.

The difference is never raw intelligence. It is architecture — the system that surrounds the model: what it knows about the operation, what rules constrain its behavior, how it routes decisions that require human judgment, and whether it remembers anything tomorrow.

A smarter model with no operating context, no doctrine, no memory, and no governance is just a faster version of the same shallow tool. It generates better text. It does not generate better outcomes.

Prompting — Resets Every Session

  • Starts from zero every conversation
  • No memory of the operation
  • No understanding of company doctrine
  • Cannot distinguish high-stakes from low-stakes
  • User provides all context every time
  • Useful for drafts, dangerous for decisions

Doctrine — Compounds Every Day

  • Carries continuity across days and weeks
  • Builds a living model of the operation
  • Encodes your thresholds and escalation rules
  • Weighs decisions by reversibility and cost
  • Assembles context before you ask
  • Designed for judgment, not just generation

Two layers that make AI trustworthy

A well-built execution system separates work into latent intelligence (where AI models excel) and deterministic execution (where reliability is non-negotiable).

Latent Layer — AI Judgment

Synthesis, judgment, ambiguity, context compression. The model reads the state of the operation and applies reasoning: Is this dependency credible? Is this meeting worth the founder’s time? What should the morning brief surface versus suppress?

Deterministic Layer — System Execution

Rules, permissions, routing, approvals, audit, scheduling. Permissions enforced. Approvals logged. Calendar changes governed. Data retention respected. This layer does not guess. It executes with precision inside boundaries the business controls.

The model provides intelligence. The system provides trust. Neither works well alone. Together they create Artificial Productivity — AI that operates in the flow of business with both judgment and accountability.
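The two-layer split can be made concrete in a few lines: the model's output is treated as a proposal, and the deterministic layer checks it against fixed rules before anything executes. Every name in this sketch is illustrative:

```python
def latent_layer(state: dict) -> dict:
    """Stand-in for model judgment: it proposes, it never executes."""
    return {"action": "decline_meeting", "target": state["meeting"]}

# Deterministic rules the business controls; the model cannot change them.
RULES = {
    "decline_meeting": {"needs_approval": False},
    "send_email": {"needs_approval": True},
}

def deterministic_layer(proposal: dict) -> str:
    """Rules, permissions, audit: precise execution inside fixed boundaries."""
    rule = RULES.get(proposal["action"])
    if rule is None:
        return "rejected: unknown action"   # no rule, no execution - this layer never guesses
    if rule["needs_approval"]:
        return "queued for human approval"
    return f"executed {proposal['action']} (logged)"

print(deterministic_layer(latent_layer({"meeting": "weekly sync"})))  # executed decline_meeting (logged)
```

The latent layer can be wrong in interesting ways; the deterministic layer can only do what its rules allow. That asymmetry is what makes the combination trustworthy.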

Doctrine over prompting

Prompting is how individuals interact with AI. Doctrine is how companies interact with AI. Every serious business already has an operating style — escalation norms, meeting expectations, decision speed, risk tolerance. Most AI tools ignore all of it.

Foresight takes the opposite approach: strong defaults first, then your doctrine on top. The system starts opinionated about execution quality, then adapts to your language, thresholds, and operating rules. Skills and judgment compound across every interaction, every day, every team member.

On day one, Foresight applies strong defaults. By month two, it knows the shape of your business well enough that the morning brief anticipates problems before you notice them. That is not a smarter model. That is a system that learns your doctrine and applies it consistently.

Why this compounds

Week 1

Strong defaults. Morning brief surfaces priorities. Bad meetings flagged. Dependency claims challenged. Closeout captures what happened.

Month 1

System knows your meeting patterns, escalation tendencies, stale-task thresholds. Morning brief is noticeably sharper. Carry-forward is reliable.

Month 3

Doctrine calibrated. Team internalized better escalation habits. Meetings improved. Decision speed increased because framing is consistent.

Month 6+

Company worldview and customer worldview are rich enough that Foresight anticipates problems before they surface in status meetings.

Every Company Will Have an AI Department.
Will Yours Be Built Right?

Five questions. I'll respond within 48 hours if it's a fit.

Not a chatbot. Not a funnel. I read every one.