The question is no longer whether AI can do the work.
The question is under what control model it is allowed to.

AI can already extract, classify, analyze, draft, and review real business documents. The capability exists today. What most firms still lack is a safe, governed architecture to use it — one where sensitive work stays private, routing is policy-controlled, approvals are real objects, and every meaningful action leaves evidence.

Same workflow. Same dashboard. Same team experience. Different execution boundary underneath.


Market Context

The governance gap is not theoretical. It is measured.

Deloitte's State of AI in the Enterprise 2026 surveyed 3,235 enterprise leaders across industries. Three findings stand out.

1 in 5

Have mature agent governance

80% of companies deploying AI agents have no mature governance model in place. Agent capabilities are outpacing guardrails.

42%

Strategically ready, operationally not

Leaders believe their AI strategy is sound — but report gaps in infrastructure, data governance, risk management, and talent to execute it.

#1

Sovereignty drives vendor selection

Where AI runs — and who controls the data — is now a top factor in vendor decisions. Not performance. Not price. Sovereignty.

Foresight exists because these gaps are real. The architecture below is how we close them.
Read the full Deloitte analysis →


The Real Problem

AI capability is not the bottleneck anymore.

Every week, another demonstration proves that AI can do meaningful office work — bookkeeping classification, document extraction, variance analysis, workpaper prep, issue spotting, internal memo drafting. The use cases are real.

But for firms that handle sensitive client data — accounting firms, law offices, medical practices, diligence teams — the useful question has changed.

It is no longer: Can AI do this?

It is now: where does client data go when AI touches it, who approved an output before it left the firm, and can the firm show, months later, exactly what ran where and why?

Most AI tools today cannot answer these questions cleanly. They route work to the best available model, hope for good outcomes, and leave firms to figure out governance on their own.

That is not a deployment model. That is a liability.


Architecture

Same Foresight experience. Different execution boundary.

Foresight does not ask firms to choose between AI capability and operational discipline. The product experience remains the same across all deployment modes. What changes is the governed substrate underneath.

Same interface

Chat, dashboard, task management, approval workflows, and team coordination — identical across every execution mode.

Different boundary

Where models run, which capabilities are exposed, what routing is permitted, and what controls are enforced — governed by deterministic policy.

No retraining

A firm can adopt Foresight today in the mode that matches their risk posture. The team never needs to learn a different product.


Execution Modes

Four modes. One experience. You choose the boundary.

Each execution mode defines a different relationship between your data, your models, and your governance requirements. All four share the same Foresight workflow — what changes is the governed substrate underneath.

Mode 1

Managed Cloud

The fastest path to Foresight, using the best available hosted model stack. Strong fit for teams working with internal, non-sensitive, or public-facing content where hosted model access is acceptable.

  • Broadest model access and convenience
  • Standard Foresight workflow and coordination
  • Audit logging for platform actions

Best for: teams with primarily non-sensitive workflows, early adoption, evaluation.

Mode 2

Private Local

Everything runs on infrastructure the firm controls: on-prem hardware, a dedicated server, or the firm's own private cloud (AWS, Azure, GCP). Only local and self-hosted models are used. No client data ever leaves the firm's environment.

  • Strictest data boundary: nothing external, no vendor dependency to explain
  • Local / self-hosted models exclusively
  • Cleanest and simplest audit story
  • Standard Foresight workflow and coordination

Best for: firms with the strictest boundary requirements, highly sensitive or regulated workloads.

Mode 3

Governed Enterprise API

For firms whose security and vendor posture allows use of enterprise-grade external AI providers — under approved contractual terms, data processing agreements, retention controls, and compliance attestations.

  • Frontier-quality reasoning from enterprise-grade providers
  • Provider-level safeguards: no training on your data, retention controls, SOC 2 / ISO attestations available
  • Data processing agreements and vendor review documentation
  • Foresight policy engine still governs routing, approvals, and evidence
  • Provider-agnostic — firm selects approved providers

Best for: firms comfortable with approved external processing under enterprise terms, where vendor review and contractual protections satisfy compliance requirements.

Mode 4

Hybrid Governed

Raw sensitive work stays private by default — on your own infrastructure (on-prem, dedicated, or private cloud). Approved higher-order tasks can use governed enterprise-grade external reasoning, but only on derived, sanitized fact packs — never on raw source documents.

  • Everything in Private Local for raw sensitive data
  • Governed external reasoning for approved derived tasks only
  • Sanitization boundary: raw source files never leave the private lane
  • Policy-controlled routing with audit evidence
  • Approval workflows for any external transitions

Best for: firms that want both the strongest privacy boundary on source materials and frontier reasoning quality on approved, sanitized derivative work.
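The sanitization boundary can be pictured as a hard gate on what may enter the external lane. Below is a minimal sketch, not Foresight's real schema: the `fact_pack` kind and field names are illustrative assumptions.

```python
def external_payload(artifact: dict) -> dict:
    """Gate for the hybrid governed lane: only sanitized, derived fact packs
    may cross. Raw source artifacts are rejected outright.
    Illustrative sketch only; field names are hypothetical."""
    if artifact.get("kind") != "fact_pack" or not artifact.get("sanitized"):
        raise PermissionError("raw source material may not leave the private lane")
    # Only the derived facts cross, never the source document itself.
    return {"facts": artifact["facts"]}
```

The design point is that rejection is structural: a raw document cannot be routed externally by accident, because the gate refuses anything that is not a sanitized derivative.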


The AI may recommend.
The platform must enforce.


Trust Model

Why prompt-only trust is not enough.

Most AI tools today rely on one trust mechanism: the prompt. They instruct the model to be careful, to avoid sensitive topics, to ask before acting.

That is not governance. That is a suggestion.

Prompt-based controls are useful for shaping tone and behavior. They are not sufficient for enforcing data boundaries, routing restrictions, or approval requirements. A model that is instructed to "never send raw financials externally" can still be tricked, confused, or simply wrong. There is no enforcement layer beneath the request.

Foresight takes a different approach. The AI model can suggest a task classification, propose an execution path, or draft a deliverable. But the allowed execution boundary is computed by a deterministic policy engine outside the model. The model does not choose its own permissions. The platform computes what is allowed and exposes only those capabilities.

This is the same philosophy as protected branches in version control or scoped permissions in access management: the system does not rely on the actor's good judgment alone. The boundary exists whether the actor respects it or not.


Control Architecture

How Foresight enforces the boundary.

Foresight governance is not a single feature. It is a layered control architecture — from intake to evidence.

1

Intake

Work enters Foresight — a document, a request, a task.

2

Classification

Data sensitivity, task type, output destination — structured dimensions.

3

Policy Engine

Deterministic logic computes the allowed execution envelope.

4

Lane Selection

Work routes to Local Private, Premium Governed, or Review Required.

5

Execution

Only allowed capabilities are exposed. The model cannot exceed the boundary.

6

Evidence

Every decision produces a linked, structured audit record.

Policy Engine

The policy engine is deterministic control logic — not another AI model making judgment calls. Given the data sensitivity, task type, output destination, execution mode, and tenant configuration, the engine returns a bounded decision: which lane is allowed, which models can be used, which capabilities are exposed, whether approval or sanitization is required, or whether the request must be denied.

The policy engine does not offer suggestions. It computes the allowed envelope, and the downstream system cannot exceed it.
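The shape of such a decision function can be sketched in a few lines. This is an illustrative sketch, not Foresight's actual implementation; the lane names, sensitivity labels, and `PolicyDecision` fields are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyDecision:
    lane: str                # allowed execution lane
    capabilities: frozenset  # only these are exposed downstream
    approval_required: bool
    sanitization_required: bool

def decide(sensitivity: str, destination: str, mode: str,
           sanitized: bool) -> PolicyDecision:
    """Deterministic: the same inputs always produce the same envelope."""
    # Raw sensitive material never leaves the private lane.
    if sensitivity == "client_sensitive" and not sanitized:
        return PolicyDecision(
            lane="local_private",
            capabilities=frozenset({"extract", "analyze", "draft_internal"}),
            approval_required=(destination == "client_facing"),
            sanitization_required=False,
        )
    # Sanitized derivatives may use the governed external lane in hybrid mode.
    if mode == "hybrid_governed" and sanitized:
        return PolicyDecision(
            lane="premium_governed",
            capabilities=frozenset({"reason", "draft_internal"}),
            approval_required=True,
            sanitization_required=True,
        )
    # Default posture: the more restrictive path.
    return PolicyDecision(
        lane="review_required",
        capabilities=frozenset(),
        approval_required=True,
        sanitization_required=False,
    )
```

Because the function is pure, identical classifications always yield identical envelopes, and the decision itself can be serialized straight into the evidence record.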

Execution Lanes

The orchestrator sees only the capabilities allowed for that lane. If a capability is not exposed, it cannot be invoked — not because the model decided to be careful, but because the platform did not make it available.
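One way to make "not exposed means not invocable" concrete: the orchestrator is handed a filtered toolset, so a disallowed capability is absent rather than refused. A minimal sketch, with hypothetical capability names:

```python
# Hypothetical capability registry; names are illustrative.
CAPABILITIES = {
    "extract": lambda doc: f"fields extracted from {doc}",
    "analyze": lambda doc: f"variances found in {doc}",
    "send_external": lambda doc: f"{doc} sent to external provider",
}

def build_toolset(allowed: frozenset) -> dict:
    """The orchestrator receives only the allowed subset.
    A capability that is not exposed cannot be invoked at all."""
    return {name: fn for name, fn in CAPABILITIES.items() if name in allowed}

tools = build_toolset(frozenset({"extract", "analyze"}))
# "send_external" is not refused; it simply does not exist here.
assert "send_external" not in tools
```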

Approvals as First-Class Objects

When policy requires approval, Foresight creates a structured, scoped, time-stamped, attributable approval record tied to a specific action on a specific artifact. Not a chat message that looks like agreement.
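As a data shape, a first-class approval might look like the following sketch; the field names are hypothetical, not Foresight's real schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ApprovalRecord:
    """Scoped, time-stamped, attributable: one action on one artifact.
    Illustrative shape only."""
    action: str        # the specific action approved, e.g. "release_draft"
    artifact_id: str   # the specific artifact it applies to
    approver: str      # who approved (attributable)
    scope: str         # what the approval covers, and nothing more
    approved_at: str   # UTC timestamp

def approve(action: str, artifact_id: str, approver: str, scope: str) -> ApprovalRecord:
    return ApprovalRecord(action, artifact_id, approver, scope,
                          datetime.now(timezone.utc).isoformat())
```

The contrast with a chat message is the point: every field is structured and queryable after the fact.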

Evidence Layer

Every meaningful decision produces a structured evidence record: policy decisions, routing choices, approval events, and artifact lineage. This is not a raw activity log. It is structured, linked evidence designed to answer specific questions after the fact.
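"Structured and linked" can be as simple as each record carrying a pointer to the decision that preceded it. A sketch with hypothetical record kinds:

```python
import uuid
from typing import Optional

def evidence_record(kind: str, detail: dict,
                    parent_id: Optional[str] = None) -> dict:
    """One evidence record: linked, not just logged. Illustrative only."""
    return {
        "id": str(uuid.uuid4()),
        "kind": kind,          # e.g. policy_decision, routing, approval, release
        "detail": detail,
        "parent": parent_id,   # pointer to the preceding record in the chain
    }

# Routing evidence links back to the policy decision that caused it.
policy = evidence_record("policy_decision", {"lane": "local_private"})
routing = evidence_record("routing", {"model": "local-extractor"},
                          parent_id=policy["id"])
assert routing["parent"] == policy["id"]
```

Walking the parent links reconstructs the full chain from release back to intake.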


Comparison

Choosing the right mode for your firm.

All four modes share the same Foresight experience. The difference is where work runs, what leaves the firm's boundary, and what governance applies.

| | Managed Cloud | Private Local | Governed Enterprise API | Hybrid Governed |
| --- | --- | --- | --- | --- |
| Same Foresight experience | ✓ | ✓ | ✓ | ✓ |
| Data leaves firm's environment | Yes | Never — on-prem, dedicated, or private cloud | Under enterprise terms | Only approved derived outputs |
| Local / self-hosted models | — | ✓ Exclusively | — | ✓ For raw sensitive work |
| Enterprise-grade external reasoning | Standard hosted | — | ✓ With vendor controls | ✓ Governed, derived inputs only |
| Provider-level vendor controls (DPA, SOC 2, retention) | Limited | N/A — nothing external | ✓ Full enterprise terms | ✓ For governed external lane |
| Approval-gated sensitive actions | Optional | ✓ | ✓ | ✓ |
| Audit story | Standard | Cleanest and simplest | Strong — vendor-backed | Strong — policy-backed with routing evidence |
| Best for | Non-sensitive workflows | Strictest boundary firms | Firms that approve enterprise vendors | Mixed workloads needing both |

Start with Private Local if the priority is the absolute cleanest boundary — nothing external, strongest audit simplicity, no vendor dependency to explain. Deploy on your own hardware, a dedicated server, or your own cloud infrastructure (AWS, Azure, GCP) — the boundary is the same.

Consider Governed Enterprise API if the firm's vendor and security review process can approve enterprise-grade external providers under contractual terms, retention controls, and compliance attestations — and the firm wants frontier reasoning quality without maintaining local hardware.

Consider Hybrid Governed when the firm has mixed workloads — some that need to stay strictly private, others where governed external reasoning on approved, sanitized derivatives would materially improve strategic or advisory output.

Private Local is not a stepping stone. It is a complete, production-grade execution mode. The other modes expand capability under additional governance; they do not correct a weakness. Your firm makes the call.


Safe Defaults

When the system is uncertain, it becomes more restrictive — not less.

Low classification confidence

The system promotes to the stricter sensitivity class. Ambiguity does not widen permissions.

Sensitivity boundary crossed

The system requires review before proceeding. It does not route optimistically.

Sanitization unconfirmed

The premium governed lane is not exposed. Work stays private until conditions are met.

Client-facing output

Release is gated by review and, where policy requires, by explicit approval. Drafting may proceed internally.

Approval not yet satisfied

Execution pauses. The system creates an approval request and waits. It does not substitute inference for authorization.

Default posture

Always the more restrictive path. Governance that only works when everything goes right is not governance.
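The first of these defaults, promoting on low confidence, fits in a few lines. A sketch under stated assumptions: the sensitivity labels, ranking, and threshold are illustrative, not Foresight's actual values.

```python
# Illustrative ordering; higher rank means stricter handling.
SENSITIVITY_RANK = {"public": 0, "internal": 1, "client_sensitive": 2}

def effective_sensitivity(label: str, confidence: float,
                          threshold: float = 0.9) -> str:
    """Low classification confidence promotes to the strictest class.
    Ambiguity narrows permissions; it never widens them."""
    if confidence >= threshold:
        return label
    return max(SENSITIVITY_RANK, key=SENSITIVITY_RANK.get)

assert effective_sensitivity("internal", 0.95) == "internal"
assert effective_sensitivity("internal", 0.60) == "client_sensitive"
```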


Auditability

A system is only as auditable as the questions it can answer after the fact.

Foresight's evidence model is designed to answer specific questions months later: why did this task route to this lane, which model processed this document and under what policy decision, who approved this release and what exactly did the approval cover, and which inputs produced this deliverable.

This is not logging for the sake of logging. It is structured evidence designed to make governance reconstructable and defensible.


Example Workflow

CPA financial review — governed, step by step.

A CPA uploads financial statements and asks: "Review these statements, flag unusual variances, and draft questions for the client."

Step 1

Intake and classification

Foresight detects the uploaded documents, classifies them as financial/client-sensitive, and identifies the task as analysis with a client-facing drafting component.

Step 2

Policy decision

The policy engine evaluates data sensitivity, task type, and output destination against the firm's execution mode. All work routes to the local private lane. Client-facing output marked as review-required.

Step 3

Local analysis

Local models extract data from statements, identify variances, flag anomalies, and produce structured findings — all within the private lane.

Step 4

Internal draft

Foresight generates an internal draft of client questions based on the analysis. The draft is tagged as review-required because the output class is client-facing.

Step 5

Review gate

The draft cannot be sent or released until a qualified reviewer approves it. Foresight creates an approval object and holds the deliverable.

Step 6

Strategic synthesis

Hybrid Governed / Governed Enterprise API

If the partner later asks for a strategic advisory memo, Foresight may allow enterprise-grade external reasoning — but only on a derived, sanitized fact pack built from local analysis. Raw financial statements do not enter the external lane. The routing decision is policy-gated and produces audit evidence.

Step 7

Evidence

Every step — classification, policy decision, lane selection, model usage, draft generation, review hold, approval, and release — produces a linked evidence record.


Boundaries

What Foresight is not.

Credibility requires honesty about boundaries.

Not autonomous sign-off authority.

AI assists with analysis, drafting, and coordination. Final sign-off on client-facing work, official filings, or regulated deliverables remains a human responsibility. Foresight enforces the review gate — it does not replace the reviewer.

Not unrestricted external routing.

In Private Local, sensitive data does not leave the private environment. In Governed Enterprise API, external processing happens under enterprise contractual terms with vendor-level safeguards. In Hybrid Governed, external routing is policy-gated and limited to approved derived inputs. There is no mode where raw sensitive files are casually sent to uncontrolled external models.

Not a claim of universal regulatory certification.

Foresight provides an architecture that makes strong compliance posture achievable. Specific regulatory compliance depends on the firm's deployment, configuration, policies, and jurisdiction. We do not substitute an architecture claim for legal counsel.

Not prompt-only governance.

The control boundary is enforced by deterministic policy logic, not by instructing the model to be careful. If the boundary depended on the model's cooperation, it would not be a boundary.

Not a generic chatbot in professional clothing.

It is a governed execution system with structured policy controls, first-class approval objects, bounded capability exposure, and linked audit evidence. The architecture is the product.


See what governed AI execution looks like for your firm.

Whether you handle financial records, legal documents, medical files, or sensitive diligence materials — Foresight gives your team AI-powered workflows inside boundaries you can actually defend.

See Foresight for CPA Firms Request a Walkthrough View Plans & Pricing

Request a Foresight Walkthrough

Tell us what kind of environment you operate in and what you need to govern. We’ll respond with the right deployment angle — Private Local, Hybrid Governed, or both.
