AI can already extract, classify, analyze, draft, and review real business documents. The capability exists today. What most firms still lack is a safe, governed architecture to use it — one where sensitive work stays private, routing is policy-controlled, approvals are real objects, and every meaningful action leaves evidence.
Same workflow. Same dashboard. Same team experience. Different execution boundary underneath.
Deloitte's State of AI in the Enterprise 2026 surveyed 3,235 enterprise leaders across industries. Three findings stand out.
80% of companies deploying AI agents have no mature governance model in place. Agent capabilities are outpacing guardrails.
Leaders believe their AI strategy is sound — but report gaps in infrastructure, data governance, risk management, and talent to execute it.
Where AI runs — and who controls the data — is now a top factor in vendor decisions. Not performance. Not price. Sovereignty.
Foresight exists because these gaps are real. The architecture below is how we close them.
Read the full Deloitte analysis →
Every week, another demonstration proves that AI can do meaningful office work — bookkeeping classification, document extraction, variance analysis, workpaper prep, issue spotting, internal memo drafting. The use cases are real.
But for firms that handle sensitive client data — accounting firms, law offices, medical practices, diligence teams — the useful question has changed.
It is no longer: Can AI do this?
It is now: Where does the data go? Which models are allowed to see it? Who approved each sensitive action? And can the firm prove all of this after the fact?
Most AI tools today cannot answer these questions cleanly. They route work to the best available model, hope for good outcomes, and leave firms to figure out governance on their own.
That is not a deployment model. That is a liability.
Foresight does not ask firms to choose between AI capability and operational discipline. The product experience remains the same across all deployment modes. What changes is the governed substrate underneath.
Chat, dashboard, task management, approval workflows, and team coordination — identical across every execution mode.
Where models run, which capabilities are exposed, what routing is permitted, and what controls are enforced — governed by deterministic policy.
A firm can adopt Foresight today in the mode that matches their risk posture. The team never needs to learn a different product.
Each execution mode defines a different relationship between your data, your models, and your governance requirements. All four share the same Foresight workflow — what changes is the governed substrate underneath.
The fastest path to Foresight, using the best available hosted model stack. Strong fit for teams working with internal, non-sensitive, or public-facing content where hosted model access is acceptable.
Best for: teams with primarily non-sensitive workflows, early adoption, evaluation.
Sensitive work stays inside a private environment your firm controls — on-premises hardware, a dedicated private server, or your own cloud infrastructure (AWS VPC, Azure private instance, GCP project). Runs only on local, self-hosted models. No sensitive data routes to any external AI provider. The audit story is clean because the boundary is clean.
Best for: firms where "nothing external" is non-negotiable — strictest client contracts, highest-sensitivity engagements. Deploy on hardware you own or cloud infrastructure you control.
For firms whose security and vendor posture allows use of enterprise-grade external AI providers — under approved contractual terms, data processing agreements, retention controls, and compliance attestations.
Best for: firms comfortable with approved external processing under enterprise terms, where vendor review and contractual protections satisfy compliance requirements.
Raw sensitive work stays private by default — on your own infrastructure (on-prem, dedicated, or private cloud). Approved higher-order tasks can use governed enterprise-grade external reasoning, but only on derived, sanitized fact packs — never on raw source documents.
Best for: firms that want both the strongest privacy boundary on source materials and frontier reasoning quality on approved, sanitized derivative work.
The AI may recommend.
The platform must enforce.
Most AI tools today rely on one trust mechanism: the prompt. They instruct the model to be careful, to avoid sensitive topics, to ask before acting.
That is not governance. That is a suggestion.
Prompt-based controls are useful for shaping tone and behavior. They are not sufficient for enforcing data boundaries, routing restrictions, or approval requirements. A model that is instructed to "never send raw financials externally" can still be tricked, confused, or simply wrong. There is no enforcement layer beneath the request.
Foresight takes a different approach. The AI model can suggest a task classification, propose an execution path, or draft a deliverable. But the allowed execution boundary is computed by a deterministic policy engine outside the model. The model does not choose its own permissions. The platform computes what is allowed and exposes only those capabilities.
This is the same philosophy as protected branches in version control or scoped permissions in access management: the system does not rely on the actor's good judgment alone. The boundary exists whether the actor respects it or not.
Foresight governance is not a single feature. It is a layered control architecture — from intake to evidence.
Work enters Foresight — a document, a request, a task.
Data sensitivity, task type, output destination — structured dimensions.
Deterministic logic computes the allowed execution envelope.
Work routes to Local Private, Premium Governed, or Review Required.
Only allowed capabilities are exposed. The model cannot exceed the boundary.
Every decision produces a linked, structured audit record.
The policy engine is deterministic control logic — not another AI model making judgment calls. Given the data sensitivity, task type, output destination, execution mode, and tenant configuration, the engine returns a bounded decision: which lane is allowed, which models can be used, which capabilities are exposed, whether approval or sanitization is required, or whether the request must be denied.
The policy engine does not offer suggestions. It computes the allowed envelope, and the downstream system cannot exceed it.
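A minimal sketch of what a deterministic policy engine of this shape could look like. All names here (`PolicyDecision`, `evaluate`, the lane and mode labels) are illustrative assumptions, not Foresight's actual API — the point is that the decision is computed by plain branching logic, with no model call anywhere in the path:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyDecision:
    """A bounded decision: the full envelope downstream systems may use."""
    lane: str
    allowed_models: tuple
    capabilities: tuple
    approval_required: bool
    sanitization_required: bool

def evaluate(sensitivity: str, task: str, destination: str, mode: str) -> PolicyDecision:
    """Deterministic evaluation: identical inputs always yield the identical
    decision. A real engine would also weigh task type and tenant config."""
    # Fail closed: an unrecognized sensitivity label is promoted to the strictest class.
    if sensitivity not in ("public", "internal", "client_sensitive"):
        sensitivity = "client_sensitive"

    if sensitivity == "client_sensitive" and mode in ("private_local", "hybrid_governed"):
        return PolicyDecision(
            lane="local_private",
            allowed_models=("local-llm",),
            capabilities=("extract", "analyze", "draft_internal"),
            approval_required=(destination == "client_facing"),
            sanitization_required=False,
        )
    if sensitivity != "client_sensitive" and mode == "managed_cloud":
        return PolicyDecision(
            lane="premium_governed",
            allowed_models=("hosted-llm",),
            capabilities=("extract", "analyze", "draft_internal", "draft_external"),
            approval_required=False,
            sanitization_required=False,
        )
    # Anything else holds for human review rather than routing optimistically.
    return PolicyDecision("review_required", (), (), True, False)
```

Because the function is pure, the same request evaluated twice returns an equal decision — which is exactly what makes the envelope auditable.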
The orchestrator sees only the capabilities allowed for that lane. If a capability is not exposed, it cannot be invoked — not because the model decided to be careful, but because the platform did not make it available.
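One way to picture "not exposed means not invocable" — a hedged sketch, not the actual implementation: the orchestrator is handed only the callables the policy decision allowed, so a disallowed capability is absent rather than merely forbidden:

```python
# Hypothetical capability registry; real capabilities would be tool integrations.
CAPABILITY_REGISTRY = {
    "extract": lambda doc: f"extracted:{doc}",
    "analyze": lambda doc: f"analyzed:{doc}",
    "send_external": lambda doc: f"sent:{doc}",  # never exposed in a private lane
}

def build_orchestrator(allowed: tuple) -> dict:
    """Expose only the capabilities policy allowed for this lane. Anything
    omitted here simply does not exist in the orchestrator's view."""
    return {name: fn for name, fn in CAPABILITY_REGISTRY.items() if name in allowed}

tools = build_orchestrator(("extract", "analyze"))
# "send_external" is not in `tools`, so no prompt or model output can reach it.
```

The model can ask for anything; the platform only ever hands it the keys the lane permits.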
When policy requires approval, Foresight creates a structured, scoped, time-stamped, attributable approval record tied to a specific action on a specific artifact. Not a chat message that looks like agreement.
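The shape of such an approval object can be sketched as follows — field names are assumptions for illustration, but the properties the text names (scoped, time-stamped, attributable, tied to one action on one artifact) each map to a concrete field:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ApprovalRecord:
    """A first-class approval: immutable, scoped to one action on one
    artifact, time-stamped, and attributable to a named approver."""
    approval_id: str
    action: str        # e.g. "release_client_draft" -- the one action covered
    artifact_id: str   # the specific deliverable being approved
    approver: str      # attribution -- not a chat message that looks like agreement
    granted_at: str    # ISO-8601 UTC timestamp
    scope: str         # what this approval does and does not cover

def grant(approval_id: str, action: str, artifact_id: str,
          approver: str, scope: str) -> ApprovalRecord:
    """Mint an approval at the moment it is granted."""
    return ApprovalRecord(approval_id, action, artifact_id, approver,
                          datetime.now(timezone.utc).isoformat(), scope)
```

Because the record is frozen and tied to a single `artifact_id`, it cannot drift into blanket permission: approving one draft approves exactly that draft.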
Every meaningful decision produces a structured evidence record: policy decisions, routing choices, approval events, and artifact lineage. This is not a raw activity log. It is structured, linked evidence designed to answer specific questions after the fact.
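"Linked" is the load-bearing word: each record points at the record that caused it, so the history can be walked backwards. A minimal sketch under assumed names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvidenceRecord:
    """One structured evidence entry, linked to its upstream cause."""
    record_id: str
    kind: str                 # "policy_decision" | "routing" | "approval" | "lineage"
    detail: str
    caused_by: Optional[str]  # record_id of the upstream record, if any

def reconstruct_chain(records: list, leaf_id: str) -> list:
    """Walk caused_by links backwards, then return the chain in causal order,
    rebuilding how a final action came to be permitted."""
    by_id = {r.record_id: r for r in records}
    out, cur = [], by_id.get(leaf_id)
    while cur is not None:
        out.append(cur)
        cur = by_id.get(cur.caused_by) if cur.caused_by else None
    return list(reversed(out))
```

Given an approval event, the chain answers "under which routing choice, under which policy decision?" without grepping a raw activity log.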
All four modes share the same Foresight experience. The difference is where work runs, what leaves the firm's boundary, and what governance applies.
| | Managed Cloud | Private Local | Governed Enterprise API | Hybrid Governed |
|---|---|---|---|---|
| Same Foresight experience | ✓ | ✓ | ✓ | ✓ |
| Data leaves firm's environment | Yes | Never — on-prem, dedicated, or private cloud | Under enterprise terms | Only approved derived outputs |
| Local / self-hosted models | — | ✓ Exclusively | — | ✓ For raw sensitive work |
| Enterprise-grade external reasoning | Standard hosted | — | ✓ With vendor controls | ✓ Governed, derived inputs only |
| Provider-level vendor controls (DPA, SOC 2, retention) | Limited | N/A — nothing external | ✓ Full enterprise terms | ✓ For governed external lane |
| Approval-gated sensitive actions | Optional | ✓ | ✓ | ✓ |
| Audit story | Standard | Cleanest and simplest | Strong — vendor-backed | Strong — policy-backed with routing evidence |
| Best for | Non-sensitive workflows | Strictest boundary firms | Firms that approve enterprise vendors | Mixed workloads needing both |
Start with Private Local if the priority is the absolute cleanest boundary — nothing external, strongest audit simplicity, no vendor dependency to explain. Deploy on your own hardware, a dedicated server, or your own cloud infrastructure (AWS, Azure, GCP) — the boundary is the same.
Consider Governed Enterprise API if the firm's vendor and security review process can approve enterprise-grade external providers under contractual terms, retention controls, and compliance attestations — and the firm wants frontier reasoning quality without maintaining local hardware.
Consider Hybrid Governed when the firm has mixed workloads — some that need to stay strictly private, others where governed external reasoning on approved, sanitized derivatives would materially improve strategic or advisory output.
Private Local is not a stepping stone. It is a complete, production-grade execution mode. The other modes expand capability under additional governance; they do not exist to correct a weakness. Your firm makes the call.
The system promotes to the stricter sensitivity class. Ambiguity does not widen permissions.
The system requires review before proceeding. It does not route optimistically.
The premium governed lane is not exposed. Work stays private until conditions are met.
Release is gated by review and, where policy requires, by explicit approval. Drafting may proceed internally.
Execution pauses. The system creates an approval request and waits. It does not substitute inference for authorization.
Always the more restrictive path. Governance that only works when everything goes right is not governance.
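The "ambiguity promotes, never widens" rule above can be stated in a few lines. This is an illustrative sketch with assumed class names, not the shipped logic:

```python
SENSITIVITY_ORDER = ["public", "internal", "client_sensitive"]  # least -> most strict

def resolve_sensitivity(signals: list) -> str:
    """Fail closed: when classification signals conflict, the work is promoted
    to the strictest class seen. Unknown signals get the strictest class of all.
    Ambiguity never widens permissions."""
    known = [s for s in signals if s in SENSITIVITY_ORDER]
    if not known:
        return SENSITIVITY_ORDER[-1]  # nothing recognizable -> most restrictive
    return max(known, key=SENSITIVITY_ORDER.index)
```

A document that looks "maybe public, maybe client-sensitive" is treated as client-sensitive; a document the classifier cannot place at all is treated the same way.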
Foresight's evidence model is designed to answer specific questions months later: which model processed a given document, which policy decision routed it there, who approved its release, and where a deliverable's inputs came from.
This is not logging for the sake of logging. It is structured evidence designed to make governance reconstructable and defensible.
A CPA uploads financial statements and asks: "Review these statements, flag unusual variances, and draft questions for the client."
Foresight detects the uploaded documents, classifies them as financial/client-sensitive, and identifies the task as analysis with a client-facing drafting component.
The policy engine evaluates data sensitivity, task type, and output destination against the firm's execution mode. All work routes to the local private lane. Client-facing output marked as review-required.
Local models extract data from statements, identify variances, flag anomalies, and produce structured findings — all within the private lane.
Foresight generates an internal draft of client questions based on the analysis. The draft is tagged as review-required because the output class is client-facing.
The draft cannot be sent or released until a qualified reviewer approves it. Foresight creates an approval object and holds the deliverable.
If the partner later asks for a strategic advisory memo, Foresight may allow enterprise-grade external reasoning — but only on a derived, sanitized fact pack built from local analysis. Raw financial statements do not enter the external lane. The routing decision is policy-gated and produces audit evidence.
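What "derived, sanitized fact pack" could mean in practice can be sketched as a redaction pass over locally produced findings. The patterns and field names here are assumptions for illustration; a real sanitizer would be policy-defined and far more thorough:

```python
import re

# Illustrative redaction rules: employer-ID-shaped numbers and company names.
REDACT_PATTERNS = [
    (re.compile(r"\b\d{2}-\d{7}\b"), "[EIN]"),
    (re.compile(r"\b[A-Z][a-z]+ (LLC|Inc\.|Corp\.)\b"), "[CLIENT]"),
]

def build_fact_pack(local_findings: dict) -> dict:
    """Derive a sanitized fact pack from local analysis. Only aggregate,
    redacted facts leave the private lane -- never raw source documents."""
    pack = {}
    for key, value in local_findings.items():
        text = str(value)
        for pattern, token in REDACT_PATTERNS:
            text = pattern.sub(token, text)
        pack[key] = text
    pack["_provenance"] = "derived:local_private"  # lineage marker for audit evidence
    return pack
```

The external lane then sees "revenue variance of 12% at [CLIENT]" rather than the statements themselves, and the `_provenance` marker ties the pack back to the local analysis that produced it.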
Every step — classification, policy decision, lane selection, model usage, draft generation, review hold, approval, and release — produces a linked evidence record.
Credibility requires honesty about boundaries.
AI assists with analysis, drafting, and coordination. Final sign-off on client-facing work, official filings, or regulated deliverables remains a human responsibility. Foresight enforces the review gate — it does not replace the reviewer.
In Private Local, sensitive data does not leave the private environment. In Governed Enterprise API, external processing happens under enterprise contractual terms with vendor-level safeguards. In Hybrid Governed, external routing is policy-gated and limited to approved derived inputs. There is no mode where raw sensitive files are casually sent to uncontrolled external models.
Foresight provides an architecture that makes strong compliance posture achievable. Specific regulatory compliance depends on the firm's deployment, configuration, policies, and jurisdiction. We do not substitute an architecture claim for legal counsel.
The control boundary is enforced by deterministic policy logic, not by instructing the model to be careful. If the boundary depended on the model's cooperation, it would not be a boundary.
It is a governed execution system with structured policy controls, first-class approval objects, bounded capability exposure, and linked audit evidence. The architecture is the product.
Whether you handle financial records, legal documents, medical files, or sensitive diligence materials — Foresight gives your team AI-powered workflows inside boundaries you can actually defend.
Tell us what kind of environment you operate in and what you need to govern. We’ll respond with the right deployment angle — Private Local, Hybrid Governed, or both.