A three-week productized review. Gap analysis, shadow-AI inventory, a draft use policy for your General Counsel to sign off on, and a 90-day guardrails playbook. Published price range; fee credits toward an ATLAS Enterprise engagement.
Timeline
Three weeks · 15 business days · remote-first
Fee range
$60K-$120K
Credit mechanic
100% toward Enterprise · 12 months
Why now
Most firms in 2026 are governing AI use after the fact. The Review reverses the order: define what is allowed before the firm depends on it. It sits between operations, compliance, and legal, where firms have no defensible incumbent vendor and where bad answers are loud, public, and slow to roll back.
Four pre-published deliverables
Gap analysis
Data quality, definition stability, and process maturity scored against a published rubric. The score is the firm's defensible answer to the GP, IC, GC, or LP question: 'Where do we actually stand?'
Shadow-AI inventory
Where employees are already using LLMs on firm data, what they are pasting into them, and the categorized risk exposure. The inventory is built from structured interviews; Atlas does not surveil employees.
Draft AI use policy
Data classification, allowed and disallowed workflows, prompt patterns, human-review gates, retention rules, and recommended LP-disclosure language. Delivered as a draft for the firm's General Counsel to review and sign off on. Atlas is not legal counsel.
90-day guardrails playbook
The operational sequence to bring the firm from its current state to the adopted policy: who does what, in what order, with what evidence.
Delivery
Artifact upload, stakeholder interviews across operations, compliance, IT, and any active AI vendor relationships.
Structured interviews documenting where LLMs are already in use. Risk categorization against the rubric.
Use policy draft pressure-tested with the firm's GC. 90-day guardrails playbook sequenced with the operating team.
Signed deliverable package and a live executive readout with the GP, COO, GC, and CCO. Use policy moves to GC for final review and adoption.
Two weeks of inbound questions included. Beyond that, work continues only on a separately scoped engagement.
Who this is for
Employees are already using LLMs on firm data without an approved policy.
LPs are asking about AI governance in DDQs or annual reviews.
The firm is preparing institutional AI policy and needs an external defensible artifact.
The firm is about to deploy a vendor AI tool internally and needs a use-policy and guardrails baseline first.
Pricing
$60K-$120K
The range reflects firm size, scope complexity, and the number of data domains in play. It is not a negotiating position. We quote a single fixed fee within the range after the strategic call.
Remote-first. On-site engagement is a quoted upcharge.
Deliverable preview
The full policy becomes the firm's property once delivered. The preview below is structure only: section names, what each section covers, and the shape of the deliverable. The scoring rubric and proprietary template content are not shown.
AI Use Policy · Preview · Section index
1. Scope and applicability
Who the policy covers. Which firm data classes it governs. Which AI systems are in scope.
2. Data classification
Public, internal, confidential, restricted. Worked examples for each class drawn from the firm's actual data.
3. Allowed workflows
Tasks that may use LLM assistance, with named tools and named data classes.
4. Disallowed workflows
Tasks where LLM assistance is prohibited, including a default prohibition on workflows not yet evaluated.
5. Prompt patterns and human-review gates
Standard prompt frames for allowed workflows. Required human-review steps. Retention rules for prompts and responses.
6. Vendor and tool register
Approved tools, their data-handling posture, and the firm's contract terms with each.
7. Incident response
What to do when firm data leaves the approved surface, who is notified, and in what order.
8. LP disclosure language
Suggested phrasing for LP DDQ responses, annual report disclosures, and ad-hoc LP questions.
9. Review cadence
How often the policy is revisited, who owns the review, and what changes require GP and GC sign-off.
The full policy includes worked examples, named tools, the scoring rubric, and language reviewed by the firm's General Counsel. Atlas is not legal counsel; the firm's GC signs off before adoption.
100% of the assessment fee credits toward an ATLAS Enterprise engagement signed within 12 months of the assessment's completion date. The assessment is a productized first commercial step, not a hurdle to clear.
Scope clarity
Productized service, not consulting. The boundaries are part of the product. The AI use policy is a draft for the firm's General Counsel to review and sign off on; Atlas is not legal counsel. The 'not legal advice' boundary is the one buyers should re-read most closely.
The assessment produces diagnosis and plan only. It does not include building integrations, configuring systems, migrating data, or shipping fixes. Implementation is a separate Enterprise engagement.
Both assessments run remote-first via artifact upload and structured interviews. On-site engagement is a quoted upcharge, not bundled.
Where the AI Review produces a use-policy draft, Atlas is not legal counsel. The firm's General Counsel reviews and signs off before adoption.
Atlas does not sign or warrant the firm's LP communications. Any assessment artifact used in LP materials remains the firm's responsibility.
Each assessment is point-in-time. Quarterly re-assessment is a separate, future product and is not included.
The assessment is not a substitute for fund admin audit, SOC 2 attestation, or formal regulatory examination.
Strategic call
A 60-90 minute structured discovery conversation. We map your current state, surface the two or three most consequential gaps, and produce a follow-up scorecard. It is a discovery call, not a sales pitch. Atlas reserves the right to decline calls that fall outside scope.
Book a strategic call
Three weeks. Four signed artifacts. A draft policy your GC can take to sign-off. Fee credits toward an ATLAS Enterprise engagement within 12 months.