Why Sandstone Is Built for Privilege Protection

Jessica Nguyen
April 21, 2026
How thoughtful design, access controls, and enterprise-grade contractual safeguards keep in-house legal teams' privileged work protected.
Picture this: a colleague types a query into Google or sends a prompt to a generative AI tool, asking for legal advice. This isn't hard to picture. It's actually anticipated — or should be anticipated — behavior.
If I'm honest, Google (and now enterprise-grade versions of Claude and Gemini) has been my starting point for many legal and general questions over the years. And I'm a licensed attorney.
Why are so many people suddenly concerned about using AI tools for legal work? A federal judge in New York recently made headlines in the legal tech world. In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), Judge Jed Rakoff ruled that a criminal defendant's interactions with a publicly available AI tool were not protected by attorney-client privilege or the work product doctrine.
The Heppner case sent a ripple of concern through legal departments everywhere: Is my AI tool putting privileged information at risk?
What Heppner Actually Decided (And What It Didn't)
The Heppner decision turned on a very specific set of facts. The defendant used a consumer-grade, publicly available AI tool entirely on his own initiative, with no attorney direction. The platform's terms of service explicitly permitted data retention, model training on user inputs, and potential disclosure to third parties. Judge Rakoff found that under these circumstances, the defendant had no reasonable expectation of confidentiality — and without confidentiality, there is no privilege.
The decision was not a ruling against AI in legal practice. It was a ruling against the use of public AI tools without attorney direction or confidentiality protections. Courts and legal commentators have widely noted that the holding is narrow and fact-specific. A magistrate judge in the Eastern District of Michigan reached a contrary conclusion in a similar context, finding that AI-assisted work conducted internally — without disclosure to an adversary — retained work product protection.
The takeaway for in-house legal teams and law firms is straightforward: the same principles that protect privilege in every other context apply here. Use a secure tool. Use it under attorney direction. Maintain confidentiality.
Sandstone is built precisely around these principles.
Access by Design: A Tool Built for Attorneys and Those Who Work Under Them
Attorney-client privilege has always required more than just confidentiality — it requires that the work be done by or under the direction of counsel. That is why Sandstone's access and permissions architecture is not an afterthought. It is a core design decision.
Sandstone is a dedicated workspace for in-house legal teams. Access is limited to attorneys and legal professionals working under attorney supervision. This is not simply a licensing policy — it is enforced through our access controls and permissions framework. When a member of the legal team uses Sandstone, they do so as part of their professional legal work. That is precisely the context courts have consistently recognized as supporting privilege protection.
This stands in direct contrast to the risk scenario Heppner illustrated: a layperson independently consulting a public AI tool with no attorney involvement, no confidentiality protections, and no oversight. Sandstone's structure eliminates each of those risk factors by design.
The Technical and Contractual Layer: Why Your Data Stays Yours
Beyond access controls, Sandstone has built a contractual and technical framework specifically designed to preserve the confidentiality that privilege requires.
We operate under enterprise agreements — not consumer terms of service — with our AI model providers, including Anthropic, OpenAI, and Google Gemini. In addition to those enterprise agreements, we have guaranteed Zero Data Retention (ZDR) agreements with each provider. Under these agreements, our providers are contractually prohibited from:
- Retaining prompts or model outputs following generation
- Logging substantive content for human review
- Using customer data to train, fine-tune, or otherwise improve their models
- Disclosing customer data to any third party
This is the critical distinction Heppner turns on. Judge Rakoff found that the public version of the AI tool the defendant used had terms of service that explicitly permitted data retention, training use, and third-party disclosure, destroying any reasonable expectation of confidentiality. Sandstone's enterprise plus ZDR framework contractually prohibits all of those things. Your prompts and the model's outputs are deleted after generation. They are never stored, never reviewed, and never used to train any model.
An Analogy That Puts This in Context: Email and the Cloud
Here is a perspective worth sitting with: in-house legal teams have been sharing privileged information with third-party technology providers for decades, and no one seriously argues that doing so automatically destroys privilege.
When you draft a privileged memo in Microsoft Word Online, your content passes through Microsoft's servers. When you send a privileged communication over Gmail or Outlook, it travels through Google's or Microsoft's infrastructure. When you store client files in SharePoint, Box, or Google Drive, you are entrusting confidential data to a third-party cloud provider.
Privilege is not waived in these scenarios because attorneys take reasonable steps to maintain confidentiality — they use enterprise accounts, review terms of service and data processing agreements, and confirm that providers are not sharing or disclosing their content to unauthorized parties. Courts have consistently recognized that routing confidential communications through a secure, contractually protected third-party service does not destroy privilege, provided the confidentiality of the communication is reasonably maintained.
Sandstone operates on exactly the same logic. We are a purpose-built professional tool for in-house legal teams, with enterprise contractual protections that are functionally equivalent to — and in some respects stronger than — the protections attorneys already rely on in their daily workflows. If your organization has satisfied itself that cloud-based email and document editing and storage tools are consistent with your privilege obligations, using Sandstone should be no different.
What This Means for Your Legal Team
The Heppner decision is a useful reminder, not a cause for alarm.
Here is what using AI the right way looks like:
Use a tool built for legal professionals. Consumer AI tools are designed for general use and carry general terms of service. Sandstone is designed specifically for in-house legal teams to streamline their operations, with access controls, data protections, and contractual safeguards that reflect the confidentiality obligations attorneys carry.
Use it under attorney direction. AI used in the course of attorney-supervised legal work — analyzing exposure, drafting strategy, or handling repetitive tasks like reviewing documents — is fundamentally different from a layperson independently querying a public tool. Sandstone's access structure ensures your team uses it in exactly this professional capacity.
Verify the confidentiality framework. Sandstone openly shares its terms and policies on data security through Sandstone’s Trust Center. We encourage your team and your CISOs to review it, ask questions, and confirm that our framework meets your organization's standards. Transparency is how trust is built.
Our Commitment
Sandstone was built by people who understand the in-house legal function and the need for legal workflow automation — the confidentiality obligations, the professional responsibility standards, and the stakes of getting data security wrong. We have made deliberate, documented choices at every layer of our product — access design, contractual architecture, integrations, and technical infrastructure — to ensure that the work your legal team does in Sandstone remains confidential, protected, and yours.
We will continue monitoring developments in this space, including evolving case law on AI and privilege, regulatory guidance from bar associations, and changes to our processor agreements. When something material changes, we will tell you.
This post is for informational purposes only and does not constitute legal advice. Attorneys should evaluate AI tools in light of their own professional responsibility obligations and applicable bar guidance.