v1.0 January 26, 2026
Artificial intelligence and psychedelic medicines are both transformative technologies. Together, they hold extraordinary promise — for expanding access to care, improving practitioner training, and deepening understanding of the human mind.
They also carry unique risks.
Artificial intelligence and psychedelic medicines operate in domains of heightened vulnerability: altered states, complex emotions, identity formation, and meaning-making. Both rely on trust — between participants and practitioners, between institutions and the public, between data and outcomes, and, increasingly, between humans and machines.
At Sabba, we believe that the long-term success of this field depends not only on innovation, but on clear standards for consent, transparency, and human dignity — especially when real human experiences are used to train AI systems. This Standard is rooted in fair information practices, and illustrates our commitment to building AI systems with data that was gathered fairly: by giving users notice of what data was collected and a choice to consent to AI training on their data. By starting with fairly gathered data, we create an environment primed for good AI governance, allowing us to deliver greater fairness, transparency, privacy, security, and human-centricity for our users.
This Standard sets out the principles Sabba applies when developing, deploying, or integrating AI systems that rely on human-derived data. It is intended both as an internal standard that Sabba is striving to achieve and as a proposed template for the broader psychedelic ecosystem.
Sabba Praxis is an interoperable platform. We are open to integrating a wide range of AI models and approaches with aligned partners. But interoperability does not imply endorsement, and we're committed to valuing those in the community who prioritize consent and data control. Accordingly, integration is contingent on meeting the standards set out below.
We want to live in a world where AI systems are built on fairness, respect for human agency, transparency, and accountability. Accordingly, we expect our partner organizations to employ ethics by design, including collection and use limitations, security safeguards, notice provisions, and purpose specifications to ensure that they are ethically collecting high-quality, relevant data. To that end, Sabba strives toward these core principles in everything we build and in every partnership we enter:
Sabba seeks to integrate only with AI systems whose training data was collected with explicit, purpose-specific consent. This principle helps us foster human-centric, trustworthy, and accountable AI development.
At minimum, individuals must have been clearly informed, at or before the time of data collection, that their data could be used to train AI or machine learning systems or be reused for downstream purposes.
Data subject consent derived from general statements such as "recorded for training purposes," "quality improvement," or "research purposes" is not sufficient to meet this Standard when AI training or downstream reuse is involved. Sabba seeks to integrate only with AI systems whose human-derived training data came from people who consented to their data being used to train AI or machine learning systems. When people aren't told at the time of consent that AI or machine learning systems may be involved, consent is unclear and may be nonexistent.
Sabba distinguishes between necessary disclosures and consent.
In contexts involving emotional distress, crisis, or altered states of consciousness, Sabba recognizes that individuals may not be able to offer meaningful informed consent for secondary data uses at the moment of interaction.
Accordingly, retroactive data use disclosures without an opportunity for affirmative choice ("opt in") do not meet this Standard.
Where human-derived data is collected in vulnerable or altered-state contexts, Sabba requires a deferred, tiered consent process for secondary uses. Secondary uses include any use other than the immediate provision of necessary services to the data subject at the time of the interaction from which the individual's data was recorded or otherwise derived.
Within a reasonable period following the interaction (e.g., 24–48 hours), individuals should be given a clear, plain-language notice stating that data was retained by the organization as a result of that interaction, along with a summary of the types of data retained (name, location, voice, etc.). The data subject should then be presented with distinct, separately selectable choices for each potential secondary use, such as research, quality improvement, or AI training.
For optional uses, silence or non-response does not constitute consent. Affirmative, opt-in consent should be used whenever possible.
Absent verifiable consent, data must be excluded from those uses.
This approach prioritizes autonomy, clarity, and respect over scale.
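To make the tiered model concrete, here is a minimal sketch of how a deferred consent record could be structured. All names (SecondaryUse, ConsentChoice, DeferredConsentRecord, isUsePermitted) are hypothetical illustrations for this Standard, not part of any Sabba system or API.

```typescript
// Hypothetical sketch of a deferred, tiered consent record.
// All names are illustrative assumptions, not a Sabba API.

type SecondaryUse = "research" | "quality_improvement" | "ai_training";

type ConsentChoice =
  | { status: "opted_in"; timestamp: string }  // affirmative, verifiable consent
  | { status: "opted_out"; timestamp: string }
  | { status: "no_response" };                 // silence or non-response: not consent

interface DeferredConsentRecord {
  subjectId: string;
  interactionDate: string;
  noticeSentAt: string;          // e.g., within 24-48 hours of the interaction
  dataTypesRetained: string[];   // e.g., ["name", "location", "voice"]
  choices: Record<SecondaryUse, ConsentChoice>;
}

// A secondary use is permitted only on an affirmative opt-in;
// absent verifiable consent, the data is excluded from that use.
function isUsePermitted(record: DeferredConsentRecord, use: SecondaryUse): boolean {
  return record.choices[use].status === "opted_in";
}
```

Under a model like this, a record whose every choice is "no_response" excludes the data from all secondary uses by construction; nothing defaults to opted in.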
A Note About Prompt Engineering: Sabba will not deliberately use human data that does not meet these standards to train its AI systems. However, Sabba may occasionally use insights gleaned from reviews, studies, trials, and other scientific sources to inform prompting. While Sabba does not intentionally input personally identifiable material into AI tools, Sabba recognizes that it may not be able to verify the data use consent underlying every scientific report relevant to its prompting. Sabba encourages organizations that seek to integrate with Sabba's platform to disclose these kinds of uses, and commits to working with organizations that make these disclosures to maintain data integrity.
Likewise, Sabba uses unidentifiable human-derived data from open sources directly in some of its prompts. None of this data is gathered or transformed by Sabba; it is used only as reference material.
The more vulnerable the data subject or the data collection context, the higher the standard that should be applied in ensuring that consent is knowing and voluntary. Where data is collected from individuals who may be emotionally distressed, cognitively impaired, intoxicated, or in altered states, Sabba expects integrated partners to apply this heightened consent standard.
Organizations that partner with Sabba to integrate their system(s) must be able to provide clear documentation describing the lineage of their human-derived training data and the consent processes under which it was collected.
This documentation does not need to be public, but it must be verifiable.
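As one hedged illustration of what verifiable-but-not-public documentation might look like, the sketch below outlines a data lineage record a partner could maintain. The structure and field names are assumptions for illustration, not a format Sabba requires.

```typescript
// Hypothetical data lineage documentation a partner might maintain.
// Field names are illustrative assumptions, not a required Sabba format.

interface DataLineageRecord {
  datasetId: string;
  collectionContext: string;        // e.g., "post-session integration interviews"
  consentLanguage: string;          // verbatim consent text shown to data subjects
  consentCoversAiTraining: boolean; // was AI/ML training explicitly disclosed?
  collectionPeriod: { start: string; end: string };
  vulnerableContext: boolean;       // altered states, distress, impairment
  auditTrailLocation: string;       // where an auditor can verify the above
}
```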
A Note on Commercially Available LLMs, Deep Learning Tools, and Expert Systems:
Sabba understands that data lineage is likely not verifiable when using commercially available AI tools. Sabba asks that organizations that partner with Sabba disclose use of these commercial tools before integration. Sabba commits to working with organizations that make such disclosures to create a plan to maintain data integrity before, during, and after integration.
For the avoidance of doubt, Sabba does use commercially available AI tools. Sabba will never intentionally supply personally identifiable information to such tools, though first names or nicknames may be used to customize AI patient interactions.
For AI systems trained on data derived from IRB-approved research or clinical trials, Sabba applies a tiered standard:
Prospective studies (new data collection):
AI training and downstream reuse must be explicitly disclosed in the research protocol, and participants must consent to that specific use. Reidentification risks, if applicable, should be disclosed in the consent to AI training and downstream reuse.
Legacy studies:
Post-hoc AI training may be permitted where participants are re-contacted, provided with clear disclosure of proposed AI use, and given a meaningful opportunity to opt out.
Restricted cases:
Where participants cannot reasonably be re-contacted and AI training was not disclosed in the original consent, Sabba may limit or decline use of such data for training deployable or persistent AI systems, particularly in contexts involving vulnerability or altered states. Sabba reserves the right to take deidentification, aggregation, and reidentification risks into account when limiting or declining use of data under this section of the Standard.
IRB approval strengthens confidence, but does not replace the need for clarity, agency, and documented consent appropriate to the intended AI use.
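Read as a decision procedure, the tiered standard might be sketched as follows. The categories and outcomes mirror the text above; the types and the function itself are purely illustrative assumptions, not an actual Sabba tool.

```typescript
// Illustrative decision sketch of the tiered standard for research data.
// Categories mirror the Standard's text; names are hypothetical.

type StudyKind = "prospective" | "legacy" | "restricted";

interface ResearchDataset {
  kind: StudyKind;
  aiUseDisclosedInProtocol: boolean; // prospective: AI use disclosed and consented?
  participantsRecontacted: boolean;  // legacy: re-contacted with clear disclosure and opt-out?
  vulnerableContext: boolean;        // altered states, distress, impairment
}

function mayTrainOn(d: ResearchDataset): "yes" | "no" | "limited" {
  switch (d.kind) {
    case "prospective":
      return d.aiUseDisclosedInProtocol ? "yes" : "no";
    case "legacy":
      return d.participantsRecontacted ? "yes" : "no";
    case "restricted":
      // Sabba may limit or decline, weighing deidentification,
      // aggregation, and reidentification risks.
      return d.vulnerableContext ? "no" : "limited";
  }
}
```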
Sabba is designed to interoperate with a wide range of AI systems and partners. However, interoperability is not unconditional: integration remains contingent on meeting the standards set out above. Our goal is not to exclude any specific system or partner, but rather to use our platform wisely. Sabba strives to ensure that the systems we amplify are worthy of the trust people place in this field.
The psychedelic renaissance is still young. The norms we establish now — around consent, data stewardship, and AI — will shape public trust for decades.
Sabba believes that building durable infrastructure requires setting clear standards early, even when doing so is uncomfortable or slows short-term momentum. This Standard isn't a contract, but rather a guidepost. We're illuminating our thought process so you can see how we're working to build responsible AI systems.
We know that we'll need to update this Standard over time. To improve transparency, Sabba commits to keeping prior versions of this Standard public, and to keeping a documented history of changes to the Standard available.
We invite partners, researchers, clinicians, and technologists to engage with this framework, challenge it, and improve it — but not to bypass it.
Human experience is not raw material by default. It is a gift that carries responsibility.