
Sabba Human Data Governance Standard

v1.0 January 26, 2026

Introduction: Two Promethean Technologies, One Responsibility

Artificial intelligence and psychedelic medicines are both transformative technologies. Together, they hold extraordinary promise — for expanding access to care, improving practitioner training, and deepening understanding of the human mind.

They also carry unique risks.

Artificial intelligence and psychedelic medicines operate in domains of heightened vulnerability: altered states, complex emotions, identity formation, and meaning-making. Both rely on trust — between participants and practitioners, between institutions and the public, between data and outcomes, and, increasingly, between humans and machines.

At Sabba, we believe that the long-term success of this field depends not only on innovation, but on clear standards for consent, transparency, and human dignity — especially when real human experiences are used to train AI systems. This Standard is rooted in fair information practices and reflects our commitment to building AI systems on data that was gathered fairly: by giving users notice of what data is collected and a choice to consent to AI training on their data. Starting from fairly gathered data creates an environment primed for good AI governance, allowing us to deliver greater fairness, transparency, privacy, security, and human-centricity for our users.

This Standard sets out the principles Sabba applies when developing, deploying, or integrating AI systems that rely on human-derived data. It is intended both as an internal standard that Sabba is striving to achieve and as a proposed template for the broader psychedelic ecosystem.

Sabba Praxis is an interoperable platform. We are open to integrating a wide range of AI models and approaches with aligned partners. But interoperability does not imply endorsement, and we are committed to recognizing those in the community who prioritize consent and data control. Accordingly, integration is contingent on meeting these standards:

Core Principles

We want to live in a world where AI systems are built on fairness, respect for human agency, transparency, and accountability. Accordingly, we expect our partner organizations to employ ethics by design, including collection and use limitations, security safeguards, notice provisions, and purpose specifications to ensure that they are ethically collecting high quality, relevant data. To that end, Sabba strives towards these core principles in everything we build and in every partnership into which we enter:

4. Heightened Standards for Vulnerable Populations

The more vulnerable the data subject and/or the data collection context, the higher the standard that should be applied when ensuring that consent is knowing and voluntary.

Where data is collected from individuals who may be emotionally distressed, cognitively impaired, intoxicated, or in altered states, Sabba expects that integrated partners will:

  • limit consent requests to only what is necessary in that moment, employing reasonable data minimization for the context and deferring secondary-use consent requests as explained in Section 3
  • use plain-language explanations developed with the culture of the population the organization serves in mind
  • separate care delivery from consent decisions
  • provide the data subject with a meaningful ability to decline without penalty
  • document consideration of voluntariness and power dynamics
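The practices above can be made concrete as a record kept alongside each consent request. The sketch below is purely illustrative — the class name and every field are hypothetical, not part of any Sabba schema — but it shows how an organization might document minimization, separation from care, and voluntariness in one place.

```python
from dataclasses import dataclass, field

# Hypothetical record of a single consent request, mirroring the practices
# listed above. All names here are illustrative, not a Sabba-prescribed API.
@dataclass
class ConsentRequest:
    purpose: str                     # the one use being requested right now
    plain_language_summary: str      # explanation adapted to the population served
    separate_from_care: bool         # consent was not bundled with care delivery
    decline_without_penalty: bool    # subject could refuse with no consequence
    voluntariness_notes: str         # documented power-dynamics consideration
    deferred_uses: list = field(default_factory=list)  # secondary uses deferred per Section 3

    def is_minimal(self) -> bool:
        # A request is "minimal" when it names one purpose and defers the rest.
        return bool(self.purpose) and self.purpose not in self.deferred_uses

req = ConsentRequest(
    purpose="session-note transcription",
    plain_language_summary="We would like to transcribe today's session notes.",
    separate_from_care=True,
    decline_without_penalty=True,
    voluntariness_notes="Requested after the session, by non-treating staff.",
    deferred_uses=["AI model training"],
)
print(req.is_minimal())  # True
```

The point of the structure is auditability: each bullet above becomes a field that can later be checked, rather than an undocumented judgment call.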

5. Clean Data Lineage and Documentation

Organizations that partner with Sabba to integrate their system(s) must be able to provide clear documentation describing:

  • the source(s) of training data
  • the consent framework under which data was collected
  • whether and how data was anonymized or de-identified
  • whether the organization retains the ability to re-identify any data subjects
  • the scope of permitted downstream use as determined by data subject consent
  • any restrictions on deployment, reuse, or commercialization as determined by data subject consent

This documentation does not need to be public, but it must be verifiable.
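One way to make such documentation verifiable is to require that every item in the list above be answered explicitly. The sketch below is an assumption about what such a record might look like — the keys and the completeness check are hypothetical, and the Standard does not prescribe any particular schema.

```python
# Illustrative shape of a data-lineage record covering the six points above.
# Keys and values are hypothetical; Sabba does not prescribe a specific schema.
lineage_record = {
    "sources": ["2023 practitioner-training transcripts (example clinic)"],
    "consent_framework": "opt-in, per-session, AI training disclosed",
    "deidentification": "names and dates removed; free text reviewed manually",
    "reidentification_possible": False,  # organization retains no linking key
    "permitted_downstream_use": ["practitioner-training models"],
    "restrictions": ["no commercialization without re-consent"],
}

REQUIRED_FIELDS = [
    "sources", "consent_framework", "deidentification",
    "reidentification_possible", "permitted_downstream_use", "restrictions",
]

def is_verifiable(record: dict) -> bool:
    """A record is complete only if every required field is present and answered."""
    return all(k in record and record[k] not in (None, "") for k in REQUIRED_FIELDS)

print(is_verifiable(lineage_record))  # True
```

The design choice here is that "verifiable" means every question has an explicit answer, including negative ones (e.g., `reidentification_possible: False`), so silence can never be mistaken for compliance.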

A note on commercially available LLMs, Deep Learning Tools, and Expert Systems:

Sabba understands that data lineage is often not verifiable when commercially available AI tools are used. We ask that partner organizations disclose their use of such tools to Sabba before integration, and we commit to working with organizations that make these disclosures to create a plan for maintaining data integrity before, during, and after integration.

For the avoidance of doubt, Sabba does use commercially available AI tools. Sabba will never intentionally supply personally identifiable information to such a tool, although first names or nicknames may be used to customize AI patient interactions.

6. Research and IRB-Governed Data Use

For AI systems trained on data derived from IRB-approved research or clinical trials, Sabba applies a tiered standard:

Prospective studies (new data collection):

AI training and downstream reuse must be explicitly disclosed in the research protocol, and participants must consent to that specific use. Re-identification risks, if applicable, should be disclosed as part of the consent for AI training and downstream reuse.

Legacy studies:

Post-hoc AI training may be permitted where participants are re-contacted, provided with clear disclosure of proposed AI use, and given a meaningful opportunity to opt out.

Restricted cases:

Where participants cannot reasonably be re-contacted and AI training was not disclosed in the original consent, Sabba may limit or decline use of such data for training deployable or persistent AI systems, particularly in contexts involving vulnerability or altered states. Sabba reserves the right to take de-identification, aggregation, and re-identification risks into account when limiting or declining use of data under this section of the Standard.

IRB approval strengthens confidence, but does not replace the need for clarity, agency, and documented consent appropriate to the intended AI use.
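The tiered standard above can be read as decision logic. The sketch below paraphrases the three tiers; the category names and field names are illustrative assumptions, not an implementation of any Sabba review process, and a real determination would involve human judgment rather than a lookup.

```python
# A hypothetical sketch of the tiered standard as decision logic.
# Field names are assumptions made for illustration only.
def training_determination(study: dict) -> str:
    if study.get("type") == "prospective":
        # New studies: AI use must be in the protocol and specifically consented.
        if study.get("ai_use_in_protocol") and study.get("specific_consent"):
            return "permitted"
        return "not permitted"
    if study.get("type") == "legacy":
        # Legacy studies: re-contact, clear disclosure, and a meaningful opt-out.
        if study.get("recontacted") and study.get("disclosed") and study.get("opt_out_offered"):
            return "permitted"
    # Everything else falls into the restricted case: use may be limited or declined.
    return "restricted"

print(training_determination({
    "type": "prospective",
    "ai_use_in_protocol": True,
    "specific_consent": True,
}))  # permitted
```

Note that the fall-through is deliberately conservative: a study that fits neither permitted path defaults to "restricted", mirroring the Standard's posture that undisclosed AI use is limited rather than assumed acceptable.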

7. Interoperability with Accountability

Sabba is designed to interoperate with a wide range of AI systems and partners. However:

  • interoperability does not imply ethical equivalence
  • integration is conditional, not automatic
  • Sabba reserves the right to decline or pause integration where this Standard or our other compliance, ethical, or integrity standards are not met

Our goal is not to exclude any specific system or partner, but rather to use our platform wisely. Sabba strives to ensure that the systems we amplify are worthy of the trust people place in this field.

Closing

The psychedelic renaissance is still young. The norms we establish now — around consent, data stewardship, and AI — will shape public trust for decades.

Sabba believes that building durable infrastructure requires setting clear standards early, even when doing so is uncomfortable or slows short-term momentum. This Standard isn't a contract, but rather a guidepost. We're illuminating our thought process so you can see how we're working to build responsible AI systems.

We know that we'll need to update this Standard over time. To improve transparency, Sabba commits to keeping prior versions of this Standard public, and to keeping a documented history of changes to the Standard available.

We invite partners, researchers, clinicians, and technologists to engage with this framework, challenge it, and improve it — but not to bypass it.

Human experience is not raw material by default. It is a gift that carries responsibility.
