Diverse AI — Safety & Community Policies
Diverse Technical Solutions LLC (“Diverse AI,” “we,” “us”) operates the Diverse AI platform (including our website, mobile application, AI companion features, community spaces, and expert marketplace). This document sets out key safety rules and enforcement principles. It supplements — and does not replace — our Terms of Service, Privacy Policy, Community Standards & Safety Policy (Part IV of the Legal Suite), and other incorporated policies. Effective as stated below; we may update these policies and will notify users of material changes as described in our Terms. The topics below mirror categories commonly published by major social platforms (similar in scope to public rules hubs such as help.x.com/rules-and-policies), tailored to our features: community, messaging, media, experts, and AI chat.
Website: https://diverseaiapp.com
Effective: April 7, 2026
Version: 1.2

Child Safety

Our Platform is intended for adults 18 years of age or older (see Terms of Service). We have zero tolerance for child sexual exploitation and abuse material (CSAM), grooming, or any sexualization of minors. We also restrict certain media depicting physical abuse of children to reduce re-victimization and normalization of violence against children.

1.1 Who is a “child” or “minor”

Throughout our policies we use “child,” “children,” and “minor” to mean any person under 18 years of age.

1.2 Child Sexual Exploitation and Abuse (CSEA)

We prohibit any content or conduct that features, promotes, solicits, or facilitates child sexual exploitation or abuse, including: real imagery or video; text-based sexualization of minors; illustrated, animated, or computer-generated depictions (including outputs from generative AI) when they constitute or promote sexual exploitation of minors; links or instructions intended to obtain such material; and grooming or predatory behavior toward minors.

Intent does not excuse harm. Content shared “for awareness,” humor, or outrage can still re-victimize children and spread abusive material. We may remove such content and suspend or terminate accounts, and we may preserve and report data to law enforcement and designated organizations as required or permitted by law.

1.3 Physical Child Abuse Media

To limit re-victimization and normalization of violence against children, we may remove depictions of physical child abuse in many cases, including when shared to raise awareness. When reviewing reports, we may consider factors such as: whether the child appears nude, partially clothed, or fully clothed; the severity of harm shown; and whether the context appears abusive, non-abusive (e.g. advocacy), or newsworthy — without waiving our right to remove content where we believe removal protects minors.

1.4 Minors in Physical Altercations

We aim to protect minors depicted in fights or assaults. We may remove or restrict content based on factors such as: abusive vs. non-abusive context; whether we have a report from the minor or an authorized representative; and whether imagery is excessively graphic.

1.5 How to Report Child Safety Concerns

Report through the in-app reporting flow or by emailing [email protected]. Include URLs, usernames, timestamps, and a description where possible. If a child is in immediate danger, contact local law enforcement first.

United States: You may also report child sexual exploitation to the National Center for Missing & Exploited Children (NCMEC) via the CyberTipline at cybertipline.org or call 1-800-THE-LOST (1-800-843-5678). Do not download, screenshot, or re-share suspected CSAM to “prove” a violation — report and disengage.

Adult Content (Age-Gated)

Diverse AI is a women-centered platform focused on wellbeing, community, and professional connection. We do not position the Platform as an adult-content service. Where we allow limited consensual adult expression (e.g. in private or labeled contexts as our product design permits), it must comply with this section, our Community Standards, and applicable law.

2.1 What We May Treat as Adult Content

Adult content includes consensually created material depicting adult nudity or sexual behavior that is pornographic or primarily intended to sexually arouse, including AI-generated, photographic, or animated depictions when they meet that description.

2.2 Requirements

Adult content must be consensually produced and shared; must not involve minors, non-consent, exploitation, or coercion; must not sexualize minors; and must not appear in highly visible places (e.g. profile photos, banners, or AI chat avatars if we designate those as public-facing). We may require labels, warnings, or age-restricted distribution consistent with our product controls.

Users under 18 may not use the Platform. We may still use sensitive content controls so adult users can choose what appears in feeds or recommendations.

2.3 Enforcement

We may remove unmarked adult content, restrict distribution, require labeling, or suspend accounts for repeated violations. Report via in-app tools or [email protected].

Non-Consensual Intimate Imagery (NCII)

3.1 Overview

You may not post, share, send in direct messages, or otherwise distribute intimate photos or videos of someone that were produced or distributed without their valid consent. This is sometimes called “revenge porn” or non-consensual intimate image abuse (NCII). It is a severe privacy violation and can cause serious physical, emotional, financial, and safety harm. This policy applies to uploads to profiles, posts, comments, chat attachments, expert or community spaces, and any other feature that allows media sharing on the Platform.

3.2 Violations of This Policy

Without consent of the person depicted, you may not post or share explicit sexual images or videos — including material that appears to be private, stolen, leaked, or recorded without knowledge. Examples include, but are not limited to:

  • (a) Hidden-camera or voyeuristic content involving nudity, partial nudity, or sexual acts
  • (b) “Creepshots,” upskirting, or similar images focused on a person’s intimate body areas without consent
  • (c) Digitally manipulated media (including “deepfakes”) that place someone’s face or likeness onto another person’s nude or sexual body
  • (d) Images or videos taken or shared in an intimate setting and not intended for public distribution
  • (e) Offering bounties, payments, or rewards in exchange for obtaining or distributing someone’s intimate images or videos
  • (f) Threats to distribute intimate material to coerce, harass, or harm someone

3.3 What Is Not a Violation (Context)

Consensually produced adult content may be permitted only where it complies with Section 2 (Adult content) and all labeling or distribution rules. If you post consensual adult material, you must use any sensitive-content or age-restriction tools we provide. We may label or restrict media if you do not.

3.4 Who Can Report

Because some consensual adult content may be allowed in limited contexts, we evaluate NCII reports with attention to consent and context. Anyone may report: creepshots or upskirting; content offering a bounty or payment for non-consensual intimate media; and intimate images or videos accompanied by text wishing harm or seeking “revenge,” or with information that could be used to contact or harass the person depicted (for example, phone numbers or direct calls to harass). For other reports, we may need to hear from the depicted person or an authorized representative (such as legal counsel) before we take enforcement action, so we can confirm lack of consent and reduce mistaken removals.

3.5 How to Report

Use the in-app reporting flow and choose the option that best describes non-consensual or unauthorized intimate imagery. You may include links, usernames, timestamps, and a short description. You may also email [email protected] with the subject line “NCII report.” Do not forward or attach illegal material; describe the location of the content on our Platform instead. If you believe a crime has occurred or someone is in immediate danger, contact local law enforcement.

3.6 Enforcement

We may remove content, restrict distribution, warn accounts, temporarily suspend posting, or permanently suspend accounts. We may immediately and permanently suspend accounts we identify as the original poster of non-consensual intimate media, accounts dedicated primarily to distributing such material (for example, upskirt or voyeurism accounts), or accounts used to solicit or trade NCII.

In some cases, a user may share content inadvertently (for example, to condemn abuse). We may require removal of the media and temporarily restrict the account; repeat violations may result in permanent suspension. If you believe enforcement was in error, you may appeal as described in our Community Standards.

3.7 Cooperation with Authorities

Where required or permitted by law, we may preserve data and report NCII to law enforcement or designated organizations. Nothing in this policy limits our ability to comply with valid legal process.

Abuse, Harassment & Hateful Conduct

We want open conversation — especially for women’s safety and empowerment — but not at the cost of targeted abuse. We prohibit behavior and content that harasses, threatens, degrades, or silences others, consistent with our Community Standards (Part IV).

4.1 Targeted Harassment

Prohibited conduct includes malicious, repeated targeting of a person (e.g. many posts or comments in a short period, dedicated harassment accounts, tagging or mentioning someone to humiliate them, or coordinated pile-ons).

4.2 Incitement

Do not encourage others to harass someone online or offline, including calls for physical confrontation.

4.3 Unwanted Sexual Content and Objectification

Unsolicited sexual media, unwanted sexual comments about someone’s body, solicitation of sexual acts, or sexual objectification without consent is prohibited — including in direct messages, comments, and AI chat misuse directed at other users (e.g. using the Platform to stalk or sexualize someone).

4.4 Insults and Context

We may act on insults or slurs used to target individuals, particularly where they form a pattern. We consider context: good-faith criticism of ideas or institutions is not the same as targeted harassment; consensual banter between friends may not violate this policy.

4.5 Names, Pronouns, and Dignity

Where required by law, or where we determine it necessary to protect targeted users after review, we may limit visibility of content that misgenders or deadnames someone maliciously. Complex cases may require information from the person affected.

4.6 Reporting and Enforcement

Anyone may report via the app or [email protected]. For some actions we may need to hear from the targeted person. Enforcement may include content removal, warnings, feature restrictions, temporary suspension, or permanent ban, depending on severity and history. Appeals may be submitted as described in our Community Standards.

Suicide & Self-Harm

We support open discussion of mental health, recovery, and help-seeking. We prohibit content that encourages, instructs, glorifies, or coordinates self-harm or suicide.

5.1 Prohibited Material

Examples include:

  • Graphic depiction or promotion of self-injury
  • Sharing methods or means of suicide
  • Encouraging disordered eating as a goal
  • Dangerous “challenges” that predictably cause serious harm
  • Facilitating substance abuse as self-harm

AI-generated content is treated the same as other media when it violates this policy.

5.2 Allowed Themes

We generally allow: personal stories of struggle without instructional detail; help-seeking; awareness and recovery content; and signposting to professional or crisis resources, provided the content does not romanticize self-harm.

5.3 AI Companion (Chat)

Our AI companion is not a crisis service. If you are in immediate danger, contact local emergency services. In the United States you can call or text 988 or text HOME to 741741. International users should use local crisis lines.

5.4 Reporting

Report concerns through in-app reporting or [email protected]. We may escalate to human review, restrict distribution, or remove content. We may take safety actions (including contacting authorities) where we believe there is imminent risk, as permitted by law.

Private Information, Doxxing & Offline Harassment

6.1 Posting Private Information Without Permission

You may not publish or direct others to another person’s private or personally identifying information without their permission and a valid public-interest justification. This includes, for example:

  • Home or physical address
  • Government ID numbers; passport or visa details
  • Financial account or payment information
  • Private phone numbers or personal email intended to be private
  • Medical records
  • Live location data used to stalk or endanger someone
  • Non-public intimate details shared to facilitate harassment (“doxxing”)
  • Minors’ schools, schedules, or locations disclosed in a risky way

Do not threaten to expose private information to silence, extort, or harm someone. We may remove or restrict content, lock features, or suspend accounts; we may report credible threats to law enforcement.

6.2 How to Report

Use in-app reporting or email [email protected]. If you are at risk of imminent harm, contact local emergency services.

Impersonation, Spam & Inauthentic Behavior

7.1 Impersonation

You may not pose as another person, organization, Expert Contributor, Diverse AI staff, moderator, law enforcement, medical professional, or emergency service in a deceptive way. Parody or commentary accounts, if permitted by product features, must be clearly distinguishable so a reasonable user would not be misled about identity or affiliation.

7.2 Spam and Platform Manipulation

Do not use the Platform to send bulk unsolicited messages, operate misleading engagement schemes, artificially inflate metrics, coordinate inauthentic amplification, or use unauthorized automation (bots, scrapers) in violation of our Terms. Do not mislead users about the origin of content solely to evade enforcement.

Violent Threats, Incitement & Graphic Violence

8.1 Threats and Incitement

You may not threaten violence against a person or group, or encourage others to commit violence, including coded or implicit threats where the intent is clear. Wishing serious physical harm or death in an abusive or targeted way may be treated as a threat depending on context.

8.2 Graphic Violence and Gore

We may apply sensitive-content treatment, limit distribution, or remove excessively graphic depictions of injury, death, or cruelty (including to people or animals), especially when shared for shock, harassment, or glorification. Documentary, news, or awareness contexts may be assessed differently, but we still prioritize user safety and re-victimization risk.

Hateful Conduct

9.1 Prohibited Hate

We prohibit attacking, dehumanizing, or promoting hatred against people on the basis of race, ethnicity, national origin, caste, religion, sex, sexual orientation, gender, gender identity, serious disease or disability, immigration status, or other protected characteristics under applicable law or our Community Standards. This includes slurs used maliciously, hateful symbols where context shows promotion of hate, and content that denies mass violence against protected groups when used to harass or incite (subject to review and jurisdiction).

9.2 Reporting

Report via in-app tools or [email protected]. We consider context; good-faith discussion, academic citation, or counter-speech may not violate this policy when clearly non-abusive.

Illegal or Regulated Activity, Scams & Exploitation

10.1 Prohibited Commercial and Criminal Conduct

You may not use the Platform to sell or facilitate illegal goods or services (including drugs, weapons where unlawful, human trafficking, sexual services where prohibited, or fraud schemes). Do not run scams, phishing, romance scams, investment fraud, or impersonation to obtain money or credentials. Offers must comply with law and our Expert marketplace rules.

Terrorism & Violent Extremism

11.1 Zero Tolerance

We prohibit terrorist organizations, violent extremist movements, and content that glorifies, recruits for, or provides instruction for terrorism or extremist violence. We may remove content, suspend accounts, and cooperate with law enforcement as permitted by law.

Misleading Synthetic, Manipulated & AI-Generated Media

12.1 Deceptive Media

You may not post synthetic, manipulated, or AI-generated audio, images, or video in a way that is likely to deceive people about real events, statements, or actions of identifiable individuals, except where clearly labeled as altered or fictional in line with product tools we provide. This is in addition to Section 3 (NCII) and Section 1 (child safety). Deepfakes used for fraud, election interference, or reputational harm may be removed or down-ranked.

12.2 Civic Integrity

You may not use the Platform to coordinate deliberate attempts to mislead voters or interfere with democratic processes where prohibited by law. We may label, reduce distribution, or remove coordinated inauthentic civic manipulation.

12.3 AI Companion Outputs

Outputs from Diverse AI’s own AI features are not independent verification of facts. Do not treat AI-generated summaries or suggestions as proof that another user said or did something.

Relationship to Other Policies

This document summarizes major safety themes for users (Sections 1–12) and should be read together with Part IV (Community Standards & Safety Policy) and the rest of our Legal Suite. Expert Contributors must also follow Part V (Expert Contributor Terms). For intellectual property, privacy, account termination, and dispute resolution, see the full Terms of Service and Privacy Policy at diverseaiapp.com.

Part I: Terms of Service
Part II: Privacy Policy
Part III: Cookie Policy
Part IV: Community Standards & Safety Policy
Part V: Expert Contributor Terms
Part VI: Accessibility & AI Ethics Policy

© 2026 Diverse Technical Solutions LLC. All rights reserved. · diverseaiapp.com