Child Safety
Our Platform is intended for adults 18 years of age or older (see Terms of Service). We have zero tolerance for child sexual exploitation and abuse material (CSAM), grooming, or any sexualization of minors. We also restrict certain media depicting physical abuse of children to reduce re-victimization and normalization of violence against children.
Throughout our policies we use “child,” “children,” and “minor” to mean any person under 18 years of age.
We prohibit any content or conduct that features, promotes, solicits, or facilitates child sexual exploitation or abuse, including: real imagery or video; text-based sexualization of minors; illustrated, animated, or computer-generated depictions (including outputs from generative AI) when they constitute or promote sexual exploitation of minors; links or instructions intended to obtain such material; and grooming or predatory behavior toward minors.
Intent does not excuse harm. Content shared “for awareness,” humor, or outrage can still re-victimize children and spread abusive material. We may remove such content and suspend or terminate accounts, and we may preserve and report data to law enforcement and designated organizations as required or permitted by law.
To limit re-victimization and normalization of violence against children, we may remove depictions of physical child abuse in many cases, including when shared to raise awareness. When reviewing reports, we may consider factors such as: whether the child appears nude, partially clothed, or fully clothed; the severity of harm shown; and whether the context appears abusive, non-abusive (e.g. advocacy), or newsworthy — without waiving our right to remove content where we believe removal protects minors.
We aim to protect minors depicted in fights or assaults. We may remove or restrict content based on factors such as: abusive vs. non-abusive context; whether we have a report from the minor or an authorized representative; and whether imagery is excessively graphic.
Report through the in-app reporting flow or by emailing [email protected]. Include URLs, usernames, timestamps, and a description where possible. If a child is in immediate danger, contact local law enforcement first.
United States: You may also report child sexual exploitation to the National Center for Missing & Exploited Children (NCMEC) via the CyberTipline at cybertipline.org or call 1-800-THE-LOST (1-800-843-5678). Do not download, screenshot, or re-share suspected CSAM to “prove” a violation — report and disengage.
Adult Content (Age-Gated)
Diverse AI is a women-centered platform focused on wellbeing, community, and professional connection. We do not position the Platform as an adult-content service. Where we allow limited consensual adult expression (e.g. in private or labeled contexts as our product design permits), it must comply with this section, our Community Standards, and applicable law.
Adult content includes consensually created material depicting adult nudity or sexual behavior that is pornographic or primarily intended to sexually arouse, including AI-generated, photographic, or animated depictions when they meet that description.
Adult content must be consensually produced and shared; must not involve minors, non-consent, exploitation, or coercion; must not sexualize minors; and must not appear in highly visible places (e.g. profile photos, banners, or AI chat avatars if we designate those as public-facing). We may require labels, warnings, or age-restricted distribution consistent with our product controls.
Users under 18 may not use the Platform. We may still use sensitive content controls so adult users can choose what appears in feeds or recommendations.
We may remove unmarked adult content, restrict distribution, require labeling, or suspend accounts for repeated violations. Report via in-app tools or [email protected].
Non-Consensual Intimate Imagery (NCII)
You may not post, share, send in direct messages, or otherwise distribute intimate photos or videos of someone that were produced or distributed without their valid consent. This is sometimes called “revenge porn” or non-consensual intimate imagery (NCII) abuse. It is a severe privacy violation and can cause serious physical, emotional, financial, and safety harm. This policy applies to uploads to profiles, posts, comments, chat attachments, expert or community spaces, and any other feature that allows media sharing on the Platform.
Without consent of the person depicted, you may not post or share explicit sexual images or videos — including material that appears to be private, stolen, leaked, or recorded without knowledge. Examples include, but are not limited to:
- (a) Hidden-camera or voyeuristic content involving nudity, partial nudity, or sexual acts
- (b) “Creepshots,” upskirting, or similar images focused on a person’s intimate body areas without consent
- (c) Digitally manipulated media (including “deepfakes”) that place someone’s face or likeness onto another person’s nude or sexual body
- (d) Images or videos taken or shared in an intimate setting and not intended for public distribution
- (e) Offering bounties, payments, or rewards in exchange for obtaining or distributing someone’s intimate images or videos
- (f) Threats to distribute intimate material to coerce, harass, or harm someone
Consensually produced adult content may be permitted only where it complies with Section 2 (Adult Content) and all labeling or distribution rules. If you post consensual adult material, you must use any sensitive-content or age-restriction tools we provide. We may label or restrict media if you do not.
Because some consensual adult content may be allowed in limited contexts, we evaluate NCII reports with attention to consent and context. Anyone may report: creepshots or upskirting; content offering a bounty or payment for non-consensual intimate media; and intimate images or videos accompanied by text wishing harm on or seeking “revenge” against the person depicted, or by information that could be used to contact or harass them (for example, a phone number). For other reports, we may need to hear from the depicted person or an authorized representative (such as legal counsel) before we take enforcement action, so we can confirm lack of consent and reduce mistaken removals.
Use the in-app reporting flow and choose the option that best describes non-consensual or unauthorized intimate imagery. You may include links, usernames, timestamps, and a short description. You may also email [email protected] with the subject line “NCII report.” Do not forward or attach illegal material; describe the location of the content on our Platform instead. If you believe a crime has occurred or someone is in immediate danger, contact local law enforcement.
We may remove content, restrict distribution, warn accounts, temporarily suspend posting, or permanently suspend accounts. We may immediately and permanently suspend accounts we identify as the original poster of non-consensual intimate media, accounts dedicated primarily to distributing such material (for example, upskirt or voyeurism accounts), or accounts used to solicit or trade NCII.
In some cases, a user may share content inadvertently (for example, to condemn abuse). We may require removal of the media and temporarily restrict the account; repeat violations may result in permanent suspension. If you believe enforcement was in error, you may appeal as described in our Community Standards.
Where required or permitted by law, we may preserve data and report NCII to law enforcement or designated organizations. Nothing in this policy limits our ability to comply with valid legal process.
Abuse, Harassment & Hateful Conduct
We want open conversation — especially for women’s safety and empowerment — but not at the cost of targeted abuse. We prohibit behavior and content that harasses, threatens, degrades, or silences others, consistent with our Community Standards (Part IV).
Prohibited conduct includes malicious, repeated targeting of a person (e.g. many posts or comments in a short period, dedicated harassment accounts, tagging or mentioning someone to humiliate them, or coordinated pile-ons).
Do not encourage others to harass someone online or offline, including calls for physical confrontation.
Unsolicited sexual media, unwanted sexual comments about someone’s body, solicitation of sexual acts, or sexual objectification without consent is prohibited — including in direct messages, comments, and AI chat misuse directed at other users (e.g. using the Platform to stalk or sexualize someone).
We may act on insults or slurs used to target individuals, particularly where they form a pattern. We consider context: good-faith criticism of ideas or institutions is not the same as targeted harassment; consensual banter between friends may not violate this policy.
Where required by law, or where we determine it necessary to protect targeted users after review, we may limit visibility of content that misgenders or deadnames someone maliciously. Complex cases may require information from the person affected.
Anyone may report via the app or [email protected]. For some actions we may need to hear from the targeted person. Enforcement may include content removal, warnings, feature restrictions, temporary suspension, or permanent ban, depending on severity and history. Appeals may be submitted as described in our Community Standards.
Suicide & Self-Harm
We support open discussion of mental health, recovery, and help-seeking. We prohibit content that encourages, instructs, glorifies, or coordinates self-harm or suicide.
Examples include:
- Graphic depiction or promotion of self-injury
- Sharing methods or means of suicide
- Encouraging disordered eating as a goal
- Dangerous “challenges” that predictably cause serious harm
- Facilitating substance abuse as self-harm
AI-generated content is treated the same as other media when it violates this policy.
We generally allow: personal stories of struggle without instructional detail; help-seeking; awareness and recovery content; and signposting to professional or crisis resources, provided the content does not romanticize self-harm.
Our AI companion is not a crisis service. If you are in immediate danger, contact local emergency services. In the United States you can call or text 988 or text HOME to 741741. International users should use local crisis lines.
Report concerns through in-app reporting or [email protected]. We may escalate to human review, restrict distribution, or remove content. We may take safety actions (including contacting authorities) where we believe there is imminent risk, as permitted by law.
Private Information, Doxxing & Offline Harassment
You may not publish or direct others to another person’s private or personally identifying information without their permission and a valid public-interest justification. This includes, for example:
- Home or physical address
- Government ID numbers; passport or visa details
- Financial account or payment information
- Private phone numbers or personal email intended to be private
- Medical records
- Live location data used to stalk or endanger someone
- Non-public intimate details shared to facilitate harassment (“doxxing”)
- Minors’ schools, schedules, or locations disclosed in a risky way
Do not threaten to expose private information to silence, extort, or harm someone. We may remove or restrict content, lock features, or suspend accounts, and we may report credible threats to law enforcement.
Use in-app reporting or email [email protected]. If you are at risk of imminent harm, contact local emergency services.
Impersonation, Spam & Inauthentic Behavior
You may not pose as another person, organization, Expert Contributor, Diverse AI staff, moderator, law enforcement, medical professional, or emergency service in a deceptive way. Parody or commentary accounts, if permitted by product features, must be clearly distinguishable so a reasonable user would not be misled about identity or affiliation.
Do not use the Platform to send bulk unsolicited messages, operate misleading engagement schemes, artificially inflate metrics, coordinate inauthentic amplification, or use unauthorized automation (bots, scrapers) in violation of our Terms. Do not mislead users about the origin of content solely to evade enforcement.
Violent Threats, Incitement & Graphic Violence
You may not threaten violence against a person or group, or encourage others to commit violence, including coded or implicit threats where the intent is clear. Wishing serious physical harm or death in an abusive or targeted way may be treated as a threat depending on context.
We may require sensitive-content treatment, limit distribution, or remove excessively graphic depictions of injury, death, or cruelty (including to people or animals), especially when shared for shock, harassment, or glorification. Documentary, news, or awareness contexts may be assessed differently, but we still prioritize user safety and re-victimization risk.
Hateful Conduct
We prohibit attacking, dehumanizing, or promoting hatred against people on the basis of race, ethnicity, national origin, caste, religion, sex, sexual orientation, gender, gender identity, serious disease or disability, immigration status, or other protected characteristics under applicable law or our Community Standards. This includes slurs used maliciously, hateful symbols where context shows promotion of hate, and content that denies mass violence against protected groups when used to harass or incite (subject to review and jurisdiction).
Report via in-app tools or [email protected]. We consider context; good-faith discussion, academic citation, or counter-speech may not violate this policy when clearly non-abusive.
Illegal or Regulated Activity, Scams & Exploitation
You may not use the Platform to sell or facilitate illegal goods or services (including drugs, weapons where unlawful, human trafficking, sexual services where prohibited, or fraud schemes). Do not run scams, phishing, romance scams, investment fraud, or impersonation to obtain money or credentials. Offers must comply with law and our Expert marketplace rules.
Terrorism & Violent Extremism
We prohibit terrorist organizations, violent extremist movements, and content that glorifies, recruits for, or provides instruction for terrorism or extremist violence. We may remove content, suspend accounts, and cooperate with law enforcement as permitted by law.
Misleading Synthetic, Manipulated & AI-Generated Media
You may not post synthetic, manipulated, or AI-generated audio, images, or video in a way that is likely to deceive people about real events, statements, or actions of identifiable individuals, except where clearly labeled as altered or fictional in line with product tools we provide. This is in addition to Section 3 (Non-Consensual Intimate Imagery) and Section 1 (Child Safety). Deepfakes used for fraud, election interference, or reputational harm may be removed or down-ranked.
You may not use the Platform to coordinate deliberate attempts to mislead voters or interfere with democratic processes where prohibited by law. We may label, reduce distribution, or remove coordinated inauthentic civic manipulation.
Outputs from Diverse AI’s own AI features are not independent verification of facts. Do not treat AI-generated summaries or suggestions as proof that another user said or did something.
Relationship to Other Policies
This document summarizes major safety themes for users (Sections 1–12) and should be read together with Part IV (Community Standards & Safety Policy) and the rest of our Legal Suite. Expert Contributors must also follow Part V (Expert Contributor Terms). For intellectual property, privacy, account termination, and dispute resolution, see the full Terms of Service and Privacy Policy at diverseaiapp.com.
© 2026 Diverse Technical Solutions LLC. All rights reserved. · diverseaiapp.com