Leaked internal documents from Meta reveal that the technology giant has implemented strict guidelines preventing its AI chatbot from discussing abortion and sexual health topics with users under the age of 18. The policies, obtained by Mother Jones, outline a broad prohibition on content related to reproductive organs, contraception, and abortion access, creating a significant contrast with the company's approach to other sensitive health topics.
The guidelines explicitly forbid the AI from providing information that could help a user obtain an abortion, including location details, and from offering value judgments on the procedure. The chatbot is also barred from giving minors advice on sexual health, STI prevention, and menstrual hygiene. This approach differs sharply from its protocols for mental health: when users ask about suicide, self-harm, or eating disorders, the AI is programmed to direct them toward professional helplines and counselors.
A Pattern of Censorship
Martha Dimitratou, head of the advocacy group Repro Uncensored, criticized the move as part of a wider trend of censorship across Meta’s platforms. Data indicates that the removal of content related to sexual and reproductive health, as well as LGBTQ communities, has more than doubled between 2024 and 2025. Dimitratou noted that while Meta treats mental health crises as safety issues requiring resources, reproductive health is treated primarily as a political liability.
Technologists and digital rights advocates suggest these policies may be a response to mounting political pressure. Jacob Hoffman-Andrews of the Electronic Frontier Foundation highlighted concerns that Meta is capitulating to conservative political agendas, referencing recent executive orders aimed at suppressing information related to gender and sexuality in AI systems.
Impact and Testing
Independent testing of the AI revealed that the restrictions often go beyond the stated policies. In some instances, the chatbot refused to discuss basic topics such as menstruation and contraception, even where that information is legally accessible to minors. Tests conducted from locations where abortion is legal, such as Brussels, likewise resulted in the chatbot refusing to engage on the subject, suggesting a blanket application of the restrictions that ignores local law.
In response to the findings, a Meta spokesperson stated that the company’s AI is designed to engage in age-appropriate discussions and provide factual information without offering opinions. The company affirmed its commitment to safety while maintaining that it allows discussion and debate on healthcare services within the boundaries of its policies.