A former OpenAI researcher has stepped down from her position, citing deep concerns over the company’s recent decision to introduce advertising into ChatGPT. Zoë Hitzig, an economist who spent two years helping shape OpenAI’s model pricing and structure, announced her resignation on Monday, timed to coincide with the company’s initial tests of ad placements.
In a guest essay published in The New York Times, Hitzig argued that OpenAI is risking a repeat of the privacy mistakes made by social media giants like Facebook a decade ago. While acknowledging that advertising itself is not inherently wrong, she emphasized that the context of AI interactions makes it uniquely dangerous.
The "Archive of Human Candor"
Hitzig pointed out that users often treat AI chatbots as confidential confidants, sharing medical fears, relationship issues, and religious beliefs without suspecting any ulterior motive. She described this vast collection of personal data as an "archive of human candor" without precedent.
The introduction of ads, she warns, changes the fundamental nature of the relationship between the user and the tool. Unlike traditional search engines or social feeds, chatbots are often viewed as neutral entities, leading users to lower their guard and disclose highly sensitive information.
A Warning from History
Drawing a parallel to Facebook’s history, Hitzig noted that the social network once promised users control over their data, only for those pledges to erode under economic pressure. She expressed fear that OpenAI could follow a similar trajectory.
"I believe the first iteration of ads will probably follow those principles," Hitzig wrote regarding OpenAI's current safety promises. "But I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules."
Context of the Controversy
Hitzig’s resignation adds a significant voice to the growing industry debate regarding monetization strategies for generative AI. OpenAI recently confirmed it is testing ads for users on free and lower-tier subscription plans in the U.S. The company has stated that these ads will be clearly labeled, appear at the bottom of responses, and will not influence the chatbot's answers. However, for Hitzig, the economic incentives driving the ad model were enough to confirm that the company had stopped asking the critical questions she was hired to help answer.
