OpenAI's GPT-5.2 Surfaces on Cursor Amid Intensifying AI Competition

Victor Zhang

OpenAI's GPT-5.2 model has reportedly appeared in the Cursor IDE, with some observers suggesting it is specifically designed to compete with Google's Gemini 3. Screenshots circulating within the developer community show "gpt-5.2" and "gpt-5.2-thinking" options in Cursor's model dropdown menu.

The decision to surface GPT-5.2 in a programming environment like Cursor, rather than in the ChatGPT web interface, suggests OpenAI treats programming both as a key application for AI and as a benchmark for a model's reasoning capabilities.

Project Garlic and Model Capabilities

Evidence suggests GPT-5.2, internally codenamed "Project Garlic," is a specialized model with a re-architected design, not a simple fine-tuned version of GPT-5. OpenAI Chief Research Officer Mark Chen stated that GPT-5.2's performance in programming and logical reasoning tasks surpasses both Gemini 3 and Anthropic's Opus 4.5. The model also reportedly demonstrates improved long-term task execution, maintaining context over extended operations. In Cursor, this could enable it to understand entire code repositories and adjust multiple referenced files with minimal errors when a single file is modified.
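
As a purely hypothetical illustration of that cross-file behavior (the file names and function below are invented for this article, not taken from any GPT-5.2 output), changing a helper's signature in one module obliges a repository-aware agent to update every call site that references it:

```python
# utils.py (hypothetical): a helper imported by several other modules.
def load_settings(path, strict=True):  # "strict" is the newly added parameter
    """Read a simple key=value settings file."""
    settings = {}
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if "=" not in line:
                if strict:
                    raise ValueError(f"malformed line: {line!r}")
                continue
            key, value = line.split("=", 1)
            settings[key] = value
    return settings


# main.py (hypothetical): a repository-aware agent should also revise this
# call site to match the new signature instead of leaving it stale.
from utils import load_settings

settings = load_settings("app.cfg", strict=False)
```

A model limited to single-file context would typically edit utils.py and miss main.py; repository-wide context is what the reported improvement targets.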

This agentic capability is read as a strategic move by OpenAI to counter what it perceives as an ecosystem blockade by Gemini 3. Neither "GPT-5.2" nor "Garlic" is an official product name: "Garlic" is an internal codename, and the model may ship publicly as GPT-5.2 or GPT-5.5. Some media outlets, such as TechStartups, have reported that "Garlic" is slated for release as soon as it is stable.

Leaked information suggests the GPT-5.2 or Garlic model will introduce several enhancements:

  • Enhanced mathematical reasoning: Improved precision for complex problem-solving.

  • Advanced academic reasoning: Optimized processing for nuanced queries and detailed responses.

  • Faster processing and energy efficiency: Reduced latency and computational costs.

  • Enhanced reliability: Fewer errors and inconsistencies.

  • Customizability: Greater flexibility for users to adjust model behavior.

The "Shallotpeat" Initiative

OpenAI is also reportedly developing an "even bigger" model, codenamed "Shallotpeat." The codename alludes to the fact that shallots grow poorly in peat soil, implying that the company's existing pre-training methods are suboptimal and require a foundational overhaul.

Shallotpeat was initially revealed by OpenAI CEO Sam Altman to employees last October as a model specifically developed to challenge Gemini 3. Following Gemini 3's release and strong performance, OpenAI integrated error correction solutions from Shallotpeat's pre-training phase into Garlic.

According to The Information, Altman had previously warned employees in an internal memo about Google's AI advancements, noting a shrinking lead for OpenAI. He specifically mentioned Google's development of a new AI that appeared to surpass OpenAI in training methods, referring to Gemini 3. Altman acknowledged Google's recent success in pre-training, an area where OpenAI had encountered challenges as its models scaled.

Strategic Shifts and Resource Allocation

Altman has reportedly de-emphasized the pursuit of Artificial General Intelligence (AGI) to prioritize competitive survival against Google. Internally, OpenAI has pushed for improvements to ChatGPT's quality, even if that means delaying advertising and personal assistant features. The shift suggests a temporary pause in AGI-oriented work to shore up the company's immediate competitive standing, particularly given its planned multi-trillion-dollar investment in infrastructure over the next five years.

Despite the competitive pressure, ChatGPT held the top spot in Apple's 2025 app rankings, while Gemini ranked lower.

Computing Power and Market Dynamics

The emergence of Google's Gemini 3 has intensified the competition, turning computing power into a "zero-sum game." OpenAI is reportedly concentrating its resources on text and reasoning models like GPT-5.2, potentially at the expense of projects like Sora, its video generation model. Safety reviews and deepfake risks are the stated reasons for Sora's pause, but the underlying factor is reportedly the immense computing power video generation requires compared to text models.

Google, too, faces computing power constraints. In December 2025, developers using Google AI Studio experienced significant reductions in free API quotas for Gemini 2.5 Pro and Gemini 2.5 Flash. Logan Kilpatrick, Product Lead for Google AI Studio, confirmed that these reductions were necessary to reallocate computing resources to meet the demands of new models like Gemini 3 Pro and Nano Banana Pro (Gemini 3 Pro Image).
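
For developers hit by the tighter free-tier limits, the practical symptom is more frequent quota errors (HTTP 429) from the Gemini API. The sketch below is a minimal, unofficial example of retrying with exponential backoff against the public REST endpoint; the helper name and retry policy are our own, and it assumes an API key issued via Google AI Studio:

```python
import os
import time

import requests

API_KEY = os.environ["GEMINI_API_KEY"]  # key issued via Google AI Studio
URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-2.5-flash:generateContent"
)


def generate_with_backoff(prompt: str, max_retries: int = 5) -> dict:
    """Call the Gemini API, backing off when the free quota is exhausted."""
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    for attempt in range(max_retries):
        response = requests.post(
            URL, params={"key": API_KEY}, json=payload, timeout=60
        )
        if response.status_code != 429:  # 429 = rate/quota limit exceeded
            response.raise_for_status()
            return response.json()
        time.sleep(2 ** attempt)  # tighter free quotas mean 429s arrive sooner
    raise RuntimeError("quota still exhausted after retries")


if __name__ == "__main__":
    result = generate_with_backoff("Summarize today's AI model announcements.")
    print(result["candidates"][0]["content"]["parts"][0]["text"])
```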