
April 18, 2026

Is Cursor using their own in house model? What have experts said about it?

11 episodes · 8 podcasts · Apr 29, 2025 – Mar 9, 2026

Cursor employs a pragmatic multi-model strategy rather than relying on a single in-house model [1, 29]. The company uses an ensemble of different large language models, with one expert citing as many as 13 distinct AIs, to power various aspects of its code editor [3, 17]. This hybrid approach leverages large, powerful third-party foundation models from providers like Anthropic for complex reasoning and agentic tasks, while deploying a suite of custom-trained and fine-tuned models for performance-critical functions [1, 3, 15]. The team initially planned to rely on third-party models, but found it essential to develop its own to achieve the necessary product quality and build a competitive moat, thereby challenging the notion that AI application companies are merely "wrappers".
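In practice, an ensemble like the one described above reduces to routing each request to a model suited to the task. The sketch below is purely illustrative; the model names and task labels are hypothetical, not Cursor's actual API.

```python
# Hypothetical task-based model routing, sketching the multi-model
# strategy described above. Names are illustrative placeholders.

ROUTING_TABLE = {
    "agentic_edit": "frontier-model",      # large third-party model for complex reasoning
    "chat": "frontier-model",
    "tab_completion": "custom-tab-model",  # small in-house model tuned for low latency
    "next_edit": "custom-tab-model",
}

def route(task: str) -> str:
    """Pick a model for a request; fall back to the frontier model."""
    return ROUTING_TABLE.get(task, "frontier-model")
```

The key design choice such a router encodes is that latency-critical tasks go to small custom models while open-ended reasoning goes to a large general-purpose one.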

The company's in-house and custom model development is extensive and operates at significant scale. Cursor runs the DeepSeek V2 model on its own inference infrastructure, which it has scaled to handle **hundreds of millions of daily calls** [4, 8, 14]. For code completion, a custom "tab" model processes approximately 100 million requests per day. A key proprietary asset is the "Composer" model, described as a fine-tuned version of a large Chinese mixture-of-experts model designed for rapid coding iterations [2, 28]. This model is particularly dynamic: one source reports that its weights are updated from real-world user feedback as frequently as every 90 minutes. Beyond these specialized models, Cursor has also released its own foundation model designed specifically for programming, which serves as an alternative to models from OpenAI or Anthropic within the platform.
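The completion figure above implies substantial sustained throughput. A quick back-of-envelope check:

```python
# Rough throughput implied by ~100 million completion requests per day.
requests_per_day = 100_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

avg_qps = requests_per_day / seconds_per_day
print(round(avg_qps))  # ≈ 1157 requests/second, averaged over the day
```

Real traffic is bursty, so peak load would be a multiple of this average, which is part of why dedicated inference infrastructure matters at this scale.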


The strategic rationale for this custom model development centers on performance, defensibility, and unit economics. For speed-sensitive tasks like autocomplete, custom models are crucial for meeting latency targets of **under 300ms**. This focus is so integral that experts report every significant feature or "magic moment" in the editor is powered by a custom-trained model [23, 25]. The strategy also creates a data flywheel, where user data from larger models is used to distill smaller, faster, and more efficient versions. Economically, developing proprietary models is a key lever for improving margins: one analyst noted that Cursor's gross margins are low due to payments to model providers like Anthropic, and another expert described the strategy as achieving massive distribution with third-party models before launching an efficient in-house model like Composer to drastically improve profitability [9, 22]. This allows Cursor to build a more durable business by creating specialized models for specific coding tasks, particularly for enterprise users.
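A latency target like the one cited above is typically enforced as a percentile budget rather than an average. A minimal sketch, where the percentile choice and helper name are assumptions for illustration:

```python
# Illustrative latency-budget check for an autocomplete endpoint,
# assuming the ~300 ms target cited above is enforced at p95.

LATENCY_BUDGET_MS = 300

def within_budget(samples_ms: list[float], percentile: float = 0.95) -> bool:
    """True if the given percentile of observed latencies meets the budget."""
    ordered = sorted(samples_ms)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[idx] <= LATENCY_BUDGET_MS
```

For example, a service where 5% of requests take 400 ms fails a 300 ms p95 budget even if the other 95% are fast, which is why small purpose-built models are attractive for this path.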

What the sources say

Points of agreement

  • Cursor employs a multi-model strategy, using an ensemble of custom-built, open-source, and third-party models for different tasks.
  • The company develops its own custom models for speed-sensitive features like code completion, which handle hundreds of millions of daily requests.
  • Cursor runs large open-source models, specifically DeepSeek V2, on its own inference infrastructure at a massive scale.

Points of disagreement

  • Sources describe Cursor's in-house model differently, with one calling it a new foundation model and another calling it a fine-tuned version of a Chinese model.
  • Expert opinion is split, with some calling the tool obsolete while others highlight its multi-billion dollar ARR and indispensability to engineers.
  • There is a contrast between reports of low margins due to payments to Anthropic and the stated strategy of using their own model to drastically improve margins.

Sources

Gradient DissentApr 29, 2025

Inside Cursor: The future of AI coding with Co-founder Sualeh Asif

Co-founder Sualeh Asif details Cursor's pragmatic multi-model strategy, its use of DeepSeek V2 on custom infrastructure, and its 'dogfooding' approach to product development.

Lex Fridman PodcastJan 31, 2026

State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490

This source specifies that Cursor's 'Composer' model is a fine-tuned version of a large Chinese mixture-of-experts model that continuously learns from user feedback.

Lenny's PodcastMay 1, 2025

The rise of Cursor: The $300M ARR AI tool that engineers can’t stop using | Michael Truell

Michael Truell explains that Cursor's competitive advantage comes from using custom-trained models to power every significant feature, rather than just being a wrapper for other APIs.

20VC with Harry StebbingsMar 9, 2026

"Cursor is Dead" is Total BS: Here is Why | Miles Clements

Miles Clements counters claims of Cursor's decline by pointing to its strategy of building specialized models and its achievement of ARR in the billions.

a16z PodcastJan 13, 2026

Ben Horowitz on Investing in AI: AI Bubbles, Economic Impact, and VC Acceleration

Ben Horowitz states that Cursor has released its own foundation model for programming and that the editor is a composite of 13 different AI models.

20VC with Harry StebbingsNov 24, 2025

Base44’s Founder, Maor Shlomo on How Vibe Coding Will Kill SaaS

This source outlines Cursor's business strategy: achieve wide distribution with third-party models, then introduce its own efficient 'Composer' model to significantly increase margins.

