Local agents make you faster. Cloud agents do the work while you're not there.
Smaller specialized models can match or beat frontier generalists on the tasks they're trained for. Working with Applied Compute, we RL-trained SWE-check, a bug detection model that matches Opus 4.6 on our internal evals while running ~10x faster.
We’re retiring the old Core and Team plans and introducing a new lineup: Free, Pro, Max, Teams, and Enterprise. We’re also beginning to charge for products that have been free until now, including Ask Devin and Devin Review.
Today we're launching Cognition Japan, our first expansion into Asia, to partner with Japanese enterprises ready to transform how software gets built.
We wrote up why COBOL is so hard for agents, what it takes to get it right, and where Devin is delivering today.
We're releasing SWE-1.6, our latest model optimized for both intelligence and model UX.
Devin can now schedule its own recurring sessions. Run a task once, and if it goes well, tell Devin to keep doing it. It maintains state between runs, so each session picks up where the last one left off.
Devin can now break down large tasks and delegate them to a team of managed Devins, with each running in its own isolated VM in parallel.
We are sharing an early preview of our ongoing SWE-1.6 training run.
Devin is a cloud agent platform for engineering teams. You work with it like a teammate — give it tasks, review its PRs, and let it handle your backlog. Here's how we use it to build Devin itself.
Today, we launch Cognition for Government to modernize America’s critical infrastructure with AI software engineering.
Today we’re releasing Devin 2.2, the most important update to Devin since launch.
We built a feature that massively increased our internal token spend on Devin. But our PRs now have far fewer bugs, and we can't go back.
We’re excited to join Cursor, Cloudflare, Vercel, git-ai, OpenCode, and others in supporting [Agent Trace](https://agent-trace.dev/), an open, vendor-neutral spec for recording AI contributions alongside human authorship in version-controlled codebases.
Cognizant has partnered with Cognition to deploy Devin and Windsurf across its engineering teams and customer base.
Cognition expands to Europe and opens a London office.
As code generation gets easier, code review is the new bottleneck. That's why we're launching a new way to quickly review and understand complex PRs in our latest tool for codebase understanding, augmenting human attention with AI.
Infosys, a global leader in digital services and consulting, has partnered with Cognition to deploy Devin, the AI software engineer, across its organization and global client base.
Eighteen months since launch, Devin has gone from tackling small projects to deeply embedding in engineering teams at thousands of companies, including some of the largest businesses in the world. We decided it was well past time for Devin to get a performance review, just like any human engineer.
Codemaps offers a shared understanding of a system between humans and AI, letting your AI teach you about the code in front of you quickly and elegantly. A codemap can be generated for any system or snippet to illuminate its code paths, helping users learn and recall. Codemaps lets AI be a partner that explains code accurately and consistently, rather than generating tons of inscrutable slop.
Today we’re releasing SWE-1.5, the latest in our family of models optimized for software engineering. It is a frontier-size model with hundreds of billions of parameters that achieves near-SOTA coding performance. It also sets a new standard for speed: we partnered with Cerebras to serve it at up to 950 tok/s – 6x faster than Haiku 4.5 and 13x faster than Sonnet 4.5. SWE-1.5 is now available in Windsurf!