OpenAI has unveiled GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, promising faster performance, lower costs, and stronger instruction-following and coding abilities. The models accept up to one million tokens of context, enough to process content nearly eight times the size of the entire React codebase, making them well suited to complex software engineering, legal analysis, and multi-document review.
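For a sense of how developers might tap that long context window, here is a minimal sketch using OpenAI's Python SDK. The model IDs follow the naming in the announcement, but the file name, prompt, and overall workflow are illustrative assumptions rather than details from OpenAI.

```python
# Minimal sketch (illustrative, not from the announcement) of sending a large
# document to one of the new long-context models via OpenAI's Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# With a one-million-token context window, a whole codebase dump or a bundle
# of documents can plausibly fit into a single request.
with open("large_codebase_dump.txt") as f:  # hypothetical input file
    source = f.read()

response = client.chat.completions.create(
    model="gpt-4.1",  # or "gpt-4.1-mini" / "gpt-4.1-nano" for lower cost and latency
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Summarize the main modules in this codebase:\n\n{source}"},
    ],
)
print(response.choices[0].message.content)
```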

On benchmarks, GPT-4.1 scored 54.6% on SWE-bench Verified, a 21.4-percentage-point gain over GPT-4o on coding tasks, and 38.3% on Scale’s MultiChallenge, a 10.5-point improvement in instruction following. Notably, GPT-4.1 mini delivers near-GPT-4o performance at much lower latency and 83% lower cost, while GPT-4.1 nano is OpenAI's fastest model yet.

OpenAI has begun rolling out GPT-4.1 to ChatGPT Plus, Pro, and Team subscribers, while GPT-4.1 mini is available to both free and paying users. The rollout follows earlier criticism that the company shipped the model without a safety report, prompting OpenAI to promise more frequent AI safety disclosures through its new Safety Evaluations Hub.
