OpenAI announced on Thursday the release of GPT-4o mini, a lightweight version of its flagship artificial intelligence model, GPT-4o. The company said the new, smaller model is its most cost-efficient yet, costing more than 60% less than GPT-3.5 Turbo.
OpenAI said GPT-4o mini aims to make the startup’s AI technology “much more affordable,” climate-friendly, and available to everyone. The model became available to free ChatGPT users and to paying Plus and Team subscribers on July 18, while Enterprise users will gain access starting next week, the company said.
OpenAI says GPT-4o mini beats rivals on major benchmarks
According to OpenAI CEO Sam Altman, GPT-4o mini costs 15 cents per million input tokens and 60 cents per million output tokens, making it more than 60% cheaper than GPT-3.5 Turbo. He added that the new model is roughly 100 times cheaper than text-davinci-003, an earlier OpenAI model.
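To put those per-token rates in concrete terms, here is a minimal back-of-the-envelope sketch in Python; the request size used below is a hypothetical example for illustration, not a figure from OpenAI.

```python
# Hypothetical cost estimate at GPT-4o mini's published rates
# ($0.15 per 1M input tokens, $0.60 per 1M output tokens).
INPUT_RATE = 0.15 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.60 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a request with 2,000 input tokens and 500 output tokens
# (assumed sizes) works out to $0.0006 — well under a tenth of a cent.
print(f"${request_cost(2_000, 500):.6f}")
```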
In a post on X (formerly Twitter), Altman framed the release as a step “towards intelligence too cheap to meter.”
towards intelligence too cheap to meter: https://t.co/76GEqATfws
15 cents per million input tokens, 60 cents per million output tokens, MMLU of 82%, and fast.
most importantly, we think people will really, really like using the new model.
— Sam Altman (@sama) July 18, 2024
In a blog post, OpenAI said GPT-4o mini outperforms the GPT-4 model on chat preferences and scored 82% on the Massive Multitask Language Understanding (MMLU) benchmark. MMLU measures a language model’s knowledge and reasoning across a wide range of text-based subjects; a higher score indicates the model can understand and apply language more proficiently in real-world tasks.
By comparison, Google’s Gemini Flash scored 77.9% on the MMLU test and Anthropic’s Claude Haiku scored 73.8%, OpenAI detailed. GPT-4o mini, which has a knowledge cutoff of October 2023, also beat its closest competitors on math and coding tasks.
GPT-4o mini gets new safety measures
OpenAI said the mini model has new safety features to resist jailbreaks and so-called prompt injections. It explained that the safety features improve the reliability of the model’s responses and increase safety in large-scale applications.
GPT-4o mini is currently available as a text and vision model through OpenAI’s application programming interface (API). Support for text, image, video, and audio inputs and outputs will be added in the future, the company said.
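For developers, a minimal sketch of calling the model through OpenAI’s official Python SDK might look like the following; it assumes the model is exposed under the "gpt-4o-mini" identifier, that an API key is set in the OPENAI_API_KEY environment variable, and that the prompt is purely illustrative.

```python
# Minimal sketch: sending a text prompt to GPT-4o mini via OpenAI's Python SDK.
# Assumes the model identifier "gpt-4o-mini" and an OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed API name for the new model
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what MMLU measures in one sentence."},
    ],
    max_tokens=100,
)

print(response.choices[0].message.content)
```

Because the API version also accepts images, the same messages list can carry image inputs as well, though the exact payload shape should be checked against OpenAI’s documentation.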