OpenAI has announced the release of GPT-4o, an improved version of its GPT-4 model that powers ChatGPT, its flagship product.
The announcement came during OpenAI's Spring Update event, which was streamed live on YouTube as the company had promised.
In a livestream announcement on Monday, OpenAI CTO Mira Murati stated that the most recent update “is much faster” and enhances “capabilities across text, vision, and audio.”
According to Murati, everyone can use the model for free, while paying users will still "have up to five times the capacity limits" of free users.
According to OpenAI, GPT-4o runs much faster than its predecessor while offering GPT-4-level capability.
In addition, GPT-4o introduces new technology for its voice mode, which allows users to converse with ChatGPT via their microphones.
Previously, the lag between a user finishing speaking and ChatGPT responding broke the conversational immersion OpenAI aimed for in voice mode.
According to a post by OpenAI CEO Sam Altman, the model is "natively multimodal," meaning it can comprehend instructions and produce content across speech, text, and images.
On X, Altman said the GPT-4o API is twice as fast and half the price of GPT-4 Turbo, inviting developers to experiment with it.
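For developers who want to try it, switching an existing integration to GPT-4o is typically just a change of the `model` parameter. A minimal sketch, assuming the official OpenAI Python SDK (`openai` 1.x) and a hypothetical prompt; the helper function and payload structure here are illustrative, not OpenAI's own sample code:

```python
import os

def build_request(prompt: str) -> dict:
    # "gpt-4o" replaces an older model name such as "gpt-4-turbo";
    # the rest of the chat-completions payload is unchanged.
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Explain what this function does.")
print(payload["model"])  # gpt-4o

# The actual call requires an API key and network access:
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**payload)
    print(response.choices[0].message.content)
```

Because the request shape is identical to GPT-4 Turbo's, the speed and price improvements Altman described come without code changes beyond the model name.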
Microsoft-backed OpenAI is under increasing pressure to expand the user base of ChatGPT, its popular chatbot that stunned the world with its capacity to generate working software code and human-like text.
Demonstrators also used the GPT-4o desktop app to review part of their code.
In addition to explaining what the code does, GPT-4o could predict the results of making particular modifications to it.
We earlier reported that Microsoft had allegedly banned US police departments from using its AI-powered Azure OpenAI Service for facial recognition.