OpenAI Unveils New GPT-4o Model (Huge ChatGPT Update)

OpenAI has announced the release of GPT-4o, a significant update to its large language model, which is currently used by over 100 million people worldwide.

This update promises to revolutionize user interaction with AI by introducing voice and video capabilities to both free and paid users. The new features are set to roll out in the coming weeks, bringing an unprecedented level of accessibility and functionality.

GPT-4o Model Features

Enhancing Human-AI Interaction

The primary goal of GPT-4o, according to OpenAI, is to reduce the friction between humans and machines, making AI more accessible to everyone.

During a live-streamed event, Mira Murati, OpenAI’s Chief Technology Officer, demonstrated the transformative potential of the new features. In a stunning showcase, the technology allowed for real-time conversations, voice modulation, and even simulated emotions.

Real-Time Conversations and Emotion Simulation

In one of the most impressive demos, OpenAI researchers engaged in a real-time bedtime story session with GPT-4o. The AI not only responded with appropriate tone and emotion but also varied its voice from playful to dramatic to singsong, as requested. This capability is expected to enhance the user experience significantly, making interactions with the AI more natural and engaging.

Video Capabilities

The introduction of video capabilities marks another milestone for GPT-4o. The demo showed the AI solving math equations written on paper and held up to a phone's camera, all while maintaining a playful conversation with the engineers.

This real-time visual interaction opens up numerous possibilities for educational purposes, remote assistance, and more. Sam Altman was live-tweeting about GPT-4o’s video capabilities during the Spring Update.

Speed and Efficiency

OpenAI has emphasized the improvements in speed and quality with the new update. GPT-4o can respond to audio inputs in as little as 232 milliseconds, with an average response time of 320 milliseconds. This responsiveness is comparable to human interaction speeds, making conversations with the AI feel more fluid and natural.
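As a minimal, hypothetical sketch of how a client might measure that kind of round-trip latency, consider the snippet below; the `respond_to_audio` function is an illustrative stand-in, not a real GPT-4o call:

```python
import time

def respond_to_audio(audio_chunk: bytes) -> str:
    """Illustrative stand-in: a real client would send the audio to GPT-4o here."""
    return "spoken reply"

# Time a single request/response round trip, which is what OpenAI's
# 232 ms minimum / 320 ms average figures describe.
start = time.perf_counter()
reply = respond_to_audio(b"\x00" * 1024)  # dummy audio bytes
latency_ms = (time.perf_counter() - start) * 1000
print(f"reply={reply!r} latency={latency_ms:.1f} ms")
```

With a stubbed function the measured latency is near zero; against a real model endpoint, this is where GPT-4o's reported 320 ms average would appear.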

Accessibility and Reach

The new features are designed to be accessible to as many people as possible. OpenAI has announced that the updates will enhance performance in over 50 languages, ensuring a broad global reach. Additionally, a desktop version of GPT-4o is rolling out for Mac users, initially available to paid subscribers.

Plus Users and API Enhancements

While free users will benefit from the new features, paid subscribers are not left out: Plus users will have up to five times the message capacity, ensuring a more robust and reliable experience. The update also extends to the application programming interface (API), which is now twice as fast and 50% cheaper than GPT-4 Turbo. These enhancements will benefit developers and businesses that rely on AI for their operations.
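For developers, a minimal sketch of what a request to the updated API might look like, assuming the public `gpt-4o` model identifier and the standard Chat Completions request schema (the prompt text here is illustrative):

```python
import json

# Request body for OpenAI's Chat Completions endpoint
# (POST https://api.openai.com/v1/chat/completions), targeting GPT-4o.
payload = {
    "model": "gpt-4o",  # model identifier announced with this release
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the GPT-4o announcement in one sentence."},
    ],
    "temperature": 0.7,
}

body = json.dumps(payload)
print(body)
```

With the official `openai` Python SDK, this corresponds roughly to `client.chat.completions.create(**payload)`; for existing GPT-4 integrations, switching to the new model is largely a matter of changing the model name.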

Advanced Capabilities and Real-Time Translation

One of the standout features demonstrated was the AI’s ability to handle multiple speakers simultaneously. During the live demo, three presenters spoke to ChatGPT at the same time, and the AI successfully discerned and responded to each speaker individually. Additionally, the AI showcased real-time translation capabilities, effortlessly translating between Italian and English.
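A rough sketch of how such an interpreter session could be set up with a chat-style message format follows; the system-prompt wording is an illustrative guess, not OpenAI's actual demo prompt:

```python
def make_interpreter_messages(utterance: str) -> list[dict]:
    """Build chat messages asking the model to act as an Italian/English interpreter."""
    system_prompt = (
        "You are a live interpreter. When you hear Italian, translate it into "
        "English; when you hear English, translate it into Italian."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": utterance},
    ]

messages = make_interpreter_messages("Ciao, come stai?")
print(messages)
```

In the live demo this behavior was driven by voice, but the underlying instruction to the model can be expressed as a simple system prompt like the one above.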

Addressing Challenges and Safeguards

OpenAI acknowledges the new challenges that come with real-time audio and visual capabilities. To address potential misuse, the company is working with various stakeholders to implement safeguards. The features will be rolled out iteratively, ensuring that proper protections are in place.

Strategic Timing

The announcement of GPT-4o comes strategically timed just a day before Google’s I/O developer conference, which is expected to focus heavily on AI advancements. This move by OpenAI not only captures attention but also sets a high bar for upcoming AI technologies.

Future Rollout and Availability

OpenAI has stated that GPT-4o’s text and image capabilities are starting to roll out today, with the full suite of features becoming available to free and Plus users in the coming weeks. The new version of Voice Mode with GPT-4o will be in alpha within ChatGPT Plus soon, providing users with enhanced interaction options.


GPT-4o represents a significant leap forward in AI technology, with its voice and video capabilities set to transform how users interact with AI. OpenAI’s commitment to reducing friction between humans and machines is evident in this update, which promises to bring AI closer to everyone.

As the features roll out, users can look forward to a more engaging, responsive, and versatile AI experience. You can check out OpenAI’s complete live stream of the Spring Update for more details.
