OpenAI Unveils Exciting Updates and Lower Prices at DevDay Event

Sven

November 6th, 2023

~ 4 min read

OpenAI, the renowned artificial intelligence research organization, recently held its DevDay event where it announced a wide range of new models and developer products. From the introduction of GPT-4 Turbo with expanded capabilities and lower prices to the launch of the Assistants API and exciting features like vision support and text-to-speech, OpenAI is revolutionizing the AI landscape. In this blog post, we'll delve into the highlights of these announcements and how they will benefit developers and users alike.

GPT-4 Turbo with 128K Context

OpenAI introduced a preview of GPT-4 Turbo, the next-generation model in its line-up. With a 128K context window, GPT-4 Turbo can fit the equivalent of more than 300 pages of text in a single prompt. It brings improved capabilities and knowledge of world events up to April 2023, and it is priced 3x lower for input tokens and 2x lower for output tokens than GPT-4. Developers can try GPT-4 Turbo today by passing gpt-4-1106-preview in the API, with a stable, production-ready model planned for release in the coming weeks.
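
To make this concrete, here is a minimal sketch of calling the preview model through OpenAI's v1 Python SDK; the prompts are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Call the GPT-4 Turbo preview model announced at DevDay.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the plot of Moby-Dick in two sentences."},
    ],
)
print(response.choices[0].message.content)
```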

Improved Function Calling and Instruction Following

Function calling has seen notable improvements with GPT-4 Turbo. Developers can now describe multiple functions in a single message, and the model can call them in parallel, cutting down on round trips. GPT-4 Turbo is also better at following instructions precisely, making it well suited to tasks that demand a specific format. The new JSON mode goes a step further by constraining the model to output syntactically valid JSON, giving developers more control and flexibility.
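
As an illustration, here is a sketch of parallel function calling with the tools parameter; the get_weather function and its schema are hypothetical, only the request shape follows the API:

```python
from openai import OpenAI

client = OpenAI()

# A hypothetical function the model is allowed to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "What's the weather in Paris and in Tokyo?"}],
    tools=tools,
)

# With parallel function calling, one reply can contain several tool calls.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

JSON mode itself is enabled by passing response_format={"type": "json_object"} to the same endpoint; OpenAI's docs note that the prompt should explicitly mention JSON when the mode is active.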

Reproducible Outputs and Log Probabilities

OpenAI has introduced a new seed parameter that makes the model return consistent completions across requests. This proves valuable for debugging, unit testing, and generally exerting more control over the model's behavior. The response also carries a system_fingerprint field identifying the backend configuration, so you can tell when a change on OpenAI's side might affect determinism. Additionally, OpenAI plans to launch log probabilities for the most likely output tokens generated by GPT-4 Turbo and GPT-3.5 Turbo, which is useful for building features like autocomplete in search experiences.
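
A minimal sketch of the seed parameter follows; note that reproducibility is best-effort rather than guaranteed:

```python
from openai import OpenAI

client = OpenAI()

# Two requests with the same seed and parameters should usually match.
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": "Name three prime numbers."}],
        seed=42,
        temperature=0,
    )
    # system_fingerprint identifies the backend configuration; if it changes
    # between runs, outputs may differ even with an identical seed.
    print(response.system_fingerprint, response.choices[0].message.content)
```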

Introducing GPT-3.5 Turbo 16K

In addition to GPT-4 Turbo, OpenAI also released a new version of GPT-3.5 Turbo that supports a 16K context window by default. It ships with improved instruction following, JSON mode, and parallel function calling. Developers can access it by calling gpt-3.5-turbo-1106 in the API; applications using the gpt-3.5-turbo name will be automatically upgraded to the new model on December 11. Older models will remain accessible by passing gpt-3.5-turbo-0613 until June 13, 2024.
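
Given the December 11 auto-upgrade, one practical pattern is to pin the dated model name rather than the rolling alias; a quick sketch:

```python
from openai import OpenAI

client = OpenAI()

# "gpt-3.5-turbo" is a rolling alias that will point at the new 16K model
# after December 11; pinning the dated name controls when you upgrade.
response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",  # the new 16K-context version
    messages=[{"role": "user", "content": "Give me one fun fact about otters."}],
)
print(response.choices[0].message.content)
```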

Assistants API for Enhanced AI Experiences

OpenAI introduced the Assistants API, which lets developers build agent-like experiences within their own applications. An assistant is a purpose-built AI with specific instructions, extra knowledge, and the ability to call models and tools to perform tasks. The API provides persistent, automatically managed threads for conversation state, plus built-in tools such as Code Interpreter and Retrieval, reducing the amount of plumbing developers have to build themselves. That flexibility supports a wide array of applications: data analysis, coding assistance, vacation planning, voice-controlled experiences, and more.
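
Here is a rough sketch of the Assistants API flow using the beta endpoints in the v1 Python SDK; the assistant's name, instructions, and question are made up for illustration:

```python
import time

from openai import OpenAI

client = OpenAI()

# 1. Create an assistant with instructions and a built-in tool.
assistant = client.beta.assistants.create(
    name="Data Helper",  # illustrative name
    instructions="You analyze data and explain your results briefly.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
)

# 2. Create a thread (which holds conversation state) and add a message.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the standard deviation of 2, 4, 4, 4, 5, 5, 7, 9?",
)

# 3. Run the assistant on the thread and poll until the run finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# 4. Read back the messages, newest first.
for message in client.beta.threads.messages.list(thread_id=thread.id):
    print(message.role, message.content[0].text.value)
```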

New Modalities: Vision, DALL·E 3, and TTS

OpenAI is expanding the platform with new modalities. GPT-4 Turbo now supports vision via gpt-4-vision-preview, letting developers pass in images for tasks like generating captions or analyzing real-world visuals. DALL·E 3, already available to ChatGPT Plus and Enterprise users, can now be integrated into third-party apps and products via the Images API, so developers can generate images and designs programmatically. OpenAI also introduced a text-to-speech API that produces high-quality, human-like speech from text across six preset voices.
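
A few minimal sketches of the new modalities, using the model names from the announcement; the image URL and prompts are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Vision: pass an image alongside text to the vision-enabled model.
caption = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=200,
)
print(caption.choices[0].message.content)

# DALL·E 3 through the Images API.
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",
    size="1024x1024",
)
print(image.data[0].url)

# Text-to-speech: write the generated audio to an MP3 file.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # one of the six preset voices
    input="Welcome to the new text-to-speech API.",
)
speech.stream_to_file("welcome.mp3")
```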

Exciting Opportunities for Model Customization and Lower Prices

OpenAI is continually finding new ways to cater to developers' specific needs. The company is opening experimental access to GPT-4 fine-tuning, though early results suggest it takes more work to achieve meaningful improvements over the base model than GPT-3.5 fine-tuning did. For organizations that need even deeper customization, the Custom Models program pairs selected organizations with dedicated OpenAI researchers to train custom GPT-4 models for their domains. Additionally, OpenAI has cut prices across several models, making its technology even more accessible to developers.
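
Since GPT-4 fine-tuning was experimental and invite-only at the time, here is the generally available GPT-3.5 fine-tuning flow as a stand-in illustration; train.jsonl is a placeholder path to a file of chat-formatted examples:

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted training examples (placeholder path).
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against the generally available base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```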

Conclusion

OpenAI's DevDay event was a packed showcase of new models, APIs, and features. With the introduction of GPT-4 Turbo, the Assistants API, and improvements in function calling, instruction following, and model customization, OpenAI is giving developers the tools to build more advanced AI applications. The addition of vision support, DALL·E 3, and text-to-speech expands the possibilities further. And with lower prices and higher rate limits, OpenAI is making it easier for developers to scale what they build. Exciting times lie ahead as developers put these advancements to work and unlock the full potential of OpenAI's technologies.