OpenAI Unveils Exciting Updates to GPT-4 and GPT-3.5 Turbo Models
OpenAI, the company behind the GPT series of language models, has released updated versions of its GPT-4 and GPT-3.5 Turbo models with several new features:
- Function Calling: Developers can now describe functions to the models and have them respond with a JSON object containing the arguments needed to call those functions, providing a more reliable way to obtain structured data (see the sketch after this list).
- Improved Steerability: The updated models are more steerable and have been fine-tuned to detect when a function should be called based on the user's input and to generate JSON that adheres to the function's signature.
- GPT-3.5 Turbo’s 16k Context Version: With four times the context length of the standard 4k version, this model can handle around 20 pages of text in a single request.
- 75% Price Cut on Embeddings: OpenAI’s cutting-edge embeddings model now costs 75% less.
- 25% Savings on Input Tokens for GPT-3.5 Turbo: The price of input tokens for GPT-3.5 Turbo has been slashed by 25%.
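Below is a minimal sketch of how function calling might look with the openai Python package as it existed around this announcement (the pre-1.0 `ChatCompletion` interface). It assumes an API key is available in the environment; the `get_current_weather` function and its schema are hypothetical examples, not part of OpenAI's API.

```python
import json
import os

import openai  # legacy (pre-1.0) openai Python package

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment


def get_current_weather(location: str, unit: str = "celsius") -> str:
    """Hypothetical local function the model can ask us to call."""
    # A real application would query a weather service here.
    return json.dumps({"location": location, "temperature": 22, "unit": unit})


# Describe the function to the model; the model decides whether to call it.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name, e.g. Paris"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather like in Paris?"}],
    functions=functions,
    function_call="auto",  # let the model decide when a function call is needed
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name plus a JSON string of arguments.
    args = json.loads(message["function_call"]["arguments"])
    print(get_current_weather(**args))
else:
    print(message["content"])
```

The same request pattern works with the longer-context model by swapping the model name for `gpt-3.5-turbo-16k` when a prompt needs the larger window.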
These updated models are available to all developers today, and OpenAI is inviting more developers from the waitlist to try GPT-4 in the coming weeks.
OpenAI CEO Sam Altman, in a blog post, described the updates as “a major step forward for our language models” and expressed his excitement to see how developers will incorporate these latest models and features into their applications.
Taken together, these updates make GPT-4 and GPT-3.5 Turbo more capable and more flexible, and should allow developers to build a wider range of applications on top of them.