OpenAI Unleashes ChatGPT and Whisper APIs for Next-Gen Language Capabilities

Daniel Dominguez

Article originally posted on InfoQ.

OpenAI has announced that it is now letting third-party developers integrate ChatGPT and Whisper into their apps and services via API, offering access to AI-powered language and speech-to-text capabilities. Compared with building on the company's existing language models directly, these APIs make it easier for businesses to integrate ChatGPT and Whisper into their platforms.

The new ChatGPT model, gpt-3.5-turbo, costs $0.002 per 1,000 tokens, ten times less than the existing GPT-3.5 models. It is also the best model for many use cases outside of conversation. GPT models typically consume unstructured text, supplied to the model as a sequence of tokens. ChatGPT models instead consume a sequence of messages together with their associated metadata.
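To illustrate the message-based format, here is a minimal sketch using the openai Python library (version 0.27 or later); the API key and prompt contents are placeholders, not part of the announcement:

```python
# Minimal sketch: calling gpt-3.5-turbo with a list of role-tagged messages
# instead of a single block of unstructured text.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what an API is in one sentence."},
    ],
)

# The reply comes back as another message with the "assistant" role.
print(response["choices"][0]["message"]["content"])
```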

Modern language processing techniques can produce human-like responses to natural-language inputs. The model is an effective tool for creating conversational interfaces because it can comprehend linguistic nuance, including idioms, slang, and colloquialisms. With ChatGPT, developers can build chatbots, virtual assistants, and other conversational interfaces that respond to users in a tailored and human-like manner. The most recent ChatGPT model is now also substantially more affordable and available to third parties through the dedicated API.
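As a rough sketch of how such a chatbot might be built on the API (the system prompt and loop structure here are illustrative assumptions, not OpenAI's prescribed approach), the conversation history can simply be resent with each request so the model keeps context across turns:

```python
# Hypothetical chatbot loop: the full history is replayed on every call,
# which is how the stateless API is given conversational context.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

history = [{"role": "system", "content": "You are a concise support assistant."}]

while True:
    user_input = input("You: ")
    if not user_input:
        break
    history.append({"role": "user", "content": user_input})
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    answer = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    print("Bot:", answer)
```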

OpenAI has also unveiled a new API for Whisper, its speech-to-text technology. According to the company, developers can use it to transcribe or translate audio for $0.006 per minute. The Whisper model itself is open source, so it can also be run on a developer's own hardware at no cost.
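A minimal sketch of transcribing a local file with the hosted whisper-1 model, again assuming the openai Python library (v0.27+) and a placeholder file name:

```python
# Sketch: send an audio file to the hosted Whisper model and print the text.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

with open("meeting.mp3", "rb") as audio_file:  # placeholder file name
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

print(transcript["text"])
```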

Additionally, OpenAI is introducing certain policy adjustments that it says are a result of developer feedback. One significant change is that it will no longer train its models on data submitted through the API unless customers explicitly opt in to that use.

Moreover, OpenAI now offers dedicated instances for customers who want greater control over the specific model version and system performance. By default, requests are processed on compute resources shared with other users and billed per request. The API runs on Azure, and with dedicated instances, developers can purchase a time-limited allocation of compute capacity reserved exclusively for their queries.

"AI can provide incredible opportunities and economic empowerment to everyone, and the best way to achieve that is to allow everyone to build with it," says OpenAI.

The launch of these APIs is expected to have a significant impact on the developer community, providing new tools and capabilities for building more advanced and sophisticated language applications.
