On Tuesday, OpenAI released GPT-4, the latest iteration of its flagship large language model, claiming that it performs at “human-level” proficiency on a number of standardized tests.
GPT-4 is “bigger” than earlier versions, meaning it has more weights in its model file and was trained on more data, and it therefore costs more to run.
Many experts in the field now believe that recent advances in AI have come from training ever-larger models on supercomputers, at a cost that can run into the millions of dollars. GPT-4 is the product of one such strategy, which focuses on “scaling up” to get better results.
Microsoft has invested billions of dollars in the company, and OpenAI says it trained the model on Azure. Citing “the competitive landscape,” OpenAI declined to disclose the model’s exact size or the hardware used to train it.
The latest version of OpenAI’s GPT large language model previews developments that could start filtering down to consumer products such as chatbots in the coming weeks. The model powers many of the artificial intelligence demos that have astounded the technology industry over the past six months, including Bing’s AI chat and ChatGPT. Microsoft revealed on Tuesday that Bing’s AI chatbot runs on GPT-4.
The new model, according to OpenAI, gives fewer factually wrong responses, goes off topic less often, and even outperforms humans on several standardized exams.
According to OpenAI, GPT-4 scored at the 90th percentile on a mock bar exam, the 93rd percentile on the SAT reading test, and the 89th percentile on the SAT math test.
OpenAI cautions that the new model isn’t yet perfect and that it often performs worse than humans. The company said the model still has a serious problem with “hallucination,” or making things up, and that it tends to insist it is right even when it is wrong.
The company stated in a blog post that it is working to address the “many acknowledged limitations” of GPT-4, including social biases, hallucinations, and adversarial prompts.
In casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle. The difference emerges when the complexity of a task reaches a sufficient threshold: GPT-4 is more reliable, more creative, and able to handle far more nuanced instructions than GPT-3.5, according to a blog post from OpenAI.
The new model will be accessible to paying ChatGPT subscribers, as well as through an API that lets developers incorporate the AI into their applications. OpenAI will charge roughly 3 cents for every 750 words of prompts and about 6 cents for every 750 words of responses.
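As a rough illustration of how that pricing adds up, the sketch below estimates the cost of a single API exchange using the approximate per-750-word figures reported above; the word counts in the example are hypothetical.

# Rough cost estimate for GPT-4 API usage, based on the approximate
# per-750-word prices reported in this article.

PROMPT_PRICE_PER_750_WORDS = 0.03    # ~3 cents per 750 words of prompt
RESPONSE_PRICE_PER_750_WORDS = 0.06  # ~6 cents per 750 words of response

def estimate_cost(prompt_words: int, response_words: int) -> float:
    """Return an approximate cost in US dollars for one exchange."""
    prompt_cost = (prompt_words / 750) * PROMPT_PRICE_PER_750_WORDS
    response_cost = (response_words / 750) * RESPONSE_PRICE_PER_750_WORDS
    return prompt_cost + response_cost

# Hypothetical example: a 1,500-word prompt with a 750-word reply
# would cost roughly 2 * $0.03 + 1 * $0.06 = $0.12.
print(f"${estimate_cost(1500, 750):.2f}")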
Source (CNBC)