
The Artificial Intelligence (AI) research lab OpenAI has released GPT-4, the latest version of its groundbreaking AI system. Its creators say it can solve complex problems more accurately and be more creative.

OpenAI’s co-founder Sam Altman described GPT-4 as a “multimodal” model, meaning it accepts both text and image inputs; users can interact with it to ask questions about pictures. GPT-4 can also handle much larger text inputs, processing up to 25,000 words – eight times as many as its predecessor.
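Model limits are measured in tokens rather than words, so a practical first step before sending a long document is checking how large it actually is. Below is a minimal sketch using OpenAI’s tiktoken tokenizer; the file name is a placeholder.

```python
# A sketch of measuring input size before sending text to GPT-4, using
# OpenAI's tiktoken library; "draft.txt" is a placeholder file name.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # resolves to the cl100k_base encoding
with open("draft.txt") as f:
    text = f.read()

print(len(enc.encode(text)), "tokens")
```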

The new model is now available to users of ChatGPT Plus, the paid version of the ChatGPT chatbot. Developers can join a waitlist to access the API – the application programming interface that lets other software communicate with the model. OpenAI is backed by Microsoft, which confirmed that Bing Chat – its chatbot developed with OpenAI – already runs on GPT-4.
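For developers who clear the waitlist, a request looks roughly like the sketch below, which posts to OpenAI’s chat completions endpoint with the requests library. The prompt is illustrative, and a valid API key is assumed to be set in the environment.

```python
import os
import requests

# A minimal sketch of calling GPT-4 via OpenAI's chat completions
# endpoint; it assumes API access has been granted and a key is stored
# in the OPENAI_API_KEY environment variable.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4",
        "messages": [
            {"role": "user", "content": "Summarize GPT-4's new features in one sentence."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```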

Furthermore, the company has worked with partners to offer GPT-4-driven services, such as Duolingo Max. This new tier of the language-learning app uses OpenAI’s latest model to chat with users and explain the mistakes they make.

What is OpenAI?

OpenAI is an American AI research and development company that aims to create and promote human-friendly artificial intelligence systems. Tech giant Microsoft backs OpenAI.

What is ChatGPT?

ChatGPT – short for Chat Generative Pre-trained Transformer – is OpenAI’s AI-driven chatbot. Launched in November 2022, it uses GPT-3.5 models, which allow it to respond to text-based queries and generate natural-language answers.

What is GPT-4?

GPT-4 is the latest version of OpenAI’s AI system. It accepts images and text as input, generates more creative outputs, and is less likely to invent facts.

GPT-4 understands images

The main difference on the input side, compared to previous versions, is the ability to see and understand images. At GPT-4’s presentation event, OpenAI’s president Greg Brockman demonstrated the system’s ability to work with images, including analyzing and responding to pictures alongside text prompts and performing tasks based on those pictures.

During the demo, GPT-4 was asked to explain why an image of a squirrel with a camera was funny. The system replied: “Because we don’t expect them to act as human.” In another test, Brockman submitted a hand-drawn sketch of a website, and the AI created a functional website based on that drawing.

GPT-4’s image-recognition capabilities are not yet publicly available. They are being tested by Be My Eyes, an app that describes what a phone’s camera sees for visually impaired users.

Improved creativity and reasoning 

OpenAI states its latest model is “more creative and collaborative than ever.” The system can generate, edit, and iterate with users on creative and technical writing tasks: it can compose a song, write a screenplay, or learn a user’s writing style. And although there is little difference between GPT-3.5 and GPT-4 in casual conversation, the latest version can handle much more nuanced instructions.

This improved ability to solve complex problems shows in academic tests. In a simulation of the bar exam required of US law school graduates before professional practice, GPT-4 scored in the top 10 percent of test takers. Its predecessor, GPT-3.5, scored in the bottom 10 percent, OpenAI says.

GPT-4 also outperforms ChatGPT in reasoning. In a demo on its website, OpenAI shows how the new model can find a 30-minute meeting slot based on three people’s schedules. The company also says GPT-4 is more multilingual, answering questions with high accuracy across 26 languages.
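To see what that scheduling task involves, the sketch below solves the same kind of interval-intersection problem directly in Python. The names and hours are invented for illustration, not taken from OpenAI’s demo.

```python
# Find 30-minute windows where three invented schedules overlap.
# Times are minutes since midnight; all names and hours are illustrative.
schedules = {
    "Alice": [(11 * 60, 15 * 60)],                             # 11:00-15:00
    "Bob": [(12 * 60, 14 * 60), (15 * 60 + 30, 17 * 60)],      # 12:00-14:00, 15:30-17:00
    "Carol": [(12 * 60, 12 * 60 + 30), (16 * 60, 18 * 60)],    # 12:00-12:30, 16:00-18:00
}

def intersect(a, b):
    """Intersect two lists of (start, end) intervals."""
    out = []
    for s1, e1 in a:
        for s2, e2 in b:
            s, e = max(s1, s2), min(e1, e2)
            if s < e:
                out.append((s, e))
    return out

people = list(schedules.values())
common = people[0]
for intervals in people[1:]:
    common = intersect(common, intervals)

# Keep only windows long enough for a 30-minute meeting.
for s, e in common:
    if e - s >= 30:
        print(f"{s // 60:02d}:{s % 60:02d} to {e // 60:02d}:{e % 60:02d}")
```

Run as written, this prints the single slot all three share, 12:00 to 12:30; GPT-4 performs the same reasoning from a plain-language description of the schedules.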

Correcting weaknesses

According to OpenAI, GPT-4 addresses many of the previous version’s weaknesses. Developers trained the model on data scraped from the Internet, which GPT-4 draws on to respond to user inputs. When the model doesn’t know the correct answer, however, it can make up facts and information – the hallucination problem. The system can also give abusive or upsetting responses when fed the wrong prompts.

Drawing on users’ conversations with ChatGPT, OpenAI says it managed to reduce these defects – but not eliminate them – in GPT-4. According to the company, GPT-4 responds to sensitive requests, such as for medical or self-harm advice, in line with its policies 29 percent more often, and wrongly responds to requests for disallowed content 82 percent less often.

Nevertheless, OpenAI warns that GPT-4 will still make up facts, urging users to double-check its outputs. The AI startup states that GPT-4 scores 40 percent higher than GPT-3.5 on its internal factuality evaluations.

Continuous development

OpenAI says it has worked – and will keep working – on AI safety and security, integrating lessons learned from ChatGPT to strengthen safety research and monitoring. More updates and improvements will come as more people use GPT-4.

In the featured image: Midjourney’s representation of GPT-4