How Does OpenAI’s GPT-3 Work?

Apr 30, 2021
5 min read

What if you had a technology that could write high-quality copy for you? Sounds like something out of science fiction, right? But guess what: OpenAI’s GPT-3 can do just that.

GPT-3 can write text, compose poems, translate paragraphs, chat, code, design, answer questions, and much more. It's like having a personal writing assistant! It is currently one of the largest language models in the world. OpenAI released GPT-3 in June 2020, and it has been the talk of the machine learning community ever since.

Developers around the world have built startups on top of GPT-3. Copy.AI is one such GPT-3-powered tool, which can generate copy for social media posts, ads, and more. You select the copy type, describe your product, and within a few seconds it generates results.

But What Exactly is GPT-3?

According to Greg Raiz, the CEO of Raizlabs, “GPT-3 is the largest leap in artificial intelligence we’ve seen in a long time and the implications of these advances will be felt for a really long time”.

OpenAI’s GPT-3 is the successor to GPT-2. OpenAI is an AI research lab working toward Artificial General Intelligence (AGI) that benefits humanity, and it has been developing AI technologies for years. GPT-3 is the latest model in its GPT series, offered through an API.

Generative Pre-Training (GPT) was first described in OpenAI's research papers, which show how an AI model can learn from unlabeled data, whereas most AI technologies of the time relied on labeled data.

In the GPT-3 paper, the researchers show that the 175-billion-parameter language model can predict the next words in a text after being trained on unlabeled data. Given a prompt, the model predicts the most probable next word, appends it to the text, and repeats the process until it has generated the desired output.
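The predict-append-repeat loop can be sketched in a few lines of Python. This is only a toy illustration, not GPT-3 itself: the hard-coded word probabilities stand in for the statistics GPT-3 learns from its enormous training corpus.

```python
# Toy "language model": probability of the next word given the previous word.
# GPT-3 learns statistics like these from terabytes of text; here they are
# hard-coded purely to illustrate the generation loop.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_words, max_new_words=4):
    """Repeatedly predict the most likely next word and append it."""
    words = list(prompt_words)
    for _ in range(max_new_words):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:  # no known continuation: stop generating
            break
        # Greedy decoding: pick the single most probable next word.
        next_word = max(candidates, key=candidates.get)
        words.append(next_word)
    return " ".join(words)

print(generate(["the"]))  # the cat sat down
```

Real GPT-3 works over a vocabulary of tens of thousands of tokens and conditions on the whole prompt, not just the last word, but the generate-one-token-at-a-time loop is the same idea.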

This makes it suitable for almost any kind of writing! It can generate copy from just a few instructions.

According to Greg Raiz’s article, the original GPT was trained on only about 7,000 books, while GPT-3 was trained on roughly 410 billion tokens crawled from the world wide web, plus data from books, Wikipedia, and more. In all, that is over 45 terabytes of text!

For now, GPT-3 is in an experimental stage, and selected developers are getting early access. Sahar Mor is one such developer. He said, “The main difference I see with GPT-3 is that, unlike other AI applications that are narrow, its performance is general and human-like.” He used it to develop Airpaper, which generates structured data from any document, PDF, or image!

How Does GPT-3 Function?

GPT-3 generates text using a neural network. It is considered the most accurate model in the GPT series so far. Because it uses 175 billion parameters, it can perform many tasks without any task-specific training!

It is highly efficient thanks to in-context learning. Once the user feeds it a prompt, the model analyzes the text and produces the statistically most probable continuation, drawing on the patterns it learned from its massive training data.
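In practice, in-context learning means packing a few worked examples into the prompt itself; the model then continues the pattern instead of being retrained. Here is a minimal sketch of building such a "few-shot" prompt (the translation pairs are illustrative examples, not output from GPT-3):

```python
# Few-shot prompting: show the model a handful of examples inside the
# prompt, then leave the last line incomplete for it to finish.
EXAMPLES = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

def build_few_shot_prompt(examples, query):
    """Assemble an instruction, example pairs, and an unfinished final line."""
    lines = ["Translate English to French:"]
    for english, french in examples:
        lines.append(f"{english} => {french}")
    lines.append(f"{query} =>")  # the model completes this line
    return "\n".join(lines)

prompt = build_few_shot_prompt(EXAMPLES, "peppermint")
print(prompt)
```

The whole string would be sent to GPT-3 as a single prompt; the model infers the "English => French" pattern from the examples and completes the last line, with no gradient updates involved.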

It doesn't understand the meaning of the words it produces, but it still generates well-formed sentences. With a little prompt adjustment, GPT-3 can write stories, blogs, essays, jokes, and more. Interestingly, the output reads remarkably like human writing.

Here are some examples!

1. A developer demonstrates a GPT-3-based tool: he describes the React app he wants, and the AI generates the code to build it (a to-do app).

https://twitter.com/sharifshameem/status/1284421499915403264

2. Here, GPT-3 develops paragraphs based on initial hints.

https://twitter.com/zebulgar/status/1283927560435326976

3. The user instructs GPT-3 to tweak the writing style of a sentence, and it works its magic!

https://twitter.com/quasimondo/status/1284372088460115968

Many people have shared their experiences with this AI on Twitter; search for GPT-3 there and you will find many examples of it at work. Did you know it can also write a song from just a title and an artist's name? Not only that, it can work as an idea generator for businesses.

Recently, designer Jordan Singer used the technology to build a plugin named "Designer", which produces app designs from text descriptions.

https://twitter.com/jsngr/status/1284511080715362304

What Are The Limitations of GPT-3?

Although GPT-3 looks promising and is one of the biggest leaps in the AI world, it has its shortcomings. Even Sam Altman, a co-founder of OpenAI, has pointed out its limitations.

Here are some of its shortcomings.

Still in its Beta version

GPT-3 is still an experimental release. It has a lot of potential, as we have already seen, but the hype around it overshadows the problems that come with the model.

Doesn’t understand the meaning of the output

The model uses statistics to produce the most probable sentence or code, but it doesn't understand what it is presenting; it lacks semantic understanding. And because GPT-3's training data was crawled from the internet, it can generate sexist, racist, and otherwise biased output!

Not anywhere close to AGI

Many developers have hailed the model as the start of AGI, but it is not. It generates human-like sentences because of its massive training data. Structurally it is similar to its predecessor, GPT-2; the main difference is its scale and the data used for training.

Because it lacks semantic understanding, sound reasoning, and generalization, its writing skills are not truly human-like, and it can produce faulty, biased output. So it's far from anything AI enthusiasts would consider AGI.

Future of GPT-3

While GPT-3 remains in its experimental phase, it is difficult to say much about its future. However, as the hype suggests, it has a lot of potential, and the number of startups building AI apps on it is evidence of that. The worry is its human-like output: because the generated text is so close to what a human writes, there is real room for misuse.

The cyber-world is already dealing with fake news, misinformation, and deepfakes; a technology like GPT-3 could make these problems worse. Therefore, regulation is needed, and OpenAI has already warned about the dangerous impact GPT-3 could have if left unregulated.

For now, developers get access to the technology only after careful vetting, and OpenAI is strict about how the model is used. So we can't say much about the model's future yet. However, it has opened many doors toward an AGI world and has the ML community dreaming!