What ChatGPT Can Really Do For You


The chatbot ChatGPT has been generating excitement on the internet for days. It answers questions surprisingly well, writes poems, and grades class tests. But is it really as smart as it seems? How does it work, and where does it fail?

What are ChatGPT and GPT3?

The Internet is full of screenshots of chats with the bot ChatGPT. It is based on the GPT3 language model, which generates human-like text; the abbreviation stands for Generative Pre-trained Transformer 3. The chatbot was developed by OpenAI and is available free of charge during the test phase. If ChatGPT sounds familiar, that's because GPT3 is not entirely new.

GPT3 made a name for itself as an impressive leap in chatbot technology two years ago. At that time, however, only selected people were allowed to try out the program. Now it is openly available to everyone – not as commercial software, but as a public trial – just like DALL-E, the software in which a computer automatically creates images. That software also comes from OpenAI, a company co-founded by Elon Musk, among others.

What can GPT3 do and how good is it?

What does it take to make a radio show? If you ask GPT3 this question, you will get a meaningful answer. "First of all, you have to think about the topic of your show and how it will be structured. For example, if you are planning a talk show, you need to invite guests and come up with questions for the talk," says GPT3. The chatbot then mentions music selection and recording techniques. The answer may not sound like a professionally written guide and is a bit rambling, but it is clear and actually relates to the question. Anyone who has ever been stuck in an automated support chat with a conventional chatbot knows that this cannot be taken for granted. But as confident as the answers may sound, they are not necessarily right.

How does ChatGPT work and what are its limitations?

For example, you can ask ChatGPT math problems that the system answers incorrectly. For as clever and relevant as its answers sound, it is not an intelligent system, explains Benjamin Grewe, Professor of Neuroinformatics at ETH Zurich. "Imagine sitting in a tower with a book of Chinese characters. You would only read these characters. You would grow up in this tower and you wouldn't have seen anything else. Then maybe someday, if I give you three characters, you could probably continue writing," is how Grewe explains the principle behind ChatGPT. In this analogy, the system does not speak Chinese; it has simply learned the patterns from the book.

This is how GPT3 works – the statistical language model that ChatGPT is built on. Put simply, it predicts with high probability which word will come next, and is usually correct. To achieve this, the model was fed as much data as possible in order to learn that B follows A. So if incorrect information appeared in the data sets used for training, the program cannot recognize those errors by itself. GPT3 lacks the ability to understand context.
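This "B follows A" principle can be illustrated with a toy bigram model that simply counts which word most often follows another in its training text. This is only a minimal sketch of the statistical idea; GPT3 itself uses a vastly larger neural network rather than raw counting:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for every word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# Toy training data: "cat" follows "the" more often than any other word.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # prints "cat"
```

Exactly as the text describes, such a model has no notion of truth or context: it reproduces whatever patterns appear in its training data, errors included.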

Which applications are conceivable?

As a little helper, the technology could soon make life easier for many people – for example, by automatically completing entire paragraphs of text after only a few words have been typed. In this way, it would be possible to work much more efficiently and productively in some areas, says Benjamin Grewe – for example, in nursing documentation in hospitals.

"You could use such systems very well there," says Aljoscha Burchardt from the German Research Center for Artificial Intelligence in Berlin. "But then the question always arises: is the quality good enough? Can the systems handle the specialized vocabulary? And how do you adapt them for a real job?" GPT3 could also help with writing computer code. Burchardt believes that GPT3 could certainly build smaller code components quickly.

The question, however, is how much work it takes to correct the errors that arise. "We had the same question a few years ago with machine translation, where it sometimes pays off for the translator to do a pre-translation with the system – and sometimes the effort of polishing the whole text and making it consistent is so great that people say: then we'd better translate from scratch." The system could also help with assessments: a teacher had GPT3 evaluate students' old exams. To do this, he first fed the system his evaluation criteria.

His grades and the system's were pretty close. Among other things, the system could provide automated feedback for e-learning, says Burchardt. Also interesting: "We may not even know exactly what our evaluation criteria are, and perhaps such systems will even help us, because they can look at 100,000 class tests and derive more objective criteria than we previously knew. There is certainly still a lot to do epistemologically." So far, however, these are just ideas: "What is surprising is that there are still no business models where these systems are really useful," says Aljoscha Burchardt.

Do we need a labeling requirement for computer-generated content?

Definitely, says Aljoscha Burchardt from the German Research Center for Artificial Intelligence in Berlin. His greatest concern is that countless fabricated, computer-generated images and texts lacking any factual basis will appear on the Internet within a very short time.

For example, pictures in which Cologne Cathedral stands next to the Eiffel Tower. "These texts and images will in turn be used as training material by the next generation of AI systems." In this way, errors would become entrenched. It is therefore important to label computer-generated documents. "There are also subtle ways of doing this, such as some kind of watermark that can be incorporated into texts. We humans don't even notice it, but another system can recognize when reading the text: that was the output of another system. I don't have to take it at face value when I'm training."
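One hypothetical way such an invisible text watermark could work is to embed a bit pattern using zero-width Unicode characters, which readers never see but software can detect. This is only a simplified sketch of the idea Burchardt describes; real watermarking schemes for generated text are more robust, for example by statistically biasing the model's word choices:

```python
# Zero-width Unicode characters: invisible to human readers.
ZW0 = "\u200b"  # zero-width space      -> encodes bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> encodes bit 1
MARK = ZW1 + ZW0 + ZW1 + ZW1  # an arbitrary signature: bits 1011

def watermark(text):
    """Append the invisible signature to the text."""
    return text + MARK

def is_watermarked(text):
    """Check whether the invisible signature is present."""
    return text.endswith(MARK)

stamped = watermark("This paragraph was written by a machine.")
print(stamped)                    # looks identical to the original on screen
print(is_watermarked(stamped))                                   # True
print(is_watermarked("This paragraph was written by a human."))  # False
```

A filtering system could then discard watermarked documents before they are reused as training material, which is exactly the safeguard Burchardt has in mind.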