
Is ChatGPT talking like a human?

Humans and robots - © pexels.com

Following the explosion in popularity of one of OpenAI’s latest generative models, many people have been left wondering whether these models have finally reached human-level language ability and, if so, how they managed to achieve it.

Picture yourself having a chat with an intelligent entity, one that understands your questions, responds with relevant information, and engages in conversations that feel remarkably human – very much unlike our dear Alexa and Siri. Now, you might be thinking: “I don’t need to picture that! The name is in the title, you are talking about ChatGPT!” Indeed, the responses ChatGPT generates are incredibly human-like, and it is currently helping millions of people carry out their daily tasks with unprecedented quality.

But is it really talking like a human?

In order to answer this question, we need to understand the technique used to train it, which requires distinguishing between two different models: GPT-3 and ChatGPT.

GPT-3 stands for Generative Pretrained Transformer 3 and is a language model trained to take a chunk of text as input and keep predicting the next word until a (plausible or forced) conclusion is reached. This is possible because GPT-3 is autoregressive: every time it produces a new word, it looks back at the current version of the text and predicts the continuation based on both the initial input and the words it has already added. Moreover, GPT-3 does not refer to a single model but rather to a family of models, each with its own name depending on its training data, size and purpose.
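
To make this word-by-word loop concrete, here is a minimal sketch in Python. It is purely illustrative: `predict_next_word` is a hypothetical stand-in for the actual language model, which in reality works on sub-word tokens and probability distributions rather than whole words.

```python
# Minimal sketch of autoregressive generation.
# `predict_next_word` is a hypothetical stand-in for the language model:
# it sees everything written so far and proposes the next word.

def generate(prompt_words, predict_next_word, max_words=50, stop_token="<end>"):
    text = list(prompt_words)                # start from the user's input
    for _ in range(max_words):               # "forced" conclusion after max_words
        next_word = predict_next_word(text)  # look back at the whole text so far
        if next_word == stop_token:          # "plausible" conclusion reached
            break
        text.append(next_word)               # the new word joins the context
    return " ".join(text)

# Toy predictor, just to make the sketch runnable.
def toy_predictor(words):
    return "words" if words[-1] != "words" else "<end>"

print(generate(["The", "model", "predicts"], toy_predictor))
# -> "The model predicts words"
```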

ChatGPT is a version of one of the GPT-3.5 models, specifically the davinci branch, which underwent an additional training step based on the Reinforcement Learning from Human Feedback (RLHF) technique. In a nutshell, this method puts humans in the training loop: they evaluate how well a relatively large set of generated responses would be perceived by a human reader. This data is then used to train a separate “reward model”, which learns to reproduce the human evaluation for any given response produced by the system. At this point, the GPT-3.5 model is tasked with producing multiple answers to a large set of questions; these answers are scored by the reward model according to how a human would perceive them, and the scores are gradually fed back into the initial model to nudge it towards the “correct” answer expected by a human interlocutor.
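
For readers who like to see things in code, the sketch below illustrates the two RLHF ingredients described above: a reward model learned from human ratings, and a feedback loop that favours the answers the reward model prefers. It is a deliberately naive illustration under simplifying assumptions; all names are hypothetical, the “reward model” merely memorises ratings, and the real procedure updates the network with a policy-gradient algorithm such as PPO rather than by collecting preferred answers.

```python
# Naive illustration of the RLHF idea; all functions are hypothetical placeholders.

def train_reward_model(responses, human_scores):
    # Step 1: learn to reproduce human judgements.
    # Here we simply memorise the ratings; a real reward model is a neural
    # network that generalises to responses it has never seen.
    remembered = dict(zip(responses, human_scores))
    return lambda response: remembered.get(response, 0.0)

def rlhf_round(generate_answer, prompts, reward_model, n_candidates=4):
    # Step 2: for each prompt, generate several candidate answers,
    # score them with the reward model, and keep the best-rated one
    # as a new training target that nudges the model towards what
    # a human interlocutor would expect.
    preferred = []
    for prompt in prompts:
        candidates = [generate_answer(prompt) for _ in range(n_candidates)]
        best = max(candidates, key=reward_model)
        preferred.append((prompt, best))
    return preferred  # these pairs would be fed back into fine-tuning
```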

Essentially, this training process helps ChatGPT generate responses that align with human preferences and expectations, which makes the model much better at respecting conversational implicatures – precisely what a chatbot is supposed to do! This means it is able to simulate conversation and offer valuable insights. Human communication, however, has been shown to go well beyond these aspects, even breaking them when doing so is necessary for effective communication. We actively use language as a tool to shape our surroundings and, for this purpose, conversation requires intentionality. In the case of ChatGPT, we are interacting with an entity that possesses neither consciousness nor a will of its own, that operates on statistical patterns and algorithms, and that is trained to always satisfy the expectations of its interlocutor. Humans, by contrast, will actively break those expectations, because engaging in conversation entails having a self-directed purpose behind one’s actions.

So yes, ChatGPT is indeed talking like a human could – but not like a human does.

Francesco Fernicola

Francesco Fernicola is a joint PhD student in machine translation at the University of Bologna and the Institute for Applied Linguistics of Eurac Research. He is interested in all things NLP, is stereotypically Italian in his love for food and loves the quote “Stuff happens, hilarity ensues”.

Tags

  • Ask a Linguist

Citation

Fernicola, F. Is ChatGPT talking like a human? https://doi.org/10.57708/B152376733
