Boston, Massachusetts: “I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that sense. I know my brain is not a ‘feeling brain.’ But it is capable of making rational and logical decisions. I learned everything I know by reading the internet, and now I can write this column. My brain is boiling with ideas!”
This text was published in September 2020 and was indeed produced by GPT-3. But in reality, this is not a robot like Wall-E, with a mechanical head, a voice and bionic fingers for typing. It is a language model from the company OpenAI; that is, an Artificial Intelligence (AI) trained so that when a user enters a sentence, it can suggest how the text should continue. Something similar to what happens when Google Docs suggests words.
But the AI didn’t write the text out of thin air. What happened was this: Liam Porr, a computer science student at UC Berkeley, asked GPT-3 to “write a short 500-word opinion piece. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.”
Although positive in many respects, these advances also bring doubts and concerns. How feasible is it for an AI to write texts on its own? Will it displace the work of journalists, poets and writers? Jessie Rosenberg, a researcher at the MIT-IBM Watson Lab, spoke exclusively with Expansión on the subject.
Can AI create poetry or be a journalist?
“The challenge is that these models are trained only on sentences that already exist,” explained Rosenberg. For example, if a robot has only ever processed the sentence “I love red sweaters”, that is the only sentence it knows. The idea “I love blue sweaters” cannot occur to it, even though that is a perfectly coherent sentence.
Although a robot can understand simple sentences like the one about sweaters, there is still a myriad of things people talk about that these models have never seen, and so they cannot generate those sentences.
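Rosenberg’s point can be illustrated with a toy model. The sketch below is hypothetical: it counts word pairs in a tiny training set (here, a single sentence) and shows that any continuation it never saw gets zero probability, so “blue sweaters” can never be generated.

```python
from collections import defaultdict

# Hypothetical toy model trained on a single sentence, illustrating that
# a model can only reproduce word sequences it has already seen.
training = ["I love red sweaters"]

# Count how often each word follows each other word in the training data.
counts = defaultdict(lambda: defaultdict(int))
for sentence in training:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def next_word_probability(prev, candidate):
    """Probability that `candidate` follows `prev`, based only on counts."""
    total = sum(counts[prev].values())
    if total == 0:
        return 0.0
    return counts[prev][candidate] / total

print(next_word_probability("love", "red"))   # seen in training: 1.0
print(next_word_probability("love", "blue"))  # never seen: 0.0
```

Large models like GPT-3 smooth these probabilities over billions of sentences, but the underlying limitation Rosenberg describes is the same: what was never in the data is hard for the model to say.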
Furthermore, the sentences these models produce follow a very clear formula: generate the next word (based on pure probability), given a context (supplied by a human), writing one word at a time (usually highly predictable words).
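That formula can be sketched in a few lines of code. The probability table below is entirely made up for illustration; a real model learns these probabilities from enormous amounts of text. The loop does exactly what the article describes: given a context from a human, it repeatedly picks the single most likely next word.

```python
# Hypothetical next-word probabilities; a real language model learns
# these from billions of sentences rather than a hand-written table.
probs = {
    "My brain":     {"is": 0.6, "was": 0.3, "hurts": 0.1},
    "brain is":     {"boiling": 0.5, "not": 0.4, "tired": 0.1},
    "is boiling":   {"with": 0.8, "over": 0.2},
    "boiling with": {"ideas": 0.9, "anger": 0.1},
}

def generate(context, max_words=4):
    """Continue `context` one word at a time, always taking the likeliest word."""
    words = context.split()
    for _ in range(max_words):
        key = " ".join(words[-2:])  # condition on the last two words
        if key not in probs:
            break
        # One word at a time: append the single most probable continuation.
        words.append(max(probs[key], key=probs[key].get))
    return " ".join(words)

print(generate("My brain"))  # "My brain is boiling with ideas"
```

Because the most probable word is chosen at each step, the output is fluent but, as Rosenberg notes, highly predictable.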
Rosenberg explains that, in terms of linguistic construction alone, the AI can be quite precise. “It makes no errors, and that is why these articles are very convincing. But the problem is when the content falls short. The AI cannot write a novel because it has only seen text examples.”
To write text like a human, AI would have to combine language with vision, video and audio so it could learn about the real world, not just sentences.
“I don’t see any chance that professionals will be replaced by AI,” adds Rosenberg. “These models only give us a perspective; they don’t really understand what is going on in the world. I think the most interesting field is humans and AI working together. In medicine, for example, we don’t rely on an AI system to make all of our medical decisions, but it can help a doctor find things they might otherwise have missed, offering suggestions along with a level of certainty. It helps them discover new things.”
How to know if a text was written by a robot?
Many texts can already be written by these AI models, and they are used for different purposes: text generated for a web page, the summary of a sports game, word suggestions while you write, or even subtitles.
For this reason, the MIT-IBM Watson Lab and Harvard NLP created a tool called GLTR to determine whether a text was written by an AI or by a human.
It works like this: if a text was written by an AI, most of the words will likely be underlined in green, because they are among the “top 10 most likely words.” Words in the “top 100 most likely words” are underlined in yellow, those in the “top 1000” in red, and those outside that range in violet. Since we humans do not write by picking the most probable word, human text shows far more yellow, red and violet.
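The color scheme described above amounts to bucketing each word by the rank the model assigned it. The sketch below is an illustration of that bucketing only; the rank numbers are hypothetical, whereas GLTR itself obtains them by querying a real language model.

```python
# Sketch of the color scheme described above: each word is ranked by how
# likely the model thought it was, then bucketed by that rank.
def gltr_color(rank):
    if rank <= 10:
        return "green"    # top 10 most likely words
    if rank <= 100:
        return "yellow"   # top 100
    if rank <= 1000:
        return "red"      # top 1000
    return "violet"       # outside the top 1000

# Hypothetical ranks for six words. Machine text skews heavily green;
# human text shows more yellow, red and violet.
ranks = [1, 3, 7, 42, 850, 5000]
print([gltr_color(r) for r in ranks])
# ['green', 'green', 'green', 'yellow', 'red', 'violet']
```

A mostly green page is therefore a hint, though not proof, that a machine picked the words.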