Everyone is talking about “GPT” these days.
I contend that few people even know what GPT stands for (Generative Pre-trained Transformer).
I also contend that even fewer people know what a GPT actually is. My version is that GPTs are a type of Large Language Model, specifically Artificial Neural Networks based on the Transformer architecture, which makes it possible to pre-train the model on large data sets of unlabelled text, and therefore to generate human-like content much better than older types of Neural Networks, which had to be trained on labelled data.
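To make "unlabelled" concrete, here is a minimal Python sketch (my own illustration, not any real training pipeline) of how plain text supplies its own training signal during pre-training: every prefix of a sentence becomes the input, and the next word becomes the target.

```python
# Minimal sketch: how unlabelled text becomes (input, target) training pairs.
# No human labelling needed -- the text itself provides the "labels".

text = "the cat sat on the mat"
tokens = text.split()  # real models use subword tokenizers, not whitespace

# Each prefix predicts the next token: this is the whole pre-training objective.
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, target in pairs:
    print(f"input: {context!r} -> predict: {target!r}")
# input: ['the'] -> predict: 'cat'
# input: ['the', 'cat'] -> predict: 'sat'
# ...
```

The "label" for each example is just the next word of the text itself, which is why no human annotation is required.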
Also, most people do not realize that GPT models "fake" understanding. They don't have any real understanding of the world, of physics, or of human nature. In other words, they don't know sh*t.
What’s a good example that can illustrate the limitations of GPT?
The Appke typo.
This is a comment I saw earlier today on Hacker News. Note the "… original Appke Mac team…".
Of course, you can easily imagine that the commenter meant to type Apple and typed Appke instead. You glance at your keyboard and quickly confirm that the letters K and L are indeed next to each other. This is one of the typical "fat finger" typos that easily happen when typing on a keyboard.
A GPT model wouldn’t have the faintest idea of what I’m talking about here.
It could, however, notice that typos quite often involve swapping the letters L and K, among several other combinations. Why? GPT wouldn't know. We humans know that these typos are more frequent because those keys sit next to each other on the keyboard.
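That knowledge is trivial to write down once you have seen a keyboard. As a toy sketch (entirely my own illustration, with a hand-written adjacency map covering only the keys we need here), this is the kind of world knowledge a text-only model never gets to observe:

```python
# Toy sketch: world knowledge a text-only model never sees --
# which keys sit next to each other on a QWERTY keyboard.
# (The adjacency map is hand-written and covers only the keys needed here.)

QWERTY_NEIGHBORS = {
    "l": {"k", "o", "p"},
    "k": {"j", "l", "i", "o", "m"},
    "e": {"w", "r", "d", "s"},
}

def is_fat_finger(intended: str, typed: str) -> bool:
    """True if the words differ by exactly one adjacent-key substitution."""
    if len(intended) != len(typed):
        return False
    diffs = [(a, b) for a, b in zip(intended, typed) if a != b]
    if len(diffs) != 1:
        return False
    a, b = diffs[0]
    return b in QWERTY_NEIGHBORS.get(a, set())

print(is_fat_finger("apple", "appke"))  # True: L and K are adjacent
print(is_fat_finger("apple", "appze"))  # False: L and Z are not
```

A GPT can only observe that "appke" sometimes appears where "apple" was meant; the physical reason lives in the world, not in the text.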
Anyway, this is what I call the Appke typo.
i’m not sure i understand your argument...
https://chat.openai.com/share/95792193-2041-41ac-b1b5-6963f8ab41a6