ChatGPT is an anti-tool

ChatGPT is heavily hyped in the techbro sphere, for reasons that tend to fall apart under any degree of scrutiny. For me, though, ChatGPT is an anti-tool, and by this I mean that it not only fails to fulfill the advertised goal of being essentially an expert in your pocket, but also has a significant negative impact on humanity as a whole.

OpenAI's own brand of snake oil

If OpenAI's advertisement of their own product were even remotely plausible, it would be a monumentally amazing tool. But ChatGPT does not work as advertised. Period. With the LLM (large language model) approach it will never reach the goal of being a virtual expert in your pocket, no matter how many servers and GPUs you throw at it.
What ChatGPT does is essentially one thing – autocomplete, just at a huge scale. The difference between the autocomplete function on your phone's keyboard and ChatGPT comes down to scale alone. Sure, there are people who are conned by it in much the same way people are conned by cold reading – but that changes nothing about how OpenAI's product works in practice. ChatGPT can only pump out statistically likely tokens – it just has more statistics to draw on. It does not have any reasoning capability, and it does not know whether what it says has any basis in reality – all it “sees” are the tokens representing various letters and symbols, and how statistically likely they are to fit together.
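The “autocomplete at scale” point can be made concrete with a deliberately tiny sketch. The bigram predictor below is a hypothetical illustration, not how GPT is actually implemented – a real LLM replaces the count table with a neural network over subword tokens and billions of parameters – but the objective is the same kind of next-token prediction:

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": count which word follows which in a tiny
# corpus, then emit the statistically most likely successor.
corpus = "the cat sat on the mat and the dog slept on the mat".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    # Picks the most frequent successor. There is no notion of truth
    # or meaning anywhere here, only co-occurrence statistics.
    return follows[word].most_common(1)[0][0]

print(predict_next("on"))  # "the" follows "on" in every observed pair
```

Nothing in that table knows whether a mat exists; it only knows what tended to follow what – which is the whole of the argument above, just at a vastly smaller scale.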

Because of this, there is no way to implement an automated metric that checks the correctness of the information in ChatGPT's output. The best you can get is a value signifying how confident the statistical model is that the output is plausible natural language – which is useless to end users. Why is the lack of such a “correctness” value a big deal? Well... it essentially forces you to fact-check everything, or risk a potentially fatal mistake. It makes it impossible to responsibly trust the output of the text generator. And so it completely fails to fulfill that goal of being “an expert in your pocket”.

Making access to knowledge harder

If the above misleading advertising were all there was to it, I wouldn't call ChatGPT an anti-tool – but the issue is that this product can be used effectively for a few things – things that are, to say the least, not exactly beneficial to humanity as a whole.

ChatGPT excels when all you need is plausible-sounding text, with no regard for correctness. This in turn means it is a great tool for generating spam, running effective misinformation campaigns, and content-milling useless ad-filled pages. It also makes plagiarism significantly easier, requiring far less effort to change things up just enough to not be immediately noticeable. All this means the density of useless information rises dramatically, making it significantly harder to reach actual knowledge.

With how much easier ChatGPT makes it to poison the pool of information and knowledge, the damage caused by this product is very much significant and noticeable – even if you never use the thing yourself.

But wait... there's more!

The above is, in my opinion, more than enough to call ChatGPT an anti-tool. And yet... there are other problems with this product, other ways in which it causes damage to global society.

The creation of ChatGPT is exploitative in nature. The training data had to come from somewhere, and considering the size of the dataset necessary, OpenAI simply scraped all the textual data they could from the internet, including social media content and pirated material. The business model of ChatGPT relies on taking the labor of other people, without consent or compensation. It's not even a matter of copyright here, as OpenAI does license some copyrighted material – but the people compensated are not the authors of the works, but the capitalists. One could argue this would be morally fine if they made ChatGPT available for free... but even that line of defense is shattered instantly by the fact that ChatGPT is a commercial product. A commercial product that repackages the works of many, many ordinary people.

And then there is the environmental impact. Hard data on the power usage of the datacenters behind ChatGPT is sadly difficult to find – this is not exactly the kind of information marketing teams want to share. But it is a fact that merely training an LLM is computationally expensive, burning through a lot of energy. The fact that adding new information to the model requires retraining it from scratch only amplifies the power usage. OpenAI keeps throwing more and more computational resources at its product, chasing the capitalistic ideal of infinite growth – and so the power usage rises even further, while the lack of utility stays the same. Even the individual queries to the statistical model are hilariously expensive, requiring far more resources and energy per query than a search engine.

What's next?

I am sceptical that the current LLM hype is a bubble. After all, corporations love not paying people for their hard work, and ChatGPT – like other generative AI models – lets corpos exploit the work of others far more easily. That being said, I do have hope that ChatGPT will prove unsustainably expensive and run out of room for capitalistic growth sooner rather than later – even if that hope is very much limited.

We can still try to avoid some of ChatGPT's harmful effects by promoting a move away from the corporate internet – by making moves to give the internet back to the people. Encourage people to set up their own websites, encourage people to use RSS feeds, encourage people to use decentralized social media. The less incentive there is to chase the numbers and capitalistic ideals, and the less need there is to appease the content algorithms, the more human the internet will be... and I do think that is a good thing to strive for.