<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Capitalism &#8212; Makiki&#39;s Cave of Thoughts</title>
    <link>https://blog.kobold-cave.eu/tag:Capitalism</link>
    <description></description>
    <pubDate>Sat, 02 May 2026 03:44:57 +0000</pubDate>
    <item>
      <title>A &#34;small&#34; rant about the state of modern web</title>
      <link>https://blog.kobold-cave.eu/a-small-rant-about-the-state-of-modern-web</link>
      <description>Modern web fucking sucks - and it sucks for so many reasons that I have run out of anger at this point and only sadness remains. And yet I still feel the need to air my grievances - even though I know it ain&#39;t gonna change shit in practice. It will be just a somewhat structured, inconsequential rant - but I just need to put it out.</description>
      <content:encoded><![CDATA[<p>The modern web fucking sucks – and it sucks for so many reasons that I have run out of anger at this point and only sadness remains. And yet I still feel the need to air my grievances – even though I know it ain&#39;t gonna change shit in practice. It will be just a somewhat structured, inconsequential rant – but I just need to put it out.</p>



<p>So... where do I start?</p>

<h2 id="godawful-hardware-requirements">Godawful hardware requirements</h2>

<p>Well, starting with the low hanging fruit is probably the easiest.</p>

<p>To browse the internet, you need a device with somewhat excessive power and a somewhat excessive amount of RAM. And sure, I get that somewhat higher requirements to handle stuff like proper Unicode rendering make some sense; having a browser eat up more RAM than some professional tools, and having typical websites be impossible to properly view on lower-end devices, does not.</p>

<p>There is absolutely no legitimate reason why devices with 4 GB of RAM shouldn&#39;t be able to handle checking a news site and having something running in the background – yet alas, here we fucking are. And if you are going to say that these devices are not a thing anymore and everyone has an ample amount of RAM to (ab)use, I am going to shove those 16 GB RAM sticks you likely have down your throat.</p>

<p>Huh. Guess I do still have some anger left.</p>

<p>But yeah, I have recently talked with someone who has an old-ass laptop with 4 GB of RAM, and I remember the times when I, as a young whippersnapper, could run all kinds of Flash games and view pretty much any website on a Windows XP PC with 256 MB of RAM. And despite all the performance and bandwidth gains since then, the web at times feels more sluggish than it did back when a 10 MB Flash game was considered huge.</p>

<h2 id="utterly-insane-complexity">Utterly insane complexity</h2>

<p>Part of why the system requirements for browsing the web are so high comes down to the fact that web browsers are extraordinarily complex applications – and they have to be, because the web standards are extraordinarily complex. You have the easy part, which is HTML, and two absolutely fucked up monstrosities – CSS and JS.</p>

<p>CSS is used for styling the HTML. And god, it has so much fucking functionality that one has to wonder how the fuck anyone can even implement it all correctly. You not only have simple shit like changing the font or background of an element, you not only have more advanced yet still very useful things like flexbox layout, no, you also have animations. And not just 2D animations, no, you also have 3D ones. And then you realize that there are constantly new things being added to the CSS specification that bring it closer and closer to a general purpose language for some ungodly reason, while most websites would be just fine with like, idk, a few kB of CSS tops. Hell, even this blog would be more than fine with less CSS, but I am too lazy to build my own solution, and it is still better than a typical WordPress install, so well.</p>

<p>And as for JS... god, JS is so fucking ass, both as a language spec and as the entire goddamn ecosystem. It is just a fucking mess, with so many goddamn features and APIs constantly being added on top that if you are out of the loop for a year or so, getting back up to speed might be a bit of a struggle, because the best practices may have changed heavily during that time. And then you add the ecosystem on top: the npm dependency fuckfest of unauditability, a zillion different build systems (for an interpreted language, no less), a zillion different frameworks (with like 10 in more or less popular use), a fuckton of flavor-of-the-month libraries. At this point I wouldn&#39;t be surprised to see someone make webdev their entire fucking personality, way beyond a typical autistic or ADHD hyperfixation, cause that&#39;s what it sometimes feels like is expected of you just to keep up.</p>

<p>And all that complexity causes one more thing – making your own browser is a nigh impossible task now. Hell, even Microsoft tried doing so with Edge, and ultimately ended up making yet another fork of motherfucking Chromium. Like, it is a miracle that we still have Firefox and its forks, despite Mozilla&#39;s best efforts to fucking throw it all right into the fucking garbage can.</p>

<p>And sure, there is a new rendering engine actually being made – <a href="https://servo.org/">Servo</a> – but a rendering engine is just one part of a browser, and it is currently very much a work in progress, which only confirms how incredibly fucking hard it is to make even a barely working browser.</p>

<p>And yes, I know, there is a legitimate need for some of this complexity – but what we have now is way fucking beyond what we need. But hey, that unyielding increase in complexity serves to cement the dominance of Chromium on the browser market, so we ain&#39;t gonna see a change unless some kind of a revolution happens. Yay.</p>

<h2 id="lack-of-care-plenty-of-malice">Lack of care, plenty of malice</h2>

<p>Of course, the specs are just one part of the puzzle, and you can still make performant and lean websites. Hell, it is arguably easier than using all these big-ass frameworks and shit. The issue? You need to want to not force the client to download 10 MB of JS code to display like 10 KB of meaningful data, you need to want to care about the bandwidth and shit – but hey, let&#39;s add all those trackers and ads so that the website will be unusable on anything lower-end than a full-blown gaming computer.</p>

<p>And then you have all these goddamn frameworks, which, sure, make it a bit easier to create interactive frontends – but they are so fucking heavy. And hell, some people use them for relatively tiny stuff instead of full-blown apps in the browser, which pumps up the JS bundle size so fucking hard that I don&#39;t fucking know what can even be said at this point.</p>

<h2 id="corporate-social-media">Corporate social media</h2>

<p>You could make an entire series of articles on the effect that corporate social media has had on society. You could write an entire book on this.</p>

<p>Sites such as Facebook and Twitter effectively killed the good old forums via sheer convenience, and became algorithmic hellholes promoting aggravating shit for the purposes of engagement metrics. More clicks, more addiction, more ads shown, who cares <a href="https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/">if you incite genocide</a> by doing so. And this of course impacts people, pushing them to put out stuff that attempts to game the algorithm – while in practice they are the ones played by it. Posting various clickbaits, engagement baits, catchy misinformation, dropping hate bombs, and so on.</p>

<p>Furthermore, the algorithms want to use as much data as possible to make such websites more addicting and the ads more targeted – and as such, corporate social media attempt to learn everything they can about you without asking you for even a semblance of consent. Even if you don&#39;t have an account and don&#39;t visit the social media service, you are being fingerprinted and tracked at every single goddamn opportunity, whenever you find something with a “share to facebook” button or similar. Almost every ad displayed on a site also comes with a zillion trackers to spy on your movements. It is honestly absolutely terrifying.</p>

<p>The centralization of speech under these social media corpos also means that these corpos dictate what is even allowed to be said, giving them immense censorship abilities. A political party they don&#39;t like? Just cut its reach. Talking about women&#39;s reproductive health? Call it porn and ban it. Someone talking about a decentralized alternative to their service? You know they will aggressively ban such a thing. It fucking sucks.</p>

<h2 id="malicious-code">Malicious code</h2>

<p>So, JavaScript is actually a curse of the web. Sure, it is useful, but if you look at this with even a bit of scrutiny, you realize that it is running arbitrary, unauditable code taken from random places. Sure, the sandboxing is implemented solidly enough in the two browsers that still matter, but it&#39;s not like you can&#39;t do fucked up shit within that sandbox.</p>

<p>The aforementioned trackers, the utterly asinine spyware that is for some ungodly reason accepted by the populace – that is just one category of stupid bullshit that browsers are all too happy to allow. Nah, you also have some more direct and explicit attacks – for example, tricking the target into letting the website&#39;s push notifications in. These push notifications in turn show various forms of “your computer is infected” messages, which in turn lead the poor user into scamware shit.</p>

<p>You also have secret cryptominers, which burn through your computational resources like there is no tomorrow just so the owner of the miner can get a few cents or something.</p>

<p>You also have XSS attacks where the attacker injects their own JS code into a legitimate website. Sure, it requires the website to be vulnerable to such attacks... but you ain&#39;t gonna know until it is too late most of the time, unless you have a habit of attempting to hack every website you see. And hell, you probably won&#39;t know even after the XSS vuln fucks you – the user of the website.</p>
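<p>To make the XSS mechanism concrete, here is a minimal sketch in Python (with a made-up <code>steal</code> call standing in for a hypothetical malicious payload): interpolating untrusted input straight into HTML yields a live script tag, while escaping it yields inert text.</p>

```python
# Minimal XSS illustration: untrusted input dropped straight into HTML
# becomes a live script tag; escaping turns it into harmless entities.
import html

untrusted = "<script>steal(document.cookie)</script>"  # hypothetical payload

vulnerable = f"<p>Comment: {untrusted}</p>"         # script tag survives intact
safe = f"<p>Comment: {html.escape(untrusted)}</p>"  # entities only, no script

print(vulnerable)
print(safe)
```

<p>Server-side templating engines do the equivalent of <code>html.escape</code> automatically; XSS holes typically appear wherever that escaping is bypassed.</p>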

<h2 id="browser-monopoly">Browser monopoly</h2>

<p>So, there are two or three web browsers that matter: Chrome, Firefox, and, if you count it, Safari. You may ask, what about stuff like Edge, Opera, Brave or Vivaldi? Sorry, they are just Chromium forks. Like, at this point, if you see a new web browser, it is more likely than not a Chrome reskin.</p>

<p>Still, we have choices, right? Two or three competing browsers ain&#39;t bad, right? Well, it wouldn&#39;t be bad if Chrome/Chromium wasn&#39;t like 80-90% of browsers used. Ergo, Google dictates the web standards, and I can guarantee you that they do not care at all about the needs of people, they only care about forcing their adtech down your throat.</p>

<h2 id="conclusion">Conclusion</h2>

<p>*crying kobold noises*</p>

<p><a href="https://blog.kobold-cave.eu/tag:Web" class="hashtag"><span>#</span><span class="p-category">Web</span></a> <a href="https://blog.kobold-cave.eu/tag:Capitalism" class="hashtag"><span>#</span><span class="p-category">Capitalism</span></a></p>
]]></content:encoded>
      <guid>https://blog.kobold-cave.eu/a-small-rant-about-the-state-of-modern-web</guid>
      <pubDate>Wed, 03 Jul 2024 07:53:19 +0000</pubDate>
    </item>
    <item>
      <title>IPv4, centralization of the internet, and capitalism sucking balls yet again</title>
      <link>https://blog.kobold-cave.eu/ipv4-centralization-of-the-internet-and-capitalism-sucking-balls-yet-again</link>
      <description>The internet is designed to be decentralized, to have no central authority. It is designed to be a web of many networks communicating with each other. Anyone in theory is able to host their own services, their websites, and connect with others freely... except this is very much not the case. And well, there are quite a few reasons for this, to say the least - but here I want to focus on one of them - IPv4 address exhaustion.</description>
      <content:encoded><![CDATA[<p>The internet is designed to be decentralized, to have no central authority. It is designed to be a web of many networks communicating with each other. Anyone in theory is able to host their own services, their websites, and connect with others freely... except this is very much not the case. And well, there are quite a few reasons for this, to say the least – but here I want to focus on one of them – IPv4 address exhaustion.</p>



<h2 id="a-short-infodump-about-ipv4-and-ipv6">A short infodump about IPv4 and IPv6</h2>

<p>IPv4, in very simplified terms, is the protocol that carries essentially all internet traffic, with its specification finalized in the early 80s. It honestly works fine, except for one issue – the address range.</p>

<p>The creators of the internet standards didn&#39;t think the internet would be a big thing, so they made the address a 32-bit number – which gives us a bit under 4.3 billion addresses. Furthermore, they allocated these addresses to organizations willy-nilly, and suddenly there was a credible threat of exhausting the pool of available addresses.</p>

<p>So, what&#39;s the solution? Well, of course, extending the address space, right? And well, yes. That is <strong>the</strong> solution. And as such, in 1998 the standard for IPv6 was introduced. Main draw? 128 bits of address – ergo, 2^128 (roughly 340 undecillion) possible addresses. Main problem? It needs new hardware pretty much everywhere – and as such, steps were taken to stall the address exhaustion problem. Some steps were societal – some organizations released their huge IP ranges back to the common pool, and the IANA made the IP allocation policy way harsher to slow down the use of this limited resource. Some were technical, such as the introduction of NAT44 and eventually NAT444, for which I will explain the basic idea a bit later.</p>
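<p>The scale of that jump from 32 to 128 bits is easy to verify with a bit of arithmetic – a quick Python sketch:</p>

```python
# Size of the IPv4 vs IPv6 address spaces.
ipv4_space = 2 ** 32    # 32-bit addresses
ipv6_space = 2 ** 128   # 128-bit addresses

print(f"IPv4: {ipv4_space:,}")  # a bit under 4.3 billion
print(f"IPv6: {ipv6_space:,}")  # roughly 3.4 * 10^38, i.e. ~340 undecillion
# The ratio is 2^96: IPv6 holds that many full IPv4 internets.
print(f"Ratio: 2^96 = {ipv6_space // ipv4_space:,}")
```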

<p>Yet this still was not enough – the last block of IPv4 addresses was assigned in 2019, and IPv6 deployment is still nowhere near universal. Of course, there is a question that immediately comes to mind – why?</p>

<h2 id="why-despite-all-this-time-ipv6-was-not-widely-implemented">Why, despite all this time, has IPv6 not been widely implemented?</h2>

<p>The answer, while maybe not exactly simple, boils down to one thing: capitalism. Not only is there little capitalistic incentive to roll out IPv6, considering it is a large cost due to requiring new infrastructure, but keeping IPv4 alive for as long as possible is actually beneficial to the capitalists.</p>

<p>The “benefit” of IPv4 that I am talking about is simple – the aforementioned scarcity and address exhaustion. The established players on the ISP and cloud provider markets have wide enough pools of IPv4 addresses reserved for themselves to last them for a while. They treat it as a commodity, in a similar vein to how landlords and real estate traders treat land and housing – abusing the scarcity for profit, renting out IPv4 addresses for a not-insignificant amount of money.</p>

<p>The scarcity of IPv4 addresses also makes hosting things on premises – whether for business or for personal stuff – far more difficult than it should be. And the reasons for this are NAT44 and NAT444 – the address conservation mechanisms I mentioned a few paragraphs ago. As simply as I can put it, NAT44 allows multiple devices on the same network to share the same public IP, while NAT444 expands this concept to multiple networks at once. As ingenious a solution as this is, allowing various devices to initiate connections with relative ease – it causes a massive problem when you want someone from outside to connect to your devices. There are ways to work around this, but they are pretty darn technical and in some cases pretty unreliable.</p>
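<p>A toy sketch of the NAT44 idea (illustrative Python with made-up addresses from the documentation ranges; a real NAT tracks protocol state, timeouts, and much more): many private address/port pairs are mapped onto one public IP, each under a freshly allocated public port, and an inbound packet can only be routed back if an outbound mapping already exists – which is exactly why unsolicited connections from outside fail.</p>

```python
# Toy NAT44: many private (ip, port) pairs share one public IP,
# told apart by the public port the NAT allocates per mapping.
import itertools

PUBLIC_IP = "203.0.113.7"        # made-up address (documentation range)
_ports = itertools.count(40000)  # simplistic public-port allocator
nat_table = {}                   # (private_ip, private_port) -> public_port

def outbound(private_ip, private_port):
    """Translate an outbound connection to (public IP, allocated port)."""
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next(_ports)
    return PUBLIC_IP, nat_table[key]

def inbound(public_port):
    """Reverse-translate an inbound packet; None if no mapping exists."""
    for key, port in nat_table.items():
        if port == public_port:
            return key
    return None  # no prior outbound mapping: the packet is dropped

# Two private hosts share one public IP under different ports:
print(outbound("192.168.1.10", 51000))
print(outbound("192.168.1.11", 51000))
print(inbound(9999))  # unsolicited inbound traffic finds no mapping
```

<p>NAT444 (carrier-grade NAT) just stacks a second such layer at the ISP, so even your router&#39;s “public” side is itself behind someone else&#39;s translation table.</p>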

<p>Now, one may ask why I am mentioning difficulties with on-premises hosting as something that is advantageous for capitalists. And the answer is simple – it pushes people towards cloud provider services instead of their own hardware, and it pushes people towards corporate social media instead of their very own places on the internet. This in turn contributes to the centralization of the internet in the hands of a few big corpos – centralization that allows them to dictate the terms for smaller players and self-hosters, and to determine what is even allowed on the internet in the first place.</p>

<h2 id="capitalism-is-pain">Capitalism is pain</h2>

<p>There is a capitalistic incentive to prolong the life of the IPv4 protocol for as long as possible, and little incentive to transition towards IPv6. Not only do the big players gain an advantage from the issues of IPv4, they would also bear most of the costs of this technological transition.</p>

<p>Funnily enough, most of the networking hardware currently in use handles both IPv4 and IPv6 just fine. Hell, most likely your computer, your phone, and whatever networking devices you have at home handle both IPv4 and IPv6 too. The issue is the infrastructure – the thing that we common folk have little influence on. It doesn&#39;t matter that you can have your own IPv6 subnet when your ISP doesn&#39;t want to implement IPv6 in their own infrastructure. This means that for IPv6 to become a proper thing, we have to wait until either the capitalist class decides to be nice or the politicians decide to force the capitalist class to properly implement IPv6. In some countries that forcing factor does show up; in others it seems really unlikely that any politician will even bother – but every host that is migrated to IPv6 is at least some progress towards a freer internet.</p>
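<p>If you are curious where your own setup stands, here is a quick check using nothing but Python&#39;s standard library. A rough sketch only – a host publishing an AAAA record and your local stack supporting IPv6 say nothing about whether your ISP actually routes IPv6 traffic, which is the whole problem:</p>

```python
# Check two separate things: does the local network stack support IPv6
# at all, and does a given hostname publish an IPv6 (AAAA) address?
# Neither guarantees end-to-end IPv6 connectivity through your ISP.
import socket

def has_ipv6_address(hostname):
    """Return True if DNS resolution yields at least one IPv6 address."""
    try:
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET6)
        return len(infos) > 0
    except socket.gaierror:
        return False

# Whether Python was built with IPv6 support on this machine:
print(socket.has_ipv6)
# Whether a host resolves to an IPv6 address from here:
print(has_ipv6_address("localhost"))
```

<p>Even when both of these come back positive, traffic can still end up going over IPv4 because of what sits between you and the destination – hence the waiting game described above.</p>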

<p>Let&#39;s just hope that we will be able to see IPv6 dominate over IPv4 within our lifetimes.</p>

<p><a href="https://blog.kobold-cave.eu/tag:Capitalism" class="hashtag"><span>#</span><span class="p-category">Capitalism</span></a> <a href="https://blog.kobold-cave.eu/tag:Internet" class="hashtag"><span>#</span><span class="p-category">Internet</span></a></p>
]]></content:encoded>
      <guid>https://blog.kobold-cave.eu/ipv4-centralization-of-the-internet-and-capitalism-sucking-balls-yet-again</guid>
      <pubDate>Sun, 16 Jun 2024 21:43:39 +0000</pubDate>
    </item>
    <item>
      <title>ChatGPT is an anti-tool</title>
      <link>https://blog.kobold-cave.eu/chatgpt-is-an-anti-tool</link>
      <description>&lt;![CDATA[ChatGPT is definitely something very hyped in the techbro sphere, for reasons that tend to fall apart under any degree of scrutiny. For me though, ChatGPT is an anti-tool, and by this I mean that it not only does not fulfill the advertised goal of being essentially an expert in your pocket, but also has a significant negative impact on humanity as a whole.&#xA;!--more--&#xA;&#xA;OpenAI&#39;s own brand of snake oil&#xA;&#xA;If OpenAI&#39;s advertisement of their own product was even remotely plausible, it would be a monumentally amazing tool. But ChatGPT does not work as advertised. Period. With the LLM (large language model) approach it will never reach the goal of being a virtual expert in your pocket, no matter how many servers and GPUs you throw at it.&#xA;&#xA;img src=&#39;https://imgs.xkcd.com/comics/machine_learning.png&#39;/img&#xA;(source: https://xkcd.com/1838)&#xA;&#xA;What ChatGPT does is essentially equivalent to one thing - autocomplete, just at a huge scale. The difference between the autocomplete function you may use with your phone&#39;s keyboard and ChatGPT comes down to scale alone. Sure, there are people who are conned in a similar way to how people are conned by cold reading - but that changes nothing in how OpenAI&#39;s product works in practice. ChatGPT can only pump out statistically likely tokens - it just has more statistics to use. It does not have any reasoning capability, it does not know if what it says has any basis in reality - all it &#34;sees&#34; are the tokens representing various letters and symbols, and how statistically they fit together.&#xA;&#xA;Because of this, there is no way to implement any automated metric that will check the expected correctness of the information within the output of ChatGPT. The best you can get is a value signifying how sure the statistical model is that this is valid natural language, which is useless for the end users. 
Why is not having such a &#34;correctness&#34; value a big deal? Well... it essentially forces you to fact check everything, or risk a potentially fatal mistake. It makes it impossible to responsibly trust the output of the text generator. Due to this it completely fails to fulfill that goal of being &#34;an expert in your pocket&#34;.&#xA;&#xA;Making access to knowledge harder&#xA;&#xA;If the above misleading advertising was all there was to it, I wouldn&#39;t call ChatGPT an anti-tool - but the issue is that this product can be effectively used for a few things - things that are not exactly beneficial to humanity as a whole to say the least.&#xA;&#xA;ChatGPT excels when all you need is plausible-sounding text, with no regard for correctness. This in turn means that it is a great tool for generating spam, setting up effective misinformation campaigns, and content-milling useless ad-filled stuff. It also makes plagiarism significantly easier, as far less effort is needed to change things up just enough to not be immediately noticeable. All this means that the density of useless information rises dramatically, ergo making reaching for knowledge significantly harder.&#xA;&#xA;With how much easier poisoning the information and knowledge pool is thanks to ChatGPT, the damage caused by this product is very much significant and noticeable - even if you are not using this thing yourself.&#xA;&#xA;But wait... there&#39;s more!&#xA;&#xA;The above is in my opinion more than enough to call ChatGPT an anti-tool. And yet... well, there are other problems with this product, other ways in which it causes damage to the global society.&#xA;&#xA;The creation of ChatGPT is exploitative in nature. The training data had to come from somewhere. Considering the size of the dataset necessary, OpenAI simply scraped all the textual data they could from the internet, including all the social media content and pirated material. 
The business model of ChatGPT relies on taking the labor of other people, without consent or compensation. It&#39;s not even a matter of copyright here, as OpenAI does license some of the copyrighted stuff - but the people compensated are not the authors of the works, but instead the capitalists. One could argue that this would be morally fine if they made ChatGPT available for free... but even that line of defense is shattered instantly by the fact that ChatGPT is a commercial product. A commercial product that repackages the works of many, many common people.&#xA;&#xA;And then you have the environmental impact. It is sadly hard to find any hard data on the power usage of the datacenters required to power ChatGPT, as this kind of information is not exactly something marketing teams want to share. But it is a fact that merely training an LLM is computationally expensive, burning through a lot of energy. The fact that to add more information to the model you need to retrain it from scratch only amplifies the power usage. OpenAI is throwing more and more computational resources into their product, chasing the capitalistic ideal of infinite growth - and as such, the power usage rises even more, while the lack of utility stays the same. The queries to the statistical model themselves are also hilariously expensive - requiring far more resources and energy per query than a search engine.&#xA;&#xA;What&#39;s next?&#xA;&#xA;I am sceptical that the current LLM hype is a bubble. After all, corporations love not paying people for their hard work, and ChatGPT - as well as other generative AI models - does allow corpos to exploit the work of others in a far easier way. 
That being said, I do have hope that ChatGPT will be unsustainably expensive and will run out of room for capitalistic growth sooner rather than later - even if that hope is very much limited.&#xA;&#xA;We can still try to avoid some of the harmful effects of ChatGPT by promoting a move away from the corporate internet - by making moves to give the internet back to the people. Encourage people to set up their own websites, encourage people to use RSS feeds, encourage people to use decentralized social media. The less incentive there is to chase the numbers and capitalistic ideals, the less need there is to appease the content algorithms - the more human the internet will be... and I do think that is a good thing to strive for.&#xA;&#xA;#AI #Capitalism]]&gt;</description>
      <content:encoded><![CDATA[<p>ChatGPT is definitely something very hyped in the techbro sphere, for reasons that tend to fall apart under any degree of scrutiny. For me though, ChatGPT is an anti-tool, and by this I mean that it not only does not fulfill the advertised goal of <a href="https://web.archive.org/web/20240108223001/https://openai.com/chatgpt">being essentially an expert in your pocket</a>, but also has a significant negative impact on humanity as a whole.
</p>

<h2 id="openai-s-own-brand-of-snake-oil">OpenAI&#39;s own brand of snake oil</h2>

<p>If OpenAI&#39;s advertisement of their own product was even remotely plausible, it would be a monumentally amazing tool. But ChatGPT does not work as advertised. Period. With the <a href="https://en.m.wikipedia.org/wiki/Large_language_model">LLM (large language model)</a> approach it will never reach the goal of being a virtual expert in your pocket, no matter how many servers and GPUs you throw at it.</p>

<p><img src="https://imgs.xkcd.com/comics/machine_learning.png" alt="xkcd: Machine Learning">
(source: <a href="https://xkcd.com/1838">https://xkcd.com/1838</a>)</p>

<p>What ChatGPT does is essentially equivalent to one thing – autocomplete, just at a huge scale. The difference between the autocomplete function you may use with your phone&#39;s keyboard and ChatGPT comes down to scale alone. Sure, there are people who <a href="https://softwarecrisis.dev/letters/llmentalist/">are conned in a similar way to how people are conned by cold reading</a> – but that changes nothing in how OpenAI&#39;s product works in practice. ChatGPT can only pump out statistically likely tokens – it just has more statistics to use. It does not have any reasoning capability, it does not know if what it says has any basis in reality – all it “sees” are the tokens representing various letters and symbols, and how statistically they fit together.</p>
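<p>If you want to see the autocomplete analogy in action, here is a toy bigram model in Python – a deliberately silly sketch with a made-up corpus, but the frequency game it plays is the same one an LLM plays over billions of tokens:</p>

```python
# A toy bigram "language model": it only knows which word tends to
# follow which, and nothing about whether the result is true.
# The corpus below is made up purely for illustration.
from collections import Counter, defaultdict

corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese .").split()

# Count which word follows which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def most_likely_next(word):
    """Pick the statistically most frequent continuation."""
    return follows[word].most_common(1)[0][0]

# Generate from "the": the model confidently emits the more frequent
# (and false) continuation, because frequency is all it has.
word, out = "the", ["the"]
for _ in range(5):
    word = most_likely_next(word)
    out.append(word)
print(" ".join(out))  # the moon is made of cheese
```

<p>Nothing in that table distinguishes true statements from false ones – only common ones from rare ones. Scale that up and you have the core of the problem.</p>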

<p>Because of this, there is no way to implement any automated metric that will check the expected correctness of the information within the output of ChatGPT. The best you can get is a value signifying how sure the statistical model is that this is valid natural language, which is useless for the end users. Why is not having such a “correctness” value a big deal? Well... it essentially forces you to fact check everything, or risk a potentially fatal mistake. It makes it impossible to responsibly trust the output of the text generator. Due to this it completely fails to fulfill that goal of being “an expert in your pocket”.</p>

<h2 id="making-access-to-knowledge-harder">Making access to knowledge harder</h2>

<p>If the above misleading advertising was all there was to it, I wouldn&#39;t call ChatGPT an anti-tool – but the issue is that this product can be effectively used for a few things – things that are not exactly beneficial to humanity as a whole to say the least.</p>

<p>ChatGPT excels when all you need is plausible-sounding text, with no regard for correctness. This in turn means that it is a great tool for generating spam, setting up effective misinformation campaigns, and content-milling useless ad-filled stuff. It also makes plagiarism significantly easier, as far less effort is needed to change things up just enough to not be immediately noticeable. All this means that the density of useless information rises dramatically, ergo making reaching for knowledge significantly harder.</p>

<p>With how much easier poisoning the information and knowledge pool is thanks to ChatGPT, the damage caused by this product is very much significant and noticeable – even if you are not using this thing yourself.</p>

<h2 id="but-wait-there-s-more">But wait... there&#39;s more!</h2>

<p>The above is in my opinion more than enough to call ChatGPT an anti-tool. And yet... well, there are other problems with this product, other ways in which it causes damage to the global society.</p>

<p>The creation of ChatGPT is exploitative in nature. The training data had to come from somewhere. Considering the size of the dataset necessary, OpenAI simply scraped all the textual data they could from the internet, including all the social media content and pirated material. The business model of ChatGPT relies on taking the labor of other people, without consent or compensation. It&#39;s not even a matter of copyright here, as OpenAI does license some of the copyrighted stuff – <a href="https://12ft.io/proxy?q=https%3A%2F%2Fwww.wired.com%2Fstory%2Fopenai-axel-springer-news-licensing-deal-whats-in-it-for-writers%2F">but the people compensated are not the authors of the works, but instead the capitalists</a>. One could argue that this would be morally fine if they made ChatGPT available for free... but even that line of defense is shattered instantly by the fact that ChatGPT is a commercial product. A commercial product that repackages the works of many, many common people.</p>

<p>And then you have the environmental impact. It is sadly hard to find any hard data on the power usage of the datacenters required to power ChatGPT, as this kind of information is not exactly something marketing teams want to share. But it is a fact that merely training an LLM is computationally expensive, <a href="https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/">burning through a lot of energy</a>. The fact that to add more information to the model you need to retrain it from scratch only amplifies the power usage. OpenAI is throwing more and more computational resources into their product, chasing the capitalistic ideal of infinite growth – and as such, the power usage rises even more, while the <del>lack of</del> utility stays the same. The queries to the statistical model themselves are also hilariously expensive – requiring far more resources and energy per query than a search engine.</p>

<h2 id="what-s-next">What&#39;s next?</h2>

<p>I am sceptical that the current LLM hype is a bubble. After all, corporations love not paying people for their hard work, and ChatGPT – as well as other generative AI models – does allow corpos to exploit the work of others in a far easier way. That being said, I do have hope that ChatGPT will be unsustainably expensive and will run out of room for capitalistic growth sooner rather than later – even if that hope is very much limited.</p>

<p>We can still try to avoid some of the harmful effects of ChatGPT by promoting a move away from the corporate internet – by making moves to give the internet back to the people. Encourage people to set up their own websites, encourage people to use RSS feeds, encourage people to use decentralized social media. The less incentive there is to chase the numbers and capitalistic ideals, the less need there is to appease the content algorithms – the more human the internet will be... and I do think that is a good thing to strive for.</p>

<p><a href="https://blog.kobold-cave.eu/tag:AI" class="hashtag"><span>#</span><span class="p-category">AI</span></a> <a href="https://blog.kobold-cave.eu/tag:Capitalism" class="hashtag"><span>#</span><span class="p-category">Capitalism</span></a></p>
]]></content:encoded>
      <guid>https://blog.kobold-cave.eu/chatgpt-is-an-anti-tool</guid>
      <pubDate>Tue, 23 Jan 2024 09:37:14 +0000</pubDate>
    </item>
  </channel>
</rss>