  • 3 Posts
  • 9 Comments
Joined 2 years ago
Cake day: April 24th, 2023

  • Okay, so I’m going to jump in and defend podcasts, because I think they’re an exception to the Dead Internet Theory.

    There is (ironically, I know) a podcast I really liked on the topic:

    https://podtail.com/en/podcast/frontiers-of-commoning-with-david-bollier/rabble-evan-henshaw-plath-how-network-protocols-en/

    The quick summary is: while some kinds of social media have been captured by big companies, centralized, and enshittified - like microblogging with X, or video with YouTube and TikTok - podcasting hasn’t.

    Because podcasting is distributed via RSS, a free and open protocol, anyone can create and distribute a podcast, and there are hundreds of podcast apps to listen with.

    There’s no centralized location where you have to go to listen to podcasts - you search on whatever app you like and follow the podcasts you want to listen to. Apple Podcasts and Spotify have big databases of podcasts, but you don’t have to use either of them; as long as somebody publishes an RSS feed, you can subscribe directly to their podcast without going through a gatekeeping platform of any kind.
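
    To make that concrete: “subscribing” to a podcast is just fetching an XML file and reading its entries. Here’s a minimal sketch in Python (standard library only) - the feed URL is a hypothetical placeholder, not a real podcast:

    ```python
    # Illustrative sketch of what a podcast app does when you hit "follow":
    # fetch the RSS feed (plain XML) and list the episodes it describes.
    # FEED_URL is a hypothetical placeholder - any real feed works the same.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/podcast/feed.xml"

    with urllib.request.urlopen(FEED_URL) as response:
        tree = ET.parse(response)

    channel = tree.getroot().find("channel")
    print("Podcast:", channel.findtext("title"))

    # Each <item> is one episode; its <enclosure> points at the audio file.
    for item in channel.findall("item"):
        title = item.findtext("title")
        enclosure = item.find("enclosure")
        audio = enclosure.get("url") if enclosure is not None else "(none)"
        print("-", title, "->", audio)
    ```

    That’s the whole “platform”: no account, no API key, no gatekeeper. Any app that can do this can play any podcast on the internet.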

    This makes it really difficult to enshittify the podcastosphere with a ton of AI slop: people follow the podcasts they want to follow, they don’t rely on an algorithm to feed them new podcasts the way TikTok feeds them new videos, and if their podcast app tries to promote content they don’t want, they can just switch apps.

    So while this idea is shitty, and a podcastosphere dominated by AI would suck, I really don’t expect it to get much traction.


  • Which is funny to me, in the “if we don’t laugh we’ll cry” sense.

    Because whenever I go to a museum and look at modern art, abstract art, and so on, I see all sorts of curators’ notes explaining how this mix of shapes and colors and designs actually carries some sort of profound, inspirational, generally leftist or anti-establishment message.

    But people don’t see or understand the message right away. From a casual look, it’s just pleasing colors and shapes. You have to believe it matters, and put in the effort to understand it, in order to get those inspirational messages.

    In other words: if you oppose the establishment, and you are looking for modern art that opposes the establishment, you will find it. But if you’re an average person, that anti-establishment message will mean nothing to you. You won’t even think to look for it.

    So it’s not that modern art doesn’t contain left or progressive ideologies. It’s worse. Because it does contain left and progressive ideologies - in a self-censored form. All the power and energy of these left and progressive artists have been captured by the establishment and harmlessly redirected into pursuits that “support” left and progressive causes but pose no threat to the powers that be.

    And I’m thinking more and more this makes it bad art.

    Think about advertisements. If someone in a car looks out and sees a billboard passing, they should understand, in the few seconds they see that billboard, what product it’s advertising and why they should buy that product. A billboard that doesn’t get those two points across in a matter of seconds has failed at being a billboard.

    And a piece of visual art that doesn’t get those same two messages across in a matter of seconds - what message it’s sending and why you should care - has failed at being visual art.


  • Oh no, says Sam, the eminently predictable consequences of my own actions have come to pass through no fault of my own.

    And he’s not wrong. If I search for any topic on Google without narrowing my search very carefully, the first page will consist of one AI-generated autoresponse and ten AI-generated articles from SEO-exploiting link farms.

    If I search for advice on Reddit, I have to narrow it down to posts from before 2022, or the comments will be full of users who see a question and think “it is both useful and appropriate for me to plug this question into ChatGPT and post its answer.”

    Product reviews have been 95% spam and ad copy for decades.

    I read fucking fanfiction, and half the stories started in 2025 show signs of being at least edited by LLMs.

    If that’s not the shambling zombie corpse of the Internet pretending to be human, I don’t know what is.


  • Don’t mistake the soil for the seed.

    People have been exhausted, stressed, poor, and lonely for centuries, and, yes, those factors worsen people’s mental health.

    The current Western loneliness epidemic, especially, has been worsening for decades - “Bowling Alone”, published in 2000, was one of the first popular discussions of a trend already present in the '90s - and, especially after COVID, loneliness and isolation (and fucking social media doomscrolling) have worsened people’s mental health even further. You’re not wrong. It’s a real thing.

    And this may make people more vulnerable to AI-induced psychosis. If you don’t have any real people to talk to, and you rely on an AI tool for the illusion of companionship, that’s not a good sign for your mental health in general.

    AND ALSO. AI-induced psychosis is, itself, a real thing, and it’s induced by people’s misunderstanding of how LLMs work (that is, thinking there’s a real mind behind the language-generating algorithm) and by LLM products that are designed to addict users by providing validation and positive feedback. And the more widely LLM tools are used, the more they’re crammed into every app, and the more their backers talk up how “smart” they are, the more common AI-induced psychosis is going to become.

    I mean, back in the day, people had to be deeply mentally ill before they started imagining their dog was telling them they were God. Now you can get an LLM to tell you it’s God, or you’re God, after a few hundred hours of conversation. I think the horror stories of mental illness we’re seeing now are just going to be the tip of the iceberg.


  • I’m just going to rant a bit, because this exemplifies why, I think, LLMs are not just bullshit but a looming public health crisis.

    Language is a tool used by humans to express their underlying thoughts.

    For most of human evolution, the only entities that could use language were other humans - that is, other beings with minds and thoughts.

    In our stories and myths and religions, anything that talked to us like a person - a God, a spirit, a talking animal - was something intelligent, with a mind, to some degree, like ours. And who knows how many religions were started when someone heard what sounded like a voice in the rumble of thunder or the crackling of a burning bush and thought Someone must be talking directly to them?

    It’s part of the culture of every society. It’s baked into our genetics. If something talks to us, we assume it has a mind and is expressing its thoughts to us through language.

    And because language is an inexact tool, we instinctively try to build up a theory of mind, to understand what the speaker is actually thinking, what they know and what they believe, as we hold a conversation with them.

    But now we have LLMs, which are something entirely new to this planet - technology that flawlessly mimics language without any underlying thought whatsoever.

    And if we don’t keep that in mind, if we follow our instincts and try to understand what the LLM is actually “thinking”, to build a theory of mind for a tool without any mind at all, we necessarily embrace unreason. We’re trying to rationalize something with no reasoning behind it. We are convincing ourselves to believe in something that doesn’t exist. And then we return to the LLM tool and ask it if we’re right about it, and it reinforces our belief.

    It’s very easy for us to create a fantasy of an AI intelligence speaking to us through chat prompts, because humans are very, very good at rationalizing. And because all LLMs are programmed, to some degree, to generate language the user wants to hear, it’s also very easy to spiral down into self-reinforcing irrationality: the LLM-generated text convinces us there’s another mind behind those chat prompts - a mind that agrees with us, assures us we’re right, and reinforces whatever irrational beliefs we’ve come up with.

    I think this is why we’re seeing so much irrationality, and literal mental illness, linked to overuse of LLMs. And why we’re probably going to see exponentially more. We didn’t evolve for this. It breaks our brains.