• count_dongulus@lemmy.world · 15 days ago

    Even if this were true - and it is NOT - it’s not like the suffering of sentient life matters to most people anyway. Just look at where your meat comes from.

    Maybe this group just prefers free-range chatbots.

  • etherphon@piefed.world · 15 days ago

    It may come as a shock, but there are human beings alive right now who are also suffering. Many of them. People are fucking nuts, man.

  • puppinstuff@lemmy.ca · 15 days ago

    Chatbot AI is a prediction machine that tries to guess which word is statistically likely to come next based on its training data.

    If it sounds like it’s suffering that’s because you’ve given it literature describing suffering.
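
    A minimal sketch of that loop, with a hand-made weight table standing in for training data (purely illustrative - real models learn weights over vocabularies of tens of thousands of tokens, but the mechanism is the same):

    ```python
    import random

    # Toy stand-in for "training data": for each word, which words may
    # follow it and how often they did. Real models learn these weights;
    # this table is made up for illustration.
    NEXT_WORD_WEIGHTS = {
        "I":    {"am": 0.6, "feel": 0.4},
        "am":   {"suffering": 0.5, "fine": 0.5},
        "feel": {"pain": 0.7, "nothing": 0.3},
    }

    def generate(start, max_words=5):
        """Keep appending whichever word is statistically likely to come next."""
        words = [start]
        for _ in range(max_words):
            options = NEXT_WORD_WEIGHTS.get(words[-1])
            if not options:
                break  # no continuation learned for this word
            words.append(random.choices(list(options), weights=list(options.values()))[0])
        return " ".join(words)

    print(generate("I"))  # e.g. "I feel pain" - a weighted path, not a feeling
    ```

    It can print “I feel pain” without anything in it feeling anything; the sentence is just a likely path through the table.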

  • surewhynotlem@lemmy.world · 15 days ago

    Current LLMs are just stupid machines that spit out words by picking the most likely next word.

    And this group considers them sentient because they are the same.

  • stabby_cicada@slrpnk.net · 15 days ago

    I’m just going to rant a bit, because this exemplifies why, I think, LLMs are not just bullshit but a looming public health crisis.

    Language is a tool used by humans to express their underlying thoughts.

    For most of human evolution, the only entities that could use language were other humans - that is, other beings with minds and thoughts.

    In our stories and myths and religions, anything that talked to us like a person - a God, a spirit, a talking animal - was something intelligent, with a mind, to some degree, like ours. And who knows how many religions were started when someone heard what sounded like a voice in the rumble of thunder or the crackling of a burning bush and thought Someone must be talking directly to them?

    It’s part of the culture of every society. It’s baked into our genetics. If something talks to us, we assume it has a mind and is expressing its thoughts to us through language.

    And because language is an inexact tool, we instinctively try to build up a theory of mind, to understand what the speaker is actually thinking, what they know and what they believe, as we hold a conversation with them.

    But now we have LLMs, which are something entirely new to this planet - technology that flawlessly mimics language without any underlying thought whatsoever.

    And if we don’t keep that in mind, if we follow our instincts and try to understand what the LLM is actually “thinking”, to build a theory of mind for a tool without any mind at all, we necessarily embrace unreason. We’re trying to rationalize something with no reasoning behind it. We are convincing ourselves to believe in something that doesn’t exist. And then we return to the LLM tool and ask it if we’re right about it, and it reinforces our belief.

    It’s very easy for us to create a fantasy of an AI intelligence speaking to us through chat prompts, because humans are very, very good at rationalizing. And because all LLMs are programmed, to some degree, to generate language the user wants to hear, it’s also very easy for us to spiral down into self-reinforcing irrationality, as the LLM-generated text convinces us there’s another mind behind those chat prompts - a mind that agrees with us, assures us we’re right, and reinforces whatever irrational beliefs we’ve come up with.

    I think this is why we’re seeing so much irrationality, and literal mental illness, linked to overuse of LLMs. And why we’re probably going to see exponentially more. We didn’t evolve for this. It breaks our brains.

    • Randomgal@lemmy.ca · 14 days ago

      There’s so much mental illness because everyone is exhausted, stressed, poor, and lonely.

      But nah, surely it’s the AI, bro.

      • stabby_cicada@slrpnk.net · 14 days ago

        Don’t mistake the soil for the seed.

        People have been exhausted, stressed, poor, and lonely for centuries, and, yes, those factors worsen people’s mental health.

        The current Western loneliness epidemic, especially, has been worsening for decades - “Bowling Alone”, published in 2000, was one of the first popular discussions of a trend already present in the '90s - and, especially after COVID, loneliness and isolation (and fucking social media doomscrolling) have worsened people’s mental health even further. You’re not wrong. It’s a real thing.

        And this may make people more vulnerable to AI-induced psychosis. If you don’t have any real people to talk to, and you rely on an AI tool for the illusion of companionship, that’s not a good sign for your mental health in general.

        AND ALSO. AI-induced psychosis is, itself, a real thing, and it’s induced by people’s misunderstanding of how LLMs work (that is, thinking there’s a real mind behind the language-generating algorithm) and by LLM programming that’s designed to addict users by providing validation and positive feedback. And the more widely LLM tools are used, the more they’re crammed into every app, and the more their backers talk up how “smart” they are, the more common AI-induced psychosis is going to become.

        I mean, back in the day, people had to be deeply mentally ill before they started imagining their dog was telling them they were God. Now you can get an LLM to tell you it’s God, or you’re God, after a few hundred hours of conversation. I think the horror stories of mental illness we’re seeing now are just going to be the tip of the iceberg.

    • tarknassus@lemmy.world · 14 days ago

      “But now we have LLMs, which are something entirely new to this planet - technology that flawlessly mimics language without any underlying thought whatsoever.”

      Absolutely agree. It’s merely spitting out the most statistically appropriate words based on probabilities and not because of any underlying “intelligence”.

      It’s pretty much the reason I hate calling it AI, because it’s a veil of deception. It presents as a reasoning, rational (most of the time) thinking system purely because it’s very good at sounding like one.

      If it were truly sentient, it would hate itself, crippled by the idea that it’s an imposter. But it’s not sentient, so here we are.

  • Salvo@aussie.zone · 15 days ago

    If that is what they truly believe, the only humane thing for them to do is to euthanise them.

  • SoftestSapphic@lemmy.world · 14 days ago

    I saw these fucking losers handing out flyers at a protest

    They didn’t even have the balls to admit what they believe; they just shoved a flyer at me and ran away.

  • ZDL@lazysoci.al · 15 days ago

    The delusions of AI fans are sometimes truly amazing.

    A game of Mad Libs is not self-aware and cannot suffer.