• MountingSuspicion@reddthat.com

      That’s not a valid reason to lie. If it does not have the information, it should state as much. This post underscores one of the biggest issues with AI: it will confidently say whatever is “statistically plausible,” regardless of the actual truth.

      Edit: in case there is any confusion, by “should” I mean in an ideal scenario where AI could be used the way people currently think it can. I’m aware that’s not really how AI works, hence the rest of the comment noting the “statistically plausible” bit. AI makes factual errors on questions that could arguably be answered with its current dataset (how many b’s are in “blueberry,” etc.), and this is not a dataset issue; it’s a side effect of the way LLMs work. They are not reasoning machines. They are fancy algorithms. This makes them impractical in several areas where they’re already being deployed, and that’s a problem.
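The “blueberry” example is a good one because the question is trivially answerable by deterministic computation, which is exactly what an LLM does not do. A minimal sketch of the deterministic version:

```python
# Counting letters is an exact, deterministic operation over characters.
# An LLM, by contrast, operates on tokens and never inspects individual
# letters, which is why it can flub questions like this.
word = "blueberry"
count = word.count("b")
print(count)  # prints 2
```

The point is not that the answer is hard; it is that character-level facts are simply not what a token-prediction model computes.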

      • ddplf@szmer.info

        You don’t understand how AI works under the hood. It can’t tell you it’s lying, because it doesn’t know the concept of lying. In fact, it doesn’t know ANYTHING, literally. It’s not thinking; it’s predicting. It’s speculating about what a viable answer would look like based on its dataset.
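The “predicting, not thinking” point can be illustrated with a toy next-word predictor (this is a deliberately simplified sketch, not a real LLM): it always emits the statistically most frequent continuation seen in its training text, and has no mechanism for saying “I don’t know.”

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (an assumption for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev):
    """Return the most statistically plausible next word.
    Note: it always answers confidently; truth never enters into it."""
    return following[prev].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

Real LLMs predict over tokens with learned weights rather than raw counts, but the core move is the same: rank continuations by plausibility, then emit one.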

        You don’t actually get real answers to your questions - you only get text that the AI determined would seem most fitting for your prompt.