The more I see these issues, the more I think the problem is with gradient descent.
It’s like…
Imagine you have a machine draped in a sheet. Machine learning, for all the bells and whistles about attention blocks and convolutional layers, is still doing gradient descent, still playing "better or worse." But fundamentally it's not building its understanding of the world from "below". It's not taking blocks or fundamentals and combining them. It goes the other way about it: it takes a large field and tries to build an approximation that captures the folds whatever is under the sheet creates, but it has not one clue what lies under the sheet or why some particular configuration should result in such folds.
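To make the "sheet" picture concrete, here's a toy sketch (my own illustration, not from any ML framework): gradient descent bends a line toward samples of a hidden function purely by asking "does the error get better or worse if I nudge this way?", with no notion of what actually generated the data.

```python
# Toy gradient descent: fit y = w*x + b to samples from a hidden
# function, using only "which direction makes the loss smaller?".
def hidden_machine(x):
    return 2.0 * x + 1.0  # the thing "under the sheet" -- never seen directly

xs = [i / 10 for i in range(-20, 21)]
ys = [hidden_machine(x) for x in xs]

w, b, lr = 0.0, 0.0, 0.01
for step in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The fit ends up capturing the "folds" (the slope and offset) without ever containing the idea "this is a line through the origin shifted up by one" -- it's pure surface approximation.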
There was a really interesting critique on this matter a few weeks ago, I forget where. Also, the half glass of wine issue further highlights the matter. You can papier-mâché over the problem, but you'll not overcome it down this alley we've taken.
Depends. Pure LLM, sure, you are right. LLMs are a terrible way to “store” information.
Coupling LLMs with a decent data source, on the other hand, isn't such a terrible idea. E.g., answering the question with a Google search summarized by an LLM can work.
The bigger issues here are (a) when it doesn't search but does everything locally, and (b) that now the site owners lose traffic without compensation.
or (c) if scammers can manipulate which phone numbers get displayed in the summary
https://www.zdnet.com/article/scammers-have-infiltrated-googles-ai-responses-how-to-spot-them/
Thanks for the icon/avatar. Now that song is, once again, stuck in my head lol
The article is talking about GPT-5 supposedly being able to write in a literary style, but actually generating nonsense: "GPT-5 has been optimized to produce text that other LLMs will evaluate highly, not text that humans would find coherent"
Looks like it was trained to write prose that other LLMs find acceptable, not what humans would evaluate as being good.
Dead everything theory
AI companies love validating their tools with AI so this is no surprise. Everything is a loop with these people. A poop loop.
A dopey poop loop.
Thanks for the summary, clickbait headline made me not even want to click
The reason Claude rates ChatGPT slop as "literature" is that Claude is also an AI, with AI issues.
“AI”
Due to the nature of the algorithm, LLMs love to jam adjectives in front of as many nouns as possible, and somehow it's become even more prominent. Since there's a good chance AI is being trained on AI-generated text, I think it's the result of feedback. You could call it the sepia filter of text generators; let's hope it'll create a model collapse.
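The feedback-loop worry can be sketched with a toy "model collapse" simulation (a standard textbook illustration, not a claim about any real model): repeatedly fit a distribution to samples drawn from the previous generation's fit, and the variance drifts toward zero -- each generation is narrower and samier than the last.

```python
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the "human-written" data distribution
n = 50                # training samples per generation

start_sigma = sigma
for generation in range(2000):
    # Draw training data from the current model...
    data = [random.gauss(mu, sigma) for _ in range(n)]
    # ...and fit the next model to it (maximum-likelihood Gaussian).
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)

print(start_sigma, sigma)  # sigma has collapsed toward 0
```

The biased variance estimate loses a little spread every generation on average, so training-on-your-own-output steadily squeezes out the tails -- the statistical version of the sepia filter.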
Training LLMs with LLMs. What could ever go wrong? Vibe coding the vibe code generator. All for the sake of being the best and the fastest. Skynet, here we come. But like the chaotic, degenerate version that has no reason for killing everything.
It was never great, and with each generation it's getting so much more hit and miss. I'd rather just write using my own words; whilst my vocabulary isn't astounding, at least it sounds like I wrote it and I know it makes sense.
As for coding, I’ve personally found that a good chunk of the time, the code it spits out looks great but is often not functional without tweaking.
My work has a GPT which they trained up with a load of our code base. It outputs great looking stuff but damn does it make a lot of it up.
That's not bizarre at all. It's a direct effect of these things being optimized by having another AI judge the output; then the model gets tuned so it scores well with that judge.
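That tuning loop is basically Goodhart's law in action. A toy sketch (entirely made up -- this is not how any real lab scores models): hill-climb a text against a dumb "judge" that rewards adjective density, and the winner is adjective soup, not prose.

```python
import random

random.seed(1)

ADJECTIVES = {"luminous", "haunting", "ineffable", "gossamer", "aching"}
OTHER_WORDS = ["the", "river", "remembers", "a", "stone", "slowly"]
VOCAB = sorted(ADJECTIVES) + OTHER_WORDS  # sorted for reproducibility

def judge(words):
    # A stand-in "LLM judge": blindly rewards adjective density.
    return sum(w in ADJECTIVES for w in words) / len(words)

# Hill-climb a 10-word "poem" against the judge's score: mutate one
# word at a time, keep the mutation if the score doesn't drop.
text = random.choices(VOCAB, k=10)
for _ in range(500):
    candidate = text[:]
    candidate[random.randrange(10)] = random.choice(VOCAB)
    if judge(candidate) >= judge(text):
        text = candidate

print(" ".join(text))  # high-scoring, incoherent adjective soup
print(judge(text))     # maxed-out score of 1.0
```

The optimizer never sees "coherence", only the judge's number, so anything the judge overweights gets amplified until it crowds everything else out.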
This is what happens when Purple Prose and Word Salad fuck and have a baby.