

Creating software is a great example, actually. Coding absolutely requires reasoning. I’ve tried using code-focused LLMs to write blocks of code, or even some basic YAML files, but the output is often unusable.
They rarely make syntax errors, but they will do things like reference libraries that haven’t been imported or hallucinate functions that don’t exist. They also constantly misunderstand the assignment and create something that technically works but doesn’t accomplish the intended task.
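To give a made-up illustration of what I mean (the library and function names here are just for the example, not output from any particular model):

    import yaml  # the typical omission: the generated snippet called yaml.safe_load but never imported yaml

    def load_config(path):
        with open(path) as f:
            return yaml.safe_load(f)  # fine once the import is actually there

    # The hallucinated-function case looks more like:
    #     config = yaml.parse_file(path)  # no such function in PyYAML

Neither of those mistakes is a syntax error, which is exactly why they slip past a quick skim and only blow up when you run the thing.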
No. I’ve used several models to “teach” me about subjects I already know a lot about, and they all frequently get facts wrong. Why would I then trust them to teach me about something I don’t know?
No, because traditional search engines do that just fine.
See this comment.
I guess I’ve never tried this.
Kind of, but here’s the thing: it’s rarely faster than just using a good traditional search, especially if you know where to look and how to use advanced filtering features. Also (and this is key), verifying the accuracy of an LLM’s answer requires about the same amount of work as just not using an LLM in the first place, so I default to skipping the middleman.
Lastly, I haven’t even touched on the privacy nightmare that these systems pose if you’re not running local models.