Haven’t used any coding LLMs. I honestly have no clue about the accuracy of the comic. Can anyone enlighten me?
I use them frequently; they're extremely helpful, just don't ask them to write everything.
As for the comic, it's pretty inaccurate. The only one I find true is the "too much water" panel: sometimes the bots like to take … longer methods.
They're okay for tasks that reasonably fit in a single file. I use them for simple Python scripts, since they generally spit out something very similar to what I'd write, just faster. However, there's a tipping point where a task becomes too complex, they fall on their faces, and it becomes faster to write the code yourself.
I’m never going to pay for AI, so I’m really just burning the AI company’s money as I do it, too.
Self-host your LLMs. Qwen3:14b is fast, open source, and answers code questions with very good accuracy.
You only need ollama and a podman container (for Open WebUI).
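A minimal sketch of that setup (the image tag, port mappings, and environment variable are assumptions based on a common configuration; check the ollama and Open WebUI docs for your platform):

```shell
# Pull the model and start the local ollama API (listens on :11434 by default)
ollama pull qwen3:14b
ollama serve &

# Quick sanity check from the CLI
ollama run qwen3:14b "Write a Python function that reverses a string."

# Run Open WebUI in a podman container, pointed at the host's ollama API
# (host.containers.internal resolves to the host from inside the container)
podman run -d --name open-webui -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.containers.internal:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```

Then browse to http://localhost:3000 and pick qwen3:14b from the model list.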
Frankly, I don’t think you seriously tested anything that you’ve mentioned here.
Nobody’s using Qwen because it doesn’t do tool calls. Nobody really uses ollama for useful workloads because they don’t own the hardware to make it good enough.
That’s not to say that I don’t want self-hosted models to be good. I absolutely do. But let’s be realistic here.