logicprog 6 hours ago

This is just the same flippant, dismissive stuff as usual. At this point it's its own brand of anti-AI slop. Just because LLMs are not deterministic doesn't mean you can't effectively iterate on and modify the code they generate, or that they won't give you something useful almost every time. The article also brings up possible uses of LLMs for learning, then dismisses them as completely replaceable with a book, a tutorial, or a mentor. Never mind that books and tutorials aren't individually tailored to the person, what they want to work on, or what they're interested in; they can't be infinitely synthesized to take you as far as you want to go; they're often thin for certain technologies; and mentors are often very difficult to come by.

  • davydm 5 hours ago

    I get your part about mentors. I came up having to figure stuff out myself a lot via Stack Overflow and friends, where the biggest problem for me is usually how to ask the right question (e.g. with Elasticsearch, having to find and understand "index" vs "store" - once I have those two terms, searching is a lot easier; without them, it's a bit of a crapshoot). Mentors help here because they had to travel that road too and can probably translate from my description to the correct terms.
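    For anyone who hasn't hit it: that distinction lives in the field mapping. A minimal sketch, assuming Elasticsearch 8.x and its official TypeScript client (the index and field names here are made up):

      import { Client } from '@elastic/elasticsearch';

      const client = new Client({ node: 'http://localhost:9200' });

      async function createIndex() {
        await client.indices.create({
          index: 'articles', // made-up index name
          mappings: {
            properties: {
              // "index" controls whether the field is searchable at all
              title: { type: 'text', index: true },
              // "store" keeps the raw value retrievable on its own,
              // independent of the document _source
              raw_html: { type: 'text', index: false, store: true },
            },
          },
        });
      }

    Neither term is guessable from the words themselves, which is exactly the problem.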

    And I really wish I could trust an LLM for that, or, indeed, any task. But I generally find the answers fall into one of these useless buckets:

    1. Rewording the question as an answer (so common, so useless).

    2. Trivial solutions that are correct - one or two valid lines I could have written myself faster than getting an agent involved, and without the other detractors on this list.

    3. Wildly incorrect "solutions" - code that doesn't even build because the LLM can't take proper direction on which version of a library to target, so it keeps giving answers based on stale information. Try resolving a webpack 5 issue: you'll get a lot of webpack 4 answers and none of them will work, even if you specify webpack 5 (sketch below).

    4. The absolute worst: subtly incorrect solutions that look right and are confidently presented as correct. This has been my experience with basically every "oh wow, look what the LLM can do" demo. I'm that annoying person who finds the bug mid-demo.
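    To make bucket 3 concrete, here's roughly the kind of break I mean - a sketch, not from any real session, with a made-up entry path. Webpack 5 removed the automatic Node polyfills, so the webpack 4 idiom node: { fs: 'empty' } that old answers keep suggesting now fails config validation; the working spelling moved to resolve.fallback:

      // webpack.config.ts
      import type { Configuration } from 'webpack';

      const config: Configuration = {
        entry: './src/index.ts', // made-up entry point
        resolve: {
          fallback: {
            // webpack 4 answers say node: { fs: 'empty' }; webpack 5
            // rejects that outright - this is how you opt out of the
            // 'fs' polyfill now
            fs: false,
          },
        },
      };

      export default config;

    An LLM trained mostly on webpack 4 answers will hand you the old form over and over, no matter how you phrase the prompt.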

    The problems are:

    1. A person inexperienced in the domain will flounder for ages, trying out crap that doesn't work and understanding none of it.

    2. A person experienced in the domain will spend a fair amount of time correcting the LLM - and personally, I'd much rather write my own code via TDD-driven emergent design. I'll understand it, and it will be proven to work when it's done.

    I see that proponents of the tech often gloss over this and don't realise they're actually spending more time overall, especially once they have to polish out all the bugs or maintain the system.

    Use whatever you want, but I've got zero confidence in the models, and I prefer writing code to gambling. But to each their own.

    • galaxyLogic 3 hours ago

      The way I see AI coding agents at the moment, they are interns. You wouldn't give an intern responsibility for a whole project. You need an experienced developer who COULD do the job with some help from interns - only now the AI gets to be the intern.

      There's an old saying: "Fire is a good servant but a bad master." I think the same applies to AI. In "vibe coding", the AI is too much the master.

anonym29 6 hours ago

"New thing bad, old things good"

- people throughout all of recorded human history

  • 000ooo000 6 hours ago

    "New thing good, old thing bad"

    - also people throughout all of recorded human history