4/3/2025 at 6:55:10 AM
I believe there are two kinds of skill: standalone and foundational. Over the centuries we've lost and gained a lot of standalone skills. Most people throughout history would scoff at my poor horse-riding and sword fighting, or my inability to navigate by the stars.
My logic, reasoning and oratory abilities, on the other hand, as well as my understanding of fundamental mechanics and engineering principles, would probably hold up quite well (language barrier notwithstanding) back in ancient Greece or in 18th-century France.
I believe AI is fine to use for standalone skills in programming. Having it write isolated bits of logic, e.g. a getRandomHexColor() function in JavaScript or a query in an SQL dialect you're not deeply familiar with, is a great help and time saver.
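For concreteness, such a helper might look like this (a minimal sketch in plain JavaScript; the exact implementation is an assumption, not code from the thread):

    // Returns a random CSS color string like "#3fa2c8".
    function getRandomHexColor() {
      // Pick an integer in [0, 0xFFFFFF] and render it as six hex digits,
      // left-padding with zeros so small values like 0xFF become "#0000ff".
      const n = Math.floor(Math.random() * 0x1000000);
      return '#' + n.toString(16).padStart(6, '0');
    }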
On the other hand, handing over the fundamental architecture of your project to an AI will erode your foundational problem-solving and software design abilities.
Fortunately, AI is quite good at the former but still far from being able to do the latter. So, to me at least, AI-based code editors are helpful without the risk of long-term skill degradation.
by wolframhempel
4/3/2025 at 10:06:29 AM
I'd classify this as theoretical skills vs tool skills. Even your engineering principles are probably superior to those of the ancient Greeks, since you can simulate bridges before laying the first stone. "It worked the last time" is still a viable strategy, but the models we have today mean we can often say "it will work the first time we try."
My point being that theory (and thus what is considered foundational) has progressed as well.
by flowerthoughts
4/3/2025 at 7:01:40 AM
> horse-riding, sword fighting or my inability to navigate by the stars.

More suitable examples would be warranted here; none of these were as widespread or common as you'd assume, so little to no metaphorical scoffing would happen over them. Now, sewing and darning, and subsistence skills, while mundane, are uncommon for many of us.
by politelemon
4/3/2025 at 12:37:48 PM
For some strange reason, I'm better at sewing than both my wife and mother-in-law. I learned it in public school, when both genders learned both woodworking and sewing, and I maintained an interest so that I could wear “grunge” in the 1990s. The teachers I had still remembered, from earlier in their careers, when those classes were gendered.

by sshine
4/3/2025 at 11:08:52 AM
> still far from being able to do the latter

These models have been in wide use for under three years; AI IDEs, for barely a year. Gemini 2.5 Pro is shockingly good at architecture if you make it into a conversation rather than expecting a one-shot exercise. I share your native skepticism, but the pace of improvement has taken me aback and made me reluctant to stake much on what LLMs can't do. Give it 6 months.

by codebra
4/3/2025 at 10:40:56 AM
Taking your SQL example: if you don't properly understand the SQL dialect, how can you know that what the AI gives you is correct?

by sceptic123
4/3/2025 at 11:08:04 AM
I'd say because, psychologically (and also based on CS theory), creating something and verifying it draw on similar but distinct skills. It's like NP: finding a solution can be very hard, but verifying that a proposed solution is correct is easy; that's essentially the definition of NP.
You might not know the statements required, but once the AI reminds you which statements are available, you can check that the logic built from them makes sense.
Yes, there is a pitfall of being lazy and forgetting to verify the output. That's where a lot of vibe-coding problems come from, in my opinion.
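To make that asymmetry concrete, here's a minimal sketch (hypothetical JavaScript, reusing the getRandomHexColor() helper from the top comment, not code anyone in the thread posted): checking a property of the output is far cheaper than writing the generator.

    // Verification is cheap: we don't need to know how the AI built the
    // helper, only that its output satisfies the property we care about.
    function looksLikeHexColor(s) {
      return /^#[0-9a-fA-F]{6}$/.test(s);
    }

    // Spot-check the AI-written helper a handful of times.
    for (let i = 0; i < 100; i++) {
      console.assert(looksLikeHexColor(getRandomHexColor()), 'bad color');
    }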
by LiKao
4/3/2025 at 1:18:34 PM
The biggest problem with LLMs is that they are very good at presenting something that looks like a correct solution without having the knowledge required to confirm whether it is indeed correct. So my concern is more "do you know how to verify" than "did you forget to verify".
by sceptic123
4/3/2025 at 7:07:25 AM
This is a great comment and says what I've been thinking but hadn't put into words yet. Too many people think what I do is "write code". That is incorrect. What I do is listen, read, watch and think. If code needs writing, then it already basically writes itself, because at that point I've already done the thinking. The typing part is an inconvenience that I'd happily give up if I could get my thoughts into the computer directly somehow.
AI tools make the easy stuff easier. They don't help much with hard stuff. The most useful thing I've found them for is getting an initial orientation in a completely unfamiliar area. But after that, when I need hard details, it's books, manuals, blogs etc just like before. I find juniors are already lacking in their ability to find and assimilate knowledge and I feel like having AI isn't going to help here.
by globular-toast
4/3/2025 at 8:05:43 AM
Abstracting away the software paraphernalia makes this clearer, in my view: our job is to understand and specify abstract symbolic systems. Making them work with current computer architectures is incidental. This is why I don't see LLM-assisted coding as revolutionary. At best I think it's a marginal improvement on indexing, search and code completion as they have existed for at least a decade now.
NLP is a poor medium for specifying abstract symbolic systems. And LLMs work by finding patterns in latent space, I think. But the latent space doesn't represent reality; it represents language as recorded in the training data. It's easy to underestimate just how much training data was used for the current state-of-the-art foundation models, and easy to overestimate the ability these tools have to weave language, and by induction to attribute reasoning abilities to them.
The intuition I have about these LLM-driven tools is that we're adding degrees of freedom to the levers we use. When you're near an attractor congruent with your goals, it feels like magic. But I think this is overfitting: the things we do now are closely mirrored by the data used to train these models. As we move forward in terms of tooling, domains, technology, culture etc., the available data will become increasingly obsolete, and relevant data increasingly scarce.
Besides, there's the problem of unknown unknowns: lots of people using these tools assume the attractor they see pulling on their outcome is adequate, because they can only see some arbitrary surface of it. Since they don't know what geometries lie beneath, they end up creating and exposing systems with unknown issues that may have implications for security, legality, morality, etc. And since there's a time delay between the feeling of accomplishment and the surfacing of issues, and they will likely use the same approach again, we might be heading for one hell of a bullwhip effect across dimensions we can't anticipate at all.
by namaria