4/25/2025 at 10:35:59 AM
We can finally just take a photo of a textbook problem that has no answer reference and no discussion about it and prompt an LLM to help us understand what's missing in our understanding of the problem, if our solution is plausible and how we could verify it.

LLM changed nothing though. It's just boosting people's intention. If your intention is to learn, you are in luck! It's never been easier to teach yourself some skill for free. But if you just want to be a poser and fake it until you make it, you are gonna brainrot waaaay faster than usual.
by gchamonlive
4/25/2025 at 11:02:02 AM
Note that we are the first wave of AI users. We are already well-equipped to ask the LLM the right questions. We already have experience with old-fashioned self-learning. So we only need some discipline to avoid skill atrophy.

But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
by m000
4/25/2025 at 3:56:04 PM
I was learning a new cloud framework for a side project recently and wanted to ask my dad about it since it's the exact same framework he's used for his job for many years, so he'd know all sorts of things about it. I was expecting him to give me a few ideas or have a chat about a mutual interest since this wasn't for income or anything. Instead all he said was "DeepSeek's pretty good, have you tried it yet?"

So I just went to DeepSeek instead and finished like 25% of my project in a day. It was the first time in my whole life that programming was not fun at all. I was just accomplishing work - for a side project at that. And it seems the LLMs are already more interested in talking to me about code than my dad, who's a staff engineer.
I am going to use the time saved to practice an instrument and abandon the "programming as a hobby" thing unless there's a specific app I have a need for.
by doright
4/25/2025 at 5:22:55 PM
I find this to be an interesting anecdote, because for a long time the most helpful advice you could give at a certain level was to point someone at the best reference for the problem at hand, which might have been a book or website or wiki or Googling for Stack Overflow, and now a particular AI model might be the most efficient way to give someone a 'good reference.' I could certainly see someone recommending a model the same way they may have recommended a book or tutorial.

On the point of discussing code... a lot of cloud frameworks are boring but good. It usually isn't the interesting bit and it is a relatively recent quirk that everyone seems to care more about the framework compared to the thing you actually wanted to achieve. It's not a fun algorithm optimization, it's not a fun object modeling exercise, it's not some nichey math thing of note or whatever got them into coding in the first place. While I can't speak for your father, I haven't met a programmer who doesn't get excited to talk about at least one coding topic; this cloud framework just might not have been it.
by xemdetia
4/25/2025 at 5:46:54 PM
> It usually isn't the interesting bit and it is a relatively recent quirk that everyone seems to care more about the framework compared to the thing you actually wanted to achieve. It's not a fun algorithm optimization, it's not a fun object modeling exercise, it's not some nichey math thing of note or whatever got them into coding in the first place.

I only read your comment after I posted mine, but my take is basically the same as yours: the GP thinks the IT learning-treadmill is fun and his dad doesn't.
It's not hard to see the real problem here.
by lelanthran
4/25/2025 at 5:44:46 PM
> It was the first time in my whole life that programming was not fun at all.

And learning new technologies in pursuit of resume-driven-development is fun?
I gotta say, if learning the intricacies of $LATEST_FAD is "fun" for you, then you're not really going to have a good time, employment-wise, in the age of AI.
If learning algorithms and data structures and their applicability in production is fun, then the age of AI is going to leave you with very in-demand skills.
by lelanthran
4/25/2025 at 5:55:27 PM
> And learning new technologies in pursuit of resume-driven-development is fun?

Nothing to do with employment. I was just doing a "home-cooked app"[0] thing for fun that served a personal use case. Putting it on my resume would be a nice-to-have to prove I'm still sharpening my skills, but it isn't the reason I was developing the app to begin with.
What I think, at least, is that the administration and fault monitoring of lots of random machines and connected infrastructure in the cloud might be left somewhat untouched by AI for now. But if it's just about slinging some code to have an end product, LLMs are probably going to overtake that hobby in a few years (if anyone has such a weird hobby that they'd want to write a bunch of code because it's fun and not to show to employers).
by doright
4/25/2025 at 1:51:53 PM
> There is a good chance that there will be a generational skill atrophy in the future

We already see this today: a lot of young people do not know how to type on keyboards, how to write in word processors, how to save files, etc. A significant part of a new generation is having to be trained on basic computer things just as our grandparents were.
It's very interesting how "tech savvy" and "tech competent" are two different things.
by dlisboa
4/26/2025 at 10:44:45 AM
Those are all very specific technical IT-related skills. If the next generation doesn't know how to do those things, it's because they don't need to. Not because they can't learn.
by esperent
4/26/2025 at 1:20:31 PM
Except both corporations and academia require them, and it's likely you'll need them at some point in your everyday life too. You can't run many types of business on tablets and smartphones alone.
by CM30
4/26/2025 at 2:26:18 PM
> Except both corporations and academia require them

And so the people who are aiming to go into that kind of work will learn these skills.
Academia is a tiny proportion of people. "Business" is larger but I think you might be surprised by just how much of business you can do on a phone or tablet these days, with all the files shared and linked between chats and channels rather than saved in the traditional sense.
As a somewhat related example, I've finally caved in to following all the marketing staff I hire and started using Canva. The only time you now need to "save a picture" is... never. You just hit share and send the file directly into the WhatsApp chat with the local print shop.
by esperent
4/26/2025 at 1:54:26 PM
...And the businessman in me tells me there will be a market for ever-simpler business tools, because computer-illiterate people will still want to do business.
by Phanteaume
4/26/2025 at 12:34:53 PM
Yes, but they weren't field-specific from the rise of the PC to the iPhone. Being billed as the next life skill, home-ec skill, public forum, etc. meant the average kid or middle-class adult was being judged on whether they were working on these skills.
by satanfirst
4/25/2025 at 4:24:11 PM
Jaron Lanier was a critic of the view that files were somehow an essential part of computing: https://www.cato-unbound.org/2006/01/08/jaron-lanier/gory-an...
Typing on a keyboard, using files and writing on a word processor, etc. are accidental skills, not really essential skills. They're like writing cursive: we learned them, so we think naturally everybody must and lament how much it sucks that kids these days do not. But they don't because they don't need to: we now have very capable computing systems that don't need files at all, or at least don't need to surface them at the user level.
It could be that writing or understanding code without AI help turns out to be another accidental skill, like writing or understanding assembly code today. It just won't be needed in the future.
by bitwize
4/26/2025 at 5:57:41 AM
> They're like writing cursive: we learned them, so we think naturally everybody must and lament how much it sucks that kids these days do not

Writing cursive may not be the most useful skill (though cursive italic is easy to learn and fast to write), but there's nothing quite like being able to read an important historical document (like the US Constitution) in its original form.
by musicale
4/25/2025 at 4:44:29 PM
Waxing philosophical about accidental/essential kinda sweeps under the rug that it's an orthogonal dimension to practicality for a given status quo. And that's what a lot of people care about, even if it's possible to win a conversation by deploying boomer ad hominem.

I will lament that professionals with desk jobs can't touch-type. But not out of some "back in my day" bullshit. I didn't learn until my 20s. I eventually had an "oh no" realization that it would probably pay major dividends on the learning investment. It did. And then I knew.
I was real good at making excuses to never learn, too. Much more resistant than the students/fresh grads I've since convinced to learn.
by dogleash
4/26/2025 at 12:38:45 AM
Typing was only a universally applicable skill for maybe the past three or four decades. PCs were originally a hard sell among the C-suite: "You mean before I get anything out of this machine, I have to type things into it? That's what my secretary is for!"

So if anything, we're going back to the past, when typing need only be learned by specialists who worked in certain fields: clerical work, data entry, and maybe programming.
by bitwize
4/25/2025 at 7:08:11 PM
> But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future

Spot on. Look at the stark difference in basic tech troubleshooting abilities between millennials and gen z/alpha. Both groups have had computers most of their lives, but the way that computers have been "dumbed down", for lack of a better term, has definitely accelerated that skill atrophy.
by noboostforyou
4/25/2025 at 1:31:33 PM
I'm far from an AI enthusiast, but concerning:

> There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
I don't know how to care for livestock or what to do to prepare and can a pig or a cow. I could learn it. But I'll keep using the way of least resistance and get it from my butcher. Or, to be more technological: I never learned how to make a bare OS capable of booting on a motherboard, and that still does not prevent me from deploying k8s clusters and coding apps to run on them.
by arkh
4/25/2025 at 1:49:19 PM
> I don't know how to care for livestock or what to do to prepare and can a pig or a cow. I could learn it. But I'll keep using the way of least resistance and get it from my butcher

You'd sing a different tune if there was a good chance of being poisoned by your butcher.
The two examples you chose are obvious choices because the dependencies you have are reliable. You trust their output and methodologies. Now think about current LLM-based agents running your bank account, deciding on loans, ...
by skydhash
4/25/2025 at 4:22:12 PM
Sure, but we will still need future generations of people who want to learn how to butcher and then actually follow through on being butchers. I guess the implied fear is that people who lack fundamentals and are reliant on AI become subordinate to the machine's whimsy, rather than the other way around.
by kevinsync
4/26/2025 at 12:06:01 AM
If your butcher felt the same way you did, he wouldn't exist.
by trefoiled
4/25/2025 at 5:48:08 PM
Maybe it's not so much that it prevents anything; rather, it will hedge toward a future where all we get is a jpeg of a jpeg of a jpeg. I.e., everything will be an Electron app or some other generational derivative not yet envisioned, many steps removed from competent engineering.
by jofla_net
4/25/2025 at 11:29:52 AM
"Lying is pretty amazingly useful. How are you going to teach your kid to not use that magical thing that solves every possible problem?" - Louis C.K.

Replace lying with LLM and all I see is a losing battle.
by raincole
4/25/2025 at 2:03:40 PM
This is a great quote, but for the opposite reason. Lying has been an option forever - people learn how to use it and how not to use it, as befits their situation and agenda. The same will happen with AI. Society will adapt; we first-AI-users will use it far differently than people in 10, 20, 30+ years. Things will change, bad things will happen, good things will happen; maybe it will be Terminator, maybe it will be Star Trek, maybe it will be Star Wars or Mad Max or the Culture.

Current parents, though, aren't going to teach kids how to use it; kids will figure that out, and it will take a while.
by gilbetron
4/25/2025 at 11:37:50 AM
We also grew up with the internet, and the newer generation is having a hard time following it. However, we were born after the invention of photography, and look at the havoc it's wreaking with post-truth.
The answer to that lies in reforming the education system so that we teach kids digital hygiene.
How on earth do we still teach kids Latin in some places but not Python? It's just an example; extrapolate Python to everything tech that is needed for us to have a healthy relationship with tech.
by gchamonlive
4/25/2025 at 1:08:48 PM
I've long maintained that kids must learn end to end what it takes to put content on the web themselves (registering a domain, writing some html, exposing it on a server, etc.) so they understand that _truly anyone can do this_. Learning both that creating "authoritative" looking content is trivial and that they are _not_ beholden to a specific walled garden owner in order to share content on the web.
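As a rough sketch of how small that end-to-end exercise can be (a hypothetical classroom demo, assuming Python 3 is installed; the page content and port are just examples, and registering a domain and renting a server are the remaining steps):

    # publish.py -- write a page and serve it using only the standard library
    from pathlib import Path
    import http.server
    import socketserver

    # "writing some html"
    Path("index.html").write_text("<h1>Anyone can publish this.</h1>")

    # "exposing it on a server": serve the current directory on port 8000;
    # visit http://localhost:8000 to see the page
    with socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler) as httpd:
        httpd.serve_forever()

by lostphilosopher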
4/25/2025 at 12:06:21 PM
> It's just an example; extrapolate Python to everything tech that is needed for us to have a healthy relationship with tech.

Perhaps that's also a reason why: tech is so large, there's no time in a traditional curriculum to teach all of it. And only teaching what's essential is going to be tricky, because who gets to decide what's essential? And won't this change over time?
by sjamaan
4/25/2025 at 12:25:58 PM
I don't think that argument holds. If you're going to pick anything in tech to teach the masses, Python is a pretty good candidate.

There is no perfect solution, but most imperfect attempts are superior to doing nothing.
by airstrike
4/25/2025 at 1:27:45 PM
I'd argue it's a bad candidate because it doesn't run in a normal person's computing environment. I can't write a Python application, give it to another normie, and have them able to run it; it doesn't run on a phone, it doesn't run in a web browser.

So it's teaching them a language they can't use to augment their own work or pass their work on to other non-techies.
by whywhywhywhy
4/26/2025 at 11:58:57 AM
What normal person computing environment has tools to program? The only thing I can think of is spreadsheet functions.
by harvey9
4/25/2025 at 9:02:09 PM
I'm not sure that's what we're solving for. There is no silver bullet. No single language runs on every phone.

If we're teaching everyone some language, we could very much decide that this language ought to be installed in the "normal person computing environment".
I definitely don't want people to learn to write code from JavaScript as it has way too many issues to be deemed representative of the coding experience.
by airstrike
4/25/2025 at 6:58:54 PM
JavaScript addresses most of your concerns, if you also teach how to deploy it. (I'm guessing that's what you were hinting at.)
by jimbokun
4/25/2025 at 2:13:49 PM
Yes, you can, actually.

PyInstaller will produce PE, ELF, and Mach-O executables, and py2wasm will produce wasm modules that will run in just about any modern browser.
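For instance, a minimal sketch of the PyInstaller route (the script and its name are just an example; PyInstaller is a third-party package):

    # hello.py -- a trivial script a beginner might want to hand to a non-techie
    print("Hello from a standalone executable!")

    # Then, from a terminal:
    #   pip install pyinstaller
    #   pyinstaller --onefile hello.py
    # The bundle lands in dist/hello (dist/hello.exe on Windows) and runs on a
    # machine with no Python interpreter installed.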
by anonym29
4/25/2025 at 3:23:33 PM
How is someone just learning coding expected to understand half the words you just typed?
by whywhywhywhy
4/25/2025 at 2:09:33 PM
Are grammar rules surrounding past participles and infinitives, or the history of long-dead civilizations that were ultimately little more than footnotes, really more important than basic digital literacy?
by anonym29
4/25/2025 at 6:33:26 PM
Some people would argue that understanding ancient civilizations and cultures is a worthy goal. I don't think it has to be an either/or thing.

Also, digital literacy is a fantastic skill - I'm all for it. And I think that digital (and cultural) literacy leads me to wonder if AI is making the human experience better, or if it is primarily making corporations a lot of money to the detriment of the majority of people's lives.
by UtopiaPunk
4/25/2025 at 3:10:45 PM
Right - if you see these things as useless trivia, why waste your time with them when you could be getting trained by your betters on the most profitable current form of ditch-digging?
by buescher
4/25/2025 at 1:08:45 PM
It likely no longer matters. Not in the sense that AI replaces programmers and engineers, but it is a fact of life. Like GPS replacing paper navigation skills.
by bitexploder
4/25/2025 at 1:24:47 PM
I grew up never needing paper maps. Once I got my license, GPS was ubiquitous. Most modern paper maps are quite the same as Google Maps or equivalents would be though. The underlying core material is the same so I don't think most people would struggle to read it.

I think learning and critical thinking are skills in and of themselves, and if you have a magic answering machine that does not require these skills to get an answer (even an incorrect one), it's gonna be a problem. There are already plenty of people who will repeat whatever made-up story they hear on social media. With the way LLMs hallucinate, and even double down when corrected, it's not going to make things better.
by mschild
4/25/2025 at 2:10:01 PM
> Most modern paper maps are quite the same as Google Maps or equivalents would be though. The underlying core material is the same so I don't think most people would struggle to read it.

That's absolutely not the case: paper maps don't have a blue dot showing your current location. Paper maps are full of symbols and conventions, and they have a fixed scale...
Last year I bought a couple of paper maps and went hiking. And although I am trained in reading paper maps and orientating myself, and the area itself was not that wild and was full of features, still I had moments when I got lost, when I had to backtrack and when I had to make a real effort to translate the map. Great fun, though.
by GeoAtreides
4/25/2025 at 12:29:10 PM
This is the worst form of AI there will ever be; it will only get better. So traditional self-learning might be completely useless if it really gets much better.
by ozgrakkurt
4/25/2025 at 1:12:56 PM
> it will only get better

I wanted to highlight this assumption, because that's what it is, not a statement of truth.
For one, it doesn't really look like the current techniques we have for AI will scale to the "much better" you're talking about -- we're hitting a lot of limits where just throwing more money at the same algorithms isn't producing the giant leaps we've seen in the past.
But also, it may just end up that AI provider companies aren't infinite-growth companies. Once they aren't able to print their own free money (stock) based on the idea of future growth, and they have to tighten their purse strings and start charging what it actually costs them, the models we'll have realistic, affordable access to will actually DECREASE.
I'm pretty sure the old fashioned, meat-based learning model is going to remain price competitive for a good long while.
by DanHulton
4/25/2025 at 3:52:57 PM
The real problem with AI is that you will never have an AI. You will have access to somebody else's AI, and that AI will not tell you the truth, or tell you what advances your interests... it'll tell you what advances its owner's interests. Already the public AIs have very strong ideological orientations, even if they are today the ones that the HN gestalt also happens to agree with, and if they aren't already today pushing products in accordance with some purchased advertising... well... how would you tell? It's not like it's going to tell you.

Perhaps some rare open source rebels will hold the line, and perhaps it'll be legal to buy the hardware to run them, and maybe the community will manage to keep up with feature parity with the commercial models, and maybe enough work can be done to ensure some concept of integrity in the training data, especially if some future advance happens to reduce the need for training data. It's not impossible, but it's not a sure thing, either.
In the super long run this could even grow into the major problem that AIs have, but based on how slow humanity in general has been to pick up on this problem in other existing systems, I wouldn't even hazard a guess as to how long it will take to become a significant economic force.
by jerf
4/25/2025 at 7:03:15 PM
Marc Andreessen has pretty much outright acknowledged that he and many others in Silicon Valley supported Trump because of the limits the Biden-Harris administration wanted to put on AI companies.

So yeah, the current AI companies are making it very difficult for public alternatives to emerge.
by jimbokun
4/25/2025 at 3:33:20 PM
Makes sense. I also don't think LLMs are that useful or will improve much, but I meant it in a more general sense: it seems like there will eventually be much more capable technology than LLMs. I also agree it can be worse X months/years from now, so what I wrote doesn't make that much sense in that way.
by ozgrakkurt
4/25/2025 at 3:12:18 PM
I felt this way until 3.7 and then 2.5 came out, and now o3 too. Those models are clear step-ups from the models of mid-late 2024, when all the talk of stalling was coming out.

None of this includes hardware optimizations either, which lag software advances by years.
We need 2-3 years of plateauing to really say intelligence growth is exhausted; we have just been so inundated with rapid advances that small gaps seem like the party ending.
by Workaccount2
4/25/2025 at 1:22:38 PM
I can get productivity advantages from using power tools, yet regular exercise has great advantages, too.

It's a bit similar with the brain, learning and AI use. Except when it comes to gaining and applying knowledge, the muscle that is trained is judgement.
by sho_hn
4/25/2025 at 12:35:23 PM
People say this, but the models seem to be getting worse over time.
by blibble
4/25/2025 at 1:03:18 PM
Are you saying the best models are not the ones out today, but those of the past? I don't see that happening with the increased competition, nobody can afford it, and it disagrees with my experience. Plateauing, maybe, but that's only as far as my ability to discern.
by esafak
4/25/2025 at 1:10:33 PM
Models are getting better. Gemini 2.5 Pro is incredible; compared to what we had a year ago it's on a completely different level.
by GaggiX
4/25/2025 at 12:58:53 PM
That's optimistic. Sci-fi has taught us that way worse forms of AI are possible.
by VyseofArcadia
4/25/2025 at 1:02:24 PM
Worse in the sense of capability, not alignment.
by esafak
4/25/2025 at 4:27:09 PM
Meanwhile, in 1999, somewhere on Slashdot:

"This is the worst form of web there will ever be; it will only get better."
by bitwize
4/25/2025 at 6:59:38 PM
Great way to put it. People who can't imagine a worse version are sorely lacking imagination.

I for one can't wait to be force-fed ads with every answer.
by alternatex
4/25/2025 at 12:24:35 PM
I have this idea that a lot of issues we are having today are not with concrete thing X, but with concrete thing X running amok in this big, big world of ours. Take AI for example: give a self-aware, slightly evil AI to physically and news-isolated medieval villagers somewhere. If they survive the AI's initial havoc, they will apply their lesson right away. Maybe they will isolate the AI in a cave with a big boulder on the door, to be removed only when the village needs advice regarding the crops or some disease. Kids getting near that thing? No way. It was decided in a town hall that that was a very bad idea.

Now, compare that with our world: even if thing X is obviously harming the kids, there is nothing we can do.
by dsign
4/25/2025 at 2:11:12 PM
It's still unconvincing that the shift to AI is fundamentally different from the shift to compiled languages, the shift to high-level languages, the shift to IDEs, etc. In each of those stages something important was presumably lost.
by mondrian
4/25/2025 at 3:09:52 PM
The shift to compiled languages, and from compiled languages to high-level languages, brought us Wirth's law.
by 68463645
4/25/2025 at 2:54:28 PM
> But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.

Just like there is already a generational gap with developers who don't understand how to use a terminal (or CS students who don't understand what file systems are).

AI will ensure there are people who don't think and just outsource all of their thinking to their LLM of choice.
by htrp
4/25/2025 at 5:23:13 PM
This is going to be like that thing where we have to fix printers for the generation above and below us, isn't it? Haha.

Damn kids, you were supposed to be teasing me for not knowing how the new tech works by now.
by corobo
4/25/2025 at 6:08:41 PM
Is AI going to be meaningfully different from vanilla Google searching though? The difference is a few extra clicks to yield mostly the same level of results.
by jimbob45
4/25/2025 at 11:00:18 AM
I don't think many social systems are equipped to deal with it though.

- Recruitment processes are not AI-aware and definitely won't be able to identify the more capable individuals, hence losing out on talent
- Police departments are not equipped to deal with the coming wave of complaints regarding cyberfraud as the tech illiterate get tricked by anonymous LLM systems
- Universities and schools are not equipped to deal with students submitting coursework completed by LLMs, hence missing their educational targets
- Political systems are not equipped to deal with subversive campaigns using unethical online entertainment platforms (let's not call them social media, please) such as FB, and they are definitely not equipped to deal with those campaigns when they boost their effectiveness with LLMs at scale
by netdevphoenix
4/25/2025 at 11:21:36 AM
> - Political systems are not equipped to deal with subversive campaigns using unethical online entertainment platforms (let's not call them social media, please) such as FB, and they are definitely not equipped to deal with those campaigns when they boost their effectiveness with LLMs at scale

Yes, and it seems to me that at least democracies haven't really figured out and evolved to deal with the Internet after 30 years.
So don't hold your breath !
by sunshine-o
4/25/2025 at 12:06:50 PM
Schools have had to contend with cheating for a long time, and no-device-allowed sitting exams have been the norm for a long while now.
by polotics
4/25/2025 at 3:19:25 PM
The amount of cheating, and the ease of it, has gone way up based on my monitoring of teaching communities. Like it's not even close in terms of before ChatGPT vs. after ChatGPT.

Worse yet, many educators are not being supported by their administration, since enrollments are falling and the admin wants to keep the dollars coming regardless of whether the students are learning.
It's worse than just copying Wikipedia, because plagiarism detectors aren't as effective and may never be.
It's an arms race and right now AI cheating has structural advantages that will take time to remove.
by Espressosaurus
4/25/2025 at 7:11:24 PM
Yes, but "no devices allowed sitting exams" address all of the ChatGPT cheating concerns.But that does nothing for homework or long term projects where you can't control the student's physical location for the duration of the work.
You could do a detailed interview after the work is completed, to verify the student actually understands the work they supposedly produced. But that adds to the time spent between instructors and students making it harder to scale classes to large sizes. Which may not be a completely bad thing.
by jimbokun
4/25/2025 at 10:56:07 AM
> It's just boosting people's intention.

This.
It will in a sense just further boost inequality between people who want to do things, and folks who just want to coast without putting in the effort. The latter will be able to coast even more, and will learn even less. The former will be able to learn / do things much more effectively and productively.
Since good LLMs with reasoning are here, I've learned so many things I otherwise wouldn't have bothered with - because I'm able to always get an explanation in exactly the format that I like, on exactly the level of complexity I need, etc. It brings me so much joy.
Not just professional things either (though those too of course) - random "daily science trivia" like asking how exactly sugar preserves food, with both a high-level intuition and low-level molecular details. Sure, I could've learned that if I wanted to before, but this is something I just got interested in for a moment and had like 3 minutes of headspace to dedicate to, and in those 3 minutes I'm actually able to get an LLM to give me an excellent tailor-suited explanation. This also made me notice that I've been having such short moments of random curiosity constantly, and previously they mostly just went unanswered - now each of them can be satisfied.
by cube2222
4/25/2025 at 11:09:20 AM
> Since good LLMs with reasoning are here

I disagree. I often get egregious mistakes from them.
> because I'm able to always get an explanation
Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough if we could trust LLMs to produce sound explanations every time.
So not only is getting the explanation a surrogate for learning something, you also risk internalizing spurious explanations.
by namaria
4/26/2025 at 1:58:18 PM
Some problems do not deserve your full attention/expertise.

I am not a physicist and I will most likely never need to do anything related to quantum physics in my daily life. But it's fun to be able to have a quick mental model, to "have an idea" about who Max Planck was.
by Phanteaume
4/25/2025 at 11:31:34 AM
Every now and then I give LLMs a try, because I think it's important to stay up to date with technology. Sometimes there have been specs that I find particularly hard to parse, in domains I'm a bit unfamiliar with, where I thought the AI could help. At first the solutions seemed correct, but on further inspection they were far more convoluted than needed, even if they worked.
by myaccountonhn
4/25/2025 at 12:45:02 PM
I can tell when my teammate's code contains LLM-induced/written code, because it "functionally works" but does so in a way that is so overcomplicated and unhinged that a human isn't likely to have gone out of their way to design something so wildly and specifically weird.
by FridgeSeal
4/25/2025 at 2:00:44 PM
That's why I don't bother with LLMs even for scripts. Scripts are short for a reason: you only have so much time to dedicate to them. And you often pillage from one script to use in another, because every line is doing something useful. But almost everything I generated with an LLM was both long and full of abstractions.
by skydhash
4/25/2025 at 11:22:51 AM
I think so too. Otherwise every Google Maps user would be an awesome wayfinder. The opposite is true.
by smallnix
4/25/2025 at 11:41:01 AM
First, as you get used to LLMs you learn how to get sensible explanations from them, and how to detect when they're bullshitting around, imo. It's just another skill you have to learn, by putting in the effort of extensively using LLMs.

> Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough if we could trust LLMs to produce sound explanations every time.
Every person learns differently, and different topics often require different approaches. Not everybody learns exactly like you do. What doesn't work for you may work for me, and vice versa.
As an aside, I'm not gonna be doing molecular experiments with sugar preservation at home, esp. since as I said my time budget is 3 minutes. The alternative here was reading about it on wikipedia or some other website.
by cube2222
4/25/2025 at 12:03:37 PM
> It's just another skill you have to learn, by putting in the effort of extensively using LLMs.

I'd rather just skip the hassle and keep using known good sources for 'learning about' things.
It's fine to 'learn about' things; that is the extent of most of my knowledge. But from reading books, attending lectures, watching documentaries, science videos on YouTube or, sure, even asking LLMs, you can at best 'learn about' things. And with various misconceptions at that. I am under no illusion that these sources give me more than a very vague overview of subjects.
When I want to 'learn something', actually acquire skills, I don't think that there is any other way than tackling problems, solving them, being able to build solutions independently and being able to explain these solutions to people with no shared context. I know very few things. But I am sure to keep in mind that the many things I 'know about' are just vague apprehensions with lots of misconceptions mixed in. And I prefer to keep to published books and peer reviewed articles when possible. Entertaining myself with 'non-fiction' books, videos etc is to me just entertainment. I never mistake that for learning.
by namaria
4/25/2025 at 2:03:15 PM
Reading an explanation is the first part of learning; ChatGPT almost always follows up with "do you want to try some example problems?"
by jerkstate
4/25/2025 at 5:50:27 PM
There's an adage I heard during my time in game dev that went something like "gamers will exploit the fun out of a game if you let them." The idea is that people presumably play video games to have fun; however, if given the opportunity, most players will take paths of least resistance, even if they make the game boring.
That temptation is enormously amplified if AI is used as a teaching tool in grade school! School is sometimes boring, and it can be challenging for a teen to push through a problem-set or essay that they are uninterested in. If an AI will get them a passing grade today, how can they resist?
These problems with AI in schools exist today, and they seem destined to become worse: https://www.whitehouse.gov/presidential-actions/2025/04/adva...
by UtopiaPunk
4/26/2025 at 1:00:13 AM
The internet really fueled this.

If you just play a game on its own, you end up playing all the non-optimal strategies and just enjoy the game the most fun way. But then someone will spend weeks with spreadsheets working out the absolute fastest way to progress the game, even if it means repeating the most mundane action ever.
Now everyone watches a YouTube guide to play the game and ignores everything but the most optimal way to play. Even worse, games almost expect you to do this and make playing the non-optimal route impossibly difficult.
by Gigachad
4/25/2025 at 3:17:33 PM
> We can finally just take a photo of a textbook problem...

You nailed it. LLMs are an autodidact's dream. I've been working through a physics book with a good-old pencil and notebook and got stuck on some problems. It turned out the book did a poor job of explaining the concept at hand, and I worked with ChatGPT+ to arrive at a more comprehensible derivation. Also the problems were badly worded, and the AI explained that to me too. It even produced a LaTeX study-guide document! Moreover, I can belabor a topic in a way I would not with a human, for fear of bothering them. So for me anyway, AI is not enabling brain rot, but brain enhancement. I find these technologies to be completely miraculous.
by julienchastang
4/26/2025 at 12:08:45 PM
The first thing an autodidact learns is not to use a single source/book for learning anything.
by skydhash
4/26/2025 at 12:23:17 PM
The second thing is that you can't go through all the books about everything in a lifetime. There is wisdom in choosing when to be ignorant.
by gchamonlive
4/25/2025 at 4:09:06 PM
The problem is that social systems aren't run off people teaching themselves things, and for many people being an autodidact won't raise their status in any meaningful way, so these are a poor set of tradeoffs.
by bookman117
4/25/2025 at 1:54:04 PM
This is a luxury belief. You cannot envision someone who is wholly unable to wield self-control, introspection, etc. These tools have major downsides specifically because they fail to really account for human nature.
by everdrive
4/25/2025 at 1:57:46 PM
Should we avoid building any tool if there's a chance someone with poor discipline might use that tool in a way that harms themselves?
by simonw
4/26/2025 at 12:34:37 PM
Generally, yes. Is this just an argument against safety precautions?

"Who needs seat belts and airbags? A well-disciplined defensive driver simply won't crash."
by bccdee
4/26/2025 at 1:18:13 PM
Seat belts and airbags (and the legislation that enforced them) were introduced as carefully designed trade-offs based on accumulated research and knowledge as to their impact.

We didn't simply avoid inventing cars because we didn't know how to make crashes safe.
by simonw
4/25/2025 at 3:19:18 PM
These tools are broadly forced on everyone. Can you really avoid smartphones, social media, content feeds, etc. these days? It's not a matter of choice -- society is reshaped and it's impossible to avoid these impositions.
by everdrive
4/25/2025 at 8:14:56 PM
The smartphone didn't take off because it was forced on people; otherwise we'd all be using Windows Mobile. The smartphone has real benefits, to state the obvious. The right course is to deal with the downsides, such as limiting its use in the classroom, but not to hinder its development. Same with LLMs.
by signatoremo
4/25/2025 at 2:00:57 PM
It's not about the tool itself, but more so the corporate interests behind the tools.

Open-source AI tools that you can run locally on your machine? Awesome! AI tools that are owned by a corporation with the intent of selling you things you don't need and ideas you don't want? Not so awesome.
by financetechbro
4/25/2025 at 2:10:53 PM
And employers requiring an increase in productivity off the back of providing you with access to those tools.
by sceptic123
4/25/2025 at 11:16:14 AM
Indeed. A friend of mine is a motion designer (and quite a talented one at that) and he goes on and on about how AI is gonna take his job away any day now. And sure, there are all these tools popping up basically enabling people to do (some of) what he does for a living. But I'm still completely uninterested in motion design. I might prompt a tool a few times to see what it does, but I'm just not interested in the process of getting things right. I can appreciate the result, but I'm not very interested in the craft, even if the craft is just a matter of prompting. That's why I work in a different field.

I will note, however, that it has expanded his capabilities. Some of the tools he uses are scriptable, and he can now prompt his way into getting these scripts. Something he previously would have needed a programmer for. In this aspect his capabilities now overlap mine, but he's still not the slightest bit more interested in actually learning programming.
by zppln
4/26/2025 at 1:00:20 PM
Nothing is free; without effort you're not learning.
by codr7
4/25/2025 at 10:44:17 AM
> of a textbook problem

Well said. A textbook problem that has the answer everywhere.
The question is, would you create similar neural paths if reading the explanation as opposed to figuring it out on your own?
by nottorp
4/25/2025 at 10:53:41 AM
> would you create similar neural paths

Excellent point, and I believe the answer is a resounding negative.
Struggling with a problem generates skills and knowledge which you then possess and recall more easily, while reading an answer merely acquires some information that competes with a whole host of other low-effort information that you need to remember.
by MonkeyClub
4/25/2025 at 11:02:37 AM
Unlikely. Reading the explanation involves memorising it temporarily and at best understanding what it means at a surface level. Figuring it out on your own also involves using and perhaps improving your problem-solving skills, in addition to understanding the explanation at a deeper level. I feel LLMs will be for our reasoning skills what writing was for our memory skills.

Plato might have been wrong about the ills of cyberization of a cognitive skill such as memory. I wonder if, two thousand years later, we will be right about the ills of cyberization of a cognitive skill such as reasoning.
by netdevphoenix
4/25/2025 at 11:12:55 AM
> Reading the explanation involves memorising it temporarily and at best understanding what it means at a surface level.

I agree. I don't really feel like I know something unless I can go from being presented with a novel instance of a problem in that domain and work out a solution by myself, and also explain that to someone else - not just happen into a solution.
> Plato might have been wrong about the ills of cyberization cognitive skill such as memory.
How so? From the dialogue where he describes Socrates discussing writing I get a pretty nuanced view that lands pretty much where you did above: access to writing fosters a false sense of understanding when one can read explanations and repeat them but not actually internalize the reasoning behind it.
by namaria
4/25/2025 at 10:50:27 AM
I believe there is a lot of value in trying to figure out things by myself -- ofc only focusing on things that I really care for. I have no issue relying on AI for most of the work stuff; it's boring anyway.
by hnthrowaway0315
4/26/2025 at 1:04:11 PM
I personally can't think of anything more boring than verifying shitty, computer-generated code.
by codr7
4/25/2025 at 11:38:29 AM
What's the difference? Isn't explaining things so that people don't have to figure them out by themselves the whole point of the educational system?

You will still need the textbook, because LLMs hallucinate just as much as a teacher can be wrong in class. There is no free lunch; the LLM is just a tool. You create the meaning.
by gchamonlive
4/25/2025 at 2:05:33 PM
> What's the difference? Isn't explaining things so that people don't have to figure them out by themselves the whole point of the educational system?

Then said a teacher, Speak to us of Teaching.
And he said:
No man can reveal to you aught but that which already lies half asleep in the dawning of your knowledge.
The teacher who walks in the shadow of the temple, among his followers, gives not of his wisdom but rather of his faith and his lovingness.
If he is indeed wise he does not bid you enter the house of his wisdom, but rather leads you to the threshold of your own mind.
The astronomer may speak to you of his understanding of space, but he cannot give you his understanding.
The musician may sing to you of the rhythm which is in all space, but he cannot give you the ear which arrests the rhythm nor the voice that echoes it.
And he who is versed in the science of numbers can tell of the regions of weight and measure, but he cannot conduct you thither.
For the vision of one man lends not its wings to another man.
And even as each one of you stands alone in God’s knowledge, so must each one of you be alone in his knowledge of God and in his understanding of the earth.
The Prophet by Kahlil Gibran
by skydhash
4/25/2025 at 10:51:39 AM
I'm using ChatGPT for this exact case. It helps me verify my solution is correct, and when it's not, where my mistake is. Without it, I would have simply skipped to the next problem, hoping I didn't make a mistake. It's definitely a win.
by bsaul
4/25/2025 at 11:48:30 AM
I mostly use ChatGPT to make my writing more verbose, because I've been told that it's too terse.
by spiritplumber
4/25/2025 at 4:46:55 PM
Terse writing is a gift. I'm an editor and I wish my writers were more terse.
by lee-rhapsody
4/25/2025 at 11:11:12 AM
Just curious: wouldn't this entire enterprise be fraught with danger though? Given the proclivity of LLMs to hallucinate, how would you (not you per se, but the person engaging with the LLM to learn) avoid being hallucinated to?

Being a neophyte in a subject, and relying solely on the 'wisdom' of LLMs, seems like a surefire recipe for disaster.
by signa11
4/25/2025 at 11:33:18 AM
I don't think so. It's the same thing with photography: https://en.m.wikipedia.org/wiki/On_Photography

If you trust symbols blindly, sure, it's a hazard. But if you treat it as a plausible answer then it's all good. It's still your job to do the heavy lifting of understanding the domain of the latent search space, curating the answers and verifying the generated information.

There is no free lunch. LLMs aren't made to make your life easier. They're made to let you focus on what matters, which is the creation of meaning.
by gchamonlive
4/25/2025 at 11:49:22 AM
I really don't understand your response. A better way to ask the same question would probably be: would you learn numerical methods from (a video of) Mr. Hamming or from an LLM?
by signa11
4/25/2025 at 11:56:44 AM
From Wikipedia:

> Sontag argues that the proliferation of photographic images had begun to establish within people a "chronic voyeuristic relation to the world."[1] Among the consequences of this practice of photography is that the meaning of all events is leveled and made equal.

This is the same with photography as with LLMs. The same with anything symbolic, actually. It's just a representation of reality. If you trust a photograph fully, that can give you a representation of reality that isn't grounded in reality. It's semiotics. Same with LLMs: if you trust one fully, you are bound to get screwed by hallucination.
There are gaps in the logical jumps, I know. I'd recommend you take a look at Philosophize This' episodes about her work to fill them at least superficially.
by gchamonlive
4/25/2025 at 11:42:41 AM
Most people will cut corners on verifying at the first chance they get. That's the existential risk.
by lazide
4/25/2025 at 12:05:37 PM
There are better things to do than focusing on these people, at least for me.
by gchamonlive
4/25/2025 at 2:17:32 PM
‘These people’ is everyone in the right circumstances. Ignore it at all our peril.
by lazide
4/25/2025 at 3:17:49 PM
If I have to choose peril for the sake of my sanity, I'd do so.

However, we are not talking about everyone, are we? Just people who "will cut corners on verifying at the first chance they get".
Is it you? I have no idea. I can only remain vigilant so it's not myself.
by gchamonlive
4/25/2025 at 7:32:40 PM
I teach languages at the college level. Students who seek "help" from side-by-side translations think this way, too. "I'm just using the translation to check my work; the translation I produced is still mine." Then you show them a passage they haven't read before, and you deny them the use of a translation, and suddenly they have no idea how to proceed -- or their translation is horrendous, far far worse than the one they "produced" with the help of the translation.

Some of these students are dishonest. Many aren't. Many genuinely believe the work they submit is their own, that they really did do the work, and that they're learning the languages. It isn't, they didn't, and they aren't.
People are quite poor at this kind of attribution, especially when they're already cognitively overloaded. They forget sources. They mistake others' ideas for their own. So your model of intention, and your distinction between those who wish to learn and those who pose, don't work. The people most inclined to seek the assistance that these tools seem to offer are the ones least capable of using them responsibly or recognizing the consequences of their use.
These tools are a guaranteed path to brain rot and an obstacle to real, actual study and learning, which require struggle without access to easy answers.
by globnomulous
4/26/2025 at 3:36:01 PM
> Many genuinely believe the work they submit is their own, that they really did do the work, and that they're learning the languages. It isn't, they didn't, and they aren't.

But I'm talking about a very specific intentionality in using LLMs, which is to "help us understand what's missing in our understanding of the problem, if our solution is plausible and how we could verify it".
My model of intention and the distinction rely on that. If your students are using LLMs to compose the final work, you have a unique chance to show them that LLMs are not designed to be used like that.
If not, then sure it's a guaranteed path to brain rot. But just as a knife can be used to both heal and kill, an LLM can be used to learn and to fake. The distinction lies in knowing yourself.
by gchamonlive
4/26/2025 at 11:24:04 AM
> Some of these students are dishonest. Many aren't. Many genuinely believe the work they submit is their own, that they really did do the work, and that they're learning the languages. It isn't, they didn't, and they aren't.

> People are quite poor at this kind of attribution, especially when they're already cognitively overloaded. They forget sources. They mistake others' ideas for their own.
This attitude is common not only among students; in my experience many people behave this way.

I also see some parallels to LLM hallucinations.
by SalariedSlave
4/25/2025 at 4:06:18 PM
Exactly, it's quite an enabler, as one of the biggest issues for folks is not wanting to ask questions for fear of looking inadequate. Now they have something they can ask questions of without outside judgement.
by diob
4/25/2025 at 2:07:13 PM
Yes, but realistically, can we expect the average person to follow what's in their long-term interest? People regularly eat junk food and doomscroll for 5 hours, knowing full well that it's bad for them long-term.
by biophysboy
4/26/2025 at 11:29:29 AM
Whether or not the AI is being factual when you ask it, it'll say anything with full conviction, possibly teaching you the wrong principles without you even knowing.
by yapyap
4/26/2025 at 12:32:33 PM
I had a teacher once in high school, an extremely competent one, but he was saying that a Horst was a tectonic valley and a Graben a tectonic mountain. I had just come from an exchange in Austria and that sounded just wrong to me, because they mean the opposite in German. It turned out it actually was wrong.
by gchamonlive
4/25/2025 at 6:38:35 PM
> If your intention is to learn, you are in luck! It's never been easier to teach yourself some skill for free.

I'll emphasize this: for generally well-understood subjects, LLMs make incredibly good tutors.
Talking to ChatGPT or whichever, I feel like I'm five years old again — able to just ask my parents any arbitrary "why?" question I can think of and get a satisfying answer. And it's an answer that also provides plenty of context to dig deeper / cross-validate in other sources / etc.
AFAICT, children stop receiving useful answers to their arbitrary "why?" questions — and eventually give up on trying — because their capacity to generate questions exceeds their parents' breadth of knowledge.
But asking an (entry-level) "why?" question to a current-generation model feels like asking someone who is a college professor in every academic subject at once. Even as a 35-year-old with plenty of life experience and "hobbyist-level" knowledge in numerous disciplines (beyond the ones I've actually learned formally in academia and in my career), I feel like I'm almost never anywhere near hitting the limits of a current-gen LLM's knowledge.
It's an enlivening feeling — it wakes back up that long-dormant desire to just ask "why? why? why?" again. You might call it addictive — but it's not the LLM itself that's addictive. It's learning that's addictive! The LLM is just making "consuming the knowledge already available on the Internet" practical and low-friction in a way that e.g. search engines never did.
---
Also, pleasantly, the answers provided by these models in response to "why?" questions are usually very well "situated" to the question.
This is the problem with just trying to find an answer in a textbook: it assumes you're in the midst of learning everything about a subject, dedicating yourself to the domain, picking up all the right jargon in a best-practice dependency-graph-topsorted order. For amateurs, out-of-context textbook answers tend to require a depth-first recursive wiki-walk of terms just to understand what the original answer from the textbook means.
But for "amateur" questions in domains I don't have any sort of formal education in, but love to learn about (for me, that's e.g. high-energy particle physics), the resulting conversation I get from an LLM generally feels like less like a textbook answer, and more like the script of a pop-science educational article/video tailor-made to what I was wondering about.
But the model isn't fixed to this approach. The responses are tailored to exactly the level of knowledge I demonstrate in the query — speaking to me "on my level." (I.e. the more precisely I know how to ask the question, the more technical the response will be.) And this is iterative: as the answers to previous questions teach and demonstrate vocabulary, I can then use that vocabulary in follow-up questions, and the answers will gradually attune to that level as well. Or if I just point-blank ask a very technical question about something I do know well, it'll jump right to a highly-technical answer.
---
One neat thing that the average college professor won't be able to do for you: because the model understands multiple disciplines at once, you can make analogies between what you know well and what you're asking about — and the model knows enough about both subjects to tell you if your analogy is sound: where it holds vs. where it falls apart. This is an incredible accelerator for learning domains that you suspect may contain concepts that are structural isomorphisms to concepts in a domain you know well. And it's not something you'd expect to get from an education in the subject, unless your teacher happened to know exactly those two fields.
As an extension of that: I've found that you can ask LLMs a particular genre of question that is incredibly useful, but which humans are incredibly bad at answering. That question is: "is there a known term for [long-winded definition from your own perspective, as someone who doesn't generally understand the subject, and might need to use analogies from outside of the domain to explain what you mean]?" Asking this question — and getting a good answer — lets you make non-local jumps across the "jargon graph" in a domain, letting you find key terms to look into that you might have never been exposed to otherwise, or never understood the significance of otherwise.
(By analogy, I invite any developer to try asking an LLM "is there a library/framework/command-line tool/etc that does X?", for any X you can imagine, the moment it occurs to you as a potential "nice to have", before assuming it doesn't exist. You might be surprised how often the answer is yes.)
---
Finally, I'll mention — if there's any excuse for the "sycophancy" of current-gen conversational models, it's that that attitude makes perfect sense when using a model for this kind of "assisted auto-didactic learning."
An educator speaking to a learner should be patient, celebrate realizations, neutrally acknowledge misapprehensions but correct them by supplying the correct information rather than being pushy, etc.
I somewhat feel like auto-didactic learning is the "idiomatic use-case" that modern models are actually tuned for — everything else they can do is just a side-effect.
by derefr
4/25/2025 at 8:08:38 PM
> One neat thing that the average college professor won't be able to do for you: because the model understands multiple disciplines at once, you can make analogies between what you know well and what you're asking about — and the model knows enough about both subjects to tell you if your analogy is sound: where it holds vs. where it falls apart. This is an incredible accelerator for learning domains that you suspect may contain concepts that are structural isomorphisms to concepts in a domain you know well. And it's not something you'd expect to get from an education in the subject, unless your teacher happened to know exactly those two fields.

I really agree with what you've written in general, but this in particular is something I've really enjoyed. I know physics, and I know computing, and I can have an LLM talk me through electronics with that in mind - I know how electricity works, and I know how computers work, but it's applying it to electronics that I need it to help me with. And it does a great job of that.
by Alex-Programs
4/25/2025 at 3:05:28 PM
> LLM changed nothing though. It's just boosting people's intention. If your intention is to learn, you are in luck! It's never been easier to teach yourself some skill for free.

I wouldn't be so sure. Search engine quality has degraded significantly since the advent of LLMs. I've seen the first page of Google entirely taken up by AI slop when searching for some questions.
by 68463645
4/25/2025 at 2:33:39 PM
> We can finally just take a photo of a textbook problem that has no answer reference and no discussion about it and prompt an LLM to help us understand what's missing in our understanding of the problem, if our solution is plausible and how we could verify it.

I would take that advice with caution. LLMs are not oracles of absolute truth. They often hallucinate and omit important pieces of information.

Like any powerful tool, it can be dangerous in unskilled hands.
by Nickersf
4/25/2025 at 10:44:37 AM
My impression is similar. LLMs are a godsend for those willing to learn, as they can usually answer extremely specific questions well enough to at least send you in the right general direction.

But if you're so insecure about yourself that you invest more energy into faking it than other people do in actually doing it, this is probably a one-way street to never actually being able to do anything yourself.
by atoav