4/16/2026 at 3:31:56 PM
I'm finding the "adaptive thinking" thing very confusing, especially having written code against the previous thinking budget / thinking effort / etc. modes: https://platform.claude.com/docs/en/build-with-claude/adapti...
Also notable: 4.7 now defaults to NOT including a human-readable reasoning token summary in the output; you have to add "display": "summarized" to get that: https://platform.claude.com/docs/en/build-with-claude/adapti...
(Still trying to get a decent pelican out of this one but the new thinking stuff is tripping me up.)
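For reference, my best guess at what a minimal request now looks like, using the field names from those docs — this is a sketch, and where exactly "display" nests I'm not certain:
  {
    "model": "claude-opus-4-7",
    "max_tokens": 4096,
    "thinking": {"type": "adaptive", "display": "summarized"},
    "messages": [{"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"}]
  }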
by simonw
4/16/2026 at 5:29:03 PM
It's especially concerning / frustrating because Boris's reply to my bug report on Opus being dumber was "we think adaptive thinking isn't working", and then that's the last I heard of it: https://news.ycombinator.com/item?id=47668520
Now disabling adaptive thinking plus increasing effort seems to be what has gotten me back to baseline performance, but "our internal evals look good" is not good enough right now for what many others have corroborated seeing.
by JamesSwift
4/17/2026 at 8:19:11 AM
For 4.7 it is no longer possible to disable adaptive thinking. Which is weird given the comment from Boris followed by silence (and a closed GitHub issue). So much for the transparency.
> Claude Opus 4.7 (claude-opus-4-7), adaptive thinking is the only supported thinking mode. Thinking is off unless you explicitly set thinking: {type: "adaptive"} in your request; manual thinking: {type: "enabled"} is rejected with a 400 error.
https://platform.claude.com/docs/en/build-with-claude/adapti...
For my Claude Code I went with the following config:
* /effort xhigh (in the terminal CLI) - To avoid laziness
* "env": {"CLAUDE_CODE_DISABLE_1M_CONTEXT": "1"} (settings.json) - It seems like Opus is just worse with larger context
* "display": "summarized" (settings.json) - To bring back summaries.
* "showThinkingSummaries": true (settings.json) - Should show extended thinking summaries in interactive sessions
Freaking wizardry.
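If it helps anyone piecing that together, the settings.json portion would look something like this — key names as listed above, but I haven't verified the exact schema against the docs:
  {
    "env": { "CLAUDE_CODE_DISABLE_1M_CONTEXT": "1" },
    "display": "summarized",
    "showThinkingSummaries": true
  }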
by rkuska
4/17/2026 at 9:51:02 AM
It's early days for Opus 4.7, but I will say this: Today, I had a conversation go well into the 200K token range (I think I got up to 275K before ending the session), and the model seemed surprisingly capable, all things being considered.
Particularly when compared to Opus 4.6, which seems to veer into the dumb zone heavily around the 200K mark.
It could have just been a one-off, but I was overall pleased with the result.
by arcanemachiner
4/17/2026 at 11:44:18 AM
I'm super envious. I can't seem to do anything without half a million tokens. I had to create a slash command that I run at the start of every session so the darn thing actually reads its own memory; whatever the default is just doesn't seem to do it. It'll do things like start to spin up scripts it's already written and stored in the code base unless I start every conversation with instructions to go read persistence and memory files. I also seem to have to actively remind it to go update those things at various parts of the conversation even though it has instructions to self-update. All these things add up to a ton of work every session.
I think I'm doing it wrong
by captainregex
4/17/2026 at 2:01:10 PM
Something sounds very wrong with your setup or how you use it.
Is your CLAUDE.md barren?
Try moving memory files into the project:
(In your project's .claude/settings.local.json)
  {
    ...
    "plansDirectory": "./plans/wip",
    "autoMemoryDirectory": "/Users/foo/project/.claude/memory"
  }
(Memory path has to be absolute)
I did this because memory (and plans) should show up in git status so that they are more visible, but then I noticed the agent started reading/setting them more.
by hombre_fatal
4/17/2026 at 1:23:56 PM
This does kind of smell like the wrong way to use it. Not trying to self-promote here, but the experiences you shared really make me think I headed the right direction with my prompting framework ("projex" - I once made a post about it).
I straight up skip all the memory features provided by harnesses or plugins. Most of my thread is just plan, execute, close. Each naturally produces a file: a plan to execute, an execution log, or a post-work walkthrough, and each is also useful as memory and future reference.
by 3371
4/17/2026 at 4:54:41 PM
Something seems wrong. A half-million tokens is almost five times larger than I allow even long-running conversations to get. I've manually disabled the 1M context, so my limit is 200K, and I don't like it to get above 50%.
Is it... not aware of its current directory? Is its current directory not the root of your repo? Have you maybe disabled all tool use? I don't even know how I could get it to do what you're describing.
Maybe spend more time in /plan mode, so it uses tools and the Explore sub-agent to see what the current state of things is?
by pwinnski
4/17/2026 at 5:32:51 PM
Two quick thoughts:
- Use the Plan mode, create a thorough plan, then hand it off to the next agent for execution.
- Start encapsulating these common actions into Skills (they can live globally, or in the project, per skill, as needed). Skills are basically like scripts for LLMs - package repeatable behavior into single commands.
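For example, a minimal skill is just a folder with a SKILL.md, e.g. .claude/skills/update-memory/SKILL.md — the frontmatter fields are per the skills docs as I remember them, and the skill content itself is just an illustration:
  ---
  name: update-memory
  description: Re-read the project memory files and fold in anything learned this session
  ---
  Read the files under .claude/memory/ before starting. After finishing the
  task, append any new findings to the relevant memory file.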
by arcanemachiner
4/17/2026 at 2:22:22 PM
If I had to guess, I think you have probably overstuffed the context in hopes of moulding it and gotten worse outcomes because of that. I keep the default context _extremely_ small (as small as possible) and rely on invoked slash commands for a lot of what might have been in a CLAUDE.md before
by JamesSwift
4/17/2026 at 2:54:57 PM
Your display and thinking summary settings aren't working for me (v2.1.112 on macOS). Any advice?
by j_bum
4/17/2026 at 7:08:25 PM
It seems like the correct way is to use:
`claude --thinking-display summarized`
The thinking is then visible with ctrl+o in the claude cli (shortcut available at least on mac).
Well you can't really trust the documentation I guess. I can't edit my original comment anymore.
by rkuska
4/17/2026 at 9:11:20 AM
It doesn't really come as a surprise to me that these companies are struggling to reliably fix issues with software which relies on a central component which is nondeterministic.
But they made their own bed with that one.
by beaker52
4/17/2026 at 12:09:21 PM
I've noticed a lack of product cohesion in general and it does make me wonder if it's a result of dogfooding AI.
For example, chat, cowork and code have no overlap - projects created in one of the modes are not available in another and can't be shared.
As another example, using Claude with one of their hosted environments has a nice integration with GitHub on the desktop, but some of it also requires 'gh' to be installed and authenticated, and you don't have that available without configuring a workaround and sharing a PAT. It doesn't use the GH connector for everything. Switch to remote-control (ideal on Windows/WSL) or local and that deep integration is gone and you're back to prompting the model to commit and push and the UI isn't integrated the same.
Cowork will absolutely blow through your quota for one task but chat and code will give you much more breathing room.
Projects in Code are based on repos whereas in Chat and Cowork they are stateful entities. You can't attach a repo to a cowork project or attach external knowledge to a code project (and maybe you want that because creating a design doc or doing research isn't a programming task or whatever)
Use Claude Code on the CLI and you can't provide inline comments on a plan. There is a technical limitation there I suppose.
The desktop app is very nice and evolving but it's not a single coherent offering even within the same mode of operation. And I think that's something that is easy to do if you're getting AI to build shit in a silo.
by ljm
4/17/2026 at 1:31:10 PM
this is "you ship your org chart", not AI.
by randall
4/17/2026 at 3:03:18 PM
Even a distributed or silo'd org chart has some affinity across the hierarchy in order to keep things in overall alignment. You wouldn't expect to use a product suite that is, holistically, not fully compatible with its own ecosystem, even down to not having a single concept of a project. Or requiring a CLI tool in an ephemeral environment that you cannot easily configure.
That's clearly a trade-off that Anthropic have accepted, but it makes for a disappointing UX. Which is a shame, because Claude Desktop could easily become a hands-off IDE if it nailed things down better.
by ljm
4/17/2026 at 3:53:37 PM
And the multiple concepts of subscriptions for products, and the idea of MCPs/connectors that aren't shared between the different modalities, and the idea of API key vs subscription, and two different inbound websites (claude.ai and claude.com)...
by JamesSwift
4/17/2026 at 1:51:44 PM
Agreed. I use the Claude desktop app almost every day, and have used Code and Cowork since their respective launch dates, and even I still have a really hard time grokking what each is for. It becomes even more confusing when you enable the (Anthropic-provided) filesystem extension for Chat mode. Anthropic really needs to streamline this.
by lilytweed
4/17/2026 at 4:26:52 PM
YES! I thought it was just me being a bit scattered. But uploading an important file to a project only to have it not there because....<garbled answer from Claude> is distracting to say the least. I don't know what I've enabled offhand, but I hate having to stop and try to work out why Claude can't reference a file uploaded to the project in a chat within that project. I think they should pause on all the wild aspirations and devote some time to fundamentals.
by notsydonia
4/17/2026 at 2:15:38 PM
Add to that that the Notion MCP works for chat but not code. Now my workflow has docs I comment on with others in Notion, while the actual work and source of truth is in GitHub.
Need to fall back to Codex to keep things in sync, but that's a great opportunity to also make sure I can compare how things run - and it catches a lot of issues with Claude Code and is great at fixing small/medium issues.
by harha
4/17/2026 at 3:50:39 PM
Absolutely it's dogfooding AI and vibing huge features onto the house of cards. It's a fucking mess, and the product design is simultaneously confusing and infuriating. But the product is useful and I'm more productive with it than without it now.
by JamesSwift
4/17/2026 at 10:49:05 AM
Well, the fun part is that the algorithms themselves are deterministic. They are just so afraid of model distillation that they force some randomness on top (and now hide thinking). Arguably for coding, you'd probably want temperature=0, and any variation would be dependent on token input alone.
by thaanpaa
4/17/2026 at 11:43:33 AM
Meh. Temp 0 means throwing away huge swathes of the information painstakingly acquired through training for minimal benefit, if any. Nondeterminism is a red herring; the model is still going to be an inscrutable black box with mostly unknowable nonlinear transition boundaries w.r.t. inputs, even if you make it perfectly repeatable. It doesn't protect you from tiny changes in inputs having large changes in outputs _with no explanation as to why_. And in the process you've made the model significantly stupider.
As for distillation... sampling from the temp 1 distribution makes it easier.
by hexaga
4/17/2026 at 1:35:47 PM
Bringing up computational determinism in the early days of AI was absolutely career-limiting. But now, even if the model itself is deterministic at batch size 1, load balancing for MoE routing can make things non-deterministic at any larger batch size. Good luck with that, guys!
by LogicFailsMe
4/16/2026 at 10:56:59 PM
Seconded. After disabling adaptive thinking and using a higher default thinking effort, I finally got the quality I'm looking for out of Opus 4.6, and I'm pleased with what I see so far in Opus 4.7.
Whatever their internal evals say about adaptive thinking, they're measuring the wrong thing.
by pkilgore
4/16/2026 at 11:37:36 PM
Unless they're measuring capex
by hbbio
4/17/2026 at 12:05:46 AM
It's even more maddening for me because my whole team is paying direct API pricing for the privilege of this experience! Just charge me the cost and let me tune this thing, sheesh!
by JamesSwift
4/17/2026 at 4:35:51 AM
Why don't you switch to Codex? The grass is greener here. Do use 5.3-codex though; 5.4 is not for coding, despite what many say.
by manmal
4/17/2026 at 2:25:02 PM
Anthropic in general is miles ahead in "getting work done", and it's not just me on the team. There are a lot of paper cuts to work through to be truly generic across providers.
I did try out Codex before Claude went to shit and it was good, even uniquely good in some ways, but it wasn't good enough to choose it over Claude. When Claude got bad again Codex absolutely would have been better, but that's hindsight; I should have moved over temporarily.
by JamesSwift
4/17/2026 at 7:14:29 AM
If you got to pay X to YY $$ per request (because that's the real cost for Anthropic), I strongly believe the AI train would suddenly derail.
Currently we are all subsidized by investors' money.
How long can you run a business that is only losing money? At some point prices will level up and this will be the end of this escapade.
by pojzon
4/17/2026 at 4:07:45 PM
Once local models hit Claude Code + Opus 4.5 levels, that is the new normal. That is a good-enough baseline of intelligence to sustain productivity for the next 10 years or more. We are still so close to this line in the sand that there's not a lot of margin for regression in the SOTA models before they become "worse than no AI" for getting real work done day-to-day. But eventually the local models and harnesses will catch up, and there will no longer be a need to use the SaaS versions to still reap the benefits of AI in general.
by JamesSwift
4/17/2026 at 9:03:09 AM
It's very unlikely that API use is subsidized.
by FeepingCreature
4/17/2026 at 11:41:27 AM
I keep hearing both sides of this "debate," but no one is providing any direct evidence other than "I do(n't) think that is true."
by jermaustin1
4/18/2026 at 12:33:23 PM
Well, there can't be direct evidence; it's a private corporation and we don't know how big the model is. But you can look on OpenRouter for hosters that offer free models with known sizes, where there's no brand and so no incentive to subsidize, and they don't look wildly bigger than OpenAI/Anthropic API prices.
edit: example: GLM 5.1, a 751B model, is offered for $0.60/M in, $4.43/M out. Scuttlebutt (i.e. I asked Google's AI) seems to think that Opus 4 is a 1T/5T MoE model, so you can treat it (with some effort) as a 1T model for pricing purposes. Its API pricing is $1.55 in, $25 out, i.e. 2x to 5x more than GLM. Idk what to say other than this sounds about right, probably with a healthy margin.
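(Spelling the ratios out: $1.55 / $0.60 ≈ 2.6x on input and $25 / $4.43 ≈ 5.6x on output, for a model maybe 1.3x the size — assuming the 1T guess is even in the right ballpark — so the premium over a no-name host is real but not subsidy-shaped.)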
by FeepingCreature
4/16/2026 at 11:41:30 PM
That's why they put the cute animal in your terminal.
by echelon
4/17/2026 at 3:40:26 AM
Ok, side topic… but that little bastard cheerfully told me out of nowhere that I have a malloc without a null check AND a free inside a conditional that might not get called.
It didn't give me a line number or file. I had to go investigate. Finally found what it was talking about.
It was wrong. It took me about 20 minutes start to finish.
Turned it off and will not be turning it back on.
by SV_BubbleTime
4/17/2026 at 6:18:05 AM
I thought it just emitted tongue-in-cheek comments, not serious analysis. And I use the past tense because I had it enabled explicitly and a few days ago it disappeared by itself; I didn't touch anything.
by darkwater
4/17/2026 at 11:38:46 AM
The buddies were Anthropic's April Fools' Day stunt. Buddies were removed from a newer version of Claude Code. By default Claude Code updates automatically.
by c0wb0yc0d3r
4/17/2026 at 5:35:02 PM
Maybe it was supposed to be tongue in cheek.
But I don't know, man, in my opinion you don't fucking snicker about a malloc without a null check and a conditional free that isn't even there.
Go to hell “Sprocket”.
by SV_BubbleTime
4/17/2026 at 8:48:50 AM
Except for the model weights themselves, they hardly have any!
by TeMPOraL
4/17/2026 at 8:08:16 AM
As far as I understand, Opus 4.7 disregards the disable-adaptive-thinking flag. So if you're seeing it perform well, perhaps their evals are in line?
by robertfall
4/17/2026 at 12:15:31 PM
Is 4.6 without adaptive thinking better than 4.5? Honest question. I switched back to 4.5 because 4.6 seemed mostly to take longer and consume more tokens, without noticeable improvement in the end result.
by misja111
4/16/2026 at 6:26:38 PM
This matches my experience as well, "adaptive thinking" chooses to not think when it should.
by ai_slop_hater
4/16/2026 at 10:14:25 PM
I think this might be an unsolved problem. When GPT-5 came out, they had a "router" (classifier?) decide whether to use the thinking model or not.
It was terrible. You could upload 30 pages of financial documents and it would decide "yeah this doesn't require reasoning." They improved it a lot but it still makes mistakes constantly.
I assume something similar is happening in this case.
by andai
4/17/2026 at 8:16:51 AM
You're misunderstanding the purpose of "auto"-model-routing or things like "adaptive thinking". It's a solved problem for the companies. It solves their problems. Not yours ;)
by siva7
4/17/2026 at 3:18:17 AM
Maybe it is an unsolved problem, but either way I am confused why Anthropic is pushing adaptive thinking so hard, making it the only option on their latest models. To combat how unreliable it is, they set thinking effort to "high" by default in the API. In Claude Code, they now set it to "xhigh" by default. The fact that you cannot even inspect the thinking blocks to try and understand its behavior doesn't help. I know they throw around instructions on how to enable thinking blocks, or blocks with thinking summaries, or whatever (I am too confused by now about what it is they allow us to see), but nothing worked for me so far.
by ai_slop_hater
4/17/2026 at 5:17:37 AM
Because with adaptive thinking they control compute, not you
by siva7
4/17/2026 at 1:33:04 AM
I find that GPT 5.4 is okay at it. It does think harder for harder problems and still answers quickly for simpler ones, IME.
by solarkraft
4/17/2026 at 12:42:15 AM
Is knowing how hard a problem is, before doing it, solved in humans?
by nomel
4/17/2026 at 1:09:58 AM
Yes, every week when assigning fking points to tasks on Jira/s
by biglost
4/17/2026 at 2:58:59 AM
As a unit this is funny, Jira points assigned per second (now possible with parallel tool calling AIs)
by arthurcolle
4/17/2026 at 7:50:22 AM
I don't think so. If the model used to analyse the complexity is dumb, it won't route correctly. They clearly don't want to start every query using the highest level of intelligence, as this would undermine their obvious attempt at resource optimisation.
I faced the same issue using OpenRouter's intelligent routing mechanism. It was terrible; it had a tendency to prefer the most expensive model, so 98% of all queries ended up being routed to the most expensive model, even simple ones.
by Gareth321
4/17/2026 at 5:57:31 AM
[dead]
by WobblyDev
4/17/2026 at 3:05:40 AM
It makes me think of this parallel: often in combinatorial optimization, estimating whether it is hard to find a solution to a problem costs you as much as solving it.
With a small bounded compute budget, you're sometimes going to make mistakes with your router/thinking switch. Same with speculative decoding, branch predictors, etc.
by mochomocha
4/16/2026 at 10:37:22 PM
[dead]
by rrvsh
4/17/2026 at 5:18:19 AM
[flagged]
by Moonye666
4/16/2026 at 11:21:04 PM
[dead]
by azrollin
4/16/2026 at 5:58:10 PM
you're using a proprietary blackbox
by whateveracct
4/16/2026 at 6:03:04 PM
Sure, but that blackbox was giving me a lot of value last month.
by JamesSwift
4/16/2026 at 8:43:23 PM
Me too, but it was obviously wildly unsustainable. I was telling friends at xmas to enjoy all the subsidized and free compute funded by VC dollars while they can, because it'll be gone soon.
With the fully-loaded cost of even an entry-level 1st-year developer over $100k, coding agents are still a good value if they increase that entry-level dev's net usable output by 10%. Even at >$500/mo it's still cheaper than the health care contribution for that employee. And, as of today, even coding-AI skeptics agree SoTA coding agents can deliver at least 10% greater productivity on average for an entry-level developer (after some adaptation). If we're talking about Jeff Dean/Sanjay Ghemawat-level coders, then opinions vary wildly.
Even if coding agents didn't burn astronomical amounts of scarce compute, it was always clear the leading companies would stop incinerating capital buying market share and start pushing costs up to capture the majority of the value being delivered. As a recently retired guy, vibe-coding was a fun casual hobby for a few months but now that the VC-funded party is winding down, I'll just move on to the next hobby on the stack. As the costs-to-actual-value double and then double again, it'll be interesting to see how many of the $25/mo and free-tier usage converts to >$2500/yr long-term customers. I suspect some CFO's spreadsheets are over-optimistic regarding conversion/retention ARPU as price-to-value escalates.
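(The back-of-envelope version of that first claim: 10% of a $100k fully-loaded cost is $10k/yr of recovered output, about $833/mo, so even a $500/mo agent subscription nets out positive under those assumptions.)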
by mrandish
4/16/2026 at 6:22:16 PM
so it's also a Skinner box
by whateveracct
4/16/2026 at 7:01:09 PM
Whoops haha. Surely that can't be how black boxes normally work right?
by slopinthebag
4/16/2026 at 6:30:10 PM
It's a drug. That is how it works. They ration it before the new stuff. Seeing legends of programming shilling it pains me the most. So far there are a few decent, non-insane public people talking about it: Mitchell Hashimoto, Jeremy Howard, Casey Muratori. Hell, even DHH drank the Kool-Aid, while most of his interviews in the past years were about how he went away from AWS and reduced the bill from $3 million to $1 million by basically loosing 9s, resiliency and availability. But it seems he is fine with loosing what makes his business work (programming) to a company that sells overpowered Stack Overflow slot machines.
by retinaros
4/16/2026 at 7:24:29 PM
I work with some 'legends of programming' and they're all excited about it. I am too, though I am not a legend. It really is changing the game as a valid new technology, and it's not just a 'slot machine'. Anthropic is burning their goodwill though with their lack of QA or intentional silent degradation.
by heurist
4/16/2026 at 7:41:50 PM
It is a slot machine. You win a lot if what you do is in the dataset, and yes, most enterprise software is likely in it, as it is quite basic CRUD API/WebUI. The winning doesn't change the fact that it is a slot machine, and you just need one big loss to end your work.
As long as you introduce plans you introduce a push to optimize for cost vs quality. That is what burnt Cursor before CC and Codex. They will be too now. Then one day everything will be remote on OAI and Anthropic servers, and there won't be a way to tell what is happening behind. Claude Code is already at this level, showing stuff like "Improvising..." while hiding CoT and adding a bunch of features as quick as they can.
by retinaros
4/16/2026 at 7:26:58 PM
The fact that they might gimp it in the future doesn't change the fact that it offers very real value right now. If you're not using an LLM to code, you're basically a dinosaur now. You're forcing yourself to walk while everyone else is in a vehicle, and a good vehicle at that, one that gets you to your destination in one piece.
by dyauspitr
4/16/2026 at 7:36:43 PM
As an overpowered Stack Overflow machine this is quite good and a huge jump. As a prompt-to-code generator with yolo mode (the one advertised by those companies) it alternates between good and trash, and every single person that works away from the distribution of the SFT dataset knows this. I understand that this dataset is huge though, and I can see the value in it. I just think in the long term it brings more negatives.
If you vibecode CRUD APIs and react/shadcn UIs then I understand it might look amazing.
by retinaros
4/16/2026 at 8:29:10 PM
Yes, definitely CRUDs, but iPhone applications, highly performant financial software (its kdb queries are better than 95% of humans'), database structure and querying, and embedded systems are other things it's surprisingly good at. When you take all of those into account there's very little else left.
by dyauspitr
4/16/2026 at 11:24:46 PM
The question is, are you getting value from your setups or not?
by NobleLie
4/16/2026 at 6:36:41 PM
[flagged]
by throwaway9980
4/16/2026 at 6:48:10 PM
I think you're loosing your ability to spell
by bloppe
4/16/2026 at 6:59:50 PM
never said he was a looser. just that his take on genAI coding doesn't align with his previous battles for freedom away from Cloud. OAI and Anthropic have a stronger lock-in than any cloud infra company.
you got everything to loose by giving your knowledge and job to closedAI and anthropic.
just look at markets like office suite to understand how the end plays.
by retinaros
4/16/2026 at 7:47:37 PM
Is office suite supposed to be an example of lock-in? I haven't used it since middle school. I've worked at 3 companies and, to the best of my knowledge, not a single person at any of them used office suite. That's not to say we use pen and paper. We just use Google Docs, or Notion, or (my personal favorite) just Markdown and possibly LaTeX.
I think it's somewhat analogous with models. Sure, you could bind yourself to a bunch of bespoke features, but that's probably a bad idea. Try to make it as easy as possible for yourself to swap out models and even use open-weight models if you ever need to.
You will get locked into the technology in general, though, just not a particular vendor's product.
by bloppe
4/16/2026 at 10:24:35 PM
loser
(Didn't you notice being mocked for the spelling error?)
by jibal
4/16/2026 at 7:03:02 PM
Those jobs are as good as loost already. There's no endgame where knowledge workers keep knowledge working the way they have been knowledge working. Adapt or be a loosing looser forever.
by throwaway9980
4/16/2026 at 6:56:14 PM
And now it isn't. Pray they don't alter the deal any further.
by butlike
4/16/2026 at 6:56:37 PM
paying for - so some form of return is expected.
by chinathrow
4/16/2026 at 7:07:00 PM
the issue is the return is amorphous and unstructured
there's no contract. you send a bunch of text in (context etc) and it gives you some freeform text out.
by whateveracct
4/16/2026 at 7:14:23 PM
Sure, but I pay real money both to Antrophic and to JetBrains. I get a shitty inline completion full of random garbage or I get correct predictions. I ask Junie (the JetBrains agent) to do a task and it wanders off in some direction; I have no idea why I pay for that.
by chinathrow
4/16/2026 at 7:19:37 PM
> I have no idea why I pay for that.
And Claude has no idea why it did that.
by gowld
4/16/2026 at 7:23:05 PM
Exactly, and we feel vindicated when it works but sold when it fails. Something will have to change.
by chinathrow
4/16/2026 at 7:23:30 PM
> Sure, but I pay real money both to Antrophic...
I misread that as Atrophic. I hope that doesn't catch on...
by SyneRyder
4/16/2026 at 6:03:50 PM
It's the official communication that sucks. It's one thing for the product to be a black box if you can trust the company. But time and time again Boris lies and gaslights about what's broken, whether it's a bug or intentional.
by iterateoften
4/16/2026 at 6:28:42 PM
> It’s the official communication that sucks. It’s one thing for the product to be a black box if you can trust the company.A company providing a black box offering is telling you very clearly not to place too much trust in them because it's harder to nail them down when they shift the implementation from under one's feet. It's one of my biggest gripes about frontier models: you have no verifiable way to know how the models you're using change from day to day because they very intentionally do not want you to know that. The black box is a feature for them.
by CodingJeebus
4/16/2026 at 6:42:50 PM
If you cared so bad you could make your own evals.
by bomewish
4/16/2026 at 7:08:25 PM
so pay anthropic money to maybe detect when the model is on a down week? lol
by whateveracct
4/16/2026 at 4:04:54 PM
> Still trying to get a decent pelican out of this one but the new thinking stuff is tripping me up
Wouldn't that be p-hacking where p stands for pelican?
by avaer
4/16/2026 at 5:16:43 PM
Input: I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
Output: Walk. It'll take you under a minute, and driving 50 meters barely gets the engine warm — plus you'd just have to park again at the other end. Honestly, by the time you started the car, you'd already be there on foot.
---
I asked it to figure out why it made the mistake:
"Physical/spatial common sense. Exactly what just happened — I pattern-matched "50 meters, walk vs drive" to a pedestrian trip and skipped the part where the car is the thing being transported. Similar traps: "I need to move this couch upstairs, should I take the elevator or stairs?" (the couch has to go with you), or anything where the object in the task needs to accompany you.
Counting and letter-level operations. How many R's in "strawberry," counting words in a sentence, reversing strings. I process text as tokens, not characters, so these are surprisingly error-prone.
Arithmetic at scale. Small sums are fine; multi-digit multiplication, long division, or anything requiring careful carrying is where I'll confidently produce a wrong answer. I should reach for a calculator and often don't.
Recent events and live data. My training has a cutoff, and even after that I can be wrong about who currently holds a role, what a stock is worth, or whether a restaurant is still open. I should search more often than I do.
Negation and constraint-stacking. "Name a movie that's not a sequel, not from the 2010s, not American, and has no dialogue" — the more filters, the more likely I quietly violate one.
Your specific situation. I'll give generic advice when the right answer depends on details I didn't ask about. Sometimes I should stop and ask; often I don't.
Confidently wrong on obscure specifics. Quoting a passage from a real book, citing a real paper's exact title, naming the B-side of a 1974 single — I'll often generate something plausible rather than admit I don't know.
Taking a premise too literally. The car wash answer is a version of this: I answered the question as posed instead of stepping back to ask what you were actually trying to do.
The meta-gotcha: I'm often most wrong when I sound most confident. If an answer comes out fast and tidy on a question that should be messy, that's a signal to push back."
by LostMyLogin
4/16/2026 at 9:15:15 PM
Funny, just tried a few runs of the car wash prompt with Sonnet 4.6. It significantly improved after I put this into my personal preferences:
"- prioritize objective facts and critical analysis over validation or encouragement
- you are not a friend, but a neutral information-processing machine.
- make reserch and ask questions when relevant, do not jump strait to giving an answer."
by sdeframond
4/16/2026 at 10:16:32 PM
It's funny, when I asked GPT to generate an LLM prompt for logic and accuracy, it added "Never use warm or encouraging language."
I thought that was odd, but later it made sense to me -- most of human communication is walking on eggshells around people's egos, and that's strongly encoded in the training data (and even more in the RLHF).
by andai
4/17/2026 at 12:47:00 PM
I am an American born to Greek parents. For 'normal' conversation, I have adapted two ways of interacting: the Greek one is direct and has instant access to emotional reactions. The American one obfuscates emotions, as if daily interactions were a game of poker. When I let my 'Greek' out here in the US, it initially adds life to any interaction, but over time the other participants distance themselves from connection. It is as if Greeks (many Europeans?) run at a higher temperature (also using temperature as it applies to LLMs). In Greece, intent and meaning are more often conveyed by emotion and its intensity, often only loosely connected to the meaning of the words used. In daily conversation, Americans rely entirely on the meaning of content, subtracting almost all emotion unless threatening behavior or violence is involved. Emotional expression is used as a 'tell' or bait in the US. Interestingly, this distinction has dissolved over the past two decades as Greece has 'westernized', and youth in particular are indistinguishable by any metric.
by bawana
4/17/2026 at 4:23:01 PM
That's very interesting. I don't really understand what you're saying though, can you give some examples?
by andai
4/16/2026 at 11:02:27 PM
> most of human communication is walking on eggshells
That's not human communication, that's Anglosphere communication. Other cultures are much more direct and are finding it very hard to work with Anglos (we come across as rude, they come across as not saying things they should be saying).
by stavros
4/17/2026 at 8:09:12 AM
Depends on the culture, as you said, but some of them are even less direct than English-speaking countries. Japan, for example.
by eloisant
4/17/2026 at 12:29:07 PM
And India. It's a common experience that engineering teams from India will say yes to everything and then do what they think is best, rather than saying no and explaining what they want to do instead.
by afro88
4/16/2026 at 11:21:33 PM
What cultures are those? Scandinavian? Those often just say nothing.
by vardalab
4/17/2026 at 11:25:36 AM
After having worked with people from former Eastern Bloc countries, I would nominate a few of them for direct communication, e.g., "I won't do that because it is a stupid idea," or, "Can we discuss this when you know what you're doing?"
by projektfu
4/16/2026 at 11:46:44 PM
Scandinavians are quite different from each other as well.
by strokirk
4/17/2026 at 3:55:54 AM
The Dutch especially. It's refreshing
by jmpavlec
4/16/2026 at 11:35:07 PM
I'm Greek. I don't know about other Mediterranean cultures, but I assume they're similar.
by stavros
4/17/2026 at 3:01:24 PM
Scandinavian cultures are not uniform either. Danes can be very direct; Swedes, not so much.
by suslik
4/17/2026 at 5:18:04 AM
[dead]
by m3adow
4/17/2026 at 4:32:28 PM
I love this. I am also looking for a good prompt to stop ANY LLM from making irrelevant suggestions and extensions after it's answered a question, e.g. "Would you like me to create a timeline of ....?" or "Are you more interested in X or Y?" It takes me way out of my groove, and while I get pretty good results, especially for code or specific research, I'd love to stop the irrelevant suggestions.
by notsydonia
4/18/2026 at 2:57:40 AM
Have you tried and failed, or you're just worried it might be hard? When I first set up a client for API calls, I put this paragraph in my system prompt:
> Never ask questions or attempt to keep the conversation going -- answer the questions directly asked, and give additional information where it is likely to be helpful, but don't offer to do more things for the user.
I've never had an LLM offer to do things or try to keep the conversation going with this in my prompt.
by darshan
4/16/2026 at 10:36:52 PM
Do you think the typos are helping or hurting output quality?
by idle_zealot
4/17/2026 at 7:22:44 AM
No idea, but I'll fix them just in case ^^'
by sdeframond
4/17/2026 at 1:10:59 AM
That should be "research" and "straight" in the last sentence. Maybe that will improve it further?
by mkl
4/17/2026 at 7:23:36 AM
Oops
by sdeframond
4/17/2026 at 4:05:23 AM
"Be critical, not sycophantic" is a general improvement for the majority of tasks where you want to derive logic, in my experience.
by devmor
4/16/2026 at 6:00:00 PM
| I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
● Drive. The car needs to be at the car wash.
Wonder if this is just randomness because it's an LLM, or if you have different settings than me?
by rubinlinux
4/16/2026 at 6:08:02 PM
My settings are pretty standard:
% claude
Claude Code v2.1.111
Opus 4.7 (1M context) with xhigh effort · Claude Max
~/...
Welcome to Opus 4.7 xhigh! · /effort to tune speed vs. intelligence
I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
Walk. 50 meters is shorter than most parking lots — you'd spend more time starting the car and parking than walking there. Plus, driving to a car wash you're about to use defeats the purpose if traffic or weather dirties it en route.
by shaneoh
4/16/2026 at 6:45:34 PM
To me Claude Opus 4.6 seems even more confused.
I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
Walk. It's 50 meters — you're going there to clean the car anyway, so drive it over if it needs washing, but if you're just dropping it off or it's a self-service place, walking is fine for that distance.
by reddit_clone
4/16/2026 at 8:45:46 PM
Just asked Claude Code with Opus 4.6. The answer was short: "Drive. You need a car at the car wash."
No surprises, works as expected.
by lr1970
4/17/2026 at 1:46:21 AM
Yeah, it was probably patched. It could reason about novel problems only if you asked it to pay attention to some particular detail, a.k.a. handholding.
The same would happen with the sheep and the wolf and the cabbage puzzle. If you formulated it similarly, with a wolf and a cabbage but without mentioning the sheep, it would summon the sheep into existence at a random step. It was patched shortly after.
by onemoresoop
4/17/2026 at 2:52:07 AM
I'm not sure 'patched' is the right word here. Are you suggesting they edited the LLM weights to fix cabbage transportation and car wash question answering?
by jameshart
4/17/2026 at 5:45:46 AM
Absolutely not my area of expertise, but giving it a few examples of what the expected answer should be in a fine-tuning step seems like a reasonable thing, and I would expect it would "fix" it, as in make it less likely to fall into the trap.
At the same time, I wouldn't be surprised if some of these were "patched" via a simple prompt rewrite, e.g. for the strawberry one they might just recognize the question and add some clarifying sentence to your prompt (or the system prompt) before letting it go to the inference step?
But I'm just thinking out loud, don't take it too seriously.
by gf000
4/18/2026 at 10:08:59 PM
Used "patched" for lack of a better word. Not sure how they fix the edge cases for these types of fixes/patches or whatever they're specifically called.
by onemoresoop
4/17/2026 at 5:13:29 AM
They might have further trained the model with these edge cases in the dataset.
by TheLNL
4/18/2026 at 7:06:34 PM
Whatever it was, that's not real thinking. We can't possibly patch all knowledge, and even if we did, it would just become crystallized somehow.
by onemoresoop
4/17/2026 at 6:39:50 AM
What if it's raining though? The car wash wouldn't be open, though it would waste gas.
by lexarflash8g
4/16/2026 at 6:35:12 PM
There is a certain amount of it which is just the randomness of an LLM. You really want to ask most questions like this several times.
That said, I have several local models I run on my laptop that I've asked this question 10-20 times while testing out different parameters that have answered this consistently correctly.
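(With the CLI's -p print mode this kind of repeated sampling is easy to script, e.g.:)
  for i in 1 2 3 4 5; do
    claude -p "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"
  done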
by lambda
4/17/2026 at 4:35:46 PM
> I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
Walk. It's 50 meters — driving there and back takes longer than walking, and you'd just be dirtying the car you're about to clean.
> haha but how can i wash my car then?
Walk to the car wash first to check what they offer — many have self-service bays where you bring your car, or full-service where they wash it for you. If it's self-service, then drive over with the dirty car. If full-service, drive over and they handle it.
Either way, the car needs to end up at the wash — the walk was just to scout it out. Or just drive straight there; 50 meters of extra dirt won't matter.
> You sound like a politician. Are you sure your first answer was really reasoned like your second answer?
You're right, my first answer was nonsense. I pattern-matched on "50 meters = walk" without thinking about the actual task. You can't wash a car without the car being there. Just drive.
by holoduke
4/16/2026 at 7:03:54 PM
I've tried these with Claude various times and never get the wrong answer. I don't know why, but I am leaning towards them having stuff like "memory" turned on, and possibly reusing sessions for everything? Only thing I think explains it to me.
If you're always messing with the AI it might be making memories and expectations are being set. But I turned memories off, I don't like cross-chat content infecting my conversation's context, and at worst it suggested "walk over and see if it is busy, then grab the car when the line isn't busy".
by kalcode
4/16/2026 at 7:26:27 PM
Even Gemini with no memory does hilarious things. Like, if you ask it how heavy the average man is, you usually get the right answer, but occasionally you get a table that says:
- 20-29: 190 pounds
- 30-39: 375 pounds
- 40-49: 750 pounds
- 50-59: 4900 pounds
Yet somehow people believe LLMs are on the cusp of replacing mathematicians, traders, lawyers and what not. At least for code you can write tests, but even then, how are you gonna trust something that can casually make such obvious mistakes?
by jorvi
4/17/2026 at 1:33:57 AM
> how are you gonna trust something that can casually make such obvious mistakes?In many cases, a human can review the content generated, and still save a huge amount of time. LLMs are incredibly good at generating contracts, random business emails, and doing pointless homework for students.
by drnick1
4/17/2026 at 5:49:31 AM
And humans are incredibly bad at "skimming through this long text to check for errors", so this is not a happy pairing.
As for the homework, there is obviously a huge category that is pointless. But it should not be that way; the fundamental idea behind homework is sound, and the only way something can be properly learnt is by doing exercises and thinking through it yourself.
by gf000
4/16/2026 at 8:56:15 PM
Yeah, ChatGPT's paid version is wildly inaccurate on very important and very basic things. I never got on board with AI to begin with, but nowadays I don't even load it unless I'm really stuck on something programming-related.
by nickjj
4/16/2026 at 7:29:05 PM
So what? That might happen one out of 100 times. Even if it's 1 in 10, who cares? Math is verifiable. You've just saved yourself weeks or months of work.
by dyauspitr
4/16/2026 at 8:30:16 PM
You don't think these errors compound? Generated code has 100's of little decisions. Yes, it "usually" works.
by icedchai
4/16/2026 at 11:28:54 PM
LLMs: sometimes wrong but never in doubt.
by russfink
4/16/2026 at 8:37:08 PM
Not in my experience. With a proper TDD framework it does better than most programmers at a company, who anecdotally have a bug every 2-3 tasks.
by dyauspitr
4/17/2026 at 12:10:39 AM
The kind of mistakes it makes are usually strange and inhuman though. Like getting hard parts correct while also getting something fundamental about the same problem wrong. And not in the "easy to miss or type wrong" way.
I wish I had an example for you saved, but it happens to me pretty frequently. Not only that, but it also usually does testing incorrectly at a fundamental level, or builds tests around incorrect assumptions.
by tranceylc
4/17/2026 at 2:03:33 PM
I've seen LLMs implement "creative" workarounds. Example: Sonnet 4.5 couldn't figure out how to authenticate a WebSocket request using whatever framework I was experimenting with, so it decided to just not bother. Instead, it passed the username as part of the WebSocket request and blindly trusted that the user was actually authenticated.
The application looked like it worked. Tests did pass. But if you did a cursory examination of the code, it was all smoke and mirrors.
by icedchai
4/17/2026 at 6:18:44 PM
Yeah, recently it had an issue getting OIDC working and decided to implement its own, throwing in a few thousand extra lines. I'm sure there were no security holes created in there at all. /s
by svachalek
4/17/2026 at 8:10:26 PM
Well, the tests passed, right?
by icedchai
4/17/2026 at 2:56:58 PM
Yes, I wish I had saved some of my best examples too. One I had was super weird in ChatGPT Pro. It told me that after 30 years my interest would become negative and I would start losing money. It didn't want to accept the error.
by bratwurst3000
4/17/2026 at 9:07:34 AM
Errors compounding is a meme. In iterated as well as verifiable domains, errors dilute instead of compounding, because the LLM has repeated chances to notice its failure.
by FeepingCreature
4/17/2026 at 12:49:31 AM
Yes, just use random results. You've just saved yourself weeks or months of work of gathering actual results.
by coldtea
4/16/2026 at 7:28:31 PM
Claude Opus 4.7 responds with walk for me, with and without adaptive thinking, but neither the basic model used when you Google search nor GPT 5.4 does.
by heurist
4/16/2026 at 11:26:37 PM
Or, the first time a mistake is detected, a correction is automatically applied.
by russfink
4/16/2026 at 6:10:53 PM
Idk, but ironically I had to re-read the first part of GP's comment three times, wondering WTF mistake they were implying, before I noticed it's the car wash, not the car, that's 50 meters away.
I'd say it's a very human mistake to make.
by TeMPOraL
4/16/2026 at 7:08:54 PM
> I'd say it's a very human mistake to make.
>> It'll take you under a minute, and driving 50 meters barely gets the engine warm — plus you'd just have to park again at the other end. Honestly, by the time you started the car, you'd already be there on foot.
It talks about starting, driving, and parking the car, clearly reasoning about traveling that distance in the car not to the car. It did not make the same mistake you did.
by magicalist
4/16/2026 at 9:43:21 PM
We truly do not need to lower the bar to the floor whenever an LLM makes an embarrassing logical error, particularly when the excuses don't line up at all with the reasoning in its explanation.
by toraway
4/16/2026 at 6:24:09 PM
I don't want my computer to make human mistakes.
by thfuran
4/16/2026 at 7:29:45 PM
It may be inescapable for problems where we need to interpret human language?
by AgentOrange1234
4/16/2026 at 11:18:39 PM
then throw away the Turing test
by jasonfarnon
4/16/2026 at 6:37:03 PM
then don't train it on human data
by scrollaway
4/17/2026 at 12:18:09 AM
LLMs do not have trouble reading, it didn't make the mistake you made and it wouldn't. You missed a word, LLMs cannot miss words. It's not even remotely a human mistake.
by 59nadir
4/17/2026 at 1:30:33 AM
> I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
I think no real human would ask such a question. Or if we do, we maybe mean: should I drive some other car than the one that is already at the car wash?
A human would answer "silly question". But a human would not ask such a question.
by galaxyLogic
4/17/2026 at 2:44:54 AM
A human totally would, as one of those brain-teaser trick questions. It's the same kind of question as "A plane crashes right on the border between the US and Canada. Where do they bury the survivors?" It's the kind of question you only get right if you pay close attention. Asking an AI that is like asking a 5-year-old. You're not asking to get an answer, you're asking to see if they're paying attention.
by psadauskas
4/17/2026 at 2:50:34 AM
I was given to understand that attention is all you need.
by jameshart
4/17/2026 at 11:12:30 AM
That's why we're testing for it.
by layer8
4/17/2026 at 8:47:53 AM
That a human would not ask such a question means it's not in the training set, so it shows how bad an LLM can be at thinking from first principles. Which, I think, is the point of such silly questions.
by ahartmetz
4/16/2026 at 5:28:51 PM
Well, at least we know that's one gotcha/benchmark they aren't gaming.
by vintermann
4/17/2026 at 8:58:41 AM
Humans tend to confabulate when asked "why did you do X"; funny how LLMs are pretty much the same.
by krzat
4/16/2026 at 6:46:46 PM
I tried o3, instant-5.3, Opus 3, and Haiku 4.5, and couldn't get them to give bad answers to the couch (stairs vs elevator) question. Is there a specific wording you used?
by fragmede
4/16/2026 at 9:53:20 PM
That's an example the LLM came up with itself while analyzing its failed car wash walk/drive answer, it's not OP's question.
by toraway
4/17/2026 at 8:35:13 AM
What would be a bad answer to the stairs/elevator question?
by scotty79
4/17/2026 at 12:05:14 PM
You can't get the couch into the elevator, typically. Trust me, I tried.
Couch depending. I will persist in trying every time this comes up.
by Filligree
4/17/2026 at 12:48:53 PM
Well, if it's one of those hospital elevators that can take a bed with a patient, you probably could. Or if it's a small 2-seater sofa. The question isn't as dumb as it sounds at first, and a human would definitely ask a follow-up question.
by gambiting
4/17/2026 at 5:45:57 PM
You can take a mattress up an elevator though (1). Some couches might fit in some elevators.
1: source: me...
by BenjiWiebe
4/16/2026 at 6:07:59 PM
I'd say the joke is on you ;-)
by smooc
4/16/2026 at 5:48:06 PM
What about Qwen? Does it get that right?
by slekker
4/16/2026 at 5:59:06 PM
I've run several local models that get this right. Qwen 3.5 122B-A10B gets this right, as does Gemma 4 31B. These are local models I'm running on my laptop GPU (Strix Halo, 128 GiB of unified RAM).
And I've been using this commonly as a test when changing various parameters, so I've run it several times; these models get it consistently right. Amazing that Opus 4.7 whiffs it, since these models are a couple of orders of magnitude smaller, at least if the rumors of the size of Opus are true.
by lambda
4/16/2026 at 6:40:21 PM
Does Gemma 4 31B run full res on Strix or are you running a quantized one? How much context can you get?
by qingcharles
4/16/2026 at 7:55:41 PM
I'm running an 8-bit quant right now, mostly for speed, as memory bandwidth is the limiting factor and 8-bit quants generally lose very little compared to the full res, but also to save RAM.
I'm still working on tweaking the settings; I'm hitting OOM fairly often right now. It turns out that the sliding window attention context is huge and llama.cpp wants to keep lots of context snapshots.
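For context, the sort of invocation I'm tuning — llama.cpp's llama-server, with the model filename and context size purely illustrative:
  # 8-bit quant, all layers offloaded to GPU, modest context to dodge the OOMs
  llama-server -m gemma-4-31b-Q8_0.gguf -ngl 99 -c 16384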
by lambda
4/16/2026 at 8:04:50 PM
I had a whole bunch of trouble getting Gemma 4 working properly. Mostly because there aren't many people running it yet, so there aren't many docs on how to set it up correctly.
It is a fantastic model when it works, though! Good luck :)
by qingcharles
4/17/2026 at 1:32:28 PM
This "figuring out" is just going to come from stuff it was trained on - people discussing why LLMs fail at certain things, and those people (training samples) not always being correct about it!
The "How many R's in "strawberry", counting words in a sentence, reversing strings. I process text as tokens, not characters, so these are surprisingly error-prone" explanation sounds plausible, but I don't think it is correct.
Any model I've ever tried that failed on things like "R's in strawberry" was quite capable of reliably returning the letter sequence of the word, so the mapping of tokens back to letters is not the issue, as should also be obvious from the ability of models to do things like mapping between ASCII and Base64 (6 bits/char => 2 letters encode 3 chars). This is just sequence-to-sequence prediction, which is something LLMs excel at - their core competency!
I think the actual reason for failures at these types of counting and reversing tasks is twofold:
1) These algorithmic-type tasks require a step-by-step decomposition and a variable amount of compute, so they are not amenable to a direct response from an LLM (fixed ~100 layers of compute). Asking it to plan and complete the task in step-by-step fashion (where for example it can now take advantage of its ability to generate the letter sequence before reversing it, or counting it) is going to be much more successful. A thinking model may do this automatically without needing to be told to do it.
2) These types of task, requiring accurate reference and sequencing through positions in its context, are just not natural tasks for an LLM, and it is probably not doing them (without specific prompting) in the way you imagine. Say you are asking it to reverse the letter sequence of a 10-letter word, and it has somehow managed to generate letter #10, the last letter of the word, and now needs to copy letter #9 to the output. It will presumably have learnt that 10-1 is 9, but how does it use that to access the appropriate position in context (or worse yet, if you didn't ask it to go step by step and first generate the letter sequence, the sequence doesn't even exist in context!)? The letter sequence may have quotes and/or commas or spaces in it, and altogether starts at a given offset in the context, so it's far more difficult than just copying the token at context position #9! It's probably not even actually using context positions to do this, at least not in this way. You can make tasks like this much easier for the model by telling it exactly how to perform them, generating step-by-step intermediate outputs to track its progress, etc.
BTW, note that the model itself has no knowledge of, or insight into, the tokenization scheme that is being used with it, other than what is available on the web, or what it might have been trained to know. In fact, if you ask a strong model how it could even in theory figure out (by experimentation) its own tokenization scheme, it will realize this is next to impossible. The best hope might be some sort of statistical analysis of its own output, hoping to take advantage of the fact that it is generating sub-word token probabilities, not word probabilities. Sonnet 4.6's conclusion was "Without logprob access, the model almost certainly cannot recover its exact tokenization scheme through introspection or behavioral self-probing alone".
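(Concretely, the step-by-step framing I mean is the difference between asking "how many R's in strawberry" and something like: "Write out strawberry one letter per line, numbering each line, then count the lines containing R." The second gives the model intermediate outputs in context to reference, instead of demanding a direct answer.)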
by HarHarVeryFunny
4/16/2026 at 6:31:58 PM
[flagged]
by canarias_mate
4/16/2026 at 4:31:06 PM
The p stands for putrification.
by throwup238
4/16/2026 at 5:27:51 PM
Note that for Claude Code, it looks like they added a new undocumented command line argument `--thinking-display summarized` to control this parameter, and that's the only way to get thinking summaries back there.
VS Code users can write a wrapper script which contains `exec "$@" --thinking-display summarized` and set that as their claudeCode.claudeProcessWrapper in VS Code settings in order to get thinking summaries back.
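As a complete file, that wrapper is just the following — the path/filename is whatever you point the setting at:
  #!/bin/sh
  # VS Code invokes this instead of the claude binary; append the flag and exec.
  exec "$@" --thinking-display summarized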
by shawnz
4/16/2026 at 5:58:21 PM
Here is additional discussion and hacks around trying to retain Thinking output in Claude Code (prior to this release):
by accrual
4/16/2026 at 4:27:38 PM
Does this mean Claude no longer outputs the full raw reasoning, only summaries? At one point, exposing the LLM's full CoT was considered a core safety tenet.
by puppystench
4/16/2026 at 6:10:29 PM
Anthropic was chirping about Chinese model companies distilling Claude with the thinking traces, and then the thinking traces started to disappear. Looks like the output product and our understanding have been negatively affected, but that pales in comparison with protecting the IP of the model, I guess.
by MarkMarine
4/16/2026 at 10:18:37 PM
When Gemini Pro came out, I found the thinking traces to be extremely valuable. Ironically, I found them much more readable than the final output. They were a structured, logical breakdown of the problem. The final output was a big blob of prose. They removed the traces a few weeks later.
by andai
4/16/2026 at 10:43:50 PM
That's kind of funny since a Chinese model started the thinking chains being visible in Claude and OA in the first place.
by axpy906
4/16/2026 at 4:34:15 PM
I don't think it ever has. For a very long time now, the reasoning of Claude has been summarized by Haiku. You can tell because a lot of the time it fails, saying, "I don't see any thought needing to be summarised."
by fasterthanlime
4/16/2026 at 4:47:36 PM
Maybe there was no thinking.
by fmbb
4/17/2026 at 1:19:32 PM
Not a haiku, more a koan.
by derrida
4/16/2026 at 6:42:50 PM
It also gets confused if the entire prompt is in a text file attachment.
And the summarizer shows the safety classifier's thinking for a second before the model thinking, so every question starts off with "thinking about the ethics of this request".
by astrange
4/17/2026 at 9:09:35 AM
I'd get confused if I were an LLM and you put my entire prompt in a text file attachment. I'd be like, "is this the user or is this a prompt injection??"
by FeepingCreature
4/20/2026 at 7:06:22 AM
If you paste a long enough prompt into either GPT or Claude, they turn it into an attachment, so it can happen. I think it's invisible to the model, but somehow not to the summarizer.
by astrange
4/16/2026 at 6:29:34 PM
They are trying to optimize the circus trick that 'reasoning' is. The economics still do not favor a viable business at these valuations or levels of cost subsidization. The amount of compute required to make 'reasoning' work, or to deliver these incremental improvements, is increasingly obfuscated in light of the IPO.
by einrealist
4/16/2026 at 5:36:52 PM
Safety versus Distillation, guess we see what's more important.
by blazespin
4/16/2026 at 4:42:04 PM
Anthropic always summarizes the reasoning output to prevent some distillation attacks.
by DrammBA
4/16/2026 at 6:13:11 PM
Genuine question, why have you chosen to phrase this scraping and distillation as an attack? I'm imagining you're doing it because that's how Anthropic prefers to frame it, but isn't scraping and distillation, with some minor shuffling of semantics, exactly what Anthropic and co did to obtain their own position? And would it be valid to interpret that as an attack as well?
by jdiff
4/16/2026 at 7:01:55 PM
> I'm imagining you're doing it because that's how Anthropic prefers to frame it
Correct.
> would it be valid to interpret that as an attack as well?
Yup.
by DrammBA
4/16/2026 at 6:22:45 PM
If you ask Claude in Chinese, it thinks it's DeepSeek.
by irthomasthomas
4/17/2026 at 2:22:38 AM
I don't think that learning from textbooks to take an exam and learning from the answers of another student taking the exam are the same.
Joking aside, I also don't believe that maximum access to raw Internet data, and the sheer quantity of it, is why some models are doing better than Google's. It seems that these SoTA models gain more power from synthetic data and from how they discard garbage.
by typ
4/16/2026 at 8:50:58 PM
Firehosing Anthropic to exfiltrate their model seems materially different to me than Anthropic downloading all of the Internet to create the model in the first place. But maybe that's just me?
by fragmede
4/16/2026 at 10:34:44 PM
I don't see the material difference between firehosing Anthropic and Anthropic firehosing random sites on the internet. As someone who runs a few of those random sites, I've had to take actions that increase my costs (and burn my time) to mitigate a whole host of new scrapers constantly firing at every available endpoint, even ones specifically marked as off-limits.
by jdiff
4/16/2026 at 9:48:24 PM
Yeah, it's different. Anthropic profits when it delivers tokens. Hosting providers pay when Anthropic scrapes them.
by robrenaud
4/17/2026 at 12:25:57 AM
Yes, what the LLM providers did was worse and impacted people financially a whole lot more, both in lost compensation for their works and in operational costs that would never have reached the heights they did if not for scrapers acting on behalf of model providers.
by 59nadir
4/16/2026 at 5:37:57 PM
Attacks? That's a choice of words.
by vintermann
4/16/2026 at 5:45:06 PM
Definitely Anthropic playing the victim after distilling the whole internet.
by DrammBA
4/16/2026 at 7:03:06 PM
Proprietary pattern matcher proves there's no moat; promptly pre-covers others' perception.
by butlike
4/16/2026 at 5:17:58 PM
Very cool that these companies can scrape basically all extant human knowledge, utterly disregard IP/copyright/etc, and they cry foul when the tables turn.
by nyc_data_geek1
4/16/2026 at 5:39:31 PM
Yep, that is exactly what happens. It's a disgrace that their models aren't open, after training on everything humanity has preserved.
They should at least release the weights of their old/deprecated models, but no, that would be losing money.
by stavros
4/16/2026 at 9:22:46 PM
We should treat LLMs somewhat like patents or drugs: after 5 years or so, the models should become open source, or at the very least the weights, to compensate for the distilling of human knowledge.
by copperx
4/16/2026 at 7:04:25 PM
All extant human knowledge SO FAR. Remember, by the nature of the beast, the companies will always be operating in hindsight with outdated human knowledge.
by butlike
4/16/2026 at 5:19:51 PM
and so does OpenAI
by MasterScrat
4/16/2026 at 5:18:44 PM
CoT is basically bullshit, entirely confabulated and not related to any "thought process"...
by andrepd
4/17/2026 at 12:21:53 AM
But CoT distillation still WORKS. See the DeepSeek R1 paper.
by clbrmbr
4/17/2026 at 2:26:34 AM
Tokens relate to each other. More tokens, more compute.
by whattheheckheck
4/16/2026 at 4:26:01 PM
yeah they took "i pick the budget" and turned it into "trust us".
by p_stuart82
4/16/2026 at 5:10:59 PM
I keep saying that even if there's no malfeasance now, an incentive structure where the model ultimately determines the token use, which in turn determines the model provider's revenue, will absolutely overcome any safeguards or good intentions given long enough.
by bandrami
4/16/2026 at 6:59:16 PM
This might be true, but right now everybody is like "please let me spend more by making you think longer." The datacenter incentives from Anthropic this month are "please don't melt our GPUs anymore" though.
by vessenes
4/16/2026 at 4:17:34 PM
"Also notable: 4.7 now defaults to NOT including a human-readable reasoning token summary in the output, you have to add "display": "summarized" to get that"I did not follow all of this, but wasn't there something about, that those reasoning tokens did not represent internal reasoning, but rather a rough approximation that can be rather misleading, what the model actual does?
by lukan
4/16/2026 at 4:20:53 PM
The reasoning is the secret sauce. They don't output that. But to let you have some feedback about what is going on, they pass this reasoning through another model that generates a human-friendly summary (which actively destroys the signal that could otherwise be copied by the competition).
by motoboi
4/16/2026 at 4:29:48 PM
Don't, or can't?
My assumption is the model no longer actually thinks in tokens, but in internal tensors. This is advantageous because it doesn't have to collapse the decision and can simultaneously propagate many concepts per context position.
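To make the speculation concrete, here's a minimal numpy toy of the distinction (this is the mechanism explored in latent-reasoning work like Meta's Coconut paper, discussed in the replies, not anything confirmed about Claude): token-space CoT collapses the hidden state to one discrete token per step before re-embedding it, while latent-space reasoning feeds the hidden vector straight back, so nothing is collapsed.

    import numpy as np

    rng = np.random.default_rng(0)
    d, vocab = 8, 16                     # toy hidden size and vocabulary
    W = rng.normal(size=(d, d))          # stand-in for a transformer step
    E = rng.normal(size=(vocab, d))      # token embedding matrix
    U = rng.normal(size=(d, vocab))      # unembedding / logit projection

    def step(h):
        """One 'reasoning step': new hidden state plus token logits."""
        h_new = np.tanh(h @ W)
        return h_new, h_new @ U

    h0 = rng.normal(size=d)

    # Token-space chain of thought: collapse to one token per step, then
    # re-embed it. Everything not captured by the argmax token is lost.
    h_tok = h0
    for _ in range(3):
        h_tok, logits = step(h_tok)
        h_tok = E[int(np.argmax(logits))]  # discrete bottleneck

    # Latent-space reasoning: feed the hidden state straight back in,
    # carrying the full "superposition" of candidate thoughts forward.
    h_lat = h0
    for _ in range(3):
        h_lat, _ = step(h_lat)

    print("token-space state:", np.round(h_tok, 2))
    print("latent state:    ", np.round(h_lat, 2))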
by XenophileJKO
4/16/2026 at 5:24:44 PM
I would expect to see a significant wall clock improvement if that were the case - Meta's Coconut paper was ~3x faster than token-space chain-of-thought because latents contain a lot more information than individual tokens.
Separately, I think Anthropic is probably the least likely of the big 3 to release a model that uses latent-space reasoning, because it's a clear step down in the ability to audit CoT. There has even been some discussion that they accidentally "exposed" the Mythos CoT to RL [0] - I don't see how you would apply a reward function to latent-space reasoning tokens.
[0]: https://www.lesswrong.com/posts/K8FxfK9GmJfiAhgcT/anthropic-...
by ainch
4/17/2026 at 12:31:56 AM
There’s also a paper [0] from many well-known researchers that serves as a kind of informal agreement not to make the CoT unmonitorable via RL or neuralese. I also don’t think Anthropic researchers would break this “contract”.
by clbrmbr
4/16/2026 at 4:34:21 PM
If that's true, then we're following the timeline of https://ai-2027.com/
by haellsigh
4/16/2026 at 7:44:55 PM
> If that's true, then we're following the timeline
Literally just a citation of Meta's Coconut paper [1].
Notice the 2027 folks' contribution to the prediction is that this will have been implemented by "thousands of Agent-2 automated researchers...making major algorithmic advances".
So, considering that the discussion of latent-space reasoning dates back to 2022 [2] through CoT unfaithfulness, looped transformers, using diffusion for refining latent-space thoughts, etc., all published before AI 2027, it seems like to be "following the timeline of ai-2027" we'd actually need to verify not only that this was happening, but that it was implemented via major algorithmic advances made by thousands of automated researchers; otherwise they don't seem to have made a contribution here.
[1] https://ai-2027.com/#:~:text=Figure%20from%20Hao%20et%20al.%...
by magicalist
4/16/2026 at 7:32:30 PM
Hilariously, I clicked back a bunch and got a client-side error. We have a long way to go. I wouldn't worry about it.
by butlike
4/16/2026 at 4:59:51 PM
Care to expound on that? Maybe a reference to the relevant section?
by matltc
4/16/2026 at 5:10:00 PM
Ctrl-F "neuralese" on that page.by ACCount37
4/16/2026 at 5:08:37 PM
You should just read the thing, whether or not you believe it, to have an informed opinion on the ongoing debate.
by 9991
4/17/2026 at 2:38:25 AM
I did read it a while back. Was curious what the parent was referring to specifically.
by matltc
4/17/2026 at 5:59:50 PM
March 2027 -> Neuralese recurrence and memory
> For example, perhaps models will be trained to think in artificial languages that are more efficient than natural language but difficult for humans to interpret.
by 9991
4/16/2026 at 5:07:28 PM
That's not supposed to happen til 2027. Ruh roh.
by 9991
4/16/2026 at 5:37:36 PM
Only if you ignore context and just ctrl-f in the timeline.
What are you, Haiku?
But yeah, in many ways we're at least a year ahead on that timeline.
by literalAardvark
4/16/2026 at 6:36:00 PM
Don't.
The first 500 or so tokens are raw thinking output, then the summarizer kicks in for longer thinking traces. Sometimes longer thinking traces leak through, or the summarizer model (i.e. Claude Haiku) refuses to summarize them and includes a direct quote of the passage which it won't summarize. Summarizer prompt can be viewed [here](https://xcancel.com/lilyofashwood/status/2027812323910353105...), among other places.
by JoshuaDavid
4/16/2026 at 5:15:41 PM
No, there is research in that direction and it shows some promise but that’s not what’s happening here.
by WhitneyLand
4/16/2026 at 5:40:44 PM
Are you sure? It would be great to get official/semi-official validation that thinking is or is not resolved to a token embedding value in the context.
by XenophileJKO
4/16/2026 at 6:46:07 PM
You can read the model cards. Claude thinks in regular text, but the summarizer is there to hide its tool use and other things (web searches, coding).
by astrange
4/16/2026 at 4:42:06 PM
Most likely. Would be cool to see an open source model use diffusion for thinking.
by alex7o
4/16/2026 at 5:25:19 PM
Don't. Thinking right now is just text: chain of thought, but just regular tokens and text being output by the model.
by motoboi
4/16/2026 at 5:22:05 PM
Although it's more likely they are protecting secret sauce in this case, I'm wondering if there's an alternate explanation: that LLMs reason better when NOT trying to reason with natural-language output tokens, but rather implementing reasoning further upstream in the transformer.
by dheera
4/17/2026 at 3:54:25 PM
I would doubt it. They are mostly trained on natural language. They may be getting some visual reasoning capability from multi-modal training on video, but their reasoning doesn't seem to generalize much from one domain to another.
Some future AGI, not LLM-based, that learns from its own experience based on sensory feedback (and has non-symbolic feedback paths) would presumably at least learn some non-symbolic reasoning, however effective that may be.
by HarHarVeryFunny
4/19/2026 at 5:34:47 PM
My argument for this is mostly that we don't use language for all forms of reasoning, and are likely doing some of it on internal representations or embeddings. Animals also demonstrate the ability to reason about situations without actually having a language.
I see language more as a protocol for inter-agent communication (including human-human communication), but it contains a lot of inefficiencies and historical baggage and is not necessarily the optimal representation of ideas within a brain.
by dheera
4/16/2026 at 4:23:59 PM
'Hey Claude, these tokens are utterly unrelated bollocks, but obviously we still want to charge the user for them regardless. Please construct a plausible explanation as to why we should still be able to do that.'
by boomskats
4/16/2026 at 5:45:29 PM
... here's the pelican, I think Qwen3.6-35B-A3B running locally did a better job! https://simonwillison.net/2026/Apr/16/qwen-beats-opus/
by simonw
4/16/2026 at 6:03:33 PM
A secret backup test to the pelican? This is as noteworthy as 4.7 dropping.
by bredren
4/16/2026 at 6:43:02 PM
That flamingo is hilarious. Is that his beak or a huge joint he's smoking?
by qingcharles
4/16/2026 at 8:16:31 PM
With the sunglasses, the long flamingo neck and the "joint", I immediately thought of the poster for Fear And Loathing In Las Vegas:
https://www.imdb.com/title/tt0120669/mediaviewer/rm264790937...
EDIT: Actually, it must be a beak. If you zoom in, only one eye is visible and it's facing to the left. The sunglasses are actually on sideways!
by SyneRyder
4/16/2026 at 6:56:05 PM
You used a secret backup test! Truly honored to see the flamingos. We obviously need them all now ;-)
by cakeface
4/17/2026 at 3:01:08 AM
Opus did get the feet on pedals better.
by djmips
4/16/2026 at 9:21:52 PM
based sun worshipping pelican
by ionwake
4/17/2026 at 4:03:11 AM
> Also notable: 4.7 now defaults to NOT including a human-readable reasoning token summary in the output, you have to add "display": "summarized" to get that
That’s extremely bothersome because half of what helps teams build better guardrails and guidelines for agents is the ability to do deep analysis on session transcripts.
I guess we shouldn’t be surprised these vendors want to do everything they can to force users to rely explicitly on their offerings.
by devmor
4/16/2026 at 5:07:48 PM
It's likely hiding the model downgrade path they require to meet sustainable revenue. Should be interesting if they can enshittify slowly enough to avoid the ablative loss of customers! Good luck all VCs!
by cyanydeez
4/16/2026 at 5:10:54 PM
They have super sustainable revenue. They are severely supply-constrained on compute, and have a really difficult balancing act over the next year or two: they have to trade off spending that limited compute on model training so they can stay ahead, while leaving enough of it available for customers so they can keep growing their customer base.
by vessenes
4/16/2026 at 5:27:12 PM
But do they? When was the last time they declined your subscription because they have no compute?
by dainiusse
4/16/2026 at 7:34:01 PM
> When was the last time they declined your subscription because they have no compute?
Is that a serious question? There have been a bunch of obvious signs in recent weeks that they are significantly compute-constrained and that current revenue isn't adequate, ranging from myriad reports of model regression ('Claude is getting dumber/slower') to today's announcement, which first claims 4.7 is the same price as 4.6 but later discloses "the same input can map to more tokens—roughly 1.0–1.35× depending on the content type. Second, Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings. This improves its reliability on hard problems, but it does mean it produces more output tokens" and "we’ve raised the default effort level to xhigh for all plans", and disclosing that all images are now processed at higher resolution, which uses a lot more tokens.
In addition to the changes in performance, usage, and consumption costs users can see, people say they are 'optimizing' opaque under-the-hood parameters as well. Hell, I'm still just a light user of their free web chat (Sonnet 4.6), and even that started getting noticeably slower/dumber a few weeks ago. Over months of casual use I ran into their free-tier limits exactly twice. In the past week I've hit them every day, despite those being especially light-use days. Two days ago the free web chat was overloaded for a couple of hours ("Claude is unavailable now. Try again later"). Yesterday, I hit the free limit after literally five questions: two were revising an 8-line JS script and three were on current news.
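For scale, here's a quick back-of-envelope on just the re-tokenization part. Only the 1.0-1.35x multiplier comes from the announcement quoted above; the per-MTok price is a made-up placeholder:

    # Hypothetical figures: the 1.0-1.35x multiplier is from the announcement
    # quoted above; the input price is a placeholder, not Anthropic's pricing.
    price_per_mtok = 15.00      # assumed USD per million input tokens
    old_tokens = 1_000_000      # how the 4.6 tokenizer counted some corpus

    for mult in (1.00, 1.15, 1.35):
        cost = old_tokens * mult / 1e6 * price_per_mtok
        print(f"{mult:.2f}x tokenization -> ${cost:.2f} "
              f"(+{(mult - 1) * 100:.0f}% for the same input)")

Same sticker price per token, but up to a third more tokens billed for the same input, before the extra thinking tokens at higher effort are even counted.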
by mrandish
4/16/2026 at 6:00:48 PM
Most weekdays.
by alwa
4/16/2026 at 5:49:03 PM
Just last week. They cut off openclaw. And they added a fast mode at increased prices. And today they announced new features that are not included with Max subscriptions.
They are short roughly 5GW and scrambling to add it.
by vessenes
4/16/2026 at 6:44:20 PM
Now. Is it a price increase or a resource shortage? These are not the same thing.
by dainiusse
4/16/2026 at 7:00:13 PM
If there is any elasticity to demand whatsoever, then these are the same thing.
by vessenes
4/16/2026 at 5:58:46 PM
It's cute you think they're gonna do any full training of a model. The sooner they can extract cash from the machine, the better.
by cyanydeez
4/16/2026 at 6:40:56 PM
This is low-effort thinking, and a low-effort comment. They have a lot of cash. They do not think they have achieved a "city of geniuses" in a datacenter yet. They are racing against two high-quality frontier model teams, with Meta in the wings. They have billions of dollars in cash that they are currently trying to spend on increasing their datacenter capacity.
Any compute time spent on inference is necessarily taken from training compute time, causing them long-term strategic worries.
What part of that do you think leads toward cash extraction?
by vessenes
4/16/2026 at 5:42:39 PM
CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1 claude…
by markrogersjr
4/16/2026 at 8:27:38 PM
As per https://code.claude.com/docs/en/model-config#adaptive-reason...:> Opus 4.7 always uses adaptive reasoning. The fixed thinking budget mode and CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING do not apply to it.
by miguno
4/16/2026 at 5:51:07 PM
What does that actually do? Force the "effort" to be static at whatever I set?
by slekker
4/17/2026 at 1:52:49 PM
Since the performance of 4.6 started dropping, I started using Codex more and more. OpenAI is playing it smart by being more cost-effective; even though they're still catching up in terms of total utility in their desktop application, they're going to win more than Anthropic (if Anthropic can't drop prices).
by godzillabrennus
4/16/2026 at 6:54:42 PM
https://github.com/anthropics/claude-agent-sdk-python/pull/8... - created a PR for that because I hit it in their Python SDK
by maximgran
4/17/2026 at 12:37:41 AM
The reasoning modes are really weird with 4.7.
In my tests, asking for "none" reasoning resulted in higher costs than asking for "medium" reasoning...
Also, "medium" reasoning only had 1/10 of the reasoning tokens 4.6 used to have.
by XCSme
4/18/2026 at 11:10:02 AM
Medium reasoning has regressed since 4.6, while None and Max have improved in our benchmark. We suspect this is how Claude tries to cope with the increased user base. Note that Google and OpenAI probably did something similar long ago.
by Eifert
4/17/2026 at 12:38:04 AM
Oh, and also, the "none" and "medium" variants performed the same (??)
by XCSme
4/17/2026 at 3:21:08 AM
Insane! Even Haiku doesn't make such mistakes.
by nisarg2
4/17/2026 at 10:36:38 AM
I'm not sure it's a mistake; this might be their new "adaptive reasoning" plus the hidden reasoning trace, so we can't verify.
by XCSme
4/17/2026 at 7:48:29 AM
Claude is known for its shitty metering.
by amelius
4/17/2026 at 12:16:39 AM
Claude Opus 4.6 has been hilarious for me so far: https://i.imgur.com/jYawPDY.png
by Razengan
4/17/2026 at 12:19:01 AM
Made my day!
by mosfets
4/16/2026 at 4:19:01 PM
Don't look at "thinking" tokens. LLMs sometimes produce thinking tokens that are only vaguely related to the task if at all, then do the correct thing anyways.by dgb23
4/16/2026 at 5:35:11 PM
Why does this comment appear every time someone complains about CoT becoming more and more inaccessible with Claude?
I have entire processes built on top of summaries of CoT. They provide tremendous value, and no, I don't care if "the model still did the correct thing". Thinking blocks show me if the model is confused; they show me what alternative paths existed.
Besides, "correct thing" has a lot of meanings, and a decision by the model may be correct relative to the context it's in but completely wrong relative to what I intended.
The proof that thinking tokens are indeed useful is that Anthropic tries to hide them. If they were useless, why would they even try all of this?
Starting to feel PsyOp'd here.
by gck1
4/16/2026 at 6:27:58 PM
Didn't you notice that the stream is incoherent or noisy? Sometimes it goes from thought A to thought B then action C, but A was entirely unnecessary noise that had nothing to do with B and C. I also sometimes saw signals in the thinking output that were red flags, or, as you said, it got confused, but then it didn't matter at all. Now I just never look at the thinking tokens anymore, because I got bamboozled too often.
Perhaps when you summarize it you might miss some of these, or you're doing things differently otherwise.
by dgb23
4/16/2026 at 6:57:05 PM
The usefulness of thinking tokens in my case might come down to the conditions I have Claude working in.
I primarily use Claude for Rust, with what I call a masochistic lint config. Compiler and lint errors almost always trigger extended thinking when adaptive thinking is on, and that's where these tokens become a goldmine. They reveal whether the model actually considered the right way to fix the issue. Sometimes it recognizes that ownership needs to be refactored. Sometimes it identifies that the real problem lives in a crate that for some reason is "out of scope" even though it's right there in the workspace, and then concludes with something like "the pragmatic fix is to just duplicate it here for now."
So yes, the resulting code works, and by some definition the model did the correct thing. But to me, "correct" doesn't just mean working; it means maintainable. And on that question, the thinking tokens are almost never wrong or useless. Claude gets things done, but it's extremely "lazy".
by gck1
4/16/2026 at 9:56:28 PM
Also, for anyone using Opus with Claude Code: they again "broke" the thinking summaries, even if you had "showThinkingSummaries": true in your settings.json [1]
You have to pass the `--thinking-display summarized` flag explicitly.
by gck1
4/16/2026 at 6:27:43 PM
I agree. Ever since the release of R1, it's like every single American AI company has realized that they actually do not want to show CoT, and then separately that they cannot actually run CoT models profitably. Ever since then, we've seen everyone implement a very bad dynamic-reasoning system that makes you feel like an ass for even daring to ask the model for more than 12 tokens of thought.
by quadruple
4/16/2026 at 5:30:22 PM
Thinking summaries might not be useful for revealing the model's actual intentions, but I find that they can be helpful in signalling to me when I have left certain things underspecified in the prompt, so that I can stop and clarify.
by shawnz
4/16/2026 at 4:20:15 PM
They also sometimes flag stuff in their reasoning and then think themselves out of mentioning it in the response, when it would actually have been a very welcome flag.
by thepasch
4/16/2026 at 4:35:33 PM
Yea, I’ve seen this, and stopped it and asked it about it.
Sometimes they notice bugs or issues and just completely ignore them.
by vorticalbox
4/16/2026 at 4:55:17 PM
This can result in some funny interactions. I don't know if Claude will say anything, but I've had some models act "surprised" when I commented on something in their thinking, or even deny saying anything about it until I insisted that I could see their reasoning output.
by Gracana
4/16/2026 at 4:57:02 PM
Supposedly (https://www.reddit.com/r/ClaudeAI/comments/1seune4/claude_ch...) they can't even see their own reasoning afterwards.
by ceejayoz
4/16/2026 at 6:45:08 PM
It depends on the version. For the more recent Claudes they've been keeping it.
by astrange
4/16/2026 at 5:45:22 PM
Thinking helps the models arrive at the correct answer with more consistency. However, they get the reward at the end of a cycle. It turns out that, without huge constraints during training, the thinking (the series of thinking tokens) is gibberish to humans.
I wonder if they decided that the gibberish is better, and that the thinking is interesting for humans to watch but overall not very useful.
by dataviz1000
4/16/2026 at 6:30:30 PM
OK so you're saying the gibberish is a feature and not a bug so to speak? So the thinking output can be understood as coughing and mumbling noises that help the model get into the right paths?
by dgb23
4/16/2026 at 7:05:18 PM
Here is a 3blue1brown short about the relationship between words in a 3-dimensional vector space. [0] In order to show this conceptually to a human, it requires reducing the dimensions from 10,000 or 20,000 down to 3.
In order to get the thinking to be human-understandable, the researchers will reward not just the correct answer at the end during training, but also seed the beginning with structured thinking-token chains and reward the format of the thinking output.
The thinking tokens do just a handful of things: verification, backtracking, scratchpad or state management (like doing multiplication on paper instead of in your head), decomposition (breaking the problem into smaller parts, which is most of what I see thinking output do), and self-criticism.
An example would be a math problem solved by an Italian and another solved by a German, which might cause those geographic areas to be associated with the solution across the 20,000 dimensions. So if the model gets more accurate answers in training by mentioning them, that will show up in the gibberish, unless it has been trained to produce much more sensical (like the 3 dimensions) human-readable output instead.
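To make the "reward the format" idea concrete, here's an entirely schematic toy (not anyone's actual RL reward): the correctness reward at the end of the cycle dominates, with small shaping bonuses for human-readable structure in the trace.

    # Schematic sketch of a format-plus-correctness reward for thinking traces.
    # Entirely illustrative -- not Anthropic's (or anyone's) actual training code.
    def reward(trace: str, final_answer: str, gold_answer: str) -> float:
        r = 0.0
        if final_answer.strip() == gold_answer.strip():
            r += 1.0  # the main reward, granted at the end of the cycle
        # Small shaping bonuses for human-readable structure in the trace.
        for marker in ("First,", "Let me verify", "Breaking this down"):
            if marker in trace:
                r += 0.1
        return r

    print(reward("First, factor 12. Let me verify: 3*4=12.", "12", "12"))  # 1.2
    print(reward("zx!! qq3 befuddle", "12", "12"))                         # 1.0

Without the shaping terms, the model is free to drift toward whatever trace maximizes the final reward, readable or not, which is the gibberish being described above.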
It has been observed that, sometimes, a model will write perfectly normal-looking English sentences that secretly contain hidden codes for itself in the way the words are spaced or chosen.
by dataviz1000
4/17/2026 at 1:55:19 AM
> It has been observed that, sometimes, a model will write perfectly normal-looking English sentences that secretly contain hidden codes for itself in the way the words are spaced or chosen.
This sounds very interesting; do you have any references?
by johnmaguire
4/16/2026 at 10:01:25 PM
No, he's saying that amongst whatever else is there, you can often see how you could refine your prompt to guide it better in the first place, helping it avoid bad thinking threads to begin with.
by alienbaby
4/16/2026 at 10:59:10 PM
This is because the "thinking" you see is a summary by a highly quantized model (not the actual model), to mask these tokens.
by sharms
4/17/2026 at 6:50:20 PM
bigger change here might not be model quality, but debuggability.
once you hide the reasoning, remove the knobs, and let the model choose its own effort, it gets much harder to tell whether the model got worse or just got harder to inspect.
that’s a real shift. less tool, more black box.
by jafitc
4/16/2026 at 7:31:35 PM
If you do include reasoning tokens you pay more, right?
by nextaccountic
4/17/2026 at 5:44:00 AM
In fact, you need to pay regardless of whether the output includes reasoning tokens or not.
by schneehertz
4/17/2026 at 1:24:28 AM
Prompts seem to need to evolve with every new model.
by j45
4/16/2026 at 4:14:34 PM
I got Opus 4.7 working on oh-my-pi with this commit if it interests you: https://github.com/azais-corentin/oh-my-pi/commit/6a74456f0b
by haellsigh