alt.hn

2/17/2026 at 4:42:35 PM

Using go fix to modernize Go code

https://go.dev/blog/gofix

by todsacerdoti

2/17/2026 at 5:14:20 PM

I really liked this part:

In December 2024, during the frenzied adoption of LLM coding assistants, we became aware that such tools tended—unsurprisingly—to produce Go code in a style similar to the mass of Go code used during training, even when there were newer, better ways to express the same idea. Less obviously, the same tools often refused to use the newer ways even when directed to do so in general terms such as “always use the latest idioms of Go 1.25.” In some cases, even when explicitly told to use a feature, the model would deny that it existed. [...] To ensure that future models are trained on the latest idioms, we need to ensure that these idioms are reflected in the training data, which is to say the global corpus of open-source Go code.

by homarp

2/17/2026 at 6:10:48 PM

PHP went through a similar effort a while back to clear places like Stack Overflow of terrible out-of-date advice (e.g. posts advocating magic_quotes). LLMs make this a slightly different problem because, for the most part, once the bad advice is in the model it's never going away. In theory there's an easier-to-test surface around how good the advice is, but figuring out how the model reached a conclusion and correcting it for future models is arcane. It's unlikely that model trainers will submit their RC models to various communities to make sure they aren't lying about those specific topics, so everything has to happen in preparation for the next generation, relying on the hope that you've identified the bad source it originally trained on and that the model will actually prioritize training on that same, now corrected, source.

by munk-a

2/17/2026 at 8:04:02 PM

This is one area where reinforcement learning can help.

The way you should think of RL (both RLVR and RLHF) is the "elicitation hypothesis"[1]. In pretraining, models learn their capabilities by consuming large amounts of web text. Those capabilities include producing both low- and high-quality outputs (as both are present in their pretraining corpora). In post-training, RL doesn't teach them new skills (see, e.g., the "Limits of RLVR" paper[2]). Instead, it "teaches" the models to produce the more desirable, higher-quality outputs while suppressing the undesirable, low-quality ones.

I'm pretty sure you could design an RL task that specifically teaches models to use modern idioms, either as an explicit dataset of chosen/rejected completions (where the chosen is the new way and the rejected is the old), or as a verifiable task where the reward goes down as the number of linter errors goes up.
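
As a rough illustration (my own sketch, not anything a lab has documented), the verifiable version could be as simple as scoring a generated package by how many diagnostics a vet/linter pass emits:

    package reward

    import (
        "os/exec"
        "strings"
    )

    // score runs `go vet` over a candidate directory and maps the number of
    // diagnostics to a value in (0, 1]: fewer issues, higher reward. Purely a
    // sketch; a real setup would use a broader linter and a smarter scorer.
    func score(dir string) float64 {
        cmd := exec.Command("go", "vet", "./...")
        cmd.Dir = dir
        out, _ := cmd.CombinedOutput() // vet exits non-zero when it reports problems
        issues := 0
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, ".go:") { // diagnostics look like file.go:line:col: message
                issues++
            }
        }
        return 1 / float64(1+issues)
    }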

I wouldn't be surprised if frontier labs have datasets for this for some of the major languages and packages.

[1] https://www.interconnects.ai/p/elicitation-theory-of-post-tr...

[2] https://limit-of-rlvr.github.io

by miki123211

2/17/2026 at 8:16:02 PM

I believe you absolutely could... as the model owner. The question is whether Go project owners can convince all the model trainers to invest in RL to fix their models. The follow-up question is whether the single maintainer of some critical but obscure open-source project could also convince the model trainers to commit to RL once they realize the model is horribly mistrained.

On Stack Overflow, data is trivial to edit, and the org (previously, at least) was open to requests from maintainers to update accepted answers with more correct information. Editing a database is trivial and cheap; for a model, editing is possible (less easy but doable), expensive, and a potential risk to the model owner.

by munk-a

2/18/2026 at 3:02:15 PM

I know Claude will read through code from Go libraries it has imported to ensure it is doing things correctly, but I do have to wonder, for other languages and those small libraries, if we'll start seeing things like an AGENT_README.md: a file that describes the project, then describes what functionality lives where in the code, and if necessary drills down on a source-file-by-source-file basis (unless it's too massive; context limits are still limits). In any regard, I could see that becoming more common, especially if you link to said file from the README.md for the model to follow. ;)

by giancarlostoro

2/18/2026 at 2:11:59 AM

I think this can be fixed more generally by biasing model outputs towards newer data and putting more weight on authoritative sources rather than treating all data the same. Then no one needs to go in and specifically single out Go code; trainers would instead lean on new examples that use features like generics, from sources like Google, which are more likely to follow best (or at least better) practices than the rest of the corpus.

by xmprt

2/17/2026 at 9:38:45 PM

[flagged]

by jpalepu

2/17/2026 at 7:25:57 PM

They're particularly bad about concurrent Go code, in my experience - it's almost always tutorial-like stuff, over-simplified and missing error and edge-case handling to the point that it's downright dangerous to use... but it routinely slips past review because it seems simple, and simple is correct, right? Go concurrency is so easy!

And then you point out issues in a review, so the author feeds it back into an LLM, and code that looks like it handles that case gets added... while also introducing a subtle data race and a rare deadlock.

Very nearly every single time. On all models.

by Groxx

2/17/2026 at 7:54:27 PM

> a subtle data race and a rare deadlock

That's a language problem that humans face as well, and one which Go could stop having (see C++'s thread safety annotations).

by Jyaif

2/17/2026 at 9:09:29 PM

Go has a pretty good race detector already, and all it (usually) takes to enable it is passing the -race flag to go build/test/run/etc.
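
For example (a deliberately minimal race, not anything from the article), the detector flags this program as soon as the two writes overlap:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        n := 0
        go func() { n++ }() // unsynchronized write from a second goroutine
        n++                 // concurrent write from main: a data race
        time.Sleep(10 * time.Millisecond)
        fmt.Println(n)
    }

Running it with `go run -race main.go` prints a WARNING: DATA RACE report with the stacks of both writes.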

by kbolino

2/18/2026 at 12:37:23 AM

No language protects you from deadlock.

by Thaxll

2/18/2026 at 1:10:22 PM

I probably agree that they don't protect you from all deadlocks, but some languages protect you from some deadlocks.

by Jyaif

2/17/2026 at 8:43:28 PM

You should be using rust... mm kay :\

by awesome_dude

2/17/2026 at 11:36:14 PM

Doing concurrency in Rust was more complex (though not overly so) than doing it in Golang was, but the fact that the compiler will outright not let me pass mutable refs to each thread does make me feel more comfortable about doing so at all.

Meanwhile, I copy-pasted a Python async TaskGroup example from the docs and still found that, despite using a TaskGroup, which is specifically designed to await every task and only return once all are done, it returned the instant the loop completed and the tasks were created, and the program exited without having done any of the work.

Concurrency woo~

by danudey

2/18/2026 at 3:57:05 AM

The person I was replying to sounded exactly like the Rust zealots roving the internet trying to convince people to change.

by awesome_dude

2/18/2026 at 6:49:01 AM

So you are trying to explain concurrency to the folks who implemented CSP in both Plan9 and Go. Interesting. I should return "cspbook.pdf" back.

by anthk

2/18/2026 at 7:10:34 AM

One day, maybe today, you will learn to read

by awesome_dude

2/18/2026 at 7:41:26 AM

(eval) rather than (read), then.

On concurrency, Go has the bolts screwed in; it basically was 'let's reuse everything we can from Plan9 in a multiplatform language'.

by anthk

2/17/2026 at 8:10:08 PM

Good use case for Elixir. Apparently it performs best across all programming languages with LLM completions and its concurrency model is ideal too.

https://autocodebench.github.io/

by brightball

2/17/2026 at 9:30:02 PM

This is the exact opposite of my experience.

Claude 4.6 has been excellent with Go, and truly incompetent with Elixir, to the point where I would have serious concerns about choosing Elixir for a new project.

by monooso

2/17/2026 at 9:43:13 PM

Shouldn't you have concerns about picking Claude 4.6 for your next project if it produces subpar Elixir code? Cheap shot perhaps, but I have a feeling exotic languages will remain exotic for longer now that LLM-aided development is becoming the norm.

by hbogert

2/19/2026 at 2:38:13 AM

The specific agent is irrelevant. This is related to a broader personal opinion regarding LLMs and language choice.

Before we continue, the following opinion comes with several important caveats:

1. It only applies to paid professional work. If it's a hobby project, choose whatever makes you happy.

2. It ignores the strengths and weaknesses of different languages. These may outweigh any LLM-related concerns.

3. This is my opinion today. I _think_ it will survive longer than the next LLM cycle, but who knows these days.

4. May contain nuts.

Okay, that's the ass-covering dispensed with, on to the opinion:

If the choice is between a language which is "LLM friendly" (for want of a better phrase) and one which is not, it is irresponsible to choose the latter.

by monooso

2/18/2026 at 12:01:18 AM

We've finally figured out how to spread ossification from network protocols to programming languages! \o/

by majewsky

2/18/2026 at 2:18:15 AM

We live in different realities.

Opus and Sonnet write practically the same idiomatic Elixir (Phoenix, mind you) code that I would have written myself, with few edits.

It's scary good.

by dimitrios1

2/19/2026 at 2:40:31 AM

I envy your reality.

by monooso

2/17/2026 at 5:42:43 PM

I have run into that a lot, which is annoying. Even though all the code compiles, because Go is backwards compatible, it all looks so different. Python has the same issue, but there the API changes lead to actual breakage. For this reason I find Go fairly great for codegen: the stability of the language is hard to compete with, and the standard library is a powerful enough tool to support many, many use cases.

by robviren

2/17/2026 at 6:02:53 PM

The use of LLMs will lead to homogeneous, middling code.

by HumblyTossed

2/17/2026 at 7:46:34 PM

Middling code should not exist. Boilerplate code should not exist. For some reason we're suddenly accepting code-gen as SOP instead of building a layer of abstraction on top of the too-onerous layer we're currently building at. Prior generations of software developers would see a too-onerous layer and build tools to abstract to a higher level; this generation seems stuck on the idea that we just need tooling to generate all that junk so we can continue working at this level.

by munk-a

2/17/2026 at 8:33:26 PM

But Go culture promulgates this practice of repeating boilerplate. In fact, this is one of the biggest points of confusion for new gophers: "I want to do a thing that seems common enough; what library are you all using to do X?" Everyone scoffs, pushes up their glasses, and says, "well actually, you should just use the standard library, it's always worked just fine for me." And the new gopher is left confused that reinventing the wheel is apparently an acceptable practice. This is what leads to using LLMs to write all that code (admittedly, it's a fine use of an LLM).

by nobleach

2/17/2026 at 8:16:51 PM

LLMs have always been great at generating code that doesn't really mean anything - no architectural decisions, the same for "any" program. But only rarely does one see the question of why we need to generate "meaningless" code in the first place.

by kimixa

2/17/2026 at 8:23:14 PM

This gets to one of my core fears around the last few years of software development. A lot of companies right now are saddling their codebases with pages and pages of code that does what they need it to do but of which they have no comprehension.

For a long time my motto around software development has been "optimize for maintainability" and I'm quite concerned that in a few years this habit is going to hit us like a truck in the same way the off-shoring craze did - a bunch of companies will start slowly dying off as their feature velocity slows to a crawl and a lot of products that were useful will be lost. It's not my problem, I know, but it's quite concerning.

by munk-a

2/18/2026 at 12:58:49 AM

The "LLMs shouldn't be writing code" take is starting to feel like the new "we should all just use No-Code."

We’ve been trying to "build a better layer" for thirty years. From Dreamweaver to Scratch to Bubble, the goal was always the same: hide the syntax so the "logic" can shine. But it turns out, the syntax wasn't the enemy—the abstraction ceiling was.

by hackerbrother

2/18/2026 at 2:43:43 PM

Where are the amazing no-hassle, no-boilerplate tools from last generation? Or the generation before that? Give me a break: it's easy to post this but it's proven very hard to simply "pick the right abstraction for everyone".

by slibhb

2/18/2026 at 12:18:11 AM

[dead]

by zer00eyz

2/17/2026 at 6:52:47 PM

It does. I’ve been writing Go for long enough, and the code that LLMs output is pretty average. It’s what I would expect a mid-level engineer to produce. I still write code manually for stuff I care about or where code structure matters.

Maybe the best way is to do the scaffolding yourself and use LLMs to fill the blanks. That may lead to better structured code, but it doesn’t resolve the problem described above where it generates suboptimal or outdated code. Code is a form of communication and I think good code requires an understanding of how to communicate ideas clearly. LLMs have no concept of that, it’s just gluing tokens together. They litter code with useless comments while leaving the parts that need them most without.

by cedws

2/18/2026 at 12:26:56 PM

I am also of the opinion that LLMs are still pretty bad at what's called "low-level design" - that is, structuring functions and classes in a project. I wonder if a rule like Torvalds' "no more than 4 levels of indentation" might make them work better.

by rdfc-xn-uuid

2/17/2026 at 7:32:48 PM

Do LLMs generate code similar to the middling code of a given domain? Why not generate in a perfect language used only by cool and very handsome people, like Fortran, and then translate it once the important stuff is done?

by bee_rider

2/17/2026 at 8:53:55 PM

This might work if Fortran were portable, or if only one compiler were targeted.

by pklausler

2/17/2026 at 9:30:59 PM

Middling code, delivered within a tolerable time frame and budget, without taking excessive risk, is good enough for many real-world commercial software projects. Homogeneous middling code, written by humans or extruded by machines, is arguably even a positive for the organisation: lots of organisations are more interested in the delivery of software projects being predictable, or in having a high bus factor due to the fungibility of the folks (or machines) building and maintaining the code, than in depending upon excellence.

by shoo

2/17/2026 at 7:23:24 PM

You might even say that LLMs are not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

by saghm

2/17/2026 at 6:24:56 PM

I'm not sure if that's a criticism or praise - I mean, most people strive for readable code.

by awesome_dude

2/17/2026 at 6:28:02 PM

LLM generated code reminds me of perl's "write-only" reputation.

by candiddevmike

2/17/2026 at 7:21:03 PM

Does it really? Because I see some quite fine code. The problem is assumptions, or missing side effects when the code is used, or getting stuck in a bad approach "loop" - but not code quality per se.

by coldtea

2/17/2026 at 6:54:52 PM

In all honesty I've only used LLMs in anger with Go, and come away (generally speaking) happy with what it produced.

by awesome_dude

2/17/2026 at 8:35:37 PM

For a few years, yeah. Eventually it will probably lead to the average quality of code being considerably higher than it was pre-LLMs.

by meowface

2/17/2026 at 7:13:22 PM

[dead]

by throwaway613746

2/17/2026 at 9:29:02 PM

I'd prefer we start nuking the idea of using LLMs to write code, not helping it get better. Why don't you people listen to Rob Pike? This technology is not good for us. It's a stain on software and the world in general, but I get it, most of y'all yearn for slop. The masses yearn for slop.

by dakolli

2/17/2026 at 10:27:42 PM

I totally agree. I read threads like this and I just can’t believe people are wasting their time with LLMs.

by throw432196

2/17/2026 at 10:38:11 PM

The masses yearn to not have to fiddle with bs for rent and food

by whattheheckheck

2/17/2026 at 6:32:11 PM

I definitely see that with C++ code. Not so easy to "fix", though. Or so I think. But I do still hope, as more and more "modern" C++ code gets published.

by BiraIgnacio

2/18/2026 at 11:25:03 AM

Battle of my life. Several times I've had to update my agent instructions to prefer modern and usually better syntax over the old way of doing things. Largely it's worked well for me. I find that making the agents read release notes, and some official blog posts, helps them maintain healthy and reasonably up-to-date instructions for writing Go.

by yawboakye

2/17/2026 at 6:16:28 PM

I think tooling that can modify your source code to make it more modern is really cool stuff. OpenRewrite comes to mind for Java, but nothing comes to the top of my mind for other languages. And heck, I only recently learned about OpenRewrite and I've been writing Java for a long time.

Even though I don't like Go, I acknowledge that tooling like this built right into the language is a huge deal for language popularity and maturity. Other languages just aren't this opinionated about build tools, testing frameworks, etc.

I suspect that as newer languages emerge over the years, they'll take notes from Go and how well it integrates stuff like this.

by retrodaredevil

2/17/2026 at 9:41:19 PM

Coccinelle for C, used by Linux kernel devs for decades, here's an article from 2009:

https://lwn.net/Articles/315686

Also IDE tooling for C#, Java, and many other languages; JetBrains' IDEs can do massive refactorings and code fixes across millions of lines of code (I use them all the time), including automatically upgrading your code to new language features. The sibling comment is slightly "wrong" — they've been available for decades, not mere years.

Here's a random example:

https://www.jetbrains.com/help/rider/ConvertToPrimaryConstru...

These can be applied across the whole project with one command, rewriting however many problems there are.

Also JetBrains has "structural search and replace" which takes language syntax into account, it works on a higher level than just text like what you'd see in text editors and pseudo-IDEs (like vscode):

https://www.jetbrains.com/help/idea/structural-search-and-re...

https://www.jetbrains.com/help/idea/tutorial-work-with-struc...

For modern .NET you have Roslyn analyzers built in to the C# compiler which often have associated code fixes, but they can only be driven from the IDE AFAIK. Here's a tutorial on writing one:

https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/t...

by homebrewer

2/17/2026 at 10:30:47 PM

Rust has clippy nagging you with a bunch of modernity fixes, and sometimes it can autofix them. I learned about a lot of small new features that make the code cleaner through clippy.

by nitnelave

2/17/2026 at 9:58:49 PM

Does anyone have experience transforming a TypeScript codebase this way? TypeScript's LSP server is not powerful enough and doesn't support basic things like removing a positional argument from a function (and all call sites).

Would jscodeshift work for this? Maybe in conjunction with claude?

by loevborg

2/18/2026 at 11:39:04 PM

ESLint (and typescript-eslint) has the concept of fixers, which update the source code.

by glitchdout

2/18/2026 at 1:02:18 AM

jscodeshift supports ts as a parser, so it should work.

If you want to also remove argument from call sites, you'll likely need to create your own tool that integrates TS Language Service data and jscodeshift.

LLMs definitely help with these codemods quite a bit -- you don't need to manually figure out the details of manipulating the AST. But make sure to write tests -- a lot of them -- and come up with a way to quickly fix bugs, revert your change, and then iterate. If you have set up that workflow, you may be able to just let the LLM automate this for you in a loop until all issues are fixed.

by g947o

2/18/2026 at 1:08:57 AM

Try ast-grep

by 0x696C6961

2/18/2026 at 1:29:50 AM

Haskell has had hlint for a very long time. Things like rewriting chained calls of `concat` and `map` into `concatMap`, or just rewriting your boolean expressions like `if a then b else False`.

by kccqzy

2/17/2026 at 9:43:52 PM

> but nothing comes to the top of my mind for other languages

"cargo clippy --fix" for Rust, essentially integrated with its linter. It doesn't fix all lints, however.

by nu11ptr

2/17/2026 at 8:03:42 PM

Java and .NET IDEs have had these capabilities for years now; even when Eclipse was the most used one, there were the tips from Checkstyle and other similar plugins.

by pjmlp

2/18/2026 at 12:33:19 AM

Yeah I've noticed the IDEs have this ability, but I think tooling outside of IDEs that can be applied in a repeatable way is much better than doing a bunch of mouse clicks in an IDE to change something.

I think the two things that make this a big deal are: callable from the command line (which means it can integrate with CI/CD or AI tools) and like I mentioned, the fact this is built into Go itself.

by retrodaredevil

2/17/2026 at 9:16:38 PM

ESLint has had `--fix` for something like 10 years, so this is not exactly new.

by silverwind

2/18/2026 at 12:35:04 AM

I can’t find where in the article the author claims it is new (as in original).

In fact, the author shows that this is an evolution of go vet and others.

What’s new, however, is the framework that allows home-grown add-ons, which don't have to do everything from scratch.

by YesThatTom2

2/17/2026 at 5:56:29 PM

It's tooling like this that really makes golang an excellent language to work with. I had missed that rangeint addition to the language, but with go fix I'll just get that improvement for free!
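
For anyone else who missed it, the change (range over an integer, added in Go 1.22) just drops the ceremony from the counted loop, and the modernizer rewrites the old form into the new one:

    // before: the classic counted loop
    for i := 0; i < 10; i++ {
        fmt.Println(i)
    }

    // after `go fix` applies the range-over-int rewrite (Go 1.22+)
    for i := range 10 {
        fmt.Println(i)
    }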

Real kudos to the golang team.

by kiernanmcgowan

2/17/2026 at 6:09:53 PM

There have been many situations where I'd rather use another language, but Go's tooling is so good that I still end up writing it in Go. It's so hard to beat the built-in testing, linting, and incredibly fast compilation.

by jjice

2/17/2026 at 6:17:43 PM

Absolutely.

The Go team has built such trust with backwards compatibility that improvements like this are exciting, rather than anxiety-inducing.

Compare that with other ecosystems, where APIs are constantly shifting, and everything seems to be @Deprecated or @Experimental.

by iamcalledrob

2/17/2026 at 7:46:41 PM

I just searched for `for` loops with `:=` within and hand-fixed them. I found a few forms of the for loop, and where there was a high number of instances, I used a regexp.
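
One form of this (a sketch; process is just a stand-in) is the pre-1.22 loop-variable copy, which is now redundant:

    for _, v := range items {
        v := v // pre-Go 1.22: copy so the goroutine doesn't capture the shared loop variable
        go func() {
            process(v)
        }()
    }
    // Since Go 1.22 each iteration gets its own v, so the `v := v` line can simply be deleted.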

This tool is way cooler, post-redesign.

by lowmagnet

2/18/2026 at 1:33:08 AM

Not even mentioned in the article, my favorite capability is the new `//go:fix inline` directive, which can be applied to a one-line function to make go fix inline its contents into the caller.

That ends up being a really powerful primitive for library authors to get users off of deprecated functions, as long as the old semantics are concisely expressible with the new features. It can even be used (and I'm hoping someone makes tooling to encourage this) to auto-migrate users to new semver-incompatible versions of widely used libraries by releasing a 1.x version that's implemented entirely as thin wrappers around 2.x functions; go fix will then automatically upgrade users when they run it.
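
A sketch of what that looks like for a deprecated wrapper (the names here are made up):

    // Deprecated: use ParseConfig instead.
    //
    //go:fix inline
    func Parse(s string) (Config, error) {
        return ParseConfig(s)
    }

    // After `go fix` runs, calls to Parse(s) are rewritten to ParseConfig(s),
    // and the deprecated symbol can eventually be dropped.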

by HALtheWise

2/18/2026 at 2:03:31 AM

Incidentally, I saw the Wes McKinney podcast where he said that Go is the perfect language now, because the fast compile-run cycle, strong types, and built-in multithreading safety are perfect for coding agents. Gave me a whole new interest. https://www.youtube.com/watch?v=1VfzDXeQRhU

by rr808

2/18/2026 at 3:48:28 AM

[dead]

by barendscholtus

2/17/2026 at 8:44:57 PM

Go and its long-established conventions and tools continue to be a massive boon to my agentic coding.

We have `go run main.go` as the convention to boot every app's dev environment, with support for multiple worktrees, central config management, a pre-migrated database, and more. It makes it easy and fast to develop and test many versions of an app at once.

See https://github.com/housecat-inc/cheetah for the shared tool for this.

Then of course `go generate`, `go build`, `go test` and `go vet` are always part of the fast dev and test loop. Excited to add `go fix` into the mix.

by nzoschke

2/17/2026 at 11:29:06 PM

And the screamingly fast compilation speed is a boon to fast LLM iterations as well.

by hu3

2/18/2026 at 6:25:09 AM

Not related to Go: I recently tried to learn Python beyond the classic example code from the web, and discovered that there are more or less 4 different ways to do a thing, with no clear guide to what best practice is. I come from C, and there one is happy if there is ONE way to do a thing :). Is Go at this stage? I'm interested in learning Go, but not to the point where I need an LLM to determine whether my code follows best practice.

by Surac

2/18/2026 at 11:51:15 AM

Go is fairly opinionated and there is very often only one way to do a thing.

For the parts where that's not the case, see this thread; they really work on keeping the language fairly simple. I'd recommend it to anyone tired of Python messes.

by notTooFarGone

2/18/2026 at 5:15:26 PM

The LLM training data angle is real - we noticed the same thing when reviewing AI-generated Go code. The style is fine to fix automatically, but the security patterns are harder. LLMs trained on pre-1.21 code will happily generate goroutines with shared mutable state and no context propagation, which go fix doesn't touch. Honestly not sure if the solution is more tooling or just better review habits for AI-generated code specifically.
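
Roughly the shape we keep seeing (a sketch; Result, jobs, handle, and handleCtx are stand-ins), versus what we would rather see:

    // What models trained on older code tend to emit: shared mutable state,
    // no synchronization, nothing waiting on the goroutines, no ctx.
    var results []Result
    for _, job := range jobs {
        go func() {
            results = append(results, handle(job)) // data race on results
        }()
    }

    // A safer shape: errgroup (golang.org/x/sync/errgroup) for waiting and
    // cancellation, ctx passed down, each goroutine writing only its own slot.
    // Assumes Go 1.22+ per-iteration loop variables.
    g, ctx := errgroup.WithContext(ctx)
    out := make([]Result, len(jobs))
    for i, job := range jobs {
        g.Go(func() error {
            r, err := handleCtx(ctx, job)
            out[i] = r
            return err
        })
    }
    if err := g.Wait(); err != nil {
        return err
    }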

by the_harpia_io

2/18/2026 at 12:00:19 AM

Self-service analyzers! This will be huge for big libraries, and I can imagine it being heavily used by infra folks (although they could already use analyzers).

by suralind

2/17/2026 at 10:43:19 PM

I wonder if you could make a language that is NOT backward compatible if you develop a tool like this alongside it.

by malklera

2/18/2026 at 1:24:33 AM

Biome kind of does this stuff for me in the TypeScript world. For example, it recommends for...of instead of forEach. Paired with Ultracite, it really improves my workflow because it is super easy to add to any project, just one dependency. A breath of fresh air compared with the ESLint ecosystem.

I now have an agents.md file that says to run biome fix after every modification. I end up with much nicer code that I don't have to go and fix myself (or re-save the file to get Biome to run). It speeds things up considerably to not have that step in my own workflow.

by latchkey

2/17/2026 at 7:50:41 PM

[dead]

by Arifcodes

2/17/2026 at 10:11:59 PM

We have this with our frontend code through elm-review. There are a great many rules for it with fixes, and we write some specifically for our app too. They then run pre-push so you get feedback early that you need to fix things.

The real key: there's no ignore comment as with other linters. The most you can do is run a suppress command so that every file gets its current number of violations of each rule recorded in JSON, and then you can only ever decrease those numbers.

by 1-more

2/17/2026 at 11:57:58 PM

[dead]

by Arifcodes

2/18/2026 at 12:20:46 AM

I believe Go tooling is one of the best-loved features among gophers, in particular gofmt.

The best part is that all those command-line utilities are also available for import. I've used them to create a license checker by walking the dep tree in code.
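
A stripped-down sketch of that kind of dep-tree walk with golang.org/x/tools/go/packages (the actual license lookup left out):

    package main

    import (
        "fmt"

        "golang.org/x/tools/go/packages"
    )

    func main() {
        cfg := &packages.Config{
            Mode: packages.NeedName | packages.NeedImports | packages.NeedDeps | packages.NeedModule,
        }
        pkgs, err := packages.Load(cfg, "./...")
        if err != nil {
            panic(err)
        }
        seen := map[string]bool{}
        // Visit walks the whole import graph, transitive dependencies included.
        packages.Visit(pkgs, nil, func(p *packages.Package) {
            if p.Module == nil || seen[p.Module.Path] {
                return
            }
            seen[p.Module.Path] = true
            fmt.Println(p.Module.Path, p.Module.Version) // each module then gets a license check
        })
    }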

There are also a ton of gems in the `./internal` directories, Roger Peppe has extracted them for reuse here: https://github.com/rogpeppe/go-internal

by verdverm

2/18/2026 at 4:40:53 AM

[flagged]

by RosaIsela

2/18/2026 at 4:41:04 AM

[flagged]

by RosaIsela