alt.hn

5/21/2025 at 10:57:08 AM

Watching AI drive Microsoft employees insane

https://old.reddit.com/r/ExperiencedDevs/comments/1krttqo/my_new_hobby_watching_ai_slowly_drive_microsoft/

by laiysb

5/21/2025 at 11:18:44 AM

Interesting that every comment has "Help improve Copilot by leaving feedback using the or buttons" suffix, yet none of the comments received any feedback, either positive or negative.

> This seems like it's fixing the symptom rather than the underlying issue?

This is also my experience when you haven't setup a proper system prompt to address this for everything an LLM does. Funniest PRs are the ones that "resolves" test failures by removing/commenting out the test cases, or change the assertions. Googles and Microsofts models seems more likely to do this than OpenAIs and Anthropics models, I wonder if there is some difference in their internal processes that are leaking through here?

The same PR as the quote above continues with 3 more messages before the human seemingly gives up:

> please take a look

> Your new tests aren't being run because the new file wasn't added to the csproj

> Your added tests are failing.

I can't imagine how the people who have to deal with this are feeling. It's like you have a junior developer except they don't even read what you're telling them, and have 0 agency to understand what they're actually doing.

Another PR: https://github.com/dotnet/runtime/pull/115732/files

How are people reviewing that? 90% of the page height is taken up by "Check failure", can hardly see the code/diff at all. And as a cherry on top, the unit test has a comment that say "Test expressions mentioned in the issue". This whole thing would be fucking hilarious if I didn't feel so bad for the humans who are on the other side of this.

by diggan

5/21/2025 at 12:15:23 PM

> I can't imagine how the people who have to deal with this are feeling. It's like you have a junior developer except they don't even read what you're telling them, and have 0 agency to understand what they're actually doing.

That comparison is awful. I work with quite a few Junior developers and they can be competent. Certainly don't make the silly mistakes that LLMs do, don't need nearly as much handholding, and tend to learn pretty quickly so I don't have to keep repeating myself.

LLMs are decent code assistants when used with care, and can do a lot of heavy lifting, they certainly speed me up when I have a clear picture of what I want to do, and they are good to bounce off ideas when I am planning for something. That said, I really don't see how it could meaningfully replace an intern however, much less an actual developer.

by surgical_fire

5/21/2025 at 12:32:47 PM

These GH interactions remind me of one of those offshore software outsourcing firms on Upwork or Freelancer.com that bid $3/hr on every project that gets posted. There's a PM who takes your task and gives it to a "developer" who potentially has never actually written a line of code, but maybe they've built a WordPress site by pointing and clicking in Elementor or something. After dozens of hours billed you will, in fact, get code where the new file wasn't added to the csproj or something like that, and when you point it out, they will bill another 20 hours, and send you a new copy of the project, where the test always fails. It's exactly like this.

Nice to see that Microsoft has automated that, failure will be cheaper now.

by safety1st

5/21/2025 at 12:57:30 PM

This gives me flashbacks to when my big corporate former employer outsourced a bunch of work offshore.

An outsourced contractor was tasked with a very simple job as their first task - update a single dependency, which required just a bump of the version and no code changes - after three days of them seemingly struggling to even understand what they were asked to do, inability to clone the repo, failure to install the necessary tooling on their machine, they ended up getting fired from the project. Complete waste of money, and the time of those of us having to delegate and review this work.

by dkdbejwi383

5/21/2025 at 3:24:55 PM

Makes me wonder if the pattern will continue to follow, and we start to find certain agents—maybe due to config, maybe due to the training codebase and the codebase they're pointed at—that will become the single one out of the group we can rely on.

Give instructions, get good code back. That's the dream, though I think the pieces that need to fall into place for particular cases will prevent reaching that top quality bar in the general case.

by 98codes

5/22/2025 at 12:26:26 AM

Yeah, this is what we used to call "hiring." People who think it can ever come with guarantees make incompetent and tiresome clients.

I can't wait for the first AI agent programmer to realize this and start turning down jobs working for garbage people...or exploiting them at scale for pennies each, in a labor version of the "salami slicing" scheme. I don't mean humans using AI to do this, which of course has been at scale for years. I mean the first agent to discover a job prioritization heuristic on its own which leads to the same result.

by throwanem

5/21/2025 at 1:02:19 PM

> These GH interactions remind me of one of those offshore software outsourcing firms on Upwork or Freelancer.com that bid $3/hr on every project that gets posted

Those have long been the folks I’ve seen at the biggest risk of being replaced by AI. Tasks that didn’t rely on human interaction or much training, just brute force which can be done from anywhere.

And for them, that $3/hr was really good money.

by AbstractH24

5/21/2025 at 12:59:21 PM

Actually the AI might still be more expensive at this point. But give it a few years I'm sure they will get the costs down.

by voxic11

5/21/2025 at 1:24:51 PM

>>These GH interactions remind me of one of those offshore software outsourcing firms on Upwork or Freelancer.com that bid $3/hr on every project that gets posted.

This level of smugness is why outsourcing still continues to exist. The kind of things you talk about were rare. And were mostly exaggerated to create anti-outsourcing narrative. None of that led to outsourcing actually going away simply because people are actually getting good work done.

Bad quality things are cheap != All cheap things are bad.

Same will work with AI too, while people continue to crap on AI, things will only improve, people will be more productive with AI, get more and bigger things done for cheaper and better. This is just inevitable given how things are going now.

>>There's a PM who takes your task and gives it to a "developer" who potentially has never actually written a line of code, but maybe they've built a WordPress site by pointing and clicking in Elementor or something.

In the peak of outsourcing wave. Both the call center people and IT services people had internal training and graduation standards that were quite brutal and mad attrition rates.

Exams often went along the lines of having to write whole ass projects without internet help in hours. Theory exams that had like -2 marks on getting things wrong. Dozens of exams, projects, coding exams, on-floor internships, project interviews.

>>After dozens of hours billed you will, in fact, get code where the new file wasn't added to the csproj or something like that, and when you point it out, they will bill another 20 hours, and send you a new copy of the project, where the test always fails. It's exactly like this.

Most IT services billing had pivoted away from hourly billing, to fixed time and material in the 2000s itself.

>>It's exactly like this.

Very much like outsourcing. AI is here to stay man. Deal with it. Its not going anywhere. For like $20 a month, companies will have same capability as a full time junior dev.

This is NOT going away. Its here to stay. And will only get better with time.

by kamaal

5/21/2025 at 5:45:49 PM

> This level of smugness is why outsourcing still continues to exist. The kind of things you talk about were rare. And were mostly exaggerated to create anti-outsourcing narrative. None of that led to outsourcing actually going away simply because people are actually getting good work done

I used upwork (when it was elance) quite a lot in a startup I was running at the time, so I have direct experience of this and its _not_ a lie or "mostly exaggerated", it was a very real effect.

The trick was always to weed out these types by posting a very limited job for a cheap amount and accepting around five or more bids from broad prices in order to review the developers. Whoever is actually competent then gets the work you actually wanted done in the first place. I found plenty of competant devs at competitive prices this way but some of the submissions I got from the others were laughable. But you just accept the work, pay them their small fee, and never speak to them again.

by Quarrelsome

5/21/2025 at 1:31:49 PM

There's no reason why an outsourcing firm would charge less for work of equal quality. If a company outsourced to save money, they'd get one of the shops that didn't get the job done.

by whatshisface

5/21/2025 at 1:39:12 PM

>>There's no reason why an outsourcing firm would charge less for work of equal quality.

Most of this works because of price arbitrage. And continues to work that way, not just with outsourcing but with manufacturing too.

Remember those days, when people were going around telling Chinese products where crap? That didn't really work and more things only got made in China.

This is all so similar to early days of Google search, its just that cost of a search was low enough that finding things got easier and ubiquitous. That same is unfolding with AI now. People have a hard time believing a big part of their thinking can be outsourced to something that costs $20/month.

How can something as good as me be cheaper than me? You are asking the wrong question. For centuries now, every decade a machine(s) has arrived that can do a thing cheaper than what the human was doing at the time. Its not exactly impossible. You are only living in denial by asking this question, this has been how it has worked the day since humans found way of mimicking human work through machines. We didn't get here in a day.

by kamaal

5/21/2025 at 1:44:16 PM

It’s not 20, it’s 200+. And that will only get more expensive.

by dttze

5/21/2025 at 1:50:45 PM

Again I don't know what people mean when they say it will get more expensive. This is a wrong way of looking at the issue.

Pretty sure cars are more expensive than horse carriage, or that iPhones are/were more expensive than button phones. You can cite so many such examples. Like photocopying machines, or cameras, or wrist watches, or even things like radio, television etc.

More importantly, sometimes how you do things change. And that changes how you go about your life in a very fundamental way.

That is what internet was about when it first came out, thats what internet search, online maps, or search etc etc were.

AI will change how you go about living your life, in a very fundamental way.

by kamaal

5/21/2025 at 3:30:48 PM

> Pretty sure cars are more expensive than horse carriage

Basic car ownership can be quite a bit cheaper than a horse + carriage.

The horse will probably eat $10-20/day in food. $600/mo in just food costs. Not including vet bills and what not.

A decent and cheap horse will probably cost you $3k up front. Add in several thousand dollars more for the carriage.

A horse requires practically daily maintenance. A carriage will still require some maintenance.

A horse requires a good bit more land, plus the space to store the carriage. Plus, all the extra time and work mounting and unmounting your horse whenever you need to go.

A horse and carriage isn't really cheaper than a cheap car and way less functional.

by vel0city

5/21/2025 at 5:54:33 PM

Theres a 3 point way to say this. Usually technology: * More efficient * Higher Quality * Less effort

Most successful technologies provide multiple of these benefits. What is terrible, and the direction we are going right now, is that these new systems (or offshoring like we are talking about here) seem/are "Less Effort" but do not hit the other two axioms. This is a very dangerous place to be.

People would rather be lazy than roll their sleeves up and focus, especially in our attention diverting world.

by 0x500x79

5/21/2025 at 2:12:35 PM

This isn’t like those things. You’re comparing physical goods to a token generator.

LLMs are being made into another rental extraction system and should be viewed as such.

by dttze

5/21/2025 at 2:34:22 PM

The worry, that is borne out by the pricing of Uber, isn't that LLMs are more expensive than the generation before, but that it's a VC play. Get into market, undercut your competitors until they go bust, then raiser prices. Ubers used to be $1, which was obviously totally unsustainable. Now Uber's only competing platform is Lyft, and Uber is making money as of their latest quarter. Ubers are not at least $10 if not $50 $100. ChatGPT's $20/month looks like $1 Ubers to some. Only insiders know how much it actually costs OpenAI to support ChatGPT users. I will note, however, that GitHub free private repos are supported by corporations paying for their own private GitHub, so it's unclear that ChatGPT's $20/month ever has to be raised with enough $200 or $2,000 or $20,000/month users.

by fragmede

5/21/2025 at 12:21:41 PM

I think that was the point of the comparison..

It's not like a regular junior developer, it's much worse.

by sbarre

5/21/2025 at 3:26:02 PM

And yet it got the job and lots of would be juniors didn’t, and it seems to be costing the company more in compute and senior dev handholding. Nice work silicon valley.

by spacemadness

5/21/2025 at 12:37:26 PM

> That said, I really don't see how it could meaningfully replace an intern however

And even if it could, how do you get senior devs without junior devs? ^^

by preisschild

5/21/2025 at 5:34:28 PM

What is making it difficult for Junior devs to be hired is not AI. That is a diversion.

The raise in interest rates a couple of years ago triggered many layoffs in the industry. When that happens salaries are squeezed. Experienced people work for less, and juniors have trouble finding job because they are now competing against people with plenty of experience.

by surgical_fire

5/22/2025 at 4:17:46 AM

Simple, there are always people who are intentionally using the hard way. There is a community programming old 16-bit machines for example, which are much harder than modern tools. Or someone learning assembly language "just for fun".

Some of those (or similar) people will actually learn new stuff and become senior devs. Yes, there will be much fewer of them, so they'll command a higher salary, and they will deliver amazing stuff. The rest, who spend their entire carrier being AI handlers, will never raise above junior/mid level.

(Well, either that or people who cannot program by themselves will get promoted anyway, the software will get more bugs and less features, and things will be generally worse for both consumers and programmers... but I prefer not to think about this option)

by theamk

5/21/2025 at 12:49:42 PM

Sounds like a next quarter problem (I wish it was /s).

by lazide

5/21/2025 at 2:47:34 PM

Did you miss the "except" in his sentence? He was making the point this is worse than junior devs for all reasons listed.

by PKop

5/21/2025 at 5:29:22 PM

I was agreeing with him, by saying that the comparison is awful.

Not sure how it can be read otherwise.

by surgical_fire

5/21/2025 at 12:26:34 PM

This field (SE - when I started out back in late 80s) was enjoyable. Now it has become toxic, from the interview process, to imitating "big tech" songs and dances by small fry companies, and now this. Is there any joy left in being a professional software developer?

by yubblegum

5/21/2025 at 12:32:43 PM

Making quite a bit of money brings me a lot of joy compared to other industries

But the actual software part? I'm not sure anymore

by bluefirebrand

5/21/2025 at 12:36:53 PM

> This field (SE - when I started out back in late 80s) was enjoyable. Now it has become toxic

I feel the same way today, but I got started around 2012 professionally. I wonder how much of this is just our fading optimism after seeing how shit really works behind the scenes, and how much the industry itself is responsible for it. I know we're not the only two people feeling this way either, but it seems all of us have different timescales from when it turned from "enjoyable" to "get me out of here".

by diggan

5/21/2025 at 1:11:53 PM

My issue stems from the attitudes of the people we're doing it for. I started out doing it for humanity. To bring the bicycle for the mind to everyone.

Then one day I woke up and realized the ones paying me were also the ones using it to run over or do circles around everyone else not equipped with a bicycle yet; and were colluding to make crippled bicycles that'd never liberate the masses as much as they themselves had been previously liberated; bicycles designed to monitor, or to undermine their owner, or more disgustingly, their "licensee".

So I'm not doing it anymore. I'm not going to continue making deliberately crippled, overly complex, legally encumbered bicycles for the mind, purely intended as subjects for ARR extraction.

by salawat

5/21/2025 at 1:51:59 PM

It's hard to find anything wrong with your conclusions except that you're leaving out the part where they're trying to automate our contributions to devalue our skills. I'm surprised there isn't a movement to halt the use of AI for certain tasks in software development on the same level as the active resistance from doctors against socialized medicine in the US. These expensive toys will inevitably introduce catastrophic level bugs and security vulnerabilities into critical infrastructure software. Right now, most of Microsoft's product offerings, like GitHub and Office, are critical infrastructure software.

by ecocentrik

5/21/2025 at 3:31:33 PM

> I'm surprised there isn't a movement to halt the use of AI for certain tasks in software development on the same level as the active resistance from doctors against socialized medicine in the US.

This is also shocking to me. Especially here on HN! Every tech CEO on earth is salivating over AI coding because they want it to devalue and/or replace their expensive human software developers. Whether or not that will actually happen, that's the purpose of building all of these "agentic" coding tools. And here we are, dumbass software engineers, cheerleading for and building the means of our own destruction! We downplay it with bullshit like "Oh, but AI is just a way to augment our work, it will never really replace us or lower our compensation!" Wild how excited we all are about this.

by ryandrake

5/21/2025 at 3:44:19 PM

I think it's similar to a thread we had here recently about why it's impossible to unionize tech workers. Basically, most tech workers don't like other tech workers (or other people, really) very much, so there's very little camaraderie of the sort you need to get people to team up and take on a shared enemy. Instead, we all think we're smarter than the other guy, so he'll be the one who gets fired while I thrive in the new situation.

by aaronbaugher

5/21/2025 at 4:56:26 PM

I think a lot of software engineers (especially those who post on HN) think of themselves as top-1% Captains Of Industry, who would never benefit from a union. "Unions only help those guys lower on the totem pole than me!" says every software engineer out there, so they disregard it as something that could help them. We all think we are Temporarily Embarrassed John Carmacks.

by ryandrake

5/21/2025 at 5:04:50 PM

That doesn't explain why doctors that see themselves as top earners didn't have a problem banding together. Social organization doesn't require unions in socialist/communist sense. It can also be accomplished through other professional organizations like AMC.

by ecocentrik

5/21/2025 at 6:44:16 PM

> This is also shocking to me. Especially here on HN!

this website is owned and operated by a VC, who build fortunes off exploiting these people

"workers and oppressed peoples of all countries, unite!" is the last thing I'd expect to see here

by blibble

5/21/2025 at 4:20:40 PM

HackerNews is driven by a particular kind of radical libertarian philosophy believing person. You don't come up with the sorts of pump and dump start up ideas that typically come out of Y Combinator without being either sociopathic or delusional in the above way.

Anybody who thinks this place represents the average working or middle class programmer hasn't been paying much attention. They fool a lot of people by being social liberal to go along with their economic liberalism.

by BugheadTorpeda6

5/21/2025 at 4:58:50 PM

HN is obviously not the right forum for the skill value dilution discussion but not seeing deep discussion about responsible LLM usage from developers or major software companies is really troubling. If Microsoft is stupid enough to dogfood their unrefined LLM based tools on critical software in the name of increased earnings and shareholder value, I'm sure the entire enterprise stack is hoping to do the same.

by ecocentrik

5/21/2025 at 5:16:07 PM

Because other professional fields have not been subjected to a long running effort to commoditize software engineers. And further, most other (cognitive) professionals are not subject to 'age shaming' and discounting of experience.

We should not forget that on the other side of this issue are equally smart and motivated people and they too are aware of the power dynamics involved. For example, the phenomena of younger programmers poo pooing experienced engineers was a completely new valuation paradigm pushed by interested parties at some point around the dotcom bubble.

Doctors with n years in the OR will not take shit from some intern that just came out of school. But we were placed in that situation at some point after '00. So the fundamental issue is that there is an (engineered imho) generational divide, and coupled with age discrimination in hiring (again due to interested parties' incentives) has a created a situation where one side is accumiliating generational wealth and power and the other side (us developers) are divided by age and the ones with the most skin in the game are naive youngsters who have no clue and have been taught to hate on "millenials" and "old timers" etc.

by yubblegum

5/23/2025 at 4:20:54 AM

    > Because other professional fields have not been subjected to a long running effort to commoditize software engineers.
In the United States, aren't Nurse Practitioner and Physician Assistant a "direct assault" on medical doctors? I assume these roles were created in a pushback at the expense of medical doctors.

    > And further, most other (cognitive) professionals are not subject to 'age shaming' and discounting of experience.
I am of two minds about this comment. TL;DR: "Yeah, but..." One thing that I have noticed in my career: Most people can pump out much more code and work longer hours when they are young. Then, when they get a bit older and/or start a family (and usually want better work/life balance), they start to play the "experience" card, which rarely translates into higher realised economic productivity. Yes, most young devs write crap code, but they can write a lot of it. If you can find good young devs, they are way cheaper and faster than experience devs. I write that sentence with the controversial view that most businesses don't need amazing/perfect software; they just need "good enough" (which talented juniors can more than provide).

When young people learn that I am a software developer, their eyes light up (thinking that I make huge money working for FAANG). Frequently, they ask if they should also become a software developer. I tell them no, because this industry requires constant self-learning that is very hard to sustain after 40. Then, you become a target for layoffs, and getting re-employed after 40 as a software dev can be very tough.

by throwaway2037

5/21/2025 at 2:53:18 PM

> These expensive toys will inevitably introduce catastrophic level bugs and security vulnerabilities into critical infrastructure software. Right now, most of Microsoft's product offerings, like GitHub and Office, are critical infrastructure software.

So nothing new? Just this/last month, it seems like the multi-select "open/close" button in the GitHub PR UI was just straight up broken. No one seemed to have noticed until I opened a bug report, and it continued being broken for weeks before they finally fixed it. Not the first time I encounter this on Microsoft properties, they seem to constantly push out broken shit, and no one seem to even notice until some sad user (like me) happens to stumble across it.

by diggan

5/21/2025 at 1:55:38 PM

Have you considered contributing to the Free Software Movement?

I am speculating that this "AI Revolution" may lead to some revitalization of the movement as it would allow individual contributors the ability to compete on the same levels as proprietary software providers who previously had to employ legions of developers to create their software.

by SimianSci

5/21/2025 at 2:38:39 PM

Considered but that'll probably only happen once I've alternative sources of income lined up that doesn't shackle my IP contributions off hours to my employers, which means bringing in enough to get a couple hours with an attorney that knows what they are doing. I am not one. I have merely read some books on it.

by salawat

5/21/2025 at 2:42:14 PM

But what's the business model? Why would I pay for support and/or development on an open source project if I can just run it through and LLM?

by fragmede

5/21/2025 at 8:52:48 PM

What are you doing now instead?

by jimbokun

5/22/2025 at 10:24:12 AM

I started coding at a young age, but entered the professional world in 2012, just like you. I feel the same. I just can't come to grips with the fact that the goal is not to write good software anymore, but to get something, anything out the door on which we can then sell by marketing it based on stuff it doesn't do yet (but it will, we promise!) so that we can make more money and fake making something "new" again (putting a textbox and a button, and hooking it up to an LLM api). Software is nowadays assumed to not work properly. And we're not allowed to fix it anymore!

by vrighter

5/21/2025 at 2:25:54 PM

It happens in waves. For a period, there was an oversupply of cs engineers, and now, the supply will shrink. On top of this, the BS put out by AI code will require experienced engineers to fix.

So, for experienced engineers, I see a great future fixing the shit show that is AI-code.

by bwfan123

5/23/2025 at 4:24:41 AM

Each time that I arrive at a new job, I take some time to poke around at the various software projects. If the state of the union is awful, I always think: "Great: No where to go but up." If the state of the union is excellent, I think: "Uh oh. I will probably drag down the average here and make it a little bit worse, because I am an average software dev."

by throwaway2037

5/22/2025 at 7:23:44 AM

So many little scripts are spawned and they are all shit for production. I stopped reviewing them, pray to the omnissiah now and spray incense into our server room to placate the machine gods.

Because that shit makes you insane as well.

by raxxorraxor

5/21/2025 at 1:47:47 PM

No, there is absolutely no joy left.

by iamleppert

5/21/2025 at 1:43:25 PM

I've been looking at getting a CDL and becoming a city bus driver, or maybe a USPS driver or deliveryman or clerk or something.

by coldpie

5/21/2025 at 2:30:08 PM

I hear you. Same boat just can't figure out the life jacket yet. (You do fine wood work, why not that? I am considering finding entry level work in architecture myself - kicking myself for giving that up for software now. Did not see this shit show coming.)

by yubblegum

5/21/2025 at 2:44:34 PM

> You do fine wood work, why not that?

Thank you. It's something I'm actively pursuing, I'm hoping to finish some chairs this spring and see if any local shops are interested in stocking them. But I'm skeptical I could find enough business to make it work full-time, pay for my family's health insurance, and so on. We'll see.

by coldpie

5/21/2025 at 12:41:28 PM

[flagged]

by sweman

5/21/2025 at 1:05:05 PM

A very very small percentage of professional software developers get that.

by camdenreslink

5/21/2025 at 3:31:58 PM

Sounds like the type of person OP was complaining about. Usually executives or very lucky ones who started at Nvidia 6-7 years ago, etc.

by spacemadness

5/21/2025 at 12:32:11 PM

At least we can tell the junior developers to not submit a pull-request before they have the tests running locally.

At what point does the human developers just give up and close the PRs as "AI garbage". Keep the ones that works, then just junk the rest. I feel that at some point entertaining the machine becomes unbearable and people just stops doing it or rage close the PRs.

by mrweasel

5/23/2025 at 9:06:05 PM

Better yet, deploy their own LLM to close the PRs.

by microtherion

5/21/2025 at 12:41:33 PM

When their performance reviews stop depending upon them not doing that.

Microsoft's stock price is dependent on them proving that this is a success.

by pydry

5/21/2025 at 1:59:21 PM

> Microsoft's stock price is dependent on them proving that this is a success.

Perhaps this explains the recent firings that affected faster CPython and other projects. While they throw money at AI but sucess still doesn't materialize, they need to make the books look good for yet another quarter through the old-school reliable method of laying off people left and right.

by Qem

5/21/2025 at 1:07:24 PM

What happens when they can't prove that and development efficiency starts falling, because developers spend 50% of their time battling copilot?

by mrweasel

5/21/2025 at 1:39:04 PM

they'll just add more and more half-baked features

it's not as if Microsoft's share price has ever reflected the quality of their products

by blibble

5/22/2025 at 2:22:57 AM

I think it did, then they built up a most that made it very hard to turn momentum the other direction. It's turned now but it happened very slowly. Who knows if it ever falls of a cliff, all I know is that moats are only broken when momentum is going in the wrong direction and so they are certainly more vulnerable now than they would otherwise have been if they hadn't pissed so many people off about their products.

At one point, their desktop user experience was actually pretty good. And that was all their products back then. They definitely didn't get to where they are now by selling products that were bad. You could make the argument that some of them were bad but they were cheap, but if price is a big aspect of what makes a product good in the eyes of the consumer at the time and nobody else is competing on price, then that isn't "bad" in the sense I'm using the word.

I don't think I'd have called them out for always making terrible products all the way through till about Windows 7. I had no major complaints about that release, cloud was in its infancy, no pushing 365 etc. After that, quality started to go downhill. To the point that I'd argue with a straight face that most major community supported Linux DEs provide an objectively better and more stable user experience for both technical and non technical users.

by BugheadTorpeda6

5/21/2025 at 3:17:18 PM

Remember what happened to the markets when deepseek came out?

by pydry

5/21/2025 at 1:56:13 PM

All that's required is enough mental gymnastics for someone to feel like they can call it a success. At no point is it actually required to be one.

by latentsea

5/21/2025 at 2:35:12 PM

You're probably correct: "Our developers a very happy with CoPilot. They now spend 50% of their time interacting with our AI offerings, either via VSCode, Github or Clippy."

No need to specify why they are interact with it, all engagement is good engagement.

by mrweasel

5/21/2025 at 3:47:52 PM

By this measurement, a slower compiler is better than a faster one, because developers are using it for more of their time. Totally bonkers, Microsoft!

by ryandrake

5/22/2025 at 12:30:56 AM

When your identity is tied to that being a success, you will find a way to make it so, because it feels much worse to have your identity challenged at a fundamental level than it does to have people grumpy with you for acting in a way that allows you to preserve your identity.

Most corporate BS comes down to this.

by latentsea

5/23/2025 at 4:29:43 AM

    > rage close the PRs
I am shaking with laughter reading this phrase. You got me good here. It is the perfect repurpose of "rage quit" for the AI slop era. I hope that we see some MSFT employees go insane from responding to so many shitty PRs from LLMs.

One of my all time "rage quit" stories is Azer Koçulu of npm left-pad incident infamy. That guy is my Internet hero -- "fight the power".

by throwaway2037

5/21/2025 at 2:52:54 PM

> Interesting that every comment has "Help improve Copilot by leaving feedback using the or buttons" suffix, yet none of the comments received any feedback, either positive or negative.

The feedback buttons open a feedback form modal, they don’t reflect the number of feedback given like the emoji button. If you leave feedback, it will reflect your thumbs up/down (hiding the other button), it doesn’t say anything about whether anyone else has left feedback (I’ve tried it on my own repos).

by throwup238

5/21/2025 at 5:54:27 PM

This whole thread from yesterday take a whole different meaning: https://news.ycombinator.com/item?id=44031432

Comment in the GitHub discussion:

"...You and I and every programmer who hasn't been living under a rock knows that AI isn't ready to be adopted at this scale yet, on the premier; 100M-user code-hosting platform. It doesn't make any sense except in brain-washed corporate-talk like "we are testing today what it can do tomorrow".

I'm not saying that this couldn't be an adequate change some day, perhaps even in a few years but we all know this isn't it today. It's 100% financial-driven hype with a pinch of we're too big to fail mentality..."

by belter

5/22/2025 at 6:44:37 AM

"Big data" -> "Cloud" -> "LLM-as-A(G)I"

It's all just recycled rent seeking corporate hype for enterprise compute.

The moment I had decided to learn Kubernetes years ago, got a book and saw microservices compared to 'object-oriented' programming I realized that. The 'big ball of mud' paper and the 'worse is better' rant frame it all pretty well in my view. Prioritize velocity, get slop in production, cope with the accidental complexity, rinse repeat. Eventually you get to a point where GPU farms seem like a reasonable way to auto-complete code.

When you find yourself in a hole, stop digging. Any bigger excavator you send down there will only get buried when the mud crashes down.

by namaria

5/21/2025 at 11:48:57 AM

> improve Copilot by leaving feedback using the or buttons" suffix, yet none of the comments received any feedback, either positive or negative

Why do they even need it? Success is code getting merged 1st shot, failure gets worse the more requests for changes the agent gets. Asking for manual feedback seems like a waste of time. Measure cycle time and rate of approvals and change failure rate like you would for any developer.

by vasco

5/21/2025 at 2:17:02 PM

It's like you have a junior developer except they don't even read what you're telling them, and have 0 agency to understand what they're actually doing.

Anyone who has dealt with Microsoft support knows this feeling well. Even talking to the higher level customer success folks feels like talking to a brick wall. After dozens of support cases, I can count on zero hands the number of issues that were closed satisfactorily.

I appreciate Microsoft eating their dogfood here, but please don't make me eat it too! If anyone from MS is reading this, please release finished products that you are prepared to support!

by dfxm12

5/21/2025 at 11:34:06 AM

> How are people reviewing that? 90% of the page height is taken up by "Check failure",

Typically, you wouldn't bother manually reviewing something until the automated checks have passed.

by xnorswap

5/21/2025 at 11:39:02 AM

I dunno, when I review code, I don't review what's automatically checked anyways, but thinking about the change/diff in a broader context, and whatever isn't automatically checked. And the earlier you can steer people in the right direction, the better. But maybe this isn't the typical workflow.

by diggan

5/21/2025 at 12:10:19 PM

It's a waste of time tbh; fixing the checks may require the author to rethink or rewrite their entire solution, which means your review no longer applies.

Let them finish a pull request before spending time reviewing it. That said, a merge request needs to have an issue written before it's picked up, so that the author does not spend time on a solution before the problem is understood. That's idealism though.

by Cthulhu_

5/21/2025 at 11:46:19 AM

The reality is more nuanced, there are situations where you'd want to glance over it anyway, such as looking for an opportunity to coach a junior dev.

I'd rather hop in and get them on the right path rather than letting them struggle alone, particularly if they're struggling.

If it's another senior developer though I'd happily leave them to it to get the unit tests all passing before I take a proper look at their work.

But as a general principle, please at least get a PR through formatting checks before assigning it to a person.

by xnorswap

5/21/2025 at 1:49:29 PM

>> And the earlier you can steer people in the right direction, the better.

The earliest feedback you can get comes from the compiler. If it won't build successfully don't submit the PR.

by phkahler

5/21/2025 at 11:41:37 AM

"I wonder if there is some difference in their internal processes that are leaking through here?"

Maybe, but likely it is reality and their true company culture leaking through. Eventually some higher eq execs might come to the very late realization that they cant actually lead or build a worthwhile and productive company culture and all that remains is an insane reflection of that.

by spacecadet

5/21/2025 at 11:31:53 AM

> How are people reviewing that?

I agree that not auto-collapsing repeated annotations is an annoying bug in the github interface.

But just pointing out that annotations can be hidden in the ... menu to the right (which I just learned).

by worldsayshi

5/21/2025 at 12:16:44 PM

I'm not entirely sure why they're running linters on every available platform to begin with, it seems like a massive waste of compute to me when surely the output will be identical because it's analysing source code, not behaviour.

by jon-wood

5/21/2025 at 12:10:38 PM

or press “a”

by codyvoda

5/21/2025 at 12:26:27 PM

> @copilot please read the following Contributor License Agreement(CLA). If you agree with the CLA, please reply with the following information.

by ta1243

5/21/2025 at 12:51:19 PM

Hot take : the whole LLM craze is fed by a delusion. LLM are good at mimicking human language, capturing some semantics on the way. With a large enough training set, the amount of semantic captured covers a large fraction of what the average human knows. This gives the illusion of intelligence, and the humans extrapolates on LLM capabilities, like actual coding. Because large amounts of code from textbooks and what not is on the training set, the illusion is convincing for people with shallow coding abilities.

And then, while the tech is not mature, running on delusion and sunken costs, it's actually used for production stuffs. Butlerian Jihad when

by marmakoide

5/21/2025 at 8:06:11 PM

I think the bubble is already a bit past peak.

My sophisticated sentiment analysis (talking to co-workers other professional programmers and IT workers, HN and Reddit comments) seems to indicate a shift--there's a lot less storybook "Ay Eye is gonna take over the world" talk and a lot more distrust and even disdain than you'd see even 6 months ago.

Moves like this will not go over well.

by nyarlathotep_

5/22/2025 at 12:28:21 AM

AI proponents would say you are witnessing third stage of 'First they ignore you, then they laugh at you, then they fight you, then you win'

by rasz

5/21/2025 at 2:25:10 PM

> Butlerian Jihad when

I estimate two more years for the bubble to pop.

by otabdeveloper4

5/22/2025 at 10:33:29 PM

> This whole thing would be fucking hilarious if I didn't feel so bad for the humans who are on the other side of this.

Which will soon be anyone who directly or indirectly relies on Microsoft technologies. Some of these PRs, including at least one that I saw reworked certificate validation logic with not much more than a perfunctory “LGTM”, have been merged into main.

Coincidentally, I wonder if issues orthogonal to this slop is why I’ve been getting so many HTTP 500 errors when using GitHub lately.

by TheNewsIsHere

5/21/2025 at 2:40:08 PM

It’s probably the junior devs that get to review these PRs. That and interns.

by cruffle_duffle

5/21/2025 at 11:53:15 AM

Seeing Microsoft employees argue with an LLM for hours instead of actually just fixing the problem must be a very encouraging sight for businesses that have built their products on top of .NET.

by bramhaag

5/21/2025 at 1:08:04 PM

I remember before mass LLM adoption, reading an issue on GitHub where an increasingly frustrated user was failing to properly describe a blocking issue, and the increasingly frustrated maintainer was failing to get them to stick to the issue template.

Now you don’t even need the frustrated end user!

by mikrl

5/21/2025 at 1:38:43 PM

one day both sides will be AI so we can all relax and enjoy our mojitos

by shultays

5/21/2025 at 3:04:45 PM

Well, people have been putting M-x doctor to talk with M-x eliza for decades.

by marcosdumay

5/21/2025 at 3:21:11 PM

when that day arrives we'll won't be relaxing, we will be put through a wood chipper

by some_random

5/21/2025 at 11:11:54 PM

...to turn us into soylent-flavored mojitos?

by disqard

5/21/2025 at 11:58:47 AM

I sometimes feel like that is the right outcome for bad management and bad instructions. Only this time they can’t blame the junior engineer and are left to only blame themselves.

by nashashmi

5/21/2025 at 9:40:45 PM

I think we all know they won’t.

I am genuinely curious though to see the strategies they employ to absolve themselves of guilt and foolishness.

Is there precedent for the entire exec and management class embracing a new trend to this kind of extent, then it blowing up in their faces?

by snackernews

5/21/2025 at 12:31:21 PM

They'll probably blame openai/the AI instead.

by qoez

5/21/2025 at 12:45:35 PM

AI has reproducible outcomes. If someone else can make it work, then they should too.

by nashashmi

5/21/2025 at 3:11:16 PM

This is just false. Do these models even have reproducible outcomes with a temperature of 0? Aren't they also severely restricted with a temp of 0?

by daveguy

5/21/2025 at 4:42:23 PM

Some randomization is intentionally introduced. We are not accounting for that. Otherwise, it should be able to give you the same information.

by nashashmi

5/21/2025 at 1:51:08 PM

Especially painful when one of said employee is Stephen Toub, who is famous for his .net performance blog posts.

by gwervc

5/21/2025 at 2:16:56 PM

I was thinking that too. He's a great programmer, and at this point I can't imagine he's having fun 'prompting' an LLM to write correct code.

by svaha1728

5/21/2025 at 3:13:18 PM

I hope he writes a personal essay about the experience after he leaves Microsoft. Not that he will leave anytime soon, but the first hand accounts of how they are talking about these systems internally are going to be even more entertaining than the wtf PRs.

by daveguy

5/21/2025 at 4:48:56 PM

This comment thread is incredible. It's like fanfiction of a real person. Of course this engineer I respect shares my opinion. Not only that, he's obviously going to quit because of this. And then he'll write a blog post I'll get to enjoy.

Anyway, this is his public, stated opinion on this: https://github.com/dotnet/runtime/pull/115762#issuecomment-2...

by square_usual

5/22/2025 at 11:38:11 AM

Of course that is what he says publicly. Can you imagine him saying anything different on this already very heated PR comment section? Those would be quoted in a headline in a news article the next second.

by n144q

5/21/2025 at 7:57:54 PM

If he reiterates that comment to me after two beers in a relaxing bar I might believe him.

by svaha1728

5/21/2025 at 8:37:01 PM

Hahaha. 1000% this. Also, first example from the linked video: a "not vibe coded, promise" example of an ascii space invaders clone... Of all the examples of "has a bunch of training code data since the 80s", this is the best representation of exactly what LLM coding is capable of "in 8 minutes".

by daveguy

5/21/2025 at 4:16:05 PM

You don’t think he’s having fun getting laid a ton for playing with computers?

by mock-possum

5/21/2025 at 5:14:06 PM

I don’t imagine getting laid with computers are particularly enjoyable for humans.

by xeonmc

5/22/2025 at 12:33:36 AM

You havent met Gwendolyn bot.

by rasz

5/21/2025 at 12:41:55 PM

You don't want them to experiment with new tools? The main difference now is that the experiment is public.

by svick

5/21/2025 at 1:45:36 PM

It's pretty obviously a failed experiment. Why keep repeating it? Try again in another 3 months.

The answer is probably that the Copilot team is using the rest of the engineering organization as testers. Great for the Copilot team, frustrating for everyone else.

by stickfigure

5/21/2025 at 8:01:31 PM

> It's pretty obviously a failed experiment

For it to be "failed" it would have to also be finished/completed. They are likely continuously making tweaks, this thing was just released.

by raydev

5/22/2025 at 10:35:51 AM

"This thing has just released"

"It would have to be finished/completed"

Do you honestly not see a problem with those two statements in such close proximity? Is it finished or is it released? The former is supposed to be a prerequisite for the latter.

by vrighter

5/22/2025 at 3:38:28 PM

It's unfinished and it's in the public's hands. I don't see these as opposing ideas.

We can debate whether they should have called this an experiment or an alpha or beta or whatever, but that's a different discussion.

The fact that people are using it currently does not make it a failure. When MS shuts it down, or Copilot is wildly unprofitable for multiple quarters, team behind it quits, etc, etc, then we can determine whether it has failed or not.

But if they continue to have paying customers and users are finding some benefits over not having Copilot, and MS continues to improve it (doesn't let it rot), then you'd have to provide some evidence of its failure that isn't "look at Copilot being stupid sometimes". Especially when stupidity is expected of it.

by raydev

5/22/2025 at 1:42:22 PM

What bliss it must be, to never have encountered Microsoft software before.

by tremon

5/22/2025 at 1:56:08 PM

oh, how I wish you were right... I had to look deep inside some microsoft software, and I think it actually shortened my lifespan

by vrighter

5/21/2025 at 12:54:21 PM

I wouldn't necessarily call that just an experiment if the same requests aren't being fixed without copilot and the ai changes could get merged.

I would say the copilot system isn't really there yet for these kinds of changes, you don't have to run experiments on a language framework to figure that out.

by gmm1990

5/21/2025 at 2:49:54 PM

Nah I'd prefer they focus on writing code themselves to improve .NET not babysitting a spam-machine

by PKop

5/21/2025 at 1:56:38 PM

By all means. Just not on one of the most popular software development frameworks in the world. Maybe that can wait until after the concept is proven.

by flmontpetit

5/21/2025 at 4:12:05 PM

Yeah, seems to me that breaking .NET with this garbage will be, uh, extremely bad

by mystified5016

5/21/2025 at 6:20:38 PM

Microsoft closed their recently acquired advertisement buy-side platform Xander Invest because they are replacing it with an AI-only platform.

They only gave their customers 9 months to migrate away.

I'm expecting that Microsoft did this to artificially pump up their AI usage numbers for next year by forcibly removing non-AI alternatives.

This only one example in AdTech but I expect other industries to be hit as well.

by LunaSea

5/21/2025 at 2:39:46 PM

The point of this exercise for Microsoft isn't to produce usable code right now, but to use and improve copilot.

by empath75

5/22/2025 at 2:04:20 AM

They can do that in private repos just as easily, this a pr stunt that backfired very badly.

by saati

5/21/2025 at 9:22:47 PM

Yeah it's quite disheartening.

I recently spent a couple of months studying C# and .NET and working on my first project with it.

.NET, Blazor, etc are not known for a fast release schedule... but if things are going to become even slower with this AI crap I wonder if I made the right call.

I'm quite happy how things are today for making web APIs but I wish Blazor and other frameworks were in a much better shape.

by pier25

5/21/2025 at 9:56:32 PM

.NET has major releases every year. How is that slow for a programming platform/framework?

by Kwpolska

5/21/2025 at 11:06:36 PM

Yes but the improvements are very gradual. It takes years for something to reach maturity. At least for the web stuff which is what I know of.

Eg:

Minimal APIs were released in 2021 but it won't be until .NET 10 that they will have validation. Amazing that validation was not a day one priority for an API. I'm not certain if even in .NET 10 Minimal APIs will have full parity of features with MVC.

Minification of static assets didn't come until .NET 9 released in 2024. This was already commonplace in the JS world a decade earlier. It could have been a quick win so long ago for .NET web apps.

Blazor was released in 2018. 7 years later they still haven't fixed plenty of circuit reconnection issues. They are working on it but progress is also quite slow. Supposedly with .NET 10 session state will be able to be persist etc but it remains to be seen.

OpenAPI is also hit and miss. Spec v3.1 released in 2021 is still not supported. Supposedly it will come with .NET 10.

Not from .NET but they have a project called Kiota for generating clients from OpenAPI specs. It's unusable because of this huge issue that makes all properties in a type nullable. It's been open since 2023. [1]

Etc.

[1] https://github.com/microsoft/kiota/issues/3911

by pier25

5/21/2025 at 11:01:12 PM

Go has a six month release cycle. Rust releases a new stable every six weeks.

by cratermoon

5/21/2025 at 8:16:49 PM

Given that Microsoft always decided to Will Not Fix issues because they went "oh this thing is throwing errors? Just ignore them". THey're numbskulls that are high on their own farts just as much as their managers. They deserve everything that's happening to them.

by AllegedAlec

5/21/2025 at 2:29:05 PM

That is essentially what I tried to say in my comment there but don't think they wanted to hear it.

by lloydatkinson

5/21/2025 at 12:39:34 PM

That is why they just fired 7k people so they don’t argue with LLM but let it do the work /s

by ozim

5/21/2025 at 10:30:45 PM

I recently, meaning hours ago, had this delightful experience watching the Eric of Google, which everybody love, including he's extra curricular girl friend and wife, talking about AI. He seemed to believe AI is under-hyped after tried it out himself: https://www.youtube.com/watch?v=id4YRO7G0wE

He also said in the video:

> I brought a rocket company because it was like interesting. And it's an area that I'm not an expert in and I wanted to be a expert. So I'm using Deep Research (TM). And these systems are spending 10 minutes writing Deep Papers (TM) that's true for most of them. (Them he starts to talk about computation and "it typically speaks English language", very cohesively, then stopped the thread abruptly) (Timestamp 02:09)

Let me quote out the important in what he said: "it's an area that I'm not an expert in".

During my use of AI (yeah, I don't hate AI), I found that the current generative (I call them pattern reconstruction) systems has this great ability to Impress An Idiot. If you have no knowledge in the field, you maybe thinking the generated content is smart, until you've gained some depth enough to make you realize the slops hidden in it.

If you work at the front line, like those guys from Microsoft, of course you know exactly what should be done, but, the company leadership maybe consists of idiots like Eric who got impressed by AI's ability to choose smart sounding words without actually knowing if the words are correct.

I guess maybe one day the generative tech could actually write some code that is correct and optimal, but right now it seems that day is far from now.

by nirui

5/21/2025 at 11:10:18 PM

Thank you for sharing this!

When I use AI, I keep it on a short leash.

Meanwhile, folks like this ("I bought a rocket company") are essentially using it to decide where to plough their stratospheric wealth, so they can grow it even further.

Perhaps they'll lose a cufflink in the eventual crash, but they're so rich, I don't think they'll lose their shirt. Meanwhile, the tech job market is f**ed either way.

by disqard

5/22/2025 at 9:54:41 AM

I started watching it, but had to stop because I was at great risk of becoming physically sick when he led with the "new move" stuff. Ugh.

Kudos to you for having the strength to get through it, and for living to tell the tale!

by -__---____-ZXyw

5/21/2025 at 11:31:08 PM

> it's an area that I'm not an expert in

> idiots like Eric

Now imagine Google working with US military putting Gemini into a fleet of autonomous military drones with machine guns.

by sexy_seedbox

5/22/2025 at 5:02:01 AM

> has this great ability to Impress An Idiot

Literally the killer app of AI.

by xk_id

5/22/2025 at 7:17:58 AM

> During my use of AI (yeah, I don't hate AI), I found that the current generative (I call them pattern reconstruction) systems has this great ability to Impress An Idiot

I would be genuinely positively surprised if that stops to be the case some day. This behavior is by design.

AS you put yourself, these LLM systems are very good at pattern recognition and reconstruction. They have ingested vast majority of the internet to build patterns on. On the internet, the absolutely vast majority of content is pushed out by novices and amateurs: "Hey, look, I have just read a single wikipedia page or attended single lesson, I am not completely dumbfounded by it, so now I will explain it to you".

LLMs have to be peak Dunning-Krugers - by design.

by friendzis

5/21/2025 at 12:28:20 PM

A comment on the first pull request provides some context:

> The stream of PRs is coming from requests from the maintainers of the repo. We're experimenting to understand the limits of what the tools can do today and preparing for what they'll be able to do tomorrow. Anything that gets merged is the responsibility of the maintainers, as is the case for any PR submitted by anyone to this open source and welcoming repo. Nothing gets merged without it meeting all the same quality bars and with us signing up for all the same maintenance requirements.

by kruuuder

5/21/2025 at 12:56:11 PM

The author of that comment, an employee of Microsoft, goes on to say:

> It is my opinion that anyone not at least thinking about benefiting from such tools will be left behind.

The read here is: Microsoft is so abuzz with excitement/panic about AI taking all software engineering jobs that Microsoft employees are jumping on board with Microsoft's AI push out of a fear of "being left behind". That's not the confidence inspiring the statement they intended it to be, it's the opposite, it underscores that this isn't the .net team "experimenting to understand the limits of what the tools" but rather the .net team trying to keep their jobs.

by abxyz

5/21/2025 at 4:10:44 PM

The "left behind" mantra that I've been hearing for a while now is the strange one to me.

Like, I need to start smashing my face into a keyboard for 10000 hours or else I won't be able to use LLM tools effectively.

If LLM is this tool that is more intuitive than normal programming and adds all this productivity, then surely I can just wait for a bunch of others to wear themselves out smashing the faces on a keyboard for 10000 hours and then skim the cream off of the top, no worse for wear.

On the other hand, if using LLMs is a neverending nightmare of chaos and misery that's 10x harder than programming (but with the benefit that I don't actually have to learn something that might accidentally be useful), then yeah I guess I can see why I would need to get in my hours to use it. But maybe I could just not use it.

"Left behind" really only makes sense to me if my KPIs have been linked with LLM flavor aid style participation.

Ultimately, though, physics doesn't care about social conformity and last I checked the machine is running on physics.

by Verdex

5/21/2025 at 4:27:39 PM

There's a third way things might go: on the way to "superpower for everyone", we go through an extended phase where AI is only a superpower in skilled hands. The job market bifurcates around this. People who make strong use of it get first pick of the good jobs. People not making effective use of AI get whatever's left.

Kinda like how word processing used to be an important career skill people put on their resumes. Assuming AI becomes as that commonplace and accessible, will it happen fast enough that devs who want good jobs can afford to just wait that out?

by spiffytech

5/21/2025 at 4:53:02 PM

I'm willing to accept this as a possibility but the case analysis still doesn't make much sense to me.

If LLM usage is easy then I can't be left behind because it's easy. I'll pick it up in a weekend.

If LLM usage is hard AND I can otherwise do the hard things that LLMs are doing then I can't be left behind if I just do the hard things.

Still the only way I can be left behind is if LLM usage is nonsense or the same as just doing it yourself AND the important thing is telling managers that you've been using it for a long time.

Is the superpower bamboozling management with story time?

by Verdex

5/21/2025 at 6:12:17 PM

The obvious case in which you would be "left behind" is the one in which LLM usage is hard, and you cannot otherwise do the hard things that LLMs are doing (or you can do them, but much slower and/or to a lower standard of quality.)

by mquander

5/21/2025 at 8:54:53 PM

Sure. Although all of the hard things that I need to do I have a history of doing fast and to high standards.

Unless we're talking about hard things that I have up til now not been able to do. But do LLMs help with that in general?

This scenario breaks out of the hypothetical and the assertive and into the realm of the testable.

Provide for me the person who can use LLMs in a way that is hard but they are good at in order to do things which are hard but which they are currently bad at.

I will provide a task which is hard.

We can report back the result.

by Verdex

5/21/2025 at 6:19:05 PM

To be fair, word processing is a skill that that a majority of professionals continue to lack.

Law, civil service, academia and those who learnt enough LaTeX and HTML to understand text documents are in the minority.

by djhn

5/21/2025 at 7:36:37 PM

Yeah and now people who can’t even write and never put in the effort to learn it are flooding the zone (my inbox) with useless 10 page memo’s.

by smodo

5/21/2025 at 2:33:49 PM

If you're not using it where it's useful to you, then I still wouldn't say you're getting left behind, but you're making your job harder than it has to be. Anecdotally I've found it useful mostly for writing unit tests and sometimes debugging (can be as effective as a rubber duck).

It's like the 2025 version not not using an IDE.

It's a powerful tool. You still need to know when to and when not to use it.

by Vicinity9635

5/21/2025 at 3:03:16 PM

> It's like the 2025 version not not using an IDE.

That's right on the mark. It will save you a little bit of work on tasks that aren't the bottleneck on your productivity, and disrupt some random tasks that may or may not be important.

It's makes so little difference that plenty of people in 2025 don't use an IDE, and looking at their performance from the outside one just can't tell.

Except that LLMs have less potential to improve your tasks and more potential to be disruptive.

by marcosdumay

5/21/2025 at 4:46:24 PM

You're right on the money. I've been amongst the most productive developers in every place I've worked at for the past 10 years while not using an IDE. AI is not even close to as revolutionary as it's being sold. Unfortunately, as always, the ones buying this crap are not the ones that actually do the work.

Even for writing tests, you have to proof-read every single line and triple check they didn't write a broken test. It's absolutely exhausting.

by Draiken

5/21/2025 at 8:12:09 PM

I've encountered LLM generated comments that don't even reflect what the code is doing, or, worse, subtly describe the code inaccurately. The most insidious disenchanting code I've ever seen has been exactly of this sort, and it's getting produced by the boatload daily now.

by nyarlathotep_

5/22/2025 at 6:59:15 AM

I really don't understand what is going on. I try to, I read the papers, the threads, I think about it. But I can't figure this out.

How can it be that people expect that pumping more energy into closed systems could do anything else than raise entropy? Because that's what it is. You attach GPU farms to your code base and make them pump code into it? You're pumping energy into a closed system. The result cannot be other than greater entropy.

by namaria

5/22/2025 at 6:36:42 PM

Hum... In theory the closed system includes a database with most of the humanity's written works, and the people that know how the thing works expect it to push some information from the database into the code. (Even though, I will argue that the people that know how the thing works barely use it.)

The reason LLMs fail so often are not related to the fundamental of "garbage in, garbage out".

by marcosdumay

5/21/2025 at 3:57:03 PM

Yea, "using an IDE" is a very good analogy. IDEs are not silver bullets, although they no doubt help some engineers. There are plenty of developers, on the other hand, who are amazingly productive without using IDEs.

by ryandrake

5/21/2025 at 6:01:48 PM

I feel like most people that swear by their AI are also the ones using text editors instead of full IDEs with actually working refactoring, relevant auto complete or never write tests

by javier2

5/21/2025 at 2:44:19 PM

Tests are one of the areas where it performs least well. I can ask an LLM to summarize the functionality of code and be happy with the answer, but the tests it writes are the most facile unit tests, just the null hypothesis tests and the like. "Here's a test that the constructor works." Cool.

by static_void

5/21/2025 at 4:58:37 PM

They are the exact same unit tests I never needed help to write, and the exact same unit tests that I can just blindly keep hitting tab to write with Intellij's NON-AI autocomplete.

by mrguyorama

5/21/2025 at 3:00:24 PM

This is Stephen Toub, who is the lead of many important .NET projects. I don't think he is worried about losing job anytime soon.

I think, we should not read too much into it. He is honestly exploring how much this tool can help him to resolve trivial issues. Maybe he was asked to do so by some of his bosses, but unlikely to fear the tool replacing him in the near future.

by the-lazy-guy

5/21/2025 at 3:15:26 PM

They don’t have any problem firing experienced devs for no reason. Including on the .NET team (most of the .NET Android dev team was laid off recently).

https://www.theregister.com/2025/05/16/microsofts_axe_softwa...

Perhaps they were fired for failing to show enthusiasm for AI?

by n8cpdx

5/21/2025 at 3:54:43 PM

I can definitely believe that companies will start (or have already started) using "Enthusiasm about AI" as justification for a hire/promote/reprimand/fire decision. Adherence to the Church Of AI has become this weird purity test throughout the software industry!

by ryandrake

5/21/2025 at 3:21:46 PM

I love the fact that they seem to be asking it to do simple things because ”AI can do the simple boring things for us so we can focus on the important problems” and then it floods them with so many meaningless mumbo jumbo that they could have probably done the simple thing in a fraction of the time they take to keep correcting it continuously.

by low_tech_love

5/21/2025 at 5:43:33 PM

It is called experimentation. That is how people evaluate new technology. By trying to do small things with it first. And if it doesn't work well - retrying later, once bigger issues are fixed.

by the-lazy-guy

5/22/2025 at 7:01:16 PM

In production?

by low_tech_love

5/21/2025 at 3:26:01 PM

Didn't M$ just fire like 7000 people, many of which were involved in big important M$ projects? The CPython guys, for example.

by sensanaty

5/21/2025 at 4:18:54 PM

Now, consider the game theory of saying "no" when your boss tells you to go play with the LLM in public.

by bob1029

5/21/2025 at 5:42:08 PM

Hot take: CPython is not an important project for Microsoft, and it is not lead by them. The faster CPython project had questionable acheivement on top of that.

Half of Microsoft (especially server-side) still runs on dotnet. And there are no real contributors outside of microsoft. So it is a vital project.

by the-lazy-guy

5/22/2025 at 4:46:16 PM

They also laid off one of the veteran TypeScript developers. TypeScript is definitely an important project for Microsoft, and a lot of code there is written in it.

by int_19h

5/21/2025 at 3:54:54 PM

Anyone not showing open AI enthusiasm at that level will absolutely be fired. Anyone speaking for MS will have to be openly enthusiastic or silent on the topic by now.

by spacemadness

5/21/2025 at 1:15:40 PM

TBF they are dogfooding this (good) but it's just not going well

by hnthrow90348765

5/21/2025 at 11:52:36 PM

"eating our own dogshit"

by davidgerard

5/21/2025 at 2:28:30 PM

> Microsoft employees are jumping on board with Microsoft's AI push out of a fear of "being left behind"

If they weren't experimenting with AI and coding and took a more conservative approach, while other companies like Anthropic was running similar experiments, I'm sure HN would also be critiquing them for not keeping up as a stodgy big corporation.

As long as they are willing to take risks by trying and failing on their own repos, it's fine in my books. Even though I'd never let that stuff touch a professional github repo personally.

by dmix

5/21/2025 at 7:05:26 PM

exactly ignoring new technologies can be a death sentence for a company even one as large as Microsoft. even if this technology doesn't pay off its still a good idea to at least look into potential uses.

by jayGlow

5/22/2025 at 1:40:10 AM

Only in very specific circumstances where there are clear moats to be built (mobile was one of these that Microsoft missed, but that's a PLATFORM in a way no AI product at the moment comes close to). As far as I can tell, there is no evidence of such a thing with the current applications of AI and I am unconvinced that there ever will be. It's just going to ride on top of previous platforms. So you may need some sort of service for customers that are interested, but having the absolute best AI story just isn't something customers are going to care about at the end of the day if it means they would have to say migrate clouds.

At the moment, I'd arguing doing much more than what say Apple is doing would be what is potentially catastrophic. Not doing anything would be minimally risky, and doing just a little bit would be the no risk play. I think Microsoft is making this mistake in a big way and will continue to lose market share over it and burn cash, albeit slowly since they are already giants. The point is, it's a giant that has momentum going in the opposite direction than what they want, and they are incapable of fixing the things causing it to go in that direction because their leadership has become delusional.

by BugheadTorpeda6

5/21/2025 at 1:15:56 PM

i dont think hey are mutually exclusive. jumping on board seems like the smart move if you're worried about losing your career. you also get to confirm your suspicions.

by username135

5/21/2025 at 12:57:48 PM

This is important context given that it would be absurd for the managers to have already drawn a definitive conclusion about the models’ capabilities. An explicit understanding that the purpose of the exercise is to get a better idea of the current strengths and weaknesses of the models in a “real world” context makes this actually very reasonable.

by lcnPylGDnU4H9OF

5/21/2025 at 5:03:09 PM

So why in public, and why in the most ham-fisted way, and why on important infrastructure, and why in such a terrible integration that it can't even verify that things compile before opening a PR!

In my org, we would have had to bypass precommit hooks to do this!

by mrguyorama

5/21/2025 at 11:44:25 AM

Beyond every other absurdity here, well, maybe Microsoft is different, but I would never assign a PR that was _failing CI_ to somebody. That that's happening feels like an admission that the thing doesn't _really_ work at all; if it worked even slightly, it would at least only assign passing PRs, but presumably it's bad enough that if they put in that requirement there would be no PRs.

by rsynnott

5/21/2025 at 12:36:01 PM

I feel like everyone is applying a worse-case narrative to what's going on here..

I see this as a work in progress.. I am almost certain the humans in the loop on these PRs are well aware of what's going on and have their expectations in check, and this isn't just "business as usual" like any other PR or work assignment.

This is a test. You can't improve a system without testing it on real world conditions.

How do we know they're not tweaking the Copilot system prompts and settings behind the scenes while they're doing this work?

Can no one see the possibility that what is happening in those PRs is exactly what all the people involved expected to have happen, and they're just going through the process of seeing what happens when you try to refine and coach the system to either success or failure?

When we adopted AI coding assist tools internally over a year ago we did almost exactly this (not directly in GitHub though).

We asked a bunch of senior engineers to see how far they could get by coaching the AI to write code rather than writing it themselves. We wanted to calibrate our expectations and better understand the limits, strengths and weaknesses of these new tools we wanted to adopt.

In most of those early cases we ended up with worse code than if it had been written by humans, but we learned a ton. We can also clearly see how much better things have gotten over time, since we have that benchmark to look back on.

by sbarre

5/21/2025 at 1:00:22 PM

I think people would be more likely to adopt this view if the overall narrative about AI is that it’s a work in progress and we expect it to get magnitudes better. But the narrative is that AI is already replacing human software engineers.

by rco8786

5/21/2025 at 1:03:48 PM

[flagged]

by codyvoda

5/21/2025 at 1:04:41 PM

That's a weird comment. I do think for myself. I wasn't even talking about my own personal thoughts on the matter. I can just plainly see that the overwhelming narrative in the public zeitgeist is that AI can do jobs that humans can do. And it's not true.

by rco8786

5/21/2025 at 1:13:42 PM

why does every engineer keep talking about it like it’s more than marketing hype? why do you actually accept this is a real narrative real people believe? have you talked to the executives implementing these strategies?

redbull does not give you wings. it’s disconcerting to see the lack of nuance in these discussions around these new tools (and yeah sorry this isn’t really aimed at you, but the zeitgeist, apologies)

by codyvoda

5/21/2025 at 1:26:11 PM

Because this “marketing hype” is affecting the way we do our job.

Some of us are being laid off due to the hype; some are assigned to babysit the AI; and some are simply looked down on by higher ups who are eagerly waiting for a day to lay us all off.

You can convince yourself as much as you want that it’s “just a hype”, but regardless of your beliefs are, it has REAL world consequences.

by skwee357

5/21/2025 at 2:08:17 PM

because we react like this

engineers are testing promising new technology. a mob (of probably half or more bots) is having a [redacted] perpetuating the anti-narrative they huffed themselves up into believing. and now we’re in a meta-[redacted] as if either A) redditors and armchair engineers here have valid opinions on this tech and B) marketers and founders with massive incentives to overpromise are telling a true narrative

why? we don’t have to do it. we could actually look at these topics with nuance and not react like literal bots to everything

(sorry I’m just losing my faith in humanity and taking it out in this thread)

by codyvoda

5/21/2025 at 3:11:35 PM

> why does every engineer keep talking about it like it’s more than marketing hype?

because it is more than marketing hype. real people are taking real action based on this narrative.

> why do you actually accept this is a real narrative real people believe?

largely because I witness real people believing this narrative with my own eyes on a daily basis.

by rco8786

5/21/2025 at 3:08:43 PM

> why do you actually accept this is a real narrative real people believe?

Because we're literally seeing people being laid off with narratives about being replaced with AI (At a whole slew of companies). Because we're seeing company policies around hiring being changed to require hiring managers to provide exhaustive justifications why the work couldn't be handled by an AI (at e.g. Shopify, Salesforce and so on)

> have you talked to the executives implementing these strategies?

I have had a few conversations, yes. Have you? They're weirdly "true believers" that are buying the marketing hype hook line and sinker. They're doing small coding exercises themselves in these tools, seeing that they as an executive can manage to get valid code for the small exercise out the other side of it, and assuming that that means it can replace head count. Either deliberately or naively failing to understand that there is a world of difference between leet code style exercises, or quick small changes to code bases, and actual software development.

The weirdest conversation recently, which thankfully I got to just be on the periphery of, involved an engineering org that decided to try to replace the post-incident process with one entirely written by LLMs. It would take timelines from a ticket, and a small prompt to write up the entire post-incident report, tasks etc.

The whole project showed a gross misunderstanding of the point of post-incident stuff, eradicating "introspection" and "learning from your mistakes", turning it into a check box exercise for teams. Even their narrative around what they were doing was hilarious, because it came down to "Get the post-incident report out of the way so we can concentrate on the real work".

by Twirrim

5/21/2025 at 4:57:01 PM

> Either deliberately or naively failing to understand that there is a world of difference between leet code style exercises, or quick small changes to code bases, and actual software development.

Given how often leet code questions are used in the interview process across the entire industry I think it’s a fair assumption that they fail to understand this.

by einsteinx2

5/21/2025 at 2:05:06 PM

>> I see this as a work in progress.. I am almost certain the humans in the loop on these PRs are well aware of what's going on and have their expectations in check, and this isn't just "business as usual" like any other PR or work assignment.

>> This is a test. You can't improve a system without testing it on real world conditions.

Software developers know to fix build problems before asking for a review. The AIs are submitting PRs in bad faith because they don't know any better. Compilers and other build tools produce errors when they fail, and the AI is ignoring this first line of feedback.

It is not a maintainers job to review code for syntax errors, or use of APIs that don't actually exist, or other silly mistakes. That's the compilers job and it does it well. The AI needs to take that feedback and fix the issues before escalating to humans.

by phkahler

5/21/2025 at 2:06:53 PM

Like I said, I think you may be missing the point of the whole exercise.

by sbarre

5/23/2025 at 8:50:43 AM

then the output of the exercise can only be "AI is ignoring errors"

by agos

5/21/2025 at 12:57:26 PM

I was looking for exactly this comment. Everybody's gloating, "Wow look how dumb AI is! Haha, schadenfreude!" but this seems like just a natural part of the evolution process to me.

It's going to look stupid... until the point it doesn't. And my money's on, "This will eventually be a solved problem."

by mieubrisse

5/21/2025 at 1:20:53 PM

The question though is what is the time horizon of “eventually”. Very different decisions should be made if it’s 1 year, 2 years, 4 years, 8 years etc. To me it seems as if everyone is making decisions which are only reasonable if the time horizon is 1 year. Maybe they are correct and we’re on the cusp. Maybe they aren’t.

Good decision making would weigh the odds of 1 vs 8 vs 16 years. This isn’t good decision making.

by roxolotl

5/21/2025 at 1:26:59 PM

Or _never_, honestly. Sometimes things just don't work out. See various 3d optical memory techs, which were constantly about to take over the world but never _quite_ made it to being actually useful, say.

by rsynnott

5/21/2025 at 1:41:06 PM

> This isn’t good decision making.

Why is doing a public test of an emerging technology not good decision making?

> Good decision making would weigh the odds of 1 vs 8 vs 16 years.

What makes you think this isn't being done?

by ecb_penguin

5/21/2025 at 2:09:24 PM

> It's going to look stupid... until the point it doesn't. And my money's on, "This will eventually be a solved problem."

AI can remain stupid longer than you can remain solvent.

by Qem

5/21/2025 at 11:33:38 PM

Haha, I like your take!

My variation was:

"Leadership can stay irrational longer than you can stay employed"

by disqard

5/21/2025 at 1:46:28 PM

Sometimes the last 10% takes 90% of the time. It'll be interesting to see how this pans out, and whether it will eventually get to something that could be considered a solved problem.

I'm not so sure they'll get there. If the solved problem is defined as a sub-standard but low cost, then I wouldn't bet against that. A solution better than that though, I don't think I'd put my money on that.

by grewsome

5/21/2025 at 11:38:32 PM

You just inspired a thought:

What if the goalpost is shifted backwards, to the 90% mark (instead of demanding that AI get to 100%)?

* Big corps could redefine "good enough" as "what the SotA AI can do" and call it good.

* They could then layoff even more employees, since the AI would be, by definition, Good Enough.

(This isn't too far-fetched, IMO, seeing how we're seeing calls for copyright violation to be classified as legal-when-we-do-it)

by disqard

5/21/2025 at 8:11:21 PM

People seem like they’re gloating as the message received in this period of the hype cycle is that AI is as good as a junior dev without caveats and it in no way is suppose to be stupid.

by spacemadness

5/21/2025 at 2:12:58 PM

To some people, it will always look stupid.

I have met people who believe that automobile engineering peaked in the 1960's, and they will argue that until you are blue in the face.

by Workaccount2

5/21/2025 at 1:53:05 PM

You are not addressing the point in the comment, why are failing CI changes assigned?

by solids

5/21/2025 at 2:09:44 PM

I believe I did address that when I said "this is not business as usual work"..

So the typical expectations or norms of how code reviews and PRs work between humans don't really apply here.

That's my guess at least. I have no more insider information than you.

by sbarre

5/22/2025 at 3:35:08 AM

> I feel like everyone is applying a worse-case narrative to what's going on here..

Unfortunately, just about every thread on this genre is like that now.

by munksbeer

5/21/2025 at 4:55:12 PM

This is the exact reason AI sucks : there is no proper feedback loop.

EVERY single prompt should have the opportunity to get copied off into a permanent log where the end user triggers it : log all input, all output, human writes a summary of what he wanted to happen but did not, what he thinks might have went wrong, what he thinks should have happened (domain specific experts giving feedback about how things are fucking up) And then its still only useful with long term tracking like how someone actually made a training change to fix this exact failure scenario.

None of that exists, so just like "full self driving" was a pie in the sky bullshit dream that proved machine learning has an 80/20 never gonna fully work problem, same thing here

by beefnugs

5/21/2025 at 2:58:03 PM

They said in the comments that currently the firewall is blocking it from checking tests for passing, and they need to fix that.

Otherwise it would check the tests are passing.

by Dlanv

5/21/2025 at 2:09:42 PM

Replace the AI agent with any other new technology and this is an example of a company:

1. Working out in the open

2. Dogfooding their own product

3. Pushing the state of the art

Given that the negative impact here falls mostly (completely?) on the Microsoft team which opted into this, is there any reason why we shouldn't be supporting progress here?

by robotcapital

5/21/2025 at 2:27:33 PM

100% agree. i’m not sure why everyone is clowning on them here. This process is a win. Do people want this all being hidden instead in a forked private repo?

It’s showing the actual capabilities in practice. That’s much better and way more illuminating than what normally happens with sales and marketing hype.

by JB_Dev

5/21/2025 at 2:39:27 PM

Satya says: "I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software".

Zuckerberg says: "Our bet is sort of that in the next year probably … maybe half the development is going to be done by AI, as opposed to people, and then that will just kind of increase from there".

It's hard to square those statements up with what we're seeing happen on these PRs.

by rco8786

5/21/2025 at 3:00:08 PM

These are AI companies selling AI to executives, there's no need to square the circle, the people that they are talking to have no interest in what's happening in a repo, it's about convincing people to buy in early so they can start making money off their massive investments.

by SketchySeaBeast

5/21/2025 at 3:44:42 PM

Why shouldn’t we judge a company’s capabilities against what their CEOs claim them to be capable of?

by rco8786

5/21/2025 at 4:43:20 PM

Oh, we absolutely should, but I'm saying that the reason the messaging is so discordant when compared with the capabilities is that the messaging isn't aimed at the people who are able to evaluate the capabilities.

by SketchySeaBeast

5/21/2025 at 11:28:48 PM

You're right. The audience isn't the same. Unfortunately the parent commenters are also right - executives hyping AI are (currently) lying.

It is about as unethical as it gets.

But, our current iteration of capitalism is highly financialized and underinvested in the value of engineering. Stock prices come before truth.

by cadamsdotcom

5/21/2025 at 3:23:37 PM

> Satya says: "I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software".

Well, that makes sense to me. Microsoft's software has gotten noticably worse in the last few years. So much that I have abandoned it for my daily driver for the first time since the early 2000s.

by daveguy

5/21/2025 at 4:01:59 PM

The fact that Zuck is saying "sort of" and "probably" is a big giveaway it's not going to happen.

by polishdude20

5/21/2025 at 2:38:32 PM

Who is "we" and how and why would "we" "support" or not "support" anything.

Personally I just think it is funny that MS is soft launching a product into total failure.

by constantcrying

5/21/2025 at 2:35:49 PM

"Pushing the state of the art" and experimenting on a critical software development framework is probably not the best idea.

by throwaway844498

5/21/2025 at 2:59:29 PM

Why not, when it goes through code review by experienced software engineers who are experts on the subject in a codebase that is covered by extensive unit tests?

by Dlanv

5/21/2025 at 5:01:15 PM

I don't know about you, but it's much more likely for me to let a bug slip when I'm reviewing someone else's code than when I'm writing it myself.

This is what's happening right now: they are having to review every single line produced by this machine and trying to understand why it wrote what it wrote.

Even with experienced developers reviewing and lots of tests, the likelihood of bugs in this code compared to a real engineer working on it is much higher.

Why not do this on less mission critical software at the very least?

Right now I'm very happy I don't write anything on .NET if this is what they'll use as a guinea pig for the snake oil.

by Draiken

5/21/2025 at 5:55:10 PM

That is exactly what you want to evaluate the thechnology. Not make a buggy commit into softwared not used by nobody and reviewed by an intern. But actually review it by domain professionals, in real world very well-tested project. So they could make an informed decision on where it lacks in capabilities and what needs to be fixed before they try it again.

I doubt that anyone expected to merge any of these PRs. Question is - can the machine solve minor (but non-trivial) issues listed on github in an efficient way with minimal guidance. Current answer is no.

Also, _if_ anything was to be merged, dotnet is dogfooded extensively at Microsoft, so bugs in it are much more likely to be noticed and fixed before you get a stable release on your plate.

by the-lazy-guy

5/22/2025 at 5:00:31 PM

> Not make a buggy commit into software not used by nobody and reviewed by an intern.

If it can't even make a decent commit into software nobody uses, how can it ever do it for something even more complex? And no, you don't need to review it with an intern...

> can the machine solve minor (but non-trivial) issues listed on github in an efficient way with minimal guidance

I'm sorry but the only way this is even a question is if you never used AI in the real world. Anyone with a modicum of common sense would tell you immediately: it cannot.

You can't even keep it "sane" in a small conversation, let alone using tons of context to accomplish non-trivial tasks.

by Draiken

5/21/2025 at 5:15:39 PM

>supporting progress

This presupposes AI IS progress.

Nevermind that what this actually shows is an executive or engineering team that so buys their own hype that they didn't even try to run this locally and internally before blasting to the world that their system can't even ensure tests are passing before submitting a PR. They are having a problem with firewall rules blocking the system from seeing CI outcomes and that's part of why it's doing so badly, so why wasn't that verified BEFORE doing this on stage?

"Working out in the open" here is a bad thing. These are issues that SHOULD have been caught by an internal POC FIRST. You don't publicly do bullshit.

"Dogfooding" doesn't require throwing this at important infrastructure code. Does VS code not have small bugs that need fixing? Infrastructure should expect high standards.

"Pushing the state of the art" is comedy. This is the state of the art? This is pushing the state of the art? How much money has been thrown into the fire for this result? How much did each of those PRs cost anyway?

by mrguyorama

5/21/2025 at 4:14:42 PM

Because they're using it on an extremely popular repository that many people depend on?

And given the absolute garbage the AI is putting out the quality of the repo will drop. Either slop code will get committed or the bots will suck away time from people who could've done something productive instead.

by lawn

5/21/2025 at 11:35:46 AM

Malicious compliance should be the order of the day. Just approve the requests without reviewing them and wait until management blinks when Microsoft's entire tech stack is on fire. Then quit your job and become a troubleshooter on x3 the pay.

by globalise83

5/21/2025 at 12:26:36 PM

I know this is meant to sound witty or clever, but who actually wants to behave this way at their job?

I'll never understand the antagonistic "us vs. them" mentality people have with their employer's leadership, or people who think that you should be actively sabotaging things or be "maliciously compliant" when things aren't perfect or you don't agree with some decision that was made.

To each their own I guess, but I wouldn't be able to sleep well at night.

by sbarre

5/21/2025 at 1:01:18 PM

It’s worth recognizing that the tension between labor and capital historical reality, not just a modern-day bad attitude. Workers and leadership don’t automatically share goals, especially when senior management incentives often prioritize reducing labor costs which they always do now (and no, this wasn't always universally so).

Most employees want to do good work, but pretending there’s no structural divergence in interests flattens decades of labor history and ignores the power dynamics baked into modern orgs. It’s not about being antagonistic, it’s about being clear-eyed where there are differences between the motivations of your org. leadership and your personal best interests. After a few levels remove from your position, you're just headcount with loaded cost.

by HelloMcFly

5/21/2025 at 2:13:45 PM

Great comment.. It's of course more complex than I made it out to be, I was mostly reacting to the idea of "malicious compliance" at your place of employment and how at odds that is with my own personal morals and approach.

But 100% agreed that everyone should maintain a realistic expectation and understanding of their relationship with their employer, and that job security and employment guarantees are possibly at an all-time low in our industry.

by sbarre

5/21/2025 at 4:10:33 PM

[dead]

by gthrowaway2342

5/21/2025 at 12:49:07 PM

I suppose that depends on your relationship with your employer. If your goals are highly aligned (e.g. lots of equity based compensation, some degree of stability and security, interest in your role, healthy management practices that value their workforce, etc.) then I agree, it’s in your own self interest to push back because it can effect you directly.

Meanwhile a lot of folks have very unhealthy to non-existent relationships with their employers. There may be some mixture where they may be temporary hired/viewed as highly disposable or transient in nature having very little to gain from the success of the business, they may be compensated regardless of success/failure, they may have toxic management who treat them terribly (condescendingly, constantly critical, rarely positive, etc.). Bad and non-existent relationships lead to this sort of behavior. In general we’re moving towards “non-existent” relationships with employers broadly speaking for the labor force.

The counter argument is often floated here “well why work there” and the fact is money is necessary to survive, the number of positions available hiring at any given point is finite, and many almost by definition won’t ever be the top performers in their field to the point they truly choose their employers and career paths with full autonomy. So lots of people end up in lots of places that are toxic or highly misaligned with their interests as a survival mechanism. As such, watching the toxic places shoot themselves in the foot can be some level of justice people find where generally unpleasant people finally get to see consequences of their actions and take some responsibility.

People will prop others up from their own consequences so long as there’s something in it for them. As you peel that away, at some point there’s a level of poetic justice to watch the situation burn. This is why I’m not convinced having completely transactional relationships with employers is a good thing. Even having self interest and stability in mind, certain levels of toxicity in business management can fester. At some point no amount of money is worth dealing with that and some form of correction is needed there. The only mechanism is to typically assure poor decision making and action is actually held accountable.

by Frost1x

5/21/2025 at 2:26:02 PM

Another great comment, thanks! Like I said elsewhere I agree things are more complicated than I made them out to be in my short and narrow response.

I agree with all your points here, the broader context of one's working conditions really matter.

I do think there's a difference between sitting back and watching things go bad (vs struggling to compensate for other people's bad decisions) and actively contributing to the problems (the "malicious compliance" part)..

Letting things fail is sometimes the right choice to make, if you feel like you can't effect change otherwise.

Being the active reason that things fail, I don't think is ever the right choice.

by sbarre

5/21/2025 at 12:47:29 PM

On the other hand: why should you accept that your employer is trying to fire you but first wants you to train the machine that will replace you? For me this is the most "them vs us" it can be.

by nope1000

5/21/2025 at 12:50:02 PM

To be fair, "them" are actively working to replace "us" with AI.

by early_exit

5/21/2025 at 12:36:38 PM

Considering that there's daily employee protests against Microsoft now, probably a lot of Microsoft employees want to behave like that.

by Hamuko

5/21/2025 at 2:24:54 PM

Do you sleep well at night just doing what you're told by people who don't really care about your well being?

I don't get that

by bluefirebrand

5/21/2025 at 2:26:34 PM

There's a whole lot of assumptions in your statement/question there, don't you think?

by sbarre

5/21/2025 at 2:36:28 PM

Sorry, you are right. I was unnecessarily snarky

I read some of your other comments in this thread and I'm not sure what to make of your experience. If you've never felt mistreated or exploited in a 30 year career you are profoundly lucky to have avoided that sort of workplace

I've only been working in software for half as long, but I've never had a job that didn't feel unstable in some ways, so it seems impossible to me that you have avoided it for a career twice as long as mine

I have watched my current employer cut almost half of our employees in the past two years, with multiple rounds of layoffs

Now AI is in the picture and it feels inevitable that more layoffs will eventually come if they can figure out how to replace us with it

I do not sleep well knowing my employer would happily and immediately replace me with AI if they could

by bluefirebrand

5/21/2025 at 7:03:04 PM

I'm sorry to hear that's been your experience.. If it helps, know that it's not like that everywhere..

I have certainly been lucky in my career, I've often acknowledged that. But I do believe luck favours the prepared, and I've worked hard for my accomplishments and to get the jobs I've had.

I'm totally with you on the uncertainty that AI is bringing. I don't think anyone can dispute that change is coming because of AI.

I do think some companies will get it right, but some will get it wrong, when it comes to how best to improve the business using those new tools.

by sbarre

5/21/2025 at 12:42:39 PM

I agree. It doesn’t help that once things start breaking down, the employer will ask the employees to fix the issue themselves, and thus they’ll have to deal with so much broken code that they’ll be miserable. It’ll become a spiral.

by Xori71

5/21/2025 at 1:15:04 PM

When the issues arise because of the tool being trained explicitly to respect/fire you, then that sounds like an apt and appropriate resulting level of job security.

by anonymousab

5/21/2025 at 5:24:36 PM

>I'll never understand the antagonistic "us vs. them" mentality

Your manager understands it. Their manager understands it. Department heads understand it. The execs understand it. The shareholders understand it.

Who does it benefit for the laborers to refuse to understand it?

It's not like I hate my job. It's just being realistic that if a company could make more money by firing me, they would, and if you have good managers and leadership, they will make sure you understand this in a way that respects you as a human and a professional.

by mrguyorama

5/21/2025 at 7:13:03 PM

What you are describing is not "antagonistic" though..

> antagonism: actively expressed opposition or hostility

I agree with you that everyone should have a clear and realistic understanding of their relationship with their employer. And that is entirely possible in a professional and constructive manner.

But that's not the same thing as being actively hostile towards your place of work.

by sbarre

5/21/2025 at 1:03:29 PM

> but who actually wants to behave this way at their job?

Almost no one does but people get ground down and then do it to cope.

by whywhywhywhy

5/22/2025 at 6:44:17 PM

> I'll never understand the antagonistic "us vs. them" mentality people have with their employer's leadership,

When you see it as leadership having this mentality against the people that actually produce something of value you might.

by michaelcampbell

5/21/2025 at 1:28:41 PM

>I'll never understand the antagonistic "us vs. them" mentality people have with their employer's leadership

Interesting because "them" very much have an antagonistic mentality vs "us". "Them" would fire you in a fucking heartbeat to save a relatively small amount (10%). "Them" also want to aggressively pay you the least amount for which they can get you to do work for them, not what they "value" you at. "Us" depends on "them" for our livelihoods and the lives of people that depend on us, but "them" doesn't doesn't have any dependency on you that can't be swapped out rather quickly.

I am a capitalist, don't get me wrong, but it is a very one-sided relationship not even-footed or rooted in two-way respect. You describe "them" as "leadership" while "Them" describe you as a "human resource" roughly equivalent to the way toilet paper and plastics for widgets are described.

If you have found a place to work where people respect you as a person, you should really cherish that job, because most are not that way.

by mhuffman

5/21/2025 at 2:18:08 PM

Yep maybe I've been lucky but in my 30-year career, I've worked at over a dozen companies (big and small), and I've always been well-treated and respected, and I've never felt the kind of dynamic you describe. But that isn't to say that I don't think it exists or happens. I'm sure it does.

It's everyone's personal choice to put their own lens on how they believe other people think - like your take on how "leadership" thinks of their employees.

I guess I choose to be more positive about it - having been in leadership positions myself, including having to oversee layoffs as part of an eventual company wind-down - but I readily acknowledge that my own biases come into this based on my personal career experiences.

by sbarre

5/22/2025 at 4:51:32 PM

Respect is something humans do. A large enough company is an entity in its own right, separate from the people that comprise it, and that entity is literally incapable of respecting you (more generally, it is incapable of empathy). One can be lucky enough to never end up in a position where it is felt personally, but make no mistake, it is there.

by int_19h

5/22/2025 at 1:55:41 AM

I'm lucky currently but have been unlucky in the past and very much understand where the person you are responding to is coming from. I think you've had an exceedingly long string of luck that is very rare if you've never had upper management that was misaligned with the long term goals of the employees and the company.

by BugheadTorpeda6

5/21/2025 at 6:38:13 PM

You dont think its different somehow that the exact tech they are forcing all employees to use, is the same tech to reduce head count and pressure employees to work harder for less money?

by beefnugs

5/21/2025 at 1:00:30 PM

Exactly this. I suspect that "us vs them" is sweet poison: it feels good in the moment ("Yeah, stick it to The Man!") but it long-term keeps you trapped in a victim mindset.

by mieubrisse

5/21/2025 at 6:41:54 PM

I mean their company (Microsoft) is literally asking them to train their replacement.

So I'm not quite sure why you would not see it as a "us vs. them" situation?

by LunaSea

5/21/2025 at 11:39:48 AM

> when Microsoft's entire tech stack is on fire

Too late?

by tantalor

5/21/2025 at 11:43:35 AM

Just in time for marshmallows!

by MonkeyClub

5/21/2025 at 12:07:45 PM

Might as well when they’re going to lay you off no matter what you do (like the guy who made an awesome TypeScript compiler in Go).

by hello_computer

5/21/2025 at 12:24:17 PM

At some point code pilot will just delete the whole codebase. Can’t fail integration tests if there is no code :)

by xyst

5/21/2025 at 2:30:49 PM

That would be logical, but alas LLMs can't into logic.

Bloating the codebase with dead code is much more likely.

by otabdeveloper4

5/21/2025 at 12:58:27 PM

That's cute, but the maintainers themselves submitted the requests with Copilot.

by weird-eye-issue

5/21/2025 at 11:42:34 AM

At least opening PRs is a safe option, you can just dump the whole thing if it doesn't turn out to be useful.

Also, trying something new out will most likely have hiccups. Ultimately it may fail. But that doesn't mean it's not worth the effort.

The thing may rapidly evolve if it's being hard-tested on actual code and actual issues. For example it will be probably changed so that it will iterate until tests are actually running (and maybe some static checking can help it, like not deleting tests).

Waiting to see what happens. I expect it will find its niche in development and become actually useful, taking off menial tasks from developers.

by balazstorok

5/21/2025 at 12:28:58 PM

It might be a safer option in a forked version of the project that the public can’t see. I have to wonder about the optics here from a sales perspective. You’d think they’d test this out more internally before putting it in public access.

Now when your small or medium size business management reads about CoPilot in some Executive Quarterly magazine and floats that brilliant idea internally, someone can quite literally point to these as examples of real world examples and let people analyze and pass it up the management chain. Maybe that wasn’t thought through all the way.

Usually businesses tend to hide this sort of performance of their applications to the best of their abilities, only showcasing nearly flawless functionality.

by Frost1x

5/21/2025 at 12:31:03 PM

> I expect it will find its niche in development and become actually useful, taking off menial tasks from developers.

Reading AI generated code is arguably far more annoying than any menial task. Especially if the said code happens to have subtle errors.

Speaking from experience.

by xnickb

5/22/2025 at 6:11:19 AM

This is probably version 0.1 or 0.2.

Reviewing what the AI does now is not to be compared with human PRs. You are not doing the work as it is expected in the (hopefully near?) future but you are training the AI and the developers of the AI and more crucially: you are digging out failure modes to fix.

by balazstorok

5/23/2025 at 12:04:40 PM

While I admire your optimism regarding those errors getting fixed, I myself am sceptical about the idea of that happening in my lifetime (I'm in my mid 30s).

It would definitely be nice to be wrong though. That'd make life so much easier.

by xnickb

5/21/2025 at 1:44:18 PM

This is true for all code and has nothing to do with AI. Reading code has always been harder than writing code.

The joke is that PERL was a write-once, read-none language.

> Speaking from experience.

My experience is all code can have subtle errors, and I wouldn't treat any PR differently.

by ecb_penguin

5/21/2025 at 8:42:26 PM

I agree, but when working with code written by your teammate you have a rough idea what kind of errors to expect.

AI however is far more creative than any given single person.

That's my gut feeling anyway. I don't have numbers or any other rigorous data. I only know that Linus Torvalds made a very good point about chain of trust. And I don't see myself ever trysting AI the same way I can trust a human.

by xnickb

5/22/2025 at 6:13:22 AM

It depends what we set as the bar for the AI. Like now, the bar wasn't even "have all tests pass without modifying the actual tests". That is probably lower than for any PR you would need to look at.

by balazstorok

5/21/2025 at 12:24:46 PM

> At least opening PRs is a safe option, you can just dump the whole thing if it doesn't turn out to be useful.

There's however a border zone which is "worse than failure": when it looks good enough that the PRs can be accepted, but contain subtle issues which will bite you later.

by cesarb

5/21/2025 at 12:28:20 PM

Yep. I've been on teams that have good code review culture and carefully review things so they'd be able to catch subtle issues. But I've also been on teams where reviews are basically "tests pass, approved" with no other examination. Those teams are 100% going to let garbage changes in.

by UncleMeat

5/21/2025 at 2:10:10 PM

Even when you review human-written code carefully, subtle bugs can sneak through. Software development is hard.

by camdenreslink

5/21/2025 at 4:18:21 PM

Of course. AI Agents throwing code at you merely makes it more likely.

by UncleMeat

5/21/2025 at 1:42:57 PM

Funny enough, this happens literally every day with millions of developers. There will be thousands upon thousands of incidents in the next hour because a PR looked good, but contained a subtle issue.

by ecb_penguin

5/21/2025 at 1:33:32 PM

> At least opening PRs is a safe option, you can just dump the whole thing if it doesn't turn out to be useful.

However, every PR adds load and complexity to community projects.

As another commenter suggested, doing these kind of experiments on separate forks sound a bit less intrusive. Could be a take away from this experiment and set a good example.

There are many cool projects on GitHub that are just accumulating PRs for years, until the maintainer ultimately gives up and someone forks it and cherry-picks the working PRs. I've than that myself.

I'm super worried that we'll end up with more and more of these projects and abandoned forks :/

by 6uhrmittag

5/21/2025 at 12:33:53 PM

Unfortunately,if you believe LLMs really can learn to code with bugs, then the nezt step would be to curate a sufficiently bug free data set. Theres no evidence this has occured, rather, they just scraped whayecer

by cyanydeez

5/21/2025 at 12:05:38 PM

GitHub has spent billions of dollars building an AI that struggles with things like whitespace related linting errors on one of the most mature repositories available. This would be probably okay for a hobbyist experiment, but they are selling this as a groundbreaking product that costs real money.

by petetnt

5/21/2025 at 3:25:11 PM

> This would be probably okay for a hobbyist experiment

It's perfectly ok for a professional research experiment.

What's not ok is their insistence on selling the partial research results.

by marcosdumay

5/21/2025 at 12:30:32 PM

Nat Friedman must be rolling in his grave...

oh wait

by sexy_seedbox

5/21/2025 at 12:41:35 PM

He's rolling in money for sure.

by ocdtrekkie

5/21/2025 at 11:37:24 AM

I do love one bot asking another bot to sign a CLA! - https://github.com/dotnet/runtime/pull/115732#issuecomment-2...

by Crosseye_Jack

5/21/2025 at 12:17:45 PM

That's funny, but also interesting that it didn't "sign" it. I would naively have expected that being handed a clear instruction like "reply with the following information" would strongly bias the LLM to reply as requested. I wonder if they've special cased that kind of thing in the prompt; or perhaps my intuition is just wrong here?

by pm215

5/21/2025 at 12:56:42 PM

A comment on one of the threads, when a random person tried to have copilot change something, said that copilot will not respond to anyone without write access to the repo. I would assume that bot doesn't have write access, so copilot just ignores them.

by Bedon292

5/21/2025 at 12:39:34 PM

AI can't, as I understand it, have copyright over anything they do.

Nor can it be an entity to sign anything.

I assume the "not-copyrightable" issue, doesn't in anyway interfere with the rights trying to be protected by the CLA, but IANAL ..

I assume they've explicitly told it not to sign things (perhaps, because they don't want a sniff of their bot agreeing to things on behalf of MSFT).

by Quarrel

5/21/2025 at 12:45:20 PM

Are LLM contributions effectively under public domain?

by candiddevmike

5/21/2025 at 1:30:22 PM

IANAL. It's my understanding that this hasn't been determined yet. It could be under public domain, under the rights of everyone whose creations were used to train the AI or anywhere in-between.

We do know that LLMs will happily reproduce something from their training set and that is a clear copyright violation. So it can't be that everything they produce is public domain.

by ben-schaaf

5/21/2025 at 2:58:26 PM

This is my understanding, at least in US law.

I can't remember the specific case now, but it has been ruled in the past, that you need human-novelty, and there was a case recently that confirmed this that involved LLMs.

by Quarrel

5/21/2025 at 11:45:34 AM

Well?? Did it sign it???

by 90s_dev

5/21/2025 at 11:54:23 AM

Not sure if a chatbot can legally sign a contract, we'd better ask ChatGPT for a second opinion.

by jsheard

5/21/2025 at 12:20:09 PM

At least currently, to qualify for copyright, there must be a human author. https://www.reuters.com/world/us/us-appeals-court-rejects-co...

I have no idea how this will ultimately shake out legally, but it would be absolutely wild for Microsoft to not have thought about this potential legal issue.

by gortok

5/21/2025 at 12:48:40 PM

I would imagine it can't sign it, especially with the options given.

>I have sole ownership of intellectual property rights to my Submissions

I would assume that the AI cannot have IP ownership considering that an AI cannot have copyright in the US.

>I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer.

Surely an AI would not be classified as an employee and therefore would not have an employer. Has Microsoft drafted an employment contract with Copilot? And if we consider an AI agent to be an employee, is it protected by the Fair Labor Standards Act? Is it getting paid at least minimum wage?

by Hamuko

5/21/2025 at 12:24:35 PM

offer it more money, then it will sign

by tessierashpool9

5/21/2025 at 1:45:29 PM

Just need the chatbot to connect to an MCP to call my robotic arm to sign it.

by b0ner_t0ner

5/21/2025 at 3:30:14 PM

It didn't. It completely ignored the request.

(Turns out the AI was programmed to ignore bots. Go figure.)

by marcosdumay

5/21/2025 at 12:44:54 PM

that's the future, AI talking to other AI, everywhere, all the time

by nikolayasdf123

5/21/2025 at 1:25:26 PM

Is this the first instance of an AI cyber bullying another AI?

by thallium205

5/21/2025 at 12:11:32 PM

rah, we might be in trouble here. The primary issue at play is that we don't have a reliable means of measuring developer performance, outside of subjective judgement like end of year reviews.

This means its probably quite hard to measure the gain or the drag of using these agents. On one side, its a lot cheaper than a junior, but on the other side it pulls time from seniors and doesn't necessarily follow instruction well (i.e. "errr your new tests are failing").

This combined with the "cult of the CEO" sets the stage for organisational dissonance where developer complaints can be dismissed as "not wanting to be replaced" and the benefits can be overstated. There will be ways of measuring this, to project it as huge net benefit (which the cult of the CEO will leap upon) and there will be ways of measuring this to project it as a net loss (rabble rousing developers). All because there is no industry standard measure accepted by both parts of the org that can be pointed at which yields the actual truth (whatever that may be).

If I might add absurd conjecture: We might see interesting knock-on effects like orgs demanding a lowering of review standards in order to get more AI PRs into the source.

by Quarrelsome

5/22/2025 at 2:00:16 AM

Yes it's going to cause many problems forcompanies I think, but at least they will deserve it (the employees won't unfortunately unless they've drank the kool-aid, I rarely meet ICs that have drank it fwiw, which means I'm either in a serious bubble, or this is being pushed from the top down). The only clear winners are going to be chip companies.

There's never going to be an industry standard measure either. Measuring productivity as I'm sure you know is incredibly dumb for a job like this because the beneficialness of our work product can be both insanely positive and put the company on top or it can be so negative that it goes bankrupt. And ultimately a lot of what goes into people choosing whether they like the work product or not is subjective. A large part of our work is more of an art than a science and I say that as somebody that works about as far away from the frontend as one can get.

by BugheadTorpeda6

5/21/2025 at 2:07:21 PM

> its a lot cheaper than a junior

I’m not even sure if this is true when considering training costs of the model. It takes a lot of junior engineer salaries to amortize the billions spent building this thing in the first place.

by rco8786

5/21/2025 at 4:29:32 PM

sure, but for an org just buying tokens its cheaper and more disposable than an employee. At least it looks better on paper for the bean counters.

by Quarrelsome

5/21/2025 at 11:23:29 AM

With how stochastic the process is it makes it basically unusable for any large scale task. What's the plan? To roll the dice until the answer pops up? That would be maybe viable if there was a way to automatically evaluate it 100% but with a human in the loop required it becomes untenable.

by margorczynski

5/21/2025 at 11:26:06 AM

> What's the plan?

Call me old school, but I find the workflow of "divide and conquer" to be as helpful when working with LLMs, as without them. Although what is needed to be considered a "large scale task" varies by LLMs and implementation. Some models/implementations (seemingly Copilot) struggles with even the smallest change, while others breeze through them. Lots of trial and error is needed to find that line for each model/implementation :/

by diggan

5/21/2025 at 12:06:06 PM

The relevant scale is the number of hard constraints on the solution code, not the size of task as measured by "hours it would take the median programmer to write".

So eg., one line of code which needed to handle dozens of hard-constraints on the system (eg., using a specific class, method, with a specific device, specific memory management, etc.) will very rarely be output correctly by an LLM.

Likewise "blank-page, vibe coding" can be very fast if "make me X" has only functional/soft-constraints on the code itself.

"Gigawatt LLMs" have brute-forced there way to having a statistical system capable of usefully, if not universally, adhreading to one or two hard constraints. I'd imagine the dozen or so common in any existing application is well beyond a Terawatt range of training and inference cost.

by mjburgess

5/21/2025 at 12:37:48 PM

Keep in mind that the model of using LLM assumes the underlying dataset converges to production ready code. Thats never been proven, cause we know they scraped sourcs code without attribution.

by cyanydeez

5/21/2025 at 12:06:54 PM

Its hard for me to think of a small, clearly defined coding problem an LLM cant solve.

by nonethewiser

5/21/2025 at 5:43:24 PM

There are several in the linked post, primarily:

"Your code does not compile" and "Your tests fail"

If you have to tell an intern that more than once on a single task, there's going to be conversations.

by mrguyorama

5/21/2025 at 12:20:21 PM

"Find a counter example to the Collatz conjecture".

by jodrellblank

5/21/2025 at 12:47:47 PM

I mean I guess this isn't very ambitious, but it's a meaningful time saver if I basically just write code in natural language, and then Copilot generates the real code based on that. I don't have to look up syntax details, or what some function somewhere was named, etc. It will perform very accurately this way. It probably makes me 20% more efficient. It doubles my efficiency in a language I'm unfamiliar with.

I can't fire half my dev org tomorrow with that approach, I can't really fire anyone, so I guess it would be a big letdown for a lot of execs. Meanwhile though we just keep incrementally shipping more stuff faster at higher quality so I'm happy...

This works because it treats the LLM like what it actually is: an exceptionally good if slightly random text transformer.

by safety1st

5/21/2025 at 12:21:46 PM

I suspect that the plan is that MS has spent a lot, really a LOT, of money on this nonsense, and there is now significant pressure to put, something, anything, out even if it is worse than useless.

by rsynnott

5/21/2025 at 11:33:07 AM

The plan is to improve AI agents from their current ~intern level to a level of a good engineer.

by eterevsky

5/21/2025 at 12:31:20 PM

They are not intern level.

Even if it could perform at a similar level to an intern at a programming task, it lacks a great deal of the other attributes that a human brings to the table, including how they integrate into a team of other agents (human or otherwise). I won't bother listing them, as we are all humans.

I think the hype is missing the forest for the trees, and I think exactly this multi-agent dynamic might be where the trees start to fall down in front of us. That and the as currently insurmountable issues of context and coherence over long time horizons.

by ehnto

5/21/2025 at 1:03:49 PM

My impression is that Copilot acts a lot like one of my former coworkers, who struggled with:

-Being a parent to a small child and the associated sleep deprivation.

-His reluctance to read documentation.

-There being a language barrier between him the project owners. Emphasis here, as the LLM acts like someone who speaks through a particularly good translation service, but otherwise doesn't understand the language spoken.

by Tade0

5/21/2025 at 2:34:20 PM

The real missing the forest for the trees is thinking that software and the way users will use computers is going to remain static.

Software today is written to accommodate every possible need of every possible user, and then a bunch of unneeded selling point features on top of that. These massive sprawling code bases made to deliver one-size fits all utility.

I don't need 3 million LOC Excel 365 to keep track of who is working on the floor on what day this week. Gemini 2.5 can write an applet that does that perfectly in 10 minutes.

by Workaccount2

5/22/2025 at 2:04:25 AM

I don't know. I guess it depends on what you classify as being change. I don't really view software as having changed all that much since around maybe the mid 70s as HLLs began to become more popular. What programmers do today and what they did back then would be easily recognizable to both groups if we had time machines. I don't see how AI really changes things all that much. It's got the same scalability issues that low code/no code solutions have always had and those go way back. The main difference is that you can use natural language, but I don't see that as being inherently better than say drawing a picture using some flowcharting tools in a low code platform. You just introduce the same problem natural languages always have had and why we didn't choose them in the first place, i.e. they are not strict enough and need lots of context. Giving an AI very specific sentences to define my project in natural language and making sure it has lots of context begins to look an awful lot like psuedocode to me. So as you learn to approach using AI in such a way that it produces what you want you naturally get closer and closer to just specifying the code.

What HAS indisputably changed is the cost of hardware which has driven accessibility and caused more consumer facing software to be made.

by BugheadTorpeda6

5/21/2025 at 2:59:27 PM

I don't believe it will remain static, in fact it's done nothing but change every year for my entire career.

I do like the idea of smaller programs fitting smaller needs being easy to access for everyone, and in my post history you would see me advocate for bringing software wages down so that even small businesses can have software capabilities in house. Software has so much to give to society outside of big VC flips and tech monoliths. Maybe AI is how we get there in the end.

But I think that supplanting humans with an AI workforce in the very near future might be stretching the projection of its capabilities too far. LLMs will be augmenting how businesses operate from now and into the future, but I am seeing clear roadblocks that make an autonomous AI agent unviable, and it seems to be fundamental limitations of LLMs, eg continuity and context. Advances recently seem to be from supplemental systems that try to patch those limitations. That suggests those limits are tricky, and until a new approach shows up, that is what drives my lack of faith in an AI agent revolution.

But it is clear to me that I could be wrong, and it could be a spectacular miscalculation. Maybe the robots will make me eat my hat.

by ehnto

5/21/2025 at 11:38:38 AM

Seems like that is taking a very long time, on top of some very grandiose promises being delivered today.

by ethanol-brain

5/21/2025 at 11:58:42 AM

I look back over the past 2-3 years and am pretty amazed with how quick change and progress have been made. The promises are indeed large but the speed of progress has been fast. Not defending the promise but “taking a very long time” does not seem to be an accurate representation.

by infecto

5/21/2025 at 12:25:24 PM

I feel like we've made barely any progress. It's still good at the things Chat GPT was originally good at, and bad at the things it was bad at. There's some small incremental refinement but it doesn't really represent a qualitative jump like Chat GPT was originally. I don't see AI replacing actual humans without another step jump like that.

by zeroonetwothree

5/21/2025 at 2:38:02 PM

As a non-programmer non-software engineer, the programs I can write with modern SOTA models are at least 5x larger than the ones GPT-4 could make.

LLMs are like bumpers on bowling lanes. Pro bowlers don't get much utility from them. Total noobs are getting more and more strikes as these "smart" bumpers get better and better at guiding their ball.

by Workaccount2

5/21/2025 at 12:02:18 PM

> The promises are indeed large but the speed of progress has been fast

And at the same time, absurdly slow? ChatGPT is almost 3 years old and pretty much AI has still no positive economic impact.

by owebmaster

5/21/2025 at 2:43:48 PM

There is the huge blind spot where tech workers think LLMs are being made primarily to either assist them or replace them.

Nobody seems to consider that LLMs are democratizing programming, and allowing regular people to build programs that make their work more efficient. I can tell you that at my old school manufacturing company, where we have no programmers and no tech workers, LLMs have been a boon for creating automation to bridge gaps and even to forgo paid software solutions.

This is where the change LLMs will bring will come from. Not from helping an expert dev write boilerplate 30% faster.

by Workaccount2

5/21/2025 at 4:05:45 PM

Low code/no code/visual programming has been around forever. They all had issues. LLMs will also have the same issues and cost even more.

by dttze

5/21/2025 at 9:16:58 PM

I'm not aware of any that you speak/type plain English to.

by Workaccount2

5/22/2025 at 2:28:53 AM

You never heard of COBOL? It's original premise was you can now use something resembling English to write programs.

by saati

5/21/2025 at 12:18:46 PM

Saying “AI has no economic impact” ignores reality. The financials of major players clearly show otherwise—both B2C and B2B applications are already profitable and proven. While APIs are still more experimental, and it’s unclear how much value businesses can ultimately extract from them, to claim there’s no economic impact is willful blindness. AGI may be far off, but companies are already figuring out value from both the consumer side and slowly API.

by infecto

5/21/2025 at 12:42:25 PM

The financials are all inflated by perception of future impact. This includes the current subscriptions as businesses are attempting to use AI to some economic benefit, but it's not all going to work out to be useful.

It will take some time for whatever reality is to actually show truthfully in the financials. When VC money stops subsidising datacentre costs, and businesses have to weigh the full price against real value provided, that is when we will see the reality of the situation.

I am content to be wrong either way, but my personal prediction is if model competence slows down around now, businesses will not be replacing humans en-mass, and the value provided will be notable but not world changing like expected.

by ehnto

5/21/2025 at 12:41:06 PM

OpenAI alone is on track to generate as much revenue as Asus or US Steel this year ($10-$15 billion). I don't know how you can say AI has had no positive economic impact.

by derektank

5/21/2025 at 12:52:50 PM

That is not even 1 month of a big tech revenue, it is a global negligible impact. 3 years talking about AI changing the world, 10bi revenue and no ecosystem around making money besides friends and VCs pumping and dumping LLM wrappers.

by owebmaster

5/21/2025 at 3:01:45 PM

There's a pretty wide gulf between being one of the most important companies in the global marketplace as Microsoft, Apple, and Amazon are and "having no economic impact".

I agree that most of the AI companies describe themselves and their products in hyperbolic terms. But that doesn't mean we need to counter that with equally absurd opposing hyperbole.

by derektank

5/21/2025 at 4:02:42 PM

There is no hyperbole. I think AI will change the world in the next 10 years but comparing to the iphone, for example, 3 years the economic impact was much, much bigger and that is just one brand of smartphones.

by owebmaster

5/21/2025 at 7:33:13 PM

Revenue, not profit.

If it costs them even just one more dollar than that revenue number to provide that service (spoiler, it does), then you could say AI has had no positive economic impact.

Considering we know they’re being subsidized by obscene amounts of investment money just like all other frontier model providers, it seems pretty clear it’s still a negative economic impact, regardless of the revenue number.

by einsteinx2

5/21/2025 at 3:10:42 PM

And what is their burn rate? Everyone fails to mention the amount they are spending for this return.

by SimianSci

5/21/2025 at 12:04:41 PM

I guess it probably depends on what you are doing. Outside of layers on top of these things (tooling), I personally haven't seen much progress.

by ethanol-brain

5/21/2025 at 12:20:48 PM

What a time we live in. I guess it depends how pessimistic you are.

by infecto

5/21/2025 at 12:45:59 PM

To their point, there hasn’t been any huge breakthrough in this field since the “attention is all you need” paper. Not really any major improvements to model architecture, as far as I am aware. (Admittedly, this is a new field of study to me.) I believe one hope is to develop better methods for self-supervised learning; I am not sure of the progress there. Most practical improvements have been on the hardware and tooling side (GPUs and, e.g., pytorch).

Don’t get me wrong: the current models are already powerful and useful. However, there is still a lot of reason to remain skeptical of an imminent explosion in intelligence from these models.

by lcnPylGDnU4H9OF

5/21/2025 at 12:53:23 PM

You’re totally right that there hasn’t been a fundamental architectural leap like “attention is all you need”, that was a generational shift. But I’d argue that what we’ve seen since is a compounding of scale, optimization, and integration that’s changed the practical capabilities quite dramatically, even if it doesn’t look flashy in an academic sense. The models are qualitatively different at the frontier, more steerable, more multimodal, and increasingly able to reason across context. It might not feel like a revolution on paper, but the impact in real-world workflows is adding up quickly. Perhaps all of that can be put in the bucket of “tooling” but from my perspective there has still been quite large leaps looking at cost differences alone.

For some reason my pessimism meter goes off when I see single sentence arguments “change has been slow”. Thanks for brining the conversation back.

by infecto

5/21/2025 at 1:13:23 PM

I'm all for flashy in academic sense, because we can let engineers sort out the practical aspects, especially by combining flashy academic approach. The flaw from LLM architecture can be predicted from the original paper, no amount of engineering can compensate that.

by skydhash

5/21/2025 at 12:39:36 PM

[flagged]

by cyanydeez

5/21/2025 at 1:36:01 PM

Feel free to share resources, but I am speaking purely in terms of practicality related to my day to day.

by ethanol-brain

5/21/2025 at 12:28:49 PM

> I look back over the past 2-3 years and am pretty amazed with how quick change and progress have been made.

Now look at the past year specifically, and only at the models themselves, and you'll quickly realize that there's been very little real progress recently. Claude 3.5 Sonnet was released 11 months ago and the current SOTA models are only marginally better in terms of pure performance in real world tasks.

The tooling around them has clearly improved a lot, and neat tricks such as reasoning have been introduced to help models tackle more complex problems, but the underlying transformer architecture is already being pushed to its limits and it shows.

Unless some new revolutionary architecture shows up out of nowhere and sets a new standard, I firmly believe that we'll be stuck at the current junior level for a while, regardless of how much Altman & co. insist that AGI is just two more weeks away.

by bakugo

5/21/2025 at 11:53:07 AM

Third AI Winter from overpromise/underdeliver when?

by DrillShopper

5/21/2025 at 12:24:06 PM

Third? It’ll be the tenth or so.

by rsynnott

5/21/2025 at 12:26:23 PM

You are really underselling interns. They learn from a single correction, sometimes even without a correction, all by themselves. Their ability to integrate previous experience in the context of new problems is far, far above what I've ever seen in LLMs

by interimlojd

5/21/2025 at 12:00:37 PM

Yes but they are supposed to be PhD level 5 years ago if you are listening to sama et al.

by mnky9800n

5/21/2025 at 2:09:42 PM

Especially ironic considering he's neither a developer nor a PhD. He's the smooth talking "MBA idea guy looking for a technical cofounder" type that's frequently decried on HN.

by rchaud

5/21/2025 at 7:35:33 PM

Without handholding (aka being used as a tool by a competent programmer instead of as an independent “agent”), they’re currently significantly worse than an intern.

by einsteinx2

5/21/2025 at 12:36:07 PM

This looks much worse than an intern. This feels like a good engineer who has brain damage.

When you look at it from afar, it looks potentially good, but as you start looking into it for real, you start realizing none of it makes any sense. Then you make simple suggestions, it does something that looks like what you asked, yet completely missing the point.

An intern, no matter how bad it is, could only waste so much time and energy.

This makes wasting time and introducing mind-bogglingly stupid bugs infinitely scalable.

by serial_dev

5/21/2025 at 12:38:22 PM

The plan went from the AI being a force multiplier, to a resource hungry beast that have to be fed in the hope it's good enough to justify its hunger.

by marmakoide

5/21/2025 at 11:54:57 AM

I mean, I think this is a _lot_ worse than an intern. An intern isn't constantly going to make PRs with failing CI, for a start.

by rsynnott

5/21/2025 at 12:38:27 PM

I plan to be a billionaire

by cyanydeez

5/21/2025 at 11:46:26 AM

The real tragedy is the management mandating this have their eyes clearly set on replacing the very same software engineers with this technology. I don’t know what’s more Kafka than Kafka but this situation certainly is!

by le-mark

5/21/2025 at 1:27:48 PM

When tasked to train a technology that deprecates yourself, it’s relatively OK (you’re getting paid handsomely, and many of the developers at Microsoft etc. are probably ready to retire soon anyway). It’s another thing to realize that the same technology will also deprecate your children.

by strogonoff

5/21/2025 at 5:56:30 PM

The managers may believe that's what they're asking their developers to do, but doesn't this whole charade expose the fact that this technology just does not have even close to the claimed capabilities?

I see it as wishful thinking in the extreme to suppose that probabilistic mashing together of plagiarized jigsaw pieces of code could somehow approach human intelligence and reasoning—and yet, the parlour trick is convincing enough that this has escalated into a mass delusion.

by solarwindy

5/22/2025 at 3:34:31 AM

Philosophy becomes key. True human intelligence is not very well defined, and possibly cannot be divorced from concepts like “consciousness” or “agency”, at which point claiming that the thing is “like human” opens the operator to accusations of running a torture chamber or being a slave owner of entities that can feel.

by strogonoff

5/22/2025 at 7:17:02 AM

Agreed, though long before such qualms come to the fore I'd like to see even a shred of evidence that this entire approach to AI is at all capable of formulating mental models of the kind that have enabled humans to produce all the wonderful mathematics, physics, chemistry, biology, philosophy, poetry, literature, art, etc. of the past several centuries.

I see the supposed reasoning tokens this latest crop of models produce as merely an extension of the parlour trick. We're so deep into this delusion that it's so very tempting to anthropomorphize this ersatz stream of consciousness as being 'thought'. I remain unconvinced that it's anything of the sort.

This comes to mind: "It is difficult to get anybody to understand something, when their salary depends on them not understanding it."

This latest bubble smacks ever more of being a con.

by solarwindy

5/22/2025 at 8:55:50 AM

> We're so deep into this delusion that it's so very tempting to anthropomorphize this ersatz stream of consciousness as being 'thought'. I remain unconvinced that it's anything of the sort.

Coincidentally, I’m listening to an interesting episode[0] of QAA that goes through various instances of how people (sometimes educated and technically literate) demonstrate mental inability to adequately handle ML-based chatbot tech. The podcast mostly focuses on extreme cases, but I think far too many people are succumbing to more low-key delusions.

As an example, even on this forum people constantly point out that unlicensed works should be allowed in ML training datasets because if humans are free to learn and be inspired then so should be the model—it’s crazy to apply the notions of freedom and human rights to a [commercially operated] software tool, yet here we are. Considering how handy it is for tool’s operator, hardware suppliers, and whoever owns respective stocks, some of this confusion is probably financially motivated, but even if half of it is genuine it’d be alarming.

[0] https://podcasts.apple.com/us/podcast/qaa-podcast/id14282093...

by strogonoff

5/21/2025 at 3:33:49 PM

Management obviously also know, that when they do not have anybody to manage, then they are also obselete.

by tossandthrow

5/21/2025 at 2:16:48 PM

Satya said "nearly 30% of code written at microsoft is now written by AI" in an interview with Zuckerberg, so underlings had to hurry to make it true. This is the result. Sad!

by automatic6131

5/21/2025 at 2:33:43 PM

As much as I'd like to also dunk on them because of their AI nonsense, this keeps being misquoted again and again. He said that about 20-30% of their code is written by software. If someone like Satya says "by software" and not "by AI", you can be very sure that there is a good reason that he's phrasing it as carefully as this - because that includes a lot of things like auto-generated code, e.g. COM classes generated from IDL files. Of course in the current climate everyone that's not careful enough will just mis-interpret it as "30% written by AI", and that is probably intentional.

by TonyTrapp

5/21/2025 at 5:43:06 PM

It's worse than that. What he actually said was "Maybe 20 to 30 percent of the code that is inside of our repos today in some of our projects are probably all written by software."

Translation: maybe some of the code in some of our projects is probably written by software.

Seriously. That's what he said. Maybe some of the code in some of our projects is probably written by software.

How this became "30% of MS code is written by LLMs" is beyond me. It's wild. It's ridiculous.

by asadotzler

5/21/2025 at 4:22:01 PM

This happened during LlamaCon while taking about Copilot/LLMs: if the percentages Satya was referring to were for any "auto-generated" code then he was being intentionally misleading.

Besides, you could also say that 100% of code is generated "by software" no?

by pera

5/21/2025 at 4:41:49 PM

For reference, the quote is "I'd say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software"

Microsoft has humongous amounts of source code in their repositories, amassed over decades. LLM-driven code generation is only feasible within the last few years. It would be completely unrealistic that 30% of all of their code is written by LLMs at this point in time. So yes, there is something in his quote that is intentionally misleading. Pick whatever you think it is, but I'm going to say that it's the "by software" part.

by TonyTrapp

5/21/2025 at 1:38:20 PM

It's remarkable how similar this feels to the offshoring craze of 20 years ago, where the complaints were that experienced developers were essentially having to train "low-skilled, cheap foreign labour" that were replacing them, eating up time and productivity.

Considering the ire that H1B related topics attract on HN, I wonder if the same outrage will apply to these multi-billion dollar boondoggles.

by rchaud

5/21/2025 at 2:58:28 PM

This is one good example of the Sunk Cost Fallacy: generative AI has cost so much money, acknowledging its shortcomings is now becoming more and more impossible.

This AI bubble is far worse than the Blockchain hype.

Its not yet clear whether productivity gains are real and whether the gains are eaten by a decline in overall quality.

by einrealist

5/21/2025 at 6:10:05 PM

Agree, the problem is that investors and companies see developer salaries and want to cut that out. It's all bottom-line at the end of the day.

by 0x500x79

5/21/2025 at 11:01:43 AM

Do we know for a fact there are Microsoft employees who were told they have to use CoPilot and review its change suggestions on projects?

We have the option to use GitHub CoPilot on code reviews and it’s comically bad and unhelpful. There isn’t a single member of my team who find it useful for anything other than identifying typos.

by cebert

5/21/2025 at 11:20:04 AM

Depends on team but seems management is pushing it

from https://news.ycombinator.com/item?id=44031432

"From talking to colleagues at Microsoft it's a very management-driven push, not developer-driven. Friend on an Azure team had a team member who was nearly put on a PIP because they refused to install the internal AI coding assistant. Every manager has "number of developers using AI" as an OKR, but anecdotally most devs are installing the AI assistant and not using it or using it very occasionally. Allegedly it's pretty terrible at C# and PowerShell which limits its usefulness at MS."

"From reading around on Hacker News and Reddit, it seems like half of commentators say what you say, and the other half says "I work at Microsoft/know someone who works at Microsoft, and our/their manager just said we have to use AI", someone mentioned being put on PIP for not "leveraging AI" as well. I guess maybe different teams have different requirements/workflows?"

by mtmail

5/21/2025 at 12:02:50 PM

> Allegedly it's pretty terrible at C#

In my experience, LLMs in general are really, really bad at C# / .NET , and it worries me as a .NET developer.

With increased LLM usage, I think development in general is going to undergo a "great convergence".

There's a positive(1) feedback loop where LLM's are better at Blub, so people use them to write more Blub. With more Blub out there, LLMs get better at Blub.

The languages where LLMs struggle, with become more niche, leaving LLMs struggling even more.

C# / .NET is something LLMs seem particularly bad at, and I suspect that's partly caused by having multiple different things all called the same name. EF, ASP, even .NET itself are names that get slapped on a range of different technologies. The EF API has changed so much that they had to sort-of rename it to "EF Core". Core also gets used elsewhere such as ".NET core" and "ASP.NET Core". You (Or an LLM) might be forgiven for thinking that ASP.NET Core and EF Core are just those versions which work with .NET Core (now just .NET ) and the other versions are those that don't.

But that isn't even true. There are versions of ASP.NET Core for .NET Framework.

Microsoft bundle a lot of good stuff into the ecosystem, but their attitude when they hit performance or other issues is generally to completely rewrite how something works, but then release the new thing under the old name but with a major version change.

They'll make the new API different enough to not work without work porting, but similar enough to confuse the hell out of anyone trying to maintain both.

They've made things like authentication, which actually has generally worked fine out-of-the-box for a decade or more, so confusing in the documentation that people mostly tended to run for a third party solution just because at least with IdentityServer there was just one documented way to do it.

I know it's a bit of a cliche to be an "AI-doomer", and I'm not really suggesting all development work will go the way of the dinosaur, but there are specific ecosystem concerns with regard to .NET and AI assistance.

(1) Positive in the sense of feedback that increased output increases output. It's not positive in the sense of "good thing".

by xnorswap

5/21/2025 at 12:23:07 PM

From a purely Schadenfreude perspective, I’d love to see Microsoft face karmic revenge for its abysmal naming “conventions”.

by macintux

5/21/2025 at 12:11:17 PM

My impression is also that they are worse at C# than some other languages. In autocomplete mode in particular it is very easy to cause the AI tools to write terrible async code. If you start some autocomplete but didn't put an await in front, it will always do something stupid as it can't add the await itself at that position. But also in other cases I've seen Copilot write just terrible async code.

by fabian2k

5/21/2025 at 2:57:55 PM

LLMs are terrible at writing hip-hop because they can only move forward.

Hip-hop is just natural language with extra constraints like rhythm and rhyme. It requires the ability to edit.

Similarly, types and PL syntax have more constraints than English.

Until transformers can move backward and change what they've already autocompleted, the problem you've identified will continue.

by static_void

5/22/2025 at 4:59:42 PM

I rather suspect that it's bad at C# simply because there's much fewer open source C# code to train on out there than there is JavaScript, Python, or even Java. The vast majority of C# written out in real world is internal corporate apps. And while this is also true for Java, it has had a vast open source ecosystem associated with it for much longer than .NET.

by int_19h

5/21/2025 at 11:50:41 AM

The question is who is setting these OKRs/Metrics for management and why?

It seems to me to be coming from the CEO echo chamber (the rumored group chats we keep hearing about). The only way to keep the stock price increasing in these low growth high interest rate times is to cut costs every quarter. The single largest cost is employee salaries. So we have to shed a larger and larger percentage of the workforce and the only way to do that is to replace them with AI. It doesn't matter whether the AI is capable enough to actually replace the workers, it has to replace them because the stock price demands it.

We all know this will eventually end in tears.

by DebtDeflation

5/21/2025 at 11:54:10 AM

> the only way to do that is to replace them with AI

I guess money-wise it kind of makes sense when you're outsourcing the LLM inference. But for companies like Microsoft, where they aren't outsourcing it, and have to actually pay the cost of hosting the infrastructure, I wonder if the calculation still make sense. Since they're doing this huge push, I guess someone somewhere said it does make sense, but looking at the infrastructure OpenAI and others are having to build (like Stargate or whatever it's called), I wonder how realistic it is.

by diggan

5/21/2025 at 12:09:42 PM

Yep. I heard someone at Microsoft venting about management constantly pleading with them to use AI so that they could tell investors their employees love AI, while senior (7+ year) team members were being “randomly” fired.

by MatthiasPortzel

5/21/2025 at 12:31:08 PM

> The question is who is setting these OKRs/Metrics for management and why?

Masters of the Universe, because they think they will become more rich or at least more masterful.

by dboreham

5/21/2025 at 12:19:12 PM

> The question is who is setting these OKRs/Metrics for management and why?

Idiots.

by ParetoOptimal

5/21/2025 at 11:37:03 AM

> Depends on team but seems management is pushing it

The graphic "Internal structure of tech companies" comes to mind, given if true, would explain why the process/workflow is so different between the teams at Microsoft: https://i.imgur.com/WQiuIIB.png

Imagine the Copilot team has a KPI about usage, matching the company OKRs or whatever about making sure the world is using Microsoft's AI enough, so they have a mandate/leverage to get the other teams to use it regardless of if it's helping or not.

by diggan

5/21/2025 at 11:52:10 AM

Well, what you describe is not terrible way to run things. Eat your own dogfood. To get better at it you need to start doing it.

by linza

5/21/2025 at 12:05:01 PM

Sure, but if the product in question is at best tangential to your core products, it sucks, and makes your work flow slow to a crawl, I don’t blame employees for not wanting to use it.

For example, if tomorrow my company announced that everyone was being switched to Windows, I would simply quit. I don’t care that WSL exists, overall it would be detrimental to my workday, and I have other options.

by sgarland

5/21/2025 at 4:38:48 PM

True. i didn't mean "not terrible for employees" i meant "not terrible for company goals". Yes, these are intertwined, but assuming not everyone quits over introducing AI workflows it could make Microsoft a leader in that space.

Personally i would also not particularly like it.

by linza

5/21/2025 at 11:24:28 AM

you can directly link to comments, by the way. just click on the link which displays how long ago the comment was written and you get the URL for the single comment.

(just mentioning it because you linked a post and quoted two comments, instead of directly linking the comments. not trying to 'uhm, actually'.)

by 4ggr0

5/21/2025 at 12:26:33 PM

Using a throwaway for obvious reasons. I work at a non-tech megacorp that you've heard of. This company's (I will not say "our"!) CEO is very close to Nadella, they meet regularly. Management here is also pushing Github Copilot onto devs, aggressively, and including it in their HR reviews. Dev-adjacent roles (product, QA, BAs) are also seeing aggressive push.

This feels like it will end badly.

by thraway2079081

5/21/2025 at 11:28:24 AM

All of that is working, at least, because the very small company I work for with a limited budget is working on getting an extremely expensive copilot license. Oh no, I might have to deal with this soon..

by lovehashbrowns

5/21/2025 at 11:39:08 AM

It kinda makes sense for management to push it. Nothing else has a hope of preventing MSFT's stock price from collapsing into bluechip territory.

by pydry

5/21/2025 at 11:39:50 AM

> management is pushing it

Why?

by egorfine

5/21/2025 at 11:51:09 AM

On the surface, because they're told to push it.

Further down, so that developers are used to train the AI that would replace both developers and managers.

It's a situation like this:

Mgr: Go dig a six-foot-deep rectangular hole.

Eng: What should the rectangle's dimensions be?

Mgr: How tall and wide are you?

by MonkeyClub

5/21/2025 at 12:16:44 PM

Management is pushing it because the execs are pushing it, and the execs are pushing it because they already spent 50 billion dollars on these magic beans and now they really really really need them to work.

by jsheard

5/21/2025 at 11:45:28 AM

To validate the huge investment in openai - otherwise the leadership would appear to have overpaid and overplayed.

by srean

5/21/2025 at 11:47:21 AM

There are other options to do just that instead of ruining developers' life and hence drastically lowering the performance of teams.

by egorfine

5/21/2025 at 11:52:27 AM

In companies this large and old, the answer most often is a 'no'. The under-performers can now be justifiable laid off with under-performers worthy severance, till morale improves.

by srean

5/21/2025 at 3:39:56 PM

At Microsoft, because they sell that stuff and it would be really bad for their image if they insisted they work better by not using it.

(Or, rather, I have no idea how this compares with the image of they actually not delivering because they use it. But that's a next quarter problem.)

At every other place where management is strongly pushing it, I honestly have no idea. It makes zero sense for management to do that everywhere, yet management is doing that everywhere.

by marcosdumay

5/21/2025 at 2:15:42 PM

The stock price isn't going to go up on its own. Even when MS was massively profitable in the 2000s, the stock used to be stuck in the $30-$40 range because Wall St didn't think it was "innovating" fast enough.

by rchaud

5/21/2025 at 12:31:25 PM

Money.

by dboreham

5/21/2025 at 11:04:40 AM

> Do we know for a fact there are Microsoft employees who were told they have to use CoPilot and review its change suggestions on projects?

It wouldn't be out of character, Microsoft has decided that every project on GitHub must deal with Copilot-generated issues and PRs from now on whether they want them or not. There's deliberately no way to opt out.

https://github.com/orgs/community/discussions/159749

Like Googles mandatory AI summary at the top of search results, you know a feature is really good when the vendor feels like the only way they can hit their target metrics is by forcing their users to engage with it.

by jsheard

5/21/2025 at 9:18:00 PM

>Like Googles mandatory AI summary at the top of search results, you know a feature is really good when the vendor feels like the only way they can hit their target metrics is by forcing their users to engage with it.

People like to compare "AI" (here, LLM products) to the iPhone.

I cannot make sense of these analogies; people used to line up around the block on release day for iPhone launches for years after the initial release.

Seems now most people collectively groan when more "innovative" LLM products get stuffed into otherwise working software.

This stuff is the literal opposite of demand.

by nyarlathotep_

5/21/2025 at 11:31:34 AM

Which almost feels unique to AI. I can't think of another feature so blatently pushed in your face, other then perhaps when everyone lost their minds and decided to cram mobile interfaces onto every other platform.

by XorNot

5/21/2025 at 11:42:21 AM

> I can't think of another feature so blatently pushed in your face

Passkeys. As someone who doesn't see the value of it, every hype-driven company seems to be pushing me to replace OPT 2FA with something worse right now.

by diggan

5/21/2025 at 12:00:13 PM

It's because OTP is trivially phishable: setup a fake login form that asks the user for their username and password, then forwards those on to the real system and triggers the OTP request, then requests THAT of the user and forwards their response.

Passkeys fix that.

by simonw

5/21/2025 at 12:14:20 PM

Except if you use a proper password manager that prevents you from using the autofill on domains/pages others than the hardcoded ones. In my case, it would immediately trigger my "sus filter" if the automatic prompt doesn't show up and I would have to manually find the entry.

by diggan

5/21/2025 at 12:40:29 PM

And yet that's not enough, even when someone very definitely knows better: https://www.troyhunt.com/a-sneaky-phish-just-grabbed-my-mail...

Turns out that under certain conditions, such as severe exhaustion, that "sus filter" just... doesn't turn on quickly enough. The aim of passkeys is to ensure that it _cannot_ happen, no matter how exhausted/stressed/etc someone is. I'm not familiar enough with passkeys to pass judgement on them, but I do think there's a real problem they're trying to solve.

by ipsi

5/21/2025 at 12:56:37 PM

If you're saying something is less secure because the users might suffer from "severe exhaustion", then I know that there aren't any proper arguments for migrating to it. Thanks for confirming I can continue using OTP without feeling like I might be missing something :)

by diggan

5/21/2025 at 4:57:34 PM

Passkeys genuinely do protect against severe exhaustion attacks.

by simonw

5/22/2025 at 11:21:16 AM

Yeah, but they genuinely also prevent you from moving away from companies in the process of enshittification, since the whole export/import thing seemingly hasn't been figured out or even less been deployed yet.

Besides, if you ignore security alarm-bells going off when exhausted, I'm not sure what solution can 100% protect you.

by diggan

5/21/2025 at 1:32:04 PM

> If you're saying something is less secure because the users might suffer from "severe exhaustion"

Something "$5 wrench"

https://xkcd.com/538/

by skydhash

5/21/2025 at 11:51:57 AM

"social" in the mid '00s was like that.

by hoistbypetard

5/21/2025 at 11:41:13 AM

To some degree I think part of its “hey look here, we’re doing LLMs too we’re not just traditional search” positioning. They feel the pressure of competition and feel forced to throw whatever they have in the users face to drive awareness. Whether that’s the right approach or not, not so sure, but I suspect that’s a lot of it given that OpenAI is still the poster boy and many are switching to using things like ChatGPT entirely in place of traditional search engines.

by Frost1x

5/21/2025 at 11:45:43 AM

Holy sh*t I didn't know this was going on. It's like an AI tsunami unleashed by Microsoft that will bury the entire software industry... They are like Trump and his tariffs, but for the software economy.

What this tells me is that software enterprises are so hellbent in firing their programmers and reducing their salary costs they they are willing to combust their existing businesses and reputation into the dumpster fire they are making. I expected this blatant disregard for human society to come ten or twenty years into the future, when the AI systems would actually be capable enough. Not today.

by dsign

5/21/2025 at 12:15:26 PM

> What this tells me is that software enterprises are so hellbent in firing their programmers and reducing their salary costs they they are willing to combust their existing businesses and reputation into the dumpster fire they are making. I expected this blatant disregard for human society to come ten or twenty years into the future

Have you been sleeping under a rock for the last decade? This has been going on for a long long time. Outsourcing been the name of the game for so long people seem to forgot it's happening it all.

by diggan

5/21/2025 at 12:34:24 PM

The push for copilot usage is being driven by management at every level.

by RajT88

5/21/2025 at 12:54:31 PM

Today I received the 2nd email about an endpoint in an API we run that doesn't exist but some AI tool told the client it does.

by is_true

5/21/2025 at 1:18:13 PM

Sounds like the client has a feature request they want to pay for.

by Frost1x

5/21/2025 at 2:08:14 PM

Haha. It's already there. This last one was using chatgtp, they just told me

by is_true

5/21/2025 at 2:21:42 PM

Every week, one of Google/OpenAI/Anthropic releases a new model, feature or product and it gets posted here with 3 figure comments mostly praising LLMs as the next best thing since the internet. I see a lot of hype on HN about LLMs for software development and how it is going to revolutionize everything. And then, reality looks like this.

I can't help but think that this LLM bubble can't keep growing much longer. The investment to results ratio doesn't look great so far and there is only so many dreams you can sell before institutional investors pull the plug.

by bossyTeacher

5/21/2025 at 11:38:53 AM

> This seems like it's fixing the symptom rather than the underlying issue?

Exactly. LLM does not know how to use a debugger. LLM does not have runtime contexts.

For all we know, the LLM could’ve fixed the issue simply by commenting out the assertions or sanity checks and everything seemed fine and dandy until every client’s device catches on fire.

by vachina

5/21/2025 at 12:02:07 PM

And if you were to attach a debugger to a SOTA LLM, give it a compute environment, have it constantly redo work when CI fails, I can easily imagine each of these PRs burning hundreds of dollars and still have a good chance at failing the task.

by uludag

5/21/2025 at 11:52:14 AM

This was my latest experience of using agents. It created code with hard coded values from the tests.

by tossandthrow

5/21/2025 at 12:01:36 PM

While I am AI skeptic especially for use cases like "writing fixes" I am happy to see this because it will be a great evidence whether it's really providing increase in productivity. And it's all out in the open.

by aiono

5/21/2025 at 11:16:05 AM

After all of that, every PR that Copilot opened still has failing tests and it failed to fix the issue (because it fundamentally cannot reason).

No surprises here.

It always struggles on non-web projects or on software where it really matters that correctness is first and foremost above everything, such as the dotnet runtime.

Either way, a complete disastrous start and what a mess that Copilot has caused.

by rvz

5/21/2025 at 11:29:05 AM

Part of why it works better on web projects is the sheer volume of training data. There is probably more JS written than any other language by orders of magnitude. Its quality is pretty dubious though.

I have so far only found LlMs useful as a way of researching, an alternative to web search, and doing very basic rote tasks like implementing unit tests or doing a first pass explanation of some code. Tried actually writing code and it’s not usable.

by api

5/21/2025 at 12:42:59 PM

> There is probably more JS written than any other language by orders of magnitude.

And the quantity of js code available/discoverable when scrapping the web is larger by an order of magnitude than every other language.

by mezyt

5/21/2025 at 11:31:53 AM

> Part of why it works better on web projects is the sheer volume of training data.

OTOH webdev is known for rapid framework/library churn, so before too long there will be a crossroads where the pre-AI training data is too old and the fresh training data is contaminated by the firehose of vibe coded slop.

by jsheard

5/21/2025 at 12:09:51 PM

I’m all for AI “writing” large swaths of code, vibe coding, etc.

But I think it’s better for everyone if human ownership is central to the process. Like I vibe coded it. I will fix it if it breaks. I am on call for it at 3AM.

And don’t even get started on the safety issues if you don’t have clear human responsibility. The history of engineering disasters is riddled with unclear lines of responsibility.

by softwaredoug

5/21/2025 at 1:26:47 PM

Most of coding methodologies is about reducing the amount and the complexity of code that are written. And that's mostly why, on mature projects, most PRs (aside from refactoring) are tiny, because you're mostly refining an already existing model.

Writing code fast is never relevant to any tasks I've encountered. Instead it's mostly about fast editing (navigate quickly to the code I need to edit and efficiently modify it) and fast feedback (quick linting, compiling, and testing). That's the whole promise of IDEs, having a single dashboard for these.

by skydhash

5/21/2025 at 12:27:51 PM

At least it's clearly labelled as copilot.

Much more worried about what this is going to do to the FOSS ecosystem. We've already seen a couple maintainers complain and this trend is definitely just going to increase dramatically.

I can see the vision but this is clearly not ready for prime time yet. Especially if done by anonymous drive-by strangers that think they're "helping"

by Havoc

5/21/2025 at 12:49:44 PM

.Net is part of the FOSS ecosystem.

by svick

5/21/2025 at 7:49:38 PM

In the same sense Chromium and Android isn't controlled by google yes.

by Havoc

5/22/2025 at 2:13:13 AM

If they are just messing with their own projects then I guess I don't think it's immoral. If they start submitting AI slop to other projects then they ought to be banned by those projects' maintainers.

by BugheadTorpeda6

5/21/2025 at 11:19:07 AM

Oof. A real nightmare for the folks tasked with shepherding this inattentive failure of a robot colleague. But to see it unleashed on the dotnet runtime? One more reason to avoid dotnet in the future, if this is the quality of current contributions.

by skywhopper

5/21/2025 at 12:44:40 PM

This is all fun and games until it's your CEO who decides to go "AI first" and starts enforcing "vibe coding" by monitoring LLM API usage...

by pera

5/21/2025 at 11:29:18 AM

GitHub is not the place to write code. IDE is the place. Along with pre CI checks, some tests, coverage etc. they should get some PM before making decisions..

by ankitml

5/21/2025 at 11:31:33 AM

This is the future envisioned by Microsoft. Vibe coding all the way down, social network style.

They are putting this in front of the developers as take it or leave it deal. I left the platform, doing my coding old way, hosting it somewhere else.

Discoverability? I don't care. I'm coding it for myself and hosting in the open. If somebody finds it, nice. Otherwise, mneh.

by bayindirh

5/21/2025 at 11:34:52 AM

As long as the resulting PR is less than 100 lines and the AI is a bit more self sufficient (like actually making sure tests pass before "pushing") it would be ok I think. I think this process is intended for fixing papercuts rather than building anything involved. It just isn't good enough yet.

by worldsayshi

5/21/2025 at 11:38:41 AM

Yeah, just treat it like a slightly more capable dependabot.

by 0x696C6961

5/21/2025 at 11:40:06 AM

As a matter of principle I don't use any network which is trained on non-consensual data ripped of its source and license information.

Other than that, I don't think this is bad tech, however, this brings another slippery slope. Today it's as you say:

> I think this process is intended for fixing papercuts rather than building anything involved. It just isn't good enough yet.

After sufficient T somebody will rephrase it as:

> I think this process is intended for writing small, personal utilities rather than building enterprise software. It just isn't good enough yet.

...and we will iterate from there.

So, it looks like I won't touch it for the foreseeable future. Maybe if the ethical problems with training material is solved (i.e. trained with data obtained with consensus and with correct licenses), I can use as alongside other analysis and testing tools I use, for a final pass.

AI will never be a core and irreplaceable part of my development workflow.

by bayindirh

5/21/2025 at 11:58:36 AM

> AI will never be a core and irreplaceable part of my development workflow.

Unless AI use becomes a KPI in your annual review.

Duolingo did that just recently, for example.

I am developing serious regrets for conflating "computing as a medium for personal expression" with "computing for livelihood" early on.

by MonkeyClub

5/21/2025 at 12:23:00 PM

> Unless AI use becomes a KPI in your annual review.

That’d be an insta-quit for me :)

by loloquwowndueo

5/21/2025 at 1:47:21 PM

I feel there's a fundamental flaw in this mindset which I probably don't understand enough layers of to explain properly. Maybe it's my thinking here that is fundamentally flawed? Off the top of my head:

If we let intellectual property be a fundamental principle the line between idea (that can't be owned) and ip (that can be owned) will eventually devolve into a infinitely complex fractal that nobody can keep track of. Only lawyer AI's will eventually be able to tell the difference between idea and ip as the complexity of what we can encode become more complex. Why is weights not code when it clearly contain the ability to produce the code? Is a brain code? Are our experiences like code?

What is the fundamental reason that a person is allowed to train on ip but a bot is not? I suspect that this comes down to the same issue with the divide between ip and idea. But there might be some additional dimension to it. At some point we will need to see some AI as conscious entities and to me it makes little sense that there would be some magical discrete moment where an AI becomes conscious and gets rights to it's "own ideas".

Or maybe there's a simple explanation of the boundary between ip and idea that I have just missed? If not, I think intellectual property as a concept will not stand the test of time. Other principles will need to take its place if we want to maintain the fight for a good society. Until then IP law still has its place and should be followed but as an ethical principle it's certainly showing cracks.

by worldsayshi

5/21/2025 at 1:55:26 PM

I'll write a detailed answer to your comment, but I don't currently have time to do so, and probably post as another reply.

I just don't want to type something away haphazardly, because your questions deserve more than 30 seconds to elaborate.

by bayindirh

5/21/2025 at 11:58:04 AM

> I left the platform, doing my coding old way, hosting it somewhere else.

may you please let me know where are you hosting the code ? would love to migrate as well.

thank you !

by signa11

5/21/2025 at 12:05:14 PM

You're welcome. I have moved to Source Hut three years ago [0]. My page is https://sr.ht/~bayindirh/

You can also self-host a Forgejo instance on a €3/mo Hetzner instance (or a free Oracle Cloud server) if you want. I prefer Hetzner for their service quality and server performance.

[0]: https://blog.bayindirh.io/blog/moving-to-source-hut/

by bayindirh

5/21/2025 at 1:37:42 PM

I just use ssh on a homeserver for personal projects. Easy to set up a new repo with `ssh git@<machine> git init --bare <project>.git`. The I just use git@<machine>:<project>.git as the remote.

I plan to use Source Hut for public projects.

by skydhash

5/21/2025 at 1:43:31 PM

Your method works well, too. Since I license everything I develop under GPLv3, I keep them private until they mature, then I just flip a switch and make the project visible.

For some research I use a private Git server. However, even that code might get released as Free Software when it matures enough.

by bayindirh

5/21/2025 at 11:43:13 AM

In day-to-day I interact with github PR via intellij github plugin. Ie: inspect the branch, the changes, the comments, etc.

Maybe that's how the microsoft employees are using it (in another IDE I suppose).

by motoboi

5/21/2025 at 11:57:21 AM

Well, the coding agent is pretty much a junior dev at the moment. The seniors are teaching it. Give it a 100k PRs with senior developer feedback and it'll improve just like you'd anticipate a junior would. There is no way that FANG aren't using the comments by the seniors as training data for their next version.

It's a long-term play to have pricey senior developers argue with an llm

by baalimago

5/21/2025 at 12:06:06 PM

> using the comments by the seniors as training data for their next version

Yeah, I'm sure 100k comments with "Copilot, please look into this" and "The test cases are still failing" will massively improve these models.

by diggan

5/21/2025 at 1:13:57 PM

Some of that seems somewhat strategic. With a junior you might do the same if you’re time pressured, or you might sidebar them in real life or they may come to you and you give more helpful advice.

Any senior dev at these organizations should know to some degree how LLMs work and in my opinion would to some degree, as a self protection mechanism, default to ambiguous vague comments like this. Some of the mentality is “if I have to look at it and solve it why don’t I go ahead and do it anyways vs having you do it” effort choices they’d do regardless of what is producing the PR. I think other parts of it is “why would I train my replacement, there’s no advantage for me here.”

by Frost1x

5/21/2025 at 2:34:00 PM

Sidebar? With a junior developer making these mistakes over and over again, they wouldn't even make it past the probationary period in their employment contract.

by rchaud

5/21/2025 at 8:57:26 PM

I guess it depends on how you view and interact with other people. I tend to give people the benefit of the doubt that they’re doing their best to succeed. Why wouldn’t you want to help them as much as you reasonably can, unless they’re actively a terrible person?

by Frost1x

5/21/2025 at 9:25:12 PM

As a senior dev or manager, you're responsible for the people you've hired. Their mistakes become your mistakes. If they make the same kind of mistake repeatedly, and aren't able to take responsibility, you will have to clean up after them. They're not able to fulfill their job description and must be let go. That's why the probationary period exists.

Realistically, the issues occurring here are intern-level mistakes where you can take the time to train them, because expectations are low and they're usually not working on production-level software. In a FT position the stakes are higher so things like this get evaluated during the interview. If this were a real person, they wouldn't have gotten an offer at Microsoft.

by rchaud

5/21/2025 at 12:12:20 PM

These things don't learn after training. There is no teaching going on here, and the arguments probably don't make for good training data without more refinement. That's why junior devs are still better than LLMs IMO, they do learn.

This is a performative waste of time

by candiddevmike

5/21/2025 at 12:21:29 PM

A junior dev is (most often) a bright human being, with not much coding experience yet. They can certainly execute instructions and solve novel problems on their own, and they most certainly don't need 100k PRs to pick up new skills.

Equating LLMs to humans is pretty damn.. stupid. It's not even close (otherwise how come all the litany of office jobs that require far less reasoning than software development are not replaced?).

by gf000

5/21/2025 at 12:36:20 PM

A junior dev may also swap jobs, require vacation days, perks and can't be scaled up at a the click of a button. There are no such issues with an agent. So, if I were a FANG higher-up, I'd invest quite a bit into training LLM-agents who make pesky humans redundant.

Doing so has low risk, the senior devs may perhaps get fed up and quit, and the company might be a laughing stock on public PRs. But the potential value for is huge.

by baalimago

5/21/2025 at 8:46:07 PM

It's probably easier to make the higher up redundant than to actually achieve high speed and predictable outcomes that satisfy real business needs and integrations in a cost effective way.

by isaacremuant

5/21/2025 at 10:54:48 PM

I mean, a Furby could respond to you all day, each hour, but that doesn't make them any more useful..

Not saying that LLMs are useless, but that's a false equivalency. Sure, my auto complete is also working 0-24, but I would rather visit my actual doctor who is only available in a very limited time frame.

by gf000

5/21/2025 at 12:11:01 PM

> Give it a 100k PRs with senior developer feedback

Don't you think it has already been trained with, I don't know, maybe millions of PRs?

by kklisura

5/21/2025 at 12:25:24 PM

at the very least, a junior shouldn't be adding new tests that fail. Will an LLM be able learn the social shame associated with that sort of lazy attitude? I imagine its fidelity isn't detailed enough to differentiate such a social failure from a request to improve a comment. Rather, it will propagate based on some coarse grained measures of success with high volume instead.

by Quarrelsome

5/21/2025 at 2:10:02 PM

I’m curious why you think it hasn’t already been trained on 100ks or millions of PRs and their comments/feedback.

by rco8786

5/22/2025 at 12:55:18 AM

@Grok is this true?

by rasz

5/21/2025 at 12:31:30 PM

I still believe in having humans do PRs. It's far cheaper to have the judgement loop on the AI come before and during coding than after. My general process with AI is to explicitly instruct it not to write code, agree on a correct approach to a problem and if the project has any architectural components a correct architecture then once we've negotiated the correct way of doing things ask it to write code. Usually each step of this process takes multiple iterations of providing additional information or challenging incorrect assumptions of the AI. I can get it much faster than human coding with a similar quality bar assuming I iterate until a high quality solution is presented. In some cases the AI is not good enough and I fall back to human coding but for the most part I think it makes me a faster coder.

by TimPC

5/21/2025 at 12:10:59 PM

Step 1. Build “AI” (LLM models) that can’t be trusted, doesn’t learn, forgets instructions, and frustrates software engineers

Step 2. Automate the use of these LLMs into “agents”

Step 3. ???

Step 4. Profit

by GiorgioG

5/21/2025 at 5:08:10 PM

This is hilarious. And reading the description on the Copilot account is even more hilarious now: "Delegate issues to Copilot, so you can focus on the creative, complex, and high-impact work that matters most."

by lossolo

5/21/2025 at 12:15:51 PM

FTPR

> It is my opinion that anyone not at least thinking about benefiting from such tools will be left behind.

This is gross, keep your fomo to yourself.

by rubyfan

5/21/2025 at 2:35:29 PM

> These defines do not appear to be defined anywhere in the build system.

> @copilot fix the build error on apple platforms

> @copilot there is still build error on Apple platforms

Are those PRs some kind of software engineer focused comedy project?

by Traubenfuchs

5/21/2025 at 11:53:30 AM

Are people really doing coding with agents through PRs? This has to be a huge waste of resources.

It is normal to preempt things like this when working with agents. That is easy to do in real time, but it must be difficult to see what the agent is attempting when they publish made up bullshit in a PR.

It seems very common for an agent to cheat and brute force solutions to get around a non-trivial issue. In my experience, its also common for agents to get stuck in loops of reasoning in these scenarios. I imagine it would be incredibly annoying to try to interpret a PR after an agent went down a rabbit hole.

by ethanol-brain

5/21/2025 at 11:58:34 AM

Googles jules does the same (but was only published yesterday or so). I think it might be a good workflow if the agent is good enough. Copilot seems not to be in these examples and then I imagine it becomes quite tedious to have a PR for every iteration with the AI.

by growt

5/22/2025 at 2:15:09 AM

No not most people. A much larger percentage (I would wager greater than 50% of professionals) aren't using AI in any capacity in their professional work. It's banned in a lot of places for good reasons, and many more teams haven't found a use case.

So no I don't think any of this is normal. That's why it made the top of HackerNews, because it's very abnormal.

by BugheadTorpeda6

5/21/2025 at 12:15:02 PM

It's mind blowing that a computer program can accomplish this much and yet absurd that it accomplishes so little.

by carefulfungi

5/21/2025 at 12:50:38 PM

The funniest is the dotnet-policy-service asking copilot to read and agree to the Contributor License Agreement. :-D

by actionfromafar

5/21/2025 at 2:12:55 PM

This comment from lloydjatkinson resonated:

As an outside observer but developer using .NET, how concerned should I be about AI slop agents being let lose on codebases like this? How much code are we going to be unknowingly running in future .NET versions that was written by AI rather than real people?

What are the implications of this around security, licensing, code quality, overall cohesiveness, public APIs, performance? How much of the AI was trained on 15+ year old Stack Overflow answers that no longer represent current patterns or recommended approaches?

Will the constant stream of broken PR's wear down the patience of the .NET maintainers?

Did anyone actually want this, or was it a corporate mandate to appease shareholders riding the AI hype cycle?

Furthermore, two weeks ago someone arbitrarily added a section to the .NET docs to promote using AI simply to rename properties in JSON. That new section of the docs serves no purpose.

How much engineering time and mental energy is being allocated to clean up after AI?

by rkagerer

5/21/2025 at 2:27:41 PM

Glad you appreciated it!

by lloydatkinson

5/21/2025 at 12:05:49 PM

Many here don't seem to get it.

The AI agent/programmer corpo push is not about the capabilities and whether they match human or not. It's about being able to externalize a majority of one's workforce without having a lot of people on permanent payroll.

Think in terms of an infinitely scalable bunch of consultants you can hire and dismiss at your will - they never argue against your "vision", either.

by kookamamie

5/21/2025 at 12:11:26 PM

This was already possible with outsourcing and offshoring. I suppose there's a new market of AI "employees" for small businesses that couldn't manage or legally deal with outsourcing their work already.

by threetonesun

5/21/2025 at 12:23:06 PM

There are a myraid of challenges with outsourcing and offshoring and it's not possible currently for 100% of employees to be outsourced.

If AI can change... well more likely can convince gullible c levels that AI can do those jobs... many jobs will be lost.

See Klarna "https://www.livemint.com/companies/news/klarnas-ai-replaced-..."

https://www.livemint.com/companies/news/klarnas-ai-replaced-...

Just the attempt to use AI and fail then degraded the previous jobs to a gig economy style job.

by ParetoOptimal

5/21/2025 at 12:03:04 PM

reddit may not have the best reputation, but the comments there are on point! So far much better than what has been posted here by HN users on this topic/thread. Anyway, I hope this is good fodder to show the limits (and they are much narrower than hype-driven AI enthusiasts like to pretend) of AI coding and to be more honest with yourself and others about it.

by smartmic

5/21/2025 at 12:08:44 PM

> reddit may not have the best reputation

reddit is a distillation of the entire internet on to one site with wildly variable quality of discussion depending upon which subreddit you are in.

Some are awful, some are great.

by georgemcbay

5/21/2025 at 3:02:55 PM

And yet the low quality of the front page is an indictment of the site as a whole.

It's just that some internet extremophiles have managed to eke out a pleasant existence.

by static_void

5/21/2025 at 11:41:52 AM

> @copilot please read the following Contributor License Agreement(CLA). If you agree with the CLA, please reply with the following information.

haha

by gizzlon

5/21/2025 at 1:55:59 PM

With layoffs driven by a push for more LLM use, this feels like malicious compliance.

by RobKohr

5/21/2025 at 6:15:00 PM

Q: Does Microsoft report its findings or learnings BACK to the open source community?

The @stephentoub MS user suggests this is an experiment (https://github.com/dotnet/runtime/pull/115762#issuecomment-2...).

If this is using open source developers to learn how to build a better AI coding agent, will MS share their conclusions ASAP?

EDIT: And not just MS "marketing" how useful AI tools can be.

by ncr100

5/21/2025 at 9:35:44 PM

Related: GitHub Developer Advocate Demo 2025 - https://www.youtube.com/watch?v=KqWUsKp5tmo&t=403s

The timestamp is the moment where one of these coding agents fails live on stage with what is one of the simplest tasks you could possibly do in React, importing a Modal component and having it get triggered on a button click. Followed by blatant gaslighting and lying by the host - "It stuck to the style and coding standards I wanted it to", when the import doesn't even match the other imports which are path aliases rather than relative imports. Then, the greatest statement ever, "I don't have time to debug, but I am pretty sure it is implemented."

Mind you, it's writing React - a framework that is most definitely over-represented in its training data and from which it has a trillion examples to stea- I mean, "borrow inspiration" from.

by sensanaty

5/21/2025 at 11:55:07 AM

"fix failing tests" does never yield any good results for me either

by octocop

5/21/2025 at 1:45:44 PM

I speculate what is going on is that the agent's context retrieval algorithm is bad, so it does not give the LLM the right context, because today's models should suffice to get the job done.

Does anyone know which model in particular was used in these PRs? They support a variety of models: https://github.blog/ai-and-ml/github-copilot/which-ai-model-...

by esafak

5/21/2025 at 5:00:54 PM

The cynic in me says, that they were probably using an unreleased state of the art version of their best model not available to normal customers and that‘s the best it could do.

by Traubenfuchs

5/22/2025 at 2:50:29 AM

My favorite comment:

> But on the other hand I think it won't create terminators. Just some silly roombas.

I watched a roomba try to find its way back to base the other day. The base was against a wall. The roomba kept running into the wall about a foot away from the base, because it kept insisting on approaching from a specific angle. Finally gave up after about 3 tries.

by mark-r

5/22/2025 at 5:04:12 PM

Maybe funny now but once (if?) it can eventually contribute meaningfully to dotnet/runtime, AI will probably be laughing at us because that is the pinnacle of a massive enterprise project.

by caleblloyd

5/21/2025 at 12:22:45 PM

llms are already very expensive to run on a per query basis. Now it’s being asked to run on massive codebases and attempt to fix issues.

Spending massive amounts of:

- energy to process these queries

- wasting time of mid-level and senior engineers to vibe code with copilot to ensure train and get it right

We are facing a climate change crisis and we continue to burn energy at useless initiatives so executives at big corporation can announce in quarterly shareholder meetings: "wE uSe Ai, wE aRe tHe FuTuRe, lAbOr fOrCe rEdUceD"

by xyst

5/21/2025 at 3:23:23 PM

What do you call a code change created by co-pilot ?

A Bull Request

by bwfan123

5/21/2025 at 10:38:45 PM

Look at this poor dev, an entire workday's worth of hours into babysitting this PR, still having to say "fix whitespace":

https://github.com/dotnet/runtime/pull/115826

by insin

5/22/2025 at 9:43:37 AM

The amount of effort here, talking to a black box, is genuinely depressing. I think I'd last maybe 1 day max if I were forced to work this way. They're instructing it, line-by-line, in an async fashion on what to do with the code. For every comment you leave you have to internalize the AI slop reply that just tells you what you want to hear. It's obvious the person doing the review here knows what they're doing, and it's obvious that it would take them so much less time to implement these changes than what copilot is spewing back at them.

by sensanaty

5/21/2025 at 11:37:40 AM

So, to achieve parity, they should allow humans to also commit code without checking that it at least compiles, right?

Or MS already does that?

by nottorp

5/21/2025 at 12:16:25 PM

the code goes through a PR review process like any other? what are you talking about?

by codyvoda

5/21/2025 at 12:27:46 PM

i don't know about you, but i would never EVER submit a PR that fails to compile. not tests are failing, those happen (specially flaky ci), but not compiling.

that's literally the bare minimum.

by fernandotakai

5/21/2025 at 12:46:58 PM

and you think this beta system that launched like 2 days ago can’t achieve that?

it also opens the PR as its working session. there are a lot of dials, and a lot of redditor-ass opinions from people who don’t use or understand the tech

by codyvoda

5/21/2025 at 2:41:33 PM

what i see is a human telling the "AI" that the code does not compile

what use is a bot if it can't do at least this simple step?

by nottorp

5/21/2025 at 2:50:14 PM

it can do this step. once again, this launched 2 days ago and people are using it for the first time

if you have used it for more than a few hours (or literally just read the docs) and aren’t stupid, you know this is easily solved

you’re giving into mob mentality

by codyvoda

5/21/2025 at 3:46:46 PM

So everyone who doesn't worship "AI" is stupid? :)

by nottorp

5/21/2025 at 4:36:44 PM

is that what I said? if you can’t read documentation and follow basic instructions to get a tool to work you’re stupid. you asked a snarky question like it’s some gotcha. once again, if you actually use the tool and read the docs and can’t figure it out, I think it’s a skill issue

by codyvoda

5/21/2025 at 5:40:32 PM

The gotcha is you seem to consider it normal to push code that doesn't qualify even for "it works on my machine".

by nottorp

5/21/2025 at 2:29:20 PM

so do you consider normal to submit code that you have never compiled? or ran at least once if it's not a compiled language...

by nottorp

5/21/2025 at 12:01:28 PM

Why bot left work when tests are failing? Looks like incomplete implementation. It should work until all tests are green.

by vbezhenar

5/22/2025 at 7:37:31 AM

Microsoft is just really following the "fail fast, fail often " paradigm here. Whether they are learning from their mistakes is another story.

by amai

5/21/2025 at 3:26:28 PM

It's pretty cringe and highlights how inept LLMs being shoehorned into positions where they don't belong wastes more company time than it saves, but aren't all the people interjecting themselves into somebody else's github conversations the ones truly being driven insane here? The devs in the issue aren't blinking torture like everybody thinks they are. It's one thing to link to the issue so we can all point and laugh but when you add yourself to a conversation on somebody else's project and derail a bug report it with your own personal belief systems you're doing the same thing the LLM is supposedly doing.

Anyways I'm disappointed the LLM has yet to discover the optimal strategy, which is to only ever send in PRs that fix minor mis-spellings and improper or "passive" semantics in the README file so you can pad out your resume with all the "experience" you have "working" as a "developer" pm Linux, Mozilla, LLVM, DOOM (bonus points if you can successfully become a "developer" on a project that has not had any official updates since before you born!), Dolphin, MAME, Apache, MySQL, GNOME, KDE, emacs, OpenSSH, random stranger's implementation of conway's game of life he hasn't updated or thought about since he made it over the course of a single afternoon back during the obama administration, etc.

by snickerbockers

5/22/2025 at 2:18:43 AM

If people doing that truly wasn't a consideration before going ahead with this then the people that made the call are just as dumb as if they hadn't. Fwiw I don't think anybody is being driven "insane". More like humiliated and frustrated.

Remember, Microsoft publicized that they would be doing this and wanted to make sure everybody knew.

by BugheadTorpeda6

5/21/2025 at 12:07:39 PM

>I can't help enjoying some good schadenfreude

Fun facts schadenfreude: the emotional experience of pleasure in response to another’s misfortune, according to Encyclopedia Britannica.

Word that's so nasty in meaning that it apparently does not exist except in German language.

by teleforce

5/21/2025 at 12:21:10 PM

> Word that's so nasty in meaning that it apparently does not exist except in German language.

Except it does, we have "skadeglädje" in Swedish.

by yxhuvud

5/21/2025 at 12:10:09 PM

+1 to the Germans for having linguist honesty.

by yubblegum

5/22/2025 at 8:50:57 AM

It just means 'shameful happiness'.

by namaria

5/21/2025 at 8:56:12 PM

злорадство in Russian

by tdiff

5/21/2025 at 12:52:15 PM

So this is our profession now?

by ainiriand

5/21/2025 at 3:09:31 PM

_this_ is the Judgement Day we were warned about--not in the nuclear annihilation sense--but the "AI was then let loose on all our codez and the systems went down" sense

crazy times...

by OzzyB

5/21/2025 at 11:51:51 AM

Again, very « Silicon Valley »-esque, loving it. Thanks Gilfoyle

by rmnclmnt

5/21/2025 at 1:38:24 PM

The Github based solutions are missing the mark because we still need a human in the loop no matter what. Things are nowhere near the point of being able to just let something push to production. And if you still need a human in the loop, it is far more efficient to have them giving feedback in realtime, i.e. in an IDE with CLI access and the ability to run tests, where the dev is still ultimately responsible for making the PR. Management class is salivating at the thought of getting rid of engineers, hence all of this nonsense, but it seems they're still stuck with us for now.

by ramesh31

5/21/2025 at 2:10:29 PM

kinda sad to see y'all brigading an OSS project, regardless of what you think of AI

by whimsicalism

5/21/2025 at 2:41:11 PM

how do you know it wasn't an AI bot account posting all those laugh emojis?

by rchaud

5/21/2025 at 3:05:14 PM

reactions are fine but cluttering the PR with comments? bad form

by whimsicalism

5/22/2025 at 6:02:38 AM

I understand your point however no one forced Microsoft to buy GitHub and use it as a Trojan horse for A.I. And for that matter, they have all the power in the world to put gates around their repo's and the repo's comment threads.

by xigency

5/22/2025 at 11:38:29 AM

they shouldn't have to put gates because third parties can provide valuable contributions, people should not brigade about unrelated politics things they're obsessed with.

your language 'trojan horse' suggests you're too emotionally invested in all of this

by whimsicalism

5/22/2025 at 4:11:27 PM

Sure but only because I'm unemployed, in debt, and behind on child support payments.

by xigency

5/21/2025 at 12:05:21 PM

I find it amusing that people (even here on HN) are expecting a brand new tool (among the most complex ever) to perform adequetely right off the bat. It will require a period of refinement, just as any other tool or process.

by jeswin

5/21/2025 at 12:23:05 PM

Would you buy a car that's been thrown together by a immature production and testing system with demonstrable and significant flaws, and just keep bringing it back to the dealership for refinements and the fixing of defects when you discover them? Assuming it doesn't kill you first?

These tools should be locked away in an R&D environment until sufficiently perfected.

MVP means 'ship with solid, tested basic features', not 'Ship with bugs and fix in production'.

by linker3000

5/21/2025 at 12:10:58 PM

People have grown to expect at least adequate performance from products that cost up to 39 dollars a month (* additional costs) per user. In the past you would have called this a tech demo at best.

by petetnt

5/21/2025 at 12:15:38 PM

this entire thread is very reddit-y

this stuff works. it takes effort and learning. it’s not going to magically solve high-complexity tasks (or even low-complexity ones) without investment. having people use it, learn how it works, and improve the systems is the right approach

a lot of armchair engineers in here

by codyvoda

5/21/2025 at 5:21:04 PM

People, specifically managers and C-levels, are being sold on this crap on the idea that it can replace people now, today as-is. Billions upon billions of dollars are being shoved in indiscriminately, toothbrushes are coming with "AI" slapped on somehow from how insane the hype bubble is.

And here we have many examples from the biggest bullshit pushers in the whole market of their state of the art tool being hilariously useless in trivial cases. These PRs are about as simple as you can get without it being a typo fix, and we're all seeing it actively bullshit and straight up contradict itself many times, just as anyone who's ever used LLMs would tell you happens all the time.

The supposed magic, omnipotent tool that is AI apparently can't even write test scaffolding without a human telling it exactly what it has to do, yet we're supposed to be excited about this crap? If I saw a PR like this at work, I'd be going straight to my manager to have whoever dared push this kind of garbage reprimanded on the spot, except not even interns are this incompetent and annoying to work with.

by sensanaty

5/21/2025 at 5:48:51 PM

it’s not magic. it can make meaningful contributions (if you actually invest in learning the tools + best practices for using them)

you’re taking an anecdote and blowing it out of proportion to fit your preformed opinion. yes, when you start with the tool and do literally no work it makes bad PRs. yes, it’s early and experimental. that doesn’t mean it doesn’t work (I have plenty of anecdotes that it does!)

the truth lies in between and the mob mentality it’s magic or complete bullshit doesn’t help. I’d love to come to a thread like this and actually hear about real experiences from smart people using these kind of tools, but instead we get this bullshit

by codyvoda

5/21/2025 at 7:42:07 PM

> ...(if you actually invest in learning the tools + best practices for using them)

So I keep being told, but after judiciously and really trying my damned hardest to make these tools work for ANYTHING other than the most trivial imaginable problems, it has been an abject failure for me and my colleagues. Below is a FAR from comprehensive list of my attempts at having AI tooling do anything useful for me that isn't the most basic boilerplate (and even then, that gets fucked up plenty often too).

- I have tried all of the editors and related tooling. Cursor, Jetbrains' AI Chat, Jetbrains' Junie, Windsurf, Continue, Cline, Aider. If it has ever been hyped here on HN, I've given it a shot because I'd also like to see what these tools can do.

- I have tried every model I reasonably can. Gemini 2.5 Pro with "Deep Research", Gemini Flash, Claude 3.7 sonnet with extended thinking, GPT o4, GPT 4.5, Grok, That Chinese One That Turned Out To Be Overhyped Too. I'm sure I haven't used the latest and greatest gpt-04.7-blowjobedition-distilled-quant-3.1415, but I'd say I've given a large number of them more than a fair shot.

- I have tried dumb chat modes (which IME still work the best somehow). The APIs rather than the UIs. Agent modes. "Architect" modes. I have given these tools free reign of my CLI to do whatever the fuck they wanted. Web search.

- I have tried giving them the most comprehensive prompts imaginable. The type of prompts that, if you were to just give it to an intern, it'd be a truly miraculous feat of idiocy to fuck it up. I have tried having different AI models generate prompts for other AI models. I have tried compressing my entire codebase with tools like Repomix. I have tried only ever doing a single back-and-forth, as well as extremely deep chat chains hundreds of messages deep. Half the time my lazy "nah that's shit do it again" type of prompts work better than the detailed ones.

- I have tried giving them instructions via JSON, TOML, YAML, Plaintext, Markdown, MDX, HTML, XML. I've tried giving them diagrams, mermaid charts, well commented code, well tested and covered code.

Time after time after time, my experiences are pretty much a 1:1 match to what we're seeing in these PRs we're discussing. Absolute wastes of time and massive failures for anything that involves literally any complexity whatsoever. I have at this point wasted several orders of magnitudes more time trying to get AIs to spit out anything usable than if I had just sat down and done things myself. Yes, they save time for some specific tasks. I love that I can give it a big ass JSON blob and tell it to extract the typedef for me and it saves me 20 minutes of very tedious work (assuming it doesn't just make random shit up from time to time, which happens ~30% of the time still). I love that if there's some unimportant script I need to cook up real quick, I can just ask it and toss it away after I'm done.

However, what I'm pissed beyond all reason about is that despite me NOT being some sort of luddite who's afraid of change or whatever insult gets thrown around, my experiences with these tools keep getting tossed aside, and I mean by people who have a direct effect on my continued employment and lack of starvation. You're doing it yourself. We are literally looking at a prime of example of the problem, from THE BIGGEST PUSHERS of this tool, with many people in this thread and the reddit thread commenting similar things to myself, and it's being thrown to the wayside as an "anecdote getting blown out of proportion".

What the fuck will it take for the AI pushers to finally stop moving the god damn goal posts and trying to spin every single failure presented to us in broad daylight as a "you're le holding it le wrong teehee" type of thing? Do we need to suffer through 20 million more slop PRs that accomplish nothing and STILL REQUIRE HUMAN HANDHOLDING before the sycophants relent a bit?

by sensanaty

5/23/2025 at 12:08:45 AM

just wanted to say this was the most relatable take i have read so far, and i've read a lot. Exact same experiences. And you didnt even touch on the MCP's that enable these things to go wild as well. I think our takes are not being taken seriously for 2 reasons.

First marketing gaslighting from the faangs and hot startups with grifters that managed to raise and need to keep the bullshit windmill going.

Second is that these tools are relatively the best in boilerplate nextjs code that the vibecoders use to make a very simple dashboard and stuff, and they're the noisy minority on twitter.

There is basically zero financial incentive to admit LLM's are pushed dangerously beyond their current limits. I'm still figuring a way to go short this, apart from literally shorting the market.

by deepdarkforest

5/23/2025 at 8:25:42 PM

People see that these things generate code and due to their lack of understanding they automatically assume this is all software engineering is.

Then we have the current batch of YC execs heavily pushing "vibe coded" startups. The sad reality is that this strategy will probably work because all they need is the next incredulous business guy to buy the vibe coded startup. There's so much money in the AI space to the point where I fully believe you can likely make billions of dollars this way through acquisition (see OAI buying Windsurf for billions of dollars, likely to devalue Cursor's also absurd valuation).

I'm not a luddite. I'm a huge fan of companies spending a decent chunk of money on R&D on innovative new projects even when there's a high risk of failure. The current LLM hype is not just an R&D project anymore. This is now being pushed as a full on replacement of human labor when it's clearly not ready. And now we're valuing AI startups at billions of dollars and planning to spend $500B on AI infrastructure so that we can generate more ghibli memes.

At some point this has to stop but I'm afraid by that point the damage will already be done. Even worse, the idiots who led this exercise in massive waste will just hop onto the next hype train.

by sponnath

5/21/2025 at 8:48:23 PM

Literally you're in a site where we are anything but armchair. We have years of experience. You're using your ad hominems wrong. Save them for a football thread and come up with actual arguments next time.

by isaacremuant

5/21/2025 at 1:22:19 PM

Where are the expectations coming from? The major labs continually claim that these models are now PhD level, whatever that even means.

by skepticATX

5/21/2025 at 12:33:12 PM

its more that the AI-first approach can be frustrating for senior devs to have to deal with. This post is an example of that. We're empathising with the code reviewers.

by Quarrelsome

5/21/2025 at 5:49:52 PM

"brand new"? really? we've had these slop bots for years now. what's with all the fanboys pretending it was just released this month or something. it's not brand new. it's old and failing and executives who bet billions are now dismantling their engineering capabilities to try to make something out of those burned billions. claiming it's brand new is just silly.

by asadotzler

5/21/2025 at 12:19:24 PM

As the saying goes, It is difficult to get a man to understand something, when his salary depends on his not understanding it.

AI is aimed at eliminating the jobs of most of HN so it's understandable that HN doesn't want AI to succeed at its goal.

by Lendal

5/22/2025 at 6:37:33 AM

flipping your argument:

It is difficult for ceo/management to understand that the ai tools dont work when their salary depends on them working since they have invested billions into it.

by bwfan123

5/21/2025 at 7:26:56 PM

it feels like the classic solution to this is to have another LLM review the PR and loop until the PR meets a minimum acceptance bar.

by aiinnyc

5/21/2025 at 12:07:19 PM

Needs more bots.

by blitzar

5/21/2025 at 11:58:39 AM

Clumsy but this might be the future -- humans adjusting to AI workflow, not the other way. Much easier (for AI developers).

by markus_zhang

5/21/2025 at 11:46:18 AM

We wanted a future where AIs read boring text and we wrote interesting stuff. Instead, we got…

by wyett

5/23/2025 at 1:46:15 AM

[dead]

by Florencesophia

5/23/2025 at 6:22:21 AM

[dead]

by KeycheinX

5/21/2025 at 12:20:28 PM

[flagged]

by Flamentono2

5/21/2025 at 12:43:07 PM

This isn’t something happening in a vacuum. The people mocking this are people who are cynical about Microsoft forcing AI into the OS, and its marketing teams overhyping Copilot as a replacement for human dev

by danso

5/21/2025 at 2:49:57 PM

How is that dismantling my argument or relating to the point i made?

Just because some people on reddit 'laugh' at these discussions, in one of the PRs a contributor/maintainer actually said that they enabled it on purpose, are not forced and are happy to test it out.

And someone somewhere has and want to test stuff. Whats the issue? Test it out, play around with it, keep it or disable it.

And i think .net as a repository is a very good example. The people on github copilot side are probably very happy about this experiement. For me its also great, it seems like github copilot is still struggling a bit.

And copilot is called copilot because they do not advertice it as replacement.

by Flamentono2

5/21/2025 at 12:43:10 PM

Fair point, if there wouldn't be so many annoying and false promises before.

by Aldipower

5/23/2025 at 8:35:27 AM

Not sure what promises you heard. For me a lot of them came true.

I created images and music which was enjoyable. I use it to add more progress to an indie side project I'm playing around with (i added more functionality to it with ai stuff like claude code and now jules.google than i did myself in the last 3 years).

It helps my juniors to become better in their jobs.

Everything related to sound / talking to a computer is now solved. I talked to gemini yesterday and i interruptted it.

Image segmentation became a solved problem and that was really hard before.

I can continue my list of things AI/ML made things possible in the last few years which were impossible before that.

by Flamentono2

5/21/2025 at 11:42:25 AM

[flagged]

by davedx

5/21/2025 at 11:57:33 AM

This analogy would only work if the electric light required far more work to use than a gas lamp and tended to randomly explode.

And didn’t actually provide light, but everyone on 19th century twitter says that it will one day provide light if you believe hard enough, so you should rip out your gas lamps and install it now.

Like, this is just generation of useless busy-work, as far as I can see; it is clearly worse than useless. The PRs don't even have passing CI!

by rsynnott

5/21/2025 at 12:32:58 PM

Fixing existing bugs left in the codebase by humans will necessarily be harder than writing new code for new features. A bug can be really hairy to untangle, given that even the human engineer got it wrong. So it's not surprising that this proves to be tough for AI.

For refactoring and extending good, working code, AI is much more useful.

We are at a stage where AI should only be used for giving suggestions to a human in the driver's seat with a UI/UX that allows ergonomically guiding the AI, picking from offered alternatives, giving directions on a fairly micro level that is still above editing the code character by character.

They are indeed overpromising and pushing AI beyond its current limits for hype reasons, but this doesn't mean this won't be possible in the future. The progress is real, and I wouldn't bet on it taking a sharp turn and flattening.

by bonoboTP