5/19/2025 at 4:56:06 PM
> Copilot excels at low-to-medium complexity tasks in well-tested codebases, from adding features and fixing bugs to extending tests, refactoring, and improving documentation.

Bounds, bounds, bounds, bounds. The important part for humans seems to be maintaining boundaries for AI. If your well-tested codebase has its tests built through AI, it's probably not going to work.
I think it's somewhat telling that they can't share numbers for how they're using it internally. I want to know that Microsoft, the company famous for dogfooding, is using this day in and day out, with success. There's real stuff in there, and my brain has an insanely hard time separating the trillion dollars of hype from the usefulness.
by taurath
5/19/2025 at 5:54:44 PM
We've been using Copilot coding agent internally at GitHub, and more widely across Microsoft, for nearly three months. That dogfooding has been hugely valuable, yielding tonnes of feedback (and bug bashing!) that has helped us get the agent ready to launch today.

So far, the agent has been used by about 400 GitHub employees in more than 300 of our repositories, and we've merged almost 1,000 pull requests contributed by Copilot.
In the repo where we're building the agent, the agent itself is actually the #5 contributor - so we really are using Copilot coding agent to build Copilot coding agent ;)
(Source: I'm the product lead at GitHub for Copilot coding agent.)
by timrogers
5/19/2025 at 6:42:40 PM
> we've merged almost 1,000 pull requests contributed by Copilot

I'm curious to know how many Copilot PRs were not merged and/or required human takeovers.
by overfeed
5/19/2025 at 7:29:01 PM
Textbook survivorship bias: https://en.wikipedia.org/wiki/Survivorship_bias

Every bullet hole in that plane is one of the 1k PRs contributed by Copilot. The missing dots, and whole missing planes, are unaccounted for. I.e., "AI ruined my morning."
by sethammons
5/19/2025 at 8:17:55 PM
It's not survivorship bias. Survivorship bias would be if you made any conclusions from the 1,000 merged PRs (e.g. "90% of all merged PRs did not get reverted"). But simply stating the number of PRs is not that.

by n2d4
5/19/2025 at 10:15:53 PM
As with all good marketing, the conclusions are omitted and implied, no?

by tines
5/20/2025 at 12:35:10 AM
The implied conclusion ("Copilot made 1000 changes to the codebase") is also not survivorship bias. By that logic, literally every statement would be survivorship bias.
by n2d4
5/20/2025 at 2:10:48 AM
That’s not the implied conclusion, my guy. That’s the statement.

by tines
5/20/2025 at 2:34:13 AM
Then what do you claim the implied conclusion is?

by n2d4
5/20/2025 at 7:20:00 AM
That the number of successful (as in, merged and working) contributions is greater than the number that were not.

by Jenk
5/20/2025 at 6:50:54 AM
Given that GitHub is continuing with the product and marketing it to us, it feels sufficient to count that as a conclusion.

by krainboltgreene
5/19/2025 at 9:16:47 PM
If they measured that too, it would make it harder to justify a MSFT P/E ratio of 29.6.

by MoreQARespect
5/20/2025 at 9:24:56 AM
I'm curious how many were much more than Dependabot changes.

by philipwhiuk
5/20/2025 at 4:10:18 PM
I see the number of PRs as the modern LOC: something that doesn't tell me anything about quality.

by xeromal
5/19/2025 at 8:01:17 PM
"We need to get 1000 PRs merged from Copilot" "But that'll take more time" "Doesn't matter"

by literalAardvark
5/19/2025 at 8:08:24 PM
I do agree that some scepticism is due here, but how can we tell if we're treading into "moving the goal posts" territory?

by worldsayshi
5/19/2025 at 8:47:01 PM
I'd love to know where you think the starting position of the goal posts was.

Everyone who has used AI coding tools interactively or as agents knows they're unpredictably hit or miss. The old, non-agent Copilot has a dashboard that shows org-wide rejection rates for paying customers. I'm curious to learn what the equivalent rejection rate for the agent is for the people who make the thing.
by overfeed
5/20/2025 at 3:00:25 AM
I think the implied promise of the technology, that it is capable of fundamentally changing organizations' relationships with code and software engineering, deserves deep understanding. Companies will be making multi-million-dollar decisions based on their belief in its efficacy.

by taurath
5/20/2025 at 1:23:17 PM
When someone says that the number given is not high enough. I wouldn't consider trying to get an understanding of PR acceptance rates before and after Copilot to be moving the goal posts. Using raw numbers instead of percentages is often done to emphasize a narrative rather than to simply inform (e.g. "Dow plummets x points" rather than "Dow lost 1.5%").

by internet101010
5/20/2025 at 7:14:44 AM
I feel the same about automated dependency updates, but if your tests and verifications are good, these become trivial.

by Cthulhu_
5/20/2025 at 2:24:17 PM
Sometimes there are paradigm shifts in the dependency that get past the current tests you have. So it's always good to read the changelog and plan the update accordingly.

by skydhash
5/20/2025 at 6:15:04 PM
Strong automated tests and verifications seem to be nearly as rare as unicorns, at least if you go by most developers' feelings on this. It seems places don't prioritize it, so you don't see it very often. Some developers are outright dismissive of the practice.

Unfortunately, AI seemingly won't help with that.
by no_wizard
5/20/2025 at 12:34:49 AM
> In the repo where we're building the agent, the agent itself is actually the #5 contributor - so we really are using Copilot coding agent to build Copilot coding agent ;)

That's a fun stat! Are humans in the #1-4 slots? It's hard to know what processes are automated (300 repos sounds like a lot of repos!).
Thank you for sharing the numbers you can. Every time a product launch is announced, I feel like it's a gleeful announcement of a decrease in my usefulness. I've got imposter syndrome enough; perhaps Microsoft might want to speak to the developer community and let us know what they see happening? Right now it's mostly the pink slips that are doing the speaking.
by taurath
5/20/2025 at 3:56:42 PM
Humans are indeed in slots #1-4.

After hearing feedback from the community, we’re planning to share more on the GitHub Blog about how we’re using Copilot coding agent at GitHub. Watch this space!
by timrogers
5/20/2025 at 9:55:49 PM
Wonderful, thank you! It’s just so hard to filter signal vs noise rn.

by taurath
5/19/2025 at 7:01:40 PM
> In the repo where we're building the agent, the agent itself is actually the #5 contributor - so we really are using Copilot coding agent to build Copilot coding agent ;)

Really cool, thanks for sharing! Would you perhaps consider implementing something like these stats that aider keeps on "aider writing itself"? - https://aider.chat/HISTORY.html
by NitpickLawyer
5/19/2025 at 11:44:49 PM
Nice idea! We're going to try to get together a blog post in the next couple of weeks on how we're using Copilot coding agent at GitHub - including to build Copilot coding agent ;) - and having some live stats would be pretty sweet too.

by timrogers
5/20/2025 at 11:43:21 AM
How strong was the push from leadership to use the agents internally?

As part of the dogfooding I could see them really pushing hard to try having agents make and merge PRs, at which point the data is tainted and you don't know if the 1,000 PRs were created or merged to meet demand or because devs genuinely found it useful and accurate.
by _heimdall
5/19/2025 at 10:48:20 PM
> 1,000 pull requests contributed by Copilot

I'd like a breakdown of this phrase: how much human work vs Copilot, and in what form, autocomplete vs agent. As it's not specified, it seems more like marketing trickery than real data.
by mirkodrummer
5/19/2025 at 11:49:29 PM
The "1,000 pull requests contributed by Copilot" datapoint is specifically referring to Copilot coding agent over the past 2.5 months.

Pretty much every developer at GitHub is using Copilot in their day-to-day work, so its influence touches virtually every code change we make ;)
by timrogers
5/20/2025 at 3:03:19 AM
> Pretty much every developer at GitHub is using Copilot in their day-to-day work, so its influence touches virtually every code change we make ;)

Genuine question, but is Copilot use not required at GitHub? I'm not trying to be glib or combative, just asking based on Microsoft's current product trajectory and other big companies (e.g. Shopify) forcing their devs to use AI and scoring their performance reviews based on AI use.
by nozzlegear
5/20/2025 at 10:38:13 AM
Unfortunately, you can't opt out of Copilot in GitHub. Although I did just use it to ask how to remove the sidebar with "Latest Changes" and other non-needed widgets that feel like clutter.

Copilot said: There is currently no official GitHub setting or option to remove or hide the sidebar with "Latest Changes" and similar widgets from your GitHub home page.
I'm using this as an example to show that it is no longer possible to set up a GitHub account to NOT use Copilot, even if it just lurks in the corner of every page waiting to offer a suggestion. Like many A.I. features it's there, whether you want to use it or not, without an option to disable.
So I'm suss of the "pretty much every developer" claim, no offense.
by notsydonia
5/20/2025 at 12:04:23 AM
I'm sorry, but given the company you're working for I have a really hard time believing such bold statements, even more so since the more I use Copilot, the dumber it feels.

by mirkodrummer
5/19/2025 at 6:04:35 PM
So I need to ask: what is the overall goal of your project? What will you do in, say, 5 years from now?

by binarymax
5/19/2025 at 6:12:33 PM
What I'm most excited about is allowing developers to spend more of their time working on the work they enjoy, and less of their time working on mundane, boring or annoying tasks.

Most developers don't love writing tests, or updating documentation, or working on tricky dependency updates - and I really think we're heading to a world where AI can take that load off and free me up to work on the most interesting and complex problems.
by timrogers
5/19/2025 at 11:55:58 PM
> Most developers don't love writing tests, or updating documentation, or working on tricky dependency updates - and I really think we're heading to a world where AI can take the load of that and free me up to work on the most interesting and complex problems.

Where does the "most" come from? There's a certain sense of satisfaction in knowing I've tested a piece of code per my experience in the domain, coupled with knowledge of where we'll likely be in six months. The same can be said for documentation - hell, on some of the projects I've worked on we've had entire teams dedicated to it, and on a complicated project where you're integrating software from multiple vendors the costs of getting it wrong can be astronomical. I'm sorry you feel this way.
by tjpnz
5/21/2025 at 1:41:55 AM
> There's a certain sense of satisfaction in knowing I've tested a piece of code per my experience in the domain coupled with knowledge of where we'll likely be in six months.
One of the other important points about writing unit tests isn't just to confirm the implementation but to improve upon it through the process of writing tests and discovering additional requirements and edge cases etc. (TDD and all that).

I suppose it's possible at some point an AI could be complex enough to try out additional edge cases or check against a design document or something and do those parts as well... but idk, it's still after-the-fact testing instead of at design time, so it's less valuable imo...
by andrekandre
5/20/2025 at 6:38:04 AM
He does not feel that way, he's just salespitching for Microsoft here

by GenshoTikamura
5/20/2025 at 6:40:34 AM
But aren't writing tests and updating documentation also the areas where automated quality control is the hardest? Existing high-quality tests can work as guardrails for writing business logic, but what guardrails could AI use to evaluate if its generated docs and tests are any good?

I would not be surprised if things end up the other way around – humans doing the boring and annoying tasks that are too hard for AI, and AI doing the fun easy stuff ;-)
by cuu508
5/19/2025 at 11:36:32 PM
> allowing developers to spend more of their time working on the work they enjoy, and less of their time working on mundane, boring or annoying tasks.

I get paid for the mundane, boring, annoying tasks, and I really like getting paid.
by leptons
5/19/2025 at 11:45:52 PM
But would you rather get paid to spend your time doing the interesting and enjoyable work, or the mundane and boring stuff? ;) My hope is that agents like Copilot can help us burn down the tedious stuff and make more time for the big value adds.

by timrogers
5/20/2025 at 12:07:42 AM
Though I do not doubt your intentions to do what you think will make developers' lives better, can you be certain that your bosses, and their bosses, have our best interests in mind as well? I think it would be pretty naive to believe that your average CEO wouldn't absolutely love not to have to pay developers at all.

by ai-skeptic
5/20/2025 at 3:09:40 AM
But this way the developers can spend all their time working on the truly interesting problems, like how to file for unemployment.

by nautilus12
5/20/2025 at 12:11:47 PM
If that were possible it would happen anyway. You can’t hold back progress.

It’s very, very far from possible today.
by slashdev
5/20/2025 at 5:22:16 AM
The profile indicates he's a product manager, not a developer.

by guappa
5/20/2025 at 5:52:06 AM
Not everyone gets to do the fun stuff. That's for people higher up in the chain, with more connections, or something else. I like my paycheck, and you're supposing that AI isn't going to take that away, and that we'll get to live in a world where we all work on "fun stuff". That is a real pie-in-the-sky dream you have, and it simply isn't how the world works. Back in the real world, tech jobs are already scarce and there are a lot of people who would be happy to do the boring mundane stuff so they can feed their family.

by leptons
5/20/2025 at 3:54:05 AM
But working on interesting things is mentally taxing while the tedious tasks aren't. I can't always work at full bore, so having some tedium can be a relief.

by s-lambert
5/20/2025 at 3:08:30 AM
I'd just rather get paid

by nautilus12
5/20/2025 at 3:06:51 PM
The only reason to listen to this would be to be naive about how capitalism works. Come on. The goal here is for it to be able to do everything, taking 100% of the work.
2nd best is to do the hard, big value adds so companies can hire cheap labor for the boring shit
3rd best is to only do the mundane and boring stuff
by AstroBen
5/19/2025 at 7:21:16 PM
What about developers who do enjoy writing, for example, high quality documentation? Do you expect that the status quo will be that most of the documentation will be AI slop and AI itself will just bruteforce itself through the issues? How close are we to the point where the AI could handle "tricky dependency updates", but not be able to handle the "most interesting and complex problems"? Who writes the tests that are required for the "well tested" codebases for GitHub Copilot coding agent to work properly?

What is the job for the developer now? Writing tickets and reviewing low quality PRs? Isn't that the most boring and mundane job in the world?
by petetnt
5/20/2025 at 7:18:41 AM
I'd argue the only way to ensure that is to make sure developers read high quality documentation - and report issues if it's not high quality.

I expect though that most people don't read in that much detail, and AI generated stuff will be 80-90% "good enough", at least the same if not better than someone who doesn't actually like writing documentation.
> What is the job for the developer now? Writing tickets and reviewing low quality PRs? Isn't that the most boring and mundane job in the world?
Isn't that already the case for a lot of software development? If it's boring and mundane, an AI can do it too so you can focus on more difficult or higher level issues.
Of course, the danger is that, just like with other automated PRs like dependency updates, people trust the systems and become flippant about it.
by Cthulhu_
5/20/2025 at 1:37:51 PM
I think just having devs attempt to feed an agent OpenAPI docs as context to create API calls would do enough. Simply adding tags and useful descriptions to endpoints makes a world of difference in the accuracy of the agent's output. It means getting 95% accuracy with the cheapest models vs. 75% accuracy with the most expensive models.

by internet101010
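The kind of endpoint annotation described above can be sketched as a minimal OpenAPI fragment; the endpoint, tags, and descriptions here are hypothetical, invented purely for illustration:

```yaml
paths:
  /orders/{orderId}:
    get:
      operationId: getOrderById
      tags: [orders]
      summary: Fetch a single order by its ID
      description: >
        Returns the full order record, including line items and current
        shipping status. Responds with 404 if no order has this ID.
      parameters:
        - name: orderId
          in: path
          required: true
          description: Unique order identifier (UUID v4)
          schema:
            type: string
            format: uuid
```

The summary, description, and per-parameter docs are exactly the fields an agent gets as context; stripped of them, it has only the path and schema types to guess from.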
5/19/2025 at 7:57:51 PM
I find your comment "AI slop" in reference to technical documentation strange. It isn't a choice between finely crafted prose and banal text. It's documentation that exists versus documentation that doesn't exist. Or documentation that is hopelessly out of date. In my experience LLMs do a wonderful job of translating from code to documentation. They even do a good job inferring the reasons for design decisions. I'm all in on LLM-generated technical documentation. If I want well written prose I'll read literature.

by doug_durham
5/19/2025 at 8:12:24 PM
Documentation is not just translating code to text - I don't doubt that LLMs are wonderful at that: that's what they understand. They don't understand users though, and that's what separates a great documentation writer from someone who documents.

by petetnt
5/19/2025 at 8:16:04 PM
Great technical documentation rarely gets written. You can tell the LLM the audience they are targeting and it will do a reasonable job. I truly appreciate technical writers, and hold great ones in special esteem. We live in a world where the market doesn't value this.

by doug_durham
5/19/2025 at 9:30:10 PM
The market does value good documentation. Anything critical and commonly used is pretty well documented (Linux, databases, software like Adobe's, ...). You can see how many books/articles have been written about those systems.

by skydhash
5/19/2025 at 10:52:41 PM
We’re not talking about AI writing books about the systems, though. We’re talking about going from an undocumented codebase to a decently documented one, or one with 50% coverage going to 100%.

Those orgs that value high-quality documentation won’t have undocumented codebases to begin with.
And let’s face it, like writing code, writing docs does have a lot of repetitive, boring, boilerplate work, which I bet is exactly why it doesn’t get done. If an LLM is filling out your API schema docs, then you get to spend more time on the stuff that’s actually interesting.
by sourdoughness
5/19/2025 at 11:03:01 PM
A much better option is to use docstrings[0] and a tool like doxygen to extract an API reference. Domain explanations and architecture can be compiled later from design and feature docs.

A good example of the kind of result is something like the Laravel documentation[1] and its associated API reference[2]. I don't believe AI can help with this.
[0]: https://en.wikipedia.org/wiki/Docstring
by skydhash
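The docstring-plus-extractor workflow mentioned above can be sketched in Python; the function and its fields are hypothetical, and tools like pydoc or Sphinx autodoc read these strings much the way doxygen reads comment blocks in C++:

```python
import json

def parse_config(path):
    """Load a JSON configuration file and return it as a dict.

    Args:
        path: Filesystem path to the JSON file to read.

    Returns:
        The parsed configuration as a dict.
    """
    with open(path) as f:
        return json.load(f)

# The same text a reader sees via help(parse_config) is what a
# doc generator extracts into the API reference page.
```

Because the reference is generated from the source, it can't silently drift out of sync the way a hand-maintained page can.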
5/20/2025 at 7:21:37 AM
> Anything critical and commonly used is pretty well documented

I'd argue the vast majority of software development is neither critical nor commonly used. Anecdotal, but I've written documentation and never got any feedback on it (whether it's good or bad), which implies it's not read or the quality doesn't matter.
by Cthulhu_
5/20/2025 at 12:23:51 PM
Sometimes the code, if written cleanly, is trivial enough for anyone with a foundation in the domain, so it can act as the documentation. And sometimes only the usage is important, not the implementation (manual pages). And at other times the documentation is the standards (file formats and communication protocols). So I can get why no one took the effort to compile a documentation manual.

by skydhash
5/20/2025 at 2:06:57 AM
Well documented meaning high quality, or well documented meaning high coverage?

by wonger_
5/20/2025 at 12:23:52 AM
> If I want well written prose I'll read literature.

Actually, if you want well-written prose you'll read AI slop there too. I saw people comparing their "vibe writing" workflows for their "books" on here the other day. Nothing is to be spared, apparently.
by ayrtondesozzla
5/20/2025 at 7:22:26 AM
A lot (I'd argue most) of literature written by humans is garbage.

by Cthulhu_
5/19/2025 at 9:42:18 PM
> Most developers don't love writing tests, or updating documentation, or working on tricky dependency updates

So they won’t like working at their job?
by bamboozled
5/19/2025 at 9:47:53 PM
You know exactly what they meant, and you know they’re correct.

by tokioyoyo
5/19/2025 at 10:00:09 PM
I like updating documentation and feel that it's fairly important to do it myself so I actually understand what the code / services do.
I know our careers are changing dramatically, or going away (I'm working on a replacement for myself), but I just like listening to all the "what we're doing is really helping you..."
by bamboozled
5/19/2025 at 10:59:11 PM
I'd interpret the original statement as "tests which don't matter" and "documentation nobody will ever read" - the ones which only exist because someone said they _have_ to, and nobody's ever going to check them as long as they exist (like a README.md in one of my main work projects I came back to after temporarily being reassigned to another project - previously it only had setup instructions; now it's filled with irrelevant slop, never to be read, like "here is a list of the dependencies we use and a summary of each of their descriptions!").

Doing either of them _well_ - the way you do when you actually care about them and they actually matter - is still so far beyond LLMs. Good documentation and good tests are such a differentiator.
by insin
5/20/2025 at 11:57:24 AM
If we're talking about low quality tests and documentation that exists only to check a box, the easier answer is to remove the box and acknowledge that the low quality stuff just isn't needed at all.

by _heimdall
5/20/2025 at 2:34:56 PM
I’ve never seen a test that doesn’t matter that shouldn’t be slotted for removal (if it gets written at all), or documentation that is never read. If people can read code to understand systems, they will be grateful for good documentation.

by skydhash
5/20/2025 at 6:34:54 AM
What will you be most excited about when the most interesting and complex problems are out of the Overton window and deemed mundane, boring or annoying as well, or turn out to be intractable for your abilities?

by GenshoTikamura
5/19/2025 at 11:11:25 PM
Do you think you're putting yourself or your coworkers out of work?

If/when will this take over your job?
by echelon
5/20/2025 at 12:07:48 AM
The thought here is probably “well … I’ll obviously never get replaced”. This is very sad indeed.

by metaltyphoon
5/20/2025 at 12:00:21 PM
I'm honestly surprised that Microsoft (and other similarly sized LLM companies) have convinced or coerced literally hundreds of thousands of employees to build their own replacement.

If we're expected to even partially believe the marketing, LLM coding agents are useful today at junior level developer tasks and improving quickly enough that senior tasks will be doable soon too. How do you convince so many junior and senior level devs to build that?
by _heimdall
5/20/2025 at 1:54:12 PM
When the options are "do what we tell you and get paid" vs getting laid off in the current climate, the choice isn't really a choice.

by binarymax
5/20/2025 at 2:21:44 PM
That threat doesn't scale. I do get that many haven't put themselves in a position to stand behind their views or principles, but if they did, the threat, or the company, would crumble.

by _heimdall
5/19/2025 at 6:34:43 PM
Thanks for the response… do you see a future where engineers are just prompting all the time? Do you see a timeline in which today's programming languages are “low level” and rarely coded by hand?

by binarymax
5/19/2025 at 6:13:41 PM
That's a completely nonsensical question given how quickly things are evolving. No one has a five year project timeline.

by ilaksh
5/19/2025 at 6:31:10 PM
Absolutely the wrong take. We MUST think about what might happen in several years. Anyone who says we shouldn’t is not thinking about this technology correctly. I work on AI tech. I think about these things. If the teams at Microsoft or GitHub are not, then we should be pushing them to do so.

by binarymax
5/19/2025 at 7:13:35 PM
He asked that in the context of an actual specific project. It did not make sense the way he asked it. And it's the executives' job to plan that out five years down the line... although I guarantee you none of them are trying to predict that far.

by ilaksh
5/19/2025 at 9:25:19 PM
> In the repo where we're building the agent, the agent itself is actually the #5 contributor

How does this align with Microsoft's AI safety principles? What controls are in place to prevent Copilot from deciding that it could be more effective with fewer limitations?
by dsl
5/19/2025 at 11:44:04 PM
Copilot only does work that has been assigned to it by a developer, and all the code that the agent writes has to go through a pull request before it can be merged. In fact, Copilot has no write access to GitHub at all, except to push to its own branch.

That ensures that all of Copilot's code goes through our normal review process, which requires a review from an independent human.
by timrogers
5/20/2025 at 1:53:06 AM
Tim, are you or any of your coworkers worried this will take your jobs?

by echelon
5/20/2025 at 6:07:12 AM
What if Tim was the coding agent?

by gabaix
5/20/2025 at 6:52:25 AM
Human-generated corporate-speak is indistinguishable from the AI kind at this point.

by GenshoTikamura
5/20/2025 at 4:03:43 PM
Terminal In Mind

by addandsubtract
5/20/2025 at 1:50:43 PM
HAHA. Very smart. The more you review the Copilot Agent's PRs, the better it gets at submitting new PRs... (basics of supervised machine learning, right?)

by largolagrande
5/19/2025 at 9:41:13 PM
Haha

by bamboozled
5/20/2025 at 12:04:39 PM
Yeah, Product Managers always swear by their products.

by meindnoch
5/19/2025 at 9:18:48 PM
What's the motivation for restricting to Pro+ if billing is via premium requests? I have a (free, via open source work) Pro subscription, which I occasionally use. I would have been interested in trying out the coding agent, but how do I know if it's worth $40 for me without trying it ;)

by KenoFischer
5/19/2025 at 11:42:02 PM
Great question!

We started with Pro+ and Enterprise first because of the higher number of premium requests included with the monthly subscription.
Whilst we've seen great results within GitHub, we know that Copilot won't get it right every time, and a higher allowance of free usage means that a user can play around and experiment, rather than running out of credits quickly and getting discouraged.
We do expect to open this up to Pro and Business subscribers - and we're also looking at how we can extend access to open source maintainers like yourself.
by timrogers
5/19/2025 at 6:13:29 PM
Question you may have a very informed perspective on: where are we wrt the agent surveying open issues (say, via JIRA), evaluating which ones it would be most effective at handling, and taking them on, ideally with some check-in for confirmation?
Or, contrariwise, from having product management agents which do track and assign work?
by aaroninsf
5/19/2025 at 6:30:12 PM
Check out this idea: https://fairwitness.bot (https://news.ycombinator.com/item?id=44030394).

The entire website was created by Claude Sonnet through Windsurf Cascade, but with the “Fair Witness” prompt embedded in the global rules.
If you regularly guide the LLM to “consult a user experience designer”, “adopt the multiple perspectives of a marketing agency”, etc., it will make rather decent suggestions.
I’ve been having pretty good success with this approach, granted mostly at the scale of starting the process with “build me a small educational website to convey this concept”.
by 9wzYQbTYsAIc
5/19/2025 at 7:27:20 PM
Tell Claude the site is down!by aegypti
5/20/2025 at 1:15:07 PM
Is Copilot _enforced_ as the only option for an AI coding agent? Or can devs pick and choose whatever tool they prefer?

I'm interested in the [vague] ratio of {internallyDevelopedTool} vs alternatives - essentially the "preference" score for internal tools (accounting for the natural bias towards one's own agent for testing/QA/data purposes). Any data, however vague, would be great.
(and if anybody has similar data for _any_ company developing their own agent, please shout out).
by cwsx
5/20/2025 at 3:12:04 AM
Why don't you focus on automating your CEO's job, a comparatively easy task, rather than automating your fellow engineers' jobs?

by nautilus12
5/20/2025 at 3:26:09 AM
Spoken by someone who's apparently never run a real business

by tjwebbnorfolk
5/20/2025 at 3:07:05 AM
Welp... GitHub was a good product while it lasted.

by nautilus12
5/20/2025 at 7:30:16 AM
GitHub and Copilot are separate products; nothing mandates that you use it.

by Cthulhu_
5/20/2025 at 8:51:05 AM
It's nearly impossible, though, to escape the flood of Copilot buttons creeping into every corner of GitHub (and other Microsoft products like VSCode). It looks like Microsoft aims for deep integration, not separation.

by flohofwoe
5/20/2025 at 12:03:19 PM
Integration is the bread and butter of Microsoft's business strategy.

by netdevphoenix
5/20/2025 at 10:42:33 AM
Incorrect. It's not mandated that you actually use it to write or correct code, but it's impossible to remove it, so you need to either get used to blocking out its incessant suggestions and notifications or stop using GitHub.

Similarly, the newest MS Word has Copilot that you "don't have to use", but you still have to put up with the "what would you like to write today?" prompt at the start of every document, or worse, "You look like you're trying to write a...formal letter...here are some suggestions."
by notsydonia
5/20/2025 at 12:04:32 PM
Can Copilot be disabled entirely in a GitHub repo or organization? I may very well have missed those settings, but if nothing else they are well hidden.

by _heimdall
5/20/2025 at 9:39:04 AM
400 GitHub employees are using GitHub Copilot day in, day out, and it comes out as the #5 contributor? I wouldn't call that a success. If it were any use, I would expect that even if a developer wrote just 10% of their code using it, it would be the #1 contributor in every project.

by miroljub
5/20/2025 at 3:59:28 PM
The #5 contributor thing is a stat from a single repository, where we’re building Copilot coding agent.

by timrogers
5/21/2025 at 12:50:08 PM
If everyone is using Copilot, those are pretty low metrics.

by miroljub
5/20/2025 at 12:21:43 PM
Re: 300 of your repositories... so it sounds like y'all don't use a monorepo architecture. I've been wondering if that would be a blocker to using these agents most effectively. Expect some extra momentum to swing back to the multirepo approach accordingly.

by 09thn34v
5/19/2025 at 6:12:17 PM
What model does it use? gpt-4.1? Or can it use o3 sometimes? Or the new Codex model?

by ilaksh
5/19/2025 at 11:50:10 PM
At the moment, we're using Claude 3.7 Sonnet, but we're keeping our options open to change the model down the line, and potentially even to introduce a model picker like we have for Copilot Chat and Agent Mode.

by timrogers
5/20/2025 at 2:12:06 PM
Using different models for different tasks is extremely useful, and I couldn't imagine going back to using just one model for everything. Sometimes a model will struggle for one reason or another, and swapping it out for another model mid-conversation in LibreChat will get me better results.

by internet101010
5/20/2025 at 6:06:11 PM
TBF, you are more than biased to reach this conclusion, so I definitely take your opinion with a whole bottle of salt.

Without data, a comprehensive study, and peer review, it's a hell no. Would GitHub be willing to submit to academic scrutiny to prove it?
by Xunjin
5/19/2025 at 9:25:06 PM
When I repeated to other tech people from about 2012 to 2020 that the technological singularity was very close, no one believed me. Coding is just the easiest to automate away into near oblivion. And too many non-technical people drank the Flavor Aid for the fallacy that it can be "abolished" completely soon. It will gradually come for all sorts of knowledge-work specialists, including electrical and mechanical engineers, and probably doctors too. And, of course, office work too. Some iota of specialists will remain to tune the bots, and some will remain in the field where expertise is absolutely required, but what were options for potential upward mobility into the middle class are being destroyed and replaced with nothing. There won't be "retraining" or handwaving about other opportunities for the "basket of labor", but competition among many uniquely, far overqualified people for ever-dwindling opportunities.
It is difficult to get a man to understand something when his salary depends upon his not understanding it. - Upton Sinclair
by burnt-resistor
5/19/2025 at 9:42:51 PM
I don't think it was unreasonable to be very skeptical at the time. We generally believed that automation would get rid of repetitive work that didn't require a lot of thought. And in many ways programming was seen as almost at the top of the heap: intellectually demanding and requiring high levels of precision and rigor.
Who would've thought (except you) that this would be one of the things that AI would be especially suited for? I don't know what this progression means in the long run. Will good engineers just become 1000x more productive as they manage X number of agents building increasingly complex code (with other agents constantly testing, debugging, refactoring and documenting it), or will we just move to a world with far fewer engineers because there is only a need for so much code?
by kenjackson
5/19/2025 at 10:19:00 PM
It's interesting that even people initially skeptical now think they are on the "chopping block", so to speak. I'm seeing it all over the internet, along with the slow realization that what was supposed to be the "top of the heap" is actually at the bottom - not because of the difficulty of coding, but because the AI labs themselves are domain experts in software and therefore have the knowledge and data to tackle it as a problem first. I also think to a degree they "smell blood", and fear, more so than greed, is the best marketing tool. Many invested a good chunk of time in this career, and it will result in a lot of negative outcomes. It's a warning to other intellectual careers, that's for sure - and you will start seeing resistance to domain-knowledge sharing from more "professionalized" careers.
My view is in between yours: a bit of column A and B, in the sense that both outcomes will play out to an extent. There will be fewer engineers, but not by the factor of productivity (Jevons paradox will play out but eventually tap out); there will be even more software, especially at the low end; and the engineers who remain will be expected to be smarter and work harder for the same or less pay, grateful they got a job at all. There will be more "precision and rigor" and more keeping up required of workers, but less reward for the workers who deliver it. In a capitalist economy it won't be seen as a profession to aspire to by most people anymore.
Given that most people don't live to work, and use their career to finance and pursue other life meanings, it won't be viable for most people long term, especially when other careers give more "bang for buck" w.r.t. the effort put into them. The uncertainty in the SWE career that most people I know are feeling right now means that, on the balance of risk/reward, I recommend newcomers go down another career path, especially juniors who have a longer runway. To be transparent, I want to be wrong, but the risk of this is getting higher every day.
i.e. AI is a dream for the capital class, and IMO potentially disastrous for social mobility long term.
by throw1235435
5/19/2025 at 11:37:10 PM
I don't think I'm on the chopping block because of AI capabilities, but because of executive shortsightedness. Kinda like how moving to the cloud eliminated sysadmins but created DevOps, yet in many ways the solution is ill-suited to the problem.
Even in the early days of LLM-assisted coding tools, I already knew there would be executives who would say: let's replace our pool of expensive engineers with a less expensive license. But the only factor that led to this decision is cost comparison. Not quality, not maintenance load, and very much not customer satisfaction.
by skydhash
5/20/2025 at 1:20:36 PM
That said, management generally never cared about quality and maintenance.
by dboreham
5/19/2025 at 10:09:22 PM
> I don't think it was unreasonable to be very skeptical at the time.
Well, that's back-rationalization. I saw advances like meta sentiment analysis conducted on medical papers in the '00s. Deep learning was clearly just the beginning. [0]
> Who would've thought (except you)
You're othering me, which is rude, and you're speaking as though you speak for an entire group of people. Seems kind of arrogant.
0. (2014) https://www.ted.com/talks/jeremy_howard_the_wonderful_and_te...
by burnt-resistor
5/20/2025 at 6:15:28 AM
Do you have any textual evidence from this 8-year stretch of your life where you see yourself as having been perpetually correct? Do you mean that you were very specifically predicting flexible natural-language chatbots, or vaguely alluding to some sort of technological singularity?
We absolutely have not reached anything resembling anyone's definition of a singularity, so you are very much still not proven correct in this. Unless there are weaker definitions of that than I realised?
I think you'll be proven wrong about the economy too, but only time will tell there.
by ayrtondesozzla
5/20/2025 at 6:57:34 AM
history/1950/people-in-swimming-pool-drinking-wine-served-by-a-robot.png
by GenshoTikamura
5/20/2025 at 1:00:04 AM
> In the repo where we're building the agent, the agent itself is actually the #5 contributor - so we really are using Copilot coding agent to build Copilot coding agent ;)
Ah yes, the takeoff.
by latentsea
5/19/2025 at 8:07:21 PM
From talking to colleagues at Microsoft, it's a very management-driven push, not developer-driven. A friend on an Azure team had a team member who was nearly put on a PIP because they refused to install the internal AI coding assistant. Every manager has "number of developers using AI" as an OKR, but anecdotally most devs are installing the AI assistant and not using it, or using it very occasionally. Allegedly it's pretty terrible at C# and PowerShell, which limits its usefulness at MS.
by mjr00
5/19/2025 at 8:54:21 PM
[flagged]
by shepherdjerred
5/19/2025 at 9:20:47 PM
That's exactly what senior executives who aren't coding are saying everywhere.
Meanwhile, engineers are using it for code completion and as a Google search alternative.
I don't see much difference here at all, the only habit to change is learning to trust an AI solution as much as a Stack Overflow answer. Though the benefit of SO is each comment is timestamped and there are alternative takes, corrections, caveats in the comments.
by antihipocrat
5/19/2025 at 11:10:01 PM
> I don't see much difference here at all, the only habit to change is learning to trust an AI solution as much as a Stack Overflow answer. Though the benefit of SO is each comment is timestamped and there are alternative takes, corrections, caveats in the comments.
That's a pretty big benefit, considering the feedback was by people presumably with relevant expertise/experience to contribute (in the pre-LLM before-time).
by AdieuToLogic
5/19/2025 at 11:40:45 PM
The comments have the same value as the answers themselves. Kinda like annotations and errata in a book. It's like seeing "See $algorithm in The Art of Programming V1" in a comment before some complex code.
by skydhash
5/20/2025 at 8:14:51 AM
> Meanwhile, engineers are using it for code completion and as a Google search alternative.
Yep, that's the usefulness right now.
by k3liutZu
5/20/2025 at 1:25:38 PM
In my experience it's far less useful than simple autocomplete. It makes things up for even small amounts of code, which I have to pause my flow to correct. Also, without actually googling you don't get any context or understanding of what it's writing.
by shakes_mcjunkie
5/20/2025 at 7:08:57 PM
I found it to be more distracting recently. Suggestions that are too long or written in a different style make me lose my own thread of logic that I'm trying to weave. I've had to switch it off for periods to maintain flow.
by antihipocrat
5/19/2025 at 9:02:47 PM
What does this have to do with my comment? Did you mean to reply to someone else?
I don't understand what this has to do with AI adoption at MS (and Google/AWS, while we're at it) being management-driven.
by mjr00
5/20/2025 at 5:13:10 AM
There's a large group of people that claim that AI tools are no good, and I can't tell if they're in some niche where they truly aren't, they don't care to put any effort into learning the tools, or they're simply in denial.
by adamsb6
5/20/2025 at 7:06:31 AM
Or simply unwilling to cut their perfectly good legs off and attach those overhyped prostheses that make people so fast and furious at running on the spot
by GenshoTikamura
5/20/2025 at 2:21:53 PM
Likely a Five Worlds scenario.
by ukuina
5/21/2025 at 6:02:00 PM
Man, I miss Joel's blog. So much developer wisdom that is still relevant even if aged now.
by Remnant44
5/20/2025 at 7:03:33 AM
Some of each
by Tteriffic
5/19/2025 at 9:40:27 PM
It's just tooling. Costs nothing to wait for it to be better. It's not like you're going to miss out on AGI. The cost of actually testing every slop code generator is non-trivial.
by evantbyrne
5/19/2025 at 9:59:26 PM
AIs are boring
by rsoto2
5/20/2025 at 2:01:10 PM
A better Stack Exchange search isn't that revolutionary
by karn97
5/20/2025 at 10:57:48 AM
> I want to know that Microsoft, the company famous for dog-fooding is using this day in and day out, with success
Have they tried dogfooding their dogshit little tool called Teams in the last few years? Cause if that's what their "famed" dogfooding gets us, I'm terrified to see what lies in wait with Copilot.
by sensanaty
5/19/2025 at 5:06:06 PM
I feel like I saw a quote recently that said 20-30% of MS code is generated in some way. [0]
In any case, I think this is the best use case for AI in programming - as a force multiplier for the developer. It's to the benefit of both AI and humanity for AI to avoid diminishing the creativity, agency and critical thinking skills of its human operators. AI should be task-oriented, but high-level decision-making and planning should always be a human task.
So I think our use of AI for programming should remain heavily human-driven for the long term. Ultimately, its use should involve enriching humans’ capabilities over churning out features for profit, though there are obvious limits to that.
[0] https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...
by twodave
5/19/2025 at 5:56:03 PM
How much was previously generated by IntelliSense and other codegen tools before AI? What is the delta?
by greatwhitenorth
5/19/2025 at 6:40:33 PM
> I feel like I saw a quote recently that said 20-30% of MS code is generated in some way. [0]
Similar to Google. MS now requires devs to use AI.
by DeepYogurt
5/20/2025 at 3:43:17 AM
I know a lot of devs at MSFT; none of them are required to use AI.
by spooneybarger
5/20/2025 at 12:08:37 PM
The GitHub org is required to for sure, with a very similar mandate to the one Shopify's CEO put out.
LLM use is now part of the annual review process. It's self-reported, if I'm not mistaken, but at least at Microsoft they would have plenty of data to know how often you use the tools.
by _heimdall
5/20/2025 at 11:15:04 AM
From reading around on Hacker News and Reddit, it seems like half of commentators say what you say, and the other half say "I work at Microsoft/know someone who works at Microsoft, and our/their manager just said we have to use AI"; someone mentioned being put on a PIP for not "leveraging AI" as well.
I guess maybe different teams have different requirements/workflows?
by diggan
5/20/2025 at 4:12:41 AM
So demanding all employees use it... results in less than 30% compliance. That does tell me a lot.
by beefnugs
5/19/2025 at 5:25:34 PM
How much of that is protobuf stubs and other forms of banal autogenerated code?
by tmpz22
5/19/2025 at 5:29:46 PM
Updated my comment to include the link. As much as 30% specifically generated by AI.
by twodave
5/19/2025 at 6:06:59 PM
The 2nd paragraph contradicts the title.
The actual quote by Satya says, "written by software".
by OnionBlender
5/19/2025 at 7:37:38 PM
Sure, but then he says in his next sentence that he expects 50% by AI in the next year. He's clearly using the terms interchangeably.
by twodave
5/19/2025 at 5:43:29 PM
I would still wager that most of the 30% is some boilerplate stuff. Which is ok. But it sounds less impressive with that caveat.
by shafyy
5/20/2025 at 6:57:57 AM
That quote was completely misrepresented.
by rcarmo
5/19/2025 at 6:16:39 PM
You might want to study the history of technology: how rapidly compute efficiency has increased, as well as how quickly the models are improving.
In this context, assuming that humans will still be able to do high-level planning anywhere near as well as an AI, say 3-5 years out, is almost ludicrous.
by ilaksh
5/19/2025 at 6:35:41 PM
Reality check time for you: people were saying this exact thing 3 years ago. You cannot extrapolate like that.
by _se
5/20/2025 at 9:50:34 AM
"I want to know that Microsoft, the company famous for dog-fooding is using this day in and day out, with success."
They just cut down their workforce, letting some of their AI people go. So, I assume there isn't that much success.
by k__
5/19/2025 at 11:21:48 PM
They have released numbers, but I can't say whether they are for this specific product or something else. They are apparently having AI generate "30%" of their code.
https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-3...
by lacoolj
5/20/2025 at 6:58:34 AM
That article is wrong; that is not what was said.
by rcarmo
5/20/2025 at 12:13:22 PM
What was said?
by _heimdall
5/20/2025 at 2:31:51 PM
The quote was actually "in some of our projects". Journalists completely misunderstood it.
by rcarmo
5/19/2025 at 10:08:17 PM
> Microsoft, the company famous for dog-fooding
This was true up until around 15 years ago. Hasn't been the case since.
by mrcsharp
5/19/2025 at 7:47:22 PM
That's great, our leadership is heavily pushing AI-generated tests! Lol
by ctkhn
5/20/2025 at 7:12:59 AM
Whatever the true stats for mistakes or blunders are now, remember that this is the worst it's ever going to be. And there is no clear ceiling in sight that would prevent it from quickly getting better and better, especially given the current levels of investment.
by codebolt
5/20/2025 at 12:13:05 PM
That sounds reasonable enough, but the pace and end result are by no means guaranteed.
We have invested plenty of money and time into nuclear fusion with little progress. The list of key achievements from CERN[1] is also meager in comparison to the investment put in, especially if you consider their ultimate goal to be applying research to more than just theory.
by _heimdall