alt.hn

2/27/2026 at 1:05:54 AM

56% of CEOs report zero financial return from AI in 2026 (PwC survey, n=4,454)

https://aishortcutlab.com/articles/pwc-ceo-survey-2026-only-12-of-ceos-win-with-ai

by harran

2/27/2026 at 1:12:36 PM

I'm not saying that CEOs (or devs, for that matter) lie. But on AI I don't think we can rely on any self-reported results, positive or negative, based on surveys.

There is just too much incentive to say... no, to BELIEVE... both that AI yields 10x productivity and that AI is useless.

I am swinging wildly between the two too, personally. The more time I spend with AI, the more I am developing this split personality where one part of me says "I hope this thing blows up before I lose my job and my children never have the chance to have an office job again" and the other one says "AI is actually not easy! You have to know how to use it well, develop tools, plan, curate your context... This means I am acquiring useful skills here, trying to port Flappy Bird to COBOL".

And obviously, depending on which side controls my cortex in that moment, I may err on the "AI is useless crap" or the "AI all the things!" side.

by fpaf

2/27/2026 at 3:03:58 PM

I think an interesting analogy for what many of us are experiencing here is the phenomenon of doomscrolling; deep down we know we should put it down (and go outside), but the immediate experience of it, and the value it feels like it’s offering in the moment, keeps you scrolling and scrolling.

Similarly, many have reported a sense of, say, programming productivity, but a more objective reflection later on reveals the myriad issues with constantly and subtly ushering in large quantities of lower-quality code and blowing past any caution or rigorous discipline that would come with laying down lines of code “by hand”.

by evolve2k

2/27/2026 at 4:31:47 PM

I don't know.

I'm having coworkers resign due to AI mandates from upper management. Some of them are saying they are going to move on from the tech industry.

It's not just doom scrolling. AI is having a substantial negative impact on some people.

by bluefirebrand

2/27/2026 at 5:20:50 PM

I have also decided to do this as soon as the burden of lying about my AI usage becomes too onerous. Right now at Cisco, there are no mandates, only very strong recommendations with the explicit threat of being "left behind" if you fail to comply. Some teams have included AI usage in their personal KPIs which affect bonuses and promotions, but mine fortunately has not.

Once the execs or my personal manager implement AI requirements, I'll have to start lying, which I really prefer not to do. If they start tracking, then I'll have to vibecode a script to make bullshit requests to the API each day. And if they start auditing, then I'll just check out (more than usual) and wait to be fired. They're only hurting themselves with this shit.

by eudamoniac

2/27/2026 at 1:07:52 AM

I've been building implementation guides for solo founders and small businesses trying to use AI practically, so I read the PwC CEO Survey closely when it dropped.

The headline number (12% of CEOs generating measurable returns) gets cited a lot, but I think the more revealing finding is the 56% with zero financial impact.

These are companies with enterprise AI budgets, dedicated teams, and access to every tool on the market, and the majority are getting nothing back.

PwC calls it "Pilot Purgatory." The pattern: AI gets deployed in isolated, tactical projects that don't connect to revenue: internal tooling, content drafts, meeting summaries. Meanwhile, the 12% they call the "Vanguard" are using AI in the product and customer experience itself (44% of Vanguard vs. 17% of everyone else).

What I found interesting from a solo founder angle: the structural barriers causing large companies to fail at this (bureaucracy, legacy systems, misaligned incentives, multi-department approval processes) don't exist at the one-person scale.

The bottleneck for small operators is different: it's not knowing which workflows are worth building, in what order, and what "system-level" vs "task-level" use actually means in practice.

Curious if others have a take on why the enterprise failure rate is this high despite the investment, and whether the Vanguard pattern (AI into the product, not just the back office) matches what people are seeing in practice.

by harran

2/27/2026 at 1:33:52 AM

I work in a large enterprise. On one hand, we’re being told we should think of ways to use AI more. On the other hand, to even start (beyond just using Copilot to develop what I’m already working on), I need to have an idea and sell it to some AI board to get their blessing. At that point, I will have a microscope on me, tracking everything, to watch if this wild experiment is a success or failure. No thanks.

If they really want me to try something new, they will give me the space to try things where I am free to fail quietly and privately, pivot, and continue trying things. Asking for ship dates on day one is no way to operate projects with so many unknown unknowns. No one wants to learn and fail with an audience.

by al_borland

2/27/2026 at 2:11:07 AM

That’s hard with AI, because early efforts are exploratory by nature. You don’t really know the shape of the value until you’ve iterated. If experimentation immediately becomes a public performance review, the safest move is not to experiment. I think this is a big part of why so many enterprise initiatives stall. The org says it wants discovery, but the governance model assumes delivery. Your point about needing space to fail quietly is important.

by harran

2/27/2026 at 8:11:55 AM

That is kind of a weird take, because my whole life, people WANTED to be part of initiatives like this and were jealous of the people selected for them.

by watwut

2/27/2026 at 9:31:20 AM

Some people sit in the front of the classroom because they want to, others because they must. Many more choose to sit elsewhere. The reasoning is their own.

I don't find it strange, having routinely tanked my own chances/social credit for initiatives... because, like the parent, I don't want a target on my back. Somebody above thinks I do, though, apparently. Experience isn't conditioned on... that experience, if that makes sense. Unpleasant to say the least.

Where you see jealousy, which is a strange thing to invite, I see fear of missing out/rat-racing. Pass. Plenty of motivators and opportunity without the charade. Or, to put it charitably, noise/competition/advertising.

All to say, the initiatives are usually loaded with expectations, reasonable and not. Tread carefully.

by bravetraveler

2/27/2026 at 1:21:24 PM

There is a difference between "this is why I do not want to be part of that" and "this is why the institution as such has not found a way to do it". Some people's unwillingness to do that is not that relevant, unless the company is in an "all qualified people are avoiding that task" situation.

> Where you see jealousy, which is a strange thing to invite,

Any desirable position invites slight jealousy. It is no different than when you have a low-pressure project, a project that uses a cool language, or a project with a good manager. I used the word in that sense, not in the "horrible, they are going to dislike me" sense.

> I see fear of missing out/rat-racing. Pass. Plenty of motivators and opportunity without the charade.

This seems oddly out of place to me? These initiatives usually give you more freedom and less of the "customer wants it now" kind of pressure.

by watwut

2/27/2026 at 2:19:30 PM

> There is a difference between "this is why I do not want to be part of that" and "this is why the institution as such has not found a way to do it". Some people's unwillingness to do that is not that relevant, unless the company is in an "all qualified people are avoiding that task" situation.

Organizations are large, they gave us gifts like the Peter Principle. I'm sure that situation and plenty others exist. I provided my experience/anecdote, showing yours might not be encompassing. Apologies, meant no detraction. Point being, the initiatives are often overrated and rather easy to ignore. By no means am I saying everyone ignores them [or should].

> Any desirable position invites slight jealousy. It is no different than when you have a low-pressure project, a project that uses a cool language, or a project with a good manager. I used the word in that sense, not in the "horrible, they are going to dislike me" sense.

Of course, that's why I call it a rat race. A group running towards the same things... to be disappointed, in my experience. The freedom is welcome, the 'customer [or peer] wants it now' pressure can be left behind. None of this requires going out of your way, however. Perhaps that is why organizations may struggle, regardless, I'm thankful for it.

edit: format and phrasing

by bravetraveler

2/27/2026 at 5:44:00 PM

I've found things go much better for me when I can work on a project on my own, and then, if it works well, show it off and let it go up the chain. I can then focus on the work, pivot as needed, or scrap the idea if it didn’t pan out.

A co-worker of mine had an idea recently (not AI related). He told our manager and sr director about it. I think now it’s gone up to the VP level. The whole project hinges on a very specific thing working that some other team needs to do with a vendor tool and is having a lot of trouble actually getting right.

Meanwhile, he’s now been asked to make multiple presentations to justify and defend it, and there are 2 or 3 separate project managers trying to track it, each with their own set of weekly meetings, tracking spreadsheet, and other such things. All those PMs are also asking for timelines and dates for a lot of unknown unknowns. All our time is being engulfed by ceremony and bureaucracy, and we don’t even know if it will work yet. If that one piece doesn’t pan out, it will be a very public failure and we’ll then be expected to come up with some other option.

My part of it also involves a vendor tool. When I POC’d my part, it worked fine. When I went to set it up with some real data later, the tool isn’t working, despite the POC still running and updating fine. I can’t even replicate the POC again, but it might randomly work next week. There are bugs in systems I don’t control. It’s not a total deal breaker, but it’s a risk and may require a pivot that would fundamentally change how parts of it work that have already been presented.

Had he waited a bit to see if it will actually work, it would be no big deal and wouldn’t have cost us much time. Instead, if the vendor tool falls through, or we need to pivot to work around a bug, we now need to show that to all these people, explain why it didn’t work, and get beat up about it. We also need to pretend to be busy with this and make progress on it, when we’re in a holding pattern waiting for our dependency that may or may not work. We also found out this week that if someone on that other team makes a small mistake, it wipes out everything and the house of cards crumbles. I personally think it’s a big enough risk to scrap the whole idea and find something else, but it’s too late for that now.

Of course, I tend to like to work in the background. I want the organization to function better and more smoothly. I’m not just seeking glory. I’ve still ended up being the one our team tapped when the CIO came knocking for some big project he wanted to see done quickly, despite trying to stay out of it. I don’t think anyone was jealous of me, they were all happy that they didn’t have to deal with all the uncertainty. The only good thing about that was because it was the CIO asking, it was very easy to ignore everything else and I was able to knock it out pretty fast.

Maybe you’ve worked in companies with less bureaucracy or cultures that can handle these types of projects better. For us, it feels like no good deed goes unpunished, so it’s just easier to keep your mouth shut and focus on the work until there is something actually worth showing. That is especially true for these grass roots projects vs big corporate sponsored projects that are happening no matter what.

by al_borland

2/27/2026 at 2:34:44 AM

The following is my take on what's happening — outside the software-development domain, which is special vis-à-vis LLMs for obvious reasons.

Given worker access to generative LLMs, plus training and motivation to use them, LLMs are effective for certain workflows. Those workflows tend to be personal, one-offs, or summarization in nature: write a bash script for this headache I have every day; tell me what colleague X is trying to say in his 1200-word email, since his writing is garbage and he can't get to the point; "what's the Excel formula syntax for this other thing that I keep forgetting?"; etc.
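(To make the "one-off script" category concrete, here is a hypothetical sketch of the kind of throwaway helper workers ask an LLM for; the `despace` name and the space-in-filenames headache are illustrative, not from the survey:)

```shell
#!/usr/bin/env bash
# Hypothetical "daily headache" one-off: replace spaces in filenames
# under a directory with underscores, so downstream scripts stop
# choking on them. Assumes the directory path itself has no spaces.
despace() {
  local dir="${1:-.}" f
  for f in "$dir"/*" "*; do
    [ -e "$f" ] || continue      # glob matched nothing; skip the literal pattern
    mv -- "$f" "${f// /_}"       # rewrite every space as an underscore
  done
}
```

Usage would be something like `despace ~/Downloads`. Exactly the kind of five-minute personal win that never touches a P&L line.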

So the time and mental-energy savings inure to the workers, mostly from coordination tasks that don't directly create core value. And then those savings aren't "reinvested" into value-producing activities whose benefits would inure to the firm, because the workers have no incentive to do so; don't know how to create core value; don't have the skills to create core value; or aren't permitted to do those activities by higher-ups.

Bottom line: LLMs are eating busywork coordination activities — hence no impact on most firms' bottom lines.

by treetalker

2/27/2026 at 2:46:55 AM

Exactly! This aligns with the "pilot purgatory" pattern. AI boosts productivity at the task level, but unless those savings are applied to workflows that directly drive revenue or strategic value, the firm sees little financial impact. It's a classic misalignment between individual efficiency and organizational ROI.

by harran

2/27/2026 at 3:44:48 AM

> PwC calls it "Pilot Purgatory." The pattern: AI gets deployed in isolated, tactical projects that don't connect to revenue.

I feel like both the name and the description miss the mark though - the use isn't in pilots or isolated projects, it's individual people using it to find stuff and read/write/code/work/make decisions for them, and none of that is going to drive strategic value until companies raise expectations on productivity to take advantage of it.

It makes me think of a couple of bullet points from that "An AI CEO said something honest" post[1]:

> - majority of workers have no reason to be super motivated, they want to do their 9-5 and get back to their life

> - they're not using AI to be 10x more effective they're using it to churn out their tasks with less energy spend

[1] https://news.ycombinator.com/item?id=47042788

by nlawalker

2/27/2026 at 11:34:58 AM

I have to say something my Dad used to say, hope this doesn't land poorly: "they can want with one hand and shit in the other, see which fills first."

Generally agree with the peer comment, carrot vs stick applies (ie: 'safety'). There are more, arguably better, moves. Demanding juice from a husk, hmm. Selecting for fresh graduates/those without leverage, still, I see.

by bravetraveler

2/27/2026 at 4:11:19 AM

Yeah, the reluctance often comes from the learning curve, resistance to change, and fear of being let go (employees see it happen to others). Motivation might shift if organizations provide psychological safety, training, and space to experiment, showing that AI can enhance the work rather than just replace it.

by harran

2/27/2026 at 1:30:31 AM

> The bottleneck for small operators is different: it's not knowing which workflows are worth building, in what order, and what "system-level" vs "task-level" use actually means in practice.

Are you saying that from what you see, small operators also fail to get ROI, but for different reasons?

by andsoitis

2/27/2026 at 2:23:42 AM

Yes, but not at the same rate, and yes, it's usually for different reasons.

Enterprises usually struggle because of structure: approvals, incentives, legacy systems, fragmentation.

Small operators usually struggle because they stay at the task level (prompt-by-prompt productivity boosts) instead of building workflow-level or system-level leverage.

by harran

2/27/2026 at 3:25:03 AM

Buying a gym membership has never made anyone fit.

by atlgator

2/27/2026 at 3:36:49 AM

True, but it's more than just using the tool; it's also how it's applied.

by harran

2/27/2026 at 7:10:45 AM

> 56% of CEOs report zero financial return from AI in 2026 (PwC survey, n=4,454)

This is a lie. It can't be zero. It is negative.

by hulitu

2/27/2026 at 1:14:11 PM

It’s quite possible you are correct, but since gen-ai is good at generating stuff, it is taking on busy work, which might bring the ROI back to ~zero. In my business I suspect that gen ai has provided a modest productivity boost (single digit) but, due to other factors, I can’t quantify the revenue impact.

by apercu

2/27/2026 at 11:21:33 AM

> Here's the thing. If you're a solo founder watching this play out from the sidelines, this isn't discouraging. It's the biggest competitive window you've had in years. And most people aren't looking at it that way.

The vast majority of people I'm coming across, both online and here, where I live, have absolutely no knowledge or understanding of how to work with AI.

From Perplexity/Sonar and GPT5 I've learned that most people do not treat it like an intelligence, they treat it like a search engine with better text output.

This article reminded me of that.

I find it extremely inaccurate to claim that the issue with big companies is structure, because that, as happens far too often, ignores the root cause:

The people in charge, who don't make the necessary smart and radical-seeming decisions.

I know it's nowadays rather unpopular to point at actual, real shortcomings of people, but that's how it is. Someone, at some point, made dumb decisions or failed to make smart decisions.

"Let's put humanity's greatest invention, a functional artificial intelligence, to the task of doing paperwork."

Why aren't they making smart decisions? Well... because they can't!

It's not about structure, it's about the failure to recognize potential and ability. When you're the boss, then you make decisions which make things happen.

They can make dumb decisions, like using AI solely for paperwork, or they can make smart decisions, like causing changes in the company that enable the gigantic potential.

Or, in other words:

Handing a monkey a book doesn't magically make the monkey grasp the power it's holding in its hands.

> Not because you have more resources. Because you have fewer barriers.

No. It's all about decisions, decision-making, and the ability to make smart decisions. When you're the person who makes the decisions, then you can take down the barriers, work around them, or at least start trying to figure out how to do so. Everything else is just excuses.

Barriers don't make decisions. People do. The barriers exist in their heads more than anywhere else. When you're incapable of making smart decisions, then the problem is you.

by 5o1ecist

2/27/2026 at 6:58:53 PM

[dead]

by adrian-vega

2/27/2026 at 1:21:35 PM

[dead]

by irenetusuq

2/27/2026 at 1:51:01 AM

The question is whether legacy players can drive strategic growth that changes their trajectory to meet the AI-native disrupters. This is a data point.

by atela

2/27/2026 at 10:44:37 AM

Piggybacking off what you said we should circle back, lean in and look for synergies, shift the paradigm and do a deep dive on leveraging the low hanging fruit deliverables.

Let's take this offline and put it on the backburner.

by blitzar

2/27/2026 at 2:37:15 AM

Exactly! having the budget isn't enough. Legacy players need to adapt processes and incentives to turn AI investment into real strategic advantage, or AI-native disruptors will outpace them.

by harran

2/27/2026 at 3:33:56 AM

Are these AI-native disruptors in the room with us now?

by add-sub-mul-div

2/27/2026 at 3:49:27 AM

AI-native disruptors are designing products and experiences around AI from inception, rapidly capturing value and reshaping customer expectations. In the near term, for some, that is raising a red flag.

by harran

2/27/2026 at 6:36:00 AM

Who? The only “disrupters” I see are AI hypesters selling AI tools.

Who are the people using these tools to create successful businesses and (non-AI) products?

by jdlshore

2/27/2026 at 3:10:35 PM

Is this some umpteenth-level irony or are you simply missing the joke?

by drakinosh

2/27/2026 at 6:37:16 AM

Their bots are.

by tonyedgecombe

2/27/2026 at 1:08:09 AM

The average person is not ready for AI yet. Microsoft's Copilot has a low adoption rate. Data centers have big energy bills and a lack of clients, and for most of them there is no ROI.

by orionblastar

2/27/2026 at 1:23:20 AM

I think you’re pointing at something real. Adoption lag matters. If the end user doesn't change behavior, ROI won’t show up no matter how much infrastructure gets built. I’d add another layer though: expectations. Many CEOs implicitly treat AI like deterministic software: install it, flip the switch, get linear productivity gains. But these systems are probabilistic. They’re "slippery": output quality varies, edge cases multiply, and oversight is required. That makes ROI non-linear.

by harran

2/27/2026 at 1:05:53 PM

Is the product bad or is literally everyone else wrong?

Who can guess!

by estimator7292

2/27/2026 at 5:40:52 PM

The current generation of children uses AI on their smartphones. The average adult doesn't use or doesn't know how to use AI. In time, the children will grow up to get AI jobs, and the average adult will be a senior citizen.

by orionblastar

2/27/2026 at 7:56:06 PM

If AI were actually capable of delivering on the promises driving the bubble, the next generation of children wouldn't grow up to get jobs at all unless they're serving or cleaning up after trillionaires, or selling their bodies. Thankfully, there's basically zero chance of AI delivering on those promises. It'll continue to be useful sometimes, in some ways, to some people, but I'm less than optimistic that most people will feel like what they've gained was worth what they lost, and that includes most CEOs.

by autoexec

2/28/2026 at 1:51:43 AM

The main target is businesses for AI. Not the average person, but Microsoft wants to have the average person use Copilot. https://www.reddit.com/r/technology/comments/1p92qzs/you_hea... But people are rejecting Copilot, and Microsoft has most of their datacenters not in use except for clients like OpenAI. OpenAI might take its business to Google or Amazon because Microsoft Copilot competes directly with them.

by orionblastar

2/28/2026 at 2:54:07 PM

That's as useful as saying "The current generation of children uses electricity." Using "AI" for image generation, gameplay, statistics on the economy, or generating source code are completely different things. Also, even the dumbest orthography tool gets called "AI", because it invokes a field that is associated with human intelligence.

by 1718627440