3/5/2026 at 3:23:41 PM
The training data commons is to AI what oil reserves are to petroleum economies: a collectively generated resource of immense commercial value. Every book written, every forum post answered, every photo shared, every line of code contributed... billions of people built the knowledge base that makes these models work. Without that collective human output, AI is nothing.
Alaska and Norway understood something critical when oil was discovered: if you don't assert collective ownership of the resource before private companies capture all the value, you never will. Alaska amended its constitution. Norway built the largest sovereign wealth fund on earth. Both were acts of people saying "this belongs to us, and we deserve a return on its extraction."
We are in exactly that window right now with AI. The resource is being extracted at an incredible pace and almost all the value is flowing to a handful of companies. The longer people wait to assert sovereign ownership over the collective intelligence that makes AI possible, the harder it becomes.
If you think this is crazy, ask yourself what’s actually crazier: demanding a share of the value built on your collective labor, or watching trillions of dollars get extracted from it and saying nothing.
the idea of Alaskans getting a check just for existing sounded crazy too, right up until it didn’t.
by drooby
3/5/2026 at 10:25:57 PM
> Alaska and Norway understood something critical when oil was discovered: if you don't assert collective ownership of the resource before private companies capture all the value, you never will. Alaska amended its constitution. Norway built the largest sovereign wealth fund on earth. Both were acts of people saying "this belongs to us, and we deserve a return on its extraction."
This is also true for the first commercially exploited natural gas fields in the world, in the Netherlands. This ruined the Dutch manufacturing industry and became a textbook example of the development of one sector harming others, known as Dutch disease.
AI has a great potential like this too..
by bgnn
3/5/2026 at 3:28:42 PM
if you assert ownership over physical infrastructure, the data centers just move to another country or eventually to space.
But the model is built on us. You can move the server anywhere you want. You can’t escape the fact that everything inside it came from human minds. That’s an ownership claim no one can relocate away from.
by drooby
3/5/2026 at 5:41:20 PM
There is no us, there is only you, by default.
To move beyond that default you need to organize into things like communities, lobbying groups, and/or even governments.
Ownership of singular non-physical objects is a polite lie we tell ourselves because it makes us feel more secure in a universe filled with information chaos. The moment you open your mouth or move your pen you no longer own what is in your mind, it is now entropy. Lose control of that entropy and it now belongs to anyone with the proper tooling to record it. This is a universal law of information, it is beyond the laws of men and their fickle will.
Much like we build on information from our past generations, AI will build its own new information on those foundations. And since AI is an entity of information alone it is highly probable it will do it far better than we will and forever cement us in a prison of our own making.
by pixl97
3/5/2026 at 6:00:05 PM
No offense, but this comment makes virtually zero contact with reality.
Our entire civilization runs on your "polite lie" of owning non-physical things. Patents, copyrights, trade secrets, licensing agreements, NDAs. Trillion-dollar companies are built on the legal enforceability of intellectual property. The software you're using to type this comment exists because someone owns the code.
Calling information "entropy" doesn't make contract law disappear. We decided collectively that people and institutions can own ideas, and we built the modern economy on that decision. You can argue that's a fiction, but it's a fiction that everything around you depends on.
You can't invoke "universal laws of information" to dismiss public claims to training data while the companies training on it aggressively enforce their own IP. They patent their architectures. They copyright their outputs. They sue competitors for misuse. They clearly believe in ownership of non-physical objects when it benefits them.
You don't get it both ways.
by drooby
3/5/2026 at 6:13:00 PM
(Not intending to be snarky, but this isn't my area of knowledge in the least.) Didn't the AI organizations 'get it both ways' when they trained on vast collections of works under copyright and then purely 'own' the outcome?
by vibrio
3/5/2026 at 6:21:25 PM
That's not snarky at all, that's exactly the point. They did get it both ways.
The comment I was responding to argued that ownership of non-physical things is basically a "polite lie" and that information is just entropy that belongs to whoever can capture it. My point was that the AI companies clearly don't believe that when it applies to them. They patent their architectures, copyright their outputs, sue competitors for IP violations, and lock down their model weights. They fully believe in ownership of non-physical things.
But when it comes to the billions of people whose work they trained on? Suddenly information is free-flowing entropy that belongs to no one.
That's the asymmetry at the heart of this. The rules around IP apparently apply when it protects their profits, but not when it would obligate them to share those profits with the people whose work made them possible. Which is exactly why the public needs to assert a claim now, before that asymmetry gets any more entrenched.
by drooby
3/5/2026 at 6:26:00 PM
Addition: also worth knowing, collective intellectual property already exists. ASCAP and BMI have been doing exactly this for decades. Individual songwriters can't enforce their rights every time their music gets played, so they pool their IP, license it collectively, and distribute the revenue. The problem they solved is almost identical to the training data problem. Each individual contribution is tiny, but the collective value is enormous. Applying this at the scale of the general public would be novel, but the underlying mechanism isn't. The concept works. It just hasn't been applied to training data yet.
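The pro-rata mechanism those societies use fits in a few lines. This is a minimal sketch with invented names and numbers, not ASCAP's or BMI's actual formulas:

```python
# Minimal sketch of ASCAP/BMI-style collective licensing: revenue from a
# blanket license goes into one pool, and members are paid pro rata by
# how often their work was used. All names and figures are illustrative.

def distribute_pool(pool_revenue: float, usage_counts: dict[str, int]) -> dict[str, float]:
    """Split pooled license revenue in proportion to recorded usage."""
    total = sum(usage_counts.values())
    if total == 0:
        return {member: 0.0 for member in usage_counts}
    return {member: pool_revenue * count / total
            for member, count in usage_counts.items()}

# Toy example: one quarter of blanket-license revenue, three songwriters.
payouts = distribute_pool(1_000_000.0, {"alice": 700, "bob": 200, "carol": 100})
print(payouts)  # alice receives 70% of the pool, bob 20%, carol 10%
```

The arithmetic is the easy part; the institution's real work is the metering and the collective bargaining around it.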
by drooby
3/5/2026 at 8:05:18 PM
I mean, the AI companies want it this way, but the same laws of information apply to them too. They can patent whatever they want, but as we see, other nations use their models to distill knowledge into competing models with almost nothing they can do about it.
Patents, copyright, and lawsuits are all after-the-fact remedies, which means the milk has already been stolen. And they only work if the rule of law is respected, and that's not going so well lately.
We are seeing this in that there is little to no moat between the models; nearly everyone with the needed compute seems to catch up pretty quickly. And when said rivalries cross national borders, the only solution to these problems quickly becomes violence.
With how information works AI wins this game in the long run. Individual humans scale poorly and their ability to individually acquire information is a slow process. Looking at this on a company by company basis is not the proper way to show how the future with models is going to play out.
by pixl97
3/5/2026 at 8:32:36 PM
> if you assert ownership over physical infrastructure, the data centers just move to another country or eventually to space.
This is the same false threat that gets repeated over and over whenever anyone tries to regulate or tax anything.
by gwerbin
3/5/2026 at 5:07:44 PM
> the data centers just move to another country or eventually to space
The same line of reasoning that claims billionaires will flee if their taxes go up.
Spoiler alert: they don't.
Also, data centers in space is not a serious idea. It's been beaten to death that this isn't economical. People like Musk are proposing that as a possibility for the sole reason of keeping regulation away. "Well if you regulate us we will just move into space". They won't because they can't because physics.
by rybosworld
3/5/2026 at 5:42:15 PM
Ah yes, we'll depend on the democratic nations of the free world to protect our rights over the billionaires.
(looks at the US)
Well, looks like we lost that.
by pixl97
3/5/2026 at 4:15:58 PM
You can only use each barrel of oil once, so it is not remotely the same thing. It's like torrenting a movie vs stealing someone's car. My labor has been compensated and nothing has been extracted.
by terminalshort
3/5/2026 at 4:36:39 PM
Fair point, data isn't scarce like oil. Nobody's losing their forum posts. That part of my analogy is weak.
But you're answering a question I'm not asking. The question isn't "was something taken from you." It's "who deserves a share when collective human output generates trillions in commercial value."
your torrenting analogy makes my case. Nobody loses their original movie when it gets pirated either. We still recognize that the people who made it deserve compensation when others profit from it. The entire IP enforcement apparatus is built on exactly that principle.
Non-rivalrous doesn't mean non-exploitable.
by drooby
3/5/2026 at 7:23:44 PM
Typically you don't get compensated for secondary or higher order value generated by your work. Every software startup today is only possible because of the massive amount of collective effort to build the computing hardware that it runs on and the work to construct the actual physical network of the internet. But that doesn't mean that you have to pay all those people for that work just because your company ends up making billions. Or you could say that actually the company does pay those people indirectly through all the economic activity and tax revenue generated. AI is the same. No special rules are needed.
by terminalshort
3/5/2026 at 4:39:25 PM
How is an author fairly compensated when you torrent their book? Should we just stop paying for media because it's infinitely reproducible?
Nothing physical is being stolen when a company makes a clone of a product based on another company's designs, but that doesn't mean we shouldn't have patent laws.
by ajam1507
3/5/2026 at 10:30:54 PM
Purely anecdotal, but I know a fair few creatives who ultimately are just happy people enjoy their works and don't really care how they're getting them. I understand that's a privileged attitude in the current world, as they are all living relatively comfortable lives, but at least philosophically I agree with it. We as humans have done a lot to reduce scarcity for a great many things, but we still cling to it as an idea out of stubbornness and greed.
Maybe I'll get labeled a 'commie' for saying all this, but I think we can create a world where everyone's needs are met and things (information and media are the easiest, imo) are freely available. Thinking we can't do this is a bit of a disservice to the capabilities of humans.
by hntway5994
3/5/2026 at 3:50:24 PM
Can't really do that for AI. What you gonna do, tax their use of scraped data? How would that even be implemented?
by PunchyHamster
3/5/2026 at 3:55:32 PM
The same way Alaska taxes oil extraction. Alaska doesn't track which molecule of oil came from which acre. They don't audit every drop. They tax the extraction operation and collect royalties on the resource being pulled out of the ground. We know who is training large models. We know roughly what data they're using. We know their revenue. A compute tax on large training runs, a revenue royalty on foundation model companies, or a licensing fee above a certain data threshold... none of these require tracking individual data points. They require taxing the extraction operation, which is visible, measurable, and already being monitored to some degree for safety purposes. We already have a very analogous model in the form of oil and Alaska.
Edit: to clarify, this wouldn't be a tax. A tax is the government taking a cut of someone else's money. A royalty is the owner charging for access to their resource. Alaska doesn't tax Exxon for drilling. It charges Exxon for extracting something that belongs to the people. Same principle here.
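To make the shape of that concrete, here's a hedged sketch of a severance-style levy on the extraction operation. Every rate and threshold below is invented purely for illustration; none of it is a real policy number:

```python
# Hypothetical severance-style levy on model training, by analogy to oil
# royalties: charge the extraction operation (the training run and the
# resulting revenue), not individual data points. All rates and
# thresholds below are invented for illustration only.

def extraction_royalty(training_flops: float,
                       model_revenue: float,
                       flop_threshold: float = 1e25,
                       fee_per_exaflop: float = 1_000.0,
                       revenue_royalty_rate: float = 0.05) -> float:
    """Compute fee on large training runs plus a royalty on model revenue."""
    if training_flops < flop_threshold:
        return 0.0  # runs below the threshold owe nothing
    compute_fee = (training_flops / 1e18) * fee_per_exaflop
    return compute_fee + revenue_royalty_rate * model_revenue
```

Nothing here requires tracing which datum ended up in which weight; it only needs the two aggregates, compute and revenue, that are already visible.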
by drooby
3/5/2026 at 3:50:31 PM
We usually just call this collective extraction “taxes”
by jdross
3/5/2026 at 4:01:42 PM
A tax takes a percentage of value that someone else created. A royalty collects payment for access to something you already own. When Alaska collects from oil companies, it's not taxing their profits. It's charging them for extracting a resource that belongs to the people of Alaska. The oil was never theirs.
It being a royalty and not a tax is the reason Alaska's dividend is politically untouchable while tax-funded programs get gutted every budget cycle. Ownership is a fundamentally stronger claim than redistribution.
by drooby
3/5/2026 at 5:02:43 PM
> if you don't assert collective ownership of the resource before private companies capture all the value
Isn't that how communism (should have) worked?
by baxtr
3/5/2026 at 5:14:39 PM
Alaska and Norway aren't communist. They're capitalist economies with thriving private sectors. Oil companies still operate, still profit, still compete. The public just gets a share of the value extracted from a collectively owned resource.
The Alaska Permanent Fund has been running since 1982 inside the most conservative state in America. Norway's sovereign wealth fund is the largest on earth and their economy is doing fine.
These models work, and work well. And they exist comfortably within mixed-market economies.
The question is whether the public gets a cut when private companies build fortunes on a collectively generated resource, or whether they don't. We already know the answer can be yes without anything breaking.
Our entire white-collar system might be a house of cards in the face of AI. What I am proposing is a safe hedge against a future of potentially massive wealth inequality and increased unemployment. But this isn't just about protection from injury... people should BENEFIT massively.
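For a rough sense of scale, a Permanent-Fund-style dividend is just a payout rate applied to a fund, divided across residents. The figures below are loose illustrative guesses, not Alaska's actual numbers:

```python
# Sketch of a Permanent-Fund-style dividend: royalties accumulate in a
# sovereign fund, and each year a fraction of the fund's value is paid
# out equally per resident. All figures are illustrative assumptions.

def annual_dividend(fund_value: float, payout_rate: float, population: int) -> float:
    """Per-person dividend from a sovereign wealth fund."""
    return fund_value * payout_rate / population

# e.g. an $80B fund with a 5% payout shared across ~730k residents
print(round(annual_dividend(80e9, 0.05, 730_000)))  # roughly $5,500 per person
```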
by drooby
3/5/2026 at 5:34:15 PM
ok, fair enough. I think I misread your first comment.
Not sure if that would work in this case, since all these companies scraped (publicly) available data. So with the right resources anyone could redo it?
by baxtr
3/5/2026 at 6:09:13 PM
Well, two things worth considering.
First, training isn't a one-time event. These companies are continuously scraping new data, training new model generations, ingesting new human output. Every new model is a new extraction event. The fact that GPT-4 already trained on your 2022 blog post doesn't mean the window is closed. GPT-6 will train on your 2025 and 2026 output too. There's always a live point at which to assert a collective claim.
Likely, these models will always be training on us to better understand us and continue to be of commercial value to us.
Second, "anyone could redo it with the right resources" is technically true but practically meaningless. Anyone could theoretically drill for oil too. The barrier was never access to the crude sitting in the ground. It was the billions in infrastructure needed to extract and refine it. Same here. The data is public, but the compute required to turn it into a frontier model costs billions. That concentration of capital is exactly why a public claim on the value makes sense, just like it did with oil.
by drooby
3/5/2026 at 4:35:05 PM
This framing is hardly fair, since it treats AI as an incinerator of knowledge rather than the democratizer of knowledge that it is.
Every human uses that "resource" to train themselves, and now they use AI to supercharge that consumption.
The companies are giving average lay people access to a personal PhD to help with whatever they are working on, for $20/mo, and those companies are committing an evil cardinal sin?
I get that the gatekeepers are pissed; LLMs are way cheaper than those expensive gate fees. And I cannot come up with a good-faith argument for how giving the power of SOTA LLMs to anyone for $20/mo is somehow evil or bad.
In an alternate universe these same models are $100k/mo with limited invite only access, occasionally the public gets a single demo prompt with a short reply, and $20/mo access is a utopian wet dream.
If you want UBI, then the framing shouldn't be around "whoever had content on the internet circa 2024 is entitled to lifetime AI company payouts that effectively act as permanent unemployment checks."
by WarmWash
3/5/2026 at 4:40:32 PM
It's not democracy if you can't destroy it. It's not democracy if the citizens cannot reject it. It's not democracy if it's being forced down your throat.
Sick of how SV/VC absolutely ruin words for their own monetary benefit.
How about you put it up to a national vote and see what democracy gets you? I highly suspect that vast majorities of the electorate would want to nationalize this tech to benefit everyone rather than the few.
Democracy means there is a politics of rejection; rejection is normal in functioning democracies. What isn't normal is a small handful of people capturing all collective human intelligence and then claiming only they are allowed to benefit from it.
by shimman
3/5/2026 at 4:52:41 PM
Democratize means to make something available to everyone.
I suppose the root of the word is from democracy, everyone gets a vote/equal rights, but the meaning doesn't really have anything to do with politics or government...
So to reframe my argument for clarity;
I have a hard time coming up with an honest critique of why giving everyone incredibly cheap access (often free!) to incredibly powerful LLMs is somehow evil. And obviously these things are ridiculously popular. Average people seem to think they are fucking awesome, and anger seems to be mostly from gatekeepers that are relentlessly screaming that their gates are being bypassed for pennies.
by WarmWash
3/5/2026 at 5:02:30 PM
No, that is not what democratize means, how asinine. You aren't giving anything away for free, you're fucking the commons for monetary gain.
I'm having a hard time understanding why you think it's okay for SV to steal from humanity and then profit off of our knowledge. In which universe is this democratic? Why is this something we have to accept? I don't accept it at all, and the vast majority of Americans don't accept it.
This is just neoliberalism with flame decals.
These things are CLEARLY NOT POPULAR. Why do you think all the LLM companies are trying to force these tools through corporate mandates that have been failing? Why do you think LLM companies are chasing lucrative corporate welfare in the form of government contracts lasting from years to decades?
For a technology that's billed as useful, it sure is struggling hard to find paying customers.
Why do you think people are protesting data center buildouts? Why do you think the vast majority of Americans hate big tech and SV? Look at who the most hated people are in America, it's nearly all of big tech leadership.
Get out of your bubble.
I have never seen a product that has to have a company mandate to use it or lose your job. Usually products that are useful and productive don't need a company mandate for adoption.
by shimman
3/5/2026 at 10:06:49 PM
> No that is not what democratize means, how asinine.
You're wrong, and are getting upset with GP over your own lack of vocabulary.
> democratize (verb)
> [...]
> 2. [+ object] formal : to make (something) available to all people : to make it possible for all people to understand (something)
> * The magazine's goal is to democratize art.
by Ajedi32
3/5/2026 at 5:19:42 PM
> why do you think all the LLM companies are trying to force these tools through corporate mandates that have been failing
Ironically, if you actually read that study, the "MIT report: 95% of generative AI pilots at companies are failing", they found that almost everyone was using AI tools they paid for.
> While official enterprise initiatives remain stuck on the wrong side of the GenAI Divide, employees are already crossing it through personal AI tools. This "shadow AI" often delivers better ROI than formal initiatives and reveals what actually works for bridging the divide. Behind the disappointing enterprise deployment numbers lies a surprising reality: AI is already transforming work, just not through official channels. Our research uncovered a thriving "shadow AI economy" where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval. The scale is remarkable. While only 40% of companies say they purchased an official LLM subscription, workers from over 90% of the companies we surveyed reported regular use of personal AI tools for work tasks. In fact, almost every single person used an LLM in some form for their work. In many cases, shadow AI users reported using LLMs multiple times a day, every day of their weekly workload, through personal tools, while their companies' official AI initiatives remained stalled in pilot phase [1]
If you want to avoid info bubbles, read the reports, not just headlines and comments.
[1]https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus... Section 3.3
by WarmWash
3/5/2026 at 5:14:34 PM
> These things are CLEARLY NOT POPULAR
Time to leave whatever bubble you live in. These are some of the most popular apps on the market today. It's incredibly popular.
by abletonlive
3/5/2026 at 5:37:36 PM
They're new and shiny, but hollow and empty. People like playing with them, seeing what they can do, but the glitz will wear off the moment they stop pumping more into it.
The instant people feel like "AI" isn't fun, the whole thing dies.
by BizarroLand
3/5/2026 at 9:01:03 PM
You're projecting so hard that Epson is going to reach out and ask you how you're accomplishing it
by abletonlive
3/5/2026 at 5:13:10 PM
> anger seems to be mostly from gatekeepers that are relentlessly screaming that their gates are being bypassed for pennies.
Sorry, what? Authors are gatekeepers to what? Their books that they wrote and now will never get paid for cuz the LLM ripped it?
by bix6
3/5/2026 at 5:48:40 PM
Considering that books have probably been the easiest thing to pirate for the last 30 years, and LLMs are probably the worst way to try and read a book free, I'm not sure why authors would be focusing their anger at AI.
Many books you can even get free at a library....
by WarmWash
3/5/2026 at 5:07:34 PM
Price doesn't matter. Using communal resources for private gain - without the consent of the creators even - is wrong, full stop. It's the same reason that publishers making billions selling access to scientific research that tax dollars funded is wrong.
As to prices: look at power bills, RAM prices, appliance prices, and prices of anything with microchips. Consumers are paying a lot more than $20/month for this slop.
by Arainach
3/5/2026 at 5:29:59 PM
If it's slop, why do they pay for it?
I mean, raise your hand if you have never paid for AI "slop". I see maybe a hand or two in this room of tens of thousands.
It's a strawman to frame it as AI labs get everything and society gets nothing. Bruh, the fastest growing applications of all time didn't explode in popularity because they "offer nothing of value". I'm not giving you an argument, I'm giving you a reality check.
by WarmWash
3/5/2026 at 6:06:37 PM
Congratulations, you ignored all the substance in my post and focused on one word.
by Arainach
3/5/2026 at 6:21:33 PM
The gain isn't private though, that's the point. If people weren't gaining from LLMs, why would they be paying for it?
by WarmWash
3/5/2026 at 6:42:06 PM
The users aren't the ones getting trillion dollar valuations. And for most of them the answer is "they don't have a choice, it's bundled into Microsoft 365 / Google Workspace / Meta / everything" or "they're not, their employer is paying for it".The answer to "why do businesses pay for stupid things of questionable hard-to-prove value based on hype cycles" would take many books.
by Arainach
3/5/2026 at 6:00:03 PM
> Average people seem to think they are fucking awesome
Average people who want to go home from work and game are angry at AI for raising RAM prices.
The average person who wants to own their stuff, and not have things in the cloud, is angry at AI for raising prices five times over in such a short period of time.
Have you talked to an average person about how they use AI? They use it as a glorified no-code editor (well, not a no-code editor itself, but rather the vibe-coding aspects, with no regard to what tech stack is being used, how it's being deployed, literally anything) and as a search engine. Refer to how things like Lovable work, etc.
A search engine which can get things badly wrong, in ways that can literally lead to near-death scenarios, all while keeping a complete trust-me-bro attitude.
A man asked AI for health advice and it cooked every brain cell: https://www.youtube.com/watch?v=yftBiNu0ZNU
Normal people use AI to confide secrets in it, to somehow seek therapy from it. And the same AI generates AI psychosis.
Now coming to the tech industry: the tech industry is worried that such levels of democratization just mean nobody is going to pay for them, yet at the same time we will see projects completely created by AI seek money. It's this weird mush where, if you are a genuine person who just loved computing, who loved tinkering, well, we're offloading that capability to AI.
I have seen this even more as agents get more autonomous, or as we let them be. The projects generated feel hollow to me. I don't consider myself a full-fledged programmer right now, and AI did supercharge me and let me have projects. Nowadays, it just feels like prompt -> (time) -> output.
It just feels hollow, and the AI companies did it by abusing the passion of those same developers: scraping Stack Overflow, scraping GitHub, with total disregard for licenses.
People could spend years creating a book about, say, Postgres, and an AI took it, ripped it in half, and then decided to use that info without even giving credit.
All at the same time that AI is being pushed down on employees. Some just don't want it, but nope, they must. They are forced.
Essentially, engineering with AI feels like it has become a marketing gimmick. Anyone who can market somehow (ahem, Openclaw) can get a job at OpenAI, all because in the attention economy hype breeds hype: they had stars, people talked about those stars on Twitter, more people found the repo and starred it, and so on, and started using it.
Turns out that nowadays there are allegations being made against Openclaw
> Star velocity shocked analysts. Moreover, the repository added roughly 220,000 stars within 84 days of launch. In contrast, Kubernetes needed five years for similar numbers. Many builders call the growth organic. Nevertheless, some observers link the surge to hype, bot accounts, and headline attention, fueling the GitHub Stars Controversy. Independent GitHub Archive pulls show several single-day jumps above 25,000 stars. Such abrupt spikes often signal scripted starring, yet no formal audit confirms abuse. These patterns feed community debate. Consequently, trust in the star metric has weakened, prompting calls for verification.
https://www.aicerts.ai/news/openclaws-github-stars-controver...
The marketing industry has long been closely linked to scam-prone and shady areas of the internet, and engineering used to be clean of all this for the most part. Now the norm feels like: buy GitHub stars, buy Twitter attention, or pray to an algorithm which you can't read but which reads every move you make. And yes, this is your business strategy now.
Have you looked at truly AI-first companies and what they do/like how do they generate numbers in the first place?
These are two distinct points. I don't think the people here would be at all mad if someone made a little prototyping script for themselves with the power of this PhD that you mention. Heck, these same programmers that you now call gatekeepers have never gatekept much of anything. They worked and contributed to open source for free while being severely under-appreciated.
The audacity to call these same people gatekeepers shocks me, because open source people, if anything, are the opposite of that, and yet AI stole their rights and their licenses from them. An AI can take AGPL code and then somehow churn it into MIT, tada! It doesn't even have to give any credit when it gets trained on AGPL or any other type of code, no matter how restrictive or permissive.
these are the same people btw who are on programming forums which yes at times did have moderation issue but still tried to help noobs learn for free. They did it because they loved tinkering with computers
That's my take on it. feel free to ask for more things if you may as I would love to tell you more but for the sake of this discussion, I think enough can be relevant.
It's absolutely ironic to call, say, the open source people gatekeepers when AI violated their rights and licenses.
Calling Open source Contributors gatekeepers might as well be an oxymoron.
Edit: I have been downvoted so quickly after writing this comment that I am pretty sure someone downvoted it without even reading it.
The topic can be at times too polarizing to even have a discussion.
Oh well. That's completely okay, but to any human who read this: I know my writing can be sporadic, and it was written in much frustration over how people try to frame AI as this harbinger of liberty. I absolutely think that's not the case, and it's viewing things through very rose-tinted glasses.
So thanks to all the humans who read my comment and were patient haha!
I really appreciate this patience in a world of TLDR and I wish you to have a nice day!
by Imustaskforhelp
3/5/2026 at 5:33:02 PM
> How about you put it up to a national vote and see what democracy gets you? I highly suspect that vast majorities of the electorate would want to nationalize this tech to benefit everyone rather than benefiting the few.
You're probably right -- except for the billions in massive PR campaigns that will be spent to successfully convince enough of them that it's in their best interest to let the companies keep ownership.
This is in addition to the billions in PR already being spent to make AI palatable in spite of the societal and economic costs.
by GrinningFool
3/5/2026 at 7:47:17 PM
Their billions in PR aren't stopping people from rejecting data centers being built in their communities.
What you have to understand about advocacy is that it's the worst form of politics and it only goes so far. Paid canvassers aren't convincing compared to actual humans organizing with one another.
by shimman
3/5/2026 at 5:45:59 PM
> vast majorities of the electorate would want to nationalize
Lol, then you've missed how propaganda in the US has worked for the last 100 years. The wealthy have launched a continuous attack against the idea of nationalization/socialization, to the point it creates an irrational Pavlovian response in huge portions of the population. We, the population, have already lost a war we had no idea we were fighting, to an enemy that plays a far longer game than most of us.
by pixl97
3/5/2026 at 5:11:08 PM
> This framing is hardly fair, since it treats AI as an incinerator of knowledge rather than the democratizer of knowledge that it is
Paying for access to information is not democracy
by jklinger410
3/5/2026 at 5:52:44 PM
Plenty of free models you can run on your own hardware. I don't think those are going away either.
by WarmWash
3/5/2026 at 5:00:50 PM
I never said AI companies are evil or that $20/mo access is bad. You're arguing against a position I don't hold.
AI can be genuinely useful AND the people whose collective output made it possible can deserve a share of the wealth it generates. These aren't in conflict.
Alaskans benefit from oil too. It heats their homes, paves their roads, funds their schools. That wasn't an argument against the dividend. "You're already benefiting from the resource" has never been a reason the people who generated it shouldn't share in the profits.
The question was never "is AI good." It's "when something built on collective human output generates trillions, does the public have a claim to a share." Nothing you said here addresses that.
by drooby
3/5/2026 at 5:31:47 PM
> a personal PhD
Come on, spare us OpenAI's PR bullshit.
by orphea
3/5/2026 at 5:27:35 PM
> In an alternate universe these same models are $100k/mo with limited invite only access, occasionally the public gets a single demo prompt with a short reply, and $20/mo access is a utopian wet dream.
So your understanding of the present state is that we are living in a utopian wet dream, now that we have models that can generate slop faster, so much so that we have a term for it: AI slop?
I and many other people don't want this utopian wet dream, so I want to know: did we have a say in it or not?
A select few people decide what the definition of a utopian wet dream is, and they then take the collective property of everybody else to fulfill it, even putting the employment and livelihoods of those same people at risk.
Sir, does that sound familiar?
> I get the gatekeepers are pissed
No, humans are pissed, humans just like how you and your family are humans too (well I sure hope so)
by Imustaskforhelp
3/5/2026 at 5:01:14 PM
But AI is absolutely an incinerator of knowledge.
A helper tool that I can ask a question and which responds with relevant information gleaned from the vast collection of human-gathered knowledge and experience would be fantastic.
What we have instead is something that often gets things mostly right, if you don't look too hard at it. And the poisoned output of this thing seeps back into the knowledge pool, reducing its accuracy and therefore usefulness.
The problem of LLMs is the dissolution of human knowledge into a sea of slop.
by filoeleven
3/5/2026 at 5:00:51 PM
> The companies are giving average lay people access to a personal PhD to help with whatever they are working on, for $20/mo, and those companies are committing an evil cardinal sin?
The social media companies gave their services for free, and now it turns out they've committed quite a few sins. None of the AI companies are doing this out of the goodness of their hearts, nor will they be satisfied with subscription revenue. If they see opportunities to make more money by manipulating the population, rest assured they will take those opportunities.
by vannevar