3/31/2025 at 11:16:44 PM
Avoid it! Perplexity rules prohibit commercial use, so it’s effectively a toy for hobby stuff like gardening and recipes. ChatGPT rules prohibit sharing the output with other AI, and they’re training on everything you pass in, so you’re literally paying to get brain raped, they hold all the cards, and that’s as dumb as it sounds. Anybody coding with OpenAI at this point ought to be publicly mocked for being a sheep. All you’re doing when you code with AI is becoming dependent on something where you’re paying them to train their model to imitate you. It’s not good for anyone but OpenAI.
Better off learning a better (compiled) programming language. If you have to use AI, use Groq or Ollama as you can keep these outputs to train or fine tune your own AI one day.
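The "keep these outputs" idea can be sketched concretely: log every prompt/response pair from a local Ollama server to a JSONL file, the de facto format most fine-tuning pipelines accept. This is a minimal sketch, assuming Ollama's default HTTP API at `localhost:11434`; the model name `llama3` and the file name are illustrative, not prescribed by anyone in this thread.

```python
# Sketch: capture local-LLM exchanges as JSONL so they can later serve as
# fine-tuning data. Assumes a running Ollama server at its default port.
import json
import urllib.request

def ask_ollama(prompt, model="llama3", host="http://localhost:11434"):
    """Send a prompt to a local Ollama server and return the response text."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def log_exchange(path, prompt, response):
    """Append one prompt/response pair as a JSONL record."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")

# Usage (requires Ollama running locally):
#   answer = ask_ollama("Summarize this ticket: ...")
#   log_exchange("training_data.jsonl", "Summarize this ticket: ...", answer)
```

Unlike a hosted API, nothing here leaves your machine, which is the whole point of the suggestion.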
Why pay OpenAI for the privilege to help them take your job? Why outsource mental activity to a service with a customer noncompete?
by bionhoward
4/1/2025 at 9:43:17 AM
> ChatGPT rules prohibit sharing the output with other AI
Who gives a fuck? If OpenAI followed the law, ChatGPT wouldn’t exist.
by lm28469
4/1/2025 at 4:51:41 PM
> Better off learning a better (compiled) programming language.
I didn't really go too much into my background beyond the task at hand, but I have a CS degree and did C++ development professionally in the engineering space for years (and golang after that). I switched to data engineering because I enjoy the work, not because of an inability to work in "better" languages.
I'm not a rockstar or anything, but assume I'm as competent as a typical person who's been writing software for 20 years.
> If you have to use AI, use Groq or Ollama as you can keep these outputs to train or fine tune your own AI one day.
How do I train models via ollama? As I said, I've been using it for my work, but I've been leveraging it for "fuzzy matching" data, extracting pertinent elements from free form text, or general research. I'd love to be able to shove some of my datasets into one and use a prompt to interact with it. The best I've been able to do currently is showing it my database schema and having it write queries for me, which is not that valuable to me.
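The schema-to-SQL workflow described above can be sketched as a small wrapper around a local Ollama model. This is a hypothetical illustration, not the commenter's actual setup: the prompt template, the model name `llama3`, and the example DDL are all assumptions.

```python
# Sketch of "show it my database schema and have it write queries"
# against a local Ollama server.
import json
import urllib.request

def build_sql_prompt(schema_ddl, question):
    """Combine a schema and a natural-language question into one prompt."""
    return (
        "You are a SQL assistant. Given this schema:\n\n"
        f"{schema_ddl}\n\n"
        f"Write a single SQL query answering: {question}\n"
        "Return only the SQL."
    )

def generate_sql(schema_ddl, question, model="llama3",
                 host="http://localhost:11434"):
    """Ask a local Ollama model to draft the query (requires Ollama running)."""
    payload = json.dumps(
        {"model": model,
         "prompt": build_sql_prompt(schema_ddl, question),
         "stream": False}
    ).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage:
#   ddl = "CREATE TABLE orders (id INT, customer_id INT, total NUMERIC);"
#   print(generate_sql(ddl, "total revenue per customer"))
```

The model's output should be reviewed before running it against a real database; local models in particular hallucinate column names.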
by mywittyname
4/1/2025 at 7:11:39 AM
Running any decent local LLM (I mean at least at 4o-mini level) with Ollama or in any other way is still very expensive.
by caro_kann
4/1/2025 at 8:20:41 AM
There’s this popular pastime lately where people pretend to be the last rational mind on Earth while making blanket pronouncements about tech they don’t understand.
> “Avoid it! Perplexity rules prohibit commercial use…”
Cool story, but also: irrelevant. Nobody serious is shipping products on Perplexity as a backend. It’s a research tool with a nice wrapper. The people building with LLMs are using OpenAI, Claude, Mistral, Groq, and Ollama, depending on the constraints and goals. Acting like the existence of one walled garden invalidates an entire paradigm is like saying cars are bad because golf carts can’t go on highways.
> “ChatGPT rules prohibit sharing the output with other AI…”
There are some restrictions, yes. And that matters… if your use case is literally just shuttling model output between APIs like a glorified message broker. Most developers are fine with this because they’re building systems, not playing telephone.
> “They’re training on everything you pass in…”
This is just flat wrong. OpenAI doesn’t train on API input/output unless you explicitly opt in. The fact that this myth is still circulating tells me people are repeating each other instead of reading the docs.
> “You’re paying to get brain raped…”
If your argument requires dehumanizing metaphors to land, you don’t have an argument. You have trauma cosplay.
> “Coding with OpenAI = being a sheep…”
This is the kind of thing people say when they’ve never delivered software in production. The tools either help or they don’t. Calling people sheep for using powerful tools is anti-intellectualism dressed up as cynicism. Nobody’s building a search engine from scratch to prove they’re not a sheep either. We use leverage. That’s the whole game.
> “You’re paying them to train their model to imitate you…”
Actually, no — again, unless you opt in. But even if that were true, you’re also trading that data for time saved, features shipped, and workflows unlocked. You know, ROI.
> “Better off learning a better (compiled) programming language…”
I have nothing against compiled languages, but this is like telling someone struggling with Figma that they should learn Blender. It might be good advice in a vacuum, but it doesn’t help you win the game that’s actually being played.
> “Why pay OpenAI for the privilege to help them take your job?”
You could’ve said the same thing about AWS. Or GitHub. Or Stack Overflow. Or even programming itself, in the mainframe era. Gatekeeping based on purity is a waste of time. The actual work is understanding what AI can do, what it shouldn’t do, and when to lean in.
> “Why outsource mental activity to a service with a customer noncompete?”
You’re not outsourcing thinking. You’re compressing cognitive overhead. There’s a difference. If you think using AI is “outsourcing thinking,” you were probably outsourcing thinking to Stack Overflow and Copilot already.
⸻
Look, are there risks? Costs? Vendor lock-in traps? Of course. Anyone seriously building with AI right now is absorbing all that volatility and pushing forward anyway, because the upside is that massive. Dismissing the entire space as a scam or a trap is either willful ignorance or fear masquerading as intellectual superiority.
If you don’t want to use AI tools, don’t. But don’t throw rocks from the sidelines at the people out there testing edge cases, logging failures, bleeding tokens, and figuring out what’s possible.
by timbritt