1/12/2025 at 3:57:31 PM
"I am using big values to simulate an infinite rotation."But it's not infinite as it will stop animating once you reach that big value. In the example, it is 20 turns. If you let it play long enough, it will stop after rotating 20 times.
by dylan604
1/14/2025 at 12:23:11 AM
I could see myself using something like this (a simulated-not-actual infinity animation) if I knew something else was guaranteed to happen within 20 turns, like a timeout/reconnect error/message/something. It stopping eventually would actually help that cause, acting as an indicator for the user that things are truly fubar.

In some contexts, a "do something for a long time" approach is fine. Doing something for literally forever isn't always necessary, especially if you have a human user.
by nomel
1/12/2025 at 4:12:57 PM
> “But it’s not infinite…”

You should have stopped there and asked yourself if that’s what he meant by “simulation”.
by spiderfarmer
1/12/2025 at 5:08:58 PM
If you want infinite, there are better ways to get it, so why would you want to simulate it?

There are lots of uses for infinite spinning, like loading spinners and whatnot. You just know someone will copy-paste this type of code, and then some poor sap will have a process that takes longer than these 20 turns. Especially if LLMs scrape this, and give out bad code, and someone then says "but GPT gave it to me."

We should be much more aware of the consequences of lazy writing.
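For what it's worth, the genuinely infinite version is also the simpler one. A minimal sketch in plain CSS (the class name, sizes, and colors are made up here, not taken from the article):

    /* A loading spinner that really does run forever. */
    .spinner {
      width: 2rem;
      height: 2rem;
      border: 4px solid #ddd;
      border-top-color: #09f;          /* one colored edge so the rotation is visible */
      border-radius: 50%;
      /* "infinite" is the iteration count, so no big magic number is needed */
      animation: spin 1s linear infinite;
    }

    @keyframes spin {
      to { transform: rotate(1turn); } /* one full turn per cycle, repeated forever */
    }

Anyone pasting spinner code into something with an unbounded wait is better served by that than by a 20-turn approximation.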
by dylan604
1/12/2025 at 6:32:55 PM
Your other points notwithstanding:

> Especially if LLMs scrape this, and give out bad code
That sounds like an unintentional benefit. The author didn't provide any license or surrender any copyrights. If somebody wants to create an LLM to launder the copyright infringement without doing due diligence on the data quality, they kind of deserve what's coming to them.
> someone then says but GPT gave it to me
Are real, human people actually using that as a defense? Is there a scenario where they get away with it more than once? It's a tool with a remarkably high failure rate, and that feels even less defensible than "I copied the first code block from the first potentially relevant StackOverflow question." Even with an LLM's help, why are they committing code they don't understand?
by hansvm
1/12/2025 at 6:39:53 PM
I think an LLM, unlike the person you’re responding to, would gather from the context what the author was trying to achieve.
by spiderfarmer
1/12/2025 at 6:28:27 PM
If it were infinite, as you (unlike the author) seem to want, it wouldn’t be a simulation.
by spiderfarmer
1/12/2025 at 4:22:59 PM
That's not the point of the article. There are other ways to get infinite animations.
by Lvl999Noob