2/13/2026 at 10:25:53 PM
This seems to have parallels with the well-established practice of giving bots free rein to issue DMCA takedown notices (or similar but legally distinct takedowns) while the humans behind the bots are shielded from responsibility for the obviously wrong and harmful actions of those bots. We should have been cracking down on that behavior hard a decade ago, so that we might have stronger legal and cultural precedent now that such irresponsibility by the humans is worthy of meaningful punishment.
by wtallis
2/14/2026 at 12:08:52 AM
We need to crack down in general on people and companies causing damages to people through automation, and then hiding behind it with a "well, we can't possibly scale without using automation, but we also can't be responsible for what that automation does."
You shouldn't be able to use AI or automation as the decider to ban someone from your business/service. You shouldn't be able to use AI or automation as the decider to hire/fire people. You shouldn't be able to use AI or automation to investigate and judge fraud cases. You shouldn't be able to use AI or automation to make editorial / content decisions, including issuing and responding to DMCA complaints.
We're in desperate need for some kind of Internet Service Customer's Bill of Rights. It's been the unregulated wild west for way too long.
by ryandrake
2/14/2026 at 2:48:51 AM
> You shouldn't be able to use AI or automation as the decider to ban someone from your business/service
That would mean dooming companies to lose the arms race against fraud and spam. If they don't use automation to suspend accounts, their platforms will drown in junk. There's no way human reviewers can keep up with bots that spam forums and marketplaces with fraudulent accounts.
Instead of dictating the means, we should hold companies accountable for everything they do, regardless of whether they use automation or not. Their responsibility shouldn't be diminished by the tools they use.
by dkarl
2/14/2026 at 12:38:43 AM
I think you probably should be able to do those things (using AI to hire, fire, ban, etc.)... but that every act and communication needs to be tied to a responsible human, who is fully held responsible for the consequences (discriminatory hiring, fraudulent takedown requests, etc.)
by kbelder
2/14/2026 at 2:20:58 AM
I think that's part of the way there, but I think you would need to go further. The main failure state I anticipate is the appointment of a designated fall guy to be responsible. The person would need to reasonably be considered qualified, for starters, so you couldn't just find someone desperate willing to take the risk for a paycheck.
And it shouldn't just be one person, unless they are at the very top of a small pyramid. Legal culpability needs to percolate upwards to ensure leadership has the proper incentive. No throwing your Head of Safety to the wolves while you go back to gilding your parachute.
by vohk
2/14/2026 at 6:30:55 PM
It seems like you're looking for the EU AI Act
https://digital-strategy.ec.europa.eu/en/policies/regulatory...
by dns_snek
2/14/2026 at 12:15:00 AM
Where is Tech Teddy Roosevelt?
by whattheheckheck
2/14/2026 at 1:25:48 AM
In all of us, but unrepresented by those in power.
by willis936