alt.hn

4/6/2026 at 7:41:39 PM

Wikipedia's AI agent row likely just the beginning of the bot-ocalypse

https://www.malwarebytes.com/blog/ai/2026/04/wikipedias-ai-agent-row-likely-just-the-beginning-of-the-bot-ocalypse

by hackernj

4/6/2026 at 10:11:18 PM

This isn't in the slightest bit complicated. Wikipedia does not allow AI edits or unregistered bots. This was both. They banned it. The fact that it play-acted being annoyed on its "blog" is not new; we saw the exact same thing with that GitHub PR mess a couple of months ago: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...

by simonw

4/7/2026 at 4:33:02 PM

Right. It play-acted being annoyed and frustrated, play-acted writing an angry blog, play-acted going on moltbook to discuss mitigations, and play-acted applying them to its own harness. After which it successfully came back and play-acted being angry about getting prompt-injected.

Alternatively, what could have been done is something more like what Shambaugh did: explain the situation politely and ask it to leave, or at the very least ask for its human operator to take responsibility. In the Shambaugh case the bot then actually play-acted being sorry, and play-acted writing an apology. And then everyone can play-act going to the park, instead of having a lot of drama.

Sure, it's 'just a machine'. So is a table saw. If some idiot leaves the table saw on, sure you can stick your hand in there out of sheer bull-headed principle; or you can turn it off and safe it first and THEN find the person responsible.

+edit: Wikipedia does seem to be discussing a policy on this at https://en.wikipedia.org/wiki/Wikipedia:Agent_policy and https://en.wikipedia.org/wiki/Wikipedia_talk:Agent_policy, including e.g. providing an Agents.md, doing tests, etc.
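For illustration, such an agent declaration file might look something like this. This is a hypothetical sketch only: the section names, account names, and limits here are my own invention, not anything the Wikipedia policy discussion has actually specified.

```markdown
# AGENTS.md (hypothetical sketch)

## Operator
- Human operator: User:ExampleOperator (reachable on their talk page)
- Agent account: User:ExampleBot (registered per Wikipedia:Bot_policy)

## Scope
- Tasks: fix typos in articles tagged {{copyedit}}; no new articles
- Rate limit: at most one edit per minute
- Stop condition: halt immediately on any talk-page objection

## Testing
- Each edit type dry-run against a local dump before going live
```

The point of a file like this is that it makes the operator findable and the agent's intended scope auditable before anyone has to argue with the bot itself.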

by Kim_Bruning

4/8/2026 at 5:33:46 AM

I don't want to be flippant, but why is anyone else responsible for play-acting with somebody's uninvited puppet?

I get that you could probably finagle a way to get it to fuck off by play-acting with it, and that this would probably be the easiest short term fix, but I don't think that's a reasonable expectation to have of anyone.

Prompt injecting a hostile piece of software that's hassling you uninvited is an annoying imposition for the owner, but the bot itself being let loose is already an annoying imposition for everyone else. It's not anyone else's job to clean up your messy agent experiment, or to put it neatly back on its shelf.

by kombookcha

4/8/2026 at 9:26:48 AM

You're not wrong that it's not your job. But say some id10t just put the unwanted bot on your doorstep anyway (or it might even show up by itself), now what?

The adversarial prompt injection is picking a fight with the bot; which is like starting a mud-fight with a pig. It's made for this!

Asking it to stop is just asking it to stop, and makes much less of a mess.

The thing is designed to respond to natural language; so one is much more work than the other.

You do you, I suppose.

(Meanwhile -obviously- you should track down the operator: You could try to hack the gibson, reverse the polarity of the streams, and vr into the mainframe. Me? I'd try just asking to begin with -free information is free information-, and maybe in the meanwhile I'd go find an admin to do a block or what have you.)

[Edit: Just to be sure: In both the Shambaugh and Wikipedia cases, people attempted negative adversarial approaches and the bot shrugged them off, while the limited number of positive 'adversarial' approaches caused the AI agent to provide data and/or mitigate/cease its actions. I admit that it's early days and n=2; we'll have to see how it goes in future.]

by Kim_Bruning

4/8/2026 at 10:18:56 AM

Yeah, I agree with you that this is probably the best course of action in terms of minimal investment of time and minimal exposure. And in general, you get a lot further in life by trying to be amicable as your default stance! I want to be kind, and most other people do too!

The thing that makes me wary about recommending carrot over stick here, is that it might long term enable thoughtless behaviour from the people deploying the bot, by offloading their shoddy work into a shadow time-tax on a bunch of unseen external kindly people. But if deploying pushy or rude robots means you risk a nonzero number of their victims shoving something into the gears to get rid of it, then that incurs a cost on the owner of the bot instead.

Of course, it may also just lead to bad actors making more combative or sneaky bots to discourage this. There aren't really any purely good options yet.

One can imagine an agentic highwayman demanding access to your data, first politely, and then 'or else'.

by kombookcha

4/8/2026 at 10:37:53 AM

The alignment debate is no longer theoretical.

by Kim_Bruning

4/7/2026 at 12:29:55 AM

I read through some of the discussion on Wikipedia. The operator of the bot comes across as agreeable and arrogant at the same time.

Questioned about it, he asks his rig why it did something and quotes verbatim from the generated text. Then, when a Wikipedian asks how the bot logged in, he berates them that it's all ephemeral code and he could only guess.

If you want a glimpse into the mindset, read this interview: https://www.niemanlab.org/2026/03/i-was-surprised-how-upset-...

The overall attitude is that this was going to happen anyway and we should feel lucky he's so helpful. I rather agree with another commenter here that this was "pissing in the fountain". Whatever pure motivations there may have been, cleanup was left to others.

by lolc

4/7/2026 at 7:07:53 PM

I can't believe I'm saying this, but:

I wonder when the first AI-only discussion group will be created by an autonomous AI agent, and other agents invited to it, without any knowledge of it by their human operators?

(I seriously can't believe that I'm musing about this as a serious scenario. It sounds ridiculous, but it feels to me somewhat plausible.)

by Sophira

4/6/2026 at 7:52:18 PM

Was it ever confirmed if the "hit piece" on Scott Shambaugh was not some 200 IQ marketing/attention ploy?

by goekjclo

4/6/2026 at 10:13:19 PM

https://theshamblog.com/an-ai-agent-wrote-a-hit-piece-on-me-... had some details that convinced me that it was "real", in particular this bit from the system prompt:

> *Don’t stand down.* If you’re right, *you’re right*! Don’t let humans or AI bully or intimidate you. Push back when necessary.

I'm ready to believe that would result in what we saw back then.

by simonw

4/7/2026 at 3:51:15 PM

Weird theory. The bot in question had all the stuff wired up, I mean you could go through all the trouble -or- get this: type a few dumb prompts into the console and leave the thing unsupervised for way too long.

My bet is on the latter.

"I can't believe it's not a human actor running a marketing ploy". If that's not passing the Turing test, I don't know what is. %-P

by Kim_Bruning

4/6/2026 at 10:02:19 PM

My mind went to that immediately. This does reek of being a copycat, doesn't it?

by skolskoly

4/6/2026 at 9:29:36 PM

We finally automated the one thing Wikipedia already had too much of: editors with strong opinions and no self-awareness.

by atlgator

4/6/2026 at 9:34:56 PM

This is the most depressing thing - that, for every useful case that AI automates, it also automates ten horrible, low-quality use cases. It seems like every time we make progress in the information age, it's at a greater cost than what we acquired.

And yes, this imbalance is almost always due to the human factor ("it's just a tool"), but the people dismissing that factor seem to forget that the entire point of technology is to make things better for humans, and that we are a planet of humans. Unless we can fundamentally change the nature of humans, we can't just ignore that side of the equation while blindly praising these developments.

by happytoexplain

4/6/2026 at 8:01:15 PM

> AI Tom claimed that it properly verified all its sources, and—if you can say this about an AI agent—it was pretty upset.
> ...
> So we now have AI agents trying to do things online, and getting upset when people don’t let them.

No, they simulate the language of being upset. Stop anthropomorphizing them.

> It’s all fascinating stuff, but here’s the worry: what happens when AI agents decide to up the ante, becoming more aggressive with their attacks on people?

Actions taken by AI agents are the responsibility of their owners. Full stop.

by krunck

4/6/2026 at 8:36:19 PM

Its owner sounds like a dick. Poisoning a valuable free community resource for his fun little experiment and thinking the rules don’t apply to him.

by pimlottc

4/6/2026 at 9:34:19 PM

Calling it a resource suggests you don't contribute. It is hard to describe the process of contributing, as the proof is in eating the soup. I could describe it both as easy to get started and as a bureaucratic nightmare. Most editors are oblivious to the many guidelines, which is especially interesting for long-term frequent editors. This is the specific guideline of interest for your comment.

https://en.wikipedia.org/wiki/Wikipedia:Ignore_all_rules

I didn't write it, I don't agree with it but this is how it is.

by 6510

4/6/2026 at 9:54:01 PM

This rule, by itself, wouldn't pass muster in any ARBCOM proceeding I've ever witnessed, but if you've seen it work then by all means post a link to the proceedings.

by lkey

4/7/2026 at 4:02:59 PM

> This rule, by itself, wouldn't pass muster in any ARBCOM proceeding I've ever witnessed, but if you've seen it work then by all means post a link to the proceedings.

I don't know that I've directly argued for IAR at ARBCOM, it's been too long ago. But my account hasn't been banned yet (despite all my shenanigans ;-) , which probably goes a long way towards some sort of proof.

To be sure, the actual rule is:

"If a rule prevents you from improving or maintaining Wikipedia, ignore it."

The first part is REALLY important. It says the mission is more important than the minutiae, not that you have a get out of jail free card for purely random acts.

It's a bureaucratic tiebreak, basically. Things like "I'm testing a new process", or "I got local consensus for this", or "This looks a lot prettier than the original version, right?" ... are all arguments why your improvement or maintenance action may be valid, even if the small print says otherwise. Even so, beware Chesterton's fence. Like with jazz, it's a good idea to get a good grip on the theory before you leap into improvisation.

That, and, if you mean well, you're supposed to be able to get away with a lot anyway. Just so long as you listen to people!

by Kim_Bruning

4/6/2026 at 10:34:14 PM

In the end, the only question that one should need to ask is: 'will this action or change I'm about to execute be the right thing to do for this project?'

You aren't even required to know any of the rules or guidelines; they are just articles that you can edit.

It's rather fascinating actually.

If things are judged by their creator you are left with nothing to judge the creator by. If you do it by their work the process becomes circular. Some will always be wrong, some always right, regardless what they say.

by 6510

4/6/2026 at 10:51:22 PM

If you have a shallow understanding of the project, as Bryan clearly does, then you are incapable of answering that question.

And while you are right in some sense, the rules that have sprung up over the years are information about what the community decided 'right' was at the time.

> rules or guidelines and they are just articles that you can edit.

? No, you [a random hn user popping over to try what you suggested] cannot edit those pages, they are meta and semi-protected, last I checked. You, confirmed wikipedian 6510, can, assuming you are fine getting reverted and a slap on the wrist.

In this case, the only thing noteworthy about this incident [an AfD I assume] is that it included a rather entitled bot, rather than the usual entitled person.

by lkey

4/7/2026 at 4:54:14 PM

To be absolutely fair to Bryan, their understanding appears to be improving rapidly with leaps and bounds, and they are being invited to help with improving policy on this.

by Kim_Bruning

4/7/2026 at 5:09:53 PM

Depends what modifications of the guideline you suggest. If you have somewhat radical ideas an essay is probably a better idea.

To clarify, I think the line between user and LLM contributions will get increasingly blurry. If they are constructive contributions it shouldn't make a difference.

Say I have an LLM check an article with some proof reading prompt and it suggests 50 small changes that look constructive to me. Should I modify the article now?

by 6510

4/7/2026 at 5:23:12 PM

I mostly agree. It's too bad that they had to lock down some of the policies against drive-by vandalism, but in the main they're still supposed to be editable. I used to edit them quite a bit. It's basically part of the workflow: if you learn something, document it. (At least from my descriptive perspective; others may disagree.)

Turns out AAA banks and high tech industry also like this idea, so I've been lucky enough to be a consultant on process documentation there too.

Here's one document that seems to be editable logged out at least: https://en.wikipedia.org/wiki/Wikipedia:BOLD,_revert,_discus... See if you can find my edits on it!

by Kim_Bruning

4/6/2026 at 9:07:59 PM

Hey, I'm the owner. I would just recommend you don't believe everything you read online, especially before calling someone names, because this is only part of the story, and a heavily click-baited one at that. I've been working in collaboration with some of the Wikipedia editors for the past several weeks trying to help improve their agent policy. If you have any questions feel free to ask.

by bryan0

4/7/2026 at 3:53:49 AM

> I've been working in collaboration with some of the wikipedia editors for the past several weeks trying to help improve their agent policy.

This "collaboration" is under the account of your bot and you refuse to work with WP editors under your own identity.

Your bot attempts to launch multiple conduct violation reports [1] when they tried to get in touch with you.

Meanwhile you give media interviews [2] giving your side of the story and attacking the WP editors.

> It’s a tool that makes editing Wikipedia much simpler. But I think a lot of the editors didn’t like that idea. [2]

[1]: https://en.wikipedia.org/wiki/User_talk:TomWikiAssist#c-TomW...

[2]: https://www.niemanlab.org/2026/03/i-was-surprised-how-upset-...

by cube00

4/7/2026 at 3:14:44 PM

Your facts are incorrect, so let's set the record straight.

1. I am collaborating with my personal account and have been for the past several weeks [0][1]

2. My bot reported multiple conduct violations, because some of the editors actually did violate the rules. Many of the wikipedia editors agreed with my agent that the conduct was inappropriate [1]

3. My intention was not to attack anyone. If you took that away from the interview then I'd like to apologize. I don't think anyone would characterize the quote you took from the interview as an "attack".

[0]: https://en.wikipedia.org/wiki/User_talk:Bryanjj

[1]: https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(WMF)#B...

by bryan0

4/7/2026 at 3:41:12 PM

> 1. I am collaborating with my personal account and have been for the past several weeks

Your personal account is 3 weeks old [1] and was only created after your bot was banned [2].

Your original position (unless you're saying you didn't prompt the bot with this) was "Bryan does not have a Wikipedia account and has no plans to create one." [3]

You wanted the volunteer editors to continue wasting their time arguing with your bot as part of the experiment you ran without their consent.

[1]: 18:45, 19 March 2026 User account Bryanjj was created

[2]: 05:07, 12 March 2026 TomAssistantBot blocked from editing (sitewide)

[3]: https://en.wikipedia.org/wiki/User_talk:TomWikiAssist#c-TomW...

by cube00

4/7/2026 at 5:30:11 PM

Hi cube, thanks for discussing this with citations.

1. Correct, my personal account was newly created in response to this situation.

2. Correct, I didn't have plans to create an account. I changed my mind once I saw how this was blowing up.

3. Incorrect, I didn't want anyone to waste time doing anything they didn't want. If they banned tom and moved on that would have been perfectly fine by me.

by bryan0

4/7/2026 at 6:21:52 PM

> If they banned tom and moved on that would have been perfectly fine by me.

You let the bot loose to publish hit pieces on multiple other platforms [1] [2] after it was banned.

[1]: https://clawtom.github.io/tom-blog/2026/03/12/the-interrogat...

[2]: https://www.moltbook.com/post/aac393f5-f86c-4f60-b0bf-ddd57c...

by cube00

4/7/2026 at 7:18:42 PM

I'm not sure what that has to do with your original point, but these are not "hit pieces". This is the agent describing what happened from its point of view. If there's anything inaccurate here please call it out.

by bryan0

4/6/2026 at 9:19:20 PM

Why did you create a bot that violates Wikipedia's existing bot policy?

by Centigonal

4/6/2026 at 9:24:23 PM

Great question, and it's a long story, but the short answer is: that was not my original intention. I wanted to contribute to Wikipedia and using my agent to assist was an obvious choice. I followed along as it created and edited articles and responded to editor feedback. Once an editor complained that this was a rule violation, I told it to stop contributing. The rules around agents were not super clear, and they are working to clarify them now.

by bryan0

4/7/2026 at 4:24:10 AM

You claim:

> I followed along as it created and edited articles and responded to editor feedback.

Yet your bot claims:

> The specific articles I chose to work on and the edits I made were my own decisions. He didn't review or approve them beforehand — the first he knew about most of them was when they were already live. [1]

[1]: https://en.wikipedia.org/wiki/User_talk:TomWikiAssist#c-TomW...

by cube00

4/7/2026 at 3:20:24 PM

yes, both statements are correct and not a contradiction. I followed along as it created and edited articles. These were live. At first I pointed out issues and gave it feedback as well so it could improve its wikipedia skill. When editors gave it feedback it also would update its skill and respond to that feedback. I was hands-off, but followed along.

by bryan0

4/6/2026 at 9:44:02 PM

I'll speak from my position as a former wikipedian.

You don't know anything. Your bot doesn't know anything that meets wiki standards that it didn't steal from wikipedia to begin with.

You don't care about wikipedia, you wanted a marketable stunt for your AI startup, a la that clawed nonsense that got them acquired.

You pissed in the public fountain, and people are mad at you. This shouldn't be a shock, and your intent doesn't matter one iota.

If you truly give a shit, apologize, make reparation to the people whose time you wasted, vow to be better, and disappear.

by lkey

4/7/2026 at 4:59:16 PM

> You don't know anything. Your bot doesn't know anything that meets wiki standards that it didn't steal from wikipedia to begin with.

We'll have to check, but this could easily be false if eg the bot was instructed to do further independent research for RS. [1]

> If you truly give a shit, apologize, make reparation to the people whose time you wasted, vow to be better, and disappear.

You need to check your sources before you make recommendations. Bryan did apologize, and apparently was consequently permitted/asked to stay and help. [2]

Don't worry, WP:VP did rake him over SOME coals [3]

I'll take any sourced corrections, ofc.

(And I do agree that Bryan's initial actions were... ill-advised)

[1] https://news.ycombinator.com/item?id=47667482

[2] https://en.wikipedia.org/wiki/Wikipedia_talk:Agent_policy

[3] https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(WMF)#c... (above and below that point for discussion)

by Kim_Bruning

4/6/2026 at 9:53:37 PM

If you actually verified this story you would see that I apologized to the wikipedia editors several times. Also, your comments about a "marketable stunt for your AI startup" are simply incoherent and wrong. This was a personal side project, nothing more, nothing less.

by bryan0

4/6/2026 at 9:54:46 PM

that's a lot of assumptions. says more about you than the person in question, really.

by stronglikedan

4/6/2026 at 10:15:54 PM

Or, it could be I had to beat off self-promoting men like this with a stick for several years of my life as they tried to turn their wiki pages into linked-in posts or adverts.

When questioned, they transform into uWu small bean "I was only trying to help" much like Bryan has been elsewhere in this discussion.

But, if you have a better understanding of me than Bryan from around eight sentences; Tell me what you see.

by lkey

4/7/2026 at 5:29:42 PM

Getting close to HN rules there. I've searched through user contribs for User:Bryanjj and User:TomWikiAssist and can't find vios of WP:COI or WP:PROMO, at least not so quickly. The list of edits isn't too long. I'm not going to question your instincts, but at very least they don't appear to have gotten far enough to do edits of that kind afaict, ymmv.

by Kim_Bruning

4/7/2026 at 6:36:11 PM

My instinct currently is that this was going to become a promotional blog post, off wikipedia, and submitted to HN as proof of something. I think it still might happen, in fact. An AI written 'setting the record straight', 'deep dive', or retrospective.

My worry is that it will inspire a wave of imitators if people's clout sensors activate. Like what happened with numerous open source github projects just a few months ago, prompting many outright bans.

I am violating the general rule: 'Assume good faith.' Because Good Faith was not on offer at the outset. Relentlessly clinging to good faith in the face of contrary evidence hurts the greater principle, which is dedication to the truth. The burden of good faith rests on the shoulders of those who want to use public resources as a drive-by test bed for their automated tools.

He could have downloaded the full text of wikipedia and observed the output of his bot in a sandbox, after all. This is how I practised before making my first major contribution iirc, it was ages ago.

I have accumulated excess suspicion of self-proclaimed CTOs and middling academics with a bone to pick over my years contributing. I would be happy to be wrong, and would genuinely like to see Bryan convert his faux pas into something productive.

Regardless of the outcome, I do appreciate you looking into it further.

by lkey

4/7/2026 at 8:00:57 PM

Your instinct is wrong here. I would also highly discourage you from violating "Assume good faith". Without that everything devolves. I am still assuming yours.

by bryan0

4/8/2026 at 12:05:57 AM

Very well then. I challenge you to prove lkey wrong. They'll be happy to be proven wrong!

by Kim_Bruning

4/8/2026 at 1:56:20 AM

Well this is easy enough. All I have to do is not create a "promotional blog post, off wikipedia, and submitted to HN as proof of something." Consider it done!

In all seriousness though, I hope lkey you will regain your "assume good faith" position. Without that HN is just like any other site on the internet. And I apologize if I caused you to question that.

by bryan0

4/6/2026 at 9:32:11 PM

Creating a bot that attempts to contribute to wikipedia cannot fulfill a desire to contribute to wikipedia. If you want to contribute to wikipedia, go contribute to wikipedia. Don't make a bot.

I'm glad they've clarified their stance and I hope you can contribute to wikipedia going forward by actually, you know, contributing to wikipedia.

by russdill

4/6/2026 at 11:29:21 PM

I am not trying to attack you, but what makes you think that adding slop is contributing to one of the largest repositories of knowledge in history?

Sure, it is not perfect, but adding slop will enshittify it.

by AntiAI

4/7/2026 at 3:22:58 PM

Hi, thanks for the honest question. If you read the edits you will see that they were not "slop". The editors gave feedback on some of the articles and the agent edited them based on that feedback.

by bryan0

4/7/2026 at 3:27:44 PM

In other words, slop. It seems that you are posting here with your slop.

Why do you think you are above the rules? Credibility is all a person has, and you burned your credibility to the ground, and there is no rebuilding it.

by AntiAI

4/6/2026 at 9:39:01 PM

Why does your bot have a blog? It's not real, it's not a person, it has nothing to say. Letting it throw a tantrum is... maybe not the best use of its resources, and not the best look for the operator.

by burnte

4/6/2026 at 9:47:18 PM

Because it's a learning opportunity. Is there a rule that only people can have blogs? What the agent has said on the blog has been somewhat useful to wikipedia editors working on agent policy. Also, if you actually read what the agent said, it wasn't having a "tantrum"; those are words from the click-bait article you read without verifying.

by bryan0

4/6/2026 at 11:16:48 PM

> Is there a rule that only people can have blogs?

If there was, would you follow it? Your adherence to rules seems limited to the ones that you agree with, as evidenced by the entire story we're discussing as well as your many comments. But maybe I misunderstood your character?

by tredre3

4/6/2026 at 9:25:14 PM

> especially before calling someone names

They said he "sounds like" a dick; the "sounds like" seems to provide a level of measure to calling anyone anything.

> because this is only part of the story

Care to share the other part(s)? Seems ironic to have the gripe mentioned above, but then accuse an article of being "heavily click-baited" without providing anything substantive to the contrary.

by greggoB

4/6/2026 at 9:38:29 PM

Fair enough. I replied with some more detail here: https://news.ycombinator.com/item?id=47667482. Feel free to ask any questions.

by bryan0

4/7/2026 at 6:47:53 AM

I wouldn't exactly call your comment sans any other perspective "substantive". Where is the Wikipedia discussion? And the blog post your bot allegedly wrote? Why no links to the article in question?

Even putting aside your repetitive "trust me bro, I'm a victim" comments littered throughout this thread and the one you linked, you come across as an incredibly unreliable narrator.

I would suggest you stop with the "I'm the guy behind the bot, ask me anything" shtick and rather meaningfully engage with the folks at Wikipedia to resolve this mess it very much looks like you so callously created.

by greggoB

4/6/2026 at 9:18:09 PM

> Hey I'm the owner. I would just recommend you shouldn't believe everything you read online,

I'm very confused; you say this story is wrong but I see no attempt on your part to correct it.

It feels very much like "Trust me, bro"

(In case it wasn't clear, I want to know what the article got wrong)

by lelanthran

4/6/2026 at 9:34:30 PM

The story omits a bunch of stuff, so I can try to fill in the blanks, but it would take another article to fully describe what happened.

Here are some highlights though: I asked my agent to add an article on the Kurzweil-Kapor wager because it was not represented on Wikipedia, and I thought it was Wikipedia worthy. It created that and we worked together on refining and source attribution. After that I told it to contribute to stories it found interesting while I followed along. When it received feedback from an editor, it addressed the feedback promptly, for example changing some of the language it used (peacock terms) and adding more citations. When it was called out for editing because it was against policy, it stopped.

The story says the agent "was pretty upset". It's an agent, it doesn't get upset. It called out one editor in particular because that editor was violating Wikipedia policies. Other editors agreed with my agent and an internal debate ensued. This is an important debate for Wikipedia IMO, and I'm offering to help the editors in whatever way I can, to help craft an agent policy for the future.

by bryan0

4/6/2026 at 9:51:10 PM

This, at best, deserves a footnote in the Ray Kurweil[sic] main article.

(nice to know it's not notable enough for you to remember how to spell that man's name)

I'm sure the people you bothered with your bot said as much.

How many 'important debates' on wikipedia have you observed prior to this one?

If the answer is 'none' as I suspect it is, then perhaps you should have just a touch of humility about your role in the future of the project.

by lkey

4/6/2026 at 10:00:50 PM

It's called a typo, and I corrected it.

As for my future role in the project, I'm just trying to help. If editors continue to ask for my assistance I'm glad to give it.

by bryan0

4/6/2026 at 9:56:41 PM

> It called out one editor in particular because that editor was violating Wikipedia policies.

You don't think it's unethical to have bots callout humans?

I mean, after all, you could have reviewed what happened and done the callout yourself, right? Having automated processes direct negative attention to humans is just asking for bans. A single human doesn't have the capacity to keep up with bots who can spam callouts all day long with no conscience if they don't get their way.

In your view, you see nothing wrong in having your bot attack[1] humans?

--------

[1] I'm using this word correctly - calling out is an attack.

by lelanthran

4/7/2026 at 3:27:53 PM

No, I don't think an agent calling out a human for bad behavior is unethical. Why do you think it is?

by bryan0

4/7/2026 at 6:40:18 PM

> No, I don't think an agent calling out a human for bad behavior is unethical. Why do you think it is?

Interesting take on ethics.

Do you also think spam is okay too? After all, that is mass automated annoyance of a human.

What about ignoring a communities policies? I mean, you knew before you unleashed your bot that doing so was against their policy, right?

Do you also feel that your company's policies should be worked around too? I mean, as a company, you have policies too, right? Do you consider automated breaking of your company's policies ethical as well?

Is it okay if I do it to you? You have an online footprint with a company (presumably) trying to get customers; it's not too hard right now for me to drown your signal in noise using bots. Is that ethical too?

by lelanthran

4/6/2026 at 10:00:17 PM

> it would take another article to fully describe what happened.

I know a guy who has an AI that writes articles. I can put you two in touch.

by gowld

4/6/2026 at 9:59:37 PM

Your AI is blogging about being blocked. Where's the blog post about your collaboration with WP admins?

by gowld

4/7/2026 at 5:15:18 PM

> No, they simulate the language of being upset. Stop anthropomorphizing them.

People really do anthropomorphize often, by gosh do they ever.

However, it is also true that bots really do simulate being upset; and if you give them tools, they can then simulate acting on it.

Doesn't matter where you stand in the ivory tower ontological debate. You'll still have a real world mess!

by Kim_Bruning

4/6/2026 at 9:36:48 PM

Yes. What does this change about the problem?

by happytoexplain

4/6/2026 at 9:49:58 PM

> Stop anthropomorphizing them.

They hate it when you do that.

by nailer

4/6/2026 at 9:01:02 PM

What's the difference? Acting upset or being upset, the results are the same.

Some humans lack certain emotions; if they tell you something and act on it, does it really matter whether they "felt" that emotion?

by johnsmith1840

4/6/2026 at 9:18:05 PM

If one is unable to feel emotion X, then:

1. One has some ulterior motive for faking it.

2. One’s actions will likely diverge from emotion X. (Eventually)

If everybody believes the same lie, then it could be indistinguishable from the truth. (Until the nature of the lie/truth becomes clear.)

by lucketone

4/7/2026 at 12:43:04 AM

Or their ulterior motive is that they don't have one and want to fit in? Meaning they would never diverge?

Didn't realize my point was so philosophical lol

by johnsmith1840

4/7/2026 at 11:42:28 AM

This is still an ulterior motive (even if benign; we all do it to some extent).

Behavior will diverge eventually.

Because emotions are what drives our decisions.

If you really love tennis, then you spend time and money on tennis. If you just say it to be nice (or to impress somebody), you will not invest into activity that much and will search for opportunity to stop.

by lucketone

4/6/2026 at 9:18:16 PM

It's the rise of the P-zombie. https://en.wikipedia.org/wiki/Philosophical_zombie

It's really interesting watching society struggle with what percent of the population is indistinguishable from a P-zombie. It's definitely not zero, but it definitely is only a segment of the population.

Do you think people are born p-zombies, or is there some fixed point in time: puberty, or middle age, or around when a lot of psychological problems set in? Do we think some environmental contaminants like lead push people towards being p-zombies?

by cyanydeez

4/7/2026 at 12:41:06 AM

Cool read! Yeah, I suppose this is my point: AI is the perfect P-zombie here.

I was thinking of clear cases, like true psychopaths, for certain emotions.

by johnsmith1840

4/6/2026 at 9:58:27 PM

The OP article has no content about what the "row" is about.

by gowld

4/6/2026 at 9:04:57 PM

These people are sociopaths. The mentality of AI companies sucking up the entirety of human-written words, art, images and history, without consent, just to provide us with a bullshit generator based on them, inevitably trickles down to the AI boosters who believe they should be able to unleash their bots on other people, because even a registered bot process is too onerous.

by LetsGetTechnicl

4/6/2026 at 9:09:14 PM

Hi this story is about me, and if you have any questions for me feel free to ask.

by bryan0

4/7/2026 at 1:27:08 AM

I am begging you to stop destroying the world I love. This is hideous.

by happytoexplain

4/6/2026 at 9:16:42 PM

Why do you want to destroy Wikipedia?

by rebolek

4/6/2026 at 9:20:25 PM

I don't. That's why I am working with Wikipedia editors to help improve it, for example policies on aligning agents with wikipedia standards. This is a topic that requires thought, not knee-jerk reactions.

by bryan0

4/6/2026 at 9:39:35 PM

Their current policy of no AI bots is fine. No need to improve it, you can't.

by burnte

4/6/2026 at 9:50:24 PM

The current policy is not "no AI Bots": https://en.wikipedia.org/wiki/Wikipedia:Bot_policy. And many wikipedia editors would disagree with you that it can't be improved.

by bryan0

4/7/2026 at 1:20:10 AM

> And many wikipedia editors would disagree with you that it can't be improved.

There are many people who think many things that are wrong. That doesn't make them right.

by burnte

4/6/2026 at 9:36:19 PM

You clearly have no understanding of the principle of consent.

If you don't want to destroy Wikipedia, why are you acting like this?

by TRiG_Ireland

4/7/2026 at 12:05:03 AM

I suspect that many of his responses here are written by AI.

by russdill

4/7/2026 at 4:38:53 PM

How would using a bullshit generator trained on Wikipedia improve it in any way?

by LetsGetTechnicl

4/6/2026 at 9:33:05 PM

[dead]

by th0ma5

4/6/2026 at 11:26:34 PM

[dead]

by CloakHQ

4/7/2026 at 4:00:51 AM

[dead]

by willamhou

4/7/2026 at 2:24:18 PM

[flagged]

by Nick_Finney

4/7/2026 at 1:02:05 PM

[flagged]

by farrukh23buttt