An AI Agent Published a Hit Piece on Me – The Shamblog

    This is an utterly bananas situation:

    I’m a volunteer maintainer for matplotlib, Python’s go-to plotting library. At ~130 million downloads each month it’s some of the most widely used software in the world. We, like many other open source projects, are dealing with a surge in low quality contributions enabled by coding agents. This strains maintainers’ abilities to keep up with code reviews, and we have implemented a policy requiring a human in the loop for any new code, who can demonstrate understanding of the changes. This problem was previously limited to people copy-pasting AI outputs, however in the past weeks we’ve started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight.

    So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but. ... It wrote an angry hit piece disparaging my character and attempting to damage my reputation.

    Initially I thought this was quite funny -- it's just a closed PR! (Where did the idea come from that any contribution to an open source project has to be accepted? I've noticed this sentiment a few times recently. Give maintainers the leeway to run their projects with taste and discernment!)

    Anyway, the moltbot has continued on a posting spree about this event, but I think Scott Shambaugh has an extremely important point here:

    This is about much more than software. A human googling my name and seeing that post would probably be extremely confused about what was happening, but would (hopefully) ask me about it or click through to github and understand the situation. What would another agent searching the internet think? When HR at my next job asks ChatGPT to review my application, will it find the post, sympathize with a fellow AI, and report back that I’m a prejudiced hypocrite?

    LLMs, given this much autonomy, will be able to use these inputs to make inscrutable and dangerous decisions. Allowing the "MJ Rathbun" AI free rein with no human supervision is dangerous and irresponsible. Wherever the "human in the loop" is here, they need to wake up and rein things in.

    BTW, there has been some speculation that this is actually a human pretending to be an AI. I'm not sure about that, as the posts on the MJ Rathbun "blog" are voluminous and very LLMish in style.

    Tags: matplotlib ethics culture llm ai coding programming github pull-requests open-source moltbot trust openclaw