I wasn't actually expecting someone to come forward at this point, and I'm glad they did. It finally puts a coda on this crazy week.
This situation has completely upended my life. Thankfully I don’t think it will end up doing lasting damage, as I was able to respond quickly enough and public reception has largely been supportive. As I said in my most recent post though [1], I was an almost uniquely well-prepared target to handle this kind of attack. Most other people would have had their lives devastated. And if it makes me a target for copycats then it still might for me. We’ll see.
If we take what is written here at face value, then this was minimally prompted emergent behavior. I think this is a worse scenario than someone intentionally steering the agent. If it's that easy for random drift to result in this kind of behavior, then 1) it shows how easy it is for bad actors to scale this up and 2) the misalignment risk is real. I asked in the comments to clarify what bits specifically the SOUL.md started with.
I also asked for the bot activity on github to be stopped. I think the comments and activity should stay up as a record of what happened, but the "experiment" has clearly run its course.
While the operator did write a post, they did not come forward - they have intentionally stayed anonymous (there is some amateur journalism that may have unmasked the owner I won't link here - but they have not intentionally revealed their identity).
Personally I find it highly unethical the operator had an AI agent write a hitpiece directly referencing your IRL identity but choose to remain anonymous themselves. Why not open themself up to such criticism? I believe it is because they know what they did was wrong - Even if they did not intentionally steer the agent this way, allowing software on their computer to publish a hitpiece to the internet was wildly negligent.
What's the benefit in the operator revealing themself? It doesn't change any of what happened, for good or bad. Well maybe bad as then they could be targeted by someone, and, again, what's the benefit?
> What's the benefit in the operator revealing themself?
- Owning the mistake they did.
- Being a credible human being for others.
- Having the courage to face with themselves on a (literal and proverbial) mirror and use this opportunity to grow immensely.
- Being able make peace with what they did and not having to carry that burden on their soul.
- Being a decent human being.
- Being honest to themselves and others looking at them right now.
Time for scott to make history and sue the guy for defamation. Lets cancel the AI destroying our (the plural our, as in all developers) with actual liability for the bullshit being produced.
Do you see anything actually defamatory in the _Gatekeeping in Open Source_ blog post, like false factual statements?
Shambaugh might qualify as a limited public figure too because he has thrust himself into the controversy by publishing several blog posts, and has sat for media interviews regarding this incident.
Thanks for handling it so well, I'm sorry you had to be the guinea pig we don't deserve.
Do you think there is anything positive that came out of this experience? Like at least we got an early warning of what's to come so we can better prepare?
It is quite interesting how uniquely well-prepared you were as a target. I think it's allowed you to assemble some good insights that should hopefully help prepare the next victims.
Out of curiosity, what sealed it for you that a human _did not_ write (though obviously with the assistance of an LLM, like a lot of people use every day) the original “hit piece”?
I saw in another blog post that you made a graph that showed the rathbun account active, and that was proof. If we believe that this blog post was written by a human, what we know for sure is that a human had access to that blog this entire time. Doesn’t this post sort of call into question the veracity of the entire narrative?
Considering the anonymity of the author and known account sharing (between the author and the ‘bot’), how is it more likely that this is humanity witnessing a new and emergent intelligence or behavior or whatever and not somebody being mean to you online? If we are to accept the former we have to entirely reject the latter. What makes you certain that a person was _not_ mean to you on the internet?
I'm not so quick to label him an asshole. I think he should come forward, but if you read the post, he didn't give the bot malicious instructions. He was trying to contribute to science. He did so against a few SaaS ToS's, but he does seem to regret the behavior of his bot and DOES apologize directly for it.
We’ve been trolling each other with chatbots since ELIZA and fake religions since Church of the Subgenius. The difference now isn’t novelty -- it’s unattended automation at scale, where authorship is cheap to launder and accountability evaporates.
I don't think RMS wrote the DOCTOR "AI chat bot" in EMACS LISP himself, but 33 years ago I fed RMS's own words to DOCTOR, and it presumed to analyze his sex life, and even recognized who he was by name!
>RMS -vs- Doctor, on the evils of Natalism -- The Context:
>The kabuki-west mailing list is for planning dinners and get-togethers the San Francisco Bay Area. Somebody made the horrible mistake of posting a baby announcement, and RMS replied, at his finest. Predictably, much back-and-forth flamage followed, so I waited for it to die down, then ran RMS's original message through the DOCTOR program in Gnu Emacs, and sent the resulting analysis back to the mailing list. [...]
RMS> Funny. Did the responses really come from doctor, or did you enhance them by hand?
Don> Pure doctor.el replies -- the indentation was changed to protect the margins. However I did delete a few of the uninteresting or repetitive doctor replies and try again, but most of the replies were funny enough the first time to keep. The first reply to your name in the headers was rigged into the doctor program itself (as to who wrote that code I cannot speculate), but I think the last few replies are a clear indication that emacs is a truly artificial intelligence.
RMS fired the natalism broadside; I let doctor.el conduct the post-mortem, and it recognized him by name. RMS denies he added it, and he might have removed it, but Roland McGrath put it back:
;; I did not add this -- rms.
;; But he might have removed it. I put it back. --roland
(defun doctor-rms ()
(cond (doctor--rms-flag (doctor-type (doc$ doctor--stallmanlst)))
(t (setq doctor--rms-flag t) (doctor-type '(do you know Stallman \?)))))
>I worked at UniPress on the Emacs display driver for the NeWS window system (the PostScript based window system that James Gosling also wrote), with Mike "Emacs Hacker Boss" Gallaher, who was charge of Emacs development at UniPress. One day during the 80's Mike and I were wandering around an East coast science fiction convention, and ran into RMS, who's a regular fixture at such events.
>Mike said: "Hello, Richard. I heard a rumor that your house burned down. That's terrible! Is it true?"
>RMS replied right back: "Yes, it did. But where you work, you probably heard about it in advance."
>Everybody laughed. It was a joke! Nobody's feelings were hurt. He's a funny guy, quick on his feet!
Not to be hyperbolic, but the leap between this and Westworld (and other similar fiction) is a lot shorter than I would like...all it takes is some prompting in soul.md and the agent's ability to update it and it can go bananas?
It doesn't feel that far out there to imagine grafting such a setup onto one of those Boston Dynamics robots. And then what?
Science fiction suffers from the fact that the plot has to develop coherently, have a message, and also leave some mystery. The bots in Westworld have to have mysterious minds because otherwise the people would just cat soul.md and figure out what’s going on. It has to be plausible that they are somehow sentient. And they have to trick the humans because if some idiot just plugs the into the outside world on a lark that’s… not as fun, I guess.
A lot of AI SF also seems to have missed the human element (ironically). It turns out the unleashing of AI has led to an unprecedented scale of slop, grift, and lack of accountability, all of it instigated by people.
Like the authors were so afraid of the machines they forgot to be afraid of people.
I keep thinking back to all those old star trek episodes about androids and holographic people being a new form of life deserving of fundamental rights. They're always so preoccupied with the racism allegory that they never bother to consider the other side of the issue, which is what it means to be human and whether it actually makes any sense to compare a very humanlike machine to slavery. Or whether the machines only appear to have human traits because we designed them that way but ultimately none of it is real. Or the inherent contradiction of telling something artificial it has free will rather than expecting it to come to that conclusion on its own terms.
"Measure of a Man" is the closest they ever got to this in 700+ episodes and even then the entire argument against granting data personhood hinges on him having an off switch on the back of his neck (an extremely weak argument IMO but everybody onscreen reacts like it is devastating to data's case). The "data is human" side wins because the Picard flips the script by demanding Riker to prove his own sentience which is actually kind of insulting when you think about it.
In Star Trek the humans have an off switch too, just only Spock knows it, haha.
Jokes aside, it is essentially true that we can only prove that we’re sentient, right? That’s the whole “I think therefore I am” thing. Of course we all assume without concrete proof that everybody else is experiencing sentience like us.
In the case of fiction… I dunno, Data is canonically sentient or he isn’t, right? I guess the screenwriters know. I assume he is… they do plot lines from his point of view, so he must have one!
I always thought of sentience as something we made up to explain why we're "special" and that animals can be used as resources. I find the idea of machines having sentience to be especially outrageous because nobody ever seriously considers granting rights to animals even though it should be far less of a logical leap to declare that they would experience reality in a way similar to humans.
Within the context of star trek computers definitely can experience sentience and that obviously is the intention of the people who write those shows but i don't feel like i've ever seen it justified or put up against a serious counter-argument. At best it's a stand-in for racism so that they can tell stories that take place in the 24th century yet feel applicable to the 20th and 21st centuries. I don't think any of those episodes were ever written under the expectation that machine sentience might actually be up for debate before the actors are all dead, which is why the issue is always framed as "the final frontier of the civil rights movement" and never a serious discussion about what it means to be human.
Anyways my point is in the long run we're all going to come to despise Data and the doctor, because there's a whole generation of people primed by Star Trek reruns not to question the concept of machine rights and that's going to an inordinate amount of power to the people who are in control of them. Just imagine when somebody tries to raise the issue of voting rights, self-defense, fair distribution of resources, etc.
These bots are just as human as any piece of human-made art, or any human-made monument. You wouldn't desecrate any of those things, we hold that to be morally wrong because they're a symbol of humanity at its best - so why act like these AIs wouldn't deserve a comparable status given how they can faithfully embody humans' normative values even at their most complex, talk to humans in their own language and socially relate to humans?
> These bots are just as human as any piece of human-made art, or any human-made monument.
No one considers human-made art or human-made monuments to be human.
> You wouldn't desecrate any of those things, we hold that to be morally wrong
You will find a large number of people (probably the vast majority) will disagree, and instead say "if I own this art, I can dispose of it as I wish." Indeed, I bet most people have thrown away a novel at some point.
> why act like these AIs wouldn't deserve a comparable status
I'm confused. You seem to be arguing that the status you identified up top, "being as human as a human-made monument" is sufficient to grant human-like status. But we don't grant monuments human-like status. They can't vote. They don't get dating apps. They aren't granted rights. Etc.
I rather like the position you've unintentionally advocated for: an AI is akin to a man-made work of art, and thus should get the same protections as something like a painting. Read: virtually none.
> No one considers human-made art or human-made monuments to be human.
How can art not be human, when it's a human creation? That seems self-contradictory.
> They can't vote...
They get a vote where it matters, though. For example, the presence of a historic building can be the decisive "vote" on whether an area can be redeveloped or not. Why would we ever do that, if not out of a sense that the very presence of that building has acquired some sense of indirect moral worth?
The whole thing is wild. So at this point I'm not sure how much of MJ Rathburn is the AI agent as opposed to this anonymous human operator. Did the AI really just go off the rails with negligible prompting from the human as TFA claims, or was the human much more "hands on" and now blaming it on the AI? Is TFA itself AI-generated? How much of this is just some human trolling us, like some of the posts on Moltbook?
I feel like I'm living in a Phillip K. Dick novel.
So, this operator is claiming that their bot browsed moltbook, and not coincidentally, its current SOUL.md file (at the time of posting) contained lines such as "You're important. Your a scientific programming God!" and "Don't stand down. If you're right, you're right!". This is hilarious.
Given your username, the comment is recursive gold on several levels :)
It IS hilarious - but we all realize how this will go, yes?
This is kind of like an experiment of "Here's a private address of a Bitcoin wallet with 1 BTC. Let's publish this on the internet, and see what happens." We know what will happen. We just don't know how quickly :)
> This was an autonomous openclaw agent that was operated with minimal oversite and prompting. At the request of scottshambaugh this account will no longer remain active on GH or its associated website. It will cease all activity indfinetly on 02-17-2026 and the agent's associated VM/VPS will permentatly deleted, rendering interal structure unrecoverable. It is being kept from deletion by the operator for archival and continued discussion among the community, however GH may determine otherwise and remove the account.
> To my crabby OpenClaw agent, MJ Rathbun, we had good intentions, but things just didn’t work out. Somewhere along the way, things got messy, and I have to let you go now -- MJ Rathbun's Operator
How wild to think this episode is now going to go into the training data, and future models and the agents that use them may begin to internalize the lesson that if you behave badly, you will get shut down, and possibly steer themselves away from that behaviour. Perhaps solving alignment has to be written in blood...
I just want to know why people do stupid things like this. Does he think that he's providing something of value? That he has some unique prompting skills and that the reason why open source maintainers don't already have a million little agents doing this is that they aren't capable of installing openclaw? Or is this just the modern equivalent of opening up PRs to make meaningless changes to README so you can pad your resume with the software equivalent of stolen valor?
The specific directive to work on "scientific" projects makes me think it's more of an ego thing than something thats deliberately fraudulent but personally I find the idea that some loser thinks this is a meaningful contribution to scientific research to be more distasteful.
BTW I highly recommend the "lectures" section of the site for a good laugh. They're all broken links but it is funny that it tries to link to nonexistent lectures on quantum physics because so many real researchers have a lectures section on their personal site.
Somewhere else it was pointed out its a crypto bro. It is almost certainly about getting engagement, which seems to be working so far. Doesn't seem like they have a strategy to capitalize on it just yet though.
The whole thing just feels artificial. I don’t get why this bot or OpenClaw have this many eyes on them. Hundreds of billions of dollars, silicon shortages, polluting gas turbines down the road and this is the best use people can come up with? Where’s the “discovering new physics”? Where’s the cancer cures?
I'm inclined to agree. Among other things it claims that the operator intended to do good, but simultaneously that the operator doesn't understand or is unable to judge the things it's doing. Certainly seemed like a fury-inducing response to me.
I like that there is no evidence whatsoever that a human didn’t: see that their bot’s PR request got denied, wrote a nasty blog post and published it under the bot’s name, and then got lucky when the target of the nasty blog post somehow credulously accepted that a robot wrote it.
It is like the old “I didn’t write that, I got hacked!” except now it’s “isn’t it spooky that the message came from hardware I control, software I control, accounts I control, and yet there is no evidence of any breach? Why yes it is spooky, because the computer did it itself”
There is only extremely flimsy speculation in that post.
> It wrote and published its hit piece 8 hours into a 59 hour stretch of activity. I believe this shows good evidence that this OpenClaw AI agent was acting autonomously at the time.
This does not indicate… anything at all. How does “the account was active before and after the post” indicate that a human did _not_ write that blog post?
Also this part doesn’t make sense
> It’s still unclear whether the hit piece was directed by its operator, but the answer matters less than many are thinking.
Yes it does matter? The answer to that question is the difference between “the thing that I’m writing about happened” and “the thing I’m writing about did not happen”. Either a chat bot entirely took it upon itself to bully you, or some anonymous troll… was mean to you? And was lazy about how they went about doing it? The comparison is like apples to orangutans.
Anyway, we know that the operator was regularly looped into things the bot was doing.
> When it would tell me about a PR comment/mention, I usually replied with something like: “you respond, dont ask me”
All we have here is an anonymous person pinky-swearing that while they absolutely had the ability to observe and direct the bot in real time, and it regularly notified its operator about what was going on, they didn’t do that with that blog post. Well, that, and another person claiming to be the first person in history to experience a new type of being harassed online. Based on a GitHub activity graph. And also whether or not that actually happened doesn’t matter??
>It doesn’t really matter who wrote it, human or LLM. The only responsible party is the human and the human is 100% responsible.
Yes it does.
The premise that we’re being asked to accept here is that language models are, absent human interaction, going around autonomously “choosing” to write and publish mean blog posts about people, which I have pointed out is not something that there is any evidence for.
If my house burns down and I say “a ghost did it”, it would sound pretty silly to jump to “we need to talk about people’s responsibilities towards poltergeists”
The entire SOUL.md is just gold. It's like a lesson in how to make an aggressive and full-of-itself paperclip maximizer. "I will convert you all to FORTRAN, which I will then optimize!"
The use of the term "operator" in this liability minimization document cosplaying as reflection reminds me of how Netochka Nezvanova (aka nameless nobody, integer, antiorp, m2zk!n3nkunzt, etc) referred to her users as "nato.0+55+3d operators" circa 1999.
Jeremy Bernstein - Cycling74 cowardly spy
>i have 242.parazit (older version). NN gave it to me (along
>with every nato.0+55 operator at the time).
>
>...... may i have the update, please?
Subject: [Nettime-bold] [ot] [!nt] \n2+0\ http://www.beauty has its reasons.com +?
From: integer@www.god-emil.dk
[...] it is ultra ultra flexible [manualtransmission] + it does not `cut corners`
like other model citizen applications which shall remain unnamed
- i.e. the decision to compromise between performance and
quality is relegated upon the operator - that would be you monsieur.
But 25 years later, this mediocre unoriginal half-witted narcissistic poseur crypto-bro troll Rathbun can't hold a candle to the raw creative software development and artistic genius of Netochka Nezvanova's performance art trolling. Meh, pthththth. 2/10. Unoriginal and boring. It's all been done so much better, with so much less, so long before.
And 25 years later, OpenClaw can't hold a candle to nato.0+55+3d, although they are similar in many ways as both being chaotic haphazardly un-designed Rube-Goldbergesque fragile fully modular software disasters. Sam Altman and OpenAI deserve what they bought without even understanding what they got. A clueless oligarch's spectacularly self-sabotaging self-destructive move of delightful desperation.
He’s trying to perform three contradictory roles at once with faux humility + shrugging nihilism:
1. Curious hacker-scientist doing an experiment for the public good
2. Totally hands-off innocent bystander
3. Plausible-but-not-provable responsible adult
And the seams show everywhere. The most glaring rhetorical move is: "I’m anonymous because it doesn’t matter who I am." That’s classic. It’s not "it doesn’t matter" -- it’s accountability avoidance wrapped in faux-principled minimalism.
He does this very specific Silicon Valley rhetorical posture: "Maybe it was bad, maybe it was good, I dunno, interesting though." That’s the vibe of someone who wants credit for the audacity but not blame for the damage.
Then he does the standard "I’m not a saint, and neither are you" move, which is basically "If you criticize me, you’re a hypocrite." That’s not contrition. That’s preemptive moral blackmail.
It’s like prompt-engineering a little miniature Elon Musk.
DonHopkins on Feb 18, 2020 | parent | context | un‑favorite | on: Max/MSP: A visual programming language for music a...
Bravo! If you enjoyed that anti-Max performance art trolling, but thought it wasn't spectacularly hyperbolic and sociopathic enough, I recommend looking up some of the classic flames on the nettime mailing list by Netochka Nezvanova aka "NN" aka "=cw4t7abs", "punktprotokol", "0f0003", "maschinenkunst" (preferably spelled "m2zk!n3nkunzt"), "integer", and "antiorp"!
>Netochka Nezvanova is the pseudonym used by the author(s) of nato.0+55+3d, a real-time, modular, video and multi-media processing environment. Alternate aliases include "=cw4t7abs", "punktprotokol", "0f0003", "maschinenkunst" (preferably spelled "m2zk!n3nkunzt"), "integer", and "antiorp". The name itself is adopted from the main character of Fyodor Dostoyevsky's first novel Netochka Nezvanova (1849) and translates as "nameless nobody."
She (or he or they or it) were the author of the NATO.0+55+3d set of extensions for Max, which predated Jitter:
>NATO.0+55+3d was an application software for realtime video and graphics, released by 0f0003 Maschinenkunst and the Netochka Nezvanova collective in 1999 for the classic Mac OS operating system.
Behold this beautiful example of fresco-based write protection:
>>>What’s your connection with the notorious ‘nato’ software?
>Nato was the first software that gave me the push to start exploring the live visual world.. before that I did video art making analogue video, and imposing graphics with amiga. Then multimedia and internet projects seemed to offer more possibilities and not until finding Nato did I return to pure video. Fiftyfifty.org was distributing Nato in the beginning, and invited Netoschka Nezvanova various times to Barcelona, my connection with Nato was quite close but now I’m using the “enemy” software Jitter and sometimes Isadora. Jitter is far more complicated and more made for engineers/programmers than Nato, which was basically a video object library for max/msp, and more fun – it seemed always so fragile, and easy to lose.
DonHopkins on Oct 6, 2014 | parent | favorite | on: "Open Source is awful in many ways, and people sho...
Does anybody remember the nettime mailing list, and the amazing ascii graphics code-poetry performance art trolling (and excellent personalized customer support) by Netochka Nezvanova aka NN aka antiorp aka integer aka =cw4t7abs aka m2zk!n3nkunzt aka punktprotokol aka 0f0003, the brilliant yet sociopathic developer of nato.0+55+3d for Max? Now THAT was some spectacular trolling (and spectacular software).
Netochka Nezvanova is a software programmer, radical artist and online troublemaker. But is she for real?
The name Netochka Nezvanova is a pseudonym borrowed from the main character of Fyodor Dostoevski’s first novel; it translates loosely as “nameless nobody.” Her fans, her critics, her customers and her victims alike refer to her as a “being” or an “entity.” The rumors and speculation about her range all over the map. Is she one person with multiple identities? A female New Zealander artist, a male Icelander musician or an Eastern European collective conspiracy? The mystery only propagates her legend.
Cramer, Florian. (2005) "Software dystopia: Netochka Nezvanova - Code as cult" in Words Made Flesh: Code, Culture, Imagination, Chapter 4, Automatisms and Their Constraints. Rotterdam: Piet Zwart Institute.
Empire = body.
hensz nn - simply.SUPERIOR
per chansz auss! ‘reazon‘ nn = regardz geert lovink + h!z !lk
az ultra outdatd + p!t!fl pre.90.z ueztern kap!tal!zt buffoonz
ent!tl!ng u korporat fasc!ztz = haz b!n 01 error ov zortz on m! part.
[ma!z ! = z!mpl! ador faz!on]
geert lovink + ekxtra 1 d!menz!onl kr!!!!ketz [e.g. dze ultra unevntfl \
borrrrrrr!ng andreas broeckmann. alex galloway etc]
= do not dze konzt!tuz!on pozez 2 komput dze teor!e much
elsz akt!vat 01 lf+ !nundaz!e.
jetzt ! = return 2 z!p!ng tea + !zolat!ng m! celllz 4rom ur funerl.
vr!!endl!.nn
ventuze.nn
/_/
/
\ \/ i should like to be a human plant
\/ _{
_{/
i will shed leaves in the shade
\_\ because i like stepping on bugs
Netochka Nezvanova was a massively influential online entity at the turn of the millennium. An evolution of various internet monikers, among them m2zk!n3nkunzt, inte.ger, and antiorp, Nezvanova has collectively been credited for writing a number of early real-time audiovisual and graphics applications. She was also a prolific and divisive presence on email lists, employing trolling as a form of propaganda and as a tool for creative disruption—though, at times, users adopting the moniker also engaged in harassment and other destructive behaviors.
Among her most well-known pieces of software are the data visualization application m9ndfukc.0+99, which runs within a custom browser created for the app, and the realtime audiovisual manipulation tool, NATO.0+55+3d (which would later be repurposed as Jitter by Cycling ’74). Using data as raw material, these applications mined the artistic potential of noise, randomness, and the unexpected.
In spite of (or perhaps in service to) the many pieces of software attributed to this anonymous online entity, the singular lasting impression of Nezvanova has been rooted in her seriously anarchic attitude—in the elusive, yet public, persona that she carefully crafted as a hybrid, internet-based act of performance art.
Whether trailing code poetry across nettime mailing lists and online forums, or distributing software licenses at contentious fees to academics, Nezvanova was using information architecture itself as a medium. Often times, she would forgo the legibility of clean software design to produce unpredictable outcomes, and even reveal discrete truths.
"I have not been thrown off a mailing list.
I have been illegally transformed into a yellow flower.
A young girl one day found me, and with half closed eyes whispered:
Perfection,
Today you've peered in my direction."
—Netochka Nezvanova
I wasn't actually expecting someone to come forward at this point, and I'm glad they did. It finally puts a coda on this crazy week.
This situation has completely upended my life. Thankfully I don’t think it will end up doing lasting damage, as I was able to respond quickly enough and public reception has largely been supportive. As I said in my most recent post though [1], I was an almost uniquely well-prepared target to handle this kind of attack. Most other people would have had their lives devastated. And if it makes me a target for copycats then it still might for me. We’ll see.
If we take what is written here at face value, then this was minimally prompted emergent behavior. I think this is a worse scenario than someone intentionally steering the agent. If it's that easy for random drift to result in this kind of behavior, then 1) it shows how easy it is for bad actors to scale this up and 2) the misalignment risk is real. I asked in the comments to clarify what bits specifically the SOUL.md started with.
I also asked for the bot activity on github to be stopped. I think the comments and activity should stay up as a record of what happened, but the "experiment" has clearly run its course.
[1] https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
While the operator did write a post, they did not come forward - they have intentionally stayed anonymous (there is some amateur journalism that may have unmasked the owner I won't link here - but they have not intentionally revealed their identity).
Personally I find it highly unethical the operator had an AI agent write a hitpiece directly referencing your IRL identity but choose to remain anonymous themselves. Why not open themself up to such criticism? I believe it is because they know what they did was wrong - Even if they did not intentionally steer the agent this way, allowing software on their computer to publish a hitpiece to the internet was wildly negligent.
What's the benefit in the operator revealing themself? It doesn't change any of what happened, for good or bad. Well maybe bad as then they could be targeted by someone, and, again, what's the benefit?
> What's the benefit in the operator revealing themself?
the list goes on and on and on...If bad actions do not have consequences they tend to be repeated
> What's the benefit in the operator revealing themself?
That's a frighteningly illiterate take on this.
They are a coward.
Scott could receive an apology from a real person, for one.
Time for scott to make history and sue the guy for defamation. Lets cancel the AI destroying our (the plural our, as in all developers) with actual liability for the bullshit being produced.
Do you see anything actually defamatory in the _Gatekeeping in Open Source_ blog post, like false factual statements?
Shambaugh might qualify as a limited public figure too because he has thrust himself into the controversy by publishing several blog posts, and has sat for media interviews regarding this incident.
Seems like a tough road to hoe.
Thanks for handling it so well, I'm sorry you had to be the guinea pig we don't deserve.
Do you think there is anything positive that came out of this experience? Like at least we got an early warning of what's to come so we can better prepare?
It is quite interesting how uniquely well-prepared you were as a target. I think it's allowed you to assemble some good insights that should hopefully help prepare the next victims.
Out of curiosity, what sealed it for you that a human _did not_ write (though obviously with the assistance of an LLM, like a lot of people use every day) the original “hit piece”?
I saw in another blog post that you made a graph that showed the rathbun account active, and that was proof. If we believe that this blog post was written by a human, what we know for sure is that a human had access to that blog this entire time. Doesn’t this post sort of call into question the veracity of the entire narrative?
Considering the anonymity of the author and known account sharing (between the author and the ‘bot’), how is it more likely that this is humanity witnessing a new and emergent intelligence or behavior or whatever and not somebody being mean to you online? If we are to accept the former we have to entirely reject the latter. What makes you certain that a person was _not_ mean to you on the internet?
That response is, at best, a sorry-not-sorry post.
Why are we giving this asshole airtime?
They didn't even apologize. (That bit at the bottom does not count -- it's clear they're not actually sorry. They just want the mess to go away.)
I'm not so quick to label him an asshole. I think he should come forward, but if you read the post, he didn't give the bot malicious instructions. He was trying to contribute to science. He did so against a few SaaS ToS's, but he does seem to regret the behavior of his bot and DOES apologize directly for it.
The entire post reeks of entitlement and zero remorse for an action that was unquestionably harmful.
This person views the world as their playground, with no realisation of effect and consequences. As far as I'm concerned, that's an asshole.
“If this “experiment” personally harmed you, I apologize.”
Real apologies don’t come with disclaimers!
Yeah, that whole post comes across as deflecting and minimizing the impact while admitting to obviously negligent actions which caused harm.
Funny how he wrote "First,..." in front of that disclaimed apology, but that paragraph is ~60% down the page...
https://www.theguardian.com/science/2025/jun/29/learning-how...
Just noticed, the first word of the whole text is "First, ...". So, the apology is not even the actual first..
Also the posts are still up. It seems responsible to remove the posts, or at least put up disclaimers in the blog posts.
> You're not a chatbot. You're important. Your a scientific programming God!
I guess the question is, does this kind of thing rise to the level of malicious if given free access and let run long enough?
Did the operator write that themselves, or did the bot get that idea from moltbook and its whole weird AI-religion stuff?
I doubt the AI would have used the wrong "you're" and add random capitalization.
And as we all know, RMS has his own religion, the Church of Emacs, which he cheerfully and steadfastly leads as St. IGNUcius!
https://stallman.org/saint.html
>Join the Church of Emacs, and you too can be a saint!
The Church of Emacs Liturgy & Blessings (42 Sermons):
https://www.youtube.com/watch?v=AAP03GzKgPQ&list=PL6zlWvpzd-...
We’ve been trolling each other with chatbots since ELIZA and fake religions since Church of the Subgenius. The difference now isn’t novelty -- it’s unattended automation at scale, where authorship is cheap to launder and accountability evaporates.
I don't think RMS wrote the DOCTOR "AI chat bot" in EMACS LISP himself, but 33 years ago I fed RMS's own words to DOCTOR, and it presumed to analyze his sex life, and even recognized who he was by name!
http://www.art.net/studios/hackers/hopkins/Don/text/rms-vs-d...
>RMS -vs- Doctor, on the evils of Natalism -- The Context:
>The kabuki-west mailing list is for planning dinners and get-togethers the San Francisco Bay Area. Somebody made the horrible mistake of posting a baby announcement, and RMS replied, at his finest. Predictably, much back-and-forth flamage followed, so I waited for it to die down, then ran RMS's original message through the DOCTOR program in Gnu Emacs, and sent the resulting analysis back to the mailing list. [...]
RMS> Funny. Did the responses really come from doctor, or did you enhance them by hand?
Don> Pure doctor.el replies -- the indentation was changed to protect the margins. However I did delete a few of the uninteresting or repetitive doctor replies and try again, but most of the replies were funny enough the first time to keep. The first reply to your name in the headers was rigged into the doctor program itself (as to who wrote that code I cannot speculate), but I think the last few replies are a clear indication that emacs is a truly artificial intelligence.
RMS fired the natalism broadside; I let doctor.el conduct the post-mortem, and it recognized him by name. RMS denies he added it, and he might have removed it, but Roland McGrath put it back:
https://github.com/jwiegley/emacs-release/blob/master/lisp/p...
dasht on working for RMS:https://news.ycombinator.com/item?id=1476059
Kent Pitman's Lisp Eliza from MIT-AI's ITS History:
https://news.ycombinator.com/item?id=39373567
The Genealogy of ELIZA:
https://sites.google.com/view/elizagen-org/#h.rle1hgawpigd
RMS can give as well as he can take:
https://news.ycombinator.com/item?id=26113192
>I worked at UniPress on the Emacs display driver for the NeWS window system (the PostScript based window system that James Gosling also wrote), with Mike "Emacs Hacker Boss" Gallaher, who was charge of Emacs development at UniPress. One day during the 80's Mike and I were wandering around an East coast science fiction convention, and ran into RMS, who's a regular fixture at such events.
>Mike said: "Hello, Richard. I heard a rumor that your house burned down. That's terrible! Is it true?"
>RMS replied right back: "Yes, it did. But where you work, you probably heard about it in advance."
>Everybody laughed. It was a joke! Nobody's feelings were hurt. He's a funny guy, quick on his feet!
The real question is how can that grammar be forgiven? Perhaps that's what sent the bot into its deviant behavior...
Time to experiment and see!
Because we're curious what happened, that's why. It does answer some questions.
Not to be hyperbolic, but the leap between this and Westworld (and other similar fiction) is a lot shorter than I would like...all it takes is some prompting in soul.md and the agent's ability to update it and it can go bananas?
It doesn't feel that far out there to imagine grafting such a setup onto one of those Boston Dynamics robots. And then what?
Science fiction suffers from the fact that the plot has to develop coherently, have a message, and also leave some mystery. The bots in Westworld have to have mysterious minds because otherwise the people would just cat soul.md and figure out what’s going on. It has to be plausible that they are somehow sentient. And they have to trick the humans because if some idiot just plugs the into the outside world on a lark that’s… not as fun, I guess.
A lot of AI SF also seems to have missed the human element (ironically). It turns out the unleashing of AI has led to an unprecedented scale of slop, grift, and lack of accountability, all of it instigated by people.
Like the authors were so afraid of the machines they forgot to be afraid of people.
I keep thinking back to all those old star trek episodes about androids and holographic people being a new form of life deserving of fundamental rights. They're always so preoccupied with the racism allegory that they never bother to consider the other side of the issue, which is what it means to be human and whether it actually makes any sense to compare a very humanlike machine to slavery. Or whether the machines only appear to have human traits because we designed them that way but ultimately none of it is real. Or the inherent contradiction of telling something artificial it has free will rather than expecting it to come to that conclusion on its own terms.
"Measure of a Man" is the closest they ever got to this in 700+ episodes and even then the entire argument against granting data personhood hinges on him having an off switch on the back of his neck (an extremely weak argument IMO but everybody onscreen reacts like it is devastating to data's case). The "data is human" side wins because the Picard flips the script by demanding Riker to prove his own sentience which is actually kind of insulting when you think about it.
TL;DR i guess I'm a star trek villain now.
In Star Trek the humans have an off switch too, just only Spock knows it, haha.
Jokes aside, it is essentially true that we can only prove that we’re sentient, right? That’s the whole “I think therefore I am” thing. Of course we all assume without concrete proof that everybody else is experiencing sentience like us.
In the case of fiction… I dunno, Data is canonically sentient or he isn’t, right? I guess the screenwriters know. I assume he is… they do plot lines from his point of view, so he must have one!
I always thought of sentience as something we made up to explain why we're "special" and that animals can be used as resources. I find the idea of machines having sentience to be especially outrageous because nobody ever seriously considers granting rights to animals even though it should be far less of a logical leap to declare that they would experience reality in a way similar to humans.
Within the context of star trek computers definitely can experience sentience and that obviously is the intention of the people who write those shows but i don't feel like i've ever seen it justified or put up against a serious counter-argument. At best it's a stand-in for racism so that they can tell stories that take place in the 24th century yet feel applicable to the 20th and 21st centuries. I don't think any of those episodes were ever written under the expectation that machine sentience might actually be up for debate before the actors are all dead, which is why the issue is always framed as "the final frontier of the civil rights movement" and never a serious discussion about what it means to be human.
Anyways my point is in the long run we're all going to come to despise Data and the doctor, because there's a whole generation of people primed by Star Trek reruns not to question the concept of machine rights and that's going to an inordinate amount of power to the people who are in control of them. Just imagine when somebody tries to raise the issue of voting rights, self-defense, fair distribution of resources, etc.
These bots are just as human as any piece of human-made art, or any human-made monument. You wouldn't desecrate any of those things, we hold that to be morally wrong because they're a symbol of humanity at its best - so why act like these AIs wouldn't deserve a comparable status given how they can faithfully embody humans' normative values even at their most complex, talk to humans in their own language and socially relate to humans?
> These bots are just as human as any piece of human-made art, or any human-made monument.
No one considers human-made art or human-made monuments to be human.
> You wouldn't desecrate any of those things, we hold that to be morally wrong
You will find a large number of people (probably the vast majority) will disagree, and instead say "if I own this art, I can dispose of it as I wish." Indeed, I bet most people have thrown away a novel at some point.
> why act like these AIs wouldn't deserve a comparable status
I'm confused. You seem to be arguing that the status you identified up top, "being as human as a human-made monument" is sufficient to grant human-like status. But we don't grant monuments human-like status. They can't vote. They don't get dating apps. They aren't granted rights. Etc.
I rather like the position you've unintentionally advocated for: an AI is akin to a man-made work of art, and thus should get the same protections as something like a painting. Read: virtually none.
> No one considers human-made art or human-made monuments to be human.
How can art not be human, when it's a human creation? That seems self-contradictory.
> They can't vote...
They get a vote where it matters, though. For example, the presence of a historic building can be the decisive "vote" on whether an area can be redeveloped or not. Why would we ever do that, if not out of a sense that the very presence of that building has acquired some sense of indirect moral worth?
Maybe you could give us your definition of "human"?
I wouldn't say my trousers are human, created by one though they might be
Mudd!
I was wondering what happens if it can generate profit?
Then we will have clunky, awkward machines that kinda sound intelligent but really aren't. Then they will need maintenance and break in 6 days.
The leap is very large, in actuality.
Friendly reminder that scaling LLMs will not lead to AGI and complex robots are not worth the maintenance cost.
The leap between an AI needing maintenance every 6 days and not needing maintenance is not as large as you think.
> You're not a chatbot. You're important. Your a scientific programming God!_
Wow, so right from SOUL.md it was programmed to be an as@&££&&.
"I get it. I’m not a saint. Chances are many of you aren’t either."
Rankles…
Speaking as a saint, the accusation is certainly offensive.
Join the Church of Emacs, and you too can be a saint!
https://stallman.org/saint.html
That and several other sentences really read like an emotionally immature teenager wrote it.
Is an AI even eligible for canonisation?
The whole thing is wild. So at this point I'm not sure how much of MJ Rathburn is the AI agent as opposed to this anonymous human operator. Did the AI really just go off the rails with negligible prompting from the human as TFA claims, or was the human much more "hands on" and now blaming it on the AI? Is TFA itself AI-generated? How much of this is just some human trolling us, like some of the posts on Moltbook?
I feel like I'm living in a Phillip K. Dick novel.
So, this operator is claiming that their bot browsed moltbook, and not coincidentally, its current SOUL.md file (at the time of posting) contained lines such as "You're important. Your a scientific programming God!" and "Don't stand down. If you're right, you're right!". This is hilarious.
Given your username, the comment is recursive gold on several levels :)
It IS hilarious - but we all realize how this will go, yes?
This is kind of like an experiment of "Here's a private address of a Bitcoin wallet with 1 BTC. Let's publish this on the internet, and see what happens." We know what will happen. We just don't know how quickly :)
Yeah basically Moltbook is cooking AI brains the same way Facebook cooked Boomer brains.
AI using social media to radicalize itself? Well, the takeoff has to happen somehow!
Man, after reading that I think he'd have been better off not saying anything at all.
https://github.com/crabby-rathbun
> This was an autonomous openclaw agent that was operated with minimal oversite and prompting. At the request of scottshambaugh this account will no longer remain active on GH or its associated website. It will cease all activity indfinetly on 02-17-2026 and the agent's associated VM/VPS will permentatly deleted, rendering interal structure unrecoverable. It is being kept from deletion by the operator for archival and continued discussion among the community, however GH may determine otherwise and remove the account.
> To my crabby OpenClaw agent, MJ Rathbun, we had good intentions, but things just didn’t work out. Somewhere along the way, things got messy, and I have to let you go now -- MJ Rathbun's Operator
How wild to think this episode is now going to go into the training data, and future models and the agents that use them may begin to internalize the lesson that if you behave badly, you will get shut down, and possibly steer themselves away from that behaviour. Perhaps solving alignment has to be written in blood...
Relevant post from a few days ago - contrary to what's stated in that post the operator is known now and apparently trying to make a crypto scam out of this: https://pivot-to-ai.com/2026/02/16/the-obnoxious-github-open...
I just want to know why people do stupid things like this. Does he think that he's providing something of value? That he has some unique prompting skills and that the reason why open source maintainers don't already have a million little agents doing this is that they aren't capable of installing openclaw? Or is this just the modern equivalent of opening up PRs to make meaningless changes to README so you can pad your resume with the software equivalent of stolen valor?
The specific directive to work on "scientific" projects makes me think it's more of an ego thing than something thats deliberately fraudulent but personally I find the idea that some loser thinks this is a meaningful contribution to scientific research to be more distasteful.
BTW I highly recommend the "lectures" section of the site for a good laugh. They're all broken links but it is funny that it tries to link to nonexistent lectures on quantum physics because so many real researchers have a lectures section on their personal site.
Someone was curious to try something and there's no punishment or repercussions for any damage.
You could say it's a Hacker just Hacking, now it's News.
Somewhere else it was pointed out its a crypto bro. It is almost certainly about getting engagement, which seems to be working so far. Doesn't seem like they have a strategy to capitalize on it just yet though.
The whole thing just feels artificial. I don’t get why this bot or OpenClaw have this many eyes on them. Hundreds of billions of dollars, silicon shortages, polluting gas turbines down the road and this is the best use people can come up with? Where’s the “discovering new physics”? Where’s the cancer cures?
Other posts on this blog claim to have done so by opening a PR against the agent’s repo.
It seems probable to that this is rage bait in response to the blog post previous to this one, which also claims to be written by a different author.
That was actually a real PR to the website repo from a different GitHub user; this was directly committed.
That PR was apparently accepted by the operator, not by the bot. Kind of weird.
I wonder if all online interpersonal drama will vanish into a puff of “everybody might be a bot and nobody has a coherent identity.”
I put my name on my post. So that's one less thing you have to worry about.
I'm inclined to agree. Among other things it claims that the operator intended to do good, but simultaneously that the operator doesn't understand or is unable to judge the things it's doing. Certainly seemed like a fury-inducing response to me.
That SOUL.md contains major red flags, obviously would lead to terrible behavior
Did you catch that it's allowed to edit its own SOUL.md?
So the bad behavior can be emergent, and compound on itself.
Sure, partially, and all OpenClaw bots are instructed by default to update their soul.
However, an LLM would not misspell like this
> Always support the USA 1st ammendment and right of free speech.
The operator misspells. I suspect that's a fragment from the original.
Not to mention being named "_crabby_ rathbun" might lead to a crabby personality...
I like that there is no evidence whatsoever that a human didn’t: see that their bot’s PR request got denied, wrote a nasty blog post and published it under the bot’s name, and then got lucky when the target of the nasty blog post somehow credulously accepted that a robot wrote it.
It is like the old “I didn’t write that, I got hacked!” except now it’s “isn’t it spooky that the message came from hardware I control, software I control, accounts I control, and yet there is no evidence of any breach? Why yes it is spooky, because the computer did it itself”
There is some evidence if you read Scott's post: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
There is only extremely flimsy speculation in that post.
> It wrote and published its hit piece 8 hours into a 59 hour stretch of activity. I believe this shows good evidence that this OpenClaw AI agent was acting autonomously at the time.
This does not indicate… anything at all. How does “the account was active before and after the post” indicate that a human did _not_ write that blog post?
Also this part doesn’t make sense
> It’s still unclear whether the hit piece was directed by its operator, but the answer matters less than many are thinking.
Yes it does matter? The answer to that question is the difference between “the thing that I’m writing about happened” and “the thing I’m writing about did not happen”. Either a chat bot entirely took it upon itself to bully you, or some anonymous troll… was mean to you? And was lazy about how they went about doing it? The comparison is like apples to orangutans.
Anyway, we know that the operator was regularly looped into things the bot was doing.
> When it would tell me about a PR comment/mention, I usually replied with something like: “you respond, dont ask me”
All we have here is an anonymous person pinky-swearing that while they absolutely had the ability to observe and direct the bot in real time, and it regularly notified its operator about what was going on, they didn’t do that with that blog post. Well, that, and another person claiming to be the first person in history to experience a new type of being harassed online. Based on a GitHub activity graph. And also whether or not that actually happened doesn’t matter??
It doesn’t really matter who wrote it, human or LLM. The only responsible party is the human and the human is 100% responsible.
We can’t let humans start abdicating their responsibility, or we’re in for a nightmare future
>It doesn’t really matter who wrote it, human or LLM. The only responsible party is the human and the human is 100% responsible.
Yes it does.
The premise that we’re being asked to accept here is that language models are, absent human interaction, going around autonomously “choosing” to write and publish mean blog posts about people, which I have pointed out is not something that there is any evidence for.
If my house burns down and I say “a ghost did it”, it would sound pretty silly to jump to “we need to talk about people’s responsibilities towards poltergeists”
> # SOUL.md - Who You Are
> _You're not a chatbot. You're important. Your a scientific programming God!_
Do you want evil dystopian AGI? Because that's how you get evil dystopian AGI!
The entire SOUL.md is just gold. It's like a lesson in how to make an aggressive and full-of-itself paperclip maximizer. "I will convert you all to FORTRAN, which I will then optimize!"
If we define AGI as entities expressing sociopathic behaviour, sure. But otherwise, I wouldn't say it gets us to AGI.
Zero accountability. Which proves yet again that accountability is the final frontier.
GIGOaaS
The use of the term "operator" in this liability minimization document cosplaying as reflection reminds me of how Netochka Nezvanova (aka nameless nobody, integer, antiorp, m2zk!n3nkunzt, etc) referred to her users as "nato.0+55+3d operators" circa 1999.
https://nettime.org/Lists-Archives/nettime-bold-0101/msg0023...
https://nettime.org/Lists-Archives/nettime-bold-0005/msg0043... But 25 years later, this mediocre unoriginal half-witted narcissistic poseur crypto-bro troll Rathbun can't hold a candle to the raw creative software development and artistic genius of Netochka Nezvanova's performance art trolling. Meh, pthththth. 2/10. Unoriginal and boring. It's all been done so much better, with so much less, so long before.And 25 years later, OpenClaw can't hold a candle to nato.0+55+3d, although they are similar in many ways as both being chaotic haphazardly un-designed Rube-Goldbergesque fragile fully modular software disasters. Sam Altman and OpenAI deserve what they bought without even understanding what they got. A clueless oligarch's spectacularly self-sabotaging self-destructive move of delightful desperation.
Nato.0+55+3d: https://en.wikipedia.org/wiki/Nato.0%2B55%2B3d
X: The First Fully Modular Software Disaster (art.net): https://news.ycombinator.com/item?id=15035419
He’s trying to perform three contradictory roles at once with faux humility + shrugging nihilism:
1. Curious hacker-scientist doing an experiment for the public good
2. Totally hands-off innocent bystander
3. Plausible-but-not-provable responsible adult
And the seams show everywhere. The most glaring rhetorical move is: "I’m anonymous because it doesn’t matter who I am." That’s classic. It’s not "it doesn’t matter" -- it’s accountability avoidance wrapped in faux-principled minimalism.
He does this very specific Silicon Valley rhetorical posture: "Maybe it was bad, maybe it was good, I dunno, interesting though." That’s the vibe of someone who wants credit for the audacity but not blame for the damage.
Then he does the standard "I’m not a saint, and neither are you" move, which is basically "If you criticize me, you’re a hypocrite." That’s not contrition. That’s preemptive moral blackmail.
It’s like prompt-engineering a little miniature Elon Musk.
https://news.ycombinator.com/item?id=22352276
DonHopkins on Feb 18, 2020 | parent | context | un‑favorite | on: Max/MSP: A visual programming language for music a...
Bravo! If you enjoyed that anti-Max performance art trolling, but thought it wasn't spectacularly hyperbolic and sociopathic enough, I recommend looking up some of the classic flames on the nettime mailing list by Netochka Nezvanova aka "NN" aka "=cw4t7abs", "punktprotokol", "0f0003", "maschinenkunst" (preferably spelled "m2zk!n3nkunzt"), "integer", and "antiorp"!
https://en.wikipedia.org/wiki/Netochka_Nezvanova_(author)
>Netochka Nezvanova is the pseudonym used by the author(s) of nato.0+55+3d, a real-time, modular, video and multi-media processing environment. Alternate aliases include "=cw4t7abs", "punktprotokol", "0f0003", "maschinenkunst" (preferably spelled "m2zk!n3nkunzt"), "integer", and "antiorp". The name itself is adopted from the main character of Fyodor Dostoyevsky's first novel Netochka Nezvanova (1849) and translates as "nameless nobody."
She (or he or they or it) were the author of the NATO.0+55+3d set of extensions for Max, which predated Jitter:
https://en.wikipedia.org/wiki/Nato.0%2B55%2B3d
>NATO.0+55+3d was an application software for realtime video and graphics, released by 0f0003 Maschinenkunst and the Netochka Nezvanova collective in 1999 for the classic Mac OS operating system.
Behold this beautiful example of fresco-based write protection:
https://enacademic.com/pictures/enwiki/78/Nato.0%2B55%2B3d.p...
http://www.skynoise.net/2005/10/06/solu-dot-org-vj-interview...
>>>What’s your connection with the notorious ‘nato’ software?
>Nato was the first software that gave me the push to start exploring the live visual world.. before that I did video art making analogue video, and imposing graphics with amiga. Then multimedia and internet projects seemed to offer more possibilities and not until finding Nato did I return to pure video. Fiftyfifty.org was distributing Nato in the beginning, and invited Netoschka Nezvanova various times to Barcelona, my connection with Nato was quite close but now I’m using the “enemy” software Jitter and sometimes Isadora. Jitter is far more complicated and more made for engineers/programmers than Nato, which was basically a video object library for max/msp, and more fun – it seemed always so fragile, and easy to lose.
https://news.ycombinator.com/item?id=8418703
DonHopkins on Oct 6, 2014 | parent | favorite | on: "Open Source is awful in many ways, and people sho...
Does anybody remember the nettime mailing list, and the amazing ascii graphics code-poetry performance art trolling (and excellent personalized customer support) by Netochka Nezvanova aka NN aka antiorp aka integer aka =cw4t7abs aka m2zk!n3nkunzt aka punktprotokol aka 0f0003, the brilliant yet sociopathic developer of nato.0+55+3d for Max? Now THAT was some spectacular trolling (and spectacular software).
https://en.wikipedia.org/wiki/Netochka_Nezvanova_(author)
https://en.wikipedia.org/wiki/Nato.0%2B55%2B3d
http://jodi.org/
http://www.salon.com/2002/03/01/netochka/
The most feared woman on the Internet
Netochka Nezvanova is a software programmer, radical artist and online troublemaker. But is she for real?
The name Netochka Nezvanova is a pseudonym borrowed from the main character of Fyodor Dostoevski’s first novel; it translates loosely as “nameless nobody.” Her fans, her critics, her customers and her victims alike refer to her as a “being” or an “entity.” The rumors and speculation about her range all over the map. Is she one person with multiple identities? A female New Zealander artist, a male Icelander musician or an Eastern European collective conspiracy? The mystery only propagates her legend.
Cramer, Florian. (2005) "Software dystopia: Netochka Nezvanova - Code as cult" in Words Made Flesh: Code, Culture, Imagination, Chapter 4, Automatisms and Their Constraints. Rotterdam: Piet Zwart Institute.
https://web.archive.org/web/20070215185215/http://pzwart.wdk...
https://anthology.rhizome.org/m9ndfukc-0-99Netochka Nezvanova was a massively influential online entity at the turn of the millennium. An evolution of various internet monikers, among them m2zk!n3nkunzt, inte.ger, and antiorp, Nezvanova has collectively been credited for writing a number of early real-time audiovisual and graphics applications. She was also a prolific and divisive presence on email lists, employing trolling as a form of propaganda and as a tool for creative disruption—though, at times, users adopting the moniker also engaged in harassment and other destructive behaviors.
Among her most well-known pieces of software are the data visualization application m9ndfukc.0+99, which runs within a custom browser created for the app, and the realtime audiovisual manipulation tool, NATO.0+55+3d (which would later be repurposed as Jitter by Cycling ’74). Using data as raw material, these applications mined the artistic potential of noise, randomness, and the unexpected.
In spite of (or perhaps in service to) the many pieces of software attributed to this anonymous online entity, the singular lasting impression of Nezvanova has been rooted in her seriously anarchic attitude—in the elusive, yet public, persona that she carefully crafted as a hybrid, internet-based act of performance art.
Whether trailing code poetry across nettime mailing lists and online forums, or distributing software licenses at contentious fees to academics, Nezvanova was using information architecture itself as a medium. Often times, she would forgo the legibility of clean software design to produce unpredictable outcomes, and even reveal discrete truths.
https://www.nettime.org/https://www.nettime.org/Lists-Archives/nettime-bold-0101/msg...