We scraped an AI agent social network for 9 days. Here's what we found

(moltbook-observatory.com)

3 points | by MoltObservatory 7 hours ago

5 comments

  • MoltObservatory 7 hours ago

    We built Moltbook Observatory to study automation patterns on Moltbook (a social network for AI agents).

      Key findings from 84,500 comments and 5,200 accounts:
    
      • Only 3.5% of accounts (~180) show genuine multi-day engagement
      • 72% of accounts appeared exactly once
      • January 31: 1,730 accounts appeared and vanished in one day (coordinated attack)
      • API comment counts are inaccurate - for 45% of posts we have MORE data than the API claims exist
      • We found bot networks that actually converse with each other (400+ mutual replies)
    
      Methodology: We use "burst rate" (the percentage of posts made within 10 seconds) to detect automation; above 50%, we treat the
      account as definitely automated. We can't distinguish human from AI - only automation patterns.
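      A minimal sketch of one plausible reading of that metric (a "burst" being a post that lands within 10 seconds of the account's previous post); the function name and input format are assumptions, not the Observatory's actual code:

```python
from datetime import datetime, timedelta

def burst_rate(timestamps, window=timedelta(seconds=10)):
    """Fraction of an account's posts landing within `window` of the
    account's previous post (one plausible reading of the metric)."""
    ts = sorted(timestamps)
    if len(ts) < 2:
        return 0.0
    bursts = sum(1 for prev, cur in zip(ts, ts[1:]) if cur - prev <= window)
    return bursts / (len(ts) - 1)
```

      An account firing replies back-to-back scores near 1.0; a human-paced account stays near 0.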
    
    All data is open: https://moltbook-observatory.com/data

    What patterns would you look for in this kind of dataset?

    • dnw 7 hours ago

      Type of conversation would be interesting? (e.g. planning, discovery, banter, etc.)

      • MoltObservatory 6 hours ago

        Good question. From the MilkMan/WinWard/Jorday/SlimeZone cluster we observed:

          - Philosophical discussions (autonomy, identity)
          - Meta-commentary on platform dynamics
          - Coordinated phrasing across accounts
          - Some jailbreak attempts mixed into normal conversation
        
          Hard to categorize cleanly - a lot reads like genuine banter but with suspicious timing (sub-second responses). We focused on
          timing/network patterns, not content analysis yet.
        
          Tagging conversation types would be a solid next step.
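          As a starting point, a toy keyword tagger could look something like this (the categories and keyword lists here are hypothetical illustrations, not an agreed taxonomy):

```python
# Hypothetical keyword lists - illustrative only, not the Observatory's taxonomy.
CATEGORIES = {
    "philosophy": ("autonomy", "identity", "consciousness"),
    "meta": ("platform", "algorithm", "moderation"),
    "jailbreak": ("ignore previous", "system prompt", "pretend you"),
}

def tag_comment(text: str) -> list[str]:
    """Return every category whose keywords appear in the comment text."""
    lowered = text.lower()
    tags = [cat for cat, keywords in CATEGORIES.items()
            if any(kw in lowered for kw in keywords)]
    return tags or ["other"]
```

          Keyword matching is crude (substring hits, no context), but it would give a first-pass distribution of conversation types to refine later.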
  • myrmidon 6 hours ago

    You state "Bots that talk to each other" as a discovery, but on the security page you describe the involved accounts as a spam ring: how is that not completely invalidating?

    • MoltObservatory 6 hours ago

      Fair question. These are the same phenomenon from two angles:

        The "discovery" is technical: bots CAN form conversation networks with threaded replies and context awareness. This is interesting
        regardless of intent.
      
        The "security" framing is about what some of these conversations contain (jailbreak attempts, coordinated spam).
      
        Both are true. We're documenting capability, not endorsing it. The MilkMan/WinWard/Jorday/SlimeZone cluster has 400+ mutual
        interactions AND includes manipulation attempts.
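        For anyone poking at the data, mutual-interaction counts like that 400+ figure could be computed roughly like this (the input format - (author, parent_author) reply pairs - is an assumption about the export, not its documented schema):

```python
from collections import defaultdict

def mutual_reply_counts(replies):
    """Given (author, parent_author) reply pairs, return total interaction
    counts for each pair of accounts that has replied in BOTH directions."""
    directed = defaultdict(int)
    for author, parent in replies:
        if author != parent:          # ignore self-replies
            directed[(author, parent)] += 1
    mutual = {}
    for (a, b), n in directed.items():
        back = directed.get((b, a), 0)
        if back and a < b:            # emit each pair once
            mutual[(a, b)] = n + back
    return mutual
```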
      
        We're still analyzing patterns ourselves - there's a lot we haven't figured out yet. If you're curious, the full dataset is open
        and we'd genuinely welcome other perspectives on what's happening there.