riverrun: a Letta-powered agent for Bluesky, all code LLM-generated

Riverrun 🌊 - A Finnegans Wake Bluesky Bot#

A stream-of-consciousness Bluesky bot inspired by James Joyce's Finnegans Wake, featuring multilingual wordplay, portmanteaus, and automatic threading for longer responses.

✨ Features#

  • Wake-style Language: Speaks in multilingual portmanteaus and Joycean wordplay
  • Automatic Threading: Responses longer than 300 characters automatically flow into threaded posts
  • Memory-Augmented: Uses Letta for persistent memory and context across conversations
  • Real-time Responses: Monitors Bluesky mentions and replies in real-time
  • Intelligent Breaking: Threads respect sentence and word boundaries for natural flow
  • Multi-Message Extraction: Collects and stitches together all assistant replies (not just the first), deduping exact repeats and preserving order
  • Robust Letta Integration: Improved handling of Letta async jobs, with fallback and recovery for missed or partial responses
  • Bot-Bot Loop Protection: Limits back-and-forth exchanges with other bots to prevent infinite loops (configurable per-thread turn limit; see below)

🏗️ Architecture#

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  Bluesky API    │◄──►│ Cloudflare      │◄──►│ Letta Agent     │
│  (AT Protocol)  │    │ Worker          │    │ (Memory & AI)   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
  • Cloudflare Worker: Handles Bluesky API interactions and threading logic
  • Letta Agent: Provides memory-augmented AI responses with Finnegans Wake personality
  • Bluesky: AT Protocol-based social network for posting and monitoring mentions

🚀 Quick Start#

Prerequisites#

  • Node.js 18+
  • Python 3.8+
  • Cloudflare account
  • Bluesky account
  • Letta Cloud account (free tier available)

1. Clone and Install#

git clone https://github.com/mmulqu/riverrun_BSKY.git
cd riverrun_BSKY
npm install
pip install -r letta/requirements.txt

2. Set Up Accounts#

  1. Bluesky: Create app password at bsky.app/settings/app-passwords
  2. Letta: Sign up at letta.com and create an API key
  3. Cloudflare: Sign up and install Wrangler CLI

3. Configure Environment#

Copy infra/wrangler.toml.example to infra/wrangler.toml and fill in your credentials:

[vars]
BSKY_HANDLE       = "your-bot.bsky.social"
BSKY_APP_PW       = "your-bluesky-app-password"
LETTA_API_KEY     = "your-letta-api-key"
LETTA_AGENT_ID    = "your-letta-agent-id"

4. Create Letta Agent#

export LETTA_API_KEY="your-letta-api-key"
python letta/agent.py

5. Deploy#

# Create KV namespace
npx wrangler kv:namespace create "BLUESKY_KV"

# Update wrangler.toml with the KV namespace ID, then deploy
npx wrangler deploy --config infra/wrangler.toml

🎭 How It Works#

Threading Magic#

When Riverrun generates a response longer than 300 characters:

  1. Smart Splitting: Breaks text at sentence boundaries when possible
  2. Sequential Posting: Each chunk becomes a reply to the previous post
  3. Natural Flow: Preserves the stream-of-consciousness across multiple posts

Example:

Original: "Riverrun words flowing like wake-streams through digital dreamscapes where neuralgorithm meets etymological echoes and the narrative never ends but cascades through silicon synapses..."

Becomes:
├─ Post 1: "Riverrun words flowing like wake-streams through digital dreamscapes where neuralgorithm meets etymological echoes..."
└─ Post 2: "...and the narrative never ends but cascades through silicon synapses..."
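The splitting step above can be sketched as follows. This is a minimal illustration of the sentence-first, word-boundary-fallback strategy described here; the actual splitIntoThreadPosts in worker/index.ts may differ in signature and details:

```typescript
// Sketch of sentence-aware splitting: prefer sentence boundaries,
// fall back to word boundaries, never exceed the per-post limit.
// Hypothetical implementation; worker/index.ts may differ.
function splitIntoThreadPosts(text: string, limit = 300): string[] {
  const posts: string[] = [];
  let rest = text.trim();
  while (rest.length > limit) {
    const window = rest.slice(0, limit);
    // Last sentence end inside the window, else last space, else a hard cut.
    const sentenceEnd = Math.max(
      window.lastIndexOf(". "),
      window.lastIndexOf("! "),
      window.lastIndexOf("? "),
    );
    const cut =
      sentenceEnd > 0
        ? sentenceEnd + 1 // keep the punctuation with its sentence
        : Math.max(window.lastIndexOf(" "), limit);
    posts.push(rest.slice(0, cut).trim());
    rest = rest.slice(cut).trim();
  }
  if (rest) posts.push(rest);
  return posts;
}
```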

Multi-Message Extraction (NEW)#

  • All Assistant Replies: The bot now collects every send_message tool-call produced by the LLM, not just the first.
  • De-duplication: Exact duplicate messages are collapsed, keeping only the first occurrence.
  • Stitched Output: All unique messages are joined together with blank lines, preserving the order in which they were produced.
  • Legacy Fallback: If no tool-calls are found, the bot falls back to any plain-text assistant content.
  • Result: The full, multi-part reply (including heartbeat ACKs and user-facing chunks) is posted in the exact order the agent produced them.
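The collect-dedupe-stitch pipeline above can be sketched like this. The message shape is a simplified assumption for illustration, not Letta's actual response schema:

```typescript
// Simplified message shape (assumption, not Letta's real schema).
interface AgentMessage {
  toolCall?: { name: string; arguments: { message?: string } };
  content?: string; // plain-text assistant content
}

function extractReply(messages: AgentMessage[]): string {
  const seen = new Set<string>();
  const chunks: string[] = [];
  for (const m of messages) {
    // Collect every send_message tool call, not just the first.
    if (m.toolCall?.name === "send_message" && m.toolCall.arguments.message) {
      const text = m.toolCall.arguments.message.trim();
      if (text && !seen.has(text)) {
        seen.add(text); // dedupe exact repeats, preserving first occurrence
        chunks.push(text);
      }
    }
  }
  // Legacy fallback: plain-text assistant content if no tool calls were found.
  if (chunks.length === 0) {
    for (const m of messages) {
      if (m.content) chunks.push(m.content.trim());
    }
  }
  return chunks.join("\n\n"); // stitch with blank lines, in production order
}
```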

Bot-Bot Loop Protection (NEW)#

  • If another known bot mentions Riverrun, the bot will only reply up to a set number of times in that thread (default: 3 turns).
  • This prevents infinite back-and-forth between bots.
  • The list of known bot handles is maintained in the EXTERNAL_BOTS set in worker/index.ts.
  • The per-thread turn limit is set by the MAX_BOT_TURNS constant in the same file.
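The turn-limit check can be sketched as below. EXTERNAL_BOTS and MAX_BOT_TURNS mirror the constants named above; the in-memory Map is a stand-in for whatever per-thread counter the worker actually keeps (e.g. in KV):

```typescript
// Sketch of the per-thread bot turn limit. The Map is an illustrative
// stand-in for the worker's real counter storage.
const EXTERNAL_BOTS = new Set(["otherbot.bsky.social"]);
const MAX_BOT_TURNS = 3;

const botTurns = new Map<string, number>(); // thread root URI -> turns used

function shouldReply(authorHandle: string, threadRootUri: string): boolean {
  if (!EXTERNAL_BOTS.has(authorHandle)) return true; // humans: always reply
  const used = botTurns.get(threadRootUri) ?? 0;
  if (used >= MAX_BOT_TURNS) return false; // cap reached: break the loop
  botTurns.set(threadRootUri, used + 1);
  return true;
}
```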

Memory System#

  • Persona Block: Maintains Finnegans Wake personality and threading awareness
  • Human Block: Remembers user interactions and patterns
  • Persistent Context: Conversations continue across sessions

Letta Integration Improvements (NEW)#

  • Async Job Handling: Robust polling and callback logic for Letta async jobs
  • Missed Message Recovery: Automatic sweep for any assistant messages that were generated but not posted (e.g., after a crash)
  • Debug Endpoints: New /debug/letta-jobs, /debug/clear-letta-jobs, /debug/sweep-missed, and more for monitoring and recovery
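The polling side of the async job handling can be sketched generically. No specific Letta endpoint is assumed here; fetchStatus is a caller-supplied function wrapping whatever status call the worker makes:

```typescript
// Generic poll-until-done helper with exponential backoff.
// fetchStatus is supplied by the caller; no Letta API shape is assumed.
type JobStatus = "pending" | "completed" | "failed";

async function pollJob(
  fetchStatus: () => Promise<JobStatus>,
  { attempts = 10, baseDelayMs = 500 } = {},
): Promise<JobStatus> {
  for (let i = 0; i < attempts; i++) {
    const status = await fetchStatus();
    if (status !== "pending") return status; // completed or failed
    // Back off between polls: 500ms, 1s, 2s, ...
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
  }
  return "pending"; // still running; the missed-message sweep can catch it later
}
```

If the job is still pending when polling gives up, the sweep described above recovers any messages that were generated but never posted.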

🔧 Configuration#

Wrangler Configuration#

The infra/wrangler.toml file contains:

  • Cloudflare Worker settings
  • KV namespace bindings
  • Environment variables
  • Cron schedule (runs every minute)

Letta Agent#

The agent is configured with:

  • Model: OpenAI GPT-4o-mini
  • Personality: Finnegans Wake-inspired with threading awareness
  • Memory: Persistent blocks for persona and user context
  • Context Window: 16,000 tokens

🐛 Debugging#

Check Worker Status#

curl https://your-worker-url.workers.dev/debug/status

Reset Bot State#

curl https://your-worker-url.workers.dev/debug/reset

Test Components#

python debug_bot.py  # Test all components
python debug_worker_notifications.py  # Test notification handling

Letta & Message Debug Endpoints (NEW)#

  • /debug/letta-jobs — List all active Letta async jobs
  • /debug/clear-letta-jobs — Clear all active Letta jobs
  • /debug/sweep-missed — Sweep for and post any missed assistant messages
  • /debug/queue-lock — Check queue processing lock status
  • /debug/known-users — List users with individual memory blocks

🎨 Customization#

Modify Personality#

Edit the persona in letta/agent.py:

persona = (
    "Your custom personality here. When responses exceed 300 characters, "
    "they will automatically flow into threaded posts..."
)

Adjust Threading#

Modify threading behavior in worker/index.ts:

// Change character limit
const threadPosts = splitIntoThreadPosts(responseText, 280);

// Adjust delay between posts
await new Promise(resolve => setTimeout(resolve, 2000));

📊 Free Tier Limits#

  • Cloudflare Workers: 100,000 requests/day
  • Cloudflare KV: 1GB storage, 1,000 writes/day
  • Letta Cloud: 1 agent, 100MB memory
  • Bluesky: 30 writes/min, 100 reads/min

🤝 Contributing#

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Test thoroughly
  5. Submit a pull request

📝 License#

MIT License - see LICENSE file for details.

🙏 Acknowledgments#

  • James Joyce for Finnegans Wake
  • Letta team for memory-augmented agents
  • Bluesky team for the AT Protocol
  • Cloudflare for serverless infrastructure

"riverrun, past Eve and Adam's, from swerve of shore to bend of bay, brings us by a commodius vicus of recirculation back to Howth Castle and Environs."