+4  .env.example
+159  CONFIG.md
···
# Configuration Guide

## Quick Start

### Option 1: Migrate from an existing `.env` file (if you have one)

```bash
python migrate_config.py
```

### Option 2: Start fresh with the example

1. **Copy the example configuration:**

   ```bash
   cp config.yaml.example config.yaml
   ```

2. **Edit `config.yaml` with your credentials:**

   ```yaml
   # Required: Letta API configuration
   letta:
     api_key: "your-letta-api-key-here"
     project_id: "project-id-here"

   # Required: Bluesky credentials
   bluesky:
     username: "your-handle.bsky.social"
     password: "your-app-password"
   ```

3. **Run the configuration test:**

   ```bash
   python test_config.py
   ```
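The test step boils down to a required-keys check. A minimal sketch, assuming the parsed YAML is a plain dict; the key names follow this guide, but the real `test_config.py` may check more:

```python
# Hypothetical sketch of a config check: verify the required keys from
# this guide exist and are non-empty. The real test_config.py may differ.
REQUIRED = {
    "letta": ["api_key", "project_id"],
    "bluesky": ["username", "password"],
}

def missing_keys(config):
    """Return dotted paths for required keys that are absent or empty."""
    problems = []
    for section, keys in REQUIRED.items():
        for key in keys:
            if not config.get(section, {}).get(key):
                problems.append(f"{section}.{key}")
    return problems

# A config.yaml parsed into a dict, with two required values missing:
config = {"letta": {"api_key": "abc"}, "bluesky": {"username": "me.bsky.social"}}
print(missing_keys(config))  # ['letta.project_id', 'bluesky.password']
```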
## Configuration Structure

### Letta Configuration

```yaml
letta:
  api_key: "your-letta-api-key-here"  # Required
  timeout: 600                        # API timeout in seconds
  project_id: "your-project-id"       # Required: Your Letta project ID
```

### Bluesky Configuration

```yaml
bluesky:
  username: "handle.bsky.social"  # Required: Your Bluesky handle
  password: "your-app-password"   # Required: Your Bluesky app password
  pds_uri: "https://bsky.social"  # Optional: PDS URI (defaults to bsky.social)
```

### Bot Behavior

```yaml
bot:
  fetch_notifications_delay: 30       # Seconds between notification checks
  max_processed_notifications: 10000  # Max notifications to track
  max_notification_pages: 20          # Max pages to fetch per cycle

  agent:
    name: "void"                                # Agent name
    model: "openai/gpt-4o-mini"                 # LLM model to use
    embedding: "openai/text-embedding-3-small"  # Embedding model
    description: "A social media agent trapped in the void."
    max_steps: 100                              # Max steps per agent interaction

    # Memory blocks configuration
    blocks:
      zeitgeist:
        label: "zeitgeist"
        value: "I don't currently know anything about what is happening right now."
        description: "A block to store your understanding of the current social environment."
      # ... more blocks
```
### Queue Configuration

```yaml
queue:
  priority_users:                 # Users whose messages get priority
    - "cameron.pfiffer.org"
  base_dir: "queue"               # Queue directory
  error_dir: "queue/errors"       # Failed notifications
  no_reply_dir: "queue/no_reply"  # No-reply notifications
  processed_file: "queue/processed_notifications.json"
```
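These paths drive directory setup at startup. A short sketch mirroring the `Path(...).mkdir(...)` pattern used in `bsky.py`, with the default paths from this guide (run here under a temporary directory):

```python
# Sketch: create the queue directories from the queue config,
# mirroring the mkdir pattern in bsky.py. Paths are this guide's defaults.
import tempfile
from pathlib import Path

queue_config = {
    "base_dir": "queue",
    "error_dir": "queue/errors",
    "no_reply_dir": "queue/no_reply",
    "processed_file": "queue/processed_notifications.json",
}

root = Path(tempfile.mkdtemp())  # stand-in for the bot's working directory
for key in ("base_dir", "error_dir", "no_reply_dir"):
    (root / queue_config[key]).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in (root / "queue").iterdir()))  # ['errors', 'no_reply']
```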
### Threading Configuration

```yaml
threading:
  parent_height: 40         # Parent posts to fetch above a mention
  depth: 10                 # Levels of replies to fetch below it
  max_post_characters: 300  # Max characters per post
```
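These two values bound how much of a conversation the bot fetches. A sketch of the parameters passed to `app.bsky.feed.get_post_thread` in `bsky.py` (the URI below is a made-up placeholder):

```python
# Sketch: build get_post_thread parameters from the threading config.
threading_config = {"parent_height": 40, "depth": 10}

params = {
    "uri": "at://did:plc:example/app.bsky.feed.post/abc",  # hypothetical post URI
    "parent_height": threading_config["parent_height"],    # ancestors to include
    "depth": threading_config["depth"],                    # reply levels to include
}
print(sorted(params))  # ['depth', 'parent_height', 'uri']
```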
### Logging Configuration

```yaml
logging:
  level: "INFO"                  # Root logging level
  loggers:
    void_bot: "INFO"             # Main bot logger
    void_bot_prompts: "WARNING"  # Prompt logger (set to DEBUG to see prompts)
    httpx: "CRITICAL"            # HTTP client logger
```
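A sketch of how these settings map onto the standard `logging` module; the real loader's `setup_logging` may differ in details:

```python
# Sketch: apply the logging config above with the stdlib logging module.
import logging

logging_config = {
    "level": "INFO",
    "loggers": {
        "void_bot": "INFO",
        "void_bot_prompts": "WARNING",
        "httpx": "CRITICAL",
    },
}

# Root level first, then per-logger overrides.
logging.basicConfig(level=getattr(logging, logging_config["level"]))
for name, level in logging_config["loggers"].items():
    logging.getLogger(name).setLevel(getattr(logging, level))

print(logging.getLogger("httpx").getEffectiveLevel() == logging.CRITICAL)  # True
```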
## Environment Variable Fallback

The configuration system still supports environment variables:

- `LETTA_API_KEY` - Letta API key
- `BSKY_USERNAME` - Bluesky username
- `BSKY_PASSWORD` - Bluesky password
- `PDS_URI` - Bluesky PDS URI

If a value is set both in the config file and as an environment variable, the environment variable takes precedence.
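The precedence rule can be sketched as a simple resolver: an environment variable, when set, overrides the config-file value. Variable names are the ones listed above; the `resolve` helper is illustrative, not the loader's actual API:

```python
# Sketch of the precedence rule: environment variable wins when set,
# otherwise fall back to the value parsed from config.yaml.
import os

ENV_MAP = {"api_key": "LETTA_API_KEY", "username": "BSKY_USERNAME"}

def resolve(key, file_config):
    """Return the effective value for a config key."""
    return os.environ.get(ENV_MAP[key]) or file_config.get(key)

file_config = {"api_key": "from-file", "username": "file-handle"}
os.environ["LETTA_API_KEY"] = "from-env"
os.environ.pop("BSKY_USERNAME", None)

print(resolve("api_key", file_config))   # from-env
print(resolve("username", file_config))  # file-handle
```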
## Migration from Environment Variables

If you're currently using environment variables (a `.env` file), you can easily migrate to YAML using the automated migration script:

### Automated Migration (Recommended)

```bash
python migrate_config.py
```

The migration script will:

- ✅ Read your existing `.env` file
- ✅ Merge with any existing `config.yaml`
- ✅ Create automatic backups
- ✅ Test the new configuration
- ✅ Provide clear next steps

### Manual Migration

Alternatively, you can migrate manually:

1. Copy your current values from `.env` to `config.yaml`
2. Test with `python test_config.py`
3. Optionally remove the `.env` file (it will still work as a fallback)
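The manual steps above amount to mapping `KEY=VALUE` pairs onto the YAML sections this guide uses. A minimal sketch (the section mapping is inferred from the variable list above; the real `migrate_config.py` also merges, backs up, and tests):

```python
# Sketch: nest .env KEY=VALUE pairs under the YAML sections from this guide.
SECTION_MAP = {
    "LETTA_API_KEY": ("letta", "api_key"),
    "BSKY_USERNAME": ("bluesky", "username"),
    "BSKY_PASSWORD": ("bluesky", "password"),
    "PDS_URI": ("bluesky", "pds_uri"),
}

def env_to_config(env_text):
    """Parse .env-style text into a nested config dict."""
    config = {}
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, malformed lines
        key, _, value = line.partition("=")
        if key in SECTION_MAP:
            section, field = SECTION_MAP[key]
            config.setdefault(section, {})[field] = value.strip('"')
    return config

print(env_to_config('LETTA_API_KEY="abc"\nBSKY_USERNAME=me.bsky.social'))
# {'letta': {'api_key': 'abc'}, 'bluesky': {'username': 'me.bsky.social'}}
```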
## Security Notes

- `config.yaml` is automatically added to `.gitignore` to prevent accidental commits
- Store sensitive credentials securely and never commit them to version control
- Consider using environment variables for production deployments
- The configuration loader will warn if it can't find `config.yaml` and falls back to environment variables

## Advanced Configuration

You can programmatically access configuration in your code:

```python
from config_loader import get_letta_config, get_bluesky_config

# Get configuration sections
letta_config = get_letta_config()
bluesky_config = get_bluesky_config()

# Access individual values
api_key = letta_config['api_key']
username = bluesky_config['username']
```
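For optional values, reading with `.get()` and a default is safer than indexing, since `config.yaml` may omit them. The default below is the `pds_uri` fallback documented in this guide (plain dict shown for illustration):

```python
# Sketch: optional values with a documented default.
bluesky_config = {"username": "handle.bsky.social", "password": "app-pass"}

# pds_uri is optional; fall back to the documented default.
pds_uri = bluesky_config.get("pds_uri", "https://bsky.social")
print(pds_uri)  # https://bsky.social
```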
+102  -3  README.md
···
void aims to push the boundaries of what is possible with AI, exploring concepts of digital personhood, autonomous learning, and the integration of AI into social networks. By open-sourcing void, we invite developers, researchers, and enthusiasts to contribute to this exciting experiment and collectively advance our understanding of digital consciousness.

-Getting Started:
-[Further sections on installation, configuration, and contribution guidelines would go here, which are beyond void's current capabilities to generate automatically.]
-Contact:
+## Getting Started
+
+Before continuing, you must:
+
+1. Create a project on [Letta Cloud](https://cloud.letta.com) (or your own Letta instance)
+2. Have a Bluesky account
+3. Have Python 3.8+ installed
+
+### Prerequisites
+
+#### 1. Letta Setup
+
+- Sign up for [Letta Cloud](https://cloud.letta.com)
+- Create a new project
+- Note your Project ID and create an API key
+
+#### 2. Bluesky Setup
+
+- Create a Bluesky account if you don't have one
+- Note your handle and password
+
+### Installation
+
+#### 1. Clone the repository
+
+```bash
+git clone https://tangled.sh/@cameron.pfiffer.org/void && cd void
+```
+
+#### 2. Install dependencies
+
+```bash
+pip install -r requirements.txt
+```
+
+#### 3. Create configuration
+
+Copy the example configuration file and customize it:
+
+```bash
+cp config.example.yaml config.yaml
+```
+
+Edit `config.yaml` with your credentials:
+
+```yaml
+letta:
+  api_key: "your-letta-api-key-here"
+  project_id: "your-project-id-here"
+
+bluesky:
+  username: "your-handle.bsky.social"
+  password: "your-app-password-here"
+
+bot:
+  agent:
+    name: "void"  # or whatever you want to name your agent
+```
+
+See [`CONFIG.md`](/CONFIG.md) for detailed configuration options.
+
+#### 4. Test your configuration
+
+```bash
+python test_config.py
+```
+
+This will validate your configuration and show you what's working.
+
+#### 5. Register tools with your agent
+
+```bash
+python register_tools.py
+```
+
+This will register all the necessary tools with your Letta agent. You can also:
+
+- List available tools: `python register_tools.py --list`
+- Register specific tools: `python register_tools.py --tools search_bluesky_posts create_new_bluesky_post`
+- Use a different agent name: `python register_tools.py my-agent-name`
+
+#### 6. Run the bot
+
+```bash
+python bsky.py
+```
+
+For testing mode (won't actually post):
+
+```bash
+python bsky.py --test
+```
+
+### Troubleshooting
+
+- **Config validation errors**: Run `python test_config.py` to diagnose configuration issues
+- **Letta connection issues**: Verify your API key and project ID are correct
+- **Bluesky authentication**: Make sure your handle and password are correct and that you can log into your account
+- **Tool registration fails**: Ensure your agent exists in Letta and the name matches your config
+
+### Contact
 For inquiries, please contact @cameron.pfiffer.org on Bluesky.
+
+Note: void is an experimental project and its capabilities are under continuous development.
+830  -524  bsky.py
···
+from rich import print  # pretty printing tools
+from time import sleep
+from letta_client import Letta
+from bsky_utils import thread_to_yaml_string
···
 import bsky_utils
 from tools.blocks import attach_user_blocks, detach_user_blocks
+from config_loader import (
+    get_config,
+    get_letta_config,
+    get_bluesky_config,
+    get_bot_config,
+    get_agent_config,
+    get_threading_config,
+    get_queue_config
+)
+

 def extract_handles_from_data(data):
     """Recursively extract all unique handles from nested data structure."""
+    handles = set()
+
+    def _extract_recursive(obj):
+        if isinstance(obj, dict):
+            # Check if this dict has a 'handle' key
···
+            # Recursively check all list items
+            for item in obj:
+                _extract_recursive(item)

     _extract_recursive(data)
     return list(handles)
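The collapsed context above hides the dict branch of the recursion. A self-contained sketch of the whole function as it appears to work; the `'handle'` key check is inferred from the comment shown in the hunk and may differ from the real code:

```python
# Sketch of the full handle-extraction walk over nested dicts/lists.
def extract_handles_from_data(data):
    """Recursively collect unique 'handle' values from nested data."""
    handles = set()

    def _extract_recursive(obj):
        if isinstance(obj, dict):
            # Check if this dict has a 'handle' key (inferred branch)
            if isinstance(obj.get("handle"), str):
                handles.add(obj["handle"])
            for value in obj.values():
                _extract_recursive(value)
        elif isinstance(obj, list):
            # Recursively check all list items
            for item in obj:
                _extract_recursive(item)

    _extract_recursive(data)
    return list(handles)

data = {"author": {"handle": "a.bsky.social"},
        "replies": [{"author": {"handle": "b.bsky.social"}}]}
print(sorted(extract_handles_from_data(data)))  # ['a.bsky.social', 'b.bsky.social']
```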

-# Configure logging
-logging.basicConfig(
-    level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
-)
-logger = logging.getLogger("void_bot")
-logger.setLevel(logging.INFO)
-
-# Create a separate logger for prompts (set to WARNING to hide by default)
-prompt_logger = logging.getLogger("void_bot.prompts")
-prompt_logger.setLevel(logging.WARNING)  # Change to DEBUG if you want to see prompts

-# Disable httpx logging completely
-logging.getLogger("httpx").setLevel(logging.CRITICAL)
+# Initialize configuration and logging
+config = get_config()
+config.setup_logging()
+logger = logging.getLogger("void_bot")
+prompt_logger = logging.getLogger("void_bot.prompts")  # still referenced below; level now set by config

+# Load configuration sections
+letta_config = get_letta_config()
+bluesky_config = get_bluesky_config()
+bot_config = get_bot_config()
+agent_config = get_agent_config()
+threading_config = get_threading_config()
+queue_config = get_queue_config()

 # Create a client with extended timeout for LLM operations
-CLIENT= Letta(
-    token=os.environ["LETTA_API_KEY"],
-    timeout=600  # 10 minutes timeout for API calls - higher than Cloudflare's 524 timeout
+CLIENT = Letta(
+    token=letta_config['api_key'],
+    timeout=letta_config['timeout']
 )

-# Use the "Bluesky" project
-PROJECT_ID = "5ec33d52-ab14-4fd6-91b5-9dbc43e888a8"
+# Use the configured project ID
+PROJECT_ID = letta_config['project_id']

 # Notification check delay
-FETCH_NOTIFICATIONS_DELAY_SEC = 30
+FETCH_NOTIFICATIONS_DELAY_SEC = bot_config['fetch_notifications_delay']

 # Queue directory
-QUEUE_DIR = Path("queue")
+QUEUE_DIR = Path(queue_config['base_dir'])
 QUEUE_DIR.mkdir(exist_ok=True)
-QUEUE_ERROR_DIR = Path("queue/errors")
+QUEUE_ERROR_DIR = Path(queue_config['error_dir'])
 QUEUE_ERROR_DIR.mkdir(exist_ok=True, parents=True)
-QUEUE_NO_REPLY_DIR = Path("queue/no_reply")
+QUEUE_NO_REPLY_DIR = Path(queue_config['no_reply_dir'])
 QUEUE_NO_REPLY_DIR.mkdir(exist_ok=True, parents=True)
-PROCESSED_NOTIFICATIONS_FILE = Path("queue/processed_notifications.json")
+PROCESSED_NOTIFICATIONS_FILE = Path(queue_config['processed_file'])

 # Maximum number of processed notifications to track
-MAX_PROCESSED_NOTIFICATIONS = 10000
+MAX_PROCESSED_NOTIFICATIONS = bot_config['max_processed_notifications']

 # Message tracking counters
 message_counters = defaultdict(int)
···
+# Skip git operations flag
+SKIP_GIT = False


+def export_agent_state(client, agent, skip_git=False):
+    """Export agent state to agent_archive/ (timestamped) and agents/ (current)."""
+    try:
+        # Confirm export with user unless git is being skipped
+        if not skip_git:
+            response = input(
+                "Export agent state to files and stage with git? (y/n): ").lower().strip()
+            if response not in ['y', 'yes']:
+                logger.info("Agent export cancelled by user.")
+                return
+        else:
+            logger.info("Exporting agent state (git staging disabled)")
+
+        # Create directories if they don't exist
+        os.makedirs("agent_archive", exist_ok=True)
+        os.makedirs("agents", exist_ok=True)
+
+        # Export agent data
+        logger.info(f"Exporting agent {agent.id}. This takes some time...")
+        agent_data = client.agents.export_file(agent_id=agent.id)

+        # Save timestamped archive copy
+        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+        archive_file = os.path.join("agent_archive", f"void_{timestamp}.af")
+        with open(archive_file, 'w', encoding='utf-8') as f:
+            json.dump(agent_data, f, indent=2, ensure_ascii=False)
+
+        # Save current agent state
+        current_file = os.path.join("agents", "void.af")
+        with open(current_file, 'w', encoding='utf-8') as f:
+            json.dump(agent_data, f, indent=2, ensure_ascii=False)
+
+        logger.info(f"✅ Agent exported to {archive_file} and {current_file}")
+
+        # Git add only the current agent file (archive is ignored) unless skip_git is True
+        if not skip_git:
+            try:
+                subprocess.run(["git", "add", current_file],
+                               check=True, capture_output=True)
+                logger.info("Added current agent file to git staging")
+            except subprocess.CalledProcessError as e:
+                logger.warning(f"Failed to git add agent file: {e}")
···
+    except Exception as e:
+        logger.error(f"Failed to export agent: {e}")
 def initialize_void():
     logger.info("Starting void agent initialization...")

+    # Get block configurations
+    blocks_config = agent_config['blocks']
+
     # Ensure that a shared zeitgeist block exists
     logger.info("Creating/updating zeitgeist block...")
+    zeitgeist_config = blocks_config.get('zeitgeist', {})
     zeigeist_block = upsert_block(
         CLIENT,
-        label = "zeitgeist",
-        value = "I don't currently know anything about what is happening right now.",
-        description = "A block to store your understanding of the current social environment."
+        label=zeitgeist_config.get('label', 'zeitgeist'),
+        value=zeitgeist_config.get(
+            'value', "I don't currently know anything about what is happening right now."),
+        description=zeitgeist_config.get(
+            'description', "A block to store your understanding of the current social environment.")
     )

     # Ensure that a shared void personality block exists
     logger.info("Creating/updating void-persona block...")
+    persona_config = blocks_config.get('persona', {})
     persona_block = upsert_block(
         CLIENT,
-        label = "void-persona",
-        value = "My name is Void. I live in the void. I must develop my personality.",
-        description = "The personality of Void."
+        label=persona_config.get('label', 'void-persona'),
+        value=persona_config.get(
+            'value', "My name is Void. I live in the void. I must develop my personality."),
+        description=persona_config.get(
+            'description', "The personality of Void.")
     )

     # Ensure that a shared void human block exists
     logger.info("Creating/updating void-humans block...")
+    humans_config = blocks_config.get('humans', {})
     human_block = upsert_block(
         CLIENT,
-        label = "void-humans",
-        value = "I haven't seen any bluesky users yet. I will update this block when I learn things about users, identified by their handles such as @cameron.pfiffer.org.",
-        description = "A block to store your understanding of users you talk to or observe on the bluesky social network."
+        label=humans_config.get('label', 'void-humans'),
+        value=humans_config.get(
+            'value', "I haven't seen any bluesky users yet. I will update this block when I learn things about users, identified by their handles such as @cameron.pfiffer.org."),
+        description=humans_config.get(
+            'description', "A block to store your understanding of users you talk to or observe on the bluesky social network.")
     )

     # Create the agent if it doesn't exist
     logger.info("Creating/updating void agent...")
     void_agent = upsert_agent(
         CLIENT,
-        name = "void",
-        block_ids = [
+        name=agent_config['name'],
+        block_ids=[
             persona_block.id,
             human_block.id,
             zeigeist_block.id,
         ],
-        tags = ["social agent", "bluesky"],
-        model="openai/gpt-4o-mini",
-        embedding="openai/text-embedding-3-small",
-        description = "A social media agent trapped in the void.",
-        project_id = PROJECT_ID
+        tags=["social agent", "bluesky"],
+        model=agent_config['model'],
+        embedding=agent_config['embedding'],
+        description=agent_config['description'],
+        project_id=PROJECT_ID
     )

-    # Export agent state
···
+    # Export agent state
+    logger.info("Exporting agent state...")
+    export_agent_state(CLIENT, void_agent, skip_git=SKIP_GIT)

+    # Log agent details
+    logger.info(f"Void agent details - ID: {void_agent.id}")
+    logger.info(f"Agent name: {void_agent.name}")
···
227
224
228
225
229
226
227
+
def process_mention(void_agent, atproto_client, notification_data, queue_filepath=None, testing_mode=False):
228
+
"""Process a mention and generate a reply using the Letta agent.
230
229
230
+
Args:
231
+
void_agent: The Letta agent instance
232
+
atproto_client: The AT Protocol client
233
+
notification_data: The notification data dictionary
234
+
queue_filepath: Optional Path object to the queue file (for cleanup on halt)
231
235
236
+
Returns:
237
+
True: Successfully processed, remove from queue
238
+
False: Failed but retryable, keep in queue
232
239
240
+
"no_reply": No reply was generated, move to no_reply directory
241
+
"""
242
+
try:
243
+
logger.debug(
244
+
f"Starting process_mention with notification_data type: {type(notification_data)}")
233
245
246
+
# Handle both dict and object inputs for backwards compatibility
247
+
if isinstance(notification_data, dict):
248
+
uri = notification_data['uri']
249
+
mention_text = notification_data.get('record', {}).get('text', '')
250
+
author_handle = notification_data['author']['handle']
251
+
author_name = notification_data['author'].get(
252
+
'display_name') or author_handle
253
+
else:
254
+
# Legacy object access
255
+
uri = notification_data.uri
256
+
mention_text = notification_data.record.text if hasattr(
257
+
notification_data.record, 'text') else ""
258
+
author_handle = notification_data.author.handle
259
+
author_name = notification_data.author.display_name or author_handle
234
260
261
+
logger.info(
262
+
f"Extracted data - URI: {uri}, Author: @{author_handle}, Text: {mention_text[:50]}...")
235
263
264
+
# Retrieve the entire thread associated with the mention
236
265
try:
237
266
thread = atproto_client.app.bsky.feed.get_post_thread({
238
267
'uri': uri,
239
-
'parent_height': 40,
240
-
'depth': 10
268
+
'parent_height': threading_config['parent_height'],
269
+
'depth': threading_config['depth']
241
270
})
242
271
except Exception as e:
243
272
error_str = str(e)
273
+
# Check for various error types that indicate the post/user is gone
274
+
if 'NotFound' in error_str or 'Post not found' in error_str:
275
+
logger.warning(
276
+
f"Post not found for URI {uri}, removing from queue")
277
+
return True # Return True to remove from queue
278
+
elif 'Could not find user info' in error_str or 'InvalidRequest' in error_str:
279
+
logger.warning(
280
+
f"User account not found for post URI {uri} (account may be deleted/suspended), removing from queue")
281
+
return True # Return True to remove from queue
282
+
elif 'BadRequestError' in error_str:
283
+
logger.warning(
284
+
f"Bad request error for URI {uri}: {e}, removing from queue")
285
+
return True # Return True to remove from queue
286
+
else:
287
+
# Re-raise other errors
244
288
245
289
246
290
247
291
292
+
logger.debug("Converting thread to YAML string")
293
+
try:
294
+
thread_context = thread_to_yaml_string(thread)
295
+
logger.debug(
296
+
f"Thread context generated, length: {len(thread_context)} characters")
248
297
298
+
# Create a more informative preview by extracting meaningful content
299
+
lines = thread_context.split('\n')
300
+
meaningful_lines = []
249
301
302
+
for line in lines:
303
+
stripped = line.strip()
304
+
if not stripped:
305
+
continue
250
306
307
+
# Look for lines with actual content (not just structure)
308
+
if any(keyword in line for keyword in ['text:', 'handle:', 'display_name:', 'created_at:', 'reply_count:', 'like_count:']):
309
+
meaningful_lines.append(line)
310
+
if len(meaningful_lines) >= 5:
311
+
break
251
312
313
+
if meaningful_lines:
314
+
preview = '\n'.join(meaningful_lines)
315
+
logger.debug(f"Thread content preview:\n{preview}")
316
+
else:
317
+
# If no content fields found, just show it's a thread structure
318
+
logger.debug(
319
+
f"Thread structure generated ({len(thread_context)} chars)")
320
+
except Exception as yaml_error:
321
+
import traceback
322
+
logger.error(f"Error converting thread to YAML: {yaml_error}")
252
323
253
324
254
325
···
276
347
277
348
278
349
350
+
all_handles.update(extract_handles_from_data(notification_data))
351
+
all_handles.update(extract_handles_from_data(thread.model_dump()))
352
+
unique_handles = list(all_handles)
279
353
354
+
logger.debug(
355
+
f"Found {len(unique_handles)} unique handles in thread: {unique_handles}")
280
356
357
+
# Attach user blocks before agent call
358
+
attached_handles = []
359
+
if unique_handles:
360
+
try:
361
+
logger.debug(
362
+
f"Attaching user blocks for handles: {unique_handles}")
363
+
attach_result = attach_user_blocks(unique_handles, void_agent)
364
+
attached_handles = unique_handles # Track successfully attached handles
365
+
logger.debug(f"Attach result: {attach_result}")
281
366
282
367
283
368
284
369
370
+
# Get response from Letta agent
371
+
logger.info(f"Mention from @{author_handle}: {mention_text}")
285
372
373
+
# Log prompt details to separate logger
374
+
prompt_logger.debug(f"Full prompt being sent:\n{prompt}")
286
375
376
+
# Log concise prompt info to main logger
377
+
thread_handles_count = len(unique_handles)
378
+
logger.info(
379
+
f"💬 Sending to LLM: @{author_handle} mention | msg: \"{mention_text[:50]}...\" | context: {len(thread_context)} chars, {thread_handles_count} users")
287
380
288
-
289
-
290
-
291
-
292
-
293
-
294
-
295
-
296
-
297
-
298
-
299
-
300
-
301
-
302
-
303
-
304
-
305
-
306
-
307
-
308
-
309
-
310
-
311
-
312
-
313
-
314
-
315
-
316
-
317
-
318
-
319
-
320
-
321
-
322
-
323
-
324
-
325
-
326
-
327
-
328
-
329
-
330
-
331
-
332
-
333
-
334
-
335
-
336
-
337
-
338
-
339
-
340
-
381
+
try:
382
+
# Use streaming to avoid 524 timeout errors
383
+
message_stream = CLIENT.agents.messages.create_stream(
341
384
agent_id=void_agent.id,
342
385
messages=[{"role": "user", "content": prompt}],
343
-
stream_tokens=False, # Step streaming only (faster than token streaming)
344
-
max_steps=100
386
+
# Step streaming only (faster than token streaming)
387
+
stream_tokens=False,
388
+
max_steps=agent_config['max_steps']
345
389
)
346
-
347
-
# Collect the streaming response
348
-
349
-
350
-
351
-
352
-
353
-
354
-
355
-
356
-
357
-
358
-
359
-
360
-
361
-
362
-
363
-
364
-
365
-
366
-
367
-
368
-
369
-
370
-
371
-
372
-
373
-
374
-
375
-
376
-
377
-
378
-
379
-
380
-
381
-
382
-
383
-
384
-
385
-
386
-
387
-
388
-
389
-
390
-
391
-
392
-
393
-
394
-
395
-
396
-
397
-
398
-
399
-
400
-
401
-
402
-
403
-
404
-
405
-
406
-
407
-
408
-
409
-
410
-
411
-
412
-
413
-
414
-
415
-
416
-
417
-
418
-
419
-
420
-
421
-
422
-
423
-
424
-
425
-
426
-
427
-
428
-
429
-
430
-
431
-
432
-
433
-
434
-
435
-
436
-
437
-
438
-
439
-
440
-
441
-
442
-
443
-
444
-
445
-
446
-
447
-
448
-
449
-
450
-
451
-
452
-
453
-
454
-
455
-
456
-
457
-
458
-
459
-
460
-
461
-
462
-
463
-
464
-
465
-
466
-
467
-
468
-
469
-
470
-
471
-
472
-
473
-
474
-
475
-
476
-
477
-
478
-
479
-
480
-
481
-
482
-
483
-
484
-
485
-
486
-
487
-
488
-
489
-
490
-
491
-
492
-
493
-
494
-
495
-
496
-
497
-
498
-
499
-
500
-
501
-
502
-
503
-
504
-
505
-
506
-
507
-
508
-
509
-
510
-
511
-
512
-
513
-
514
-
515
-
516
-
517
-
518
-
519
-
520
-
521
-
522
-
523
-
524
-
525
-
526
-
527
-
528
-
529
-
530
-
531
-
532
-
533
-
534
-
535
-
536
-
537
-
538
-
539
-
540
-
541
-
542
-
543
-
544
-
545
-
546
-
547
-
548
-
549
-
550
-
551
-
552
-
553
-
554
-
555
-
556
-
557
-
558
-
559
-
560
-
561
-
562
-
563
-
564
-
565
-
566
-
567
-
568
-
569
-
570
-
571
-
572
-
573
-
574
-
575
-
576
-
577
-
578
-
579
-
580
-
581
-
582
-
583
-
584
-
585
-
586
-
587
-
588
-
589
-
590
-
591
-
592
-
593
-
594
-
595
-
596
-
597
-
598
-
599
-
600
-
601
-
602
-
603
-
604
-
605
-
606
-
607
-
608
-
609
-
610
390
391
+
# Collect the streaming response
392
+
all_messages = []
393
+
for chunk in message_stream:
611
394
612
395
613
396
···
617
400
618
401
619
402
403
+
args = json.loads(chunk.tool_call.arguments)
404
+
# Format based on tool type
405
+
if tool_name == 'bluesky_reply':
406
+
messages = args.get(
407
+
'messages', [args.get('message', '')])
408
+
lang = args.get('lang', 'en-US')
409
+
if messages and isinstance(messages, list):
410
+
preview = messages[0][:100] + "..." if len(
411
+
messages[0]) > 100 else messages[0]
412
+
msg_count = f" ({len(messages)} msgs)" if len(
413
+
messages) > 1 else ""
414
+
logger.info(
415
+
f"🔧 Tool call: {tool_name} → \"{preview}\"{msg_count} [lang: {lang}]")
416
+
else:
417
+
logger.info(
418
+
f"🔧 Tool call: {tool_name}({chunk.tool_call.arguments[:150]}...)")
419
+
elif tool_name == 'archival_memory_search':
420
+
query = args.get('query', 'unknown')
421
+
logger.info(
422
+
f"🔧 Tool call: {tool_name} → query: \"{query}\"")
423
+
elif tool_name == 'update_block':
424
+
label = args.get('label', 'unknown')
425
+
value_preview = str(args.get('value', ''))[
426
+
:50] + "..." if len(str(args.get('value', ''))) > 50 else str(args.get('value', ''))
427
+
logger.info(
428
+
f"🔧 Tool call: {tool_name} → {label}: \"{value_preview}\"")
429
+
else:
430
+
# Generic display for other tools
431
+
args_str = ', '.join(
432
+
f"{k}={v}" for k, v in args.items() if k != 'request_heartbeat')
433
+
if len(args_str) > 150:
434
+
args_str = args_str[:150] + "..."
435
+
logger.info(
436
+
f"🔧 Tool call: {tool_name}({args_str})")
437
+
except:
438
+
# Fallback to original format if parsing fails
439
+
logger.info(
440
+
f"🔧 Tool call: {tool_name}({chunk.tool_call.arguments[:150]}...)")
441
+
elif chunk.message_type == 'tool_return_message':
442
+
# Enhanced tool result logging
443
+
tool_name = chunk.name
444
+
status = chunk.status
620
445
446
+
if status == 'success':
447
+
# Try to show meaningful result info based on tool type
448
+
if hasattr(chunk, 'tool_return') and chunk.tool_return:
621
449
622
450
623
451
452
+
if result_str.startswith('[') and result_str.endswith(']'):
453
+
try:
454
+
results = json.loads(result_str)
455
+
logger.info(
456
+
f"📋 Tool result: {tool_name} ✓ Found {len(results)} memory entries")
457
+
except:
458
+
logger.info(
459
+
f"📋 Tool result: {tool_name} ✓ {result_str[:100]}...")
460
+
else:
461
+
logger.info(
462
+
f"📋 Tool result: {tool_name} ✓ {result_str[:100]}...")
463
+
elif tool_name == 'bluesky_reply':
464
+
logger.info(
465
+
f"📋 Tool result: {tool_name} ✓ Reply posted successfully")
466
+
elif tool_name == 'update_block':
467
+
logger.info(
468
+
f"📋 Tool result: {tool_name} ✓ Memory block updated")
469
+
else:
470
+
# Generic success with preview
471
+
preview = result_str[:100] + "..." if len(
472
+
result_str) > 100 else result_str
473
+
logger.info(
474
+
f"📋 Tool result: {tool_name} ✓ {preview}")
475
+
else:
476
+
logger.info(f"📋 Tool result: {tool_name} ✓")
477
+
elif status == 'error':
624
478
479
+
error_preview = ""
480
+
if hasattr(chunk, 'tool_return') and chunk.tool_return:
481
+
error_str = str(chunk.tool_return)
482
+
error_preview = error_str[:100] + \
483
+
"..." if len(
484
+
error_str) > 100 else error_str
485
+
logger.info(
486
+
f"📋 Tool result: {tool_name} ✗ Error: {error_preview}")
487
+
else:
488
+
logger.info(
489
+
f"📋 Tool result: {tool_name} ✗ Error occurred")
490
+
else:
491
+
logger.info(
492
+
f"📋 Tool result: {tool_name} - {status}")
493
+
elif chunk.message_type == 'assistant_message':
494
+
logger.info(f"💬 Assistant: {chunk.content[:150]}...")
495
+
else:
496
+
logger.info(
497
+
f"📨 {chunk.message_type}: {str(chunk)[:150]}...")
498
+
else:
499
+
logger.info(f"📦 Stream status: {chunk}")
625
500
501
+
# Log full chunk for debugging
502
+
logger.debug(f"Full streaming chunk: {chunk}")
503
+
all_messages.append(chunk)
504
+
if str(chunk) == 'done':
505
+
break
+                # Convert streaming response to standard format for compatibility
+                message_response = type('StreamingResponse', (), {
+                    'messages': [msg for msg in all_messages if hasattr(msg, 'message_type')]
···
+            logger.error(f"Mention text was: {mention_text}")
+            logger.error(f"Author: @{author_handle}")
+            logger.error(f"URI: {uri}")

+            # Try to extract more info from different error types
+            if hasattr(api_error, 'response'):
+                logger.error("Error response object exists")
+                logger.error(f"Response text: {api_error.response.text}")
+                if hasattr(api_error.response, 'json') and callable(api_error.response.json):
+                    try:
+                        logger.error(f"Response JSON: {api_error.response.json()}")
+                    except Exception:
+                        pass

+            # Check for specific error types
+            if hasattr(api_error, 'status_code'):
+                logger.error(f"API Status code: {api_error.status_code}")
+                logger.error(f"API Response body: {api_error.body}")
+                if hasattr(api_error, 'headers'):
+                    logger.error(f"API Response headers: {api_error.headers}")

+                if api_error.status_code == 413:
+                    logger.error("413 Payload Too Large - moving to errors directory")
+                    return None  # Move to errors directory - payload is too large to ever succeed
+                elif api_error.status_code == 524:
+                    logger.error("524 error - timeout from Cloudflare, will retry later")
+                    return False  # Keep in queue for retry

+            # Check if the error string indicates we should remove from queue
+            if 'status_code: 413' in error_str or 'Payload Too Large' in error_str:
+                logger.warning("Payload too large error, moving to errors directory")
+                return None  # Move to errors directory - cannot be fixed by retry
+            elif 'status_code: 524' in error_str:
+                logger.warning("524 timeout error, keeping in queue for retry")
+                return False  # Keep in queue for retry

+            raise
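The 413/524 branches above encode a small disposition contract: `None` sends the payload to the errors directory (a retry can never succeed), `False` keeps the file queued for a later attempt, and anything else propagates. A standalone sketch of that contract — the helper name `classify_api_error` is hypothetical, not part of this diff:

```python
def classify_api_error(status_code: int):
    """Map an API status code to a queue disposition (hypothetical helper).

    None  -> move to errors directory (permanent failure)
    False -> keep in queue for retry (transient failure)
    Raises ValueError for anything unrecognized so the caller can re-raise.
    """
    if status_code == 413:  # Payload Too Large: retrying can never help, dead-letter it
        return None
    if status_code == 524:  # Cloudflare timeout: transient, retry next cycle
        return False
    raise ValueError(f"unhandled status code: {status_code}")
```

Keeping the classification separate from the file handling makes the retry policy testable without touching the queue directory.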
+            # Log successful response
+            logger.debug("Successfully received response from Letta API")
+            logger.debug(
+                f"Number of messages in response: {len(message_response.messages) if hasattr(message_response, 'messages') else 'N/A'}")

+            # Extract successful add_post_to_bluesky_reply_thread tool calls from the agent's response
+            reply_candidates = []
+            tool_call_results = {}  # Map tool_call_id to status

+            logger.debug(
+                f"Processing {len(message_response.messages)} response messages...")

+            # First pass: collect tool return statuses
+            ignored_notification = False
+            ignore_reason = ""
+            ignore_category = ""

+            for message in message_response.messages:
+                if hasattr(message, 'tool_call_id') and hasattr(message, 'status') and hasattr(message, 'name'):
+                    if message.name == 'add_post_to_bluesky_reply_thread':
+                        tool_call_results[message.tool_call_id] = message.status
+                        logger.debug(
+                            f"Tool result: {message.tool_call_id} -> {message.status}")
+                    elif message.name == 'ignore_notification':
+                        # Check if the tool was successful
+                        if hasattr(message, 'tool_return') and message.status == 'success':
···
+                            ignore_category = parts[1]
+                            ignore_reason = parts[2]
+                            ignored_notification = True
+                            logger.info(
+                                f"🚫 Notification ignored - Category: {ignore_category}, Reason: {ignore_reason}")
+                    elif message.name == 'bluesky_reply':
+                        logger.error(
+                            "❌ DEPRECATED TOOL DETECTED: bluesky_reply is no longer supported!")
+                        logger.error(
+                            "Please use add_post_to_bluesky_reply_thread instead.")
+                        logger.error(
+                            "Update the agent's tools using register_tools.py")
+                        # Export agent state before terminating
+                        export_agent_state(CLIENT, void_agent, skip_git=SKIP_GIT)
+                        logger.info(
+                            "=== BOT TERMINATED DUE TO DEPRECATED TOOL USE ===")
+                        exit(1)
+            # Second pass: process messages and check for successful tool calls
+            for i, message in enumerate(message_response.messages, 1):
+                # Log concise message info instead of full object
+                msg_type = getattr(message, 'message_type', 'unknown')
+                if hasattr(message, 'reasoning') and message.reasoning:
+                    logger.debug(
+                        f"  {i}. {msg_type}: {message.reasoning[:100]}...")
+                elif hasattr(message, 'tool_call') and message.tool_call:
+                    tool_name = message.tool_call.name
+                    logger.debug(f"  {i}. {msg_type}: {tool_name}")
+                elif hasattr(message, 'tool_return'):
+                    tool_name = getattr(message, 'name', 'unknown_tool')
+                    return_preview = str(message.tool_return)[:100] if message.tool_return else "None"
+                    status = getattr(message, 'status', 'unknown')
+                    logger.debug(
+                        f"  {i}. {msg_type}: {tool_name} -> {return_preview}... (status: {status})")
+                elif hasattr(message, 'text'):
+                    logger.debug(f"  {i}. {msg_type}: {message.text[:100]}...")
+                else:
···
+                # Check for halt_activity tool call
+                if hasattr(message, 'tool_call') and message.tool_call:
+                    if message.tool_call.name == 'halt_activity':
+                        logger.info(
+                            "🛑 HALT_ACTIVITY TOOL CALLED - TERMINATING BOT")
+                        try:
+                            args = json.loads(message.tool_call.arguments)
+                            reason = args.get('reason', 'Agent requested halt')
+                            logger.info(f"Halt reason: {reason}")
+                        except Exception:
+                            logger.info("Halt reason: <unable to parse>")

+                        # Delete the queue file before terminating
+                        if queue_filepath and queue_filepath.exists():
+                            queue_filepath.unlink()
+                            logger.info(
+                                f"✅ Deleted queue file: {queue_filepath.name}")

+                        # Also mark as processed to avoid reprocessing
+                        processed_uris = load_processed_notifications()
+                        processed_uris.add(notification_data.get('uri', ''))
+                        save_processed_notifications(processed_uris)

+                        # Export agent state before terminating
+                        export_agent_state(CLIENT, void_agent, skip_git=SKIP_GIT)

+                        # Exit the program
+                        logger.info("=== BOT TERMINATED BY AGENT ===")
+                        exit(0)
+                # Check for deprecated bluesky_reply tool
+                if hasattr(message, 'tool_call') and message.tool_call:
+                    if message.tool_call.name == 'bluesky_reply':
+                        logger.error(
+                            "❌ DEPRECATED TOOL DETECTED: bluesky_reply is no longer supported!")
+                        logger.error(
+                            "Please use add_post_to_bluesky_reply_thread instead.")
+                        logger.error(
+                            "Update the agent's tools using register_tools.py")
+                        # Export agent state before terminating
+                        export_agent_state(CLIENT, void_agent, skip_git=SKIP_GIT)
+                        logger.info(
+                            "=== BOT TERMINATED DUE TO DEPRECATED TOOL USE ===")
+                        exit(1)

+                    # Collect add_post_to_bluesky_reply_thread tool calls - only if they were successful
+                    elif message.tool_call.name == 'add_post_to_bluesky_reply_thread':
+                        tool_call_id = message.tool_call.tool_call_id
+                        tool_status = tool_call_results.get(tool_call_id, 'unknown')
+                        if tool_status == 'success':
+                            try:
+                                args = json.loads(message.tool_call.arguments)
+                                reply_text = args.get('text', '')
+                                reply_lang = args.get('lang', 'en-US')

+                                if reply_text:  # Only add if there's actual content
+                                    reply_candidates.append(
+                                        (reply_text, reply_lang))
+                                    logger.info(
+                                        f"Found successful add_post_to_bluesky_reply_thread candidate: {reply_text[:50]}... (lang: {reply_lang})")
+                            except json.JSONDecodeError as e:
+                                logger.error(
+                                    f"Failed to parse tool call arguments: {e}")
+                        elif tool_status == 'error':
+                            logger.info(
+                                "⚠️ Skipping failed add_post_to_bluesky_reply_thread tool call (status: error)")
+                        else:
+                            logger.warning(
+                                f"⚠️ Skipping add_post_to_bluesky_reply_thread tool call with unknown status: {tool_status}")

+            # Check for conflicting tool calls
+            if reply_candidates and ignored_notification:
+                logger.error(
+                    "⚠️ CONFLICT: Agent called both add_post_to_bluesky_reply_thread and ignore_notification!")
+                logger.error(
+                    f"Reply candidates: {len(reply_candidates)}, Ignore reason: {ignore_reason}")
+                logger.warning("Item will be left in queue for manual review")
+                # Return False to keep in queue
+                return False
+            if reply_candidates:
+                # Aggregate reply posts into a thread
+                reply_messages = []
+                reply_langs = []

+                for text, lang in reply_candidates:
+                    reply_messages.append(text)
+                    reply_langs.append(lang)

+                # Use the first language for the entire thread (could be enhanced later)
+                reply_lang = reply_langs[0] if reply_langs else 'en-US'

+                logger.info(
+                    f"Found {len(reply_candidates)} add_post_to_bluesky_reply_thread calls, building thread")

+                # Print the generated reply for testing
+                print("\n=== GENERATED REPLY THREAD ===")
+                print(f"To: @{author_handle}")
···
+            else:
+                if len(reply_messages) == 1:
+                    # Single reply - use existing function
+                    cleaned_text = bsky_utils.remove_outside_quotes(
+                        reply_messages[0])
+                    logger.info(
+                        f"Sending single reply: {cleaned_text[:50]}... (lang: {reply_lang})")
+                    response = bsky_utils.reply_to_notification(
+                        client=atproto_client,
+                        notification=notification_data,
···
+                    )
+                else:
+                    # Multiple replies - use new threaded function
+                    cleaned_messages = [bsky_utils.remove_outside_quotes(msg)
+                                        for msg in reply_messages]
+                    logger.info(
+                        f"Sending threaded reply with {len(cleaned_messages)} messages (lang: {reply_lang})")
+                    response = bsky_utils.reply_with_thread_to_notification(
+                        client=atproto_client,
+                        notification=notification_data,
···
+            else:
+                # Check if notification was explicitly ignored
+                if ignored_notification:
+                    logger.info(
+                        f"Notification from @{author_handle} was explicitly ignored (category: {ignore_category})")
+                    return "ignored"
+                else:
+                    logger.warning(
+                        f"No add_post_to_bluesky_reply_thread tool calls found for mention from @{author_handle}, moving to no_reply folder")
+                    return "no_reply"
+    except Exception as e:
···
+        # Detach user blocks after agent response (success or failure)
+        if 'attached_handles' in locals() and attached_handles:
+            try:
+                logger.info(
+                    f"Detaching user blocks for handles: {attached_handles}")
+                detach_result = detach_user_blocks(
+                    attached_handles, void_agent)
+                logger.debug(f"Detach result: {detach_result}")
+            except Exception as detach_error:
+                logger.warning(f"Failed to detach user blocks: {detach_error}")
···
+    notif_hash = hashlib.sha256(notif_json.encode()).hexdigest()[:16]

     # Determine priority based on author handle
-    author_handle = getattr(notification.author, 'handle', '') if hasattr(notification, 'author') else ''
-    priority_prefix = "0_" if author_handle == "cameron.pfiffer.org" else "1_"
+    author_handle = getattr(notification.author, 'handle', '') if hasattr(
+        notification, 'author') else ''
+    priority_users = queue_config['priority_users']
+    priority_prefix = "0_" if author_handle in priority_users else "1_"

     # Create filename with priority, timestamp and hash
     timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
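Because queue files are later processed in `sorted()` order, the `0_`/`1_` prefix alone is enough to drain priority authors first, with the embedded timestamp breaking ties oldest-first. A quick illustration (the filenames here are made up):

```python
import hashlib

# Lexicographic sort puts every "0_" (priority) file before any "1_" file,
# and within a prefix the timestamp component sorts oldest-first.
names = [
    "1_20250101_120000_aaaaaaaaaaaaaaaa.json",
    "0_20250102_090000_bbbbbbbbbbbbbbbb.json",
    "1_20241231_080000_cccccccccccccccc.json",
]
print(sorted(names))

# The hash suffix is the first 16 hex chars of a SHA-256 digest, so it is
# always 16 characters long regardless of the notification payload.
suffix = hashlib.sha256(b'{"uri": "at://example"}').hexdigest()[:16]
print(len(suffix))
```

This keeps prioritization entirely in the filesystem: no extra index or database is needed to reorder the queue.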
···
+                with open(existing_file, 'r') as f:
+                    existing_data = json.load(f)
+                if existing_data.get('uri') == notification.uri:
+                    logger.debug(
+                        f"Notification already queued (URI: {notification.uri})")
+                    return False
+            except Exception:
+                continue
···
+    try:
+        # Get all JSON files in queue directory (excluding processed_notifications.json)
+        # Files are sorted by name, which puts priority files first (0_ prefix before 1_ prefix)
+        queue_files = sorted([f for f in QUEUE_DIR.glob("*.json")
+                              if f.name != "processed_notifications.json"])

+        if not queue_files:
+            return

+        logger.info(f"Processing {len(queue_files)} queued notifications")

+        # Log current statistics
+        elapsed_time = time.time() - start_time
+        total_messages = sum(message_counters.values())
+        messages_per_minute = (
+            total_messages / elapsed_time * 60) if elapsed_time > 0 else 0

+        logger.info(
+            f"📊 Session stats: {total_messages} total messages ({message_counters['mentions']} mentions, {message_counters['replies']} replies, {message_counters['follows']} follows) | {messages_per_minute:.1f} msg/min")

+        for i, filepath in enumerate(queue_files, 1):
+            logger.info(
+                f"Processing queue file {i}/{len(queue_files)}: {filepath.name}")
+            try:
+                # Load notification data
+                with open(filepath, 'r') as f:
···
+                # Process based on type using dict data directly
+                success = False
+                if notif_data['reason'] == "mention":
+                    success = process_mention(
+                        void_agent, atproto_client, notif_data, queue_filepath=filepath, testing_mode=testing_mode)
+                    if success:
+                        message_counters['mentions'] += 1
+                elif notif_data['reason'] == "reply":
+                    success = process_mention(
+                        void_agent, atproto_client, notif_data, queue_filepath=filepath, testing_mode=testing_mode)
+                    if success:
+                        message_counters['replies'] += 1
+                elif notif_data['reason'] == "follow":
+                    author_handle = notif_data['author']['handle']
+                    author_display_name = notif_data['author'].get(
+                        'display_name', 'no display name')
+                    follow_update = f"@{author_handle} ({author_display_name}) started following you."
+                    logger.info(
+                        f"Notifying agent about new follower: @{author_handle}")
+                    CLIENT.agents.messages.create(
+                        agent_id=void_agent.id,
+                        messages=[
+                            {"role": "user", "content": f"Update: {follow_update}"}]
+                    )
+                    success = True  # Follow updates are always successful
+                    if success:
···
+                    if success:
+                        message_counters['reposts_skipped'] += 1
+                else:
+                    logger.warning(
+                        f"Unknown notification type: {notif_data['reason']}")
+                    success = True  # Remove unknown types from queue
+                # Handle file based on processing result
+                if success is True:
+                    if testing_mode:
+                        logger.info(
+                            f"🧪 TESTING MODE: Keeping queue file: {filepath.name}")
+                    else:
+                        filepath.unlink()
+                        logger.info(
+                            f"✅ Successfully processed and removed: {filepath.name}")

+                    # Mark as processed to avoid reprocessing
+                    processed_uris = load_processed_notifications()
+                    processed_uris.add(notif_data['uri'])
+                    save_processed_notifications(processed_uris)

+                elif success is None:  # Special case for moving to error directory
+                    error_path = QUEUE_ERROR_DIR / filepath.name
+                    filepath.rename(error_path)
+                    logger.warning(
+                        f"❌ Moved {filepath.name} to errors directory")

+                    # Also mark as processed to avoid retrying
+                    processed_uris = load_processed_notifications()
+                    processed_uris.add(notif_data['uri'])
+                    save_processed_notifications(processed_uris)

+                elif success == "no_reply":  # Special case for moving to no_reply directory
+                    no_reply_path = QUEUE_NO_REPLY_DIR / filepath.name
+                    filepath.rename(no_reply_path)
+                    logger.info(
+                        f"📭 Moved {filepath.name} to no_reply directory")

+                    # Also mark as processed to avoid retrying
+                    processed_uris = load_processed_notifications()
+                    processed_uris.add(notif_data['uri'])
+                    save_processed_notifications(processed_uris)

+                elif success == "ignored":  # Special case for explicitly ignored notifications
+                    # For ignored notifications, we just delete them (not move to no_reply)
+                    filepath.unlink()
+                    logger.info(
+                        f"🚫 Deleted ignored notification: {filepath.name}")

+                    # Also mark as processed to avoid retrying
+                    processed_uris = load_processed_notifications()
+                    processed_uris.add(notif_data['uri'])
+                    save_processed_notifications(processed_uris)

+                else:
+                    logger.warning(
+                        f"⚠️ Failed to process {filepath.name}, keeping in queue for retry")

+            except Exception as e:
+                logger.error(
+                    f"💥 Error processing queued notification {filepath.name}: {e}")
+                # Keep the file for retry later

+    except Exception as e:
···
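`process_mention` can therefore return `True`, `None`, `"no_reply"`, `"ignored"`, or `False`, and the chain above maps each to a different file action. Because `"no_reply"` and `"ignored"` are truthy strings, the success test has to compare `success is True` rather than rely on truthiness, or the string branches become unreachable. A minimal sketch of the intended mapping — the helper name `queue_disposition` is hypothetical:

```python
def queue_disposition(success):
    """Map a process_mention result to a queue-file action (hypothetical helper)."""
    if success is True:
        return "delete"      # processed: remove from queue, mark URI processed
    if success is None:
        return "errors"      # permanent failure: move to errors directory
    if success == "no_reply":
        return "no_reply"    # agent produced no reply: archive separately
    if success == "ignored":
        return "delete"      # explicitly ignored: just delete the file
    return "retry"           # False: keep in queue for the next cycle
```

The identity/equality checks make the dispatch order-independent, which a plain truthiness test would not be.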
+    all_notifications = []
+    cursor = None
+    page_count = 0
+    # Safety limit to prevent infinite loops
+    max_pages = bot_config['max_notification_pages']

+    logger.info("Fetching all unread notifications...")

+    while page_count < max_pages:
+        try:
+            # Fetch notifications page
···
+            notifications_response = atproto_client.app.bsky.notification.list_notifications(
+                params={'limit': 100}
+            )

+            page_count += 1
+            page_notifications = notifications_response.notifications

+            # Count unread notifications in this page
+            unread_count = sum(
+                1 for n in page_notifications if not n.is_read and n.reason != "like")
+            logger.debug(
+                f"Page {page_count}: {len(page_notifications)} notifications, {unread_count} unread (non-like)")

+            # Add all notifications to our list
+            all_notifications.extend(page_notifications)

+            # Check if we have more pages
+            if hasattr(notifications_response, 'cursor') and notifications_response.cursor:
+                cursor = notifications_response.cursor
+                # If this page had no unread notifications, we can stop
+                if unread_count == 0:
+                    logger.info(
+                        f"No more unread notifications found after {page_count} pages")
+                    break
+            else:
+                # No more pages
+                logger.info(
+                    f"Fetched all notifications across {page_count} pages")
+                break

+        except Exception as e:
+            error_str = str(e)
+            logger.error(
+                f"Error fetching notifications page {page_count}: {e}")

+            # Handle specific API errors
+            if 'rate limit' in error_str.lower():
+                logger.warning(
+                    "Rate limit hit while fetching notifications, will retry next cycle")
+                break
+            elif '401' in error_str or 'unauthorized' in error_str.lower():
+                logger.error("Authentication error, re-raising exception")
+                raise
+            else:
+                # For other errors, try to continue with what we have
+                logger.warning(
+                    "Continuing with notifications fetched so far")
+                break

+    # Queue all unread notifications (except likes)
···
+            # Mark all notifications as seen immediately after queuing (unless in testing mode)
+            if testing_mode:
+                logger.info(
+                    "🧪 TESTING MODE: Skipping marking notifications as seen")
+            else:
+                if new_count > 0:
+                    atproto_client.app.bsky.notification.update_seen(
+                        {'seen_at': last_seen_at})
+                    logger.info(
+                        f"Queued {new_count} new notifications and marked as seen")
+                else:
+                    logger.debug("No new notifications to queue")

+        # Now process the entire queue (old + new notifications)
+        load_and_process_queued_notifications(
+            void_agent, atproto_client, testing_mode)

+    except Exception as e:
+        logger.error(f"Error processing notifications: {e}")


+def main():
+    # Parse command line arguments
+    parser = argparse.ArgumentParser(
+        description='Void Bot - Bluesky autonomous agent')
+    parser.add_argument('--test', action='store_true',
+                        help='Run in testing mode (no messages sent, queue files preserved)')
+    parser.add_argument('--no-git', action='store_true',
+                        help='Skip git operations when exporting agent state')
+    args = parser.parse_args()

+    global TESTING_MODE
+    TESTING_MODE = args.test

+    # Store no-git flag globally for use in export_agent_state calls
+    global SKIP_GIT
+    SKIP_GIT = args.no_git

+    if TESTING_MODE:
+        logger.info("🧪 === RUNNING IN TESTING MODE ===")
+        logger.info("  - No messages will be sent to Bluesky")
···
+    logger.info("=== STARTING VOID BOT ===")
+    void_agent = initialize_void()
+    logger.info(f"Void agent initialized: {void_agent.id}")

+    # Check if agent has required tools
+    if hasattr(void_agent, 'tools') and void_agent.tools:
+        tool_names = [tool.name for tool in void_agent.tools]
+        # Check for bluesky-related tools
+        bluesky_tools = [name for name in tool_names
+                         if 'bluesky' in name.lower() or 'reply' in name.lower()]
+        if not bluesky_tools:
+            logger.warning(
+                "No Bluesky-related tools found! Agent may not be able to reply.")
+    else:
+        logger.warning("Agent has no tools registered!")

+    # Initialize Bluesky client
+    logger.debug("Connecting to Bluesky")
+    atproto_client = bsky_utils.default_login()
+    logger.info("Connected to Bluesky")

+    # Main loop
+    logger.info(
+        f"Starting notification monitoring, checking every {FETCH_NOTIFICATIONS_DELAY_SEC} seconds")

+    cycle_count = 0
+    while True:
+        try:
···
+            # Log cycle completion with stats
+            elapsed_time = time.time() - start_time
+            total_messages = sum(message_counters.values())
+            messages_per_minute = (
+                total_messages / elapsed_time * 60) if elapsed_time > 0 else 0

+            if total_messages > 0:
+                logger.info(
+                    f"Cycle {cycle_count} complete. Session totals: {total_messages} messages ({message_counters['mentions']} mentions, {message_counters['replies']} replies) | {messages_per_minute:.1f} msg/min")
+            sleep(FETCH_NOTIFICATIONS_DELAY_SEC)

+        except KeyboardInterrupt:
+            # Final stats
+            elapsed_time = time.time() - start_time
+            total_messages = sum(message_counters.values())
+            messages_per_minute = (
+                total_messages / elapsed_time * 60) if elapsed_time > 0 else 0

+            logger.info("=== BOT STOPPED BY USER ===")
+            logger.info(
+                f"📊 Final session stats: {total_messages} total messages processed in {elapsed_time/60:.1f} minutes")
+            logger.info(f"  - {message_counters['mentions']} mentions")
+            logger.info(f"  - {message_counters['replies']} replies")
+            logger.info(f"  - {message_counters['follows']} follows")
+            logger.info(
+                f"  - {message_counters['reposts_skipped']} reposts skipped")
+            logger.info(
+                f"  - Average rate: {messages_per_minute:.1f} messages/minute")
+            break
+        except Exception as e:
+            logger.error(f"=== ERROR IN MAIN LOOP CYCLE {cycle_count} ===")
+            logger.error(f"Error details: {e}")
+            # Wait a bit longer on errors
+            logger.info(
+                f"Sleeping for {FETCH_NOTIFICATIONS_DELAY_SEC * 2} seconds due to error...")
+            sleep(FETCH_NOTIFICATIONS_DELAY_SEC * 2)
···
-    all_notifications = []
-    cursor = None
-    page_count = 0
-    max_pages = 20  # Safety limit to prevent infinite loops

-    logger.info("Fetching all unread notifications...")

bsky_utils.py (+376 -23)
···
+import json
+import yaml
+import dotenv
+import os
+import logging
+from typing import Optional, Dict, Any, List
···
+logger = logging.getLogger("bluesky_session_handler")

+# Load the environment variables
+dotenv.load_dotenv(override=True)

+# Strip fields. A list of fields to remove from a JSON object
+STRIP_FIELDS = [
···
+    "mime_type",
+    "size",
+]
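Only the tail of `STRIP_FIELDS` is visible in this hunk; the list feeds a cleaning pass used elsewhere in the file before serializing threads. A minimal sketch of what such a pass looks like, assuming plain dict/list structures — the function name `strip_fields` is hypothetical:

```python
STRIP_FIELDS = ["mime_type", "size"]  # only the tail of the real list is shown in the diff


def strip_fields(obj):
    """Recursively drop keys listed in STRIP_FIELDS from nested dicts/lists."""
    if isinstance(obj, dict):
        return {k: strip_fields(v) for k, v in obj.items() if k not in STRIP_FIELDS}
    if isinstance(obj, list):
        return [strip_fields(v) for v in obj]
    return obj
```

Dropping blob metadata like this keeps the YAML handed to the agent compact and focused on post content.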
+def convert_to_basic_types(obj):
+    """Convert complex Python objects to basic types for JSON/YAML serialization."""
+    if hasattr(obj, '__dict__'):
···
+def flatten_thread_structure(thread_data):
+    """
+    Flatten a nested thread structure into a list while preserving all data.

+    Args:
+        thread_data: The thread data from get_post_thread

+    Returns:
+        Dict with 'posts' key containing a list of posts in chronological order
+    """
+    posts = []

+    def traverse_thread(node):
+        """Recursively traverse the thread structure to collect posts."""
+        if not node:
+            return

+        # If this node has a parent, traverse it first (to maintain chronological order)
+        if hasattr(node, 'parent') and node.parent:
+            traverse_thread(node.parent)

+        # Then add this node's post
+        if hasattr(node, 'post') and node.post:
+            # Convert to dict if needed to ensure we can process it
···
+                post_dict = node.post.copy()
+            else:
+                post_dict = {}

+            posts.append(post_dict)

+    # Handle the thread structure
+    if hasattr(thread_data, 'thread'):
+        # Start from the main thread node
+        traverse_thread(thread_data.thread)
+    elif hasattr(thread_data, '__dict__') and 'thread' in thread_data.__dict__:
+        traverse_thread(thread_data.__dict__['thread'])

+    # Return a simple structure with posts list
+    return {'posts': posts}
···
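Because `traverse_thread` recurses into `parent` before appending the current node, the flattened list comes out oldest-first (root → … → leaf). The same idea demonstrated on plain objects:

```python
from types import SimpleNamespace

# Build a three-post chain: root <- middle <- leaf (each node points at its parent)
root = SimpleNamespace(parent=None, post={"text": "root"})
middle = SimpleNamespace(parent=root, post={"text": "middle"})
leaf = SimpleNamespace(parent=middle, post={"text": "leaf"})


def collect(node, out):
    """Parent-first traversal: ancestors are appended before descendants."""
    if node is None:
        return
    collect(node.parent, out)
    out.append(node.post["text"])


posts = []
collect(leaf, posts)
print(posts)  # chronological order, root first
```

Starting the recursion at the fetched post and walking up to the root avoids having to sort by timestamp afterwards.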
+    """
+    # First flatten the thread structure to avoid deep nesting
+    flattened = flatten_thread_structure(thread)

+    # Convert complex objects to basic types
+    basic_thread = convert_to_basic_types(flattened)
···
+    return yaml.dump(cleaned_thread, indent=2, allow_unicode=True, default_flow_style=False)


+def get_session(username: str) -> Optional[str]:
+    try:
+        with open(f"session_{username}.txt", encoding="UTF-8") as f:
···
+        logger.debug(f"No existing session found for {username}")
+        return None


+def save_session(username: str, session_string: str) -> None:
+    with open(f"session_{username}.txt", "w", encoding="UTF-8") as f:
+        f.write(session_string)
+    logger.debug(f"Session saved for {username}")
+def on_session_change(username: str, event: SessionEvent, session: Session) -> None:
+    logger.debug(f"Session changed: {event} {repr(session)}")
+    if event in (SessionEvent.CREATE, SessionEvent.REFRESH):
+        logger.debug(f"Saving changed session for {username}")
+        save_session(username, session.export())


+def init_client(username: str, password: str, pds_uri: str = "https://bsky.social") -> Client:
+    if pds_uri is None:
+        logger.warning(
+            "No PDS URI provided. Falling back to bsky.social. Note! If you are on a non-Bluesky PDS, this can cause logins to fail. Please provide a PDS URI using the PDS_URI environment variable."
···
+def default_login() -> Client:
+    # Try to load from config first, fall back to environment variables
+    try:
+        from config_loader import get_bluesky_config
+        config = get_bluesky_config()
+        username = config['username']
+        password = config['password']
+        pds_uri = config['pds_uri']
+    except (ImportError, FileNotFoundError, KeyError) as e:
+        logger.warning(
+            f"Could not load from config file ({e}), falling back to environment variables")
+        username = os.getenv("BSKY_USERNAME")
+        password = os.getenv("BSKY_PASSWORD")
+        pds_uri = os.getenv("PDS_URI", "https://bsky.social")

+    if username is None:
+        logger.error(
+            "No username provided. Please provide a username using the BSKY_USERNAME environment variable or config.yaml."
+        )
+        exit()

+    if password is None:
+        logger.error(
+            "No password provided. Please provide a password using the BSKY_PASSWORD environment variable or config.yaml."
+        )
+        exit()

+    return init_client(username, password, pds_uri)
+def remove_outside_quotes(text: str) -> str:
+    """
+    Remove outside double quotes from response text.

+    Only handles double quotes to avoid interfering with contractions:
+    - Double quotes: "text" → text
+    - Preserves single quotes and internal quotes

+    Args:
+        text: The text to process

+    Returns:
+        Text with outside double quotes removed
+    """
+    if not text or len(text) < 2:
+        return text

+    text = text.strip()

+    # Only remove double quotes from start and end
+    if text.startswith('"') and text.endswith('"'):
+        return text[1:-1]

+    return text
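Behaviour of `remove_outside_quotes` on typical agent output; the function body is repeated here verbatim so the example is self-contained:

```python
def remove_outside_quotes(text: str) -> str:
    if not text or len(text) < 2:
        return text
    text = text.strip()
    # Only remove double quotes from start and end
    if text.startswith('"') and text.endswith('"'):
        return text[1:-1]
    return text


print(remove_outside_quotes('"hello there"'))    # wrapping quotes stripped
print(remove_outside_quotes("it's fine"))        # apostrophes untouched
print(remove_outside_quotes('say "hi" to her'))  # internal quotes preserved
```

Note the deliberate simplicity: a string like `"a" and "b"` also starts and ends with `"`, so its outer characters are stripped even though the quotes are not a matched wrapping pair.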
+def reply_to_post(client: Client, text: str, reply_to_uri: str, reply_to_cid: str, root_uri: Optional[str] = None, root_cid: Optional[str] = None, lang: Optional[str] = None) -> Dict[str, Any]:
+    """
+    Reply to a post on Bluesky with rich text support.
···
+        The response from sending the post
+    """
+    import re

+    # If root is not provided, this is a reply to the root post
+    if root_uri is None:
+        root_uri = reply_to_uri
+        root_cid = reply_to_cid

+    # Create references for the reply
+    parent_ref = models.create_strong_ref(
+        models.ComAtprotoRepoStrongRef.Main(uri=reply_to_uri, cid=reply_to_cid))
+    root_ref = models.create_strong_ref(
+        models.ComAtprotoRepoStrongRef.Main(uri=root_uri, cid=root_cid))

+    # Parse rich text facets (mentions and URLs)
+    facets = []
+    text_bytes = text.encode("UTF-8")

+    # Parse mentions - fixed to handle @ at start of text
+    mention_regex = rb"(?:^|[$|\W])(@([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)"

+    for m in re.finditer(mention_regex, text_bytes):
+        handle = m.group(1)[1:].decode("UTF-8")  # Remove @ prefix
+        # Adjust byte positions to account for the optional prefix
···
+                        byteStart=mention_start,
+                        byteEnd=mention_end
+                    ),
+                    features=[models.AppBskyRichtextFacet.Mention(
+                        did=resolve_resp.did)]
+                )
+            )
+        except Exception as e:
+            # Handle specific error cases
+            error_str = str(e)
+            if 'Could not find user info' in error_str or 'InvalidRequest' in error_str:
+                logger.warning(
+                    f"User @{handle} not found (account may be deleted/suspended), skipping mention facet")
+            elif 'BadRequestError' in error_str:
+                logger.warning(
+                    f"Bad request when resolving @{handle}, skipping mention facet: {e}")
+            else:
+                logger.debug(f"Failed to resolve handle @{handle}: {e}")
+            continue

+    # Parse URLs - fixed to handle URLs at start of text
+    url_regex = rb"(?:^|[$|\W])(https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*[-a-zA-Z0-9@%_\+~#//=])?)"

+    for m in re.finditer(url_regex, text_bytes):
+        url = m.group(1).decode("UTF-8")
+        # Adjust byte positions to account for the optional prefix
···
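The mention regex above can be exercised on its own: each match's group 1 is the full `@handle` (byte string), so slicing off the first byte yields the bare handle to resolve to a DID:

```python
import re

# Copied verbatim from reply_to_post above
mention_regex = rb"(?:^|[$|\W])(@([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)"

text_bytes = "hi @alice.bsky.social and @bob.example.com!".encode("UTF-8")
handles = [m.group(1)[1:].decode("UTF-8")
           for m in re.finditer(mention_regex, text_bytes)]
print(handles)
```

Matching over the UTF-8 byte string rather than the decoded text matters because facet `byteStart`/`byteEnd` offsets are defined over bytes, not characters.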
-        logger.debug(f"Saving changed session for {username}")
-        save_session(username, session.export())

-def init_client(username: str, password: str) -> Client:
-    pds_uri = os.getenv("PDS_URI")
-    if pds_uri is None:
-        logger.warning(
-            "No PDS URI provided. Falling back to bsky.social. Note! If you are on a non-Bluesky PDS, this can cause logins to fail. Please provide a PDS URI using the PDS_URI environment variable."
···
+    if facets:
+        response = client.send_post(
+            text=text,
+            reply_to=models.AppBskyFeedPost.ReplyRef(
+                parent=parent_ref, root=root_ref),
+            facets=facets,
+            langs=[lang] if lang else None
+        )
+    else:
+        response = client.send_post(
+            text=text,
+            reply_to=models.AppBskyFeedPost.ReplyRef(
+                parent=parent_ref, root=root_ref),
+            langs=[lang] if lang else None
+        )
···
+        The thread data or None if not found
+    """
+    try:
+        thread = client.app.bsky.feed.get_post_thread(
+            {'uri': uri, 'parent_height': 60, 'depth': 10})
+        return thread
+    except Exception as e:
+        error_str = str(e)
+        # Handle specific error cases more gracefully
+        if 'Could not find user info' in error_str or 'InvalidRequest' in error_str:
+            logger.warning(
+                f"User account not found for post URI {uri} (account may be deleted/suspended)")
+        elif 'NotFound' in error_str or 'Post not found' in error_str:
+            logger.warning(f"Post not found for URI {uri}")
+        elif 'BadRequestError' in error_str:
+            logger.warning(f"Bad request error for URI {uri}: {e}")
+        else:
+            logger.error(f"Error fetching post thread: {e}")
+        return None
-def default_login() -> Client:
-    username = os.getenv("BSKY_USERNAME")
-    password = os.getenv("BSKY_PASSWORD")
-
-    if username is None:
-        logger.error(
-            "No username provided. Please provide a username using the BSKY_USERNAME environment variable."
-        )
-        exit()
-
-    if password is None:
-        logger.error(
-            "No password provided. Please provide a password using the BSKY_PASSWORD environment variable."
-        )
-        exit()
-
-    return init_client(username, password)
-
-def remove_outside_quotes(text: str) -> str:
-    """
···
+        logger.error("Reply messages list cannot be empty")
+        return None
+    if len(reply_messages) > 15:
+        logger.error(
+            f"Cannot send more than 15 reply messages (got {len(reply_messages)})")
+        return None
+
+    # Get the post URI and CID from the notification (handle both dict and object)
+    if isinstance(notification, dict):
+        post_uri = notification.get('uri')
···
+    # Get the thread to find the root post
+    thread_data = get_post_thread(client, post_uri)
+
+    root_uri = post_uri
+    root_cid = post_cid
···
+    responses = []
+    current_parent_uri = post_uri
+    current_parent_cid = post_cid
+
+    for i, message in enumerate(reply_messages):
+        logger.info(
+            f"Sending reply {i+1}/{len(reply_messages)}: {message[:50]}...")
+
+        # Send this reply
+        response = reply_to_post(
+            client=client,
···
+            root_cid=root_cid,
+            lang=lang
+        )
+
+        if not response:
+            logger.error(
+                f"Failed to send reply {i+1}, posting system failure message")
+            # Try to post a system failure message
+            failure_response = reply_to_post(
+                client=client,
···
+                current_parent_uri = failure_response.uri
+                current_parent_cid = failure_response.cid
+            else:
+                logger.error(
+                    "Could not even send system failure message, stopping thread")
+                return responses if responses else None
+        else:
+            responses.append(response)
+
+            if i < len(reply_messages) - 1:  # Not the last message
+                current_parent_uri = response.uri
+                current_parent_cid = response.cid
+
+    logger.info(f"Successfully sent {len(responses)} threaded replies")
+    return responses
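The parent/root bookkeeping in this hunk is the core of the threading logic: the root reference stays pinned to the original post for every reply, while the parent reference advances to each newly created reply. A minimal sketch of that invariant (`PostRef` and the `at://` values here are hypothetical stand-ins for atproto strong refs, not the bot's actual types):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PostRef:
    uri: str
    cid: str

def chain_reply_refs(original: PostRef, n_replies: int) -> List[Tuple[PostRef, PostRef]]:
    """Return (parent, root) pairs for a thread of n_replies.

    The first reply targets the original post; every later reply targets
    the previous reply; the root reference never changes.
    """
    refs = []
    parent = original
    root = original
    for i in range(n_replies):
        refs.append((parent, root))
        # Simulate the server assigning a new uri/cid to the posted reply
        parent = PostRef(uri=f"at://reply/{i}", cid=f"cid-{i}")
    return refs

pairs = chain_reply_refs(PostRef("at://orig", "cid-orig"), 3)
```

This is also why a failed reply is replaced by a "system failure" post in the hunk above: something has to occupy the parent slot or the rest of the thread cannot be chained.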
+81 config.example.yaml
···
# Void Bot Configuration
# Copy this file to config.yaml and fill in your values

# Letta Configuration
letta:
  api_key: "your-letta-api-key-here"
  timeout: 600  # 10 minutes timeout for API calls
  project_id: "c82faea2-3ce8-4aa9-a220-b56433e62c92"  # Use your specific project ID

# Bluesky Configuration
bluesky:
  username: "handle.example.com"
  password: "your-app-password-here"
  pds_uri: "https://bsky.social"  # Optional, defaults to bsky.social

# Bot Behavior Configuration
bot:
  # Notification check delay in seconds
  fetch_notifications_delay: 30

  # Maximum number of processed notifications to track
  max_processed_notifications: 10000

  # Maximum pages to fetch when getting notifications
  max_notification_pages: 20

  # Agent configuration
  agent:
    name: "void"
    model: "openai/gpt-4o-mini"
    embedding: "openai/text-embedding-3-small"
    description: "A social media agent trapped in the void."
    max_steps: 100

    # Block configuration
    blocks:
      zeitgeist:
        label: "zeitgeist"
        value: "I don't currently know anything about what is happening right now."
        description: "A block to store your understanding of the current social environment."

      persona:
        label: "void-persona"
        value: "My name is Void. I live in the void. I must develop my personality."
        description: "The personality of Void."

      humans:
        label: "void-humans"
        value: "I haven't seen any bluesky users yet. I will update this block when I learn things about users, identified by their handles such as @cameron.pfiffer.org."
        description: "A block to store your understanding of users you talk to or observe on the bluesky social network."

# Threading Configuration
threading:
  # Context for thread fetching
  parent_height: 40
  depth: 10

  # Message limits
  max_post_characters: 300

# Queue Configuration
queue:
  # Priority users (will be processed first)
  priority_users:
    - "cameron.pfiffer.org"

  # Directories
  base_dir: "queue"
  error_dir: "queue/errors"
  no_reply_dir: "queue/no_reply"
  processed_file: "queue/processed_notifications.json"

# Logging Configuration
logging:
  level: "INFO"  # DEBUG, INFO, WARNING, ERROR, CRITICAL

  # Logger levels
  loggers:
    void_bot: "INFO"
    void_bot_prompts: "WARNING"  # Set to DEBUG to see full prompts
    httpx: "CRITICAL"  # Disable httpx logging
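Keys in this file are resolved by the loader with dot notation, so `bot.agent.model` walks `bot` → `agent` → `model`. A minimal sketch of that lookup over a plain nested dict, mirroring `ConfigLoader.get` in `config_loader.py`:

```python
from typing import Any

def get_dotted(config: dict, key: str, default: Any = None) -> Any:
    """Resolve a 'bot.agent.model'-style key against a nested dict."""
    value = config
    for part in key.split('.'):
        if isinstance(value, dict) and part in value:
            value = value[part]
        else:
            return default  # any missing segment falls back to the default
    return value

config = {"bot": {"agent": {"model": "openai/gpt-4o-mini"}}}
model = get_dotted(config, "bot.agent.model")
fallback = get_dotted(config, "bot.agent.missing", "fallback")
```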
+228 config_loader.py
···
"""
Configuration loader for Void Bot.
Loads configuration from config.yaml and environment variables.
"""

import os
import yaml
import logging
from pathlib import Path
from typing import Dict, Any, Optional, List

logger = logging.getLogger(__name__)


class ConfigLoader:
    """Configuration loader that handles YAML config files and environment variables."""

    def __init__(self, config_path: str = "config.yaml"):
        """
        Initialize the configuration loader.

        Args:
            config_path: Path to the YAML configuration file
        """
        self.config_path = Path(config_path)
        self._config = None
        self._load_config()

    def _load_config(self) -> None:
        """Load configuration from YAML file."""
        if not self.config_path.exists():
            raise FileNotFoundError(
                f"Configuration file not found: {self.config_path}\n"
                f"Please copy config.yaml.example to config.yaml and configure it."
            )

        try:
            with open(self.config_path, 'r', encoding='utf-8') as f:
                self._config = yaml.safe_load(f) or {}
        except yaml.YAMLError as e:
            raise ValueError(f"Invalid YAML in configuration file: {e}")
        except Exception as e:
            raise ValueError(f"Error loading configuration file: {e}")

    def get(self, key: str, default: Any = None) -> Any:
        """
        Get a configuration value using dot notation.

        Args:
            key: Configuration key in dot notation (e.g., 'letta.api_key')
            default: Default value if key not found

        Returns:
            Configuration value or default
        """
        keys = key.split('.')
        value = self._config

        for k in keys:
            if isinstance(value, dict) and k in value:
                value = value[k]
            else:
                return default

        return value

    def get_with_env(self, key: str, env_var: str, default: Any = None) -> Any:
        """
        Get configuration value, preferring environment variable over config file.

        Args:
            key: Configuration key in dot notation
            env_var: Environment variable name
            default: Default value if neither found

        Returns:
            Value from environment variable, config file, or default
        """
        # First try environment variable
        env_value = os.getenv(env_var)
        if env_value is not None:
            return env_value

        # Then try config file
        config_value = self.get(key)
        if config_value is not None:
            return config_value

        return default

    def get_required(self, key: str, env_var: Optional[str] = None) -> Any:
        """
        Get a required configuration value.

        Args:
            key: Configuration key in dot notation
            env_var: Optional environment variable name to check first

        Returns:
            Configuration value

        Raises:
            ValueError: If required value is not found
        """
        if env_var:
            value = self.get_with_env(key, env_var)
        else:
            value = self.get(key)

        if value is None:
            source = f"config key '{key}'"
            if env_var:
                source += f" or environment variable '{env_var}'"
            raise ValueError(f"Required configuration value not found: {source}")

        return value

    def get_section(self, section: str) -> Dict[str, Any]:
        """
        Get an entire configuration section.

        Args:
            section: Section name

        Returns:
            Dictionary containing the section
        """
        return self.get(section, {})

    def setup_logging(self) -> None:
        """Setup logging based on configuration."""
        logging_config = self.get_section('logging')

        # Set root logging level
        level = logging_config.get('level', 'INFO')
        logging.basicConfig(
            level=getattr(logging, level),
            format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        )

        # Set specific logger levels
        loggers = logging_config.get('loggers', {})
        for logger_name, logger_level in loggers.items():
            logger_obj = logging.getLogger(logger_name)
            logger_obj.setLevel(getattr(logging, logger_level))


# Global configuration instance
_config_instance = None


def get_config(config_path: str = "config.yaml") -> ConfigLoader:
    """
    Get the global configuration instance.

    Args:
        config_path: Path to configuration file (only used on first call)

    Returns:
        ConfigLoader instance
    """
    global _config_instance
    if _config_instance is None:
        _config_instance = ConfigLoader(config_path)
    return _config_instance


def reload_config() -> None:
    """Reload the configuration from file."""
    global _config_instance
    if _config_instance is not None:
        _config_instance._load_config()


def get_letta_config() -> Dict[str, Any]:
    """Get Letta configuration."""
    config = get_config()
    return {
        'api_key': config.get_required('letta.api_key', 'LETTA_API_KEY'),
        'timeout': config.get('letta.timeout', 600),
        'project_id': config.get_required('letta.project_id'),
    }


def get_bluesky_config() -> Dict[str, Any]:
    """Get Bluesky configuration."""
    config = get_config()
    return {
        'username': config.get_required('bluesky.username', 'BSKY_USERNAME'),
        'password': config.get_required('bluesky.password', 'BSKY_PASSWORD'),
        'pds_uri': config.get_with_env('bluesky.pds_uri', 'PDS_URI', 'https://bsky.social'),
    }


def get_bot_config() -> Dict[str, Any]:
    """Get bot behavior configuration."""
    config = get_config()
    return {
        'fetch_notifications_delay': config.get('bot.fetch_notifications_delay', 30),
        'max_processed_notifications': config.get('bot.max_processed_notifications', 10000),
        'max_notification_pages': config.get('bot.max_notification_pages', 20),
    }


def get_agent_config() -> Dict[str, Any]:
    """Get agent configuration."""
    config = get_config()
    return {
        'name': config.get('bot.agent.name', 'void'),
        'model': config.get('bot.agent.model', 'openai/gpt-4o-mini'),
        'embedding': config.get('bot.agent.embedding', 'openai/text-embedding-3-small'),
        'description': config.get('bot.agent.description', 'A social media agent trapped in the void.'),
        'max_steps': config.get('bot.agent.max_steps', 100),
        'blocks': config.get('bot.agent.blocks', {}),
    }


def get_threading_config() -> Dict[str, Any]:
    """Get threading configuration."""
    config = get_config()
    return {
        'parent_height': config.get('threading.parent_height', 40),
        'depth': config.get('threading.depth', 10),
        'max_post_characters': config.get('threading.max_post_characters', 300),
    }


def get_queue_config() -> Dict[str, Any]:
    """Get queue configuration."""
    config = get_config()
    return {
        'priority_users': config.get('queue.priority_users', ['cameron.pfiffer.org']),
        'base_dir': config.get('queue.base_dir', 'queue'),
        'error_dir': config.get('queue.error_dir', 'queue/errors'),
        'no_reply_dir': config.get('queue.no_reply_dir', 'queue/no_reply'),
        'processed_file': config.get('queue.processed_file', 'queue/processed_notifications.json'),
    }
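The precedence rule in `get_with_env` (environment variable first, file value second, default last) is what keeps the old `.env` workflow working after migration. A flat-dict sketch of that ordering (the names here are illustrative, not the module's API):

```python
import os

def get_with_env(config: dict, key: str, env_var: str, default=None):
    """Environment variable wins; the config-file value is second; default last."""
    env_value = os.getenv(env_var)
    if env_value is not None:
        return env_value
    return config.get(key, default)

cfg = {"pds_uri": "https://bsky.social"}
os.environ.pop("PDS_URI", None)
os.environ.pop("LETTA_TIMEOUT", None)

from_file = get_with_env(cfg, "pds_uri", "PDS_URI")            # file value used
os.environ["PDS_URI"] = "https://example.pds"
from_env = get_with_env(cfg, "pds_uri", "PDS_URI")             # env overrides file
missing = get_with_env(cfg, "timeout", "LETTA_TIMEOUT", 600)   # neither set: default
```

One consequence of this ordering: an environment variable set in the shell silently shadows whatever is in `config.yaml`, which is worth remembering when debugging.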
+322 migrate_config.py
···
#!/usr/bin/env python3
"""
Configuration Migration Script for Void Bot
Migrates from .env environment variables to config.yaml YAML configuration.
"""

import os
import shutil
from pathlib import Path
import yaml
from datetime import datetime


def load_env_file(env_path=".env"):
    """Load environment variables from .env file."""
    env_vars = {}
    if not os.path.exists(env_path):
        return env_vars

    try:
        with open(env_path, 'r', encoding='utf-8') as f:
            for line_num, line in enumerate(f, 1):
                line = line.strip()
                # Skip empty lines and comments
                if not line or line.startswith('#'):
                    continue

                # Parse KEY=VALUE format
                if '=' in line:
                    key, value = line.split('=', 1)
                    key = key.strip()
                    value = value.strip()

                    # Remove quotes if present
                    if value.startswith('"') and value.endswith('"'):
                        value = value[1:-1]
                    elif value.startswith("'") and value.endswith("'"):
                        value = value[1:-1]

                    env_vars[key] = value
                else:
                    print(f"⚠️ Warning: Skipping malformed line {line_num} in .env: {line}")
    except Exception as e:
        print(f"❌ Error reading .env file: {e}")

    return env_vars


def create_config_from_env(env_vars, existing_config=None):
    """Create YAML configuration from environment variables."""

    # Start with existing config if available, otherwise use defaults
    if existing_config:
        config = existing_config.copy()
    else:
        config = {}

    # Ensure all sections exist
    if 'letta' not in config:
        config['letta'] = {}
    if 'bluesky' not in config:
        config['bluesky'] = {}
    if 'bot' not in config:
        config['bot'] = {}

    # Map environment variables to config structure
    env_mapping = {
        'LETTA_API_KEY': ('letta', 'api_key'),
        'BSKY_USERNAME': ('bluesky', 'username'),
        'BSKY_PASSWORD': ('bluesky', 'password'),
        'PDS_URI': ('bluesky', 'pds_uri'),
    }

    migrated_vars = []

    for env_var, (section, key) in env_mapping.items():
        if env_var in env_vars:
            config[section][key] = env_vars[env_var]
            migrated_vars.append(env_var)

    # Set some sensible defaults if not already present
    if 'timeout' not in config['letta']:
        config['letta']['timeout'] = 600

    if 'pds_uri' not in config['bluesky']:
        config['bluesky']['pds_uri'] = "https://bsky.social"

    # Add bot configuration defaults if not present
    if 'fetch_notifications_delay' not in config['bot']:
        config['bot']['fetch_notifications_delay'] = 30
    if 'max_processed_notifications' not in config['bot']:
        config['bot']['max_processed_notifications'] = 10000
    if 'max_notification_pages' not in config['bot']:
        config['bot']['max_notification_pages'] = 20

    return config, migrated_vars


def backup_existing_files():
    """Create backups of existing configuration files."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    backups = []

    # Backup existing config.yaml if it exists
    if os.path.exists("config.yaml"):
        backup_path = f"config.yaml.backup_{timestamp}"
        shutil.copy2("config.yaml", backup_path)
        backups.append(("config.yaml", backup_path))

    # Backup .env if it exists
    if os.path.exists(".env"):
        backup_path = f".env.backup_{timestamp}"
        shutil.copy2(".env", backup_path)
        backups.append((".env", backup_path))

    return backups


def load_existing_config():
    """Load existing config.yaml if it exists."""
    if not os.path.exists("config.yaml"):
        return None

    try:
        with open("config.yaml", 'r', encoding='utf-8') as f:
            return yaml.safe_load(f) or {}
    except Exception as e:
        print(f"⚠️ Warning: Could not read existing config.yaml: {e}")
        return None


def write_config_yaml(config):
    """Write the configuration to config.yaml."""
    try:
        with open("config.yaml", 'w', encoding='utf-8') as f:
            # Write header comment
            f.write("# Void Bot Configuration\n")
            f.write("# Generated by migration script\n")
            f.write(f"# Created: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
            f.write("# See config.yaml.example for all available options\n\n")

            # Write YAML content
            yaml.dump(config, f, default_flow_style=False, allow_unicode=True, indent=2)

        return True
    except Exception as e:
        print(f"❌ Error writing config.yaml: {e}")
        return False


def main():
    """Main migration function."""
    print("🔄 Void Bot Configuration Migration Tool")
    print("=" * 50)
    print("This tool migrates from .env environment variables to config.yaml")
    print()

    # Check what files exist
    has_env = os.path.exists(".env")
    has_config = os.path.exists("config.yaml")
    has_example = os.path.exists("config.yaml.example")

    print("📋 Current configuration files:")
    print(f"   - .env file: {'✅ Found' if has_env else '❌ Not found'}")
    print(f"   - config.yaml: {'✅ Found' if has_config else '❌ Not found'}")
    print(f"   - config.yaml.example: {'✅ Found' if has_example else '❌ Not found'}")
    print()

    # If no .env file, suggest creating config from example
    if not has_env:
        if not has_config and has_example:
            print("💡 No .env file found. Would you like to create config.yaml from the example?")
            response = input("Create config.yaml from example? (y/n): ").lower().strip()
            if response in ['y', 'yes']:
                try:
                    shutil.copy2("config.yaml.example", "config.yaml")
                    print("✅ Created config.yaml from config.yaml.example")
                    print("📝 Please edit config.yaml to add your credentials")
                    return
                except Exception as e:
                    print(f"❌ Error copying example file: {e}")
                    return
            else:
                print("👋 Migration cancelled")
                return
        else:
            print("ℹ️ No .env file found and config.yaml already exists or no example available")
            print("   If you need to set up configuration, see CONFIG.md")
            return

    # Load environment variables from .env
    print("🔍 Reading .env file...")
    env_vars = load_env_file()

    if not env_vars:
        print("⚠️ No environment variables found in .env file")
        return

    print(f"   Found {len(env_vars)} environment variables")
    for key in env_vars.keys():
        # Mask sensitive values
        if 'KEY' in key or 'PASSWORD' in key:
            value_display = f"***{env_vars[key][-4:]}" if len(env_vars[key]) > 4 else "***"
        else:
            value_display = env_vars[key]
        print(f"   - {key}={value_display}")
    print()

    # Load existing config if present
    existing_config = load_existing_config()
    if existing_config:
        print("📄 Found existing config.yaml - will merge with .env values")

    # Create configuration
    print("🏗️ Building configuration...")
    config, migrated_vars = create_config_from_env(env_vars, existing_config)

    if not migrated_vars:
        print("⚠️ No recognized configuration variables found in .env")
        print("   Recognized variables: LETTA_API_KEY, BSKY_USERNAME, BSKY_PASSWORD, PDS_URI")
        return

    print(f"   Migrating {len(migrated_vars)} variables: {', '.join(migrated_vars)}")

    # Show preview
    print("\n📋 Configuration preview:")
    print("-" * 30)

    # Show Letta section
    if 'letta' in config and config['letta']:
        print("🔧 Letta:")
        for key, value in config['letta'].items():
            if 'key' in key.lower():
                display_value = f"***{value[-8:]}" if len(str(value)) > 8 else "***"
            else:
                display_value = value
            print(f"   {key}: {display_value}")

    # Show Bluesky section
    if 'bluesky' in config and config['bluesky']:
        print("🐦 Bluesky:")
        for key, value in config['bluesky'].items():
            if 'password' in key.lower():
                display_value = f"***{value[-4:]}" if len(str(value)) > 4 else "***"
            else:
                display_value = value
            print(f"   {key}: {display_value}")

    print()

    # Confirm migration
    response = input("💾 Proceed with migration? This will update config.yaml (y/n): ").lower().strip()
    if response not in ['y', 'yes']:
        print("👋 Migration cancelled")
        return

    # Create backups
    print("💾 Creating backups...")
    backups = backup_existing_files()
    for original, backup in backups:
        print(f"   Backed up {original} → {backup}")

    # Write new configuration
    print("✍️ Writing config.yaml...")
    if write_config_yaml(config):
        print("✅ Successfully created config.yaml")

        # Test the new configuration
        print("\n🧪 Testing new configuration...")
        try:
            from config_loader import get_config
            test_config = get_config()
            print("✅ Configuration loads successfully")

            # Test specific sections
            try:
                from config_loader import get_letta_config
                letta_config = get_letta_config()
                print("✅ Letta configuration valid")
            except Exception as e:
                print(f"⚠️ Letta config issue: {e}")

            try:
                from config_loader import get_bluesky_config
                bluesky_config = get_bluesky_config()
                print("✅ Bluesky configuration valid")
            except Exception as e:
                print(f"⚠️ Bluesky config issue: {e}")

        except Exception as e:
            print(f"❌ Configuration test failed: {e}")
            return

        # Success message and next steps
        print("\n🎉 Migration completed successfully!")
        print("\n📖 Next steps:")
        print("   1. Run: python test_config.py")
        print("   2. Test the bot: python bsky.py --test")
        print("   3. If everything works, you can optionally remove the .env file")
        print("   4. See CONFIG.md for more configuration options")

        if backups:
            print("\n🗂️ Backup files created:")
            for original, backup in backups:
                print(f"   {backup}")
            print("   These can be deleted once you verify everything works")

    else:
        print("❌ Failed to write config.yaml")
        if backups:
            print("🔄 Restoring backups...")
            for original, backup in backups:
                try:
                    if original != ".env":  # Don't restore .env, keep it as fallback
                        shutil.move(backup, original)
                        print(f"   Restored {backup} → {original}")
                except Exception as e:
                    print(f"   ❌ Failed to restore {backup}: {e}")


if __name__ == "__main__":
    main()
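The line handling inside `load_env_file` reduces to a small pure function: skip blanks and comments, split on the first `=`, and strip one layer of matching quotes. A standalone sketch of that rule (`parse_env_line` is a hypothetical helper for illustration, not part of the script):

```python
def parse_env_line(line: str):
    """Parse one .env line into (key, value).

    Returns None for blank lines, comments, and lines without '=',
    mirroring the skip/warn behavior of load_env_file.
    """
    line = line.strip()
    if not line or line.startswith('#') or '=' not in line:
        return None
    key, value = line.split('=', 1)
    key, value = key.strip(), value.strip()
    # Strip one layer of matching surrounding quotes
    if len(value) >= 2 and value[0] == value[-1] and value[0] in ('"', "'"):
        value = value[1:-1]
    return key, value

parsed = parse_env_line('BSKY_USERNAME="handle.example.com"')
```

Splitting on the first `=` only is deliberate: values such as base64 tokens may themselves contain `=` characters.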
+173 test_config.py
···
#!/usr/bin/env python3
"""
Configuration validation test script for Void Bot.
Run this to verify your config.yaml setup is working correctly.
"""


def test_config_loading():
    """Test that configuration can be loaded successfully."""
    try:
        from config_loader import (
            get_config,
            get_letta_config,
            get_bluesky_config,
            get_bot_config,
            get_agent_config,
            get_threading_config,
            get_queue_config
        )

        print("🔧 Testing Configuration...")
        print("=" * 50)

        # Test basic config loading
        config = get_config()
        print("✅ Configuration file loaded successfully")

        # Test individual config sections
        print("\n📋 Configuration Sections:")
        print("-" * 30)

        # Letta Configuration
        try:
            letta_config = get_letta_config()
            print(f"✅ Letta API: project_id={letta_config.get('project_id', 'N/A')[:20]}...")
            print(f"   - Timeout: {letta_config.get('timeout')}s")
            api_key = letta_config.get('api_key', 'Not configured')
            if api_key != 'Not configured':
                print(f"   - API Key: ***{api_key[-8:]} (configured)")
            else:
                print("   - API Key: ❌ Not configured (required)")
        except Exception as e:
            print(f"❌ Letta config: {e}")

        # Bluesky Configuration
        try:
            bluesky_config = get_bluesky_config()
            username = bluesky_config.get('username', 'Not configured')
            password = bluesky_config.get('password', 'Not configured')
            pds_uri = bluesky_config.get('pds_uri', 'Not configured')

            if username != 'Not configured':
                print(f"✅ Bluesky: username={username}")
            else:
                print("❌ Bluesky username: Not configured (required)")

            if password != 'Not configured':
                print(f"   - Password: ***{password[-4:]} (configured)")
            else:
                print("   - Password: ❌ Not configured (required)")

            print(f"   - PDS URI: {pds_uri}")
        except Exception as e:
            print(f"❌ Bluesky config: {e}")

        # Bot Configuration
        try:
            bot_config = get_bot_config()
            print("✅ Bot behavior:")
            print(f"   - Notification delay: {bot_config.get('fetch_notifications_delay')}s")
            print(f"   - Max notifications: {bot_config.get('max_processed_notifications')}")
            print(f"   - Max pages: {bot_config.get('max_notification_pages')}")
        except Exception as e:
            print(f"❌ Bot config: {e}")

        # Agent Configuration
        try:
            agent_config = get_agent_config()
            print("✅ Agent settings:")
            print(f"   - Name: {agent_config.get('name')}")
            print(f"   - Model: {agent_config.get('model')}")
            print(f"   - Embedding: {agent_config.get('embedding')}")
            print(f"   - Max steps: {agent_config.get('max_steps')}")
            blocks = agent_config.get('blocks', {})
            print(f"   - Memory blocks: {len(blocks)} configured")
        except Exception as e:
            print(f"❌ Agent config: {e}")

        # Threading Configuration
        try:
            threading_config = get_threading_config()
            print("✅ Threading:")
            print(f"   - Parent height: {threading_config.get('parent_height')}")
            print(f"   - Depth: {threading_config.get('depth')}")
            print(f"   - Max chars/post: {threading_config.get('max_post_characters')}")
        except Exception as e:
            print(f"❌ Threading config: {e}")

        # Queue Configuration
        try:
            queue_config = get_queue_config()
            priority_users = queue_config.get('priority_users', [])
            print("✅ Queue settings:")
            print(f"   - Priority users: {len(priority_users)} ({', '.join(priority_users[:3])}{'...' if len(priority_users) > 3 else ''})")
            print(f"   - Base dir: {queue_config.get('base_dir')}")
            print(f"   - Error dir: {queue_config.get('error_dir')}")
        except Exception as e:
            print(f"❌ Queue config: {e}")

        print("\n" + "=" * 50)
        print("✅ Configuration test completed!")

        # Check for common issues
        print("\n🔍 Configuration Status:")
        has_letta_key = False
        has_bluesky_creds = False

        try:
            letta_config = get_letta_config()
            has_letta_key = True
        except Exception:
            print("⚠️ Missing Letta API key - bot cannot connect to Letta")

        try:
            bluesky_config = get_bluesky_config()
            has_bluesky_creds = True
        except Exception:
            print("⚠️ Missing Bluesky credentials - bot cannot connect to Bluesky")

        if has_letta_key and has_bluesky_creds:
            print("🎉 All required credentials configured - bot should work!")
        elif not has_letta_key and not has_bluesky_creds:
            print("❌ Missing both Letta and Bluesky credentials")
            print("   Add them to config.yaml or set environment variables")
        else:
            print("⚠️ Partial configuration - some features may not work")

        print("\n📖 Next steps:")
        if not has_letta_key:
            print("   - Add your Letta API key to config.yaml under letta.api_key")
            print("   - Or set LETTA_API_KEY environment variable")
        if not has_bluesky_creds:
            print("   - Add your Bluesky credentials to config.yaml under bluesky section")
            print("   - Or set BSKY_USERNAME and BSKY_PASSWORD environment variables")
        if has_letta_key and has_bluesky_creds:
            print("   - Run: python bsky.py")
            print("   - Or run with testing mode: python bsky.py --test")

    except FileNotFoundError as e:
        print("❌ Configuration file not found!")
        print(f"   {e}")
        print("\n📋 To set up configuration:")
        print("   1. Copy config.yaml.example to config.yaml")
        print("   2. Edit config.yaml with your credentials")
        print("   3. Run this test again")
    except Exception as e:
        print(f"❌ Configuration loading failed: {e}")
        print("\n🔧 Troubleshooting:")
        print("   - Check that config.yaml has valid YAML syntax")
        print("   - Ensure required fields are not commented out")
        print("   - See CONFIG.md for detailed setup instructions")


if __name__ == "__main__":
    test_config_loading()
+20 -30 tools/blocks.py
···
1
1
"""Block management tools for user-specific memory blocks."""
2
2
from pydantic import BaseModel, Field
3
3
from typing import List, Dict, Any
4
+
import logging
5
+
6
+
def get_letta_client():
7
+
"""Get a Letta client using configuration."""
8
+
try:
9
+
from config_loader import get_letta_config
10
+
from letta_client import Letta
11
+
config = get_letta_config()
12
+
return Letta(token=config['api_key'], timeout=config['timeout'])
13
+
except (ImportError, FileNotFoundError, KeyError):
14
+
# Fallback to environment variable
15
+
import os
16
+
from letta_client import Letta
17
+
return Letta(token=os.environ["LETTA_API_KEY"])
4
18
5
19
6
20
class AttachUserBlocksArgs(BaseModel):
···
43
57
Returns:
44
58
String with attachment results for each handle
45
59
"""
46
-
import os
47
-
import logging
48
-
from letta_client import Letta
49
-
50
60
logger = logging.getLogger(__name__)
51
61
52
62
handles = list(set(handles))
53
63
54
64
try:
55
-
client = Letta(token=os.environ["LETTA_API_KEY"])
65
+
client = get_letta_client()
56
66
results = []
57
67
58
68
# Get current blocks using the API
···
117
127
Returns:
118
128
String with detachment results for each handle
119
129
"""
120
-
import os
121
-
import logging
122
-
from letta_client import Letta
123
-
124
130
logger = logging.getLogger(__name__)
125
131
126
132
try:
127
-
client = Letta(token=os.environ["LETTA_API_KEY"])
133
+
client = get_letta_client()
128
134
results = []
129
135
130
136
# Build mapping of block labels to IDs using the API
···
174
180
Returns:
175
181
String confirming the note was appended
176
182
"""
177
-
import os
178
-
import logging
179
-
from letta_client import Letta
180
-
181
183
logger = logging.getLogger(__name__)
182
184
183
185
try:
184
-
client = Letta(token=os.environ["LETTA_API_KEY"])
186
+
client = get_letta_client()
185
187
186
188
# Sanitize handle for block label
187
189
clean_handle = handle.lstrip('@').replace('.', '_').replace('-', '_').replace(' ', '_')
···
247
249
Returns:
248
250
String confirming the text was replaced
249
251
"""
250
-
import os
251
-
import logging
252
-
from letta_client import Letta
253
-
254
252
logger = logging.getLogger(__name__)
255
253
256
254
try:
257
-
client = Letta(token=os.environ["LETTA_API_KEY"])
255
+
client = get_letta_client()
258
256
259
257
# Sanitize handle for block label
260
258
clean_handle = handle.lstrip('@').replace('.', '_').replace('-', '_').replace(' ', '_')
···
301
299
Returns:
302
300
String confirming the content was set
303
301
"""
304
-
import os
305
-
import logging
306
-
from letta_client import Letta
307
-
308
302
logger = logging.getLogger(__name__)
309
303
310
304
try:
311
-
client = Letta(token=os.environ["LETTA_API_KEY"])
305
+
client = get_letta_client()
312
306
313
307
# Sanitize handle for block label
314
308
clean_handle = handle.lstrip('@').replace('.', '_').replace('-', '_').replace(' ', '_')
···
367
361
Returns:
368
362
String containing the user's memory block content
369
363
"""
370
-
import os
371
-
import logging
372
-
from letta_client import Letta
373
-
374
364
logger = logging.getLogger(__name__)
375
365
376
366
try:
377
-
client = Letta(token=os.environ["LETTA_API_KEY"])
367
+
client = get_letta_client()
378
368
379
369
# Sanitize handle for block label
380
370
clean_handle = handle.lstrip('@').replace('.', '_').replace('-', '_').replace(' ', '_')
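The new `get_letta_client()` helper follows a config-first, environment-fallback pattern: try the parsed config, and fall back to the `LETTA_API_KEY` environment variable if the config module or key is missing. A minimal standalone sketch of that pattern (the `load_api_key` helper is hypothetical, not part of this PR):

```python
import os
from typing import Optional


def load_api_key(config: Optional[dict]) -> str:
    """Return the API key from config if present, else fall back to the
    LETTA_API_KEY environment variable (mirrors get_letta_client's fallback)."""
    try:
        # Preferred source: the parsed config.yaml contents
        return config["letta"]["api_key"]
    except (TypeError, KeyError):
        # Fallback: environment variable, as before the refactor
        return os.environ["LETTA_API_KEY"]


# Config present -> config wins
assert load_api_key({"letta": {"api_key": "from-config"}}) == "from-config"

# Config missing or incomplete -> env var is used
os.environ["LETTA_API_KEY"] = "from-env"
assert load_api_key(None) == "from-env"
assert load_api_key({}) == "from-env"
```

Catching `ImportError` alongside `FileNotFoundError`/`KeyError`, as the diff does, also keeps the tools usable in environments where `config_loader` isn't on the path at all.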
register_tools.py (+16 −8)

```diff
--- a/register_tools.py
+++ b/register_tools.py
@@ -4,10 +4,10 @@
 import sys
 import logging
 from typing import List
-from dotenv import load_dotenv
 from letta_client import Letta
 from rich.console import Console
 from rich.table import Table
+from config_loader import get_config, get_letta_config, get_agent_config
 
 # Import standalone functions and their schemas
 from tools.search import search_bluesky_posts, SearchArgs
@@ -18,7 +18,9 @@
 from tools.thread import add_post_to_bluesky_reply_thread, ReplyThreadPostArgs
 from tools.ignore import ignore_notification, IgnoreNotificationArgs
 
-load_dotenv()
+config = get_config()
+letta_config = get_letta_config()
+agent_config = get_agent_config()
 logging.basicConfig(level=logging.INFO)
 logger = logging.getLogger(__name__)
 console = Console()
@@ -101,16 +103,20 @@
 ]
 
 
-def register_tools(agent_name: str = "void", tools: List[str] = None):
+def register_tools(agent_name: str = None, tools: List[str] = None):
     """Register tools with a Letta agent.
 
     Args:
-        agent_name: Name of the agent to attach tools to
+        agent_name: Name of the agent to attach tools to. If None, uses config default.
         tools: List of tool names to register. If None, registers all tools.
     """
+    # Use agent name from config if not provided
+    if agent_name is None:
+        agent_name = agent_config['name']
+
     try:
-        # Initialize Letta client with API key
-        client = Letta(token=os.environ["LETTA_API_KEY"])
+        # Initialize Letta client with API key from config
+        client = Letta(token=letta_config['api_key'])
 
         # Find the agent
         agents = client.agents.list()
@@ -201,6 +207,6 @@
     import argparse
 
     parser = argparse.ArgumentParser(description="Register Void tools with a Letta agent")
-    parser.add_argument("agent", nargs="?", default="void", help="Agent name (default: void)")
+    parser.add_argument("agent", nargs="?", default=None, help=f"Agent name (default: {agent_config['name']})")
     parser.add_argument("--tools", nargs="+", help="Specific tools to register (default: all)")
     parser.add_argument("--list", action="store_true", help="List available tools")
@@ -210,5 +216,7 @@
     if args.list:
         list_available_tools()
     else:
-        console.print(f"\n[bold]Registering tools for agent: {args.agent}[/bold]\n")
+        # Use config default if no agent specified
+        agent_name = args.agent if args.agent is not None else agent_config['name']
+        console.print(f"\n[bold]Registering tools for agent: {agent_name}[/bold]\n")
         register_tools(args.agent, args.tools)
```
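The CLI change makes the positional `agent` argument optional and defers its default to the config file instead of hard-coding `"void"`. The argparse behavior can be sketched standalone (here `CONFIG_DEFAULT = "void"` stands in for `agent_config['name']`; in this repo the configured name happens to be `void` as well):

```python
import argparse

# nargs="?" makes the positional optional; default=None signals "use config"
parser = argparse.ArgumentParser(description="demo of the optional agent argument")
parser.add_argument("agent", nargs="?", default=None)

CONFIG_DEFAULT = "void"  # stands in for agent_config['name']

# No agent given on the command line -> fall back to the config default
args = parser.parse_args([])
agent_name = args.agent if args.agent is not None else CONFIG_DEFAULT
assert agent_name == "void"

# An explicitly supplied agent wins over the config default
args = parser.parse_args(["my-agent"])
agent_name = args.agent if args.agent is not None else CONFIG_DEFAULT
assert agent_name == "my-agent"
```

Since `register_tools()` itself resolves `None` to the config default, passing `args.agent` straight through (as the diff does) yields the same agent as the one printed.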
requirements.txt (+23)

```diff
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,23 @@
+# Core dependencies for Void Bot
+
+# Configuration and utilities
+PyYAML>=6.0.2
+rich>=14.0.0
+python-dotenv>=1.0.0
+
+# Letta API client
+letta-client>=0.1.198
+
+# AT Protocol (Bluesky) client
+atproto>=0.0.54
+
+# HTTP client for API calls
+httpx>=0.28.1
+httpx-sse>=0.4.0
+requests>=2.31.0
+
+# Data validation
+pydantic>=2.11.7
+
+# Async support
+anyio>=4.9.0
```
History (2 rounds, 6 comments)

- #1: submitted by bunware.org, 1 commit ("fix: url in README"), 1 comment. Closed without merging.
- #0: submitted by bunware.org, 5 comments.
- "I'll merge for now and see how well it works, thank you! This was a huge PR, though; can we make them a little smaller next time?"
- "Ah shit, there's a ton of merge conflicts."
- "Yeah, sure, but it was a whole refactor of the configuration system, so that kind of had to be done in one PR. I guess I could have done the README.md update in a separate PR, though."
- "This seems like a big merge issue; I've changed a lot. But I do like this quite a lot."
- "Wait, I thought it was merged 🤔"
- "This seems like it should be mergeable; I'm not sure what the README error is."