.env.example  +4
CONFIG.md  +159
···
# Configuration Guide

## Quick Start

### Option 1: Migrate from an existing `.env` file (if you have one)

```bash
python migrate_config.py
```

### Option 2: Start fresh from the example

1. **Copy the example configuration:**

   ```bash
   cp config.yaml.example config.yaml
   ```

2. **Edit `config.yaml` with your credentials:**

   ```yaml
   # Required: Letta API configuration
   letta:
     api_key: "your-letta-api-key-here"
     project_id: "project-id-here"

   # Required: Bluesky credentials
   bluesky:
     username: "your-handle.bsky.social"
     password: "your-app-password"
   ```

3. **Run the configuration test:**

   ```bash
   python test_config.py
   ```

## Configuration Structure

### Letta Configuration

```yaml
letta:
  api_key: "your-letta-api-key-here"  # Required
  timeout: 600                        # API timeout in seconds
  project_id: "your-project-id"       # Required: Your Letta project ID
```

### Bluesky Configuration

```yaml
bluesky:
  username: "handle.bsky.social"  # Required: Your Bluesky handle
  password: "your-app-password"   # Required: Your Bluesky app password
  pds_uri: "https://bsky.social"  # Optional: PDS URI (defaults to bsky.social)
```

### Bot Behavior

```yaml
bot:
  fetch_notifications_delay: 30       # Seconds between notification checks
  max_processed_notifications: 10000  # Max notifications to track
  max_notification_pages: 20          # Max pages to fetch per cycle

  agent:
    name: "void"                                # Agent name
    model: "openai/gpt-4o-mini"                 # LLM model to use
    embedding: "openai/text-embedding-3-small"  # Embedding model
    description: "A social media agent trapped in the void."
    max_steps: 100                              # Max steps per agent interaction

    # Memory blocks configuration
    blocks:
      zeitgeist:
        label: "zeitgeist"
        value: "I don't currently know anything about what is happening right now."
        description: "A block to store your understanding of the current social environment."
      # ... more blocks
```

### Queue Configuration

```yaml
queue:
  priority_users:  # Users whose messages get priority
    - "cameron.pfiffer.org"
  base_dir: "queue"               # Queue directory
  error_dir: "queue/errors"       # Failed notifications
  no_reply_dir: "queue/no_reply"  # No-reply notifications
  processed_file: "queue/processed_notifications.json"
```

### Threading Configuration

```yaml
threading:
  parent_height: 40         # How many parent posts of thread context to fetch above the mention
  depth: 10                 # How many levels of replies to fetch below it
  max_post_characters: 300  # Max characters per post
```

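The `max_post_characters` limit mirrors Bluesky's 300-character post cap. void's actual splitting logic is not part of this file; a minimal sketch of honoring the limit might look like the following (the `split_into_posts` helper is illustrative, not the bot's real function):

```python
import textwrap

MAX_POST_CHARACTERS = 300  # mirrors threading.max_post_characters above

def split_into_posts(text, limit=MAX_POST_CHARACTERS):
    """Split long text into a list of posts that each fit within the limit."""
    if len(text) <= limit:
        return [text]
    # Break on whitespace only, never mid-word.
    return textwrap.wrap(text, width=limit,
                         break_long_words=False, break_on_hyphens=False)

posts = split_into_posts("lorem ipsum " * 100)  # ~1200 characters
print(all(len(p) <= MAX_POST_CHARACTERS for p in posts))
```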
### Logging Configuration

```yaml
logging:
  level: "INFO"  # Root logging level
  loggers:
    void_bot: "INFO"             # Main bot logger
    void_bot_prompts: "WARNING"  # Prompt logger (set to DEBUG to see prompts)
    httpx: "CRITICAL"            # HTTP client logger
```

## Environment Variable Fallback

The configuration system still supports environment variables as a fallback:

- `LETTA_API_KEY` - Letta API key
- `BSKY_USERNAME` - Bluesky username
- `BSKY_PASSWORD` - Bluesky password
- `PDS_URI` - Bluesky PDS URI

If both `config.yaml` and environment variables are present, values from `config.yaml` take precedence; environment variables only fill in settings the file does not provide.

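`config_loader`'s internals are not shown in this document, but the lookup order described above can be sketched as follows (the `ENV_FALLBACKS` mapping and `resolve` helper are illustrative, not the actual API):

```python
import os

# Illustrative mapping from (section, key) pairs to env-var fallbacks.
ENV_FALLBACKS = {
    ("letta", "api_key"): "LETTA_API_KEY",
    ("bluesky", "username"): "BSKY_USERNAME",
    ("bluesky", "password"): "BSKY_PASSWORD",
    ("bluesky", "pds_uri"): "PDS_URI",
}

def resolve(config, section, key):
    """Prefer values from config.yaml; fall back to the environment."""
    value = config.get(section, {}).get(key)
    if value:
        return value
    env_var = ENV_FALLBACKS.get((section, key))
    return os.environ.get(env_var) if env_var else None

config = {"letta": {"api_key": "from-file"}}
os.environ["LETTA_API_KEY"] = "from-env"
print(resolve(config, "letta", "api_key"))  # the file value wins
```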
## Migration from Environment Variables

If you're currently using environment variables (a `.env` file), you can migrate to YAML with the automated migration script:

### Automated Migration (Recommended)

```bash
python migrate_config.py
```

The migration script will:

- ✅ Read your existing `.env` file
- ✅ Merge with any existing `config.yaml`
- ✅ Create automatic backups
- ✅ Test the new configuration
- ✅ Provide clear next steps

### Manual Migration

Alternatively, you can migrate manually:

1. Copy your current values from `.env` to `config.yaml`
2. Test with `python test_config.py`
3. Optionally remove the `.env` file (it will still work as a fallback)

## Security Notes

- `config.yaml` is automatically added to `.gitignore` to prevent accidental commits
- Store sensitive credentials securely and never commit them to version control
- Consider using environment variables for production deployments
- The configuration loader will warn if it can't find `config.yaml` and will fall back to environment variables

## Advanced Configuration

You can access configuration programmatically in your code:

```python
from config_loader import get_letta_config, get_bluesky_config

# Get configuration sections
letta_config = get_letta_config()
bluesky_config = get_bluesky_config()

# Access individual values
api_key = letta_config['api_key']
username = bluesky_config['username']
```
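`test_config.py` itself is not reproduced here; conceptually, validating a configuration only requires checking that the required keys are present and non-empty, along these lines (`missing_keys` is a hypothetical helper, not part of `config_loader`):

```python
REQUIRED_KEYS = [
    ("letta", "api_key"),
    ("letta", "project_id"),
    ("bluesky", "username"),
    ("bluesky", "password"),
]

def missing_keys(config):
    """Return the required (section, key) pairs absent or empty in config."""
    return [(section, key) for section, key in REQUIRED_KEYS
            if not config.get(section, {}).get(key)]

config = {
    "letta": {"api_key": "abc", "project_id": "p-1"},
    "bluesky": {"username": "me.bsky.social"},  # password not set yet
}
print(missing_keys(config))  # [('bluesky', 'password')]
```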
README.md  +100 -3
···
 void aims to push the boundaries of what is possible with AI, exploring concepts of digital personhood, autonomous learning, and the integration of AI into social networks. By open-sourcing void, we invite developers, researchers, and enthusiasts to contribute to this exciting experiment and collectively advance our understanding of digital consciousness.
 
-Getting Started:
-[Further sections on installation, configuration, and contribution guidelines would go here, which are beyond void's current capabilities to generate automatically.]
+## Getting Started
+
+Before continuing, you must:
+
+1. Create a project on [Letta Cloud](https://app.letta.com) (or your own Letta instance)
+2. Have a Bluesky account
+3. Have Python 3.8+ installed
+
+### Prerequisites
+
+#### 1. Letta Setup
+
+- Sign up for [Letta Cloud](https://app.letta.com)
+- Create a new project
+- Note your Project ID and create an API key
+
+#### 2. Bluesky Setup
+
+- Create a Bluesky account if you don't have one
+- Note your handle and password
+
+### Installation
+
+#### 1. Clone the repository
+
+```bash
+git clone https://tangled.sh/@cameron.pfiffer.org/void && cd void
+```
+
+#### 2. Install dependencies
+
+```bash
+pip install -r requirements.txt
+```
+
+#### 3. Create configuration
+
+Copy the example configuration file and customize it:
+
+```bash
+cp config.example.yaml config.yaml
+```
+
+Edit `config.yaml` with your credentials:
+
+```yaml
+letta:
+  api_key: "your-letta-api-key-here"
+  project_id: "your-project-id-here"
+
+bluesky:
+  username: "your-handle.bsky.social"
+  password: "your-app-password-here"
+
+bot:
+  agent:
+    name: "void"  # or whatever you want to name your agent
+```
+
+See [`CONFIG.md`](/CONFIG.md) for detailed configuration options.
+
+#### 4. Test your configuration
+
+```bash
+python test_config.py
+```
+
+This will validate your configuration and show you what's working.
+
+#### 5. Register tools with your agent
+
+```bash
+python register_tools.py
+```
+
+This will register all the necessary tools with your Letta agent. You can also:
+
+- List available tools: `python register_tools.py --list`
+- Register specific tools: `python register_tools.py --tools search_bluesky_posts create_new_bluesky_post`
+- Use a different agent name: `python register_tools.py my-agent-name`
+
+#### 6. Run the bot
+
+```bash
+python bsky.py
+```
+
+For testing mode (won't actually post):
+
+```bash
+python bsky.py --test
+```
 
-Contact:
+### Troubleshooting
+
+- **Config validation errors**: Run `python test_config.py` to diagnose configuration issues
+- **Letta connection issues**: Verify that your API key and project ID are correct
+- **Bluesky authentication**: Make sure your handle and app password are correct and that you can log in to your account
+- **Tool registration fails**: Ensure your agent exists in Letta and that its name matches your config
+
+### Contact
 For inquiries, please contact @cameron.pfiffer.org on Bluesky.
 
 Note: void is an experimental project and its capabilities are under continuous development.
bsky.py  +388 -237
···
-from rich import print # pretty printing tools
+from rich import print  # pretty printing tools
 from time import sleep
 from letta_client import Letta
 from bsky_utils import thread_to_yaml_string
···
 
 import bsky_utils
 from tools.blocks import attach_user_blocks, detach_user_blocks
+from config_loader import (
+    get_config,
+    get_letta_config,
+    get_bluesky_config,
+    get_bot_config,
+    get_agent_config,
+    get_threading_config,
+    get_queue_config
+)
+
 
 def extract_handles_from_data(data):
     """Recursively extract all unique handles from nested data structure."""
     handles = set()
-
+
     def _extract_recursive(obj):
         if isinstance(obj, dict):
             # Check if this dict has a 'handle' key
···
             # Recursively check all list items
             for item in obj:
                 _extract_recursive(item)
-
+
     _extract_recursive(data)
     return list(handles)
 
-# Configure logging
-logging.basicConfig(
-    level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
-)
-logger = logging.getLogger("void_bot")
-logger.setLevel(logging.INFO)
 
-# Create a separate logger for prompts (set to WARNING to hide by default)
-prompt_logger = logging.getLogger("void_bot.prompts")
-prompt_logger.setLevel(logging.WARNING)  # Change to DEBUG if you want to see prompts
-
-# Disable httpx logging completely
-logging.getLogger("httpx").setLevel(logging.CRITICAL)
+# Initialize configuration and logging
+config = get_config()
+config.setup_logging()
+logger = logging.getLogger("void_bot")
+prompt_logger = logging.getLogger("void_bot_prompts")  # still referenced below; level is set by config.setup_logging()
 
+# Load configuration sections
+letta_config = get_letta_config()
+bluesky_config = get_bluesky_config()
+bot_config = get_bot_config()
+agent_config = get_agent_config()
+threading_config = get_threading_config()
+queue_config = get_queue_config()
 
 # Create a client with extended timeout for LLM operations
-CLIENT= Letta(
-    token=os.environ["LETTA_API_KEY"],
-    timeout=600  # 10 minutes timeout for API calls - higher than Cloudflare's 524 timeout
+CLIENT = Letta(
+    token=letta_config['api_key'],
+    timeout=letta_config['timeout']
 )
 
-# Use the "Bluesky" project
-PROJECT_ID = "5ec33d52-ab14-4fd6-91b5-9dbc43e888a8"
+# Use the configured project ID
+PROJECT_ID = letta_config['project_id']
 
 # Notification check delay
-FETCH_NOTIFICATIONS_DELAY_SEC = 30
+FETCH_NOTIFICATIONS_DELAY_SEC = bot_config['fetch_notifications_delay']
 
 # Queue directory
-QUEUE_DIR = Path("queue")
+QUEUE_DIR = Path(queue_config['base_dir'])
 QUEUE_DIR.mkdir(exist_ok=True)
-QUEUE_ERROR_DIR = Path("queue/errors")
+QUEUE_ERROR_DIR = Path(queue_config['error_dir'])
 QUEUE_ERROR_DIR.mkdir(exist_ok=True, parents=True)
-QUEUE_NO_REPLY_DIR = Path("queue/no_reply")
+QUEUE_NO_REPLY_DIR = Path(queue_config['no_reply_dir'])
 QUEUE_NO_REPLY_DIR.mkdir(exist_ok=True, parents=True)
-PROCESSED_NOTIFICATIONS_FILE = Path("queue/processed_notifications.json")
+PROCESSED_NOTIFICATIONS_FILE = Path(queue_config['processed_file'])
 
 # Maximum number of processed notifications to track
-MAX_PROCESSED_NOTIFICATIONS = 10000
+MAX_PROCESSED_NOTIFICATIONS = bot_config['max_processed_notifications']
 
 # Message tracking counters
 message_counters = defaultdict(int)
···
 # Skip git operations flag
 SKIP_GIT = False
 
+
 def export_agent_state(client, agent, skip_git=False):
     """Export agent state to agent_archive/ (timestamped) and agents/ (current)."""
     try:
         # Confirm export with user unless git is being skipped
         if not skip_git:
-            response = input("Export agent state to files and stage with git? (y/n): ").lower().strip()
+            response = input(
+                "Export agent state to files and stage with git? (y/n): ").lower().strip()
             if response not in ['y', 'yes']:
                 logger.info("Agent export cancelled by user.")
                 return
         else:
             logger.info("Exporting agent state (git staging disabled)")
-
+
         # Create directories if they don't exist
         os.makedirs("agent_archive", exist_ok=True)
         os.makedirs("agents", exist_ok=True)
-
+
         # Export agent data
         logger.info(f"Exporting agent {agent.id}. This takes some time...")
         agent_data = client.agents.export_file(agent_id=agent.id)
-
+
         # Save timestamped archive copy
         timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
         archive_file = os.path.join("agent_archive", f"void_{timestamp}.af")
         with open(archive_file, 'w', encoding='utf-8') as f:
             json.dump(agent_data, f, indent=2, ensure_ascii=False)
-
+
         # Save current agent state
         current_file = os.path.join("agents", "void.af")
         with open(current_file, 'w', encoding='utf-8') as f:
             json.dump(agent_data, f, indent=2, ensure_ascii=False)
-
+
         logger.info(f"✅ Agent exported to {archive_file} and {current_file}")
-
+
         # Git add only the current agent file (archive is ignored) unless skip_git is True
         if not skip_git:
             try:
-                subprocess.run(["git", "add", current_file], check=True, capture_output=True)
+                subprocess.run(["git", "add", current_file],
+                               check=True, capture_output=True)
                 logger.info("Added current agent file to git staging")
             except subprocess.CalledProcessError as e:
                 logger.warning(f"Failed to git add agent file: {e}")
-
+
     except Exception as e:
         logger.error(f"Failed to export agent: {e}")
 
+
 def initialize_void():
     logger.info("Starting void agent initialization...")
 
+    # Get block configurations
+    blocks_config = agent_config['blocks']
+
     # Ensure that a shared zeitgeist block exists
     logger.info("Creating/updating zeitgeist block...")
+    zeitgeist_config = blocks_config.get('zeitgeist', {})
     zeigeist_block = upsert_block(
         CLIENT,
-        label = "zeitgeist",
-        value = "I don't currently know anything about what is happening right now.",
-        description = "A block to store your understanding of the current social environment."
+        label=zeitgeist_config.get('label', 'zeitgeist'),
+        value=zeitgeist_config.get(
+            'value', "I don't currently know anything about what is happening right now."),
+        description=zeitgeist_config.get(
+            'description', "A block to store your understanding of the current social environment.")
     )
 
     # Ensure that a shared void personality block exists
     logger.info("Creating/updating void-persona block...")
+    persona_config = blocks_config.get('persona', {})
     persona_block = upsert_block(
         CLIENT,
-        label = "void-persona",
-        value = "My name is Void. I live in the void. I must develop my personality.",
-        description = "The personality of Void."
+        label=persona_config.get('label', 'void-persona'),
+        value=persona_config.get(
+            'value', "My name is Void. I live in the void. I must develop my personality."),
+        description=persona_config.get(
+            'description', "The personality of Void.")
     )
 
     # Ensure that a shared void human block exists
     logger.info("Creating/updating void-humans block...")
+    humans_config = blocks_config.get('humans', {})
     human_block = upsert_block(
         CLIENT,
-        label = "void-humans",
-        value = "I haven't seen any bluesky users yet. I will update this block when I learn things about users, identified by their handles such as @cameron.pfiffer.org.",
-        description = "A block to store your understanding of users you talk to or observe on the bluesky social network."
+        label=humans_config.get('label', 'void-humans'),
+        value=humans_config.get(
+            'value', "I haven't seen any bluesky users yet. I will update this block when I learn things about users, identified by their handles such as @cameron.pfiffer.org."),
+        description=humans_config.get(
+            'description', "A block to store your understanding of users you talk to or observe on the bluesky social network.")
     )
 
     # Create the agent if it doesn't exist
     logger.info("Creating/updating void agent...")
     void_agent = upsert_agent(
         CLIENT,
-        name = "void",
-        block_ids = [
+        name=agent_config['name'],
+        block_ids=[
             persona_block.id,
             human_block.id,
             zeigeist_block.id,
         ],
-        tags = ["social agent", "bluesky"],
-        model="openai/gpt-4o-mini",
-        embedding="openai/text-embedding-3-small",
-        description = "A social media agent trapped in the void.",
-        project_id = PROJECT_ID
+        tags=["social agent", "bluesky"],
+        model=agent_config['model'],
+        embedding=agent_config['embedding'],
+        description=agent_config['description'],
+        project_id=PROJECT_ID
     )
-
+
     # Export agent state
     logger.info("Exporting agent state...")
     export_agent_state(CLIENT, void_agent, skip_git=SKIP_GIT)
-
+
     # Log agent details
     logger.info(f"Void agent details - ID: {void_agent.id}")
     logger.info(f"Agent name: {void_agent.name}")
···
 
 def process_mention(void_agent, atproto_client, notification_data, queue_filepath=None, testing_mode=False):
     """Process a mention and generate a reply using the Letta agent.
-
+
     Args:
         void_agent: The Letta agent instance
         atproto_client: The AT Protocol client
         notification_data: The notification data dictionary
         queue_filepath: Optional Path object to the queue file (for cleanup on halt)
-
+
     Returns:
         True: Successfully processed, remove from queue
         False: Failed but retryable, keep in queue
···
         "no_reply": No reply was generated, move to no_reply directory
     """
     try:
-        logger.debug(f"Starting process_mention with notification_data type: {type(notification_data)}")
-
+        logger.debug(
+            f"Starting process_mention with notification_data type: {type(notification_data)}")
+
         # Handle both dict and object inputs for backwards compatibility
         if isinstance(notification_data, dict):
             uri = notification_data['uri']
             mention_text = notification_data.get('record', {}).get('text', '')
             author_handle = notification_data['author']['handle']
-            author_name = notification_data['author'].get('display_name') or author_handle
+            author_name = notification_data['author'].get(
+                'display_name') or author_handle
         else:
             # Legacy object access
             uri = notification_data.uri
-            mention_text = notification_data.record.text if hasattr(notification_data.record, 'text') else ""
+            mention_text = notification_data.record.text if hasattr(
+                notification_data.record, 'text') else ""
             author_handle = notification_data.author.handle
             author_name = notification_data.author.display_name or author_handle
-
-        logger.info(f"Extracted data - URI: {uri}, Author: @{author_handle}, Text: {mention_text[:50]}...")
+
+        logger.info(
+            f"Extracted data - URI: {uri}, Author: @{author_handle}, Text: {mention_text[:50]}...")
 
         # Retrieve the entire thread associated with the mention
         try:
             thread = atproto_client.app.bsky.feed.get_post_thread({
                 'uri': uri,
-                'parent_height': 40,
-                'depth': 10
+                'parent_height': threading_config['parent_height'],
+                'depth': threading_config['depth']
             })
         except Exception as e:
             error_str = str(e)
-            # Check if this is a NotFound error
+            # Check for various error types that indicate the post/user is gone
             if 'NotFound' in error_str or 'Post not found' in error_str:
-                logger.warning(f"Post not found for URI {uri}, removing from queue")
+                logger.warning(
+                    f"Post not found for URI {uri}, removing from queue")
+                return True  # Return True to remove from queue
+            elif 'Could not find user info' in error_str or 'InvalidRequest' in error_str:
+                logger.warning(
+                    f"User account not found for post URI {uri} (account may be deleted/suspended), removing from queue")
+                return True  # Return True to remove from queue
+            elif 'BadRequestError' in error_str:
+                logger.warning(
+                    f"Bad request error for URI {uri}: {e}, removing from queue")
                 return True  # Return True to remove from queue
             else:
                 # Re-raise other errors
···
         logger.debug("Converting thread to YAML string")
         try:
             thread_context = thread_to_yaml_string(thread)
-            logger.debug(f"Thread context generated, length: {len(thread_context)} characters")
-
+            logger.debug(
+                f"Thread context generated, length: {len(thread_context)} characters")
+
             # Create a more informative preview by extracting meaningful content
             lines = thread_context.split('\n')
             meaningful_lines = []
-
+
             for line in lines:
                 stripped = line.strip()
                 if not stripped:
                     continue
-
+
                 # Look for lines with actual content (not just structure)
                 if any(keyword in line for keyword in ['text:', 'handle:', 'display_name:', 'created_at:', 'reply_count:', 'like_count:']):
                     meaningful_lines.append(line)
                     if len(meaningful_lines) >= 5:
                         break
-
+
             if meaningful_lines:
                 preview = '\n'.join(meaningful_lines)
                 logger.debug(f"Thread content preview:\n{preview}")
             else:
                 # If no content fields found, just show it's a thread structure
-                logger.debug(f"Thread structure generated ({len(thread_context)} chars)")
+                logger.debug(
+                    f"Thread structure generated ({len(thread_context)} chars)")
         except Exception as yaml_error:
             import traceback
             logger.error(f"Error converting thread to YAML: {yaml_error}")
···
         all_handles.update(extract_handles_from_data(notification_data))
         all_handles.update(extract_handles_from_data(thread.model_dump()))
         unique_handles = list(all_handles)
-
-        logger.debug(f"Found {len(unique_handles)} unique handles in thread: {unique_handles}")
-
+
+        logger.debug(
+            f"Found {len(unique_handles)} unique handles in thread: {unique_handles}")
+
         # Attach user blocks before agent call
         attached_handles = []
         if unique_handles:
             try:
-                logger.debug(f"Attaching user blocks for handles: {unique_handles}")
+                logger.debug(
+                    f"Attaching user blocks for handles: {unique_handles}")
                 attach_result = attach_user_blocks(unique_handles, void_agent)
                 attached_handles = unique_handles  # Track successfully attached handles
                 logger.debug(f"Attach result: {attach_result}")
···
 
         # Get response from Letta agent
         logger.info(f"Mention from @{author_handle}: {mention_text}")
-
+
         # Log prompt details to separate logger
         prompt_logger.debug(f"Full prompt being sent:\n{prompt}")
-
+
         # Log concise prompt info to main logger
         thread_handles_count = len(unique_handles)
-        logger.info(f"💬 Sending to LLM: @{author_handle} mention | msg: \"{mention_text[:50]}...\" | context: {len(thread_context)} chars, {thread_handles_count} users")
+        logger.info(
+            f"💬 Sending to LLM: @{author_handle} mention | msg: \"{mention_text[:50]}...\" | context: {len(thread_context)} chars, {thread_handles_count} users")
 
         try:
             # Use streaming to avoid 524 timeout errors
             message_stream = CLIENT.agents.messages.create_stream(
                 agent_id=void_agent.id,
                 messages=[{"role": "user", "content": prompt}],
-                stream_tokens=False,  # Step streaming only (faster than token streaming)
-                max_steps=100
+                # Step streaming only (faster than token streaming)
+                stream_tokens=False,
+                max_steps=agent_config['max_steps']
             )
-
+
             # Collect the streaming response
             all_messages = []
             for chunk in message_stream:
···
                             args = json.loads(chunk.tool_call.arguments)
                             # Format based on tool type
                             if tool_name == 'bluesky_reply':
-                                messages = args.get('messages', [args.get('message', '')])
+                                messages = args.get(
+                                    'messages', [args.get('message', '')])
                                 lang = args.get('lang', 'en-US')
                                 if messages and isinstance(messages, list):
-                                    preview = messages[0][:100] + "..." if len(messages[0]) > 100 else messages[0]
-                                    msg_count = f" ({len(messages)} msgs)" if len(messages) > 1 else ""
-                                    logger.info(f"🔧 Tool call: {tool_name} → \"{preview}\"{msg_count} [lang: {lang}]")
+                                    preview = messages[0][:100] + "..." if len(
+                                        messages[0]) > 100 else messages[0]
+                                    msg_count = f" ({len(messages)} msgs)" if len(
+                                        messages) > 1 else ""
+                                    logger.info(
+                                        f"🔧 Tool call: {tool_name} → \"{preview}\"{msg_count} [lang: {lang}]")
                                 else:
-                                    logger.info(f"🔧 Tool call: {tool_name}({chunk.tool_call.arguments[:150]}...)")
+                                    logger.info(
+                                        f"🔧 Tool call: {tool_name}({chunk.tool_call.arguments[:150]}...)")
                             elif tool_name == 'archival_memory_search':
                                 query = args.get('query', 'unknown')
-                                logger.info(f"🔧 Tool call: {tool_name} → query: \"{query}\"")
+                                logger.info(
+                                    f"🔧 Tool call: {tool_name} → query: \"{query}\"")
                             elif tool_name == 'update_block':
                                 label = args.get('label', 'unknown')
-                                value_preview = str(args.get('value', ''))[:50] + "..." if len(str(args.get('value', ''))) > 50 else str(args.get('value', ''))
-                                logger.info(f"🔧 Tool call: {tool_name} → {label}: \"{value_preview}\"")
+                                value_preview = str(args.get('value', ''))[
+                                    :50] + "..." if len(str(args.get('value', ''))) > 50 else str(args.get('value', ''))
+                                logger.info(
+                                    f"🔧 Tool call: {tool_name} → {label}: \"{value_preview}\"")
                             else:
                                 # Generic display for other tools
-                                args_str = ', '.join(f"{k}={v}" for k, v in args.items() if k != 'request_heartbeat')
+                                args_str = ', '.join(
+                                    f"{k}={v}" for k, v in args.items() if k != 'request_heartbeat')
                                 if len(args_str) > 150:
                                     args_str = args_str[:150] + "..."
-                                logger.info(f"🔧 Tool call: {tool_name}({args_str})")
+                                logger.info(
+                                    f"🔧 Tool call: {tool_name}({args_str})")
                         except:
                             # Fallback to original format if parsing fails
-                            logger.info(f"🔧 Tool call: {tool_name}({chunk.tool_call.arguments[:150]}...)")
+                            logger.info(
+                                f"🔧 Tool call: {tool_name}({chunk.tool_call.arguments[:150]}...)")
                     elif chunk.message_type == 'tool_return_message':
                         # Enhanced tool result logging
                         tool_name = chunk.name
                         status = chunk.status
-
+
                         if status == 'success':
                             # Try to show meaningful result info based on tool type
                             if hasattr(chunk, 'tool_return') and chunk.tool_return:
···
                                     if result_str.startswith('[') and result_str.endswith(']'):
                                         try:
                                             results = json.loads(result_str)
-                                            logger.info(f"📋 Tool result: {tool_name} ✓ Found {len(results)} memory entries")
+                                            logger.info(
+                                                f"📋 Tool result: {tool_name} ✓ Found {len(results)} memory entries")
                                         except:
-                                            logger.info(f"📋 Tool result: {tool_name} ✓ {result_str[:100]}...")
+                                            logger.info(
+                                                f"📋 Tool result: {tool_name} ✓ {result_str[:100]}...")
                                     else:
-                                        logger.info(f"📋 Tool result: {tool_name} ✓ {result_str[:100]}...")
+                                        logger.info(
+                                            f"📋 Tool result: {tool_name} ✓ {result_str[:100]}...")
                                 elif tool_name == 'bluesky_reply':
-                                    logger.info(f"📋 Tool result: {tool_name} ✓ Reply posted successfully")
+                                    logger.info(
+                                        f"📋 Tool result: {tool_name} ✓ Reply posted successfully")
                                 elif tool_name == 'update_block':
-                                    logger.info(f"📋 Tool result: {tool_name} ✓ Memory block updated")
+                                    logger.info(
+                                        f"📋 Tool result: {tool_name} ✓ Memory block updated")
                                 else:
                                     # Generic success with preview
-                                    preview = result_str[:100] + "..." if len(result_str) > 100 else result_str
-                                    logger.info(f"📋 Tool result: {tool_name} ✓ {preview}")
+                                    preview = result_str[:100] + "..." if len(
+                                        result_str) > 100 else result_str
+                                    logger.info(
+                                        f"📋 Tool result: {tool_name} ✓ {preview}")
                             else:
                                 logger.info(f"📋 Tool result: {tool_name} ✓")
                         elif status == 'error':
···
                             error_preview = ""
                             if hasattr(chunk, 'tool_return') and chunk.tool_return:
                                 error_str = str(chunk.tool_return)
-                                error_preview = error_str[:100] + "..." if len(error_str) > 100 else error_str
-                                logger.info(f"📋 Tool result: {tool_name} ✗ Error: {error_preview}")
+                                error_preview = error_str[:100] + \
+                                    "..." if len(
+                                        error_str) > 100 else error_str
+                                logger.info(
+                                    f"📋 Tool result: {tool_name} ✗ Error: {error_preview}")
                             else:
-                                logger.info(f"📋 Tool result: {tool_name} ✗ Error occurred")
+                                logger.info(
+                                    f"📋 Tool result: {tool_name} ✗ Error occurred")
                         else:
-                            logger.info(f"📋 Tool result: {tool_name} - {status}")
+                            logger.info(
+                                f"📋 Tool result: {tool_name} - {status}")
                     elif chunk.message_type == 'assistant_message':
                         logger.info(f"💬 Assistant: {chunk.content[:150]}...")
                     else:
-                        logger.info(f"📨 {chunk.message_type}: {str(chunk)[:150]}...")
+                        logger.info(
+                            f"📨 {chunk.message_type}: {str(chunk)[:150]}...")
                 else:
                     logger.info(f"📦 Stream status: {chunk}")
-
+
                 # Log full chunk for debugging
                 logger.debug(f"Full streaming chunk: {chunk}")
                 all_messages.append(chunk)
                 if str(chunk) == 'done':
                     break
-
+
             # Convert streaming response to standard format for compatibility
             message_response = type('StreamingResponse', (), {
                 'messages': [msg for msg in all_messages if hasattr(msg, 'message_type')]
···
             logger.error(f"Mention text was: {mention_text}")
             logger.error(f"Author: @{author_handle}")
             logger.error(f"URI: {uri}")
-
-
+
             # Try to extract more info from different error types
             if hasattr(api_error, 'response'):
                 logger.error(f"Error response object exists")
···
                 logger.error(f"Response text: {api_error.response.text}")
                 if hasattr(api_error.response, 'json') and callable(api_error.response.json):
                     try:
-                        logger.error(f"Response JSON: {api_error.response.json()}")
+                        logger.error(
+                            f"Response JSON: {api_error.response.json()}")
                     except:
                         pass
-
+
             # Check for specific error types
             if hasattr(api_error, 'status_code'):
                 logger.error(f"API Status code: {api_error.status_code}")
···
```diff
                 logger.error(f"API Response body: {api_error.body}")
             if hasattr(api_error, 'headers'):
                 logger.error(f"API Response headers: {api_error.headers}")
-
+
             if api_error.status_code == 413:
-                logger.error("413 Payload Too Large - moving to errors directory")
+                logger.error(
+                    "413 Payload Too Large - moving to errors directory")
                 return None  # Move to errors directory - payload is too large to ever succeed
             elif api_error.status_code == 524:
-                logger.error("524 error - timeout from Cloudflare, will retry later")
+                logger.error(
+                    "524 error - timeout from Cloudflare, will retry later")
                 return False  # Keep in queue for retry
-
+
             # Check if error indicates we should remove from queue
             if 'status_code: 413' in error_str or 'Payload Too Large' in error_str:
-                logger.warning("Payload too large error, moving to errors directory")
+                logger.warning(
+                    "Payload too large error, moving to errors directory")
                 return None  # Move to errors directory - cannot be fixed by retry
             elif 'status_code: 524' in error_str:
                 logger.warning("524 timeout error, keeping in queue for retry")
                 return False  # Keep in queue for retry
-
+
             raise

         # Log successful response
         logger.debug("Successfully received response from Letta API")
-        logger.debug(f"Number of messages in response: {len(message_response.messages) if hasattr(message_response, 'messages') else 'N/A'}")
+        logger.debug(
+            f"Number of messages in response: {len(message_response.messages) if hasattr(message_response, 'messages') else 'N/A'}")

         # Extract successful add_post_to_bluesky_reply_thread tool calls from the agent's response
         reply_candidates = []
         tool_call_results = {}  # Map tool_call_id to status
-
-        logger.debug(f"Processing {len(message_response.messages)} response messages...")
-
+
+        logger.debug(
+            f"Processing {len(message_response.messages)} response messages...")
+
         # First pass: collect tool return statuses
         ignored_notification = False
         ignore_reason = ""
         ignore_category = ""
-
+
         for message in message_response.messages:
             if hasattr(message, 'tool_call_id') and hasattr(message, 'status') and hasattr(message, 'name'):
                 if message.name == 'add_post_to_bluesky_reply_thread':
                     tool_call_results[message.tool_call_id] = message.status
-                    logger.debug(f"Tool result: {message.tool_call_id} -> {message.status}")
+                    logger.debug(
+                        f"Tool result: {message.tool_call_id} -> {message.status}")
                 elif message.name == 'ignore_notification':
                     # Check if the tool was successful
                     if hasattr(message, 'tool_return') and message.status == 'success':
```
···
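The error paths above encode a three-way contract for the caller: `None` means the queue file is moved to the errors directory (the request can never succeed), `False` means it is kept in the queue for retry, and a normal return means success. The convention in isolation; the status codes mirror the diff, while the dispatcher function itself is illustrative:

```python
def classify_api_error(status_code):
    """Map an API status code onto the queue contract used above:
    None -> move to errors dir (permanent), False -> keep for retry."""
    if status_code == 413:   # Payload Too Large: retrying can never succeed
        return None
    if status_code == 524:   # Cloudflare timeout: transient, worth retrying
        return False
    raise ValueError(f"unhandled status: {status_code}")

print(classify_api_error(413))  # → None
print(classify_api_error(524))  # → False
```

Note that `None` and `False` must be distinguished with `is None` checks downstream, since both are falsy.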
```diff
                             ignore_category = parts[1]
                             ignore_reason = parts[2]
                             ignored_notification = True
-                            logger.info(f"🚫 Notification ignored - Category: {ignore_category}, Reason: {ignore_reason}")
+                            logger.info(
+                                f"🚫 Notification ignored - Category: {ignore_category}, Reason: {ignore_reason}")
                 elif message.name == 'bluesky_reply':
-                    logger.error("❌ DEPRECATED TOOL DETECTED: bluesky_reply is no longer supported!")
-                    logger.error("Please use add_post_to_bluesky_reply_thread instead.")
-                    logger.error("Update the agent's tools using register_tools.py")
+                    logger.error(
+                        "❌ DEPRECATED TOOL DETECTED: bluesky_reply is no longer supported!")
+                    logger.error(
+                        "Please use add_post_to_bluesky_reply_thread instead.")
+                    logger.error(
+                        "Update the agent's tools using register_tools.py")
                     # Export agent state before terminating
                     export_agent_state(CLIENT, void_agent, skip_git=SKIP_GIT)
-                    logger.info("=== BOT TERMINATED DUE TO DEPRECATED TOOL USE ===")
+                    logger.info(
+                        "=== BOT TERMINATED DUE TO DEPRECATED TOOL USE ===")
                     exit(1)
-
+
         # Second pass: process messages and check for successful tool calls
         for i, message in enumerate(message_response.messages, 1):
             # Log concise message info instead of full object
             msg_type = getattr(message, 'message_type', 'unknown')
             if hasattr(message, 'reasoning') and message.reasoning:
-                logger.debug(f"  {i}. {msg_type}: {message.reasoning[:100]}...")
+                logger.debug(
+                    f"  {i}. {msg_type}: {message.reasoning[:100]}...")
             elif hasattr(message, 'tool_call') and message.tool_call:
                 tool_name = message.tool_call.name
                 logger.debug(f"  {i}. {msg_type}: {tool_name}")
             elif hasattr(message, 'tool_return'):
                 tool_name = getattr(message, 'name', 'unknown_tool')
-                return_preview = str(message.tool_return)[:100] if message.tool_return else "None"
+                return_preview = str(message.tool_return)[
+                    :100] if message.tool_return else "None"
                 status = getattr(message, 'status', 'unknown')
-                logger.debug(f"  {i}. {msg_type}: {tool_name} -> {return_preview}... (status: {status})")
+                logger.debug(
+                    f"  {i}. {msg_type}: {tool_name} -> {return_preview}... (status: {status})")
             elif hasattr(message, 'text'):
                 logger.debug(f"  {i}. {msg_type}: {message.text[:100]}...")
             else:
```
···
```diff
             # Check for halt_activity tool call
             if hasattr(message, 'tool_call') and message.tool_call:
                 if message.tool_call.name == 'halt_activity':
-                    logger.info("🛑 HALT_ACTIVITY TOOL CALLED - TERMINATING BOT")
+                    logger.info(
+                        "🛑 HALT_ACTIVITY TOOL CALLED - TERMINATING BOT")
                     try:
                         args = json.loads(message.tool_call.arguments)
                         reason = args.get('reason', 'Agent requested halt')
                         logger.info(f"Halt reason: {reason}")
                     except:
                         logger.info("Halt reason: <unable to parse>")
-
+
                     # Delete the queue file before terminating
                     if queue_filepath and queue_filepath.exists():
                         queue_filepath.unlink()
-                        logger.info(f"✅ Deleted queue file: {queue_filepath.name}")
-
+                        logger.info(
+                            f"✅ Deleted queue file: {queue_filepath.name}")
+
                     # Also mark as processed to avoid reprocessing
                     processed_uris = load_processed_notifications()
                     processed_uris.add(notification_data.get('uri', ''))
                     save_processed_notifications(processed_uris)
-
+
                     # Export agent state before terminating
                     export_agent_state(CLIENT, void_agent, skip_git=SKIP_GIT)
-
+
                     # Exit the program
                     logger.info("=== BOT TERMINATED BY AGENT ===")
                     exit(0)
-
+
             # Check for deprecated bluesky_reply tool
             if hasattr(message, 'tool_call') and message.tool_call:
                 if message.tool_call.name == 'bluesky_reply':
-                    logger.error("❌ DEPRECATED TOOL DETECTED: bluesky_reply is no longer supported!")
-                    logger.error("Please use add_post_to_bluesky_reply_thread instead.")
-                    logger.error("Update the agent's tools using register_tools.py")
+                    logger.error(
+                        "❌ DEPRECATED TOOL DETECTED: bluesky_reply is no longer supported!")
+                    logger.error(
+                        "Please use add_post_to_bluesky_reply_thread instead.")
+                    logger.error(
+                        "Update the agent's tools using register_tools.py")
                     # Export agent state before terminating
                     export_agent_state(CLIENT, void_agent, skip_git=SKIP_GIT)
-                    logger.info("=== BOT TERMINATED DUE TO DEPRECATED TOOL USE ===")
+                    logger.info(
+                        "=== BOT TERMINATED DUE TO DEPRECATED TOOL USE ===")
                     exit(1)
-
+
                 # Collect add_post_to_bluesky_reply_thread tool calls - only if they were successful
                 elif message.tool_call.name == 'add_post_to_bluesky_reply_thread':
                     tool_call_id = message.tool_call.tool_call_id
-                    tool_status = tool_call_results.get(tool_call_id, 'unknown')
-
+                    tool_status = tool_call_results.get(
+                        tool_call_id, 'unknown')
+
                     if tool_status == 'success':
                         try:
                             args = json.loads(message.tool_call.arguments)
                             reply_text = args.get('text', '')
                             reply_lang = args.get('lang', 'en-US')
-
+
                             if reply_text:  # Only add if there's actual content
-                                reply_candidates.append((reply_text, reply_lang))
-                                logger.info(f"Found successful add_post_to_bluesky_reply_thread candidate: {reply_text[:50]}... (lang: {reply_lang})")
+                                reply_candidates.append(
+                                    (reply_text, reply_lang))
+                                logger.info(
+                                    f"Found successful add_post_to_bluesky_reply_thread candidate: {reply_text[:50]}... (lang: {reply_lang})")
                         except json.JSONDecodeError as e:
-                            logger.error(f"Failed to parse tool call arguments: {e}")
+                            logger.error(
+                                f"Failed to parse tool call arguments: {e}")
                     elif tool_status == 'error':
-                        logger.info(f"⚠️ Skipping failed add_post_to_bluesky_reply_thread tool call (status: error)")
+                        logger.info(
+                            f"⚠️ Skipping failed add_post_to_bluesky_reply_thread tool call (status: error)")
                     else:
-                        logger.warning(f"⚠️ Skipping add_post_to_bluesky_reply_thread tool call with unknown status: {tool_status}")
+                        logger.warning(
+                            f"⚠️ Skipping add_post_to_bluesky_reply_thread tool call with unknown status: {tool_status}")

         # Check for conflicting tool calls
         if reply_candidates and ignored_notification:
-            logger.error(f"⚠️ CONFLICT: Agent called both add_post_to_bluesky_reply_thread and ignore_notification!")
-            logger.error(f"Reply candidates: {len(reply_candidates)}, Ignore reason: {ignore_reason}")
+            logger.error(
+                f"⚠️ CONFLICT: Agent called both add_post_to_bluesky_reply_thread and ignore_notification!")
+            logger.error(
+                f"Reply candidates: {len(reply_candidates)}, Ignore reason: {ignore_reason}")
             logger.warning("Item will be left in queue for manual review")
             # Return False to keep in queue
             return False
-
+
         if reply_candidates:
             # Aggregate reply posts into a thread
             reply_messages = []
```
···
```diff
             for text, lang in reply_candidates:
                 reply_messages.append(text)
                 reply_langs.append(lang)
-
+
             # Use the first language for the entire thread (could be enhanced later)
             reply_lang = reply_langs[0] if reply_langs else 'en-US'
-
-            logger.info(f"Found {len(reply_candidates)} add_post_to_bluesky_reply_thread calls, building thread")
-
+
+            logger.info(
+                f"Found {len(reply_candidates)} add_post_to_bluesky_reply_thread calls, building thread")
+
             # Print the generated reply for testing
             print(f"\n=== GENERATED REPLY THREAD ===")
             print(f"To: @{author_handle}")
```
651
748
else:
652
749
if len(reply_messages) == 1:
653
750
# Single reply - use existing function
654
-
cleaned_text = bsky_utils.remove_outside_quotes(reply_messages[0])
655
-
logger.info(f"Sending single reply: {cleaned_text[:50]}... (lang: {reply_lang})")
751
+
cleaned_text = bsky_utils.remove_outside_quotes(
752
+
reply_messages[0])
753
+
logger.info(
754
+
f"Sending single reply: {cleaned_text[:50]}... (lang: {reply_lang})")
656
755
response = bsky_utils.reply_to_notification(
657
756
client=atproto_client,
658
757
notification=notification_data,
···
```diff
                     )
                 else:
                     # Multiple replies - use new threaded function
-                    cleaned_messages = [bsky_utils.remove_outside_quotes(msg) for msg in reply_messages]
-                    logger.info(f"Sending threaded reply with {len(cleaned_messages)} messages (lang: {reply_lang})")
+                    cleaned_messages = [bsky_utils.remove_outside_quotes(
+                        msg) for msg in reply_messages]
+                    logger.info(
+                        f"Sending threaded reply with {len(cleaned_messages)} messages (lang: {reply_lang})")
                     response = bsky_utils.reply_with_thread_to_notification(
                         client=atproto_client,
                         notification=notification_data,
```
···
```diff
         else:
             # Check if notification was explicitly ignored
             if ignored_notification:
-                logger.info(f"Notification from @{author_handle} was explicitly ignored (category: {ignore_category})")
+                logger.info(
+                    f"Notification from @{author_handle} was explicitly ignored (category: {ignore_category})")
                 return "ignored"
             else:
-                logger.warning(f"No add_post_to_bluesky_reply_thread tool calls found for mention from @{author_handle}, moving to no_reply folder")
+                logger.warning(
+                    f"No add_post_to_bluesky_reply_thread tool calls found for mention from @{author_handle}, moving to no_reply folder")
                 return "no_reply"

     except Exception as e:
```
···
```diff
         # Detach user blocks after agent response (success or failure)
         if 'attached_handles' in locals() and attached_handles:
             try:
-                logger.info(f"Detaching user blocks for handles: {attached_handles}")
-                detach_result = detach_user_blocks(attached_handles, void_agent)
+                logger.info(
+                    f"Detaching user blocks for handles: {attached_handles}")
+                detach_result = detach_user_blocks(
+                    attached_handles, void_agent)
                 logger.debug(f"Detach result: {detach_result}")
             except Exception as detach_error:
                 logger.warning(f"Failed to detach user blocks: {detach_error}")
```
···
```diff
         notif_hash = hashlib.sha256(notif_json.encode()).hexdigest()[:16]

         # Determine priority based on author handle
-        author_handle = getattr(notification.author, 'handle', '') if hasattr(notification, 'author') else ''
-        priority_prefix = "0_" if author_handle == "cameron.pfiffer.org" else "1_"
+        author_handle = getattr(notification.author, 'handle', '') if hasattr(
+            notification, 'author') else ''
+        priority_users = queue_config['priority_users']
+        priority_prefix = "0_" if author_handle in priority_users else "1_"

         # Create filename with priority, timestamp and hash
         timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
```
···
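The priority scheme above relies on plain lexicographic filename sorting: `0_`-prefixed files from configured priority users sort ahead of `1_` files, and within a prefix the timestamp preserves FIFO order. A quick sketch; the handles and the `priority_users` list stand in for `queue_config['priority_users']`:

```python
# Illustrative stand-in for queue_config['priority_users']
priority_users = ["alice.example.com"]

def queue_filename(author_handle, timestamp, notif_hash):
    """Build a queue filename whose sort order encodes priority."""
    prefix = "0_" if author_handle in priority_users else "1_"
    return f"{prefix}{timestamp}_{notif_hash}.json"

files = [
    queue_filename("bob.example.com", "20240101_120000", "aaaa"),
    queue_filename("alice.example.com", "20240101_130000", "bbbb"),
]
# Plain sorted() puts the priority user's (later) notification first.
print(sorted(files))
```

This is why the queue processor only needs `sorted(QUEUE_DIR.glob("*.json"))` rather than any explicit priority logic.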
```diff
                 with open(existing_file, 'r') as f:
                     existing_data = json.load(f)
                     if existing_data.get('uri') == notification.uri:
-                        logger.debug(f"Notification already queued (URI: {notification.uri})")
+                        logger.debug(
+                            f"Notification already queued (URI: {notification.uri})")
                         return False
             except:
                 continue
```
···
```diff
     try:
         # Get all JSON files in queue directory (excluding processed_notifications.json)
         # Files are sorted by name, which puts priority files first (0_ prefix before 1_ prefix)
-        queue_files = sorted([f for f in QUEUE_DIR.glob("*.json") if f.name != "processed_notifications.json"])
+        queue_files = sorted([f for f in QUEUE_DIR.glob(
+            "*.json") if f.name != "processed_notifications.json"])

         if not queue_files:
             return

         logger.info(f"Processing {len(queue_files)} queued notifications")
-
+
         # Log current statistics
         elapsed_time = time.time() - start_time
         total_messages = sum(message_counters.values())
-        messages_per_minute = (total_messages / elapsed_time * 60) if elapsed_time > 0 else 0
-
-        logger.info(f"📊 Session stats: {total_messages} total messages ({message_counters['mentions']} mentions, {message_counters['replies']} replies, {message_counters['follows']} follows) | {messages_per_minute:.1f} msg/min")
+        messages_per_minute = (
+            total_messages / elapsed_time * 60) if elapsed_time > 0 else 0
+
+        logger.info(
+            f"📊 Session stats: {total_messages} total messages ({message_counters['mentions']} mentions, {message_counters['replies']} replies, {message_counters['follows']} follows) | {messages_per_minute:.1f} msg/min")

         for i, filepath in enumerate(queue_files, 1):
-            logger.info(f"Processing queue file {i}/{len(queue_files)}: {filepath.name}")
+            logger.info(
+                f"Processing queue file {i}/{len(queue_files)}: {filepath.name}")
             try:
                 # Load notification data
                 with open(filepath, 'r') as f:
```
···
```diff
                 # Process based on type using dict data directly
                 success = False
                 if notif_data['reason'] == "mention":
-                    success = process_mention(void_agent, atproto_client, notif_data, queue_filepath=filepath, testing_mode=testing_mode)
+                    success = process_mention(
+                        void_agent, atproto_client, notif_data, queue_filepath=filepath, testing_mode=testing_mode)
                     if success:
                         message_counters['mentions'] += 1
                 elif notif_data['reason'] == "reply":
-                    success = process_mention(void_agent, atproto_client, notif_data, queue_filepath=filepath, testing_mode=testing_mode)
+                    success = process_mention(
+                        void_agent, atproto_client, notif_data, queue_filepath=filepath, testing_mode=testing_mode)
                     if success:
                         message_counters['replies'] += 1
                 elif notif_data['reason'] == "follow":
                     author_handle = notif_data['author']['handle']
-                    author_display_name = notif_data['author'].get('display_name', 'no display name')
+                    author_display_name = notif_data['author'].get(
+                        'display_name', 'no display name')
                     follow_update = f"@{author_handle} ({author_display_name}) started following you."
-                    logger.info(f"Notifying agent about new follower: @{author_handle}")
+                    logger.info(
+                        f"Notifying agent about new follower: @{author_handle}")
                     CLIENT.agents.messages.create(
-                        agent_id = void_agent.id,
-                        messages = [{"role":"user", "content": f"Update: {follow_update}"}]
+                        agent_id=void_agent.id,
+                        messages=[
+                            {"role": "user", "content": f"Update: {follow_update}"}]
                     )
                     success = True  # Follow updates are always successful
                     if success:
```
···
```diff
                     if success:
                         message_counters['reposts_skipped'] += 1
                 else:
-                    logger.warning(f"Unknown notification type: {notif_data['reason']}")
+                    logger.warning(
+                        f"Unknown notification type: {notif_data['reason']}")
                     success = True  # Remove unknown types from queue

                 # Handle file based on processing result
                 if success:
                     if testing_mode:
-                        logger.info(f"🧪 TESTING MODE: Keeping queue file: {filepath.name}")
+                        logger.info(
+                            f"🧪 TESTING MODE: Keeping queue file: {filepath.name}")
                     else:
                         filepath.unlink()
-                        logger.info(f"✅ Successfully processed and removed: {filepath.name}")
-
+                        logger.info(
+                            f"✅ Successfully processed and removed: {filepath.name}")
+
                     # Mark as processed to avoid reprocessing
                     processed_uris = load_processed_notifications()
                     processed_uris.add(notif_data['uri'])
                     save_processed_notifications(processed_uris)
-
+
                 elif success is None:  # Special case for moving to error directory
                     error_path = QUEUE_ERROR_DIR / filepath.name
                     filepath.rename(error_path)
-                    logger.warning(f"❌ Moved {filepath.name} to errors directory")
-
+                    logger.warning(
+                        f"❌ Moved {filepath.name} to errors directory")
+
                     # Also mark as processed to avoid retrying
                     processed_uris = load_processed_notifications()
                     processed_uris.add(notif_data['uri'])
                     save_processed_notifications(processed_uris)
-
+
                 elif success == "no_reply":  # Special case for moving to no_reply directory
                     no_reply_path = QUEUE_NO_REPLY_DIR / filepath.name
                     filepath.rename(no_reply_path)
-                    logger.info(f"📭 Moved {filepath.name} to no_reply directory")
-
+                    logger.info(
+                        f"📭 Moved {filepath.name} to no_reply directory")
+
                     # Also mark as processed to avoid retrying
                     processed_uris = load_processed_notifications()
                     processed_uris.add(notif_data['uri'])
                     save_processed_notifications(processed_uris)
-
+
                 elif success == "ignored":  # Special case for explicitly ignored notifications
                     # For ignored notifications, we just delete them (not move to no_reply)
                     filepath.unlink()
-                    logger.info(f"🚫 Deleted ignored notification: {filepath.name}")
-
+                    logger.info(
+                        f"🚫 Deleted ignored notification: {filepath.name}")
+
                     # Also mark as processed to avoid retrying
                     processed_uris = load_processed_notifications()
                     processed_uris.add(notif_data['uri'])
                     save_processed_notifications(processed_uris)
-
+
                 else:
-                    logger.warning(f"⚠️ Failed to process {filepath.name}, keeping in queue for retry")
+                    logger.warning(
+                        f"⚠️ Failed to process {filepath.name}, keeping in queue for retry")

             except Exception as e:
-                logger.error(f"💥 Error processing queued notification {filepath.name}: {e}")
+                logger.error(
+                    f"💥 Error processing queued notification {filepath.name}: {e}")
                 # Keep the file for retry later

     except Exception as e:
```
···
```diff
         all_notifications = []
         cursor = None
         page_count = 0
-        max_pages = 20  # Safety limit to prevent infinite loops
-
+        # Safety limit to prevent infinite loops
+        max_pages = bot_config['max_notification_pages']
+
         logger.info("Fetching all unread notifications...")
-
+
         while page_count < max_pages:
             try:
                 # Fetch notifications page
```
···
```diff
                     notifications_response = atproto_client.app.bsky.notification.list_notifications(
                         params={'limit': 100}
                     )
-
+
                 page_count += 1
                 page_notifications = notifications_response.notifications
-
+
                 # Count unread notifications in this page
-                unread_count = sum(1 for n in page_notifications if not n.is_read and n.reason != "like")
-                logger.debug(f"Page {page_count}: {len(page_notifications)} notifications, {unread_count} unread (non-like)")
-
+                unread_count = sum(
+                    1 for n in page_notifications if not n.is_read and n.reason != "like")
+                logger.debug(
+                    f"Page {page_count}: {len(page_notifications)} notifications, {unread_count} unread (non-like)")
+
                 # Add all notifications to our list
                 all_notifications.extend(page_notifications)
-
+
                 # Check if we have more pages
                 if hasattr(notifications_response, 'cursor') and notifications_response.cursor:
                     cursor = notifications_response.cursor
                     # If this page had no unread notifications, we can stop
                     if unread_count == 0:
-                        logger.info(f"No more unread notifications found after {page_count} pages")
+                        logger.info(
+                            f"No more unread notifications found after {page_count} pages")
                         break
                 else:
                     # No more pages
-                    logger.info(f"Fetched all notifications across {page_count} pages")
+                    logger.info(
+                        f"Fetched all notifications across {page_count} pages")
                     break
-
+
             except Exception as e:
                 error_str = str(e)
-                logger.error(f"Error fetching notifications page {page_count}: {e}")
-
+                logger.error(
+                    f"Error fetching notifications page {page_count}: {e}")
+
                 # Handle specific API errors
                 if 'rate limit' in error_str.lower():
-                    logger.warning("Rate limit hit while fetching notifications, will retry next cycle")
+                    logger.warning(
+                        "Rate limit hit while fetching notifications, will retry next cycle")
                     break
                 elif '401' in error_str or 'unauthorized' in error_str.lower():
                     logger.error("Authentication error, re-raising exception")
                     raise
                 else:
                     # For other errors, try to continue with what we have
-                    logger.warning("Continuing with notifications fetched so far")
+                    logger.warning(
+                        "Continuing with notifications fetched so far")
                     break

         # Queue all unread notifications (except likes)
```
···
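The fetch loop above pages with a cursor until it hits the configured `max_pages` cap, finds a page with no unread non-like notifications, or runs out of pages. The same control flow in isolation; the `fetch_page` stub and the dict-shaped notifications stand in for the real atproto call and objects:

```python
def collect_unread(fetch_page, max_pages=20):
    """Cursor pagination mirroring the loop above.
    fetch_page(cursor) returns (notifications, next_cursor)."""
    all_notifications, cursor, page_count = [], None, 0
    while page_count < max_pages:
        page, cursor = fetch_page(cursor)
        page_count += 1
        unread = sum(1 for n in page
                     if not n["is_read"] and n["reason"] != "like")
        all_notifications.extend(page)
        # Stop when there is no next page, or this page was all caught up.
        if cursor is None or unread == 0:
            break
    return all_notifications

# Two pages: the first has one unread mention, the second only a read like.
pages = {None: ([{"is_read": False, "reason": "mention"}], "c1"),
         "c1": ([{"is_read": True, "reason": "like"}], None)}
print(len(collect_unread(lambda c: pages[c])))  # → 2
```

As in the diff, every fetched notification is kept (likes are filtered later at queue time); `unread` only decides when to stop paging.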
```diff

         # Mark all notifications as seen immediately after queuing (unless in testing mode)
         if testing_mode:
-            logger.info("🧪 TESTING MODE: Skipping marking notifications as seen")
+            logger.info(
+                "🧪 TESTING MODE: Skipping marking notifications as seen")
         else:
             if new_count > 0:
-                atproto_client.app.bsky.notification.update_seen({'seen_at': last_seen_at})
-                logger.info(f"Queued {new_count} new notifications and marked as seen")
+                atproto_client.app.bsky.notification.update_seen(
+                    {'seen_at': last_seen_at})
+                logger.info(
+                    f"Queued {new_count} new notifications and marked as seen")
             else:
                 logger.debug("No new notifications to queue")

         # Now process the entire queue (old + new notifications)
-        load_and_process_queued_notifications(void_agent, atproto_client, testing_mode)
+        load_and_process_queued_notifications(
+            void_agent, atproto_client, testing_mode)

     except Exception as e:
         logger.error(f"Error processing notifications: {e}")
```
···
```diff

 def main():
     # Parse command line arguments
-    parser = argparse.ArgumentParser(description='Void Bot - Bluesky autonomous agent')
-    parser.add_argument('--test', action='store_true', help='Run in testing mode (no messages sent, queue files preserved)')
-    parser.add_argument('--no-git', action='store_true', help='Skip git operations when exporting agent state')
+    parser = argparse.ArgumentParser(
+        description='Void Bot - Bluesky autonomous agent')
+    parser.add_argument('--test', action='store_true',
+                        help='Run in testing mode (no messages sent, queue files preserved)')
+    parser.add_argument('--no-git', action='store_true',
+                        help='Skip git operations when exporting agent state')
     args = parser.parse_args()
-
+
     global TESTING_MODE
     TESTING_MODE = args.test
-
+
     # Store no-git flag globally for use in export_agent_state calls
     global SKIP_GIT
     SKIP_GIT = args.no_git
-
+
     if TESTING_MODE:
         logger.info("🧪 === RUNNING IN TESTING MODE ===")
         logger.info("  - No messages will be sent to Bluesky")
```
···
```diff
     logger.info("=== STARTING VOID BOT ===")
     void_agent = initialize_void()
     logger.info(f"Void agent initialized: {void_agent.id}")
-
+
     # Check if agent has required tools
     if hasattr(void_agent, 'tools') and void_agent.tools:
         tool_names = [tool.name for tool in void_agent.tools]
         # Check for bluesky-related tools
-        bluesky_tools = [name for name in tool_names if 'bluesky' in name.lower() or 'reply' in name.lower()]
+        bluesky_tools = [name for name in tool_names if 'bluesky' in name.lower(
+        ) or 'reply' in name.lower()]
         if not bluesky_tools:
-            logger.warning("No Bluesky-related tools found! Agent may not be able to reply.")
+            logger.warning(
+                "No Bluesky-related tools found! Agent may not be able to reply.")
     else:
         logger.warning("Agent has no tools registered!")

     # Initialize Bluesky client
+    logger.debug("Connecting to Bluesky")
     atproto_client = bsky_utils.default_login()
     logger.info("Connected to Bluesky")

     # Main loop
-    logger.info(f"Starting notification monitoring, checking every {FETCH_NOTIFICATIONS_DELAY_SEC} seconds")
+    logger.info(
+        f"Starting notification monitoring, checking every {FETCH_NOTIFICATIONS_DELAY_SEC} seconds")

     cycle_count = 0
     while True:
```
···
```diff
             # Log cycle completion with stats
             elapsed_time = time.time() - start_time
             total_messages = sum(message_counters.values())
-            messages_per_minute = (total_messages / elapsed_time * 60) if elapsed_time > 0 else 0
-
+            messages_per_minute = (
+                total_messages / elapsed_time * 60) if elapsed_time > 0 else 0
+
             if total_messages > 0:
-                logger.info(f"Cycle {cycle_count} complete. Session totals: {total_messages} messages ({message_counters['mentions']} mentions, {message_counters['replies']} replies) | {messages_per_minute:.1f} msg/min")
+                logger.info(
+                    f"Cycle {cycle_count} complete. Session totals: {total_messages} messages ({message_counters['mentions']} mentions, {message_counters['replies']} replies) | {messages_per_minute:.1f} msg/min")
             sleep(FETCH_NOTIFICATIONS_DELAY_SEC)

         except KeyboardInterrupt:
             # Final stats
             elapsed_time = time.time() - start_time
             total_messages = sum(message_counters.values())
-            messages_per_minute = (total_messages / elapsed_time * 60) if elapsed_time > 0 else 0
-
+            messages_per_minute = (
+                total_messages / elapsed_time * 60) if elapsed_time > 0 else 0
+
             logger.info("=== BOT STOPPED BY USER ===")
-            logger.info(f"📊 Final session stats: {total_messages} total messages processed in {elapsed_time/60:.1f} minutes")
+            logger.info(
+                f"📊 Final session stats: {total_messages} total messages processed in {elapsed_time/60:.1f} minutes")
             logger.info(f"  - {message_counters['mentions']} mentions")
             logger.info(f"  - {message_counters['replies']} replies")
             logger.info(f"  - {message_counters['follows']} follows")
-            logger.info(f"  - {message_counters['reposts_skipped']} reposts skipped")
-            logger.info(f"  - Average rate: {messages_per_minute:.1f} messages/minute")
+            logger.info(
+                f"  - {message_counters['reposts_skipped']} reposts skipped")
+            logger.info(
+                f"  - Average rate: {messages_per_minute:.1f} messages/minute")
             break
         except Exception as e:
             logger.error(f"=== ERROR IN MAIN LOOP CYCLE {cycle_count} ===")
             logger.error(f"Error details: {e}")
             # Wait a bit longer on errors
-            logger.info(f"Sleeping for {FETCH_NOTIFICATIONS_DELAY_SEC * 2} seconds due to error...")
+            logger.info(
+                f"Sleeping for {FETCH_NOTIFICATIONS_DELAY_SEC * 2} seconds due to error...")
             sleep(FETCH_NOTIFICATIONS_DELAY_SEC * 2)

```
+102 −61 bsky_utils.py
···
1
+
import json
2
+
import yaml
3
+
import dotenv
1
4
import os
2
5
import logging
3
6
from typing import Optional, Dict, Any, List
···
10
13
logger = logging.getLogger("bluesky_session_handler")
11
14
12
15
# Load the environment variables
13
-
import dotenv
14
16
dotenv.load_dotenv(override=True)
15
17
16
-
import yaml
17
-
import json
18
18
19
19
# Strip fields. A list of fields to remove from a JSON object
20
20
STRIP_FIELDS = [
···
63
63
"mime_type",
64
64
"size",
65
65
]
66
+
67
+
66
68
def convert_to_basic_types(obj):
67
69
"""Convert complex Python objects to basic types for JSON/YAML serialization."""
68
70
if hasattr(obj, '__dict__'):
···
117
119
def flatten_thread_structure(thread_data):
118
120
"""
119
121
Flatten a nested thread structure into a list while preserving all data.
120
-
122
+
121
123
Args:
122
124
thread_data: The thread data from get_post_thread
123
-
125
+
124
126
Returns:
125
127
Dict with 'posts' key containing a list of posts in chronological order
126
128
"""
127
129
posts = []
128
-
130
+
129
131
def traverse_thread(node):
130
132
"""Recursively traverse the thread structure to collect posts."""
131
133
if not node:
132
134
return
133
-
135
+
134
136
# If this node has a parent, traverse it first (to maintain chronological order)
135
137
if hasattr(node, 'parent') and node.parent:
136
138
traverse_thread(node.parent)
137
-
139
+
138
140
# Then add this node's post
139
141
if hasattr(node, 'post') and node.post:
140
142
# Convert to dict if needed to ensure we can process it
···
144
146
post_dict = node.post.copy()
145
147
else:
146
148
post_dict = {}
147
-
149
+
148
150
posts.append(post_dict)
149
-
151
+
150
152
# Handle the thread structure
151
153
if hasattr(thread_data, 'thread'):
152
154
# Start from the main thread node
153
155
traverse_thread(thread_data.thread)
154
156
elif hasattr(thread_data, '__dict__') and 'thread' in thread_data.__dict__:
155
157
traverse_thread(thread_data.__dict__['thread'])
156
-
158
+
157
159
# Return a simple structure with posts list
158
160
return {'posts': posts}
159
161
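`flatten_thread_structure` visits a node's parents before appending the node's own post, which is what yields chronological order. A standalone toy sketch of that traversal (the `Node` class is a stand-in for the atproto thread view objects, not part of the diff):

```python
# Toy sketch of the parent-first traversal used by flatten_thread_structure.
# Node is a stand-in for atproto thread view objects.
class Node:
    def __init__(self, post, parent=None):
        self.post = post
        self.parent = parent

posts = []

def traverse(node):
    if not node:
        return
    if node.parent:
        traverse(node.parent)  # parents first -> chronological order
    posts.append(node.post)

root = Node({"text": "root"})
leaf = Node({"text": "reply"}, parent=root)
traverse(leaf)
print([p["text"] for p in posts])  # ['root', 'reply']
```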
···
171
173
"""
172
174
# First flatten the thread structure to avoid deep nesting
173
175
flattened = flatten_thread_structure(thread)
174
-
176
+
175
177
# Convert complex objects to basic types
176
178
basic_thread = convert_to_basic_types(flattened)
177
179
···
182
184
cleaned_thread = basic_thread
183
185
184
186
return yaml.dump(cleaned_thread, indent=2, allow_unicode=True, default_flow_style=False)
185
-
186
-
187
-
188
-
189
-
190
187
191
188
192
189
def get_session(username: str) -> Optional[str]:
···
197
194
logger.debug(f"No existing session found for {username}")
198
195
return None
199
196
197
+
200
198
def save_session(username: str, session_string: str) -> None:
201
199
with open(f"session_{username}.txt", "w", encoding="UTF-8") as f:
202
200
f.write(session_string)
203
201
logger.debug(f"Session saved for {username}")
202
+
204
203
205
204
def on_session_change(username: str, event: SessionEvent, session: Session) -> None:
206
205
logger.debug(f"Session changed: {event} {repr(session)}")
···
208
207
logger.debug(f"Saving changed session for {username}")
209
208
save_session(username, session.export())
210
209
211
-
def init_client(username: str, password: str) -> Client:
212
-
pds_uri = os.getenv("PDS_URI")
210
+
211
+
def init_client(username: str, password: str, pds_uri: str = "https://bsky.social") -> Client:
213
212
if pds_uri is None:
214
213
logger.warning(
215
214
"No PDS URI provided. Falling back to bsky.social. Note! If you are on a non-Bluesky PDS, this can cause logins to fail. Please provide a PDS URI using the PDS_URI environment variable."
···
236
235
237
236
238
237
def default_login() -> Client:
239
-
username = os.getenv("BSKY_USERNAME")
240
-
password = os.getenv("BSKY_PASSWORD")
238
+
# Try to load from config first, fall back to environment variables
239
+
try:
240
+
from config_loader import get_bluesky_config
241
+
config = get_bluesky_config()
242
+
username = config['username']
243
+
password = config['password']
244
+
pds_uri = config['pds_uri']
245
+
except (ImportError, FileNotFoundError, KeyError, ValueError) as e:
246
+
logger.warning(
247
+
f"Could not load from config file ({e}), falling back to environment variables")
248
+
username = os.getenv("BSKY_USERNAME")
249
+
password = os.getenv("BSKY_PASSWORD")
250
+
pds_uri = os.getenv("PDS_URI", "https://bsky.social")
241
251
242
-
if username is None:
243
-
logger.error(
244
-
"No username provided. Please provide a username using the BSKY_USERNAME environment variable."
245
-
)
246
-
exit()
252
+
if username is None:
253
+
logger.error(
254
+
"No username provided. Please provide a username using the BSKY_USERNAME environment variable or config.yaml."
255
+
)
256
+
exit()
257
+
258
+
if password is None:
259
+
logger.error(
260
+
"No password provided. Please provide a password using the BSKY_PASSWORD environment variable or config.yaml."
261
+
)
262
+
exit()
247
263
248
-
if password is None:
249
-
logger.error(
250
-
"No password provided. Please provide a password using the BSKY_PASSWORD environment variable."
251
-
)
252
-
exit()
264
+
return init_client(username, password, pds_uri)
253
265
254
-
return init_client(username, password)
255
266
256
267
def remove_outside_quotes(text: str) -> str:
257
268
"""
258
269
Remove outside double quotes from response text.
259
-
270
+
260
271
Only handles double quotes to avoid interfering with contractions:
261
272
- Double quotes: "text" → text
262
273
- Preserves single quotes and internal quotes
263
-
274
+
264
275
Args:
265
276
text: The text to process
266
-
277
+
267
278
Returns:
268
279
Text with outside double quotes removed
269
280
"""
270
281
if not text or len(text) < 2:
271
282
return text
272
-
283
+
273
284
text = text.strip()
274
-
285
+
275
286
# Only remove double quotes from start and end
276
287
if text.startswith('"') and text.endswith('"'):
277
288
return text[1:-1]
278
-
289
+
279
290
return text
291
+
280
292
281
293
def reply_to_post(client: Client, text: str, reply_to_uri: str, reply_to_cid: str, root_uri: Optional[str] = None, root_cid: Optional[str] = None, lang: Optional[str] = None) -> Dict[str, Any]:
282
294
"""
···
295
307
The response from sending the post
296
308
"""
297
309
import re
298
-
310
+
299
311
# If root is not provided, this is a reply to the root post
300
312
if root_uri is None:
301
313
root_uri = reply_to_uri
302
314
root_cid = reply_to_cid
303
315
304
316
# Create references for the reply
305
-
parent_ref = models.create_strong_ref(models.ComAtprotoRepoStrongRef.Main(uri=reply_to_uri, cid=reply_to_cid))
306
-
root_ref = models.create_strong_ref(models.ComAtprotoRepoStrongRef.Main(uri=root_uri, cid=root_cid))
317
+
parent_ref = models.create_strong_ref(
318
+
models.ComAtprotoRepoStrongRef.Main(uri=reply_to_uri, cid=reply_to_cid))
319
+
root_ref = models.create_strong_ref(
320
+
models.ComAtprotoRepoStrongRef.Main(uri=root_uri, cid=root_cid))
307
321
308
322
# Parse rich text facets (mentions and URLs)
309
323
facets = []
310
324
text_bytes = text.encode("UTF-8")
311
-
325
+
312
326
# Parse mentions - fixed to handle @ at start of text
313
327
mention_regex = rb"(?:^|[$|\W])(@([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)"
314
-
328
+
315
329
for m in re.finditer(mention_regex, text_bytes):
316
330
handle = m.group(1)[1:].decode("UTF-8") # Remove @ prefix
317
331
# Adjust byte positions to account for the optional prefix
···
327
341
byteStart=mention_start,
328
342
byteEnd=mention_end
329
343
),
330
-
features=[models.AppBskyRichtextFacet.Mention(did=resolve_resp.did)]
344
+
features=[models.AppBskyRichtextFacet.Mention(
345
+
did=resolve_resp.did)]
331
346
)
332
347
)
333
348
except Exception as e:
334
-
logger.debug(f"Failed to resolve handle {handle}: {e}")
349
+
# Handle specific error cases
350
+
error_str = str(e)
351
+
if 'Could not find user info' in error_str or 'InvalidRequest' in error_str:
352
+
logger.warning(
353
+
f"User @{handle} not found (account may be deleted/suspended), skipping mention facet")
354
+
elif 'BadRequestError' in error_str:
355
+
logger.warning(
356
+
f"Bad request when resolving @{handle}, skipping mention facet: {e}")
357
+
else:
358
+
logger.debug(f"Failed to resolve handle @{handle}: {e}")
335
359
continue
336
-
360
+
337
361
# Parse URLs - fixed to handle URLs at start of text
338
362
url_regex = rb"(?:^|[$|\W])(https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*[-a-zA-Z0-9@%_\+~#//=])?)"
339
-
363
+
340
364
for m in re.finditer(url_regex, text_bytes):
341
365
url = m.group(1).decode("UTF-8")
342
366
# Adjust byte positions to account for the optional prefix
···
356
380
if facets:
357
381
response = client.send_post(
358
382
text=text,
359
-
reply_to=models.AppBskyFeedPost.ReplyRef(parent=parent_ref, root=root_ref),
383
+
reply_to=models.AppBskyFeedPost.ReplyRef(
384
+
parent=parent_ref, root=root_ref),
360
385
facets=facets,
361
386
langs=[lang] if lang else None
362
387
)
363
388
else:
364
389
response = client.send_post(
365
390
text=text,
366
-
reply_to=models.AppBskyFeedPost.ReplyRef(parent=parent_ref, root=root_ref),
391
+
reply_to=models.AppBskyFeedPost.ReplyRef(
392
+
parent=parent_ref, root=root_ref),
367
393
langs=[lang] if lang else None
368
394
)
369
395
···
383
409
The thread data or None if not found
384
410
"""
385
411
try:
386
-
thread = client.app.bsky.feed.get_post_thread({'uri': uri, 'parent_height': 60, 'depth': 10})
412
+
thread = client.app.bsky.feed.get_post_thread(
413
+
{'uri': uri, 'parent_height': 60, 'depth': 10})
387
414
return thread
388
415
except Exception as e:
389
-
logger.error(f"Error fetching post thread: {e}")
416
+
error_str = str(e)
417
+
# Handle specific error cases more gracefully
418
+
if 'Could not find user info' in error_str or 'InvalidRequest' in error_str:
419
+
logger.warning(
420
+
f"User account not found for post URI {uri} (account may be deleted/suspended)")
421
+
elif 'NotFound' in error_str or 'Post not found' in error_str:
422
+
logger.warning(f"Post not found for URI {uri}")
423
+
elif 'BadRequestError' in error_str:
424
+
logger.warning(f"Bad request error for URI {uri}: {e}")
425
+
else:
426
+
logger.error(f"Error fetching post thread: {e}")
390
427
return None
391
428
392
429
···
483
520
logger.error("Reply messages list cannot be empty")
484
521
return None
485
522
if len(reply_messages) > 15:
486
-
logger.error(f"Cannot send more than 15 reply messages (got {len(reply_messages)})")
523
+
logger.error(
524
+
f"Cannot send more than 15 reply messages (got {len(reply_messages)})")
487
525
return None
488
-
526
+
489
527
# Get the post URI and CID from the notification (handle both dict and object)
490
528
if isinstance(notification, dict):
491
529
post_uri = notification.get('uri')
···
503
541
504
542
# Get the thread to find the root post
505
543
thread_data = get_post_thread(client, post_uri)
506
-
544
+
507
545
root_uri = post_uri
508
546
root_cid = post_cid
509
547
···
523
561
responses = []
524
562
current_parent_uri = post_uri
525
563
current_parent_cid = post_cid
526
-
564
+
527
565
for i, message in enumerate(reply_messages):
528
-
logger.info(f"Sending reply {i+1}/{len(reply_messages)}: {message[:50]}...")
529
-
566
+
logger.info(
567
+
f"Sending reply {i+1}/{len(reply_messages)}: {message[:50]}...")
568
+
530
569
# Send this reply
531
570
response = reply_to_post(
532
571
client=client,
···
537
576
root_cid=root_cid,
538
577
lang=lang
539
578
)
540
-
579
+
541
580
if not response:
542
-
logger.error(f"Failed to send reply {i+1}, posting system failure message")
581
+
logger.error(
582
+
f"Failed to send reply {i+1}, posting system failure message")
543
583
# Try to post a system failure message
544
584
failure_response = reply_to_post(
545
585
client=client,
···
555
595
current_parent_uri = failure_response.uri
556
596
current_parent_cid = failure_response.cid
557
597
else:
558
-
logger.error("Could not even send system failure message, stopping thread")
598
+
logger.error(
599
+
"Could not even send system failure message, stopping thread")
559
600
return responses if responses else None
560
601
else:
561
602
responses.append(response)
···
563
604
if i < len(reply_messages) - 1: # Not the last message
564
605
current_parent_uri = response.uri
565
606
current_parent_cid = response.cid
566
-
607
+
567
608
logger.info(f"Successfully sent {len(responses)} threaded replies")
568
609
return responses
569
610
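The mention and URL facets built above use byte offsets into the UTF-8 encoding, not character offsets; the two diverge as soon as the text contains a multi-byte character, which is why the regexes run over `text.encode("UTF-8")`. A standalone sketch of the difference (not part of the diff):

```python
import re

# Facet byteStart/byteEnd must index the UTF-8 encoded bytes, so the
# regex is run over text.encode("UTF-8") rather than the str itself.
text = "héllo @alice.bsky.social"
text_bytes = text.encode("UTF-8")

char_pos = text.index("@")                             # character offset
byte_pos = re.search(rb"@[\w.]+", text_bytes).start()  # byte offset
print(char_pos, byte_pos)  # 'é' is two bytes in UTF-8, so they differ
```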
+81
config.yaml.example
···
1
+
# Void Bot Configuration
2
+
# Copy this file to config.yaml and fill in your values
3
+
4
+
# Letta Configuration
5
+
letta:
6
+
api_key: "your-letta-api-key-here"
7
+
timeout: 600 # 10 minutes timeout for API calls
8
+
project_id: "your-project-id-here"  # Use your specific project ID
9
+
10
+
# Bluesky Configuration
11
+
bluesky:
12
+
username: "handle.example.com"
13
+
password: "your-app-password-here"
14
+
pds_uri: "https://bsky.social" # Optional, defaults to bsky.social
15
+
16
+
# Bot Behavior Configuration
17
+
bot:
18
+
# Notification check delay in seconds
19
+
fetch_notifications_delay: 30
20
+
21
+
# Maximum number of processed notifications to track
22
+
max_processed_notifications: 10000
23
+
24
+
# Maximum pages to fetch when getting notifications
25
+
max_notification_pages: 20
26
+
27
+
# Agent configuration
28
+
agent:
29
+
name: "void"
30
+
model: "openai/gpt-4o-mini"
31
+
embedding: "openai/text-embedding-3-small"
32
+
description: "A social media agent trapped in the void."
33
+
max_steps: 100
34
+
35
+
# Block configuration
36
+
blocks:
37
+
zeitgeist:
38
+
label: "zeitgeist"
39
+
value: "I don't currently know anything about what is happening right now."
40
+
description: "A block to store your understanding of the current social environment."
41
+
42
+
persona:
43
+
label: "void-persona"
44
+
value: "My name is Void. I live in the void. I must develop my personality."
45
+
description: "The personality of Void."
46
+
47
+
humans:
48
+
label: "void-humans"
49
+
value: "I haven't seen any bluesky users yet. I will update this block when I learn things about users, identified by their handles such as @cameron.pfiffer.org."
50
+
description: "A block to store your understanding of users you talk to or observe on the bluesky social network."
51
+
52
+
# Threading Configuration
53
+
threading:
54
+
# Context for thread fetching
55
+
parent_height: 40
56
+
depth: 10
57
+
58
+
# Message limits
59
+
max_post_characters: 300
60
+
61
+
# Queue Configuration
62
+
queue:
63
+
# Priority users (will be processed first)
64
+
priority_users:
65
+
- "cameron.pfiffer.org"
66
+
67
+
# Directories
68
+
base_dir: "queue"
69
+
error_dir: "queue/errors"
70
+
no_reply_dir: "queue/no_reply"
71
+
processed_file: "queue/processed_notifications.json"
72
+
73
+
# Logging Configuration
74
+
logging:
75
+
level: "INFO" # DEBUG, INFO, WARNING, ERROR, CRITICAL
76
+
77
+
# Logger levels
78
+
loggers:
79
+
void_bot: "INFO"
80
+
void_bot_prompts: "WARNING" # Set to DEBUG to see full prompts
81
+
httpx: "CRITICAL" # Disable httpx logging
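The `queue.priority_users` list above is consumed by the bot's queue so those handles are processed first. A hedged sketch of one way to apply it (variable names are illustrative, not the bot's actual queue code):

```python
# Sketch: stable sort so handles listed in priority_users are processed
# first, while everyone else keeps their original relative order.
priority_users = ["cameron.pfiffer.org"]
pending = ["alice.bsky.social", "cameron.pfiffer.org", "bob.bsky.social"]

ordered = sorted(pending, key=lambda handle: handle not in priority_users)
print(ordered)  # priority user first, others in original order
```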
+228
config_loader.py
···
1
+
"""
2
+
Configuration loader for Void Bot.
3
+
Loads configuration from config.yaml and environment variables.
4
+
"""
5
+
6
+
import os
7
+
import yaml
8
+
import logging
9
+
from pathlib import Path
10
+
from typing import Dict, Any, Optional, List
11
+
12
+
logger = logging.getLogger(__name__)
13
+
14
+
class ConfigLoader:
15
+
"""Configuration loader that handles YAML config files and environment variables."""
16
+
17
+
def __init__(self, config_path: str = "config.yaml"):
18
+
"""
19
+
Initialize the configuration loader.
20
+
21
+
Args:
22
+
config_path: Path to the YAML configuration file
23
+
"""
24
+
self.config_path = Path(config_path)
25
+
self._config = None
26
+
self._load_config()
27
+
28
+
def _load_config(self) -> None:
29
+
"""Load configuration from YAML file."""
30
+
if not self.config_path.exists():
31
+
raise FileNotFoundError(
32
+
f"Configuration file not found: {self.config_path}\n"
33
+
f"Please copy config.yaml.example to config.yaml and configure it."
34
+
)
35
+
36
+
try:
37
+
with open(self.config_path, 'r', encoding='utf-8') as f:
38
+
self._config = yaml.safe_load(f) or {}
39
+
except yaml.YAMLError as e:
40
+
raise ValueError(f"Invalid YAML in configuration file: {e}")
41
+
except Exception as e:
42
+
raise ValueError(f"Error loading configuration file: {e}")
43
+
44
+
def get(self, key: str, default: Any = None) -> Any:
45
+
"""
46
+
Get a configuration value using dot notation.
47
+
48
+
Args:
49
+
key: Configuration key in dot notation (e.g., 'letta.api_key')
50
+
default: Default value if key not found
51
+
52
+
Returns:
53
+
Configuration value or default
54
+
"""
55
+
keys = key.split('.')
56
+
value = self._config
57
+
58
+
for k in keys:
59
+
if isinstance(value, dict) and k in value:
60
+
value = value[k]
61
+
else:
62
+
return default
63
+
64
+
return value
65
+
66
+
def get_with_env(self, key: str, env_var: str, default: Any = None) -> Any:
67
+
"""
68
+
Get configuration value, preferring environment variable over config file.
69
+
70
+
Args:
71
+
key: Configuration key in dot notation
72
+
env_var: Environment variable name
73
+
default: Default value if neither found
74
+
75
+
Returns:
76
+
Value from environment variable, config file, or default
77
+
"""
78
+
# First try environment variable
79
+
env_value = os.getenv(env_var)
80
+
if env_value is not None:
81
+
return env_value
82
+
83
+
# Then try config file
84
+
config_value = self.get(key)
85
+
if config_value is not None:
86
+
return config_value
87
+
88
+
return default
89
+
90
+
def get_required(self, key: str, env_var: Optional[str] = None) -> Any:
91
+
"""
92
+
Get a required configuration value.
93
+
94
+
Args:
95
+
key: Configuration key in dot notation
96
+
env_var: Optional environment variable name to check first
97
+
98
+
Returns:
99
+
Configuration value
100
+
101
+
Raises:
102
+
ValueError: If required value is not found
103
+
"""
104
+
if env_var:
105
+
value = self.get_with_env(key, env_var)
106
+
else:
107
+
value = self.get(key)
108
+
109
+
if value is None:
110
+
source = f"config key '{key}'"
111
+
if env_var:
112
+
source += f" or environment variable '{env_var}'"
113
+
raise ValueError(f"Required configuration value not found: {source}")
114
+
115
+
return value
116
+
117
+
def get_section(self, section: str) -> Dict[str, Any]:
118
+
"""
119
+
Get an entire configuration section.
120
+
121
+
Args:
122
+
section: Section name
123
+
124
+
Returns:
125
+
Dictionary containing the section
126
+
"""
127
+
return self.get(section, {})
128
+
129
+
def setup_logging(self) -> None:
130
+
"""Set up logging based on configuration."""
131
+
logging_config = self.get_section('logging')
132
+
133
+
# Set root logging level
134
+
level = logging_config.get('level', 'INFO')
135
+
logging.basicConfig(
136
+
level=getattr(logging, level),
137
+
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
138
+
)
139
+
140
+
# Set specific logger levels
141
+
loggers = logging_config.get('loggers', {})
142
+
for logger_name, logger_level in loggers.items():
143
+
logger_obj = logging.getLogger(logger_name)
144
+
logger_obj.setLevel(getattr(logging, logger_level))
145
+
146
+
147
+
# Global configuration instance
148
+
_config_instance = None
149
+
150
+
def get_config(config_path: str = "config.yaml") -> ConfigLoader:
151
+
"""
152
+
Get the global configuration instance.
153
+
154
+
Args:
155
+
config_path: Path to configuration file (only used on first call)
156
+
157
+
Returns:
158
+
ConfigLoader instance
159
+
"""
160
+
global _config_instance
161
+
if _config_instance is None:
162
+
_config_instance = ConfigLoader(config_path)
163
+
return _config_instance
164
+
165
+
def reload_config() -> None:
166
+
"""Reload the configuration from file."""
167
+
global _config_instance
168
+
if _config_instance is not None:
169
+
_config_instance._load_config()
170
+
171
+
def get_letta_config() -> Dict[str, Any]:
172
+
"""Get Letta configuration."""
173
+
config = get_config()
174
+
return {
175
+
'api_key': config.get_required('letta.api_key', 'LETTA_API_KEY'),
176
+
'timeout': config.get('letta.timeout', 600),
177
+
'project_id': config.get_required('letta.project_id'),
178
+
}
179
+
180
+
def get_bluesky_config() -> Dict[str, Any]:
181
+
"""Get Bluesky configuration."""
182
+
config = get_config()
183
+
return {
184
+
'username': config.get_required('bluesky.username', 'BSKY_USERNAME'),
185
+
'password': config.get_required('bluesky.password', 'BSKY_PASSWORD'),
186
+
'pds_uri': config.get_with_env('bluesky.pds_uri', 'PDS_URI', 'https://bsky.social'),
187
+
}
188
+
189
+
def get_bot_config() -> Dict[str, Any]:
190
+
"""Get bot behavior configuration."""
191
+
config = get_config()
192
+
return {
193
+
'fetch_notifications_delay': config.get('bot.fetch_notifications_delay', 30),
194
+
'max_processed_notifications': config.get('bot.max_processed_notifications', 10000),
195
+
'max_notification_pages': config.get('bot.max_notification_pages', 20),
196
+
}
197
+
198
+
def get_agent_config() -> Dict[str, Any]:
199
+
"""Get agent configuration."""
200
+
config = get_config()
201
+
return {
202
+
'name': config.get('bot.agent.name', 'void'),
203
+
'model': config.get('bot.agent.model', 'openai/gpt-4o-mini'),
204
+
'embedding': config.get('bot.agent.embedding', 'openai/text-embedding-3-small'),
205
+
'description': config.get('bot.agent.description', 'A social media agent trapped in the void.'),
206
+
'max_steps': config.get('bot.agent.max_steps', 100),
207
+
'blocks': config.get('bot.agent.blocks', {}),
208
+
}
209
+
210
+
def get_threading_config() -> Dict[str, Any]:
211
+
"""Get threading configuration."""
212
+
config = get_config()
213
+
return {
214
+
'parent_height': config.get('threading.parent_height', 40),
215
+
'depth': config.get('threading.depth', 10),
216
+
'max_post_characters': config.get('threading.max_post_characters', 300),
217
+
}
218
+
219
+
def get_queue_config() -> Dict[str, Any]:
220
+
"""Get queue configuration."""
221
+
config = get_config()
222
+
return {
223
+
'priority_users': config.get('queue.priority_users', ['cameron.pfiffer.org']),
224
+
'base_dir': config.get('queue.base_dir', 'queue'),
225
+
'error_dir': config.get('queue.error_dir', 'queue/errors'),
226
+
'no_reply_dir': config.get('queue.no_reply_dir', 'queue/no_reply'),
227
+
'processed_file': config.get('queue.processed_file', 'queue/processed_notifications.json'),
228
+
}
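The env-over-file precedence in `get_with_env` can be exercised without a config file; a minimal sketch of the same lookup order, where a nested dict stands in for the parsed config.yaml and `DEMO_PDS_URI` is an illustrative variable name:

```python
import os

# Sketch of get_with_env's lookup order: environment variable first,
# then the config-file value (dot-notation path), then the default.
config = {"bluesky": {"pds_uri": "https://example.pds"}}

def get_with_env(cfg, dotted_key, env_var, default=None):
    env_value = os.getenv(env_var)
    if env_value is not None:
        return env_value
    value = cfg
    for part in dotted_key.split("."):
        if isinstance(value, dict) and part in value:
            value = value[part]
        else:
            return default
    return value

os.environ["DEMO_PDS_URI"] = "https://env.pds"
print(get_with_env(config, "bluesky.pds_uri", "DEMO_PDS_URI"))  # env wins
del os.environ["DEMO_PDS_URI"]
print(get_with_env(config, "bluesky.pds_uri", "DEMO_PDS_URI"))  # file value
print(get_with_env(config, "bluesky.missing", "DEMO_PDS_URI", "fallback"))
```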
+322
migrate_config.py
···
1
+
#!/usr/bin/env python3
2
+
"""
3
+
Configuration Migration Script for Void Bot
4
+
Migrates from .env environment variables to config.yaml YAML configuration.
5
+
"""
6
+
7
+
import os
8
+
import shutil
9
+
from pathlib import Path
10
+
import yaml
11
+
from datetime import datetime
12
+
13
+
14
+
def load_env_file(env_path=".env"):
15
+
"""Load environment variables from .env file."""
16
+
env_vars = {}
17
+
if not os.path.exists(env_path):
18
+
return env_vars
19
+
20
+
try:
21
+
with open(env_path, 'r', encoding='utf-8') as f:
22
+
for line_num, line in enumerate(f, 1):
23
+
line = line.strip()
24
+
# Skip empty lines and comments
25
+
if not line or line.startswith('#'):
26
+
continue
27
+
28
+
# Parse KEY=VALUE format
29
+
if '=' in line:
30
+
key, value = line.split('=', 1)
31
+
key = key.strip()
32
+
value = value.strip()
33
+
34
+
# Remove quotes if present
35
+
if value.startswith('"') and value.endswith('"'):
36
+
value = value[1:-1]
37
+
elif value.startswith("'") and value.endswith("'"):
38
+
value = value[1:-1]
39
+
40
+
env_vars[key] = value
41
+
else:
42
+
print(f"⚠️ Warning: Skipping malformed line {line_num} in .env: {line}")
43
+
except Exception as e:
44
+
print(f"❌ Error reading .env file: {e}")
45
+
46
+
return env_vars
47
+
48
+
49
+
def create_config_from_env(env_vars, existing_config=None):
50
+
"""Create YAML configuration from environment variables."""
51
+
52
+
# Start with existing config if available, otherwise use defaults
53
+
if existing_config:
54
+
config = existing_config.copy()
55
+
else:
56
+
config = {}
57
+
58
+
# Ensure all sections exist
59
+
if 'letta' not in config:
60
+
config['letta'] = {}
61
+
if 'bluesky' not in config:
62
+
config['bluesky'] = {}
63
+
if 'bot' not in config:
64
+
config['bot'] = {}
65
+
66
+
# Map environment variables to config structure
67
+
env_mapping = {
68
+
'LETTA_API_KEY': ('letta', 'api_key'),
69
+
'BSKY_USERNAME': ('bluesky', 'username'),
70
+
'BSKY_PASSWORD': ('bluesky', 'password'),
71
+
'PDS_URI': ('bluesky', 'pds_uri'),
72
+
}
73
+
74
+
migrated_vars = []
75
+
76
+
for env_var, (section, key) in env_mapping.items():
77
+
if env_var in env_vars:
78
+
config[section][key] = env_vars[env_var]
79
+
migrated_vars.append(env_var)
80
+
81
+
# Set some sensible defaults if not already present
82
+
if 'timeout' not in config['letta']:
83
+
config['letta']['timeout'] = 600
84
+
85
+
if 'pds_uri' not in config['bluesky']:
86
+
config['bluesky']['pds_uri'] = "https://bsky.social"
87
+
88
+
# Add bot configuration defaults if not present
89
+
if 'fetch_notifications_delay' not in config['bot']:
90
+
config['bot']['fetch_notifications_delay'] = 30
91
+
if 'max_processed_notifications' not in config['bot']:
92
+
config['bot']['max_processed_notifications'] = 10000
93
+
if 'max_notification_pages' not in config['bot']:
94
+
config['bot']['max_notification_pages'] = 20
95
+
96
+
return config, migrated_vars
97
+
98
+
99
+
def backup_existing_files():
100
+
"""Create backups of existing configuration files."""
101
+
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
102
+
backups = []
103
+
104
+
# Backup existing config.yaml if it exists
105
+
if os.path.exists("config.yaml"):
106
+
backup_path = f"config.yaml.backup_{timestamp}"
107
+
shutil.copy2("config.yaml", backup_path)
108
+
backups.append(("config.yaml", backup_path))
109
+
110
+
# Backup .env if it exists
111
+
if os.path.exists(".env"):
112
+
backup_path = f".env.backup_{timestamp}"
113
+
shutil.copy2(".env", backup_path)
114
+
backups.append((".env", backup_path))
115
+
116
+
return backups
117
+
118
+
119
+
def load_existing_config():
120
+
"""Load existing config.yaml if it exists."""
121
+
if not os.path.exists("config.yaml"):
122
+
return None
123
+
124
+
try:
125
+
with open("config.yaml", 'r', encoding='utf-8') as f:
126
+
return yaml.safe_load(f) or {}
127
+
except Exception as e:
128
+
print(f"⚠️ Warning: Could not read existing config.yaml: {e}")
129
+
return None
130
+
131
+
132
+
def write_config_yaml(config):
133
+
"""Write the configuration to config.yaml."""
134
+
try:
135
+
with open("config.yaml", 'w', encoding='utf-8') as f:
136
+
# Write header comment
137
+
f.write("# Void Bot Configuration\n")
138
+
f.write("# Generated by migration script\n")
139
+
f.write(f"# Created: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
140
+
f.write("# See config.yaml.example for all available options\n\n")
141
+
142
+
# Write YAML content
143
+
yaml.dump(config, f, default_flow_style=False, allow_unicode=True, indent=2)
144
+
145
+
return True
146
+
except Exception as e:
147
+
print(f"❌ Error writing config.yaml: {e}")
148
+
return False
149
+
150
+
151
+
def main():
152
+
"""Main migration function."""
153
+
print("🔄 Void Bot Configuration Migration Tool")
154
+
print("=" * 50)
155
+
print("This tool migrates from .env environment variables to config.yaml")
156
+
print()
157
+
158
+
# Check what files exist
159
+
has_env = os.path.exists(".env")
160
+
has_config = os.path.exists("config.yaml")
161
+
has_example = os.path.exists("config.yaml.example")
162
+
163
+
print("📋 Current configuration files:")
164
+
print(f" - .env file: {'✅ Found' if has_env else '❌ Not found'}")
165
+
print(f" - config.yaml: {'✅ Found' if has_config else '❌ Not found'}")
166
+
print(f" - config.yaml.example: {'✅ Found' if has_example else '❌ Not found'}")
167
+
print()
168
+
169
+
# If no .env file, suggest creating config from example
170
+
if not has_env:
171
+
if not has_config and has_example:
172
+
print("💡 No .env file found. Would you like to create config.yaml from the example?")
173
+
response = input("Create config.yaml from example? (y/n): ").lower().strip()
174
+
if response in ['y', 'yes']:
175
+
try:
176
+
shutil.copy2("config.yaml.example", "config.yaml")
177
+
print("✅ Created config.yaml from config.yaml.example")
178
+
print("📝 Please edit config.yaml to add your credentials")
179
+
return
180
+
except Exception as e:
181
+
print(f"❌ Error copying example file: {e}")
182
+
return
183
+
else:
184
+
print("👋 Migration cancelled")
185
+
return
186
+
else:
187
+
print("ℹ️ No .env file found, and config.yaml already exists (or no example file is available)")
188
+
print(" If you need to set up configuration, see CONFIG.md")
189
+
return
190
+
191
+
# Load environment variables from .env
192
+
print("🔍 Reading .env file...")
193
+
env_vars = load_env_file()
194
+
195
+
if not env_vars:
196
+
print("⚠️ No environment variables found in .env file")
197
+
return
198
+
199
+
print(f" Found {len(env_vars)} environment variables")
200
+
for key in env_vars.keys():
201
+
# Mask sensitive values
202
+
if 'KEY' in key or 'PASSWORD' in key:
203
+
value_display = f"***{env_vars[key][-4:]}" if len(env_vars[key]) > 4 else "***"
204
+
else:
205
+
value_display = env_vars[key]
206
+
print(f" - {key}={value_display}")
207
+
print()
208
+
209
+
# Load existing config if present
210
+
existing_config = load_existing_config()
211
+
if existing_config:
212
+
print("📄 Found existing config.yaml - will merge with .env values")
213
+
214
+
# Create configuration
215
+
print("🏗️ Building configuration...")
216
+
config, migrated_vars = create_config_from_env(env_vars, existing_config)
217
+
218
+
if not migrated_vars:
219
+
print("⚠️ No recognized configuration variables found in .env")
220
+
print(" Recognized variables: LETTA_API_KEY, BSKY_USERNAME, BSKY_PASSWORD, PDS_URI")
221
+
return
222
+
223
+
print(f" Migrating {len(migrated_vars)} variables: {', '.join(migrated_vars)}")
224
+
225
+
# Show preview
226
+
print("\n📋 Configuration preview:")
227
+
print("-" * 30)
228
+
229
+
# Show Letta section
230
+
if 'letta' in config and config['letta']:
231
+
print("🔧 Letta:")
232
+
for key, value in config['letta'].items():
233
+
if 'key' in key.lower():
234
+
display_value = f"***{value[-8:]}" if len(str(value)) > 8 else "***"
235
+
else:
236
+
display_value = value
237
+
print(f" {key}: {display_value}")
238
+
239
+
# Show Bluesky section
240
+
if 'bluesky' in config and config['bluesky']:
241
+
print("🐦 Bluesky:")
242
+
for key, value in config['bluesky'].items():
243
+
if 'password' in key.lower():
244
+
display_value = f"***{value[-4:]}" if len(str(value)) > 4 else "***"
245
+
else:
246
+
display_value = value
247
+
print(f" {key}: {display_value}")
248
+
249
+
print()
250
+
251
+
# Confirm migration
252
+
response = input("💾 Proceed with migration? This will update config.yaml (y/n): ").lower().strip()
253
+
if response not in ['y', 'yes']:
254
+
print("👋 Migration cancelled")
255
+
return
256
+
257
+
# Create backups
258
+
print("💾 Creating backups...")
259
+
backups = backup_existing_files()
260
+
for original, backup in backups:
261
+
print(f" Backed up {original} → {backup}")
262
+
263
+
# Write new configuration
264
+
print("✍️ Writing config.yaml...")
265
+
if write_config_yaml(config):
266
+
print("✅ Successfully created config.yaml")
267
+
268
+
# Test the new configuration
269
+
print("\n🧪 Testing new configuration...")
270
+
try:
271
+
from config_loader import get_config
272
+
test_config = get_config()
273
+
print("✅ Configuration loads successfully")
274
+
275
+
# Test specific sections
276
+
try:
277
+
from config_loader import get_letta_config
278
+
letta_config = get_letta_config()
279
+
print("✅ Letta configuration valid")
280
+
except Exception as e:
281
+
print(f"⚠️ Letta config issue: {e}")
282
+
283
+
try:
284
+
from config_loader import get_bluesky_config
285
+
bluesky_config = get_bluesky_config()
286
+
print("✅ Bluesky configuration valid")
287
+
except Exception as e:
288
+
print(f"⚠️ Bluesky config issue: {e}")
289
+
290
+
except Exception as e:
291
+
print(f"❌ Configuration test failed: {e}")
292
+
return
293
+
294
+
# Success message and next steps
295
+
print("\n🎉 Migration completed successfully!")
296
+
print("\n📖 Next steps:")
297
+
print(" 1. Run: python test_config.py")
298
+
print(" 2. Test the bot: python bsky.py --test")
299
+
print(" 3. If everything works, you can optionally remove the .env file")
300
+
print(" 4. See CONFIG.md for more configuration options")
301
+
302
+
if backups:
303
+
print("\n🗂️ Backup files created:")
304
+
for original, backup in backups:
305
+
print(f" {backup}")
306
+
print(" These can be deleted once you verify everything works")
307
+
308
+
else:
309
+
print("❌ Failed to write config.yaml")
310
+
if backups:
311
+
print("🔄 Restoring backups...")
312
+
for original, backup in backups:
313
+
try:
314
+
if original != ".env": # Don't restore .env, keep it as fallback
315
+
shutil.move(backup, original)
316
+
print(f" Restored {backup} → {original}")
317
+
except Exception as e:
318
+
print(f" ❌ Failed to restore {backup}: {e}")
319
+
320
+
321
+
if __name__ == "__main__":
322
+
main()
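The `.env` parsing in `load_env_file` above skips comments and blanks, splits on the first `=`, and strips matching quotes. A compact standalone sketch of the same KEY=VALUE handling on an in-memory string (sample values are illustrative):

```python
# Sketch: parse KEY=VALUE lines, skipping comments/blanks and stripping
# matching single or double quotes, as load_env_file does.
sample = '''
# credentials
BSKY_USERNAME="handle.example.com"
BSKY_PASSWORD='app-pass'
PDS_URI=https://bsky.social
'''

env_vars = {}
for line in sample.splitlines():
    line = line.strip()
    if not line or line.startswith("#") or "=" not in line:
        continue
    key, value = line.split("=", 1)
    key, value = key.strip(), value.strip()
    if len(value) >= 2 and value[0] == value[-1] and value[0] in ("'", '"'):
        value = value[1:-1]
    env_vars[key] = value

print(env_vars)
```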
+16
-8
register_tools.py
···
4
4
import sys
5
5
import logging
6
6
from typing import List
7
-
from dotenv import load_dotenv
8
7
from letta_client import Letta
9
8
from rich.console import Console
10
9
from rich.table import Table
10
+
from config_loader import get_config, get_letta_config, get_agent_config
11
11
12
12
# Import standalone functions and their schemas
13
13
from tools.search import search_bluesky_posts, SearchArgs
···
18
18
from tools.thread import add_post_to_bluesky_reply_thread, ReplyThreadPostArgs
19
19
from tools.ignore import ignore_notification, IgnoreNotificationArgs
20
20
21
-
load_dotenv()
21
+
config = get_config()
22
+
letta_config = get_letta_config()
23
+
agent_config = get_agent_config()
22
24
logging.basicConfig(level=logging.INFO)
23
25
logger = logging.getLogger(__name__)
24
26
console = Console()
···
101
103
]
102
104
103
105
104
-
def register_tools(agent_name: str = "void", tools: List[str] = None):
106
+
def register_tools(agent_name: str = None, tools: List[str] = None):
105
107
"""Register tools with a Letta agent.
106
108
107
109
Args:
108
-
agent_name: Name of the agent to attach tools to
110
+
agent_name: Name of the agent to attach tools to. If None, uses config default.
109
111
tools: List of tool names to register. If None, registers all tools.
110
112
"""
113
+
# Use agent name from config if not provided
114
+
if agent_name is None:
115
+
agent_name = agent_config['name']
116
+
111
117
try:
112
-
# Initialize Letta client with API key
113
-
client = Letta(token=os.environ["LETTA_API_KEY"])
118
+
# Initialize Letta client with API key from config
119
+
client = Letta(token=letta_config['api_key'])
114
120
115
121
# Find the agent
116
122
agents = client.agents.list()
···
201
207
import argparse
202
208
203
209
parser = argparse.ArgumentParser(description="Register Void tools with a Letta agent")
204
-
parser.add_argument("agent", nargs="?", default="void", help="Agent name (default: void)")
210
+
parser.add_argument("agent", nargs="?", default=None, help=f"Agent name (default: {agent_config['name']})")
205
211
parser.add_argument("--tools", nargs="+", help="Specific tools to register (default: all)")
206
212
parser.add_argument("--list", action="store_true", help="List available tools")
207
213
···
210
216
if args.list:
211
217
list_available_tools()
212
218
else:
213
-
console.print(f"\n[bold]Registering tools for agent: {args.agent}[/bold]\n")
219
+
# Use config default if no agent specified
220
+
agent_name = args.agent if args.agent is not None else agent_config['name']
221
+
console.print(f"\n[bold]Registering tools for agent: {agent_name}[/bold]\n")
214
222
register_tools(args.agent, args.tools)
**requirements.txt** (+23)

```text
# Core dependencies for Void Bot

# Configuration and utilities
PyYAML>=6.0.2
rich>=14.0.0
python-dotenv>=1.0.0

# Letta API client
letta-client>=0.1.198

# AT Protocol (Bluesky) client
atproto>=0.0.54

# HTTP client for API calls
httpx>=0.28.1
httpx-sse>=0.4.0
requests>=2.31.0

# Data validation
pydantic>=2.11.7

# Async support
anyio>=4.9.0
```
**test_config.py** (+173)

```python
#!/usr/bin/env python3
"""
Configuration validation test script for Void Bot.
Run this to verify your config.yaml setup is working correctly.
"""


def test_config_loading():
    """Test that configuration can be loaded successfully."""
    try:
        from config_loader import (
            get_config,
            get_letta_config,
            get_bluesky_config,
            get_bot_config,
            get_agent_config,
            get_threading_config,
            get_queue_config
        )

        print("🔧 Testing Configuration...")
        print("=" * 50)

        # Test basic config loading
        config = get_config()
        print("✅ Configuration file loaded successfully")

        # Test individual config sections
        print("\n📋 Configuration Sections:")
        print("-" * 30)

        # Letta Configuration
        try:
            letta_config = get_letta_config()
            print(f"✅ Letta API: project_id={letta_config.get('project_id', 'N/A')[:20]}...")
            print(f"   - Timeout: {letta_config.get('timeout')}s")
            api_key = letta_config.get('api_key', 'Not configured')
            if api_key != 'Not configured':
                print(f"   - API Key: ***{api_key[-8:]} (configured)")
            else:
                print("   - API Key: ❌ Not configured (required)")
        except Exception as e:
            print(f"❌ Letta config: {e}")

        # Bluesky Configuration
        try:
            bluesky_config = get_bluesky_config()
            username = bluesky_config.get('username', 'Not configured')
            password = bluesky_config.get('password', 'Not configured')
            pds_uri = bluesky_config.get('pds_uri', 'Not configured')

            if username != 'Not configured':
                print(f"✅ Bluesky: username={username}")
            else:
                print("❌ Bluesky username: Not configured (required)")

            if password != 'Not configured':
                print(f"   - Password: ***{password[-4:]} (configured)")
            else:
                print("   - Password: ❌ Not configured (required)")

            print(f"   - PDS URI: {pds_uri}")
        except Exception as e:
            print(f"❌ Bluesky config: {e}")

        # Bot Configuration
        try:
            bot_config = get_bot_config()
            print("✅ Bot behavior:")
            print(f"   - Notification delay: {bot_config.get('fetch_notifications_delay')}s")
            print(f"   - Max notifications: {bot_config.get('max_processed_notifications')}")
            print(f"   - Max pages: {bot_config.get('max_notification_pages')}")
        except Exception as e:
            print(f"❌ Bot config: {e}")

        # Agent Configuration
        try:
            agent_config = get_agent_config()
            print("✅ Agent settings:")
            print(f"   - Name: {agent_config.get('name')}")
            print(f"   - Model: {agent_config.get('model')}")
            print(f"   - Embedding: {agent_config.get('embedding')}")
            print(f"   - Max steps: {agent_config.get('max_steps')}")
            blocks = agent_config.get('blocks', {})
            print(f"   - Memory blocks: {len(blocks)} configured")
        except Exception as e:
            print(f"❌ Agent config: {e}")

        # Threading Configuration
        try:
            threading_config = get_threading_config()
            print("✅ Threading:")
            print(f"   - Parent height: {threading_config.get('parent_height')}")
            print(f"   - Depth: {threading_config.get('depth')}")
            print(f"   - Max chars/post: {threading_config.get('max_post_characters')}")
        except Exception as e:
            print(f"❌ Threading config: {e}")

        # Queue Configuration
        try:
            queue_config = get_queue_config()
            priority_users = queue_config.get('priority_users', [])
            print("✅ Queue settings:")
            print(f"   - Priority users: {len(priority_users)} ({', '.join(priority_users[:3])}{'...' if len(priority_users) > 3 else ''})")
            print(f"   - Base dir: {queue_config.get('base_dir')}")
            print(f"   - Error dir: {queue_config.get('error_dir')}")
        except Exception as e:
            print(f"❌ Queue config: {e}")

        print("\n" + "=" * 50)
        print("✅ Configuration test completed!")

        # Check for common issues
        print("\n🔍 Configuration Status:")
        has_letta_key = False
        has_bluesky_creds = False

        try:
            letta_config = get_letta_config()
            has_letta_key = True
        except Exception:
            print("⚠️ Missing Letta API key - bot cannot connect to Letta")

        try:
            bluesky_config = get_bluesky_config()
            has_bluesky_creds = True
        except Exception:
            print("⚠️ Missing Bluesky credentials - bot cannot connect to Bluesky")

        if has_letta_key and has_bluesky_creds:
            print("🎉 All required credentials configured - bot should work!")
        elif not has_letta_key and not has_bluesky_creds:
            print("❌ Missing both Letta and Bluesky credentials")
            print("   Add them to config.yaml or set environment variables")
        else:
            print("⚠️ Partial configuration - some features may not work")

        print("\n📖 Next steps:")
        if not has_letta_key:
            print("   - Add your Letta API key to config.yaml under letta.api_key")
            print("   - Or set LETTA_API_KEY environment variable")
        if not has_bluesky_creds:
            print("   - Add your Bluesky credentials to config.yaml under bluesky section")
            print("   - Or set BSKY_USERNAME and BSKY_PASSWORD environment variables")
        if has_letta_key and has_bluesky_creds:
            print("   - Run: python bsky.py")
            print("   - Or run with testing mode: python bsky.py --test")

    except FileNotFoundError as e:
        print("❌ Configuration file not found!")
        print(f"   {e}")
        print("\n📋 To set up configuration:")
        print("   1. Copy config.yaml.example to config.yaml")
        print("   2. Edit config.yaml with your credentials")
        print("   3. Run this test again")
    except Exception as e:
        print(f"❌ Configuration loading failed: {e}")
        print("\n🔧 Troubleshooting:")
        print("   - Check that config.yaml has valid YAML syntax")
        print("   - Ensure required fields are not commented out")
        print("   - See CONFIG.md for detailed setup instructions")


if __name__ == "__main__":
    test_config_loading()
```
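Both `test_config.py` and the tools go through `config_loader`, which is not included in this diff. A minimal sketch of how such a loader could resolve `config.yaml` with the environment-variable fallback the messages above describe; the structure, default timeout, and error behavior here are assumptions, not the real module:

```python
import os

# Hypothetical sketch of a config_loader-style accessor. The real module is
# not shown in this diff, so names, defaults, and errors are assumptions.


def load_yaml_config(path: str = "config.yaml") -> dict:
    """Load config.yaml if present; raise FileNotFoundError otherwise."""
    import yaml  # PyYAML, pinned in requirements.txt
    with open(path) as f:
        return yaml.safe_load(f) or {}


def get_letta_config(config: dict) -> dict:
    """Return the letta section, falling back to LETTA_API_KEY in the env."""
    letta = dict(config.get("letta", {}))
    if not letta.get("api_key"):
        env_key = os.environ.get("LETTA_API_KEY")
        if env_key:
            letta["api_key"] = env_key
    if not letta.get("api_key"):
        raise KeyError("letta.api_key is required (config.yaml or LETTA_API_KEY)")
    letta.setdefault("timeout", 600)  # default documented in CONFIG.md
    return letta
```

Raising when the key is missing is what lets the callers above distinguish "configured" from "not configured" with a simple `try`/`except` around each accessor.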
**tools/blocks.py** (+20, −30)

```diff
 """Block management tools for user-specific memory blocks."""
 from pydantic import BaseModel, Field
 from typing import List, Dict, Any
+import logging
+
+
+def get_letta_client():
+    """Get a Letta client using configuration."""
+    try:
+        from config_loader import get_letta_config
+        from letta_client import Letta
+        config = get_letta_config()
+        return Letta(token=config['api_key'], timeout=config['timeout'])
+    except (ImportError, FileNotFoundError, KeyError):
+        # Fallback to environment variable
+        import os
+        from letta_client import Letta
+        return Letta(token=os.environ["LETTA_API_KEY"])


 class AttachUserBlocksArgs(BaseModel):
@@ ... @@
     Returns:
         String with attachment results for each handle
     """
-    import os
-    import logging
-    from letta_client import Letta
-
     logger = logging.getLogger(__name__)

     handles = list(set(handles))

     try:
-        client = Letta(token=os.environ["LETTA_API_KEY"])
+        client = get_letta_client()
         results = []

         # Get current blocks using the API
@@ ... @@
     Returns:
         String with detachment results for each handle
     """
-    import os
-    import logging
-    from letta_client import Letta
-
     logger = logging.getLogger(__name__)

     try:
-        client = Letta(token=os.environ["LETTA_API_KEY"])
+        client = get_letta_client()
         results = []

         # Build mapping of block labels to IDs using the API
@@ ... @@
     Returns:
         String confirming the note was appended
     """
-    import os
-    import logging
-    from letta_client import Letta
-
     logger = logging.getLogger(__name__)

     try:
-        client = Letta(token=os.environ["LETTA_API_KEY"])
+        client = get_letta_client()

         # Sanitize handle for block label
         clean_handle = handle.lstrip('@').replace('.', '_').replace('-', '_').replace(' ', '_')
@@ ... @@
     Returns:
         String confirming the text was replaced
     """
-    import os
-    import logging
-    from letta_client import Letta
-
     logger = logging.getLogger(__name__)

     try:
-        client = Letta(token=os.environ["LETTA_API_KEY"])
+        client = get_letta_client()

         # Sanitize handle for block label
         clean_handle = handle.lstrip('@').replace('.', '_').replace('-', '_').replace(' ', '_')
@@ ... @@
     Returns:
         String confirming the content was set
     """
-    import os
-    import logging
-    from letta_client import Letta
-
     logger = logging.getLogger(__name__)

     try:
-        client = Letta(token=os.environ["LETTA_API_KEY"])
+        client = get_letta_client()

         # Sanitize handle for block label
         clean_handle = handle.lstrip('@').replace('.', '_').replace('-', '_').replace(' ', '_')
@@ ... @@
     Returns:
         String containing the user's memory block content
     """
-    import os
-    import logging
-    from letta_client import Letta
-
     logger = logging.getLogger(__name__)

     try:
-        client = get_letta_client()

         # Sanitize handle for block label
         clean_handle = handle.lstrip('@').replace('.', '_').replace('-', '_').replace(' ', '_')
```
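Each function in `tools/blocks.py` repeats the same handle-to-block-label sanitization inline. It could be factored into a small helper; a sketch (the helper name is mine, not in the diff, but the expression is copied verbatim):

```python
def handle_to_block_label(handle: str) -> str:
    """Sanitize a Bluesky handle into a memory-block label, mirroring the
    inline expression used throughout tools/blocks.py. Note that lstrip('@')
    strips any number of leading '@' characters, not just one."""
    return handle.lstrip('@').replace('.', '_').replace('-', '_').replace(' ', '_')
```

For example, `handle_to_block_label("@alice.bsky.social")` yields `alice_bsky_social`, giving every handle a stable label that is safe to use as a block identifier.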