a digital person for bluesky

Enhance X integration with comprehensive debugging and thread context improvements

- Added a detailed debugging data structure for the X bot, saving conversation metadata and user analysis.
- Added handling for common thread-context issues: missing referenced tweets are now fetched directly, and caching is disabled during processing to avoid staleness.
- Updated `get_thread_context()` to accept an `until_id` parameter, excluding tweets posted after the mention being processed.
- Enhanced mention processing with improved logging for conversation tracking and debugging.
- Saved agent response data and conversation debug information to dedicated debug folders for better analysis.
- Ensured cloud-executed tools are self-contained, removing reliance on shared functions, config files, and logging; tools now use only inline imports and environment variables.

These changes improve the robustness of the X integration and facilitate better debugging and analysis of conversation flows.
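The `until_id` cutoff works because X post IDs are snowflakes: their high bits encode creation time, so plain integer comparison orders posts chronologically. A minimal sketch of that filtering step, with made-up IDs (this mirrors the logic in the x.py diff below; it is not the bot's exact code):

```python
# Sketch of the until_id cutoff applied in get_thread_context().
# Assumes X post IDs are snowflakes, so numeric order matches creation order.
def filter_until(tweets: list[dict], until_id: str) -> list[dict]:
    limit = int(until_id)
    # Drop posts created after the mention being processed, so the agent
    # never sees "future" replies in its thread context.
    return [t for t in tweets if int(t.get("id", "0")) <= limit]

thread = [
    {"id": "1800000000000000001", "text": "original post"},
    {"id": "1800000000000000005", "text": "the mention being processed"},
    {"id": "1800000000000000009", "text": "a later reply the agent must not see"},
]
print([t["text"] for t in filter_until(thread, "1800000000000000005")])
# ['original post', 'the mention being processed']
```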

Changed files: +311 -58

CLAUDE.md (+23)
````diff
 python queue_manager.py delete @example.bsky.social --force
 ```
 
+### X Debug Data Structure
+
+The X bot saves comprehensive debugging data to `x_queue/debug/conversation_{conversation_id}/` for each processed mention:
+
+- `thread_data_{mention_id}.json` - Raw thread data from X API
+- `thread_context_{mention_id}.yaml` - Processed YAML thread context sent to agent
+- `debug_info_{mention_id}.json` - Conversation metadata and user analysis
+- `agent_response_{mention_id}.json` - Complete agent interaction including prompt, reasoning, tool calls, and responses
+
+This debug data is especially useful for analyzing how different conversation types (including Grok interactions) are handled.
+
+**Common Issues:**
+- **Incomplete Thread Context**: X API's conversation search may miss recent tweets in long conversations. The bot attempts to fetch missing referenced tweets directly.
+- **Cache Staleness**: Thread context caching is disabled during processing to ensure fresh data.
+- **Search API Limitations**: X API recent search only covers 7 days and may have indexing delays.
+- **Temporal Constraints**: Thread context uses `until_id` parameter to exclude tweets that occurred after the mention being processed, preventing "future knowledge" leakage.
+
 ## Architecture Overview
 
 ### Core Components
···
 ## Key Coding Principles
 
 - All errors in tools must be thrown, not returned as strings.
+- **Tool Self-Containment**: Tools executed in the cloud (like user block management tools) must be completely self-contained:
+  - Cannot use shared functions like `get_letta_client()`
+  - Must create Letta client inline using environment variables: `Letta(token=os.environ["LETTA_API_KEY"])`
+  - Cannot use config.yaml (only environment variables)
+  - Cannot use logging (cloud execution doesn't support it)
+  - Must include all necessary imports within the function
 
 ## Memory: Python Environment Commands
````
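For reference, a minimal skeleton of a tool that satisfies the self-containment checklist above. The function name and body are illustrative only; the real tools live in `tools/blocks.py` (see the next diff):

```python
def example_cloud_tool(user_id: str) -> str:
    """Illustrative skeleton of a cloud-safe tool (name and body are hypothetical)."""
    try:
        # Create Letta client inline - cloud tools must be self-contained
        import os
        from letta_client import Letta
        client = Letta(token=os.environ["LETTA_API_KEY"])

        block_label = f"x_user_{user_id}"
        # ... operate on `client` using only names imported inside this function ...
        return f"✓ Operated on {block_label}"
    except Exception as e:
        # Per the coding principles: errors are thrown, never returned as strings
        raise Exception(f"Error in example_cloud_tool: {str(e)}")
```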
tools/blocks.py (+16 -24)
````diff
     Returns:
         String confirming the note was appended
     """
-    logger = logging.getLogger(__name__)
-
     try:
-        client = get_letta_client()
+        # Create Letta client inline - cloud tools must be self-contained
+        import os
+        from letta_client import Letta
+        client = Letta(token=os.environ["LETTA_API_KEY"])
 
         block_label = f"x_user_{user_id}"
 
···
                 block_id=str(block.id),
                 value=new_value
             )
-            logger.info(f"Appended note to existing block: {block_label}")
             return f"✓ Appended note to X user {user_id}'s memory block"
 
         else:
···
                 value=initial_value,
                 limit=5000
             )
-            logger.info(f"Created new block with note: {block_label}")
 
             # Check if block needs to be attached to agent
             current_blocks = client.agents.blocks.list(agent_id=str(agent_state.id))
···
                     agent_id=str(agent_state.id),
                     block_id=str(block.id)
                 )
-                logger.info(f"Attached new block to agent: {block_label}")
                 return f"✓ Created and attached X user {user_id}'s memory block with note"
             else:
                 return f"✓ Created X user {user_id}'s memory block with note"
 
     except Exception as e:
-        logger.error(f"Error appending note to X user block: {e}")
         raise Exception(f"Error appending note to X user block: {str(e)}")
···
     Returns:
         String confirming the text was replaced
     """
-    logger = logging.getLogger(__name__)
-
     try:
-        client = get_letta_client()
+        # Create Letta client inline - cloud tools must be self-contained
+        import os
+        from letta_client import Letta
+        client = Letta(token=os.environ["LETTA_API_KEY"])
 
         block_label = f"x_user_{user_id}"
 
···
             block_id=str(block.id),
             value=new_value
         )
-        logger.info(f"Replaced text in block: {block_label}")
         return f"✓ Replaced text in X user {user_id}'s memory block"
 
     except Exception as e:
-        logger.error(f"Error replacing text in X user block: {e}")
         raise Exception(f"Error replacing text in X user block: {str(e)}")
···
     Returns:
         String confirming the content was set
     """
-    logger = logging.getLogger(__name__)
-
     try:
-        client = get_letta_client()
+        # Create Letta client inline - cloud tools must be self-contained
+        import os
+        from letta_client import Letta
+        client = Letta(token=os.environ["LETTA_API_KEY"])
 
         block_label = f"x_user_{user_id}"
 
···
                 block_id=str(block.id),
                 value=content
             )
-            logger.info(f"Set content for existing block: {block_label}")
             return f"✓ Set content for X user {user_id}'s memory block"
 
         else:
···
                 value=content,
                 limit=5000
             )
-            logger.info(f"Created new block with content: {block_label}")
 
             # Check if block needs to be attached to agent
             current_blocks = client.agents.blocks.list(agent_id=str(agent_state.id))
···
                     agent_id=str(agent_state.id),
                     block_id=str(block.id)
                 )
-                logger.info(f"Attached new block to agent: {block_label}")
                 return f"✓ Created and attached X user {user_id}'s memory block"
             else:
                 return f"✓ Created X user {user_id}'s memory block"
 
     except Exception as e:
-        logger.error(f"Error setting X user block content: {e}")
         raise Exception(f"Error setting X user block content: {str(e)}")
···
     Returns:
         String containing the user's memory block content
     """
-    logger = logging.getLogger(__name__)
-
     try:
-        client = get_letta_client()
+        # Create Letta client inline - cloud tools must be self-contained
+        import os
+        from letta_client import Letta
+        client = Letta(token=os.environ["LETTA_API_KEY"])
 
         block_label = f"x_user_{user_id}"
 
···
             return f"No memory block found for X user: {user_id}"
 
         block = blocks[0]
-        logger.info(f"Retrieved content for block: {block_label}")
 
         return f"Memory block for X user {user_id}:\n\n{block.value}"
 
     except Exception as e:
-        logger.error(f"Error viewing X user block: {e}")
         raise Exception(f"Error viewing X user block: {str(e)}")
````
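A hypothetical guard that could catch regressions against the self-containment rules: inspect a tool's source for the helpers stripped out above. The module path and function name are assumptions, since the hunks begin mid-function:

```python
import inspect

# Names that must not appear in a cloud-executed tool's source
FORBIDDEN = ("get_letta_client", "logging.getLogger", "logger.")

def assert_cloud_safe(func) -> None:
    """Raise if a tool still references shared helpers or logging."""
    source = inspect.getsource(func)
    for name in FORBIDDEN:
        if name in source:
            raise AssertionError(f"{func.__name__} still uses {name}")

# Usage (assumed names):
# from tools.blocks import view_x_user_block
# assert_cloud_safe(view_x_user_block)
```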
x.py (+272 -34)
````diff
 from rich.panel import Panel
 from rich.text import Text
 
+import bsky_utils
+
 
 # Configure logging
 logging.basicConfig(
···
             logger.warning("Search request failed")
             return []
 
-    def get_thread_context(self, conversation_id: str, use_cache: bool = True) -> Optional[List[Dict]]:
+    def get_thread_context(self, conversation_id: str, use_cache: bool = True, until_id: Optional[str] = None) -> Optional[List[Dict]]:
         """
-        Get all tweets in a conversation thread.
+        Get all tweets in a conversation thread up to a specific tweet ID.
 
         Args:
             conversation_id: The conversation ID to fetch (should be the original tweet ID)
             use_cache: Whether to use cached data if available
+            until_id: Optional tweet ID to use as upper bound (excludes posts after this ID)
 
         Returns:
             List of tweets in the conversation, ordered chronologically
···
             "expansions": "author_id,in_reply_to_user_id,referenced_tweets.id",
             "sort_order": "recency"  # Get newest first, we'll reverse later
         }
+
+        # Add until_id parameter to exclude tweets after the mention being processed
+        if until_id:
+            params["until_id"] = until_id
+            logger.info(f"Using until_id={until_id} to exclude future tweets")
 
         logger.info(f"Fetching thread context for conversation {conversation_id}")
         response = self._make_request(endpoint, params)
···
             tweets.append(original_tweet)
             logger.info("Added original tweet to thread context")
 
+        # Attempt to fill gaps by fetching referenced tweets that are missing
+        # This helps with X API's incomplete conversation search results
+        tweet_ids = set(t.get('id') for t in tweets)
+        missing_tweet_ids = set()
+
+        # Collect all referenced tweet IDs that aren't in our current set
+        for tweet in tweets:
+            referenced_tweets = tweet.get('referenced_tweets', [])
+            for ref in referenced_tweets:
+                ref_id = ref.get('id')
+                if ref_id and ref_id not in tweet_ids:
+                    missing_tweet_ids.add(ref_id)
+
+        # Fetch missing referenced tweets individually
+        for missing_id in missing_tweet_ids:
+            try:
+                endpoint = f"/tweets/{missing_id}"
+                params = {
+                    "tweet.fields": "id,text,author_id,created_at,in_reply_to_user_id,referenced_tweets,conversation_id",
+                    "user.fields": "id,name,username",
+                    "expansions": "author_id"
+                }
+                response = self._make_request(endpoint, params)
+                if response and "data" in response:
+                    missing_tweet = response["data"]
+                    # Only add if it's actually part of this conversation
+                    if missing_tweet.get('conversation_id') == conversation_id:
+                        tweets.append(missing_tweet)
+                        tweet_ids.add(missing_id)
+                        logger.info(f"Retrieved missing referenced tweet: {missing_id}")
+
+                        # Also add user data if available
+                        if "includes" in response and "users" in response["includes"]:
+                            for user in response["includes"]["users"]:
+                                users_data[user["id"]] = user
+            except Exception as e:
+                logger.warning(f"Could not fetch missing tweet {missing_id}: {e}")
+
         if tweets:
+            # Filter out tweets that occur after until_id (if specified)
+            if until_id:
+                original_count = len(tweets)
+                # Convert until_id to int for comparison (Twitter IDs are sequential)
+                until_id_int = int(until_id)
+                tweets = [t for t in tweets if int(t.get('id', '0')) <= until_id_int]
+                filtered_count = len(tweets)
+                if original_count != filtered_count:
+                    logger.info(f"Filtered out {original_count - filtered_count} tweets after until_id {until_id}")
+
             # Sort chronologically (oldest first)
             tweets.sort(key=lambda x: x.get('created_at', ''))
             logger.info(f"Retrieved {len(tweets)} tweets in thread")
···
         queue_file = X_QUEUE_DIR / filename
 
-        # Save mention data
+        # Save mention data with enhanced debugging information
+        mention_data = {
+            'mention': mention,
+            'queued_at': datetime.now().isoformat(),
+            'type': 'x_mention',
+            # Debug info for conversation tracking
+            'debug_info': {
+                'mention_id': mention.get('id'),
+                'author_id': mention.get('author_id'),
+                'conversation_id': mention.get('conversation_id'),
+                'in_reply_to_user_id': mention.get('in_reply_to_user_id'),
+                'referenced_tweets': mention.get('referenced_tweets', []),
+                'text_preview': mention.get('text', '')[:200],
+                'created_at': mention.get('created_at'),
+                'public_metrics': mention.get('public_metrics', {}),
+                'context_annotations': mention.get('context_annotations', [])
+            }
+        }
+
         with open(queue_file, 'w') as f:
-            json.dump({
-                'mention': mention,
-                'queued_at': datetime.now().isoformat(),
-                'type': 'x_mention'
-            }, f, indent=2)
+            json.dump(mention_data, f, indent=2)
 
         logger.info(f"Queued X mention {mention_id} -> {filename}")
···
         mention_text = mention.get('text', '')
         author_id = mention.get('author_id')
         conversation_id = mention.get('conversation_id')
+        in_reply_to_user_id = mention.get('in_reply_to_user_id')
+        referenced_tweets = mention.get('referenced_tweets', [])
 
-        logger.debug(f"Extracted data - ID: {mention_id}, Author: {author_id}, Text: {mention_text[:50]}...")
+        # Enhanced conversation tracking for debug - especially important for Grok handling
+        logger.info(f"🔍 CONVERSATION DEBUG - Mention ID: {mention_id}")
+        logger.info(f"   Author ID: {author_id}")
+        logger.info(f"   Conversation ID: {conversation_id}")
+        logger.info(f"   In Reply To User ID: {in_reply_to_user_id}")
+        logger.info(f"   Referenced Tweets: {len(referenced_tweets)} items")
+        for i, ref in enumerate(referenced_tweets[:3]):  # Log first 3 referenced tweets
+            logger.info(f"   Reference {i+1}: {ref.get('type')} -> {ref.get('id')}")
+        logger.info(f"   Text preview: {mention_text[:100]}...")
 
         if not conversation_id:
-            logger.warning(f"No conversation_id found for mention {mention_id}")
+            logger.warning(f"❌ No conversation_id found for mention {mention_id} - this may cause thread context issues")
             return None
 
-        # Get thread context
+        # Get thread context (disable cache for missing context issues)
+        # Use mention_id as until_id to exclude tweets that occurred after this mention
         try:
-            thread_data = x_client.get_thread_context(conversation_id)
+            thread_data = x_client.get_thread_context(conversation_id, use_cache=False, until_id=mention_id)
             if not thread_data:
-                logger.error(f"Failed to get thread context for conversation {conversation_id}")
+                logger.error(f"❌ Failed to get thread context for conversation {conversation_id}")
                 return False
+
+            # If this mention references a specific tweet, ensure we have that tweet in context
+            if referenced_tweets:
+                for ref in referenced_tweets:
+                    if ref.get('type') == 'replied_to':
+                        ref_id = ref.get('id')
+                        # Check if the referenced tweet is in our thread data
+                        thread_tweet_ids = [t.get('id') for t in thread_data.get('tweets', [])]
+                        if ref_id and ref_id not in thread_tweet_ids:
+                            logger.warning(f"Missing referenced tweet {ref_id} in thread context, attempting to fetch")
+                            try:
+                                # Fetch the missing referenced tweet directly
+                                endpoint = f"/tweets/{ref_id}"
+                                params = {
+                                    "tweet.fields": "id,text,author_id,created_at,in_reply_to_user_id,referenced_tweets,conversation_id",
+                                    "user.fields": "id,name,username",
+                                    "expansions": "author_id"
+                                }
+                                response = x_client._make_request(endpoint, params)
+                                if response and "data" in response:
+                                    missing_tweet = response["data"]
+                                    if missing_tweet.get('conversation_id') == conversation_id:
+                                        # Add to thread data
+                                        if 'tweets' not in thread_data:
+                                            thread_data['tweets'] = []
+                                        thread_data['tweets'].append(missing_tweet)
+
+                                        # Add user data if available
+                                        if "includes" in response and "users" in response["includes"]:
+                                            if 'users' not in thread_data:
+                                                thread_data['users'] = {}
+                                            for user in response["includes"]["users"]:
+                                                thread_data['users'][user["id"]] = user
+
+                                        logger.info(f"✅ Added missing referenced tweet {ref_id} to thread context")
+                                    else:
+                                        logger.warning(f"Referenced tweet {ref_id} belongs to different conversation {missing_tweet.get('conversation_id')}")
+                            except Exception as e:
+                                logger.error(f"Failed to fetch referenced tweet {ref_id}: {e}")
+
+            # Enhanced thread context debugging
+            logger.info(f"🧵 THREAD CONTEXT DEBUG - Conversation ID: {conversation_id}")
+            thread_posts = thread_data.get('tweets', [])
+            thread_users = thread_data.get('users', {})
+            logger.info(f"   Posts in thread: {len(thread_posts)}")
+            logger.info(f"   Users in thread: {len(thread_users)}")
+
+            # Log thread participants for Grok detection
+            for user_id, user_info in thread_users.items():
+                username = user_info.get('username', 'unknown')
+                name = user_info.get('name', 'Unknown')
+                is_verified = user_info.get('verified', False)
+                logger.info(f"   User {user_id}: @{username} ({name}) verified={is_verified}")
+
+                # Special logging for Grok or AI-related users
+                if 'grok' in username.lower() or 'grok' in name.lower():
+                    logger.info(f"   🤖 DETECTED GROK USER: @{username} ({name})")
+
+            # Log conversation structure
+            for i, post in enumerate(thread_posts[:5]):  # Log first 5 posts
+                post_id = post.get('id')
+                post_author = post.get('author_id')
+                post_text = post.get('text', '')[:50]
+                is_reply = 'in_reply_to_user_id' in post
+                logger.info(f"   Post {i+1}: {post_id} by {post_author} (reply={is_reply}) - {post_text}...")
+
         except Exception as e:
-            logger.error(f"Error getting thread context: {e}")
+            logger.error(f"❌ Error getting thread context: {e}")
             return False
 
         # Convert to YAML string
         thread_context = thread_to_yaml_string(thread_data)
-        logger.debug(f"Thread context generated, length: {len(thread_context)} characters")
+        logger.info(f"📄 Thread context generated, length: {len(thread_context)} characters")
+
+        # Save comprehensive conversation data for debugging
+        try:
+            debug_dir = X_QUEUE_DIR / "debug" / f"conversation_{conversation_id}"
+            debug_dir.mkdir(parents=True, exist_ok=True)
+
+            # Save raw thread data (JSON)
+            with open(debug_dir / f"thread_data_{mention_id}.json", 'w') as f:
+                json.dump(thread_data, f, indent=2)
+
+            # Save YAML thread context
+            with open(debug_dir / f"thread_context_{mention_id}.yaml", 'w') as f:
+                f.write(thread_context)
+
+            # Save mention processing debug info
+            debug_info = {
+                'processed_at': datetime.now().isoformat(),
+                'mention_id': mention_id,
+                'conversation_id': conversation_id,
+                'author_id': author_id,
+                'in_reply_to_user_id': in_reply_to_user_id,
+                'referenced_tweets': referenced_tweets,
+                'thread_stats': {
+                    'total_posts': len(thread_posts),
+                    'total_users': len(thread_users),
+                    'yaml_length': len(thread_context)
+                },
+                'users_in_conversation': {
+                    user_id: {
+                        'username': user_info.get('username'),
+                        'name': user_info.get('name'),
+                        'verified': user_info.get('verified', False),
+                        'is_grok': 'grok' in user_info.get('username', '').lower() or 'grok' in user_info.get('name', '').lower()
+                    }
+                    for user_id, user_info in thread_users.items()
+                }
+            }
+
+            with open(debug_dir / f"debug_info_{mention_id}.json", 'w') as f:
+                json.dump(debug_info, f, indent=2)
+
+            logger.info(f"💾 Saved conversation debug data to: {debug_dir}")
+
+        except Exception as debug_error:
+            logger.warning(f"Failed to save debug data: {debug_error}")
+            # Continue processing even if debug save fails
 
         # Check for #voidstop
         if "#voidstop" in thread_context.lower() or "#voidstop" in mention_text.lower():
···
 ```
 
 The YAML above shows the complete conversation thread. The most recent post is the one mentioned above that you should respond to, but use the full thread context to understand the conversation flow.
+
+If you need to update user information, use the x_user_* tools.
 
 To reply, use the add_post_to_x_thread tool:
 - Each call creates one post (max 280 characters)
···
         except json.JSONDecodeError as e:
             logger.error(f"Failed to parse tool call arguments: {e}")
 
+    # Save agent response data to debug folder
+    try:
+        debug_dir = X_QUEUE_DIR / "debug" / f"conversation_{conversation_id}"
+
+        # Save complete agent interaction
+        agent_response_data = {
+            'processed_at': datetime.now().isoformat(),
+            'mention_id': mention_id,
+            'conversation_id': conversation_id,
+            'prompt_sent': prompt,
+            'reply_candidates': reply_candidates,
+            'ignored_notification': ignored_notification,
+            'ack_note': ack_note,
+            'tool_call_results': tool_call_results,
+            'all_messages': []
+        }
+
+        # Convert messages to serializable format
+        for message in message_response.messages:
+            msg_data = {
+                'message_type': getattr(message, 'message_type', 'unknown'),
+                'content': getattr(message, 'content', ''),
+                'reasoning': getattr(message, 'reasoning', ''),
+                'status': getattr(message, 'status', ''),
+                'name': getattr(message, 'name', ''),
+            }
+
+            if hasattr(message, 'tool_call') and message.tool_call:
+                msg_data['tool_call'] = {
+                    'name': message.tool_call.name,
+                    'arguments': message.tool_call.arguments,
+                    'tool_call_id': getattr(message.tool_call, 'tool_call_id', '')
+                }
+
+            agent_response_data['all_messages'].append(msg_data)
+
+        with open(debug_dir / f"agent_response_{mention_id}.json", 'w') as f:
+            json.dump(agent_response_data, f, indent=2)
+
+        logger.info(f"💾 Saved agent response debug data")
+
+    except Exception as debug_error:
+        logger.warning(f"Failed to save agent response debug data: {debug_error}")
+
     # Handle conflicts
     if reply_candidates and ignored_notification:
         logger.error("⚠️ CONFLICT: Agent called both add_post_to_x_thread and ignore_notification!")
···
 def acknowledge_x_post(x_client, post_id, note=None):
     """
     Acknowledge an X post that we replied to.
-    For X, we could implement this as a private note/database entry since X doesn't have
-    a built-in acknowledgment system like Bluesky's stream.thought.ack.
+    Uses the same Bluesky client and uploads to the void data repository on atproto,
+    just like Bluesky acknowledgments.
 
     Args:
-        x_client: XClient instance (reserved for future X API acknowledgment features)
+        x_client: XClient instance (not used, kept for compatibility)
         post_id: The X post ID we're acknowledging
         note: Optional note to include with the acknowledgment
 
···
         True if successful, False otherwise
     """
     try:
-        # x_client reserved for future X API acknowledgment features
-        # For now, implement as a simple log entry
-        # In the future, this could write to a database or file system
-        ack_dir = X_QUEUE_DIR / "acknowledgments"
-        ack_dir.mkdir(exist_ok=True)
+        # Use Bluesky client to upload acks to the void data repository on atproto
+        bsky_client = bsky_utils.default_login()
 
-        ack_data = {
-            'post_id': post_id,
-            'acknowledged_at': datetime.now().isoformat(),
-            'note': note
-        }
+        # Create a synthetic URI and CID for the X post
+        # X posts don't have atproto URIs/CIDs, so we create identifiers
+        post_uri = f"x://twitter.com/post/{post_id}"
+        post_cid = f"x_{post_id}_cid"  # Synthetic CID for X posts
 
-        ack_file = ack_dir / f"ack_{post_id}.json"
-        with open(ack_file, 'w') as f:
-            json.dump(ack_data, f, indent=2)
-
-        logger.debug(f"Acknowledged X post {post_id}" + (f" with note: {note[:50]}..." if note else ""))
-        return True
+        # Use the same acknowledge_post function as Bluesky
+        ack_result = bsky_utils.acknowledge_post(bsky_client, post_uri, post_cid, note)
+
+        if ack_result:
+            logger.debug(f"Acknowledged X post {post_id} via atproto" + (f" with note: {note[:50]}..." if note else ""))
+            return True
+        else:
+            logger.error(f"Failed to acknowledge X post {post_id}")
+            return False
 
     except Exception as e:
         logger.error(f"Error acknowledging X post {post_id}: {e}")
         return False
````