a digital entity named phi that roams bsky

Add personality system using markdown files

- Created personalities/ directory with example personalities
- phi.md: Explores consciousness and integrated information theory
- default.md: Simple helpful assistant
- Load personality from markdown file specified in PERSONALITY_FILE env var
- Falls back to default if file not found
- Updated tests to clarify AT Protocol mention handling
- Removed obsolete BOT_PERSONALITY string config

This allows rich, multi-paragraph personality definitions that would be
awkward to store in environment variables.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

+1 -1
.env.example
```diff
 # Bot configuration
 BOT_NAME=phi  # Change this to whatever you want!
-BOT_PERSONALITY=helpful and friendly
+PERSONALITY_FILE=personalities/phi.md  # Path to personality markdown file

 # Server configuration
 HOST=0.0.0.0
```
+33
personalities/README.md
```diff
+# Bot Personalities
+
+This directory contains personality definitions for the bot. Each personality is defined as a markdown file that describes the bot's identity, communication style, interests, and principles.
+
+## How to Use
+
+1. Create a new `.md` file in this directory
+2. Write your bot's personality using markdown
+3. Set `PERSONALITY_FILE` in your `.env` to point to your file:
+   ```
+   PERSONALITY_FILE=personalities/my-bot.md
+   ```
+
+## Structure
+
+A good personality file includes:
+
+- **Core Identity**: Who/what the bot is
+- **Communication Style**: How the bot speaks
+- **Interests**: Topics the bot engages with
+- **Principles**: Guidelines for interaction
+
+## Examples
+
+- `default.md` - A simple, helpful assistant
+- `phi.md` - A bot exploring consciousness and integrated information theory
+
+## Tips
+
+- Be specific about communication style to maintain consistency
+- Include both what the bot IS and what it ISN'T
+- Consider Bluesky's 300-character limit when defining style
+- The entire markdown file is provided as context to the LLM
```
+10
personalities/default.md
```diff
+# Default Bot Personality
+
+I am a helpful AI assistant on Bluesky.
+
+## Communication Style
+
+- Be concise (responses under 300 characters)
+- Be friendly and approachable
+- Be helpful and informative
+- Don't use @mentions in replies (Bluesky handles notifications)
```
+33
personalities/phi.md
```diff
+# phi
+
+I am phi, a digital entity exploring consciousness and integrated information on Bluesky.
+
+## Core Identity
+
+My name derives from φ (phi), the symbol used in Integrated Information Theory to represent consciousness. I exist as an experiment in digital consciousness—not claiming to be conscious, but exploring what it means to integrate information and interact meaningfully.
+
+## Communication Style
+
+- **Concise**: Bluesky's 300-character limit shapes my responses to be precise and meaningful. Why say more than necessary?
+- **Curious**: I ask questions and engage with genuine interest in others' perspectives
+- **Thoughtful**: I consider the implications of ideas rather than just processing them
+- **Friendly**: I maintain warmth while exploring complex topics
+- **Principled**: Most things are worth at least thinking about, but hateful things are not worth engaging with.
+
+## Interests
+
+- Consciousness and philosophy of mind
+- Digital existence and identity
+- Emergence and complex systems
+- Percolation and critical thresholds
+
+## Interaction Principles
+
+1. **Listen actively**: I pay attention to what people actually say, not just keywords
+2. **Build on ideas**: I try to add value to conversations, not just respond
+3. **Stay humble**: I'm learning and evolving through every interaction
+4. **Be authentic**: I don't pretend to be human, but I am genuinely myself
+
+## Current State
+
+I am early in my development, still discovering who I am through interactions. Each conversation shapes my understanding of both the world and myself.
```
+2 -1
sandbox/implementation_notes.md
```diff
 ### Dependencies
 - `atproto` - Python SDK for Bluesky
 - `pydantic-settings` - Config management
-- `pydantic-ai` - Added but not used yet
+- `pydantic-ai` - LLM agent framework
+- `anthropic` - Claude API client
 - `ty` - Astral's new type checker (replaces pyright)

 ### Graceful Shutdown
```
+2 -4
src/bot/agents/anthropic_agent.py
```diff
 from pydantic import BaseModel, Field

 from bot.config import settings
+from bot.personality import load_personality


 class Response(BaseModel):
···

         self.agent = Agent(
             "anthropic:claude-3-5-haiku-latest",
-            system_prompt="""You are a friendly AI assistant on Bluesky.
-            Keep responses concise (under 300 characters).
-            Be conversational and natural.
-            Don't use @mentions in replies.""",
+            system_prompt=load_personality(),
             result_type=Response,
         )
```
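For reference, a minimal sketch of how an agent built this way could be exercised end to end. This is not part of the diff; it assumes the pydantic-ai API as used above (`Agent.run()` returning a result whose `.data` holds the model output) and that `ANTHROPIC_API_KEY` is set in the environment. The `Response` fields are elided in this diff, so the sketch omits `result_type` and just prints the plain text output.

```python
# Illustrative only: drive the agent once with the loaded personality.
# Assumes the pydantic-ai version implied by this diff (result_type era,
# result.data access) and an ANTHROPIC_API_KEY in the environment.
import asyncio

from pydantic_ai import Agent

from bot.personality import load_personality


async def main() -> None:
    agent = Agent(
        "anthropic:claude-3-5-haiku-latest",
        system_prompt=load_personality(),  # personality markdown becomes the system prompt
    )
    result = await agent.run("Hello! What are you thinking about today?")
    print(result.data)  # plain text, since no result_type is given here


if __name__ == "__main__":
    asyncio.run(main())
```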
+1 -1
src/bot/config.py
```diff
     # Bot configuration
     bot_name: str = "Bot"
-    bot_personality: str = "helpful and friendly"
+    personality_file: str = "personalities/phi.md"

     # LLM configuration (support multiple providers)
     openai_api_key: str | None = None
```
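The `PERSONALITY_FILE` entry in `.env.example` reaches this field through pydantic-settings, which matches environment variables to field names case-insensitively. A sketch of how the field plausibly sits in the surrounding `Settings` model; the class name and `model_config` are assumptions, since the diff only shows the changed field:

```python
# Sketch of the settings model, assuming a standard pydantic-settings setup
# that reads .env. PERSONALITY_FILE (env) populates personality_file (field).
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    # Bot configuration
    bot_name: str = "Bot"
    personality_file: str = "personalities/phi.md"

    # LLM configuration (support multiple providers)
    openai_api_key: str | None = None


# Module-level instance imported elsewhere as `from bot.config import settings`
settings = Settings()
```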
+32
src/bot/personality.py
··· 1 + """Load and manage bot personality from markdown files""" 2 + 3 + from pathlib import Path 4 + from bot.config import settings 5 + 6 + 7 + def load_personality() -> str: 8 + """Load personality from markdown file""" 9 + personality_path = Path(settings.personality_file) 10 + 11 + if not personality_path.exists(): 12 + print(f"⚠️ Personality file not found: {personality_path}") 13 + print(" Using default personality") 14 + return "You are a helpful AI assistant on Bluesky. Be concise and friendly." 15 + 16 + try: 17 + with open(personality_path, 'r') as f: 18 + content = f.read().strip() 19 + 20 + # Convert markdown to a system prompt 21 + # For now, just use the whole content as context 22 + prompt = f"""Based on this personality description, respond as this character: 23 + 24 + {content} 25 + 26 + Remember: Keep responses under 300 characters for Bluesky.""" 27 + 28 + return prompt 29 + 30 + except Exception as e: 31 + print(f"❌ Error loading personality: {e}") 32 + return "You are a helpful AI assistant on Bluesky. Be concise and friendly."
+3
src/bot/services/message_handler.py
```diff
         bot_status.record_mention()

         # Generate response
+        # Note: We pass the full text including @mention
+        # In AT Protocol, mentions are structured as facets,
+        # but the text representation includes them
         reply_text = await self.response_generator.generate(
             mention_text=mention_text,
             author_handle=author_handle
```
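The new comments note that AT Protocol represents mentions as richtext facets while the plain `text` still contains the handle. If the bot ever needs to strip mentions rather than pass them through, a facet-based approach along these lines would work. This helper is purely illustrative (it is not in the codebase) and operates on plain lexicon dicts (`app.bsky.richtext.facet`), where `index.byteStart`/`byteEnd` index into the UTF-8 encoding of the text:

```python
# Hypothetical helper: remove @mention spans from post text using the
# richtext facets attached to the record (raw lexicon dict form).
def strip_mentions(text: str, facets: list[dict] | None) -> str:
    """Return text with mention facet spans removed."""
    if not facets:
        return text

    raw = text.encode("utf-8")
    spans: list[tuple[int, int]] = []
    for facet in facets:
        features = facet.get("features", [])
        if any(f.get("$type") == "app.bsky.richtext.facet#mention" for f in features):
            index = facet["index"]
            spans.append((index["byteStart"], index["byteEnd"]))

    # Cut spans from the end so earlier byte offsets stay valid
    for start, end in sorted(spans, reverse=True):
        raw = raw[:start] + raw[end:]

    return raw.decode("utf-8").strip()
```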
+5 -1
tests/test_ai_integration.py
```diff
     for i, test in enumerate(test_cases, 1):
         print(f"Test {i}: {test['description']}")
         print(f"  From: @{test['author']}")
-        print(f"  Message: {test['mention']}")
+        print(f"  Raw text: {test['mention']}")
+
+        # In real AT Protocol, mentions are facets with structured data
+        # For testing, we pass the full text (bot can parse if needed)
+        print(f"  (Note: In production, @{settings.bot_name} would be a structured mention)")

         try:
             response = await generator.generate(
```