refactor(streaming): complete rewrite of message streaming architecture

Completely redesigned streaming message accumulation to fix persistent
issues with messages disappearing, flickering, and being replaced
mid-stream.

Key changes:
- Simplified streaming state to single accumulating message + completed array
- New message boundary detection: finalize on reasoning_message ID change
- Display both completed and current messages simultaneously (no more "one slot")
- Convert streaming messages to permanent format only after stream completes
- Remove server message fetching - build from stream data only
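The last point can be sketched as follows: when the stream ends, the accumulated streaming messages are converted in order to a permanent message shape, with no server round-trip. This is an illustrative sketch only — the field names below are assumptions, not the app's actual `LettaMessage` type:

```typescript
interface StreamingMessage {
  id: string;
  reasoning: string;
  content: string;
  type: 'tool_call' | 'assistant' | null;
  timestamp: string;
}

interface PermanentMessage {
  id: string;
  role: 'assistant';
  message_type: 'tool_call_message' | 'assistant_message';
  content: string;
  reasoning: string;
  created_at: string;
}

// Convert everything accumulated during the stream (completed messages plus
// the one still in flight, if any) to the permanent shape, preserving order.
function toPermanentMessages(
  completed: StreamingMessage[],
  current: StreamingMessage | null
): PermanentMessage[] {
  const all = current ? [...completed, current] : completed;
  return all.map((msg) => ({
    id: msg.id,
    role: 'assistant' as const,
    message_type:
      msg.type === 'tool_call'
        ? ('tool_call_message' as const)
        : ('assistant_message' as const),
    content: msg.content,
    reasoning: msg.reasoning,
    created_at: msg.timestamp,
  }));
}
```

Because the stream data is already complete, nothing is lost by skipping the post-stream server fetch.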

New architecture:
- currentStreamingMessage: single message being accumulated
- completedStreamingMessages[]: finished messages (stream still active)
- Simple API: accumulateReasoning, accumulateToolCall, accumulateAssistant
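A minimal, framework-free sketch of that accumulator API (the real store is Zustand-based; the method names follow the bullets above, everything else here is assumed for illustration):

```typescript
interface StreamingMessage {
  id: string;
  reasoning: string;
  content: string;
  type: 'tool_call' | 'assistant' | null;
  toolCallName?: string;
}

class StreamAccumulator {
  current: StreamingMessage | null = null;   // single message being accumulated
  completed: StreamingMessage[] = [];        // finished, stream still active

  accumulateReasoning(id: string, delta: string): void {
    if (!this.current || this.current.id !== id) {
      // A reasoning chunk with a new ID starts a new message.
      this.current = { id, reasoning: delta, content: '', type: null };
      return;
    }
    this.current.reasoning += delta; // chunks are deltas, so concatenate
  }

  accumulateToolCall(id: string, name: string, argsDelta: string): void {
    if (!this.current || this.current.id !== id) return; // ignore unknown IDs
    this.current.type = 'tool_call';
    this.current.toolCallName = name;
    this.current.content += argsDelta; // arguments stream as partial JSON
  }

  accumulateAssistant(id: string, delta: string): void {
    if (!this.current || this.current.id !== id) return;
    this.current.type = 'assistant';
    this.current.content += delta;
  }

  finalizeCurrent(): void {
    // Move the current message to the completed list (stream still active).
    if (!this.current) return;
    this.completed.push(this.current);
    this.current = null;
  }
}
```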

This follows the Letta streaming protocol correctly:
- Chunks are DELTAS (incremental, not full text)
- Reasoning + content share same message ID
- New reasoning with different ID = previous message complete
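The boundary rule can be illustrated with a small helper over simplified chunks (real Letta chunks carry more fields; this shape is assumed purely for illustration):

```typescript
type ChunkType =
  | 'reasoning_message'
  | 'tool_call_message'
  | 'assistant_message'
  | 'tool_return_message';

interface Chunk {
  id: string;
  message_type: ChunkType;
}

// Returns the indices at which a new message group begins: only a
// reasoning_message whose ID differs from the group being accumulated
// starts a new group (tool returns have new IDs but do NOT start one).
function findBoundaries(chunks: Chunk[]): number[] {
  const starts: number[] = [];
  let currentId: string | null = null;
  chunks.forEach((chunk, i) => {
    if (chunk.message_type === 'reasoning_message' && chunk.id !== currentId) {
      starts.push(i);
      currentId = chunk.id;
    }
  });
  return starts;
}
```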

Files changed:
- src/stores/chatStore.ts: New StreamingMessage interface, simple accumulation
- src/hooks/useMessageStream.ts: Dead simple chunk handler with ID-based finalization
- src/hooks/useMessageGroups.ts: Display both completed and current streams
- src/screens/ChatScreen.tsx: Auto-expand reasoning blocks
- STREAMING_ANALYSIS.md: Complete documentation of streaming behavior

This resolves days of streaming issues with a simpler, more correct implementation.

+132
STREAMING_ANALYSIS.md
# Letta Streaming Analysis & Implementation Guide

## Real Streaming Behavior (From Testing)

Based on actual testing with the message "create a memory block called cameron":

### Message Flow Pattern

```
[0-21]  reasoning_message   (ID: message-f7b4fa60-0195-4e50-98c9-dfb6a03b013f)
[22-36] tool_call_message   (ID: message-f7b4fa60-0195-4e50-98c9-dfb6a03b013f) ← SAME ID
[37]    tool_return_message (ID: message-e906b6cc-33a1-440c-8ff6-15b06ec287c8) ← NEW ID
[38-53] reasoning_message   (ID: message-cc7aa672-7859-4e22-9ccd-2efbde068e6c) ← NEW ID
[54-88] assistant_message   (ID: message-cc7aa672-7859-4e22-9ccd-2efbde068e6c) ← SAME ID
[89]    stop_reason
[90]    usage_statistics
```

### Key Insights

1. **Chunks are DELTAS** - Each chunk contains only the new text, not the full accumulated text
2. **Message IDs group related chunks** - All chunks with the same ID belong together
3. **Reasoning + Content share IDs** - Reasoning and its paired content (tool_call OR assistant) have the same ID
4. **Tool returns have separate IDs** - Tool return messages have different IDs from their calls
5. **New reasoning = New message group** - When a new reasoning_message with a different ID arrives, the previous group is complete

## Message Grouping Rules

### Rule 1: Message Boundary Detection
A new message group starts when we receive a `reasoning_message` chunk with a **different ID** than the current one.

### Rule 2: Message Structure
Each message group contains:
- `reasoning` (optional) - from `reasoning_message` chunks
- `content` - from either `tool_call_message` OR `assistant_message` chunks
- `id` - the shared message ID from Letta

### Rule 3: Tool Returns
Tool return messages (`tool_return_message`) have separate IDs and should be stored separately, then paired with their tool call using `step_id`.

## Correct Implementation Strategy

### Data Structure

```typescript
interface AccumulatingMessage {
  id: string;
  reasoning: string;                      // Accumulate reasoning chunks here
  content: string;                        // Accumulate tool_call OR assistant chunks here
  type: 'tool_call' | 'assistant' | null; // What kind of content
  toolCallName?: string;                  // For tool calls
}
```

### Algorithm

```
currentMessage = null
completedMessages = []

ON CHUNK RECEIVED:
  chunkId = chunk.id

  // NEW MESSAGE GROUP DETECTED
  if chunk.message_type === 'reasoning_message' AND currentMessage AND currentMessage.id !== chunkId:
    // Previous message is complete
    completedMessages.push(currentMessage)
    currentMessage = { id: chunkId, reasoning: '', content: '', type: null }

  // INITIALIZE IF NEEDED
  if NOT currentMessage:
    currentMessage = { id: chunkId, reasoning: '', content: '', type: null }

  // ACCUMULATE BASED ON TYPE
  if chunk.message_type === 'reasoning_message':
    currentMessage.reasoning += chunk.reasoning

  else if chunk.message_type === 'tool_call_message':
    currentMessage.type = 'tool_call'
    currentMessage.toolCallName = chunk.tool_call.name
    currentMessage.content += chunk.tool_call.arguments  // Delta!

  else if chunk.message_type === 'assistant_message':
    currentMessage.type = 'assistant'
    currentMessage.content += extractText(chunk.content)  // Delta!

  else if chunk.message_type === 'tool_return_message':
    // Tool returns are separate - just store them
    storeToolReturn(chunk)

ON STREAM COMPLETE:
  // Finalize the last message
  if currentMessage:
    completedMessages.push(currentMessage)

  // Convert to display format and add to messages array
  convertAndStore(completedMessages)
```

## Problems We've Had

### Problem 1: "One Slot" - Messages Replacing Each Other
**Symptom**: Only one message visible during streaming, messages replace each other
**Cause**: Only showing `currentStream`, not showing `completedStreamPhases` alongside it
**Fix**: Display BOTH completed messages AND current accumulating message

### Problem 2: Messages Disappearing After Completion
**Symptom**: Messages visible during stream, gone after stream ends
**Cause**: Clearing state before converting to permanent messages
**Fix**: Convert to messages FIRST, THEN clear streaming state

### Problem 3: Finalization Timing
**Symptom**: Messages not moving to completed at the right time
**Cause**: Finalizing on message TYPE change instead of ID change
**Fix**: Only finalize when we see a new reasoning_message with a different ID

### Problem 4: Fetching from Server
**Symptom**: Flickering, delay, messages appearing twice
**Cause**: Trying to fetch "official" messages from server after streaming
**Fix**: Don't fetch from server - we have everything from the stream

## Current Issues

1. Messages still disappearing during streaming (one slot problem persists)
2. Messages not reappearing after completion
3. Flickering between phases

## Required Fixes

1. **Simplify state structure** - One accumulating message, one array of completed
2. **Fix display logic** - Show BOTH accumulated + completed simultaneously
3. **Fix finalization** - Trigger only on ID change
4. **Fix persistence** - Convert to messages immediately, don't clear state prematurely
+60 -62
src/hooks/useMessageGroups.ts
```diff
···
 }
 
 /**
- * Streaming state interface
+ * Simple streaming message
  */
-export interface StreamingState {
+interface StreamingMessage {
+  id: string;
   reasoning: string;
-  assistantMessage: string;
-  toolCalls: Array<{
-    id: string;
-    name: string;
-    args: string;
-  }>;
+  content: string;
+  type: 'tool_call' | 'assistant' | null;
+  toolCallName?: string;
+  timestamp: string;
 }
 
 interface UseMessageGroupsParams {
   messages: LettaMessage[];
   isStreaming: boolean;
-  streamingState?: StreamingState;
+  currentStreamingMessage: StreamingMessage | null;
+  completedStreamingMessages: StreamingMessage[];
 }
 
 /**
···
 export function useMessageGroups({
   messages,
   isStreaming,
-  streamingState,
+  currentStreamingMessage,
+  completedStreamingMessages,
 }: UseMessageGroupsParams): MessageGroup[] {
   return useMemo(() => {
     // Step 1: Filter out system messages and login/heartbeat
···
       return timeA - timeB;
     });
 
-    // Step 6: Append streaming groups if active
-    if (isStreaming && streamingState) {
-      const streamingGroups = createStreamingGroups(streamingState);
-      filteredGroups.push(...streamingGroups);
+    // Step 6: Add completed streaming messages (finished but stream still active)
+    console.log('📊 Completed streaming messages:', completedStreamingMessages.length);
+    completedStreamingMessages.forEach((msg, index) => {
+      const group: MessageGroup = {
+        id: msg.id,
+        groupKey: `streaming-completed-${msg.id}`,
+        type: msg.type === 'tool_call' ? 'tool_call' : 'assistant',
+        content: msg.content,
+        reasoning: msg.reasoning || undefined,
+        created_at: msg.timestamp,
+        role: 'assistant',
+        isStreaming: false, // It's done, just not persisted yet
+      };
+
+      if (msg.type === 'tool_call' && msg.toolCallName) {
+        group.toolCall = {
+          name: msg.toolCallName,
+          args: msg.content,
+        };
+      }
+
+      filteredGroups.push(group);
+      console.log(`  ✅ [${index}] ${msg.type}:`, msg.content.substring(0, 40));
+    });
+
+    // Step 7: Add current accumulating message (if any)
+    if (currentStreamingMessage) {
+      const group: MessageGroup = {
+        id: currentStreamingMessage.id,
+        groupKey: `streaming-current-${currentStreamingMessage.id}`,
+        type: currentStreamingMessage.type === 'tool_call' ? 'tool_call' : 'assistant',
+        content: currentStreamingMessage.content,
+        reasoning: currentStreamingMessage.reasoning || undefined,
+        created_at: currentStreamingMessage.timestamp,
+        role: 'assistant',
+        isStreaming: true, // Still accumulating
+      };
+
+      if (currentStreamingMessage.type === 'tool_call' && currentStreamingMessage.toolCallName) {
+        group.toolCall = {
+          name: currentStreamingMessage.toolCallName,
+          args: currentStreamingMessage.content,
+        };
+      }
+
+      filteredGroups.push(group);
+      console.log('  🔄 Current streaming:', currentStreamingMessage.type, currentStreamingMessage.content.substring(0, 40));
     }
 
+    console.log('📊 FINAL GROUP COUNT:', filteredGroups.length, 'groups');
     return filteredGroups;
-  }, [messages, isStreaming, streamingState]);
+  }, [messages, isStreaming, currentStreamingMessage, completedStreamingMessages]);
 }
···
 
   // Unknown message type - skip
   return null;
-}
-
-/**
- * Create streaming groups from current stream state
- * Returns an array because multiple tool calls can be streaming simultaneously
- */
-function createStreamingGroups(state: StreamingState): MessageGroup[] {
-  const now = new Date().toISOString();
-  const groups: MessageGroup[] = [];
-
-  // If we have tool calls, create a group for EACH one
-  if (state.toolCalls.length > 0) {
-    state.toolCalls.forEach((toolCall, index) => {
-      groups.push({
-        id: 'streaming',
-        groupKey: `streaming-tool_call-${toolCall.id || index}`,
-        type: 'tool_call',
-        content: toolCall.args,
-        reasoning: index === 0 ? state.reasoning || undefined : undefined, // Only first gets reasoning
-        toolCall: {
-          name: toolCall.name,
-          args: toolCall.args,
-        },
-        toolReturn: undefined, // No return yet during streaming
-        created_at: now,
-        role: 'assistant',
-        isStreaming: true,
-      });
-    });
-    return groups;
-  }
-
-  // Assistant message streaming
-  if (state.assistantMessage || state.reasoning) {
-    groups.push({
-      id: 'streaming',
-      groupKey: 'streaming-assistant',
-      type: 'assistant',
-      content: state.assistantMessage,
-      reasoning: state.reasoning || undefined,
-      created_at: now,
-      role: 'assistant',
-      isStreaming: true,
-    });
-  }
-
-  return groups;
 }
 
 /**
```
+11
src/hooks/useMessageInteractions.ts
```diff
···
     }
   }, []);
 
+  // Auto-expand reasoning for a message (doesn't toggle, just adds)
+  const expandReasoning = useCallback((messageId: string) => {
+    setExpandedReasoning((prev) => {
+      if (prev.has(messageId)) return prev; // Already expanded
+      const next = new Set(prev);
+      next.add(messageId);
+      return next;
+    });
+  }, []);
+
   return {
     // State
     expandedReasoning,
···
     toggleCompaction,
     toggleToolReturn,
     copyToClipboard,
+    expandReasoning,
   };
 }
```
+86 -76
src/hooks/useMessageStream.ts
```diff
···
-import { useCallback } from 'react';
+import { useCallback, useRef } from 'react';
 import { useChatStore } from '../stores/chatStore';
 import { useAgentStore } from '../stores/agentStore';
 import lettaApi from '../api/lettaApi';
···
   const chatStore = useChatStore();
   const coAgent = useAgentStore((state) => state.coAgent);
 
-  // Handle individual streaming chunks
+  // Track last message ID to detect when a new message starts
+  const lastMessageIdRef = useRef<string | null>(null);
+
+  // Handle individual streaming chunks - ULTRA SIMPLE
   const handleStreamingChunk = useCallback((chunk: StreamingChunk) => {
-    console.log('Streaming chunk:', chunk.message_type, 'content:', chunk.content);
+    const chunkType = chunk.message_type;
+    const chunkId = (chunk as any).id;
 
-    // Handle error chunks
-    if ((chunk as any).error) {
-      console.error('Error chunk received:', (chunk as any).error);
-      chatStore.stopStreaming();
-      chatStore.setSendingMessage(false);
-      chatStore.clearStream();
+    // Skip non-content chunks
+    if (chunkType === 'stop_reason' || chunkType === 'usage_statistics') {
       return;
     }
 
-    // Handle stop_reason chunks
-    if ((chunk as any).message_type === 'stop_reason') {
-      console.log('Stop reason received:', (chunk as any).stopReason || (chunk as any).stop_reason);
+    // Handle errors
+    if ((chunk as any).error) {
+      console.error('❌ Stream error:', (chunk as any).error);
       return;
     }
 
-    // Process reasoning messages
-    if (chunk.message_type === 'reasoning_message' && chunk.reasoning) {
-      chatStore.updateStreamReasoning(chunk.reasoning);
+    console.log(`📦 [${chunkType}] ID: ${chunkId?.substring(0, 8)}...`);
+
+    // DETECT NEW MESSAGE: If we see a new reasoning with different ID, finalize current
+    if (chunkType === 'reasoning_message' && chunkId) {
+      if (lastMessageIdRef.current && chunkId !== lastMessageIdRef.current) {
+        console.log('🔄 NEW MESSAGE DETECTED - finalizing previous');
+        chatStore.finalizeCurrentMessage();
+      }
+      lastMessageIdRef.current = chunkId;
     }
 
-    // Process tool call messages
-    else if ((chunk.message_type === 'tool_call_message' || chunk.message_type === 'tool_call') && chunk.tool_call) {
-      // CRITICAL FIX: When we get the first tool call, clear reasoning from the previous assistant message
-      // The tool call will have its own reasoning chunks coming
-      const currentToolCallCount = chatStore.currentStream.toolCalls.length;
-      if (currentToolCallCount === 0) {
-        // This is the first tool call - clear accumulated reasoning from assistant phase
-        chatStore.clearStream();
+    // ACCUMULATE BASED ON TYPE
+    if (chunkType === 'reasoning_message' && chunk.reasoning && chunkId) {
+      chatStore.accumulateReasoning(chunkId, chunk.reasoning);
+    }
+    else if (chunkType === 'tool_call_message' && chunkId) {
+      const toolCall = (chunk as any).toolCall || (chunk as any).tool_call;
+      if (toolCall) {
+        const toolName = toolCall.name || toolCall.tool_name || 'unknown';
+        const args = toolCall.arguments || '';
+        chatStore.accumulateToolCall(chunkId, toolName, args);
       }
-
-      const callObj = chunk.tool_call.function || chunk.tool_call;
-      const toolName = callObj?.name || callObj?.tool_name || 'tool';
-      const args = callObj?.arguments || callObj?.args || {};
-      const toolCallId = chunk.id || `tool_${toolName}_${Date.now()}`;
-
-      const formatArgsPython = (obj: any): string => {
-        if (!obj || typeof obj !== 'object') return '';
-        return Object.entries(obj)
-          .map(([k, v]) => `${k}=${typeof v === 'string' ? `"${v}"` : JSON.stringify(v)}`)
-          .join(', ');
-      };
-
-      const toolLine = `${toolName}(${formatArgsPython(args)})`;
-      chatStore.addStreamToolCall({ id: toolCallId, name: toolName, args: toolLine });
     }
-
-    // Process assistant messages
-    else if (chunk.message_type === 'assistant_message' && chunk.content) {
+    else if (chunkType === 'assistant_message' && chunkId) {
       let contentText = '';
       const content = chunk.content as any;
 
       if (typeof content === 'string') {
         contentText = content;
-      } else if (typeof content === 'object' && content !== null) {
-        if (Array.isArray(content)) {
-          contentText = content
-            .filter((item: any) => item.type === 'text')
-            .map((item: any) => item.text || '')
-            .join('');
-        } else if (content.text) {
-          contentText = content.text;
-        }
+      } else if (Array.isArray(content)) {
+        contentText = content
+          .filter((item: any) => item.type === 'text')
+          .map((item: any) => item.text || '')
+          .join('');
+      } else if (content?.text) {
+        contentText = content.text;
       }
 
       if (contentText) {
-        chatStore.updateStreamAssistant(contentText);
+        chatStore.accumulateAssistant(chunkId, contentText);
       }
     }
+    // tool_return_message - just log, we'll handle pairing later
+    else if (chunkType === 'tool_return_message') {
+      console.log('📨 Tool return received');
+    }
   }, [chatStore]);
 
   // Send a message with streaming
···
 
     try {
       chatStore.startStreaming();
+      lastMessageIdRef.current = null; // Reset for new stream
 
       // Build message content
       let messageContent: any;
···
           handleStreamingChunk(chunk);
         },
         async (response) => {
-          console.log('Stream complete - refreshing messages from server');
+          console.log('🎬 STREAM COMPLETE');
+
+          // Finalize the last message
+          chatStore.finalizeCurrentMessage();
+
+          // Get all completed messages
+          const { currentStreamingMessage, completedStreamingMessages } = useChatStore.getState();
 
-          // Wait for server to finalize, then refresh messages
-          setTimeout(async () => {
-            try {
-              const currentCount = chatStore.messages.filter((msg) => !msg.id.startsWith('temp-')).length;
-              const fetchLimit = Math.max(currentCount + 10, 100);
+          const allStreamedMessages = [...completedStreamingMessages];
+          if (currentStreamingMessage) {
+            allStreamedMessages.push(currentStreamingMessage);
+          }
+
+          console.log('📨 Converting', allStreamedMessages.length, 'streamed messages to permanent messages');
+
+          // Convert to LettaMessage format and add to messages
+          const permanentMessages: LettaMessage[] = allStreamedMessages.map((msg, idx) => ({
+            id: msg.id,
+            role: 'assistant',
+            message_type: msg.type === 'tool_call' ? 'tool_call_message' : 'assistant_message',
+            content: msg.content,
+            reasoning: msg.reasoning,
+            ...(msg.type === 'tool_call' && msg.toolCallName ? {
+              tool_call: {
+                name: msg.toolCallName,
+                arguments: msg.content,
+              }
+            } : {}),
+            created_at: msg.timestamp,
+          } as any));
 
-              const recentMessages = await lettaApi.listMessages(coAgent.id, {
-                limit: fetchLimit,
-                use_assistant_message: true,
-              });
+          // Add to messages array
+          if (permanentMessages.length > 0) {
+            chatStore.addMessages(permanentMessages);
+          }
 
-              console.log('Received', recentMessages.length, 'messages from server after stream');
+          // Clear streaming state
+          chatStore.clearAllStreamingState();
+          chatStore.stopStreaming();
+          chatStore.setSendingMessage(false);
+          chatStore.clearImages();
 
-              // Replace all messages with server version
-              chatStore.setMessages(recentMessages);
-            } catch (error) {
-              console.error('Failed to refresh messages after stream:', error);
-            } finally {
-              chatStore.stopStreaming();
-              chatStore.setSendingMessage(false);
-              chatStore.clearStream();
-              chatStore.clearImages();
-            }
-          }, 500);
+          console.log('✅ Stream finished and converted to messages');
         },
         (error) => {
           console.error('Stream error:', error);
+          chatStore.clearAllStreamingState();
           chatStore.stopStreaming();
           chatStore.setSendingMessage(false);
-          chatStore.clearStream();
         }
       );
     } catch (error) {
       console.error('Failed to send message:', error);
+      chatStore.clearAllStreamingState();
       chatStore.stopStreaming();
       chatStore.setSendingMessage(false);
-      chatStore.clearStream();
       throw error;
     }
   },
···
   return {
     isStreaming: chatStore.isStreaming,
     isSendingMessage: chatStore.isSendingMessage,
-    currentStream: chatStore.currentStream,
     sendMessage,
   };
 }
```
+69 -42
src/screens/ChatScreen.tsx
```diff
···
-import React, { useRef, useState } from 'react';
+import React, { useRef, useState, useEffect } from 'react';
 import {
   View,
   StyleSheet,
···
     toggleCompaction,
     toggleToolReturn,
     copyToClipboard,
+    expandReasoning,
   } = useMessageInteractions();
 
   // Scroll management
···
   const addImage = useChatStore((state) => state.addImage);
   const removeImage = useChatStore((state) => state.removeImage);
   const lastMessageNeedsSpace = useChatStore((state) => state.lastMessageNeedsSpace);
-  const currentStream = useChatStore((state) => state.currentStream);
+  const currentStreamingMessage = useChatStore((state) => state.currentStreamingMessage);
+  const completedStreamingMessages = useChatStore((state) => state.completedStreamingMessages);
 
   /**
    * Transform raw Letta messages into unified MessageGroup objects.
    *
    * This groups messages by ID (reasoning + assistant → single group),
-   * pairs tool calls with returns, and appends a temporary streaming group
-   * while the agent is responding.
+   * pairs tool calls with returns, and appends streaming messages.
    *
    * Each MessageGroup has a unique groupKey for FlatList rendering.
    */
   const messageGroups = useMessageGroups({
     messages,
     isStreaming,
-    streamingState: currentStream,
+    currentStreamingMessage,
+    completedStreamingMessages,
   });
 
   // Animation refs and layout
   const spacerHeightAnim = useRef(new Animated.Value(0)).current;
   const [containerHeight, setContainerHeight] = React.useState(0);
 
+  // Auto-expand reasoning blocks when message groups change
+  useEffect(() => {
+    messageGroups.forEach((group) => {
+      // Auto-expand any message with reasoning
+      if (group.reasoning && group.reasoning.trim()) {
+        expandReasoning(group.id);
+      }
+    });
+  }, [messageGroups, expandReasoning]);
+
   // Handle send message - no auto-scroll
   const handleSend = async (text: string) => {
     await sendMessage(text, selectedImages);
···
     );
   };
 
+  const isEmpty = messageGroups.length === 0 && !isLoadingMessages;
+
   return (
     <KeyboardAvoidingView
       style={[styles.container, { backgroundColor: theme.colors.background.primary }]}
···
       keyboardVerticalOffset={Platform.OS === 'ios' ? 90 : 0}
       onLayout={(e) => setContainerHeight(e.nativeEvent.layout.height)}
     >
-      {/* Messages List */}
-      <FlatList
-        ref={scrollViewRef}
-        data={messageGroups}
-        renderItem={renderMessageGroup}
-        keyExtractor={(group) => group.groupKey}
-        contentContainerStyle={[
-          styles.messagesList,
-          { paddingBottom: insets.bottom + 80 },
-        ]}
-        onContentSizeChange={onContentSizeChange}
-        onScroll={onScroll}
-        scrollEventThrottle={16}
-        onEndReached={loadMoreMessages}
-        onEndReachedThreshold={0.5}
-        initialNumToRender={100}
-        maxToRenderPerBatch={20}
-        windowSize={21}
-        removeClippedSubviews={Platform.OS === 'android'}
-      />
+      {!isEmpty && (
+        <>
+          {/* Messages List */}
+          <FlatList
+            ref={scrollViewRef}
+            data={messageGroups}
+            renderItem={renderMessageGroup}
+            keyExtractor={(group) => group.groupKey}
+            contentContainerStyle={[
+              styles.messagesList,
+              { paddingBottom: insets.bottom + 80 },
+            ]}
+            onContentSizeChange={onContentSizeChange}
+            onScroll={onScroll}
+            scrollEventThrottle={16}
+            onEndReached={loadMoreMessages}
+            onEndReachedThreshold={0.5}
+            initialNumToRender={100}
+            maxToRenderPerBatch={20}
+            windowSize={21}
+            removeClippedSubviews={Platform.OS === 'android'}
+          />
 
-      {/* Spacer for animation */}
-      {lastMessageNeedsSpace && <Animated.View style={{ height: spacerHeightAnim }} />}
+          {/* Spacer for animation */}
+          {lastMessageNeedsSpace && <Animated.View style={{ height: spacerHeightAnim }} />}
+        </>
+      )}
 
-      {/* Message Input - Enhanced with rainbow animations */}
-      <MessageInputEnhanced
-        onSend={handleSend}
-        isSendingMessage={isSendingMessage || isLoadingMessages}
-        theme={theme}
-        colorScheme={colorScheme}
-        hasMessages={messageGroups.length > 0}
-        isLoadingMessages={isLoadingMessages}
-        isStreaming={isStreaming}
-        hasExpandedReasoning={expandedReasoning.size > 0}
-        selectedImages={selectedImages}
-        onAddImage={addImage}
-        onRemoveImage={removeImage}
-        disabled={isSendingMessage || isLoadingMessages}
-      />
+      {/* Message Input - Centered when empty, at bottom when has messages */}
+      <View style={isEmpty ? styles.centeredInputContainer : styles.inputWrapper}>
+        <MessageInputEnhanced
+          onSend={handleSend}
+          isSendingMessage={isSendingMessage || isLoadingMessages}
+          theme={theme}
+          colorScheme={colorScheme}
+          hasMessages={messageGroups.length > 0}
+          isLoadingMessages={isLoadingMessages}
+          isStreaming={isStreaming}
+          hasExpandedReasoning={expandedReasoning.size > 0}
+          selectedImages={selectedImages}
+          onAddImage={addImage}
+          onRemoveImage={removeImage}
+          disabled={isSendingMessage || isLoadingMessages}
+        />
+      </View>
     </KeyboardAvoidingView>
   );
 }
···
     maxWidth: 800,
     width: '100%',
     alignSelf: 'center',
+  },
+  centeredInputContainer: {
+    flex: 1,
+    justifyContent: 'center',
+  },
+  inputWrapper: {
+    // Wrapper for when messages exist (no special styling needed)
   },
   inputContainer: {
     position: 'absolute',
```
+103 -46
src/stores/chatStore.ts
```diff
···
 import type { LettaMessage, StreamingChunk } from '../types/letta';
 
 /**
- * Streaming state for accumulating message chunks
- *
- * Used by useMessageGroups to create a temporary streaming MessageGroup
- * that displays while the agent is responding.
+ * Simple streaming message accumulator
+ * One message = reasoning + content (tool_call OR assistant)
  */
-interface StreamState {
+interface StreamingMessage {
+  id: string; // Message ID from Letta
   reasoning: string;
-  toolCalls: Array<{ id: string; name: string; args: string }>;
-  assistantMessage: string;
+  content: string;
+  type: 'tool_call' | 'assistant' | null;
+  toolCallName?: string;
+  timestamp: string;
 }
 
 interface ChatState {
···
   earliestCursor: string | null;
   hasMoreBefore: boolean;
 
-  // Streaming state
+  // Streaming state - SIMPLE!
   isStreaming: boolean;
   isSendingMessage: boolean;
-  currentStream: StreamState;
+  currentStreamingMessage: StreamingMessage | null; // What we're accumulating right now
+  completedStreamingMessages: StreamingMessage[];   // Messages finished but stream still active
 
   // UI state
   hasInputText: boolean;
···
   prependMessages: (messages: LettaMessage[]) => void;
   clearMessages: () => void;
 
-  // Streaming actions
+  // Streaming actions - SIMPLE!
   startStreaming: () => void;
   stopStreaming: () => void;
-  updateStreamReasoning: (reasoning: string) => void;
-  updateStreamAssistant: (content: string) => void;
-  addStreamToolCall: (toolCall: { id: string; name: string; args: string }) => void;
-  clearStream: () => void;
+
+  // Accumulate into current message
+  accumulateReasoning: (messageId: string, reasoning: string) => void;
+  accumulateToolCall: (messageId: string, toolName: string, args: string) => void;
+  accumulateAssistant: (messageId: string, content: string) => void;
+
+  // Move current to completed (when we detect new message)
+  finalizeCurrentMessage: () => void;
+
+  // Clear all streaming state
+  clearAllStreamingState: () => void;
 
   // Image actions
   addImage: (image: { uri: string; base64: string; mediaType: string }) => void;
···
 
   isStreaming: false,
   isSendingMessage: false,
-  currentStream: {
-    reasoning: '',
-    toolCalls: [],
-    assistantMessage: '',
-  },
+  currentStreamingMessage: null,
+  completedStreamingMessages: [],
 
   hasInputText: false,
   lastMessageNeedsSpace: false,
···
     set({ messages: [], earliestCursor: null, hasMoreBefore: false });
   },
 
-  // Streaming actions
+  // Streaming actions - DEAD SIMPLE
   startStreaming: () => {
+    console.log('▶️ START STREAMING');
     set({
       isStreaming: true,
-      currentStream: { reasoning: '', toolCalls: [], assistantMessage: '' },
+      currentStreamingMessage: null,
+      completedStreamingMessages: [],
       lastMessageNeedsSpace: true,
     });
   },
 
   stopStreaming: () => {
+    console.log('⏹️ STOP STREAMING');
     set({ isStreaming: false, lastMessageNeedsSpace: false });
   },
 
-  // Accumulate reasoning chunks (useMessageGroups will pair with assistant message)
-  updateStreamReasoning: (reasoning) => {
-    set((state) => ({
-      currentStream: {
-        ...state.currentStream,
-        reasoning: state.currentStream.reasoning + reasoning,
-      },
-    }));
+  // Accumulate reasoning (delta)
+  accumulateReasoning: (messageId, reasoning) => {
+    set((state) => {
+      // If no current message OR different ID, create new
+      if (!state.currentStreamingMessage || state.currentStreamingMessage.id !== messageId) {
+        console.log('🆕 New message started:', messageId.substring(0, 20));
+        return {
+          currentStreamingMessage: {
+            id: messageId,
+            reasoning: reasoning,
+            content: '',
+            type: null,
+            timestamp: new Date().toISOString(),
+          },
+        };
+      }
+
+      // Same message, accumulate reasoning
+      return {
+        currentStreamingMessage: {
+          ...state.currentStreamingMessage,
+          reasoning: state.currentStreamingMessage.reasoning + reasoning,
+        },
+      };
+    });
   },
 
-  // Accumulate assistant message chunks (useMessageGroups will pair with reasoning)
-  updateStreamAssistant: (content) => {
-    set((state) => ({
-      currentStream: {
-        ...state.currentStream,
-        assistantMessage: state.currentStream.assistantMessage + content,
-      },
-    }));
+  // Accumulate tool call (delta)
+  accumulateToolCall: (messageId, toolName, args) => {
+    set((state) => {
+      if (!state.currentStreamingMessage || state.currentStreamingMessage.id !== messageId) {
+        console.error('❌ Tool call for unknown message:', messageId);
+        return {};
+      }
+
+      return {
+        currentStreamingMessage: {
+          ...state.currentStreamingMessage,
+          type: 'tool_call',
+          toolCallName: toolName,
+          content: state.currentStreamingMessage.content + args,
+        },
+      };
+    });
   },
 
-  addStreamToolCall: (toolCall) => {
+  // Accumulate assistant (delta)
+  accumulateAssistant: (messageId, content) => {
     set((state) => {
-      // Check if tool call already exists
-      const exists = state.currentStream.toolCalls.some((tc) => tc.id === toolCall.id);
-      if (exists) return state;
+      if (!state.currentStreamingMessage || state.currentStreamingMessage.id !== messageId) {
+        console.error('❌ Assistant content for unknown message:', messageId);
+        return {};
+      }
 
       return {
-        currentStream: {
-          ...state.currentStream,
-          toolCalls: [...state.currentStream.toolCalls, toolCall],
+        currentStreamingMessage: {
+          ...state.currentStreamingMessage,
+          type: 'assistant',
+          content: state.currentStreamingMessage.content + content,
         },
       };
     });
   },
 
-  clearStream: () => {
+  // Move current to completed
+  finalizeCurrentMessage: () => {
+    set((state) => {
+      if (!state.currentStreamingMessage) {
+        console.log('⚠️ No current message to finalize');
+        return {};
+      }
+
+      console.log('✅ FINALIZE MESSAGE:', state.currentStreamingMessage.id.substring(0, 20));
+      return {
+        completedStreamingMessages: [...state.completedStreamingMessages, state.currentStreamingMessage],
+        currentStreamingMessage: null,
+      };
+    });
+  },
+
+  // Clear everything
+  clearAllStreamingState: () => {
+    console.log('🧹 CLEAR ALL STREAMING STATE');
     set({
-      currentStream: { reasoning: '', toolCalls: [], assistantMessage: '' },
+      currentStreamingMessage: null,
+      completedStreamingMessages: [],
     });
   },
```