# Plan: Per-Note Waveform Visualization for Notepat

## Problem Statement
Currently, `notepat.mjs` can only visualize a single mixed waveform from `sound.speaker.waveforms`, which combines all playing sounds into one audio stream. When multiple notes play simultaneously, we cannot visually separate each note's individual waveform; they are all mixed together in the master output.

## Goal
Enable each active note in notepat to have its own separate waveform data for individual visualization, so that each note's lane in the visualizer can show that note's unique audio characteristics rather than all lanes showing the same mixed waveform.

## Current Architecture

### 1. **speaker.mjs (Audio Worklet)**
- Location: `/workspaces/aesthetic-computer/system/public/aesthetic.computer/lib/speaker.mjs`
- Manages the `#queue` array containing all active `Synth` instances
- In the `process()` method (lines ~515-590):
  - Loops through all instruments in `this.#queue`
  - Calls `instrument.next(s)` on each to get its amplitude
  - **Mixes all instruments together** into `output[0][s]` (left) and `output[1][s]` (right)
  - Samples the **mixed output** into the `waveformLeft` and `waveformRight` arrays
  - Stores these in `#currentWaveformLeft` and `#currentWaveformRight`
- Message handler (lines ~105-135):
  - Responds to `"get-waveforms"` messages
  - Sends back the mixed waveforms via `postMessage({ type: "waveforms", content: { left, right } })`

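Because the waveform is sampled only after this mix, the individual notes can no longer be separated downstream. A minimal, runnable sketch of that behavior (a simplified stand-in for the real worklet loop; the `next()` signature, frame count, and sampling rate are illustrative, not the actual values):

```javascript
// Simplified model of the current worklet: every instrument is summed into
// one buffer, and only that mixed buffer is sampled for visualization.
function mixAndSample(instruments, frames, waveformRate = 4) {
  const output = new Float32Array(frames);
  const waveform = [];
  for (let s = 0; s < frames; s++) {
    for (const inst of instruments) output[s] += inst.next(s); // mix everything
    if (s % waveformRate === 0) waveform.push(output[s]); // sample the MIX only
  }
  return waveform;
}

// Two sine "instruments" at different frequencies.
const makeSine = (freq) => ({
  next: (s) => Math.sin((2 * Math.PI * freq * s) / 44100),
});
const mixed = mixAndSample([makeSine(220), makeSine(330)], 128);
// `mixed` holds only the combined signal; the 220 Hz and 330 Hz components
// can no longer be told apart without extra bookkeeping.
```
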
### 2. **bios.mjs (Audio System Bridge)**
- Location: `/workspaces/aesthetic-computer/system/public/aesthetic.computer/bios.mjs`
- Sets up the speaker AudioWorklet (lines ~1385-1490)
- Creates the `requestSpeakerWaveforms()` function (line ~1471), which posts a `"get-waveforms"` message
- Receives waveform data via a message handler (line ~1484) and forwards it to disk via `send({ type: "waveforms", content })`
- Also handles the sound lifecycle: `killSound()`, `updateSound()`, etc.

### 3. **disk.mjs (Piece Runtime)**
- Receives waveform messages and makes them available via the `sound.speaker.waveforms` object
- Pieces like `notepat.mjs` access them via `sound.speaker.waveforms.left` and `.right`

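For context, the piece-side math that turns such a waveform array into drawable points can be modeled by a pure helper like this (hypothetical helper and names; the actual drawing in notepat goes through `ink().line()`):

```javascript
// Map waveform samples to screen-space points for a polyline, centered
// vertically and scaled to 40% of the available height.
function waveformToPoints(waveform, width, height) {
  const step = width / Math.max(1, waveform.length - 1);
  return waveform.map((v, i) => ({
    x: i * step,
    y: height / 2 + v * height * 0.4,
  }));
}

const pts = waveformToPoints([0, 1, 0, -1], 90, 100);
// pts[1] ≈ { x: 30, y: 90 }
```
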
### 4. **notepat.mjs (Current Implementation)**
- Maintains a `sounds` object tracking active notes: `{ "3c": { sound: Synth }, "4e": { sound: Synth }, ... }`
- Each note has a unique `sound.id` (UUID)
- In visualizer mode, reads `sound.speaker.waveforms.left` - but this is the **mixed waveform of all notes**

## Proposed Solution

### Phase 1: Capture Per-Instrument Waveforms in speaker.mjs

**Modify**: `/workspaces/aesthetic-computer/system/public/aesthetic.computer/lib/speaker.mjs`

1. Add a new private field to track per-instrument waveforms:
```javascript
#perInstrumentWaveforms = new Map(); // Map<instrumentId, { left: [], right: [] }>
```

2. In the `process()` method sample loop (around line 527):
```javascript
for (const instrument of this.#queue) {
  const amplitude = instrument.next(s);

  // Track per-instrument output BEFORE mixing
  if (!this.#perInstrumentWaveforms.has(instrument.id)) {
    this.#perInstrumentWaveforms.set(instrument.id, { left: [], right: [] });
  }

  const leftOutput = instrument.pan(0, amplitude);
  const rightOutput = instrument.pan(1, amplitude);

  // Sample the individual instrument waveform
  if (s % waveformRate === 0) {
    const instrumentWaveforms = this.#perInstrumentWaveforms.get(instrument.id);
    instrumentWaveforms.left.push(leftOutput);
    instrumentWaveforms.right.push(rightOutput);
  }

  // Then mix into the master output
  output[0][s] += leftOutput;
  output[1][s] += rightOutput;

  // ... volume calculation ...
}
```

3. Maintain the waveform buffer size per instrument (mirroring the global waveform):
```javascript
// After the sample loop
for (const [id, waveforms] of this.#perInstrumentWaveforms) {
  // Remove old samples if exceeding waveformSize
  if (waveforms.left.length > waveformSize) {
    const excess = waveforms.left.length - waveformSize;
    waveforms.left.splice(0, excess);
    waveforms.right.splice(0, excess);
  }
}

// Clean up waveforms for dead instruments
this.#perInstrumentWaveforms = new Map(
  [...this.#perInstrumentWaveforms].filter(([id]) =>
    this.#queue.some((inst) => inst.id === id),
  ),
);
```

4. Add a new message handler:
```javascript
if (msg.type === "get-per-instrument-waveforms") {
  // Convert the Map to a plain object for postMessage
  const waveformsObj = {};
  for (const [id, waveforms] of this.#perInstrumentWaveforms) {
    waveformsObj[id] = {
      left: [...waveforms.left],
      right: [...waveforms.right],
    };
  }

  this.port.postMessage({
    type: "per-instrument-waveforms",
    content: waveformsObj,
  });
}
```

### Phase 2: Update bios.mjs to Request and Forward Per-Instrument Data

**Modify**: `/workspaces/aesthetic-computer/system/public/aesthetic.computer/bios.mjs`

1. Create a new request function (around line 1471, next to `requestSpeakerWaveforms`):
```javascript
requestPerInstrumentWaveforms = function () {
  speakerProcessor.port.postMessage({ type: "get-per-instrument-waveforms" });
};
```

2. Add a message handler (around line 1484, in the speaker message handler):
```javascript
if (msg.type === "per-instrument-waveforms") {
  send({ type: "per-instrument-waveforms", content: msg.content });
}
```

3. Make it available in the exports (around line 1027):
```javascript
requestPerInstrumentWaveforms,
```

4. Add a message handler in the main receive function (around line 8343):
```javascript
if (type === "get-per-instrument-waveforms") {
  requestPerInstrumentWaveforms?.();
}
```

### Phase 3: Update disk.mjs to Expose Per-Instrument Waveforms

**Modify**: `/workspaces/aesthetic-computer/system/public/aesthetic.computer/disks/disk.mjs`

1. Add storage for per-instrument waveforms:
```javascript
// In the speaker object initialization
sound.speaker.perInstrumentWaveforms = {}; // { instrumentId: { left: [], right: [] } }
```

2. Add a receiver for the new message type (wherever speaker waveforms are received):
```javascript
if (type === "per-instrument-waveforms") {
  sound.speaker.perInstrumentWaveforms = content;
}
```

3. Request per-instrument waveforms alongside the regular waveforms (in the frame/sim loop):
```javascript
send({ type: "get-per-instrument-waveforms" });
```

### Phase 4: Update notepat.mjs to Use Per-Instrument Waveforms

**Modify**: `/workspaces/aesthetic-computer/system/public/aesthetic.computer/disks/notepat.mjs`

1. In the `pictureLines()` function (around line 2040), instead of cycling through notes with the same waveform:
```javascript
activeNotes.forEach((trailNote, noteIndex) => {
  if (!trailNote) return;

  const noteData = sounds[trailNote];
  if (!noteData?.sound?.id) return;

  // Get THIS note's specific waveform
  const instrumentId = noteData.sound.id;
  const instrumentWaveforms = sound.speaker.perInstrumentWaveforms?.[instrumentId];

  if (!instrumentWaveforms?.left || instrumentWaveforms.left.length < 16) {
    return; // Skip if no waveform data is available yet
  }

  const noteWaveform = instrumentWaveforms.left;
  const color = colorFromNote(trailNote, num);
  const laneHeight = picture.height / activeNotes.length;
  const laneCenterY = (noteIndex + 0.5) * laneHeight;

  const step = picture.width / noteWaveform.length;

  for (let i = 1; i < noteWaveform.length; i++) {
    const x1 = (i - 1) * step;
    const y1 = laneCenterY + noteWaveform[i - 1] * laneHeight * 0.4;
    const x2 = i * step;
    const y2 = laneCenterY + noteWaveform[i] * laneHeight * 0.4;

    ink(...color).line(x1, y1, x2, y2);
  }
});
```

## Implementation Considerations

### Performance
- **Memory**: Each active note stores ~220 samples per channel (44.1 kHz sampled at 1-in-200), so 10 notes hold ~4,400 values across both channels - roughly 18 KB as 32-bit floats, more as ordinary JS array numbers
- **CPU**: Additional array operations per frame, but minimal since we already iterate over every instrument
- **Optimization**: Per-instrument waveform updates could be throttled (update every N frames instead of every frame)

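The throttling idea can be sketched as a tiny wrapper (hypothetical names; `requestFn` would be something like the `requestPerInstrumentWaveforms` call from Phase 2):

```javascript
// Invoke the wrapped request only once every `everyNFrames` frames,
// trading a little visual latency for fewer worklet round-trips.
function makeThrottledRequester(requestFn, everyNFrames = 4) {
  let frame = 0;
  return () => {
    if (frame++ % everyNFrames === 0) requestFn();
  };
}

let requests = 0;
const tick = makeThrottledRequester(() => requests++, 4);
for (let i = 0; i < 16; i++) tick(); // simulate 16 frames
// requests === 4
```
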
### Edge Cases
- **Dead instruments**: Clean up waveform data when instruments are killed (already handled in Phase 1)
- **No waveform data yet**: Check for existence before visualizing (handled in Phase 4)
- **ID mismatch**: Synth IDs should remain stable throughout playback - verify this assumption

### Alternative: Simplified Approach
If the per-instrument approach proves too complex, we could instead:
- Tag each Synth with metadata (e.g., `instrument.noteLabel = "3c"`)
- Only sample waveforms for instruments with matching labels
- Still mix in speaker.mjs but provide filtered waveforms per label

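Under the same assumptions as the plan (a hypothetical `noteLabel` field; simplified mono mixing in place of the stereo worklet), the label-based variant could look like:

```javascript
// Mix all instruments into one output buffer while accumulating a
// separate waveform per note label, instead of per instrument id.
function mixWithLabelWaveforms(instruments, frames, waveformRate = 4) {
  const output = new Float32Array(frames);
  const byLabel = {}; // e.g. { "3c": [samples...], "4e": [...] }
  for (let s = 0; s < frames; s++) {
    for (const inst of instruments) {
      const amp = inst.next(s);
      output[s] += amp; // still mixed for playback
      if (s % waveformRate === 0) {
        (byLabel[inst.noteLabel] ??= []).push(amp); // sampled per label
      }
    }
  }
  return byLabel;
}

const instrument = (noteLabel, freq) => ({
  noteLabel,
  next: (s) => Math.sin((2 * Math.PI * freq * s) / 44100),
});
const waves = mixWithLabelWaveforms(
  [instrument("3c", 131), instrument("4e", 330)],
  128,
);
// waves["3c"] and waves["4e"] now hold separate 32-sample waveforms
```
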
## Testing Plan
1. Add console logs to verify that per-instrument waveforms are captured
2. Test with 1, 2, and 5+ simultaneous notes
3. Verify that waveforms differ between notes (play different frequencies)
4. Check that memory usage doesn't balloon
5. Verify that waveforms are cleaned up when notes stop

## Future Enhancements
- Expose per-instrument amplitudes and frequencies
- Allow filtering by instrument type (synth vs. sample)
- Add an API to request waveforms for specific instrument IDs
- Consider WebAssembly for waveform processing if performance becomes an issue