+15
-11
languages/ziglang/0.15/arraylist.md
···
1
# arraylist
2
3
-
0.15 made arraylist unmanaged - allocator passed to each method. the compiler catches missing allocators immediately, so that's not worth documenting. what matters is ownership.
4
5
## ownership patterns
6
7
-
**build and discard** - most common. defer cleanup, use `.items` to borrow:
8
9
```zig
10
var buf: std.ArrayList(u8) = .empty;
11
-
defer buf.deinit(alloc);
12
13
try buf.print(alloc, "{s}: {d}", .{ name, value });
14
-
sendResponse(buf.items); // borrow the slice
15
```
16
17
-
**build and return** - transfer ownership, no defer:
18
19
```zig
20
var buf: std.ArrayList(u8) = .empty;
21
-
// no defer - caller owns the memory
22
23
try buf.appendSlice(alloc, data);
24
-
return buf.toOwnedSlice(alloc);
25
```
26
27
-
the difference: `.items` borrows (arraylist still owns the memory), `.toOwnedSlice()` transfers (caller must free).
28
29
see: [dashboard.zig#L187](https://tangled.sh/@zzstoatzz.io/music-atmosphere-feed/tree/main/src/dashboard.zig#L187) for the return pattern
30
31
## direct methods vs writer
32
33
-
arraylist has `.print()` directly - you don't always need a writer:
34
35
```zig
36
try buf.print(alloc, "{{\"count\":{d}}}", .{count});
37
```
38
39
-
use `.writer(alloc)` when you need to pass to something expecting `std.Io.Writer`:
40
41
```zig
42
const w = buf.writer(alloc);
···
45
46
## why unmanaged
47
48
-
from the [release notes](https://ziglang.org/download/0.15.1/release-notes.html): storing the allocator had costs - worse method signatures for reservations, can't statically initialize, extra memory for nested containers. the benefits (convenience, avoiding wrong allocator) didn't justify it since the allocator is always nearby.
···
1
# arraylist
2
3
+
`ArrayList` is zig's growable buffer - you use it when you don't know the size upfront. common for building strings, collecting results, or accumulating data before sending it somewhere.
4
+
5
+
in 0.15, arraylist became "unmanaged" - you pass the allocator to each method instead of storing it in the struct. the compiler catches missing allocators immediately, so that's not the tricky part. the tricky part is ownership.
6
7
## ownership patterns
8
9
+
when you build up data in an arraylist, you eventually need to do something with it. there are two paths:
10
+
11
+
**build and discard** - you use the data, then throw it away. this is most common (e.g., building an http response):
12
13
```zig
14
var buf: std.ArrayList(u8) = .empty;
15
+
defer buf.deinit(alloc); // cleanup when we're done
16
17
try buf.print(alloc, "{s}: {d}", .{ name, value });
18
+
sendResponse(buf.items); // borrow the slice, arraylist still owns it
19
```
20
21
+
**build and return** - you're building something to give to a caller. they'll own the memory:
22
23
```zig
24
var buf: std.ArrayList(u8) = .empty;
25
+
// no defer here - we're transferring ownership
26
27
try buf.appendSlice(alloc, data);
28
+
return buf.toOwnedSlice(alloc); // caller must free this
29
```
30
31
+
the key difference: `.items` gives you a view into the arraylist's memory (it still owns it). `.toOwnedSlice()` hands ownership to you (arraylist forgets about it, you must free it).
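to make that concrete, a small sketch of the caller side (`buildResponse` is a hypothetical function that ends with `return buf.toOwnedSlice(alloc)`):

```zig
// caller of the "build and return" pattern
const body = try buildResponse(alloc, data);
defer alloc.free(body); // we own it now, so we free it
```

forget the `defer alloc.free(body)` and you leak; free a borrowed `.items` slice instead and you double-free when the arraylist deinits.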
32
33
see: [dashboard.zig#L187](https://tangled.sh/@zzstoatzz.io/music-atmosphere-feed/tree/main/src/dashboard.zig#L187) for the return pattern
34
35
## direct methods vs writer
36
37
+
you might think you need to get a writer to write formatted output, but arraylist has `.print()` built in:
38
39
```zig
40
try buf.print(alloc, "{{\"count\":{d}}}", .{count});
41
```
42
43
+
use `.writer(alloc)` when you need to pass to something that expects a generic `std.Io.Writer`:
44
45
```zig
46
const w = buf.writer(alloc);
···
49
50
## why unmanaged
51
52
+
from the [release notes](https://ziglang.org/download/0.15.1/release-notes.html): storing the allocator had costs - worse method signatures for reservations, no static initialization, extra memory for nested containers. the benefits (convenience, not accidentally mixing up allocators) didn't justify it, since the allocator is always nearby anyway.
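one concrete win (a sketch, not from the linked projects): an unmanaged list can be initialized at container scope with `.empty`, because there's no allocator field needing a runtime value:

```zig
// file/global scope - fine, since .empty carries no allocator
var log_lines: std.ArrayList([]const u8) = .empty;

fn addLine(alloc: std.mem.Allocator, line: []const u8) !void {
    try log_lines.append(alloc, line); // allocator supplied per call
}
```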
+15
-3
languages/ziglang/0.15/build.md
···
1
# build
2
3
## 0.15 change
4
5
-
pre-0.15 used `exe.addModule()`. now use `createModule` with `imports` array:
6
7
```zig
8
const exe = b.addExecutable(.{
···
18
});
19
```
20
21
## dependency hash trick
22
23
-
to get the hash for build.zig.zon, run `zig build` with a wrong hash. it tells you the correct one:
24
25
```
26
error: hash mismatch... expected 1220abc..., found 1220def...
27
```
28
29
## don't forget
30
31
-
`b.installArtifact(exe)` - without this, `zig build` produces nothing.
···
1
# build
2
3
+
`build.zig` is where you configure how your project compiles - what files to include, what dependencies to pull in, what artifacts to produce. zig's build system is written in zig itself, so it's just code.
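for orientation, a minimal 0.15-style `build.zig` might look like this (the name and paths are placeholders):

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "app", // placeholder
        .root_module = b.createModule(.{
            .root_source_file = b.path("src/main.zig"),
            .target = target,
            .optimize = optimize,
        }),
    });

    b.installArtifact(exe);
}
```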
4
+
5
## 0.15 change
6
7
+
the way you attach dependencies to your executable changed. before 0.15, you'd call `exe.addModule()` after creating the executable. now you declare everything upfront in a `createModule` call with an `imports` array:
8
9
```zig
10
const exe = b.addExecutable(.{
···
20
});
21
```
22
23
+
the `.name` in the imports array is what you'll use in your code: `@import("websocket")`.
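so if the imports array has an entry named `"websocket"`, the consuming side is just:

```zig
// in your source file - the string matches the import's .name
const websocket = @import("websocket");
```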
24
+
25
## dependency hash trick
26
27
+
dependencies are declared in `build.zig.zon` with a hash for verification. to get the correct hash for a new dependency, just put any placeholder hash and run `zig build`. the error message tells you what the hash should be:
28
29
```
30
error: hash mismatch... expected 1220abc..., found 1220def...
31
```
32
33
+
copy the "found" value into your .zon file.
34
+
35
## don't forget
36
37
+
after creating your executable, you need to tell zig to actually install it:
38
+
39
+
```zig
40
+
b.installArtifact(exe);
41
+
```
42
+
43
+
without this line, `zig build` runs successfully but produces no output. easy to miss.
+17
-12
languages/ziglang/0.15/comptime.md
···
1
# comptime
2
3
-
comptime lets you generate types, validate inputs, and catch errors at compile time. for a complete example, see [zql](https://tangled.sh/@zzstoatzz.io/zql) which parses SQL at comptime and generates type-safe bindings.
4
5
## type-returning functions
6
7
-
a function that takes comptime params and returns a `type`:
8
9
```zig
10
pub fn Wrapper(comptime T: type) type {
···
18
}
19
```
20
21
-
`@This()` refers to the struct being defined - necessary since the struct is anonymous.
22
23
## generating tuple types from struct fields
24
25
-
extract field types in a specific order to build a tuple:
26
27
```zig
28
fn BindTuple(comptime Args: type, comptime param_names: []const []const u8) type {
···
41
}
42
```
43
44
-
this reorders struct fields into a tuple matching the parameter order. useful for binding named args to positional parameters.
45
46
see: [zql/src/Query.zig#L78](https://tangled.sh/@zzstoatzz.io/zql/tree/main/src/Query.zig#L78)
47
48
## compile-time validation
49
50
-
`@compileError` stops compilation with a message:
51
52
```zig
53
inline for (required_fields) |name| {
···
57
}
58
```
59
60
-
if your code compiles, it's valid. invalid states are unrepresentable.
61
62
## branch quota
63
64
-
complex comptime parsing hits the default branch quota (1000 backwards branches). scale it with input:
65
66
```zig
67
@setEvalBranchQuota(input.len * 100);
68
```
69
70
-
without this, complex parsing fails with "evaluation exceeded maximum branch quota."
71
72
see: [zql/src/parse.zig#L48](https://tangled.sh/@zzstoatzz.io/zql/tree/main/src/parse.zig#L48)
73
74
## constraints
75
76
-
- no allocation at comptime - use fixed-size arrays
77
-
- no runtime values - everything must be known at compile time
78
-
- comptime code runs during compilation, adding build time
···
1
# comptime
2
3
+
zig runs code at compile time. not just constants - actual logic, loops, conditionals. you can generate types, validate inputs, and catch errors before your program ever runs.
4
+
5
+
the payoff: things that would be runtime checks in other languages become compile errors in zig. if your code compiles, certain classes of bugs are impossible.
6
+
7
+
for a complete example, see [zql](https://tangled.sh/@zzstoatzz.io/zql) - it parses SQL at compile time and generates type-safe bindings. typo in a parameter name? compile error.
8
9
## type-returning functions
10
11
+
the core pattern: a function that takes comptime parameters and returns a `type`. you're generating a struct definition:
12
13
```zig
14
pub fn Wrapper(comptime T: type) type {
···
22
}
23
```
24
25
+
`@This()` refers to the struct being defined - you need it because the struct doesn't have a name (it's an anonymous struct returned from a function).
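as a self-contained illustration of the shape (not the `Wrapper` from the elided snippet above):

```zig
pub fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,

        const Self = @This(); // names the anonymous struct being defined

        pub fn swap(self: *Self) void {
            const tmp = self.first;
            self.first = self.second;
            self.second = tmp;
        }
    };
}

// Pair(u32) is a concrete type, generated at compile time
var p = Pair(u32){ .first = 1, .second = 2 };
```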
26
27
## generating tuple types from struct fields
28
29
+
sometimes you need to reorder or extract types from a struct. this pattern builds a tuple type by pulling field types in a specific order:
30
31
```zig
32
fn BindTuple(comptime Args: type, comptime param_names: []const []const u8) type {
···
45
}
46
```
47
48
+
use case: you have named arguments (`.{ .name = "alice", .age = 25 }`) but need to bind them to positional SQL parameters in a specific order.
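a hypothetical call site, just to show the shape (field names and parameter order invented for the example):

```zig
const Args = @TypeOf(.{ .name = "alice", .age = @as(u32, 25) });

// positional order the SQL expects: $1 = age, $2 = name
const Bound = BindTuple(Args, &.{ "age", "name" });
```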
49
50
see: [zql/src/Query.zig#L78](https://tangled.sh/@zzstoatzz.io/zql/tree/main/src/Query.zig#L78)
51
52
## compile-time validation
53
54
+
`@compileError` stops compilation with a custom message. combine with `inline for` to check things at compile time:
55
56
```zig
57
inline for (required_fields) |name| {
···
61
}
62
```
63
64
+
if someone forgets a required field, they get a compile error pointing at exactly what's missing.
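a self-contained sketch of the idea (field names invented here):

```zig
fn validateConfig(comptime T: type) void {
    const required_fields = [_][]const u8{ "host", "port" };
    inline for (required_fields) |name| {
        if (!@hasField(T, name)) {
            @compileError("config struct is missing required field: " ++ name);
        }
    }
}

// validateConfig(struct { host: []const u8 }) fails to compile: missing "port"
```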
65
66
## branch quota
67
68
+
zig limits how much work comptime code can do (so an infinite loop can't hang compilation). the default quota is 1000 "backwards branches" - roughly, loop iterations and recursive calls. complex parsing will hit this:
69
70
```zig
71
@setEvalBranchQuota(input.len * 100);
72
```
73
74
+
scale it with your input size so small inputs compile fast and large inputs still work.
75
76
see: [zql/src/parse.zig#L48](https://tangled.sh/@zzstoatzz.io/zql/tree/main/src/parse.zig#L48)
77
78
## constraints
79
80
+
a few things to know:
81
+
- no allocation at comptime - you can't call an allocator, so use fixed-size arrays
82
+
- no runtime values - everything must be known at compile time (that's the point)
83
+
- comptime code runs during compilation, so complex logic adds build time
+19
-12
languages/ziglang/0.15/concurrency.md
···
1
# concurrency
2
3
-
zig has threads, mutexes, and atomics. no async/await. for syntax, see std.Thread docs. these notes cover design decisions.
4
5
## when to use atomics vs mutex
6
7
-
**atomics for simple counters:**
8
9
```zig
10
posts_checked: std.atomic.Value(u64) = .init(0),
11
12
_ = self.posts_checked.fetchAdd(1, .monotonic);
13
```
14
15
-
**mutex for complex data structures:**
16
17
```zig
18
bufo_matches: std.StringHashMap(MatchInfo),
19
bufo_mutex: Thread.Mutex = .{},
20
21
self.bufo_mutex.lock();
22
defer self.bufo_mutex.unlock();
23
try self.bufo_matches.put(name, info);
24
```
25
26
-
the pattern in [find-bufo/bot/src/stats.zig](https://tangled.sh/@zzstoatzz.io/find-bufo/tree/main/bot/src/stats.zig): atomics for the five simple counters (posts_checked, matches_found, etc.), mutex for the hashmap of per-bufo match data.
27
28
-
rule: if it's a single integer, use atomic. if it's a container or multi-field update, use mutex.
29
30
## memory ordering
31
32
-
all usages in these projects use `.monotonic` - sufficient for independent counters where you just need eventual visibility, not synchronization between threads.
33
34
-
use stricter orderings (`.acquire`, `.release`) when one thread's write must be visible to another thread before proceeding. none of these projects need that.
35
36
## callback pattern
37
38
-
jetstream doesn't use channels or message passing. it takes a function pointer:
39
40
```zig
41
callback: *const fn (Post) void,
42
43
self.callback(.{
44
.uri = uri,
45
.text = text,
46
});
47
```
48
49
-
simpler than channels when you just need to notify one consumer.
50
51
see: [find-bufo/bot/src/jetstream.zig#L18](https://tangled.sh/@zzstoatzz.io/find-bufo/tree/main/bot/src/jetstream.zig#L18)
52
53
-
## reconnection
54
55
-
exponential backoff for network consumers:
56
57
```zig
58
var backoff: u64 = 1;
···
65
}
66
```
67
68
-
starts at 1s, doubles each failure, caps at 60s.
···
1
# concurrency
2
3
+
when your program needs to do multiple things at once - handle many connections, run background tasks, update stats while processing requests. zig gives you threads, mutexes, and atomics. no async/await.
4
+
5
+
these notes focus on design decisions, not syntax. for api details, see std.Thread docs.
6
7
## when to use atomics vs mutex
8
9
+
you often need to share state between threads. the question is how to protect it.
10
+
11
+
**atomics** are for simple counters - things where each operation is independent:
12
13
```zig
14
posts_checked: std.atomic.Value(u64) = .init(0),
15
16
+
// in some thread:
17
_ = self.posts_checked.fetchAdd(1, .monotonic);
18
```
19
20
+
**mutex** is for complex data or multi-step operations:
21
22
```zig
23
bufo_matches: std.StringHashMap(MatchInfo),
24
bufo_mutex: Thread.Mutex = .{},
25
26
+
// in some thread:
27
self.bufo_mutex.lock();
28
defer self.bufo_mutex.unlock();
29
try self.bufo_matches.put(name, info);
30
```
31
32
+
the pattern in [find-bufo/bot/src/stats.zig](https://tangled.sh/@zzstoatzz.io/find-bufo/tree/main/bot/src/stats.zig): five simple counters use atomics (posts_checked, matches_found, etc.), but the hashmap of per-bufo match data uses a mutex.
33
34
+
rule of thumb: single integer that threads increment independently? atomic. anything else? mutex.
35
36
## memory ordering
37
38
+
you'll see `.monotonic` everywhere in these projects. it's the weakest ordering - just means "this operation is atomic, but i don't care about ordering relative to other operations."
39
40
+
that's fine for independent counters. you'd use stricter orderings (`.acquire`, `.release`) when one thread's write must be visible to another thread before it proceeds - like signaling that data is ready. none of these projects need that.
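for contrast, a sketch of the case that does need it (not from these projects) - one thread publishes data behind a flag, the other waits on the flag:

```zig
const std = @import("std");

var shared: u64 = 0;
var ready = std.atomic.Value(bool).init(false);

fn producer() void {
    shared = 42; // write the data first
    ready.store(true, .release); // then publish the flag
}

fn consumer() void {
    while (!ready.load(.acquire)) {} // spin until the flag is visible
    // the .release/.acquire pair guarantees the write to `shared` is visible here
    std.debug.assert(shared == 42);
}
```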
41
42
## callback pattern
43
44
+
the jetstream client doesn't use channels or complicated message passing. it just takes a function pointer and calls it when a message arrives:
45
46
```zig
47
callback: *const fn (Post) void,
48
49
+
// when a message comes in:
50
self.callback(.{
51
.uri = uri,
52
.text = text,
53
});
54
```
55
56
+
simpler than channels when you have one producer and one consumer. the callback runs on the producer's thread, so keep it fast.
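a handler is just a function matching `*const fn (Post) void` - you'd store `&onPost` in the client's callback field. a sketch (the matching logic is invented here):

```zig
fn onPost(post: Post) void {
    // runs on the reader thread - keep it quick
    if (std.mem.indexOf(u8, post.text, "bufo") != null) {
        // record the match, bump an atomic counter, etc.
    }
}
```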
57
58
see: [find-bufo/bot/src/jetstream.zig#L18](https://tangled.sh/@zzstoatzz.io/find-bufo/tree/main/bot/src/jetstream.zig#L18)
59
60
+
## reconnection with backoff
61
62
+
network connections fail. when they do, don't hammer the server - back off exponentially:
63
64
```zig
65
var backoff: u64 = 1;
···
72
}
73
```
74
75
+
starts at 1 second, doubles each failure, caps at 60 seconds. simple and effective.
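sketched end to end (`connectAndConsume` is a stand-in for whatever does the real work and returns an error when the connection drops):

```zig
fn runForever() void {
    var backoff_s: u64 = 1;
    while (true) {
        connectAndConsume() catch {
            std.Thread.sleep(backoff_s * std.time.ns_per_s);
            backoff_s = @min(backoff_s * 2, 60);
            continue;
        };
        backoff_s = 1; // clean return - reset the backoff
    }
}
```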
+16
-12
languages/ziglang/0.15/io.md
···
1
# i/o
2
3
-
0.15 replaced generic `anytype` reader/writer with concrete types using explicit buffers. see [release notes](https://ziglang.org/download/0.15.1/release-notes.html) for the rationale.
4
5
-
## http server pattern
6
7
```zig
8
var read_buffer: [8192]u8 = undefined;
···
14
var server = http.Server.init(reader.interface(), &writer.interface);
15
```
16
17
-
the buffers are yours - stack allocated, explicit size. `.interface()` extracts the concrete type that http.Server expects.
18
19
see: [http.zig#L14](https://tangled.sh/@zzstoatzz.io/music-atmosphere-feed/tree/main/src/http.zig#L14)
20
21
-
## http client pattern
22
23
-
for api calls, use `Io.Writer.Allocating` to collect the response:
24
25
```zig
26
var client = http.Client{ .allocator = allocator };
···
36
37
if (result.status != .ok) return error.FetchFailed;
38
39
-
const response = aw.toArrayList().items;
40
```
41
42
see: [find-bufo/bot/src/main.zig#L196](https://tangled.sh/@zzstoatzz.io/find-bufo/tree/main/bot/src/main.zig#L196)
43
44
## tls reading quirk
45
46
-
when reading from raw tls (not http.Client), you must loop until data arrives. `n == 0` means "try again", not EOF:
47
48
```zig
49
outer: while (total_read < response_buf.len) {
···
54
total_read += n;
55
break;
56
}
57
-
// n == 0: tls may have consumed input without producing output
58
-
// (buffering partial records, renegotiation, etc.)
59
}
60
}
61
```
62
63
-
this happens because tls decryption can consume input bytes without producing output yet. the inner loop keeps trying until actual data appears.
64
65
-
also: raw tls needs explicit flush at both layers:
66
```zig
67
tls_client.writer.flush() catch return error.Failed;
68
stream_writer.interface.flush() catch return error.Failed;
···
72
73
## when you don't need to flush
74
75
-
high-level apis (http.Server, http.Client) handle flushing internally. `request.respond()` flushes for you. only raw tls/stream code needs explicit flushes.
···
1
# i/o
2
3
+
reading and writing data - files, sockets, http. zig 0.15 overhauled this entirely, replacing generic `anytype` interfaces with concrete types that use explicit buffers. the [release notes](https://ziglang.org/download/0.15.1/release-notes.html) explain why (better error messages, no generic pollution, clearer ownership).
4
+
5
+
the main thing to know: you provide the buffers, and you call `.interface()` to get the type that APIs expect.
6
7
+
## http server
8
+
9
+
when handling incoming connections, you set up buffers for reading requests and writing responses:
10
11
```zig
12
var read_buffer: [8192]u8 = undefined;
···
18
var server = http.Server.init(reader.interface(), &writer.interface);
19
```
20
21
+
you own these buffers (they're on your stack). the http.Server borrows them. `.interface()` extracts the concrete reader/writer type.
22
23
see: [http.zig#L14](https://tangled.sh/@zzstoatzz.io/music-atmosphere-feed/tree/main/src/http.zig#L14)
24
25
+
## http client
26
27
+
when making outgoing requests (calling APIs, fetching data), use `Io.Writer.Allocating` to collect the response body:
28
29
```zig
30
var client = http.Client{ .allocator = allocator };
···
40
41
if (result.status != .ok) return error.FetchFailed;
42
43
+
const response = aw.toArrayList().items; // the response body
44
```
45
+
46
+
the allocating writer grows as needed to hold whatever the server sends back.
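the lifecycle around the (elided) fetch call is roughly this (a sketch - assumes `allocator` is in scope and that the request writes its response body into the writer you hand it):

```zig
var aw: std.Io.Writer.Allocating = .init(allocator);
defer aw.deinit();

// ... perform the request, pointing its response output at &aw.writer ...

const response = aw.toArrayList().items;
```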
47
48
see: [find-bufo/bot/src/main.zig#L196](https://tangled.sh/@zzstoatzz.io/find-bufo/tree/main/bot/src/main.zig#L196)
49
50
## tls reading quirk
51
52
+
if you're doing raw tls (not using http.Client), there's a gotcha: when reading, `n == 0` doesn't mean end-of-stream. it means "i consumed some input but don't have output yet" - tls may be buffering partial records or handling renegotiation. you have to keep trying:
53
54
```zig
55
outer: while (total_read < response_buf.len) {
···
60
total_read += n;
61
break;
62
}
63
+
// n == 0: keep trying, tls isn't done yet
64
}
65
}
66
```
67
68
+
also, raw tls needs explicit flushes at both the tls layer and the underlying stream:
69
70
```zig
71
tls_client.writer.flush() catch return error.Failed;
72
stream_writer.interface.flush() catch return error.Failed;
···
76
77
## when you don't need to flush
78
79
+
the high-level apis handle this for you. `http.Server`'s `request.respond()` flushes internally. `http.Client` flushes when the request completes. you only need manual flushes when working with raw streams or tls directly.