# Compare changes


## Changed files (+210 -6)
### languages/ziglang/0.15/README.md (+9)

````diff
@@ -2,6 +2,15 @@
 
 [release notes](https://ziglang.org/download/0.15.1/release-notes.html)
 
+## issue tracking
+
+zig moved from github to codeberg. old issues (≤25xxx) are on github, new issues (30xxx+) are on codeberg.org/ziglang/zig.
+
+query via API:
+```bash
+curl -s "https://codeberg.org/api/v1/repos/ziglang/zig/issues?state=all&q=flate"
+```
+
 ## notes
 
 - [arraylist](./arraylist.md) - ownership patterns (toOwnedSlice vs deinit)
````
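for illustration, the routing rule in the hunk above ("old issues (≤25xxx) are on github, new issues (30xxx+) are on codeberg") can be written as a small helper. this is a sketch only: `issue_url` and the exact 25999/30000 cutoffs are assumptions read from the note, and numbers in the gap are treated as ambiguous.

```python
def issue_url(n: int) -> str:
    """Route a ziglang issue number to the forge that hosts it.

    Hypothetical helper: issues numbered "25xxx" and below stayed on
    github; "30xxx+" issues live on codeberg. numbers in between are
    ambiguous, so refuse to guess.
    """
    if n <= 25999:
        return f"https://github.com/ziglang/zig/issues/{n}"
    if n >= 30000:
        return f"https://codeberg.org/ziglang/zig/issues/{n}"
    return "ambiguous"
```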
### languages/ziglang/0.15/io.md (+15)

````diff
@@ -77,3 +77,18 @@
 ## when you don't need to flush
 
 the high-level apis handle this for you. `http.Server`'s `request.respond()` flushes internally. `http.Client` flushes when the request completes. you only need manual flushes when working with raw streams or tls directly.
+
+## gzip decompression bug (0.15.x only)
+
+http.Client panics when decompressing certain gzip responses on x86_64-linux. the deflate decompressor sets up a Writer with `unreachableRebase` but can hit a code path that calls `rebase` when the buffer fills.
+
+**workaround:**
+```zig
+_ = try client.fetch(.{
+    .location = .{ .url = url },
+    .response_writer = &aw.writer,
+    .headers = .{ .accept_encoding = .{ .override = "identity" } },
+});
+```
+
+fixed in 0.16. see: [zat/xrpc.zig](https://tangled.sh/zzstoatzz.io/zat/tree/main/src/internal/xrpc.zig#L88)
````
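the failure mode described above — a writer configured with a rebase hook that is assumed unreachable, yet invoked once the buffer fills — can be mimicked in a few lines. this is a python analogy, not zig and not the real `std.Io` API; every name below is made up for illustration.

```python
class BufferedWriter:
    """Toy model of a fixed-capacity writer with a rebase hook."""

    def __init__(self, capacity: int, rebase):
        self.buf = bytearray()
        self.capacity = capacity
        self.rebase = rebase  # called when the buffer overflows

    def write(self, data: bytes) -> None:
        self.buf.extend(data)
        if len(self.buf) > self.capacity:
            self.rebase(self)  # the path assumed to be unreachable

def unreachable_rebase(writer):
    # stand-in for zig's `unreachableRebase`: reaching it is a bug
    raise AssertionError("reached unreachable code")

w = BufferedWriter(capacity=8, rebase=unreachable_rebase)
w.write(b"small")  # stays under capacity: fine
try:
    w.write(b"overflowing write")  # fills the buffer, hook fires, "panic"
    panicked = False
except AssertionError:
    panicked = True
```

the workaround in the diff sidesteps the decompressor entirely by requesting an identity-encoded response, so the overflow path is never taken.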
### languages/ziglang/build/dependencies.md (+10)

````diff
@@ -104,6 +104,16 @@
 
 fails gracefully when pkg-config isn't available.
 
+## tangled packages
+
+tangled.sh hosts zig packages. fetch without `.tar.gz` extension:
+
+```bash
+zig fetch --save https://tangled.sh/zzstoatzz.io/zat/archive/main
+```
+
+`zig fetch` checks Content-Type headers to determine archive format - tangled handles this server-side.
+
 sources:
 - [ghostty/build.zig.zon](https://github.com/ghostty-org/ghostty/blob/main/build.zig.zon)
 - [ghostty/pkg/](https://github.com/ghostty-org/ghostty/tree/main/pkg)
````
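for intuition, header-based format detection of the kind described above ("checks Content-Type headers to determine archive format") might look like the sketch below. the media-type table is a guess for illustration, not zig's actual mapping.

```python
def archive_format(content_type: str) -> str:
    """Pick an archive format from an http Content-Type header.

    Illustrative only; the real table `zig fetch` uses may differ.
    """
    media_type = content_type.split(";")[0].strip().lower()
    formats = {
        "application/gzip": "tar.gz",
        "application/x-tar": "tar",
        "application/zip": "zip",
    }
    return formats.get(media_type, "unknown")
```

this is why tangled can serve `/archive/main` without an extension: the header, not the url, carries the format.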
### protocols/MCP/README.md (new file, +81)

````markdown
# mcp

the model context protocol. an open standard for connecting ai models (hosts) to external systems (servers) via structured tools, resources, and prompts. it acts as a "usb-c port for ai."

## architecture

mcp defines a client-server relationship:

- **host**: the ai application (e.g., claude code, vscode) that coordinates and manages mcp clients.
- **client**: maintains a dedicated connection to an mcp server and obtains context from it for the host. a host can have multiple clients.
- **server**: a program that provides context (tools, resources, prompts) to mcp clients. servers can run locally (stdio) or remotely (http/sse).

```
┌──────────────────────────┐
│         MCP Host         │
│     (llm application)    │
│  ┌────────────────────┐  │  request/response   ┌──────────────┐      ┌──────────┐
│  │     MCP Client     │◄─┼────────────────────►│  MCP Server  │─────►│ external │
│  │    (per server)    │  │  context, actions   │ (tools, data)│      │  system  │
│  └────────────────────┘  │                     └──────────────┘      └──────────┘
└──────────────────────────┘
```

## primitives

mcp servers expose three core primitives:

### tools
executable functions that the host (via the llm) can invoke.
- define actions an ai can take.
- in python frameworks like fastmcp, correspond to plain functions with type hints and docstrings.
- examples: `add_event_to_calendar(title: str, date: str)`, `search_docs(query: str)`.

### resources
read-only data sources exposed to the host.
- content is addressed by a uri (e.g., `config://app/settings.json`, `github://repo/readme.md`).
- can be structured (json) or unstructured (text, binary).
- examples: application configuration, documentation, database entries.

### prompts
reusable templates for interaction.
- define common interactions or workflows.
- can guide the llm in complex tasks.
- examples: `summarize_document(document: str)`, `generate_report(data: dict)`.

## transport

mcp supports flexible transport mechanisms:
- **stdio**: standard input/output. efficient for local, co-located processes.
- **streamable http**: for remote servers. uses http post for client messages and server-sent events (sse) for streaming responses. supports standard http auth.

## applications & patterns

### plyr.fm mcp server
an mcp server that exposes a music library (plyr.fm) to llm clients.
- **purpose**: allows llms to query track information, search the library, and get user-specific data (e.g., liked tracks).
- **design**: primarily **read-only** tools (e.g., `list_tracks`, `get_track`, `search`). mutations are handled by a separate cli.
- **source**: [zzstoatzz/plyr-python-client](https://github.com/zzstoatzz/plyr-python-client/tree/main/packages/plyrfm-mcp)

### prefect mcp server
an mcp server for interacting with prefect, a workflow orchestration system.
- **purpose**: enables llms to monitor and manage prefect workflows.
- **design**: exposes monitoring tools (read-only) and provides guidance for **mutations** via the prefect cli.
- **pattern**: emphasizes "agent-friendly usage" of the prefect cli, including `--no-prompt` and `prefect api` for json output, to facilitate programmatic interaction by llms.
- **source**: [prefecthq/prefect-mcp-server](https://github.com/PrefectHQ/prefect-mcp-server)

## ecosystem

- [fastmcp](./fastmcp.md) - pythonic server framework
- [pdsx](https://github.com/zzstoatzz/pdsx) - mcp server for atproto
- [inspector](https://github.com/modelcontextprotocol/inspector) - web-based debugger for mcp servers

## sources

- [modelcontextprotocol.io](https://modelcontextprotocol.io) - official documentation
- [jlowin/fastmcp](https://github.com/jlowin/fastmcp) - the fastmcp python library
````
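worth noting alongside the transport section of the new README: whichever transport carries them, mcp messages are framed as json-rpc 2.0. a minimal sketch (the `tools/list` method name is from the mcp spec; the helper itself is illustrative):

```python
import json

def mcp_request(method: str, params=None, msg_id: int = 1) -> str:
    """Frame an mcp message as a json-rpc 2.0 request string."""
    msg = {"jsonrpc": "2.0", "id": msg_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# over stdio this string is written to the server's stdin (one message
# per line); over streamable http it is the body of a POST.
req = mcp_request("tools/list")
```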
### protocols/MCP/fastmcp.md (new file, +79)

````markdown
# fastmcp

a high-level python framework for building mcp servers. it aims to simplify development by abstracting protocol details, emphasizing developer experience and type safety.

## philosophy

fastmcp is designed to be a fast, simple, and complete solution for mcp development. unlike a low-level sdk, it provides a pythonic, declarative way to define mcp servers, handling protocol intricacies automatically. it leverages python's type hints and docstrings to generate mcp schemas for tools, resources, and prompts.

## basic usage

define your mcp server and expose functionality using decorators:

```python
from fastmcp import FastMCP, Context

# Initialize the MCP server
mcp = FastMCP("my-awesome-server", instructions="A server for managing awesome things.")

@mcp.tool()
def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together.

    :param a: The first number.
    :param b: The second number.
    :returns: The sum of the two numbers.
    """
    return a + b

@mcp.resource("config://app/settings")
def get_app_settings() -> dict:
    """Retrieves the current application settings."""
    return {"debug_mode": True, "log_level": "INFO"}

@mcp.prompt()
def create_summary_prompt(text: str) -> str:
    """Generates a prompt to summarize the given text."""
    return f"please summarize the following document:\n\n{text}"

@mcp.tool()
async def fetch_external_data(ctx: Context, url: str) -> str:
    """Fetches data from an external URL.

    :param ctx: The MCP context for logging.
    :param url: The URL to fetch.
    :returns: The content of the URL as a string.
    """
    await ctx.info(f"fetching data from {url}")  # context methods are async
    # In a real scenario, this would use an async http client
    return f"content from {url}"

# Run the server
if __name__ == "__main__":
    mcp.run()  # stdio by default; pass transport="http" for remote deployments
```

## key features & patterns

### decorator-based definitions
tools, resources, and prompts are defined using plain python functions decorated with `@mcp.tool`, `@mcp.resource`, and `@mcp.prompt`. fastmcp infers schemas from type hints and docstrings.

### context injection
the `Context` object can be injected into tools, resources, or prompts, providing access to mcp session capabilities like logging (`ctx.info()`), sending progress updates (`ctx.report_progress()`), and making llm sampling requests (`ctx.sample()`).

### flexible transports
fastmcp abstracts the underlying transport. servers can run over stdio (for local integration) or streamable http (for remote deployments) with minimal code changes.

### server composition & proxying
fastmcp supports advanced patterns like combining multiple mcp servers into a single endpoint or proxying requests to other mcp servers.

### authentication
built-in support for various authentication mechanisms, including oauth providers and custom api keys.

### client library
provides a `fastmcp.Client` for programmatic interaction with any mcp server, supporting diverse transports and server-initiated llm sampling.

## sources

- [github.com/jlowin/fastmcp](https://github.com/jlowin/fastmcp) - fastmcp source code and documentation
- [pypi.org/project/fastmcp](https://pypi.org/project/fastmcp/) - fastmcp package on pypi
````
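the "infers schemas from type hints and docstrings" behavior in the new fastmcp page can be demystified with stdlib introspection. a rough sketch of the idea only, not fastmcp's actual implementation or output format:

```python
import inspect
from typing import get_type_hints

# minimal python-type -> json-schema-type table for the sketch
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Derive a tool schema from a plain python function."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # only parameters go in the input schema
    props = {name: {"type": PY_TO_JSON.get(tp, "object")} for name, tp in hints.items()}
    return {
        "name": fn.__name__,
        "description": (inspect.getdoc(fn) or "").split("\n")[0],
        "inputSchema": {"type": "object", "properties": props, "required": list(props)},
    }

def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b
```

applied to `add_numbers`, this yields a name, a description pulled from the docstring, and an object schema with integer properties `a` and `b`.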
### protocols/README.md (+1)

```diff
@@ -5,3 +5,4 @@
 ## contents
 
 - [atproto](./atproto/) - the AT Protocol for connected clouds
+- [mcp](./MCP/) - model context protocol
```
### protocols/atproto/README.md (+10 -1)

````diff
@@ -2,6 +2,14 @@
 
 the AT Protocol. an open protocol for decentralized social applications.
 
+## philosophy
+
+**atmospheric computing**: a paradigm of "connected clouds." if traditional servers are "the cloud" (centralized, closed), the AT Protocol creates an "atmosphere" where millions of personal clouds float and interoperate.
+
+- **sovereign**: users run their own "personal cloud" (PDS) and own their identity.
+- **connected**: services aggregate data from these personal clouds to build shared experiences.
+- **open**: applications compete on service quality, not by locking away data.
+
 ## architecture
 
 ```
@@ -32,6 +40,6 @@
 
 ### components
 
-**PDS (Personal Data Server)** hosts user accounts. stores the data repo (a signed merkle tree of records), handles authentication, emits changes to a firehose stream. each user's data lives on one PDS, but users can migrate between providers.
+**PDS (Personal Data Server)** is your "personal cloud". it hosts your account, stores your data repo (a signed merkle tree), handles auth, and emits changes. users can migrate their PDS without losing identity or data.
 
 **Relay** aggregates firehose streams from many PDSes into one. an optimization - downstream services subscribe to one relay instead of thousands of PDSes. multiple relays can exist; anyone can run one.
@@ -63,6 +71,7 @@
 ## sources
 
 - [atproto.com](https://atproto.com) - official documentation
+- [atmospheric computing](https://www.pfrazee.com/blog/atmospheric-computing) - paul frazee on the "connected clouds" paradigm
 - [introduction to atproto](https://mackuba.eu/2025/08/20/introduction-to-atproto/) - mackuba
 - [federation architecture](https://bsky.social/about/blog/5-5-2023-federation-architecture) - bluesky
 - [plyr.fm](https://github.com/zzstoatzz/plyr.fm) - music streaming on atproto
````
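the atproto hunks above lean on "signed merkle tree" doing a lot of work. to make it concrete, here is a toy pairwise merkle root: a PDS signing this one root commits to every record at once, and any tampered record changes the root and invalidates the signature. atproto's real repos use a merkle search tree keyed by record path, so this is illustrative only.

```python
import hashlib

def merkle_root(records: list) -> str:
    """Compute a toy pairwise merkle root over a list of byte records."""
    if not records:
        return hashlib.sha256(b"").hexdigest()
    level = [hashlib.sha256(r).digest() for r in records]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0].hex()
```

verification needs only the signature and the hashes, not trust in whichever host happens to serve the repo.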
### protocols/atproto/data.md (+5 -5)

```diff
@@ -103,9 +103,9 @@
 
 ## why this matters
 
-the "each user is one database" model enables:
+the "each user is one database" model is the foundation of **atmospheric computing**:
 
-- **portability**: your data is yours, stored in your PDS
-- **verification**: anyone can verify record authenticity via signatures
-- **aggregation**: applications build views across users without centralizing storage
-- **interop**: multiple apps can read the same records if they share schemas
+- **portability**: your "personal cloud" is yours. if a host fails, you move your data elsewhere.
+- **verification**: trust is cryptographic. you verify the data signature, not the provider.
+- **aggregation**: applications weave together data from millions of personal clouds into a cohesive "atmosphere."
+- **interop**: apps share schemas, so my music player can read your social graph.
```