···3030 <div class="mx-6">
3131 These services may not be fully accessible until upgraded.
3232 <a class="underline text-red-800 dark:text-red-200"
3333- href="https://tangled.org/@tangled.org/core/tree/master/docs/migrations.md">
3333+ href="https://docs.tangled.org/migrating-knots-spindles.html#migrating-knots-spindles">
3434 Click to read the upgrade guide</a>.
3535 </div>
3636 </details>
+9-29
appview/pages/templates/brand/brand.html
···44<div class="grid grid-cols-10">
55 <header class="col-span-full md:col-span-10 px-6 py-2 mb-4">
66 <h1 class="text-2xl font-bold dark:text-white mb-1">Brand</h1>
77- <p class="text-gray-600 dark:text-gray-400 mb-1">
77+ <p class="text-gray-500 dark:text-gray-300 mb-1">
88 Assets and guidelines for using Tangled's logo and brand elements.
99 </p>
1010 </header>
···14141515 <!-- Introduction Section -->
1616 <section>
1717- <p class="text-gray-600 dark:text-gray-400 mb-2">
1717+ <p class="text-gray-500 dark:text-gray-300 mb-2">
1818 Tangled's logo and mascot is <strong>Dolly</strong>, the first ever <em>cloned</em> mammal. Please
1919 follow the guidelines below when using Dolly and the logotype.
2020 </p>
2121- <p class="text-gray-600 dark:text-gray-400 mb-2">
2121+ <p class="text-gray-500 dark:text-gray-300 mb-2">
2222 All assets are served as SVGs, and can be downloaded by right-clicking and selecting "Save image as".
2323 </p>
2424 </section>
···3434 </div>
3535 <div class="order-1 lg:order-2">
3636 <h2 class="text-xl font-semibold dark:text-white mb-3">Black logotype</h2>
3737- <p class="text-gray-600 dark:text-gray-400 mb-4">For use on light-colored backgrounds.</p>
3737+ <p class="text-gray-500 dark:text-gray-300 mb-4">For use on light-colored backgrounds.</p>
3838 <p class="text-gray-700 dark:text-gray-300">
3939 This is the preferred version of the logotype, featuring dark text and elements, ideal for light
4040 backgrounds and designs.
···5353 </div>
5454 <div class="order-1 lg:order-2">
5555 <h2 class="text-xl font-semibold dark:text-white mb-3">White logotype</h2>
5656- <p class="text-gray-600 dark:text-gray-400 mb-4">For use on dark-colored backgrounds.</p>
5656+ <p class="text-gray-500 dark:text-gray-300 mb-4">For use on dark-colored backgrounds.</p>
5757 <p class="text-gray-700 dark:text-gray-300">
5858 This version features white text and elements, ideal for dark backgrounds
5959 and inverted designs.
···8181 </div>
8282 <div class="order-1 lg:order-2">
8383 <h2 class="text-xl font-semibold dark:text-white mb-3">Mark only</h2>
8484- <p class="text-gray-600 dark:text-gray-400 mb-4">
8484+ <p class="text-gray-500 dark:text-gray-300 mb-4">
8585 When a smaller 1:1 logo or icon is needed, Dolly's face may be used on its own.
8686 </p>
8787 <p class="text-gray-700 dark:text-gray-300 mb-4">
···123123 </div>
124124 <div class="order-1 lg:order-2">
125125 <h2 class="text-xl font-semibold dark:text-white mb-3">Colored backgrounds</h2>
126126- <p class="text-gray-600 dark:text-gray-400 mb-4">
126126+ <p class="text-gray-500 dark:text-gray-300 mb-4">
127127 White logo mark on colored backgrounds.
128128 </p>
129129 <p class="text-gray-700 dark:text-gray-300 mb-4">
···165165 </div>
166166 <div class="order-1 lg:order-2">
167167 <h2 class="text-xl font-semibold dark:text-white mb-3">Lighter backgrounds</h2>
168168- <p class="text-gray-600 dark:text-gray-400 mb-4">
168168+ <p class="text-gray-500 dark:text-gray-300 mb-4">
169169 Dark logo mark on lighter, pastel backgrounds.
170170 </p>
171171 <p class="text-gray-700 dark:text-gray-300 mb-4">
···186186 </div>
187187 <div class="order-1 lg:order-2">
188188 <h2 class="text-xl font-semibold dark:text-white mb-3">Recoloring</h2>
189189- <p class="text-gray-600 dark:text-gray-400 mb-4">
189189+ <p class="text-gray-500 dark:text-gray-300 mb-4">
190190 Custom coloring of the logotype is permitted.
191191 </p>
192192 <p class="text-gray-700 dark:text-gray-300 mb-4">
···194194 </p>
195195 <p class="text-gray-700 dark:text-gray-300 text-sm">
196196 <strong>Example:</strong> Gray/sand colored logotype on a light yellow/tan background.
197197- </p>
198198- </div>
199199- </section>
200200-201201- <!-- Silhouette Section -->
202202- <section class="grid grid-cols-1 lg:grid-cols-2 gap-8 items-center">
203203- <div class="order-2 lg:order-1">
204204- <div class="border border-gray-200 dark:border-gray-700 p-8 sm:p-16 bg-gray-50 dark:bg-gray-100 rounded">
205205- <img src="https://assets.tangled.network/tangled_dolly_silhouette.svg"
206206- alt="Dolly silhouette"
207207- class="w-full max-w-32 mx-auto" />
208208- </div>
209209- </div>
210210- <div class="order-1 lg:order-2">
211211- <h2 class="text-xl font-semibold dark:text-white mb-3">Dolly silhouette</h2>
212212- <p class="text-gray-600 dark:text-gray-400 mb-4">A minimalist version of Dolly.</p>
213213- <p class="text-gray-700 dark:text-gray-300">
214214- The silhouette can be used where a subtle brand presence is needed,
215215- or as a background element. Works on any background color with proper contrast.
216216- For example, we use this as the site's favicon.
217197 </p>
218198 </div>
219199 </section>
···2222 <p class="text-gray-500 dark:text-gray-400">
2323 Choose a spindle to execute your workflows on. Only repository owners
2424 can configure spindles. Spindles can be self-hosted;
2525- <a class="text-gray-500 dark:text-gray-400 underline" href="https://tangled.org/@tangled.org/core/blob/master/docs/spindle/hosting.md">
2525+ <a class="text-gray-500 dark:text-gray-400 underline" href="https://docs.tangled.org/spindles.html#self-hosting-guide">
2626 click to learn more.
2727 </a>
2828 </p>
···11+---
22+title: Tangled docs
33+author: The Tangled Contributors
44+date: Sun, 21 Dec 2025
55+abstract: |
66+ Tangled is a decentralized code hosting and collaboration
77+ platform. Every component of Tangled is open-source and
88+ self-hostable. [tangled.org](https://tangled.org) also
99+ provides hosting and CI services that are free to use.
1010+1111+ There are several models for decentralized code
1212+ collaboration platforms, ranging from ActivityPub's
1313+ (Forgejo) federated model to Radicle's entirely P2P model.
1414+ Our approach attempts to be the best of both worlds by
1515+ adopting the AT Protocol, a protocol for building decentralized
1616+ social applications with a central identity.
1717+1818+ At the heart of this approach are "knots". Knots are
1919+ lightweight, headless servers that enable users to host Git
2020+ repositories with ease. Knots are designed for either single
2121+ or multi-tenant use, which makes them perfect for self-hosting on a
2222+ Raspberry Pi at home, or for larger "community" servers. By
2323+ default, Tangled provides managed knots where you can host
2424+ your repositories for free.
2525+2626+ The appview at tangled.org acts as a consolidated "view"
2727+ into the whole network, allowing users to access, clone and
2828+ contribute to repositories hosted across different knots
2929+ seamlessly.
3030+---
3131+3232+# Quick start guide
3333+3434+## Login or sign up
3535+3636+You can [log in](https://tangled.org) using your AT Protocol
3737+account. If you are unclear on what that means, simply head
3838+to the [signup](https://tangled.org/signup) page and create
3939+an account. By doing so, you will be choosing Tangled as
4040+your account provider (you will be granted a handle of the
4141+form `user.tngl.sh`).
4242+4343+In the AT Protocol network, users are free to choose their account
4444+provider (known as a "Personal Data Service", or PDS), and
4545+log in to applications that support AT accounts.
4646+4747+You can think of it as "one account for all of the atmosphere"!
4848+4949+If you already have an AT account (you may have one if you
5050+signed up for Bluesky, for example), you can log in with the
5151+same handle on Tangled (so just use `user.bsky.social` on
5252+the login page).
5353+5454+## Add an SSH key
5555+5656+Once you are logged in, you can start creating repositories
5757+and pushing code. Tangled supports pushing git repositories
5858+over SSH.
5959+6060+First, you'll need to generate an SSH key if you don't
6161+already have one:
6262+6363+```bash
6464+ssh-keygen -t ed25519 -C "foo@bar.com"
6565+```
6666+6767+When prompted, save the key to the default location
6868+(`~/.ssh/id_ed25519`) and optionally set a passphrase.
6969+7070+Copy your public key to your clipboard:
7171+7272+```bash
7373+# on X11
7474+cat ~/.ssh/id_ed25519.pub | xclip -sel c
7575+7676+# on wayland
7777+cat ~/.ssh/id_ed25519.pub | wl-copy
7878+7979+# on macos
8080+cat ~/.ssh/id_ed25519.pub | pbcopy
8181+```
8282+8383+Now, navigate to 'Settings' -> 'Keys' and hit 'Add Key',
8484+paste your public key, give it a descriptive name, and hit
8585+save.
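You can optionally check that the key is picked up before pushing anything. The exact greeting depends on the server, but assuming the usual Git-over-SSH behavior you should not be prompted for a password or see a key rejection:

```bash
# -T skips allocating an interactive terminal
ssh -T git@tangled.org
```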
8686+8787+## Create a repository
8888+8989+Once your SSH key is added, create your first repository:
9090+9191+1. Hit the green `+` icon on the topbar, and select
9292+ repository
9393+2. Enter a repository name
9494+3. Add a description
9595+4. Choose a knotserver to host this repository on
9696+5. Hit create
9797+9898+Knots are self-hostable, lightweight Git servers that can
9999+host your repository. Unlike traditional code forges, your
100100+code can live on any server. Read the [Knots](TODO) section
101101+for more.
102102+103103+## Configure SSH
104104+105105+To ensure Git uses the correct SSH key and connects smoothly
106106+to Tangled, add this configuration to your `~/.ssh/config`
107107+file:
108108+109109+```
110110+Host tangled.org
111111+ Hostname tangled.org
112112+ User git
113113+ IdentityFile ~/.ssh/id_ed25519
114114+ AddressFamily inet
115115+```
116116+117117+This tells SSH to use your specific key when connecting to
118118+Tangled and prevents authentication issues if you have
119119+multiple SSH keys.
120120+121121+Note that this configuration only works for knotservers that
122122+are hosted by tangled.org. If you use a custom knot, refer
123123+to the [Knots](TODO) section.
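If you later use a self-hosted knot instead (say at `knot.example.com`, a hypothetical hostname), the equivalent block simply points at that host; knots use the same `git` user:

```
Host knot.example.com
    Hostname knot.example.com
    User git
    IdentityFile ~/.ssh/id_ed25519
```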
124124+125125+## Push your first repository
126126+127127+Initialize a new Git repository:
128128+129129+```bash
130130+mkdir my-project
131131+cd my-project
132132+133133+git init
134134+echo "# My Project" > README.md
135135+```
136136+137137+Add some content and push!
138138+139139+```bash
140140+git add README.md
141141+git commit -m "Initial commit"
142142+git remote add origin git@tangled.org:user.tngl.sh/my-project
143143+git push -u origin main
144144+```
145145+146146+That's it! Your code is now hosted on Tangled.
147147+148148+## Migrating an existing repository
149149+150150+Moving your repositories from GitHub, GitLab, Bitbucket, or
151151+any other Git forge to Tangled is straightforward. You'll
152152+simply change your repository's remote URL. At the moment,
153153+Tangled does not have any tooling to migrate data such as
154154+GitHub issues or pull requests.
155155+156156+First, create a new repository on tangled.org as described
157157+in the [Quick Start Guide](#create-a-repository).
158158+159159+Navigate to your existing local repository:
160160+161161+```bash
162162+cd /path/to/your/existing/repo
163163+```
164164+165165+You can inspect your existing Git remote like so:
166166+167167+```bash
168168+git remote -v
169169+```
170170+171171+You'll see something like:
172172+173173+```
174174+origin git@github.com:username/my-project (fetch)
175175+origin git@github.com:username/my-project (push)
176176+```
177177+178178+Update the remote URL to point to Tangled:
179179+180180+```bash
181181+git remote set-url origin git@tangled.org:user.tngl.sh/my-project
182182+```
183183+184184+Verify the change:
185185+186186+```bash
187187+git remote -v
188188+```
189189+190190+You should now see:
191191+192192+```
193193+origin git@tangled.org:user.tngl.sh/my-project (fetch)
194194+origin git@tangled.org:user.tngl.sh/my-project (push)
195195+```
196196+197197+Push all your branches and tags to Tangled:
198198+199199+```bash
200200+git push -u origin --all
201201+git push -u origin --tags
202202+```
203203+204204+Your repository is now migrated to Tangled! All commit
205205+history, branches, and tags have been preserved.
206206+207207+## Mirroring a repository to Tangled
208208+209209+If you want to maintain your repository on multiple forges
210210+simultaneously, for example, keeping your primary repository
211211+on GitHub while mirroring to Tangled for backup or
212212+redundancy, you can do so by adding multiple remotes.
213213+214214+You can configure your local repository to push to both
215215+Tangled and, say, GitHub. You may already have the following
216216+setup:
217217+218218+```
219219+$ git remote -v
220220+origin git@github.com:username/my-project (fetch)
221221+origin git@github.com:username/my-project (push)
222222+```
223223+224224+Now add Tangled as an additional push URL to the same
225225+remote:
226226+227227+```bash
228228+git remote set-url --add --push origin git@tangled.org:user.tngl.sh/my-project
229229+```
230230+231231+You also need to re-add the original URL as a push
232232+destination (once an explicit push URL is set, Git no longer
233233+falls back to the fetch URL when pushing):
234234+235235+```bash
236236+git remote set-url --add --push origin git@github.com:username/my-project
237237+```
238238+239239+Verify your configuration:
240240+241241+```
242242+$ git remote -v
243243+origin git@github.com:username/my-project (fetch)
244244+origin git@tangled.org:user.tngl.sh/my-project (push)
245245+origin git@github.com:username/my-project (push)
246246+```
247247+248248+Notice that there's one fetch URL (the primary remote) and
249249+two push URLs. Now, whenever you push, Git will
250250+automatically push to both destinations:
251251+252252+```bash
253253+git push origin main
254254+```
255255+256256+This single command pushes your `main` branch to both GitHub
257257+and Tangled simultaneously.
258258+259259+To push all branches and tags:
260260+261261+```bash
262262+git push origin --all
263263+git push origin --tags
264264+```
265265+266266+If you prefer more control over which remote you push to,
267267+you can maintain separate remotes:
268268+269269+```bash
270270+git remote add github git@github.com:username/my-project
271271+git remote add tangled git@tangled.org:user.tngl.sh/my-project
272272+```
273273+274274+Then push to each explicitly:
275275+276276+```bash
277277+git push github main
278278+git push tangled main
279279+```
280280+281281+# Knot self-hosting guide
282282+283283+So you want to run your own knot server? Great! Here are a few prerequisites:
285285+1. A server of some kind (a VPS, a Raspberry Pi, etc.), preferably running a Linux distribution.
286286+2. A (sub)domain name. People generally use `knot.example.com`.
287287+3. A valid SSL certificate for your domain.
288288+289289+## NixOS
290290+291291+Refer to the [knot
292292+module](https://tangled.org/tangled.org/core/blob/master/nix/modules/knot.nix)
293293+for a full list of options. Sample configurations:
294294+295295+- [The test VM](https://tangled.org/tangled.org/core/blob/master/nix/vm.nix#L85)
296296+- [@pyrox.dev/nix](https://tangled.org/pyrox.dev/nix/blob/d19571cc1b5fe01035e1e6951ec8cf8a476b4dee/hosts/marvin/services/tangled.nix#L15-25)
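For orientation, a minimal configuration looks roughly like this (a sketch using only the options shown elsewhere in these docs; see the module for the full option set):

```nix
{
  services.tangled.knot = {
    enable = true;
    server = {
      # your DID, from https://tangled.org/settings
      owner = "did:plc:foobar";
    };
  };
}
```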
297297+298298+## Docker
299299+300300+Refer to
301301+[@tangled.org/knot-docker](https://tangled.org/@tangled.org/knot-docker).
302302+Note that this is community maintained.
303303+304304+## Manual setup
305305+306306+First, clone the `core` repository:
307307+308308+```
309309+git clone https://tangled.org/@tangled.org/core
310310+```
311311+312312+Then, build the `knot` CLI. This is the knot administration
313313+and operation tool. For the purpose of this guide, we're
314314+only concerned with these subcommands:
315315+316316+ * `knot server`: the main knot server process, typically
317317+ run as a supervised service
318318+ * `knot guard`: handles role-based access control for git
319319+ over SSH (you'll never have to run this yourself)
320320+ * `knot keys`: fetches SSH keys associated with your knot;
321321+ we'll use this to generate the SSH
322322+ `AuthorizedKeysCommand`
323323+324324+```
325325+cd core
326326+export CGO_ENABLED=1
327327+go build -o knot ./cmd/knot
328328+```
329329+330330+Next, move the `knot` binary to a location owned by `root` --
331331+`/usr/local/bin/` is a good choice. Make sure the binary itself is also owned by `root`:
332332+333333+```
334334+sudo mv knot /usr/local/bin/knot
335335+sudo chown root:root /usr/local/bin/knot
336336+```
337337+338338+This is necessary because SSH `AuthorizedKeysCommand` requires [really
339339+specific permissions](https://stackoverflow.com/a/27638306). The
340340+`AuthorizedKeysCommand` specifies a command that is run by `sshd` to
341341+retrieve a user's public SSH keys dynamically for authentication. Let's
342342+set that up.
343343+344344+```
345345+sudo tee /etc/ssh/sshd_config.d/authorized_keys_command.conf <<EOF
346346+Match User git
347347+ AuthorizedKeysCommand /usr/local/bin/knot keys -o authorized-keys
348348+ AuthorizedKeysCommandUser nobody
349349+EOF
350350+```
351351+352352+Then, reload `sshd`:
353353+354354+```
355355+sudo systemctl reload ssh
356356+```
357357+358358+Next, create the `git` user. We'll use the `git` user's home directory
359359+to store repositories:
360360+361361+```
362362+sudo adduser git
363363+```
364364+365365+Create `/home/git/.knot.env` with the following, updating the values as
366366+necessary. The `KNOT_SERVER_OWNER` should be set to your
367367+DID; you can find your DID on the [Settings](https://tangled.org/settings) page.
368368+369369+```
370370+KNOT_REPO_SCAN_PATH=/home/git
371371+KNOT_SERVER_HOSTNAME=knot.example.com
372372+APPVIEW_ENDPOINT=https://tangled.org
373373+KNOT_SERVER_OWNER=did:plc:foobar
374374+KNOT_SERVER_INTERNAL_LISTEN_ADDR=127.0.0.1:5444
375375+KNOT_SERVER_LISTEN_ADDR=127.0.0.1:5555
376376+```
377377+378378+If you run a Linux distribution that uses systemd, you can use the provided
379379+service file to run the server. Copy
380380+[`knotserver.service`](/systemd/knotserver.service)
381381+to `/etc/systemd/system/`. Then, run:
382382+383383+```
384384+systemctl enable knotserver
385385+systemctl start knotserver
386386+```
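For reference, the unit essentially runs `knot server` as the `git` user with the environment file created above; a minimal sketch (the canonical unit ships in the repository) looks like:

```
[Unit]
Description=knot server
After=network.target

[Service]
User=git
EnvironmentFile=/home/git/.knot.env
ExecStart=/usr/local/bin/knot server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```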
387387+388388+The last step is to configure a reverse proxy like Nginx or Caddy to front your
389389+knot. Here's an example configuration for Nginx:
390390+391391+```
392392+server {
393393+ listen 80;
394394+ listen [::]:80;
395395+ server_name knot.example.com;
396396+397397+ location / {
398398+ proxy_pass http://localhost:5555;
399399+ proxy_set_header Host $host;
400400+ proxy_set_header X-Real-IP $remote_addr;
401401+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
402402+ proxy_set_header X-Forwarded-Proto $scheme;
403403+ }
404404+405405+ # wss endpoint for git events
406406+ location /events {
407407+ proxy_set_header X-Forwarded-For $remote_addr;
408408+ proxy_set_header Host $http_host;
409409+ proxy_set_header Upgrade websocket;
410410+ proxy_set_header Connection Upgrade;
411411+ proxy_pass http://localhost:5555;
412412+ }
413413+ # additional config for SSL/TLS goes here.
414414+}
415415+416416+```
417417+418418+Remember to use Let's Encrypt or similar to procure a certificate for your
419419+knot domain.
420420+421421+You should now have a running knot server! You can finalize
422422+your registration by hitting the `verify` button on the
423423+[/settings/knots](https://tangled.org/settings/knots) page. This simply creates
424424+a record on your PDS to announce the existence of the knot.
425425+426426+### Custom paths
427427+428428+(This section applies to manual setup only. Docker users should edit the mounts
429429+in `docker-compose.yml` instead.)
430430+431431+Right now, the database and repositories of your knot live in `/home/git`. You
432432+can move these paths if you'd like to store them in another folder. Be careful
433433+when adjusting these paths:
434434+435435+* Stop your knot when moving data (e.g. `systemctl stop knotserver`) to prevent
436436+any possible side effects. Remember to restart it once you're done.
437437+* Make backups before moving in case something goes wrong.
438438+* Make sure the `git` user can read from and write to the new paths.
439439+440440+#### Database
441441+442442+As an example, let's say the current database is at `/home/git/knotserver.db`,
443443+and we want to move it to `/home/git/database/knotserver.db`.
444444+445445+Copy the current database to the new location. Make sure to copy the `.db-shm`
446446+and `.db-wal` files if they exist.
447447+448448+```
449449+mkdir /home/git/database
450450+cp /home/git/knotserver.db* /home/git/database
451451+```
452452+453453+In the environment (e.g. `/home/git/.knot.env`), set `KNOT_SERVER_DB_PATH` to
454454+the new file path (_not_ the directory):
455455+456456+```
457457+KNOT_SERVER_DB_PATH=/home/git/database/knotserver.db
458458+```
459459+460460+#### Repositories
461461+462462+As an example, let's say the repositories are currently in `/home/git`, and we
463463+want to move them into `/home/git/repositories`.
464464+465465+Create the new folder, then move the existing repositories (if there are any):
466466+467467+```
468468+mkdir /home/git/repositories
469469+# move all DIDs into the new folder; these will vary for you!
470470+mv /home/git/did:plc:wshs7t2adsemcrrd4snkeqli /home/git/repositories
471471+```
472472+473473+In the environment (e.g. `/home/git/.knot.env`), update `KNOT_REPO_SCAN_PATH`
474474+to the new directory:
475475+476476+```
477477+KNOT_REPO_SCAN_PATH=/home/git/repositories
478478+```
479479+480480+Similarly, update your `sshd` `AuthorizedKeysCommand` to use the updated
481481+repository path:
482482+483483+```
484484+sudo tee /etc/ssh/sshd_config.d/authorized_keys_command.conf <<EOF
485485+Match User git
486486+ AuthorizedKeysCommand /usr/local/bin/knot keys -o authorized-keys -git-dir /home/git/repositories
487487+ AuthorizedKeysCommandUser nobody
488488+EOF
489489+```
490490+491491+Make sure to restart your SSH server!
492492+493493+#### MOTD (message of the day)
494494+495495+To configure the MOTD used ("Welcome to this knot!" by default), edit the
496496+`/home/git/motd` file:
497497+498498+```
499499+printf "Hi from this knot!\n" > /home/git/motd
500500+```
501501+502502+Note that you should add a newline at the end if setting a non-empty message
503503+since the knot won't do this for you.
504504+505505+# Spindles
506506+507507+## Pipelines
508508+509509+Spindle workflows allow you to write CI/CD pipelines in a
510510+simple format. They're located in the `.tangled/workflows`
511511+directory at the root of your repository, and are defined
512512+using YAML.
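In other words, a repository with two workflows might look like this (file names are just examples; each YAML file defines one workflow):

```
.tangled/
└── workflows/
    ├── build.yml
    └── lint.yml
```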
513513+514514+The fields are:
515515+516516+- [Trigger](#trigger): A **required** field that defines
517517+ when a workflow should be triggered.
518518+- [Engine](#engine): A **required** field that defines which
519519+ engine a workflow should run on.
520520+- [Clone options](#clone-options): An **optional** field
521521+ that defines how the repository should be cloned.
522522+- [Dependencies](#dependencies): An **optional** field that
523523+ allows you to list dependencies you may need.
524524+- [Environment](#environment): An **optional** field that
525525+ allows you to define environment variables.
526526+- [Steps](#steps): An **optional** field that allows you to
527527+ define what steps should run in the workflow.
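Before looking at each field in detail, here is a minimal workflow that uses only the required fields plus a single step (values are illustrative):

```yaml
# .tangled/workflows/test.yml
when:
  - event: ["push"]
    branch: ["main"]

engine: "nixery"

dependencies:
  nixpkgs:
    - go

steps:
  - name: "Test"
    command: "go test ./..."
```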
528528+529529+### Trigger
530530+531531+The first thing to add to a workflow is the trigger, which
532532+defines when a workflow runs. This is defined using a `when`
533533+field, which takes in a list of conditions. Each condition
534534+has the following fields:
535535+536536+- `event`: This is a **required** field that defines when
537537+ your workflow should run. It's a list that can take one or
538538+ more of the following values:
539539+ - `push`: The workflow should run every time a commit is
540540+ pushed to the repository.
541541+ - `pull_request`: The workflow should run every time a
542542+ pull request is made or updated.
543543+ - `manual`: The workflow can be triggered manually.
544544+- `branch`: Defines which branches the workflow should run
545545+ for. If used with the `push` event, commits to the
546546+ branch(es) listed here will trigger the workflow. If used
547547+ with the `pull_request` event, updates to pull requests
548548+ targeting the branch(es) listed here will trigger the
549549+ workflow. This field has no effect with the `manual`
550550+ event. Supports glob patterns using `*` and `**` (e.g.,
551551+ `main`, `develop`, `release-*`). Either `branch` or `tag`
552552+ (or both) must be specified for `push` events.
553553+- `tag`: Defines which tags the workflow should run for.
554554+ Only used with the `push` event - when tags matching the
555555+ pattern(s) listed here are pushed, the workflow will
556556+ trigger. This field has no effect with `pull_request` or
557557+ `manual` events. Supports glob patterns using `*` and `**`
558558+ (e.g., `v*`, `v1.*`, `release-**`). Either `branch` or
559559+ `tag` (or both) must be specified for `push` events.
560560+561561+For example, if you'd like to define a workflow that runs
562562+when commits are pushed to the `main` and `develop`
563563+branches, or when pull requests that target the `main`
564564+branch are updated, or manually, you can do so with:
565565+566566+```yaml
567567+when:
568568+ - event: ["push", "manual"]
569569+ branch: ["main", "develop"]
570570+ - event: ["pull_request"]
571571+ branch: ["main"]
572572+```
573573+574574+You can also trigger workflows on tag pushes. For instance,
575575+to run a deployment workflow when tags matching `v*` are
576576+pushed:
577577+578578+```yaml
579579+when:
580580+ - event: ["push"]
581581+ tag: ["v*"]
582582+```
583583+584584+You can even combine branch and tag patterns in a single
585585+constraint (the workflow triggers if either matches):
586586+587587+```yaml
588588+when:
589589+ - event: ["push"]
590590+ branch: ["main", "release-*"]
591591+ tag: ["v*", "stable"]
592592+```
593593+594594+### Engine
595595+596596+Next is the engine on which the workflow should run, defined
597597+using the **required** `engine` field. The currently
598598+supported engines are:
599599+600600+- `nixery`: This uses an instance of
601601+ [Nixery](https://nixery.dev) to run steps, which allows
602602+ you to add [dependencies](#dependencies) from
603603+ Nixpkgs (https://github.com/NixOS/nixpkgs). You can
604604+ search for packages on https://search.nixos.org, and
605605+ there's a pretty good chance the package(s) you're looking
606606+ for will be there.
607607+608608+Example:
609609+610610+```yaml
611611+engine: "nixery"
612612+```
613613+614614+### Clone options
615615+616616+When a workflow starts, the first step is to clone the
617617+repository. You can customize this behavior using the
618618+**optional** `clone` field. It has the following fields:
619619+620620+- `skip`: Setting this to `true` will skip cloning the
621621+ repository. This can be useful if your workflow is doing
622622+ something that doesn't require anything from the
623623+ repository itself. This is `false` by default.
624624+- `depth`: This sets the number of commits, or the "clone
625625+ depth", to fetch from the repository. For example, if you
626626+ set this to 2, the last 2 commits will be fetched. By
627627+ default, the depth is set to 1, meaning only the most
628628+ recent commit will be fetched, which is the commit that
629629+ triggered the workflow.
630630+- `submodules`: If you use Git submodules
631631+ (https://git-scm.com/book/en/v2/Git-Tools-Submodules)
632632+ in your repository, setting this field to `true` will
633633+ recursively fetch all submodules. This is `false` by
634634+ default.
635635+636636+The default settings are:
637637+638638+```yaml
639639+clone:
640640+ skip: false
641641+ depth: 1
642642+ submodules: false
643643+```
644644+645645+### Dependencies
646646+647647+Usually when you're running a workflow, you'll need
648648+additional dependencies. The `dependencies` field lets you
649649+define which dependencies to get, and from where. It's a
650650+key-value map, with the key being the registry to fetch
651651+dependencies from, and the value being the list of
652652+dependencies to fetch.
653653+654654+Say you want to fetch Node.js and Go from `nixpkgs`, and a
655655+package called `my_pkg` you've made, available from your own
656656+repository at
657657+`https://tangled.org/@example.com/my_pkg`. You can define
658658+those dependencies like so:
659659+660660+```yaml
661661+dependencies:
662662+ # nixpkgs
663663+ nixpkgs:
664664+ - nodejs
665665+ - go
666666+ # custom registry
667667+ git+https://tangled.org/@example.com/my_pkg:
668668+ - my_pkg
669669+```
670670+671671+Now these dependencies are available to use in your
672672+workflow!
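For instance, a later [step](#steps) can invoke the tools pulled in above directly:

```yaml
steps:
  - name: "Show toolchain versions"
    command: "node --version && go version"
```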
673673+674674+### Environment
675675+676676+The `environment` field allows you to define environment
677677+variables that will be available throughout the entire
678678+workflow. **Do not put secrets here; these environment
679679+variables are visible to anyone viewing the repository. You
680680+can add secrets for pipelines in your repository's
681681+settings.**
682682+683683+Example:
684684+685685+```yaml
686686+environment:
687687+ GOOS: "linux"
688688+ GOARCH: "arm64"
689689+ NODE_ENV: "production"
690690+ MY_ENV_VAR: "MY_ENV_VALUE"
691691+```
692692+693693+### Steps
694694+695695+The `steps` field allows you to define what steps should run
696696+in the workflow. It's a list of step objects, each with the
697697+following fields:
698698+699699+- `name`: This field allows you to give your step a name.
700700+ This name is visible in your workflow runs, and is used to
701701+ describe what the step is doing.
702702+- `command`: This field allows you to define a command to
703703+ run in that step. The step is run in a Bash shell, and the
704704+ logs from the command will be visible in the pipelines
705705+ page on the Tangled website. The
706706+ [dependencies](#dependencies) you added will be available
707707+ to use here.
708708+- `environment`: Similar to the global
709709+ [environment](#environment) config, this **optional**
710710+ field is a key-value map that allows you to set
711711+ environment variables for the step. **Do not put secrets
712712+ here; these environment variables are visible to anyone
713713+ viewing the repository. You can add secrets for pipelines
714714+ in your repository's settings.**
715715+716716+Example:
717717+718718+```yaml
719719+steps:
720720+ - name: "Build backend"
721721+ command: "go build"
722722+ environment:
723723+ GOOS: "darwin"
724724+ GOARCH: "arm64"
725725+ - name: "Build frontend"
726726+ command: "npm run build"
727727+ environment:
728728+ NODE_ENV: "production"
729729+```
730730+731731+### Complete workflow
732732+733733+```yaml
734734+# .tangled/workflows/build.yml
735735+736736+when:
737737+ - event: ["push", "manual"]
738738+ branch: ["main", "develop"]
739739+ - event: ["pull_request"]
740740+ branch: ["main"]
741741+742742+engine: "nixery"
743743+744744+# using the default values
745745+clone:
746746+ skip: false
747747+ depth: 1
748748+ submodules: false
749749+750750+dependencies:
751751+ # nixpkgs
752752+ nixpkgs:
753753+ - nodejs
754754+ - go
755755+ # custom registry
756756+ git+https://tangled.org/@example.com/my_pkg:
757757+ - my_pkg
758758+759759+environment:
760760+ GOOS: "linux"
761761+ GOARCH: "arm64"
762762+ NODE_ENV: "production"
763763+ MY_ENV_VAR: "MY_ENV_VALUE"
764764+765765+steps:
766766+ - name: "Build backend"
767767+ command: "go build"
768768+ environment:
769769+ GOOS: "darwin"
770770+ GOARCH: "arm64"
771771+ - name: "Build frontend"
772772+ command: "npm run build"
773773+ environment:
774774+ NODE_ENV: "production"
775775+```
776776+777777+If you want another example of a workflow, you can look at
778778+the one [Tangled uses to build the
779779+project](https://tangled.org/@tangled.org/core/blob/master/.tangled/workflows/build.yml).
780780+781781+## Self-hosting guide
782782+783783+### Prerequisites
784784+785785+* Go
786786+* Docker (the only supported backend currently)
787787+788788+### Configuration
789789+790790+Spindle is configured using environment variables. The following environment variables are available:
791791+792792+* `SPINDLE_SERVER_LISTEN_ADDR`: The address the server listens on (default: `"0.0.0.0:6555"`).
793793+* `SPINDLE_SERVER_DB_PATH`: The path to the SQLite database file (default: `"spindle.db"`).
794794+* `SPINDLE_SERVER_HOSTNAME`: The hostname of the server (required).
795795+* `SPINDLE_SERVER_JETSTREAM_ENDPOINT`: The endpoint of the Jetstream server (default: `"wss://jetstream1.us-west.bsky.network/subscribe"`).
796796+* `SPINDLE_SERVER_DEV`: A boolean indicating whether the server is running in development mode (default: `false`).
797797+* `SPINDLE_SERVER_OWNER`: The DID of the owner (required).
798798+* `SPINDLE_PIPELINES_NIXERY`: The Nixery URL (default: `"nixery.tangled.sh"`).
799799+* `SPINDLE_PIPELINES_WORKFLOW_TIMEOUT`: The default workflow timeout (default: `"5m"`).
800800+* `SPINDLE_PIPELINES_LOG_DIR`: The directory to store workflow logs (default: `"/var/log/spindle"`).
801801+802802+### Running spindle
803803+804804+1. **Set the environment variables.** For example:
805805+806806+ ```shell
807807+ export SPINDLE_SERVER_HOSTNAME="your-hostname"
808808+ export SPINDLE_SERVER_OWNER="your-did"
809809+ ```
810810+811811+2. **Build the Spindle binary.**
812812+813813+ ```shell
814814+ cd core
815815+ go mod download
816816+ go build -o cmd/spindle/spindle cmd/spindle/main.go
817817+ ```
818818+819819+3. **Create the log directory.**
820820+821821+ ```shell
822822+ sudo mkdir -p /var/log/spindle
823823+ sudo chown $USER:$USER -R /var/log/spindle
824824+ ```
825825+826826+4. **Run the Spindle binary.**
827827+828828+ ```shell
829829+ ./cmd/spindle/spindle
830830+ ```
831831+832832+Spindle will now start, connect to the Jetstream server, and begin processing pipelines.
833833+834834+## Architecture
835835+836836+Spindle is a small CI runner service. Here's a high-level overview of how it operates:
837837+838838+* Listens for [`sh.tangled.spindle.member`](/lexicons/spindle/member.json) and
839839+[`sh.tangled.repo`](/lexicons/repo.json) records on the Jetstream.
840840+* When a new repo record comes through (typically when you add a spindle to a
841841+repo from the settings), spindle then resolves the underlying knot and
842842+subscribes to repo events (see:
843843+[`sh.tangled.pipeline`](/lexicons/pipeline.json)).
844844+* The spindle engine then handles execution of the pipeline, with results and
845845+logs beamed on the spindle event stream over WebSocket.
846846+847847+### The engine
848848+849849+At present, the only supported backend is Docker (and Podman, if Docker
850850+compatibility is enabled, so that `/run/docker.sock` is created). Spindle
851851+executes each step in the pipeline in a fresh container, with state persisted
852852+across steps within the `/tangled/workspace` directory.
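Because the workspace carries over, one step can produce files that a later step consumes; a small illustration using the workflow format described above:

```yaml
steps:
  - name: "Produce an artifact"
    command: "echo hello > artifact.txt"
  - name: "Use it in the next step"
    # a fresh container, but the same /tangled/workspace
    command: "cat artifact.txt"
```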
853853+854854+The base image for the container is constructed on the fly using
855855+[Nixery](https://nixery.dev), which is handy for caching layers for frequently
856856+used packages.
857857+858858+The pipeline manifest is [specified here](https://docs.tangled.org/spindles.html#pipelines).
859859+860860+## Secrets with openbao
861861+862862+This section covers setting up spindle to use OpenBao for secrets
863863+management via OpenBao Proxy instead of the default SQLite backend.
864864+865865+### Overview
866866+867867+Spindle now uses OpenBao Proxy for secrets management. The proxy handles
868868+authentication automatically using AppRole credentials, while spindle
869869+connects to the local proxy instead of directly to the OpenBao server.
870870+871871+This approach provides better security, automatic token renewal, and
872872+simplified application code.
873873+874874+### Installation
875875+876876+Install OpenBao from Nixpkgs:
877877+878878+```bash
879879+nix shell nixpkgs#openbao # for a local server
880880+```
881881+882882+### Setup
883883+884884+The setup process is documented for both local development and production.
885885+886886+#### Local development
887887+888888+Start OpenBao in dev mode:
889889+890890+```bash
891891+bao server -dev -dev-root-token-id="root" -dev-listen-address=127.0.0.1:8200
892892+```
893893+894894+This starts OpenBao on `http://localhost:8200` with a root token.
895895+896896+Set up environment for bao CLI:
897897+898898+```bash
899899+export BAO_ADDR=http://localhost:8200
900900+export BAO_TOKEN=root
901901+```
902902+903903+#### Production
904904+905905+You would typically use a systemd service with a
906906+configuration file. Refer to
907907+[@tangled.org/infra](https://tangled.org/@tangled.org/infra)
908908+for how this can be achieved using Nix.
909909+910910+Then, initialize the bao server:
911911+912912+```bash
913913+bao operator init -key-shares=1 -key-threshold=1
914914+```
915915+916916+This will print out an unseal key and a root token. Save them
917917+somewhere (like a password manager). Then unseal the vault
918918+to begin setting it up:
919919+920920+```bash
921921+bao operator unseal <unseal_key>
922922+```
923923+924924+All steps below remain the same across both dev and
925925+production setups.
926926+927927+#### Configure openbao server
928928+929929+Create the spindle KV mount:
930930+931931+```bash
932932+bao secrets enable -path=spindle -version=2 kv
933933+```
934934+935935+Set up AppRole authentication and policy:
936936+937937+Create a policy file `spindle-policy.hcl`:
938938+939939+```hcl
940940+# Full access to spindle KV v2 data
941941+path "spindle/data/*" {
942942+ capabilities = ["create", "read", "update", "delete"]
943943+}
944944+945945+# Access to metadata for listing and management
946946+path "spindle/metadata/*" {
947947+ capabilities = ["list", "read", "delete", "update"]
948948+}
949949+950950+# Allow listing at root level
951951+path "spindle/" {
952952+ capabilities = ["list"]
953953+}
954954+955955+# Required for connection testing and health checks
956956+path "auth/token/lookup-self" {
957957+ capabilities = ["read"]
958958+}
959959+```
960960+961961+Apply the policy and create an AppRole:
962962+963963+```bash
964964+bao policy write spindle-policy spindle-policy.hcl
965965+bao auth enable approle
966966+bao write auth/approle/role/spindle \
967967+ token_policies="spindle-policy" \
968968+ token_ttl=1h \
969969+ token_max_ttl=4h \
970970+ bind_secret_id=true \
971971+ secret_id_ttl=0 \
972972+ secret_id_num_uses=0
973973+```
974974+975975+Get the credentials:
976976+977977+```bash
978978+# Get role ID (static)
979979+ROLE_ID=$(bao read -field=role_id auth/approle/role/spindle/role-id)
980980+981981+# Generate secret ID
982982+SECRET_ID=$(bao write -f -field=secret_id auth/approle/role/spindle/secret-id)
983983+984984+echo "Role ID: $ROLE_ID"
985985+echo "Secret ID: $SECRET_ID"
986986+```
987987+988988+#### Create proxy configuration
989989+990990+Create the credential files:
991991+992992+```bash
993993+# Create directory for OpenBao files
994994+mkdir -p /tmp/openbao
995995+996996+# Save credentials
997997+echo "$ROLE_ID" > /tmp/openbao/role-id
998998+echo "$SECRET_ID" > /tmp/openbao/secret-id
999999+chmod 600 /tmp/openbao/role-id /tmp/openbao/secret-id
10001000+```
10011001+10021002+Create a proxy configuration file `/tmp/openbao/proxy.hcl`:
10031003+10041004+```hcl
10051005+# OpenBao server connection
10061006+vault {
10071007+ address = "http://localhost:8200"
10081008+}
10091009+10101010+# Auto-Auth using AppRole
10111011+auto_auth {
10121012+ method "approle" {
10131013+ mount_path = "auth/approle"
10141014+ config = {
10151015+ role_id_file_path = "/tmp/openbao/role-id"
10161016+ secret_id_file_path = "/tmp/openbao/secret-id"
10171017+ }
10181018+ }
10191019+10201020+ # Optional: write token to file for debugging
10211021+ sink "file" {
10221022+ config = {
10231023+ path = "/tmp/openbao/token"
10241024+ mode = 0640
10251025+ }
10261026+ }
10271027+}
10281028+10291029+# Proxy listener for spindle
10301030+listener "tcp" {
10311031+ address = "127.0.0.1:8201"
10321032+ tls_disable = true
10331033+}
10341034+10351035+# Enable API proxy with auto-auth token
10361036+api_proxy {
10371037+ use_auto_auth_token = true
10381038+}
10391039+10401040+# Enable response caching
10411041+cache {
10421042+ use_auto_auth_token = true
10431043+}
10441044+10451045+# Logging
10461046+log_level = "info"
10471047+```
10481048+10491049+#### Start the proxy
10501050+10511051+Start OpenBao Proxy:
10521052+10531053+```bash
10541054+bao proxy -config=/tmp/openbao/proxy.hcl
10551055+```
10561056+10571057+The proxy will authenticate with OpenBao and start listening on
10581058+`127.0.0.1:8201`.
10591059+10601060+#### Configure spindle
10611061+10621062+Set these environment variables for spindle:
10631063+10641064+```bash
10651065+export SPINDLE_SERVER_SECRETS_PROVIDER=openbao
10661066+export SPINDLE_SERVER_SECRETS_OPENBAO_PROXY_ADDR=http://127.0.0.1:8201
10671067+export SPINDLE_SERVER_SECRETS_OPENBAO_MOUNT=spindle
10681068+```
10691069+10701070+On startup, spindle will now connect to the local proxy,
10711071+which handles all authentication automatically.
10721072+10731073+### Production setup for proxy
10741074+10751075+For production, you'll want to run the proxy as a service.
10761076+10771077+Place your production configuration in
10781078+`/etc/openbao/proxy.hcl` with proper TLS settings for the
10791079+vault connection.
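As a sketch (hostname and certificate path are placeholders), the main difference from the dev configuration is the `vault` block:

```hcl
vault {
  address = "https://openbao.example.com:8200"
  # if the server uses a private CA:
  # ca_cert = "/etc/openbao/ca.pem"
}
```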
10801080+10811081+### Verifying setup
10821082+10831083+Test the proxy directly:
10841084+10851085+```bash
10861086+# Check proxy health
10871087+curl -H "X-Vault-Request: true" http://127.0.0.1:8201/v1/sys/health
10881088+10891089+# Test token lookup through proxy
10901090+curl -H "X-Vault-Request: true" http://127.0.0.1:8201/v1/auth/token/lookup-self
10911091+```
10921092+10931093+Test OpenBao operations through the server:
10941094+10951095+```bash
10961096+# List all secrets
10971097+bao kv list spindle/
10981098+10991099+# Add a test secret via the spindle API, then check it exists
11001100+bao kv list spindle/repos/
11011101+11021102+# Get a specific secret
11031103+bao kv get spindle/repos/your_repo_path/SECRET_NAME
11041104+```
11051105+11061106+### How it works
11071107+11081108+- Spindle connects to OpenBao Proxy on localhost (typically
11091109+ port 8200 or 8201)
11101110+- The proxy authenticates with OpenBao using AppRole
11111111+ credentials
11121112+- All spindle requests go through the proxy, which injects
11131113+ authentication tokens
11141114+- Secrets are stored at
11151115+ `spindle/repos/{sanitized_repo_path}/{secret_key}`
11161116+- Repository paths like `did:plc:alice/myrepo` become
11171117+ `did_plc_alice_myrepo`
11181118+- The proxy handles all token renewal automatically
11191119+- Spindle no longer manages tokens or authentication
11201120+ directly
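To make the path mapping concrete: a secret named `DEPLOY_TOKEN` (a hypothetical name) added for the repository `did:plc:alice/myrepo` can be read back with:

```bash
bao kv get spindle/repos/did_plc_alice_myrepo/DEPLOY_TOKEN
```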
11211121+11221122+### Troubleshooting
11231123+11241124+**Connection refused**: Check that the OpenBao Proxy is
11251125+running and listening on the configured address.
11261126+11271127+**403 errors**: Verify the AppRole credentials are correct
11281128+and the policy has the necessary permissions.
11291129+11301130+**404 route errors**: The spindle KV mount probably doesn't
11311131+exist -- run the mount creation step again.
11321132+11331133+**Proxy authentication failures**: Check the proxy logs and
11341134+verify the role-id and secret-id files are readable and
11351135+contain valid credentials.
11361136+11371137+**Secret not found after writing**: This can indicate policy
11381138+permission issues. Verify the policy includes both
11391139+`spindle/data/*` and `spindle/metadata/*` paths with
11401140+appropriate capabilities.
11411141+11421142+Check proxy logs:
11431143+11441144+```bash
11451145+# If running as systemd service
11461146+journalctl -u openbao-proxy -f
11471147+11481148+# If running directly, check the console output
11491149+```
11501150+11511151+Test AppRole authentication manually:
11521152+11531153+```bash
11541154+bao write auth/approle/login \
11551155+ role_id="$(cat /tmp/openbao/role-id)" \
11561156+ secret_id="$(cat /tmp/openbao/secret-id)"
11571157+```
11581158+11591159+# Migrating knots and spindles
11601160+11611161+Sometimes, non-backwards compatible changes are made to the
11621162+knot/spindle XRPC APIs. If you host a knot or a spindle, you
11631163+will need to follow this guide to upgrade. Typically, this
11641164+only requires you to deploy the newest version.
11651165+11661166+This document is laid out in reverse-chronological order.
11671167+Newer migration guides are listed first, and older guides
11681168+are further down the page.
11691169+11701170+## Upgrading from v1.8.x
11711171+11721172+After v1.8.2, the HTTP API for knots and spindles has been
11731173+deprecated and replaced with XRPC. Repositories on outdated
11741174+knots will not be viewable from the appview. Upgrading is
11751175+straightforward, however.
11761176+11771177+For knots:
11781178+11791179+- Upgrade to the latest tag (v1.9.0 or above)
11801180+- Head to the [knot dashboard](https://tangled.org/settings/knots) and
11811181+ hit the "retry" button to verify your knot
11821182+11831183+For spindles:
11841184+11851185+- Upgrade to the latest tag (v1.9.0 or above)
11861186+- Head to the [spindle
11871187+ dashboard](https://tangled.org/settings/spindles) and hit the
11881188+ "retry" button to verify your spindle
11891189+11901190+## Upgrading from v1.7.x
11911191+11921192+After v1.7.0, knot secrets have been deprecated. You no
11931193+longer need a secret from the appview to run a knot. All
11941194+authorized commands to knots are managed via [Inter-Service
11951195+Authentication](https://atproto.com/specs/xrpc#inter-service-authentication-jwt).
11961196+Knots will be read-only until upgraded.
11971197+11981198+Upgrading is quite easy, in essence:
11991199+12001200+- `KNOT_SERVER_SECRET` is no more; you can remove this
12011201+ environment variable entirely
12021202+- `KNOT_SERVER_OWNER` is now required on boot, set this to
12031203+ your DID. You can find your DID in the
12041204+ [settings](https://tangled.org/settings) page.
12051205+- Restart your knot once you have replaced the environment
12061206+ variable
12071207+- Head to the [knot dashboard](https://tangled.org/settings/knots) and
12081208+ hit the "retry" button to verify your knot. This simply
12091209+ writes a `sh.tangled.knot` record to your PDS.
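For a manual (non-Nix) knot, the change to `/home/git/.knot.env` amounts to the following (values are placeholders):

```diff
-KNOT_SERVER_SECRET=supersecret
+KNOT_SERVER_OWNER=did:plc:foobar
```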
12101210+12111211+If you use the nix module, simply bump the flake to the
12121212+latest revision, and change your config block like so:
12131213+12141214+```diff
12151215+ services.tangled.knot = {
12161216+ enable = true;
12171217+ server = {
12181218+- secretFile = /path/to/secret;
12191219++ owner = "did:plc:foo";
12201220+ };
12211221+ };
12221222+```
12231223+12241224+# Hacking on Tangled
12251225+12261226+We highly recommend [installing
12271227+Nix](https://nixos.org/download/) (the package manager)
12281228+before working on the codebase. The Nix flake provides a lot
12291229+of helpers to get started and, most importantly, builds and
12301230+dev shells are entirely deterministic.
12311231+12321232+To set up your dev environment:
12331233+12341234+```bash
12351235+nix develop
12361236+```
12371237+12381238+Non-Nix users can look at the `devShell` attribute in the
12391239+`flake.nix` file to determine necessary dependencies.
12401240+12411241+## Running the appview
12421242+12431243+The Nix flake also exposes a few `app` attributes (run `nix
12441244+flake show` to see a full list of what the flake provides),
12441244+flake show` to see a full list of what the flake provides);
12461246+live-reloader:
12471247+12481248+```bash
12491249+TANGLED_DEV=true nix run .#watch-appview
12501250+12511251+# TANGLED_DB_PATH might be of interest to point to
12521252+# different sqlite DBs
12531253+12541254+# in a separate shell, you can live-reload tailwind
12551255+nix run .#watch-tailwind
12561256+```
12571257+12581258+To authenticate with the appview, you will need Redis and
12591259+OAuth JWKs to be set up:
12601260+12611261+```
12621262+# OAuth JWKs should already be set up by the Nix devshell:
12631263+echo $TANGLED_OAUTH_CLIENT_SECRET
12641264+z42ty4RT1ovnTopY8B8ekz9NuziF2CuMkZ7rbRFpAR9jBqMc
12651265+12661266+echo $TANGLED_OAUTH_CLIENT_KID
12671267+1761667908
12681268+12691269+# if not, you can set it up yourself:
12701270+goat key generate -t P-256
12711271+Key Type: P-256 / secp256r1 / ES256 private key
12721272+Secret Key (Multibase Syntax): save this securely (eg, add to password manager)
12731273+ z42tuPDKRfM2mz2Kv953ARen2jmrPA8S9LX9tRq4RVcUMwwL
12741274+Public Key (DID Key Syntax): share or publish this (eg, in DID document)
12751275+ did:key:zDnaeUBxtG6Xuv3ATJE4GaWeyXM3jyamJsZw3bSPpxx4bNXDR
12761276+12771277+# the secret key from above
12781278+export TANGLED_OAUTH_CLIENT_SECRET="z42tuP..."
12791279+12801280+# Run Redis in a new shell to store OAuth sessions
12811281+redis-server
12821282+```
12831283+12841284+## Running knots and spindles
12851285+12861286+An end-to-end knot setup requires setting up a machine with
12871287+`sshd`, `AuthorizedKeysCommand`, and a Git user, which is
12881288+quite cumbersome. So the Nix flake provides a
12891289+`nixosConfiguration` to do so.
12901290+12911291+<details>
12921292+ <summary><strong>macOS users will have to set up a Nix Builder first</strong></summary>
12931293+12941294+ In order to build Tangled's dev VM on macOS, you will
12951295+ first need to set up a Linux Nix builder. The recommended
12961296+ way to do so is to run a [`darwin.linux-builder`
12971297+ VM](https://nixos.org/manual/nixpkgs/unstable/#sec-darwin-builder)
12981298+ and to register it in `nix.conf` as a builder for Linux
12991299+ with the same architecture as your Mac (`linux-aarch64` if
13001300+ you are using Apple Silicon).
13011301+13021302+ > IMPORTANT: You must build `darwin.linux-builder` somewhere other than inside
13031303+ > the Tangled repo so that it doesn't conflict with the other VM. For example,
13041304+ > you can do
13051305+ >
13061306+ > ```shell
13071307+ > cd $(mktemp -d buildervm.XXXXX) && nix run nixpkgs#darwin.linux-builder
13081308+ > ```
13091309+ >
13101310+ > to store the builder VM in a temporary dir.
13111311+ >
13121312+ > You should read and follow [all the other instructions](https://nixos.org/manual/nixpkgs/unstable/#sec-darwin-builder) to
13131313+ > avoid subtle problems.
13141314+13151315+ Alternatively, you can use any other method to set up a
13161316+ Linux machine with Nix installed that you can `sudo ssh`
13171317+ into (in other words, the root user on your Mac has to be able
13181318+ to ssh into the Linux machine without entering a password)
13191319+ and that has the same architecture as your Mac. See
13201320+ [remote builder
13211321+ instructions](https://nix.dev/manual/nix/2.28/advanced-topics/distributed-builds.html#requirements)
13221322+ for how to register such a builder in `nix.conf`.
13231323+13241324+ > WARNING: If you'd like to use
13251325+ > [`nixos-lima`](https://github.com/nixos-lima/nixos-lima) or
13261326+ > [Orbstack](https://orbstack.dev/), note that setting them up so that `sudo
13271327+ > ssh` works can be tricky. It seems to be [possible with
13281328+ > Orbstack](https://github.com/orgs/orbstack/discussions/1669).
13291329+13301330+</details>
13311331+13321332+To begin, grab your DID from http://localhost:3000/settings.
13331333+Then, set `TANGLED_VM_KNOT_OWNER` and
13341334+`TANGLED_VM_SPINDLE_OWNER` to your DID. You can now start a
13351335+lightweight NixOS VM like so:
13361336+13371337+```bash
13381338+nix run --impure .#vm
13391339+13401340+# type `poweroff` at the shell to exit the VM
13411341+```
13431343+This starts a knot on port 6444 and a spindle on port 6555,
13441344+with `ssh` exposed on port 2222.
13451345+13461346+Once the services are running, head to
13471347+http://localhost:3000/settings/knots and hit "Verify". It should
13481348+verify the ownership of the services instantly if everything
13491349+went smoothly.
13501350+13511351+You can push repositories to this VM with this ssh config
13521352+block on your main machine:
13531353+13541354+```bash
13551355+Host nixos-shell
13561356+ Hostname localhost
13571357+ Port 2222
13581358+ User git
13591359+ IdentityFile ~/.ssh/my_tangled_key
13601360+```
13611361+13621362+Set up a remote called `local-dev` on a git repo:
13631363+13641364+```bash
13651365+git remote add local-dev git@nixos-shell:user/repo
13661366+git push local-dev main
13671367+```
13681368+13691369+The above VM should already be running a spindle on
13701370+`localhost:6555`. Head to http://localhost:3000/settings/spindles and
13711371+hit "Verify". You can then configure each repository to use
13721372+this spindle and run CI jobs.
13731373+13741374+Of interest when debugging spindles:
13751375+13761376+```
13771377+# Service logs from journald:
13781378+journalctl -xeu spindle
13791379+13801380+# CI job logs from disk:
13811381+ls /var/log/spindle
13821382+13831383+# Debugging spindle database:
13841384+sqlite3 /var/lib/spindle/spindle.db
13851385+13861386+# litecli has a nicer REPL interface:
13871387+litecli /var/lib/spindle/spindle.db
13881388+```
13891389+13901390+If for any reason you wish to disable either one of the
13911391+services in the VM, modify [nix/vm.nix](/nix/vm.nix) and set
13921392+`services.tangled.spindle.enable` (or
13931393+`services.tangled.knot.enable`) to `false`.
13941394+13951395+# Contribution guide
13961396+13971397+## Commit guidelines
13981398+13991399+We follow a commit style similar to the Go project. Please keep commits:
14001400+14011401+* **atomic**: each commit should represent one logical change
14021402+* **descriptive**: the commit message should clearly describe what the
14031403+change does and why it's needed
14041404+14051405+### Message format
14061406+14071407+```
14081408+<service/top-level directory>/<affected package/directory>: <short summary of change>
14091409+14101410+Optional longer description can go here, if necessary. Explain what the
14111411+change does and why, especially if not obvious. Reference relevant
14121412+issues or PRs when applicable. These can be links for now since we don't
14131413+auto-link issues/PRs yet.
14141414+```
14151415+14161416+Here are some examples:
14171417+14181418+```
14191419+appview/state: fix token expiry check in middleware
14201420+14211421+The previous check did not account for clock drift, leading to premature
14221422+token invalidation.
14231423+```
14241424+14251425+```
14261426+knotserver/git/service: improve error checking in upload-pack
14271427+```
14281428+14291429+14301430+### General notes
14321432+- PRs get merged "as-is" (fast-forward) -- like applying a patch-series
14331433+using `git am`. At present, there is no squashing -- so please author
14341434+your commits as they would appear on `master`, following the above
14351435+guidelines.
14361436+- If there is a lot of nesting, for example "appview:
14371437+pages/templates/repo/fragments: ...", these can be truncated down to
14381438+just "appview: repo/fragments: ...". If the change affects a lot of
14391439+subdirectories, you may abbreviate to just the top-level names, e.g.
14401440+"appview: ..." or "knotserver: ...".
14411441+- Keep commits lowercased with no trailing period.
14421442+- Use the imperative mood in the summary line (e.g., "fix bug" not
14431443+"fixed bug" or "fixes bug").
14441444+- Try to keep the summary line under 72 characters, but we aren't too
14451445+fussed about this.
14461446+- Follow the same formatting for PR titles if filled manually.
14471447+- Don't include unrelated changes in the same commit.
14481448+- Avoid noisy commit messages like "wip" or "final fix" -- rewrite history
14491449+before submitting if necessary.
14501450+14511451+## Code formatting
14521452+14531453+We use a variety of tools to format our code, and multiplex them with
14541454+[`treefmt`](https://treefmt.com). All you need to do to format your changes
14551455+is run `nix run .#fmt` (or just `treefmt` if you're in the devshell).
14561456+14571457+## Proposals for bigger changes
14581458+14591459+Small fixes like typos, minor bugs, or trivial refactors can be
14601460+submitted directly as PRs.
14621462+For larger changes -- especially those introducing new features, significant
14631463+refactoring, or altering system behavior -- please open a proposal first. This
14641464+helps us evaluate the scope, design, and potential impact before implementation.
14651465+14661466+Create a new issue titled:
14671467+14681468+```
14691469+proposal: <affected scope>: <summary of change>
14701470+```
14711471+14721472+In the description, explain:
14731473+14741474+- What the change is
14751475+- Why it's needed
14761476+- How you plan to implement it (roughly)
14771477+- Any open questions or tradeoffs
14781478+14791479+We'll use the issue thread to discuss and refine the idea before moving
14801480+forward.
14811481+14821482+## Developer Certificate of Origin (DCO)
14831483+14841484+We require all contributors to certify that they have the right to
14851485+submit the code they're contributing. To do this, we follow the
14861486+[Developer Certificate of Origin
14871487+(DCO)](https://developercertificate.org/).
14881488+14891489+By signing your commits, you're stating that the contribution is your
14901490+own work, or that you have the right to submit it under the project's
14911491+license. This helps us keep things clean and legally sound.
14921492+14931493+To sign your commit, just add the `-s` flag when committing:
14941494+14951495+```sh
14961496+git commit -s -m "your commit message"
14971497+```
14981498+14991499+This appends a line like:
15001500+15011501+```
15021502+Signed-off-by: Your Name <your.email@example.com>
15031503+```
15041504+15051505+We won't merge commits if they aren't signed off. If you forget, you can
15061506+amend the last commit like this:
15071507+15081508+```sh
15091509+git commit --amend -s
15101510+```
15111511+15121512+If you're submitting a PR with multiple commits, make sure each one is
15131513+signed.
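If some commits on your branch are missing sign-offs, one option (among others) is to re-sign the whole branch with `git rebase --signoff`:

```sh
# re-creates each commit on the branch with a Signed-off-by trailer,
# assuming the branch is based on master
git rebase --signoff master
```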
15141514+15151515+For [jj](https://jj-vcs.github.io/jj/latest/) users, you can run the following command
15161516+to make it sign off commits in the tangled repo:
15171517+15181518+```shell
15191519+# Safety check, should say "No matching config key..."
15201520+jj config list templates.commit_trailers
15211521+# The command below may need to be adjusted if the command above returned something.
15221522+jj config set --repo templates.commit_trailers "format_signed_off_by_trailer(self)"
15231523+```
15241524+15251525+Refer to the [jujutsu
15261526+documentation](https://jj-vcs.github.io/jj/latest/config/#commit-trailers)
15271527+for more information.
-136
docs/contributing.md
···11-# tangled contributing guide
22-33-## commit guidelines
44-55-We follow a commit style similar to the Go project. Please keep commits:
66-77-* **atomic**: each commit should represent one logical change
88-* **descriptive**: the commit message should clearly describe what the
99-change does and why it's needed
1010-1111-### message format
1212-1313-```
1414-<service/top-level directory>/<affected package/directory>: <short summary of change>
1515-1616-1717-Optional longer description can go here, if necessary. Explain what the
1818-change does and why, especially if not obvious. Reference relevant
1919-issues or PRs when applicable. These can be links for now since we don't
2020-auto-link issues/PRs yet.
2121-```
2222-2323-Here are some examples:
2424-2525-```
2626-appview/state: fix token expiry check in middleware
2727-2828-The previous check did not account for clock drift, leading to premature
2929-token invalidation.
3030-```
3131-3232-```
3333-knotserver/git/service: improve error checking in upload-pack
3434-```
3535-3636-3737-### general notes
3838-3939-- PRs get merged "as-is" (fast-forward) -- like applying a patch-series
4040-using `git am`. At present, there is no squashing -- so please author
4141-your commits as they would appear on `master`, following the above
4242-guidelines.
4343-- If there is a lot of nesting, for example "appview:
4444-pages/templates/repo/fragments: ...", these can be truncated down to
4545-just "appview: repo/fragments: ...". If the change affects a lot of
4646-subdirectories, you may abbreviate to just the top-level names, e.g.
4747-"appview: ..." or "knotserver: ...".
4848-- Keep commits lowercased with no trailing period.
4949-- Use the imperative mood in the summary line (e.g., "fix bug" not
5050-"fixed bug" or "fixes bug").
5151-- Try to keep the summary line under 72 characters, but we aren't too
5252-fussed about this.
5353-- Follow the same formatting for PR titles if filled manually.
5454-- Don't include unrelated changes in the same commit.
5555-- Avoid noisy commit messages like "wip" or "final fix" -- rewrite history
5656-before submitting if necessary.
5757-5858-## code formatting
5959-6060-We use a variety of tools to format our code, and multiplex them with
6161-[`treefmt`](https://treefmt.com): all you need to do to format your changes
6262-is run `nix run .#fmt` (or just `treefmt` if you're in the devshell).
6363-6464-## proposals for bigger changes
6565-6666-Small fixes like typos, minor bugs, or trivial refactors can be
6767-submitted directly as PRs.
6868-6969-For larger changes -- especially those introducing new features, significant
7070-refactoring, or altering system behavior -- please open a proposal first. This
7171-helps us evaluate the scope, design, and potential impact before implementation.
7272-7373-### proposal format
7474-7575-Create a new issue titled:
7676-7777-```
7878-proposal: <affected scope>: <summary of change>
7979-```
8080-8181-In the description, explain:
8282-8383-- What the change is
8484-- Why it's needed
8585-- How you plan to implement it (roughly)
8686-- Any open questions or tradeoffs
8787-8888-We'll use the issue thread to discuss and refine the idea before moving
8989-forward.
9090-9191-## developer certificate of origin (DCO)
9292-9393-We require all contributors to certify that they have the right to
9494-submit the code they're contributing. To do this, we follow the
9595-[Developer Certificate of Origin
9696-(DCO)](https://developercertificate.org/).
9797-9898-By signing your commits, you're stating that the contribution is your
9999-own work, or that you have the right to submit it under the project's
100100-license. This helps us keep things clean and legally sound.
101101-102102-To sign your commit, just add the `-s` flag when committing:
103103-104104-```sh
105105-git commit -s -m "your commit message"
106106-```
107107-108108-This appends a line like:
109109-110110-```
111111-Signed-off-by: Your Name <your.email@example.com>
112112-```
113113-114114-We won't merge commits if they aren't signed off. If you forget, you can
115115-amend the last commit like this:
116116-117117-```sh
118118-git commit --amend -s
119119-```
120120-121121-If you're submitting a PR with multiple commits, make sure each one is
122122-signed.
123123-124124-For [jj](https://jj-vcs.github.io/jj/latest/) users, you can run the following command
125125-to make it sign off commits in the tangled repo:
126126-127127-```shell
128128-# Safety check, should say "No matching config key..."
129129-jj config list templates.commit_trailers
130130-# The command below may need to be adjusted if the command above returned something.
131131-jj config set --repo templates.commit_trailers "format_signed_off_by_trailer(self)"
132132-```
133133-134134-Refer to the [jj
135135-documentation](https://jj-vcs.github.io/jj/latest/config/#commit-trailers)
136136-for more information.
-172
docs/hacking.md
···11-# hacking on tangled
22-33-We highly recommend [installing
44-nix](https://nixos.org/download/) (the package manager)
55-before working on the codebase. The nix flake provides a lot
66-of helpers to get started and most importantly, builds and
77-dev shells are entirely deterministic.
88-99-To set up your dev environment:
1010-1111-```bash
1212-nix develop
1313-```
1414-1515-Non-nix users can look at the `devShell` attribute in the
1616-`flake.nix` file to determine necessary dependencies.
1717-1818-## running the appview
1919-2020-The nix flake also exposes a few `app` attributes (run `nix
2121-flake show` to see a full list of what the flake provides),
2222-one of the apps runs the appview with the `air`
2323-live-reloader:
2424-2525-```bash
2626-TANGLED_DEV=true nix run .#watch-appview
2727-2828-# TANGLED_DB_PATH might be of interest to point to
2929-# different sqlite DBs
3030-3131-# in a separate shell, you can live-reload tailwind
3232-nix run .#watch-tailwind
3333-```
3434-3535-To authenticate with the appview, you will need redis and
3636-OAuth JWKs to be set up:
3737-3838-```
3939-# oauth jwks should already be setup by the nix devshell:
4040-echo $TANGLED_OAUTH_CLIENT_SECRET
4141-z42ty4RT1ovnTopY8B8ekz9NuziF2CuMkZ7rbRFpAR9jBqMc
4242-4343-echo $TANGLED_OAUTH_CLIENT_KID
4444-1761667908
4545-4646-# if not, you can set it up yourself:
4747-goat key generate -t P-256
4848-Key Type: P-256 / secp256r1 / ES256 private key
4949-Secret Key (Multibase Syntax): save this securely (eg, add to password manager)
5050- z42tuPDKRfM2mz2Kv953ARen2jmrPA8S9LX9tRq4RVcUMwwL
5151-Public Key (DID Key Syntax): share or publish this (eg, in DID document)
5252- did:key:zDnaeUBxtG6Xuv3ATJE4GaWeyXM3jyamJsZw3bSPpxx4bNXDR
5353-5454-# the secret key from above
5555-export TANGLED_OAUTH_CLIENT_SECRET="z42tuP..."
5656-5757-# run redis in a new shell to store oauth sessions
5858-redis-server
5959-```
6060-6161-## running knots and spindles
6262-6363-An end-to-end knot setup requires setting up a machine with
6464-`sshd`, `AuthorizedKeysCommand`, and git user, which is
6565-quite cumbersome. So the nix flake provides a
6666-`nixosConfiguration` to do so.
6767-6868-<details>
6969- <summary><strong>macOS users will have to set up a Nix builder first</strong></summary>
7070-7171- In order to build Tangled's dev VM on macOS, you will
7272- first need to set up a Linux Nix builder. The recommended
7373- way to do so is to run a [`darwin.linux-builder`
7474- VM](https://nixos.org/manual/nixpkgs/unstable/#sec-darwin-builder)
7575- and to register it in `nix.conf` as a builder for Linux
7676- with the same architecture as your Mac (`linux-aarch64` if
7777- you are using Apple Silicon).
7878-7979- > IMPORTANT: You must build `darwin.linux-builder` somewhere other than inside
8080- > the tangled repo so that it doesn't conflict with the other VM. For example,
8181- > you can do
8282- >
8383- > ```shell
8484- > cd $(mktemp -d buildervm.XXXXX) && nix run nixpkgs#darwin.linux-builder
8585- > ```
8686- >
8787- > to store the builder VM in a temporary dir.
8888- >
8989- > You should read and follow [all the other instructions][darwin builder vm] to
9090- > avoid subtle problems.
9191-9292- Alternatively, you can use any other method to set up a
9393- Linux machine with `nix` installed that you can `sudo ssh`
9494- into (in other words, root user on your Mac has to be able
9595- to ssh into the Linux machine without entering a password)
9696- and that has the same architecture as your Mac. See
9797- [remote builder
9898- instructions](https://nix.dev/manual/nix/2.28/advanced-topics/distributed-builds.html#requirements)
9999- for how to register such a builder in `nix.conf`.
100100-101101- > WARNING: If you'd like to use
102102- > [`nixos-lima`](https://github.com/nixos-lima/nixos-lima) or
103103- > [Orbstack](https://orbstack.dev/), note that setting them up so that `sudo
104104- > ssh` works can be tricky. It seems to be [possible with
105105- > Orbstack](https://github.com/orgs/orbstack/discussions/1669).
106106-107107-</details>
108108-109109-To begin, grab your DID from http://localhost:3000/settings.
110110-Then, set `TANGLED_VM_KNOT_OWNER` and
111111-`TANGLED_VM_SPINDLE_OWNER` to your DID. You can now start a
112112-lightweight NixOS VM like so:
113113-114114-```bash
115115-nix run --impure .#vm
116116-117117-# type `poweroff` at the shell to exit the VM
118118-```
119119-120120-This starts a knot on port 6444, a spindle on port 6555
121121-with `ssh` exposed on port 2222.
122122-123123-Once the services are running, head to
124124-http://localhost:3000/settings/knots and hit verify. It should
125125-verify the ownership of the services instantly if everything
126126-went smoothly.
127127-128128-You can push repositories to this VM with this ssh config
129129-block on your main machine:
130130-131131-```bash
132132-Host nixos-shell
133133- Hostname localhost
134134- Port 2222
135135- User git
136136- IdentityFile ~/.ssh/my_tangled_key
137137-```
138138-139139-Set up a remote called `local-dev` on a git repo:
140140-141141-```bash
142142-git remote add local-dev git@nixos-shell:user/repo
143143-git push local-dev main
144144-```
145145-146146-### running a spindle
147147-148148-The above VM should already be running a spindle on
149149-`localhost:6555`. Head to http://localhost:3000/settings/spindles and
150150-hit verify. You can then configure each repository to use
151151-this spindle and run CI jobs.
152152-153153-Of interest when debugging spindles:
154154-155155-```
156156-# service logs from journald:
157157-journalctl -xeu spindle
158158-159159-# CI job logs from disk:
160160-ls /var/log/spindle
161161-162162-# debugging spindle db:
163163-sqlite3 /var/lib/spindle/spindle.db
164164-165165-# litecli has a nicer REPL interface:
166166-litecli /var/lib/spindle/spindle.db
167167-```
168168-169169-If for any reason you wish to disable either one of the
170170-services in the VM, modify [nix/vm.nix](/nix/vm.nix) and set
171171-`services.tangled.spindle.enable` (or
172172-`services.tangled.knot.enable`) to `false`.
···11-# knot self-hosting guide
22-33-So you want to run your own knot server? Great! Here are a few prerequisites:
44-55-1. A server of some kind (a VPS, a Raspberry Pi, etc.). Preferably running a Linux distribution of some kind.
66-2. A (sub)domain name. People generally use `knot.example.com`.
77-3. A valid SSL certificate for your domain.
88-99-There's a couple of ways to get started:
1010-* NixOS: refer to
1111-[flake.nix](https://tangled.sh/@tangled.sh/core/blob/master/flake.nix)
1212-* Docker: Documented at
1313-[@tangled.sh/knot-docker](https://tangled.sh/@tangled.sh/knot-docker)
1414-(community maintained: support is not guaranteed!)
1515-* Manual: Documented below.
1616-1717-## manual setup
1818-1919-First, clone this repository:
2020-2121-```
2222-git clone https://tangled.org/@tangled.org/core
2323-```
2424-2525-Then, build the `knot` CLI. This is the knot administration and operation tool.
2626-For the purpose of this guide, we're only concerned with these subcommands:
2727-2828-* `knot server`: the main knot server process, typically run as a
2929-supervised service
3030-* `knot guard`: handles role-based access control for git over SSH
3131-(you'll never have to run this yourself)
3232-* `knot keys`: fetches SSH keys associated with your knot; we'll use
3333-this to generate the SSH `AuthorizedKeysCommand`
3434-3535-```
3636-cd core
3737-export CGO_ENABLED=1
3838-go build -o knot ./cmd/knot
3939-```
4040-4141-Next, move the `knot` binary to a location owned by `root` --
4242-`/usr/local/bin/` is a good choice. Make sure the binary itself is also owned by `root`:
4343-4444-```
4545-sudo mv knot /usr/local/bin/knot
4646-sudo chown root:root /usr/local/bin/knot
4747-```
4848-4949-This is necessary because SSH `AuthorizedKeysCommand` requires [really
5050-specific permissions](https://stackoverflow.com/a/27638306). The
5151-`AuthorizedKeysCommand` specifies a command that is run by `sshd` to
5252-retrieve a user's public SSH keys dynamically for authentication. Let's
5353-set that up.
5454-5555-```
5656-sudo tee /etc/ssh/sshd_config.d/authorized_keys_command.conf <<EOF
5757-Match User git
5858- AuthorizedKeysCommand /usr/local/bin/knot keys -o authorized-keys
5959- AuthorizedKeysCommandUser nobody
6060-EOF
6161-```
6262-6363-Then, reload `sshd`:
6464-6565-```
6666-sudo systemctl reload ssh
6767-```
6868-6969-Next, create the `git` user. We'll use the `git` user's home directory
7070-to store repositories:
7171-7272-```
7373-sudo adduser git
7474-```
7575-7676-Create `/home/git/.knot.env` with the following, updating the values as
7777-necessary. The `KNOT_SERVER_OWNER` should be set to your
7878-DID; you can find your DID in the [Settings](https://tangled.sh/settings) page.
7979-8080-```
8181-KNOT_REPO_SCAN_PATH=/home/git
8282-KNOT_SERVER_HOSTNAME=knot.example.com
8383-APPVIEW_ENDPOINT=https://tangled.sh
8484-KNOT_SERVER_OWNER=did:plc:foobar
8585-KNOT_SERVER_INTERNAL_LISTEN_ADDR=127.0.0.1:5444
8686-KNOT_SERVER_LISTEN_ADDR=127.0.0.1:5555
8787-```
8888-8989-If you run a Linux distribution that uses systemd, you can use the provided
9090-service file to run the server. Copy
9191-[`knotserver.service`](/systemd/knotserver.service)
9292-to `/etc/systemd/system/`. Then, run:
9393-9494-```
9595-systemctl enable knotserver
9696-systemctl start knotserver
9797-```
9898-9999-The last step is to configure a reverse proxy like Nginx or Caddy to front your
100100-knot. Here's an example configuration for Nginx:
101101-102102-```
103103-server {
104104- listen 80;
105105- listen [::]:80;
106106- server_name knot.example.com;
107107-108108- location / {
109109- proxy_pass http://localhost:5555;
110110- proxy_set_header Host $host;
111111- proxy_set_header X-Real-IP $remote_addr;
112112- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
113113- proxy_set_header X-Forwarded-Proto $scheme;
114114- }
115115-116116- # wss endpoint for git events
117117- location /events {
118118- proxy_set_header X-Forwarded-For $remote_addr;
119119- proxy_set_header Host $http_host;
120120- proxy_set_header Upgrade websocket;
121121- proxy_set_header Connection Upgrade;
122122- proxy_pass http://localhost:5555;
123123- }
124124- # additional config for SSL/TLS go here.
125125-}
126126-127127-```
128128-129129-Remember to use Let's Encrypt or similar to procure a certificate for your
130130-knot domain.
131131-132132-You should now have a running knot server! You can finalize
133133-your registration by hitting the `verify` button on the
134134-[/settings/knots](https://tangled.org/settings/knots) page. This simply creates
135135-a record on your PDS to announce the existence of the knot.
136136-137137-### custom paths
138138-139139-(This section applies to manual setup only. Docker users should edit the mounts
140140-in `docker-compose.yml` instead.)
141141-142142-Right now, the database and repositories of your knot live in `/home/git`. You
143143-can move these paths if you'd like to store them in another folder. Be careful
144144-when adjusting these paths:
145145-146146-* Stop your knot when moving data (e.g. `systemctl stop knotserver`) to prevent
147147-any possible side effects. Remember to restart it once you're done.
148148-* Make backups before moving in case something goes wrong.
149149-* Make sure the `git` user can read and write from the new paths.
150150-151151-#### database
152152-153153-As an example, let's say the current database is at `/home/git/knotserver.db`,
154154-and we want to move it to `/home/git/database/knotserver.db`.
155155-156156-Copy the current database to the new location. Make sure to copy the `.db-shm`
157157-and `.db-wal` files if they exist.
158158-159159-```
160160-mkdir /home/git/database
161161-cp /home/git/knotserver.db* /home/git/database
162162-```
163163-164164-In the environment (e.g. `/home/git/.knot.env`), set `KNOT_SERVER_DB_PATH` to
165165-the new file path (_not_ the directory):
166166-167167-```
168168-KNOT_SERVER_DB_PATH=/home/git/database/knotserver.db
169169-```
170170-171171-#### repositories
172172-173173-As an example, let's say the repositories are currently in `/home/git`, and we
174174-want to move them into `/home/git/repositories`.
175175-176176-Create the new folder, then move the existing repositories (if there are any):
177177-178178-```
179179-mkdir /home/git/repositories
180180-# move all DIDs into the new folder; these will vary for you!
181181-mv /home/git/did:plc:wshs7t2adsemcrrd4snkeqli /home/git/repositories
182182-```
183183-184184-In the environment (e.g. `/home/git/.knot.env`), update `KNOT_REPO_SCAN_PATH`
185185-to the new directory:
186186-187187-```
188188-KNOT_REPO_SCAN_PATH=/home/git/repositories
189189-```
190190-191191-Similarly, update your `sshd` `AuthorizedKeysCommand` to use the updated
192192-repository path:
193193-194194-```
195195-sudo tee /etc/ssh/sshd_config.d/authorized_keys_command.conf <<EOF
196196-Match User git
197197- AuthorizedKeysCommand /usr/local/bin/knot keys -o authorized-keys -git-dir /home/git/repositories
198198- AuthorizedKeysCommandUser nobody
199199-EOF
200200-```
201201-202202-Make sure to restart your SSH server!
203203-204204-#### MOTD (message of the day)
205205-206206-To configure the MOTD used ("Welcome to this knot!" by default), edit the
207207-`/home/git/motd` file:
208208-209209-```
210210-printf "Hi from this knot!\n" > /home/git/motd
211211-```
212212-213213-Note that you should add a newline at the end if setting a non-empty message
214214-since the knot won't do this for you.
-59
docs/migrations.md
···11-# Migrations
22-33-This document is laid out in reverse-chronological order.
44-Newer migration guides are listed first, and older guides
55-are further down the page.
66-77-## Upgrading from v1.8.x
88-99-After v1.8.2, the HTTP APIs for knots and spindles have been
1010-deprecated and replaced with XRPC. Repositories on outdated
1111-knots will not be viewable from the appview. Upgrading is
1212-straightforward however.
1313-1414-For knots:
1515-1616-- Upgrade to latest tag (v1.9.0 or above)
1717-- Head to the [knot dashboard](https://tangled.org/settings/knots) and
1818- hit the "retry" button to verify your knot
1919-2020-For spindles:
2121-2222-- Upgrade to latest tag (v1.9.0 or above)
2323-- Head to the [spindle
2424- dashboard](https://tangled.org/settings/spindles) and hit the
2525- "retry" button to verify your spindle
2626-2727-## Upgrading from v1.7.x
2828-2929-After v1.7.0, knot secrets have been deprecated. You no
3030-longer need a secret from the appview to run a knot. All
3131-authorized commands to knots are managed via [Inter-Service
3232-Authentication](https://atproto.com/specs/xrpc#inter-service-authentication-jwt).
3333-Knots will be read-only until upgraded.
3434-3535-Upgrading is quite easy, in essence:
3636-3737-- `KNOT_SERVER_SECRET` is no more, you can remove this
3838- environment variable entirely
3939-- `KNOT_SERVER_OWNER` is now required on boot, set this to
4040- your DID. You can find your DID in the
4141- [settings](https://tangled.org/settings) page.
4242-- Restart your knot once you have replaced the environment
4343- variable
4444-- Head to the [knot dashboard](https://tangled.org/settings/knots) and
4545- hit the "retry" button to verify your knot. This simply
4646- writes a `sh.tangled.knot` record to your PDS.
4747-4848-If you use the nix module, simply bump the flake to the
4949-latest revision, and change your config block like so:
5050-5151-```diff
5252- services.tangled.knot = {
5353- enable = true;
5454- server = {
5555-- secretFile = /path/to/secret;
5656-+ owner = "did:plc:foo";
5757- };
5858- };
5959-```
···11-# spindle architecture
22-33-Spindle is a small CI runner service. Here's a high level overview of how it operates:
44-55-* listens for [`sh.tangled.spindle.member`](/lexicons/spindle/member.json) and
66-[`sh.tangled.repo`](/lexicons/repo.json) records on the Jetstream.
77-* when a new repo record comes through (typically when you add a spindle to a
88-repo from the settings), spindle then resolves the underlying knot and
99-subscribes to repo events (see:
1010-[`sh.tangled.pipeline`](/lexicons/pipeline.json)).
1111-* the spindle engine then handles execution of the pipeline, with results and
1212-logs beamed on the spindle event stream over wss
1313-1414-### the engine
1515-1616-At present, the only supported backend is Docker (and Podman, if Docker
1717-compatibility is enabled, so that `/run/docker.sock` is created). Spindle
1818-executes each step in the pipeline in a fresh container, with state persisted
1919-across steps within the `/tangled/workspace` directory.
2020-2121-The base image for the container is constructed on the fly using
2222-[Nixery](https://nixery.dev), which is handy for caching layers for frequently
2323-used packages.
2424-2525-The pipeline manifest is [specified here](/docs/spindle/pipeline.md).
-52
docs/spindle/hosting.md
···11-# spindle self-hosting guide
22-33-## prerequisites
44-55-* Go
66-* Docker (the only supported backend currently)
77-88-## configuration
99-1010-Spindle is configured using environment variables. The following environment variables are available:
1111-1212-* `SPINDLE_SERVER_LISTEN_ADDR`: The address the server listens on (default: `"0.0.0.0:6555"`).
1313-* `SPINDLE_SERVER_DB_PATH`: The path to the SQLite database file (default: `"spindle.db"`).
1414-* `SPINDLE_SERVER_HOSTNAME`: The hostname of the server (required).
1515-* `SPINDLE_SERVER_JETSTREAM_ENDPOINT`: The endpoint of the Jetstream server (default: `"wss://jetstream1.us-west.bsky.network/subscribe"`).
1616-* `SPINDLE_SERVER_DEV`: A boolean indicating whether the server is running in development mode (default: `false`).
1717-* `SPINDLE_SERVER_OWNER`: The DID of the owner (required).
1818-* `SPINDLE_PIPELINES_NIXERY`: The Nixery URL (default: `"nixery.tangled.sh"`).
1919-* `SPINDLE_PIPELINES_WORKFLOW_TIMEOUT`: The default workflow timeout (default: `"5m"`).
2020-* `SPINDLE_PIPELINES_LOG_DIR`: The directory to store workflow logs (default: `"/var/log/spindle"`).
2121-2222-## running spindle
2323-2424-1. **Set the environment variables.** For example:
2525-2626- ```shell
2727- export SPINDLE_SERVER_HOSTNAME="your-hostname"
2828- export SPINDLE_SERVER_OWNER="your-did"
2929- ```
3030-3131-2. **Build the Spindle binary.**
3232-3333- ```shell
3434- cd core
3535- go mod download
3636- go build -o cmd/spindle/spindle cmd/spindle/main.go
3737- ```
3838-3939-3. **Create the log directory.**
4040-4141- ```shell
4242- sudo mkdir -p /var/log/spindle
4343- sudo chown $USER:$USER -R /var/log/spindle
4444- ```
4545-4646-4. **Run the Spindle binary.**
4747-4848- ```shell
4949- ./cmd/spindle/spindle
5050- ```
5151-5252-Spindle will now start, connect to the Jetstream server, and begin processing pipelines.
-285
docs/spindle/openbao.md
···11-# spindle secrets with openbao
22-33-This document covers setting up Spindle to use OpenBao for secrets
44-management via OpenBao Proxy instead of the default SQLite backend.
55-66-## overview
77-88-Spindle now uses OpenBao Proxy for secrets management. The proxy handles
99-authentication automatically using AppRole credentials, while Spindle
1010-connects to the local proxy instead of directly to the OpenBao server.
1111-1212-This approach provides better security, automatic token renewal, and
1313-simplified application code.
1414-1515-## installation
1616-1717-Install OpenBao from nixpkgs:
1818-1919-```bash
2020-nix shell nixpkgs#openbao # for a local server
2121-```
2222-2323-## setup
2424-2525-The setup process is documented for both local development and production.
2626-2727-### local development
2828-2929-Start OpenBao in dev mode:
3030-3131-```bash
3232-bao server -dev -dev-root-token-id="root" -dev-listen-address=127.0.0.1:8201
3333-```
3434-3535-This starts OpenBao on `http://localhost:8201` with a root token.
3636-3737-Set up environment for bao CLI:
3838-3939-```bash
4040-export BAO_ADDR=http://localhost:8200
4141-export BAO_TOKEN=root
4242-```
4343-4444-### production
4545-4646-You would typically use a systemd service with a configuration file. Refer to
4747-[@tangled.org/infra](https://tangled.org/@tangled.org/infra) for how this can be
4848-achieved using Nix.
4949-5050-Then, initialize the bao server:
5151-```bash
5252-bao operator init -key-shares=1 -key-threshold=1
5353-```
5454-5555-This will print out an unseal key and a root key. Save them somewhere (like a password manager). Then unseal the vault to begin setting it up:
5656-```bash
5757-bao operator unseal <unseal_key>
5858-```
5959-6060-All steps below remain the same across both dev and production setups.
6161-6262-### configure openbao server
6363-6464-Create the spindle KV mount:
6565-6666-```bash
6767-bao secrets enable -path=spindle -version=2 kv
6868-```
6969-7070-Set up AppRole authentication and policy:
7171-7272-Create a policy file `spindle-policy.hcl`:
7373-7474-```hcl
7575-# Full access to spindle KV v2 data
7676-path "spindle/data/*" {
7777- capabilities = ["create", "read", "update", "delete"]
7878-}
7979-8080-# Access to metadata for listing and management
8181-path "spindle/metadata/*" {
8282- capabilities = ["list", "read", "delete", "update"]
8383-}
8484-8585-# Allow listing at root level
8686-path "spindle/" {
8787- capabilities = ["list"]
8888-}
8989-9090-# Required for connection testing and health checks
9191-path "auth/token/lookup-self" {
9292- capabilities = ["read"]
9393-}
9494-```
9595-9696-Apply the policy and create an AppRole:
9797-9898-```bash
9999-bao policy write spindle-policy spindle-policy.hcl
100100-bao auth enable approle
101101-bao write auth/approle/role/spindle \
102102- token_policies="spindle-policy" \
103103- token_ttl=1h \
104104- token_max_ttl=4h \
105105- bind_secret_id=true \
106106- secret_id_ttl=0 \
107107- secret_id_num_uses=0
108108-```
109109-110110-Get the credentials:
111111-112112-```bash
113113-# Get role ID (static)
114114-ROLE_ID=$(bao read -field=role_id auth/approle/role/spindle/role-id)
115115-116116-# Generate secret ID
117117-SECRET_ID=$(bao write -f -field=secret_id auth/approle/role/spindle/secret-id)
118118-119119-echo "Role ID: $ROLE_ID"
120120-echo "Secret ID: $SECRET_ID"
121121-```
122122-123123-### create proxy configuration
124124-125125-Create the credential files:
126126-127127-```bash
128128-# Create directory for OpenBao files
129129-mkdir -p /tmp/openbao
130130-131131-# Save credentials
132132-echo "$ROLE_ID" > /tmp/openbao/role-id
133133-echo "$SECRET_ID" > /tmp/openbao/secret-id
134134-chmod 600 /tmp/openbao/role-id /tmp/openbao/secret-id
135135-```
136136-137137-Create a proxy configuration file `/tmp/openbao/proxy.hcl`:
138138-139139-```hcl
140140-# OpenBao server connection
141141-vault {
142142- address = "http://localhost:8200"
143143-}
144144-145145-# Auto-Auth using AppRole
146146-auto_auth {
147147- method "approle" {
148148- mount_path = "auth/approle"
149149- config = {
150150- role_id_file_path = "/tmp/openbao/role-id"
151151- secret_id_file_path = "/tmp/openbao/secret-id"
152152- }
153153- }
154154-155155- # Optional: write token to file for debugging
156156- sink "file" {
157157- config = {
158158- path = "/tmp/openbao/token"
159159- mode = 0640
160160- }
161161- }
162162-}
163163-164164-# Proxy listener for Spindle
165165-listener "tcp" {
166166- address = "127.0.0.1:8201"
167167- tls_disable = true
168168-}
169169-170170-# Enable API proxy with auto-auth token
171171-api_proxy {
172172- use_auto_auth_token = true
173173-}
174174-175175-# Enable response caching
176176-cache {
177177- use_auto_auth_token = true
178178-}
179179-180180-# Logging
181181-log_level = "info"
182182-```
183183-184184-### start the proxy
185185-186186-Start OpenBao Proxy:
187187-188188-```bash
189189-bao proxy -config=/tmp/openbao/proxy.hcl
190190-```
191191-192192-The proxy will authenticate with OpenBao and start listening on
193193-`127.0.0.1:8201`.
194194-195195-### configure spindle
196196-197197-Set these environment variables for Spindle:
198198-199199-```bash
200200-export SPINDLE_SERVER_SECRETS_PROVIDER=openbao
201201-export SPINDLE_SERVER_SECRETS_OPENBAO_PROXY_ADDR=http://127.0.0.1:8201
202202-export SPINDLE_SERVER_SECRETS_OPENBAO_MOUNT=spindle
203203-```
204204-205205-Start Spindle:
206206-207207-Spindle will now connect to the local proxy, which handles all
208208-authentication automatically.
209209-210210-## production setup for proxy
211211-212212-For production, you'll want to run the proxy as a service:
213213-214214-Place your production configuration in `/etc/openbao/proxy.hcl` with
215215-proper TLS settings for the vault connection.
216216-217217-## verifying setup
218218-219219-Test the proxy directly:
220220-221221-```bash
222222-# Check proxy health
223223-curl -H "X-Vault-Request: true" http://127.0.0.1:8201/v1/sys/health
224224-225225-# Test token lookup through proxy
226226-curl -H "X-Vault-Request: true" http://127.0.0.1:8201/v1/auth/token/lookup-self
227227-```
228228-229229-Test OpenBao operations through the server:
230230-231231-```bash
232232-# List all secrets
233233-bao kv list spindle/
234234-235235-# Add a test secret via Spindle API, then check it exists
236236-bao kv list spindle/repos/
237237-238238-# Get a specific secret
239239-bao kv get spindle/repos/your_repo_path/SECRET_NAME
240240-```
241241-242242-## how it works
243243-244244-- Spindle connects to OpenBao Proxy on localhost (typically port 8200 or 8201)
245245-- The proxy authenticates with OpenBao using AppRole credentials
246246-- All Spindle requests go through the proxy, which injects authentication tokens
247247-- Secrets are stored at `spindle/repos/{sanitized_repo_path}/{secret_key}`
248248-- Repository paths like `did:plc:alice/myrepo` become `did_plc_alice_myrepo`
249249-- The proxy handles all token renewal automatically
250250-- Spindle no longer manages tokens or authentication directly
251251-252252-## troubleshooting
253253-254254-**Connection refused**: Check that the OpenBao Proxy is running and
255255-listening on the configured address.
256256-257257-**403 errors**: Verify the AppRole credentials are correct and the policy
258258-has the necessary permissions.
259259-260260-**404 route errors**: The spindle KV mount probably doesn't exist - run
261261-the mount creation step again.
262262-263263-**Proxy authentication failures**: Check the proxy logs and verify the
264264-role-id and secret-id files are readable and contain valid credentials.
265265-266266-**Secret not found after writing**: This can indicate policy permission
267267-issues. Verify the policy includes both `spindle/data/*` and
268268-`spindle/metadata/*` paths with appropriate capabilities.
269269-270270-Check proxy logs:
271271-272272-```bash
273273-# If running as systemd service
274274-journalctl -u openbao-proxy -f
275275-276276-# If running directly, check the console output
277277-```
278278-279279-Test AppRole authentication manually:
280280-281281-```bash
282282-bao write auth/approle/login \
283283- role_id="$(cat /tmp/openbao/role-id)" \
284284- secret_id="$(cat /tmp/openbao/secret-id)"
285285-```
-183
docs/spindle/pipeline.md
···11-# spindle pipelines
22-33-Spindle workflows allow you to write CI/CD pipelines in a simple format. They're located in the `.tangled/workflows` directory at the root of your repository, and are defined using YAML.
44-55-The fields are:
66-77-- [Trigger](#trigger): A **required** field that defines when a workflow should be triggered.
88-- [Engine](#engine): A **required** field that defines which engine a workflow should run on.
99-- [Clone options](#clone-options): An **optional** field that defines how the repository should be cloned.
1010-- [Dependencies](#dependencies): An **optional** field that allows you to list dependencies you may need.
1111-- [Environment](#environment): An **optional** field that allows you to define environment variables.
1212-- [Steps](#steps): An **optional** field that allows you to define what steps should run in the workflow.
1313-1414-## Trigger
1515-1616-The first thing to add to a workflow is the trigger, which defines when a workflow runs. This is defined using a `when` field, which takes in a list of conditions. Each condition has the following fields:
1717-1818-- `event`: This is a **required** field that defines when your workflow should run. It's a list that can take one or more of the following values:
1919- - `push`: The workflow should run every time a commit is pushed to the repository.
2020- - `pull_request`: The workflow should run every time a pull request is made or updated.
2121- - `manual`: The workflow can be triggered manually.
2222-- `branch`: Defines which branches the workflow should run for. If used with the `push` event, commits to the branch(es) listed here will trigger the workflow. If used with the `pull_request` event, updates to pull requests targeting the branch(es) listed here will trigger the workflow. This field has no effect with the `manual` event. Supports glob patterns using `*` and `**` (e.g., `main`, `develop`, `release-*`). Either `branch` or `tag` (or both) must be specified for `push` events.
2323-- `tag`: Defines which tags the workflow should run for. Only used with the `push` event - when tags matching the pattern(s) listed here are pushed, the workflow will trigger. This field has no effect with `pull_request` or `manual` events. Supports glob patterns using `*` and `**` (e.g., `v*`, `v1.*`, `release-**`). Either `branch` or `tag` (or both) must be specified for `push` events.
2424-2525-For example, if you'd like to define a workflow that runs when commits are pushed to the `main` and `develop` branches, or when pull requests that target the `main` branch are updated, or manually, you can do so with:
2626-2727-```yaml
2828-when:
2929- - event: ["push", "manual"]
3030- branch: ["main", "develop"]
3131- - event: ["pull_request"]
3232- branch: ["main"]
3333-```
3434-3535-You can also trigger workflows on tag pushes. For instance, to run a deployment workflow when tags matching `v*` are pushed:
3636-3737-```yaml
3838-when:
3939- - event: ["push"]
4040- tag: ["v*"]
4141-```
4242-4343-You can even combine branch and tag patterns in a single constraint (the workflow triggers if either matches):
4444-4545-```yaml
4646-when:
4747- - event: ["push"]
4848- branch: ["main", "release-*"]
4949- tag: ["v*", "stable"]
5050-```
5151-5252-## Engine
5353-5454-Next is the engine on which the workflow should run, defined using the **required** `engine` field. The currently supported engines are:
5555-5656-- `nixery`: This uses an instance of [Nixery](https://nixery.dev) to run steps, which allows you to add [dependencies](#dependencies) from [Nixpkgs](https://github.com/NixOS/nixpkgs). You can search for packages on https://search.nixos.org, and there's a pretty good chance the package(s) you're looking for will be there.
5757-5858-Example:
5959-6060-```yaml
6161-engine: "nixery"
6262-```
6363-6464-## Clone options
6565-6666-When a workflow starts, the first step is to clone the repository. You can customize this behavior using the **optional** `clone` field. It has the following fields:
6767-6868-- `skip`: Setting this to `true` will skip cloning the repository. This can be useful if your workflow is doing something that doesn't require anything from the repository itself. This is `false` by default.
6969-- `depth`: This sets the number of commits, or the "clone depth", to fetch from the repository. For example, if you set this to 2, the last 2 commits will be fetched. By default, the depth is set to 1, meaning only the most recent commit will be fetched, which is the commit that triggered the workflow.
7070-- `submodules`: If you use [git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) in your repository, setting this field to `true` will recursively fetch all submodules. This is `false` by default.
7171-7272-The default settings are:
7373-7474-```yaml
7575-clone:
7676- skip: false
7777- depth: 1
7878- submodules: false
7979-```
8080-8181-## Dependencies
8282-8383-Usually when you're running a workflow, you'll need additional dependencies. The `dependencies` field lets you define which dependencies to get, and from where. It's a key-value map, with the key being the registry to fetch dependencies from, and the value being the list of dependencies to fetch.
8484-8585-Say you want to fetch Node.js and Go from `nixpkgs`, and a package called `my_pkg` you've made from your own registry at your repository at `https://tangled.sh/@example.com/my_pkg`. You can define those dependencies like so:
8686-8787-```yaml
8888-dependencies:
8989- # nixpkgs
9090- nixpkgs:
9191- - nodejs
9292- - go
9393- # custom registry
9494- git+https://tangled.org/@example.com/my_pkg:
9595- - my_pkg
9696-```
9797-9898-Now these dependencies are available to use in your workflow!
9999-100100-## Environment
101101-102102-The `environment` field allows you to define environment variables that will be available throughout the entire workflow. **Do not put secrets here, these environment variables are visible to anyone viewing the repository. You can add secrets for pipelines in your repository's settings.**
103103-104104-Example:
105105-106106-```yaml
107107-environment:
108108- GOOS: "linux"
109109- GOARCH: "arm64"
110110- NODE_ENV: "production"
111111- MY_ENV_VAR: "MY_ENV_VALUE"
112112-```
113113-114114-## Steps
115115-116116-The `steps` field allows you to define what steps should run in the workflow. It's a list of step objects, each with the following fields:
117117-118118-- `name`: This field allows you to give your step a name. This name is visible in your workflow runs, and is used to describe what the step is doing.
119119-- `command`: This field allows you to define a command to run in that step. The step is run in a Bash shell, and the logs from the command will be visible in the pipelines page on the Tangled website. The [dependencies](#dependencies) you added will be available to use here.
120120-- `environment`: Similar to the global [environment](#environment) config, this **optional** field is a key-value map that allows you to set environment variables for the step. **Do not put secrets here, these environment variables are visible to anyone viewing the repository. You can add secrets for pipelines in your repository's settings.**
121121-122122-Example:
123123-124124-```yaml
125125-steps:
126126- - name: "Build backend"
127127- command: "go build"
128128- environment:
129129- GOOS: "darwin"
130130- GOARCH: "arm64"
131131- - name: "Build frontend"
132132- command: "npm run build"
133133- environment:
134134- NODE_ENV: "production"
135135-```
136136-137137-## Complete workflow
138138-139139-```yaml
140140-# .tangled/workflows/build.yml
141141-142142-when:
143143- - event: ["push", "manual"]
144144- branch: ["main", "develop"]
145145- - event: ["pull_request"]
146146- branch: ["main"]
147147-148148-engine: "nixery"
149149-150150-# using the default values
151151-clone:
152152- skip: false
153153- depth: 1
154154- submodules: false
155155-156156-dependencies:
157157- # nixpkgs
158158- nixpkgs:
159159- - nodejs
160160- - go
161161- # custom registry
162162- git+https://tangled.org/@example.com/my_pkg:
163163- - my_pkg
164164-165165-environment:
166166- GOOS: "linux"
167167- GOARCH: "arm64"
168168- NODE_ENV: "production"
169169- MY_ENV_VAR: "MY_ENV_VALUE"
170170-171171-steps:
172172- - name: "Build backend"
173173- command: "go build"
174174- environment:
175175- GOOS: "darwin"
176176- GOARCH: "arm64"
177177- - name: "Build frontend"
178178- command: "npm run build"
179179- environment:
180180- NODE_ENV: "production"
181181-```
182182-183183-If you want another example of a workflow, you can look at the one [Tangled uses to build the project](https://tangled.sh/@tangled.sh/core/blob/master/.tangled/workflows/build.yml).
···11+package db
22+33+import (
44+ "context"
55+ "database/sql"
66+ "log/slog"
77+ "strings"
88+99+ _ "github.com/mattn/go-sqlite3"
1010+ "tangled.org/core/log"
1111+)
1212+1313+type DB struct {
1414+ db *sql.DB
1515+ logger *slog.Logger
1616+}
1717+1818+func Setup(ctx context.Context, dbPath string) (*DB, error) {
1919+ // https://github.com/mattn/go-sqlite3#connection-string
2020+ opts := []string{
2121+ "_foreign_keys=1",
2222+ "_journal_mode=WAL",
2323+ "_synchronous=NORMAL",
2424+ "_auto_vacuum=incremental",
2525+ }
2626+2727+ logger := log.FromContext(ctx)
2828+ logger = log.SubLogger(logger, "db")
2929+3030+ db, err := sql.Open("sqlite3", dbPath+"?"+strings.Join(opts, "&"))
3131+ if err != nil {
3232+ return nil, err
3333+ }
3434+3535+ conn, err := db.Conn(ctx)
3636+ if err != nil {
3737+ return nil, err
3838+ }
3939+ defer conn.Close()
4040+4141+ _, err = conn.ExecContext(ctx, `
4242+ create table if not exists known_dids (
4343+ did text primary key
4444+ );
4545+4646+ create table if not exists public_keys (
4747+ id integer primary key autoincrement,
4848+ did text not null,
4949+ key text not null,
5050+ created text not null default (strftime('%Y-%m-%dT%H:%M:%SZ', 'now')),
5151+ unique(did, key),
5252+ foreign key (did) references known_dids(did) on delete cascade
5353+ );
5454+5555+ create table if not exists _jetstream (
5656+ id integer primary key autoincrement,
5757+ last_time_us integer not null
5858+ );
5959+6060+ create table if not exists events (
6161+ rkey text not null,
6262+ nsid text not null,
6363+ event text not null, -- json
6464+ created integer not null default (strftime('%s', 'now')),
6565+ primary key (rkey, nsid)
6666+ );
6767+6868+ create table if not exists migrations (
6969+ id integer primary key autoincrement,
7070+ name text unique
7171+ );
7272+ `)
7373+ if err != nil {
7474+ return nil, err
7575+ }
7676+7777+ return &DB{
7878+ db: db,
7979+ logger: logger,
8080+ }, nil
8181+}
-64
knotserver/db/init.go
···11-package db
22-33-import (
44- "database/sql"
55- "strings"
66-77- _ "github.com/mattn/go-sqlite3"
88-)
99-1010-type DB struct {
1111- db *sql.DB
1212-}
1313-1414-func Setup(dbPath string) (*DB, error) {
1515- // https://github.com/mattn/go-sqlite3#connection-string
1616- opts := []string{
1717- "_foreign_keys=1",
1818- "_journal_mode=WAL",
1919- "_synchronous=NORMAL",
2020- "_auto_vacuum=incremental",
2121- }
2222-2323- db, err := sql.Open("sqlite3", dbPath+"?"+strings.Join(opts, "&"))
2424- if err != nil {
2525- return nil, err
2626- }
2727-2828- // NOTE: If any other migration is added here, you MUST
2929- // copy the pattern in appview: use a single sql.Conn
3030- // for every migration.
3131-3232- _, err = db.Exec(`
3333- create table if not exists known_dids (
3434- did text primary key
3535- );
3636-3737- create table if not exists public_keys (
3838- id integer primary key autoincrement,
3939- did text not null,
4040- key text not null,
4141- created text not null default (strftime('%Y-%m-%dT%H:%M:%SZ', 'now')),
4242- unique(did, key),
4343- foreign key (did) references known_dids(did) on delete cascade
4444- );
4545-4646- create table if not exists _jetstream (
4747- id integer primary key autoincrement,
4848- last_time_us integer not null
4949- );
5050-5151- create table if not exists events (
5252- rkey text not null,
5353- nsid text not null,
5454- event text not null, -- json
5555- created integer not null default (strftime('%s', 'now')),
5656- primary key (rkey, nsid)
5757- );
5858- `)
5959- if err != nil {
6060- return nil, err
6161- }
6262-6363- return &DB{db: db}, nil
6464-}
···88 var = builtins.getEnv name;
99 in
1010 if var == ""
1111- then throw "\$${name} must be defined, see docs/hacking.md for more details"
1111+ then throw "\$${name} must be defined, see https://docs.tangled.org/hacking-on-tangled.html#hacking-on-tangled for more details"
1212 else var;
1313 envVarOr = name: default: let
1414 var = builtins.getEnv name;
···9292 jetstreamEndpoint = jetstream;
9393 listenAddr = "0.0.0.0:6444";
9494 };
9595- environmentFile = "${config.services.tangled.knot.stateDir}/.env";
9695 };
9796 services.tangled.spindle = {
9897 enable = true;
···11+package models
22+33+import (
44+ "encoding/base64"
55+ "strings"
66+)
77+88+// SecretMask replaces secret values in strings with "***".
99+type SecretMask struct {
1010+ replacer *strings.Replacer
1111+}
1212+1313+// NewSecretMask creates a mask for the given secret values.
1414+// Also registers base64-encoded variants of each secret.
1515+func NewSecretMask(values []string) *SecretMask {
1616+ var pairs []string
1717+1818+ for _, value := range values {
1919+ if value == "" {
2020+ continue
2121+ }
2222+2323+ pairs = append(pairs, value, "***")
2424+2525+ b64 := base64.StdEncoding.EncodeToString([]byte(value))
2626+ if b64 != value {
2727+ pairs = append(pairs, b64, "***")
2828+ }
2929+3030+ b64NoPad := strings.TrimRight(b64, "=")
3131+ if b64NoPad != b64 && b64NoPad != value {
3232+ pairs = append(pairs, b64NoPad, "***")
3333+ }
3434+ }
3535+3636+ if len(pairs) == 0 {
3737+ return nil
3838+ }
3939+4040+ return &SecretMask{
4141+ replacer: strings.NewReplacer(pairs...),
4242+ }
4343+}
4444+4545+// Mask replaces all registered secret values with "***".
4646+func (m *SecretMask) Mask(input string) string {
4747+ if m == nil || m.replacer == nil {
4848+ return input
4949+ }
5050+ return m.replacer.Replace(input)
5151+}
+135
spindle/models/secret_mask_test.go
···11+package models
22+33+import (
44+ "encoding/base64"
55+ "testing"
66+)
77+88+func TestSecretMask_BasicMasking(t *testing.T) {
99+ mask := NewSecretMask([]string{"mysecret123"})
1010+1111+ input := "The password is mysecret123 in this log"
1212+ expected := "The password is *** in this log"
1313+1414+ result := mask.Mask(input)
1515+ if result != expected {
1616+ t.Errorf("expected %q, got %q", expected, result)
1717+ }
1818+}
1919+2020+func TestSecretMask_Base64Encoded(t *testing.T) {
2121+ secret := "mysecret123"
2222+ mask := NewSecretMask([]string{secret})
2323+2424+ b64 := base64.StdEncoding.EncodeToString([]byte(secret))
2525+ input := "Encoded: " + b64
2626+ expected := "Encoded: ***"
2727+2828+ result := mask.Mask(input)
2929+ if result != expected {
3030+ t.Errorf("expected %q, got %q", expected, result)
3131+ }
3232+}
3333+3434+func TestSecretMask_Base64NoPadding(t *testing.T) {
3535+ // "test" encodes to "dGVzdA==" with padding
3636+ secret := "test"
3737+ mask := NewSecretMask([]string{secret})
3838+3939+ b64NoPad := "dGVzdA" // base64 without padding
4040+ input := "Token: " + b64NoPad
4141+ expected := "Token: ***"
4242+4343+ result := mask.Mask(input)
4444+ if result != expected {
4545+ t.Errorf("expected %q, got %q", expected, result)
4646+ }
4747+}
4848+4949+func TestSecretMask_MultipleSecrets(t *testing.T) {
5050+ mask := NewSecretMask([]string{"password1", "apikey123"})
5151+5252+ input := "Using password1 and apikey123 for auth"
5353+ expected := "Using *** and *** for auth"
5454+5555+ result := mask.Mask(input)
5656+ if result != expected {
5757+ t.Errorf("expected %q, got %q", expected, result)
5858+ }
5959+}
6060+6161+func TestSecretMask_MultipleOccurrences(t *testing.T) {
6262+ mask := NewSecretMask([]string{"secret"})
6363+6464+ input := "secret appears twice: secret"
6565+ expected := "*** appears twice: ***"
6666+6767+ result := mask.Mask(input)
6868+ if result != expected {
6969+ t.Errorf("expected %q, got %q", expected, result)
7070+ }
7171+}
7272+7373+func TestSecretMask_ShortValues(t *testing.T) {
7474+ mask := NewSecretMask([]string{"abc", "xy", ""})
7575+7676+ if mask == nil {
7777+ t.Fatal("expected non-nil mask")
7878+ }
7979+8080+ input := "abc xy test"
8181+ expected := "*** *** test"
8282+ result := mask.Mask(input)
8383+ if result != expected {
8484+ t.Errorf("expected %q, got %q", expected, result)
8585+ }
8686+}
8787+8888+func TestSecretMask_NilMask(t *testing.T) {
8989+ var mask *SecretMask
9090+9191+ input := "some input text"
9292+ result := mask.Mask(input)
9393+ if result != input {
9494+ t.Errorf("expected %q, got %q", input, result)
9595+ }
9696+}
9797+9898+func TestSecretMask_EmptyInput(t *testing.T) {
9999+ mask := NewSecretMask([]string{"secret"})
100100+101101+ result := mask.Mask("")
102102+ if result != "" {
103103+ t.Errorf("expected empty string, got %q", result)
104104+ }
105105+}
106106+107107+func TestSecretMask_NoMatch(t *testing.T) {
108108+ mask := NewSecretMask([]string{"secretvalue"})
109109+110110+ input := "nothing to mask here"
111111+ result := mask.Mask(input)
112112+ if result != input {
113113+ t.Errorf("expected %q, got %q", input, result)
114114+ }
115115+}
116116+117117+func TestSecretMask_EmptySecretsList(t *testing.T) {
118118+ mask := NewSecretMask([]string{})
119119+120120+ if mask != nil {
121121+ t.Error("expected nil mask for empty secrets list")
122122+ }
123123+}
124124+125125+func TestSecretMask_EmptySecretsFiltered(t *testing.T) {
126126+ mask := NewSecretMask([]string{"ab", "validpassword", "", "xyz"})
127127+128128+ input := "Using validpassword here"
129129+ expected := "Using *** here"
130130+131131+ result := mask.Mask(input)
132132+ if result != expected {
133133+ t.Errorf("expected %q, got %q", expected, result)
134134+ }
135135+}
+1-1
spindle/motd
···2020 **
2121 ********
22222323-This is a spindle server. More info at https://tangled.sh/@tangled.sh/core/tree/master/docs/spindle
2323+This is a spindle server. More info at https://docs.tangled.org/spindles.html#spindles
24242525Most API routes are under /xrpc/
+31-13
spindle/server.go
···88 "log/slog"
99 "maps"
1010 "net/http"
1111+ "sync"
11121213 "github.com/go-chi/chi/v5"
1314 "tangled.org/core/api/tangled"
···3031)
31323233//go:embed motd
3333-var motd []byte
3434+var defaultMotd []byte
34353536const (
3637 rbacDomain = "thisserver"
3738)
38393940type Spindle struct {
4040- jc *jetstream.JetstreamClient
4141- db *db.DB
4242- e *rbac.Enforcer
4343- l *slog.Logger
4444- n *notifier.Notifier
4545- engs map[string]models.Engine
4646- jq *queue.Queue
4747- cfg *config.Config
4848- ks *eventconsumer.Consumer
4949- res *idresolver.Resolver
5050- vault secrets.Manager
4141+ jc *jetstream.JetstreamClient
4242+ db *db.DB
4343+ e *rbac.Enforcer
4444+ l *slog.Logger
4545+ n *notifier.Notifier
4646+ engs map[string]models.Engine
4747+ jq *queue.Queue
4848+ cfg *config.Config
4949+ ks *eventconsumer.Consumer
5050+ res *idresolver.Resolver
5151+ vault secrets.Manager
5252+ motd []byte
5353+ motdMu sync.RWMutex
5154}
52555356// New creates a new Spindle server with the provided configuration and engines.
···128131 cfg: cfg,
129132 res: resolver,
130133 vault: vault,
134134+ motd: defaultMotd,
131135 }
132136133137 err = e.AddSpindle(rbacDomain)
···201205 return s.e
202206}
203207208208+// SetMotdContent sets custom MOTD content, replacing the embedded default.
209209+func (s *Spindle) SetMotdContent(content []byte) {
210210+ s.motdMu.Lock()
211211+ defer s.motdMu.Unlock()
212212+ s.motd = content
213213+}
214214+215215+// GetMotdContent returns the current MOTD content.
216216+func (s *Spindle) GetMotdContent() []byte {
217217+ s.motdMu.RLock()
218218+ defer s.motdMu.RUnlock()
219219+ return s.motd
220220+}
221221+204222// Start starts the Spindle server (blocking).
205223func (s *Spindle) Start(ctx context.Context) error {
206224 // starts a job queue runner in the background
···246264 mux := chi.NewRouter()
247265248266 mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
249249- w.Write(motd)
267267+ w.Write(s.GetMotdContent())
250268 })
251269 mux.HandleFunc("/events", s.Events)
252270 mux.HandleFunc("/logs/{knot}/{rkey}/{name}", s.Logs)
···174174175175func (commit Commit) CoAuthors() []object.Signature {
176176 var coAuthors []object.Signature
177177-177177+ seen := make(map[string]bool)
178178 matches := coAuthorRegex.FindAllStringSubmatch(commit.Message, -1)
179179180180 for _, match := range matches {
181181 if len(match) >= 3 {
182182 name := strings.TrimSpace(match[1])
183183 email := strings.TrimSpace(match[2])
184184+185185+ if seen[email] {
186186+ continue
187187+ }
188188+ seen[email] = true
184189185190 coAuthors = append(coAuthors, object.Signature{
186191 Name: name,
+3
types/diff.go
···74747575// used by html elements as a unique ID for hrefs
7676func (d *Diff) Id() string {
7777+ if d.IsDelete {
7878+ return d.Name.Old
7979+ }
7780 return d.Name.New
7881}
7982
+112
types/diff_test.go
···11+package types
22+33+import "testing"
44+55+func TestDiffId(t *testing.T) {
66+ tests := []struct {
77+ name string
88+ diff Diff
99+ expected string
1010+ }{
1111+ {
1212+ name: "regular file uses new name",
1313+ diff: Diff{
1414+ Name: struct {
1515+ Old string `json:"old"`
1616+ New string `json:"new"`
1717+ }{Old: "", New: "src/main.go"},
1818+ },
1919+ expected: "src/main.go",
2020+ },
2121+ {
2222+ name: "new file uses new name",
2323+ diff: Diff{
2424+ Name: struct {
2525+ Old string `json:"old"`
2626+ New string `json:"new"`
2727+ }{Old: "", New: "src/new.go"},
2828+ IsNew: true,
2929+ },
3030+ expected: "src/new.go",
3131+ },
3232+ {
3333+ name: "deleted file uses old name",
3434+ diff: Diff{
3535+ Name: struct {
3636+ Old string `json:"old"`
3737+ New string `json:"new"`
3838+ }{Old: "src/deleted.go", New: ""},
3939+ IsDelete: true,
4040+ },
4141+ expected: "src/deleted.go",
4242+ },
4343+ {
4444+ name: "renamed file uses new name",
4545+ diff: Diff{
4646+ Name: struct {
4747+ Old string `json:"old"`
4848+ New string `json:"new"`
4949+ }{Old: "src/old.go", New: "src/renamed.go"},
5050+ IsRename: true,
5151+ },
5252+ expected: "src/renamed.go",
5353+ },
5454+ }
5555+5656+ for _, tt := range tests {
5757+ t.Run(tt.name, func(t *testing.T) {
5858+ if got := tt.diff.Id(); got != tt.expected {
5959+ t.Errorf("Diff.Id() = %q, want %q", got, tt.expected)
6060+ }
6161+ })
6262+ }
6363+}
6464+6565+func TestChangedFilesMatchesDiffId(t *testing.T) {
6666+ // ChangedFiles() must return values matching each Diff's Id()
6767+ // so that sidebar links point to the correct anchors.
6868+ // Tests existing, deleted, new, and renamed files.
6969+ nd := NiceDiff{
7070+ Diff: []Diff{
7171+ {
7272+ Name: struct {
7373+ Old string `json:"old"`
7474+ New string `json:"new"`
7575+ }{Old: "", New: "src/modified.go"},
7676+ },
7777+ {
7878+ Name: struct {
7979+ Old string `json:"old"`
8080+ New string `json:"new"`
8181+ }{Old: "src/deleted.go", New: ""},
8282+ IsDelete: true,
8383+ },
8484+ {
8585+ Name: struct {
8686+ Old string `json:"old"`
8787+ New string `json:"new"`
8888+ }{Old: "", New: "src/new.go"},
8989+ IsNew: true,
9090+ },
9191+ {
9292+ Name: struct {
9393+ Old string `json:"old"`
9494+ New string `json:"new"`
9595+ }{Old: "src/old.go", New: "src/renamed.go"},
9696+ IsRename: true,
9797+ },
9898+ },
9999+ }
100100+101101+ changedFiles := nd.ChangedFiles()
102102+103103+ if len(changedFiles) != len(nd.Diff) {
104104+ t.Fatalf("ChangedFiles() returned %d items, want %d", len(changedFiles), len(nd.Diff))
105105+ }
106106+107107+ for i, diff := range nd.Diff {
108108+ if changedFiles[i] != diff.Id() {
109109+ t.Errorf("ChangedFiles()[%d] = %q, but Diff.Id() = %q", i, changedFiles[i], diff.Id())
110110+ }
111111+ }
112112+}