···423423 </listitem>
424424 <listitem>
425425 <para>
426426+ Nebula now runs as a system user and group created for each
427427+ nebula network, using the <literal>CAP_NET_ADMIN</literal>
428428+ ambient capability on launch rather than starting as root.
429429+ Ensure that any files each Nebula instance needs to access are
430430+ owned by the correct user and group, by default
431431+ <literal>nebula-${networkName}</literal>.
432432+ </para>
433433+ </listitem>
434434+ <listitem>
435435+ <para>
426436             In <literal>mastodon</literal> it is now necessary to specify
427437             the location of the file with the <literal>PostgreSQL</literal> database
428438 password. In
···512522 <para>
513523 A few openssh options have been moved from extraConfig to the
514524 new freeform option <literal>settings</literal> and renamed as
515515- follow:
516516- <literal>services.openssh.kbdInteractiveAuthentication</literal>
517517- to
518518- <literal>services.openssh.settings.KbdInteractiveAuthentication</literal>,
519519- <literal>services.openssh.passwordAuthentication</literal> to
520520- <literal>services.openssh.settings.PasswordAuthentication</literal>,
521521- <literal>services.openssh.useDns</literal> to
522522- <literal>services.openssh.settings.UseDns</literal>,
523523- <literal>services.openssh.permitRootLogin</literal> to
524524- <literal>services.openssh.settings.PermitRootLogin</literal>,
525525- <literal>services.openssh.logLevel</literal> to
526526- <literal>services.openssh.settings.LogLevel</literal>.
525525+ follows:
527526 </para>
527527+ <itemizedlist spacing="compact">
528528+ <listitem>
529529+ <para>
530530+ <literal>services.openssh.forwardX11</literal> to
531531+ <literal>services.openssh.settings.X11Forwarding</literal>
532532+ </para>
533533+ </listitem>
534534+ <listitem>
535535+ <para>
536536+ <literal>services.openssh.kbdInteractiveAuthentication</literal>
537537+                    to
538538+ <literal>services.openssh.settings.KbdInteractiveAuthentication</literal>
539539+ </para>
540540+ </listitem>
541541+ <listitem>
542542+ <para>
543543+ <literal>services.openssh.passwordAuthentication</literal>
544544+ to
545545+ <literal>services.openssh.settings.PasswordAuthentication</literal>
546546+ </para>
547547+ </listitem>
548548+ <listitem>
549549+ <para>
550550+ <literal>services.openssh.useDns</literal> to
551551+ <literal>services.openssh.settings.UseDns</literal>
552552+ </para>
553553+ </listitem>
554554+ <listitem>
555555+ <para>
556556+ <literal>services.openssh.permitRootLogin</literal> to
557557+ <literal>services.openssh.settings.PermitRootLogin</literal>
558558+ </para>
559559+ </listitem>
560560+ <listitem>
561561+ <para>
562562+ <literal>services.openssh.logLevel</literal> to
563563+ <literal>services.openssh.settings.LogLevel</literal>
564564+ </para>
565565+ </listitem>
566566+ <listitem>
567567+ <para>
568568+ <literal>services.openssh.kexAlgorithms</literal> to
569569+ <literal>services.openssh.settings.KexAlgorithms</literal>
570570+ </para>
571571+ </listitem>
572572+ <listitem>
573573+ <para>
574574+ <literal>services.openssh.macs</literal> to
575575+ <literal>services.openssh.settings.Macs</literal>
576576+ </para>
577577+ </listitem>
578578+ <listitem>
579579+ <para>
580580+                <literal>services.openssh.ciphers</literal> to
581581+                <literal>services.openssh.settings.Ciphers</literal>
582582+ </para>
583583+ </listitem>
584584+ <listitem>
585585+ <para>
586586+ <literal>services.openssh.gatewayPorts</literal> to
587587+ <literal>services.openssh.settings.GatewayPorts</literal>
588588+ </para>
589589+ </listitem>
590590+ </itemizedlist>
528591 </listitem>
529592 <listitem>
530593 <para>
···801864 <link xlink:href="options.html#opt-services.garage.package">services.garage.package</link>
802865 or upgrade accordingly
803866 <link xlink:href="options.html#opt-system.stateVersion">system.stateVersion</link>.
867867+ </para>
868868+ </listitem>
869869+ <listitem>
870870+ <para>
871871+ Nebula now supports the
872872+          <literal>services.nebula.networks.&lt;name&gt;.isRelay</literal>
873873+          and
874874+          <literal>services.nebula.networks.&lt;name&gt;.relays</literal>
875875+ configuration options for setting up or allowing traffic
876876+ relaying. See the
877877+ <link xlink:href="https://www.defined.net/blog/announcing-relay-support-in-nebula/">announcement</link>
878878+ for more details about relays.
804879 </para>
805880 </listitem>
806881 <listitem>
+15-1
nixos/doc/manual/release-notes/rl-2305.section.md
···101101102102- The [services.wordpress.sites.<name>.plugins](#opt-services.wordpress.sites._name_.plugins) and [services.wordpress.sites.<name>.themes](#opt-services.wordpress.sites._name_.themes) options have been converted from sets to attribute sets to allow for consumers to specify explicit install paths via attribute name.
103103104104+- Nebula now runs as a system user and group created for each nebula network, using the `CAP_NET_ADMIN` ambient capability on launch rather than starting as root. Ensure that any files each Nebula instance needs to access are owned by the correct user and group, by default `nebula-${networkName}`.
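For reference, the ownership requirement above can be sketched with a tmpfiles rule; the network name `mesh` and the key paths are hypothetical:

```nix
{
  # Hypothetical network "mesh": the module creates user/group "nebula-mesh".
  # systemd-tmpfiles 'z' lines adjust mode and ownership of existing files,
  # so key material the instance reads ends up owned by the service user:
  systemd.tmpfiles.rules = [
    "z /etc/nebula/mesh.key 0600 nebula-mesh nebula-mesh -"
    "z /etc/nebula/mesh.crt 0644 nebula-mesh nebula-mesh -"
  ];
}
```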
105105+104106- In `mastodon` it is now necessary to specify the location of the file with the `PostgreSQL` database password. The default value of the `services.mastodon.database.passwordFile` parameter has been changed from `/var/lib/mastodon/secrets/db-password` to `null`.
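Since the default is now `null`, a configuration that keeps the old behavior sets the path explicitly (a sketch; the path shown is simply the previous default):

```nix
{
  # Point mastodon at the file containing the PostgreSQL database password.
  services.mastodon.database.passwordFile = "/var/lib/mastodon/secrets/db-password";
}
```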
105107106108- The `--target-host` and `--build-host` options of `nixos-rebuild` no longer treat the `localhost` value specially – to build on or deploy to the local machine, omit the relevant flag.
···126128127129- The module `usbmuxd` now has the ability to change the package used by the daemon. In case you're experiencing issues with `usbmuxd`, you can try an alternative program like `usbmuxd2`, available as [services.usbmuxd.package](#opt-services.usbmuxd.package).
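A minimal sketch of swapping the daemon implementation, assuming `pkgs.usbmuxd2` is available as described above:

```nix
{ pkgs, ... }:
{
  services.usbmuxd = {
    enable = true;
    package = pkgs.usbmuxd2;  # alternative implementation; the default remains pkgs.usbmuxd
  };
}
```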
128130129129-- A few openssh options have been moved from extraConfig to the new freeform option `settings` and renamed as follow: `services.openssh.kbdInteractiveAuthentication` to `services.openssh.settings.KbdInteractiveAuthentication`, `services.openssh.passwordAuthentication` to `services.openssh.settings.PasswordAuthentication`, `services.openssh.useDns` to `services.openssh.settings.UseDns`, `services.openssh.permitRootLogin` to `services.openssh.settings.PermitRootLogin`, `services.openssh.logLevel` to `services.openssh.settings.LogLevel`.
131131+- A few openssh options have been moved from extraConfig to the new freeform option `settings` and renamed as follows:
132132+ - `services.openssh.forwardX11` to `services.openssh.settings.X11Forwarding`
133133+  - `services.openssh.kbdInteractiveAuthentication` to `services.openssh.settings.KbdInteractiveAuthentication`
134134+ - `services.openssh.passwordAuthentication` to `services.openssh.settings.PasswordAuthentication`
135135+ - `services.openssh.useDns` to `services.openssh.settings.UseDns`
136136+ - `services.openssh.permitRootLogin` to `services.openssh.settings.PermitRootLogin`
137137+ - `services.openssh.logLevel` to `services.openssh.settings.LogLevel`
138138+ - `services.openssh.kexAlgorithms` to `services.openssh.settings.KexAlgorithms`
139139+ - `services.openssh.macs` to `services.openssh.settings.Macs`
140140+  - `services.openssh.ciphers` to `services.openssh.settings.Ciphers`
141141+ - `services.openssh.gatewayPorts` to `services.openssh.settings.GatewayPorts`
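The renames above are a mechanical migration; a before/after sketch using two of the listed options:

```nix
{
  # Old style (pre-23.05):
  # services.openssh.permitRootLogin = "no";
  # services.openssh.passwordAuthentication = false;

  # New freeform style:
  services.openssh.settings = {
    PermitRootLogin = "no";
    PasswordAuthentication = false;
  };
}
```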
130142131143- `services.mastodon` gained a tootctl wrapper named `mastodon-tootctl`, similar to `nextcloud-occ`, which can be executed by any user; it switches to the configured mastodon user with sudo and sources the environment variables.
132144···198210 - Increased the minimum length of a response that will be gzipped.
199211200212- [Garage](https://garagehq.deuxfleurs.fr/) version is based on [system.stateVersion](options.html#opt-system.stateVersion), existing installations will keep using version 0.7. New installations will use version 0.8. In order to upgrade a Garage cluster, please follow [upstream instructions](https://garagehq.deuxfleurs.fr/documentation/cookbook/upgrading/) and force [services.garage.package](options.html#opt-services.garage.package) or upgrade accordingly [system.stateVersion](options.html#opt-system.stateVersion).
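To keep an existing cluster on 0.7 regardless of `system.stateVersion`, the package can be forced explicitly (assuming a `garage_0_7` package attribute as shipped in this release):

```nix
{ pkgs, ... }:
{
  services.garage = {
    enable = true;
    # Pin the version; switch to a newer package only after following
    # the upstream upgrade instructions linked above.
    package = pkgs.garage_0_7;
  };
}
```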
213213+214214+- Nebula now supports the `services.nebula.networks.<name>.isRelay` and `services.nebula.networks.<name>.relays` configuration options for setting up or allowing traffic relaying. See the [announcement](https://www.defined.net/blog/announcing-relay-support-in-nebula/) for more details about relays.
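A sketch of the two sides of a relay setup; the network name `mesh` and the addresses are hypothetical:

```nix
{
  # On the relay node itself (often colocated with a lighthouse):
  services.nebula.networks.mesh.isRelay = true;

  # On the other nodes, allow relaying traffic via its nebula IP instead:
  # services.nebula.networks.mesh.relays = [ "10.0.100.1" ];
}
```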
201215202216- `hip` has been separated into `hip`, `hip-common` and `hipcc`.
203217
···203203 proxy_send_timeout ${cfg.proxyTimeout};
204204 proxy_read_timeout ${cfg.proxyTimeout};
205205 proxy_http_version 1.1;
206206- # don't let clients close the keep-alive connection to upstream
206206+ # don't let clients close the keep-alive connection to upstream. See the nginx blog for details:
207207+ # https://www.nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/#no-keepalives
207208 proxy_set_header "Connection" "";
208209 include ${recommendedProxyConfig};
209210 ''}
···11+import ./make-test-python.nix (
22+ { pkgs, lib, ... }:
33+44+ let
55+ user = "alice"; # from ./common/user-account.nix
66+ password = "foobar"; # from ./common/user-account.nix
77+ in {
88+ name = "cockpit";
99+ meta = {
1010+ maintainers = with lib.maintainers; [ lucasew ];
1111+ };
1212+ nodes = {
1313+ server = { config, ... }: {
1414+ imports = [ ./common/user-account.nix ];
1515+ security.polkit.enable = true;
1616+ users.users.${user} = {
1717+ extraGroups = [ "wheel" ];
1818+ };
1919+ services.cockpit = {
2020+ enable = true;
2121+ openFirewall = true;
2222+ settings = {
2323+ WebService = {
2424+ Origins = "https://server:9090";
2525+ };
2626+ };
2727+ };
2828+ };
2929+ client = { config, ... }: {
3030+ imports = [ ./common/user-account.nix ];
3131+ environment.systemPackages = let
3232+ seleniumScript = pkgs.writers.writePython3Bin "selenium-script" {
3333+ libraries = with pkgs.python3Packages; [ selenium ];
3434+ } ''
3535+ from selenium import webdriver
3636+ from selenium.webdriver.common.by import By
3737+ from selenium.webdriver.firefox.options import Options
3838+ from selenium.webdriver.support.ui import WebDriverWait
3939+ from selenium.webdriver.support import expected_conditions as EC
4040+ from time import sleep
4141+4242+4343+ def log(msg):
4444+ from sys import stderr
4545+ print(f"[*] {msg}", file=stderr)
4646+4747+4848+ log("Initializing")
4949+5050+ options = Options()
5151+ options.add_argument("--headless")
5252+5353+ driver = webdriver.Firefox(options=options)
5454+5555+ driver.implicitly_wait(10)
5656+5757+ log("Opening homepage")
5858+ driver.get("https://server:9090")
5959+6060+ wait = WebDriverWait(driver, 60)
6161+6262+6363+ def wait_elem(by, query):
6464+ wait.until(EC.presence_of_element_located((by, query)))
6565+6666+6767+ def wait_title_contains(title):
6868+ wait.until(EC.title_contains(title))
6969+7070+7171+ def find_element(by, query):
7272+ return driver.find_element(by, query)
7373+7474+7575+ def set_value(elem, value):
7676+ script = 'arguments[0].value = arguments[1]'
7777+ return driver.execute_script(script, elem, value)
7878+7979+8080+ log("Waiting for the homepage to load")
8181+8282+ # cockpit sets initial title as hostname
8383+ wait_title_contains("server")
8484+ wait_elem(By.CSS_SELECTOR, 'input#login-user-input')
8585+8686+ log("Homepage loaded!")
8787+8888+ log("Filling out username")
8989+ login_input = find_element(By.CSS_SELECTOR, 'input#login-user-input')
9090+ set_value(login_input, "${user}")
9191+9292+ log("Filling out password")
9393+ password_input = find_element(By.CSS_SELECTOR, 'input#login-password-input')
9494+ set_value(password_input, "${password}")
9595+9696+      log("Submitting credentials for login")
9797+ driver.find_element(By.CSS_SELECTOR, 'button#login-button').click()
9898+9999+ # driver.implicitly_wait(1)
100100+ # driver.get("https://server:9090/system")
101101+102102+      log("Waiting for the dashboard to load")
103103+ wait_title_contains("${user}@server")
104104+105105+      log("Waiting for the frontend to initialize")
106106+ sleep(1)
107107+108108+      log("Looking for the banner about limited access mode")
109109+ container_iframe = find_element(By.CSS_SELECTOR, 'iframe.container-frame')
110110+ driver.switch_to.frame(container_iframe)
111111+112112+ assert "Web console is running in limited access mode" in driver.page_source
113113+114114+ driver.close()
115115+ '';
116116+ in with pkgs; [ firefox-unwrapped geckodriver seleniumScript ];
117117+ };
118118+ };
119119+120120+ testScript = ''
121121+ start_all()
122122+123123+ server.wait_for_open_port(9090)
124124+ server.wait_for_unit("network.target")
125125+ server.wait_for_unit("multi-user.target")
126126+ server.systemctl("start", "polkit")
127127+128128+ client.wait_for_unit("multi-user.target")
129129+130130+ client.succeed("curl -k https://server:9090 -o /dev/stderr")
131131+ print(client.succeed("whoami"))
132132+ client.succeed('PYTHONUNBUFFERED=1 selenium-script')
133133+ '';
134134+ }
135135+)
+1-1
nixos/tests/cups-pdf.nix
···23232424 testScript = ''
2525 from subprocess import run
2626- machine.wait_for_unit("cups.service")
2626+ machine.wait_for_unit("multi-user.target")
2727 for name in ("opt", "noopt"):
2828 text = f"test text {name}".upper()
2929 machine.wait_until_succeeds(f"lpstat -v {name}")
+143-58
nixos/tests/nebula.nix
···1010 environment.systemPackages = [ pkgs.nebula ];
1111 users.users.root.openssh.authorizedKeys.keys = [ snakeOilPublicKey ];
1212 services.openssh.enable = true;
1313+ networking.interfaces.eth1.useDHCP = false;
13141415 services.nebula.networks.smoke = {
1516 # Note that these paths won't exist when the machine is first booted.
···30313132 lighthouse = { ... } @ args:
3233 makeNebulaNode args "lighthouse" {
3333- networking.interfaces.eth1.ipv4.addresses = [{
3434+ networking.interfaces.eth1.ipv4.addresses = lib.mkForce [{
3435 address = "192.168.1.1";
3536 prefixLength = 24;
3637 }];
37383839 services.nebula.networks.smoke = {
3940 isLighthouse = true;
4141+ isRelay = true;
4042 firewall = {
4143 outbound = [ { port = "any"; proto = "any"; host = "any"; } ];
4244 inbound = [ { port = "any"; proto = "any"; host = "any"; } ];
···4446 };
4547 };
46484747- node2 = { ... } @ args:
4848- makeNebulaNode args "node2" {
4949- networking.interfaces.eth1.ipv4.addresses = [{
4949+ allowAny = { ... } @ args:
5050+ makeNebulaNode args "allowAny" {
5151+ networking.interfaces.eth1.ipv4.addresses = lib.mkForce [{
5052 address = "192.168.1.2";
5153 prefixLength = 24;
5254 }];
···5557 staticHostMap = { "10.0.100.1" = [ "192.168.1.1:4242" ]; };
5658 isLighthouse = false;
5759 lighthouses = [ "10.0.100.1" ];
6060+ relays = [ "10.0.100.1" ];
5861 firewall = {
5962 outbound = [ { port = "any"; proto = "any"; host = "any"; } ];
6063 inbound = [ { port = "any"; proto = "any"; host = "any"; } ];
···6265 };
6366 };
64676565- node3 = { ... } @ args:
6666- makeNebulaNode args "node3" {
6767- networking.interfaces.eth1.ipv4.addresses = [{
6868+ allowFromLighthouse = { ... } @ args:
6969+ makeNebulaNode args "allowFromLighthouse" {
7070+ networking.interfaces.eth1.ipv4.addresses = lib.mkForce [{
6871 address = "192.168.1.3";
6972 prefixLength = 24;
7073 }];
···7376 staticHostMap = { "10.0.100.1" = [ "192.168.1.1:4242" ]; };
7477 isLighthouse = false;
7578 lighthouses = [ "10.0.100.1" ];
7979+ relays = [ "10.0.100.1" ];
7680 firewall = {
7781 outbound = [ { port = "any"; proto = "any"; host = "any"; } ];
7882 inbound = [ { port = "any"; proto = "any"; host = "lighthouse"; } ];
···8084 };
8185 };
82868383- node4 = { ... } @ args:
8484- makeNebulaNode args "node4" {
8585- networking.interfaces.eth1.ipv4.addresses = [{
8787+ allowToLighthouse = { ... } @ args:
8888+ makeNebulaNode args "allowToLighthouse" {
8989+ networking.interfaces.eth1.ipv4.addresses = lib.mkForce [{
8690 address = "192.168.1.4";
8791 prefixLength = 24;
8892 }];
···9296 staticHostMap = { "10.0.100.1" = [ "192.168.1.1:4242" ]; };
9397 isLighthouse = false;
9498 lighthouses = [ "10.0.100.1" ];
9999+ relays = [ "10.0.100.1" ];
95100 firewall = {
96101 outbound = [ { port = "any"; proto = "any"; host = "lighthouse"; } ];
97102 inbound = [ { port = "any"; proto = "any"; host = "any"; } ];
···99104 };
100105 };
101106102102- node5 = { ... } @ args:
103103- makeNebulaNode args "node5" {
104104- networking.interfaces.eth1.ipv4.addresses = [{
107107+ disabled = { ... } @ args:
108108+ makeNebulaNode args "disabled" {
109109+ networking.interfaces.eth1.ipv4.addresses = lib.mkForce [{
105110 address = "192.168.1.5";
106111 prefixLength = 24;
107112 }];
···111116 staticHostMap = { "10.0.100.1" = [ "192.168.1.1:4242" ]; };
112117 isLighthouse = false;
113118 lighthouses = [ "10.0.100.1" ];
119119+ relays = [ "10.0.100.1" ];
114120 firewall = {
115121 outbound = [ { port = "any"; proto = "any"; host = "lighthouse"; } ];
116122 inbound = [ { port = "any"; proto = "any"; host = "any"; } ];
···123129 testScript = let
124130125131 setUpPrivateKey = name: ''
126126- ${name}.succeed(
127127- "mkdir -p /root/.ssh",
128128- "chown 700 /root/.ssh",
129129- "cat '${snakeOilPrivateKey}' > /root/.ssh/id_snakeoil",
130130- "chown 600 /root/.ssh/id_snakeoil",
131131- )
132132+ ${name}.start()
133133+ ${name}.succeed(
134134+ "mkdir -p /root/.ssh",
135135+          "chmod 700 /root/.ssh",
136136+          "cat '${snakeOilPrivateKey}' > /root/.ssh/id_snakeoil",
137137+          "chmod 600 /root/.ssh/id_snakeoil",
138138+ "mkdir -p /root"
139139+ )
132140 '';
133141134142 # From what I can tell, StrictHostKeyChecking=no is necessary for ssh to work between machines.
···146154 ${name}.succeed(
147155 "mkdir -p /etc/nebula",
148156 "nebula-cert keygen -out-key /etc/nebula/${name}.key -out-pub /etc/nebula/${name}.pub",
149149- "scp ${sshOpts} /etc/nebula/${name}.pub 192.168.1.1:/tmp/${name}.pub",
157157+ "scp ${sshOpts} /etc/nebula/${name}.pub root@192.168.1.1:/root/${name}.pub",
150158 )
151159 lighthouse.succeed(
152152- 'nebula-cert sign -ca-crt /etc/nebula/ca.crt -ca-key /etc/nebula/ca.key -name "${name}" -groups "${name}" -ip "${ip}" -in-pub /tmp/${name}.pub -out-crt /tmp/${name}.crt',
160160+ 'nebula-cert sign -ca-crt /etc/nebula/ca.crt -ca-key /etc/nebula/ca.key -name "${name}" -groups "${name}" -ip "${ip}" -in-pub /root/${name}.pub -out-crt /root/${name}.crt'
153161 )
154162 ${name}.succeed(
155155- "scp ${sshOpts} 192.168.1.1:/tmp/${name}.crt /etc/nebula/${name}.crt",
156156- "scp ${sshOpts} 192.168.1.1:/etc/nebula/ca.crt /etc/nebula/ca.crt",
163163+ "scp ${sshOpts} root@192.168.1.1:/root/${name}.crt /etc/nebula/${name}.crt",
164164+ "scp ${sshOpts} root@192.168.1.1:/etc/nebula/ca.crt /etc/nebula/ca.crt",
165165+ '(id nebula-smoke >/dev/null && chown -R nebula-smoke:nebula-smoke /etc/nebula) || true'
157166 )
158167 '';
159168160160- in ''
161161- start_all()
169169+ getPublicIp = node: ''
170170+ ${node}.succeed("ip --brief addr show eth1 | awk '{print $3}' | tail -n1 | cut -d/ -f1").strip()
171171+ '';
162172173173+ # Never do this for anything security critical! (Thankfully it's just a test.)
174174+ # Restart Nebula right after the mutual block and/or restore so the state is fresh.
175175+ blockTrafficBetween = nodeA: nodeB: ''
176176+ node_a = ${getPublicIp nodeA}
177177+ node_b = ${getPublicIp nodeB}
178178+ ${nodeA}.succeed("iptables -I INPUT -s " + node_b + " -j DROP")
179179+ ${nodeB}.succeed("iptables -I INPUT -s " + node_a + " -j DROP")
180180+ ${nodeA}.systemctl("restart nebula@smoke.service")
181181+ ${nodeB}.systemctl("restart nebula@smoke.service")
182182+ '';
183183+ allowTrafficBetween = nodeA: nodeB: ''
184184+ node_a = ${getPublicIp nodeA}
185185+ node_b = ${getPublicIp nodeB}
186186+ ${nodeA}.succeed("iptables -D INPUT -s " + node_b + " -j DROP")
187187+ ${nodeB}.succeed("iptables -D INPUT -s " + node_a + " -j DROP")
188188+ ${nodeA}.systemctl("restart nebula@smoke.service")
189189+ ${nodeB}.systemctl("restart nebula@smoke.service")
190190+ '';
191191+ in ''
163192 # Create the certificate and sign the lighthouse's keys.
164193 ${setUpPrivateKey "lighthouse"}
165194 lighthouse.succeed(
166195 "mkdir -p /etc/nebula",
167196 'nebula-cert ca -name "Smoke Test" -out-crt /etc/nebula/ca.crt -out-key /etc/nebula/ca.key',
168197 'nebula-cert sign -ca-crt /etc/nebula/ca.crt -ca-key /etc/nebula/ca.key -name "lighthouse" -groups "lighthouse" -ip "10.0.100.1/24" -out-crt /etc/nebula/lighthouse.crt -out-key /etc/nebula/lighthouse.key',
198198+ 'chown -R nebula-smoke:nebula-smoke /etc/nebula'
169199 )
170200171201 # Reboot the lighthouse and verify that the nebula service comes up on boot.
···175205 lighthouse.wait_for_unit("nebula@smoke.service")
176206 lighthouse.succeed("ping -c5 10.0.100.1")
177207178178- # Create keys for node2's nebula service and test that it comes up.
179179- ${setUpPrivateKey "node2"}
180180- ${signKeysFor "node2" "10.0.100.2/24"}
181181- ${restartAndCheckNebula "node2" "10.0.100.2"}
208208+ # Create keys for allowAny's nebula service and test that it comes up.
209209+ ${setUpPrivateKey "allowAny"}
210210+ ${signKeysFor "allowAny" "10.0.100.2/24"}
211211+ ${restartAndCheckNebula "allowAny" "10.0.100.2"}
182212183183- # Create keys for node3's nebula service and test that it comes up.
184184- ${setUpPrivateKey "node3"}
185185- ${signKeysFor "node3" "10.0.100.3/24"}
186186- ${restartAndCheckNebula "node3" "10.0.100.3"}
213213+ # Create keys for allowFromLighthouse's nebula service and test that it comes up.
214214+ ${setUpPrivateKey "allowFromLighthouse"}
215215+ ${signKeysFor "allowFromLighthouse" "10.0.100.3/24"}
216216+ ${restartAndCheckNebula "allowFromLighthouse" "10.0.100.3"}
187217188188- # Create keys for node4's nebula service and test that it comes up.
189189- ${setUpPrivateKey "node4"}
190190- ${signKeysFor "node4" "10.0.100.4/24"}
191191- ${restartAndCheckNebula "node4" "10.0.100.4"}
218218+ # Create keys for allowToLighthouse's nebula service and test that it comes up.
219219+ ${setUpPrivateKey "allowToLighthouse"}
220220+ ${signKeysFor "allowToLighthouse" "10.0.100.4/24"}
221221+ ${restartAndCheckNebula "allowToLighthouse" "10.0.100.4"}
192222193193- # Create keys for node4's nebula service and test that it does not come up.
194194- ${setUpPrivateKey "node5"}
195195- ${signKeysFor "node5" "10.0.100.5/24"}
196196- node5.fail("systemctl status nebula@smoke.service")
197197- node5.fail("ping -c5 10.0.100.5")
223223+ # Create keys for disabled's nebula service and test that it does not come up.
224224+ ${setUpPrivateKey "disabled"}
225225+ ${signKeysFor "disabled" "10.0.100.5/24"}
226226+ disabled.fail("systemctl status nebula@smoke.service")
227227+ disabled.fail("ping -c5 10.0.100.5")
198228199199- # The lighthouse can ping node2 and node3 but not node5
229229+ # The lighthouse can ping allowAny and allowFromLighthouse but not disabled
200230 lighthouse.succeed("ping -c3 10.0.100.2")
201231 lighthouse.succeed("ping -c3 10.0.100.3")
202232 lighthouse.fail("ping -c3 10.0.100.5")
203233204204- # node2 can ping the lighthouse, but not node3 because of its inbound firewall
205205- node2.succeed("ping -c3 10.0.100.1")
206206- node2.fail("ping -c3 10.0.100.3")
234234+ # allowAny can ping the lighthouse, but not allowFromLighthouse because of its inbound firewall
235235+ allowAny.succeed("ping -c3 10.0.100.1")
236236+ allowAny.fail("ping -c3 10.0.100.3")
207237208208- # node3 can ping the lighthouse and node2
209209- node3.succeed("ping -c3 10.0.100.1")
210210- node3.succeed("ping -c3 10.0.100.2")
238238+ # allowFromLighthouse can ping the lighthouse and allowAny
239239+ allowFromLighthouse.succeed("ping -c3 10.0.100.1")
240240+ allowFromLighthouse.succeed("ping -c3 10.0.100.2")
241241+242242+ # block allowFromLighthouse <-> allowAny, and allowFromLighthouse -> allowAny should still work.
243243+ ${blockTrafficBetween "allowFromLighthouse" "allowAny"}
244244+ allowFromLighthouse.succeed("ping -c10 10.0.100.2")
245245+ ${allowTrafficBetween "allowFromLighthouse" "allowAny"}
246246+ allowFromLighthouse.succeed("ping -c10 10.0.100.2")
247247+248248+ # allowToLighthouse can ping the lighthouse but not allowAny or allowFromLighthouse
249249+ allowToLighthouse.succeed("ping -c3 10.0.100.1")
250250+ allowToLighthouse.fail("ping -c3 10.0.100.2")
251251+ allowToLighthouse.fail("ping -c3 10.0.100.3")
211252212212- # node4 can ping the lighthouse but not node2 or node3
213213- node4.succeed("ping -c3 10.0.100.1")
214214- node4.fail("ping -c3 10.0.100.2")
215215- node4.fail("ping -c3 10.0.100.3")
253253+ # allowAny can ping allowFromLighthouse now that allowFromLighthouse pinged it first
254254+ allowAny.succeed("ping -c3 10.0.100.3")
216255217217- # node2 can ping node3 now that node3 pinged it first
218218- node2.succeed("ping -c3 10.0.100.3")
219219- # node4 can ping node2 if node2 pings it first
220220- node2.succeed("ping -c3 10.0.100.4")
221221- node4.succeed("ping -c3 10.0.100.2")
256256+ # block allowAny <-> allowFromLighthouse, and allowAny -> allowFromLighthouse should still work.
257257+ ${blockTrafficBetween "allowAny" "allowFromLighthouse"}
258258+ allowFromLighthouse.succeed("ping -c10 10.0.100.2")
259259+ allowAny.succeed("ping -c10 10.0.100.3")
260260+ ${allowTrafficBetween "allowAny" "allowFromLighthouse"}
261261+ allowFromLighthouse.succeed("ping -c10 10.0.100.2")
262262+ allowAny.succeed("ping -c10 10.0.100.3")
263263+264264+ # allowToLighthouse can ping allowAny if allowAny pings it first
265265+ allowAny.succeed("ping -c3 10.0.100.4")
266266+ allowToLighthouse.succeed("ping -c3 10.0.100.2")
267267+268268+ # block allowToLighthouse <-> allowAny, and allowAny <-> allowToLighthouse should still work.
269269+ ${blockTrafficBetween "allowAny" "allowToLighthouse"}
270270+ allowAny.succeed("ping -c10 10.0.100.4")
271271+ allowToLighthouse.succeed("ping -c10 10.0.100.2")
272272+ ${allowTrafficBetween "allowAny" "allowToLighthouse"}
273273+ allowAny.succeed("ping -c10 10.0.100.4")
274274+ allowToLighthouse.succeed("ping -c10 10.0.100.2")
275275+276276+ # block lighthouse <-> allowFromLighthouse and allowAny <-> allowFromLighthouse; allowFromLighthouse won't get to allowAny
277277+ ${blockTrafficBetween "allowFromLighthouse" "lighthouse"}
278278+ ${blockTrafficBetween "allowFromLighthouse" "allowAny"}
279279+ allowFromLighthouse.fail("ping -c3 10.0.100.2")
280280+ ${allowTrafficBetween "allowFromLighthouse" "lighthouse"}
281281+ ${allowTrafficBetween "allowFromLighthouse" "allowAny"}
282282+ allowFromLighthouse.succeed("ping -c3 10.0.100.2")
283283+284284+ # block lighthouse <-> allowAny, allowAny <-> allowFromLighthouse, and allowAny <-> allowToLighthouse; it won't get to allowFromLighthouse or allowToLighthouse
285285+ ${blockTrafficBetween "allowAny" "lighthouse"}
286286+ ${blockTrafficBetween "allowAny" "allowFromLighthouse"}
287287+ ${blockTrafficBetween "allowAny" "allowToLighthouse"}
288288+ allowFromLighthouse.fail("ping -c3 10.0.100.2")
289289+ allowAny.fail("ping -c3 10.0.100.3")
290290+ allowAny.fail("ping -c3 10.0.100.4")
291291+ ${allowTrafficBetween "allowAny" "lighthouse"}
292292+ ${allowTrafficBetween "allowAny" "allowFromLighthouse"}
293293+ ${allowTrafficBetween "allowAny" "allowToLighthouse"}
294294+ allowFromLighthouse.succeed("ping -c3 10.0.100.2")
295295+ allowAny.succeed("ping -c3 10.0.100.3")
296296+ allowAny.succeed("ping -c3 10.0.100.4")
297297+298298+ # block lighthouse <-> allowToLighthouse and allowToLighthouse <-> allowAny; it won't get to allowAny
299299+ ${blockTrafficBetween "allowToLighthouse" "lighthouse"}
300300+ ${blockTrafficBetween "allowToLighthouse" "allowAny"}
301301+ allowAny.fail("ping -c3 10.0.100.4")
302302+ allowToLighthouse.fail("ping -c3 10.0.100.2")
303303+ ${allowTrafficBetween "allowToLighthouse" "lighthouse"}
304304+ ${allowTrafficBetween "allowToLighthouse" "allowAny"}
305305+ allowAny.succeed("ping -c3 10.0.100.4")
306306+ allowToLighthouse.succeed("ping -c3 10.0.100.2")
222307 '';
223308})
+107-101
nixos/tests/pgadmin4.nix
···11import ./make-test-python.nix ({ pkgs, lib, buildDeps ? [ ], pythonEnv ? [ ], ... }:
2233- /*
33+/*
44 This test suite replaces the typical pytestCheckHook function in python
55 packages. Pgadmin4 test suite needs a running and configured postgresql
66 server. This is why this test exists.
···17171818 */
19192020- let
2121- pgadmin4SrcDir = "/pgadmin";
2222- pgadmin4Dir = "/var/lib/pgadmin";
2323- pgadmin4LogDir = "/var/log/pgadmin";
2020+let
2121+ pgadmin4SrcDir = "/pgadmin";
2222+ pgadmin4Dir = "/var/lib/pgadmin";
2323+ pgadmin4LogDir = "/var/log/pgadmin";
24242525- in
2626- {
2727- name = "pgadmin4";
2828- meta.maintainers = with lib.maintainers; [ gador ];
2525+in
2626+{
2727+ name = "pgadmin4";
2828+ meta.maintainers = with lib.maintainers; [ gador ];
29293030- nodes.machine = { pkgs, ... }: {
3131- imports = [ ./common/x11.nix ];
3232- # needed because pgadmin 6.8 will fail, if those dependencies get updated
3333- nixpkgs.overlays = [
3434- (self: super: {
3535- pythonPackages = pythonEnv;
3636- })
3737- ];
3030+ nodes.machine = { pkgs, ... }: {
3131+ imports = [ ./common/x11.nix ];
3232+ # needed because pgadmin 6.8 will fail, if those dependencies get updated
3333+ nixpkgs.overlays = [
3434+ (self: super: {
3535+ pythonPackages = pythonEnv;
3636+ })
3737+ ];
38383939- environment.systemPackages = with pkgs; [
4040- pgadmin4
4141- postgresql
4242- chromedriver
4343- chromium
4444- # include the same packages as in pgadmin minus speaklater3
4545- (python3.withPackages
4646- (ps: buildDeps ++
4747- [
4848- # test suite package requirements
4949- pythonPackages.testscenarios
5050- pythonPackages.selenium
5151- ])
5252- )
3939+ environment.systemPackages = with pkgs; [
4040+ pgadmin4
4141+ postgresql
4242+ chromedriver
4343+ chromium
4444+ # include the same packages as in pgadmin minus speaklater3
4545+ (python3.withPackages
4646+ (ps: buildDeps ++
4747+ [
4848+ # test suite package requirements
4949+ pythonPackages.testscenarios
5050+ pythonPackages.selenium
5151+ ])
5252+ )
5353+ ];
5454+ services.postgresql = {
5555+ enable = true;
5656+ authentication = ''
5757+ host all all localhost trust
5858+ '';
5959+ ensureUsers = [
6060+ {
6161+ name = "postgres";
6262+ ensurePermissions = {
6363+ "DATABASE \"postgres\"" = "ALL PRIVILEGES";
6464+ };
6565+ }
5366 ];
5454- services.postgresql = {
5555- enable = true;
5656- authentication = ''
5757- host all all localhost trust
5858- '';
5959- ensureUsers = [
6060- {
6161- name = "postgres";
6262- ensurePermissions = {
6363- "DATABASE \"postgres\"" = "ALL PRIVILEGES";
6464- };
6565- }
6666- ];
6767- };
6867 };
6868+ };
69697070- testScript = ''
7171- machine.wait_for_unit("postgresql")
7070+ testScript = ''
7171+ machine.wait_for_unit("postgresql")
72727373- # pgadmin4 needs its data and log directories
7474- machine.succeed(
7575- "mkdir -p ${pgadmin4Dir} \
7676- && mkdir -p ${pgadmin4LogDir} \
7777- && mkdir -p ${pgadmin4SrcDir}"
7878- )
7373+ # pgadmin4 needs its data and log directories
7474+ machine.succeed(
7575+ "mkdir -p ${pgadmin4Dir} \
7676+ && mkdir -p ${pgadmin4LogDir} \
7777+ && mkdir -p ${pgadmin4SrcDir}"
7878+ )
79798080- machine.succeed(
8181- "tar xvzf ${pkgs.pgadmin4.src} -C ${pgadmin4SrcDir}"
8282- )
8080+ machine.succeed(
8181+ "tar xvzf ${pkgs.pgadmin4.src} -C ${pgadmin4SrcDir}"
8282+ )
83838484- machine.wait_for_file("${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version}/README.md")
8484+ machine.wait_for_file("${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version}/README.md")

-    # set paths and config for tests
-    machine.succeed(
-        "cd ${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version} \
-        && cp -v web/regression/test_config.json.in web/regression/test_config.json \
-        && sed -i 's|PostgreSQL 9.4|PostgreSQL|' web/regression/test_config.json \
-        && sed -i 's|/opt/PostgreSQL/9.4/bin/|${pkgs.postgresql}/bin|' web/regression/test_config.json \
-        && sed -i 's|\"headless_chrome\": false|\"headless_chrome\": true|' web/regression/test_config.json"
-    )
+    # set paths and config for tests
+    # also ensure Server Mode is set to false, which will automatically exclude some unnecessary tests.
+    # see https://github.com/pgadmin-org/pgadmin4/blob/fd1c26408bbf154fa455a49ee5c12895933833a3/web/regression/runtests.py#L217-L226
+    machine.succeed(
+        "cd ${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version} \
+        && cp -v web/regression/test_config.json.in web/regression/test_config.json \
+        && sed -i 's|PostgreSQL 9.4|PostgreSQL|' web/regression/test_config.json \
+        && sed -i 's|/opt/PostgreSQL/9.4/bin/|${pkgs.postgresql}/bin|' web/regression/test_config.json \
+        && sed -i 's|\"headless_chrome\": false|\"headless_chrome\": true|' web/regression/test_config.json \
+        && sed -i 's|builtins.SERVER_MODE = None|builtins.SERVER_MODE = False|' web/regression/runtests.py"
+    )

-    # adapt chrome config to run within a sandbox without GUI
-    # see https://stackoverflow.com/questions/50642308/webdriverexception-unknown-error-devtoolsactiveport-file-doesnt-exist-while-t#50642913
-    # add chrome binary path. use spaces to satisfy python indention (tabs throw an error)
-    # this works for selenium 3 (currently used), but will need to be updated
-    # to work with "from selenium.webdriver.chrome.service import Service" in selenium 4
-    machine.succeed(
-        "cd ${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version} \
-        && sed -i '\|options.add_argument(\"--disable-infobars\")|a \ \ \ \ \ \ \ \ options.binary_location = \"${pkgs.chromium}/bin/chromium\"' web/regression/runtests.py \
-        && sed -i '\|options.add_argument(\"--no-sandbox\")|a \ \ \ \ \ \ \ \ options.add_argument(\"--headless\")' web/regression/runtests.py \
-        && sed -i '\|options.add_argument(\"--disable-infobars\")|a \ \ \ \ \ \ \ \ options.add_argument(\"--disable-dev-shm-usage\")' web/regression/runtests.py \
-        && sed -i 's|(chrome_options=options)|(executable_path=\"${pkgs.chromedriver}/bin/chromedriver\", chrome_options=options)|' web/regression/runtests.py \
-        && sed -i 's|driver_local.maximize_window()||' web/regression/runtests.py"
-    )
+    # adapt chrome config to run within a sandbox without GUI
+    # see https://stackoverflow.com/questions/50642308/webdriverexception-unknown-error-devtoolsactiveport-file-doesnt-exist-while-t#50642913
+    # add chrome binary path. use spaces to satisfy python indention (tabs throw an error)
+    machine.succeed(
+        "cd ${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version} \
+        && sed -i '\|options.add_argument(\"--disable-infobars\")|a \ \ \ \ \ \ \ \ options.binary_location = \"${pkgs.chromium}/bin/chromium\"' web/regression/runtests.py \
+        && sed -i '\|options.add_argument(\"--no-sandbox\")|a \ \ \ \ \ \ \ \ options.add_argument(\"--headless\")' web/regression/runtests.py \
+        && sed -i '\|options.add_argument(\"--disable-infobars\")|a \ \ \ \ \ \ \ \ options.add_argument(\"--disable-dev-shm-usage\")' web/regression/runtests.py \
+        && sed -i 's|(chrome_options=options)|(executable_path=\"${pkgs.chromedriver}/bin/chromedriver\", chrome_options=options)|' web/regression/runtests.py \
+        && sed -i 's|driver_local.maximize_window()||' web/regression/runtests.py"
+    )

-    # Don't bother to test LDAP or kerberos authentication
-    with subtest("run browser test"):
-        machine.succeed(
-            'cd ${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version}/web \
-            && python regression/runtests.py \
-            --pkg browser \
-            --exclude browser.tests.test_ldap_login.LDAPLoginTestCase,browser.tests.test_ldap_login,browser.tests.test_kerberos_with_mocking'
-        )
+    # don't bother to test kerberos authentication
+    excluded_tests = [
+        "browser.tests.test_kerberos_with_mocking",
+    ]

-    # fontconfig is necessary for chromium to run
-    # https://github.com/NixOS/nixpkgs/issues/136207
-    with subtest("run feature test"):
-        machine.succeed(
-            'cd ${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version}/web \
-            && export FONTCONFIG_FILE=${pkgs.makeFontsConf { fontDirectories = [];}} \
-            && python regression/runtests.py --pkg feature_tests'
-        )
+    with subtest("run browser test"):
+        machine.succeed(
+            'cd ${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version}/web \
+            && python regression/runtests.py \
+            --pkg browser \
+            --exclude ' + ','.join(excluded_tests)
+        )

-    with subtest("run resql test"):
-        machine.succeed(
-            'cd ${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version}/web \
-            && python regression/runtests.py --pkg resql'
-        )
-  '';
-})
+    with subtest("run resql test"):
+        machine.succeed(
+            'cd ${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version}/web \
+            && python regression/runtests.py --pkg resql'
+        )
+
+    # fontconfig is necessary for chromium to run
+    # https://github.com/NixOS/nixpkgs/issues/136207
+    # also, the feature_tests require Server Mode = True
+    with subtest("run feature test"):
+        machine.succeed(
+            'cd ${pgadmin4SrcDir}/pgadmin4-${pkgs.pgadmin4.version}/web \
+            && export FONTCONFIG_FILE=${pkgs.makeFontsConf { fontDirectories = [];}} \
+            && sed -i \'s|builtins.SERVER_MODE = False|builtins.SERVER_MODE = True|\' regression/runtests.py \
+            && python regression/runtests.py --pkg feature_tests'
+        )
+  '';
+})
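
The rewritten browser test replaces the hard-coded `--exclude` string with a Python list that is comma-joined at call time, so future exclusions only need a new list entry. A minimal sketch of that pattern (the second list entry here is hypothetical, added only to illustrate the joining):

```python
# Sketch of the exclude-list pattern used in the test script above.
# "browser.tests.test_ldap_login" is a hypothetical extra entry, not part
# of the actual change, included only to show how entries accumulate.
excluded_tests = [
    "browser.tests.test_kerberos_with_mocking",
    "browser.tests.test_ldap_login",
]

# runtests.py expects a single comma-separated value after --exclude
command = (
    "python regression/runtests.py --pkg browser --exclude "
    + ",".join(excluded_tests)
)
print(command)
```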