
The Summer of Nix

The Rise of Special Project infra

Published on 2022-08-12 by Ctem

72 hours out. A bead of sweat slides from your brow, falls to the marred chassis of your local build server, and sizzles into mist, leaving a scant salt stain to tell the tale. It’s the start of the hottest Summer of Nix on record, and you have three days to research, provision, configure, and deploy. With or without you, this lecture is going live.

The deadline was 19 July 2022; at five o’clock on that glistening Tuesday afternoon, one Eelco Dolstra – a living legend to those who understand – would take to the webcam from the verdant city of Utrecht to deliver a highly anticipated slice of fresh perspective on his now-19-year-old brainchild, Nix. It was the inaugural event in the premier Summer of Nix (SoN) Public Lecture Series. The hype was real, but so was our predicament: no self-hosted livestreaming infrastructure was yet in place.

Would we simply fall back on the usual gatekeeper platforms, surrendering control of the narrative and feeding our own to corporate leviathans in a vacuum of moral agency?

The ubiquity of these platforms suggests that most would.[1] This, however, is SoN, and we aren’t most. Defying norms of defeatism and manufactured consent, we dare to declare a world of configurability. We celebrate that self-hosting empowers us to maintain ownership of both our content and its presentation, allowing us to introduce to our audience – our community – a way to engage with said content free from third-party influence.

Note that our solidarity is no coincidence; SoN begins with a contractual agreement between each participant and Eelco himself to uphold both NGI Zero’s aim to contribute to an open internet and the ACM Code of Ethics and Professional Conduct, which emphasizes among other points the importance of respecting privacy.

Needless to say, the prospect of streaming Eelco’s lecture exclusively to closed-source, centralized, and infamously privacy-disrespecting services was an irony too plain to ignore. Fortunately, infrastructural improvement is a fundamental objective of the program, and Nix is the definitive tool for the job. In short, the Public Lecture Series livestreaming infrastructure was a natural first target. Time was not on our side, but good people were:

  • The NixOS Foundation was at the ready for server provisioning and DNS management.
  • Personnel at Tweag generously offered expertise in configuration and deployment in a pinch.
  • A new SoN Special Project – tidily dubbed infra – was launched to formalize the effort and attract participation.

With this kind of support, the odds were decidedly stacked in our favor, but could we deliver? We’d find out soon enough...

Phase I: Research

Everybody needs a saga. Ours was: self-host a secure livestreaming server to accommodate a sizable audience participating from all around the world and at variable network speeds, fast. Oh, and add a chat room for synchronous Q&A.

On the server side, we settled on the following stack: virtual private server (VPS) → NixOS → Caddy → Owncast.

The media would originate on our intrepid moderator’s local OBS Studio and be streamed at archival quality over the Real-Time Messaging Protocol (RTMP) to our remote Owncast service. Owncast would transcode the video to multiple stream qualities (i.e., optimized variants of video bitrate and CPU usage) and present the selected variant to the user in an attractive, branded web UI with a convenient chat box.

Phase II: Provision

Controlling the narrative has always called for a bit of hardware. For this exercise, we went with a single VPS equipped with dedicated vCPUs. The general provisioning procedure follows:

  1. Choose a (preferably reputable and carbon-negative) VPS provider.

  2. Provision an instance appropriate to the task and estimated audience.

    • To minimize latency, select the available region closest to the source stream and, where applicable, to the bulk of the expected audience. Note that object storage and CDN caching may additionally be leveraged to accommodate participants: object storage enables the streaming service to keep serving media files when the concurrent user count exceeds the available bandwidth, and CDN caching enables users to download those assets from the nearest available server.
    • Select a server with enough (preferably dedicated) vCPU cores to handle video transcoding, which is a highly processor-intensive operation. Note that the core count should correspond to the stream variants selected in the streaming service configuration.

    The instance we provisioned for the first stress test included the following specs:

    | vCPU | RAM   | Local storage | Location |
    |------|-------|---------------|----------|
    | 8    | 32 GB | 240 GB        | Germany  |
  3. Reserve a public IPv4 address and IPv6 subnet and associate them with the provisioned instance.

  4. Assuming the desired domain name has already been registered, use the registrar’s DNS management interface to create or update an A record for the target subdomain (live.nixos.org in our case), pointing it to the IP address associated with the provisioned instance. Note that propagation may take up to 48 hours.

Phase III: Configure

Perhaps unintuitively, this was the easiest part. Thanks to the nixpkgs community’s contributions of high-quality modules[2] for production-ready services[3], configuring a fully self-hosted livestreaming service safely behind a reverse proxy was scarcely more involved than toggling enable. Consider the following Nix code:

{ config, ... }:

{
  networking.firewall.allowedTCPPorts = [ 80 443 ]
    ++ [ config.services.owncast.rtmp-port ];

  security.acme.defaults.email = "admin@example.com";

  services.owncast.enable = true;

  services.caddy = {
    enable = true;
    email = config.security.acme.defaults.email;
    virtualHosts = {
      "live.nixos.org".extraConfig = let
        owncastWebService = "http://${config.services.owncast.listen}:${
            toString config.services.owncast.port
          }";
      in ''
        encode gzip
        reverse_proxy ${owncastWebService}
      '';
    };
  };
}

The Owncast configuration demonstrates the seamless UX of a well-written NixOS module: it works completely out of the box. With all defaults, that one line installs and activates the service, initializes all necessary state (including a SQLite database, owncast.db, which stores important service configuration such as administrative credentials) in /var/lib/owncast, and binds the web server to 127.0.0.1:8080.
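
For clarity, those defaults can also be spelled out explicitly; the following sketch is behaviorally identical to the single enable line above (every option besides enable merely restates a module default and can be omitted):

{
  services.owncast = {
    enable = true;

    # Module defaults, restated here purely for illustration:
    listen = "127.0.0.1"; # interface the built-in web server binds to
    port = 8080;          # web UI port that Caddy proxies to
    rtmp-port = 1935;     # RTMP ingest port (1935 is the standard RTMP port and the module default)
  };
}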

The Caddy module is similarly loaded (with goodies). The only customization necessary here was to specify the domain we prepared (in the last step of the provisioning phase above) as a virtual host from which Caddy can forward client requests to the backend Owncast web server. For good measure, we also specified an email address for Caddy to use when setting up SSL on our behalf. This is the email address to use for account creation and correspondence from the SSL certificate authority, which is Let's Encrypt by default.

The remaining configuration simply opens ports in the system firewall for HTTP, HTTPS, and RTMP. Note that the RTMP port is also an Owncast module-specified default.

That’s the entirety of the NixOS configuration specific to our use case; all additional configuration (e.g., importing generated hardware configuration, enabling SSH and adding appropriate keys, defining Nix garbage collection rules, etc.) is simply boilerplate.
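
For the curious, that boilerplate might look roughly like the following sketch; the SSH key and the garbage-collection schedule are illustrative placeholders, not our exact values:

{
  imports = [ ./hardware-configuration.nix ];

  # Remote administration over SSH
  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAA... admin@example.com" # placeholder key
  ];

  # Periodic garbage collection to keep the modest disk from filling up
  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 30d";
  };
}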

Phase IV: Deploy

With the instance provisioned and the configuration prepared, it was about time to introduce the two, marking the beautiful beginning of a successful working relationship. This is generally a two-step process:

  1. Install NixOS on the instance. A provider may make this very easy (by providing a template image), moderately easy (by allowing a custom ISO to be uploaded), or hit-or-miss (by providing an image for a non-NixOS distribution, from which the system may be converted to NixOS with nixos-infect or NIXOS_LUSTRATE).

    Our provider didn’t do us too many favors in this regard. They were known to play nice with nixos-infected installations, however, so we went with that.

  2. Deploy the configuration. Note that several tools exist to automate this process: deploy-rs is wildly popular for flake-based configurations (a minimal sketch of such a configuration follows this list), and Cachix Deploy is a promising new offering (just to name two).

    We were quite frankly in a hurry, however, and found it perfectly acceptable to do it the old-fashioned way:

    1. Access the host over SSH.
    2. Clone the configuration.
    3. Run nixos-rebuild switch to build and activate the configuration (and make it the boot default).
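
For orientation, a flake-based configuration of the kind that tools like deploy-rs (or plain nixos-rebuild) consume might look roughly like the following sketch; the host name live, the file layout, and the pinned nixpkgs branch are hypothetical, not our actual repository:

{
  description = "Livestreaming host (illustrative sketch)";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-22.05";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.live = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ]; # the configuration from Phase III
    };
  };
}

With such a flake in place, step 3 above becomes nixos-rebuild switch --flake .#live.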

With this, we had a fully operational production server. For the finishing touches, we would make just a few program-specific tweaks.

Streaming service configuration

As previously mentioned, much of the Owncast configuration is stored as mutable state, enabling the service to be modified on-the-fly from the administrative web UI. A suggested procedure follows:

  1. Log in to the administrative web UI:

    • URL: https://<target FQDN>/admin/ (e.g., https://live.nixos.org/admin/)
    • User name: admin
    • Password: <stream key> (abc123 by default)
  2. In server settings, set a new Stream Key.

    Note that this is also the administrative web UI password.

  3. In general settings, modify instance details, tags, and social handles as appropriate.

  4. In video settings:

    • Add stream variants to enable adaptive bitrate streaming, accommodating users on networks of varying quality.

      The following table may be used as a guideline:

      | Encoder | Name | Resolution (WxH) | Bitrate        | Framerate    |
      |---------|------|------------------|----------------|--------------|
      | x264    | SD   | 854x480          | 800-1200 kbps  | 25 or 30 fps |
      | x264    | HD   | 1280x720         | 1200-1900 kbps | 25 or 30 fps |
      | x264    | FHD  | 1920x1080        | 1900-4500 kbps | 25 or 30 fps |

      Note that some services recommend a higher bitrate for HD:

      | Encoder | Name | Resolution (WxH) | Bitrate   | Framerate    |
      |---------|------|------------------|-----------|--------------|
      | x264    | HD   | 1280x720         | 3000 kbps | 25 or 30 fps |
      | x264    | FHD  | 1920x1080        | 4500 kbps | 25 or 30 fps |

      Our stress tests showed that the standard variants served from our VPS were reportedly adequate for most users but problematic for some in Japan, Thailand, and the UK. Based on these results, object storage and CDN caching are recommended.

      Additional notes:

      • For interactive live streams (e.g., lectures with Q&A sessions), consider decreasing the latency buffer to Low.
      • For further tuning, consider referring to a bitrate calculator (e.g., Bitrate Calc).

Streaming client configuration

With the server side all set, it was time to get streaming! The basic procedure follows:

  1. On a local workstation, install and launch OBS Studio (obs-studio in nixpkgs; a declarative sketch follows this list).

  2. In stream settings, set the following values:

    • Service: Custom...
    • Server: rtmp://<target FQDN> (e.g., rtmp://live.nixos.org)
    • Stream key: abc123 (by default)
  3. Go live.

  4. Confirm the stream at the designated URL (https://live.nixos.org in our case).
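
As an aside, step 1 need not be done imperatively on a NixOS workstation; a minimal sketch in module form, using nothing beyond the nixpkgs package already named above:

{ pkgs, ... }:

{
  # Install the OBS Studio streaming client declaratively
  environment.systemPackages = [ pkgs.obs-studio ];
}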

It worked? It worked. Crisis averted with time to spare. 🍹

Next Steps

Following a successful first lecture, the SoN infra team is concerned not only with maintenance and optimization of the infrastructure we’ve already deployed but also with innovation on a larger scale. The ultimate goal is to contribute a scalable, fully self-hosted, documented, NixOS-based, general-purpose, FLOSS[4] conference suite. In short, we’re working toward a turnkey solution that can accommodate use cases ranging from the SoN Public Lecture Series all the way to a full conference experience – namely NixCon.

In addition to main conference components (e.g., administrative tools, breakout rooms, etc.) and general convenience features (e.g., collaboration tools such as shared whiteboards), a notable priority is the improvement of our accessibility story, e.g., through integration of the Vosk speech recognition toolkit for livestream closed captioning/transcription/subtitling (in multiple languages).

To this end, we’re working together with other stalwart teams such as the SoN Special Project for Jitsi and have received guidance and insight from the gracious maintainers of the NixCon 2020 livestreaming infrastructure, the crucial project that our efforts will continue.

[1]

In the context of this article, ubiquity is scoped to societies that enjoy the free global exchange of information (i.e., unfiltered cross-border Internet traffic). Likewise, most refers to the majority of members of such a society when faced with choosing a method for applicable content publishing.

[2]

Service module quality, while service context-dependent, is generally evaluated with respect to such criteria as:

- sensibility of included defaults (i.e., such that the service can be started without requiring excessive configuration for common use cases)
- exposure of a balanced set of options (i.e., such that the service is sufficiently configurable but not at the expense of module maintainability)
- inclusion of a balanced set of [integration tests](https://nixos.org/guides/integration-testing-using-virtual-machines.html) (i.e., one that is sufficiently comprehensive for stable operation but not restrictively opinionated)

[3]

A production-ready service is characterized here as a service that is appropriately licensed and sufficiently stable, secure, performant, and featureful for a given use case. Note that in the context of FLOSS[4], active maintainership is additionally preferred.

[4]

While the term FLOSS indicates a politically neutral position, this project prefers software that is licensed to protect the four essential freedoms of users as defined in the Free Software Definition. As a bare minimum, the term open source is used here to describe software that fulfills the conditions of the Open Source Definition.