---
sidebar_position: 10
---

# Architecture

We begin the guide with a high-level overview of the architecture of the CDR Link deployment framework.

## Introduction

We deploy each CDR Link instance on a single Red Hat Enterprise Linux 9 (or compatible) host using rootless Podman. Each component of Link runs as a container, with the containers managed via systemd using Podman Quadlet. Components communicate over isolated networks that are also configured via Quadlet, and use slirp4netns user-mode networking for unprivileged network namespaces.
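For illustration, a minimal pair of Quadlet files for an isolated network and a container that joins it might look like the following. The `link-internal` network name, the component name, and the image reference are placeholders for this sketch, not the units shipped with CDR Link.

```ini
# ~/.config/containers/systemd/link-internal.network
# Sketch only: the network name and options are placeholders.
[Network]
NetworkName=link-internal
# Keep the network internal so containers on it are not reachable
# from outside the host.
Internal=true
```

```ini
# ~/.config/containers/systemd/example-component.container
# Sketch only: image and names are placeholders, not real CDR Link units.
[Unit]
Description=Example CDR Link component

[Container]
Image=registry.example.org/cdr/example-component:latest
ContainerName=example-component
# Join the Quadlet-defined isolated network above.
Network=link-internal.network

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After placing such files in the Quadlet directory, `systemctl --user daemon-reload` regenerates the corresponding `.service` units.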

Both discretionary access controls and SELinux are used to prevent lateral movement between containers should a container be compromised, with particular attention given to the WhatsApp and Signal messaging channels. No container runs its application as the in-container "root" user, which under rootless Podman is in any case mapped to an unprivileged user on the host.
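As a sketch of how this hardening can be expressed in a Quadlet `.container` unit, the keys below are the relevant ones; the specific values (UID, capability set, SELinux type name) are assumptions for illustration, not the settings CDR Link ships.

```ini
# Illustrative hardening excerpt for a [Container] section; the values
# are assumptions, not the shipped CDR Link configuration.
[Container]
# Run the application as a non-root user inside the container.
User=1001
# Disallow privilege escalation and drop all capabilities by default.
NoNewPrivileges=true
DropCapability=all
# A dedicated SELinux type (hypothetical name here) can further confine
# sensitive components such as the WhatsApp and Signal bridges.
SecurityLabelType=link_bridge.process
```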

The /home mount point on the host is encrypted using LUKS with a per-instance key to protect all user data at rest. Further, we use separate partitions for critical audit logging to ensure that a resource-exhaustion attack cannot prevent later investigation, and we automatically shut down instances when no space is available for audit logging.
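To verify this layout on a running host you can use standard tools; the mapping name `home` and the `/var/log/audit` mount point below are illustrative assumptions.

```bash
# Show which block devices back /home and whether they are LUKS-encrypted.
lsblk --fs

# Inspect the LUKS mapping (mapping name is an assumption for this sketch).
cryptsetup status home

# Confirm audit logging has its own partition, so other workloads
# cannot exhaust the space it needs.
findmnt /var/log/audit
df -h /var/log/audit
```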

## Components

The following diagram shows the dependency relationships between the components of CDR Link. If you use our Ansible role for deployment, these are configured automatically. The Link stack containers are OCI-compliant, so you can run them with alternatives such as Docker Compose; however, we would not be able to provide support for that setup. Our former Docker Compose deployment framework has been deprecated, and we intend to migrate all partners to the new rootless Podman setup for its improved reliability, performance, and security.

:::info
While the following diagram refers to `.service` units, these are the units generated by Podman Quadlet. The service definitions are in `.container` units within the Quadlet directory at `$HOME/.config/containers/systemd`.
:::

```mermaid
flowchart TD
    bridge-worker.service --> bridge-postgresql.service
    bridge-worker.service --> bridge-whatsapp.service
    bridge-worker.service --> signal-cli-rest-api.service

    link.service --> bridge-postgresql.service
    link.service --> bridge-worker.service

    opensearch-dashboards.service --> zammad-opensearch.service

    zammad-nginx.service --> zammad-railsserver.service
    zammad-nginx.service --> link.service

    zammad-storage.target{zammad-storage.target}
    zammad-storage.target --> zammad-postgresql.service
    zammad-storage.target --> zammad-redis.service
    zammad-storage.target --> zammad-memcached.service
    zammad-storage.target --> zammad-opensearch.service

    zammad-railsserver.service -.-> zammad-init.service

    zammad-init.service --> zammad-storage.target
    zammad-railsserver.service --> zammad-storage.target
    zammad-scheduler.service --> zammad-storage.target
    zammad-websocket.service --> zammad-storage.target

    link.target{link.target}
    link.target --> opensearch-dashboards.service
    link.target --> zammad-nginx.service
    link.target --> zammad-scheduler.service
    link.target --> zammad-websocket.service
```
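If you run the containers outside our Ansible role, each edge in the diagram corresponds to an ordinary systemd dependency declared in the relevant Quadlet unit. As a hedged sketch (the exact directives in the shipped units may differ), the edges from `link.service` could be expressed in `link.container` like this:

```ini
# ~/.config/containers/systemd/link.container (excerpt)
# Sketch of how the diagram's edges map to systemd dependencies;
# the shipped CDR Link units may declare these differently.
[Unit]
# Start link only once the bridge database and worker are up,
# and stop it if they are stopped.
Requires=bridge-postgresql.service bridge-worker.service
After=bridge-postgresql.service bridge-worker.service
```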