10.03.2021
Reading time: approx. 3 minutes

How we run a Mumble voice chat server in a Nomad Cluster

The idea

Tools for collaborative work or pair programming are more important than ever. In addition to these tools, video chat systems are often used for communication. They offer many advantages, but also have the big disadvantage of being very bandwidth-intensive. Often, a voice chat connection is sufficient for the task. A well-known solution (at least among gamers) is Mumble, a free, open-source voice communication system known for its high quality and low latency. My goal was to set up a Mumble server in our cloud infrastructure as a supplement to our existing communication channels. In this article I give a detailed manual based on my experience.

The Cloud

To manage the applications in our cloud infrastructure we decided on a stack based on HashiCorp and other well-known DevOps tools: Terraform, Consul, Vault, Nomad and Ansible. Last but not least, we use Traefik as our application proxy. We are very happy with this setup, as it makes managing our container-based applications a breeze.

Step 1: Mumble server in a Docker container

I used the GitHub project PHLAK/docker-mumble, which sets up the Mumble server, creates the necessary directories and users, and builds a ready-to-use Docker image for our Mumble server.
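Before wiring the image into a pipeline, it can be tried out locally. This is a minimal sketch; the image name `phlak/mumble`, the config path `/etc/mumble` and the default port 64738 are assumptions based on the project's documentation:

```shell
# Run the Mumble server locally in the foreground (hypothetical image name and paths)
docker run -it --rm \
    -p 64738:64738 \
    -p 64738:64738/udp \
    -v mumble-data:/etc/mumble \
    phlak/mumble
```

If a Mumble client can connect to localhost:64738, the image works and is ready for automation.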

Step 2: Build automation with GitLab CI/CD

In the buildApp stage, the image is built in a GitLab CI/CD pipeline and pushed to the GitLab Container Registry.

buildApp:
    stage: build
    before_script:
        # login to the gitlab docker registry
        - /usr/local/bin/dockerd &
        - docker -H unix:///var/run/docker.sock login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    image: docker:dind
    only:
        - master
    script:
        # Create a docker image
        - docker -H unix:///var/run/docker.sock build -t git.yourHost.com:yourPort/mumble:$CI_COMMIT_SHA .
        # Push the image to the gitlab registry
        - docker -H unix:///var/run/docker.sock push git.yourHost.com:yourPort/mumble:$CI_COMMIT_SHA
    tags:
        - "docker"

In the deployApp stage, I use our bastion host to create the mumble.nomad file. This file is then used to start a Nomad job and deploy a Mumble Docker container into our Nomad cluster.

deployApp:
    stage: deploy
    script:
        # Create the nomad group name
        - export NOMAD_GROUP="mumbleGroup"
        # Create the project name used in the nomad template
        - export NOMAD_PROJECT_NAME="mumble"
        # Use the nomad template to create a nomad file
        - ./nomad-template.sh > mumble.nomad
        # Run the nomad job
        - nomad job run mumble.nomad
    only:
        - master
    tags:
        - 'bastion'

Step 3: Creating a Nomad job specification

The nomad-template.sh script from the deployApp stage helps me create the job specification.
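The script itself is not shown in this article; a minimal sketch of how such a template script could look (the exact contents of the real script are an assumption) is a shell heredoc that expands the environment variables exported in the deployApp stage:

```shell
#!/bin/sh
# nomad-template.sh (hypothetical sketch): emit a Nomad job spec to stdout,
# expanding $NOMAD_PROJECT_NAME, $NOMAD_GROUP, $CI_REGISTRY_IMAGE and $CI_COMMIT_SHA.
render_nomad_job() {
    cat <<EOF
job "${NOMAD_PROJECT_NAME}" {
  datacenters = ["dc1"]
  type        = "service"

  group "${NOMAD_GROUP}" {
    count = 1

    task "${NOMAD_PROJECT_NAME}" {
      driver = "docker"
      config {
        image = "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}"
      }
    }
  }
}
EOF
}

# When invoked by the pipeline, print the rendered job spec
render_nomad_job
```

Redirecting the output (`./nomad-template.sh > mumble.nomad`) then produces the job file that `nomad job run` consumes.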

First, I define the datacenter where the job should be deployed and the type of the job.

datacenters = ["dc1"]
type = "service"

Next, I provide a definition of how to perform updates.

update {
    max_parallel     = 1
    canary           = 1
    min_healthy_time = "30s"
    healthy_deadline = "9m"
    auto_revert      = true
    auto_promote     = true
}

This is followed by the configuration for the group and the task. The task essentially represents the docker run command: I define which image should be used and which port should be exposed.

group "$NOMAD_GROUP" {
    count = 1
    # the task contains all steps needed to start a new service
    task "$NOMAD_PROJECT_NAME" {
        # create a docker container with the given image
        driver = "docker"
        config {
            image = "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
            force_pull = true
            auth {
                username = "$CI_REGISTRY_USER"
                password = "$CI_REGISTRY_PASSWORD"
            }
            # exposed ports of this container (using the map we can define a name for the port to reuse later)
            port_map {
                app = 64738
            }
        }

Next, I register our container in Consul and add it to Traefik so it is accessible from outside our network.

  service {
    # port to use for consul
    port = "app"
    # config for traefik
    tags = [
      "traefik.enable=true",
      "traefik.tags=service",
      "traefik.tcp.routers.$NOMAD_PROJECT_NAME-tcp.rule=HostSNI(\`$NOMAD_PROJECT_NAME.yourDomain.com\`)",
      "traefik.tcp.routers.$NOMAD_PROJECT_NAME-tcp.entrypoints=voicechat-tcp",
      "traefik.tcp.routers.$NOMAD_PROJECT_NAME-tcp.tls.passthrough=true",
      "traefik.udp.routers.$NOMAD_PROJECT_NAME-udp.entrypoints=voicechat-udp",
    ]
  }

Finally, I allocate resources for the container and set up the restart parameters of the task in case of failure.

resources {
    cpu    = 500  # MHz
    memory = 1024 # MB
    network {
        mbits = 100
        port "app" {}
    }
}

restart {
    interval = "30m"
    attempts = 2
    delay    = "15s"
    mode     = "fail"
}

Step 4: Add the TCP and UDP entry points to Traefik

Mumble uses TCP and UDP to transfer data between the server and the client. I added the new entry points (voicechat-tcp and voicechat-udp) and the ports to the docker-compose configuration of Traefik.

command:
    - "--entrypoints.voicechat-tcp.address=:64738"
    - "--entrypoints.voicechat-udp.address=:64738/udp"

ports:
    - "64738:64738"
    - "64738:64738/udp"

Conclusion

After these four steps I am running an auto-deployed Mumble server in our cloud. This method works for all kinds of apps and shows the beauty and elegance of cloud-based infrastructure.

Martin Pfeffer
Developer

As a software developer, Martin builds backend solutions in Java and Kotlin. He is part of an agile squad at openFORCE.
