Deploying a DHCP server in Proxmox

In the previous article in this series, we used Ansible to create a set of virtual networks in Proxmox, but these networks cannot yet be used properly without a DHCP server to configure the network settings of future client machines.

The first decision we need to make is where to install the DHCP server. One option would be to install a DHCP server on the Proxmox hypervisor itself and configure it to serve DHCP clients on all networks. While this would technically work, it wouldn’t, in my opinion, be the best way to do it, since we would be bloating the hypervisor itself with additional services. A better way is to spin up a separate machine which runs this DHCP server and only this DHCP server, which gives us more control over the service.

There is still one potential issue with this approach: spinning up a full virtual machine just to serve DHCP seems like overkill. It feels like something which could be accomplished with something much lighter, and Proxmox provides the perfect way to solve that with its native LXC support.

LXC is a low-level Linux container runtime with much less overhead than a virtual machine, and the native support in Proxmox means that deploying LXC containers, as well as managing LXC container images, will be easy to automate.

Managing LXC images

The first thing we need to worry about here is where to store these images. By default, container images/templates are stored in a local Proxmox storage provider provisioned on the hypervisor’s root filesystem, along with ISO images and other kinds of data.

I prefer having a dedicated storage provider just for this purpose, so let’s create one.

The first step here is to create a directory to map this provider to, directly on the ZFS pool which is mounted on /hdd_storage:

root@ghima-node01:/hdd_storage# mkdir lxc_templates
root@ghima-node01:/hdd_storage#

Once that is done, we need to add this as a storage back-end in Proxmox. We do that by going to the storage page of our data-center and clicking the Add button:

[Screenshot: the data-center storage page in the Proxmox UI, with the Add button]


Then we add a Directory-type storage provider with the ID lxc_templates, map it to the directory created above (/hdd_storage/lxc_templates), and set the content type to be exclusively Container template.

[Screenshot: the Add: Directory dialog with ID lxc_templates, directory /hdd_storage/lxc_templates, and content type Container template]
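
For the record, the same storage provider can be created from the hypervisor’s shell with pvesm, which should be equivalent to the clicks above:

root@ghima-node01:~# pvesm add dir lxc_templates --path /hdd_storage/lxc_templates --content vztmpl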


Now we have a place to store our LXC images. How do we actually get images into it, and how do we make sure we can do that in an automated way?

It turns out there is an Ansible module, community.general.proxmox_template, which does exactly what we need here, and this is an example of what downloading a single LXC template is supposed to look like:

- name: Download proxmox appliance container template
  community.general.proxmox_template:
    node: uk-mc02
    api_user: root@pam
    api_password: 1q2w3e
    api_host: node1
    storage: local
    content_type: vztmpl
    template: ubuntu-20.04-standard_20.04-1_amd64.tar.gz
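
For comparison, the manual equivalent on the hypervisor would look something like this; the Ansible module simply gives us an automatable version of the same download:

# pveam download local ubuntu-20.04-standard_20.04-1_amd64.tar.gz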

We also need a list of valid templates available in the Proxmox LXC image repository. We can obtain this list by running the command below on the hypervisor:

# pveam available --section system
system          alpine-3.12-default_20200823_amd64.tar.xz
system          alpine-3.13-default_20210419_amd64.tar.xz
system          alpine-3.14-default_20210623_amd64.tar.xz
system          archlinux-base_20210420-1_amd64.tar.gz
system          centos-7-default_20190926_amd64.tar.xz
system          centos-8-default_20201210_amd64.tar.xz
system          debian-9.0-standard_9.7-1_amd64.tar.gz
system          debian-10-standard_10.7-1_amd64.tar.gz
system          devuan-3.0-standard_3.0_amd64.tar.gz
system          fedora-33-default_20201115_amd64.tar.xz
system          fedora-34-default_20210427_amd64.tar.xz
system          gentoo-current-default_20200310_amd64.tar.xz
system          opensuse-15.2-default_20200824_amd64.tar.xz
system          ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system          ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system          ubuntu-20.04-standard_20.04-1_amd64.tar.gz
system          ubuntu-20.10-standard_20.10-1_amd64.tar.gz
system          ubuntu-21.04-standard_21.04-1_amd64.tar.gz
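
If this list ever looks stale, the local copy of the appliance index can be refreshed first:

# pveam update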

As mentioned in the module’s documentation, before we can proceed with using it we first must satisfy a couple of conditions, both locally and on the remote system:

  • Locally, we must install the community.general Ansible collection.
  • Remotely, we must have pip available, and then use it to install the proxmoxer and requests Python libraries.

And as always, we have to make sure these steps can be automated easily in the future.

Locally

Let’s start with the Ansible collection setup. According to the Ansible documentation, this can be done via a requirements.yaml file, so let’s create the following file:

collections:
  - community.general

And then install the dependencies listed in it using the following command:

$ ansible-galaxy collection install -r requirements.yaml
Process install dependency map
Starting collection install process
Installing 'community.general:3.8.0' to '~/.ansible/collections/ansible_collections/community/general'
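
We can double-check that the collection is now visible to Ansible:

$ ansible-galaxy collection list community.general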

Remotely

We also have to take care of installing a few dependencies on our Proxmox hypervisor; let’s use Ansible to automate that as well. For that purpose, we create an Ansible role called ghima-dependencies, which will mainly focus on installing dependencies on our hypervisor. Its content for now is as follows:

---

- name: Update all packages to their latest version
  apt:
    name: "*"
    state: latest

- name: Install the package "python3-pip"
  apt:
    name: python3-pip

- name: Install python libraries
  pip:
    name:
      - requests
      - proxmoxer
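
Once the role has run, a quick one-liner on the hypervisor confirms that both libraries are importable:

root@ghima-node01:~# python3 -c "import proxmoxer, requests; print('ok')"
ok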

Once these dependencies are installed, our environment is ready to start downloading LXC templates. Let’s create another Ansible role to handle downloading these images, and let’s call it ghima-lxc-template-downloader (the Proxmox connection details are passed in as variables):

---

- name: Download proxmox ubuntu container template
  community.general.proxmox_template:
    node: "{{ node }}"
    api_user: "{{ api_user }}"
    api_password: "{{ api_password }}"
    api_host: "{{ api_host }}"
    storage: lxc_templates
    content_type: vztmpl
    template: ubuntu-21.04-standard_21.04-1_amd64.tar.gz

- name: Download proxmox debian container template
  community.general.proxmox_template:
    node: "{{ node }}"
    api_user: "{{ api_user }}"
    api_password: "{{ api_password }}"
    api_host: "{{ api_host }}"
    storage: lxc_templates
    content_type: vztmpl
    template: debian-11-standard_11.0-1_amd64.tar.gz

- name: Download proxmox alpine container template
  community.general.proxmox_template:
    node: "{{ node }}"
    api_user: "{{ api_user }}"
    api_password: "{{ api_password }}"
    api_host: "{{ api_host }}"
    storage: lxc_templates
    content_type: vztmpl
    template: alpine-3.14-default_20210623_amd64.tar.xz

Once we execute the updated Ansible playbook, we see the expected result reflected in the Proxmox UI:

[Screenshot: the downloaded container templates listed under the lxc_templates storage in the Proxmox UI]
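
The same result can also be confirmed from the hypervisor’s shell:

# pveam list lxc_templates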


Creating LXC containers

Now that we have templates available locally, we need to create the container on which our DHCP server will be running. We have a choice to deploy our containers using either Terraform or Ansible, but since we are already using Terraform for managing infrastructure (our virtual machines), let’s use it for our containers as well, and keep Ansible for configuration management.

For creating the container which will host the DHCP server, the following Terraform resource will do the job:

resource "proxmox_lxc" "dhcp-server" {

  target_node  = "ghima-node01"
  hostname     = "dhcp-server"
  ostemplate   = "lxc_templates:vztmpl/debian-11-standard_11.0-1_amd64.tar.gz"
  password     = var.cloudinit_password
  unprivileged = true
  onboot = true
  start = true

  rootfs {
    storage = var.ssd_storage
    size    = "2G"
  }

  nameserver = var.cloudinit_dns

  network {
    name   = "eth0"
    bridge = var.lan_if
    ip     = "192.168.4.254/24"
    gw     = "192.168.4.1"
    firewall = false
  }

  network {
    name   = "eth1"
    bridge = var.dev_if
    ip     = "192.168.5.254/24"
  }

  network {
    name   = "eth2"
    bridge = var.prod_if
    ip     = "192.168.6.254/24"
  }

  network {
    name   = "eth3"
    bridge = var.sec_if
    ip     = "192.168.7.254/24"
  }

  network {
    name   = "eth4"
    bridge = var.sandbox_if
    ip     = "192.168.7.254/24"
  }

}
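
Applying this with the usual Terraform workflow creates the container (the plan file name below is just my convention), and pct on the hypervisor should then list it:

$ terraform plan -out dhcp-server.plan
$ terraform apply dhcp-server.plan
root@ghima-node01:~# pct list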

This results in a functioning Debian container. Now we need to install a DHCP server on it and configure it to run. This is again a task for Ansible; let’s start by creating a role called debian-dhcp-server where we keep all DHCP-related settings:

---


- name: Update all packages to their latest version
  apt:
    name: "*"
    state: latest

- name: Install the package "isc-dhcp-server"
  apt:
    name: isc-dhcp-server



- name: Creating the dhcp-server configuration file
  copy:
    dest: "/etc/dhcp/dhcpd.conf"
    content: |
      ddns-update-style none;


      option domain-name-servers 8.8.8.8;

      default-lease-time -1;
      max-lease-time -1;

      authoritative;

      subnet 192.168.5.0 netmask 255.255.255.0
      {
              interface eth1;
              option routers 192.168.5.1;
              option subnet-mask 255.255.255.0;
              option broadcast-address 192.168.5.255;
              max-lease-time 7200;
              range 192.168.5.150 192.168.5.250;
      }

      subnet 192.168.6.0 netmask 255.255.255.0
      {
              interface eth2;
              option routers 192.168.6.1;
              option subnet-mask 255.255.255.0;
              option broadcast-address 192.168.6.255;
              max-lease-time 7200;
              range 192.168.6.150 192.168.6.250;
      }

      subnet 192.168.7.0 netmask 255.255.255.0
      {
              interface eth3;
              option routers 192.168.7.1;
              option subnet-mask 255.255.255.0;
              option broadcast-address 192.168.7.255;
              max-lease-time 7200;
              range 192.168.7.150 192.168.7.250;
      }

      subnet 192.168.8.0 netmask 255.255.255.0
      {
              interface eth4;
              option routers 192.168.8.1;
              option subnet-mask 255.255.255.0;
              option broadcast-address 192.168.8.255;
              max-lease-time 7200;
              range 192.168.8.150 192.168.8.250;
      }



- name: Creating the isc-dhcp-server defaults file
  copy:
    dest: "/etc/default/isc-dhcp-server"
    content: |
      INTERFACESv4="eth1 eth2 eth3 eth4"

- name: Restart the isc-dhcp-server service
  ansible.builtin.service:
    name: isc-dhcp-server
    state: restarted
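
Before relying on the restart, it doesn’t hurt to syntax-check the rendered configuration from inside the container; dhcpd ships a test mode for exactly that:

root@dhcp-server:~# dhcpd -t -cf /etc/dhcp/dhcpd.conf
root@dhcp-server:~# systemctl status isc-dhcp-server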

The tasks defined above should do the job. Now let’s create a test container to verify that this will indeed work; here is the Terraform definition of said container:

resource "proxmox_lxc" "test-ct" {

  target_node  = "ghima-node01"
  hostname     = "test-ct"
  ostemplate   = "lxc_templates:vztmpl/debian-11-standard_11.0-1_amd64.tar.gz"
  password     = var.cloudinit_password
  unprivileged = true
  onboot = true
  start = true

  rootfs {
    storage = var.ssd_storage
    size    = "2G"
  }

  ssh_public_keys = var.ssh_key
  nameserver = var.cloudinit_dns


  network {
    name   = "eth1"
    bridge = var.dev_if
    ip     = "dhcp"
  }

  network {
    name   = "eth2"
    bridge = var.prod_if
    ip     = "dhcp"
  }

  network {
    name   = "eth3"
    bridge = var.sec_if
    ip     = "dhcp"
  }

  network {
    name   = "eth4"
    bridge = var.sandbox_if
    ip     = "dhcp"
  }

}
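
Once the container is up, we can confirm from the shell that leases are actually being handed out; on a Debian server the lease database lives in /var/lib/dhcp/dhcpd.leases, and on the client we simply inspect the interfaces:

root@dhcp-server:~# cat /var/lib/dhcp/dhcpd.leases
root@test-ct:~# ip -4 addr show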

And it looks like things are working as expected!

[Screenshot: the test container with DHCP-assigned addresses on all interfaces]