Broadcom P2100G weak network performance within docker

From Thomas-Krenn-Wiki

While testing a Docker container, we noticed that network performance inside the container was very poor.

This article shows how to improve the network performance of a Broadcom P2100G network card under Linux - in our case Proxmox VE 8.2 with Linux kernel 6.8.

Network performance (default settings)

root@js-docker-01:/home/ansible# docker run -ti --rm curlimages/curl http://10.230.2.231:32768/test.file --output /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  5 1024M    5 55.1M    0     0  1447k      0  0:12:04  0:00:39  0:11:25 1791k

Only a bandwidth of about 1.8 MB/s is reached, even though this is a 100 Gbit/s network card. The problem does not occur with other Broadcom network cards (10 Gbit/s or 4x 25 Gbit/s).
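Before changing anything, it is worth checking which offload features are currently active on the card. A quick check on the hypervisor could look like this (interface name taken from this article; adjust it to your system):

```shell
# List the offload settings of the first port; the
# "generic-receive-offload" line shows whether GRO is active.
ethtool --show-offload enp1s0f0np0 | grep generic-receive-offload
```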

Adjustment of configuration

Adjusting the network configuration on the hypervisor solves the problem and raises the performance to a normal level.

root@hypervisor01:/# ethtool --offload enp1s0f0np0 generic-receive-offload off
root@hypervisor01:/# ethtool --offload enp1s0f1np1 generic-receive-offload off

Disabling generic receive offload (GRO) on both ports of the network card increases the performance within the container.
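After disabling GRO, you can verify on the hypervisor that the feature is really off on both ports (a quick check, using the interface names from this article):

```shell
# Expect "generic-receive-offload: off" for both bond members.
for ifc in enp1s0f0np0 enp1s0f1np1; do
    ethtool --show-offload "$ifc" | grep generic-receive-offload
done
```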

Network performance (fixed)

root@js-docker-01:/home/ansible# docker run -ti --rm curlimages/curl http://10.230.2.231:32768/test.file --output /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 41  9.7G   41 4199M    0     0   640M      0  0:00:15  0:00:06  0:00:09  660M

Persistent ethtool settings

Since ethtool adjustments are not persistent (they do not survive a reboot), the change must also be made in the /etc/network/interfaces file. Here is an example of a bond where both affected ports are adjusted with pre-up commands.

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0np0 enp1s0f1np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        mtu 9000
        pre-up /sbin/ethtool --offload enp1s0f0np0 generic-receive-offload off
        pre-up /sbin/ethtool --offload enp1s0f1np1 generic-receive-offload off
#CEPH-05
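After editing /etc/network/interfaces, the configuration can be applied without a reboot using ifupdown2, which Proxmox VE ships by default; the follow-up ethtool check is our suggestion for verifying the result:

```shell
# Reload the changed interfaces file without rebooting (ifupdown2).
ifreload -a
# Confirm that the pre-up commands took effect on the first port.
ethtool --show-offload enp1s0f0np0 | grep generic-receive-offload
```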


Author: Jonas Sterr

Jonas Sterr has been working for Thomas-Krenn for several years. Originally employed as a trainee in technical support and then in hosting (formerly Filoo), Mr. Sterr now mainly deals with the topics of storage (SDS / Huawei / Netapp), virtualization (VMware, Proxmox, HyperV) and network (switches, firewalls) in product management at Thomas-Krenn.AG in Freyung.


Translator: Alina Ranzinger

Alina has been working at Thomas-Krenn.AG since 2024. After her training as a multilingual business assistant, she joined Product Management as an assistant and is responsible for translating texts and for organising the department.


Related articles

Backup Proxmox VE VM with HA status stopped under Veeam
Known Issues Proxmox VE
No zvol device link - Proxmox VE Error during backup or snapshot