The home server I have has 4 Intel Gigabit NICs. For the past couple of years I have only been using 1 NIC to a main 24-port gigabit switch, as described in the Basic Network Setup below. The newer home server has 5 drives: 2 SSD system drives, 2 larger data-storage drives, and 1 drive used as a parity drive for offline RAID over the data-storage drives. Most of the time a single NIC provides sufficient bandwidth between the server and switch; however, the server has the capacity to saturate the bandwidth of a single gigabit NIC. To increase effective bandwidth there is an option to bond 2 or more NICs together to combine their bandwidth. This is called NIC bonding. To allow virtual machines access to a NIC, the NIC(s) must be set up in bridge mode. Furthermore, bridging NICs can also allow the NICs to act as a switch, obviously where more than one NIC is available. The Full Network Setup section below describes setting up the system with bonded and bridged NICs. All the setups described were found to operate well. I have since added 2.5GbE NIC cards to my servers and a 2.5GbE switch.

Some references are noted below in the Network Setup Links section.

Old Network Interfaces Setup

Netplan Old Setup with 802.3ad bonding (too complex, do not use)

As of 2021, instead of bonding, I have installed a 2.5Gb/s ethernet card in my main server, backup server and main desktop computers, and purchased a 5-port 2.5Gb/s switch. My 2 home Wifi access points (APs) are connected to the 2.5Gb/s switch too. I have had these older Wifi APs, Netgear EX6200/AC1200, since 2014, and they still serve my home well. They each have 5 x 1Gb/s ethernet ports, of which 1 is used to connect to the upstream switch. As much of my home network equipment is connected by CAT 6 ethernet cable, the dual APs are not very stretched. I have been lucky to avoid needing a Wifi mesh to achieve adequate Wifi coverage, which would clearly be inferior to the cabled system I have. I researched purchasing upgraded Wifi 6 APs, but those with a 2.5Gb/s ethernet port (or better) are still unreasonably expensive. 5Gb/s ethernet cards and switches are not readily available, and 10Gb/s cards and switches are still much more expensive and consume significantly more power at a 10Gb/s connection, meaning operating costs are probably also adversely affected.

I have not tested the performance of the 2.5Gb/s ethernet system; in all honesty I have not noticed much difference. This is perhaps not unexpected, as my internet connection is ~30Mb/s down and 12Mb/s up. My main server's primary storage is 3.5" spinning disks with peak performance at best <2Gb/s, and sustained average performance much lower than this. These hard disks are limited to their individual speeds, with no striping RAID to speed things up. The operating systems and VMs are on SSDs / NVMe with much higher achievable transfer rates, which should improve overall system performance, but these are not used for normal server storage, hence their speed advantage is not generally restricted by the network speed.

This is the setup for my new server with 4 built-in Intel NICs, running Ubuntu 20.04. To reduce the performance loss from the limited Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, use bridging to allow the server's virtual machines access to the NICs, and use the remaining 2 NICs effectively as a switch.

Ubuntu has for some time defaulted to netplan, whereas Debian 11 still defaults to the interfaces configuration style. Use sudo apt install netplan.io to install netplan on Debian. After configuring netplan, move the interfaces file to prevent overlapping configuration, e.g. sudo mv /etc/network/interfaces /etc/network/interfaces.old

To check available interfaces and names: ip link

Netplan does not require the bridge utilities to be installed; however, these utilities can be used to inspect the bridge: sudo apt install bridge-utils

Under netplan the bonded configuration does not need the ifenslave utility, as that utility depends upon ifupdown. Do not run sudo apt install ifenslave

The netplan website provides basic information. Another resource is cloud-init's Networking Config Version 2 documentation.

My new server NIC connectors (hardware) are configured as follows:

 USB2-1          LAN3(eno3)    LAN4(eno4)
 USB2-0          LAN1(eno1)    LAN2(eno2)     VGA

The new server board does not have any rear USB 3 ports. No great loss, I have never used them yet.

As instructed in the system-created yaml file /etc/netplan/50-cloud-init.yaml, create the file /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg (sudo vim /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg) and add the line: network: {config: disabled}
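This step can be scripted; a minimal sketch, where CFG_DIR defaults to a scratch directory for a safe dry run (on the real server set CFG_DIR=/etc/cloud/cloud.cfg.d and run as root):

```shell
# Write the cloud-init override that disables its network configuration.
# Defaults to a scratch directory so it can be rehearsed without root.
CFG_DIR="${CFG_DIR:-$(mktemp -d)}"
mkdir -p "$CFG_DIR"
printf 'network: {config: disabled}\n' > "$CFG_DIR/99-disable-network-config.cfg"
cat "$CFG_DIR/99-disable-network-config.cfg"   # network: {config: disabled}
```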

Edit the network configuration file: /etc/netplan/interfaces.yaml as follows:

  #setup network interfaces
  network:
    version: 2
    renderer: networkd
    ethernets:
      eno1:
        dhcp4: no
        dhcp6: no
        optional: true
      eno2:
        dhcp4: no
        dhcp6: no
        optional: true
      eno3:
        dhcp4: no
        dhcp6: no
        optional: true
      eno4:
        dhcp4: no
        dhcp6: no
        optional: true
      enp2s0:
        dhcp4: no
        dhcp6: no
        optional: true
    ##Setup the Bond
    #bonds:
    #  bond0:
    #    interfaces: [eno1, eno2]
    #    parameters:
    #      mode: balance-rr
    #Setup Bridge Interface
    bridges:
      br0:
        addresses: []
        interfaces: [eno1, eno2, eno3, eno4, enp2s0]
        parameters:
          stp: false
          forward-delay: 9
          hello-time: 2
          max-age: 12
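A typo in an interface name (such as en02 for eno2) is easy to miss in the yaml. A hedged sanity-check sketch, where YAML defaults to a generated sample so it can be rehearsed off the server (on the server set YAML=/etc/netplan/interfaces.yaml):

```shell
# Check that every NIC named in the bridge's interfaces list also has
# its own ethernets stanza in the netplan yaml.
YAML="${YAML:-$(mktemp)}"
if [ ! -s "$YAML" ]; then
    # Generate a sample file for the dry run.
    printf '      %s:\n' eno1 eno2 eno3 eno4 enp2s0 > "$YAML"
fi
report=""
for nic in eno1 eno2 eno3 eno4 enp2s0; do
    if grep -q "^[[:space:]]*${nic}:" "$YAML"; then
        report="$report $nic:present"
    else
        report="$report $nic:MISSING"
    fi
done
echo "$report"
```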

Some additional netplan commands:

  • sudo netplan --debug apply To apply any changes to the network configuration.
  • sudo netplan --debug generate To generate backend-specific configuration files.
  • sudo netplan try To try a new netplan configuration with automatic rollback.
  • journalctl -u systemd-networkd To check the networkd log.

Moving back to Debian, I am also moving away from netplan, back to interfaces.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
  iface lo inet loopback

# The primary network interface
#allow-hotplug enp1s0
#iface enp1s0 inet dhcp

auto enp1s0
  iface enp1s0 inet static
    #dns-nameservers only functional if resolvconf is installed

  iface enp1s0 inet6 static
    address 2001:470:1f2c:10d::17/64
    gateway 2001:470:1f2c:10d::3
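Back on the interfaces style, a bridge like the one in the netplan section could also be declared via the bridge-utils hooks. A minimal sketch, assuming the NIC names from the netplan example; DHCP on the bridge is an assumption here:

  auto br0
  iface br0 inet dhcp
      bridge_ports eno1 eno2 eno3 eno4 enp2s0
      bridge_stp off
      bridge_fd 9
      bridge_hello 2
      bridge_maxage 12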

The VM netplan yaml configuration file for a static LAN IP address, /etc/netplan/network.yaml, is as follows:

  network:
    version: 2
    renderer: networkd
    ethernets:
      enp1s0:              # interface name as reported by ip link inside the VM
        addresses: []
        nameservers:
          addresses: []

I also created a bridge definition file for libvirt as recommended by examples:

Create a file br0.xml (vim ~/br0.xml) and add the following to it:

  <network>
    <name>br0</name>
    <forward mode='bridge'/>
    <bridge name='br0'/>
  </network>

Next have libvirt add the new network and autostart it:

  • sudo virsh net-define ~/br0.xml
  • sudo virsh net-start br0
  • sudo virsh net-autostart br0
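A guest can then be attached to this network from its domain XML. A minimal sketch of the interface stanza (the virtio model is an assumption):

  <interface type='network'>
    <source network='br0'/>
    <model type='virtio'/>
  </interface>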

The qemu defined networks can be listed with the command: virsh net-list --all

You can list networks with networkctl list

  • To see bridge status information: brctl show
  • To see bond setup status: cat /proc/net/bonding/bond0
  • To list network configuration: ifconfig, ip a, ip route
  • Kernel IP routing table: route

NetworkManager is not required on a server, as the base ifconfig and related commands provide full functionality, and NetworkManager may conflict with the base configuration. Remove it with sudo apt remove network-manager. (To see information on system network start-up and ongoing status: sudo systemctl status NetworkManager, or more comprehensively journalctl -u NetworkManager.)

To restart networking after configuration changes: sudo systemctl restart systemd-networkd

Links relating to bridged and bonded Networking

A bridged network allows different networks to be connected, both physical (NICs or Wifi) and virtual, allowing a virtual machine to connect to a physical network and even be assigned a LAN IP address. Bonding combines physical network devices such as NICs to provide increased bandwidth or redundancy. Sadly, much of the information out there is either for older software versions or for other purposes.

Home Server Index

  • /mnt/shared/www/dokuwiki/data/pages/home_server/home_server_setup/network_setup.txt
  • Last modified: 2022-10-21 Fri wk42 07:50
  • by baumkp