Network Setup
The home server I have has 4 Intel Gigabit NICs. For the past couple of years I have only been using 1 NIC to a main 24 port gigabit switch. This is described in the Basic Network Setup below. The newer home server has 5 drives: 2 SSD system drives, 2 larger data storage drives and 1 drive used as a parity drive for offline RAID covering the data storage drives. Most of the time a single NIC will provide sufficient bandwidth between the server and switch, however the server has the capacity to saturate the bandwidth of a single gigabit NIC. To increase effective bandwidth there is the option to bond 2 or more NICs together to combine their bandwidth; this is called NIC bonding. To give virtual machines access to a NIC, the NIC(s) must be set up in bridge mode. Furthermore, bridging NICs can also allow the NICs to act as a switch, obviously only where more than one NIC is available. The Full Network Setup section below describes setting up the system with bonded and bridged NICs. All the setups described were found to operate well.
Some references are noted below under Ubuntu Network Setup Links.
Archived Network Setups
Full Network Setup 20.04
As of 2021, instead of bonding NICs on my server I have installed a 2.5Gb/s Ethernet card in my main server, backup server and main desktop computers, and I have purchased a 5-port 2.5Gb/s switch. My 2 home WiFi access points (APs) are also connected to the 2.5Gb/s switch. I have had these older WiFi APs since 2014, Netgear EX6200/AC1200, and they still serve my home well. They each have 5 x 1Gb/s Ethernet ports, of which 1 is used to connect to the upstream switch. As much of my home network equipment is connected by CAT 6 Ethernet cable, the dual APs are not very stretched. I have been lucky to avoid the whole WiFi mesh approach to achieve adequate WiFi coverage, which is clearly inferior to the cabled system I have. I researched purchasing upgraded WiFi 6 APs, but those with a 2.5Gb/s Ethernet port (or better) are still unreasonably expensive. 5Gb/s Ethernet cards and switches are not readily available, and 10Gb/s cards and switches are still much more expensive and consume significantly more power at a 10Gb/s connection, meaning operating costs are probably also adversely affected.
I have not tested the performance of the 2.5Gb/s Ethernet system; in all honesty I have not noticed much difference. This is perhaps not unexpected, as my internet connection is ~30Mb/s down and 12Mb/s up, and my main server's primary storage is 3.5" spinning disk with peak performance at best under 2Gb/s (a typical 3.5" disk sustains roughly 150-250MB/s, i.e. about 1.2-2Gb/s), with sustained and average performance much lower than this. These hard disks are limited to their individual speed; there is no interleaved RAID to speed things up. The operating systems and VMs are on SSDs / NVMe with much higher achievable transfer rates, which should improve overall system performance, but they are not used for normal server storage purposes, hence the speed advantages there are not generally restricted by the network speed.
This is the setup for my new server with 4 built-in Intel NICs, running Ubuntu 20.04. To avoid the performance limit of a single NIC's Ethernet bandwidth I propose to use 2 NICs in a bonded configuration, use bridging to give the server's virtual machines access to the NICs, and also use the remaining 2 NICs effectively as a switch.
To check available interfaces and names: ip link
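On this server board the four onboard NICs appear as eno1 to eno4. As a rough sketch only (device names, MAC addresses and states will differ on other hardware), the output looks something like:

$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN mode DEFAULT ...
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP mode DEFAULT ...
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP mode DEFAULT ...
4: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP mode DEFAULT ...
5: eno4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP mode DEFAULT ...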
Netplan does not require the bridge utilities to be loaded however these utilities can be used upon the bridge: sudo apt install bridge-utils
Under netplan the bonded configuration does not need the ifenslave utility, as that utility depends upon ifupdown. Do not run sudo apt install ifenslave.
The netplan website, netplan.io, has basic information. Another resource is the cloud-init documentation Networking Config Version 2.
My new server NIC connectors (hardware) are configured as follows:
IPMI_LAN USB2-1 LAN3(eno3) LAN4(eno4) USB2-0 LAN1(eno1) LAN2(eno2) VGA
The new server board does not have any back USB3 ports. No great loss, never used them yet.
As instructed in the system-created yaml file /etc/netplan/50-cloud-init.yaml, create the file /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg (sudo vim /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg) and add the line:
network: {config: disabled}
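If you prefer not to open an editor, the same file can be created in one step; a minimal sketch using tee, with the same path and content as above:

echo 'network: {config: disabled}' | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg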
Edit the network configuration file: /etc/netplan/interfaces.yaml
as follows:
network: #setup network interfaces
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
      optional: true
    eno2:
      dhcp4: no
      dhcp6: no
      optional: true
    eno3:
      dhcp4: no
      dhcp6: no
      optional: true
    eno4:
      dhcp4: no
      dhcp6: no
      optional: true
  #Setup the Bond
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: balance-rr
  #Setup Bridge Interface
  bridges:
    br0:
      addresses: [192.168.1.10/24]
      interfaces: [bond0, eno3, eno4]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
      parameters:
        stp: off
        forward-delay: 9
        hello-time: 2
        max-age: 12
Some additional netplan commands:
- sudo netplan --debug apply : to apply any changes to the network configuration.
- sudo netplan --debug generate : to generate the backend specific configuration files.
- sudo netplan try : to try a new netplan configuration with automatic roll back.
- journalctl -u systemd-networkd : to check the networkd log.
Full VM Network Setup 20.04
The VM netplan yaml configuration file for static LAN IP address: /etc/netplan/network.yaml
as follows:
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      addresses: [192.168.1.12/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
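Inside the VM the new configuration can then be tested and checked; a short sketch, assuming the interface name ens3 and the address from the file above:

sudo netplan try
ip -br addr show ens3    # should report 192.168.1.12/24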
I also created a bridge definition file for libvirt as recommended by netplan.io examples:
Create a file br0.xml, vim ~/br0.xml
and add following to it:
<network>
  <name>br0</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
Next have libvirt add the new network and autostart it:
- sudo virsh net-define ~/br0.xml
- sudo virsh net-start br0
- sudo virsh net-autostart br0
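With the br0 network defined and autostarted, guests simply reference it by name. As an illustration only (the guest name testvm, the ISO path and the sizes are hypothetical, not part of this setup):

# attach a new guest to the bridged network at install time
sudo virt-install --name testvm --memory 2048 --vcpus 2 \
  --disk size=20 --cdrom /path/to/ubuntu-20.04-live-server-amd64.iso \
  --network network=br0

# or add an interface on the br0 network to an existing guest
sudo virsh attach-interface --domain testvm --type network \
  --source br0 --model virtio --config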
The qemu defined networks can be listed with the command: virsh net-list --all
You can list networks with networkctl list
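On the bonded and bridged setup above, networkctl list shows the physical NICs and bond0 as enslaved and br0 as the routable interface. A rough sketch of what this looks like (index numbers and states will vary):

$ networkctl list
IDX LINK   TYPE     OPERATIONAL SETUP
  1 lo     loopback carrier     unmanaged
  2 eno1   ether    enslaved    configured
  3 eno2   ether    enslaved    configured
  4 eno3   ether    enslaved    configured
  5 eno4   ether    enslaved    configured
  6 bond0  bond     enslaved    configured
  7 br0    bridge   routable    configured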
Some helpful commands and comments:
- To see bridge status information: brctl show
- To see bond setup status: cat /proc/net/bonding/bond0 (an abridged example of this output is sketched just after this list)
- To list network configuration: ifconfig, ip a, ip route
- Kernel IP routing table: route
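For the bonded configuration above, the bond0 status file is the quickest way to confirm the bonding mode and that both slave NICs are up. An abridged sketch of what it reports (the driver version line and the counters will differ on your kernel):

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100

Slave Interface: eno1
MII Status: up
Speed: 1000 Mbps
Duplex: full

Slave Interface: eno2
MII Status: up
Speed: 1000 Mbps
Duplex: full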
NetworkManager is not required on a server, as netplan with systemd-networkd and the base command line tools provide full functionality, and NetworkManager may conflict with that configuration. Remove it with sudo apt remove network-manager
. (To see information on system network start-up and ongoing status: sudo systemctl status NetworkManager
or more comprehensively journalctl -u NetworkManager
)
To restart the network backend: sudo systemctl restart systemd-networkd
Ubuntu Network Setup Links
Links relating to bridged and bonded Networking
A bridged network allows different networks to be connected, both physical, like NICs or WiFi, and virtual, allowing a virtual machine to connect to a physical network and even be assigned a LAN IP address. Bonding allows physical networking devices such as NICs or WiFi adapters to be combined to give increased bandwidth or redundancy. Sadly there seems to be a lot of information out there that is either for older versions of the software or for other purposes.
- Debian wiki BridgeNetworkConnections and Bonding
- Serverfault How do I put a bridge on top of a bonded interface?
- Gentoo Home Router
- centos.org 41.5.2.1. bonding Module Directives
- kernel.org Linux Ethernet Bonding Driver HOWTO
- Thomas Krenn Link Aggregation and LACP basics
- Wikipedia Link aggregation