Network Setup
Most servers have more than one network connection, although one is technically enough. Routers, by definition, need at least two network connections.
Debian Linux supports multiple methods to define network connections:
- /etc/network/interfaces (ifupdown)
- NetworkManager
- systemd-networkd
- netplan
As usual, each has its own pros and cons. Care also needs to be taken not to have conflicting methods operating at the same time, particularly on the same interface.
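A quick way to see which methods are in play on a given machine is to query systemd and look for the configuration files (a sketch; the unit names are the standard ones, but check your distribution):

```shell
# List the usual suspects and whether their systemd units are enabled.
# Falls back to "not-installed" when the unit (or systemctl) is absent.
for svc in NetworkManager systemd-networkd networking; do
  state=$(systemctl is-enabled "$svc" 2>/dev/null) || state="not-installed"
  printf '%-18s %s\n' "$svc" "$state"
done

# netplan and ifupdown are driven by files, not units of their own:
ls /etc/netplan/*.yaml 2>/dev/null || echo "no netplan configs"
test -s /etc/network/interfaces && echo "/etc/network/interfaces present" || echo "no /etc/network/interfaces"
```

If more than one of these reports active on the same interface, that is the conflict to resolve first.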
References
- How to Configure Network on Debian 12: A Guide for Beginners (systemd-networkd)
- Network Configuration with Systemd-networkd on Ubuntu/Debian (systemd-networkd)
- Debian Wiki - Network Configuration (All methods)
- Debian Wiki - Bridge Network Connections (/etc/network/interfaces method)
- All About the Debian /etc/network/interfaces File: The Comprehensive Guide (/etc/network/interfaces method)
- Working with systemd-networkd (systemd-networkd)
- Creating a bridge for virtual machines using systemd-networkd (systemd-networkd)
Archived Network Setups
Full Network Setup
As of 2021, instead of bonding, I have installed a 2.5Gb/s Ethernet card in my main server and backup server. My current main desktop computer comes with 2.5Gb/s as standard. I originally purchased a 5-port 2.5Gb/s switch, but upgraded to an 8-port version.
My 2 home WiFi 6 access points (APs), Netgear WAX206-100AU, are also connected to the 2.5Gb/s switch via their 2.5Gb/s Ethernet ports and have 4 x 1Gb/s ports downstream. Interestingly, the WAX206 went EOL (End of Life) on 2023-02-01, less than 2 years, perhaps only 1 year, after the product was available for purchase. Netgear indicate they support their products for a minimum of 5 years after EOL, so until 2028-02-01, only about 3 years as of writing this.

Last year (2022) I upgraded from my older WiFi APs, Netgear EX6200/AC1200, which I got around 2014. They each have 5 x 1Gb/s Ethernet ports, of which one is used to connect to the upstream switch. As much of my home network equipment is connected by CAT 6 Ethernet cable, the dual APs are not very stretched. I have been lucky to avoid the whole WiFi mesh approach to achieve adequate WiFi coverage, which is clearly inferior to the cabled system I have.

5Gb/s Ethernet cards and switches are not readily available, even now at the end of 2024. I have been very happy with these Netgear APs. Sadly there do not seem to be any similar products on the market at this time. I suspect there is only a limited market for these devices, with most home users going for the inferior / overpriced mesh router option, as it does not require Ethernet cabling, and business users going with expensive AP endpoints with only a single PoE 2.5Gb/s Ethernet connection. The 10Gb/s cards and switches are still much more expensive and consume significantly more power at 10Gb/s connection speeds, meaning operating costs are probably also adversely affected.
I have not tested the performance of the 2.5Gb/s Ethernet system; in all honesty I have not noticed much difference. This is perhaps not unexpected, as my Internet connection is ~270Mb/s down and 22Mb/s up. My main server's primary storage is 3.5″ spinning disk with peak performance at best <2Gb/s, and sustained average performance much lower than this. These hard disks are limited to individual drive performance, with no interleaving RAID to speed things up. The operating systems and VMs are on SSDs / NVMe with much higher achievable transfer rates, which should improve overall system performance, but these are not used for normal server storage purposes, hence transfer speeds here are not generally restricted by the network speed.
This is the setup for my new server with 4 built-in Intel NICs, running Ubuntu 20.04. To reduce the performance penalty of limited Ethernet bandwidth when using only one NIC, I propose to use 2 NICs in a bonded configuration, use bridging to allow the server's virtual machines access to the NICs, and use the remaining 2 NICs effectively as a switch.
Ubuntu has for some time defaulted to netplan.io, whereas Debian 11 still defaults to the interfaces configuration style. Use sudo apt install netplan.io
to install netplan on Debian. After configuring netplan, apply it with sudo netplan apply (or sudo netplan try, which rolls back automatically if connectivity is lost), and move the interfaces file aside to prevent overlapping configuration, e.g. sudo mv /etc/network/interfaces /etc/network/interfaces.old
To check available interfaces and names: ip link
Netplan does not require the bridge utilities to be installed; however, these utilities can still be used to inspect the bridge: sudo apt install bridge-utils
Under netplan the bonded configuration does not need the ifenslave utility, as that utility depends on ifupdown. Do not run sudo apt install ifenslave
The netplan website with basic information netplan.io. Also another resource is from cloud-init Networking Config Version 2.
My new server NIC connectors (hardware) are configured as follows:
IPMI_LAN USB2-1 LAN3(eno3) LAN4(eno4) USB2-0 LAN1(eno1) LAN2(eno2) VGA
The new server board does not have any rear USB 3 ports. No great loss; I have never used them anyway.
As instructed in the system-created YAML file /etc/netplan/50-cloud-init.yaml
, create the file sudo vim /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
and add the line network: {config: disabled}
Edit the network configuration file: /etc/netplan/interfaces.yaml
as follows:
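The original file contents are not reproduced here. A minimal sketch of a netplan layout matching the plan described above (eno1/eno2 bonded as bond0, bridged as br0 for VM access, with eno3/eno4 on the same bridge acting as extra switch ports) might look like the following; the addresses and bond mode are assumptions, not the original values:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1: {}
    eno2: {}
    eno3: {}
    eno4: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad            # assumed bond mode; requires LACP on the switch
  bridges:
    br0:
      # bond0 carries the uplink; eno3/eno4 on the same bridge act as
      # additional switch ports for VMs and other devices.
      interfaces: [bond0, eno3, eno4]
      addresses: [192.168.1.17/24]   # placeholder address
      gateway4: 192.168.1.1          # placeholder gateway
      nameservers:
        addresses: [192.168.1.1]
```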
Full VM Network Setup
Moving back to Debian, I am also moving away from netplan, back to interfaces.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#allow-hotplug enp1s0
#iface enp1s0 inet dhcp
auto enp1s0
iface enp1s0 inet static
    address 192.168.1.17/24
    gateway 192.168.1.1
    #dns-nameservers 192.168.1.1  # only functional if resolvconf is installed

iface enp1s0 inet6 static
    address 2001:470:1f2c:10d::17/64
    gateway 2001:470:1f2c:10d::3
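For VM bridging under the interfaces method (as covered by the Debian wiki bridge link in the references), the static stanza above can be replaced by a bridge that enslaves the physical NIC. A sketch, assuming bridge-utils is installed and reusing the same address:

```
# enp1s0 must no longer carry an address of its own; the bridge takes it over.
auto br0
iface br0 inet static
    address 192.168.1.17/24
    gateway 192.168.1.1
    bridge_ports enp1s0
    bridge_stp off
    bridge_fd 0
```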
The VM netplan yaml configuration file for static LAN IP address: /etc/netplan/network.yaml
as follows:
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      addresses: [192.168.1.12/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
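Note that newer netplan releases deprecate gateway4 in favour of a routes entry; an equivalent form of the same configuration would be:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      addresses: [192.168.1.12/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```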
I also created a bridge definition file for libvirt as recommended by netplan.io examples:
Create a file br0.xml, vim ~/br0.xml
and add following to it:
<network>
  <name>br0</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
</network>
Next have libvirt add the new network and autostart it:
- sudo virsh net-define ~/br0.xml
- sudo virsh net-start br0
- sudo virsh net-autostart br0
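To attach a guest to this network, the VM's domain XML (or virt-install's --network option) can reference it. A sketch of the interface element, with virtio assumed as the NIC model:

```xml
<interface type='network'>
  <!-- 'br0' is the libvirt network defined above -->
  <source network='br0'/>
  <model type='virtio'/>
</interface>
```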
The qemu-defined networks can be listed with the command: virsh net-list --all
You can list networks with networkctl list
Some helpful commands and comments:
- To see bridge status information:
brctl show
- To see bond setup status:
cat /proc/net/bonding/bond0
- To list network configuration:
ifconfig
,ip a
,ip route
- Kernel IP routing table:
route
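The bonding status file is plain text and easy to check from scripts. A sketch that pulls the bond's overall MII status out of sample output (the sample here mimics the kernel bonding driver's format; the real data comes from /proc/net/bonding/bond0):

```shell
# Abridged sample of /proc/net/bonding/bond0 output.
sample='Ethernet Channel Bonding Driver: v5.10
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: eno1
MII Status: up
Slave Interface: eno2
MII Status: up'

# The first "MII Status" line is the bond itself; the rest are the slaves.
printf '%s\n' "$sample" | awk -F': ' '/^MII Status/ {print $2; exit}'
# prints "up"
```

Dropping the `exit` would print the slave statuses as well, one per line.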
NetworkManager is not required on a server, as the base ifconfig and related commands provide full functionality, and NetworkManager may conflict with the base configuration. Remove it with sudo apt remove network-manager
. (To see information on system network start-up and ongoing status: sudo systemctl status NetworkManager
or, more comprehensively, journalctl -u NetworkManager
)
After changing the configuration, restart the network daemon: sudo systemctl restart systemd-networkd
Ubuntu Network Setup Links
Links relating to bridged and bonded Networking
A bridged network allows different networks to be connected, both physical (like NICs or WiFi) and virtual, allowing virtual machines to connect to a physical network and even be assigned a LAN IP address. Bonding allows physical networking devices such as NICs or WiFi to be combined for increased bandwidth or redundancy. Sadly there seems to be a lot of information out there that is either for older versions of the software or for other purposes.
- Debian wiki BridgeNetworkConnections and Bonding
- Serverfault How do I put a bridge on top of a bonded interface?
- Gentoo Home Router
- centos.org 41.5.2.1. bonding Module Directives
- kernel.org Linux Ethernet Bonding Driver HOWTO
- Thomas Krenn Link Aggregation and LACP basics
- Wikipedia Link aggregation