{{tag>network interface netplan nic setup loopback eth ethernet bridge bond networkd linux debian setup command}}
=====Network Setup=====
The home server I have has 4 Intel Gigabit NICs. For the past couple of years I have only been using 1 NIC to a main 24 port gigabit switch. This is described in the Basic Network Setup below. The newer home server has 5 drives: 2 SSD system drives, 2 larger data storage drives and 1 drive used as a parity drive for offline RAID over the data storage drives. Most of the time a single NIC provides sufficient bandwidth between the server and switch; however, the server has the capacity to saturate the bandwidth of a single gigabit NIC. To increase effective bandwidth there is an option to bond 2 or more NICs together to combine their bandwidth. This is called NIC bonding. To allow virtual machines access to a NIC, the NIC(s) must be set up in bridge mode. Furthermore, bridging NICs can also allow the NICs to act as a switch, obviously only where more than one NIC is available. The Full Network Setup section below describes setting up the system with bonded and bridged NICs. All the described setups were found to operate well.
I have since added a 2.5GbE NIC card to my servers and a 2.5GbE switch.
Some references are noted below under Network Setup Links.
=====Archived Network Setups=====
++++Network Interfaces Setup|
====Basic Network Setup====
To check available interfaces and names: ''ip link''
Ensure the bridge utilities are loaded: ''sudo apt install bridge-utils''
Edit the network configuration file: ''/etc/network/interfaces'' as follows:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
# auto eth0
# iface eth0 inet dhcp
#Basic bridge setup on a NIC to allow virtual machine NIC access
#The DHCP server is used to assign a fixed IP address based upon MAC
auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off
#No point enabling NICs that are not being used
#auto eth1
#iface eth1 inet manual
#auto eth2
#iface eth2 inet manual
#auto eth3
#iface eth3 inet manual
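After editing the file, the bridge can be brought up and checked with something like the following, a minimal sketch assuming the ifupdown tooling this configuration uses and the br0 name above:
sudo ifup br0                  # or restart networking: sudo systemctl restart networking
brctl show br0                 # eth0 should be listed as the bridge port
ip addr show br0               # confirm the DHCP assigned address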
I tried earlier to use a statically assigned IP setup, but had problems with its operation, so I used the DHCP setup, which worked. I then set up the DHCP server to assign a fixed IP address to the eth0 MAC address.++++
++++Old interfaces setup with 802.3ad bonding, too complex, do not use|
====Full Network Setup====
As noted in the main section I have a server with 4 built-in Intel NICs. To reduce the performance loss from the limited Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, use bridging to allow the server's virtual machines access to the NICs, and use the remaining 2 NICs effectively as a switch.
To check available interfaces and names: ''ip link''
Ensure the bridge utilities are loaded: ''sudo apt install bridge-utils''
The bonded configuration needs ifenslave utility loaded: ''sudo apt install ifenslave''
My NIC connectors are set up as follows:
IPMI_LAN
USB2-1 USB3-1 LAN3(eth2) LAN4(eth3)
USB2-0 USB3-0 LAN1(eth0) LAN2(eth1) VGA
Edit the network configuration file: ''/etc/network/interfaces'' as follows:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5)
# and brctl(8).
# The loopback network interface
auto lo
iface lo inet loopback
#Setup the Bond
auto bond0
iface bond0 inet manual
hwaddress ether DE:AD:BE:EF:69:01
post-up ifenslave bond0 eth0 eth1
pre-down ifenslave -d bond0 eth0 eth1
bond-slaves none
bond-mode 4
bond-miimon 100
bond-downdelay 0
bond-updelay 0
bond-lacp-rate fast
bond-xmit_hash_policy layer2+3
#bond-mode 4 requires that the connected switch has matching
#configuration
#Start Bond network interfaces in manual
auto eth0
iface eth0 inet manual
bond-master bond0
auto eth1
iface eth1 inet manual
bond-master bond0
#Setup Bridge Interface
auto br0
iface br0 inet static
address 192.168.1.5
network 192.168.1.0
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 192.168.1.1
bridge_ports bond0 eth2 eth3
bridge_stp off
bridge_fd 9
bridge_hello 2
bridge_maxage 12
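Once the interfaces are up, the bond and bridge state can be checked; a minimal sketch using the bond0/br0 names above:
cat /proc/net/bonding/bond0    # bond mode, slave link status and (for 802.3ad) LACP partner details
brctl show br0                 # bond0, eth2 and eth3 should appear as bridge ports
ip addr show br0               # confirm the static 192.168.1.5 address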
The following is a description of some of the parameters:
*Bonding
*bond-mode
*balance-rr or 0 (default) is a good general option
*802.3ad or 4 requires a switch that is correspondingly setup with IEEE 802.3ad Dynamic link aggregation.
*bond-lacp-rate, only required for 802.3ad mode. Option specifying the rate at which we ask our link partner to transmit LACPDU packets; the default is slow or 0:
*slow or 0, Request partner to transmit LACPDUs every 30 seconds
*fast or 1, Request partner to transmit LACPDUs every 1 second
*bond-xmit_hash_policy
*layer2 (default)
*layer2+3
*layer3+4
layer2 and layer2+3 options are 802.3ad compliant, layer3+4 is not fully compliant and may cause problems on some equipment/configurations.
*bond-slaves
*bond-master
*hwaddress ether xx:xx:xx:xx:xx:xx
*The MAC address xx:xx:xx:xx:xx:xx must be replaced by the hardware address of one of the interfaces being bonded, or by a locally administered address (see Wikipedia for details). If you don't specify the Ethernet address then it will default to the address of the first interface that is enslaved. This could be a problem, as it is possible for various reasons that the hardware address of the bond could change, and this may cause problems with other parts of your network.
**I recommend using the default balance-rr option as it is simpler to set up and maintain.**
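A minimal sketch of the simpler balance-rr variant of the bond0 stanza above (the LACP and hash policy options are dropped and no switch-side configuration is required); the eth0/eth1 and bridge stanzas stay the same:
auto bond0
iface bond0 inet manual
    bond-slaves none
    bond-mode balance-rr
    bond-miimon 100
    post-up ifenslave bond0 eth0 eth1
    pre-down ifenslave -d bond0 eth0 eth1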
Bonding Benefits and Limitations
*Benefits
*Increased Ethernet speed/bandwidth (with limitations)
*Link Redundancy (not a feature of particular interest to me)
*Limitations
*More complex setup
*Not as fast or flexible as a faster Ethernet connection, as each transport connection only uses one media link to prevent packet re-ordering, hence the maximum speed of a single connection is limited to the speed of one bond lane
Modern hard disks are generally faster than a 1 Gb/s Ethernet connection, SSDs significantly so. Yet many individual data demands are significantly slower, e.g. video 0.5 to 30 Mb/s, audio 100 - 400 kb/s. Furthermore most external internet connections are still normally slower than 100 Mb/s, with only larger offices having 1 Gb/s or more bandwidth. So the biggest speed/time impact is when copying files across a speed limited Ethernet LAN connection or where a server is used to provide information to multiple clients. **Ethernet bonding can help improve server performance by sharing multiple simultaneous client connections between the bonded Ethernet connections.**
Wifi quoted speeds are particularly bogus / optimistic. The quoted speed is usually the best possible speed achievable. Wifi bandwidth is often shared between many simultaneous users, with each of n users often only getting at best a 1/n share of the bandwidth. There are also latency and interference issues with Wifi that can affect performance. Wired LAN Ethernet connections tend to provide more reliable, consistent performance. That being said, Wifi is convenient and in most, but certainly not all, cases fast enough.++++
=====Full Network Setup=====
As of 2021, instead of bonding, I have installed a 2.5Gb/s Ethernet card in my main server and backup server. My current main desktop computer comes with 2.5Gb/s as standard. I originally purchased a 5 port 2.5Gb/s switch, but upgraded to an 8 port version. My 2 home Wifi 6 access points (APs) are also connected to the 2.5Gb/s switch via 2.5Gb/s Ethernet ports and have 4 x 4 Gb/s ports downstream. Last year (2022) I upgraded from the older Wifi APs I had used since 2014, Netgear EX6200/AC1200 units. They each have 5 x 1Gb/s Ethernet ports, of which 1 is used to connect to the upstream switch. As much of my home network equipment is connected by CAT 6 Ethernet cable, the dual APs are not very stretched. I have been lucky enough to avoid the whole Wifi mesh approach to achieving adequate Wifi coverage, which is clearly inferior to the cabled system I have. 5Gb/s Ethernet cards and switches are not readily available. 10Gb/s cards and switches are still much more expensive and consume significantly more power at 10Gb/s connection speeds, meaning operating costs are probably also adversely affected.\\
I have not tested the performance of the 2.5Gb/s Ethernet system; in all honesty I have not noticed much difference. This is perhaps not unexpected, as my internet connection is ~65Mb/s down and 17Mb/s up. My main server's primary storage is 3.5" spinning disk with peak performance at best <2Gb/s, and sustained and average performance much lower than this. These hard disks are limited to their individual speed performance, with no striped (interleaved) RAID to speed things up. The operating systems and VMs are on SSDs / NVMe with much higher achievable transfer rates, which should improve overall system performance, but they are not used for normal server storage purposes, hence their speed advantage is not generally restricted by the network speed.
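If I do get around to measuring it, the raw link speed between two machines can be tested with iperf3. A minimal sketch, assuming iperf3 is installed on both ends (''sudo apt install iperf3'') and the server is reachable at 192.168.1.10:
# on the server
iperf3 -s
# on the client: 4 parallel streams for 10 seconds
iperf3 -c 192.168.1.10 -P 4 -t 10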
This is the setup for my new server with 4 built-in Intel NICs, running Ubuntu 20.04. To reduce the performance loss from the limited Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, use bridging to allow the server's virtual machines access to the NICs, and use the remaining 2 NICs effectively as a switch.
Ubuntu has for some time defaulted to netplan.io, whereas Debian 11 still defaults to the interfaces configuration style. Use ''sudo apt install netplan.io'' to install netplan on Debian. After configuring netplan, move the interfaces file aside to prevent overlapping configuration, e.g. ''sudo mv /etc/network/interfaces /etc/network/interfaces.old''
To check available interfaces and names: ''ip link''
Netplan does **not** require the bridge utilities to be loaded, however they can still be used to inspect the bridge: ''sudo apt install bridge-utils''
Under netplan the bonded configuration does not need the ifenslave utility, as it depends upon ifupdown. Do **not** run ''sudo apt install ifenslave''
The netplan website with basic information [[https://netplan.io|netplan.io]]. Also another resource is from cloud-init [[https://cloudinit.readthedocs.io/en/latest/topics/network-config-format-v2.html#examples|Networking Config Version 2]].
My new server NIC connectors (hardware) are configured as follows:
IPMI_LAN
USB2-1 LAN3(eno3) LAN4(eno4)
USB2-0 LAN1(eno1) LAN2(eno2) VGA
The new server board does not have any rear USB3 ports. No great loss, I have never used them.
As instructed in the system created yaml file ''/etc/netplan/50-cloud-init.yaml'', create the file ''/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg'' (e.g. ''sudo vim /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg'') and add the line ''network: {config: disabled}''
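Collected together, the preparation steps above amount to something like the following sketch; the netplan install is only needed on Debian and the cloud-init step only applies where cloud-init is present (as on Ubuntu server images):
# Debian only: install netplan (Ubuntu ships it by default)
sudo apt install netplan.io
# keep the old ifupdown configuration out of the way
sudo mv /etc/network/interfaces /etc/network/interfaces.old
# only where cloud-init is installed
echo 'network: {config: disabled}' | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg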
Edit the network configuration file: ''/etc/netplan/interfaces.yaml'' as follows:
++++interfaces.yaml|
network:
#setup network interfaces
version: 2
renderer: networkd
ethernets:
eno1:
dhcp4: no
dhcp6: no
optional: true
eno2:
dhcp4: no
dhcp6: no
optional: true
eno3:
dhcp4: no
dhcp6: no
optional: true
eno4:
dhcp4: no
dhcp6: no
optional: true
enp2s0:
dhcp4: no
dhcp6: no
optional: true
# #Setup the Bond
# bonds:
# bond0:
# interfaces: [eno1, eno2]
# parameters:
# mode: balance-rr
#Setup Bridge Interface
bridges:
br0:
addresses: [192.168.1.10/24]
interfaces: [eno1, eno2, eno3, eno4, enp2s0]
gateway4: 192.168.1.1
nameservers:
addresses: [192.168.1.1]
parameters:
stp: off
forward-delay: 9
hello-time: 2
max-age: 12
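For reference only, a hedged sketch of how the stanzas above would change if the commented-out bond were enabled: eno1 and eno2 would be enslaved to bond0 and the bridge would use bond0 in their place (balance-rr assumed, as in the commented example).
# sketch only: under the same network: block as above
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: balance-rr    # 802.3ad would also need matching switch configuration
  bridges:
    br0:
      interfaces: [bond0, eno3, eno4, enp2s0]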
Some additional netplan commands:
*''sudo netplan --debug apply'' To apply any changes to the network configuration.
*''sudo netplan --debug generate'' To generate backend specific configuration files.
*''sudo netplan try'' To try a new netplan configuration with automatic roll back.
*''journalctl -u systemd-networkd'' to check the networkd log
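To confirm the bridge actually came up under systemd-networkd, something like the following helps (br0 and its static address as defined above):
networkctl status br0          # link state, carrier and configured addresses
ip addr show br0               # confirm the static 192.168.1.10 address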
++++
=====Full VM Network Setup=====
Moving back to Debian, I am also moving away from netplan, back to interfaces. The host ''/etc/network/interfaces'' file is as follows:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
#allow-hotplug enp1s0
#iface enp1s0 inet dhcp
auto enp1s0
iface enp1s0 inet static
address 192.168.1.17/24
gateway 192.168.1.1
#dns-nameservers 192.168.1.1 only functional if resolvconf is installed
iface enp1s0 inet6 static
address 2001:470:1f2c:10d::17/64
gateway 2001:470:1f2c:10d::3
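Since dns-nameservers only works when resolvconf is installed, on a plain Debian install the nameserver can instead be set directly in ''/etc/resolv.conf''; a minimal sketch for the LAN above:
# /etc/resolv.conf (managed manually when resolvconf is not installed)
nameserver 192.168.1.1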
The VM netplan yaml configuration file for static LAN IP address: ''/etc/netplan/network.yaml'' as follows:
network:
version: 2
renderer: networkd
ethernets:
ens3:
addresses: [192.168.1.12/24]
gateway4: 192.168.1.1
nameservers:
addresses: [192.168.1.1]
I also created a bridge definition file for libvirt as recommended by netplan.io examples:
Create a file br0.xml, ''vim ~/br0.xml'', and add the following (the standard libvirt bridge network definition) to it:
<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
Next have libvirt add the new network and autostart it:
*''sudo virsh net-define ~/br0.xml''
*''sudo virsh net-start br0''
*''sudo virsh net-autostart br0''
The qemu defined networks can be listed with the command: ''virsh net-list --all''
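A guest can then be attached to this libvirt network. As a hedged sketch, the relevant fragment of a guest's domain XML would look something like the following (virtio NIC model assumed); with virt-install the equivalent is ''--network network=br0'':
<!-- attach the guest NIC to the libvirt network named br0 -->
<interface type='network'>
  <source network='br0'/>
  <model type='virtio'/>
</interface>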
You can list networks with ''networkctl list''
=====Some helpful commands and comments:=====
*To see bridge status information: ''brctl show''
*To see bond setup status: ''cat /proc/net/bonding/bond0''
*To list network configuration: ''ifconfig'', ''ip a'', ''ip route''
*Kernel IP routing table: ''route''
The NetworkManager service is not required on a server, as the base ifconfig and related commands provide full functionality. NetworkManager may conflict with the base configuration. Remove it with ''sudo apt remove network-manager''. (To see information on system network start-up and ongoing status: ''sudo systemctl status NetworkManager'' or more comprehensively ''journalctl -u NetworkManager'')
To restart the networkd backend after configuration changes: ''sudo systemctl restart systemd-networkd''
=====Ubuntu Network Setup Links=====
Links relating to bridged and bonded Networking
A bridged network allows different networks to be connected, both physical, like NICs or Wifi, and virtual, allowing a virtual machine to connect to a physical network and even be assigned a LAN IP address. Bonding allows physical networking devices such as NICs or Wifi to be bonded together for increased bandwidth or redundancy. Sadly there seems to be a lot of information out there that is either for older versions of the software or for other purposes.
*Debian wiki [[https://wiki.debian.org/BridgeNetworkConnections|BridgeNetworkConnections]] and [[https://wiki.debian.org/Bonding|Bonding]]
*Serverfault [[https://serverfault.com/questions/348266/how-do-i-put-a-bridge-on-top-of-a-bonded-interface|How do I put a bridge on top of a bonded interface?]]
*Serverfault [[https://serverfault.com/questions/776057/802-3ad-bonding-configuration-file-on-an-ubuntu-16-04-lts-server|802.3ad bonding configuration file on an Ubuntu 16.04 LTS Server]]
*nixCraft How To Setup Bonded (bond0) and Bridged (br0) Networking On Ubuntu LTS Server
*nixCraft [[https://www.cyberciti.biz/faq/ubuntu-setup-a-bonding-device-and-enslave-two-real-ethernet-devices/|setup a bonding device and enslave two real Ethernet devices]], [[https://www.cyberciti.biz/faq/how-to-create-bridge-interface-ubuntu-linux/|How To Setup Bridge (br0) Network]], & [[https://www.cyberciti.biz/faq/debian-network-interfaces-bridge-eth0-eth1-eth2/|Debian Linux: Configure Network Interfaces As A Bridge / Network Switch]]
*Unixmen [[https://www.unixmen.com/linux-basics-create-network-bonding-on-ubuntu-14-10/|Linux Basics: Create Network Bonding On Ubuntu 14.10]]
*The Linux foundation [[https://wiki.linuxfoundation.org/networking/bridge|bridge]], [[https://wiki.linuxfoundation.org/networking/bonding?s[]=network&s[]=bond|bonding]] and [[https://wiki.linuxfoundation.org/networking/start?s[]=bonding&s[]=bridging|Kernel Networking]]
*Ubuntu documentation [[https://help.ubuntu.com/community/UbuntuBonding|Bonding]], [[https://help.ubuntu.com/community/KVM/Networking|KVM networking]], [[https://help.ubuntu.com/community/NetworkConnectionBridge|network bridging]] and [[https://help.ubuntu.com/community/BridgingNetworkInterfaces|bridging network interfaces]]
*Linux.com [[https://www.linux.com/learn/create-secure-linux-based-wireless-access-point|Create a secure Linux-based wireless access point]]
*Gentoo [[https://wiki.gentoo.org/wiki/Home_Router|Home Router]]
*Stackexchange [[https://unix.stackexchange.com/questions/128439/good-detailed-explanation-of-etc-network-interfaces-syntax|Good detailed explanation of /etc/network/interfaces syntax?]] and [[https://unix.stackexchange.com/questions/192671/what-is-a-hotplug-event-from-the-interface/192913#192913|What is a hotplug event from the interface?]]
*centos.org [[https://www.centos.org/docs/5/html/5.1/Deployment_Guide/s3-modules-bonding-directives.html|41.5.2.1. bonding Module Directives]]
*kernel.org [[https://www.kernel.org/doc/Documentation/networking/bonding.txt|Linux Ethernet Bonding Driver HOWTO]]
*Thomas Krenn [[https://www.thomas-krenn.com/en/wiki/Link_Aggregation_and_LACP_basics|Link Aggregation and LACP basics]]
*How-To Geek [[https://www.howtogeek.com/52068/how-to-setup-network-link-aggregation-802-3ad-on-ubuntu/|How to Setup Network Link aggregation (802.3ad) on Ubuntu]]
*[[https://delightlylinux.wordpress.com/2014/07/12/speed-up-your-home-network-with-link-aggregation-in-linux-mint-17-and-xubuntu-14-04/|Speed Up Your Home Network With Link Aggregation in Linux Mint 17 and Xubuntu 14.04]]
*Wikipedia [[https://en.wikipedia.org/wiki/Link_aggregation|Link aggregation]]
++++Home Server Index|
*[[home_server:home_server_setup:summary]]
*[[home_server:home_server_setup:home_it_setup]]
*[[home_server:home_server_setup:Network_setup]]
*[[home_server:home_server_setup:kvm]]
*[[home_server:home_server_setup:vnc_setup]]
*[[home_server:home_server_setup:disk_check]]
*[[home_server:home_server_setup:other_services:index]]
++++
<- home_server:home_server_setup:Home_IT_setup|Prev ^ home_server:home_server_setup:summary|Start page ^ home_server:home_server_setup:kvm|next page ->