{{tag>network interface netplan nic setup loopback eth ethernet bridge bond networkd linux debian setup command}}
======Network Setup======
  
Most servers have more than one network connection, although one is technically enough.  Routers by definition need at least 2 network connections.

Debian Linux supports multiple methods to define network connections:
  - ''/etc/network/interfaces'' (with ifupdown)
  - NetworkManager
  - systemd-networkd
  - netplan
As usual they each have their own pros and cons.  Care also needs to be taken not to have conflicting methods operating at the same time, particularly on the same interface.
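A quick way to see which of these methods is actually in charge of an interface, so that two of them do not fight over it (a sketch, assuming the stock Debian service names):
<code bash>
# Which of the managers are enabled? (stock Debian unit names)
systemctl is-enabled networking NetworkManager systemd-networkd 2>/dev/null

# Interfaces as seen by systemd-networkd ("unmanaged" means something else owns them)
networkctl list

# If NetworkManager is installed, its view of each device
nmcli device status
</code>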

++++old, tldr;|
The home server I have has 4 Intel Gigabit NICs.  For the past couple of years I have only been using 1 NIC to a main 24 port gigabit switch.  This is described in the Basic Network Setup below.  The newer home server has 5 drives: 2 SSD system drives, 2 larger data storage drives and 1 drive used as a parity drive for offline RAID over the data storage drives.  Most of the time a single NIC will provide sufficient bandwidth between the server and switch.  However the server has the capacity to saturate the bandwidth of a single gigabit NIC.  To increase effective bandwidth there is the option to bond 2 or more NICs together to combine their bandwidth.  This is called NIC bonding.  To give virtual machines NIC access the NIC(s) must be set up in bridge mode.  Furthermore bridging NICs can also allow the NICs to act as a switch, obviously where more than one NIC is available.  The Full Network Setup section below describes setting up the system with bonded and bridged NICs.  All these described setups were found to operate well.
I added a 2.5gbe NIC card to my servers and switch.

Some references are noted below under References.++++

=====References=====
  *[[https://shape.host/resources/how-to-configure-network-on-debian-12-a-guide-for-beginners|How to Configure Network on Debian 12: A Guide for Beginners]] (systemd-networkd)
  *[[https://www.server-world.info/en/note?os=Debian_12&p=initial_conf&f=3|Debian 12 Initial Settings : Network Settings]]
  *[[https://poweradm.com/systemd-network-config-linux/|Network Configuration with Systemd-networkd on Ubuntu/Debian]] (systemd-networkd)
  *[[https://wiki.debian.org/NetworkConfiguration|Debian Wiki - Network Configuration]] (all methods)
  *[[https://wiki.debian.org/BridgeNetworkConnections|Debian Wiki - Bridge Network Connections]] (/etc/network/interfaces method)
  *[[https://www.debian.org/doc/manuals/debian-reference/ch05.en.html|Debian Manual Chapter 5. Network Setup]]
  *[[https://thelinuxcode.com/debian_etc_network_interfaces/|All About the Debian /etc/network/interfaces File: The Comprehensive Guide]] (/etc/network/interfaces method)
    *[[https://thelinuxcode.com/reload-network-interfaces-debian/|A Comprehensive Guide on Reloading /etc/network/interfaces in Debian]]
    *[[https://thelinuxcode.com/restart-networking-debian-12-desktop-server-operating-system/|Getting Networking Up and Running on Debian 12: An Expert's Guide]]
    *[[https://thelinuxcode.com/restart_networking_debian_linux/|How to Restart Networking on Debian Linux]]
  *[[https://www.cyberciti.biz/faq/linux-list-network-cards-command/|Linux List Network Cards Command]]
  *[[https://www.baeldung.com/linux/network-interface-configure|Understanding and Configuring Linux Network Interfaces]]
  *[[https://raspberrypi.stackexchange.com/questions/108592/use-systemd-networkd-for-general-networking|Use systemd-networkd for general networking]] (systemd-networkd)
  *[[https://superuser.com/questions/1694538/systemd-networkd-what-is-the-configuration-file-precedence|systemd-networkd: what is the configuration file precedence?]] (systemd-networkd)
  *[[https://medium.com/100-days-of-linux/working-with-systemd-networkd-e461cfe80e6d|Working with systemd-networkd]] (systemd-networkd)
  *[[https://major.io/p/creating-a-bridge-for-virtual-machines-using-systemd-networkd/|Creating a bridge for virtual machines using systemd-networkd]] (systemd-networkd)
  *[[https://wiki.archlinux.org/title/Systemd-networkd|Arch Wiki - systemd-networkd]]
  *[[https://man.archlinux.org/man/systemd.network.5|systemd.network - Network configuration]]
  *[[https://www.freedesktop.org/software/systemd/man/latest/systemd.network.html|Systemd documentation: systemd.network]]
=====Archived Network Setups=====
  
++++Network Interfaces Setup|
====Basic Network Setup====
  
Quoted Wifi speeds are particularly bogus / optimistic: the quoted speed is usually the best achievable under ideal conditions. Wifi bandwidth is often shared between many simultaneous users, with each of n users often getting at best a 1/n share of the bandwidth. There are also latency and interference issues with Wifi that can affect performance. Wired LAN ethernet connections tend to provide more reliable, consistent performance. That being said, Wifi is convenient and in most, but certainly not all, cases fast enough.++++
=====Full Network Setup=====
As of 2021, instead of bonding NICs on my server, I have installed a 2.5Gb/s ethernet card in my main server and backup server. My current main desktop computer comes with 2.5Gb/s as standard. I originally purchased a 5 port 2.5Gb/s switch, but upgraded to an 8 port version.
  
-I have not tested the performance of the 2.5Gb/s ethernet system, in all honesty I have not noticed much difference. This is perhaps not unexpected as my in eternet connection is ~ 30Mb/s down and 12Mb/s up.  My main server primary storage is 3.5" spinning disk with peak performance at best <2Gb/s and sustain and average performance much lower than this. These hard disks are limited to individual speed performance, no interleaving raid to speed thing up.  The operating systems and VMs are on SSDs / NVM with much higher achievable transfer rate and should improved overall system performance, but are not used for normal server storage purposes, hence the speed advantage here are not generally restricted by the network speed.+My 2 home home Wifi 6 access points (APs) are also connected to the 2.5Gb/s switch with 2.5Gb/s Ethernet ports and have 4 x 4 Gb/s ports downstream, Netgear WAX206-100AU.  Interestingly the WAX206 went EOL (End of Life) on 2023-02-01, It appears to be less than 2 years, perhaps only 1 year after the product was available for purchase.  Netgear indicated they support their products for minimum 5 years after EOL, so until 2028-02-01, only about 3 years as of writing this.   Last year (2022) I upgraded from older Wifi APs since 2014, Netgear EX6200/AC1200. I got these APs around 2014. They each have 5 x 1Gb/s ethernet ports, for which 1 is used to connect to the upstream switch. As much of my home network equipment is connected by CAT 6 ethernet cable the dual  APs are not very stretched.  I have been lucky to avoid the whole Wifi mesh to enable adequate Wifi coverage, which is clearly inferior to the cabled system I have. 5GB/s Ethernet cards and switches are not readily available, even now end of 2024. I have been very happy with these Netgear APs.  Sadly there do not seem to be any similar products on the market at this time. I suspect this is only a limited market for these devices with most home users going for the inferior / overpriced Mesh router option as it does not require ethernet cabling and business users going with expensive AP endpoints with only a single PoE 2.5GB ethernet connection.  The 10Gb/s cards and switches are still much more expensive and consume significantly more power at 10Gb/s connection, meaning operating costs are probably also adversely affected.\\ 
 + 
 +I have not tested the performance of the 2.5Gb/s ethernet system, in all honesty I have not noticed much difference. This is perhaps not unexpected as my in Ethernet connection is ~ 270Mb/s down and 22Mb/s up.  My main server primary storage is 3.5" spinning disk with peak performance at best <2Gb/s and sustain and average performance much lower than this. These hard disks are limited to individual speed performance, no interleaving raid to speed thing up.  The operating systems and VMs are on SSDs / NVM with much higher achievable transfer rate and should improved overall system performance, but are not used for normal server storage purposes, hence the speed advantage here are not generally restricted by the network speed.
  
This is the setup for my new server with 4 built-in Intel NICs, running Ubuntu 20.04.  To reduce the performance loss from the limited Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, and to use bridging both to give server virtual machines access to the NICs and to use the remaining 2 NICs effectively as a switch.
  
Edit the network configuration file: ''/etc/netplan/interfaces.yaml'' as follows:
++++interfaces.yaml|
<code yaml>
network:
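  # Sketch of the layout described above, not the original file contents:
  # interface names (eno1-eno4), addresses and bond mode are assumed values.
  version: 2
  renderer: networkd
  ethernets:
    eno1: {}
    eno2: {}
    eno3: {}
    eno4: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]    # 2 NICs combined for bandwidth
      parameters:
        mode: 802.3ad             # LACP, needs matching switch configuration
        mii-monitor-interval: 100
  bridges:
    br0:
      interfaces: [bond0, eno3, eno4]   # bridge gives VMs access; spare NICs act as switch ports
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1       # deprecated in newer netplan in favour of routes
      nameservers:
        addresses: [192.168.1.1]
</code>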
  *''sudo netplan try'' To try a new netplan configuration with automatic rollback.
  *''journalctl -u systemd-networkd'' to check the networkd log
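Once applied, the bond and bridge can be checked with standard tools (assuming the ''bond0'' and ''br0'' names from the sketch above):
<code bash>
# Bonding driver status: active slaves, link state, LACP details
cat /proc/net/bonding/bond0

# Ports currently attached to the bridge
bridge link show

# systemd-networkd's view of the bridge
networkctl status br0
</code>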
++++
=====Full VM Network Setup=====
Moving back to Debian I am also moving away from netplan back to interfaces.
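For the ''/etc/network/interfaces'' method, a minimal bridge stanza looks something like the sketch below (interface names and addressing are assumptions; ''bridge_ports'' requires the ''bridge-utils'' package, per the Debian wiki bridge page in the references):
<code>
# /etc/network/interfaces - bridge sketch with assumed names and addresses
auto br0
iface br0 inet static
    bridge_ports eno1 eno2
    address 192.168.1.10/24
    gateway 192.168.1.1
</code>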
  
The VM netplan yaml configuration file for a static LAN IP address, ''/etc/netplan/network.yaml'', is as follows:
<code yaml>
network:
  version: 2
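  # Remainder is a sketch: interface name, address, gateway and DNS are
  # assumed values for a static LAN IP, not the original ones.
  ethernets:
    enp1s0:
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
</code>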
  
Create a file br0.xml, ''vim ~/br0.xml'', and add the following to it:
<code xml>
<network>
  <name>br0</name>