{{tag>network interface netplan nic setup loopback eth ethernet bridge bond networkd linux debian setup command}}
=====Network Setup=====
  
As of 2021, instead of bonding NICs I have installed a 2.5Gb/s Ethernet card in both my main server and my backup server. My current main desktop computer comes with 2.5Gb/s as standard. I originally purchased a 5-port 2.5Gb/s switch, but upgraded to an 8-port version. My 2 home Wifi 6 access points (APs) are also connected to the 2.5Gb/s switch via 2.5Gb/s Ethernet ports and have 4 x 4Gb/s ports downstream. Last year (2022) I upgraded from the older Wifi APs I had used since 2014, Netgear EX6200/AC1200 units. They each have 5 x 1Gb/s Ethernet ports, of which 1 is used to connect to the upstream switch. As much of my home network equipment is connected by CAT 6 Ethernet cable, the dual APs are not heavily loaded. I have been lucky to get adequate Wifi coverage without resorting to a Wifi mesh, which would be clearly inferior to the cabled system I have. 5Gb/s Ethernet cards and switches are not readily available, and 10Gb/s cards and switches are still much more expensive and consume significantly more power when linked at 10Gb/s, so operating costs would probably also be adversely affected.\\
  
I have not tested the performance of the 2.5Gb/s Ethernet system; in all honesty I have not noticed much difference. This is perhaps not unexpected, as my internet connection is ~65Mb/s down and 17Mb/s up. My main server's primary storage is 3.5" spinning disk, with peak performance at best <2Gb/s and sustained average performance much lower than this. These hard disks are limited to their individual performance, with no interleaved RAID to speed things up. The operating systems and VMs are on SSDs / NVMe drives with much higher achievable transfer rates, which should improve overall system performance, but these are not used for normal server storage purposes, so bulk transfers are generally restricted by disk speed rather than by the network speed.
  
This is the setup for my new server with 4 built-in Intel NICs, running Ubuntu 20.04. To reduce the performance limitation of the Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, and to use bridging both to give the server's virtual machines access to the NICs and to use the remaining 2 NICs effectively as a switch.
  
Edit the network configuration file ''/etc/netplan/interfaces.yaml'' as follows:
++++interfaces.yaml|
<code yaml>
network:
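  # What follows is a minimal sketch of the bond + bridge layout described
  # above, assuming hypothetical interface names enp1s0..enp4s0 and an
  # example address of 192.168.1.10/24; adjust to the actual hardware.
  version: 2
  renderer: networkd
  ethernets:
    enp1s0: {}
    enp2s0: {}
    enp3s0: {}
    enp4s0: {}
  bonds:
    # 2 NICs aggregated for bandwidth/redundancy (LACP; the upstream
    # switch must support 802.3ad)
    bond0:
      interfaces: [enp1s0, enp2s0]
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
  bridges:
    # the bridge carries the host IP, gives the VMs access to the NICs,
    # and lets the 2 spare NICs act as extra switch ports
    br0:
      interfaces: [bond0, enp3s0, enp4s0]
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
      parameters:
        stp: false
        forward-delay: 0
</code>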
  *''sudo netplan try'' to try a new netplan configuration with automatic rollback (see the example sequence below)
  *''journalctl -u systemd-networkd'' to check the networkd log
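A typical safe sequence when changing the file might look like this (a sketch; ''netplan try'' rolls the change back automatically if the new settings cut off your session and you cannot confirm):
<code bash>
sudo netplan generate               # render backend files, catches YAML syntax errors
sudo netplan try                    # apply with a confirmation timeout and automatic rollback
sudo netplan apply                  # make the configuration permanent
journalctl -u systemd-networkd -e   # inspect the networkd log if something misbehaves
</code>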
++++
=====Full VM Network Setup=====
Moving back to Debian, I am also moving away from netplan and back to ''/etc/network/interfaces''.
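For comparison, this is a minimal sketch of what an equivalent ''/etc/network/interfaces'' setup might look like, assuming the ''ifenslave'' and ''bridge-utils'' packages are installed and reusing the hypothetical interface names and addresses from the netplan example above:
<code>
# Sketch only: assumed interface names (enp1s0..enp4s0) and addresses;
# requires the ifenslave and bridge-utils packages.

auto bond0
iface bond0 inet manual
    bond-slaves enp1s0 enp2s0
    bond-mode 802.3ad
    bond-miimon 100

auto br0
iface br0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    # bond plus the 2 spare NICs, so the VMs and spare ports share the bridge
    bridge_ports bond0 enp3s0 enp4s0
    bridge_stp off
    bridge_fd 0
</code>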