//home_server:home_server_setup:network_setup, last modified 2025-01-03 by baumkp//
{{tag>}}
======Network Setup======
Most servers have more than one network connection, although one is technically enough.

It would seem that Debian Linux supports multiple methods to define network connections:
  - ''/
  - Network Manager
  - systemd-networkd
  - netplan
As usual they all have their own pros and cons. Care also needs to be taken not to have conflicting methods operating at the same time, particularly on the same interface.
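As a minimal sketch of how such a conflict arises: the same interface can end up declared both in the classic ifupdown file and in a netplan file, in which case two daemons fight over it. The interface name ''eth0'', the file name and the addresses below are assumptions for illustration only:

<code yaml>
# Hypothetical netplan file, e.g. /etc/netplan/01-static.yaml (name assumed).
# If eth0 is ALSO declared with an "auto eth0 / iface eth0 inet static" stanza
# in /etc/network/interfaces, ifupdown and systemd-networkd will both try to
# configure it, with unpredictable results.
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
</code>

Only one method should own a given interface; the others should leave it unconfigured.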
+ | |||
++++old, tldr;|
The home server I have has 4 Intel Gigabit NICs. For the past couple of years I have only been using 1 NIC to a main 24 port gigabit switch; this is described in the Basic Network Setup below. The newer home server has 5 drives: 2 SSD system drives, 2 larger data storage drives and 1 drive used as a parity drive for offline RAID over the data storage drives. Most of the time a single NIC provides sufficient bandwidth between the server and switch; however, the server has the capacity to saturate the bandwidth of a single gigabit NIC. To increase effective bandwidth there is an option to bond 2 or more NICs together to combine their bandwidth; this is called NIC bonding. To allow virtual machines NIC access, the NIC(s) must be set up in bridge mode. Furthermore, bridging NICs can also allow the NICs to act as a switch, obviously where more than one NIC is available. The Full Network Setup section below describes setting up the system with bonded and bridged NICs. All these described setups were found to operate well.
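The bonded-plus-bridged arrangement described above can be sketched in classic ''interfaces'' syntax. The NIC names, bond mode and addresses below are assumptions, not the exact configuration used:

<code>
# /etc/network/interfaces sketch (requires the ifenslave and bridge-utils
# packages; interface names and addresses assumed)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2        # the two bonded NICs
    bond-mode 802.3ad            # LACP; the switch ports must be configured to match
    bond-miimon 100              # link monitoring interval in ms

auto br0
iface br0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge_ports bond0 eno3 eno4 # spare NICs bridged in, acting as a small switch
    bridge_stp on                # spanning tree, sensible when bridging multiple ports
</code>

With the remaining NICs listed in ''bridge_ports'', the bridge forwards frames between them much like a switch, and VMs can attach to ''br0''.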
I added a 2.5GbE NIC card to my servers and switch.
Some references are noted below under References.

=====References=====
=====Archived Network Setups=====
++++Network Interfaces Setup|
====Basic Network Setup====
Quoted WiFi speeds are particularly optimistic: the quoted figure is usually the best speed achievable under ideal conditions. WiFi bandwidth is often shared among many simultaneous users, with each of n users often getting at best a 1/n share of the bandwidth. There are also latency and interference issues with WiFi that can affect performance. Wired Ethernet LAN connections tend to provide more reliable, consistent performance. That being said, WiFi is convenient and in most, but certainly not all, cases fast enough.++++
=====Full Network Setup=====
As of 2021, instead of bonding NICs, I have installed a 2.5Gb/s Ethernet card in my main server and backup server. My 2 home WiFi 6 access points (APs), Netgear WAX206-100AU, are also connected to the 2.5Gb/s switch via their 2.5Gb/s Ethernet ports and have 4 x 1Gb/s ports downstream.

I have not tested the performance of the 2.5Gb/s Ethernet system; in all honesty I have not noticed much difference. This is perhaps not unexpected, as my Internet
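One way the link could actually be measured (not something done here) is an ''iperf3'' run between the server and another wired host; the address below is a placeholder:

<code bash>
# On the server: start an iperf3 server (package: iperf3)
iperf3 -s

# On another 2.5GbE host: run a 10-second TCP throughput test
# against the server (replace 192.168.1.10 with the server's address)
iperf3 -c 192.168.1.10 -t 10
</code>

On a healthy 2.5GbE path the reported throughput should sit well above the ~940Mb/s ceiling of gigabit Ethernet.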
This is the setup for my new server with 4 built-in Intel NICs, running Ubuntu 20.04. To reduce the performance loss from the limited Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, and to use bridging both to give the server's virtual machines access to the NICs and to use the remaining 2 NICs effectively as a switch.
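In netplan terms the proposed layout might look roughly like the following sketch; the interface names, bond mode and addresses are assumptions for illustration, not the exact file used:

<code yaml>
# Sketch of a netplan bond + bridge (names and addresses assumed)
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1: {}
    eno2: {}
    eno3: {}
    eno4: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad              # LACP; switch ports must be configured to match
        mii-monitor-interval: 100
  bridges:
    br0:
      interfaces: [bond0, eno3, eno4]  # spare NICs bridged in, acting as a switch
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
</code>

Virtual machines then attach their virtual NICs to ''br0''.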
Edit the network configuration file: ''/
++++interfaces.yaml|
<code yaml>
network:
</code>
  *''
  *''
++++
=====Full VM Network Setup=====
Moving back to Debian, I am also moving away from netplan, back to ''interfaces''.
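A Debian static LAN address in classic ''interfaces'' syntax can be sketched as follows; the interface name and addresses are assumptions:

<code>
# /etc/network/interfaces sketch (names and addresses assumed)
source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

# Primary NIC, static LAN address
auto enp1s0
iface enp1s0 inet static
    address 192.168.1.20/24
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1   # honoured by resolvconf, if installed
</code>

Applying the change is then a matter of ''ifdown''/''ifup'' on the interface or restarting the ''networking'' service.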
The VM netplan yaml configuration file for static LAN IP address: ''/
<code yaml>
network:
  version: 2
</code>
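A complete netplan static-address file for such a VM might look along these lines; the file name, interface name and addresses are assumptions:

<code yaml>
# e.g. /etc/netplan/01-netcfg.yaml (file name assumed)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: false
      addresses: [192.168.1.20/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
</code>

The configuration is applied with ''sudo netplan apply''.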
Create a file br0.xml, ''
<code xml>
<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
</code>
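The usual way to load such a definition into libvirt is with ''virsh''; the network name ''br0'' matches the bridge defined above:

<code bash>
# Register the network definition with libvirt
virsh net-define br0.xml

# Start it now and have it start automatically on boot
virsh net-start br0
virsh net-autostart br0

# Confirm it is active
virsh net-list --all
</code>

VM domain XML can then reference this network instead of naming the host bridge directly.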