{{tag>
=====Network Setup=====
The home server I have has 4 Intel Gigabit NICs. For the past couple of years I have only been using 1 NIC to a main 24 port gigabit switch. This is described in the Basic Network Setup below. The newer home server has 5 drives: 2 SSD system drives, 2 larger data storage drives and 1 drive used as a parity drive for offline RAID over the data storage drives. Most of the time a single NIC will provide sufficient bandwidth between the server and switch. However the server has the capacity to saturate the bandwidth of a single gigabit NIC. To increase effective bandwidth there is the option to bond 2 or more NICs together to combine their bandwidth. This is called NIC bonding. To allow virtual machines NIC access, the NIC(s) must be set up in bridge mode. Furthermore, bridging NICs can also allow the NICs to act as a switch, obviously where more than one NIC is available. The Full Network Setup section below describes setting up the system with bonded and bridged NICs. All of the setups described here were found to operate well.
I added a 2.5GbE NIC card to my servers and switch.
Some references are noted below under Network Setup Links.
=====Archived Network Setups=====
++++Network Interfaces Setup|
====Basic Network Setup====
Quoted Wifi speeds are particularly bogus / optimistic. The quoted speed is usually the best speed achievable under ideal conditions. Wifi bandwidth is often shared between many simultaneous users, with each of n users often getting at best a 1/n share of the bandwidth. There are also latency and interference issues with Wifi that can affect performance. Wired LAN Ethernet connections tend to provide more reliable, consistent performance. That being said, Wifi is convenient and in most, but certainly not all, cases fast enough.++++
=====Full Network Setup=====
As of 2021, instead of bonding, I have installed a 2.5Gb/s Ethernet card in my main server and backup server, connected through a 2.5GbE capable switch.
I have not tested the performance of the 2.5Gb/s Ethernet system; in all honesty I have not noticed much difference. This is perhaps not unexpected, as my Ethernet traffic rarely saturates even a single gigabit link.
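If I ever want to measure it, raw throughput between two machines on the LAN can be checked with iperf3; the address below is just a placeholder for the server's LAN IP:
<code>
# on the server
sudo apt install iperf3
iperf3 -s

# on a client machine, pointing at the server
iperf3 -c 192.168.1.17
</code>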
This is the setup for my new server with 4 built-in Intel NICs, running Ubuntu 20.04. To reduce the performance loss due to the limited Ethernet bandwidth of a single NIC, I propose to use 2 NICs in a bonded configuration, and also to use bridging to allow the server's virtual machines access to the NICs and to let the remaining 2 NICs effectively act as a switch.
Ubuntu has for some time defaulted to netplan.io, whereas Debian 11 still defaults to the interfaces configuration style.
To check available interfaces and names:
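For example, either of the standard iproute2 commands below will list the interface names:
<code>
ip link show
ip -br address show
</code>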
Netplan does **not** require the bridge utilities to be installed; however these utilities can be used to inspect the bridge:
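For example, once the bridge is up, bridge-utils (or the newer ''bridge'' command from iproute2) will list it and its member ports:
<code>
sudo apt install bridge-utils
brctl show

# the iproute2 equivalent, no extra package needed
bridge link show
</code>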
Under netplan the bonded configuration does not need the ifenslave utility installed, as that utility depends upon ifupdown.
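Once the bond is up, the kernel bonding driver reports its status, active mode and slave links under ''/proc''; assuming the bond is named ''bond0'':
<code>
cat /proc/net/bonding/bond0
</code>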
The netplan website with basic information: [[https://netplan.io/|netplan.io]]
Edit the network configuration file under ''/etc/netplan/'':
++++interfaces.yaml|
<code>
network:
  #setup network interfaces
</code>
++++
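Putting the above together, a netplan file for two bonded NICs attached to a VM bridge looks roughly like the following; the interface names, bond mode and addresses here are placeholders rather than my exact configuration:
<code>
network:
  version: 2
  renderer: networkd
  ethernets:
    # the two physical NICs that will carry the bonded link
    enp2s0:
      dhcp4: no
    enp3s0:
      dhcp4: no
  bonds:
    bond0:
      interfaces: [enp2s0, enp3s0]
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 100
  bridges:
    br0:
      # the bridge carries the host address and is shared with the VMs
      interfaces: [bond0]
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
</code>
Note that ''802.3ad'' bonding requires LACP support on the switch; ''balance-alb'' is a bonding mode that works without any switch configuration. The file can be tested with ''sudo netplan try'' before applying it with ''sudo netplan apply''.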
=====Full VM Network Setup=====
Moving back to Debian, I am also moving away from netplan, back to interfaces.
<code>
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#
#iface enp1s0 inet dhcp

auto enp1s0
iface enp1s0 inet static
    address 192.168.1.17/24
    gateway 192.168.1.1
    #

iface enp1s0 inet6 static
    address 2001:
    gateway 2001:
</code>
The VM netplan yaml configuration file for a static LAN IP address (under ''/etc/netplan/''):
<code>
network:
  version: 2
</code>
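The rest of this file follows the usual netplan layout; a minimal sketch for a VM with a static LAN address is below, with the interface name and addresses as placeholders:
<code>
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
</code>
Newer netplan versions prefer a ''routes:'' entry with ''to: default'' over ''gateway4'', which is now deprecated.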
Create a file ''br0.xml'':
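The br0.xml file holds a libvirt network definition that simply points guests at the host bridge; a typical minimal version, assuming the bridge is named ''br0'' as above, looks like:
<code>
<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
</code>
It can then be registered with ''virsh net-define br0.xml'', started with ''virsh net-start br0'' and set to start automatically with ''virsh net-autostart br0''.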