{{tag>linux router hardware}}

=====Router Hardware=====

(Jan 2023) For my router, including DNS (BIND9) and DHCP (ISC DHCP), I am using a Supermicro SYS-E200-9B, which comes with a Supermicro X11SBA-LN4F motherboard. I purchased this in 2016 and got it functional in 2017, while waiting for NFTables to support all required features on Ubuntu. The X11SBA-LN4F is an Intel Pentium N3700 system with 4 x Intel i210-AT GbE LAN. I got it with the maximum 8GB RAM and a 120GB mSATA drive. Sadly the mSATA drive was a Chinese-branded unit that failed after 3 years of operation. I replaced it with an old Samsung 256GB 860 SSD that I had on hand. I also took the opportunity to change the router from Ubuntu to Debian at this time. The N3700 CPU had reasonable performance at the time and includes AES instructions, which a number of common lower-priced options at the time did not, e.g. the J1900 CPU. The AES CPU instructions improve encryption performance significantly, handy for SSL / VPN. The unit is still performing well now, including the old Samsung SSD. I run the following software on it, all bare metal:

  * NFTables for firewall and routing
  * Bind9 for DNS
  * ISC DHCP for DHCP
  * WireGuard for remote access to my network

I would consider trying to set up a VM and Docker on this machine, however I suspect it may be underpowered for this. I would want Docker to be on a VM as I do not like the amount of IPTables configuration it does on its host. This would interfere with my NFTables router firewall configuration if on the same host. I looked at the various options for the router hardware, written in 2016.

++++tldr;|
  * A small ARM based machine, e.g. Raspberry Pi 3. (The current RPi looks much more capable.) However these machines are generally limited in a number of ways, including by definition not being x86 based.
Many do not have more than one NIC, and the NICs are often not full Gigabit. (To be fair this hardware may be sufficient in most cases, as most homes do not have better than 100Mb/s internet connections, and in general much slower.) The main upside is that they are small, low power and relatively cheap. Those with only one NIC need to be set up with USB NIC adapters, which further complicates setup, performance and reliability. Better spec'ed machines, e.g. with multiple gigabit NICs, start getting more pricey too. I suppose you get what you pay for....
  * The Raspberry Pi 4 & 5 look like a much better option than earlier versions for a home router. They still have the complexity of only 1 native NIC, but that is full 1GbE and there are 2 USB 3 ports to allow another full 1GbE NIC off USB.
  * An older x86 based machine. The main downside to these is poor power consumption and large size; even an old server tends to use more than 30W at the wall, or greater than $60/year in power. Also the board I had only had one built-in NIC, so I would need a PCIe NIC card. There is also the issue of reliability and performance with the older hardware, although it is probably good enough in this respect. That all being said, if one is strapped for cash this may be a good way to start, as the upfront cost would be smallest, if not zero.
  * At the moment, 2016, there are a lot of Intel Celeron J1900 based units with 4 NICs around. The J1900 is an older CPU, 4 cores, 2.0-2.42 GHz. Also in many cases the NIC hardware is older, particularly on the cheaper units, so care must be taken if you want to ensure more up to date hardware. These machines are a good option: low power (~8-10W), small size. They come with 2 SATA ports and mini PCIe slots. By the time you fit them out they cost around USD 250-350, with 4-8GB RAM and a 120GB mSATA drive. The cheaper options, as noted above, usually come with older NIC hardware and lower memory and drive size, and can be had at even lower prices.
  * I decided to get a Supermicro [[https://www.supermicro.com/products/system/Mini-ITX/SYS-E200-9B.cfm|SYS-E200-9B]], which comes with a Supermicro motherboard [[https://www.supermicro.com/products/motherboard/X11/X11SBA-LN4F.cfm|X11SBA-LN4F]], an Intel Pentium N3700 system with 4 x Intel i210-AT GbE LAN, from [[https://mitxpc.com/products/sys-e200-9b|Mitxpc]]. I got it with the maximum 8GB RAM and a 120GB mSATA drive. The N3700 CPU is more modern than the J1900 and includes AES instructions, which the J1900 does not have. The AES CPU instructions improve encryption performance significantly, handy for SSL / VPN. Otherwise the overall performance is similar (4 cores at 1.6-2.4GHz) and the power slightly lower than the J1900. (The Intel LAN controllers are also the more modern ones.) This unit also comes with a dedicated IPMI LAN port, allowing full remote KVM operation over the network. A downside of the IPMI is that it uses another 3.5W of power (1W of power 24/7 costs $2.19/year @ $0.25/kWh, so the 3.5W IPMI costs $7.67/yr extra for power over the main unit's 9W at $19.71/year). The upside is that the unit can be operated remotely off-site, with configuration options for auto on at power up and heart-beat with auto reset. (My home server is also a Supermicro based unit with a dedicated IPMI LAN port and has given me a good 5 years of service to date.) The downside is mainly the price, USD 490 + delivery; as these units are not sold locally I purchased in the USA and had it mailed at USD 75. In any case this hardware should allow for a router with great performance for some years to come. Again you get what you pay for..... Some 7 years later I am having problems with the BMC on this unit; it is very unreliable now and requires the entire computer to be reset, which in many respects defeats the purpose of having it. The main unit otherwise works, but it is now much more difficult to use headlessly.
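The power-cost figures above come from simple arithmetic; as a quick sketch (assuming 24/7 operation at the $0.25/kWh tariff used above):

```python
# Yearly electricity cost for a constant load, matching the figures above.
# Assumes 24/7 operation; tariff defaults to $0.25/kWh as used in the text.

HOURS_PER_YEAR = 24 * 365  # 8760 hours


def yearly_cost(watts, tariff_per_kwh=0.25):
    """Yearly cost in dollars for a device drawing `watts` continuously."""
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * tariff_per_kwh


for load in (1, 3.5, 9):
    print(f"{load} W -> ${yearly_cost(load):.2f}/year")
```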
The main unit is still good enough for my home internet, which can be provided at up to 1000Mb/s, however it is usually much lower than this upstream.... <fs smaller> I don't see the point installing a 64bit OS on systems with less than 4GB of RAM. A 32bit OS can only natively access up to 4GB RAM, but should give a better compromise with such limited RAM.</fs>
++++

====VM / Docker on Router====

===Progress===

As of 2023/01 I set up a VM manager (Libvirt/QEMU/KVM) on the router and loaded Docker on it. It is slow but does seem to work. Next:
  * ISC Kea DHCP in Docker (currently ISC DHCP on bare metal)
  * ISC Bind 9 in Docker (currently ISC Bind 9 on bare metal)
  * WireGuard VPN in Docker (currently WireGuard VPN on bare metal)

===Router key features===

  - Operate reliably 24 hours per day, 7 days a week
  - Low power operation, power costs money
  - Headless remote access, with separate BMC NIC (this could be integrated or an external KVM, e.g. [[https://pikvm.org/|PiKVM]])
  - Hardware suitable for purpose:
    - At least 2 NICs (1 WAN plus 1 or more LAN, quality native type NICs, not USB based), 4+ NICs preferable.
    - NICs to be 1Gb/s type minimum, although as of 2023 a 2.5Gb/s NIC would now be the minimum specification
    - Sufficient CPU power not to limit primary performance
    - Correct CPU options, e.g. AES, [[https://www.intel.com/content/www/us/en/virtualization/virtualization-technology/intel-virtualization-technology.html|virtualization]] (VT-x, and as of 2023 VT-d).
  - No graphical user interface environment installed (although individual applications could have a web interface)
  - Connectivity to upstream ISP-provided internet
  - Firewall
  - DNS
  - DHCP
  - VPN for use as a secure gateway to allow private access from the public internet

The following key services define the router:
  * network services (bare metal)
  * ISP internet connectivity (bare metal)
  * main firewall (bare metal)
  * DNS
  * DHCP
  * VPN (for secure public access to the LAN)

===Assumptions and Limitations===

  * Low power means low CPU resources, hence care with applications that require significant or otherwise unnecessary resources.
  * Some services on bare metal to ensure reliable performance.
  * This machine is much slower than usual hardware, and this is noticeable in interface usage, even non-graphical.
  * The network and related services must NOT limit performance of the upstream IP connectivity below 100Mb/s, and preferably should only limit it as speed gets close to the NICs' 1Gb/s hardware speed. (At the moment my internet connection is via VDSL and is limited to about 65Mb/s down and 16Mb/s up, and this hardware and setup seem to be performing well.)

Docker really does some work on the firewall using iptables. For this reason I decided to set up a virtual machine (VM) environment, Linux QEMU/KVM/Libvirt based. VMs seem to impact the firewall / network setup less adversely than Docker. The use of the VM isolates the Docker firewall machinations from the bare metal.

===Why not Proxmox===
++++tldr;|
  * I have not used it to date, that is, I have no experience with Proxmox
  * I already have a lot of experience running Debian and libvirt/qemu/kvm, which is what Proxmox seems to be built on
  * Proxmox seems to need to be installed on bare metal. I am not so sure this would work well with my bare metal firewall feature requirements
++++

====Specific issues with use of headless X11SBA-LN4F hardware====

++++IPMI KVM Display Problems|
====IPMI KVM Display Problems====
Acronyms can be painful.
IPMI = Intelligent Platform Management Interface, KVM = Keyboard, Video and Mouse, BMC = Baseboard Management Controller. The remote KVM and IPMI/BMC are not used often, however they negate the need for separate keyboards and monitors to set up and maintain these machines and allow truly convenient headless setup, maintenance and operation. Normally an SSH terminal is all that is required, however a BMC with KVM allows full on/off/reset control and remote access to GRUB and the terminal, which SSH does not provide until after the base machine is running correctly. The Pentium N3700 comes with a built-in graphics adaptor. On the headless BMC system the built-in graphics adaptor is not required and can interfere with the BMC graphics adaptor. The best solution is to turn off the Intel integrated graphics device (IGD), which is enabled by default. The graphics then defaults to the BMC adaptor. The IGD can be turned off from the BIOS motherboard options (in this case under Advanced - Chipset Configuration - North Bridge - Intel IGD Configuration). The terminal also seems to default to 1024x768 resolution, so no additional work is required for this. The Ubuntu 18.04 Server loader also had a problem with existing drive partitions, so I needed to manually remove all existing partitions using fdisk, from the 18.04 install terminal. My home server, already in service over 5 years (as of 2017), has a Supermicro motherboard with an Intel Atom C2750 CPU [[https://www.supermicro.com/products/motherboard/Atom/X10/A1SAi-2750F.cfm|A1SAi-2750F]], also with IPMI, BMC & KVM, and did not display this problem. This makes sense as the Atom C2750 CPU does not have internal graphics capability, so the only graphics capability was on the BMC video controller. The Ubuntu drivers defaulted to this basic BMC graphics display system. (This is now my backup server; my main server is a new machine with a newer Supermicro motherboard with an Intel Atom C3000 series CPU, also the 8 core version.
It was hard to justify the extra cost for the 12 or 16 core versions and I had no other hardware for the 10Gb/s Ethernet option.) The link to the 8 core Supermicro motherboard with embedded 4 x 1GbE LAN: [[https://www.supermicro.com/products/motherboard/atom/A2SDi-8C_-HLN4F.cfm|A2SDi-8C+-HLN4F]]. This server is now running as my primary.++++

++++Forcing Display option at boot in Ubuntu|
====Forcing Display option at boot in Ubuntu====
**Note this method did not work in Ubuntu 18.04 amd64 server edition**

Basically, after setting up Ubuntu 16.04 amd64 server edition on the router hardware I noticed a problem with the IPMI KVM terminal display. During the Ubuntu start-up the KVM screen would just go blank. However, logging into an SSH session on the main board NIC worked normally. After a bit of head scratching and investigation I worked out the problem to be related to the design of the Intel N3700, whose built-in graphics processor was conflicting with the BMC graphics processor built into the motherboard, a Supermicro [[https://www.supermicro.com/products/motherboard/X11/X11SBA-LN4F.cfm|X11SBA-LN4F]] in the also-Supermicro [[http://www.mitxpc.com/proddetail.php?prod=SYS-E200-9B|SYS-E200-9B]]. So the solution is to ensure that Ubuntu does not load any "special" main board (Pentium N3700) CPU graphics drivers. For Debian and Ubuntu this is done by setting the "nomodeset" option in the grub bootloader. This can be done by editing the grub bootloader during boot up, a one-off solution, or made permanent by editing the grub configuration file. reliablesite.net gives a good explanation in their article [[http://support.reliablesite.net/kb/a240/how-to-set-nomodeset-into-the-grub-bootloader-debian-and-ubuntu-intel-core-i7-3770.aspx|How to set 'nomodeset' into the grub bootloader]]. At the grub menu use the arrow keys to select the default option, the 1st line. Press the 'e' key to edit. Add the 'nomodeset' option to the end of the line starting with 'linux'.
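As an illustration, the edited 'linux' line might end up looking roughly like the sketch below (the kernel image and root device shown are placeholders, not values from this machine):

```shell
# Grub boot entry after pressing 'e' at the menu; the only real change is
# appending 'nomodeset'. The kernel path and root= value are illustrative.
linux /boot/vmlinuz-... root=UUID=... ro quiet splash nomodeset
```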
Hit 'F10' to proceed with the modified boot. For the permanent solution, edit /etc/default/grub, adding nomodeset such that GRUB_CMDLINE_LINUX_DEFAULT="nomodeset", and then execute "sudo update-grub". Note that the "quiet splash" options in GRUB_CMDLINE_LINUX_DEFAULT should be removed to allow all boot information to be seen.++++

++++Controlling BMC Terminal Resolution in Ubuntu|
====Controlling BMC Terminal Resolution in Ubuntu====
**Note this method was not tested in Ubuntu 18.04 amd64 server edition**

The BMC terminal screen seems to default to 640x480 resolution. To improve this, consider the following. Add 'GRUB_GFXPAYLOAD_LINUX=1024x768' to the /etc/default/grub file. There are a number of other possible options; the default 640x480 and 800x600 are too small, and the 1280x1024 and 1600x1200 options too big. To check the options, at the grub menu type 'c' and input the command 'vbeinfo' to list the available grub video modes. You can also specify colour depth, e.g. 1024x768x24, but if this is not correct it totally ignores your parameter and goes to the default. As I did not care about colour depth, I just used the resolution only, which seems more reliable. You can similarly increase the Grub menu screen resolution by adding 'GRUB_GFXMODE=1024x768' to the /etc/default/grub file. This helps make use of the grub menu easier. Always run 'sudo update-grub' to make the modified grub file the current boot one. An on-line reference: [[https://help.ubuntu.com/community/ChangeTTYResolution|ChangeTTYResolution]].++++

++++Router Ethernet Hardware Consideration|
====Router Ethernet Hardware Consideration====
The X11SBA-LN4F hardware comes with 4 dedicated NIC controllers. NIC0 is on a dedicated PCIe lane, whereas NIC1 to 3 use a multiplexer to share another PCIe lane.
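As a sketch of how the three multiplexed NICs can be joined into one LAN segment on Debian, via /etc/network/interfaces (the interface names and address below are assumptions for illustration, not taken from this machine):

```shell
# /etc/network/interfaces fragment: br0 bridges the three shared-lane NICs
# into a single LAN. Requires the bridge-utils package; the interface
# names and the address are placeholders, not this machine's values.
auto br0
iface br0 inet static
    bridge_ports enp2s0 enp3s0 enp4s0
    address 192.168.1.1
    netmask 255.255.255.0
```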
The PCIe lane with the 3 shared NIC controllers has enough bandwidth to handle the maximum combined throughput of the 3 NICs, however the multiplexer does add a minor processing delay, although it is better than an additional external switch. I suspect this probably does not have a significant effect on final performance. I plan to dedicate NIC0 to the WAN and bridge NICs 1-3 for the LAN. The bridged LAN network will also be used for the main server and its VMs, with dedicated IP addresses on the LAN. The main NFTables based router will run on bare metal and a number of VMs will be used for DNS, DHCP, VPN and logging.++++

<- linux_router:background|Prev page ^ linux_router:start|Start page ^ linux_router:ubuntu|Next page ->