{{tag>linux router hardware}}
======Router Hardware======
  
=====ikoolcore-r2-max=====
(Dec 2024) The Supermicro SYS-E200-9B has stopped working.  It posts the BIOS, but will not boot further.  I suspect a hardware failure of some sort.  The BMC failed a few years ago.  I have ordered a replacement, the [[https://www.ikoolcore.com/products/ikoolcore-r2-max|ikoolcore-r2-max]].  The replacement comes with 2.5Gb/s and 10Gb/s NICs and a more modern and faster 8 core [[https://www.intel.com/content/www/us/en/products/sku/231805/intel-core-i3n305-processor-6m-cache-up-to-3-80-ghz/specifications.html|i3-N305 CPU]] that should easily handle home router services up to 10Gb/s, and certainly to 2.5Gb/s.  The [[https://www.marvell.com/products/ethernet-adapters-and-controllers/fastlinq-edge-ethernet-controllers.html|Marvell AQC113C-B1-C 10Gb/s NICs]] on this machine are RJ45 based and have full connectivity at all the normal RJ45 speeds (10, 5, 2.5 and 1Gb/s, and 100 and 10Mb/s).
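Once the unit is in service, a quick way to confirm what each NIC has actually negotiated is ''ethtool''. A minimal sketch, assuming a hypothetical interface name ''enp2s0'' (substitute the real name reported by ''ip -br link''):

<code bash>
# List interfaces, then check what one of them has negotiated
# (the name enp2s0 is a hypothetical placeholder)
ip -br link
sudo ethtool enp2s0 | grep -E 'Speed|Duplex|Link detected'
# The full output's "Supported link modes" section lists the RJ45
# rates the NIC can negotiate (10/5/2.5/1Gb/s, 100Mb/s, ...)
sudo ethtool enp2s0
</code>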
++++ikoolcore-r2-max specifications|
  *Processor: Intel Alder Lake-N i3-N305 (Also N100 option, standard without system fans)
  *Memory: 1 x SO-DIMM DDR5 4800MHz, 32GB (SAMSUNG).
  *Ethernet Ports: 2 x Marvell AQC113C-B1-C 10Gbps network cards (via PCIe 3.0 x 2), 2 x Intel i226-V 2.5G network cards (via PCIe 3.0 x 1)
For more information and FAQs, please visit [[https://wiki.ikoolcore.com|wiki.ikoolcore.com]]. ++++
  
=====Old Router Hardware=====
++++old hardware tldr;|
With the X11SBA-LN4F finally failing about 8 years after purchase (2016) and 7 years after being placed into operation, I am honestly disappointed in its reliability.  The BMC failed about 3-4 years before the main machine did.  The limitations of the machine were becoming apparent: it was slow, but low powered.  If it had not failed I probably would have been able to continue to use it as my router for a few more years.  Its limited performance means it is not worth the trouble to try to repair.

====X11SBA-LN4F====
For my router, including DNS (BIND9) and DHCP (ISC DHCP), I was using a Supermicro SYS-E200-9B that comes with a Supermicro X11SBA-LN4F motherboard. I purchased this in 2016 and got it functional in 2017, whilst waiting for NFTables to support all required features on Ubuntu.  The X11SBA-LN4F is an Intel Pentium N3700 system with 4 x Intel i210-AT GbE LAN. I got it with the maximum 8GB RAM and a 120GB mSATA HD.  Sadly the mSATA HD was a Chinese branded unit that failed after 3 years of operation. I replaced it with an old Samsung 256GB 860 SSD that I had on hand. I also took the opportunity to change the router from Ubuntu to Debian at this time. The N3700 CPU had reasonable performance at the time and includes AES instructions, which a number of common lower priced options at the time did not, e.g. the J1900 CPU. The AES CPU instructions help improve encryption performance significantly, handy for SSL / VPN.  The unit performed well right up to its failure, including the 10 year old Samsung SSD.  I ran the following software on it, all bare metal:
  * NFtables for firewall and routing
  *I decided to get a Supermicro [[https://www.supermicro.com/products/system/Mini-ITX/SYS-E200-9B.cfm|SYS-E200-9B]] that comes with a Supermicro motherboard [[https://www.supermicro.com/products/motherboard/X11/X11SBA-LN4F.cfm|X11SBA-LN4F]], an Intel Pentium N3700 system with 4 x Intel i210-AT GbE LAN, from [[https://mitxpc.com/products/sys-e200-9b|Mitxpc]]. I got it with the maximum 8GB RAM and a 120GB mSATA HD. The N3700 CPU is more modern than the J1900 and includes AES instructions that the J1900 does not have. The AES CPU instructions help improve encryption performance significantly, handy for SSL / VPN. Otherwise the overall performance is similar (4 cores at 1.6-2.4GHz) and the power draw slightly lower than the J1900. (The Intel LAN controllers are also the more modern ones.) This unit also comes with a dedicated IPMI LAN port, allowing full remote KVM operation on the network. A downside of the IPMI is that it uses another 3.5W of power (1W of power 24/7 costs $2.19/year @ $0.25/kWhr, so the 3.5W IPMI costs $7.67/yr for power on top of the main unit's 9W at $19.71/year; see the worked cost calculation after this item). The upside is that the unit can be operated remotely off-site, with configuration options for auto power-on at power up and a heartbeat with auto reset. (My home server is also a Supermicro based unit with a dedicated IPMI LAN port and has given me a good 5 years of service to date.) The other downside is mainly the price, USD490 + delivery; as these units are not sold locally I purchased in the USA and had it mailed at USD75.
In any case this hardware should allow for a router with great performance for some years to come. Again, you get what you pay for... So some 7 years later I am having problems with the BMC on this unit; it is very unreliable now and requires the entire computer to be reset, which in many aspects defeats the purpose of having it. The main unit otherwise works, but it is now much more difficult to use headlessly. The main unit is still good enough for my home internet, which can be provided at up to 1000Mb/s, however it is usually much lower than this upstream...
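The power cost figures quoted above come from a simple watts to kilowatt-hours conversion. A minimal worked sketch (using the $0.25/kWh rate assumed above):

<code bash>
# Annual electricity cost: watts/1000 * 24 h * 365 days * $/kWh
for w in 1 3.5 9; do
  awk -v w="$w" 'BEGIN { printf "%4.1f W -> $%.2f/year\n", w, w/1000*24*365*0.25 }'
done
# which gives roughly $2.19, $7.67 and $19.71 per year - the figures used above
</code>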
  
<fs smaller>I don't see the point of installing a 64-bit OS on systems with less than 4GB of RAM. A 32-bit OS can only natively access up to 4GB of RAM, but should be a better compromise with such limited RAM.</fs>

====Specific issues with use of headless X11SBA-LN4F hardware====

====IPMI KVM Display Problems====
Acronyms can be painful: IPMI = Intelligent Platform Management Interface, KVM = Keyboard, Video and Mouse, BMC = Baseboard Management Controller.

The remote KVM, IPMI and BMC are not used often; however, they negate the need for a separate keyboard and monitor to set up and maintain these machines and allow truly convenient headless setup, maintenance and operation. Normally an SSH terminal is all that is required, however a BMC with KVM allows full on/off/reset control and remote access to GRUB and the console, which SSH does not provide until the base machine is running correctly.
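As a minimal sketch of what that remote control looks like in practice: the graphical KVM itself is reached through the BMC web interface, while basic power and console control can be scripted with ''ipmitool'' (the BMC address and credentials below are hypothetical placeholders):

<code bash>
# Hypothetical BMC address and credentials - substitute real values
BMC=192.168.1.50
IPMI="ipmitool -I lanplus -H $BMC -U admin -P changeme"
$IPMI chassis power status    # query power state
$IPMI chassis power reset     # hard reset the main board
$IPMI sol activate            # attach the serial-over-LAN console (needs BIOS console redirection)
</code>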

The Pentium N3700 comes with a built-in graphics adaptor. On a headless BMC system the built-in graphics adapter is not required and can interfere with the BMC graphics adapter. The best solution is to turn off the Intel integrated graphics device (IGD), which is enabled by default; graphics then defaults to the BMC adaptor. The IGD can be turned off from the BIOS motherboard options (in this case under Advanced - Chipset Configuration - North Bridge - Intel IGD Configuration). The terminal also seems to default to 1024x768 resolution, so no additional work is required for this. The 18.04 Server installer also had a problem with existing drive partitions, so I needed to manually remove all existing partitions using fdisk from the 18.04 install terminal.
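For the partition clean-up, a minimal sketch from the installer shell (the disk name ''/dev/sda'' is an assumption; check with ''lsblk'' first, and note this destroys everything on the disk):

<code bash>
lsblk                         # identify the target disk first
sudo wipefs --all /dev/sda    # remove all partition table / filesystem signatures on /dev/sda
# or interactively: sudo fdisk /dev/sda, then 'd' for each partition and 'w' to write
</code>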

My home server, already in service over 5 years (as of 2017), has a Supermicro motherboard with an Intel Atom C2750 CPU, the [[https://www.supermicro.com/products/motherboard/Atom/X10/A1SAi-2750F.cfm|A1SAi-2750F]], also with IPMI, BMC & KVM, and it did not display this problem. This makes sense, as the Atom C2750 CPU does not have internal graphics capacity, so the only graphics capacity was on the BMC video controller. The Ubuntu drivers defaulted to this basic BMC graphics display system. (This is now my backup server; my main server is the newer unit described below.)

I now have a new server with a newer Supermicro motherboard with an Intel Atom C3000 series CPU, again the 8 core version. (It was hard to justify the extra cost of the 12 or 16 core versions, and I had no other hardware for the 10Gb/s Ethernet option.) The 8 core Supermicro motherboard with embedded 4 x 1GbE LAN is the [[https://www.supermicro.com/products/motherboard/atom/A2SDi-8C_-HLN4F.cfm|A2SDi-8C+-HLN4F]]. This server is now running as my primary.++++
++++Forcing Display option at boot in Ubuntu|
====Forcing Display option at boot in Ubuntu====
**Note: this method did not work in Ubuntu 18.04 amd64 server edition**

Basically, after setting up Ubuntu 16.04 amd64 server edition on the router hardware I noticed a problem with the IPMI KVM terminal display. During the Ubuntu start-up the KVM screen would just go blank, however logging into an SSH session on the main board NIC was working normally. After a bit of head scratching and investigation I worked out the problem to be the Intel N3700's built-in graphics processor conflicting with the BMC graphics processor built into the motherboard, a Supermicro [[https://www.supermicro.com/products/motherboard/X11/X11SBA-LN4F.cfm|X11SBA-LN4F]] in the (also Supermicro) [[http://www.mitxpc.com/proddetail.php?prod=SYS-E200-9B|SYS-E200-9B]].

So the solution is to ensure that Ubuntu does not load any "special" main board (Pentium N3700) CPU graphics drivers. For Debian and Ubuntu this is done by setting the "nomodeset" option in the grub bootloader. This can be done by editing the grub bootloader entry during boot-up (a one-off solution) or made permanent by editing the grub configuration file. reliablesite.net gives a good explanation in their article [[http://support.reliablesite.net/kb/a240/how-to-set-nomodeset-into-the-grub-bootloader-debian-and-ubuntu-intel-core-i7-3770.aspx|How to set 'nomodeset' into the grub bootloader]]. At the grub menu use the arrow keys to select the default option (the 1st line) and press the 'e' key to edit. Add the 'nomodeset' option to the end of the line starting with 'linux'. Hit 'F10' to proceed with the modified boot. For the permanent solution edit /etc/default/grub, adding nomodeset such that GRUB_CMDLINE_LINUX_DEFAULT="nomodeset", and then execute "sudo update-grub". Note that the "quiet splash" options in GRUB_CMDLINE_LINUX_DEFAULT should be removed to allow all boot information to be seen.++++
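As a quick reference, a minimal sketch of the permanent change described above (same file and command; check the existing contents of /etc/default/grub before changing anything):

<code bash>
# Relevant line in /etc/default/grub ('quiet splash' removed so boot messages are visible)
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"
# Regenerate the active grub configuration afterwards
sudo update-grub
</code>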
++++Controlling BMC Terminal Resolution in Ubuntu|
====Controlling BMC Terminal Resolution in Ubuntu====
**Note: this method was not tested in Ubuntu 18.04 amd64 server edition**

The BMC terminal screen seems to default to 640x480 resolution. To improve this, consider the following. Add 'GRUB_GFXPAYLOAD_LINUX=1024x768' to the /etc/default/grub file. There are a number of other possible options; the default 640x480 and 800x600 are too small and the 1280x1024 and 1600x1200 options too big. To check the options, at the grub menu type 'c' and enter the command 'vbeinfo' to list the available grub video modes. You can also specify colour depth, e.g. 1024x768x24, but if this is not correct grub totally ignores your parameter and falls back to the default. As I did not care about colour depth, I used the resolution only, which seems more reliable. You can also similarly increase the grub menu screen size by adding 'GRUB_GFXMODE=1024x768' to the /etc/default/grub file, which makes the grub menu easier to use. Always run 'sudo update-grub' to make the modified grub file the current boot one. An on-line reference: [[https://help.ubuntu.com/community/ChangeTTYResolution|ChangeTTYResolution]].++++
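Pulling those settings together, a minimal sketch of the resolution-related lines (values as chosen above):

<code bash>
# Resolution-related lines in /etc/default/grub
GRUB_GFXMODE=1024x768            # grub menu resolution
GRUB_GFXPAYLOAD_LINUX=1024x768   # resolution handed over to the Linux console
# Apply the change
sudo update-grub
# Available modes can be listed from the grub menu: press 'c', then run 'vbeinfo'
</code>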
++++Router Ethernet Hardware Consideration|
====Router Ethernet Hardware Consideration====
The X11SBA-LN4F hardware comes with 4 dedicated NIC controllers. NIC0 is on a dedicated PCIe lane, whereas NIC1 to NIC3 use a multiplexer to share another PCIe lane. The PCIe lane with the 3 shared NIC controllers has enough bandwidth to handle the maximum combined throughput of the 3 NICs, however the multiplexer does add a minor processing delay, although it is still better than an additional external switch.  I suspect this probably does not have a significant effect on final performance.

I plan to dedicate NIC0 to the WAN and bridge NICs 1-3 for the LAN. The bridged LAN network will also be used for the main server and its VMs, with dedicated IP addresses on the LAN. The main NFTables based router will run on bare metal, with a number of VMs used for DNS, DHCP, VPN and logging.++++
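A minimal sketch of that bridge on Debian with ifupdown (interface names and addresses are assumptions, not the real ones; the bridge_ports stanza needs the bridge-utils package):

<code bash>
# /etc/network/interfaces - hypothetical names: enp1s0 = WAN (NIC0),
# enp2s0/enp3s0/enp4s0 = LAN (NIC1-3)
auto enp1s0
iface enp1s0 inet dhcp            # WAN, addressed by the ISP

auto br0
iface br0 inet static             # LAN bridge across the other three NICs
    address 192.168.1.1/24
    bridge_ports enp2s0 enp3s0 enp4s0
    bridge_stp off
    bridge_fd 0
</code>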
  
=====VM / Docker on Router=====
===Progress===
As of 2023/01 I set up a VM manager (Libvirt/qemu/KVM) on the router and loaded Docker on it.  It is slow but does seem to work.
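A minimal sketch of that setup on Debian (standard Debian package names; creating the actual guest is omitted here):

<code bash>
# Install the QEMU/KVM/libvirt stack and a CLI guest installer
sudo apt install qemu-system-x86 libvirt-daemon-system libvirt-clients virtinst
sudo adduser "$USER" libvirt      # allow VM management without root (re-login needed)
virsh list --all                  # confirm libvirt is answering
# Docker then gets installed inside a guest VM, not on the router itself
</code>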
  - Hardware suitable for purpose:
    - At least 2 NICs (1 WAN plus 1 or more LAN, quality native type NICs, not USB based), 4+ NICs preferable.
    - NICs to be 1Gb/s type minimum, although as of 2023 a 2.5Gb/s NIC would now be the minimum specification
    - Sufficient CPU power not to limit primary performance
    - Correct CPU options, e.g. AES, [[https://www.intel.com/content/www/us/en/virtualization/virtualization-technology/intel-virtualization-technology.html|virtualization]] (VT-x, and as of 2023 VT-d). A quick check for these CPU flags is sketched after this list.
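A minimal sketch of checking for those CPU features from Linux (the flag names are the standard /proc/cpuinfo ones; VT-d additionally depends on motherboard/BIOS support):

<code bash>
# aes = AES-NI, vmx = Intel VT-x (svm would be the AMD equivalent)
grep -m1 -o -E 'aes|vmx|svm' /proc/cpuinfo | sort -u
lscpu | grep -i virtualization      # reports VT-x / AMD-V if present
# VT-d (IOMMU) only shows up once enabled in the BIOS:
sudo dmesg | grep -i -e DMAR -e IOMMU
</code>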
  
===Assumptions and Limitations===
  *Low power means lower CPU resources, hence take care with applications that require significant or otherwise unnecessary resources.
  *Some services on bare metal to ensure reliable performance
  *This machine is much slower than typical hardware, and this is noticeable in interface usage, even non-graphical.
  *The network and related services must NOT limit performance on upstream IP connectivity of greater than 100Mb/s, and should preferably only become a limit as speeds get close to the NICs' 1Gb/s hardware speed.  (At the moment my internet connection is via fibre and can provide up to about 1000Mb/s down and up, although the plan I am on is limited to 250Mb/s down and 20Mb/s up, and this hardware and setup seem to be performing well. Up until March 2024 my internet connection was via VDSL and was limited to about 65Mb/s down and 16Mb/s up.) A simple throughput check through the router is sketched below.
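To confirm the router itself is not the bottleneck, a LAN throughput test through it is useful. A minimal sketch with iperf3 (host addresses are hypothetical; run the server on one side of the router and the client on the other):

<code bash>
# On a host on one side of the router:
iperf3 -s
# On a host on the other side (192.168.1.10 is the hypothetical server address):
iperf3 -c 192.168.1.10 -t 30        # 30 second test forwarded through the router
iperf3 -c 192.168.1.10 -t 30 -R     # same test in the reverse direction
</code>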
  
Docker really does do some work on the firewall using iptables.  For this reason I decided to set up a virtual machine (VM) environment, Linux QEMU/KVM/Libvirt based. VMs seem to impact the firewall / network setup less adversely than Docker. The use of the VM isolates the Docker firewall machinations from the bare metal.
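The extent of Docker's firewall meddling is easy to see on any machine where it runs natively; a minimal sketch of how to inspect it (read-only commands):

<code bash>
# Chains Docker creates and manages in iptables (DOCKER, DOCKER-USER, ...)
sudo iptables -L -n | grep -i docker
sudo iptables -t nat -L -n | grep -i docker
# On the nftables based router itself, check the ruleset still reads as written:
sudo nft list ruleset
</code>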