{{tag>linux docker VM Proxmox server vnc kvm libvirt}}
======Docker Host======
=====KVM versus Proxmox=====
I originally started using Linux KVM based VMs, with QEMU and libvirt on Ubuntu bare metal, circa 2014, before I was aware that Proxmox existed.  Around 2020 I moved to Debian as my preferred bare metal distribution for server and desktop, and I stopped using Windows as my main home desktop around the same time. When I started playing around with Docker to create my own container images I preferred to use the Alpine distribution and, where necessary, the S6-rc init system.  I avoid the use of Ubuntu now as I just find some of their practices unpalatable, e.g. the forced use of Snaps and requiring registration for the latest package updates. Also, their base server and desktop distributions came across as bloated around the time I stopped using them as my main distribution.  I see no point re-engaging with Ubuntu at this time, as the sources of dissatisfaction that made me move are still there. Besides, I simply prefer Debian now: Stable for servers and Testing for my main desktop.  I have been happily using XFCE for about 7 years now as my main Linux desktop GUI (2024-04).
  
Interestingly, as I understand it, Proxmox uses Debian and Linux KVM VMs; however, they also provide a lot of additional functionality, such as a nice web interface, VM backups, an LXC container system, and more.
I may try Proxmox in the future, but there is currently no compelling reason for me to do so.
  
My current router has an Intel N3700 CPU, 8GB RAM maximum, procured in 2016, which in 2024 is becoming slow to use, but it still functions well as a sub-Gb/s router.  I also run a VM with Docker containers for a backup Bind9 DNS and a backup Kea DHCP on this machine.  As my current (WAN) internet speed is about 265Mb/s down and 23Mb/s up, this router is still suitable for purpose. I suspect it will not be limiting until the available WAN speeds are above 1000Mb/s. ++tl;dr|<fs small>(I am currently, 2023/2024, eyeing an Intel i5-1335U or N305 as a possible replacement; either is much faster overall and should easily handle multi Gb/s internet traffic, as well as more complex, resource intensive Docker instances.)</fs>  Sadly the N3700 AS2400 BMS seems to be unreliable now. I can only log in to the BMS after a long shutdown and the machine often fails to reboot reliably.  Due to the age of the hardware it is not worth the cost to repair, so I will need to get a replacement.++
  
My main home server is based upon an Intel Atom C3750 server, which is still adequately meeting my needs. I have upgraded it with a 2.5Gb/s PCIe card.  ++tl;dr|I have not been able to find a good replacement for this machine at this time.  It was designed as a server; again, an i5-1335U is in many ways superior (CPU cores and threads, CPU and memory speed and bandwidth), however its memory is not ECC and is limited to 64GB, neither of which is likely a problem for me, as I am currently only using 32GB. Power consumption is similar. The biggest problem is that I have not to date been able to find an i5-1335U motherboard with 4+ SATA ports and a PCIe expansion slot; most are laptop boards, router boards or industrial embedded type boards that do not have the functionality I am after.++  I also still operate an older Intel Atom C2750 as a back-up server. This gets started by the main server once a week to run a Restic back-up with a Python script I wrote.
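
Roughly, the weekly back-up run looks like the following bash sketch (this is not the actual Python script; the MAC address, host name, repository path and retention policy are placeholder values):

<code bash># Wake the back-up server via Wake-on-LAN (placeholder MAC address)
wakeonlan aa:bb:cc:dd:ee:ff
sleep 120   # give it time to boot

# Back up to a restic repository on the back-up server over sftp
# (placeholder host, repository path and password file)
export RESTIC_PASSWORD_FILE=/root/.restic-password
restic -r sftp:backup.lan:/srv/restic-repo backup /srv/data
restic -r sftp:backup.lan:/srv/restic-repo forget --keep-weekly 8 --prune

# Shut the back-up server down again when done
ssh backup.lan sudo poweroff</code>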
  
  
I use Linux KVM with libvirt, virsh and qemu.
  * Install standard Debian files. See [[https://wiki.kptree.net/doku.php?id=home_server:home_server_setup:kvm&s[]=libvirt#kvm_setup|kvm setup]]
    * I simply do not normally need a GUI. Where convenient I may separately install a GUI that can be accessed via VNC.  I often install one on my main VM host, but not on the router host.
  * Add your user to the ''libvirt'' and ''libvirt-qemu'' groups, e.g. ''sudo usermod -a -G libvirt,libvirt-qemu baumkp''
  * If you are ssh'ing into the host machine remember to add the ssh key to allow password-less login, e.g. run ''ssh-copy-id 192.168.1.21'' from your local machine, where ''192.168.1.21'' is the remote host.  If you do not do this the VM installer can ask for the password continuously, to the point of being unusable (see the remote ''virsh'' example after the network configuration below).
  * Set a static IP address and a bridge network (this varies with the install type)
    * For networkd:
      * ''sudo apt install bridge-utils'' KVM commonly uses a bridge network connection so guests can share the host's LAN connection
        * The KVM virtual machine does not necessarily need a bridge network, but it does usually need a static IP address
        * Docker itself does not specifically require the host to have a bridge network
      * ''/etc/network/interfaces''
++++ Example /etc/network/interfaces |
<code bash>source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# Physical NIC attached to the bridge; enp1s0 is a placeholder,
# use the actual interface name shown by: ip link
auto enp1s0
iface enp1s0 inet manual

# Bridge used by the KVM guests
auto br0
iface br0 inet static
  bridge_ports enp1s0
  address 192.168.1.2/24
  # do not set a gateway on a router
  gateway 192.168.1.1
  # your LAN DNS server(s)
  dns-nameservers 192.168.1.14 192.168.1.2
  # disable Spanning Tree Protocol
  bridge_stp off</code>
++++
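
With the bridge up, a quick sanity check and a typical way to attach a VM to it (a rough sketch only; the bridge name ''br0'', user ''baumkp'', host ''192.168.1.21'' and the guest details are just example values):

<code bash># Apply the new configuration and confirm the bridge exists
sudo systemctl restart networking
ip -br addr show br0
bridge link show

# Remote libvirt access over ssh - this is where the ssh key matters,
# otherwise every connection prompts for a password
virsh -c qemu+ssh://baumkp@192.168.1.21/system list --all

# Example: create a Debian guest attached to the bridge
virt-install --name testvm --memory 2048 --vcpus 2 \
  --disk size=20 --os-variant debian12 \
  --location https://deb.debian.org/debian/dists/stable/main/installer-amd64/ \
  --network bridge=br0 \
  --graphics none --extra-args "console=ttyS0"</code>

''virt-install'' is just one option here; virt-manager pointed at the same ''qemu+ssh'' URI works too.
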
====Reference====
  *[[https://linuxconfig.org/how-to-use-bridged-networking-with-libvirt-and-kvm|How to use bridged networking with libvirt and KVM]]
  
<- docker_notes:index|Back ^ docker_notes:index|Start page ^ docker_notes:docker|Next ->