{{tag>server kvm virtual command debian machine qemu virsh qcow linux setup nbd 'network block device'}}

======KVM Setup======

I originally set up my main server and virtual machines all with Ubuntu. However, I have started using Debian and find it leaner than Ubuntu. I am slowly moving my various servers and virtual machines to Debian.

*[[https://www.ostechnix.com/install-and-configure-kvm-in-ubuntu-20-04-headless-server/|Install And Configure KVM In Ubuntu 20.04 Headless Server]]
*[[https://linuxhint.com/install_kvm_debian_10/|Installing KVM on Debian 10]]
*[[https://computingforgeeks.com/how-to-install-kvm-virtualization-on-debian/|How To Install KVM Hypervisor on Debian 11|10]]
*[[https://www.cyberciti.biz/faq/install-kvm-server-debian-linux-9-headless-server/|How to install KVM server on Debian 9/10 Headless Server]]

Basically, to install the KVM hypervisor:

''sudo apt install qemu-kvm qemu-system qemu-utils libvirt-clients libvirt-daemon-system virtinst bridge-utils''

bridge-utils is optional. Other useful packages:
*''genisoimage'' is a package to create ISO images
*''libguestfs-tools'' is a library to access and modify VM disk images
*''libosinfo-bin'' is a library with guest operating system information to assist VM creation

Use the built-in clone facility: ''sudo virt-clone %%--%%connect=qemu:%%//%%example.com/system -o this-vm -n that-vm %%--%%auto-clone''. This makes a copy of this-vm, named that-vm, and takes care of duplicating storage devices. To list all defined virtual machines: ''virsh list %%--%%all''.

To dump a virtual machine XML definition to a file: ''virsh dumpxml {vir_machine} > /dir/file.xml''. Modify the following XML tags:
*The VM name: be careful not to use an existing VM name, whether running, paused or stopped.
*The UUID (e.g. 869d5df0-13fa-47a0-a11e-a728ae65c86d): use ''uuidgen'' to generate a new unique UUID.
*The VM source file: change the disk image path as required.
*The network MAC address: it cannot be the same as another machine's on the same local network.
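As a concrete sketch, the edits above can be scripted. Everything here is illustrative: the names this-vm/that-vm, the disk path, and the cut-down sample XML are placeholders, not a real domain definition. In practice the starting file would come from ''virsh dumpxml''.

```shell
# Placeholder domain XML; in real use: virsh dumpxml this-vm > /tmp/that-vm.xml
cat > /tmp/that-vm.xml <<'EOF'
<domain type='kvm'>
  <name>this-vm</name>
  <uuid>869d5df0-13fa-47a0-a11e-a728ae65c86d</uuid>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/this-vm.qcow2'/>
    </disk>
  </devices>
</domain>
EOF

# Generate a fresh UUID (fall back to the kernel if uuidgen is absent)
NEW_UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)

sed -i "s|<name>this-vm</name>|<name>that-vm</name>|" /tmp/that-vm.xml  # new VM name
sed -i "s|<uuid>.*</uuid>|<uuid>${NEW_UUID}</uuid>|"  /tmp/that-vm.xml  # new unique UUID
sed -i "s|this-vm.qcow2|that-vm.qcow2|"               /tmp/that-vm.xml  # new disk path

# Then register the edited copy: sudo virsh define /tmp/that-vm.xml
```

The MAC address line (not shown in this cut-down XML) can simply be deleted; libvirt generates a fresh one on define.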
To convert the XML file back to a virtual machine definition: ''sudo virsh define /path_to/name_VM_xml_file.xml''. The VM XML file can be edited directly using the command ''virsh edit VM_name''.

To get virtual image disk information: ''sudo qemu-img info /path_to/name_VM_file.img''

A compacted qcow2 image can be created using the following command: ''sudo qemu-img convert -O qcow2 -c old.qcow2 new_compacted_version_of_old.qcow2''

===Copy to New Server===

I created a new Debian server on the same hardware as the original Ubuntu server. Only one server could be running at a time, selected using UEFI boot options. I ran into the following problems:
*Backed up copies of the VMs while they were stopped.
*Some filing and naming clean-ups.
*The VMs are stored on a separate drive, not the server root drive, and backed up to another separate drive.
*The VM XML files were exported to the back-up drive, in the same directory as the VM back-ups.
*Using the same host name and IP address caused an SSH key error. To solve this, the new machine was given 2 IP addresses: the original server address and a separate new unique address.
*The original IP address allowed the VM NFS to work without any configuration changes, so it remains working on both the new and old servers.
*The new unique secondary address allows SSH access to the new server without needing to change the SSH keys. Once the original server is permanently decommissioned the secondary IP address will no longer be required.
*When attempting to start the old VMs on the new server there was an error with the machine type. The command ''kvm-spice -machine help'' shows the allowable machine types on the current KVM server. Simply changing the machine value of the hvm ''<type>'' tag to one listed by kvm-spice corrected this problem.

----

====Windows10 on KVM====

I have not used Windows on a VM since circa 2021. Just no need.
I do have a dual boot on my main desktop that defaults to Debian testing, and I can boot to Windows 11 when I need to use Windows-based software. My sons all still use Windows exclusively on their computers and game consoles, so I still have a family MS Office 365 subscription. This gives each of us access to MS Office and 1TB of MS cloud storage.

I had poor performance with Windows 7, 8/8.1, and 10 running on KVM a few years back. A large frustration was that I could not seem to get more than 2 CPUs functioning on the Windows VM even though I assigned 4. Performance was very poor, with CPU usage usually saturated during any use and relatively high even when idle. I found out early that Windows has limitations on the number of CPUs that can be used: 1 on Home, 2 on Professional, 4 on Workstation and more on Server versions, at least that was my understanding. As I did not have a great need for the Windows VM I did not try too hard and basically did not use it.

What I recently discovered was that this Windows OS limitation is not on the number of logical CPUs, but rather on the number of sockets configured. Further to this, KVM allows for configuration of Sockets/Cores/Threads. See the picture below. This actually makes sense as a limitation on the number of sockets on a paid OS. So there seems to be no limit on the number of cores and threads, only on the number of sockets.

Sadly, the default KVM topology setup is to assign all the virtual CPUs as sockets, with 1 Core(/Socket) and 1 Thread(/Core). When setting the manual CPU topology option to 1 Socket with 4 Cores(/Socket) and 1 Thread(/Core), my Windows 10 VM could see all 4 cores and performance increased dramatically. Upon further use I seemed to get the best performance with 6 cores for the Windows VM. It is basically usable now.
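The same topology can also be set directly in the libvirt domain XML (via ''virsh edit VM_name''). This fragment is a sketch assuming 6 vCPUs in 1 socket, matching the setup described above:

```xml
<vcpu placement='static'>6</vcpu>
<cpu>
  <topology sockets='1' cores='6' threads='1'/>
</cpu>
```

With no ''<topology>'' element, libvirt presents each vCPU as its own socket, which is exactly the default that trips up the Windows socket limit.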
{{:home_server:home_server_setup:screenshot_2020-09-12_19-58-17.png?400|}}

BTW //my server hardware configuration is: 1 Socket, 8 Cores(/Socket) & 1 Thread(/Core)//

My understanding is that Windows Professional only allows one user to be actively logged in at any time, either locally or remotely. This limitation was never a concern for me.

----

=====KVM Backup=====

There seem to be 4 main ways to back up a KVM virtual machine:
  - Copy the main file(s) while the VM is running - not recommended, as file corruption will probably occur because the running VM may, and probably will, modify the file during the copy process
  - Shut down the VM first and then copy the file. Start the VM again after the copy is completed.
  - Use the ''virsh backup-begin'' command to perform a live full backup
  - Live backup using a snapshot - create a snapshot of the VM and direct all changes to the snapshot, allowing safe backup of the main VM file. Afterwards, perform an active block commit of the snapshot back into the base image and verify the commit worked.

====KVM Offline Backup====

Note this only works on VMs that are shut down:
  - ''sudo virsh list --all'' to list all KVM virtual machines.
  - ''%%sudo virsh dumpxml VM_name | grep -i "source file"%%'' to list the VM source file location noted in the VM XML file.
  - ''sudo virsh dumpxml vm-name > /path/to/xml_file.xml'' to archive/backup the VM XML definition file.
  - ''sudo cp -p /working/path/VM_image.qcow2 /path/to/'' to archive/move the VM file.
  - ''%%sudo virsh undefine vm-name --remove-all-storage%%'' to undefine the VM and remove its storage. (Be careful with this one!)
  - ''%%sudo virsh define --file /path/to/xml_file.xml%%'' to import (define) a VM from an XML file.
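The offline steps above can be collected into a small script. This is a minimal sketch only: the VM name and paths are placeholders, and it is written to a file here so it can be reviewed before being run as root.

```shell
# Sketch of the offline backup steps; VM name and paths are placeholders.
cat > /tmp/kvm-offline-backup.sh <<'EOF'
#!/bin/bash
set -euo pipefail

VM=vm-name                          # the domain to back up (placeholder)
DEST=/path/to/backup                # backup directory (placeholder)

virsh shutdown "$VM"                # ask the guest to shut down cleanly
until virsh domstate "$VM" | grep -q "shut off"; do
    sleep 5                         # wait until the guest has actually stopped
done

virsh dumpxml "$VM" > "$DEST/$VM.xml"                        # archive the XML definition
SRC=$(virsh domblklist "$VM" --details | awk '/disk/ {print $4}')
cp -p "$SRC" "$DEST/"                                        # archive the disk image, preserving attributes

virsh start "$VM"                   # restart the guest once the copy is done
EOF
chmod +x /tmp/kvm-offline-backup.sh
```

The ''until'' loop matters because ''virsh shutdown'' only requests a shutdown; copying the disk before the guest reaches "shut off" risks the corruption described in method 1 above.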
References:
*[[https://ostechnix.com/export-import-kvm-virtual-machines-linux/|How To Export And Import KVM Virtual Machines In Linux]]

====kvm back-up links====
*[[https://schh.medium.com/backup-and-restore-kvm-vms-21c049e707c1|Backup and Restore KVM VMs]]
*[[https://www.bacula.org/kvm-backup-vm/|Bacula Technical Considerations of a KVM Backup Process]]
*[[https://www.virtkick.com/docs/how-to-perform-a-live-backup-on-your-kvm-virtual-machines.html|Virtkick - How to Perform a Live Backup on your KVM Virtual Machines]]
*[[https://libvirt.org/kbase/live_full_disk_backup.html|Libvirt.org - Efficient live full disk backup]]

----

=====KVM Cheat Sheet=====

There are perhaps too many of these, so I will keep this list very short and simple, with the most useful options.
*''sudo virsh nodeinfo'' : Display node information
*''%%sudo virsh list --all%%'' : List all domains; the ''%%--all%%'' option ensures inactive domains are listed too
*''sudo virsh dominfo domain-name'' : List information for domain ''domain-name'' or domain ''domain-id''
*''sudo virsh domiflist domain-name'' : List network interface information for domain ''domain-name'' or domain ''domain-id''
*''sudo virsh domblklist domain-name'' : Locate the disk file of an existing VM for domain ''domain-name'' or domain ''domain-id''
*''sudo virsh domrename currentname newname'' : Rename a domain
*''sudo virsh dumpxml domain-name > /dir_tree/kvm_backup/domain-name.xml'' : Copy the domain definition of ''domain-name'' to an XML file
*''sudo virsh dumpxml domain-name'' : List the XML domain definition of ''domain-name''
*''%%virsh define --file /dir_tree/kvm_backup/domain-name.xml%%'' : Restore a VM definition from an XML file.
The file is normally one created by ''virsh dumpxml'', and ''domain-name'' is given in this definition file
*''sudo virsh pool-list'' : List storage pools
*''%%sudo virsh vol-list --pool pool-name%%'' : List volumes in ''pool-name''
*virsh lifecycle commands: start, shutdown, destroy, reboot, or pause (suspend / resume)
*''sudo virsh start domain-name'' : Start the virtual machine of domain ''domain-name''
*''sudo virsh shutdown domain-name'' : Shut down the VM ''domain-name'' or ''domain-id'' (initiates a shutdown inside the VM; it could take some time to actually stop)
*''sudo virsh destroy domain-name'' : Destroy the VM ''domain-name'' or ''domain-id'' (effectively powers the VM down, forced off; this could corrupt a working VM)
*''sudo virsh reboot domain-name'' : Reboot (shut down and restart) the VM ''domain-name'' or ''domain-id''
*''sudo virsh suspend domain-name'' : Suspend or pause an operating VM ''domain-name'' or ''domain-id''; all CPU, device and I/O activity is paused, but the VM remains in memory ready for immediate resume / un-pause
*''sudo virsh resume domain-name'' : Resume / un-pause a suspended / paused VM ''domain-name'' or ''domain-id''
*''%%virsh --help%%'' : virsh help
*''%%virsh help list%%'' : virsh help for a specific command, here ''list''

Where:
*VM = Virtual Machine

Notes:
*Only running VMs are given a numerical Id

----

=====KVM QEMU Commands=====

====Change the Disk Allocated Size====

How to change the amount of disk space assigned to a KVM virtual machine:
*[[https://fatmin.com/2016/12/20/how-to-resize-a-qcow2-image-and-filesystem-with-virt-resize/|How to Resize a qcow2 Image and Filesystem with Virt-Resize]]

*First turn off the virtual machine to be resized
*Next find the file location of the virtual machine's disk
*Next query the file: ''sudo qemu-img info /path_vm/vm_name.img''
*Next increase the allowed size of the VM disk: ''sudo qemu-img resize /path_vm/vm_name.img +20G''
*We need to make a backup of the VM disk: ''sudo cp /path_vm/vm_name.img
/path_vm/vm_name_backup.img''
*We can check the file systems on the VM: ''%%virt-filesystems --long -h --all -a /path_vm/vm_name.img%%'', //**we can also use this to confirm the correct partition to expand**//.
*We then use the backup VM disk as the source to create a new expanded drive: ''%%sudo virt-resize --expand /dev/sda1 /path_vm/vm_name_backup.img /path_vm/vm_name.img%%''

The ''virt-filesystems'' command may not be installed by default; it can be installed with the following command: ''sudo apt install guestfs-tools''

[[https://computingforgeeks.com/how-to-extend-increase-kvm-virtual-machine-disk-size/|How To extend/increase KVM Virtual Machine (VM) disk size]]

====Shrink the Disk File====

*Locate the QEMU disk file
*Shut down the VM.
*Copy the VM file to a back-up: ''cp image.qcow2 image.qcow2_backup''
*Option #1: Shrink your disk without compression (better performance, larger disk size):
  *''qemu-img convert -O qcow2 image.qcow2_backup image.qcow2''
*Option #2: Shrink your disk with compression (smaller disk size, takes longer to shrink, performance impact on slower systems):
  *''qemu-img convert -O qcow2 -c image.qcow2_backup image.qcow2''

Example: an 11GB disk file I shrank without compression basically remained unchanged at 11GB, but with compression it shrank to 5.2GB. The time to compress was longer and dependent upon the hardware used.

*Boot your VM and verify all is working.
*Once you have verified all is well, it should be safe to either delete the backup of the original disk, or move it to offline backup storage.

=====How to mount a VM virtual disk on the KVM hypervisor=====

There seem to be a number of methods to do this. **In all cases the VM (Virtual Machine) must be in a shutdown state.**

====libguestfs method====

One method is to use the tool set ''libguestfs''; however it is very heavy with many dependencies, so I have decided not to pursue this option.

++++tl;dr; libguestfs|
The method described here uses libguestfs, which is a set of tools used to access and modify virtual machine (VM) disk images.
You can use this for:
*viewing and editing files inside guests
*scripting changes to VMs
*monitoring disk used/free statistics
*creating guests
*P2V
*V2V
*performing backups, etc.

To check whether it is already installed: ''%%sudo apt list --installed | grep libguest%%''

To install: ''sudo apt install libguestfs-tools''

This would install far too many additional dependencies for my liking, so I am stopping here.
++++

====Mount a qcow2 image directly====

To check whether it is already installed: ''%%sudo apt list --installed | grep qemu-utils%%''

To install: ''sudo apt install qemu-utils''

The nbd (network block device) kernel module needs to be loaded to mount qcow2 images.
*''sudo modprobe nbd max_part=16'' will load it with support for up to 16 partitions per nbd device. (If more partitions are required use 32 or 64 as required.)
*Check the VMs: ''%%sudo virsh list --all%%''.
*If the VM to be mounted is active, shut it down with ''sudo virsh shutdown VM_name''.
*Use ''sudo virsh domblklist VM_name'' to get the full path and file name of the VM image file.
*Use ''ls -l /dev/nbd*'' to check if any nbd devices have already been defined.
*Use ''sudo qemu-nbd -c /dev/nbd0 /path_vm/vm_name.qcow2'' to connect the VM image file as a network block device.
*''sudo fdisk /dev/nbd0 -l'' will list the available partitions in /dev/nbd0.
*Use ''sudo partprobe /dev/nbd0'' to update the kernel device list.
*Use ''ls -l /dev/nbd0*'' to see the available partitions on the image.
*If the image partitions are not managed by LVM they can be mounted directly.
*If a mount point does not already exist, create one: ''sudo mkdir /mnt/image''.
*The device can then be mounted with ''sudo mount /dev/nbd0p1 /mnt/image'', or ''sudo mount -r /dev/nbd0p1 /mnt/image'' to mount read-only, or ''sudo mount -w /dev/nbd0p1 /mnt/image'' to mount explicitly read-write.

When complete, clean up with the following commands.
*Unmount the block device with ''sudo umount /mnt/image''
*Delete the network block device with ''sudo qemu-nbd -d /dev/nbd0''.
*If required, the VM can be restarted with ''sudo virsh start VM_name''

===Mount a qcow2 image with LVM===

Links:
*[[https://docs.openstack.org/image-guide/modify-images.html|Modify images]]
*[[https://medium.com/@aysadx/linux-nbd-introduction-to-linux-network-block-devices-143365f1901b|Linux NBD: Introduction to Linux Network Block Devices]]
*[[http://alexeytorkhov.blogspot.com/2009/09/mounting-raw-and-qcow2-vm-disk-images.html|Mounting raw and qcow2 VM disk images]]
*[[https://www.xmodulo.com/mount-qcow2-disk-image-linux.html|How to mount qcow2 disk image on Linux]]

=====KVM Guest Corrupted - Recovery Options and Related=====

*[[http://www.randomhacks.co.uk/how-to-recover-fsck-a-qcow2-file/|How to recover a qcow2 file using fsck]]
*''sudo modprobe nbd max_part=8'' to enable the nbd (network block device) kernel module on the host
*''sudo qemu-nbd %%--%%connect=/dev/nbd0 /mnt/kvm/VMname.qcow2'' to use qemu-nbd to connect your qcow2 file as a network block device
*''sudo fdisk /dev/nbd0'' to help find the partitions on the qcow2 VM file
*''sudo fsck /dev/nbd0p1'' to fix the corrupted disk on the VM
*''sudo qemu-nbd %%--%%disconnect /dev/nbd0'' to disconnect the network block device
*Qemu-discuss [[https://lists.nongnu.org/archive/html/qemu-discuss/2014-10/msg00023.html|How to fix/recover corrupted qcow2 images]]
*[[https://cdcvs.fnal.gov/redmine/projects/fcl/wiki/Example_checking_filesystems_for_a_virtual_machine|Example checking filesystems for a virtual machine]]
*[[http://mycfg.net/articles/booting-from-a-cdrom-in-a-kvm-guest-with-libvirt.html|Booting from a cdrom in a kvm guest using libvirt]]
*[[http://unix.stackexchange.com/questions/12296/how-do-i-add-a-kvm-guest-vm-to-virsh|How do I add a KVM guest VM to virsh]]

Some key points are:
*''sudo virsh'' to enter virsh, the virtualisation interactive terminal.
Once inside virsh:
*''list'' to list running VMs, or ''list %%--%%all'' to list all defined VMs, running or shut down
*''edit VM_name'' to edit the XML configuration file of the VM named VM_name.
*''sudo virsh define XXX.xml'' to add a VM (XXX.xml) into virsh persistently. The VM is not started. The VM XML definition files can be found in ''/etc/libvirt/qemu''.
*''sudo virsh start VM_name'' to start the VM. (Also reboot, reset, shutdown, destroy)
*''sudo virsh help | less'' lists all the virsh commands

Some links:
*[[https://computingforgeeks.com/virsh-commands-cheatsheet/|Virsh commands cheatsheet to manage KVM guest virtual machines]]
*[[https://devonhubner.org/virsh_cheat_sheet/|virsh cheat sheet]]
*[[https://www.ullright.org/ullWiki/show/libvirt-virsh-cheatsheet|libvirt / virsh cheatsheet (kvm)]]
*[[https://blog.first2host.co.uk/virsh-cheat-sheet/|KVM Virsh Cheat Sheet]]
*[[https://bobcares.com/blog/clone-kvm-virtual-machine/|Clone KVM Virtual Machine – How we use it in Linux]]
*[[https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/virtualization_administration_guide/index|Redhat Virtualization Administration Guide]]
  *[[https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-virtualization_administration_guide-storage_pools-storage_pools|Chapter 12. Storage Pools]]

++++Home Server Index|
*[[home_server:home_server_setup:summary]]
*[[home_server:home_server_setup:home_it_setup]]
*[[home_server:home_server_setup:Network_setup]]
*[[home_server:home_server_setup:kvm]]
*[[home_server:home_server_setup:vnc_setup]]
*[[home_server:home_server_setup:disk_check]]
*[[home_server:home_server_setup:other_services:index]]
++++

<- home_server:home_server_setup:network_setup|Prev ^ home_server:home_server_setup:summary|Start page ^ home_server:home_server_setup:vnc_setup|Next ->