Dikkekop

Members
  • Posts: 21
  • Joined
  • Last visited

Everything posted by Dikkekop

  1. My problem of slow SMB browsing got fixed (to an acceptable level) using this: browsing went from almost a minute to instant (except when a disk is spun down, but that's normal), after defragging the XFS disks. The defrag took about 2 days per disk. FYI, my disks are almost 10 years old. Spindown: 30 minutes. Unraid 6.12.3. Thanks!
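     For anyone wanting to try the same defrag, a minimal sketch (the device and mount point are examples; substitute your own array disks — xfs_db and xfs_fsr are part of xfsprogs, which Unraid ships):

     ```shell
     # Report fragmentation first (read-only check); /dev/md1 is an example device.
     xfs_db -r -c frag /dev/md1

     # Defragment a mounted XFS filesystem; -v prints each file as it is reorganized.
     # As noted above, this can run for days on old, well-filled disks.
     xfs_fsr -v /mnt/disk1
     ```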
  2. Oh, I'm sorry. Rolled back to 6.7.2. I'm waiting for the next update.
  3. So I tried 6.8.2 and unfortunately the problem still exists.
  4. So I tried 6.8.2 and unfortunately the problem still exists. The mouse works after a detach --> attach. The keyboard (Bluetooth) doesn't work at all. The screen is sometimes glitchy... this also happens when I detach --> attach the keyboard (Bluetooth), so maybe it's related.

     Syslog:
     Feb 1 13:41:43 Tower kernel: usb 6-8: reset full-speed USB device number 4 using xhci_hcd
     Feb 1 13:41:43 Tower kernel: logitech-djreceiver 0003:046D:C52B.001E: hiddev96,hidraw0: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:14.0-8/input2
     Feb 1 13:41:43 Tower kernel: input: Logitech M310 as /devices/pci0000:00/0000:00:14.0/usb6/6-8/6-8:1.2/0003:046D:C52B.001E/0003:046D:1024.001F/input/input25
     Feb 1 13:41:43 Tower kernel: logitech-hidpp-device 0003:046D:1024.001F: input,hidraw1: USB HID v1.11 Mouse [Logitech M310] on usb-0000:00:14.0-8:1
     Feb 1 13:41:43 Tower kernel: input: Logitech K520 as /devices/pci0000:00/0000:00:14.0/usb6/6-8/6-8:1.2/0003:046D:C52B.001E/0003:046D:2011.0020/input/input26
     Feb 1 13:41:43 Tower kernel: logitech-hidpp-device 0003:046D:2011.0020: input,hidraw2: USB HID v1.11 Keyboard [Logitech K520] on usb-0000:00:14.0-8:2
     Feb 1 13:41:44 Tower kernel: usb 6-4: reset low-speed USB device number 2 using xhci_hcd
     Feb 1 13:41:44 Tower kernel: input: USB Optical Mouse as /devices/pci0000:00/0000:00:14.0/usb6/6-4/6-4:1.0/0003:1BCF:0005.0021/input/input27
     Feb 1 13:41:44 Tower kernel: hid-generic 0003:1BCF:0005.0021: input,hidraw3: USB HID v1.10 Mouse [USB Optical Mouse] on usb-0000:00:14.0-4/input0
     Feb 1 13:41:44 Tower kernel: usb 6-7: reset full-speed USB device number 3 using xhci_hcd
     Feb 1 13:41:50 Tower kernel: br0: port 3(vnet6) entered blocking state
     Feb 1 13:41:50 Tower kernel: br0: port 3(vnet6) entered disabled state
     Feb 1 13:41:50 Tower kernel: device vnet6 entered promiscuous mode
     Feb 1 13:41:50 Tower kernel: br0: port 3(vnet6) entered blocking state
     Feb 1 13:41:50 Tower kernel: br0: port 3(vnet6) entered forwarding state
     Feb 1 13:41:50 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000dffff], which spans more than PCI Bus 0000:00 [mem 0x000d4000-0x000d7fff window]
     Feb 1 13:41:50 Tower kernel: caller pci_map_rom+0x7a/0x15e mapping multiple BARs
     ### [PREVIOUS 2 LINES REPEATED 3 TIMES] ###
     Feb 1 13:41:50 Tower kernel: vfio-pci 0000:00:1b.0: enabling device (0000 -> 0002)
     Feb 1 13:41:50 Tower acpid: input device has been disconnected, fd 7
     Feb 1 13:41:52 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000dffff], which spans more than PCI Bus 0000:00 [mem 0x000d4000-0x000d7fff window]
     Feb 1 13:41:52 Tower kernel: caller pci_map_rom+0x7a/0x15e mapping multiple BARs
     ### [PREVIOUS 2 LINES REPEATED 1 TIMES] ###
     Feb 1 13:41:55 Tower emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log libvirt/qemu/Windows 103.log
     Feb 1 13:41:55 Tower kernel: usb 6-8: reset full-speed USB device number 4 using xhci_hcd
     Feb 1 13:41:55 Tower kernel: usb 6-7: reset full-speed USB device number 3 using xhci_hcd
     Feb 1 13:41:56 Tower kernel: usb 6-8: reset full-speed USB device number 4 using xhci_hcd
     Feb 1 13:41:56 Tower kernel: logitech-djreceiver 0003:046D:C52B.0024: hiddev96,hidraw0: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:14.0-8/input2
     Feb 1 13:41:56 Tower kernel: input: Logitech M310 as /devices/pci0000:00/0000:00:14.0/usb6/6-8/6-8:1.2/0003:046D:C52B.0024/0003:046D:1024.0025/input/input28
     Feb 1 13:41:56 Tower kernel: logitech-hidpp-device 0003:046D:1024.0025: input,hidraw1: USB HID v1.11 Mouse [Logitech M310] on usb-0000:00:14.0-8:1
     Feb 1 13:41:56 Tower kernel: input: Logitech K520 as /devices/pci0000:00/0000:00:14.0/usb6/6-8/6-8:1.2/0003:046D:C52B.0024/0003:046D:2011.0026/input/input29
     Feb 1 13:41:56 Tower kernel: logitech-hidpp-device 0003:046D:2011.0026: input,hidraw2: USB HID v1.11 Keyboard [Logitech K520] on usb-0000:00:14.0-8:2
     Feb 1 13:41:56 Tower kernel: usb 6-4: reset low-speed USB device number 2 using xhci_hcd
     ### [PREVIOUS LINE REPEATED 1 TIMES] ###
     Feb 1 13:41:57 Tower kernel: input: USB Optical Mouse as /devices/pci0000:00/0000:00:14.0/usb6/6-4/6-4:1.0/0003:1BCF:0005.0027/input/input30
     Feb 1 13:41:57 Tower kernel: hid-generic 0003:1BCF:0005.0027: input,hidraw3: USB HID v1.10 Mouse [USB Optical Mouse] on usb-0000:00:14.0-4/input0
     Feb 1 13:45:20 Tower acpid: input device has been disconnected, fd 7

     qemu log (when detaching and attaching a device via the "Hotplug USB" plugin):
     2020-02-01 12:41:50.143+0000: Domain id=4 is tainted: high-privileges
     2020-02-01 12:41:50.143+0000: Domain id=4 is tainted: host-cpu
     char device redirected to /dev/pts/2 (label charserial0)
     libusb: error [do_close] Device handle closed while transfer was still being processed, but the device is still connected as far as we know
     libusb: warning [do_close] A cancellation for an in-flight transfer hasn't completed but closing the device handle
     ### [PREVIOUS 2 LINES REPEATED 2 TIMES] ###

     tower-diagnostics-20200201-1350-ano.zip
  5. Rolled back to 6.7.2 because passthrough is a little buggy (QEMU/libvirt?). On a Windows 10 VM to which I pass through an Intel IGD, the screen sometimes shows "random" purple/yellow pixels. On VM boot-up none of my USB devices get detected. Some of them, like the mouse, work after a detach and attach with the "Hotplug USB" plugin. So I rolled back to 6.7.2 and now everything is fine again.
  6. Just upgraded from 6.7.2 to 6.8.1: all seems to work fine for now. Docker and VMs are up and running. I had to recreate two Windows VMs (using form view to create a new XML and pointing it at the existing .qcow2 and .img vdisks) because both of them didn't boot: black screen with the error "no bootable device". These two Windows VMs boot off an SSD mounted by Unassigned Devices; I had to update that plugin (Unassigned Devices) before the upgrade. So either this plugin update or the Unraid upgrade itself messed something up. Passthrough of the Intel IGD (4th gen) is still working 👍
  7. YES:
     # default is 0. set this to the number of days backups should be kept. 0 means indefinitely.
     number_of_days_to_keep_backups="0"
     # default is 0. set this to the number of backups that should be kept. 0 means infinitely.
     # WARNING: If VM has multiple vdisks, then they must end in sequential numbers in order to be correctly backed up (i.e. vdisk1.img, vdisk2.img, etc.).
     number_of_backups_to_keep="1"
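     The "number of backups to keep" behaviour can be sketched as a simple prune loop. This is an illustration of the idea, not the script's actual code; the directory and backup file names are made up:

     ```shell
     # Keep only the newest N backups in a directory (illustrative sketch).
     number_of_backups_to_keep=1
     backup_dir="./backups"

     mkdir -p "$backup_dir"
     # Fabricated sample backups with distinct timestamps (oldest first).
     touch -t 201905231732 "$backup_dir/20190523_1732_vdisk1.img"
     touch -t 201905241732 "$backup_dir/20190524_1732_vdisk1.img"

     # List newest first, skip the first N entries, delete the rest.
     ls -1t "$backup_dir" | tail -n +"$((number_of_backups_to_keep + 1))" |
       while read -r old; do rm -- "$backup_dir/$old"; done

     ls "$backup_dir"   # only the newest backup remains
     ```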
  8. Domain snapshot snap.img created for debz
     '/mnt/cache/domains/debz/vdisk1.img.snap.img' -> '/mnt/user/backup/vm/debz/20190524_1732_vdisk1.img.snap.img'
     2019-05-24 17:32 information: backup of vdisk1.img.snap.img vdisk to /mnt/user/backup/vm/debz/20190524_1732_vdisk1.img.snap.img complete.
     Please edit your XML file and remove .snap.img (you are creating a backup of your snapshot). Check whether your VM boots correctly, then try again with the provided script.
  9. New version: v1.1.5.1
     Changelog:
     # Version v1.1.5.1 - 2019-09-30
     # Added: check before and after backup for .snap.img files and remove them
     # Added parameter --wait to: virsh blockcommit $vm $disknumber (line 1245). This hopefully fixes the renaming of vdisk to vdisk.img.snap.img problem.
     https://github.com/thies88/unraid-vmbackup/blob/master/script
  10. Temporarily this path is edited to point to the "temp" snapshot file. Changes in your VM are written to that "temp" file during the backup. When the backup is finished it should point back to vdisk1.img. The moment it does this is when the script performs "virsh blockcommit" (in your logging you will see this where it says: Block commit: [100 %] Successfully pivoted). Can you replace the script with this one and try again: https://github.com/thies88/unraid-vmbackup/blob/master/script I added two new points where the script checks for the .snap.img files and deletes them: one at the start of the backup and one after. I will look at editing the .xml file some other time. Maybe we can build a check that looks for the .snap.img value in the XML file and deletes it. Can you also upload a log file (output of the script)?
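      The XML check mentioned above could look something like this sketch (the file and disk path are fabricated examples; a real check would operate on the libvirt domain XML):

      ```shell
      # Fabricated sample: a domain XML whose disk still points at the snapshot.
      xml="./debz.xml"
      printf "<source file='/mnt/cache/domains/debz/vdisk1.img.snap.img'/>\n" > "$xml"

      # If a stale .snap.img source path is found, strip the suffix so the
      # disk points at the original vdisk again.
      if grep -q '\.snap\.img' "$xml"; then
        sed -i 's/\.img\.snap\.img/.img/g' "$xml"
      fi

      cat "$xml"   # -> <source file='/mnt/cache/domains/debz/vdisk1.img'/>
      ```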
  11. I have Spotweb running in the "nginx" docker with the database in the "mariadb" docker. The "nginx" docker has cron built in. Add the following user script on Unraid:
      Download "CA User Scripts" from CA. Go to Settings --> User Scripts --> ADD NEW SCRIPT. Add a script name, for example "update spotweb". Click the newly created user script named "update spotweb" (blue text) and click "Edit Script". Remove all text and paste this:
      #!/bin/bash
      docker exec -t nginx php /config/www/spotweb/retrieve.php > /dev/null
      /config/www/spotweb/retrieve.php is the path to the retrieve.php file from within the docker container. Click SAVE CHANGES and change the schedule from Disabled to Scheduled hourly.
      If you do this, the script "update spotweb" (the cron job) survives an update of the "nginx" docker. Out of curiosity, in what docker are you running Spotweb?
  12. ## USE AT YOUR OWN RISK ##
      Live backup is experimental, so please don't test/use it on a daily-use Unraid box for now. That being said, I use this on my daily Unraid box without problems. In order for this to work, make sure: when using enable_backup_running_vm="1" you HAVE TO CHANGE your vdisk path in the VM's XML to point to /mnt/cache/somefolder or /mnt/disk1/somefolder instead of /mnt/user/somefolder.
      New script added as attachment. What has changed?
      # - Fixed a case-sensitivity bug where an .iso file was shown as a warning in the script logging when the filename was xxx.ISO; only .iso was accepted (lines 1180 and 1921).
      # - Experimental: added live backup, i.e. backup of a running VM. This also checks whether the guest agent is installed inside the VM, so an extra parameter can be added to make sure we have a consistent disk state before taking the snapshot and backup. This backup DOES NOT include RAM state, so when the VM is restored it will be powered off, in the same state as after a forced power-off. It allows the guest to flush its I/O into a stable state before the snapshot is created, which allows use of the snapshot without having to perform an fsck or losing partial database transactions. When the guest agent is not installed it just creates a snapshot, but this is still considered a safe snapshot. Can be enabled or disabled with enable_backup_running_vm="1" or "0".
      # - Added quick settings at the top of the script for easy changes through the Unraid web GUI.
      # - Added comments in the script to show what changed: look for "### Added:" and "### edited:".
      All features and functionality from the original script are intact, so if you set enable_backup_running_vm="0" it is the same script as https://github.com/JTok/unraid-vmbackup : v1.1.4 - 2018/05/19.
      ## USE AT YOUR OWN RISK ##
      Autovmbackup.sh.txt
  13. Never mind, found this thread:
  14. I switched from Proxmox to Unraid, and running LibreELEC in a VM is a must for me. With the following steps I got it working for an Intel i5 4th gen (i5-4670, HD 4600) with an ASRock B85M Pro4:
      VM settings (VMS --> VM Name --> Edit):
        Bios: SeaBIOS
        Machine: i440fx-3.0
      VM Manager (SETTINGS --> VM Manager):
        PCIe ACS override: Both
        VFIO allow unsafe interrupts: Yes
      Syslinux (/boot/syslinux/syslinux.cfg): YOU WILL LOSE THE UNRAID BOOT GUI, so you can only use the web GUI and SSH to access your Tower.
      Added: vfio-pci.ids=8086:0412 modprobe.blacklist=i915,snd_hda_intel,snd_hda_codec_hdmi video=efifb:off
      So line 8 looks like this (not sure if "snd_hda_intel,snd_hda_codec_hdmi" is needed):
      append pcie_acs_override=downstream,multifunction vfio-pci.ids=8086:0412 modprobe.blacklist=i915,snd_hda_intel,snd_hda_codec_hdmi video=efifb:off vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
      To find the vfio-pci.ids= value (the IGD address): Unraid web GUI --> TOOLS --> System devices, and look for VGA.
      To find the modprobe.blacklist= value: SSH to your Tower, type: lspci -ks 00:02.0 and look for "Kernel modules". So in this case it would be: modprobe.blacklist=i915
  15. Is it possible to create a bridge without a NIC attached? I'm looking for a way to add a bridge for communication between VMs only (like a separate virtual switch/network). On this network I want to run my own DHCP server/router. I know of virbr..., but that comes with its own DHCP... or can I disable this?
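      For reference, libvirt can define exactly this kind of isolated network. A minimal sketch (the network and bridge names are examples): with no <forward> element the bridge is guests-only and no host NIC is attached, and with no <ip> element libvirt starts no dnsmasq/DHCP on it, so a VM can run its own DHCP server/router.

      ```xml
      <network>
        <name>isolated0</name>
        <!-- no <forward> element: guests-only, no host NIC attached -->
        <!-- no <ip> element: libvirt starts no dnsmasq/DHCP on this network -->
        <bridge name='virbr9' stp='on' delay='0'/>
      </network>
      ```

      Define and start it with virsh net-define isolated0.xml and virsh net-start isolated0, then point the VMs' network interfaces at isolated0.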
  16. OK, I have done some testing with the libvirt command-line tool virsh and made a basic script... seems to work. Almost done merging this with https://github.com/JTok/unraid-vmbackup version v1.1.4 - 2018/05/19. So the original script stays functional with all its parameters/functionality, and I will add one parameter to enable "live backup". Just to be clear: this will not include the current RAM state in the backup. It will back up only the disk in its current state; it will flush the latest I/O to disk before the snapshot/backup to get a consistent disk state (this requires the guest agent inside the VM; the script will check whether it is installed). If the VM is restored, it will act like it has been forced to shut down.
      Tested:
      - A backup of Windows Server 2016 is restorable. I also tested with an open Word file, which is still accessible after a restore.
      - A backup of a Debian VM is restorable while running dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct, which creates a 1 GB file. libvirt waits for it to finish (if the guest agent is installed inside the VM) before creating the snapshot/backup, so this is really nice.
      I will post the updated script in a couple of days and would like to ask you guys to test it. Maybe do not test this on your critical VMs/Unraid box, but that's up to you. I will add marks in the script to show what I added or edited.
  17. This is a w.i.p.; I don't have an answer for that yet. About your "document open for editing": correct me if I'm wrong, but Word creates a temp file for editing, leaving the original file as it is (maybe it's marked or something so no other user can edit the file while it's open, but I don't think this "mark" is saved in the file itself). When you hit the save button, it writes the changes to the file. So if you back up while the file is open, the backup contains the original file as it was before editing and before saving. If you save the file after the backup is done, the next backup will contain the changes made while the file was open. I will test this... The --quiesce option can be used if the guest additions are installed inside the VM and will flush the memory buffers to disk. But I have to test this... again, w.i.p. This is all theory; I want to test things first.
  18. Don't know yet; it's still a w.i.p. and needs testing. From what I understand it's safe if the guest agent is installed in the VM: "NOTE-1: Above, if you have the QEMU guest agent installed in your virtual machine, try the '--quiesce' option with virsh snapshot-create-as [. . .] to ensure you have a consistent disk state."
  19. Hi JTok, first of all, big thanks for the script. I'm using it and want to add a feature that allows backing up running VMs. From just a few tests it seems to work, but I have to adjust your script for it to be compatible with your built-in functions, like stopping the VM before backup and starting it after. For now I commented out and added some lines of code to make it work. Backing up running VMs: https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit
  20. Isn't this possible with virsh snapshots?
      1. virsh snapshot-create-as --domain vm (creates a second vdisk.img, makes future writes go to that vdisk, and points the .xml file at it)
      2. Back up the VM's original vdisk.img (no writes take place to this vdisk because of step 1, so a "consistent state"?)
      3. virsh blockcommit --domain vm (merges the snapshot vdisk into the original vdisk, then deletes the snapshot and points the .xml back at the original vdisk)
      Actual commands: https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit
      I'm working on/testing a user script to do this, based on: https://github.com/JTok/unraid-vmbackup Thread:
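      The three steps above can be sketched with virsh like this. The VM name, disk target, and paths are examples only (the disk target comes from virsh domblklist), so treat it as a sketch under those assumptions, not a finished backup script:

      ```shell
      vm="debz"
      disk="hdc"                                  # from: virsh domblklist "$vm"
      src="/mnt/cache/domains/debz/vdisk1.img"
      dst="/mnt/user/backup/vm/debz/vdisk1.img"

      # 1. External disk-only snapshot: new writes go to vdisk1.snap.img and
      #    the domain XML temporarily points at it. --quiesce needs the guest
      #    agent inside the VM; drop it if the agent is not installed.
      virsh snapshot-create-as --domain "$vm" snap \
        --diskspec "$disk,file=${src%.img}.snap.img" \
        --disk-only --atomic --quiesce --no-metadata

      # 2. The original vdisk is now stable, so copy it away.
      cp --sparse=always "$src" "$dst"

      # 3. Merge the snapshot back into the original vdisk, pivot the domain
      #    XML back to it, then remove the leftover snapshot file.
      virsh blockcommit "$vm" "$disk" --active --wait --pivot
      rm -f "${src%.img}.snap.img"
      ```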