LeoFender

Everything posted by LeoFender

  1. Just noticed by accident that UD (Unassigned Devices) doesn't stop me from using a number at the beginning of the pool name when formatting a single disk with ZFS as the filesystem. I didn't see it the first time around, but there is an error popup along the lines of "Fail - see syslog for details". The thing is, this message disappears so fast that if you look away for a few seconds it's gone, and you're none the wiser about what happened. Luckily I looked in the syslog and saw my mistake.
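     For anyone curious, the restriction comes from ZFS itself; if I recall the libzfs naming rules correctly, a pool name has to begin with a letter, and running the command by hand shows the error UD is hiding (the device path below is just a placeholder):

         # fails: ZFS pool names must begin with a letter
         zpool create 1pool /dev/sdX1
         # cannot create '1pool': name must begin with a letter

         # fine
         zpool create pool1 /dev/sdX1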
  2. The VM Backup (Beta) plugin by JTok was causing the bottom two error messages for me on unRAID v6.8.3. After uninstalling it and rebooting, the errors are gone. More about these errors can be read in the following issue thread on the author's GitHub page, with comments and initial troubleshooting of the cause: https://github.com/JTok/unraid.vmbackup/issues/18 See also the post linked from that thread for further details.
  3. Thank you for this. I've got several old systems for homelab use now, and I'll be able to test this on them later today.
  4. I'm still looking to refine it further, but the following will return the filepath of any files on /mnt/cache/isos over 1M that have a sparseness value over 0.1 (i.e. less sparse than a 10-to-1 ratio):

         find "/mnt/cache/isos" -depth -size +1M -printf '%S:%p\0' | awk -v RS='\0' -F : '$1 > 0.1 {sub(/^[^:]*:/, ""); print}'

     Adjusted from the example here: https://unix.stackexchange.com/a/86446 This takes into account floating-point values like 3.8147e-06, which %S returns for very tiny files. If the pipe to awk is an issue, there might be ways around it, which I'm about to look into.
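     In case it helps anyone following along, here's a variant of the same idea (my own sketch, not part of the original one-liner) that keeps the sparseness value in the output and sorts most-sparse first; sort -g is used because %S can emit e-notation values:

         # prints "sparseness<TAB>path", smallest (most sparse) first
         find "/mnt/cache/isos" -depth -size +1M -printf '%S\t%p\0' \
           | sort -z -g \
           | tr '\0' '\n'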
  5. Is it possible to direct mover to transfer sparse files without expanding them to full size on the destination disk? If it's something mover just doesn't support yet, maybe an alternative could be a rule to ignore files whose sparseness value is, for example, less than 0.1; effectively a rule that stops a 10GB file becoming a 200GB file. It looks like find has a %S directive for showing a file's sparseness: https://www.gnu.org/software/findutils/manual/html_mono/find.html#Size-Directives I've been looking into doing this myself on my own system as a sanity check before mover starts, as I've managed to lock up my system at least twice by backing up VMs to the protected array and then finding the same mover task still running 24 hours later, filling the array with zeros 😅
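     Mover aside, if anyone needs to move one of these files by hand in the meantime, the standard tools can preserve the holes during the copy; a minimal sketch (the paths are just examples from my setup):

         # cp can re-create holes at the destination instead of writing zeros
         cp --sparse=always /mnt/cache/domains/win10.img /mnt/disk1/domains/win10.img

         # rsync's equivalent flag
         rsync --sparse /mnt/cache/domains/win10.img /mnt/disk1/domains/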
  6. As of 31 Mar 2020, 7:07:51 UTC:
       [ Scheduler running ]
       Total queued jobs: 257,174
       In progress: 1,222,333
       Successes last 24h: 694,612
     Looks like it's back up!
  7. Over the last few days, I've put 3 PCs together for Rosetta@home CPU work, using old parts I'd kept from upgrading family members' systems over the years. They're all working surprisingly well, which I'd hoped for, but you never know what to expect with old parts. Now that our Unraid server has become as important a household item as Wi-Fi, I'd been wanting to set up a proper HomeLab for myself to test stuff on. For these 3 PCs, I've gone with XCP-ng (Xen hypervisor) on each and Xen Orchestra for central management, plus a mix of Windows / Linux VMs to see how BOINC performs in each. I haven't quite decided whether I'll instead just put a minimal Debian distro like DietPi on each, with a BOINC Docker container, and manage it all in a few browser tabs. In any case, it's nice to be able to put all this old stuff to good use, plus have a new HomeLab to play with.
  8. Wondering if anyone is using the Disable Security Mitigations plugin and seeing improved times for CPU-based work units?
  9. Just saw a Windows 10 VM hit 200+Mbit/s while downloading updates via the cache. Nice! This is quite an impressive bundle of software. Thank you for making this all work in unRAID. I've also tested this with PiHole by changing the Lancache-bundle UPSTREAM_DNS setting from '1.1.1.1' to my PiHole IP. They're both working great, except of course PiHole no longer shows which client IP the requests came from, only the IP of the Lancache-bundle container. Could sniproxy or another part of this bundle cause problems if I changed the order of DNS requests to: Client -> PiHole -> Lancache-bundle -> 1.1.1.1 ?
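     In the meantime, here's a quick check I've been using to see which hop answers a lookup (the IPs are my LAN's, and steamcontent.com is just an example of a domain the cache intercepts):

         # ask PiHole directly; a cached domain should resolve to the lancache container's IP
         dig +short steamcontent.com @192.168.1.5

         # compare with the public answer
         dig +short steamcontent.com @1.1.1.1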
  10. A big reason Windows can idle much lower by default is that it's a GUI-first system, with a driver model that demands GPU makers expose every available power state for Windows to use, usually even before the first visual element appears on screen. Something that might achieve similar results is to download a recent LibreELEC ISO and make a VM passing through your GPU and audio to it. Set up power saving in the GUI to sleep the screen after 1 minute. Disable any background audio settings. Enable SSH and then wait for the display to sleep. SSH in and issue a command to STOP the Kodi GUI, which reduces this VM's CPU cycles to a bare minimum (sketched below). These are just a few things, but you get the idea. When you need to use the GPU in another VM, you'll have to shut down the LibreELEC VM: SSH in, issue a CONTINUE to the Kodi GUI, then after a few seconds issue a HALT to the LibreELEC VM. This might not work well in every configuration, but it's a fast way to test a possible option, without devoting several GBs of RAM and constant writes to storage just for a Windows VM.
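     In case it's useful, here's how I'd sketch that STOP / CONTINUE / HALT sequence over SSH; I'm reading STOP and CONTINUE as the SIGSTOP/SIGCONT signals, and assuming the Kodi process is named kodi.bin, as it is on recent LibreELEC builds (adjust if yours differs):

         # once the display has slept, freeze the Kodi GUI to drop CPU use to near zero
         kill -STOP $(pidof kodi.bin)

         # when the GPU is needed in another VM: let Kodi resume briefly, then shut down
         kill -CONT $(pidof kodi.bin)
         sleep 5
         halt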
  11. Okay, I've done this several dozen times now, so here goes.

      Part 1 -- Getting, Converting and Resizing the Image

      1. Download the VirtualBox VM from https://dietpi.com -- make sure it's named: DietPi_VirtualBox-x86_64-Buster.7z
      2. Uncompress this into an empty folder. Keep the DietPi_VirtualBox-x86_64-Buster.ova
      3. Untar DietPi_VirtualBox-x86_64-Buster.ova, resulting in 3 new files. We will focus on the .vmdk file.
             tar -xf DietPi_VirtualBox-x86_64-Buster.ova
      4. Convert the .vmdk into a qcow2 file.
             qemu-img convert -p -f vmdk -O qcow2 DietPi_VirtualBox-x86_64-Buster-disk001.vmdk DietPi_64GB.qcow2
      5. Resize the qcow2 file to 64GB, or whatever size you want the final image to be.
             qemu-img resize DietPi_64GB.qcow2 64G
      6. Convert this 64GB qcow2 file to a raw image.
             qemu-img convert -p -f qcow2 -O raw DietPi_64GB.qcow2 DietPi_64GB.img

      Part 2 -- Adding the Image to unRAID

      Create a VM using the Debian template.
      Change BIOS to SeaBIOS.
      Primary vDisk Location = Manual -- add the location of the raw image from Part 1.
      Primary vDisk Bus = SATA
      Click [Create]

      When DietPi first starts, it'll automatically resize the partition inside the image.

      That's all folks! Let me know if you come across this and found it helpful, or have any questions.

      Some notes: I chose the VirtualBox image rather than the VMware option, as it doesn't require 3rd-party tools and many times more steps to get the same result 😅
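      An optional sanity check I like to run between Parts 1 and 2 (not part of the original steps): confirm the final image's format and virtual size before pointing the VM template at it:

          qemu-img info DietPi_64GB.img
          # expect: file format: raw, virtual size: 64 GiB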
  12. I had the same trouble, but solved it using testdisk (downloadable via Nerd-Pack) to scan the image file. Testdisk detected an issue with the cylinders number and recommended changing it to 64. After making and writing that change, it scanned through fine. DietPi is awesome; the default minimal install uses 64MB of RAM and boots in seconds. I'll add a short guide on getting it all working when I'm at a keyboard again.