nick5429

Community Developer

Everything posted by nick5429

  1. It appears the difference is the '-a' flag. Without '-a' (and thus without recursion), rsync skips the directory outright, so nothing is transferred and nothing is removed -- the command I thought was safe is indeed safe:

```
root@nickserver:/mnt/disk4# rsync -v --info=progress2 --remove-source-files /mnt/disk4/death /mnt/disk4/
skipping directory death
              0 100%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/0)
sent 17 bytes  received 12 bytes  58.00 bytes/sec
total size is 0  speedup is 0.00
root@nickserver:/mnt/disk4# ls -lha /mnt/disk4/death
total 0
drwxrwxrwx 2 root   root    66 May 12 11:14 ./
drwxrwxrwx 9 nobody users  142 May 12 11:13 ../
-rw-rw-rw- 1 root   root     0 May 12 11:14 myfile1
-rw-rw-rw- 1 root   root     0 May 12 11:14 myfile2
-rw-rw-rw- 1 root   root     0 May 12 11:14 myfile3
-rw-rw-rw- 1 root   root     0 May 12 11:14 myfile4
```

With '-a', the same command deletes every file in the directory, even though nothing is actually transferred (note xfr#0):

```
root@nickserver:/mnt/disk4# rsync -av --info=progress2 --remove-source-files /mnt/disk4/death /mnt/disk4/
sending incremental file list
              0 100%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/5)
sent 144 bytes  received 49 bytes  386.00 bytes/sec
total size is 0  speedup is 0.00
root@nickserver:/mnt/disk4# ls -lha /mnt/disk4/death
total 0
drwxrwxrwx 2 root   root     6 May 12 11:24 ./
drwxrwxrwx 9 nobody users  142 May 12 11:13 ../
```
  2. This was precisely the intent; the rsync manpage says as much: nothing got transferred, so nothing should have been deleted. I'd have sworn that in the past, when I was *TRYING* to copy-verify-and-delete data which was duplicated on two different disks (e.g., at times when the mover flaked out and identical data existed on both the cache and the regular array), it completely ignored (and explicitly did not delete) source files which also existed at the destination. But the trial example I just constructed shows different behaviour (a possible explanation is sketched below):

```
root@nickserver:/mnt# ls /mnt/disk4/death
myfile1  myfile2  myfile3
root@nickserver:/mnt# ls /mnt/disk5/death
myfile1  myfile2  myfile3  myfile4
root@nickserver:/mnt# rsync -av --remove-source-files --info=progress2 -X /mnt/disk4/death^Cmnt/disk5/
root@nickserver:/mnt# rsync -av --remove-source-files --info=progress2 -X /mnt/disk5/death /mnt/disk4/
sending incremental file list
death/
death/myfile1
              0 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=0/5)
death/myfile2
              0 100%    0.00kB/s    0:00:00 (xfr#2, to-chk=2/5)
death/myfile3
              0 100%    0.00kB/s    0:00:00 (xfr#3, to-chk=1/5)
death/myfile4
              0 100%    0.00kB/s    0:00:00 (xfr#4, to-chk=0/5)
sent 309 bytes  received 128 bytes  874.00 bytes/sec
total size is 0  speedup is 0.00
root@nickserver:/mnt# ls /mnt/disk4/death
myfile1  myfile2  myfile3  myfile4
root@nickserver:/mnt# ls /mnt/disk5/death
root@nickserver:/mnt#
```
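The behaviour I remembered matches what rsync's --ignore-existing option would produce -- an assumption on my part, since I haven't re-verified that the old runs used it. With --ignore-existing, files already present at the destination are skipped entirely, and --remove-source-files only deletes files that were actually part of the transfer:

```
# myfile1-3 already exist on disk4, so they would be skipped and left on
# disk5; only myfile4 would be transferred and removed from the source.
rsync -av --ignore-existing --remove-source-files /mnt/disk5/death /mnt/disk4/
```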
  3. The unBalance plugin was not giving me sane behavior, so I decided I could just rsync manually on the commandline to accomplish the same thing: moving all the data in a ~7TB share onto a newly-installed, empty 8TB disk -- the new disk is disk6. I ran the following to generate the list of commands I intended to execute:

```
root@nickserver:/mnt/disk6# for i in /mnt/disk*
> do
> echo rsync -av --remove-source-files --info=progress2 -X "$i/DLs" "/mnt/disk6/"
> done
rsync -av --remove-source-files --info=progress2 -X /mnt/disk1/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk11/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk12/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk13/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk14/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk15/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk16/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk17/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk18/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk2/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk3/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk4/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk5/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disk6/DLs /mnt/disk6/   # unnecessary, but I assessed as harmless
rsync -av --remove-source-files --info=progress2 -X /mnt/disk9/DLs /mnt/disk6/
rsync -av --remove-source-files --info=progress2 -X /mnt/disks/DLs /mnt/disk6/   # unnecessary, but I assessed as harmless
```

Not all of these were fully relevant, and not all the disks had data on them -- but I briefly evaluated that 'it should be fine anyway'. I copied and pasted all those rsync commands back into my terminal, and walked away for a couple of days. When I returned, disk6 had essentially zero data:

```
root@nickserver:/mnt/disk6# du -hs /mnt/disk6
292K    /mnt/disk6
```

My log/alerts showed various other disks dropping below the warning thresholds as their data was offloaded.
Disk6 showed filling up, filling up, filling up to the 97% warning threshold -- then, 5 minutes later, dropping down to 'normal utilization level' (which appears to effectively mean "empty"):

```
10-05-2022 20:53  Unraid Disk 6 message           Notice   [NICKSERVER] - Disk 6 returned to normal utilization level   H7280A520SUN8.0T_001649PAR4LV_VLKAR4LV_35000cca260bc9574 (sdd)  normal
10-05-2022 20:47  Unraid Disk 6 disk utilization  Alert    [NICKSERVER] - Disk 6 is low on space (97%)                  H7280A520SUN8.0T_001649PAR4LV_VLKAR4LV_35000cca260bc9574 (sdd)  alert
10-05-2022 20:32  Unraid Disk 6 disk utilization  Alert    [NICKSERVER] - Disk 6 is low on space (96%)                  H7280A520SUN8.0T_001649PAR4LV_VLKAR4LV_35000cca260bc9574 (sdd)  alert
10-05-2022 19:29  Unraid Disk 6 disk utilization  Warning  [NICKSERVER] - Disk 6 is high on usage (91%)                 H7280A520SUN8.0T_001649PAR4LV_VLKAR4LV_35000cca260bc9574 (sdd)  warning
10-05-2022 18:05  Unraid Disk 4 message           Notice   [NICKSERVER] - Disk 4 returned to normal utilization level   TOSHIBA_DT01ACA300_X3G716ZKS (sdx)  normal
10-05-2022 16:28  Unraid Disk 2 message           Notice   [NICKSERVER] - Disk 2 returned to normal utilization level   TOSHIBA_MG03ACA300_54G1KI4TF (sds)  normal
10-05-2022 13:38  Unraid Disk 17 message          Notice   [NICKSERVER] - Disk 17 returned to normal utilization level  HUS726060AL5210_NAGZ0EHY_35000cca242369310 (sdk)  normal
```

Did I overlook something obvious/stupid in my rsync commands? Was that `rsync -av --remove-source-files --info=progress2 -X /mnt/disk6/usenet /mnt/disk6/` command destructive after all? It shouldn't have been, given the description of the --remove-source-files flag (which only deletes successfully transferred source data, and this command should have moved nothing). A safer variant of the generator is sketched below.

In related news -- do we have a preferred xfs un-delete tool, ideally one which will attempt to preserve filenames as much as possible? I have another empty disk large enough to hold any recovered data....
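In hindsight, a safer generator would have refused to let the destination disk appear as a source at all -- a minimal sketch (the same commands as above, just with disk6 excluded):

```
for i in /mnt/disk*
do
    [ "$i" = "/mnt/disk6" ] && continue    # never move a disk onto itself
    echo rsync -av --remove-source-files --info=progress2 -X "$i/DLs" "/mnt/disk6/"
done
```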
  4. It doesn't look like this report was ever addressed, and I have this same problem. In my case, the target disk I *want* to select has enough free space (7.5TB out of 8TB) to hold the entire user share I've selected (~6.2TB), but that disk does not show in the available targets. Often there are zero options for selecting the target; other times, unBalance offers me target disks that have <1TB free (for this 6+TB transfer). Docker-safe new permissions has been run recently, and all dockers are stopped. Any idea what's going on here or how to resolve it?
  5. @gfjardim Hey, this is happening to me now as well [2 years later]. Unraid 6.9.3, plugin version 2021.04.11 [up to date]. I noticed 'lsof' was pegging an entire CPU at 100%; I investigated, and it's coming from /etc/rc.d/rc.diskinfo. From /var/log/diskinfo.log, it looks like this benchmark is being run continuously in a loop, with no delay/sleep between iterations:

```
Mon Oct 18 22:17:49 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 14.509439s.
Mon Oct 18 22:18:03 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 13.332904s.
Mon Oct 18 22:18:47 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 13.167924s.
Mon Oct 18 22:19:00 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 12.680031s.
Mon Oct 18 22:19:44 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 12.882862s.
Mon Oct 18 22:19:58 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 14.628200s.
Mon Oct 18 22:20:44 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 14.887803s.
Mon Oct 18 22:20:57 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 13.041714s.
```

Those drives are mounted by Unassigned Devices. I'm not sure what this benchmark was trying to accomplish, as I don't have any preclears running, and haven't since a reboot.
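For anyone else chasing this, the back-to-back scheduling is easy to see live -- each lsof pass takes ~13s and the next one starts immediately:

```
# Watch new benchmark entries arrive as fast as the previous lsof finishes:
tail -f /var/log/diskinfo.log | grep 'benchmark: shell_exec(lsof'
```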
  6. I have two separate dockers running delugevpn on my unraid machine via PIA VPN. They both worked well simultaneously with PIA's 'old' network and deluge ~2.0.3. After upgrading to the newest delugevpn docker and PIA's 'nextgen' network (identical .ovpn files for each), only one of them works properly. One works perfectly; the other gets stuck at Tracker Status: 'announce sent' (vs 'announce ok'), with the exact same public linux test torrent (or with any other torrent). The logs appear to show that both are properly getting separate forwarded ports set up, with no obvious errors, and I don't think I changed anything other than what was required to move to nextgen PIA. They're set up basically identically except for different download folders and different host port mappings (went port+1 for each), both running in network bridge mode. Any ideas?
  7. How can I select multiple drives at once to process in a 'scatter' operation? I have 5 disks that I want to move all the data off and decommission; doing them one at a time (and needing to circle back and remember to move on to the next one at the appropriate time) is going to be a hassle.
  8. Good catch -- where do you see that in the diag reports? I found similar info digging in the syslog once you pointed it out, but not formatted like what you quoted. Is there a summary somewhere I'm missing?

It looks like these are the commands for SAS drives:

```
sdparm --get=WCE /dev/sdX
sdparm --set=WCE /dev/sdX
```

Judging from internet comments, this possibly has to be re-enabled on every boot (a sketch of doing that is below). I'll give it a try and see how the 2nd parity disk rebuild goes. Initial regular-array write tests with 'reconstruct write' definitely see a speed improvement after enabling this on the parity drive.
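A minimal sketch of re-enabling it at boot, assuming the setting really does reset on a power cycle and that appending to the flash 'go' file (/boot/config/go) is an acceptable place for it -- the device names are illustrative:

```
# Re-enable write-cache (WCE) on each SAS drive at boot.
# Substitute the actual SAS device nodes for these examples.
for dev in /dev/sdd /dev/sdk; do
    sdparm --set=WCE "$dev"
done
```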
  9. You're likely right -- I misinterpreted the output of iostat and came to the wrong conclusion. The 'write request size' is listed in kB; writes are just being buffered into 512kB chunks before being written. It's unlikely to have anything to do with the drive block size (the invocation is sketched below for reference).

I did a brief parity check as a speed test after the rebuild finished, and a read-only parity check was going at ~110MB/sec. Still, something's not right if the 5-6TB portion of the rebuild (when the new fast drive is the only disk active) runs at 40MB/sec, when it was >175MB/sec during the preclear.

"Routine writes" to the array (non-cache, tested with dd) with the new 6TB drive installed go at about 40MB/sec, regardless of RMW or 'turbo write' mode. The new drives are all 6TB SAS, and individually they perform great. Something odd showed up when testing with the diskspeed plugin: when all drives are in use, the new 6TB SAS drives consistently take a much bigger perf hit than the legacy SATAs (when, if anything, I'd expect them to be faster -- and alone, they are faster).

Diags and diskspeed results attached (nickserver-diagnostics-20200717-1007.zip).
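For reference, the kind of iostat invocation that produces that 'write request size' column -- a sketch; the column name varies by sysstat version (wareq-sz in kB on newer releases, avgrq-sz in 512-byte sectors on older ones):

```
# Extended per-device stats for the parity drive every 5 seconds;
# the device name is illustrative.
iostat -x sdd 5
```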
  10. I'm in the process of adding several 6TB 512e drives (physically 4k sector size, but emulating 512B). Right now, my parity drive upgrade with the first drive is going *extremely* slowly compared to expected speeds with this new drive, even on the portion of the drive that is larger than any other drive (i.e., where no reads are required). All other drives in the system are <=3TB, and the parity rebuild onto the 6TB drive is currently at the ~5TB position, but is only writing at <40MB/sec. The only activity is writes to the new parity drive; no reads are happening at all. From preclear testing, even the slowest portion of this disk writes at >175MB/sec.

Digging into iostat details, it looks like unraid is using a write size of 512 instead of 4k, which is likely slowing this down significantly (I assume the drive internally does a read-modify-write to emulate 512B writes to a 4k physical sector? -- a quick sanity check is sketched below). How can I tell unraid to use a 4k access size for parity checks/rebuilds? New filesystems can be created with a 4k block size, which should help for the new data disks; but if my parity drive is 512e, it would also likely be faster if unraid used a 4k access size for everything, not just new filesystems. Is there a setting to change this as well?
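As that sanity check, the drive's reported geometry can be confirmed directly (sdX being whichever device node the parity drive has); a 512e drive should report a 512B logical and a 4096B physical sector size:

```
blockdev --getss --getpbsz /dev/sdX
# Equivalent sysfs view:
cat /sys/block/sdX/queue/logical_block_size /sys/block/sdX/queue/physical_block_size
```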
  11. I have an old nvidia card which requires the 340.xx driver line for support, and the card *does* support nvenc/nvdec. I'm able to compile and load the appropriate nvidia driver myself, but I'd also like to take advantage of the other modifications made to the underlying unraid/docker system as part of this plugin's work, beyond simply loading the driver. Where is the source for those additional changes, along with build instructions to create the distributed packages? A general outline is fine; I can figure it out from there.
  12. Hm. Well, I blew away the old installation and re-selected the packages (being careful to only select things that weren't already installed by the underlying unraid system), and it seems fine. Good for me, but it doesn't fully answer whether it was a problem with my flash or a problem with some built-in package being replaced. If I feel bold (and feel like dealing with another crashed system), maybe I'll re-enable having DevPack install all the old packages later.
  13. Something in this prevents my server from booting on unraid 6.4.1. It took a couple of hours for me to narrow it down to this plugin. This was working fine with my setup on 6.3.5, and I didn't enable/disable any packs. Here's what I've got in DevPack.cfg:

```
attr-2_4_47="no"
binutils-2_27="yes"
bzip2-1_0_6="yes"
cxxlibs-6_0_18="yes"
expat-2_2_0="no"
flex-2_6_0="no"
gc-7_4_2="yes"
gcc-5_4_0="yes"
gdbm-1_12="no"
gettext-0_19_8_1="no"
glib2-2_46_2="no"
glib-1_2_10="yes"
glibc-2_24="yes"
gnupg-1_4_21="no"
gnutls-3_5_8="yes"
gpgme-1_7_1="no"
guile-2_0_14="yes"
json-c-0_12="no"
json-glib-1_2_2="no"
kernel-headers-4_4_38="yes"
libelf-0_8_13="no"
libevent-2_1_8="no"
libgcrypt-1_7_5="no"
libgpg-error-1_23="no"
libjpeg-turbo-1_5_0="no"
libmpc-1_0_3="yes"
libnl-1_1_4="no"
libpcap-1_7_4="no"
libunistring-0_9_3="no"
libX11-1_6_4="yes"
make-4_2_1="yes"
ncurses-5_9="yes"
openssl-1_0_2k="no"
pcre-8_39="no"
pkg-config-0_29_1="yes"
sqlite-3_13_0="no"
tcl-8_6_5="yes"
tclx-8_4_1="no"
tk-8_6_5="no"
xproto-7_0_29="no"
xz-5_2_2="yes"
zlib-1_2_8="yes"
```

This caused a similar symptom as in [link]. Unfortunately, I wasn't able to capture a more complete log, since it completely locks up the system. The files in /boot/config/plugins/DevPack/packages/6.4 that it managed to download before locking up are:

```
binutils-2.27-x86_64-2.txz*
bzip2-1.0.6-x86_64-1.txz*
cxxlibs-6.0.18-x86_64-1.txz*
gc-7.4.2-x86_64-3.txz*
gcc-5.4.0-x86_64-1.txz*
glib-1.2.10-x86_64-3.txz*
glibc-2.24-x86_64-2.txz*
gnutls-3.5.8-x86_64-1.txz*
guile-2.0.14-x86_64-1.txz*
kernel-headers-4.4.38-x86-1.txz*
libX11-1.6.4-x86_64-1.txz*
libmpc-1.0.3-x86_64-1.txz*
make-4.2.1-x86_64-1.txz*
ncurses-5.9-x86_64-4.txz*
packages-desc*
packages.json*
pkg-config-0.29.1-x86_64-2.txz*
tcl-8.6.5-x86_64-2.txz*
xz-5.2.2-x86_64-1.txz*
zlib-1.2.8-x86_64-1.txz*
```

Unfortunately, they're all timestamped within the same second, so I'm unable to determine the processing order that way.

PS -- this is super handy! Hope you get it working again soon, and thanks.
  14. I had this same problem, and it went away when I disabled the DevPack plugin. Not sure yet what in there might be triggering it.
  15. Of course. But can I then remove that device, do a "New Config", and tell unraid "trust me, the parity is still good even though I removed a device" with dual parity mode active? The answer is trivially yes in single parity mode, where P is a simple XOR; but I didn't see this directly addressed for dual parity, where the calculations are much more complex (and the procedure was defined before dual parity mode existed), so I wanted to ask.
  16. I know the P+Q parity scheme is a lot more complex than just a simple XOR (a sketch of the construction is below). Is the manual procedure in the first post valid with dual parity?
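For context on why position matters: this is a sketch of the standard RAID-6 P+Q construction, which I assume unraid's dual parity follows (as the Linux md/raid6 code does), over data disks D_0 through D_(n-1):

```
P = D_0 \oplus D_1 \oplus \cdots \oplus D_{n-1}
Q = \sum_{i=0}^{n-1} g^i \cdot D_i    (over GF(2^8), generator g = 2)
```

P is position-independent, so a zeroed disk contributes nothing and can be dropped without touching P. Q, however, weights each disk by its slot index i; removing a disk or renumbering the remaining slots changes Q unless the removed disk was all zeros and every other disk keeps its slot assignment.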
  17. I tried it both ways. When I wasn't seeing any uploading on my usual private torrents, I found the most active public torrent possible as a test -- and I see virtually no upload there either
  18. So I've got everything configured and set up, and I'm getting great download speeds through the PIA Netherlands endpoint (20+ MB/sec) -- but my upload is all but nonexistent. I'm on a symmetric gigabit fiber connection (1000Mbit/sec upload and download). "Test active port" in deluge comes back with a happy little green ball, and strict port forwarding in the container config is enabled.

I loaded up about 10 test torrents on 3 different private trackers with a moderate number of peers, and see zero upload (as in, not even a number shown in the 'upload' column). Just for funsies, I pulled up a public torrent with 60 seeds and 600 leechers and downloaded the whole thing: a total of 30 KB/sec upload on that torrent. Something is clearly wrong here.

I've seen several other comments about this throughout the thread, but no resolution. Does uploading work correctly for anyone using this with PIA??
  19. Perhaps my posts should be split off into a new thread/defect report (mods??) with a reference from this thread as an additional data point -- but there's no way my report (or this one, presuming the same problem) is a "docker issue". Docker hadn't been given any reference to the cache drive; unraid is the only thing that could have made the decision to write to /mnt/cache/<SHARE>. Also, I noted the same problem on a share that docker has never touched.
  20. Investigating further, I see the same issue on a share (/mnt/user/Nick) which is only ever accessed over SMB or the commandline, where I definitely would not have manually specified /mnt/cache/Nick. Share "Nick" is set "cache=no, excluded disks=disk1". There is plenty of space on the array, and on the individual relevant array drives, for both these shares:

```
root@nickserver:/mnt/user# df -h /mnt/user
Filesystem      Size  Used Avail Use% Mounted on
shfs             23T   19T  4.3T  82% /mnt/user
root@nickserver:/mnt/user# df -h /mnt/disk*
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1        1.4T   22G  1.4T   2% /mnt/disk1
/dev/md10       2.8T  2.6T  175G  94% /mnt/disk10
/dev/md11       1.9T  1.5T  337G  82% /mnt/disk11
/dev/md12       1.9T  1.5T  338G  82% /mnt/disk12
/dev/md4        1.9T  1.5T  403G  79% /mnt/disk4
/dev/md5        1.9T  1.7T  174G  91% /mnt/disk5
/dev/md6        1.9T  1.7T  175G  91% /mnt/disk6
/dev/md7        2.8T  2.6T  161G  95% /mnt/disk7
/dev/md8        2.8T  2.6T  171G  94% /mnt/disk8
/dev/md9        2.8T  2.6T  175G  94% /mnt/disk9
root@nickserver:/mnt/user# df -h /mnt/user0
Filesystem      Size  Used Avail Use% Mounted on
shfs             22T   18T  3.4T  85% /mnt/user0
```
  21. The responses here are centered around "OP has something misconfigured", but I am hitting this too -- I just noticed a similar problem this morning with my Crashplan share. It appears a bug was introduced somewhere here. The common element is docker, but that doesn't necessarily mean docker is the source.

I use the Crashplan docker, and my Crashplan share is not, and has not recently been, configured to write to the cache drive. In the unraid server UI, I have "included disks=disk4" and "use cache=no" for my Crashplan share to keep all of my backups contained on one disk, and that's it. The docker is passed a mountpoint of /mnt/user/ -- nowhere do I give it any method by which the docker or the app would even be capable of writing to the cache drive, and yet I have ~250GB of recently-written files in /mnt/cache/Crashplan. It's got to be the underlying unraid mechanism that determines where to write files.

```
root@nickserver:/mnt/user# du -hs /mnt/cache/Crashplan/
278G    /mnt/cache/Crashplan/
root@nickserver:/mnt/user# du -hs /mnt/disk4/Crashplan/
713G    /mnt/disk4/Crashplan/
root@nickserver:/mnt/user# du -hs /mnt/user/Crashplan/
1.4T    /mnt/user/Crashplan/
root@nickserver:/mnt/user# du -hs /mnt/user0/Crashplan/
1.2T    /mnt/user0/Crashplan/
# ls -lh /mnt/cache/Crashplan/503826726370413061/cpbf0000000000013241371
total 2.4G
-rw-rw-rw- 1 nobody users   23 Oct 13 20:44 503826726370413061
-rw-rw-rw- 1 nobody users 2.4G Dec 11 15:37 cpbdf
-rw-rw-rw- 1 nobody users 1.8M Dec 12 11:51 cpbmf
```

There are still files being correctly written to /mnt/disk4/Crashplan, though, so the failure is apparently intermittent. Unraid should never be writing files to /mnt/cache/Crashplan, and the docker/app/me aren't doing it manually. Attached screenshots show the docker and share configuration.
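A quick way to confirm that new files are still actively landing on the cache side -- a sketch that just lists cache-side files modified within the last hour, newest last:

```
find /mnt/cache/Crashplan -type f -mmin -60 -printf '%T@ %p\n' | sort -n | tail
```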
  22. I have an updated encryption plugin for 6.x that I'd like to release. Encryption has been a widely requested feature on unraid for many years and hasn't received any real first-party traction. An encfs implementation isn't nearly as good as a proper solution that lives below the unraid layer, but it could help bridge the gap until real disk encryption is implemented for unraid. However, I'm not comfortable putting up a "release" of an encryption plugin which logs its own password -- that just provides a false sense of security, which is arguably worse than none at all. I wanted to bump this and request its inclusion ASAP, ideally in the upcoming 6.3, please.
  23. I'd like to request some mechanism for passing arguments (e.g., passwords) from a plugin's WebUI page to the plugin's scripts without logging the string in the syslog. I'm working on an encryption plugin which needs to pass an encryption password/key from an input field to the backend scripts in order to mount/encrypt/decrypt a volume, and the unraid 6.1+ plugin system seems to log all parameters. It seems inappropriate to log that password/key. Perhaps such fields could be passed from the form submission in "redactN" arguments that get logged as "*****" or "[REDACTED_FIELD]" instead of as "argN" arguments -- with the "redact" variables always presented contiguously after all the "arg" variables to the underlying script? (A sketch of the idea is below.)
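A minimal sketch of the proposed convention from the receiving script's point of view -- everything here is hypothetical; the names, the ordering, and the masking are the proposal, not existing unraid behavior:

```
#!/bin/sh
# Hypothetical invocation by the plugin system:
#   mount_encfs.sh <arg1> <arg2> <redact1>
# The syslog line would show:  arg1='/mnt/secure' arg2='ro' redact1='*****'
# while the script still receives the real value:
share_path="$1"    # arg1    -- logged verbatim
mount_opts="$2"    # arg2    -- logged verbatim
passphrase="$3"    # redact1 -- logged masked, delivered intact here
```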
  24. It's possible with XenDesktop and VMware Horizon -- see NVIDIA GRID. Though it may only work with the Tesla line of add-in cards ($$$$$), and probably not with consumer-grade cards.
  25. Yes, it certainly would be. However, in this case my understanding is that this is impossible due to technical limitations in docker.