nick5429

Community Developer
Everything posted by nick5429

  1. Of course. But can I then remove that device, do a "New Config", and tell unraid "trust me, the parity is still good even though I removed a device" with dual parity mode active? The answer is trivially yes in single parity mode, where P is a simple XOR; I didn't see this directly addressed for dual parity, where the calculations are much more complex (and the procedure was defined before dual parity mode existed), so I wanted to ask.
  2. I know the P+Q parity scheme is a lot more complex than just a simple XOR. Is the manual procedure in the first post valid with dual parity?
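     For reference, my understanding (assuming unraid's dual parity follows the standard RAID-6 P+Q construction over GF(2^8), which I haven't confirmed against the actual implementation) is:
         P = D_0 \oplus D_1 \oplus \cdots \oplus D_{n-1}
         Q = g^{0} D_0 \oplus g^{1} D_1 \oplus \cdots \oplus g^{n-1} D_{n-1}
     A zeroed disk contributes nothing to either sum, but Q ties each data disk to its index i through the coefficient g^i -- so I'd only expect both parities to remain valid if the remaining disks keep their original slot numbers after the removal. That's exactly the part I'd like confirmed.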
  3. I tried it both ways. When I wasn't seeing any uploading on my usual private torrents, I found the most active public torrent possible as a test -- and I see virtually no upload there either
  4. So I've got everything configured and set up, and am getting great download speeds through the PIA Netherlands endpoint (20+ MB/sec) -- but my upload is all-but-nonexistent. I'm on a symmetric gigabit fiber connection (1000Mbit/sec upload and download). "Test active port" in deluge comes back with a happy little green ball. Strict port forwarding in the container config is enabled. I loaded up about 10 test torrents on 3 different private trackers with a moderate number of peers, and see zero upload (as in, not even a number shown in the 'upload' column). Just for funsies, I pulled up a public torrent with 60 seeds and 600 leechers and downloaded the whole thing. I have a total of 30 KB/sec upload on that torrent. Something is clearly wrong here. I've seen several other comments about this throughout the thread, but no resolution. Does uploading work correctly for anyone using this with PIA??
  5. Perhaps my posts should be split off to a new post/defect report (mods??) with a reference from this thread as an additional data point -- but there's no way my report (or this one, presuming it's the same problem) is a "docker issue". Docker hadn't been given any reference to the cache drive; unraid is the only thing that could have made the decision to write to /mnt/cache/<SHARE>. Also, I noted the same problem on a share that docker has never touched.
  6. Investigating further, I see the same issue on a share (/mnt/user/Nick) which is only ever accessed over SMB or the command line, and where I definitely would not have manually specified /mnt/cache/Nick. Share "Nick" is set to "cache=no, excluded disks=disk1". There is plenty of space on the array and on the individual relevant array drives for both of these shares:
        root@nickserver:/mnt/user# df -h /mnt/user
        Filesystem      Size  Used Avail Use% Mounted on
        shfs             23T   19T  4.3T  82% /mnt/user
        root@nickserver:/mnt/user# df -h /mnt/disk*
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/md1        1.4T   22G  1.4T   2% /mnt/disk1
        /dev/md10       2.8T  2.6T  175G  94% /mnt/disk10
        /dev/md11       1.9T  1.5T  337G  82% /mnt/disk11
        /dev/md12       1.9T  1.5T  338G  82% /mnt/disk12
        /dev/md4        1.9T  1.5T  403G  79% /mnt/disk4
        /dev/md5        1.9T  1.7T  174G  91% /mnt/disk5
        /dev/md6        1.9T  1.7T  175G  91% /mnt/disk6
        /dev/md7        2.8T  2.6T  161G  95% /mnt/disk7
        /dev/md8        2.8T  2.6T  171G  94% /mnt/disk8
        /dev/md9        2.8T  2.6T  175G  94% /mnt/disk9
        root@nickserver:/mnt/user# df -h /mnt/user0
        Filesystem      Size  Used Avail Use% Mounted on
        shfs             22T   18T  3.4T  85% /mnt/user0
  7. The responses here are centered around "OP has something misconfigured", but I am hitting this too. I just noticed a similar problem this morning with my Crashplan share -- it appears a bug was introduced somewhere here. The common factor is docker, but that doesn't necessarily mean docker is the source. I use the Crashplan docker, and my Crashplan share is not, and has not recently been, configured to write to the cache drive. In the unraid server UI, I have "included disks=disk4" and "use cache=no" for my Crashplan share to keep all of my backups contained on one disk, and that's it. The docker is passed a mountpoint of /mnt/user/ -- nowhere do I give it any method by which the docker or the app would even be capable of writing to the cache drive, and yet I have ~250GB of recently-written files in /mnt/cache/Crashplan. It has to be the underlying unraid mechanism that determines where to write files.
        root@nickserver:/mnt/user# du -hs /mnt/cache/Crashplan/
        278G    /mnt/cache/Crashplan/
        root@nickserver:/mnt/user# du -hs /mnt/disk4/Crashplan/
        713G    /mnt/disk4/Crashplan/
        root@nickserver:/mnt/user# du -hs /mnt/user/Crashplan/
        1.4T    /mnt/user/Crashplan/
        root@nickserver:/mnt/user# du -hs /mnt/user0/Crashplan/
        1.2T    /mnt/user0/Crashplan/
        # ls -lh /mnt/cache/Crashplan/503826726370413061/cpbf0000000000013241371
        total 2.4G
        -rw-rw-rw- 1 nobody users   23 Oct 13 20:44 503826726370413061
        -rw-rw-rw- 1 nobody users 2.4G Dec 11 15:37 cpbdf
        -rw-rw-rw- 1 nobody users 1.8M Dec 12 11:51 cpbmf
     There are still files being correctly written to /mnt/disk4/Crashplan though, so the failure is apparently intermittent. Unraid should never be writing files to /mnt/cache/Crashplan, and the docker/app/me aren't doing it manually. Attached screenshots show the docker and share configuration.
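     If anyone else hits this, the obvious interim workaround would be to move the misplaced files back to the array disk by hand, since (as far as I know) the mover won't touch a cache=no share. A rough sketch using the share/disk names from my setup above -- stop anything writing to the share first:
        # move everything from the cache pool back to the intended array disk,
        # then remove the emptied directory tree left behind on the cache
        rsync -avh --remove-source-files /mnt/cache/Crashplan/ /mnt/disk4/Crashplan/
        find /mnt/cache/Crashplan -type d -empty -delete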
  8. I have an updated encryption plugin for 6.x that I'd like to release. Encryption has been a widely requested feature on unraid for many years that hasn't received any real first-party traction. An encfs implementation isn't nearly as good as a proper solution that lives below the unraid layer, but it could help bridge the gap until real disk encryption is implemented for unraid. However -- I'm not comfortable putting up a "release" of an encryption plugin which logs its password. That just provides a false sense of security, which is arguably worse than none at all. So I wanted to bump this and request its inclusion ASAP -- ideally in the upcoming 6.3, please?
  9. I'd like to request some sort of mechanism to pass arguments (e.g., passwords) from a plugin's WebUI page to the plugin's scripts without logging the string in the syslog. I'm working on an encryption plugin which needs to pass an encryption password/key from an input field to the backend scripts to mount/encrypt/decrypt a volume, and the unraid 6.1+ plugin system seems to log all parameters. It seems inappropriate to log that password/key. Perhaps for these fields, they can be passed from the form submission in "redactN" arguments that get logged as "*****" or "[REDACTED_FIELD]" instead of "argN" arguments -- and always present the "redact" variables contiguously after all the "arg" variables to the underlying script?
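     To make that concrete, a hypothetical dispatcher could log the argN values verbatim but mask anything named redactN, while still handing the real values to the script. The names and calling convention below are purely illustrative -- this is not the actual unraid plugin API:
        #!/bin/bash
        # usage (hypothetical): dispatch.sh /path/to/rc.script arg1=mount redact1=secretpassword
        script="$1"; shift
        log_line="$script"
        pass_args=()
        for kv in "$@"; do
          key="${kv%%=*}"; val="${kv#*=}"
          case "$key" in
            redact*) log_line+=" '*****'" ;;   # never write the secret to syslog
            *)       log_line+=" '$val'"  ;;
          esac
          pass_args+=("$val")                  # the script still receives the real value
        done
        logger -t plugin "$log_line"
        "$script" "${pass_args[@]}"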
  10. It's possible with XenDesktop and VMware Horizon. See NVIDIA GRID. Though it may only work with the Tesla line of add-in cards ($$$$$), and probably not consumer-grade cards.
  11. Yes it certainly would be. However in this case, my understanding is that this is impossible due to technical limitations in docker
  12. I found that, and it's helpful -- it's just inadequate. Not trying to pick a fight or anything, but...
     a) That's all from someone's efforts at writing things down while they reverse engineered the plugin system. Helpful, certainly, but incomplete. E.g., even simple things like "what is the list of unraid events my plugin can operate on?" are missing (both from there and from anywhere else, as far as I could find).
     b) It's outdated, and some of what IS there is obsolete. Some things can be pieced together from other places, but scattered, partially-deprecated tidbits in forum posts don't add up to documentation of an API, especially if you want to encourage people to extend the ecosystem you're selling.
  13. It can't be THAT much of a corner case if my plugin hits it every time the command is executed, and several of your scripts apparently hit it as well. Though that was really more an oblique comment about how there's zero documentation available on the plugin system at all beyond "look at what I did and try to reverse engineer how it works. good luck."
  14. You are correct that you cannot directly "export" a fuse mountpoint from docker as a native mount, for technical reasons. The workaround if you really wanted a docker would be to export the ACD encfs mountpoint as something like a WebDAV share and then mount that (which again brings you back to needing a plugin or script running natively). See this docker: https://github.com/sedlund/acdcli-webdav Thus, to get encrypted ACD mountpoints working and integrated and looking like a native mount the way unraid users would want it, it needs to be a plugin.
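     For anyone curious what the native (consuming) side of that workaround would look like, here's a rough sketch -- assuming davfs2 and encfs are installed natively, and with the URL/port and paths as placeholders for however the acdcli-webdav container is actually configured:
        # mount the WebDAV share exported by the acdcli-webdav container
        mkdir -p /mnt/acd-webdav /mnt/acd-decrypted
        mount -t davfs http://127.0.0.1:8080/ /mnt/acd-webdav
        # layer the encfs decrypted view on top of the WebDAV mount
        encfs /mnt/acd-webdav/encrypted /mnt/acd-decrypted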
  15. Got it, thanks! Woulda been nice if that'd been clearly documented somewhere.... Curiosity: what causes this, what triggers it, why does THIS command hit it and other commands in my plugin don't, etc?
  16. Note that while this was moved from Defect Reports to Programming, I do still believe this is an unraid issue, not a "plugin development" issue -- though I'm willing to be proven wrong if anyone knows how to make this work.
  17. Description: I've been struggling with this behavior for years (since the unraid v5 RCs), and after some additional debugging effort over the past week, I am convinced this is a problem with unraid and not with my plugin. I'm developing a plugin which executes a script that invokes "encfs" to mount an encfs fuse-encrypted store. If the script executes an encfs mount command, the webUI will "spin forever" and never finish loading. The encfs command does not appear to be generating any output. I've tried numerous things (redirect all output to /dev/null, launch encfs in the background using "&", launch encfs even more in the background using "nohup", launch the command in a subshell [inside parens in the bash script], encfs mount using an 'expect' script to provide the password, encfs mount using a piped-in password, tee the output to a file and verify that the file contains 0 characters, long sleeps, no sleeps, more output, no output, etc) -- the result is always the same: the WebUI does not respond correctly if encfs is invoked. If I change nothing in the script except to replace the encfs mount command with an empty or logging command, the page loads properly. Relevant script lines:
        logger `whoami`
        logger `pwd`
        logger "starting encfs mount"
        echo "$1" | encfs -S --public "$ENCDIR0" "$DECDIR0" -- -o uid=99 -o gid=100 2>&1 | tee /var/log/plugins/encfs_mount_output
        sleep 2
        echo "Mount command done"
        logger "encfs mount command has returned"
     The WebUI shows "Mount command done", but "is loading" forever. And the mount *does* complete successfully and is completely usable. Executing the same command as the same user from the same CWD (i.e., "/usr/local/emhttp/plugins/encfs/scripts/rc.encfs mount test" as root from /), the script finishes quickly and returns to the command prompt as expected. If I change nothing except the encfs line, e.g. as follows:
        logger `whoami`
        logger `pwd`
        logger "starting encfs mount"
        # echo "$1" | encfs -S --public "$ENCDIR0" "$DECDIR0" -- -o uid=99 -o gid=100 2>&1 | tee /var/log/plugins/encfs_mount_output
        echo "$1" | logger "null command" | tee /var/log/plugins/encfs_mount_output
        sleep 2
        echo "Mount command done"
        logger "encfs mount command has returned"
     then the page finishes loading as expected.
     How to reproduce: Install the encfs plugin from my debugging branch: https://github.com/nick5429/unraid-encfs/blob/pipe/encfs.plg Enable the plugin, configure the directories to something different if you want, and follow the directions for manually setting up the encrypted store. Use "testing" as the password. Use the plugin UI with your password to attempt to mount.
     Expected results: Script finishes, WebUI loads.
     Actual results: Script finishes (as seen by the logger results), but the WebUI remains loading forever. If you hit "esc" to stop loading, then browse back to the plugin page (or list the directory at the command line), you'll see that the command did complete successfully and the encrypted store was mounted, even though the WebUI never finished loading.
     Other information: This exact same behavior has been occurring since unRAID v5. Side note: it'd be really great if there were a way to convey the password from the webUI form to the mounting script without logging it in syslog or storing it as a permanent file somewhere. This is a new problem as of the unraid v6.1 plugin system requirements -- I need a mode to disable logging of webUI command parameters.
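     For reference, the most aggressive detachment variant I can think of (setsid, plus redirecting both output streams and backgrounding) is below. I suspect the hang is the fuse daemon holding the web server's pipe open, but I haven't confirmed that, so treat this as a sketch rather than a known fix:
        # fully detach the encfs daemon from the calling process's stdout/stderr
        # (stdin is left as the pipe, since -S reads the password from stdin)
        echo "$1" | setsid encfs -S --public "$ENCDIR0" "$DECDIR0" -- -o uid=99 -o gid=100 >/dev/null 2>&1 &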
  18. Here's a unionfs package for slackware 14.2 -- https://pkgs.org/slackware-14.2/slackel-x86_64/unionfs-fuse-0.26-x86_64-1dj.txz.html
  19. Thanks, that did the trick. A latent btrfs bug prevented me from non-destructively fixing the problem ("a raid1 pool can be mounted degraded as rw only once; after that it's read-only unless you have all the disks present") -- which I didn't know until too late, so I could no longer 'fix' the existing pool or convert it from raid1 to single. I copied everything off and am working on rebuilding it manually, though.
  20. Is there a way I can configure unraid from the command line to either: a) not start the array on boot, or b) pass additional flags to the btrfs mount command? I had a two-drive "raid 1"-style btrfs cache pool. One of the drives flaked out and dropped out of the pool. I rebooted, and immediately upon coming back up it automatically started a btrfs balance. The balance is now stuck, btrfs/disk errors are filling up the syslog, and the balance can't be cancelled (either via the web UI or the command line), because the 'cancel' command waits until the current block has finished balancing -- and the balance isn't making forward progress because of the disk errors. If I could manage the disks in the webUI, I could un-assign/re-assign the cache disks as needed with good drives. But I can't cleanly stop the array once it's started, because /mnt/cache can't be unmounted while it's still balancing (and it automatically starts balancing again on boot) -- so the webUI spins forever trying to stop. The other option would be to pass skip_balance on mount.
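     By that I mean something along these lines (illustrative only -- the device name is a placeholder for whichever device is in the cache pool; skip_balance tells btrfs not to resume the interrupted balance, and degraded allows mounting with a missing raid1 member):
        # manual mount with the interrupted balance skipped
        mount -o degraded,skip_balance /dev/sdX1 /mnt/cache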
  21. It's more that -- to achieve true end-to-end fault tolerance, you must have ECC ram. The automatic checksumming that ZFS implements is still an enormous improvement in data integrity / bitrot detection, even without ECC ram. And no, the various checksum snapshot projects that have been implemented for unraid are not a comparable solution. They're (perhaps) better than nothing, but are still vastly inferior to having it built into the filesystem
  22. I strongly disagree. The lack of a true 'read-only' mode completely prevents you from being able to fall back on the "old" parity drive to avoid data loss if a second drive fails during the upgrade. With the current implementation, there are lower-level filesystem operations that issue writes to the disk, regardless of whether any user process or action has written to the filesystem.
  23. All reports online indicate SPICE offers a better and faster GUI interface than RDP for LAN connections to VMs. Why not support it? edit: not to mention VM-specific features like USB Redirection, instead of just using a generic remote access protocol
  24. Excellent! Seems to be working now. Thanks for all your work
  25. Hey there, love your work! Especially on the deluge-vpn integration. I was having trouble connecting to the daemon from my PC, and noticed in the iptables script that the port for remote connections to the daemon isn't opened. The result is that when the VPN is enabled, I can't connect to the daemon from the PC (I can connect fine when the VPN is disabled -- confirming that authentication, etc. is set up right). Use case: a bunch of plugins don't have a webUI and must be configured using the "real" client. The general recommendation on how to accomplish this is to run deluge as a thin client on your PC and connect to the server. With the way it's set up, only docker-to-docker communications (??) are allowed on port 58846 when the VPN is up. Could you add similar forwarding in the iptables script as is used for the webUI port, to allow access to the daemon from other LAN devices? E.g.:
        # accept input to deluge daemon port 58846
        iptables -A INPUT -i eth0 -p tcp --dport 58846 -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --sport 58846 -j ACCEPT
        [...]
        # accept output from deluge daemon port 58846 (used when tunnel down)
        iptables -A OUTPUT -o eth0 -p tcp --dport 58846 -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 58846 -j ACCEPT
        # accept output from deluge daemon port 58846 (used when tunnel up)
        iptables -t mangle -A OUTPUT -p tcp --dport 58846 -j MARK --set-mark 3
        iptables -t mangle -A OUTPUT -p tcp --sport 58846 -j MARK --set-mark 3
     (not 100% sure on the 'mark 3' part) Thanks!