nick5429

Community Developer
  • Content Count: 103
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About nick5429

  • Rank: Member


  1. I have two separate dockers running delugevpn on my unraid machine via PIA VPN. They both worked well simultaneously with PIA's 'old' network and deluge ~2.0.3. After upgrading to the newest delugevpn docker and PIA's 'nextgen' network (identical .ovpn files for each), only one of them works properly: one works perfectly, while the other gets stuck on Tracker Status 'announce sent' (vs 'announce ok') with the exact same public linux test torrent (or with any other torrent). The logs appear to show that both are properly getting separate forwarded ports set up with no obvious errors, and
  2. How can I select multiple drives at once to process in a 'scatter' operation? I have 5 disks that I want to move all the data off and decommission; doing them one at a time (and needing to circle back and remember to move on to the next one at the appropriate time) is going to be a hassle.
  3. Good catch; where do you see that in the diag reports? I found similar info digging in the syslog once you pointed it out, but not formatted like what you quoted. Is there a summary somewhere I'm missing? Looks like these are the commands for SAS drives: sdparm --get=WCE /dev/sdX and sdparm --set=WCE /dev/sdX (see the write-cache sketch after this list). Judging by comments online, this possibly has to be re-enabled on every boot? Will give that a try and see how the 2nd parity disk rebuild goes. Initial regular array write tests with 'reconstruct write' definitely see a speed improvement after enabling this on the parity
  4. You're likely right; I misinterpreted the output of iostat and came to the wrong conclusion. The 'write request size' is listed in kB; writes are just being buffered into 512kB chunks before being written, which is unlikely to have anything to do with the drive block size (see the iostat sketch after this list). I did a brief parity check as a speed test after the rebuild finished, and a read-only parity check was going at ~110MB/sec. Still, something's not right if the 5-6TB portion of the rebuild (when the new fast drive is the only disk active) is going at 40MB/sec, when it was >175MB/sec during the preclear. nickserver-diagnostics-202
  5. I'm in the process of adding several 6TB 512e drives (4k physical sectors, but they emulate 512-byte logical sectors; see the sector-size check after this list). Right now, my parity drive upgrade with the first drive is going *extremely* slowly compared to the expected speeds with this new drive, even on the portion of the drive that is larger than any other drive (ie, no reads are required). All other drives in the system are <=3TB, and the parity rebuild onto the 6TB drive is currently at the ~5TB position, but is only writing at <40MB/sec. The only activity is writes to the new parity drive, and no reads are happening at all. From preclear test
  6. I have an old nvidia card which requires the 340.xx driver line for support, and the card *does* support nvenc/nvdec. I'm able to compile and load the appropriate nvidia driver myself, but I'd also like to take advantage of the other modifications that have been done as part of the work for this plugin for docker compatibility, beyond simply loading the driver. Where is the source for the additional changes made to the underlying unraid/docker system with build instructions to create these distributed packages? A general outline is fine, I can figure it out from there.
  7. Hm. Well, I blew away the old installation and re-selected the packages (being careful to only select things that weren't already installed by the underlying unraid system), and it seems fine. Good for me, but it doesn't fully answer whether it was a problem with my flash or a problem with some built-in package system being replaced. If I feel bold (and feel like dealing with another crashed system), maybe I'll re-enable having DevPack install all the old packages later.
  8. Something in this prevents my server from booting on unraid 6.4.1. It took me a couple of hours to narrow it down to this plugin. This was working fine with my setup on 6.3.5, and I didn't enable/disable any packs. Here's what I've got in DevPack.cfg:
     attr-2_4_47="no"
     binutils-2_27="yes"
     bzip2-1_0_6="yes"
     cxxlibs-6_0_18="yes"
     expat-2_2_0="no"
     flex-2_6_0="no"
     gc-7_4_2="yes"
     gcc-5_4_0="yes"
     gdbm-1_12="no"
     gettext-0_19_8_1="no"
     glib2-2_46_2="no"
     glib-1_2_10="yes"
     glibc-2_24="yes"
     gnupg-1_4_21="no"
     gnutls-3_5_8="yes"
     gpgme-1_7_1="no"
     guile-2_0_14="yes"
     json-c-0_12="no"
     json-glib-1
  9. I had this same problem, and it went away when I disabled the DevPack plugin. Not sure yet what in there might be triggering it.
  10. Of course. But can I then remove that device, do a "New Config", and tell unraid "trust me, the parity is still good even though I removed a device" with dual parity mode active? The answer is trivially yes in single parity mode, where P is a simple XOR; I didn't see this directly addressed for dual parity, where the calculations are much more complex (and the procedure was defined before dual parity mode existed), so I wanted to ask.
  11. I know the P+Q parity scheme is a lot more complex than just a simple XOR (a sketch of the P/Q math is after this list). Is the manual procedure in the first post valid with dual parity?
  12. I tried it both ways. When I wasn't seeing any uploading on my usual private torrents, I found the most active public torrent possible as a test -- and saw virtually no upload there either.
  13. So I've got everything configured and set up, and I'm getting great download speeds through the PIA Netherlands endpoint (20+ MB/sec) -- but my upload is all but nonexistent. I'm on a symmetric gigabit fiber connection (1000Mbit/sec upload and download). "Test active port" in deluge comes back with a happy little green ball. Strict port forwarding in the container config is enabled. I loaded up about 10 test torrents on 3 different private trackers with a moderate number of peers, and see zero upload (as in, not even a number shown in the 'upload' column). Just for f
  14. Perhaps my posts should be split off into a new post/defect report (mods??) with a reference from this thread as an additional data point -- but there's no way my report (or this one, presuming the same problem) is a "docker issue". Docker hadn't been given any reference to the cache drive; unraid is the only thing that could have made the decision to write to /mnt/cache/<SHARE>. Also, I noted the same problem on a share that docker has never touched.
  15. Investigating further, I see the same issue on a share (/mnt/user/Nick) which is only ever accessed over SMB or the command line, and for which I definitely would not have manually specified /mnt/cache/Nick. Share "Nick" is set to "cache=no, excluded disks=disk1". There is plenty of space on the array and on the individual relevant array drives for both of these shares (see the cache-check sketch after this list):
      root@nickserver:/mnt/user# df -h /mnt/user
      Filesystem      Size  Used Avail Use% Mounted on
      shfs             23T   19T  4.3T  82% /mnt/user
      root@nickserver:/mnt/user# df -h /mnt/disk*
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/md1
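
For the SAS write-cache commands in item 3, here's a minimal sketch of what enabling the WCE bit might look like, assuming sdparm is installed and that /dev/sdb and /dev/sdc are placeholder device names (substitute your own parity/array SAS drives):

    #!/bin/bash
    # Check and enable the SCSI write-cache (WCE) bit on a list of SAS drives.
    # The device list below is a placeholder -- use your own device names.
    for dev in /dev/sdb /dev/sdc; do
        echo "Current WCE setting for $dev:"
        sdparm --get=WCE "$dev"      # 1 = write cache enabled, 0 = disabled
        sdparm --set=WCE "$dev"      # enable the write cache (may not persist)
    done

If the setting really does reset on every boot, one option (untested here) is to call a script like this from the flash drive's go file so it runs at each startup; sdparm's --save flag may also persist the setting on drives that support saved mode pages.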
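
For the request-size discussion in item 4, this is a generic sketch (not taken from the diagnostics above) of how to watch the average write request size while a rebuild is running; sdX is a placeholder device name:

    # Extended device statistics, refreshed every 5 seconds, for one drive.
    # The average request size column is areq-sz (in kB) on newer sysstat
    # versions and avgrq-sz (in 512-byte sectors) on older ones.
    iostat -x sdX 5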
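
Related to item 5, a quick way to confirm a drive really is 512e (4k physical sectors behind a 512-byte logical interface); again, sdX is a placeholder:

    # Logical and physical sector sizes; a 512e drive reports 512 and 4096.
    blockdev --getss --getpbsz /dev/sdX
    # Or for every drive at once:
    lsblk -o NAME,LOG-SEC,PHY-SEC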
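
For items 10 and 11, a sketch of why Q is more than an XOR, assuming unraid's dual parity follows the standard RAID-6 P/Q construction (my understanding, not something confirmed in this thread). With data blocks D_1 ... D_n at the same offset on each data disk:

    % P is the plain XOR of the data blocks; Q is a Reed-Solomon style syndrome
    % where multiplication happens in the Galois field GF(2^8) with generator g = 2.
    P = D_1 \oplus D_2 \oplus \cdots \oplus D_n
    Q = g^{0} D_1 \oplus g^{1} D_2 \oplus \cdots \oplus g^{n-1} D_n

Because Q weights each data disk by a different power of g, the position of each disk matters for Q in a way it doesn't for P, which is presumably why the single-parity reasoning doesn't carry over trivially.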
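
For item 15, a minimal sketch (using the share name "Nick" from that post and the standard unraid mount points) to see which physical devices actually hold the share's files:

    # How much of share 'Nick' sits on the cache vs. on each array disk.
    du -sh /mnt/cache/Nick 2>/dev/null
    du -sh /mnt/disk*/Nick 2>/dev/null
    # List some of the files that landed on the cache even though the share is cache=no.
    find /mnt/cache/Nick -type f 2>/dev/null | head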