jedimstr

Members
  • Content Count: 121
  • Joined
  • Last visited

Community Reputation

8 Neutral

About jedimstr

  • Rank: Member

Converted

  • Gender: Undisclosed
  • Personal Text: IAmYourFathersBrothersNephewsCousinsFormerRoomate

  1. Am I the only one who thinks it's a bad idea to put out a version update with "DO NOT INSTALL ON 6.8" in the default update path? Wouldn't it have been better to put out a separate beta plugin for manual install on 6.9 RC2 and keep the version that's compatible with the current stable release on the regular plugin path? What about users who just click Update All and don't check the release notes?
  2. Unfortunately I'm hitting exit code '56' from all the port-forward-capable endpoints on the list. I tried ca-vancouver and ca-toronto first (I'm usually on ca-toronto anyway), but they've been in a constant retry loop for the last few hours. None of the others are working either.
  3. I'm in the middle of a parity rebuild (upgrading one of my array disks from a 10TB to a 16TB Exos) and started getting hangs on my cache pool that led Docker and the webGUI to time out. When I do get through to the Settings or Tools tab (other tabs hang; sometimes Main partially loads and shows the array reading/writing), the parity rebuild is progressing (the footer percentage is still climbing), so I'm hesitant to do a 'powerdown -r' or a physical reboot. Progress can also be checked without the GUI; see the /proc/mdstat sketch after this list. There are long stretches when I can't access the webGUI at all (max_children errors; I've already raised the level in w…
  4. Probably more of a feature request, but is there a way to get Privoxy working with a WireGuard VPN provider instead of an OpenVPN provider? The goal would be speed. (Either way, a quick check that the proxy actually egresses through the tunnel is sketched after this list.)
  5. That particular drive ended up having multiple read errors and just dying, even on a fresh pre-clear pre-read. Ended up RMA'ing it. After taking that drive out of the equation I still get relatively slow parity syncs/rebuilds, but never as slow as with the RMA'd drive. The slowest now is in the double-digit MB/s range (but it goes back up to the high 80s or 90s again). A per-disk read test like the one sketched after this list is one way to spot a straggler drive.
  6. Yup it looks like that fixed it. Thanks!!!
  7. Removed and re-installed from CA. Here's the installation output:
     plugin: installing: https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/plugins/preclear.disk.plg
     plugin: downloading https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/plugins/preclear.disk.plg
     plugin: downloading: https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/plugins/preclear.disk.plg ... done
     plugin: downloading: https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/archive/preclear.disk-2020.01.13.txz ... done
     plugin: downloading: https://raw.githubusercontent…
  8. Both versions 2020.01.12 and 2020.01.13 show the "unsupported" warning even though the plugin was updated for 6.8.1 support.
  9. As an update: I was eventually able to complete the rebuild after a reboot. But I have more disk replacements to do, so I'm now on my second data-drive replacement on 6.8.0, and it slowed to a crawl again after a day. I rebooted the server again, which of course restarted the rebuild from scratch, but this time I saw slowdowns all the way down to the double-digit KB/s range. This time I just left it running and it eventually bumped back up to around 45 MB/s, and a day later to 96.3 MB/s... still crazy slow, but better than the KB range. Hope the general slow parity/rebuild issue gets reso…
  10. Thanks, I rebooted and parity now runs at a better speed. It started from scratch and is still slower than usual, but at least it's in the triple-digit MB/s range. There was also an Ubuntu VM running that often accesses a share isolated to one of the drives being rebuilt, so just in case that had anything to do with it, I shut that VM down. I'm not the only one seeing this slow-to-a-crawl issue, though. Another user on Reddit posted this:
  11. After upgrading to 6.8.0, I replaced my parity drives and some of my data drives with 16TB Exos, coming from 12TB and 10TB Exos & IronWolfs. The initial replacement of one parity drive went fine, with the full rebuild completing in normal fashion (a little over 1 day; see the back-of-envelope numbers after this list). For the second parity drive, I saw that I could replace it and one of the data drives at the same time, so I went ahead and did that with the pre-cleared 16TB drives. The parity-sync/data-rebuild started off pretty normal, with expected speeds of 150+ MB/s most of the time, until it hit around 36.5%, where the rebuild dramatically dropped in spee…
  12. Thanks, I'll post in general support. My results definitely seem slower by an order of magnitude than the slowdowns mentioned in the bug post.
  13. Is there an active bug report for the slow parity-sync/data-rebuild on wide arrays that the errata mentions is being looked at? I can't find it in the bug forums. I'm definitely seeing this problem with my array of 23 data and 2 parity drives. I replaced both a data drive and one of the parity drives in the same rebuild session, so that may be a contributing factor. I wanted to see if there's any data I can gather that would help your investigation, and whether there are any stop-gap solutions in the meantime, but like I said, I can't find an active bug post for this.
  14. Congrats on moving forward with this new container and giving everyone what they wanted, whether that was staying on older versions or leaping ahead to current releases. I've already moved on to other container providers, but I'm happy to see you take the critique and release a better product that should please everyone.
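
On the hung-webGUI rebuild in item 3: a minimal sketch of reading rebuild progress straight from the console instead of the GUI. It assumes Unraid's modified md driver reports state as key=value lines in /proc/mdstat, including mdResync (total) and mdResyncPos (current position); field names vary by Unraid version, so check your own /proc/mdstat before relying on these.

    # Parse Unraid's /proc/mdstat key=value lines and report sync progress.
    # mdResync / mdResyncPos are assumed field names -- confirm on your system.
    stats = {}
    with open("/proc/mdstat") as f:
        for line in f:
            if "=" in line:
                key, _, value = line.strip().partition("=")
                stats[key] = value

    total = int(stats.get("mdResync", 0) or 0)   # assumed: total units to sync
    pos = int(stats.get("mdResyncPos", 0) or 0)  # assumed: current position
    print(f"rebuild at {100 * pos / total:.2f}%" if total else "no sync in progress")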
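On the Privoxy-over-WireGuard question in item 4: whichever tunnel the container ends up using, one quick sanity check is to compare your external IP with and without the proxy. A sketch assuming the third-party requests package, a hypothetical Privoxy address of 192.168.1.10 (8118 is Privoxy's usual default port), and the api.ipify.org echo service:

    import requests  # third-party: pip install requests

    # Hypothetical Privoxy endpoint on the LAN -- substitute your own.
    PRIVOXY = "http://192.168.1.10:8118"
    PROXIES = {"http": PRIVOXY, "https": PRIVOXY}

    direct = requests.get("https://api.ipify.org", timeout=10).text.strip()
    proxied = requests.get("https://api.ipify.org", proxies=PROXIES, timeout=10).text.strip()

    print(f"direct:  {direct}")
    print(f"proxied: {proxied}")
    print("proxy egresses via the tunnel" if proxied != direct else "proxy is NOT using the tunnel")

If the two IPs match, the proxy is leaking out your WAN address rather than the VPN's.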
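On the slow-rebuild reports in items 5 and 9-13: a single slow or failing member can drag a wide array's rebuild down to its speed, so a blunt per-disk sequential read test can help spot the straggler. A rough sketch; device names are placeholders, it needs root, the array should be otherwise idle, and page cache can inflate the numbers, so treat results as indicative only.

    import os, time

    DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholders: list your array members
    CHUNK = 1 << 20           # 1 MiB per read
    SAMPLE = 256 * CHUNK      # sample 256 MiB from each disk

    for dev in DEVICES:
        fd = os.open(dev, os.O_RDONLY)
        try:
            done = 0
            start = time.monotonic()
            while done < SAMPLE:
                buf = os.read(fd, CHUNK)
                if not buf:      # hit end of device early
                    break
                done += len(buf)
            elapsed = time.monotonic() - start
            print(f"{dev}: {done / elapsed / 1e6:.1f} MB/s")
        finally:
            os.close(fd)

A disk reading far below its siblings (or throwing I/O errors here) is a candidate for the kind of RMA described in item 5.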
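For scale on the speeds quoted in items 9-11, a back-of-envelope conversion from average rebuild speed to total rebuild time for a 16TB (16e12-byte) drive:

    # Average speed -> total hours to rebuild a 16 TB drive.
    SIZE_BYTES = 16e12
    for mbps in (150, 96.3, 45):
        hours = SIZE_BYTES / (mbps * 1e6) / 3600
        print(f"{mbps:6.1f} MB/s -> {hours:5.1f} hours")

At 150 MB/s that works out to about 30 hours, matching the "a little over 1 day" in item 11; at 45 MB/s the same rebuild stretches past four days.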