jedimstr

Members
  • Content Count

    117
  • Joined

  • Last visited

Community Reputation

7 Neutral

About jedimstr

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed
  • Personal Text
    IAmYourFathersBrothers NephewsCousinsFormerRoomate

  1. Probably more of a feature request, but is there a way to get Privoxy working with a WireGuard VPN provider instead of an OpenVPN provider? The goal is speed.
  2. That particular drive ended up having multiple read errors and just dying, even on a new pre-clear pre-read, so I ended up RMA'ing it. After taking that drive out of the equation, I still get relatively slow parity syncs/rebuilds, but never as slow as with the RMA'd drive. The slowest now is in the double-digit MB/s range (but it goes back up to the high 80s or 90s MB/s again).
  3. Yup it looks like that fixed it. Thanks!!!
  4. Removed and re-installed from CA. Here's the installation output:

     plugin: installing: https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/plugins/preclear.disk.plg
     plugin: downloading https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/plugins/preclear.disk.plg
     plugin: downloading: https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/plugins/preclear.disk.plg ... done
     plugin: downloading: https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/archive/preclear.disk-2020.01.13.txz ... done
     plugin: downloading: https://raw.githubusercontent.com/gfjardim/unRAID-plugins/master/archive/preclear.disk-2020.01.13.md5 ... done

     tmux version 3.0a is greater or equal than the installed version (3.0a), installing...
     +==============================================================================
     | Skipping package tmux-3.0a-x86_64-1 (already installed)
     +==============================================================================
     libevent version 2.1.11 is greater or equal than the installed version (2.1.11), installing...
     +==============================================================================
     | Skipping package libevent-2.1.11-x86_64-1 (already installed)
     +==============================================================================
     utempter version 1.1.6 is lower than the installed version (1.1.6.20191231), aborting...
     +==============================================================================
     | Installing new package /boot/config/plugins/preclear.disk/preclear.disk-2020.01.13.txz
     +==============================================================================
     Verifying package preclear.disk-2020.01.13.txz.
     Installing package preclear.disk-2020.01.13.txz:
     PACKAGE DESCRIPTION:
     Package preclear.disk-2020.01.13.txz installed.
     -----------------------------------------------------------
     preclear.disk has been installed.
     Copyright 2015-2020, gfjardim
     Version: 2020.01.13
     -----------------------------------------------------------
     plugin: installed
     Updating Support Links
     preclear.disk --> http://lime-technology.com/forum/index.php?topic=39985.0

  5. Both versions 2020.01.12 and 2020.01.13 are showing the "unsupported" warning, even though the plugin was updated for 6.8.1 support.
  6. To update: I was eventually able to complete the rebuild after a reboot. But I have more disk replacements to do, so I'm now on my second data-drive replacement on 6.8.0, and it slowed to a crawl again after a day. I rebooted the server again, which of course restarted the rebuild from scratch, but this time the slowdowns went all the way down to the double-digit KB/s range. I just left it running, and eventually it bumped back up to around 45 MB/s, and a day later up to 96.3 MB/s... still crazy slow, but better than the KB/s range. Hope the general slow parity/rebuild issue gets resolved.
  7. Thanks, I rebooted and parity is running at a better speed now. It started from scratch and is still slower than usual, but at least it's in the triple-digit MB/s range. There was also an Ubuntu VM I had running that often accesses a share isolated to one of the drives being rebuilt, so just in case that had anything to do with it, I shut down that VM. I'm not the only one seeing this slow-to-a-crawl issue, though. Another user on Reddit posted this:
  8. After upgrading to 6.8.0, I replaced my parity drives and some of my data drives with 16TB Exos drives (previously 12TB and 10TB Exos and IronWolf drives). The initial replacement of one parity drive went fine, with the full rebuild completing in normal fashion (a little over a day). For the second parity drive, I saw that I could replace it and one of the data drives at the same time, so I went ahead and did that with the pre-cleared 16TB drives. The parity-sync/data-rebuild started off pretty normal, with expected speeds over 150 MB/s most of the time, until it hit around 36.5%, where the speed dropped dramatically to between 27 KB/s and 44 KB/s. It's been running at that speed for over two days now. At first I thought this was somehow related to the 6.8.0 known issues/errata notes, which mention slow parity syncs on wide (20+ drive) arrays (I have 23 data drives and 2 parity), but my speeds are slower than those reported in the bug report for that issue by an order of magnitude. Here's what I'm seeing now, and my diagnostics are attached: holocron-diagnostics-20191215-0604.zip
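     To put those rates in perspective, here's a rough back-of-the-envelope calculation (my own illustration, not from the diagnostics; it assumes the rebuild has to cover the full 16 TB of the new drives and that the observed rate stays constant):

        # Rough estimate of remaining rebuild time at the observed rates.
        # Assumptions (for illustration only): the rebuild covers the full
        # 16 TB drive capacity and the rate stays constant.
        TB = 10**12  # drives are marketed in decimal terabytes

        capacity_bytes = 16 * TB
        done_fraction = 0.365                     # rebuild stalled at ~36.5%
        remaining = capacity_bytes * (1 - done_fraction)

        for label, rate in [("150 MB/s (normal)", 150 * 10**6),
                            ("44 KB/s (stalled)", 44 * 10**3)]:
            days = remaining / rate / 86400
            print(f"{label}: {days:,.1f} days to finish")

        # Approximate output:
        #   150 MB/s (normal): 0.8 days to finish
        #   44 KB/s (stalled): 2,672.6 days to finish (~7 years)

     In other words, at the stalled rate the rebuild would effectively never complete, which is why the order-of-magnitude difference matters.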
  9. Thanks, I'll post in general support. My results are definitely slower than the slowdowns mentioned in the bug post, by an order of magnitude.
  10. Is there an active bug post for the slow parity-sync/data-rebuild on wide arrays that the errata says is being looked at? I can't find it in the bug forums. I'm definitely seeing this problem with my 23-data-drive, 2-parity array. I replaced both a data drive and one of the parity drives in the same rebuild session, so that may be a contributing factor. I wanted to see if there's any data I can gather that would help your investigation, and whether there are any stop-gap solutions in the meantime, but like I said, I can't find an active bug post for this. Here's my rebuild's current progress: holocron-diagnostics-20191215-0604.zip
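     In case it helps anyone gather numbers while a rebuild is crawling, here's a rough sketch I put together (my own script, not part of Unraid's diagnostics) that samples /proc/diskstats once a second and prints per-disk read/write throughput. It relies only on the standard Linux /proc/diskstats layout (field 6 = sectors read, field 10 = sectors written, 512-byte sectors); the partition-skipping heuristic is an assumption that works for sdX-style device names:

        #!/usr/bin/env python3
        # Per-disk throughput sampler based on /proc/diskstats (standard Linux).
        import time

        SECTOR = 512  # bytes per sector as reported by /proc/diskstats

        def read_stats():
            stats = {}
            with open("/proc/diskstats") as f:
                for line in f:
                    parts = line.split()
                    name = parts[2]
                    # Skip loop/ram devices and sdX partitions (e.g. sdb1); keep whole disks.
                    if name.startswith(("loop", "ram")) or (name.startswith("sd") and name[-1].isdigit()):
                        continue
                    # (bytes read, bytes written) since boot
                    stats[name] = (int(parts[5]) * SECTOR, int(parts[9]) * SECTOR)
            return stats

        prev = read_stats()
        while True:
            time.sleep(1)
            cur = read_stats()
            for name in sorted(cur):
                if name in prev:
                    dr = cur[name][0] - prev[name][0]
                    dw = cur[name][1] - prev[name][1]
                    if dr or dw:
                        print(f"{name}: read {dr/1e6:6.1f} MB/s  write {dw/1e6:6.1f} MB/s")
            print("-" * 40)
            prev = cur

     Running it during a rebuild should show whether one specific drive's throughput collapses while the others sit idle, which is the kind of detail the diagnostics zip doesn't capture over time.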
  11. Congrats on moving forward with this new container and giving everyone what they wanted, whether they want to stay on older versions or leap ahead to the current released versions. I've already moved on to other container providers, but I'm happy to see you guys take critique and release a better product that should please everyone.
  12. I'm sorry for causing such a hubbub. I really didn't mean for this to get out of hand.
  13. Look, I'm sorry if I came off the wrong way, and I really don't want to badger. But the criterion mentioned in the past for why things weren't updating ("not until it shows up on their download page") was met, and things still weren't updated. So if that's no longer the case, then you guys should stop saying it. Nice and easy: just don't say it. Keep a five-year-old version, fine. Just don't say you'll update when their software page is updated, because that's no longer a valid excuse. You should also maybe think about rewording this: "Our build pipeline is a publicly accessible Jenkins server. This pipeline is triggered on a weekly basis, ensuring all of our users are kept up-to-date with the latest application features and security fixes from upstream."
  14. I'm sorry if I've been ticking you off; that's definitely NOT my intention. But I want to make clear that the criteria previously claimed for keeping this particular repo up to date were met days before the weekly update cycle, and that weekly cycle still didn't pick up the update on the unstable tag. So let's keep this to the technical reasons why that happened, not make it personal. If there's some other reason, fine. I'm just calling out that something broke here. Also, the reason to choose LinuxServer.io's repos over others is that they have been reliable for pretty much every other repo.
  15. Fair enough; I wasn't proposing to do things at the drop of a hat. I was proposing that if the source images were posted (which was deemed the previous requirement for updating the unstable tag), the pipeline should have picked up that version on the weekly update, and since it didn't, something is wrong. It's not a personal attack against you, so please don't take it as such. My response to yours was because your response went against what even LinuxServer.io's own site says. As for the importance of this release: 5.10.12 is actually very important from a security perspective, due to known exploits in the wild. A lot of us really do need this version to mitigate the attacks/scans that are occurring: https://community.ubnt.com/t5/UniFi-Updates-Blog/UniFi-Network-Controller-5-10-12-Stable-has-been-released/ba-p/2665341