
eschultz

Members
  • Posts: 512
  • Days Won: 8

Everything posted by eschultz

  1. Nice! On your server with 11 disks, not counting the 4 connected to the SAS2LP, are the other 7 just connected to the motherboard? Just wondering, because one of the (older) motherboards I'm using has 6 x SATA2 ports but I was only able to connect and fully saturate 4 HDDs. Once I connected a 5th, it brought the speed down on all 5... so I was limited by that SATA chip's upstream bandwidth (probably PCIe x1).
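     A rough way to see that kind of controller bottleneck for yourself is to run parallel sequential reads and watch the per-disk throughput as you add drives. This is only a sketch: the device names sdb through sdf are placeholders for whatever disks hang off the controller you're testing, and dd only reads from the raw devices (still double-check the names before running it).
     for d in sdb sdc sdd sde sdf; do
         dd if=/dev/$d of=/dev/null bs=1M count=4096 iflag=direct &   # 4 GiB sequential read per disk, bypassing the page cache
     done
     wait   # each dd prints its own MB/s summary as it finishes
     If the per-disk MB/s drops noticeably once the extra drive joins in, the shared uplink is the limit rather than the individual disks.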
  2. Is this likely to impact ICH10 ports and/or Adaptec 1430SA's? Just curious if this is why parity checks are so much slower in v6 vs. v5 on my old C2SEA. I presume it's safe to try -- right? Also, is it persistent until the next reboot? Finally, does this explain why parity sync works so much faster than checks? [i.e. does the sync process cause fewer queued requests?]
     I'm not sure about ICH10 or your Adaptec card, but I can say those nr_requests changes didn't negatively affect an Intel C220 SATA controller or an LSI SAS2308 controller that had no performance issues to begin with. It's safe to try. The defaults will revert back once you reboot the machine. I presume the parity sync is writing synchronously and blocks more reads from queuing until the parity write is completed, which conveniently finishes by the time the pending reads complete from the other drives. In other words, if there are no writes (a parity check with no errors), then it'll keep filling the disk queues with async read requests. Tom would be able to explain it better (or correct my poor interpretation).
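     For reference, a minimal sketch of checking the setting and undoing it by hand (sdX is a placeholder for one of your array disks):
     cat /sys/block/sdX/queue/nr_requests        # show the current value (128 is the stock default)
     echo 8 > /sys/block/sdX/queue/nr_requests   # lower the queue depth
     echo 128 > /sys/block/sdX/queue/nr_requests # put the default back without waiting for a reboot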
  3. Thanks for the update!! I think this is everything you're looking for. Additional system details in signature.
     Disks: (multiple 2TB and 3TB on 2x SAS2LP, parity on MB)
     Oct 24 18:30:09 Tower emhttp: WDC_WD5000BPKX-00HPJT0_WD-WX61AC4L4H8D (sdb) 488386584 [Cache - on MB SATA]
     Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WMC4N2022598 (sdc) 2930266584 [Parity - on MB SATA]
     Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WMC4N0H2AL9C (sdd) 2930266584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WMC4N0F81WWL (sde) 2930266584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4N4TRHA67 (sdf) 2930266584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4NPRDDFLF (sdg) 2930266584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4N1VJKTUV (sdh) 2930266584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: Hitachi_HDS722020ALA330_JK1101B9GME4EF (sdi) 1953514584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: Hitachi_HDS5C3020ALA632_ML0220F30EAZYD (sdj) 1953514584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: ST2000DL004_HD204UI_S2H7J90C301317 (sdk) 1953514584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: ST2000DL003-9VT166_6YD1WXKR (sdl) 1953514584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4N4EZ7Z5Y (sdm) 2930266584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: Hitachi_HDS722020ALA330_JK11H1B9GM9YKR (sdn) 1953514584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: Hitachi_HDS722020ALA330_JK1101B9GKEL4F (sdo) 1953514584 [SAS2LP]
     Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4N3YFCR2A (sdp) 2930266584 [SAS2LP]
     * I am not prepared to move all my 2TB disks to the motherboard right now (need cables and free time). If others report success, I'll crack open my case when I have time, plug all my 2TB drives and the cache into the MB, and leave the 3TB drives on the SAS2LP.
     Last parity check on 5.0.5: 37311 seconds (avg between 10-11 hrs, same drives plugged into the same ports)
     Parity check after upgrade to 6.1.3:
     Thanks for the detailed info. Let's see if we can increase your parity check speeds by running the following commands in the console or an SSH session:
     echo 8 > /sys/block/sdc/queue/nr_requests
     echo 8 > /sys/block/sdd/queue/nr_requests
     echo 8 > /sys/block/sde/queue/nr_requests
     echo 8 > /sys/block/sdf/queue/nr_requests
     echo 8 > /sys/block/sdg/queue/nr_requests
     echo 8 > /sys/block/sdh/queue/nr_requests
     echo 8 > /sys/block/sdi/queue/nr_requests
     echo 8 > /sys/block/sdj/queue/nr_requests
     echo 8 > /sys/block/sdk/queue/nr_requests
     echo 8 > /sys/block/sdl/queue/nr_requests
     echo 8 > /sys/block/sdm/queue/nr_requests
     echo 8 > /sys/block/sdn/queue/nr_requests
     echo 8 > /sys/block/sdo/queue/nr_requests
     echo 8 > /sys/block/sdp/queue/nr_requests
     This will limit the requests each HDD (in the array) tries to handle at a time from the default of 128 down to 8. There's no need to run it on cache devices and/or SSDs. This seems to make a HUGE difference on Marvell controllers. I suspect earlier versions of Linux (and hence unRAID 5.x) had lower defaults for nr_requests. The parity check can already be running (or not) when you try the commands above. Speed should increase almost instantly...
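     Rather than typing fourteen separate echo commands, a loop along these lines does the same thing. It's only a sketch: it assumes the array disks really are sdc through sdp and deliberately skips the sdb cache disk, so adjust the range to your own device names.
     for d in /dev/sd[c-p]; do
         dev=$(basename "$d")
         echo 8 > "/sys/block/$dev/queue/nr_requests"
         echo "$dev nr_requests is now $(cat /sys/block/$dev/queue/nr_requests)"   # confirm each change took effect
     done
     Since the defaults come back after a reboot, the same lines could also be added to /boot/config/go to reapply them at boot -- with the caveat that drive letters can shift between boots.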
  4. Thanks for running that test; it seems like disk 5 is OK. I have a suspicion of a deadlock occurring in shfs/FUSE (in rare situations) but haven't been able to reproduce it here. It's been about a week -- have you had any lockups since? I think if Docker is stopped before the mover starts and started back up after the mover finishes, we could narrow down (or maybe eliminate) the issue. We can help you script the Docker stop/start into the mover script if you want to try this.
     Thanks for following up. I have turned off my mover script for now and only run it if my cache drive starts to fill up. So, technically, no lockups. I am actually in the process of migrating all of my disks to XFS because I am willing to try anything at this point. Someone mentioned earlier in the thread that this could help. I am happy to try the Docker stop/start during mover idea. Please let me know what I should modify in my mover script to facilitate this. Thanks!
     When you're at a good point to experiment (the XFS conversion is finished), you can edit /usr/local/sbin/mover (using nano or another editor). Look for the echo "mover started" and add this line right after it:
     /usr/local/emhttp/plugins/dynamix.docker.manager/event/stopping_svcs
     Next, add this line at the end of the mover script:
     /usr/local/emhttp/plugins/dynamix.docker.manager/event/started
     Save the changes to the mover script and turn the mover schedule back on from the webGUI. Let's see if that solves the lock-ups when the mover script is running...
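     To make it concrete, here's roughly what that part of /usr/local/sbin/mover would look like after the edit. Only the two event calls come from the instructions above; the surrounding lines are placeholders standing in for the existing script:
     #!/bin/bash
     echo "mover started"
     /usr/local/emhttp/plugins/dynamix.docker.manager/event/stopping_svcs   # stop the Docker containers before files start moving
     # ... the existing mover logic stays here untouched ...
     /usr/local/emhttp/plugins/dynamix.docker.manager/event/started         # bring Docker back up once the mover has finished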
  5. Thanks for running that test; it seems like disk 5 is OK. I have a suspicion of a deadlock occurring in shfs/FUSE (in rare situations) but haven't been able to reproduce it here. It's been about a week -- have you had any lockups since? I think if Docker is stopped before the mover starts and started back up after the mover finishes, we could narrow down (or maybe eliminate) the issue. We can help you script the Docker stop/start into the mover script if you want to try this.
  6. Sorry about that, the rsync command was changing /dev/null unexpectedly. To be fair, I just checked and my /dev/null was messed up too. Here's an alternative script to try that won't mess up /dev/null (you just won't see pretty progress bars):
     find /mnt/disk5 -type f -print -exec cp {} /dev/null \;
     Squid: thanks for stepping in to help him restore his /dev/null device.
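     Not part of the original post, but for anyone who already ran the earlier rsync command and ended up with /dev/null turned into a regular file, the usual Linux fix is to recreate the character device:
     rm -f /dev/null                 # remove the regular file rsync left behind
     mknod -m 666 /dev/null c 1 3    # recreate the null device (character device, major 1, minor 3)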
  7. Looks like there are some S.M.A.R.T. command timeout errors for disk 5 (ST3000DM001-9YN166_Z1F12JLY). Also, your prior post revealed the mover never finished, yet disk 5 was never spun down with the rest of the disks. The disk might be going bad, or there's some possible reiserfs corruption somewhere (even though reiserfsck wasn't able to detect anything). Try running this script from your console; it'll iterate through all files on disk5 and attempt to fully read the file contents. If there is reiserfs corruption, you'll likely be able to see which file it stopped on (this will take a while to run):
     find /mnt/disk5 -type f -exec rsync --progress {} /dev/null \;
  8. I think these errors show up when an app or VM attempts to create a hardlink. Hardlinks are not supported on user shares. I couldn't find a definitive answer as to whether SAB uses hardlinks for repair, though.
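     A quick way to confirm this on your own box (the paths are hypothetical; /mnt/user/... goes through the user-share FUSE layer while /mnt/disk1/... hits a single disk directly):
     ln /mnt/user/downloads/test.bin /mnt/user/downloads/test.link    # expected to fail, since user shares don't support hardlinks
     ln /mnt/disk1/downloads/test.bin /mnt/disk1/downloads/test.link  # the same link on a disk share should succeed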
  9. Hi bonienl, nice addition -- but there might be a little bug: if no Docker containers are used (and no docker image exists), it always shows "20%". I'd suggest simply not displaying the docker image usage when there is none.
     That's not a bug, it's a feature! It doubles as RAM % usage when docker is disabled [emoji3]
  10. Just added the p7zip package to NerdPack. Update the plugin and let me know if you have any issues.
  11. Bingo! I also just updated the Plex version to 0.9.12.8
  12. Just added iperf to Nerd Tools. Enjoy!
     I'm just wondering why iperf -- it's a very old tool! Use iperf3 instead, IMHO.
     The version of iperf I added is 3.0.11 (the latest stable). The binary is named iperf3.
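     For anyone new to it, basic usage looks something like this (the IP address is a placeholder for your unRAID box):
     iperf3 -s                 # on the unRAID server: listen for test connections (TCP 5201 by default)
     iperf3 -c 192.168.1.100   # on another machine on the LAN: run the throughput test against the server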
  13. No problem, just pushed an update that included SSHFS [emoji3]
  14. Just added Python 2.7.9 to the NerdPack. iotop will work again after the update since it requires python.
  15. Howdy folks, this is the support thread for the LimeTech maintained Docker repository: https://github.com/limetech/docker-templates Docker containers included: Plex Media Server BitTorrent Sync Update requests can be made here or by PM.
  16. It's not included, but just pick one and copy it to config\plugins\preclear.disk\ on the flash drive. After copying, make sure it's named preclear_disk.sh (not preclear_bjp.sh).
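     If you'd rather do that from the unRAID console instead of over the network share, something along these lines should work. The source path is only an example of wherever you saved the downloaded script; config\plugins\preclear.disk\ lives under /boot on the flash drive:
     mkdir -p /boot/config/plugins/preclear.disk
     cp /boot/preclear_bjp.sh /boot/config/plugins/preclear.disk/preclear_disk.sh   # copy and rename in one step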
  17. Here is the syslog
     I don't see anything wrong, but the download must be failing for some reason... Are you able to SSH/telnet in, or try these commands from the console to see if any errors come back:
     wget -O /boot/config/plugins/preclear.disk/tmux-1.8-x86_64-1.txz http://mirrors.slackware.com/slackware/slackware64-14.1/slackware64/ap/tmux-1.8-x86_64-1.txz
     wget -O /boot/config/plugins/preclear.disk/libevent-2.0.21-x86_64-1.txz http://mirrors.slackware.com/slackware/slackware64-14.1/slackware64/l/libevent-2.0.21-x86_64-1.txz
     This manually (re)downloads those 0-sized packages you're seeing.
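     Afterwards, a quick way to confirm the downloads actually worked this time (anything still showing 0 bytes means the download failed again):
     ls -l /boot/config/plugins/preclear.disk/tmux-1.8-x86_64-1.txz /boot/config/plugins/preclear.disk/libevent-2.0.21-x86_64-1.txz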
  18. lsusb is already there.
     I don't see it in the plg. Is it part of another package?
     lsusb is already built into stock unRAID 6.
  19. Nano is already baked-in, no need to install it (you can safely remove nano from your /boot/extra too if it's there)
  20. Sorry for the delay, I just added unrar. Update the plugin and give it a shot!
  21. Here's the direct link to the .plg: https://raw.githubusercontent.com/dmacias72/unRAID-NerdPack/master/plugin/NerdPack.plg