ramesivi

Members
  • Content Count

    13
  • Joined

  • Last visited

Community Reputation

0 Neutral

About ramesivi

  • Rank
    Member

  1. Thanks for the information, that's almost certainly what's going on. I haven't done a parity check since 6.8 was released. I'll wait for the fix.
  2. I replaced my 3x SAS2LP cards (since they're no longer recommended) with 2x LSI 16i cards. The server consists of 28x 8+ TB data drives plus 2x parity drives, the maximum unRAID allows. Prior to the swap, the SAS2LP cards were getting about 180-200MB/s at the start of a parity check; I am now getting 130-140MB/s, with an estimated completion of 1 day 6 hours. I have tried tweaking nr_requests and md_num_stripes. md_num_stripes has an effect, but only up to about 4096; any lower and it goes much slower. The cards are installed in PCIe 3.0 x16 slots (the cards are x8), so there should be plenty of bandwidth, and they were flashed to the latest v20 firmware. Any ideas? (A sketch of where these tunables live is after this list.)
  3. Oh, that works. Thanks, I didn't know about that plugin.
  4. With 12TB drives, a parity check takes about a day, and media stutters during it, especially remuxed BD and 4K content, which leaves the server pretty crippled for one day out of the month. As a result, I switched to a parity check every other month, which isn't ideal. The server itself is vastly overkill (i7-9700K, 32GB DDR4, etc.); the problem is that the drives get fully loaded. I remember "low priority" parity checks were a planned feature, but it's been a few years and I haven't seen anything. Any solutions to this problem?
  5. I went from running 3x SAS2LP to 2x SAS2LP and 1x LSI card so I could expand to 30 drives and dual parity. However, setting NR_REQUESTS to 8 slows down the LSI card, and it's well known the SAS2LP cards need that tweak for ideal parity speeds. I am now getting almost 2-day parity checks with 12TB drives (130MB/s). I have tried various middle grounds like 16, 32, and 64, but nothing gets the speeds back to where they should be; I was getting 190-200MB/s before. Is there really no way to fix this other than buying a second LSI card and getting rid of the 2x SAS2LP cards? Is there any way I can fix this without spending another $200? (A per-controller nr_requests sketch is after this list.)
  6. Ah damn. So is the Norco. Sooo what's the cheapest 32+ bay case for 3.5" drives? I can't find any that are under $2000 because they come with hardware. I have 2x Norco 4224 cases. Couldn't I just buy the SAS cards, stack the cases on top of each other, drill some holes, and just use bays in the other case? Not ideal, but it beats spending $2000.
  7. What are your thoughts on this 36-bay case? Only two 120mm fans, so it seems toasty. I can't find any images of the inside. I believe they are SAS connectors inside? I am not dealing with 36x SATA (lol) https://www.newegg.com/istarusa-d-410-de36/p/N82E16811165665?Description=36-Bay Hotswap&cm_re=36-Bay_Hotswap-_-11-165-665-_-Product&quicklink=true
  8. Any other 32-bay cases you know of? Ideally I'd want something like 6U, but they all come with existing hardware ($2000+). The Norco is the cheapest 32-bay I can find by far.
  9. I am running low on space with 24x 8TB drives (don't ask..). I figure the cheapest method is to go to a 32-drive case, since I already have 6x 8TB spares. This is what I am looking at:
     Case: http://www.ipcdirect.net/rpc-2132-2u-rackmount-server-case-w-32-hot-swappable-sata-sas-6g-2-5-drive-bays/
     SAS cards (2x): https://www.amazon.com/16-PORT-Int-6GB-Sata-Pcie/dp/B003UNP05O/ref=sr_1_fkmr0_2?keywords=LSI+LSI00244&qid=1581702997&sr=8-2-fkmr0
     Any potential problems you can see with this case/cards and unRAID? Thanks.
  10. AutoFan is not correctly ignoring drives I set in the Ignore section. I set my NVMe SSD to be ignored, but in the logs it is still adjusting based on that drive, which results in 100% fan speed most of the time. It seems others have reported this with NVMe drives. Is this still supported? I think I had the same issue, and I believe I fixed it by adding this line to my go file on the flash drive (and rebooting):
      modprobe it87 force_id=0x8628
      Others have said this one fixes it:
      modprobe nct6775 force_id=0xd120
      (A go-file sketch of where this line goes is after this list.)
  11. I already have 24x 12TB disks and need the space. Upgrading to 18x 16TB drives would leave me with next to no room for future expansion and it'd be something like $10,000. No thanks... lol
  12. I have 4 PCIe 2.0 x8 slots on my server motherboard. Three of them are filled with SAS2LP-MV8 SAS controllers, and the other holds a 10G NIC. I'd like to add an NVMe SSD cache drive to go with my 10G network, but that isn't possible unless I free up a PCIe slot. Any suggestions? Is there a way for me to condense these 3 SAS2LPs into 1 or 2 cards without causing a PCIe bottleneck? (A rough bandwidth calculation is after this list.)
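
For the nr_requests / md_num_stripes tuning mentioned in post 2, here is a minimal sketch of where those knobs live, assuming the array disks show up as /dev/sdb through /dev/sde; the device names and the value 8 are placeholders, not recommendations:

    # nr_requests is a standard Linux block-queue setting, adjustable per disk,
    # e.g. from the go file or a User Scripts entry (sketch only):
    for d in sdb sdc sdd sde; do
        echo 8 > /sys/block/$d/queue/nr_requests
    done

md_num_stripes is an unRAID md-driver tunable rather than a kernel one, so it is normally changed under Settings > Disk Settings (where it persists across reboots) instead of being echoed by hand.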
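
For the mixed SAS2LP + LSI setup in post 5, nr_requests is a per-device queue setting, so it does not have to be one global value: a go-file snippet could apply the low value only to the disks sitting behind the SAS2LP cards and leave the LSI's disks at the default. A rough sketch, assuming the controllers' PCI addresses are known (the addresses below are placeholders; check yours with lspci):

    # Sketch only: nr_requests=8 for disks behind the SAS2LP cards, default elsewhere.
    SAS2LP_ADDRS="0000:03:00.0 0000:04:00.0"    # placeholder PCI addresses
    for dev in /sys/block/sd*; do
        path=$(readlink -f "$dev")              # sysfs path contains the controller's PCI address
        for addr in $SAS2LP_ADDRS; do
            case "$path" in
                *"$addr"*) echo 8 > "$dev/queue/nr_requests" ;;
            esac
        done
    done

Whether that actually recovers the 190-200MB/s is a separate question, but it at least avoids penalizing the LSI card with a setting it does not need.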
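
For the AutoFan sensor fix in post 10, the modprobe line just needs to run at boot before the fan plugin starts, which is why the go file on the flash drive is the usual place for it. A sketch of a stock go file with that line added; the force_id value is board-specific, so 0x8628 is only the example from the post:

    #!/bin/bash
    # /boot/config/go -- runs once at every boot
    # Load the Super I/O sensor driver with an explicit chip ID so the
    # temperature/fan sensors are detected (value depends on your board):
    modprobe it87 force_id=0x8628
    # Start the Management Utility (stock line from the default go file)
    /usr/local/sbin/emhttp &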
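
On the bottleneck question in post 12, a rough back-of-the-envelope check, assuming PCIe 2.0 at roughly 500 MB/s per lane and ignoring protocol overhead (real throughput is somewhat lower):

    PCIe 2.0 x8 slot:               8 x 500 MB/s = ~4,000 MB/s
    24 drives behind one x8 card:   4,000 / 24   = ~165 MB/s per drive
    24 drives split over two cards: 4,000 / 12   = ~330 MB/s per drive

So putting all 24 drives on a single x8 card would cap them at roughly the outer-track speed of large drives during a parity check, while splitting them across two x8 cards leaves comfortable headroom.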