PlayLoud

Everything posted by PlayLoud

  1. Does this mean you could have multiple 28+2 unRAID arrays under the same license/installation? Or would it still be the same maximum device limitation for all unRAID arrays combined within the single system?
  2. Yup, we would all welcome an official response. Like, TODAY... and BEFORE they make any changes. The worst thing they could do would be to not respond in a timely manner.
  3. Since your times seem normal, I don't think you'll get any meaningful speed increase with a faster HBA. The one you're looking at isn't cheap either. Your current limit appears to be the speed of your drives. It's really the nature of the beast. If you want fast parity rebuilds, you'll have to use all lower-capacity drives that can get through the entire drive faster (not worth the storage tradeoff), or you can get at least some speed increase by getting rid of your smallest drives to keep the first half of your parity check faster. The most efficient route is using all the same drive size, so that you're not getting that slowdown as the smaller drives finish on the slow part of the platter. If 8TB is your parity, you could replace all your 4TB drives with 8TB, though it always hurts to spend money on a drive and only gain 4TB. In addition, getting higher-performance drives (maybe WD Ultrastars instead of Reds) for all your drives would speed up the process. But even then, you're only talking about shaving a few hours at most. Unless you're planning on expanding beyond 16 drives, I think your current HBA is fine.
  4. That sounds normal to me. When I had only 10TB Red drives (1 Parity, 4 Data), I think it took me ~19 hours. This was on an LSI 9207-8i, so no bottleneck on the HBA. Your 4TB drives are probably slowing you down a bit, as they'll be reaching the inner part of the platter (slower) while the 8TB drives are still on the outside (faster). You'll probably notice that after you pass the 4TB mark, your speed goes back up a bit as the 8TB drives are no longer being bottlenecked by the slower 4TB drives.
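A rough sketch of the arithmetic behind the two posts above: a parity check can only advance at the speed of the slowest drive still being read, and drives slow down toward their inner tracks. The speeds below are assumptions for typical 5400 rpm drives, not measurements from either system.

```python
# Rough model of a parity check over mixed-size drives. Assumed speeds only.
OUTER_MBPS, INNER_MBPS = 190, 95   # assumed sequential speed at outer/inner tracks

def speed_at(drive_tb, position_tb):
    """Assume a linear falloff from outer to inner tracks."""
    frac = position_tb / drive_tb
    return OUTER_MBPS + (INNER_MBPS - OUTER_MBPS) * frac

def parity_check_hours(drives_tb, step_tb=0.01):
    hours, pos = 0.0, 0.0
    while pos < max(drives_tb):
        active = [d for d in drives_tb if d > pos]
        # The check only advances as fast as the slowest drive still active.
        mbps = min(speed_at(d, pos) for d in active)
        hours += (step_tb * 1e6) / mbps / 3600   # TB -> MB, then seconds -> hours
        pos += step_tb
    return hours

print(f"mixed 4TB + 8TB drives: ~{parity_check_hours([8, 8, 8, 4, 4]):.1f} h")
print(f"all 8TB drives:         ~{parity_check_hours([8, 8, 8, 8, 8]):.1f} h")
```

With these made-up speeds, the gap between the mixed array and an all-8TB array works out to an hour or two, in line with the "shaving a few hours at most" estimate above.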
  5. Mostly. As I said, I have two different cache pools now. I was mostly wondering if there is any kind of limitation on setting up multiple RAID configurations (one RAID 1, and one RAID 0), whether that be a limitation of Unraid itself, or something inherent in the SATA controllers on motherboards. Stuff like that.
  6. So, I'm in the middle of planning an upgrade of my server, and my main cache pool will be going from 2x 1TB SATA SSDs (RAID 1) to 2x 2TB NVMe SSDs (RAID 1). However, I'm also planning on moving my secondary cache pool (currently a 14TB spinny) to 2-3x 4TB (8-12TB total) SATA SSDs in RAID 0. This would just be for torrents and my own blu-ray rips before moving them over to the array, so it's only an inconvenience if a drive fails. Anything important would be on the array. I know it's a bit of overkill for such tasks, but I sometimes want to transfer stuff while those Linux ISOs are downloading at high speed, and I'd like to see bigger numbers on the transfer. Also, it's fun to see what I can do. I'm just curious if there's any limitation in having both a RAID 1 cache pool and a RAID 0 cache pool at the same time?
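For the bandwidth side of the plan above, a quick back-of-envelope on whether a 2-3 drive RAID 0 pool of SATA SSDs can keep a 10GbE transfer fed. The per-SSD figure is an assumption for a generic SATA III drive, not a benchmark of any particular model.

```python
# Back-of-envelope: RAID 0 pool of SATA SSDs vs a 10GbE link. Assumed speeds.
SATA_SSD_MBPS = 530
TEN_GBE_MBPS = 10_000 / 8        # 10 Gb/s line rate ~= 1250 MB/s before overhead

for n in (1, 2, 3):
    pool = n * SATA_SSD_MBPS     # RAID 0 striping scales roughly with member count
    print(f"{n} SSD(s): pool ~{pool} MB/s, "
          f"transfer limited to ~{min(pool, TEN_GBE_MBPS):.0f} MB/s")
```

Roughly speaking, two SATA SSDs striped already land in the same ballpark as the 10GbE line rate; a third mostly buys headroom for downloading and transferring at the same time.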
  7. UnRAID will put them in the correct slot based on their serial numbers. That said, take a pic of their locations anyway.
  8. Thank you for all the info. I changed the split level back and deleted the folders that brought the total over 5TB (this also deleted those small .xml files that were on Disk 2). Then I started transferring again, and the remaining folders started going to Disk 2. It looks like all will be good once this last couple of TB transfers, but I will keep your suggestions in mind if this ever comes up again. That's one area where I have no bottleneck: 10Gb networking.
  9. Thank you. I guess my original understanding was correct. Ah, I can see how that would mess it up. That seems easy enough. It would only be something I have to do on the initial transfer. Adding folders afterwards would be much smaller chunks where that would not be necessary. Do you know a copy method I could use for this? The data is currently sitting on an unassigned device on the same system.
  10. Greetings all,

      Unraid 6.9.2
      Array: 1x 10TB Parity, 4x 10TB Data

      So, I recently decided to reorganize my media. Previously, I had all my movies and TV shows in a share called "media". As I read about split level, I decided I wanted all files for any given movie/show to be on the same drive. I figured the best way to do this was to copy off all my media, create new shares (Movies and TV Shows instead of Media), set the split level, and copy it back. My folder structure is as follows:

      Movies (share)
        Movie name (folder)
          Movie.mkv
      TV Shows (share)
        Show name (folder)
          Season 1 (folder)
            1x01.mkv
            1x02.mkv
          Season 2 (folder)
            2x01.mkv
            2x02.mkv
          etc.

      My Allocation Method is set to High-water. After reading about split level, I thought I understood it and set it to "Automatically split only the top level directory as required." I thought this would put all files of any particular movie or TV show folder (including all seasons) on the same drive. After creating the new shares as described above, I used Krusader to start copying my media back to the array. My new Movies share is set to only use Disk 3. Since I currently have only 4.17TB of movies, this transferred as expected. TV Shows is set for Disk 1 and Disk 2. However, I woke up this morning to find that all ~7TB of TV shows are on Disk 1. I was expecting ~5TB of shows on Disk 1 due to the High-water allocation. I knew it might go a bit higher, since if a series had started on Disk 1, the split level would force it to stay on that disk. However, I wasn't expecting the entire share to be on Disk 1. I figured I had misunderstood the split level, so I changed it to "Automatically split only the top two directory levels as required", deleted everything off of it, and started the transfer again. The same thing is happening (currently 5.79TB on Disk 1). The only things currently on Disk 2 are 2x series.xml files from shows that are on Disk 1. Now I'm thinking the first split-level option was correct in that regard. But I am still confused: shouldn't the High-water allocation have caused a split at ~5TB (once the current show was finished)? I know I could always use unBALANCE to move some shows and balance the drives a little better, but I'd really like to know what I am not understanding. Thanks in advance for any insight.

      skynet-diagnostics-20220310-2228.zip
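For anyone puzzling over the same interaction: a toy model (an illustration only, not Unraid's actual allocator) of how split level can pin files to a disk before the High-water allocation method gets a say, which is how a share can drift past the ~5TB mark described above.

```python
# Toy model only -- NOT Unraid's real allocator. It illustrates the idea that
# directories deeper than the split level are pinned to whichever disk already
# holds their anchor folder, and only new folders at or above the split level
# are placed by the allocation method (High-water simplified here to
# "disk with the most free space").

def pick_disk(rel_dir, split_level, free_tb, existing_dirs):
    """rel_dir is relative to the share root, e.g. 'Show A/Season 1'."""
    parts = [p for p in rel_dir.split("/") if p]
    if len(parts) > split_level:
        anchor = "/".join(parts[:split_level])       # e.g. 'Show A'
        for disk, dirs in existing_dirs.items():
            if anchor in dirs:
                return disk                           # pinned, however full the disk is
    return max(free_tb, key=free_tb.get)              # otherwise allocation method decides

free_tb = {"disk1": 2.9, "disk2": 10.0}               # disk1 already past the half mark
existing_dirs = {"disk1": {"Show A"}, "disk2": set()}

print(pick_disk("Show A/Season 12", 1, free_tb, existing_dirs))  # -> disk1 (pinned)
print(pick_disk("Show B", 1, free_tb, existing_dirs))            # -> disk2 (new folder)
```

Under this model, episodes of a show that started on Disk 1 keep landing on Disk 1 no matter how full it gets, which matches the "if a series had started on disk 1" reasoning in the post above.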
  11. Indeed. It was part of the procedure for removing the old drives (or at least the way I was doing it). Forgot to add them back.
  12. That did it. I knew it had to be something simple that I was missing. Thanks Johnnie.
  13. Greetings all,

      So, I was doing fine in unRAID. I had...

      10TB Red (Parity)
      10TB Red
      10TB Red
      6TB Red
      4TB Red
      1TB SSD (Cache)

      I decided to shuck some WD Element drives and replace the smaller HDDs. So, I copied all the data off the smaller drives to an external, deleted the data, removed the drives from the array, put in the new shucked drives, ran preclear, and have now added the new shucked drives to the array. Everything seems fine. Except... when I try to add new user shares, I only see the first two drives, and not the new drives (3, 4). And I don't know if this matters, but I also see disks 3 and 4 in the Disk Shares, but I don't see disks 1 and 2. I'm guessing I forgot something that I did when I initially set up unRAID, but I've looked at everything I can think of, and I don't see what I may have missed. Any suggestions?
  14. Thanks for the info, Benson. It pointed me in a good direction for further research. Though if I'm not mistaken, the SATA controller isn't connected to the CPU lanes, but the chipset lanes. The chipset in this case is PCIe 2.0, but if it is 4 lanes as you say, that (2 GB/s) should still be enough for 1x SSD + 5x 5400rpm spinners, even in a worst case scenario. Nothing else is sharing the PCIe 2.0 lanes. No cards have been added. I think I'll keep the LSI in the box until I add another drive. No reason for it to consume power in the meantime. The LSI will eventually go into one of the first two 16x slots, which are connected to the CPU, and are PCIe 3.0.
  15. So, if I can fit the SSD and all the spinny-go-round HDDs on the on-board Intel SATA controller, is that a better option than offloading some of them to the LSI? Is there a situation where the on-board Intel controller would get saturated? Perhaps a parity check with 5 drives all reading at once?
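Putting rough numbers on the saturation question: the worst case is a parity check with every drive streaming at once. Per-drive speeds are assumptions for 5400 rpm Reds and a SATA SSD, and the ~2 GB/s figure is the PCIe 2.0 x4-class chipset link discussed above.

```python
# Worst case for the onboard Intel SATA controller: every drive reading at
# once during a parity check. Per-drive speeds are assumptions, not benchmarks.
HDD_MBPS = 190                 # assumed outer-track sequential read of a WD Red
SSD_MBPS = 530                 # assumed SATA III SSD sequential read
CHIPSET_LINK_MBPS = 2000       # ~2 GB/s PCIe 2.0 x4-class uplink, as discussed above

demand = 5 * HDD_MBPS + SSD_MBPS
print(f"peak demand ~{demand} MB/s vs chipset link ~{CHIPSET_LINK_MBPS} MB/s")
print("headroom left" if demand < CHIPSET_LINK_MBPS else "possible bottleneck")
```

Even with the SSD included, the estimated total stays under the chipset link, which is consistent with keeping everything on the onboard controller until more drives are added.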
  16. Thinking they weren't as good as the Intel-controlled ports, I have to this point only connected the optical drives to the Marvell ports.
  17. Hello everybody,

      So, I decided to finally give unRAID a try. Here is my current setup.

      CPU: Intel Core i7 3770k @ 4.4 GHz
      RAM: 32GB DDR3 1600
      MB: Gigabyte Intel Z77 GA-Z77X-D3H

      Storage interfaces on the MB:
      2 x SATA 6Gb/s connectors (Intel Z77)
      4 x SATA 3Gb/s connectors (Intel Z77)
      2 x SATA 6Gb/s connectors (Marvell 88SE9172)

      Current drives:
      10TB WD Red
      6TB WD Red
      4TB WD Red
      3TB WD Red
      2TB WD Green
      Blu-Ray Optical Drive
      Blu-Ray Optical Drive
      64GB SSD (system drive - Linux Mint 19)

      This used up all the SATA ports on the board. I figured I was going to just order another 10TB Red drive for parity, so I ordered an LSI SAS 9207-8i for additional ports. However, as I ran an extended SMART test on all my drives, I discovered the 3TB Red failed with a read error, and the 2TB Green was showing a very high "Current Pending Sector Count" and also had about 70,000 hours on it. Time to retire those drives. So, instead of ordering one 10TB Red, I ordered two: one for parity, and one to replace the combined 5TB of the two drives that didn't make the cut. I also ordered a larger SSD for the cache drive.

      New plan:
      10TB WD Red (Parity)
      10TB WD Red
      10TB WD Red
      6TB WD Red
      4TB WD Red
      1TB Samsung Evo 860 (Cache)
      Optical Drive
      Optical Drive

      Because I am replacing two smaller drives with one larger drive, I'm actually good on SATA ports again, though I already have the LSI HBA on order. I could return the HBA. I could keep the HBA but leave it out of the system until I want to add another drive. Or I could install and use the HBA, but I don't know if that would provide any benefit, which brings me to my first question: is there any advantage to using the LSI SAS 9207-8i with its high bandwidth (all 6Gb/s ports)? Would using this take any pressure off of the Intel controller? Only the SSD cache drive would truly need a 6Gb/s port (available on the Intel), but I am wondering if splitting up the drives to different controllers would improve performance by not saturating the Intel controller when all drives were in use?
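Since the failing drives above were caught with SMART data, here is a small sketch of scripting that same spot check with smartctl from smartmontools. The device paths and the hour threshold are placeholders; the attribute names are the standard ones smartctl reports.

```python
# Sketch: flag drives with pending/reallocated sectors or very high power-on
# hours using smartctl (smartmontools). Run as root; device paths are examples.
import re
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]       # placeholder device nodes
WATCH = ["Current_Pending_Sector", "Reallocated_Sector_Ct"]
MAX_HOURS = 60_000                                    # arbitrary retirement threshold

def raw_value(report, name):
    # "smartctl -A" prints one row per attribute; RAW_VALUE is the last column.
    m = re.search(rf"^\s*\d+\s+{name}\s+.*?(\d+)\s*$", report, re.MULTILINE)
    return int(m.group(1)) if m else 0

for dev in DEVICES:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    hours = raw_value(out, "Power_On_Hours")
    suspect = [name for name in WATCH if raw_value(out, name) > 0]
    if suspect or hours > MAX_HOURS:
        print(f"{dev}: {hours} power-on hours, suspect attributes: {suspect or 'none'}")
```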