Everything posted by bkastner

  1. Norco 4224 Server Chassis Bundle - Includes everything you need for a storage server other than CPU, motherboard, RAM and storage. Includes HBA adapters, SFF cables, power supply (all already cabled), plus 2 additional backplanes in case of issue, as well as additional SFF cables.
     1x Norco RPC-4224 4U rack-mount chassis with 24 hot-swappable SATA/SAS 6Gb/s drive bays
     1x Upgraded fan backplane for 3x 120mm fans instead of 4x 80mm fans
     3x 120mm fans
     2x 80mm fans
     2x Replacement Norco RPC-4224 SAS backplanes (SFF-8087)
     3x Dell PERC H310 storage controllers - 8-port SATA/SAS 6Gb/s, low profile, RAID 0/1
     1x AX860 ATX power supply - 860W, 80 PLUS Platinum certified, fully modular
     6x 2ft Mini SAS (SFF-8087) to Mini SAS (SFF-8087) data cables
     Located outside of Toronto, Canada. Asking $900 CAD / $700 USD + shipping (if required). Willing to meet in person, or ship, though shipping will likely be expensive due to size and weight. Not interested in selling parts - only the complete bundle.
  2. I've had Unraid running for 10+ years and have just added or updated disks as needed. Currently, I have a mix of WD Reds (12TB, 6TB, 4TB). All drives are xfs and sharing is configured to split top level only, so everything associated with a show or movie is on a single disk. However, in this scenario I do have disks that get filled, and I need to shuffle stuff around, which can get annoying (though I know I can set a minimum free space). I was looking at adding a couple of WD Gold 20TB drives, and with drives this large I have an opportunity to rebuild a pool from scratch. I was thinking of buying 5 of the 20TB drives, which would give me 60TB of usable space. I'd then move my movies over, and once the 12TB Reds are empty I'd add them to the new pool. The thought is I can switch to btrfs and can (potentially) switch to split at any level, which I assume will fill my drives fairly equally. However, I have some questions:
     1) Is this even worth doing? It seems like it may be a good idea, and it is a rare chance to basically start over; I have around 100TB of data, so it's not easy to redo. I was only going to buy 3 of the Gold 20TB drives, but figured buying more makes this an option. I want thoughts on whether it's worth the effort.
     2) Am I better off moving from xfs to btrfs? I've seen that both offer benefits, but btrfs seems to be where things are going, so again... easier to do this way. Moving from reiserfs to xfs was a pain in the butt, so I don't really want to do this drive by drive.
     3) Are there reasons I wouldn't want to use 'automatically split any directory' (I think it used to be called split level 0)? I do like the organization of having everything on a specific drive, but get tired of shuffling data around at times. I've never explored this split level before and am curious if there are any thoughts / recommendations.
     4) If I do split any directory, am I correct that with the initial 3 drives it will just balance the data, but once I add one of the 12TB drives it will fill that to the same level as the original 20TB drives before distributing equally again? (i.e. if I have 5TB used on each of the 20TB drives, when I add a 12TB drive will it fill it to 5TB as well before distributing equally, or does it use % of disk capacity, so 3TB on the 12TB vs 5TB on the 20TB drives?)
     5) Are there thoughts on how best to manage this? 99% of the data is tied to a Plex server, which is going to add a challenge. I am thinking of starting with just moving all my movies over, as that content is (fairly) static, then moving historical shows that won't update, and finally everything else. This will presumably mean Plex can't see some of the TV show data for a while, but I am hoping it will minimize issues with data access.
     6) Does anyone know how Plex will manage this? I have a ton of movies and TV shows and don't want to completely screw up Plex or have it think that old data is being deleted and then re-added in the new location (basically, I want to avoid churn). But I don't know the best way to manage this process (again... if it's even worthwhile).
     I'd be interested in any thoughts anyone has on the above. My Unraid server has been pretty static and stable for a long time and has run great. I like the idea of setting myself up better for the future, but don't want to screw things up (especially if it's not even worth doing at all). Thanks.
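     For reference, a minimal sketch of the per-disk move (hypothetical disk numbers and paths; rsync assumed available). The useful property is that moving files between array disks while keeping the same relative share path leaves the /mnt/user view untouched, so Plex's library paths never change:

        # Move a share's folder from an old 12TB disk to a new 20TB disk
        # (disk numbers are placeholders). The /mnt/user path is unchanged.
        rsync -a --remove-source-files /mnt/disk3/Movies/ /mnt/disk8/Movies/
        # Caution: copy disk-to-disk or user-to-user, but never mix /mnt/user
        # and /mnt/diskX paths in one command - mixing them is a known way
        # to lose data on Unraid.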
  3. Well what do you know... that fixed it. Thank you for the suggestion. I'm glad it was an easy fix.
  4. Okay... I enabled Mover logging and ran the mover. I'm assuming the logs still get captured in the diagnostics so have attached the new one. cydstorage-diagnostics-20210408-2329.zip
  5. I noticed a couple of weeks ago that my cache drive was getting full, and then realized that my mover isn't doing anything. I'm guessing it's been about a month, which (roughly) coincides with installing 6.9.1, but I have no idea if the two items are related or just generally grouped together in my memory. Regardless, the mover has stopped working, and I have no idea why. I have TV & Movies set to yes:cache, but I have to manually move them right now, which is annoying. If anyone can review my logs and tell me why, I would definitely appreciate it. cydstorage-diagnostics-20210408-2329.zip
  6. I'd agree with that. I'm still struggling with this as well, as noted in the other thread Energen linked. I thought I had it sorted out by copying most of the data and then migrating the last bit while Plex was down, but once copied, Plex wouldn't start. I'm currently compressing the Plex data into a tar file and trying to copy it over that way. It looks like I will need around 2 hours to tar the data, but it's not quite done, so I don't know what the rest of the process will look like. I don't know what NVMe drive you have, but unless it's a high-end SLC or maybe MLC drive it's likely going to be a challenge if you need to move a lot of Plex data. If it's a smaller library it may not be horrible, or as suggested, you can just rebuild it. At worst, you can do a test run... use cp or something to copy to the NVMe while Plex is running and see how long the overall process is, and then plan your production move based on those results.
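     For anyone trying the same thing, a minimal sketch of a streaming variant (mount points are placeholders): piping tar to tar copies everything in one stream, so there is no intermediate tar file to store and less per-file overhead on 1.5M small files:

        # Pack the plex folder on the old SSD and unpack it on the new
        # cache in one pipeline (hypothetical paths).
        tar -C /mnt/disks/ssd -cf - plex | tar -C /mnt/cache/appdata -xf -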
  7. Lol... no chance... I probably haven't seen more than 20-25%.. though my father-in-law lives with us and is retired and has likely watched a pretty good chunk. But I do provide media for a number of friends and family and Plex is awesome for that.. so much better than filling external HDDs which I used to do for everyone... that got really annoying quickly.
  8. I had been thinking about that, but the NVMe drive *should* be much faster when people are browsing libraries, and ideally I'd rather have it stored on the cache drive vs unassigned devices. A few weeks ago I was doing something with the array - I think it was a drive replacement - and it looked like unassigned devices didn't come up until the array was back to normal... not sure if that was normal, but Plex was down until the unassigned device was visible. I figured that having it all on the cache drive should eliminate this risk/issue - though again, I'm not sure how normal that behavior was. I had moved Plex to my SSD from the WD Black cache drive as it was much faster, but I figured one of the benefits of the NVMe is it should be so much faster than both... and it reduces the complexity of the environment to keep it all "in house". I tried the cp -ar command and it took 25 mins... so I am wondering if it skips existing files automatically (since 95-98% would have been on the NVMe already), which would be a very reasonable timeframe for Plex to be down during the switch. And yes... my Plex metadata folder is huge... it's over 4700 movies and 750 TV shows with over 30,000 episodes total. So... it's a ton of metadata.
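     On whether cp skips existing files, assuming GNU cp: plain cp -ar recopies everything; adding -u makes it skip destination files that are already up to date:

        # -a preserves attributes and recurses; -u copies a file only if
        # it is missing or newer on the source (GNU coreutils cp; the
        # destination directory is assumed to exist, paths are placeholders).
        cp -au /mnt/disks/ssd/plex/. /mnt/cache/appdata/plex/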
  9. Thanks. One additional question... if I am looking at doing this in 2 passes as mentioned where I pre-stage as much data as possible prior to turning Plex off for the final pass... will this command skip files that are already copied and haven't been updated? Or will I need additional switches on the second pass to skip identical files? I sort of remember something like this in Windows with archive bits being set / unset, but not sure how this works in the Linux world.
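     A sketch of the two-pass approach using rsync instead of cp (paths are placeholders; --info=progress2 needs rsync 3.1+). The second run compares size and modification time and re-copies only files that changed, so no extra switches are needed between passes:

        # Pass 1 while Plex is running, pass 2 after shutting Plex down;
        # unchanged files are skipped automatically on the second run.
        rsync -a --info=progress2 /mnt/disks/ssd/plex/ /mnt/cache/appdata/plex/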
  10. Thanks, that's good to know. I like MC as it's easy, but will try the command line approach. I know the 'r' is for recursive, but what does the 'a' do? I see it's for archive, but I am not sure I understand what that does in this context
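     For reference, in GNU cp the 'a' (archive) flag is documented as shorthand for -dR --preserve=all:

        # -a = -dR --preserve=all: recurse, copy symlinks as symlinks, and
        # keep ownership, permissions, and timestamps intact - which matters
        # for appdata that a container expects to own.
        cp -a /source/dir /destination/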
  11. I was using MC to do the copy from /mnt/disks/CachePool to /mnt/cache
  12. That's some interesting information... I knew QLC is slow but has the SLC cache to help, but didn't think that applied to MLC or TLC NVMe drives as well, which is why I was surprised at the performance. I am thinking I will do a copy of the plex folder so I don't have to take it down, and will hopefully have 99% of the overall data on the NVMe; then when I take Plex offline and redo the move, skipping files that already exist, it should be a minimal copy that won't overwhelm the NVMe... definitely more complicated than I was expecting. I also realize that I now need to figure out how to flash the firmware on the NVMe drive while on Linux, as that is something I apparently need to do as well.
  13. I have a pretty large Plex metadata folder (250GB / 1.5M files), which is currently sitting on an SSD drive using Unassigned Devices. I recently built a new system and installed a Corsair MP600 1TB to act as a cache drive and have been moving the appdata over to it, but the issue is when copying the Plex metadata folder. Before the NVMe drive I was using a 1TB WD Black HDD and copied the plex directory from it to the SSD in 7-8 hours (I think... it finished while I was asleep). Now I am copying the folder from the SSD to the new NVMe cache drive, and it's taking FOREVER. I gave up after 24 hours as it still seemed to have 3-5GB left, which would have been a few hours more at least. Thankfully, even though I was moving files, it actually seems to be a copy-then-delete process, so I was able to remount Plex using the existing SSD.
     At first I thought I screwed up and the MP600 was a QLC drive and I was blowing through the cache and dealing with slow writes because of it, but it turns out it's a TLC drive, which while not ideal should be more than sufficient (I thought). Does anyone else have experience with anything similar? Is there a reason I'm not aware of that this long write time would make sense? I am guessing that once I have the plex folder on the MP600 I should have really good performance, but the horrible write time has me concerned, and I am wondering if this is a good NVMe drive to be using for this, or if I should be looking for an MLC or SLC drive. The MP600 seems to have great reviews and really good performance, so I am confused whether it's just not an ideal scenario to be moving so many small files to the drive. This is the first NVMe drive I've owned and I was really excited by the performance potential, but feel seriously underwhelmed at the moment.
  14. Okay... one more question... I am 99.99999% positive I reformatted the new cache drive as btrfs, but just noticed it's showing as xfs in Unraid. I am almost done moving the 250GB of Plex data back over (1.5M files), which takes such a long time. Am I screwed now for adding a second cache drive and getting the raid1 pool? Does it need to be btrfs to support that? Or does xfs work? I really hope the answer is that I am good, but I am guessing I am likely not.
  15. Thanks for the info. I've reformatted my new drive as BTRFS and finished the standard migration to it. Am I correct that if I add a second BTRFS NVMe drive to the system I can add it to the cache as well and they will automatically sync data between the two providing redundancy?
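     If it helps, a two-device btrfs pool in Unraid defaults to mirroring, and the underlying operation looks roughly like this (a sketch only - Unraid normally runs the equivalent balance for you when the second device is added):

        # Convert existing data and metadata to raid1 across both devices
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache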
  16. I currently have 3 SAS2LP controllers that tie into my Norco 4224 backplanes, but the new motherboard I have only has 3 x16 and 1 x1 PCIe slots. I am hoping to add a 10Gb NIC, and possibly a transcoding video card down the road. I have bought a RES2SV240 expander, which I've seen doesn't even need to be mounted in a PCIe slot, which is kinda cool, but I am wondering what sort of throughput I am going to see if I use it. Given that I still need 1 HBA, I am assuming I'd remove 2 of the SAS2LPs and keep one with a direct connection to a backplane and another link to the RES2SV240, with its other 5 ports connecting to the other backplanes. Essentially I want to try and understand the following:
     1) Am I going to significantly impact performance with 5 backplanes going through the one card? I currently get around 95-98MB/s for parity checks and don't know if this will really impact that, or if I have 8-10 people streaming off different disks, am I going to bottleneck at all?
     2) Is there a significant difference between running 1 SAS2LP with 5 connections through the RES2SV240 vs running 2 SAS2LP cards both feeding a connection into the RES2SV240 and only using 4 of its connections to backplanes?
     3) Given that the backplanes are all 6Gb/s SAS, is there any value in buying a 12Gb/s SAS controller with 6 ports and running SFF-8643 to SFF-8087 converter cables? I've had one vendor tell me this could cause issues (i.e. frying backplanes), but I don't know if that's true, or if I'd see any improved performance vs the SAS expander. If so, this would also potentially set me up if I were to replace the case down the road, as I could get one with 12Gb/s SAS backplanes (though these are really expensive).
     4) Would I even notice much difference between 6Gb/s SAS and 12Gb/s SAS with all WD Red drives? I know this gets into a throughput question, and I've seen some comments on other threads, but I'm still not clear on whether there is a significant gain in going to 12Gb/s SAS (enough to justify a $500 card and a $1200-$1500 case down the road).
     Any input or thoughts would be appreciated.
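     Rough back-of-envelope on question 1, assuming a single SFF-8087 uplink between the HBA and the expander: one 6Gb/s SAS lane carries roughly 600MB/s after encoding, so a 4-lane uplink tops out near 2,400MB/s. With 5 backplanes (20 drives) behind it, that works out to about 120MB/s per drive during a parity check - just above the 95-98MB/s seen now - so a single uplink is close to the line, and a dual-link (8-lane) connection to the expander would raise the ceiling to roughly 4,800MB/s.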
  17. So... I currently have a 1TB WD Black cache drive which I am looking at replacing with a 1TB NVMe drive, and I was thinking of getting a second one as well and adding it to a cache pool. I've seen the documented process for swapping the cache drive, but was curious whether you could instead just add the 1TB NVMe to a cache pool with the WD, and then remove the WD drive from the pool, leaving the NVMe with everything going. If I read cache pools correctly, both drives maintain a copy of the data, so is this a doable solution? Also... if I start with the one NVMe cache drive, can I add a second one for redundancy down the road? Or should I be adding them both at the same time? My current cache drive is basically empty as I've been preparing for this move, but I was curious if the above replace/migrate was a valid approach or if there was a reason that wouldn't work (other than speeds being quite different between a HDD and an NVMe drive, of course).
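     If the add-then-remove route is attempted, a rough sequence might look like the following (a sketch, not a verified procedure - worth checking the current pool documentation first):
        1. Stop the array, add the NVMe as a second cache pool slot, and start the array.
        2. Let the btrfs balance finish so both devices hold a full copy of the data.
        3. Stop the array, remove the WD from the pool, and start the array again.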
  18. Thanks for the feedback guys. I will have to try the management option at some point, but am glad that things seem to be normalized at least.
  19. Okay... last comment before bed. It seems my issue is the Realtek RTL8117 NIC, which had been assigned as eth0. I've broken the bond, switched NICs so the Intel I211-AT NIC is eth0, and disabled the Realtek NIC, and network performance (at least locally) is normal again. I also don't get any drop errors anymore. Is there a known issue with the RTL8117? Or is it unique to me? I am hoping to get a 10Gb NIC down the road, so both on-board NICs will likely be turned off eventually, but I'd be curious to know if the issue is just my MB for some reason. Hopefully everyone's Plex experience is back to normal and I can let this lie.
  20. Okay... so I brought the other NIC up and bonded them to test... network speeds are definitely better and I don't have the stuttering, but it does show eth0 as 10Mbps and eth1 as 1000Mbps... no idea why, and I am open to any suggestions to help. I know others have used the same MB, so I'm not sure why I am having difficulties.
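     One way to see what each NIC actually negotiated, assuming ethtool is available from the Unraid console:

        # Show negotiated speed/duplex for each interface
        ethtool eth0
        ethtool eth1
        # Restart autonegotiation on the link stuck at 10Mbps
        ethtool -r eth0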
  21. Found what I was talking about with the NIC (eth0). I did just switch network cables, but it didn't help. It seems to be an issue with the NIC for some reason.
  22. Actually looking at my dashboard.. the server has only been up less than 8 hours, but I have a lot of drops as shown in the picture
  23. I have a gigabit switch that the Unraid server ties directly into, as does my internet router. I started copying files from Unraid to my local PC as the playback was horrible, and noticed how slow the transfer was, which started leading me down the same thoughts. Everything from the cables to the switch onward is the same as before, but with the new MB there are new NICs, and I am guessing that is where the issue lies. For some reason I thought when poking around that my one active NIC was showing 10Mbps instead of 100 or 1000, but I had thought it was just reporting in error... now I am wondering if that's actually the case, and I have no idea where I saw that reported. I can talk through other parts of the network, but assume it's isolated to the Unraid server, as Plex in the house and outside are both affected, as is local direct playback with the Windows Film & TV player.
     The Asus Pro WS X570-ACE has 2 NICs, one Intel and one Realtek... I think one is for management, but they both show up in Unraid. I think Unraid tried to bond them when I started up, but I didn't know if bonding the 2 different manufacturer NICs was a good idea, so I turned it off. Maybe I should try with it turned back on?
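     To take Plex and disk I/O out of the equation, a raw throughput test between the PC and the server would isolate the NIC/cabling (assuming iperf3 is installed on both ends; the IP is a placeholder):

        # On the Unraid console:
        iperf3 -s
        # On the Windows PC, pointing at the server's IP:
        iperf3 -c 192.168.1.100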
  24. Okay... I've now confirmed the stutter isn't just with the parity check. I had only built the server on June 30th, so I thought it was only an issue while parity was running, but if I try and start a video now I still get massive buffering going on. Does anyone have any suggestions? Again... given what I changed, I don't understand why it would have negatively affected performance like this.