Everything posted by bkastner

  1. I'd agree with that. I'm still struggling with this as well, as noted in the other thread Energen linked. I thought I had it sorted out by copying most of the data and then migrating the last bit while Plex was down, but once copied, Plex wouldn't start. I'm currently compressing the Plex data into a tar file and trying to copy it over that way (rough commands below). It looks like I will need around 2 hours to tar the data, but it's not quite done, so I don't know what the rest of the process will look like. I don't know what NVMe drive you have, but unless it's a high-end SLC or maybe MLC drive, it's likely going to be a challenge if you need to move a lot of Plex data. If it's a smaller library it may not be horrible, or as suggested, you can just rebuild it. At worst, you can do a test run... use cp or something to copy to the NVMe while Plex is running, see how long the overall process takes, and then plan your production move based on those results.
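     For reference, this is roughly the tar approach I'm trying (untested sketch; the paths are from my setup, and "plex" is just what my appdata folder is called - adjust to yours):

        # pack the Plex appdata into one tar (no compression), so the destination
        # sees a single big sequential write instead of 1.5M tiny ones
        tar -cf /mnt/disks/CachePool/plexdata.tar -C /mnt/disks/CachePool/appdata plex
        # copy the single large file to the NVMe cache, then unpack it there
        cp /mnt/disks/CachePool/plexdata.tar /mnt/cache/
        mkdir -p /mnt/cache/appdata
        tar -xf /mnt/cache/plexdata.tar -C /mnt/cache/appdata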
  2. Lol... no chance... I probably haven't seen more than 20-25%... though my father-in-law lives with us, is retired, and has likely watched a pretty good chunk. But I do provide media for a number of friends and family, and Plex is awesome for that... so much better than filling external HDDs, which I used to do for everyone... that got really annoying quickly.
  3. I had been thinking about that, but the NVMe drive *should* be much faster when people are browsing libraries, and ideally I'd rather have it stored on the cache drive vs unassigned devices. A few weeks ago I was doing something with the array - I think it was a drive replacement - and it looked like unassigned devices didn't come up until the array was back to normal... not sure if that was normal, but Plex was down until the unassigned device was visible. I figured that having it all on the cache drive should eliminate this risk/issue - though again, I'm not sure how normal that behavior was. I had moved Plex to my SSD from the WD Black cache drive as it was much faster, but I figured one of the benefits of the NVMe is it should be so much faster than both... and it reduces the complexity of the environment to keep it all "in house". I tried the cp -ar command and it took 25 mins... so I am wondering if it skips existing files automatically (since 95-98% would have been on the NVMe already), which would be a very reasonable timeframe for Plex to be down during the switch. And yes... my Plex metadata folder is huge... it covers over 4,700 movies and 750 TV shows with over 30,000 episodes total. So... it's a ton of metadata.
  4. Thanks. One additional question... if I am looking at doing this in 2 passes as mentioned where I pre-stage as much data as possible prior to turning Plex off for the final pass... will this command skip files that are already copied and haven't been updated? Or will I need additional switches on the second pass to skip identical files? I sort of remember something like this in Windows with archive bits being set / unset, but not sure how this works in the Linux world.
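     For what it's worth, the kind of two-pass flow I'm picturing looks like this (just a sketch, assuming rsync is available on the box, and using my paths):

        # pass 1: pre-stage the bulk of the data while Plex is still running
        rsync -a /mnt/disks/CachePool/appdata/plex/ /mnt/cache/appdata/plex/
        # pass 2: stop Plex, then re-run; rsync skips files whose size and
        # mtime already match, so only new/changed files actually get copied
        rsync -a --delete /mnt/disks/CachePool/appdata/plex/ /mnt/cache/appdata/plex/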
  5. Thanks, that's good to know. I like MC as it's easy, but I will try the command-line approach. I know the 'r' is for recursive, but what does the 'a' do? I see it's for archive, but I'm not sure I understand what that means in this context.
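     (Answering my own question after reading the man page: in GNU cp, -a is archive mode, shorthand for -dR --preserve=all, so the -r was actually redundant.)

        # -a recurses, never follows symlinks, and preserves ownership,
        # permissions, timestamps and links - which matters for appdata
        # that a docker container expects to own
        cp -a /mnt/disks/CachePool/appdata/plex /mnt/cache/appdata/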
  6. I was using MC to do the copy from /mnt/disks/CachePool to /mnt/cache
  7. That's some interesting information... I knew QLC is slow but has the SLC cache to help; I didn't think that applied to MLC or TLC NVMe drives as well, which is why I was surprised at the performance. I am thinking I will do a copy of the Plex folder so I don't have to take it down, and will hopefully have 99% of the overall data on the NVMe; then, when I take Plex offline and redo the move while skipping files that already exist, it should be a minimal copy that won't overwhelm the NVMe... definitely more complicated than I was expecting. I also realize that I now need to figure out how to flash the firmware on the NVMe drive while on Linux, as that is something I apparently need to do as well.
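     On the firmware piece, this is as far as I've gotten (very much an unverified sketch: Corsair mainly ships MP600 firmware through their Windows utility, so I'm assuming the raw image can be obtained and that nvme-cli will talk to the drive; "mp600_fw.bin" is a placeholder name):

        nvme list                                      # confirm which /dev/nvmeX is the MP600
        nvme fw-log /dev/nvme0                         # show current firmware slots/revisions
        nvme fw-download /dev/nvme0 --fw=mp600_fw.bin  # stage the new image
        nvme fw-commit /dev/nvme0 --slot=1 --action=1  # commit it; activates on the next reset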
  8. I have a pretty large Plex metadata folder (250GB / 1.5M files), which is currently sitting on an SSD using Unassigned Devices. I recently built a new system and installed a Corsair MP600 1TB to act as a cache drive and have been moving the appdata over to it, but the issue is when copying the Plex metadata folder. Before the NVMe drive I was using a 1TB WD Black HDD, and I copied the Plex directory from it to the SSD in 7-8 hours (I think... it finished while I was asleep). Now I am copying the folder from the SSD to the new NVMe cache drive, and it's taking FOREVER. I gave up after 24 hours as it still seemed to have 3-5GB left, which would have been a few hours more at least. Thankfully, even though I was moving files, it actually seems to be a copy/delete process, so I was able to remount Plex using the existing SSD. At first I thought I had screwed up and the MP600 was a QLC drive and I was blowing through the cache and dealing with slow writes because of it, but it turns out it's a TLC drive, which while not ideal should be more than sufficient (I thought). Does anyone else have experience with anything similar? Is there a reason I'm not aware of that would make this long write time make sense? I am guessing that once I have the Plex folder on the MP600 I should have really good performance, but the horrible write time has me concerned, and I am wondering if this is a good NVMe drive to be using for this, or if I should be looking for an MLC or SLC drive. The MP600 seems to have great reviews and really good performance, so I am confused whether it's just not an ideal scenario to be moving so many small files to the drive. This is the first NVMe drive I've owned and I was really excited by the performance potential, but I feel seriously underwhelmed at the moment.
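     In case anyone wants to check whether it's the pSLC cache filling up, this is how I've been watching the copy (assuming iostat from the sysstat package is available; nvme0n1 is my device name, yours may differ):

        # refresh every 5 seconds and watch the write MB/s column for the NVMe;
        # a TLC drive falling out of its pSLC cache shows a sharp, sustained drop
        iostat -xm 5 nvme0n1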
  9. Okay... one more question... I am 99.99999% positive I reformatted the new cache drive as btrfs, but I just noticed it's showing as xfs in Unraid... I am almost done moving the 250GB of Plex data back over (1.5M files), which takes such a long time. Am I screwed now for adding a second cache drive and getting the raid1 pool? Does it need to be btrfs to support that? Or does xfs work? I really hope the answer is that I am good, but I am guessing I am likely not.
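     For anyone else wanting to double-check theirs, this is how I confirmed what the drive actually got formatted as (standard tools, nothing Unraid-specific):

        df -T /mnt/cache                    # prints the filesystem type of the mounted cache
        btrfs filesystem show /mnt/cache    # if it really is btrfs, lists the pool's device(s)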
  10. Awesome. Thanks Johnnie
  11. Thanks for the info. I've reformatted my new drive as BTRFS and finished the standard migration to it. Am I correct that if I add a second BTRFS NVMe drive to the system I can add it to the cache as well and they will automatically sync data between the two providing redundancy?
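     Follow-up for the archives: my understanding is that once the second drive is added, you can confirm the pool is actually mirroring with something like this (a sketch; Unraid is supposed to handle the raid1 conversion itself when the device is added):

        # shows how data and metadata chunks are allocated across the pool;
        # both should report RAID1 once the second device is in and balanced
        btrfs filesystem df /mnt/cache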
  12. I currently have 3 SAS2LP controllers that tie into my Norco 4224 backplanes, but the new motherboard I have only has 3 x16 and 1 x1 PCIe slots. I am hoping to add a 10Gb NIC, and possibly a transcoding video card down the road. I have bought a RES2SV240, which I've seen doesn't even need to be mounted in a PCIe slot (which is kinda cool), but I am wondering what sort of throughput I am going to see if I use it. Given that I still need 1 HBA, I am assuming I'd remove 2 of the SAS2LPs and keep one with a direct connection to a backplane and its other port feeding the RES2SV240, with the expander's remaining 5 ports connecting to the other backplanes. Essentially I want to try and understand the following:
     1) Am I going to significantly impact performance with 5 backplanes going through the one card? I currently get around 95-98MB/s for parity checks and don't know if this will really impact that, or whether I will bottleneck at all with 8-10 people streaming off different disks (rough math below).
     2) Is there a significant difference between running 1 SAS2LP with 5 connections through the RES2SV240 vs running 2 SAS2LP cards both feeding a connection into the RES2SV240 and only using 4 connections from it to backplanes?
     3) Given that the backplanes are all 6Gb/s SAS, is there any value in buying a 12Gb/s SAS controller with 6 ports and running SFF-8643 to SFF-8087 converter cables? I've had one vendor tell me this could cause issues (i.e. frying backplanes), but I don't know if that's true, or whether I'd see any improved performance vs the SAS expander. If so, this would also potentially set me up if I were to replace the case down the road, as I could get one with 12Gb/s SAS (though those are really expensive).
     4) Would I even notice much difference between 6Gb/s SAS and 12Gb/s SAS with all WD Red drives? I know this gets into a throughput question, and I've seen some comments on other threads, but I'm still not clear on whether there is a significant gain from going to 12Gb/s SAS (enough to justify a $500 card and a $1200-$1500 case down the road).
     Any input or thoughts would be appreciated.
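     Rough back-of-the-envelope numbers I've been using for question 1 (assuming the usual 8b/10b encoding overhead on 6Gb/s SAS, so ~600MB/s of payload per lane):

        one SFF-8087 uplink = 4 lanes x 6Gb/s = 4 x ~600MB/s = ~2400MB/s usable
        5 backplanes x 4 bays = up to 20 drives behind that single uplink
        2400MB/s / 20 drives = ~120MB/s per drive with every disk busy at once
        (i.e. a parity check) - right around what a full WD Red can stream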
  13. So... I currently have a 1TB WD Black cache drive which I am looking at replacing with a 1TB NVMe drive, and I was thinking of getting a second one as well and adding it to a cache pool. I've seen the documented process for swapping the cache drive, but I was curious whether you could instead just add the 1TB NVMe to a cache pool with the WD, and then remove the WD drive from the pool, leaving the NVMe running with everything on it. If I read cache pools correctly, both drives maintain a copy of the data, so is this a doable solution? Also... if I start with the one NVMe cache drive, can I add a second one for redundancy down the road? Or should I be adding them both at the same time? My current cache drive is basically empty as I've been preparing for this move, but I was curious whether the above replace/migrate was a valid approach or if there was a reason it wouldn't work (other than speeds being quite different between a HDD and an NVMe drive, of course).
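     In case the add-then-remove route is viable, my understanding of what would happen under the hood is something like this (just a sketch - I'd expect to do it through the Unraid GUI rather than by hand; /dev/sdX1 is a stand-in for the WD Black's partition):

        # raid1 can't drop below two devices, so first convert back to single-copy
        # (-f is needed because this reduces redundancy):
        btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache
        # then remove the old WD; btrfs migrates remaining chunks off it first:
        btrfs device delete /dev/sdX1 /mnt/cache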
  14. Thanks for the feedback guys. I will have to try the management option at some point, but am glad that things seem to be normalized at least.
  15. Okay... last comment before bed. It seems my issue is the Realtek RTL8117 NIC, which had been assigned as eth0. I've broken the bond, switched NICs so the Intel I211-AT NIC is eth0, and disabled the Realtek NIC, and network performance (at least locally) is normal again. I also don't get any drop errors anymore. Is there a known issue with the RTL8117? Or is it unique to me? I am hoping to get a 10Gb NIC down the road, so both on-board NICs will likely be turned off eventually, but I'd be curious to know if the issue is just my MB for some reason. Hopefully everyone's Plex experience is back to normal and I can let this lie.
  16. Okay... so I brought the other NIC up and bonded them to test... network speeds are definitely better and I don't have the stuttering, but it shows eth0 at 10Mbps and eth1 at 1000Mbps... no idea why, and I am open to any suggestions. I know others have used the same MB, so I'm not sure why I am having difficulties.
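     If anyone can sanity-check: this is what I'm using to read the negotiated link speed (stock ethtool):

        ethtool eth0 | grep -E 'Speed|Duplex'   # what the port actually negotiated
        ethtool -s eth0 autoneg on              # kick off a renegotiation if it's stuck at 10Mbps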
  17. Found what I was talking about with the NIC (eth0). I did just switch network cables, but it didn't help. It seems to be an issue with the NIC for some reason.
  18. Actually, looking at my dashboard... the server has only been up for less than 8 hours, but I have a lot of drops, as shown in the picture.
  19. I have a gigabit switch that the UnRAID server ties directly into, as does my internet router. I started copying files from UnRAID to my local PC as the playback was horrible, and I noticed how slow the transfer was, which started leading me down the same thoughts. Everything from the cables to the switch onward is the same as before, but with the new MB there are new NICs, and I am guessing that is where the issue lies. For some reason I thought when poking around that my one active NIC was showing 10Mbps instead of 100 or 1000, but I had assumed it was just reporting in error... now I am wondering if that's actually the case, and I have no idea where I saw that reported. I can talk through other parts of the network, but I assume it's isolated to the UnRAID server, as Plex both in the house and outside is affected, as is local direct playback with the Windows Film & TV player. The Asus Pro WS X570-ACE has 2 NICs, one Intel and one Realtek... I think the Realtek is for management, but they both show up in UnRAID. I think UnRAID tried to bond them when I started up, but I didn't know if bonding the 2 different manufacturer NICs was a good idea, so I turned it off. Maybe I should try with it turned back on?
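     For the drop counters I mentioned, this is where I'm reading them from (standard iproute2, nothing fancy):

        # per-interface RX/TX totals, including drops, since the NIC came up
        ip -s link show eth0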
  20. Okay... I've now confirmed the stutter isn't just with parity checks. I had only built the server on June 30th, so I thought it was only an issue when parity was running, but if I try and start a video now I still get massive buffering. Does anyone have any suggestions? Again... given what I changed, I don't understand why it would have negatively affected performance like this.
  21. I'm running through the full tunables testing now to see if it helps. Even during a parity check, though, I am assuming I should be able to watch a few video streams without major impact. Again... other than changing the MB, CPU and RAM, the system is exactly the same, and I was able to do this before the hardware upgrade.
  22. I've been running UnRAID for a long time... likely 10 years at this point, and for most of that time I've been running great on a Xeon E3 1230 with 32GB of ECC memory. I have 3 SAS2LP controllers that tie into the case backplanes, where I currently have 19 disks running. Every so often I've had video playback stuttering... either through Plex, or if I have a drive mapped and start playing a video using a Win10 PC. As my hardware was getting on in years, I thought I'd upgrade to a Ryzen system - an ASUS Pro WS X570-Ace MB, a Ryzen 3900X and 32GB of ECC RAM. My issue is that since switching I am getting a ton more buffering (to the point where videos can be unplayable), and I really don't understand why. Other than the MB, CPU and RAM, the infrastructure is the same... and everything should be much faster (on Passmark I went from about 8K on the Xeon to 32K on the Ryzen). I'm looking to run the disk tunables script to ensure those are set correctly, as I last did it 5-6 years ago (I think I was still on UnRAID 5.x the last time I ran it), but I don't know what else to check. If anyone has suggestions I would greatly appreciate them. I've included a diagnostic dump for anyone who can pull useful information from it. cydstorage-diagnostics-20200702-2124.zip
  23. Thanks for the clarification. This doesn't apply when assigning to a docker container though?
  24. Thanks. I had seen elsewhere that assigning the GPU to the Plex Docker would make it unavailable for other things like VMs, etc, but wasn't sure of all the limitations of this scenario. I'm glad I only need the one card so I don't have to waste another PCI slot.