bkastner

Members
  • Posts
    1177
About bkastner

  • Birthday 08/07/1971

  • Gender
    Male
  • Location
    Canada


bkastner's Achievements

Collaborator (7/14)

Reputation: 2

  1. The current cache drive is btrfs, so I should be good. Thank you for confirming.
  2. Hello. I currently have a 1TB NVMe drive as my cache, but I've bought two new 1TB drives to replace it as a RAID 1 cache pool, after which the old drive would be used just for caching share data / new downloads. My Plex metadata folder is large, with a ton of files, so I am trying to find an easy way to do this. I wanted to confirm whether my strategy would work, and whether there are any potential issues:

1) Add one of the new 1TB drives to the existing cache pool (which I assume will start to mirror the existing drive).
2) Once replicated, remove the old cache drive, replace it with the second new 1TB drive, and let the mirror rebuild.
3) Re-add the old cache drive in a new pool for downloads.

So, a couple of questions:

1) Am I correct in how the RAID 1 system will work? Do I need to do anything to make this happen?
2) Is there a way to know once the two drives are completely mirrored?
3) Are there any issues with the above approach?
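The steps described above map roughly onto btrfs device operations. This is only a sketch: on Unraid the GUI drives these steps when you change pool slots, and the device names below are placeholders, not the actual cache devices.

```shell
# Sketch only; device names are placeholders for the real cache devices.
btrfs device add /dev/nvme1n1p1 /mnt/cache                      # step 1: add the new drive to the pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache  # mirror data and metadata across both devices
btrfs balance status /mnt/cache   # question 2: reports progress; "No balance found" once the convert is done
btrfs filesystem show /mnt/cache  # lists both devices and their allocated space
btrfs device remove /dev/nvme0n1p1 /mnt/cache                   # step 2: detach the old drive once mirrored
```

`btrfs balance status` (or `btrfs filesystem show` reporting roughly equal usage on both devices) is the usual way to confirm the mirror is complete before pulling a drive.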
  3. Sorry... as you likely noted above, this is now sold.
  4. I'd do $800 for everything, but that's about as low as I want to go
  5. Norco 4224 Server Chassis Bundle - Includes everything you need for a storage server other than CPU, motherboard, RAM, and storage: HBA adapters, SFF cables, power supply (all already cabled), plus 2 additional backplanes in case of issue, as well as additional SFF cables.

1x NORCO RPC-4224 4U rack-mount chassis with 24 hot-swappable SATA/SAS 6G drive bays
1x Upgraded fan backplane for 3x 120mm fans instead of 4x 80mm fans
3x 120mm fans
2x 80mm fans
2x Replacement Norco RPC-4224 SAS backplanes (SFF-8087)
3x Dell PERC H310 adapters - storage controller (RAID) - 8 channels - SATA 6Gb/s / SAS 6Gb/s - low profile - RAID 0, 1
1x AX860 ATX power supply - 860 Watt, 80 PLUS® Platinum certified, fully modular
6x 2FT Mini SAS (SFF-8087) to Mini SAS (SFF-8087) data cables

Located outside of Toronto, Canada. Asking $900 CAD / $700 USD + shipping (if required). Willing to meet in person or ship, though shipping will likely be expensive due to size and weight. Not interested in selling parts; only the complete bundle.
  6. I've had Unraid running for 10+ years and have just added or updated disks as needed. Currently, I have a mix of WD Reds (12TB, 6TB, 4TB). All drives are xfs, and sharing is configured to split top-level only, so everything associated with a show or movie is on a single disk. In this scenario, however, I do have disks that get filled, and I need to shuffle stuff around, which can get annoying (though I know I can set a minimum free space).

I was looking at adding a couple of WD Gold 20TB drives, and with drives this large I have an opportunity to rebuild a pool from scratch. I was thinking of buying 5 of the 20TB drives, which would give me 60TB of usable space. I'd then move my movies over, and once the 12TB Reds are empty I'd add them to the new pool. The thought is I can switch to btrfs and can (potentially) switch to split-at-any-level, which I assume will fill my drives fairly equally. However, I have some questions:

1) Is this even worth doing? It seems like it may be a good idea, and it is a rare chance to basically start over; I have around 100TB of data, so it's not easy to redo. I was only going to buy 3 of the Gold 20TB drives, but figured that by adding more I make this an option. I want to get thoughts on whether it's worth the effort.

2) Am I better off moving from xfs to btrfs? I've seen that both offer benefits, but btrfs seems to be where things are going, so again, it's easier to do this way. Moving from reiserfs to xfs was a pain in the butt, so I don't really want to do this drive by drive.

3) Are there reasons I wouldn't want to use 'automatically split any directory' (I think it used to be called split level 0)? I do like the organization of having everything on a specific drive, but get tired of shuffling data around at times. I've never explored this split level before and am curious if there are any thoughts / recommendations.

4) If I do split any directory, am I correct that with the initial 3 drives it will just balance the data, but once I add one of the 12TB drives it will fill that to the same level as the original 20TB drives before distributing equally again? (i.e. if I have 5TB used on each of the 20TB drives, when I add a 12TB drive will it fill it to 5TB as well before distributing equally, or does it use a percentage of disk capacity, so 3TB on the 12TB vs 5TB on the 20TB drives?)

5) Are there thoughts on how best to manage this? 99% of the data is tied to a Plex server, which is going to add a challenge. I am thinking of starting with just moving all my movies over, as the content is (fairly) static, then moving historical shows that won't update, and finally everything else. This will presumably mean Plex can't see some of the TV show data, but I am hoping it will minimize issues with data access.

6) Does anyone know how Plex will manage this? I have a ton of movies and TV shows and don't want to completely screw up Plex or have it think that old data is being deleted and then re-added in the new location (basically, I want to avoid churn). But I don't know the best way to manage this process (again, if it's even worthwhile).

I'd be interested in any thoughts anyone has on the above. My Unraid server has been pretty static and stable for a long time and has run great. I like the idea of setting myself up better for the future, but I don't want to screw things up (especially if it's not even worth doing at all). Thanks.
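The shuffling-data-between-disks chore mentioned above is usually a copy-then-delete between disk mounts. A minimal stand-in demo, with /tmp paths substituting for the real /mnt/disk1 and /mnt/disk5 mounts and an invented folder name:

```shell
# Stand-in demo of moving a folder from one array disk to another.
# /tmp paths and "SomeMovie" are placeholders, not real server paths.
root=/tmp/diskmove_demo
mkdir -p "$root/disk1/Movies/SomeMovie" "$root/disk5/Movies"
echo x > "$root/disk1/Movies/SomeMovie/file.mkv"
# Copy with attributes preserved, then delete the source only if the copy succeeded:
cp -a "$root/disk1/Movies/SomeMovie" "$root/disk5/Movies/" \
  && rm -r "$root/disk1/Movies/SomeMovie"
du -sh "$root"/disk*/Movies   # compare how full each "disk" ends up
```

On the real array, `df -h /mnt/disk*` before and after a move is a quick way to see how evenly the allocator is actually filling the drives.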
  7. Well what do you know... that fixed it. Thank you for the suggestion. I'm glad it was an easy fix.
  8. Okay... I enabled Mover logging and ran the mover. I'm assuming the logs still get captured in the diagnostics, so I have attached the new one. cydstorage-diagnostics-20210408-2329.zip
  9. I noticed a couple of weeks ago that my cache drive was getting full, and then realized that my mover isn't doing anything. I'm guessing it's been about a month, which (roughly) coincides with installing 6.9.1, but I have no idea if the two are related or just grouped together in my memory. Regardless, the mover has stopped working, and I have no idea why. I have TV & Movies set to yes:cache, but I have to manually move them right now, which is annoying. If anyone can review my logs and tell me why, I would definitely appreciate it. cydstorage-diagnostics-20210408-2329.zip
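With mover logging enabled, mover activity ends up in the syslog captured by the diagnostics. A sketch of the kind of filtering that helps when reading it; the log lines below are fabricated for illustration, not taken from the attached diagnostics:

```shell
# Fabricated sample log; the real entries live in the syslog inside the diagnostics zip.
log=/tmp/mover_demo_syslog
printf '%s\n' \
  'Apr  8 23:10:01 Tower root: mover: started' \
  'Apr  8 23:10:02 Tower kernel: eth0: link up' \
  'Apr  8 23:12:40 Tower root: mover: finished' > "$log"
grep -i 'mover' "$log"   # keep only mover-related lines
```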
  10. I'd agree with that. I'm still struggling with this as well, as noted in the other thread Energen linked. I thought I had it sorted out by copying most of the data and then migrating the last bit while Plex was down, but once copied, Plex wouldn't start. I'm currently compressing the Plex data into a tar file and trying to copy it over that way. It looks like I will need around 2 hours to tar the data, but it's not quite done, so I don't know what the rest of the process will look like. I don't know what NVMe drive you have, but unless it's a high-end SLC or maybe MLC drive, it's likely going to be a challenge if you need to move a lot of Plex data. If it's a smaller library it may not be horrible, or, as suggested, you can just rebuild it. At worst, you can do a test run: use cp or something to copy to the NVMe while Plex is running, see how long the overall process takes, and then plan your production move based on those results.
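The tar-then-copy approach described above can be sketched as below; the /tmp paths and file are stand-ins for the real Plex appdata location, which holds vastly more small files:

```shell
# Demo of pack-then-unpack for a metadata tree full of small files.
base=/tmp/plex_tar_demo
mkdir -p "$base/src/Metadata/Movies" "$base/dst"
echo poster > "$base/src/Metadata/Movies/a.jpg"
tar -C "$base/src" -czf "$base/meta.tar.gz" Metadata   # one archive instead of millions of tiny copies
tar -C "$base/dst" -xzf "$base/meta.tar.gz"            # unpack at the destination; structure comes back intact
```

Moving one large archive avoids the per-file overhead that makes copying huge metadata trees so slow.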
  11. Lol... no chance... I probably haven't seen more than 20-25%.. though my father-in-law lives with us and is retired and has likely watched a pretty good chunk. But I do provide media for a number of friends and family and Plex is awesome for that.. so much better than filling external HDDs which I used to do for everyone... that got really annoying quickly.
  12. I had been thinking about that, but the NVMe drive *should* be much faster when people are browsing libraries, and ideally I'd rather have it stored on the cache drive vs unassigned devices. A few weeks ago I was doing something with the array - I think it was a drive replacement - and it looked like unassigned devices didn't come up until the array was back to normal... not sure if that was normal, but Plex was down until the unassigned device was visible. I figured that having it all on the cache drive should eliminate this risk/issue - though again, I'm not sure how normal that behavior was. I had moved Plex to my SSD from the WD Black cache drive as it was much faster, but I figured one of the benefits of the NVMe is that it should be so much faster than both... and it reduces the complexity of the environment to keep it all "in house". I tried the cp -ar command and it took 25 mins... so I am wondering if it skips existing files automatically (since 95-98% would have been on the NVMe already), which would be a very reasonable timeframe for Plex to be down during the switch. And yes, my Plex metadata folder is huge... over 4700 movies and 750 TV shows with over 30,000 episodes total. So... it's a ton of metadata.
  13. Thanks. One additional question... if I am looking at doing this in 2 passes as mentioned where I pre-stage as much data as possible prior to turning Plex off for the final pass... will this command skip files that are already copied and haven't been updated? Or will I need additional switches on the second pass to skip identical files? I sort of remember something like this in Windows with archive bits being set / unset, but not sure how this works in the Linux world.
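For the two-pass approach asked about above: GNU cp's -u (--update) flag skips files whose destination copy is at least as new as the source, which is the closest cp equivalent to the Windows archive-bit trick. A small demo with throwaway /tmp paths:

```shell
d=/tmp/cp_update_demo
mkdir -p "$d/src" "$d/dst"
echo v1 > "$d/src/f"
cp -au "$d/src/." "$d/dst/"        # pass 1: f is copied
echo keep > "$d/dst/f"             # destination copy changes (and is now newer)
touch -d '2000-01-01' "$d/src/f"   # make the source clearly older
cp -au "$d/src/." "$d/dst/"        # pass 2: f is skipped, destination content kept
```

rsync -a does a similar skip (by size and modification time) by default and is the more common tool for staged migrations like this.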
  14. Thanks, that's good to know. I like MC as it's easy, but will try the command line approach. I know the 'r' is for recursive, but what does the 'a' do? I see it's for archive, but I am not sure I understand what that does in this context
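On the question above: in GNU cp, -a is shorthand for -dR --preserve=all, so it already recurses (the extra r is redundant) and additionally preserves mode, timestamps, ownership where permitted, and symlinks, which matters when moving appdata. A quick demonstration with throwaway /tmp paths:

```shell
d=/tmp/cp_archive_demo
mkdir -p "$d/src"
echo data > "$d/src/f"
chmod 600 "$d/src/f"
touch -d '2020-06-01 12:00' "$d/src/f"
cp -a "$d/src" "$d/copy"           # mode and mtime are carried over
stat -c '%a %y' "$d/copy/f"        # shows the 600 mode and the 2020 timestamp
```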
  15. I was using MC to do the copy from /mnt/disks/CachePool to /mnt/cache