extrobe

Members
  • Posts

    145
About extrobe

  • Birthday May 27

  • Gender
    Male

extrobe's Achievements

Apprentice (3/14)

12 Reputation

  1. Thanks, I’ll check that option out! Yes, it’s really so I can repurpose one of the slots (or maybe 2, by also using the onboard SATA). I recently moved things around to add a GPU, so things are a little cramped now, with one of the HBA cards pretty much rubbing against the GPU, which can’t be good for either - especially as these cards get pretty toasty!
  2. I'm looking to upgrade a couple of my 8-port controllers (Dell H200s) to 16-port controllers, to free up some PCIe slots. The 9201-16i seems to be the recommendation - but from Australia, the only place I can find to buy them is China/HK, and I've seen advice to avoid these as they can be knock-offs. Does anyone have any recent experience with a seller in CN/HK they can share? (Or know of another place to look from AU?)
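     One sanity check people often suggest for suspect LSI cards (a sketch, not a guarantee - good counterfeits can fake this too): read the board and firmware details with LSI's sas2flash utility and compare them against Broadcom/LSI's published values for a genuine 9201-16i. The controller number 0 below is an assumption.

         # list every LSI SAS2 controller the utility can see
         sas2flash -listall

         # dump board name, assembly number, firmware and BIOS versions
         # for controller 0; compare with the official 9201-16i values
         sas2flash -c 0 -list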
  3. Yes, just changing slots. I'm trying to get it to read off the original disk (the disk itself should still have been physically OK; it was just approaching EoL), but I'm struggling to get it included in the pool. I'm working my way through some of the BTRFS troubleshooting steps, but it's starting to look like a lost cause.
     EDIT: Looks like it's the other Crucial disk which is not showing up - but there was nothing to suggest it was an issue beforehand. In fact, I checked the SMART data before I started the replacement, as I wanted to see how much life that one had left. When I try to mount it, it says the special device doesn't exist - are there any diagnostics I can run to work out why that might be / confirm it's damaged? It looks like the data on the original disk has also already gone - when I try to mount it, it says wrong FS type.
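     A minimal first-pass checklist for the 'special device does not exist' symptom - the device name /dev/sdi1 is an assumption, standing in for wherever the missing disk would normally appear:

         # does the kernel see the disk at all? (if not, it's cabling/port/dead drive)
         lsblk

         # does the partition still carry a btrfs filesystem signature?
         blkid /dev/sdi1

         # which member devices does btrfs itself think the pool has?
         btrfs filesystem show

     If lsblk doesn't show the disk at all, no filesystem-level tool will help; the 'special device' error just means the device node isn't there.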
  4. I did swap 2 of the disks around (the 2x Samsungs) - that was because I thought the 'unmountable filesystem' message was specific to that disk, so I swapped the bays over to check it wasn't a connection issue. But I checked that the assignments still matched before moving on.
  5. demeter-diagnostics-20200906-1810.zip - diagnostics attached. I did follow that link for having no spare port, but it went back to the single-disk procedure, and I wasn't sure if that was the right one to follow - so I figured that, as the multi-disk procedure was just to select a new disk (seemingly the same as for a standard disk), I could just hot-swap them instead 😕 Edit: For reference, Crucial_CT500MX200 = old cache 4, Crucial_CT500MX500 = new cache 4.
  6. Ok... I did the following:
     - Stopped the array
     - Started the array in maintenance mode
     - Ran:
           mkdir /x
           mount -o degraded,usebackuproot,ro /dev/sdh1 /x
     - Realised I should probably run that not in maintenance mode
     - Stopped the array
     - Started the array (normal), and the cache pool is seemingly back online
     Not sure if I've lost data or not, but I can't get Docker to start ("Docker Service failed to start").
     EDIT: Did a restart, and it's back to being unmountable.
     EDIT: Repeating the previous steps, this time copying the data to the array using Midnight Commander - but I'm getting a lot of copy errors (it keeps saying [stalled]), and I'm pretty sure some directories are missing. Would putting the old disk in again and using the above command be a sensible next step?
     EDIT: Adding the old disk back just gives the warning 'all data will be overwritten when you start the array', so it doesn't feel like this will work.
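     A sketch of the salvage step being attempted here, assuming /dev/sdh1 is the surviving pool member as above and /mnt/disk1/rescue is a hypothetical destination on the array - rsync skips unreadable files and keeps going, which tends to behave better than an interactive copy on a damaged filesystem:

         # mount the damaged pool read-only, tolerating a missing member
         mkdir -p /x
         mount -o degraded,usebackuproot,ro /dev/sdh1 /x

         # copy whatever is readable, logging errors instead of stalling on them
         rsync -av --ignore-errors /x/ /mnt/disk1/rescue/

         umount /x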
  7. Although, perhaps I'm just being a dunce - is the 'Unmountable' message just reflecting that the pool as a whole can't be mounted? It's prompting me to format the 'lead' disk in the cache - yet it was cache 4 which I swapped out.
  8. I have a 4x SSD cache pool on BTRFS (3x 480GB, 1x 500GB). A few days ago, one of the drives (the 500GB) was flagged for replacement. The new one arrived, and I read through the FAQ post. The only bit where I deviated was that I didn't have a spare port, so instead I did the following:
     - Stopped the array
     - Pulled out the faulty disk caddy
     - Replaced the disk with the new one
     - Selected the new disk in the pool (which is pretty much the same process I use on the main disks)
     But... whilst the offending disk shows as a 'new device', one of the other 3 disks is now showing as Unmountable: No File System. I've tried stopping the array again and removing/re-inserting the disk. I've also tried putting the old disk back in, but I don't seem to be able to progress from here. Is this recoverable? I have partial backups, so not all is lost, but annoyingly I think my Plex instances were on my exclude list, and they're probably my biggest 'loss'.
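     For context, roughly what this replacement looks like at the raw BTRFS level, outside the unRAID GUI - the device names /dev/sdf1 (failing member) and /dev/sdg (new disk) are assumptions, and the pool is assumed mounted at /mnt/cache:

         # confirm which member devices the pool currently has
         btrfs filesystem show /mnt/cache

         # rebuild directly from the old member onto the new disk, while mounted
         btrfs replace start /dev/sdf1 /dev/sdg /mnt/cache
         btrfs replace status /mnt/cache

     The GUI procedure is meant to drive an equivalent operation; the danger with a straight hot-swap is that the pool is never told to migrate data before the old member disappears.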
  9. I'd love to see partial-parity scheduling - e.g. check 25% each week, meaning a full scan completes every 4 weeks. Currently a full scan takes nearly 25 hours, so I don't tend to run them unless I've had a dirty restart or something. Being able to phase it would allow users to keep parity in check.
  10. Bummer - that does make it more challenging. Perhaps I'll add it to my rainy-day to-do list!
  11. It's taken quite some time, but I have finally finished applying disk encryption to all 20 drives using the 'shuffle' method. But I'm not sure what the best approach would be to encrypt the cache drive(s). I have a 4-disk cache configuration (4x 500GB) in BTRFS. I know one of these disks needs replacing soon anyway, so will I be able to:
      - Remove one of the disks from the cache array
      - Install the new disk
      - Format it with encryption
      - Add it to the cache array
      - Repeat for each subsequent disk
      or is there a better approach?
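      Roughly what one round of that rotation looks like at the BTRFS/LUKS level, assuming the pool is mounted at /mnt/cache, the new disk's partition is /dev/sde1 and the outgoing member is /dev/sdd1 (all hypothetical - unRAID's GUI would normally drive this):

          # set up an encrypted container on the new disk and open it
          cryptsetup luksFormat /dev/sde1
          cryptsetup luksOpen /dev/sde1 cache_new

          # grow the pool onto the encrypted device, then drop the old member;
          # btrfs migrates its data off as part of the delete
          btrfs device add /dev/mapper/cache_new /mnt/cache
          btrfs device delete /dev/sdd1 /mnt/cache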
  12. Thanks BRiT - thought it was worth asking. I'm replacing both anyway, and I'll later stick one of the SMR drives back into the array to see what happens!
  13. Over the last few weeks, I've 'lost' one of my parity drives on 3 occasions (not always the same drive) - I get a 'Parity1 Disabled' error. On each occasion I ran extended SMART reports and couldn't find any issues. I've swapped out power and data cables etc., but never found the cause, and each time I ended up re-formatting and re-adding the parity drive. The 2 parity drives are connected to different controller cards (I have 3x H200s) and to different power outputs from the PSU.
      I do have 2 new WD Red drives on order to replace both parity drives, which are currently Seagate Archive drives (ST8000AS0002 - which I understand are SMR drives?). But it's bugging me why the parity drives keep getting kicked out.
      It then occurred to me that on all 3 occasions I had been running unBALANCE to shift data between disks as I go through applying XFS encryption to each of the 18 array disks. Could the combination of shifting data between disks and the less-than-ideal SMR drives being used for parity have something to do with this? Or is it simply that the law of probability means I'm more likely to hit such an issue during this type of activity?
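      For reference, the kind of SMART checks involved - /dev/sdb is an assumed device name for a parity drive:

          # full attribute dump: watch reallocated, pending and UDMA CRC counts
          smartctl -a /dev/sdb

          # run the extended self-test, then read the results once it finishes
          smartctl -t long /dev/sdb
          smartctl -l selftest /dev/sdb

      A rising UDMA CRC error count alongside clean self-tests usually points at cabling or the controller path rather than the disk itself, which would fit the 'disabled but SMART-healthy' pattern described above.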
  14. Thanks @jude, this did the trick for me in 6.7 stable as well
  15. Right, I think I know what I did now. Because I planned to eventually remove both disk 16 and disk 1, I removed disk 1 from the 'included' disks for each share, but also from the Global Share Settings - this seems to be what triggered the data on disk 1 to be treated differently. In theory, fixing it was just a matter of re-adding disk 1, but it was made trickier by half my dockers hanging because the data in the shares they referenced was not there, and I had to force a hard reset. Gah.