banterer · Members · 114 posts
Everything posted by banterer

  1. The 14TB has a lot of other backups on it, some of which have been transferred to the array and some haven't, so I'd rather keep that aside for now. It was my intention to eventually use it for parity, so that I can put whichever size disks I like in the array. I have a 3TB WD Red arriving tomorrow, so it's kind of sounding like maybe ddrescue from a manually mounted disk4 -> new disk, then replace that in the array and try to start it?
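A minimal sketch of that cloning step, assuming the old disk4 shows up as /dev/sdX and the new disk as /dev/sdY (hypothetical device names — confirm with lsblk before running, since the direction matters), and assuming GNU ddrescue is available (it is not part of a stock Unraid install, so it may be easier to run from another Linux machine with both disks attached):

```
# Hypothetical devices: /dev/sdX = old (failing) disk4, /dev/sdY = new disk.
# ddrescue copies the readable sectors first, then retries the bad areas;
# the map file lets the copy resume if it is interrupted.
ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue-disk4.map

# The map file records which regions could not be recovered.
cat /boot/ddrescue-disk4.map
```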
  2. Ok, just ordering two more disks now. WD Red. Are the Plus or Pro worth going for?
  3. Ok, so first step get another disk. I guess it can be bigger than 3TB, just at least 3TB, right? Can you list out the steps I should take?
  4. Hmmm, well one is backups, which is only important if needed. The other ones are CCTV (not important) and media (don't really want to lose, but not the end of the world). How can I force disk4 to do something?
  5. It was one of the older ones. But I don't really know which data was on it, as the folders are all split. Is that something I can find out somehow?
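For reference, a minimal way to see which shares and folders physically live on a given array disk, assuming the array can be started with the disk present or emulated — each data disk's contents appear under /mnt/diskN:

```
# Top-level shares with files physically on disk4
# (this works for an emulated disk too, once the array is started).
ls /mnt/disk4

# A slightly deeper view, two directory levels down.
find /mnt/disk4 -maxdepth 2 -type d | sort
```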
  6. Update - I've connected disk 3 to my USB->SATA adapter, and it's not showing up on my Mac as a device (in Disk Utility or /dev), so I'm guessing that's fried?
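A quick way to check from the Mac's terminal whether the adapter or drive is being detected at all (a minimal sketch; device numbering will vary):

```
# List every disk macOS currently sees.
diskutil list

# Raw device nodes; an attached drive normally shows up as /dev/diskN.
ls /dev/disk*

# Recent kernel/USB messages can show whether the adapter enumerates
# even if the drive itself never spins up.
log show --last 5m | grep -iE 'usb|disk'
```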
  7. Yeah this is where I say 'no', and you tell me I'm stupid, right? The truth is a lot more complicated than that. E.g. right now I'm supposed to be doing stuff to something on the cloud, that this is the offsite backup for. I can't do the cloud stuff, because I can't risk it without a backup. And a dozen other complex things that would take too long to explain. What would you do, out of the two options (force accept disk4, and new config)?
  8. I guess if we can at least get disk4 up and running, my data should be safe? What should I do with that, and how can I at least mount my cache so I can access my dockers?
  9. It's headless, in the garage. I have a USB->SATA adapter I can use to try hooking it up to my Mac?
  10. Also, I need to urgently access my docker installs but my array won't start like this. How can I mount just the cache SSD, and 'point' docker at it?
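A minimal sketch of mounting the cache SSD by hand just to reach the files, assuming an XFS cache on /dev/nvme0n1p1 (hypothetical device name — check with lsblk) and the usual default locations for appdata and the docker image; this is only for getting at the data, not a supported way to run Docker outside the array:

```
# Identify the cache partition and its filesystem.
lsblk -f

# Mount it read-only at a temporary mount point.
mkdir -p /mnt/cachetemp
mount -o ro /dev/nvme0n1p1 /mnt/cachetemp

# appdata and the docker image normally live here on a default setup.
ls /mnt/cachetemp/appdata /mnt/cachetemp/system/docker
```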
  11. So, checked all connections, moved the 2x problematic drives (3 and 4) to other bays vacated by removing unused drives, rebooted, (gui fine now), ran diags, attached. For info - all drives in the array are on a hotswap backplane, with 8 x SATA ports connected to the SAS card by 2x SAS->SATA 4 way splitters. I chose the card after a long conversation (well several) on this forum, as others had had success with them. tower-diagnostics-20230215-1623.zip
  12. tower-diagnostics-20230215-0203.zip
  13. I have read that link. I don't see instructions for doing this from the terminal. Maybe it's late, maybe it's my fault. Your curt replies seem to suggest so.
  14. The link, in the terminal? Are you trying to be funny?
  15. This is all I can get from the GUI. I still have access to the terminal.
  16. ...update: now I've lost access to the GUI. Really don't know what's going on here. I can still access the terminal - how can I cleanly shut down and reboot from there? I've tried the instructions here, but even the first command `/root/samba stop` isn't recognised.
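For a clean reboot from the Unraid terminal, the usual route (hedged — exact command availability depends on the Unraid release) is the powerdown helper, with the standard Linux shutdown as a fallback:

```
# Unraid's helper script: stops the array and services, then reboots.
powerdown -r

# Generic Linux fallback if powerdown isn't present.
shutdown -r now
```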
  17. Ok, so parity is currently ok, but I have one disk unmountable, one disabled and 'emulated'. And 'stop' is disabled so I can't stop and start in maintenance mode. Please advise??
  18. As in 'unmountable partition'. nvme0 & nvme1 were originally a BTRFS pool. When that went bad, I switched to XFS for cache instead, on one of them. That then became unmountable. So I've reformatted it (having lost my 'live' appdata etc etc), to start again. Don't know how long it will last this time! And that's just the (PCIe-mounted) SSDs!
  19. tower-diagnostics-20230201-2012.zip Diags attached. Would parity not let me restore the missing disk, if it was not recoverable, though? I don't really know what was on that disk, although presumably a number of my files! I'll check all the cables again and look at how they're routed etc... but I've lost (due to same fault) 2 x NVME drives as well, and they are both on PCIe cards, so no cables involved!
  20. Ok, so parity is rebuilding; meanwhile: 'Unmountable: Unsupported partition layout' on *another* disk. Getting fed up with this. 1. How do I know what I have lost, given I have no parity disk, and one of the disks (another one) won't mount? 2. What's going on? This chassis worked fine before Unraid (please don't flame me, I'm just telling it like it is)... since Unraid, my disks have been dropping like flies. NVMe, HDD, one by one they are all going!
  21. Connections checked, SMART report and full diags attached. Again, it says passed with no errors. Took a long time for the extended test. I think I did it twice though! tower-diagnostics-20230129-2145.zip
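For reference, the extended SMART test and full report can also be run from the terminal with smartctl, assuming the drive is /dev/sdX (hypothetical device name):

```
# Start the long (extended) self-test; it runs in the drive's background.
smartctl -t long /dev/sdX

# Check self-test progress / estimated time remaining.
smartctl -c /dev/sdX

# Full report, including the self-test log and the ATA error log.
smartctl -a /dev/sdX
```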
  22. tower-diagnostics-20230128-2131.zip
  23. Ok so this is odd. My parity drive is disabled. Tried running a smart test, and it said completed without error. But when I look in the log I see the following. What's going on here?

      ATA Error Count: 165 (device log contains only the most recent five errors)
          CR = Command Register [HEX]
          FR = Features Register [HEX]
          SC = Sector Count Register [HEX]
          SN = Sector Number Register [HEX]
          CL = Cylinder Low Register [HEX]
          CH = Cylinder High Register [HEX]
          DH = Device/Head Register [HEX]
          DC = Device Command Register [HEX]
          ER = Error register [HEX]
          ST = Status register [HEX]
      Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss
      where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec.
      It "wraps" after 49.710 days.

      Error 165 occurred at disk power-on lifetime: 62237 hours (2593 days + 5 hours)
        When the command that caused the error occurred, the device was active or idle.
        After command completion occurred, registers were:
        ER ST SC SN CL CH DH
        -- -- -- -- -- -- --
        40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455
        Commands leading to the command that caused the error were:
        CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
        -- -- -- -- -- -- -- --  ----------------  --------------------
        60 00 00 ff ff ff 4f 00  16d+13:14:37.016  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00  16d+13:14:37.016  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00  16d+13:14:37.016  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00  16d+13:14:37.015  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00  16d+13:14:37.015  READ FPDMA QUEUED

      Error 164 occurred at disk power-on lifetime: 61901 hours (2579 days + 5 hours)
        When the command that caused the error occurred, the device was active or idle.
        After command completion occurred, registers were:
        ER ST SC SN CL CH DH
        -- -- -- -- -- -- --
        40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455
        Commands leading to the command that caused the error were:
        CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
        -- -- -- -- -- -- -- --  ----------------  --------------------
        60 00 00 ff ff ff 4f 00   2d+13:31:25.905  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00   2d+13:31:25.905  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00   2d+13:31:25.905  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00   2d+13:31:25.905  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00   2d+13:31:25.905  READ FPDMA QUEUED

      Error 163 occurred at disk power-on lifetime: 61847 hours (2576 days + 23 hours)
        When the command that caused the error occurred, the device was active or idle.
        After command completion occurred, registers were:
        ER ST SC SN CL CH DH
        -- -- -- -- -- -- --
        40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455
        Commands leading to the command that caused the error were:
        CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
        -- -- -- -- -- -- -- --  ----------------  --------------------
        60 00 00 ff ff ff 4f 00      06:41:38.399  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00      06:41:38.396  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00      06:41:38.393  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00      06:41:38.389  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00      06:41:38.387  READ FPDMA QUEUED

      Error 162 occurred at disk power-on lifetime: 61844 hours (2576 days + 20 hours)
        When the command that caused the error occurred, the device was active or idle.
        After command completion occurred, registers were:
        ER ST SC SN CL CH DH
        -- -- -- -- -- -- --
        40 51 00 ff ff ff 0f  Error: WP at LBA = 0x0fffffff = 268435455
        Commands leading to the command that caused the error were:
        CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
        -- -- -- -- -- -- -- --  ----------------  --------------------
        61 00 08 ff ff ff 4f 00      03:58:29.699  WRITE FPDMA QUEUED
        60 00 08 ff ff ff 4f 00      03:58:29.698  READ FPDMA QUEUED
        60 00 40 ff ff ff 4f 00      03:58:29.698  READ FPDMA QUEUED
        60 00 40 ff ff ff 4f 00      03:58:29.698  READ FPDMA QUEUED
        60 00 08 ff ff ff 4f 00      03:58:29.698  READ FPDMA QUEUED

      Error 161 occurred at disk power-on lifetime: 61844 hours (2576 days + 20 hours)
        When the command that caused the error occurred, the device was active or idle.
        After command completion occurred, registers were:
        ER ST SC SN CL CH DH
        -- -- -- -- -- -- --
        40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455
        Commands leading to the command that caused the error were:
        CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
        -- -- -- -- -- -- -- --  ----------------  --------------------
        60 00 00 ff ff ff 4f 00      03:58:18.632  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00      03:58:18.628  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00      03:58:18.625  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00      03:58:18.622  READ FPDMA QUEUED
        60 00 00 ff ff ff 4f 00      03:58:18.619  READ FPDMA QUEUED
  24. The disk space (without the working cache) seems to be limited to ~15GB - thinking logically, could this be because it's saving data into the docker image? I changed the appdata share to 'cache: no', and the available space reported to Deluge has now shot up to the remaining space on my array.
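A quick check for that theory (a sketch, assuming the default Unraid layout where docker.img is loop-mounted at /var/lib/docker): compare the free space inside the docker image with the free space on the array and cache. If a container path isn't mapped out to /mnt/user, the app only sees the space left inside docker.img.

```
# Free space inside the docker image (loop-mounted on a default Unraid setup).
df -h /var/lib/docker

# Free space on the array (user shares) and the cache for comparison.
df -h /mnt/user /mnt/cache
```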