polishprocessors

Community Answers

  1. Ok, one last question in the long list of issues I've had migrating my array from one enclosure to another, having a drive fail along the way, and accidentally overwriting some of my data... I finally have my array running with the replacement drive. I didn't explicitly replace the failing drive with the new one because, after the previous failures, I was too nervous about picking the wrong disk during setup, so I now have a full array that still includes a semi-failed drive which won't accept writes at anything besides a snail's pace. So my question: that drive is in slot 3 of the array and I have 6 drives in total. To remove it fully from the array I just need to create a New Config. If I keep all the other drives in the same positions, though, can I simply deselect the drive in slot 3, leave that slot empty, and have the array start fine? I just don't want to create more issues by having a middle slot empty... Sorry if I'm overthinking this, I just want to make sure I don't cause more problems again (a quick check that the drive is actually empty first is sketched below this list). unraid-diagnostics-20240421-1135.zip
  2. Right, that would have been useful, thanks! Fortunately I managed to realise my mistake before I'd done anything catastrophic, so I only need to sit through a 24h rebuild instead of having to rely on backups for anything...
  3. I did, and they did; the rebuild appears to be going fine.
  4. Ok, I found this, which I believe is the correct procedure: https://docs.unraid.net/unraid-os/manual/storage-management/#rebuilding-a-drive-onto-itself It looks like that 10TB drive is DOA, and because of my mix-up I'll need to rebuild the 14TB drive first and then look into replacing the 10TB as well.
  5. Ok, so I definitely clicked 'clear disk' before I realized that was actually a disconnected disk. So now I'm sitting in emulation mode, but I want to be VERY careful not to lose any data... Should I re-attach this disk to the array? If so, will that let the data rebuild? When I booted it started in missing/disk-emulated mode, so I know the data's still there (virtually, if not physically), but I want to make sure I don't overwrite it now... Edit: to clarify, it seems the new 10TB drive that arrived is DOA or just not registering for some reason, but in the process of installing it I swapped my cables around. This 14TB drive is shucked and therefore needs a specific power cable, so it didn't come up fully, and then, thinking it was the new 10TB drive, I accidentally cleared it.
  6. Hey all! I recently upgraded my Unraid server with a SAS card and a new enclosure. Unfortunately I lost a drive in the process. No big deal, I just moved all the data off that (emulated) drive and bought a new one. Now I'm trying to install the new drive and, since it's secondhand, I wanted to do a full preclear. However, despite it being a 10TB HGST Ultrastar He10, the Unassigned Devices plugin is showing one of my WD 14TB drives (with the same serial) there instead. The sd mount point is different, at least, but I don't dare try to clear the drive until I'm positive Unraid knows what it's looking at (a command-line way to double-check the serial is sketched below this list). I'm going to try unplugging it and plugging it into a different SAS port to see if that fixes things, but this is weird, right? unraid-diagnostics-20240419-1205.zip
  7. Can confirm, there is clearly something hokey with that drive. Even reads (after I copied some files over at 0.2MB/s using unbalanced and then wanted to move them back) were sometimes up to 50MB/s, but sometimes more like 0.2MB/s as well. I was eventually able to copy all the files off that drive, excluded it from the shares on the array, and have a replacement drive on the way. In the meantime I added another (smaller) drive, and that wrote zeros, formatted, and joined the array with no issues, so yes, I'm pretty sure I just got extremely unlucky and had two drives fail at once. Neither was a hard failure, just the beginnings of a slow one, so I'm not out any data, but it does make me wonder if dual parity is warranted...
  8. Sucks, but yeah, looks like it might be... Had another drive with reallocated sectors I took out of the array, though, so if I'm down 18TB in a week that's gonna suck...
  9. Ok, I'll add one more thing for now, but I'll try to stop clogging this up until I get more info or someone has better ideas on what to do. All data was already moved off this drive, but I tried moving a sample of data around between the working disks and this disk3. disk1 and disk2 are fine: a copy from disk1 > disk2 runs at 50MB/s. disk1 > disk3, however, shows 0.2MB/s in unbalanced, while the Unraid dashboard shows 0MB/s with occasional bursts of 200K-1MB/s. Reading seems to work fine; it's just writing to disk3 that appears not to work (a simple direct-write speed test is sketched below this list).
  10. Hmm... perhaps of some note, I turned on alerting for SMART command timeouts and immediately got an alert for attribute 188 (Command timeout): flag 0x0032, value 100, worst 099, threshold 000, type Old age, updated Always, failed Never, raw value 4 4 5 (the smartctl sketch below this list shows how to pull this attribute directly).
  11. I did, and it all looks good... Really mysterious, these issues. I'm nervous about adding disk3 back into the shares but might have to at this point...
  12. On restarting the array, disk1 and disk2 come up immediately, but disk3 takes ages to mount. No errors that I can see; it just takes 3+ minutes to mount where disk1/2/4 take fractions of a second. Attaching another diagnostics file... unraid-diagnostics-20240412-1318.zip
  13. Ok, ran the filesystem check (xfs_repair; the commands are sketched below) and it came back with this:
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 4
              - agno = 5
              - agno = 6
              - agno = 7
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 6
              - agno = 4
              - agno = 7
              - agno = 5
      Phase 5 - rebuild AG headers and trees...
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      done
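
A quick check for item 1 above: before doing the New Config without the slot-3 drive, it's worth confirming nothing is left on it. This is a minimal, read-only sketch; it assumes the drive is still mounted at Unraid's standard /mnt/disk3 path.

    # total space still used on disk3 (should be essentially zero if it's empty)
    du -sh /mnt/disk3
    # list any remaining files, a handful at a time
    find /mnt/disk3 -type f | head -n 20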
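
A way to double-check the drive identity from item 6 before preclearing anything: compare what the kernel reports against the serial printed on the drive label. This is a sketch only; /dev/sdX is a placeholder for whichever device the plugin is showing.

    # model and serial for every block device the kernel currently sees
    lsblk -o NAME,SIZE,MODEL,SERIAL
    # detailed identity for one device (replace sdX with the device in question)
    smartctl -i /dev/sdX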
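
The direct-write speed test mentioned in item 9: writing straight to the disk share with the page cache bypassed gives a throughput number that's hard to argue with. This is a sketch under the assumption that /mnt/disk3 is the slow disk; speedtest.bin is just a throwaway filename, remove it afterwards.

    # write 1 GiB directly to disk3, bypassing the cache, and report throughput
    dd if=/dev/zero of=/mnt/disk3/speedtest.bin bs=1M count=1024 oflag=direct status=progress
    # clean up the test file
    rm /mnt/disk3/speedtest.bin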
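
To pull the attribute from item 10 directly rather than waiting for alerts, smartctl can dump the SMART table; again /dev/sdX is a placeholder for the disk3 device.

    # full SMART attribute table; attribute 188 is Command_Timeout
    smartctl -A /dev/sdX
    # or just that one row
    smartctl -A /dev/sdX | grep -i command_timeout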
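
For reference on item 13, output like the above is typically produced with the array started in maintenance mode and xfs_repair run against the slot's md device, so parity stays in sync. The device name here is an assumption: newer Unraid releases expose disk3 as /dev/md3p1, older ones as /dev/md3.

    # dry run first: report problems without changing anything
    xfs_repair -n /dev/md3p1
    # if that looks reasonable, run the actual repair
    xfs_repair /dev/md3p1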