polishprocessors

Everything posted by polishprocessors

  1. Ok, one last question in the long list of issues I've had migrating my array from one enclosure to another, having a drive fail along the way, and accidentally overwriting some of my data... I finally have my array running with my replacement drive. After the previous failures I was too nervous during setup to risk picking the wrong drive, so I didn't explicitly replace the failing drive with the new one, and I've now got a full array that still includes a semi-failed drive which won't accept writes at anything besides a snail's pace. So my question is: that drive is in slot 3 of the array and I have 6 drives in total. To remove the drive fully from the array I just need to create a new config. If I choose to keep all drives in the same positions, though, can I simply deselect the drive in slot 3, leave that slot empty, and have the array start fine? I just don't want to create more issues by having a middle slot empty... Sorry if I'm overthinking this; I want to make sure I don't cause yet another problem... unraid-diagnostics-20240421-1135.zip
  2. Right, that would have been useful, thanks! Fortunately I managed to realise my mistake before I'd done anything catastrophic, so I only need to sit through a 24h rebuild instead of having to rely on backups for anything...
  3. I did, and they did; the rebuild appears to be going fine.
  4. Ok, I found this, which I believe is the correct procedure: https://docs.unraid.net/unraid-os/manual/storage-management/#rebuilding-a-drive-onto-itself It just looks like that 10TB drive is DOA and, because of my mess-up, I'll need to rebuild the 14TB drive and then look into replacing the 10TB as well.
  5. Ok, so I definitely clicked 'clear disk' before I realized it was actually a disconnected disk. So now I'm sitting in emulation mode, but I want to be VERY careful not to lose any data... Should I re-attach this disk to the array? If so, will that let the data rebuild? When I booted, it started in missing/disk-emulated mode, so I know the data's still (virtually, if not physically) there, but I want to make sure I don't overwrite it now... Edit: to clarify, it seems the new 10TB drive that arrived is DOA or just not registering for one reason or another. In the process of installing it I swapped my cables around; this 14TB drive is shucked and therefore needs a specific power cable, so it didn't come up fully, and then, thinking it was the new 10TB drive, I accidentally cleared it.
  6. Hey all! I recently upgraded my Unraid server with a SAS card and a new enclosure. Unfortunately, in the process I lost a drive. No big deal: I just moved all the data off that (emulated) drive and bought a new one. Now I'm trying to install the new drive but, as it's secondhand, I wanted to do a full preclear first. However, despite it being a 10TB HGST Ultrastar He10, the Unassigned Devices plugin is showing me one of my WD 14TB drives (with the same serial) there instead. The sdX device is different, at least, but I don't dare start clearing the drive until I'm positive Unraid knows what it's looking at (a quick way to double-check drive identity is sketched after this list). I'm going to try unplugging it and plugging it into a different SAS port to see if that fixes things, but this is weird, right? unraid-diagnostics-20240419-1205.zip
  7. Can confirm, there is clearly something hokey with that drive. Even reads (after I copied some files over at 0.2MB/s using Unbalanced and then wanted to move them back) were sometimes up to 50MB/s, but sometimes more like 0.2MB/s as well. I was eventually able to copy all the files off that drive, excluded it from shares on the array, and have a replacement on the way. In the meantime I added another (smaller) drive; it zeroed, formatted, and joined the array with no issues, so yes, I'm pretty sure I just got extremely unlucky and had two drives fail at once. Neither was a hard failure, just the beginnings of a slow one, so I'm not out any data, but it does make me wonder whether dual parity is warranted...
  8. Sucks, but yeah, looks like it might be... Had another drive with reallocated sectors I took out of the array, though, so if I'm down 18TB in a week that's gonna suck...
  9. Ok, I'll add one more thing for now but will try to stop clogging this up until I get more info or someone has better ideas on what to do. All the data was already moved off this drive, but I tried moving a sample of data around between the working disks and disk3. disk1 and disk2 are fine: a copy from disk1 > disk2 runs at 50MB/s. disk1 > disk3, however, shows 0.2MB/s in Unbalanced, and in the Unraid dashboard it sits at 0MB/s with bursts of 200KB-1MB/s. Reading seems to work fine; it's just writing to disk3 that doesn't (a direct write test that takes the share layer out of the equation is sketched after this list).
  10. Hmm... perhaps of some note: I turned on alerting for SMART command timeouts and immediately got the following alert for attribute 188 (which can also be read on demand, as sketched after this list): 188 Command timeout 0x0032 100 099 000 Old age Always Never 4 4 5
  11. I did, and it all looks good... Really mysterious, these issues. I'm nervous about adding disk3 back into the shares but might have to at this point...
  12. On restarting the array disk1 and disk2 come up immediately, but disk3 takes ages to mount. No errors that I can see, but just takes 3+ minutes to mount where disk1/2/4 took fractions of a second. Attaching another diagnostics file... unraid-diagnostics-20240412-1318.zip
  13. Ok, I ran an xfs check (xfs_repair; the command-line equivalent is sketched after this list) on disk3 and it came back with this:
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 4
              - agno = 5
              - agno = 6
              - agno = 7
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 6
              - agno = 4
              - agno = 7
              - agno = 5
      Phase 5 - rebuild AG headers and trees...
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      done
  14. See attached. Mover finished with no issues with disk3 excluded from shares (just excluded, not explicitly removed from the array). I think I'm going to flip to maintenance mode and check the filesystem on disk3. unraid-diagnostics-20240412-1357.zip
  15. Well fine, but what about the Connection Refused error? Again, the mover is writing to other drives (not disk3) without issues, so I don't expect any problems if the issue is just with disk3... I'm going to let the mover complete now and then take the array down to maintenance mode to run an xfs check on disk3...
  16. Ok, I realise I'm going at this on my own, but on reboot I tried excluding disk3 from the share and re-ran the mover. Got some early errors:
      Apr 12 12:05:07 unRaid shfs: copy_file: /mnt/cache/media/movies/Defiance (2008)/Defiance (2008) [1080p].mp4 /mnt/disk1/media/movies/Defiance (2008)/Defiance (2008) [1080p].mp4.partial (17) File exists
      Apr 12 12:05:07 unRaid move: move_object: /mnt/cache/media/movies/Defiance (2008)/Defiance (2008) [1080p].mp4 Connection refused
      But otherwise it looks to be going just fine. Is it possible that drive is bad despite showing green and passing all self-tests? Parity built just fine with that drive, but I did notice all writes to it were going INCREDIBLY slowly when I was copying manually (1-4MB/s) versus other drives being fine (150MB/s). Those issues only appeared when I was moving files to the drive, not when it was building parity, which leads me to believe there's perhaps some sort of logical problem with the drive rather than a physical one?
  17. I should note: the mover worked fine before the drive removal, shares are set to Cache > Array, and nothing on the config side changed besides the removal of a drive... I'm also now running New Permissions because I moved at least one set of files from /mnt/cache/media/movies > /mnt/disk1/media/movies and, because I did it from the CLI, they came through owned by root (a manual fix for just that folder is sketched after this list). That took the better part of 20 minutes but eventually finished; Unraid still thinks the mover is running even though nothing's happening, so I've no idea what to do besides another reboot...
  18. Hey all! So I had a drive (I think) go bad, so I pulled it from the array after moving all the data off the emulated version. Went into New Config, set up a new config without that drive, and fired off a parity build, which finished after ~24h for 14TB. So far so good. While the build was running I was also downloading things, so my cache drive nearly filled up and I had to pause downloads until parity was finished, because the mover apparently won't run while a parity build is going. Fast forward 24h: I now have an array where the mover seems to freeze at some point while moving files, perhaps only when writing to disk3, but that might be a coincidence. Other than a possible connection to disk3 (which tests out fine) I can see no consistency in when the mover goes wrong, but it keeps stalling, and there seems to be no way to kill it besides restarting Unraid. Does anyone have any idea where to look and/or what might be up? FWIW, generating this diagnostics file took 2+ minutes, but besides files not moving about properly, my array/dockers appear to be functioning fine... unraid-diagnostics-20240412-1108.zip
  19. FWIW I'm seeing 50-100MB/s write speeds on my array (14TB parity, 8-14TB data drives).
  20. I did, I thought, but alas. Oh well, I have backups, so it's just another long disk rebuild (from no data to no data, because that's what it wants to do), followed by restoring my data from backup. I guess there's no way to skip the rebuild, because even if I didn't restore the disk and instead formatted it and added it empty, I'd still need to rebuild parity with the new drive, no?
  21. Unfortunately I wasn't paying a lot of attention after the data restore completed. Because I wanted to add those 2 new drives to my array, I stopped the array and put them in, but the drive that had just been rebuilt was also asking to be added back in, so I put it in the slot where it was before. I'm not sure why it didn't just get re-added to the array, but I'm certain that 1) not paying attention and 2) not doing this one step at a time created issues. I went back and took the 10TB drive out of the array, restarted it and formatted the two new drives, then added the 10TB drive back to the array, and it appeared with 9.9TB free, so I think that data's toast. My problem was probably adding new drives at the same time as another drive was being emulated. It's not a huge deal (I have backups of any important data), but lesson learned: one thing at a time!!
  22. If it helps... unraid-diagnostics-20240406-1604.zip
  23. Additionally, this is showing up below...
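
Regarding the mislabelled drive in post 6: before clearing anything, one way to confirm which physical drive hides behind a /dev/sdX name is to read the model and serial straight from the device and compare them with the label on the drive. This is a minimal sketch from the command line, not the Unassigned Devices workflow, and sdX is a placeholder for whichever device is in question.

```
# List every disk with model, serial, size and transport so the sdX name
# can be matched against the sticker on the physical drive.
lsblk -d -o NAME,MODEL,SERIAL,SIZE,TRAN

# Inspect one candidate device in detail and compare the reported serial
# number before running any clear/preclear against it.
smartctl -i /dev/sdX
```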
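For the disk-to-disk speed comparison in post 9, a direct write test against each data disk takes the user shares and Unbalanced out of the picture and shows whether the slowness is the disk itself. A rough sketch, assuming the disks are mounted at /mnt/disk1 and /mnt/disk3; the test file name is arbitrary and should be deleted afterwards.

```
# Write 1 GiB straight to each disk, bypassing the page cache, and compare
# the throughput dd reports at the end.
dd if=/dev/zero of=/mnt/disk1/speedtest.tmp bs=1M count=1024 oflag=direct status=progress
dd if=/dev/zero of=/mnt/disk3/speedtest.tmp bs=1M count=1024 oflag=direct status=progress

# Clean up the test files.
rm /mnt/disk1/speedtest.tmp /mnt/disk3/speedtest.tmp
```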
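The alert quoted in post 10 is SMART attribute 188 (Command_Timeout). It can be read on demand rather than waiting for the next notification; a minimal sketch, with /dev/sdX standing in for whichever device backs disk3.

```
# Dump the SMART attribute table and pull out the command-timeout row.
smartctl -A /dev/sdX | grep -i command_timeout

# Optionally queue a long self-test and read the result once it completes.
smartctl -t long /dev/sdX
smartctl -l selftest /dev/sdX
```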
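The phase-by-phase output in post 13 is what xfs_repair prints. On Unraid the usual route is the filesystem check in the GUI with the array in maintenance mode, but the command-line equivalent looks roughly like the sketch below. The md device number follows the disk slot, and newer releases want the partition suffix (e.g. /dev/md3p1), so treat the exact device name as an assumption to verify first.

```
# With the array started in maintenance mode, do a read-only check first:
xfs_repair -n /dev/md3

# If problems are reported, run the real repair (no -n). If it refuses
# because of a dirty log, -L zeroes the log at the cost of the most
# recent writes. Using the md device keeps parity in sync.
xfs_repair /dev/md3
```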
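For the root-owned files in post 17, Unraid's New Permissions tool walks whole shares; if only the one manually moved folder is affected, a rough manual equivalent is to reset ownership (and, if needed, modes) on just that path. This assumes the affected path is the one quoted in the post and that the usual Unraid owner nobody:users is wanted.

```
# Re-own only the folder that was moved from the CLI, instead of running
# New Permissions across every share.
chown -R nobody:users /mnt/disk1/media/movies

# Make sure owner and group can read/write; directories also need execute.
chmod -R u+rwX,g+rwX /mnt/disk1/media/movies
```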