NominallySavvyTechPerson

Members · 23 posts

  1. Ok, I think I see. What I should be asking is "can I add a zpool to unraid?", which is already asked in the post I linked to.
  2. Oh, maybe I understand now. Unraid looks at each drive, opens the filesystem there, and merges them together. If I made a zpool out of two of them, I wouldn't have a filesystem on either of the two drives; Unraid would just ask me if I wanted to format them because they don't have any filesystem.
  3. Sorry, that went over my head. I found this post where someone talks about adding a zpool to the unraid array. I'm not trying to add a zpool to the array... wait, am I? I was thinking there was a difference between adding a whole zpool of drives as if it were a single slot in unraid (not supported) vs. what I want to do: fill two slots in unraid with two drives and then, unknown to unraid, make a zpool out of those two drives.
  4. IIUC, if I want snapshots of a share, I can format one array drive as zfs and limit the share to that drive. If I want more space than that, can I make a zpool out of two zfs drives in my unraid array, something like the sketch below? That way I can snapshot the zpool and have a snapshot of the share even though the share is too big to fit on one drive. I wouldn't use RAIDZ; I'd rely on unraid's parity to give some protection. Is that a good idea? Thanks!
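     To make it concrete, here's roughly what I'm picturing (untested; the device names and pool name are placeholders, not from a real system):
     ```
     # Striped pool across the two array drives -- no RAIDZ, so unraid's
     # parity would be the only redundancy.
     zpool create bigshare /dev/sdX1 /dev/sdY1

     # One snapshot then covers the whole share, even though it spans two drives.
     zfs snapshot bigshare@nightly
     ```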
  5. I have a drive that is all zeroes but does not have the signature. Can I use this plugin to read the drive, verify that it is all zeroes, and then write the signature? I don't see that in the options: they all either write zeroes or read the signature, and I have the zeroes but not the signature. I could of course just write all new zeroes, but I imagine reading causes less wear. Something like the check sketched below is what I'm after. Thanks!
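     A read-only sketch of the verify half (/dev/sdX is a placeholder; this says nothing about writing the signature):
     ```
     # /dev/zero never ends, so cmp either reports the first differing byte
     # or hits EOF on the disk. EOF reported on the disk = every byte was zero.
     cmp /dev/sdX /dev/zero
     ```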
  6. I read https://docs.unraid.net/legacy/FAQ/shrink-array/ and followed the link in step 8, "Run the clear an array drive script", which took me to the comment that has a download link for the "clear_array_drive" script. I tried running it but got an error:
     ```
     Found a marked and empty drive to clear: Disk 3 ( /mnt/disk3 )
     + echo -e '* Disk 3 will be unmounted first.'
     * Disk 3 will be unmounted first.
     + echo '* Then zeroes will be written to the entire drive.'
     * Then zeroes will be written to the entire drive.
     + echo '* Parity will be preserved throughout.'
     * Parity will be preserved throughout.
     + echo '* Clearing while updating Parity takes a VERY long time!'
     * Clearing while updating Parity takes a VERY long time!
     + echo '* The progress of the clearing will not be visible until it'\''s done!'
     * The progress of the clearing will not be visible until it's done!
     + echo '* When complete, Disk 3 will be ready for removal from array.'
     * When complete, Disk 3 will be ready for removal from array.
     + echo -e '* Commands to be executed:\n***** \x1b[36;01m umount /mnt/disk3 \x1b[39;49;00m\n***** \x1b[36;01m dd bs=1M if=/dev/zero of=/dev/md3 \x1b[39;49;00m\n'
     * Commands to be executed:
     ***** umount /mnt/disk3
     ***** dd bs=1M if=/dev/zero of=/dev/md3
     + '[' clear_array_drive == /tmp/user.scripts/ ']'
     + echo -n 'Press ! to proceed. Any other key aborts, with no changes made. '
     Press ! to proceed. Any other key aborts, with no changes made.
     + ch=
     + read -n 1 ch
     !+ echo -e -n '\r \r'
     + '[' '!' '!=' '!' ']'
     + logger -tclear_array_drive 'Clear an unRAID array data drive v1.4'
     + echo -e '\rUnmounting Disk 3 ...'
     Unmounting Disk 3 ...
     + logger -tclear_array_drive 'Unmounting Disk 3 (command: umount /mnt/disk3 ) ...'
     + umount /mnt/disk3
     + echo -e 'Clearing Disk 3 ...'
     Clearing Disk 3 ...
     + logger -tclear_array_drive 'Clearing Disk 3 (command: dd bs=1M if=/dev/zero of=/dev/md3 ) ...'
     + dd bs=1M if=/dev/zero of=/dev/md3
     dd: error writing '/dev/md3': No space left on device
     9+0 records in
     8+0 records out
     8388608 bytes (8.4 MB, 8.0 MiB) copied, 0.00256501 s, 3.3 GB/s
     + logger -tclear_array_drive 'Clearing Disk 3 is complete'
     + echo -e '\nA message saying "error writing ... no space left" is expected, NOT an error.\n'
     A message saying "error writing ... no space left" is expected, NOT an error.
     + echo -e 'Unless errors appeared, the drive is now cleared!'
     Unless errors appeared, the drive is now cleared!
     + echo -e 'Because the drive is now unmountable, the array should be stopped,'
     Because the drive is now unmountable, the array should be stopped,
     + echo -e 'and the drive removed (or reformatted).'
     and the drive removed (or reformatted).
     + exit
     ```
     I ran the script with `sh -x`, so there's debug output above. Note that dd wrote only 8 MiB before hitting "No space left on device", and the script uses `/dev/mdX`, not `/dev/mdXp1`. Is that the issue? Are people using this script? Do I have an old version of it? Here's the download link for the script from the comment by RobJ: https://s3-us-west-2.amazonaws.com/upload.forums.unraid.net/live/monthly_2016_09/clear_an_array_drive_zip.80cc67c667b13547f490de09b702bf4b I also searched this topic for "of=/dev/md" and didn't see any references that included the "p1".
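     If the device name is the problem, I'm guessing (untested, and the /dev/md3p1 naming is my assumption about newer unraid releases) that the patched command would be:
     ```
     # same clear, but aimed at the partition device instead of the bare md device
     dd bs=1M if=/dev/zero of=/dev/md3p1 status=progress
     ```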
  7. @Frank1940
     1. A refurbished drive is heavily used but not failed. I think... I don't know. I imagine it would be easier for a data center to rotate all their drives out before EOL rather than wait for them to fail individually, but I don't actually know where they come from. Where do refurbished drives come from?
     2. I imagine the seller checks the SMART report to see that it doesn't have any reallocated sectors. I don't actually know what they do.
     3. The two sellers I see talked about most are GoHardDrive and ServerPartDeals. I think they have a reputation for respecting the warranty; idk if they have any rep around screening and testing.
     4. 5-year warranty. Only for "Reallocated sector count" or "UDMA CRC error count" in the case of SPD; "any issues with the drive" in the case of GoHD.
     The drive that failed badblocks is a drive I bought new that is now out of warranty. I ran badblocks on it because I was curious what would happen.
  8. It's true; I do not understand how hard drives work. It's very confusing. Badblocks makes four passes over the drive, writing a pattern and reading it back on each pass. I have a drive that had no errors on the first pass (writing and reading 0xaa) but failed halfway through the second pass (writing and reading 0x55). That makes me suspicious that a preclear is not enough; see the explicit invocation below.
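     My understanding (per the badblocks man page) is that the default -w run is equivalent to spelling out the four patterns; /dev/sdX is a placeholder:
     ```
     # four write-then-read-back passes: 0xaa, 0x55, 0xff, 0x00
     badblocks -wsv -b 4096 -t 0xaa -t 0x55 -t 0xff -t 0x00 /dev/sdX
     ```
     Mine died on the 0x55 pass, so a single zeroing pass would not have caught it.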
  9. I recently bought some refurbished drives and wanted to run `badblocks -wsv -b 4096 /dev/disk/by-id/whatever-abc`, but I ran it on an array drive by accident. I kicked off badblocks, started a dozen docker services, changed over to the unraid Main tab, and that's when I saw that I had just run badblocks on the wrong drive. I took the array offline, unassigned the drive, started the array without the drive, and ran `xfs_repair` on the emulated drive. In case you're curious, `xfs_repair` seems to have fixed it:
     ```
     bad primary superblock - bad magic number !!!
     attempting to find secondary superblock...
     .found candidate secondary superblock...
     verified secondary superblock...
     writing modified primary superblock
     ```
     Anyway, does anyone have a safer way to run badblocks? Maybe a script that objects if the drive is in the array, like the sketch below? The closest I found in the forums was this post saying that it's not going to happen in UD. Thanks! Back up your data, haha!
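     This is the kind of guard I mean -- just a sketch, not a supported tool. The mount check is an assumption on my part about what's sufficient: unraid mounts array disks via /dev/mdXp1, so a stricter check against the array configuration would be even safer.
     ```
     #!/bin/bash
     # safe_badblocks.sh (hypothetical): refuse to run badblocks on a drive
     # that has anything mounted on it or on one of its partitions.
     dev=$(readlink -f "$1")    # resolve /dev/disk/by-id/... to /dev/sdX
     if [ -z "$dev" ] || [ ! -b "$dev" ]; then
       echo "usage: $0 /dev/disk/by-id/<drive>" >&2
       exit 1
     fi
     # lsblk lists the drive and its partitions; any non-empty MOUNTPOINT means in use
     if lsblk -no MOUNTPOINT "$dev" | grep -q .; then
       echo "refusing: $dev (or a partition on it) is mounted" >&2
       exit 1
     fi
     read -r -p "This DESTROYS all data on $dev -- type YES to continue: " answer
     [ "$answer" = "YES" ] || exit 1
     badblocks -wsv -b 4096 "$dev"
     ```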
  10. There's a lot of stuff in the diagnostics. I haven't used that feature before and I want to review it more thoroughly before I upload it. Thank you for your support! I was really sweating it, but I'm confident now that I have not lost any data. I have an offline copy of my important files, and I'm waiting on parts to add some drives to build a raidz2 pool as a destination for online backups; I'm going to take advantage of snapshots. I'm also running badblocks -wsv on all my drives to build confidence in the drives I have. I added some refurb drives to my array in haste, and now I can take my time rotating those out of the array so that I can do a destructive badblocks run on each. I'm also adding two new WD Red Pros as parity because there was recently a "sale" on the 14TB. I don't want too many refurbs.
  11. Thank you. I added two 8TB drives and rebuilt the array in maintenance mode without issue. Then I did a parity swap to move an 8TB CMR from parity into the array to replace an 8TB SMR array drive, with a new 12TB CMR taking its place in parity. Then I replaced an 8TB SMR parity drive with a new 12TB CMR, and I'm waiting for that to finish. I'll run xfs_repair through the GUI when that's done; thanks for mentioning it, I didn't know about that feature. Sorry for not posting diagnostics. I've seen that in other troubleshooting threads but I didn't think it mattered for my question; I'll try to remember to do it by default in the future. I'm not posting them now because I hadn't downloaded the diagnostics when I first posted, and I have rebooted and changed drives since then. I'm happy to post diagnostics if I still need to. Here's the SMART from the drive that unRAID disabled:

      | # | ATTRIBUTE NAME | FLAG | VALUE | WORST | THRESHOLD | TYPE | UPDATED | FAILED | RAW VALUE |
      | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
      | 1 | Raw read error rate | 0x000f | 068 | 064 | 006 | Pre-fail | Always | Never | 71094922 |
      | 3 | Spin up time | 0x0003 | 092 | 092 | 000 | Pre-fail | Always | Never | 0 |
      | 4 | Start stop count | 0x0032 | 099 | 099 | 020 | Old age | Always | Never | 1331 |
      | 5 | Reallocated sector count | 0x0033 | 100 | 100 | 010 | Pre-fail | Always | Never | 8 |
      | 7 | Seek error rate | 0x000f | 090 | 061 | 045 | Pre-fail | Always | Never | 992855040 |
      | 9 | Power on hours | 0x0032 | 059 | 059 | 000 | Old age | Always | Never | 36679h+06m+49.125s |
      | 10 | Spin retry count | 0x0013 | 100 | 100 | 097 | Pre-fail | Always | Never | 0 |
      | 12 | Power cycle count | 0x0032 | 100 | 100 | 020 | Old age | Always | Never | 449 |
      | 183 | Runtime bad block | 0x0032 | 100 | 100 | 000 | Old age | Always | Never | 0 |
      | 184 | End-to-end error | 0x0032 | 100 | 100 | 099 | Old age | Always | Never | 0 |
      | 187 | Reported uncorrect | 0x0032 | 093 | 093 | 000 | Old age | Always | Never | 7 |
      | 188 | Command timeout | 0x0032 | 100 | 098 | 000 | Old age | Always | Never | 1 1 6 |
      | 189 | High fly writes | 0x003a | 100 | 100 | 000 | Old age | Always | Never | 0 |
      | 190 | Airflow temperature cel | 0x0022 | 064 | 048 | 040 | Old age | Always | Never | 36 (min/max 35/36) |
      | 191 | G-sense error rate | 0x0032 | 100 | 100 | 000 | Old age | Always | Never | 0 |
      | 192 | Power-off retract count | 0x0032 | 100 | 100 | 000 | Old age | Always | Never | 339 |
      | 193 | Load cycle count | 0x0032 | 082 | 082 | 000 | Old age | Always | Never | 37333 |
      | 194 | Temperature celsius | 0x0022 | 036 | 052 | 000 | Old age | Always | Never | 36 (0 20 0 0 0) |
      | 195 | Hardware ECC recovered | 0x001a | 079 | 064 | 000 | Old age | Always | Never | 71094922 |
      | 197 | Current pending sector | 0x0012 | 100 | 100 | 000 | Old age | Always | Never | 24 |
      | 198 | Offline uncorrectable | 0x0010 | 100 | 100 | 000 | Old age | Offline | Never | 24 |
      | 199 | UDMA CRC error count | 0x003e | 200 | 200 | 000 | Old age | Always | Never | 0 |
      | 240 | Head flying hours | 0x0000 | 100 | 253 | 000 | Old age | Offline | Never | 22351h+58m+29.819s |
      | 241 | Total lbas written | 0x0000 | 100 | 253 | 000 | Old age | Offline | Never | 68659349241 |
      | 242 | Total lbas read | 0x0000 | 100 | 253 | 000 | Old age | Offline | Never | 1595474125720 |

      Can you share good resources on understanding fields like "offline uncorrectable"? The best I have found is https://helpful.knobs-dials.com/index.php/Computer_data_storage_-_Reading_SMART_reports.
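      For reference, the same attributes can be pulled at the command line with smartmontools (drive path is a placeholder):
      ```
      smartctl -A /dev/sdX   # just the attribute table
      smartctl -x /dev/sdX   # everything, including error and self-test logs
      ```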
  12. Wow, ok. I found the button that says "Sync will start Data-Rebuild." Never mind...