mflotron Posted July 2, 2016

Hey all, I'm trying to get my first install of unRAID up and running and have run into a pretty significant bump: none of my hard drives will mount. I assign all the drives appropriately in the array and start it. The array starts, but then shows all drives as unmountable. I attempt to Format, and after a few moments the status for every drive is still "Unmountable".

Here is the log from one of the drives right after the format attempt:

Jul 1 21:15:49 Edinburgh root: meta-data=/dev/md8 isize=512 agcount=5, agsize=268435455 blks
Jul 1 21:15:49 Edinburgh root: = sectsz=512 attr=2, projid32bit=1
Jul 1 21:15:49 Edinburgh root: = crc=1 finobt=1, sparse=0
Jul 1 21:15:49 Edinburgh root: data = bsize=4096 blocks=1220942633, imaxpct=5
Jul 1 21:15:49 Edinburgh root: = sunit=0 swidth=0 blks
Jul 1 21:15:49 Edinburgh root: naming =version 2 bsize=4096 ascii-ci=0 ftype=1
Jul 1 21:15:49 Edinburgh root: log =internal log bsize=4096 blocks=521728, version=2
Jul 1 21:15:49 Edinburgh root: = sectsz=512 sunit=0 blks, lazy-count=1
Jul 1 21:15:49 Edinburgh root: realtime =none extsz=4096 blocks=0, rtextents=0
Jul 1 21:15:49 Edinburgh emhttp: shcmd (293): mkdir -p /mnt/disk8
Jul 1 21:15:49 Edinburgh emhttp: shcmd (294): set -o pipefail ; mount -t auto -o noatime,nodiratime /dev/md8 /mnt/disk8 |& logger
Jul 1 21:15:50 Edinburgh root: mount: /dev/md8: more filesystems detected. This should not happen,
Jul 1 21:15:50 Edinburgh root: use -t <type> to explicitly specify the filesystem type or
Jul 1 21:15:50 Edinburgh root: use wipefs(8) to clean up the device.
Jul 1 21:15:50 Edinburgh emhttp: shcmd: shcmd (294): exit status: 1
Jul 1 21:15:50 Edinburgh emhttp: mount error: No file system (1)
Jul 1 21:15:50 Edinburgh emhttp: shcmd (295): umount /mnt/disk8 |& logger
Jul 1 21:15:50 Edinburgh root: umount: /mnt/disk8: not mounted
Jul 1 21:15:50 Edinburgh emhttp: shcmd (296): rmdir /mnt/disk8

I've tried formatting using DBAN and preclearing, to no avail. FWIW, all of these drives are from a previous FreeNAS installation.
JonathanM Posted July 2, 2016

Try explicitly setting the file system type in the drive properties instead of leaving it set to auto.
mflotron Posted July 2, 2016 (Author)

Curiously enough, that worked, but it does concern me that it could cause issues down the road. The log makes me believe the drives have remnants of their ZFS past, which I definitely want to get rid of. I did figure out how to use wipefs and ran it on all the drives, but this still didn't allow me to mount them while set to auto. It DID seem to find and erase some of the so-called "magic strings"; I got a lot of this:

/dev/sdd: 8 bytes were erased at offset 0x57541e36000 (zfs_member): 0c b1 ba 00 00 00 00 00

While we will obviously have a backup of this data (44TB...), it is crucial to our production and I want to make sure I'm not doing something now that I'm going to regret.
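For anyone finding this thread later, the wipefs workflow above can be sketched safely against a throwaway image file instead of a real device. This is a demo under assumptions: mkswap is used only to stamp a known signature for wipefs to find (standing in for the zfs_member signatures on these drives), and on real hardware you would target /dev/sdX, where -a is destructive:

```shell
# Safe wipefs demo on a 1 MiB image file rather than a real disk.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=1 status=none

# Stamp a known signature so wipefs has something to find
# (swap here; on the drives above it was zfs_member).
mkswap "$img" >/dev/null 2>&1

wipefs "$img"                      # list detected signatures, touch nothing
wipefs -a "$img" >/dev/null        # erase ALL detected signatures
remaining=$(wipefs "$img" | wc -l)
echo "signatures remaining: $remaining"
rm -f "$img"
```

With no signatures left, wipefs prints nothing, so the final count comes out at zero; on a real disk that is the state where mount -t auto no longer sees conflicting filesystems.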
itimpi Posted July 2, 2016

Not sure why a preclear would not wipe any remnants of the previous usage, as it involves writing zeroes to every sector on the disk. I have seen cases where not doing so causes problems, particularly when the disk's file system type is set to 'auto'. I think this is a by-product of unRAID trying to keep intact the data on disks previously used by unRAID, and the logic around this not being complete enough.
JorgeB Posted July 2, 2016

Quoting itimpi: "Not sure why a preclear would not wipe any remnants of the previous usage as it involves writing zeroes to every sector on the disk."

Agreed; are you sure you did a full preclear?

Quoting mflotron: "Curiously enough, that worked...but it does concern me quite a bit as to potentially causing issues for me down the road. The log makes me believe the drives have remnants of their ZFS past, which I definitely want to get rid of."

It would be best to really clear the disks so you can use auto: if you need to do a new config sometime in the future, the filesystem setting will be reset to auto, resulting in unmountable disks, and you may not remember why.
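As a sanity check after a preclear, you can verify a zero-fill by comparing the target against /dev/zero. A minimal sketch on a small image file, under the assumption that on a real disk you would substitute /dev/sdX and accept that the comparison takes a long time:

```shell
# Demo: verify a "cleared" target is all zeroes, using a 10 MiB image file.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=10 status=none   # simulate a zeroed disk

size=$(stat -c %s "$img")            # bytes to compare (GNU stat)
if cmp -s -n "$size" /dev/zero "$img"; then
    result="all zero"
else
    result="non-zero data found"
fi
echo "$result"
rm -f "$img"
```

If any signature (or any data at all) survived the clear, cmp exits non-zero and the check reports it, which is exactly the situation that left these disks unmountable on auto.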