Creating a Cache Drive Pool


jeffreywhunter


I've read several posts in the forum about how unRAID 6.0.x has cache drive pooling set up in the webGUI, but I've not been able to see how it works in the latest iteration. Can someone point me to the latest instructions?

 

When I go into the Main/Cache (array stopped), all I see is this...

http://my.jetscreenshot.com/12412/20150707-nkzc-43kb.jpg

 

Slots = 1?  Is this a setting somewhere?

 

thanks!

 

 


Slots = 1?  Is this a setting somewhere?

Reduce the number of array slots to increase the number of available cache slots.

Now that I have the cache pool set up...

 

http://my.jetscreenshot.com/12412/20150707-ztwg-57kb.jpg

 

I'm assuming that the system sees the two drives as a single 990GB cache drive.

 

If you look at the screenshot, however, there is something odd.  It shows 268GB on the cache drive (but it's empty - http://my.jetscreenshot.com/12412/20150707-ojgf-65kb.jpg).  And why does the cache 2 drive show no free space?  FYI, I precleared the cache drive before I put it into the pool...

 


I'm assuming that the system sees the two drives as a single 990GB cache drive.

No firsthand experience here, but that doesn't look right. I think unRAID is only configured to use multiple cache devices as RAID1, which means your configuration probably isn't valid and won't balance, because you have more data on the first device than will fit on the second. Or something like that. Hopefully someone who has actually played with multiple cache devices of different sizes will chime in here.
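If you want to check what the pool is actually doing, something like this from the console (read-only, and assuming the pool is mounted at /mnt/cache) should show the member devices and whether the data profile is really RAID1:

btrfs filesystem show /mnt/cache   # member devices and per-device allocation
btrfs filesystem df /mnt/cache     # Data/Metadata profile (RAID1, single, ...)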

I wondered that, but thought I had read you could use any number of drives.  RAID 0 would make more sense.  Should I create the RAID 0 outside of unRAID, then add that as the cache?  Or just use 2 identical cache drives and let unRAID set up the RAID 0?

btrfs RAID1 is a special kind of beast: it allows the use of odd numbers of drives and spreads the data around so that there are always 2 copies of each bit, allowing for a single drive failure in the set. RAID0 is non-fault-tolerant, so unRAID would have to be forced into using it; I'm not even sure you could do it in software. 2 identical drives as a RAID1 would be the most efficient use of the space, as odd sizes are going to limit the max amount of protected space. There are btrfs RAID1 drive space calculators online if you want to play around with different size combinations.
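For reference, outside of the GUI a two-device btrfs RAID1 would be created along these lines — a sketch only, the device names are placeholders, and mkfs wipes whatever is on them:

mkfs.btrfs -f -d raid1 -m raid1 /dev/sdX1 /dev/sdY1   # mirror both data (-d) and metadata (-m)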

Oops... I reconfigured the cache back to what it was...

 

http://my.jetscreenshot.com/12412/20150707-kt3t-38kb.jpg

 

Now the cache drive is unmountable?  And it wants me to reformat... which means I'll lose my Docker configuration for Plex and a couple of others...  No big deal, but I'd rather not rebuild it...

 

Guess I'll wait for some wisdom to come along!

Try shutting down and restarting. Reconfiguring multiple times may have confused it as to what was supposed to be mounted where. Also try starting the array with no cache drive assigned, then reassign just the single drive.

The cache pool is set up to provide protection so that there is a copy of any particular data sector on two different drives in the pool.

 

With 2 drives in a cache pool, the maximum usable space is equivalent to the smallest drive, so in such a case you want them to be the same size for maximum efficiency of usable space.

 

With more than 2 drives it gets more complicated, and there are calculator tools online to work out the usable space for any given mix of drive sizes.
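As a worked example with the two SSDs that come up later in this thread (nominal 750GB and 240GB, purely to illustrate the arithmetic): a two-drive RAID1 stores every block twice, once per device, so the usable space is min(750, 240) = 240GB, leaving roughly 510GB of the larger drive unusable. On a mounted pool you can confirm the profile with:

btrfs filesystem df /mnt/cache   # a mirrored pool reports Data, RAID1 / Metadata, RAID1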


Here's what I could find.  It doesn't look like there's any reference to the cache.  Full syslog attached...

 

root@HunterNAS:~# cat /var/log/syslog | grep btrfs
Jul  7 00:05:44 HunterNAS emhttp: shcmd (18): /sbin/btrfs device scan |& logger
Jul  7 00:05:45 HunterNAS emhttp: shcmd (20): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md1 /mnt/disk1 |& logger
Jul  7 00:05:49 HunterNAS emhttp: shcmd (21): btrfs filesystem resize max /mnt/disk1 |& logger
Jul  7 00:05:50 HunterNAS emhttp: shcmd (32): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md5 /mnt/disk5 |& logger
Jul  7 00:05:51 HunterNAS emhttp: shcmd (33): btrfs filesystem resize max /mnt/disk5 |& logger
Jul  7 00:05:51 HunterNAS emhttp: shcmd (35): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md6 /mnt/disk6 |& logger
Jul  7 00:05:51 HunterNAS emhttp: shcmd (36): btrfs filesystem resize max /mnt/disk6 |& logger
Jul  7 00:05:51 HunterNAS emhttp: shcmd (38): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md7 /mnt/disk7 |& logger
Jul  7 00:05:51 HunterNAS emhttp: shcmd (39): btrfs filesystem resize max /mnt/disk7 |& logger
Jul  7 00:05:53 HunterNAS emhttp: mount error: Bound to btrfs pool
Jul  7 00:11:46 HunterNAS emhttp: shcmd (132): /sbin/btrfs device scan |& logger
Jul  7 00:11:46 HunterNAS emhttp: shcmd (134): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md1 /mnt/disk1 |& logger
Jul  7 00:11:51 HunterNAS emhttp: shcmd (135): btrfs filesystem resize max /mnt/disk1 |& logger
Jul  7 00:11:51 HunterNAS emhttp: shcmd (146): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md5 /mnt/disk5 |& logger
Jul  7 00:11:52 HunterNAS emhttp: shcmd (147): btrfs filesystem resize max /mnt/disk5 |& logger
Jul  7 00:11:52 HunterNAS emhttp: shcmd (149): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md6 /mnt/disk6 |& logger
Jul  7 00:11:52 HunterNAS emhttp: shcmd (150): btrfs filesystem resize max /mnt/disk6 |& logger
Jul  7 00:11:52 HunterNAS emhttp: shcmd (152): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md7 /mnt/disk7 |& logger
Jul  7 00:11:52 HunterNAS emhttp: shcmd (153): btrfs filesystem resize max /mnt/disk7 |& logger
Jul  7 00:12:40 HunterNAS emhttp: shcmd (238): /sbin/btrfs device scan |& logger
Jul  7 00:12:40 HunterNAS emhttp: shcmd (240): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md1 /mnt/disk1 |& logger
Jul  7 00:12:44 HunterNAS emhttp: shcmd (241): btrfs filesystem resize max /mnt/disk1 |& logger
Jul  7 00:12:45 HunterNAS emhttp: shcmd (252): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md5 /mnt/disk5 |& logger
Jul  7 00:12:45 HunterNAS emhttp: shcmd (253): btrfs filesystem resize max /mnt/disk5 |& logger
Jul  7 00:12:45 HunterNAS emhttp: shcmd (255): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md6 /mnt/disk6 |& logger
Jul  7 00:12:46 HunterNAS emhttp: shcmd (256): btrfs filesystem resize max /mnt/disk6 |& logger
Jul  7 00:12:46 HunterNAS emhttp: shcmd (258): set -o pipefail ; mount -t btrfs -o noatime,nodiratime /dev/md7 /mnt/disk7 |& logger
Jul  7 00:12:46 HunterNAS emhttp: shcmd (259): btrfs filesystem resize max /mnt/disk7 |& logger
Jul  7 00:12:47 HunterNAS emhttp: mount error: Bound to btrfs pool

 

So I grep'd for cache:

root@HunterNAS:~# cat /var/log/syslog | grep cache
Jul  7 00:05:31 HunterNAS kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes)
Jul  7 00:05:31 HunterNAS kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes)
Jul  7 00:05:31 HunterNAS kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes)
Jul  7 00:05:31 HunterNAS kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes)
Jul  7 00:05:31 HunterNAS kernel: PCI: pci_cache_line_size set to 64 bytes
Jul  7 00:05:31 HunterNAS kernel: xhci_hcd 0000:00:14.0: cache line size of 64 is not supported
Jul  7 00:05:31 HunterNAS kernel: ehci-pci 0000:00:1a.0: cache line size of 64 is not supported
Jul  7 00:05:31 HunterNAS kernel: ehci-pci 0000:00:1d.0: cache line size of 64 is not supported
Jul  7 00:05:31 HunterNAS kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 6:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 10:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 1:0:0:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 1:0:1:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 1:0:2:0: [sdi] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 1:0:3:0: [sdj] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 1:0:4:0: [sdk] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 1:0:5:0: [sdl] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 1:0:6:0: [sdm] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sd 1:0:7:0: [sdn] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:35 HunterNAS logger: # version and speed, cache configuration, bus speed, etc. on
Jul  7 00:05:42 HunterNAS emhttp: cache slots: 1
Jul  7 00:05:42 HunterNAS emhttp: cache slots: 1
Jul  7 00:05:42 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:05:52 HunterNAS emhttp: shcmd (49): mkdir -p /mnt/cache
Jul  7 00:05:53 HunterNAS emhttp: shcmd (50): rmdir /mnt/cache
Jul  7 00:11:16 HunterNAS emhttp: cache slots: 1
Jul  7 00:11:16 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:11:17 HunterNAS emhttp: cache slots: 1
Jul  7 00:11:17 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:11:18 HunterNAS emhttp: cache slots: 1
Jul  7 00:11:18 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:11:29 HunterNAS emhttp: cache slots: 1
Jul  7 00:11:29 HunterNAS emhttp: import 23 cache device: no device
Jul  7 00:11:44 HunterNAS emhttp: cache slots: 1
Jul  7 00:11:44 HunterNAS emhttp: import 23 cache device: no device
Jul  7 00:12:21 HunterNAS emhttp: cache slots: 1
Jul  7 00:12:21 HunterNAS emhttp: import 23 cache device: no device
Jul  7 00:12:22 HunterNAS emhttp: cache slots: 1
Jul  7 00:12:22 HunterNAS emhttp: import 23 cache device: no device
Jul  7 00:12:23 HunterNAS emhttp: cache slots: 1
Jul  7 00:12:23 HunterNAS emhttp: import 23 cache device: no device
Jul  7 00:12:34 HunterNAS emhttp: cache slots: 1
Jul  7 00:12:34 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:12:38 HunterNAS emhttp: cache slots: 1
Jul  7 00:12:38 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:12:47 HunterNAS emhttp: shcmd (269): mkdir -p /mnt/cache
Jul  7 00:12:47 HunterNAS emhttp: shcmd (270): rmdir /mnt/cache

 

And here's a grep for device sdf (the cache drive):

 

root@HunterNAS:~# cat /var/log/syslog | grep sdf
Jul  7 00:05:31 HunterNAS kernel: sd 10:0:0:0: [sdf] 1465149168 512-byte logical blocks: (750 GB/698 GiB)
Jul  7 00:05:31 HunterNAS kernel: sd 10:0:0:0: [sdf] 4096-byte physical blocks
Jul  7 00:05:31 HunterNAS kernel: sd 10:0:0:0: [sdf] Write Protect is off
Jul  7 00:05:31 HunterNAS kernel: sd 10:0:0:0: [sdf] Mode Sense: 00 3a 00 00
Jul  7 00:05:31 HunterNAS kernel: sd 10:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jul  7 00:05:31 HunterNAS kernel: sdf: sdf1
Jul  7 00:05:31 HunterNAS kernel: sd 10:0:0:0: [sdf] Attached SCSI disk
Jul  7 00:05:42 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:05:42 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:05:45 HunterNAS kernel: BTRFS: device fsid da7eba92-74ae-4c3a-80ba-1c511f0027a3 devid 1 transid 17613 /dev/sdf1
Jul  7 00:11:16 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:11:16 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:11:17 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:11:17 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:11:18 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:11:18 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:11:29 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:11:44 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:12:21 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:12:22 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:12:23 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:12:34 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:12:34 HunterNAS emhttp: import 23 cache device: sdf
Jul  7 00:12:38 HunterNAS emhttp: Samsung_SSD_840_EVO_750GB_S1DMNEADA10706Y (sdf) 732574584
Jul  7 00:12:38 HunterNAS emhttp: import 23 cache device: sdf

 

Thanks!

syslog20150707.zip


This is new territory for me, but here is the info I found.

https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Replacing_failed_devices

It appears to me that you would want to

mount -o degraded /dev/sd? /mnt
btrfs device delete missing /mnt

Obviously I have not tested this. I ASSUME that you would want to create a temp folder to use in the /mnt path (/mnt/test or something) before you tried to remount it at /mnt/cache — something like the sketch below.
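Putting that together, the untested sketch would be roughly this (the partition name is an example — substitute whichever SSD is still attached, and note it's the partition, not the bare disk):

mkdir -p /mnt/test                      # the mount point has to exist first
mount -o degraded /dev/sdf1 /mnt/test   # mount the surviving pool member
btrfs device delete missing /mnt/test   # drop the record of the departed device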

 

All this assumes you are willing to take risks with your array data, at the possible reward of figuring out a procedure that may help others in the future. If you are uncomfortable playing around, just take the safe route: wipe the drive and rebuild your dockers.

 

As an aside, apparently to remove a device from a pool cleanly, you (or more properly unRAID's GUI) should have issued a device delete. Since this wasn't done while the pool was healthy, you have to force the removal.
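While the pool was still healthy and mounted, that would have looked something like this (example device name; btrfs will refuse if the remaining devices can't hold the data at the current RAID profile):

btrfs device delete /dev/sde1 /mnt/cache   # cleanly remove one member from a mounted pool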


I don't think this will mess with the entire array, would it?

 

I'm game to try.

 

So these are the devices...

 

root@HunterNAS:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
md1      9:1    0   3.7T  0 md
md2      9:2    0   1.8T  0 md   /mnt/disk2
md3      9:3    0   1.8T  0 md   /mnt/disk3
md4      9:4    0   1.8T  0 md   /mnt/disk4
md5      9:5    0   1.8T  0 md
md6      9:6    0   1.8T  0 md
md7      9:7    0   1.8T  0 md
md8      9:8    0   1.8T  0 md   /mnt/disk8
md9      9:9    0   1.8T  0 md   /mnt/disk9
sda      8:0    1    29G  0 disk
└─sda1   8:1    1    29G  0 part /boot
sdb      8:16   0   4.6T  0 disk
└─sdb1   8:17   0   4.6T  0 part
sdc      8:32   0   1.8T  0 disk
└─sdc1   8:33   0   1.8T  0 part
sdd      8:48   0   3.7T  0 disk
└─sdd1   8:49   0   3.7T  0 part
sde      8:64   0 223.6G  0 disk
└─sde1   8:65   0 223.6G  0 part
sdf      8:80   0 698.7G  0 disk
└─sdf1   8:81   0 698.7G  0 part
sdg      8:96   0   1.8T  0 disk
└─sdg1   8:97   0   1.8T  0 part
sdh      8:112  0   1.8T  0 disk
└─sdh1   8:113  0   1.8T  0 part
sdi      8:128  0   1.8T  0 disk
└─sdi1   8:129  0   1.8T  0 part
sdj      8:144  0   1.8T  0 disk
└─sdj1   8:145  0   1.8T  0 part
sdk      8:160  0   1.8T  0 disk
└─sdk1   8:161  0   1.8T  0 part
sdl      8:176  0   1.8T  0 disk
└─sdl1   8:177  0   1.8T  0 part
sdm      8:192  0   1.8T  0 disk
└─sdm1   8:193  0   1.8T  0 part
sdn      8:208  0   1.8T  0 disk
└─sdn1   8:209  0   1.8T  0 part
md10     9:10   0   1.8T  0 md   /mnt/disk10

 

sdf (750GB) and sde (240GB) are the two SSDs.

 

So I ran the first command... tried /mnt/test first (didn't work) and then tried /mnt, and got this result.

 

root@HunterNAS:~# mount -o degraded /dev/sdf /mnt/test
mount: mount point /mnt/test does not exist
root@HunterNAS:~# mount -o degraded /dev/sdf /mnt
mount: block device /dev/sdf is write-protected, mounting read-only
mount: you must specify the filesystem type

 

Here's what the syslog reported

 

Jul 7 12:49:00 HunterNAS kernel: REISERFS warning (device sdf): super-6502 reiserfs_getopt: unknown mount option "degraded"
Jul 7 12:49:00 HunterNAS kernel: EXT3-fs (sdf): error: can't find ext3 filesystem on dev sdf.
Jul 7 12:49:00 HunterNAS kernel: EXT2-fs (sdf): error: can't find an ext2 filesystem on dev sdf.
Jul 7 12:49:00 HunterNAS kernel: EXT4-fs (sdf): VFS: Can't find ext4 filesystem
Jul 7 12:49:00 HunterNAS kernel: FAT-fs (sdf): Unrecognized mount option "degraded" or missing value
Jul 7 12:49:00 HunterNAS kernel: FAT-fs (sdf): Unrecognized mount option "degraded" or missing value
Jul 7 12:49:00 HunterNAS kernel: REISERFS warning (device sdf): super-6502 reiserfs_getopt: unknown mount option "degraded"
Jul 7 12:49:00 HunterNAS kernel: EXT3-fs (sdf): error: can't find ext3 filesystem on dev sdf.
Jul 7 12:49:00 HunterNAS kernel: EXT2-fs (sdf): error: can't find an ext2 filesystem on dev sdf.
Jul 7 12:49:00 HunterNAS kernel: EXT4-fs (sdf): VFS: Can't find ext4 filesystem
Jul 7 12:49:00 HunterNAS kernel: FAT-fs (sdf): Unrecognized mount option "degraded" or missing value
Jul 7 12:49:00 HunterNAS kernel: FAT-fs (sdf): Unrecognized mount option "degraded" or missing value
Jul 7 12:49:00 HunterNAS kernel: hfsplus: unable to parse mount options
Jul 7 12:49:00 HunterNAS kernel: UDF-fs: bad mount option "degraded" or missing value
Jul 7 12:49:00 HunterNAS kernel: XFS (sdf): unknown mount option [degraded].

 

Looks like I need to specify the filesystem somewhere?

 

Here are the filesystems:

 

root@HunterNAS:~# mount
tmpfs on /var/log type tmpfs (rw,size=128m,mode=0755,size=256m)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
/dev/sda1 on /boot type vfat (rw,noatime,nodiratime,umask=0,shortname=mixed)
/dev/md1 on /mnt/disk1 type btrfs (rw,noatime,nodiratime)
/dev/md2 on /mnt/disk2 type xfs (rw,noatime,nodiratime)
/dev/md3 on /mnt/disk3 type xfs (rw,noatime,nodiratime)
/dev/md4 on /mnt/disk4 type xfs (rw,noatime,nodiratime)
/dev/md5 on /mnt/disk5 type btrfs (rw,noatime,nodiratime)
/dev/md6 on /mnt/disk6 type btrfs (rw,noatime,nodiratime)
/dev/md7 on /mnt/disk7 type btrfs (rw,noatime,nodiratime)
/dev/md8 on /mnt/disk8 type xfs (rw,noatime,nodiratime)
/dev/md9 on /mnt/disk9 type xfs (rw,noatime,nodiratime)
/dev/md10 on /mnt/disk10 type xfs (rw,noatime,nodiratime)
shfs on /mnt/user0 type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)
shfs on /mnt/user type fuse.shfs (rw,nosuid,nodev,noatime,allow_other)

 

Of course, the cache isn't there anymore because it's not mounting.  Not sure what to do next...


I don't think this will mess with the entire array, would it? ... So I ran the first command... tried /mnt/test first (didn't work) and then tried /mnt.

Whether or not it messes with the rest of your array depends on exactly what you type. In general you shouldn't issue a command without a rudimentary understanding of why you typed what you did.

 

First, if you wish to mount to a directory, it must exist, so you need to mkdir /mnt/test if you want to use it as a test mount point.

 

Whether or not you need to specify a filesystem type, I don't know. Normally the mount command can probe the drive, figure out what type it is, and mount it appropriately. Perhaps try /dev/sdf1 in the mount command.
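So, untested, but roughly this (assuming the SSD is still sdf — check again after any reboot):

mkdir -p /mnt/test                               # mount point must exist
mount -t btrfs -o degraded /dev/sdf1 /mnt/test   # name the partition and the fs type

Passing -t btrfs should also stop mount from probing every filesystem type in turn, which is what all those reiserfs/ext3/xfs complaints in your syslog were.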

 

I don't know if using the /mnt folder by itself messed with the array; are your other drives still mounted OK inside /mnt?


The cache directory is not in the /mnt directory. So I suppose its content is gone?  Ergo, start over?

Currently your drive is failing to mount automatically, and you've tried a few failed mount commands. I would reboot and see what happens if you create the test directory in the /mnt folder and try mounting the SSD to it using /dev/sd?1 with the degraded option. Keep in mind the sd? designation can change, so after you reboot, verify which letter corresponds to the SSD you wish to work with.
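A quick way to verify that after the reboot (the /dev/disk/by-id names embed the model and serial, so they don't shuffle the way sd? letters can):

ls -l /dev/disk/by-id/ | grep -i samsung   # map the stable ID to the current sd? letter
lsblk                                      # or cross-check by size (698.7G vs 223.6G)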

 

The content is probably still on the SSD, just a little hard to get to right now.

 

If you are more comfortable starting over, I would do that. If this particular pursuit isn't fun for you, it's probably not a good idea; it's not worth getting super stressed, especially since rebuilding the dockers isn't particularly hard.


Hmmm, made some progress...

 

root@HunterNAS:/mnt# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
md1      9:1    0   3.7T  0 md
md2      9:2    0   1.8T  0 md   /mnt/disk2
md3      9:3    0   1.8T  0 md   /mnt/disk3
md4      9:4    0   1.8T  0 md   /mnt/disk4
md5      9:5    0   1.8T  0 md
md6      9:6    0   1.8T  0 md
md7      9:7    0   1.8T  0 md
md8      9:8    0   1.8T  0 md   /mnt/disk8
md9      9:9    0   1.8T  0 md   /mnt/disk9
sda      8:0    1    29G  0 disk
└─sda1   8:1    1    29G  0 part /boot
sdb      8:16   0   4.6T  0 disk
└─sdb1   8:17   0   4.6T  0 part
sdc      8:32   0   1.8T  0 disk
└─sdc1   8:33   0   1.8T  0 part
sdd      8:48   0   3.7T  0 disk
└─sdd1   8:49   0   3.7T  0 part
sde      8:64   0 223.6G  0 disk
└─sde1   8:65   0 223.6G  0 part
sdf      8:80   0 698.7G  0 disk
└─sdf1   8:81   0 698.7G  0 part
sdg      8:96   0   1.8T  0 disk
└─sdg1   8:97   0   1.8T  0 part
sdh      8:112  0   1.8T  0 disk
└─sdh1   8:113  0   1.8T  0 part
sdi      8:128  0   1.8T  0 disk
└─sdi1   8:129  0   1.8T  0 part
sdj      8:144  0   1.8T  0 disk
└─sdj1   8:145  0   1.8T  0 part
sdk      8:160  0   1.8T  0 disk
└─sdk1   8:161  0   1.8T  0 part
sdl      8:176  0   1.8T  0 disk
└─sdl1   8:177  0   1.8T  0 part
sdm      8:192  0   1.8T  0 disk
└─sdm1   8:193  0   1.8T  0 part
sdn      8:208  0   1.8T  0 disk
└─sdn1   8:209  0   1.8T  0 part
md10     9:10   0   1.8T  0 md   /mnt/disk10
root@HunterNAS:/mnt# mount -o degraded /dev/sdf1 /mnt/test
root@HunterNAS:/mnt# btrfs device delete missing /mnt/test
ERROR: error removing the device 'missing' - unable to go below two devices on raid1
root@HunterNAS:/mnt#

 

Not sure what the error message means...

 


Archived

This topic is now archived and is closed to further replies.
