
Cache pool won't balance


Fatal_Flaw


I started with a 512GB Samsung 840 Pro in my cache pool. I then added a 500GB Samsung 840 to the cache pool.

 

The cache mounts and is accessible via //tower/cache, and my VM stored on it works, but there are no writes to the second cache drive, which I would expect to see if it were mirroring the data. I tried initiating a balance, but the syslog says:

 

Jul  8 13:33:04 Filebox php: /sbin/btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache &>/dev/null &
Jul  8 13:33:04 Filebox kernel: BTRFS error (device sdb1): unable to start balance with target data profile 16
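
For reference, "target data profile 16" appears to be the raw btrfs block-group flag for RAID1 (0x10 = 16), so the kernel is refusing the conversion to RAID1. That usually means the conversion target can't be satisfied, most often because the pool still contains only one device. A quick, read-only way to check what is actually in the pool (just a sketch; adjust the mount point if yours differs):

btrfs filesystem show /mnt/cache    # lists every devid in the pool and its size/usage
btrfs filesystem df /mnt/cache      # shows which profile (single, RAID1, ...) each block group type uses

If "filesystem show" only lists one devid, the second SSD never actually joined the pool and the balance has nothing to mirror onto.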

 

Any suggestions for getting the drives mirroring correctly? Thanks!

 

Screenshots:

Cache Pool Settings

Main Screen - Cache Devices

filebox-syslog-20150708-1333.zip

Link to comment

Results:

 

root@Filebox:~# btrfs device add /dev/sdk /mnt/cache
/dev/sdk appears to contain a partition table (dos).
Use the -f option to force overwrite.
root@Filebox:~# btrfs fi df /mnt/cache
Data, single: total=34.00GiB, used=32.59GiB
System, single: total=32.00MiB, used=16.00KiB
Metadata, single: total=1.00GiB, used=10.92MiB
GlobalReserve, single: total=16.00MiB, used=0.00B

 

Hmmm, do I need to manually format the second cache (sdk) drive?
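
For what it's worth, the btrfs fi df output above still shows every block group in the "single" profile, which suggests the second SSD never joined the pool; the add was refused because of the leftover DOS partition table, not because the drive needs formatting. If nothing on /dev/sdk matters, one hedged alternative to forcing the overwrite is to clear the stale signatures first and then let unRAID (or a manual add) take the blank drive:

wipefs /dev/sdk       # read-only: list the signatures currently on the drive
wipefs -a /dev/sdk    # erase them - only if nothing on the drive needs to be kept

Once the conversion eventually succeeds, the Data, System and Metadata lines from btrfs fi df should read "RAID1" instead of "single" (GlobalReserve always stays single).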

Link to comment

Hmmm, do I need to manually format the second cache (sdk) drive?

Actually, I think the existing format is what is causing it to fail. If you are sure there is NOTHING on /dev/sdk that you want to save, I'd just do this
btrfs device add /dev/sdk1 /mnt/cache

or

btrfs device add -f /dev/sdk1 /mnt/cache

and see what happens. I'm not sure whether or not you need to manually add a specific partition type, but you definitely don't need to format it.

 

It might be better to just delete the existing partition on the new SSD, re-add the drive, and let unRAID fully prepare the drive itself instead of manually trying to force it.

 

Hopefully someone better versed in unRAID's implementation of btrfs will chime in and say for sure how unRAID wants things laid out, and whether or not unRAID is expected to erase a drive in order to add it to a RAID pool. I suspect it is programmed to leave existing content alone, which means you have to manually erase (remove the partitions from) a drive you wish to add.
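
For reference, the sequence unRAID runs automatically (it shows up in the syslog posted further down) appears to boil down to the commands below, and the same steps can be run by hand if the GUI route keeps failing. This is only a sketch using the device names from this thread, and it assumes the new SSD holds nothing you want to keep:

btrfs device add -f /dev/sdk1 /mnt/cache                          # force-add the new SSD's partition to the pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache    # convert existing data and metadata to RAID1
btrfs balance status /mnt/cache                                   # check conversion progress
btrfs filesystem df /mnt/cache                                    # Data/Metadata/System should now report RAID1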

Link to comment

NOTE: I had to move some drives/cables around so what was previously sdb is now sdn, and what was previously sdk is now sdm.

sdn is the existing SSD with the data on it.

sdm is the new SSD that I'm trying to add to the cache pool.

 

I decided to just delete the partition and let unRAID do what it needed to add the drive to the pool.

So I stopped the array and deleted the partition on the new SSD (sdm). The main page of the GUI listed the new SSD (sdm) as a new drive in the Cache 2 slot. I started the array and Cache 2 showed green. I checked the syslog and found it's still not balancing. You can see in the portion of the syslog pasted below: "error during balancing '/mnt/cache' - Invalid argument" and "BTRFS error (device sdn1): unable to start balance with target data profile 16".

 

Jul  9 17:21:10 Filebox emhttp: shcmd (164): mkdir -p /mnt/cache
Jul  9 17:21:10 Filebox emhttp: shcmd (165): set -o pipefail ; mount -t btrfs -o noatime,nodiratime -U b2d98d37-4bb3-4e49-9b4e-95e14e7322a8 /mnt/cache |& logger
Jul  9 17:21:10 Filebox kernel: BTRFS info (device sdn1): disk space caching is enabled
Jul  9 17:21:10 Filebox kernel: BTRFS: has skinny extents
Jul  9 17:21:10 Filebox kernel: BTRFS: detected SSD devices, enabling SSD mode
Jul  9 17:21:10 Filebox emhttp: writing MBR on disk (sdm) with partition 1 offset 64, erased: 0
Jul  9 17:21:11 Filebox emhttp: re-reading (sdm) partition table
Jul  9 17:21:11 Filebox kernel: ata17.00: Enabling discard_zeroes_data
Jul  9 17:21:11 Filebox emhttp: shcmd (166): udevadm settle
Jul  9 17:21:11 Filebox kernel: sdm: sdm1
Jul  9 17:21:11 Filebox emhttp: shcmd (167): set -o pipefail ; /sbin/btrfs device add -f -K /dev/sdm1 /mnt/cache |& logger
Jul  9 17:21:12 Filebox logger: /dev/sdm1 is mounted
Jul  9 17:21:12 Filebox emhttp: shcmd: shcmd (167): exit status: 1
Jul  9 17:21:12 Filebox emhttp: shcmd (168): set -o pipefail ; /sbin/btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache |& logger &
Jul  9 17:21:12 Filebox emhttp: shcmd (169): btrfs filesystem resize max /mnt/cache |& logger
Jul  9 17:21:12 Filebox logger: ERROR: error during balancing '/mnt/cache' - Invalid argument
Jul  9 17:21:12 Filebox logger: There may be more info in syslog - try dmesg | tail
Jul  9 17:21:12 Filebox kernel: BTRFS error (device sdn1): unable to start balance with target data profile 16
Jul  9 17:21:12 Filebox logger: Resize '/mnt/cache' of 'max'
Jul  9 17:21:12 Filebox emhttp: shcmd (170): sync
Jul  9 17:21:12 Filebox kernel: BTRFS: new size for /dev/sdn1 is 512110157824

 

Is it possible that the problem is that the original cache drive is 512GB and I'm trying to add a 500GB drive? Does anyone have any other suggestions? I appreciate the help!
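
For what it's worth, the size difference shouldn't be the blocker: btrfs RAID1 mirrors chunk by chunk, so a 500GB second device just caps the mirrored capacity at roughly the smaller drive, and only about 34GiB is allocated here anyway. The telling lines in the log are "logger: /dev/sdm1 is mounted" and "shcmd (167): exit status: 1" - the device add itself failed again, so the pool still has a single device and the RAID1 conversion fails with the same profile-16 error. Two read-only checks that might narrow it down (a sketch, using the current device names):

btrfs filesystem show /mnt/cache    # does the pool list one devid or two?
wipefs /dev/sdm1                    # list any leftover signatures on the new partition

If only one devid shows up and sdm1 still carries stale signatures, wiping them with wipefs -a (array stopped, and only if the drive is expendable) and then letting unRAID re-add the drive would be my next guess.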

 

Full syslog attached.

filebox-syslog-20150709-1721.zip

Link to comment

