Expanding Cache - 4th Drive not being written to


Trites


Hello,

I added a fourth SSD to my cache last night, and today I noticed that nothing appears to be written to it.

[Screenshot: Main page showing the cache pool]

 

When I click on Cache 4, I notice that Format shows as unknown:

[Screenshot: Cache 4 details showing Format: unknown]

 

And when I check the pool from PuTTY, the drive doesn't appear to be part of it:

root@*******:/# btrfs fi usage --si /mnt/cache
Overall:
    Device size:                          720.17GB
    Device allocated:                     474.66GB
    Device unallocated:                   245.51GB
    Device missing:                          0.00B
    Used:                                 470.20GB
    Free (estimated):                     124.32GB      (min: 124.32GB)
    Data ratio:                               2.00
    Metadata ratio:                           2.00
    Global reserve:                       180.85MB      (used: 0.00B)

Data,RAID1: Size:236.22GB, Used:234.65GB
   /dev/sdg1      156.77GB
   /dev/sdn1      156.77GB
   /dev/sdo1      158.91GB

Metadata,RAID1: Size:1.07GB, Used:443.35MB
   /dev/sdg1        1.07GB
   /dev/sdn1        1.07GB

System,RAID1: Size:33.55MB, Used:49.15kB
   /dev/sdg1       33.55MB
   /dev/sdn1       33.55MB

Unallocated:
   /dev/sdg1       82.18GB
   /dev/sdn1       82.18GB
   /dev/sdo1       81.14GB

 

Is there some way I can manually add the drive to the pool?
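(For reference, on a plain btrfs system a device can be added and the pool rebalanced by hand with standard btrfs-progs commands, sketched below assuming the pool is mounted at /mnt/cache and the new disk is /dev/sdh; on Unraid the pool is managed by the GUI, so the procedure in the reply below is the safer route.)

# Add the new device to the mounted pool
btrfs device add /dev/sdh /mnt/cache

# Rebalance so the RAID1 data and metadata spread onto the new device
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache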

 

 

 

Here are the results of btrfs fi show:

Label: none  uuid: 65e0f80d-5b53-401c-bc74-b1a683ad98be
        Total devices 3 FS bytes used 107.85GiB
        devid    1 size 223.57GiB used 146.03GiB path /dev/sdn1
        devid    2 size 223.57GiB used 148.03GiB path /dev/sdg1
        devid    3 size 223.57GiB used 146.00GiB path /dev/sdo1

The 4th cache drive is /dev/sdh.


With v6.4 it's much easier to add and remove cache devices. Try this:

 

-Stop the array

-Unassign cache4

-Start the array

-Stop the array

-Before re-adding it, it's best to clear the disk with wipefs. Run both commands one after the other, and check that the disk is still sdh if you've rebooted since (see the identity check sketched after these steps):

wipefs -a /dev/sdh1
wipefs -a /dev/sdh

-Re-assign cache4

-Start the array

-A balance should automatically begin to make it part of the pool; if it doesn't, post the diagnostics.
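For the identity check mentioned above, one way to do it (a sketch using standard Linux tools; the commands and grep pattern are illustrative, not from this thread) is to match the /dev/sdX name against the disk's serial before wiping:

# List disks by stable ID and check which /dev/sdX each symlink points at;
# confirm the SSD you intend to wipe is still sdh
ls -l /dev/disk/by-id/ | grep -w sdh

# Or query the serial directly and compare it with the one shown in the Unraid GUI
udevadm info --query=property --name=/dev/sdh | grep ID_SERIAL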


That worked beautifully. Balance is currently running. Thanks Johnnie.

/# btrfs fi show /mnt/cache
Label: none  uuid: 65e0f80d-5b53-401c-bc74-b1a683ad98be
        Total devices 4 FS bytes used 94.11GiB
        devid    1 size 223.57GiB used 138.03GiB path /dev/sdn1
        devid    2 size 223.57GiB used 142.03GiB path /dev/sdg1
        devid    3 size 223.57GiB used 138.00GiB path /dev/sdo1
        devid    4 size 223.57GiB used 0.00B path /dev/sdh1
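To watch the balance, standard btrfs-progs commands can be used (a sketch; /mnt/cache is the mount point shown above):

# Show progress of the balance currently running on the pool
btrfs balance status /mnt/cache

# Once it finishes, usage should show data spread across all four devices
btrfs fi usage --si /mnt/cache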

 

