Posts posted by sneakybifta

  1. On 5/15/2018 at 5:08 PM, cferrero said:

    From what I could find, the issue with bandwidth limits is caused by a bugged libtorrent library (v1.1.5). Linuxserver.io rebased the image to Alpine Edge, which pulls that version, and it doesn't work with Deluge 1.3.15.

     

    Here's a how-to using Unraid that makes Deluge stop ignoring the bandwidth limit.

    Unfortunately, it's based on simply downgrading to an older version rather than any proper fix.
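
    For reference, the downgrade generally amounts to pinning the container to an older linuxserver/deluge image instead of :latest. A minimal sketch is below; OLDER_TAG is a placeholder, not a tag confirmed in that how-to, so check the image's tag list for one that predates the Alpine Edge rebase.

    # Sketch only: OLDER_TAG is a placeholder for a tag from before the Alpine Edge rebase.
    docker pull linuxserver/deluge:OLDER_TAG
    # In the Unraid GUI the equivalent is editing the container and changing the
    # Repository field from linuxserver/deluge:latest to linuxserver/deluge:OLDER_TAG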

  2. 1 minute ago, johnnie.black said:

    This is only the currently allocated space, not the total device size; to get that, use:

     

    
    btrfs fi usage /mnt/cache

     

     

    Ah, awesome. I was worried things were about to start breaking because I was hitting the limit. I guess the unallocated space will automatically become allocated as my appdata grows, up to the max shown below?
     

    Thanks for your help anyway; I'll mark this as solved.

     

    Overall:
        Device size:                 350.26GiB
        Device allocated:            109.06GiB
        Device unallocated:          241.20GiB
        Device missing:                  0.00B
        Used:                        107.02GiB
        Free (estimated):            241.88GiB      (min: 121.28GiB)
        Data ratio:                       1.00
        Metadata ratio:                   2.00
        Global reserve:              115.42MiB      (used: 0.00B)
    
    Data,single: Size:107.00GiB, Used:106.32GiB
       /dev/sdh1     107.00GiB
    
    Metadata,RAID1: Size:1.00GiB, Used:361.08MiB
       /dev/sdc1       1.00GiB
       /dev/sdh1       1.00GiB
    
    System,RAID1: Size:32.00MiB, Used:16.00KiB
       /dev/sdc1      32.00MiB
       /dev/sdh1      32.00MiB
    
    Unallocated:
       /dev/sdc1     110.76GiB
       /dev/sdh1     130.44GiB
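
    Side note in case it helps anyone else: the difference between the two views can be seen by running both commands against the pool (assuming it stays mounted at /mnt/cache). btrfs filesystem df only lists the allocated chunks, while btrfs filesystem usage also shows the unallocated device space.

    # Allocated chunks only (what the cache page reports):
    btrfs filesystem df /mnt/cache
    # Full per-device breakdown including unallocated space (the report above):
    btrfs filesystem usage /mnt/cache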

     

  3. 1 hour ago, johnnie.black said:

    That's expected and a known btrfs limitation: when two different-sized devices are used in a pool, the space is incorrectly reported.

     

    If the pool is in RAID0 you'll be able to use the 240GB.

     

    I expected the free space reported in the main WebGUI to be incorrect, as it was for RAID1, but the btrfs filesystem info on the cache page at least reported the correct Total / Used space.

     

    I swapped from RAID0 to 'single' before I read your reply, and now my btrfs filesystem is showing as below. The total usable size of the data partition is now showing as only 107GB.

     

    Data, single: total=107.00GiB, used=106.31GiB
    System, RAID1: total=32.00MiB, used=16.00KiB
    Metadata, RAID1: total=1.00GiB, used=360.97MiB
    GlobalReserve, single: total=115.33MiB, used=0.00B
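
    For the record, the conversion itself is just a balance with a convert filter. A minimal sketch, assuming the pool is mounted at /mnt/cache as above (this is the command-line form of what the GUI balance does, as far as I understand it):

    # Rewrite all data chunks as 'single' while keeping metadata mirrored as RAID1;
    # this touches every data chunk, so it can take a while on a nearly full pool.
    btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache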

     

  4. This worked, and I am now using RAID0. However, the total RAID0 size was showing 180GB rather than the 240GB the calculator said I should get.

     

    I tried a rebalance without any options (roughly the full balance sketched after the output below), and now the RAID0 size is showing as 110GB.

     

    How can I expand the RAID0 to the full 240GB available to it?

     

     

     

    
    Data, RAID0: total=108.00GiB, used=106.22GiB
    System, RAID1: total=32.00MiB, used=16.00KiB
    Metadata, RAID1: total=1.00GiB, used=359.27MiB
    GlobalReserve, single: total=113.69MiB, used=0.00B
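
    For clarity, the rebalance "without any options" above is just a plain full balance. A sketch of the command-line equivalent, assuming the pool is mounted at /mnt/cache (newer btrfs-progs may warn and ask for --full-balance when no filters are given):

    # Full balance with no filters: rewrites all chunks and restripes data
    # across the devices currently in the pool.
    btrfs balance start --full-balance /mnt/cache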

     

  5. Just now, johnnie.black said:

    There's no error in the log, but the smallest cache device is practically full. Move some data to the array and try again; it's also a good idea to first update to v6.4, which includes a newer kernel with many btrfs fixes/improvements.

     

    Thanks, much appreciated. I just caught the 6.4 post and the server is rebooting post-update right now. I'll move the 20GB docker image to the array, try again, then move it back afterwards if it's successful.
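
    A sketch of the move, assuming the Docker service is stopped first in Settings and that the image lives at /mnt/cache/docker.img (both are assumptions; the paths vary by setup):

    # Hypothetical paths: adjust to wherever docker.img actually lives on this system.
    mv /mnt/cache/docker.img /mnt/disk1/docker.img
    # ...run the balance, then move it back and re-enable Docker:
    mv /mnt/disk1/docker.img /mnt/cache/docker.img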

  6. I'm running appdata in a cache pool with 1x 120GB and 1x 256GB SSDs. I've had them in RAID1, but that only gives me 120GB of space, and try as I might to slim it down, my appdata + docker.img is now pushing 115GB.

     

    I've used CA appdata backup to copy everything into the main array, and I'm keeping a copy backed up on my PC too, so I'm not bothered about losing redundancy for now until I can afford a new SSD.

     

    I'm trying to change my RAID1 cache pool to RAID0, which should give me 240GB available with increased speed over btrfs 'single' mode. However, I don't seem to be able to switch to either; I've tried both and got the same error.

     

    I've tried entering this and hitting balance; it initially starts balancing, but within a few seconds, the first time I hit refresh, it has stopped and changed back to 'No balance found on /mnt/cache'.

    -dconvert=raid0 -mconvert=raid1

    I've tried running the options below as an alternative, but the same thing happens. Running it without -mconvert makes no difference either. Filesystem output is included below (full command sketches follow it).

     

    -dconvert=single -mconvert=raid1
    Label: none  uuid: redacted
    	Total devices 2 FS bytes used 108.85GiB
    	devid    1 size 111.79GiB used 111.79GiB path /dev/sdc1
    	devid    2 size 238.47GiB used 111.79GiB path /dev/sdh1
    
    Data, RAID1: total=110.76GiB, used=108.49GiB
    System, RAID1: total=32.00MiB, used=16.00KiB
    Metadata, RAID1: total=1.00GiB, used=368.38MiB
    GlobalReserve, single: total=121.42MiB, used=0.00B
    
    No balance found on '/mnt/cache'
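
    For reference, the boxes above only take the balance options; the full command-line form is presumably something like the sketch below (assuming the GUI runs the balance against /mnt/cache, where the pool is mounted).

    # Convert data chunks to RAID0 while keeping metadata mirrored as RAID1:
    btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
    # Check whether a balance is actually running, paused, or has stopped:
    btrfs balance status /mnt/cache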
    

     

  7. Hi,

     

    I'm building my first Unraid box and repurposing a couple of hard drives. For now, I have a standard 4TB drive and a Seagate 4TB/8GB SSD hybrid drive, and I may add more drives to the array in future.

     

    I've done some searching and understand I'm unlikely to see any performance benefit from the hybrid drive, but I can't work out whether it would be best as the parity drive or as an array drive. I don't really understand enough about how Unraid works; I'd imagine there's not much difference either way, but should I be concerned about lots of writes knackering the SSD portion if it's used as parity or as an array drive?

     

    The main purpose of this box is a Plex server.
