Looking for suggestions for a Cache / VM Drive...



Currently I have 2 drives in my cache pool (used only to hold VMs). When both VMs (Windows 8.1) are in use, disk usage jumps to 100% on both, slowing them to a grinding halt. Any suggestions other than just using one SSD? Also, why does it say the free space is 541GB and not over 1TB if I have only used 40GB???

 

Cache Directory Only Contains:

 

\\192.168.2.4\cache\

\\192.168.2.4\cache\Windows_VPN\vdisk1.img

\\192.168.2.4\cache\Windows_NoVPN\vdisk1.img

 

[screenshot: YCiixde.png]

 

----

 

[screenshot: wpaecJC.png]

----

Look at the 'Balance Status' in your screenshot and google 'RAID 1', or look it up on Wikipedia. It appears to me that you have two drives set up as mirrors of each other. With RAID 1 the usable capacity is only half the raw total: half of (931.51 GiB + 149.01 GiB) is roughly 540 GiB, which lines up with the ~541GB free the GUI is showing, and the used space comes off that.
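
You can confirm which profile the pool is using from the command line; assuming the cache is mounted at the usual /mnt/cache, something like this should report 'Data, RAID1' if the data blocks are mirrored:

# Show how space is allocated, broken down per profile
btrfs fi df /mnt/cache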

----

root@Icarus:~# btrfs fi df /mnt/cache

Data, RAID1: total=41.00GiB, used=40.66GiB

System, RAID1: total=32.00MiB, used=16.00KiB

Metadata, RAID1: total=1.00GiB, used=116.56MiB

GlobalReserve, single: total=48.00MiB, used=0.00B

 

root@Icarus:~# btrfs fi show

Label: none  uuid: a1f4c62d-b908-4384-944e-105d961bbaa4

        Total devices 2 FS bytes used 40.78GiB

        devid    1 size 931.51GiB used 42.03GiB path /dev/sdg1

        devid    2 size 149.01GiB used 42.03GiB path /dev/sdk1

 

^^
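
If I'm reading the btrfs docs right, there is also a dedicated usage view (added to btrfs-progs before the v4.0 I'm running) that should break down how much of that raw space is actually allocatable under the current RAID1 profile:

# Per-device and per-profile breakdown of allocated vs. free space
btrfs fi usage /mnt/cache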

 

At this point I think my best options for speed are RAID 0... or single (i.e. one VM per drive).
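
From what I can tell on the btrfs wiki, I shouldn't even need to reformat to try that: a balance with convert filters should switch the data profile in place on the mounted pool (still at /mnt/cache), while keeping the metadata mirrored:

# Rewrite data chunks as single (use -dconvert=raid0 for striping instead),
# leaving metadata as RAID1
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache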

 

# Don't duplicate metadata on a single drive (default on single SSDs)

mkfs.btrfs -m single /dev/sdg1

 

When you have drives with differing sizes and want to use the full capacity of each drive, you have to use the single profile for the data blocks, rather than raid0.

 

# Use full capacity of multiple drives with different sizes (metadata mirrored, data not mirrored and not striped)

mkfs.btrfs -d single /dev/sdg1 /dev/sdk1
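
As I understand it, after creating a multi-device filesystem this way you can mount it through any member device, as long as the kernel has scanned for the others first; a minimal sketch:

# Register all btrfs member devices with the kernel, then mount via one of them
btrfs device scan
mount /dev/sdg1 /mnt/cache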

 

----Would this basically be like how unraid handles data without parity, i.e. one complete file on each drive rather than bits striped across both? If so, why is it not the default, since it is most like how the rest of unraid is set up?

 

One of my favorite things about unraid was that it used to be so simple lol

----

So I tried:

 

 

 


root@Icarus:/mnt# mkfs.btrfs -f -d single /dev/sdg

btrfs-progs v4.0

See http://btrfs.wiki.kernel.org for more information.

 

Turning ON incompat feature 'extref': increased hardlink limit per file to 65536

Turning ON incompat feature 'skinny-metadata': reduced-size metadata extent refs

adding device /dev/sdk id 2

fs created label (null) on /dev/sdg

        nodesize 16384 leafsize 16384 sectorsize 4096 size 1.05TiB

root@Icarus:/mnt# mkfs.btrfs -f -d single /dev/sdk

btrfs-progs v4.0

See http://btrfs.wiki.kernel.org for more information.

 

Turning ON incompat feature 'extref': increased hardlink limit per file to 65536

Turning ON incompat feature 'skinny-metadata': reduced-size metadata extent refs

fs created label (null) on /dev/sdk

        nodesize 16384 leafsize 16384 sectorsize 4096 size 149.01GiB

 

At which point the webgui reported both drives as unformatted, with 'Unmountable disk present'....
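
My guess is that this happened because I formatted the raw devices (/dev/sdg and /dev/sdk), while unraid expects its own partition layout (the working pool was on /dev/sdg1 and /dev/sdk1). If that's right, then wiping the stray signatures and letting unraid re-partition and format the drives should fix it; note this destroys everything on both drives:

# Clear the filesystem signatures written to the whole devices
wipefs -a /dev/sdg
wipefs -a /dev/sdk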

