vl1969 Posted June 19, 2014

OK, I did this... and I think I'm OK:

mkfs.btrfs -f -m raid1 -d raid1 -L VM-store /dev/sdf /dev/sdh /dev/sdg
mkdir /mnt/btrfs
mount -t btrfs /dev/sdf /mnt/btrfs/

df
Filesystem      1K-blocks    Used  Available Use% Mounted on
/dev/sda1         3882752  923200    2959552  24% /boot
/dev/sdf       1465159752    1280  976753664   1% /mnt/btrfs

dd if=/dev/zero of=/mnt/btrfs/bench.test bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 0.47854 s, 4.4 GB/s

blkid /dev/sdf
/dev/sdf: LABEL="VM-store" UUID="569b8d06-5676-4e2d-9a22-12d85dd1648d" UUID_SUB="bc85940f-c379-4b4f-b10d-370b6d12de3c" TYPE="btrfs"

root@tower:~# btrfs filesystem df /mnt/btrfs/
Data, RAID1: total=3.00GiB, used=1.95GiB
Data, single: total=8.00MiB, used=0.00
System, RAID1: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, RAID1: total=1.00GiB, used=2.09MiB
Metadata, single: total=8.00MiB, used=0.00
unknown, single: total=16.00MiB, used=0.00

root@tower:~# btrfs filesystem show /dev/sdf
Label: 'VM-store'  uuid: 569b8d06-5676-4e2d-9a22-12d85dd1648d
        Total devices 3  FS bytes used 1.96GiB
        devid 1 size 465.76GiB used 2.02GiB path /dev/sdf
        devid 2 size 465.76GiB used 3.01GiB path /dev/sdh
        devid 3 size 465.76GiB used 3.01GiB path /dev/sdg

> So what is the best way to mount this in the go file? Using /dev/sdf doesn't look correct, since it might change during boot? //Peter

Use "uuid: 569b8d06-5676-4e2d-9a22-12d85dd1648d" instead.
jonp (Author) Posted June 19, 2014

> So what is the best way to mount this in the go file? Using /dev/sdf doesn't look correct, since it might change during boot? //Peter

I think you can locate it under /dev/disk/by-id and use that method.
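To make the by-identifier advice concrete, here is a minimal sketch of go-file lines that mount the pool by its UUID (the one blkid reported above) instead of by device name; the mount point is only an example:

```shell
# Mount the btrfs pool by UUID so it survives /dev/sdX renumbering
# between boots. The UUID is the one blkid printed for the pool;
# /mnt/btrfs is an example mount point.
mkdir -p /mnt/btrfs
mount -t btrfs -U 569b8d06-5676-4e2d-9a22-12d85dd1648d /mnt/btrfs
```

`mount -U` (or the equivalent `UUID=...` source) resolves the filesystem by its superblock UUID, so it does not matter which /dev/sdX letters the member disks get on a given boot.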
peter_sm Posted June 19, 2014

Done that already. If I want to convert to raid5, is that possible? ... going to google a little about that ;-) //Peter
jonp (Author) Posted June 19, 2014

> Done that already. If I want to convert to raid5, is that possible? ... //Peter

Not sure, but I would think this is possible... We will be experimenting with this more and more in the weeks/months ahead to see what we can and can't do.
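For the record, btrfs does support converting profiles in place via a rebalance with convert filters. A hedged sketch follows; note that raid5/6 in btrfs was still experimental in kernels of this era, so this is a test-rig exercise, not production advice, and the mount point is an example:

```shell
# Sketch: convert the data profile of an existing pool to raid5 with a
# rebalance; metadata is kept on raid1 here, since raid5 metadata was
# widely discouraged on kernels of this vintage.
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/btrfs

# A balance rewrites every chunk and can take hours; check progress with:
btrfs balance status /mnt/btrfs
```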
sacretagent Posted June 19, 2014

Thanks for nothing again. NO SWAP files on btrfs?? You couldn't have put that in big letters in the first post. Spent 4 hours moving stuff off the cache drive, reformatted the thing, and now the swap file is not working:

Jun 20 00:58:26 R2D2 kernel: swapon: swapfile has holes
Jun 20 00:58:26 R2D2 rc.swapfile[10796]: Swap file /mnt/cache/swapfile re-used and started
Jun 20 01:05:59 R2D2 kernel: mdcmd (55): spindown 0 (Routine)
Jun 20 01:08:34 R2D2 rc.swapfile[11875]: Swap file /mnt/cache/swapfile re-used and started
Jun 20 01:08:34 R2D2 kernel: swapon: swapfile has holes
Jun 20 01:12:50 R2D2 rc.swapfile[12244]: Creating swap file /mnt/cache/swapfile please wait ...
Jun 20 01:14:41 R2D2 rc.swapfile[12382]: Swap file /mnt/cache/swapfile created and started
Jun 20 01:14:41 R2D2 kernel: swapon: swapfile has holes

A google search gives conflicting info, but the btrfs wiki says it's not supported?
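Some background on that log line: "swapon: swapfile has holes" means the kernel found regions of the file with no blocks allocated. swapon needs every block preallocated, and btrfs of this era reported files that way because of its copy-on-write layout, which is why the wiki says swap files are unsupported (the usual workarounds then were a real swap partition or a loop device on top of the file). The sparse-vs-allocated distinction itself is easy to see in any temp directory:

```shell
# Demonstration: a sparse file has a large apparent size but few
# allocated blocks (the "holes" swapon complains about); a file
# written out with dd is fully allocated.
cd "$(mktemp -d)"
truncate -s 1M sparse.img                                # sparse: nothing written
dd if=/dev/zero of=dense.img bs=1M count=1 2>/dev/null   # fully allocated
stat -c '%n: %s bytes apparent, %b blocks allocated' sparse.img dense.img
```

The sparse file will report (near) zero allocated blocks despite its 1 MiB apparent size, while the dd-written file reports ~2048 512-byte blocks.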
vl1969 Posted June 19, 2014

So, what's the big deal? Shrink your btrfs volume and add a swap partition to the raw device.
peter_sm Posted June 19, 2014

Hmmm, after a reboot I see this:

root@tower:/mnt# blkid /dev/sdf
/dev/sdf: LABEL="VM-store" UUID="569b8d06-5676-4e2d-9a22-12d85dd1648d" UUID_SUB="bc85940f-c379-4b4f-b10d-370b6d12de3c" TYPE="btrfs"
root@tower:/mnt# mount /dev/disk/by-uuid/569b8d06-5676-4e2d-9a22-12d85dd1648d /mnt/vm_disk/
mount: wrong fs type, bad option, bad superblock on /dev/sdf,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so
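One likely cause of that error, worth noting here: a multi-device btrfs filesystem only mounts once the kernel knows about all member devices, and on a system where udev hasn't registered them after boot, a device scan first is the usual fix. A sketch, reusing the UUID and mount point from the post above:

```shell
# Register all btrfs member devices with the kernel, then mount the
# pool. Without the scan, mounting a multi-device btrfs pool can fail
# with exactly "wrong fs type, bad option, bad superblock".
btrfs device scan
mount -t btrfs -U 569b8d06-5676-4e2d-9a22-12d85dd1648d /mnt/vm_disk
```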
peter_sm Posted June 19, 2014

I did not touch the RFS on these disks before I executed...

mkfs.btrfs -f -m raid1 -d raid1 -L VM-store /dev/sdf /dev/sdh /dev/sdg
sacretagent Posted June 19, 2014

> So, what's the big deal? Shrink your btrfs volume and add a swap partition to the raw device.

You all assume we are all command-line jockeys. The thing is that we have no clue how to even start at that... We were spoiled with plugins. We assume that if limetech changes a cache drive filesystem, they at least know that a lot of us are running a swapfile... They asked for the data in the polls, so if they change filesystem they at least need to be sure this will not affect us in any bad way. 2 o'clock in the morning, and in 4 hours I have to get up for work, and plex is still not running (plex folder = 91 GB).
vl1969 Posted June 19, 2014

Hey, hey, hey, who do you call a command-line jockey :-) I hate the command line and am almost a noob in Linux myself. I read manuals and google the rest :-) Also, LM did not change the file system, they added support for a new file system. You do not have to use it, though, or at least read ahead about the new file system before using it. That's what I did when rebuilding my server (not unRAID). Since I am a noob who comes from windows through and through, it was a quest of epic proportions. So far so good...
sacretagent Posted June 19, 2014

> Also, LM did not change the file system, they added support for a new file system. You do not have to use it...

NO docker without btrfs, so they force us to use it.
vl1969 Posted June 19, 2014

How is docker related to btrfs?
prostuff1 Posted June 19, 2014

> NO docker without btrfs, so they force us to use it.

But they did not force you to use a beta version OR to use a swap plugin that they did not create themselves. I have been using unRAID since 2008 and have never run swap. I consider myself a proficient power user and have never needed swap. LimeTech cannot curate all plugins, nor would I expect them to.
NAS Posted June 19, 2014

If you are not comfortable with using the command line, then do not participate in the beta testing at this stage, or at least wait until others have automated it. You're already trying to do stuff specifically outside the range of the current test set. That's fine in itself, but don't complain when complicated things are complicated.
jonp (Author) Posted June 19, 2014

> NO docker without btrfs, so they force us to use it.

SA, there are very good reasons why Docker requires btrfs in our implementation. I suggest that you read through the Docker Quick Start Guide for more information. This is very clearly spelled out.
bkastner Posted June 19, 2014

> There are very good reasons why Docker requires btrfs in our implementation. I suggest that you read through the Docker Quick Start Guide for more information.

Also, if you are going to play with a brand new feature that can have unexpected consequences, maybe don't start late on a weeknight when you don't have time to back out before you need to get some beauty sleep.
jbartlett Posted June 19, 2014

Why do you need a swap file? Small amount of physical RAM? It would be quicker to upgrade your RAM.
JustinChase Posted June 20, 2014

To maybe make this easier on folks in the future, here is exactly how I did this...

I used Midnight Commander (type mc in putty) to move the whole cache directory to my disk10; it took about an hour to move about 140GB. When it finished, it complained that it could not delete the cache directory, which didn't surprise me. I left Midnight Commander running/open in my putty session.

Then I followed the steps in post 1 to:
- stop the array
- change the format type of the cache drive
- restart the array
- format the drive

Then I just went back to Midnight Commander, clicked on the cache directory on disk10 (on the right), and hit move again, which moved everything back to the cache directory. This took much less time, maybe 20 minutes (I'm not sure why).

Then I checked that everything was back on the cache drive, went to the GUI, and installed docker. I couldn't see my cache-only shares in the GUI until I rebooted unRAID again, but after that all seems well.

*I just realized that I needed to change my cache-only shares back to cache only; this process had set them all to "no".
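For anyone who would rather type commands than drive Midnight Commander, the same migration can be sketched with rsync. The paths match the post above (disk10, a cache drive); adjust them to your own disk numbering, and treat this as an untested outline, not a supported procedure:

```shell
# Copy everything off the cache drive to an array disk first.
rsync -a /mnt/cache/ /mnt/disk10/cache-backup/

# ...then, per post 1: stop the array, change the cache drive's format
# type to btrfs, restart the array, and format the drive...

# Copy everything back and clean up the staging copy.
rsync -a /mnt/disk10/cache-backup/ /mnt/cache/
rm -r /mnt/disk10/cache-backup
```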
peter_sm Posted June 20, 2014

I'm on a btrfs pool now with raid1 for 2 disks. Followed the guide to add the second one. /Peter
BobPhoenix Posted June 20, 2014

> Then I just went back to Midnight Commander ... and hit move again, which moved everything back to the cache directory. This took much less time, maybe 20 minutes (I'm not sure why).

Remember, writing to the array is much slower than writing to a cache drive, so you would see significantly faster writes to your now-btrfs cache drive.
JustinChase Posted June 20, 2014

Yeah, I forgot about that. Writing to a disk directly still has to write the parity at the same time; that makes perfect sense. Thanks for sharing. I actually feel better about things knowing 'why' it was that way.
peter_sm Posted June 20, 2014

Anyone know how to fix these uncorrectable errors?

root@tower:/mnt# btrfs scrub status /mnt/tmp/
scrub status for 9e90cef6-5890-4b19-b3af-cee001773b46
        scrub started at Fri Jun 20 21:20:03 2014, running for 1021 seconds
        total bytes scrubbed: 219.88GiB with 2 errors
        error details: csum=2
        corrected errors: 0, uncorrectable errors: 2, unverified errors: 0
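A note on what "uncorrectable" means here: scrub corrects a bad checksum only when a good redundant copy exists; csum errors it cannot correct mean every copy of those blocks failed verification (or the blocks live in a single-profile chunk). The kernel log names the affected files, so the usual approach is roughly:

```shell
# Sketch: identify the files behind the checksum errors, restore them
# from a known-good copy, then scrub again to confirm a clean pass.
dmesg | grep -i 'checksum error'    # log lines identify the bad blocks/files

# ...restore the named files from backup, then re-run the scrub:
btrfs scrub start -B /mnt/tmp       # -B runs in the foreground and
                                    # prints the final statistics
```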
zoggy Posted June 21, 2014

I currently use an SSD for my cache drive. Would changing it to btrfs allow it to have trim support? Also, Tom, any chance of getting this support backported to 5.x?
jonp (Author) Posted June 21, 2014

> I currently use an SSD for my cache drive. Would changing it to btrfs allow it to have trim support? Also, Tom, any chance of getting this support backported to 5.x?

btrfs has trim support. Can't speak to backporting of features yet.
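To expand on "btrfs has trim support": there are two common ways to use it on an SSD-backed cache. The mount point below is an example; pick one approach rather than both:

```shell
# Option 1: continuous discard - the filesystem issues TRIM as blocks
# are freed (some SSDs of this era took a performance hit from this).
mount -o remount,discard /mnt/cache

# Option 2: batched trim on demand (e.g. from a cron job) - trims all
# free space in one pass and prints how much was trimmed.
fstrim -v /mnt/cache
```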
peter_sm Posted June 21, 2014

Hi, I have set up a btrfs RAID1 on 2 x 0.5TB disks, where I skipped the partition tables, since the filesystem is going to use the entire device and I see no reason to have a partition table. So far so good... //Peter