unRAID Server Release 6.0-beta8-x86_64 Available


limetech


I have two 500GB drives as a btrfs cache pool.  Main page reports a 1TB cache drive.  Shouldn't this be 500GB?

 

It likely depends on which RAID profile you are running. RAID 0 stripes data across the two drives, effectively treating them as one drive the total size of both (which looks like what you have). However, there is no redundancy here, just a speed increase.

 

If you used RAID 1, it would mirror the data, so you'd have two copies of your data but would only see 500GB.
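To make the arithmetic concrete, here's a quick sketch (plain shell, illustrative numbers only; variable names are my own, not anything from unRAID) of the usable capacity of each profile with two 500GB drives:

```shell
# Usable capacity of a two-drive btrfs cache pool, by profile (illustrative)
drive_gb=500
raid0=$(( drive_gb * 2 ))   # striped across both drives, no redundancy
raid1=$(( drive_gb ))       # mirrored: 1TB raw, but only 500GB usable
echo "raid0 usable: ${raid0}GB"
echo "raid1 usable: ${raid1}GB"
```

So a Main page reading of 1TB is consistent with the pool running the striped (RAID 0 style) profile.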

 

Link to comment

An interesting note in 6b8: if you try xfs_check (the primary tool for checking XFS disks), it returns "xfs_check is deprecated and scheduled for removal in June 2014.  Please use xfs_repair -n instead."

 

Something is wrong: the code definitely uses 'xfs_repair' and never uses 'xfs_check'.  Where are you seeing this message?

 

Looks like xfs_check is on its way out the door, replaced by xfs_repair -n.

 

If I do:

xfs_check /dev/sdg1

 

it returns:

xfs_check is deprecated and scheduled for removal in June 2014.

Please use xfs_repair -n <dev> instead.

 

xfs_repair -n /dev/sdg1 works just fine.

 

However, it's still in the man pages ("xfs_check and xfs_repair can be used to cross-check each other"):

http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/ch11s02.html

 

found this:

https://bugzilla.redhat.com/show_bug.cgi?id=1029458

 

Link to comment

That's why the 'Check Filesystem' button for XFS uses xfs_repair.  Why are you using 'xfs_check' from the command line?

Link to comment

OK, bear with me

 

I'm just thinking out loud here

 

So I update to beta 8,

point the img to /mnt/cache/dock/docker.img,

and set it to 20GB.

 

What prevents me from doing a cp /mnt/cache/docker/*.* /var/lib/docker/ ?

Basically everything would be copied over? Containers and images?

 

The problem is that the 'cp' command does not know about copy-on-write subvolumes.

 

Type this:

 

ls /mnt/cache/docker/btrfs/subvolumes

 

See all those dirs with long numeric names?  Those are the docker image "layers".  When a particular application image is built, Docker starts with a "base image", which is something like, say, a stripped-down Ubuntu.  It then creates a btrfs snapshot, and in that snapshot it installs a little bit of software to form another layer.  It then creates another btrfs snapshot of that snapshot, installs a bit more software, and so on...  Each layer typically differs from its parent by a relatively small amount, but when viewed using standard unix tools, a snapshot subdir appears to be the entire set of files.

 

So using your 'cp' command has these problems:

1. The source might use, say, 10GB, but 'cp' will repeat each layer, so the target ends up consuming far more storage (an order of magnitude or more).

2. Docker won't work anyway, because in btrfs you can only snapshot a subvolume, and there are no subvolumes left: 'cp' converts them to normal directories on the target.

 

That is the problem: normal unix tools, e.g. 'cp', 'mv', etc., are unaware of subvolumes (they look just like directories).
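You can see a rough version of problem 1 even without btrfs (hypothetical /tmp paths, a plain directory standing in for a snapshot layer): 'cp' always materializes full copies, where a snapshot would have shared the unchanged blocks:

```shell
# Hypothetical demo: 'cp' has no notion of shared/copy-on-write data,
# so copying a "layer" always consumes the full amount of storage again.
mkdir -p /tmp/layers-demo/base
head -c 1048576 /dev/zero > /tmp/layers-demo/base/blob   # one 1 MiB "layer"
cp -r /tmp/layers-demo /tmp/layers-copy                  # plain copy: full duplicate
du -sb /tmp/layers-copy                                  # at least another 1 MiB consumed
```

With real docker layers there are dozens of snapshots, so the blow-up multiplies accordingly.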

 

Also: once you understand this, you will realize how much better it is to have a loop-mounted volume file for Docker.  You can move that file to any other device using 'cp', 'mv', etc., and you can easily expand it (or shrink it) as needed.
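For instance, growing the image file itself is just a file operation (sketch with an assumed /tmp path; on a real system you would also need to grow the filesystem inside the image after remounting, a step omitted here):

```shell
# Sketch: a loop-backed image is just a file, so 'truncate' can grow it.
truncate -s 10G /tmp/docker-demo.img   # sparse file: no real blocks written yet
truncate -s 20G /tmp/docker-demo.img   # "expand" the vdisk by growing the file
stat -c %s /tmp/docker-demo.img        # apparent size: 21474836480 bytes
```

Because the file is sparse, it consumes real disk blocks only as data is written into it.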

Link to comment

so in layman's terms, the docker.img is akin to a DVD ISO, and the /var/lib/docker folder is the mount point for that "ISO", and it exists in RAM?

 

It doesn't exist in RAM, but your analogy is correct.  Just like using Daemon Tools or Alcohol to mount an ISO on your Windows PC: while it appears that the mounted device is an independent drive, it's really just an image file running off your hard drive.

Link to comment

seems unmenu is not very happy with that loop thing

 

Sep 5 19:38:20 R2D2 unmenu[2584]: cat: /sys/block/loo/stat: No such file or directory
Sep 5 19:40:34 R2D2 unmenu[2584]: cat: /sys/block/loo/stat: No such file or directory
Sep 5 19:42:37 R2D2 unmenu[2584]: cat: /sys/block/loo/stat: No such file or directory
Sep 5 19:44:58 R2D2 unmenu[2584]: cat: /sys/block/loo/stat: No such file or directory
Sep 5 19:47:02 R2D2 unmenu[2584]: cat: /sys/block/loo/stat: No such file or directory
Sep 5 19:49:04 R2D2 unmenu[2584]: cat: /sys/block/loo/stat: No such file or directory
Sep 5 19:51:08 R2D2 unmenu[2584]: cat: /sys/block/loo/stat: No such file or directory

 

Joe will have to take a look

Link to comment

It's also writable, I believe.  Or not?

Link to comment

It is, and should be writable, of course.

 

It is.

 

Basically we are using a vdisk (virtual disk).  It's the same type of file that Xen or KVM, or any hypervisor for that matter, would use to store data for a virtual machine.  Only instead of tying that vdisk to a virtual machine, we present it back to our own host as /dev/sdX (or /dev/loop# in our case).  So unRAID as an OS sees docker.img as a disk device, plain and simple.  Then we mount that disk device to the path /var/lib/docker.  If you were to mount the loop device to, say, /mnt/tst by doing this:

 

mkdir /mnt/tst
mount /dev/loop8 /mnt/tst
cd /mnt/tst
v

 

Then you would see this in response:

 

root@fractal:~# mkdir /mnt/tst
root@fractal:~# mount /dev/loop8 /mnt/tst
root@fractal:~# cd /mnt/tst
root@fractal:/mnt/tst# v
total 12
drwx------ 1 root root    0 Sep  5 10:01 btrfs/
drwx------ 1 root root    0 Sep  5 10:01 containers/
drwx------ 1 root root   12 Sep  5 10:01 execdriver/
drwx------ 1 root root    0 Sep  5 10:01 graph/
drwx------ 1 root root   32 Sep  5 10:01 init/
-rw-r--r-- 1 root root 5120 Sep  5 10:01 linkgraph.db
-rw------- 1 root root   19 Sep  5 10:01 repositories-btrfs
drwx------ 1 root root    0 Sep  5 10:01 tmp/
drwxrwxrwx 1 root root    0 Sep  5 10:01 unraid-templates/
drwx------ 1 root root    0 Sep  5 10:01 volumes/

 

Hmm.  The directory listing of that test mount looks familiar...  I wonder...

 

root@fractal:/mnt/tst# cd /var/lib/docker
root@fractal:/var/lib/docker# v
total 12
drwx------ 1 root root    0 Sep  5 10:01 btrfs/
drwx------ 1 root root    0 Sep  5 10:01 containers/
drwx------ 1 root root   12 Sep  5 10:01 execdriver/
drwx------ 1 root root    0 Sep  5 10:01 graph/
drwx------ 1 root root   32 Sep  5 10:01 init/
-rw-r--r-- 1 root root 5120 Sep  5 10:01 linkgraph.db
-rw------- 1 root root   19 Sep  5 10:01 repositories-btrfs
drwx------ 1 root root    0 Sep  5 10:01 tmp/
drwxrwxrwx 1 root root    0 Sep  5 10:01 unraid-templates/
drwx------ 1 root root    0 Sep  5 10:01 volumes/

 

They are the same because they are both pointing to the same vdisk.  We are mounting it just like we would any other disk device.

 

And just like any other disk device, with this vdisk we can choose which file system we want to use on it.  The only method we support at this time is BTRFS, so that's what we do.  But just because we use BTRFS on the vdisk doesn't mean we have to use it on the physical device that the vdisk is stored upon.  This is a huge advantage for two reasons:

 

1.  You don't have to reformat any existing unRAID array or cache device to BTRFS in order to make use of Docker in Beta 8.

2.  If you want to "backup" your Docker configuration / images at any time and easily preserve the benefits of snapshots (lower disk consumption), you can just copy your "docker.img" file anywhere you want!  It's that easy!
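Since docker.img is an ordinary file, a backup really is a single copy.  A sketch of the idea (the /tmp paths and the small stand-in file are assumptions for illustration; a real docker.img would live on the cache drive):

```shell
# Hedged sketch: back up the vdisk file by copying it like any other file.
# Everything inside it, snapshots included, comes along for the ride.
truncate -s 1M /tmp/docker.img                   # stand-in for a real docker.img
mkdir -p /tmp/backup
cp /tmp/docker.img /tmp/backup/docker.img        # the whole "backup" step
cmp /tmp/docker.img /tmp/backup/docker.img && echo "backup is identical"
```

Contrast this with trying to 'cp' the mounted /var/lib/docker tree, which would flatten the snapshots as described earlier in the thread.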

Link to comment

I've already caused myself an issue with this docker.img (I created it at 200GB, then deleted it and recreated it at 10GB, but didn't copy out my-templates first, so I lost them).  I've learned that it's very valuable to have a screenshot of your docker configs so you can remember the volume mappings.

 

Personally I think this is also a good reason to keep all your configs outside of the docker.img file so that it only contains easily restorable data (i.e. the containers themselves).

 

The downside of having everything in a single virtual disk file like this is, like you mentioned, that if it gets corrupted you are potentially up sh!t creek and need to rebuild.  Minimizing the impact of that process is going to be important to people's sanity if/when that happens.

Link to comment

Personally I think this is also a good reason to keep all your configs outside of the docker.img file so that it only contains easily restorable data (i.e. the containers themselves).

This is the practice that has been used for the majority of the docker containers that have been put together for use with unRAID.

The downside of having everything in a single virtual disk file like this is, like you mentioned, that if it gets corrupted you are potentially up sh!t creek and need to rebuild.  Minimizing the impact of that process is going to be important to people's sanity if/when that happens.

You are correct - it is just like losing a disk.  However, if the docker.img file only contains the container images, and all configuration and user data is external to the images, then it is trivial to rebuild the contents of the docker.img file by redownloading the docker images.  Also, keeping docker.img as a single file means that it is easy to back it up just like any other file.
Link to comment

Personally I think this is also a good reason to keep all your configs outside of the docker.img file so that it only contains easily restorable data (i.e. the containers themselves).

This is the practice that has been used for the majority of the docker containers that have been put together for use with unRAID.

 

Agreed, but I've seen comments from Tom about great portability, which sort of sounded like you would have everything contained in the docker.img file - you could back it up, move it around, or whatever.

 

While that does sound good, I think I would rather keep my data external to the image.

Link to comment

Personally I think this is also a good reason to keep all your configs outside of the docker.img file so that it only contains easily restorable data (i.e. the containers themselves).

There is unRAID-specific "config" data associated with a docker.img file:

- the list of containers that should be autostarted

- the set of 'template' files used to create and maintain the containers

- the set of icons/banners

- maybe other stuff

 

We want to keep this inside the docker.img file because it's relevant only to the images/containers inside that particular image file.

 

Suppose we kept that data separate, maybe in /boot/config/plugins/dockerMan/config-data (for example).

 

Next, suppose we want to create a 2nd docker.img file with a completely different set of images/containers.  All you have to do is stop docker, point /var/lib/docker at that new image file, and then start docker.  But the config for that image will be all wrong if the webGui uses /boot/config/plugins/dockerMan/config-data to retrieve it.

 

This is why the unRAID config data should be kept in the image file.

 

The downside of having everything in a single virtual disk file like this is, like you mentioned, that if it gets corrupted you are potentially up sh!t creek and need to rebuild.  Minimizing the impact of that process is going to be important to people's sanity if/when that happens.

Easy solution: backup.  There are a few ways to automate this:

- manually set up a cron job

- create a scheduler section in the Docker settings section of Docker manager to set up the cron

- create notion of a "mirrored cache-only share" - mover would move stuff from cache to array, but then not delete from cache

- probably other ways to do it
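The manual cron route might look something like this (a sketch only; the 3 AM schedule and the destination path are my assumptions, not unRAID defaults):

```shell
# Hypothetical: stage a nightly docker.img backup job for cron.
# Schedule and destination path are examples, not unRAID defaults.
echo '0 3 * * * cp /mnt/cache/docker.img /mnt/user/backups/docker.img' \
  >> /tmp/crontab.new
cat /tmp/crontab.new    # then install it with: crontab /tmp/crontab.new
```

Copying to a user share on the array would put the backup under parity protection.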

Link to comment

This is why the unRAID config data should be kept in the image file.

 

Thanks for this, I was wondering why template data was being stored in the otherwise deletable docker.img file

 

Easy solution: backup.  There are a few ways to automate this:

 

While we're on the subject... we also need a way to backup our appdata shares.  Maybe a process that shuts down all the dockers, copies the appdata share and docker.img, and starts the dockers again?

 

Question/idea: Would backups be easier/faster if appdata was also a vdisk?  It takes forever to copy the plex files one by one.

Link to comment

The mirrored cache-only share concept Tom just mentioned would also be a solution for protecting our appdata.  I can see there would be some details to work out regarding how this would be handled from a user shares perspective.  I guess the user share would always read/write from the cache, but mover would copy or update files on the array when it runs.  There they would be parity protected, and could be used to restore the files on the cache if they were corrupted or accidentally deleted.

Link to comment

While we're on the subject... we also need a way to backup our appdata shares.  Maybe a process that shuts down all the dockers, copies the appdata share and docker.img, and starts the dockers again?

 

Question/idea: Would backups be easier/faster if appdata was also a vdisk?  It takes forever to copy the plex files one by one.

 

Another idea is to put Plex appdata in /var/lib/docker/volumes/appdata and just let it live in the vdisk.

 

There are many ways to skin this cat.  Have you seen our Roadmap and Defect Reports boards?  Here's the honest truth: we have simply not had time to develop a set of "best practices" yet, but I am watching what everyone here is coming up with as best I can  ;)

Link to comment

Just want to add to the "it's that easy" statement from a couple posts before.

I needed to change the cache drive after switching to beta8.  All I had to do was copy the apps folders and the docker.img to a remote computer, install and format the new cache drive, start the array, and copy the apps and docker.img back to the cache drive.

After starting docker and pointing it to docker.img, everything was running as before.

 

It's that easy!!!

Link to comment

Has that helped, or confused you even more? :)

 

Thanks a lot for your post, it explained things to me a bit better, so I took the courage and started the upgrade process following the instructions in the OP, and now I think I am up and running again... Hopefully I won't have any errors.

Link to comment
