unRAID Server Release 6.0-beta6-x86_64 Available



If we can't convert from reiserfs to btrfs, then we need to move it off the cache disk first and then back, so your example above is moot for anybody who uses the cache disk  ::)

 

Only if your appdata lives on a cache disk today, in which case, yes, you will need to move it off to format it with Btrfs.  However, if you use a new device for Btrfs, you can keep your Cache Drive on Reiserfs for now to just test out the new capabilities.  I'm working on posting a guide here in a sec on how to do this manually to a non-cache drive.

Link to comment

But how do you convert your existing reiserfs cache drive to btrfs so docker can run without moving the library off the cache drive?

From what I can see when I was looking earlier today this is not possible (I would be happy to be proved wrong).  I went through the pain of moving my data off the cache so I could reformat it as btrfs and then moved it back.
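
For anyone facing the same off/format/back dance, here is a rough sketch of the manual steps; the device name and backup path are placeholders, and on unRAID the format itself is normally done through the webGUI per the Btrfs Quick-Start Guide:

# back the cache contents up to a disk with enough free space
rsync -avh /mnt/cache/ /mnt/disk1/cache_backup/
# reformat the cache partition as Btrfs (this destroys everything on it)
mkfs.btrfs -f /dev/sdX1
# remount and copy everything back
mount /dev/sdX1 /mnt/cache
rsync -avh /mnt/disk1/cache_backup/ /mnt/cache/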

Link to comment

Haha the plex library move is a pain. There are several hundred thousand files in that folder (no joke)

 

Plex wiki says that the easiest way is to zip or rar first, then move that file (don't use compression and it will be faster). Otherwise it will take a very long time.

 

I learned that when I moved my Plex library from the old plugin into my new VM. It was much easier after rarring it first.
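
Roughly the same idea with tar and no compression, in case you don't have rar handy (the paths are just examples):

cd /mnt/cache/appdata
tar -cf /mnt/disk1/plex-library.tar plex        # pack the library into one file, no compression
# move or copy the single .tar, then on the destination:
tar -xf /mnt/disk1/plex-library.tar -C /mnt/newdisk/appdata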

 

Let's say you have your Plex library specified to live under /mnt/cache/appdata/plex or /mnt/user/appdata/plex or /mnt/disk1/appdata/plex.  Doesn't matter.  Same for where your media content is located (doesn't matter).  You can do this to install and run Plex in a Docker container WITHOUT doing ANYTHING to your existing library data:

 

docker run -d  --net="host" --name="plex" -v /mnt/path/to/appdata/plex:/config -v /mnt/path/to/mediacontent:/data -p 32400:32400 eschultz/docker-plex

 

After this command completes, open http://tower:32400/web in your browser and enjoy!
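
If the page doesn't come up right away, a couple of standard Docker commands will tell you whether the container actually started (the name matches the --name used above):

docker ps             # the plex container should show as Up
docker logs plex      # watch the first-run output for errors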

 

DONE!

 

In advance of the guides, it is worth pointing out some basics.

 

This command needs an internet connection: it will go off and download the Docker image and the base OS it builds on. This is automatic, but it will take a while depending on your internet connection.

 

Each Dockerfile has a line in it starting with "FROM". In simple terms this is the OS the package runs on, so in the case of the above it is "FROM ubuntu:14.04".

 

In an ideal world you would want to have as few variations of this base as possible. The only cost is disk space, but the choice of base OS for this community will be a hot topic in the future.
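
For illustration only, here is what a minimal Dockerfile with that FROM line looks like; the package name is a made-up placeholder:

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y some-app    # "some-app" stands in for the real application
CMD ["some-app", "--foreground"]

Every container built from the same FROM line shares that base image on disk, which is why fewer variations means less space used.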

 

Now go play and realise how amazing it is.

Link to comment

Quick post here with my findings so far: moving from v6b4 to v6b6, my Windows 8.1 Pro VM running under Xen now won't boot; it crashes and goes into recovery mode. I'm currently attempting a "refresh" of the VM to see if I can get it kicked back into life. I'm assuming it's probably due to changes with the virtual hardware and thus Windows falls over; maybe the QEMU change to version 2.0 did it?

 

Good old Arch Linux VM boots fine though; no issues so far when moving from v6b4 to v6b6 using Xen as the hypervisor.

Link to comment

Quick post here with my findings so far: moving from v6b4 to v6b6, my Windows 8.1 Pro VM running under Xen now won't boot; it crashes and goes into recovery mode. I'm currently attempting a "refresh" of the VM to see if I can get it kicked back into life. I'm assuming it's probably due to changes with the virtual hardware and thus Windows falls over; maybe the QEMU change to version 2.0 did it?

 

Good old Arch Linux VM boots fine though; no issues so far when moving from v6b4 to v6b6 using Xen as the hypervisor.

 

Questions:

 

1. Are you still using xl create to start the VM (or from the webGUI), or is this an attempt with libvirt/virsh and a conversion to Domain XML?

2. By "refresh", do you mean reinstall?

 

We were able to get old Windows 8.1 and 7 VMs from beta 4/5a to boot up in Beta 6 without difficulty.  In addition, there is no new "virtual hardware" in use with the new Xen 4.4 build.  QEMU 2.0 can still emulate the i440fx chipset that Xen uses just the same.  Might be something else going wrong.  Let us know what happens after the "refresh" and we'll try to troubleshoot through it.

Link to comment

Is it possible to use a disk outside of the array/cache for Docker? I installed an SSD that I was planning to move my ArchVM image to, but never got around to it.

 

Could I format this as btrfs and use it for Docker? Am I likely to see a noticeable speed improvement using an SSD over a 1TB WD Black disk (my cache)?

 

I am thinking this could be easier than trying to move everything off my cache drive, reformat and move back (without screwing anything up).

I'm wondering the same thing. I have a 1TB WD Black that is currently my cache and planned to keep it that way. I too am adding an SSD to host my ArchVM and others eventually. It sounds from the Btrfs Quick-Start Guide like an SSD is preferred for Btrfs, so (1) can I keep the 1TB WD as my cache and add the SSD as my Btrfs drive for both Docker and my VMs? If yes to that question, (2) would it be wise to create two partitions on the SSD, one for Btrfs and one for Ext4 if that's what I should still use for my VMs?

 

Okay, it sounds like we will be able to do this with a non-cache drive, so then I would just need to determine if it's fine hosting my VMs on Btrfs...

 

Only if your appdata lives on a cache disk today, in which case, yes, you will need to move it off to format it with Btrfs.  However, if you use a new device for Btrfs, you can keep your Cache Drive on Reiserfs for now to just test out the new capabilities.  I'm working on posting a guide here in a sec on how to do this manually to a non-cache drive.
Link to comment

Is it possible to use a disk outside of the array/cache for Docker? I installed an SSD that I was planning to move my ArchVM image to, but never got around to it.

 

Could I format this as btrfs and use it for Docker? Am I likely to see a noticeable speed improvement using an SSD over a 1TB WD Black disk (my cache)?

 

I am thinking this could be easier than trying to move everything off my cache drive, reformat and move back (without screwing anything up).

I'm wondering the same thing. I have a 1TB WD Black that is currently my cache and planned to keep it that way. I too am adding an SSD to host my ArchVM and others eventually. It sounds from the Btrfs Quick-Start Guide like an SSD is preferred for Btrfs, so (1) can I keep the 1TB WD as my cache and add the SSD as my Btrfs drive for both Docker and my VMs? If yes to that question, (2) would it be wise to create two partitions on the SSD, one for Btrfs and one for Ext4 if that's what I should still use for my VMs?

 

Okay, it sounds like we will be able to do this with a non-cache drive, so then I would just need to determine if it's fine hosting my VMs on Btrfs...

 

Only if your appdata lives on a cache disk today, in which case, yes, you will need to move it off to format it with Btrfs.  However, if you use a new device for Btrfs, you can keep your Cache Drive on Reiserfs for now to just test out the new capabilities.  I'm working on posting a guide here in a sec on how to do this manually to a non-cache drive.

 

Short answer: YES!  You can format any non-array device in Beta 6 to use Btrfs using the Quick-Start Guide mentioned above and in the Announcements forum.  This can be SSD or non-SSD.  I will say this much regarding the "preference" for SSD specifically with respect to Docker and Btrfs:  Btrfs is an "SSD Aware" file system, meaning that it will optimize how it handles IO for SSDs compared to HDDs.  That said, we tested actually installing Docker to an HDD on Btrfs and performance was excellent there as well.  The real difference is in Spin Up / Spin Down.  Because HDDs have to spin up first, application launches can be delayed; SSDs don't suffer from this.  The actual read/write performance thanks to btrfs is really not a big deal with most applications because they are so small to begin with.
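
For the curious, Btrfs picks up its SSD optimizations from the kernel's rotational flag, and you can check or force that yourself (device names are examples; normally no action is needed):

cat /sys/block/sdX/queue/rotational    # 0 = SSD, 1 = spinning disk
mount -o ssd /dev/sdX1 /mnt/test       # force the ssd mount option if auto-detection gets it wrong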

Link to comment

Quick post here with my findings so far: moving from v6b4 to v6b6, my Windows 8.1 Pro VM running under Xen now won't boot; it crashes and goes into recovery mode. I'm currently attempting a "refresh" of the VM to see if I can get it kicked back into life. I'm assuming it's probably due to changes with the virtual hardware and thus Windows falls over; maybe the QEMU change to version 2.0 did it?

 

Good old Arch Linux VM boots fine though; no issues so far when moving from v6b4 to v6b6 using Xen as the hypervisor.

 

Questions:

 

1. Are you still using xl create to start the VM (or from the webGUI), or is this an attempt with libvirt/virsh and a conversion to Domain XML?

2. By "refresh", do you mean reinstall?

 

We were able to get old Windows 8.1 and 7 VMs from beta 4/5a to boot up in Beta 6 without difficulty.  In addition, there is no new "virtual hardware" in use with the new Xen 4.4 build.  QEMU 2.0 can still emulate the i440fx chipset that Xen uses just the same.  Might be something else going wrong.  Let us know what happens after the "refresh" and we'll try to troubleshoot through it.

 

I've tried starting the VM using xl create, and I've also got the VM registered with XenMan and tried starting it that way. It doesn't boot to the UI; if I connect via VNC I can see it attempt to boot, show the splash screen, and then reboot into recovery mode.

 

When I said refresh, I meant one of the recovery options in Windows 8.1, a mode called "Refresh"; this basically uninstalls all apps but leaves your files intact. This worked for me and the machine then boots, but obviously I have lost all settings, so it's pretty much a rebuild. Luckily I take regular backups of my VMs, so I'm going to blat the img file, restore it, and try instead to go into safe mode to see if I can get it to boot that way. If it does, I may try an uninstall of the PV drivers for a start; next on the list would be a sysprep to see if I can get it to redetect any hardware, just in case :-). I will let you know how I get on.

Link to comment

I upgraded to v6 from v5a without any problems. My Windows 7 VM works just fine. It automatically starts and stops perfectly when rebooting or shutting down unraid. I also used the webGUI to change my cache drive from rfs to btrfs without problems. Once that was done, I noticed the docker service shows as started.

 

Thanks for all the great work that has gone into this!

Link to comment

Quick post here with my findings so far: moving from v6b4 to v6b6, my Windows 8.1 Pro VM running under Xen now won't boot; it crashes and goes into recovery mode. I'm currently attempting a "refresh" of the VM to see if I can get it kicked back into life. I'm assuming it's probably due to changes with the virtual hardware and thus Windows falls over; maybe the QEMU change to version 2.0 did it?

 

Good old Arch Linux VM boots fine though; no issues so far when moving from v6b4 to v6b6 using Xen as the hypervisor.

 

Questions:

 

1. Are you still using xl create to start the VM (or from the webGUI), or is this an attempt with libvirt/virsh and a conversion to Domain XML?

2. By "refresh", do you mean reinstall?

 

We were able to get old Windows 8.1 and 7 VMs from beta 4/5a to boot up in Beta 6 without difficulty.  In addition, there is no new "virtual hardware" in use with the new Xen 4.4 build.  QEMU 2.0 can still emulate the i440fx chipset that Xen uses just the same.  Might be something else going wrong.  Let us know what happens after the "refresh" and we'll try to troubleshoot through it.

 

I've tried starting the VM using xl create, and I've also got the VM registered with XenMan and tried starting it that way. It doesn't boot to the UI; if I connect via VNC I can see it attempt to boot, show the splash screen, and then reboot into recovery mode.

 

When I said refresh, I meant one of the recovery options in Windows 8.1, a mode called "Refresh"; this basically uninstalls all apps but leaves your files intact. This worked for me and the machine then boots, but obviously I have lost all settings, so it's pretty much a rebuild. Luckily I take regular backups of my VMs, so I'm going to blat the img file, restore it, and try instead to go into safe mode to see if I can get it to boot that way. If it does, I may try an uninstall of the PV drivers for a start; next on the list would be a sysprep to see if I can get it to redetect any hardware, just in case :-). I will let you know how I get on.

 

Post your xl cfg file for the VM.  You may be using a deprecated instruction there that I can fix for you pretty quickly.

Link to comment

 

- slack: added packages:

  sqlite

 

Can someone confirm if the PHP in this version is compiled to access these libraries natively?

 

If so, it will go a long way towards helping with a filesystem inventory tool I've been working on.

I.e., like locate but with stored MD5sums and some metadata to tell if the file has changed since the last inventory review.

 

If PHP is not compiled directly with SQLite support, can it be in the next version?
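
A quick way to check from the console whether the bundled PHP was built with SQLite support (just a generic diagnostic, not specific to this release):

php -m | grep -i sqlite                          # lists sqlite3 / pdo_sqlite if compiled in
php -r 'var_dump(class_exists("SQLite3"));'      # true if the SQLite3 class is available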

Link to comment

...I will say this much regarding the "preference" for SSD specifically with respect to Docker and Btrfs:  Btrfs is an "SSD Aware" file system, meaning that it will optimize how it handles IO for SSDs compared to HDDs...The actual read/write performance thanks to btrfs is really not a big deal with most applications because they are so small to begin with.

 

Considering this, I have 2 SSDs that I can remove from machines I'm not using now, since I could combine them into the unRAID box  ;D ;D

 

One is a 64GB SATA2, and the other is a 128GB SATA3 drive.  It sounds like the speed difference between 3 Gb/s (SATA2) and 6 Gb/s (SATA3) is unlikely to be realized in any meaningful manner, so it probably comes down to size.  It sounds like the base image sharing is going to keep my space needs down enough that 64GB should be plenty for a Windows VM and several containers for things like SickRage and SAB.

 

Any thoughts on all this?  The 64GB drive is easier for me to get to, and leaves me the faster, larger drive for use in another machine, but I have no immediate plans/use for such a machine, so I can really use either drive for unRAID; I just don't want to 'waste' the good drive when the smaller/slower one will work just as well.

Link to comment

...I will say this much regarding the "preference" for SSD specifically with respect to Docker and Btrfs:  Btrfs is an "SSD Aware" file system, meaning that it will optimize how it handles IO for SSDs compared to HDDs...The actual read/write performance thanks to btrfs is really not a big deal with most applications because they are so small to begin with.

 

Considering this, I have 2 SSDs that I can remove from machines I'm not using now, since I could combine them into the unRAID box  ;D ;D

 

One is a 64GB SATA2, and the other is a 128GB SATA3 drive.  It sounds like the speed difference between 3 Gb/s (SATA2) and 6 Gb/s (SATA3) is unlikely to be realized in any meaningful manner, so it probably comes down to size.  It sounds like the base image sharing is going to keep my space needs down enough that 64GB should be plenty for a Windows VM and several containers for things like SickRage and SAB.

 

Any thoughts on all this?  The 64GB drive is easier for me to get to, and leaves me the faster, larger drive for use in another machine, but I have no immediate plans/use for such a machine, so I can really use either drive for unRAID; I just don't want to 'waste' the good drive when the smaller/slower one will work just as well.

 

You can use either.  I don't know how much of a difference you will notice day-to-day between the two.  You could also create a raid0 group in btrfs between the two SSDs and get the benefits of both devices.
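
For reference, a minimal command-line sketch of what a two-device Btrfs RAID0 pool looks like (device names are placeholders; the Quick-Start Guide covers the unRAID-specific steps):

mkfs.btrfs -f -d raid0 -m raid1 /dev/sdX /dev/sdY   # stripe data across both SSDs, mirror the metadata
mount /dev/sdX /mnt/btrfs-pool                      # mounting either member brings up the whole pool
btrfs filesystem show /mnt/btrfs-pool               # confirm both devices are in the pool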

Link to comment

I installed b6 earlier today, and had my ArchVM running for most of the day.  I just checked SAB, and it has been stuck on unpacking a file for over an hour, so I paused the download, but SAB didn't respond.  I tried going into settings and it had a problem loading the page, and now I can't see SAB at all in my browser.

 

So, I tried to open Arch in Putty to grep sab, but Arch is giving me a connection timed out error, and won't open in putty.

 

I checked the unRAID GUI, and it shows Arch is still running, and doesn't look unusual, but it's totally unresponsive right now.

 

I'm going to leave it alone for another 30 minutes or so, but wanted to report a possible issue, in case there is a way to 'fix' the VM without killing it in the GUI or, if that's not possible, with xl destroy in unRAID itself.

Link to comment

...I will say this much regarding the "preference" for SSD specifically with respect to Docker and Btrfs:  Btrfs is an "SSD Aware" file system, meaning that it will optimize how it handles IO for SSDs compared to HDDs...The actual read/write performance thanks to btrfs is really not a big deal with most applications because they are so small to begin with.

 

Considering this, I have 2 SSDs that I can remove from machines I'm not using now, since I could combine them into the unRAID box  ;D ;D

 

One is a 64GB SATA2, and the other is a 128GB SATA3 drive.  It sounds like the speed difference between 3 Gb/s (SATA2) and 6 Gb/s (SATA3) is unlikely to be realized in any meaningful manner, so it probably comes down to size.  It sounds like the base image sharing is going to keep my space needs down enough that 64GB should be plenty for a Windows VM and several containers for things like SickRage and SAB.

 

Any thoughts on all this?  The 64GB drive is easier for me to get to, and leaves me the faster, larger drive for use in another machine, but I have no immediate plans/use for such a machine, so I can really use either drive for unRAID; I just don't want to 'waste' the good drive when the smaller/slower one will work just as well.

 

You can use either.  I don't know how much of a difference you will notice day-to-day between the two.  You could also create a raid0 group in btrfs between the two SSDs and get the benefits of both devices.

Just to be clear, you are referring to Step 6 of the Btrfs Quick-Start Guide?

Link to comment

I installed b6 earlier today, and had my ArchVM running for most of the day.  I just checked SAB, and it has been stuck on unpacking a file for over an hour, so I paused the download, but SAB didn't respond.  I tried going into settings and it had a problem loading the page, and now I can't see SAB at all in my browser.

 

So, I tried to open Arch in Putty to grep sab, but Arch is giving me a connection timed out error, and won't open in putty.

 

I checked the unRAID GUI, and it shows Arch is still running, and doesn't look unusual, but it's totally unresponsive right now.

 

I'm going to leave it alone for another 30 minutes or so, but wanted to report a possible issue, in case there is a way to 'fix' the VM without killing it in the GUI or, if that's not possible, with xl destroy in unRAID itself.

 

Still no change; ArchVM (and SAB) is unresponsive, but it does appear to be active when I look at xl top in unRAID.  It shows it's actively downloading, and the CPU % is fluctuating, so it appears it's still running; I just can't get to it.

 

xentop - 16:54:04   Xen 4.4.0
3 domains: 1 running, 2 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 8264920k total, 8017292k used, 247628k free    CPUs: 4 @ 2793MHz
      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR  VBD_RSECT  VBD_WSECT SSID
     Arch5 --b---       3346    1.8    2097152   25.4    2098176      25.4     2    1 10467958  9019245    0        0        0        0          0          0    0
  Domain-0 -----r      16661   52.0    3681424   44.5   no limit       n/a     4    0        0        0    0        0        0        0          0          0    0
  windows8 --b---       1622   11.2    2094948   25.3    2098176      25.4     4    1    16172     1450    0        0        0        0          0          0    0

 

I just checked and my Windows8 VM is running fine, but it also cannot connect to SABnzbd (confirming it's not a wifi issue on my laptop).

 

Any ideas on how to restore ArchVM access, or should I just kill and restart it?
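
If it comes down to killing it from the unRAID console, the usual xl sequence looks something like this (the domain name is from the xentop output above; the cfg path is whatever yours is):

xl list                         # confirm the domain name/ID
xl destroy Arch5                # hard-stop the hung VM (equivalent to pulling the power)
xl create /path/to/Arch5.cfg    # boot it again from its cfg file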

Link to comment

Getting the exact same as you, justinchase: the Arch VM is locked, I cannot log in, and I cannot access the webUI of any apps running on the Arch VM.

 

Jonp - I seem to have managed to work around the earlier issue with the Windows VM; it had two boot options for some reason, I chose the GPLPV driver option and it booted OK. I'm more concerned with the total hang from the Arch VM.

 


 

Link to comment

I just killed it with the webGUI in unRAID (took a few tries), then restarted it, and it seemed to be fine, but SAB had lost its queue.  I added about 10 downloads back (and the one that was unpacking had started again), then it hung completely again.  It hung within about 5 minutes of restarting it.

 

I'll kill it and restart it again, and give it more memory, then immediately do pacman -Syyu as soon as it starts up, in case Xen 4.4 needs some updates or something.

 

:(

 

** I just ran xl console Arch5 and was greeted with a page full of...

 

[  297.067953] xen_netfront: xennet: skb rides the rocket: 19 slots
[  297.335237] xen_netfront: xennet: skb rides the rocket: 20 slots
[  304.803593] xen_netfront: xennet: skb rides the rocket: 20 slots
[  305.077816] xen_netfront: xennet: skb rides the rocket: 20 slots
[  305.320727] xen_netfront: xennet: skb rides the rocket: 20 slots
[  305.614141] xen_netfront: xennet: skb rides the rocket: 20 slots
[  305.984680] xen_netfront: xennet: skb rides the rocket: 19 slots
[  306.087856] xen_netfront: xennet: skb rides the rocket: 19 slots
[  308.217600] xen_netfront: xennet: skb rides the rocket: 19 slots
[  308.600587] xen_netfront: xennet: skb rides the rocket: 19 slots
[  308.970850] xen_netfront: xennet: skb rides the rocket: 19 slots
[  309.263899] xen_netfront: xennet: skb rides the rocket: 19 slots
[  314.827608] xen_netfront: xennet: skb rides the rocket: 19 slots
[  315.117789] xen_netfront: xennet: skb rides the rocket: 19 slots
[  317.566649] xen_netfront: xennet: skb rides the rocket: 19 slots
[  317.741618] xen_netfront: xennet: skb rides the rocket: 19 slots

 

Maybe I'll try running NFS shares again, instead of SMB.
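
For what it's worth, the "skb rides the rocket" messages are a known xen_netfront complaint about oversized offloaded packets; a workaround that gets suggested for it (untested here) is to turn off TSO/GSO on the guest's network interface:

ethtool -K eth0 tso off gso off    # inside the Arch guest; eth0 assumed to be the xen_netfront interface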

Link to comment

Someone post their ArchVM cfg file so I can see the settings being used.  I also only tested NFS with my ArchVM (not SMB).  Not sure if that should make a difference.

 

I specifically tested my ArchVM on multiple hosts with this release and had no problems.  Curious though whether you guys have a variant of my .cfg file.

Link to comment

name = "Arch5"
vcpus = '2'
memory = '2048'
maxmem = '4096'
vif = [ 'bridge=br0,mac=00:16:3E:A5:A5:A5' ]
disk = ['file:/mnt/cache/VM/Arch5/Arch5.img,xvda,w' ]
bootloader = "pygrub"
localtime = 1

 

I managed to get into the Arch5 console (Ctrl-C when looking at the rocket page), and was able to log in fine.

 

I could see SAB still running, then tried to run pacman -Syyu but got this...

 

[root@IronicsArchVM_v5 ~]# pacman -Syyu
:: Synchronising package databases...
error: failed retrieving file 'core.db' from mirror.us.leaseweb.net : Resolving timed out after 10556 milliseconds
error: failed retrieving file 'core.db' from mirror.de.leaseweb.net : Resolving timed out after 10521 milliseconds
^C

Interrupt signal received

 

Stopped SAB, tried again, same result.
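
Those errors are DNS resolution timing out rather than a package problem, so a couple of quick checks inside the guest would narrow down how much of the network path is affected (the addresses are just examples):

cat /etc/resolv.conf               # which resolver the VM is using
ping -c 3 8.8.8.8                  # raw connectivity, bypassing DNS
ping -c 3 mirror.us.leaseweb.net   # name resolution plus connectivity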

 

My fstab file...

 

#
# /etc/fstab: static file system information
#
# <file system> <dir>   <type>  <options>       <dump>  <pass>
# /dev/xvda1
UUID=93ec2c22-36c1-487c-a888-adde602a16fe       /               ext4           $

//media/adult /mnt/adult cifs auto,x-systemd.automount,guest,noperm,noserverino$
//media/backup /mnt/backup cifs auto,x-systemd.automount,guest,noperm,noserveri$
//media/documents /mnt/documents cifs auto,x-systemd.automount,guest,noperm,nos$
//media/downloads /mnt/downloads cifs auto,x-systemd.automount,guest,noperm,nos$
//media/music /mnt/music cifs auto,x-systemd.automount,guest,noperm,noserverino$
//media/photos /mnt/photos cifs auto,x-systemd.automount,guest,noperm,noserveri$
//media/video /mnt/video cifs auto,x-systemd.automount,guest,noperm,noserverino$

#//media:/mnt/user/adult /mnt/adult nfs auto,x-systemd.automount,guest,noperm,n$
#//media:/mnt/user/backup /mnt/backup nfs auto,x-systemd.automount,guest,noperm$
#//media:/mnt/user/downloads /mnt/downloads nfs auto,x-systemd.automount,guest,$
#//media:/mnt/user/documents /mnt/documents nfs auto,x-systemd.automount,guest,$

Link to comment
