unRAID Server Release 6.0-rc6a-x86_64 Available



I don't run unRAID in a VM, so this is a guess ... but I suspect what is happening is that unRAID is updating just fine => but it's updating the files on the flash drive, NOT on your VMDK.

 

If you manually copied the files from the flash to your VMDK after the update, I think you'd find it was in fact updated.

 

Link to comment

... alternatively you could set the VM to boot from the flash drive.  If I recall the discussions on this, that works fine but is just slower than using a VMDK -- is that correct?  [Or is there an issue getting ESXi to boot from the flash?]

 

As far as I know, you can only boot from flash with PLOP or something like that...

and this approach is much slower than a VMDK boot.

As I mentioned in another thread, use the Unassigned Devices plugin to mount the VMDK and copy the files over after an update - no need to mount it outside the unRAID VM anymore :)
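
For reference, the copy step might look roughly like this - a minimal sketch, assuming the Unassigned Devices plugin (or a manual loop mount) has exposed the boot VMDK's filesystem at /mnt/disks/vmdk and the flash is mounted at /boot as usual:

# copy the freshly updated unRAID 6.0 boot files from the flash to the VMDK copy
cp /boot/bzimage /boot/bzroot /mnt/disks/vmdk/
# carry over the boot menu too, if it changed (this path is an assumption)
cp /boot/syslinux/syslinux.cfg /mnt/disks/vmdk/syslinux/
# flush writes before unmounting the VMDK
sync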

Link to comment

 

You are correct. To update, I have been mounting the VMDK to a different VM and updating the files.  You can use PLOP and boot directly off the USB, but it is slower.  I may have to weigh that out...

 

You can use another plugin, Unassigned Devices, to mount the VMDK file and copy the updated files over. No explicit need for a VM, I believe.

Link to comment

 

You are correct. To update, I have been mounting the VMDK to a different VM and updating the files.  You can use PLOP and boot directly off the USB, but it is slower.  I may have to weigh that out...

 

You can use another plugin, Unassigned Devices, to mount the VMDK file and copy the updated files over. No explicit need for a VM, I believe.

 

Just out of curiosity, how MUCH slower is it to simply boot from the USB?  I'd think actually booting the array is a fairly infrequent occurrence, and I'm wondering if the convenience of the update button might make it worthwhile to simply live with slower boots.

 

Link to comment

 

Just out of curiosity, how MUCH slower is it to simply boot from the USB?  I'd think actually booting the array is a fairly infrequent occurrence, and I'm wondering if the convenience of the update button might make it worthwhile to simply live with slower boots.

 

I haven't tested this, but if I remember correctly from other threads, a PLOP boot may take 5 or more minutes?

Link to comment

BTW...just be aware that LT has made the decision to NOT officially support virtualized unRAID instances:  http://lime-technology.com/forum/index.php?topic=40564.0

 

John

It's always been that way. They try not to break unraid as a guest, but when push comes to shove, they will not allocate extra time to sorting out issues that may appear. After things have calmed down a little after the 6.0 release, and they are working on 6.01, they will probably be a little more willing to make changes if you can tell them exactly what needs to be done to fix issues you are having. It will most likely be on the community to sort out the issue and what needs to be done, then limetech will make the changes if it doesn't interfere with bare metal usage.

 

At least that's the way it's worked in the past, I'd wait a month or so and bring it up with Tom to find out if his position on unraid as a guest has changed.

Link to comment

Fantastic!

 

I have not even tried unRAID 6.0 yet... I will upgrade when it is officially released.

 

Does Docker acknowledge there is a DNS bug? I can't find anything in their bug list.

 

There are two bugs, both having to do with their "auto update" feature for the container /etc/resolv.conf file:

 

1. If the docker daemon is started after the local resolv.conf file is updated, containers don't get updated.  For example:

 

update /etc/resolv.conf  (eg via dhcpcd obtaining IP lease)

start docker daemon

start docker containers

 

In the above sequence, the changes in /etc/resolv.conf will not be reflected in any containers, whether they're stopped, started, restarted, etc. - though curiously they are if you 'run' (create a new container).  Worse is this:

 

update /etc/resolv.conf  (eg via dhcpcd obtaining IP lease)

start docker daemon

start docker containers

stop containers, stop docker daemon

update /etc/resolv.conf

start docker daemon

start docker containers  => results in docker creating /etc/resolv.conf in containers with google nameservers specified

 

It's the above sequence that was more-or-less the default behavior on unRaid.  There are race conditions in there, which is why some users saw the issue and some didn't.  Also, the Google DNS servers are perfectly fine for most users, so they would never see any problems, and it wouldn't be obvious that containers were possibly using different nameservers than the unRaid host.
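
A hedged repro sketch of that second sequence, typed at the unRAID console (the rc.docker script path, the container name, and the DNS addresses are placeholders/assumptions, not taken from the report above):

echo "nameserver 192.168.1.1" > /etc/resolv.conf     # initial DNS config
/etc/rc.d/rc.docker start                            # daemon starts after resolv.conf was written
docker start mycontainer
docker stop mycontainer
/etc/rc.d/rc.docker stop
echo "nameserver 192.168.1.254" > /etc/resolv.conf   # DNS changes while docker is down
/etc/rc.d/rc.docker start
docker start mycontainer
docker exec mycontainer cat /etc/resolv.conf         # per the bug, may show Google's 8.8.8.8 / 8.8.4.4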

 

Finally we determined:

 

start docker daemon

update /etc/resolv.conf

start docker containers

 

Works.  But since we can't start docker until the array is started (because the docker image file is out on the array somewhere), we did this:

 

array started

start docker daemon

echo -n '#' >>/etc/resolv.conf    [makes docker think it's been updated]

start docker containers
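
Rendered as shell, that workaround might look roughly like this (a minimal sketch; the rc.docker script name and the blanket 'docker start' are illustrative assumptions, not the actual unRAID startup code):

# runs after the array has been mounted (the docker image lives on the array)
/etc/rc.d/rc.docker start            # bring up the docker daemon
echo -n '#' >> /etc/resolv.conf      # append a harmless comment char so docker's file
                                     # watcher sees a "change" and re-syncs the containers
docker start $(docker ps -aq)        # then start the containers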

 

The above fix was put in -rc6.  BUT...

 

2. Then we discovered that when docker updated /etc/resolv.conf in containers, it changed the permissions from 0644 (so any user could read) to 0600 (so only root can read) in the process.  The way this manifested was this:

a) a lot of containers worked OK; only those whose applications run as something other than root could no longer resolve DNS (because the /etc/resolv.conf file was unreadable)

b) to add to the confusion, when watching the logs, container image updates worked just fine (because they run as root, haha).

c) from the unRaid console you could type 'docker exec <container> cat /etc/resolv.conf' just fine and it would work, but this is because the command is executing as 'root' in the container - doh!

 

This all was very confusing until we realized the permissions on resolv.conf were being changed (you have to do an 'ls -al' to see them, not just an 'ls').  Once that was confirmed, it took another half day to figure out how the permissions were being changed.  The bug in docker is that they use a temp file to change resolv.conf in the container, and that temp file was getting created with 0600 permissions.
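
For anyone who wants to check their own containers, a quick sketch using standard commands (the container name is a placeholder; the chmod is only a hypothetical stop-gap until a patched docker is in place):

# show the real permissions inside the container: -rw------- is the broken state, -rw-r--r-- is correct
docker exec <container> ls -al /etc/resolv.conf
# temporary manual workaround: make the file world-readable again
docker exec <container> chmod 0644 /etc/resolv.conf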

 

In looking at docker 1.7-rc3 (their latest as of this post), they have indeed fixed the permissions issue, with nary a comment or anything else pointing to any known issues (sneaky guys).  In fact, we took their "fix" and patched 1.6.2 to fix this bug.

 

As for the 'inotify' sequence bug - we don't know if that's fixed in 1.7.  If not, we'll certainly bring it to their attention :)

Link to comment

BTW...just be aware that LT has made the decision to NOT officially support virtualized unRAID instances:  http://lime-technology.com/forum/index.php?topic=40564.0

 

John

It's always been that way. They try not to break unraid as a guest, but when push comes to shove, they will not allocate extra time to sorting out issues that may appear. After things have calmed down a little after the 6.0 release, and they are working on 6.01, they will probably be a little more willing to make changes if you can tell them exactly what needs to be done to fix issues you are having. It will most likely be on the community to sort out the issue and what needs to be done, then limetech will make the changes if it doesn't interfere with bare metal usage.

 

At least that's the way it's worked in the past, I'd wait a month or so and bring it up with Tom to find out if his position on unraid as a guest has changed.

 

Exactly the way I think it should be.  It's nice that the guys don't just cut someone off if they do something a little different that better suits their needs.  :)

Link to comment

 

 

BTW...just be aware that LT has made the decision to NOT officially support virtualized unRAID instances:  http://lime-technology.com/forum/index.php?topic=40564.0

 

John

It's always been that way. They try not to break unraid as a guest, but when push comes to shove, they will not allocate extra time to sorting out issues that may appear. After things have calmed down a little after the 6.0 release, and they are working on 6.01, they will probably be a little more willing to make changes if you can tell them exactly what needs to be done to fix issues you are having. It will most likely be on the community to sort out the issue and what needs to be done, then limetech will make the changes if it doesn't interfere with bare metal usage.

 

At least that's the way it's worked in the past, I'd wait a month or so and bring it up with Tom to find out if his position on unraid as a guest has changed.

 

You're pretty spot on with our stance on this. Guest support for unRAID as a VM really comes down to virtual driver support in the kernel (e.g. VMware drivers, Hyper-V drivers, etc.).  If we were to drop support for this completely, we'd remove those drivers entirely.

Link to comment

 

 

BTW...just be aware that LT has made the decision to NOT officially support virtualized unRAID instances:  http://lime-technology.com/forum/index.php?topic=40564.0

 

John

It's always been that way. They try not to break unraid as a guest, but when push comes to shove, they will not allocate extra time to sorting out issues that may appear. After things have calmed down a little after the 6.0 release, and they are working on 6.01, they will probably be a little more willing to make changes if you can tell them exactly what needs to be done to fix issues you are having. It will most likely be on the community to sort out the issue and what needs to be done, then limetech will make the changes if it doesn't interfere with bare metal usage.

 

At least that's the way it's worked in the past, I'd wait a month or so and bring it up with Tom to find out if his position on unraid as a guest has changed.

 

You're pretty spot on with our stance on this. Guest support for unRAID as a VM really comes down to virtual driver support in the kernel (e.g. VMware drivers, Hyper-V drivers, etc.).  If we were to drop support for this completely, we'd remove those drivers entirely.

 

I was 99% sure this was the answer and agree with it, but a man can have his wish-list...

Link to comment

 

 

BTW...just be aware that LT has made the decision to NOT officially support virtualized unRAID instances:  http://lime-technology.com/forum/index.php?topic=40564.0

 

John

It's always been that way. They try not to break unraid as a guest, but when push comes to shove, they will not allocate extra time to sorting out issues that may appear. After things have calmed down a little after the 6.0 release, and they are working on 6.01, they will probably be a little more willing to make changes if you can tell them exactly what needs to be done to fix issues you are having. It will most likely be on the community to sort out the issue and what needs to be done, then limetech will make the changes if it doesn't interfere with bare metal usage.

 

At least that's the way it's worked in the past, I'd wait a month or so and bring it up with Tom to find out if his position on unraid as a guest has changed.

 

You're pretty spot on with our stance on this. Guest support for unRAID as a VM really comes down to virtual driver support in the kernel (e.g. VMware drivers, Hyper-V drivers, etc.).  If we were to drop support for this completely, we'd remove those drivers entirely.

 

I was 99% sure this was the answer and agree with it, but a man can have his wish-list...

 

+1 to all the above.

 

I will stay with unRAID as a guest for the the following reasons:

 

1)  At this point there is no way to run VMs without the array being started.  My router is pfSense.  I'm not going to invest in separate hardware just for that - nor do I want to go back to a blue box.  I also have VMs that run my SageTV service and other functions, which makes shutting them all down a PITA.

 

2)  While the webgui is very much improved, there are still too many reports of it locking up, requiring a reboot of unRAID.  What is needed is a way to restart the webgui.  I don't know if this is even possible.  I just don't have this problem with ESXi.

 

3)  I have no need to pass-thru GPUs.  I am not a gamer.  RDP works just fine for my needs.

 

And as all of us using ESXi know, booting our system bare metal can be done easily, should the need arise.  ESXi is VERY stable.  About the only thing forcing a reboot is a power cut that exceeds my UPS battery capacity.

 

Link to comment
I'd have thought the total # of slots would be limited to the # of assignable devices.  It actually works that way, but it always totals 25 for a Pro key.  i.e., if you drop the # of drive slots, you can increase the # of cache slots, but the total you can set for the two combined never exceeds 25.  In other words, it works exactly as I'd expect it to work for Pro ... so clearly the logic is there to ensure the total doesn't exceed the appropriate max (25 in this case).  On the surface, it seems like the "max" simply needs to be set to the appropriate # for the key.

Just as a really stupid OT and basic question: is it currently the case that if you have 24 drives in your array, you can only have one cache drive?  And if you wanted more (or a cache plus a VM/docker drive), you'd need to lower the number of array drives?  I hadn't thought about it, since my case couldn't hold that many anyhow, but I'm switching to a case with 24 front bays and 2 internal soon (guessing it'll never be at capacity either, but you never know).

Link to comment

I'd have thought the total # of slots would be limited to the # of assignable devices.  It actually works that way, but it always totals 25 for a Pro key.  i.e., if you drop the # of drive slots, you can increase the # of cache slots, but the total you can set for the two combined never exceeds 25.  In other words, it works exactly as I'd expect it to work for Pro ... so clearly the logic is there to ensure the total doesn't exceed the appropriate max (25 in this case).  On the surface, it seems like the "max" simply needs to be set to the appropriate # for the key.

Just as a really stupid OT and basic question: is it currently the case that if you have 24 drives in your array, you can only have one cache drive?  And if you wanted more (or a cache plus a VM/docker drive), you'd need to lower the number of array drives?  I hadn't thought about it, since my case couldn't hold that many anyhow, but I'm switching to a case with 24 front bays and 2 internal soon (guessing it'll never be at capacity either, but you never know).

Yes - that is correct.

However, there is nothing to stop you from having attached drives that are not part of unRAID on the Pro license.  I have an external SSD that I use for my VMs.  I am hoping that a post-6.0 release will add the ability for VMs to be started/stopped at system start/stop, so that I can avoid manually starting/stopping VMs that need to survive the array being started/stopped.
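
In the meantime, one way to approximate that from a startup script is libvirt's own autostart flag - a hedged sketch only, since the domain name is a placeholder and I'm assuming unRAID leaves libvirt's autostart setting alone:

virsh autostart pfsense            # start this domain whenever libvirtd comes up
virsh shutdown pfsense             # and shut it down cleanly before stopping the array
virsh autostart --disable pfsense  # undo the flag if unRAID's own VM handling takes over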

 

It is not quite as easy on the lower-tier licenses, as devices attached to the unRAID server count towards the license limits regardless of whether they are used by unRAID.  The Pro license imposes no attached-devices limit, and the limit of 25 devices only applies to those used by unRAID.

Link to comment

Hey everyone.

 

I read somewhere that smb-extra.conf has been ditched from newer releases. I use some options to get offline caching in Windows to work:

[global]
    oplocks = yes
    level2 oplocks = yes
    kernel oplocks = no
    map archive = yes
    map system = yes
    map hidden = yes

 

Where can I put those options now?

Link to comment

Hey everyone.

 

I read somewhere that smb-extra.conf has been ditched from newer releases. I use some options to get offline caching in Windows to work:

[global]
    oplocks = yes
    level2 oplocks = yes
    kernel oplocks = no
    map archive = yes
    map system = yes
    map hidden = yes

 

Where can I put those options now?

 

This is the smb.conf for version 6.0-rc6a.  I certainly see the 'include = /boot/config/smb-extra.conf' line in the smb.conf file.  (It might be in the wrong spot in smb.conf -- I would think it should be the last "include" so the user would have complete control over the Samba parameters, but as far as I know no one using that option has complained.)

 

        # configurable identification
        include = /etc/samba/smb-names.conf

        # log stuff only to syslog
        log level = 0
        syslog = 0
        syslog only = Yes

        # we don't do printers
        show add printer wizard = No
        disable spoolss = Yes
        load printers = No
        printing = bsd
        printcap name = /dev/null

        # misc.
        max protocol = SMB3
        invalid users = root
        unix extensions = No
        wide links = Yes
        use sendfile = Yes
        aio read size = 0
        aio write size = 0

        # ease upgrades from Samba 3.6
        acl allow execute always = Yes

        # hook for user-defined samba config
        include = /boot/config/smb-extra.conf

        # auto-configured shares
        include = /etc/samba/smb-shares.conf
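
So those options should carry over if they're placed in that hook file on the flash - i.e. put the [global] block quoted above into /boot/config/smb-extra.conf.  A hedged way to apply it from the console (appending with a heredoc and restarting Samba via the usual Slackware rc script are assumptions - editing the file by hand and restarting the array, or rebooting, should work just as well):

# append the custom [global] options to the user hook file on the flash
cat >> /boot/config/smb-extra.conf <<'EOF'
[global]
    oplocks = yes
    level2 oplocks = yes
    kernel oplocks = no
    map archive = yes
    map system = yes
    map hidden = yes
EOF
# reload Samba so it re-reads smb.conf and the include
/etc/rc.d/rc.samba restart
# dump the parsed configuration to confirm the extra options were picked up
testparm -s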

Link to comment
Yes - that is correct.

However, there is nothing to stop you from having attached drives that are not part of unRAID on the Pro license.  I have an external SSD that I use for my VMs.  I am hoping that a post-6.0 release will add the ability for VMs to be started/stopped at system start/stop, so that I can avoid manually starting/stopping VMs that need to survive the array being started/stopped.

Thanks for that.  The completist in me would like the option of an array of 24 drives plus cache pool/docker/VM of 2.  The realist will never fill that up though, so it's no big deal.

Link to comment

Someone must have rubbed Jon's head for good luck!  :D

 

EDIT:  RE: my comment above...Sparkly and CHBMB...try to keep your minds out of the gutter!

 

I suspect if LimeTech want some more good luck, Sparkly & I would be prime candidates  ;D  The only thing my profile pic isn't accurate on is the expertise bit...  ::)

 

I'm behaving now, I'd cry if Tom banned me...  :-X

Link to comment
This topic is now closed to further replies.