unRAID Server Release 6.2.0-rc3 Available



Definitely agree this should be highlighted => that's a fairly major change in UnRAID's behavior ... one of the BIG advantages of this nifty little guy has always been that you could just move all your disks to a new system; boot to the flash drive ... and life was good  :)

 

... for that to be no longer the case -- even on the same make/model motherboard -- is a pretty big change in behavior !!

 

That is not entirely true, see my earlier remark in this thread.

 

People with systems containing a single ethernet port (which I believe will be the majority) have NO issues at all.

 

With the new advanced settings it is mandatory that port assignments stay persistent across system boots. Especially when different ports are configured for different access. Hence the rules file.

 

When you change hardware AND you have more than one ethernet port AND the total number of ports does not change, then the network-rules file needs to be deleted.

 

Link to comment

Since the introduction of the advanced network configuration, unRAID is no longer hardware-agnostic regarding NICs: replacing a motherboard (it should be the same when replacing just the NIC) results in no network.

 

This happens even if the replacement board is the exact same model, as in the example below.

 

I really like and need the advanced net config, but if it's not possible to make unRAID detect a NIC change I believe it should be mentioned in the release notes that network-rules.cfg needs to be deleted after a NIC change.
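
For anyone hitting this in the meantime, the workaround amounts to removing the generated rules file from the flash drive and rebooting so it gets recreated. A minimal sketch, assuming the usual unRAID flash layout with the file at /boot/config/network-rules.cfg:

    # remove the stale interface-to-MAC mapping; it is regenerated on the next boot
    rm /boot/config/network-rules.cfg
    reboot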

 

The file "network-rules.cfg" is only created when more than one NIC is present in your system and is used to make interface assignments persistent across system boots.

 

There is a detection mechanism in place which deletes the file when a change in the number of NICs occurs.

 

As you observed, this is not 100% foolproof. If a system has multiple NICs and these are changed without the number of NICs changing, it goes undetected. One port will still become eth0, though, and it might be necessary to move cables to restore connectivity. This will depend on how the network is configured (bonding, bridging).

Would it maybe be better to have the change detection rely on not finding any of the original MAC addresses instead of just the number of NICs?

 

I would need to think of a way this can be improved (if possible).

 

The NIC detection is done by udev; any change in MAC addresses would somehow need to be communicated to udev, or other detection rules for udev would need to be considered/created. Open to ideas :)
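
To make the idea concrete, here is a rough bash sketch of MAC-based change detection. This is not unRAID's actual code; it assumes the rules file lives at /boot/config/network-rules.cfg and that it contains the original MAC addresses (as udev persistent-net style rules do):

    #!/bin/bash
    # Sketch: delete the rules file if none of the MACs it references are still present.
    RULES=/boot/config/network-rules.cfg            # assumed location
    [ -f "$RULES" ] || exit 0

    # MACs currently present in the system
    current=$(cat /sys/class/net/*/address 2>/dev/null | tr 'A-F' 'a-f')

    # MACs recorded in the rules file (any xx:xx:xx:xx:xx:xx token)
    recorded=$(grep -oE '([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}' "$RULES" | tr 'A-F' 'a-f' | sort -u)

    match=0
    for mac in $recorded; do
        echo "$current" | grep -q "$mac" && match=1
    done

    # if not a single recorded MAC is present, the NICs were swapped out entirely
    [ $match -eq 0 ] && rm -f "$RULES"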

 

Link to comment

Hello,

 

I don't know if the following problem started with 6.2-rc2 or rc3:



I have a Windows 10 VM which is running quite nicely with GPU passthrough and everything.

If I force quit the VM via the Unraid WebGUI and start it again, the VM hangs at the TianoCore UEFI. I can't do anything but force quit the VM again.

The VM will only run again after a complete Unraid server restart.

Anyone else experiencing this issue?

tower-diagnostics-20160807-2357.zip

Link to comment

Definitely agree this should be highlighted => that's a fairly major change in UnRAID's behavior ... one of the BIG advantages of this nifty little guy has always been that you could just move all your disks to a new system; boot to the flash drive ... and life was good  :)

 

... for that to be no longer the case -- even on the same make/model motherboard -- is a pretty big change in behavior !!

 

That is not entirely true, see my earlier remark in this thread.

 

People with systems containing a single ethernet port (which I believe will be the majority) have NO issues at all.

 

With the new advanced settings it is mandatory that port assignments stay persistent across system boots. Especially when different ports are configured for different access. Hence the rules file.

 

When you change hardware AND you have more than one ethernet port AND the total number of ports does not change, then the network-rules file needs to be deleted.

 

Ahh => that's nowhere near as bad as I had interpreted it from Johnnie's comment.  Most users would never encounter this issue.

 

Link to comment

Hello,

 

I don't know if the following problem started with 6.2-rc2 or rc3:



I have a Windows 10 VM which is running quite nicely with GPU passthrough and everything.

If I force quit the VM via the Unraid WebGUI and start it again, the VM hangs at the TianoCore UEFI. I can't do anything but force quit the VM again.

The VM will only run again after a complete Unraid server restart.

Anyone else experiencing this issue?

 

If you shut down the Windows 10 VM normally (not force stop), are you able to start the VM up again and get past UEFI boot?

 

When the VM hangs at the Tianocore UEFI: force stop again, edit the VM, uncheck all but 1 CPU core, Save and try launching the VM again.  Does it get past Tianocore UEFI at that point?

Link to comment

This is somewhat related to my request (Disable auto array start when there's a disk missing), but IMO it's more a defect. I was upgrading the parity disk on my backup server, single parity only; the array started without the old parity disk, as expected in v6.2, but there was no notification that parity was missing. The server was running for approx. 3 minutes, I then stopped, assigned the new parity and started the array, and immediately got the parity sync/data rebuild notifications.

tower3-diagnostics-20160809-2007.zip

Link to comment

This is somewhat related to my request (Disable auto array start when there's a disk missing), but IMO it's more a defect. I was upgrading the parity disk on my backup server, single parity only; the array started without the old parity disk, as expected in v6.2, but there was no notification that parity was missing. The server was running for approx. 3 minutes, I then stopped, assigned the new parity and started the array, and immediately got the parity sync/data rebuild notifications.

Thanks for pointing that out, we'll take a look at it.

Link to comment

@afoard, can you post your diagnostics?

 

Diags attached. Sorry, I split them up into two files since it was larger than 192KB. Wasn't sure how else to send them.

 

After examining what caused this diag and others to be too large, I'm adjusting my recommendations for diag space control.  We only need the first section of the first syslog and perhaps the last 200 lines of the whole syslog.  So while always having the last 200 is convenient, it's not important, and we've already seen several cases where it was the last straw that caused the size to be a bit too big.

 

* We only need the last 200 if and only if we truncate the actual syslog file, no others.  We don't need the last 200 of any 'syslog.?' files, and we don't need the last 200 from 'syslog' if we don't truncate it.

* I think we should also decrease the truncated size to 2MB (or even 1MB), down from 3MB.

 

These 2 changes should help a lot, but you could also delete any intermediate syslog files.  If there's a syslog, syslog.1, and syslog.2, syslog.1 can be deleted completely, syslog.2 truncated if huge, and syslog truncated if huge (and if truncated grab the last 200 first).

 

Suggested logic -

maxsize = 2MB or 1MB
i = 5;
found = 0;

do 
   if syslog.i exist
      if found
         delete syslog.i
      else
         found = 1;
         if size of syslog.i > maxsize
            truncate syslog.i
while (--i);

if size of syslog > maxsize
   tail 200 syslog
   truncate syslog

 

... or any simpler variation thereof ...
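
For what it's worth, a bash rendering of that suggested logic, as a sketch only (the paths and the GNU stat/truncate/tail calls are assumptions, and this is not meant as the actual diagnostics code):

    #!/bin/bash
    # Keep only the oldest rotated syslog (truncated if huge), drop the rest,
    # then truncate the live syslog after grabbing its last 200 lines.
    maxsize=$((2 * 1024 * 1024))      # 2MB cap (could be 1MB)
    found=0

    for i in 5 4 3 2 1; do
        f="/var/log/syslog.$i"
        [ -e "$f" ] || continue
        if [ $found -eq 1 ]; then
            rm -f "$f"                # intermediate rotated logs: delete
        else
            found=1                   # oldest rotated log: keep its beginning
            [ $(stat -c %s "$f") -gt $maxsize ] && truncate -s $maxsize "$f"
        fi
    done

    if [ $(stat -c %s /var/log/syslog) -gt $maxsize ]; then
        tail -n 200 /var/log/syslog > /var/log/syslog.last200    # grab the tail first
        truncate -s $maxsize /var/log/syslog
    fi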

 

The same logic applies to docker.log and libvirtd.log - only keep the last 200 if we have to truncate them. However, for both, the max size for truncation could probably be 200KB or less. Anything more is all garbage.

Link to comment

@RobJ

 

Thanks for the great recommendations.

 

I changed the creation of the log files as per your recommendation, with a maximum size of 2MB for syslogs and 1MB for docker/vm logs. Perhaps a bit more conservative than your proposal, but it should already give good benefits in case many large log files exist.

 

Sorry, deletion of any files is not included. My view is that diagnostics should be a passive tool: it observes and reports but doesn't make any active changes.

 

Link to comment

 

Sorry, deletion of any files is not included. My view is that diagnostics should be a passive tool: it observes and reports but doesn't make any active changes.

 

I agree with that.

 

The deletion of superfluous syslogs should be in a tool like Fix Common Problems and not in Diagnostics.

 

Link to comment

@RobJ

 

Thanks for the great recommendations.

 

I changed the creation of the log files as per your recommendation, with a maximum size of 2MB for syslogs and 1MB for docker/vm logs. Perhaps a bit more conservative than your proposal, but it should already give good benefits in case many large log files exist.

 

Sorry, deletion of any files is not included. My view is that diagnostics should be a passive tool: it observes and reports but doesn't make any active changes.

How about a compromise? Could the syslogs be compressed after they are rotated? That way the RAM doesn't get filled nearly as fast on a misbehaving system.
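
As a rough illustration of that compromise (purely a sketch, not how unRAID's log rotation is actually wired up):

    # compress already-rotated syslogs so they take far less of the RAM-backed
    # log filesystem; the live syslog is left untouched
    for f in /var/log/syslog.[1-9]; do
        [ -e "$f" ] && gzip -f "$f"
    done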
Link to comment

My cache drive recently failed and as a temporary solution I have added a USB3 external drive to the array as a cache drive until I can replace the cache with an internal drive.

 

On recreating my VM templates I noticed that the USB3 drive is available for assignment to a VM.

 

Seeing as it's assigned to the array (not sure if it's available if assigned as an actual array drive instead of cache), this may be a little bit risky in case someone does assign it to a VM in error.

 

I haven't tested what would happen yet but I hope it will reject assignment to the VM without detriment to the array.

 

Link to comment

Just upgraded to RC3 on my main server from 6.1.9. Leaving my backup server on 6.1.9 until 6.2 is FINAL. I've not seen 6.2 since the very first BETA.

 

I decided to back up my original appdata folder and format my cache pool before upgrading - meaning I have started again with my docker containers / configuration / restoring VMs. I'm glad I did this as it's forcing me to get familiar with the changes.

 

I have not added my second parity drive yet, but I have to say that I am pleasantly surprised with how quickly the server booted (in comparison with 6.1.9) and the changes I have seen so far.

 

Upgrade was simple. No issues at all so far.

 

Good work LT!  :)

Link to comment

Hello,

 

I don't know if the following problem started with 6.2-rc2 or rc3:



I have a Windows 10 VM which is running quite nicely with GPU passthrough and everything.

If I force quit the VM via the Unraid WebGUI and start it again, the VM hangs at the TianoCore UEFI. I can't do anything but force quit the VM again.

The VM will only run again after a complete Unraid server restart.

Anyone else experiencing this issue?

 

If you shut down the Windows 10 VM normally (not force stop), are you able to start the VM up again and get past UEFI boot?

 

When the VM hangs at the Tianocore UEFI: force stop again, edit the VM, uncheck all but 1 CPU core, Save and try launching the VM again.  Does it get past Tianocore UEFI at that point?

 

I just noticed that my Windows VM won't shut down at all... It just keeps saying "Shutting down". I thought maybe this problem is Windows Update related, so I let the VM "Shut Down". The next morning I checked the machine and it was still "Shutting Down".

I tried to boot with 1 core, no difference. Maybe it is a Windows 10 related issue?

Link to comment

Hello,

 

I don't know if the following problem started with 6.2-rc2 or rc3:



I have a Windows 10 VM which is running quite nicely with GPU passthrough and everything.

If I force quit the VM via the Unraid WebGUI and start it again, the VM hangs at the TianoCore UEFI. I can't do anything but force quit the VM again.

The VM will only run again after a complete Unraid server restart.

Anyone else experiencing this issue?

 

If you shut down the Windows 10 VM normally (not force stop), are you able to start the VM up again and get past UEFI boot?

 

When the VM hangs at the Tianocore UEFI: force stop again, edit the VM, uncheck all but 1 CPU core, Save and try launching the VM again.  Does it get past Tianocore UEFI at that point?

 

I just noticed that my Windows VM won't shut down at all... It just keeps saying "Shutting down". I thought maybe this problem is Windows Update related, so I let the VM "Shut Down". The next morning I checked the machine and it was still "Shutting Down".

I tried to boot with 1 core, no difference. Maybe it is a Windows 10 related issue?

Do you have the latest virtio drivers installed? Mine shuts down fine

Link to comment

Upgraded to 6.2 RC3 and I get the following error on my VM screen. I only have one VM (Windows 10) and it starts and runs fine, but I am not able to edit or make any modifications to the VM. I can see a new path was added with the update and the error looks to be referencing the new path, but I'm not sure what I need to do to correct the problem.

 

Warning: libvirt_domain_xml_xpath(): namespace warning : xmlns: URI unraid is not absolute in /usr/local/emhttp/plugins/dynamix.vm.manager/classes/libvirt.php on line 936 Warning: libvirt_domain_xml_xpath():

 

Also noticed that everything on my dashboard is arranged crazy and I cannot click on anything. See attached photo.

2016-08-12_16-49-05.jpg.569a00bec2ba6ffe8c0534d4a7c0cd7c.jpg

Link to comment

My unRAID just started the array automatically even though my parity disks were missing, which killed my parity information that needs to be rebuilt now.

The automatic mounting of the disks should only be done when everything is fine, nothing is missing and nothing has changed its IDs, shouldn't it? :)

Link to comment

Hello,

 

I don't know if the following problem started with 6.2-rc2 or rc3:



I have a Windows 10 VM which is running quite nicely with GPU passthrough and everything.

If I force quit the VM via the Unraid WebGUI and start it again, the VM hangs at the TianoCore UEFI. I can't do anything but force quit the VM again.

The VM will only run again after a complete Unraid server restart.

Anyone else experiencing this issue?

 

If you shut down the Windows 10 VM normally (not force stop), are you able to start the VM up again and get past UEFI boot?

 

When the VM hangs at the Tianocore UEFI: force stop again, edit the VM, uncheck all but 1 CPU core, Save and try launching the VM again.  Does it get past Tianocore UEFI at that point?

 

I just noticed that my Windows VM won't shut down at all... It just keeps saying "Shutting down". I thought maybe this problem is Windows Update related, so I let the VM "Shut Down". The next morning I checked the machine and it was still "Shutting Down".

I tried to boot with 1 core, no difference. Maybe it is a Windows 10 related issue?

Do you have the latest virtio drivers installed? Mine shuts down fine

 

I completly deleted the Windows 10 VM and created a new one with a new virtual disk and everything.

It's running fine now. Was a Windows problem then I guess.

Link to comment

My unRAID just started the array automatically even though my parity disks were missing, which killed my parity information that needs to be rebuilt now.

The automatic mounting of the disks should only be done when everything is fine, nothing is missing and nothing has changed its IDs, shouldn't it? :)

Yes this will be corrected in the next release.

Link to comment

My unRAID just started the array automatically even though my parity disks were missing, which killed my parity information that needs to be rebuilt now.

The automatic mounting of the disks should only be done when everything is fine, nothing is missing and nothing has changed its IDs, shouldn't it? :)

Yes this will be corrected in the next release.

Alright, good enough for me :) Thank you!

Link to comment

I have been running a dual parity setup on my Testbed Server since March 11th. At that time, the parity check speed was approximately the same as with a single parity drive. The parity check speed remained almost constant until after June 1st.

 

Sometime around the release of Ver6.2-b22, this suddenly changed. Even though this was back in June, for some reason the scheduled parity check for July did not occur. (I believe some minor problem in one of the releases at that time caused this.) When the scheduled August parity check occurred, the speed had dropped from 104.1 MB/s to 47.2 MB/s, and I became aware of a problem of some type.

 

I tended to suspect a hardware issue, and I looked very carefully at the server, its hardware, logs and SMART reports. I could find nothing that indicated a problem and opened up a topic in the Version 6 Support forum. As a result of that thread, I found out something most interesting: going back to a single parity setup brought the speed back up to 97.0 MB/s. While this is still slower than 104 MB/s, I would consider it acceptable.

 

The entire discussion is in this thread:

 

        https://lime-technology.com/forum/index.php?topic=51036.0

 

It is suspected that some change in the Linux kernel may be responsible for this reduction and, possibly, LimeTech can do nothing about it. If this should be the case, those users with low-end processors (like my AMD Sempron) may wish to avoid using dual parity.

Link to comment

Tom, can you or anyone else confirm if this behavior is normal or expected?

 

In this thread a user is running xfs_repair on a disabled disk; the user has already assigned a replacement disk but it's not rebuilt yet. He's using v6.1.9, but I just tested on 6.2-rc3 and it's the same.

 

Running xfs_repair on a disabled disk with no disk assigned results in reads from all the other disks while it's searching for a secondary superblock, as expected, but running it on a disabled disk with an assigned but not yet rebuilt disk also results in writes to this disk. I don't understand what is being written to the disk; if nothing else, it really slows down the superblock search.

 

Just did a quick test on my test server using a small SSD: running xfs_repair on an emulated disk with no disk assigned took 3 minutes; with an assigned disk the same check took 13 minutes.
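
For reference, the check in question is run against the emulated md device; a hedged example, with the device name purely illustrative:

    # /dev/md1 stands in for whichever md device emulates the disabled disk;
    # -n runs in no-modify mode and still performs the secondary superblock search
    xfs_repair -n /dev/md1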

Link to comment
This topic is now closed to further replies.