Unraid OS version 6.7.1-rc2 available



There have been 5 kernel patch releases since 6.7.1-rc1, and there don't appear to be any problems with the ZombieLoad mitigation.  We updated a few security-related patches, but I want to give this a few days of wider testing before publishing to stable.

 

Please post in this topic any issues you run across that are not present in 6.7.0 - that is, issues you think can be directly attributed to the microcode and/or kernel changes.

 

Version 6.7.1-rc2 2019-06-08

Base distro:

  • curl: version 7.65.0 (CVE-2019-5435, CVE-2019-5436)
  • docker: version 18.09.6
  • kernel-firmware: version 20190514_711d329
  • mozilla-firefox: version 66.0.5
  • samba: version 4.9.8 (CVE-2018-16860)

Linux kernel:

  • version: 4.19.48 (CVE-2019-11833)

Management:

  • shfs: support FUSE use_ino option
  • Dashboard: added draggable fields in table
  • Dashboard: added custom case image selection
  • Dashboard: enhanced sorting
  • Docker + VM: enhanced sorting
  • Docker: disable button "Update All" instead of hiding it when no updates are available
  • Fix OS update banner overhanging in Azure / Gray themes
  • Don't allow plugin updates to same version
  • misc style corrections

Just a quick note to say thank you.

 

I had a bit of a rough start coming from Windows, but I have to say I love Unraid and its approach - so much so that I have told others about it too.

 

I love the development you do to keep moving (not just patches), and the community here makes it even more special.

 

Thanks

 

Terran


Quick question: the ZombieLoad patches were disabling hyper-threading, if I'm not mistaken?

Presumably any update I do since the first patches is going to have a heavy impact on performance?

I know there was some talk back then of disabling the patch, and maybe even of Unraid having the option to ignore certain security upgrades - did anything come of this?

 

2 minutes ago, TomPeel said:

Quick question: the ZombieLoad patches were disabling hyper-threading, if I'm not mistaken?

Presumably any update I do since the first patches is going to have a heavy impact on performance?

I know there was some talk back then of disabling the patch, and maybe even of Unraid having the option to ignore certain security upgrades - did anything come of this?

 

Not sure of the impact on performance. You can use the "Disable Security Mitigations" plugin to turn them off.
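Before deciding whether to disable anything, you can check what the kernel currently reports for each CPU vulnerability. This sysfs interface is standard on recent Linux kernels, so it should work from any Unraid terminal:

```shell
# Print the kernel's mitigation status for each known CPU vulnerability.
# The "mds" entry covers the ZombieLoad-family issues; its text notes
# whether SMT (hyper-threading) is considered vulnerable.
grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null \
  || echo "vulnerabilities sysfs not available on this kernel"
```

Each line shows a vulnerability name and the mitigation currently in effect (or "Not affected").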

3 hours ago, itimpi said:

Not sure of the impact on performance. You can use the "Disable Security Mitigations" plugin to turn them off.

The Disable Security Mitigations plugin doesn't show up in Plugins, and it doesn't show anything in this forum... I'm on 6.7.0-rc8 - could it be that it only shows after an update?

 

40 minutes ago, TomPeel said:

The Disable Security Mitigations plugin doesn't show up in Plugins, and it doesn't show anything in this forum... I'm on 6.7.0-rc8 - could it be that it only shows after an update?

 

Switch to the stable branch and update. Should show up then.


Hello everyone,

 

I'm facing some issues updating from 6.5.3, not only to 6.7.1-rc2 but to any version above 6.5.3.

 

Whenever I update, the drives will not mount, and when I revert back to 6.5.3 the drives mount like nothing happened. I contacted Limetech support, but they couldn't identify the issue; it was suspected that the issue was related to the kernel.

 

Later on, with the help of one of the forum members who is an expert (I don't want to mention his name for privacy purposes, but he's welcome to add to this post), we removed all the drives for testing purposes and made a new config with just an empty drive. The issue appeared again after the update. We narrowed it down, and the conclusion was that the issue appears when we have encrypted drives and disappears when we remove the encryption.

 

It was clear to us that encryption was the key to this issue, but we are not sure what the root cause is or how we can fix it.

 

I have attached the log for the system when trying to mount the drive. Hopefully someone could spot something here and point us toward the solution.

 

I would appreciate any input.

 

Many thanks,

 

Abdulla

VM SERVICE_array not start.zip

28 minutes ago, uaeproz said:

I have attached the log for the system when trying to mount the drive.

Disk1 is mounting correctly:

Filesystem       Size  Used Avail Use% Mounted on
/dev/mapper/md1  3.7T   25G  3.7T   1% /mnt/disk1

Is it not available on the shares, or what exactly is the problem?

1 hour ago, johnnie.black said:

Disk1 is mounting correctly:


Filesystem       Size  Used Avail Use% Mounted on
/dev/mapper/md1  3.7T   25G  3.7T   1% /mnt/disk1

Is it not available on the shares, or what exactly is the problem?

Hi @johnnie.black, I thought I would chip in on this post. I logged into @uaeproz's server yesterday morning to help do some testing with a new array and clean install.

This was because his normal array (170TB, with 11 data drives and 2 parity drives, encrypted XFS) will never start on any version of Unraid above 6.5.3. It always hangs after mounting the first or second drive in the array. He has been trying to upgrade to each new release of Unraid as it comes out, hitting the same problem, and then having to downgrade back to 6.5.3 for his server to work correctly.

 

What we thought we would do yesterday was see if we could do a clean install of 6.7.0 stable, then make a one-drive array and see if the problem persisted.

He removed all of his normal data and parity drives from the server. One 4TB drive was attached. An array was created with just one data drive, on a clean install of Unraid 6.7.0 on the flash drive. The file system chosen was encrypted XFS (to be the same as the normal array).

On clicking 'Start' the drive was formatted, but as the array came up the services began to start and it hung there, the GUI saying "starting services". The array never fully became available.

I looked at the data on the disk and saw that the system share/folder only had the docker folder and had not created the libvirt folder. So I assumed that the VM service was unable to start but the Docker service had.

The server wouldn't shut down from the GUI or command line, so it had to be hard reset.

 

On restarting the server, before starting the array, I disabled the VM service. This time the array started as expected.

However, on stopping the array it hung on stopping the services and the array wouldn't stop. Again it needed a hard reset.

Next, starting the array with neither the Docker service nor the VM service running, the array would start and stop fine.

So next I tried starting the array without the Docker or VM service running, then, once the array had finished starting, manually starting the Docker and VM services. This worked fine. And as long as these services were manually stopped before attempting to stop the array, the array would stop fine.
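The manual start/stop workaround above can be sketched from the command line. Unraid is Slackware-based, so the `/etc/rc.d/rc.docker` and `/etc/rc.d/rc.libvirt` paths below are assumptions and may differ on your release; check your own `/etc/rc.d` before relying on them:

```shell
#!/bin/sh
# Sketch of the workaround: with the services disabled in the GUI, start the
# array first, then bring Docker and the VM (libvirt) service up by hand.
# NOTE: the rc script paths are assumptions based on Unraid's Slackware base.
for svc in /etc/rc.d/rc.docker /etc/rc.d/rc.libvirt; do
  if [ -x "$svc" ]; then
    "$svc" start                       # start the service once the array is up
  else
    echo "service script not found: $svc"
  fi
done
```

Run the same loop with `stop` before stopping the array, mirroring the manual order that worked in the test above.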

-----

So next I deleted that array and made a new one using standard XFS (not encrypted) with the same 4TB drive.

The array started fine with both the Docker and VM services running, and it could stop fine too. So basically everything worked as expected when the drive was not encrypted.

 

I was hoping, from the results of those tests, that when we reconnected the original drives, went back to the original flash drive, and upgraded the system to 6.7.0, the array would start with the Docker and VM services disabled. This wasn't the case. The array didn't finish mounting the drives; it stopped after mounting the first drive and had to be hard reset. So this is a strange problem.

 

The OP has also tried removing all non-essential hardware such as the GPU, and tried moving the disk controller to a different PCIe slot. He has run Memtest on the RAM, which passed.

 

The diag file that he attached to the post, if I remember, was taken with one drive in the server formatted as encrypted XFS, starting the array with the VM service enabled. The array never finished starting - just stuck on starting services. That's when the file was downloaded, before hard resetting.

Hope that helps.

 

 

14 minutes ago, SpaceInvaderOne said:

if I remember, was taken with one drive in the server formatted as encrypted XFS, starting the array with the VM service enabled. The array never finished starting - just stuck on starting services.

OK, I see that:

Jun 15 03:02:51 Tower root: Creating new image file: /mnt/user/system/libvirt/libvirt.img size: 1G

Unraid is trying to create libvirt.img, but it never gets created, and there also aren't any errors or any indication of why it's failing - a weird issue. If possible, I would try connecting the disk to the onboard controller and trying again, or make sure libvirt is created on a device using the onboard controller, like a cache device.
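One way to narrow this down is to try creating a same-sized file by hand and see whether that also stalls on the encrypted device; if it does, the problem is at the filesystem/device layer rather than in libvirt. This is only a diagnostic sketch - the `/tmp` path is a stand-in, and on the server you would point `IMG` at the actual share:

```shell
# Create a sparse 1 GiB file, the same size as the libvirt.img in the log,
# and confirm it appears. On a healthy filesystem this is near-instant.
IMG=/tmp/libvirt-test.img     # stand-in path; use the system share on the server
truncate -s 1G "$IMG"         # allocate a sparse 1 GiB file
ls -lh "$IMG"                 # should report a 1.0G file
rm -f "$IMG"                  # clean up the test file
```

If `truncate` hangs the same way the service startup does, that would point at writes to the encrypted volume rather than at the VM service itself.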


Sounds like an issue with the HBA / RAID card he is using. I had an issue several versions ago where I had to replace an HBA to get drives to mount properly, or even to see certain drives. It may be an issue like that. Check against a different HBA or onboard SATA.

 

Also, it's probably best for your issue to go in a separate topic.

 

The rc1 and rc2 testing was supposed to be limited to making sure that no problems were introduced since 6.7.0 stable.

Edited by dgreig