limetech Posted June 9, 2019

There have been 5 kernel patch releases since 6.7.1-rc1 and there don't appear to be any problems with the Zombieland mitigation. We updated a few security-related patches, but I want to give this a few days of wider testing before publishing to stable. Please post in this topic any issues you run across which are not present in 6.7.0, that is, issues that you think can be directly attributed to the microcode and/or kernel changes.

Version 6.7.1-rc2 2019-06-08

Base distro:
- curl: version 7.65.0 (CVE-2019-5435, CVE-2019-5436)
- docker: version 18.09.6
- kernel-firmware: version 20190514_711d329
- mozilla-firefox: version 66.0.5
- samba: version 4.9.8 (CVE-2018-16860)

Linux kernel:
- version 4.19.48 (CVE-2019-11833)

Management:
- shfs: support FUSE use_ino option
- Dashboard: added draggable fields in table
- Dashboard: added custom case image selection
- Dashboard: enhanced sorting
- Docker + VM: enhanced sorting
- Docker: disable button "Update All" instead of hiding it when no updates are available
- Fix OS update banner overhanging in Azure / Gray themes
- Don't allow plugin updates to same version
- misc style corrections
StanC Posted June 9, 2019

Wow, first to reply. Updated my two servers with no issues. Thank you for all your hard work!
Dazog Posted June 9, 2019

Does this RC include php 7.2.19 for CVE-2019-11040 and OpenSSL 1.1.1c?
SimonF Posted June 9, 2019

Updated test server, all OK. Available to test SAS spin-down updates if required.
ccsnet Posted June 9, 2019

Just a quick note to say thank you. I had a bit of a rough start coming from Windows, but I have to say I love Unraid and its approach, so much so that I have told others too. I love the development you guys do to keep things moving (not just patches), and the community here makes it even more special.

Thanks, Terran
LammeN3rd Posted June 9, 2019

Already 13 hours of uptime on 6.7.1-rc2, no issues, everything runs great! Really like the new Dashboard draggable fields; it would be even better if they could move between tables!
zoggy Posted June 12, 2019

Just to note, I upgraded from 6.7.0 -> 6.7.1-rc1 -> 6.7.1-rc2 without problems. The 6.7.0 cosmetic bug is still shown every 20 hours:

Jun 11 09:27:38 husky dhcpcd[1582]: eth0: failed to renew DHCP, rebinding
TomPeel Posted June 12, 2019

Quick question: the Zombieland patches were disabling hyperthreading, if I'm not mistaken? Presumably any update I do since the first patches is going to have a heavy impact on performance? I know there was some talk back then of disabling the patch, and maybe even of Unraid having an option to ignore certain security upgrades. Did anything come of this?
itimpi Posted June 12, 2019

In reply to TomPeel: Not sure of the impact on performance. You can use the "Disable Security Mitigations" plugin to turn them off.
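As an aside, before deciding whether to disable anything, you can check what the kernel itself reports about the MDS ("Zombieland") mitigation: the Linux kernel exposes per-vulnerability status files under sysfs. A minimal sketch (these are standard Linux sysfs paths, not an Unraid-specific tool):

```python
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def read_vuln_status(vuln_dir=VULN_DIR):
    """Return a dict mapping vulnerability name -> kernel-reported status.

    Returns an empty dict if sysfs is unavailable (non-Linux system,
    restricted container, etc.)."""
    status = {}
    if not os.path.isdir(vuln_dir):
        return status
    for name in sorted(os.listdir(vuln_dir)):
        with open(os.path.join(vuln_dir, name)) as f:
            status[name] = f.read().strip()
    return status

# Example: print each vulnerability and its mitigation state, e.g.
# "mds: Mitigation: Clear CPU buffers; SMT vulnerable"
for name, state in read_vuln_status().items():
    print(f"{name}: {state}")
```

If the `mds` line ends with "SMT vulnerable", hyperthreading is still enabled and the kernel considers it exposed; the buffer-clearing mitigation itself does not turn hyperthreading off.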
TomPeel Posted June 12, 2019

The Disable Security Mitigations plugin doesn't show in the plugins, and it doesn't show anything in this forum... I'm on 6.7.0-rc8. Could it be that it only shows after an update?
wgstarks Posted June 12, 2019

In reply to TomPeel: Switch to the stable branch and update. It should show up then.
TomPeel Posted June 12, 2019

In reply to wgstarks: Works on 6.7.1-rc2.
Squid Posted June 12, 2019

In reply to TomPeel: Because the minimum version I have it set for is 6.7.0 stable.
zoggy Posted June 15, 2019

Any chance we can get dhcpcd updated?
runraid Posted June 15, 2019

Upgrading in hopes it fixes the SQLite db corruption problem many of us have been plagued by since upgrading to 6.7.0.
Abzstrak Posted June 16, 2019

In reply to runraid: Try enabling the use_ino option. I'm wondering if it will help.
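For anyone chasing that corruption, SQLite has a built-in consistency check you can run against a copy of the affected application database (with the container stopped). A minimal sketch; the path in the usage note below is illustrative, substitute your own appdata database:

```python
import sqlite3

def check_db(path):
    """Run SQLite's built-in PRAGMA integrity_check on a database file.

    Returns the string 'ok' for a healthy database; otherwise returns
    (or raises, for badly damaged files) a description of the problem."""
    conn = sqlite3.connect(path)
    try:
        # integrity_check returns a single row containing 'ok' on success
        (result,) = conn.execute("PRAGMA integrity_check;").fetchone()
        return result
    finally:
        conn.close()
```

Usage would look like `check_db("/mnt/user/appdata/sonarr/sonarr.db")`; anything other than "ok" means the file is internally inconsistent.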
uaeproz Posted June 16, 2019

Hello everyone, I'm facing issues updating from 6.5.3, not only to 6.7.1-rc2 but to any version above 6.5.3. Whenever I update, the drives will not mount, and when I revert back to 6.5.3 the drives mount like nothing happened. I contacted Limetech support but they couldn't identify the issue; it was suspected to be related to the kernel.

Later on, with the help of one of the forum members, who is an expert (I don't want to mention his name for privacy purposes, but he's welcome to add to this post), we removed all the drives for testing purposes and made a new config with just a single empty drive. The issue appeared again after the update. So we narrowed it down, and the conclusion was: when we have encrypted drives the issue appears, and it disappears when we remove the encryption. It was clear to us that encryption was the key to this issue, but we are not sure what the root cause is or how we can fix it.

I have attached the log from the system when trying to mount the drive. Hopefully someone can spot something here and point us toward a solution. I would appreciate any input.

Many thanks,
Abdulla

VM SERVICE_array not start.zip
JorgeB Posted June 16, 2019

In reply to uaeproz: Disk1 is mounting correctly:

Filesystem      Size  Used Avail Use% Mounted on
/dev/mapper/md1 3.7T   25G  3.7T   1% /mnt/disk1

Is it not available on the shares, or what exactly is the problem?
SpaceInvaderOne Posted June 16, 2019

Hi @johnnie.black, I thought I would chip into this post. I logged into @uaeproz's server yesterday morning to help do some testing with a new array and a clean install. This was because on his normal array (170TB, 11 data drives and 2 parity drives, encrypted XFS) the array will never start on any version of Unraid above 6.5.3. It always hangs after mounting the first or second drive in the array. He has been trying to upgrade to each new release of Unraid as it comes out, hitting the same problem, and then having to downgrade back to 6.5.3 for his server to work correctly.

What we decided to do yesterday was a clean install of 6.7.0 stable, then make a one-drive array and see if the problem persisted. He removed all of his normal data and parity drives from the server and attached a single 4TB drive. An array was created with just that one data drive, on a clean install of Unraid 6.7.0 on the flash drive. The filesystem chosen was encrypted XFS (to match the normal array). On clicking 'Start the array' the drive was formatted, but as the array started, the services began starting and it hung there, the GUI saying 'Starting services'. The array never fully became available. I looked at the data on the disk and saw that the system share/folder contained only the docker folder; the libvirt folder had not been created. So I assumed the VM service was unable to start but the Docker service had. The server wouldn't shut down from the GUI or command line, so it had to be hard reset.

On restarting the server, before starting the array, I disabled the VM service. This time the array started as expected. However, on stopping the array again it hung on stopping the services and the array wouldn't stop. Again it needed a hard reset. Next, starting the array with neither the Docker nor the VM service running, the array would start and stop fine. So next I tried starting the array without the Docker or VM service running, then, once the array had finished starting, manually starting the Docker and VM services. This worked fine, and as long as those services were manually stopped before attempting to stop the array, the array would stop fine too.

-----

Next I deleted that array and made a new one using standard XFS (not encrypted) with the same 4TB drive. The array started fine with both the Docker and VM services running, and could stop fine too. So basically everything worked as expected when the drive was not encrypted.

I was hoping from those results that when we reconnected the original drives, went back to the original flash drive, and upgraded the system to 6.7.0, the array would start with the Docker and VM services disabled. This wasn't the case. The array didn't finish mounting the drives: it stopped after mounting the first drive and the server had to be hard reset.

So this is a strange problem. The OP has also tried removing all non-essential hardware such as the GPU, tried moving the disk controller to a different PCIe slot, and has run memtest on the RAM, which passed.

The diag file he attached to the post, if I remember, was taken with one drive in the server formatted as encrypted XFS, starting the array with the VM service enabled. The array never finished starting, just stuck on 'Starting services'; that's when the file was downloaded, before hard resetting.

Hope that helps.
JorgeB Posted June 16, 2019

In reply to SpaceInvaderOne: OK, I see that:

Jun 15 03:02:51 Tower root: Creating new image file: /mnt/user/system/libvirt/libvirt.img size: 1G

Unraid is trying to create libvirt.img but it never gets created, and there are no errors or any indication of why it's failing. Weird issue. If possible, I would try connecting the disk to the onboard controller and trying again, or make sure libvirt.img is created on a device using the onboard controller, like a cache device.
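One way to probe this by hand is to create a sparse 1G file at the same location and see whether the filesystem accepts it at all, since that is essentially what the log line above shows Unraid attempting. A hedged sketch; the commented path mirrors the log and is an assumption about where the system share lives on a given box:

```python
import os

def create_sparse_image(path, size_bytes=1 << 30):
    """Create a sparse file of the given size (default 1 GiB, like
    Unraid's libvirt.img).

    'Sparse' means no data blocks are allocated up front; the file is
    only extended in metadata, so this mainly exercises the filesystem's
    ability to create and grow the file. Returns the resulting size."""
    with open(path, "wb") as f:
        f.truncate(size_bytes)  # extend to size without writing data
    return os.path.getsize(path)

# Example (illustrative path, remove the test file afterwards):
# create_sparse_image("/mnt/user/system/libvirt/test.img")
```

If this hangs or errors on the encrypted array but succeeds elsewhere, that would point at the filesystem layer rather than the VM service itself.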
bastl Posted June 16, 2019

What if the "domain" share isn't on the encrypted array? @SpaceInvaderOne Most users have their domain share on the cache drive, which I guess for most isn't encrypted.
dgreig Posted June 16, 2019

Sounds like an issue with the HBA / RAID card he is using. I had an issue several versions ago where I had to replace an HBA to get drives to mount properly, or even to see certain drives. It may be an issue like that. Check against a different HBA or onboard SATA.

Also, it's probably best that your issue goes in a separate topic; the rc1 and rc2 testing was supposed to be limited to making sure that no problems were introduced since 6.7.0 stable.
rix Posted June 18, 2019

Before we go stable, please take a look at the SACK exploit: https://access.redhat.com/security/vulnerabilities/tcpsack

A kernel update may be advisable.
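For reference while waiting on a patched kernel, one of the workarounds Red Hat describes is disabling selective acknowledgements (`sysctl -w net.ipv4.tcp_sack=0`, at some cost to throughput on lossy links), and the current setting is readable from procfs. A minimal sketch for checking it, using the standard Linux procfs path:

```python
def tcp_sack_enabled(path="/proc/sys/net/ipv4/tcp_sack"):
    """Report the kernel's TCP SACK setting.

    Returns True if SACK is enabled, False if disabled, or None if the
    procfs entry is unavailable (non-Linux, restricted container)."""
    try:
        with open(path) as f:
            return f.read().strip() == "1"
    except OSError:
        return None
```

A True result on an unpatched kernel means the SACK-based attacks are relevant; the real fix is still the kernel update rather than the sysctl workaround.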
SpecFroce Posted June 18, 2019

My thoughts exactly. I was almost expecting an rc3 already with that change, but I'm sure Limetech is already on it.
limetech Posted June 20, 2019 (Author)

In reply to SpecFroce: Yup.