rollieindc Posted June 24, 2019
I just upgraded my tower from 6.7.0 to 6.7.1. No issues. Thanks for plugging the denial-of-service and processor security vulnerabilities! 😀
Stan464 Posted June 24, 2019
Might be a silly question, but does 6.7.1 with the Intel patch cause a performance hit? My current CPU is not the best, and I'm almost certain that if it gets nerfed by a patch it will melt, lol! Thanks, guys.
LammeN3rd Posted June 24, 2019
27 minutes ago, Stan464 said: "Does 6.7.1 with the Intel patch cause a performance hit?"
There is a plugin to disable the mitigations: plugin-disable-security-mitigations
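For anyone weighing the performance question before reaching for that plugin, the kernel itself reports which mitigations are active. A minimal sketch that reads the standard sysfs location (the path is the stock Linux interface, not anything Unraid-specific; on kernels without these reports the directory may be missing, in which case nothing is printed):

```python
# Read the kernel's report of CPU vulnerability mitigation status.
# /sys/devices/system/cpu/vulnerabilities is the standard sysfs location;
# the base path is a parameter so the helper can be pointed at a test tree.
from pathlib import Path

def read_mitigations(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return a {vulnerability_name: kernel_status_string} mapping."""
    base = Path(base)
    if not base.is_dir():
        return {}
    return {f.name: f.read_text().strip() for f in sorted(base.iterdir())}

for name, status in read_mitigations().items():
    print(f"{name}: {status}")
```

Entries that read "Mitigation: …" indicate an active (and potentially costly) workaround; "Not affected" means the CPU never needed one.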
highdefinitely Posted June 24, 2019
Upgraded two servers from v6.7.0 to v6.7.1 without a hitch.
local.bin Posted June 24, 2019
Updated to 6.7.1 and lost all of my VLANs for each of the Docker containers. The VLANs are still configured in the network settings, but they no longer appear in the list of available networks in the individual Docker configs. The containers are still there, but their network value is now blank. Anyone else have this issue?
local.bin Posted June 24, 2019
31 minutes ago, local.bin said: "Updated to 6.7.1 and lost all of my VLANs for each of the Docker containers."
Resolved by going into the network VLAN config, disabling all VLANs, and then enabling them all again. After starting the array the VLANs reappeared.
nphil Posted June 24, 2019
Upgraded to 6.7.1 and got database corruption in a bunch of containers: Plex, Unifi, Sonarr, Radarr, etc. Anything with a DB file inside appdata. Everything was fine on 6.7.0. Downgraded to 6.7.0 and the same issue persisted. In Docker settings, I manually switched the appdata location from /mnt/user/appdata to /mnt/cache/appdata and suddenly everything works fine. Additional details about my cache drives: 2x NVMe drives formatted BTRFS, running in single mode to maximize available space.
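For anyone wanting to spot-check their own appdata after an event like this, SQLite can verify each database file directly. A sketch, not a diagnosis of nphil's issue; the paths are illustrative (point it at your own share, e.g. /mnt/cache/appdata), and the demo setup at the bottom just creates one healthy database so the script runs anywhere:

```python
# Scan a directory tree for SQLite .db files and run PRAGMA integrity_check
# on each one. A healthy database reports "ok"; a corrupt one reports the
# damage or raises DatabaseError (caught and recorded below).
import sqlite3
import tempfile
from pathlib import Path

def check_sqlite_dbs(appdata):
    """Return {db_path: integrity_check result or error message}."""
    results = {}
    for db in Path(appdata).rglob("*.db"):
        try:
            with sqlite3.connect(db) as conn:
                results[str(db)] = conn.execute(
                    "PRAGMA integrity_check;").fetchone()[0]
        except sqlite3.DatabaseError as exc:
            results[str(db)] = f"error: {exc}"
    return results

# Demo so the sketch is self-contained: one healthy database in a temp dir.
tmp = Path(tempfile.mkdtemp())
with sqlite3.connect(tmp / "demo.db") as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")

for path, result in check_sqlite_dbs(tmp).items():
    print(f"{path}: {result}")
```

Stop the containers before checking, since a database being written mid-scan can report spurious problems.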
ken-ji Posted June 24, 2019
Upgraded from 6.7.0 remotely without issue.
JasonJoel Posted June 25, 2019
Upgraded my AMD EPYC based system, no issues.
cybrnook Posted June 25, 2019
Upgraded from 6.7.0 to 6.7.1 without issue. I have 2x NVMe drives as cache in RAID 1. Docker settings are set to the default /mnt/user/appdata. No issues starting my Plex docker (using the Plex docker managed by Plex). So far so good.
LammeN3rd Posted June 25, 2019
6 hours ago, cybrnook said: "Upgraded from 6.7.0 to 6.7.1 without issue. I have 2x NVMe drives as cache in RAID 1. Docker settings are set to the default /mnt/user/appdata. No issues starting my Plex docker."
Same here, just with 4x NVMe in RAID 1, and both Sonarr and Plex use /mnt/user/appdata. I've been running the 6.7.1 RC, and 6.7.0 before that, and never had any corruption issues!
bwnautilus Posted June 25, 2019
Upgraded from 6.7.0 to 6.7.1 via remote browser. Everything worked smoothly. Thanks again for supporting us users!
testdasi Posted June 25, 2019
Upgraded from 6.7.0. Forgot that I upgraded and tried to do it again.
armbrust Posted June 25, 2019
Hi, I tried upgrading from 6.6.7 and experienced a hard crash when the VMs were started. For a few seconds the GUI was available, but then the system rebooted and came back with the VM manager disabled. When I enabled the VM manager, the system crashed (no web GUI, didn't respond to pings) and didn't come back. I rebooted into safe mode, captured a diagnostic file, and reverted to 6.6.7. Back on 6.6.7, all is well. The same thing happened when trying to upgrade from 6.6.7 to 6.7.0. The attached diagnostic file was captured in 6.7.1 safe mode. Thanks for any advice. I'm hoping it's not that my hardware is too out of date!
Edit, more info: it seems to be a problem with one particular VM, which has passthrough of a GPU and a USB controller. Starting this VM kills the system. Here are the lines recorded in the syslog when I started the VM and the system froze:
Jun 25 14:21:40 Tower kernel: vfio-pci 0000:0a:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jun 25 14:21:40 Tower kernel: br0: port 2(vnet0) entered blocking state
Jun 25 14:21:40 Tower kernel: br0: port 2(vnet0) entered disabled state
Jun 25 14:21:40 Tower kernel: device vnet0 entered promiscuous mode
Jun 25 14:21:40 Tower kernel: br0: port 2(vnet0) entered blocking state
Jun 25 14:21:40 Tower kernel: br0: port 2(vnet0) entered forwarding state
Jun 25 14:21:42 Tower kernel: vfio_ecap_init: 0000:0a:00.0 hiding ecap 0x19@0x900
Jun 25 14:21:42 Tower avahi-daemon[7190]: Joining mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:feb3:33ee.
Jun 25 14:21:42 Tower avahi-daemon[7190]: New relevant interface vnet0.IPv6 for mDNS.
Jun 25 14:21:42 Tower avahi-daemon[7190]: Registering new address record for fe80::fc54:ff:feb3:33ee on vnet0.*.
Jun 25 14:21:42 Tower kernel: vfio-pci 0000:00:1a.7: enabling device (0000 -> 0002)
Jun 25 14:21:42 Tower kernel: vfio_cap_init: 0000:00:1a.7 hiding cap 0xa
Jun 25 14:21:45 Tower kernel: vfio-pci 0000:0a:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jun 25 14:21:47 Tower kernel: DMAR: DRHD: handling fault status reg 2
Jun 25 14:21:47 Tower kernel: DMAR: [DMA Read] Request device [00:1a.7] fault addr eb000 [fault reason 06] PTE Read access is not set
tower-diagnostics-20190625-1740.zip
limetech Posted June 25, 2019 (Author)
1 hour ago, armbrust said: "It seems to be a problem with one particular VM, which has passthrough of a GPU and a USB controller. When starting this VM it dies."
You can try adding this to your selected syslinux boot line (add to the end of 'append'): intel_iommu=pt
For example: append initrd=/bzroot,/bzroot-gui intel_iommu=pt
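For readers unfamiliar with where that line lives: on Unraid the boot configuration is in syslinux.cfg on the flash drive, and the flag goes at the end of the append line of whichever label you boot. A rough sketch of what the edited stanza might look like (label names and initrd entries vary by install; this is illustrative, not a copy of any particular config):

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui intel_iommu=pt
```

intel_iommu=pt puts host devices in passthrough (identity-mapped) mode rather than full DMA remapping, which sometimes avoids DMAR faults of the kind shown above.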
raidserver Posted June 25, 2019
Uneventful update to two unRaid servers. Thanks LimeTech 👍
armbrust Posted June 26, 2019
6 hours ago, limetech said: "You can try adding this to your selected syslinux boot line (add to the end of 'append'): intel_iommu=pt"
Thanks for the reply; unfortunately, no luck. This is the syslog at the time of VM start:
Jun 25 21:44:17 Tower kernel: vfio-pci 0000:0a:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jun 25 21:44:17 Tower kernel: br0: port 3(vnet2) entered blocking state
Jun 25 21:44:17 Tower kernel: br0: port 3(vnet2) entered disabled state
Jun 25 21:44:17 Tower kernel: device vnet2 entered promiscuous mode
Jun 25 21:44:17 Tower kernel: br0: port 3(vnet2) entered blocking state
Jun 25 21:44:17 Tower kernel: br0: port 3(vnet2) entered forwarding state
Jun 25 21:44:18 Tower kernel: vfio_ecap_init: 0000:0a:00.0 hiding ecap 0x19@0x900
Jun 25 21:44:18 Tower avahi-daemon[7180]: Joining mDNS multicast group on interface vnet2.IPv6 with address fe80::fc54:ff:fe13:8859.
Jun 25 21:44:18 Tower avahi-daemon[7180]: New relevant interface vnet2.IPv6 for mDNS.
Jun 25 21:44:18 Tower avahi-daemon[7180]: Registering new address record for fe80::fc54:ff:fe13:8859 on vnet2.*.
Jun 25 21:44:19 Tower kernel: vfio-pci 0000:00:1a.7: enabling device (0000 -> 0002)
Jun 25 21:44:19 Tower kernel: vfio_cap_init: 0000:00:1a.7 hiding cap 0xa
Jun 25 21:44:22 Tower kernel: vfio-pci 0000:0a:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jun 25 21:44:24 Tower kernel: DMAR: DRHD: handling fault status reg 2
Jun 25 21:44:24 Tower kernel: DMAR: [DMA Read] Request device [00:1a.7] fault addr eb000 [fault reason 06] PTE Read access is not set
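One thing worth checking with a DMAR fault against a passed-through device like 00:1a.7 is its IOMMU grouping: devices in the same IOMMU group must be assigned to the VM together, and a group shared with devices left on the host is a common cause of faults like the one above. A sketch that lists the groups (the sysfs path is the standard kernel location; on a system without IOMMU enabled the listing is simply empty, and the base path is a parameter purely so the function can be exercised against a test tree):

```python
# Map IOMMU groups to the PCI devices they contain, using the kernel's
# /sys/kernel/iommu_groups/<group>/devices/<pci-address> layout.
from pathlib import Path

def iommu_groups(base="/sys/kernel/iommu_groups"):
    """Return {group_number: [pci_addresses]}."""
    groups = {}
    for dev in sorted(Path(base).glob("*/devices/*")):
        # dev is .../iommu_groups/<group>/devices/<address>
        groups.setdefault(dev.parent.parent.name, []).append(dev.name)
    return groups

for group, devices in sorted(iommu_groups().items()):
    print(f"IOMMU group {group}: {' '.join(devices)}")
```

If 0000:00:1a.7 shows up in a group alongside devices that are not passed through, that grouping (rather than the 6.7.1 upgrade itself) may be the thing to investigate.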
Geek@Trucker Posted July 7, 2019
On 6/24/2019 at 6:04 AM, itimpi said:
Got it working 100% again. Sorry for the drama. Did a complete check of the hardware (with a calm head) and one of the memory sticks was loose. Reseated all of my memory sticks, double-checked everything, and it booted OK. Thank you for your support.
Unraid 6.7.2 Pro (on SanDisk Cruzer Fit 16GB)
Server: Supermicro X8DT3-LN4F, 48GB ECC (12x 4GB 1333MHz), EVGA SuperNOVA 850, 2x LSI IT-mode 9211-8i. Parity: 2x 3TB Seagate. Cache: 2x 240GB Kingston (in RAID 1). Data: 6x 3TB Seagate, 1x 3TB WD, 1*tb WD, in a Rosewill RSV-L4500 case.