Unraid OS version 6.7.1 available



Updated to 6.7.1 and have lost all of my VLANs for each of the Dockers.

The VLANs are still configured in the network settings, but they don't appear in the list of available networks in the individual Docker configs.

The Dockers are still there, but their network value is now blank.

Anyone else have this issue?

31 minutes ago, local.bin said:


Resolved by going into the network VLAN config, disabling all VLANs, and then enabling them all again.

After starting the array, the VLANs reappeared.


Upgraded to 6.7.1 and got DB corruption issues with a bunch of containers: Plex, Unifi, Sonarr, Radarr, etc.; basically anything with a DB file inside appdata. Everything was fine on 6.7.0.

 

Downgraded to 6.7.0 and the same issue persisted. In Docker settings, I manually switched the appdata location from /mnt/user/appdata to /mnt/cache/appdata, and suddenly everything works fine.

 

Additional details about my cache drives: 2x NVMe drives formatted BTRFS, running in single mode to maximize available space.
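For anyone wondering whether their containers' databases were hit: most of these apps (Plex, Sonarr, Radarr) keep SQLite files in appdata, and the thorough test is running `PRAGMA integrity_check;` on each file with the `sqlite3` CLI. As a quick, dependency-free first pass, you can at least flag files whose SQLite header is gone. This is only a rough sketch, and the path and `.db` extension are assumptions; adjust them to your setup:

```shell
# Quick triage of SQLite files under an appdata path. A damaged file can
# still pass this header check, so follow up with
#   sqlite3 file.db 'PRAGMA integrity_check;'
# for a proper test.
scan_appdata_dbs() {
    find "$1" -type f -name '*.db' 2>/dev/null | while IFS= read -r db; do
        # A healthy SQLite file starts with the 15-byte magic string.
        if [ "$(head -c 15 "$db")" = "SQLite format 3" ]; then
            echo "header ok: $db"
        else
            echo "SUSPECT:   $db"
        fi
    done
}

# Example path below is an assumption, not a recommendation:
scan_appdata_dbs /mnt/cache/appdata
```

Anything flagged SUSPECT is worth restoring from backup before troubleshooting further.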

Edited by nphil

Upgraded from 6.7.0 to 6.7.1 without issue. I have 2x NVMe drives as cache in RAID 1. Docker settings are set to the default /mnt/user/appdata. No issues starting my Plex Docker. Using the Plex Docker managed by Plex.

 

So far so good.

Edited by cybrnook
6 hours ago, cybrnook said:


Same here, just with 4x NVMe in RAID 1, and both Sonarr and Plex use /mnt/user/appdata.

I've been running the 6.7.1 RC, and 6.7.0 before that, and never had any corruption issues!


Hi, I tried upgrading from 6.6.7 and experienced a hard crash when the VMs were started. For a few seconds the GUI was available, but then the system rebooted and came back with the VM manager disabled. When I enabled the VM manager, the system crashed (no web GUI, didn't respond to pings) and didn't come back. I rebooted into safe mode, captured a diagnostic file, and reverted to 6.6.7.

 

Back to 6.6.7 and all is well.

 

The same thing happened when trying to upgrade to 6.7.0 from 6.6.7.

 

The attached diagnostic file was captured in 6.7.1 safe mode.

 

Thanks for any advice. I'm hoping it's not that my hardware is too out of date!

 

Edit:  more info:

 

It seems to be a problem with one particular VM, which has pass-through of a GPU and USB controller. When starting this VM, it dies. Here are the lines recorded in the syslog when I started the VM and the system froze.

 

Jun 25 14:21:40 Tower kernel: vfio-pci 0000:0a:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jun 25 14:21:40 Tower kernel: br0: port 2(vnet0) entered blocking state
Jun 25 14:21:40 Tower kernel: br0: port 2(vnet0) entered disabled state
Jun 25 14:21:40 Tower kernel: device vnet0 entered promiscuous mode
Jun 25 14:21:40 Tower kernel: br0: port 2(vnet0) entered blocking state
Jun 25 14:21:40 Tower kernel: br0: port 2(vnet0) entered forwarding state
Jun 25 14:21:42 Tower kernel: vfio_ecap_init: 0000:0a:00.0 hiding ecap 0x19@0x900
Jun 25 14:21:42 Tower avahi-daemon[7190]: Joining mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:feb3:33ee.
Jun 25 14:21:42 Tower avahi-daemon[7190]: New relevant interface vnet0.IPv6 for mDNS.
Jun 25 14:21:42 Tower avahi-daemon[7190]: Registering new address record for fe80::fc54:ff:feb3:33ee on vnet0.*.
Jun 25 14:21:42 Tower kernel: vfio-pci 0000:00:1a.7: enabling device (0000 -> 0002)
Jun 25 14:21:42 Tower kernel: vfio_cap_init: 0000:00:1a.7 hiding cap 0xa
Jun 25 14:21:45 Tower kernel: vfio-pci 0000:0a:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jun 25 14:21:47 Tower kernel: DMAR: DRHD: handling fault status reg 2
Jun 25 14:21:47 Tower kernel: DMAR: [DMA Read] Request device [00:1a.7] fault addr eb000 [fault reason 06] PTE Read access is not set

 

tower-diagnostics-20190625-1740.zip
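For what it's worth, the DMAR fault above means the device at 00:1a.7 (the passed-through USB controller) issued a DMA read the IOMMU has no mapping for. One thing worth checking is which IOMMU group that controller sits in, since everything in a group has to be passed through together. A minimal sketch that lists groups from sysfs (the optional path prefix is just so a saved copy of /sys from diagnostics can be inspected too):

```shell
# List IOMMU groups and the PCI devices in each, read from sysfs.
# $1 (optional) is a path prefix for inspecting a copied tree; with no
# argument it reads the live system's /sys/kernel/iommu_groups.
list_iommu_groups() {
    root="${1:-}"
    for g in "$root"/sys/kernel/iommu_groups/*; do
        [ -d "$g" ] || continue
        devs=$(ls "$g/devices" 2>/dev/null | tr '\n' ' ')
        echo "group ${g##*/}: $devs"
    done
}

list_iommu_groups
```

If the USB controller shares a group with devices that aren't bound to vfio-pci, that's a likely source of exactly this kind of fault.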

Edited by armbrust
1 hour ago, armbrust said:

It seems to be a problem with one particular VM, which has pass-through of a GPU and USB controller. When starting this VM, it dies. Here are the lines recorded in the syslog when I started the VM and the system froze.

You can try adding this to your selected syslinux boot line (add to end of 'append'):

intel_iommu=pt

 

For example,

append initrd=/bzroot,/bzroot-gui intel_iommu=pt
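After rebooting with the changed append line, it's worth confirming the flag actually made it into the running kernel (the live command line is in /proc/cmdline, and `dmesg` output mentioning DMAR/IOMMU shows what mode the kernel picked). A small sketch that classifies the intel_iommu setting from a kernel command line:

```shell
# Report which intel_iommu mode a kernel command line requests.
# Pass the command line as an argument; the live one is in /proc/cmdline.
check_intel_iommu() {
    case " $1 " in
        *" intel_iommu=pt "*) echo "pt (passthrough)" ;;
        *" intel_iommu=on "*) echo "on" ;;
        *)                    echo "not set" ;;
    esac
}

check_intel_iommu "$(cat /proc/cmdline)"
```

If this reports "not set" after a reboot, the edit most likely went into the wrong syslinux boot entry.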

 

6 hours ago, limetech said:


Thanks for the reply; unfortunately, no luck. This is the syslog at the time of the VM start.

Jun 25 21:44:17 Tower kernel: vfio-pci 0000:0a:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jun 25 21:44:17 Tower kernel: br0: port 3(vnet2) entered blocking state
Jun 25 21:44:17 Tower kernel: br0: port 3(vnet2) entered disabled state
Jun 25 21:44:17 Tower kernel: device vnet2 entered promiscuous mode
Jun 25 21:44:17 Tower kernel: br0: port 3(vnet2) entered blocking state
Jun 25 21:44:17 Tower kernel: br0: port 3(vnet2) entered forwarding state
Jun 25 21:44:18 Tower kernel: vfio_ecap_init: 0000:0a:00.0 hiding ecap 0x19@0x900
Jun 25 21:44:18 Tower avahi-daemon[7180]: Joining mDNS multicast group on interface vnet2.IPv6 with address fe80::fc54:ff:fe13:8859.
Jun 25 21:44:18 Tower avahi-daemon[7180]: New relevant interface vnet2.IPv6 for mDNS.
Jun 25 21:44:18 Tower avahi-daemon[7180]: Registering new address record for fe80::fc54:ff:fe13:8859 on vnet2.*.
Jun 25 21:44:19 Tower kernel: vfio-pci 0000:00:1a.7: enabling device (0000 -> 0002)
Jun 25 21:44:19 Tower kernel: vfio_cap_init: 0000:00:1a.7 hiding cap 0xa
Jun 25 21:44:22 Tower kernel: vfio-pci 0000:0a:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
Jun 25 21:44:24 Tower kernel: DMAR: DRHD: handling fault status reg 2
Jun 25 21:44:24 Tower kernel: DMAR: [DMA Read] Request device [00:1a.7] fault addr eb000 [fault reason 06] PTE Read access is not set

 

  • 2 weeks later...
On 6/24/2019 at 6:04 AM, itimpi said:

Got it working 100% again... Sorry for the drama. Did a complete check of the hardware (with a calm head) and one of the memory sticks was loose. Reseated all of my memory sticks, double-checked everything, and it booted OK. Thank you for your support.

Unraid 6.7.2 Pro (on SanDisk Cruzer Fit 16GB)

Server: Supermicro X8DT3-LN4F, 48GB ECC (12*4GB 1333MHz), EVGA SuperNOVA 850, 2* LSI IT-mode 9211-8i, Parity: 2*3TB Seagate, Cache: 2*240GB Kingston (in RAID 1), Data: 6*3TB Seagate, 1*3TB WD, 1*tb WD, in a Rosewill RSV-L4500 case.

 

