Unraid OS version 6.10.0 available


Recommended Posts

I upgraded from 6.9.2 to 6.10.0 and everything works except...

 

I have 2 identical NVMe drives and one of them gives temperature overheat warnings. Sometimes it is every few minutes and sometimes it is every few hours (not consistent).

The usual temperature is about 35-40C and the warning always reports a temperature of 84C for about one minute before it returns to normal - I guess that is the polling frequency.

This happens when working with the server and also when it is completely idle (during the night).

 

This did not happen in 6.9.2 and started immediately after the upgrade to 6.10.0.
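For anyone who wants to compare what the drive itself reports against the warning, it can be checked from the terminal; a minimal sketch, assuming the drive is /dev/nvme0 (adjust to your device):

smartctl -A /dev/nvme0

NVMe drives often expose more than one temperature sensor, so it may be worth checking whether a secondary sensor is the one reporting the 84C.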

 


Link to comment
14 hours ago, wgstarks said:

What about SMB multi channel?

I recreated a Time Machine share and enabled multichannel. I was lucky to manage a single backup. Unfortunately the next ones keep failing and make the SMB shares inaccessible (sometimes for 5 minutes, or until the Macs reboot).

 

Link to comment
9 hours ago, clevoir said:

I have tried that, but the new version of Unraid is not listed.

 


It sounds like you don't have an Internet connection from your server. This may (or may not) be simple to troubleshoot depending on your skill level. I would suggest that you follow the advice of @ljm42 at the top of this page and start a new thread in the General forum.
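A quick sanity check from the server's terminal (a sketch; any well-known IP and hostname will do):

ping -c 3 1.1.1.1      # basic connectivity
ping -c 3 unraid.net   # also exercises DNS

If the first works but the second fails, it's a DNS problem rather than the connection itself.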

Link to comment
On 5/20/2022 at 7:12 PM, JorgeB said:

I would recommend anyone running a HP MicroServer Gen8 to not update for now

One more update: @RikStigter is helping me confirm whether, as suspected, having IOMMU enabled on these servers is the source of the problem. Preliminary results look positive, since the usual errors logged on them after updating to v6.10 are gone with VT-d disabled. He will now use the server normally for a few days so we can confirm it remains all good.

 

The issue is possibly caused by the onboard NICs when VT-d is enabled. I can't tell you if it's an HP problem or a Linux issue with the new kernel; certainly nothing suggests an Unraid problem, but hopefully disabling VT-d fixes this for now. Again, servers with a Pentium or i3 CPU shouldn't have this issue since those don't support VT-d, though I would still recommend disabling it in the BIOS, since apparently it's enabled by default; that way, if they are later upgraded to a Xeon and this issue still exists, there won't be a problem.
Link to comment
Sounds like HP's "version" of the same Dell issue; having to toy with virtualization stuff in the BIOS.
 
 
Link to comment

Has 6.10 changed the ability of Docker containers to access files owned by root on the host? I noticed some containers were not able to access root-owned 700 or 755 files or directories after the update, like SSH keys or mounted volumes inside /tmp. When I changed the permissions to 755, or to 777 for directories that are written to, it worked again.

Since you can define a Docker volume to be read/write or read-only inside the container, I am wondering why I now have to set additional permissions that were not needed before. After a restart, the sub-directory I created in /tmp for a container ramdisk was set back to 711, which again caused error messages.

Is there a way to prevent this, or was this done intentionally for security reasons?

Link to comment
27 minutes ago, kennymc.c said:

Has 6.10 changed the ability of Docker containers to access files owned by root on the host? I noticed some containers were not able to access root-owned 700 or 755 files or directories after the update, like SSH keys or mounted volumes inside /tmp. When I changed the permissions to 755, or to 777 for directories that are written to, it worked again.

Since you can define a Docker volume to be read/write or read-only inside the container, I am wondering why I now have to set additional permissions that were not needed before. After a restart, the sub-directory I created in /tmp for a container ramdisk was set back to 711, which again caused error messages.

Is there a way to prevent this, or was this done intentionally for security reasons?

Some users have had permission problems with appdata. However, this is the exception rather than the norm. And I just created a test container with a mapping to /tmp/blah and the container can access it with no problems.

 

6.10 (all betas and all RCs) was tested quite extensively on my (and many other users') production servers with no issues at all, running quite an extensive set of containers. My one suggestion would be to delete and recreate the docker image (or folder) and then see if there's any change in how things work.
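For anyone wanting to run the same kind of test, a minimal sketch (assuming the alpine image; any small image will do):

mkdir -p /tmp/blah

# mount the host directory into a throwaway container, then list it and write to it
docker run --rm -v /tmp/blah:/blah alpine sh -c 'ls -ld /blah && touch /blah/test && echo OK'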

Link to comment
3 hours ago, pkoci1 said:

I recreated a Time Machine share and enabled multichannel. I was lucky to manage a single backup. Unfortunately the next ones keep failing and make the SMB shares inaccessible (sometimes for 5 minutes, or until the Macs reboot).

 

You need to set multi channel to “off”.
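For reference, the toggle is in the SMB settings in the GUI; if you manage it through the Samba extra configuration instead, the underlying Samba option (a sketch, assuming it ends up in the [global] section) is:

server multi channel support = no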

Link to comment
6 hours ago, Revan335 said:

Where is the changelog, thread, or blog post for 6.10.1?

...the changelog is linked on the download page.

All there is inside is this:

## Version 6.10.1 2022-05-21


### Management:

- startup: fix regression: support USB flash boot from other than partition 1

...still not excited and not keen on upgrading from 6.9.2... way too many users with problems, it seems.

Link to comment
On 5/22/2022 at 12:29 PM, JorgeB said:

The issue is possibly caused by the onboard NICs when VT-d is enabled. I can't tell you if it's an HP problem or a Linux issue with the new kernel

 

TL;DR: I would recommend only running v6.10.x on a server with a Broadcom NIC that uses the tg3 driver if VT-d/IOMMU is disabled, or it might in some cases cause serious stability issues, including possible filesystem corruption.

 

 

Another update, since this is an important issue: there's a new case with an IBM/Lenovo X3100 M5 server. This server uses the same NIC driver as the HP, so this appears to confirm the problem is the NIC/NIC driver when IOMMU is enabled.

 

Known problematic NICs:

 

HP Microserver Gen8:

03:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe [14e4:165f]
    DeviceName: NIC Port 1
    Subsystem: Hewlett-Packard Company NC332i Adapter [103c:2133]
    Kernel driver in use: tg3

 

IBM/Lenovo X3100 M5:

06:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5717 Gigabit Ethernet PCIe [14e4:1655] (rev 10)
    DeviceName: Broadcom 5717
    Subsystem: IBM NetXtreme BCM5717 Gigabit Ethernet PCIe [1014:0490]
    Kernel driver in use: tg3

 

HP ProLiant ML350p Gen8

02:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe [14e4:1657] (rev 01)
    DeviceName: NIC Port 1
    Subsystem: Hewlett-Packard Company NetXtreme BCM5719 Gigabit Ethernet PCIe [103c:3372]
    Kernel driver in use: tg3

 

This driver supports many different NICs; it's unclear for now if all are affected or just some, and also unclear if AMD-based servers with AMD-Vi/IOMMU enabled are affected. But for now I would recommend only running v6.10.x on a server with a Broadcom NIC that uses this driver if VT-d/IOMMU is disabled, or it might in some cases cause serious stability issues, including possible filesystem corruption.
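To check whether your own NIC uses this driver, either of these from the terminal will do (a sketch; eth0 is an assumption, adjust the interface name), and the same lspci output should also be inside the diagnostics zip:

ethtool -i eth0                       # look for "driver: tg3"

lspci -nnk | grep -A3 -i ethernet     # shows the NIC plus "Kernel driver in use"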

 

When there is a problem with one of these NICs and VT-d, you should see multiple errors similar to the ones below in the log not long after booting, usually within a couple of hours of uptime:

 

May 21 15:53:05 Tower kernel: DMAR: ERROR: DMA PTE for vPFN 0xb0780 already set (to b0780003 not 28dc74801)
May 21 15:53:05 Tower kernel: ------------[ cut here ]------------
May 21 15:53:05 Tower kernel: WARNING: CPU: 1 PID: 557 at drivers/iommu/intel/iommu.c:2408 __domain_mapping+0x2e5/0x390
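A quick way to check for these without reading through the whole log (a sketch):

grep "DMAR: ERROR" /var/log/syslog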

 

If you see that, stop using the server and disable VT-d/IOMMU ASAP. There's no need to disable VT-x/HVM, i.e., you can still run VMs (though without VT-d/IOMMU you can't pass any device through to one).

 

For Intel CPUs, VT-d can usually be disabled in the BIOS. Alternatively, you can add intel_iommu=off to the syslinux.cfg append line: on the Main page of the GUI, click on the flash device and scroll down to "Syslinux Configuration", then add it to the default boot option (the one in green).
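The result should look something like this (a sketch; keep any flags already on your append line):

label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot intel_iommu=off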

 

 

In either case, confirm it's really disabled; you can do that by clicking on "System Information" at the top right of the GUI.
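It can also be confirmed from the terminal (a sketch); with VT-d/IOMMU off there should be no "DMAR: IOMMU enabled" line:

dmesg | grep -i -e DMAR -e IOMMU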

 

 

 

Original post here:

https://forums.unraid.net/topic/123620-unraid-os-version-6100-available/?do=findComment&comment=1128822

 

Link to comment
4 hours ago, Squid said:

Some users have had permission problems with appdata. However, this is the exception rather than the norm. And I just created a test container with a mapping to /tmp/blah and the container can access it with no problems.

 

6.10 (all betas and all RCs) was tested quite extensively on my (and many other users') production servers with no issues at all, running quite an extensive set of containers. My one suggestion would be to delete and recreate the docker image (or folder) and then see if there's any change in how things work.

 

I stopped the container, deleted the folder in /tmp, and created a new one via the terminal, which had 777 permissions. After a server reboot the same folder had 755 permissions. I then deleted the Docker container including the image and recreated it, but still got permission errors. When I changed the permissions back to 777, everything went back to normal.

The affected files and directories are also owned by root inside the container. When opening the console of a container I am always root and can access these files and folders. I assume that the processes inside some containers are not always running as root, so they can't access these folders or files. So it depends on the use case.

Before the update it was somehow possible for non-root users inside a container to access all files from mounted volumes even if they were r/w/x by root only. At least this is the only explanation I can think of for why I am having problems now.
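That theory is easy to test by running a container as a non-root user against a root-owned 711 directory; a minimal sketch (99:100 being the usual nobody:users mapping on Unraid):

mkdir -p /tmp/ramdisk && chmod 711 /tmp/ramdisk
docker run --rm --user 99:100 -v /tmp/ramdisk:/data alpine ls /data   # Permission denied

chmod 777 /tmp/ramdisk
docker run --rm --user 99:100 -v /tmp/ramdisk:/data alpine ls /data   # works

And since anything under /tmp lives in RAM and is recreated on every boot, the chmod would have to be reapplied at startup, e.g. from the go file.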

Edited by kennymc.c
Link to comment
29 minutes ago, JorgeB said:

 

Another update, since this is an important issue: there's a new case with an IBM X3100 M5 server. This server uses the same NIC driver as the HP, so this appears to confirm the problem is the NIC/NIC driver when IOMMU is enabled. These are the NICs used:

 

HP Microserver Gen8:

03:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe [14e4:165f]
    DeviceName: NIC Port 1
    Subsystem: Hewlett-Packard Company NC332i Adapter [103c:2133]
    Kernel driver in use: tg3
    Kernel modules: tg3

 

IBM X3100 M5:

06:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5717 Gigabit Ethernet PCIe [14e4:1655] (rev 10)
    DeviceName: Broadcom 5717
    Subsystem: IBM NetXtreme BCM5717 Gigabit Ethernet PCIe [1014:0490]
    Kernel driver in use: tg3
    Kernel modules: tg3

 

I would recommend only running v6.10.x on a server with a Broadcom NIC that uses this driver if VT-d/IOMMU is disabled.

 

Is there a particular location in the UI, or a command to run from a terminal prompt, to gather all the NIC information you have shown above? Or a particular file inside the Diagnostics collection?

 

Link to comment

2 of my systems have problems after upgrading to 6.10.1.

 

Both have a Samsung 980 passed through to a VM, and both are unable to boot the VM (I/O error 0xc00000e9). The drives are visible when I boot from an ISO.

 

After restoring 6.9.2 the VM boots fine.

Edited by Raptor
Link to comment
48 minutes ago, Raptor said:

2 of my systems have problems after upgrading to 6.10.1.

 

Both have a Samsung 980 passed through to a VM, and both are unable to boot the VM (I/O error 0xc00000e9). The drives are visible when I boot from an ISO.

 

After restoring 6.9.2 the VM boots fine.

Possibly related to this linked issue.

If so, you should just need to grab the NVMe ID again and update your VM XML and any passthrough settings (UD or vfio-pci driver binding).
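A sketch of what that involves: find the drive's current PCI address, then make sure the hostdev entry in the VM XML still points at it (the bus address below is just an example):

lspci -nn | grep -i "non-volatile"   # note the address (e.g. 05:00.0) and the [vendor:device] ID

In the VM XML the passed-through device is referenced by that address in the hostdev block:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
</hostdev>

Any vfio-pci binding on the flash drive (if you use that method) would need the new address as well.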

 

Edited by tjb_altf4
Link to comment

Good morning guys, thank you very much for this new update, which I just installed on my Unraid/QNAP, and everything works fine, even the cache via USB (I didn't think it would work). I just have to test whether the notifications to my iCloud account arrive now, since they have never worked for me; if anyone has any ideas, I'll gladly take them.

Link to comment

Not a good experience with 6.10.

 

After the update and reboot, Unraid had unassigned my second parity disk and started at the disk selection. I selected the second parity disk again; Unraid mounted it but said the parity was wrong (it wasn't), and then a new parity sync began.

 

Beyond that, my Linux VM with Home Assistant stopped accepting Zigbee commands; HA works, but the commands have no effect.

 

I rebooted the VM but nothing changed.

 

I rebooted Unraid (stopping the parity sync first) but HA still had the issue.

 

I downgraded to 6.9.2 and HA works flawlessly again. The parity sync ended without errors.

 

I will stay on 6.9.2 for now…

Link to comment

As recommended, I made all the suggested backups before upgrading my unRAID server from 6.9.2 to 6.10.1, and I'm happy to report that everything is working fine, including my 5 Docker containers and 4 VMs :)

 

My sincere thanks to the team that made 6.10 possible!!! 👍👍

Link to comment
