Unraid OS version 6.12.4 available


ljm42


On 10/4/2023 at 12:31 PM, jpowell8672 said:

Run Tools > Update Assistant first then follow release notes upgrade procedure:

 

Version 6.12.4 2023-08-31

Upgrade notes

Known issues

Please see the 6.12.0 release notes for general known issues.

Rolling back

Before rolling back to an earlier version, it is important to ensure Bridging is enabled:

Settings > Network Settings > eth0 > Enable Bridging = Yes

Then Start the array (along with the Docker and VM services) to update your Docker containers, VMs, and WireGuard tunnels back to their previous settings which should work in older releases.

Once in the older version, confirm these settings are correct for your setup:

Settings > Docker > Host access to custom networks

Settings > Docker > Docker custom network type

If rolling back earlier than 6.12.0, also see the 6.12.0 release notes.

Fix for macvlan call traces

The big news in this release is that we have resolved issues related to macvlan call traces and crashes!

The root of the problem is that macvlan used for custom Docker networks is unreliable when the parent interface is a bridge (like br0); it works best on a physical interface (like eth0) or a bond (like bond0). We believe this to be a longstanding kernel issue and have posted a bug report.

If you are getting call traces related to macvlan, as a first step we recommend navigating to Settings > Docker, switching to advanced view, and changing the "Docker custom network type" from macvlan to ipvlan. This is the default configuration that Unraid has shipped with since version 6.11.5 and should work for most systems. If you are happy with this setting, then you are done! You will have no more call traces related to macvlan and can skip ahead to the next section.
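
If you are unsure which driver your system is currently using, you can also check from the command line. A quick sketch, assuming the custom Docker network has the usual br0 name (names vary by setup):

docker network ls                                   # DRIVER column shows macvlan or ipvlan
docker network inspect br0 --format '{{.Driver}}'   # assumes a custom network named br0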

However, some users have reported issues with port forwarding from certain routers (Fritzbox) and reduced functionality with advanced network management tools (Ubiquiti) when in ipvlan mode.

For those users, we have a new method that reworks networking to avoid issues with macvlan. Tweak a few settings and your Docker containers, VMs, and WireGuard tunnels should automatically adjust to use them:

Settings > Network Settings > eth0 > Enable Bonding = Yes or No, either works with this solution

Settings > Network Settings > eth0 > Enable Bridging = No (this will automatically enable macvlan)

Settings > Docker > Host access to custom networks = Enabled

Note: if you previously used the 2-nic docker segmentation method, you will also want to revert that:

Settings > Docker > custom network on interface eth0 or bond0 (i.e. make sure eth0/bond0 is configured for the custom network, not eth1/bond1)

When you Start the array, the host, VMs, and Docker containers will all be able to communicate, and there should be no more call traces!
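
Once the array is back up, you can sanity-check the result from a console. A rough sketch, assuming the default eth0/vhost0 naming:

ip -d link show vhost0    # should exist and report a macvtap sitting on top of eth0
docker network ls         # should now list an eth0 (or bond0) custom network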

Troubleshooting

If your Docker containers with custom IPs are not starting, edit them and change the "Network type" to "Custom: eth0" or "Custom: bond0". We attempted to do this automatically, but depending on how things were customized you may need to do it manually.

If your VMs are having network issues, edit them and set the Network Source to "vhost0". Also, ensure there is a MAC address assigned.

If your WireGuard tunnels will not start, make a dummy change to each tunnel and save.

If you are having issues port forwarding to Docker containers (particularly with a Fritzbox router), delete and recreate the port forward in your router.

To get a little more technical...

After upgrading to this release, if bridging remains enabled on eth0 then everything works as it used to. You can attempt to work around the call traces by disabling the custom Docker network, or using ipvlan instead of macvlan, or using the 2-nic Docker segmentation method with containers on eth1.

Starting with this release, when you disable bridging on eth0 we create a new macvtap network for Docker containers and VMs to use. It has a parent of eth0 instead of br0, which is how we avoid the call traces.

A side benefit is that macvtap networks are reported to be faster than bridged networks, so you may see speed improvements when communicating with Docker containers and VMs.

FYI: With bridging disabled for the main interface (eth0), the Docker custom network type will be set to macvlan and hidden, unless there are other interfaces on your system that have bridging enabled, in which case the legacy ipvlan option is available. To use the new fix being discussed here you will want to keep that set to macvlan.

 

https://docs.unraid.net/unraid-os/release-notes/6.12.4/#:~:text=This release resolves corner cases,properly shut the system down.

Thanks for this. I'm not sure what my setup is running, I guess macvlan:

 

(screenshot attached)

 

I really don't understand what any of this means though TBH. In the past I've updated unraid and it's usually worked fine.


Hey All

Just want to thank the Unraid team and LT family, as I finally made the leap of faith from 6.9.2 --> 6.12.4 with no real issues detected in the logs.

 

I performed the usual steps of backing up the flash drive, backing up my folders to an external 20TB HDD, updating all apps and plugins, and stopping Docker, the VMs and the array, then proceeded with the update process.

 

Once rebooted, the array restarted and I enabled Docker and the VMs again. Love the GUI and ZFS, but I do not know if I will convert my XFS drives to ZFS even though all my drive sizes are the same. Since my Unraid server is just a basic NAS with some Docker stuff, it all went smoothly. While snapshots are good, my external rclone backup seems to do the trick for now.

 

I did lose my Docker Folders, so I see I need to remove the old plugin and grab the new Folder View plugin. I assume it will not retain my old structure, but that gives me something to do while I play around. My previous 6.9.2 was up for about 70 days without any issues since my last reboot due to a power failure; I really need to get new batteries for my UPS.

 

Thank you again, and I do hope the issues others are having are solved quickly

PEACE

Kosti 

Edited by Kosti

Quick question,

Sorry if it doesn't belong here, but I'm wanting to convert my cache nvme drive to ZFS and tried to move everything off the cache to the array, but mover doesn't fire up.

I have cache --> array set for the following:

appdata

isos

domains

system

I've even stopped Docker, the VMs, libvirt etc. and it still doesn't move. Now I can only assume it doesn't overwrite existing identical files or copies of the same folder, so they are either duplicates or I stuffed up somewhere

 

Is it safe to delete these if they exist on the array and move them back later after I reformat the cache drive to ZFS, or should I just leave it as BTRFS?

2 hours ago, Kosti said:

I've even stopped Docker, the VMs, libvirt etc. and it still doesn't move. Now I can only assume it doesn't overwrite existing identical files or copies of the same folder, so they are either duplicates or I stuffed up somewhere

Yes - mover will not overwrite duplicates, as in normal operation this should not happen.

 

You need to decide which copy is the more recent (probably the cached ones) and then manually delete any duplicates off the relevant drive. Make sure you go directly to a drive and not via a User Share, as if you go via the User Share you are quite likely to delete the wrong copy. After doing this, mover should then start working as expected.
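
A quick way to see which physical drives actually hold copies before deleting anything; a sketch, using "system" as a stand-in for whichever share you are checking (the pool may not be named "cache" on every setup):

ls -ld /mnt/disk*/system /mnt/cache/system 2>/dev/null                 # which disks/pools have the folder
find /mnt/disk*/system /mnt/cache/system -type f 2>/dev/null | sort    # compare the actual files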


I just wanted to check if the next big version update might allow multiple unraid arrays? 

 

Just because I would like to separate some slower drives out. 

Older drives top out around 100MB/s whereas my newer drives can do 220MB/s.

 

Would rather not have more than 16x drives in the main array due to the slowness of parity checks. 

 

I know there is a plan for pool to pool moving.

 

(screenshot attached)

 

Edited by dopeytree
4 hours ago, dopeytree said:

Would rather not have more than 16x drives in the main array due to the slowness of parity checks. 

Note that the number of drives does not affect the speed of parity checks.   The length of time a parity check takes is primarily determined by the size of the largest parity drive.

4 hours ago, dopeytree said:

I just wanted to check if the next big version update might allow multiple unraid arrays? 

I think the multiple arrays support may have slipped to the 6.14 release with the 6.13 release concentrating on improving ZFS support.   I could be wrong about that though.

2 hours ago, itimpi said:

Note that the number of drives does not affect the speed of parity checks.   The length of time a parity check takes is primarily determined by the size of the largest parity drive.

 

Well yes, and no.

 

While obviously the size of the largest drive in the array (and by design, the Parity drive) is a major factor, having smaller, slower drives mixed in can also play a significant role in how fast the overall parity check/rebuild takes.  Assuming, of course, that other drive activity isn't interfering with the drives 'syncing up' their access cycles to maximize the parity check speed.

 

All spinning drives start off the check at their fastest speed, and end at their slowest speed at the top end of their capacity.  As parity checks will run only as fast as the slowest drive, having mixed sizes in your array will increase the overall time it takes to run a parity check.

 

Let's look at this example:  10 drives + 1 parity in the array - a 16TB parity drive, 9 16TB data drives, 1 8TB data drive.  All drives have a linear access speed curve - 250 MB/s at the first sector, 150 MB/s at the last sector.

 

Once you start the parity check (and give the drives a little time to sync up) you are moving right along at around 250 MB/s. The 8TB drive starts slowing down faster than the others. 4TB along, you are now down to 200 MB/s; at 7.9TB you are at 150 MB/s.

 

Once you clear the 8TB point, the speed will jump from 150 MB/s back up to 200 MB/s, then ramp back down to 150 MB/s over the rest of the check, declining at half the rate it did over the first 8TB.

 

This is exactly what I saw when I recently replaced the remaining 8TB drives in my array with those which matched the 16TB drives.  My average speed jumped up to 186 MB/s from 166 MB/s.
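
For anyone who wants to play with the numbers, here is a rough sketch of that example (same assumed linear 250 -> 150 MB/s curve; real drives are not this tidy), computing the check time as the integral of 1/speed over position:

awk 'BEGIN {
  v0 = 250; v1 = 150                                # MB/s at the first/last sector
  # time (s) to scan a span of a drive of capacity C TB with a linear speed curve:
  #   t = 1e6 * C/(v0-v1) * ln( v(start of span) / v(end of span) )
  t_uniform = 1e6 * 16/(v0-v1) * log(v0/v1)         # all-16TB array, full 16TB
  t_first8  = 1e6 *  8/(v0-v1) * log(v0/v1)         # first 8TB, limited by the 8TB drive
  v_at_8tb  = v0 - (v0-v1) * 8/16                   # 16TB drives are at 200 MB/s here
  t_last8   = 1e6 * 16/(v0-v1) * log(v_at_8tb/v1)   # remaining 8TB of the 16TB drives
  printf "all 16TB drives: %.1f h, avg %.0f MB/s\n", t_uniform/3600, 16e6/t_uniform
  printf "with one 8TB:    %.1f h, avg %.0f MB/s\n", (t_first8+t_last8)/3600, 16e6/(t_first8+t_last8)
}'

With these made-up numbers the mixed array comes out roughly 6% slower overall, which points the same direction as the jump described above.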

Edited by ConnerVT
speeling
On 10/25/2023 at 9:49 PM, itimpi said:

Yes - mover will not overwrite duplicates, as in normal operation this should not happen.

 

You need to decide which copy is the more recent (probably the cached ones) and then manually delete any duplicates off the relevant drive. Make sure you go directly to a drive and not via a User Share, as if you go via the User Share you are quite likely to delete the wrong copy. After doing this, mover should then start working as expected.

Thanks for the tips!

 

Even though there were no duplicates, it wouldn't migrate the files with mover, so I manually moved these folders from cache to disk (x), then formatted the nvme to encrypted ZFS, set the mover direction to array --> cache, and ran mover. Now the files are being moved off the array and onto the cache.

However, I think I did break something, as I deleted an empty folder used for my backups. I have a modified script to back up to external USB using rsync, but this now fails:

 

rsync: [sender] change_dir "/mnt/user/backups#342#200#235#012/mnt/user" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1336) [sender=3.2.7]
true
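
For reference, the #ooo sequences in that message are rsync's octal escapes for raw bytes it found in the source path. A quick sketch of decoding them (the script path below is only a placeholder) shows a curly closing quote plus a newline embedded after "backups":

printf 'backups\342\200\235\012' | od -c           # \342\200\235 = ” (U+201D), \012 = newline
grep -n $'\342\200\235' /path/to/backup-script.sh  # placeholder path: look for the stray character in the script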

 

Not sure how to fix this up? Does it matter who executes the script, root or user?


Quick update: mover just finished, however only a few folders "moved" and some stayed on the disks, for example:

 

Domain

Isos

System

|->Docker

|->Libvirt

 

The above folders are empty, but they were not created on the cache nvme either, with the exception of the System folders. And mover did not delete the folders when it moved the docker & libvirt files.

 

Does mover not create empty directories and remove them from the current drive?

 

Should I move Domains & isos manually to the cache drive and delete the System folder from the disk?

 

Also, the appdata folder on the disk shows 20 folders but mover only moved 17, and all the folders remained on the disk as well. Should I manually move them?

 

I have not started the VMs or Docker at present; I'm just not sure if I need to move them and clean up the disks manually, or if I've broken something.

 

I also see my ZFS memory is at 99% full
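
A side note in case it helps: that "ZFS memory" figure generally reflects the ZFS ARC (read cache), and the ARC growing to its configured ceiling is normal. A rough sketch for checking the current ARC size against its limit (assumes the standard arcstats file exposed by the ZFS kernel module):

awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats   # values in bytes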

Edited by Kosti
added ZFS memory
  • 2 weeks later...

I've followed the instructions to change over from a bridged setup to the new eth0 setup for my Dockers and VMs. The dockers migrated without an issue and they're currently working as normal. My Win 10 VM is also working as normal except for the fact that I can't connect to my UnRAID shares or even ping my UnRAID server from the VM now that I've changed the network settings. Is there something else I need to do to get that working again?

  • 3 months later...

When I apply the macvlan fix as described here https://docs.unraid.net/unraid-os/release-notes/6.12.4/, I get the problem that vhost0 and eth0 both come up as interfaces with the same IP address as the Unraid server (192.168.14.15). My UniFi network is now complaining that there is an IP conflict, with 2 clients being assigned the same IP.

 

While this is not creating functional problems, it throws warnings in the UniFi log and prevents me from setting local DNS, so I would be interested to know if there is a fix for this?


route

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         unifi.internal  0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-a4b11a9a27a1
192.168.14.0    0.0.0.0         255.255.255.0   U     0      0        0 vhost0
192.168.14.0    0.0.0.0         255.255.255.0   U     1      0        0 eth0



ifconfig

vhost0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.14.15  netmask 255.255.255.0  broadcast 0.0.0.0
        ether XX:XX:XX:XX:XX  txqueuelen 500  (Ethernet)
        RX packets 42631  bytes 11758180 (11.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 33271  bytes 66624456 (63.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether XX:XX:XX:XX:XX  txqueuelen 0  (Ethernet)
        RX packets 27070  bytes 57222523 (54.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28877  bytes 7967471 (7.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.14.15  netmask 255.255.255.0  broadcast 0.0.0.0
        ether XX:XX:XX:XX:XX  txqueuelen 1000  (Ethernet)
        RX packets 175318  bytes 186774948 (178.1 MiB)
        RX errors 0  dropped 40  overruns 0  frame 0
        TX packets 66106  bytes 25680102 (24.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
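
A compact way to see the two MAC addresses sitting behind that single IP (a sketch; vhost0 is the macvtap interface the fix creates, so it carries its own MAC, which is presumably why the controller counts a second client):

ip -br link show    # one line per interface: name, state, MAC address
ip -br addr show    # one line per interface: name, state, IP addresses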

 

Edited by rob_robot
