Looks like the root problem here is that the file system on your cache drive is corrupted. That's often caused by bad memory. Run a memtest.
(Side note: if you have no plans to upgrade to a multi-device pool, you're usually better off using XFS, as it's more forgiving on systems that aren't 100% rock stable)
Those 2 paths are actually more or less identical. The first refers to the share as it exists on the cache drive only, and the second refers to the same share across all drives, including the cache.
You'd want to stop all the running containers and edit any of them that had /mnt/cache/appdata to use /mnt/cache_apps/appdata so they refer to the correct pool. Also set the appdata share to use cache_apps. No need to change /mnt/user/appdata to refer to anything else.
Not necessarily your problem, but try running your memory at its rated speed of 2133 instead of its overclocked XMP profile of 2666
In addition, run a Memtest from the boot menu
Yeah, I've never seen "defunct" next to a process in diagnostics before, and ffmpeg had a lot of defunct processes. My assumption is that as time went on, more and more kept showing up via Shinobi, and that eventually crashed the system. Restarting it daily may help, but it's not the real solution.
Run the file system check on disk 4 again.
Ultimately you're going to have to rebuild disk 4 onto itself (stop the array, remove the drive from the list, start the array, stop the array, reassign the drive, start the array, and rebuild)
Download the zip file from https://unraid.net/download and overwrite all of the bz* files on the flash drive with those in the archive.
Downgrading via Tools can only go back to the last version you had installed (rc3)
As to the linked post, I've never seen any issues. Are you sure the IP address of the server is properly referenced in the rules?
I'll leave it to others to answer, but only the cache is read-only (and by extension, the docker.img file). Mover can't properly move because it can't remove the files from the cache drive.
I will say, however, that on a single-device pool, if you have no intention of making it multi-device, you're better off with XFS than BTRFS. While BTRFS does work, it depends upon an absolutely rock-solid system, and XFS is more forgiving.
Yes, because parity 2 is dependent upon drive order; if you rearrange the order after adding P2, its contents will be invalid and have to be rebuilt.
P1 doesn't care about drive order
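A toy sketch of why (this uses simplified RAID-6 style P/Q arithmetic in GF(2^8) as an illustration, not Unraid's actual code): P is a plain XOR, which is commutative, so order doesn't matter; Q weights each block by its drive slot, so it does.

```python
from functools import reduce

def gf_mul(a, b):
    # Multiply in GF(2^8) with the 0x1d reduction polynomial (RAID-6 style)
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1d
        b >>= 1
    return r

def pow_g(i):
    # g = 2 raised to the i-th power in GF(2^8)
    r = 1
    for _ in range(i):
        r = gf_mul(r, 2)
    return r

def p_parity(blocks):
    # P parity: plain XOR of all blocks -- commutative, so order-independent
    return reduce(lambda a, b: a ^ b, blocks)

def q_parity(blocks):
    # Q parity: each block is weighted by g**slot, so drive order matters
    return reduce(lambda a, b: a ^ b,
                  (gf_mul(pow_g(slot), d) for slot, d in enumerate(blocks)))

data = [0x12, 0x34, 0x56]
assert p_parity(data) == p_parity(list(reversed(data)))  # P survives reordering
assert q_parity(data) != q_parity(list(reversed(data)))  # Q does not
```

Swapping two data drives leaves every XOR term in P unchanged, but swaps the coefficients attached to those two blocks in Q, so the stored Q no longer matches.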
Not going to disagree per se, but they are 2 different "environments" and can't be compared directly to each other
Out of curiosity, why is it impossible?
Try making a change (any change) to Settings - Network Settings and then reverting it (then apply)
System thinks you've got 2 NICs installed, but only 1 is appearing in ethtool
No. It has to be via the internet (or you manually place it [appropriately named] within /var/lib/docker/unraid/images). Once it successfully downloads, though, it is stored within the docker.img, and the URL is never accessed again unless you change it.
I'd say it's bad, because right now the errors are only being corrected by virtue of the module being ECC. IMO, ECC is purchased so that you know when a module is beginning to return errors and can replace it, rather than waiting for it to get bad enough that the errors can no longer be corrected.
The SEL should tell you exactly which DIMM it is (or mce says that it's DIMM0, but I'd trust the SEL more than what mce says for identification)
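As a toy illustration of what "correctable" means here, a Hamming(7,4) sketch in Python (far simpler than real DIMM ECC, but the same idea: the code both fixes the flipped bit and reports that it had to, which is what lands in the SEL):

```python
def encode(nibble):
    # Hamming(7,4): 4 data bits d0..d3 plus 3 parity bits
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(bits):
    # Each syndrome bit re-checks one parity group; together they
    # spell out the position (1..7) of a single flipped bit
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        bits = bits[:]
        bits[syndrome - 1] ^= 1  # correct the flipped bit in place
    d = [bits[2], bits[4], bits[5], bits[6]]
    return sum(b << i for i, b in enumerate(d)), syndrome

word = encode(0b1011)
word[4] ^= 1                               # simulate a single-bit memory error
value, syndrome = decode(word)
assert value == 0b1011 and syndrome != 0   # data recovered, error reported
```

A module that keeps producing nonzero syndromes is still handing back correct data, but it's telling you it won't be able to forever.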
Any USB flash that the motherboard can boot from will work provided it returns a unique GUID
Older drives tend to be more reliable, as manufacturers weren't cutting corners on manufacturing costs back then to the same extent that they are now (USB manufacturers hope and expect that you'll lose them)
There is zero benefit to using a flash device that is USB3.1 vs 2.0 (or even 1.1) as the OS runs completely from RAM and only accesses the flash during the initial load of the archives at boot time (and the odd settings change)
Every filesystem potentially has that issue. After the data to be written is transferred to the drive, the filesystem doesn't particularly care about it anymore and relies upon the drive itself to actually write the data.
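A minimal Python sketch of that handoff (the file name is just an illustration): the application and filesystem can only push data toward the drive; fsync() asks for the commit, but the final write is the drive's responsibility.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.txt")
with open(path, "w") as f:
    f.write("important data\n")
    f.flush()             # application buffer -> kernel page cache
    os.fsync(f.fileno())  # kernel -> drive: "commit this to stable storage"
    # past this point, whether the bits actually hit the platter/NAND
    # is up to the drive (and its write cache), not the filesystem

with open(path) as f:
    print(f.read(), end="")
```

If power drops while the drive is still holding that data in its own cache, no filesystem choice saves you, which is why the UPS advice below matters more than XFS vs BTRFS here.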
Use a UPS on any computer system you care about. That is the primary defense against this issue, not choice of OS or filesystem