It's a very esoteric piece of knowledge (and not documented anywhere). I found out about it this July, and it seems that only @BRiT and @itimpi knew about it previously.
Yeah, I don't know. I first thought that mover might have done something at the time, but it looks like the restore finished 2 minutes before mover was set to kick in.
Are you passing any of the UD mounts to a docker container? If so, they should be set to an access mode of Slave.
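For reference, this is roughly the equivalent if you were running the container by hand instead of through the unRaid template (the mount path and image name here are just placeholders):

```sh
# Bind-mount an Unassigned Devices disk into a container with slave
# propagation: mounts/unmounts on the host propagate into the
# container, but not the other way around. Path and image are
# examples only.
docker run -d \
  --name example \
  --mount type=bind,source=/mnt/disks/my_ud_disk,target=/data,bind-propagation=slave \
  some/image:latest
```

The Slave setting matters because UD can mount and unmount devices while the container is running; without it, the container can end up pinning a stale copy of the mount.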
But the reason it's stuck trying to stop the array is that one of the containers got stuck in a zombie state and won't stop, which leaves the system unable to unmount the cache drive. Could be related to the above. Easiest fix is to simply restart the system.
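If you want to confirm what's pinning the mount before you reboot, something along these lines from the console will usually show it (assuming the cache is mounted at /mnt/cache):

```sh
# Show any processes still holding files open under the cache mount;
# a hung container process here is what blocks the unmount.
fuser -vm /mnt/cache
# Alternatively:
lsof +D /mnt/cache 2>/dev/null | head
```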
Someone is trying to log in to the server with usernames of admin, Administrator, administrator, root, user, etc.
Have you got unRaid's webUI accessible to the outside world?
Technically yes, but I'd avoid it, as the SATA connectors are terrible to begin with, and breathing too hard on (or touching) another one could start to cause problems on another drive, which is the last thing you want in the middle of a rebuild.
It should also be noted that when you're dealing with that many files (and at 100k files, most of them are small), the filesystem overhead of figuring out where to place the files, creating the directory entries, etc. starts to become very noticeable. It's no different than any other OS doing a bulk copy like that.
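As a rough (and admittedly crude) illustration you can try on any Linux box, writing the same amount of data as thousands of small files versus one big file makes the metadata cost obvious:

```sh
# ~10 MB as 10,000 x 1 KB files vs. one 10 MB file. Part of the
# slowdown in the loop is shell overhead, but the per-file metadata
# work (allocation, directory entries) is the real cost at scale.
mkdir -p /tmp/smallfiles
time for i in $(seq 1 10000); do
  head -c 1024 /dev/zero > /tmp/smallfiles/file_$i
done
time head -c 10485760 /dev/zero > /tmp/bigfile
```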
The unRaid.net plugin isn't compatible with unRaid versions before 6.8, and isn't even ready for primetime yet.
The Docker Hub guys changed the API, with the result that unRaid always shows an update being available for most containers. If the AutoUpdate plugin is up to date, it patches the OS to fix this problem.
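For the curious, an update check ultimately boils down to comparing the digest of the locally pulled image against what the registry reports for the tag, roughly like this against Docker Hub's v2 API (library/nginx is just an example repository), and it's that exchange that broke when the API changed:

```sh
# Grab a pull token for the repository, then ask the registry for
# the manifest digest of the :latest tag.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/nginx:pull" \
  | grep -o '"token":"[^"]*"' | head -1 | cut -d'"' -f4)
curl -sI \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/library/nginx/manifests/latest" \
  | grep -i docker-content-digest
# If this digest doesn't match the local image's digest, an update
# gets flagged -- so a changed response means false "update
# available" notices everywhere.
```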