What you saw, while definitely not desirable, was predictable based on your settings.
1. Cache: prefer means new data written to the share goes to the cache drive, and when the cache fills (more on that in a sec) Unraid starts using the array. When the mover runs, it attempts to move any overflow that went to the array back onto the cache. In short, that share is set to live on the cache exclusively, and only spill to the array when the cache has no room left.
2. Unraid decides where to write new data based on several settings, and when a target drive or pool gets close to full, it relies on the minimum free setting. That needs to be set larger than the single largest file you intend to write to the share, so no individual drive gets completely filled. Unraid can't know how large a file will be until it's fully written, so it's up to you to set a free space margin that makes sense for your use case.
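The minimum free rule above works roughly like this. This is a simplified sketch for illustration only, not Unraid's actual code, and the drive names and sizes are made up:

```python
# Illustrative sketch of the "minimum free" allocation rule.
# A drive is only eligible for a new write if its free space exceeds the
# share's minimum free setting; since the incoming file's size is unknown
# until it's fully written, the margin must exceed your largest expected file.

def pick_target(drives, minimum_free):
    """Return the first drive whose free space exceeds minimum_free, else None."""
    for name, free_bytes in drives:
        if free_bytes > minimum_free:
            return name
    return None  # no eligible drive: better to refuse the write than fill a disk

# 20 GiB free on cache, 500 GiB free on disk1, minimum free set to 50 GiB:
drives = [("cache", 20 * 2**30), ("disk1", 500 * 2**30)]
print(pick_target(drives, minimum_free=50 * 2**30))  # cache is skipped -> disk1
```

With minimum free set to zero, the cache stays "eligible" right up until it is completely full, which is exactly the failure mode described above.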
Combine cache:prefer with a floor of zero free space, and when the mover ran it filled the pool and crashed the user share system. Completely filling any filesystem and then attempting to write more data can have unpredictable results, especially with a layered setup like Unraid's user share system.
The good news is that the files on each individual drive are fine; it's just the user share system that lost the plot. Rebooting should bring it back.
To fix the issue, set the share in question to cache:yes and run the mover so it empties the cache drive to the array. When that completes, set it to cache:no so further writes go straight to the array. There's no point using the cache for the initial bulk transfer: you will inevitably fill it up and then have to wait for the mover. It's faster overall to write directly to the array, since each piece of data only gets transferred once.
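The arithmetic behind "only transferred once" is simple. Rough numbers for a bulk copy, assuming you have to fill the cache and wait for the mover each cycle, and ignoring parity overhead:

```python
# Rough comparison of total bytes written during a bulk load
# (made-up numbers; ignores parity writes and any overlap between
# copying and moving).

data_gib = 2000  # size of the initial transfer

# Direct to array: every byte is written exactly once.
direct_writes = data_gib

# Through the cache with cache:yes: every byte is written to the cache,
# then read back and written again to the array when the mover runs.
via_cache_writes = 2 * data_gib

print(direct_writes, via_cache_writes)  # 2000 4000
```

On top of doubling the bytes written, the copy stalls every time the cache fills while you wait for the mover, so direct-to-array wins for large one-time transfers.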
You also need to set minimum free space for the shares and pools to a reasonable figure so Unraid can allocate writes between disks and pools without completely running out of space on any single volume. Such is the downside of individual filesystems, which allow differential spindown and isolated drive recovery, as opposed to traditional RAID.
I recommend turning on the built-in help in the Unraid GUI (the question mark icon); it expands the help text that explains the share settings in more detail.