Depends on the size of your cache pool and the amount of data you are transferring.
If your cache pool has enough free space to hold all the data in this particular batch, leave the share set to cache: Yes. If the mover is scheduled for a time when you aren't using the server, everything gets moved to the array at the slower array speed while you sleep.
If you have more data than will fit in the pool, the transfer will either give you an out-of-space error or start writing directly to the slower array once the pool fills up, depending on your settings. Then, when the mover next runs, it moves the data from the pool to the array, freeing up pool space. Either way, the transfer takes longer than if you had just sent it directly to the array in the first place.
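As a rough sketch of that decision, here's the simplified logic for where a write lands on a cache: Yes share. This is an illustrative model, not Unraid's actual code; the function name, parameters, and the `overflow_to_array` flag are assumptions standing in for your share and minimum-free-space settings.

```python
def write_target(free_cache_gb, write_gb, overflow_to_array):
    """Illustrative (assumed, simplified) decision for where an
    incoming write lands on a cache-enabled share."""
    if write_gb <= free_cache_gb:
        return "cache"            # fits in the pool: fast write
    if overflow_to_array:
        return "array"            # spills to the slower parity array
    return "error: out of space"  # or the transfer simply fails
```

Either outcome on the last two branches means you're no longer getting cache-speed writes.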
Writing to the array is slower than writing to the cache pool in a typical system, but the data needs to end up on the array eventually anyway.
tldr: Using a cache pool to write to a parity-protected array share doesn't speed up the parity array. It's just a fast temporary spot to put the data so the server can deal with it later. The data still has to be written at array speed, but you don't have to wait on it.
Here's an extreme example. You have 10TB of data to transfer and a 128GB cache pool. The first 128GB writes fast, then the transfer slows down and the pool shows full. You manually run the mover; now the array is receiving writes from the mover, and your transfer slows down even further. The mover can't empty the pool as fast as you can fill it, so you are stuck at a snail's pace.
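You can put rough numbers on that scenario with a back-of-the-envelope calculation. All the speeds here are illustrative assumptions (a 1 GB/s pool, a 0.1 GB/s parity array, and a guessed 50% slowdown from the mover and your transfer fighting over the disks), not Unraid measurements:

```python
CACHE_GB = 128
TOTAL_GB = 10_000      # ~10TB batch
CACHE_SPEED = 1.0      # GB/s, assumed SSD pool write speed
ARRAY_SPEED = 0.1      # GB/s, assumed parity array write speed
CONTENTION = 0.5       # assumed slowdown when the mover and your
                       # transfer hit the disks at the same time

def hours(seconds):
    return seconds / 3600

# Case 1: write directly to the array, no cache involved.
direct = TOTAL_GB / ARRAY_SPEED

# Case 2: fill the cache fast, then crawl behind the mover.
fast_phase = CACHE_GB / CACHE_SPEED
# Once the pool is full, you can only write as fast as the mover
# frees space, and the mover itself is slowed by your writes.
slow_phase = (TOTAL_GB - CACHE_GB) / (ARRAY_SPEED * CONTENTION)
via_cache = fast_phase + slow_phase

print(f"direct to array: {hours(direct):.1f} h")
print(f"via full cache:  {hours(via_cache):.1f} h")
```

Under these assumed numbers, overflowing the pool ends up roughly twice as slow as writing straight to the array, which is the "snail's pace" above.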