This statement is not true! You can transfer more data than the size of the cache disk as long as:
You have the Use Cache setting for the shares set to Yes for files that should end up on the array but go via the cache, or Prefer for files you want to end up on the cache if room permits but go to the array otherwise.
You have the Minimum Free Space value for the cache set larger than any single file to be transferred. A typical recommendation is twice the size of the largest file, to give some headroom. You do NOT want the default value of 0 for this setting. Personally I do not think a value of 0 should even be allowed by Unraid, and a default of something like 10% of the pool size would suit most people better.
Files are being transferred one at a time (which would be typical). If multiple files are being transferred in parallel then the Minimum Free Space value must be increased to allow for the number of parallel transfers that can occur.
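As a rough worked example of the sizing rule above (the helper function and numbers here are my own illustration, not anything built into Unraid):

```python
def min_free_space(largest_file_bytes, parallel_transfers=1, headroom=2):
    # Rule of thumb from above: twice the largest file, scaled up by the
    # number of transfers that can run at once. Parameters are illustrative.
    return headroom * largest_file_bytes * parallel_transfers

GiB = 1024 ** 3
# Largest file is 20 GiB and two transfers may run in parallel:
print(min_free_space(20 * GiB, parallel_transfers=2) // GiB)  # prints 80
```

So in that example you would set Minimum Free Space for the cache to at least 80 GiB.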
As long as this is true, then when Unraid sees the free space fall below the Minimum Free Space setting for the cache it will start by-passing the cache for new files and write them directly to the array. It is true, however, that there is not much point in having a cache drive that is smaller than the amount of new data you write between mover runs.
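The decision Unraid makes for each new file can be sketched like this (a simplified illustration, not Unraid's actual internals):

```python
def write_target(free_cache_bytes, min_free_bytes):
    # Once free space on the cache pool falls below the Minimum Free Space
    # setting, new files are written directly to the array instead.
    return "array" if free_cache_bytes < min_free_bytes else "cache"

GiB = 1024 ** 3
print(write_target(30 * GiB, 40 * GiB))  # prints array
print(write_target(50 * GiB, 40 * GiB))  # prints cache
```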
For the initial load onto a new Unraid system it is advantageous, from a performance perspective, not to have a parity drive assigned: most files will probably end up by-passing the cache, so you do not want the performance penalty of updating parity (unless you are prepared to accept it to keep the data on the array drives protected from the outset).