At any given time, the instantaneous speed being displayed is meaningless. What matters is the time it takes versus the size of the file, i.e. the average speed. If you're moving / transferring many small files, then every OS (including locally on Windows) has massive amounts of overhead in handling all the metadata changes, directory updates, etc.
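A quick back-of-the-envelope sketch of that overhead effect (all numbers are made-up assumptions for illustration, not measurements):

```python
# Why many small files transfer slower than one big file at the same raw line speed.
LINE_SPEED = 112e6        # bytes/sec, ~1GbE theoretical max (assumption)
PER_FILE_OVERHEAD = 0.02  # seconds of metadata work per file (assumption)

def average_speed(total_bytes, file_count):
    """Average throughput once fixed per-file overhead is included."""
    transfer_time = total_bytes / LINE_SPEED
    overhead_time = file_count * PER_FILE_OVERHEAD
    return total_bytes / (transfer_time + overhead_time)

one_gib = 1 << 30
print(average_speed(one_gib, 1) / 1e6)        # one 1GiB file: ~112 MB/s
print(average_speed(one_gib, 100_000) / 1e6)  # 100k tiny files: far slower
```

Same bytes on the wire either way; the metadata work per file is what kills the average.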
When you're transferring to the server from another system, the server uses memory as a write cache, which is why you see the theoretical line speed of 112-113MB/s at first. Once that memory fills, the server has to start flushing it to the much slower hard drives, so the transfer rate drops very significantly, then picks back up, drops again, etc. The average speed is the important metric here.
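A simplified model of that burst-then-stall pattern (it ignores that flushing overlaps the transfer, and the cache size and disk speed are assumptions):

```python
# Average speed when writes land in RAM first, then throttle to disk speed.
LINE_SPEED = 112e6   # bytes/sec over the network, ~1GbE (assumption)
DISK_SPEED = 60e6    # bytes/sec sustained to the array (assumption)
RAM_CACHE = 8 << 30  # 8 GiB of free RAM acting as write cache (assumption)

def avg_speed(total_bytes):
    fast = min(total_bytes, RAM_CACHE)  # absorbed at full line speed
    slow = total_bytes - fast           # the rest is paced by the disks
    elapsed = fast / LINE_SPEED + slow / DISK_SPEED
    return total_bytes / elapsed

for size_gib in (4, 16, 64):
    print(f"{size_gib} GiB -> {avg_speed(size_gib << 30) / 1e6:.0f} MB/s")
```

Small transfers fit entirely in the cache and look like full line speed; the bigger the transfer, the closer the average creeps to the raw disk speed.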
One of Unraid's tradeoffs is that the default write mode (Settings, Disk Settings, md_write_method) is read/modify/write. With this default, only the drives involved in the write are active (the relevant data drive plus the parity drive(s); the others can stay spun down), but write speeds are by definition roughly 4x slower than the theoretical maximum of the hard drive. By changing that setting to reconstruct write, you will tend to hit the maximum of the slowest hard drive present, at the expense of every drive having to be involved (spinning up if necessary).
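The ~4x figure falls straight out of counting disk operations; here's the arithmetic as a sketch (the drive speeds are assumed, not measured):

```python
# Read/modify/write: each stripe written costs read(old data) + read(old parity)
# + write(new data) + write(new parity), i.e. 4 disk operations per write.
DISK_SPEED = 180e6  # bytes/sec sequential for a typical modern HDD (assumption)

def rmw_write_speed(disk_speed):
    # 2 reads + 2 writes per stripe -> roughly a quarter of raw speed
    return disk_speed / 4

def reconstruct_write_speed(drive_speeds):
    # All drives stream sequentially in parallel; the slowest sets the pace.
    return min(drive_speeds)

print(rmw_write_speed(DISK_SPEED) / 1e6)                      # 45.0 MB/s
print(reconstruct_write_speed([180e6, 160e6, 200e6]) / 1e6)   # 160.0 MB/s
```

So the tradeoff is exactly as described: read/modify/write spins up fewer drives but quarters your write speed; reconstruct write wakes the whole array but writes at the slowest drive's streaming rate.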
Copying files from a drive back to the same drive is the worst of all worlds on any OS for speed: the drive has to read the contents, wait for the platters to spin back around to the appropriate sectors, write the contents, then wait for them to come back around again for the next read, ad nauseam.
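A rough model of that head thrashing (seek time, streaming speed, and chunk size are all assumed numbers purely for illustration):

```python
# Same-drive copy: the head alternates between source and destination,
# paying a seek plus rotational delay on every switch.
SEQ_SPEED = 180e6  # bytes/sec streaming read or write (assumption)
SEEK_TIME = 0.012  # seconds per head move + rotational wait (assumption)
CHUNK = 64 << 20   # 64 MiB moved per read/write pass (assumption)

def same_drive_copy_speed():
    # Each chunk costs: seek + read the chunk, seek back + write the chunk.
    per_chunk = 2 * SEEK_TIME + 2 * (CHUNK / SEQ_SPEED)
    return CHUNK / per_chunk

print(same_drive_copy_speed() / 1e6)  # well under half the streaming speed
```

Even before the seek penalty, every byte passes the head twice (once read, once written), so you can never beat half the drive's streaming rate; the seeks then eat into that further.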
Windows (i.e. SMB3) gets the same local-speed behaviour for transfers / moves within the same server, because the protocol is smart enough to realize that when the source and destination are both on the same server, the data doesn't need to round-trip over the network. E.g., on my 2.5Gb network I can quite easily hit 7GB/s via Windows copying files between a pair of WD Black NVMe's.