Mr.Cake

Members
  • Posts: 14



  1. Loading the dashboard GUI with VMs that have CD-ROM drives attached causes the drive holding the ISO to spin up. The VM does not need to be started, and it will still happen even if the VM is hidden using the "started only" view. Not really a bug, but an unnecessary annoyance. Story: I kept seeing my machine spin up a drive whenever I logged in, so I used inotifywait to check what was being accessed. I unmapped all ISOs from the VMs, and there was no more drive spin-up.
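For anyone wanting to repeat that check, here is a minimal inotifywait sketch. The watch path is an assumption (it defaults to /tmp so it runs anywhere); point it at whichever share holds your ISOs:

```shell
# Watch a directory tree and print which files get opened or read.
# WATCH_DIR is hypothetical; substitute the mount backing the disk
# that keeps spinning up (e.g. your ISO share).
WATCH_DIR="${WATCH_DIR:-/tmp}"
if command -v inotifywait >/dev/null 2>&1; then
  # -m monitors continuously, -r recurses; timeout caps the run here
  timeout 2 inotifywait -m -r -e open,access "$WATCH_DIR" || true
  status=watched
else
  status=missing-inotify-tools
fi
echo "$status"
```

If inotify-tools isn't installed, install it first; the output lists each file as it is touched, which is enough to identify what is waking the disk.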
  2. Since I haven't found a solution, I'm going to recreate the array to try and fix this. Am I able to keep the cache drive as storage while removing all the spinners from the array? I'll be zeroing the drives if necessary; is there some way to have Unraid forget drive assignments so I can first test whether that fixes it? To be clear, I've removed all plugins, Docker is off, and there are no VMs, and it still happens. For whatever reason Unraid feels the need to read from the destination disk when it doesn't need to; the reading only occurs when you start a transfer. It would be great to know why this has happened, though, as I won't be able to restart next time. I think Seafile might be breaking diagnostics, so I've attached what it had grabbed before it failed. unraid-diagnostics-20230115-0204.zip
  3. Whoops, my mistake. Once I worked out the issue, I added some default routes to get full speed. unraid-diagnostics-20221004-2157.zip
  4. TL;DR: Only able to get ~95 MB/s when Unraid is sending data, but full speed (2.5GbE) in the receive direction, when the NIC is bridged. The network setup is Unraid > switch > desktop; all three are 2.5GbE. The computers have two NICs each (motherboard and PCIe card), with one pair direct-connecting each computer. All NICs are some form of Realtek. I recently got a 2.5GbE switch to remove the direct connections and simplify accessing Unraid, and of course tested the speeds straight away. Receiving data off the server is only ~1GbE; sending data to it runs at the full 2.5GbE. Confirmed with iperf3 testing. Initially I thought it was a Windows issue, but then a third computer had the same problem. After many hours of troubleshooting I worked out that the problem occurs when eth0 (r8169) is bridged. Take it out of the bridge, and it is full speed straight away. You can sometimes get the tests to start at full speed and then drop back to 1GbE; other times they only run at 1GbE.
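To reproduce the directional test, here is an iperf3 sketch. The `-R` flag reverses the transfer so the server sends, which is the slow path described above. It is shown against loopback so it runs anywhere iperf3 is installed; against a real server you would replace 127.0.0.1 with the Unraid host and skip starting a local server:

```shell
# Measure throughput in both directions with iperf3.
# Port 5299 is arbitrary; the loopback server is just for demonstration.
if command -v iperf3 >/dev/null 2>&1; then
  iperf3 -s -p 5299 >/dev/null 2>&1 &   # throwaway local server
  srv=$!
  sleep 1
  iperf3 -c 127.0.0.1 -p 5299 -t 2      # client sends (server receiving)
  iperf3 -c 127.0.0.1 -p 5299 -t 2 -R   # -R: server sends (the slow path here)
  kill "$srv" 2>/dev/null
  ran=yes
else
  ran=no
fi
```

Comparing the two runs with the NIC in and out of the bridge is what isolated eth0's bridging as the culprit in the post above.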
  5. Yeah, and that's my issue. What could it be? Or how can I find it?
  6. OK. Here is a copy from the cache drive to disk 4 with turbo write enabled. It exhibits the same behaviour, albeit with less reading.
  7. With reconstruct write / turbo write on, you can copy and move files between disks in the array without any speed penalty. That is how it was working before I added the extra drives.
  8. Around 18 months ago I expanded my array by adding another two 8TB disks, and ever since then reconstruct write doesn't seem to work as it should. With reconstruct write on, Unraid is still reading from the destination disk and also the parity disk, slowing operations down to basically read/modify/write speeds. I've ignored it up until now. I tried to Google the answer myself, but every post talks about slow write speeds, not the fact that a disk is still reading when it shouldn't be. Before the array expansion, with "turbo write" on I would see pure writing to the destination and parity disks; now you see as below. Anything I can do to fix this? File copy from disk 1 > disk 3, reconstruct write on, then reconstruct write off. unraid-diagnostics-20220824-1902.zip
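For reference, reconstruct write can be toggled from the shell as well as from Settings > Disk Settings. A hedged sketch, assuming Unraid's `mdcmd` helper and the `md_write_method` tunable (my understanding is 0 = auto, 1 = reconstruct write; verify against the GUI on your version):

```shell
# Toggle Unraid's array write method at runtime. Guarded so it is a
# no-op on non-Unraid systems; on Unraid, 1 selects reconstruct
# ("turbo") write and 0 returns to auto.
if command -v mdcmd >/dev/null 2>&1; then
  mdcmd set md_write_method 1
  applied=yes
else
  applied=no   # not an Unraid box
fi
echo "$applied"
```

Flipping the tunable while watching per-disk reads is a quick way to confirm whether reconstruct write is actually taking effect, which is exactly the symptom in question here.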
  9. Quick question about how to move hardlinked files. I want to move a folder full of hardlinked files to another drive. Will unbalance move the links, or move the original files and leave the links, or will I end up with duplicate files? Looking at the share, it also seems to take up 600GB more space than it should; how do I work out what's going on there?
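On the space question: directory listings show each hardlink at full size, but the data exists once per inode, so any tool that doesn't deduplicate links will overstate usage by exactly the linked amount. A self-contained demo (throwaway paths, not the poster's share):

```shell
# Create a 1 MiB file plus a hardlink to it, then compare how du
# reports the directory with and without link deduplication.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/movie.mkv" bs=1024 count=1024 2>/dev/null  # 1 MiB of data
ln "$tmp/movie.mkv" "$tmp/movie-link.mkv"     # second name, same inode, no extra data
links=$(stat -c %h "$tmp/movie.mkv")          # link count is now 2
kb_dedup=$(du -sk "$tmp" | cut -f1)           # du counts each inode once
kb_naive=$(du -skl "$tmp" | cut -f1)          # -l counts every link separately
echo "links=$links dedup=${kb_dedup}K naive=${kb_naive}K"
rm -rf "$tmp"
```

Also worth knowing: hardlinks cannot span filesystems, so any tool that moves a link to another drive must either recreate the link pair there (rsync can with `-H`) or copy the data again; which one happens depends on the tool's flags, so it is worth checking what unbalance actually invokes before moving.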
  10. In case anyone finds this thread like I did while looking for a solution: there is a roundabout way to do this. https://knowledgebase.macrium.com/display/KNOW7/Restore+to+VHD
  11. Ah, I missed that you had already tried to trim. In the thread you posted, the HBA cards aren't able to trim, but I guess your SSD is plugged into the motherboard. I wonder what a straight benchmark of the SSD would show? Maybe trim the drives in another system and see how they go after that?
  12. Sorry, I posted in your thread about our similar issues (it was moved). I solved mine with a trim of the SSD, for reference.
  13. I was a bit slow in taking the screenshot; it dipped down to 60 MB/s. Sorry, reading the other post, the issues were exactly the same. I think I have worked it out, though: I trimmed the SSD with fstrim -a and have been hitting it with 260 MB/s of writes for 10 minutes now. The wait column (I don't know what that measures) hasn't gone above 20, and Krusader stayed responsive the whole time. Yeah. Maybe mover should run a trim command whenever it finishes?
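For anyone wanting to script the same workaround, a short sketch (fstrim needs root and a TRIM-capable device behind a controller that passes TRIM through):

```shell
# Trim all mounted filesystems that support it. -a selects every
# eligible mount; -v reports how many bytes were discarded on each.
if command -v fstrim >/dev/null 2>&1; then
  fstrim -av 2>/dev/null || echo "fstrim needs root and TRIM support"
  ran=yes
else
  ran=no
fi
```

Rather than hooking mover, it may be simpler to run this on a schedule; if I recall correctly, recent Unraid versions expose a scheduled TRIM under Settings > Scheduler, so check there before rolling your own cron job.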
  14. Hi, new to Unraid. I wonder if anyone can explain this behaviour. I'm using Krusader to move files onto the array, with a cache drive, sdb. For whatever reason the write speed to the cache disk (SSD) slows down; it can do this and cause no slowdown of the source read, but it will also chug enough to slow down the HDD. The SSD is capable of ~380 MB/s, it looks like. Nothing else is running on Unraid, latest version (trial). I hit it from the network and you can see the wait time spike; I guess that's the problem. How do I fix it?
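My guess is that the "wait" column in these screenshots is iostat-style I/O wait: roughly the average milliseconds a request spends queued plus being serviced. Assuming that is the metric, you can watch it directly with iostat from the sysstat package (`sdb` here is the cache disk named in the post):

```shell
# Extended per-device I/O stats, 1-second intervals, 3 samples.
# r_await/w_await are average ms per request (queue + service time);
# %util near 100 means the device is saturated.
if command -v iostat >/dev/null 2>&1; then
  iostat -x 1 3
  have_iostat=yes
else
  have_iostat=no   # install sysstat to get iostat
fi
```

Watching w_await on the SSD while a transfer runs would show whether the stall is the drive itself (e.g. needing a trim, as the later posts found) or something upstream.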