Huge performance issues


Hanfufu


So after years of pondering, I finally took the plunge and started using unRAID on my server.

 

At the moment I fear it was the worst decision I ever made, and that the software cost is simply a loss, along with the several days spent moving data from old disks to my new array....

 

I DO NOT have parity drives yet. All drives are either IronWolf NAS or Seagate Exos, though a few are 6TB consumer disks.

It is running on my old server at the moment (Xeon E5-2683 v3, 14 cores/28 threads, so a slow CPU is NOT the issue), which has been running WS2016 for years with zero issues at all.

 

The performance is horrendous...

 

Which disk in the array I copy to makes no difference to the speed at all.

 

Copying from a locally attached drive to the array can get me speeds of about 110-160MB/sec, which is somewhat acceptable.

 

VMs are lagging and almost impossible to work with.

Disk transfer speeds are reminiscent of the old 10Mbit days.

Downloading anything on my 500/500Mbit connection is completely useless - the speed quickly climbs to 40MB/sec, then tanks completely to under 1MB/sec after about 3 seconds. Sometimes it sits stuck at 0.0KB/sec for 30 seconds or more before climbing back to 40MB/sec, only to drop below 1MB/sec again a few seconds later, and so it goes on and on. Downloading the same file on my WS2016 server, the speed is rock solid at 60MB/sec - but if I download it to a network share on my unRAID box, the speed is again horrendous.

Copying over LAN, from my primary server to the temporary unRAID server, does the same: stuck at around 40MB/sec, dropping all the way to 0.0B/sec and staying there for minutes at a time.

Copying from my main server to an unRAID share via a VM on the unRAID server gets me the 40MB/sec in short bursts. Starting a second copy from my main server to unRAID sits at around 60-80MB/sec - and pausing the slow transfer through the VM does nothing to that second copy, which still hovers between 60-80MB/sec. So with just one copy running, it is impossible to get over 500Mb/sec, which makes ZERO sense.

 

So copying two streams simultaneously gets me around 950Mb/sec on a 1000Mbit LAN, but copying one stream at a time never comes close to this speed.

 

This is completely useless; nothing is running at acceptable speeds. Can someone PLEASE help me here? I am completely lost as to how to fix this - what information do you need from me to help track down the problem?

 



On another note, I am running a Plex server in Docker, and while all these slowdowns take place, the Plex server responds quickly, starts playback almost immediately when transcoding, and does not seem to suffer any slowdowns.

The Windows Server VM has access to 12 cores and 8GB of RAM, so that should be plenty for a smooth user experience.

 

Even when there are no transfers ongoing to/from the server, my torrent download speeds will currently not exceed 1MB/sec on my 500Mbit connection, with zero other LAN traffic interfering :(


Your domain and system shares have files on the array. Dockers/VMs will perform better, and will not keep array disks spun up, if these shares are entirely on a fast pool (cache).

 

1 hour ago, Hanfufu said:

Copying from a locally attached drive to the array can get me speeds of about 110-160MB/sec, which is somewhat acceptable.

Maybe iperf would tell something about the slower network transfers.
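To expand on that: a quick iperf3 run takes the disks out of the picture and measures raw network throughput between the two machines. This is only a sketch - the host name `tower.local` is a placeholder, and iperf3 must be installed on both ends.

```shell
#!/bin/sh
# Minimal iperf3 sanity check -- 'tower.local' below is a placeholder host name.
# On the unRAID box, start a server:          iperf3 -s
# On the other machine, test for 30 seconds:  iperf3 -c tower.local -t 30
# Add -R to reverse the direction (unRAID sends, the client receives).
if command -v iperf3 >/dev/null 2>&1; then
  msg="iperf3 available - run the server/client pair shown above"
else
  msg="iperf3 not installed - install it on both ends first"
fi
echo "$msg"
```

If iperf3 shows a steady ~940Mbit/sec in both directions, the network itself is fine and the problem is on the storage side.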

47 minutes ago, trurl said:

Your domain and system shares have files on the array. Dockers/VMs will perform better, and will not keep array disks spun up, if these shares are entirely on a fast pool (cache).

 

Maybe iperf would tell something about the slower network transfers.

 

There shouldn't be active files anywhere other than the cache_ssd pool, and the WS VM that is problematic runs from its own 240GB SSD on a second pool.

 

Or have I set it up incorrectly, according to my screenshot?

shares.png

9 minutes ago, Hanfufu said:

Or have I set it up incorrectly, according to my screenshot?

Not sure :(

 

It looks like you might have misunderstood the Yes and Prefer values of the Use cache setting?

- Yes means new files are written to the cache, with overflow to the array. Mover later moves any files on the cache to the array.

- Prefer means new files are written to the cache, with overflow to the array. Mover later moves any files on the array back to the cache, if there is sufficient free space.


You can see how much of each disk each user share is using by clicking Compute... for the share, or the Compute All button.
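The same information is also available from the console. A rough equivalent, sketched with a placeholder share name (each array disk mounts at /mnt/diskN and each pool at /mnt/&lt;poolname&gt; on a stock unRAID install):

```shell
#!/bin/sh
# Command-line equivalent of the Compute... button: per-disk usage of one share.
# SHARE is a placeholder -- substitute the share you want to inspect.
SHARE=system
# Sum up the share's directory on every array disk and on the cache pools.
usage=$(du -sh /mnt/disk*/"$SHARE" /mnt/cache*/"$SHARE" 2>/dev/null)
if [ -n "$usage" ]; then
  echo "$usage"
else
  echo "no directories named $SHARE under /mnt/disk* or /mnt/cache*"
fi
```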

 

1 hour ago, trurl said:

Your domain and system shares have files on the array

Nothing can move open files, so you would have to disable Docker and the VM Manager in Settings before Mover could move those to the specified pool.
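For reference, the sequence might look like this from the unRAID console. This is a sketch: the mover path is the stock location on unRAID, and the first two steps are done in the web GUI, not the shell.

```shell
#!/bin/sh
# Sketch of the sequence above for a stock unRAID box (run as root).
# 1. Settings -> Docker:     set Enable Docker to No  (closes container files)
# 2. Settings -> VM Manager: set Enable VMs to No     (closes vdisk files)
# 3. Kick off Mover, the same as pressing Move on the Main tab:
MOVER=/usr/local/sbin/mover   # standard location on unRAID
if [ -x "$MOVER" ]; then
  "$MOVER"
  msg="mover started"
else
  msg="mover not found - this is not an unRAID console"
fi
echo "$msg"
```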

 

36 minutes ago, itimpi said:

Not sure :(

 

It looks like you might have misunderstood the Prefer and Yes settings for the Use cache setting?

- Yes means new files are written to the cache, with overflow to the array. Mover later moves any files on the cache to the array.

- Prefer means new files are written to the cache, with overflow to the array. Mover later moves any files on the array back to the cache, if there is sufficient free space.

You could be right about that, but I can see the vdisk files etc. physically on the SSDs when I view their file system, and they are growing in size as I install stuff.

I tried downloading from a VM to the C: drive (vdisk), and I could see the writes hitting the SSD in the Main tab.

 

But I was under the impression that I could keep everything related to VMs solely on the SSDs by selecting Yes: cache_ssd/VMs.

 

How do I make sure that they are kept only on the SSDs then?

36 minutes ago, trurl said:

You can see how much of each disk each user share is using by clicking Compute... for the share, or Compute All button.

 

Nothing can move open files, so you would have to disable Docker and VM Manager in Settings before Mover could move those to the specified pool.

 

I just checked all the individual disks, and only on disk1 are there domain and system folders, and those have not been changed for more than 3 days, so they should be running from the SSDs.

58 minutes ago, Hanfufu said:

You could be right about that, but I can see the vdisk files etc. physically on the SSDs when I view their file system, and they are growing in size as I install stuff.

I tried downloading from a VM to the C: drive (vdisk), and I could see the writes hitting the SSD in the Main tab.

 

But I was under the impression that I could keep everything related to VMs solely on the SSDs by selecting Yes: cache_ssd/VMs.

 

How do I make sure that they are kept only on the SSDs then?

Use the 'Only' setting!

Note that you should only switch to this setting once there are no files for the share left on the array, as Mover ignores shares set to 'Only'.
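A quick way to check for leftover copies is to look for the share's directory on each array disk. This is a sketch - the share name is a placeholder, and array disks are assumed to be at the standard /mnt/diskN mount points:

```shell
#!/bin/sh
# Before switching a share to 'Only', confirm nothing is left on the array.
# SHARE is a placeholder -- substitute e.g. domains or system.
SHARE=domains
found=0
for d in /mnt/disk[0-9]*; do          # each array disk mounts at /mnt/diskN
  if [ -d "$d/$SHARE" ]; then
    echo "still on $d: $SHARE"
    found=1
  fi
done
if [ "$found" -eq 0 ]; then
  echo "no copies of $SHARE on the array - safe to switch to Only"
fi
```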

