jbonnett Posted August 26, 2018

Hi All, I have set my domains share to only use the cache drives, but on my VM I get really bad disk speeds. My drivers are up to date and I have no idea what's happening. Jamie
John_M Posted August 26, 2018

47 minutes ago, jbonnett said: I have no idea what's happening.

Without a lot more information, neither has anyone else. Post your diagnostics and the XML for your VM. What is the Task Manager screen grab meant to show - were you doing some sort of file copying at the time?
jbonnett Posted August 26, 2018

8 hours ago, John_M said: Without a lot more information, neither has anyone else. Post your diagnostics and the XML for your VM. What is the Task Manager screen grab meant to show - were you doing some sort of file copying at the time?

Sorry, I thought I had posted the diagnostics. The screenshot is at idle; the only task I was running was searching for a graphics driver, and even when that task was complete it was the same story. You can see here the process with the highest disk usage, and it's little to nothing. tower-diagnostics-20180826-0156.zip
John_M Posted August 26, 2018

One thing I notice from your disks' SMART data is a very high power cycle count on most of your disks, including your two Samsung cache SSDs:

9 Power_On_Hours -O--CK 099 099 000 - 3244
12 Power_Cycle_Count -O--CK 098 098 000 - 1205

That's a power cycle approximately every 2 hours and 41 minutes of power-on time, on average. Do you have a power supply problem? As a comparison, here's the same information for one of my disks:

9 Power_On_Hours -O--C- 100 100 000 - 14684
10 Unknown_SSD_Attribute PO--C- 100 100 050 - 0
12 Power_Cycle_Count -O--C- 100 100 000 - 24

I would do some read/write tests from the command line, directly to a folder on /mnt/cache, bypassing the virtualisation, to check whether the cache is the problem or whether it's fundamentally sound.
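As an aside, the "every 2 hours and 41 minutes" figure can be reproduced from the two SMART attributes quoted above (3244 power-on hours, 1205 power cycles); a quick shell sketch:

```shell
# Average power-on time per power cycle, using the SMART values quoted
# above: attribute 9 (Power_On_Hours) = 3244,
# attribute 12 (Power_Cycle_Count) = 1205.
hours=3244
cycles=1205
minutes=$(( hours * 60 / cycles ))
echo "$(( minutes / 60 ))h $(( minutes % 60 ))m of power-on time per cycle"
# prints: 2h 41m of power-on time per cycle
```

The raw attributes themselves come from something like `smartctl -A /dev/sdX` (part of smartmontools), which is also where unRAID's diagnostics get them.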
jbonnett Posted August 26, 2018

27 minutes ago, John_M said: One thing I notice from your disks' SMART data is a very high power cycle count on most of your disks, including your two Samsung cache SSDs: That's a power cycle approximately every 2 hours and 41 minutes of power on time, on average. Do you have a power supply problem? As a comparison, here's the same information for one of my disks: I would do some read/write tests from the command line, directly to a folder on /mnt/cache, bypassing the virtualisation to check if the cache is the problem or if it's fundamentally sound.

I've not noticed any problems with my power supply (I doubt I have one). I used the DiskSpeed plugin and got what, as far as I can tell, are normal results. I don't really understand what a power cycle is, though?
John_M Posted August 26, 2018

A power cycle is a power up followed by a power down, or vice versa. It's not necessarily a problem with your power supply itself but possibly with the power distribution (cabling, backplanes, drive cages): whatever is between the PSU and the drives. Read performance looks fine. How about write? Are you trimming your SSDs?
jbonnett Posted August 26, 2018

Could any settings cause it too? E.g. Windows power settings or the BIOS? OK, I did this, but it doesn't look right. I didn't realise that I had to explicitly turn trimming on.
John_M Posted August 26, 2018

Writing only a gigabyte is probably misleading as it will be buffered by memory. Trim isn't enabled by default. An easy way to set a schedule is with the Dynamix SSD Trim plugin.
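One way to take that memory buffering out of the picture is to have dd flush to the device before it reports a speed. A sketch (the target path is just an example; any folder on the cache pool works):

```shell
# conv=fdatasync makes dd call fdatasync() before exiting, so the
# reported speed includes flushing the page cache to the SSD rather
# than just the memory-buffered portion of the write.
TARGET=/mnt/cache/tempfile.img   # example path on the cache pool
dd if=/dev/zero of="$TARGET" bs=1M count=4096 conv=fdatasync
rm "$TARGET"
```

Alternatively, `oflag=direct` bypasses the page cache entirely, though not every filesystem and device combination supports it.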
jbonnett Posted August 26, 2018

I installed the Dynamix SSD Trim plugin as soon as you mentioned it. I tried again with 4 GB but it still doesn't look right. I then used a real file; bear in mind the read speed of the disk the ISO is stored on (disk 1) is around that number - it's a cheap NAS drive. As for the power cycles, it's probably because I don't keep my machine on at all times. I only use unRAID for 2 gamers 1 PC and do a fair few restarts due to software development work.
John_M Posted August 26, 2018

My results with a 1 GiB transfer (write, then read, then read again):

root@Lapulapu:~# dd if=/dev/zero of=/mnt/cache/tempfile.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.19082 s, 173 MB/s
root@Lapulapu:~# dd if=/mnt/cache/tempfile.img of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.41437 s, 314 MB/s
root@Lapulapu:~# dd if=/mnt/cache/tempfile.img of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.468517 s, 2.3 GB/s
root@Lapulapu:~#

I have my write cache settings, vm.dirty_background_ratio and vm.dirty_ratio, set very low (to 1% and 2%, respectively) so a 1 GiB transfer doesn't fit in RAM. Reads aren't affected, as the second one shows - it's clearly read from RAM. If your ratios are set to their default values you'll need to choose a bigger transfer to defeat the write cache. You can change the ratios with the Tips and Tweaks plugin - it helps with avoiding Out Of Memory errors too.
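For reference, those two ratios can also be set directly from the command line with sysctl (the values here mirror the 1%/2% mentioned above; a plugin does the equivalent from the GUI, and changes made this way don't survive a reboot):

```shell
# Lower the threshold at which dirty pages start being written out in
# the background, and the threshold at which writers are forced to
# flush synchronously. Requires root.
sysctl vm.dirty_background_ratio=1
sysctl vm.dirty_ratio=2

# Confirm the current values.
sysctl vm.dirty_background_ratio vm.dirty_ratio
```

The same values are readable at /proc/sys/vm/dirty_background_ratio and /proc/sys/vm/dirty_ratio if sysctl isn't handy.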
jbonnett Posted August 26, 2018

I set them both to 1%, tried an 8 GB transfer, and I still get silly speeds. Just in case you can't tell from the diagnostics, I have 32 GB of RAM.
John_M Posted August 26, 2018

That was on a server that's using slow mSATA SSDs. These are the results for a different server using 2.5-inch SSDs:

root@Northolt:~# dd if=/dev/zero of=/mnt/cache/tempfile.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.32141 s, 463 MB/s
root@Northolt:~# dd if=/mnt/cache/tempfile.img of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.91691 s, 560 MB/s
root@Northolt:~# dd if=/mnt/cache/tempfile.img of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.230467 s, 4.7 GB/s
root@Northolt:~#
John_M Posted August 26, 2018

The write speed is sensible. The reads are clearly from RAM, as the dirty ratios don't affect read caching.
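If you want the second read in a test like the ones above to come from the SSD rather than from RAM, the page cache can be dropped between runs. This is a standard Linux mechanism, not specific to unRAID; the file path is an example, and it needs root:

```shell
# Write any dirty pages out first, then drop the page cache, dentries
# and inodes (3 = both) so the next read has to hit the disk.
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/cache/tempfile.img of=/dev/null bs=1M   # example test file
```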
jbonnett Posted August 26, 2018

So what could be affecting the VM? Also, surely it's not right given how it scales: with 4 GB, anything over 320 MB (1% of 32 GB) should show around the same speed? I also tried going the opposite way.
pwm Posted August 26, 2018

39 minutes ago, jbonnett said: I set them both to 1% tried a 8GB transfer and I still get silly speeds. Just in case you don't know from the diagnostics I have 32GB RAM.

Copying from /dev/zero can sometimes give silly speeds even when actually writing to the drive - a number of SSDs perform on-the-fly compression of data to reduce flash wear, and the compression ratio of the contents of /dev/zero is very, very high. So the write speed ends up being the link speed to the drive instead of the actual write speed of the drive.
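A sketch of that point: writing data the SSD controller can't compress sidesteps the effect. The paths here are examples; note /dev/urandom is much slower to generate than /dev/zero, so pre-generating the file first keeps the generator out of the timed copy:

```shell
# Pre-generate an incompressible test file, then time only the copy
# to the cache with a flush at the end.
head -c 1G /dev/urandom > /tmp/randfile.img
dd if=/tmp/randfile.img of=/mnt/cache/randfile.img bs=1M conv=fdatasync
rm /tmp/randfile.img /mnt/cache/randfile.img
```

On unRAID /tmp is RAM-backed, so reading the source file back shouldn't be a bottleneck, but the file does need to fit in free memory.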
jbonnett Posted August 27, 2018

OK, so I updated the Windows build and now my disk speed seems to be OK. I did a quick test by downloading a game from Steam; when Steam was allocating drive space, Task Manager reported 1 GB/s. That looks right for two cache SSDs, right?