hammondses Posted April 25, 2018 (edited) Simple question: what is the max speed I can expect when writing to a cache-only shared folder on a 256GB 960 Evo? In my testing it seems to be 350MB/s no matter what I do. Testing the NVMe drive in a Windows 7 VM shows 3200MB/s read and 1100MB/s write. I set up a RAM disk on both my PC and the unRAID box, and between them I can achieve full 10GbE speeds (1.15GB/s). PC RAM disk -> cache-only share = 350MB/s. Cache -> PC RAM disk = 1.15GB/s. I've tried Direct IO, which increases the write speed to the cache to around 400MB/s but drops the read to around 180MB/s. Any help would be greatly appreciated. Edited April 25, 2018 by hammondses
Warrentheo Posted April 26, 2018 There is a decent probability that the drive is formatted incorrectly somehow... With a drive that size, you might be able to just empty it and reformat it clean to fix the problem. (Make sure to follow these steps exactly; one character wrong and you could delete the wrong drive. Follow at your own risk.)
1. Empty the drive of needed data.
2. Unmount the drive and clean it out of the unRAID config manager (Tools --> New Config --> remove config for the cache drive).
3. Use the command: blkdiscard /dev/nvme0n1 (modify this command to fit your drive's device name). This sends a Linux discard/TRIM-style command to the entire drive, nuking all data and making it act like a blank/new drive.
4. You can then add the drive back to the unRAID config and allow it to format the drive like normal.
5. If that does not work, then you most likely have the BIOS for your computer configured incorrectly somehow...
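Since step 3 is destructive, a quick sanity check before issuing it is cheap insurance. A minimal sketch (the device name `/dev/nvme0n1` is the example from the post above; yours may differ):

```shell
# Before running blkdiscard, double-check which device you are about to wipe;
# the discard is irreversible. lsblk lists the block devices so you can confirm
# the name, size, and model match the NVMe cache drive:
lsblk -d -o NAME,SIZE,MODEL 2>/dev/null || true

# blkdiscard /dev/nvme0n1   # destructive! uncomment only after verifying the name
echo "device name verified before discard"
```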
JorgeB Posted April 26, 2018 Jumbo frames will likely help, if still not using them.
hammondses Posted April 29, 2018 (Author) On 4/26/2018 at 3:31 PM, Warrentheo said: (blkdiscard procedure quoted above) I did as you said; same result: 350MB/s write to the cache on the NVMe drive from the RAM disk, and 650MB/s read from the NVMe drive to the RAM disk. On 4/26/2018 at 6:57 PM, johnnie.black said: Jumbo frames will likely help, if still not using them. Both ends set to 9014, unfortunately.
John_M Posted May 1, 2018 On 4/29/2018 at 6:41 AM, hammondses said: Both ends set to 9014 unfortunately. Is an MTU of 9014 actually supported at both ends? The conventional jumbo-frame MTU for 10G Ethernet is 9000 bytes, and some NICs don't support higher values.
hammondses Posted May 13, 2018 (Author) The only option my Asus NIC offers above ~4K is 9014, and the NICs in my unRAID box support it.
pwm Posted May 14, 2018 8 hours ago, hammondses said: The only option my Asus NIC gives above ~4K is 9014 & my unRAID box NICs support it. I suspect someone at Asus failed their networking courses. Their figure of 9014 is probably MTU 9000 plus the 14-byte Ethernet header, but the Ethernet header shouldn't be counted as part of the MTU.
bonienl Posted May 14, 2018 (edited) They are definitely not the only ones. This setting gives an MTU of 9000. The Ethernet layer also appends a 4-byte FCS checksum at the end of the frame, making the overhead 18 bytes in total. Somebody didn't pay attention when defining these values, not to speak of the confusion it causes. Edited May 14, 2018 by bonienl
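A quick way to confirm whether an MTU of 9000 actually survives the whole path is a don't-fragment ping. A sketch of the arithmetic: the ICMP payload must leave room for the 20-byte IPv4 header and the 8-byte ICMP header inside the 9000-byte MTU (the server IP is whatever your unRAID box uses):

```shell
# Largest don't-fragment ping payload that fits in a 9000-byte MTU:
MTU=9000
PAYLOAD=$(( MTU - 20 - 8 ))   # 20-byte IPv4 header + 8-byte ICMP header
echo "payload = ${PAYLOAD} bytes"

# Linux client syntax (macOS uses: ping -D -s $PAYLOAD <server-ip>):
echo "ping -M do -s ${PAYLOAD} <server-ip>"
```

If that ping fails while a 1472-byte payload works, some hop in the path is still at MTU 1500.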
pwm Posted May 14, 2018 M$ doesn't explicitly claim that their figure of 9014 is the MTU. It's OK to talk about 9014-byte packets when discussing jumbo frames, as long as the number 9014 isn't claimed to be the actual MTU value.
s.Oliver Posted May 14, 2018 (edited) It doesn't seem to be network-speed related. @hammondses states that he achieves full 10GbE speed when using a RAM disk on unRAID's side as the destination for the writes. The drop in speed occurs when the destination is a cache-only share. This would indicate that somewhere between the receiving I/O buffers (unRAID) and the actual writes to the share, data can't be moved fast enough, so the transfer drops in speed. It would be interesting if others here with this combination of hardware could test: a fast NVMe drive (as cache) and 10GbE network speed between all parties transferring data (client machine to unRAID server). I've got an NVMe drive (currently not active as cache because of system freezes) but no 10GbE network infrastructure, so I can't test, sorry. While the NVMe was active as cache, I saw some read/write operations at maximum speed from/to it, but that was inside the unRAID box, so I would guess only between RAM and the NVMe disk. Edited May 14, 2018 by s.Oliver
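One way to isolate the network from the filesystem, along the lines suggested above, is to run a local write test directly on the server. A sketch, where `CACHE_PATH` is an assumption: on unRAID you would point it at the cache-pool mount (typically `/mnt/cache`), and it falls back to `/tmp` here so the example is self-contained:

```shell
# Local write-speed test: write 64 MiB and force it to disk with fdatasync,
# so the page cache doesn't inflate the reported number.
TARGET="${CACHE_PATH:-/tmp}/cache-speedtest.bin"

dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -1

stat -c %s "$TARGET"   # file size in bytes (64 MiB = 67108864)
rm -f "$TARGET"
```

If this local number is also ~350MB/s, the bottleneck is in the filesystem or drive, not the network; if it's much higher, look at the SMB/network path instead.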
pwm Posted May 14, 2018 Note that SSD write speeds are affected by the availability of already-erased flash blocks. Some drives require explicit TRIM commands to be able to pre-erase flash blocks. Some drives have an overprovisioning pool that allows the drive itself to erase flash blocks as they are moved back into the pool. Without erased flash blocks, an SSD will not manage the write speed claimed in the datasheet.
s.Oliver Posted May 14, 2018 @pwm you're certainly right here, but the poster used @Warrentheo's tip to completely erase/wipe the drive and redo his tests; same outcome.
Warrentheo Posted May 14, 2018 2 hours ago, pwm said: (TRIM explanation quoted above) Most modern drives handle this a lot better than the original Gen 1 and Gen 2 SSDs that were on the market; this advice is now outdated for most situations. See this info: https://wiki.archlinux.org/index.php/Solid_State_Drive#Periodic_TRIM https://wiki.archlinux.org/index.php/Solid_State_Drive#Continuous_TRIM Bottom line: enabling continuous TRIM just adds a lot of extra commands to the drive and slows things down. If you are still concerned about TRIM, I would install the "Dynamix SSD TRIM" plugin, which lets you put TRIM commands in your periodic cron file on your own schedule.
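For reference, a periodic-TRIM schedule of the kind the plugin manages boils down to a single cron entry calling `fstrim`. A sketch (the schedule, the mount point `/mnt/cache`, and the cron file path are illustrative; the plugin writes its own config):

```shell
# Weekly TRIM, Sundays at 03:00: fstrim tells the SSD which blocks on the
# mounted filesystem are free so the controller can pre-erase them.
CRON_LINE='0 3 * * 0 /sbin/fstrim -v /mnt/cache'

# Write the entry to a standalone cron file (path is illustrative only):
echo "$CRON_LINE" > /tmp/ssd-trim.cron
cat /tmp/ssd-trim.cron
```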
pwm Posted May 14, 2018 2 hours ago, Warrentheo said: (continuous vs. periodic TRIM, quoted above) Some things to remember. While continuous TRIM isn't the default for Linux anymore, that doesn't mean there is no need to TRIM. It's just that stupid drives that perform the erase synchronously slow down the kernel a lot when there is a huge number of small erases, compared to a few periodic TRIM commands that each supply a large list of concurrently erasable blocks. So periodic is better for stupid drives. For better drives it doesn't matter much, because they perform the TRIM asynchronously, meaning the TRIM command is basically just one of the 100k IOPS the drive might be able to handle. The problem with periodic TRIM is that it works very badly for almost-full drives that receive lots of writes (at least if the drive doesn't have overprovisioning). A drive without an overprovisioning pool that doesn't get TRIM commands will get into trouble when it has no more erased flash blocks: every new write then requires the flash controller to first erase a block before it can write the new content, and to copy back the part of the block that shouldn't have been erased. And if writes consume a number of microseconds while an erase takes a number of milliseconds, the total write speed will drop drastically.
At the same time, wear will also increase, since the drive will erase and write back information that no longer contains valid file data. If you replace a 4kB region on a 128kB flash block, the drive will erase the 128kB block and then copy in your 4kB of changed information plus the 124kB of other information that shared the same flash block. If the drive has overprovisioning, then it can keep up the write speed even without TRIM, since it rotates flash blocks through the pool. But the drive will still like TRIM, since TRIM reduces the amount of dead data that gets copied between blocks when the drive needs to replace partial content of a flash block. Anyway: if the transfer is slow even when the SSD has just been fully erased, then it's not lack of TRIM that causes the slowdown.
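The 4kB-on-128kB example works out to a steep penalty. A back-of-envelope sketch (the 128 KiB erase-block size is the hypothetical figure from the post, not a property of any specific drive):

```shell
# Worst-case read-modify-write cost for a small write with no erased blocks free:
BLOCK_KB=128   # erase-block size (hypothetical)
WRITE_KB=4     # new data actually being written

COPIED_KB=$(( BLOCK_KB - WRITE_KB ))        # stale data copied back after the erase
AMPLIFICATION=$(( BLOCK_KB / WRITE_KB ))    # flash bytes written per byte of new data

echo "copied back: ${COPIED_KB} KiB, write amplification: ${AMPLIFICATION}x"
```

So each 4kB update can cost 128kB of flash writes plus a milliseconds-scale erase, which is why write speed collapses without erased blocks on hand.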
hammondses Posted August 8, 2018 (Author) I've been fiddling with it for months with no change in results: 1.1GB/s read-speed bursts, but still a 350MB/s write maximum on the SSD.
alfredo_2020 Posted November 14, 2018 I'm having the same problem as this post. Here is my setup and what I'm doing. The only thing I have NOT done yet is increase the MTU size; I will try that tonight.
unRAID system specs:
1TB NVMe cache drive
unRAID 6.4.1
ASRock LGA1151 motherboard
8GB DDR4-2400 RAM
3.5GHz Intel quad-core processor
Asus 10GbE PCIe NIC
I have a Mac mini with PCIe flash & 10GbE. Blackmagic reports about 400MB/s write and 700MB/s read when I write to my cache-only share. I have turned off my Plex docker to see if that helps, but it doesn't. I am using a 6ft Cat7 cable. The Mac mini and unRAID are on their own IPs (192.168.10.227 unRAID & 192.168.10.230 Mac); the normal network is on 192.168.1.xxx. I have seen other posts about trying iperf, building a RAM disk, etc., but it sounds like it's not the network, but rather unRAID not being fast enough? Any other thoughts on what to try?
JorgeB Posted November 14, 2018 (edited) 6 minutes ago, alfredo_2020 said: Any other thoughts of what to try? 1. Jumbo frames. 2. Go to Settings -> Global Share Settings -> Tunable (enable Direct IO): set to Yes. If neither helps, run iperf to test LAN bandwidth. Edited November 14, 2018 by johnnie.black
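For anyone following along, the iperf test looks like this: run `iperf3 -s` on the unRAID box, then drive it from the client (the IP below is the one posted earlier in this thread; `-P 4` opens four parallel streams, which helps saturate 10GbE). A sketch of running it and pulling the headline number out of the summary line:

```shell
# On the server:   iperf3 -s
# On the client:   iperf3 -c 192.168.10.227 -P 4
# A healthy 10GbE link should report around 9.4 Gbits/sec.

# Hypothetical summary line of the kind iperf3 prints for parallel streams:
SAMPLE='[SUM]   0.00-10.00  sec  10.9 GBytes  9.38 Gbits/sec    sender'

# The throughput figure is always the last two fields before "sender":
echo "$SAMPLE" | awk '{print $(NF-2), $(NF-1)}'
```

If iperf shows line rate but SMB transfers don't, the problem is in the storage or protocol layer rather than the network.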
alfredo_2020 Posted November 28, 2018 On 11/14/2018 at 12:13 PM, johnnie.black said: (jumbo frames / Direct IO / iperf, quoted above) I was able to set a jumbo-packet MTU of 9000 on unRAID and the Mac, and it did help a bit: I now get 900MB/s read and 600MB/s write. However, I can't bridge my 10GbE to my 1GbE on the server, and I now don't have internet access through the wired connection on my Mac; I have to use Wi-Fi. When I bridge eth0 and eth1 under the eth0 settings, it sets the MTU of both to 1500. I don't know if I can enable 9000 on both, or whether that will even work.
JorgeB Posted November 28, 2018 2 hours ago, alfredo_2020 said: I don't know if I can enable 9000 on both and if that will even work? Not sure; you can try, as long as the gigabit hardware also supports jumbo frames.
alfredo_2020 Posted December 20, 2018 On 11/28/2018 at 4:47 PM, johnnie.black said: Not sure, you can try as long as the gigabit hardware also supports jumbo frames. This didn't help my situation; I'm still only getting 600MB/s. However, I abandoned this route and am going to build an all-in-one unRAID server + Win10 VM (occasional games) + daily driver + FCP (macOS Mojave). The Windows VM can write at great speeds to the array cache through the mapped network drive in the VM, getting speeds in the 600-800MB/s range even though I connect to the server through a 1GbE mapping. But the macOS VM only gets 150MB/s write and 100MB/s read; I think that is a known problem with macOS VMs. I'll keep investigating when I get more time over the winter break.