noacess

Members
  • Content Count

    40
  • Joined

  • Last visited

Community Reputation

1 Neutral

About noacess

  • Rank
    Newbie

Converted

  • Gender
    Undisclosed


  1. This may be a dumb question, but is there a way to live migrate a VM between two Unraid servers? Assume identical bare metal hardware. I know this is doable with Proxmox, but I wasn't sure if there were some fancy CLI commands that could be run to do this in Unraid (a rough libvirt sketch follows after this list). Thanks for the help!
  2. I was referring to my workaround of copying a single file via the user share to create the directory structure, and then copying the rest of the files via the cache share, because transfer speed is comparatively slow through user shares when writing to the cache.
  3. If you stop your array, go into Settings, then click on SMB, there is a section called "Samba extra configuration:". That's where you paste the config I shared. After you start the array you should see an extra share when you browse to your server. You don't need to create it under the "Shares" area of Unraid. I would turn off disk shares unless you need them for something specific.
  4. Below is what works for me. I usually just write one file through the user share, and then once the folder structure is created I do the rest of my writes through the cache share. It's clunky, but it works for now until I can think of a more elegant solution.
     [cacheShare]
     path = /mnt/cache
     valid users = whoever
     write list = whoever
  5. Hey everyone, I recently rebuilt my Unraid server and I'm seeing some strange cache write speed behavior. If I write via SMB to a user share that has "Use cache" set to Yes, I can copy at about 280 MB/s, which is slow. If I enable disk shares and do the same copy directly to the cache drive share, I get well over 1 GB/s, which is what I'd expect. The only thing I've tried so far is enabling Direct IO, but that didn't help. The cache drive is a hardware RAID of SAS3 SSDs. The PC I'm copying from has a PCIe 4.0 NVMe disk in it. Diagnostics attached. Any help would be appreciated! (A local dd test for comparing the two paths is sketched after this list.)
  6. I'm just curious what kind of performance everyone is getting. I recently got my 40Gb Mellanox ConnectX-3 cards working with Unraid, but my SMB copy speeds to a RAM drive are maxing out at about 1.5 - 1.7 GB/s, which feels a bit slow. The only SMB options I've added are below (an annotated version follows after this list). On the client side I'm copying from a PCIe 4.0 NVMe drive with read speeds over 4 GB/s, so that shouldn't be the issue. Any thoughts/suggestions? Thanks!
     server multi channel support = yes
     aio read size = 1
     aio write size = 1
     strict locking = No
  7. I'll do some science tonight with jumbo frames. I've also got another NIC I can try if that doesn't yield any results. Wish me luck! Thanks for the help.
  8. Single thread yields about half the speed. Maybe this is my problem (although I don't know how to fix it)? Two threads yield similar results to 10 threads like above. Thanks!
     C:\temp\iperf-3.1.3-win64>iperf3.exe -P 1 -c 192.168.1.15
     Connecting to host 192.168.1.15, port 5201
     [  4] local 192.168.1.8 port 9056 connected to 192.168.1.15 port 5201
     [ ID] Interval           Transfer     Bandwidth
     [  4]   0.00-1.00   sec   639 MBytes  5.36 Gbits/sec
     [  4]   1.00-2.00   sec   640 MBytes  5.37 Gbits/sec
     [  4]   2.00-3.00   sec   662 MBytes  5.55 Gbits/sec
     [
  9. What kind of cache are you using? I have Direct IO enabled. iperf leads me to believe the network side shouldn't be an issue. Thanks for the response!
     C:\temp\iperf-3.1.3-win64>iperf3.exe -P 10 -c 192.168.1.15
     Connecting to host 192.168.1.15, port 5201
     [  4] local 192.168.1.8 port 8667 connected to 192.168.1.15 port 5201
     [  6] local 192.168.1.8 port 8668 connected to 192.168.1.15 port 5201
     [  8] local 192.168.1.8 port 8669 connected to 192.168.1.15 port 5201
     [ 10] local 192.168.1.8 port 8670 connected to 192.168.1.15 port 5201
     [ 12] local 192.168.1.8
  10. Hey guys, what kind of speeds are folks getting on their cache drives when copying to them over a 10Gb network via SMB (no jumbo frames)? I've got 2 Samsung PM953 NVMe drives set up in btrfs RAID 0 and can only get them to write at 400 - 500 MB/s from my Windows desktop. I have a Windows 2019 server on the same 10Gb network that I'm able to copy to at 1 GB/s, to a pair of SATA3 SSDs in RAID 0. Does anyone have any tuning advice? Diags attached. Thanks! tower-diagnostics-20190514-0022.zip
  11. Just as another piece of data, I rolled back to Unraid 6.5.3 and my transfer speed starts at 450 MB/s and climbs to almost 600 MB/s by the end of the 5GB transfer. So while that's better than the 350 MB/s I get on 6.6.5, it's not quite the 800 MB/s I get in Windows Server 2016.
  12. So over the last couple of weeks I've swapped out the RAID controller from an H710p to an H310 and replaced the backplane (from a 4 bay to an 8 bay). I've also swapped to a different set of SATA3 SSDs so I can keep my R630 up and running while I do testing on the R620. With the hardware above swapped out, I'm still only able to copy at about 350 MB/s over a 10Gbit connection to a cache-only SMB share. Today I decided to install Windows Server 2016 and use the same RAID 0 cache drive and see what my transfer speed is. This yielded an SMB file copy of 800 - 850 MB/s, which is more
  13. Well, it's definitely not network related, as you can see below. I have an H310 RAID controller on order to see if that makes a difference. In the meantime I'm going to break the RAID 0 and test individual drive speed with the script and see what that yields. Thanks for the script! (A rough sketch of a dd-based test like it follows after this list.)
     root@Tower:/boot# ./write_speed_test.sh /mnt/cache/test.dat
     writing 10240000000 bytes to: /mnt/cache/test.dat
     1211290+0 records in
     1211289+0 records out
     1240359936 bytes (1.2 GB, 1.2 GiB) copied, 5.00092 s, 248 MB/s
     2426523+0 records in
     2426522+0 records out
     2484758528 bytes (2.5 GB, 2.3 GiB) copi
  14. I'm copying a 5GB file from a Windows 10 workstation that's also connected via a 10Gb NIC to a cache-only SMB share. The file on the workstation resides on a PCIe SSD. So the copy speed is what I mean by the transfer speed. Hopefully that clarifies a bit. Also note that this is the same machine I did the iperf test from. Thanks!
  15. Hey guys, I'm hoping to get some ideas around an issue I'm having with cache drive speed. I'm trying to move my array from a Dell R630 (16 core, 192GB RAM) to an R620 (12 core, 16GB RAM) server. My HDD drives are connected via an LSI SAS9207-8e to a Lenovo SA120. The R630 has an H330 RAID controller for its drive bays; the R620 has an H710p. Both servers have 10Gb Ethernet cards (Dell 0C63DV / C63DV Intel Dual Port). For cache drives I'm using two Hitachi SSDs (HUSSL4040ASS600) in hardware RAID 0. When I have this configuration set up on the R630 I can transfer at abo
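
Regarding the live-migration question above: Unraid's VM manager is built on KVM/libvirt, so in principle a migration could be driven with virsh from the command line, but this is only a rough sketch, not an officially supported Unraid workflow. The hostnames and VM name below are placeholders, and it assumes both servers can reach each other over SSH and that the VM's vdisk is on storage visible to both.

     # Sketch only: server names and the VM name are placeholders.
     # On the source server, with the VM running:
     virsh migrate --live --verbose win10-vm qemu+ssh://destination-server/system

     # If the vdisk is not on shared storage, libvirt would also have to copy it:
     # virsh migrate --live --copy-storage-all win10-vm qemu+ssh://destination-server/system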
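
For the user-share vs. cache-share speed gap described above, one way to check whether the FUSE user-share layer (rather than SMB or the network) is the bottleneck is to write the same test file locally to both paths on the server console. This is only a sketch; "data" is a placeholder for a share that uses the cache.

     # Write through the user share (goes via the /mnt/user FUSE layer):
     dd if=/dev/zero of=/mnt/user/data/speedtest_user.dat bs=1M count=4096 oflag=direct

     # Write directly to the cache pool, bypassing the user-share layer:
     dd if=/dev/zero of=/mnt/cache/data/speedtest_cache.dat bs=1M count=4096 oflag=direct

     # oflag=direct bypasses the page cache so the MB/s figure reflects the device.
     # Clean up afterwards:
     rm -f /mnt/user/data/speedtest_user.dat /mnt/cache/data/speedtest_cache.dat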
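
The SMB options from the 40Gb post above would go into Settings > SMB > "Samba extra configuration" roughly as shown below. The annotations describe what each parameter does; the values are simply the ones from the post, not a tuning recommendation.

     # Allow SMB3 clients to open multiple connections to the server
     # (helps saturate fast NICs when the client supports multichannel)
     server multi channel support = yes

     # Use asynchronous I/O for any read or write larger than 1 byte,
     # i.e. effectively for all requests
     aio read size = 1
     aio write size = 1

     # Skip byte-range lock checking on every request
     strict locking = No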
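
The write_speed_test.sh output above looks like a dd-based test. Below is a minimal sketch of that kind of script; the actual script shared in the thread may differ in block size and how it reports progress.

     #!/bin/bash
     # Minimal sketch of a dd-based write speed test; not the exact script from the thread.
     # Usage: ./write_speed_test.sh /mnt/cache/test.dat
     OUTFILE="${1:-/mnt/cache/test.dat}"
     BLOCKS=10000   # number of 1 MiB blocks to write (~10 GiB)

     echo "writing $((BLOCKS * 1048576)) bytes to: $OUTFILE"
     # oflag=direct bypasses the page cache; status=progress prints periodic speed lines
     dd if=/dev/zero of="$OUTFILE" bs=1M count=$BLOCKS oflag=direct status=progress

     # Remove the test file when done
     rm -f "$OUTFILE"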