Outcasst

Members • 14 posts
Everything posted by Outcasst

  1. Hi! Unfortunately I never figured this out. I ended up moving to Windows Server, which (for me, anyway) is working much better for raw performance with NVMe drives. Copying files between shares on the same SSD is literally twice as quick as it was under unRAID, and when you're transferring 200+ GB at a time, that becomes valuable. If this is ever addressed in unRAID (which I doubt, it's a very niche use case) I'd move back in a heartbeat.
  2. Hello there. I believe I am having an edge-case issue regarding Gen 4 SSD performance in the unRAID OS. I have two PCIe 4.0 drives, a Sabrent Rocket 1TB and a Samsung 980 Pro 1TB. My original intention was to use these in a RAID0 cache pool, but I noticed performance much worse than I was expecting, even with the drives used individually. I decided to benchmark the drives one at a time in case the RAID0 was not functioning correctly.

     Using the DiskSpeed docker, along with some file transfers, I was only seeing read performance up to a maximum of 4.4GB/s on the Sabrent drive and a maximum of 4.1GB/s on the Samsung. I originally thought they might not actually be running at PCIe 4.0, but I double-checked that they were. The Sabrent is rated at 5GB/s read and the Samsung at 7GB/s. I have only benchmarked read speeds so far in unRAID, since that is easy to do with the DiskSpeed docker.

     I decided it could be one of two things causing the bottleneck: either the filesystem (I tried both BTRFS and XFS with the same results) or how the OS itself is handling these drives. To test this theory, I passed the drives through bare metal to a VM and measured their performance in a Windows environment on NTFS with a CrystalDiskMark sequential test. In that environment the drives perform to their full potential, whereas in unRAID they are quite far off. Read-speed comparisons (benchmark screenshots attached): Sabrent Rocket in unRAID vs. in the Windows VM; Samsung 980 Pro in unRAID vs. in the Windows VM.

     Quite frustrating. I know that unRAID is not really built for this, but I can't see why the OS is holding back performance so much. (A rough fio sketch for this kind of sequential-read test is included after this post list.)
  3. Also trying to pass through a 660p bare metal to a VM with no success. Same error as those in this thread.
  4. I think this has something to do with VMs and dockers sharing the same bridge interface. Setting my VM to virbr0 fixes this, but that isn't a solution, as I can no longer RDP into it. Setting the dockers to bridge/host isn't an option either, since I need them to have static custom IP addresses.
  5. Oct 12 03:35:17 Storage kernel: tun: unexpected GSO type: 0x0, gso_size 1357, hdr_len 1411
     Oct 12 03:35:17 Storage kernel: tun: 13 e4 3d f7 10 86 b8 9e 87 b1 5f 81 d9 7a 98 c9  ..=......._..z..
     Oct 12 03:35:17 Storage kernel: tun: 26 fa 2d 78 50 03 f2 b2 22 55 bc 68 29 75 83 46  &.-xP..."U.h)u.F
     Oct 12 03:35:17 Storage kernel: tun: 04 35 d4 e4 71 d8 5c 04 e3 e2 a2 6d 4e 1f 22 9d  .5..q.\....mN.".
     Oct 12 03:35:17 Storage kernel: tun: 6f 97 72 60 c9 63 2b dc f4 ec c7 4f 68 60 66 9e  o.r`.c+....Oh`f.

     Getting the above message repeated over and over again in the log whenever a docker tries to access the NIC. (An ethtool offload-check sketch is included after this post list.)

     storage-diagnostics-20191012-0237.zip
  6. My slow array read speeds are fixed with 6.8 RC1.
  7. Another confirmation here: I see this behaviour on 6.7.x. Downgrading back to 6.6.7 resolves all my speed issues.
  8. So I can confirm this is an issue with 6.7.2. As soon as I downgrade back to 6.6.7, the issue goes away and reads are at full speed. Back to 6.7.2 and the reads are slow again. This is 100% repeatable.
  9. Yes, it's definitely going onto the cache. Also, when the transfer is in progress it eats about 20% of the 4930K, which is a lot for a single file transfer, right? Other than that, the array is completely idle; no other reads/writes are happening. Edit: Now it's happening over the network as well: a burst of 112MB/s, then dropping down to an average of 7-13MB/s. I've tried moving files from different disks within the array, with the same result. I have also benchmarked the drives using DiskSpeed and they all show 200MB/s max read speeds, so I don't think it's a hardware problem.
  10. Hi, I've updated to the latest stable version, but I'm seeing very poor read speeds from the array. If I transfer a file (a 70GB movie file) using MC from the array to a share that's set to cache-only, I see no more than 13-14 MB/s read speed from the array. However, transferring the same file over the network to my Windows PC yields an average of 100+ MB/s. The array is running off an LSI 9211-8i controller and the cache drive is a 970 Evo 1TB. I had the thought that it could be a PCIe bandwidth issue, but everything seems to be running at maximum speed. Any suggestions or insights would be fantastic. Thanks. (A dd-based read-path comparison sketch is included after this post list.)
  11. kernel: BTRFS error (device md1): bdev /dev/md1 btrfs_dev_stat_print_on_error: 2 callbacks suppressed

      Seems to be a random error; I can't seem to trigger it on purpose. (A btrfs device-stats sketch is included after this post list.) storage-diagnostics-20190122-2304.zip
  12. Since the upgrade I'm randomly getting these BTRFS errors. Downgrade to 6.6.6 and the errors are gone; upgrade back to 6.7.0-rc1 and the errors return.
  13. Just updated the docker and all my torrents are gone and the UI is completely messed up. EDIT: Turns out it was a browser issue. Deleted the cache and everything's back to normal now.
  14. I'm having the same problem. After the update yesterday the docker won't stay running for more than a few minutes before crashing.
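Sequential-read sketch (referenced from post 2): a minimal way to benchmark raw sequential reads from the unRAID shell with fio, independent of DiskSpeed or the filesystem. The device path is an assumption; check it with lsblk first and keep the test read-only.

  # read-only sequential test straight against the NVMe block device (path is an example)
  fio --name=seqread --filename=/dev/nvme0n1 --readonly --rw=read --bs=1M \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based

If the raw-device number reaches the drive's rated speed but DiskSpeed or file copies do not, the bottleneck is more likely the filesystem or share layer than the PCIe link.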
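Offload-check sketch (referenced from post 5): the "tun: unexpected GSO type" messages are not diagnosed in this thread, but a common first step is to look at, and temporarily disable, segmentation offloads on the bridge and NIC. The interface names br0 and eth0 are assumptions; adjust them to the actual interfaces.

  ethtool -k br0 | grep -i segmentation   # show current GSO/TSO offload settings on the bridge
  ethtool -K br0 gso off tso off          # experimentally disable GSO/TSO on the bridge
  ethtool -K eth0 gso off tso off         # and on the underlying NIC

These changes are not persistent and revert on reboot, so they are safe to test with.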
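Read-path sketch (referenced from posts 9-10): a quick way to see whether the slow reads come from the disks themselves or from the user-share (shfs) layer is to read the same file through both paths with dd, dropping the page cache before each run. The file paths are examples; point them at a real large file on the array.

  sync; echo 3 > /proc/sys/vm/drop_caches                                   # make sure the read hits the disk
  dd if=/mnt/user/Movies/bigfile.mkv of=/dev/null bs=1M status=progress     # via the user share (shfs)
  sync; echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/disk1/Movies/bigfile.mkv of=/dev/null bs=1M status=progress    # directly from the disk

A large gap between the two numbers would point at user-share overhead rather than the controller or the drives.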
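BTRFS check sketch (referenced from posts 11-12): the suppressed btrfs_dev_stat_print_on_error callbacks suggest the device error counters have incremented; they can be read directly, and a scrub verifies checksums across the filesystem. The mount point /mnt/disk1 is an assumption based on the /dev/md1 device in the log.

  btrfs device stats /mnt/disk1      # per-device write/read/flush/corruption/generation error counters
  btrfs scrub start -B /mnt/disk1    # -B runs the scrub in the foreground and prints a summary

If the counters stay at zero on 6.6.6 but climb on 6.7.0-rc1, that would support the version-specific behaviour described above.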