IamSpartacus

Everything posted by IamSpartacus

  1. So it looks like it's any file with an accent, plus files that look like they have an apostrophe but it's actually something other than the apostrophe you'd get by hitting the key on a standard keyboard, so it must be another accented character of some kind. These files are all automatically named by Sonarr/Radarr, so I'm not sure how they ended up being named with characters like that (a sketch for spotting them is included after this list). I discovered these files are not using a standard apostrophe by noticing how they were ordered in Windows Explorer:
  2. So I just did a test Mirror A -> B incremental sync with the file I screenshotted above. The folder and file are identical on both servers, but it copies a new file over and I wind up with a duplicated directory and a second file with the changed character on the destination.
  3. Yes, they are the same. But DirSyncPro is copying a second copy of the file because it reads the filenames containing the two different apostrophe-like characters as different. I'll do a test on a few files and show you what the result looks like.
  4. See here. The files are identical on both source and destination.
  5. Does anyone have an issue with DirSyncPro when trying to copy files that have an apostrophe in the name? It seems like it can't read the character, so even if the file exists on the destination it copies it over again, changing the ' to a ą and creating a second, different copy.
  6. Could anyone explain to me why the first part of the VM backup process includes a copy operation of my previous backup img file to a newly created img file BEFORE my VM shuts down, and only then continues with the backup process? I'm just confused by what this first step is doing.
     2020-04-28 13:35:20 information: copy of backup of /mnt/user/data/backups/servers/athens/vm/SPE-DC1/20200427_0401_spe-dc1_vdisk1.img vdisk to /mnt/user/data/backups/servers/athens/vm/SPE-DC1/20200428_1335_spe-dc1_vdisk1.img starting.
     2020-04-28 13:42:24 information: copy of /mnt/user/data/backups/servers/athens/vm/SPE-DC1/20200427_0401_spe-dc1_vdisk1.img to /mnt/user/data/backups/servers/athens/vm/SPE-DC1/20200428_1335_spe-dc1_vdisk1.img complete.
     2020-04-28 13:42:24 information: skip_vm_shutdown is false. beginning vm shutdown procedure.
  7. Perfect! That's exactly what I need. Thank you!
  8. Can the setting 'Only move at this threshold of used cache space' be used in conjunction with the scheduler, such that the mover is scheduled to run only once a day but only runs if the used space is above the threshold? Or does this setting always invoke the mover the moment the threshold is reached? (See the threshold-check sketch after this list.)
  9. OK, so actual workloads I'm testing with (i.e. Sonarr/Radarr imports from one server to the next) are hitting about 12Gbps, so that is very solid. I can live with that :).
  10. Using cp it seems to be getting over 1GB/s, but it's hard to say for sure since there is no way to view real-time progress with cp (see the copy-progress sketch after this list). EDIT: I used nload to view current network usage on the NIC while doing a cp to an SMB mount and I get 18Gbps, so that is nice at least.
  11. Yes, an SMB mount. I get the same slowness going from one directory on cache to another with pv. This is a Samsung 960 Pro 1TB NVMe.
  12. Nowhere near the speed the storage is capable of, but that may be an rsync limitation, and iowait was low during the transfer. It seems I need to find a better transfer method than rsync and then figure out why my network transfers are still slow, assuming they still are with that method.
  13. Do you recommend a different test, such as fio? Yes, internal tests with dd on each side of the transfer (cache pool on one side, NVMe cache on the other) show each is capable of 1.8-2.0GB/s reads/writes to the drive (see the sequential-write sketch after this list).
  14. These servers are direct-connected, so I don't really have any way of testing other than rsync or some other internal transfer tool. I get poor speed whether I use NFS or SMB.
  15. Initially testing with the following command: Then, testing from an unassigned disk to cache using rsync, iowait was much lower. I'm trying to test writes from one server to the other using a cache-enabled share, but I seem to be getting only 200-250MB/s across my network right now, which doesn't make much sense since it's a 40GbE connection and iperf3 shows it connected as such.
  16. I have two Unraid servers connected via direct-connect 40GbE NICs, and I'm looking for advice on how best to tune NFS/SMB to get the fastest transfers possible between them. The storage on each end of the transfer is capable of 2.0GB/s in internal testing; if I can even get half that I'd be happy, but my initial testing is barely breaking 200-250MB/s. As you can see from the iperf3 testing below, connectivity is not the bottleneck. The only thing I've tried changing is the MTU, from 1500 to 9000 for this direct connection, but it makes no difference. I've been testing using rsync between the servers (see the bare-socket copy sketch after this list).
  17. I've been aware of btrfs pools in Unraid causing high iowait for a while, and up to this point I've avoided the issue by using a single XFS-formatted NVMe. But circumstances have changed, and I'm now exploring a cache pool of Intel S4600 480GB SSDs. I've been doing extensive testing with both the number of drives in the pool and the RAID balance. It seems that as the number of drives in the pool increases, so does the amount of iowait; there appears to be a specific amount of iowait attached per drive. The RAID balance does not seem to have any effect other than shortening/prolonging the iowait, depending on the balance (i.e. raid1/raid10 has a longer period of high iowait than raid0, obviously, since the write takes longer). I have tested these drives connected both to my onboard SATA controller (Supermicro X11SCH-F motherboard with a C246 chipset) and to my LSI 9300-8e SAS controller; there is zero difference. I'm curious if anyone has any insight on how to mitigate these iowait issues (see the iowait-sampling sketch after this list). My only solution at the moment appears to be using a RAID0 balance so that the writes are very fast (i.e. 2.0GB/s with four S4600s) and the iowait only lasts, say, 10-15 seconds for a 20GB write. But that is obviously not sustainable unless I can ensure I never do large transfers to cache-enabled shares, which is kind of the whole point of having a cache. EDIT: I should note these tests were done using dd. Writes from an unassigned pool to cache show much less iowait, which I guess makes sense given that RAM is so much faster than storage.
  18. Why, when I'm running a non-snapshot backup (not backing up nvram either), do I see the script copy my running vdisk to the backup location (without shutting the VM down first), then shut down the VM and back up the vdisk on top of the copied file? 20200420_1611_unraid-vmbackup.log
  19. Has anyone using a ConnectX-3 card had any luck tuning their NFS/SMB settings to allow for increased throughput? I'm testing between two ramdisks on two Unraid servers and only getting a max of 1.5GB/s, while I get 39.5Gbps using iperf3.
  20. Just updating that I did a full long format with Rufus and that still didn't help, so it must be the flash drive.
  21. Is it possible to complete this process in Unraid or some other Linux distro (one that can be run from a LiveCD)? I have no way of running Windows on my server currently and would love to get my ConnectX-3 working in Unraid. EDIT: I managed to install Windows onto a USB3 HDD and got the above steps working perfectly. Thanks!
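
For the apostrophe issue in post 1: one way to confirm which character a filename actually contains is to scan the library and print the Unicode code points of any non-ASCII characters (a curly apostrophe is U+2019, the keyboard apostrophe is U+0027). This is only a minimal Python sketch; the /mnt/user/media path is a placeholder for whatever share Sonarr/Radarr write to.

    #!/usr/bin/env python3
    """Report filenames containing non-ASCII characters and show their
    Unicode code points (e.g. U+2019 RIGHT SINGLE QUOTATION MARK vs
    the plain ASCII apostrophe U+0027)."""
    import sys
    import unicodedata
    from pathlib import Path

    def scan(root: Path) -> None:
        for path in root.rglob("*"):
            suspects = [ch for ch in path.name if ord(ch) > 127]
            if suspects:
                codes = ", ".join(
                    f"U+{ord(ch):04X} ({unicodedata.name(ch, 'UNKNOWN')})"
                    for ch in suspects
                )
                print(f"{path}\n    -> {codes}")

    if __name__ == "__main__":
        # Placeholder path; point it at the share Sonarr/Radarr manage.
        scan(Path(sys.argv[1] if len(sys.argv) > 1 else "/mnt/user/media"))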
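
For the mover question in post 8: if the built-in setting turns out to fire as soon as the threshold is hit rather than waiting for the schedule, a scheduled script can gate the mover on cache usage itself. This is a sketch under assumptions: /mnt/cache as the pool mount point and /usr/local/sbin/mover as the mover binary; check both on your system before relying on it.

    #!/usr/bin/env python3
    """Invoke the mover only when cache usage is above a threshold.
    Meant to be run once a day from cron or the User Scripts plugin."""
    import shutil
    import subprocess

    CACHE_MOUNT = "/mnt/cache"             # assumed cache pool mount point
    MOVER_CMD = ["/usr/local/sbin/mover"]  # assumed mover location; may differ
    THRESHOLD_PERCENT = 70                 # only move above this usage

    def used_percent(path: str) -> float:
        usage = shutil.disk_usage(path)
        return usage.used / usage.total * 100

    if __name__ == "__main__":
        pct = used_percent(CACHE_MOUNT)
        if pct >= THRESHOLD_PERCENT:
            print(f"cache at {pct:.1f}%, starting mover")
            subprocess.run(MOVER_CMD, check=False)
        else:
            print(f"cache at {pct:.1f}%, below {THRESHOLD_PERCENT}%, skipping")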
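
For post 10: pv already covers this, but if you want a throughput readout without piping through pv, a chunked copy that prints a running rate works too. A minimal sketch, nothing Unraid-specific:

    #!/usr/bin/env python3
    """Copy one large file in chunks and print running throughput,
    since plain cp gives no progress indication."""
    import sys
    import time

    CHUNK = 16 * 1024 * 1024  # 16 MiB per read

    def copy_with_progress(src: str, dst: str) -> None:
        copied = 0
        start = time.monotonic()
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            while True:
                buf = fin.read(CHUNK)
                if not buf:
                    break
                fout.write(buf)
                copied += len(buf)
                rate = copied / (time.monotonic() - start) / 1e6
                print(f"\r{copied / 1e9:6.2f} GB @ {rate:5.0f} MB/s", end="")
        print()

    if __name__ == "__main__":
        copy_with_progress(sys.argv[1], sys.argv[2])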
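
For post 13: fio is the better tool for this, but as a rough cross-check a dd-style test can be approximated with a large sequential write followed by an fsync. A crude sketch; page cache and filesystem behaviour will still colour the result, and /mnt/cache/benchfile.tmp is just a placeholder target.

    #!/usr/bin/env python3
    """Rough sequential-write benchmark: write 8 GiB of incompressible
    data, fsync, and report MB/s. A crude stand-in for a dd/fio run."""
    import os
    import sys
    import time

    CHUNK = 64 * 1024 * 1024   # 64 MiB per write
    TOTAL = 8 * 1024**3        # 8 GiB total

    def seq_write(path: str) -> float:
        buf = os.urandom(CHUNK)          # incompressible data
        written = 0
        start = time.monotonic()
        with open(path, "wb") as f:
            while written < TOTAL:
                f.write(buf)
                written += CHUNK
            f.flush()
            os.fsync(f.fileno())         # include flush-to-disk in the timing
        return written / (time.monotonic() - start) / 1e6

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "/mnt/cache/benchfile.tmp"
        print(f"{seq_write(target):.0f} MB/s sequential write")
        os.remove(target)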
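
For post 16: since iperf3 already shows the link is fine, the gap is somewhere in the file-transfer path (rsync/SSH/SMB/NFS overhead or the disks). One way to narrow it down is a bare-socket copy of a real file, netcat style, which removes the protocol layer entirely. A sketch under assumptions: port 5301 is arbitrary and the paths are placeholders.

    #!/usr/bin/env python3
    """Bare-socket file copy to measure disk -> TCP -> disk throughput
    without rsync/SSH/SMB/NFS in the way.

    Receiver:  python3 rawcopy.py recv /mnt/cache/test.out
    Sender:    python3 rawcopy.py send <receiver-ip> /mnt/cache/test.in"""
    import socket
    import sys
    import time

    PORT = 5301                # arbitrary test port, assumed free
    CHUNK = 8 * 1024 * 1024

    def recv(dst: str) -> None:
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.monotonic()
        with open(dst, "wb") as f:
            while True:
                buf = conn.recv(CHUNK)
                if not buf:
                    break
                f.write(buf)
                total += len(buf)
        secs = time.monotonic() - start
        print(f"received {total / 1e9:.2f} GB at {total / secs / 1e6:.0f} MB/s")

    def send(host: str, src: str) -> None:
        sock = socket.create_connection((host, PORT))
        with open(src, "rb") as f:
            sock.sendfile(f)   # zero-copy path where the OS supports it
        sock.close()

    if __name__ == "__main__":
        if sys.argv[1] == "recv":
            recv(sys.argv[2])
        else:
            send(sys.argv[2], sys.argv[3])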
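
For post 17: to put numbers on how long the iowait spike lasts for each pool layout, system-wide iowait can be sampled from /proc/stat once a second while the test write runs. A minimal sketch; it reports the aggregate CPU line only.

    #!/usr/bin/env python3
    """Print system-wide iowait percentage once a second, sampled from
    the aggregate 'cpu' line in /proc/stat."""
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            vals = list(map(int, f.readline().split()[1:]))
        # fields: user nice system idle iowait irq softirq steal ...
        return sum(vals), vals[4]

    if __name__ == "__main__":
        prev_total, prev_iowait = cpu_times()
        while True:
            time.sleep(1)
            total, iowait = cpu_times()
            dt, dw = total - prev_total, iowait - prev_iowait
            print(f"iowait: {100 * dw / dt if dt else 0.0:5.1f}%")
            prev_total, prev_iowait = total, iowait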