bouis

Members
  • Posts: 12
  • Joined
  • Last visited

  1. I have the same issue (apparently) but it's a pool and can't be accessed (as far as I know) without mounting the array.
  2. I guess I should have been clearer: it didn't delete the old files after moving, so I ended up with duplicates. Sorry for the lack of clarity; it was late and I was busy.
  3. I have two SSDs in btrfs raid0. I've been fiddling around with my box lately, and I guess the cables were loose on one of my drives, because the drive dropped while the mover was running. It came back after a reboot and no data was lost, but there were some very preventable corruption issues.

     Bug #1: After one disk dropped, the files on the raid0 cache remained visible to the UI (but not readable through it). Good, I guess. But the mover kept going and wrote corrupted files to the array disks, which had every appearance of being functional files. I was able to clean them up because the files remained on the cache (presumably it went read-only), but the mover should not do this!

     Bug #2: Unraid refuses to mount the cache if one drive in a btrfs raid0 pool is unavailable after a reboot. Good. And the drives sync after being reconnected and the system rebooted. Good. But even though it works and the data is good, Unraid shows the dropped drive as unassigned, not as part of the cache pool. I can't see any way to add it back without moving everything off and wiping it. This is very inconvenient.
  4. Bug report: it seems that unbalance won't move files with a backtick (`) in the name.
  5. You can't multi-thread a single file transfer, but you can transfer multiple files at the same time. Unfortunately I haven't found a good Windows app to automate this, but if you just run two copies at the same time it works, because each copy gets its own Samba thread. With SSDs there's very little performance penalty for running two or even three sequential copies at once. If you find a Windows app that automates this I'd love to hear about it; I tried a lot of them and had no luck.
  6. Performance with the 2650 v2s is pretty disappointing. But if I run two threads I can max out the 10Gb Ethernet. Unfortunately I can't find any Windows utility that splits mass copies into multiple threads; I tried a bunch of them and none has both that functionality and a usable interface. Looks like I'll just have to split my copies manually. It's not a big deal, I guess. And once I get the box filled up the performance issues won't matter, and I'll probably get rid of one of the cache drives anyway; no sense in having a $300 drive sit idle.
  7. Thanks! Very helpful. Maybe I should look into multi-threaded Windows copy programs. It seems like there should be something with a shell extension that adds a "paste with two threads" entry to the right-click Explorer menu.
  8. That makes sense. Still, it's quite a bit faster single-threaded than the CPUs that are in there. I'll report back how it works. Thanks again.
  9. Thanks, but these two drives in RAID 0 were getting very close to 1 GB/s sequential writes with the old (newer) motherboard and CPU. I thought it was a SATA controller issue, but that doesn't explain why the NVMe drive is bottlenecked too. That thing will do > 2 GB/s writes. Tomorrow I am going to try it with a newer CPU (E5 2650 v2) and see if that changes anything.
  10. This is very strange. I put an NVMe drive in and tested it:

      root@argos:/mnt# dd if=/dev/zero of=/mnt/testnv/test1.img bs=10G count=1 oflag=dsync
      0+1 records in
      0+1 records out
      2147479552 bytes (2.1 GB, 2.0 GiB) copied, 4.27589 s, 502 MB/s

      The expected write speed is > 1500 MB/s. However, the actual read speeds as tested by the diskspeed plugin are the expected ~2750 MB/s. Could the single-thread performance of the CPU be the bottleneck here? I don't have experience running these kinds of SSDs on older hardware.
  11. I have two 2 TB SATA SSDs in raid0 as a cache drive. I'd formerly used a fairly modern desktop motherboard and they worked as expected (1 GB/s read & write), but today I moved the unraid server over to an older Supermicro box with an X9DRi-F motherboard and two v1 2620 Xeons. It's got two SATA III ports, which I've connected the SSDs to. I also have a 10-gigabit Ethernet card (Mellanox ConnectX-3). But I only get 280-350 MB/s write speeds when copying to the cache drive over the network, where formerly I was getting the expected 1 GB/s. So I test the Ethernet with iperf3, and it gives me 1 GB/s. I test the SSDs with the diskspeed docker, and they give me 500 MB/s read (each), and 1000 MB/s when both are tested simultaneously to check the controller. I have the trim plugin installed and tried manually trimming the cache. I tried adding an HBA and connecting the SSDs to it, and I still get the same slow write speeds when copying files over the network. The same HBA (9211-8i) can max out SATA III SSDs in Windows, though I don't think I ever tried it with unraid before now. So what gives? Any thoughts are greatly appreciated. argos-diagnostics-20190630-0728.zip
  12. I've just discovered this plugin. It really scratches an itch with the mover, so thanks a bunch. I do have a feature request, though: I have a large cache drive (4 TB SSD), and it would be helpful if I could set it up so that, during a scheduled move, turbo write is used only if the cache drive is filled past a certain percentage. That way small scheduled moves would happen without turbo write, but big ones that need to write faster to clear the cache would spin up all the drives. Also, and maybe this is asking for too much, but it'd be great if we could: A) stop the mover manually; and B) automatically stop scheduled moves if they haven't finished by a certain time. My need for those last two may be obviated by being able to set the mover priority lower / cancel it during parity checks, but I haven't had a chance to test the low priority much. Anyway, thanks again!
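
The backtick bug report in post 4 may come down to shell quoting: a backtick in a filename is perfectly legal, but in a shell command it starts command substitution unless it is escaped or single-quoted. A minimal sketch under that assumption, with temporary paths and plain mv standing in for whatever unbalance actually invokes:

```shell
# Illustration only: a backtick in a filename must be escaped (or
# single-quoted) in shell commands, or it triggers command substitution.
d=$(mktemp -d)
mkdir "$d/dest"
touch "$d/back\`tick.txt"              # \` is a literal backtick here
mv "$d/back\`tick.txt" "$d/dest/"      # moves fine when properly quoted
```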
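The two-copies-at-once workaround from posts 5-7 can be sketched in shell: start each copy as a background job so each stream gets its own server-side (e.g. Samba) thread, then wait for both to finish. The paths are placeholders created on the fly so the sketch runs anywhere:

```shell
# Sketch: split a transfer into two concurrent streams. Over a share, each
# background cp would get its own Samba thread; temp dirs stand in here.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/a" "$src/b"
echo data1 > "$src/a/f1"
echo data2 > "$src/b/f2"
cp -r "$src/a" "$dst/" &   # stream 1
cp -r "$src/b" "$dst/" &   # stream 2
wait                       # block until both streams complete
```

The same pattern extends to three or more streams; per post 5, the sequential-write penalty on SSDs is small.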
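One note on the dd test in post 10: a single write is capped at roughly 2 GiB, which is why bs=10G count=1 reported only 2147479552 bytes, and oflag=dsync forces a sync after every write. A variant that writes many smaller blocks and flushes once at the end gives a more conventional sequential figure; the sketch below writes to a temp file (and only 64 MiB) so it runs anywhere, where the post's real target was /mnt/testnv:

```shell
# Sketch: 64 x 1 MiB writes, flushed once at the end with conv=fdatasync,
# so the reported rate includes the final flush but not per-write syncs.
out=$(mktemp)
dd if=/dev/zero of="$out" bs=1M count=64 conv=fdatasync 2>/dev/null
ls -l "$out"   # 64 MiB = 67108864 bytes
```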
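The threshold idea in post 12 could be sketched as a wrapper script: check how full the cache is, and only enable turbo (reconstruct) write before invoking the mover when it is past the threshold. Everything Unraid-specific here is an assumption, the write-mode switch in particular, so the sketch only echoes the action it would take; the fill check itself runs anywhere:

```shell
# Hypothetical sketch, not an existing mover/unbalance feature. On Unraid
# the echoed branches would switch the array write mode (turbo vs. default)
# before and after running the mover; those commands are assumptions.
CACHE=${CACHE:-/mnt/cache}   # cache mount point (assumed path)
THRESHOLD=50                 # fill percent that justifies spinning up all disks

cache_pct() {                # used-space percent of the given mount, digits only
    df --output=pcent "$1" 2>/dev/null | tail -n 1 | tr -dc '0-9'
}

used=$(cache_pct "$CACHE")
if [ "${used:-0}" -ge "$THRESHOLD" ]; then
    echo "enable turbo write, run mover, restore default write mode"
else
    echo "run mover with the default write mode"
fi
```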