Everything posted by JorgeB

  1. Start with a valid single parity array, add parity2 and a new disk at the same time, and you get: "Invalid expansion. - You may not add new disk(s) and also remove existing disk(s)." I know this isn't possible, but the error message is wrong, since I'm not removing any disks.
  2. This is an old one and a twofer: 1 - After a clear or rebuild operation finishes, it's reported as the last parity check. 2 - If the device is smaller than parity, the average speed will be wrong, since it's calculated based on the parity size instead of the cleared/rebuilt device size.
  3. SMART looks fine, so this could just be filesystem corruption from a flaky connection, an unclean shutdown, etc. I would replace the cables just to rule them out (there's a smartctl sketch after this list for checking SMART from the console).
  4. No need to reboot, you just need to stop the VM service when restoring libvirt.img (see the sketch after this list).
  5. Yep, /mnt/user can be on cache or array depending on how "Use cache" is set for that share.
  6. I don't know what data you have on your cache drive; appdata is enough to recreate the dockers, libvirt to keep the VM settings, but you also need the vdisk(s), is that on cache or elsewhere?
  7. If all the cache data is backed up, just format it and restore; if there's important data, see here to try to recover/fix it: https://lime-technology.com/forums/topic/46802-faq-for-unraid-v6/?do=findComment&comment=543490
  8. https://lime-technology.com/forums/forum/53-feature-requests/
  9. Never said it was, it's an unRAID requirement (for data and cache drives)
  10. It won't, unRAID requires that the partition starts on sector 64; the only way to use them with a different starting sector is as unassigned devices.
  11. Yes, that's why I mentioned it, even if it's only for testing.
  12. The first thing would be to use a cable, even if it's just to rule out wifi issues; in my experience wifi speeds can be very inconsistent, and wifi is a very bad idea for a file server.
  13. Remove the -n flag (no modify) and run it again; you might need to use -L, but only if asked (see the xfs_repair sketch after this list).
  14. OK, that could explain what you're seeing, since as I mentioned the partition is already 4K aligned. Still, I also have some 120GB 840 EVOs and did some testing with the partition starting on sector 64 and on sector 2048, and the results are practically identical. Copying a 50GB file from another SSD: sector 64 - 159.86MB/s, sector 2048 - 159.88MB/s. Writing a 20GB file with zeros: sector 64 - 169MB/s, sector 2048 - 168MB/s. So I'm not sure there's any value in using a different starting sector even for Samsung SSDs, but if it works best for you keep it like that (the write test is sketched after this list).
  15. The H310 doesn't support trim on most SSDs; move them to the onboard SATA ports (see the fstrim example after this list).
  16. It would be the opposite: with less RAM the issue would manifest much sooner.
  17. So much RAM will have a large impact on testing, as by default 20% is used for write cache, which here is about 50GB. Before testing change the defaults to the minimum by typing on the console: sysctl vm.dirty_ratio=1 and sysctl vm.dirty_background_ratio=1. Rebooting will change them back to the defaults, or set 20 and 10 respectively (the commands are laid out again after this list).
  18. UD-formatted devices (same as unRAID-formatted devices) are already 4K aligned. For advanced format devices and SSDs the partition needs to be aligned on a multiple of 4KB; they start at sector 64, so 64 x 512 bytes = 32768, which is divisible by 4096. parted's align-check is probably looking for a partition starting on sector 2048 (which would also be aligned). Besides, if your problem was an unaligned partition you'd see the performance impact immediately, not just after an 80GB transfer, so that doesn't make any sense. Were you regularly trimming your SSD? (There's an alignment-check sketch after this list.)
  19. Since v6.2 the array won't be offline when adding a new drive: it will be cleared first in the background and only then can you format it, but the array stays online during the entire process.
  20. Just to add that this is valid for mirrors only, e.g., raid1; for raid0/10 data is striped across multiple disks, so a single process will read from multiple devices.
  21. All indications are this is not a concern.
  22. That's the way btrfs currently reads from multiple devices: it's based on the process PID, so the first process will read from one device, the next process from the next device, and so on.
  23. raid1 will have the same write performance as a single device and better read performance (for multiple processes) since both mirrors are used; raid0/10 will have better write performance as well.
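
For item 3, a minimal way to review a disk's SMART data from the console; sdX is a placeholder for the actual device:

smartctl -a /dev/sdX        # full SMART report: attributes, error log, self-test log
smartctl -t short /dev/sdX  # optionally run a short self-test, then check the result with -a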
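
For item 4, a rough sketch of restoring libvirt.img without rebooting; the rc.libvirt script path and the default image location are assumptions for Unraid v6, so adjust both to your setup (disabling VMs in Settings -> VM Manager achieves the same as the stop command):

/etc/rc.d/rc.libvirt stop                                  # stop the VM service (assumed script path)
cp /path/to/backup/libvirt.img /mnt/user/system/libvirt/   # restore the backup over the default location (assumed path)
/etc/rc.d/rc.libvirt start                                 # start the VM service again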
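
For item 13, the xfs_repair sequence laid out as console commands; md1 stands for whichever array disk is being checked, with the array started in maintenance mode:

xfs_repair -n /dev/md1   # -n: check only, no modifications
xfs_repair /dev/md1      # same command without -n to actually repair
xfs_repair -L /dev/md1   # zero the log, but only if the previous run asks for it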
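
For item 14, roughly how the zero-write test can be reproduced from the console; the target path and size are only examples:

dd if=/dev/zero of=/mnt/cache/zero.bin bs=1M count=20480 oflag=direct   # write a 20GB file of zeros, bypassing the RAM write cache
rm /mnt/cache/zero.bin                                                  # remove the test file afterwards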
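
For item 15, once the SSDs are on the onboard SATA ports, trim support can be confirmed from the console; the mount point is an example:

fstrim -v /mnt/cache   # reports how much was trimmed, and errors out if the controller doesn't pass trim through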
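
For item 17, the same settings as separate console commands, including putting the stock values back without rebooting (20 and 10 are the defaults mentioned above):

sysctl vm.dirty_ratio=1               # minimise the RAM write cache before testing
sysctl vm.dirty_background_ratio=1
sysctl vm.dirty_ratio=20              # restore the defaults when done (or simply reboot)
sysctl vm.dirty_background_ratio=10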
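
For item 18, how the partition alignment can be checked from the console; sdX is a placeholder for the device:

fdisk -l /dev/sdX                        # shows the starting sector of each partition
# start sector 64 -> 64 x 512 = 32768 bytes, and 32768 / 4096 = 8, so it is 4K aligned
parted /dev/sdX align-check minimal 1    # parted's own check; its "optimal" mode typically expects a 1MiB (sector 2048) start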