JorgeB

Moderators
Everything posted by JorgeB

  1. OK, that could explain what you're seeing, since as I mentioned the partition is already 4K aligned. Still, I also have some 120GB 840 EVOs and did some testing with the partition starting on sector 64 and on sector 2048, and the results are practically identical. Copying a 50GB file from another SSD: sector 64 - 159.86MB/s, sector 2048 - 159.88MB/s. Writing a 20GB file with zeros: sector 64 - 169MB/s, sector 2048 - 168MB/s. So I'm not sure there's any value in using a different starting sector even for Samsung SSDs, but if it works best for you keep it like that.
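The zero-write test above can be reproduced with dd. This is a hedged sketch: the TARGET path is an assumption, point it at the pool being tested (e.g. /mnt/cache), and increase count for a meaningful result.

```shell
# Sketch of the "writing a file with zeros" test above. TARGET is an
# assumption: set it to the SSD mount being tested, e.g. /mnt/cache.
TARGET="${TARGET:-$(mktemp -d)}"
# conv=fdatasync flushes data to the device before dd reports its speed,
# so the page cache does not inflate the result. Raise count (e.g. 20480
# for 20GB) for a realistic run; 64 keeps this sketch quick.
dd if=/dev/zero of="$TARGET/zero.bin" bs=1M count=64 conv=fdatasync
rm "$TARGET/zero.bin"
```

dd prints the achieved throughput on its last line, which is the number compared between the sector 64 and sector 2048 partitions.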
  2. The H310 doesn't support trim on most SSDs; move them to the onboard SATA ports.
  3. It would be the opposite: with less RAM the issue would manifest much sooner.
  4. So much RAM will have a large impact on testing, since by default 20% of it is used for write cache, which is about 50GB. Before testing, change the defaults to the minimum by typing on the console: sysctl vm.dirty_ratio=1 sysctl vm.dirty_background_ratio=1 Rebooting will change them back to the defaults, or use 20 and 10 respectively.
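The 50GB figure above follows directly from the default dirty ratio. A small sketch of the arithmetic, where mem_gb=256 is an assumption matching the "about 50GB" estimate in the post:

```shell
# Sketch: with the default vm.dirty_ratio of 20, up to 20% of RAM can be
# filled with dirty pages (write cache) before writers are throttled.
# mem_gb=256 is an assumed RAM size, not taken from the poster's system.
mem_gb=256
dirty_ratio=20
echo "write cache can grow to ~$(( mem_gb * dirty_ratio / 100 ))GB"
```

This is why benchmarks on a big-RAM box look inflated until the cache is minimized with the sysctl commands above.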
  5. UD-formatted devices (same as unRAID-formatted devices) are already 4K aligned. For advanced format devices and SSDs the partition needs to be aligned on a multiple of 4KB; they start at sector 64, and 64 x 512 bytes = 32768, which is divisible by 4096. parted's align-check is probably looking for a partition starting on sector 2048 (which would also be aligned). Besides, if your problem were an unaligned partition, you'd see the performance impact immediately, not just after an 80GB transfer; that doesn't make any sense. Were you regularly trimming your SSD?
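The alignment rule above can be checked with shell arithmetic: a partition on 512-byte sectors is 4K aligned when its starting sector times 512 is divisible by 4096, i.e. when the start sector is a multiple of 8.

```shell
# Check 4K alignment for a few starting sectors (512-byte sectors).
# 63 is the old MS-DOS default and is NOT aligned; 64 and 2048 both are.
for sector in 63 64 2048; do
  if [ $(( sector * 512 % 4096 )) -eq 0 ]; then
    echo "sector $sector: 4K aligned"
  else
    echo "sector $sector: NOT aligned"
  fi
done
```

So a tool that only accepts sector 2048 is being stricter than the alignment math actually requires.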
  6. Since v6.2 the array won't be offline when adding a new drive: it will first be cleared in the background, and only then can you format it, but the array will stay online during the entire process.
  7. Just to add that this is valid for mirrors only, e.g. raid1; for raid0/10 data is striped across multiple disks, so a single process will read from multiple devices.
  8. All indications are this is not a concern.
  9. That's the way btrfs currently reads from multiple devices: it's based on the process PID, so the first process will read from one device, the next process from the next, and so on.
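The PID-based policy described above boils down to simple parity. A sketch, with the caveat that the exact in-kernel selection is an assumption here, as the detail that matters is that one process always lands on the same mirror:

```shell
# Sketch of btrfs raid1 read balancing: the mirror of a 2-device pair
# is picked from the reading process's PID, so a single process always
# reads from one device while concurrent processes spread across both.
for pid in 4001 4002 4003 4004; do
  echo "PID $pid reads from mirror $(( pid % 2 ))"
done
```

This is why a single-threaded benchmark on a raid1 pool never shows more than one device's read speed.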
  10. raid1 will have the same write performance as a single device and better read performance (for multiple processes), since both mirrors are used; raid0/10 will have better write performance as well.
  11. If they are still increasing then there's still a problem; if it's not the cables, the controller is the next candidate.
  12. Incorrect: you'll see the clearing progress in the same place as a parity check, and you'll get a notification when complete.
  13. Not likely, Crucial usually releases a bootable update, so you should still be able to do it on the server.
  14. It's a problem with the MX500, likely a firmware bug. I have 12 in a pool and keep getting the same warning: 1 pending sector that disappears after a few minutes. Hopefully they fix it with a firmware update.
  15. To compare the data you'll need to change one of the disks' UUID so they can be mounted at the same time. You can change the old disk's UUID with: xfs_admin -U generate /dev/sdX1 Then mount it with the UD plugin and run a compare.
  16. Make sure the content looks correct before rebuilding to the same disk, especially since you canceled the parity check after the unclean shutdown, so there may be some sync errors. The best way would be to rebuild to a spare disk, then compare the data with the old disk using, for example, rsync in checksum mode.
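The checksum compare suggested above can be sketched with rsync in dry-run mode. The OLD and NEW paths are assumptions: substitute the old disk mounted via UD and the rebuilt array disk (e.g. /mnt/disks/old_disk and /mnt/disk1).

```shell
# Hedged sketch of an rsync checksum compare between two copies of a disk.
# OLD/NEW are assumed paths; replace them with the real mount points.
OLD="${OLD:-$(mktemp -d)}"
NEW="${NEW:-$(mktemp -d)}"
# -r recurse, -c compare by checksum instead of size/mtime,
# -n dry run (change nothing), -v list files:
# every file rsync prints differs between the two copies.
rsync -rcnv "$OLD/" "$NEW/"
```

An empty file list (apart from rsync's summary lines) means both copies match byte for byte.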
  17. OK, now it's showing filesystem corruption. You need to run xfs_repair on the emulated disk: start the array in maintenance mode and run: xfs_repair -v /dev/md7
  18. Is the unassigned disk still mounted in UD? It was in your latest diags, and you can't have the same UUID mounted twice. If yes, unmount it and restart the array; if not, post new diags.
  19. I was already taking a look; there are some weird controller errors but they seem unrelated. Unsupported partition layout is not a filesystem problem, and I find it very strange that this would be caused by an unclean shutdown, unless it somehow damaged the MBR. To fix it you'll need to rebuild the disk, either to the same disk or, to play it safer, to a spare one if available; the disk should mount if you start the array with it emulated.
  20. You don't need to disable the mover; just set that share (or those shares) to cache "only" and the mover won't touch them.
  21. You should also update the firmware; the latest is 20.00.07.
  22. Try erasing the BIOS; it's not needed, and as an added bonus the card will boot much faster: https://lime-technology.com/forums/topic/12114-lsi-controller-fw-updates-irit-modes/?do=findComment&comment=632252