
JorgeB

Moderators
  • Posts

    67,411
  • Joined

  • Last visited

  • Days Won

    705

Everything posted by JorgeB

  1. Can you make a bug report? It's not a bug; he's referring to the automatic change from reconstruct write to r/m/w when there's activity on multiple array disks. I did mention before that I would much prefer this only happened when the write mode is set to auto, never when it's set to reconstruct write.
  2. It's not really a bug, it's a feature, and though I don't like it either, you can turn it off. That is a bug with v6.7.x, where any array writes starve reads on other array devices.
  3. Yes, r/m/w still works the same; with reconstruct mode (and because you're doing an array to array transfer) you're seeing a combination of both write modes. Like mentioned, you can confirm that by downgrading to v6.7.2 and repeating the transfer.
  4. No worries, my bad also because I didn't read the entire post carefully; I missed that this was an array to array copy, so it's not related to what I was talking about. The issue here is that since v6.8.x Unraid reverts to read/modify/write mode when activity is detected on multiple array disks, and unfortunately this can't be turned off. I did ask Tom to only use this when the write mode is set to auto, but no dice, it can't be disabled for now. So your writes are constantly switching from reconstruct write to r/m/w, hence why parity and the destination disk also show reads, for the r/m/w part (see the sketch after this list for how the two modes differ). If you downgrade to v6.7.x you'll see the expected behavior, but speed likely won't be much better since disk4 will be constantly seeking between the data and the parity calculation reads.
  5. Yeah, when I got that it was for brief periods and I just ignored it; the next transfer was usually OK. I would guess it happened 1 or 2 times in 100, and it's been some time. Just noticed the screenshot shows XFS for all disks, so it's not btrfs related...
  6. I don't have an explanation, but I did observe similar behavior, though only sometimes, usually for a short while, and not in a reproducible way, so I never tried to investigate further. It's likely related to btrfs (or COW in general) since all my servers are also btrfs.
  7. With the full diagnostics it would be easy to see it's an Asmedia controller. With just the syslog it doesn't show Asmedia, it just shows a two port controller loading after the first 6 Intel ports, but using the motherboard model I could see it has a 2 port Asmedia controller.
  8. Since kernel 4.4 balance is not needed, at least not regularly.
  9. Most likely. At least that one is, it's using one of the two Asmedia ports.
  10. Yes, looks like a connection/power issue; the disk dropped offline out of the blue, as if the power or SATA cable had been pulled.
  11. Start by redoing the flash drive: back up the config folder, recreate the flash, then restore the config folder (a quick sketch of the backup/restore step is after this list).
  12. That's incorrect and should be updated. It shouldn't be ignored, that's the value you need to keep an eye on.
  13. The link explains how to read the actual errors on Seagate drives; just looking at the total RAW value is pointless for those. Not so for WD drives, where the RAW value is the actual number of errors, so 0 = good, anything above 0 not so good, though low values can be OK (see the decoding sketch after this list for the Seagate case).
  14. They should be saved after the disk gets disabled and before rebooting. Without the syslog, and based on the SMART report, the disk looks OK; CRC errors like mentioned aren't a disk problem, so likely it's the slot/cable.
  15. If there are errors there's a problem, and it's not the drive. Without any diags posted we can only guess.
  16. That's for Seagate drives: https://forums.unraid.net/topic/86337-are-my-smart-reports-bad/?do=findComment&comment=800888
  17. CRC errors are a connection problem, 9 times out of 10 a bad SATA cable, but could also be the backplane, even the controller, though much less likely.
  18. Forgot to say, you can still swap backplane slots to rule that out, but like mentioned I don't think that is the problem.
  19. Besides the normal pending/reallocated sector attributes, on WD drives there are a couple more that should be monitored:

      ID# ATTRIBUTE_NAME         FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
        1 Raw_Read_Error_Rate    POSR-K  200   200   051    -    51
      200 Multi_Zone_Error_Rate  ---R--  200   200   000    -    26

      Both should be 0, or close to it, on a healthy drive. Higher numbers are usually bad news (especially if they keep climbing) and the disk will likely return read errors sooner or later, but there are exceptions, e.g. disks that give a few errors and then work fine for some time. (There's a quick check sketch for these attributes after this list.)
  20. You need to delete/move data from the cache pool; deleting docker containers won't do much for this, and the docker image will always need to be recreated since the current one is corrupt, likely from running out of space. Recreating the image is very easy.
  21. Yes, but since the SMART test passed those are "false positives". I'm still pretty sure the read errors on both disks are disk related and they will likely fail again soon, but it's still good to rule out connection issues since in rare cases those are logged as media/UNC errors.
  22. Please don't double post, threads merged.
  23. See if this helps: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173 P.S. Ryzen 3 3200G is a second gen Ryzen, not third as the model implies.
  24. Two visible issues:
      - CPU is overheating, check cooling
      - There are what look like connection/power issues with multiple disks, check all connections and/or use a different PSU if available
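
For post 4 above, a minimal sketch of the single-parity math behind the two write modes, assuming a plain XOR parity model with toy integer "sectors"; it illustrates why r/m/w reads parity plus the target disk while reconstruct write reads all the other data disks. It is not Unraid's actual code.

```python
# Toy single-parity (XOR) model; disk contents are small ints, names hypothetical.

def rmw_write(old_data: int, old_parity: int, new_data: int) -> int:
    """Read/modify/write: read old data + old parity (2 reads),
    then write new data + new parity (2 writes)."""
    return old_parity ^ old_data ^ new_data  # new parity

def reconstruct_write(other_disks: list[int], new_data: int) -> int:
    """Reconstruct write: read every *other* data disk, no parity read,
    then write new data + new parity."""
    parity = new_data
    for d in other_disks:
        parity ^= d
    return parity

# Example: 4 data disks, updating disk1
disks = {"disk1": 0b1010, "disk2": 0b0110, "disk3": 0b0001, "disk4": 0b1111}
parity = 0
for v in disks.values():
    parity ^= v

new_val = 0b0011
p_rmw = rmw_write(disks["disk1"], parity, new_val)
p_rcw = reconstruct_write([disks["disk2"], disks["disk3"], disks["disk4"]], new_val)
assert p_rmw == p_rcw  # both modes end with the same parity, they just read different disks
```

The reads showing on parity and the destination disk during the transfer correspond to the r/m/w path, while reads on all the other data disks correspond to the reconstruct write path.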
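For post 11, a hedged sketch of only the backup/restore part of redoing the flash drive, assuming the flash is mounted at /boot on a running server and an arbitrary backup destination; recreating the flash itself is done with the usual USB creation tool and isn't shown here.

```python
# Back up and restore the Unraid config folder around a flash rebuild.
# /boot is where the flash is normally mounted on a running server;
# the backup destination below is a hypothetical example.
import shutil
from pathlib import Path

FLASH_CONFIG = Path("/boot/config")
BACKUP = Path("/mnt/user/backups/flash-config")  # hypothetical location

def backup_config() -> None:
    # Copy the whole config folder off the flash before recreating it
    shutil.copytree(FLASH_CONFIG, BACKUP, dirs_exist_ok=True)

def restore_config() -> None:
    # Copy the saved config folder back onto the freshly recreated flash
    shutil.copytree(BACKUP, FLASH_CONFIG, dirs_exist_ok=True)
```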
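For post 13, a sketch of the usual way Seagate raw values are read (as the linked post describes): the 48-bit raw value of attributes like Raw_Read_Error_Rate and Seek_Error_Rate packs the error count in the upper bits and the operation count in the lower 32 bits. The split below is the commonly used interpretation, not an official Seagate specification.

```python
def decode_seagate_raw(raw_value: int) -> tuple[int, int]:
    """Split a Seagate 48-bit raw value into (errors, operations)."""
    errors = raw_value >> 32             # upper bits: actual error count
    operations = raw_value & 0xFFFFFFFF  # lower 32 bits: total reads/seeks
    return errors, operations

# Example: a scary-looking raw value that actually means zero errors
errors, ops = decode_seagate_raw(117_342_848)
print(errors, ops)  # 0 117342848 -> healthy despite the big number
```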
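For post 19, a hedged sketch that pulls the attributes mentioned there (plus the usual pending/reallocated ones) out of `smartctl -A` output from smartmontools; the device path and the assumption that the raw value is the last field on each attribute row are mine, so treat it as a starting point rather than a finished tool.

```python
import subprocess

# SMART attribute IDs worth watching, per post 19 plus pending/reallocated sectors
WATCHED = {
    1: "Raw_Read_Error_Rate",
    5: "Reallocated_Sector_Ct",
    197: "Current_Pending_Sector",
    200: "Multi_Zone_Error_Rate",
}

def check_disk(device: str) -> None:
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        parts = line.split()
        # Attribute rows start with the numeric ID; assume the raw value is the last field
        if parts and parts[0].isdigit() and int(parts[0]) in WATCHED:
            raw = parts[-1]
            status = "OK" if raw.isdigit() and int(raw) == 0 else "check"
            print(f"{WATCHED[int(parts[0])]:>25}: raw={raw} ({status})")

check_disk("/dev/sdb")  # example device
```

For Seagate drives, ID 1 would need the decoding from the previous sketch instead of a straight zero check; for WD drives the raw values can be compared to 0 directly.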