JorgeB

Moderators

Everything posted by JorgeB

  1. Fine if flashed to LSI IT mode. Both will have the same performance with SAS2/SATA3 expanders/devices. Correct, and the bracket can usually be found on eBay. Usually it's fine, but to be sure you'd need to try it, or find someone else using it with the same board.
  2. That means the new config wasn't properly done, repeat the procedure.
  3. Correct, all pool data became inaccessible.
  4. Jan 28 10:35:32 TSA-NAS01 kernel: ahci 0000:03:00.1: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000e address=0xb0010000 flags=0x0000]

     Problem with the onboard SATA controller; both cache devices dropped offline because of that:

     Jan 28 10:36:33 TSA-NAS01 kernel: ata1.00: disabled
     Jan 28 10:37:30 TSA-NAS01 kernel: ata2.00: disabled

     This is quite common with some Ryzen boards. Rebooting should bring the pool back, but if it keeps happening it's best to use an add-on controller (or replace the board).
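To check whether a system is hitting this same failure mode, the syslog can be searched for the IOMMU fault and the devices being disabled. A minimal sketch; the helper name and the default log path are my own assumptions, and the patterns are taken from the excerpt above:

```shell
# check_dropped LOGFILE: print syslog lines showing an AMD-Vi IOMMU fault
# or an ATA device being disabled. Defaults to /var/log/syslog.
# (Helper name and default path are assumptions, not an Unraid tool.)
check_dropped() {
    grep -E 'AMD-Vi: Event logged|ata[0-9]+\.[0-9]+: disabled' "${1:-/var/log/syslog}"
}
```

If both cache devices show `disabled` lines shortly after an `IO_PAGE_FAULT` event, it's the same problem.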
  5. Best IMHO would be to move or re-create the docker image on cache.
  6. I would still leave it online for now since it's not yet disabled, then just replace it. Also note that if the disk gets disabled during the copy you should stop, since it will then be copying from the emulated disk, and there could be some data corruption there.
  7. You did, but there were read errors on disk1 during the sync, so parity isn't 100% valid. You can still rebuild, but there will likely be some data corruption, unless you are very lucky and the read errors coincided with empty disk space. Alternatively use ddrescue; that way you can at least know which files are corrupt. Another option is to copy the data from disk1 to another disk; any files that you can't copy need to be restored from backups.
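The "copy what you can, note what fails" approach can be sketched like this. A rough sketch only; the function name is my own, and for low-level recovery from a failing disk ddrescue is the better tool:

```shell
# copy_keep_going SRC DST: copy every file from SRC to DST, continuing
# past read errors instead of aborting, and print each file that failed
# so it can be restored from backup. (Sketch; names are assumptions.)
copy_keep_going() {
    src=$1; dst=$2
    ( cd "$src" && find . -type f ) | while IFS= read -r f; do
        mkdir -p "$dst/$(dirname "$f")"
        cp -- "$src/$f" "$dst/$f" 2>/dev/null || echo "FAILED: $f"
    done
}
```

The `FAILED:` lines are the files to restore from backup afterwards.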
  8. It's not easy to diagnose hardware without starting to swap some things around, like the PSU, board, RAM, etc. One more thing you can try before that is to boot the server in safe mode with all dockers/VMs disabled and let it run as a basic NAS for a few days. If it still crashes, that basically points to a hardware problem; if it doesn't, start turning the other services back on one by one.
  9. Possibly this: https://forums.unraid.net/bug-reports/prereleases/69x-610x-intel-i915-module-causing-system-hangs-with-no-report-in-syslog-r1674/?do=getNewComment&d=2&id=1674
  10. No, that's how it should be, something you did/ran changed that.
  11. /mnt/user permissions are wrong, type: chmod 777 /mnt/user then post the output of ls -ail /mnt
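After running the chmod, the fix can be verified by checking the octal mode directly. A small helper of my own; `stat -c` is the GNU coreutils form available on Unraid/Linux:

```shell
# show_mode PATH: print the octal permission bits and the path, to confirm
# the chmod took effect (expect "777 /mnt/user" after the fix above).
show_mode() {
    stat -c '%a %n' "$1"
}
```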
  12. System share has some files on disk1, and it's completely full; if the docker image is there it won't have enough space for writes.
  13. Like mentioned, you have errors on multiple disks; disk1 isn't the problem, at least not for now:

      Jan 25 16:21:48 Apollo kernel: md: disk1 write error, sector=273781944
      ..
      Jan 25 16:22:35 Apollo kernel: md: disk3 read error, sector=3558475008
      ..
      Jan 25 16:23:02 Apollo kernel: md: disk6 read error, sector=273783216
      ..
      Jan 25 16:23:26 Apollo kernel: md: disk2 read error, sector=273782224
      ..
      Jan 25 16:23:41 Apollo kernel: md: disk4 read error, sector=273786064
      ..
      Jan 25 16:24:27 Apollo kernel: md: disk5 read error, sector=273785856
  14. Check the filesystem on disk1, and make sure you run xfs_repair without -n, or nothing will actually be repaired.
  15. I forgot that ReiserFS is limited to a 16TiB maximum size, so you'll need to go back to the old disk, convert to XFS, then you can upgrade again.
  16. That seek error rate "failing now" SMART attribute is kind of common with those drives, and it usually goes away on its own. The main issue is that the extended SMART test is failing, which might or might not be related. You can run a non-correcting check; if there aren't any errors, the "failing now" attribute goes away after a power cycle or a few days, and the extended SMART test no longer fails, I would probably keep them for now.
  17. ReiserFS can take a long time to mount when there are issues or it's replaying the journal, even hours for large filesystems.
  18. That suggests a plugin issue; safe mode by itself shouldn't make any difference for this.
  19. It will affect any Linux based system, not just Unraid.
  20. The 840 EVO can become slow reading old data, especially if not using the latest firmware; run the diskspeed docker to test read performance on all devices.
  21. Yes, but two ports on the board are from an ASMedia controller; the other ones use the Intel PCH controller. Both emulated disks are mounting, so you can rebuild on top; it should be fairly safe, though if you have spares you can use those instead and keep the old disks in case something goes wrong. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  22. This is the wrong forum for this, it's for running Unraid as a VM, also already replied in your other thread:
  23. Both disks dropped offline, and they are on different controllers, this suggests a power/connection problem, power down, check/replace cables, power back up and post new diags after array start.