Everything posted by JorgeB

  1. Not especially, unless you have recurring issues, and in my experience those are usually caused by bad hardware. Btrfs is currently the only option for pools; you can use xfs for a single-device cache.
  2. Pool wasn't configured correctly before, i.e., only the NVMe device was part of it, and by unassigning it it was wiped:

     Dec 29 11:31:17 Tower emhttpd: shcmd (1202764): /sbin/wipefs -a /dev/nvme1n1p1
     Dec 29 11:31:17 Tower root: /dev/nvme1n1p1: 8 bytes were erased at offset 0x00010040 (btrfs): 5f 42 48 52 66 53 5f 4d

     This usually works for this; type in the console:

     btrfs-select-super -s 1 /dev/nvme1n1p1

     If the command is successful (there's no error), reset the pool:
     - if Docker/VM services are using the cache pool, disable them
     - unassign all cache devices
     - start the array to make Unraid "forget" the current cache config
     - stop the array
     - reassign only the NVMe device (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device)
     - re-enable Docker/VMs if needed and start the array
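
     A minimal console sketch of that recovery step, assuming the pool member is /dev/nvme1n1p1 as in the log above; btrfs filesystem show is just one way to confirm the superblock was restored:

     # restore the primary superblock from its first backup copy
     btrfs-select-super -s 1 /dev/nvme1n1p1
     # confirm the filesystem is detectable again before resetting the pool
     btrfs filesystem show /dev/nvme1n1p1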
  3. First run an extended SMART test to confirm whether it's an actual issue or just a false positive; make sure disk spin-down is disabled.
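
     If it helps, a sketch of doing that from the console with smartctl, where /dev/sdX is a placeholder for the actual disk:

     # start the extended (long) self-test; it runs in the background on the disk
     smartctl -t long /dev/sdX
     # once it finishes, check the result in the self-test log
     smartctl -l selftest /dev/sdX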
  4. Not seeing anything relevant logged; most hardware-related crashes don't leave a trace in the logs.
  5. Should still be able to browse /mnt/disk#, nothing changed.
  6. Might be related to this: https://forums.unraid.net/bug-reports/prereleases/69x-610x-intel-i915-module-causing-system-hangs-with-no-report-in-syslog-r1674/?do=getNewComment&d=2&id=1674
  7. According to the log there are no valid btrfs filesystems on any device. You can try this, it might work depending on how the devices were wiped:

     btrfs-select-super -s 1 /dev/sdX1

     Do this for both devices; if the command doesn't output an error for at least one of them, reboot and post new diags.
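
     As a sketch, assuming the two former pool members are sdX1 and sdY1 (placeholders for the real device names):

     for dev in /dev/sdX1 /dev/sdY1; do
       # -s 1 restores the primary superblock from its first backup copy
       btrfs-select-super -s 1 "$dev" && echo "superblock restored on $dev"
     done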
  8. Enable the syslog server and post that after a crash; best to use v6.9.2 if the rc is spamming the log.
  9. Parity can't help with filesystem corruption; the rebuilt disk would look the same. Filesystem corruption can result from, for example, an unclean shutdown or a hardware issue like bad RAM, or just a bit flip in the wrong place, so it's important to always have backups of anything important.
  10. On the main page click on the first pool member, then:
  11. That's expected: data corruption resulting from running the server with bad RAM. You can run a scrub on the pool; it will list all corrupt files in the syslog, and those files need to be deleted or restored from backups.
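
      A minimal sketch, assuming the pool is mounted at /mnt/cache (adjust for the actual pool name):

      # start a scrub; corrupt files are reported in the syslog as they are found
      btrfs scrub start /mnt/cache
      # check progress and the error count
      btrfs scrub status /mnt/cache
      # list the affected files; the exact message varies with kernel version
      grep -i "checksum error" /var/log/syslog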
  12. SMART Extended Self-test Log Version: 1 (1 sectors)
      Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Short offline     Completed: read failure  50%        29399            2368412

      The SMART test is failing, so yes, the device needs to be replaced.
  13. mpt2sas_cm0: unable to map adapter memory! or resource not found

      Try this:
  14. Check the System/DMI event log in the BIOS, there might be more info there.
  15. Enable the syslog server and post that after a crash.
  16. That's strange; disabling C-States is expected to increase power usage, but it shouldn't degrade performance, so it's likely a BIOS issue. You don't need to disable C-States though, just enable the correct Power Supply Idle Control setting: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  17. Try using a disk share instead to test; user shares will always have some overhead. On some systems it's a small difference, but it can also be huge.
  18. Just add the disk and the shares will then start to write to it depending on the split level and allocation method set.
  19. Dec 25 18:03:17 Gumby kernel: ahci 0000:02:00.1: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x000e address=0xa1bc0000 flags=0x0050]

      Problem with the onboard SATA controller, quite common with some Ryzen boards; look for a BIOS update or avoid using it.
  20. Yes, it's the same passphrase for all disks. It will after a reboot.
  21. You need to run reiserfsck again, this time with the --rebuild-tree option.
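
      A sketch of the command, assuming the affected disk is array disk 1 (so the device is /dev/md1; substitute the real disk number) and the array is started in maintenance mode so the filesystem is unmounted:

      # --rebuild-tree rewrites the whole filesystem tree; let it run to completion
      reiserfsck --rebuild-tree /dev/md1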