JorgeB (Moderators) · Posts: 67,556 · Days Won: 707

Everything posted by JorgeB

  1. There are multiple things spamming the log, but this is the main one:

     Nov 24 16:15:38 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
     Nov 24 16:15:38 Tower kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
     Nov 24 16:15:40 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
     Nov 24 16:15:40 Tower kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs

     It's related to an NVIDIA GPU, possibly the GPU Statistics plugin.
  2. Constant errors on one of the cache devices:

     Dec 6 18:57:07 Jordan kernel: sd 1:0:0:0: [sdb] tag#12 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Dec 6 18:57:07 Jordan kernel: sd 1:0:0:0: [sdb] tag#12 CDB: opcode=0x28 28 00 00 00 00 40 00 00 01 00
     Dec 6 18:57:07 Jordan kernel: print_req_error: I/O error, dev sdb, sector 64
     Dec 6 18:57:07 Jordan kernel: sd 1:0:0:0: [sdb] tag#27 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Dec 6 18:57:07 Jordan kernel: sd 1:0:0:0: [sdb] tag#27 CDB: opcode=0x28 28 00 00 00 00 40 00 00 20 00
     Dec 6 18:57:07 Jordan kernel: print_req_error: I/O error, dev sdb, sector 64
     Dec 6 18:57:07 Jordan kernel: sd 1:0:0:0: [sdb] tag#28 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Dec 6 18:57:07 Jordan kernel: sd 1:0:0:0: [sdb] tag#28 CDB: opcode=0x28 28 00 00 00 00 40 00 00 08 00
     Dec 6 18:57:07 Jordan kernel: print_req_error: I/O error, dev sdb, sector 64

     You can try these options to recover the data, then also see here for better pool monitoring.
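A quick way to gauge how widespread errors like these are is to tally the I/O errors per device from the log. This is a minimal sketch: the sample lines are copied from the post, and on a live Unraid box you would feed the system log (assumed here to be /var/log/syslog) through the same pipeline instead.

```shell
# Count kernel I/O errors per block device. The $log sample reproduces the
# post's excerpt; on a real server pipe /var/log/syslog in instead.
log='Dec 6 18:57:07 Jordan kernel: print_req_error: I/O error, dev sdb, sector 64
Dec 6 18:57:07 Jordan kernel: print_req_error: I/O error, dev sdb, sector 64
Dec 6 18:57:07 Jordan kernel: print_req_error: I/O error, dev sdb, sector 64'
errors=$(printf '%s\n' "$log" |
  awk '/I\/O error/ {d=$10; sub(/,$/, "", d); n[d]++} END {for (d in n) print d, n[d]}' |
  sort)
echo "$errors"
```

A steadily growing count for one device, as here, points at that device (or its cable/controller) rather than the filesystem.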
  3. The screenshot shows flash drive problems. Back up the config folder, re-create the flash drive using the USB tool, and restore the config folder; if there are still issues after that, try a different flash drive.
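The backup-and-restore step above amounts to copying one folder off the stick and back. A minimal sketch, assuming the stick is mounted at /boot as on a standard Unraid install; the demo below runs against temporary directories so it is safe to execute anywhere, with the real paths shown in comments.

```shell
# Back up the flash drive's config folder before re-creating the stick,
# then restore it afterwards. $src stands in for /boot/config and the
# share file below is a made-up example, not a real Unraid file.
src=$(mktemp -d)                       # stands in for /boot/config
mkdir -p "$src/shares"
echo 'shareUseCache="yes"' > "$src/shares/media.cfg"
backup=$(mktemp -d)/config-backup
cp -a "$src" "$backup"                 # on Unraid: cp -a /boot/config /some/backup/location
# After re-flashing the stick, copy the folder back over the new /boot/config.
```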
  4. They point to a hardware problem, but don't identify the component.
  5. That won't re-sync parity; it will rebuild the disabled disk on top, which is probably not what you want in this case. This is what you need to do to see if the actual disk mounts with UD and the contents look correct; post back after that.
  6. It passed the SMART test, so it's fine for now, but you want to keep an eye on these:

     ID# ATTRIBUTE_NAME        FLAGS  VALUE WORST THRESH FAIL RAW_VALUE
       1 Raw_Read_Error_Rate   POSR-K 200   200   051    -    42
     200 Multi_Zone_Error_Rate ---R-- 200   200   000    -    16

     If they continue to increase, the disk will likely have more errors soon.
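Keeping an eye on those two attributes can be scripted. A minimal sketch: the sample text mirrors the table from the post, and on a live system the input would come from `smartctl -A /dev/sdX` for the disk in question.

```shell
# Flag the two attributes worth watching when their raw values are non-zero.
# $smart reproduces the post's table; replace it with real smartctl -A output.
smart='  1 Raw_Read_Error_Rate     POSR-K   200   200   051    -    42
200 Multi_Zone_Error_Rate   ---R--   200   200   000    -    16'
watch=$(printf '%s\n' "$smart" |
  awk '($2 == "Raw_Read_Error_Rate" || $2 == "Multi_Zone_Error_Rate") && $NF > 0 {print $2 "=" $NF}')
echo "$watch"
```

Running this periodically and comparing the raw values over time shows whether the counts are stable or climbing.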
  7. Yes, same as ZFS: you get an I/O error during transfer/read, and if you still want the corrupt file you can use btrfs restore. It is limited in what it can fix and needs to be used with care, though it's getting better all the time. It can only fix errors when using a redundant profile; on the array devices it can only detect errors, and you need to replace the file from a backup, again the same as ZFS. Yes, the write hole only affects raid5/6, and it's not much of an issue if you use raid1/c3 for metadata on any raid5/6 pool, as recommended.
  8. Settings for eth0:
     ...
     Link detected: no
     driver: r8169
     --------------------------------
     Settings for eth1:
     ...
     Link detected: yes
     driver: alx
  9. It's not a bug, just a consequence of the new alignment: since the partition is now smaller, it can be rebuilt using a device of the same size.
  10. Once a disk gets disabled it needs to be rebuilt; make sure the emulated disk is mounting and showing the correct data, then rebuild on top.
  11. No link on eth0, but there is on eth1, and Unraid always uses eth0 for management (you can choose which NIC is eth0 in the network settings; boot in GUI mode if needed to change it).
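Diagnoses like this one come from matching each interface's "Link detected" line to its name. A minimal sketch of that check: the sample text condenses the ethtool excerpt from the earlier post, and on a live system the input would come from running `ethtool eth0; ethtool eth1`.

```shell
# Work out which NIC actually has link from ethtool-style output.
# $out is a condensed sample; replace it with real ethtool output.
out='Settings for eth0:
  Link detected: no
Settings for eth1:
  Link detected: yes'
up=$(printf '%s\n' "$out" |
  awk '/^Settings for/ {iface=$3; sub(/:$/, "", iface)} /Link detected: yes/ {print iface}')
echo "$up"
```

If the interface with link is not eth0, either move the cable or reassign the interface names so management traffic lands on the live port.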
  12. Since the disk looks healthy and the errors don't look disk related, instead of rebuilding you can do a new config and re-sync parity, but make sure the actual disk is mounting correctly first; you can do that with UD (the array must be stopped).
  13. Both failed the SMART test so they should be replaced.
  14. Missed that. Macvlan call traces are usually caused by Docker containers with a custom IP address; more info below:
  15. You just need to assign it as a single cache device.
  16. You're using a Marvell controller with a SATA port multiplier; each on its own should be avoided, and they're much worse together.
  17. Besides disk15 there are also these:

      Dec 4 10:12:54 Tower kernel: md: disk9 read error, sector=9000
      Dec 4 10:12:54 Tower kernel: md: disk13 read error, sector=9000
      Dec 4 10:12:54 Tower kernel: md: disk9 read error, sector=9008
      Dec 4 10:12:54 Tower kernel: md: disk13 read error, sector=9008
      Dec 4 10:12:54 Tower kernel: md: disk9 read error, sector=9016
      Dec 4 10:12:54 Tower kernel: md: disk13 read error, sector=9016

      So there's some hardware issue there. As for disk15, reboot and then check the filesystem; if the emulated disk mounts you can rebuild on top, but ideally after fixing whatever is causing those issues, which could be power, cables, etc.
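Tallying those md read errors per array disk makes the pattern easier to see. A minimal sketch: the sample lines are copied from the post, and on a live server the source would be the system log (assumed /var/log/syslog). Several disks erroring at the same sectors at the same time usually points at a shared component such as a controller, cable, or power rail rather than the disks themselves.

```shell
# Tally md read errors per array disk from a syslog excerpt.
# $log reproduces the post's lines; pipe /var/log/syslog in on a real box.
log='Dec 4 10:12:54 Tower kernel: md: disk9 read error, sector=9000
Dec 4 10:12:54 Tower kernel: md: disk13 read error, sector=9000
Dec 4 10:12:54 Tower kernel: md: disk9 read error, sector=9008
Dec 4 10:12:54 Tower kernel: md: disk13 read error, sector=9008
Dec 4 10:12:54 Tower kernel: md: disk9 read error, sector=9016
Dec 4 10:12:54 Tower kernel: md: disk13 read error, sector=9016'
counts=$(printf '%s\n' "$log" |
  awk '/read error/ {n[$7]++} END {for (d in n) print d, n[d]}' |
  sort)
echo "$counts"
```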
  18. Swap both cables with another disk to rule them out and re-sync parity.
  19. Try adding a little more RAM; 1GB is not enough for v6.
  20. Does it boot in UEFI mode, i.e., is the flash drive bootable?