JorgeB (Moderators)

Posts: 67,797 · Days Won: 708
Everything posted by JorgeB

  1. Please post the diagnostics after a mount attempt.
  2. Yep, sorry, since you didn't mention running a test I didn't check; if a SMART test fails, the disk needs to be replaced.
  3. Some good permission info below: https://forums.unraid.net/topic/123901-plex-issues-upon-upgrade-to-6101/?do=findComment&comment=1138715
  4. It should either be mounting or not; there's no in-between.
  5. It does look like a power/connection problem, but once a disk gets disabled it needs to be rebuilt; since the emulated disk is mounting you can rebuild on top. I would still recommend replacing/swapping cables to rule that out if it happens again to the same disk. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  6. Check filesystem on disk3.
  7. May 14 00:41:06 Tower kernel: macvlan_broadcast+0x116/0x144 [macvlan]
     May 14 00:41:06 Tower kernel: macvlan_process_broadcast+0xc7/0x110 [macvlan]
     Macvlan call traces are usually the result of having dockers with a custom IP address; switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan (advanced view must be enabled, top right)), or see below for more info. https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/ P.S. Unrelated, but you're also running out of RAM.
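A quick way to confirm that pattern is to search a saved copy of the log for macvlan trace lines. This is a minimal sketch: `syslog.txt` is a hypothetical sample file created just for illustration; on a live server you would grep `/var/log/syslog` (or the copy from the syslog server) instead.

```shell
# Hypothetical sample log standing in for /var/log/syslog.
cat > syslog.txt <<'EOF'
May 14 00:41:06 Tower kernel: macvlan_broadcast+0x116/0x144 [macvlan]
May 14 00:41:06 Tower kernel: macvlan_process_broadcast+0xc7/0x110 [macvlan]
EOF

# Count macvlan call-trace lines; a non-zero count means the traces are present.
grep -c 'macvlan' syslog.txt    # prints 2 for the sample above
```

If the count is non-zero and containers have custom IPs, the ipvlan switch described above is the usual fix.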
  8. Enable the syslog server and post that after a crash.
  9. Usually they are a cause for concern; run an extended SMART test.
  10. Standard recovery is 60€, and yes, it's probably easier to take the disk out and use a Windows desktop.
  11. Were you using the Dynamix file manager?
  12. Yes, but I believe you can use the trial to see what data would be recoverable, including if it can recover the folder structure.
  13. Syslog starts over after every reboot, so there won't be anything about the v6.10 issues.
  14. That's not unexpected, there will be some damage; you can inspect those files to see if there's anything useful, the other alternative is scanning the disk with a file recovery utility like UFS Explorer.
  15. That's a good sign, it found a backup superblock, now mount the disk with the UD plugin (or add a new pool and assign it there) and check contents.
  16. To try and find the issue you'd need to post diags from v6.10.3, diags from v6.9.2 won't help.
  17. As expected no fs was detected, but since the current cache is XFS let's assume the old one was the same, so try this: xfs_repair -v /dev/sdb1 This will take some time while it looks for a backup superblock.
  18. Try disabling and re-enabling the network bridge, there was a similar report recently.
  19. Extremely unlikely that is the problem.
  20. Because the beginning of the disk was cleared, and that's where the filesystem superblock is, there's a very small chance of recovering the filesystem, but first post the output of blkid just to confirm no fs is detected.
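For illustration, this is what "no fs detected" looks like from blkid. A hedged sketch: `blank.img` is a hypothetical zeroed image file standing in for the cleared disk; on the actual server you would point blkid at the real device node instead.

```shell
# Create a zeroed 10 MiB image to stand in for a wiped disk (hypothetical file).
truncate -s 10M blank.img

# blkid prints nothing and exits non-zero when it finds no filesystem signature.
blkid blank.img || echo "no filesystem detected"
```

A healthy device would instead print a TYPE= line (e.g. TYPE="xfs"), which is the confirmation being asked for here.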
  21. Doesn't look like a disk problem; swap cables/slot with another disk, for example parity2, and see if the issue follows the disk. P.S. Unless you're troubleshooting a mover issue, disable mover logging to avoid spamming the log; because of that spam I cannot see the LSI firmware installed, check that both are on the latest release, 20.00.07.00.