Everything posted by JorgeB

  1. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=545988
  2. Edit config/ident.cfg on the flash drive and change USE_SSL from "auto" to "no", then reboot.
  3. That's a completely different issue; in case it's SSL related try this: edit config/ident.cfg on the flash drive and change USE_SSL from "auto" to "no", then reboot and try accessing by name or IP again.
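The edit above can also be done from a console; a minimal sketch, assuming the flash drive is mounted at /boot (the Unraid default) and ident.cfg contains the line USE_SSL="auto":

```shell
# Flip USE_SSL from "auto" to "no" in ident.cfg, keeping a backup.
CFG=/boot/config/ident.cfg
cp "$CFG" "$CFG.bak"                            # backup first
sed -i 's/^USE_SSL="auto"/USE_SSL="no"/' "$CFG"
grep '^USE_SSL=' "$CFG"                         # verify before rebooting
```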
  4. You might still post diags when it doesn't find any disks just in case there's some issue initializing the HBA. P.S. disk10 is failing in case you didn't notice yet.
  5. Unfortunately this is not easy to diagnose without using a different HBA or a different backplane; the cable config is OK and I doubt it's cable related, but you can try with just one cable connected at a time to rule that out. Bandwidth will be reduced, but it's fine for testing.
  6. Still seems Nvidia related, so my original advice remains.
  7. LSISAS2308: FWVersion(20.00.00.00)
     Known issue with that firmware, update to 20.00.07.00.
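To check which firmware you're on, a small sketch that pulls the version out of a saved syslog; the log line format assumed is the mpt2sas one quoted above (the actual flashing is done with LSI/Broadcom's sas2flash tool, with the firmware image matching your specific card):

```shell
# Extract the first FWVersion(x.x.x.x) found in a log file.
fw_version() {
  sed -n 's/.*FWVersion(\([0-9.]*\)).*/\1/p' "$1" | head -n1
}
# Usage: fw_version /var/log/syslog
```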
  8. You need to re-enable parity: https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  9. Disk appears to be failing, you can run an extended SMART test to confirm.
  10. These can be intermittent; since the test passed the disk is OK for now, but you should keep monitoring, especially these attributes:
        1 Raw_Read_Error_Rate   POSR-K  200 200 051 - 0
      200 Multi_Zone_Error_Rate ---R--  200 200 000 - 126
      If they climb you'll likely get more read errors.
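A small sketch for keeping an eye on those two attributes; /dev/sdX is a placeholder for your disk, and the column layout assumed is standard `smartctl -A` output (attribute name in column 2, raw value in the last column):

```shell
# Print the current raw values of the two attributes worth watching.
# $1 = a file containing `smartctl -A /dev/sdX` output.
watch_attrs() {
  awk '$2 == "Raw_Read_Error_Rate" || $2 == "Multi_Zone_Error_Rate" { print $2, $NF }' "$1"
}
# Usage: smartctl -A /dev/sdX > /tmp/smart.txt && watch_attrs /tmp/smart.txt
```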
  11. IIRC someone posted that it's just a visual issue; RAM will still be allocated as needed, assuming the memory ballooning driver is installed. I don't really see much point in thin provisioning RAM though; if you set both values to the RAM you want you won't see that.
  12. This is normal if "Initial Memory" is less than "Max Memory" for that VM.
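In libvirt terms (Unraid's VM manager uses libvirt under the hood; the element names below are standard libvirt domain XML, the values are just examples), "Max Memory" corresponds to `<memory>` and "Initial Memory" to `<currentMemory>`, so setting both to the same size looks like:

```xml
<!-- Equal values mean no ballooning headroom, so the full amount shows. -->
<memory unit='GiB'>8</memory>
<currentMemory unit='GiB'>8</currentMemory>
```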
  13. For now it's only enabled for pools; you can have a single device "pool" as long as pool slots are set to >1, though I agree it should be enabled for everything.
  14. This is a known issue: it occurs if you have just one btrfs-formatted array device, which results in an invalid btrfs filesystem on parity that confuses the pools:
      May 1 10:50:32 Vili kernel: md: import disk0: (sdq) WDC_WD140EDGZ-11B1PA0_Y6GWSR1C size: 13672382412 ...
      May 1 10:51:00 Vili emhttpd: /mnt/ssd_pool ERROR: cannot scan /dev/sdq1: Input/output error
      I already brought this to LT's attention and hopefully something will be done about it soon; the solution for now is to either convert disk1 to xfs like the remaining array devices, or add/convert more array devices to btrfs.
  15. Seems related to the RAID controllers; make sure JBOD mode is enabled and the disks are being passed through. Unraid isn't detecting those devices, as if they didn't exist.
  16. Still looks like a power/connection problem, more on the power side.
  17. That's your current hardware, some devices might show older names because they didn't change.
  18. Check this; also, next time please post the complete diagnostics instead.
  19. Strange that the sync errors were detected in a similar zone and they are all sequential; this makes me not suspect RAM:
      1st check:
      Apr 20 03:28:08 Tower kernel: md: recovery thread: P corrected, sector=4015656792
      Apr 20 03:28:08 Tower kernel: md: recovery thread: P corrected, sector=4015656800
      Apr 20 03:28:08 Tower kernel: md: recovery thread: P corrected, sector=4015656808
      Apr 20 03:28:08 Tower kernel: md: recovery thread: P corrected, sector=4015656816
      Apr 20 03:28:08 Tower kernel: md: recovery thread: P corrected, sector=4015656824
      etc
      2nd check:
      Apr 25 06:07:49 Tower kernel: md: recovery thread: P incorrect, sector=4015657608
      Apr 25 06:07:49 Tower kernel: md: recovery thread: P incorrect, sector=4015657616
      Apr 25 06:07:49 Tower kernel: md: recovery thread: P incorrect, sector=4015657624
      Apr 25 06:07:49 Tower kernel: md: recovery thread: P incorrect, sector=4015657632
      Apr 25 06:07:49 Tower kernel: md: recovery thread: P incorrect, sector=4015657640
      etc
      Since you rebooted, run a correcting check then a non-correcting one and post new diags; if it's a disk issue it's much more difficult to identify the culprit.
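A quick sketch for checking the pattern described above from a saved syslog: it extracts the sector numbers from the recovery-thread lines and reports whether the corrections are sequential (8 sectors apart, as in the logs quoted), which points away from RAM and toward a specific zone of a disk. The log line format is the md driver's as shown above.

```shell
# Report whether parity-check sector corrections in a log are sequential.
# $1 = syslog file containing "sector=NNNN" lines.
seq_sectors() {
  sed -n 's/.*sector=\([0-9]*\).*/\1/p' "$1" |
    awk 'NR > 1 && $1 - prev != 8 { bad = 1 } { prev = $1 }
         END { print (bad ? "non-sequential" : "sequential") }'
}
```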
  20. It's logged as a disk issue, and SMART shows some issues, wait for the extended test result.
  21. Apr 26 07:42:45 Tower kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
      Apr 26 07:42:45 Tower kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
      Macvlan call traces are usually the result of having dockers with a custom IP address; upgrading to v6.10 and switching to ipvlan might fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right), or see below for more info.
      https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/
      See also here:
      https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
      There's also this:
      Apr 26 06:31:12 Tower kernel: XFS (sdg1): Metadata corruption detected at xfs_dinode_verify+0xa3/0x581 [xfs], inode 0x3f0d11 dinode
      Apr 26 06:31:12 Tower kernel: XFS (sdg1): Unmount and run xfs_repair
      Check filesystem on cache_bak.
  22. Make sure you're using the recommended settings from here, and if there are still issues enable the syslog server and post that after a crash.
  23. No issues in the log that I can see, but diags are just after rebooting, did you already get the error before the diags were saved?
  24. To use the new disk you just need to do a new config. Try booting in safe mode, and if you still have issues accessing the registration/diagnostics pages I would suggest starting with a clean install and re-assigning all the disks; if that works you can then copy the rest of the config from the old flash in parts, or just reconfigure the server.