
JorgeB

Moderators
  • Posts: 67,737
  • Days Won: 708

Everything posted by JorgeB

  1. It's the same partition. If CSM/Legacy boot is enabled you should get two boot options:
     UEFI: Name_of_your_flash_drive
     Name_of_your_flash_drive*
     *Sometimes USB: Name_of_your_flash_drive
  2. That can be skipped; the problem is filesystem corruption, so check the filesystem on disk2.
  3. If it's a power problem it's most likely a connection/splitter issue; it's unlikely to be insufficient power, unless the PSU is not good.
  4. It's disk4; the constant spin-ups suggest a power problem.
  5. Looks like a power/connection problem, try replacing both cables.
  6. This is probably the problem; you can likely get around it if you can run the script on a disk share instead of a user share, or on /mnt/user0 instead of /mnt/user if the files are on the array (see the sketch after this list).
  7. Correct, I missed the du output above; it's not reliable with btrfs, even with a single-device filesystem. The GUI and df will show the correct used/free space for that pool.
  8. What is the problem exactly? The diags show 1.6TiB used, so the GUI should show about 1.76TB (see the conversion note after this list).
  9. Jan 12 09:12:04 Tower shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed.
     https://forums.unraid.net/bug-reports/stable-releases/683-shfs-error-results-in-lost-mntuser-r939/
     Some workarounds are discussed there: mostly, disable NFS if not needed, or you can change everything to SMB. It can also be caused by Tdarr if you use that.
  10. No, that won't cause checksum errors. You can run memtest from Unraid's boot menu (Legacy/CSM boot only).
  11. 197 Current_Pending_Sector   -O--C-   100   100   000   -   64
      198 Offline_Uncorrectable    ----C-   100   100   000   -   64
      Best to replace it.
  12. To boot legacy/CSM you don't need to do anything to the flash drive, just select the appropriate boot option in the board BIOS, if legacy/CSM boot is still supported/enabled.
  13. Could be power or SATA related, which is why I mentioned cables, plural.
  14. Also note that this is usually the result of bad RAM, so it's a good idea to run memtest.
  15. Just re-create it then: https://forums.unraid.net/topic/57181-docker-faq/?do=findComment&comment=564309
  16. Without the diags I can't say for sure, but loop2 is usually the docker image; if so, just delete and re-create it.
  17. There are various reasons. One of them, as mentioned above, is that I like to re-utilize old disks after upgrading them; I find this more cost effective than just upgrading the existing server to even larger disks. Also, although I work in IT, it's kind of a hobby for me, and finally it's my small way of supporting LT. As for what they are used for: there are two main servers that are always on, with the VMs and the data I need day to day, and they also serve as partial backups of each other. Then I have mostly cold archive servers, 5 in total, and for each of those there's another one that acts as a full backup, plus another server for backup of the array of one of the main servers, which doesn't need daily backups. Currently there are also a couple of Chia farms and a couple of test servers, which for now share the same key since I only use one at a time.
  18. In the SMART report:
      Extended self-test routine recommended polling time: (1324) minutes.
      That's roughly 22 hours, and it will take longer if the disk is being used during the test.
  19. Those won't work with that controller, use regular SATA cables.
  20. Jan 11 21:31:22 Unraid2 kernel: general protection fault, probably for non-canonical address 0xc0bf7e66aa0b45bf: 0000 [#1] SMP NOPTI
      Jan 11 21:31:22 Unraid2 kernel: CPU: 14 PID: 25668 Comm: unraidd0 Tainted: P O 5.10.28-Unraid #1
      Jan 11 21:31:22 Unraid2 kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./B550 Steel Legend, BIOS P1.00 05/21/2020
      Jan 11 21:31:22 Unraid2 kernel: RIP: 0010:bio_associate_blkg_from_css+0x121/0x15d
      Jan 11 21:31:22 Unraid2 kernel: Code: fb 3d 00 4d 85 ed 74 13 49 8d 7d 38 e8 86 fa ff ff 84 c0 75 06 4d 8b 6d 30 eb e8 e8 0e ef ff ff 4c 89 6d 48 eb 31 48 8b 45 08 <48> 8b 80 a8 03 00 00 48 8b b8 f0 03 00 00 48 83 c7 38 e8 99 fa ff
      Jan 11 21:31:22 Unraid2 kernel: RSP: 0018:ffffc900008fbd68 EFLAGS: 00010246
      Jan 11 21:31:22 Unraid2 kernel: RAX: c0bf7e66aa0b45bf RBX: ffffffff825161a0 RCX: ffff88814f7d9250
      Jan 11 21:31:22 Unraid2 kernel: RDX: ffff88814f7d9250 RSI: ffffffff825161a0 RDI: 0000000000000000
      Jan 11 21:31:22 Unraid2 kernel: RBP: ffff88814f7d9270 R08: ffff88814f7d9270 R09: ffffffff820bfe80
      Jan 11 21:31:22 Unraid2 kernel: R10: ffff88814f7d93b0 R11: 00000000fffffffc R12: ffff88814f7d9068
      Jan 11 21:31:22 Unraid2 kernel: R13: 0000000000000000 R14: ffff88814f7d9270 R15: ffff88810446d158
      Jan 11 21:31:22 Unraid2 kernel: FS: 0000000000000000(0000) GS:ffff88880eb80000(0000) knlGS:0000000000000000
      Jan 11 21:31:22 Unraid2 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Jan 11 21:31:22 Unraid2 kernel: CR2: 00001455bc982718 CR3: 00000001043b6000 CR4: 0000000000350ee0
      Jan 11 21:31:22 Unraid2 kernel: Call Trace:
      Jan 11 21:31:22 Unraid2 kernel: bio_associate_blkg+0x45/0x4b
      Jan 11 21:31:22 Unraid2 kernel: unraidd+0x119d/0x12b7 [md_mod]
      Jan 11 21:31:22 Unraid2 kernel: ? md_thread+0xee/0x115 [md_mod]
      Jan 11 21:31:22 Unraid2 kernel: ? __kthread_should_park+0x5/0x10
      Jan 11 21:31:22 Unraid2 kernel: md_thread+0xee/0x115 [md_mod]
      Jan 11 21:31:22 Unraid2 kernel: ? init_wait_entry+0x24/0x24
      Jan 11 21:31:22 Unraid2 kernel: ? md_seq_show+0x69e/0x69e [md_mod]
      Jan 11 21:31:22 Unraid2 kernel: kthread+0xe5/0xea
      Jan 11 21:31:22 Unraid2 kernel: ? __kthread_bind_mask+0x57/0x57
      Jan 11 21:31:22 Unraid2 kernel: ret_from_fork+0x22/0x30
      Jan 11 21:31:22 Unraid2 kernel: Modules linked in: md_mod nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops nvidia(PO) drm backlight agpgart ip6table_filter ip6_tables iptable_filter ip_tables x_tables bonding edac_mce_amd kvm_amd kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel r8125(O) aesni_intel crypto_simd cryptd input_leds wmi_bmof r8169 glue_helper i2c_piix4 i2c_core led_class k10temp ccp ahci realtek rapl libahci acpi_cpufreq button wmi [last unloaded: md_mod]
      Jan 11 21:31:22 Unraid2 kernel: ---[ end trace 3af3b5048a4767f7 ]---
      The Unraid driver is still crashing; I can't really help with this, and there's not much to do other than using different hardware or waiting for a newer release to see if it helps.
  21. You should also post new diags as asked, since the cache pool is currently not redundant.
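
A note on item 6: below is a minimal sketch of the disk-share vs user-share idea, assuming the standard Unraid mount points (/mnt/user for the merged view; /mnt/disk1, /mnt/disk2, ... for the individual array disks). The helper name and example path are hypothetical, not from the original posts.

    # Hypothetical helper: find the /mnt/diskN path(s) backing a /mnt/user path,
    # so a script can be pointed at a disk share instead of the user share.
    import glob
    import os

    def disk_paths(user_path, root="/mnt"):
        """Return the disk-share paths that actually hold a user-share file."""
        rel = os.path.relpath(user_path, os.path.join(root, "user"))
        hits = []
        for disk in sorted(glob.glob(os.path.join(root, "disk[0-9]*"))):
            candidate = os.path.join(disk, rel)
            if os.path.exists(candidate):
                hits.append(candidate)
        return hits

    if __name__ == "__main__":
        # Example path is made up; on a real server this would print
        # something like ['/mnt/disk2/Media/movie.mkv'].
        print(disk_paths("/mnt/user/Media/movie.mkv"))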
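A note on item 8: the 1.6TiB vs 1.76TB difference is just binary vs decimal units; a quick check in Python:

    # TiB is binary (1024^4 bytes), TB is decimal (1000^4 bytes),
    # so the same byte count reads about 10% larger in TB.
    used_tib = 1.6
    used_bytes = used_tib * 1024**4
    print(f"{used_tib} TiB = {used_bytes / 1000**4:.2f} TB")  # 1.6 TiB = 1.76 TB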