Everything posted by JorgeB

  1. You need to remove any reference to that device by editing the VM XML; this is normal if the hardware changed.
  2. Disk looks OK; it could be a preclear issue. Just add it to the array and let Unraid clear it. You can also run an extended SMART test first if you want, as in the sketch below.
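     A minimal console sketch, assuming the disk is /dev/sdX (check the actual letter on the Main page; an extended test can take many hours on a large disk):

         smartctl -t long /dev/sdX
         smartctl -a /dev/sdX

     The first command starts the extended self-test; the second shows the test progress and, once finished, the result in the self-test log.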
  3. The fact that it worked once makes me think it could still be a power/cable issue, but I understand you already tested with a different power and SATA cable? Do you have another controller you could test with? A cheap 2 port ASMedia/JMB controller would do.
  4. Post diags, but if the CPU overheats when maxing out a core there are cooling issues; clean the cooler and check the thermal paste.
  5. This is very strange, there's clearly an XFS filesystem on disk2:

     Sep 29 20:00:48 FILE-SERVER kernel: XFS (md2): Mounting V5 Filesystem
     Sep 29 20:00:48 FILE-SERVER root: mount: /mnt/disk2: mount(2) system call failed: Structure needs cleaning.
     Sep 29 20:00:48 FILE-SERVER root: dmesg(1) may have more information after failed mount system call.
     Sep 29 20:00:48 FILE-SERVER kernel: XFS (md2): Log inconsistent (didn't find previous header)
     Sep 29 20:00:48 FILE-SERVER kernel: XFS (md2): failed to find log head
     Sep 29 20:00:48 FILE-SERVER kernel: XFS (md2): log mount/recovery failed: error -117
     Sep 29 20:00:48 FILE-SERVER kernel: XFS (md2): log mount failed

     But xfs_repair doesn't find a main or backup superblock, so something weird is happening there. One last shot in the dark, to rule out any md driver issue: with the array stopped, type in the console:

     xfs_repair -v /dev/sdc1
  6. IMHO your best bet is to use ddrescue on disk3. Use a disk of the same size for the new disk, and if the recovery is fairly successful (like 90%+ recovered) we can then make that the existing disk3 to rebuild disk4. Of course there would likely still be some data loss, mostly depending on the ddrescue results; see the sketch below.
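     A minimal sketch, assuming the failing disk is /dev/sdX, the new same-size disk is /dev/sdY, and the map file is saved on a healthy disk (both devices must be unassigned/unmounted, and getting source and destination right is critical since the destination is overwritten):

         ddrescue -f /dev/sdX /dev/sdY /mnt/disk1/ddrescue.map
         ddrescue -f -r3 /dev/sdX /dev/sdY /mnt/disk1/ddrescue.map

     The first pass copies everything it can; the second retries the bad areas up to 3 times, and the map file lets ddrescue resume and report how much was recovered.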
  7. Log is full of spam. Please reboot, try doing a backup immediately after the restart, and if it still fails post new diags.
  8. You are virtualizing Unraid, so the problem is likely that the devices are not being passed through or detected correctly with the new kernel.
  9. Sep 3 17:37:21 NASA kernel: macvlan_broadcast+0x116/0x144 [macvlan]
     Sep 3 17:37:21 NASA kernel: macvlan_process_broadcast+0xc7/0x110 [macvlan]

     Macvlan call traces are usually the result of having Docker containers with a custom IP address, and they will end up crashing the server. Switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right).
  10. Those errors come from a 120GB Kingston SSD that appears to be unassigned; is it being used? If not, unplug it. In any case, reboot and post new diags after array start.
  11. A wrong date and time will cause that; make sure the NTP service is enabled, see below.
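     A quick way to verify from the console (the NTP setting itself is under Settings -> Date & Time):

         date

     If the reported time is off, enable NTP there and make sure at least one reachable NTP server is configured.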
  12. If only the folders exist and there are no files, you can format, but double-check first.
  13. It starts with the log snippet posted above; device 05:00.1 is eth1.
  14. If you are passing through the GPU you should not install the driver. Or do you have more than one Nvidia GPU?
  15. If I understand correctly, the 4th attempt was the one that went well? The 4th and 5th attempts have the disk on a different port; it was ATA10 on the 4th and ATA2 on the 5th. See if you can connect it to ATA9/10, which should be ports 5 and 6.
  16. Start the array in maintenance mode and type in the console:

      xfs_repair -v /dev/md2

      Then post the full output from that.
  17. Yeah, forgot to mention, you'd need to use the disk share path, e.g. /mnt/cache/file. I believe there are plans for that, but it won't be for 6.11, maybe 6.12 or 6.13.
  18. Does the looping start right after the menu, or do you see some text? If there's some text, see if you can catch where it loops with a photo or video.
  19. Diags after array start in normal mode, please.
  20. Sep 28 19:06:49 Executor-Server emhttpd: shcmd (740): /sbin/wipefs -a /dev/nvme0n1
      Sep 28 19:06:49 Executor-Server root: wipefs: error: /dev/nvme0n1: probing initialization failed: Device or resource busy
      Sep 28 19:06:49 Executor-Server emhttpd: shcmd (740): exit status: 1
      Sep 28 19:06:49 Executor-Server emhttpd: writing MBR on disk (nvme0n1) with partition 1 offset 2048, erased: 0
      Sep 28 19:06:49 Executor-Server emhttpd: re-reading (nvme0n1) partition table
      Sep 28 19:06:50 Executor-Server emhttpd: error: mkmbr, 2196: Device or resource busy (16): ioctl BLKRRPART: /dev/nvme0n1
      Sep 28 19:06:50 Executor-Server emhttpd: shcmd (741): udevadm settle
      Sep 28 19:06:50 Executor-Server emhttpd: shcmd (742): /sbin/wipefs -a /dev/nvme0n1p1
      Sep 28 19:06:50 Executor-Server emhttpd: shcmd (743): mkfs.btrfs -f /dev/nvme0n1p1
      Sep 28 19:06:50 Executor-Server root: ERROR: '/dev/nvme0n1p1' is too small to make a usable filesystem

      It's saying the device is in use, so it's not wiping it completely, and it then uses the existing partition 1, which is too small. Try rebooting and running blkdiscard right after boot (see the sketch below); if it still doesn't format, post new diags.
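      A minimal sketch, assuming the device is still /dev/nvme0n1 after the reboot (double-check on the Main page first, since blkdiscard destroys everything on the device):

          blkdiscard /dev/nvme0n1

      Once it finishes, the device should be fully blank, so the next format attempt can create the partition from scratch.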
  21. The Docker and VMs folders are on the cache pool; only appdata is on the NVMe pool.