JorgeB

Everything posted by JorgeB

  1. Start the array in maintenance mode and type in the console: xfs_repair -v /dev/md2. Then post the full output from that.
  2. Yeah, forgot to mention, you'd need to use the disk share path, e.g. /mnt/cache/file. I believe there are plans for that, but it won't be for 6.11, maybe 6.12 or 6.13.
  3. Does the looping start right after the menu, or do you see some text? If there's some text, see if you can catch where it loops with a photo or video.
  4. Diags after array start in normal mode please.
  5. Sep 28 19:06:49 Executor-Server emhttpd: shcmd (740): /sbin/wipefs -a /dev/nvme0n1
     Sep 28 19:06:49 Executor-Server root: wipefs: error: /dev/nvme0n1: probing initialization failed: Device or resource busy
     Sep 28 19:06:49 Executor-Server emhttpd: shcmd (740): exit status: 1
     Sep 28 19:06:49 Executor-Server emhttpd: writing MBR on disk (nvme0n1) with partition 1 offset 2048, erased: 0
     Sep 28 19:06:49 Executor-Server emhttpd: re-reading (nvme0n1) partition table
     Sep 28 19:06:50 Executor-Server emhttpd: error: mkmbr, 2196: Device or resource busy (16): ioctl BLKRRPART: /dev/nvme0n1
     Sep 28 19:06:50 Executor-Server emhttpd: shcmd (741): udevadm settle
     Sep 28 19:06:50 Executor-Server emhttpd: shcmd (742): /sbin/wipefs -a /dev/nvme0n1p1
     Sep 28 19:06:50 Executor-Server emhttpd: shcmd (743): mkfs.btrfs -f /dev/nvme0n1p1
     Sep 28 19:06:50 Executor-Server root: ERROR: '/dev/nvme0n1p1' is too small to make a usable filesystem
     It's saying the device is in use, so it's not being wiped completely, and it then uses the existing partition 1, which is too small. Try rebooting and running blkdiscard right after boot; if it still doesn't format, post new diags.
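A minimal pre-flight check before retrying the wipe can be sketched like this; DEV is a placeholder (on the real server it would be /dev/nvme0n1), and /dev/null stands in here so the check runs harmlessly:

```shell
# Hedged sketch: the "Device or resource busy" errors above usually mean
# something still holds the device, so check /proc/mounts before wiping.
# DEV is a placeholder; /dev/null stands in so this demo is harmless.
DEV=/dev/null
if grep -q "^$DEV " /proc/mounts; then
  echo "refusing: $DEV is mounted, unmount it first"
else
  echo "ok to wipe $DEV (would run: blkdiscard -f $DEV)"
fi
```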
  6. Docker and VMs folders are on the cache pool, only appdata is on the NVMe pool.
  7. Do a manual update, just download the zip and extract the bz* files overwriting existing ones.
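The overwrite step can be sketched as below, using throwaway directories in place of the extracted release zip and the /boot flash mount (both /tmp/demo_* paths are stand-ins for illustration):

```shell
# Hedged sketch of the manual-update copy: overwrite the existing bz* files
# with the ones extracted from the release zip. /tmp/demo_* stand in for the
# extracted zip contents and the /boot flash drive on a real server.
mkdir -p /tmp/demo_zip /tmp/demo_boot
printf 'old' > /tmp/demo_boot/bzimage   # existing file on the flash drive
printf 'new' > /tmp/demo_zip/bzimage    # file extracted from the release zip
cp -f /tmp/demo_zip/bz* /tmp/demo_boot/ # overwrite existing bz* files only
cat /tmp/demo_boot/bzimage              # → new
```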
  8. Yep, also working for me on multiple servers.
  9. Array should be stopped. /xxx must be replaced with correct device, like /dev/sdb or /dev/nvme0n1, if you want post the diags and I can tell you the complete command.
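To see which device name should replace /xxx, listing the block devices first is a safe, read-only check (lsblk from util-linux is assumed available):

```shell
# Hedged sketch: list block devices so the right name can be substituted
# for /xxx (e.g. /dev/sdb or /dev/nvme0n1). Read-only, safe to run anytime.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```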
  10. You can do it now, though if containers will be doing a lot of reading or writing to the array it will affect rebuild time.
  11. This part is strange, let's see how it goes for others.
  12. System and domains are on cache, and that is fine. Data is mostly on disk1, but, as mentioned, any new data written to that share goes initially to cache; that is expected behavior.
  13. Negative Ghost Rider, you need to wait for v6.11.1, it should be out soon.
  14. Sep 28 11:50:56 Ascension emhttpd: shcmd (520): mount -t btrfs -o noatime,space_cache=v2 /dev/sdc1 /mnt/cache
      Sep 28 11:50:56 Ascension root: mount: /mnt/cache: wrong fs type, bad option, bad superblock on /dev/sdc1, missing codepage or helper program, or other error.
      Your cache is not mounting, and no valid btrfs filesystem is being found on boot; are you sure it was btrfs? You are using a RAID volume, and that might be part of the problem, it might have changed something:
      import 30 cache device: (sdc) LOGICAL_VOLUME_5001438016327E90_3600508b1001c52ba5e10bd5d90a2e4dc
  15. And Unraid agrees. Try typing in the console: unraid-api restart. If that doesn't help, you should contact support and they will replace the key for you.
  16. If they use a Matrox GPU, it's a known issue; a driver will be included in the next release, hopefully it helps.
  17. Then you can rebuild, just stop the array, re-assign the disk and start the array to begin.
  18. Try wiping it with "blkdiscard -f /dev/xxx", if that doesn't help post the diagnostics after a format attempt.
  19. Shares are not showing in the diags; go to Shares, click "Compute All", and post a screenshot.
  20. For eth0 it looks more like a connection problem; try replacing the cable or using a different switch/router. eth1 crashed, and only a reboot will fix that:
      Sep 29 05:36:42 fs kernel: DMAR: DRHD: handling fault status reg 2
      Sep 29 05:36:42 fs kernel: DMAR: [DMA Read] Request device [05:00.1] PASID ffffffff fault addr f4a19000 [fault reason 06] PTE Read access is not set
      Sep 29 05:36:42 fs kernel: DMAR: [DMA Read] Request device [05:00.1] PASID ffffffff fault addr f9d54000 [fault reason 06] PTE Read access is not set
      Unrelated, but the server is also detecting RAM errors; you should fix that.
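Checking whether the DMAR faults are recurring can be done with a quick grep; the heredoc below is a stand-in sample, while on the server you would grep /var/log/syslog itself:

```shell
# Hedged sketch: count DMAR/IOMMU fault lines in a syslog. The heredoc is a
# stand-in sample; on the server, grep /var/log/syslog instead.
cat > /tmp/sample_syslog <<'EOF'
Sep 29 05:36:42 fs kernel: DMAR: DRHD: handling fault status reg 2
Sep 29 05:36:42 fs kernel: DMAR: [DMA Read] Request device [05:00.1] PASID ffffffff fault addr f4a19000 [fault reason 06] PTE Read access is not set
EOF
grep -c 'DMAR' /tmp/sample_syslog   # → 2
```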
  21. You can check with: lsattr /path/to/file. If the output is ---------------C------, nocow is set; if there's no C, it's not set.
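The C-flag check above can be wrapped in a tiny helper; the attribute strings below match the format lsattr prints, and has_nocow is a hypothetical name used only for this sketch:

```shell
# Hedged sketch: decide from an lsattr-style attribute string whether the
# nocow (No_COW, 'C') flag is set. has_nocow is a made-up helper name.
has_nocow() {
  case "$1" in
    *C*) echo "nocow is set" ;;
    *)   echo "nocow is not set" ;;
  esac
}
has_nocow '---------------C------'   # → nocow is set
has_nocow '----------------------'   # → nocow is not set
```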