StudiesTheBlade

Members · 17 posts

Reputation: 3 · Community Answers: 1

Posts

  1. It worked! Looks like the TrueNAS partition scheme was the culprit. stanley-diagnostics-20230615-2337.zip
  2. Ah, I needed to run just `zpool import`, not `zpool import tank`. I tried the trick you mentioned above, but still no luck. But at least I think I have a lead with the partition scheme on that one drive. I wonder if I can wipe that drive and resilver it with the rest of the array (see the wipe-and-resilver sketch after this list). A little sketchy, but it should work, no? stanley-diagnostics-20230615-1221.zip
  3. Sorry about that. Here you go! stanley-diagnostics-20230615-1115.zip I've been using the pool without issue for docker mappings, but I can't get it added as an official pool.
  4. Hi all, I am trying to import an existing ZFS pool into Unraid, but I am getting the "Unmountable: Unsupported or no file system" error message for all the disks after starting the array. From what I've read in the documentation, my pool seems to be a supported configuration: 2 raidz1 vdevs with 4 drives each (see attached image). I have tried creating the pool, assigning all drives, and then choosing zfs raidz 2x4 drives, but I still get the same result. I was hoping someone could tell me whether I'm missing a step somewhere, or whether my array is somehow incompatible and why. I don't see any log entries in Unraid that give any explanation of why it couldn't be mounted. Thanks for your time!
  5. Not sure if it will help, but I added `pcie_aspm=off` to my Syslinux config at some point while trying to diagnose similar issues. My "Unraid OS" boot entry (full stanza after this list): `kernel /bzimage append pcie_aspm=off kvm_amd.nested=1 isolcpus=8-15 initrd=/bzroot`
  6. So far so good after removing the Recycle Bin plugin, among others. That seems to have been the culprit, although I did make a few other changes, so I am not certain it was Recycle Bin. EDIT: Marking as solved for now.
  7. Ah, I hadn't seen those posts. I have been using the Recycle Bin plugin. I'll remove it and as many other plugins as I can, and continue to monitor it and the other dockers. Although, IMO, dockers should be able to manipulate files on the filesystem without fear of breaking it; that seems like one of the main requirements of a filesystem. I have also switched my mover schedule from hourly to daily.
  8. Hi all, I did not find any bug reports specific to this error on this Unraid pre-release version, so I am reporting it here for others to also list their experience and diagnostics. The issue is usually noticed after dockers and VMs fail to start or fail while running, and it is always preceded by the following error in the syslog: shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed. I am aware this may be a bigger issue than just Unraid and this specific pre-release, but I feel it is still rather important to have it documented, so that a workaround or mitigation can be found for scenarios where a daily reboot is not an acceptable solution (a rough syslog-watchdog sketch follows this list). unraid-tower-diagnostics-20210113-0916.zip
  9. Are there any special settings I need to set to get nested subdomains working? I've got no issues with certificates for my root and first-level subdomains, but the second-level nested ones aren't getting added to the cert. I'm using Cloudflare with DNS validation (a wildcard-SAN sketch follows this list). Example A records: `A example.com <ip>` <-- OK; `A *.example.com <ip>` <-- OK; `A *.subdomain.example.com <ip>` <-- cert invalid when navigating to the site.
  10. Not sure what exactly was causing this, but it was resolved after using the mover to move everything off the cache drive and running `blkdiscard /dev/sdx` (rough sequence after this list). After moving everything back, the mover appears to be moving everything properly again.
  11. I ran `mount -l` and was curious about a few results: proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) tmpfs on /dev/shm type tmpfs (rw) tmpfs on /var/log type tmpfs (rw,size=128m,mode=0755) /dev/sde1 on /boot type vfat (rw,noatime,nodiratime,flush,dmask=77,fmask=177,shortname=mixed) [UNRAID] /boot/bzmodules on /lib/modules type squashfs (ro) /boot/bzfirmware on /lib/firmware type squashfs (ro) hugetlbfs on /hugetlbfs type hugetlbfs (rw) /mnt on /mnt type none (rw,bind) tmpfs on /mnt/disks type tmpfs (rw,size=1M) nfsd on /proc/fs/nfs type nfsd (rw) nfsd on /proc/fs/nfsd type nfsd (rw) /dev/md1 on /mnt/disk1 type xfs (rw,noatime) /dev/md2 on /mnt/disk2 type xfs (rw,noatime) /dev/md3 on /mnt/disk3 type xfs (rw,noatime) /dev/md4 on /mnt/disk4 type xfs (rw,noatime) /dev/nvme0n1p1 on /mnt/cache type btrfs (rw,space_cache=v2) shfs on /mnt/user0 type fuse.shfs (rw,nosuid,nodev,noatime,allow_other) shfs on /mnt/user type fuse.shfs (rw,nosuid,nodev,noatime,allow_other) secure: on /mnt/disks/backblaze type fuse.rclone (rw,nosuid,nodev,allow_other) /mnt/cache/system/docker/docker.img on /var/lib/docker type btrfs (rw,noatime,space_cache=v2) /mnt/cache/system/libvirt/libvirt.img on /etc/libvirt type btrfs (rw,noatime,space_cache=v2) Is it normal to have that /mnt bind mount and both a /mnt/user and a /mnt/user0?
  12. I can move the files to that specific disk manually with no issue. I can also delete the file from the disk and then invoke the mover without issue. It's almost as if the file is being written to the disk and the cache at the same time during the initial write.
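For the wipe-and-resilver idea in #2: a minimal shell sketch, assuming the pool is named tank, the suspect drive is /dev/sdX (both placeholders), and the raidz1 vdev it belongs to still has full redundancy. This is the general ZFS approach, not a verified procedure for this particular pool.

```bash
zpool import                  # with no arguments, lists importable pools and the devices they expect
zpool import tank             # import by name once the pool shows up cleanly

zpool offline tank /dev/sdX   # take the suspect drive out of its raidz1 vdev
wipefs -a /dev/sdX            # clear the old TrueNAS partition/filesystem signatures
zpool replace tank /dev/sdX   # re-add it in place; ZFS resilvers it from the remaining drives
zpool status -v tank          # watch resilver progress and check for errors
```

Whether ZFS expects the whole disk, a partition, or a by-id path depends on how the pool was built; `zpool status` shows the exact device name to use.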
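The "Unraid OS" boot entry from #5, written out as it would appear in /boot/syslinux/syslinux.cfg. The append values are taken verbatim from the post; the menu default line is an assumption about the rest of the stanza.

```
label Unraid OS
  menu default
  kernel /bzimage
  append pcie_aspm=off kvm_amd.nested=1 isolcpus=8-15 initrd=/bzroot
```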
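For the shfs assertion in #8, a rough detection sketch only, not a fix: it tails the syslog and raises an Unraid notification when the assertion appears, so a mitigation (or a controlled reboot) can happen before dockers and VMs start failing. The notify helper path and flags are assumptions to verify on your own install.

```bash
#!/bin/bash
# Watch the syslog for the shfs unlink_node assertion and raise an alert.
# Sketch only -- adjust the notification command for your Unraid version.
PATTERN="unlink_node: Assertion \`node->nlookup > 1' failed"

tail -Fn0 /var/log/syslog | while read -r line; do
  case "$line" in
    *"$PATTERN"*)
      /usr/local/emhttp/webGui/scripts/notify \
        -i alert \
        -s "shfs assertion detected" \
        -d "$line"
      ;;
  esac
done
```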
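For the nested subdomains in #9, the actual container settings aren't shown, so here is a certbot-based sketch (certbot-dns-cloudflare plugin; the credentials path and domains are placeholders) that illustrates the underlying rule: each wildcard level has to be requested as its own name, because *.example.com does not cover *.subdomain.example.com.

```bash
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d example.com \
  -d '*.example.com' \
  -d '*.subdomain.example.com'
```

If this is the SWAG/letsencrypt container, the equivalent would be adding the nested names (or their wildcard) to its subdomain/extra-domain settings rather than relying on the top-level wildcard.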
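The cache-reset sequence from #10, spelled out as a sketch. /dev/sdx is a placeholder, the mover path is assumed to be the stock /usr/local/sbin/mover, and blkdiscard destroys everything on the device, so only run it once the mover has emptied the cache.

```bash
# 1. Reconfigure cache-enabled shares so the mover will push their files to the array, then flush:
/usr/local/sbin/mover

# 2. With the cache empty and not in use (array stopped), discard every block on the SSD:
blkdiscard /dev/sdx

# 3. Re-add the cache, reverse the share settings, and run the mover again to move everything back:
/usr/local/sbin/mover
```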