mcrommert

Community Answers

  1. I can add fuel to the fire: I had this issue too, and only realized it when Sonarr started mass-removing things, saying it couldn't find them.
  2. Had to hard power cycle and boot up again. It ran until this morning at 7:45 AM, then went unresponsive and lost all the apps. Hard rebooting again; going to revert the update.
  3. doublechurch-diagnostics-20240216-1849 I downloaded the update for 6.12.8, and when I tried to reboot the server to finish the upgrade, it hung on "syncing filesystems" and won't shut down or reboot.
  4. So far my logs are a lot quieter: just NUT warnings and the occasional segfault from the Plex transcoder.
  5. Will do. I also saw the macvlan warning; I had been seeing some syslog activity regarding macvlan, but not having seen that warning before, I originally dismissed it. It also did not show up in common issues. I switched to ipvlan anyway (first sketch after this list). Will let you know what I see.
  6. Okay, so I got rid of the errors, but for whatever reason when I went back to 6.12 the parity needed to be rebuilt; no issue there. Over the last three days, though, the server has hard crashed about 10 hours after boot while rebuilding the parity, twice now, and both times all the containers dropped and the web UI and SSH were inaccessible. Since I have now taken 6.12.1, I need to somehow revert back to 6.11, where I didn't have these issues.
  7. Yes, it showed these two (it scrolled too fast to take a screenshot), but I had already deleted them.
  8. So I deleted this file over SMB (actually I deleted the whole folder), ran the scrub again, and it still errors the same way.
  9. Ran this from the other "cache not mounting" thread: btrfs rescue zero-log /dev/nvme0n1p1 And then I ran a scrub (second sketch after this list).
  10. Literally the exact same thing happened to me, in exactly the same way.
  11. So I updated yesterday to 6.12. Today I noticed that my Docker containers weren't running and I had an "Unraid service failed to start" error. I rebooted the server and had to do a dirty shutdown, as it wouldn't shut down. When it booted, Docker started with no containers, and now the cache drive shows "Unmountable: Unsupported or no file system". This is the log:

      Jun 18 09:24:49 OffensiveBias kernel: nvme0n1: p1
      Jun 18 09:24:49 OffensiveBias kernel: BTRFS: device fsid 66d7b772-62bd-444f-8ec7-60544d9d9fa6 devid 1 transid 3904928 /dev/nvme0n1p1 scanned by udevd (914)
      Jun 18 09:25:43 OffensiveBias root: [092543] system.php:1187 @ loop: Scanning: N:0:0:1 Node: /dev/nvme0n1
      Jun 18 09:26:31 OffensiveBias emhttpd: MS1PC5ED3ORA3.2T_S2T7NA0J601609 (nvme0n1) 512 6251233968
      Jun 18 09:26:31 OffensiveBias emhttpd: import 31 cache device: (nvme0n1) MS1PC5ED3ORA3.2T_S2T7NA0J601609
      Jun 18 09:26:32 OffensiveBias emhttpd: read SMART /dev/nvme0n1
      Jun 18 09:26:48 OffensiveBias emhttpd: /sbin/btrfs filesystem show /dev/nvme0n1p1 2>&1
      Jun 18 09:26:49 OffensiveBias emhttpd: devid 1 size 2.91TiB used 1.64TiB path /dev/nvme0n1p1
      Jun 18 09:26:49 OffensiveBias emhttpd: shcmd (81): mount -t btrfs -o noatime,space_cache=v2 /dev/nvme0n1p1 /mnt/cache
      Jun 18 09:26:49 OffensiveBias kernel: BTRFS info (device nvme0n1p1): using crc32c (crc32c-intel) checksum algorithm
      Jun 18 09:26:49 OffensiveBias kernel: BTRFS info (device nvme0n1p1): using free space tree
      Jun 18 09:26:49 OffensiveBias kernel: BTRFS info (device nvme0n1p1): enabling ssd optimizations
      Jun 18 09:26:49 OffensiveBias kernel: BTRFS info (device nvme0n1p1): start tree-log replay
      Jun 18 09:26:49 OffensiveBias kernel: BTRFS error (device nvme0n1p1): incorrect extent count for 1095761920; counted 4528, expected 4521
      Jun 18 09:26:49 OffensiveBias kernel: BTRFS: error (device nvme0n1p1) in btrfs_replay_log:2409: errno=-5 IO failure (Failed to recover log tree)
      Jun 18 09:26:49 OffensiveBias root: mount: /mnt/cache: can't read superblock on /dev/nvme0n1p1.
      Jun 18 09:26:49 OffensiveBias kernel: BTRFS error (device nvme0n1p1: state E): open_ctree failed

      Wondering what I can do; I really, really want to restore, as I don't have super up-to-date backups (third sketch after this list). offensivebias-diagnostics-20230618-0943.zip
  12. I just kept restarting my container until it worked. Glad to see I'm not the only one having this issue.
  13. So mine says joined and does not error out, but it never shows up in the ZeroTier interface to approve. Any way to pull the name of this one so I can whitelist it manually in the ZeroTier interface? (Fourth sketch after this list.) EDIT: When I change to bridge it changes to online, but it still doesn't work. Under host it says: 200 info xxxxxxxxx 1.6.2 OFFLINE
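
For the macvlan-to-ipvlan switch in item 5: on Unraid this is a toggle under Settings > Docker (custom network type). A minimal sketch of the generic Docker equivalent, assuming a parent interface of br0 and a custom network named br0_custom; the subnet, gateway, and names are illustrative guesses, not taken from the posts:

    # remove the existing macvlan custom network (name is hypothetical)
    docker network rm br0_custom
    # recreate it with the ipvlan driver on the same parent interface
    docker network create -d ipvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=br0 br0_custom

ipvlan containers share the parent interface's MAC address instead of minting one per container, which is why it sidesteps the macvlan call-trace warnings on these kernels.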
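For the zero-log step in item 9, a minimal sketch of the full sequence, assuming the pool device is /dev/nvme0n1p1 and it normally mounts at /mnt/cache as in the posts above. Note that zero-log must be run against an unmounted filesystem and discards the last few seconds of writes:

    # clear the corrupt log tree (filesystem must be unmounted)
    btrfs rescue zero-log /dev/nvme0n1p1
    # remount the pool (on Unraid, starting the array does this)
    mount -t btrfs -o noatime,space_cache=v2 /dev/nvme0n1p1 /mnt/cache
    # start a scrub and check on its progress
    btrfs scrub start /mnt/cache
    btrfs scrub status /mnt/cache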
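For the failed log-tree replay in item 11, one way to copy data off before attempting any repair is a read-only mount that skips log replay entirely. A sketch, assuming a kernel recent enough for the btrfs rescue mount options; /mnt/recovery is a hypothetical mount point:

    mkdir -p /mnt/recovery
    # read-only mount that ignores the log tree
    mount -o ro,rescue=nologreplay /dev/nvme0n1p1 /mnt/recovery
    # if that still fails, also try falling back to a backup tree root
    mount -o ro,rescue=nologreplay:usebackuproot /dev/nvme0n1p1 /mnt/recovery
    # copy anything important off before running btrfs rescue zero-log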
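For item 13, the identifier to authorize manually in ZeroTier Central is the 10-character node address in that "200 info" line, and it can be pulled from inside the container. A sketch, assuming the container is named zerotier (the name is a guess):

    # prints: 200 info <node-address> <version> <status>
    docker exec zerotier zerotier-cli info
    # lists joined networks and whether access is authorized
    docker exec zerotier zerotier-cli listnetworks

In ZeroTier Central the node can then usually be added to the network's member list by that address, even if it never shows up on its own waiting for approval.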