brawny

Members

  • Posts: 27

About brawny

  • Birthday: September 5
  • Gender: Male
  • Location: London, Ontario, Canada


brawny's Achievements

  • Rank: Noob (1/14)
  • Reputation: 3

  1. I see the same warning in my server log:

     avahi-daemon[3985]: *** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***

     and the same output from the "lsof -i -P -n | grep 5353" command:

     avahi-dae  3985 avahi  14u IPv4 39789328 0t0 UDP *:5353
     RoonAppli 24329 root  276u IPv4 71818202 0t0 UDP *:5353
     RoonAppli 24329 root  278u IPv4 71818204 0t0 UDP *:5353
     RoonAppli 24329 root  280u IPv4 71818206 0t0 UDP *:5353

     Roon is working fine, as is Roon Arc, but my Roon log shows these errors: Should I be concerned?
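     For context: several mDNS responders can share UDP 5353 as long as each binds the port with SO_REUSEADDR/SO_REUSEPORT, which the lsof output above suggests is happening, so this warning is often cosmetic. If it does cause discovery problems, one possible workaround (an assumption of mine, not advice given in the thread) is to keep avahi-daemon off the interface Roon uses; "br0" below is a placeholder for whatever interface that is on your server:

```ini
# /etc/avahi/avahi-daemon.conf -- sketch only; "br0" is a placeholder
# interface name, substitute the one Roon is bound to.
[server]
deny-interfaces=br0
```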
  2. I just had this happen last night, on Unraid 6.9.2. I rebooted the server this morning; it came back up fine and started a parity check. I've enabled the syslog, so hopefully if it happens again I'll have a clue as to the cause. I haven't changed anything on the server in quite some time; the only recent change was enabling the "shairTunes" plugin for the Logitech Media Server docker a few days ago.
  3. Thanks @JorgeB. The server is working OK for now, even without a cache. I don't like opening it up any more often than I need to, but will follow up once I do. I guess I'll hold off moving from 6.8.3 to 6.9.1 until after I confirm the cache is working. One issue at a time. 🙂 Thanks for your help! Brawny
  4. Not sure when this happened, but I just noticed it today. I had run Tools | Update Assistant to make sure I didn't have any issues that would impact upgrading to 6.9.1. I had to update a bunch of my dockers, re-ran the Update Assistant and got a clean bill of health. I then went to the Main tab and realized my cache drive was missing. No issues with the server recently. I'm running a bunch of dockers (radarr/sonarr/sabnzbdvpn) and have had no issues today. I did add a new 10TB drive to my cache a couple of weeks ago. I suppose it's possible that the cache drive cabling got nudged at that point so it's not connected, but I think that's a stretch... No issues with the server with the new drive installed and working. Diagnostics attached: leviathan-diagnostics-20210311-1829.zip. Any help would be appreciated. Thanks, Brawny
  5. See if this works for you. For some reason the IP address SAB was using internally changed for me. Not something I did myself... The quote above is how I found and resolved it.
  6. I figured out my issue, but it's strange that it popped up today, and the resolution wasn't what I expected. I normally follow Space Invader One's videos for setting up dockers rather than reinventing the wheel, and that's what I did with SABnzbd, Radarr and Sonarr. This morning the issue was that Sonarr and Radarr couldn't communicate with SABnzbd, even though they'd been working for over a year without issue. Yesterday I lost all my dockers and had to restore from a backup, so that might be part of what caused this - not sure. To resolve it: I noticed that under "Status and Interface Options" in the SABnzbd UI, the local IPv4 address was a 10.8.x.x address, while I had the 192.168.1.x address of my server set in Radarr and Sonarr (and I'm sure this is how it has always been!). I copied/pasted the 10.x.x.x address into the SAB config in Sonarr and Radarr, and lo and behold, they were able to connect and started to work... Not sure how this happened, but glad it's working again...
  7. Any luck getting this resolved? I'm having a similar issue with my setup as of this morning following some changes to my system.
  8. Good to know. I wish I knew why this happened in the first place so I can avoid it in the future. I also wish I'd figured this out right away rather than by trial and error, but thankfully my server's back up and fully functional now, so no worries. 🙂
  9. Yes. At first I did a couple of them one at a time, and when that was successful, I selected multiple apps, and installed them in a batch.
  10. The appdata folder appeared to be intact for all my dockers. I spent a bunch of time last night re-configuring all the dockers and letting them chug through my data (Plex, LogitechMediaServer, Radarr, Sonarr, etc.). This morning your note got me thinking: why not restore the appdata folders from backup? I have CA Backup / Restore AppData V2 installed. It took a couple of hours to restore all appdata, and everything appears to be back to normal now. To summarize how I resolved this:
      • Reinstall the previous version of all dockers.
      • Restore all appdata from backup via the CA Backup / Restore Appdata plugin.
      Thanks @JorgeB for your help with this!
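     The restore step above can be sketched as plain shell. The CA Backup / Restore Appdata plugin does this for you with extra safety checks; the paths below are throwaway demo paths (not real Unraid locations), and on a real server the dockers would be stopped first:

```shell
# Demo paths only -- on a real server these would be the plugin's backup
# location and /mnt/user/appdata.
BACKUP=/tmp/demo_backup/appdata
APPDATA=/tmp/demo_restore/appdata

# Fake a backed-up Plex config so the demo is self-contained.
mkdir -p "$BACKUP/plex" "$APPDATA"
echo "prefs" > "$BACKUP/plex/Preferences.xml"

# Restore: copy everything back, preserving permissions and timestamps.
cp -a "$BACKUP/." "$APPDATA/"

ls "$APPDATA/plex"
```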
  11. Thanks for the pointer @JorgeB - I followed what I found there. When I went to the Web UI for each docker, it looks like I have to reconfigure each of them. Ugh... That means I have to set up Plex again, point it at my media, and redo all my customizations. Same with LogitechMediaServer for my music. I haven't tried Sonarr or Radarr yet, but I expect to have to re-add all the movies/shows I'm following. This could take days or weeks to get things back the way they were yesterday - and I'm still not sure what the cause of the issue was. I'd also like to know if there's a better way to restore things with the existing configuration... Hopefully I'm missing something here... Thanks @JorgeB for your help getting me back up and running! Brawny
  12. Ok - makes sense. libvirt.img is for VMs. I just checked, and I get "Libvirt Service failed to start." on the VMs tab. Looking at the logs, I see:

     Feb 24 11:55:00 leviathan kernel: BTRFS info (device loop2): new size for /dev/loop2 is 32212254720
     Feb 24 11:55:00 leviathan emhttpd: shcmd (64): /etc/rc.d/rc.docker start
     Feb 24 11:55:00 leviathan root: starting dockerd ...
     Feb 24 11:55:00 leviathan avahi-daemon[10164]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
     Feb 24 11:55:00 leviathan avahi-daemon[10164]: New relevant interface docker0.IPv4 for mDNS.
     Feb 24 11:55:00 leviathan avahi-daemon[10164]: Registering new address record for 172.17.0.1 on docker0.IPv4.
     Feb 24 11:55:00 leviathan kernel: IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
     Feb 24 11:55:03 leviathan rc.docker: a56f380d3d4d55e4fe8b59db86cf04e328d27bebe7d9305442480f97edc0ba62
     Feb 24 11:55:03 leviathan emhttpd: shcmd (78): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1
     Feb 24 11:55:03 leviathan kernel: BTRFS: device fsid 34a49792-1bfd-4d9b-821a-43cdc2cf5449 devid 1 transid 10 /dev/loop3
     Feb 24 11:55:03 leviathan kernel: BTRFS info (device loop3): disk space caching is enabled
     Feb 24 11:55:03 leviathan kernel: BTRFS info (device loop3): has skinny extents
     Feb 24 11:55:03 leviathan kernel: BTRFS error (device loop3): bad fsid on block 22036480
     Feb 24 11:55:03 leviathan kernel: BTRFS error (device loop3): bad fsid on block 22036480
     Feb 24 11:55:03 leviathan kernel: BTRFS error (device loop3): failed to read chunk root
     Feb 24 11:55:03 leviathan root: mount: /etc/libvirt: wrong fs type, bad option, bad superblock on /dev/loop3, missing codepage or helper program, or other error.
     Feb 24 11:55:03 leviathan root: mount error
     Feb 24 11:55:03 leviathan emhttpd: shcmd (78): exit status: 1
     Feb 24 11:55:03 leviathan emhttpd: nothing to sync

     Not sure why I would have a bad fsid, or what that means?
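     For readers hitting the same error: "bad fsid" means the BTRFS metadata inside libvirt.img no longer matches its own superblock, i.e. the image file itself is corrupt. One common recovery (my assumption, not advice given in this thread) is to move the image aside with the VM service stopped and let Unraid create a fresh one on the next service start; VM definitions are lost but the vdisks themselves are untouched. The sketch uses a throwaway demo path so nothing real is touched:

```shell
# Demo stand-in for /mnt/user/system/libvirt/libvirt.img -- substitute
# the real path on an actual server, with the VM service stopped.
IMG=/tmp/demo_libvirt/libvirt.img
mkdir -p "$(dirname "$IMG")"
: > "$IMG"   # fake "corrupt" image so the demo is self-contained

# Move the corrupt image aside (keeping it for later inspection) so the
# VM service can build a fresh one on its next start.
mv "$IMG" "$IMG.bad"

ls /tmp/demo_libvirt
```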
  13. Done. After copying, I rebooted, and I'm still seeing the same thing. No dockers shown.
  14. Sorry guys, I'm not getting it... Only disk3 has a /mnt/diskX/system directory. It's the same size as my backup and the aggregated file shown in /mnt/user/system. Thanks for your patience helping me figure this out.
  15. I checked each of the img files:

     root@leviathan:~# ls -l /mnt/user/system/libvirt/libvirt.img
     -rw-rw-rw- 1 nobody users 1073741824 Feb 24 07:11 /mnt/user/system/libvirt/libvirt.img
     root@leviathan:~# ls -l /mnt/disk3/system/libvirt/libvirt.img
     -rw-rw-rw- 1 nobody users 1073741824 Feb 24 07:11 /mnt/disk3/system/libvirt/libvirt.img
     root@leviathan:~# ls -l /mnt/disk2/Documents/backups/domains_backup/libvirt.img
     -rwxrwxrwx 1 nobody users 1073741824 Feb 22 00:00 /mnt/disk2/Documents/backups/domains_backup/libvirt.img*

     So the copies in /mnt/user/system, /mnt/disk3/system and /mnt/disk2/Documents/backups/domains_backup are all the same size. The timestamp on my backup is from 2 days ago (I back up weekly). What do you suggest I do from here? Thanks for your help, @JorgeB
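     One low-risk next check (my suggestion, not something from the thread): since all three copies are the same size, compare checksums -- identical sizes do not imply identical contents. Throwaway demo files are used below so the snippet is runnable anywhere; on the server you would point md5sum at the three libvirt.img paths listed above:

```shell
# Throwaway files standing in for two of the libvirt.img copies.
mkdir -p /tmp/img_demo
printf 'same-bytes' > /tmp/img_demo/user.img
printf 'same-bytes' > /tmp/img_demo/disk3.img

# Equal hashes mean byte-identical files; equal sizes alone do not.
md5sum /tmp/img_demo/user.img /tmp/img_demo/disk3.img
```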