brawny

Members
  • Posts: 27
  • Joined
  • Last visited
Everything posted by brawny

  1. I see the same warning in my server log:

        avahi-daemon[3985]: *** WARNING: Detected another IPv4 mDNS stack running on this host. This makes mDNS unreliable and is thus not recommended. ***

     and the same output from the "lsof -i -P -n | grep 5353" command:

        avahi-dae 3985 avahi 14u IPv4 39789328 0t0 UDP *:5353
        RoonAppli 24329 root 276u IPv4 71818202 0t0 UDP *:5353
        RoonAppli 24329 root 278u IPv4 71818204 0t0 UDP *:5353
        RoonAppli 24329 root 280u IPv4 71818206 0t0 UDP *:5353

     Roon is working fine, as is Roon ARC, but my Roon log shows these errors:

     Should I be concerned?
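
     A quick way to see every mDNS responder on the box is to list all sockets bound to UDP 5353 together with the process that owns them. This is just a sketch using standard tools (lsof is clearly on the box already; ss should be as well):

        # every process holding a UDP socket on the mDNS port
        lsof -nP -i UDP:5353

        # same view via ss, with owning process names
        ss -ulpn | grep 5353

     If more than one daemon shows up (here avahi-daemon and RoonAppliance), the avahi warning is expected; it is a nuisance rather than a hard failure as long as discovery still works.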
  2. I just had this happen last night. UnRaid 6.9.2. Rebooted the server this morning, and it came back up fine and started a parity check. I've enabled the syslog, so hopefully if it happens again I'll have a clue as to the cause. I haven't changed anything on the server in quite some time. The only recent change was to enable a plugin for the Logitech Media Server docker for "shairTunes" a few days ago.
  3. Thanks @JorgeB. The server is working OK for now, even without a cache. I don't like opening it up any more often than I need to, but will follow up once I do. I guess I'll hold off moving from 6.8.3 to 6.9.1 until after I confirm the cache is working. One issue at a time. 🙂 Thanks for your help! Brawny
  4. Not sure when this happened, but I just noticed it today. I had run Tools | Update Assistant to make sure I didn't have any issues that would impact upgrading to 6.9.1. I had to update a bunch of my dockers, re-ran the Update Assistant and got a clean bill of health. I then went to the Main tab and realized my cache drive was missing.

     No issues with the server recently. I'm running a bunch of dockers (radarr/sonarr/sabnzbdvpn) and have had no issues today. I did add a new 10TB drive to my cache a couple weeks ago. I suppose it's possible that the cache drive cabling got nudged at that point so it's not connected, but I think that's a stretch... there have been no issues with the server since the new drive was installed and working.

     Diagnostics attached: leviathan-diagnostics-20210311-1829.zip

     Any help would be appreciated. Thanks, Brawny
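
     Before opening the case again, it may be worth checking from the command line whether the SSD is even visible to the kernel. A rough sketch; the /dev/sdX name below is only a placeholder for whatever letter the cache device gets on your system:

        # list every block device the kernel currently sees
        lsblk -o NAME,SIZE,MODEL,SERIAL

        # if the ADATA shows up in that list, check its SMART health (device name is an assumption)
        smartctl -a /dev/sdX | head -n 40

        # look for recent SATA link resets or disconnects
        dmesg | grep -iE 'link|ata[0-9]' | tail -n 50

     If the drive is absent from lsblk entirely, that points at cabling or power rather than a filesystem or Unraid configuration problem.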
  5. See if this works for you. For some reason the IP address SAB was using internally changed for me. Not something I did myself... The quote above is how I found and resolved it.
  6. I figured out my issue, but it's strange that it popped up today, and the resolution wasn't what I expected. I normally follow Space Invader One's videos for setting dockers up rather than reinventing the wheel. That's what I did with SABnzbd, Radarr and Sonarr.

     This morning the issue I was having was that Sonarr and Radarr couldn't communicate with SABnzbd. They've been working for over a year without issue. Yesterday I lost all my dockers and had to restore from a backup, so that might somehow be part of what caused the issue - not sure.

     To resolve it, I noticed that when I looked at "Status and Interface Options" in the SABnzbd UI, the local IPv4 address was a 10.8.x.x address. I had this set as the 192.168.1.x address of my server in Radarr and Sonarr (and I'm sure this is how it has always been!). I copied/pasted the 10.x.x.x address into the SAB config in Sonarr and Radarr, and lo and behold, they were able to connect and started to work... Not sure how this happened, but glad it's working again.
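
     For anyone hitting the same thing: rather than guessing, it can help to test candidate addresses from the Sonarr/Radarr side and use whichever one actually answers. The container names, port and addresses below are placeholders for whatever your setup uses, and this assumes curl is available inside the Sonarr image:

        # what IP does docker itself think the SAB container has?
        docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' binhex-sabnzbdvpn

        # from inside the Sonarr container, see which candidate address the SAB web UI answers on
        docker exec sonarr curl -sS -o /dev/null -w '%{http_code}\n' http://192.168.1.x:8080/
        docker exec sonarr curl -sS -o /dev/null -w '%{http_code}\n' http://10.8.x.x:8080/

     Whichever address responds is the one that belongs in the download client settings in Sonarr and Radarr.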
  7. Any luck getting this resolved? I'm having a similar issue with my setup as of this morning following some changes to my system.
  8. Good to know. I wish I knew why this happened in the first place so I can avoid it happening again in the future. I also wish I'd been able to figure this out right away rather than by trial and error, but thankfully my server's back up and fully functional now, so no worries. 🙂
  9. Yes. At first I did a couple of them one at a time, and when that was successful, I selected multiple apps, and installed them in a batch.
  10. The appdata folder appeared to be intact for all my dockers. I spent a bunch of time last night re-configuring all the dockers and letting them chug through my data (Plex, LogitechMediaServer, Radarr, Sonarr, etc). This morning your note got me thinking - why not restore the appdata folders from backup? I have CA Backup / Restore AppData V2 installed. It took a couple hours to restore all appdata, and everything appears to be back to normal now.

     To summarize how I resolved this:
       1. Reinstall the previous version of all dockers.
       2. Restore all appdata from backup via the CA Backup / Restore Appdata plugin.

     Thanks @JorgeB for your help with this!
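
     For anyone without the plugin installed, roughly the same restore can be done by hand: stop Docker, copy the backed-up appdata over the live share, and start Docker again. The backup path below is a placeholder and assumes the backup is a plain directory copy rather than a tar archive - check how your backups are actually stored first:

        # stop the docker service so nothing writes to appdata during the copy
        /etc/rc.d/rc.docker stop

        # copy the most recent backup back over the live appdata share (paths are placeholders)
        rsync -a /mnt/user/Backups/appdata/ /mnt/user/appdata/

        # start docker again; containers should pick up their old configs
        /etc/rc.d/rc.docker start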
  11. Thanks for the pointer @JorgeB. I followed what I found there. When I went to the Web UI for each, it looks like I have to reconfigure each of them. Ugh..... That means I have to set up Plex again, point it at my media, and redo all my customizations. Same with LogitechMediaServer for my music. I haven't tried Sonarr or Radarr yet, but expect to have to re-add all the movies/shows that I'm following.

     This could take days/weeks to get things back the way they were yesterday - and I'm still not sure what the cause of the issue was. I'd also like to know if there's a better way to restore things with the existing configuration... Hopefully I'm missing something here...

     Thanks @JorgeB for your help getting me back up and running!

     Brawny
  12. Ok - makes sense. libvirt.img is for VMs. I just checked, and I get "Libvirt Service failed to start." on the VMS tab. Looking at the logs, I see:

        Feb 24 11:55:00 leviathan kernel: BTRFS info (device loop2): new size for /dev/loop2 is 32212254720
        Feb 24 11:55:00 leviathan emhttpd: shcmd (64): /etc/rc.d/rc.docker start
        Feb 24 11:55:00 leviathan root: starting dockerd ...
        Feb 24 11:55:00 leviathan avahi-daemon[10164]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
        Feb 24 11:55:00 leviathan avahi-daemon[10164]: New relevant interface docker0.IPv4 for mDNS.
        Feb 24 11:55:00 leviathan avahi-daemon[10164]: Registering new address record for 172.17.0.1 on docker0.IPv4.
        Feb 24 11:55:00 leviathan kernel: IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
        Feb 24 11:55:03 leviathan rc.docker: a56f380d3d4d55e4fe8b59db86cf04e328d27bebe7d9305442480f97edc0ba62
        Feb 24 11:55:03 leviathan emhttpd: shcmd (78): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1
        Feb 24 11:55:03 leviathan kernel: BTRFS: device fsid 34a49792-1bfd-4d9b-821a-43cdc2cf5449 devid 1 transid 10 /dev/loop3
        Feb 24 11:55:03 leviathan kernel: BTRFS info (device loop3): disk space caching is enabled
        Feb 24 11:55:03 leviathan kernel: BTRFS info (device loop3): has skinny extents
        Feb 24 11:55:03 leviathan kernel: BTRFS error (device loop3): bad fsid on block 22036480
        Feb 24 11:55:03 leviathan kernel: BTRFS error (device loop3): bad fsid on block 22036480
        Feb 24 11:55:03 leviathan kernel: BTRFS error (device loop3): failed to read chunk root
        Feb 24 11:55:03 leviathan root: mount: /etc/libvirt: wrong fs type, bad option, bad superblock on /dev/loop3, missing codepage or helper program, or other error.
        Feb 24 11:55:03 leviathan root: mount error
        Feb 24 11:55:03 leviathan emhttpd: shcmd (78): exit status: 1
        Feb 24 11:55:03 leviathan emhttpd: nothing to sync

     Not sure why I would have a bad fsid, or what that means?
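
     Since libvirt.img is just a BTRFS filesystem inside a file, one way to test whether the backup copy is healthy before swapping it in is to loop-mount it read-only somewhere harmless. The backup path is the one from my setup; the mount point is arbitrary:

        # try to mount the backup image read-only on a scratch directory
        mkdir -p /tmp/libvirt_test
        mount -o loop,ro /mnt/disk2/Documents/backups/domains_backup/libvirt.img /tmp/libvirt_test

        # if the mount succeeds, the backup's BTRFS metadata is readable
        ls /tmp/libvirt_test
        umount /tmp/libvirt_test

     If the backup mounts cleanly, copying it over the corrupted /mnt/user/system/libvirt/libvirt.img (with the VM service stopped) is the straightforward fix.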
  13. Done. After copying, I rebooted, and I'm still seeing the same thing. No dockers shown.
  14. Sorry guys, I'm not getting it... Only disk3 has a /mnt/diskX/system directory. It's the same size as my backup and as the aggregated file shown in /mnt/user/system. Thanks for your patience helping me figure this out.
  15. I checked each of the img files:

        root@leviathan:~# ls -l /mnt/user/system/libvirt/libvirt.img
        -rw-rw-rw- 1 nobody users 1073741824 Feb 24 07:11 /mnt/user/system/libvirt/libvirt.img
        root@leviathan:~# ls -l /mnt/disk3/system/libvirt/libvirt.img
        -rw-rw-rw- 1 nobody users 1073741824 Feb 24 07:11 /mnt/disk3/system/libvirt/libvirt.img
        root@leviathan:~# ls -l /mnt/disk2/Documents/backups/domains_backup/libvirt.img
        -rwxrwxrwx 1 nobody users 1073741824 Feb 22 00:00 /mnt/disk2/Documents/backups/domains_backup/libvirt.img*
        root@leviathan:~#

     So the copies in /mnt/user/system, /mnt/disk3/system and /mnt/disk2/Documents/backups/domains_backup are all the same size. The timestamp on my backup is from 2 days ago (I back up weekly).

     What do you suggest I do from here? Thanks for your help, @JorgeB
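
     Same size doesn't necessarily mean same contents - libvirt.img is a fixed-size image, so all copies show 1073741824 bytes regardless of what's inside. A quick way to see whether the backup actually differs from the live copy is to hash them (takes a minute or so per file):

        # compare the live image against the backup byte-for-byte
        md5sum /mnt/disk3/system/libvirt/libvirt.img \
               /mnt/disk2/Documents/backups/domains_backup/libvirt.img

     Matching hashes would mean the backup carries the same (possibly already corrupted) data; different hashes mean the backup is worth restoring.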
  16. No output.

        root@leviathan:~# find /mnt -name libvirt.log
        root@leviathan:~#

     My bad. Rerunning the command with the right parameters:

        root@leviathan:~# find /mnt -name libvirt.img
        /mnt/user/Documents/backups/domains_backup/libvirt.img
        /mnt/user/system/libvirt/libvirt.img
        /mnt/disk3/system/libvirt/libvirt.img
        /mnt/disk2/Documents/backups/domains_backup/libvirt.img
        root@leviathan:~#
  17. Thanks itimpi! I've attached the diagnostics zip file. leviathan-diagnostics-20210224-0804.zip
  18. I saw another thread with a very similar title from 5 years ago. Rather than resurrect the old thread, I'm creating a new one.

     My Unraid server has been chugging away for over a year with no issues. Last night I shut it down cleanly and opened it up to add another 10TB drive. I restarted the server, started a preclear on the new drive and went to bed. This morning I realized that all of my docker containers are missing, as is the one VM I have configured.

     I checked, and the docker image file exists:

        root@leviathan:/mnt/user/system/docker# ls -l
        total 31457284
        -rw-rw-rw- 1 nobody users 32212254720 Feb 24 07:20 docker.img
        root@leviathan:/mnt/user/system/docker# sudo docker version
        Client: Docker Engine - Community
         Version:           19.03.5
         API version:       1.40
         Go version:        go1.12.12
         Git commit:        633a0ea838
         Built:             Wed Nov 13 07:22:05 2019
         OS/Arch:           linux/amd64
         Experimental:      false

        Server: Docker Engine - Community
         Engine:
          Version:          19.03.5
          API version:      1.40 (minimum version 1.12)
          Go version:       go1.12.12
          Git commit:       633a0ea838
          Built:            Wed Nov 13 07:28:45 2019
          OS/Arch:          linux/amd64
          Experimental:     false
         containerd:
          Version:          v1.2.10
          GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
         runc:
          Version:          1.0.0-rc8+dev
          GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
         docker-init:
          Version:          0.18.0
          GitCommit:        fec3683
        root@leviathan:/mnt/user/system/docker#

     From the other thread, I understand that the data for my various dockers is fine, but I'm not sure how to get the dockers back. I'm hoping I don't have to download and reconfigure them all, as I can't remember the configurations for them (Radarr, Sonarr, Plex, SABNZBD+, LogitechMediaServer, pi-Hole, Krusader and a few others). Most were set up with help from SpaceInvaderOne's videos.

     I can send whatever logging information is needed to help understand what happened and resolve the issue. Not sure whether installing another drive has caused this, and if so, why/how.

     Thanks in advance for your help!

     Brawny

     P.S. I paused the preclear on the new disk and rebooted. No change - no dockers showing.
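
     A few read-only checks that may help narrow down whether docker.img is failing to mount versus mounting but coming up empty (the loop device number is whatever your docker.img gets - loop2 on my box):

        # is the docker image loop-mounted where Unraid expects it?
        losetup -a | grep docker.img
        df -h /var/lib/docker

        # does the docker daemon know about any containers, running or stopped?
        docker ps -a

        # were there btrfs errors when the image was mounted?
        grep -i 'loop2\|docker' /var/log/syslog | tail -n 50

     If docker ps -a comes back empty but the image mounts cleanly, the usual recovery is to re-add each container from Apps > Previous Apps, which reuses the old templates and the existing appdata.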
  19. I've been running Unraid for about 6 months or so, without any issues. Within the last couple weeks I've started getting warnings that my cache drive is getting full:

        Event: Unraid Cache disk disk utilization
        Subject: Alert [LEVIATHAN] - Cache disk is low on space (99%)
        Description: ADATA_SU800_2J1820009460 (sdb)
        Importance: alert

     I installed BOINC a few weeks back, so I wondered if that was the issue. I also wonder if perhaps it's an issue with Plex starting to take up space, as I've recently added a new TV show and have one local user who has to transcode down to lower quality. Not sure if that process would cause temporary data to be written to the cache.

     I've now stopped BOINC and run the mover. I'm back up to 18GB free on a 1TB SSD cache. Diagnostic logs attached. Any advice appreciated!

     leviathan-diagnostics-20200413-1329.zip
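
     To see what is actually consuming the cache rather than guessing, a per-directory size breakdown of the cache drive usually points straight at the culprit (a misbehaving container's appdata, a Plex transcode directory, incomplete downloads, etc.). A minimal sketch:

        # largest top-level consumers on the cache device
        du -h -d1 /mnt/cache 2>/dev/null | sort -h | tail -n 15

        # drill into appdata if that is where the space went
        du -h -d1 /mnt/cache/appdata 2>/dev/null | sort -h | tail -n 15

     If BOINC were the culprit, its project data under appdata would dominate the listing; a runaway transcode folder would show up just as quickly.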
  20. Posted this in General Support - cross-posting the link here in case it's useful.
  21. Woke up this morning to reports of out-of-memory errors from the Fix Common Problems check. I haven't installed anything new since BOINC (linuxserver/boinc) a couple weeks back. Diagnostic logs attached.

     I just stopped BOINC, and memory usage is back down from 95% to 15% (the system has 32GB RAM), so I think I found the culprit... Possibly just a configuration issue - I'll leave it stopped and see if I can reconfigure it to reduce memory usage before letting it go again. FWIW, my last system reboot was 5 days ago to upgrade Unraid to v6.8.3.

     leviathan-diagnostics-20200327-0942.zip
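
     One way to keep a single container from eating the whole box is to give it a hard memory ceiling. On Unraid that can go in the container template's Extra Parameters field; the container name and the 8g limit below are just example values:

        # docker run flags that cap the BOINC container at 8 GB of RAM
        --memory=8g --memory-swap=8g

        # verify the limit and current usage once the container is running
        docker stats --no-stream boinc

     BOINC itself also has an in-app computing preference ("use at most X% of memory") that approaches the same problem from the other direction.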
  22. Thank you @luisv That was the bit that I was missing. I'm up and running now!
  23. Sorry if this is answered elsewhere, but perhaps the instructions on installing and configuring BOINC could be clarified on the blog post? Step one says:

        1. First, go install the BOINC manager client.

     I'm browsing the blog from a laptop. Clicking the link on the blog page takes me to a download page. Great so far, but I'm on a laptop, not my Unraid server. Assuming this should be installed/run on my server (rather than an old laptop), can someone provide some direction? I don't have any VMs on my server - just dockers. I've downloaded the BOINC docker, but the values to be populated in the configuration aren't clear. Can someone provide contextual information on suggested or typical values?

     Thanks - I really want to help here if I can!

     Brawny
  24. I did. Post and response is here: Logitech Media Server forums

     The response was:

        This really is the hostname, given by the underlying system. But you can change LMS' name to something else in the Basic Settings/Media Library Name. Please note that this would change the name shown in apps, but not eg. the URL you'd have to use to access LMS.

     I've made this change within the app in [Basic Settings -> Media Library Name]. Fixed!
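
     For completeness, the docker-level alternative would be to set the container's hostname itself, since LMS advertises whatever hostname it sees inside the container (by default the short container ID, like "14f53de05a80"). On Unraid that would mean adding something like the flag below to the template's Extra Parameters; the name is arbitrary and I haven't tested this on my setup:

        --hostname=lms

     The Media Library Name route above was enough for what I wanted, so I left the container hostname alone.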
  25. Hopefully a simple question, but I haven't been able to figure out how to change the hostname of my LMS docker. I'd like to have something a little friendlier than "14f53de05a80" show up in iPeng, SqueezePlay, etc... Thanks in advance! Brawny