Everything posted by ramair02

  1. You're having the same issue as me. I'm assuming it is a bug that a future update of FCP or UD will fix? For now, I'm just ignoring the error until I hear from folks here that know more
  2. This makes sense. How can I figure out what is creating the 'mounts' file? It contains a bunch of mountpoint information about all of my disks. Here's a snippet of the file contents. Entire file attached. mounts.txt
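
     If it helps pin down what is writing it, these are checks I can run against the file and post output from (assuming lsof/fuser are available on a stock install):

     ```bash
     # Owner, size and timestamps of the file FCP is flagging
     stat /mnt/mounts

     # Is any process holding the file open right now?
     # (may show nothing if whatever writes it only touches it briefly)
     lsof /mnt/mounts
     fuser -v /mnt/mounts

     # First few lines of the contents, to see whose output format it matches
     head -n 20 /mnt/mounts
     ```
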
  3. Receiving the following error: Fix Common Problems - homegrown: 03-02-2024 04:40 AM Errors have been found with your server (homegrown). Investigate at Settings / User Utilities / Fix Common Problems. FCP states: "File mounts present within /mnt". I can't figure out what is causing this issue. It may be related to how I have Google Drive mounted with SMB? There is a "mounts" file in /mnt, but I'm not sure if that's expected to be there. Or maybe it is related to the /mounts folder? Another possibility is that I need to exclude /mnt from the Recycle Bin plugin? Not sure. Diagnostics attached. Any help is appreciated. Thanks in advance. homegrown-diagnostics-20240205-1957.zip
  4. I am having an issue with the Tailscale Docker. All of my containers are accessible through IP & MagicDNS; however, my Unraid WebUI does not resolve via either the Tailscale IP or MagicDNS. This issue also extends to SMB -- I am unable to access files on the server through Tailscale.
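
     Is the right workaround here to advertise the LAN subnet from the container so the host itself becomes reachable over the tailnet? Roughly something like this, I assume (the container name and subnet are just examples from my side -- adjust both):

     ```bash
     # Advertise the LAN subnet from inside the Tailscale container so the
     # Unraid host's own LAN IP is reachable from other tailnet devices.
     # "tailscale" = container name, 192.168.1.0/24 = LAN subnet (both assumptions).
     docker exec tailscale tailscale up --advertise-routes=192.168.1.0/24
     # Note: tailscale may ask you to re-specify previously set flags here.

     # Confirm the route is listed; it still needs to be approved for this
     # machine in the Tailscale admin console.
     docker exec tailscale tailscale status
     ```
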
  5. Yes, my cache failed upon boot after upgrading to 6.12. I replaced the drive and rolled back to 6.11.5
  6. Thanks for the response @petchav! Unfortunately, typing the VM name within the CLI wrapped in quotes does not work. Here's an example of the script not working with a VM name that has spaces. Here it is working perfectly for a VM name that does not have any spaces. And here it is failing when wrapping the VM name in quotes. The error message is simply `!!! Backup file not found !!!`. I think there is another line or two in the script where the path needs to be wrapped in quotes, but I can't figure out where.
  7. Shoot, I'm sorry Jorge -- I missed this. I was getting pretty frustrated and ultimately downgraded to 6.11.5 for now. Everything is working as intended again. Apologies that I didn't stick it out to test out why nginx was not starting.
  8. Yes, sorry for not being more clear! I am talking about the restoration script. It works great for VMs with no spaces in the name and I'm sure it can be fixed to work with VMs that do have spaces in their names, but it's not something I've been able to figure out with my limited knowledge.
  9. Can anyone help edit @petchav's script to work with VM names that include spaces? I don't know bash scripting, but I have been trying to figure it out for the last hour. The most helpful change I made was to line 87, `mkdir /mnt/user/domains/$VM_NAME`, which I edited to `mkdir "/mnt/user/domains/$VM_NAME"`. That got me past creating incorrect directories: now if my VM name is "Windows 11", it creates the correct directory in the domains folder. However, the script still fails after that with `!!! Backup file not found !!!`. I've tried adding quotes in various places on lines 102, 112 & 120, but the script still fails at the same point. I've also tried adding quotes in various places on lines 89, 95, 110, 116 & 117, which are the lines that define `BACKUP_FILE`, but again, no progress. I figure someone who knows scripting and the correct syntax could fix the script easily.
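
     To illustrate the quoting problem, here's a minimal standalone example of the pattern (not the actual backup script -- the paths and variable names below are just mine for illustration):

     ```bash
     #!/bin/bash
     # Why unquoted variables break on VM names that contain spaces.

     VM_NAME="Windows 11"

     # Unquoted: word-splitting makes this two arguments, so bash would try to
     # create /mnt/user/domains/Windows and ./11
     #   mkdir -p /mnt/user/domains/$VM_NAME

     # Quoted: one argument, one directory with a space in its name
     mkdir -p "/mnt/user/domains/$VM_NAME"

     # The same rule applies to every later use of the name in a path --
     # each expansion needs quotes, including where the backup file is defined
     # and where it is tested and copied:
     BACKUP_FILE="/mnt/user/backups/${VM_NAME}.img"
     if [ -f "$BACKUP_FILE" ]; then
         cp "$BACKUP_FILE" "/mnt/user/domains/$VM_NAME/vdisk1.img"
     else
         echo "!!! Backup file not found !!!"
     fi
     ```
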
  10. Perhaps related, but there is no network.cfg file on my flash drive in the config folder. I only have a network-extra.cfg file. I'm not sure how that would disappear, but maybe it is part of my issue? Edit: Upon further research, I don't believe this is related to the issue. It seems if you don't make changes to Network settings, there is no network.cfg and the system utilizes default settings.
  11. This sounds similar to an issue I am facing right now; however, I'm on 6.12.1.
  12. Diagnostics attached. Not sure what is up. I had a failed cache drive, shutdown the system to replace it, started the system again and got to the WebUI. Assigned the new cache drive and tried to start the array. Nothing was happening -- the start button was unresponsive. So I rebooted. Now, I cannot access the WebUI through localhost on the server itself nor through other devices on my local network. I also tried GUI safe mode, but still no WebGUI. The server seems to boot fine and I do have SSH access. Any help is greatly appreciated. tower-diagnostics-20230624-2100.zip
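
      Since SSH still works, I can run checks from the console if someone tells me what to look for -- something along these lines, I assume (rc.nginx being the script that manages the WebUI's web server):

      ```bash
      # Is nginx (the WebUI web server) running at all?
      ps aux | grep -v grep | grep nginx

      # Try restarting the WebUI's web server
      /etc/rc.d/rc.nginx restart

      # Watch for nginx-related errors in the syslog
      tail -n 100 /var/log/syslog | grep -i nginx

      # Generate a fresh diagnostics zip from the CLI
      diagnostics
      ```
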
  13. Understood and thank you. My new SSD will be here tomorrow and I'll follow this process.
  14. Thanks. Is there a guide to follow to rebuild cache after a failed disk? I didn't find this particular situation in the documentation. After replacing the cache drive with a new one, what's next? Reinstall all Docker containers and then copy over the appdata that I pulled off the failing drive?
  15. I am on 6.12.1. I didn't realize the CA Backup plugin was deprecated. My cache drive is corrupt / failed (Unmountable: Unsupported or no file system). I only have backups from CA Backup plugin (from 6.11.5), not from your new plugin. I understand from your pinned post that the CA backups cannot be restored with your new Appdata Backup plugin. Is there a workaround? I was also able to mount the failing cache drive to /temp and copy off all of the contents. Perhaps that is useful in rebuilding a new cache drive?
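
      The workaround I'm hoping for is basically a manual restore -- as far as I can tell the old plugin's backups are tar archives, so something like this should put the appdata back once the new pool is in place (the archive path/name and the cache mount point are guesses from my setup, adjust both):

      ```bash
      # Manual restore of an old CA Backup archive onto the new cache pool.
      # Both paths below are assumptions -- adjust to the actual backup
      # location and pool name. Stop the Docker service first (Settings -> Docker).
      tar -xvf "/mnt/user/backups/appdata/CA_backup.tar.gz" -C /mnt/cache-docker/appdata/

      # Quick sanity check that the per-container folders came back
      ls /mnt/cache-docker/appdata/
      ```
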
  16. Ok I will look into it. ddrescue is new to me. Thank you. I found this post from you in the FAQ and was able to mount the drive to /temp and copy off all of the contents. Is that useful in rebuilding a new cache drive?
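
      For anyone else who ends up here, the general shape of that process from the CLI looks like this -- device names are placeholders, so double-check them on the Main tab before running anything, especially ddrescue, which writes to the target disk:

      ```bash
      # Option 1: clone the failing SSD to the replacement with ddrescue
      # (sdX = failing drive, sdY = new drive -- placeholders, verify yours!)
      ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue.map

      # Option 2: if the filesystem still mounts read-only, copy the data off instead
      mkdir -p /temp
      mount -o ro /dev/sdX1 /temp        # btrfs also accepts usebackuproot/degraded
      rsync -avh /temp/ /mnt/disk1/cache-rescue/
      umount /temp
      ```
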
  17. Thanks for the reply, Jorge. I'll replace the Cache-docker (sdd) drive. What is the best process to replace & rebuild the drive since it is unmountable? I have appdata & VM backups from the CA plugin, but I'm only now realizing that the plugin is deprecated in 6.12. I didn't know that, and since I've been on 6.12 / 6.12.1 for about a week, my last appdata & VM backups are from June 15th. If I install the new Appdata Backup plugin, will it be able to restore my backups from the previous plugin? Assuming so, would I just shut down the array, replace the Cache-docker drive, start the array, assign the new disk as Cache-docker, and then restore the appdata & VM backups?
  18. Diagnostics attached. I had an issue on 6.12 where my Docker containers would not update -- it was throwing an error saying the container name already existed. Then I noticed some of my containers were not working and all of the database containers were stopped and would not start. I figured it may be a bug in the new stable version, so I upgraded to 6.12.1. Upon starting the array, the Docker service would not start. Then I saw on the Main tab that my Cache-docker pool is showing as "Unmountable: Unsupported or no file system". Any help is much appreciated! I saw a similar thread here, but couldn't get the btrfs rescue command figured out. homegrown-diagnostics-20230622-2105.zip
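
      For reference, the btrfs commands I keep seeing in those threads look roughly like this -- posting them as I understand them, not as something I've gotten working (sdX1 stands in for the unmountable pool device):

      ```bash
      # Read-only check of the filesystem (doesn't modify anything)
      btrfs check --readonly /dev/sdX1

      # Try mounting read-only with the backup tree roots before anything drastic
      mkdir -p /temp
      mount -o ro,usebackuproot /dev/sdX1 /temp

      # If it won't mount at all, btrfs restore copies files out of the broken
      # filesystem to another location (the source stays read-only)
      btrfs restore -v /dev/sdX1 /mnt/disk1/cache-restore/
      ```
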
  19. Thanks for the reply, EDACerton. I also saw your PM. I'm not sure what's going on -- everything I Google essentially says it is a bug with NetworkManager / ConnectivityCheck and doesn't affect the operation of Tailscale. However, it is annoying and I don't remember having this issue when I was using the Tailscale Docker Container. I'm not sure if the research I've done is related to the syslog being spammed with the above, but it's all I could find searching around.

      https://github.com/tailscale/tailscale/issues/5175
      https://forum.tailscale.com/t/ratelimit-format-open-conn-track-timeout-opening-v-no-associated-peer-node/1456/2
      https://forum.tailscale.com/t/open-conn-track-timeout/2231

      FWIW, Unraid is set up as an exit node in Tailscale. I've also tested with Accept Routes on & off as well as Accept DNS on & off. The logs still get spammed with the same.
  20. My logs are littered with...

      Jun 14 15:54:24 homegrown tailscaled: 2023/06/14 15:54:24 open-conn-track: timeout opening (TCP 100.71.223.5:51675 => 172.64.96.12:443); no associated peer node
      Jun 14 15:54:27 homegrown tailscaled: 2023/06/14 15:54:27 open-conn-track: timeout opening (TCP 100.71.223.5:51675 => 172.64.96.12:443); no associated peer node
      Jun 14 15:54:37 homegrown tailscaled: 2023/06/14 15:54:37 open-conn-track: timeout opening (TCP 100.71.223.5:47567 => 45.154.253.8:80); no associated peer node
      Jun 14 15:54:39 homegrown tailscaled: 2023/06/14 15:54:39 open-conn-track: timeout opening (TCP 100.71.223.5:51675 => 172.64.96.12:443); no associated peer node
      Jun 14 15:54:40 homegrown tailscaled: 2023/06/14 15:54:40 open-conn-track: timeout opening (TCP 100.71.223.5:47567 => 45.154.253.8:80); no associated peer node
      Jun 14 15:54:47 homegrown tailscaled: 2023/06/14 15:54:47 open-conn-track: timeout opening (TCP 100.71.223.5:42959 => 45.154.253.8:80); no associated peer node
      Jun 14 15:54:47 homegrown tailscaled: 2023/06/14 15:54:47 open-conn-track: timeout opening (TCP 100.71.223.5:54857 => 172.64.163.13:443); no associated peer node
      Jun 14 15:54:50 homegrown tailscaled: 2023/06/14 15:54:50 open-conn-track: timeout opening (TCP 100.71.223.5:54857 => 172.64.163.13:443); no associated peer node
      Jun 14 15:54:50 homegrown tailscaled: 2023/06/14 15:54:50 open-conn-track: timeout opening (TCP 100.71.223.5:42959 => 45.154.253.8:80); no associated peer node
      Jun 14 15:54:50 homegrown tailscaled: 2023/06/14 15:54:50 [RATELIMIT] format("open-conn-track: timeout opening %v; no associated peer node")
      Jun 14 15:55:02 homegrown tailscaled: 2023/06/14 15:55:02 [RATELIMIT] format("open-conn-track: timeout opening %v; no associated peer node") (1 dropped)
      Jun 14 15:55:02 homegrown tailscaled: 2023/06/14 15:55:02 open-conn-track: timeout opening (TCP 100.71.223.5:54857 => 172.64.163.13:443); no associated peer node
      Jun 14 15:55:02 homegrown tailscaled: 2023/06/14 15:55:02 open-conn-track: timeout opening (TCP 100.71.223.5:42959 => 45.154.253.8:80); no associated peer node

      Everything seems to be working fine, but these lines are constantly repeating in the logs. Any insight?
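
      If a count or the destination IPs would help, something like this pulls them out of the syslog (default syslog path on the server):

      ```bash
      # How many of those lines are in the current syslog
      grep -c "open-conn-track" /var/log/syslog

      # Which destinations the timeouts point at, most frequent first
      grep "open-conn-track" /var/log/syslog \
          | grep -oE '=> [0-9.]+:[0-9]+' \
          | sort | uniq -c | sort -rn
      ```
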
  21. Yeah Tailscale has been crushing my logs ever since I switched from the Docker Container to the Plugin. disk14 is new -- I just finished a clear yesterday. I think the buffer i/o errors during a clear are normal and I haven't had any issues since the clear finished, but I'll keep my eye on it.
  22. Diagnostics attached. I had an out of memory error at 4:40 AM last night. I'm trying to figure out the cause. I'm guessing it is some scheduled task, but need help narrowing it down. Any assistance is much appreciated! homegrown-diagnostics-20230611-0758.zip
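
      In case it helps narrow things down, these are the checks I can run from the CLI -- I'm assuming the scheduler entries live in /etc/cron.d/root on a stock install:

      ```bash
      # Find the OOM events and which process the kernel killed
      grep -iE "out of memory|oom-killer|killed process" /var/log/syslog

      # List the scheduled jobs so their times can be lined up against 4:40 AM
      cat /etc/cron.d/root
      crontab -l
      ```
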
  23. Diagnostics attached. I woke up this morning to an "Out Of Memory errors detected on your server" error. I've never had this error before. I did upgrade to 6.11.5 yesterday -- not sure if that is related. I believe the error occurred around the same time as my Mover is scheduled. Please let me know if you can identify the issue with my diagnostics. Thank you. homegrown-diagnostics-20221122-0813.zip
  24. I was having a heck of a time figuring out the correct Collabora subdomain proxy config for SWAG. I came across this site which helped me successfully get Nextcloud, SWAG & Collabora working. The config at that link worked for me -- I only changed line 37 to my Unraid IP. Sharing here in case it helps anyone else.

      My current `collabora.subdomain.conf`:

      ## Version 2022/09/08
      # make sure that your dns has a cname set for collabora and that your collabora container is named collabora

      server {
          listen 443 ssl;
          listen [::]:443 ssl;

          server_name collabora.*;

          include /config/nginx/ssl.conf;

          client_max_body_size 0;

          # enable for ldap auth (requires ldap-location.conf in the location block)
          #include /config/nginx/ldap-server.conf;

          # enable for Authelia (requires authelia-location.conf in the location block)
          #include /config/nginx/authelia-server.conf;

          location / {
              # enable the next two lines for http auth
              #auth_basic "Restricted";
              #auth_basic_user_file /config/nginx/.htpasswd;

              # enable for ldap auth (requires ldap-server.conf in the server block)
              #include /config/nginx/ldap-location.conf;

              # enable for Authelia (requires authelia-server.conf in the server block)
              #include /config/nginx/authelia-location.conf;

              include /config/nginx/proxy.conf;
              include /config/nginx/resolver.conf;
              set $upstream_app XX.XX.XX.XX;
              set $upstream_port 9980;
              set $upstream_proto https;
              proxy_pass $upstream_proto://$upstream_app:$upstream_port;
          }
      }

      Set your server IP in line 37.

      Unraid-specific info that is not included in the above link: edit the Collabora container in Unraid, remove "domain", and add a new variable as below.