
theGrok

Members · 101 posts
Everything posted by theGrok

  1. I am currently running 6.5. Everything had been going smoothly. Today I was in an RDP session on one of my VMs. All of a sudden I lost my session and my server became unreachable; I could not even ping it. I gently pushed the power button to initiate a shutdown. Nothing. I then unplugged the UPS to try to trigger a shutdown, and the server still would not shut down. Finally, I had no choice but to hard reset the machine. The server came back up and everything seemed fine. A parity check initiated. After a couple of minutes, I lost access to the GUI. I hooked up a monitor to the server and saw call trace messages; the server was frozen. I had to reboot again. I have no idea what is causing this behavior. I have now disabled Docker and VMs and will see if the server can stay online. Can anyone point me in the right direction as to what might be causing this? Is this bad hardware? Do I need to reinstall unRaid? I have attached diagnostics. medserver-diagnostics-20180408-2145.zip
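For reference, when triaging a lockup like this, a quick grep of the syslog from the diagnostics zip usually narrows it down to hardware trouble (machine-check events) versus filesystem trouble (btrfs errors). A minimal sketch only: the sample log lines below are invented for illustration, and the `unzip` pattern assumes the zip layout; point the grep at the real extracted syslog instead.

```shell
#!/bin/sh
# Illustration only: with the real diagnostics zip you would extract the syslog, e.g.
#   unzip -p medserver-diagnostics-20180408-2145.zip '*syslog*' > syslog.txt
# The sample lines below are made up to show what to grep for.
cat > syslog.txt <<'EOF'
Apr  8 21:10:01 Medserver kernel: Call Trace:
Apr  8 21:10:02 Medserver kernel: BTRFS error (device sdb1): bad tree block start
Apr  8 21:10:03 Medserver kernel: mce: [Hardware Error]: Machine check events logged
Apr  8 21:11:00 Medserver sshd[1234]: Accepted publickey for root
EOF

# Kernel-level red flags: call traces, btrfs errors, machine-check (hardware) events
grep -iE 'call trace|btrfs (error|warning)|hardware error' syslog.txt
```

Machine-check ("mce") lines point at hardware; btrfs errors without them point at the filesystem or the drive behind it.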
  2. I upgraded to 6.5 with no problems at first. Yesterday, my Windows VM started freezing. I tried to shut it down and could not, so I did a force stop and restarted my server. When it came back, my cache drive showed Unmountable: No File System. I had backups of mostly everything, so I formatted the cache and restored my appdata, libvirt, and vdisks. Everything was back up and running smoothly. This morning, while I was updating the Windows VM (the vdisk backup was quite a bit older), it froze again. I checked the log and saw call traces and btrfs errors. Fix Common Problems also reports: Mar 16 05:53:34 Medserver root: Fix Common Problems: Error: Unable to write to cache Mar 16 05:53:34 Medserver root: Fix Common Problems: Error: Unable to write to Docker Image I do not know whether it is just a coincidence that this happened after the upgrade or whether my SSD is going bad. SMART seems OK. Now that it has happened a second time, I thought I would post here. The first time I had the problem, I posted here; my original diagnostics are attached in that thread. I have also attached my new diagnostics here, from after I formatted the cache drive and restored. Thank you for any assistance you can provide. medserver-diagnostics-20180316-0533.zip
  3. Thank you so much for all of your help. Much appreciated.
  4. Running btrfs restore -v /dev/sdX1 /mnt/disk2/restore seems to be working to restore some data. The first method returned "could not read superblock". Is there any chance that something in the logs shows it was the hardware that went bad? Should I be replacing this SSD?
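For anyone else hitting "could not read superblock" on a btrfs cache, the rough order of escalation looks like this. A sketch only: /dev/sdX1 and the target paths are placeholders, the restore target must be a different disk than the one being recovered, and `btrfs check --repair` is deliberately left out because it can make things worse.

```shell
# 1. Non-destructive first: try a read-only mount using a backup tree root
#    (older kernels spell this "-o recovery")
mount -o ro,usebackuproot /dev/sdX1 /mnt/recovery

# 2. If mounting fails entirely, pull files off the unmounted device;
#    -v lists files as they are recovered, -i ignores errors and keeps going
btrfs restore -vi /dev/sdX1 /mnt/disk2/restore

# 3. Read-only consistency check, useful as evidence before replacing the drive
btrfs check --readonly /dev/sdX1
```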
  5. Thanks, I'll try the recovery process. If that fails, I did find an older backup of my vdisk. Better than nothing, I suppose. I would format, restore from CA Backup/Restore, copy over the old vdisk, and reboot?
  6. My vdisks are on /mnt/user/vdisks/, but when I SSH in and look, I do not see this directory there, so I assume it is on the cache drive. I do not have a backup of the vdisks, unfortunately. I guess I would have to try the recovery link above?
  7. Thanks for the help. Can you please specify what you mean by "all cache data"? The plugin backs up my appdata and my libvirt image. Is this sufficient? If I format and restore this, will everything return to normal?
  8. My Windows VM started acting weird. I tried to shut it down and had trouble doing so from the UI, so I did a force shutdown. When I tried to restart the VM, it gave me an execution error about a read-only file system. I then rebooted my system. When it came back up, my cache drive shows as unmountable, no file system. I have no idea how to recover from this. I have been regularly backing up my libvirt and appdata using the plugin. Can anyone help? Diagnostics attached. medserver-diagnostics-20180315-1307.zip
  9. Ok, thanks for the clarification. So as of now this does not seem possible. I will revert everything back to bridge as it all works that way. As networking is not my forte, I don't think I would be able to find a solution.
  10. It does look like that. But no cigar. Even when I type the ip manually, there is no response. Anything I could attach that might help figure it out?
  11. I have solved a few of my problems. I can get Sonarr and NZBGet etc. to talk to each other. The only problem I seem to have left is with binhex's rtorrentvpn container. When I give it its own IP using br0, I cannot get into its webui. If I put it back to bridge, I can get into the webui. I don't see anything strange in the container logs (nginx is started, etc.). I left a message in the container's support thread. I even installed a fresh copy in a new appdata folder just in case. Same issue.
  12. Ok, I am going to take a clean look at all of the container configs and report back. Thank you
  13. Seems like it. If I disable Docker in unRaid, all of those entries go away in the routing table.
  14. It is in fact a typo, sorry about that. Everything is 192.168.2.x.
  15. Some of my containers don't have the ping command to test this. With the ones that do, I get no reply. Do you see anything that should not be there in the routing table I posted? What are all of those as0tX entries?
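A note for anyone else testing this: even without ping in the image, you can check TCP reachability from inside a container with bash's built-in /dev/tcp. A sketch with example names; it assumes the image has bash (busybox-only images would need nc or wget instead).

```shell
# Exit 0 / "open" means the TCP connect succeeded; "closed" means no route,
# filtered, or nothing listening. Container name, IP, and port are examples.
docker exec sonarr bash -c \
  'timeout 2 bash -c "</dev/tcp/192.168.2.201/6789" && echo open || echo closed'
```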
  16. I recently upgraded to 6.4. One of the features I wanted to try was assigning containers their own individual IP addresses. Once I started moving containers to their own IPs, they could no longer talk to each other. 6.3.5: unRaid IP 192.168.2.34, Sonarr 192.168.3.34:8989, NZBGet 192.168.2.34:6789. 6.4: unRaid IP 192.168.2.34, Sonarr 192.168.3.200:8989, NZBGet 192.168.2.201:6789. In 6.3.5, Sonarr could talk to NZBGet; when I switch the containers to individual IPs, they no longer can. I did update the config in Sonarr with NZBGet's new IP. I also have another container, rtorrent-VPN, whose webui I cannot access when it is on its own IP. The container starts without problems and runs, but I don't have access to its webui. I went to the routing table in network settings and noticed a bunch of entries that I have no idea what they do, namely as0t0 ... 15. Some friends of mine run the same containers I do and have none of these entries. Could it be some kind of routing issue? I am completely stumped. If I switch all of the containers back to the unRaid IP, everything starts to work again. Any suggestions are welcome. Thanks.
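For context on why this happens: unRaid's custom br0 networks are macvlan-based, and macvlan intentionally drops traffic between the host and containers on the custom network. Since bridge-mode containers share the host's IP, that also cuts them off from containers with their own IPs; containers that are all on the custom network can normally still reach each other. A commonly cited host-side workaround is a macvlan "shim" interface. This is a sketch only, with example interface names and addresses, and it does not persist across reboots.

```shell
# Host-side macvlan "shim" so the host (and bridge-mode containers)
# can reach containers with their own IPs. Names/addresses are examples.
ip link add macvlan-shim link br0 type macvlan mode bridge
ip addr add 192.168.2.250/32 dev macvlan-shim
ip link set macvlan-shim up

# Route each container IP through the shim
ip route add 192.168.2.200/32 dev macvlan-shim
ip route add 192.168.2.201/32 dev macvlan-shim
```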
  17. Hi, I recently upgraded to 6.4 and was interested in running this with its own IP. I set the network type to br0 and gave the container its own IP. I start the container, but I cannot get access to the webui. It does seem to be running fine, as Privoxy appears to be working. It seems like it is just the UI that I cannot get into. I checked the supervisor log and I don't see any errors; it says nginx is running, etc. Any ideas? I have been able to move other containers to their own IPs (Plex, Tautulli, Sonarr, Radarr). I also tried this with Deluge and have the same issue: I cannot get into the webui, though I can connect to the Deluge daemon with the standalone app. My problem seems to be with accessing both webUIs. As soon as I revert from br0 back to bridge, I can access them.
  18. OK, thank you for your help and for your wonderful work.
  19. Thanks for the clarification. One more question, if you don't mind: am I supposed to be forwarding the tunnel port on my router, i.e. the port that is in the OpenVPN conf? I ask because I also run an openvpn-as container which uses 1194, and I have to forward that port for it to work. Thanks.
  20. Hello, I have a port forwarding question. I use PIA. When using PIA, how do I know which port is being forwarded? Do I have to change anything inside the rtorrent config with respect to this port? Sometimes I get a yellow triangle with an exclamation point in the port status check in ruTorrent. I am connecting to an endpoint that allows forwarding. Sometimes everything seems to work fine; other times I cannot seem to connect to all of the seeds and I get slow speeds. Thank you for this great container.
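For context on the port question: PIA assigns the forwarded port when the tunnel comes up (the container requests it and prints it in its log), and it can change between connections, so rtorrent has to listen on exactly that port with randomization off. The .rtorrent.rc lines involved look like this; 12345 is a placeholder for whatever port was assigned, and the container normally manages these for you.

```
# .rtorrent.rc fragment (modern syntax); 12345 stands in for the assigned port
network.port_range.set = 12345-12345
network.port_random.set = no
```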
  21. Thanks that worked. Big facepalm on my part. I didn't think of it because it was necessary when I set up sonarr/radarr in a similar way.
  22. I've used this exact configuration and it is working for me from the external world using my domain, etc. However, when configured this way, I can no longer access the webui from my local LAN; I end up getting a 404 error. Any ideas?
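One way to narrow down a LAN-side 404 like this: hit the proxy's LAN IP directly while forcing the external hostname in the Host header (the IP and hostname below are examples). If the forced-Host request works while the plain one 404s, the proxy only matches the external name, and a local DNS override pointing the domain at the LAN IP (or hairpin NAT on the router) restores LAN access.

```shell
# Plain request by LAN IP: hits the proxy's default server block (404 here)
curl -sI http://192.168.2.34/ | head -n 1

# Same request, but presenting the external hostname
curl -sI -H 'Host: mydomain.example' http://192.168.2.34/ | head -n 1
```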
  23. Since the most recent update I seem to be getting this message in my IRC client status window: "ZNC is presently running in DEBUG mode. Sensitive data during your current session may be exposed to the host." Unless I am mistaken, it seems that the debug option is set at compile time. Is there a way to disable this?
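A possible lead: besides the compile-time debug build, znc also has a runtime -D/--debug flag that keeps it in the foreground and echoes all traffic, and container launch scripts sometimes use it just to stay in the foreground; -f/--foreground does the same without the debug output. A sketch for checking; the container name and data directory are examples.

```shell
# See which flags the running znc was started with
docker exec znc ps ax | grep '[z]nc'

# If it shows "znc --debug" (or -D), the launch script should use
# "--foreground" (-f) instead, which stays in the foreground without
# echoing traffic, e.g.:
#   znc --foreground --datadir=/znc-data
```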
  24. This container seems to be using 1.7.x nightly builds. Is there a way to make it use the stable version instead? I don't see a variable in the template that lets me do so. Thanks!