AnnabellaRenee87

Everything posted by AnnabellaRenee87

  1. I'm not sure if this is a bug per se, but the past five or so versions of Unraid have had an issue where, if I go to the Docker screen, the site gets super laggy and sometimes even soft-locks Chrome on my Galaxy S10+. I know I probably shouldn't be doing things like that on my phone, but I'm the type that if an idea hits me, I might just try it without getting out of bed lol. Sorry if this is the wrong section; if it is, can a mod move it for me? Sent from my SM-G975U using Tapatalk
  2. Got an odd problem. I can log in with the local IP address:port, but if I try to log in with my WAN address (using a reverse proxy and a subdomain), it just accepts the password, loads for a second, and brings up the password screen again. I've made sure the daemon's "Allow Remote Connections" is enabled and triple-checked it in core.conf. I want to add that this had been working for years. I tried it on several browsers and thought it was a DNS issue, so I tried another PC because I was too lazy to flush the DNS on my main computer. I went to Cloudflare and tried toggling proxying off, made sure the IP address was going directly to my WAN address, same thing. Turned proxying back on because I'm about 99.9999999999% sure that's not the issue. I mean, the interface loads up, but it just won't let me log in when going through my reverse proxy. Other containers let me log in, like your "binhex-sabnzbdvpn" Docker. SAB works fine. Any ideas, or know of anything else I can try?
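     For reference, the reverse-proxy setup I'm describing looks roughly like this minimal nginx sketch (the subdomain, upstream IP, and port here are placeholders, not my real values; 8112 is just the Deluge web UI's default port):

     ```nginx
     # Minimal sketch of a subdomain reverse proxy in front of the Deluge web UI.
     # deluge.example.com and 10.0.0.2:8112 are placeholders.
     server {
         listen 443 ssl;
         server_name deluge.example.com;

         location / {
             proxy_pass http://10.0.0.2:8112;
             # Pass the original host and client details through to the backend.
             proxy_set_header Host $host;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Forwarded-Proto $scheme;
         }
     }
     ```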
  3. Not working for me for some reason; I'm on pfSense with UPnP turned on for the server's IP. Here's the log it generated:

     Command: timeout 12 stdbuf -o0 upnpc -m br0 -l 2>&1
     Status: 0
     Results:
     upnpc : miniupnpc library test client, version 2.1.
     (c) 2005-2018 Thomas Bernard.
     Go to http://miniupnp.free.fr/ or https://miniupnp.tuxfamily.org/ for more information.
     List of UPNP devices found on the network :
      desc: http://10.0.0.2:8096/dlna/9b902c51-b640-4805-9413-713cac1323ab/description.xml st: urn:schemas-upnp-org:device:MediaServer:1
      desc: http://10.0.0.2:8096/dlna/9b902c51-b640-4805-9413-713cac1323ab/description.xml st: uuid:9b902c51-b640-4805-9413-713cac1323ab
      desc: http://10.0.0.2:8096/dlna/9b902c51-b640-4805-9413-713cac1323ab/description.xml st: upnp:rootdevice
      desc: http://10.0.128.114:9080 st: upnp:rootdevice
      desc: http://10.0.128.123:9080 st: upnp:rootdevice
      desc: http://10.0.1.11:80/plugin/discovery/discovery.xml st: upnp:rootdevice
      desc: http://10.0.1.10:80/plugin/discovery/discovery.xml st: upnp:rootdevice
      desc: http://192.168.122.1:34400/device.xml st: upnp:rootdevice
     UPnP device found. Is it an IGD ? : http://10.0.0.2:8096/
     Trying to continue anyway
     Local LAN ip address : unset
     GetConnectionTypeInfo failed.
     GetStatusInfo failed.
     GetLinkLayerMaxBitRates failed.
     GetExternalIPAddress failed. (errorcode=-3)
      i protocol exPort->inAddr:inPort description remoteHost leaseTime
     GetGenericPortMappingEntry() returned -3 (UnknownError)
     Determination:
     -> gateway is [10.0.0.1]
     -> No IGD device found
     -> UPnP not available on this network.
  4. Updated to 6.9.2 as well; my HP-branded Seagate drives aren't spinning down either, but the Hitachi drives are. Here are the logs in case you all need them. server-diagnostics-20210408-1550.zip
  5. Will I need internet access to do that? I won't have internet if I can't do the RMRR patch for my pfSense VM.
  6. I'm sorry to ask a dumb question, but is the Unraid-Kernel-Helper not on CA right now? I'm still on 6.9 RC2.
  7. @doron you should add a donation link to your signature; I totally wanna buy you a coffee or beer for this plugin. My system uses a mixture of SAS and SATA drives, and with how inexpensive SAS drives are on the retired-server-parts market, I've got more SAS than SATA; I just hated how they never spun down. You have helped me save money on electricity (I can see that I'm saving about 100 watts of power on average now), so the least I can do is throw you a few dollars ❤️
  8. I'm gonna take another crack at it in a few days. When I was working on it last, I ended up catching COVID-19. I was dead for about two weeks, and when I got sorta better, work exploded, cats and dogs..... Sorry.
  9. So I've been crazy sick the last week; I was diagnosed with COVID-19. I just decided to try nuking all the folders in the plugins folder except the theme folder, using my backed-up appdata folder, and now it's working. I could have sworn I tested that yesterday. I blame the fever I've been having on and off yesterday and today. I'm sorry, carry on! Ignore the ramblings of the sick girl!!!! lol
  10. The plugin isn't there. If I go through the appdata share and look in the same folder, I can literally take everything out of it, and it will still put that file back at /usr/share/webapps/rutorrent/plugins/extsearch/engines/. I even tried removing the Docker container and its image and going stock; it will launch the first time just fine completely stock (yes, I removed the ~/appdata/binhex-rtorrentvpn directory), and it will do it on the second start.
  11. So, a LOT of debugging later, I figured out what's giving the error. If I go into the Docker container via the shell and go to /usr/share/webapps/rutorrent/plugins/extsearch/engines/, I see that for some reason I have both JPopsuki.php and jpopsuki.php. If I manually remove the lowercase one (rm jpopsuki.php) and go to the site for my rTorrent and hit F5, the interface loads up normally. If I restart the Docker container, it will automatically re-create the file and I will be in the same situation again. I manually created the uppercase version from https://github.com/Novik/ruTorrent/blob/master/plugins/extsearch/engines/JPopsuki.php Is there a way to keep it from auto-creating that on startup?
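      In the meantime, here's the workaround sketch I'm considering running after the container starts (the function name and the directory argument are my own; it just assumes the engines path above and deletes the lowercase duplicate when both files exist):

      ```shell
      #!/bin/sh
      # Workaround sketch: if both JPopsuki.php and its lowercase duplicate
      # jpopsuki.php exist in the given directory, delete the lowercase one.
      # With no argument, it defaults to the engines path inside the container.
      remove_dup_engine() {
          dir="${1:-/usr/share/webapps/rutorrent/plugins/extsearch/engines}"
          if [ -f "$dir/JPopsuki.php" ] && [ -f "$dir/jpopsuki.php" ]; then
              rm -f "$dir/jpopsuki.php"
              echo "removed duplicate jpopsuki.php"
          else
              echo "no duplicate found"
          fi
      }
      ```

      I'd call it from the host with something like `docker exec binhex-rtorrentvpn sh /config/remove_dup.sh` in a post-start user script, so the fix reapplies itself on every restart.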
  12. Any idea why I would be getting a 500 server error with these? supervisord.log
  13. How did you get it patched? I've put my efforts on hold right now; I was just confirmed to have COVID-19. I've been too sick to look at it lol.
  14. For some reason I'm not able to get into the WebUI. I'm getting this in the logs.
  15. Don't get excited; just an update on 6.9. I'm currently looking, line by line, through the kernel's source code trying to figure out how to patch it. I'm trying to find the information mentioned by rafalz on the Proxmox forums at https://forum.proxmox.com/threads/compile-proxmox-ve-with-patched-intel-iommu-driver-to-remove-rmrr-check.36374/post-313467 who has it working on a 5.x kernel. If you want to help, please DM me. An alternative is to downgrade your BIOS to something sub-2009ish, but I don't feel that's a very good workaround.
  16. Can someone help me figure out what's filling my rootfs? server-diagnostics-20200623-1712.zip
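      For anyone else hunting the same thing, this is the generic starting point I know of (the function name is mine; it's not Unraid-specific): list the biggest entries directly under a directory while staying on one filesystem, so array and cache mounts don't get counted.

      ```shell
      #!/bin/sh
      # Generic sketch: show the largest entries one level under a directory,
      # biggest first. -x stays on a single filesystem, so when run against /
      # it only counts what actually lives on rootfs, not other mounts.
      # Sizes are reported in KiB (-k).
      biggest_dirs() {
          du -xk --max-depth=1 "${1:-/}" 2>/dev/null | sort -rn | head -15
      }
      ```

      Running `biggest_dirs /` and drilling into whichever directory tops the list usually narrows down what's eating the space.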
  17. Home Assistant renamed the tab (hass.io) to "Supervisor". As for a reverse proxy, I had to edit it to point to the IP instead of the container name to get it to work. Any add-ons you install with Home Assistant, you will have to proxy them too, as they have different port mappings.
  18. Is there a way to have the hassio_supervisor just start up normally without having to edit the template each time?
  19. Is your NIC a card or integrated on the board? You may need to get your logs and ask @limetech to add the NIC's drivers to the next build. Our patch only fixes a very specific issue with virtualization; nothing driver-wise is modified from stock.
  20. Did you replace the NICs in it? My DL370 G6 runs fine after Unraid added my NICs' drivers for me like two years ago. Just remember your system will also have an iLO Ethernet port on the back, which isn't for general data transfer.
  21. Can you get us the system logs, please? Go to Tools, then Diagnostics, and attach that file. You can anonymize your data if you want.
  22. Remember, the team members have their own lives; they have day jobs, families, and friends, and heck, some are not even in development or IT in their professional lives. We're all pretty much hobbyists here, and this is being worked on out of the goodness of their hearts. I'm OCD when it comes to updating things, but you can't just expect everyone to drop everything to patch things when a new release hits. It will be ready when it's done. Thank you guys for all the hard work you put into this project.