drsparks68

Everything posted by drsparks68

  1. Is there any indication that this project is still active? Looks like @testdasi hasn't logged in since October.
  2. I spent two days trying to figure out why none of my Plex content would play until I realized this was the cause. The change could have been announced a little more prominently.
  3. Trying out PhotoPrism and attempting to import photos but I don't have an "import" button in the library tab...just Index, Copy and Logs. I have the import folder mapped to a folder on my array and have populated it with sample images, and have even tried running Docker Safe New Perms, but still no import button. What am I missing?
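(A note for anyone hitting the same missing-button issue: PhotoPrism only shows Import when the container itself sees a usable import folder, i.e. when a host folder is mapped to the in-container import path rather than only existing on the array. A sketch as a docker run fragment; the share names and container name are assumptions, so check them against your template.)

```shell
# Hypothetical mapping - adjust share names to your setup. The key part is
# that /photoprism/import (the path PhotoPrism looks at internally) is
# mapped to a writable host folder.
docker run -d --name=photoprism \
  -p 2342:2342 \
  -v /mnt/user/photos/originals:/photoprism/originals \
  -v /mnt/user/photos/import:/photoprism/import \
  photoprism/photoprism
```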
  4. Now it seems that Fail2Ban isn't working at all...or at least none of the default jails flagged this traffic and banned the source IP (and there were over 600 lines of it in the NGINX access.log):
  5. Hello all, I am trying to configure f2b for permanent bans. I have started the container with "--cap-add=NET_ADMIN" and have set the bantime to "-1" for each jail (as noted under "Jail Options" at https://www.fail2ban.org/wiki/index.php/MANUAL_0_8). I am able to see IPs being detected:
     2020-03-30 22:04:20,572 fail2ban.filter [392]: INFO [nginx-botsearch] Found 148.72.207.250 - 2020-03-30 22:04:20
     2020-03-31 06:46:10,028 fail2ban.filter [386]: INFO [nginx-botsearch] Found 34.76.172.157 - 2020-03-31 06:46:09
     2020-03-31 09:29:25,455 fail2ban.filter [386]: INFO [nginx-botsearch] Found 128.199.254.23 - 2020-03-31 09:29:25
     2020-03-31 11:38:48,885 fail2ban.filter [386]: INFO [nginx-botsearch] Found 103.5.150.16 - 2020-03-31 11:38:48
     But I'm not seeing those in the persistent DB (fail2ban.sqlite3). Curious if I'm missing something that is preventing this from working.
     Thanks in advance, D
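(For anyone hitting the same thing: a "Found <ip>" line only means the filter matched a log line; an entry lands in the bans table only once an IP is actually banned, i.e. after maxretry matches within findtime, and by default old entries are purged from the DB after about a day. A sketch of the relevant settings, assuming the container keeps its config under /config — the file locations are assumptions for this container layout.)

```shell
# bantime is a jail option (jail.local); the database purge interval is a
# fail2ban option (fail2ban.local).
cat >> /config/fail2ban/jail.local <<'EOF'
[DEFAULT]
bantime = -1
EOF

cat >> /config/fail2ban/fail2ban.local <<'EOF'
[Definition]
dbpurgeage = 31536000
EOF
# 31536000 s = 1 year; the default purge age is much shorter.

# After restarting, confirm what has actually been banned:
sqlite3 /config/fail2ban.sqlite3 'SELECT jail, ip FROM bans;'
```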
  6. It ended up being bad memory for me. Once I removed the bad stick the machine check events went away.
  7. I'm hitting this as well. I spent days looking at my internet connection and DNS to figure out what's going on. At least now I have an idea what the culprit is. Have you opened a bug on the Github page (https://github.com/linuxserver/docker-plex/issues)? That may be the best way to get a response.
  8. Hermy65, looking at the issue on the LS GitHub (https://github.com/linuxserver/docker-mylar/issues/33), there is a suggestion that running the following directly in the container will mitigate the issue, at least temporarily:
     pip uninstall requests
     pip install requests
     I just tried it and it allowed all of the pending comics to be successfully post-processed.
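(The same workaround can be run from the unRAID console without opening a shell inside the container; "mylar" is an assumed container name, so substitute whatever yours is called.)

```shell
docker exec -it mylar pip uninstall -y requests
docker exec -it mylar pip install requests
# Note: this change lives inside the container, so updating or rebuilding
# the container undoes it until the image itself is fixed.
```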
  9. Yeah, I'm running into this as well in addition to the cache folder (appdata\mylar\mylar\cache) filling up with a bunch of folders with names like "mylar_mP29Lt". It completely filled up my cache drive and caused a couple of other containers to stop working until I deleted them all. That was yesterday. Today the cache drive (500GB SSD) is half full again.
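(In the meantime, the leftover cache folders can be cleared out by hand. A sketch, assuming the appdata path mentioned above; verify the output of -print before deleting anything.)

```shell
# List stale mylar_* work folders older than a day:
find /mnt/user/appdata/mylar/mylar/cache -maxdepth 1 -type d \
     -name 'mylar_*' -mtime +1 -print
# Once the listed folders look right, rerun the same find command with
# "-exec rm -rf {} +" in place of "-print" to delete them.
```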
  10. I had no idea that setting was there. Got it working after I removed it. Thanks!
  11. Assuming that subfolder support would mean that an app like Ubooquity would be supported (its URL string is http://<IP>:2202/ubooquity/admin)?
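(If subfolder support does land, something along these lines is roughly what it would take. A sketch only: the IP/port and config file location are assumptions, and the block belongs inside the server { } section of the proxy's site config.)

```nginx
# Ubooquity already serves itself under /ubooquity/, so the location and
# the proxy_pass path can match one-to-one with no rewrite needed.
location /ubooquity/ {
    proxy_pass http://192.168.0.10:2202/ubooquity/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```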
  12. Anyone? I keep getting this message and Google doesn't seem to be any help whatsoever:
      "Your server has detected hardware errors. You should install mcelog via the NerdPack plugin, post your diagnostics and ask for assistance on the unRaid forums. The output of mcelog (if installed) has been logged."
  13. I've been getting machine check events and am looking to troubleshoot the issue. I have mcelog installed through Nerd Tools, but when I run 'mcelog', I get the following error:
      "mcelog: ERROR: AMD Processor family 21: mcelog does not support this processor. Please use the edac_mce_amd module instead. CPU is unsupported"
      I've searched for how to install the edac_mce_amd module but haven't found any good instructions yet. Does anyone know how to load it so I can troubleshoot my server issues? Many thanks
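(In case it helps anyone else searching: edac_mce_amd is a kernel module, not a package, so there is nothing to install — it just has to be loaded. Module names here are from mainline kernels and may differ by unRAID release.)

```shell
# Load the AMD MCE decoder and the memory-controller EDAC driver, then
# watch the kernel log for decoded errors.
modprobe edac_mce_amd
modprobe amd64_edac_mod   # named "amd64_edac" on some kernels
dmesg | grep -iE 'edac|mce'
```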
  14. So, I've been following this guide to set up MariaDB/Nextcloud/Let's Encrypt. Things were working well until I got to the point where I created a file to put into the LE appdata folder (/config/nginx/site-confs/nextcloud). Since then, I've been getting the following error in my LE docker log, over and over:
      nginx: [emerg] the size 10485760 of shared memory zone "SSL" conflicts with already declared size 52428800 in /config/nginx/site-confs/nextcloud:20
      When I set the LE container up, I set it to use a subdomain only (my Nextcloud instance) and it has created the certs for that URL. Otherwise, I can connect to Nextcloud, have set it up, and can log in, but I always get the invalid-cert notification. Is there a way to simply install the certs that LE has already obtained into the Nextcloud instance?
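(For the record, the two sizes in that error are 10m = 10485760 and 50m = 52428800 bytes: an ssl_session_cache zone named "SSL" is being declared twice with different sizes, once in the default ssl config and once in the nextcloud site-conf. One way to resolve it is to keep a single declaration and disable the other; the error message points at line 20 of the site-conf.)

```nginx
# In /config/nginx/site-confs/nextcloud, around line 20 (per the error) -
# leave the zone declared once, in the main ssl settings, and comment out
# the duplicate here:
# ssl_session_cache shared:SSL:10m;
```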
  15. @ppunraid, I never did. I ran into similar issues getting it connected. I ended up using the Splunk Lite container.
  16. I tried doing this once I had my syslog server up and it seemed to cause all of my dockers to become orphaned after they were automatically updated and restarted. I pulled this string out of the Extra Parameters field and they started up again.
  17. Am I missing something? I'm trying to install this docker through CA, but the template page I'm seeing doesn't look anything like the others in this thread. It's missing quite a bit of info. Yes, I've checked Advanced View.
  18. I thought my upgrade went fine but I just realized that my "Downloads" folder that SABnzbd uses is missing. It was on my cache disk. What's odd is that all of the Docker appdata folders are still there.
  19. Awesome write-up. Unfortunately it doesn't seem like this crappy C2000T router from CL will let me do port translation, so seems like I'm out of luck.
  20. I'm seeing the same behavior:
      root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="gazee" --net="bridge" -e TZ="America/Los_Angeles" -e HOST_OS="unRAID" -e "PUID"="99" -e "PGID"="100" -p 4242:4242/tcp -v "/mnt/user/appdata/gazee":"/config":rw -v "/mnt/user/Comics/":"/comics":rw -v "":"/mylar":rw linuxserver/gazee
      /usr/bin/docker: Error response from daemon: Invalid volume spec ":/mylar:rw": Invalid volume specification: ':/mylar:rw'. See '/usr/bin/docker run --help'.
      The command failed.
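(The failing piece is -v "":"/mylar":rw — the template's host path for /mylar was left blank, so Docker receives ':/mylar:rw'. Filling in the host path, or removing the mapping entirely if you don't use Mylar, clears the error. A sketch with an assumed path:)

```shell
# Same command with the blank volume filled in; /mnt/user/appdata/mylar is
# only an example - point it at your actual Mylar folder.
docker run -d --name=gazee --net=bridge \
  -e TZ="America/Los_Angeles" -e HOST_OS="unRAID" -e PUID=99 -e PGID=100 \
  -p 4242:4242/tcp \
  -v /mnt/user/appdata/gazee:/config:rw \
  -v /mnt/user/Comics:/comics:rw \
  -v /mnt/user/appdata/mylar:/mylar:rw \
  linuxserver/gazee
```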
  21. > Can you hit the webUI via the IP address (192.168.0.202)? Try setting static DNS (and gateway addresses) up. 8.8.8.8 / 8.8.4.4 and 192.168.0.1 for a gateway
      It's already set to static using that IP. WebGUI and shares aren't reachable; it seems like networking isn't working at all. I added 8.8.8.8/8.8.4.4 as DNS servers and everything is still unreachable, both ways.
      > I would delete the /flash/config/network.cfg on your flash drive, reboot, and setup your network from scratch.
      And I can set up networking via the console?
      > No, but unRaid will revert to defaults which should work. If it doesn't then you'll have to manually edit network.cfg and put in the dns / gateway addresses
      I've already added the dns & gw to the network.cfg; it didn't have any effect:
      # Generated settings:
      IFNAME[0]="eth0"
      DESCRIPTION[0]="Public VM Bridge"
      USE_DHCP[0]="no"
      IPADDR[0]="192.168.0.202"
      NETMASK[0]="255.255.255.0"
      GATEWAY="192.168.0.1"
      DHCP_KEEPRESOLV="yes"
      DNS_SERVER1="8.8.8.8"
      DNS_SERVER2="8.8.4.4"
      DNS_SERVER3=""
      MTU[0]=""
      SYSNICS="1"
      I removed it, rebooted and checked my router...it's not pulling an IP. If I run ifconfig from the console, eth0 doesn't have an IP.
      > Use the GUI boot option and set up your network.
      Yeah, after staring at the boot about 10 times, it finally hit me that I should boot into the GUI. I was able to get the network set up. It works if I set it to DHCP, and I can set a static IP, but if I try to use the old IP address (192.168.0.202), it loses connection again. Kinda weird. Now just dealing with Dockers that don't want to start.