dirrtyjoe

Members
  • Content Count
    202
  • Joined

  • Last visited

Community Reputation

1 Neutral

About dirrtyjoe

  • Rank
    Member

Converted

  • Gender
    Undisclosed


  1. Fix Common Problems has stated I have MCE error(s). My logs are attached. I believe it was due to inadvertently plugging in incompatible memory, which has since been fixed, but I wanted to post to get some feedback and make sure (see the MCE-check sketch after this list). Thanks in advance. juggernaut-diagnostics-20180321-1021 (1).zip
  2. How do you start the webUI on a different port? I was using 8008 and the webUI started on 80. (A port-mapping sketch follows this list.)
  3. Thanks, Johnnie.black. I found this: https://www.supermicro.com/support/faqs/faq.cfm?faq=17906 and, investigating my setup, I did have VT-d enabled. I've disabled it, and upon reboot the error appears to have gone away. I'll keep an eye on it, and if it persists... probably take your advice and ignore it :-) since I'm looking for a "reason" to need to upgrade, anyway.
  4. Hey everyone, Just upgraded to 6.3.5 and received the "call traces" error. Can someone take a look at the attached diagnostics to help me identify what may be the problem? Thanks! juggernaut-diagnostics-20170531-0844.zip
  5. Any plans to rebrand this to Ombi, since Plex asked the original project to remove its association with Plex? http://www.ombi.io/
  6. To explicitly respond to your message: is this related to the log file sizes (the reason I included the Userscript piece in my comment above)? I don't believe spants installs the cron automatically, so if you forgot that piece (or overlooked it), the logs may simply be getting too big. (A sample log-flush cron entry is sketched after this list.)
  7. I moved to diginc's repo as it is "directly" tied to the pi-hole development (https://hub.docker.com/r/diginc/pi-hole/). I took spants' template, changed the repository to 'diginc/pi-hole:alpine' and the Docker Hub URL to 'https://hub.docker.com/r/diginc/pi-hole/'. diginc added a parameter to meet the necessary changes that urged spants to create his container, namely disabling IPv6 support, so if you add a new container variable:
        Name: Key 4
        Key: IPv6
        Value: False
        Default Value: False
     you should be up and running as spants designed, but using the container that follows the pi-hole development. (A docker run sketch follows this list.)
  8. Does adding the .cron file to the plugin directory automatically install/run it? (My understanding of the mechanism is sketched after this list.)
  9. I took that approach. I like the dashboard and logging, so I'll keep it! Thanks! But... I'm receiving the following error when trying to add to the whitelist from the web interface (a possible fix is sketched after this list):
        ==> /var/log/nginx/error.log <==
        2016/08/30 20:02:04 [error] 239#239: *21 FastCGI sent in stderr: "PHP message: PHP Warning: error_log(/var/log/lighttpd/pihole_php.log): failed to open stream: No such file or directory in /var/www/html/admin/php/add.php on line 3" while reading response header from upstream, client: 192.168.1.206, server: , request: "POST /admin/php/add.php HTTP/1.1", upstream:
  10. Is it worth using this over your unRAID-hole container?
  11. As for how to approach correcting it, any suggestions? Wipe the cache drives, reformat, and start over?
  12. Adding some detail - the balance seems to be never-ending. I see:
        Jul 27 11:30:58 Juggernaut kernel: BTRFS info (device sdc1): found 1 extents
        Jul 27 11:30:59 Juggernaut kernel: BTRFS info (device sdc1): found 1 extents
        Jul 27 11:30:59 Juggernaut kernel: BTRFS info (device sdc1): found 1 extents
        Jul 27 11:31:00 Juggernaut kernel: BTRFS info (device sdc1): found 1 extents
        Jul 27 11:31:00 Juggernaut kernel: BTRFS info (device sdc1): found 1 extents
        Jul 27 11:31:00 Juggernaut kernel: BTRFS info (device sdc1): found 1 extents
     It goes on and on, several times per second. (Commands for checking or cancelling the balance are sketched after this list.)
  13. It appears my main cache drive is failing. It's an HDD where the other two are SSDs, so I had planned on removing it anyway. The instructions for removing a drive from a cache pool are numerous and each a bit different, but my goal was to do something like this (the underlying btrfs command is sketched below):
        1. Balance the cache
        2. Back up the cache drive
        3. Stop the array using the webUI
        4. Remove the drive from the cache pool
        5. Disconnect power/SATA from the removed drive
        6. Start the array
        7. Wait for the updated pool to rebalance, which will also delete the drive from the pool
     The initial balance process seems to be hung(?) at: 21 out
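
Re: item 1 (MCE errors): a minimal sketch for checking whether any machine check events remain in the kernel log after swapping the memory. The log paths and patterns are generic Linux assumptions, not anything unRAID-specific.

    # Search the kernel ring buffer and syslog for machine check events.
    dmesg | grep -iE 'mce|machine check'
    grep -iE 'mce|machine check' /var/log/syslog

If nothing shows up across a few days of uptime, the incompatible-memory theory is probably right.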
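
Re: item 2 (webUI port): assuming this is a Docker container whose internal web server listens on port 80, the usual fix is to map a different host port onto container port 80. The container name and image tag below are placeholders for illustration.

    # Map host port 8008 to the container's internal port 80.
    docker run -d --name pihole -p 8008:80 diginc/pi-hole:alpine

The webUI is then reachable on host port 8008 while the container still sees port 80 internally.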
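
Re: item 6 (log sizes): a sketch of the kind of cron entry that keeps the pi-hole query log from growing unbounded. The schedule, container name, and log path are assumptions; adjust to your setup.

    # Truncate the pi-hole query log inside the container every night at midnight.
    0 0 * * * docker exec pihole sh -c ': > /var/log/pihole.log'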
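
Re: item 7 (diginc's container): the Key/Value pair from the template corresponds to an environment variable on the container. Below is a minimal docker run sketch with assumed port mappings (pi-hole conventionally serves DNS on 53 and its webUI on 80); the image's documentation lists other variables (e.g. ServerIP) that may also be required.

    # Run diginc's pi-hole image with IPv6 support disabled, per the template variable above.
    docker run -d --name pihole \
      -e IPv6=False \
      -p 53:53/tcp -p 53:53/udp \
      -p 80:80 \
      diginc/pi-hole:alpine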
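
Re: item 8 (.cron files): my understanding, which is worth verifying, is that dynamix merges any *.cron files found under /boot/config/plugins/ into the system crontab when update_cron runs, which happens at boot and on plugin install/update; it does not fire the moment you copy the file in. To apply a new entry immediately:

    # Example plugin cron file (path and schedule are illustrative assumptions):
    # /boot/config/plugins/pihole/flushlog.cron
    #   0 0 * * * docker exec pihole sh -c ': > /var/log/pihole.log'

    # Ask dynamix to rebuild the crontab from the plugin .cron files now.
    update_cron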
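
Re: item 9 (whitelist PHP warning): the error says PHP cannot open /var/log/lighttpd/pihole_php.log because the directory does not exist in this image. A hedged workaround is to create it inside the container; the wide-open permissions below are a shortcut because the web server user may differ between images.

    # Create the log directory and file the admin PHP pages expect.
    docker exec pihole mkdir -p /var/log/lighttpd
    docker exec pihole touch /var/log/lighttpd/pihole_php.log
    docker exec pihole chmod 666 /var/log/lighttpd/pihole_php.log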
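
Re: item 12 (never-ending balance): btrfs can report on and cancel a running balance. This assumes the pool is mounted at /mnt/cache, the usual unRAID cache mount point.

    # Check whether the balance is actually progressing, then cancel it if hung.
    btrfs balance status /mnt/cache
    btrfs balance cancel /mnt/cache

Running status a few minutes apart shows whether the completed-chunk count is moving or stuck.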
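
Re: item 13 (removing a cache-pool drive): unRAID normally handles the pool shrink itself when the array is started with the drive unassigned, but for reference the underlying btrfs operation looks like the sketch below. The device name is a placeholder; do not run this against a pool unRAID is managing without a current backup.

    # Migrate data off the device and drop it from the btrfs pool.
    btrfs device delete /dev/sdX1 /mnt/cache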