Ambidex

  1. Similar to WonderfulSlipperyThing, I just set this up and querying via the web page is painfully slow. Is anyone else seeing the same thing? I should note that I'm running this on a Core i3 based system. In my tests, querying the DB directly took 265-380 ms, while an artist search via the web took 10-17 s, plus another 18-45 s to load an artist's page. Even if the DB query triggered by the web is different, this doesn't seem right to me.

     1 - Direct query:

     musicbrainz_db=# explain analyze select * from artist where lower(name) like '%pearl jam%';
                                                    QUERY PLAN
     ----------------------------------------------------------------------------------------------------------
      Seq Scan on artist  (cost=0.00..33420.71 rows=111 width=98) (actual time=93.738..264.023 rows=3 loops=1)
        Filter: (lower((name)::text) ~~ '%pearl jam%'::text)
        Rows Removed by Filter: 1121111
      Planning time: 0.095 ms
      Execution time: 264.041 ms
     (5 rows)

     2 - Chrome 'Inspect -> Network' timings:

     - Search:
       search?query=pearl+jam&type=artist&method=indexed  200  document  Other  77.5 KB  10.70 s
       search?query=sublime&type=artist&method=indexed    200  document  Other  167 KB   10.93 s
     - Artist page:
       83b9cbe7-9857-49e2-ab8e-b57b01038103  200  document  Other  227 KB  6.18 s
       95f5b748-d370-47fe-85bd-0af2dc450bc0  200  document  Other  147 KB  11.39 s
     - Wikipedia extract and image (requested after the initial page):
       wikipedia-extract  200  xhr  jquery.js:9659  2.4 KB  5.98 s
       commons-image      200  xhr  jquery.js:9659  703 B   11.96 s
       wikipedia-extract  200  xhr  jquery.js:9659  2.7 KB  16.73 s
       commons-image      200  xhr  jquery.js:9659  388 B   33.29 s

     Rebuilding the indexes via the following command inside the Docker container did not improve anything (side note: I hit a DB deadlock and had to disable the update crontab):

     root@1f8dca2ee516:/usr/bin# ./reindexdb -U abc -a

     Figured I'd post this in case anyone had any ideas. If/when I have some time I'd like to try to figure out which layer is introducing the delay. Most of this is new to me, so I'm not sure how much time I'll spend on it.
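     Although the timings above point at the web layer rather than Postgres (264 ms in the DB vs. 10+ s over HTTP), the plan also shows a full sequential scan over ~1.1M rows for the substring match. One possible improvement, assuming the pg_trgm extension is available in this Postgres build (an assumption, not something the MusicBrainz container is known to ship enabled), is a trigram index over the same expression the query filters on; a sketch:

     ```sql
     -- Sketch only: assumes pg_trgm can be installed in this mirror's
     -- Postgres. Run via psql, as with the EXPLAIN ANALYZE above.
     CREATE EXTENSION IF NOT EXISTS pg_trgm;

     -- GIN trigram index over lower(name); it matches the expression in
     -- the slow query, so the planner can use it for '%pearl jam%'-style
     -- leading-wildcard patterns that a btree index cannot serve.
     CREATE INDEX artist_name_lower_trgm
         ON artist USING gin (lower(name) gin_trgm_ops);

     -- Re-check the plan; a Bitmap Index Scan on artist_name_lower_trgm
     -- should replace the Seq Scan.
     EXPLAIN ANALYZE SELECT * FROM artist WHERE lower(name) LIKE '%pearl jam%';
     ```

     This would only shave the ~264 ms DB portion, so the bulk of the 10-17 s would still need to be traced in the web/search layer.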
  2. I just deleted and re-created the container with the new 'pihole' docker. Wanted to let you know that there's no longer a 'WebUI' option when it's running and I click the pihole docker icon. If I manually go to '<tower ip>/admin', the pihole web UI loads.
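     On unRaid, the 'WebUI' context-menu entry is populated from the Docker template's WebUI field rather than from the container itself, so if the new pihole template leaves that field blank the entry disappears even though the admin page still answers. A sketch of the field, assuming the standard unRaid template format ([IP] and [PORT:80] are unRaid's own placeholders):

     ```xml
     <!-- Hypothetical fragment of the unRaid Docker template for the
          pihole container; this is what builds the 'WebUI' menu entry. -->
     <WebUI>http://[IP]:[PORT:80]/admin</WebUI>
     ```

     In the unRaid GUI this corresponds to the 'WebUI' field visible when editing the container with the advanced view toggled on.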
  3. To explicitly respond to your message: is this related to the log file sizes (the reason I have the Userscript piece in my above comment)? I don't believe spants installs the cron automatically, so if you forgot to do that piece (or overlooked it) the logs may simply be getting too big. The cron was taken from the first post on Aug 18:

     root@Tower:/boot/config/plugins/pihole# ls -l
     total 16
     -rwxrwxrwx 1 root root 1519 Aug 18 23:47 pihole.cron*
     root@Tower:/boot/config/plugins/pihole# cat pihole.cron
     # Pi-hole: A black hole for Internet advertisements
     # (c) 2015, 2016 by Jacob Salmela
     # Network-wide ad blocking via your Raspberry Pi
     # http://pi-hole.net
     # Updates ad sources every week
     #
     # Pi-hole is free software: you can redistribute it and/or modify
     # it under the terms of the GNU General Public License as published by
     # the Free Software Foundation, either version 2 of the License, or
     # (at your option) any later version.

     # Your container name goes here:
     #DOCKER_NAME=pihole-for-unRaid
     #PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

     # Pi-hole: Update the ad sources once a week on Sunday at 01:59
     # Download any updates from the adlists
     59 1 * * 7 root docker exec pihole-for-unRaid pihole updateGravity > /dev/null

     # Pi-hole: Update the Web interface shortly after gravity runs
     # This should also update the version number if it is changed in the dashboard repo
     #30 2 * * 7 root docker exec pihole-for-unRaid pihole updateDashboard > /dev/null

     # Pi-hole: Parse the log file before it is flushed and save the stats to a database
     # This will be used for a historical view of your Pi-hole's performance
     #50 23 * * * root docker exec pihole-for-unRaid dailyLog.sh # note: this is outdated > /dev/null

     # Pi-hole: Flush the log daily at 11:58 so it doesn't get out of control
     # Stats will be viewable in the Web interface thanks to the cron job above
     58 23 * * * root docker exec pihole-for-unRaid pihole flush > /dev/null
     root@Tower:/boot/config/plugins/pihole#

     The log file is currently sitting at ~87M:

     bash-4.3# pwd
     /var/log
     bash-4.3# ls -l pihole.log
     -rw-rw-rw- 1 root root 86833328 Sep 24 00:00 pihole.log

     Just found these in the unRaid syslogs. Looks like the cron may not be running properly:

     root@Tower:/var/log# grep pihole syslog*
     syslog:Sep 22 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog:Sep 23 10:20:56 Tower php: /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker 'stop' 'pihole-for-unRaid'
     syslog:Sep 23 10:21:00 Tower php: /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker 'start' 'pihole-for-unRaid'
     syslog.1:Sep 14 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.1:Sep 15 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.1:Sep 16 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.1:Sep 17 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.1:Sep 18 01:59:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole updateGravity > /dev/null
     syslog.1:Sep 18 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.1:Sep 19 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.1:Sep 20 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.1:Sep 21 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.2:Sep 7 23:58:16 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.2:Sep 8 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.2:Sep 9 23:58:11 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.2:Sep 10 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.2:Sep 11 01:59:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole updateGravity > /dev/null
     syslog.2:Sep 11 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.2:Sep 12 23:58:07 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     syslog.2:Sep 13 23:58:01 Tower crond[1512]: exit status 127 from user root root docker exec pihole-for-unRaid pihole flush > /dev/null
     root@Tower:/var/log#

     I manually ran the flush and the graphs now appear:

     root@Tower:/var/log# docker exec pihole-for-unRaid pihole flush
     ::: Flushing /var/log/pihole.log ...... done!
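     "exit status 127" is the shell's code for "command not found", and the syslog lines show the whole /etc/cron.d-style entry, user column included, being reported as the command. That suggests the pihole.cron file was merged into root's own crontab, which uses the 5-field format with no user column, so cron tried to run a program literally named `root`. A sketch of the failure mode and the corrected lines (this reading of the unRaid cron setup is an inference from the logs, not confirmed):

     ```shell
     # Reproduce what crond appears to be executing: the user field "root"
     # becomes the command, no program by that name exists, and the shell
     # returns 127 -- the exact status in the syslog entries.
     sh -c 'root docker exec pihole-for-unRaid pihole flush > /dev/null'
     echo "exit status: $?"   # prints "exit status: 127"

     # If the file really is merged into root's crontab (5 fields, no user
     # column), the fix is to drop "root" from each active line:
     #   59 1 * * 7 docker exec pihole-for-unRaid pihole updateGravity > /dev/null
     #   58 23 * * * docker exec pihole-for-unRaid pihole flush > /dev/null
     ```

     That would also explain why running the flush by hand works while the scheduled runs have failed every night.
     
     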
  5. Thanks for creating this. After having pi-hole running for days/weeks, I get the following errors in the logs when trying to load the pi-hole GUI:

     2016/09/23 14:57:19 [error] 242#242: *49 FastCGI sent in stderr: "PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 93 bytes) in /var/www/html/admin/data.php on line 176" while reading response header from upstream, client: 192.168.1.131, server: , request: "GET /admin/api.php?summaryRaw&getQuerySources HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.252", referrer: "http://192.168.1.252/admin/"
     2016/09/23 14:57:19 [error] 242#242: *44 FastCGI sent in stderr: "PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 96 bytes) in /var/www/html/admin/data.php on line 176" while reading response header from upstream, client: 192.168.1.131, server: , request: "GET /admin/api.php?summary HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.252", referrer: "http://192.168.1.252/admin/"
     2016/09/23 14:57:19 [error] 242#242: *50 FastCGI sent in stderr: "PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 96 bytes) in /var/www/html/admin/data.php on line 176" while reading response header from upstream, client: 192.168.1.131, server: , request: "GET /admin/api.php?getForwardDestinations HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.252", referrer: "http://192.168.1.252/admin/"
     2016/09/23 14:57:19 [error] 242#242: *47 FastCGI sent in stderr: "PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 96 bytes) in /var/www/html/admin/data.php on line 176" while reading response header from upstream, client: 192.168.1.131, server: , request: "GET /admin/api.php?overTimeData HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.252", referrer: "http://192.168.1.252/admin/"
     2016/09/23 14:57:19 [error] 242#242: *48 FastCGI sent in stderr: "PHP message: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 96 bytes) in /var/www/html/admin/data.php on line 176" while reading response header from upstream, client: 192.168.1.131, server: , request: "GET /admin/api.php?getQueryTypes HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.252", referrer: "http://192.168.1.252/admin/"

     These are the requests to the backend for the graph content, and the end result is that the graphs never load (you're left with spinning 'loading' arrows). It's discussed on the following GitHub page, and the php-fpm version may need to be upgraded: https://github.com/pi-hole/pi-hole/issues/375 I'm available to test anything you need. I'll also try to fix it myself if/when I have the time.
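     The 134217728 bytes in those errors is PHP's stock 128M memory_limit; data.php hits it while parsing a pihole.log that has grown for days without a flush. Besides fixing the flush cron, one workaround is raising memory_limit in the container's php.ini. A sketch of the edit on a scratch copy (the php.ini path inside this image is an assumption and should be located first):

     ```shell
     # Find the real php.ini inside the container before editing:
     #   docker exec pihole-for-unRaid php -i | grep 'Loaded Configuration File'
     # Demonstrate the substitution on a scratch copy of the file:
     printf 'memory_limit = 128M\n' > /tmp/php.ini
     sed -i 's/^memory_limit = .*/memory_limit = 256M/' /tmp/php.ini
     grep '^memory_limit' /tmp/php.ini   # prints "memory_limit = 256M"
     # Applied for real (path below is hypothetical, verify it first):
     #   docker exec pihole-for-unRaid sed -i \
     #     's/^memory_limit = .*/memory_limit = 256M/' /etc/php5/fpm/php.ini
     #   docker restart pihole-for-unRaid
     ```

     Raising the limit only buys headroom; keeping the log flushed (as in the posts above) is still the real fix.
     
     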