dalben

Members
  • Content Count

    1368
  • Joined

  • Last visited

Community Reputation

34 Good

About dalben

  • Rank
    sleepy
  • Birthday 09/30/1966

Converted

  • Gender
    Male
  • Location
    Singapore
  • Personal Text
    Too old to know everything

Recent Profile Visitors

1668 profile views
  1. What's the easiest way to set up a cron job in the container? Or to use the server's crontab to trigger a command in the container? I'm trying to sync two Pihole installs: one on a Raspberry Pi, and this container. The final step is to run "pihole restartdns reload-lists" in the container to read the synced db and custom.list. Any ideas would be appreciated. For those interested, I just copy (rsync) the gravity.db and custom.list from the RPi directly to the appdata/pihole/ directory on Unraid. The restart command will read the new files in. I'm simply using cron to kick off the push at 30 minutes past the hour, and I want to run the script here at 32 minutes past the hour. A sketch of both entries is below. I'm sure there are sexier ways of achieving this, but this is already maxing out my technical abilities. This also assumes the docker version of Pihole is running v5.
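     Roughly, the two cron entries look like this. The container name "pihole", the hostname "tower", and the appdata path are assumptions from my setup; adjust them to suit yours.
     # On the RPi (crontab -e): push the databases at 30 minutes past the hour.
     # /etc/pihole/ is the standard Pihole config directory on the RPi.
     30 * * * * rsync -a /etc/pihole/gravity.db /etc/pihole/custom.list root@tower:/mnt/user/appdata/pihole/
     # On the Unraid host: two minutes later, have the container re-read the
     # synced files via docker exec.
     32 * * * * docker exec pihole pihole restartdns reload-lists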
  2. Thanks. I'm actually not too fussed as your defaults work fine for me and I now get the Webgui. Was more trying to help you track down the issue.
  3. Yeah, I tried advanced view; they weren't there. Also checked "show more settings", not there either.
  4. If it helps, I updated the container and, like pm, I didn't see the options for the webgui variables. But when I started the container, I noticed 8222 was in the startup command, so I pointed a browser at the container IP:8222 and it connected fine. So it's there, just not displaying.
  5. Actually, I realise my TZ is set incorrectly on Paperless; it's showing UTC. Above it says the TZ variable has been removed because it uses the server's TZ, but that doesn't seem to be happening in my case. Any idea how I can fix this? My server's TZ is set correctly ((UTC+08:00) Kuala Lumpur, Singapore). UPDATE: Looked through the documentation and saw that the PAPERLESS_TIME_ZONE variable still works. Added it to the startup script and all is good.
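     In case it helps anyone else, this is the shape of what I added; where exactly it goes depends on your template, and the value is just the standard tz database name for my locale — swap in your own:
     # Added to the container's startup command (e.g. the template's extra
     # parameters field); everything else in the run command stays as-is.
     -e PAPERLESS_TIME_ZONE="Asia/Singapore"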
  6. Has anyone who uses this container set up replication between two Pihole installs? There are a couple of scripts out there, but none seem to be docker friendly. I have an RPi running Pihole (I used to use this container, but server issues meant the internet went down when the server went down. Family not happy if I'm not here). I would like to spin up this container to give me some resiliency on my LAN: DNS 1 on the RPi and DNS 2 on Unraid. Not keen to have to add settings and changes twice, so I'm looking at auto-synchronisation.
  7. Loaded this up today and it's running great. One slight stumbling block I didn't see mentioned: I don't use "Bridge" for my dockers; instead I give them their own IP on the br2 network. The paperless server ran fine, but when I added the consumer, it complained that the IP was in use (it was; it was used by the server container). In the end I used Bridge for the consumer and it started. But when I click on Webgui from the docker page, it goes to the wrong URL, trying the server IP address. I manually typed in the consumer address and it worked. Not a biggie, but some info in case others have the same problem. I haven't tried the script that was posted to run server and consumer from the same container. Is that something that will be incorporated into this solution, or is it something I would have to try on my own if I want to give it a shot?
  8. I've been trying to mount an Unraid share on my RPi. I've been able to do it with NFS, but I'd much prefer SMB (the only reason I have NFS running is for this mount). I've checked all manner of sites but can't seem to get it to work. If anyone has a working example, or might know what the issue is, that would be very handy.
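     For anyone answering, this is the shape of what I've been attempting; "tower", "share", the mount point, and the credentials are placeholders, and the vers= value may need to match whatever SMB level the Unraid share is set to:
     # Install the CIFS utilities on the RPi, then try the mount by hand
     # before committing anything to /etc/fstab.
     sudo apt-get install cifs-utils
     sudo mkdir -p /mnt/unraid-share
     # uid/gid map the mounted files to the local pi user; a credentials file
     # is safer than inline username/password if this ends up in fstab.
     sudo mount -t cifs //tower/share /mnt/unraid-share \
       -o username=USER,password=PASS,vers=3.0,uid=pi,gid=pi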
  9. Yeah, it's an annoying one. There are a few errors being thrown at startup that point to hardware, but I don't know enough to understand whether they are important or can be ignored.
  10. Here's the syslog entries leading up to a reboot or hang. As you can see, there is nothing in the logs pointing to why the server has hung or rebooted itself. Happy to send through the whole syslog if that helps.
      Mar 30 10:31:37 tdm kernel: mdcmd (467): spindown 1
      Mar 30 11:27:00 tdm kernel: mdcmd (468): spindown 1
      Mar 30 12:22:15 tdm kernel: mdcmd (469): spindown 3
      Mar 30 12:27:56 tdm kernel: mdcmd (470): spindown 1
      Mar 30 14:21:44 tdm kernel: mdcmd (471): spindown 1
      Mar 30 17:25:52 tdm kernel: microcode: microcode updated early to revision 0x2f, date = 2019-02-17
      Apr 4 18:40:44 tdm kernel: mdcmd (174): spindown 2
      Apr 4 18:52:51 tdm kernel: mdcmd (175): spindown 1
      Apr 4 18:58:48 tdm kernel: mdcmd (176): spindown 3
      Apr 4 19:54:27 tdm kernel: mdcmd (177): spindown 2
      Apr 4 19:55:46 tdm kernel: mdcmd (178): spindown 3
      Apr 4 20:12:58 tdm kernel: microcode: microcode updated early to revision 0x2f, date = 2019-02-17
      May 2 20:43:11 tdm kernel: mdcmd (294): spindown 6
      May 2 20:44:34 tdm kernel: mdcmd (295): spindown 3
      May 2 20:45:37 tdm kernel: mdcmd (296): spindown 5
      May 2 21:07:43 tdm kernel: mdcmd (297): spindown 2
      May 2 21:07:54 tdm kernel: mdcmd (298): spindown 3
      May 2 21:25:36 tdm kernel: mdcmd (299): spindown 6
      May 2 21:28:26 tdm kernel: mdcmd (300): spindown 2
      May 2 21:28:54 tdm kernel: mdcmd (301): spindown 3
      May 2 21:31:50 tdm kernel: microcode: microcode updated early to revision 0x2f, date = 2019-02-17
      May 24 08:03:20 tdm kernel: mdcmd (1394): spindown 2
      May 24 08:18:52 tdm kernel: mdcmd (1395): spindown 3
      May 24 11:20:01 tdm kernel: mdcmd (1396): spindown 1
      May 24 12:29:08 tdm kernel: mdcmd (1397): spindown 2
      May 24 12:29:08 tdm kernel: mdcmd (1398): spindown 5
      May 24 14:06:54 tdm kernel: microcode: microcode updated early to revision 0x2f, date = 2019-02-17
      May 25 12:47:05 tdm kernel: mdcmd (73): spindown 3
      May 25 13:13:55 tdm kernel: sd 9:0:6:0: [sdi] tag#2606 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
      May 25 13:13:55 tdm kernel: sd 9:0:6:0: [sdi] tag#2606 CDB: opcode=0x88 88 00 00 00 00 01 5e db 0b 80 00 00 00 08 00 00
      May 25 13:13:55 tdm kernel: print_req_error: I/O error, dev sdi, sector 5886380928
      May 25 13:40:29 tdm kernel: mdcmd (74): spindown 6
      May 25 13:55:12 tdm kernel: mdcmd (75): spindown 5
      May 25 13:59:19 tdm kernel: mdcmd (76): spindown 6
      May 25 13:59:43 tdm kernel: mdcmd (77): spindown 2
      May 25 14:17:15 tdm kernel: microcode: microcode updated early to revision 0x2f, date = 2019-02-17
  11. I do run syslog onto a server as well as duplicating it to flash. This was in an attempt to capture some info before the boot/hang. There is nothing I can see before a hang or reboot. Let me grab the current syslog and anonymise it a bit and I can attach it. I mistakenly thought the Diags would grab the whole syslog.
  12. My Unraid server is for the most part quite solid, but I do get the occasional restart or server hang. Nothing has ever popped up in the syslog before the event, and as it's headless I've never seen what the screen is showing. I can go a couple of months without anything, or I can have a few in a short space of time. Yesterday I had a lockup where a cold restart was needed to get back, and I just had a server reboot. Because it's not constant and can't be triggered on demand, I've refrained from trial by elimination: starting without plugins or dockers and slowly adding them back until I get an event. It would take too long and render my server useless for what I need it for. So for now it will just be something to deal with, but I've attached my diagnostics in case someone has the desire to wade through and see if there's anything obvious amiss. There are a few errors in the syslog at every startup, but I've always ignored those without knowing what they mean. TIA to anyone that has the time to have a look. tdm-diagnostics-20200525-1502.zip
  13. They did have a headless/SQL backend as a student-type hackathon topic at one stage, but it never eventuated. I will try Emby to see if there is much difference, but Jellyfin seems the more admirable platform with the right intent, so I wanted to give that a shot first.
  14. Cool, good to know. I have started playing with Jellyfin, but it's not working as smoothly as I had hoped; that could be my issue though, so I'll keep plugging away. Off topic, but I'm surprised the Kodi guys gave up on a headless instance to manage a centralised SQL DB. I think they still consider a SQL DB "experimental".
  15. Just noticed this post. If LS don't think this container will continue to be maintained, many of us had better start looking at moving over to another centralised solution. Can we have some clear direction? This container has been central to everything I have, to the point that it kept me with Unraid as my home server solution. It would be a shame to see it go, but life moves on I guess. I started playing with Jellyfin over the weekend. Got it running with Kodi, but it's nowhere near as instant and snappy as Kodi with a SQL backend. I guess I could just keep the SQL backend and use one of the clients to do the updating.