Merijeek

Members
Posts: 129

Everything posted by Merijeek

  1. OK, well, another couple of tries at the delete/reboot/recreate cycle seems to have me up and running. Can anyone see any hardware issues in my diagnostics? This isn't the first time I've had to do this, but it is the most difficult time I've had getting it fixed...
  2. Came home today, went to check on some containers, and couldn't connect. I checked on things, and when I go to the Docker tab... Docker Service failed to start. I've tried stopping and restarting the service, bounced the entire box, and deleted the .img. Nothing seems to be helping. Any time I try to stop and start things manually, I just get this:
     Feb 13 15:53:52 UnRAID emhttpd: shcmd (217): /etc/rc.d/rc.docker start
     Feb 13 15:53:52 UnRAID root: starting dockerd ...
     Feb 13 15:54:02 UnRAID emhttpd: shcmd (219): umount /var/lib/docker
     unraid-diagnostics-20180213-1558.zip
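     For reference, this is roughly the manual sequence I've been trying when the GUI start fails; running dockerd in the foreground with --debug is just an attempt to get the real error onto the console instead of it dying silently (paths are my defaults, so adjust if yours differ):
     /etc/rc.d/rc.docker stop    # make sure the service is fully down first
     losetup -a                  # check whether docker.img is still attached as a loop device
     umount /var/lib/docker      # detach the image if it is still mounted
     dockerd --debug             # run the daemon in the foreground so the failure reason actually prints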
  3. ...well, that's weird. I went to mess with the share and I saw it was set to "secure". Then I checked and ALL my shares have flipped to "secure". Any guesses why that happened?
  4. I seem to have found a new spontaneous problem. I'm pretty sure I've seen it before, way back, and when I did, I just went ahead and ran a permissions fix and it worked. Now, when I try to run fix permissions, it just sits there, near as I can tell. I've run XFS repair: the first pass found some stuff, the second pass seemed uneventful. Any suggestions greatly appreciated, since as far as I can tell I can't currently alter any of my files. unraid-diagnostics-20171220-1537.zip
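     If the tool never comes back, this is roughly what I was planning to try by hand against a single share first. I'm assuming nobody:users is the ownership everything is supposed to end up with, and that this is at least close to what the New Permissions tool does, so please correct me if not:
     # test run against one share before touching everything (path is just an example)
     chown -R nobody:users /mnt/user/Incoming
     chmod -R u+rw,g+rw /mnt/user/Incoming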
  5. Which was kind of what I suspected was happening, so I grabbed the linuxserver.io one, set it up and...same version. Even used a different config directory.
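     For reference, this is roughly how I set the linuxserver.io container up the second time around. VERSION is the only version-related knob I could find, and I'm assuming "latest" is a valid value for it; the media path is just an example from my setup:
     docker run -d --name=plex --net=host \
       -e PUID=99 -e PGID=100 \
       -e VERSION=latest \
       -v /mnt/user/config/plex:/config \
       -v /mnt/user/Media:/media \
       linuxserver/plex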
  6. Looking at the current Plex version, I'm seeing Version 1.5.6.3790. From what I understand (quite possibly incorrectly), we need to be running something in the 1.7 range to get live TV for iOS devices. I've tried a manual update, I've tried checking for new updates, and I've even deleted and recreated the container, with no change in the version. Can someone suggest what I'm missing here?
  7. root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="binhex-delugevpn" --net="bridge" --privileged="true" \
       -e TZ="America/Los_Angeles" -e HOST_OS="unRAID" \
       -e "VPN_ENABLED"="yes" -e "VPN_USER"="XXX" -e "VPN_PASS"="XXX" \
       -e "VPN_REMOTE"="us-seattle.privateinternetaccess.com" -e "VPN_PORT"="1194" -e "VPN_PROTOCOL"="udp" -e "VPN_PROV"="pia" \
       -e "STRONG_CERTS"="no" -e "ENABLE_PRIVOXY"="no" -e "LAN_NETWORK"="192.168.1.0/24" -e "DEBUG"="false" \
       -e "PUID"="99" -e "PGID"="100" \
       -p 8112:8112/tcp -p 59846:58846/tcp -p 9118:8118/tcp \
       -v "/mnt/user/Incoming":"/data":rw -v "/mnt/user/Incoming":"/unraid":rw -v "/mnt/user/config/binhex-delugevpn":"/config":rw \
       binhex/arch-delugevpn
  8. So, for a while now I've been getting this:
     Docker Application binhex-delugevpn, Container Port 58946 not found or changed on installed application
     and this:
     Docker Application binhex-delugevpn, Container Port 58946 not found or changed on installed application
     I don't recall changing a port, and when I look at the config for that container, I do see 58946 as "Port 2"... and that's it. I just don't know what I'm supposed to actually correct here.
  9. OK, thanks. I was having to rebuild Plex from scratch, and there's a bunch of media. If nothing else I could have run separate library additions. I thought part of the deal with Docker was keeping things discrete so they can't interfere with each other this way?
  10. This is a recurring issue, and I'm hoping I find a resolution this time. I had the Fix Common Problems troubleshooting mode running, for what that's worth. The symptoms are that I lose everything but ping: I can't ssh, telnet, or do web management, and all my Docker containers go unresponsive. It appears to have quit at 1:32 AM. Any suggestions would be greatly appreciated. unraid-diagnostics-20170224-0102.zip unraid-diagnostics-20170224-0132.zip syslog.txt
  11. Yes, sadly I don't see anything that covers it. "Why do my dockers spontaneously stop responding?" isn't on the list.
  12. I seem to have problems with Docker more often than with everything else put together, really. Any idea why that would be?
  13. I've also run xfs_repair, but haven't found anything too damning.
     root@UnRAID:/dev# xfs_repair -v /dev/md2
     Phase 1 - find and verify superblock... - block cache size set to 1087816 entries
     Phase 2 - using internal log - zero log... zero_log: head block 3353606 tail block 3353606 - scan filesystem freespace and inode maps... - found root inode chunk
     Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... (agno 0-3) - process newly discovered inodes...
     Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - check for inodes claiming duplicate blocks... (agno 0-3)
     Phase 5 - rebuild AG headers and trees... (agno 0-3) - reset superblock...
     Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - traversing filesystem... (agno 0-3) - traversal finished... - moving disconnected inodes to lost+found...
     Phase 7 - verify and correct link counts...
     XFS_REPAIR Summary    Fri Jan 20 20:06:44 2017
     Phase      Start           End             Duration
     Phase 1:   01/20 20:06:15  01/20 20:06:16  1 second
     Phase 2:   01/20 20:06:16  01/20 20:06:17  1 second
     Phase 3:   01/20 20:06:17  01/20 20:06:26  9 seconds
     Phase 4:   01/20 20:06:26  01/20 20:06:26
     Phase 5:   01/20 20:06:26  01/20 20:06:26
     Phase 6:   01/20 20:06:26  01/20 20:06:35  9 seconds
     Phase 7:   01/20 20:06:35  01/20 20:06:35
     Total run time: 20 seconds
     done
     root@UnRAID:/dev# xfs_repair -v /dev/md4
     Phase 1 - find and verify superblock... - block cache size set to 1072760 entries
     Phase 2 - using internal log - zero log... zero_log: head block 3556750 tail block 3556750 - scan filesystem freespace and inode maps... - found root inode chunk
     Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... (agno 0-4) - process newly discovered inodes...
     Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - check for inodes claiming duplicate blocks... (agno 0-4)
     Phase 5 - rebuild AG headers and trees... (agno 0-4) - reset superblock...
     Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - traversing filesystem... (agno 0-4) - traversal finished... - moving disconnected inodes to lost+found...
     Phase 7 - verify and correct link counts...
     XFS_REPAIR Summary    Fri Jan 20 20:08:59 2017
     Phase      Start           End             Duration
     Phase 1:   01/20 20:07:17  01/20 20:07:18  1 second
     Phase 2:   01/20 20:07:18  01/20 20:07:21  3 seconds
     Phase 3:   01/20 20:07:21  01/20 20:08:06  45 seconds
     Phase 4:   01/20 20:08:06  01/20 20:08:06
     Phase 5:   01/20 20:08:06  01/20 20:08:06
     Phase 6:   01/20 20:08:06  01/20 20:08:46  40 seconds
     Phase 7:   01/20 20:08:46  01/20 20:08:46
     Total run time: 1 minute, 29 seconds
     done
     root@UnRAID:/dev# xfs_repair -v /dev/md1
     Phase 1 - find and verify superblock... - block cache size set to 1132584 entries
     Phase 2 - using internal log - zero log... zero_log: head block 174794 tail block 174794 - scan filesystem freespace and inode maps... - found root inode chunk
     Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... (agno 0-3) - process newly discovered inodes...
     Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - check for inodes claiming duplicate blocks... (agno 0-3)
     Phase 5 - rebuild AG headers and trees... (agno 0-3) - reset superblock...
     Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - traversing filesystem... (agno 0-3) - traversal finished... - moving disconnected inodes to lost+found...
     Phase 7 - verify and correct link counts...
     XFS_REPAIR Summary    Fri Jan 20 20:09:09 2017
     Phase      Start           End             Duration
     Phase 1:   01/20 20:09:07  01/20 20:09:07
     Phase 2:   01/20 20:09:07  01/20 20:09:07
     Phase 3:   01/20 20:09:07  01/20 20:09:08  1 second
     Phase 4:   01/20 20:09:08  01/20 20:09:08
     Phase 5:   01/20 20:09:08  01/20 20:09:08
     Phase 6:   01/20 20:09:08  01/20 20:09:08
     Phase 7:   01/20 20:09:08  01/20 20:09:08
     Total run time: 1 second
     done
     root@UnRAID:/dev# xfs_repair -v /dev/md3
     Phase 1 - find and verify superblock... - block cache size set to 1102608 entries
     Phase 2 - using internal log - zero log... zero_log: head block 862818 tail block 862818 - scan filesystem freespace and inode maps... - found root inode chunk
     Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... (agno 0-3) - process newly discovered inodes...
     Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - check for inodes claiming duplicate blocks... (agno 0-3)
     Phase 5 - rebuild AG headers and trees... (agno 0-3) - reset superblock...
     Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - traversing filesystem... (agno 0-3) - traversal finished... - moving disconnected inodes to lost+found...
     Phase 7 - verify and correct link counts...
     XFS_REPAIR Summary    Fri Jan 20 20:11:34 2017
     Phase      Start           End             Duration
     Phase 1:   01/20 20:09:15  01/20 20:09:15
     Phase 2:   01/20 20:09:15  01/20 20:09:19  4 seconds
     Phase 3:   01/20 20:09:19  01/20 20:10:14  55 seconds
     Phase 4:   01/20 20:10:14  01/20 20:10:14
     Phase 5:   01/20 20:10:14  01/20 20:10:14
     Phase 6:   01/20 20:10:14  01/20 20:10:40  26 seconds
     Phase 7:   01/20 20:10:40  01/20 20:10:40
     Total run time: 1 minute, 25 seconds
  14. Today I came home from work and tried to open my SickRage UI, and I'm getting a connection refused on http://192.168.1.44:8081/. I did a scrub and got this:
     scrub status for 1d177fe9-4d58-4f9d-8a29-e94ebcecc842
     scrub started at Fri Jan 20 17:05:55 2017 and finished after 00:02:00
     total bytes scrubbed: 15.15GiB with 2 errors
     error details: csum=2
     corrected errors: 0, uncorrectable errors: 2, unverified errors: 0
     I've also attached diagnostics. I assume this is due to power issues. I did have the Fix Common Problems troubleshooting mode running, so I've got loads of .zip files at 30-minute increments, but I'm not sure which one would be of interest, so I don't quite want to spam those in here yet. unraid-diagnostics-20170120-1709.zip
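     If it helps narrow things down, this is how I was planning to find which files the two uncorrectable csum errors actually landed on - just grepping the kernel log, on the assumption that btrfs logs the affected inode/path there:
     dmesg | grep -i "csum failed"              # checksum failures from the current boot
     grep -i "csum failed" /var/log/syslog      # same messages, in case dmesg has already rolled over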
  15. I've got the tail running. Hopefully I'll remember to restart it when Microsoft decides I need a reboot whether I want it to or not. I'll update this post when the next inevitable freeze happens.
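     (For reference, the "tail" is nothing fancy - something along these lines from my desktop, left running in a window; 192.168.1.44 is the server, and the point is that whatever hits the syslog right before the freeze survives on my PC even after a hard reset:)
     ssh root@192.168.1.44 "tail -f /var/log/syslog" > unraid-syslog.txt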
  16. Diagnostics attached. unraid-diagnostics-20161216-1042.zip
  17. Our version is 6.2 or 6.2.1; I just started getting nags about the latest version. Dockers I've got: ruTorrent, SickRage, delugevpn, Plex, Tonido. The machine is made from leftover parts: an old Xeon processor on the ASRock board that used to be in my old main PC, and 12GB of RAM. There are definitely no logs anywhere. Since this is a recurring error, what is the best way for me to gather live info? It seems like I can't rely on finding logs afterwards on the flash drive, so how can I set up something to continually dump to a syslog server or something?
  18. I've been running unRAID for about a year now, and I haven't made it past about 20 days before I have to do a cold shutdown. From a user perspective, the system becomes completely unresponsive. I can ping it, but that's it: no docker UIs respond, the Unraid web UI doesn't respond, no telnet, no ssh. According to a port scan, 23 is listening, but I can never successfully connect to it. I've avoided a reboot so far so that I can hopefully find some sort of cause here. I've got the flash drive connected to my PC, but the \logs folder is completely empty. Is there somewhere else I should be looking?
  19. So, I'm still having occasional system lockups and I still have no idea why. How can I configure Unraid to continually write to a remote syslog server?
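     The closest I've come up with so far is just appending a forwarding rule to the syslog config from my go file - does this look sane? I'm assuming the box is running rsyslogd (if it's plain syslogd the file would be /etc/syslog.conf instead), and 192.168.1.10 is just a stand-in for wherever the syslog server lives:
     # added to /boot/config/go so it survives a reboot
     echo '*.* @192.168.1.10' >> /etc/rsyslog.conf
     killall -HUP rsyslogd    # have the daemon re-read its config (or just restart it if HUP alone isn't enough)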
  20. Well, that's an unfortunate coincidence. Thanks for the info. And have I mentioned how much I hate stuff that has a "basic" and "advanced" toggle? If not for that I would have spotted it myself.
  21. So, I've somehow managed to come up with two shares whose names differ only in case - isos and ISOS. ISOS I created as a normal share, the way I've done all the rest. isos I'm not sure where it's coming from; I believe it's got something to do with the VM manager. However, if I disable VMs in Settings, go into the share via the CLI, and manually clear it out and delete it, it comes back. So... I'm looking for help figuring out what I've done and how to undo it.
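     To at least see where the lowercase one physically lives, I ran something like this (just checking every data disk and the cache for either spelling):
     ls -ld /mnt/disk*/isos /mnt/disk*/ISOS /mnt/cache/isos /mnt/cache/ISOS 2>/dev/null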
  22. Look a couple posts above you. I had the exact same issue. PIA port number changed.
  23. Quote: "Pia port has changed from 1194 to 1198. Sent from my SM-G900F using Tapatalk"
     That did it. Thanks. You'd think PIA's support people would be aware of this...
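     For anyone else who lands on this: the only change needed from my original run command (quoted in full in the next post down) was the PIA port, everything else stayed exactly the same:
     docker run -d \
       --cap-add=NET_ADMIN \
       -p 8112:8112 \
       -p 8118:8118 \
       --name=delugevpn \
       -v /mnt/user/Incoming:/data \
       -v /mnt/user/config/delugevpn:/config \
       -v /etc/localtime:/etc/localtime:ro \
       -e VPN_ENABLED=yes \
       -e VPN_USER=XXXX \
       -e VPN_PASS=XXXX \
       -e VPN_REMOTE=nl.privateinternetaccess.com \
       # VPN_PORT is the only line that changed (was 1194)
       -e VPN_PORT=1198 \
       -e VPN_PROTOCOL=udp \
       -e VPN_PROV=pia \
       -e ENABLE_PRIVOXY=yes \
       -e LAN_NETWORK=192.168.1.0/24 \
       -e DEBUG=false \
       -e PUID=0 \
       -e PGID=0 \
       binhex/arch-delugevpn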
  24. Has something gone funny with PIA? My installation (which I C&P when needed) looks like this:
     root@UnRAID:~# docker run -d \
       --cap-add=NET_ADMIN \
       -p 8112:8112 \
       -p 8118:8118 \
       --name=delugevpn \
       -v /mnt/user/Incoming:/data \
       -v /mnt/user/config/delugevpn:/config \
       -v /etc/localtime:/etc/localtime:ro \
       -e VPN_ENABLED=yes \
       -e VPN_USER=XXXX \
       -e VPN_PASS=XXXX \
       -e VPN_REMOTE=nl.privateinternetaccess.com \
       -e VPN_PORT=1194 \
       -e VPN_PROTOCOL=udp \
       -e VPN_PROV=pia \
       -e ENABLE_PRIVOXY=yes \
       -e LAN_NETWORK=192.168.1.0/24 \
       -e DEBUG=false \
       -e PUID=0 \
       -e PGID=0 \
       binhex/arch-delugevpn
     However, the VPN is failing over and over because of a self-signed cert in the chain:
     2016-10-09 10:15:41,184 DEBG 'start-script' stdout output:
     Sun Oct 9 10:15:41 2016 VERIFY ERROR: depth=1, error=self signed certificate in certificate chain: C=US, ST=OH, L=Columbus, O=Private Internet Access, CN=Private Internet Access CA, [email protected]
     Sun Oct 9 10:15:41 2016 OpenSSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
     Sun Oct 9 10:15:41 2016 TLS_ERROR: BIO read tls_read_plaintext error
     Sun Oct 9 10:15:41 2016 TLS Error: TLS object -> incoming plaintext read error
     Sun Oct 9 10:15:41 2016 TLS Error: TLS handshake failed
     I'm at a loss as to what to do here. It's not like I can do anything about PIA's cert chain. Is it possible to skip the verification? And even if I could, would I want to?
  25. ...can anyone give me some insight as to what might have caused this to start up today?
     2016-10-08 20:59:57,882 DEBG 'start-script' stdout output:
     Sat Oct 8 20:59:57 2016 VERIFY ERROR: depth=1, error=self signed certificate in certificate chain: C=US, ST=OH, L=Columbus, O=Private Internet Access, CN=Private Internet Access CA, [email protected]
     Sat Oct 8 20:59:57 2016 OpenSSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
     Sat Oct 8 20:59:57 2016 TLS_ERROR: BIO read tls_read_plaintext error
     Sat Oct 8 20:59:57 2016 TLS Error: TLS object -> incoming plaintext read error
     Sat Oct 8 20:59:57 2016 TLS Error: TLS handshake failed