wackydog

Members
Posts: 27
Everything posted by wackydog

  1. So I gave up. It seems no one knows what the problem is (including me). I switched to the plexinc container and finally got back up and running. It took me quite a while to get it all working, so if anyone else needs to make the switch, here are a few tips from my past couple of hours of pain:

     First, take the time to copy your binhex-plex config directory to a new location, so you can keep the original for reference in case you screw up and need to start over.

     Next, read and understand, as best you can, the readme for the plexinc container: https://github.com/plexinc/pms-docker

     The /config directory for the binhex-plex docker has the directory "Plex Media Server" directly inside it. In the plexinc docker, the "Plex Media Server" directory is inside "Library/Application Support/", so you'll have to create the directories "Library/Application Support" and then move your existing "Plex Media Server" into them. Your /config container path should then point to the directory containing "Library". If this sounds confusing, you can always create a temporary plexinc docker with the container name "plex-tmp" or something, start it up with the default values, and look at what it creates.

     Per the directions, don't forget the variable ALLOWED_NETWORKS, otherwise you won't be able to connect to Plex on your local network. If you have a Plex Pass, the repository field should be 'plexinc/pms-docker:plexpass' instead of 'plexinc/pms-docker', so you receive updates correctly.

     For more tips, 'Spaceinvader One' made a video on switching containers that's also worth a watch: https://www.youtube.com/watch?v=7RgPx7BN8DE

     Hmm... I think that's about it. In any case, that's what worked for me. Good luck!
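The directory shuffle described above can be sketched as a small shell helper. This is my own sketch, not an official migration script — the function name `migrate_plex_config` and the example paths are made up; substitute your actual appdata locations:

```shell
#!/bin/bash
# Sketch: move a binhex-plex config layout into the layout the plexinc
# container expects (.../Library/Application Support/Plex Media Server).
# migrate_plex_config is a hypothetical helper; paths are examples only.
migrate_plex_config() {
    local old_config="$1"   # e.g. /mnt/user/appdata/binhex-plex
    local new_config="$2"   # e.g. /mnt/user/appdata/plex
    mkdir -p "$new_config/Library/Application Support"
    # copy rather than move, so the original stays intact for reference
    cp -a "$old_config/Plex Media Server" \
          "$new_config/Library/Application Support/"
}
# e.g. migrate_plex_config /mnt/user/appdata/binhex-plex /mnt/user/appdata/plex
```

Afterward the plexinc container's /config mapping would point at the directory containing "Library".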
  2. Hi all, I have lost access to my Plex server. I was on the Plex.tv site from work and decided to change my password. Big mistake. I checked the box to invalidate all connected devices, and I lost access to the server. I thought I'd just be able to restart it when I got home, but no. I can restart, but port 32400 on my local network no longer delivers anything. I ran:

     lsof -i -P -n | grep LISTEN

     from the shell, and it says:

     docker-pr 22409 root 4u IPv6 323760312 0t0 TCP *:32400 (LISTEN)

     So it looks like it's listening. I tried to adjust the Preferences.xml as indicated here: https://support.plex.tv/articles/204281528-why-am-i-locked-out-of-server-settings-and-how-do-i-get-in/ but there is no change. Looking at the Plex server log, the server appears to be shutting down (though the docker continues to run with no docker log errors). There are messages involving plex.tv, but I don't really understand much of what's going on in there. I've attached a Plex server log of a complete startup. If anyone has an idea, I'd love to hear it. Any help would be appreciated.

     server.log
  3. It works! I loaded the User Scripts plugin and set my startup script there, and all is working like before. I did see that the call to my startup script was still in the go file, but it wasn't running. Oh well, the User Scripts plugin is working fine. Thanks to itimpi and trurl for all the help. I really appreciate it. Also, much thanks to the Unraid team for fixing the annoying update problems in Docker.
  4. Thanks! I'm off to bed, but I'll try the User Scripts plugin first thing tomorrow. I'll let you know if it works.
  5. I guess my problem is that I don't recall what the mechanism was that made it automatically run at startup in the first place. I just need a way for my authorized_keys file to survive reboots. The file is on the thumb drive, I just need to make sure it's always in the .ssh folder of the running system.
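The copy my old startup script did can be written as a tiny function and run from a startup script (for instance one scheduled at array start). This is my own sketch — `install_authorized_keys` is a made-up name, and the example paths are the usual Unraid ones, so verify them on your system:

```shell
#!/bin/bash
# Sketch: restore a persistent authorized_keys from the flash drive into
# the live (RAM-backed) root filesystem after boot, with the permissions
# sshd insists on. install_authorized_keys is a hypothetical helper.
install_authorized_keys() {
    local src="$1"        # persistent copy, e.g. /boot/config/ssh/authorized_keys
    local dest_dir="$2"   # live location, e.g. /root/.ssh
    mkdir -p "$dest_dir"
    chmod 700 "$dest_dir"                  # sshd rejects overly open dirs
    cp "$src" "$dest_dir/authorized_keys"
    chmod 600 "$dest_dir/authorized_keys"  # keys file must be private too
}
# e.g. install_authorized_keys /boot/config/ssh/authorized_keys /root/.ssh
```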
  6. It looks like that was it, though I haven't tested the other plugins. I tried to just re-add the NerdPack.plg file with the existing plugin folder and the GUI would not start. I then deleted the NerdPack.plg and its corresponding folder, and then after reboot re-added the plugin from community apps and it worked fine. I'm not actively dependent on any of the utils, so I'll just add things as I need them. The only thing that doesn't work now is my authorized_keys file. Years ago I wrote a custom 'startup.sh' file that copied my authorized_keys from the boot drive to '/root/.ssh'. This startup file no longer runs at startup, so I can't ssh into the machine using my keys. Is there a standard way for setting up authorized ssh keys in Unraid 6.8?
  7. Hi Maarty, if you only have ssh access, you can set Unraid to boot into GUI safe mode. Edit the file /boot/syslinux/syslinux.cfg: just move the line 'menu default' to beneath the line 'label unRAID OS GUI Safe Mode (no plugins)'. Then save and reboot. Just remember to set it back when you're finished.

     label unRAID OS
       menu default
       kernel /bzimage
       append initrd=/bzroot
     label unRAID OS GUI Mode
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui
     label unRAID OS Safe Mode (no plugins, no GUI)
       kernel /bzimage
       append initrd=/bzroot unraidsafemode
     label unRAID OS GUI Safe Mode (no plugins)
       kernel /bzimage
       append initrd=/bzroot,/bzroot-gui unraidsafemode
     label Memtest86+
       kernel /memtest
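If editing the file by hand over ssh is awkward, the "move the menu default line" step can also be scripted. A rough sketch assuming GNU sed — `set_default_label` is a made-up helper name, and you should keep a backup of syslinux.cfg and eyeball the result before rebooting:

```shell
#!/bin/bash
# Sketch: point the syslinux default at a different boot label by deleting
# the existing 'menu default' line and re-adding it under the chosen label.
# set_default_label is hypothetical; verify the edited file before reboot.
set_default_label() {
    local file="$1" label="$2"
    sed -i '/menu default/d' "$file"
    sed -i "/^label $label\$/a\\  menu default" "$file"
}
# e.g. (after: cp /boot/syslinux/syslinux.cfg /boot/syslinux/syslinux.cfg.bak)
# set_default_label /boot/syslinux/syslinux.cfg "unRAID OS GUI Safe Mode (no plugins)"
```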
  8. Hi all, I set all the original files back into the directory '/boot/config/plugins'. Then I removed all the .plg files except community.applications.plg, and restarted. Everything is working fine. That means the problem is likely with one of the following plugins, which I removed:

     fix.common.problems.plg
     recycle.bin.plg
     NerdPack.plg
     dynamix.ssd.trim.plg
     preclear.disk.plg
     tips.and.tweaks.plg

     I don't have too many interesting settings in these plugins, so I think I'll try to re-install them and see if they work with fresh configurations.
  9. I removed the folder and rebooted, and that worked, but I did lose my GUI dark mode. I moved all the files in /boot/config/plugins to another directory, so I'll move the files back and reset it as you suggested. Time for me to head to bed for now though. My hardware takes about 10 minutes to reboot each time, so I'll have to wait until tomorrow to play with it again. Thanks very much for your help! I'll report my findings hopefully tomorrow once I've narrowed down the exact problem. Goodnight.
  10. The directory /boot/extra is empty, but as I said, there are many plugins in '/boot/config/plugins'. Can I just empty this directory?
  11. I see several plugins in '/boot/config/plugins'. Is this the directory to remove? Is it safe just to delete the contents and reboot?
  12. Yes, it looks like that might have something to do with it. In safe mode I can connect to the GUI. I looked under 'Plugins' and it showed no plugins installed and three plugins on the error tab. I deleted them in the GUI and restarted (no longer in safe mode), but I still can't load the GUI. Could there be other plugins, not shown by the GUI, that I could remove from the command-line console?
  13. So, looking at a few of the other posts, I tried to use the 'diagnostics' command in the console, but I got:

     /usr/bin/php: error while loading shared libraries: libicui18n.so.64: cannot open shared object file: No such file or directory
  14. Hi all, I just updated to the latest stable version and I can't connect to the web interface. I can SSH into the machine and my Plex docker seems to be running fine. Below are the last (sanitized) entries in the syslog, in case they are helpful. Thanks for any help or ideas!

     Dec 16 17:36:52 myserv ntpd[1745]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
     Dec 16 17:37:01 myserv crond[1765]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     Dec 16 17:38:01 myserv crond[1765]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     Dec 16 17:39:01 myserv crond[1765]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     Dec 16 17:39:03 myserv nginx: 2019/12/16 17:39:03 [crit] 5026#5026: *633 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.218, server: , request: "GET /Main HTTP/1.1", subrequest: "/auth_request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.1.44"
     Dec 16 17:39:03 myserv nginx: 2019/12/16 17:39:03 [error] 5026#5026: *633 auth request unexpected status: 502 while sending to client, client: 192.168.1.218, server: , request: "GET /Main HTTP/1.1", host: "192.168.1.44"
     Dec 16 17:39:03 myserv nginx: 2019/12/16 17:39:03 [crit] 5026#5026: *635 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.218, server: , request: "GET /favicon.ico HTTP/1.1", subrequest: "/auth_request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.1.44", referrer: "http://192.168.1.44/Main"
     Dec 16 17:39:03 myserv nginx: 2019/12/16 17:39:03 [error] 5026#5026: *635 auth request unexpected status: 502 while sending to client, client: 192.168.1.218, server: , request: "GET /favicon.ico HTTP/1.1", host: "192.168.1.44", referrer: "http://192.168.1.44/Main"
     Dec 16 17:39:12 myserv nginx: 2019/12/16 17:39:12 [crit] 5026#5026: *647 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.218, server: , request: "GET /Main HTTP/1.1", subrequest: "/auth_request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "myserv"
     Dec 16 17:39:12 myserv nginx: 2019/12/16 17:39:12 [error] 5026#5026: *647 auth request unexpected status: 502 while sending to client, client: 192.168.1.218, server: , request: "GET /Main HTTP/1.1", host: "myserv"
     Dec 16 17:39:12 myserv nginx: 2019/12/16 17:39:12 [crit] 5026#5026: *649 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.218, server: , request: "GET /favicon.ico HTTP/1.1", subrequest: "/auth_request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "myserv", referrer: "http://myserv/Main"
     Dec 16 17:39:12 myserv nginx: 2019/12/16 17:39:12 [error] 5026#5026: *649 auth request unexpected status: 502 while sending to client, client: 192.168.1.218, server: , request: "GET /favicon.ico HTTP/1.1", host: "myserv", referrer: "http://myserv/Main"
     Dec 16 17:39:17 myserv nginx: 2019/12/16 17:39:17 [crit] 5026#5026: *655 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.218, server: , request: "GET /Main HTTP/1.1", subrequest: "/auth_request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "myserv"
     Dec 16 17:39:17 myserv nginx: 2019/12/16 17:39:17 [error] 5026#5026: *655 auth request unexpected status: 502 while sending to client, client: 192.168.1.218, server: , request: "GET /Main HTTP/1.1", host: "myserv"
     Dec 16 17:39:17 myserv nginx: 2019/12/16 17:39:17 [crit] 5026#5026: *657 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.218, server: , request: "GET /favicon.ico HTTP/1.1", subrequest: "/auth_request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "myserv", referrer: "http://myserv/Main"
     Dec 16 17:39:17 myserv nginx: 2019/12/16 17:39:17 [error] 5026#5026: *657 auth request unexpected status: 502 while sending to client, client: 192.168.1.218, server: , request: "GET /favicon.ico HTTP/1.1", host: "myserv", referrer: "http://myserv/Main"
     Dec 16 17:39:22 myserv nginx: 2019/12/16 17:39:22 [crit] 5026#5026: *664 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.218, server: , request: "GET /favicon.ico HTTP/1.1", subrequest: "/auth_request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.1.44", referrer: "http://192.168.1.44/Main"
     Dec 16 17:39:22 myserv nginx: 2019/12/16 17:39:22 [error] 5026#5026: *664 auth request unexpected status: 502 while sending to client, client: 192.168.1.218, server: , request: "GET /favicon.ico HTTP/1.1", host: "192.168.1.44", referrer: "http://192.168.1.44/Main"
     Dec 16 17:40:01 myserv crond[1765]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     Dec 16 17:41:01 myserv crond[1765]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     Dec 16 17:42:01 myserv crond[1765]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     Dec 16 17:42:39 myserv nginx: 2019/12/16 17:42:39 [crit] 5026#5026: *881 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.218, server: , request: "GET /Docker HTTP/1.1", subrequest: "/auth_request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "myserv"
     Dec 16 17:42:39 myserv nginx: 2019/12/16 17:42:39 [error] 5026#5026: *881 auth request unexpected status: 502 while sending to client, client: 192.168.1.218, server: , request: "GET /Docker HTTP/1.1", host: "myserv"
     Dec 16 17:42:39 myserv nginx: 2019/12/16 17:42:39 [crit] 5026#5026: *883 connect() to unix:/var/run/php5-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.218, server: , request: "GET /favicon.ico HTTP/1.1", subrequest: "/auth_request.php", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "myserv", referrer: "http://myserv/Docker"
     Dec 16 17:42:39 myserv nginx: 2019/12/16 17:42:39 [error] 5026#5026: *883 auth request unexpected status: 502 while sending to client, client: 192.168.1.218, server: , request: "GET /favicon.ico HTTP/1.1", host: "myserv", referrer: "http://myserv/Docker"
     Dec 16 17:43:01 myserv crond[1765]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     Dec 16 17:44:01 myserv crond[1765]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     Dec 16 17:45:01 myserv crond[1765]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     Dec 16 17:46:01 myserv crond[1765]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
  15. Good to hear! I hope the parity and extended SMART test show green. Thank you so much for your quick response. I REALLY appreciate it. You're the man
  16. OK, it's back online. Thanks for the help! I guess my next step is to do a non-writing parity check and also an extended Smart test on the drive. Does the parity check show problems when the file system is repaired? Should I be concerned about the disk health if the extended smart test is all green?
  17. So, this was the output:

     Phase 1 - find and verify superblock...
             - block cache size set to 6061368 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 1090613 tail block 1090605
     ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
             - scan filesystem freespace and inode maps...
     agi unlinked bucket 26 is 1208760090 in ag 0 (inode=1208760090)
     sb_ifree 2646, counted 2777
     sb_fdblocks 132148717, counted 134296216
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
     Phase 5 - rebuild AG headers and trees...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     disconnected inode 1208760090, moving to lost+found
     Phase 7 - verify and correct link counts...
     Maximum metadata LSN (5:1090609) is ahead of log (1:2).
     Format log to cycle 8.

     XFS_REPAIR Summary    Tue Nov 28 18:42:47 2017

     Phase      Start           End             Duration
     Phase 1:   11/28 18:40:16  11/28 18:40:16
     Phase 2:   11/28 18:40:16  11/28 18:40:54  38 seconds
     Phase 3:   11/28 18:40:54  11/28 18:41:13  19 seconds
     Phase 4:   11/28 18:41:13  11/28 18:41:13
     Phase 5:   11/28 18:41:13  11/28 18:41:13
     Phase 6:   11/28 18:41:13  11/28 18:41:30  17 seconds
     Phase 7:   11/28 18:41:30  11/28 18:41:30

     Total run time: 1 minute, 14 seconds
     done

     Can I just restart the array? If disconnected inodes were moved to lost+found, do I lose data? Can I rebuild missing files from parity?
  18. So I presume I have to use the -L option if starting the array gives me 'disk unmountable'?
  19. Hi Jonnie! Thanks for the quick reply! Here is the result:

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

     So I'll follow the instructions and see how it goes.
  20. (Unraid 6.3.5) Hi all! I have an unmountable disk in my array and I'm trying to fix it. I'm following the instructions here: https://wiki.lime-technology.com/Check_Disk_Filesystems#Drives_formatted_with_XFS I've started the array in maintenance mode and run xfs_repair with the -nv options. Here is the report output:

     Phase 1 - find and verify superblock...
             - block cache size set to 6061368 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 1090613 tail block 1090605
             - scan filesystem freespace and inode maps...
     agi unlinked bucket 26 is 1208760090 in ag 0 (inode=1208760090)
     sb_ifree 2646, counted 2777
     sb_fdblocks 132148717, counted 134296216
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 3
             - agno = 2
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     disconnected inode 1208760090, would move to lost+found
     Phase 7 - verify link counts...
     would have reset inode 1208760090 nlinks from 0 to 1
     No modify flag set, skipping filesystem flush and exiting.

     XFS_REPAIR Summary    Tue Nov 28 18:02:24 2017

     Phase      Start           End             Duration
     Phase 1:   11/28 18:01:48  11/28 18:01:48
     Phase 2:   11/28 18:01:48  11/28 18:01:48
     Phase 3:   11/28 18:01:48  11/28 18:02:06  18 seconds
     Phase 4:   11/28 18:02:06  11/28 18:02:06
     Phase 5:   Skipped
     Phase 6:   11/28 18:02:06  11/28 18:02:24  18 seconds
     Phase 7:   11/28 18:02:24  11/28 18:02:24

     Total run time: 36 seconds

     Should I do a repair? What should my next step be to get my data back? Any help would be deeply appreciated.
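For anyone landing on this thread later: the sequence that worked boils down to dry-run first, mount/unmount to replay the log if possible, and only then fall back to -L. A tiny helper for reading the dry-run result — my own sketch, with a made-up name `interpret_check`; the exit-code meanings (0 = clean, 1 = corruption detected when run with -n) are from the xfs_repair man page:

```shell
#!/bin/bash
# Sketch: interpret the exit status of 'xfs_repair -n <device>'.
# Per the xfs_repair man page, -n exits 0 if the filesystem is clean
# and 1 if corruption was detected. interpret_check is hypothetical.
interpret_check() {
    case "$1" in
        0) echo "clean: no repair needed" ;;
        1) echo "corruption detected: mount/unmount to replay the log, then run xfs_repair without -n" ;;
        *) echo "unexpected status $1: re-check the device path" ;;
    esac
}
# e.g. xfs_repair -n /dev/md1; interpret_check $?
```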
  21. (Unraid 6.1.9) Hi all! I just got one of those HGST He12 drives to replace my parity drive, and even though I don't need to clear it, I thought I'd do a one-cycle preclear first. It says it's been reading at about 250 MB/sec, but it's only 46% through the pre-read phase after 63 hours. At 250 MB/sec it should be able to read the whole drive in about 14 hours, no? At this rate I'd have to wait three weeks or so to start the parity build. Is this right?
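The back-of-envelope arithmetic checks out (my own sketch, assuming the He12's 12 TB decimal capacity and a sustained 250 MB/s):

```shell
#!/bin/bash
# Rough single-pass read time for a 12 TB drive at 250 MB/s.
capacity=$((12 * 10**12))   # 12 TB in bytes (decimal TB, as drives are sold)
rate=$((250 * 10**6))       # 250 MB/s in bytes per second
hours=$(( capacity / rate / 3600 ))
echo "~${hours} hours per full pass"   # integer math, so this floors 13.3 to 13
```

So 46% after 63 hours is nowhere near the reported speed — the reported 250 MB/s and the observed progress can't both be right, which suggests something else is going on.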
  22. Hi trurl, I did a non-correcting parity check after your post and it did turn up two parity errors:

     kernel: md: parity incorrect, sector=0
     kernel: md: parity incorrect, sector=8590376744

     I put off dealing with that because I wanted to swap out the high-decibel fans that came with my XCase build. After endless reboots and tweaks, the fans are now working fine. I did another non-correcting parity check with the same results. I then ran the extended SMART tests on all drives, but saw no problems there. So, from what you said, I guess my understanding of how parity works is incomplete. My assumption was that when the automatic parity rebuild happened, it fully rebuilt the parity, thus removing any previous parity information. My question is: which should I rebuild, parity or data, and why? Any enlightenment would be appreciated.
  23. Hi garycase, Thanks for the clarification, and thanks to RobJ for updating the documentation! I was just confused by the checkbox not showing up according to what I'd read. Now that my int test was successful, I feel comfortable about unraid functioning the way I'd expected. @trurl: I will do a rebuild of the drive. Like I said though, the use case of rebuilding an existing drive is not covered in the V6 manual, so my only way of knowing how to do it was from this thread. Thanks everyone for the help. I'm used to watching tumbleweeds roll by when I post somewhere. This community rocks!
  24. Hi trurl, thanks again for the response. I chose to do a new config and the parity drive began rebuilding right away. When it was finished I did the same test again. This time when I did the new config, the "parity is already valid" checkbox showed up for me and I was able to assign the drives and restart right away. I'm guessing there were uncommitted writes in the queue yesterday, so unraid didn't give me the option? I waited 20 minutes after the writes to do the test, but apparently that wasn't enough time. Now I just need to re-copy my last job from yesterday, since some data was likely lost. Thanks for your help!
  25. Hi trurl, thanks for your answer. Well, that sucks. A day of data copying lost with a simple 10-second test. Searching for the word 'rebuild' in the V6 manual delivers no usable results. There is a section "Replace a Failed Disk", but that makes it sound as if just rebooting the machine and restarting the array with a new disk will automatically trigger a rebuild. That was not the case with my existing disk. How do I trigger this rebuild? Is there documentation somewhere for this?