Unraiding

Everything posted by Unraiding

  1. Thanks for the tip. I was able to pull up some guides and use midnight commander to get this done.
  2. I'm sure this is something simple, but I have been searching off and on for a few days and I can't find a clear answer. I am trying to remove some old 2TB drives from my array as they started throwing errors. I used unBALANCE to migrate the data off of the disks, but the share folder is still there so I am receiving the "Share is outside the list of designated disks" warning. I avoid the CLI when possible, so I installed Krusader. I see my shares under 'media', but I don't see my individual disks. I tried manually navigating there, but am not seeing anything. Could someone point me to the best way to delete these folders and/or tell me what I am missing in Krusader? (A CLI approach is sketched at the end of this list.)
  3. Thanks. That is what I ended up doing. Disabled, deleted, and re-enabled Docker, then reinstalled my containers. I decided to skip PiHole as it seemed to be the cause, but understand now that the filled cache could have been the root cause of some form of corruption.
  4. I wish my uptime was that high. I'm guessing it had been maybe a month since I had rebooted it for a UPS upgrade. My cache disk is a 1TB 860 Evo and it transfers off the temporary files every night. I had just queued up a few too many downloads that night and it got full for a couple hours until the Mover cleared it out again. It does line up with the general timeline of things going bad though. Could it have been some kind of cascading issue? Cache fills > log memory fills > things start corrupting?
  5. Thanks. It ended up falling apart completely when I stopped it and got stuck in a 'dead' status. I had to delete my docker image and reinstall everything to get it running again. I decided not to reinstall PiHole for now...
  6. It did fill up a couple nights ago, but the scheduled transfer cleared it that same night. It is a 1TB Samsung 860 Evo, so normally room isn't a problem, but I queued a lot of downloads that day apparently. Time wise that does seem to line up with when things started going wrong. Could that be related?
  7. I have been running Plex for months without a problem, but in troubleshooting another issue I ended up rebooting the server. After the reboot Plex would not start. The log is showing the following (some docker-image checks are sketched at the end of this list):
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 01-envfile: executing...
     [cont-init.d] 01-envfile: exited 0.
     [cont-init.d] 10-adduser: executing...
     -------------------------------------
     [linuxserver.io ASCII logo]
     Brought to you by linuxserver.io
     -------------------------------------
     To support LSIO projects visit:
     https://www.linuxserver.io/donate/
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 40-chown-files: executing...
     [cont-init.d] 40-chown-files: exited 0.
     [cont-init.d] 45-plex-claim: executing...
     [cont-init.d] 45-plex-claim: exited 0.
     [cont-init.d] 50-gid-video: executing...
     [cont-init.d] 50-gid-video: exited 0.
     [cont-init.d] 60-plex-update: executing...
     Docker is used for versioning skip update check
     [cont-init.d] 60-plex-update: exited 0.
     [cont-init.d] 99-custom-scripts: executing...
     [custom-init] no custom files found exiting...
     [cont-init.d] 99-custom-scripts: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     Starting Plex Media Server.
     [services.d] done.
     Connection to 162.216.19.157 closed by remote host.
     Dolby, Dolby Digital, Dolby Digital Plus, Dolby TrueHD and the double D symbol are trademarks of Dolby Laboratories.
     decoder information: 102
     decoder information: 102
     decoder information: 102
     Connection to 45.79.129.106 closed by remote host.
     Connection to 172.104.29.70 closed by remote host.
     Connection to 50.116.52.102 closed by remote host.
     Connection to 50.116.52.102 closed by remote host.
     Connection to 50.116.52.102 closed by remote host.
     Connection to 104.200.30.183 closed by remote host.
     Connection to 96.126.104.168 closed by remote host.
     Connection to 104.200.30.183 closed by remote host.
     Connection to 96.126.104.168 closed by remote host.
     Connection to 66.175.212.202 closed by remote host.
     Connection to 172.104.29.70 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     Connection to 50.116.59.145 closed by remote host.
     [cont-finish.d] executing container finish scripts...
     [cont-finish.d] done.
     [s6-finish] waiting for services.
     Critical: libusb_init failed
     [s6-finish] sending all processes the TERM signal.
     [s6-finish] sending all processes the KILL signal and exiting.
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 01-envfile: executing...
     [cont-init.d] 01-envfile: exited 0.
     [cont-init.d] 10-adduser: executing...
     usermod: no changes
     -------------------------------------
     [linuxserver.io ASCII logo]
     Brought to you by linuxserver.io
     -------------------------------------
     To support LSIO projects visit:
     https://www.linuxserver.io/donate/
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 40-chown-files: executing...
     [cont-init.d] 40-chown-files: exited 0.
     [cont-init.d] 45-plex-claim: executing...
     [cont-init.d] 45-plex-claim: exited 0.
     [cont-init.d] 50-gid-video: executing...
     [cont-init.d] 50-gid-video: exited 0.
     [cont-init.d] 60-plex-update: executing...
     Docker is used for versioning skip update check
     [cont-init.d] 60-plex-update: exited 0.
     [cont-init.d] 99-custom-scripts: executing...
     [custom-init] no custom files found exiting...
     [cont-init.d] 99-custom-scripts: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     [services.d] done.
     Starting Plex Media Server.
     Connection to 45.56.104.126 closed by remote host.
     Dolby, Dolby Digital, Dolby Digital Plus, Dolby TrueHD and the double D symbol are trademarks of Dolby Laboratories.
     connect: Connection timed out
     s6-rmrf: fatal: unable to remove /var/run/s6/services: I/O error
     s6-rmrf: fatal: unable to remove /var/run/s6/services: I/O error
     s6-rmrf: fatal: unable to remove /var/run/s6/services: I/O error
     s6-rmrf: fatal: unable to remove /var/run/s6/services: I/O error
     s6-rmrf: fatal: unable to remove /var/run/s6/services: I/O error
     s6-rmrf: fatal: unable to remove /var/run/s6/services: I/O error
     s6-rmrf: fatal: unable to remove /var/run/s6/services: I/O error
     s6-rmrf: fatal: unable to remove /var/run/s6/services: I/O error
     s6-rmrf: fatal: unable to remove /var/run/s6/services: I/O error
     s6-rmrf: fatal: unable to remove /var/run/s6/services: I/O error
  8. I installed this a few weeks ago and it has been running well as far as I know, but in troubleshooting another issue I noticed that under the Docker tab > Log it was showing 'unhealthy'. In the log it was showing the following messages over and over until I stopped it (a quick health check from the console is sketched at the end of this list):
     Stopping lighttpd
     lighttpd: no process found
     Starting pihole-FTL (no-daemon) as root
     Starting lighttpd
     Stopping pihole-FTL
     Stopping lighttpd
     kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     lighttpd: no process found
     Starting pihole-FTL (no-daemon) as root
     Starting lighttpd
     Stopping pihole-FTL
     Stopping lighttpd
     kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     lighttpd: no process found
     Starting lighttpd
     Starting pihole-FTL (no-daemon) as root
     Stopping pihole-FTL
     Stopping lighttpd
     kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     lighttpd: no process found
     Starting lighttpd
     Starting pihole-FTL (no-daemon) as root
     Stopping pihole-FTL
     kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     Stopping lighttpd
     lighttpd: no process found
     Starting lighttpd
     Starting pihole-FTL (no-daemon) as root
     Stopping pihole-FTL
     Stopping lighttpd
     kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     lighttpd: no process found
     Starting lighttpd
     Starting pihole-FTL (no-daemon) as root
     Stopping pihole-FTL
     Stopping lighttpd
     kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     lighttpd: no process found
     Starting lighttpd
     Starting pihole-FTL (no-daemon) as root
     Stopping pihole-FTL
     kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     Stopping lighttpd
     lighttpd: no process found
     Starting pihole-FTL (no-daemon) as root
     Starting lighttpd
     Stopping pihole-FTL
     kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     Stopping lighttpd
     lighttpd: no process found
     Stopping cron
     [cont-finish.d] executing container finish scripts...
     [cont-finish.d] done.
     [s6-finish] waiting for services.
     [s6-finish] sending all processes the TERM signal.
     [s6-finish] sending all processes the KILL signal and exiting.
  9. I noticed that my log was showing 100% full under the Memory section of the dashboard. After googling around a bit I saw that the Fix Common Problems plugin would check for this, ran it, and confirmed that there was a problem (/var/log is getting full (currently 100 % used)). I understand a reboot will clear this problem, but I am hoping that someone can help me determine the cause and/or fix. I have no VMs and just a few common Dockers (Plex, Tautulli, Sonarr, Deluge, BOINC, and PiHole). I have 128GB of RAM, so increasing the available size is no problem, but I'm not sure how that is done, and I don't want to just treat the symptom if there is a memory leak or other problem somewhere. Diagnostics attached. (A way to see what is filling /var/log is sketched at the end of this list.)
     *edit - I rebooted the machine and it came up fine, but Plex refuses to start and PiHole displays as running but is not connectable, sigh
     *edit2 - I tried to delete PiHole and it got stuck in the status of 'dead', so I disabled, deleted, and re-enabled my dockers without PiHole
     unraid1-diagnostics-20201116-0121.zip
  10. Any idea why a substantial percentage of my tasks are getting a computation error? I thought it was related to stopping and starting the docker a few times as I dialed in the CPU throttling, but it's been up solid for the last ~18 hours and still throwing errors.
  11. I'm not having any problems, but as a reference point I am hovering around 20GB used for the BOINC docker. (A way to break this number down is sketched at the end of this list.)
     Edit - recently it has been more like 40-50GB.
  12. I am in a similar place here, but on a fresh install. It takes a few minutes to get connected, but lands on an empty window.
  13. Any idea why my search results are slow? It's not terrible, but averages about 5 seconds per search. I would just assume it's part of not accessing the server directly anymore, but I am seeing results from other shared servers I am connected to (via the internet) before my own.
  14. Just to close the loop on this, I powered the system down, moved the USB drive from the internal port to a rear port, and everything looks good on power up.
  15. Ah, ok. I am currently using the internal USB2 port with a USB2 Sandisk drive trying to avoid these errors. Is it safe to assume I need to reboot the system to move the drive?
  16. Up front, I am an Unraid novice and my build is relatively new. My server is accessible via the web GUI, I can access/modify files, and my dockers (Plex, Deluge, etc.) are working fine, but multiple things are not working. Problems I am seeing:
     - My dashboard is blank (all other tabs seem fine)
     - I can't update my dockers
     - I can't get my diagnostics via the GUI (I click download and it goes to a blank page) or the CLI (error below)
     My searching turned up nothing, so I'm hoping for some guidance here. My instinct is to reboot the system and hope it clears itself, but this seems semi-serious. (Pulling diagnostics from a terminal is sketched at the end of this list.)
     unraid1-syslog-20200229-2007.zip
  17. Thanks PeteAsking. I searched around and found these procedure options on the Wiki in case anyone else runs into a similar issue: https://wiki.unraid.net/Shrink_array
  18. I recently migrated my Plex database from Windows over to a new Unraid build using the Linuxserver.io docker. For some reason, newly added movies correctly pull the metadata, but do not add the poster. When I edit the movie entry I see all of the poster options I normally would and am able to choose one with no problems. I've been searching, but haven't found anything that helped. Any suggestions?
  19. Thanks for the replies everyone. I think PeteAsking's unassigned suggestion will help me get over the mental block. Now the follow up question - what is the best way to implement this change? I reordered the drives slightly once to confirm the issue wasn't a bad backplane connection and it started a full parity rebuild (~48 hours). Any way I can avoid doing this again since the drives are empty?
  20. Ah, alright. If I understand it correctly, I would be looking at two primary risks:
     - One of the failing disks dies, then the other failing disk dies on the rebuild > I lose the array
     - An apparently 'good' disk dies, then one of the failing disks dies on the rebuild > I lose the array
     (If so, looks like I'm pulling them out)
  21. I am working through getting my first Unraid server off the ground. So far things are going well, but I have two old WD Red drives that are apparently failing and out of warranty. They have been running in a Windows box for years without issue, but are unable to complete the extended SMART tests and are showing read errors. (Running and reading these tests from the console is sketched at the end of this list.) Both drives are currently empty and I was not planning to use them for any irreplaceable data. What is the risk of keeping these disks in the array until they actually fail? I am running single parity currently and have plenty of extra capacity at the moment, but it feels wrong just throwing away questionable disks. General system specs:
     - Supermicro 846 chassis with dual 920W SQ PSUs
     - Supermicro X9DR3-F
     - Intel Xeon E5-2695v2 x2
     - 128GB Samsung ECC RAM
     - Drives follow
  22. Just adding my thanks and experience. New 12TB drives formatted as XFS ended up with ~84GB used.
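
For the leftover share folders in post 2: on Unraid each array disk is mounted at /mnt/diskN, and a share is just a folder of the same name on one or more of those disks, so the warning clears once the folder is gone from the removed disks. A minimal CLI sketch, assuming the folder really is empty after the unBALANCE move; the disk number and share name below are placeholders:

    ls -la /mnt/disk3/Media    # confirm nothing was left behind (disk number and share name are examples)
    rmdir /mnt/disk3/Media     # rmdir refuses to delete a non-empty directory, which is the safety net here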
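For the Plex log in post 7: the repeated s6-rmrf I/O errors pointed at the docker image itself rather than at Plex, which lines up with the later posts where deleting and rebuilding the image fixed it. A rough sketch of checks that could be run first, assuming the stock Unraid layout where docker.img is loop-mounted at /var/lib/docker and the container name is plex (both are assumptions):

    df -h /var/lib/docker                      # is the docker image mounted, and is it out of space?
    losetup -a | grep -i docker                # which loop device backs docker.img
    dmesg | grep -iE 'loop|btrfs|i/o error'    # kernel-side errors that would match the s6-rmrf failures
    docker logs --tail 100 plex                # container name is a guess; use the name shown on the Docker tab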
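For the 'unhealthy' Pi-hole container in post 8, a quick way to see the same health status and restart loop from a console session; the container name pihole is an assumption, so substitute whatever the Docker tab shows:

    docker ps --filter health=unhealthy                         # list containers currently failing their health check
    docker inspect --format '{{.State.Health.Status}}' pihole   # health status for one container
    docker logs --tail 50 pihole                                 # the same start/stop loop shown in the Docker tab log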
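For the full /var/log in post 9: on Unraid /var/log is a small RAM-backed tmpfs, so "100% used" usually means one log file is growing much faster than normal rather than a genuine shortage of space. A sketch of finding the offender (file names will vary):

    df -h /var/log                   # size of the tmpfs and how much of it is used
    du -sh /var/log/* | sort -h      # which file is actually eating the space
    tail -n 50 /var/log/syslog       # whatever is spamming the log is usually visible at the end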
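For the BOINC disk usage numbers in post 11, this is how the container's writable layer could be separated from its working data; the appdata path is an assumption based on the usual template default:

    docker ps --size                   # writable-layer size per running container
    du -sh /mnt/user/appdata/boinc     # BOINC's project data, if the template maps it here (path is a guess)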
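For the diagnostics problem in post 16: Unraid can also build the diagnostics zip from a terminal session, which is worth trying when the GUI download comes back blank. A minimal sketch; to the best of my knowledge this is the stock command and it writes the zip to the flash drive:

    diagnostics          # builds the zip and reports where it was written
    ls -lh /boot/logs/   # the zip normally lands here, on the USB flash device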
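For the failing drives in post 21, a console sketch of kicking off an extended SMART test and reading the result; /dev/sdX is a placeholder for the device shown on the Main tab:

    smartctl -t long /dev/sdX     # start the extended (long) self-test; it runs in the background on the drive
    smartctl -a /dev/sdX | less   # full attributes plus the self-test log once it finishes (or aborts)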