Unraiding

Members
  • Content Count: 23
  • Community Reputation: 2 Neutral
  • Rank: Newbie


  1. Thanks for the tip. I was able to pull up some guides and use midnight commander to get this done.
  2. I'm sure this is something simple, but I have been searching off and on for a few days and I can't find a clear answer. I am trying to remove some old 2TB drives from my array as they started throwing errors. I used unBALANCE to migrate the data off of the disks, but the share folder is still there so I am receiving the "Share is outside the list of designated disks" warning. I avoid the CLI when possible, so I installed Krusader. I see my shares under 'media', but I don't see my individual disks. I tried manually navigating there, but am not seeing anything. I woul
  3. Thanks. That is what I ended up doing. Disabled, deleted, and re-enabled Docker, then reinstalled my containers. I decided to skip PiHole as it seemed to be the cause, but understand now that the filled cache could have been the root cause of some form of corruption.
  4. I wish my uptime was that high. I'm guessing it had been maybe a month since I had rebooted it for a UPS upgrade. My cache disk is a 1TB 860 Evo and it transfers off the temporary files every night. I had just queued up a few too many downloads that night and it got full for a couple hours until the Mover cleared it out again. It does line up with the general timeline of things going bad though. Could it have been some kind of cascading issue? Cache fills > log memory fills > things start corrupting?
  5. Thanks. It ended up falling apart completely when I stopped it and got stuck in a 'dead' status. I had to delete my docker image and reinstall everything to get it running again. I decided not to reinstall PiHole for now...
  6. It did fill up a couple nights ago, but the scheduled transfer cleared it that same night. It is a 1TB Samsung 860 Evo, so normally room isn't a problem, but I queued a lot of downloads that day apparently. Time wise that does seem to line up with when things started going wrong. Could that be related?
  7. I have been running Plex for months without a problem, but in troubleshooting another issue I ended up rebooting the server. After the reboot Plex would not start. The log is showing the following:
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 01-envfile: executing...
     [cont-init.d] 01-envfile: exited 0.
     [cont-init.d] 10-adduser: executi
  8. I installed this a few weeks ago and it has been running well as far as I know, but in troubleshooting another issue I noticed that under the Docker tab > Log it was showing 'unhealthy'. In the log it was showing the following messages over and over until I stopped it:
     Stopping lighttpd
     lighttpd: no process found
     Starting pihole-FTL (no-daemon) as root
     Starting lighttpd
     Stopping pihole-FTL
     Stopping lighttpd
     kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
     lighttpd: no process found
     Starting pihole-FTL (no-daemon) as root
     Starting light
  9. I noticed that my log was showing 100% full under the Memory section of the dashboard. After googling around a bit I saw that the Fix Common Problems plugin would check for this, ran it, and confirmed that there was a problem (/var/log is getting full (currently 100 % used)). I understand a reboot will clear this problem, but I am hoping that someone can help me determine the cause and/or fix. I have no VMs and just a few common Dockers (Plex, Tautulli, Sonarr, Deluge, BOINC, and PiHole). I have 128GB of RAM so increasing the available size is no problem, but I'm not sure how that
  10. Any idea why a substantial percentage of my tasks are getting a computation error? I thought it was related to stopping and starting the docker a few times as I dialed in the CPU throttling, but it's been up solid for the last ~18 hours and still throwing errors.
  11. I'm not having any problems, but as a reference point I am hovering around 20GB used for the BOINC docker. Edit - recently it has been more like 40-50GB.
  12. I am in a similar place here, but on a fresh install. It takes a few minutes to get connected, but lands on an empty window.
  13. Any idea why my search results are slow? It's not terrible, but averages about 5 seconds per search. I would just assume it's part of not accessing the server directly anymore, but I am seeing results from other shared servers I am connected to (via the internet) before my own.
  14. Just to close the loop on this, I powered the system down, moved the USB drive from the internal port to a rear port, and everything looks good on power up.
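For anyone landing here from post 2: once unBALANCE has emptied the old disks, the "Share is outside the list of designated disks" warning usually just means an empty share folder was left behind on a disk. A minimal sketch of the cleanup, run against a temp directory here so it is safe to try anywhere; on the server the path would be something like /mnt/disk5/media (both the disk number and share name are placeholders, not taken from the posts):

```shell
# Stand-in for an emptied disk -- on the server this would be /mnt/diskN.
DISK=$(mktemp -d)
mkdir -p "$DISK/media"   # the leftover, now-empty share folder

# rmdir only succeeds on empty directories, so data can't be lost by accident.
if rmdir "$DISK/media" 2>/dev/null; then
    echo "removed empty share folder"
else
    echo "folder not empty -- leaving it alone"
fi
```

Using rmdir instead of rm -r is the safety net: if unBALANCE missed a file, the command refuses and the warning keeps pointing you at the right disk.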
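On the full-log problem in post 9, the usual first diagnostic step is finding which file is actually growing. A sketch using du, pointed at a throwaway temp directory here so the example runs anywhere; on the server the target would be /var/log:

```shell
# Build a fake log directory so the example is self-contained.
LOGDIR=$(mktemp -d)
dd if=/dev/zero of="$LOGDIR/syslog" bs=1024 count=64 2>/dev/null  # 64 KB "runaway" log
dd if=/dev/zero of="$LOGDIR/docker.log" bs=1024 count=1 2>/dev/null

# Largest files first; the top entry is normally the one flooding the log.
du -k "$LOGDIR"/* | sort -rn | head -2
```

Whatever sits at the top of that list (syslog, a container log, nginx logs) names the service to chase, which beats rebooting and waiting for it to fill again.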
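And on the cache filling up overnight (posts 4 and 6): a tiny threshold check like the one below, run periodically, would flag the problem before downstream things start corrupting. This is only a sketch; /mnt/cache is the usual Unraid mount but the example checks / so it runs anywhere, and 90 percent is an arbitrary threshold.

```shell
MOUNT=/        # swap in /mnt/cache on the server
THRESHOLD=90   # percent full; arbitrary, tune to taste

# df -P prints one portable line per filesystem; field 5 is "Use%".
USED=$(df -P "$MOUNT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "cache at ${USED}% -- pause downloads or run the Mover early"
else
    echo "cache at ${USED}% -- fine"
fi
```

Hooked up to a notification, this turns "the cache silently sat full for a couple of hours" into an alert you can act on before the log fills too.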