elco1965

Members · 67 posts

Everything posted by elco1965

  1. trurl, I have rebuilt the failed drive and made the changes suggested in "Dealing with unclean shutdowns". The server now shuts down and restarts without error and does not initiate a parity check once restarted. Thanks again for the help.
  2. Thanks for the suggestion. I actually saw that yesterday when I was searching the forum and attempted to make some changes, but nothing I did in the GUI would take. I will take another look after the disk is rebuilt.
  3. Sorry I missed that yesterday. I have navigated to flash/config/disk.cfg and set startArray to no. When you say "force a reboot" do you mean via the command line or the button on the main page? I haven't tried the main page button yet, but I am wondering whether the array will respond or I will get the same error. And thanks for your time. Your help is appreciated.
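The disk.cfg edit described above can be rehearsed safely before touching the real file. This is a minimal sketch that performs the same substitution on a scratch copy; on an actual server the file is /boot/config/disk.cfg (the "flash" share), and backing it up first is assumed to be a sensible precaution rather than an official procedure:

```shell
# Rehearse the startArray edit on a scratch file standing in for
# /boot/config/disk.cfg; the same sed line applies to the real file.
cfg=$(mktemp)
printf 'startArray="yes"\n' > "$cfg"          # stand-in for the real setting
cp "$cfg" "$cfg.bak"                          # keep a backup before editing
sed -i 's/^startArray="yes"/startArray="no"/' "$cfg"
grep '^startArray' "$cfg"                     # confirm the change took
```

After a change like this, the array should stay stopped on the next boot, which is what the "force a reboot" step relies on.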
  4. I get the same error when I try to change disk settings. Swapping auto start from yes to no will not take. Jan 5 19:22:40 BlackHole nginx: 2024/01/05 19:22:40 [error] 3170#3170: *135011 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 10.10.20.20, server: , request: "POST /update.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "10.10.20.12", referrer: "http://10.10.20.12/Settings/DiskSettings"
  5. It's been a while since I made a backup. I was using Unraid Connect but had trouble with it also. I cannot change the Management settings, like Use SSL/TLS to Strict for manual port forwarding. The same error pops up in the log when I hit apply.
  6. I have unplugged all devices aside from one desktop. I know there is nothing accessing the storage on the server. I still cannot stop the server and I still get the same error.
  7. That is a negative. But I just noticed the failed drive, disk 13 with the red X, is also showing up under Unassigned Devices.
  8. I don't think there is anything accessing the storage drives. I disabled the Docker and VM services, and there was no terminal open to the server.
  9. I have noticed I get the above-mentioned error when trying, unsuccessfully, to make changes in the array's settings.
  10. Hello everyone, Recently a drive failed. I have a replacement, and when I attempt to stop the array to replace the failed drive with the new one, I see the following error in the log and the array will not stop. Jan 5 12:14:33 BlackHole nginx: 2024/01/05 12:14:33 [error] 18699#18699: *6884397 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 10.10.20.20, server: , request: "POST /update.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "10.10.20.12", referrer: "http://10.10.20.12/Main" I have had an issue that I haven't found a solution to for a long time now: the array automatically goes into an auto parity check after a power failure or reboot. I have found that disabling the Docker service and the VM manager usually prevents this from happening. I like to disable these services when rebuilding a drive anyway. But I am not sure what to do now. Of course I don't want to lose the data on the failed drive, and I cannot stop the array to replace said drive. I am also afraid to reboot the server with the aforementioned error. Diagnostic file attached. Thank you blackhole-diagnostics-20240105-1216.zip
  11. Thank you for the help, JorgeB. I used the file manager to move the appdata folder, and now the mover is running at an acceptable transfer rate of ~175MB/s.
  12. Thanks for looking. I will stop the mover and use the file manager. I will report back.
  13. Mover was moving. There was a slight increase in usage on the cache drive. blackhole-diagnostics-20230713-0933.zip
  14. Mover is running very slowly. It has moved less than 100MB in 3 hours. Is there a safe way to stop the mover? Attached are diagnostics and a screenshot from System Stats. Thank you blackhole-diagnostics-20230713-0814.zip
  15. Having a problem with Nextcloud TOTP. When the app is enabled I get the log error in the attached txt file and cannot open the Security tab when in the settings; I get the error shown in the attached .png. When I attempt to disable or remove the TOTP app, I get a message on the login screen stating my two-factor authentication app cannot be found. I have searched through the forums but haven't found anything like this yet. Thanks in advance for your time. Error.txt
  16. This reminds me of when my Grandfather would slap me upside the head for saying or doing something stupid. You are correct. I was stuck on a folder that I saw in /mnt. This was one of the things I deleted the other day. Anyway, I was having a couple of issues with Nextcloud as well. I put that share on the cache hoping for a gain in "performance". I will be moving it back to "yes". Thanks again for your help. As someone who is really out of his league when it comes to the proper use of unRAID, I really appreciate the community here.
  17. I think I saw and deleted that folder after your first response. I have also attached an updated diagnostics. It should be gone. blackhole-diagnostics-20221123-1037.zip
  18. Where would I look for this? I am not seeing it under the shares tab.
  19. That is a good question. I don't know.
  20. Thank you sir. This was it. I moved the system files from the array and fixed the Plex container mapping. Now when I run the "Where can I see what folders are taking up my RAM?" script found here, it returns the attached, and things are looking much better. I still have 2 entries that show up: 0 /mnt/rootshare and /mnt. I believe that /mnt/rootshare was created following Spaceinvader One's video about root shares for Windows users. I am guessing it's supposed to be there, and it looks as if it is using zero memory. I don't know about /mnt. Also, when browsing the files using the file browser, there are 2 folders in /root and I do not know what they are: "disks" and "remotes". I tried to delete disks but it returns on its own. I am assuming these are supposed to be there or were created by mistake? Memory Output.txt blackhole-diagnostics-20221122-0849.zip
  21. Thanks. I ran the latest version of Memtest a couple days ago. It ran 3 passes detecting no errors.
  22. Thanks for the response. I'll take a look at the Docker as soon as I get the server running. Is there a good reason to continue with the Memtest on ECC RAM?
  23. Fix Common Problems was reporting "Rootfs getting full". I mostly followed the directions that led me to the creation of this post. I have been having a couple of issues lately, one of them being with the web GUI. It would become unresponsive. I could log on, but when I clicked on the Main tab the drive assignments, including the cache pool, were missing. VMs would also disappear. The only way to get these back was to stop and restart the array via the terminal. I first noticed the issues when attempting to preclear a couple of drives. The preclear would be aborted almost immediately at the beginning of the process with the error "detected memory low". I read through a few posts; some suggested bad memory. I also found the "Where can I see what folders are taking up my RAM?" post. I ran the script "https://raw.githubusercontent.com/Squidly271/misc-stuff/master/memorystorage.plg" and it returned /mnt/plex....something. I unfortunately didn't save the output before the server became mostly unresponsive. The strange thing about the "what folders are taking up my RAM" script: Plex and all other Docker containers were stopped, and the dashboard was reporting about 70% of the RAM in use. I had some extra memory and swapped out some of the modules. I am 19 hours into a Memtest with no errors reported. The server has ECC server RAM. I read through the forums and some have suggested the version of Memtest that comes with unRAID is obsolete or doesn't work with ECC memory. I don't know if I should continue with the test or not. Any help with this would be greatly appreciated. Thank you blackhole-diagnostics-20221120-2247.zip
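The "Rootfs getting full" situation described above comes from Unraid's root filesystem living in RAM, so anything written outside the mounted array/pool paths consumes memory. A minimal sketch of how to check this from the terminal (standard `df`/`du` usage, not an official Unraid tool; the `-x` flag keeps `du` on the root filesystem so it doesn't descend into /mnt or /boot):

```shell
# How full is the RAM-backed root filesystem?
df -h /

# Largest top-level directories on rootfs only (-x = stay on one filesystem);
# permission errors are discarded so the listing stays readable.
du -xh -d1 / 2>/dev/null | sort -rh | head -n 15
```

A directory unexpectedly large here (for example, a container writing into an unmounted path) is the usual culprit behind the "detected memory low" symptoms.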
  24. After moving some files the cache started to fill up again. Turns out the Unifi-Controller Docker was filling the cache with logs in cache/appdata/unifi-controller/data/db/journal. The last time I looked at the size calculation of this folder it was 1.28TB. I have no idea why. I deleted and recreated the Docker img, reinstalled all Docker containers, and so far things seem to be back in order. JorgeB, thanks again for pointing me in the right direction.