Everything posted by jcsnider

  1. Just finally updated to v3 and I really appreciate your willingness to take charge and continue supporting this plugin. As someone who also doesn't use and has no interest in the MyServers plugin, I thank you for continuing to support backups of the USB/flash drive. It'd be nice if that data could also be compressed to an archive, but the post-run script for that is simple enough; a sketch of one is below.
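     As a minimal sketch (not the plugin's actual mechanism), assuming the plugin writes its backup to /mnt/user/backups/flash, a post-run script could look like:

         #!/bin/bash
         # Hypothetical post-run script: compress the most recent flash backup
         # into a dated tar.gz and delete the uncompressed copy.
         # SRC and DEST are assumed paths; match them to the plugin's settings.
         SRC="/mnt/user/backups/flash"
         DEST="/mnt/user/backups/flash-archives"
         mkdir -p "$DEST"
         tar -czf "$DEST/flash-$(date +%Y%m%d-%H%M%S).tar.gz" -C "$SRC" . \
           && rm -rf "${SRC:?}"/*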
  2. He posted above that he was working on a new version, hopefully out soon.
  3. Thanks so much for creating this tool! I've been working on building a kernel with hannesha's it87 updates, which allow monitoring temperatures and somewhat controlling fan speeds on motherboards with IT8688E and IT8792E Super I/O chipsets (namely my Gigabyte X570S Aorus Master). I found where thor2002ro added those changes to his Unraid kernel, so I singled out and exported the it87 changes commit as a user patch for this docker, and it seems to have worked well.

     For others looking for information regarding the Gigabyte X570 Aorus boards (just dumping my notes/thoughts in hopes it'll help others endlessly googling for information and solutions): I am aware of the Gigabyte WMI driver being introduced in newer Linux kernels, but it appears that will only be good for viewing temperature info and maybe fan speeds; it will not have any ability to control fans.

     Hannesha's it87 changes linked above for the X570S Gigabyte boards allow working temp monitoring, and there is partial fan control support. Setting the pwm value for any fan headers on the IT8792E works but does not persist: any value set takes effect, but reverts to the previous value in under a second. (To confirm, try echoing 0 into the pwm control inside a loop, then kill the loop; see the sketch below.) As for the fan headers on the IT8688E chipset, I am able to stop and control those fans and my settings seem to persist, but I am still exploring/learning of any pitfalls there.
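     A quick way to reproduce the revert behavior; the hwmon path below is an assumption, so find the it87 device first:

         # Find which hwmon device belongs to the it87 driver (path varies per boot).
         grep . /sys/class/hwmon/hwmon*/name

         # Hypothetical test against an assumed device/header (hwmon4/pwm3):
         # hold the pwm at 0 in a loop, then kill the loop and watch it revert.
         while true; do
           echo 0 > /sys/class/hwmon/hwmon4/pwm3   # fan off
           sleep 0.2
         done
         # After killing the loop (Ctrl+C), the IT8792E restores the previous
         # pwm value within about a second.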
  4. Yeah the redis changes have kept me stable since December. Highly recommend giving them a shot.
  5. I have had no issues since making the following three changes:
     • Switched to MySQL for Nextcloud database storage (although I don't think this alone solved it).
     • Moved over to using a Redis container for the Nextcloud cache.
     • Changed the Nextcloud docker PHP config to use Redis for the session cache.
     See the edits at the bottom of this post for more info (a sketch of the cache config is below). YMMV.
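     For reference, a sketch of the config.php entries involved in the Redis cache change; the host value is an assumption (use your Redis container's address), and this is not necessarily my exact config:

         'memcache.local' => '\OC\Memcache\APCu',
         'memcache.locking' => '\OC\Memcache\Redis',
         'memcache.distributed' => '\OC\Memcache\Redis',
         'redis' => array(
           'host' => '172.17.0.X',   // placeholder; use your Redis container's IP
           'port' => 6379,
         ),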
  6. Mine has locked up again twice today, so I am attempting to dig a little deeper. It appears that if you get the process ids of the processes found in D states using the command I posted above (ps axl | awk '$10 ~ /D/'), you can take those pids and get a list of file handles for each process using the following command:

         ls -l /proc/<pid>/fd

     Here's my output:

     My assumption is that all of these php-fpm workers are getting stuck on disk IO, so having 4 processes trying to write to that sqlite db could be the problem. I don't have redis or anything set up to handle transactional file locking either, so my db is likely getting hit a lot for that. Are you all using sqlite databases? I was previously using MariaDB (I think things were well then), but the MariaDB docker kept getting corrupted and it was tedious having to run commands all the time to fix it and get it running again.

     Edit: Switching to a MySQL database and using a Redis container for cache did not solve the issue. It seems as if I can recreate the problem fairly consistently by booting up my mobile app, for some reason. The next debugging step is to use a temporary upload location that is off the array/primary cache.

     Edit 2: Moving my upload and PHP tmp directories onto an unassigned disk seems to have helped, but the temporary directory was getting a lot of sess_XXXXXXXXXXXXXXXXXXXX files written to it. Turns out they are encrypted PHP session files. To further optimize and reduce disk usage, I added the following lines to my nextcloud/php/php-local.ini file so that sessions would also be handled by Redis:

         session.save_handler = redis
         session.save_path = tcp://172.17.0.X:6379

     After rebooting my NextCloud container again, things seem to be working well. I will let it run for a while and see if it lasts.
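     Combining the two steps, a small loop (assuming the same ps axl column layout as above, where the pid is column 3) dumps the open file handles of every D-state process:

         # For each process stuck in uninterruptible sleep, list its open fds.
         for pid in $(ps axl | awk '$10 ~ /D/ {print $3}'); do
           echo "== pid $pid =="
           ls -l /proc/$pid/fd
         done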
  7. Yes, I was using it before. Not sure if it was a problem or not, but in trying to debug these issues I have stopped using that option and instead have modified my scripts/applications that store files within NextCloud to also trigger an occ files:scan (a sketch of that is below). I think that option also made page load times longer, but that could have been my imagination.

     Yes, but my Unraid has only been running 1 day, 19 hours. Historically, when everything has broken down, uptime is greater than 3 days, sometimes upwards of a week or two.

     Very interesting. I wonder if in your case you would have also been able to collect diagnostics. I have never been able to get a clean shutdown or collect diagnostics once things have started to lock up.

     On a side note: how often is your Mover configured to run? I had mine set to every hour, but now I have changed it to once a day (I have far fewer files being added to my array). Maybe it was interfering somehow? Idk.
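     For the rescan trigger, a hedged sketch of what I mean; the container name (nextcloud), the app user (abc in linuxserver images), and the occ path are assumptions that vary per image:

         # Hypothetical rescan after an external script drops files into the data dir:
         docker exec -u abc nextcloud php /config/www/nextcloud/occ files:scan --all
         # Scanning everything is slow on big installs; scoping with
         # --path="<user>/files/<folder>" instead of --all is cheaper.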
  8. I'm guessing if you ssh into your Unraid server and try to run lsof on any location (ie: lsof /mnt/user), that will also never finish executing (requiring a new terminal instance in order to issue further commands). If you try to fetch diagnostics it will never finish executing either, and if you try to collect those through the WebUI then eventually the WebUI will start giving 500 errors until it's restarted.

     I never assigned a /tmp docker path, but I modified the PHP config within NextCloud to use a subdirectory of the /data folder to store upload data as it's being received, so effectively I am doing the same thing to avoid my docker image file growing dramatically in usage (the relevant config sketch is below).

     I needed to get everything back up and running last night, so I rebooted. I went into my NextCloud config and changed all my shared paths' access properties to 'Read/Write - Shared' instead of just 'Read/Write'. My instance is working for now, but I don't expect that it will last.

     I guess in each of our cases we have designated a temporary upload location that is on our arrays. I don't see how that'd be a problem, but for debugging purposes maybe I will change it to an unassigned device and see how that works.

     Edit: Are either of you using the 'filesystem_check_changes' => 1 flag in your NextCloud config.php?
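     The upload-location change, sketched as php-local.ini entries; /data/tmp is an assumed subdirectory (create it and make sure the app user can write to it):

         ; Hypothetical entries: keep PHP upload scratch space out of the docker image.
         upload_tmp_dir = /data/tmp
         sys_temp_dir = /data/tmp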
  9. Following up. The same issue is happening here as well. Really diving into the issue tonight in hopes of figuring out what's going on... Other posts/threads linked below seem to describe the same issue, dating back to Unraid v6.5 and NextCloud 18.X.X. (I am on NextCloud 20 with Unraid 6.9.X Beta 35.) I am sort of thinking that the workload of running NextCloud is causing the issue to present itself, but NextCloud isn't the actual problem; instead it might be hardware related.

     @CyaOnDaNet @p3rky2005 If you ssh into your servers, what is the output of the following command? It will list any processes stuck in an uninterruptible IO wait state (the uninterruptible part meaning that we can't kill them, even if we use kill -9). In my instance I have a few php-fpm: pool processes which are running from the NextCloud docker.

         ps axl | awk '$10 ~ /D/'

     If I try to run lsof on any of my drives, that also enters a D state and never finishes, which is why I think the WebUI crashes when I try to collect diagnostics, in which case I must restart it with the following commands:

         /etc/rc.d/rc.php-fpm restart
         /etc/rc.d/rc.php-fpm reload

     Are you all using LSI cards? Mine is a PERC H310 card flashed to IT mode, but I do not know what firmware version or anything off the top of my head.

     For reference, here are other posts/topics I have found detailing similar problems:
     https://forums.unraid.net/topic/99669-nextcloud-locking-up/?do=findComment&comment=919516
     https://forums.unraid.net/topic/48383-support-linuxserverio-nextcloud/page/163/?tab=comments#comment-919547
     https://forums.unraid.net/topic/48383-support-linuxserverio-nextcloud/page/112/?tab=comments#comment-798246
     https://forums.unraid.net/topic/83174-docker-container-hangs/
     https://forums.unraid.net/topic/90676-high-load-but-low-cpu-utilization-docker-frozen-unable-to-stop-array-reboot-needed/