craigr

Members
  • Posts: 767
  • Joined

  • Last visited

Everything posted by craigr

  1. Have you seen this thread: Also, many things have changed since this thread here was first started.
  2. Did you power down or do a restart? Just wondering if your system crashed or if you even tempted fate?
  3. Strange and interesting. Thanks for the data point. In my case, this never happened until 6.12.x though.
  4. It's impossible to get log data now, because it won't crash again (based on prior experience). It only happens on the first boot after the upgrade.
  5. Thank you for these ideas. I like them all, though I think 2 & 3 have the most likelihood of providing some help. After no crash upgrading to 6.12.3 I didn't worry, but next time I will be sure to write the log to flash as well and hopefully glean some information from that. Thanks for your help and again kind regards, craigr
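[Editor's note: the "write the log to flash" idea above matters because Unraid's syslog lives in RAM and is lost at the moment of a crash. A minimal sketch of grabbing a copy onto persistent storage before a risky reboot; the paths here are temp-file stand-ins (for /var/log/syslog and a folder on the /boot flash drive) so the demo runs anywhere:]

```shell
SRC=$(mktemp)          # stand-in for /var/log/syslog
OUTDIR=$(mktemp -d)    # stand-in for a logs folder on the flash drive (/boot)
echo "Jul 10 04:58:00 unRAID kernel: example entry" > "$SRC"

# Snapshot the live syslog with a timestamped name so reboots don't clobber it
cp "$SRC" "$OUTDIR/syslog-$(date +%Y%m%d-%H%M%S).txt"
ls -l "$OUTDIR"
```

Unraid's Syslog Server settings also offer a mirror-to-flash option that does this continuously, at the cost of extra writes to the flash drive.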
  6. Upgraded from 6.12.3 to 6.12.4 and it went mostly smoothly; everything is running fine now. However, I followed the instructions and did shut down the array before upgrading. Upon reboot, unRAID briefly came online (web GUI and all) and then completely crashed. This has happened upgrading to 6.12.0, 6.12.1, 6.12.2, and 6.12.4; but NOT 6.12.3. My IPMI records show this for each of the upgrades. These are the ONLY times my machine has EVER crashed; only after the FIRST reboot after the four upgrades I have mentioned. I have had this configuration since 2019 with very few changes. Supermicro IPMI: unRAID IPMI plugin: Above are my IPMI errors, one for each upgrade 😬. When it crashes there is no cursor blinking at the terminal on the monitor connected to the machine, no ssh, no web GUI, but the IPMI fan control seems to be running as the fans are not at full speed. I have to do a hard shutdown with the power button, and then of course a parity check starts on the next reboot, which I always cancel because there will be no errors. Also, after this update, for some reason my Deluge docker would not start. I forced an update and it started fine after that. I really don't like these crashes after moving to 6.12.x on my system. Thanks for the update and kind regards, craigr
  7. Did you follow the instructions in the first post to this thread before rebooting?
  8. I seem to have upgraded from 6.12.2 to 6.12.3 without incident. I had been out of town and did not want to apply the update remotely, as both previous versions (6.12.1 and 6.12) crashed my machine on the first reboot after update and had to be hard reset. This update is so far painless and rebooted without problems.
  9. And again on 6/21. I should pay more attention to my emails!
      error: Compressing program wrote following message to stderr when compressing log /var/log/nginx/error.log.1:
      gzip: stdout: No space left on device
      error: failed to compress log /var/log/nginx/error.log.1
  10. Actually, I bet that was generated when my log was at 100% capacity?
  11. Hey so I found this email from June 6th, does this give any further clues? Don't know how I missed it. Everything seems fine now but still...
      error: Compressing program wrote following message to stderr when compressing log /var/log/nginx/error.log.1:
      gzip: stdout: No space left on device
      error: failed to compress log /var/log/nginx/error.log.1
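[Editor's note: gzip's "No space left on device" points at the log filesystem being full, which matches the "log at 100% capacity" guess above. On Unraid /var/log is a small RAM-backed tmpfs, so a runaway log fills it quickly. A minimal sketch for confirming this and finding the offender, using standard coreutils (the /var/log path is the usual location, not taken from the posts):]

```shell
# How full is the filesystem backing /var/log?
df -h /var/log

# Which files under /var/log are eating the space? Largest first.
du -ah /var/log 2>/dev/null | sort -rh | head -n 10
```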
  12. I do not have an xmrig directory in the container. I think you may have picked it up somewhere else.
  13. root@unRAID:/var/log/unraid-api# ps -ef | grep unraid-api
      root 27041 6450 0 15:09 pts/1 00:00:00 grep unraid-api
      OK, fingers crossed that I am good now. I'll have to try reenabling healthcheck on Plex and see what happens, though I will most likely move towards keeping it off if everything runs OK. I also think (and hope) we are done. Thank you so much!
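[Editor's note: in the `ps -ef | grep unraid-api` output above, the only match is the grep process itself, which is how you can tell nothing named unraid-api is actually running. `pgrep` avoids that self-match artifact entirely; a sketch using the process name from the post:]

```shell
# pgrep -f matches against full command lines and excludes itself,
# so an empty result really means no such process is running.
pgrep -fa unraid-api || echo "no unraid-api process running"
```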
  14. Let me truncate the log and see if it still grows. One sec...
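[Editor's note: truncating in place, rather than deleting and recreating the file, keeps existing file handles valid for whatever process has the log open, so logging continues normally afterwards. A demo on a scratch file; the real target on this system would be a file under /var/log:]

```shell
LOG=$(mktemp)                          # stand-in for a log file under /var/log
printf 'old line 1\nold line 2\n' > "$LOG"
truncate -s 0 "$LOG"                   # empty the file without removing it
wc -c < "$LOG"                         # prints 0
```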
  15. If the app is uninstalled it shouldn't be able to create log entries. I thought perhaps you might have meant there is a bug and it is not fully uninstalled or something?
  16. Yes, I cannot sign in because the app is gone and uninstalled. I meant should I reinstall the app and sign in, then sign out again? I thought you were implying that my log entries had to do with the Unraid Connect app which is totally uninstalled? I am now also confused 🤪
  17. I logged out of and then uninstalled Unraid Connect. I cannot sign back in. This can't be affecting the log, can it? Should I reinstall the plugin? I frankly don't really like it much, have many issues with it, and don't really trust the unencrypted backups. It's certainly not an app I feel I need at all.
  18. More, I am seeing this in stdout.log. Seems to be after disabling healthcheck on Plex. What does it mean, do I need to worry about it? Also, since you said that log is limited to 10MB I am not going to concern myself with it as a culprit anymore, it's a relief.
      [2023-07-10T04:58:00.002] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-10T04:58:01.333] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-10T04:58:06.328] [10228] [DEBUG] [emhttp] Loading state file for shares
      [2023-07-10T04:58:06.331] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-10T04:58:08.490] [10228] [DEBUG] [minigraph] GraphQL Connection Status: { status: 'CONNECTING', error: null }
      [2023-07-10T04:58:08.490] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
      [2023-07-10T04:58:08.490] [10228] [INFO] [minigraph] Connecting to wss://mothership.unraid.net/ws
      [2023-07-10T04:58:08.855] [10228] [DEBUG] [minigraph] GraphQL Connection Status: { status: 'CONNECTED', error: null }
      [2023-07-10T04:58:08.856] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
      [2023-07-10T04:58:08.856] [10228] [INFO] [minigraph] Connected to wss://mothership.unraid.net/ws
      [2023-07-10T04:58:08.980] [10228] [ERROR] [minigraph] Network Error Encountered "error" message expects the 'payload' property to b>
      [2023-07-10T04:58:08.980] [10228] [DEBUG] [minigraph] GraphQL Connection Status: { status: 'ERROR_RETRYING', error: `"error" message expects the 'payload' property to be an array of GraphQL errors, but got "Could not find a user with that information. Please try> }
      [2023-07-10T04:58:08.980] [10228] [DEBUG] [remote-access] Clearing all active remote subscriptions, minigraph is no longer connecte>
      [2023-07-10T04:58:08.980] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
      [2023-07-10T04:58:08.981] [10228] [INFO] [minigraph] Delay currently is 29237.169127408637
      [2023-07-10T04:58:11.335] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-10T04:58:16.328] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-10T04:58:21.331] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-10T04:58:26.328] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-10T04:58:31.331] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-10T04:58:36.328] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-10T04:58:38.219] [10228] [DEBUG] [minigraph] GraphQL Connection Status: { status: 'CONNECTING', error: null }
      [2023-07-10T04:58:38.219] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
      [2023-07-10T04:58:38.219] [10228] [INFO] [minigraph] Connecting to wss://mothership.unraid.net/ws
      [2023-07-10T04:58:38.576] [10228] [DEBUG] [minigraph] GraphQL Connection Status: { status: 'CONNECTED', error: null }
      [2023-07-10T04:58:38.576] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
      [2023-07-10T04:58:38.576] [10228] [INFO] [minigraph] Connected to wss://mothership.unraid.net/ws
      [2023-07-10T04:58:38.666] [10228] [ERROR] [minigraph] Network Error Encountered "error" message expects the 'payload' property to b>
      [2023-07-10T04:58:38.666] [10228] [DEBUG] [minigraph] GraphQL Connection Status: { status: 'ERROR_RETRYING', error: `"error" message expects the 'payload' property to be an array of GraphQL errors, but got "Could not find a user with that information. Please try> }
      [2023-07-10T04:58:38.666] [10228] [DEBUG] [remote-access] Clearing all active remote subscriptions, minigraph is no longer connecte>
      [2023-07-10T04:58:38.666] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
      [2023-07-10T04:58:38.667] [10228] [INFO] [minigraph] Delay currently is 253101.86594354652
      [2023-07-10T04:58:46.328] [10228] [DEBUG] [emhttp] Loading state file for shares
      [2023-07-10T04:58:46.330] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-10T04:58:51.334] [10228] [DEBUG] [app] Array was updated, publishing event
  19. Also, I didn't know healthcheck was enabled by default in unRAID Docker containers. I thought one had to go through a whole rigamarole to get it turned on. Now I know it's on by default. Turning it off for Plex has reduced writes to my SSD pool tremendously, which was something that always bugged me even though it only amounts to about 75GB a year; just the constant writing was annoying. At one point I even moved the data path to a spinner so that it wouldn't write to SSD all the time. When Plex became a full-time necessary Docker a few months ago I didn't want to risk it being on a single spinner, so I moved it back to the RAID1 SSD pool. Thanks again for that suggestion, but I do wonder if I should turn off healthcheck for things like Deluge and Radarr as well? As I understand it, healthcheck is really just a Band-Aid and covers underlying problems in most cases. If the server is running correctly then it should not be needed.
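[Editor's note: whether a given container actually defines a healthcheck can be verified with standard Docker commands before deciding to disable it. A sketch; the container name "plex" is an assumption, and the block degrades gracefully when Docker isn't available. On Unraid, `--no-healthcheck` goes in the container template's Extra Parameters field:]

```shell
if command -v docker >/dev/null 2>&1; then
  # "null" in the output means the container image defines no healthcheck;
  # anything else shows the Test command that runs (and writes) periodically.
  docker inspect --format '{{json .Config.Healthcheck}}' plex 2>/dev/null \
    || echo "container 'plex' not found"
else
  echo "docker not available here"
fi
```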
  20. Follow up. So far the log has not ballooned since removing the Unraid Connect plugin and adding --no-healthcheck to the Plex docker... OK, why? I'm not prepared to say it's fixed. Removing the Unraid Connect plugin was obvious. However, you have me scratching my head on where you came up with removing healthcheck from Plex. And that you suggested only Plex. Why just Plex, why not Deluge, why not all my Dockers? So far it seems brilliant, but should I remove healthcheck from all my Dockers, or Dockers that are always running and doing things? I suppose it could have been the Unraid Connect plugin too. I won't have time to fully investigate until I get back from my travels in a few weeks. Thanks!
  21. Removed Unraid Connect. Honestly, I need all the other plugins I have except maybe Tips and Tweaks. That said I haven't used it to make many changes. As I recall it was necessary for CPU power management or something?
  22. Darn, I thought I found it. Will do on Plex. So far no ballooning log for 14 hours.
  23. Restart Docker:
      [2023-07-09T23:44:00.001] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:44:43.395] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:44:48.401] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:44:53.398] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:44:58.409] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:45:00.003] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:45:13.401] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:45:18.407] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:45:53.405] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:45:58.417] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:46:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:46:23.408] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:46:28.420] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:47:00.001] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:47:03.414] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:47:08.424] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:47:33.417] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:47:38.429] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:48:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:48:43.423] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:48:48.428] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:49:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:50:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:50:13.435] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:50:18.446] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:50:53.438] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:50:58.448] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:51:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:52:00.004] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:52:28.350] [10228] [DEBUG] [emhttp] Loading state file for var
      [2023-07-09T23:52:31.111] [10228] [DEBUG] [docker] Starting docker watch
      [2023-07-09T23:52:31.111] [10228] [DEBUG] [docker] Creating docker event emitter instance
      [2023-07-09T23:52:32.323] [10228] [DEBUG] [docker] Binding to docker events
      [2023-07-09T23:52:32.832] [10228] [DEBUG] [emhttp] Loading state file for var
      [2023-07-09T23:52:33.266] [10228] [DEBUG] [docker] [mariadb] container->start
      [2023-07-09T23:52:33.450] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:52:33.814] [10228] [DEBUG] [docker] [plexinc/pms-docker:latest] container->start
      [2023-07-09T23:52:34.444] [10228] [DEBUG] [docker] [binhex/arch-nzbget] container->start
      [2023-07-09T23:52:35.021] [10228] [DEBUG] [docker] [binhex/arch-delugevpn] container->start
      [2023-07-09T23:52:35.361] [10228] [DEBUG] [docker] [binhex/arch-delugevpn] container->die
      [2023-07-09T23:52:43.450] [10228] [DEBUG] [emhttp] Loading state file for shares
      [2023-07-09T23:52:43.452] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:52:48.455] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:52:53.450] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:52:53.454] [10228] [DEBUG] [emhttp] Loading state file for shares
      [2023-07-09T23:52:58.453] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:53:00.003] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:53:03.453] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:53:03.463] [10228] [DEBUG] [emhttp] Loading state file for shares
      [2023-07-09T23:53:08.462] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:53:13.454] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:53:23.454] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:53:28.463] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:53:33.456] [10228] [DEBUG] [emhttp] Loading state file for shares
      [2023-07-09T23:53:33.462] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:53:38.471] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:53:43.456] [10228] [DEBUG] [emhttp] Loading state file for shares
      [2023-07-09T23:53:43.458] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:53:48.460] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:53:53.459] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T23:53:58.473] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T23:54:00.003] [10228] [WARN] [mothership] State data not fully loaded, but job has been started
      [2023-07-09T23:54:03.458] [10228] [DEBUG] [emhttp] Loading state file for disks
      And repeat forever. Why would this happen intermittently as well?
  24. If I stop Docker then stdout.log stops growing. I can confirm Docker stopped with `/etc/rc.d/rc.docker status`. I tried stopping each Docker container individually, but the only way to stop the growth was to turn off Docker in unRAID entirely. I am pretty lost at this point and have no idea where to go next. Please help.
  25. [2023-07-09T21:49:32.124] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T21:49:37.133] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T21:49:42.125] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T21:49:47.131] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T21:49:52.128] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T21:50:00.002] [10228] [ERROR] [minigraph] NO PINGS RECEIVED IN 3 MINUTES, SOCKET MUST BE RECONNECTED
      [2023-07-09T21:50:00.003] [10228] [DEBUG] [minigraph] GraphQL Connection Status: { status: 'PING_FAILURE', error: 'Ping Receive Exceeded Timeout' }
      [2023-07-09T21:50:00.004] [10228] [DEBUG] [remote-access] Clearing all active remote subscriptions, minigraph is no longer connected.
      [2023-07-09T21:50:00.006] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
      [2023-07-09T21:50:00.006] [10228] [INFO] [minigraph] Reconnecting Mothership - PING_FAILURE / PRE_INIT - SetGraphQLConnectionStatus Event
      [2023-07-09T21:50:00.007] [10228] [INFO] [minigraph] Subscribing to Events
      [2023-07-09T21:50:00.009] [10228] [DEBUG] [minigraph] GraphQL Connection Status: { status: 'CONNECTING', error: null }
      [2023-07-09T21:50:00.009] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
      [2023-07-09T21:50:00.009] [10228] [INFO] [minigraph] Connecting to wss://mothership.unraid.net/ws
      [2023-07-09T21:50:00.581] [10228] [DEBUG] [minigraph] GraphQL Connection Status: { status: 'CONNECTED', error: null }
      [2023-07-09T21:50:00.582] [10228] [DEBUG] [app] Writing updated config to /usr/local/emhttp/state/myservers.cfg
      [2023-07-09T21:50:00.582] [10228] [INFO] [minigraph] Connected to wss://mothership.unraid.net/ws
      [2023-07-09T21:50:02.129] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T21:50:07.133] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T21:50:12.134] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T21:50:22.136] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T21:50:27.143] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T21:50:32.138] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T21:50:32.160] [10228] [DEBUG] [emhttp] Loading state file for shares
      [2023-07-09T21:50:37.156] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T21:50:42.137] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T21:50:52.138] [10228] [DEBUG] [emhttp] Loading state file for disks
      [2023-07-09T21:50:57.144] [10228] [DEBUG] [app] Array was updated, publishing event
      [2023-07-09T21:51:02.140] [10228] [DEBUG] [emhttp] Loading state file for disks
      Is this the My Server plugin or whatever it's called that now seems to be integrated into unRAID? I've had loads of issues with it in the past.
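[Editor's note: when a log balloons intermittently like this, comparing two per-file size snapshots taken a little apart shows exactly which file under /var/log is growing, rather than guessing from the contents. A sketch with standard tools; the snapshot paths in /tmp are arbitrary:]

```shell
# First snapshot of per-file sizes under /var/log
du -a /var/log 2>/dev/null | sort -k2 > /tmp/logsize.before

sleep 2    # in practice wait longer, e.g. 60s, to let the growth show up

# Second snapshot, then diff: changed lines are the files that grew
du -a /var/log 2>/dev/null | sort -k2 > /tmp/logsize.after
diff /tmp/logsize.before /tmp/logsize.after || true
```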