muslimsteel

Members · 8 posts

  1. So it seems that after I finally got everything set up and went to create a backup schedule, it borked the server. I have a feeling the answer is going to be "delete the DB and start over," but I wanted to confirm whether there is another way. This is from the container logs:

     2021-09-18 16:41:50,537 DEBG 'start-script' stderr output: Traceback (most recent call last):
       File "/opt/crafty/crafty.py", line 308, in <module>
     2021-09-18 16:41:50,538 DEBG 'start-script' stderr output:
         multi.reload_scheduling()
       File "/opt/crafty/app/classes/multiserv.py", line 93, in reload_scheduling
         self.reload_user_schedules()
       File "/opt/crafty/app/classes/multiserv.py", line 112, in reload_user_schedules
         helper.scheduler(task, svr_obj)
       File "/opt/crafty/app/classes/helpers.py", line 1306, in scheduler
     2021-09-18 16:41:50,538 DEBG 'start-script' stderr output:
         schedule.every(task.interval).monday.do(mc_server_obj.backup_server).tag('user')
       File "/opt/crafty/env/lib/python3.9/site-packages/schedule/__init__.py", line 302, in monday
         raise IntervalError('Use mondays instead of monday')
     schedule.IntervalError: Use mondays instead of monday
     2021-09-18 16:41:50,584 DEBG fd 11 closed, stopped monitoring <POutputDispatcher at 22962550325200 for <Subprocess at 22962550324528 with name start-script in state RUNNING> (stdout)>
     2021-09-18 16:41:50,584 DEBG fd 15 closed, stopped monitoring <POutputDispatcher at 22962550007648 for <Subprocess at 22962550324528 with name start-script in state RUNNING> (stderr)>
     2021-09-18 16:41:50,584 INFO exited: start-script (exit status 1; not expected)
     2021-09-18 16:41:50,585 DEBG received SIGCHLD indicating a child quit

     Thanks for looking.
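     For context, the IntervalError at the bottom of that traceback comes from the schedule library that Crafty calls into: singular weekday attributes like .monday are only valid when the interval is 1, so a saved task with an interval of 2 or more weeks trips this check every time schedules are reloaded at startup. A minimal sketch of the rule, as a simplified stand-in (schedule_weekly is a hypothetical name, not Crafty's or the library's actual code):

     ```python
     class IntervalError(Exception):
         """Mirrors schedule.IntervalError from the traceback above."""

     def schedule_weekly(interval, weekday):
         # The schedule library accepts singular weekday names (e.g. "monday")
         # only when the interval is exactly 1; otherwise it raises
         # IntervalError with the same message seen in the container log.
         if interval != 1:
             raise IntervalError(f"Use {weekday}s instead of {weekday}")
         return f"job scheduled every {weekday}"

     print(schedule_weekly(1, "monday"))
     # schedule_weekly(2, "monday") raises:
     # IntervalError: Use mondays instead of monday
     ```

     If that matches, changing the offending task's interval back to 1 (or removing just that schedule entry) may be enough to let start-script come up again without rebuilding the whole DB.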
  2. I just had this issue and took a diagnostics file right after it happened. I was working on some files in Krusader when it suddenly said the directory no longer existed. I went up a level and that one did not exist either. Then I looked at my shares and they were gone. I googled it and came across this post. So I took some diagnostics and then rebooted. The server seems to be OK now, but it seems odd; I was wondering if there is anything in the diagnostics that would suggest the cause. Thanks! hulk-diagnostics-20200610-2159.zip
  3. @primeval_god Thanks! It looks like they might have an issue open for this same thing on GitHub: https://github.com/netdata/netdata/issues/9084
  4. I have come across the same issue as above. I originally posted it in the Dynamix forum because of the errors I saw; they looked at my diagnostics and noticed the same process issue you are seeing. This is what I originally posted there: And then one of the guys there replied: I have attached my diagnostics if you want to take a look. I am going to turn off the netdata container for now and see whether I run into any more of these issues. Thanks in advance for the support! hulk-diagnostics-20200519-2143.zip
  5. Interesting, will check it out in their support thread, thanks!
  6. Hello, I hope this is the right place to post this. I have searched and have been unable to find a solution.

     In the last few days I added a second cache drive, identical to the existing one, to create a cache pool. Since then I have noticed occasional odd messages in my email that don't seem to make sense. The subject is:

     cron for user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null

     and the body consists of:

     /bin/sh: fork: retry: Resource temporarily unavailable

     Typically I get several in a row and then they stop for 12-24 hours. If I leave them, it only seems to get worse, and it has led to the server becoming unresponsive twice in the last few days. I was able to reboot it from the GUI once, but the second time I had to do a hard boot. I tried uninstalling and reinstalling the SSD Trim plugin, but that did not seem to make a difference. The server came back up without issue and the errors seemed to be cleared, but about 24 hours later they started again.

     Everything else seems to be working fine, and I am not sure what is causing this. One thought I had is that one of the cache drives is on an HBA while the other is connected directly to the motherboard; I am not sure if that would make a difference. I have attached the diagnostics. Let me know what you guys think; the server has been running great otherwise and I have really been enjoying Unraid. Thanks for the support! hulk-diagnostics-20200519-2143.zip
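     For anyone hitting the same error: "/bin/sh: fork: retry: Resource temporarily unavailable" generally means fork() is failing because a process/thread limit has been exhausted (the per-user ulimit, the kernel's pid_max, or a cgroup pids limit). A rough way to check from a console, using generic Linux commands rather than any Unraid-specific procedure:

     ```shell
     # Per-user limit on processes/threads for this login
     ulimit -u

     # Total threads currently alive system-wide
     ps -eLf | wc -l

     # Processes ranked by thread count (NLWP = number of lightweight
     # processes); helps spot a container or plugin leaking threads
     ps -eo nlwp,pid,comm --sort=-nlwp | head
     ```

     If the system-wide thread total is anywhere near the limit, the leaker should show up at the top of the last listing.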
  7. Thanks for the advice, did that and all is good now.
  8. Hello, I had the hassio_supervisor Docker container working yesterday, but it seems they pushed an update to the container and now it will not start. I tried the small fix of adding a letter to the description to force it to re-pull the container, but it still does not stay started. These are the lines from the log:

     20-02-05 10:37:42 INFO (MainThread) [__main__] Initialize Hass.io setup
     20-02-05 10:37:42 INFO (SyncWorker_0) [hassio.docker.supervisor] Attach to Supervisor homeassistant/amd64-hassio-supervisor with version 195
     20-02-05 10:37:42 INFO (SyncWorker_0) [hassio.docker.supervisor] Connect Supervisor to Hass.io Network
     20-02-05 10:37:42 INFO (SyncWorker_0) [hassio.docker.interface] Cleanup images: []
     20-02-05 10:37:43 INFO (MainThread) [__main__] Setup HassIO
     20-02-05 10:37:43 WARNING (MainThread) [hassio.dbus.systemd] No systemd support on the host. Host control has been disabled.
     20-02-05 10:37:43 WARNING (MainThread) [hassio.dbus.hostname] No hostname support on the host. Hostname functions have been disabled.
     20-02-05 10:37:43 WARNING (MainThread) [hassio.dbus.rauc] Host has no rauc support. OTA updates have been disabled.
     20-02-05 10:37:43 WARNING (MainThread) [hassio.dbus.nmi_dns] No DnsManager support on the host. Local DNS functions have been disabled.
     20-02-05 10:37:43 INFO (MainThread) [hassio.host.apparmor] Load AppArmor Profiles: set()
     20-02-05 10:37:43 INFO (MainThread) [hassio.host.apparmor] AppArmor is not enabled on host
     20-02-05 10:37:43 INFO (SyncWorker_1) [hassio.docker.interface] Attach to homeassistant/amd64-hassio-dns with version 1
     20-02-05 10:37:43 INFO (MainThread) [hassio.misc.forwarder] Start DNS port forwarding to 172.30.32.3
     20-02-05 10:37:43 INFO (SyncWorker_1) [hassio.docker.interface] Restart homeassistant/amd64-hassio-dns
     20-02-05 10:37:44 INFO (SyncWorker_0) [hassio.docker.interface] Attach to homeassistant/intel-nuc-homeassistant with version 0.104.3
     20-02-05 10:37:44 INFO (MainThread) [hassio.store.git] Load add-on /data/addons/core repository
     20-02-05 10:37:44 INFO (MainThread) [hassio.store.git] Load add-on /data/addons/git/a0d7b954 repository
     20-02-05 10:37:44 INFO (MainThread) [hassio.store] Load add-ons from store: 62 all - 62 new - 0 remove
     20-02-05 10:37:44 INFO (MainThread) [hassio.addons] Found 0 installed add-ons
     20-02-05 10:37:44 INFO (MainThread) [hassio.updater] Fetch update data from https://version.home-assistant.io/stable.json
     20-02-05 10:37:44 INFO (MainThread) [hassio.snapshots] Found 0 snapshot files
     20-02-05 10:37:44 INFO (MainThread) [hassio.discovery] Load 0 messages
     20-02-05 10:37:44 INFO (MainThread) [hassio.ingress] Load 0 ingress session
     20-02-05 10:37:44 INFO (MainThread) [hassio.secrets] Load Home Assistant secrets: 1
     20-02-05 10:37:44 INFO (MainThread) [__main__] Run Hass.io
     20-02-05 10:37:44 INFO (MainThread) [hassio.api] Start API on 172.30.32.2
     20-02-05 10:37:44 INFO (MainThread) [hassio.supervisor] Update Supervisor to version 198
     20-02-05 10:37:44 INFO (SyncWorker_6) [hassio.docker.interface] Update image homeassistant/amd64-hassio-supervisor:195 to homeassistant/amd64-hassio-supervisor:198
     20-02-05 10:37:44 INFO (SyncWorker_6) [hassio.docker.interface] Pull image homeassistant/amd64-hassio-supervisor tag 198.
     20-02-05 10:37:49 INFO (SyncWorker_6) [hassio.docker.interface] Tag image homeassistant/amd64-hassio-supervisor with version 198 as latest
     20-02-05 10:37:49 INFO (SyncWorker_6) [hassio.docker.interface] Stop hassio_supervisor application
     20-02-05 10:37:49 INFO (MainThread) [__main__] Stopping Hass.io
     20-02-05 10:37:49 INFO (MainThread) [hassio.api] Stop API on 172.30.32.2
     20-02-05 10:37:49 INFO (MainThread) [hassio.misc.forwarder] Stop DNS forwarding
     20-02-05 10:37:49 INFO (MainThread) [hassio.core] Hass.io is down
     20-02-05 10:37:49 INFO (MainThread) [__main__] Close Hass.io

     Let me know your thoughts on this. Thanks!