arch1mede

Members
  • Posts: 13
  • Joined
  • Last visited

arch1mede's Achievements

Newbie (1/14)

Reputation: 0

  1. For me, md1 was mounting but showing up like d?????????????? in the directory listing, so it would have been good to know that there was an issue with the mount point.
  2. Yes, I realize that, but I'm not sure how I am supposed to know there is an issue if I have to check multiple places.
  3. I just wanted to report that this is still present in the latest 6.9.2:

     Jul 5 09:47:41 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:41 [alert] 8435#8435: worker process 18731 exited on signal 6
     Jul 5 09:47:43 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:43 [alert] 8435#8435: worker process 18756 exited on signal 6
     Jul 5 09:47:45 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:45 [alert] 8435#8435: worker process 18801 exited on signal 6
     Jul 5 09:47:47 UnRaid-Mini-NAS nginx: 2021/07/05 09:47:47 [alert] 8435#8435: worker process 18828 exited on signal 6

     I was checking on my parity check and noticed my log was filling up with this. I only had two windows open: the main one and a system log. I ran the following:

     killall --quiet --older-than 1w process_name

     This seemed to have solved the issue (see the first sketch after this list).
  4. I actually figured out the issue. Unknown to me, md1 was spitting out xfs errors even though the main page/dashboard showed everything green. As it happens, the lancache-bundle docker has a user setting that pointed to md1, which was really not accessible, so the docker wouldn't start. Rebooting the unraid box resolved the issue, but I'm not really happy with that solution, as I shouldn't have needed to reboot. Because md1 had an xfs error on it, I had to run a parity check on the whole array to resolve the issue. I may still have to put the array into maintenance mode and do a repair (see the repair sketch after this list), but I thought I'd share how this was resolved.
  5. In my experience the VM solution is slower; besides, I have this configured to use its own IP, so there shouldn't be any port conflicts. This worked before the most recent update.
  6. Recently the docker updated but now refuses to run; all I see is an error. Does anyone have any ideas how to get this running again?
  7. OK, so I finally found the solution for this: the kernel parameter consoleblank=0. With that set, cat /sys/module/kernel/parameters/consoleblank should now reflect 0 (see the console-blanking sketch after this list).
  8. Anyone know how to disable the console screen blanking out? I have already tried setterm --blank 0, but cat /sys/module/kernel/parameters/consoleblank is not 0; it still says 900. The instructions I found were for 6.8.3, so something must have changed for 6.9.0.
  9. I have no idea... and I'm not sure why others haven't run into this same issue. I went to the support GitHub and there was nothing in the issues, and I was starting to suspect that 6.9 is the cause. Maybe it's a combination of that docker and that version? If it happens again, I will need to downgrade to 6.8.3.
  10. I had this VERY same docker installed, and then all of a sudden my unraid server started acting weird. The first time, about a week ago, it locked up and just became unresponsive. The second time it started to degrade: the dashboard stopped displaying anything, the docker page stopped displaying anything, stopping and starting nginx did not resolve anything, and the web terminal started saying bad proxy, so I just removed that docker. I have been running dockers for YEARS and have never had a docker affect a server like this.
  11. Same issue here. I installed the macinthebox docker and, while following the vid, pressed the notifier; it immediately told me to run the helper script. I pressed it, it said to run the VM, and the VM wasn't there. This is a brand new 6.9.0 install.
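
Below is a minimal shell sketch of the cleanup from post 3. It assumes the stale processes are old nginx workers (the post's process_name is a placeholder for whatever is actually piling up) and that killall is the psmisc version, which provides the --quiet and --older-than options; adjust the process name to your own situation.

    # List matching processes with their full command lines before killing anything.
    pgrep -a nginx

    # Quietly terminate matching processes that have been running for more than a week.
    killall --quiet --older-than 1w nginx

    # Watch the syslog to confirm the "exited on signal 6" alerts have stopped.
    tail -f /var/log/syslog

Signal 6 is SIGABRT, so clearing out old worker processes is more of a workaround than a root-cause fix; the crashing workers may still need attention.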
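
For the maintenance-mode repair that post 4 anticipates, the sketch below shows the usual xfs_repair flow. It assumes the affected filesystem is the parity-protected device /dev/md1 and that the array has already been restarted in maintenance mode from the webGUI; this is a sketch of the general procedure, not a statement of what the poster actually ran.

    # Dry run: report problems without writing any changes to the filesystem.
    xfs_repair -n /dev/md1

    # If problems are reported, run the actual repair on the same device.
    xfs_repair /dev/md1

Running the repair against the md device rather than the raw disk is what keeps parity in sync with the writes the repair makes. If xfs_repair refuses to run because of a dirty log, mounting and cleanly unmounting the filesystem normally replays the log; zeroing the log with -L is a last resort.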
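
Posts 7 and 8 deal with console blanking. The sketch below shows one way the consoleblank=0 fix could be applied persistently; the /boot/syslinux/syslinux.cfg path and the append-line edit are assumptions about a standard unraid flash layout, not something stated in the posts.

    # Current blanking timeout in seconds (900 = blank after 15 minutes, 0 = never).
    cat /sys/module/kernel/parameters/consoleblank

    # The per-console approach post 8 tried (it did not stick for the poster).
    setterm --blank 0

    # Persistent fix: add consoleblank=0 to the kernel append line, for example
    #   append consoleblank=0 initrd=/bzroot
    nano /boot/syslinux/syslinux.cfg

    # After a reboot the parameter should report 0, i.e. blanking disabled.
    cat /sys/module/kernel/parameters/consoleblank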