
Chelun

Members
  • Posts: 28
  • Joined
  • Last visited

Everything posted by Chelun

  1. Rebooted the server last night and the logs are back down to 1%, so whatever it was is gone. Will wait and see if the problem recurs.
  2. This is the second time it has happened. The way I noticed it is that I lose the dashboard; all the data is cleared! A restart of nginx gets me the dashboard back, and that is how I noticed the logs being full. The first time it happened was 2 weeks ago, and I restarted the server, which gave me the dashboard back. This time I was hoping the diagnostics would have something that could point me to the issue, without restarting.
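For anyone hitting the same full-logs symptom, a quick way to see what is filling the log filesystem before restarting anything (a sketch; that `/var/log` is a small in-RAM tmpfs is a stock-Unraid assumption):

```shell
# How full is the log filesystem, and which files are eating the space?
# (On stock Unraid, /var/log is a small tmpfs, so one runaway log fills it fast.)
df -h /var/log                                      # overall usage of the log partition
du -ah /var/log 2>/dev/null | sort -rh | head       # largest log files first
```

Whichever file tops the `du` list points at the service that is flooding the logs, which is more useful for root-causing than clearing them with a reboot.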
  3. If I reboot into safe mode, the logs will get cleared and the problem will be fixed, but I am trying to figure out the root cause of the issue.
  4. Not sure of the root cause of this; can I have some help, please? nasty-diagnostics-20240418-1412.zip
  5. OK, so far so good. I rebooted and deleted all Docker containers with the exception of ddclient and wikiJs. Started up by adding the Community Applications plugin, waited 2 days and added docker-patch, and 2 days later added Fix Common Problems. At this point I am at almost 7 days with all that running and no crashes. I will go ahead and close this, and as a solution I am going to blame the problem on Docker: after the move to the new hardware, something got corrupted and kept crashing the server. Thank you all for the help.
  6. A little over 10 days and no crashes since I started in safe mode. It has been running with all plugins and only 1 Docker container, ddclient. I kept wikiJS in place because I need to migrate data out of it, but I deleted all other containers along with their data! Having the system in safe mode meant the plugins were disabled? Does it do anything else? How do I get out of safe mode?
  7. This morning I enabled Docker and stopped all containers but one, ddclient. Will monitor for a couple of days and then enable another one. At least now I know it is not a hardware issue!!
  8. Docker is disabled. My question was: how do I delete all the containers without enabling Docker? Because it is disabled, I cannot see the containers and delete them one by one.
  9. OK, taking that into consideration, how can I delete them all, and all traces of them, without even enabling Docker? I don't really care which container is the problem here; I will be installing them again later on, but for now I just want to have a stable system.
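One way to wipe every container and image without ever enabling Docker is to delete the Docker vDisk itself (a sketch; the path is an assumption matching the default location shown under Settings → Docker on your server, so confirm it there and back up first):

```shell
# Delete the Docker vDisk while the service is disabled.
# DOCKER_IMG is an assumed default path; check Settings -> Docker for yours.
DOCKER_IMG=/mnt/user/system/docker/docker.img
if [ -f "$DOCKER_IMG" ]; then
    rm "$DOCKER_IMG"    # Docker recreates an empty image the next time it starts
fi
# Per-container configuration/data lives separately (e.g. under /mnt/user/appdata/)
# and is NOT removed by this, so clean those directories too if you want all traces gone.
```

This removes the containers and images wholesale rather than one by one, which matches the "don't care which container, just start clean" goal.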
  10. Do you believe it is a container issue? Because if that is the case, I will just remove them all and start over. I have no problem with that! I have good documentation on everything to reproduce the setup again.
  11. 4 days and 4 hours up in safe mode. A record for sure!! I have Docker disabled, no VMs, and the following plugins: community.applications.plg, docker.patch2.plg, dynamix.file.manager.plg, fix.common.problems.plg, tasmotapm.plg. As for Docker containers I have: the binhex-RR stack (Radarr, Sonarr, etc.), DrawIO, WikiJS, ddclient. I am thinking about backing up all data and starting fresh, as this is new hardware that I migrated to. It seems to be a software issue. Any other ideas will be greatly appreciated!
  12. OK, restarted the server in safe mode and disabled Docker from the Settings page. Will wait and see what happens.
  13. Hello, I am experiencing random crashes of the Unraid server and I cannot figure out the issue. This has been going on since I moved to the new hardware. Unraid 6.12.8. System specs are:
      - Supermicro X11SCA-F
      - i3-8100
      - 32 GB ECC RAM
      - 2 NVMe (cache)
      - 3 8TB drives (data)
      - 10G network card (Intel X520)
      Now the server has been up for 1 day 10 hours. There is nothing special that I am doing when it crashes, and I only have a handful of Docker containers running, nothing crazy. Let me attach the diagnostics and syslogs (the system crashed twice or more since I enabled the syslog). Thank you in advance. nasty-diagnostics-20240303-2119.zip syslog.log
  14. I see, let me set up the syslog and start preserving the logs; then if it crashes again, I will create a new post and upload the logs. Thank you all for the help!
  15. @trurl I just got around to working on this (the homelab is always second or third on my daily list, LOL). I removed the splitter and connected the drives directly to the power supply, and this seems to have fixed the issue! But not completely: as soon as I brought it up and started the array, the CPU went to 100% and the system crashed!! After it came back up, I started the array again and it started the parity check automatically. Now it is going at 238.8 MB/sec, which is what I expect, and the check should finish in 8 hours, which is 4-5 hours faster than before!! I am attaching new logs here; would you be able to tell me the reason for the crashes from the logs? nasty-diagnostics-20240223-1215.zip
  16. What do you think the next steps should be? Should I start changing/replacing/removing parts?
  17. Yes, they are connected to SATA ports. No hardware change; Unraid crashed earlier today and the server rebooted, and since that reboot the problem has occurred. Do you see in the logs which HDD is the problem? Do you see in the logs the root cause of the crash?
  18. Yes, I am using this 4-to-1 adapter
  19. New diagnostics nasty-diagnostics-20240222-1601.zip
  20. I stopped the server, reseated all the SATA cables and power cables, started up, and still the same. Started the array back up and started the parity check, and it is still running very slow. Any help will be appreciated.
  21. Hello, I need some help with my system. It is running a parity check after the system crashed, and it is running at 241.8 KB/sec, which will take 382 days to complete. This is a new system I built about a week ago. I added a new drive 2 days ago and ran a parity check then; it took 14 hours to finish (I had help here adding the drive). The system rebooted out of nowhere. I attached the diagnostics. nasty-diagnostics-20240222-1432.zip
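The 382-day estimate in that post is consistent with the numbers given: 8 TB of parity at 241.8 KB/sec works out to roughly a year, which you can sanity-check with a one-liner (figures taken from the post; the 8 TB drive size is from the system specs):

```shell
# Parity-check ETA: 8 TB (8e12 bytes) at 241.8 KB/s (241.8e3 bytes/s),
# converted from seconds to days (86400 s/day).
awk 'BEGIN { printf "%.0f days\n", 8e12 / (241.8e3 * 86400) }'   # → 383 days
```

A healthy check on the same hardware ran at 238.8 MB/sec, about 1000x faster, so a KB/sec rate is a strong sign of a cabling, power, or controller problem rather than a normal check.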
  22. Thank you so much!! That command fixed it! Now the array started with the 8TB as the data drive and the parity check is in progress!!!! Thank you again
  23. Please, I will run it, and then after the drive is in, I can rebuild the parity.
  24. The parity drive is the same one; it did not change at all. The new drive (sdd) was used in a Windows VM for a while; I decommissioned the VM and now I want to reuse the drive here in the Unraid server. sdc is the parity and sdd is the new drive. When I open fdisk on each drive I see sdc as bigger, so I don't understand what I am doing wrong here.

      Disk /dev/sdc: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk model: ST8000VN0022-2EL
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: B9C9F102-FCCB-4E59-AFB6-6561A40FD7B8

      Device     Start         End     Sectors  Size  Type
      /dev/sdc1   2048 15628053134 15628051087  7.3T  Linux filesystem

      Disk /dev/sdd: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk model: ST8000NM0105
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 86B8EC10-FC00-4864-B442-E5EEFA93FBA1

      Device     Start         End     Sectors  Size  Type
      /dev/sdd1   2048 15628052479 15628050432  7.3T  Linux filesystem

      I will update to 6.12.8 as soon as I finish this post.
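For what it's worth, the two disks in that fdisk output are byte-for-byte the same size; it is the pre-existing partitions that differ. Subtracting the two Sectors values shows the new drive's partition is slightly smaller than the parity partition:

```shell
# Gap between the sdc1 and sdd1 partitions, using the Sectors values
# from the fdisk output above (512-byte sectors).
awk 'BEGIN { d = 15628051087 - 15628050432; printf "%d sectors (%d KiB)\n", d, d * 512 / 1024 }'
# → 655 sectors (327 KiB)
```

A few hundred KiB is invisible in fdisk's rounded "7.3T" column but is enough for a partition-based comparison to treat the drives as different sizes, which fits the symptom described in the post.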