nickro8303

Members
  • Content Count

    140
  • Joined

  • Last visited

Community Reputation

2 Neutral

About nickro8303

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Texas


  1. Friday the electricity flickered a few times during a storm and caused my server to do an unclean shutdown. My UPS for some reason did not prevent this. Since then I'm getting multiple email notifications with "Resource temporarily unavailable" in the description. They seem to be pointing to multiple plugins. Rebooting hasn't resolved the issue either. If someone could take a look at my diagnostics and help me figure this out, I would greatly appreciate it.

     Email notification examples:

     cron for user root /usr/local/emhttp/plugins/recycle.bin/scripts/get_trashsizes &> /dev/null
     cron for user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     /bin/sh: fork: retry: Resource temporarily unavailable
     /bin/sh: fork: retry: Resource temporarily unavailable
     cron for user root /etc/rc.d/rc.diskinfo --daemon &> /dev/null
     /bin/sh: fork: retry: Resource temporarily unavailable
     /bin/sh: fork: retry: Resource temporarily unavailable

     tower-diagnostics-20200830-0953.zip
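     In case it helps anyone searching later: "fork: retry: Resource temporarily unavailable" usually means the kernel refused to create a new process, either because a task limit was hit or memory ran out, rather than anything disk-related. A rough sketch of checks you could run over SSH (plain Linux /proc files, nothing Unraid-specific):

```shell
# "fork: retry: Resource temporarily unavailable" usually means the kernel
# could not create a new process -- compare the current task count against
# the configured limits.
echo "pid_max:      $(cat /proc/sys/kernel/pid_max)"
echo "threads-max:  $(cat /proc/sys/kernel/threads-max)"
# Count live processes by listing numeric /proc entries (avoids needing ps):
echo "tasks now:    $(ls -d /proc/[0-9]* | wc -l)"
# Per-user process limit for the current shell session:
ulimit -u
```

     If "tasks now" is anywhere near pid_max, something (a plugin or script stuck in a loop) is leaking processes, and that would explain cron jobs failing across multiple plugins at once.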
  2. Looking at the diskinfo line, the only thing that stands out to me is the disk location plugin. Could that be causing this issue?
  3. This happened again this morning at 4am. It only happens every few weeks. The only thing I have running then is the user script that backs up my VMs. I can access the GUI and SSH, but it hangs when I try to reboot. The console shows "Waiting up to 420 seconds for graceful shutdown". I stopped the Docker and VM services before trying to reboot, so I don't know what it's hanging on. I was able to get the dmesg output, but a search for oops and oom turned up nothing. Not sure what is causing this to happen. dmesg2.5.20.txt tower-diagnostics-20200205-0821.zip
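     One note for anyone grepping their own dmesg for this: a shutdown that hangs past the graceful-shutdown timer is often a hung task stuck in uninterruptible I/O wait, and those log differently from an oops or OOM kill. A sketch of a wider search pattern (plain `dmesg`, not Unraid-specific):

```shell
# Besides "oops" and "oom", look for hung-task warnings -- a process stuck
# in uninterruptible I/O wait ("blocked for more than N seconds") will stall
# a graceful shutdown the same way. The "|| echo" keeps the exit status clean
# when nothing matches.
dmesg 2>/dev/null \
    | grep -iE 'oops|out of memory|oom-killer|blocked for more than|hung_task' \
    || echo "no crash or hung-task signatures found"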
  4. For the last few weeks my server has been locking up and sending a mass of error emails with the subject "cron for user root /etc/rc.d/rc.diskinfo --daemon &> /dev/null" and the body "/bin/sh: fork: retry: Resource temporarily unavailable". I am able to log in to the GUI and SSH, but it refuses to do a clean shutdown; I have to hard power off. This has started happening every couple of days now. I thought it was due to Plex transcoding to RAM, which I set up about a month ago, but I've since reverted it back to default and the issue continues. Diagnostics file is attached; I would be grateful for any assistance I can get with this problem. tower-diagnostics-20200125-0937.zip
  5. This is the one I got: https://www.amazon.com/gp/product/B073SBQMCX/ref=ppx_yo_dt_b_asin_title_o09_s00?ie=UTF8&psc=1 So far it's working flawlessly.
  6. You can try more SATA cables, but I can tell you you're wasting your time. I tried several brands and none of them stopped the errors. I replaced the Samsung SSD with a WD and the errors stopped. So now I have a 1 TB SSD I paid over $100 for working as a coaster on my desk. I feel your pain.
  7. Looks like the Samsung SSD was the issue. I replaced it with a WD SSD and I'm not getting any more CRC errors. Thanks for the help, guys.
  8. Thanks for the help. I'm at my wit's end with these CRC errors. I've spent months trying to stop them. I'm ordering a WD SSD to see if that will resolve the issue.
  9. I found a workaround. I renamed that share, created a new backup share, and used Krusader to move the data to the new share. Seems to be no issues writing to that share as of now.
  10. I have not tried another brand yet. As far as I can tell, though, there is no issue with the SSD: when I replaced it, the CRC error count picked up where it left off with the old one and kept climbing. I've not seen any failures to read or write from the SSD. Any ideas as to the error I just posted about?
  11. Well, I changed the cables out and swapped ports with another drive, and unsurprisingly I'm still getting CRC errors on the cache drive. I'm telling you, there is no getting rid of these errors; I've literally replaced everything. Anyway, I connected a spare drive, transferred the Backup data off the cache drive and then back to the array, but now I'm getting an error when trying to write to the Backup share. The only way it will let me write to that share is if I enable cache on it. Any ideas? tower-diagnostics-20190606-0002.zip
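     For reference if anyone else chases these: the counter behind the warnings is SMART attribute 199 (UDMA_CRC_Error_Count), which records link errors between the controller and the drive, so cables, ports, backplane, and the drive's own SATA interface are all suspects. A sketch of how to watch it directly (`/dev/sdb` is a placeholder for your cache device; requires smartmontools):

```shell
# Print SMART attribute 199 for one drive. The raw value is cumulative and
# only ever increases; re-run after a stress test to see if it climbed.
DEV=/dev/sdb   # placeholder -- substitute your actual cache device
if command -v smartctl >/dev/null 2>&1; then
    smartctl -A "$DEV" | awk '$1 == 199 || /CRC/'
else
    echo "smartmontools not installed"
fi
```

     One caveat: because the counter lives on the drive, a brand-new drive should start at 0. If a replacement drive shows errors climbing too, that points back at the cable, port, or controller side of the link.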
  12. Any SATA cables you'd recommend, or should I just find the most expensive ones on Amazon?
  13. I'm not kidding when I say I've literally replaced almost every part on this server over the last few months trying to stop those CRC errors. The only parts I haven't replaced are the CPU and the RAM modules. I am using those flat red cables (my cable management isn't perfect either). I'll pick up some better cables and try replacing them again, but I'm still not sure that's what is causing the mover to error out on those files.
  14. I believe it is, but the SSD is not plugged in to that controller. Would it still be causing these errors even though it's not on those ports?
  15. The mover works fine with other shares; this is the only one it's giving errors for.