relink

Everything posted by relink

  1. So is there any way for me to find out what's causing the OOM errors? During the day everything seems fine; I haven't noticed anything wrong. If they're happening at night, it could be the scheduled tasks in Plex running at the same time as the mover and parity checks. EDIT: I take back what I said. I'm in my WebUI right now and it's barely responsive. I have to refresh the "Dashboard" tab multiple times to get everything to show up. The "Main" tab will not load completely, and neither will the "Plugins" or "Docker" tabs... However, all of my containers are running just fine, not even a little slowdown. This is definitely a new issue.
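In case anyone else needs to dig into the same thing, this is the kind of check I mean (a general Linux approach rather than anything Unraid-specific; the syslog path is what Unraid uses by default, as far as I know):

```sh
# Show recent kernel OOM-killer activity with readable timestamps
dmesg -T | grep -iE "out of memory|oom-killer|killed process"

# Search the syslog for which processes got killed and the memory
# state at the time
grep -iA 5 "out of memory" /var/log/syslog
```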
  2. I used to have issues with it filling up, but that was a while ago, when I didn't properly understand Docker. I haven't had that issue in a while, and I haven't seen any reason to shrink my docker.img back down; 50G isn't hurting me. The only process that should be able to write into my RAM is the Plex transcoder, and I changed it to "/dev/shm" instead of "/tmp" specifically so it can't take all my RAM.
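For anyone wanting to set up the same thing, the mapping looks roughly like this (a sketch only; the container name, image, and host paths are examples, and you still have to point Plex's transcoder directory setting at the mapped path):

```sh
# Map RAM-backed /dev/shm into the container for transcoding.
# tmpfs is bounded (typically half of total RAM by default), so a
# runaway transcode can't consume every last byte of memory.
docker run -d \
  --name=plex \
  -v /dev/shm:/transcode \
  -v /mnt/user/media:/media \
  plexinc/pms-docker

# Then set Settings > Transcoder > "Transcoder temporary directory"
# to /transcode inside Plex.
```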
  3. "Your server has run out of memory, and processes (potentially required) are being killed off. You should post your diagnostics and ask for assistance on the unRaid forums" This is an error I noticed when checking "Fix Common Problems" today. I was unaware of this error, or when it occurred. Everything seems to have been fine lately. But I was having major issues not long ago. Maybe unraid is still having problems, but its happening at night when I don't notice it. The only change I've made to my server recently is a switch from an LSI 9211-8i with an HP Expander, to an Adaptec ASR 71605. This change seems to have been purely beneficial from what I can tell. Performance and stability seem to be great. and I no-longer have drives randomly disappear like I did with the LSI card. serverus-diagnostics-20200901-1307.zip
  4. Thought I'd update you guys on the situation: I bit the bullet and bought an Adaptec ASR 71605. Found a decent deal on eBay, less than $90, and it came with whatever that little daughter board is (many used ones seem to be missing it), the battery, my choice of bracket size, and 4 brand-new cables. Luckily I had some extra thermal compound, because FedEx was so rough they knocked the heatsink off in shipping. Other than that, this card has been amazing. Just set it to HBA mode and it has been a flawless experience. All drives show up on boot every time so far, no more CRC errors, and so far no more 100% CPU lockups.
  5. I know it's been about 2 weeks since I last posted here, but I am still troubleshooting this issue. All of my cables have been replaced and I'm still having the issue. I have started looking into a new HBA, as I'm beginning to think it might be my HBA or expander, but I'm not actually sure. I have noticed that the last 2 times this happened, I was able to SSH in and run htop, and I saw a process called "shfs" at the top of the list for CPU usage. I have searched for this issue, and most of what I found was from 2017-2018 and pertained to ReiserFS. But it has been reported with XFS disks too, and that's what all my disks are.
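If anyone else wants to check for the same thing without an interactive session, plain ps shows the top CPU consumers (nothing Unraid-specific here):

```sh
# Top 10 processes by CPU, highest first; shfs is Unraid's user-share
# FUSE process, so seeing it pegged points at heavy /mnt/user I/O
ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -n 11
```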
  6. There is a very early beta of both an Android app and an iOS app. It doesn't currently have a ton of features, but it does support automatic upload. The Android APK can be downloaded here. The iOS app can be installed via TestFlight here, or installed from source using Xcode.
  7. I assume you're referring to the thread talking about the Adaptec ASR 71605. That is actually a really nice looking card, and I will definitely add it to my list of options. The only reason I did not order one right now is that it uses SFF-8643 ports, and all I have are SFF-8087 cables. With those breakout cables at $15 each, I'm back into the same price range as the other cards. But if I end up having to bite the bullet and get new cables (again), then I'll probably go with the Adaptec: no special firmware, and it uses internal connections.
  8. It's looking like that's going to be my best bet. Even with having to buy all new breakout cables, I can manage it for under $90 total. The only crappy part... I just replaced every cable in my system 2 weeks ago. lol
  9. Ah, that's what I was afraid of. I see plenty of people speculating on the forums about these cards, but almost no actual accounts of people buying one and trying it. I've got to find something that can handle at least 12 drives and not break the bank.
  10. The first suggestion worked perfectly. As for the second issue, after some searching through GitHub I did find an official solution: create a file at the root of the folder containing your images called ".ppignore", and within the file just make a list of anything you want PhotoPrism to ignore when indexing, one per line of course. So I now have one file called ".ppignore" at the root of my photos folder with one line in it that just reads "@eadir", without the quotes. Now when PhotoPrism indexes, the "@eadir" folders still show up, but from within PhotoPrism they appear empty, and nothing in them gets indexed.
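In case it saves anyone a search, creating the file from the shell looks like this (the /mnt/user/photos path is just an example for wherever your originals live):

```sh
# One pattern per line; PhotoPrism skips matching folders when indexing
cat > /mnt/user/photos/.ppignore <<'EOF'
@eadir
EOF
```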
  11. How can I change 2 things with PhotoPrism? 1. I need to move the ".photoprism" folder that gets created at the root of my photo folder somewhere else. PhotoPrism isn't the only app I have pointed at that folder, and it caused my other apps to begin indexing all the PhotoPrism thumbnails. 2. Can I tell PhotoPrism to ignore certain folders? I also use a Synology, and it creates a folder called "@eaDIR" in every folder, which contains all of the Synology-generated thumbnails; PhotoPrism happily indexes all of them, so I end up with 5 copies of every photo at different resolutions. With these 2 issues, I'm sure you can see that this would actually cause an endless loop that, if not caught, would eventually result in 100% disk usage: if PhotoPrism indexes the Synology thumbnails and creates its own thumbnails, then the Synology will index the PhotoPrism thumbnails and create its own thumbnails, which will then get indexed by PhotoPrism, and on and on until my server is 100% full. I really want to use PhotoPrism, but I absolutely have to at a minimum fix issue #1.
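For issue #1, what I'm hoping for is something like mapping PhotoPrism's storage folder to its own host path instead of letting it land inside the originals; a rough sketch of what I mean, assuming the official photoprism/photoprism image and example host paths:

```sh
# Keep generated thumbnails/sidecars in appdata, out of the photo library
docker run -d \
  --name=photoprism \
  -v /mnt/user/photos:/photoprism/originals \
  -v /mnt/user/appdata/photoprism:/photoprism/storage \
  photoprism/photoprism
```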
  12. Normally I would agree 100%, except the pictures they are using clearly aren't the same card. All of the "official" LSI 9201-16i cards I have seen have 3 distinct differences from the ones I see on eBay: 1. The "real" cards use yellow capacitors, while the Chinese cards use black. 2. The "real" cards have a smaller heatsink than the Chinese cards. 3. The "real" cards ALL have an LSI logo on them; many of the Chinese cards do not. But realistically, I'm not too worried about whether it's a knockoff card or not. What I'm really worried about is: do they work, and are they reliable?
  13. I currently have an LSI 9211-8i with an HP SAS expander card. I am having issues with this setup and am considering simplifying to a single controller that can support more drives. In case anyone wants to see what issues I've been having, I have posted on the Unraid forums and Reddit; at this point I'm convinced there's an issue with either my HBA or expander. Unraid Forums Post Reddit Post So that brings me to the 9201-16i. I have searched eBay and found tons of them, but they all ship from China or Hong Kong, and I have heard mention of them being "counterfeit". But when I look at American listings, they are double the price and look like the exact same card. So, counterfeit or not, has anyone actually ordered one of these from China, and what was your experience? If anyone can suggest a different controller, that's fine too; I need to connect 12 drives, and I'd like to stay under or around $150. Thanks guys!
  14. Hmm, I have not. I haven't updated the BIOS since I got the board. As for moving my GPU, I'm not 100% sure that's doable; every slot in my case is being used, or blocked by something needed by another card. But I did just get all new SAS cables in today, so I'm going to try replacing those first, and if I'm still having issues, I'll start looking into updating the BIOS. I'm open to any other troubleshooting steps; I still haven't resolved this. I made it several days, but it happened again last night. If it helps at all, 99% of the time it happens in the evening between 7:30-9:30pm, usually closer to around 8:00pm. I do have a syslog server on my Synology that logs everything from Unraid. The last time it happened, I saw a TON of errors about "unable to parse crontab" or something like that. Anyway, I don't have any cron jobs at all set to run within the time frame this is happening in, and I don't know if that actually had anything to do with it. But knowing this, are there any red flags I should look for in the logs if this happens again?
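In the meantime, this is the sort of scan I can run against the syslog copy on the Synology (the path is an example; the patterns are just common red flags, not a definitive list):

```sh
# Look for OOM kills, disk errors, kernel traces, and the crontab noise
grep -iE "out of memory|oom-killer|i/o error|call trace|crontab" /var/log/syslog
```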
  15. I'm beginning to think it may be related to some disk problems I have been having. I have looked through my syslog server and see pretty consistent CRC errors from all of my drives, so I have all new cables on the way for my HBA and SAS expander. I noticed a crash happened a couple minutes after adding a new series to Sonarr, so it locked up just as the new episodes began flooding into the array; that's what it seemed like, anyway. Cables will be here Wednesday, so I guess I'll see what happens.
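For anyone following along, the CRC count lives in SMART attribute 199, so it's easy to watch whether it keeps climbing after the cable swap (the device name is an example):

```sh
# UDMA_CRC_Error_Count increments on bad links/cables, not bad platters;
# if it stops climbing after the swap, the cables were the problem
smartctl -A /dev/sdb | grep -i crc
```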
  16. So I've been going through every single line and setting on every single page of my Unraid server, trying to see if anything jumps out at me. One thing did: I have a plugin installed called "Dynamix Cache Directories". I don't remember if this comes with Unraid or if I installed it, but anyway, I read up on what it does and decided to try disabling it. This is also by far the oldest plugin on my system, showing the most current version to be "2018.12.04". Since disabling it, which was only 2 days ago, I haven't crashed, I've had RAM usage in the 50% range instead of 80+%, and CPU usage seems to be staying around or under 20%.
  17. I'm checking up on my server this morning and I'm already seeing the RAM usage getting up to 72%. However, htop shows the process using the most RAM is Plex at only 8.7%, and the Plex dashboard confirms this number... CPU usage is between 20-30%, which for the current load is only slightly above average and isn't anything that would freak me out.
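One thing worth checking when the dashboard and htop disagree is whether the GUI is counting filesystem cache as "used"; free makes the split explicit:

```sh
# "available" is the number that matters; "buff/cache" is reclaimable
# page cache the kernel hands back under memory pressure, even though
# dashboards often lump it into "used"
free -h
```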
  18. This is exactly what I see. I only see the 100% usage in the GUI; in htop everything looks normal. But that still doesn't stop Docker from becoming completely unresponsive. I actually have had an issue with either my HBA or expander, I'm not sure which, but it's an issue I've had for quite a while now, and this problem I'm having now is fairly new. Anyway, any time I go to reboot my Unraid server, I will generally have to reboot a minimum of 1-2 times to actually get all my disks to show up. On the first boot I'm guaranteed to have several disks missing from the array. However, once I get all the disks to show up again, everything always seems to run OK.
  19. So I think I managed to catch things as they were falling apart this time. It seems that the issue is coming from running out of RAM. I don't know how Unraid handles that; does it have a swap file, and if so, where is it? Anyway, I immediately SSHed into Unraid and ran htop, and simply didn't see anything using that much RAM; same when running top... I just don't see anything using that much RAM. Despite this, even with all containers and VMs stopped, the RAM usage never dropped below 54%. After restarting the array with all my main containers running, I haven't gone over 19% RAM usage. I have attached 2 diags this time. The first one is from before I restarted the array, with everything stopped except Pi-hole and Unbound. The other is after restarting the array, with my main containers running. serverus-diagnostics-20200629-2017.zip serverus-diagnostics-20200629-2013.zip
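To answer my own swap question, these show whether any swap is configured at all (as far as I can tell, stock Unraid runs without swap):

```sh
# swapon prints nothing when no swap is active; /proc/swaps will show
# only its header line in that case
swapon --show
cat /proc/swaps
```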
  20. As of the crash yesterday, I now only have the bare essential containers running and no VMs. If I can go a few days without another issue, then I will start re-enabling things. If I crash again, then I will disable all containers and see what happens. The part that I find confusing about this is that there is not a single container or VM in my system that has access to all CPU threads. Plex has access to the most, and even it is capped at 10 out of 12, and everything else is limited to between 2 and 4.
  21. Ouch. There must be a better way to find out what's causing this. Is there not a more accurate task manager that could possibly show what's causing 100% CPU usage? Also, the last time around I noticed near 100% RAM usage too.
  22. OK, so yesterday I rebooted without GUI mode, and today, just now, it happened again. I still cannot figure out what's causing this, but when it happens, everything grinds to a halt. Attached updated diag. serverus-diagnostics-20200619-2042.zip
  23. Server management. It's just the primary way I manage my server when I'm at home. Plus, I keep the Unraid dash up 24/7 so I can see what's going on at a glance.
  24. I haven't, only because I heavily use GUI mode. But I suppose the next time this happens, it wouldn't hurt to reboot without it.
  25. OK, I guess I spoke too soon. The issue just crept back up within the last hour. My son was watching a movie and I noticed it just stopped playing, and when I checked the server, sure enough: 100% usage on all cores. I attached an updated diag. Here's the kicker though: I went into the CPU pinning screen and set every single container and VM to a specific number of cores, and there is not one single thing I have running on here that is able to use all the CPU cores. Most things are limited to 2-4 cores; Plex has the most at 10 out of 12 cores. Luckily I have learned that stopping and restarting the array seems to fix the issue, so at least I don't have to perform a full reboot. But I have to get this fixed; unfortunately, I'm not sure what's causing it, especially since "top" and "htop" don't appear to be showing the whole picture. serverus-diagnostics-20200615-2106.zip
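For what it's worth, my understanding is that the pinning screen maps down to Docker's cpuset flags, so the same limits can be checked or applied from the shell (the container name and core range are examples):

```sh
# Show which CPUs a running container is allowed to use
docker inspect --format '{{.HostConfig.CpusetCpus}}' plex

# Pin a running container to cores 0-9 without recreating it
docker update --cpuset-cpus="0-9" plex
```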