
relink

Members
  • Posts

    235
  • Joined

  • Last visited

Posts posted by relink

  1. 6 hours ago, ich777 said:

    For the second thing, I recommend creating an issue on PhotoPrism's GitHub: Click

    (As a workaround, I can only recommend not using Synology for this and deleting all these folders if you really want to use PhotoPrism)

    The first suggestion worked perfectly. As for the second issue, after some searching through GitHub I did find an official solution.

    Create a file called ".ppignore" at the root of the folder containing your images; inside it, just list anything you want PhotoPrism to ignore when indexing, one entry per line.

     

    So I now have one file called ".ppignore" at the root of my photos folder, with a single line that reads "@eadir" (without the quotes). Now when PhotoPrism indexes, the "@eadir" folders still show up, but from within PhotoPrism they appear empty and nothing in them gets indexed.
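    For anyone finding this thread later, the whole workaround is just one file. A minimal sketch, assuming your originals live at /mnt/user/photos (adjust the path to your own share):

```shell
# One pattern per line; PhotoPrism skips matching paths while indexing.
# "/mnt/user/photos" is an example path, not necessarily yours.
printf '@eadir\n' > /mnt/user/photos/.ppignore
cat /mnt/user/photos/.ppignore
```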

    • Like 1
  2. How can I change two things with PhotoPrism?

     

    1. I need to move the ".photoprism" folder that gets created at the root of my photo folder somewhere else. PhotoPrism isn't the only app I have pointed at that folder, and this caused my other apps to begin indexing all of the PhotoPrism thumbnails.

     

    2. Can I tell PhotoPrism to ignore certain folders? I also use a Synology, which creates a folder called "@eaDIR" inside every folder to hold the Synology-generated thumbnails, and PhotoPrism happily indexes all of them, so I end up with 5 copies of every photo at different resolutions.

     

    With these two issues together, I'm sure you can see this could become an endless loop that, if not caught, would eventually result in 100% disk usage: PhotoPrism indexes the Synology thumbnails and creates its own thumbnails, then the Synology indexes PhotoPrism's thumbnails and creates its own, which then get indexed by PhotoPrism, and on and on until my server is completely full.

     

    I really want to use PhotoPrism, but at a minimum I absolutely have to fix issue #1.
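    In case it helps anyone searching later: if I'm reading the PhotoPrism docs right, the storage folder (cache, index, sidecar files) can be mounted on a different share than the originals, which would address issue #1. A sketch only; the paths and container name are illustrative, not prescriptive:

```shell
# Keep originals and PhotoPrism's own storage on separate shares so other
# apps pointed at the photos folder never see PhotoPrism's thumbnails.
# All paths below are examples; substitute your own shares.
docker run -d --name photoprism \
  -v /mnt/user/photos:/photoprism/originals \
  -v /mnt/user/appdata/photoprism:/photoprism/storage \
  photoprism/photoprism
```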

  3. 21 hours ago, ChatNoir said:

    You cannot trust pictures too much on eBay, Amazon, and the like.

    Normally I would agree 100%, except the pictures they are using clearly aren't the same card. Every "official" LSI 9201-16i I have seen has three distinct differences from the cards I see on eBay.

     

    1. The "real" cards use yellow capacitors, while the Chinese cards use black.

    2. The "real" cards have a smaller heatsink than the Chinese cards.

    3. The "real" cards ALL have an LSI logo on them; many of the Chinese cards do not.

     

    But realistically, I'm not too worried about whether it's a knockoff card or not. What I'm really worried about is: do they work, and are they reliable?

  4. I currently have an LSI 9211-8i with an HP SAS expander card. I am having issues with this setup and am considering simplifying to a single controller that can support more drives.

     

    In case anyone wants to see what issues I've been having, I have posted on the Unraid forums and Reddit. At this point I'm convinced there's an issue with either my HBA or expander.

    Unraid Forums Post

    Reddit Post

     

    So that brings me to the 9201-16i. I have searched eBay and found tons of them, but they all ship from China or Hong Kong, and I have heard mention of them being "counterfeit". Yet when I look at American listings, they are double the price and look like the exact same card. So, counterfeit or not, has anyone actually ordered one of these from China, and what was your experience?

    If anyone can suggest a different controller, that's fine too. I need to connect 12 drives, and I'd like to stay under or around $150.

     

    Thanks guys!

  5. On 7/6/2020 at 5:28 PM, civic95man said:

    Have you tried updating the BIOS? If that fails, are you able to move your video card to another slot?

    Hmm, I have not. I haven't updated the BIOS since I got the board.
     

    As for moving my GPU, I'm not 100% sure that's doable. Every slot in my case is either being used or blocked by something needed by another card.
     

    But I did just get all new SAS cables in today. I'm going to try replacing those first, and if I'm still having issues I'll start looking into updating the BIOS.

     

    I'm open to any other troubleshooting steps; I still haven't resolved this. I made it several days, but it happened again last night. If it helps at all, 99% of the time it happens in the evening between 7:30 and 9:30 PM, usually closer to 8:00 PM.
     

    I do have a syslog server on my Synology that logs everything from Unraid. The last time it happened, I saw a TON of errors along the lines of "unable to parse crontab". I don't have any cron jobs at all set to run within the time frame this is happening in, and I don't know if that actually had anything to do with it. But knowing this, are there any red flags I should look for in the logs if this happens again?
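    A quick sketch of what I'd grep for next time it happens (the log path is hypothetical; point it at wherever your Synology stores the forwarded Unraid log):

```shell
# Hypothetical log location; adjust to your syslog server's export path.
LOG=/var/log/unraid-syslog.log

# Any cron-related noise, with a little context around each hit
grep -n -i -B2 -A2 'crontab' "$LOG"

# Errors/warnings inside the 19:30-21:30 window when the lockups occur
grep -E ' (19:[3-5][0-9]|2[01]:[0-5][0-9]):' "$LOG" | grep -i -E 'error|warn|oom'
```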

  6. I'm beginning to think it may be related to some disk problems I have been having. I have looked through my syslog server and see pretty consistent CRC errors from all of my drives, so I have all new cables on the way for my HBA and SAS expander.

    I noticed a crash happened a couple of minutes after adding a new series to Sonarr, so it locked up just as the new episodes began flooding into the array. That's what it seemed like, anyway.

    Cables will be here Wednesday; I guess I'll see what happens.
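    If the new cables don't clear it up, SMART attribute 199 (UDMA_CRC_Error_Count) is the counter to watch; it only ever increments, so note the current values now and see whether they keep climbing afterwards. A sketch; the device names are examples:

```shell
# Print the CRC error counter for each array drive. Continued growth after
# a cable swap would point at the HBA/expander instead of the cables.
# The device list below is an example; substitute your own drives.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
  echo "== $dev =="
  smartctl -A "$dev" | grep -i -E '199|crc'
done
```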

  7. So I've been going through every single line and setting on every single page of my Unraid server, trying to see if anything jumps out at me. One thing did: I have a plugin installed called "Dynamix Cache Directories". I don't remember whether it comes with Unraid or I installed it, but I read up on what it does and decided to try disabling it. It is also by far the oldest plugin on my system, with the most current version showing as "2018.12.04".

     

    Since disabling it, which was only 2 days ago, I haven't crashed, my RAM usage has been in the 50% range instead of 80+%, and CPU usage seems to be staying at or under 20%.

  8. 14 hours ago, -Daedalus said:

    Have you seen a 100% CPU crash from top/htop, or just from the GUI?

    This is exactly what I see. I only see the 100% usage in the GUI; in htop everything looks normal. But that still doesn't stop Docker from becoming completely unresponsive.

     

    I actually have had an issue with either my HBA or expander, I'm not sure which. It's an issue I've had for quite a while now, whereas the problem I'm describing here is fairly new. Anyway, any time I reboot my Unraid server, I generally have to reboot 1-2 more times to actually get all my disks to show up; on the first boot I'm guaranteed to have several disks missing from the array. Once I get all the disks to show up again, though, everything has always seemed to run OK.

  9. So I think I managed to catch things as they were falling apart this time. It seems the issue comes from running out of RAM. I don't know how Unraid handles that: does it have a swap file? If so, where is it?

     

    Anyway, I immediately SSHed into Unraid and ran htop, and simply didn't see anything using that much RAM; same when running top. Despite this, even with all containers and VMs stopped, RAM usage never dropped below 54%. After restarting the array with all my main containers running, I haven't gone over 19% RAM usage.
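    For what it's worth, here is a quick sketch of the checks I'd run over SSH to answer the swap question and see where the RAM actually goes (as far as I know, stock Unraid configures no swap, so the swap listing is normally empty):

```shell
# Is any swap configured? On a stock Unraid box this usually lists nothing
# beyond the header line.
cat /proc/swaps

# Overall memory picture; "buff/cache" is reclaimable page cache, which some
# GUI gauges lump into "used" and could explain a floor like that 54%.
free -m

# Top memory consumers by resident set size
ps aux --sort=-rss | head -n 10
```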

     

    I have attached two diags this time. The first is from before I restarted the array, with everything stopped except Pi-hole and Unbound. The other is from after restarting the array, with my main containers running.

    serverus-diagnostics-20200629-2017.zip serverus-diagnostics-20200629-2013.zip

  10. As of the crash yesterday, I now have only the bare-essential containers running and no VMs. If I can go a few days without another issue, I will start re-enabling things. If I crash again, I will disable all containers and see what happens.
     

    The part that I find confusing is that there is not a single container or VM on my system with access to all CPU threads. Plex has access to the most, and even it is capped at 10 out of 12; everything else is limited to between 2 and 4.

  11. OK, I guess I spoke too soon. The issue crept back up within the last hour. My son was watching a movie and I noticed it just stopped playing; when I checked the server, sure enough, 100% usage on all cores. I attached an updated diag.

     

    Here's the kicker, though: I went into the CPU pinning screen and set every single container and VM to a specific number of cores, and not one single thing I have running on here is able to use all the CPU cores. Most things are limited to 2-4 cores; Plex has the most at 10 out of 12.

     

    Luckily, I have learned that stopping and restarting the array seems to fix the issue, so at least I don't have to perform a full reboot. But I have to get this fixed, and unfortunately I'm not sure what's causing it, especially since "top" and "htop" don't appear to be showing the whole picture.
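    One sanity check that doesn't rely on top or htop: the kernel itself reports which cores a process is allowed to run on, so you can confirm the pinning actually applies. A sketch; "$$" (this shell) is only an example PID, so substitute the PID of a container process instead:

```shell
# Cpus_allowed_list shows the cores the scheduler may use for a process.
# "$$" is this shell, used purely as an example PID.
grep Cpus_allowed_list "/proc/$$/status"
```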

    serverus-diagnostics-20200615-2106.zip

  12. Just wanted to update: I heard back from SD, and it does appear the issue was on their end. I re-ran the cron job, and everything loaded just fine!

     

    I also wanted to chime in on the issue being discussed, of nearly empty data taking the place of correct cached data when it's not supposed to. I have had that happen a couple of times as well: SD has an issue and pulls down a mostly empty file, and xTeVe loads it anyway, erasing whatever correct data it had cached.

     

    The last time this happened was yesterday. I hadn't noticed the update to the new YAML files, so I went to watch TV and had no data. I came here, saw there was a change, immediately generated the new YAML files I needed, ran the cron job, and everything was fine. But overnight, guide2go pulled in new data from SD, which was having issues with one of my lineups, so every channel using that lineup now had no data, which is why I posted here this morning.

     

    The time prior to that, all of my channels lost their icons because SD was having an issue.

  13. I did make sure I was in the correct directory when running the command. I have 3 lineups, and 2 of them seem to be working.

     

    Like I said, when I first did the new setup yesterday, I edited my new YAML files and created the XML files, and everything seemed to work just fine. It wasn't until this morning that all the data for one of my lineups was gone, and I can't get it to download again.

  14. On 6/6/2020 at 7:01 AM, Rick Gillyon said:

    Some serious problems with the updated plugin here. I thought it was working and now I've run out of EPG. 😯

     

    I've tried recreating this with a new lineup and it's repeatable. I just got G2G to create the YAML for a new XML from SD:

    guide2go -configure z.yaml

     

    Then added all channels, no edits to the YAML. Ran:

    guide2go -config z.yaml

     

    This output is produced:

    
    2020/06/06 11:39:37 [G2G  ] Version: 1.1.1
    2020/06/06 11:39:37 [URL  ] https://json.schedulesdirect.org/20141201/token
    2020/06/06 11:39:37 [SD   ] Login...OK
    
    2020/06/06 11:39:37 [URL  ] https://json.schedulesdirect.org/20141201/status
    2020/06/06 11:39:37 [SD   ] Account Expires: 2020-09-24 16:52:44 +0000 UTC
    2020/06/06 11:39:37 [SD   ] Lineups: 2 / 4
    2020/06/06 11:39:37 [SD   ] System Status: Online [No known issues.]
    2020/06/06 11:39:37 [G2G  ] Channels: 163
    2020/06/06 11:39:37 [URL  ] https://json.schedulesdirect.org/20141201/lineups/GBR-1000080-DEFAULT
    2020/06/06 11:39:38 [URL  ] https://json.schedulesdirect.org/20141201/lineups/GBR-1000203-DEFAULT
    2020/06/06 11:39:38 [G2G  ] Download Schedule: 7 Day(s)
    2020/06/06 11:39:38 [URL  ] https://json.schedulesdirect.org/20141201/schedules
    2020/06/06 11:39:38 [ERROR] invalid character '<' looking for beginning of value
    2020/06/06 11:39:38 [G2G  ] Download Program Informations: New: 0 / Cached: 0
    2020/06/06 11:39:38 [G2G  ] Download missing Metadata: 0 
    2020/06/06 11:39:38 [G2G  ] Create XMLTV File [z.xml]
    2020/06/06 11:39:38 [G2G  ] Clean up Cache [z_cache.json]
    2020/06/06 11:39:38 [G2G  ] Deleted Program Informations: 0

    Clearly the line:

    2020/06/06 11:39:38 [ERROR] invalid character '<' looking for beginning of value

    is a problem. There is no "<" in the YAML. If I pick a single channel it seems to work, so it looks like the plugin is not properly protecting itself against unexpected characters in the SD data.

     

    Edit: the XML output is produced, but is just the channel list.

     

    Any ideas? This is quite urgent (wife complaining!) so I'll need to sort something ASAP. Thanks! Redacted YAML is attached.

    z.txt 11.54 kB · 0 downloads

    I am having this exact same issue. I actually came here to post this almost word for word.

     

    When I did everything last night it seemed OK; I woke up today, most EPG data is gone, and I noticed the exact same error.
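    For anyone else hitting this: "invalid character '<' looking for beginning of value" is the error a JSON decoder throws when the response starts with HTML (typically an error or maintenance page) rather than JSON. A quick way to check whether SD is serving HTML at that moment; the endpoint is copied from the log above, and since this sends no auth token you may just see a JSON error object when things are healthy:

```shell
# If the first bytes are "<", Schedules Direct is returning an HTML error
# page instead of JSON, and the problem is on their end.
curl -s https://json.schedulesdirect.org/20141201/status | head -c 200; echo
```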

  15. I believe you may have been right. I had to try something, so I just started shutting down containers until my CPU usage dropped. When I saw how big a difference CrashPlan made, I pinned it to a single core. Now that single core is maxed out, and the rest of my CPU looks normal.

     

    But I had this problem once before with CrashPlan, probably close to 2 years ago. I fixed it back then, and it hasn't been an issue since. Any idea why it would suddenly become a problem again?

  16. Hmm, OK. I just SSHed in and ran "top", and I'm seeing several things with usage in the 20s, but I'm not sure what to do about any of them. They don't appear to be any of the containers I have running. One of them is Firefox; could that be because I'm booted in GUI mode?

    Interestingly enough, if I run htop instead, my CPU usage looks normal. But that doesn't change the fact that something is clearly wrong: my server is so overloaded that I'm going to have to reboot soon or no one can watch TV. I really need to figure out why this keeps happening. It's so bad that not only is Plex not working, the web UI for Unraid is barely responsive.
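    When the GUI and htop disagree like this, a raw snapshot from ps can help, since it includes kernel threads and short-lived processes that a refreshing view can hide. A minimal sketch; the second command assumes the Docker CLI is available on the host:

```shell
# Biggest CPU consumers right now, kernel threads included
ps -eo pid,ppid,comm,%cpu,%mem --sort=-%cpu | head -n 15

# Per-container CPU/memory, in case the load is inside a container
docker stats --no-stream
```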

  17. Hey guys, this is the second time this has happened this week. The wife and kids were watching Plex when suddenly it started to stutter and eventually stopped playing. I remoted into the server from work to see 100% CPU usage on all cores of my Ryzen 5 2600.

     

    The first time, I assumed the parity check that was running was causing the issues, so I stopped it and rescheduled it for later, and after a reboot everything was fine. But this time there was no parity check running and the mover wasn't running, so I'm not sure what's causing this. I decided NOT to reboot this time; instead I downloaded the diag (attached) and let it run its course. It did eventually stop and go back to normal CPU usage... but this shouldn't happen to begin with, and I don't know what's causing it.

     

    UPDATE: My wife just told me it had been fine all day until around 2:00-2:30 this afternoon. It's currently 5:13 PM as I'm typing this where I am.

     

     

    serverus-diagnostics-20200612-1702.zip
