
Posts posted by postersyndrome

  1. I was looking through my SMART report and noticed that my last extended test completed without errors, and it was done AFTER the most recent of the 5 errors listed was discovered. I'm not sure if that has anything to do with it. I might end up having to get a new SAS cable... If the current test completes without error, I will power off, reseat the SAS cables, and see what happens from there. If I run for a bit without error, I will order a couple of replacements for next time. Just in case I don't know my ass from a hole in the ground, I will attach the report here as well.

    plexraid-smart-20230314-1113.zip

  2. My Extended test finished. There were no errors found. This morning, I powered off the server and reseated the cables on the drive indicated. I did notice that the power connector right next to the one connected to the drive with issues has a 4-pin molex adapter for a fan controller. I'm not sure if that is the cause or not, but if it happens again, I will start looking for an alternative there. After the server came up and I was able to ensure that the issue was resolved at least for the time being, I backed up my flash drive and upgraded to the newest version of unRAID. This was the plan before I started getting the errors. So far, so good. Thanks for the assistance, @Squid!

  3. 6 minutes ago, Squid said:
    
      Commands leading to the command that caused the error were:
      CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
      -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
      60 00 28 00 00 00 01 80 12 b6 88 40 00 43d+11:57:14.120  READ FPDMA QUEUED
      ea 00 00 00 00 00 00 00 00 00 00 00 00 43d+11:57:13.644  FLUSH CACHE EXT
      61 01 20 00 20 00 02 04 77 dc c0 40 00 43d+11:57:04.705  WRITE FPDMA QUEUED
      61 04 00 00 18 00 02 04 77 d8 c0 40 00 43d+11:57:04.705  WRITE FPDMA QUEUED
      61 04 00 00 10 00 02 04 77 d4 c0 40 00 43d+11:57:04.705  WRITE FPDMA QUEUED
    
    Error 2 [1] occurred at disk power-on lifetime: 17515 hours (729 days + 19 hours)

     

    Reseat the cabling.

     

    Also, trimming every hour is going to decrease the lifetime of the SSD.  Once a week is more than sufficient.

    Thanks for the fast response. Since you said once a week is more than sufficient, I have set the trim to once a month. If I should move it to once a week, I'm OK with that too.

     

    I am currently in the process of running an extended test. As soon as that finishes, I will power down and reseat the cables on the device.

     

    Again, thank you for responding so fast.
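For anyone curious what a weekly schedule looks like outside the plugin UI, it boils down to a single cron entry. This is only a sketch: /mnt/cache is an assumed mount point, and on unRAID the Dynamix SSD TRIM plugin manages the equivalent for you.

```shell
# Weekly TRIM of the cache SSD, Sundays at 03:00.
# /mnt/cache and the fstrim path are assumptions - adjust to your system.
0 3 * * 0 /sbin/fstrim -v /mnt/cache
```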

  4. On 2/1/2022 at 11:39 AM, binhex said:

     

    thanks for that!, you are the first user that has confirmed a successful upgrade (without restoring from backup), so it looks like any prior corruption in the db (probably due to earlier versions of radarr) will cause the migration to fail, so it is going to be a case of backup your config, cross your fingers, sacrifice a couple of goats in your back garden and then press the button!.

    Mine went smoothly as well. In fact, I didn't know there were any issues until I got an email about this thread. Thanks, binhex!

  5. On 2/5/2022 at 2:54 PM, Pks6601 said:

    Last week, I woke up to my Plex Server not starting. After a bit of Googling, I copied the appdata folder, deleted the binhex-plexpass docker and reinstalled it, restoring the appdata folder. I have restored the paths for my media, and all of that appears to be working. At some point in the process, my ability to transcode to the iGPU was broken and I have been unable to restore it. I no longer have the /dev/dri folder, I have attempted to go back through the process of enabling the iGPU but that folder is not being found no matter what I try. Has anyone else experienced any of this? Any assistance would be greatly appreciated.

    Thank you.

    Apparently, I'm an idiot. I forgot that I moved the HDMI dummy plug for some reason. After making sure that all of the ports that should have plugs in them did, installing and enabling went perfectly. Good to have it all back the way it was.

     

    Moral of the story: Check all the stupid stuff you are sure isn't the problem!

  6. Last week, I woke up to my Plex Server not starting. After a bit of Googling, I copied the appdata folder, deleted the binhex-plexpass docker and reinstalled it, restoring the appdata folder. I have restored the paths for my media, and all of that appears to be working. At some point in the process, my ability to transcode to the iGPU was broken and I have been unable to restore it. I no longer have the /dev/dri folder, I have attempted to go back through the process of enabling the iGPU but that folder is not being found no matter what I try. Has anyone else experienced any of this? Any assistance would be greatly appreciated.

    Thank you.
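A quick sanity check for this kind of iGPU problem is to look for the render device nodes on the host before touching the container config. A hedged sketch, not a definitive diagnostic:

```shell
# Check for the iGPU device nodes that Plex needs for hardware
# transcoding. If the driver is loaded, card*/renderD* nodes are
# listed; otherwise the fallback message is printed instead.
ls -l /dev/dri 2>/dev/null || echo "no /dev/dri - iGPU not exposed"
```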

  7. On 9/26/2021 at 8:14 PM, Ystebad said:

    I guess I'm just too stupid to run an unraid server because after installing this and accepting the Eula when I start the GUI it takes me right to my main unraid server page.  doesn't seem to actually do anything. 

     

    Make sure that you specify a port that is not already in use by the server. The default is 80, and I know that my port 80 is already taken by another of my dockers.
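When the default port collides like that, the usual fix is to map the container's internal port to a free host port. A rough sketch (the container and image names here are placeholders, not the actual plugin):

```shell
# Publish the container's internal port 80 on host port 8080 instead,
# so it no longer collides with whatever already owns host port 80.
docker run -d --name my-webui -p 8080:80 some/webui-image
```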

  8. I could really use some help. I just performed a pretty sizeable hardware upgrade on my server: new motherboard, GPU, PSU, case, and RAM, and I added 5 new drives. Prior to the upgrade, the server was operating beautifully, with no issues with any of my containers. My delugevpn container was rebuilt, and after that, my sonarr container started operating perfectly, in fact better than before, it appears. I currently have binhex-delugevpn with binhex-jackett, binhex-sonarr, and binhex-radarr routed through deluge for VPN. Jackett and sonarr are working just fine, but radarr will not successfully add an indexer. I have even gone so far as to completely remove the container, the image, and the appdata, start over from scratch, and I am still getting the same results.

     

    If log files would help, let me know what to get and I can provide it, but I would greatly appreciate some help with this one.

    Thanks in advance, folks.

     

    Edit:
    I was able to fix the issue with adding indexers, I had to replace the IP in the URL for the indexer with localhost.
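The reason localhost works here: a container started with --net=container:binhex-delugevpn shares the VPN container's network namespace, so every service in that group answers on 127.0.0.1 rather than the host's LAN IP. One way to confirm an indexer is reachable from inside the group (9117 is Jackett's default port; the URL is illustrative):

```shell
# From a shell inside any container sharing the VPN namespace,
# Jackett should answer on localhost, not the host's LAN IP.
# Prints the HTTP status code (e.g. 200) if it is reachable.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9117/
```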

  9. Been following the development of this for a bit and decided this morning that it seemed mature enough to give it a shot. I am currently converting a 1.5 GB 720p mp4 file with unmanic pinned to only 2 cores. I know that there are better ways, but I am testing performance vs. load at the moment to make sure that I can run it on this server without issue. If I overload it constantly, I may move it to the other server that doesn't run any dockers. The pinned cores are sitting at 98% fairly constantly, but the remaining 10 cores are barely moving, and the rest of my processes are responding as promptly as before. I am going to test this for a bit, and if all goes well, I may just turn it loose on the majority of my library. Except the 4K files I have worked so hard to get to a quality I am happy with. :)

    Thanks for building this, @Josh.5

  10. That was the missing piece! I had gotten it to work using localhost, but it seemed unstable, with occasional failures. After changing the IP to the container's IP, the tests became more reliable and responsive. So far, so good; I now have jackett, sonarr, and radarr running without errors. Thanks for all the help!

  11. 12 minutes ago, wgstarks said:

    Did you also edit the Jackett IP in all your indexers in Sonarr? I know I missed those on my first try.

     

    Just a guess.

    The jackett IP didn't change. Can you tell me what you are referring to? Did you change the indexer settings to localhost or something else?

    This is what they are now:

     

    Screenshot 2021-02-26 124149.png

     

    EDIT:

    After thinking about what I typed, I went into the indexer and changed the URL from http://192.168.1.10:9117/... to http://localhost:9117/... and the indexers started passing tests. I was then able to follow the same process on the downloader and it passed. It appears that I now have sonarr fixed and I will be moving to radarr. Thanks for the assistance! I will update this if I have issues, or one final time to indicate success.

  12. I am using a separate VPN (PIA over WireGuard) and privoxy to route my jackett, sonarr, and radarr containers through. Jackett is passing all tests now, but sonarr is still failing. I have made changes to the proxy settings in sonarr, but the tests are all still failing.

    Screenshot 2021-02-26 122212.png

     

    EDIT:

    I read through Q26. Since I am routing the containers through delugevpn, I tried removing the proxy setting completely, and the tests are failing as well. I am having terrible luck with this and am beginning to suspect I am missing something really simple.

  13. I followed the instructions in Q24 and Q25 in the FAQ, and I am still not able to successfully test any of the indexers. I am working on this one docker at a time, starting with jackett. I added all the necessary ports in delugevpn, listed all the needed ports in additional ports separated by commas, and restarted the container. I am able to pull up the DelugeVPN webui. I then changed the network type back to "none" for jackett and added the extra parameter as instructed, "--net=container:binhex-delugevpn". The web interface loads for jackett when I use local IP:9117, but all indexers fail testing. What am I missing?

     

    EDIT:

    After looking at a few other comments, I removed all settings for proxy in jackett, and now it is passing tests on all indexers. I will move on to sonarr from here.
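For anyone following along, the setup described above amounts to starting the routed container with Docker's container-network mode. Roughly (paths and the image tag are examples from this thread, not a definitive command line):

```shell
# Jackett piggybacks on DelugeVPN's network stack. It has no network
# of its own, so -p is not allowed here; port 9117 must instead be
# published on the binhex-delugevpn container itself.
docker run -d \
  --name=binhex-jackett \
  --net=container:binhex-delugevpn \
  -v /mnt/user/appdata/binhex-jackett:/config \
  binhex/arch-jackett
```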

  14. I have always used jackett and prior to today, I never had a bypass set up. But now, as has been posted, jackett, radarr, and sonarr are now failing. Is this a permanent change that I will need to fix, or is this an error that will be fixed in a future update?

  15. Sounds good. I will reenable the schedule and keep an eye on it.

     

    Should I be concerned that I haven't seen any errors saying that Mover is not compatible with my UnRAID 6.9 server? It isn't too big of a deal, since that server has little usage and is almost entirely a testbed, other than running pihole for me.

  16. I started getting a message from Fix Common Problems stating that my version of Mover is not compatible with my version of UnRAID. I have a second server running 6.9 RC2 that is not showing that error. I did reinstall Mover from the link posted above on the 6.8.3 server to make sure I wasn't running an outdated copy, and I am still getting the message in Fix Common Problems. Should I disable/uninstall Mover on 6.8.3 for now, or disregard the message?
