Phastor

Everything posted by Phastor

  1. Just gave this a shot. Unfortunately it's a no-go for me. Glad you got yours fixed though!
  2. Nope. Still no luck. I was told by someone else here that I should check with the Sonarr forums about trying to recover the database, but at this point I'm wondering if it would be more efficient to just blow away all my appdata and start over. I really don't want to do that though. That's a lot of profiles and settings I do not want to have to create again.
  3. Just a connection refused error, as if nothing is listening on that port. Here's the segment that stands out to me from the logs:

        2024-03-27 10:25:32,605 DEBG 'sonarr' stdout output: [Fatal] ConsoleApp: EPIC FAIL! [v4.0.2.1183] NzbDrone.Common.Exceptions.SonarrStartupException: Sonarr failed to start: Error creating main database ---> System.Text.Json.JsonException: '|' is invalid after a value. Expected either ',', '}', or ']'. Path: $ | LineNumber: 33 | BytePositionInLine: 0. ---> System.Text.Json.JsonReaderException: '|' is invalid after a value. Expected either ',', '}', or ']'. LineNumber: 33 | BytePositionInLine: 0.
           at System.Text.Json.ThrowHelper.ThrowJsonReaderException(Utf8JsonReader& json, ExceptionResource resource, Byte nextByte, ReadOnlySpan`1 bytes)
           at System.Text.Json.Utf8JsonReader.ConsumeNextToken(Byte marker)
           at System.Text.Json.Utf8JsonReader.ConsumeNextTokenOrRollback(Byte marker)
           at System.Text.Json.Utf8JsonReader.ReadSingleSegment()

     So I guess it is failing to start when using binhex/arch-sonarr.
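     The failure is a JSON parse error (a '|' at line 33 of some file), which suggests a corrupt .json file in appdata rather than the SQLite database itself. Here's a minimal sketch for tracking down which file is corrupt; the appdata path is an assumption, so adjust it to your share layout:

        # Scan every .json file under the Sonarr appdata for syntax errors.
        # The path below is an assumption -- point it at your actual appdata.
        import json
        from pathlib import Path

        for path in Path("/mnt/user/appdata/binhex-sonarr").rglob("*.json"):
            try:
                json.loads(path.read_text(encoding="utf-8", errors="replace"))
            except json.JSONDecodeError as err:
                # Sonarr reported line 33; a hit on that line is the culprit.
                print(f"{path}: line {err.lineno}, col {err.colno}: {err.msg}")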
  4. In the same fashion I've accessed it before the issue. Currently, on binhex/arch-sonarr:v3, I can access the UI via the Docker dashboard or directly via my unRAID server's IP and the container's port number. On binhex/arch-sonarr, the UI is inaccessible via both of those methods.
  5. Just gave this a shot and unfortunately that's still a no-go. No change in behavior. Thanks though!
  6. An update on this. I tried binhex/arch-sonarr:v3. The container UI is functional with this. I am still getting the "BlacklistSpecification: SQL logic error no such table: Blacklist" error for every show episode though, so nothing monitored is being downloaded. Going back to binhex/arch-sonarr still makes the UI inaccessible.
  7. So, I've run into an issue that has been three years in the making. I have been using SABnzbd and the -arr suite for some time. I started out using the Binhex containers. Sometime in 2021, I switched the repository for my existing container from the Binhex source to Linuxserver's. I'm not exactly sure why I did this, but it might have been because of the major version release around that time and Linuxserver having a preview build of it. I only did this switch with Sonarr.

     Fast forward to a couple weeks ago: I noticed that the container was no longer updating and probably hadn't for some time. This is when I noticed and remembered that the container was pointed at the Linuxserver preview build, even though I know it was originally the Binhex container. I pointed it back to the Binhex repo and updated to pull down the new container. After updating, the web UI was no longer available. I didn't have time to mess with it, so I switched it back to the Linuxserver repo, updated, and everything looked good. "This is a future me problem," I thought.

     Fast forward again to today, and future me noticed that nothing has downloaded for any of my monitored shows in the last two weeks. I did a search on one of the shows on my calendar, and there were indeed episodes matching the criteria in my profiles that should have downloaded. However, they were all rejected with the error "BlacklistSpecification: SQL logic error no such table: Blacklist." I get this with every single entry for every single show. I can still manually download them, however.

     I did some research, and apparently this happens when you upgrade Sonarr and then roll back. It seems that when I switched back to the Binhex repo and Sonarr was able to update again, it also updated the database. (So I'm guessing Sonarr was working under the hood even if the UI was busted.) And since I had to switch back to Linuxserver's, that older version can no longer properly read the database.

     I tried switching back to the Binhex repo again today, but was met with the same results as two weeks ago: an inaccessible UI. Since my database is now unreadable by the previous version, it looks like my only option short of wiping my Sonarr appdata is to figure out how to get it working with the Binhex container again. Again, I think it's working under the hood. The web UI just appears to be borked. Any ideas on where to start on troubleshooting this?
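     The "no such table: Blacklist" error is consistent with a database that has already been migrated forward (newer Sonarr renamed the Blacklist table to Blocklist, if I recall correctly, so an older build querying Blacklist finds nothing). A minimal sketch for confirming this directly, assuming the v3 database file is sonarr.db in the appdata share (both the path and the filename are assumptions):

        # Inspect the Sonarr SQLite database to see what migration state
        # it is in. Run this against a COPY of the file, with Sonarr stopped.
        import sqlite3

        con = sqlite3.connect("/mnt/user/appdata/binhex-sonarr/sonarr.db")
        print(con.execute("PRAGMA integrity_check").fetchone())
        tables = [r[0] for r in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
        print(tables)  # 'Blocklist' but no 'Blacklist' => already migrated forward
        con.close()

     If the tables confirm the newer schema, the path of least pain is probably getting the Binhex (v4) container healthy rather than trying to downgrade the database.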
  8. I'm using "remote access to lan" as my peer connection type. I've got an active tunnel and can remotely ping virtual machines running on my unRAID server as well as physical devices on my LAN over the tunnel. I can also access docker containers over the tunnel that are using network type "bridged". However, I cannot ping or access my PiHole container, which is using the network type "custom:br0" and has its own IP on my physical LAN's subnet. I'm guessing this has something to do with the container's IP being bound to the server's physical interface, but my VMs are configured the same way and I can access them just fine.
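     That guess is close: unRAID's custom:br0 networks use Docker's macvlan driver, which deliberately cuts those containers off from the host's own network stack, and the WireGuard tunnel terminates on the host, so tunnel traffic can't reach a macvlan container even though VMs (which sit on a plain bridge) are fine. If memory serves, the "Host access to custom networks" option in unRAID's Docker settings exists for exactly this case. A quick sketch for mapping what is and isn't reachable from the remote end; every IP here is a placeholder:

        # Ping a list of LAN targets from the WireGuard client side to
        # see exactly where the tunnel's reachability stops.
        import subprocess

        targets = {
            "unRAID host":            "192.168.1.13",
            "VM on br0":              "192.168.1.20",
            "bridged container":      "192.168.1.13",
            "br0 container (PiHole)": "192.168.1.50",
        }
        for name, ip in targets.items():
            ok = subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                                capture_output=True).returncode == 0
            print(f"{name:<24} {ip:<15} {'reachable' if ok else 'NO REPLY'}")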
  9. If it's something as simple as both NICs having IPs on the same subnet, I'll be very happy, but confused. I understand having the same IP would cause an issue, but what is it about having two NICs on the same subnet that doesn't sit well with unRAID? I'm not doubting that could be my issue--I want it to be that simple. I'm just curious.
  10. I've been running on a server with dual NICs from the beginning, but only ever utilized one of them. I decided I wanted to look at pfSense last night and figured running it as a VM would be a good option just for testing it out. I figured I would bridge my second NIC to it so I could play around with it. When I went to set up the second NIC in unRAID, I realized for the first time that unRAID had bonded my two interfaces by default. I unbonded the interfaces, kept eth0 as it was with the IP of x.x.x.13, what it has always been, and assigned x.x.x.14 to eth1.

     Things seemed normal at the start. I took a look at pfSense and realized I was going to be better off testing it on a physical machine, so at that point I physically disconnected eth1, and that is where the weirdness began. I immediately lost connectivity to my server. I ran a ping and sure enough, no response from .13. However, I did get a response from .14. Odd, since I disconnected the interface that should have been .14. I reconnected eth1 and the server came back. I disconnected eth1 again and got the same result. At this point I thought maybe I was wrong about which physical port was which on the server, so to troubleshoot, I disconnected eth0 and left eth1 plugged in. The server dropped again. No response from .13, but again a response from .14... I still don't know what was going on there. It was already 1:30 at that point, so I just said screw it, left them both plugged in, since it seemed to be happy with that, and went to bed.

     Fast forward to this morning, right before I was leaving for work. I realized that my Plex server was down (a Docker container in unRAID). I tried to get into the unRAID UI and got no response. No Docker containers responding. Can't connect with SSH. No pings from .13 or .14. The only thing that IS working is my Windows 10 VM that I VPN/RDP into from work, which is very odd since that VM is bridged to eth0, which is otherwise not responding. I couldn't do anything else with it as I had to leave for work.

     That's kind of where I'm at right now. The VM is still working, as that's what I'm remoted into and writing this from, but I am otherwise unable to access my server in any way. I'm going to plug a monitor into it once I get home and try to get diagnostics from it to post here, but for now I'm just sitting at work for the next eight hours going absolutely crazy and needed to get this out there. Any thoughts?
  11. Nah. I've only got two USB devices aside from the flash. I've got my UPS plugged into USB2 on the board and the drive in the PCIe adapter.
  12. As much as that would hurt, I'll give USB2 a shot. The USB drive in question is my Duplicati backup target, which is already slow the way it is. I may just get myself a small NAS for my backups if that turns out to be the issue. It's just weird since this issue only surfaced a few months ago after about two years of unRAID/UD use. Thanks for the help! I'll keep you posted with what happens.
  13. This has happened with two different USB drives. I guess I'll need to scrounge up a third one and see. Or perhaps it could be the controller? I'm using a PCIe USB3 controller since my board does not have USB3 onboard. Is it UD's interaction with the failed device that's causing the server-wide issues?
  14. After about an hour of the diag page appearing locked up, it gave me the zip file. aivas-diagnostics-20200113-2202.zip
  15. Just tried running diagnostics and that seems to have hosed my UI. This thing will go at the drop of a hat as soon as UD starts to do this. Will have to wait till I'm home to do a hard reset before I can look into it further.
  16. I'm having an issue where Unassigned Devices stops responding randomly. The UD section in my Main tab just gives me an unending dancing unRAID logo, I can't browse my USB-connected drive, and any Docker containers with folders mapped to that drive become unresponsive. It also causes all of my CPU cores to appear pegged in the Dashboard. When it does this, I am unable to stop any Docker containers or my array. I am unable to shut down my server cleanly, and my only option is a hard reset. If I try to do anything with UD in this state, it hoses my web UI completely and I can't access anything until the server is hard reset.

     Any Docker containers that I have not touched and that have no folders mapped to the UD drive continue to function. I can also still RDP into my Win10 VM. However, anything to do with browsing the server UI is not possible. I thought I read about this being a known issue that was addressed in a recent update, but it has just happened to me again. I am at work right now and am afraid to touch anything in fear that the web UI will get hosed and I won't be able to do anything with it until I get home.
  17. I cannot for the life of me get Calibre-Web to see ebook-convert. I pulled the binary from a Calibre install, tossed it into /config within the container, and pointed Calibre-Web at it. It's still reporting as not installed. I consoled into the container and ran "ebook-convert --version" to see if it was functional at all, and it returned an error about missing modules. Does it require dependencies? Does Calibre as a whole have to be installed within the container?
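     The missing-modules error is the expected symptom here: ebook-convert is just a small launcher that imports the full Calibre Python package, so copying the one binary into /config can't work; the whole Calibre installation has to be visible inside the container (the linuxserver image can pull it in via its universal-calibre docker mod, I believe). A minimal check from inside the container, assuming the binary was dropped at /config/ebook-convert:

        # Confirm whether ebook-convert is actually runnable inside the
        # container. The /config path mirrors where the binary was dropped.
        import shutil
        import subprocess

        path = shutil.which("ebook-convert") or "/config/ebook-convert"
        result = subprocess.run([path, "--version"],
                                capture_output=True, text=True)
        # Module import errors here mean Calibre itself is missing,
        # not that Calibre-Web is misconfigured.
        print(result.stdout or result.stderr)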
  18. MXToolbox is reporting that my server does not support TLS. My knowledge in this sort of thing is limited, but I think I have pinpointed the problem. When I issued the EHLO command myself, the server returned the following:

        250-PIPELINING
        250-8BITMIME
        250-SMTPUTF8
        250-SIZE 25214400
        250 STARTTLS

     That last line is what draws my attention. It's got a space instead of a dash. MXToolbox is expecting "250-STARTTLS", and I'm guessing that's why it's marking it as not supported, since that's not in the response it's getting. I imagine this is something more for the original developer of the software to deal with--just hoping that it makes its way up the chain from here.
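     Worth noting: per RFC 5321, the final line of a multiline reply uses "250 " with a space (the dash marks continuation lines only), so "250 STARTTLS" as the last line is actually valid SMTP; a checker doing a naive substring match on "250-STARTTLS" would miss it. A quick sketch to see how a standards-compliant client parses the same response, with the hostname as a placeholder:

        # Probe the EHLO response the way a real SMTP client would.
        # smtplib parses multiline replies per RFC 5321, so has_extn()
        # tells us whether STARTTLS is genuinely advertised.
        import smtplib

        with smtplib.SMTP("mail.example.com", 25, timeout=10) as smtp:
            code, banner = smtp.ehlo()
            print(code, banner.decode())
            print("STARTTLS advertised:", smtp.has_extn("starttls"))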
  19. I've had a pretty stable set of backups for about 8 months now. Aside from the incredibly slow restore process (browsing folders within the backup is painful!), I've been pretty happy with it. However, I really wanted some of the newer features, such as improved performance during restores, so I switched to the canary build. I was not expecting my backup config to get wiped when I did this. Going back to the stable container returned the configs after some tweaking. Is there a safe way I can move my configs over to the canary build?
  20. I'm confused about which version of Duplicati the container is running. From what I understand, they made a lot of performance improvements a few months ago as far as browsing your backups goes, but I am still getting terrible results that seem to go back and forth with each update of the container. On average, after hitting "restore files," it takes about 8-10 minutes to finally be presented with a directory structure I can browse. It then takes 2-3 minutes for it to think on every folder I drill down into. With some updates it gets better, as in it only takes a couple of minutes to present the directories and then about thirty seconds to drill down into each folder. But then, like with the latest update, it's back to taking several minutes again. Throughout all of this, the actual version of Duplicati in the container has not changed and is reporting as 2.0.3.3_beta_2018-04-02.
  21. How far back is the version of Duplicati in the container? Looking under "About," I'm seeing a version dating back to last August. Is this accurate? If so, what kind of time window are we looking at for the Docker container to be on the version that was released today? There have been some huge performance improvements in the latest version that I have been waiting months for--namely the speed at which you can browse directories within your backups. Currently I am getting backups, but I wouldn't be able to use them if I needed them, since it literally takes like 15 minutes to drill down into each individual directory. Getting this update will give me some peace of mind. Using the "check for updates" function within Duplicati itself detects the newest version that was released today. Is it possible to use the in-app update function, or would that break the container? Edit: It's occurred to me that hitting "download now" may not be a self-updater, but may instead take you to a download page to get an installer, which I know would be useless in this case. I haven't tried it yet in case it actually is a self-updater and would break the container.
  22. Stumped by such a simple and obvious thing. Thanks for pointing it out! It didn't like that I left the host path for the watch directory blank. I removed that path entry entirely and it went through. Strange, since this was the same template, blank watch path included, that I used to install it the first time, when it worked.
  23. This happened a while back; at the time I just installed a different version, which turned out to be more to my liking anyway. But now it has happened again--this time with the one by jlesage. After removing the orphan and re-installing, it is immediately orphaned again. What does it mean when this happens?
  24. In the Dashboard tab, we get a little summary list of the users and how many shares they have read and write access to. However, unless I am missing something, the only way to see which shares they have read/write access to is to look into each share individually, which is a bit tedious. I think it would be useful to have this information also present in the Users tab, with more detail available when you click on a user. I'd like to be able to click on one of my users and see a detailed list of the shares they have access to and what permissions they have on each.