[Support] Linuxserver.io - Sonarr



Everything else is working just fine, including a ton of other dockers.

 

Seeing a lot of this now in the log:

 

--

[v2.0.0.5344] System.Net.WebException: Error: ConnectFailure (No route to host): 'http://skyhook.sonarr.tv/v1/tvdb/shows/en/280494' ---> System.Net.WebException: Error: ConnectFailure (No route to host) ---> System.Net.Sockets.SocketException: No route to host

--

 

Diags attached.

 

 

ffs2-diagnostics-20200331-1529.zip


So I came home tonight and every single docker was stopped.  Every single one's last two entries in the logs show something about getting the TERM signal and being KILLed.

 

Every one started up with no errors (except for Sonarr of course).

 

So maybe a bigger issue?

  • 3 weeks later...

Hello everyone,

After a couple of retries I finally managed to get Sonarr and Deluge to work together.

Now download and import are working just fine.

 

What I can't accomplish is deleting the download from Deluge once it's imported.

I set the option in Sonarr, but unfortunately it doesn't do a thing.

Do I need to configure something else?

  • 3 weeks later...

Experiencing the same "Not Available" after having to manually reconstruct my docker templates following a corrupted-boot-flash disaster. It could very possibly be something on my end, but seeing someone else report the same makes me think otherwise.

 

Edit: I waited a bit and performed a check for all updates and it went away. Oh well.

Edited by veruszetec

I'm hoping someone can give me some advice on using Sonarr with a seedbox.  I had Sonarr, Deluge, and Jackett all working perfectly fine on my unraid server via dockers.  However, I think Deluge was taxing my system, or specifically my cache drive, so I decided to trial a seedbox.  I have everything working: Deluge and SyncThing running on the seedbox, and the Sonarr, Jackett, and SyncThing dockers on my unraid server.

 

My problem is getting the files from the seedbox to my unraid server.  I do have it working via SyncThing, but here's the problem: SyncThing downloads to the unraid server and Sonarr picks up the files and moves them to the desired movie or TV folder, but if I then delete the files from the folder that SyncThing downloads to, SyncThing on my seedbox will download those files to my unraid server again.  I can't figure a way around this.  Thought someone here might have a better idea.

 

My other idea would be to buy an extra hard drive and use two cache drives.  Problem is, I'm not sure if I can have the dockers all installed on one cache drive but have Deluge download and seed the files from the other cache drive.  That way my dockers are never impacted by all the torrenting.

 

Suggestions?  

  • 2 weeks later...

Good day all:

 

I'm having an issue with Sonarr not auto-importing files downloaded by Sabnzbd. When the download ends, Sonarr says "No files found are eligible for import in /downloads..." but when I go and do it manually, it works fine, so I know it's not a path issue. The only thing I've noticed is that this happens with files that come inside a correctly named folder but are themselves named in a numbers-only format. Again, when manually importing, I don't have to make Sonarr recognize or identify anything; I just select the files to manually import and done.
Can someone point me to what I may need to do or check?

Thanks in advance.

 

Edit: This only happens in the Sonarr docker. I had a Windows VM running Sonarr and it would auto-import everything, so could it be a docker bug?
 

Edited by Kaos809
3 hours ago, Kaos809 said:

it works fine, so I know it's not a path issue.

You say that, but I'll almost guarantee it's a path issue.

 

Compare the download location mappings for sabnzbd and sonarr. Both the host side AND the container side must be identical on both.

 

If in doubt, post the docker run commands from both sab and sonarr.
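The comparison being suggested can be sketched like this (the paths below are hypothetical placeholders, not taken from anyone's actual template): when a download finishes, sab reports it to Sonarr using sab's container-side path, so both halves of the downloads mapping have to line up.

```python
# Hypothetical volume mappings, as they would appear in the two docker run
# commands as "-v host:container". These values are placeholders.
sab_downloads    = ("/mnt/user/downloads", "/downloads")  # (host path, container path)
sonarr_downloads = ("/mnt/user/downloads", "/downloads")

# Sab reports finished downloads using ITS container path; Sonarr can only
# resolve that path if its own mapping is identical on both sides.
if sab_downloads == sonarr_downloads:
    print("mappings match")
else:
    print("mismatch: Sonarr will fail to find sab's completed downloads")
```

If either the host side or the container side differs between the two containers, Sonarr ends up looking for a path that doesn't exist inside its own container, which is exactly the "No files found are eligible for import" symptom.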


I'm hoping someone might be able to provide some assistance on the following

I’m having an issue with Sonarr not copying all the files over from a downloaded folder. I have the Sonarr docker running on my local server and Deluge on a seedbox. I’m also using Syncthing to move the data from the seedbox to my local server.

What’s happening is, as an example: I downloaded Space Force Season 1. Sonarr is only copying two episodes from the downloaded folder on my local server even though there are about 10 episodes. When I check the debug log I see the error below. This happens whenever there is more than one episode. I’m not sure if the issue is Sonarr trying to grab the files while they are still being copied to the synced folder, or some type of rights issue on the folder.

0-5-29 14:08:31.0|Error|DownloadedEpisodesImportService|Import failed, path does not exist or is not accessible by Sonarr: /data/Seedbox_Downloads/Space.Force.S01.1080p.NF.WEB-DL.DDP5.1.x264-NTG
20-5-29 14:09:58.7|Debug|X509CertificateValidationPolicy|Certificate validation for https://85.17.64.50/deluge/json failed. RemoteCertificateChainErrors.

  • 2 weeks later...
  • 3 weeks later...

Just wondering if there's going to be a version 3 update at some point? There's some functionality in version 3 that's not in 2 that I would like to have... just curious...

 

Is there a way I can keep version 2 and try the version 3 preview? If so, how would I go about doing so?

Edited by DigitalDivide
forgot something

I seem to be having some strange issue with my /tv folder mapping that is preventing Sonarr from creating new folders or something.

 

So I recently replaced a 4TB drive with an 8TB drive and the rebuild process took about a week. During that time, something happened and both Sonarr and Radarr flagged lots of titles as missing. I knew they were there, as I had just watched one of them days before, but I suspect they were stored on the drive I had replaced.

 

Even though unraid tells me the contents of the disk are emulated, I was unable to access the disk and (like I said) things on it were not accessible from Sonarr/Radarr.

 

During this rebuild process I also noticed Sonarr/Radarr being unable to import newly added things. It was fine with replacing items that I already had (and it already had access to), but importing new items from Sabnzbd was throwing up an error. I assumed it was because of the disk that was being rebuilt, so I waited.

 

Once the rebuild of the data was complete, I restarted unraid and Sonarr/Radarr were able to see the things they had previously marked as missing.

 

The thing is, during this time I noticed that in the System section of Sonarr/Radarr, it was not showing the media folder for them. Here you can see the folders I have mapped right now in the Sonarr UI (attached Image 003 screenshot), but in unraid's docker editor page (attached Image 004 screenshot), you can see I have the /tv folder mapped as well (it just isn't showing up in the Sonarr UI, and I think that is causing the importing issues).


 

Image 003.png

Image 004.png

 

Here's a snippet from the log of when it attempts to import a new item...

 

____________

[v3.0.0.348] System.IO.DirectoryNotFoundException: Could not find a part of the path.
  at System.IO.File.Move (System.String sourceFileName, System.String destFileName) [0x000e3] in <04750267503a43e5929c1d1ba19daf3e>:0
  at NzbDrone.Common.Disk.DiskProviderBase.MoveFileInternal (System.String source, System.String destination) [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Disk\DiskProviderBase.cs:232
  at NzbDrone.Mono.Disk.DiskProvider.MoveFileInternal (System.String source, System.String destination) [0x00076] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Mono\Disk\DiskProvider.cs:170
  at NzbDrone.Common.Disk.DiskProviderBase.MoveFile (System.String source, System.String destination, System.Boolean overwrite) [0x000e1] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Disk\DiskProviderBase.cs:227
  at NzbDrone.Common.Disk.DiskTransferService.TryMoveFileTransactional (System.String sourcePath, System.String targetPath, System.Int64 originalSize, NzbDrone.Common.Disk.DiskTransferVerificationMode verificationMode) [0x0008f] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Disk\DiskTransferService.cs:490
  at NzbDrone.Common.Disk.DiskTransferService.TransferFile (System.String sourcePath, System.String targetPath, NzbDrone.Common.Disk.TransferMode mode, System.Boolean overwrite, NzbDrone.Common.Disk.DiskTransferVerificationMode verificationMode) [0x003cc] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Disk\DiskTransferService.cs:312
  at NzbDrone.Common.Disk.DiskTransferService.TransferFile (System.String sourcePath, System.String targetPath, NzbDrone.Common.Disk.TransferMode mode, System.Boolean overwrite, System.Boolean verified) [0x0000e] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Disk\DiskTransferService.cs:196
  at NzbDrone.Core.MediaFiles.EpisodeFileMovingService.TransferFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Tv.Series series, System.Collections.Generic.List`1[T] episodes, System.String destinationFilePath, NzbDrone.Common.Disk.TransferMode mode) [0x00129] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\MediaFiles\EpisodeFileMovingService.cs:119
  at NzbDrone.Core.MediaFiles.EpisodeFileMovingService.MoveEpisodeFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Parser.Model.LocalEpisode localEpisode) [0x0005f] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\MediaFiles\EpisodeFileMovingService.cs:81
  at NzbDrone.Core.MediaFiles.UpgradeMediaFileService.UpgradeEpisodeFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Parser.Model.LocalEpisode localEpisode, System.Boolean copyOnly) [0x0017c] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\MediaFiles\UpgradeMediaFileService.cs:76
  at NzbDrone.Core.MediaFiles.EpisodeImport.ImportApprovedEpisodes.Import (System.Collections.Generic.List`1[T] decisions, System.Boolean newDownload, NzbDrone.Core.Download.DownloadClientItem downloadClientItem, NzbDrone.Core.MediaFiles.EpisodeImport.ImportMode importMode) [0x0028e] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\MediaFiles\EpisodeImport\ImportApprovedEpisodes.cs:111

____________

 

I think this was due to some corruption. I used the webui to check my disks and fixed 2 of them that reported errors. I'm no longer having issues importing items with these (I still don't see the tv/movie folder in the Sonarr/Radarr webui though, but maybe that's normal).

 

thanks for the great work everyone!

Edited by Endda

I'm running into an issue that I'm trying to remediate, but I'm getting stuck. I am running the normal combo of NZBGet, Radarr and Sonarr and I had everything "working" appropriately, but I noticed that whenever NZBGet would finish a file it would take Radarr or Sonarr forever to finish moving it.

From everything I searched for, it looked like it was a hardlink issue and that the best practice was to point the mount for each docker container to the root (/mnt/user) and then navigate from there within each application to the appropriate shares. Once I did this, the speed issue was solved, but now I'm dealing with another problem (I'll stick with Sonarr since I'm in this thread).

Now, when the download finishes in NZBGet (all work done on the cache drive, appdata share set to cache drive ONLY), it moves the file to the TV share, but it also puts it in a new folder on the cache drive called TV. I have the TV share set to cache drive "No", so I am not sure why it's creating a folder on the cache drive called TV and then the subsequent folder under there. I'm wondering if I need to re-setup my shares.

 

I'll edit this with a couple of pictures once I get this approved. Thanks!

 

2020-07-07 14_52_13-Mimir_Docker.png

2020-07-07 14_52_48-Mimir_Shares.png

2020-07-07 14_53_22-Mimir_Main.png

Edited by slackmountain
added pictures
13 hours ago, slackmountain said:

I'm running into an issue that I'm trying to remediate, but I'm getting stuck. [...] I am not sure why it's creating a folder on the cache drive called TV and then the subsequent folder under there.

The reason for this is because you're using a mapping of /data <--> /mnt/user

 

The way every OS (including Unraid) works is that when moving a file, it first attempts to rename it to accomplish the move.  In this case, you're attempting to move a file from /data/completed/... to /data/TV Shows/...

 

Because both references are within /data (the same mount point), the rename will succeed.  This has the result that the file never actually "moves" and may in fact be technically outside of the constraints imposed by included / excluded disks, the use cache settings etc.

 

Your solution is to either set the TV shows share to use cache:yes, or to go back and use separate references for the downloads and for the media share.  I wouldn't consider the time it takes to physically move the file from the cache drive to the array as being "forever".  If you've just spent the time to download a file, the extra time required to move it into its final resting place shouldn't be a major issue.
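The rename-versus-copy mechanism described above can be sketched roughly like this (illustrative only, not Sonarr's actual code): with a single /data <--> /mnt/user mapping, everything is one mount point from the container's perspective, so the rename succeeds and no data is ever copied.

```python
import errno
import os
import shutil


def move_file(src, dst):
    """Rough sketch of how a 'move' typically works (not Sonarr's actual code)."""
    try:
        # Same mount point (e.g. both under /data): the rename is instant and
        # the data never physically moves -- which is why Unraid's use-cache
        # and included-disk settings never get applied to the file.
        os.rename(src, dst)
        return "renamed"
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        # Different mount points: the kernel refuses the rename with EXDEV
        # ("Invalid cross-device link"), so the file must be copied for real
        # and the original deleted afterwards.
        shutil.copy2(src, dst)
        os.remove(src)
        return "copied"
```

With separate /downloads and /tv container mappings, the rename fails with EXDEV and a genuine copy happens, which is slower but means the target share's settings are honored.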

9 hours ago, Squid said:

The reason for this is because you're using a mapping of /data <--> /mnt/user [...]

Yeah, thanks for double-checking my work; I decided to just deal with the file transfer delay. I think long term it won't be a big deal; it just becomes an issue when I'm trying to do mass file upgrades (from 720p to 1080p) on all shows in all my folders. I'll just slowly do these a folder at a time rather than hitting the entire share. Thanks again!

2 hours ago, slackmountain said:

Yeah, thanks for double checking my work and I decided to just deal with the file transfer delay. [...]

Real quick followup. I went out and picked up a new SSD (Samsung 860 EVO 1 TB) since my 250 GB Corsair SSD was 8 years old and kept filling up whenever I was grabbing large files. There is now a night and day difference in performance, so I'm wondering if there was some bottleneck/issue with the old SSD. Nice to see it humming along now and barely using the CPU at all even though it was pegging every core previously!
