MethodCall

Members

  • Posts: 29

About MethodCall

  • Birthday: 01/25/1980
  • Gender: Male
  • Location: Chicagoland

MethodCall's Achievements

  • Rank: Noob (1/14)
  • Reputation: 0

  1. Yup. It was the split level. I *clearly* hadn't wrapped my head around the problem sufficiently. I wasn't understanding that "Hey, Episode 22 can't copy to the new drive because Episodes 1-21 are already on one of the other drives in the array and the split level won't allow E22 to be copied to another damn drive." Arrrgh. Well spotted, trurl. Well spotted, indeed. Edit (for posterity): I verified this by manually copying Episodes 1-22 to the new drive (to join E22, which I had already manually copied). Once I did that, Sonarr was able to import E23 without issue (via manual import, because the automatic import had already failed for that episode). (The allocation sketch after this list walks through the same logic.)
  2. I wouldn't expect the split level to be an issue. I've been rocking this split level through the addition of several drives and it has always split correctly. As for minimum free, it is currently set to 0KB, which is what it has been set to for the life of my server. At that, on the share's own compute stats, it says there's over 2TB free on the new drive.
  3. Split level for that share is set to top two (the 3rd choice in the dropdown) and the allocation method is set to Most Free. (Both settings were like this prior to adding the new drive, as well.)
  4. As best I can tell, yes, the user share in question can use the new disk. Disks 1 through 8 have been a part of the share for a long time, Disk 9 is excluded, and then Disk 10 is the new drive. In the settings for this user share, Included Disks is set to All and Excluded Disks is set to Disk 9. If I click the compute link in the Size column on the Shares tab, it shows the space used by the one file I have manually copied over.
  5. I've dug around for a good bit now trying to find a previous instance of my issue so, if my Google-Fu has just failed me, do please point me to previous solutions if I'm repeating an issue that has already been solved! So, what I'm getting is this error:

         Couldn't import episode /downloads/complete/sonarr/DefinitelyTheActualFilename.mkv: Disk full. Path

     With this tracelog:

         System.IO.IOException: Disk full. Path
           at System.IO.File.Move (System.String sourceFileName, System.String destFileName) [0x00116] in <b0e1ad7573a24fd5a9f2af9595e677e7>:0
           at NzbDrone.Common.Disk.DiskProviderBase.MoveFileInternal (System.String source, System.String destination) [0x00000] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Common\Disk\DiskProviderBase.cs:232
           at NzbDrone.Mono.Disk.DiskProvider.MoveFileInternal (System.String source, System.String destination) [0x00076] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Mono\Disk\DiskProvider.cs:170
           at NzbDrone.Common.Disk.DiskProviderBase.MoveFile (System.String source, System.String destination, System.Boolean overwrite) [0x000e3] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Common\Disk\DiskProviderBase.cs:227
           at NzbDrone.Common.Disk.DiskTransferService.TryMoveFileTransactional (System.String sourcePath, System.String targetPath, System.Int64 originalSize, NzbDrone.Common.Disk.DiskTransferVerificationMode verificationMode) [0x0008f] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Common\Disk\DiskTransferService.cs:490
           at NzbDrone.Common.Disk.DiskTransferService.TransferFile (System.String sourcePath, System.String targetPath, NzbDrone.Common.Disk.TransferMode mode, System.Boolean overwrite, NzbDrone.Common.Disk.DiskTransferVerificationMode verificationMode) [0x003ce] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Common\Disk\DiskTransferService.cs:312
           at NzbDrone.Common.Disk.DiskTransferService.TransferFile (System.String sourcePath, System.String targetPath, NzbDrone.Common.Disk.TransferMode mode, System.Boolean overwrite, System.Boolean verified) [0x0000e] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Common\Disk\DiskTransferService.cs:196
           at NzbDrone.Core.MediaFiles.EpisodeFileMovingService.TransferFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Tv.Series series, System.Collections.Generic.List`1[T] episodes, System.String destinationFilePath, NzbDrone.Common.Disk.TransferMode mode) [0x0012c] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\MediaFiles\EpisodeFileMovingService.cs:119
           at NzbDrone.Core.MediaFiles.EpisodeFileMovingService.MoveEpisodeFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Parser.Model.LocalEpisode localEpisode) [0x0005e] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\MediaFiles\EpisodeFileMovingService.cs:81
           at NzbDrone.Core.MediaFiles.UpgradeMediaFileService.UpgradeEpisodeFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Parser.Model.LocalEpisode localEpisode, System.Boolean copyOnly) [0x0017c] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\MediaFiles\UpgradeMediaFileService.cs:76
           at NzbDrone.Core.MediaFiles.EpisodeImport.ImportApprovedEpisodes.Import (System.Collections.Generic.List`1[T] decisions, System.Boolean newDownload, NzbDrone.Core.Download.DownloadClientItem downloadClientItem, NzbDrone.Core.MediaFiles.EpisodeImport.ImportMode importMode) [0x00272] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\MediaFiles\EpisodeImport\ImportApprovedEpisodes.cs:107

     I had recently filled up my array so I added another 3TB Red into it. It's got space for days, so the "disk" definitely isn't (literally) full. At that, Sonarr can see (and was able to rename) an episode that I manually copied over to that new drive, so the drive is definitely part of the user share into which Sonarr imports files from SAB. I am at a loss here. I can't seem to find the "convince Sonarr that there's a couple terabytes waiting for it" button. Any ideas?
  6. It seems as though everything is back to normal operation. Man, I can't thank you enough, Johnnie. There is pretty much *zero* chance I would have figured out the problem without ya. If I could sling more upvotes your way, I would...but I have literally hit the daily limit. Top shelf, son. TOP SHELF.
  7. Yeah, no more assumptions from me. DEFINITELY making sure. 100%.
  8. *facepalm* Hey, Lime... maybe an error message inside the CLI would more effectively communicate that... instead of the broadcast message that gives other-than-true feedback to the user. Just a thought. *sigh* Good info, Bonienl. That is *definitely* something I need to know. Appreciate it!
  9. Interesting. (In a bad way.) Tried to shut down the server with powerdown (instead of the powerdown -r's I've been doing all along) to make absolutely sure it restarted. So I physically went over to the server (since I was gonna have to hit the power switch to turn it back on, obviously). It hadn't shut down. Reconnect with PuTTY... sure enough, still up. Tell it to powerdown again. Nothing. I mean, the CLI is telling me that it's going down, got the broadcast message like usual... but the front of the server's chassis is still a frickin' Christmas tree. No actual shutdown is taking place. So, I kill it by holding down the power switch. It goes down for real. Hit the switch again to power it up... wait a bit for it to POST annnnnnnd... the GUI is accessible again. Now I'm questioning whether ANY of my previous powerdown -r's ever actually rebooted anything. Lesson learned: get visual confirmation on shutdowns. So... delete and re-create my docker image, I'm assuming?
  10. Edited & saved. Rebooted. Reopened docker.cfg after reboot to confirm that the edit took (it had). GUI is *still* clocking. Good grief, what have I done to my server? New diags attached... newbsauce-diagnostics-20180104-1602.zip
  11. Will deleting it via mc be sufficient? As far as re-creating it goes, I don't know how to do that outside of the GUI (which I still can't get back to), but if you can gimme the commands, I'll throw 'em at PuTTY. You think something with Docker is breaking the GUI? I rebooted the box after the balancing; do you want me to reboot after deleting the Docker image, or after deleting and re-creating it?
  12. Damn. No joy. GUI is still clocking infinitely. New Diagnostics ZIP attached. newbsauce-diagnostics-20180104-1534.zip
  13. I did have to nuke some downloads to make space for the process, but I have completed a balance -dusage=75 without an error. To anyone performing this in the future: as that -dusage percentage goes up, so does the time it takes to complete. That last one at 75% took *at least* 20 minutes. (A sketch of running the balance in steps like this appears after this list.) Restarting the server now to see if this did the trick. *crosses fingers*
  14. I have *literally* no idea what that means...but I'll dig into that thread, as you suggest. Thank ya, sir.
  15. Running unRAID 6.3.5. Originally, I was getting the (fantastically unhelpful) "Execution Failed" message when I would try to start any of my Docker containers after stopping them (at that point they seemed to be autostarting successfully on server startup). Following the prescribed methods, I blew away my Docker image file, re-created it, and then re-created all of my Docker containers. Everything seemed to be fine for a bit, then I started to get the "Execution Failed" message again. Now, after restarting the server with no other changes made, I cannot access the GUI any longer. When I try to access it, it just clocks indefinitely (no page appears, no login prompt).

     Docker containers: Couch Potato (based on some of the stuff in the logs, a likely culprit), Radarr, Sonarr, PMS, MySQL, and SABnzbd. All from linuxserver.io, all set to auto-start except for MySQL.

     I can still Telnet into the box without any issue and all of my SMB shares appear to be fully accessible. None of the GUIs for any of my Docker applications come up, either, though they do not clock; they fail immediately. The Docker tomfoolery and the GUI nonsense aren't necessarily related, but that sure seems unlikely. I know far too little about unRAID, Docker, or Linux to troubleshoot this further. Please help if you can! newbsauce-diagnostics-20180104-1314.zip
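
A rough sketch of the allocation behavior discussed in posts 1-5 above (this is not Unraid's actual allocator, just an illustration; the share layout, disk names, and sizes are made up): with a share split at the top two directory levels, anything under an existing Show/Season folder is pinned to whichever disk already holds that folder, so the "Most Free" method never gets a say and a nearly empty new disk can sit unused while the move fails with "Disk full."

    # Illustration only: split-level pinning vs. "Most Free" allocation.
    # Assumes a two-level split (Show/Season XX); not Unraid's real code.
    from pathlib import PurePosixPath

    SPLIT_LEVEL = 2  # directories below this depth must stay on one disk

    def choose_disk(disks, existing_files, new_file):
        """disks: {name: free bytes}; existing_files: {share-relative path: disk}."""
        anchor = PurePosixPath(new_file).parts[:SPLIT_LEVEL]
        # If anything under the same Show/Season already exists, the new file
        # is pinned to that disk regardless of how little space it has left.
        for path, disk in existing_files.items():
            if PurePosixPath(path).parts[:SPLIT_LEVEL] == anchor:
                return disk, "pinned by split level"
        # Otherwise fall back to the "Most Free" allocation method.
        return max(disks, key=disks.get), "most free"

    disks = {"disk1": 50e9, "disk10": 2.9e12}            # disk10 = new, nearly empty drive
    existing = {"Some Show/Season 01/E01.mkv": "disk1"}  # earlier episodes live on disk1
    print(choose_disk(disks, existing, "Some Show/Season 01/E22.mkv"))
    # -> ('disk1', 'pinned by split level'); if disk1 has no room for the file,
    #    the import fails with "Disk full" even though the share shows 2TB+ free.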
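
Post 13 describes stepping the -dusage percentage upward; a hypothetical helper along those lines is sketched below. It assumes the btrfs pool is mounted at /mnt/cache (adjust to your setup) and simply shells out to `btrfs balance start -dusage=N <mount>`; expect each higher threshold to take noticeably longer, as noted in the post.

    # Hypothetical helper: rerun btrfs balance with rising -dusage thresholds.
    # Assumes the pool is mounted at /mnt/cache; run as root.
    import subprocess

    MOUNT = "/mnt/cache"  # assumption: change to your btrfs mount point

    def stepped_balance(steps=(25, 50, 75)):
        for usage in steps:
            # Only data chunks that are <= usage% full get rewritten; higher
            # values touch more chunks and take progressively longer.
            cmd = ["btrfs", "balance", "start", f"-dusage={usage}", MOUNT]
            print("running:", " ".join(cmd))
            subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        stepped_balance()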