Spritzup

Members
  • Content Count

    236
  • Joined

  • Last visited

Community Reputation

1 Neutral

About Spritzup

  • Rank
    Advanced Member
  • Birthday 05/21/1981

Converted

  • Gender
    Male
  • Location
    Canada

  1. Apologies for replying to myself, but what workflow are you guys using to have Unmanic play nicely with Sonarr (specifically V3)? I thought about using the Blackhole setting, but then I lose failed download handling. So I'd love to hear any ideas ~Spritz
  2. Even that won't update it, unfortunately. While Unmanic is awesome (in my test folder it saved me 1TB of space!), its real beauty would be the integration into existing tools. With Sonarr V3 being able to upgrade to x265 (if configured), it will keep downloading "upgrades" for files that are already x265. If Unmanic could simply tag the file in some way, it would be amazing, but I believe @Josh.5 has stated that it's out of the scope of what he wants to do. ~Spritz
  3. According to the Sonarr dev, the only time it will scan the media (and thus update it) is when the filename changes. I've now confirmed with multiple shows that this is indeed the case. The only shows that were updated were ones where the extension changed (for example, mp4 to mkv) and therefore Sonarr reads it as a filename change. Apparently the only way to have Sonarr rescan everything is to delete and re-add the library, which sort of renders Unmanic less useful for a large number of people... which is unfortunate ~Spritz
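For anyone who wants to nudge Sonarr into rescanning a show without resorting to renaming files, Sonarr's v3 API exposes a RescanSeries command you can POST to its command endpoint. A minimal sketch of building that request body (the series ID here is a made-up placeholder; you'd POST the JSON to `http://<sonarr-host>:8989/api/v3/command` with your X-Api-Key header):

```python
import json

def rescan_series_payload(series_id):
    """Build the JSON body for Sonarr v3's RescanSeries command.
    series_id is the internal ID Sonarr assigned the show (visible in
    the series page URL)."""
    return json.dumps({"name": "RescanSeries", "seriesId": series_id})

# Example with a placeholder series ID:
print(rescan_series_payload(42))
```

You could run something like this as an Unmanic post-processing step so Sonarr re-reads the file's codec after conversion, rather than waiting for a filename change to trigger a scan.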
  4. Do you happen to know how often these scheduled library changes run? I have some shows from a few days ago now that still haven't updated. ~Spritz
  5. Compliments to @Josh.5, this thing is a beast! Testing it out right now on a smaller library, and it's averaging a 40-50% reduction in file size. The only issue I'm running into is how to let Sonarr V3 know that the file has been converted to x265? I read the thread and didn't see a solution ~Spritz
  6. *facepalm* Ok, thanks both @johnnie.black and @itimpi. New drive it is. ~Spritz
  7. Thanks @johnnie.black, you're an asset to this forum. I ran the extended SMART test and, assuming I'm reading it right, it does appear that the drive is failing. I've posted it here to have another set of eyes have a look, in case I missed something. ~Spritz dyson-smart-20200706-0824.zip
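For anyone else reading their own SMART reports, a handful of attributes are the usual smoking guns. A quick sketch of the kind of check I mean (the raw values below are made up for illustration; the ID-to-name mapping follows common SMART conventions, not any one vendor's table):

```python
# SMART attribute IDs that most often indicate a failing drive.
CRITICAL_IDS = {
    5: "Reallocated_Sector_Ct",
    187: "Reported_Uncorrect",
    197: "Current_Pending_Sector",
    198: "Offline_Uncorrectable",
}

def failing_attrs(raw_values):
    """raw_values: {attribute_id: raw_value} as read off a
    `smartctl -A /dev/sdX` report. Nonzero raw values on the critical
    IDs are the usual red flags."""
    return {CRITICAL_IDS[i]: v for i, v in raw_values.items()
            if i in CRITICAL_IDS and v > 0}

# Hypothetical failing drive: reallocated and pending sectors nonzero.
print(failing_attrs({5: 16, 9: 33000, 194: 38, 197: 8}))
```

Power-on hours (9) and temperature (194) being large is normal; it's the reallocation/pending counts that mean the drive is remapping or can't read sectors.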
  8. Thanks for the follow up. I mean, it could be the backplanes, it just seems like it would be really odd to have all of them start acting up at the same time. As for your suggestion re: the power cable, the system has the backplanes load-balanced across separate lines on the PSU, without the use of any extensions/splitters/etc. So while possible, I think power is unlikely. Fun fact though: older Norco 4224s don't support the 3.3V SATA spec, and you need to cover that pin on newer SAS/SATA drives. ~Spritz
  9. Thanks for the reply. Yeah, the system(s) are on a UPS.
  10. So I've been scratching my head about this one. It seems that I will randomly get read errors on random disks (sometimes two at once). Initially I thought it was an issue with the drives themselves, so I removed them from the array, ran SMART tests on them, and everything came back good. I re-added them to the array, rebuilt the data, and they seemed fine... until another random drive threw read errors. I'm trying to work out what's common among the drives and test those shared components. The weird thing is that they're all on different backplanes and therefore different cables. Here's what I've done so far:
      - Replaced the cables going from the HP SAS Expander to the 8087/8088 converter
      - Replaced the cables going from the 8087/8088 converter to the LSI 9205-8e HBA
      The next thing I was thinking was to update the firmware on the 9205, as it appears to be from 2011... but it seems older firmware is preferred on these cards? I was also considering running memtest to rule out memory, though the challenge there is finding the downtime. Does anyone else have any thoughts or suggestions? I've included the diagnostics here as well ~Spritz dyson-diagnostics-20200705-1250.zip
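When errors wander across disks like this, one thing that helps is tallying the read errors per device from the syslog to see whether they really are evenly spread or secretly cluster on one controller port. A rough sketch (the log lines here are illustrative, not exact Unraid output):

```python
import re
from collections import Counter

def count_read_errors(lines):
    """Tally read errors per disk from syslog-style lines so you can
    see whether errors cluster on one device or wander across several."""
    hits = Counter()
    for line in lines:
        m = re.search(r"(disk\d+).*read error", line)
        if m:
            hits[m.group(1)] += 1
    return dict(hits)

# Made-up excerpt: errors on two different disks, as described above.
log = [
    "kernel: md: disk2 read error, sector=81920",
    "kernel: md: disk5 read error, sector=4096",
    "kernel: md: disk2 read error, sector=81928",
]
print(count_read_errors(log))
```

If the tally skews heavily toward disks sharing one component (an expander port, an HBA channel, a PSU rail), that's the component to swap next.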
  11. Just a quick update. Rebuilt the disks and everything looks fine. Thanks again for all the help ~Spritz
  12. Rebuilding now, thanks for the help!
  13. Thanks @johnnie.black, I appreciate the response. So I follow the procedure from the wiki, namely unassign and then reassign the disks to have the array rebuild? If so, can I rebuild both at the same time? Also, could the XFS errors have caused the write errors, thereby causing the disks to become disabled? I suppose ultimately it doesn't really matter, so long as it doesn't happen again. ~Spritz
  14. Hey all, so Mr. Murphy reared his ugly head today. I had just finished telling the better half that the server rebuild was done and that it was time to tidy up my "rack" area. While doing this, I realized that I had plugged the server into the wrong spot on the UPS, so I did a clean power down, moved the plug, and powered back on... to two disks saying "Unmountable: No file system". I searched the forums, followed this thread, and appeared to get the errors fixed. I rebooted, and now I have two disabled disks with the content emulated... and I'm out of ideas. Any help or guidance would be appreciated. I've attached the diagnostics to this post, as well as the output from the Enhanced System Log. ~Spritz dyson-diagnostics-20200607-1856.zip dyson-syslog-20200607-1858.zip
  15. So I'm an idiot and forgot to delete the container after using it. It was fine for a couple of months, and then somehow it started and ran again (just prior to the update that fixed this issue) and overwrote my disks. The thing is, I edited the XML to install everything on an NVMe drive I have... So my question is: how do I get it to boot off the proper drive, rather than the newly created one? ~Spritz EDIT - I may have resolved it by removing the install disk that gets created and simply leaving the Clover image and the physical disk. Anything wrong with my solution?
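In case it helps anyone hitting the same thing: the per-device `<boot order/>` elements in the VM's libvirt domain XML are what decide which disk the VM tries first, so after removing the stale install disk it's worth eyeballing the remaining order. A rough sketch of checking it (the XML here is a trimmed, hypothetical domain definition with placeholder paths, not my actual config):

```python
import xml.etree.ElementTree as ET

# Trimmed, hypothetical libvirt <domain> snippet: a Clover image plus a
# passed-through physical disk, each with a per-device boot order.
DOMAIN_XML = """
<domain>
  <devices>
    <disk type='file' device='disk'>
      <source file='/mnt/user/domains/clover.img'/>
      <boot order='1'/>
    </disk>
    <disk type='block' device='disk'>
      <source dev='/dev/disk/by-id/nvme-example'/>
      <boot order='2'/>
    </disk>
  </devices>
</domain>
"""

def boot_order(xml_text):
    """Return (source path, boot order) pairs, lowest order first,
    so you can see which device the VM will try to boot."""
    root = ET.fromstring(xml_text)
    pairs = []
    for disk in root.iter('disk'):
        src = disk.find('source')
        boot = disk.find('boot')
        path = src.get('file') or src.get('dev')
        pairs.append((path, boot.get('order')))
    return sorted(pairs, key=lambda p: p[1])

print(boot_order(DOMAIN_XML))
```

With the Clover image at order 1 and the physical disk at order 2, Clover loads first and then hands off to the OS on the physical disk, which matches the fix described above.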