wickedathletes

Community Developer

Posts posted by wickedathletes

  1. 7 hours ago, trurl said:

    Powerdown plugin is deprecated.

     

    Before rebooting, you should be trying to get your diagnostics. Type "diagnostics" at the command line then shut down and get the diagnostics zip from your flash drive and post it.

     

    Unfortunately, this time it was completely locked out.

     

    Any reason why shutdown -r now or reboot wouldn't work then? And if there is a better way to reboot the server when the UI locks up, I would love to know.

     

  2. So the GUI locks up on occasion for me. What I am wondering is why I can't reboot it from the command line anymore. This isn't new to 6.3.5; it has been happening for a long time for me. I do have the powerdown plugin installed.

     

    I try:

     

    reboot

    powerdown -r now

    shutdown -r now

     

    They all appear to "work" and print the "System restart now" message, but nothing actually happens.
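
    For reference, this is roughly what I run from SSH when the GUI is gone. It's only a sketch of my own routine; the sysrq part is a last resort that skips a clean array stop and needs sysrq enabled in the kernel, so treat it with care:

    # grab diagnostics first so there is something to post
    diagnostics

    # ask for a normal reboot
    shutdown -r now

    # if nothing happens after a few minutes, force it via the kernel:
    # sync dirty data, remount filesystems read-only, then reboot immediately
    echo s > /proc/sysrq-trigger
    echo u > /proc/sysrq-trigger
    echo b > /proc/sysrq-trigger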

  3. 2 hours ago, Squid said:

    Personally I wouldn't exclude the drives from those shares.  While the media within the share(s) isn't going to change, there is a possibility that you may update the metadata (eg: Kodi's .nfo files) associated with them.  Excluding would mean that the .nfo files would end up on a different disk than the media files (assuming of course you have split levels set to keep everything together).

     

     

    So what is the best way to make the drives ignored when adding new movies? Basically, if I have a drive set to a 10GB minimum, it still sometimes tries to push files to it, causing the write to fail or, even worse, leaving a partial file. I just want the drive ignored unless there is manual intervention (like if I update my metadata with Ember Media Manager). The drive(s) won't change for a LONG time; they are 4TB drives full of only 1080p movies.

  4. If I have some full drives of data that will not change, is it better to exclude them from a share, or better to set my share's minimum allocation amount higher than the space available on the drive? I seem to remember the minimum did not do what I previously intended.

     

    Currently, I have been keeping all drives distributed evenly (space-wise), but it's a pain in the ass and probably the incorrect way to do things for drive health and usage. So I moved my data around, and I have 6 of 8 drives that can be "locked", so to speak (the data won't change). I still need my files from those drives accessible (Movies, TV Shows, etc.), but I don't want unRAID writing to them anymore.

  5. 4 minutes ago, johnnie.black said:

    Diagnostics you posted only show issues on ata7, post the ones with errors on ata1.

     

    Thankfully, ata1 just had the problem. Attached.

     

    Last night I did replace the cable on ata7 hoping that would fix it, but it sometimes works for a week or two, sometimes a day. The thing that threw me was that it seems to bounce between the two.

     

    Definitely not the drives, since after putting a new drive there, the new drive got the same error.

     

    Thank you again.

     

    hades-diagnostics-20170401-1417.zip

  6. 8 hours ago, johnnie.black said:

    Number 1 suspect is the Marvell controller where those disks are connected; they are known to have issues, especially with VT-d enabled.

    I assume you mean on the MoBo? I forgot to mention I have a SATA card as well, and of course one drive is plugged into that and one is plugged into the MoBo.

     

    So that leaves me with:

     

    Cable

    Hot Swap Bay port

    PSU?

  7. So the error stayed at the ata location and didn't move with the drive when I moved the two drives to other spots. If that's the case, what would I want to look at?

     

    PSU?

    Cable?

    HotSwap Bay?

    Sata port on MoBo?

     

    Obviously it isn't the drive, although with two having issues I'm still unclear. I have seen both drives acting up at once, but most of the time it's just one location.

  8. So is there a way to test if this could possibly be a PSU issue? Have I reached my max finally? Could I be overloading a source because of the hot swap bay powering 5 drives at once?

     

    PSU: Antec NEO ECO 520C (520W)
    +3.3V@24A, +5V@24A, +12V@40A, plus the -12V and +5VSB rails

     

    MoBo: ASRock Z68 Extreme4 LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 ATX Intel Motherboard

     

    CPU: Intel Core i7-2600K Sandy Bridge Quad-Core 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W BX80623I72600K Desktop Processor Intel HD Graphics 3000

     

    Memory: G.SKILL Ripjaws X Series 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1866 (PC3 14900) Desktop Memory Model F3-14900CL9Q-16GBXL

     

    Hot Swap: SUPERMICRO CSE-M35T-1B 3 x 5.25" to 5 x 3.5" Hot-swap SATA HDD Trays

     

    SATA PCI Controller
    IO Crest 4 Port SATA III PCI-e 2.0 x1 Controller Card Marvell Non-Raid with Low Profile Bracket SI-PEX4006

    HDs:
    WDC WD20EZRX-00D8PB0
    Hitachi HDS5C4040ALE630 (parity)
    WDC WD20EARS-00MVWB0
    WDC WD40EFRX-68WT0N0 (throwing errors)
    TOSHIBA MD04ACA400
    WDC WD40EFRX-68WT0N0 (throwing errors)
    WDC WD40EZRZ-00WN9B0
    TOSHIBA MD04ACA400

    HGST HDS5C4040ALE630
    Crucial_CT750MX300SSD1 (cache)

     

    I do have a backup PSU from my old gaming PC that I could test, but I don't want to waste my time if it wouldn't be the issue:
    SeaSonic X Series X-850 (SS-850KM3 Active PFC F3) 850W ATX12V v2.3 / EPS 12V v2.91 SLI Ready CrossFire Ready 80 PLUS GOLD Certified Full Modular Active PFC Power Supply New 4th Gen CPU Certified Haswell Ready
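
    Back-of-the-envelope numbers I put together (rough estimates only, nothing measured):

    +12V rail:      40A x 12V = 480W available
    drive spin-up:  9 spinners x roughly 1.5-2A each on +12V = about 14-18A (~170-220W), worst case at power-on
    CPU:            95W TDP, also fed from +12V through the EPS connector

    So on paper even worst-case spin-up plus the CPU stays comfortably under the 40A / 480W on +12V, but I suppose an aging unit could still sag under that load.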

     
  9. I have 2 drives periodically giving me the following error:

    ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x400000 action 0x6 frozen

    The two drives doing this are my two newest drives (ordered about 3 months apart, so not a batch issue; WD Reds). These replaced drives that were doing the same thing, as far as I remember... it's been a few months. That said, could other things cause this, like bad cables, connectors, etc.?

     

    edit: added diagnostics.

     

    hades-diagnostics-20170331-1542.zip
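
    In case it helps anyone searching later, this is the kind of thing I run on the server to pull the ata errors out of the syslog (just my own sketch; adjust the path if your syslog lives elsewhere):

    # show the most recent libata errors and which ata port they hit
    grep -iE 'ata[0-9]+(\.[0-9]+)?: (exception|failed command|hard resetting)' /var/log/syslog | tail -n 20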

  10. First time I have ever seen this, so just asking for some help because I have no idea...

     

    Your server has issued one or more call traces. This could be caused by a Kernel Issue, Bad Memory, etc. You should post your diagnostics and ask for assistance on the unRaid forums

    hades-diagnostics-20170319-0918.zip

     

    Would this type of error occur because of the issue with Disk 1 that I see in the log?

  11. 25 minutes ago, Squid said:

    Generally it's because you don't have the host and container path within the dl client matching exactly the host and container path in Radarr.

    Also, are you sure you don't have a typo or something that's causing Radarr to move the completed downloads to inside the container?

     

     

    It has worked on about 50 files and just randomly failed on 2 others. So the path should be OK (obviously it might not be). But in Radarr I have 0 paths set anyway. I am sure I am doing something wrong; I just can't seem to find it, that is for sure.

    1.png

    2.png

    3.png

    4.png

    5.png

  12. 3 hours ago, Squid said:

    First and easiest thing to do is to look at whatever logging level you are able to set within Radarr and lower it.  Since it's in development, it might be actively logging "Wow it's a new second and I'm still alive, let's log that".

     

     

    Logging is set to INFO (their lowest level). That said, I did notice something, and hopefully Radarr will have some more info (on their reddit).

     

    This has happened to me twice, and both times (whether related or not) I am getting heavy errors in my logs after moving a movie (with Radarr thinking it didn't). So over and over I get this:

     

    Import failed, path does not exist or is not accessible by Radarr: /downloads/complete/movies/BLAH.BLAH 

     

    or this

     

    Couldn't process tracked download Killing.Fields.2011.REPACK.MULTi.LiMiTED.1080p.BluRay.x264-ZEST: Movie with ID 1500 does not exist

     

    That said, clearing the log doesn't release the space.
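
    My guess (not confirmed) is that the log file is still held open by the Radarr process, so clearing it from the UI doesn't give the space back until the container restarts. A rough way to check from the host, assuming lsof is available:

    # list files that have been deleted but are still held open by a process
    lsof +L1 2>/dev/null | grep -i radarr

    # restarting the container releases any such handles
    docker restart radarr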

  13. 3 hours ago, trurl said:

    And as with everything Linux, it is critical that you get upper/lower case exactly right. If you did have /mnt/User/downloads instead of /mnt/user/downloads, that was your problem.

     

     

    Ya, I wish that catch was it, but it's still happening. Is there a way to see which container is taking the space so I know what to narrow in on? I am just assuming it's Radarr since it was the newest container added.

     

    Figured out how:

     

    docker ps -s

     

    and it's Radarr... still have no idea how, though:

    7cd26415cdaf        linuxserver/radarr       "/init"             11 hours ago        Up 3 hours          0.0.0.0:7878->7878/tcp                     radarr              24.9 GB (virtual 25.25 GB)
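
    For anyone else hunting the same thing, these are the commands I poked around with next (rough sketch; "radarr" is just my container name):

    # list files added or changed in the container's writable layer
    docker diff radarr

    # then check the size of whatever directory stands out, for example:
    docker exec radarr du -sh /tmp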
     

  14. Just looking for a second pair of eyes. I thought originally it was Radarr causing an issue because I had my /downloads path set to /mnt/User/downloads instead of /mnt/cache/downloads. When I switched it and restarted Radarr, my image dropped 20GB; however, it just filled up again about 6 hours later. Not really sure what could be causing it, and I am sure I am missing something, but I did go through the FAQ on space issues.

    2017-03-14 19_16_21-Hades_Docker.png

  15. 3 minutes ago, saarg said:

    I don't use Radarr, so I don't know how much space it needs, but turn it off to check.

    Also post the settings for folders.

     
     

     

    Thanks, I think I figured it out. Not really sure why (probably my lack of understanding), but it was the container mapping on Radarr.

     

    NZBGet was pointing to /mnt/cache/downloads and Radarr was pointing to /mnt/user/downloads. Despite that just being the shared version of the cache drive (to my understanding at least), I switched it over and my img file immediately went down 20 GB. Weirdly enough, CP is pointing to the User share and never caused an issue for me.
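
    For the record, this is how I double-checked what each container was actually mapping (a sketch; swap in your own container names):

    # print "host path -> container path" for every volume mapping
    docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' radarr
    docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' nzbget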

  16. Ya, no idea... unless Radarr does something weird in a temp folder when it renames a file? Nothing in Radarr's settings looks like it would cause it. It's handing off to NZBGet without issue, and it's definitely Radarr filling it, because I disabled Radarr while NZBGet was still running and it stopped filling up.

     

    This is the only setting that maybe could do something? Not really sure. Also, below are all my other file management settings.

    2017-03-14 10_58_41-Settings - Radarr.png

    2017-03-14 10_59_46-Settings - Radarr.png

  17. Just now, saarg said:

     

    Then it's the folder settings in Radarr that are the problem, I guess.

     

     

    Ya, after you said I might have something configured wrong, I thought about that too; I am checking, but so far I'm not sure. The downloads are going to the correct location, so I am not sure what would fill it.

  18. 2 minutes ago, saarg said:

     

    It's most likely that you configured the mappings wrong or the folders in the app wrong. When the docker image fills up, something is being stored in the container instead of in the folder mappings.

     

    You should have a look in the docker FAQ to better understand how docker works.

     

    Anything out of the ordinary on this? This setup has worked 100% fine on my other 10 or so apps for the last year or so.

    2017-03-14 10_47_16-Hades_UpdateContainer.png

  19. Anyone having Docker container size issues using this container? I have a 30GB Docker image, and within about 20 minutes of rebuilding it and turning Radarr on, it filled to 100% from about 5GB.

     

    Radarr is indexing about 2200 movies for me, so maybe that is why? Should I just grow my docker image, or is this a bug of sorts?

  20. And if you have to deal with WAF, then that consideration trumps all others.

     

    *But* if you're not using the dated backup feature of CA, then I'd disable the backup schedule just so you don't wind up trashing the backup set from a couple of days ago until you get around to restoring again.

     

    Worse than WAF, I have my parents breathing down my neck because they have 3 episodes left of Breaking Bad and my server has been down 2 days. Damn you, remote access Plex. Hahaha.