
wickedathletes

Community Developer
  • Posts: 435
  • Joined

  • Last visited

Posts posted by wickedathletes

  1. Can this be correlated back to a specific drive, or is it completely unrelated? Nothing on my system is having an issue, but I just saw this pop up and I am wondering if my disk 2 is just cooked.

     

    May  6 23:15:49 Hades kernel: ata1.00: exception Emask 0x50 SAct 0x0 SErr 0x280900 action 0x6 frozen

    May  6 23:15:49 Hades kernel: ata1.00: irq_stat 0x08000000, interface fatal error

    May  6 23:15:49 Hades kernel: ata1: SError: { UnrecovData HostInt 10B8B BadCRC }

    May  6 23:15:49 Hades kernel: ata1.00: failed command: READ DMA EXT

    May  6 23:15:49 Hades kernel: ata1.00: cmd 25/00:40:e0:be:5c/00:05:8c:01:00/e0 tag 10 dma 688128 in

    May  6 23:15:49 Hades kernel:        res 50/00:00:df:be:5c/00:00:8c:01:00/ec Emask 0x50 (ATA bus error)

    May  6 23:15:49 Hades kernel: ata1.00: status: { DRDY }

    May  6 23:15:49 Hades kernel: ata1: hard resetting link

    May  6 23:15:49 Hades kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

    May  6 23:15:49 Hades kernel: ata1.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded

    May  6 23:15:49 Hades kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out

    May  6 23:15:49 Hades kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out

    May  6 23:15:49 Hades kernel: ata1.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded

    May  6 23:15:49 Hades kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out

    May  6 23:15:49 Hades kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out

    May  6 23:15:49 Hades kernel: ata1.00: configured for UDMA/133

    May  6 23:15:49 Hades kernel: ata1: EH complete
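
    For what it's worth, the log itself identifies the drive: the ata1.00 prefix names the ATA port, and BadCRC/UnrecovData in SError usually points at a bad SATA cable or backplane connection rather than a failing disk. A runnable sketch of how to dig in (the sample lines are copied from the log above; the commented sysfs/smartctl commands are for the live server, and sdX is a placeholder):

```shell
# Save the relevant lines; a here-doc stands in for a real syslog so
# this sketch runs anywhere.
cat > ata-errors.txt <<'EOF'
May  6 23:15:49 Hades kernel: ata1.00: exception Emask 0x50 SAct 0x0 SErr 0x280900 action 0x6 frozen
May  6 23:15:49 Hades kernel: ata1: SError: { UnrecovData HostInt 10B8B BadCRC }
EOF
grep -o 'ata[0-9.]*' ata-errors.txt | sort -u   # which port errored (ata1)
grep -o 'SError: {[^}]*}' ata-errors.txt        # BadCRC => suspect cabling first
# On the live server, tie ata1 back to a disk and check its CRC counter,
# which climbs with cable/backplane problems (sdX is a placeholder):
#   ls -l /sys/block/ | grep -i ata1
#   smartctl -a /dev/sdX | grep -i udma_crc
```

If UDMA_CRC_Error_Count is rising but reallocated/pending sectors stay at zero, reseating or replacing the cable is usually the fix.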

  2. I know using straight Windows Explorer is a poor choice, as it's using the network to transfer a file from one spot to another (adding a TON of time). Normally I would just use the command line, but I need a graphical way to get my drives into a better state. I have 2 drives that are 99% full and a bunch of drives that are only 10-20% full. I want to move, say, 200GB-1000GB at a time, but based on hundreds of specific folders in a share, so it needs to be GUI based.

    Why not install either the Dolphin or Krusader docker app? That gives you an interface similar to Windows, but avoids the extra time involved in moving everything back and forth over the network via Windows.

     

    Trying out Dolphin now; this is definitely preferable to a Windows app if it looks and works well :)

  3. I know using straight Windows Explorer is a poor choice, as it's using the network to transfer a file from one spot to another (adding a TON of time). Normally I would just use the command line, but I need a graphical way to get my drives into a better state. I have 2 drives that are 99% full and a bunch of drives that are only 10-20% full. I want to move, say, 200GB-1000GB at a time, but based on hundreds of specific folders in a share, so it needs to be GUI based.

  4. It might also be a good idea to consider having a second set of your data somewhere, even if it is on external USB hard drives; just because you have a NAS with redundant drives doesn't mean your data is always safe. Also, did you ever consider setting up a second parity drive, which is possible in the 6.2 beta? I don't know if it would have made a difference in your situation or not, but it might be worth considering if you have the space.

     

    I do... or, well, I have everything super important to me backed up, i.e. photos, home videos, music. I also have all the important stuff backed up to CrashPlan. Unfortunately CrashPlan is a memory hog and I was selective with what I backed up, since backing up 15TB to the cloud required about 16GB of RAM to do it... So again: photos, music and home videos only.

     

    I am dumb, just not completely stupid hahaha. Movies and TV shows are a pain to recover, but not an end-of-the-world scenario. I just think there should be better wording on the format button, especially in a catastrophic situation like that. Thankfully I do back up the stuff that matters to me beyond unRAID, but still, not a good scenario.

     

    Also, a second parity drive is the plan (like you said, probably not helpful in this scenario though); I was just planning to wait until 6.2 is official before getting it set up.

  5. The file system on the drive likely became corrupt. This can happen due to faulty RAM, faulty hardware, and in some cases it just happens. ECC RAM can usually prevent the latter, but most people don't use it.

     

    Parity cannot repair a file system, and that is why it didn't attempt a rebuild: unRAID couldn't see a valid partition on that drive. Your data was likely 100% recoverable by repairing the file system; however, the second you hit format, your data went poof and parity was immediately updated to reflect this. If this happens again, do a file system repair on the drive. NEVER format a drive with data on it. In the future I suggest using a program that lists out every single file on the server, on a per-drive basis, in a single txt file. That way, if you run it regularly, you know exactly what you lost. Sorry this happened; sometimes we learn things the hard way.
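
    The per-drive file listing suggested above can be as simple as one find command per disk, run on a schedule. A runnable sketch (a temp tree stands in for the real /mnt/diskN mounts so it runs anywhere; on unRAID you would point it at /mnt/disk1, /mnt/disk2, ... and write the lists somewhere that survives a disk loss, such as the flash drive):

```shell
# Build one manifest file per data disk so you always know what a lost
# disk held. The mktemp tree below is a stand-in for /mnt on a server.
root=$(mktemp -d)
mkdir -p "$root/disk1/Movies" "$root/disk2/TV"
touch "$root/disk1/Movies/example-movie.mkv" "$root/disk2/TV/example-episode.mkv"
mkdir -p manifests
for d in "$root"/disk*; do
  find "$d" -type f | sort > "manifests/$(basename "$d")-files.txt"
done
cat manifests/disk1-files.txt
```

Run it from cron (or the User Scripts plugin) and the manifests become a cheap insurance policy.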

     

    On a side note, 1000 movies on a 2TB drive? That's only 2GB a movie? Perhaps use this as an excuse to upgrade to better quality... 2TB drive only stores about 50 movies for me (35GB per movie). I guess that's the pro of using full quality BD rips, you lose a heck of a lot less content. :P

     

    Am I to assume I should not use this drive anymore? Or is it safe, and just that partition somehow became corrupt?

     

    1000 was a bit high; I lost about 1000 movies and TV episodes combined, around 440 of them movies. But yes, I am in the process of getting better quality stuff now; I have had my collection since about 2003, so I still have a lot of DVD rips.

     

    I don't care about losing it; I care more about having no idea what was lost, short of doing some fancy tricks with the Plex DBs to try to figure it out. Oh, and recovering TV shows sucks with Usenet... :)

     

    As a side note to the unRAID devs: might I recommend, if something like this happens, at least having warning messages for dumb people like myself that formatting the drive will also nuke everything from that drive in parity. I knew that somewhere in the back of my brain, but typically when I have had drive failures the system acted accordingly and I knew what to do. This threw me for a loop, and I acted quickly, not smartly.

  6. You definitely were wrong to select Format as that is an instruction for unRAID to create an empty file system on the disk. At that point parity is updated to reflect the disk having an empty file system.

     

    A disk suddenly coming up unmountable normally means some sort of file system corruption has occurred on the disk and the correct way forward is to put the array into Maintenance mode and run the repair tool appropriate to the format in use.  Often the disk becoming unmountable will be accompanied by the disk being disabled (marked with a red cross) due to a write having failed (which is why corruption has occurred).

     

    In terms of recovering data after incorrectly issuing format, then if the disk happens to be in reiserfs format there is an excellent chance the recovery tool can retrieve most of the files.  I do not believe the tools for other formats are as good, so you will have to wait until someone else chimes in with suggestions for those.  I think I have seen posts suggesting that for XFS there may be a tool that can be used under Windows to recover files in such a scenario.  No idea about Btrfs.
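
    For the XFS case, the Maintenance-mode repair described above boils down to two commands. A sketch, guarded so it is a no-op on a machine without the device (mdN corresponds to disk N, so /dev/md2 here assumes disk 2 is the unmountable one; adjust to your disk):

```shell
# Read-only check first (-n reports problems and changes nothing),
# then the actual repair. If mounting failed because of a dirty log,
# xfs_repair may ask you to mount once to replay it, or to zero the
# log with -L as a last resort (which can lose the newest transactions).
DEV=/dev/md2
if [ -e "$DEV" ]; then
  xfs_repair -n "$DEV"   # dry run: report only
  xfs_repair "$DEV"      # repair in place
else
  echo "no $DEV on this machine; run from the unRAID console in Maintenance mode"
fi
```

Repairing the file system this way keeps parity in sync, which is exactly what formatting does not do.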

     

    Yeah, the format was dumb, but if the disk was dead, why didn't my parity kick in? I've lost disks before; this was frustrating beyond a doubt.

  7. in my system log:

     

    May  5 23:13:29 Hades kernel: XFS (md2): Internal error XFS_WANT_CORRUPTED_GOTO at line 3156 of file fs/xfs/libxfs/xfs_btree.c.  Caller xfs_free_ag_extent+0x419/0x558
    May  5 23:13:29 Hades kernel: CPU: 2 PID: 6733 Comm: mount Not tainted 4.4.6-unRAID #1
    May  5 23:13:29 Hades kernel: Call Trace:
    May  5 23:13:29 Hades kernel: XFS (md2): Internal error xfs_trans_cancel at line 990 of file fs/xfs/xfs_trans.c.  Caller xlog_recover_process_efi+0x148/0x155
    May  5 23:13:29 Hades kernel: CPU: 2 PID: 6733 Comm: mount Not tainted 4.4.6-unRAID #1
    May  5 23:13:29 Hades kernel: Call Trace:
    May  5 23:13:29 Hades kernel: XFS (md2): xfs_log_force: error -5 returned.
    May  5 23:13:29 Hades emhttp: mount error: No file system (32)

    hades-syslog-20160505-2336.zip

  8. Is there anything I can do?

     

    I was browsing my NAS and all of a sudden disk 2 disappeared. I rebooted my NAS and it came up with disk 2 unmountable, format to continue. I formatted (probably a mistake; I assumed it would rebuild from parity) and nothing, no more data...

     

    Prior to me formatting, the data was already gone; I checked: 2500 movies down to 1500...

     

    Is it hosed and how the heck did this happen? I am on 6.2 beta 21.

     

    My questions:

     

    1. I am sure selecting format was a bad choice, but why did it reboot and come back "healthy" and online, yet clearly not healthy, since it was missing 2TB of data?

    2. Is there anything that can be done? Can I manually access the parity? Can parity be rebuilt, or did I screw that up the minute I formatted the drive? I don't think so, though, since like I said, the data was GONE prior to me hitting format.

     

    I wish I knew what was even on that drive... this is frustrating beyond belief and making me extremely worried to even turn on the server and lose more drives...

    Well, it's definitely not an NZBGet issue, as that doesn't use Mono. Reading the link you posted, it seems to be related to copying to a CIFS share and is a problem with the OS implementation of this. I can't quite see how this ties in with a typical unRAID setup, which to all intents and purposes uses local folders rather than network shares as far as containers are concerned. Is there anything special about your setup?

     


     

    Nothing that I know of; I download and extract on my cache and push to my share. It started a few weeks ago, and since turning off Drone Factory it hasn't happened again (albeit with only a week of testing).

  10. figure I would try here as well as NZBGet and my newshosting sites.

     

    For about 3 weeks now, I will download files and the video cuts out after about 30 minutes. NZBGet shows the file fully downloads and uncompresses, but the uncompressed file is smaller than it should be. Watching the video in any player cuts out at the same point. So my first thought was obviously the file/host, but if I regrab the same file it gives all the same results (aka a 100% download); the second or third time, it is the correct size and everything works fully.

     

    This was happening a solid 5-6 times a week, and a week ago I thought it went away (despite it not mattering, or at least it shouldn't: I saw my Docker image was almost full, so I recreated it, and the issue seemed to go away for a week). It happened again yesterday.

     

    These files are coming from different sites and the files are verified working (2nd, 3rd or 4th time is the charm).

     

    Any ideas would be appreciated, I have no idea what is going on.

    Do you use Sonarr?  If so are you using Drone Factory or Completed Download Handling for the file renaming?

     

    I was having the same issue and for a long time I also thought it was NzbGet.  I switched from using the Drone Factory to Completed Download Handling and I have not had this issue anymore.

     

    Thanks eroz, I changed that setting, hopefully it helps.

     

    So, question to the dev group: did you update any version of the base image in the last month or so? Apparently this is an issue with the version of Mono?

     

    https://forums.sonarr.tv/t/nzb-drone-post-processor-end-of-file-gets-cut-off-when-moving-across-network/963/27

     

    I am just wondering because it worked fine for X number of months and then just started doing this out of the blue.

  11. figure I would try here as well as NZBGet and my newshosting sites.

     

    For about 3 weeks now, I will download files and the video cuts out after about 30 minutes. NZBGet shows the file fully downloads and uncompresses, but the uncompressed file is smaller than it should be. Watching the video in any player cuts out at the same point. So my first thought was obviously the file/host, but if I regrab the same file it gives all the same results (aka a 100% download); the second or third time, it is the correct size and everything works fully.

     

    This was happening a solid 5-6 times a week, and a week ago I thought it went away (despite it not mattering, or at least it shouldn't: I saw my Docker image was almost full, so I recreated it, and the issue seemed to go away for a week). It happened again yesterday.

     

    These files are coming from different sites and the files are verified working (2nd, 3rd or 4th time is the charm).

     

    Any ideas would be appreciated, I have no idea what is going on.

  12. Is Community Applications down? I can't connect. I just cleared out my Docker and all my plugin images, and now I can't connect:

     

    browsing here works: tools.linuxserver.io

     

    Download of appfeed failed. Reverting to legacy mode

     

    Download of source file has failed

     

    I decided to clean house because, after rebooting, all my dockers were stuck in "needs to update" but they couldn't update.

     

     

    EDIT: and I am an idiot; not sure why it was set to itself instead of my router (10.10.1.1), but it has been like that for a couple of years without issue...

     

    I am on 6.2b21

    Sounds more like you need to set static DNS addresses in network settings

     

    Sorry if this sounds dumb, but my 5.x/6.x install has been set to automatic for a few years (my router is set up with a static IP); why would this change?

     

    Last weekend I did a motherboard/CPU/RAM swap, but even after that it's been fine until today.

     

    Also, my DNS server setting has never changed, which I should clarify is set to my server's static IP (10.10.1.3).
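
    That last detail is likely the whole problem, matching the reply above: if DNS points at the server's own IP, name lookups (and hence the appfeed download) fail even though browsing by IP still works. A runnable sketch of the check, with a sample file standing in for /etc/resolv.conf (the 10.10.1.x addresses are the ones from this post):

```shell
# Flag a resolv.conf whose nameserver is the server's own address
# (10.10.1.3) instead of the router (10.10.1.1).
cat > resolv.sample <<'EOF'
nameserver 10.10.1.3
EOF
if grep -q '^nameserver 10\.10\.1\.3' resolv.sample; then
  echo "DNS points at this server itself; set static DNS to the router (10.10.1.1)"
fi
# On the live box, check /etc/resolv.conf and test resolution directly:
#   nslookup tools.linuxserver.io 10.10.1.1
```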

  13. Is Community Applications down? I can't connect. I just cleared out my Docker and all my plugin images, and now I can't connect:

     

    browsing here works: tools.linuxserver.io

     

    Download of appfeed failed. Reverting to legacy mode

     

    Download of source file has failed

     

    I decided to clean house because, after rebooting, all my dockers were stuck in "needs to update" but they couldn't update.

     

    I am on 6.2b21

  14. Has the availability of Plex Pass in the Lime Tech docker changed since this thread was started? I'd prefer to stay on the official docker, but I'm a lifetime Plex Pass member and I want my premium features. :)

    I believe the Plex Pass features are tied to your account, not specifically to a Plex Pass release.

     

    The Plex Pass releases are where all new features show up first, so it's more like a preview of new functionality, and an opportunity to find new bugs and “enjoy” them for a while before they get fixed.

     

    Yes you become the beta tester.

     

    Plex Pass is both beta testing/early release of features and premium features (such as Plex Home or Plex Sync, to name a few). So Plex Pass matters if you use specific premium features; however, as mentioned, in this sense the release channel is specific to just beta testing new builds. You will not lose any premium functionality by being on the "official release", unless of course the premium functionality was just released and is not in the official release yet. Your Plex Pass functionality is tied to your Plex login, and your login only.

  15. Personally I like the idea of taking the bulk of my dockers from one source, as presumably they are all built on the same base (so, a more efficient use of resources)?

     

    Some requests (to add to the long queue)

     

    Mylar (I know lonix has done this personally, but in the same format/base etc.?)

    aMule (as above but Sparklyballs - same format/base?)

    Comictagger (Sparklyballs again)

    Cherrymusic

    Jackett

    Get_iPlayer

    DuckDNS (or some other free DNS client)

    RDP Calibre

     

    I've just been told we actually have a requests page "thingy"  It's here...

     

    Good to know, I just put one in for the request I had a few posts above.

  16. I have a request for Plex users, which is extremely helpful for automating media conversions. This is built off of the CouchPotato/SickBeard/Sonarr plugin called SickBeard MP4 Automator, https://github.com/mdhiggins/sickbeard_mp4_automator/blob/master/README.md, HOWEVER IT IS NOT THAT EXACT SCRIPT. This version is customized for Plex users to not only output better quality, but also add an AAC track as the first audio track and force the MOOV atom to the beginning of the file as well. The instructions for this version are below, found here if you are a Plex forum member: https://forums.plex.tv/discussion/comment/931888/#Comment_931888

     

    Pre-requisites:

    The only pre-requisite is to have python 2.7.9 installed. https://www.python.org/downloads/release/python-279/

    Instructions for setup:

    1. Download convert.zip from ftp://ayars.tv

    -- username and password are both guest

    Unzip; you will end up with [INSTALLDIR]\Convert

    There are two directories (can be changed) inside this folder that are used for the media

    Process is where you put any files you want to convert

    Done is where the files will end up after processing

    You can run this multiple ways, but the easiest is to use the Run.bat file (or, in this case, maybe set up a Cron Job?).

    If you edit this file (run.bat) you will notice it's one line that reads: c:\python27\python manual.py -a -i c:\convert\process

    If you install python to a different folder then change the python directory at the beginning of this line.

    If you want to process files from a different location than c:\convert\process, then just change this in the batch file.

    If you want to change the final location of where the final files are stored then edit autoProcess.ini (this is the file people will need to modify the most) file and change the output_directory setting.

    That's pretty much it.

    Put files in the process directory and run the batch file

     

    What this will do:

    It will process each and every file in the "process" directory.

    It will remux files that are h.264 and will transcode files that aren't h.264 video

    It will pull out any English subtitles and create SRT files from them.  If you want to pull out other languages, modify the subtitle-language setting in the ini file.  Multiple languages are supported, separated by commas

    The completed file will not have subtitles (we pulled them out)

    It will clear all tags in the mp4 file so Plex doesn't pick up and use these tags instead of the proper meta-data

    It will create an AAC track as the first track, up to 256K in size.

    It will remove all audio tracks not in English.  If you want other languages included, modify the audio-language setting (ini file) and include any languages you want kept in the MP4 file.  Multiple languages are supported, separated by commas

    When it transcodes it will use an h.264 profile of high; a level of 4.0; a CRF setting of 20.  These are basically the same as the Handbrake High Profile settings, with a bit more refinement for Plex use that allows it to direct play more often.

    In all cases the final MP4 file has the MOOV atom at the beginning of the file (i.e. web optimized)
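
    The "Cron Job maybe?" aside could look like the entry below on the server side. This is a hypothetical translation of the Windows paths in the instructions (install dir /opt/convert, watch folder /mnt/user/convert/process), not part of the original package; manual.py and its -a/-i flags come from the run.bat line above.

```
# Hypothetical crontab entry: nightly at 3:00, process everything
# dropped into the watch folder.
0 3 * * * python /opt/convert/manual.py -a -i /mnt/user/convert/process
```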

  17. Random Docker request, wondering how hard it would be to tackle or not...

     

    https://forums.plex.tv/discussion/comment/931888/#Comment_931888

     

    Someone created an excellent conversion script for video that is automated using Python. I was wondering how hard it would be to have a small UI in docker for it so it could run on the server. Basically I was hoping to have a "service" running monitoring folder A and outputting into folder B.

     

    Can you just post whatever it is in here, for those of us without a Plex forum account?

     

    Shoot, sorry, I thought it was in the general post. Here are the details:

     

    Here is a set of scripts that you can use to automate mp4 creation of files.

    I'd recommend running your files through FileBot first to rename them correctly before running them through this script.  It can pull SRT files so everything ends up with proper names this way.

    The only pre-requisite is to have python 2.7.9 installed. https://www.python.org/downloads/release/python-279/

    (I'm running 64bit version for windows installed to C:\Python27)

    This is a modified version of sickbeard_mp4_automator with modifications by me. I pulled out all the integration features to make setup much easier.  I also modified it to produce higher quality MP4 files that will direct play better with Plex.

    1. Download convert.zip from ftp://ayars.tv

    - username and password are both guest

    2. Unzip to C: drive. You will end up with C:\Convert

    - There are two directories (can be changed) inside this folder that are used for the media

    - Process is where you put any files you want to convert

    - Done is where the files will end up after processing

    3. You can run this multiple ways but the easiest is to use the Run.bat file.

    - If you edit this file you will notice it's one line that reads: c:\python27\python manual.py -a -i c:\convert\process

    - If you install python to a different folder then change the python directory at the beginning of this line.

    - If you want to process files from a different location than c:\convert\process, then just change this in the batch file.

    - If you want to change the final location of where the final files are stored then edit autoProcess.ini file and change the output_directory setting.

    That's pretty much it.

    Put files in the process directory and run the batch file

     

    What this will do:

    - It will process each and every file in the "process" directory.

    - It will remux files that are h.264 and will transcode files that aren't h.264 video

    - It will pull out any English subtitles and create SRT files from them.  If you want to pull out other languages, modify the subtitle-language setting in the ini file. Multiple languages are supported, separated by commas

    - The completed file will not have subtitles (we pulled them out)

    - It will clear all tags in the mp4 file so Plex doesn't pick up and use these tags instead of the proper meta-data

    - It will create an AAC track as the first track, up to 256K in size.

    - It will remove all audio tracks not in English.  If you want other languages included, modify the audio-language setting (ini file) and include any languages you want kept in the MP4 file.  Multiple languages are supported, separated by commas

    - When it transcodes it will use an h.264 profile of high; a level of 4.0; a CRF setting of 20.  These are basically the same as the Handbrake High Profile settings, with a bit more refinement for Plex use that allows it to direct play more often.

    - In all cases the final MP4 file has the MOOV atom at the beginning of the file (i.e. web optimized)

    Advanced setting:

    - You can limit the total bitrate for the video.  If, for example, you use a client that has an upper limit of 12000 Kbps, you can set the video-bitrate option to something like 11000, which gives you a bit of headroom for a couple of audio tracks.
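
    For anyone who wants the core of what this script does without the wrapper: for a source that is already h.264, the remux path is roughly one ffmpeg command. A sketch under the assumption that the source has an English audio track; input.mkv/output.mp4 are placeholders, and this skips the script's subtitle extraction and tag clearing:

```shell
# Copy the h.264 video untouched, encode a 256k AAC audio track, and
# put the MOOV atom at the front (+faststart) for web/Plex streaming.
# Guarded so the sketch is a no-op where ffmpeg or the input is missing.
if command -v ffmpeg >/dev/null && [ -f input.mkv ]; then
  ffmpeg -i input.mkv -map 0:v:0 -map 0:a:0 \
         -c:v copy -c:a aac -b:a 256k \
         -movflags +faststart output.mp4
else
  echo "needs ffmpeg and an input.mkv to run"
fi
```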

  18. Random Docker request, wondering how hard it would be to tackle or not...

     

    https://forums.plex.tv/discussion/comment/931888/#Comment_931888

     

    Someone created an excellent conversion script for video that is automated using Python. I was wondering how hard it would be to have a small UI in docker for it so it could run on the server. Basically I was hoping to have a "service" running monitoring folder A and outputting into folder B.

  19. I have the same issue. I am running dual 6-core processors and 48GB of RAM. If I have my transcode directory set to RAM (/tmp) I can watch some TV shows, but none of my movies. If I change the transcode directory to my cache drive (/mnt/cache/), my movies work fine.

     

    I have to believe that something in the new update has changed. For now I have left my transcode directory set to the cache drive. I would prefer it set to RAM, but right now that doesn't work for me.

     

    It's an open Plex issue; it has been for a few weeks now.

  20. I assume I am missing something, and hopefully this is my final stupid question, but is there a way to get the machines to talk to each other? My desktop Win 10 machine can see my unRAID server fine, but it can't see my unRAID Win 10 VM. I wasn't sure if this was something I needed to open in the machine, router, or unRAID settings.

    You need to have enabled the bridge (default of br0) under Settings->Network, and then set the VM network settings to use it, if you want the VM visible on the LAN.  The default virbr0 is a NAT bridge that allows the VM to see the LAN, but not vice versa.
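
    Concretely, the difference shows up in the VM's XML (Edit XML in the VM settings). This fragment is illustrative rather than copied from any specific unRAID template; switching the source bridge to br0 is what puts the VM on the LAN:

```xml
<!-- Illustrative fragment: a LAN-visible NIC uses the br0 bridge
     instead of the NAT bridge (virbr0). MAC address etc. omitted. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

Note that even when bridged, browsing \\machine\c$ specifically also requires the Windows side to cooperate: the VM's network profile set to Private, file and printer sharing enabled, and administrative shares accessed with an administrator account.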

     

    I am using br0 already, although I modified it after the fact. I can see the machine with a correct in-range IP; I just can't browse it, \\machine\c$ for example.
