Posts posted by heffe2001

  1. I heavily modified a script from torrent-invites to use LFTP and a seedboxes.cc account for my torrent downloads.  You'll need LFTP installed on your base unraid install, as well as sshfs-fuse.  The original script as supplied would just mirror folders in a specific destination directory on the seedbox with LFTP, but it wouldn't download single files (i.e., a download without a parent directory), didn't do sftp, and wasn't multi-segmented.  Subsequent releases on that site added a database function, but it was windows-specific, so that had to be modified to work with linux/unraid.  I also added the capability of determining whether a download is a directory (and using mirror) or a file (and using get), so it handles either without problems.  After the files are downloaded, I call an unrarall script that extracts the rar files, removes the extras (nfo's, samples, and the parent rars), and leaves the files in a way that Sonarr and CouchPotato can work with.

     

    The Script:

     

    #!/bin/bash
    login="username"
    pass="password"
    host="server.hostname"
    sshfshome="/path/to/home/for/sshfs"
    sshfsmnt="/path/to/local/seedbox/mount"
    sqldir="/folder/for/sqlitedb/"  # I'd suggest putting the script and the sqlite file in the same folder for simplicity, but this lets you keep them separate
    sql="dir.db"                    # name of the sqlite file
    table="rememberedFiles"         # name of the sqlite table
    field="filename"                # name of the sqlite field
    for i in 1 2 3  # Each number is for a specific folder on my seedbox, matching the torrent labels set up in Deluge.  If you need more, add the numbers here and copy one of the sections below, making changes as you need.
    do
        if [ $i -eq 1 ]
        then
            remote_dir="/example/torrents/finished/Downloads/Comics/"  # the absolute path on my seedbox's ftp client
            remotedir="/mnt/seedbox/Comics/"                           # the local path where I have the seedbox mounted with sshfs below, required for the database check to work correctly
            local_dir="/mnt/user/Downloads/Deluge/Downloads/Comics/"   # the local path where I actually download my files to
        elif [ $i -eq 2 ]
        then
            remote_dir="/example/torrents/finished/Downloads/Movies/"
            remotedir="/mnt/seedbox/Movies/"
            local_dir="/mnt/user/Downloads/Deluge/Downloads/Movies/"
        else
            remote_dir="/example/torrents/finished/Downloads/TV/"
            remotedir="/mnt/seedbox/TV/"
            local_dir="/mnt/user/Downloads/Deluge/Downloads/TV/"
        fi
        if [ -d "$remotedir" ]
        then
            tick=$remotedir
        else
            #umount "$sshfsmnt"
            echo "$pass" | sshfs "$login@$host:$sshfshome" "$sshfsmnt" -o workaround=rename -o password_stdin
        fi
        cd "$remotedir" || continue
        Progs=( * )  # creates an array of your directories
        for show in "${Progs[@]%*/}"; do  # loops over each entry in the array
            cd "$sqldir"
            exists=$( sqlite3 "$sql" "select count(*) from $table where $field='${show}'" )
            if (( exists > 0 )); then  # tests whether this entry is already in the sqlite database
                tick=$show
                # echo "Show already downloaded $show"
            else
                trap "rm -f /tmp/synctorrent.lock" SIGINT SIGTERM
                if [ -e /tmp/synctorrent.lock ]
                then
                    echo "Already Running"
                    exit
                else
                    touch /tmp/synctorrent.lock
                    if [ -d "$remotedir/${show}" ]  # I needed two separate lftp calls: one to mirror full directories, and one to handle single files, since some uploaders don't use directories
                    then
                        echo "$remote_dir/${show}/ is a directory"  # debugging output showing whether it's mirroring a directory or downloading a file; can safely be commented out
    lftp -p 22 -u "$login,$pass" sftp://$host << EOF  # this is an SFTP transfer - hopefully secure
    set mirror:use-pget-n 7
    mirror -c -P5 --log=/var/log/synctorrentssl.log "$remote_dir/${show}/" "$local_dir/${show}/"
    quit
    EOF
                    else
                        echo "$remote_dir/${show} is a file"  # for debugging, can be commented out as well
    lftp -p 22 -u "$login,$pass" sftp://$host << EOF  # this is an SFTP transfer - hopefully secure
    get "$remote_dir/${show}" -o "$local_dir/${show}"
    quit
    EOF
                    fi
                fi
                sqlite3 "$sql" "insert into $table ($field) values ('${show}');"  # once the download completes, the name is added to the database so the script won't download it again
                rm -f /tmp/synctorrent.lock
                trap - SIGINT SIGTERM
            fi
        done
    done
    umount "$sshfsmnt"
    # Below this line I've added the unrarall script to test md5's (if available), extract, and clean up the samples, nfo's, etc.
    /boot/custom/unrarall --clean=all /mnt/user/Downloads/Deluge/Downloads/TV
    /boot/custom/unrarall --clean=all /mnt/user/Downloads/Deluge/Downloads/Movies
    exit 0

     

    My notes from the thread on the other site:

     

    I've heavily modded this script to work with my seedbox host (seedboxes.cc).  I use Deluge as my downloader, and have labels set up to automatically sort my downloads into folders for whatever type of downloads they are (Movies, Music, TV, Porn, etc).  You need the sshfs client installed to use it on a linux-based system (no clue on windows or mac; I coded this specifically for my unraid linux system).

     

    The line that executes the sshfs command MAY need the variables changed to the actual values needed; I had some issues with it on an earlier version.  This will mirror any directories in the Label directories, and also pull down any single files if they aren't in directories (the originally supplied script wouldn't handle single files in the root directories at all).  I've also modded it to use sftp in place of regular ftp, and it keeps a database of previously downloaded files and won't re-download them if they've already been pulled (see the sketch below for working with that database by hand).
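
     

    If you ever need to re-download something, the remembered-files database can be inspected and pruned by hand with sqlite3.  A quick sketch, using the table and field names from the script above (the show name is just a placeholder):

     

    # list everything the script has already downloaded
    sqlite3 dir.db "select filename from rememberedFiles;"
    # forget one entry so the script will fetch it again on its next run
    sqlite3 dir.db "delete from rememberedFiles where filename='Some.Show.S01E01';"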

     

    Hopefully someone can use this, I've tried to comment where I thought it was necessary.

     

    This is the unrarall script I'm running: https://github.com/arfoll/unrarall

     

    Original script here: http://www.torrent-invites.com/showt...=1#post2083528 (Thanks to the original author)

    You'll need to use his directions to set the environment up, but use the above script code to get it to function.  I'd still like to add some errorlevel checking on the actual lftp downloads, as this script currently accepts the download even if it was cancelled, and then won't re-download it because it's already in the sqlite database; one way that check could work is sketched below.  If I ever get around to doing that, I'll make the changes above.
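
     

    For what it's worth, lftp exits nonzero when its last transfer command fails, so the check could gate the sqlite insert on lftp's exit status.  A minimal sketch of the idea (untested against the full script, so treat it as a starting point rather than a drop-in change):

     

    lftp -p 22 -u "$login,$pass" sftp://$host << EOF
    set mirror:use-pget-n 7
    mirror -c -P5 --log=/var/log/synctorrentssl.log "$remote_dir/${show}/" "$local_dir/${show}/"
    quit
    EOF
    if [ $? -eq 0 ]; then
        # only remember the download if lftp exited cleanly
        sqlite3 "$sql" "insert into $table ($field) values ('${show}');"
    else
        echo "lftp failed for ${show}; leaving it out of the database so it can retry next run"
    fi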

     

    Aside from the script above, you'll need cksfv, p7zip, unrar, lftp and sshfs-fuse.

     

    These are the versions of (and links to) the packages I used.  This is all done on the base unraid install, not in a docker.  I placed all these files in the addons directory that unraid installs automatically on boot (a sample manual install sketch follows the package list).

     

    cksfv  (http://slackonly.com/pub/packages/14.1-x86_64/misc/cksfv/cksfv-1.3.14-x86_64-1_slack.txz)

    p7zip  (http://taper.alienbase.nl/mirrors/people/alien/slackbuilds/p7zip/pkg64/13.37/p7zip-9.20.1-x86_64-1alien.tgz)

    unrar  (http://taper.alienbase.nl/mirrors/people/alien/slackbuilds/unrar/pkg64/14.0/unrar-4.2.4-x86_64-1alien.tgz)

    lftp    (http://ftp.slackware.com/pub/slackware/slackware64-14.1/slackware64/n/lftp-4.4.9-x86_64-1.txz)

    sshfs-fuse (http://taper.alienbase.nl/mirrors/people/alien/slackbuilds/sshfs-fuse/pkg64/14.0/sshfs-fuse-2.5-x86_64-1alien.tgz)
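
     

    If you'd rather install them by hand to test before relying on the boot-time install, Slackware's installpkg handles both .txz and .tgz packages.  A rough sketch, assuming you've copied the files to /boot/addons (adjust the filenames to whatever versions you actually downloaded):

     

    installpkg /boot/addons/cksfv-1.3.14-x86_64-1_slack.txz
    installpkg /boot/addons/p7zip-9.20.1-x86_64-1alien.tgz
    installpkg /boot/addons/unrar-4.2.4-x86_64-1alien.tgz
    installpkg /boot/addons/lftp-4.4.9-x86_64-1.txz
    installpkg /boot/addons/sshfs-fuse-2.5-x86_64-1alien.tgz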

     

    Since for torrents I use the blackhole directories in most of my autodownloaders, I also have a script that uploads them hourly (I'm getting ready to change my scheduling to sync the watchdir every 5 minutes and download the resultant torrents every 15-30 minutes), and one that pulls down the finished torrents every day at 5am (unless I fire it off manually, which is usually what happens, lol).  A sample cron setup is sketched below.
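
     

    The cron entries for that schedule could look something like this (the script names and paths are placeholders for wherever you keep the two scripts):

     

    # upload new .torrent files from the blackhole watchdirs every hour
    0 * * * * /boot/custom/syncwatch.sh
    # pull finished downloads from the seedbox every day at 5am
    0 5 * * * /boot/custom/synctorrent.sh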

     

    If you have any questions, feel free to ask, but if it's about a seedbox host other than seedboxes.cc, I'm not sure how much help I can be.  I've gotten excellent speeds with them: some torrents approach 70-80MB/s on a download to the box using a gremlin box from them, and rates down to me average around 7.5MB/s on my cable internet connection (60Mbps), which can completely saturate my connection.

     

    Watchdir sync script:

     

    #!/bin/bash
    login="username"
    pass="password"
    host="hostname"
    remote_dir="/remote/torrent/watch/directory/"
    local_dir="/local/watchdir/repository/"

    trap "rm -f /tmp/syncwatch.lock" SIGINT SIGTERM
    if [ -e /tmp/syncwatch.lock ]
    then
      echo "Syncwatch is running already."
      exit 1
    else
      touch /tmp/syncwatch.lock
      lftp -p 22 -u "$login,$pass" sftp://$host << EOF
      set mirror:use-pget-n 5
      mirror -R --Remove-source-files --log=/var/log/syncwatch.log "$local_dir" "$remote_dir"
      quit
    EOF
      rm -f /tmp/syncwatch.lock
      trap - SIGINT SIGTERM
      exit 0
    fi

     

    I should also mention that I'm using Deluge on the seedboxes account.  I have my watchdir set up with subfolders (tv, movies, tv-sonarr, music, books, comics, etc.), and Deluge is set up to search each of those directories separately for new torrent files and tag each download with a label for the type of download it is.  I use the Deluge AutoAdd plugin to create the labels based on the directory the torrent is uploaded into, then use the label's move feature to move the completed files into a specific directory in the finished directory on the seedbox.  You could always have all torrents dumped into one big finished directory, but I'd advise against it, as you'll have Sonarr trying to process movies and music, Headphones tagging the directory as unprocessed, and other oddities.  If they are segregated into their own directories on the server and downloaded into those folders on your unraid box, each application can be mapped to its own download folder and you'll be happier with the results.

     

    *EDIT*  I just realized you don't necessarily need to add those files to unraid's addon install folder if you're using the Nerdtools plugin, with the exception of the cksfv program; the rest are included in that plugin.

     

    *2nd EDIT*  Forgot to put the DB initialization info here:

     

    sqlite3 dir.db  # creates the database and loads the sqlite prompt
    sqlite> create table rememberedFiles (filename varchar(255)); # don't forget to put a ; at the end of the line or the command will not execute
    sqlite> .quit  # takes you back to the linux prompt
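
     

    The same initialization can also be done non-interactively in one shot:

     

    sqlite3 dir.db "create table rememberedFiles (filename varchar(255));"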

  2. Noticed this in my log:

     

    Jul 23 04:50:47 media01 kernel: timekeeping watchdog: Marking clocksource 'tsc' as unstable, because the skew is too large:
    Jul 23 04:50:47 media01 kernel: 'hpet' wd_now: 6da0260f wd_last: 6d1ee276 mask: ffffffff
    Jul 23 04:50:47 media01 kernel: 'tsc' cs_now: 292d9f99dfeaf cs_last: 292d98b8a523c mask: ffffffffffffffff
    Jul 23 04:50:47 media01 kernel: Switched to clocksource hpet

     

  3. Still having issues adding trackers with your docker.  Any tracker that requires a login gives me a 'data not received' error when it attempts to test and add.  I was able to add the Sonarr info (I had the port changed to 9118 external in docker since I was attempting to run two instances, and that threw off the authentication to Sonarr).  The attempted additions aren't showing up in the log files, so I'm not sure which way to turn at this point.

     

    That being said, my previous install from the other thread still works fine.

     

  4. The current source they have on their git compiles to 0.4.3.1, so that's definitely the current version, just not the version they have pre-compiled and available.  I usually grab the source and compile it on one of my systems; guess he snuck an update in there over the past 2 days.  Do you happen to know offhand where your docker stores its config files?  I'd like to keep that stuff on the unraid filesystem.

     

    I just played around a bit with it, and I can't connect it to my Sonarr either; it gives me a 401 authentication failed.  I also get failures on all of the torrent sites that require a login.  Seems like it might be something in the distro setup that you're using for your docker, maybe?

     

    it's directly a build from git (my fork to add frenchtorrentdb). I'll check tonight sorry!

  5. Where did you get the release of 0.4.3.1?  That's the version it shows running, whereas I'm running the latest they show available (precompiled, anyway) at 0.4.3.0 on my install.

     

    I compiled from source on my windows box, and the current source does show 0.4.3.1.  Upgraded my install (from my original Jackett), and it still allows me to add Torrent Day.  Not sure why yours isn't working correctly.

     

    I also went so far as to replace the jackett install in your docker I have running, and the code I compiled won't allow it to be added on your docker install running my code.  Could be a dependency or something possibly?

     

  6. Used one of the Jackett dockers on the docker repository.  My docker commandline (change the paths to your own locations):

     

    docker run -d --name="Jackett" --net="bridge" -e TZ="America/New_York" -p 9117:9117/tcp -v "/mnt/docker-appdata/data/jackett":"/config":rw -v "/mnt/docker-appdata/data/jackett":"/root/.config/Jackett/":rw -v "/mnt/docker-appdata/data/jackett/app":"/app":rw ammmze/jackett

     

    I extracted the contents of the latest Jackett release zip file (from the above git) into the app/ directory; the extracted zip actually has a Release directory in it, with all the files under that, so move those files into the app/ directory (a rough sketch of the steps follows).  On my install I had to map the config directory to both /config and /root/.config/Jackett/.  Those aren't strictly necessary to run, but that way your config files are safe from a removal and reinstall (otherwise it stores the config files IN the container, not on your unraid filesystem).
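
     

    Roughly, the extraction step looks like this, assuming the host paths from the docker run line above (the zip filename is a placeholder; use whatever the actual release file is called):

     

    cd /mnt/docker-appdata/data/jackett
    unzip /tmp/Jackett.Binaries.zip -d /tmp/jackett
    # the zip contains a Release/ directory; its contents go straight into app/
    mv /tmp/jackett/Release/* app/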

     

    It's not unraid-specific, but I've managed to get it running OK, and it works with my Sonarr without having to run it on my windows boxen.

     

    Now if anybody wants to MAKE an Unraid docker for this, I'll gladly switch over, but this at least works for me at the moment.

     

  7. This is what I use:

     

    Flash:    http://www.newegg.com/Product/Product.aspx?Item=N82E16820226463

    USB Header:    http://www.newegg.com/Product/Product.aspx?Item=N82E16812200474

     

    That header is the same as the 2nd one listed above, just from Newegg.  The 16g Mushkin thumb drives are very small, decently fast, and work fine with Unraid's registration requirements.  I have 2 of them, one as a backup in case the one I use for booting ever fails (both are registered for Pro, so no worries there).  While 16gb is definitely overkill for the boot drive with Unraid, it means I don't worry much about downloading new releases through Dynamix; it'll be a while before I fill that thing up.

     

  8. I also restart mine rarely, and if I do it's usually either an unraid core update or a hardware update.  It's not been a huge deal for the array to not start, as I usually remember it.  As far as I can tell currently, the volume identification for that volume is ARC-1231-VOL, and that's it.  I'll check it when I restart for the RC6 update after parity finishes this evening (swapped 4 emptied 2tb's for a new 8tb last night, which leaves me with 3 2tb and 1 3tb left to move data from and remove).  I'm going to have a bunch of 2tb paperweights, lol.

     

  9. Ever since I set up a 4tbx2 (8tb) parity array on my Areca (so I can use my Seagate Archive 8tb drives as data drives), I've not been able to autostart the array.  It always comes up missing the parity drive (which is the Areca array ARC-1231-VOL), even though it's on the dropdown for you to select.  All the other drives on the controller have the correct naming (using the instructions at the start of the thread for that).  Hopefully this weekend I'll get all the drives moved over to the Areca (I still have 6 drives on the MV8, although 3 of those will be pulled after the data is moved to my latest 8tb).

     

    I will say the Areca card is FAST, getting 132MB/sec on a parity check at the moment..

     

    I've tried increasing the delay after issuing the udevadm trigger, up to 60s total, with the same result (I don't think the delay is the problem though, as even at 5s all the drives except parity are in their proper place in the array config).  The relevant go-file section is sketched below.
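
     

    For reference, that go-file section is along these lines, per the instructions at the start of the thread (the exact lines in my file may differ; this is just the shape of it):

     

    # re-trigger udev so the Areca volumes pick up their persistent names,
    # then pause so the parity volume has time to settle before emhttp starts
    udevadm trigger
    sleep 60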

     

    Any ideas?  It'd also be great if Limetech supported temperature reporting and spindown/spinup on these cards; not sure how much trouble that'd be though.

     

  10. Just noticed some really slow transitions & other weirdness in the Docker tab...

     

    The first restart after upgrading to rc5 hung for about 5 minutes or so after loading the fixes for Areca cards that keep the device names the same (I had a 20s delay in the go script).  emhttp never started, and the server was not reachable, nor were any dockers running.  Since the array wasn't mounted, I did a restart from putty (telnet was working), and everything came up as per normal.

     

    When I went into plex to make sure it was running, I noticed there was an update on the plex-pass track, so I went to edit my plex install with the current version number (I'm forcing upgrades using the version variable in the docker config).  It took about 25-30 seconds for the edit screen to come up, and once I submitted my changes, the screen that normally shows it downloading the changed bits for the docker came back reporting 0 bytes loaded, with Done at the bottom.  I clicked the log button and watched the log be populated by the removal and reinstallation of the docker (albeit very slowly).  This is all different behavior than I had in RC4 and previous versions.

     

  11. I got a subscription to Newznab Plus to get a newznab ID, entered it on install of this docker, and it makes a huge difference over the public regex source that's built in.  Anybody having issues may want to give it a go.

     

    Just go to Newznab.com and purchase Newznab Plus; they will give you an ID number.  Then change the docker variable regex_url to:

     

    http://www.newznab.com/getregex.php?newznabID=<YOURIDNUM>

     

    I also set my backfill for 90 days; not sure if it helps or not.  I'm able to actually search the indexer now and get results, and I'm using it with CouchPotato, Sonarr (NZBDrone), and several other downloaders.  You do still get quite a few failures on the decoding, but you're actually able to download what you do scrape...

     

  12. Same on my system, antivirus (but I'm using NOD32).  The only way to see the logs on my work system is to disable web protection.  I've tried adding both the machine name and the IP to the whitelist, but it still won't show.

     

    Do you have some add-on or plugin installed in your browser that is intended to block pop-ups?

     

    Finally found the culprit, it's my antivirus (F-Secure).

    No other option than to deactivate it completely to allow log windows to work

     

    Thanks all for the help and sorry as it seems to be unrelated to RC4

     

    Hope this can help others anyway

    To the first part of your question: it needs a better regex, and there are ones available.

     

    The second part I won't dignify with a response.

    I got it up and running on my system, and according to the stats it's finding stuff, but as for decoding the names, how would one add one of the better regex addons?

     

    Also, if you set it up initially with a 0 on the backfill variable, how would you go about changing it to, say, 30 days?  I'm guessing I'd need to wipe the install and re-install with that in the settings?

     

  14. I'm trying it from my work machine; I'll try it on my home machine and see if that changes anything.  It did pull up logs here prior to RC4 though, and I was able to pull up one docker log earlier (I went straight to the docker page after I started the array and brought up a log for duckdns, which showed, then tried pysab, but neither it nor any other logs show now).  Even running the command that the web log viewer uses to populate the screen on a command line hangs indefinitely.

     

    *EDIT*  My home machine will open the logs fine.  Never had any problem prior to RC4 on my work machine though. 

     

    My logs are working ... this happens to all your installed Dockers ?

     

    Yes, all Dockers (4), VMs (1), and the unraid log fail to show

     

    Logs are working, just the webgui fails to show them

     

    Have you tried a different machine?  I have had (since back in the Dynamix days on 5.X) one machine that refuses to show the system or docker logs in the pop up window (it just shows a blank white screen).  It doesn't matter what browser I use.  On other machines, it displays fine in all the browsers.  Perhaps this is related?

  15. Hadn't noticed the main unraid logs doing the same (white/blank window with 'waiting for server' at the bottom), but I can verify now that it's doing just that on any attempt to view logs from the webui.  I will say I was able to open ONE log file when I rebooted my system, but any subsequent attempt shows the white window.

     

    Has anyone else lost the ability to see Docker logs?  It opens the popup window, but never receives any data.  I tried running the command-line version and didn't get any output either (I've left the popup window up for an hour+ and it never filled with data).

     

    I'm experiencing the same issue here since the upgrade from RC3 to RC4.

     

    1. Dashboard view, opening the logs of a VM or Docker opens an empty window that never gets filled

    2. Dashboard view, click on unraid log opens an empty window as well

    3. Docker and VMs views, opening logs has the same result, a white window with no info

     

    Command line does work

    /usr/bin/tail -n 42 -f /var/log/syslog 2>&1

    /usr/bin/docker logs --tail=350 -f PlexMediaServer 2>&1

    ...

     

    Tested with Firefox 38 and IE 11

  16. Just an FYI, it seems to be working much better with my headphones installation.  I wonder if it was still working on setup when I was trying yesterday.

     

    Should I re-install, or leave it as-is?  I'm guessing it won't matter if I leave it as it is now, since they don't update their stuff very often anyway.