Everything posted by heffe2001

  1. http://www.amazon.com/Amazon-SK705DI-Echo/dp/B00X4WHP5E It's pretty spendy, but it does have voice activation and does more than just music. If they'd cut the price in half, I'd snatch one up.. You mean I could have bought something and saved myself all the trouble.... I'm in the UK so I haven't looked at the Echo as we can't get 'em
  2. That might work; it's more the description of the main Docker section that gets me. It's worded like you only need to go in there if you have trouble starting the Docker subsystem. Not just you... I'm still getting used to the changes. At first I thought that the repositories thread had been deleted, and had to spend some time looking for it. What if I renamed it to be like this:
Docker
 | ---- Containers and Repositories
Virtualization
 | ---- VM Templates
Is that better? Maybe a slight tweak to the description? Thoughts?
  3. So kind of a home-built Amazon Echo? Thanks for telling me about that. I have had a Pi for a while now just sitting around with nothing to do. This might just inspire me to finally use it. Cool, the idea came to me watching the wife lug a radio around the house with her to listen to the radio, or her laptop if she wanted to listen to music. Figured I could do something better, as it supports Internet radio and all the BBC channels that we like in the UK (we're both die-hard Radio 4 listeners). A good thing about piCorePlayer is that it loads to RAM, like Unraid, so it doesn't suffer from SD card corruption if the power is pulled.. We're using a free app called Squeezer on Android to control it from our mobiles..
  4. Is it just me, or does it look odd that the Docker Containers child forum sits under the Docker Engine forum? Or maybe it's just the description of the Docker Engine forum; it's worded like that forum is only for issues with enabling the Docker service. Maybe it just needs a forum description change?
  5. I'd love to see this as well; none of my drives currently show a temperature or correct spin-up/down.
  6. Quick question (hopefully): I see it mentions in the script the use of sqlitedb... What do I need to install on unRAID to make that work? Do I need to create the database table(s)? Thanks, H.
I believe that sqlite is pre-installed on the system; I didn't catch this earlier. You need to run this to create and initialize the database:
sqlite3 dir.db                                                  #creates the database and loads the sqlite prompt
sqlite> create table rememberedFiles (filename varchar(255));   #use the same table/field names the script expects; don't forget the ; at the end of the line or the command will not execute
sqlite> .quit                                                   #takes you back to the linux prompt
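For reference, this is roughly how the script then uses that table at run time (a trimmed-down sketch of the logic in the full download script in item 10 below; the item name is just an example):
#!/bin/bash
sql="dir.db"; table="rememberedFiles"; field="filename"
show="Some.Example.Download"                                        # example item name
exists=$( sqlite3 "$sql" "select count(*) from $table where $field=\"$show\"" )
if (( exists > 0 )); then
    echo "Already downloaded: $show"                                # skip it
else
    echo "New item: $show"                                          # download it with lftp, then remember it
    sqlite3 "$sql" "insert into $table ($field) values ('$show');"
fi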
  7. I use both here; whichever one Sonarr finds the first copy on is where it pulls from. The script I have set up unrars the shows in the TV section at the end, and I've not had an issue with Sonarr picking them up before they've finished extracting. It seems like most stuff I really want fast tends to hit my private BT trackers before it hits usenet (I'm usually watching Walking Dead from BT before it's even up on nzb's). I just have to fire off the torrent syncing script by hand in those instances, otherwise they only get pulled once a day (for most shows that's fine, but some stuff I want faster).
  8. Heffe thank you for sharing this.... it will take me a while to absorb and understand what you are doing but I AM going to try it. I just signed up for PulsedMedia and it is horrible. They disable AutoTools so it's basically useless in terms of automating anything with CP, Sonarr, etc. It took me about 4 weeks from when I found the original script to where I had it working as I needed it to. There were also others in the thread who gave help with the database and all; I just adapted it to my Unraid box's configuration. Basically, you need to set a point on your Unraid box (most likely the cache drive) where you can use sshfs to mount your seedbox's finished-downloads directory so that the routine in the script can parse it for files/directories, which it then downloads one by one, marking each one in the database once finished so that it doesn't attempt to redownload it (the overall flow is sketched just below). The original script as supplied would happily download the same stuff over and over when it was executed IF the original downloaded files on your unraid box had been moved or deleted; the database addition solved that. I then had to add some logic to determine whether the parsed download is a file or a directory, as lftp is called with the mirror-type download in the original script, which won't work with single files that aren't in directories. If you need any help trying to figure out my spaghetti code, just ask, lol.
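In outline, each run works roughly like this (a simplified sketch with example paths and credentials; the complete script is in item 10 below):
#!/bin/bash
PASS="password"                                                   # example credentials and paths throughout
# 1. Mount the seedbox's finished-downloads directory locally via sshfs
echo "$PASS" | sshfs user@seedbox.example:/home/user/finished /mnt/seedbox -o password_stdin
# 2. Walk everything the seedbox has finished
for item in /mnt/seedbox/*; do
    name=$(basename "$item")
    # 3. Skip anything already recorded in the sqlite database
    count=$( sqlite3 dir.db "select count(*) from rememberedFiles where filename='$name'" )
    [ "$count" -gt 0 ] && continue
    # 4. Pull it down over sftp: mirror for directories, get for single files
    if [ -d "$item" ]; then
        lftp -u user,"$PASS" sftp://seedbox.example -e "mirror -c \"/home/user/finished/$name\" \"/mnt/user/Downloads/$name/\"; quit"
    else
        lftp -u user,"$PASS" sftp://seedbox.example -e "get \"/home/user/finished/$name\" -o \"/mnt/user/Downloads/$name\"; quit"
    fi
    # 5. Record it so it is never downloaded again
    sqlite3 dir.db "insert into rememberedFiles (filename) values ('$name');"
done
umount /mnt/seedbox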
  9. Think you could add cksfv to the tools pack? http://zakalwe.fi/~shd/foss/cksfv/
  10. I highly modified a script from torrent-invites for using LFTP and a seedboxes.cc account for my torrent downloads. You'll need LFTP installed on your base unraid install, as well as sshfs-fuse. The original script as supplied would just mirror folders in a specific destination directory on the seedbox with LFTP, but wouldn't download single files (IE: a download without a parent directory), didn't do sftp, and also wasn't multi-segmented. Subsequent releases on that site added the database function, but it was Windows-specific, so that had to be modified to work with linux/unraid. I also added the capability of determining whether a download is a directory (and use mirror) or a file (and use get), so that it will handle either without problems. After the files are downloaded, I also call an unrarall script that extracts the rar files, removes the extras (nfo's, samples, and the parent rars), and leaves the files in such a way that Sonarr and CouchPotato can work with them.

The Script:

#!/bin/bash   # bash is needed for the array and (( )) syntax below
login="username"
pass="password"
host="server.hostname"
sshfshome="/path/to/home/for/sshfs"
sshfsmnt="/path/to/local/seedbox/mount"
sqldir="/folder/for/sqlitedb/"  # I'd suggest you put the script and the sqlite file in the same folder for simplicity, but this allows you to separate the files
sql="dir.db"                    # name of the sqlite file
table="rememberedFiles"         # name of the sqlite table
field="filename"                # name of the sqlite field

for i in 1 2 3   # Each of these is for a specific folder on my seedbox, matched to the torrent labels set up in Deluge. If you need more, add the numbers here and copy one of the sections below, making changes as you need.
do
  if [ $i -eq 1 ]
  then
    remote_dir="/example/torrents/finished/Downloads/Comics/"   # the absolute path on my seedbox's ftp client
    remotedir="/mnt/seedbox/Comics/"                            # the local path where I have the seedbox mounted with sshfs below, required for the database to work correctly
    local_dir="/mnt/user/Downloads/Deluge/Downloads/Comics/"    # the local path where I actually download my files to
  elif [ $i -eq 2 ]
  then
    remote_dir="/example/torrents/finished/Downloads/Movies/"
    remotedir="/mnt/seedbox/Movies/"
    local_dir="/mnt/user/Downloads/Deluge/Downloads/Movies/"
  else
    remote_dir="/example/torrents/finished/Downloads/TV/"
    remotedir="/mnt/seedbox/TV/"
    local_dir="/mnt/user/Downloads/Deluge/Downloads/TV/"
  fi

  if [ -d $remotedir ]
  then
    tick=$remotedir
  else
    #umount $sshfsmnt
    echo $pass | sshfs $login@$host:$sshfshome $sshfsmnt -o workaround=rename -o password_stdin
  fi

  cd $remotedir
  Progs=( * )                        # creates an array of your directories
  for show in "${Progs[@]%*/}"; do   # creates single variables from the array
    cd $sqldir
    exists=$( sqlite3 $sql "select count(*) from $table where $field=\"${show}\"" )
    if (( exists > 0 )); then        # these two lines test if your directory is already in the sqlite database
      tick=$show
      # echo "Show already downloaded $show"
    else
      trap "rm -f /tmp/synctorrent.lock" SIGINT SIGTERM
      if [ -e /tmp/synctorrent.lock ]
      then
        echo "Already Running"
        exit
      else
        touch /tmp/synctorrent.lock
        if [ -d "$remotedir/${show}" ]   # I needed 2 separate lftp calls: one for mirroring full directories, and one to handle single files, since some uploaders don't use directories
        then
          echo "$remote_dir/${show}/ is directory"   # here for debugging, to see whether it's mirroring a directory or getting a file; can be safely commented out
          lftp -p 22 -u $login,$pass sftp://$host << EOF   #this is an SFTP script - hopefully secure
set mirror:use-pget-n 7
mirror -c -P5 --log=/var/log/synctorrentssl.log "$remote_dir/${show}/" "$local_dir/${show}/"
quit
EOF
        else
          echo "$remote_dir/${show} is file"         # for debugging, can be commented out as well
          lftp -p 22 -u $login,$pass sftp://$host << EOF   #this is an SFTP script - hopefully secure
get "$remote_dir/${show}" -o "$local_dir/${show}"
quit
EOF
        fi
      fi
      sqlite3 $sql "insert into rememberedFiles (filename) values ('${show}');"   # once the download completes, the name is added to the database, ensuring the script can't download it again
      rm -f /tmp/synctorrent.lock
      trap - SIGINT SIGTERM
    fi
  done
done

umount $sshfsmnt

# Below this line I've added the unrarall script to test md5's (if available), extract, and clean up the samples, nfo's, etc.
/boot/custom/unrarall --clean=all /mnt/user/Downloads/Deluge/Downloads/TV
/boot/custom/unrarall --clean=all /mnt/user/Downloads/Deluge/Downloads/Movies
exit 0

My notes from the thread on the other site: aside from the script above, you'll need cksfv, p7zip, unrar, lftp and sshfs-fuse. These are the versions of (and links to) the packages I used. This is all done on the base unraid install, not in a docker. I placed all these files in the addons directory that unraid installs automatically on boot.
cksfv: http://slackonly.com/pub/packages/14.1-x86_64/misc/cksfv/cksfv-1.3.14-x86_64-1_slack.txz
p7zip: http://taper.alienbase.nl/mirrors/people/alien/slackbuilds/p7zip/pkg64/13.37/p7zip-9.20.1-x86_64-1alien.tgz
unrar: http://taper.alienbase.nl/mirrors/people/alien/slackbuilds/unrar/pkg64/14.0/unrar-4.2.4-x86_64-1alien.tgz
lftp: http://ftp.slackware.com/pub/slackware/slackware64-14.1/slackware64/n/lftp-4.4.9-x86_64-1.txz
sshfs-fuse: http://taper.alienbase.nl/mirrors/people/alien/slackbuilds/sshfs-fuse/pkg64/14.0/sshfs-fuse-2.5-x86_64-1alien.tgz
Since for torrents I use the blackhole directories in most of my autodownloaders, I also have a script that sends them up hourly (I'm getting ready to change my scheduling to sync the watchdir every 5 minutes and download the resultant torrents every 15-30 minutes), and pulls down the torrents every day at 5am (unless I fire it off manually, which is usually what happens, lol); a sample cron layout is sketched at the end of this post. If you have any questions, feel free to ask, but if it's about a seedbox host other than seedboxes.cc, I'm not sure how much help I can be. I've gotten excellent speeds with them: I've had some torrents approach 70-80M/s on a download to the box using a gremlin box from them, and rates down to me average around 7.5M/s on my cable internet connection (60Mbps), which can completely saturate my connection.

Watchdir sync script:

#!/bin/bash
login="username"
pass="password"
host="hostname"
remote_dir="/remote/torrent/watch/directory/"
local_dir="/local/watchdir/repository/"

trap "rm -f /tmp/syncwatch.lock" SIGINT SIGTERM
if [ -e /tmp/syncwatch.lock ]
then
  echo "Syncwatch is running already."
  exit 1
else
  touch /tmp/syncwatch.lock
  lftp -p 22 -u $login,$pass sftp://$host << EOF
set mirror:use-pget-n 5
mirror -R --Remove-source-files --log=/var/log/syncwatch.log $local_dir $remote_dir
quit
EOF
  rm -f /tmp/syncwatch.lock
  trap - SIGINT SIGTERM
  exit 0
fi

I should also mention that I'm using Deluge on the seedboxes account. I also have my watchdir set up with subfolders under it (tv, movies, tv-sonarr, music, books, comics, etc), and have Deluge set up to search each of those directories separately for new torrent files and tag the download with a label for the type of download it is.
I use the Deluge autoadd plugin to create the labels based on the directory the torrent is uploaded into, then use the label's move feature to move the completed files into a specific directory under the finished directory on the seedbox. You could always have all torrents dumped into one big finished directory, but I'd advise against it, as you'll have Sonarr trying to process movies and music, Headphones tagging the directory as unprocessed, and other oddities. If they are segregated into their own directories on the server and downloaded into those folders on your unraid box, each application can be mapped to its own download folder and you'll be happier with the results.
*EDIT* I just realized you don't necessarily need to add those files to unraid's addon install folder if you're using the Nerdtools plugin, with the exception of the cksfv program, as they are included in that plugin.
*2nd EDIT* Forgot to put the DB initialization info here:
sqlite3 dir.db                                                  #creates the database and loads the sqlite prompt
sqlite> create table rememberedFiles (filename varchar(255));   #use the same table/field names the script expects; don't forget the ; at the end of the line or the command will not execute
sqlite> .quit                                                   #takes you back to the linux prompt
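For scheduling, crontab entries along these lines would give the timing described above (a sketch only; the script names and locations under /boot/custom/ are just examples of where the two scripts might be saved):
# m  h    dom mon dow   command
0    *    *   *   *     /boot/custom/syncwatch.sh      # push new .torrent files from the blackhole dirs up to the seedbox, hourly
0    5    *   *   *     /boot/custom/synctorrent.sh    # pull finished downloads back down, daily at 5am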
  11. Noticed this in my log:
Jul 23 04:50:47 media01 kernel: timekeeping watchdog: Marking clocksource 'tsc' as unstable, because the skew is too large:
Jul 23 04:50:47 media01 kernel: 'hpet' wd_now: 6da0260f wd_last: 6d1ee276 mask: ffffffff
Jul 23 04:50:47 media01 kernel: 'tsc' cs_now: 292d9f99dfeaf cs_last: 292d98b8a523c mask: ffffffffffffffff
Jul 23 04:50:47 media01 kernel: Switched to clocksource hpet
  12. Still having issues adding trackers with your docker. Any that require a login give me a "data not received" error when it attempts to test and add. I was able to add the Sonarr info (I had the port changed to 9118 external in docker since I was attempting to run 2, and that threw off the authentication to Sonarr). The attempted additions aren't showing up in the log files, so I'm not sure which way to turn at this point. That being said, my previous install from the other thread still works fine.
  13. The current source they have on their git compiles to 0.4.3.1, so that's definitely the current version, just not the current version they have pre-compiled and available. I usually grab the source and compile it on one of my systems; guess he snuck an update in there over the past 2 days.. Do you happen to know offhand where your docker stores its config files? I'd like to keep that stuff on the unraid filesystem. I just played around a bit with it, and can't connect it to my Sonarr either; it gives me a 401 authentication failed. I also get failures on all of the torrent sites that require login. Seems like it might be something in the distro setup that you're using for your docker, maybe?? It's directly a build from git (my fork to add frenchtorrentdb). I'll check tonight, sorry!
  14. Where did you get the release of 0.4.3.1? That's the version it shows running, whereas I'm running the latest they show available (precompiled, anyway) at 0.4.3.0 on my install. I compiled from source on my windows box, and the current source does show 0.4.3.1. Upgraded my install (from my original Jackett), and it still allows me to add Torrent Day, so I'm not sure why yours isn't working correctly. I also went so far as to replace the Jackett install in your docker that I have running with the code I compiled, and even running my code it still won't allow the tracker to be added. Could be a dependency or something, possibly?
  15. Used one of the Jackett dockers on the docker repository. My docker command line (change the paths to your own locations):
docker run -d --name="Jacket" --net="bridge" -e TZ="America/New_York" -p 9117:9117/tcp -v "/mnt/docker-appdata/data/jackett":"/config":rw -v "/mnt/docker-appdata/data/jackett":"/root/.config/Jackett/":rw -v "/mnt/docker-appdata/data/jackett/app":"/app":rw ammmze/jackett
I extracted the contents of the latest Jackett release zip file (from the git above) into the app/ directory (the extracted zip actually has a Release directory in it, with all the files under that; move those files into the app/ directory). On my install I had to map the config directory to both /config and /root/.config/Jackett/. Those aren't strictly necessary to run, but that way your config files are safe from a removal and reinstall (otherwise it stores the config files IN the container, not on your unraid filesystem). It's not unraid-specific, but I've managed to get it running OK, and it works with my Sonarr without having to run it on my windows boxen.. Now if anybody wants to MAKE an Unraid docker for this, I'll gladly switch over, but this at least works for me at the moment.
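A rough sketch of the "drop a new release into app/" step described above, assuming you've already downloaded a release zip to /tmp/jackett-release.zip (a made-up filename; adjust the paths to your own mappings):
unzip /tmp/jackett-release.zip -d /tmp/jackett-release           # the zip unpacks with a Release/ folder inside
cp -r /tmp/jackett-release/Release/* /mnt/docker-appdata/data/jackett/app/
docker restart Jacket                                            # container name from the run command above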
  16. Anybody know if the ARC-1680IX-24-2G cards work in Unraid? Getting one this weekend.
  17. Dropped the # and changed it to a _ without any issues at all. Restarted after the change, assigned it, now the system finds the drive every time. One less thing to worry about lol.
  18. This is what I use:
Flash: http://www.newegg.com/Product/Product.aspx?Item=N82E16820226463
USB Header: http://www.newegg.com/Product/Product.aspx?Item=N82E16812200474
That header is the same as the 2nd one listed above, just from Newegg. The 16g Mushkin thumb drives are very small, decently fast, and work fine with Unraid's registration requirements. I have 2 of them, one as a backup in case the one I use for booting ever fails (both are registered for Pro, so no worries there). While 16gb is definitely overkill for the boot drive with Unraid, I don't worry too much about downloading new releases through Dynamix; it'll be a while before I fill that thing up..
  19. Going to remove the 2tb drives tomorrow, will rename the raidset at that time, and regen parity. Thanks for the tip!
  20. You're actually correct, the drive only shows ARC-1231-VOL when added; there's a # and a long number string after it (guess it treats that like a serial number). Wonder if there's any way to fix that. Example:
  21. I also restart mine rarely, and if I do, it's usually for either an unraid core update or a hardware update. It's not been a huge deal for the array to not start, as I usually remember it. As far as I can tell, the volume identification for that volume is currently ARC-1231-VOL, and that's it. I'll check it when I restart for the RC6 update after parity finishes this evening (replaced 4 emptied 2tb's with a new 8tb last night, which leaves me with 3 2tb and 1 3tb left to move data from and remove). I'm going to have a bunch of 2tb paperweights, lol.
  22. Ever since I set up a 4tbx2 (8tb) parity array on my Areca (so I can use my Seagate Archive 8tb drives as data drives), I've not been able to autostart the array. It always comes up missing the parity drive (which is the Areca array ARC-1231-VOL), even though it's in the dropdown for you to select. All the other drives on the controller have the correct naming (using the instructions at the start of the thread for that). Hopefully this weekend I'll get all the drives moved over to the Areca (I still have 6 drives on the MV8, although 3 of those will be pulled after the data is moved to my latest 8tb). I will say the Areca card is FAST, getting 132MB/sec on a parity check at the moment.. I've tried increasing the delay after issuing the udevadm trigger in the go script (see the sketch below), up to 60s total, with the same result; I don't think this is the problem though, as even at 5s all the drives except parity are in their proper place in the array config. Any ideas? It'd also be great if Limetech supported temp and spindown/spinup on these cards, not sure how much trouble that'd be though.
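For anyone following along, the go-file change referred to above looks roughly like this (a sketch based on the instructions at the start of the thread, not an exact copy; the 60-second sleep is simply the largest delay tried, and the emhttp line is the stock one):
#!/bin/bash
# /boot/config/go (excerpt)
# re-trigger udev so the Areca volumes pick up their assigned names before the array starts
udevadm trigger
sleep 60            # delay between the trigger and starting the array; values from 5s up to 60s were tried
# then start the unRAID management interface as usual
/usr/local/sbin/emhttp &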
  23. It's amazing how many problems can be resolved by rebooting => not just in UnRAID, but in computers in general. The first thing I generally ask folks when they need help is if they've tried rebooting ... and a fair number of issues simply "go away".
  24. Just noticed some really slow transitions & other weirdness in the Docker tab... The first restart after upgrading to rc5 hung for about 5 minutes or so after loading the fixes for Areca cards that keep the same names (I had a 20s delay in the go script). emhttp never started, and the server was not reachable, nor were any dockers running. Since the array wasn't mounted, I did a restart from putty (telnet was working), and everything came up as per normal. When I went into Plex to make sure it was running, I noticed there was an update on the plex-pass track, so I went to edit my Plex install with the current version number (I'm forcing upgrades using the version variable in the docker config). It took about 25-30 seconds for the edit screen to come up, and once I submitted my changes, the screen that normally shows it downloading the changed bits for the docker came back saying 0 bytes loaded, with "done" at the bottom. I clicked the log button and watched the log be populated by the removal and reinstallation of the docker (albeit very slowly). This is all different behavior than I had in RC4 and previous versions..
  25. Switched over to your nzbmegasearch, seems to work fine.