Everything posted by JustinAiken

  1. Hahaha wow, there goes 11GB of sickrage logs!! Thanks for the command/workaround!
  2. - Running all the containers listed in my sig (except for Crashplan)
     - Get docker disc space warnings
     - Install cAdvisor, not much help
     - No prunable images lying around
     - No files at all in any running container's /tmp dir
     - No large files found other than the JDK:

         $ find /var/lib/docker/btrfs -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
         /var/lib/docker/btrfs/subvolumes/2fc900c39351bb8332117d23522c747aad386dde51e5a84f0c732644a589dc40/usr/lib/jvm/java-8-oracle/jre/lib/amd64/libjfxwebkit.so: 55M
         /var/lib/docker/btrfs/subvolumes/2fc900c39351bb8332117d23522c747aad386dde51e5a84f0c732644a589dc40/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar: 63M
         /var/lib/docker/btrfs/subvolumes/76d5ba319f723928c3c589f2ffb858ff9f83e927a4e97a2e98f5ca36e6ac99a9/usr/lib/jvm/java-8-oracle/jre/lib/amd64/libjfxwebkit.so: 55M
         /var/lib/docker/btrfs/subvolumes/76d5ba319f723928c3c589f2ffb858ff9f83e927a4e97a2e98f5ca36e6ac99a9/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar: 63M
         /var/lib/docker/btrfs/subvolumes/88392dc0830b3019a7cc463efcc2e672b606ed78b8e634bfaafeef1117eeda28/usr/lib/jvm/java-8-oracle/jre/lib/amd64/libjfxwebkit.so: 55M
         /var/lib/docker/btrfs/subvolumes/88392dc0830b3019a7cc463efcc2e672b606ed78b8e634bfaafeef1117eeda28/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar: 63M
         /var/lib/docker/btrfs/subvolumes/3dcabb14efd0ef17360f36536bd95ebb281a5756025a6eff5b8a92bc6f7c25f5-init/usr/lib/jvm/java-8-oracle/jre/lib/amd64/libjfxwebkit.so: 55M
         /var/lib/docker/btrfs/subvolumes/3dcabb14efd0ef17360f36536bd95ebb281a5756025a6eff5b8a92bc6f7c25f5-init/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar: 63M
         /var/lib/docker/btrfs/subvolumes/3dcabb14efd0ef17360f36536bd95ebb281a5756025a6eff5b8a92bc6f7c25f5/usr/lib/jvm/java-8-oracle/jre/lib/amd64/libjfxwebkit.so: 55M
         /var/lib/docker/btrfs/subvolumes/3dcabb14efd0ef17360f36536bd95ebb281a5756025a6eff5b8a92bc6f7c25f5/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar: 63M

     - Stopped all dockers, scrubbed:

         scrub status for 68dfcede-36b4-42de-8f9c-50d73af3884a
                 scrub started at Sun Jan 10 15:57:21 2016 and finished after 00:11:24
                 total bytes scrubbed: 15.56GiB with 0 errors
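     A quick sketch of one more way to narrow this down, assuming the default /var/lib/docker/btrfs layout shown above: a plain du over the subvolumes directory shows which container/image layer is actually eating the space.

         # Sizes each subvolume, largest last, so the one that keeps growing stands out.
         # Note: du double-counts data shared between btrfs snapshots, but it's enough for spotting the culprit.
         du -sh /var/lib/docker/btrfs/subvolumes/* 2>/dev/null | sort -h | tail -20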
  3. Pretty open-ended; I'll gladly drop an extra $100 on the mobo or CPU if it means I don't have to worry about anything for years, but if it's $100 more just for performance that wouldn't be used, I'd rather put it towards more HDs.
  4. My current system has ALMOST enough power to handle everything I use unRAID for (no 6-player game streaming for me), but I find that enabling Crashplan tends to result in a crash after a couple of weeks. I'd like to move from the C2SEE/E5200 to something a bit more modern:
     - 2-4 cores
     - Supports >4GB of RAM
     - ECC or non-ECC, I can get new RAM if needed
     - Low power usage
     - Runs cool
     - Not Atom/Celeron slow
     - At least 4 SATA ports on the MB (although MOAR is better!)

     Any current recommendations?
  5. Hmm, I can't get it going at all - it's never actually worked. I can scan things in from my laptop or HTPC, but nothing ever scans in through the headless container. Things I've tried:
     - Copying over sources/mediasources.xml
     - Including or not including the user/pass in the request
     - Switching to the linuxserver.io container - same result
     - Setting `usefasthash` to false

     The database connection works, because I can get lists of shows... I just can't get it to add a new episode..
  6. I'm also having trouble with Isengard-Headless updating my mySQL library...
     - I have the docker running
     - I can access the Kodi WebGUI through the docker, and see my tvshows/etc
     - I can access the JSON-RPC, and get back valid responses
     - But if I try to update a source:

         {"jsonrpc":"2.0","method":"VideoLibrary.Scan","params":{"directory":"smb://192.168.1.8/Video/TV/Daily Show, The/"},"id":1}

       I get a successful response:

         {u'jsonrpc': u'2.0', u'id': 1, u'result': u'OK'}

       And see this in the xbmc log:

         19:56:11 T:47380474951424  NOTICE: VideoInfoScanner: Starting scan ..
         19:56:11 T:47380474951424  NOTICE: VideoInfoScanner: Finished scan. Scanning for video info took 00:00

       But no new episodes get scraped in, even though there are new episodes there.

     I also tried setting up a passwords.xml that adds my user/password for the samba share, or manually adding it into the RPC call, but no luck... Any ideas?
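     A minimal sketch of firing the same scan by hand from another machine, assuming the container's web port is mapped to 8099 with xbmc/xbmc credentials (as in the template settings listed further down); watching the response and the log while this runs helps separate RPC problems from scraper/path problems:

         # Host, port, credentials and directory are assumptions - match them to your kodi-headless container.
         curl -s -u xbmc:xbmc \
           -H 'Content-Type: application/json' \
           -d '{"jsonrpc":"2.0","method":"VideoLibrary.Scan","params":{"directory":"smb://192.168.1.8/Video/TV/"},"id":1}' \
           http://192.168.1.8:8099/jsonrpc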
  7. https://github.com/docker/dockercraft Definitely going to give this a try this weekend!
  8. Just used faster preclear on my first 8TB drive:

         root@Tower:/boot/preclear_reports# cat preclear_rpt_Z8408YNX_2015-11-17.txt
         ========================================================================1.15
         == invoked as: /boot/config/plugins/preclear.disk/preclear_disk.sh -M 3 -o 1 -c 1 -f -J /dev/sdb
         ==  ST8000AS0002-1NA17Z    Z8408YNX
         == Disk /dev/sdb has been successfully precleared
         == with a starting sector of 1
         == Ran 1 cycle
         ==
         == Using :Read block size = 1000448 Bytes
         == Last Cycle's Pre Read Time  : 19:43:27 (112 MB/s)
         == Last Cycle's Zeroing time   : 19:02:46 (116 MB/s)
         == Last Cycle's Post Read Time : 23:49:56 (93 MB/s)
         == Last Cycle's Total Time     : 62:37:15
         ==
         == Total Elapsed Time 62:37:15

     During the post read, I noticed it sat at about 200 MB/s for most of the first half, and then the speed dropped progressively lower as it made it through the last half...
  9. I have a little bash script I run on my Mac whenever I reset the unRAID box, so that I can use the Crashplan GUI to connect to the dockerized Crashplan (I don't use the desktop docker, just the core crashplan docker). It grabs the key from the .ui_info on the share passed through docker, and splices it into my local .ui_info:

         #!/bin/bash
         LOCAL_LOCATION=/Library/Application\ Support/CrashPlan/.ui_info
         REMOTE_LOCATION="/mnt/cache/apps/crashplan/id/.ui_info"
         REMOTE_HOST="tower"

         export ui_key=$(ssh $REMOTE_HOST cat $REMOTE_LOCATION | cut -d',' -f2)
         echo "Remote UI key = $ui_key"

         sed -i '' "s/,[-a-zA-Z0-9]*,/,$ui_key,/" "$LOCAL_LOCATION"

         echo "Now local .ui_info:"
         cat "$LOCAL_LOCATION"

     To use it:
     - Put the contents of that in a file
     - Edit the top variables to suit you
     - chmod +x the file
     - Move it somewhere in your PATH (like /usr/local/bin/)
  10. http://mirrors.slackware.com/slackware/slackware64-current/slackware64/ap/vim-7.4.692-x86_64-1.txz doesn't exist anymore... Tried vim-7.4.898-x86_64-1.txz instead, and it installs, but then upon running it I get:

          vim: error while loading shared libraries: libperl.so: cannot open shared object file: No such file or directory
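      A hedged guess at a fix: the missing libperl.so usually comes from Slackware's perl package (the "d" series on the same mirror), since the newer vim build appears to be compiled with Perl support. Something along these lines, with the exact filename taken from the mirror's directory listing:

          # <version> is a placeholder - check the d/ directory on the mirror for the current perl package name.
          wget http://mirrors.slackware.com/slackware/slackware64-current/slackware64/d/perl-<version>-x86_64-1.txz
          installpkg perl-<version>-x86_64-1.txz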
  11. Don't have an Areca card, but in case my output helps anyhow, here it is: https://gist.github.com/JustinAiken/0cd637e1c217a823c37c
      Thanks for the plug, it's made the last couple of drives I've precleared -so- much easier than remembering the commands/flags for both screen and the preclear script itself.
  12. So it's rebuilding a 2TB onto a 4TB drive... It's past 2TB now, so I'd imagine the rest of the drive is 0's... why are all drives (even 1.5TB and 2.0TB) still spun up and being used to rebuild?
  13. Thanks guys.. Rebuilding it now - so many spinning drives
  14. - I have this 2TB WD drive showing lots of pending/offline sectors - it seems likely that I should replace it?

          SMART Attributes Data Structure revision number: 16
          Vendor Specific SMART Attributes with Thresholds:
          ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
            1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
            3 Spin_Up_Time            0x0027   169   167   021    Pre-fail  Always       -       6550
            4 Start_Stop_Count        0x0032   092   092   000    Old_age   Always       -       8760
            5 Reallocated_Sector_Ct   0x0033   199   199   140    Pre-fail  Always       -       6
            7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
            9 Power_On_Hours          0x0032   040   040   000    Old_age   Always       -       44147
           10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
           11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
           12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       127
          192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       57
          193 Load_Cycle_Count        0x0032   178   178   000    Old_age   Always       -       67686
          194 Temperature_Celsius     0x0022   115   104   000    Old_age   Always       -       35
          196 Reallocated_Event_Count 0x0032   196   196   000    Old_age   Always       -       4
          197 Current_Pending_Sector  0x0032   197   196   000    Old_age   Always       -       1249
          198 Offline_Uncorrectable   0x0030   199   197   000    Old_age   Offline      -       629
          199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
          200 Multi_Zone_Error_Rate   0x0008   192   183   000    Old_age   Offline      -       2245

      - I have a 4TB hot spare that is precleared and ready - is it better to:
        - Stop the array, reassign the new disk into the slot, and let unRAID rebuild, or
        - Assign the drive as a new drive, copy the files over the old-fashioned way (cp -r or rsync, as in the sketch below), and then remove the old disk?
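      For the second option, a minimal sketch of the manual copy (the disk numbers are made up - substitute the real source and destination mounts):

          # Archive mode keeps permissions/timestamps; -P shows progress and lets an
          # interrupted copy be resumed by re-running the same command.
          rsync -avP /mnt/disk5/ /mnt/disk7/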
  15. I'll probably try my hand at setting up one myself if no one else does..
  16. https://gist.github.com/JustinAiken/93b4c6b47cb9fb3d54ab
  17. Best release yet! My initial thoughts - "Ugh, I have to redo ALL my apps/plugins just to stay up-to-date and get 64 bit, what a pain... this is going to take days..." 2 hours later... "Woah, UPS support is legit now... goodbye unMenu! The GUI looks like it's from this decade!! OMG I love Docker!" ... I think I'm going to go convert all my work stuff (Ruby on Rails) into Docker containers instead of Chef cookbooks now!
  18. Jumping into the midst of all this docker talk to interrupt with a more general question (figuring this is the biggest group of unRAID/Crashplan users out there) - is Crashplan with unRAID awesome or not? I'd really like to get my full NAS (currently about 30TB and growing) backed up, with these being my important things:
      - MUST be able to be sure I can get stuff back if needed... it would suck to upload 20TB, then have a massive local failure, then get an email saying "20TB is too many TB's, we cancelled your account..."
      - Client-side encryption - I'm sure the NSA has a few backdoors into Crashplan... I don't want them looking at my pictures!
      - Prioritized uploads - I want the full unRAID backed up, but of course I'd like family pictures uploaded before backups of blurays or other relatively common things
      - After everything's up, having stuff auto-uploaded as it's added would be nice

      Is Crashplan going to fit those needs?
  19. Ah, aha, it's a mySQL perms error:

          ERROR: Unable to open database: MyVideos90 [1045](Access denied for user 'xmbc'@'172.17.42.1' (using password: YES))

      Odd that my other Kodi clients are able to access it okay w/ the same user/pass... Bind is set to 0.0.0.0 (by your mariaDB docker I suppose) in my.cnf, and I've granted all privileges to the 'xbmc' user... Do you know which log inside the mariaDB container I should peek at here?
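      A hedged sketch of re-checking the grant from the MariaDB side: the error reports user 'xmbc' connecting from the Docker bridge address (172.17.42.1), while the grant was made for 'xbmc', so the grant has to cover both the username the container actually sends and that source host. The user/password/host values below are assumptions taken from the error and the template ENVs, not a confirmed fix:

          # Run against the MariaDB instance; 'xmbc'/'xmbc' mirrors what the Kodi container is configured to send.
          mysql -u root -p -e \
            "GRANT ALL PRIVILEGES ON \`MyVideos%\`.* TO 'xmbc'@'172.17.%' IDENTIFIED BY 'xmbc'; FLUSH PRIVILEGES;"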
  20. > do you have a library already scanned in from another kodi instance

      Yes

      > what are your ip settings for kodi in the docker template ?

      Kodi Docker Mappings: /opt/kodi-server/share/kodi/portable_data -> /mnt/cache/apps/kodi
      Kodi Docker Ports: 8080 -> 8099
      Kodi Docker ENVs:
        MYSQLip=192.168.1.8
        MYSQLport=3306
        MYSQLuser=xmbc
        MYSQLpass=xmbc

      > what settings are you using in sickrage to connect to kodi ?

      Kodi IP:port: 192.168.1.8:8099
      KODI username: xbmc
      KODI password: xbmc

      Hitting "Test KODI" in sickrage says success, it's just that the library doesn't update... I think Sickrage->Kodi is all good, it's just the Kodi->MySQL that's wonky..
  21. Hey sparklyballs, I'm trying to get Kodi-headless working (Helix, not Isengard). I have a shared mySQL database working for all my clients (HTPC, laptop, etc). Comparing the advancedsettings.xml my actual client uses to the one your docker generates, they look identical (at least in the database settings part)... But having Sickrage send the update command doesn't seem to update the library. I'm not sure where to go to debug - there's no Kodi log I can find other than the docker one, which just has:

          *** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
          *** Running /etc/my_init.d/firstrun.sh...
          creating datafiles
          creating advancedsettings.xml
          *** Running /etc/rc.local...
          *** Booting runit daemon...
          *** Runit started as PID 17
          Jun 28 16:41:36 422399d91fe5 syslog-ng[24]: syslog-ng starting up; version='3.5.3'
          Can't open display
          ...cron stuff...

      Going to the web interface, 'Movies' and 'TV Shows' both fail (the AJAX call has an internal server error) - not sure if that's expected or not. Is there a shared folder I should set up between the host/container in order to get the full Kodi log for further debugging?
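      On the log question, a hedged sketch: if the container's portable_data directory is mapped out to the host (the same container path used in the template mapping in the post above), the full kodi.log should end up on the host share. The image name and host path here are placeholders:

          # Reuse whatever volume mapping your template already defines.
          docker run -d --name kodi-headless \
            -v /mnt/cache/apps/kodi:/opt/kodi-server/share/kodi/portable_data \
            <kodi-headless-image>
          # then, on the unRAID host (log location may vary slightly by build):
          tail -f /mnt/cache/apps/kodi/temp/kodi.log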
  22. Running the script failed for me, I think because my router doesn't respond to pings:

          root@Tower:/boot# gateway=$(route -n | awk '/^0.0.0.0/ {print $2}')
          root@Tower:/boot# echo $gateway
          192.168.1.2
          ping -q -n -c 1 $gateway
          PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
          ... (20 seconds later) ...
          ^C
          --- 192.168.1.2 ping statistics ---
          1 packets transmitted, 0 received, 100% packet loss, time 0ms

      But I commented out the lines in the .plg that do that check, and then the installer finished okay:

          Update successful - PLEASE REBOOT YOUR SERVER
          success
          file /tmp/unRAIDServer.sh: successfully wrote INLINE file contents
          /bin/bash /tmp/unRAIDServer.sh
          ...
          success
          plugin successfully installed

      Going to reboot now, crossing fingers...

      EDIT - Rest was smooth, loving 6!
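      A hedged sketch of an alternative check the installer could use instead - confirm a default route exists without requiring the gateway to answer ICMP, since routers like mine drop pings (this reuses the same route/awk approach the .plg already has):

          gateway=$(route -n | awk '/^0.0.0.0/ {print $2}')
          if [ -n "$gateway" ]; then
            echo "Default gateway found: $gateway"
          else
            echo "No default route - check network config" >&2
            exit 1
          fi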
  23. $16 off if you use code MASTERPASS10 and pay with MasterCard. Great drive; I have two in my server now, and just ordered another.