ljm42

Administrators
  • Posts: 4379
  • Days Won: 27

Everything posted by ljm42

  1. The forums are filled with instructions to put the flash drive in a Windows computer and run chkdsk. But then I saw this post: http://lime-technology.com/forum/index.php?topic=34266.msg318661#msg318661 which explains how to run dosfsck from within unRAID. Which makes me wonder...
     1. Can we add a "checkdisk" option to the boot menu that loads a minimal environment, runs dosfsck on the USB drive, and then reboots?
     2. Or even better: when unRAID detects a problem with the USB drive during bootup, why not have it automatically unmount the drive, run dosfsck, and then remount the drive (or reboot if needed)?
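     For reference, a minimal sketch of the manual approach from that linked post (assuming the flash drive is /dev/sda1 and mounted at /boot; confirm your device with fdisk -l):

        umount /boot                 # the flash must not be mounted while it is checked
        dosfsck -a /dev/sda1         # -a automatically repairs whatever can be fixed safely
        mount /dev/sda1 /boot        # remount the flash when done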
  2. From another thread: Currently we are storing Plex data and other Docker settings in /mnt/user/appdata, which for most people is on their BTRFS-formatted cache drive with CoW (copy-on-write) enabled. What happens to this data as part of this change? Does CoW get disabled on the cache drive, or do we need to move the data to another disk with a different filesystem? Also: Are we still doing a 7a, or going straight to 8? Are we expecting this to come quickly, or is it more intense on your end? (I am getting the "No space left on device (28)" error and am wondering what to do to fix it.) BTW, I think it is fine to just recreate the docker containers in the new location. They don't contain any data and are easy to rebuild. It is a beta, after all.
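     (For reference, and purely as a sketch: on btrfs, CoW can be disabled per directory, though it only takes effect for files created afterwards.)

        chattr +C /mnt/cache/appdata     # disable copy-on-write for new files in appdata
        lsattr -d /mnt/cache/appdata     # verify: the 'C' attribute should now be listed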
  3. Great! I'm glad it is connecting now. Once you have it configured you can close the Crashplan app on your Windows box and exit putty too. It is pretty much a "set it and forget it" type of thing, unless you want to check up on how much progress it has made on the backup. (Depending on how much data you are backing up, it could take a few months to complete the initial backup.) Also, I see now that gfjardim is recommending an alternate way to connect in the first post. It looks like his method bypasses putty, which should be easier. I'm not sure why Crashplan doesn't recommend that method.
  4. I'd revert the changes you made and then follow the directions here: http://support.code42.com/CrashPlan/Latest/Configuring/Configuring_A_Headless_Client You basically set up an SSH tunnel from your Windows machine to unRAID, and the Crashplan GUI you installed on Windows then communicates over that tunnel to the Crashplan service running on unRAID.

     Quote: "Okay, I tried the above and am still having issues. I removed the Docker container and the config folder and reinstalled the image. I uninstalled on my Windows PC, rebooted, did a clean install, and followed the guide you posted. When I connect with PuTTY it asks for a username/password. I am assuming it wants my unRAID root account, since if I put in my Crashplan credentials I get access denied. When I launch the app on my Windows machine it only lists my Windows machine. I do see an entry for '42' in the main GUI page, which I thought might be my unRAID machine, but it says no files were selected for backup, and I can't browse it. When I try to browse I only see my Windows PC. When I installed my docker, for data I selected /mnt/user/Pictures as this is all I want to back up. Any idea on what I am missing still? Thanks"

     OK, once you connect via PuTTY it should just be a matter of setting the "service port" in your ui.properties file to 4200. That step in Crashplan's docs is kind of buried. Then when you start the Crashplan client on Windows it should see the folders on unRAID and NOT the folders on Windows. You won't be able to see both at the same time.
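     For anyone following along, a sketch of the two pieces involved, assuming CrashPlan's default ports (4243 for the headless service, 4200 for the local end of the tunnel; the key name and file location can vary by version):

        # In PuTTY: Connection > SSH > Tunnels, add a forwarded port:
        #   Source port: 4200    Destination: localhost:4243
        # Then in ui.properties on the Windows client, uncomment/set:
        servicePort=4200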
  5. I'd revert the changes you made and then follow the directions here: http://support.code42.com/CrashPlan/Latest/Configuring/Configuring_A_Headless_Client You basically set up an SSH tunnel from your Windows machine to unRAID, and the Crashplan GUI you installed on Windows then communicates over that tunnel to the Crashplan service running on unRAID.
  6. Thanks! I'm a little confused though: I put the file in /boot/packages/ and ran installpkg, but it threw a bunch of errors. I shut down, ran checkdisk on the flash (it didn't find anything), and rebooted. Now all is good.
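     (In case it helps anyone searching later, the install step was just the standard Slackware command; the package name here is illustrative:)

        installpkg /boot/packages/some-package.txz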
  7. Curious if anyone has used nsenter? It is discussed here: http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/ and in the latest phusion-baseimage readme: https://github.com/phusion/baseimage-docker#login_nsenter It provides shell access to any running docker container without having to deal with SSH keys or modify the docker images. There are two ways to install it on the unRAID host:
     1) This docker command will drop nsenter plus a wrapper script (docker-enter) into /usr/local/bin:

        docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter

     You'll need to re-run this command every time you reboot.
     2) Or you can install nsenter and phusion's wrapper script (docker-bash) as described here: https://github.com/phusion/baseimage-docker#docker_bash Again, you'll need to re-install after every reboot. Unless someone wants to make a plugin?
     Once it is installed you just type:

        docker-enter YOUR-CONTAINER-ID

     or:

        docker-bash YOUR-CONTAINER-ID

     e.g. docker-[enter|bash] PlexMediaServer

     Then you can poke around to look at what is running, check logs, etc. It is really helpful when developing a Dockerfile because you have much more visibility into why a command is failing. phusion officially supports nsenter in version 0.9.12 of their baseimage, but it seems to work for me in 0.9.11. I can't tell from the release notes: http://blog.phusion.nl/2014/07/17/baseimage-docker-0-9-12-released/ if there are changes in the latest that we need for nsenter.
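     Under the hood, the wrapper is roughly doing this (a sketch, using the Plex container as the example):

        # look up the PID of the container's init process...
        PID=$(docker inspect --format '{{.State.Pid}}' PlexMediaServer)
        # ...then join that process's namespaces and start a shell there
        nsenter --target $PID --mount --uts --ipc --net --pid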
  8. Hi pinion, Thanks for building this. I am working on adding ffmpeg, tivodecode and tdcat to the Dockerfile. Would you mind if I fork it and send a pull request?
  9. When looking at your movie library, choose the gear and then "Force Refresh" - that will refresh the whole library at once.
  10. oh right, config.pl is for plexWatch and config.php is for plexWatchWeb. Glad you got it working!
  11. Yes, in Plex I have "require authentication on local networks" checked, and for "List of networks that are allowed without auth" I have "127.0.0.1/255.255.255.255" (which I think is the default). In Docker, both PlexMediaServer and plexWatch are set to the "host" network type. My plexWatch/config.php is:

        require_once '/var/www/html/plexWatch/includes/functions.php';
        $plexWatch['dateFormat'] = 'm/d/Y';
        $plexWatch['timeFormat'] = 'g:i a';
        $plexWatch['pmsIp'] = 'localhost';
        $plexWatch['pmsHttpPort'] = '32400';
        $plexWatch['pmsHttpsPort'] = '32443';
        $plexWatch['https'] = 'no';
        $plexWatch['plexWatchDb'] = '/plexWatch/plexWatch.db';
        $plexWatch['myPlexUser'] = '';
        $plexWatch['myPlexPass'] = '';
        $plexWatch['myPlexAuthToken'] = '';
        $plexWatch['globalHistoryGrouping'] = 'no';
        $plexWatch['userHistoryGrouping'] = 'no';
        $plexWatch['chartsGrouping'] = 'no';

     Note that it works without a username/password since plexWatch is connecting to localhost. Hope it helps!
  12. I wasn't able to get the plex username and password to work either. But try changing the "network type" from "bridge" to "host"; that solved it for me. Here is some background: http://lime-technology.com/forum/index.php?topic=33965.msg316848#msg316848
  13. That's a good point; I've got mine set to "minimal": http://support.code42.com/CrashPlan/Latest/Backup/Backup_Settings#Advanced_Settings
  14. Quote: "Does it mean that if I have 30TB of data, it will take 30GB of RAM just for Crashplan?"

     That matches what the Crashplan help pages say: http://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Runs_Out_Of_Memory_And_Crashes What I do to try to get around this is create multiple backup sets. My theory is that if I can keep each backup set to around 1 TB I'll be OK. It seems to work; I've backed up 3.5 TB so far and haven't had any memory problems. But 30 TB is a different story... that would probably take a few years to back up. I'd use Crashplan for your critical data and then find some other kind of local backup for the less critical data.
  15. Take a look at Needo's plexWatch Docker: https://github.com/needo37/plexWatch It contains a cron job.
  16. Quote: "Are you sure? Can't it access plex through the external address?"

     Hmm... plexWatch does have a config file that allows you to specify an IP address and a user/pass, but I was never able to get it to work. For me, the only thing that worked was specifying "localhost", and that didn't work as bridge. I'm fine if you want to leave it as-is though, to see if other people are successful. I did have success with this on the command line:

        --net="container:PlexMediaServer"

     If you aren't tired of tweaking the script, that might be another good option to add. For now I'm fine running plexWatch as host.
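     (In full, that option looks something like the line below; the image name is illustrative, use whatever your plexWatch container is built from:)

        # joins the Plex container's network namespace, so 127.0.0.1 inside
        # plexWatch reaches the Plex server directly
        docker run -d --name plexWatch --net="container:PlexMediaServer" needo/plexwatch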
  17. I just noticed a small problem with the plexWatch template: https://github.com/gfjardim/dockers/blob/dockerMan/dockerMan/templates/plexWatch.xml plexWatch does not work in bridge mode; it has to be run in host mode because it needs to be able to access plex at 127.0.0.1.
  18. Here's a suggestion for a best practice... always add a .gitattributes file to the root of your project that contains these lines:

        # Auto detect text files and force unix-style line endings
        * text eol=lf

     That forces unix-style line endings on text files, which solves problems for people who use Git for Windows to check out a repository to an unRAID share. Without it, Git for Windows uses Windows-style line endings, which causes errors when running shell scripts in Docker.
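     A related note: files already committed with Windows line endings won't be rewritten by the attribute alone; with git 2.16 or newer you can renormalize them:

        git add --renormalize .                  # re-applies the eol rules to tracked files
        git commit -m "Normalize line endings"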
  19. Looks great! (we really need a "like" button)
  20. Great additions! Although I noticed the "my-" templates moved to the bottom of the list; would you mind moving those back to the top? Those are the ones we'll be using the most often. Thanks!
  21. Thanks for cross-posting! The other thread is too toxic to spend much time in. These details on unRAID 6 sound great.
  22. I love this plugin! How would you feel about adding a "cleanup images" link that tries to "docker rmi" (without -f) all the images with one click?
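     Something along these lines, roughly (a sketch of the idea, not the plugin's code):

        # try to remove every image; without -f, docker refuses to delete
        # images that are still in use by a container, which is the point
        docker rmi $(docker images -q)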
  23. Oh, sorry about that. I don't see any service.log files yet, but maybe they will appear once it gets past the "synchronizing" stage. I haven't had any memory problems yet, but that help page says you need 1GB of RAM for every 1TB of data being backed up, so I assume I'll need more. I don't know if it helps, but I'm using multiple backup sets in the hope that it will need less RAM.