Wob76


Everything posted by Wob76

  1. Hi, Thanks for the work on the plugin. It seems the beta has seen some improvements with seeking etc. since I tested a month or so ago. Just wondering if anyone has done any speed testing between rclone and acd_cli mount points. I am using encryption for both: rclone's built-in encryption, and encfs for acd_cli. I have uploaded similar content into both and tested via playback in Plex. Currently it takes about 8-9 sec for an episode to start via acd_cli and around 14-17 sec for the same files with rclone. rclone reduces the complexity of my setup (no need for the encfs mounts), and rclone seems to show changes soon after they are made, whereas I need to do a sync with acd_cli. At this stage I have not attempted any tweaking of settings, so this is fairly anecdotal evidence. Thanks, Wob
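A rough way to put numbers on the comparison above: time a large sequential read of the same file through each mount. This is only a sketch (the file paths are hypothetical, and reading the first 64 MB is just a crude proxy for Plex's time-to-start):

```shell
#!/bin/bash
# Time how long it takes to read the first 64 MB of a file through a mount;
# a crude proxy for "seconds until playback starts" in Plex.
read_test() {
    local file=$1 start end
    start=$(date +%s%N)
    dd if="$file" of=/dev/null bs=1M count=64 2>/dev/null
    end=$(date +%s%N)
    echo $(( (end - start) / 1000000 ))  # elapsed milliseconds
}

# Hypothetical paths: the same episode as seen through each mount.
echo "rclone:  $(read_test /mnt/user/cloud/rclone/episode.mkv) ms"
echo "acd_cli: $(read_test /mnt/user/cloud/media/episode.mkv) ms"
```

Repeating the test a few times matters, since both tools cache directory metadata and the second read can be much faster than the first.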
  2. Hey All, I just fired up the plugin and it works fine for me, thanks. You could consider submitting it to CA.
     Re the questions about using it as a mount point: you can do this, and pass it into a docker (if using the plugin), but I found rclone speeds to be insufficient at the moment. The current stable version also doesn't support seeking within a file, a pretty big limitation if you are thinking of hosting media files. There is mention of this issue on GitHub, and fixes in beta versions, but I found acd_cli much better/quicker as a mount tool.
     I am currently using acd_cli and encfs for a mount point that I am able to pass into my Plex container. I have been testing for about a month and it seems to work OK so far, though I have only tested limited content. acd_cli has some issues when running multiple instances, so if I want to do a sync with it I need to stop the mount. So I am planning on using rclone sync (without encryption) to upload my encfs mount of local data (a reverse-encrypted view of local content).
     My setup is just a script file run at startup. It would be nice to move it into a plugin, maybe with a config page for mount points, but I have put that in the too-hard basket at the moment. I might see if I can modify this rclone script to do it, but it wouldn't enable any user config for mount points. You can see this thread for my script (https://lime-technology.com/forum/index.php?topic=50516.0).
     I created a share called cloud; under that I have all my mount points:
     /mnt/user/cloud/ - the share itself (set as cache only)
     /mnt/cache/cloud/.acd - the acd_cli mount of my ACD drive (I point to a subfolder called Plex); the data is encrypted as it is stored in ACD
     /mnt/cache/cloud/acd - the encfs-decrypted view of the above mount
     /mnt/cache/cloud/.local - a read-only encrypted view of my local media (/mnt/user/Media/) used for syncing to ACD; this is mounted with the --reverse option in encfs, which means your CPU is only hit with the encryption/decryption penalty for local data when syncing into the cloud
     /mnt/user/cloud/media - my merged mount. It shows both local and ACD data in a single folder view, with local data taking precedence. This is the point I pass into my dockers.
     The only change since that post is that I have started using unionfs-fuse for my merged mount rather than overlayfs, as it enables me to have the local layer writable. I have only moved to it today, so I'll wait for it to prove stable before I update my post/script.
     Some notes/known issues: I was seeing unreliable behaviour with the acd_cli mount; I found an issue on GitHub that suggests running it in foreground mode using screen, which is what my script does. If for some reason you remount any of these mount points, you will need to restart any docker using them so that it will see the data again; not sure if this is just a docker limitation or something else. You will see this happening at the end of my script. I was never able to get any of this to work in a docker in a way that could work between dockers: there is a limitation with exporting fuse mounts, so I could mount something inside a docker, but I could not then get the data outside of that docker; the volume mapped to the host would just be empty for an ACD mount. That is why I moved to working on the unRAID host. Regards, wob
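The unionfs-fuse merged mount described above could look something like this. A sketch only: the paths are taken from the post, unionfs-fuse is assumed to be installed, and the branch order makes local data win on a name clash:

```shell
#!/bin/bash
# Merged view of local media plus the decrypted ACD mount. The local layer
# is writable (RW), the cloud layer read-only (RO); branches are searched in
# order, so the local copy takes precedence, as described in the post.
LOCAL=/mnt/user/Media             # writable local layer
CLOUD=/mnt/cache/cloud/acd        # read-only decrypted ACD view
MERGED=/mnt/user/cloud/media      # merged mount passed into dockers

BRANCHES="${LOCAL}=RW:${CLOUD}=RO"

# cow = copy-on-write: writes aimed at a RO branch land in the RW branch
if command -v unionfs-fuse >/dev/null 2>&1; then
    unionfs-fuse -o cow,allow_other "$BRANCHES" "$MERGED" \
        || echo "mount failed (needs the directories to exist and fuse access)"
else
    echo "unionfs-fuse not installed; would run: unionfs-fuse -o cow,allow_other $BRANCHES $MERGED"
fi
```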
  3. Hi, Yep, rerunning ./initial_config.sh fixed my issue. I made the (incorrect) assumption that it was only generating config files (which I already had). Does that mean we need to rerun it whenever the docker is updated? Thanks, Wob
  4. Yes, I have it working on Unraid 6.2.1 outside of docker. I have posted my current working script in this thread (https://lime-technology.com/forum/index.php?topic=50516.0). I have it loading via /boot/config/go. I have only had it working for a couple of weeks, so reliability is not yet really tested and there are some limitations (read the above thread). It would be nice if this were all in a plugin, but that is a little beyond my skill set, so the script will do me for now.
  5. Yep, that would do the trick. The solution I have is fine for the most part, but if I want to track the collections (including ACD-only files) with the likes of CP and SR then I need them to point to a mount that is RW. At the moment they will only see local content, as they need RW access for renaming and so point only at local data. It is not a big issue for SR, as it will just tag files that are moved to ACD as archived, but CP would no longer track titles that are moved to ACD. At the moment it is mostly theoretical; I only have a small amount of data in ACD for testing, and it appears to stream fine. I am also hesitant to have ACD as the only copy of something, so if I do want to remove something local I will be looking at a backup of some kind before I remove my local copy. At the moment, local space is not an issue. My plan is to maintain local data for at least a few years, then have older stuff pulled from ACD on the rare occasion that someone views it.
  6. I am using the --reverse option (the second encfs mount in my script); that mount is purely there for uploading to ACD, and yeah, then you are only hit with the encryption overhead during the upload process. I end up with the following mount points from my script:
     encfs on /mnt/cache/cloud/acd type fuse.encfs (rw,nosuid,nodev,allow_other,default_permissions) - the decrypted view of ACD
     encfs on /mnt/cache/cloud/.local type fuse.encfs (ro,nosuid,nodev,allow_other,default_permissions) - the encrypted view of my local media folder *the reverse mount*
     overlay on /mnt/user/cloud/media type overlay (ro,lowerdir=/mnt/user/Media/:/mnt/user/cloud/acd/) - the overlay mount, with local media being the top layer and ACD below it
     ACDFuse on /mnt/cache/cloud/.acd type fuse.ACDFuse (rw,nosuid,nodev,allow_other) - the encrypted ACD mount point
     I solved the issue with my script not working at startup. I enabled verbose output on the acd_cli sync command and it was looking in /.cache/acd_cli for the oauth file, so no HOME variable was being set. I could move the file to that location, but I want to be able to call acd_cli from the command line for uploading etc., so I just added the following to the start of my script: HOME=/root
     If anyone has been able to install unionfs-fuse on unRAID I would love to hear about it. Wob
  7. Thanks for the response. Are you running RancherOS on another box from your unRAID server, or in a VM? Regards, Wob
  8. Hi 2devnull, Can I ask how you went about installing unionfs? I couldn't find a nice Slackware package for it and didn't want to go installing make just for that task. I ended up using the overlayfs built into the kernel; it works fine for reading, but it doesn't seem to like fuse mounts as RW, so my merged mount is read only. I scripted my install with a basic startup script, but it doesn't seem to like mounting acd_cli during boot. I have already copied the oauth file in place, but still get:
     For the one-time authentication a browser (tab) will be opened at https://tensile-runway-92512.appspot.com/. Please accept the request and save the plaintext response data into a file called "oauth_data" in the directory "/.cache/acd_cli".
     Running the script after boot, it all works fine. I have tried adding some delays, but no luck yet. I am running acd_cli with -fg (foreground) using screen, as the mount was being a little buggy (timing out etc.) and this was a noted fix in the issues on GitHub. Here is a copy of my startup script for anyone interested.
     #!/usr/bin/bash
     LOGFILE=/boot/acd_cli/logs/cloudstore-$(date "+%Y%m%d").log
     echo CloudStore log $(date) $'\r'$'\r' >> $LOGFILE 2>&1
     echo "Starting Cloud Mounts" $'\r' >> $LOGFILE 2>&1

     #Copy oauth file to system
     mkdir -p /root/.cache/acd_cli/
     cp /boot/acd_cli/config/oauth_data /root/.cache/acd_cli/oauth_data &&

     #Install dependencies
     upgradepkg --install-new /boot/acd_cli/install/boost-1.59.0-x86_64-1.txz >> $LOGFILE 2>&1
     upgradepkg --install-new /boot/acd_cli/install/rlog-1.4-x86_64-1pw.txz >> $LOGFILE 2>&1
     upgradepkg --install-new /boot/acd_cli/install/slocate-3.1-x86_64-4.txz >> $LOGFILE 2>&1

     #Install encfs
     upgradepkg --install-new /boot/acd_cli/install/encfs-1.8.1-x86_64-1gv.txz >> $LOGFILE 2>&1

     #Install acd_cli
     pip3 install --upgrade git+https://github.com/yadayada/acd_cli.git >> $LOGFILE 2>&1

     #Sleep for 10s and then run an acd_cli sync
     sleep 10s && acdcli sync >> $LOGFILE 2>&1

     #Mount Amazon Cloud Drive (using screen)
     echo Mounting Amazon Cloud Drive >> $LOGFILE 2>&1
     screen -S acdcli -d -m /usr/bin/acd_cli -nl mount -fg -ao --uid 99 --gid 100 --modules="subdir,subdir=/Plex" /mnt/cache/cloud/.acd >> $LOGFILE 2>&1

     #Mount decrypted view of ACD
     echo Mounting ENCFS points >> $LOGFILE 2>&1
     echo <password> | ENCFS6_CONFIG='/boot/acd_cli/config/encfs.xml' encfs -S -o ro -o allow_other -o uid=99 -o gid=100 /mnt/cache/cloud/.acd/ /mnt/cache/cloud/acd/ >> $LOGFILE 2>&1

     #Mount encrypted view of local media (used for uploading data to ACD)
     echo <password> | ENCFS6_CONFIG='/boot/acd_cli/config/encfs.xml' encfs -S --reverse -o ro -o allow_other -o uid=99 -o gid=100 /mnt/user/Media/ /mnt/cache/cloud/.local/ >> $LOGFILE 2>&1

     #Overlay mount with local data taking preference (read only)
     echo Mounting Overlay point >> $LOGFILE 2>&1
     mount -t overlay -o lowerdir=/mnt/user/Media/:/mnt/user/cloud/acd/ overlay /mnt/user/cloud/media/ >> $LOGFILE 2>&1

     #Restart the plex docker (so it can see data in the mount point)
     docker restart plex

     Cheers, Wob
  9. Hi, This broke for me on the last update; as with the previous user's post, the plexreport executable seems to be messed up. Running
     docker exec plexReport plexreport -n -d
     I get:
     exec: "plexreport": executable file not found in $PATH
     If I jump into the container and try this:
     root@TheBox:/# whereis plexreport
     plexreport: /opt/plexReport/bin/plexreport
     root@TheBox:/# /opt/plexReport/bin/plexreport
     /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- bundler/setup (LoadError)
         from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
         from /opt/plexReport/bin/plexreport:3:in `<main>'
     I deleted my container and re-downloaded it, same result. Cheers, Wob
  10. Hi Guys, Not sure where to post this, so I'll start here. I am running the LS versions of both CouchPotato and Jackett. As CP seems to have slowed down its updates, I am relying more on Jackett as torrent providers break. My issue is that it now appears to be returning a bunch of false positives; the year is being totally ignored for most results. I can see that CP is passing the title (without year) but with the IMDb ID to Jackett. Jackett is completing the search but only using the title, not the IMDb ID, and the result is I see a bunch of results for the wrong movie. Not sure which app to look at to resolve the issue; has anyone else seen this? Is there an easy way to move my dockers to use the development branch of these apps? Thanks, Wob
  11. User error... The :z option had nothing to do with it. I had my mount point as /mnt/user/cloud/media/, but I had my volume on the docker mapped to /mnt/cache/cloud/. Changed the docker to point to /mnt/user/cloud and it is working. Interesting note: my encfs and acd_cli mounts all point to /mnt/cache but show up fine in the docker; I put this down to them being fuse mounts. Mounts in the docker now look like this:
      shfs on /cloud type fuse.shfs (rw,nosuid,nodev,noatime,user_id=0,group_id=0,allow_other)
      overlay on /cloud/media type overlay (ro,relatime,lowerdir=/mnt/user/Media/:/mnt/user/cloud/acd/)
  12. OK, adding the :z option to the end of my volume mapping appears to resolve my issue. It is something to do with SELinux permissions; I don't fully understand it, but hey, it worked. Relevant link (http://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux/). My next question: is it possible to add the z or Z option to a volume mapping via the GUI? Thanks, Wob. Update: further reading and testing suggests SELinux is not part of unRAID, so does anyone know why a :z option on the volume mapping works here?
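For reference, a sketch of what the linked article describes (the container name and image here are hypothetical). On SELinux systems, :z relabels the volume content as shared between containers and :Z as private to one container; on a host without SELinux the suffix is accepted by the docker CLI but triggers no relabelling:

```shell
#!/bin/bash
# Build the volume argument with the :z (SELinux shared relabel) suffix.
HOST_DIR=/mnt/user/cloud
CTR_DIR=/cloud
VOLUME_ARG="${HOST_DIR}:${CTR_DIR}:z"

# Printed rather than executed, since it needs a running docker host.
echo docker run -d --name plex -v "$VOLUME_ARG" linuxserver/plex
```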
  13. Oh and permissions on the media folder as viewed from unraid. drwxrwxrwx 1 nobody users 4 Oct 5 03:41 media/
  14. Thanks for the reply Squid, I did try that after I found a post of yours about volume mapping, but it didn't help. This is what my mounts look like in unRAID:
      encfs on /mnt/cache/cloud/.local type fuse.encfs (rw,nosuid,nodev,allow_other,default_permissions)
      ACDFuse on /mnt/cache/cloud/.acd type fuse.ACDFuse (rw,nosuid,nodev,allow_other)
      encfs on /mnt/cache/cloud/acd type fuse.encfs (ro,nosuid,nodev,allow_other,default_permissions)
      overlay on /mnt/user/cloud/media type overlay (ro,lowerdir=/mnt/user/Media/:/mnt/user/cloud/acd/)
      This is what I get inside the docker:
      /dev/sde1 on /cloud type btrfs (rw,noatime,nodiratime,ssd,space_cache,subvolid=5,subvol=/cloud)
      encfs on /cloud/.local type fuse.encfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
      ACDFuse on /cloud/.acd type fuse.ACDFuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
      encfs on /cloud/acd type fuse.encfs (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
      For the actual docker settings I am just mapping /mnt/user/cloud to /cloud/, and I have tried it set as RW and RW/Slave. If I look at the available filesystems with cat /proc/filesystems, both inside and outside the container I see overlay. Any other ideas?
  15. Can anyone help as to why my overlay mount will not pass into my docker container? I have an encfs mount and an acd_cli mount as well as the overlayfs mount all passed into the docker; all work except the overlayfs one. I was thinking permissions: the other mounts are fuse and I am using allow_other to grant access, but overlayfs doesn't appear to have a similar option. Thanks in advance.
  16. OK, so I have ended up using mount -t overlay. Still to test it, but can anyone comment on whether the following should be OK? For now I have just done a read-only mount:
      mount -t overlay overlay -olowerdir=/mnt/user/Media/:/mnt/user/cloud/acd/ /mnt/user/cloud/media/
      So: /mnt/user/Media is my local content. /mnt/user/cloud/acd is my content stored on Amazon Cloud Drive. /cloud/ is a cache-only share; although no local data is in this folder, the acd_cli tool does seem to buffer wherever the mount point is. I had it on the flash drive (/mnt/) at one point and saw speed issues; on my SSD (cache drive) it seems to work fine. So I am hoping that if I point at /mnt/user/cloud/media/ I will get a local file if it exists, and if not it will pull from ACD.
  17. Hi, I have been playing around with ACD options for the last couple of weeks, mostly looking at the mount options, as I want to be able to mount a drive and have a merged view of local and cloud storage. rclone for me was too slow; just browsing folders is not quick at all, whereas acd_cli maintains a local db of the folder tree so it browses much faster. rclone mount also can only do sequential reads (at least at the moment), and this was an issue for me. At the moment I have acd_cli and encfs working; I am still testing, but so far the performance has been good. You can copy to/from the cloud mount, although most people recommend against it to ensure file integrity, so if I want to upload big chunks I unmount the acd folder and do the upload using acd_cli upload. I am still trying to work out how to do a merged mount of the local/cloud data. I will basically have the same data in the cloud and locally, but I will then remove/archive some local content, so it will reference the cloud copy if a file doesn't exist locally. I found a guide for using unionfs-fuse, but since it is a similar process to the user folders and how unRAID references the cache and local discs, I am trying to work out if I can do it without installing unionfs-fuse. I made a post in another thread for anyone interested (http://lime-technology.com/forum/index.php?topic=45338.msg502980#msg502980) Cheers, Wob
  18. Hi, should I be posting this question somewhere else in the forum? Surely someone can give me some hints on creating a custom mount point similar to /mnt/user? Regards, Wob
  19. I didn't have much joy playing with acd_cli in a docker; I wanted to use the mount option, and a limitation with docker and exporting fuse mounts meant that the data was only available inside the docker. I have installed acd_cli on unRAID and it is working fine; I am still testing reliability, so still early days. You will need to have Python 3.5 installed. There is a problem with the one in the NerdPack plugin (sqlite3 is not working in it; dmacias is looking to solve that), but you can manually install it with:
      upgradepkg --install-new http://slackonly.com/pub/packages/14.1-x86_64/python/python3/python3-3.5.1-x86_64-1_slack.txz
      After that is installed you can install acd_cli with:
      pip3 install --upgrade git+https://github.com/yadayada/acd_cli.git
      You may also need to install the following dependencies; I can't remember if they auto-installed:
      pip3 install SQLAlchemy
      pip3 install dateutils
      I also installed encfs for an encryption layer; you can try that by running upgradepkg --install-new on the following packages:
      http://download.salixos.org/x86_64/14.2/salix/ap/encfs-1.8.1-x86_64-1gv.txz
      http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/l/boost-1.59.0-x86_64-1.txz
      http://ftp.slackware.org.uk/salix/x86_64/14.1/salix/l/rlog-1.4-x86_64-1pw.txz
      http://ftp.slackware.org.uk/slackware/slackware64-14.1/slackware64/a/slocate-3.1-x86_64-4.txz
      None of this will survive a reboot, and you need to do all the mounting and configuration after installing the above. If it proves reliable I might try and tackle a plugin later, but thought I would post this now in case anyone wants to have a play. Wob
  20. Hi, I am playing around with acd_cli and encfs. At the moment I have both acd_cli and encfs working on my unRAID host. Still early stages; I am waiting to see how reliable it is, and I still need to look at making a plugin or script so that it will survive a reboot. One guide I found online suggested using unionfs-fuse to make a fused folder with cloud and local data shown, with the cloud layer read only and the local mount point writable. Since the unRAID filesystem works in a similar fashion, I thought I might be able to do this without the need to install unionfs-fuse; can anyone tell me if there are some mount options I can use with unRAID to complete the same setup? My goal is to maintain my data locally, but have an ACD bucket that I can move older (say 5+ years) data into. I am aware of the encryption limitations of encfs, but for my purposes they really shouldn't be an issue. Regards, Wob
  21. Thanks for the starting point, I'll look over your stuff and fire some questions. At the moment I am still seeing some issues with the acd app so I'll sort those out before I proceed. Sent from my SM-G900I using Tapatalk
  22. I might have called encfs too early; it seems I have an issue with 1.7.4 and need to get to a newer version. OK, I installed http://download.salixos.org/x86_64/14.2/salix/ap/encfs-1.8.1-x86_64-1gv.txz and had to upgrade boost to http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/l/boost-1.59.0-x86_64-1.txz. Now I have a working install of encfs and acd_cli... just need to work out how to get it all reinstalled and mounted after a reboot.
  23. Hi dmacias, I just installed from (http://slackonly.com/pub/packages/14.1-x86_64/python/python3/python3-3.5.1-x86_64-1_slack.txz) and it works, so it does appear to be linked to your version. I did read somewhere that you need sqlite3 installed when you build, so I think you are correct in your description. Yeah, no pysqlite3; I did search for that for some time. Another question for you: I am looking to install encfs. There is an old plugin but nothing for 6, and I am not at all familiar with how to create a plugin, so I am just trying to work out how best to go about it. I found all the txz files required, and I see the following options:
      See if you can add them to the NerdPack
      Create a user script and use the user scripts plugin to run it
      Work out how to create a plugin for it
      The first option is obviously the easiest for me; is that something you are willing to add to this pack? These are the packages I used:
      http://ftp.slackware.org.uk/salix/x86_64/14.1/salix/ap/encfs-1.7.4-x86_64-4gv.txz
      Then the dependencies:
      http://ftp.slackware.org.uk/slackware/slackware64-14.1/slackware64/l/boost-1.54.0-x86_64-3.txz
      http://ftp.slackware.org.uk/salix/x86_64/14.1/salix/l/rlog-1.4-x86_64-1pw.txz
      http://ftp.slackware.org.uk/slackware/slackware64-14.1/slackware64/a/slocate-3.1-x86_64-4.txz
      I'll need to do some script work to get acd_cli installed at startup, as it has a couple of dependencies and needs to be installed via pip3. With relation to plugin creation, can you point me at a starting point if I want to tackle that? Thanks, Wob
  24. Hi, I am trying to play around with acd_cli on unRAID. I have it running in a docker, but you can't export fuse mounts, so I want to try it on the host system. I have sorted most of the dependencies, but I am having issues with python3 and sqlite3. SQLite3 is installed, as it is part of the unRAID build now, but python3 (and 3.5) don't seem to have it.
      In python2:
      Python 2.7.12 (default, Sep 6 2016, 18:21:48)
      [GCC 5.4.0] on linux2
      Type "help", "copyright", "credits" or "license" for more information.
      >>> import sqlite3
      >>> sqlite3.sqlite_version
      '3.13.0'
      >>> exit()
      In python3.5 or 3:
      Python 3.5.1 (default, May 19 2016, 21:40:34)
      [GCC 5.3.0] on linux
      Type "help", "copyright", "credits" or "license" for more information.
      >>> import sqlite3
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
        File "/usr/lib64/python3.5/sqlite3/__init__.py", line 23, in <module>
          from sqlite3.dbapi2 import *
        File "/usr/lib64/python3.5/sqlite3/dbapi2.py", line 27, in <module>
          from _sqlite3 import *
      ImportError: No module named '_sqlite3'
      I have googled but can't find a solution. Any idea how to get python3.5 working with sqlite3? Thanks, Wob
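A quick way to check whether a given interpreter's sqlite3 binding works, without opening a REPL as in the sessions above (a sketch; which interpreter names exist depends on what is installed):

```shell
#!/bin/bash
# Check each available python interpreter for a working sqlite3 module:
# print the bundled SQLite version on success, flag the broken ones
# (a failed import here means the compiled _sqlite3 extension is missing).
for py in python2 python3; do
    if command -v "$py" >/dev/null 2>&1; then
        if "$py" -c 'import sqlite3; print(sqlite3.sqlite_version)'; then
            echo "$py: sqlite3 OK"
        else
            echo "$py: sqlite3 module broken (missing _sqlite3)"
        fi
    fi
done
```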
  25. Hey all, now that the LS images are not auto-updating, I was wondering if there are any issues with doing in-app updates inside the container? Mostly interested in this one, SickRage and PlexPy, all LS dockers. Thanks, Wob. Sent from my SM-G900I using Tapatalk