Amazon Cloud Drive docker - ACD_CLI and Encfs



This is the command I run to upload my files to Amazon and delete the local files once they are successfully uploaded or detected as duplicates.

acd_cli upload --remove-source-files /mnt/user/Amazon/.local /encfs 

 

Unfortunately, after I run the command (in screen), nothing seems to happen. How do I monitor the progress? How do I know it's working?
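One way to keep an eye on it (a sketch only; the session name, log path, and reliance on acd_cli's global `-v` verbose flag are my assumptions):

```shell
# Hypothetical setup: run the upload in a named screen session with
# verbose output captured to a log file.
LOGFILE=/tmp/acd_upload.log
screen -dmS acdupload bash -c \
  "acd_cli -v upload --remove-source-files /mnt/user/Amazon/.local /encfs >> $LOGFILE 2>&1"

# Follow the log from any other shell:
tail -f "$LOGFILE"

# Or reattach to the session to watch it live (Ctrl-A d detaches again):
screen -r acdupload
```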

Link to comment

A Docker wouldn't work anyway as nothing outside it would be able to see the mount point.

I thought that was the case but, not being super technical, figured the OP had something in mind that warranted a docker.

 

Well, I'm pretty sure anyway. I had the same suspicions, and a few of us tried to do it with the rclone docker; no one found a way. If one is just trying to back up something and encrypt it, I suppose a docker would be fine, but if you wanted another program to access it, as I imagine many if not most would, then it would need to be a plugin. And yeah, I agree, a docker would be overkill anyway. It just wouldn't fit the usage scenario.

Link to comment

 


echo <password> | ENCFS6_CONFIG='/boot/acd_cli/config/encfs.xml' encfs -S --reverse -o ro -o allow_other -o uid=99 -o gid=100 /mnt/user/Media/ /mnt/cache/cloud/.local/ >> $LOGFILE 2>&1

 

 

I wonder, why mount the filesystem read-only? Is there any specific reason?

 

I want to upload to Amazon and delete uploaded files using this command:

acd_cli upload --remove-source-files /mnt/user/Amazon/.local/fxR--l67TIkpk,/ /encfs/

 

Unfortunately /mnt/user/Amazon/.local is read-only.

 

So I tried mounting .local without read-only:

encfs -S --reverse -o allow_other -o uid=99 -o gid=100 /mnt/user/Amazon/local/ /mnt/user/Amazon/.local/ 

 

and with read-write:

 

encfs -S --reverse -o rw -o allow_other -o uid=99 -o gid=100 /mnt/user/Amazon/local/ /mnt/user/Amazon/.local/ 

 

The .local folder is still read-only. How can I make .local NOT read-only?

 

By the way, I set up my encryption to be file independent, meaning I can move files in the encrypted view folder without breaking the decryption. Make sense?
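For reference, I believe the settings that control this are the filename-IV options in the encfs.xml; an excerpt (not a complete file) of what "file independent" looks like:

```xml
<!-- Excerpt only: with both IV-chaining options disabled, encrypted files
     can be moved/renamed within the view without breaking decryption. -->
<chainedNameIV>0</chainedNameIV>
<externalIVChaining>0</externalIVChaining>
```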

 

Anyway, how do I make the .local folder (the encrypted encfs view) not read-only?

 

Link to comment

You are correct that you cannot directly "export" a fuse mountpoint from docker as a native mount, for technical reasons. The workaround, if you really wanted a docker, would be to export the ACD encfs mountpoint as something like a WebDAV share and then mount that (which again brings you back to needing a plugin or script running natively). See this docker: https://github.com/sedlund/acdcli-webdav

 

Thus, to get encrypted ACD mountpoints working and integrated and looking like a native mount the way unraid users would want it, it needs to be a plugin.
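On the read-only question above: as far as I know, an encfs `--reverse` view is read-only by design (it is an on-the-fly encrypted projection of the plaintext, so writes through it aren't supported). The practical workaround is to drop `--remove-source-files` and delete the plaintext originals after a successful upload; a rough sketch (the folder name is hypothetical):

```shell
# Upload from the read-only encrypted view without deleting through it...
acd_cli upload /mnt/user/Amazon/.local/ /encfs/

# ...then remove the plaintext source, which also removes the
# corresponding entries from the reverse view.
rm -r "/mnt/user/Amazon/local/SomeFinishedFolder"   # hypothetical folder
```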

Link to comment
  • 2 weeks later...

Hey All,

 

Sorry about the radio silence from me, I was away on 3 weeks holiday.

 

I did get a little further with my testing. I moved to unionfs-fuse for my overlay mount point, and I have modified my scripts slightly to work inside the "User Scripts" plugin: one for mounting and another for unmounting. Using the plugin they can be scheduled to start and stop with the array, which fixes an issue I was having during shutdown where the system was waiting for these mount points to disappear.

 

I have also migrated my mount folders to /mnt/disks/ as per some advice from squid in the rclone (beta) docker thread. Once you do this, set the docker volume mapping to Slave:RW and changes in the mount will be seen in the docker without the need for a restart. This means I could remove the docker restarts from the end of my script. As /mnt/disks/ folders are nuked on reboot, I added directory creation at the start of my script. I had some earlier issues with acd_cli and encfs mount points being on the flash drive, but haven't seen this again using /mnt/disks, though it is early days.
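For anyone mapping this outside the unRAID template: I believe Slave:RW corresponds to docker's bind-propagation options, so a plain `docker run` equivalent would look roughly like this (the image name and container path are just examples):

```shell
# rslave propagation lets FUSE mounts that appear under /mnt/disks on the
# host after container start become visible inside the container, so no
# docker restart is needed when the cloud mounts come up.
docker run -d --name mediaserver \
  -v /mnt/disks:/cloud:rw,rslave \
  some/image
```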

 

Here are my current script files, see the user scripts plugin for how to implement them. You will still need to create the /boot/cloudstore/ folder structure and copy the required plg's as I have mentioned earlier in the thread. I may move to pulling these from git at some point. Oh and you need to create your own oauth and encfs.xml files, and don't forget to replace <password> in the script with your password.

 

Mounting Script,

#!/usr/bin/bash
HOME=/root
LOGFILE=/boot/cloudstore/logs/cloudstore-$(date "+%Y%m%d").log
echo CloudStore log $(date) $'\r'$'\r' >> $LOGFILE 2>&1
echo "Starting Cloud Mounts" $'\r'>> $LOGFILE 2>&1

#Copy oauth file to system
mkdir -p /root/.cache/acd_cli/
cp /boot/cloudstore/config/oauth_data /root/.cache/acd_cli/oauth_data

#Install dependencies
echo Installing Slackware Packages >> $LOGFILE 2>&1
upgradepkg --install-new /boot/cloudstore/install/boost-1.59.0-x86_64-1.txz >> $LOGFILE 2>&1
upgradepkg --install-new /boot/cloudstore/install/rlog-1.4-x86_64-1pw.txz >> $LOGFILE 2>&1
upgradepkg --install-new /boot/cloudstore/install/slocate-3.1-x86_64-4.txz >> $LOGFILE 2>&1

#Install encfs
upgradepkg --install-new /boot/cloudstore/install/encfs-1.8.1-x86_64-1gv.txz >> $LOGFILE 2>&1

#Install UnionFS-FUSE
upgradepkg --install-new /boot/cloudstore/install/unionfs-fuse-0.26-x86_64-1dj.txz >> $LOGFILE 2>&1

#Install acd_cli
echo Installing acd_cli >> $LOGFILE 2>&1
#pip3 install --upgrade git+https://github.com/yadayada/acd_cli.git
pip3 install --upgrade git+https://github.com/bgemmill/acd_cli.git >> $LOGFILE 2>&1

#Run a acd_cli sync
echo Running ACD sync >> $LOGFILE 2>&1
acdcli -v sync >> $LOGFILE 2>&1

#Make /mnt/disks folder structure
mkdir -p /mnt/disks/cloud/.acd/
mkdir -p /mnt/disks/cloud/acd/
mkdir -p /mnt/disks/cloud/.local/
mkdir -p /mnt/disks/cloud/media/

#Mount Amazon Cloud Drive (using screen)
echo Mounting Amazon Cloud Drive >> $LOGFILE 2>&1
screen -S acdcli -d -m /usr/bin/acd_cli -nl mount -fg -ao --uid 99 --gid 100 --modules="subdir,subdir=/Plex" /mnt/disks/cloud/.acd >> $LOGFILE 2>&1

#Mount Decrypted view of ACD
echo Mounting ENCFS points >> $LOGFILE 2>&1
echo <password> | ENCFS6_CONFIG='/boot/cloudstore/config/encfs.xml' encfs -S -o allow_other -o uid=99 -o gid=100 /mnt/disks/cloud/.acd/ /mnt/disks/cloud/acd/ >> $LOGFILE 2>&1

#Mount Encrypted view of Local Media (Use for uploading Data to ACD)
echo <password> | ENCFS6_CONFIG='/boot/cloudstore/config/encfs.xml' encfs -S --reverse -o ro -o allow_other -o uid=99 -o gid=100 /mnt/user/Media/ /mnt/disks/cloud/.local/ >> $LOGFILE 2>&1

#Overlay Mount with Local Data taking preference.
echo Mounting Overlay point >> $LOGFILE 2>&1
#mount -t overlay -o lowerdir=/mnt/user/Media/:/mnt/user/cloud/acd/ overlay /mnt/user/cloud/media/ >> $LOGFILE 2>&1
unionfs -o cow -o allow_other -o uid=99 -o gid=100 /mnt/user/Media/=RW:/mnt/disks/cloud/acd/=RO /mnt/disks/cloud/media/ >> $LOGFILE 2>&1

echo Script Complete >> $LOGFILE 2>&1

 

Unmounting Script

#!/usr/bin/bash
screen -S acdcli -X quit
fusermount -u /mnt/disks/cloud/media/
fusermount -uz /mnt/disks/cloud/acd/
fusermount -u /mnt/disks/cloud/.local/
fusermount -uz /mnt/disks/cloud/.acd/

 

I am currently doing some speed tests between this setup and the beta rclone plugin. This seems to be almost twice as fast for the initial loading of a video file, but rclone has fewer moving parts. I am still not sure which solution I will go with, so I may tweak this further to make it a little more automated, or I may move to rclone instead of acd_cli/encfs.

 

Wob

Link to comment

I am using the rclone plugin but I am interested in how this compares performance wise. Always good to have options. I am also wondering if there is any compatibility between options. I'm just worried about committing to one way and uploading a bunch of data only to have to switch for some reason and start from scratch because the encryption is different.

 

EDIT: It looks like rclone might be including a compatibility mode for encfs.

Link to comment

I am using the rclone plugin but I am interested in how this compares performance wise. Always good to have options. I am also wondering if there is any compatibility between options. I'm just worried about committing to one way and uploading a bunch of data only to have to switch for some reason and start from scratch because the encryption is different.

 

EDIT: It looks like rclone might be including a compatibility mode for encfs.

 

Yeah, I did read that, or you could alternatively overlay an encfs mount over rclone and do the encryption outside of rclone. BUT, the internal rclone encryption is newer and avoids some of the issues with encfs.

Link to comment

I also found this:

https://forum.rclone.org/t/plex-server-with-amazon-drive-rclone-crypt-vs-encfs-speed/106/2

 

One of the comments posits that it might be the FUSE mounting, not rclone itself, as cause for the performance hit.

 

Thanks for the link, interesting read, but it's all on older versions, so he might still be working on a fix. Reading the issues/fixes it would seem rclone is being developed faster, but like you I want to commit to a solution before bulk uploading encrypted data. I could look at replacing acd_cli with rclone (unencrypted) while still using encfs, but the need to run a sync with acd_cli to see changes is a bit of a sticking point.
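That hybrid would look something like this sketch (it assumes an rclone remote already configured as `acd:` and the same encfs.xml as my script above; rclone's FUSE mount support was still new at the time):

```shell
# Mount ACD unencrypted via rclone (no manual acd_cli sync step needed)...
rclone mount acd:/Plex /mnt/disks/cloud/.acd --allow-other &

# ...then layer the same encfs decrypted view on top, as before.
echo <password> | ENCFS6_CONFIG='/boot/cloudstore/config/encfs.xml' \
  encfs -S -o allow_other -o uid=99 -o gid=100 \
  /mnt/disks/cloud/.acd/ /mnt/disks/cloud/acd/
```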

Link to comment

I've been messing with the acd_cli/encfs method as well as rclone by itself.

My unofficial tests suggest that the acd_cli method is about 2x as fast as rclone at starting a video stream, as well as restarting it after seeking.

This is using the latest beta of rclone. It's not a huge difference, but for what I'm wanting to do with my system, and the WAF for users, I would rather have a 10-second delay than a 30+ second one.
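For anyone wanting to repeat this kind of comparison, a crude time-to-first-byte test is just timing how long the first few MB of a file take to read from each mount (the file path here is an example):

```shell
# Time how long the first 16MB of a file takes to come off the mount;
# run the same command against the acd_cli/encfs mount and the rclone mount.
testfile="/mnt/disks/cloud/media/Movies/example.mkv"   # hypothetical path
time dd if="$testfile" of=/dev/null bs=1M count=16
```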

 

I will agree that rclone is much simpler to set up and use than the acd_cli/encfs method.

So I haven't settled on which I want to use yet. As others have mentioned the rclone guy does seem to be on top of it with his updates, so if he can improve the mount performance that would be awesome.

Link to comment

That's my main concern. Even if acd_cli is faster now, rclone seems to be developing faster and working on performance. The acd_cli/encfs solution is still not a cohesive project but a combination of two programs, so there likely won't be any work done on how they interact and the performance between them. Would that even be a factor? I don't know.

Link to comment

@ninthwalker, seems your tests reflect mine.

 

I think at this stage I'll stick with acd_cli and encfs, but I might hold off on bulk uploading for a month or two to see if rclone can catch up. Changing encryption systems at that point would require a re-upload.

 

I am still looking to keep a large chunk of data local anyway, it would only be older data (say older than 3-5 years). That way most content would be local, and just watching older stuff would pull from ACD.

 

Link to comment

@ninthwalker, seems your tests reflect mine.

 

I think at this stage I'll stick with acd_cli and encfs, but I might hold off on bulk uploading for a month or two to see if rclone can catch up. Changing encryption systems at that point would require a re-upload.

 

I am still looking to keep a large chunk of data local anyway, it would only be older data (say older than 3-5 years). That way most content would be local, and just watching older stuff would pull from ACD.

 

That was my exact plan too!

Have 'Archived Movies' and 'Archived TV Shows' libraries, and move older items that people don't watch as much to ACD.

Newer, popular content stays local. So when people do want to watch those older movies/TV shows they still can, and I figure just dealing with the extra load times is an OK trade-off.

 

Link to comment

That was my exact plan too!

Have 'Archived Movies' and 'Archived TV Shows' libraries, and move older items that people don't watch as much to ACD.

Newer, popular content stays local. So when people do want to watch those older movies/TV shows they still can, and I figure just dealing with the extra load times is an OK trade-off.

 

I was going to make a quip about great minds thinking alike, but I'm not sure I can claim that at the moment ;)

 

My server doesn't get a lot of hits on older content, so I think I can safely base my "move" to the cloud on dates or number of seasons. Then hits to ACD should be limited and shouldn't impact the WAF too much.

 

My only real concern is with ACD and its terms. I need to work out a "backup" for things stored in ACD, at least until I am confident they won't just "purge" my collection on a whim. Whether that is an offline storage device locally or another cloud provider, I have yet to determine.

 

Wob

Link to comment

My only real concern is with ACD and its terms. I need to work out a "backup" for things stored in ACD, at least until I am confident they won't just "purge" my collection on a whim. Whether that is an offline storage device locally or another cloud provider, I have yet to determine.

 

Wob

 

I also have unlimited Google Drive. Got it free with my old .edu account so I'll have dual cloud backup.
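If both remotes are set up in rclone, mirroring one to the other is a one-liner (the remote names `acd:` and `gdrive:` are assumptions):

```shell
# Make a second copy of everything uploaded to ACD over on Google Drive.
# rclone streams down from one remote and back up to the other.
rclone copy acd:/Plex gdrive:/Plex -v
```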

Link to comment
