

Posts posted by bobbintb

  1. Is there a list somewhere of what file types work when mounted? E.g., I can't open MS Office files.

     

     

    Loving the possibilities though - just wish my upload was faster so I could seriously consider uploading my Plex files. At the moment I'm uploading to cloud sync so I won't need my server on all the time... eventually.

     

    Strange. I don't know why Office files wouldn't work. Maybe because they aren't really streamable, but I doubt that should matter, especially given their size. I haven't come across anything about it. There is a community forum on rclone that might be worth a look if no one here knows.

     

    But, yeah, I've been looking for a solution like this for a long time and there is finally a viable one, as long as you have the speed. I'm just going to have to spend the next few years uploading until we finally get some cost-effective fiber in my area, which I am told will be soon. We already have one provider but they are ridiculously expensive, have very small caps, and restrict upload to 50 Mbps. Lame. From what I can find, we will eventually be getting some competition with the same prices and speeds as Google Fiber.

    Ironically enough, I have 3 siblings that live in Google Fiber areas, or soon-to-be Google Fiber areas, and my parents are moving to another. Yet none of them actually have Google Fiber, except my brother, but he gets the slow, free option. The one person (me) that would actually get use out of it is stuck in the boondocks. I might either wait to see if my parents get it and work out some sort of arrangement where I mail a 512 GB flash drive back and forth, or pay for my brother's gigabit for a month, drive down there with my server in tow, upload everything within that month, and come back for it. First world problems.

  2. The stable branch of rclone has been updated today. Because of this I have now released a version of the plugin which follows the stable branch. You can use this if you don't want to be on bleeding-edge beta releases :)

    I will keep both versions updated as new releases are made :)

     

    Oh, that's good. The only real reason for using the beta in the first place was that the new features were necessary for our particular usage scenario. Good to have the peace of mind of a stable branch.

  3. Hi. I'm getting a locked-up webGUI and no access to my array.

     

    I use the following linuxserver.io dockers:

    CouchPotato

    MySQL

    ruTorrent

    Sonarr

    NZBGet

    Plex

     

    Sonarr is the only one I can't connect to when this happens, so I wondered if this was causing my problem.

     

    I have a thread here with my diagnostics:

    http://lime-technology.com/forum/index.php?topic=53257.0

     

    Any help would be appreciated

     

     

    Sent from my Nexus 9 using Tapatalk

     

    I have the same issue but can't see anything in the log.

  4. Great work putting this together, Waseh!

     

    Can you share a link to the source code? I will eventually make a simple GUI for this to make it even more user-friendly.

     

    A GUI would be great. I thought about doing it myself, but I don't have the time, and it would take me far longer to learn than it would someone who is already familiar with it.

     

    I think rclone will eventually have a daemon, but until then this might be useful to add to the plugin:

    https://techarena51.com/index.php/inotify-tools-example/

     

    You can add inotify from the Nerd Tools plugin - I think that's a better solution for the people who need it :)

     

    I could have sworn I already looked there...

     

    Anyway, I'll probably give that a go and do a write-up on it for those that want to use that instead of a cron job.
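
    For anyone curious what I mean, the rough shape of an inotify-based approach would be something like this - purely a sketch, and the watch directory and remote name are just placeholders from my own setup:

    #!/bin/bash
    # Watch a share and push changes to the remote as they land.
    # Needs inotify-tools installed (e.g. via the Nerd Tools plugin).
    WATCH_DIR="/mnt/user/backup"     # placeholder - your local share
    REMOTE="amazon:backup"           # placeholder - your rclone remote/path

    inotifywait -m -r -e close_write -e moved_to --format '%w%f' "$WATCH_DIR" |
    while read -r changed_file; do
        # Copy just the file that changed; rclone skips anything already up to date.
        rclone copy "$changed_file" "$REMOTE"
    done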

  5. Really awesome, thanks for making this.

     

     

    Could someone explain what I should be using for my rclone copy command?

     

     

    The rclone docs say:

     

     

    rclone copy remote:test.jpg /tmp/download

    But I don't know what to put for my paths.

     

    The syntax is:

    rclone copy source:sourcepath dest:destpath

     

    The syntax for the remote can be confusing though. Here is my example:

     

    rclone copy /mnt/user/server/backup/myfile.bin amazon:

     

    The root of a remote is the name of the remote followed by a colon. So if I wanted to back up to a folder named "documents" on my Amazon Cloud Drive (named "amazon" in my rclone config), the destination would be as follows:

     

    amazon:documents/
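
    Putting it together, the full command in that case would look something like this (the local path is just an example from my setup):

    rclone copy /mnt/user/server/backup/myfile.bin amazon:documents/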

  6. Great work putting this together, Waseh!

     

    Can you share a link to the source code? I will eventually make a simple GUI for this to make it even more user-friendly.

     

    A GUI would be great. I thought about doing it myself, but I don't have the time, and it would take me far longer to learn than it would someone who is already familiar with it.

     

    I think rclone will eventually have a daemon, but until then this might be useful to add to the plugin:

    https://techarena51.com/index.php/inotify-tools-example/

  7. Make the mount points within /mnt/disks

     

    Then on the Plex template, set that container/host volume with a mode of Read Write,Slave

     

    Should fix your problems. (The only mount points that support Slave mode are within /mnt/disks, and slave mode is your fix for any app not seeing the mounts properly if they are mounted after docker is started.)

     

    Question about that: what is slave mode? Also, what's the purpose of the disks share?

    Slave mode lets the docker system see (and more to the point, populate) the contents of the host path if the path is mounted after the docker service starts.

     

    However, unRAID's implementation only supports Slave mode when the mount point is within /mnt/disks (it was added due to Unassigned Devices).

     

    http://lime-technology.com/forum/index.php?topic=40937.msg465348#msg465348

     

    It seems I edited my statement while you were replying, so I'll ask again in case you didn't see the edit: do we have to do anything to specify slave mode when we mount it?

     

    OK, so /mnt/disks was added specifically for the purpose of slave mode then? Does it work the same as the user shares in terms of NFS/SMB mounting?

    In more general terms, for any path that you cannot guarantee is mounted before the docker service starts, you should make the mount point within /mnt/disks and use slave mode within the docker template if a docker app needs to access it.

     

    Sent from my LG-D852 using Tapatalk

     

    That makes sense, but I guess what I am getting at is that I have my user share mapped over SMB to my Windows machine. Can I do the same with shares under /mnt/disks, or would that not really make any sense in this use case, since I would essentially be remotely mounting an already remotely mounted share? I just want a way to view the encrypted Amazon share in a decrypted state on my Windows VM in the same manner I view my unRAID share. Ultimately I suppose I could install rclone on the Windows machine, but I'd rather not have to if unRAID can manage it.
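
    For the record, the way I picture the setup being described is roughly this - treat it as a sketch, since the remote name and paths are placeholders from my own config:

    # Mount the encrypted remote somewhere under /mnt/disks:
    mkdir -p /mnt/disks/acd_crypt
    rclone mount --allow-other crypt: /mnt/disks/acd_crypt &

    Then in the Plex template the host path /mnt/disks/acd_crypt gets mapped with the Read Write,Slave mode, which (as I understand it) is equivalent to passing something like -v /mnt/disks/acd_crypt:/acd_crypt:rw,slave on the docker command line.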

  8. Thanks for this. I'm interested but cautious. I need to do some reading before I install it.

     

    There has been a lot of discussion and experimentation on it. Check here for starters:

     

    http://lime-technology.com/forum/index.php?topic=52033.msg512568

     

    If you have questions or concerns, feel free to ask, as there are plenty of people who have been tinkering with it and can answer you.

     

    I'm excited to have this "official" now. Thanks to everyone involved.

  9. Make the mount points within /mnt/disks

     

    Then on the Plex template, set that container/host volume with a mode of Read Write,Slave

     

    Should fix your problems. (The only mount points that support Slave mode are within /mnt/disks, and slave mode is your fix for any app not seeing the mounts properly if they are mounted after docker is started.)

     

    Question about that: what is slave mode? Also, what's the purpose of the disks share?

    Slave mode lets the docker system see (and more to the point, populate) the contents of the host path if the path is mounted after the docker service starts.

     

    However, unRAID's implementation only supports Slave mode when the mount point is within /mnt/disks (it was added due to Unassigned Devices).

     

    http://lime-technology.com/forum/index.php?topic=40937.msg465348#msg465348

     

    It seems I edited my statement while you were replying, so I'll ask again in case you didn't see the edit: do we have to do anything to specify slave mode when we mount it?

     

    OK, so /mnt/disks was added specifically for the purpose of slave mode then? Does it work the same as the user shares in terms of NFS/SMB mounting?

  10. Plugin is now live: https://lime-technology.com/forum/index.php?topic=53365 :)

     

    In regards to the risk of overlapping cron jobs, this is not something I've thought about. My use case doesn't really put me at risk of that problem.

     

    Well, it's always a good idea to plan for contingencies, but mostly I'm just trying to figure out a way to create a daemon until one is properly added. Cron jobs are easy enough, but I'd rather have something more efficient and was just wondering if anyone else has tried anything other than a cron job.

  11. Make the mount points within /mnt/disks

     

    Then on the Plex template, set that container/host volume with a mode of Read Write,Slave

     

    Should fix your problems. (The only mount points that support Slave mode are within /mnt/disks, and slave mode is your fix for any app not seeing the mounts properly if they are mounted after docker is started.)

     

    Question about that: what is slave mode, and do we have to specify it when mounting? Also, what's the purpose of the disks share?

  12. Also, I see there is some discussion about a daemon possibly being implemented into rclone, but for now people have just been using cron jobs with flock (so cron jobs don't overlap). Anyone know of a better way, such as hacking together some sort of inotifywait/fsnotify solution?
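
    For anyone who just wants the cron-with-flock approach in the meantime, the crontab entry would look roughly like this - the schedule, lock file, and paths are only examples from my own setup:

    # Sync every hour; flock -n skips the run if the previous one is still going.
    0 * * * * flock -n /var/lock/rclone_sync.lock rclone sync /mnt/user/backup amazon:backup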

  13. No, it's part of the plugin - actually one of the things I'm changing before release :)

     

    Yeah, that was my other guess. I figured it was part of the myrclone script. I really hate having to use an alternate command, but I get that it was done to save having to specify the config file every time. There's got to be a better way for that (I've sketched a couple of ideas at the end of this post).

     

    Thanks for your work on updating the plugin, Waseh. I tested Plex and things seem to be working great. Just need to upload my 24 TB of data to Amazon, then again to Google, and then whatever else gets added after those are finished, and I should be good to go. At 5 Mbit upload speed, it shouldn't take too long...
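
    As for avoiding the separate myrclone command, a couple of things that should work in principle - not tested against the plugin, and the config path below is a guess, so check where the plugin actually stores it:

    # Point plain rclone at the plugin's config file explicitly:
    rclone --config /boot/config/plugins/rclone/.rclone.conf lsd amazon:

    # Or wrap it in an alias so the normal command keeps working at the console:
    alias rclone='rclone --config /boot/config/plugins/rclone/.rclone.conf'

    (And for the record on the upload: 24 TB at 5 Mbit/s works out to roughly 444 days of non-stop uploading, so "shouldn't take too long" is carrying a lot of weight there.)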

  14. Anyone know how to get rclone to sync one file at a time? When I run a sync it seems to be uploading many at the same time. My internet has been flaky lately, so when uploading many files at once it can take 8+ hours and I am certain to lose connection at some point and have to start over.

     

    I checked the documentation and it looks like the --transfers option is what I need, but it says the default is 4 and it seems to be doing more than that, so I'm not sure if that is the option I am looking for.

     

    Edit: Never mind, that was it. Maybe the beta changed the default or something. Anyway, for anyone else that is wondering, you can use that option to change the number of simultaneous transfers.
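
    For example, to force one file at a time (source path and remote here are just from my setup):

    rclone sync --transfers 1 /mnt/user/backup amazon:backup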

  15. Another caveat I just thought of. Even if I mount the share before Plex starts, there is still the issue of adding content. There needs to be a way for the Plex docker to get the changes without the need to restart it. Having to restart it every time new content is added is not practical. Not sure how that would work. I might have to switch to the Plex plugin if a solution cannot be found.

  16. Try myrclone mount --allow-other crypt: /mnt/acd/unraidcrypt :)

    hah ..

    That's what I originally did, then I started putting it on the end. Same result either way.

    I know it's related to permissions somehow, as I have a folder inside that should show up when I browse to add the path in Plex.

    But Plex only gets to /unraidcrypt and then doesn't see anything in it.

    Only at the console do I see into it. =/

    Did you manually make a fuse.conf file or anything? Not sure what's different on my end than yours.

     

    2 things:

    Make sure you add the path to the Plex docker. It sounds like you did but just making sure.

    Second, if Plex was already running when you mounted it, you'll need to restart Plex.

    When you say "only at the console", are you referring to the unRAID command line or the command line within the Plex docker? Checking with the Plex docker command line is the best way to verify the file is there for Plex to see.

     

    That brings up a point, though, that I was about to ask: how is everyone mounting the encrypted share? I want it to mount automatically on startup, but it needs to be mounted before the Plex container starts.
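
    What I have in mind is something along these lines run at startup, before the docker service comes up - purely a sketch, and the remote name and mount point are placeholders:

    # Make sure the mount point exists, then mount the crypt remote in the background.
    mkdir -p /mnt/disks/acd_crypt
    rclone mount --allow-other crypt: /mnt/disks/acd_crypt &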

  17. Also another question: who wants to take ownership of this?

     

    I mean, this thread was started by thomast_88 for the docker he created, which we have since found out isn't really as useful as a plugin because of the limitations of docker. Also, correct me if I am wrong, but it seemed more like an experimental proof of concept he created for himself and shared, and never really intended to make into a project and fully support. The plugin was created by aschamberger, whose only post was creating the plugin, so I don't think he is really invested in it. Waseh improved and updated the plugin but understandably doesn't feel able to, or doesn't want to, commit to it. Plus there are a bunch of disparate threads all pertaining to rclone or encfs/acd_cli implementations. There are a number of people contributing to this in one way or another, but no one seems to want, or have the time, to own it. This has all been a sort of nebulous work in progress so far, but it seems to be at the point where we have a solid product. It just needs someone willing to take it from this point on, bring it all under one thread, send it off to Community Apps, and be willing to maintain it.

     

    I will maintain this docker version and keep developing it if the community asks for it. So far I've been running it to back up my appdata to ACD (encrypted), and it's working flawlessly. If people want to mount encrypted volumes on their array, the plugin will be the way to go (unless docker supports this in the future).

     

    I'm looking forward to the new plugin by Waseh.

     

    I think you should make sure to note the sharing limitation in the OP, just so those that have not been following the progress are aware of it. Thanks for the work.

  18. I will probably look the plugin over tomorrow and do a couple of improvements I have in mind and then make a new thread and submit it to CA :)

     

    Good to hear. This one is a lot less involved than most, and all the hard parts seem to be done, so I don't think it would be too demanding for you, but if you need help I can offer what I can. I've had some experience writing plugins in v5.
