[Support] Linuxserver.io - davos


Recommended Posts

15 hours ago, Nyghthawk said:

If I want it in "/Seedbox/red-apl/data", even though I mapped "/Seedbox/" as "/download/" for Davos, I have to put "/download/red-apl/data" as my local download location. Will test in a moment (on a larger download).

You want it in /Seedbox/..., and you mapped /Seedbox to /download.

 

So yeah, you need to put /download as the location - or am I missing something?

Link to comment

Hi @Nyghthawk,

 

davos does not assume where your "download" folder is, as it was designed to be relatively flexible regarding which directories you wish for it to be aware of. Due to the way Docker containers work, it is important to remember that just by mapping /home/my/host/directory:/download you are not telling davos (the application) where you want downloaded files to go. All that does is tell the Docker container where you want files in the internal /download directory to be on your host.
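
For reference, container creation typically looks something like this (a sketch only - the host paths, web UI port, and PUID/PGID values here are illustrative examples rather than the official template, so check the linuxserver.io page for davos for the exact settings):

# illustrative only: map a host folder onto the container's internal /download
docker run -d \
  --name=davos \
  -e PUID=1000 \
  -e PGID=1000 \
  -p 8080:8080 \
  -v /home/my/host/directory:/download \
  -v /path/to/appdata:/config \
  linuxserver/davos

The -v flags only wire host directories to container directories; davos itself still only ever sees the internal /download path.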

 

For example, if you decided to run davos natively on your machine, you'd tell it to download files to /mnt/usr/Seedbox because davos has no knowledge of your preferred download directory - you need to tell it explicitly. The same applies to running it within a Docker container. In this particular case, the Docker container has mapped the /download directory specifically for this use case, hence why you need to start your local directory with it.

 

This is not something I plan on amending in future releases. Regarding your other feature request, please add it to the Issues list on the davos GitHub page. Cheers.

 

Thank you for using davos! If you have any further queries, let me know.

Link to comment

I guess it's just my misunderstanding of mapping a location.

 

If I map "/download" to "/a/b/c/d/e/f/g/", and inside g/ I have folders 1, 2, 3 and 4.

 

And I want my files to go to the main directory of /g/.

 

When the program asks for the download location, I'd EXPECT to put / for /g/, since "/download" was mapped there. But in order for it to go to /g/, I have to put /download/ as the location. Why? I mapped /g/ to /download already. Should the download location imply the /download/ folder? Or can I put the FULL path as the location and have it still work?

 

Like /mnt/usr/a/b/c/d/e/f/g/, and it'll download to g/?

Link to comment
17 minutes ago, Nyghthawk said:

I guess it's just my misunderstanding of mapping a location.

 

If I map "/download" to "/a/b/c/d/e/f/g/", and inside g/ I have folders 1, 2, 3 and 4.

 

And I want my files to go to the main directory of /g/.

 

When the program asks for the download location, I'd EXPECT to put / for /g/, since "/download" was mapped there. But in order for it to go to /g/, I have to put /download/ as the location. Why? I mapped /g/ to /download already. Should the download location imply the /download/ folder? Or can I put the FULL path as the location and have it still work?

 

Like /mnt/usr/a/b/c/d/e/f/g/, and it'll download to g/?

 

Of course you have to enter the path /download/. You did not map the folder to /.

There is absolutely no point in trying to argue about this, as this is how Docker and mappings work.

You don't seem to understand that the container and the app inside the container are not the same. So the app, in this case Davos, doesn't know where you mapped /mnt/user/x, so you need to tell it.

Link to comment

If for example your host machine has the following directory structure:

/
    home/
        downloads/
    tmp/
    other/

And the docker container has the following internal structure:

 

/
    download/
    tmp/
    app/

When you map your directories during the creation process ("-v /home/downloads:/download"), you do this:

 

/                                        /
    home/
        downloads/ <-------------------> download/
    tmp/                                 tmp/
    other/                               app/

That link (or mapping) effectively means your "/home/downloads" directory is the "/download" directory inside the container. The two get overlaid, meaning they are now effectively the same directory. This is an important concept to understand, because any files placed in the "/download" directory are immediately visible to the host via "/home/downloads", and vice versa.
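
A quick way to see the overlay in action (assuming your container is named "davos"; both commands are standard Docker CLI):

# create a file inside the container's /download directory
docker exec davos touch /download/test.txt

# the same file is immediately visible on the host side of the mapping
ls -l /home/downloads/test.txt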

 

In order to access the files (or, in this case, download them) you must specify "/download", so that the files are visible in your "/home/downloads" directory. If you specify root "/", all that will happen is that files will be downloaded internally within the container (and will fail due to permission errors).

 

What you are describing is a fundamental misunderstanding of how docker overlays its directories when you map them. By mapping these directories together, you are not saying that the internal root directory ("/") is now your "/home/downloads" folder. If you want, feel free to map root "/" and see what happens (I definitely do not recommend this though).

 

In the case of davos: no, the download location does not imply "/download", because the application was designed to be flexible when running outside of Docker. The path is ABSOLUTE, meaning you must specify the whole path, from root. If you do not do this, it will attempt to resolve the path relative to where the application is running, which is the "/app" directory.
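
To illustrate with the paths from earlier in the thread (illustrative only):

# absolute: resolves through the mapping and lands on the host
/download/red-apl/data   ->  /Seedbox/red-apl/data on the host

# relative: resolves against /app inside the container and never reaches the host
red-apl/data             ->  /app/red-apl/data (stuck inside the container)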


TL;DR: Use "/download" when specifying the local directory for davos to download to. It will not work with any other directory when running inside a container. It is by design, and will not change.

Edited by Stark
  • Like 1
Link to comment
  • 3 weeks later...
On 05/10/2017 at 5:11 AM, sinbrkatetete said:

Hi!

 

Do I have any chance of setting it up so it connects through a SOCKS5 proxy?

 

Thanks in advance.

 

Hi @sinbrkatetete, there is currently no support for connecting to a host via proxies (SOCKS or HTTP). If this is something you would like to have, please add a request as an Issue on the davos GitHub page (https://github.com/linuxserver/davos/issues) with your requirements, so I can see whether it is something I can add in a future version.

 

Cheers!

  • Like 1
Link to comment
  • 3 weeks later...
  • 2 months later...
On 10/25/2017 at 5:19 PM, donciccio said:

Hi, got an issue with slow download speeds in davos.

 

Getting ~0.5 MB/s through davos, but if I run Filezilla through a docker, I can get about 7 MB/s - is there anything I can do about this?

 

Got it all set up on my end today and ran into the same issue, unfortunately making it unusable. This would be fantastic - did you figure it out? I tried everything.

Link to comment

Hi guys, sorry for the late reply.

 

I have to admit this isn't something I've noticed. I haven't released an update to davos since 1 Oct 2017, so any differences in performance would be either externally instigated or perhaps introduced via the Docker image (although I can't see this being the cause).

 

The best I can offer in terms of help is the suggestion to keep monitoring it and seeing how it goes. My implementation of the connection speed readout can be a bit flaky for SFTP connections, so it may be that while it shows a slow speed, the transfer isn't necessarily that slow. Are you able to monitor the transfer speed using ntop or something similar?
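
If you have shell access to the Docker host, one simple option (my suggestion, not a davos feature) is Docker's built-in stats command - "davos" below is whatever you named the container:

# live CPU, memory and cumulative network I/O for the davos container only
docker stats davos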

 

Thanks.

Link to comment
  • 2 months later...

After finally getting everything tidied up on my server such that the hard drive write speed can't possibly be the bottleneck, I can finally see that, indeed, Davos only copies at 24 Mbit/s at most, often around 8-12. Again, this isn't a *huge* problem because of my automated workflow, but I have a gigabit connection, and drives that can write at *least* 30 MB/s. Faster if I ever take my 1TB out of the array.

 

What could possibly be causing this?

Link to comment
  • 2 weeks later...
On 3/29/2018 at 3:53 AM, drumstyx said:

After finally getting everything tidied up on my server such that the hard drive write speed can't possibly be the bottleneck, I can finally see that, indeed, Davos only copies at 24 Mbit/s at most, often around 8-12. Again, this isn't a *huge* problem because of my automated workflow, but I have a gigabit connection, and drives that can write at *least* 30 MB/s. Faster if I ever take my 1TB out of the array.

 

What could possibly be causing this?

 

Scroll 2 posts up for the author's explanation.

On 1/10/2018 at 2:49 PM, Stark said:

My implementation of the connection speed readout can be a bit flaky for SFTP connections, so it may be that while it shows a slow speed, the transfer isn't necessarily that slow.

 

Link to comment
  • 5 weeks later...

Hi @drumstyx, when you say you're getting a 24 Mbit/s transfer speed, are you referring to actual measured speeds? I have a 74 Mbit/s line, and when using davos I got ~6.5 MB/s, which isn't too far off 80-90% of my available bandwidth. Admittedly I've not run the application on a faster line (they're just not available to me where I live).

 

I appreciate that the slowness for some people is annoying, but my implementation of the FTP and SFTP libraries is incredibly basic (practically no internal library tweaking), as the davos application was initially designed as a personal project that ended up gaining traction with a decent number of people. Unfortunately due to other commitments, I don't have the spare time to run diagnostics to understand where the bottlenecks may be.

 

Honestly, it could be any number of things causing your slow-downs: SSL handshaking delays, JVM heap space utilisation, system property interference, etc. Hell, even the hardware could be a factor. I have to admit, though, the most likely culprit is my implementation of the libraries, for which I can only apologise.

Link to comment
  • 2 weeks later...

Lately Davos has been incredibly slow with transfers. I don't necessarily know where the problem is, but it seems that Davos is the culprit.

 

Downloading from my FTP to my server with Filezilla, speeds are normal; but with davos they are so slow that davos doesn't even report the speed in the schedules tab.

 

 

Anyone else having issues?

 

Is there anywhere in Unraid that will show me the network speeds of specific dockers rather than the overall throughput?

 

And are there any other dockers that are similar in function to davos? It's so useful to me that I almost can't live without it now. ;)

Link to comment
  • 1 month later...

Hey @Stark, really appreciate the response! I definitely feel ya that it's a personal project -- maybe I'll see what I can do to chip in with enhancements. Speeds are confirmed by the overall network speed in the stats plugin; the readout in davos is buggy, so I knew to look elsewhere, but ultimately that's not a concern.

It's quite possible that it's an issue of lack of parallelism. IIRC, even using Filezilla, transfers from the seedbox don't reach full potential unless I'm downloading either multiple files at once, or large files in chunks. For my use-case in particular, I often download large sets of files, so dead-simple parallel downloads might be the way to go. I'll see if I can get some time for this.

Heck, realistically, rclone could probably handle my case, it just didn't have a pretty UI at the time.

Edited by drumstyx
Link to comment
  • 2 weeks later...
  • 4 weeks later...

Not that I don't appreciate this container, but I've put a script together to bypass this whole problem with rclone. If you have issues with speed, this will help.

It requires that you've set up your FTP/SFTP server as a remote with rclone. Set it up with a cron schedule matching whatever your davos schedule would have been, and tailor it to your own needs (see the example cron entry after the script).
 

remote_dir="remotename:/path/to/complete/items" # wherever your files are coming from -- make sure only completed files are here
local_dir="/local/working/directory" # davos' initial download folder
local_completed_dir="/local/complete/directory" # equivalent of davos' move after complete
log_dir="/log/dir"

function scrape () {
  if rclone lsf $remote_dir | grep . -c
  then
    if rclone move $remote_dir $local_dir -v --delete-empty-src-dirs --buffer-size=2G --stats 5s --transfers 4  2>&1 >>$log_dir
    then
      echo "scrape succeeded into $local_dir"
      if mv $local_dir/* $local_completed_dir
      then
        echo "mv succeeded into $local_completed_dir"
      else
        echo "mv failed to move to $local_completed_dir"
        return 1
      fi
    else
      echo "rclone failed on $local_dir"
      return 1
    fi
  else
    echo "No new files"
  fi
}

scrape || exit 1
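
For reference, a minimal cron entry to drive it might look like this (paths hypothetical; adjust the interval to match your old davos schedule):

# run the scrape every 15 minutes, appending output to a log
*/15 * * * * /path/to/scrape.sh >> /log/dir/scrape-cron.log 2>&1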

EDIT: And here's a sloppy rsync version, which allows for as-you-go 'completion', whereas the other script doesn't have anything in 'complete' until the whole job is done. This one's probably more delicate as-is (like I said, sloppy), but it does what I want better. Depends on parallel, rclone, and rsync.

 

remote_dir="remotename:/path/to/complete/items" # wherever your files are coming from -- make sure only completed files are here
local_dir="/local/completed/files" # where things will end up -- rsync uses temp files, so no need for a working dir
mount_dir="$local_dir/tempmountpoint" # a temp folder where the remote server gets mounted while syncing

function scrape () {
  echo -n "files found: "
  if rclone lsf -R --files-only $remote_dir | grep . -c
  then
    echo "attempting to mount"
    find $mount_dir/ -depth -type d -empty -delete &>/dev/null
    fusermount -u $mount_dir &>/dev/null
    mkdir $mount_dir &>/dev/null
    if rclone mount $remote_dir $mount_dir --daemon
    then
      echo "mount to $mount_dir succeeded"
      echo "sleeping a few seconds for mounting to be complete I guess"
      sleep 5
      echo "making local directory structure to fill"
      find $mount_dir -type d | sed 's|'$mount_dir/'|'$local_dir/'|' | tr '\n' '\0' | xargs -0 mkdir -p
      echo "trying to rsync with parallel"
      if find $mount_dir -type f | sed 's|'$mount_dir/'||' | parallel -v -j8 --progress --line-buffer rsync -v --progress --stats --remove-source-files $mount_dir/{} $local_dir/{}
      then
        echo "rsync succeeded into $local_dir"
        find $mount_dir/ -depth -type d -empty -delete &>/dev/null
        fusermount -u $mount_dir &>/dev/null
        echo "attempted to remove empty directories from $mount_dir"
      else
        echo "rsync failed to move to $local_dir"
        return 1
      fi
    else
      echo "rclone failed to mount $remote_dir"
      return 1
    fi
  else
    echo "exiting"
  fi
}

scrape
fusermount -u $mount_dir &>/dev/null
find $mount_dir/ -depth -type d -empty -delete &>/dev/null
exit 0

EDIT: Tailored the script above for better performance on large sets of files. I'm hitting speeds in excess of 800mbps if there's enough data to keep transferring -- it happens so fast that it sometimes can't even hit its peak.

To @Stark, if I could figure out how to get a better handle on the parallel output, something like this could replace the internal FTP client (which to be fair is very difficult to wrangle, it seems), while keeping the management and scheduling system. rclone's configs are machine readable/writeable, and rsync *does* provide pretty progress output when it's not parallelized, but parallel butchers it. Worst case, that could probably be faked by reading the file sizes and measuring the total transferred, but that would just give you batch-level progress.

Edited by drumstyx
Link to comment
  • 4 months later...

Within the last 2 weeks, I started having issues with the hosts/schedules being wiped after restarting/updating the container.


The only log that seemed interesting is this:
 

2019-03-03 15:03:13.506 - INFO - [DavosApplication] - No active profile set, falling back to default profiles: default


I'm assuming that the config isn't being saved to a persistent location, so it goes back to a default state.

I removed the image and all appdata to start over, and found that the container doesn't appear to write its persistent data to the /config mapping anymore. Based on the last log message written to the volume mapping, this started on 2/10.
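
(For anyone wanting to double-check their own setup, Docker can list a container's volume mappings directly - "davos" below is whatever the container is named:)

# show the configured host <-> container volume mappings
docker inspect --format '{{ json .Mounts }}' davos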

 

Link to comment

@drumstyx at this point, if you're putting effort into using rclone instead, just keep using that. I'm afraid I don't have any more free time to give to this project, as I recently took a new job which has effectively exhausted me. Parallel downloads won't be something I personally implement.

 

@manofcolombia nothing has changed regarding the actual application, but I will check to see if anything has changed regarding the image dependencies.

Link to comment
On 3/3/2019 at 3:21 PM, Stark said:

@drumstyx at this point, if you're putting effort into using rclone instead, just keep using that. I'm afraid I don't have any more free time to give to this project, as I recently took a new job which has effectively exhausted me. Parallel downloads won't be something I personally implement.

 

@manofcolombia nothing has changed regarding the actual application, but I will check to see if anything has changed regarding the image dependencies.

@Stark any luck? The container restarted again and all its config is gone again.

Link to comment

@manofcolombia Sorry mate, it's been a hectic week. I'll try and have a look at the latest tag in the next couple of days. In the meantime, switch back from ":latest" and use ":2.2.1" instead. That was the version of the app before we migrated our pipeline. Can you see if this helps? If so, we know it's an issue with the latest build of the Docker image.
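
For anyone running the container by hand rather than through a template, that just means pulling the pinned tag:

# fetch the pre-migration build instead of :latest
docker pull linuxserver/davos:2.2.1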

Link to comment
22 hours ago, manofcolombia said:

@Stark any luck? The container restarted again and all its config is gone again.

OK, we've figured out what went wrong. During the migration phase of our pipeline, the build command omitted a flag which sets it to use the production configuration. The result is that the latest image was built with a "staging" environment in mind, and so kept its database in memory rather than in a file.

 

A new build has been pushed through the pipeline, so can you pull latest and check? You may find that, now it looks for the database file correctly, it picks up your old existing database, so it will be a bit stale.

 

Long story short: we dropped the ball a bit with the build, and I can only apologise for that.

Link to comment
