olympia

Members
  • Posts: 458
  • Joined
  • Last visited

Everything posted by olympia

  1. No, but the HDD I use for cache is fast enough to handle gigabit speed, and yes, I use a cache-only folder. I know what I am doing. In the meantime I found some other reports on the Transmission forums that it cannot handle speeds above 40MB/sec: https://forum.transmissionbt.com/viewtopic.php?t=14697 https://forum.transmissionbt.com/viewtopic.php?t=16725 Deluge is also not much better. I just tried qBittorrent and it CAN saturate the network, so it seems this is the ONLY torrent client that can handle that speed. I am (was) a fan of Transmission though... It's a pity...
  2. Could someone using a gigabit link with the Transmission Docker confirm that the full bandwidth can be utilized? Somehow I seem to be capped at about 40MB/sec, while from another computer on the same network I can get 100MB+/sec using the same connection with the same torrent. It doesn't seem to be a limitation of the Docker environment (I tried the Transmission plugin as well, with the same results) - could unRAID itself somehow have a limitation preventing Transmission from running at full speed?
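
     One way to rule the Docker layer in or out is a raw throughput comparison with iperf3; a minimal sketch, assuming a second machine on the LAN, the commonly used networkstatic/iperf3 image, and a placeholder server IP of 192.168.1.50:

       # on another machine on the LAN, start an iperf3 server
       iperf3 -s
       # from the unRAID host itself, to measure raw host throughput
       iperf3 -c 192.168.1.50
       # from inside a container, to see whether Docker networking caps it
       docker run --rm networkstatic/iperf3 -c 192.168.1.50

     If the container result matches the host result, the cap is in the torrent client rather than in Docker networking.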
  3. I tried both, and there was no difference in the speed rates.
  4. Thanks for the attention, but I know what I am doing on that side. I have torrents I am 100% confident to test speed with, and on the contrary I don't have any remote host from which I can download at guaranteed gigabit speed. Simply put, the question is: is the E8200 powerful enough to run dockers at this speed level, or do I have to upgrade the whole rig?
  5. I have just upgraded my Internet link to 1Gbit and realized that my unRAID box doesn't seem to cope with the increased resource need to keep up with the available bandwidth. I did some tests with the Deluge/Transmission dockers and the max speed I can get is around 35MB/sec with Deluge, and only about 20MB/sec with Transmission (which would be my preference anyway). I am running that box on an old C2D CPU (Intel E8200) - is this surely the bottleneck, simply too weak to saturate the available net/disk throughput? To clarify: I am able to get around 100MB/s from my notebook on the same Internet link with the same torrent, so the link is as advertised. I am also able to copy to the unRAID cache drive over the LAN at beyond 100MB/s, so we can take unRAID's connection to the LAN out of the picture too. So is this a docker performance issue with the mentioned CPU, or could this be a configuration issue somewhere?
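
     A quick way to check whether the E8200 is the bottleneck is to watch per-core CPU load while a torrent is running; a minimal sketch, assuming a shell on the unRAID box (mpstat needs the sysstat package, which may not be installed):

       # press 1 inside top to show each core; a single pegged core while
       # downloading suggests the client's network/hashing thread is CPU-bound
       top
       # alternative, if sysstat is available: per-core stats every 2 seconds
       mpstat -P ALL 2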
  6. Many thanks jbrodriguez! It works perfectly now! A minor thing: while I was adding servers I used autocomplete for the IP addresses, and my Android OS put a space after the IP address during autocomplete. ControlR won't add the server in this case, as it seems it does not recognize it as an IP/host. It took me quite a few minutes to figure out what was wrong. Maybe you could look into this to avoid confusing users in similar cases? Also, not sure how difficult it would be to code, but it would be nice if the order of the servers could be modified (not something you cannot live without, but it's always nice to have your own server first). Thank you again!
  7. I was turning off/disabling spin-up groups on two different servers today, and in both cases, when I hit Apply, the Banner (which was disabled on both servers) came back, i.e. got enabled. I had to turn it off in Display Settings once again. Is this a bug in the webgui? (I hope this has not changed other, less noticeable settings.) Edit: this is with unRAID 6.3.5 with webgui 2017.05.26
  8. Hi, yes, the servers have the same name, but they are on different IPs and different subnets (and different sites, but that's not relevant here). I can add and manage both of them individually one at a time, but cannot add them together.
  9. When I try to add a second server, it actually replaces the first one instead of being added to the server list. This is on a vanilla installation. Is this just me?
  10. The Dynamix Local Master plugin no longer displays yoda in the header, while in SMB settings the plugin correctly reports that unRAID is the elected master browser. Is this only me?
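
     As a cross-check of what the plugin shows, Samba can be asked directly which host is the elected local master; a minimal sketch, assuming nmblookup is available and the workgroup is named WORKGROUP:

       # ask the network for the master browser of the workgroup
       nmblookup -M WORKGROUP
       # or query a host's NetBIOS name table; the entry flagged <1d>
       # marks the local master browser
       nmblookup -S yoda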
  11. I am using v6.3.5. Sure, yes, the webgui is all fine.
  12. Just installed, and everything works except dockers and VMs. For those I get "No dockers are installed or they are currently unavailable". Would you have any hint where to look?
  13. Is it possible to find/view/copy the results of the extended tests from the file system? I have a huge log with dupes, and it is difficult to process it in the "view results" window of the GUI. I presume these results are saved somewhere? However, I cannot locate them.
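
     Not knowing where the plugin writes its results, a generic way to hunt for the file is to list whatever was written right after the test finishes; a minimal sketch, assuming the results land somewhere under /tmp or /var/log:

       # list files modified in the last 10 minutes, newest first;
       # the extended-test results file should be near the top
       find /tmp /var/log -type f -mmin -10 2>/dev/null | xargs -r ls -lt | head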
  14. Quoting the exchange so far: "Even if this was never officially supported, this regression is REALLY sad." - "No, that restriction is not true. We'll update the OP." - "Many many people have trouble with this. Some do not. Probably a race condition, but I still felt my contribution was justified." - "I haven't looked at the UA plugin for a while; as long as it mounts devices in response to the 'disks_mounted' event, which takes place before services are restarted, it should work OK." - "UD mounts its devices on the 'disks_mounted' event." So what is the conclusion here, Tom? Is this now something with UD, or some defect related to the 'disks_mounted' event?
  15. Even if this was never officially supported, this regression is REALLY sad. My cache drive is an HDD and I want it to spin down when there is no activity. Obviously, having docker run from there will prevent the cache drive from spinning down. For performance reasons I have a smaller, but more than large enough, SSD installed as an "appdrive", mounted with Unassigned Devices, and it worked perfectly with v6.1.9. I had better performance and lower power consumption. Now I have to install docker on my cache drive, and this will result in: - lower performance - higher power consumption - a cache drive cluttered with docker stuff This makes me not so happy... Alternatively, I can spend a lot of money on an SSD large enough to house both cache and docker on one disk, but that would be money thrown away...
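
     For context on the event being discussed: a minimal sketch of how a plugin typically hooks 'disks_mounted', assuming the emhttp convention of executable event scripts under a plugin's event/ directory (the plugin name here is illustrative):

       # create an event hook that emhttp runs when 'disks_mounted' fires,
       # i.e. before services are restarted
       mkdir -p /usr/local/emhttp/plugins/myplugin/event
       printf '%s\n' '#!/bin/bash' \
         'logger "disks_mounted fired: safe to mount extra devices now"' \
         > /usr/local/emhttp/plugins/myplugin/event/disks_mounted
       chmod +x /usr/local/emhttp/plugins/myplugin/event/disks_mounted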
  16. It works for me as well as we speak! Thank you for this Docker!
  17. Aha, OK, thanks. Just a question: in which log can we see the above-mentioned curl error?
  18. "Can you post your docker mappings... and we'll take a look at this..." It's /mnt/cache/appdat/config and /mnt/cache/appdat/data. Could it somehow be related to the unRAID version? I am running v6.1.9, not the 6.2 series... Edit: I had just removed the previous version, which had installed flawlessly and worked perfectly (other than no longer updating itself due to the schema changes) - so I have a precedent of seeing this docker working.
  19. I am having exactly the same issue. Did you find a solution to the problem?
  20. Quoting my earlier reply: "Another update of the Dynamix Cache Dirs plugin is available, which has these improvements. Works fine here after a quick test! Many thanks again!" And the announcement: "Dynamix Cache Dirs has been updated using an updated cache_dirs script by Alex R. Berg. His script is improved in several ways, he fixed some bugs, and it shuts down very nicely when cache_dirs is stopped. Update the Dynamix Cache Dirs plugin and let us know if it is working for you when the array is stopped. This is nothing scientific, but my server CPU load seems to have been reduced." Yes, I can confirm stopping is working fine! Thank you for having the updated cache_dirs script included. Let's cross our fingers that it works out fine, as it was not beta tested by the community before, so we will need to keep an eye on it.
  21. "Another update of the Dynamix Cache Dirs plugin is available, which has these improvements." Works fine here after a quick test! Many thanks again!
  22. I will be curious how your tests turn out, but on my side, bonienl's update works like a charm. Both cache_dirs and its child processes are killed. A big thank you to both of you for taking the time and fixing this. This issue had been outstanding for years!
  23. Not sure how to answer. The script runs and it stops cache_dirs immediately; I also get the prompt back immediately. However, 'ps -ef | grep find' shows the actual child 'find' process is still running and finishes whenever it finishes, depending on how large the dir is. So killing cache_dirs works like a charm, but the sub-process stays there (its PPID changes to 1 after the parent is gone) and does not get killed. "Based on your quick test, there is nothing more I can do. You should post this issue in the cache_dirs discussion and see if Joe L., the creator of the cache_dirs script, can add a more robust shutdown to it." Fair enough. However, I think he is not so interested in this any more; I raised it with him year(s) ago... That's why I was looking for another viable solution that is not evil... Just one question: I thought the intention of your adjustments was to kill the subprocesses on cache_dirs stop as well, regardless of whether cache_dirs does that itself; have I misunderstood? Based on your latest feedback, I now feel you still expected cache_dirs to do that on its own. Thank you for looking into this!
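
     For the record, the orphaned 'find' children can be taken down together with cache_dirs by killing the whole process group; a minimal sketch, assuming cache_dirs runs as its own process-group leader (if it shares a group with its launching shell, this kills more than intended):

       # find the oldest process matching 'cache_dirs' on its command line
       PID=$(pgrep -f -o cache_dirs)
       # look up its process group id
       PGID=$(ps -o pgid= -p "$PID" | tr -d ' ')
       # a negative id signals the entire group, children included
       kill -TERM -- "-$PGID"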