Everything posted by thomast_88

  1. Hi there, you can actually just pass this environment variable to the container: RCLONE_VERSION="<version>" (the default is "current") and it will download the specified version for you. I've kickstarted a build as well, though. See the Dockerfile: https://github.com/tynor88/docker-rclone/blob/dev/Dockerfile
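     For example (a minimal sketch; the tynor88/rclone image name and the v1.36 version tag are placeholders, substitute the repository and rclone version you actually use):

         docker run -d --name rclone -e RCLONE_VERSION="v1.36" tynor88/rclone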
  2. Thanks - I will try that. What exactly will this setting do? cacheExtra="nossd"
  3. Hmm, so yesterday my server was acting up again. Seems like it's because of this:

         root@unRAID:~# btrfs fi show /mnt/cache
         Label: none  uuid: 4ad605bd-2713-453f-916b-699068fd9790
         Total devices 2 FS bytes used 203.20GiB
         devid 1 size 232.89GiB used 232.88GiB path /dev/sdc1
         devid 2 size 232.89GiB used 232.88GiB path /dev/sde1

     I did as per @johnnie.black's advice:

         btrfs balance start -dusage=75 /mnt/cache

     And it became:

         root@unRAID:~# btrfs fi show /mnt/cache
         Label: none  uuid: 4ad605bd-2713-453f-916b-699068fd9790
         Total devices 2 FS bytes used 175.13GiB
         devid 1 size 232.89GiB used 195.05GiB path /dev/sdc1
         devid 2 size 232.89GiB used 195.05GiB path /dev/sde1

     This morning I'm back to:

         root@unRAID:~# btrfs fi show /mnt/cache
         Label: none  uuid: 4ad605bd-2713-453f-916b-699068fd9790
         Total devices 2 FS bytes used 203.20GiB
         devid 1 size 232.89GiB used 232.88GiB path /dev/sdc1
         devid 2 size 232.89GiB used 232.88GiB path /dev/sde1

     Why is this happening? I did write around 30 GB to the cache during the night, but why is it showing as full again? Maybe you have an idea, @johnnie.black? You seem to be the expert on this topic.
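     In case it helps anyone else hitting this fully-allocated state: the same filtered balance could also be run on a schedule, e.g. via cron (an untested sketch; the /sbin/btrfs path and the 75% threshold are assumptions, adjust to your system):

         # Sunday 05:00: re-pack data chunks that are less than 75% full on the cache pool
         0 5 * * 0 /sbin/btrfs balance start -dusage=75 /mnt/cache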
  4. Another guy on Reddit seems to have problems with RAID 1 as well:
  5. @deusxanime can you try to copy a large file (> 10 GB) from another disk to the cache array? While it's copying, SSH in, run "top" and monitor the load average.
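     Something like this from a second SSH session would do (a sketch; the source path is a placeholder):

         cp /mnt/disk1/bigfile.mkv /mnt/cache/ &    # start the large copy in the background
         while true; do uptime; sleep 5; done       # print the load averages every 5 seconds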
  6. I'm still having issues. Copied an 11GB file from my primary array to the cache array (RAID 1), and the server load went to 25-ish before it ended, making my Dockers/VMs crash. @johnnie.black I saw your tests earlier, and noticed you are using RAID 0. Have you had any issues with RAID 1, or maybe have any idea why this is happening? @aptalca did you get all the issues fixed, and are you running RAID 0 or RAID 1?
  7. @GilbN This is the support thread for the Docker container. You should look here for plugin support:
  8. Before:

         Total devices 2 FS bytes used 186.99GiB
         devid 1 size 232.89GiB used 232.88GiB path /dev/sdc1
         devid 2 size 232.89GiB used 232.88GiB path /dev/sde1

     After:

         Total devices 2 FS bytes used 189.18GiB
         devid 1 size 232.89GiB used 192.03GiB path /dev/sdc1
         devid 2 size 232.89GiB used 192.03GiB path /dev/sde1

     Trim:

         fstrim -v /mnt/cache
         /mnt/cache: 81.7 GiB (87732568064 bytes) trimmed

     Not sure what those numbers exactly mean, but so far I feel a performance improvement - that is promising! I will try with some large files tomorrow :-)
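     As I understand it, the trimmed figure is simply how much free space fstrim reported back to the SSDs as discardable. To keep things from degrading again, the trim could be scheduled via cron (a sketch; the weekly timing and the /sbin/fstrim path are assumptions):

         # Sunday 04:00: TRIM the free space on the cache pool
         0 4 * * 0 /sbin/fstrim -v /mnt/cache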
  9. @johnnie.black I will test this straight away when I get home. Thanks for putting your findings up!
  10. Anyone had luck running the SQL Server Docker on unRAID? I've tried different tags from CTP1 to RC2, but I'm getting different errors. Can this be related to the container not being able to run as root? There are 2 issues on GitHub around unRAID already:
      - Failed To Start -- No such file or directory #25
      - A serious error condition has been encountered. Cannot access '/var/opt/mssql/log/errorlog*': No such file or directory #56
      Microsoft does not allow changing the PUID and PGID like most unRAID-friendly containers do. Any way to hack around this?
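      For reference, this is roughly the run command I've been trying (a sketch; the appdata host path is from my own setup, the password is a placeholder, and rc2 is just one of the tags I tried; ACCEPT_EULA and SA_PASSWORD are the variables documented for the image):

          docker run -d --name mssql \
            -e ACCEPT_EULA=Y \
            -e SA_PASSWORD='YourStrong!Passw0rd' \
            -p 1433:1433 \
            -v /mnt/cache/appdata/mssql:/var/opt/mssql \
            microsoft/mssql-server-linux:rc2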
  11. I have a BTRFS RAID 1 (2 x 250GB Samsung EVO), and I've been having the same issues as @aptalca for months. The RAID is unusable when copying/moving stuff. If anybody has an idea how to trace this down, let me know. I'm willing to invest my time to get this fixed.
  12. I think this is a serious bug which should be addressed. Isn't BTRFS the recommended way to run a cache array? Right now it's working poorly, with all these crashes when writing large files.
  13. Hmpf - thanks for the quick reply. I'm hoping someone can step in with a working solution. I'd really like to utilize the BTRFS RAID function without the server crashing.
  14. @aptalca Sorry for bugging you, but how is your test progressing? I'm in the same boat as you at the moment. I can conclude that when running with a single drive, everything works properly, but in RAID 1, things start to become unstable. This sucks pretty much, as it renders the cache functionality useless. But this has to be a configuration issue? Many people are running with several drives, and they don't report these issues...
  15. Thanks for the plugin, simple but helpful. Using it for unBALANCE currently. Will probably add Krusader as well.
  16. Moving the scripts to a GitHub repo is a start. That will even relieve you of maintaining the first post = less work.
  17. What about a GitHub repo for all these great scripts? I can help make it. (Feature request) And have them listed in the GUI as "Community Scripts", just like the CA plugin.
  18. Since 6.3 for me. I already posted an issue half a year ago about this.
  19. Oww, okay. Well, thanks for the help then, @Darksurf. I was just worried with so many processes showing 99.99% IO.
  20. Yes, 2 VMs (Windows 10 Pro) and around 10 Dockers.
  21. No, not writing anything. You can see the total disk write is below 250 kbps. That's what worries me. Maybe related to this as well:
  22. Any ideas how we can diagnose this, @johnnie.black? Like @aptalca mentioned, it's hard to do any diagnosis, as the system is unresponsive when it happens. I can easily reproduce this by writing a large file to the cache array, but then the system becomes unresponsive (the load average keeps rising). I basically stopped using my cache array for bigger file transfers, as this problem causes the whole system to lock up.
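      For anyone who wants to reproduce it, this is essentially all it takes on my system (a sketch; the ~12 GB size and the test path are arbitrary) - watch the load average from a second session as described earlier:

          dd if=/dev/zero of=/mnt/cache/testfile bs=1M count=12000   # ~12 GB sequential write to the pool
          rm /mnt/cache/testfile                                     # clean up afterwards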
  23. Ok, sounds exactly like the issue I have been having since I changed my cache from 1 to 2 disks :-(. When it's at >40-ish%, VMs/Dockers begin crashing until I stop the write command.