thomast_88

Community Developer

Everything posted by thomast_88

  1. I've merged the dev branch into master, so the latest tag on Docker Hub contains the newest version. I feel it is quite stable, as I've been backing up millions of files during the past few months. I'm using it for offsite backup of docker application data. If somebody is missing a feature, let me know.
  2. https://hub.docker.com/r/sdorra/scm-manager/ Supports Subversion, Git and Mercurial. I have not tried the Docker image, only SCM-Manager on Windows, and it works perfectly.
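     If you want to try the container, something along these lines should get it running - a minimal sketch only, the host path, data directory and port mapping are my assumptions and not tested by me:
     # minimal sketch - host path, data directory and port are assumptions
     docker run -d --name scm-manager \
       -p 8080:8080 \
       -v /mnt/user/appdata/scm-manager:/var/lib/scm \
       sdorra/scm-manager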
  3. flexibility > user-friendliness. Crisp! That was quick! I've already changed my first job to run every 5 minutes.
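     For anyone wondering, the custom schedule is just a standard cron expression, e.g.:
     # run the job every 5 minutes
     */5 * * * *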
  4. Very simple script which will resume paused/suspended VMs or start shut off VMs.
     #!/bin/sh
     #description=Start / resume all VMs
     #arrayStarted=true

     # resume any paused or suspended VMs
     virsh list --all | grep -E "(paused|suspended)$" | awk '{$1=""; print $0}' | sed -e 's/paused//g; s/suspended//g' | while read domain
     do
       echo "Resuming $domain ..."
       virsh resume "$domain"
       echo "Resumed $domain ..."
     done

     # start any VMs that are shut off
     virsh list --all | grep -E "(shut off)$" | awk '{$1=""; print $0}' | sed -e 's/shut off//g' | while read domain
     do
       echo "Starting $domain ..."
       virsh start "$domain"
       echo "Started $domain ..."
     done
  5. Okay cool - I appreciate your work. I can live with 1 hour for now. I'll wait patiently for custom schedule support.
  6. Would it be possible to set a custom cron schedule? I want to run a script every 5 minutes rather than every 1 hour (which seems to be the lowest frequency at the moment).
  7. A new version of rclone is out: https://github.com/ncw/rclone/releases
  8. Ok so I got the same problem today. It seems that when my cache mount (1x 250GB Samsung EVO drive) reaches 54% usage, unRAID somehow thinks it's 100% full (based on the output from df -h). Then everything goes haywire: VMs stop, certain docker applications stop and I'm unable to reboot. Once I delete some stuff and it goes below 53% usage it's fine again. I'm on the latest RC5; it happened on RC3 & RC4 as well. I've tried:
     - Scrubbing (with/without error fixing)
     - SMART test
     - SMART Extended test
     - Parity Check
     - Rebooting several times
     None of these report any errors. Anybody have an idea of what could be wrong?
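     Since the cache is btrfs (scrub works on it), plain df can be misleading when all chunks are allocated, so these are the commands I'm poking at to compare allocation with actual usage - the balance at the end is only a suggestion I've seen elsewhere, not something I've confirmed fixes this:
     # show how btrfs has allocated chunks vs. what is actually used
     btrfs filesystem usage /mnt/cache
     btrfs filesystem df /mnt/cache
     # suggestion only: compact partly-used data chunks to release allocated-but-empty space
     btrfs balance start -dusage=75 /mnt/cache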
  9. That wasn't the case. Had to force shut it down by holding in the physical power button :-/. Well, my problem seems to have normalized now. df -h (cache disk) output:
     root@unRAID:/mnt# df -h
     /dev/sdc1       233G  122G  112G   53%  /mnt/cache
     I started deleting some VMs and eventually the 100% disk usage went down to 53%, which is correct. I would really like to find the root cause of this issue, as it happened suddenly today after an uptime of roughly two months. I fear this will happen again in the future. I suspect it is related to a bad user-share configuration together with the cache floor setting, but I'm just guessing. Are there any recommended settings for these?
  10. When I try to reboot with shutdown -r I get the following message in syslog all the time:
     Nov 28 18:08:15 unRAID emhttp: shcmd (1609): umount /mnt/cache |& logger
     Nov 28 18:08:15 unRAID root: umount: /mnt/cache: target is busy
     Nov 28 18:08:15 unRAID root: (In some cases useful info about processes that
     Nov 28 18:08:15 unRAID root: use the device is found by lsof(8) or fuser(1).)
     Nov 28 18:08:15 unRAID emhttp: Retry unmounting disk share(s)...
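     As the message hints, something is still holding /mnt/cache open; this should show which processes (I haven't tracked down the culprit yet):
     # list processes with files open on the cache filesystem
     lsof /mnt/cache
     # or, with PIDs and users, via fuser
     fuser -vm /mnt/cache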
  11. But I just have 1 cache drive - a 250GB one, as you should see in the diagnostics. Can that be the issue? I appreciate your help so far. I'm lost without a server. This is my first major issue in a year of running unRAID :-(
  12. I just scrubbed with and without correcting errors. No luck. I see no pool information. Is this what you want to see?
  13. Sure, it's here https://www.dropbox.com/s/sq9sb778n12ryt4/unraid-diagnostics-20161128-1519.zip?dl=0
  14. Hi, I'm having some issues with my unRAID (latest RC - this started happening even before my upgrade). df -h:
     rootfs           16G  522M   16G    4%  /
     tmpfs            16G  344K   16G    1%  /run
     devtmpfs         16G     0   16G    0%  /dev
     cgroup_root      16G     0   16G    0%  /sys/fs/cgroup
     tmpfs           128M  3.1M  125M    3%  /var/log
     /dev/sda1       7.4G  418M  7.0G    6%  /boot
     /dev/md1        7.3T  7.1T  191G   98%  /mnt/disk1
     /dev/md2        7.3T  7.1T  208G   98%  /mnt/disk2
     /dev/sdc1       233G  140G     0  100%  /mnt/cache
     shfs             15T   15T  398G   98%  /mnt/user0
     shfs             15T   15T  398G   98%  /mnt/user
     The cache disk is showing 100% full, but in the webUI I see the cache drive still has 101 GB left. I've tried readjusting the cache floor setting, but nothing is working. What is taking up that extra space??
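     To compare what the files actually occupy with what df claims, I'm checking it like this (the paths are just my layout):
     # actual per-folder usage on the cache drive
     du -sh /mnt/cache/* 2>/dev/null | sort -h
     # versus what the filesystem itself reports
     df -h /mnt/cache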
  15. The one in this thread works fine. It's not unRAID-friendly in terms of a CA template, but you can still use it with some manual setup. I've not had time to look further into it, as I'm leaning towards rclone.
  16. Quote: "Where is your Application Data Storage Path mounted? Could it be some Docker issue with upgrades in unRAID? Did you have any other docker applications with issues? Which versions were you upgrading between?" As far as I know it is set to the default; I did not change it: "/mnt/cache/appdata/gitlab-ce/data"
  17. I'm not using the copy option myself - I'm using sync. This is probably better to ask on the rclone github page, or maybe at the unRAID rclone plugin page - I see some people over there are trying to sync/copy lots of data like you.
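     Roughly what my sync job looks like, for reference - the remote name, paths and flags here are placeholders, so adjust them for your own setup:
     # sync local appdata to the remote; the destination is made to mirror the source
     rclone sync /mnt/cache/appdata secret:backup/appdata --transfers 8 -v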
  18. Where is your Application Data Storage Path mounted? Could it be some Docker issue with upgrades in unRAID?
  19. Did you have plans for extending this to support auto-updating Docker containers as well? I still think that would be a nice addition. Perhaps merge / integrate it with the Docker Buttons plugin: https://github.com/gfjardim/unRAID-plugins/blob/master/plugins/docker.buttons.plg ?
  20. I'm running it behind nginx. Works fine, so I guess it should work with Apache as well.
  21. I can hardly notice any difference when starting playback from an encrypted ACD volume with rclone. Maybe 4-5 seconds in total. Could it be your connection speed? 17 seconds sounds extreme.
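     For comparison, my mount is nothing fancy - roughly like this, where the remote name and mount point are placeholders:
     # read-only FUSE mount of the encrypted remote, run in the background
     rclone mount secret: /mnt/disks/acd --allow-other --read-only &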
  22. Source: http://rclone.org/downloads/
  23. I've submitted a PR to upgrade to the latest rclone version. Apparently it fixes some issues with corrupt files when seeking and some ACD file limit warnings.
  24. Great work putting this together, Waseh! Can you share a link to the source code? I will eventually make a simple GUI for this to make it even more user-friendly.