
Community Reputation

2 Neutral

About ThatDude

  • Rank
    Advanced Member


  • Personal Text
    “I bent my wookie.”


  1. Excellent - thanks for the sanity check 🙂
  2. I have two unRAID servers, both fully encrypted with LUKS volumes, and both are using the SAME encryption key. Can I move a populated encrypted drive from one server to the other, add it to the array, and keep all of its data intact?
  3. I've managed to get this openvas docker running from the command line but I'm lost trying to figure out how to create a pretty wrapper around it for unRAID's docker page. Can anyone help? Or point me in the right direction? This command line got me up and running: docker run -d -p 10443:443 -e PUBLIC_HOSTNAME=tower.local mikesplain/openvas:9
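For context, unRAID's Docker page is driven by XML templates dropped into /boot/config/plugins/dockerMan/templates-user/ on the flash drive. A rough sketch of the `docker run` command above as a template follows — the field names are from memory, not verified against the current dockerMan schema, so compare it to any existing template on your flash drive before relying on it:

```
<?xml version="1.0"?>
<Container version="2">
  <Name>openvas</Name>
  <Repository>mikesplain/openvas:9</Repository>
  <Network>bridge</Network>
  <!-- -p 10443:443 -->
  <Config Name="WebUI Port" Type="Port" Target="443" Default="10443" Mode="tcp" Display="always" Required="true"/>
  <!-- -e PUBLIC_HOSTNAME=tower.local -->
  <Config Name="PUBLIC_HOSTNAME" Type="Variable" Target="PUBLIC_HOSTNAME" Default="tower.local" Display="always" Required="true"/>
</Container>
```

Each `Config` element maps one `-p` or `-e` flag from the command line onto an editable field in the unRAID UI.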
  4. That's been my experience too 😞 I've fallen back to scripting a graceful shutdown then taking a full backup of the VMs. I really wanted incremental backup files so that I could push them to cloud storage. I need to figure out a workaround.
  5. Mods please delete, I found an existing thread explaining what I needed to do.
  6. Is there any way to create a restricted user for use with rsync? I have rsync working perfectly to sync a huge directory from my remote workstation to my unRAID server using the root user like this: rsync -avzh --delete Pictures/ root@[external-ip-address]:/mnt/user/backup/ But I'd like to create a dedicated rsync user and give it access exclusively to the 'backup' share and no or very limited ability to SSH into the server.
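One common approach (a sketch, not unRAID-specific) is to pin the dedicated user's SSH key to an rsync-only command via the `rrsync` wrapper script that ships with rsync — its install path varies by distro, and the public key below is a placeholder:

```shell
# Build an authorized_keys entry that jails a key to rsync-only
# access under /mnt/user/backup via the rrsync wrapper.
RRSYNC=/usr/share/rsync/scripts/rrsync        # path varies by distro
PUBKEY='ssh-ed25519 AAAAC3...example rsync-backup'  # placeholder key

# 'command=' forces every login with this key through rrsync, locked
# to the backup share; 'restrict' disables forwarding and PTY
# allocation, so the key cannot be used for an interactive shell.
ENTRY="command=\"$RRSYNC /mnt/user/backup\",restrict $PUBKEY"
printf '%s\n' "$ENTRY"
```

Append that line to the dedicated user's ~/.ssh/authorized_keys; the client-side rsync command stays the same apart from the username, with the destination path becoming relative to the jailed directory.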
  7. Hey did you ever figure this out? I would really like to make live incremental backups that I push to the cloud but I don't want to re-invent the wheel if you've already done it.
  8. I'm having an issue on one of my unRAID servers where FCP times out. I get this in the error log:
     Jun 24 07:40:31 bigbird nginx: 2019/06/24 07:40:31 [error] 3771#3771: *8043 upstream timed out (110: Connection timed out) while reading response header from upstream, client:, server: , request: "POST /plugins/fix.common.problems/include/fixExec.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "", referrer: ""
     Is there another error log that I can check to see what's happening?
  9. I figured out a workaround to set the DNS from the client side: open the .ovpn file and add the following directives.
     # ipv4
     pull-filter ignore "dhcp-option DNS"
     # ipv6
     pull-filter ignore "dhcp-option DNS6"
     # put preferred DNS server here
     dhcp-option DNS
  10. Is there any way to push a user-specified DNS server? I have a Pi-Hole running on my LAN and I'd like my connected devices to use its DNS IP address instead. This allows me to block ads on my connected iOS devices and gives me local name resolution - really useful.
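If you control the OpenVPN server config, the standard way is to push the DNS option from the server side rather than filtering on each client — a sketch, where 192.168.1.10 stands in for the Pi-Hole's actual LAN address:

```
# in the OpenVPN server config (server.conf)
push "dhcp-option DNS 192.168.1.10"
# optionally route all client traffic through the tunnel so DNS
# queries from remote clients can actually reach the LAN
push "redirect-gateway def1"
```

Clients that apply pushed options (most platforms' official clients do) will then resolve through the Pi-Hole automatically.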
  11. Thank you for this! I was getting completely stuck following the instructions in the readme that's generated with the cert.
  12. Upgraded from 6.6.6 to 6.6.7 and now have a boot loop at the blue menu screen: it counts down from 5, then just resets itself and counts down again. I tried each of the menu options and only memtest86 works.
  13. Great! Thank you. Hopefully the team will add this functionality into the unRAID shares tab for BTRFS volumes in the future.
  14. Thanks so much for this info, I've been desperate to implement compression on my huge collection of Sony RAW image files. I copied across 20GB of files as a test and my compsize output looks like this:
     Processed 568 files, 32187 regular extents (32187 refs), 0 inline.
     Type       Perc     Disk Usage   Uncompressed   Referenced
     TOTAL       86%      16G          19G            19G
     none       100%      15G          15G            15G
     zlib        25%      948M         3.6G           3.6G
     I think it's saying that the folder is 86% of what it was, i.e. it's 16G on disk rather than 19GB which is the uncompressed size. Is that correct?
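That reading is right: compsize's Perc column is disk usage divided by uncompressed size. A quick cross-check with the rounded totals above — note compsize computes from exact byte counts, so it reports 86% where the rounded 16G/19G figures give roughly 84%:

```shell
# ratio of compressed on-disk size to uncompressed size, using the
# rounded TOTAL figures from the compsize output (16G vs 19G)
awk 'BEGIN { printf "%.0f%%\n", 16 / 19 * 100 }'
```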
  15. I have around 5TB to re-home. Reading this thread, G-Suite seems to be the best option if they are allowing single users to blow past the non-team storage limit. Is anyone doing this already? Duplicati looks like the best docker solution - is anyone already using it with a large amount of data? Can you share your experiences?