
ThatDude

Members
  • Content Count
    81
  • Joined
  • Last visited

Community Reputation

2 Neutral

About ThatDude

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    UK
  • Personal Text
    “I bent my wookie.”


  1. I've managed to get this OpenVAS docker running from the command line, but I'm lost trying to figure out how to create a pretty wrapper around it for unRAID's Docker page. Can anyone help, or point me in the right direction? This command line got me up and running (a template sketch follows this list): docker run -d -p 10443:443 -e PUBLIC_HOSTNAME=tower.local mikesplain/openvas:9
  2. That's been my experience too 😞 I've fallen back to scripting a graceful shutdown and then taking a full backup of the VMs (a sketch of that approach follows this list). I really wanted incremental backup files so that I could push them to cloud storage; I need to figure out a workaround.
  3. Mods please delete, I found an existing thread explaining what I needed to do.
  4. Is there any way to create a restricted user for use with rsync? I have rsync working perfectly, syncing a huge directory from my remote workstation to my unRAID server as the root user like this: rsync -avzh --delete Pictures/ root@[external-ip-address]:/mnt/user/backup/ But I'd like to create a dedicated rsync user and give it access exclusively to the 'backup' share, with no (or very limited) ability to SSH into the server. (One possible setup is sketched after this list.)
  5. Hey did you ever figure this out? I would really like to make live incremental backups that I push to the cloud but I don't want to re-invent the wheel if you've already done it.
  6. I'm having an issue on one of my unRAID servers where FCP (Fix Common Problems) times out. I get this in the error log: Jun 24 07:40:31 bigbird nginx: 2019/06/24 07:40:31 [error] 3771#3771: *8043 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.12.140, server: , request: "POST /plugins/fix.common.problems/include/fixExec.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.12.250", referrer: "http://192.168.12.250/Settings/FixProblems" Is there another error log I can check to see what's happening?
  7. I figured out a workaround to set the DNS from the client side: open the .ovpn file and add the following directives.

        # ipv4
        pull-filter ignore "dhcp-option DNS"
        # ipv6
        pull-filter ignore "dhcp-option DNS6"
        # put preferred DNS server here
        dhcp-option DNS 192.168.0.200
  8. Is there any way to push a user-specified DNS server? I have a Pi-Hole running on my LAN and I'd like my connected devices to use its IP address for DNS instead. That lets me block ads on my connected iOS devices and gives me local name resolution - really useful. (A server-side sketch follows this list.)
  9. Thank you for this! I was getting completely stuck following the instructions in the readme that's generated with the cert.
  10. Upgraded from 6.6.6 to 6.6.7 and now have a boot loop at the blue menu screen: it counts down from 5, then just resets itself and counts down again. I tried each of the menu options and only memtest86 works.
  11. Great! Thank you. Hopefully the team will add this functionality into the unRAID shares tab for BTRFS volumes in the future.
  12. Thanks so much for this info, I've been desperate to implement compression on my huge collection of Sony RAW image files. I copied across 20GB of files as a test and my compsize output looks like this:

        Processed 568 files, 32187 regular extents (32187 refs), 0 inline.
        Type       Perc     Disk Usage   Uncompressed   Referenced
        TOTAL       86%        16G           19G            19G
        none       100%        15G           15G            15G
        zlib        25%       948M          3.6G           3.6G

      I think it's saying that the folder is now 86% of its original size, i.e. it's 16G on disk rather than the 19G uncompressed size. Is that correct?
  13. I have around 5TB to re-home. Reading this thread, G Suite seems to be the best option if they really are letting single users blow past the non-team limit. Is anyone doing this already? Duplicati looks like the best docker solution; is anyone already using it with a large amount of data? Can you share your experiences?
  14. I would love to see file compression added to unRAID, either on a per-share, per-disk, or (even better) per-file-type basis. I see that unRAID supports BTRFS, which in turn already supports compression; would it be possible to enable this in a future build? BTRFS compression: https://btrfs.wiki.kernel.org/index.php/Compression My personal use case: I'm a commercial photographer and shoot around 50,000 RAW (ARW) images each year, which I store on my unRAID server. These files are mostly full of air and compress by 3:1 or more. I currently have around 18TB of them, which with compression would use ~6TB of space. That would save me a fortune in storage costs, and I suspect it would actually speed up read/write operations, since less data would be read from or written to the array disks. (A manual BTRFS setup is sketched after this list for anyone who wants to experiment now.)
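
For post 1, here is a minimal sketch of an unRAID dockerMan template wrapping the same docker run command. The field layout follows the common version-2 template format, but treat the tag names as assumptions: the safest check is to add any container from unRAID's Docker tab and compare against the XML it writes to /boot/config/plugins/dockerMan/templates-user/.

    <?xml version="1.0"?>
    <Container version="2">
      <Name>openvas</Name>
      <Repository>mikesplain/openvas:9</Repository>
      <Network>bridge</Network>
      <Overview>OpenVAS 9 vulnerability scanner.</Overview>
      <WebUI>https://[IP]:[PORT:10443]/</WebUI>
      <!-- maps host port 10443 to container port 443, as in the docker run line -->
      <Config Name="Web UI port" Target="443" Default="10443" Mode="tcp" Type="Port" Display="always" Required="true" Mask="false">10443</Config>
      <!-- the PUBLIC_HOSTNAME environment variable from the docker run line -->
      <Config Name="PUBLIC_HOSTNAME" Target="PUBLIC_HOSTNAME" Default="tower.local" Type="Variable" Display="always" Required="true" Mask="false">tower.local</Config>
    </Container>

Dropping a file like this into templates-user should make it appear as a user template under "Add Container".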
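For post 2, the graceful-shutdown-then-full-backup approach can be a short script around virsh, which unRAID's VM manager (libvirt) exposes. The VM name and paths below are examples only:

    #!/bin/bash
    # Gracefully stop a VM, copy its disk(s) and definition, then restart it.
    VM="Windows10"                           # example VM name
    SRC="/mnt/user/domains/$VM"              # default unRAID vdisk location
    DEST="/mnt/user/backup/vms/$VM-$(date +%F)"

    virsh shutdown "$VM"
    # wait up to ~5 minutes for the guest to power off cleanly
    for i in $(seq 1 60); do
        [ "$(virsh domstate "$VM")" = "shut off" ] && break
        sleep 5
    done

    mkdir -p "$DEST"
    rsync -a "$SRC/" "$DEST/"                # full copy of the vdisk(s)
    virsh dumpxml "$VM" > "$DEST/$VM.xml"    # keep the VM definition too
    virsh start "$VM"

It's still a full backup each run, but the copies are consistent and easy to push to cloud storage afterwards.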
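For post 4, one common pattern (not unRAID-specific) is to pin the dedicated user's SSH key to rrsync, the restricted-rsync wrapper that ships with rsync's support scripts. Its install path varies by distro, so the path below is an assumption, as is the 'rsyncuser' name:

    # on the server: a dedicated user with no home-directory baggage
    useradd -M -d /mnt/user/backup -s /bin/bash rsyncuser

    # in that user's authorized_keys, jail the key to the backup share
    # and strip everything except rsync:
    command="/usr/local/bin/rrsync /mnt/user/backup",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... you@workstation

    # from the workstation, paths are now relative to /mnt/user/backup:
    rsync -avzh --delete Pictures/ rsyncuser@server:Pictures/

One unRAID caveat: user accounts and authorized_keys changes made on the command line don't necessarily survive a reboot, so they may need to be reapplied from the go file or a boot script.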
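For post 8, if the server side is a plain OpenVPN server config (rather than one managed through a web UI), pushing the Pi-Hole as the resolver is a one-line directive; 192.168.0.200 below is just the example address from post 7:

    # in the OpenVPN *server* config
    push "dhcp-option DNS 192.168.0.200"

Clients that honour pushed options will then send their DNS queries to the Pi-Hole, which covers both the ad blocking and the local name resolution.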
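For post 14, anyone who wants to experiment before this is ever a built-in feature can drive BTRFS compression by hand on a BTRFS-formatted disk or pool. Device and paths below are examples; zlib is used to match the compsize output in post 12:

    # mount with transparent compression (lzo and, on newer kernels,
    # zstd are the other options)
    mount -o compress=zlib /dev/sdX1 /mnt/disk1

    # or mark an existing directory so new writes get compressed
    btrfs property set /mnt/disk1/photos compression zlib

    # recompress files that are already on disk
    btrfs filesystem defragment -r -czlib /mnt/disk1/photos

    # then check the savings
    compsize /mnt/disk1/photos

On unRAID the catch is where to apply the mount option, since the array is mounted for you; the property/defragment route works on already-mounted BTRFS disks.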