lotekjunky

  1. Does anyone know how to make the backups password-protected?
  2. I just found this app and LOVE it. Great job! I also just searched through this whole thread for information on how to make the backup password-protected / encrypted. My whole system uses LUKS, but my backups are out in the clear. I want to store them in my gdrive, but I don't want to put them in the cloud without a little bit of protection. Any ideas on this? I thought about wrapping the .tar.gz (I'm using compression; holy cow, it dropped my backup from 5 GB to 900 MB) in a password-protected archive, but that seems archaic.
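In case it helps anyone looking at the same thing: a symmetric gpg pass over the finished archive is one way to do it without resorting to a passworded zip. This is only a sketch, assuming gpg is installed; the paths and passphrase are made up, so point it at your real backup instead:

```shell
# Build a stand-in backup so the example is self-contained; swap in
# the real .tar.gz produced by the backup app.
workdir=$(mktemp -d)
echo "sample backup data" > "$workdir/vm-backup.img"
tar -czf "$workdir/backup.tar.gz" -C "$workdir" vm-backup.img

# Symmetric (passphrase-only) encryption; reading the passphrase from
# a file keeps it out of shell history.
echo "correct horse battery staple" > "$workdir/passfile"
gpg --batch --yes --pinentry-mode loopback \
    --symmetric --cipher-algo AES256 \
    --passphrase-file "$workdir/passfile" \
    -o "$workdir/backup.tar.gz.gpg" "$workdir/backup.tar.gz"

# Decrypt later with:
#   gpg --batch --pinentry-mode loopback --passphrase-file passfile \
#       -o backup.tar.gz -d backup.tar.gz.gpg
ls -l "$workdir/backup.tar.gz.gpg"
```

The .gpg file is what would go to gdrive; the plaintext archive never has to leave the box.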
  3. I have ESXi running in QEMU. When editing the VM settings, the network adapter type keeps changing from e1000 to virtio. ESXi doesn't support virtio and ONLY supports e1000. I have to make this change in the XML, which is not a huge deal, but every time I go in to edit, I have to make the same change again. Please make the editor respect this change, AND add a way to edit the network adapter type in the simple VM config view.
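For anyone hitting this in the meantime, the manual fix is `virsh edit <vmname>` and changing the model line on the interface. The bridge name below is just an example; yours may differ:

```xml
<interface type='bridge'>
  <source bridge='br0'/>
  <!-- ESXi has no virtio NIC driver, so force e1000 here -->
  <model type='e1000'/>
</interface>
```

The catch, as described above, is that the GUI editor rewrites this back to virtio on the next edit.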
  4. Fantastic! I was not aware of the user scripts plugin, this is going to make my life a lot easier... thanks a ton! On a side note, I wish the search function was better on this forum... it would probably cut down on the redundant questions. I tried searching for an answer first but kept striking out so I decided to just ask. Glad I did.
  5. I have the Fix Common Problems plugin running. Every time I reboot, it tells me that the write cache is disabled on disk3. It's correct, it is disabled... when I enable it via the CLI, it reports correctly, but it falls back to disabled when I reboot. Am I not saving the change properly? Is something else going on?

     root@Tower:/mnt/cache# hdparm -W /dev/sdb
     /dev/sdb:
      write-caching = 0 (off)
     root@Tower:/mnt/cache# hdparm -W 1 /dev/sdb
     /dev/sdb:
      setting drive write-caching to 1 (on)
     SG_IO: bad/missing sense data, sb[]: 70 00 01 00 00 00 00 0a 00 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      write-caching = 1 (on)
     root@Tower:/mnt/cache# hdparm -W /dev/sdb
     /dev/sdb:
      write-caching = 1 (on)
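Until the underlying cause turns up, one workaround (an untested sketch on my part) is a User Scripts plugin entry, set to run at array start, that simply re-applies the setting after each boot:

```shell
#!/bin/bash
# Re-enable the write cache on the drive that keeps reverting.
# /dev/sdb matches the output above, but device letters can change
# between boots, so a /dev/disk/by-id path is safer in practice.
hdparm -W 1 /dev/sdb
```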
  6. Holy cow that was easy! I didn't realize I could click on the HDD/SDD when the array was stopped. Thanks so much!
  7. After going through encrypting my data drives and moving data back and forth, I'm ready for the next step: cache drive encryption. I currently have the following:
     1. 128 GB cache SSD (sdg), empty, no shares use the cache for now, currently formatted btrfs
     2. Several 8 TB WD Red drives (sdb, c, d, e, f), all with LUKS encryption (this is where I'll store my docker and libvirt files)
     Desired outcome: one LUKS-encrypted cache drive, while obviously maintaining my encrypted pool with no loss of data. I've stopped docker and the VMs and moved all data off the cache drive. I stopped the array, but it looks like I can't just remove the cache drive... it looks like I might need to build a "new config", but that is scary, so I'm asking for help in case I've missed something. TIA!
  8. I got all of my issues sorted by actually understanding what the security settings mean and how to apply a rule to NFS.
  9. Thank you so much, I have it working now! I could not figure out where the RULE field was... I have this share mounted as an ESXi datastore and it's actually working. w00t!
  10. I created a new share and set it up as NFS, secure, and exported. 'showmount -e 192.168.1.19' shows the exported mount. 'exportfs -v' lists the mount too, but it shows as 'ro'. When I look at /etc/exports, it is also listed as ro there. I see mention of "rules" for NFS on this forum, but I cannot find where to add or change them in Unraid 6.7.0. I also can't find any semi-recent documentation on this... there is an "unofficial manual" for Unraid v5, but that's so old... Can anyone please help?
  11. Apparently something is still wrong. I can find the NFS datastore successfully, but it fails to do anything that requires WRITE permissions. I found this but I'm still trying to make sense of it...
  12. As soon as you admit you can't accomplish something, it happens. I was able to get an NFS datastore mounted by putting in the proper path to the datastore. From the Unraid terminal, you can run "showmount -e $IP", where $IP is the IP of your Unraid box. From there you'll see which shares are set up as NFS exports. I was trying to add the NFS share the same way I would add an SMB share, via a short name. The right way is to use the output from the showmount command and put that in, something like "/mnt/user/domains".
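To make that concrete, here's a sketch of pulling the export paths out of the showmount output. The sample output is faked so the snippet runs anywhere; on the real box you'd pipe `showmount -e` itself:

```shell
# Simulated `showmount -e 192.168.1.19` output; on a live system,
# pipe the real command instead of this variable.
sample_output='Export list for 192.168.1.19:
/mnt/user/domains 192.168.1.0/24
/mnt/user/isos    192.168.1.0/24'

# Skip the header line and keep the first column (the export path);
# that full path is what goes into the ESXi datastore dialog.
exports=$(printf '%s\n' "$sample_output" | awk 'NR>1 {print $1}')
printf '%s\n' "$exports"
```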
  13. Has anyone got this to work? I've tried every setting but my ESXi VM will not see the NFS datastore hosted on UNRAID.
  14. (Solved) This is kind of old, but I suspect that if you use something like rclone or rsync to migrate the data, you'll have a better experience. When I used Krusader, it would only move one file at a time with max throughput around 40-50 MB/s. With rclone, I can set transfers=20 and it WILL push 120 MB/s on my 1 Gb network. Wirespeed would be 125-128 MB/s depending on what you think a "gig" is (either 1000 or 1024).
  15. For future Googlers, you can find out when the CPU overheated by looking at the syslog: http://<<whateveryourunraidaddressis>>/Settings/Tools/Syslog. It will show up as type "warning", so you can use the filters at the top of the page to make it easier to find out when it happened. For some reason, when I copy a lot of data to a LUKS-encrypted device, I get overheating issues. For now, during my initial load-up of the shares, I have the case off and a fan pointing at it. I'm going to go back and replace the thermal paste this weekend; I just don't want to stop the data migration right now.
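If you'd rather check from the shell, grepping the syslog works too. The log lines below are fabricated so the example is self-contained; on Unraid you'd point grep at /var/log/syslog instead:

```shell
# Build a small stand-in syslog to search.
logfile=$(mktemp)
cat > "$logfile" <<'EOF'
May  1 10:02:11 Tower kernel: CPU0: Core temperature above threshold, cpu clock throttled
May  1 10:02:11 Tower kernel: CPU0: Package temperature/speed normal
May  1 10:15:42 Tower emhttpd: spin down /dev/sdb
EOF

# Case-insensitive match catches both the throttle warning and the
# "back to normal" line.
grep -i 'temperature' "$logfile"
```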