drdobsg

Members
  • Posts: 27
  • Joined

  • Last visited

  • Gender: Undisclosed


drdobsg's Achievements

Newbie (1/14) · 0 Reputation

  1. Is Google Drive a requirement? CrashPlan is encrypted, and it is relatively cheap too at $60/year. There is a Docker for it.
  2. From what I see, it is the default. Compare the output of ls -l vs du and see what you get.
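A quick way to see that difference on any Linux box — this sketch creates a throwaway sparse file rather than touching a real vdisk:

```shell
# Create a sparse file: 100MB apparent size, almost nothing allocated.
truncate -s 100M sparse-demo.img

# ls -l reports the apparent size (100MB)...
ls -l sparse-demo.img

# ...while du reports the blocks actually allocated (near zero).
du -h sparse-demo.img
```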
  3. Before anyone else tries this, I am pretty sure the answer is no, this is not the best way. Maybe someone with more experience can chime in, but that sounds like a bad way to do it. What you are describing is making snapshots of the BTRFS filesystem: essentially, each "clone" you make is a diff against the base subvolume. In the end it will probably turn into a full copy of the original disk anyway, since the .img file will diverge enough from the original that it becomes a copy rather than a diff. I don't know enough about BTRFS, but I suspect there will be a performance hit there too. The first few results from googling "BTRFS snapshots KVM" all say pretty much the same thing: don't use BTRFS for VM storage, or at least don't enable copy-on-write for the image files, because they are essentially one giant file.
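For anyone keeping images on BTRFS anyway, the usual advice is to disable copy-on-write on the directory before any images are created (the flag only applies to files created afterwards). A minimal sketch — the directory name is made up, and the chattr call is guarded because it only succeeds on filesystems that support the flag:

```shell
# Hypothetical directory for VM images; +C must be set before images exist.
mkdir -p vm-images

if chattr +C vm-images 2>/dev/null; then
    # lsattr should now show the 'C' (no-COW) flag on the directory.
    lsattr -d vm-images
else
    echo "chattr +C not supported on this filesystem (needs BTRFS)"
fi
```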
  4. Yes, that will work, but the problem is that it increases the disk space used by the vdisk. For example, if a VM has a 30GB virtual hard drive but is only using 13GB of it, it actually only takes up 13GB on the cache drive; the copy will take up the full 30GB. The same is true in VMware ESXi: if you just copy a thin disk, it expands to its full size. Note that ls -l will show you it takes up 30GB, but du will show you what it really uses.
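Whether the copy expands actually depends on the tool: GNU cp tries to detect holes by default, but you can force both behaviours explicitly. A sketch with a throwaway sparse file standing in for a vdisk:

```shell
# Sparse stand-in for a thin vdisk: 50M apparent, ~0 allocated.
truncate -s 50M thin.img

cp --sparse=never  thin.img thick.img     # writes every block: full 50M on disk
cp --sparse=always thin.img rethin.img    # punches the holes back out: ~0 on disk

du -h thin.img thick.img rethin.img       # compare the real allocation
```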
  5. This is the same for me.
  6. Does anyone know of a tool installed on unRAID that will clone a VM? I am new to KVM, but I am used to cloning VMs on ESXi via the command line. I have read a few suggestions of just copying the .img file, but that will essentially convert the thin-provisioned disk to a thick one. Through some googling I have found some packages that may be what I am looking for: virtinst, which contains a virt-clone command, and libguestfs-tools, which contains a virt-sparsify command that shrinks a thick-provisioned disk back down to a thin one. I found this Slackware package, but I don't know if it's compatible with unRAID or how to install it. Ideally what I really want to do is create a template and deploy from that, but I will settle for being able to clone a VM.
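For reference, this is roughly how those two commands fit together on a libvirt host. The domain name and image path are made-up examples, and everything is guarded since virtinst/libguestfs-tools may not be installed:

```shell
SRC_VM="win10"                                   # hypothetical source domain name
CLONE_IMG="/mnt/user/domains/win10-clone.img"    # hypothetical target image path

if command -v virt-clone >/dev/null 2>&1; then
    # Copy the domain definition and its disk under a new name.
    virt-clone --original "$SRC_VM" --name "${SRC_VM}-clone" --file "$CLONE_IMG"
    # Shrink the copied disk back down to thin-provisioned size.
    virt-sparsify --in-place "$CLONE_IMG"
else
    echo "virt-clone not installed; commands shown for illustration only"
fi
```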
  7. Yes, that does change things. You would set both DNS names (example.com and personalwebsite.com) to point to your public IP address. Then forward port 80 on your router to whatever port your nginx server is listening on (it most likely isn't 80, as that is what unRAID uses by default), and forward port 443 on your router to whatever SSL port your nginx server is listening on (it might be 443, but it is configurable). In the nginx configuration's nginx\sites-conf\ folder, create a configuration for personalwebsite.com that looks something like this:

     server {
         listen 80;
         #listen 443 ssl;
         server_name *.personalwebsite.com;
         root /config/www/personalwebsite.com;

         location / {
             try_files $uri $uri/ /index.html /index.php?$args =404;
         }
     }

The website files go in /config/www/personalwebsite.com/ in this scenario. Nginx will examine web requests, and if the host matches personalwebsite.com, it will serve those files. The limitation in this setup is that a single IP can only host one SSL certificate, and your Let's Encrypt cert will most likely have the example.com names on it. So if you do happen to go to https://personalwebsite.com, you will get a certificate error in your browser about the names not matching. There are lines you can add to the config file that will force https back to http for all personalwebsite.com connections. A better option for you might be to get the Let's Encrypt cert for personalwebsite.com and let the name mismatch occur on example.com instead, only because I am assuming you will be the only one going to example.com, so you will know to ignore the certificate error and go on your way. However, if you really have no need for SSL on personalwebsite.com, then continue using the cert from example.com. Another option, instead of serving the site from an unRAID Docker, is to have the Docker act only as a reverse proxy and forward the request to a real server or a VM where you have more control over things.

As far as the security concerns of having an nginx Docker on the public internet: in my opinion, using nginx solely as a reverse proxy or static HTML server is a risk I am willing to live with. I probably wouldn't host PHP or some other dynamic site on it, though.
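For the reverse-proxy option, a minimal sketch of what that server block could look like — the upstream address 192.168.1.50:8080 is a made-up example for a VM on the LAN, not anything from the original post:

```nginx
server {
    listen 80;
    server_name *.personalwebsite.com;

    location / {
        # Hypothetical backend VM on the LAN; adjust to your real host/port.
        proxy_pass http://192.168.1.50:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```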
  8. Is it possible to change the default port for torrent transfers? I assume I would have to change the Docker template to "host" networking mode and remove the port mappings?
  9. I have an older 60GB OCZ SSD for a cache drive, 3x 1TB WD Blacks, and a couple of 500GB Seagates for the array. I know 3TB usable is not much, but I am still testing unRAID while I wait for sales on some newer drives. I have 5x 3TB Seagate drives, but 3 of them died, so I am looking to replace them with something better. I was surprised to see the default spindown delay was set to never. I changed it to 2 hours and then the server froze, but I was also downloading heavily, trying to restore my collection. I don't even know why the drives were spun down if there were still processes accessing them.
  10. Good to know. I think I was trying the template name instead of the actual Docker name. That looks handy; I may borrow that.
  11. I had a similar issue as well. The web GUI locked up; I could still ssh in, so I did get my diagnostics. There were several locked-up processes that I couldn't kill. I tried to reboot, but that didn't work, so I did a hard reset. Parity and array drives were fine after the reboot. It may have been coincidence, but the system froze shortly after spinning down hard drives. I can post my diagnostics when I get home if you want, or start a new thread, but it sounds related, as I was also downloading a decent amount via Dockers: I was running CouchPotato, Sonarr, Deluge, and Plex. On top of that I was running the File Integrity plugin. I have since turned off the cache drive for my downloads folder, disabled File Integrity auto-hashing, and set the spindown delay to never. I will monitor to see if that had anything to do with it. My motherboard uses Intel 82579LM and 82574L LAN.

Config:
  6.2.0-beta21
  M/B: Supermicro X9SCL/X9SCM
  CPU: Intel Xeon E31230 @ 3.20GHz
  HVM: Enabled
  IOMMU: Enabled
  Cache: 256 kB
  Memory: 16384 MB (max. installable capacity 32 GB)
  Network: bond0: fault-tolerance (active-backup), mtu 1500
           eth0: 1000Mb/s, Full Duplex, mtu 1500
           eth1: 1000Mb/s, Full Duplex, mtu 1500
  Kernel: Linux 4.4.6-unRAID x86_64
  OpenSSL: 1.0.2g
  12. Did you ever get a solution for this? I am having what sounds like the same problem.
  13. Fairly new to the cache pool scene, but I have been reading a lot about this in the forums lately. It seems to me the ideal would be two 1TB WD Black drives in BTRFS RAID-1 as the normal cache pool, and then a single SSD for the Dockers. I assume you can run Dockers from the SSD via Unassigned Devices? Then have a backup procedure back up the SSD to the array. So far I don't see a need for real-time disk failure protection on the Docker drive; a nightly backup should be good enough for me. Maybe if you have an important VM running on it, you might want to mirror it.
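That nightly backup can be as simple as an rsync one-liner in a cron job. A runnable sketch, using throwaway local directories in place of the real SSD and array paths (which would be assumptions anyway):

```shell
# Stand-ins for the SSD mount and the array backup share on a real box.
mkdir -p ssd-data array-backup
echo '{"name": "demo"}' > ssd-data/container-settings.json

# -a preserves permissions/timestamps; --delete mirrors removals too.
if command -v rsync >/dev/null 2>&1; then
    rsync -a --delete ssd-data/ array-backup/
else
    cp -a ssd-data/. array-backup/
fi
```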
  14. So I thought it might be that my log drive was full, but I just cleared some space and I am still not getting the SMART table populated while running a preclear.
  15. I am still learning this, but try "docker ps", which will list all running containers. The ID of the container is different from the image name. When you attach to the container, you may just attach to its running process and not actually get shell access. I assume what you really want is a shell inside the container: docker exec -it DOCKERID bash
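Putting the two commands together — this sketch is guarded because it assumes a Docker daemon with at least one running container, which may not be the case on the machine where you try it:

```shell
get_first_container() {
    # Print the ID of the first running container, or nothing if the
    # docker CLI is unavailable (e.g. on a machine without the daemon).
    command -v docker >/dev/null 2>&1 || return 0
    docker ps -q | head -n 1
}

CID="$(get_first_container)"
if [ -n "$CID" ]; then
    echo "shell access: docker exec -it $CID bash"
else
    echo "no running containers (or docker not installed)"
fi
```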