drdobsg

Everything posted by drdobsg

  1. Is Google Drive a requirement? CrashPlan is encrypted, and it is relatively cheap too at $60/year. There is a Docker for it.
  2. From what I see it is the default. Compare ls -l vs du and see what you get; the sketch below shows the difference.
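     A minimal sketch of that comparison, assuming a raw vdisk at a hypothetical path:
        # Apparent (provisioned) size vs. blocks actually allocated on disk.
        # The path is hypothetical -- substitute your own vdisk location.
        VDISK=/mnt/cache/domains/vm1/vdisk1.img
        ls -lh "$VDISK"                   # full provisioned size, e.g. 30G
        du -h "$VDISK"                    # only the allocated blocks, e.g. 13G
        du -h --apparent-size "$VDISK"    # the same number ls reports, for comparison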
  3. Before anyone else tries this, I am pretty sure the answer is no, this is not the best way. Maybe someone with more experience can chime in, but that sounds like a bad way to do it. What you are describing is making snapshots of the BTRFS file system: essentially each "clone" you make is a diff against the base subvolume, and in the end it will probably become a full copy of the original disk anyway, since the .img file will diverge enough from the original that it is effectively a copy instead of a diff. I don't know enough about BTRFS, but I suspect there will be a performance hit there. The first few results from googling "BTRFS snapshots KVM" all pretty much say the same thing: don't use BTRFS for VM storage, or at least don't enable copy-on-write for the image files, because they are essentially one giant file (see the sketch below for disabling it).
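     A minimal sketch of disabling copy-on-write for a VM image directory on BTRFS; the path is an assumption, and the +C attribute only applies to files created after it is set:
        # Disable copy-on-write for new files created in this directory.
        # Existing images must be copied back in to pick up the attribute.
        mkdir -p /mnt/cache/domains
        chattr +C /mnt/cache/domains     # new files here are created NOCOW
        lsattr -d /mnt/cache/domains     # verify: the 'C' flag should be listed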
  4. Yes, that will work, but the problem is that it increases the disk space used by the vdisk. For example, if a VM has a 30GB virtual hard drive but is only using 13GB of it, it actually only takes up 13GB on the cache drive; a plain copy will take up the full 30GB. The same is true in VMware ESXi: if you just copy a thin disk, it expands to its full size. Note that ls -l will show it takes up 30GB, but du will show what it really uses.
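     A sketch of copying a raw vdisk without expanding it, assuming GNU cp and hypothetical paths; qemu-img convert is an alternative that rewrites zeroed regions as holes:
        # Copy a raw image while preserving holes, so the copy stays thin.
        cp --sparse=always /mnt/cache/domains/vm1/vdisk1.img /mnt/cache/domains/vm1-copy/vdisk1.img
        # Alternative: let qemu-img rewrite the image sparsely.
        qemu-img convert -O raw /mnt/cache/domains/vm1/vdisk1.img /mnt/cache/domains/vm1-copy/vdisk1.img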
  5. This is the same for me.
  6. Does anyone know of a tool installed on unRAID that will clone a VM? I am new to KVM, but I am used to cloning VMs on ESXi via the command line. I have read a few suggestions of just copying the .img file, but that will essentially convert the thin provisioned disk to a thick one. Through some googling I have found some packages that may be what I am looking for: virtinst, which contains a virt-clone command, and libguestfs-tools, which contains a virt-sparsify command that shrinks a thick provisioned disk back to a thin one. I found this Slackware package, but I don't know if it's compatible with unRAID or how to install it. Ideally what I really want to do is create a template and deploy from that, but I will settle for being able to clone a VM.
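     In case either tool does get installed, a sketch of how they are typically invoked; the VM name and paths are hypothetical, and neither package ships with stock unRAID:
        # Clone an existing, shut-down VM; virt-clone copies the vdisk and writes
        # a new libvirt definition with a new name, UUID and MAC address.
        virt-clone --original ubuntu-template --name ubuntu-clone \
                   --file /mnt/user/domains/ubuntu-clone/vdisk1.img
        # Shrink a thick (fully allocated) raw image back down to a sparse one.
        virt-sparsify /mnt/user/domains/old/vdisk1.img /mnt/user/domains/old/vdisk1-sparse.img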
  7. Yes, that does change things. You would set both DNS names (example.com and personalwebsite.com) to point to your public IP address. Then forward port 80 on your router to whatever port your nginx server is listening on (it most likely isn't 80, as that is what unRAID uses by default), and forward port 443 on your router to whatever SSL port your nginx server is listening on (might be 443, but it is configurable). In the nginx configuration folder (nginx/sites-conf/) create a configuration for personalwebsite.com that looks something like this:
        server {
            listen 80;
            #listen 443 ssl;
            server_name *.personalwebsite.com;
            root /config/www/personalwebsite.com;
            location / {
                try_files $uri $uri/ /index.html /index.php?$args =404;
            }
        }
     The website files would be placed in /config/www/personalwebsite.com/ in this scenario. Nginx examines web requests for personalwebsite.com, and if the name matches it serves those files. The limitation of this setup is that a single IP can only host one SSL certificate, and your Let's Encrypt cert will most likely have the example.com names on it. So if you do go to https://personalwebsite.com you will get a certificate error in your browser about the names not matching. There are lines you can add to the config file that will redirect HTTPS back to HTTP for all personalwebsite.com connections. A better option for you might be to get the Let's Encrypt cert for personalwebsite.com and let the name mismatch happen on example.com instead, only because I am assuming you will be the only one going to example.com, so you will know to ignore the certificate error and go on your way. However, if you really have no need for SSL on personalwebsite.com, then continue using the cert from example.com. Another option, instead of serving the site from an unRAID docker, is to have the docker act purely as a reverse proxy and forward the request to a real server or a VM where you have more control over things (see the sketch after this post). As for the security concerns of having an nginx docker on the public internet: in my opinion, used solely as a reverse proxy or static HTML server it is a risk I am willing to live with. I probably wouldn't host PHP or some other dynamic site on it, though.
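     A minimal sketch of the reverse-proxy variant; the file path, server name, and backend address are all assumptions:
        # /config/nginx/sites-conf/personalwebsite.conf  (hypothetical path)
        # Reverse proxy: forward requests for personalwebsite.com to a backend
        # VM at 192.168.1.50:8080 (address and port are assumptions).
        server {
            listen 80;
            server_name personalwebsite.com *.personalwebsite.com;

            location / {
                proxy_pass http://192.168.1.50:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }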
  8. Is it possible to change the default port for torrent transfers? I assume I would have to change the docker template to "host" mode for network and remove the port mappings?
  9. I have an older 60GB OCZ SSD for a cache drive, 3x 1TB WD Blacks, and a couple of 500GB Seagates for the array. I know 3TB usable is not much, but I am still testing unRAID while I wait for some sales on newer drives. I have 5x 3TB Seagate drives, but 3 of them died, so I am looking to replace them with something better. I was surprised to see the default spindown delay was set to never. I changed it to 2 hours and then the server froze, but I was also going heavy on the downloading trying to restore my collection. I don't even know why the drives were spun down if there were still processes accessing them.
  10. Good to know. I think I was trying the template name instead of the actual docker name. That looks handy; I may borrow that.
  11. I had a similar issue as well. The web GUI locked up; I could still SSH in, so I did get my diagnostics. There were several locked-up processes that I couldn't kill. I ended up trying to reboot, but that didn't work, so I did a hard reset. Parity and array drives were fine after the reboot. It may have been coincidence, but the system froze shortly after spinning down hard drives. I can post my diagnostics when I get home if you want, or start a new thread, but it sounds related as I was also downloading a decent amount via dockers: I was running CouchPotato, Sonarr, Deluge, and Plex, and on top of that I was running the File Integrity plugin. I have since turned off the cache drive for my downloads folder, disabled File Integrity auto hashing, and set the spindown delay to never. I will monitor to see if any of that had anything to do with it. My M/B uses Intel 82579LM and 82574L LAN.
      Config: 6.2.0-beta21
      M/B: Supermicro X9SCL/X9SCM
      CPU: Intel® Xeon® CPU E31230 @ 3.20GHz
      HVM: Enabled
      IOMMU: Enabled
      Cache: 256 kB
      Memory: 16384 MB (max. installable capacity 32 GB)
      Network: bond0: fault-tolerance (active-backup), mtu 1500; eth0: 1000Mb/s, Full Duplex, mtu 1500; eth1: 1000Mb/s, Full Duplex, mtu 1500
      Kernel: Linux 4.4.6-unRAID x86_64
      OpenSSL: 1.0.2g
  12. Did you ever get a solution for this? I am having what sounds like the same problem.
  13. Fairly new to the cache pool scene, but I have been reading a lot about this in the forums lately. It seems to me the ideal would be two 1TB WD Black drives in BTRFS RAID-1 as the normal cache pool and then a single SSD for the dockers. I assume you can run dockers from the SSD via Unassigned Devices? Then have a backup procedure back up the SSD to the array. So far I don't see a need for real-time disk failure protection on the Docker drive; a nightly backup should be good enough for me. Maybe if you have an important VM running on it, then you might want to mirror it.
  14. So I thought it might be that my log drive was full, but I just cleared some space and I am still not getting the SMART table populated while running a preclear.
  15. I am still learning this, but try "docker ps", which will list all containers. The ID of the container appears to be different from the image name. When you attach to the docker you may just attach to a running process and not actually get shell access. I assume what you really want is shell access inside the docker (see the sketch below): docker exec -it DOCKERID bash
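     A short sketch of that sequence; the container name here is hypothetical:
        # List running containers; use the NAMES column (or the CONTAINER ID),
        # not the image name, with docker exec.
        docker ps
        # Get an interactive shell inside the container.
        docker exec -it plex bash
        # If the image has no bash, fall back to sh:
        docker exec -it plex sh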
  16. I clicked the "update ready" link under the unRaid > Docker menu. Then restarted the docker and it updated.
  17. I am new to this. Is it possible for me to update the version of plex inside the docker? Or do I have to wait for the docker to be updated?
  18. Yes I can get SMART reports from the dashboard when I click the green thumbs up icon. And the non-beta script showed it when I cleared a drive with that.
  19. I am running the preclear beta plugin v2016.03.24a (installed via the Apps plugin) on unRAID 6.2b21 and I am not seeing SMART results from preclears. I got the same thing on 3 drives so far. Using the non-beta plugin on the same setup I did see SMART status, although obviously not in the same format. See below. Any ideas?
      unRAID Server Pre-Clear of disk /dev/sdg
      Cycle 4 of 4, partition start on sector 64.
      Step 1 of 5 - Pre-read verification: [1:25:17 @ 97 MB/s] SUCCESS
      Step 2 of 5 - Zeroing the disk: [1:14:51 @ 111 MB/s] SUCCESS
      Step 3 of 5 - Writing unRAID's Preclear signature: SUCCESS
      Step 4 of 5 - Verifying unRAID's Preclear signature: SUCCESS
      Step 5 of 5 - Post-Read in progress: (30% Done)
      ** Time elapsed: 0:23:04 | Current speed: 132 MB/s | Average speed: 111 MB/s
      Cycle elapsed time: 3:03:55 | Total elapsed time: 15:26:22
      S.M.A.R.T. Status
      ATTRIBUTE   INITIAL   CYCLE 1   CYCLE 2   CYCLE 3   STATUS
      (no attribute rows shown)
      SMART Health Status: OK
  20. I forgot about that. That one got me too.
  21. Are you running version 6.2 beta? If so, make sure you either use the beta preclear plugin or patch the preclear script to support the new version.
  22. Something I just learned today: if you add another 1TB Blue as you describe, then you can BTRFS RAID-5 them all together and get 1.5TB usable, although I probably wouldn't recommend that. I have no idea what performance that will give you, because the WD Blue drives will slow down the entire array. A better idea would be to buy another 500GB SSD and BTRFS RAID-1 them together for 532GB usable, then just throw the 1TB Blue in with the unRAID pool. I would bet the RAID-1 SSDs will beat the RAID-5 with Blues in any benchmark (see the sketch below).
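     unRAID normally sets the pool profile for you when you assign multiple cache devices, but as a sketch of what the RAID-1 conversion looks like at the command line (the mount point is an assumption):
        # Convert the data and metadata profiles of an existing pool to RAID1.
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
        # Check which profile the pool is using and how the space breaks down.
        btrfs filesystem df /mnt/cache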
  23. I haven't used unRAID in a while (pre version 5), so I don't know this first hand, only from reading and researching, but it sounds like you want a Docker and not a VM. Set up the SSD as a cache drive or pool and downloads will go there first, then move to the slower unRAID pool. If you really want a VM, then you can just SMB-mount your unRAID pool and download stuff there; again, the cache drive will automatically kick in.
  24. Thanks for the quick response, johnnie. I was just reading your reply to this other thread, which also sheds some light on things. However, it would seem your two comments contradict each other: 500GB + 500GB + 1TB in a BTRFS RAID-1 would then equal 1TB and not 500GB? In a traditional RAID-1 it would be 500GB copied 3 times. I was using unRAID back in 2011 but switched to a ZFS/napp-it solution for a while. I have been reading up on dual parity and cache pools and am very close to making the jump back into unRAID.
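     A rough way to sanity-check that: BTRFS RAID1 keeps exactly two copies of every chunk, on two different devices, so usable space is roughly min(total / 2, total - largest device). A sketch, assuming the pool is mounted at /mnt/cache:
        # 500GB + 500GB + 1TB in BTRFS RAID1:
        #   total            = 2000 GB
        #   total / 2        = 1000 GB   (two copies of everything)
        #   total - largest  = 1000 GB   (the 1TB drive can always find a mirror partner)
        # => roughly 1TB usable, not 500GB.
        # Ask the filesystem itself; "Free (estimated)" accounts for the RAID1 profile.
        btrfs filesystem usage /mnt/cache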
  25. Are there any other BTRFS cache pool configurations supported? I am trying to find documentation somewhere, but it seems pretty vague. In the diagram here, it says it is using BTRFS RAID1 protection but mixes a 1TB and two 500GB drives as the cache pool. I don't know much about BTRFS and its "unique twist on traditional RAID 1"; in that example, what is the effective space available? 500GB? 1TB? If I have a 60GB SSD and two 1TB WD Black drives, can I combine them in any way to make a faster BTRFS cache pool?