About boof

  1. This isn't just a legacy issue. It affected my Ubuntu 18.04.3 LTS clients as well: mounts went stale quite quickly after the initial mount. Other threads link it to cached data and the mover, but it's very much a general NFS issue.
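For anyone hitting the same thing, a minimal sketch of the client-side mitigation discussed in those threads - the server name and export path below are placeholders, and the option values are starting points to tune, not a recommendation:

```shell
# /etc/fstab on the NFS client (hypothetical server/export names).
# actimeo and lookupcache reduce attribute/dentry caching, one of the
# usual suspects for "stale file handle" after files change server-side.
# soft + timeo stop a dead server hanging the client indefinitely.
tower:/mnt/user/media  /mnt/media  nfs  vers=3,soft,timeo=50,actimeo=1,lookupcache=none  0  0
```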
  2. The vmware courses / training all use vsphere running virtually within vsphere (vmware inception) to provide lab sessions anyway - or at least the ones I've been on have. So there's no expectation on the VCP side to run it on real tin. The only question would be how well you could run ESX in KVM. vCenter will be OK as it just needs a Windows server guest. Don't know the answers for ESX in KVM I'm afraid - ESX can be picky at the best of times on real tin, so it could go either way. Though being honest I would actually tend to agree with buying a new box - just go and get a little HP micr
  3. There is also ecryptfs, which has some level of kernel adoption. It's what Ubuntu (perhaps others?) uses to provide encrypted homedirs etc, I believe. It can behave in the same way as encfs (normal user, single directory).
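A quick sketch of the encfs-like "normal user, single directory" use of ecryptfs, assuming the ecryptfs-utils package is installed (package and command names may differ by distro):

```shell
# Creates ~/.Private (ciphertext) and ~/Private (cleartext mountpoint)
# for the current user - no root needed beyond the initial package install.
ecryptfs-setup-private --noautomount

ecryptfs-mount-private     # mount ~/Private for this session
ecryptfs-umount-private    # unmount when done
```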
  4. Great read - thanks. mergerfs is a new one on me - like you, the alternatives you listed in the article have always had an issue that stopped them being attractive. I'm excited to go and take a look at mergerfs to see if it resolves them.
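For anyone else curious, a minimal sketch of what a mergerfs pool looks like - the branch and mountpoint paths are hypothetical, and the policy choice is just one common starting point:

```shell
# /etc/fstab - pool two drives into one tree. "epmfs" (existing path,
# most free space) is one of mergerfs's create policies: new files land
# on the branch that already has the parent path and the most free space.
/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=epmfs,moveonenospc=true  0  0
```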
  5. Had the same thoughts as you. I bought the same case and rammed it full of 1-3TB drives years back. As those drives show signs of failure or I need to upgrade capacity, I've been replacing them with 8TB drives and consolidating. I'm now down to using only 2 of the 3 SAS HBAs I was using before for physical drive connectivity - whilst having way more storage thanks to density! So yes - for me I don't need a case that size anymore by a long shot. If I ever do another full refresh I'll be looking at smaller cases - there are some neat mini / micro ATX cases around with some interes
  6. General rule of thumb: don't over-provision memory for guests. As above, ballooning is a last resort but isn't magic and can't always fix the issue. You could investigate whether KVM can use disk as swap for guest memory, or migrate to VMware, which does do swapping, and accept the performance penalties when ballooning fails and the hypervisor swaps your guest memory out. My hunch would be KVM *does* try to swap out guest memory (rather than, as you're seeing, killing the guest - or more accurately having the KVM process OOM-killed?) but uses the system swap space to do so. And I don't think
  7. Ballooning isn't a magic bullet. It can only reap memory the guest OS has allocated but isn't actually using (cache etc.) and / or apply other clever tricks (dedupe of memory across guests etc.) to try and free some real memory. If all your VMs are, genuinely, actively paging all memory in the guests, the hypervisor won't be able to do much about it. You're rolling the dice if you force the hypervisor into a position where it needs to start thinking about this. I'm not sure of KVM's default behaviour if guests exhaust memory - VMware will start swapfiling, killing performance b
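For reference, the libvirt/KVM side of this needs the balloon device defined in the guest, and the balloon is then inflated/deflated at runtime. A sketch (the domain name and sizes are placeholders):

```shell
# In the guest definition (virsh edit myguest) the balloon device must exist:
#   <memballoon model='virtio'/>
# With currentMemory below memory, the hypervisor can reclaim the gap:
#   <memory unit='MiB'>4096</memory>
#   <currentMemory unit='MiB'>2048</currentMemory>

# Inflate the balloon at runtime, shrinking the guest's usable memory
# back towards 2G - only works for memory the guest can actually give up:
virsh setmem myguest 2G --live
```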
  8. Very old firmware. Without double checking - P7. Getting these cards flashed was a nightmare for me (I only have one motherboard that will allow it), so I've been in no rush to keep updating them when they've been working (apparently) fine. 10TB drives - no. The only ones I'm aware of are the HGST models, which need host drivers for SMR and so wouldn't work regardless. No idea if they're even out yet.
  9. I'm only backing up the things that are irreplaceable (photos, documents etc). Anything that can be generated again (DVD rips, FLAC etc) I don't bother with - though that's only because of the amount of time it would take to push it all offsite. Don't feel bad about pushing 5TB to CrashPlan. It's what they advertise and what you pay for. I've read anecdotal tales from others with far more than that in there. They'll have designed their business model to cope with a small proportion of people eating lots of space whilst most people only use a small amount.
  10. I have three M1015s with some 8TB disks hanging off them. As above, so I'd presume 5TB would be OK.
  11. This is exactly what the current docker is. It's based off the phusion base image, which is just a slightly 'docker friendly' tweaked Ubuntu install.
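For the curious, a minimal sketch of what a phusion-based Dockerfile looks like - the tag and package are illustrative, not the actual build:

```shell
# Dockerfile - phusion/baseimage is Ubuntu tweaked for docker:
# a small init (my_init), runit for services, zombie reaping, syslog.
FROM phusion/baseimage:0.9.19

# Services go in as /etc/service/<name>/run scripts rather than a
# single foreground CMD process. Example package install:
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Use phusion's init system rather than launching the app directly.
CMD ["/sbin/my_init"]
```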
  12. I'd inject a note of caution that the CrashPlan updater has a history of not really being that great at doing updates. This isn't the first time it's gone a bit wrong. It doesn't update that often, though, which helps smooth this out. This doesn't change the argument for keeping the docker build up to date (it's easily forkable as necessary - in theory just change the path in the current one to the new CrashPlan install bundle), but it does mean that regardless of the state of any CrashPlan install, CrashPlan itself will always attempt to auto-update itself. And it may not go smoothly.
  13. Depending on the licensing costs, and how granular they are (i.e. will you have to absorb upfront costs or can you 'pay as you go' per implementation), you may have a path to another tier of unRAID licensing. Pay more for the 'Unraid 6 Double Protection' license to unlock the feature, and that uplift covers your backend licensing fees plus a little on top for your trouble. It may be low volume in terms of sales, but that might not matter. Or, if it means not needing all disks to spin up, it may be a very popular license option for customers. Or the backend licensing fees could be s
  14. Hopefully this will be an option. Part of the appeal of unRAID for me is maximising drive spin-down. I'm happy to have a parity write penalty as a result - and I mitigate it as best I can with the cache drive. I appreciate others will have different needs, but hopefully this won't be an enforced change.