boof

Everything posted by boof

  1. This isn't just a legacy issue. It affected my Ubuntu 18.04.3 LTS clients as well - mounts went stale quite quickly after the initial mount (a quick client-side check for this is sketched after this list). There are other threads linking it to cached data and the mover, but it looks very much like a general NFS issue.
  2. The VMware courses / training all use vSphere running virtually within vSphere (VMware inception) to provide lab sessions anyway - or at least the ones I've been on have. So there's no expectation on the VCP side to run it on real tin. The only question would be how well you could run ESX in KVM; vCenter will be OK as it just needs a Windows Server guest. I don't know the answers for ESX in KVM I'm afraid - ESX can be picky at the best of times on real tin, so it could go either way (a quick check of the nested virtualisation prerequisites is sketched after this list). Though being honest I would actually tend to agree with buying a new box - just go and get a little HP MicroServer, stick ESXi on it and then build your vSphere environment virtually in there. Possibly easier in the long run, and something quite common to do.
  3. There is also eCryptfs, which has some level of kernel adoption. It's what Ubuntu (perhaps others?) uses to provide encrypted home dirs etc., I believe. It can behave in the same way as EncFS (normal user, single directory): https://en.wikipedia.org/wiki/ECryptfs
  4. Great read - thanks. mergerfs is a new one on me - like you, I've found the alternatives you listed in the article have always had an issue that stopped them being attractive. I'm excited to go and take a look at mergerfs to see if it resolves them.
  5. Had the same thoughts as you. I bought the same case and rammed it full of 1-3TB drives years back. As those drives show signs of failure or I need to upgrade capacity I've been replacing them with 8TB drives and consolidating. I'm now down to only using 2 of the 3 SAS HBAs I was using before for physical drive connectivity - whilst having way more storage thanks to density! So yes - for me I don't need a case that size anymore by a long shot. If I ever do another full refresh I'll be looking at smaller cases - there are some neat mini / micro ATX cases around, with some interesting builds already in the forums here. It's a hard decision to make though, as the larger case isn't causing any issues and migrating would only cost money - so in that regard the large case was a good purchase in terms of longevity. Failure rates / data density aren't a factor in my thinking at all really. Drives will fail either way, and how you handle that should be the same regardless.
  6. General rule of thumb is don't over-provision memory for guests. As above, ballooning is a last resort - it isn't magic and can't always fix the issue. You could investigate KVM being able to use disk as swap for guest memory, or migrate to VMware (which does do swapping) and accept the performance penalties when ballooning fails and the hypervisor swaps your guest memory out. My hunch would be that KVM *does* try to swap out guest memory (rather than, as you're seeing, killing the guest - or more accurately having the KVM process OOM-killed?) but uses the system swap space to do so. And I don't think unraid runs with swap? I can't check my unraid machine right now to confirm (a quick check for active swap is sketched after this list). If that's the case you could configure unraid with some swap space and see if that will allow KVM to swap guest memory. Performance penalties, but your VMs would keep running.
  7. Ballooning isn't a magic bullet. It can only reap memory the guest OS has allocated but isn't actually using (cache etc.) and / or apply other clever tricks (dedupe of memory across guests etc.) to try and free some real memory - the balloon statistics sketch after this list shows what it can actually see. If all your VMs are genuinely, actively paging all memory in the guests, the hypervisor won't be able to do much about it. You're rolling the dice if you force the hypervisor into a position where it needs to start thinking about this. I'm not sure of KVM's default behaviour if guests exhaust memory - VMware will start swapping to a swap file, killing performance but at least keeping the guests up. My understanding anyway.
  8. Very old firmware. Without double checking - P7. Getting these cards flashed was a nightmare for me (I only have one motherboard that will allow it) so I've been in no rush to keep updating them when they've been working (apparently) fine. 10TB drives - no. The only ones I'm aware of are the HGST models, which need host drivers for SMR and so wouldn't work regardless. No idea if they're even out yet.
  9. I'm only backing up the things that are irreplaceable (photos, documents etc.). Anything that can be generated again (DVD rips, FLAC etc.) I don't bother with. That said, it's only because of the amount of time it would take to push it all offsite. Don't feel bad about pushing 5TB to CrashPlan. It's what they advertise and what you pay for. I've read anecdotal tales from others with far more than that in there. They'll have designed their business model to cope with a small proportion of people eating lots of space whilst most people only use a small amount.
  10. I have three M1015s and have some 8TB disks hanging off them. As above, I'd presume 5TB would be OK.
  11. This is exactly what the current Docker is. It's based on the Phusion base image, which is just a slightly 'Docker-friendly' tweaked Ubuntu install.
  12. I'd inject a note of caution that the CrashPlan updater has a history of not really being that great at doing updates. This isn't the first time it's gone a bit wrong. It doesn't update that often, though, which helps smooth this out. This doesn't change the argument for keeping the Docker build up to date (it's easily forkable as necessary - in theory just change the path in the current one to the new CrashPlan install bundle), but it does mean that regardless of the state of any install, CrashPlan itself will always attempt to auto-update itself, and it may not go smoothly. This applies equally to CrashPlan running inside Docker images, inside your own VMs, on bare-metal installs etc. It was ever thus with CrashPlan, sadly. Having an updated Docker image won't help with this, as by the time an updated image is necessary your running instance will already have tried to update itself. It will either have succeeded, in which case you don't care about a Docker image update, or failed, in which case your backups are broken until you notice and / or until you notice there is a Docker image update and act accordingly to have it pulled down. Certainly for this specific upgrade none of this would have helped with the changed client token exchange requirements which, as far as I can tell, aren't documented by CrashPlan - so we would always have been at the mercy of someone bright in the community figuring it out. All three of my pre-existing CrashPlan installs (three separate machines, only one running inside an unraid Docker container, the other two installed directly on the hosts) needed a bit of a prod during this round of updates to come back to life.
  13. Depending on the licensing costs, and how granular they are (i.e. will you have to absorb upfront costs or can you 'pay as you go' per implementation), you may have a path to another tier of unraid licensing. Pay more for the 'Unraid 6 Double Protection' license to unlock the feature, with that uplift covering your backend licensing fees and a little on top for your trouble. It may be low volume in terms of sales, but that might not matter. Or, if it means not needing all disks to spin up, it may be a very popular license option for customers. Or the backend licensing fees could be so low that they can just be rolled into the unraid base without any fuss, and the general unraid license cost increased by a small amount across the board. Charging a fee for a new unraid license or unraid upgrade come version 7 for existing users (presuming this feature would be included) might not cause any problems. If you'd charged again for version 6 I would have happily paid, given the improvement in feature set. Something like this would bring enough additional value to the use of the product that I would see it as reasonable to pay for v7 if necessary.
  14. Hopefully this will be an option. One of the appeals of unraid for me is maximising drive spin-down. I'm happy to take a parity write penalty as a result - and mitigate it as best I can with the cache drive. I appreciate others will have different needs, but hopefully this won't be an enforced change.
  15. SSH is now part of the core system. So long as there are Docker plugins for the other two (and I believe there are) then you should be fine. Docker isn't virtualisation, so it doesn't particularly come with any CPU overhead (not entirely true, as you need some CPU for the Docker framework and daemon, but on a per-process basis this is the case).
  16. Jumping to kernel 4.x and bumps to associated other tools (docker etc) is one helluva change from previous betas.
  17. This is a fair point - but both cases so far are much smaller and neater than a 4U Norco chassis, so it's all relative. The mini-ITX boards also seem to be a bit limited in feature set in general, and the ones that aren't I can't find availability for very easily. Still very much a paper exercise this end. A lot of money to spend on just making things a bit tidier - when the same funds could go on more disk / cache SSD / memory for the existing box. SFF unraid / storage is a very interesting space to be looking into though!
  18. Thanks. It did save in the end for some reason. Though I still don't have the bridge interface the VM manager looks for by default. I found where you're saving the config settings on the flash (which is presumably the source image for the libvirt loopback mount) so I'm not sure if this is just a race condition etc. I'll look forward to RC3! Thanks for acknowledging the issue.
  19. virbr0 is a built-in NAT-style bridge, so you always have that bridge without doing any further setup. You only need to set one up explicitly under network settings if you want a VM to be visible with its own IP address on your home LAN. I don't. The VM manager couldn't bind guests to virbr0 as it wasn't there (a quick way to list what libvirt thinks it has is sketched after this list). Creating br0 and binding guests to that worked OK. Is my RC2 installation just somehow very broken? It's working fine in all other regards.
  20. I'm not sure to what you are referring. I don't see any issues with VMs after reboot in RC1 or 2. I have the same issues. I've never used the VM / KVM manager before, so this is relatively 'fresh'. I also had to enable the bridge interface in network settings before I could make any meaningful network settings in the VM manager - and then with br0, not virbr0, which the VM manager wants to default to (I don't have that interface). Where should the VM guest config data live on disk to survive reboots?
  21. Interesting - thanks. It will also take a micro-ATX board, which might open some options. Are there any recommended / good micro-ATX boards out there? I'm off to search the forums for builds..!
  22. Thanks - that's really useful. I came across these as well, which in conjunction with the above would give the possibility of neat cabling between physical hosts: http://www.pc-pitstop.com/sas_cables_adapters/ The only concern I have is that most mini-ITX boards seem to have one PCIe slot, which would limit the expandability (as you'd likely want one SAS controller per additional JBOD enclosure). I note there are two very interesting mini-ITX motherboards floating around: an ASRock Avoton SoC board (there seems to be little positive feedback on running it in unraid though - a shame, as it has good DIMM and SATA provisioning) and an Asus P9A Avoton board with integrated mini-SAS ports (I couldn't find this available to buy anywhere (UK), nor any feedback on whether unraid would support the Marvell SAS controllers). If nothing else there certainly seems to be a new market emerging for small NAS enclosures and storage-rich boards to control them. Which can only bode well for the immediate future!
  23. Hello, I've been running for quite some time now with a Norco (or X-Case in the UK) 4U chassis stuffed with drives. Other than the size of the thing, that has been working OK. With the recent (relative) explosion in drive density I'm beginning to wonder why I need so many disk slots anymore, when I could double my existing capacity in a mini-ITX case with 8TB drives. This would be much neater and 'home friendly' than the 4U chassis. However, there would still be a concern over headroom should the 5/7/8 bays (whatever mini-ITX chassis is in use) be exhausted. Has anyone chained together these mini-ITX cases when expansion is needed? Using a second (or third, or nth) chassis as just a dumb drive bay with no motherboard / CPU in it? Perhaps using SAS cables directly from the master host, or even something cleaner with SAS expanders / external SAS cables? I'm guessing the PSU in the 'slaves' would have to be rigged up to apply power to the disks with no other components present. If anyone's done this or similar can you share? It would be an interesting / modular path to go down.
  24. I've been vocally scathing of unraid in the past. Sometimes with good reason (mostly probably not). But that can't be the case anymore - unraid 6 is so much more of an advanced and rounded project (even in its beta stage) compared to previous iterations that it may as well be a completely different product. This has to be, in no small part, down to the new additions to the team and the direction focused on for unraid going forward. Well done all round.
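
A quick footnote on the stale NFS mounts in the first post above: the kernel reports a stale handle as ESTALE, so a simple client-side probe can tell a stale mount apart from a healthy one. This is only a minimal sketch - the /mnt/nfs/share path is a placeholder, substitute a path inside your own mount.

```python
import errno
import os
import sys

def is_stale(path: str) -> bool:
    """Return True if the path currently reports a stale NFS file handle."""
    try:
        os.stat(path)
        return False
    except OSError as exc:
        return exc.errno == errno.ESTALE

if __name__ == "__main__":
    # Placeholder path - point this at a file or directory inside the NFS mount.
    target = sys.argv[1] if len(sys.argv) > 1 else "/mnt/nfs/share"
    if is_stale(target):
        print(f"{target}: stale NFS file handle (ESTALE)")
        sys.exit(1)
    print(f"{target}: OK")
```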
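
On the ESX-in-KVM question: whether it stands any chance largely depends on nested virtualisation being enabled on the KVM host. A minimal sketch of the host-side checks, assuming a Linux host with the standard kvm_intel / kvm_amd module parameters exposed under /sys:

```python
from pathlib import Path

def read_param(path: str) -> str:
    """Read a single sysfs value, or report it as missing."""
    p = Path(path)
    return p.read_text().strip() if p.exists() else "missing"

if __name__ == "__main__":
    # Nested virtualisation must be on in the kvm_intel / kvm_amd module for a
    # hypervisor-in-a-hypervisor setup (e.g. ESXi as a KVM guest) to work at all.
    print("kvm_intel nested:", read_param("/sys/module/kvm_intel/parameters/nested"))  # 'Y' or '1' = enabled
    print("kvm_amd nested:  ", read_param("/sys/module/kvm_amd/parameters/nested"))

    # The host CPU also needs hardware virtualisation: vmx (Intel) or svm (AMD).
    flags_line = next((l for l in Path("/proc/cpuinfo").read_text().splitlines()
                       if l.startswith("flags")), "")
    print("host vmx/svm:", "yes" if ("vmx" in flags_line or "svm" in flags_line) else "no")
```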
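
On whether unraid runs with swap at all: a trivial sketch that just reads /proc/swaps on the host. No swap listed means that, under real memory pressure, the kernel's remaining option for an over-committed KVM guest is the OOM killer rather than paging guest memory out.

```python
from pathlib import Path

def active_swap_areas():
    """Parse /proc/swaps: the first line is a header, the rest are active swap areas."""
    lines = Path("/proc/swaps").read_text().splitlines()[1:]
    return [line.split()[0] for line in lines if line.strip()]

if __name__ == "__main__":
    areas = active_swap_areas()
    if areas:
        print("Active swap:", ", ".join(areas))
    else:
        print("No swap configured - an over-committed guest risks the qemu/KVM "
              "process being OOM-killed rather than swapped.")
```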
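
On ballooning only being able to reclaim memory the guest isn't really using: the balloon statistics make that visible. A minimal sketch using the libvirt Python bindings - 'myvm' is a placeholder guest name, and the 'unused' / 'available' figures only appear if the guest's balloon driver has a stats collection period configured.

```python
import libvirt  # libvirt-python bindings; needs a running libvirt daemon

GUEST = "myvm"  # placeholder - substitute one of your own guest names

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(GUEST)

# memoryStats() reports balloon-visible figures in KiB:
#   actual    - current balloon size (what the guest has been given)
#   unused    - free memory as seen inside the guest
#   available - total usable memory as seen inside the guest
#   rss       - what the qemu process actually holds on the host
stats = dom.memoryStats()
for key in ("actual", "unused", "available", "rss"):
    if key in stats:
        print(f"{key:>10}: {stats[key] // 1024} MiB")

conn.close()
```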
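
Finally, on the missing virbr0: that bridge only exists if libvirt's built-in 'default' NAT network has been defined and started, so listing libvirt's networks shows quickly whether it should be there. Another minimal sketch with the libvirt Python bindings, assuming the local qemu:///system daemon:

```python
import libvirt  # libvirt-python bindings

conn = libvirt.open("qemu:///system")

# virbr0 is normally the bridge behind libvirt's built-in 'default' NAT network.
# If that network was never defined or started, the bridge won't exist and a VM
# manager defaulting to it has nothing to bind guests to.
active = conn.listNetworks()           # names of running libvirt networks
inactive = conn.listDefinedNetworks()  # defined but not started

print("active networks:  ", active)
print("inactive networks:", inactive)

if "default" in active:
    net = conn.networkLookupByName("default")
    print("default network bridge:", net.bridgeName())  # usually 'virbr0'
elif "default" in inactive:
    print("'default' network exists but is not started ('virsh net-start default')")
else:
    print("no 'default' NAT network defined - which would explain a missing virbr0")

conn.close()
```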