
boof

Everything posted by boof

  1. NIC drivers (Intel, Atheros, Realtek), LSI based SATA controller cards, and the SASLP controller card (there is a newer version of that card out, SAS2LP or SASLP2, that is only supported in the last couple of betas because of the newer kernel). Thanks - useful info. It's hard to keep track with all the beta threads and pull out the important tidbits.
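     If you want to check what's actually in your own box, something like the below from the console should list which NIC / SATA / SAS controllers the kernel can see (the grep pattern is only an illustration):

         lspci | grep -i -E 'ethernet|sata|sas|raid'   # show network and storage controllers on the PCI bus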
  2. I suspect I missed this but why *did* the beta kernel version jump from 2.6.x to the new 3 stream? It struck me as an odd thing to be doing mid beta cycle as it's a fairly massive change. But I'm sure there was a good reason?
  3. Use the -/+ buffers/cache: line here as your guide. It shows you the true usage of memory, ignoring any data that's cached or buffered. So it's saying your actual OS and programs in active memory are only using 340 megs, and you have 3455 megs left free that can be allocated as active memory - which is plenty. You can also see you have 3184 megs cached - almost guaranteed to be cached disk data. As Linux needs more memory for active use it will push some out of the cache to make way. As has been said, don't worry about it, you're fine.
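     For illustration, a free -m output matching those figures would look roughly like this (the numbers are reconstructed from the ones quoted above, so treat it purely as an example):

                       total       used       free     shared    buffers     cached
         Mem:           3795       3710         85          0        186       3184
         -/+ buffers/cache:         340       3455
         Swap:             0          0          0

     The 'used' figure on the Mem: line includes buffers and cache; the -/+ buffers/cache: line strips those out, which is why 340 is the number that actually matters.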
  4. In addition to the above, the big one for me is general flexibility. Build a large, powerful machine with a lump of storage in it - then carve it up however you want with regards to operating systems and functions. No need to hand it all over to a single OS. As time passes and your needs change you just reallocate as necessary - almost on the fly. Should I ever migrate away from unraid, all I need to do is bring up the replacement OS as a new virtual guest and reassign whatever storage I want across to it - again pretty much on the fly. No cumbersome downtime or rebuilding of a single machine. The same can be said for upgrading unraid whilst keeping a backout copy of a working guest ready and waiting. Other than the initial knowledge hurdle and the more limited hardware support / requirements, I'd really flip the question on its head and ask why you wouldn't use esxi.
  5. That's a good idea. I wish I was as meticulous. However, best laid plans etc. I'll comment on some stuff, though I'm not going to say whether all of this will work for you as read.

     That's useful for the initial install, but I'd really consider getting comfortable with remote access once the system is running.

     In principle I can't see a problem doing this - however - if you have a DHCP server on your LAN and can get a temporary address, I'd be inclined to go that way and then, once the server boots, reconfigure the network 'properly' via the web interface. I've seen 5 do some odd things during configuration and I'd be keen to stick to the official path for this. But your way might work and, at this stage, it's easy to back out and start again, so a minor point.

     You'll have to log in as 'root' first (there will not be a password set at this time). Then you can run your command. For clarity I presume you mean dmesg|grep SATA|grep link (and you've just made a typo).

     Functionally, no. But the preclear will help test the drive for bad sectors beforehand, so it doesn't hurt.

     As above, it's not an array drive so the function of the preclear isn't needed. But preclear will test the drive a bit for peace of mind. Yes, it will need to be formatted. You'll then need to configure each of your shares to use the cache drive.

     Do you mean persistently keep apps on the cache drive, so they're never moved to the array? The easiest way is to install them into a '.' directory, which the mover script will skip, i.e. /mnt/cache/.apps (there's a rough sketch of this at the end of this post).

     I'm not familiar with the recent plugins, but I'd suggest making sure nothing is installed in memory and everything is on the cache disk. This should be easy to test: install and configure them, then reboot.

     What about it? They all use python with various modules and sqlite, so there is no LAMP stack required for them. There are other plugins to do all that, and unraid 5 notionally uses an install of php internally for emhttp, though this is, I think, slightly restrictive.

     Not without installing another plugin. You can put them in a subfolder off /usr/local/emhttp (i.e. /usr/local/emhttp/myapp -> http://tower/myapp) but be aware of how the emhttp php environment behaves. I'm not sure an off-the-shelf php app will work without any massaging. And this doesn't help you with mysql.

     Yes - just make sure you bind the license to the correct flash ID when you buy it!

     This will largely depend on your hardware and what your needs are. They're still beta for good reason, and some hardware would mandate being on an earlier beta for compatibility. Worth reading through the beta threads on this. I, personally, would stick with 4 (again, if it meets your immediate requirements) if you're new to this, as you probably won't want to be fighting the foibles of 5 until you're more comfortable with the system. Though 4.x does still have the edge-case data loss bug that has been fixed in 5 - but never backported. Pick your poison.

     As above - just be aware things will diverge, and don't worry if they do.

     All of this is reversible until you start recycling your drives with data on them, and you can start from scratch again. There's an argument it would be good for you to go through the install iterations, then tear it down and start again from scratch, just to help you streamline the process and become more comfortable with how everything works.

     Similarly, before you put your real data on, it may be worth putting some fake / unimportant data on the array and simulating drive failures etc. to see how recovery will work.

     As a more advanced path - which you may or may not be comfortable with - there is an increasing percentage of users installing unraid as a guest under esxi. This lets you leave unraid to be a bit bucket and have other guests running alongside to run your web services etc. This, in my mind, is much better than fighting unraid's packaging and RAM filesystem concepts, as well as avoiding having to reinvent the wheel to make a lot of things work. Plenty of threads around on this if it interests you, but there is a lot to consider and it may require different hardware.

     Hope this helps. This place will be very open to helping if you hit any bumps, and I'm sure there will be others along to fill in the gaps I've missed / give you better answers.
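     Roughly what the dot-directory trick above looks like in practice - paths are only an example and assume your cache drive is mounted at /mnt/cache:

         mkdir -p /mnt/cache/.apps         # the leading dot means the mover script skips it
         # install / extract your apps somewhere under it, e.g. /mnt/cache/.apps/sabnzbd

         # and for serving something simple through emhttp (test first - its php
         # environment has its own quirks):
         mkdir -p /usr/local/emhttp/myapp  # then browse to http://tower/myapp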
  6. Just keep running as normal. Write to the array as you would normally. This will all be fine whilst the parity check is running; parity will still be updated / calculated for new writes as required. Obviously, the more you access the array during the parity check, the slower the check will be.
  7. Frustrating if you have a 24 bay chassis (or bigger) that is 'wasted' by a software restriction. Splitting servers also means, potentially / likely, splitting the namespace as well, which I imagine would be undesirable for many.
  8. The two main drawbacks highlighted in that article with regards to 'competition' with unraid are:

     - Data is striped whether you use mirroring or parity. I doubt it will tolerate multiple disk failures (without also increasing the space used by parity / mirroring) the way unraid does, as the data on the drives will not just be in a 'standard' filesystem - so you lose unraid's 'you only lose data on the drives that actually failed' advantage.

     - Because of the above, I doubt it will spin individual drives down.

     - Similarly, you can't just put one of the drives into another system for recovery; you'd have to move the entire array / stripe over.

     However, it's early days and that article was necessarily light on the nitty gritty, so one to watch with *great* interest I suspect! Nice to see Microsoft focusing on storage improvements; I suspect we might see more features fall out of this as time goes on. It will also be interesting to see what the framework looks like for third party bolt-ons (if at all) and what that might bring....
  9. Got there in the end.. I have a uefi bios and, it turns out, you have an additional hoop to jump through. When you press ctrl+c / ctrl+h to enter the bios of the m1015 nothing much happens - but the server appears to soft reboot and you get the system BIOS screen again. At this point you can go into the system BIOS as normal and you now have a new boot option called something like 'option rom' and you can then tell the system bios to boot from that - which gets you into the 1015 option rom / bios. What a pain, but it worked. From there I just added the second card into the ordering within the m1015 BIOS and that's it! Thanks for the help, your tip on setting the order made me quite comfortable that I was doing the right thing! This is on an asrock z68 extreme4 board.
  10. Unfortunately I've now hit a brick wall in getting two of these running. I've never been able to enter the card's BIOS on any system I've tried (including the one that flashed them). Now, when I have two cards inserted in my unraid system, only one HBA is detected, alongside an 'Adapter configuration may have changed, reconfiguration is suggested!' message during the drive scan in the card's BIOS. I presume this is because I now have a second card and they need to be tickled to play nice with each other. However, as I can't enter the BIOS... I can't really do much. Any ideas?
  11. If you're having problems at the -cleanflash stage, try rebooting before attempting it. I.e. back up the SBR, wipe the BIOS, but before doing megarec -cleanflash 0, reboot. You won't see the card BIOS, but you might find you can then cleanflash successfully. I do this on my asus p6t to get it past that stage. Once cleanflashed, continue with the instructions as normal (i.e. reboot, then use 5it.bat or similar). I've done two cards this way and was bashing my head off the wall at the -cleanflash stage until I figured this out. Apologies if this has already been mentioned in this thread - it's a big un! I posted this because I did a second card this afternoon and had to remember it all over again, so posted on the interweb for posterity.
  12. This was exactly the process I went through too when settling on 16. I found that to go beyond that meant I had to jump up another class in motherboard / go dual processor to get more slots. Sounds like you have everything figured out!
  13. 1 - This really depends on what you're using the OS installs for. If you're not using them for storing bulk data (presuming you're using unraid instead) then I'm not sure you'd need too much space HD-wise. Performance-wise it might be better to have 1x physical HD per VM, but you'd have to be really rattling the disks. I'd stick with an SSD and consider if you can go smaller. I have a 32G SSD in use for my datastore. This admittedly might not go too far if you were going to cram Windows VMs on, but for unix installations it's plenty.

     2 - Go for as much as you can afford / justify / the board supports. You'll never wish you had less and it's (relatively) cheap just now. I shoved 16G in mine as that's the max my board will take. I would have put more in if I could, but 16 seems a reasonable amount. Again, it really depends on the number, type and usage of VMs. I'll likely allocate 8-12 of mine just to unraid for data caching.

     3 - Looks ok to me, though I don't know too much about the board or that specific CPU. I'm guessing as it's a Xeon you have vt-d support in the board and CPU. 3 x m1015 in theory should be ok; you might want to check what speeds the pci-e slots will run at once you have three cards in them and whether this will affect your throughput at all. Very unlikely, and even then probably only during a parity check when the bulk of your disks are being hit at once. Can't comment on cooling, I just left my norco with the stock fans but undervolted them for a modicum of peace and quiet!
  14. CrashPlan

     I've not tried this - but out of interest is that really true? I can't believe crashplan wouldn't just treat it as any other file?
  15. CrashPlan

     I agree with you in principle, but crashplan have said on their twitter feed and in forums that they have people storing dozens of terabytes at the 'people who use us most' level. They also say they're clearly outliers and 99% of users don't come close to that - but in principle they can and do allow it. I agree with you that it's not sustainable if everyone did it, but so long as most people don't (and realistically not many people will have that much data full stop, let alone be concerned about backing it up or have the bandwidth to feasibly do so) and crashplan are currently happy with it, I'm willing to give them the benefit of the doubt at the moment. Nothing stops crashplan changing their pricing model at any point though - however historically they've been pretty decent about adhering to what you actually signed up for, and also for gleefully taking advantage of other providers reneging on their unlimited promises (cough..mozy..cough), so I'm sure they would be well aware of the fallout if they chose to do so.
  16. Not necessarily cache_dirs specifically - just the overall system cache. Every time you do some disk i/o the kernel will try to cache it in any unused memory. At some point it will fill your memory, but this is fine as it can juggle the amount used for caching to let your programs have what they need. If your programs need more, the memory / disk cache will be reduced accordingly. cache_dirs just runs a perpetual find loop over all your disks, which reads the metadata of the directory structure / filenames etc and forces the kernel (as it's being read from disk) to put it into the cache. Next time you do a dir listing, it comes from the cache rather than disk and avoids potential spin-ups. So yes, in principle you're correct.
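     As a very rough sketch of the idea (not the real cache_dirs script - the actual one handles depths, excludes and adaptive timing), it boils down to something like:

         while true; do
             # walking the tree reads the directory metadata from disk, so the kernel caches it
             find /mnt/disk[1-9]* /mnt/cache -noleaf > /dev/null 2>&1
             sleep 60
         done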
  17. Post the output of 'free -m'. You'll probably find most of the memory is being used for cache by the kernel - which is perfectly normal and a very good thing. It's in fact the thing cache_dirs relies on to perform so well.
  18. If you're able to describe away what you see as a 'slight performance hit' then you'll be doing very well. I suspect what you'll be seeing instead is performance dropping off a very high, sheer cliff. For writes, anyway...
  19. As above, running a vmware guest backed onto a parity protected unraid export via NFS would be a painful experience if the guest does any amount of disk i/o. I'd strongly echo the above warnings and also the recommendation to test this with the free version before committing any further. It would work but your performance would be less than adequate.
  20. Agreed. Though there is an increasing trend, it seems, of people on the forums virtualising unraid, giving the flexibility of having separate client service VMs to get round the pain of unraid / slackware's packaging. If you have time, do post up what solution you moved to...
  21. As above.. I've found the following as a client option gives me the best performance: -o noacl,nocto,rsize=32768,wsize=32768 alongside an async export. I can get 100 megabytes per second sequential throughput from an ubuntu client vm to an unraid 5 server vm using those options in conjunction with other sensible vmware optimisations (correct virtual nic, vmtools installed etc). I'm not sure how random i/o would fare though, and some of those options might not be desirable depending on your usage..
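     For reference, a hypothetical client-side mount using those options would look something like this (server name, share and mount point are just examples):

         mount -t nfs -o noacl,nocto,rsize=32768,wsize=32768 tower:/mnt/user/vmstore /mnt/vmstore

     with a matching async export on the unraid side along the lines of:

         # /etc/exports on the server (the subnet is an example)
         /mnt/user/vmstore 192.168.1.0/24(rw,async,no_subtree_check)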
  22. Thanks - that's reassuring. Can you point me in the direction of the threads you mentioned? I'm inclined to think, if the drive pops up as unformatted, to just commit it as being a disk to rsync the data from rather than trying to fix it - but it might be a trivial fix..?
  23. I have a server whittling away currently running 4.x. I've built a separate, larger in scope server with 5.x currently running (virtualised, though that makes no difference here) and a couple of token new 3TB disks in place, including parity. Rather than rsync data across and do a phased migration - drain one disk in the old server by rsync'ing to the new, wipe the disk in the old server and install it as a new blank disk in the new server to use as the target for the next old disk, and so on - what happens if I just take a physical disk from the 4.x machine, drop it in the 5.x machine and boot unraid? Will it just let me assign it a slot and nothing further (other than a parity regeneration, and I'd presumably also have to run the fixperms script)? I.e. can I just pick up all my physical disks from the 4.x machine, drop them in the 5 machine, boot unraid and give them slots and I'm done (other than redoing share and user configs, but that's minutiae), or am I asking for trouble in terms of geometries / any differences in MD drivers between 4 and 5 etc.?
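     For completeness, the phased drain-a-disk route mentioned above boils down to something like this per disk (hostnames and paths are examples only):

         rsync -avh --progress /mnt/disk1/ root@newtower:/mnt/disk1/   # copy disk1's contents to the new server
         # verify the copy, then clear the old disk before physically moving it across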
  24. No, encfs uses the FUSE filesystem layer (the same as user shares), so what is presented to you is a virtual filesystem with the unencrypted files. This filesystem (in your case Video) doesn't really exist as far as unraid's concerned, so it has no impact on parity at all. When you write or update a file in this unencrypted virtual filesystem, encfs maps it back through to the real filesystem (.Video in your case, encrypting along the way), and as the real filesystem is something unraid knows about, parity is updated. In short - don't worry; so long as your encrypted files (.Video in your example) are on the parity protected array, everything will be fine. Unraid deals only with physical disks wrt parity.
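     In other words, the setup being described maps roughly to this (paths assumed from the example above):

         # /mnt/user/.Video holds the encrypted files and sits on the parity-protected array;
         # /mnt/user/Video is the decrypted FUSE view that encfs presents
         encfs /mnt/user/.Video /mnt/user/Video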