
boof

Members
  • Posts

    800
  • Joined

  • Last visited

Everything posted by boof

  1. I see this too, but I've put it down to the useless Windows client end (see also explorer activity stopping when a drive spins up). Other clients accessing unraid at the same time don't see any stuttering or problems with streaming; it's only the single machine that's trying to do multiple things at once. So it's either rubbish client-side behaviour from Windows or something in samba with the single daemon handling requests from that client. I haven't tested against another non-unraid samba server, which would be a good way to rule the unraid-specific implementation out (or in).
  2. I've only skimmed the other thread... Do you *know* what the problem is and that it's in the kernel? Is it a problem with the unraid code and how it interfaces with the kernel? Or is it just a hope that a new kernel will fix this eventually? I've flippantly suggested this before on other threads, but if I were in your shoes I would be inclined to seriously think about:

     - Start restricting and/or publishing an HCL. You can make this as wide or as narrow as you want. Ensure unraid works with that hardware, publish it, and for anything else we're on our own. I think this used to be quite common with 4.x but has become much more vague with 5 due to the explosion in kernel device support and we users all picking up myriad HBAs and having ever more exotic environments.

     - Stop distributing unraid as a complete OS. Rewrite it to be neatly installable and buildable (kernel module, emhttp, user shares etc.) on a known 'supported' distribution(s), for example CentOS (as an example of a reasonably stable, slow-moving RHEL clone). This leaves you free to concentrate on developing core unraid functionality (instead of spending time on the ecosystem as a whole) and lets the community worry about bolting bits on via CentOS. In effect we're already bolting bits on anyway (and in some cases replicating what you've already done, e.g. the recent spate of community samba bumps!) so not much would change, except it might become easier for us! You're already having hardware support issues and passing the problem off as being in the kernel, so you may as well just hand the whole thing over to a distribution.

     Unraid seems to be stuck in a grey area of what it wants to be here. It wants to be a complete out-of-the-box OS, I think. But it's not - it needs a whole bunch of the community addons to actually bring the functionality that would be expected from a product of this type. And there seems to be an official stance that so long as a community addon exists it doesn't need to become core (preclear? shutdown?), but at the same time there is the line that anything bolted on is unofficial and not supported. It can be confusing.

     In the absence of all else, just label it as final and shove it out the door. As long as everyone is comfortable there is no data loss issue, then performance issues (I think?) on some hardware platforms are what they are. Unless *you* can directly fix it there isn't much to be done.

     My two suggestions above are quite radical and would mean a big departure from how things are currently done. They would no doubt bring their own problems, but I would urge that they not be dismissed out of hand. Many other companies / vendors make similar models work, so it is possible.
  3. auto should be fine - one way to find out!
  4. Yes, it will. But so long as you have dedupe enabled it will just whiz through it.
  5. Some boards will also allow any sort of card in any of the suitable slots - but will reduce bandwidth to certain slots if other slots are populated. So all of a sudden your x16 slot is running at only x8 rates. Generally not a problem for disk controllers (rough numbers below), but something worth keeping an eye out for as it's usually not very well documented. Generally boards designed to cope with 3 or 4 way SLI graphics cards have special considerations to actually allow all their pci-e lanes to run at full tilt without contention.
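     To put rough numbers on that point (my own back-of-the-envelope assumptions, not figures from this thread: PCIe 2.0 at roughly 500 MB/s per lane per direction, and spinning disks sustaining around 150 MB/s each), even a slot that drops to x8 still has plenty of headroom for a typical 8-port HBA:

     ```python
     # Back-of-the-envelope check: does a reduced-width slot bottleneck an HBA?
     # Assumed figures (illustrative): PCIe 2.0 ~500 MB/s per lane per direction
     # after encoding overhead, ~150 MB/s sustained per spinning disk.
     MB_PER_LANE = 500
     DISK_MBPS = 150

     def slot_headroom(lanes: int, disks: int) -> float:
         """Slot bandwidth divided by the aggregate throughput of the disks."""
         return (lanes * MB_PER_LANE) / (disks * DISK_MBPS)

     for lanes in (16, 8, 4):
         print(f"x{lanes} slot, 8 disks: {slot_headroom(lanes, 8):.1f}x headroom")
     # x16 ~6.7x, x8 ~3.3x, x4 ~1.7x - dropping to x8 rarely hurts a disk
     # controller, but a graphics card pushing heavy transfers is another story.
     ```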
  6. Just a note of caution on this - I went with the Kingston flash reader last time I migrated server (about a year ago). This week I've had serious issues with it being recognised and so providing a valid license at boot. I've tried various permutations of actual storage in it (multiple CF cards, micro SD cards) and had similar issues from multiple machines, so I can only conclude it's the reader hardware itself. I finally managed to get it recognised in an unraid boot, but more by blind luck and persistence than anything, and I don't expect it to last much longer. So whilst they may well outlast a normal USB stick, they can still fail...
  7. Given where you likely got the case, this is probably a daft suggestion that you've already tried - but xcase no use? They helped me out with a replacement backplane without any fuss.
  8. You should be able to read the unraid disks with no problem so long as your new system can read the reiserfs filesystem. That may exclude windows - but linux should be fine. You could then just mount the drives as normal, configure flexraid to use them as data units, and configure your old parity disk as the new PPU (parity protection unit) inside flexraid. Flexraid works at the file level, not the block level like unraid. You'd be stuck with reiserfs for your existing disks, but that may not bother you / once it's all in the new system you can shuffle things round and change it at your leisure.
  9. Can you elaborate more on this? I'm not fond of the shfs on unRAID and was considering this layer. I would love to know your findings, pros/cons.

     Split levels, basically. mhddfs keys primarily off the drive with the most free space, which means you can't group or steer 'sets' of data onto individual disks to prevent excess spin-ups. Flexraid, I believe, has a similar concept to split levels in its own pooling software, though I forget what they call the feature. Some caveats to this:

     - I might be wrong; mhddfs might support more options, but that was the issue I hit up against.
     - You may not care. I appreciate some people use user shares more as a single read location whilst controlling writes to specific drives themselves manually.
     - It could be a non-problem given drive sizes these days; there is a good chance we're moving towards fewer but higher capacity drives as time goes on, so worrying about lots of drives spinning up becomes less of an issue.
     - Likewise there is no rebalancing of data in unraid. So split levels work perfectly as described - until you run out of space on a disk, and then they don't. At which point you have to manually intervene and shuffle data yourself in the backend anyway (unless I've missed something that would make my life much easier!).

     aufs can also do a single namespace, but I'm not sure it does anything better to fix the lack of split levels, and I found its config and docs quite complicated. I don't have this problem using the partnership of split levels and the cache_dirs script. Or should I say I don't always have this problem - sometimes I see spin-ups where I wouldn't expect them, but on the whole things behave as I'd hope. I presume the random spin-ups depend on the memory pressure from other things happening on the server at the time. Though as you say it does hinge on your file density and memory. When I refreshed my server I put in much more memory to help with this - but again I appreciate this may not be a good answer for everyone. I don't know if mhddfs would do any better with regards to this; I don't think it does anything clever with metadata, but I could be wrong. It would be nice for unraid to support a filesystem that allows metadata to be stored on a separate / specific disk. Not sure what does outside of the enterprise - btrfs might, but then its overall reliability could be a problem! (A rough sketch contrasting the two allocation policies is below.)
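     To make the difference concrete, here's a toy sketch of the two allocation policies - mhddfs-style 'most free space wins' versus an unraid-style split level that keeps a directory tree on one disk until it runs out of room. This is purely my own illustration of the behaviour, not how either tool is actually implemented:

     ```python
     # Toy comparison of two pooling allocation policies (illustrative only).
     # most_free: always pick the disk with the most free space (mhddfs-style).
     # split_level: keep everything under the same top-level directory on one
     # disk, only spilling elsewhere when that disk is full (unraid-style).
     disks = {"disk1": 100, "disk2": 100, "disk3": 100}   # free GB (made up)
     placement: dict[str, str] = {}                       # top-level dir -> disk

     def place_most_free(path: str, size: int) -> str:
         disk = max(disks, key=disks.get)
         disks[disk] -= size
         return disk

     def place_split_level(path: str, size: int) -> str:
         top = path.split("/")[0]              # "tv/show1/ep01.mkv" -> "tv"
         disk = placement.get(top)
         if disk is None or disks[disk] < size:
             disk = max(disks, key=disks.get)  # first write, or disk is full
             placement[top] = disk
         disks[disk] -= size
         return disk

     # With most_free, the episodes of one show end up scattered across all
     # three disks, so browsing the show spins them all up; with split_level
     # they stay together on a single disk until it fills.
     ```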
  10. I used snapraid for testing and submitted a few bug reports. It works ok. The drawback for me is that it is literally what it says on the tin - if you also want storage pooling you'd have to do something else. And there aren't that many solutions that are as good (or perhaps I should say as easy) as the unraid user share system or FlexRAID's storage pooling. mhddfs is the usual one trotted out but it falls short in some areas IMO. At which point snapraid starts to look less appealing, as to replace unraid it would need to be combined with 'something else'. Especially when the elephant in the room is Flexraid, which does ship a raid engine and pooling implementation in one package - although last time I tested it (which was a while ago, granted) I could break it within 5 seconds. Snapraid does work though (the odd bug, but they're being fixed); there are suggestions performance could be better, but I believe that is also being incrementally worked through. The community is basically the sourceforge forum, which isn't great - but the developer is active and responsive to queries there. And the bottom line is that it is fully open source and the code looked pretty reasonable to me. The bugs I reported I would actually have been able to fix myself, which is always a fair sign - if even I can understand the code well enough to do that then it's been written quite clearly! Updated to add that as Snapraid can now apparently support symlinks properly, using it in conjunction with greyhole might be a very interesting approach to solve drive pooling: http://sourceforge.net/projects/snapraid/forums/forum/1677233/topic/4661219
  11. Just for info, FlexRAID and Snapraid will both do the same thing. They differ from unraid in other areas, but both allow individual drive recovery / don't touch the data drive contents, allowing them to be easily remounted elsewhere.
  12. You can do either. ESXi has a concept of datastores, which is where virtual machine data is kept. Traditionally you will have a local datastore on your server, built from disk local to that server (i.e. not disks used by unraid). You can put your VM on that; it's up to you whether you mitigate disk failure by making this datastore redundant using RAID of some sort. This would all be outwith unraid though. Or you can make a datastore via NFS. This means you could start your unraid virtual machine, export a portion of it via NFS, then add that NFS export as a new datastore in ESXi and create new VMs there. This means all the VMs you create there rely on your unraid VM running and performance won't be great, but it gives your VM data protection via unraid. The latter option is a bit 'meta' / recursive. But it does work - I do it for some VMs myself.
  13. Flexraid. Whether you think it's any good or not is another question - but it has the features you list above. The big thing it's missing compared to unraid is 'transparent' access to data when a disk has failed. You'll be missing that data until you replace the disk and rebuild, whereas unraid will virtually reconstruct it for you to use in the meantime.
  14. I've been using xbmc for years since way back when it was actually the xbox media center on the xbox 1. I diverted off to media portal for a while but soon came back. No problems at all. I've never used a Popcorn hour but I wasn't aware xbmc was ever considered immature, it's been around for at least a decade now. I'm just running straight xbmc eden on an asrock ion box. I'd like to try openelec but more out of curiosity than any pressing need.
  15. If you tell us specifically what you don't like about 4.x we can tell you if it will be similar in 5. Broadly the fundamentals are the same in 5. There are software bumps with some minor new features, a new UI which is a bit shinier (and which you can also replace with the simplefeatures UI), a new security model / way permissions are handled, and new hardware support. My understanding is a lot of the new bits and bobs are under the hood and behind the scenes to set things up for better ongoing support in the long term. For a better answer have a look at the changelog: http://lime-technology.com/wiki/index.php?title=UnRAID_Server_Version_5.0-beta_Release_Notes
  16. hmm ok my mistake. I guess it really had been too long since I added a disk!
  17. ah fair enough, it's been a while since I've added one. Perhaps I'm confused and it's a parity check that's triggered by adding a new drive?
  18. Flexraid, snapraid and unraid will all do expansion with no loss of existing data using any size of drive. You will have a window of lesser protection whilst the parity is in flux however.
  19. Unraid will rebuild the *parity* but not the array when you add a new drive, so depending on what your issue is with rebuilding it may not help you. Is your concern protection of data during the rebuild window? Snapraid and Flexraid both allow easy expansion using differing disk sizes, and both only have to update the parity once a new disk is added / included in the config, in a similar fashion to unraid. They both, in effect, have the same fundamental model as unraid: bare filesystem disks + separate parity disk(s) (a toy sketch of that shared model is below). The core differences are really that unraid works at the block level, not the filesystem level, and is also its own complete OS. Flexraid and snapraid both sit on top of the filesystem (which is either good or bad depending on what you want) and are just standalone applications you install on top of your existing OS setup. There are other bits 'around the edges' like cache drive / user share / storage pooling and how each offers them (or not), and the individual quirks of how they work. But if you're focusing on the core / fundamentals just now... the feature matrix link on the snapraid website posted above is a pretty good and impartial look at things.
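     As a toy illustration of that shared model (plain single-parity XOR for the sake of example - not the actual unraid, Snapraid or Flexraid code), the parity disk holds the XOR of the data disks, which is why adding a disk only means recomputing parity and recovering a disk only means XOR-ing the survivors:

     ```python
     # Toy single-parity model: independent data disks plus a parity disk that
     # is their byte-wise XOR. Illustrative only - the real products differ in
     # plenty of details, but the shape of the model is the same.
     from functools import reduce

     def xor_blocks(blocks: list[bytes]) -> bytes:
         return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

     data_disks = [b"disk one data...", b"disk two data...", b"disk 333 data..."]
     parity = xor_blocks(data_disks)

     # Lose any single disk: XOR of the survivors plus parity recovers it.
     lost = 1
     survivors = [d for i, d in enumerate(data_disks) if i != lost]
     assert xor_blocks(survivors + [parity]) == data_disks[lost]

     # Adding a new data disk only means folding it into parity; the existing
     # data disks are untouched (the "window of lesser protection" is while
     # this recompute runs).
     new_disk = b"disk four data.."
     parity = xor_blocks([parity, new_disk])
     ```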
  20. Don't discount the others. They may work as well / better than unraid depending on your scenario. Always personal preference / best tool for the job but unraid is far from perfect.
  21. FlexRAID and Snapraid are probably the two closest in terms of technology - both do unstriped parity. But neither is distributed as a standalone OS - yet. FlexRAID keeps threatening to, but there's no sign yet. FlexRAID is more feature complete and includes more parity protection features as well as storage pooling (similar to user shares - not quite the same, but similar). Snapraid is more barebones (by design) and just does the parity protection, pointing you toward other tools if you need to layer features on top.
  22. Please see the first post from Limetech in this thread : and thread with further background : http://lime-technology.com/forum/index.php?topic=20301.0
  23. ZFS does 'end to end' checksumming of the data and metadata blocks it writes to disk. If, on read back, it finds that the checksum of the block it's just read disagrees with the previously stored checksum, it can attempt to fix it by either reconstructing that block using a raidz parity rebuild for that individual block (presuming this problem is occurring in a raidz pool!) or by going to another copy of the block if you have replication enabled. There may also be something it can do based on its copy-on-write methodology - I don't know how long it keeps 'old' copies of data around once it's written a new version, or if it even tracks this internally. (A very rough sketch of that read path is below.)

     I'm not convinced how infallible this protection is - it still needs to be able to reconstruct the block, and I would presume in the (unlikely?) event where that particular block of data has problems on multiple disks it won't be able to reconstruct. And if you're only running zfs on a single disk, or collections of single disks with no sort of replication enabled or parity based recovery possible, all it can do is warn you a checksum has failed. So 'having zfs' as a filesystem doesn't, I think, inherently protect you from this. You still need to be careful and appreciate there may be edge cases. This is all just my understanding though, I could be very wrong.

     I'd be more worried, personally, about bad hardware causing data corruption than about bitrot on the disks over time. ZFS may not protect you from this: if the data is corrupted before it's written to the filesystem then the checksum will still be correct - just for the corrupted data. In short I'd be kitting out with ECC ram and enterprise kit as a priority before I relied on ZFS to save me. Though I appreciate ZFS could (if you were planning on using it anyway) be a quick and easy 'rude not to' layer of protection.

     In my own experience I've never (noticeably) had any problems with this sort of corruption, so I don't bother with any of it and just have a decent backup methodology in place including verification and versioning of data. But as drive densities increase and the overall amount of data I store grows, I may change my approach - though likely only as a result of being bitten hard by the problem in future. I'd be very interested in any case studies or papers where people have prodded at ZFS' recovery from bitrot.
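     A very rough sketch of that read path as I understand it (my own simplification - the names and structure here are not ZFS internals): verify the stored checksum on read and, if it disagrees, try each redundant source in turn; with no redundancy at all, the only thing left is the warning:

     ```python
     # Simplified sketch of a ZFS-style self-healing read (illustrative, not
     # actual ZFS code): compare the block against its stored checksum and, on
     # mismatch, fall back to redundant copies / parity-reconstructed blocks.
     import hashlib

     def checksum(block: bytes) -> bytes:
         return hashlib.sha256(block).digest()

     def read_block(primary: bytes, stored_sum: bytes, alternates: list[bytes]) -> bytes:
         if checksum(primary) == stored_sum:
             return primary                    # normal case: data is intact
         for candidate in alternates:          # mirror copy or parity-rebuilt block
             if checksum(candidate) == stored_sum:
                 return candidate              # "self-heal": a good copy was found
         # Single disk, no extra copies, no raidz: all it can do is tell you.
         raise IOError("checksum mismatch and no redundant copy to repair from")

     good = b"original block contents"
     stored = checksum(good)
     corrupted = b"originaX block contents"
     print(read_block(corrupted, stored, alternates=[good]))  # heals from the copy
     ```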
  24. I'm going to update to rc[4|5] over the weekend - I was looking forward to this fix too. It's good to hear you've found it much improved. Thanks for the info, I'm much more confident about doing the update now!