Everything posted by butlerpeter

  1. Point taken - will probably go for the 1230 instead. Finding stock of everything seems to be a stumbling block; I can't find anywhere in the UK that has any at the minute. But I'm in no great rush - now I know what I want I can keep an eye out. By the time I get around to buying it I'll probably have talked myself into going the whole hog and getting a new case as well lol
  2. Done a bit of research and now I'm thinking of going for:
       • Supermicro X10SL7-F motherboard
       • Xeon E3-1225 v3 processor
       • Seasonic M12II-520 PSU
       • XCase 5x3 drive cage
       • XCase 3x2 drive cage
     That would give me 8 drive cage bays in my current case, all of which could be driven from the 8 SAS2 ports on the motherboard, although I might be tempted to mount the cache and parity drives internally, driven by the two 6Gbps SATA ports on the motherboard, and keep the cage bays for data drives. So I could host my current drives and have room for expansion. I could also mount my current SATA/eSATA add-on card (PEXSATA24E) in the x4 slot to maintain connectivity to my external enclosure for further expansion; that would also add a couple more 3Gbps SATA ports. I'd even have the ability, in the future, to upgrade to a larger case with room for more drives and add a SAS2LP card in the x8 slot to allow for more drives still. That seems like a pretty good way of building a system that can be evolved over time to get near the 24-drive limit, if I end up using the 3Gbps SATA ports as well.
  3. Thanks again guys. Have had a brief look at the boards suggested and like what I see, although I see that one of them (can't remember which) has SAS2 ports instead of SATA. From research I believe that SAS is backwards compatible with SATA, so I could just plug my SATA disks in with no worries. Is that right? Would I need any special cabling or anything? Current thinking now is the Xeon - spec-wise it looks pretty similar to the i5 and the price is about the same. The motherboard will likely be more expensive than one for an i5, but then I can offset that cost by being able to re-use the RAM I have already. Will probably go for a new PSU as well, to save any concerns or messing about later. Now to have a closer look at the motherboards, look at some drive cages and see what the total cost will be. Also got to try and find suppliers in the UK. Thanks again for the advice, much appreciated.
  4. Thanks guys. What about the power supply? Suitable or not, with those boards and cpus you've suggested?
  5. The RAM is this kit http://www.amazon.co.uk/gp/product/B002T3JN0Y/ref=oh_details_o01_s00_i02?ie=UTF8&psc=1 I did also have a brief look at using a Xeon processor, as I thought that might be more likely to support the ECC RAM. Forgive me, I've not really been keeping up to date on processors and have only done a little bit of research so far, but I think the Xeon I was looking at was an E3-1225 v3 - does that sound right? It was similar in spec to the i5.
  6. Hi guys, currently my unRAID server is an HP N40L MicroServer, with 6 drives installed internally (4TB parity, 2x 3TB and 2x 1.5TB data drives and a 250GB cache drive), connected to a Hornettek Enterprise 4X eSATA external enclosure which contains an additional 4TB data drive. I've recently fitted 8GB of Kingston ECC memory.
     I'm contemplating building something with a little more CPU power, and to save costs am thinking of cannibalising my current desktop tower PC, which doesn't get much use these days, and repurposing the newly added memory from the N40L. My desktop is in a CoolerMaster Centurion case, which I think will do for a start - there's room for at least one 4 or 5 bay drive cage, plus room for internal drives. It has a Corsair HX520 power supply from about 5 years ago (been a while since I updated my PC!) and is running a Core2Duo E6700 with 4GB RAM on an Asus P5KC motherboard. The only things I'm thinking of keeping from the current PC are the case and power supply, although having seen recommendations that unRAID power supplies be single rail and so on, I'm not sure if it is really suitable. An old product page for it is http://www.corsair.com/en/hx520w which says it's triple rail.
     As a minimum I want to recreate my current setup - 6 drives in the case with the external enclosure attached - and to save a bit of cost I'd like to reuse the RAM from the N40L. With unRAID 6 on the horizon I'd like a setup that supports virtualization; I'm never likely to run loads of VMs, but maybe 1 or 2. I'm thinking of going for a Haswell quad-core i5 processor - the 4440 was one that I looked at. But I have no idea when it comes to motherboards. To be able to connect all of my internal drives I need one with at least 6 SATA connectors, plus I need one that supports the ECC memory that I have in the MicroServer.
     Does anybody have any suggestions/thoughts on a processor/motherboard combo that would tick these boxes? Also any comments on the power supply - is it suitable, and if not, any recommendations for another? Thanks Peter
  7. Could it not be possible to remove AFP from the main release but make it available as a plugin? Surely that's exactly the type of thing plugins are designed for - making additional functionality available so that users who want it can install it.
  8. SURVEY SAYS!
       1. VirtFS (9P with virtio) - people can have all their VMs access/read/write/copy directly to the unRAID drives/cache drive at close to block-level speeds. Much better than having to use a file-based protocol like NFS or Samba (which aren't even in the neighborhood of VirtFS) to copy LARGE GBs worth of data from our VMs to the host (unRAID).
       2. Video card passthrough is way better on KVM. AMD Radeon 5xxx, 6xxx, 7xxx - works. NVIDIA GeForce 7, 8, 4xx, 5xx, 6xx - works <--- no Quadro card required; Xen or ESXi cannot do this. Intel IGD - sorta working, but not there/stable yet.
       3. KVM is more catered to home users, and they are way ahead of Xen and ESXi when it comes to the things that we are going to want now and going forward. Xen and ESXi do not have anything like VirtFS because they are more business/enterprise focused. A typical Xen/ESXi server and its VMs are using 10Gb+ network speeds and block-level protocols like iSCSI, FCoE, AoE, etc. to take advantage of those speeds.
     This is why, a few pages back, I raised the idea of the VM-related bits and pieces being created as plugins on top of a base 6.0 release that includes all the necessary kernel configs and modules. That way both Xen and KVM can be supported, not all users will be affected by a huge growth in the size of the bzroot, and users can have the choice of which VM platform suits their wants/needs best - possibly none for those who aren't interested in either.
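     (As an aside on the VirtFS point above: the guest side of it is just a 9p mount over virtio. A minimal sketch, assuming a share of /mnt/user has been defined for the VM with a mount tag of "unraidshare" - the tag and paths are made-up examples, not anything unRAID ships:)

        # Host side: the libvirt domain XML would export the directory, e.g.
        # <filesystem type='mount' accessmode='passthrough'>
        #   <source dir='/mnt/user'/>
        #   <target dir='unraidshare'/>
        # </filesystem>

        # Guest side: mount the 9p export over virtio
        mkdir -p /mnt/unraid
        mount -t 9p -o trans=virtio,version=9p2000.L unraidshare /mnt/unraid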
  9. I wonder if, once the bulk of development/testing is done, it would be sensible/worthwhile to split the Xen-related packages and configs etc. from the main release and instead make them available as a system plugin. That way I guess it would also be possible for a KVM plugin to be created, and any updates to Xen/KVM packages could happen independently of an unRAID release. Obviously any kernel configs or required base (non Xen- or KVM-specific) packages should be included in the main release; anything else could be in the plugin. That way users who aren't interested in running VMs wouldn't find themselves lumbered with irrelevant (to them) packages being installed. And if there were Xen and KVM plugins then those users who are interested in running VMs could pick their architecture of choice. Thoughts?
  10. No that doesn't help - that package doesn't build, it looks as if the source for it no longer exists. Probably been updated to a new version and I can't find a comparable source package.
  11. Yep done that - first thing I did so that I could rebuild the kernel for my install.
  12. I'm following parts of this guide in order to get Xen up and running on an Arch build, but when following the section to rebuild libvirt with xen enabled, I get the following error: configure: error: You must install the Xen development package to compile Xen driver with -lxenstore Anybody know what package I need to install? I haven't been able to find one that seems relevant.
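     For what it's worth, that configure error seems to mean the xenstore headers and libxenstore library aren't anywhere configure can find them, so my next step is going to be something like the below. The package name and configure flags are guesses on my part (on Arch the xen package lives in the AUR), so check ./configure --help for your libvirt version:

        # Install Xen itself - on Arch it's an AUR package, so use whichever AUR
        # helper you prefer (yaourt shown purely as an example):
        yaourt -S xen

        # Then re-run libvirt's configure with the Xen drivers enabled:
        ./configure --with-xen --with-libxl
        make && sudo make install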
  13. Surely must be worth exploring. Having had a look at the Docker site, it seems to need some stuff enabling in the kernel. I'm not sure which, if any, are already enabled in the v6 kernels. I wonder if those options could be enabled - that way people could investigate independently whether Docker is a viable proposition. Especially whilst v6 is in the beta phase, having the options enabled in the kernel config can't hurt too much, can it?
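     A rough way to see which of those options are already there, assuming the kernel exposes its config (the option list here is just the headline items from the Docker docs, so treat it as indicative rather than complete):

        # Check the running kernel's config for the main things Docker wants
        # (namespaces, cgroups, veth/bridge networking):
        zcat /proc/config.gz | grep -E 'CONFIG_(NAMESPACES|NET_NS|PID_NS|IPC_NS|UTS_NS|CGROUPS|CGROUP_DEVICE|VETH|BRIDGE|MACVLAN)='

        # Or, if /proc/config.gz isn't enabled, grep the build config instead:
        grep -E 'CONFIG_(NAMESPACES|CGROUPS|VETH|BRIDGE)=' /usr/src/linux/.config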
  14. Am I correct in understanding that to mean Xen is booting on bare metal then booting unRAID as a VM? Wasn't the purpose of adding the VM-related stuff to the unRAID kernel to make unRAID the bare-metal kernel that would be hosting other VMs?
     "No. Dom0 is the host. Tom, just go grab one of grumpy's vhd files if you want a quick and dirty test, though without a bridge it's not gonna be pretty."
     So in Tom's example - what is running on the bare metal, the Xen kernel or the unRAID kernel? Or are they the same thing? Apologies for going off topic - thought I understood where things were heading, then I thought I'd misunderstood and now I'm clueless.
  15. Am I correct in understanding that to mean Xen is booting on bare metal then booting unRAID as a VM? Wasn't the purpose of adding the VM related stuff, to the unRAID kernel, to make unRAID the bare metal kernel that would be hosting other VM's?
  16. Now that the beta of unRAID 6 is available, I had a go at trying my Arch install using the 64-bit emhttp. Worked like a charm - up to the same state as my previous attempts: array running, shares available locally and over the network (via independently controlled Samba). If only, as mentioned before, the commands to start/stop Samba etc. were configurable. Add to that a configurable setting for the path to the unRAID config files (it defaults to /boot/config - in Arch I mount the flash drive as /flash and create /boot/config as a symlink to /flash/config). If those few things were in place then I think it would be possible to get a nicely integrated system together.
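     For anyone trying the same thing, the flash/symlink arrangement is roughly the below (the device name is just an example - mount whatever your flash drive shows up as):

        # Mount the unRAID flash drive at /flash:
        mkdir -p /flash
        mount /dev/sdX1 /flash

        # Satisfy emhttp's hardcoded /boot/config path with a symlink onto the flash:
        mkdir -p /boot
        ln -s /flash/config /boot/config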
  17. grumpy If it didn't come across in my previous post - I totally agree with you! In no way am I trying to argue or doubt your points or coerce you into writing a guide or releasing anything - I agree wholeheartedly. I've had a go at installing unRAID on a distro and can now see for myself more clearly what problems might arise. With that experience, and the points you've highlighted, it seems to me that an unRAID package for installation on any distro is not the best way for things to proceed. To my mind (and getting back onto something relating to the actual topic of this thread) the best solution would be for there to be a single unRAID distro. The same as there is now, but not based on Slackware of course. Essentially we're talking about unRAID as it is now, with Slackware swapped for CentOS/Arch/OpenSUSE/whatever. If the basic release from Limetech worked as the current one does (as an appliance) and there was the ability to install unRAID on a full installation of whatever the base distro ends up being (as you can now with a full Slackware install), then I think that would be most of the bases covered.
       • More modern and better supported base distro - Yep
       • Maintain appliance-like ability - Yep
       • Better package management - Yep
       • Ability for hardcore/experienced users to do a full install - Yep
       • No increase in support headache - Yep
       • Happy forum users who have gotten something shiny and new to play with - Yep
  18. I couldn't find one either, but I did discover that the StarTech PEXSATA24E is the same card, and is available in the UK. http://www.ebuyer.com/162229-startech-com-2-port-esata-4-port-sata-pci-express-x4-sata-pexsata24e I ordered one from ebuyer last week and installed it at the weekend. I currently have it set up to enable the 2 eSATA ports, and have my cache drive connected to one of the internal ports (eliminating the eSATA-to-SATA cable coming in through the rear of the chassis). At some point, when required, I'll look into getting hold of one (or more) of the external enclosures discussed in this thread and connecting up to the new eSATA ports.
     As an update - I got to the point of needing more HDD space, and with my MicroServer drive bays being full I needed an external enclosure. So I got one of these - http://www.hornettek.com/hdd-enclosure/3-5-quad-bay-jbod/hornettek-enterprise-4x.html and have it connected via eSATA to one of the eSATA ports on the StarTech card. The product info for the enclosure says that it supports up to 3TB drives, but at the same time I ordered a Seagate 4TB and put it in the enclosure "just to see if it would work". Happily it seems to work fine, although I do only have the 1 drive in there at the moment. If I get issues in the future when adding more drives I'll just put any 4TB ones into the MicroServer and move the <= 3TB ones into the enclosure. I haven't done any scientific testing, but access and data transfer speeds seem fine. Will wait until the next scheduled parity check to see if there is any great effect there.
     The only issue is that the drive in the enclosure isn't reporting any SMART data. Apparently that is an issue with the controller in the enclosure and only affects the first disk that is enumerated in it. So far so good - would recommend it, especially for users in the UK where there seems to be a lack of suitable JBOD enclosures. I couldn't find the previously mentioned Sans Digital ones anywhere for a decent price.
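     On the SMART issue, one thing still on my list to try (just a suggestion, I haven't confirmed it works with this particular bridge chip) is forcing smartctl to use SAT passthrough when talking to the drive behind the enclosure:

        # Ask smartctl to use SCSI-to-ATA Translation passthrough for the drive
        # behind the enclosure's controller (device name is an example):
        smartctl -a -d sat /dev/sdX

        # Some bridges only respond to the 16-byte SAT variant:
        smartctl -a -d sat,16 /dev/sdX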
  19. grumpy, good points about the config/ini files. That's exactly the type of thing I was referring to. It would make things much easier to configure for different distros, like you say. I have noticed that even though, on current unRAID, there is the rc.samba file, emhttp doesn't always use it for stopping/starting Samba - I've noticed in the logs that sometimes emhttp just kills smbd and starts it again directly. Abstracting those kinds of things out, so that there is one path for starting/stopping the various services, would be a good thing to do whether an unRAID 'package' ever gets released or not.
     Now that I've had a bash, and some success, at getting unRAID running on top of Arch, I can understand your reluctance to produce a guide. You would get a storm of questions (the few I had were bad enough), which would be made even worse by the scenario you describe - something getting updated that breaks the unRAID integration. Also in its current form, at least my version anyway, unRAID on a distro isn't as complete/polished as it would need to be. If a user sees, in the unRAID webui under emhttp, the ability to manage Samba they are going to expect it to work 100%. Whilst it does kind of work, it's not quite what would be expected from a commercial product.
     I wonder if it would be possible for the unRAID driver to be developed into a kernel module that could be distributed alongside the unRAID package and negate the need to have it compiled into the kernel? I have no knowledge or experience of such things, just speculating. On the whole I'm pleased that I've managed to get my Arch build working, although I'm not sure that I would want to run it on my actual server - I don't think I'd trust my data to my mad Linux skillz. Think I'd rather wait for a 'proper' solution. I'll probably carry on tinkering and learning though!
  20. Just in case anybody hasn't seen it, there was a previous thread about PXE booting OpenELEC, where I shared some configs and a background image for an OpenELEC menu http://lime-technology.com/forum/index.php?topic=27234.0
  21. Right - I was thinking you'd done something clever so that emhttp still managed Samba etc. But you're saying that you leave those to be managed outside of emhttp. That makes sense, I just thought there was more to it. Thanks. It's a shame that emhttp is hardcoded to use the rc.d scripts. Would have been nice if there was a bit of abstraction - e.g. emhttp calls emhttp-samba, which in turn calls /etc/rc.d/rc.samba. Then we'd be able to modify emhttp-samba to call whatever we wanted it to - "systemctl restart smbd" for example. Maybe if Tom sees this (or if you are in contact with him you could suggest it) he will keep something like that in mind for a future emhttp version.
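     Something along these lines is all I mean - the script name and its contents are hypothetical, purely to illustrate the abstraction, not anything emhttp supports today:

        #!/bin/bash
        # emhttp-samba (hypothetical wrapper): emhttp would call this instead of
        # /etc/rc.d/rc.samba directly, and each distro build could swap the body out.
        case "$1" in
            start)   systemctl start smbd nmbd ;;
            stop)    systemctl stop smbd nmbd ;;
            restart) systemctl restart smbd nmbd ;;
            *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
        esac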
  22. "Great news, however I'm pretty sure this is all futile as Tom has said a brand new x64 version of emhttp is coming 'any day now'."
     I appreciate that it all might be futile, but I do feel I have learned a lot by attempting this. I've actually re-done the Arch build, this time installing 64-bit Arch (silly mistake by me not to do 64-bit the first time). That has actually been a bit more of a learning experience - having to incorporate multilib to get emhttp running, and getting hold of a fuse package for multilib. Now I've gotten the 64-bit build to the same state as the previous one - it runs, loads emhttp automatically, starts the array and the shares are available under /mnt/user.
     However I would really like to know how yourself and grumpy have handled the init scripts issue. Even if it is pointless in the long run, it would help me round off this little project nicely. Pretty please?
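     For anyone else running the 32-bit emhttp on 64-bit Arch, the multilib side was roughly the below (from memory, so treat the package names as approximate - the 32-bit fuse library I had to hunt down separately):

        # /etc/pacman.conf - uncomment the multilib repository:
        [multilib]
        Include = /etc/pacman.d/mirrorlist

        # Refresh and pull in the 32-bit runtime libraries emhttp links against:
        pacman -Sy lib32-glibc
        # plus a 32-bit fuse library (lib32-fuse, from the AUR in my case)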
  23. I guess it depends ultimately on what the resulting solution is... If it's a package that can be installed on any (supported) distro then something like unRAID-pkg If it's a distro by itself then maybe unRAID-[DistroName] e.g. unRAID-Arch or unRAID-CentOS.
  24. I did give up on trying to get CentOS 6.5 to work - I just couldn't get the kernel to run. However I have had more luck with Arch Linux. I have been able to build the kernel and get systemd to run emhttp, resulting in being able to run the array. Am pretty sure that a load of stuff won't work properly as I didn't know what to do with all of the init scripts from /etc/rc.d. Obviously they need translating to work with systemd somehow? Can anyone who has gotten unRAID working on Arch (or any other systemd-based distro) please advise me how to get the init scripts moved over?
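     To make the question concrete, this is the sort of translation I'm imagining for each rc.d script - a guessed-at systemd unit, so the paths, Type and dependencies are almost certainly not right as written:

        # /etc/systemd/system/emhttp.service (hypothetical)
        [Unit]
        Description=unRAID emhttp management interface
        After=network.target local-fs.target

        [Service]
        # emhttp doesn't appear to daemonise itself, hence Type=simple (assumption)
        Type=simple
        ExecStart=/usr/local/sbin/emhttp
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target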