Johnm

Everything posted by Johnm

  1. Expanding on Bob's thoughts: yes, the board is pretty outdated, but usable if you can boot it from USB (a board of this vintage might not). Once you get past that, it does have a gigabit NIC and 6 SATA (150) ports. I am assuming you already have an AGP or PCI video card. That would be enough to get you started with a 3-drive test system; using the PCI slots for expansion will bog the system down pretty quickly. You could use it as a test machine to see if unRAID is right for you before you invest money into a new PC. Download the free copy of unRAID, toss some spare drives on it, and give it a test drive (keep in mind that it will not perform as fast as the newest hardware). There is one other possible issue: it is an nForce board. nForce boards are their own standard and can have oddities. As I recall, some nForce chipsets are not compatible with unRAID and can cause data corruption. I believe a partial list is in the wiki? I would check whether the nForce 3 boards are on that list. As far as reusing your parts in an upgrade, I would throw everything into an electronics recycling bin except the tower; that is still usable. Chances are nothing on that list is compatible with any current hardware. If that PSU is 10 years old, it has most likely degraded and is not a true 480 watts anymore. It might also have older oil-filled caps that are likely to pop, but you might be able to reuse it for a small build with just a few drives until you can get a new one.
  2. Heh, nice Franken-server. It needs a tad bit of SATA cable management, but I like where you're going with this. I'm still waiting to see how you fabricate a front for the server (please say full mesh like a Mac Pro front, lol). If you want 15 more of these servers... well, actually, we use them as a couch in the datacenter.
  3. It might be an ESXi glitch: it's trying to access the PCIe2 ports with PCIe3 instructions.
  4. I am not saying that I have the answer of all answers. I will say that both at home and at work we are running X9SCMs and X9SCLs, with 2.0a on some of the boards. They all have Sandy Bridge CPUs and no issues. The one Ivy we had, we swapped to a Sandy. I am running 4 HBAs (I am also running an X9SCM with 4 NICs all passed through) on an ESXi box with 2.0a and a Sandy, all passed through to the same VM, with no issues. The use of an Ivy CPU is obviously an afterthought from Intel and was intended for reference boards. I am sure that this is not 100% stable in 100% of testing; there are many major changes between the two CPUs and I think we are pushing its intended upgrade path. I am seeing PCI bus issues when they are intermixed on the server side. I am also seeing video issues putting the HD 4000 on the HD 3000 boards, plus sleep issues and some odd USB3 issues when intermixing on the desktop side. Maybe this can be fixed with software or BIOS upgrades in the future, but I would rather buy what works now out of the box and not hope for a possible future fix. Maybe Intel should have come out with a new socket for Ivy. I don't know.
  5. I always laugh at ITX boards in a full-size case. I'm mostly laughing because that is how my first unRAID box was, that and all the empty space. Looks nice. I'd get more front fans when you get more drives; for now the temps look fine. As far as spending more money on cable management, don't sweat it. No one is going to come to your house, inspect it, and give you a guilt trip. As long as you're not impeding airflow, use the money on more drives or a Plus license. Do the housekeeping inside when you are starting to get a rat's nest.
  6. The problem people seem to be having with multiple HBA/RAID cards on the X9 series of boards is when mixing in an Ivy Bridge CPU. The culprit might be the fact that the PCI architecture for Ivy Bridge is completely different from that of Sandy Bridge; something is wrong in the PCI communication area. I run both WHSv2 and unRAID on the same box. unRAID is my file server and WHS is strictly for client backups (and WHS is then backed up to unRAID).
  7. Oh god that sucks.. Sorry about your loss.. I like your mobile lab... next time.. pull the drives and take them with you!!!
  8. That's one way to unmake the MicroServer... that reminds me of my 24-drive Norco powered by an X7SPA ITX Atom. There is a 16-port LSI HBA if you want to pass on the expander. Personally, I prefer the MicroServer as a micro server, but it is neat to see one evolve into something else. I like that you didn't actually destroy anything (unlike most modding); that way you can always rebuild the MicroServer when you choose to upgrade the board in the new build.
  9. I would go with the 2120 for 2 reasons: 1. If you get an old-stock server (more than a few months old), it will have the wrong BIOS to boot the 3120. 2. There are some oddities when mixing and matching gen2 and gen3 chips/boards. This is not just with Supermicro, but across the board with several brands; the whole backwards-compatibility thing was never fully tested outside of reference boards. While it might not affect you today, a year from now you might do the V8 head slap. Keep in mind you need ECC unbuffered RAM for this server, plus a hard drive. I'd probably get a smaller SSD for cool, quiet, ready-in-an-instant availability, and to help with energy efficiency. If you have a need for more than one server, keep the Xeon E3-1230 V2 CPU in mind. While it is $100 more than the i3, you could easily run several virtual servers on that same physical server with Hyper-V or the free ESXi. You would probably still be under the 100-watt mark, but you could run a full farm of virtual servers on that box.
  10. This RAM does work in the HP NL40 MicroServers (2 sticks, anyway); I assume it also works in the NL36. I have 2 sticks of it in mine.
  11. M1015 +1... Like Bob, I have 6 also, and I have also fried one myself (hot-plugged it... DOH!!).
  12. Disassembly video: I'll remind everyone that I have been burned doing this. The drive died a month later on me, and there was nothing I could do to put it all back together and get a replacement. It is a gamble. If I am not mistaken, the WD 4TB REs are based on the 7200 RPM Hitachi Deskstars? One of the reasons they bought them. Also, the Seagate 4TB Barracudas have been out for over a year; they're just really hard to get your hands on. I am not sure if there is a 4-platter version yet. We have some of the 5-platter ones at work.
  13. I think you should point out whether it is DDR3 or DDR2, unbuffered or registered. Different Supermicro boards take different RAM.
  14. I tried UPnP/DLNA on both my Samsung and Sony TVs. I was not digging the restrictions, limited formats, and limited documentation/support for it all. I went with XBMC on small form factor PCs. For my main theater setup I have a newer Mac Mini (a freebie from work). For the rest of my TVs I have a couple of older Acer Revos mounted to the VESA mounts on the backs of the TVs. If the Raspberry Pi had been around when I built my media empire, I would have looked into them. They look nice, especially for the price. Then again, so do the overpriced Intel NUCs.
  15. Been there, done that. Yet another reason for me to use unRAID: no-fuss hardware replacement/migration. Glad it was painless... the mental part, not the cost out of pocket. I hope the new server is faster, to help justify it.
  16. I have all of my media players use a generic user ID, usually XBMC with a password of XBMC or something similar. This works fine for me since I don't write to the servers from the media players. (This might not work with all players out there; it works with XBMC for me.) Create a media player user (XBMC, for example). Set the SMB share type to Export: Yes and Security: Private, then set all of your media shares to read-only for XBMC, and give yourself and any other accounts that need write access read/write. Then set your media player to use that account on the server. I also visit all of my other shares and set my media account to "No access" for all other shares on the server, to keep nosy eyes out of my private stuff. The only problem with this is if you write your scraper data to your media share; then you might need one master player with read/write. A rough sketch of the equivalent Samba config follows.
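     Not from the original post, just a minimal sketch of what that scheme roughly looks like as a plain smb.conf stanza, assuming a hypothetical media account "xbmc", an owner account "johnm", and a share at /mnt/user/Movies (all placeholder names; unRAID generates its own smb.conf from the share settings described above):

        [Movies]
           path = /mnt/user/Movies
           browseable = yes
           ; "Security: Private" - no guest/anonymous access
           guest ok = no
           ; only these accounts may connect at all
           valid users = xbmc johnm
           ; the media-player account is read-only
           read list = xbmc
           ; the owner account keeps read/write
           write list = johnm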
  17. I guess I'm a slave to Microsoft. For my home use, I am using the free SSL account with dynamic DNS that comes with WHS 2011, hosted by GoDaddy. It is one of those "it works, so why mess with it?" situations. My registered domain addresses all go to static IPs.
  18. I'll second that it looks like you're trying to build a high-end HTPC. Going forward with your picks: overkill on CPU for just file serving. That board has the Realtek 8111E, so you'll have to run a newer RC5. A 60GB cache drive is pretty small, unless you plan to migrate less than 60GB a day. That case is a bit odd for a server; if you already have it, then OK. The RAM is overkill, but it is cheap, so it's fine. I don't know if you'll gain anything from a USB3 flash drive, but you never know (assuming the controller is seen as USB3 in unRAID). 3x 3TB is not 15TB, but I think I know what you meant.
  19. I'll second prostuff's suggestion. These work great and are complete, ready-to-run servers in a small form factor. Just add more drives. If you buy a licensed unRAID, you can use the 250GB drive as a cache drive. I bought one of these just because it was on sale; great little server. Edit: here is a great thread dedicated to the older model: http://lime-technology.com/forum/index.php?topic=11585.0 The OP of that thread has 6 drives in it.
  20. In general, running 2 cables between 2 switches that do not support link aggregation, or have not been configured for it, will cause a loop and take down the segment/VLAN. Keep in mind that a single workstation-to-server connection will still not use much more than a full gigabit of throughput even when it has the ability; even SSD to SSD over 1GbE is about capped. The real benefit is that a single workstation is not eating 100% of the bandwidth (ignoring QoS for a sec) during large file transfers. A rough sketch of server-side bonding follows.
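     Not from the original post; a minimal sketch of what server-side link aggregation looks like on a Linux box with iproute2, assuming two NICs eth0/eth1 and a switch that already has an LACP (802.3ad) port-channel configured on those two ports (interface names and the IP are placeholders):

        # create an 802.3ad (LACP) bond; slaves must be down before they can be enslaved
        ip link add bond0 type bond mode 802.3ad
        ip link set eth0 down
        ip link set eth1 down
        ip link set eth0 master bond0
        ip link set eth1 master bond0
        ip link set bond0 up
        ip addr add 192.168.1.10/24 dev bond0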
  21. The 12x5 "should work", but there is no point. The 12x5s have on-board video (and one more thing to interfere); you're paying for a CPU with integrated video that the mobo won't recognize, so don't waste your money (or power). The 1230 is probably all you need for most normal server use. When I got my 1240, it was priced so close to the 1230 that it was a no-brainer to do that upgrade ($25 or so?). As far as the 1270 goes, I have never maxed my 1240; unless you're doing Blu-ray ripping on your server, I don't see a need for it. Maybe in the future you might need the extra CPU I/O if you think about future-proofing. SAS2LP-MV8 = unknown. I have gotten conflicting answers on that one also (as I recall, none from a definitive, trustworthy source). I do not own one, so I can't help. The obvious answer is no, they don't work, because they are not supported; but neither are the SAS(1)LP-MV8s, and I know those can be hacked because I use them. I guess the answer you don't want to hear is one of these 3: 1. Delay: wait for someone to confirm. 2. Move ahead: go ahead with the build and see what wall you run into, possibly replacing the card when you cross that bridge (being our guinea pig) and falling back to bare-metal unRAID until you can swap out cards. 3. Move ahead with no downtime: buy an M1015 (or equivalent) now, then do step 2. If the card is not compatible, then eBay it to try and get your money back for the M1015 (or a second one). If the SAS2LP does work, I'd honestly only put one drive on it for a few days (with the rest of the drives on the other card), or a test array if you have the drives to spare, just in case of some sort of odd issue (aka: don't put the main array at risk). I wish I had a better answer for you.
  22. I am running a SilverStone small form factor (SFX) PSU in my Mac case conversion. http://www.silverstonetek.com/product_power.php?tno=7&area=en Not the cheapest, but they are getting decent reviews.
  23. I would go with the Corsair unit. I wanted to point out, for others who might be looking at PSU options, that that Antec model is actually a 4-rail unit (not 2 rails as listed) and is a poor choice for a storage server.
  24. Short answer: yes. While Joe is 100% correct, personally I would not trust my data to a single point of failure or corruption given the questionable stability of an antique drive. @Joe: 22 MPG, but if you paint racing stripes on it, it will go twice as fast and get 12 MPG. Paint it silver for 36 MPG, unless the primer used is white.
  25. My decision for OI was this thread at [H]ard; it was well written and had decent community support. As I stated, I never did flip the switch to migrate my freeNAS to OI / napp-it. My freeNAS has worked 100% flawlessly for almost a year now, with a super easy setup (once I found the hacks I needed), a very intuitive user interface, performance reporting, and error reporting. Plus it was ESXi-aware and installed itself as a guest with VMware Tools ready to go. I honestly have not felt that I would gain enough of an upgrade to justify compromising my data in a migration (it should go smoothly, but crap happens; we have all been there). Why would I rebuild with OI? Because I have done freeNAS and wanted to learn something new. Would I stick with it after I tried it? Honestly, I have no clue; I have not used it. Your RAID6 idea, while it might be more fault tolerant, won't perform as fast as the striped mirrored vdevs. If you wanted to expand that RAID6, you would need to buy 4 new drives for only 2 more drives of data. With the striped mirrored vdevs you can add drives 2 at a time, losing 1 drive to redundancy, but gaining higher IO with each expansion. Yes, losing 2 drives in the same vdev = total array loss. It sounds like you are in the mindset that a production RAID is a backup (or needs no backup). It is not (especially on ZFS); you just have a greater chance of staying lucky. A backup is the only way to be safe. Remember, you can back up the ZFS to your unRAID. Restoration will be a pain because you have a chicken/egg problem: you have to first recreate the ZFS and unRAID guests before you restore (this is one reason why my unRAID guest is on my SSD; only the cache is on the ZFS). I still back up my entire ZFS array to my unRAID weekly and back up key guests daily (2x 3TB HDDs are enough for my 4x 2TBs). Some things I should point out about ZFS that are not obvious or mentioned a lot: first off, ZFS is a bit more complicated and advanced than unRAID, especially if you get into a Solaris-based OS (freeNAS is quite a bit simpler). It is very unforgiving! One mistake and you can lose your data. Don't mix 4K and 512-byte sector disks; they say to avoid 4K disks, though some people say it's OK to use them with newer builds (my array is 4K drives, Samsung F4s). Expanding an array is almost impossible without some knowledge of what you're doing; you really do need a plan ahead of time. When you first build your array, plan out your expansion options for the future. You can't just add a disk and hit expand like a hardware RAID. The best way is to add vdevs, and that needs a matching(ish) group of disks to your existing array drives. (I already plan to double my array with another 4-drive vdev when I migrate to a "head".) Before you build a "production array", test out a few test arrays; once you are happy with it, then migrate your data to it. I see people, time and time again, when expanding ZFS arrays, creating a whole new array and migrating the data over instead of expanding. A rough sketch of the vdev layout and expansion commands follows.
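     Not from the original post; a minimal sketch of the striped-mirror layout and the "add a vdev" expansion described above, using placeholder pool and disk names:

        # 4 disks -> 2 mirrored vdevs, striped together (the layout discussed above)
        zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
        # later expansion: add disks 2 at a time as another mirror vdev
        zpool add tank mirror c1t4d0 c1t5d0
        # sanity-check the layout before trusting data to it
        zpool status tank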