delicatepc

Members · Posts: 92 · Joined · Last visited

Everything posted by delicatepc

  1. Sales, combo deals, etc. For example, NewEgg had a combo that saved me $25 on the Supermicro board plus CPU, so I decided to go with that mobo and save some money. -dpc
  2. Well, that changes things. Will look into other Supermicro boards as well, as I thought they were bound to the app... yeesh, what a rookie assumption to make. Thanks Johnm, dpc
  3. Explain? Do you mean you can hit IPMI without the Supermicro application (like Dell/Tyan do)? Does it use a specific port? -dpc
  4. I could use a rack, but money is tight for toys on top of toys. Can you tell us more about this "ghetto" IKEA rack? It looks mighty appealing to me. EDIT: Read the other postings...
  5. Dropping the price. Asking: $175 + actual shipping cost (from Seattle)
  6. For sale: HP 615095-B21 Micro Server Remote Access Card
Originally purchased in April 2012 for an HP ProLiant N40L.
Reason for sale: Recently sold the server this was in - consolidating.
Condition: Used and very clean. Tested and works great; pulled from a working setup.
Full Disclosure: Original Cost (April 2011): ~$85. Retail Price Today (brand new): ~$85.
Asking: $50 + actual shipping cost (from Seattle)
  7. Good thought. I had not encountered issues with this before, as I had not been using all the slots on the Norco, so the load (if there ever was any) on the splitter was never high enough to cause issues. I will look into running three molex lines from the power supply. I hope my cables reach. -dpc
  8. Originally purchased in April 2011.
Reason for sale: Replacing with a much, much larger and stupidly expensive machine. No issues at all with this host; just need to free up room for the big new box.
NAS for Sale - Triple Core, 4GB RAM
====================================
Case: LIAN LI PC-A04B Black Aluminum MicroATX Mini Tower Computer Case
Power Supply: Antec BP550 Plus 550W Continuous Power ATX12V V2.2 80 PLUS Certified Modular Active PFC Power Supply
Motherboard: MSI 760GM-P33 AM3 AMD 760G Micro ATX AMD Motherboard
CPU: AMD Athlon II X3 450 Rana 3.2GHz 3 x 512KB L2 Cache Socket AM3 95W Triple-Core Desktop Processor
RAM: CORSAIR XMS 4GB 240-Pin DDR3 SDRAM DDR3 1333 Desktop Memory
Compatible Gigabit NIC: Intel EXPI9301CTBLK 10/100/1000Mbps PCI-Express Network Adapter 1 x RJ45
DVD Drive: LITE-ON SATA DVD Drive
====================================
Condition: Used and very clean - quiet workhorse in a bedroom. Have the original computer box to ship it all in. Works absolutely great.
Full Disclosure: Original Cost (April 2011): ~$440. Retail Price Today (brand new): ~$350.
Asking: $200 + actual shipping cost (from Seattle)
  9. Narrowed my mobo/RAM/CPU down to something like this:
Mobo (unordered): TYAN S5512GM4NR ATX Server Motherboard LGA 1155 Intel C204 DDR3 1600 - $240
CPU (unordered): Intel E3-1230 V2 - $235
RAM (unordered): Kingston 8GB 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 Server Memory Intel Model KVR13E9/8I x 2 - $170 ($85 each)
Why Tyan over SuperMicro? A couple of reasons I've been musing over:
> Web-based IPMI - My understanding is the SuperMicro offerings all require you to download a client to access the software. Web-based IPMI allows Java-based KVM-over-IP (the typical style that Dell/HP/etc. offer). I like web-based management with no OS-level limitations other than the web tech itself (i.e. Java/JavaScript).
> Preboot BIOS flashing. Not sure if the SuperMicro has this. Handy if a new BIOS update is needed for an updated CPU.
> 4 built-in gigabit NICs... FOUR!! Also supposedly supported by ESXi/Proxmox etc. with no hacking/patching.
> ATX-sized board (vs. the mATX SuperMicro), PCIe x16 slot plus 2 x8, 2 internal USB slots (vs. the 1 in the SuperMicro I saw)
  10. Xander Hardware:
=====================================================================================================
Case: Norco 4224 - 24 hotswap bays - $395
Case Mid Fan Wall 120mm: 120mm fan wall for Norco 4224 - $0
Case Middle Fans: Noctua NF-P12-1300 120mm Fan x 3 - $75 ($25 each)
Case Rear Fans: Noctua NF-R8-1800 80mm Fan x 2 - $36 ($18 each)
Power Supply: SeaSonic 1000W 80+ PLATINUM Modular - $187
Accessory - 1-to-7 Molex Connector: NORCO C-P1T7 4-Pin Molex 1 Male to 7 Female Power Extension Splitter Cable - $8
Accessory - Norco Ball-Bearing Rails: RL-26 Rails - $40
Mobo: SUPERMICRO MBD-X9SCM-F-O LGA 1155 Intel C204 Micro ATX Intel Xeon E3 Server Motherboard - $175
CPU: Intel E3-1230 V2 - $236
RAM: Kingston 8GB 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 Server Memory Intel Model KVR13E9/8I x 2 - $158 ($79 each)
RAID Controller Cards (for 5 of the 6 SAS connectors): undecided (thinking the LSI 9211-8i or the M1015)... Don't want hardware RAID at all. The LSI is pretty expensive for 3 cards.
Backplane to onboard SATA ports cable: NORCO C-SFF8087-4S Discrete to SFF-8087 (reverse breakout) cable - $15
1x 2.5" 120GB HDD lying around for the OS
Shipping Costs: $8
Total (so far): $1333
Chassis: ~$750 (about a $200 saving over the SuperMicro 24-bay). Processing guts (mobo/CPU/RAM): ~$570. HDD facilitation guts (RAID cards, SAS cabling, etc.): $15+?
=====================================================================================================
  11. My drive temps with the new fans - about the same or lower than with the noisy OEM fans. Note - the server is *silent* as far as I am concerned.
So, to summarize the Norco 4224 chassis:
Case: Norco 4224 - 24 hotswap bays - $400
Case Mid Fan Wall 120mm: 120mm fan wall for Norco 4224 - $20 (shipped)
Case Middle Fans: Noctua NF-P12-1300 120mm Fan x 3 - $75 ($25 each)
Case Rear Fans: Noctua NF-R8-1800 80mm Fan x 2 - $34 ($17 each)
Power Supply: SeaSonic X-850 (SS-850KM Active PFC F3) 850W 80 PLUS GOLD Modular - $211
Accessory - 1-to-7 Molex Connector: NORCO C-P1T7 4-Pin Molex 1 Male to 7 Female Power Extension Splitter Cable - $8
Accessory - Norco Rails: RL-26 Rails - $40
Total cost for the Norco 4224 chassis: about $815 (with shipping) - approximately $150 in savings over the SuperMicro chassis.
Would I buy the chassis again? Yes, as long as I order all these parts (fans etc.) together. I am OK with the rails (even though it's a shame they are such a piece of crap). ~ dpc
  12. It seems you are more versed in these matters. To each their own. ~ dpc
  13. Surely you folks jest...??
unRAID:
1. (Typically) unRAID requires a USB stick to be installed on. Yes, a vmdk can be created, and it's a great idea; however, I am not interested in creating a vmdk for each new release. As an additional deterrent, unRAID requires Windows to run the prepackaged betas - the syslinux executable. At home I have only a Mac, and I'm OK with that - I don't understand why we are stuck with Windows binary blobs for a Linux OS.
2. The unRAID licensing scheme appears to be based off the USB stick info. Perhaps it works without the USB stick, but I do not feel comfortable investing in something that is tied to a USB stick or a specific host. I don't care if the support backend for the one-off situations that could potentially occur were angels themselves - I didn't want to deal with that in a virtualized solution. I also didn't feel the unRAID product was polished enough (Speeding Ant's webGui addon excluded) to be worth the money - I care about the nitty gritty, but damn, for some paper I want some polish, hobbyist or not. The unRAID technology seems sound enough - it needs some spit and shine.
3. Not promoting or denying the stability of unRAID virtualized. It potentially is rock solid, but that was not my sticking point.
4. No (easy and clean) method to get a sandbox/dev/virtual version of unRAID up. I did some scripting and automation work for unRAID, and it was difficult to test my code because I was having trouble getting an unRAID VM. Too much work for a hobbyist project. I wanted to do more, but it was becoming difficult to keep fighting for a clean test environment instead of focusing on my scripting attempts.
OMV:
1. Simple ISO install. I boot and install the latest version from an ISO. A new version comes out, I boot up the new ISO and go. With unRAID, the newest beta comes out and that means I potentially have to mess with the USB key and/or create a new disk image.
2. Built-in update manager - a really nice feature that makes my life a lot easier.
3. Didn't have to create anything or go out of my way to whip up a VM with OMV. Two minutes and I have a new VM with OMV to test and tinker with my code.
4. Bonus points for the dev, who virtualizes the environment he develops in as well.
5. Less Windows-reliant - some of the unRAID plugins are flat out made by Windows users for Windows users (in a bad way). Zips instead of tarballs, for example (tar ships with unRAID; Info-ZIP's unzip does not - why are the plugins zipped, for crying out loud?).
6. Slightly off topic, but WAY MORE functionality is built into the core offering than in unRAID. To each his own, but I love that I have FTP, NFS, SMB/CIFS, and other services available from the get-go and don't have to rely on plugins for what I always considered core NAS functionality (built-in FTP for the big long transfers where you don't want to mess with SMB/CIFS).
Johnm - The Norco 7-way connector seemed flimsy, but so far I can't complain. I have definitely had better molex connectors, that's for sure. BTW, went with the Noctua 120mm and 80mm fan replacements, and now my drives are cooler and the machine is ALMOST completely SILENT. Day and night difference.
Second NIC - I was referring to the extra PCI Express NIC in passthrough - can't seem to get it to work. Will try passing it through to my Windows host to see if it's perhaps something funny driver-wise. The second onboard NIC is working great for me with that patch. ~dpc
  14. At this time running 5 guests offering the following network services:
--------------------------------------------------------------
> PXE booting (for OS installs, drive clones, etc.)
> NAS (SMB/FTP/NFS/etc.)
> FTP host - guest offering external FTP shares
> Internal LAMP stack - for timesheets etc.
> Media guest - middleware guest for the NAS to serve up our media in a variety of places/protocols/formats (iTunes/AirVideo/Subsonic/DLNA/Plex/etc.)
=== PXE Booting Guest (PXE) ===
2 CPU, 2GB RAM, 8GB thin-provisioned OS vmdk on SSD
-- Ubuntu 11.10 based - TFTP + PXELinux + Syslinux to serve up local imaging tools, recovery tools, diagnostic utilities, etc.
=== Network Storage (NAS) ===
4 CPU, 4GB RAM, 8GB thin-provisioned OS vmdk on SSD
Passthrough of the LSI 9211 with two SFF-8087 cables going to the Intel expander card. Total of 16 drives available.
-- OpenMediaVault based
=== FTP Host (FTP) ===
2 CPU, 2GB RAM, 8GB thin-provisioned OS vmdk on SSD
-- FreeNAS 8 based
=== LAMP Host (LAMP) ===
2 CPU, 2GB RAM, 8GB thin-provisioned OS vmdk on SSD
-- Ubuntu 11.10 based
=== Media Middleware Guest (MEDIA) ===
4 CPU, 6GB RAM, 60GB thin-provisioned OS vmdk on HDD
-- Windows Server 2008 R2 based
- Mapped network drives to the NAS
- Multiple software packages such as AirVideo, Subsonic, and Plex Media Server that stream and transcode media
  15. Hello, Built an Atlas-like clone. The machine was created in an effort to upgrade our NAS capacity as well as consolidate our resources. Running x64 everywhere.
VMware ESXi 5 Hardware:
=====================================================================================================
Case: Norco 4224 - 24 hotswap bays - $400
Case Mid Fan Wall 120mm: 120mm fan wall for Norco 4224 - $20 (shipped)
Case Middle Fans: Noctua NF-P12-1300 120mm Fan x 3 - $75 ($25 each)
Case Rear Fans: Noctua NF-R8-1800 80mm Fan x 2 - $34 ($17 each)
Power Supply: SeaSonic X-850 (SS-850KM Active PFC F3) 850W 80 PLUS GOLD Modular - $211
Mobo: SuperMicro MBD-X9SCM-F-O Intel 1155 C204 MicroATX - IPMI/iKVM + 2 gigabit NICs - $200
CPU: Intel Xeon E3-1270 Sandy Bridge 3.4GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1155 80W Quad-Core - $340
RAM: Kingston 16GB (2 of 2x4GB kits) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 (PC3 10600) - $170
Small and Fast Datastore SSD: Intel 311 Series Larsen Creek SLC SSD 20GB 2.5" SATA II - $115
RAID Controller Card: LSI Internal SATA/SAS 9211-8i 6Gb/s PCI-Express 2.0 - $240
Expander Card: Intel RAID twenty-four port RES2SV240 - $280
Backplane to onboard SATA ports cable: NORCO C-SFF8087-4S Discrete to SFF-8087 (reverse breakout) cable - $15
Extra NIC: Intel Gigabit NIC - $28
Accessory - 1-to-7 Molex Connector: NORCO C-P1T7 4-Pin Molex 1 Male to 7 Female Power Extension Splitter Cable - $8
Accessory - Norco Rails: RL-26 Rails - $40
Server Rack: iStarUSA WN228 22U 800mm Depth Rack-mount Server Cabinet - $923
2x 2TB Samsung lying around - $0
5x 500GB HDDs lying around - $0
Total (too much): $2180+ (not counting ~$1K toward the rack or shipping). This is all part of a $6K order for a rack-mounted experience.
=====================================================================================================
Thoughts:
Case: Fans are too LOUD. Not as loud as a Dell Precision tower with the door off, but loud enough to send you looking for a quieter solution.
BARELY FIT in the 800mm-deep rack. Any less and I imagine it's a no-go. Wiring up the backplane is a pain in the ass unless you remove the fan wall. Used the 7-way molex to connect both of the backplane power slots (per some forum users' recommendations). The 20GB SSD is floating inside. The case is not bad. I like the drive caddies - they seemed somewhat superior to the SuperMicro caddies from another machine. However, after purchasing the PSU ($210), piece-of-SHIT rails ($40), fans ($110), fan wall ($20), and the case itself ($400), it comes to about $800. SuperMicro (which most folks consider a superior brand to Norco) sells a 24-bay chassis with 900W redundant PSUs for $950: SUPERMICRO CSE-846TQ-R900B Black 4U Rackmount Server Case 900W. The cost savings with the Norco come to about $200, which you will repay with labor - perhaps getting slightly better-tweaked components in the end. Your employer will be frustrated with the end result, as the labor costs come out of their pocket.
Rails: Garbage. Get the ball-bearing version through a different vendor; the price is almost the same - FAIL, NEWEGG! I am living with these friction rails, but they are disappointing.
Mobo: Wish every motherboard were built like this (headless mobos, in any case). Best purchase. Really like the dedicated IPMI port. Wants: more RAM slots (or buy the 8GB sticks, if available), potentially a second CPU socket. Could use an additional PCI Express slot or two.
RAID Controller Card/Intel Expander Card: I am a fan. Working well so far. The Intel expander came with a set of cables (about $80 worth): 2 went from the RAID card to the expander, and 4 went to the backplane. Gives me 16 drives in passthrough to my NAS guest VM. Recommend. I'm not using it in any RAID.
Extra NIC: Don't seem to be able to use it in any of my guests so far (in passthrough). It's a pretty standard NIC; I imagined it would be supported. ??
OS: Went with OpenMediaVault for license-free usage, auto-update capabilities, virtualization friendliness, and being based on a stable Debian OS. I said screw it to unRAID, mainly due to it not being virtualization friendly - the online updater of OMV just sealed the deal. Also really not a fan of unRAID's Windows-centric delivery framework in the betas.
Fans: Just installed - day and night difference. Worth it. Buy them and the fan wall.
Currently have wired capacity for 16 drives in the NAS and 4 drives in datastore/RDM capacity. 1 backplane (4 drives' worth) is not wired - suggestions?
Why do only two of my drives show a blue AND a green light? The other caddies show blue only. The only difference I can think of is that the others are all SATA2 drives and those two are SATA3 2TB drives.
Once the fan wall and fans arrive I will try to throw some pictures up. http://imgur.com/3o8pY ~ dpc
  16. Thanks - glad to see someone give it a spin. Realistically, there are two ways to stop the services so the array can be stopped (from what I know):
1. Easiest: Go to Box/Settings/Disk and disable autostart of the array. Reboot the machine (all the services will shut down *cleanly*) and the array will not be started - make the changes you wish, set the array to autostart again, and reboot. All services should come up on the next boot.
2. Technical way: Telnet into the box and run the following commands to stop the current services (hint: run the same commands with "stop" replaced by "start" to start the services). You should be able to stop the array afterward.
/etc/rc.d/unraid.d/rc.unraid_sabnzbd stop
/etc/rc.d/unraid.d/rc.unraid_sickbeard stop
/etc/rc.d/unraid.d/rc.unraid_couchpotato stop
/etc/rc.d/unraid.d/rc.unraid_headphones stop
/etc/rc.d/unraid.d/rc.unraid_transmission stop
/etc/rc.d/unraid.d/rc.unraid_subsonic stop
/etc/rc.d/unraid.d/rc.unraid_airvideo stop
/etc/rc.d/unraid.d/rc.unraid_plex stop
~dpc
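The eight stop commands above can be collapsed into one short loop. This is a sketch, assuming the rc scripts follow the rc.unraid_<service> naming pattern shown in the post; on a box without those scripts installed it only prints what it would run:

```shell
#!/bin/sh
# Stop (or start) every add-on service in one pass before touching the array.
# Service names are taken from the rc scripts listed above.
services="sabnzbd sickbeard couchpotato headphones transmission subsonic airvideo plex"
action="${1:-stop}"   # pass "start" as the first argument to bring everything back up
for svc in $services; do
    script="/etc/rc.d/unraid.d/rc.unraid_${svc}"
    if [ -x "$script" ]; then
        "$script" "$action"
    else
        # Safe dry-run on machines that do not have the rc script installed.
        echo "would run: $script $action"
    fi
done
```

Running it with no argument stops everything; invoking it with `start` reverses the process in the same order.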
  17. Updated now. Added Plex and Headphones, and updated the existing items. Optimized the script to get rid of some custom blobs that are no longer needed. In the latest edition I took some extra time to reconstruct binaries and libraries from package sources, as opposed to getting them from custom blobs that forum members compiled. The script is much more dynamic now - it's not at the stage of compiling everything from source (nor should I want it to be), but it's much more standardized. Have fun. ~dpc
  18. Hi Johnm, Building an Atlas-like clone. Here is what we have ordered:
Case: Norco 4224 - 24 hotswap bays - $400
Power Supply: SeaSonic X-850 (SS-850KM Active PFC F3) 850W 80 PLUS GOLD Modular - $211
Mobo: SuperMicro MBD-X9SCM-F-O Intel 1155 C204 MicroATX - IPMI/iKVM + 2 gigabit NICs - $200
CPU: Intel Xeon E3-1270 Sandy Bridge 3.4GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1155 80W Quad-Core - $340
RAM: Kingston 16GB (2 of 2x4GB kits) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 (PC3 10600) - $170
OS SSD: Intel 311 Series Larsen Creek SLC SSD 20GB 2.5" SATA II - $115
RAID Controller Card: LSI Internal SATA/SAS 9211-8i 6Gb/s PCI-Express 2.0 - $240
Expander Card: Intel RAID twenty-four port RES2SV240 - $280
Extra NIC: Intel Gigabit NIC - $28
Accessory - 1-to-7 Molex Connector: NORCO C-P1T7 4-Pin Molex 1 Male to 7 Female Power Extension Splitter Cable - $8
Accessory - Norco Rails: RL-26 Rails - $40
Server Rack: iStarUSA WN228 22U 800mm Depth Rack-mount Server Cabinet - $923
Total (too much): $2032 (not counting ~$1K toward the rack or shipping). This is all part of a $6K order for a rack-mounted experience.
My hope is to virtualize a NAS here, possibly using unRAID (but I may go to OpenMediaVault), as well as several other services. ESXi is on the table, and I am also looking at the Proxmox 2.0 beta. Read about some of your troubles with unRAID recently. What I would love to see out of unRAID is a VM offering - something I can download and run in a VM (ESXi or KVM/QEMU). ~ dpc
  19. I would hold off. The current script has not been tested on B11, and the in-progress version of the script is undergoing an overhaul that will allow for much more flexibility and granularity. Please expect things to break - that way you won't be disappointed. I guarantee it will break something. ~ dpc
  20. 200mbps http://www.newegg.com/Product/Product.aspx?Item=N82E16833122359 ~ dpc
  21. Used the Netgear powerline - worked well, with lower pings and a more stable connection than wireless when I was renting a room. I plugged it into an OpenWrt router, which helped me keep my network separate from the folks who were renting the room to me. Speed - it worked well enough for internet (10-20Mbps was no problem). Media streaming of all sorts seemed to work well enough. When I got my own place, though, I went hardwired gigabit because I really liked that. In summary: For dropping internet to a local router/switch - GREAT! For media streaming (1080/720 from the NAS) - does the job, minor to no quirks. For transferring large files to and from the NAS - leaves more to be desired. Go wired gigabit. ~ dpc
  22. Honestly, what works best for me? I have a 27-inch iMac, which is excellent because of all the Unix/Linux similarities, especially for testing out basic Bash commands/scripts. Then I have a laptop on a docking station hooked up to 2 external monitors, with a nice keyboard and mouse, for work and for having a Windows option. Frankly, I like this best - I can play Source and some other Steam games, watch media, and play music on my iMac while I do more work-ish stuff on my docked laptop with its two 24-inch displays. Best of both worlds, from what I find. I've built several hackintoshes and they are fine, but I make enough to justify not having to tinker anymore - and that's a nice feeling. To be frank, the iMac is really the first computer I have been able to use as an abstract tool (similar to a hammer) - I use it and move on with my life or other projects. ~ dpc
  23. Look for beets and Headphones in the next release, as well as testing and usage with the latest unRAID beta. ~ dpc
  24. AirVideo works rather well over the Internet. OP: you say you have IPMI... Do you know how to use it, or are you just throwing out jargon? I ask because I could use some pointers... ~ dpc
  25. Ah yes - I had set that to /tmp during testing, to attempt to figure out why ffmpeg didn't want to compile for me. I will revert it to the disk swap style. AirVideo apparently works VERY well - a much better experience than Subsonic...
This is actually on purpose. My logic is: users who want to make basic changes can go to the webGUI and make the changes there. There is also a really great Transmission "client" that allows changing virtually all the settings: Transmission Remote GUI. Or they can telnet in and run "nano /mnt/$servicesvolume/$servicesdir/transmission/settings.json" and use the nano text editor to edit the file. I have confidence in the human condition - if they pop open the script in a text editor (Notepad++, TextEdit/TextMate), I think the script will pragmatically make sense. If they don't get it... well, unMenu may be the option they are looking for.
EDIT: I should say one thing. One great thing about this script is that people can pop it open and see a method of achieving a certain task. They can take that method and integrate it into their own package or script. This is how I put it together, and I picked up a few things along the way (` vs '). The reason I am a fan of shell scripts is how easy it is to open them up, make a change, and run them. Much friendlier for testing new bleeding-edge stuff (something I like working with at times). It's about tinkering, and - whoa, whoa, wait - there is perhaps a not-half-bad finished result. ~dpc
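One caveat with the hand-edit route above: the Transmission daemon rewrites settings.json when it shuts down, so edits made while it is running are silently lost. A minimal sketch of the stop-edit-start flow - the file path and starting contents here are stand-ins for illustration, not the real unRAID paths:

```shell
#!/bin/sh
# Demonstration of editing a Transmission-style settings.json safely.
# The path and the sample file contents are illustrative only.
conf="./settings.json"
printf '{\n    "rpc-port": 9091,\n    "speed-limit-down": 100\n}\n' > "$conf"

# 1. Stop the daemon first (on unRAID, per the post above):
#    /etc/rc.d/unraid.d/rc.unraid_transmission stop

# 2. Make the edit while nothing can overwrite it (nano, or scripted with sed).
sed -i 's/"rpc-port": 9091/"rpc-port": 9092/' "$conf"

# 3. Start the daemon again so it picks up the new value:
#    /etc/rc.d/unraid.d/rc.unraid_transmission start

grep '"rpc-port"' "$conf"   # show the edited line
```

Editing through the webGUI or Transmission Remote GUI sidesteps the issue entirely, since the running daemon applies the change itself.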