jimwhite

Everything posted by jimwhite

  1. Two of these in RAID 0 as my Windows machine's boot drive works great!
  2. If you really think about it, it's a no-brainer!
  3. Mine is under my computer table, towards the back.... or at least it will be when I finish futzing with it and put the cover back on... so right now it's in the middle of the floor in my office!
  4. Thanks for all the replies. I guess a few gigs of HD space on the cache drive is cheap insurance.
  5. I am transitioning from 1TB drives to 2TB. I was copying folders of files (about 700GB worth) from a 1TB drive to a 2TB drive with MC. The array was started but without parity. I got a Kernel Bug, see attached syslog. Anybody got any ideas? :'( OBTW, I excised large sections of dupe-file errors from the syslog. OBTW2, running on ESXi. syslog-2011-08-23.zip
  6. Just thought I'd add what I posted in the ATLAS thread... I too followed the lead of bryanr in his http://lime-technology.com/forum/index.php?topic=7914.0 thread, mapping each drive via command-line manipulations, and with 16 drives it was a bit of a pain in the arse. Not only that, but any time a disk is moved or swapped, it must be done again!! Gotta be an easier way. I have 16 hot-swap bays in my tower with 16 Samsung 2TB drives. The first 6 are on the "Intel controller", the next 8 on an LSI 2008 SAS controller, and the last two on a Marvell 4-port SATA card (which ESXi has no drivers for). I also have an LSI 4-port RAID controller with 3 1TB Seagates in a RAID 5 for my ESXi datastore. While poking around in the GUI for ESXi (vSphere Client) I found a page where I could assign an entire controller to a VM (Configuration/Advanced Settings). I created a new VM for unRAID and, instead of going through all that command-line stuff, I assigned the 3 PCI-bus controllers as passthrough, then selected them in the unRAID VM settings. Voilà... the VM runs just as if it were (and it is) running on bare metal. The drives came right up, and they are not virtually mapped, so I'm free to swap them around and replace them just by rebooting the VM.
  7. Just wondering if it's worthwhile carving off a piece of my cache drive.
  8. You would think so... but the Kingwins have a spring-loaded SATA connector (presumably to assist ejection) and it takes substantial force just to fully seat the connector!
  9. I've been using Kingwin 3-in-2s and 4-in-3s for a while with no problems... that is, until I bought a mess of Samsung 2TB drives... they're 1/16" shorter than a Seagate and don't quite make a good connection. Had to tape a piece of 1/16" Lexan to the front of 16 drives!
  10. I too followed the lead of bryanr in his http://lime-technology.com/forum/index.php?topic=7914.0 thread, mapping each drive via command-line manipulations (the per-drive RDM commands are sketched after this list), and with 16 drives it was a bit of a pain in the arse. Not only that, but any time a disk is moved or swapped, it must be done again!! Gotta be an easier way. I have 16 hot-swap bays in my tower with 16 Samsung 2TB drives. The first 6 are on the "Intel controller", the next 8 on an LSI 2008 SAS controller, and the last two on a Marvell 4-port SATA card (which ESXi has no drivers for). I also have an LSI 4-port RAID controller with 3 1TB Seagates in a RAID 5 for my ESXi datastore. While poking around in the GUI for ESXi (vSphere Client) I found a page where I could assign an entire controller to a VM (Configuration/Advanced Settings). I created a new VM for unRAID and, instead of going through all that command-line stuff, I assigned the 3 PCI-bus controllers as passthrough, then selected them in the unRAID VM settings. Voilà... the VM runs just as if it were (and it is) running on bare metal. The drives came right up, and they are not virtually mapped, so I'm free to swap them around and replace them just by rebooting the VM.
  11. thanks for the clarity... now I "get it"! I'll just move the "other" versions to a non-user share! Thanks...
  12. I wholeheartedly disagree. Running VirtualBox on unRAID is a hack at best, while ESXi is designed precisely for the task at hand. Given reasonable hardware, the results are VERY predictable. Backing up the VM couldn't be simpler: turn the "power" off to the VM, browse to it in the datastore browser, and export the VM as a file to wherever you'd like to keep it!
  13. Bingo! That's the "issue", but it seems there should be a better way to disable this reporting (and hence the "spamming" of my syslog).
  14. I'm in the midst of a similar build on a Tyan S5512 board. I have my first 6 drives on the motherboard's southbridge "Intel" controller. When raw-device-mapped to the unRAID VM, the temps don't appear to work, whereas the drives mapped through from the LSI SAS controller report correct temps. Have you seen the same?
  15. ...except for the enormous amount of spam in my syslog. Is there a way to turn this off?
  16. Even newly manufactured drives (since the release of the JP1 firmware) need to be flashed?
  17. NOOB alert! The subject says it all. Is there a serious problem with having the same file in two or more places (other than the wasted space)?
  18. note: coupon only good on the first drive....
  19. I purchased one of these for a new build I'm in the middle of now. It's going to be an ESXi 4.1 box with a total of 16 drives (maybe 18?): six on the Intel southbridge, 8 on the onboard LSI 2008, and 2 or 4 on the eBay LSI plug-in card I'm awaiting delivery on. My case, a Thermaltake Armour, has 4 3-in-2 hot-swap cages and one 4-in-3, all from Kingwin. The case also has an internal mount for 2 more drives which I may or may not use. The intention is to use 2 drives in RAID 0 on the plug-in card as the datastore for the ESXi layer. Maybe these will go in the internal mount, allowing use of all 16 front slots for unRAID. Still gelling that config in my mind. Other specs are few... a Corsair 750-watt supply, 16 Samsung 2TB 204 drives for unRAID, 2 Seagate 7200 RPM 1TB drives for the RAID 0 datastore, 16GB of ECC DDR3, and a Xeon E3-1230... I've been playing with the board quite a bit at the ESXi level, learning how to migrate VMs back and forth between host and client, and such. The board runs great and I'm absolutely thrilled with the IP-based KVM functionality... being able to totally power down the server and then turn it back on from my desktop is the bee's knees...
  20. you still need at least one partition which includes the whole disk!
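
For anyone curious what the per-drive command-line mapping mentioned in the passthrough posts above actually involves, here is a minimal sketch of the physical raw device mapping (RDM) approach on an ESXi 4.x host. The device identifier, datastore name, and folder below are hypothetical placeholders; only the general vmkfstools invocation and the step of attaching the resulting .vmdk to the VM as an existing disk reflect the real procedure.

    # List the raw disks the host can see (device names will differ on your system)
    ls /vmfs/devices/disks/

    # Create a physical-compatibility RDM pointer file for one drive.
    # The device ID and datastore path below are made-up examples.
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____SAMSUNG_EXAMPLE_SERIAL \
        /vmfs/volumes/datastore1/unRAID/disk1-rdm.vmdk

Each RDM .vmdk then has to be added to the unRAID VM as an existing hard disk, one per drive, and redone whenever a drive moves or is replaced, which is exactly the bookkeeping that assigning the whole controller as passthrough avoids.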