SK Posted December 2, 2010

SK, you're currently using the ESXi LSI controller with RDM (like the topic of this post says), correct? I'm guessing I could always use the 'soft' controller and not do PCIe passthrough for the LSI card I'm actually using right now, especially if spin-up/spin-down is working correctly through the VM with it. I've really got to get a decent NIC for my ESX box; the Realtek 8111C that's on the mobo now drops connection frequently under ESX, and I'm thinking at this point that it's doing the same thing under unRAID without a VM as well (I can't seem to get it to complete an rsync from my old box, the new machine hangs at a random point in the transfer). If RDM works correctly with this version, that may be the last part of the puzzle for me to actually get started on my migration.

I used ESXi with the LSI controller before, but have now switched to the PVSCSI (paravirtualized SCSI) adapter; that requires a kernel module which isn't included in stock unRAID (the ISO I posted above has it). As far as spin-up/down goes, there are no errors in the logs, but I still need to physically check whether the drives actually spin down as reported. BTW, my mobo (GIGABYTE GA-MA780G-UD3H) also has the Realtek 8111C, but so far I have not experienced any network drops.
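(For reference, one quick way to physically verify the spin-down state from the unRAID console, rather than trusting the syslog, is something along these lines. This is just a sketch: the /dev/sd[b-e] range is an example, and it assumes your controller setup passes the ATA power-mode query through to the drives.)

  # Check the actual power state of each array drive (adjust the device list to your array)
  for d in /dev/sd[b-e]; do
      echo -n "$d: "
      hdparm -C "$d" | grep 'drive state'   # reports "active/idle" or "standby"
  done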
heffe2001 Posted December 3, 2010

On mine, the Realtek card works great for a while, but at a random time it drops connection and won't come back. The console still works fine, and you can bring the adapter up and down there, but no joy as far as the connection goes.
m4tth3wv Posted December 3, 2010

I used ESXi with the LSI controller before, but have now switched to the PVSCSI (paravirtualized SCSI) adapter; that requires a kernel module which isn't included in stock unRAID (the ISO I posted above has it). As far as spin-up/down goes, there are no errors in the logs, but I still need to physically check whether the drives actually spin down as reported. BTW, my mobo (GIGABYTE GA-MA780G-UD3H) also has the Realtek 8111C, but so far I have not experienced any network drops.

Any advantages with PVSCSI over LSI? I wonder, if unRAID were to officially support drives under ESXi, which adapter they would most likely support first.
SK Posted December 3, 2010

I used ESXi with the LSI controller before, but have now switched to the PVSCSI (paravirtualized SCSI) adapter; that requires a kernel module which isn't included in stock unRAID (the ISO I posted above has it). As far as spin-up/down goes, there are no errors in the logs, but I still need to physically check whether the drives actually spin down as reported. BTW, my mobo (GIGABYTE GA-MA780G-UD3H) also has the Realtek 8111C, but so far I have not experienced any network drops.

Any advantages with PVSCSI over LSI? I wonder, if unRAID were to officially support drives under ESXi, which adapter they would most likely support first.

Generally PVSCSI gives better performance and lower CPU utilization compared to LSI SAS, especially with I/O-intensive VMs. In vSphere 4.1 VMware fixed the bug where latency was slightly higher for PVSCSI under relatively light I/O (< 2K IOPS), but that matters for enterprise loads; for home system loads it's probably not significant. For those interested: http://www.thelowercasew.com/more-vsphere-4-1-enhancements-welcome-back-pvscsi-driver For unRAID to support PVSCSI, it's just a matter of enabling another module during the kernel build process. The bigger question is for unRAID to _fully_ support VMware ESXi.
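(For the curious, "enabling another module" amounts to roughly the following during the kernel build. This is only a sketch: it assumes the kernel source lives under /usr/src/linux and that the tree is recent enough to ship the in-tree vmw_pvscsi driver; for unRAID the resulting module would also have to be packed into bzroot.)

  cd /usr/src/linux
  # Enable the PVSCSI guest driver: set CONFIG_VMWARE_PVSCSI=m in .config,
  # e.g. via "make menuconfig" under Device Drivers -> SCSI low-level drivers
  make menuconfig
  make bzImage modules

  # Inside the VM, after switching the virtual SCSI controller to "VMware Paravirtual":
  modprobe vmw_pvscsi
  lsmod | grep vmw_pvscsi     # confirm the module is loaded
  dmesg | grep -i pvscsi      # the driver should report the adapter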
jimwhite Posted December 4, 2010

Why not make a VM of unRAID running on a full Slackware distro?
queeg Posted December 4, 2010

Why not make a VM of unRAID running on a full Slackware distro?

I think the issue with that is the drives aren't exposed to unRAID inside the VM.
doobiedo Posted December 5, 2010

Hello, I'm currently running my unRAID server under ESXi 4.1 (hardware: see sig.), using VMDirectPath for all unRAID drives: 6 via the onboard SATA and the other 6 via the BR10i; the ESXi drive is connected to the Promise SATA300 TX4 card. Using the official unRAID version the configuration works, apart from the device IDs, temps, and spin-down. Using the first bzimage/bzroot files from SK, I received a lot of error messages when files were written to the drives connected to the BR10i (the cache drive is attached to the onboard SATA). When the files are moved to the array, the "errors" counter starts to rise. Device IDs and temps work. Using the second bzimage/bzroot files from SK, all the drives attached to the BR10i came up as unformatted, plus I'm still getting errors and no temps. Both times I copied the bzimage/bzroot files directly to the USB drive. Syslog is attached; if there is any further information required just ask, and thanks for the great forum. syslog-2010-12-05.zip
nojstevens Posted December 5, 2010

Hello, I wanted to post an update with my experiences so far with unRAID inside ESXi. I had it working nicely on my single ESXi box with vmDirectPath, but then went on to learn more and more about ESXi and its power features. This led me to want to create a SAN on a separate box, so I purchased a new motherboard for the purpose of running unRAID and Openfiler inside their own ESXi host and then having my other ESXi host store its datastores on the SAN. The config below works very nicely; drive temps, spin-downs etc. are all passed through as though unRAID was booted natively.

Mobo: Intel S3210SHLX http://www.newegg.com/Product/Product.aspx?Item=N82E16813121345&Tpk=S3210SHLX
unRAID card: Supermicro AOC-SAT2-MV8 http://www.newegg.com/Product/Product.aspx?Item=N82E16815121009&Tpk=AOC-SAT2-MV8
Openfiler card: LSI MegaRAID SAS 9260 http://www.newegg.com/Product/Product.aspx?Item=N82E16816118105&cm_re=LSI_MEGARAID_SAS_9260-8I-_-16-118-105-_-Product

For Openfiler, ESXi sees this card in RAID mode (RAID 10), and then I made an RDM of the array, which Openfiler then sees. For unRAID to work, you need to have vmDirectPath turned on, which this mobo supports. This mobo is also on the VMware HCL. It has 2 x PCI-X slots, so if you grow to 16+ unRAID disks it will allow two AOC-SAT2-MV8 cards.

Jon
SK Posted December 5, 2010

Why not make a VM of unRAID running on a full Slackware distro?

I think the issue with that is the drives aren't exposed to unRAID inside the VM.

In ESXi, drives can be exposed to a VM either using physical RDM or controller passthrough (which requires supporting hardware with certain CPU features, etc.). In the first case not all features may be available, such as temps and spin-down. As far as a VM of unRAID running on a full Slackware distro goes, my preference is to keep the unRAID VM as small as possible, have it just do the storage piece, and leave other functionality to other VMs running on better-suited OS distros such as Ubuntu, CentOS, Windows and so on.
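(For reference, creating a physical RDM for one disk from the ESXi console looks roughly like the following. It's a sketch: the vml identifier, datastore name and folder are placeholders to be replaced with your own.)

  ls /vmfs/devices/disks/                      # find the disk's vml.* or t10.* identifier
  vmkfstools -z /vmfs/devices/disks/vml.0100000000XXXXXXXX \
      /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk
  # Then attach disk1-rdm.vmdk to the unRAID VM as an existing disk on the
  # virtual LSI SAS (or PVSCSI) controller.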
SK Posted December 5, 2010

Hello, I'm currently running my unRAID server under ESXi 4.1 (hardware: see sig.), using VMDirectPath for all unRAID drives: 6 via the onboard SATA and the other 6 via the BR10i; the ESXi drive is connected to the Promise SATA300 TX4 card. Using the official unRAID version the configuration works, apart from the device IDs, temps, and spin-down. Using the first bzimage/bzroot files from SK, I received a lot of error messages when files were written to the drives connected to the BR10i (the cache drive is attached to the onboard SATA). When the files are moved to the array, the "errors" counter starts to rise. Device IDs and temps work. Using the second bzimage/bzroot files from SK, all the drives attached to the BR10i came up as unformatted, plus I'm still getting errors and no temps. Both times I copied the bzimage/bzroot files directly to the USB drive. Syslog is attached; if there is any further information required just ask, and thanks for the great forum.

The "device not ready" errors are the same issue jamerson9 experienced, and are caused by the BR10i (based on the LSI 1068E chip) not correctly supporting the part of the T10 SAT-2 standard that deals with drive spin-down. As you noticed, only devices managed by the BR10i (sdh to sdo) have those errors; the ones on internal SATA do not. I wonder if the LSI1068E chip supports spin-down correctly on any card at all; so far I have not seen such evidence.
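(If anyone wants to test whether a given controller path handles spin-down commands at all, something like the following can be run against one drive behind the BR10i. This is a sketch only: /dev/sdh is an example device, and it assumes sdparm and hdparm are present in the image.)

  sdparm --command=stop /dev/sdh     # send SCSI START STOP UNIT to request spin-down
  sleep 10
  hdparm -C /dev/sdh                 # should report "standby" if the drive obeyed
  sdparm --command=start /dev/sdh    # spin the drive back up
  dmesg | tail -n 20                 # watch for "device not ready" errors from the mpt driver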
jimwhite Posted December 5, 2010

Why not make a VM of unRAID running on a full Slackware distro?

I think the issue with that is the drives aren't exposed to unRAID inside the VM.

As far as a VM of unRAID running on a full Slackware distro goes, my preference is to keep the unRAID VM as small as possible, have it just do the storage piece, and leave other functionality to other VMs running on better-suited OS distros such as Ubuntu, CentOS, Windows and so on.

Good point.
SK Posted December 6, 2010

Update for ESXi users (who use physical RDM disks): a build with VMware Tools installed (which doubles the distro size) and some minor improvements is available at http://www.mediafire.com/?2710vppr8ne43

Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k
doobiedo Posted December 7, 2010

Update for ESXi users (who use physical RDM disks): a build with VMware Tools installed (which doubles the distro size) and some minor improvements is available at http://www.mediafire.com/?2710vppr8ne43 Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

Cheers for this, I'll install it tonight and report back. OK, it's installed; as of yet there have been no errors when files are written to the array. What is missing are the drive temps for the drives connected to the BR10i; they were present in the first version.
jamerson9 Posted December 7, 2010

Update for ESXi users (who use physical RDM disks): a build with VMware Tools installed (which doubles the distro size) and some minor improvements is available at http://www.mediafire.com/?2710vppr8ne43 Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

Cheers for this, I'll install it tonight and report back. OK, it's installed; as of yet there have been no errors when files are written to the array. What is missing are the drive temps for the drives connected to the BR10i; they were present in the first version.

Are you sure about the drive temperatures? They have been missing on all the versions I have tested on my BR10i.
heffe2001 Posted December 8, 2010

Update for ESXi users (who use physical RDM disks): a build with VMware Tools installed (which doubles the distro size) and some minor improvements is available at http://www.mediafire.com/?2710vppr8ne43 Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

I still need to get a decent NIC, but I'll give it a go. Now for a 4.6 version.
doobiedo Posted December 8, 2010

Update for ESXi users (who use physical RDM disks): a build with VMware Tools installed (which doubles the distro size) and some minor improvements is available at http://www.mediafire.com/?2710vppr8ne43 Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

Cheers for this, I'll install it tonight and report back. OK, it's installed; as of yet there have been no errors when files are written to the array. What is missing are the drive temps for the drives connected to the BR10i; they were present in the first version.

Are you sure about the drive temperatures? They have been missing on all the versions I have tested on my BR10i.

Pretty positive the temps were there, but I'll switch back to the first version tonight just to make sure. Well, I've just checked the first version and I take everything back: no temps here, nothing to see, my mistake. Thanks once again to SK.
bcbgboy13 Posted December 8, 2010

Update for ESXi users (who use physical RDM disks): a build with VMware Tools installed (which doubles the distro size) and some minor improvements is available at http://www.mediafire.com/?2710vppr8ne43 Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so no go for me at this point. I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD not in the array for testing, and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.
Ford Prefect Posted December 8, 2010

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so no go for me at this point. I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD not in the array for testing, and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

...the AOC-SASLP-MV8 is useless in an ESXi environment (no native support in ESXi, and VMDirectPath not working). Since that image is tuned for ESXi, it's only natural that those kinds of features/modules are removed. ...my 2 cents.
heffe2001 Posted December 9, 2010

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so no go for me at this point. I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD not in the array for testing, and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

...the AOC-SASLP-MV8 is useless in an ESXi environment (no native support in ESXi, and VMDirectPath not working). Since that image is tuned for ESXi, it's only natural that those kinds of features/modules are removed. ...my 2 cents.

I believe the 2nd version was intended to be run outside ESXi, since it looks to have been built specifically to address problems with running the older supplied version directly on the hardware.
SK Posted December 9, 2010

Update for ESXi users (who use physical RDM disks): a build with VMware Tools installed (which doubles the distro size) and some minor improvements is available at http://www.mediafire.com/?2710vppr8ne43 Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so no go for me at this point. I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD not in the array for testing, and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

Both versions are the same except for the unRAID driver; the second ISO disables the extended spin-down code for the BR10i/LSI1068E, which does not conform to the T10 standard. As far as mvsas goes, the only relevant change was the upgrade of the Fusion MPT driver from 3.04.15 (Linux kernel stock) to the latest 4.24.00.00 (from the LSI site), which compiles and works under my ESXi configuration without issues (when using the virtualized LSI SAS controller with SATA physical RDM disks). Having a look at the logs would help; if this is an issue with the updated Fusion driver, we can certainly go back to the previous one. Given that the 4.6 stable release is out, I need to update to it anyway.
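(A quick way to confirm which driver builds a given image is actually running, for anyone comparing the versions. This is a sketch using the standard in-kernel module names, so adjust if your build differs.)

  modinfo mptsas | grep -i version     # Fusion MPT SAS driver version (3.04.15 stock vs 4.24.00.00 from LSI)
  modinfo mvsas | grep -i version      # Marvell SAS driver, if it is present in the build
  lsmod | egrep 'mptsas|mvsas'         # which of the two is actually loaded
  dmesg | egrep -i 'mptsas|mvsas'      # probe messages show whether the HBA and disks were detected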
SK Posted December 9, 2010

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so no go for me at this point. I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD not in the array for testing, and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

...the AOC-SASLP-MV8 is useless in an ESXi environment (no native support in ESXi, and VMDirectPath not working). Since that image is tuned for ESXi, it's only natural that those kinds of features/modules are removed. ...my 2 cents.

I believe the 2nd version was intended to be run outside ESXi, since it looks to have been built specifically to address problems with running the older supplied version directly on the hardware.

Either outside of ESXi, or under ESXi with VMDirectPath I/O for the LSI1068E controller.
Ford Prefect Posted December 10, 2010

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so no go for me at this point. I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD not in the array for testing, and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

...the AOC-SASLP-MV8 is useless in an ESXi environment (no native support in ESXi, and VMDirectPath not working). Since that image is tuned for ESXi, it's only natural that those kinds of features/modules are removed. ...my 2 cents.

I believe the 2nd version was intended to be run outside ESXi, since it looks to have been built specifically to address problems with running the older supplied version directly on the hardware.

Either outside of ESXi, or under ESXi with VMDirectPath I/O for the LSI1068E controller.

Hmmm... yes, mea culpa. Too many paths to follow...
heffe2001 Posted December 11, 2010

Got a good Intel NIC, and so far I've been running fine with the last (v1.0.4) supplied files (the non-VM, or direct PCI-connected, ISO). I'm currently running it native (not under ESXi), and it's happily transferring files much faster than with the onboard POS NIC. When I get a chance, I plan on giving it a go with ESX, but probably won't get to it until next week.

**EDIT** I'm up to about 1.7 TB total transferred, without any issues. Looks like the new NIC fixed my issues... Might try the onboard NIC when I set up my firewall, for the red interface... see if that works with passthrough.

**EDIT 2** Got 4 TB of data moved over, then set it up under ESXi, and it booted just fine. I did have to re-configure the drives list, but once I did, it came back up just fine and the data was all there. After that I added the parity drive to the configuration, and now it's getting about 58 MB/sec on the parity generation. Not bad compared to my old system... That's with the controller on passthrough.
Ford Prefect Posted December 13, 2010

Hi folks, the contents of this thread have always tempted me to actually try something... This weekend I took the chance. Here is what I can share:

- Installed ESXi on a SM X8SIL-F with an L3426 Xeon and 8GB of RAM, running off an SSD as boot + datastore.
- Added an LSI 9240-8i (actually an IBM ServeRAID M1015, but they are identical) and three SATA drives as JBOD for testing.

The LSI is natively supported under ESXi, so the "ESXi - RDM - unRAID" scenario should work as desired. Therefore I started with the attempt at using vmdirectpath with the LSI controller. The LSI should work with most modern distros that provide the "megaraid_sas" module (e.g. CentOS 5.5, RHEL6, FC14, Ubuntu 10.10). ...yes, I know unRAID does not support that controller (yet? :-)

Activating vmdirectpath and assigning the controller to another VM was not a problem. However, I could not get any of the Linux distros to boot inside a VM. Using the distro on bare metal, without ESXi, the LSI and the drives are properly seen. So my first assumption was that the LSI is not working with vmdirectpath. But finally, as a last resort, I tried ESXi and a Win7 Home Premium 64 inside a VM. Guess what? Win7 could see the controller, and after installing the drivers from the LSI website I was able to use the drives; I formatted the drives under Win7 and copied over some 100 GB from my external unRAID just fine. I could also restart the VM, and the LSI and drives showed up every time; I redid the copy tests fine.

I then shut down the Win7 VM, installed a Debian Lenny inside a VM and baked a new kernel with the latest drivers (v4.31) from LSI. The megaraid_sas module would not load automatically upon reboot of the VM. Loading the module manually was possible without error, however the drives were not seen. Shutting down the Linux VM and bringing up the Win7 one, the controller was gone inside Win7 ;-( After a reboot of the complete host, Win7 could see the LSI again.

Maybe this is an issue with the Linux driver initializing the controller the wrong way? How can I further debug and dig into this? Any suggestions welcome.

regards, Ford
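(One way to start digging from inside the Linux guest; this is only a sketch: the grep patterns are illustrative, and megaraid_sas is the standard in-kernel module name for that card family.)

  lspci -nnk | grep -A3 -i lsi          # is the passed-through HBA visible, and which driver (if any) is bound?
  modprobe -r megaraid_sas 2>/dev/null  # unload if already loaded
  modprobe megaraid_sas
  dmesg | tail -n 40                    # look for firmware-state or init errors from megaraid_sas
  grep -i mega /proc/interrupts         # is the adapter actually receiving interrupts?
  ls /sys/class/scsi_host/              # a new hostN should appear if the driver attached to the HBA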
bcbgboy13 Posted December 13, 2010

Got a good Intel NIC, and so far I've been running fine with the last (v1.0.4) supplied files (the non-VM, or direct PCI-connected, ISO). I'm currently running it native (not under ESXi), and it's happily transferring files much faster than with the onboard POS NIC. When I get a chance, I plan on giving it a go with ESX, but probably won't get to it until next week. **EDIT** I'm up to about 1.7 TB total transferred, without any issues. Looks like the new NIC fixed my issues... Might try the onboard NIC when I set up my firewall, for the red interface... see if that works with passthrough. **EDIT 2** Got 4 TB of data moved over, then set it up under ESXi, and it booted just fine. I did have to re-configure the drives list, but once I did, it came back up just fine and the data was all there. After that I added the parity drive to the configuration, and now it's getting about 58 MB/sec on the parity generation. Not bad compared to my old system... That's with the controller on passthrough.

Can you post your syslog for the native mode (not under ESXi) and your hardware config, if it is not too much trouble?