UnRAID on VMWare ESXi with Raw Device Mapping



That worked. But I couldn't pre-clear the disks when running under ESXi; it gave me an error from smartctl about the disk not being an ATA device, or something along those lines.

The preclear_disk.sh script now has a -D option to eliminate the "-d ata" fed to smartctl, and a "-d type" option to feed the "type" to smartctl as an alternative to "-d ata".

 

Joe L.

 

What type do I use? I tried just -d, -d scsi and -d iscsi.

 

I might just run unRAID directly on this box and install ESXi on another box (without a HDD, booting from USB, with an NFS datastore on the unRAID box) to host my other services, at least until there's more support for temperatures and spindown under ESXi.

 


At first you might just want to try the -D option, which means no -d option at all, to see whether the system and smartctl are smart enough to figure it out on their own. I don't expect older unRAID versions like 4.6 to manage that, so here is an excerpt from the smartctl man page listing the options it supports with '-d'.

 

-d TYPE, --device=TYPE

Specifies the type of the device. The valid arguments to this option are ata, scsi, sat, marvell, 3ware,N, areca,N, usbcypress, usbjmicron, usbsunplus, cciss,N, hpt,L/M (or hpt,L/M/N), and test.

If this option is not used or 'auto' is used, then smartctl will attempt to guess the device type from the device name or from controller type info provided by the operating system.

If 'test' is used as the TYPE name, smartctl prints the guessed TYPE name, then opens the device and prints the (possibly changed) TYPE name, and then exits without performing any further commands.

The 'sat' device type is for ATA disks that have a SCSI to ATA Translation (SAT) Layer (SATL) between the disk and the operating system. SAT defines two ATA PASS THROUGH SCSI commands, one 12 bytes long and the other 16 bytes long, that smartctl will utilize when this device type is selected. The default is the 16 byte variant, which can be overridden with either '-d sat,12' or '-d sat,16'.

The 'usbcypress' device type is for ATA disks that are behind a Cypress USB-PATA bridge. This will use the ATACB proprietary SCSI pass-through command. There is no autodetection at the moment. The best way to know if your device supports it is to check your device's USB ID (most Cypress USB-ATA bridges have vid=0x04b4, pid=0x6830) or to try it (if the USB device doesn't support ATACB, smartmontools prints an error). The default SCSI operation code is 0x24, and although it can be overridden with '-d usbcypress,0xn', where n is the SCSI operation code, you're running the risk of damage to the device or filesystems on it.

[NEW EXPERIMENTAL SMARTCTL FEATURE] The 'usbjmicron' device type is for SATA disks that are behind a JMicron USB to PATA/SATA bridge. The 48-bit ATA commands (required e.g. for '-l xerror', see below) do not work with all of these bridges and are therefore disabled by default. These commands can be enabled by '-d usbjmicron,x'. If two disks are connected to a bridge with two ports, an error message is printed if no PORT is specified. The port can be specified by '-d usbjmicron[,x],PORT' where PORT is 0 (master) or 1 (slave). This is not necessary if the device uses a port multiplier to connect multiple disks to one port; the disks appear under separate device names then. CAUTION: Specifying ',x' for a device which does not support it results in I/O errors and may disconnect the drive. The same applies if the specified PORT does not exist or is not connected to a disk.

[NEW EXPERIMENTAL SMARTCTL FEATURE] The 'usbsunplus' device type is for SATA disks that are behind a SunplusIT USB to SATA bridge.

Under Linux, to look at SATA disks behind Marvell SATA controllers (using Marvell's 'linuxIAL' driver rather than the libata driver) use '-d marvell'. Such controllers show up as Marvell Technology Group Ltd. SATA I or II controllers using lspci, or using lspci -n show a vendor ID 0x11ab and a device ID of either 0x5040, 0x5041, 0x5080, 0x5081, 0x6041 or 0x6081. The 'linuxIAL' driver seems not (yet?) available in the Linux kernel source tree, but should be available from system vendors (ftp://ftp.aslab.com/ is known to provide a patch with the driver).

Under Linux, to look at SCSI/SAS disks behind LSI MegaRAID controllers, use syntax such as:

    smartctl -a -d megaraid,2 /dev/sda
    smartctl -a -d megaraid,0 /dev/sdb

where in the argument megaraid,N, the integer N is the physical disk number within the MegaRAID controller. This interface will also work for Dell PERC controllers. The following /dev/XXX entry must exist:
For PERC2/3/4 controllers: /dev/megadev0
For PERC5/6 controllers: /dev/megaraid_sas_ioctl_node

Under Linux and FreeBSD, to look at ATA disks behind 3ware SCSI RAID controllers, use syntax such as:

    smartctl -a -d 3ware,2 /dev/sda
    smartctl -a -d 3ware,0 /dev/twe0
    smartctl -a -d 3ware,1 /dev/twa0
    smartctl -a -d 3ware,1 /dev/twl0

where in the argument 3ware,N, the integer N is the disk number (3ware 'port') within the 3ware ATA RAID controller. The allowed values of N are from 0 to 127 inclusive. The first two forms, which refer to devices /dev/sda-z and /dev/twe0-15, may be used with 3ware 6000, 7000, and 8000 series controllers that use the 3x-xxxx driver. Note that the /dev/sda-z form is deprecated starting with the Linux 2.6 kernel series and may not be supported by the Linux kernel in the near future. The final form, which refers to devices /dev/twa0-15, must be used with 3ware 9000 series controllers, which use the 3w-9xxx driver.

The devices /dev/twl0-15 must be used with the 3ware/LSI 9750 series controllers, which use the 3w-sas driver.

Note that if the special character device nodes /dev/twl?, /dev/twa? and /dev/twe? do not exist, or exist with the incorrect major or minor numbers, smartctl will recreate them on the fly. Typically /dev/twa0 refers to the first 9000-series controller, /dev/twa1 refers to the second 9000-series controller, and so on. The /dev/twl0 device refers to the first 9750-series controller, /dev/twl1 refers to the second 9750-series controller, and so on. Likewise /dev/twe0 refers to the first 6/7/8000-series controller, /dev/twe1 refers to the second 6/7/8000-series controller, and so on.

Note that for the 6/7/8000 controllers, any of the physical disks can be queried or examined using any of the 3ware's SCSI logical device /dev/sd? entries. Thus, if logical device /dev/sda is made up of two physical disks (3ware ports zero and one) and logical device /dev/sdb is made up of two other physical disks (3ware ports two and three), then you can examine the SMART data on any of the four physical disks using either SCSI device /dev/sda or /dev/sdb. If you need to know which logical SCSI device a particular physical disk (3ware port) is associated with, use the dmesg or SYSLOG output to show which SCSI ID corresponds to a particular 3ware unit, and then use the 3ware CLI or 3dm tool to determine which ports (physical disks) correspond to particular 3ware units.

If the value of N corresponds to a port that does not exist on the 3ware controller, or to a port that does not physically have a disk attached to it, the behavior of smartctl depends upon the specific controller model, firmware, Linux kernel and platform. In some cases you will get a warning message that the device does not exist. In other cases you will be presented with 'void' data for a non-existent device.

Note that if the /dev/sd? addressing form is used, then older 3w-xxxx drivers do not pass the "Enable Autosave" ('-S on') and "Enable Automatic Offline" ('-o on') commands to the disk, and produce these types of harmless syslog error messages instead: "3w-xxxx: tw_ioctl(): Passthru size (123392) too big". This can be fixed by upgrading to version 1.02.00.037 or later of the 3w-xxxx driver, or by applying a patch to older versions. See http://smartmontools.sourceforge.net/ for instructions. Alternatively, use the character device /dev/twe0-15 interface.

The selective self-test functions ('-t select,A-B') are only supported using the character device interfaces /dev/twl0-15, /dev/twa0-15 and /dev/twe0-15. The necessary WRITE LOG commands cannot be passed through the SCSI interface.

Areca SATA RAID controllers are currently supported under Linux only. To look at SATA disks behind Areca RAID controllers, use syntax such as:

    smartctl -a -d areca,2 /dev/sg2
    smartctl -a -d areca,3 /dev/sg3

where in the argument areca,N, the integer N is the disk number (Areca 'port') within the Areca SATA RAID controller. The allowed values of N are from 1 to 24 inclusive. The first line above addresses the second disk on the first Areca RAID controller. The second line addresses the third disk on the second Areca RAID controller. To help identify the correct device, use the command:

    cat /proc/scsi/sg/device_hdr /proc/scsi/sg/devices

to show the SCSI generic devices (one per line, starting with /dev/sg0). The correct SCSI generic devices to address for smartmontools are the ones with the type field equal to 3. If the incorrect device is addressed, please read the warning/error messages carefully. They should provide hints about what devices to use.

Important: the Areca controller must have firmware version 1.46 or later. Lower-numbered firmware versions will give (harmless) SCSI error messages and no SMART information.

To look at (S)ATA disks behind HighPoint RocketRAID controllers, use syntax such as:

    smartctl -a -d hpt,1/3 /dev/sda    (under Linux)
    smartctl -a -d hpt,1/2/3 /dev/sda    (under Linux)
    smartctl -a -d hpt,1/3 /dev/hptrr    (under FreeBSD)
    smartctl -a -d hpt,1/2/3 /dev/hptrr    (under FreeBSD)

where in the argument hpt,L/M or hpt,L/M/N, the integer L is the controller id, the integer M is the channel number, and the integer N is the PMPort number if it is available. The allowed values of L are from 1 to 4 inclusive, M from 1 to 8 inclusive, and N from 1 to 5 if a PMPort is available. Note that under Linux the /dev/sda-z form should be the device node which stands for the disks derived from the HighPoint RocketRAID controllers, while under FreeBSD it is the character device which the driver registered (eg, /dev/hptrr, /dev/hptmv6). These values are also limited by the model of the HighPoint RocketRAID controller.

HighPoint RocketRAID controllers are currently ONLY supported under Linux and FreeBSD.

cciss controllers are currently ONLY supported under Linux and FreeBSD.
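
In practice, a quick way to see which type smartctl would guess on its own is the 'test' device type described above; a sketch (the device name is just an example):

    # print the guessed TYPE, open the device, print the (possibly changed) TYPE, then exit
    smartctl -a -d test /dev/sda

    # if auto-detection fails on an RDM disk, 'sat' is a common type to try for a SATA drive behind a SCSI layer
    smartctl -a -d sat /dev/sda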

 


At first you might just want to try the -D option, which means no -d option at all, to see whether the system and smartctl are smart enough to figure it out on their own. I don't expect older unRAID versions like 4.6 to manage that, so here is an excerpt from the smartctl man page listing the options it supports with '-d'.

 

Thanks. I tried looking at the output of preclear_disk.sh -? but I'm connecting through VNC and then vSphere, and I couldn't get Page Up to work :)

 

It didn't work, though. I get the error "smartctl is unable to run on /dev/sda with the  option." (yep, that's two blank spaces around the missing option). It also says "Device is: Not in smartctl database" and "ATA Standard is: Exact ATA specification draft version not indicated". Not sure if this means anything.

 


At first you might just want to try the -D option, which means no -d option at all, to see whether the system and smartctl are smart enough to figure it out on their own. I don't expect older unRAID versions like 4.6 to manage that, so here is an excerpt from the smartctl man page listing the options it supports with '-d'.

 

Thanks. I tried looking at the output of preclear_disk.sh -? but I'm connecting through VNC and then vSphere, and I couldn't get Page Up to work :)

 

It didn't work, though. I get the error "smartctl is unable to run on /dev/sda with the  option." (yep, that's two blank spaces around the missing option). It also says "Device is: Not in smartctl database" and "ATA Standard is: Exact ATA specification draft version not indicated". Not sure if this means anything.

 

I don't think you are using the most recent version of preclear_disk.sh.  I think I fixed that bug.  (It was a mis-spelled variable name)  The newest version I've posted is version 1.6 as of this evening.

I don't think you are using the most recent version of preclear_disk.sh.  I think I fixed that bug.  (It was a mis-spelled variable name)  The newest version I've posted is version 1.6 as of this evening.

 

Unless there are two versions of 1.6, I am using the latest one (downloaded a couple days ago).

 


I don't think you are using the most recent version of preclear_disk.sh.  I think I fixed that bug.  (It was a mis-spelled variable name)  The newest version I've posted is version 1.6 as of this evening.

 

Unless there are two versions of 1.6, I am using the latest one (downloaded a couple days ago).

 

Like I said, I "thought" I fixed that bug where the error message showed a "blank" option passed to smartctl.  I could easily be proven wrong.  I'll bet the error message just needs fixing now.

 

You are running the most recent version.


Hi All,

 

I am new to the forum, but I have been following this thread for a while now and I have been building up my ESXi box to run unRAID and other virtual machines.

 

Here is the hardware that I am running.

Gigabyte EP43T-UD3L motherboard

Intel E5500 2.8GHz 775 CPU (ZALMAN CNPS9500 AT 2 Ball CPU Cooling Fan/Heatsink)

OCZ 4GB DDR3 1600 PC12800 RAM

Intel PCI-express 10/100/1000 NIC

Rosewill RC-209-EX 4 port SATA controller

Old Nvidia video card (just for the console screen)

Corsair 650-Watt TX Series 80 power supply

In a cheap HEC Blitz Black Steel Edition ATX Mid Tower case (four 5.25" bays, 1 for the DVD drive and the other 3 for an ICY Dock 5-in-3 removable drive bay (not yet purchased))

I run two older Seagate 160GB SATA drives connected to the Rosewill controller for the ESXi datastores, and have two Hitachi Deskstar 7K1000.C 1TB 7200 RPM 32MB drives that are currently mapped to the unRAID VM.

 

The MB has an on-board Southbridge controller with six 3Gb/s SATA ports and a Realtek 8111C LAN controller, along with 8 rear USB 2.0 ports, PS/2 (keyboard and mouse) and on-board audio, which I turned off.

 

I had an issue with installing ESXi 4.1 U1 on the machine (which you can read about here: http://communities.vmware.com/message/1701491#1701491), but I was finally able to get it all configured correctly and boot up the unraid_4.6-vm_1.0.6.iso without an issue once I had 4.1 installed and the VM set up according to the readme.txt.

 

I am currently running unRAID (free version) on the ESXi box along with my Windows DC and a media server to feed content from the unRAID drive space.

 

For those interested, this link (http://communities.vmware.com/message/1701491#1701491) also links to a Linux script that can compile non-native drivers (like the Realtek 8111C) into an ESXi bootable USB or DVD installation image.

 

At the time of this writing I am copying 50GB of files at 60-80MB/s with no errors. I have confirmed that the drives do spin down when idle and spin back up when tasked.

 

So I am very happy. The only issue I have is that I upgraded ESXi to 4.1 Update 1 and the unRAID server is generating a notice that the VMware Tools are out of date. Is it possible to get a 1.7 version of the ISO that incorporates the updated tools?

 

Thanks,

 

Rod

 

 

I just read that 4.7 is now available, so can we get a 4.7 ISO with VMWare Tools for ESXi 4.1 Update 1?  ;D

 


I just read that 4.7 is now available, so can we get a 4.7 ISO with VMWare Tools for ESXi 4.1 Update 1?   ;D

+1!

 

I moved my UnRAID into a VM on ESXi a while ago, before SK provided such excellent insight as to why you get SCSI errors etc. I've yet to give his custom ISOs a go, and I'd like to wait until there is one which includes UnRAID 4.7 with VMware Tools for ESXi 4.1 U1.

 

SK - thanks for all your hard work on this.  I've added to your other post re: getting better vmware support in the official release.

 

Cheers,

Bryan


Anyone have any idea why I can't get passthrough working for my unRAID VM (for my 2 LSI cards)? I had to disable CIM within ESXi due to some kind of incompatibility with IPMI (that was my first hold-up, which was causing many error messages and read-only complaints). After this I was able to set my 2 controllers as passthrough devices; however, the option to add a PCI device to my unRAID VM is greyed out/disabled. Anyone have any idea why? I double-checked that VT-d was enabled, and yes it was (I doubt ESXi would even allow me to enable passthrough on the cards without it). What I was going to try when I got home was a BIOS update, but I'm always skeptical about doing that unless I absolutely have to.

 

Hardware:

ESXi 4.1 U1

Norco 4220

Corsair 850HX

Supermicro X8SIA-F

Intel Xeon X3440

2x4GB Crucial PC3-8500 ECC

2x IBM ServeRAID BR10i LSI SAS3082E-R PCIe SAS RAID Controllers

2x SanDisk Cruzer Micro 4 GB USB 2.0 Flash Drive (1 for unraid, 1 for esxi)

3x Test Drives:

  • 1x Samsung 80GB (2.5 in) on onboard controller - currently being used as primary datastore
  • 1x 500GB WD Green on 1st LSI controller
  • 1x 2TB WD Green (ears) on 2nd LSI controller

 

Edit:

OK, I see where the problem is: it's not actually enabling passthrough for the device. It stays stuck at the same status between reboots (screenshot attached).


 

edit2:

Woot, ok I downgraded to ESXi 4.1 (not update 1) and now the controller is there.... soooo beware anyone using a similar setup to mine with a BR10i controller.


I'm just about ready to get started building my new ESXi box, just waiting for the Supermicro X8DTI-F mobo to get here (hopefully Monday). I've re-read the whole thread, but it's still unclear to me...

 

I see Zeron and Drealit posting that they got direct hardware passthrough / VMDirectPath up and running on their SATA adapters with ESXi 4.1 (and 4.1 U1 in some instances), but is there confirmation that temperature display and spindown are working properly?

 

 

I got confused because the other thread about BR10i support seemed to indicate that this was still an issue...


I have a BR10i working through VMDirectPath on an Asus 890FX board. Temps work in the latest 5.0 betas (at least from the main unRAID menu -- not unmenu) but spindown is still an issue. I have spinup and spindown disabled on the drives attached to the BR10i although it is possible to manually spin down the drives with hdparm.
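
For reference, manually spinning a drive down with hdparm generally looks like this (a sketch; the device name is an example, and this assumes hdparm is available on the unRAID console):

    # put the drive into standby (spin it down) immediately
    hdparm -y /dev/sdb

    # check the drive's current power state (active/idle vs. standby)
    hdparm -C /dev/sdb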


I have a BR10i working through VMDirectPath on an Asus 890FX board. Temps work in the latest 5.0 betas (at least from the main unRAID menu -- not unmenu) but spindown is still an issue. I have spinup and spindown disabled on the drives attached to the BR10i although it is possible to manually spin down the drives with hdparm.

 

Good to know, will probably run some tests late next week!


I wanted to provide some updates/feedback on my progress.

 

So, as several people predicted, my hope/experiment with the AOC-SASLP-MV8 was an epic failure. The problem is definitely with the card and ESXi pass-through itself, not with UnRAID specifically. I managed to get my server to PSOD when I attempted to pass the card through to a Windows VM and assign a driver to it. It was an IPMI error, and I tried messing with IPMI settings to avoid the problem, but had no success. ESXi 4.1 has no pass-through love for that card.

 

However, I have had some success with my PIKE LSI 1068E card.

I've just started messing around with it today, but so far the results are promising.

It was recognized by ESXi without issue, and I set it to pass-through mode.

I first built a UnRAID 5b6 VM using plop, and the device was immediately recognized with temperature feedback, etc.

I then added the device in pass-through mode to my 4.6 production UnRAID VM. The card and device were recognized and I'm now in the process of running the preclear script on the drive. I got an initial warning about smartctl not working with the -d option, and I don't see any temperature information being returned. I don't know whether spin-down will work under 4.6, but it looks like it probably will under 5.

 

This motherboard is turning out to be a decent pick for an ESXi UnRAID build....

6 native SATA ports that can be used in raw device mapping mode + 8 LSI 1068 SAS ports, all on the motherboard, with all of the PCIe slots still open/available.

 

I'll provide some additional updates once the preclear is completed and I have the drive integrated into the array

 

 

Update1:

Preclear completed successfully.

Drive was added to array and data is now being loaded to the drive.

Thus far, no issues.

 


Okay, I will admit that I didn't read the ENTIRE thread. But I did print it (68 pages) and got through about half of it.

 

Please bear with me; I don't know Linux, don't know UnRaid, and certainly don't know ESXi. I'm just a peed-off WHS user looking at a very viable alternative.

 

Credit where it's due, duz over on the sab chan infected my mind with this.

 

Enough rambling; now to the meat of the question...

 

My purpose here is to create multiple VMs that are on some sort of protected storage.

So I might have:

0. UnRaid

1. Media Server (sorry but say it's WHS1 or even Vail, w. a DE add-in).

2. General purpose Win7 machine.

 

 

So, I would pass either my disks or my controller through to UnRaid via RDM in ESXi; UnRaid would be booted first, before the other VMs. The VHDs for the other VMs would be placed on the UnRaid-protected storage. And now here's where I'm fuzzy: from what I understand, UR doesn't exactly do drive pooling, right? A post somewhere on the VMware forums indicates that I'd have to take smaller chunks of the UR storage (less than 2TB) and "marry" them. Is this like doing an Extend Partition in Windows software RAID?

 

Quote: "Yes, it worked.  Broke it up into three equal chunks of 1551GB per virtual disk, created three datastores, then married em all up." (http://communities.vmware.com/message/1575498#1575498).

 

So is that what I should be aiming to do with the UR shares?

 

The reason I was looking at WHS is to get my local machine backups. I have two laptops and an HTPC that I want backed up. Further to that, I have a 3-year Carbonite service, so I would like my most essential data backed up out of the house. I don't think Carbonite has a Linux client yet. And, as the OP said, if WHS had something other than dumb mirrored RAID, it would be a different discussion.

 

Also, I was fortunate enough to get fairly powerful hardware for very good prices (the mobo is an Asus P6X58-DE, with an i7 960 and 6 to 12GB of RAM)... so this needs to be more than just a UR or plain ole NAS box.

 

Any help to a noob is appreciated. Sorry for the long first post.


You are on the right track, but a little confused.

 

I'm not sure what you are reading about "marrying" hard drive chunks, though.

 

You wouldn't want to run the VHD for the other VMs off of the Unraid array since performance would suck.

 

You will have 1 or more drives for the ESXi datastore...that is where all of the Virtual Disks will be stored.

You will then pass through multiple physical drives or an entire controller over to the UnRaid VM.

The UnRaid VM will manage those drives.

 

For backups, I'd recommend throwing away Carbonite (I started with that one too) and moving over to Crashplan.  Run Crashplan on your UnRAID VM and select specific shares for online/cloud backup.

All of my data on my individual PCs/VMs backs up to the UnRAID server that is running CrashPlan Server, and the data stored in cloud-protected locations on the UnRAID server is pushed to the cloud.

 

So, in my individual VMs, I configure the persistent storage locations to be shared space in the array and then the local VMDK is just running on the ESXi datastore.  In my case, I'm working towards building an ESXi datastore that is RAID protected, but it really isn't *that* necessary, other than for avoiding down-time.
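
As a concrete, hypothetical illustration of that layout: a Linux VM whose OS disk (VMDK) lives on the ESXi datastore might mount its bulk storage from the unRAID array over NFS, e.g. in /etc/fstab (the server name "tower" and the share name are placeholders, not from this thread):

    # unRAID user share mounted for persistent data; the VM's own VMDK stays on the ESXi datastore
    tower:/mnt/user/vmdata   /mnt/vmdata   nfs   defaults   0 0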


Hi,

 

thanks for your reply - breaking your reply up for clarification...

 

You are on the right track, but a little confused.

Doesn't begin to describe what I am :(

 

I'm not sure what you are reading about "marrying" hard drive chunks, though.

I think the guy in that link had a large datastore from a RAID controller (much like I would like UR to be) and he just smashed them together within ESXi. But I am new to this arena, so I could be way off the mark.

 

You wouldn't want to run the VHD for the other VMs off of the Unraid array since performance would suck.

 

I was hoping the 960 would help it suck less, or would at least be better than my current 100Mbit connection on the Athlon XP 2500+.

 

You will have 1 or more drives for the ESXi datastore...that is where all of the Virtual Disks will be stored.

You will then pass through multiple physical drives or an entire controller over to the UnRaid VM.

The UnRaid VM will manage those drives.

 

I wanted to make all of the drives UnRaid-controlled. Otherwise I foresee a situation like this: I have a 1TB drive (at $50, why buy anything smaller) and 3 VMs; suppose they take 300GB total, I'd have 700GB unused that would otherwise be put to good use if it were part of the UR array.

 

 

For backups, I'd recommend throwing away Carbonite (I started with that one too) and moving over to Crashplan.  Run Crashplan on your UnRAID VM and select specific shares for online/cloud backup.

All of my data on my individual PCs/VMs backs up to the UnRAID server that is running CrashPlan Server, and the data stored in cloud-protected locations on the UnRAID server is pushed to the cloud.

 

Carbonite, I'm prepaid for 3 years (got a sweet deal, I think it was less than $40 for all three years).

 

So, in my individual VMs, I configure the persistent storage locations to be shared space in the array and then the local VMDK is just running on the ESXi datastore.  In my case, I'm working towards building an ESXi datastore that is RAID protected, but it really isn't *that* necessary, other than for avoiding down-time.

 

I can technically do that, since my mobo has RAID 0, 1, 5, and 10. But same scenario, I don't want any "unused" bits lying around.

 

I guess at the end of the day what I am looking for is something like a Drobo-type array that would protect all my VMs. I suppose I could do RAID 5, but I'd be limited to the 4 ports on the board, would still run into expansion issues, and would be locked into this particular config. I was hoping UR coupled with ESXi would allow me to do this. Almost like replacing the HW RAID on the mobo with a SW RAID powered by UR.


What is providing you with onboard RAID? I made a nasty discovery last week when I found out that ESXi won't acknowledge software RAID arrays. In my case it completely ignored the array I built on my Supermicro X8SIA-F and could only see the individual drives that made up my array. This torpedoed my original datastore plans (for my other VMs and a little storage) and I now have a new hardware RAID controller on the way (Adaptec 3405).

 

You shouldn't need to "mash" datastores together... simply follow the guide for either passing through your individual disks or passing the actual controller itself through (my preferred method).
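
For anyone new to the per-disk approach, a raw device mapping is typically created from the ESXi console with vmkfstools; a rough sketch (the disk identifier, datastore and file names are placeholders, not taken from this thread):

    # list the physical disks ESXi can see, to find the device identifier
    ls -l /vmfs/devices/disks/

    # create a physical-compatibility RDM pointer file for that disk on an existing datastore
    vmkfstools -z /vmfs/devices/disks/<disk-identifier> /vmfs/volumes/datastore1/unRAID/disk1-rdm.vmdk

The resulting .vmdk is then added to the unRAID VM as an existing disk; passing the whole controller through with VMDirectPath skips this step entirely.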


 

I'm not sure what you are reading about "marrying" hard drive chunks, though.

I think the guy in that link had a large datastore from a RAID controller (much like I would like UR to be) and he just smashed them together within ESXi. But I am new to this arena, so I could be way off the mark.

 

Sounds like he created multiple 1.9TB datastores and combined them within ESXi to create one large datastore.

 

You wouldn't want to run the VHD for the other VMs off of the Unraid array since performance would suck.

 

I was hoping the 960 would help it suck less, or would at least be better than my current 100Mbit connection on the Athlon XP 2500+.

 

unRAID doesn't use much processing power at all. ESXi would certainly benefit from an i7, but you wouldn't see much of a performance increase in unRAID.

 

You will have 1 or more drives for the ESXi datastore...that is where all of the Virtual Disks will be stored.

You will then pass through multiple physical drives or an entire controller over to the UnRaid VM.

The UnRaid VM will manage those drives.

 

I wanted to make all of the drives UnRaid-controlled. Otherwise I foresee a situation like this: I have a 1TB drive (at $50, why buy anything smaller) and 3 VMs; suppose they take 300GB total, I'd have 700GB unused that would otherwise be put to good use if it were part of the UR array.

 

A virtual disk, at least from my experience, doesn't work too well in unRAID. I really need to experiment more with this though. Basically, you lose temps, SMART data, spin-down control, etc.

 

So, in my individual VMs, I configure the persistent storage locations to be shared space in the array and then the local VMDK is just running on the ESXi datastore.  In my case, I'm working towards building an ESXi datastore that is RAID protected, but it really isn't *that* necessary, other than for avoiding down-time.

 

I can technically do that, since my mobo has RAID 0, 1, 5, and 10. But same scenario, I don't want any "unused" bits lying around.

 

I guess at the end of the day what I am looking for is something like a Drobo-type array that would protect all my VMs. I suppose I could do RAID 5, but I'd be limited to the 4 ports on the board, would still run into expansion issues, and would be locked into this particular config. I was hoping UR coupled with ESXi would allow me to do this. Almost like replacing the HW RAID on the mobo with a SW RAID powered by UR.

 

Be aware that ESXi will not utilize a software RAID. You can only use individual disks. Some notable exceptions to that are pseudo-hardware RAID adapters like the common LSI 1068E HBAs.


By the way, if you're concerned about having all your data appear in one location (for instance \\Tower\Media\Movies) you can easily accomplish this with the user shares inside unRaid. I can't really think of a good reason why anyone would be mashing datastores together hahah.


By the way, if you're concerned about having all your data appear in one location (for instance \\Tower\Media\Movies) you can easily accomplish this with the user shares inside unRaid. I can't really think of a good reason why anyone would be mashing datastores together hahah.

 

I agree. I'd separate the datastore from the unRAID disks. My datastore is a RAID-10 that unRAID never sees. unRAID is simply a VM with other controllers passed through to it.


Ouch, shot down. Thanks dyrewolfe and drealit...

 

I wanted one large UnRaid-protected datastore to drop my VM HDs on. So if I'm getting what you folks are all saying: DON'T do that. Put the OS of the other VMs on a datastore that UR doesn't see, and then expose the UR shares to those machines for storage.

 

I got excited when I was referred to this thread (thanks Frost), thinking that this would be viable.

 

Oh when, oh when, will UR-type RAID be available on HW storage controllers!

