brian89gp

Posts posted by brian89gp

  1. 5.1 is the last version for which the .NET client will be released; going forward it will be web only.

     

    Now if only they can get external MSSQL support built into their vCenter Server appliance so it can support more than 50 VMs.

  2. After a certain number of files, a directory structure alone becomes somewhat unusable for organization; a long list is only good if you have the patience to browse through it.  This is where something that indexes it, such as Plex, comes into play, since it can aggregate multiple locations into one presentation layer.

  3. http://ceph.com/

     

    Anybody by chance used Ceph?  Although new, it seems to have promise both in the enterprise storage market and for massive personal storage.  For personal use it is a shame that there is only mirroring instead of a more space-efficient RAID scheme, but on the other hand the things at home that take up huge amounts of space are typically replaceable data.  For enterprise you could potentially build a massive NAS that is not reliant on a single filesystem and its shortcomings (because the combined filesystem is a presentation from a metadata server).  Try putting a billion or more files in a single SMB/CIFS share on a single filesystem.
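
    For anyone curious, here is a rough sketch of what the replication-only setup looks like in practice.  It assumes a running cluster with the ceph/rados CLI tools on the path, the pool name is made up for the example, and Python is only used here to drive the CLI.

        import subprocess

        def sh(*args):
            # Run a ceph/rados CLI command and return its output.
            return subprocess.run(args, check=True, capture_output=True, text=True).stdout

        # Create a pool with 128 placement groups (the pool name is hypothetical).
        sh("ceph", "osd", "pool", "create", "media", "128")
        # Only replication is available, so usable space is 1/size of raw; size=2 is a mirror.
        sh("ceph", "osd", "pool", "set", "media", "size", "2")
        # Store and list objects through the rados CLI front end.
        sh("rados", "-p", "media", "put", "movie.mkv", "/tmp/movie.mkv")
        print(sh("rados", "-p", "media", "ls"))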

  4. It is a software-RAID SAN; the LSI controllers are just SAS HBAs.

     

    Perhaps it's just the way unRAID works, e.g. the parity check uses a lot of bandwidth across all drives at the same time, which is not typical of normal RAID systems except during a rebuild?

  5. Just a curiosity I have had, based on the preference many on here show when using the common 8-port SAS controllers for one drive per port (which I understand on a cost basis, but not a performance basis), or, when using an expander, for keeping the cache/parity/whatever off the expander and directly on one of the eight ports.  My question is: is there a noticeable difference?

     

    For reference, I worked on a SAN that uses LSI 9201-16e SAS cards (16x 6Gb ports, PCIe 2.0 x8): more ports, but the same PCIe bus speed/bandwidth as a typical IBM M1015.  It uses full loops for the SAS connections, so the four ports on the card make two loops.  Each loop handles up to 96 drives, for up to 192 drives per card (even 15k enterprise drives).  Interestingly enough, it is almost impossible even with that many drives to push the limits of that card, and that is with traditional RAID done in software/CPU, where every read/write hits multiple drives at once rather than the one-plus-parity of unRAID.

     

    Maybe I am missing something obvious.  I saw the cards, remembered the discussion here, and thought hmmm.
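
    To put rough numbers on it, here is a back-of-the-envelope sketch (the per-drive figures are assumptions, not measurements) comparing aggregate random IO against the PCIe 2.0 x8 ceiling, and showing why a sequential full-stripe workload like an unRAID parity check is a very different animal:

        # Back-of-the-envelope: many 15k drives doing random IO vs. PCIe 2.0 x8.
        drives = 192
        iops_per_drive = 200        # assumed for a 15k SAS drive on random IO
        io_size_kb = 8              # assumed small-block workload
        pcie2_x8_mbps = 4000        # ~4 GB/s usable on a PCIe 2.0 x8 slot

        random_mbps = drives * iops_per_drive * io_size_kb / 1024
        print(f"Random IO aggregate: ~{random_mbps:.0f} MB/s vs ~{pcie2_x8_mbps} MB/s on the bus")

        # Sequential streaming from every drive at once (parity-check style) is
        # another story entirely and would swamp the slot many times over.
        seq_mbps = drives * 150     # assumed ~150 MB/s sequential per drive
        print(f"Sequential aggregate: ~{seq_mbps / 1024:.1f} GB/s")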

  6. Unless you use the distributed vSwitch (and I highly doubt you are), ESX does not support a LAG/LACP/EtherChannel or whatever you want to call it.  Set them up as two individual ports on the switch, no LAG, and ESX will load balance between them.  By load balance I mean that VM guest A will be put on port 1 and VM guest B will be put on port 2.
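
    A conceptual illustration of that default policy (this is not VMware code, just a sketch of the "route based on originating virtual port ID" idea): each guest's virtual port is pinned to one uplink, so no single guest ever uses more than one NIC's worth of bandwidth.

        # Illustration only: how port-ID based teaming spreads guests across uplinks.
        uplinks = ["vmnic0", "vmnic1"]

        def uplink_for(virtual_port_id: int) -> str:
            # Each virtual port maps to exactly one physical uplink.
            return uplinks[virtual_port_id % len(uplinks)]

        for port_id, guest in enumerate(["guestA", "guestB", "guestC"]):
            print(f"{guest} -> {uplink_for(port_id)}")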

     

     

  7. Quote from another user:

     

    In the end the best solution would be to create and start / stop those machines using the vSphere CLI - which I already tested.

    The only problem with the free license key for ESXi is that you cannot start / stop them anymore remotely from the command line and so we need a full license if we decide to go with a direct CLI remote start / stop:

    "Fault detail: RestrictedVersionFault"

     

    Figures.  They dangle that damn carrot.
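
    One way around it that I believe works (hedging: I have not tried it on every build) is to skip the remote vSphere CLI entirely and run vim-cmd in the host's own shell over SSH, since the free-license restriction is on the remote API rather than the local tools.  A minimal Python sketch, with a hypothetical hostname and VM ID:

        import subprocess

        HOST = "root@esxi.local"   # hypothetical host; assumes SSH is enabled on it

        def vim_cmd(*args):
            # vim-cmd runs in the host shell, so the remote API's
            # "RestrictedVersionFault" does not come into play here.
            cmd = ["ssh", HOST, "vim-cmd", *args]
            return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

        print(vim_cmd("vmsvc/getallvms"))        # list VM IDs
        vim_cmd("vmsvc/power.on", "42")          # power on VM ID 42 (example ID)
        vim_cmd("vmsvc/power.shutdown", "42")    # graceful shutdown, needs VMware Tools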

  8. I must admit that the whole concept of snapshots is an absolute dream from a management point of view, so even with only one VM there is a benefit... although I now have 3 (plain unRAID, a mediabeast for all the NNTP/torrent stuff, and a test unRAID I use for preclears and such).

     

    Indeed it is.  Just be sure not to forget about them: the snapshot locks the original disk and then starts a delta disk for each snapshot.  On a server with a high rate of change these deltas can grow quite large, and if you run out of space on the volume ESXi will pause all disk IO until you free some up (most guest OSes will eventually crash if left in that state for any length of time).  The same thing happens if you over-provision many thin-provisioned disks and they use up all the free space.  Having a 1-2GB 'dummy' file on your host volumes that you can delete in emergencies is very useful, as every operation other than a hard power-off requires free disk space.

     

    So, keep an eye on free disk space, and if a guest has a disk that does not need to be snapshotted, make it an independent/persistent disk so you don't create snapshots of it in the first place.

     

    One last tip: when committing large snapshots, the operation will sometimes fail, or report success but not get rid of the delta VMDK file, which also no longer shows up in Snapshot Manager.  What you have is an orphaned snapshot.  If this happens, create another snapshot and then do a "delete all"; it will go through and commit all delta disks even if they are not showing up as snapshots in the manager.
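
    That dance can also be scripted from the host shell with vim-cmd.  A rough sketch, again with a hypothetical host and an example VM ID:

        import subprocess

        HOST = "root@esxi.local"   # hypothetical host; assumes SSH is enabled on ESXi
        VMID = "42"                # example ID, taken from `vim-cmd vmsvc/getallvms`

        def vim_cmd(*args):
            return subprocess.run(["ssh", HOST, "vim-cmd", *args], check=True,
                                  capture_output=True, text=True).stdout

        # Take a throwaway snapshot: name, description, include memory, quiesce.
        vim_cmd("vmsvc/snapshot.create", VMID, "temp", "cleanup", "0", "0")
        # "Delete all" commits every delta vmdk, including orphaned ones that no
        # longer show up in Snapshot Manager.
        vim_cmd("vmsvc/snapshot.removeall", VMID)
        print(vim_cmd("vmsvc/snapshot.get", VMID))   # verify nothing is left behind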

  9. Definitely no way to put ESXi to sleep - it was never designed with this in mind, and I would imagine that, even if you could, putting a hypervisor to sleep would have some serious repercussions when you wake it up.

     

    There is a standby mode built into it: if you are running VirtualCenter there is an option to manually put a host into standby and bring it out of standby, either via WOL or iLO.  They tie it into the DPM (Distributed Power Management) feature in clusters to shut down and boot up hosts on demand, matching resources to demand to save power.

     

    You could probably find some way to trigger it through the remote CLI, since I would assume they built it into the CLI API.  You would also need to suspend or shut down the guests yourself by script, since that is normally a task VirtualCenter handles before host standby (its usual behavior being to VMotion all guests off).

     

    I've never used it so have no idea how similar it is to normal sleep mode.

  10. The "unknown" is my ASMedia SATA controller; there are 2 on this MB.

     

    And your boot time is quick  :D compared to mine

     

    //Peter

     

    Couple things:

     

    1. VMFS 5 is formatted with a 1MB block size; this is normal.

    2. Decrease your vCPU count to 2 or 3.  Your sig lists a 4-core CPU, so you are likely to incur high CPU wait times just from the hypervisor process running, let alone any other VM.

    3. Remove your floppy and CD-ROM unless you have a good reason to use them.  Disable them in the BIOS too.

    4. Try removing your APC USB device.

     

    It sounds vaguely like a resource scheduling issue, so try the CPU decrease first.  As a general rule, a VM guest will ONLY run faster with more CPUs if it actually needs them; in all other cases adding more CPUs than needed will slow it down.  Even VMware's official stance is to use only 1 vCPU if at all possible.

  11. The virtual drives are container files. It's not about IDE or SCSI, it's about a layer of virtualization which slows it all down.

    It's good enough for testing and development, but not good enough for anything more useful for unRAID.

     

    I disagree on this point.  It might be true on home-user-level hardware, but I do not think the large performance difference is because of the virtualization layer.  The IDE and BusLogic controllers are both slower than the LSI SAS controller, and all are slower than a non-virtualized card, but most disk IO problems can almost always be traced to the back-end storage speed.  It takes quite a lot of disk IO to overwhelm the LSI SAS virtual controller, less so but still a lot for the IDE and BusLogic.  PVSCSI is an awesome adapter and probably the way things are going, but it takes many thousands of IOPS for it to pull ahead of the LSI SAS controller.

     

    I have had no problems pulling 6Gbps and 4k IOPS (30 robocopy jobs running at the same time against disks holding 100 million sub-1KB files) through both LSI SAS virtualized drives and RDMs on an LSI SAS virtualized controller.  Granted, the PVSCSI controller might have been even faster, but the virtualized controllers are not that bad.

  12. .... but DO NOT try to flash it with LSI firmware.

     

    Care to expand on that?

     

    It won't work; the LSI flash utility does not recognize it as an LSI SAS 2008 card.  Also, if you happen to delete the SAS ID of the card you will need to figure out how to add it back using the Dell flash utilities, because the LSI ones will not do it.  It's all recoverable with the Dell flash utilities, but highly annoying if you delete both your SAS ID and firmware and have to start from there.

     

    On a side note, the Dell flash utilities are VERY nice.  They go through and enumerate all the cards, then flash them sequentially.

     

     

     

    edit:  Reading johnodon's post I might try it again.

  13. To resurrect a dead thread, this card seems to work.  ESXi loads the mpt2sas driver, so I would assume that for all practical purposes it is the same as a generic LSI 2008, given that it is on the HCL and loads the generic LSI driver.

     

    It is an LSI 2008 card and it POSTs under the LSI boot screen, but DO NOT try to flash it with LSI firmware.

  14. I might add: attach the VMDK as Independent - Non-persistent.  That way, if you somehow corrupt/delete/format the VMDK from inside unRAID, all you have to do is power cycle the guest and all is well again.
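
    For reference, here is a small sketch of roughly what that GUI setting ends up as in the .vmx file.  The datastore path and the scsi0:1 disk node are hypothetical, and I believe these are the usual keys, but check your own vmx and only edit it with the guest powered off.

        # Hypothetical paths/keys; "Independent - Nonpersistent" in the GUI
        # writes a disk mode entry like this into the guest's .vmx file.
        VMX = "/vmfs/volumes/datastore1/unraid/unraid.vmx"   # hypothetical path
        KEY, VALUE = "scsi0:1.mode", "independent-nonpersistent"

        with open(VMX) as f:
            # Drop any existing mode line for that disk node, then append ours.
            lines = [l for l in f if not l.startswith(KEY)]
        lines.append(f'{KEY} = "{VALUE}"\n')

        with open(VMX, "w") as f:
            f.writelines(lines)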

     

    I've also deleted the bzroot/bzimage off of my thumb drive and the config directory off of the VMDK I made, so it's impossible to get them confused with each other.

  15. Has anybody had any experience with this product and/or with setting up and running a Subversion server?  I am looking to set this up for my girlfriend, since she is a habitual save-button hitter and has overwritten many a Photoshop PSD file.

     

    I have never used Subversion before, so I know nothing about it.  My fear is that I set it up, Subversion does its magic and stores the files somehow on the back end, the Subversion server is lost, and I am left with a bunch of useless files.
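
    From what I have read, the repository is just a directory on disk, every working copy keeps a plain copy of the latest files, and `svnadmin dump` gives a portable backup, so losing the server should not leave the files useless.  The basic flow looks roughly like this (the paths and file names are made up for the example; Python is only driving the stock svn tools):

        import subprocess

        def run(*args):
            subprocess.run(args, check=True)

        # One-time setup: create the repository and import the existing files.
        run("svnadmin", "create", "/srv/svn/photos")                     # hypothetical path
        run("svn", "import", "/home/gf/photos", "file:///srv/svn/photos",
            "-m", "initial import")
        run("svn", "checkout", "file:///srv/svn/photos", "/home/gf/photos-wc")

        # Day to day: edit files in the working copy, then commit a new revision.
        run("svn", "commit", "/home/gf/photos-wc", "-m", "edited header.psd")

        # Recover an overwritten file by rolling the working copy back a revision,
        # or print the old version with `svn cat -r N header.psd`.
        run("svn", "update", "-r", "5", "/home/gf/photos-wc/header.psd")

        # Dump the whole history (redirect to a file for a portable backup).
        run("svnadmin", "dump", "/srv/svn/photos")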

  16. As a slight aside - and out of interest - does anyone know the consequences (if any!) of mixing and matching firmware revisions (i.e P14, P15 etc) across different cards in the same chassis?

     

    The correct answer would be to ensure they're all the same - but given the pain of flashing if you have a mixture does it matter? Will they all still 'merge' at the BIOS level as a single manageable instance?

     

    I have had a few different versions running at the same time with no ill effect.  It shouldn't really matter, though.

     

    I turn the POST off on the cards so they don't show up during boot, but I did have a Dell external SAS HBA and a flashed M1015 POST together, and they were running very different firmware.