Leaderboard

Popular Content

Showing content with the highest reputation on 11/11/18 in all areas

  1. Use this repo until it is fixed: Repository: binhex/arch-rtorrentvpn:0.9.7-1-11 (it is the version before it broke). Then when it is fixed, you can change it back to: Repository: binhex/arch-rtorrentvpn:latest
    2 points
  2. Support for HandBrake docker container. Application Name: HandBrake Application Site: https://handbrake.fr/ Docker Hub: https://hub.docker.com/r/jlesage/handbrake/ Github: https://github.com/jlesage/docker-handbrake Make sure to look at the complete documentation, available on Github! Post any questions or issues relating to this docker in this thread.
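For reference, a first run of this container from the command line looks roughly like the sketch below; the port and volume mappings are illustrative assumptions, and the Github documentation is the authoritative source:
    docker run -d --name=handbrake \
        -p 5800:5800 \
        -v /mnt/user/appdata/handbrake:/config:rw \
        -v /mnt/user/media:/storage:ro \
        jlesage/handbrake
The web UI is then reachable at http://<host>:5800.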
    1 point
  3. Support for MakeMKV docker container. Application Name: MakeMKV Application Site: http://makemkv.com/ Docker Hub: https://hub.docker.com/r/jlesage/makemkv/ Github: https://github.com/jlesage/docker-makemkv Unlike other containers, this one is based on Alpine Linux, meaning that its size is very small (at least 50% smaller). It also has a very nice, mobile-friendly web UI to access MakeMKV's graphical interface, has an automatic disc ripper, and is actively supported! Make sure to look at the complete documentation, available on Github! Post any questions or issues relating to this docker in this thread.
    1 point
  4. Compiled ESXi Thread. NOTE: Please ask for support in the respective threads; I'm just trying to compile all of the great info that we have into one place. NOTE #2: unRAID is not officially supported on ESXi. Limetech adds support for virtual drivers, but does not actively support it. Known issues: ESXi: usb 1-1: reset high-speed USB device number 2 using ehci-pci. Different ways of setting up unRAID on ESXi: there are two main ways to set up unRAID on ESXi, either via PLOP or a VMDK; please see below for the differences. PCI passthrough vs. raw mapped LUNs. Please let me know of any info to add to this!
    1 point
  5. To upgrade: If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page. If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page. If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg Refer also to @ljm42's excellent 6.4 Update Notes, which are helpful especially if you are upgrading from a pre-6.4 release. BIOS: Especially if using Virtual Machines, we highly recommend keeping your motherboard BIOS up to date. Bugs: If you discover a bug or other issue new to this release, please open a Stable Releases Bug Report.

Version 6.6.5 2018-11-08
Management:
- bug fix: crond was not getting started
- webgui: Black and White theme: display arrow in select field

Version 6.6.4 2018-11-07
Base distro:
- curl: version 7.62.0 (CVE-2018-16839, CVE-2018-16840, CVE-2018-16842)
- dcron: version 4.5 (rev 5) (reversion)
- openssh: version 7.9p1
- openssl: version 1.1.1
- openssl-solibs: version 1.1.1
Linux kernel:
- version 4.18.17
Management:
- bug fix: desktop gui 'home page' not honoring user defined http port number
- update smartmontools drivedb and hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt}
- webgui: Docker: changed LOG file size reading to prevent HTTP time-outs
- webgui: improve usb device detection/parsing for vm usb list
- webgui: Added warning when Flash device is set as public share
- webgui: Black and White theme: simplified input & select boxes
- webgui: VM edit: fixed overflow in cpu pinning
- webgui: GUI: make header scrollable with floating menu
- webgui: Add viewport meta setting for Safari
- webgui: Viewport minimum width is 1280px
- webgui: Hide select arrow across all browsers
- webgui: Improved Docker and VM settings consistency
- webgui: css correction for input & select element
- webgui: Scroll menu: change hardcoded height to dynamic value
- webgui: Docker styling corrections
- webgui: jquery ui style corrections
- webgui: Use viewport for mobile and tablet devices
- webgui: Fixed: warning on docker container page when array is stopped
- webgui: Diagnostics: Add share storage location
- webgui: VM manager: fixed: set memballoon model='none' if any PCI device passthrough configured
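If you want to confirm which release you are actually running from the console, before or after the upgrade, a quick check (a sketch; the exact quoting of the output may vary by release):
    cat /etc/unraid-version    # e.g. version="6.6.5"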
    1 point
  6. I usually use badblocks -v -w -s -b 4096 /dev/sdX to test new and suspect disks thoroughly. The options are simply explained here. Essentially, -v makes it verbose, -s makes it show progress, -b 4096 sets the block size (which seems to be needed to make it work on high-capacity disks), and -w selects a destructive read/write test that fills the entire disk with different patterns in four passes and checks whether it can read them back.
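For example, a minimal sketch (the device name /dev/sdf is hypothetical; double-check it first, since -w destroys all data on the disk):
    lsblk -o NAME,SIZE,MODEL        # confirm which device is the disk under test
    badblocks -v -w -s -b 4096 /dev/sdf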
    1 point
  7. WD Lifeguard / Seagate SeaTools / SMART extended tests. But none of them (or any other 3rd-party equivalent) will write the appropriate signature that indicates the drive has been precleared, so unRAID will still want to clear it, or you will need to preclear it again. Alternatively, set up a trial key for the test rig and preclear the drive there; the preclear will be valid for the main rig.
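For reference, preclearing is done with the community preclear script or plugin; a minimal sketch, assuming preclear_disk.sh is installed and /dev/sdX is the disk to clear (the script wipes the disk and, on success, writes the signature unRAID recognizes):
    preclear_disk.sh /dev/sdX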
    1 point
  8. Examine the syslog during the error and the SMART reports for all involved drives. Sometimes you really can't find a direct cause, and as long as you can't pin down any corrupt files, a correcting check is the best option. If you just had a hard shutdown with active writes, then a correcting check is probably warranted. The parity drive is lowest priority for writes, so it's quite feasible for the data drive to be correct and the parity a step behind. That's why a correcting check is set as the default after a hard shutdown. In almost all cases, it's the correct thing to do if your hardware is healthy. It's the failing-hardware scenario where things get sticky, and hopefully you have some warning that things may be going sideways before stuff goes completely bonkers. Unfortunately, failing hardware likes to cause hard shutdowns, so I prefer NOT to set the array to auto-start, so I have a chance to look things over EACH and EVERY time the machine is fired up.
    1 point
  9. ...and are using legacy boot. The built-in version doesn't work if you UEFI boot. Because I use it a lot, I made a separate bootable USB key with the newer free version, which does UEFI boot and gives more options.
    1 point
  10. Shut down the server, pull the stick and post the file.
    1 point
  11. Now do an ls -al /boot/logs and see if there is a file with 'diagnostics' as part of the file name. It should have today's date on it. If there is, you have a copy there. If not, try the diagnostics command again. (FYI, the up and down arrow keys will replay recent commands on the command line. You can also edit those commands with the left and right arrow keys, the backspace key and the delete key.)
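A typical console session looks like this (the zip file name follows the pattern <servername>-diagnostics-YYYYMMDD-HHMM.zip):
    diagnostics          # writes the zip to /boot/logs on the flash drive
    ls -al /boot/logs    # confirm the dated diagnostics file is there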
    1 point
  12. Thanks, DZMM, I realized yesterday that my current mobo supports VT-d as well. I am writing this from my VM inside Unraid. I am testing things and have indeed realized that 16GB of RAM is not enough: I need 12GB for some modded games I play on my computer, and with 4GB for Unraid, the rest is too little. So indeed 32GB of RAM seems to be needed. 👍 I am quite surprised by the performance I am getting with my old i5. I can't use Plex and play at the same time, obviously, but as long as I do one it works okay, and I didn't have any FPS issues in the games I tested. I have ordered myself a new SSD as a cache drive (Unraid detected that my current SSD is dying) and 2 drives. Once those come and I install an additional 8GB of RAM that is somewhere in my mess, I will be able to play a bit more and have a better idea of the resources I will need. So really, thanks for the idea. Running it all in one PC works great at the moment, and I can test things out without actually spending money (well, I would have bought the hard drives anyway).
    1 point
  13. I do see the point you are making, but statistically it does not matter: any sufficiently large group, when asked "what they want", will tend towards 100% coverage of any set of options. This, if charted, will obviously be a histogram, but a forum is a very blunt tool for this kind of requirements gathering. You will have much better results if you either: ask users for their use cases and do the requirements analysis in reverse from there, or estimate the base requirements internally, offer a trial working set of features, and follow on by capturing the use cases people aren't able to meet with the trial, deciding from there if they are cases you want to support. The second option is what I would do, as it allows you to internally debate the 10%-effort/90%-gain tipping point.
    1 point
  14. OK, so check/replace cables to see if it helps; if the HBA keeps resetting you'll continue to have problems. Edit: You should also update the HBA firmware to the latest, which is P20.00.07.
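From the console, checking and updating an LSI SAS2 HBA looks roughly like this sketch (the firmware file name 2118it.bin is the one shipped in the SAS9211-8i P20 package and will differ for other models; treat it as an example only):
    sas2flash -listall            # list controllers and the firmware each is running
    sas2flash -o -f 2118it.bin    # flash the new IT-mode firmware (example file name)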
    1 point
  15. The HBA is resetting, causing read errors on multiple devices. You need to fix that first, e.g., by reseating it or changing slots, and see if the issue goes away; then run a scrub on the pool to see if it's recoverable.
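Assuming the pool in question is the standard btrfs cache pool mounted at /mnt/cache, a scrub can be started and monitored from the console like this (a sketch):
    btrfs scrub start /mnt/cache
    btrfs scrub status /mnt/cache    # progress, plus corrected/uncorrectable error counts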
    1 point
  16. No storage AIBs at the moment, just the built-in SATA and M.2, with one M.2 slot remaining. As for the expense: to be honest, you can get a whole bunch of good hardware on eBay right now if you're willing to buy second-hand. I'm seeing 1950Xs go for half price on eBay. I think a lot of people buy Threadripper not knowing how to utilize it, and dump them, preferring to go with higher-clocked Intel chips. Which is a waste of an amazing chip, IMO. Once more developers begin to realize the value they have on their hands with these very high core count chips, they will start to get the respect they deserve. Being able to dedicate a portion of your chip to running a Hackintosh VM, or Linux environments for debugging and testing, is incredible. Talk about being able to play with things only the cloud providers have been able to toy with until now.
    1 point
  18. You're right, and I personally apologize to you, and thank you for the great report. We'll get to the bottom of this ASAP.
    1 point
  19. What he said. I’m just going to roll back and wait for the dust to settle.
    1 point
  20. Urgent for ME? You are kidding, right? Yes, remove the priorities so no stupid, new customer selects something that will get you annoyed. Wow. Nice feeling I got from this. First and last time I report something regarding Unraid. And yes! I still feel this is urgent, and not only to me (do I really need to say this?). Maybe this is how people are treated in these forums; I haven't been here for that long. Sorry if I stepped on someone's toes or something. Will absolutely not happen again!
    1 point
  21. Wow. I'm going to step back from this thread, as it's going to degrade quickly.
    1 point
  22. Thank you, but the scheduler.max_active.set setting mentioned in the above link works in a very weird way: it doesn't just limit downloading, but uploading (seeding) as well. I found a QueueManager plugin for this, but I have no idea how to implement it in this container. Would you maybe consider adding this in your next version? https://web.archive.org/web/20140511222419/http://code.google.com:80/p/pyroscope/wiki/QueueManager
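For context, the setting under discussion is a single line in the container's .rtorrent.rc (a sketch only; the value 4 is an arbitrary example, and it caps active torrents of any kind, which is exactly the complaint above):
    scheduler.max_active.set = 4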
    1 point
  23. I guess Unraid wants to adhere to the Sabbath; maybe computers are sentient after all.
    1 point
  24. Yes, Microsoft is deprecating SMBv1, which is used in workgroups to exchange the "browse lists" (the list of computers that show up in Explorer). Instead they are using a protocol called WS-Discovery, which is very similar to the mDNS used by Apple's Bonjour service. Of course M$ can't use mDNS because, well, because they're Microsoft! Unfortunately, WS-Discovery has not been integrated into Samba, but eventually I think it probably will be. In the meantime, we are looking at various open source WS-Discovery packages to integrate into Unraid. Note: this issue only has to do with discovery, as in displaying network computers in Explorer. You can always get at your server by opening an Explorer window and typing the name of the server, as in: \\Tower
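As an illustration of how lightweight such a responder is, one open source option in this space, the Python wsdd daemon (github.com/christgau/wsdd, named purely as an example, not as the package Limetech has chosen), can be tried on any Linux box with a one-liner:
    python3 wsdd.py    # answers WS-Discovery probes so the host shows up under Network in Explorer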
    1 point
  25. Already have this one working. A panorama my wife took on the South Island of New Zealand.
    1 point
  26. A brief introduction: I have a need to consolidate several servers into one box. This would include my main media storage (unRAID), my client backup server (WHSv1 migrating to WHS2011), my Usenet downloading workstation, my Ghost server, my FTP server, and possibly my Windows PDC. There will also be some secondary hosts on this server.

Why here? There is a bit of a buzz on this and other forums about running ESXi in the home and using virtualization, and several people already are, or want to be, running unRAID on top of a hypervisor. The primary use will be unRAID. This will be my worklog as I share my experiences; I hope to both help others and have others inform me of better practices.

Hardware: While I wish to keep the costs down, I also do not want to sacrifice performance for a few dollars saved. I could have saved a few dollars by shopping around, but I like to get as much from one vendor as possible; things like cables can be had for less elsewhere, but I got no mystery cables. I have no issue waiting out the sales or changing hardware later. I know that as the build progresses, things will change.

Motherboard: I chose a socket 1155 Intel C204 motherboard for price, features, number of PCIe slots, SATA3 for datastore SSDs, and VT-d. I have 2 more of these boards and use them at work; they are solid boards. I did buy another open-box unit. I fully inspected and burn-in tested the board before building this box. Sometimes open-box and used boards are damaged in ways you can't see and can damage other components; it is a gamble.

CPU: I went with the E3-1240 Xeon for the performance and price point. The 1220 will do fine, as would any of the E3-12x0 chips. I would avoid the workstation Xeon 1155 chips if you can (while they would work just fine): you do not need the on-chip GPU since you won't use it, so save a few bucks and some electricity.

RAM: I wanted to go with 32GB of RAM. This was not an option: while the board supports 32GB, only 4GB chips are available ATM, so I had to go with 16GB of ECC.

PSU: Most LGA1155 server boards require EPS 12V power supplies. (This is important when shopping for a PSU.)

Case: Norco RPC-4224 (newest). I used the 4224 with the 2 front USBs; I chose this one of the 3 Norcos I had because I thought the front USBs would be useful in pass-through mode. My Ghost server and WHS, for example, could use them for making boot thumb drives. The USB ports would also come in handy if I needed to plug in a USB DVD drive.

HBA/RAID controller: My plan was to buy 1 or 2 LSI-based HBA controller(s) and an expander. I already had the 2 Supermicro cards; with minor tweaks, these cards will work fine. I am sure this will change in the future, but for now it is fine, plus no additional cost. Once my unRAID crosses 16 drives, I'll have to add a third HBA or an expander.

NIC: I do have an extra Intel EXPI9301CTBLK 1Gb NIC. This is not needed but might come in handy in passthrough for unRAID.

Datastore drives: I chose to go with 2x SATA3 SSDs for pure performance. I will also use a 7200rpm spinner for ISO storage and backups; I might also run some non-critical hosts off this drive. Originally I was going to buy 3-4 SSDs and run them in RAID10 or RAID5 on a cheap LSI RAID card, but that was still a bit too expensive for the drives and RAID card. I also considered using mechanical drives in RAID instead. I will most likely change this down the road; for now, the SSDs should be quite fast, and as long as I keep them backed up, I should be OK. I do not need 3 datastore drives.
I will need 2 for sure: 1 for hosts and 1 for ISOs and backups. The second one could be an NFS share on another PC, but the point was to cut back on PCs, so I'd like to leave it in this box. Ideally, I should have bought one large SSD. After I opted not to buy the LSI RAID card for the SSDs, I had a 3-in-the-morning blond moment and said: oh, look at this sale on 120GB drives, I'll run 2 on the ICH10R on my mobo in RAID0 for 1000MB/s performance. The next morning I saw my error: ICH10R won't work in ESXi as RAID. I'll work with what I have... [Remember, ESXi does not support TRIM or garbage collection, so I will be killing these drives over time.] [EDIT: See the Recommended Upgrades section below on this. Some of the NEW SSDs have an "advanced garbage collection"; it is almost like auto-TRIM.]

Drive configuration: The hardest part of this build will be physical drive management. I will have to get creative and mount some drives internally.
3 drives: I plan to have 3 datastore drives. I can easily mount the SSDs internally. I might have to buy a new 750GB or 1TB 2.5" drive for the third datastore drive; for now I'll use a 3.5" in one bay until I run low on bays.
1 drive: WHS2011 will get a passthrough drive. Keeping PC backups on the SSDs won't work.
1 drive: My Newsbin client host will need a passthrough drive. This can be a 2.5" mounted internally or a 3.5" in a bay.
1 drive: Ghost server data drive... ?!? I might have to rethink this: have it redirect to a share on WHS or, most likely, unRAID. I could also use a 2.5" drive internally.
20-22 drives: unRAID. I might be limited to 20 drives, depending on how creative I get: 20 off the HBAs, plus 1 data drive and the cache on passthrough; the cache will most likely be another SSD.
As you see, 25-28 drives in the end, several of them 2.5" drives. Maybe I can mod a 2.5" drive bay internally or off the back. I won't have a full unRAID at the start, so I'll have time to figure this out. At worst, I mount some drives in a second Norco box.

Shopping list: This is the shopping list at build time. These prices reflect what I paid through various sales.
Case: Norco RPC-4224 (V3) $339.98
CPU: Intel Xeon E3-1240 Sandy Bridge 3.3GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1155 80W Quad-Core Server Processor $238.99 (Newegg sale)
Motherboard: SUPERMICRO MBD-X9SCM-F-O LGA 1155 Intel C204 Micro ATX Intel Xeon E3 Server Motherboard $133.99 (open box from Newegg)
RAM: Kingston 16GB 2x (2 x 4GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 (PC3 10600) Server Memory Model KVR1333D3E9SK2/8G $169.98 ($84.99 ea) (Newegg)
Power supply: SeaSonic X750 Gold 750W ATX12V V2.3/EPS 12V V2.91 SLI Ready 80 PLUS GOLD Certified Modular Active PFC Power Supply, sale: $119.99 w/ free shipping
SATA expansion card(s): 2 x SUPERMICRO AOC-SASLP-MV8 $109.99 each [This will only get you 16 drives in unRAID; you will need 3 for up to 24. See recommended hardware below.]
Cables: 4 x NORCO C-SFF8087-D SFF-8087 to SFF-8087 internal multilane SAS cable $19.99 each (Newegg); 2 x NORCO C-SFF8087-4S discrete to SFF-8087 (reverse breakout) cable $14.99 each (Newegg); 1 x NORCO C-P1T7 4-pin Molex 1 male to 7 female power extension splitter $7.99 (Newegg)
Total price for base server: $1340.85

Optional bits:
Fans: 3 x 120mm fan bracket ($20ish shipped); 3 x "pressure optimized" Noctua NF-P12-1300 120mm fans I picked up for $15 each plus $5 shipping for all 3; 2 x ARCTIC COOLING ACF8 Pro PWM 80mm case fans on the rear.
Flash drives: 2 x Lexar 4GB Firefly
(1 for unRAID and 1 for ESXi) $6.99 each (Microcenter)
ESXi datastore drives: 2 x OCZ Solid 3 SLD3-25SAT3-120G 2.5" 120GB SATA III MLC $155 each (Newegg). [I now recommend a Marvell 88SS9174-based SSD over a SandForce for this build; see below.] 1 x 1TB, 1.5TB or 2TB 7200RPM drive (for ISOs and backups) (free from junk pile)
unRAID drives: 8 x Hitachi 3TB 5400RPM drives $106.92 each (Amazon)
NIC: Intel EXPI9301CTBLK network adapter 10/100/1000Mbps PCI-Express 1 x RJ45 $22 (Newegg)
Ultimate price: $2298.19
This list will change as I upgrade the build.

Recommended upgrades:
SSDs: I mentioned earlier that running SSDs under ESXi will wear them out at a fast rate. Since the time of the original writing, the Marvell 88SS9174 SSD chipset has made major improvements. With its advanced garbage collection, these SSDs are made for uses like this. While they do cost a few dollars more, they should be much faster and outlive the SandForce drives I originally built this system with. This is an upgrade I will be doing myself.
1) The Corsair CSSD-P256GBP-BK is a beast. http://www.newegg.com/Product/Product.aspx?Item=N82E16820233227
2) The Plextor M3 Series PX-256M3 is a close second, with a 5-year warranty. http://www.newegg.com/Product/Product.aspx?Item=N82E16820249015
Both are 256GB priced at $339, about what I paid per GB for the OCZs.
HBAs:
1) M1015: Replace the SASLP-MV8s with IBM M1015s (LSI SAS9220-8i), about $65-$85 on eBay. (You would need 3 for more than 16 drives: put the first 16 drives on the 8x-port cards, then fill in the rest on the 4x-port card.) This upgrade will get you faster parity checks. The M1015 is a PCIe2 8x card with 8 SAS2 ports (SATA3 6Gb/s). They natively support 3TB and larger drives. If you ever dump your unRAID and move to a ZFS solution, these should be compatible, unlike the MV8s. If you do get these cards, you will need longer cables than those listed above in a Norco case; I recommend the 1M ones from Monoprice at $9.49 each. [Warning! These cards come with an IBM RAID BIOS; you have to re-flash them to the LSI IT-mode BIOS for them to work. You cannot flash them on the X9SCM; you need to do it on another motherboard.] [These do not work with unRAID 4.7. You must run 5.x and newer only.]
2) SAS expander: If you plan on more than 16 drives in your unRAID guest, I would strongly recommend a single IBM M1015 and one Intel RES2SV240 SAS expander. This combo will only use a single PCIe 8x slot and still get pretty much full mechanical hard drive speed to 20-24 drives. It will also cost less than 3 HBAs and cables (the RES2SV240 comes with 6 SFF-8087 to SFF-8087 cables, saving $60-$120). M1015 ($85 eBay) + RES2SV240 ($208); order no SFF-8087 to SFF-8087 cables. This combo saves $154 over purchasing 3 x MV8s and cables.
Cables: 1M SFF-8087 to SFF-8087 from Monoprice at $9.49 each; they are cheaper and longer for those with M1015s. Face it, the NORCO C-P1T7 is crap: it is a short waiting to happen. I would recommend a more solid Molex cable. Ideally, make your own custom cable from parts [example of a custom-built Norco cable from another forum]. I know most people can't build a cable like that or don't have the time/budget. I would suggest something like THIS, THIS, or THIS for those that can't make a cable. Unfortunately, none of those cables are 100% correct for a 4224 (one is perfect for a 4220), so you will need to buy more than one cable.
Recommended alternate parts:
Combo 1: SUPERMICRO MBD-X8SIL-F-O $189.99; 4 x Kingston Technology 8GB 1066MHz DDR3 ECC Reg with Parity CL7 for 32GB of RAM, $76.50 each (I thought this board only uses CL9 RAM, but it is on Kingston's compatibility list); Intel Xeon X3470 Lynnfield 2.93GHz 8MB L3 Cache LGA 1156 95W Quad-Core Server Processor $343 (or another Xeon). Combo price $838 ($296 more); you can save $80 more by downgrading to an X3450. [I was just trying to keep the horsepower close to what I had, in a good price bracket.] I only suggest this combo because many people already have this board; also, you can get to 32GB right now instead of waiting for 8GB chips for the C202/C204 motherboards.
Motherboards: TYAN S5510GM3NR (to replace the X9SCM; it does have 3 ESXi-compatible NICs); Supermicro X9SCM-IIF (updated X9SCM with 2 ESXi-compatible NICs and V2 BIOS for Ivy Bridge CPUs).
Next part: Hardware and ESXi install.
    1 point
  27. Hardware build and ESXi install.

Hardware install notes: (Original hardware unboxing.) The 650W Corsair power supply pictured was not going to cut it; I used the spare 750W Seasonic I had from an earlier sale. I just need to swap it out from a workstation and put the 650 into that. In addition, the Seasonic is Gold certified, which is a bonus for an always-on PC. The first step was to assemble everything into the Norco as if I were going to install unRAID: I installed the motherboard, RAM, CPU, 1 of the MV8s and the power supply for testing. (Current build photo with 32GB RAM, E3-1240, 2 x M1015, expander, Corsair Pro SSDs, and custom power cables.) I don't think I need to go into detail here; I'll assume you can assemble the hardware. Plug in the power and Ethernet cables to both the IPMI port and LAN2 [use LAN1 for ESXi; LAN2 is for bare-metal unRAID]. After this step, I stuck the ESXi flash drive into the internal USB header. Yes, it is still blank; this is for the BIOS config step.

IPMI setup: At this point go to ftp.supermicro.com/utility/IPMIView/ and download IPMI View. (If you have a monitor and keyboard installed and you don't plan to use IPMI, skip ahead to the BIOS configuration.) Start the IPMIView software and click the magnifying glass icon to have it auto-detect your new server; go ahead and add it to your "IPMI Domain". Log in to your new server; the default login is "ADMIN", pass "ADMIN". Start the server under the IPMI Device tab and open the KVM console in the KVM tab.

RAID card BIOS settings: As the PC starts to POST, watch for the RAID card BIOS. When it starts detecting drives on the RAID card, start pressing "Ctrl-M" (for the MV8, anyway). Controller tab: disable INT 13h. Optional: under staggered spin-up, set spin-up groups to lower the hit on your power supply at boot. Exit and save. If you have more than one HBA card, you should now swap them and do the same thing on the next card.

BIOS settings: Hit the "Del" key to enter the BIOS.
In the Advanced tab, Processor and Clock Options: enable "Intel Virtualization Technology".
In the Advanced tab, Integrated IO Configuration: enable "VT-d".
In the Advanced tab, PCIe/PCI/PnP Configuration: set PCI ROM Priority to "EFI Compatible ROM". (NOTE: for the 2.0a BIOS this is replaced with "Disable OPROM for slots 7&6"; set them to "Disabled".)
In the Advanced tab, IDE/SATA Configuration: SATA Mode = AHCI. Set staggered spin-up and hot plug for all drives if you want.
Boot: under Boot Options Priority, select your ESXi flash drive.
And last (optional), IPMI: BMC network config; set a static IP for your IPMI.
At this point, save settings and exit. Manually power off the server if you were using IPMI and you changed the IPMI IP in the BIOS. In IPMIView, modify your entry to reflect the new IP you just set.

Basic pretesting: This step is not really a step; it was something I did to test my hardware. It is optional, but it made me feel better. I pulled the unRAID flash drive from my second unRAID server, placed it into the ESXi box, booted from it, and ran several cycles of memtest. (Note: with this hardware you will need to upgrade the memtest that comes with unRAID. SEE HERE.) After that passed, I felt all warm and fuzzy...

Installing ESXi: NOTE: These instructions are for 4.1.0. Since this thread was created, ESXi 5.0 has been released; the instructions are ALMOST the same, and these should get you through the ESXi 5.0 setup also.
If there is a major change or a part that is confusing, let me know and I'll update this thread. (Get a screenshot if you can.) I won't pretend to be an expert at ESXi; in fact, even though I use it at work, all I know is from Google and trial and error.

For this build, we will be installing ESXi 4.1.0 to a flash drive. When you download ESXi from VMware, it is an ISO image. I decided, for ease of install, to just burn it to a CD. You can create a bootable flash drive to install from if you wish. After basic google-fu, and realizing I didn't have another flash drive lying about, the CD install won. Besides, my RPC-4224 came with a free SATA DVD?! It is karma. [Edit: You could also use the "Virtual Media" option in the IPMI and mount the ESXi ISO for the install if you don't have a SATA DVD.]

Prep for ESXi install: At this point, if you have not already, download your free copy of ESXi and register it to get a free serial number: http://www.vmware.com/products/vsphere-hypervisor/overview.html?ClickID=bledqduu6egnnqg6nl6vsdzelzklyvkfzgne Burn the ISO to CD (no need to waste a DVD) [or use the USB install method]. REMOVE ALL DRIVES! Remove/unplug all hard disks and flash drives from the server! During install, ESXi will erase ALL drives it sees!! Don't say I didn't warn you. Install your ESXi flash drive into the internal USB header. You can use an external header if you like, but it makes more sense to put it inside your case so you have access to your unRAID drive. Go ahead and plug a DVD drive into one of your internal SATA ports (a USB CD drive should work also). You should have no drives in the drive bays, so it is OK to leave the top off for now.

ESXi install: Power on your server. Start hitting the F11 key once you get the Supermicro splash screen; this brings up a boot menu. Select your CD drive.
Welcome screen: (Enter) Install
EULA screen: (F11) Accept
Select a disk: select your flash drive (IT SHOULD BE YOUR ONLY DRIVE. IF NOT, STOP! SEE ABOVE!!), then (Enter)
Confirm install: (F11) Install
Wait for the install; it should take 10-15 minutes.
Complete: (Enter) Reboot!
Assuming you configured the ESXi flash drive as your first boot device, you should now boot into ESXi.

Configuring the ESXi console: On our first boot into ESXi, we should be welcomed with this screen. If you see a grey screen with red text flash past and you are now sitting at an error code, chances are you have an incompatible NIC. (We won't see that with this build.) However, the issue I did have: I did not get a DHCP IP address, and I had to move the cable from LAN2 to LAN1, this after I told you to place the Ethernet cable into LAN2. Apparently I have a newer revision of the motherboard in this build; my last build used a ver 1.0 board, and this one is a 1.0b. I wonder what else has changed? Assuming you have a DHCP server (who doesn't?), you should have an IP, and it should say HTTP://IP_Addy/ (DHCP). This takes a few minutes sometimes. This is the web address of the server, and it is where you go to get the ESXi tools (more on this later).

Let's go set up a static IP and set the root password. We could do this from inside the vSphere client later, but let's see what options are in the console. We will need to hit F2 to customize the system. If you just hit F2 in your IPMI window, you just found the exit hotkey... If you are using IPMI: in the top toolbar, on the left, select "Virtual Media" and then "Virtual Keyboard". You should now have an on-screen keyboard. Hit F2 on the virtual keyboard.
You should now have a login screen. (You can close the virtual keyboard; we are done with it.) The default login is "root" with no password.

Set up a password now while we are here (optional): select Configure Password and enter the new password. (You have to use a complex password.)

Set up a static IP (optional but recommended): select Configure Management Network, then IP Configuration, then Set Static IP Address, and fill in your IP address, subnet and gateway. You can set up IPv6 while you are here if you use it; I do not run IPv6 at home, so I skipped that. You can modify your DNS configuration now; it should have locked in what your DHCP server assigned when we set a static address. You can change the hostname if you wish also. After you are finished with your changes, hit "Esc" until you are greeted with a save-changes page and a warning that your VM hosts will be kicked off the network. We have no hosts yet, so this is OK: select <Y> Yes. This should bring us back to the "System Customization" menu.

One last step while we are here: we are going to turn on SSH, which allows us to telnet and use WinSCP into the server. Select "Troubleshooting Options", then "Enable Remote Tech Support (SSH)". (This enables SSH on the server.) Double-check your settings; this screen is a bit confusing to some. After you enable SSH, you can <Esc> all the way back to the main screen. You should see the static IP now. We are done with this portion of the install; you can close the IPMI window if you want.

Configuring ESXi from vSphere: This is where most people get lost; the VMware vSphere client is not very intuitive. The first step is to get the vSphere client. Open up a web browser, put in the IP address of your ESXi box and hit enter/go/whatever makes it start. Stop! You will now have a warning message in your browser. This is OK: you are connecting over a secure connection with a private security certificate. Go ahead and connect, and save the certificate if it asks you (IE won't save it). Once you get to the ESXi web page, download and install the "vSphere Client". I won't hold your hand here; install the client. Once you have the client installed, enter the IP address of your ESXi box, admin ID and password, and hit "Login". STOP! We are greeted with a certificate error once again: check "Install Certificate..." then select "Ignore".

vSphere Client will now start up and give you a nag box about your license. It will also remind us that we have no persistent storage. OK, let's fix the license issue first: Configuration > Licensed Features > Edit; check "Assign New License Key to this Host"; click the "Enter Key" button; enter your license key; click "OK"; click "OK". You now have a licensed ESXi server.

Now we need to add the datastore drives. These are the drives where we store the virtual disks and hosts, along with other data for the ESXi server. You can hot-swap the drives into the server while it is on, but for the sake of safe practice, we will shut the server down: right-click on the server in the top left pane > Shut Down (or Summary > Reboot). The server will nag that it is not in "Maintenance Mode"; that is OK. It will then ask you to confirm why you are shutting down. OK and shut down.

Install your datastore disks at this point. We could have done this sooner; we just didn't get to it. I am going to install 1 SSD off one of the white SATA600 (SATA3) ports and one large mechanical drive off one of the black SATA300 (SATA2) ports. I'll eventually add the second SSD; for now, I'll hold off.
(Honestly, I have some test VMs on my second SSD in my other ESXi box that I need to reclaim.) You can do what you feel is best for your needs. ANY DRIVE WE ADD AND ASSIGN AS A DATASTORE DRIVE WILL BE FORMATTED!! FOREVER LOST! THERE IS NO GOING BACK! That is, unless it already contains a datastore; you can move those from ESXi box to ESXi box. Once you are done adding the drives, power up your server and restart vSphere.

Adding datastore drives: In the vSphere client: Configuration > Storage > "Add Storage". We are adding a Disk/LUN. Next. Select the disk you want added as a datastore. Next. All partitions/data will be wiped! Next. Name your datastore. At work we call them Datastore1, Datastore2, etc.; at home, I name them a bit more descriptively, keeping the names simple for scripting later: SSD1, SSD2, 2TB1, for example. Enter a name > Next.

STOP! The format "Set Block Size" step is critical, and most people screw this up and lose all their data after they figure it out. You have 4 block-size settings, and what you select determines the maximum size of your virtual drives!
Block size vs. maximum virtual drive size:
1MB = 256GB vDrive
2MB = 512GB vDrive
4MB = 1TB vDrive
8MB = 2TB vDrive
Supposedly, there is no performance hit or loss of drive space for choosing a larger block size. Choose wisely based on your needs; you cannot undo this without reformatting the drive. In this case, I'm going to choose 1MB blocks: my SSD is small, and most of my clients will be 30 gigs or smaller. Edit: I now think it is best to format all drives with the same block size. Since I formatted my mechanical drives with 8MB blocks, I am going to format my SSDs with 8MB blocks too.
Choose a block size > Next. Confirm > Finish. Repeat as needed for each drive. We now have our datastore. (Yes, there is data on the 2TB drive; it is borrowed from another ESXi box, more on that later.)

Updating ESXi to the latest version: For 4.1, see this thread > http://lime-technology.com/forum/index.php?topic=14695.msg152540#msg152540 For version 5.0, see this thread > http://lime-technology.com/forum/index.php?topic=14695.msg169119#msg169119

This pretty much concludes the basic ESXi setup. We will get into more tips and tricks, like pass-through, as we install the VMs. If anyone sees any changes I should implement, let me know. (The new ESXi box sitting with my 2 unRAID servers, on the floor of a spare bedroom temporarily. Here is a crappy cellphone picture of the servers in a Lack Rack.)
    1 point