dheg

Members
  • Posts

    413
  • Joined

  • Last visited

Everything posted by dheg

  1. True, but the install is almost exactly the same. No need to re-write the setup. I can confirm this: I had never used ESXi before and followed the thread to install 5.1. Sent from my GT-P7500 using Tapatalk 2
  2. I'm running 5.1 with an M1015 and USB passthrough for unRAID with no issues. Sent from my GT-P7500 using Tapatalk 2
  3. Shutdown script

     Once you have configured your settings to your liking, let's go with the shutdown script. This part was a bit tricky. I found many places making use of the vicfg-hostops command to trigger a VM/host shutdown, and apparently this works up to ESXi Free edition 5.0. But of course, I'm using 5.1. Dead end. I also found this in the VMware community forum; it basically emulates the vSphere Client call to shut down the host. However, I couldn't make it work: I ran into permission issues that I couldn't sort out (I'm a Linux noob). So after some tinkering, I decided to go the easy way: to issue an ssh halt command.

     The shutdown script is actually very easy; it only has to send a halt order through ssh. I'll cover that later. The tricky part here is to use public key authentication to avoid having to input the ESXi host's password when ssh'ing from a remote location. To achieve this:

     • On your vMA guest, create a pair of keys with ssh-keygen -t rsa. Choose the default options and save the keys in the locations prompted.

       vi-admin@helios:~> ssh-keygen -t rsa
       Generating public/private rsa key pair.
       Enter file in which to save the key (/home/vi-admin/.ssh/id_rsa):
       Enter passphrase (empty for no passphrase):
       Enter same passphrase again:
       Your identification has been saved in /home/vi-admin/.ssh/id_rsa.
       Your public key has been saved in /home/vi-admin/.ssh/id_rsa.pub.
       The key fingerprint is:
       d4:cd:4f:a0:8b:0f:c2:75:99:1a:0b:9f:23:e0:32:c5 vi-admin@helios
       The key's randomart image is:
       +--[ RSA 2048]----+
       |        .        |
       |    . . * .      |
       |  E . + * o .    |
       |   o o = B . o   |
       |    o . + S . .  |
       |       o o +     |
       |          .      |
       |                 |
       |                 |
       +-----------------+

     • Now you have to copy this key into the authorized_keys file on the ESXi host; this file is in /etc/ssh/keys-root.
     So copy the output of cat /home/vi-admin/.ssh/id_rsa.pub into /etc/ssh/keys-root/authorized_keys. For this change to take effect you need to reboot the server: go to the vSphere Client and right-click on the Host > Reboot.

     • That's it: on restart you can now ssh from the vMA to the ESXi host without a password. To test it, just type ssh <user>@<IP> (in my case, I'm using user root and the IP of my ESXi host is 192.168.1.151):

       vi-admin@helios:~> ssh [email protected]
       The time and date of this login have been sent to the system logs.
       VMware offers supported, powerful system administration tools.
       Please see www.vmware.com/go/sysadmintools for details.
       The ESXi Shell can be disabled by an administrative user. See the
       vSphere Security documentation for more information.

     You are effectively logged in to the host shell from the vMA one (which I'm accessing from a PuTTY session on Windows; this is mind-blowing stuff!). For example, if you type ls you'll get the list of files and directories of the ESXi host, not the vMA guest.

       # ls
       altbootbank  lib64          sbin             var
       bin          local.tgz      scratch          vmfs
       bootbank     locker         store            vmimages
       bootpart.gz  mbr            tardisks         vmupgrade
       dev          opt            tardisks.noauto
       etc          proc           tmp
       lib          productLocker  usr

     BTW, to log out from the host back into the guest, just type exit. We are almost there.

     • The last step is to create the shutdown script, but before continuing it's important you understand how it works: it will send a halt command to the host, which will follow the logic set up in Configuration > Virtual Machine Startup/Shutdown to shut down the VMs. A couple of comments here:

       - You need to define the order in which VMs start in the Automatic Startup section; the shutdown order will be the opposite.
       - For this to (cleanly) work, each VM needs to have VMware Tools installed.
       - The Shutdown Action has to be set to Guest Shutdown. Any other option will just power the VMs off once the host is down.
       - Adjust the Startup Delay and Shutdown Delay to make sure VMs are started/stopped in the proper order. In my case, many of my VMs have directories mounted on unRAID, so I have a long startup delay to make sure the array is properly started and online before starting up the other VMs. The shutdown delay might not be so important. According to the VMware manual, the shutdown delay applies only if the virtual machine has not already shut down before the delay period elapses; if the virtual machine shuts down before that delay time is reached, the next virtual machine starts shutting down.

     • The shutdown script itself is actually very simple (it has to be if I did it). For those that need an example, the shutdown command in my case would be: ssh [email protected] halt

     • APC agent scripts (Command Files in APC lingo) are located in /opt/APC/PowerChuteBusinessEdition/Agent/cmdfiles, so save/create the script there and chmod it to make it executable:

       # sudo chmod +x ServerShutdown.sh

     • And we are done. Go to the PCBE website > Shutdown Settings and configure the APC agent to trigger the shutdown script. The script will be triggered once the conditions in the PCBE Shutdown Settings are met. I hope it works for you too!
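     For those that want a starting point, here's a minimal sketch of what a ServerShutdown.sh command file could look like, built around the ssh halt command above. The defaults are the user and IP from my example, and the build_halt_cmd helper is just an illustrative name; adjust everything to your own setup.

     ```shell
     #!/bin/sh
     # Sketch of a PCBE command file that halts the ESXi host over ssh.
     # Assumes public key authentication is already in place, so the
     # script can run unattended with no password prompt.
     # ESX_USER/ESX_HOST defaults are the example values from this post.
     ESX_USER="${ESX_USER:-root}"
     ESX_HOST="${ESX_HOST:-192.168.1.151}"

     # build_halt_cmd is a hypothetical helper that only assembles the
     # command string; keeping it separate makes the script easy to dry-run.
     build_halt_cmd() {
       echo "ssh ${ESX_USER}@${ESX_HOST} halt"
     }

     # Print the command; in the real command file you would execute it
     # instead, e.g.:  $(build_halt_cmd)
     build_halt_cmd
     ```

     Printing the command first lets you sanity-check what PCBE would run before wiring it into Shutdown Settings.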
  4. Setting up ESXi 5.1 with an APC Smart-UPS connected through USB

     Disclaimer: I've used these steps to connect an APC Smart-UPS 750. I have no reason to believe it wouldn't work in other setups, but I can't confirm. I came to this solution because I learnt that the SMT750 is not fully supported by apcupsd. As I'm a Linux noob myself, I'll try to go into as much detail as possible; no intention to offend anyone!

     • Download vMA 5.1 here (login needed)
     • Install vMA. I found a very good guide here
     • Enable SSH connections to vMA. Detailed instructions are in this link. Now you can connect through PuTTY and copy/paste the instructions
     • Pass through the serial port. Go to the vSphere Client, select the vMA guest and edit its properties. In the Hardware tab:
       - Add a Serial Port interface
       - Select the option "Use physical serial port on the host"
       - The port should show something like this: /dev/char/serial/uart0
       - Select the Smart-UPS (it must be plugged in) and that's it, you have passed the serial port through to the vMA guest. Click OK and power on the vMA guest.
     • Download PowerChute Business Edition for Linux (pcbe910_linux.tar.gz). This is the link
     • Upload pcbe910_linux.tar.gz to the vMA guest using SCP:

       # sudo scp <user_name>@<remote_host_ip>:<path_to_remote_host_dir>/pcbe910_linux.tar.gz /home/vi-admin/pcbe910_linux.tar.gz

       As an example, this is what I did:

       # sudo scp [email protected]:/home/ubuntu/pcbe910_linux.tar.gz /home/vi-admin/pcbe910_linux.tar.gz

       You can copy the file anywhere in vMA. I copied it to my home directory to avoid permission issues.
     • cd to your user directory:

       # cd ~

     • Check the file is there:

       # ls -l
       bin  pcbe910_linux.tar.gz

     • Untar pcbe910_linux.tar.gz:

       # sudo tar -zxvf pcbe910_linux.tar.gz

       This will add a file and a folder:

       # ls
       bin  install_pbeagent_linux.sh  pcbe910_linux.tar.gz  rpms

     • Execute the install script. It must be run with sudo:

       # sudo ./install_pbeagent_linux.sh

     • You will be prompted for the monitoring port. Select option 2 (RJ45).
     It will then ask how you are connected. Since I'm not using an NMC, just select 2 (No).

     Note: I chose RJ45 (option 2) and used this cable. I tried to get it to work with the USB cable without success. I prefer the serial anyway: it can be screwed to the port on the server, so it's harder for it to fall out.

     Note 2: I read somewhere that the pinout in the cable provided by APC is different from commercial cables. I cannot confirm this. However, if you are following this tutorial and fail to connect, make sure you are using the supplied APC Smart-UPS serial cable.

     • Next, the configuration program will prompt you with a series of questions (enter a username and password for the PCBE agent; you'll need these to access the web GUI)
     • Go to a browser and input https://<vMA_IP_address>:6547. Make sure you type https://; for some reason the web page didn't load without it. In my browser (Firefox) I get a security warning. Just click on 'I understand the risks' and add the exception, and voila, you'll see the login page!
     • Once you log in, you'll be welcomed by the initial setup wizard. As I'm in no way an expert and this is very much a matter of anyone's preferences, I won't hold your hand here.

     PS: If you want to check you have a connection, you can click on 'Quick Status' in the upper-right corner. A new window will pop up stating: Device Status: On-line (of course)

     Let's go with the shutdown script so all the pieces fall together
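     The upload-and-install steps above can be condensed into one shell snippet. Everything in it (the IP address, the paths, and the install_pcbe_agent wrapper name) either comes from my examples in this post or is purely illustrative, so review and adapt it before running.

     ```shell
     #!/bin/sh
     # install_pcbe_agent wraps the PCBE install steps from this post in a
     # function so they can be reviewed before executing. The IP address
     # and paths are the example values used above; change them to match
     # your own environment.
     install_pcbe_agent() {
       cd /home/vi-admin || return 1

       # Copy the installer from the machine where it was downloaded
       sudo scp [email protected]:/home/ubuntu/pcbe910_linux.tar.gz . || return 1

       # Unpack: this yields install_pbeagent_linux.sh and an rpms/ folder
       sudo tar -zxvf pcbe910_linux.tar.gz || return 1

       # Must run as root; it prompts for the monitoring port
       # (option 2 = RJ45/serial) and the agent's web GUI credentials
       sudo ./install_pbeagent_linux.sh
     }

     # Call install_pcbe_agent yourself when you are ready to run the steps.
     ```

     Wrapping the steps in a function means nothing runs on paste; you invoke it only once you've checked the values.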
  5. Last weekend I finally decided to configure my UPS with my ESXi server. My server has been running since last September, and the UPS since October, although automated shutdown wasn't configured yet. So this has been a (one-of-many) new year resolution, and considering we are still in February, I feel quite good about it! As I've gone through some research, I thought of saving you the work and making a thread of the different solutions I found:

     1. CyberPower UPS. There is a post somewhere in the mother-of-all-ESXi-threads in this forum (yes, I refer to Atlas) mentioning this method. I don't remember where it is; however, this is the link to a quite informative and detailed YouTube video explaining the process. As I found soon enough, I couldn't use this solution since my UPS is an APC model. If you are in the same place, keep on reading.

     2. APC UPS with NMC. In the link explaining the how-to for the CyberPower UPS (http://tinkertry.com/configure-automated-shutdown-homelab-datacenter-15-minutes/), down in the comments I found what seemed a quite well documented solution for my case (it's a long post, do a quick find for "analog_"). It seemed quite easy and promising, but of course I don't have an NMC. For the noobs out there (I myself didn't know what it meant), NMC stands for Network Management Card, and it's priced at over 200€. So this wasn't going to cut it either.

     3. APC UPS with USB. Google came to the rescue, and after some surfing I found this link. Also very promising. It required installing apcupsd. At this point I had made up my mind to go with the official APC solution, but if apcupsd could handle it, I was OK with that. However, the apcupsd website states "DO NOT purchase the following APC UPS models: SmartUPS SMX/SMT 750, 1000, 1500 / SmartUPS RT 3000XL, 5000XL". Guess which one I have: the SMT750.

     4. APC UPS with USB and PCBE. I finally decided, not that I had any other choice, to install PCBE (PowerChute Business Edition, the official APC solution) with the provided cable.
     A pretty similar solution would be to connect the UPS to a Windows VM, pass through the USB, and use plink to pass the shutdown command to the ESXi host. This will probably be easier for the Windows power users. However:

     • I like taking the hard way: it's more difficult, but also more fun
     • With this solution, you can spare the Windows license

     Next you'll find a step-by-step guide of the solution I took:
  6. I would be very interested, but we should probably do so in another thread. I'll PM you once I have some time, hopefully this weekend. Sent from my GT-P7500 using Tapatalk 2
  7. Have you had any problems pre-clearing drives in a VM? So far every time I try preclear gets completely through the zeroing step and dies saying the MBR isn't cleared correctly. It's NOT a drive problem because they clear fine on my preclear station. Also moving data, parity checks and parity builds work fine in a VM once the drive is cleared. It also doesn't matter how many drives I clear at a time in a VM. I will admit I haven't tried clearing very many drives from a VM - maybe 5-6. Most of the time I use my preclear station and then take it to the VM. It is also possible that I needed to add the drive with the PC off but it doesn't affect parity builds/checks so not sure that is it either. Besides I thought at least ONE of the attempts I did was after powering down the PC. I had that issue when pre-clearing drives connected to a RES2xxxxx expander. I sorted it by updating the firmware. Sent from my GT-I9100 using Tapatalk 2
  8. I also have the 4224. Have you tried moddiy? I'm pretty happy with my cabling layout; I can post some pictures if someone is interested. Sent from my GT-I9100 using Tapatalk 2
  9. Wise words! Sent from my GT-I9100 using Tapatalk 2
  10. +1 BUT 50 + 50 + 5 = 105 ) Sent from my GT-I9100 using Tapatalk 2
  11. dheg, is your intel card plugged into the motherboard or do you just have the power connector attached? I'm still getting a bazillion attempting task aborts, ... eventually I'll figure out what's going on. Power connector; the only link with the mobo is the M1015. BTW, I have the Supermicro X9SCM; my sig is outdated. Sent from my GT-I9100 using Tapatalk 2
  12. Used the built-in EFI shell method. It went flawlessly. Thanks a lot! BTW, it helped me cut parity check duration by half. If interested, my post is here.
  13. In case Tom, or someone else, is keeping tabs on open issues: I reported very slow parity checks in RC10 here. I was coming from 4.7 with parity checks lasting about 8h, while in RC10 they lasted well over 12h. My syslog had thousands of attempting task abort! error messages. One particularity of my system is that I have an M1015 board with an Intel expander (RES2CV240). I found this post explaining how to update the firmware of the expander (from PH11 to PH13), so I did that this weekend. My parity check has gone down to 6h 42min, or 82.9MB/s (with 7 Green 2TB WD drives: 5 data + 1 parity + 1 cache), and my syslog doesn't have a single attempting task abort! error. I guess this is as good as it gets to confirm slow parity checks are not RC10 related, at least for me. I hope it helps others
  14. I don't have the issue and I'm willing to collaborate on testing. Sent from my GT-I9100 using Tapatalk 2
  15. +1 I agree. Tell your friends to cough up for a plus license. If they want more drives for free nothing is stopping them from using something like FreeNAS. +1 Sent from my GT-I9100 using Tapatalk 2
  16. It's where the caches go to relax and cool off on a hot summer day ):))) Sent from my GT-I9100 using Tapatalk 2
  17. Thanks Johnm. This is what I did, for others that might have this issue: right-click on the VM and Edit Settings. On the Hardware tab, change the memory to the desired value. On the Resources tab, select Memory settings, and click on 'Reserve all guest memory'. Easy :-)!
  18. I'm running unRAID 5rc10 as a VM with 2GB RAM. I tried to update to 4GB but received this error message: "Failed to start the virtual machine. Module MemSched power on failed. An error occurred while parsing scheduler-specific configuration parameters. Invalid memory setting: memory reservation (sched.mem.min) should be equal to memsize (4096)." I tried with 3GB and received a similar message. Tried back with 2GB, and it starts without issues. Any idea what this is?
  19. Good luck Sent from my GT-I9100 using Tapatalk 2
  20. So do I, hopefully this weekend Sent from my GT-I9100 using Tapatalk 2
  21. BTW, I just realized disk 3 is uploading continuously to CrashPlan (installed in a different VM). Could this be it? I have some tests to run tomorrow :- Sent from my GT-P7500 using Tapatalk 2
  22. As was stated above, seems related to the Task Abort attempts, about one a minute the entire 13 hours, but none before or after the parity check. None occurred on the parity drive, fewest occurred on Disk 1, most on disk 2, with many on the other 3 data drives also. No clue here as to why. Nothing else appears to be an issue. Have you checked for the latest BIOS for your sas card? Are others with the same problem also running VMware? My M1015 SAS card has F14 firmware and I'm running an expander. The parity and disk3 drives are on the SAS card, the others are on the expander. I have a spare SAS card; would it help to test with it?
  23. I get super slow parity speeds, and I've attributed to the "Attempting task abort!" entries in my syslog, which I see you have as well. Did you not have those entries before? I don't have any previous logs, didn't care to save them
  24. I think these lines in the syslog may indicate the problem, but someone with more experience may need to confirm:

      Jan 13 17:43:18 Hercules kernel: sd 0:0:6:0: attempting task abort! scmd(d1267b40)
      Jan 13 17:43:18 Hercules kernel: sd 0:0:6:0: [sdg] CDB: cdb[0]=0x28: 28 00 00 5b 0c 40 00 04 00 00
      Jan 13 17:43:18 Hercules kernel: scsi target0:0:6: handle(0x0010), sas_address(0x5001e6739eda2ff0), phy(16)
      Jan 13 17:43:18 Hercules kernel: scsi target0:0:6: enclosure_logical_id(0x5001e6739eda2fff), slot(16)
      Jan 13 17:43:19 Hercules kernel: sd 0:0:6:0: task abort: SUCCESS scmd(d1267b40)

      Those lines repeat every minute or so during the parity check; it could be a problem with this disk: WDC_WD20EARX-00PASB0_WD-WCAZA8449068. I'm not sure if the error is pointing at the disk, cable, power or controller though. I doubt it's related to upgrading unRAID to RC10 unless other people have started seeing the same error. I never had an issue with that drive and the SMART reports come back OK. I'm running a Norco 4224 with many empty slots; I could move that drive to another backplane. Would this help?
  25. My two cents: if the only thing that will change is the name, leave it as RC10. Newcomers, as someone else suggested, will look for the latest stable and won't read (I usually don't) any disclaimers. The customer base will not grow on "defective" products. Once they get "unraided" they will get on the forums and see that there is a 5rc10 that long-time users are running with some potential issues. Just start working on the 64-bit version, before it's too late... On the other hand, I have the X9SCM-F board, and I neither have nor know about this issue. However, my proposal is to gather all users of this board and collaborate on tests and trials to find a solution.