twolf

Members
  • Posts: 34

Everything posted by twolf

  1. I have updated to ESXi 6.5 U2 (May 2019 Patch) in the meantime and retried installing the compiled tools after deleting as you suggested. The plugin install still fails the first time:

     plugin: installing: https://raw.githubusercontent.com/StevenDTX/unRAID-open-vm-tools/master/openVMTools_compiled.plg
     plugin: downloading https://raw.githubusercontent.com/StevenDTX/unRAID-open-vm-tools/master/openVMTools_compiled.plg
     plugin: downloading: https://raw.githubusercontent.com/StevenDTX/unRAID-open-vm-tools/master/openVMTools_compiled.plg ... done
     Kernel Version: 4.18.20-unRAID
     plugin: run failed: /bin/bash retval: 8

     And then succeeds on the second attempt:

     plugin: installing: https://raw.githubusercontent.com/StevenDTX/unRAID-open-vm-tools/master/openVMTools_compiled.plg
     plugin: downloading https://raw.githubusercontent.com/StevenDTX/unRAID-open-vm-tools/master/openVMTools_compiled.plg
     plugin: downloading: https://raw.githubusercontent.com/StevenDTX/unRAID-open-vm-tools/master/openVMTools_compiled.plg ... done
     Kernel Version: 4.18.20-unRAID
     plugin: downloading: https://raw.githubusercontent.com/StevenDTX/unRAID-open-vm-tools/master/packages/libdnet-1.12-x86_64-6cf.txz ... done
     +==============================================================================
     | Installing new package /boot/config/plugins/OpenVMTools_compiled/packages/libdnet-1.12-x86_64-6cf.txz
     +==============================================================================
     Verifying package libdnet-1.12-x86_64-6cf.txz.
     Installing package libdnet-1.12-x86_64-6cf.txz:
     PACKAGE DESCRIPTION:
     # libdnet
     #
     # libdnet provides a simplified, portable interface to several
     # low-level networking routines, including
     Executing install script for libdnet-1.12-x86_64-6cf.txz.
     Package libdnet-1.12-x86_64-6cf.txz installed.
     plugin: downloading: https://raw.githubusercontent.com/StevenDTX/unRAID-open-vm-tools/master/packages/libffi-3.2.1-x86_64-2.txz ... done
     +==============================================================================
     | Skipping package libffi-3.2.1-x86_64-2 (already installed)
     +==============================================================================
     Starting VMWare Tools Daemon.
     -----------------------------------------------------------
     Plugin OpenVMTools_compiled is installed.
     Plugin Version: 2019.05.17
     -----------------------------------------------------------
     plugin: installed

     ESXi still does not recognize any tools running, though, and inspecting processes shows no VMware Tools daemon. The packages subfolder lists the two libraries from the installation log, but also a 0-byte "open_vm_tools-10.3.5-4.18.20-Unraid-x86_64.tgz", which I assume is the problem.

     Looking at your install script and repository, I have found the issue. The kernel of unRAID 6.6.7 is "4.18.20-unRAID", and your repository correctly contains "open_vm_tools-10.3.5-4.18.20-unRAID-x86_64.tgz". Your script, on the other hand, attempts to download and install "open_vm_tools-10.3.5-4.18.20-Unraid-x86_64.tgz", which results in a 404 error due to the case mismatch in the filename. Manually downloading and installing with the correct filename yielded running tools, which also work after a reboot. So it all comes down to that one typo: "Unraid" vs. "unRAID".
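A quick way to see the mismatch is to assemble the download URL from the kernel string yourself. This is just a sketch based on the paths in the install log above; the kernel string is hardcoded here, since on the live system it would come from `uname -r`:

```shell
# Build the package URL from the exact kernel string.
# raw.githubusercontent.com paths are case-sensitive, so "Unraid" vs.
# "unRAID" is the difference between a 404 and a successful download.
KVER="4.18.20-unRAID"
PKG="open_vm_tools-10.3.5-${KVER}-x86_64.tgz"
URL="https://raw.githubusercontent.com/StevenDTX/unRAID-open-vm-tools/master/packages/${PKG}"
echo "$URL"
```

The broken script effectively substituted "Unraid" into that filename, which is why the download 404'd and left the 0-byte file behind.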
  2. I am running unRAID 6.6.7, but have had no luck running the current openVMTools_compiled plugin. It should still be backward compatible with older unRAID versions, right? A first install attempt results in errors; a second attempt seems to succeed, and the log also shows VMware tools being started after reboot, but ESXi (6.0.0 U3, current patch) doesn't recognize any installed tools. Anything I can do to further troubleshoot this? I managed to get the last posted version of the openVMTools_auto plugin working with the packages for 6.7.0-rc4, but the precompiled plugin seems like the more future-proof solution if I can get it working.
  3. Yes, it will. But of course only with half the bandwidth compared to a full x8 slot.
  4. OK, decided to take the plunge and rebuild the disk then. Rebuild speed was slugging along at 3-4MB/s... Massively repeated mpt2sas entries in syslog pointed to hundreds of bus resets within minutes. Finally replaced the SFF8087 cables running to the M1015, and the rebuild is moving at a decent 120MB/s now. Guess one of the cables was faulty, probably also the reason why the disk red balled in the first place. Regards, Tobias
  5. My unRAID (5.0rc11) VM on ESXi has been happily running for over half a year now. Today I decided to shut down the host to do some hardware upgrades. The status of unRAID was perfectly fine before shutdown, no errors of any kind in the statistics or syslog, everything green. As soon as I stopped the array in preparation for shutdown, disk 6 red balled. As far as I remember, nothing of import was in the syslog, but of course I forgot to save it before shutting the VM down... After checking the cabling, I powered the host and VM back up. The disk is recognized without problems (sdb), but of course remains red balled. The only other oddity is that no temperature is shown on the main screen, despite smartctl having no trouble accessing that information. The syslog and smartctl report are attached; everything looks OK to me. As this is my first unRAID incident, I'm a bit hesitant to move forward. Should I just remove the disk, add it back again and start a regular recovery? This seems wasteful to me, as there wasn't any write access to the disk for weeks, and the last parity check in July was fine as well. How can a drive even red ball on array shutdown, is there any form of write access (metadata update) at that time? Regards, Tobias syslog.txt smartctl-sdb.txt
  6. I still do not observe any slowdown on my mixed M1015/SAS2LP setup, so your comment about RC11 having issues with SAS2LP "for sure" is not all that sure. Other factors must play a part as well...
  7. I think the EARX use three 667GB platters, while the EZRX use two 1TB ones instead. I have two EARX and three EZRX in my setup, and the two EARX are slower and run hotter than any of the EZRX. So my vote would clearly go to the EZRX unless the EARX was quite a bit cheaper.
  8. Have you already tried replacing the CMOS battery with a new one? Having to force a reset sounds like the CMOS settings keep getting corrupted, one of the possible symptoms of a dying battery...
  9. The CSE-M14TB has a SFF-8484 socket, so you would need a SFF-8484 to SFF-8087 adapter cable to connect it directly to the SASLP-MV8. But of course all the drives would be passed through then, and not only some of them.
  10. I would suggest purchasing a cable that directly connects the 5in3 to the controller, and avoiding any attempt at somehow connecting the SATA connectors of the two cables (I'm not even aware that such a thing is possible). Which controller and 5in3 model are you talking about specifically? It can't be the 5in3 listed in your signature, because that one has 5 individual SATA connectors. If the controller is the SASLP-MV8, that side needs a SFF-8087 connector.
  11. I've never seen such a specimen in the wild, but it appears they really do exist. I stand corrected then.
  12. Oops. Of course I meant TB and not GB. And you are quite correct about the 2.2 TB border as well, I simplified since there are no drive sizes between 2 and 3 TB.
  13. What makes you think that is an error? As far as I remember from adding additional disks the last weeks, it's plain informational. Just start the array, unRAID will tell you that the new disks are unformatted. Check the box and click the button to format all unformatted disks, and once unRAID is done with that they will be available as regular data shares.
  14. The -A option is only relevant for smaller advanced format drives; 3 GB (and larger) drives will always be 4k-aligned on the GPT partition. Preclear will tell you that the partition starts at sector 1, but that refers only to the unused protective MBR entry, not the data partition actually in use.
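The alignment check itself is simple arithmetic: with 512-byte sectors, a partition is 4k-aligned exactly when its starting sector is a multiple of 8 (8 × 512 B = 4096 B). A minimal sketch, with example sector values:

```shell
# A start sector is 4k-aligned when start_sector * 512 bytes is divisible
# by 4096, i.e. when the sector number itself is a multiple of 8.
check_alignment() {
  if [ $(( ($1 * 512) % 4096 )) -eq 0 ]; then echo aligned; else echo unaligned; fi
}
check_alignment 64   # example GPT data-partition start -> aligned
check_alignment 1    # the MBR entry preclear reports    -> unaligned
```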
  15. (Norco 4224 Thread) I'm using two of the modular Seasonic cables as well to power all six backplanes. I didn't have a spare cable around, but Seasonic support was kind enough to send me two additional cables for free. Great to experience such fantastic customer service in this day and age!
  16. Currently running 2 drives on a SAS2LP-MV8 and 3 on an M1015 here, all various WD Green drives. My parity sync speeds are fine with RC11, starting at around 130MB/s and dropping to below 100MB/s towards the end. Speed is about the same both on bare metal unRAID and on a virtual machine with the controllers passed through.
  17. Just to add another data point: On my hardware setup (see signature) I do not observe any problems regarding write speed. Neither running unRAID on bare metal with the whole 16GB, nor running it in a virtual machine. Edit: BIOS version 2.0b
  18. I've been using two old machines as router and server for years, but recently decided to get away from that. Modern hardware is just so much more energy efficient, especially under light system loads. The router is in the process of being replaced by an Alix solution, while the server will be virtualized on my new ESXi machine alongside unRAID and other virtual machines. At a certain point, the old hardware is obsolete even for tinkering purposes.
  19. The AOC-SAS2LP-MV8 passed through flawlessly in my recently assembled ESXi 5.0 Update 2 build. Neither the passthru.map edit nor disabling MSI were necessary.
  20. I've never used any forward breakout cables, so I can't comment much on this. I'm using one CBL-SFF8087-OCR reverse breakout in my build though, which does its job. So I would assume that the forward breakout counterpart CBL-SFF8087OCF will be a decent choice as well. Should be available in different lengths (0.5, 0.6 and 1.0m) according to your needs. Exactly, that's the thread with the relevant information. Detailed instructions can be found within the "LSI MegaRAID to SAS2008" ZIP file provided at the bottom of the 4th post. Apparently some motherboards are picky and have problems with the flash routine, I had no trouble at all with my five year old desktop board.
  21. Did you upgrade the unRAID version as well during this change? In that case you might need to re-run "make_bootable.bat". What happens if you select the flash drive on the boot menu (activated with "F11") or use the boot override from the BIOS, does that also fail?
  22. I assume you're connecting your hard drives directly, without any form of hot swap bays? In that case, you'll need a SFF-8087 forward breakout cable with a SFF-8087 connector on the controller side and 4x SATA connectors on the drive side. Be sure not to inadvertently get a reverse breakout cable; those look the same, but are for connecting SAS backplanes to onboard SATA ports, and will NOT work for connecting drives to the M1015. Also keep in mind that you will need to flash the firmware on the M1015 to IT mode before using it with unRAID.
  23. The M1015 is a PCIe x8 card; that motherboard has two PCIe x1 slots, which the card won't fit at all, and a PCIe x16 graphics card slot, which I am unsure will work with a SAS/SATA controller. Maybe somebody else can chime in on that second part.
  24. By whichever way you choose to connect your old drives to the new server, you can mount those drives on the command line without assigning them to the unRAID array, and then transfer the files from there as well (for example with Midnight Commander). If you have hot-swap trays in your new server, the best option would probably be to insert the old drives that way one after the other. Connecting via USB (external enclosure or docking station) is also an option, but keep in mind that only USB 3.0 will give you very high transfer speeds. USB 2.0 would still be faster than a slow network, but should get outperformed by a Gigabit Ethernet network. So, there are multiple ways, but it depends on what your hardware and infrastructure looks like.
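The mount-and-copy approach can be sketched roughly as follows. The device node and share path are placeholders, and the real mount needs root on the unRAID console; the copy step uses plain cp here (Midnight Commander or rsync work just as well) and runs against stand-in temp directories so it can be shown end to end:

```shell
# In practice, on the unRAID console as root:
#   mkdir -p /mnt/olddisk
#   mount -o ro /dev/sdX1 /mnt/olddisk   # sdX = the old drive, read-only
# Stand-in directories for the copy step:
OLD=$(mktemp -d)    # stands in for /mnt/olddisk
SHARE=$(mktemp -d)  # stands in for /mnt/user/<share>
echo "example data" > "$OLD/file.txt"
# -a preserves permissions and timestamps; "/." copies the directory contents
cp -a "$OLD"/. "$SHARE"/
#   umount /mnt/olddisk   # when done with the real disk
```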
  25. Yes, that's the dilemma I've found myself in as well. Is there even a way to control the fan speeds on the ESXi side, or are they strictly display-only? If control is available, it might be possible to script a solution somehow (unRAID reading drive temps, and passing reactions over to ESXi).
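If ESXi-side fan control turned out to be scriptable, the unRAID half of such a solution might start with something like this: pulling a drive temperature out of smartctl's attribute table. The smartctl line below is canned sample data, since this is only a sketch; live, you would pipe `smartctl -A /dev/sdb` instead:

```shell
# Extract the raw value of SMART attribute 194 (Temperature_Celsius).
# SAMPLE stands in for real `smartctl -A /dev/sdX` output.
SAMPLE='194 Temperature_Celsius 0x0022 118 103 000 Old_age Always - 32'
TEMP=$(printf '%s\n' "$SAMPLE" | awk '$1 == 194 {print $NF}')
echo "drive temperature: ${TEMP}C"   # -> drive temperature: 32C
```

From there, the speculative part would be shipping that value to the host, e.g. over SSH, and mapping it to whatever fan interface ESXi exposes, if any.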