JohnO

Everything posted by JohnO

  1. My unRAID 5.0.5 is a VM on a VMware ESXi host. I'm a pretty new unRAID user (3 months). I'm using the following plug-ins:
     - unMenu
     - CacheDirs
     - Open VM Tools
     Many of the items that -are- available as plugins I have running as separate VMs on my ESXi host. John
  2. ahh -- OK - that makes sense (in terms of why I haven't seen the issue, at least). Supposedly, the motherboard SATA on my ASRock can be passed through, with the unusual (and mostly positive) side effect that one of the SATA ports stays "unpassed-through" while the others can only be seen by the passed-through VM. In my case, I elected not to pass through the motherboard SATA, and have passed through my 4-port SATA PCI-e card. As this is my first NAS of any type, it is meeting my needs. I can certainly understand the desire to have more ports, though. John
  3. Strange. The world of passthrough seems to be very picky. I've got 5.5 with the December patches running fine for me, with the unRAID drives connected to an LSI SAS3041E 4-port SAS/SATA PCI-e x4 card. After I back up my ESXi config (running off a USB memory stick) I'll try the recently released 5.5U1, and hope things continue to work correctly.
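     A minimal sketch of backing up the ESXi config from the host shell before an upgrade, assuming local or SSH shell access is enabled; the second command prints a URL where the configBundle.tgz backup can be downloaded:
        # Flush recent config changes to the boot device, then build a backup bundle
        vim-cmd hostsvc/firmware/sync_config
        vim-cmd hostsvc/firmware/backup_config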
  4. When you tried 5.5, were those initial installations at version 5.5, or upgrades from previous releases? If you had good luck at 5.0 or 5.1, the upgrades will keep existing drivers. You may wish to make a backup of your ESXi boot drive and try 5.5. Now, if you already tried that and things that -did- work in 5.0 or 5.1 stopped working, then yeah, I'd stick with what works. John
  5. Glad you got it all working. Getting the correct mix of officially unsupported hardware to work with ESXi is tricky. Interesting that you had better luck with 5.1U2 than with 5.5.

     I built my machine in January, and at that time the advice seemed to be to stick with 5.0 and avoid 5.1 due to passthrough issues, especially with graphics cards. I started with 5.0, ran smoothly for a month, then upgraded to 5.5. My 5.5 system has been running well. It is true that 5.5 has reduced the number of drivers included with the release, so people have had good luck starting with a base 5.0u3 installation and then upgrading to 5.5, as the upgrade/update process will keep all your old drivers. The Realtek Ethernet driver commonly used by consumer motherboards is one example of a driver no longer included with ESXi 5.5. There are also procedures out there on the Internet to add this driver to a 5.5 installation kit.

     I used this post at the Home Server Blog as my template: http://thehomeserverblog.com/esxi/esxi-5-0-amd-whitebox-server-for-500-with-passthrough-iommu-build-2/ I ended up with some minor changes, noted in the forums over there: http://forums.thehomeserverblog.com/esxi-compatible-hardware/asrock-970-extreme4/

     Again, congratulations on getting it working! Having ESXi running at home has been great. John
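     For reference, a hedged sketch of installing a community Realtek driver VIB on an already-running 5.5 host from the ESXi shell, as an alternative to rebuilding the installation kit; the datastore path and VIB filename here are placeholders, not a specific tested package:
        # Allow community-supported packages, then install the driver VIB
        esxcli software acceptance set --level=CommunitySupported
        esxcli software vib install -v /vmfs/volumes/datastore1/net-r8168-example.vib
        # Reboot the host so the new driver is loaded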
  6. I've removed the floppy from the VM settings (which, I think, would be the same as disabling it in the BIOS of a physical machine). When the unRAID system reboots, it still acts as if there is a floppy device. I'd repost this in the "unRAID as a Guest" section, but it won't let me delete/move the message as far as I can tell. What I'm wondering is if something got written back to /boot after the first boot, when I -did- have a floppy device in the configuration, and even though I have since deleted the floppy, the initial config still has that data somehow. I'd delete the /dev/fd0 device, but as I understand it, that is not persistent across reboots, so I have not tried that. Thanks, John
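     A minimal sketch of making a workaround persistent, assuming the stock unRAID go script at /boot/config/go; unloading the floppy driver at each boot should stop the fd0 probes without touching /dev directly:
        #!/bin/bash
        # /boot/config/go -- runs at every unRAID boot
        modprobe -r floppy   # unload the floppy driver so fd0 I/O errors stop
        /usr/local/sbin/emhttp &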
  7. I loaded unMENU 1.6 last weekend. It looks very nice. I'm running unRAID 5.0.5 as a virtual machine on ESXi 5.5, which was a fresh installation about a month ago. One thing I noticed is that if I enter the disk management tab, something is kicking an error into the syslog suggesting I have a floppy drive (fd0). I -did- have a floppy in the default VM configuration. I hate seeing error messages, so I shut down unRAID, went into my VMware VM settings, removed the floppy from the configuration, and restarted the VM. I'm still getting the error message in syslog when I enter the disk_management menu:
     Mar 14 18:26:07 Tower kernel: end_request: I/O error, dev fd0, sector 0
     Any ideas? I'm sure it isn't hurting anything, but I'd rather eliminate any potential problems down the road. Thanks, John
  8. User error on my part. When I went out to the Slackware site I mistakenly grabbed packages for the "current" release. When I re-checked my work and went back to grab packages for the 13.1 release, there was no longer a dependency on Perl. It's working fine now. I've got my Zenoss monitoring system checking unRAID via SNMP. Thanks! John
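     A rough sketch of the install-and-verify steps, assuming the Slackware 13.1 net-snmp package has been copied to the flash drive (the exact package filename is illustrative):
        # Install the package and start the SNMP daemon
        installpkg /boot/packages/net-snmp-5.4.2.1-i486-1.txz
        /usr/sbin/snmpd -c /etc/snmp/snmpd.conf
        # From the monitoring host, confirm it answers
        snmpwalk -v2c -c public tower system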
  9. Thanks for the rundown. I'm getting close. It looks like Perl (or a portion thereof) is required. Here's the message I received:
     /usr/sbin/snmpd: error while loading shared libraries: libperl.so: cannot open shared object file: No such file or directory
     Should I grab the whole Perl Slackware package, or is there a simpler way to just get the required shared library? I don't intend to run many plug-ins, as I'm running unRAID as a VM as it is, and the other VMs will shoulder the heavy lifting. I may end up with some other things that help with unRAID, and if many of those require Perl, maybe I should just bite the bullet. Thanks, John
  10. If you are still plugging away at a plug-in, I'd be interested in giving it a test run (otherwise, I'll just mimic what you've done above). Thanks, John
  11. Thanks for taking the time to write this up. I'll have to give this a shot. Do you have any process in place to backup the unRAID boot USB stick? I also boot ESXi with a USB memory stick, and should come up with a process to back that up too. John
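     A minimal sketch of imaging a boot USB stick with dd from a Linux box, with the host shut down first; /dev/sdX is a placeholder, so check lsblk before running it:
        lsblk                                        # identify the USB stick
        dd if=/dev/sdX of=flash-backup.img bs=1M     # raw image of the whole stick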
  12. Yes, I started with the nfsvers=3 entry and received an error message. I haven't used the NFS mount much yet, but I was able to copy a couple of 1 GB files without problem. I've also not gotten the auto mount stuff to work either. My main issue, it turns out, was not having the correct syntax for the shared drives on unRAID! I do look forward to moving to v6 as soon as it's considered stable. Thanks again, John
  13. I got it to work using fstab! Thanks for your help. Of course I spent the last 30 minutes fighting to get the correct fstab entry...
     # my NAS connection
     unraid:/mnt/user/Media /home/osh/Media nfs defaults,nolock 0 0
     I didn't have 'user' in there, since I hadn't created any users... It didn't work, and it wasn't clear why. Reading through some other support forum notes, I saw other examples with user, and remembered you had shown that as well. Thanks! John
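     A quick way to test a new fstab entry like that without rebooting, using standard mount/df:
        sudo mount -a              # mount everything listed in /etc/fstab
        df -h /home/osh/Media      # confirm the share is attached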
  14. Thanks for the info. It looks like your example does not show a particular user set up on the unRAID server. I think part of my confusion is that most examples I see in the Linux documentation show a user and a password. I haven't added any users to my unRAID server, nor have I added a password to the root account, so your example should be helpful. It's been years since I dabbled with Unix mounts and NFS. I seem to recall that if you don't use automount, you can get into situations where your system will hang for a minute waiting for timeouts if you lose connectivity between the host and the mounted storage. Is that still an issue if you use the straight fstab method? That's why I was starting down the automount path. Perhaps I should at least get the simple config working first. Thanks again. I probably won't get a chance to try this until tonight. John
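     On the timeout question, a hedged sketch of an fstab entry with soft-mount options that bound how long the client blocks if the server goes away; the values are illustrative, and soft mounts trade hangs for possible I/O errors during an outage:
        # timeo is in tenths of a second; bg retries the mount in the background at boot
        unraid:/mnt/user/Media /home/osh/Media nfs defaults,nolock,soft,timeo=50,retrans=3,bg 0 0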
  15. Greetings, unRAID newbie here. Is there a preferred method for persistent unRAID access from a Linux host? I can use the Linux GUI (using CentOS 6.5) and connect to the unRAID server shares, but I'm assuming if I always want the share to be available I should mount it automatically on boot. I assume I should also have some mechanism to deal with reconnecting in case of failures, or reboots or other reasons for disconnections. I've looked at using autofs with SMB or NFS, but I'm not having success. I tried following section 4 of the guide here on SMB access. http://wiki.centos.org/TipsAndTricks/WindowsShares I'm guessing it is a configuration issue. Before I dig deeper, I'd like to make sure I'm getting set up for the most reliable, best performing connection. Thanks for any guidance! John
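     For the autofs route mentioned above, a minimal sketch of an NFS map on CentOS 6; the mount point, map file name, and share are placeholders:
        # /etc/auto.master -- shares appear under /mnt/unraid, unmounted after 60s idle
        /mnt/unraid /etc/auto.unraid --timeout=60
        # /etc/auto.unraid -- one line per share
        Media -fstype=nfs,nolock unraid:/mnt/user/Media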
  16. OK, I've converted all my VMs to use the VMXNET3 network interface. Simple tests seem to show it's working fine! VM OSes: CentOS 6.5, Ubuntu 13.10, Windows 7, and unRAID 5.0.5. Thanks for the input. John
  17. I'm an unRAID newbie. I've just set up an array on ESXi 5.0u3 using a passthrough SATA controller. I'm wondering about network performance, especially to other VMs on the same ESXi server. I built the VM with the E1000 driver. Supposedly the VMXNET3 driver has better performance with less of an impact on resources. Has anyone tested? Any noted performance difference? Any noted stability difference? Thanks, John
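     A simple way to measure VM-to-VM throughput for an E1000 vs. VMXNET3 comparison, assuming iperf is installed on both guests (the address is a placeholder):
        iperf -s                      # on the receiving VM
        iperf -c 192.168.1.50 -t 30   # on the sending VM, 30-second test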
  18. As a new user (and customer!) I really appreciate these blog posts. They are very clear, and give some good insight on decisions for how to use a NAS. John
  19. Thanks for the info. Reading the forums today I learned about installplg and was able to use that to successfully install the VMware tools. John
  20. Yeah, if it's impacting more than the network, it is probably a different problem than the one I was having.
  21. Greetings, Just did my first unRAID install last night. Installed as a VM on ESXi 5.0u3. Seems fine. I tried to install the VMware tools package as described in the base note. On boot, I'm getting the following message on the console as the last message before the "Welcome to Linux 3.9.11p-unRAID (tty1)" message:
     wget: unable to resolve host address 'unraid.zeron.ca '
     If I log in to the unRAID VM, I can ping unraid.zeron.ca just fine. Any ideas? Thanks, John
  22. Josh -- what kind of hangs are you having? Is the ESXi host dropping the network connection after working for about 30-60 seconds? I had this issue, and thanks to Google, read a forum entry that unbelievably fixed my issue. The recommendation was to power down the machine and UNPLUG the server for 5 minutes. The issue is that (apparently) the network can get confused, and power cycling by the front switch isn't enough to clear out all the hardware. Plug the server back in, and boot straight to ESXi. This may not be your issue, but in case it's similar, it is an easy thing to try. I'm using ESXi 5.0U3. John