SK

Everything posted by SK

  1. FYI - I created a topic in the Roadmap part of the forum asking to improve stock unRAID ESXi support in future versions. Please add your voice there to highlight the need and improve the visibility of those issues. http://lime-technology.com/forum/index.php?topic=10669.0
  2. well, it doesn't take long before the first hacking attempt on any device connected to the internet. A few NAT-forwarded ports opened in my home firewall regularly get connection/hacking attempts, the FTP server is constantly bombarded with login tries, and even the standard UDP ports for my VoIP device were being regularly probed from Chinese provider IPs last time I checked. Also, if a new IP gets assigned by the provider - who knows how it was used before..
     # md5sum unraid_4.6-vm_1.0.6.iso
     82962143aac52908d34052fd74d73cf1  unraid_4.6-vm_1.0.6.iso
     Btw for those interested - my unRAID VM reports 111,341 KB/sec during a parity check with 3 x 1TB Seagate 7200.12 drives.
  3. For drive spindown to work it must be supported by the controller (such as LSI) in the first place. For instance, the LSI1068-based cards I dealt with had no correct spindown support. To clarify a few things: 1) the patched version (the unRAID-VM distro) is intended only for unRAID in a VMware ESXi VM; the list of deviations from the original is in the README file. The virtualization layer between the hardware and the unRAID distro brings challenges that are partially addressed by the patch to the open-sourced unRAID driver. Other challenges are in the management interface and need to be addressed by Lime, given the closed source of the unRAID management code. 2) there is no additional code to support controllers that the standard unRAID version does not support, other than additional handling of drive discovery. The patched version is also based on a newer Linux kernel. 3) Running the unRAID-VM distro on a physical server should be OK, but if the controller doesn't handle spindown/spinup correctly then spindown needs to be disabled.
  4. NFS client reboot is fine as long as it remounts when it gets back online; an NFS server reboot will likely require a remount on the client(s) side to avoid stale mount points. In my single-server config I keep logs and backups on an NFS unRAID datastore, and the backup script is cron-scheduled on ESXi, which runs off compact flash. Well, I changed from the persistent connection to letting ghettoVCB mount and unmount the NFS connection. I saw errors in the system log about the onboard Realtek NIC, so I decided to disable it in the BIOS. After some rebooting and fiddling all network connections were lost, and based on help from the ESXi forum I ended up resetting the configuration and had to set up the networking from scratch and add the VMs back to the inventory. Grumble. EDIT: 1/2/2011 I've changed to doing non-persistent NFS connections with ghettoVCB and that's working perfectly. Also, in case you haven't tried the Veeam Monitor application, I'd just encourage you to try it out. It's amazing. thanks - will give the Veeam Monitor free version a try, the preso on their site looks promising
  5. NFS client reboot is fine as long as it remounts when it gets back online; an NFS server reboot will likely require a remount on the client(s) side to avoid stale mount points. In my single-server config I keep logs and backups on an NFS unRAID datastore, and the backup script is cron-scheduled on ESXi, which runs off compact flash.
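     For the record, the scheduling itself is just one line in the ESXi busybox crontab (/var/spool/cron/crontabs/root, if I remember the path right); the paths and times below are only examples, not my actual layout, and on ESXi the crontab edit usually has to be re-added from a startup script after a host reboot:
     0 2 * * 6 /vmfs/volumes/datastore1/scripts/ghettoVCB.sh -f /vmfs/volumes/datastore1/scripts/vms_to_backup > /tmp/ghettoVCB-cron.log 2>&1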
  6. just the ghettoVCB settings related to the non-persistent NFS connection; when enabled (ENABLE_NON_PERSISTENT_NFS, etc.) it will set up the NFS datastore on the fly just for the duration of the backup
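     Roughly the settings I mean in ghettoVCB.sh (variable names may differ a bit between script versions; the server/share values are just examples, not my actual setup):
     ENABLE_NON_PERSISTENT_NFS=1
     UNMOUNT_NFS=1
     NFS_SERVER=192.168.1.10
     NFS_MOUNT=/mnt/user/vmbackup
     NFS_LOCAL_NAME=backup
     NFS_VM_BACKUP_DIR=vm-backups
     With that set, the script mounts the unRAID share as a datastore named "backup" at the start of the run and unmounts it again when the backups finish.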
  7. I'm getting set up with this script now also. My ESXi server and unRAID server are separate, however. Currently I have ESXi create an NFS datastore that points to one of my unRAID shares, and that is working. But I'm not sure how to get the email part working. The ghettoVCB script doesn't take parameters for SMTP authentication; it uses nc internally and I don't know if that can handle auth either. Any suggestions or help would sure be appreciated. I have this working on a single physical server: during the backup window the ESXi host mounts the NFS backup share from the unRAID VM as a datastore, performs a hot backup (by taking a VM snapshot and then cloning) and then unmounts it. There is some issue with enabling the experimental gzip-based compression - it did not work for me on large VMs, but it works fine without compression. I have not looked into email notification though, but I don't think netcat can handle SMTP authentication. Perhaps using a local SMTP proxy on another VM/host that doesn't require authentication could help with this, something like sSMTP - http://wiki.debian.org/sSMTP
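     If someone goes that route, the relay side is tiny - something along these lines in /etc/ssmtp/ssmtp.conf on the helper VM (server name and credentials are placeholders, of course), and then point ghettoVCB's email server setting at that VM instead of your real SMTP host:
     root=backup@example.com
     mailhub=smtp.example.com:587
     AuthUser=backup@example.com
     AuthPass=secret
     UseSTARTTLS=YES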
  8. FYI - under ESXi, for VM backups I started using the script from http://communities.vmware.com/docs/DOC-8760, which is pretty smart and skips physical RDMs (in case someone wants to back up the unRAID VM). It even supports gzip compression for small VMs
  9. A fresh version of the unRAID-VM ISO, based on unRAID 4.6 and the stable 2.6.35.9 kernel, is available at http://www.mediafire.com/?2710vppr8ne43 - check the README for details
  10. the error means that the LSI card doesn't support ATA spindown; if sg_start (from sg3_utils) works and really puts those drives into standby mode then my mod may help.
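      A quick way to check (sg3_utils plus hdparm; /dev/sdb is just an example) - put the drive into standby, read the power state back, then wake it:
      # sg_start -v --stop --pc 3 /dev/sdb
      # hdparm -C /dev/sdb
      # sg_start -v --start --pc 1 /dev/sdb
      If hdparm -C reports "standby" (and it can talk to the drive through the card at all), the standby command really made it to the disk.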
  11. ...the AOC-SASLP-MV8 is useless for an ESXi environment (no native support in ESXi and VMDirectPath not working). Since that image is tuned for ESXi, it comes only natural that these kinds of features/modules are being removed. ..my 2 cents. I believe the 2nd version was intended to be run outside ESXi, since it looks to have been built specifically to address problems with running the older supplied version directly on the hardware. either outside of ESXi, or under ESXi with VMDirectPath I/O for the LSI1068E controller
  12. I downloaded the second version as I have an LSI1068E-based controller, but this one removes the support for "mvsas" (the AOC-SASLP-MV8 cards) so it's a no-go for me at this point. I have 6 HDs on the motherboard, 3 HDs on the AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD not in the array for testing, and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing. both versions are the same except for the unRAID driver; the second ISO disables the extended spindown code for the BR10i LSI1068E, which does not conform to the T10 standard. As far as mvsas - the only relevant change was the upgrade of the Fusion MPT driver from 3.04.15 (Linux kernel stock) to the latest 4.24.00.00 (from the LSI site), which compiles and works under my ESXi configuration without issues (when using the virtualized LSI SAS controller with SATA physical RDM disks). Having a look at the logs would help; if this is an issue with the updated Fusion driver we can certainly go back to the previous one. Given that the 4.6 stable release is out, I need to update to it anyway..
  13. An update for ESXi users (who use physical RDM disks), with VMware Tools installed (which doubles the distro size) and some minor improvements, is available at http://www.mediafire.com/?2710vppr8ne43 Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spindown disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k
  14. The "device not ready" errors are the same issue jamerson9 experienced and are caused by the BR10i (based on the LSI 1068E chip) not correctly supporting the part of the T10 SAT-2 standard that deals with drive spin down. As you noticed, only the devices managed by the BR10i (sdh to sdo) have those errors; the ones on the internal SATA do not. I wonder if the LSI1068E chip supports spin down correctly on any card at all - so far I have not seen such evidence.
  15. I think the issue with that is the drives aren't exposed to unRAID inside the VM. In ESXi, drives can be exposed to a VM either using physical RDM or controller pass-through (which requires the relevant supporting hardware with certain CPU features, etc.). In the first case not all features may be available, such as temperature and spin down. As far as a VM of unRAID running on a full Slackware distro - my preference is to keep the unRAID VM as small as possible, have it do just the storage piece, and leave other functionality to other VMs running on better-suited OS distros such as Ubuntu, CentOS, Windows and so on.
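      For reference, the physical RDM mapping files are created from the ESXi console roughly like this (the device identifier and paths are just examples - use your own), and the resulting .vmdk is then attached to the unRAID VM on the virtual LSI SAS or PVSCSI controller:
      # vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST31000528AS_____XXXXXXXX /vmfs/volumes/datastore1/unraid/disk1-rdmp.vmdk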
  16. Any advantages with PVSCSI over LSI? I wonder, if unRAID were to officially support drives in ESXi, which they would most likely support first. generally PVSCSI gives better performance and lower CPU utilization compared to LSI SAS, especially with I/O-intensive VMs. In vSphere 4.1 VMware fixed the bug where latency was slightly higher for PVSCSI under relatively light I/O (< 2K IOPS). But that is for enterprise loads; for home system loads it's probably not significant. For those interested - http://www.thelowercasew.com/more-vsphere-4-1-enhancements-welcome-back-pvscsi-driver For unRAID to support PVSCSI is just a matter of enabling another module during the kernel build process (see below). The bigger question is for unRAID to _fully_ support VMware ESXi.
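      Concretely, on the 2.6.3x kernels it comes down to one option in the kernel .config when building (option name as I recall it from the tree I use; double-check against your kernel version):
      CONFIG_VMWARE_PVSCSI=m
      That builds the vmw_pvscsi module, which the unRAID VM loads when its virtual disks sit on a paravirtualized SCSI controller.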
  17. I used ESXi with the LSI controller before, but now I've switched to the PVSCSI (paravirtualized SCSI) adapter - that requires the relevant kernel module, which is not included in stock unRAID (the ISO I posted above has it). As far as spinup/spindown - there are no errors in the logs, but I still need to physically check whether the drives actually get spun down as reported in the logs. btw my mobo, a GIGABYTE GA-MA780G-UD3H, also has the Realtek 8111C but so far I have not experienced any network drops.
  18. Getting disk errors with this version. Disk1 is the SATA (sdc) drive on the LSI controller.
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] Device not ready
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] Sense Key : 0x2 [current]
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] ASC=0x4 ASCQ=0x2
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 00 00 00 4f 00 00 08 00
      Dec 1 14:27:14 Tower kernel: end_request: I/O error, dev sdc, sector 79
      Dec 1 14:27:14 Tower kernel: md: disk1 read error
      Dec 1 14:27:14 Tower kernel: handle_stripe read error: 16/1, count: 1
      Dec 1 14:27:14 Tower kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
      Dec 1 14:27:14 Tower kernel: REISERFS (device md2): found reiserfs format "3.6" with standard journal
      Dec 1 14:27:14 Tower kernel: REISERFS (device md2): using ordered data mode
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] Device not ready
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] Sense Key : 0x2 [current]
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] ASC=0x4 ASCQ=0x2
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] CDB: cdb[0]=0x2a: 2a 00 00 00 00 4f 00 00 08 00
      Dec 1 14:27:14 Tower kernel: end_request: I/O error, dev sdc, sector 79
      Dec 1 14:27:14 Tower kernel: md: disk1 write error
      Dec 1 14:27:14 Tower kernel: handle_stripe write error: 16/1, count: 1
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] Device not ready
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] Result: hostbyte=0x00 driverbyte=0x08
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] Sense Key : 0x2 [current]
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] ASC=0x4 ASCQ=0x2
      Dec 1 14:27:14 Tower kernel: sd 1:0:1:0: [sdc] CDB: cdb[0]=0x28: 28 00 00 00 00 bf 00 00 08 00
      Dec 1 14:27:14 Tower kernel: end_request: I/O error, dev sdc, sector 191
      Dec 1 14:27:14 Tower kernel: md: disk1 read error
      Dec 1 14:27:14 Tower kernel: md: recovery thread woken up ...
      Dec 1 14:27:14 Tower kernel: handle_stripe read error: 128/1, count: 1
      Went back to the previous version and the errors went away. Attaching full syslog. jamerson9 - check your private messages (for a debug version that will help pin down the issue on a physical box with an LSI card) EDIT: Under ESXi with physical RDM drives I have no such problems..
  19. An update (with better handling of drive spinup/spindown) can be found at http://www.mediafire.com/?2710vppr8ne43 EDIT: It is intended only for ESXi with SATA drives presented to the unRAID VM using physical RDM. It has a known issue with the LSI1068E (device not ready errors) when used in a non-VM configuration or with VMDirectPath for the LSI1068E controller
  20. If you do happen to patch this into the 5.0bX's, please post it, as I would like to give it a shot with the latest beta. Also SK, if you could set this up with the newest 4.6, it'd be great.. well, as soon as there is a final version of 4.6, which seems to be very soon, I will patch it. For 5.0bx I need to check how emhttp has been modified to make sure there are no issues..
  21. root@Tower:/boot/boot/sg3# sg_start -v --stop --pc 3 /dev/sdb
      Start stop unit command: 1b 00 00 00 30 00
      root@Tower:/boot/boot/sg3# sg_start -v --start --pc 1 /dev/sdb
      Start stop unit command: 1b 00 00 00 10 00
      root@Tower:/boot/boot/sg3# sg_start -v --stop /dev/sdb
      Start stop unit command: 1b 00 00 00 00 00
      root@Tower:/boot/boot/sg3# sg_start -v --start /dev/sdb
      Start stop unit command: 1b 00 00 00 01 00
      Again thanks for all your work. Thanks jamerson9 for the quick response. It seems the LSI does support the power condition (and does not complain the way VMware does). I have almost finished the driver modifications, which should work in both the virtual ESXi environment and with a physical LSI card (or pass-through). The logic is simple - try unRAID's original ATA spindown/spinup commands; if those are not supported, try SCSI START/STOP with the power modifier (for happy LSI owners); and if that is not supported either, do it without the power modifier (for happy ESXi users).
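      The same fallback order, sketched as a userspace shell script just for illustration (the actual change lives inside the md/unraid driver; hdparm and sg_start here only stand in for the in-kernel commands, and /dev/sdb is an example):
      #!/bin/sh
      # spindown-test.sh - try the spindown methods in the order the patched driver uses
      DEV=${1:-/dev/sdb}
      # 1) native ATA STANDBY IMMEDIATE (the original unRAID ATA spindown path)
      hdparm -y "$DEV" && exit 0
      # 2) SCSI START STOP UNIT with power condition STANDBY (works on a compliant LSI SAT layer)
      sg_start --stop --pc 3 "$DEV" && exit 0
      # 3) plain SCSI stop, no power condition (what ESXi's SATL accepts for physical RDMs)
      sg_start --stop "$DEV"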
  22. The great thing about going the ESXi route is the choice to isolate the unRAID VM to do just storage and have other VMs (with other OSes/Linux distros than just the quite limited Slackware) doing other things. I have 4 to 5 VMs running all the time on a single box with 4GB of memory and may add a couple more if needed (surprisingly I have not run out of memory yet). Imho having the needed services running on usable Linux distros such as Ubuntu/CentOS is priceless. Also I have not experienced significant performance issues to date with my setup. It would be good to have official support for sure, but it's doubtful that will come soon, as it requires quite a few changes in code - in both the driver and the closed management piece.. What I recently found in my ESXi configuration (with physical RDMs) is that controlling drive power (switching to standby and back to active mode) does not work - the SATL (SCSI to ATA translation layer) does not implement the specific ATA pass-through (used by hdparm) or SCSI START/STOP with a power condition. I believe this is a general VMware issue (or feature?). The question is whether this works on a physical box with a real LSI controller - if it does, it may be worth implementing in the driver. So if someone with a physical box/LSI card can install sg3_utils, run the following and post a reply, it would be really helpful.
      # sg_start -v --stop --pc 3 /dev/sdN
      # sg_start -v --start --pc 1 /dev/sdN
      What does work in ESXi is the SCSI START/STOP command without a power condition (like the following commands), and VMware does a good job of spinning the drive up when access is required (which btw differs from what the SAT-2 standard dictates).
      # sg_start -v --stop /dev/sdN
      # sg_start -v --start /dev/sdN
      So I need to figure out what logic to put into the spinup/spindown piece of the driver to handle both the ESXi and real hardware cases (if that is doable, of course). I also had some progress with VMware Tools, whose installer is quite incompatible with the unRAID Slackware distro; with a workaround I have it installed in my Slackware dev VM and now need to get it working for the prod unRAID VM. Answering bcbgboy13/jamerson9 - I know that the LSI 1068E is not that expensive, but I really don't feel the need for it for my little home config any time soon..
  23. I finally got a chance to look at the standards specs - the "device not ready" error is due to the drive spin down command issued previously by unRAID/emhttp. It actually puts the drive into stopped mode, where it refuses media access commands (such as reads) unless started again. I need to change it to go into standby mode instead, so that media access will spin the drive up automatically - I will make the change in a day or two. Interestingly enough, I don't have this error under VMware - perhaps the virtual layer masks the physical behaviour.
  24. The different workstreams are hard to follow in this thread. Here's what I understood so far: what user SK is approaching with the fix is that unRAID will properly recognise the virtual disks which are passed via RDM from ESXi. So in this scenario the controller and disks are physically managed by ESXi and unRAID accesses them via the virtualized drivers. What user nojstevens did was use the PCIe controller in passthrough mode of ESXi, which gives unRAID real access to that piece of hardware and lets it handle the real disks. It seems there are several usage profiles whose issues the patch I made may help with: 1. unRAID working under VMware ESX/i with SATA drives presented as physical RDMs to the unRAID VM using the virtual LSI SAS (or paravirtualized) controller; 2. unRAID working under VMware ESX/i with a PCI disk controller (LSI cards?) in passthrough mode; 3. unRAID working on a physical server with LSI cards that unRAID has issues with. While I really focus on #1, since it is my home configuration, it is good to know that the #2 and #3 issues may get resolved too.
  25. right, it is missing since I applied the patch to the Linux kernel source tree after copying the unRAID drivers over. I will include it in the next rev of the ISO, where I hope to get VMware Tools integrated. But for now it's uploaded to the same directory.