Posts posted by jschwartz

  1. I have an unRAID NFS share as a datastore; it works fine.

     

    I'm running ESXi 5.5 with unRAID 5 and a ZFS VM to provide datastore storage. I set it up a few years ago, when SSDs were painfully expensive.

     

    Given how cheap SSDs are now, I'm considering dropping the Napp-it install and throwing a couple of biggish SSDs into a cache pool, then using that as an NFS datastore... but I haven't seen anyone on the forums mention doing that successfully (until now).

     

    What are the speed and stability like for your NFS shares? I'm hoping there's no re-emergence of the old 'stale file handle' NFS issue in unRAID 6.

     

    The other option is just to assign the HBA to ESXi instead of ZFS, and use the SSDs directly in ESXi. I just like leaving some of the SSD space in unRAID as cache.
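
    If an NFS cache-pool datastore does work out, adding it should be a one-liner from the ESXi shell. A sketch of what I'd try (untested; the host IP and share path are made up for illustration):

    [pre]# Mount an unRAID NFS export as an ESXi datastore
    esxcli storage nfs add --host=192.168.1.50 --share=/mnt/cache/vmstore --volume-name=unraid-ssd
    # Verify it mounted
    esxcli storage nfs list[/pre]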

  2. Well, I figured out how to get the VMDK to show up, and thought I'd share in case others run into this issue.

     

    My problem was that the VMDK hard drive I created was a SCSI drive. For whatever reason, it isn't presented to unRAID, even though unRAID boots from it fine. I recreated the drive as an IDE drive and it immediately became mountable.

     

    Is this in unRAID 6? I've got unRAID 5 and can see the drive just fine as SCSI on ESXi 5.5. I'm wondering if I would need to recreate it as IDE for 6 when I upgrade. (I also upgrade over a share, which is terribly convenient; I'd hate to give it up.)
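
    For anyone comparing notes, the difference should just be which keys attach the disk in the VM's .vmx file: the SCSI version uses scsi0:0 entries, and switching to ide0:0 entries is what makes it mountable. Roughly (the disk filename here is illustrative):

    [pre]ide0:0.present = "TRUE"
    ide0:0.fileName = "unraid-data.vmdk"[/pre]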

  3. My ZFS VM is outgrowing its britches (upgrading drives in quads is a pain, and I want that 6 GB of RAM back!), so I was considering using mirrored disks in a BTRFS cache pool (some SSDs, some spinners) as datastores for other VMs, shared via NFS mount on unRAID. I also keep my photos on the ZFS filesystem (faster speed, bitrot prevention). This data would not be copied to the array, just left on the cache pool drives as storage. I currently do this via a napp-it/ZFS VM, but I'd love to 'downsize' and use unRAID for all my storage.

     

    Does anyone have any experience with NFS shares in unRAID 6? I tried this in unRAID 5, and the NFS shares proved to be a bit flaky at times (reconnect issues, stale file handles, etc.).
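
    (For what it's worth, the stale-handle workaround I've seen suggested is pinning a fixed fsid on the export, so the file handle survives a remount. In generic Linux /etc/exports syntax, with a made-up share name, that would look something like the following.)

    [pre]/mnt/user/vmstore *(rw,sync,fsid=100,no_subtree_check)[/pre]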

     

    Also, how does the speed look for BTRFS cache pools? Anyone have any benchmarks with SSDs vs spinners? (I didn't see any posted of late.)
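
    If nobody has numbers handy, even a quick dd against the pool would be a useful data point. Something like this (path is illustrative; oflag=direct bypasses the page cache so the pool itself gets measured):

    [pre]dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=4096 oflag=direct
    rm /mnt/cache/ddtest[/pre]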

     

    I'm still on unRAID 5, but as the 6 RCs start trickling out, it seems like a good time to investigate!

  4. It's the Intel RES2SV240. I use it as 2 in / 4 out for essentially full-speed spinner bandwidth (since the 2308 is PCIe 3.0 at 8 lanes, if my math is correct, I wouldn't be limiting any drive's speed even on a full parity sync). The 8 other bays in the case are piped through the M1015 to napp-it, with SSDs hanging off the motherboard SATA ports for VMs.
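
    The back-of-the-envelope math, in case anyone wants to check it (the ~200 MB/s peak per spinner is my assumption):

    [pre]Uplink: 2 x4 SAS2 ports = 8 lanes x 6 Gb/s = 48 Gb/s raw = ~4800 MB/s usable (8b/10b)
    Drives: 4 x4 ports = 16 bays x ~200 MB/s = ~3200 MB/s worst case (parity sync)
    HBA:    PCIe 3.0 x8 = ~7880 MB/s, so the SAS uplink is the ceiling, and 3200 < 4800[/pre]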

     

    The last firmware release for the expander was July of last year; the card came with the latest.

     

    I haven't identified the culprit yet, since I don't have an extra reverse breakout cable or backplane. Everything seems to be working perfectly with the motherboard, though; the rebuild rate was intensely fast.

  5. Moved everything over, and it recognized all my 3TB drives, including the WD Reds.

     

    I am running into some strange syslog messages that I haven't seen before in my travails. If anyone has any thoughts, I'm all ears. I've got 10 drives + parity hooked up to an Intel expander, which is attached to a pair of reverse breakout cables from the LSI 2308 on the MB. (Running an unRAID 5.0.2 setup in a VM on ESXi.) I've also got a ZFS setup hanging off an M1015 in another VM.

     

    [pre]Nov 22 22:16:08 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:16:11 tower last message repeated 2 times
    Nov 22 22:16:26 tower kernel: mpt2sas0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)
    Nov 22 22:16:31 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
    Nov 22 22:16:33 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:16:34 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:16:38 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
    Nov 22 22:16:40 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:17:01 tower last message repeated 12 times
    Nov 22 22:17:03 tower kernel: mpt2sas0: log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)
    Nov 22 22:17:05 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:17:17 tower last message repeated 7 times
    Nov 22 22:17:19 tower kernel: mpt2sas0: log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)
    Nov 22 22:17:23 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
    Nov 22 22:17:24 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:17:27 tower last message repeated 2 times
    Nov 22 22:17:42 tower kernel: mpt2sas0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)
    Nov 22 22:17:44 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:18:14 tower last message repeated 16 times
    Nov 22 22:18:18 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
    Nov 22 22:18:23 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
    Nov 22 22:18:24 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:18:33 tower last message repeated 3 times
    Nov 22 22:18:34 tower kernel: mpt2sas0: log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)
    Nov 22 22:18:37 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:18:40 tower last message repeated 2 times
    Nov 22 22:18:47 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
    Nov 22 22:18:48 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:19:00 tower last message repeated 6 times
    Nov 22 22:19:05 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
    Nov 22 22:19:07 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:19:36 tower last message repeated 14 times
    Nov 22 22:19:57 tower kernel: sd 0:0:11:0: attempting task abort! scmd(f244fd80)
    Nov 22 22:19:57 tower kernel: sd 0:0:11:0: [sdm] CDB:
    Nov 22 22:19:57 tower kernel: cdb[0]=0x85: 85 08 0e 00 00 00 01 00 00 00 00 00 00 00 ec 00
    Nov 22 22:19:57 tower kernel: scsi target0:0:11: handle(0x0014), sas_address(0x5001e677b9e4dff7), phy(23)
    Nov 22 22:19:57 tower kernel: scsi target0:0:11: enclosure_logical_id(0x5001e677b9e4dfff), slot(23)
    Nov 22 22:19:58 tower kernel: sd 0:0:11:0: task abort: SUCCESS scmd(f244fd80)
    Nov 22 22:20:00 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
    Nov 22 22:20:11 tower last message repeated 4 times
    Nov 22 22:21:40 tower kernel: mpt2sas0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)
    Nov 22 22:21:42 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)[/pre]
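
    The task abort points at sdm (scsi target 0:0:11), so that's the drive I'll check first. Assuming smartctl is available on the box, a quick health check would be:

    [pre]smartctl -a /dev/sdm          # full SMART report
    smartctl -t long /dev/sdm     # kick off an extended self-test[/pre]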

  6. I've got the X10SL7-F booting to ESXi, and have been testing it / setting it up for a few weeks. It's been working with my test drives up to 2TB (I don't have any larger spares lying around).

     

    My 'big move' of all my drives from the old build to the virtualized build should happen tonight. I've got a few of the WD Red 3TBs, and will post success/failure after I put them in. (Though I'm a bit nervous now, especially since I've got an Intel expander between the 2308 and the drives!)

     

    I do have my 2308 flashed to the P16 IT firmware.
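
    If anyone wants to compare versions, LSI's sas2flash utility will report the firmware and BIOS revisions:

    [pre]sas2flash -listall    # enumerate LSI SAS2 controllers
    sas2flash -list       # firmware/BIOS details for the selected controller[/pre]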

  7. I just got my ESXi box up and running, and am running into this issue with a Debian VM sending files to unRAID. (unRAID is still on a separate box; the Debian VM uses an OmniOS/ZFS VM as working storage.)

     

    Was there a resolution in the release build, and/or any way to fix this other than unmounting/remounting the share? Or should I just mount it as Samba and not fight the fight?

    (No cache drive; this is over a user share.)
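
    For reference, this is roughly how I'm mounting from the Debian VM, with the Samba fallback as the second line (hostname and share name are from my own setup):

    [pre]mount -t nfs tower:/mnt/user/media /mnt/media
    mount -t cifs //tower/media /mnt/media -o guest[/pre]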

  8. Finally ready to retire my 6+ year old Asus P5B-VM DO with a new setup, and I wanted to try my hand at virtualization, running (1) unRAID, (2) Plex / sab / etc., (3) IPCop or pfSense, (4) Minecraft, and (5) WinServer 2012.

     

    My proposed setup would bring my 13 (so far) drives from my current setup over to the new build.

     

    Case: Norco 4224

    MB: Supermicro X10SLH-F http://www.newegg.com/Product/Product.aspx?Item=N82E16813182822

    Ram: 16GB Samsung ECC Unbuffered http://www.superbiiz.com/desc.php?name=D38GE1600S

    CPU: Intel Xeon (Haswell) E3-1240v3

    HBA: 2 x M1015

    PSU: Corsair AX760

     

    If I expand my unRAID pool past 16 drives, I would look at the Intel expander. I've done the math, and it seems like going M1015 => 8 drives and M1015 => Intel => 16 drives (2 in / 4 out on the Intel) would all stay ahead of the theoretical max of spinning disks. (Fairly certain I've just talked myself into starting with one M1015 => Intel, unless someone sees an advantage I don't.)

     

    The part I'm really not sure about is the datastore for ESXi. Can I just mirror two drives (or two sets of two) on the motherboard SATA ports? I have an AOC-SASLP-MV8 from the old system I can pull if necessary (if the MB ports are not usable), but I'd rather not.

     

    Thoughts? Thanks!

  9. Transferred some data to the drive (the 400GB Hitachi).

     

    I watched 5 different DVD streams on computers all around the house at the same time, from the same drive, without a hiccup  :)  Apparently the noapic and acpi=off don't seem to be hampering performance too badly!

     

    For anyone else with this Hitachi drive: apparently some recent kernel changes in the 2.6.x line broke this particular drive (it is not fully SATA compliant... doesn't it figure). The fixes are supposed to be worked into the next kernel release (fingers crossed).

     

     

  10. I configured the system with "acpi=off noapic", and it works perfectly now (as far as I can tell).
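
    For anyone else who needs it, the boot stanza in syslinux.cfg ends up looking roughly like this (a sketch based on the stock unRAID entry; the extra parameters go before initrd):

    [pre]label unRAID OS
      kernel bzimage
      append acpi=off noapic initrd=bzroot[/pre]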

     

    I am assuming that the ACPI problem is related to this being a new board, with only two BIOS updates so far, and that it will be fixed in time. It wasn't necessary on the Asus board. It does not appear to be affecting performance at all, however.

     

    I am wondering if anyone can fathom why I would need to use either noapic or nolapic for this Hitachi 7K400 400GB drive to boot and be recognized in Linux reliably. This is the case on both motherboards I tried (Asus P5B-VM DO and Intel DQ35MP), which were even based on different chipsets. The WD drives worked fine without the boot directives. Any ideas?

     

    I guess I can try a live CD or two; perhaps it's the kernel version unRAID is using?

  11. Thanks for the info.

     

    I actually picked up another 750GB drive, and it had no problem. I tested the original drive in WD's DLG Tools, and it ended up failing miserably (after passing the first time). It was a drive failure after all.

     

    As an FYI: while testing, I 'borrowed' an MB that I intended to use for another project, to remove that from the list of culprits, and it worked great. An Intel DQ35MP (recently released): 6 internal SATA ports, an Intel gigabit controller, and port multiplier support (the new ICH9DO chipset)... EXCEPT I had to disable ACPI in the syslinux config file for the system to boot, and had to format the USB drive as a zip drive. Bizarre.

     

    Equally bizarre: I had Xfer mode error messages on my 400GB Hitachi drive (7K400) on every MB I tried unless I passed the nolapic directive at boot. Otherwise it would be recognized by Linux only half the time, and would eventually drop out. Any ideas on that one? If I enable lapic and add the drive back, any ideas on the system performance hit? (I was getting reads of close to 95MB/s, and parity syncs of up to 65MB/s, with the twin WD7500AAKS drives.)

     

    Thanks!

  12. I set up the foundation for a new media server (P5B-VM DO // 2GB Crucial RAM // 750GB WD HD // 400GB Hitachi HD // 1GB USB stick).

     

    It boots unRAID, recognizes both drives, and allows me to select the devices to use as parity and data...

     

    I start a parity sync, and then every time it nears completion, I receive a string of error messages...

     

    (Interestingly enough, I receive 288 write errors EVERY time. I remember someone else complaining about exactly 288 errors in another thread, so I'd be suspicious of calling it coincidence.)

     

    I have tried multiple different cables, and even different SATA ports on the same board.

     

    I know the RAM is good (pulled from another machine), the MB is good (I have two, and both boards do the same thing), the Hitachi is good (also a pull), and the PSU is a high-quality unit; it powers another system in the house without glitches.

     

    These are the error messages:

     

    [pre]Sep 30 05:34:40 Tower kernel: [12365.858028] md0: write error!
    Sep 30 05:34:40 Tower kernel: [12365.858030] handle_stripe write error: 1203849944/0, count: 1
    Sep 30 05:34:40 Tower kernel: [12365.858032] md0: write error!
    Sep 30 05:34:40 Tower kernel: [12365.858034] handle_stripe write error: 1203849952/0, count: 1[/pre]

     

    The only unknown is the WD 750GB, as it is new. Any thoughts?
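
    In the meantime, I'll run the long SMART self-test against the new drive (assuming smartctl is on the box; the device name below is just an example, so check which letter the drive actually got):

    [pre]smartctl -t long /dev/sdb      # extended offline self-test
    smartctl -l selftest /dev/sdb  # check the result once it finishes[/pre]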

     

  13. The new wave of Intel 775 MBs is out, and some are sporting 8 SATA ports on-board (using the ICH9R).

     

    They also support port multipliers, meaning you could hang all 14 drives right off the MB!

     

    Is this chipset supported in the Linux 2.6 kernel that unRAID 4.0 is based on?

     

    (I'm in the final pre-purchase phase for my server, so a big thanks for the help!)

     


  14. The plan is to integrate one once I pass the 6-drive mark, but browsing on Newegg, there are a couple of PCI Express x4 4-port SATA II controllers under $200. I don't know if they are compatible with unRAID, but I should probably check :)

     

    SoNNeT TSATAII-E4i PCI Express x4 SATA II Controller Card RAID 0/10 was $169.99

    http://www.newegg.com/Product/Product.aspx?Item=N82E16816122014

     

    and

     

    HighPoint RocketRAID 2310 PCI Express x4 (x8 and x16 slot compatible) SATA II Controller Card was about $129.99 IIRC

    http://www.newegg.com/Product/Product.aspx?Item=N82E16816115027

  15. I am planning a home media server, and this product REALLY looks promising. In consideration of going 'whole hog', I think a Stacker case, IcyDock trays, and a bunch of WD5000AAKS 500GB drives would be my jumping-off point. I would pair this with a Celeron D (Cedar Mill, or Conroe-L if it gets released by then).

     

    My main concern is the motherboard. In my research, it looks like the ICH8R is by far the fastest south bridge to get on an MB. Add to that the (eventual) need for SATA add-in cards, and I was considering an MB with at least two x4 PCI Express slots (6 standard SATA + 2 PCI Express cards with 4 ports each = a total of 14 drives... seems like a nice number  ;) ). It is my understanding that PCI Express is point-to-point, rather than a shared bus à la PCI, so it would be preferable. Are all my assumptions correct up to now? (I am still a noob on the storage side of performance.)
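
    My rough numbers for the shared-bus question (the ~75 MB/s sustained per drive is my own estimate for current drives):

    [pre]PCI (32-bit / 33 MHz): ~133 MB/s shared across every card on the bus
    PCIe 1.x x4 slot:      ~1000 MB/s dedicated per slot (250 MB/s per lane)
    4 drives per card:     4 x ~75 MB/s = ~300 MB/s, well under the x4 ceiling[/pre]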

     

    So my question: the cheapest two boards I could find (on the Egg) that fit the bill were the following. (They both seem to have GigE on PCI-E also.)

     

    MSI P965 Platinum http://www.newegg.com/Product/Product.aspx?Item=N82E16813130055

    LAN Controller: Realtek RTL8111B

     

    ABIT AB9 QuadGT http://www.newegg.com/Product/Product.aspx?Item=N82E16813127019

    LAN Controller: Realtek RTL8810SC

     

    Does unRAID support those LAN controllers? Do these MBs look like a go?

     

    Any other (potential) issues I have not considered?

     

    Thanks!