OChrisJonesO

Everything posted by OChrisJonesO

  1. And does this only apply to the unRAID boot drive, or to all connected USB devices?
  2. How do I install this plugin? I don't see a link.
  3. Yeah, my bad, that was a typo; I've edited my comment. Everything's working swell now, and I'm seeing transfer speeds between 500-750 MB/s. I think I'd need to be reading and writing to/from a RAM disk to go any faster haha
  4. Thank you all for the help, I got this resolved. For anyone who wants to do a similar setup, here's what I ultimately did:
     Main 1GbE network - still at 192.168.1.x
     unRAID 10GbE port 1 - 192.168.2.1
     unRAID 10GbE port 2 - 192.168.3.1
     Windows PC 1 10GbE - 192.168.2.2
     Windows PC 2 10GbE - 192.168.3.2
     Then edit the hosts file (C:\Windows\System32\drivers\etc\hosts) on each Windows PC to force all traffic to the server through the 10GbE link; otherwise Windows may prefer the 1GbE route. Do it like so:
     Windows PC 1 hosts file - add: 192.168.2.1 <HOSTNAME OF SERVER>
     Windows PC 2 hosts file - add: 192.168.3.1 <HOSTNAME OF SERVER>
     (See the hostname-resolution sketch after this list for a quick way to confirm the override took effect.)
  5. Basically trying to achieve what is done in this video ^^ with the only exception being that rather than going from one Windows PC to another Windows PC, I want to go from one Windows PC to one unRAID server. My configuration looks like this:
     unRAID server: NetApp Chelsio T320 dual-port 10GbE NIC
     Windows PC 1: Mellanox ConnectX-2 single-port 10GbE NIC
     Windows PC 2: Mellanox ConnectX-2 single-port 10GbE NIC
     Connected via SFP direct attach copper cables: unRAID --> Windows PC 1 and unRAID --> Windows PC 2. Pretty basic, direct connect, no expensive 10GbE switches. Running the latest unRAID 6.3.2, which recently added support for the NetApp Chelsio NIC I'm using. However, I have two problems at the moment.
     First, on <server>/Settings/NetworkSettings it only lists one of my two 10GbE interfaces - I'm wondering if this is a bug in the webUI? I have 11 network interfaces in total, eth0-eth10, and only eth9 (the 10th) gets the full listing where I can specify settings. See the screenshot below, and notice how it jumps from Interface eth9 straight to the interface rules (which does list eth10). With that issue, I can't specify network settings for one of my ports at the moment, so one of my two PCs will go unconnected. Not the end of the world for now...
     Second, on the interface I can configure, I've applied the settings above. The rest of my network (router, wifi, switch, and all 1GbE devices) lives on 192.168.1.x, so I've manually assigned 192.168.0.0 as the IP address for the 10GbE network I'm trying to set up. On Windows, I've done the same thing. Ideally this 10GbE network would look like this:
     192.168.0.0 - unRAID 10GbE port 1
     192.168.0.1 - unRAID 10GbE port 2 (can't configure yet due to the bug discussed above)
     192.168.0.2 - Windows PC 1 10GbE port
     192.168.0.3 - Windows PC 2 10GbE port
     When I try to ping 192.168.0.0 (unRAID port 1) from Windows PC 1, manually specifying the source address, I get the error shown, even with Windows Firewall completely disabled. I'm also unable to ping from the unRAID terminal to the Windows PC. What am I missing here? I'm no networking pro and this is my first time dabbling with 10GbE and with a direct connection (no switch/router). (There's a subnet sanity-check sketch after this list.)
  6. So after getting my new unRAID server set up and all my data from various drives transferred over and neatly organized, I have to say that I am loving unRAID so far! So much so that I still have 20 days left in my trial and have already purchased a Pro license. However, I'm starting to think I don't have things set up optimally here.
     I've got a 15TB drive array (all 3TB drives, two parity drives) as well as a 500GB SSD currently being used as my cache drive. I have manually set the appdata, domains, and isos shares to Use Cache Drive: Only, in an effort to force that data onto my SSD, and Docker apps and VMs seem to honor that (see the share-location sketch after this list). However, since this SSD is a cache drive and sits outside the array, if it dies wouldn't I lose all the data on it? In which case I'd have to reconfigure all my VMs and Docker apps? That's certainly not ideal...
     So I'm wondering: is there any way to have that SSD be part of the array, but still force those shares (and only those shares) onto the SSD for better performance and so that other disks don't have to spin up (for things like loading Plex metadata, etc.)? Basically, to have all my VMs and Docker apps run off an SSD that is protected by parity? I think that would be the ideal scenario for me - would there be any major performance considerations with doing this? Additionally, I have a spare 120GB SSD that I could then set up as a cache drive solely for faster transfer speeds.
  7. I gotcha. Yeah, in theory it would work in IT mode if there were one, but I haven't been able to find one. Thanks for the help / reassurance on this, guys. I ended up purchasing an LSI 9210-8i from a seller on eBay that comes pre-flashed to IT mode for unRAID / FreeNAS compatibility.
  8. In reply to your edit (about the 2208 in IT mode) - is there even any way to do this? From what I found, there simply wasn't an IT mode available for this card.
  9. Damn. Was hoping this wouldn't be the case. Any recommendations on which HBA to buy? One that works well with unRAID and supports up to twelve 3TB HDDs at 6Gbps.
  10. I apologize if this is a bit of a newb question, but I've been trying all night and still just can't seem to get things set up. I have an Intel S2600GL 2U server: 12 SATA/SAS hot-swap bays on the front, two internal SATA ports for SSDs, dual Xeon E5-2670s (upgraded), and 128GB of DDR3 ECC RAM (upgraded). Previously I was just running Windows Server 2016 to get things going and start tinkering. I recently discovered unRAID and really want to check it out, as it sounds perfect for what I want to do.
      I have 1 HGST Ultrastar 3TB drive, 1 WD Green 3TB, and 1 Seagate 3TB drive - all almost full. I ordered 5 more HGST Ultrastar 3TB drives to expand with and to do a proper unRAID install: the plan is dual parity drives and 18TB of total usable space, with a few bays left over for future expansion, plus two SSDs - a 120GB cache drive and one dedicated to Docker apps / VMs.
      Previously, with just Windows Server, I had a single RAID0 virtual drive for each of my three 3TB drives, and Windows detected them all, no problem. However, with unRAID I cannot for the life of me get it to detect any of my drives other than the two internal SSDs - so every drive connected to the backplane is missing. Those drives go through an Intel RMS25CB080 hardware RAID card, which uses the LSI SAS 2208 chipset:
      06:00.0 RAID bus controller [0104]: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] [1000:005b] (rev 05)
              Subsystem: Intel Corporation RMS25CB080 RAID Controller [8086:3513]
      09:00.0 Serial Attached SCSI controller [0107]: Intel Corporation C602 chipset 4-Port SATA Storage Control Unit [8086:1d6b] (rev 06)
              Subsystem: Intel Corporation C602 chipset 4-Port SATA Storage Control Unit [8086:3583]
              Kernel driver in use: isci
              Kernel modules: isci
      From what I can see after searching online, there is no "IT mode" firmware to flash for this chipset. According to Intel's own specifications it supports a variety of RAID configs, but specifically not JBOD. However, you can manually enable JBOD mode through the CLI following this tutorial, which I've done; everything executes successfully, yet upon reboot it still doesn't detect the drives. So individual RAID0 virtual drives don't work, and neither does JBOD pass-through, from what I've tried. I have seemingly altered every possible setting in the RAID BIOS, and in the motherboard BIOS as well, all to no avail. (A quick way to check what the OS actually sees after each change is shown in the block-device listing sketch after this list.)
      Is what I'm trying to accomplish even possible? Has anybody had any luck with this card? Or should I just give up on using this controller and buy a normal HBA? If so, does anyone have any suggestions given my current hardware?
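
A note on the hosts-file trick in item 4: an easy way to confirm the override is actually in effect is to check what the server's hostname resolves to on each Windows PC. Below is a minimal sketch in Python, assuming a placeholder hostname "TOWER" and the addresses from that post; substitute your own hostname and per-PC server address.

    # check_hosts_override.py - run on each Windows PC after editing the hosts file.
    # Confirms the server's hostname resolves to a 10GbE address rather than the
    # 1GbE one, so SMB transfers take the fast link. "TOWER" is a placeholder.
    import socket

    SERVER_HOSTNAME = "TOWER"                         # placeholder - your unRAID hostname
    EXPECTED_10GBE = {"192.168.2.1", "192.168.3.1"}   # per-PC 10GbE server addresses

    resolved = socket.gethostbyname(SERVER_HOSTNAME)
    print(f"{SERVER_HOSTNAME} resolves to {resolved}")

    if resolved in EXPECTED_10GBE:
        print("OK: hostname resolves to a 10GbE address; transfers should use the fast link.")
    else:
        print("Warning: hostname resolves elsewhere (likely the 1GbE network); re-check the hosts file.")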
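
For the addressing problem in item 5: with a direct connection and no router, the two ends can only talk if they sit in the same subnet and each uses a valid host address within it. Here is a small sanity-check sketch using Python's ipaddress module; the /24 netmask is an assumption, so substitute whatever mask was actually configured.

    # subnet_check.py - sanity-check a planned point-to-point addressing scheme.
    # The /24 prefix is an assumption; substitute the netmask you configured.
    import ipaddress

    def check_pair(server_ip: str, client_ip: str, prefix: int = 24) -> None:
        server = ipaddress.ip_interface(f"{server_ip}/{prefix}")
        client = ipaddress.ip_interface(f"{client_ip}/{prefix}")

        for iface in (server, client):
            net = iface.network
            # The network address (e.g. 192.168.0.0 in 192.168.0.0/24) and the
            # broadcast address are not usable as host addresses.
            if iface.ip in (net.network_address, net.broadcast_address):
                print(f"{iface.ip} is the network/broadcast address of {net}; pick another host IP.")

        if server.network != client.network:
            print(f"{server.ip} and {client.ip} are in different subnets; they can't reach each other without routing.")
        else:
            print(f"{server.ip} and {client.ip} share subnet {server.network}.")

    # Examples based on the plans in items 5 and 4.
    check_pair("192.168.0.0", "192.168.0.2")   # flags 192.168.0.0 as the network address
    check_pair("192.168.2.1", "192.168.2.2")   # the layout that ended up working

Run as written, the first check flags 192.168.0.0 as the network address of 192.168.0.0/24, which is one plausible reason the pings in item 5 never got a reply.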
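
For the cache-only shares in item 6: one rough way to confirm that a share such as appdata really lives only on the cache SSD is to look for its files under the individual disk mounts. The sketch below assumes unRAID's conventional /mnt/cache and /mnt/diskN mount points; it is only an illustration run from the server's terminal, not an official tool.

    # where_is_share.py - rough check of which devices actually hold a share's files.
    # Assumes unRAID's conventional mount points (/mnt/cache, /mnt/disk1, /mnt/disk2, ...).
    import glob
    import os

    SHARE = "appdata"   # share to inspect

    for mount in sorted(glob.glob("/mnt/disk*")) + ["/mnt/cache"]:
        path = os.path.join(mount, SHARE)
        if os.path.isdir(path):
            # Count files as a rough indicator; a cache-only share should show up
            # under /mnt/cache and nowhere else.
            count = sum(len(files) for _, _, files in os.walk(path))
            print(f"{path}: {count} files")
        else:
            print(f"{path}: not present")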
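
For the drive-detection problem in item 10: while experimenting with the RAID BIOS settings, it can help to check after each reboot which block devices the Linux side actually enumerates, independent of the webGUI, to tell a controller/driver problem apart from a UI problem. A minimal sketch reading /sys/block from the server's terminal (standard Linux sysfs paths, nothing unRAID-specific):

    # list_block_devices.py - show which disks the kernel sees, to tell a
    # controller/driver problem apart from a webGUI problem.
    import os

    SYS_BLOCK = "/sys/block"

    def read(path: str) -> str:
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return "?"

    for dev in sorted(os.listdir(SYS_BLOCK)):
        if dev.startswith(("loop", "ram")):
            continue  # skip virtual devices
        base = os.path.join(SYS_BLOCK, dev)
        sectors = read(os.path.join(base, "size"))            # size in 512-byte sectors
        model = read(os.path.join(base, "device", "model"))   # device model string, if exposed
        size_gb = int(sectors) * 512 / 1e9 if sectors.isdigit() else 0
        print(f"{dev:8s} {size_gb:8.1f} GB  {model}")

If the backplane drives never appear here regardless of the controller settings, the issue is at the controller/driver level rather than in the unRAID webGUI.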