DanielCoffey


Posts posted by DanielCoffey

  1. I just bought an LSI SAS9201-8i. Be aware that it will probably come with a half-height bracket, so order a spare full-height one if you need it for your particular case. Check the screw spacing too. This is the type you might need... http://www.ebay.co.uk/itm/282572654860

     

    You will want SAS to 4xSATA forward breakout cables. I ordered these... https://www.scan.co.uk/products/startech-16-ft-50cm-serial-attached-scsi-sas-cable-sff-8087-to-4-x-latching-sata

     

    The SAS cards run pretty hot, so they benefit from decent airflow. If the card has been sitting in a box or a server for years (mine was dated 2011) you may wish to replace the thermal compound. The heatsinks are usually held on by nylon push pins which may have become brittle, so take care.

     

    If you want to water-cool them, some older southbridge coolers may be compatible, as the hole-to-hole spacing is 47mm. I will be posting shortly on how I change the TIM, but the water block will have to wait until I spec my new machine.

  2. Now that the new Xeon-W chips have just been announced by Intel for single-socket motherboards, I am considering moving my unRAID machine to a new single-Xeon, ECC-capable motherboard. Having used consumer non-ECC motherboards in every PC I have built since I owned a 2010 Mac Pro, I have no idea which brands are at the top of the hierarchy, which are the wannabes, and what features I should look for or avoid on a server board.

     

    At the moment (today) the only board announced as "ready for Xeon-W" with the LGA2066 socket and C422 chipset is the Gigabyte MW51-HP0, and I have no idea how its features stack up against what you would expect from such a board.

     

    I will be using two GPUs, popping in an LSI SAS9201-8i for the spinners and putting the SSDs on the onboard SATA. I only intend to have one M.2 NVMe drive, but I do want to water-cool the board. I am likely to settle on either the 8-core or 10-core CPU. I know the CPU will be fine for water-cooling, but what about the voltage regulation circuitry? How toasty does the VRM get on a server board compared to a consumer board? Looking at the Gigabyte board, there are some modest passive coolers sited there to pick up stray airflow from the CPU air cooler, which I won't have.

     

    Any thoughts from the server watercoolers?

  3. Out of interest, just how much airflow do these cards need? Are we talking about forced airflow over the card and out the back or is just being in a large space with mild flow in the vicinity enough?

     

    I had a look at the spec of the heat sink fitted to the 9201-8i and it is a fairly small one designed for single-slot PCIe cards, with no space for a fan. It is 35mm x 40mm x 9.7mm, with the two push pins on opposite corners of a 33mm x 33mm square. You can easily get replacements for $5 plus shipping.

     

    I will end up using this card in my large ATX cube case from CaseLabs; the 9201-8i will have two empty PCIe slots on either side and the nearby GPU is water-cooled. It sits about 20cm underneath the 120.3 radiators, which I do admit push their warm air down into the case.

     

    I have an opportunity to put a single 120mm fan right above the 9201-8i (and the adjacent NVMe SSD) if that downdraught from the radiators is not going to be enough.

     

    Thoughts?

     

    EDIT: Colour me surprised... there is a suitable water block! It looks like it would *just* fit too, but costs almost half as much as the card... Koolance CHC-122 Water Block

  4. My LSI SAS9201-8i from eBay has arrived, well before my SAS cables. It had a half-height bracket so I temporarily removed that and carefully plugged it in to check the board as best I could.

     

    It was in perfect condition - not a fingerprint or dust mark on it anywhere. Even the PCIe connector pads were pristine. It would be easy to believe it was an unused NOS item.

     

    After it had idled for ten minutes, I checked the board with the IR thermometer. I could only reach the heat-sink side of the board, so I will not be able to get a back-surface temperature. I did notice that even at idle in open air the centre of the heat sink showed 56°C. Given that the SAS chip produces so much heat and that the board has been sitting in a box since the 2011 date on its label, I will certainly replace the thermal paste. I want to give it every help I can to shed the heat.

     

    Once my thermal paste and a full-height bracket have arrived, I will take some pictures of the card before, during heat-sink removal, after cleaning, and reassembled, along with a new idle temperature. It is a real shame there is no onboard sensor we can query.

  5. I would appreciate some advice on USB headers for connecting an unRAID thumb drive to a motherboard USB 2.0 or 3.0 header rather than having it sticking out the back as usual. The thumb drive I will be using is a Corsair Voyager GT USB 3.0 32GB.

     

    My current motherboard is an ASUS Maximus IX Formula and has the following USB 2.0 or 3.0 connectors (ignoring the 3.1 connectors)...

     

    1x onboard USB 2.0 9-pin header (labelled USB1314) - currently in use for the Aquaero 6

    1x onboard USB 3.0 20-pin - vacant, located horizontally at edge of board under GPU slot

    4x rear panel USB 2.0

    4x rear panel USB 3.0

     

    I currently have six devices directly connected to the rear USB ports, but I can reduce that if needed. The Aquaero goes to the onboard USB 2.0 header. I know there are adapters that turn a USB 2.0 header into a pair of USB 2.0 ports, but the Aquaero can't share those as it needs the header itself. I could use a USB 3.0 header-to-socket adapter for the pen drive, but I would need some sort of support for it, or a short cable that would allow the drive to be velcroed to the chassis.

     

    What do you suggest?

  6. I am wondering whether it is necessary to leave some space unallocated on an SSD when creating VMs, to allow the drive to do its housekeeping.

     

    My situation is that I intend to have a group of WD Red 3TB spinners as single parity plus data, a SATA Samsung 850 PRO 256GB SSD as cache, and an NVMe Samsung 960 PRO 1TB shared between a pair of gaming VMs. The array is only responsible for serving a single 1080p stream via Plex to a single user, and the pair of VMs would be used one at a time. The Linux VM would be the daily driver and would run those games Steam offers for Linux. The Windows 10 VM would only be used for the Windows Steam games. I would hibernate each VM when changing to the other, as I only have a single GPU at the moment.

     

    Now, when assigning the space on the 1TB NVMe device, would you advise allocating all of it to the two VMs or leaving a bit of headroom for the device to perform maintenance? If I were setting it up as dual boot with partitions I would have allocated all of it, of course, because the unused parts would simply be empty, but for a VM under unRAID does it work the same way, or does it fill the device at the time of allocation? (A quick way to check is sketched below.)
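
    For what it is worth, here is a minimal way I believe you can check whether a vdisk image is thin-provisioned (sparse) or fully allocated, assuming the vdisk ends up as a raw or qcow2 image file. The path below is just a placeholder for illustration, not my actual layout:

        # apparent size vs. space actually allocated on the SSD
        ls -lh /mnt/disks/nvme/domains/linux/vdisk1.img   # shows the full provisioned size
        du -h  /mnt/disks/nvme/domains/linux/vdisk1.img   # shows blocks actually written so far

        # qemu-img reports both figures ("virtual size" and "disk size")
        qemu-img info /mnt/disks/nvme/domains/linux/vdisk1.img

    If the allocated size is much smaller than the virtual size, the image is sparse and only grows as the VM writes data rather than filling the device at allocation time.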

  7. Is there a plugin or similar that would allow me to monitor the temperature of an LSI SAS9201-8i while it was idle or under load (parity check for example)?

     

    I am about to receive a server-pull 9201-8i and intend to change the stock thermal paste under the heat sink, but would love to get a before-and-after measurement. I do have an IR thermometer, but wondered if there was a plugin to monitor the stats from the HBA.

     

    Any ideas?

  8. Thanks for the replies - I understand more about how it will fit together now.

     

    Quick question about the old 9201-8i, which is dated 2010... would it be reasonable to assume the thermal paste under the big heatsink is totally baked dry by now and to replace it while I have the card on the bench? I see the heatsink appears to be held in place with the usual pair of springs and nylon flared clips. I am aware the nylon clips may be brittle. I did see a picture of one 9201 on eBay that was missing its heatsink and it appeared to be covered in what I would describe as thermal epoxy, not thermal paste - crusty, hard yellow stuff. Are the heatsinks glued on or is it just paste?

  9. I'm in! USB pendrives and a pair of SAS-to-4xSATA cables ordered and an LSI SAS9201-8i on the way from a nice UK eBay seller.

     

    Once I receive the 9201-8i I will check its firmware while I still have direct access to my Windows machine and see whether it needs flashing or not (see the quick sketch at the end of this post).

     

    So, as promised, here are the next set of questions...

     

    1. When selecting a drive for unRAID to create a VM on, I assume there are no advantages to having each of my two VMs on its own SSD since I will be hibernating and waking the two VMs as needed. All that matters is the amount of free space that I want to allocate to the VM, yes?

     

    2. Given that I have a Plus license on our NAS and that I do intend to move it over into my big case once I have the VMs sorted out, is it simple to use a new Trial license while I play around to get the VMs right, then bring my Plus license stick over and "inherit" the VM settings? I assume I would have to tell my Plus license how I wanted the VMs set up, but I would like to avoid having to create and install their contents from scratch.

     

    3. Does the drive that the VMs live on get protected by the parity drive(s) at all?

     

    I think that is all for now but of course there will be more once the stuff arrives.
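
    Regarding the firmware check in the first paragraph, this is roughly what I expect to run - a sketch only, assuming the standard LSI sas2flash utility for SAS2008-based cards (from a boot stick or the Windows command line):

        sas2flash -listall        # list detected SAS2008 controllers and their firmware versions
        sas2flash -c 0 -list      # detailed firmware/BIOS info for controller 0

    From that I should be able to tell whether the card is on IT firmware already and whether it needs updating before it goes into the unRAID box.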

  10. Hmm... that sounds exactly like something I could try now to see how quick the hibernate/wake swap is.

     

    Out of interest, if I wanted to do this with an unRAID Trial (on a new USB stick, I assume), one Mint VM and one Win10 VM, how many drives would I need to scrounge together and which would benefit from being SSDs? Assume I leave my existing NAS alone for now and just have this as a basic unRAID host for the two VMs.

  11. Thanks for the suggestions folks - it has been a useful education into the sorts of things a VM can help me with and the associated issues.

     

    I can see that most of the use cases here have pros and cons. Some might require multiple GPUs; others require software licenses and remembering to sleep both the VM and the host every day. I can't really use the existing unRAID box for the Linux daily PC as it would be running off the i3 iGPU, which would not be sufficient for Linux gaming - which, to be fair, I had forgotten to mention, apologies. I have about a third of my Steam library on Linux and the rest of the big games on Windows.

     

    As I said, I appreciate the education.

     

    What I think I may consider is waiting until I have the new GPU, keeping my current 780ti instead of passing it on to family and allowing the current case to have both GPUs. Once I am in that position, I will be back in touch with specific questions.

     

    Thanks.

  12. Thanks for thinking about this for me.

     

    Yes, it is the time spent restarting and rebooting throughout the day that I am trying to reduce, as well as the fact that when I do go into Win10 I have to wait for the services to start up, for Kaspersky to tell me my databases are extremely out of date and that it has failed to update them (even though it is set to automatic), and then wait while I update them manually (slowly - but that is a separate configuration issue). Chances are that something else will then nag me that it wants updating. Only then am I free to actually do what I want.

     

    I am a full-time carer, so while I do have a lot of time to spend on the PC, it is in small chunks of half an hour or so. I might feel like playing a Steam game from my Windows library for a short while, but I am finding the hassle of swapping over on a dual-boot setup a deterrent, so I tend to just putter about on the Linux side not doing very much until my half hour is over.

     

    If I were to use unRAID as the host for the two VMs, how is hibernation/sleep handled? How much of the machine would need to stay running when I didn't want to use a VM at that time? (I have sketched at the end of this post what I imagine the swap would look like.)

     

    The NAS is not used often and tends to be "by appointment". This was why I was considering leaving the NAS in its own box although I can see the attraction of putting them all together as I can then allow the NAS to act as a backup for the main machine, freeing up an SSD. I am aware that I do not have an offsite backup but it is really only for the "oops" situation rather than anything critical.
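
    Here is the rough sketch I mentioned of how I imagine swapping between the two VMs from the unRAID command line, assuming the guests are defined in libvirt with names like "Mint" and "Win10" (the names are placeholders, and I may well have the details wrong):

        virsh managedsave Mint     # save the Mint VM's state to disk and stop it (like hibernation)
        virsh start Win10          # start Win10; if a managed save exists it resumes from that state
        virsh list --all           # show which VMs are running and which are shut off

    If that is broadly how it works, the question then becomes how long the save and resume steps take on an NVMe vdisk.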

  13. Hello folks - I need some advice on whether using VMs would be appropriate for my situation or not.

     

    I have to stress up front that I know absolutely nothing about setting up or using VMs under unRAID so please could you tailor your suggestions on that assumption.

     

    The situation I am trying to find an improvement for is my main gaming PC which is currently set up to dual boot between Linux Mint (main OS) and Windows 10 (gaming OS). I am finding the time taken to switch between the two sides is deterring me from enjoying my Steam library on the Windows drive because I am a casual gamer and tend to have short sessions during the day and evening for light gaming. Booting back and forth is a barrier (albeit slight) to my relaxed play style and I wondered if VMs for Linux and Windows might help with this or not.

     

    I also have a small unRAID NAS used by my wife for serving movies to an AppleTV. It is currently well specced for that purpose in that it only ever has to serve a single 1080p stream at a time and the movies on it are already encoded for the convenience of the AppleTV. This is where my unRAID Plus license resides.

     

    I am not averse to consolidating both machines into one and fortunately the majority of my hardware is already broadly ready for that purpose (apart from the lack of enough SATA ports). Given that the NAS is already set up happily for my wife, it might be better to purchase another unRAID license just for the main gaming PC.

     

    I do however wonder if setting up a pair of VMs for the two OSes is using a sledgehammer to crack the proverbial nut.

     

    In the short term I intend to replace one or more of the SSDs with M.2 SSDs, probably Samsung 960 EVOs. This will reduce the number of SATA ports available on the motherboard. There may be an opportunity for an upgraded GPU in the medium term too.

     

    The hardware in the two machines is as listed in my current forum sig. The main gaming PC is water-cooled with an Aquaero 6, and the 7700K is currently clocked at 5.0GHz at 1.30V. The motherboard in the main machine is an ASUS Maximus IX Formula with 6 SATA 6Gb/s ports (the M.2 drives will take up some of these if used). The 850 PRO is the Linux drive, one 840 PRO is for Linux backup, and the other 840 PRO is the Win10 drive (and also where the grub2 bootloader resides). If upgrading to M.2 drives I would probably have one for each OS and pass the 840 PROs on to a family member.

     

    My main concerns are usability and also power consumption. I would like to be able to sleep as much of the hardware and VMs as possible when not in use because I am the only user of the gaming PC and our NAS is only used occasionally by one person. There are no business applications on the machines at all - consider it home usage only. I don't know how one goes about flipping between the two VMs or how long it takes to fire one up.

     

    So... do you think VMs are an appropriate tool here or are they not going to help me? What technical issues would I be likely to run into?

     

    Thanks in advance.

  14. I was able to get Windows to create the bridge, but I still have the issue that if there is a wired connection, Windows uses it as the default route to the web. Since my router has no net connection, the wired interface cannot see the web, and I don't seem to be able to get Windows to run off the new "ethernet + wifi bridge" connection instead.

  15. I have my 6.3.3 unRAID NAS sitting headless, wired to my network switch. Since I am currently without broadband for the foreseeable future, my only internet access is through a mobile wifi dongle. There is no wifi enabled on the NAS at the moment. My desktop PC has a choice of wired (which can see the NAS connected to the switch) or the wifi dongle (which cannot see the NAS).

     

    I would like to update the plugins and Docker on the NAS. Given a choice of Linux Mint 18 or Windows 10 on the main PC, is there a way to have both the wired connection to the switch and the wifi dongle active at the same time so that I can see the NAS but still access the internet to update the plugins? (A sketch of the routing idea I have for the Linux side is at the end of this post.)

     

    Does that make sense?
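
    On the Linux Mint side, what I have in mind is roughly this: keep the wifi dongle as the default route for the internet and add a static route to the LAN subnet via the wired NIC. The interface name and addresses below are placeholders for my setup, so treat it as a sketch only:

        # confirm the default route currently goes via the wifi dongle
        ip route show

        # send traffic for the local switch/NAS subnet out of the wired NIC
        sudo ip route add 192.168.1.0/24 dev enp3s0

        # check the NAS is now reachable over the wired link
        ping -c 3 192.168.1.10

    Whether Windows 10 can be persuaded to do the same split without the bridge trick is the part I am less sure about.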

  16. My i3-4160 is stepping down perfectly in unRAID 6.2, both on the Dashboard and with the "cat /proc/cpuinfo |egrep -i mhz" command.

     

    I clearly see all four available cores idling at around 0% on the Dashboard screen even while it is open, and cpuinfo reports 800MHz speeds with the occasional core going up to around 1100MHz if something polls.
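
    For anyone wanting to watch the stepping in real time from the console, something like this works - it is just a convenience wrapper around the same cpuinfo query (assuming the watch utility is present; if not, the plain loop does the same thing):

        watch -n 1 "grep -i mhz /proc/cpuinfo"

        # or, without watch:
        while true; do grep -i mhz /proc/cpuinfo; sleep 1; done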