rollieindc

Everything posted by rollieindc

  1. I'm going to confirm everything that pappaq said, and offer that the nVidia 1030 card should be a good one to use in your build as a pass-through graphics card for a VM. (I'm looking at that one too!) I use VNC to get into multiple VMs, and it works pretty well; unRAID makes that setup pretty easy. I have 16GB of RAM, but sometimes think I'd be better off with 32GB for some of the photo and video work I do. Most of my VMs get around 6GB of RAM, and they work well. You might also like TeamViewer if you want to remote into your VMs from outside your own network. Otherwise, review some of SpaceInvaderOne's YouTube videos - he does a really good step-by-step walkthrough of building VMs, setting up remote access, and doing graphics card pass-through.

     You can read about my build and see some of the things I have gone through. I boot mine headless, but I used the internal graphics card for the initial system build. Any basic video card will do for the initial build/headless config - it doesn't need to be onboard video for unRAID.

     For me, a lot of CPU power wasn't necessary. Since most of my work is more of a NAS-type file service, I am happier with my parity-protected disk array. (Yes, you can mix disks of different sizes and makes in the array, but the parity disk needs to be the largest drive in the system.) If you want to add extra disks (like your 3TB movie drive) outside of the array, you can - you just won't get the cache or parity protection that unRAID offers on them.

     I currently use just one SSD as my cache drive, and for most things I don't use the cache at all. Still, I'd recommend a 250-500GB SSD if you can get one: docker apps and VMs get installed on the cache drive by default. A second cache drive will improve performance, but isn't totally necessary.

     It's also good to plan on two CPU cores for unRAID beyond what you need for your daily driver. I have a 4-core/8-thread Xeon in mine, and I keep 2 cores (2 threads) reserved for the OS/unRAID to chew on things. (See the sketch at the end of this post for how I check which cores to leave free.)

     One thing to note: if you do run parity, look at when the parity check runs - on my system (~16TB) it takes 6-8 hours to complete, and I do it weekly. I'd recommend considering another 3TB drive (2 for drive space, 1 for parity) if you plan to keep expanding your library. The good news is that you can add the parity drive later, or rebuild parity when you add a larger drive. You can even move files from one drive to another without taking up too much CPU/RAM. My current parity drive is 4TB, and at some point I might go up to a set of 6 or 8TB drives - which is really easy to do: you just plug them in, format, assign as you want, and move files if needed. If you're just adding to the array, it's basically one click to add storage. If my array gets up to more than 5 drives, I'll likely add a second parity drive for additional protection.
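     A quick sketch of how I check which host cores to keep free for unRAID before pinning the rest to VMs (core numbering and output vary by system, so treat this as illustrative only):

         # List logical CPU / core / socket pairs so you can spot hyperthread siblings
         lscpu --extended=CPU,CORE,SOCKET,ONLINE
         # On a 4-core/8-thread Xeon, CPUs 0 and 4 are typically siblings of core 0 --
         # leave that pair unassigned in the VM CPU pinning so the host always has a
         # core to chew on things.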
  2. Oh... well, that's a bugger (of a video card). First, just to get it written down: the T310 implements Intel Virtualization Technology for Directed I/O (Intel VT-d) via the Intel 3420 chipset, and the built-in video is a Matrox G200eW with 8MB of memory integrated into the Nuvoton WPCM450 (BMC controller); it will do up to 1280x1024@85Hz at 32-bit color for KVM. It has a 3D PassMark of 42... which is squat-nothing. I'm also just going to bookmark the following specs for reference: PCI 2.3 compliant • Plug n' Play 1.0a compliant • MP (Multiprocessor) 1.4 compliant • ACPI support • Direct Media Interface (DMI) support • PXE and WOL support for on-board NICs • USB 2.0 (USB boot code is 1.1 compliant) • Multiple Power Profiles • UEFI support.

     In working with the nVidia GT610 I picked up, I am learning that the T310 slot layout (from top to bottom) is:
     Slot 1: PCIe 2.3 (5GT/s) x8 (x8 routing)
     Slot 2: PCIe 2.3 (5GT/s) x16 (x8 routing) <- likely best for a graphics card
     Slot 3: PCIe 2.3 (2.5GT/s) x8 (x4 routing)
     Slot 4: PCIe 2.0 (2.5GT/s) x1
     Slot 5: PCIe 2.0 (2.5GT/s) x1

     Disabling the integrated video controller seems to make sense given how basic the onboard video is, and Slot 1 has the SAS/HD controller, so the best spot for a graphics card is Slot 2 (see the quick check at the end of this post). What is not clear to me is whether there is a real need to set the BIOS "Enable Video Controller" option to Disabled in order to pass a second graphics card through to a VM. I don't think that's necessary, but more experimentation will tell in time.

     I did find that, due to cooling issues, Dell limits the power draw to 25W on slots 4 & 5. It was also noted that the x16 slot was probably not originally designed for graphics cards, so power draw there may be limited to 40W max. Given that the two power supplies add up to 800 watts max, perhaps this isn't the most surprising find - the PCIe slots were likely intended only for ethernet or SAS cards for external expansion disk arrays. Based on this, finding a low-power (25-40W) graphics card with any performance might be tough. I found out that the GT610 has a max draw of 29 watts. Buggers! No wonder it's a popular card for systems like this: cheap and low power.

     Then, before doing more research, I managed to snag a Radeon R9 290 - but given the power draw limits, I doubt I can get the 290 to work in the server. It draws 300 watts alone. (Oops. =( Well, at $80, I think I snagged a good buy... eBay is listing the same cards at $120. 😃 )

     I did find this: https://graphicscardhub.com/graphics-cards-no-external-power/ which might be really useful. From that, I could maybe go with the Gigabyte Radeon RX 550 Gaming OC 2G card (at about $100), as it draws just 50 watts. Although I might need to consider the ZOTAC (or Gigabyte/EVGA) GeForce GT 1030 2GB GDDR5, since it only draws 30 watts! (And only $85 at NewEgg.) And I might have either an old GeForce 8800 or an HD7570 (60 watts) in my spare parts that would work too. I was really hoping to do some transcoding, so the 1030 might be the right call for all the "wants" I have for a VM. I just wish I knew it would work well as a passthrough card with unRAID without a lot of hassles. Oh, and Plex doesn't support the AMD RX 550... so that's part of an answer too.
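     A quick sketch of how to confirm which speed/width a card actually negotiated once it's installed (the PCI address below is only a placeholder - find yours with the first command):

         lspci | grep -i vga
         # then, using the address reported above (02:00.0 is just an example):
         lspci -s 02:00.0 -vv | grep -E "LnkCap|LnkSta"
         # LnkCap shows what the card supports, LnkSta what the slot negotiated,
         # e.g. "Speed 5GT/s, Width x8" in the x16-length / x8-routed slot 2.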
  3. April 24, 2019 - VM with nVidia GT610/1GB card - or why I have the "no love" video card post-pains. So, for an update on this build: I'm up to 6.6.7 with no real issues, and my Dell T310 server has been running rock solid for 55+ days - mostly just updating apps and dockers. Parity checks run regularly and show no signs of errors. The disks (still 4TB IronWolf drives) are all running reasonably cool (always less than 38C/100F), and I continue to add to my NAS build as I am able. I'm still struggling with the choice between a larger SSD cache drive (500GB=$55) and more RAM (+16GB=$67) - but now I may need to consider a video card replacement instead. The GT610 was all of $25, so it's not really a loss to me... just bummed I can't get it to work. 😎

     In that regard, this Windows 10/64-bit Pro VM build has me at an impasse, and I guess I need to add myself to the "No Love for the nVidia GT610" community (GeForce GT610/1GB, low-noise version). I've never been able to dial in this card since first installing it, and I'm not really sure why. Again, just so others following along can check what I've done so far: I'm on unRAID 6.6.7 on a Dell T310 (with Intel VT-d confirmed), booting the Windows 10/64-bit VM with Machine: i440fx-2.7 or 2.8, with either SeaBIOS or OVMF, with either CPU pass-through or emulated (QEMU64), and building the VM with the nVidia GT610 in its own IOMMU group (13), installed in slot 4 of the available PCIe slots. (A quick way to double-check the IOMMU grouping is at the end of this post.) But it is giving me the same headache others have had with video passthrough. VNC/QXL video types work fine, and I am able to use the VirtIO drivers from RedHat without much of an issue. Often the GT610 shows up as a Microsoft Basic Display Adapter (ugh), so it at least "posts" rather than locking everything up.

     Note: for my VMs I am using TeamViewer (v.14), as I can access it from behind my firewalls over my Verizon DSL link without an issue. (No, there is no fiber where we live, and I refuse to get ComCrap service... no, not even a dry drop! And yes, I have AT&T/Verizon/DirecTV, so suck it ComCast/NBCUniversal Media, LLC!)

     I followed the excellent GPU ROM BIOS edits video that SpaceInvaderOne did, made sure there was no added header, and I still can't get the nVidia card to work reliably, even after multiple attempts. I have other VMs working fine with TeamViewer, and several with VNC/QXL, but nothing seems to work reliably with the nVidia GT610. I might take one last shot at it with Windows 7/64, but I'm not holding my breath for that one either. Mostly I just want a card for video and audio transcoding and video acceleration (virtual reality / 3D MMO gaming), and "maybe" some home-lab stuff. I thought the 610 would have worked well, since I have one in another HP DC7900 desktop (quad Intel Core 2 Q8400 CPU) and it works rather solidly in that machine. After three nights of tinkering, I think I am just going to find another video card off eBay or at the local PC reseller to try.
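     For anyone wanting to double-check the grouping, this is the usual loop for listing IOMMU groups from the unRAID console (just a sketch; the exact output depends on your lspci version):

         # Print every PCI device with the IOMMU group it belongs to
         for d in /sys/kernel/iommu_groups/*/devices/*; do
             g="$(echo "$d" | cut -d/ -f5)"
             echo "IOMMU group $g: $(lspci -nns "${d##*/}")"
         done | sort -V
         # The GT610 and its HDMI audio function should appear in their own group.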
  4. I guess I need to add myself to the "No Love for the GT610" community. For me - working with unRAID 6.6.7 on a Dell T310 (with Intel VT-d confirmed), with either SeaBIOS or OVMF, with CPU pass-through or emulated - building a Windows 10/64-bit VM with the nVidia GT610 in its own IOMMU group (13) is giving me the same video passthrough headache as well. Note: I am using TeamViewer (v.14). I followed the GPU ROM BIOS edits that SpaceInvaderOne suggested, made sure there was no added header, and I still can't get the nVidia card to work reliably after multiple attempts. (A quick way to sanity-check a dumped ROM is below.) I have other VMs that work fine with TeamViewer and with VNC/QXL, but nothing seems to work reliably with the nVidia GT610. I might take one last shot at it with Windows 7/64, but I'm not holding my breath for that one either. After three nights, I think I am just going to find another video card to try.
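     One quick sanity check on a dumped ROM (the filename here is just a placeholder): a valid VGA option ROM starts with the 55 AA signature, so if the hex dump shows extra bytes before 55 AA, the NVIDIA header still needs trimming as described in SpaceInvaderOne's video.

         hexdump -C gt610.rom | head -n 4
         # the first two bytes should read:  55 aa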
  5. You'll like the flexibility and capabilities of unRAID. I've been impressed with all the tools available (as a semi-pro/amateur photographer). I run multiple OSes (Win 7, Win 10, MacOS, Linux, etc.). So I like the specs, but I'd probably suggest adding a separate LSI SATA controller with cache on PCIe. I've never been happy with onboard SATA controllers, but that's just me. If an add-in controller fails, you can simply replace it; if your onboard SATA fails, you're in deep trouble. If you're going to be running 8x 3.5" HDDs but only moderate CPU loads, you should be okay with 550 watts. A lot of people say 500GB is enough cache, but given current prices and the 20-40TB of media you're talking about, I'd consider going to a 1TB drive, or maybe 2x 500GB SSDs if you have them already - especially if you are working with a lot of large (4K) video files. I also like that compact Fractal Design Node 804 case design, but I would make sure the fans provide sufficient airflow for both the drives and the CPU. Last point: make sure that 8TB parity drive is new and server/enterprise quality.
  6. So, just a boring update. Not much to report lately, as I've been traveling a lot for work and dealing with a "wonky" DSL connection at my house. The Dell T310 server has been running smoothly on unRAID 6.6.6, with no real issues since I installed the H200 controller. My biggest quandary has been deciding whether to increase the cache drive size (500GB or 1TB) or bump the RAM to 32GB. To be honest, I don't need to do either at the moment, and my drive array appears to be running just fine. So, nice and quiet for me. Looking forward to the 6.7/6.8 update at some point; the new features seem really promising. I still need to work on some VMs - that's an ongoing project for me.
  7. Jonathanm, just to follow up on this thread: I exchanged the H700 for an H200 flashed to IT mode. I did this as much to get SMART information on all the attached drives - in the hope of increasing the reliability of the system through knowledge of that information - as anything else. I am not sure whether losing the 512MB of cache memory on the H700 will be a performance hit, but since this system is mostly going to be pulling NAS storage duty, I think the overall performance will be "good enough" for my needs. The ability to work with the drives individually, or replace parts in case of failures, should - as you put it - follow "the normal first steps for recovery", and lets me lean on the knowledge of others (more learned than myself) to increase the chances of any needed recovery. Thanks! - Rollie
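     With the H200 in IT mode the drives show up as plain /dev/sdX devices, so standard SMART queries work from the console - roughly like this (the device name is just an example):

         smartctl -a /dev/sdb | grep -E "Model|Temperature|Reallocated|Pending"
         # Reallocated/pending sector counts and temperature are the early
         # "disk death" warnings I'm after.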
  8. Update: 03DEC2018 - Replacing the H700 SAS controller with an H200 flashed to IT mode.

     New stats:
     Dell PowerEdge T310 (flashed to latest BIOS)
     RAM: 16GB ECC quad rank
     Controller: Dell H200 SAS flashed to IT mode, replacing the SAS H700 (which was flashed to its latest BIOS, with all drives running in RAID 0)
     Drives: Seagate IronWolf 4TB SATA (1x parity, 3x data) + 2x 600GB Dell + 240GB SSD (for VMs). Note: the three-drive 3.5" bay system (from StarTech.com) sits in the available full-height 5.25" drive slot. This gives me 7 accessible hot-swappable drive bays; I plan to populate 6 and leave one free for hot swaps.
     Video: onboard & nVidia GT 610 card
     Soundcard: Creative Sound Blaster X-Fi Xtreme Audio PCIe x1 SB1040

     First things first, the really good news on the nVidia GT 610 video card choice I made is that I should be able to build a Mac High Sierra VM now, and then Mojave later - once the nVidia drivers for Mojave are released. The drivers are already out for High Sierra, and since I have a Mac Pro 5.1 running High Sierra that I plan to use for video and photo editing, this should be of great value to me later on.

     Next, I found a relatively cheap ($26 shipped) Dell H200 SAS RAID card on eBay, from China, and decided to get it. As I understand it, the Dell H200 is an LSI-based 9211-8i card with RAID firmware installed. Installing the LSI "IT" 2118it.bin firmware allows individual access to the drives and their SMART data. The latter is important for determining disk health and tracking things like temperature issues or bit/sector errors. Since this box exists primarily to hold my personal and historical photo library and backups, I need to identify early "disk death" before it happens - and swap out any drive before it fails completely.

     After two weeks of shipping, the card finally arrived from China and looked to be in decent shape. (One of the SAS connector shields looked a little bent, but I was able to straighten it with my fingernails.) I then read through various descriptions of the process to change it to IT mode and decided to go for it. The process was fairly straightforward, although a little daunting; instructions are available online. (See https://techmattr.wordpress.com/2016/04/11/updated-sas-hba-crossflashing-or-flashing-to-it-mode-dell-perc-h200-and-h310/ )

     I used a separate HP DC7900 computer to flash the H200, since some people reported trouble using the T310 for this purpose. I had to remove the back card-edge holder, since that system is a small form factor, but the card sat nicely in the case for the time I needed to do the reflashing. I booted the HP into MS-DOS from a USB drive I made with Rufus (https://rufus.ie/), started the diagnostics, found the SAS address, and loaded the IT firmware onto the card. The process was, again, fairly straightforward until I tried typing in the H200's SAS address at the line [ C:\> s2fp19.exe -o -sasadd 500xxxxxxxxxxxxx (replace this address with the one you wrote down in the first steps) ]. It took me a while to realize that the address needs to be 16 characters, all hexadecimal (0-9, A-F), and ALL UPPERCASE. The address from the earlier steps had hyphens in it and was lower case, so I fumbled a bit with that until the flashing software gave me the clue that I needed "NO HYPHENS" and "ALL UPPER CASE". Duh - I felt stupid, but after that the process rolled quickly without any further issues. (A tiny example of that clean-up is at the end of this post.)

     If you feel comfortable reflashing computers or cards, this is very similar and should not pose any issues - just check your syntax and typing before hitting the enter key. I can see how that could be a big issue, and how some people probably "bricked" their cards by making address mistakes. But my reflashing went fine, and after rebooting a few times during the process, I had a reflashed H200 card in IT mode.

     So I put it into the Dell T310, replacing the H700 with its 512MB of on-card cache. That part somewhat bums me out - the H700 has a nice cache on it already, and the H200 is likely cache-less. But I rebooted, and am now in the process of reformatting the array's drives. Yes, they all had to be reformatted. Ugh. This is the last change I am making to the controller. So far the difference in speed isn't really showing up (yet, if at all - they are both 6Gb/s cards), but the extra SMART information already is. I can see drive info from any of the drives I want - SAS or SATA - in the drive-related plugins and other diagnostics, and I can already read the temperature of every drive in the array from the dashboard. I could not read the temperatures with the H700 in a RAID 0 configuration. The formatting process appeared to be faster, too.

     Lastly, I went to a PC salvage store while on travel (hint: it was the "USED COMPUTER" store at 7122 Menaul Blvd NE, Albuquerque, NM 87110, hours 8-7pm, phone (505) 889-0756) and picked up a used Creative Sound Blaster X-Fi Xtreme Audio PCIe x1 SB1040 sound card (for $10). Plop and drop - nothing more needed for it to load up in unRAID. I haven't done anything with it yet, but it could help with some of the audio files and the way the VMs run. If I get really bored and turn the server into a home theater PC, that 5.1 sound option will be nice to have.

     Oh frack... the parity check is running again... that will be about 6 hours of disk spinning. But that should be the first real parity check that serves as a baseline. Guess I should get used to it. Next up: building a bunch of VMs and starting to use the system for storing files and backups.
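     For reference, the clean-up of the SAS address is just "strip the hyphens, upper-case it" - in shell it would look something like this (the address below is made up):

         echo "500a-0b1c-2d3e-4f56" | tr -d '-' | tr 'a-z' 'A-Z'
         # -> 500A0B1C2D3E4F56  (16 hex characters, no hyphens, all upper case,
         #    ready for:  s2fp19.exe -o -sasadd <address>)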
  9. Well, that's actually my point - that's not all I want. And yes, I realize I may not get what I want. My goals were to: 1) secure/encrypt the data pathway into the server; 2) secure/hide the IP address of the home server (as much as possible), and close the data pathway to avoid tracerouting into the rest of the home network.

     (1) would be easy enough to do, just using a VPN tunnel from an external client into the server. But connecting into that tunnel directly requires an open, insecure port into my network from the router in order to establish the connection. Password and SSL protected, maybe - but it still leaves the port open, and other ports could be pinged and then interrogated by anyone on the internet. I trust my ISP and their router about as far as I could throw them. So if all I wanted was a direct data connection, that could be accomplished by opening the port on the router and hoping the open connection isn't found and then hacked.

     To accomplish (2), my thought had been (and yes, I recognize I could be wrong) that I would need to establish a "closed" secure data path/route from the home server to the website subdomain using OpenVPN through NordVPN. The subdomain is one that I own/control and is not on my home network. That way, a secure connection could be established from a client (e.g. my laptop) going through one secure VPN tunnel and connecting to the other tunnel, completing the connection through the subdomain name. Since no trace from the subdomain to the server could be completed without the data connection being established first, that should essentially "hide" any (secure) connected port on my home network. Maybe I am overthinking it... and there may be too many layers of encryption... but I am still thinking this out and looking for other, more knowledgeable ideas/views. My other idea would be to just build a VM.
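     One simple sanity check for goal (2), once whatever tunnel I settle on is up on the server: ask an external service what address the server's traffic appears to come from (ipify is just one example of such a service):

         # Run from the unRAID console with the tunnel up, then again with it down
         curl https://api.ipify.org
         # If the tunnel is doing its job, this returns the VPN endpoint's address,
         # not my home DSL address.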
  10. Thanks Jonathanm, I've been using NordVPN from my client side for a while now to connect to various servers. And yes - I was thinking that an OpenVPN docker would be the answer for keeping my home network (and home IP address) as secure as possible, and that it would connect my unRAID server to the NordVPN servers - permitting me to then establish the most secure tunnel from my "offsite" client/laptop (at a coffee shop) into the server (sitting at home)... but perhaps I am misunderstanding something about the protocols(?). I really don't want an access point into my entire network (which is what I would get by going through the router); I only want the unRAID server "accessible", and preferably only by means of a good SSL connection. My other option would be going through a domain I have and making my server a subdomain - or going the DuckDNS route. My concern there is that my home IP address would be traceable by pinging and tracing the subdomain. I thought (perhaps incorrectly) that the VPN docker would mask the IP address until a VPN connection was established. Dunno... confused now. Thankfully, there is no rush, and I've been happily uploading lots of files to my "new" server. 😀
  11. Update: Thursday, 13SEP2018 & 15OCT2018. I have been running unRAID 6.4 (now 6.5, soon 6.6.1) for a while now on the Dell PowerEdge T310, but I've been doing some hardware upgrades. So let me show where I started and where I am going.

      Dell PowerEdge T310 (flashed to latest BIOS)
      RAM: 8GB -> 16GB ECC quad rank
      Controller: PERC 6ir -> SAS H700 (flashed to latest BIOS, all drives running in RAID 0)
      Drives: 2x 600GB SAS Dell/Seagate -> Seagate IronWolf 4TB SATA (1x parity, 2x - now up to 3x - data) + 1x -> 2x 600GB Dell/Seagate SAS + 120GB -> 240GB SSD (for VMs). I also installed a three-drive 3.5" bay system (StarTech.com) into the available full-height 5.25" drive slot. This gives me 7 accessible hot-swappable drive bays, which should be more than enough for me. I will be moving the VMs from the SSD to the 600GB Cheetah SAS drives, since the speed on those should be enough for anything I will be doing.
      Video: onboard & nVidia GT 610 card (I wanted a PhysX/physics engine and some CUDA cores for digital rendering and transcoding)

      Fairly pleased with my overall stability. My next steps will be configuration backups, network throughput testing, and adding a UPS for when power is lost. I will also need a way to access the system from offsite, preferably through my VPN service (NordVPN/OpenVPN).
  12. I could use some advice - so thanks upfront to anyone offering their opinions. I am a bit confused about the advantage/utility of an SSD cache drive. I have a small server for home use (3x 4TB data + 1x 4TB parity, all IronWolf SATA) and am definitely going to add an SSD (primarily for VMs). I also have a few small SAS drives (2x 600GB + 2x 450GB), which I am going to include for various small tasks and which will probably remain unassigned drives (for scratch files). The controller is a Dell H700 RAID adapter in a Dell PowerEdge T310 with 16GB of DDR3 RAM, and I am running gigabit (1Gb/s) ethernet.

      Let me say upfront: I am not concerned with losing a VM or what I call a "scratch file." I -am- concerned about my digital photo library, digital documents, and backups (most of which will be moved from a client/laptop onto the server), which will live on the parity-protected drives. I am not moving (or creating) lots of videos, have few media files (so far), and don't expect that to increase significantly in the next 3-5 years. (I do a little scientific computing and 3D graphics, but nothing like commercial groups do.)

      What I am unclear on is what value I would get out of an SSD used as a disk cache under these conditions/use cases. As I understand it, the unRAID cache is "flushed" when the "mover" program runs (by default, once a night), and until "mover" runs, the files sit on the SSD. Is that correct? Yes, I would expect improved performance with cache when moving smaller files, but I don't really do that all that often. I tend to build up a cluster of files and move them all at once from the client... and more often than not, I need to move files down from the server to work on them, then move the modified/edited (photo) files back up. In fact, I often move larger batches (backups, video files, digital documents, and clusters of digital photos, ~300-600GB in size), and those batches are often larger than the SSD that would be used as the cache - but I also wouldn't care if they take all night to move from the client to the server. Also, if I am moving individual files, I'm more interested in them being on the "NAS drive" than bouncing from a cache drive to the "NAS drive."

      So, I think I'm better off just sizing my SSD for the number of VMs I plan to run - and running the array without cache. Should I rethink that idea? And is there anything else I should consider in sizing the SSD... like Nextcloud/ownCloud, docker apps, or other apps/tools? To compound and confuse this discussion: I have a 120GB SSD now, but I'm thinking I need a 250-512GB SSD, and if I found a compelling reason, I'd just get a 1TB SSD. Also, based on the Dell T310 architecture, the max speed on any one drive is 6Gb/s, and NVMe is not supported as far as I know.
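      Back-of-the-envelope numbers behind my thinking (just a sketch; the mover path below is unRAID's stock location, and the throughput figure is a rough real-world gigabit estimate):

          # The mover can also be kicked off by hand to flush whatever is on the cache
          /usr/local/sbin/mover
          # Rough arithmetic for my batches: 600GB over gigabit at ~110MB/s is about
          # 600000 / 110 ≈ 5500 seconds, i.e. roughly 1.5 hours if the network is the
          # bottleneck - a cache SSD mainly helps when parity writes to the array are
          # slower than the network link.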
  13. No, unRAID found my drives on my T310 with the H700, I just had to set them up individually as RAID 0. The configuration is not straightforward, but "do-able." After my experience, I would have gone with the H200 so that I could put it into an IT mode, but for now, I am fine with the H700.
  14. Jonathan, first, you're awesome. Thank you for that "heads up" - it's much appreciated that others here are helpful and friendly like that. I hope I can return the favor someday. Second, I get it that I'm different. A Dell PowerEdge T310 running as a non-business NAS for photo storage with services (and VMs) in a personal server is the project challenge for me. I might regret some choices I make, but I'm looking for a low-end system (paid for out of my own pocket) that will meet my storage needs, be reliable and be secure.
  15. Noted! I realize this is a potential risk, and probably a significant reason to use the H200 flashed into IT mode as an HBA (which I have read has been done successfully). For now, I plan to take a snapshot of the RAID configuration with my cellphone and keep it in a safe spot. And just to follow up, Jonathan, and to be clear: *IF* the H700 were to fail and I replaced it with another H700 adapter (flashed to the same A12 firmware), I could still assign the same HDDs (via their serial numbers) into the same RAID configuration (drive for drive) and still be able to recover the unRAID array - right? And even if an HDD were to fail, I could still rebuild the array through unRAID by replacing the drive (and initializing it as a new RAID 0 drive). I get that it's not as easy... but unless I missed something, it's still "doable."
  16. So, I managed to update the firmware on the H700 RAID adapter in my PowerEdge T310. One interesting thing to realize: there are different firmware updates for the ADAPTER card (like the one shown) and the INTEGRATED cards used in other systems like the R710. I made a DOS-bootable USB stick, booted into DOS, expanded the update package, transferred it to the USB, and ran the "update.bat" file to update my ADAPTER firmware to version A12. After flashing and rebooting, the T310 and H700 recognized the Seagate 4TB IronWolf SATA drives inserted into the front drive slots - and identified them as "ATA" drives. I am currently setting each drive up individually as RAID 0 through the H700 controller, initializing them, and I plan to let unRAID handle the parity redundancy, as I did with the previous all-Dell-drive setup. That seemed to work okay in unRAID 6.4. I did see a few places that talked about flashing the H700 to "IT" mode, but I've not seen the ADAPTER version successfully flashed to IT mode. More on the H700 cards/adapters here. To get the DOS firmware update tool, go here... (look for the ZPE version, e.g. SAS_RAID_H700A_12.10.6-0001_A12_ZPE.exe )
  17. I've got an H700 in my PowerEdge T310. It worked with an all-Dell 4-drive SAS array, as long as I set each drive up as RAID 0. At that point unRAID could see the drives, format them, and let me set up the array with parity - everything seemed to work fine. I am currently replacing the lower-capacity drives with some 4TB SATA IronWolf drives, and while the H700 sees the drives, they are currently showing up as "Blocked." Some have posted elsewhere that the H700 can be reflashed (with a Dell PERC firmware update) to accept non-Dell drives, but I've not gotten that far yet. If I find more information, I'll post it in the thread I keep for my own unRAID system. Here is a recent posting on the topic: https://forums.servethehome.com/index.php?threads/dell-perc-h700-adapter.16158/
  18. Moving on... The H700 and SAS cable install went well in the T310. All the drives were relatively easy to add in the H700 settings and change over to RAID 0 in prep for unRAID, although I will want to see whether an IT mode is available with an updated H700 firmware load. The H700 was definitely more "zippy" in moving files from my laptop to the server (tested using the ISOs for Win 10 Pro and Ubuntu).

      I've also been watching SpaceInvaderOne's video tutorials, and I have VMs for Win 10 and Ubuntu Studio 16.04 up and running. I need to redo the Win 10 (Pro x64) VM, as it has no internet connection. I have to say that VM is a lot trickier than the Ubuntu one, but it still works. So yes, watch those videos... they are quite good and well done. (Thanks!!!) I did see the video on reverse proxies, and that seems like a good idea for me to implement with this server - I want it to be secure (with https access only, if that's possible!).

      I also picked up my third (new) 4TB IronWolf drive, which will become my new parity drive. I've not installed any of the IronWolf drives in the T310 yet, because I still wanted to tinker with the VMs beforehand. And I got a spare SATA power splitter cable, which I will likely use with SSDs (cache) only - should I still decide to install them. I also noted there are power splitters that tap directly off the SAS drive power leads. I would need another SAS-to-SATA cable from the H700 (B port) to connect up to another 4 data drives - since I still think the 6 SATA connectors on the motherboard are limited to 1.5Gb/s, rather than the 6Gb/s I can get on the H700. About the only thing I may add in the future is more disk space, but I am not anticipating that soon - if at all.

      I may want to run the VMs off the SSDs, as has been suggested, since that would be a far better fit for any VNC/remote connections. But for now, I am thinking the VMs can sit on one of the standalone 600GB SAS drives and get backed up into the IronWolf disk array from time to time. I never got to the Win 7 VM install, but I anticipate fewer issues with it, since I've done a number of those already.
  19. "Checking in, and bellying up." Yes, this will be long and boring read for any experts... but I am writing this for anyone else who happens to be interested in doing something similar, and for my own "fun" of building up an "inexpensive" Xeon 3440 based Dell Poweredge T310 server with unRAID. So the saga of the $99 Dell Poweredge T310 continues. I spent some time playing with unRAID trial version enough to realize that I was in for the investment, and bought the H700 SAS controller to replace the PERC 6ir that came with it. And I bought the "plus" version (for up to 12 drives) of unRAID at $89. To be honest, I was back and forth on this- but decided that limiting myself to 2TB drives - as the 4 main HDDs in the system -was not what I was interested in for my NAS replacement. I wanted to at least get myself to something more like 6 to 8 TB, with some ability to have error correction or drive rebuild. I also wanted to have potentially more than 6 drives available (4 HDD + 1 parity HDD + 1 Cache SDD) just in case performance became an issue. I also wanted some flexibility to add a drive or two for separate storage space for VMs or Media Cloud Storage from the main drive system/storage drives. And the "plus" version of unRAID gave me that flexibility. I also don't expect to be running a huge server farm, so the "Pro" version seemed excessive in terms of needs. After a few minutes at the pay website, I had my email with the updated URL, and the unRAID was upgraded in place on my existing USB stick already installed on the motherboard. I did reboot the system, just to be sure it took, but I don't think I needed to. (Kudos to the Lime Tech designers on that pathway!) I also carefully considered the HDD size in my decision process. (Comments and other views welcome on this. And yes, I could have gone with WD or HGST drives, but I didn't... You can also see why here: https://us.hardware.info/reviews/7265/16/nas-hdd-review-18-models-compared-conclusion ) The Seagate Ironwolf 4TB SATA drives were running $124, while the 6TB version was running $184-190. So, my choice was two 6TB, or three 4TB, giving up one drive for parity. So for 2x6TB =>6TB storage, I would have sat at $368, or for 3x4TB=>8TB, I got for $372. And while if I added another drive, the 6TB drives probably would have been a performance winner (3x6TB=>12TB @ $552) over the 4TB (4x4TB=>12TB, $496), I think I made the better deal for cost, expandability and reliability. (And we could probably argue over getting the WD Red 6TB drives, but I've already opened the Ironwolf drives... so let's not.) So, next to eBay I went, picking up the Dell PERC H700 and a SAS W846K cable kit to tie the existing SAS drive slots/backplane to the H700. (For those not aware, the PERC 6ir has a special SAS cable to the backplane, allowing for 4 drives with the T310) The one nice thing with the H700, I can add more drives (SATA or SAS) with an addition of another SAS to SATA cable (SF-8087?) set - as the H700 has two SAS connectors (A & B, and note you have to use "A" with the first set of drives). The other nice change is that the H700 does full 6Gbs transfer rates. Anyway, total spent for the eBay H700+W846K Cable was $35. The only other downsides I saw with the 6ir to H700 changeover is that I will need to do additional power splitters to get power to any addition SATA (or SAS) drives I add to the system - and I had to reinitialize the existing SAS drives to use it with the new HDD controller. This also meant that any data I had on the drives were gone. 
Fortunately, I had not yet populated them completely with data. (I also found out that the two 450GB drives I picked up were only 3Gbs SAS drives, so those will likely go to eBay at some point, along with the 6ir and the Dell RD1000 drive.) This need to reformat HDD probably wouldn't have happened if I had been replacing the 6ir with another 6ir, or done an H700 to H700 swap, but going from the 6ir to the H700 meant reinitializing and reformatting the HDD drives and losing the few files I had placed on it. In configuring the H700, each HDD drive has to be it's own "RAID 0" for unRAID to be able to address it separately. Not too hard to do, once I deciphered the H700 firmware menu system. But the good thing about this configuration on the Dell T310 is that the 4 main (SATA, 3 x 4TB HDD initially, with 1 of those as parity) drives will still be (hot?) swappable. And I am leaving one HDD bay/slot on the front panel unfilled for now, even though I do have a Dell 600GB SAS drive that I could put in it. I also went with brand new HDD drives, although I saw plenty of SAS 4TB drive lots on eBay that were refurbished or "like new." But here- I don't want to be replacing bad drives with this system, I simply want it to work well and store lots of files (primarily digital photo library which is currently just over 1TB in size). And at some point, I will likely get one more 4TB Ironwolf drive to act as a "hot spare" in case one of the drives fails later on. (Reminder: I need to read up more about adding drives to increase storage space, but I recall that is what unRAID is supposedly good at.) At present, I'm still on the fence about adding another SSD SATA cache drive (I currently have a 120GB SSD SATA on the motherboard SATA B socket/header, but not using it - since it seems to run only at 3Gbs), since the PERC H700 came with 512MB of cache (RAM) memory on the card, this might not be necessary. I did make the decision not get the Dell battery add-on for the H700, partially because the system will live on a battery UPS, and will be set to shut down if the UPS power goes low. After I do some more system burn in with the existing drive array (2x600GB + 2x450GB SAS drives), I will load up the Ironwolf drives and add a couple of VMs to the system to give it a good workout. I really am interested in seeing how the system runs with a Windows 10 Pro VM and then a Windows 7 VM, and some video and photo capture and editing software. (I might want to use one of those 600GB drives for the VMs, dunno.) And later I'll be adding an Ubuntu Linux distro on it as well, likely with Docker. I'm also working on rebuilding a separate Apple Mac Pro 5.1 system, which will be networked into the system, and used for editing video and editing and scanning photos. The two will be connected with GigE switch, as to make the large video file access far less painful.
  20. And I just realized that I have an Intel(R) Xeon(R) X3440 @ 2.53GHz processor... with four cores and 8 threads. No wonder the "X3340" wasn't turning up in my various searches. Whew!!! Update: Tonight I got the internal SATA connections running (they were disabled in the BIOS) and added a 120GB Toshiba SSD for cache. It looks like the internal SATA ports are limited to 2TB and run at 1.5Gb/s each (I likely need to adjust that in the BIOS as well), which also means that picking up the H700 is looking like a fairly sure thing for me. Something seems a bit off (probably the BIOS settings for the internal SATA), as the system now seems to be transferring files a little slower than before - even with the SSD cache included. I hope to get a Win 10 Pro VM up and running tonight, too.
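      A quick way to check what link speed a drive actually negotiated (the device name below is just an example):

          smartctl -i /dev/sdc | grep -i "SATA Version"
          # e.g. "SATA Version is: SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s)" would
          # confirm the onboard port, not the SSD, is the bottleneck.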
  21. 8GB (4x 2GB) DDR3 1067 ECC 1.5V (in a Dell T310) -> upgraded to 16GB (2x 8GB) DDR3 1067 ECC 1.5V (quad rank, in a Dell T310).
  22. That was a nice primer for understanding the importance of the different RAID configurations - thanks! What does get me is that any one of those configurations leaves me with 1TB of drive space, compared to the 1.4TB I have now with unRAID. And to be honest, I don't think the system is going to be taxed that hard, relative to the amount of data I need to protect. On top of that, I plan to (and will) get offsite backups for the really important data. I do like that two drives need to fail before bad things happen, but I think the parity option in unRAID should be fairly robust and cover that case... or do you think otherwise? I get that if I lose the two 600GB drives I am pretty well "hosed"... but how often do two drives fail... nearly simultaneously?
  23. "So, another day, another array." So the T310 is up, and I have all the HDD running without any raid entered into the PERC 6ir controller, and currently building the parity disk on one of the two 600GB drives. Total time until the drives are ready in the array, about 1 hour 45 minutes. Here is what it currently looks like... and yes, I blurred out the drive SNs and the tower IP address. Call me "once bitten, twice shy" on computer security issues. This now gives me about 1.4TB of usable storage space to play with, and validates most of my thoughts regarding the way the drives would work. I'll let this "putter" for a couple of days, while I move on to trying my hand at building some VMs (already have installed a nVidia GT 610 card) and got a Windows 10 Pro lisc to load up. Also I want to try my hand at a Docker App. After that, I will reflash the motherboard bios and see if the SATA interface can pick up any of the SATA drives I have. I did remove the RD1000 drive, and put in a Toshiba 128GB SDD on that SATA cable and used the power cable for it, but the system didn't recognize the SDD. I'll need to investigate that later. That might become a first cache drive, if I get so motivated. The other interesting thing is that the system fan at first ran very high (with the case side panel off, but now appears to be operating much more quietly. Again, not sure why, but imagine it's a Dell T310 setting that I will need to investigate further (and read the manual!) More later, but at this point... I just need to move on and get some other things done around the house.
  24. Got it... thanks for sharing that perspective. I appreciate it better now... a lot!