• Posts

  • Joined

  • Last visited

Everything posted by rollieindc

  1. So, just a boring update. Not much to report lately, as I've been traveling a lot for work and dealing with a "wonky" DSL connection at my house. The Dell T310 server has been running smoothly on unRAID 6.6.6, and hasn't had any real issues since I installed the H200 controller in it. My biggest quandary has been deciding whether I should increase the cache drive size (500GB or 1TB) or bump the RAM up to 32GB. To be honest, I don't need to do either at the moment, and my drive array appears to be running just fine. So, nice and quiet for me. Looking forward to the update on 6.7/6.8 at some point; the new features seem really promising. I still need to work on some VMs, but that's an ongoing project for me.
  2. Jonathanm, just to follow up on this thread: I exchanged the H700 for an H200 flashed over to IT mode. I did this as much to obtain the SMART information on all the attached drives - in the hopes of increasing the reliability of the system through knowledge of that information - as anything else. I am not sure whether the loss of the 512MB of cache memory on the H700 will be a performance hit, but since this system is mostly going to be pulling NAS storage duty, I think the overall performance will be "good enough" for my needs. The ability to work with the drives individually, or replace parts in case of failures, should - as you put it - follow "the normal first steps for recovery", and lets me draw on the knowledge of others (more learned than myself) to increase the chances of any needed recovery. Thanks! - Rollie
  3. Update: 03DEC2018 - Replacing the H700 SAS controller with an H200 flashed into IT mode. New stats: Dell PowerEdge T310 (flashed to latest BIOS) RAM: 16GB ECC quad rank Controller: Dell H200 SAS flashed to IT mode, replacing the H700 (which was flashed to the latest BIOS, with all drives running in RAID 0) Drives: Seagate IronWolf 4TB SATA (1x parity, 3x data) + 2x 600GB Dell + 240GB SSD (for VMs) Note: I installed a three-drive 3.5" bay system in the available full-height 5.25" drive slot. This gives me 7 accessible hot-swappable drive bays; I plan to populate 6 and leave one free for hot swaps. Video: onboard & nVidia GTX 610 card Soundcard: Creative Sound Blaster X-Fi Xtreme Audio PCIe x1 SB1040 sound card. First things first, the really good news on the nVidia GTX 610 video card choice I made is that I should be able to make a Mac High Sierra VM now, and then Mojave later - once nVidia releases drivers for Mojave. The drivers are already out for High Sierra, and since I have a Mac Pro 5.1 running High Sierra that I plan to use for video and photo editing, this should be of great value to me later on. Next, I found a relatively cheap ($26 shipped) Dell H200 SAS RAID card on eBay, from China, and decided to get it. As I understand it, the Dell H200 is an LSI-based 9211-8i card with RAID firmware installed. Installing the LSI "IT" 2118it.bin firmware allows individual access to the drives, additional features, and SMART disk data. The latter is important for determining disk health and tracking things like temperature issues or bit/sector errors. Since this server exists primarily to hold my personal and historical photo library and backups, I need to identify early "disk death" before it happens - and swap out any drive before it fails completely. After two weeks of shipping time, the card finally arrived from China and looked to be in decent shape. (One of the SAS connector shields looked a little bent, but I was able to straighten it with my fingernails.) 
I then read through various descriptions of the process to change it to IT mode, and decided to go for "IT". The process was fairly straightforward, although a little daunting, and instructions are available online. I did use a separate HP DC9700 computer to flash the H200, since some people stated that they had trouble using the T310 for this purpose. In order to do this, I had to remove the back card edge holder, since that system has a small form factor, but the card sat nicely in the case for the time I needed to do the reflashing. I booted the HP computer into MS-DOS from a USB drive that I had made with Rufus, started the diagnostics, found the SAS address, and loaded the IT firmware onto it. The process was, again, fairly straightforward until I tried typing in the H200's SAS address at the line [ C:\> s2fp19.exe -o -sasadd 500xxxxxxxxxxxxx (replace this address with the one you wrote down in the first steps) ]. It took me a while to realize that I needed to enter the address as 16 characters, all hexadecimal (0-9, A-F), and ALL UPPERCASE. The address from the previous steps had hyphens in it and was in lower case, so I fumbled a bit with that, until I got a clue from the flashing software that I needed to use "NO HYPHENS" and "ALL UPPER CASE". Duh - I felt stupid, but after that, the process rolled along quickly without any further issues. If you feel comfortable reflashing computers or cards, this is very similar and should not pose any issues. Just check your syntax and typing before hitting the enter key. I can see this as a potentially big issue, and how some people could have "bricked" their card by making address mistakes. But for me, the reflashing went fine. After rebooting a few times in the process, I had a reflashed H200 card in IT mode. So, I put it into the Dell T310, replacing the H700 with its 512MB of on-card cache. 
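For anyone following along, the address cleanup that tripped me up boils down to a couple of string rules. Here's a little illustrative sketch (just my own example in Python, not part of the actual DOS flashing tool - and the sample address below is made up):

```python
# Illustrative sketch only: normalize a SAS address the way the flash tool
# expects it -- 16 hex characters, no hyphens, all uppercase. The sample
# address is invented; use the one you wrote down from your own card.

def normalize_sas_address(raw: str) -> str:
    """Strip hyphens/whitespace and uppercase a SAS address, then sanity-check it."""
    cleaned = raw.replace("-", "").strip().upper()
    if len(cleaned) != 16 or any(c not in "0123456789ABCDEF" for c in cleaned):
        raise ValueError(f"Not a valid 16-character hex SAS address: {cleaned!r}")
    return cleaned

# e.g. the address as copied from the controller BIOS screen (made-up value):
print(normalize_sas_address("5000-a1b2-c3d4-e5f6"))  # -> 5000A1B2C3D4E5F6
```

Had I run the address through something like this first, I'd have saved myself a few failed attempts at the DOS prompt.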
This is what somewhat bums me out: the H700 has a nice cache on it already, and the H200 is cache-less. But I rebooted, and am in the process of reformatting the array's drives. Yes, they all had to be reformatted. Ugh. This is the last change I am making to the controller. So far, the difference in speed isn't really showing up (yet, if at all - they are both 6Gb/s cards), but the other SMART disk information already is. I can see drive info from any of the drives I want - SAS or SATA - in the drive-related plugins and other diagnostics available, and I can already read the temperature on every drive in the array from the dashboard. With the H700 in a RAID 0 configuration, I could not read the temperatures at all. The formatting process appeared to be definitely faster too. Lastly, I went to a PC salvage store while on travel (hint: it was the "USED COMPUTER" store at 7122 Menaul Blvd NE, Albuquerque, NM 87110, hours: 8am-7pm, phone: (505) 889-0756) and picked up a used Creative Sound Blaster X-Fi Xtreme Audio PCIe x1 SB1040 sound card (for $10). Plop and drop - nothing more was necessary for it to load up in unRAID. I haven't done anything with it yet, but it could help with some of the audio files and the way the VMs run. If I get really bored and turn the server into a home theater PC server, that 5.1 sound option will be nice to have. Oh frack... the parity check is running again... that will be about 6 hours of disk spinning. But that should be the first real parity check, serving as a baseline. Guess I should get used to it. Next up: building a bunch of VMs and starting to use the system for storing files and backups.
  4. Well, that's actually my point. That's not all I want. And yes, I realize I may not get what I want. My goals were to: 1) secure/encrypt the data pathway into the server, and 2) secure/hide the IP address of the home server (as much as possible), and close the data pathway to avoid tracerouting into the rest of the home network. (1) would be easy enough to do, just using a VPN tunnel from an external client into the server. But connecting into this tunnel directly requires an open, insecure port into my network from the router in order to establish the connection. Password and SSL protected, maybe - but that still leaves the port open, and other ports could be pinged and then interrogated by anyone on the internet. I trust my ISP and their router about as far as I could throw them. So if all I wanted was a direct data connection, that could be accomplished by opening the port on the router and hoping the open connection isn't found and then hacked. So to accomplish (2), my thought had been (and yes, I recognize I could be wrong) that I would need to establish a "closed" secure data path/route from the home server to the website subdomain using OpenVPN through NordVPN. The subdomain is one that I own/control which is not on my home network. In that way, a secure connection could be established from a client (e.g. my laptop) going through a secure VPN tunnel, and connecting to the other tunnel, completing the connection through the subdomain name. Since no trace from the subdomain to the server could be completed without the data connection being established first, that should essentially "hide" any (secure) connected port on my home network. Maybe I am overthinking it... and there may be too many layers of encryption... but I am still "thinking this out" and looking for other, more knowledgeable ideas/views. My other idea would be to just build a VM.
  5. Thanks Jonathanm, I've been using NordVPN from my client side for a while now to connect to various servers. And yes - I was thinking that an OpenVPN docker would be the answer for keeping my home network (and home IP address) as secure as possible; it would connect my unRAID server to the NordVPN servers, permitting me to then establish the most secure tunnel from my "offsite" client/laptop (at a coffee shop) into the server (sitting at home)... but perhaps I am misunderstanding something with the protocols(?). I really don't want an access point into my entire network (which I would gain by going through the router); I only want the unRAID server "accessible", and preferably only by means of a good SSL connection. My other option would be going through a domain I have, making my server a subdomain - or going the route of connecting via DuckDNS. My concern with that route is that my home IP address would be traceable by pinging and tracing the subdomain. I thought (perhaps incorrectly) that the Docker VPN would mask the IP address until a VPN connection was established. Dunno... confused now. Thankfully, there is no rush, and I've been happily uploading lots of files to my "new" server. 😀
  6. Update: Thursday, 13SEP2018 & 15OCT2018. I have been running unRAID 6.4 (now 6.5, soon 6.6.1) for a while now on the Dell PowerEdge T310, but I've been doing some hardware upgrades. So let me see if I can show where I started, and where I am going. Dell PowerEdge T310 (flashed to latest BIOS) RAM: 8GB, now 16GB ECC quad rank Controller: SAS 6iR, now SAS H700 (flashed to latest BIOS, all drives running in RAID 0) Drives: Seagate IronWolf 4TB SATA (1x parity, 2x data, now up to 3x data) + 2x 600GB Dell/Seagate SAS + 120GB, now 240GB SSD (for VMs). I also installed a three-drive 3.5" bay system into the available full-height 5.25" drive slot. This gives me 7 accessible hot-swappable drives, which should be more than enough for me. I will be moving the VMs from the SSD to the 600GB Cheetah SAS drives, since the speed on those should be enough for anything I will be doing. Video: onboard & nVidia GTX 610 card (I wanted a PhysX/physics engine and some CUDA cores for digital rendering and transcoding). I'm fairly pleased with my overall stability, and my next steps will be doing some configuration backups & network throughput testing, and adding a UPS for when power is lost. I will also need a way to access the system from offsite, preferably through my VPN service (NordVPN/OpenVPN).
  7. I could use some advice - so thanks upfront to anyone offering their opinions. I am a bit confused about the advantage/utility of a cache drive utilizing an SSD. I have a small server for home use (3x 4TB data + 1x 4TB parity, all IronWolf SATAs) and am definitely going to add an SSD (primarily for VMs). I also have a few small SAS drives (2x 600GB + 2x 450GB), which I am going to include for various small tasks; they will probably remain unassigned drives (for scratch files). The controller is a Dell H700 RAID adapter in a Dell PowerEdge T310 with 16GB of DDR3 RAM, and I am running gigabit (1Gb/s) ethernet. Let me say upfront: I am not concerned with losing a VM or what I call a "scratch file." I -am- concerned for my digital photo library and digital documents and backups (most of which will come by moving files from a client/laptop onto the server), which will be on the drives with parity covering them. I am not moving (or creating) lots of videos, have few media files (so far), and am not expecting that to increase significantly for the next 3-5 years. (I do a little scientific computing and 3D graphics, but nothing like commercial groups do.) But what I am unclear on is what value I would get out of an SSD used for disk cache, considering these conditions/use cases. As I understand it, most of the time the unRAID cache is "flushed" when the "mover" program runs (the default is once a night), and until "mover" runs, the files "sit" on the SSD. Is that correct? And yes, I would expect to see improved performance with cache when moving smaller files, but I don't really do that all that often. I tend to "build up" a cluster of files and move them all at once from the client... and more often than not, I need to move files down from the server to work on them. I then move the modified/edited (photo) files back up to the server. 
In fact, I often need to move larger sets of files (backups, video files, digital documents, and clusters of digital photos ~300-600GB in size), and those sets are larger than the SSD that would be used as a cache. But I also wouldn't care if they move from the client to the server overnight. Also, if I am moving individual files, I'm more interested in them being on the "NAS drive" than bouncing from a cache drive to the "NAS drive". So, I think I'm better off just sizing my SSD large enough for the number of VMs I plan to use/run, and running the array without cache. Should I rethink that idea? And is there anything else I should consider in sizing my SSD... like Nextcloud/ownCloud considerations, docker apps, or other apps/tools? To compound and confuse this discussion: I have a 120GB SSD now, but am thinking I need a 250-512GB SSD, and if I were to find a compelling reason, I'd just get a 1TB SSD. Also, based on the Dell T310 architecture, the max speed on any one drive is 6Gb/s, and NVMe is not supported as far as I know.
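To put a rough number on why an SSD cache doesn't buy me much for these big moves, here's a quick back-of-the-envelope sketch. The ~112 MB/s figure for usable gigabit throughput is my own assumption, not a measurement:

```python
# Rough transfer-time arithmetic for my use case. Gigabit ethernet tops out
# around 112 MB/s of payload (assumed), which is below what either an SSD
# cache or the spinning array can sustain -- so for large moves the network,
# not the disks, is the bottleneck, and a cache drive mostly just adds a hop.

GIGABIT_MBPS = 112  # assumed usable payload on 1Gb/s ethernet, in MB/s

def transfer_hours(size_gb: float, throughput_mbps: float = GIGABIT_MBPS) -> float:
    """Hours to move size_gb of data at the given MB/s."""
    return (size_gb * 1000) / throughput_mbps / 3600

for size in (300, 600):  # my typical photo-cluster move sizes, in GB
    print(f"{size} GB over gigabit: ~{transfer_hours(size):.1f} hours")
```

So a 600GB cluster is an hour and a half either way, cache or no cache, which is fine if it runs overnight.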
  8. No, unRAID found my drives on my T310 with the H700, I just had to set them up individually as RAID 0. The configuration is not straightforward, but "do-able." After my experience, I would have gone with the H200 so that I could put it into an IT mode, but for now, I am fine with the H700.
  9. Jonathan, first, you're awesome. Thank you for that "heads up" - it's much appreciated that others here are helpful and friendly like that. I hope I can return the favor someday. Second, I get it that I'm different. A Dell PowerEdge T310 running as a non-business NAS for photo storage with services (and VMs) in a personal server is the project challenge for me. I might regret some choices I make, but I'm looking for a low-end system (paid for out of my own pocket) that will meet my storage needs, be reliable and be secure.
  10. Noted! I realize this is a potential risk, and probably a significant reason to use the H200 flashed into IT mode and used as an HBA (which I have read has been done successfully). For my part, I plan to take a snapshot of the RAID configuration with my cellphone, and place it in a safe spot. And just to follow up, Jonathan, and to be clear: *IF* the H700 were to fail, and I replaced it with another H700 adapter (flashed to the same A12 firmware), I could still assign the same HDDs (via the serial numbers) into the same RAID configuration (drive for drive) and still be able to recover the unRAID array - right? And even if I were to have an HDD fail, I could still rebuild the array through unRAID by replacing the drive (and initializing it as a new RAID 0 drive). I get that it's not as easy... but unless I missed something, it's still "doable."
  11. So, I managed to update the firmware in the H700 RAID adapter in my PowerEdge T310. There was one interesting thing to realize - there are different firmware updates for the ADAPTER card (like the one shown) and the INTEGRATED cards used in other systems like the R710. I made up a DOS boot USB stick, booted into DOS, expanded the update package, transferred it to the USB stick, and ran the "update.bat" file to update my ADAPTER firmware to version A12. After flashing and rebooting, the T310 and H700 recognized my Seagate 4TB IronWolf SATA drives that were inserted into the front drive slots - and identified them as "ATA" drives. I am currently setting each drive up individually as RAID 0 through the H700 controller, initializing them, and I plan to let unRAID deal with the RAID/parity redundancy like I did with the previous setup I had with all Dell drives. That seemed to work OK in unRAID ver 6.4. I did see a few places that talked about the potential of flashing the H700 into "IT" mode, but I've not seen the ADAPTER version successfully flashed into "IT" mode. More on the H700 cards/adapters here. To get the DOS firmware update tool, go here... (look for the ZPE version, e.g. SAS_RAID_H700A_12.10.6-0001_A12_ZPE.exe)
  12. I've got an H700 in my PowerEdge T310. It worked with an all-Dell 4 SAS drive array, as long as I set each drive up as a RAID 0. At that point unRAID seemed to get to the drives, format them, and allow me to set up the array with parity. Everything seemed to work fine. I am currently replacing the lower capacity drives with some 4TB SATA IronWolf drives - and while the H700 sees the drives, they are currently showing up as "Blocked". Some have posted elsewhere that the H700 can be reflashed (with a Dell PERC firmware update) to accept non-Dell drives, but I've not gotten that far yet. If I find some more information, I'll post it in the thread that I keep for my own unRAID system. Here is a recent posting on the topic:
  13. Moving on... So the H700 and SAS cable install went well in the T310. All the drives were relatively easy to add in the H700 settings - and change over to RAID 0 in prep for unRAID - although I will want to look to see if there is an IT mode available with an updated H700 firmware load. The H700 was definitely more "zippy" in moving files from my laptop into the server (I tested this using the ISOs for Win 10 Pro and Ubuntu). I've also been watching SPACEINVADERONE's video tutorials, and I have VMs for Win 10 and Ubuntu Studio 16.04 up and running. I need to redo the Win 10 (Pro x64) VM, as there is no internet connection to it. I have to say that VM is a lot trickier than the one for Ubuntu, but it still works. So, yes, watch those videos... they are quite good, and well done. (Thanks!!!) I did see the video on the topic of a reverse proxy, and that seems like a good idea for me to implement with this server. I want it to be secure (with https access only, if that's possible!). I also picked up my third (new) 4TB IronWolf drive, which will become my new parity drive. I've not installed any of the IronWolf drives into the T310 yet, because I still wanted to tinker with the VMs beforehand. And I got a spare SATA power splitter cable, which I will likely use with SSDs (cache) only - should I still decide to install them. The other thing I noted was that there are power splitters I could get that would run directly off the SAS drive power leads. I would need another SAS-to-SATA cable from the H700 (B port) to connect up to another 4 drives for data - since I still think the 6 SATA connectors available on the motherboard are limited to 1.5Gb/s, rather than the 6Gb/s I can get on the H700. About the only thing I may be adding in the future will be more disk space, but I am not anticipating that soon - if at all. I may want to run the VMs off the SSDs, as is being suggested, since that would be a far better fit for any VNC/remote connections. But for now, I am thinking that the VMs can sit on one of the SAS 600GB drives as a standalone drive - and get backed up into the IronWolf disk array from time to time. And I never got to the Win 7 VM install, but I anticipate fewer issues with it, since I've done a number of those already.
  14. "Checking in, and bellying up." Yes, this will be a long and boring read for any experts... but I am writing this for anyone else who happens to be interested in doing something similar, and for my own "fun" of building up an "inexpensive" Xeon X3440-based Dell PowerEdge T310 server with unRAID. So the saga of the $99 Dell PowerEdge T310 continues. I spent enough time playing with the unRAID trial version to realize that I was in for the investment, and bought the H700 SAS controller to replace the PERC 6iR that came with it. I also bought the "Plus" version (for up to 12 drives) of unRAID at $89. To be honest, I went back and forth on this - but decided that limiting myself to 2TB drives as the 4 main HDDs in the system was not what I was interested in for my NAS replacement. I wanted to at least get myself to something more like 6 to 8TB, with some ability to have error correction or drive rebuild. I also wanted to have potentially more than 6 drives available (4 data HDD + 1 parity HDD + 1 cache SSD) just in case performance became an issue, and some flexibility to add a drive or two of separate storage space for VMs or media cloud storage apart from the main storage drives. The "Plus" version of unRAID gave me that flexibility. I also don't expect to be running a huge server farm, so the "Pro" version seemed excessive in terms of needs. After a few minutes at the pay website, I had my email with the updated URL, and unRAID was upgraded in place on the existing USB stick already installed on the motherboard. I did reboot the system, just to be sure it took, but I don't think I needed to. (Kudos to the Lime Tech designers on that pathway!) I also carefully considered the HDD size in my decision process. (Comments and other views welcome on this. And yes, I could have gone with WD or HGST drives, but I didn't... you can also see why here.) The Seagate IronWolf 4TB SATA drives were running $124, while the 6TB version was running $184-190. 
So, my choice was two 6TB, or three 4TB, giving up one drive for parity. For 2x 6TB => 6TB of storage, I would have sat at $368; for 3x 4TB => 8TB, I paid $372. And while, if I added another drive, the 6TB drives probably would have been a performance winner (3x 6TB => 12TB @ $552) over the 4TB (4x 4TB => 12TB @ $496), I think I made the better deal for cost, expandability and reliability. (And we could probably argue over getting the WD Red 6TB drives, but I've already opened the IronWolf drives... so let's not.) So, next to eBay I went, picking up the Dell PERC H700 and a SAS W846K cable kit to tie the existing SAS drive slots/backplane to the H700. (For those not aware, the PERC 6iR uses a special SAS cable to the backplane, allowing for 4 drives with the T310.) One nice thing with the H700: I can add more drives (SATA or SAS) with the addition of another SAS-to-SATA (SFF-8087) cable set, as the H700 has two SAS connectors (A & B - and note that you have to use "A" with the first set of drives). The other nice change is that the H700 does full 6Gb/s transfer rates. Anyway, the total spent for the eBay H700 + W846K cable was $35. The only other downsides I saw with the 6iR-to-H700 changeover are that I will need additional power splitters to get power to any additional SATA (or SAS) drives I add to the system - and that I had to reinitialize the existing SAS drives to use them with the new controller. This also meant that any data I had on the drives was gone. Fortunately, I had not yet populated them completely with data. (I also found out that the two 450GB drives I picked up were only 3Gb/s SAS drives, so those will likely go to eBay at some point, along with the 6iR and the Dell RD1000 drive.) This need to reformat probably wouldn't have happened if I had been replacing the 6iR with another 6iR, or had done an H700-to-H700 swap, but going from the 6iR to the H700 meant reinitializing and reformatting the drives and losing the few files I had placed on them. 
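For anyone checking my math on the drive choice, here's the cost-per-usable-terabyte arithmetic (a quick sketch using the prices I quoted above, with a single parity drive assumed):

```python
# Cost-per-usable-TB comparison for the drive options I was weighing.
# Prices are the ones I saw at the time: $124 per 4TB, $184 per 6TB IronWolf.
# unRAID's single-parity layout gives up exactly one drive's capacity.

def usable_tb(n_drives: int, tb_each: int, parity: int = 1) -> int:
    """Usable capacity after giving up `parity` drives to parity."""
    return (n_drives - parity) * tb_each

def cost_per_usable_tb(n_drives: int, tb_each: int, price_each: float,
                       parity: int = 1) -> float:
    """Total spend divided by usable capacity, in $/TB."""
    return n_drives * price_each / usable_tb(n_drives, tb_each, parity)

print(f"3x 4TB: {usable_tb(3, 4)}TB usable @ ${cost_per_usable_tb(3, 4, 124):.2f}/TB")
print(f"2x 6TB: {usable_tb(2, 6)}TB usable @ ${cost_per_usable_tb(2, 6, 184):.2f}/TB")
```

The 4TB route comes out ahead per usable terabyte, which is why the extra $4 of total spend didn't bother me.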
In configuring the H700, each drive has to be its own "RAID 0" for unRAID to be able to address it separately. Not too hard to do, once I deciphered the H700 firmware menu system. The good thing about this configuration on the Dell T310 is that the 4 main drives (SATA, 3x 4TB HDD initially, with 1 of those as parity) will still be (hot?) swappable. And I am leaving one HDD bay/slot on the front panel unfilled for now, even though I do have a Dell 600GB SAS drive that I could put in it. I also went with brand new drives, although I saw plenty of SAS 4TB drive lots on eBay that were refurbished or "like new." But here - I don't want to be replacing bad drives with this system; I simply want it to work well and store lots of files (primarily my digital photo library, which is currently just over 1TB in size). At some point, I will likely get one more 4TB IronWolf drive to act as a "hot spare" in case one of the drives fails later on. (Reminder: I need to read up more about adding drives to increase storage space, but I recall that is what unRAID is supposedly good at.) At present, I'm still on the fence about adding an SSD SATA cache drive (I currently have a 120GB SATA SSD on the motherboard SATA B socket/header, but am not using it, since it seems to run at only 3Gb/s); since the PERC H700 came with 512MB of cache (RAM) memory on the card, this might not be necessary. I did make the decision not to get the Dell battery add-on for the H700, partially because the system will live on a battery UPS, and will be set to shut down if the UPS power goes low. After I do some more system burn-in with the existing drive array (2x 600GB + 2x 450GB SAS drives), I will load up the IronWolf drives and add a couple of VMs to the system to give it a good workout. I really am interested in seeing how the system runs with a Windows 10 Pro VM and then a Windows 7 VM, and some video and photo capture and editing software. 
(I might want to use one of those 600GB drives for the VMs, dunno.) Later I'll be adding an Ubuntu Linux distro as well, likely with Docker. I'm also working on rebuilding a separate Apple Mac Pro 5.1 system, which will be networked into the server and used for editing video and editing and scanning photos. The two will be connected with a GigE switch, to make large video file access far less painful.
  15. And I just realized that I have an Intel(R) Xeon(R) X3440 @ 2.53GHz processor... with four cores and 8 threads. No wonder the "X3340" wasn't found in various searches. Whew!!! Update: Tonight I got the internal SATA connections running (they were disabled in the BIOS), and added a 120GB Toshiba SSD for cache. It looks like the internal SATA ports are limited to 2TB drives and run at 1.5Gb/s each (I likely need to adjust that in the BIOS as well), but that also makes the H700 something I will fairly certainly need to get. Something seems a bit unusual (probably the BIOS setting for the internal SATA), as the system now seems to be transferring files a little slower than before - even with the SSD cache included. I hope to get a VM of Win 10 Pro up and running tonight, too.
  16. 8GB (4x 2GB) DDR3-1066 ECC 1.5V (in a Dell T310); upgraded to 16GB (2x 8GB) DDR3-1066 ECC 1.5V (quad rank).
  17. That was a nice primer for me to understand the importance of the different RAID configurations. Thanks! What does get me is that any one of those configurations leaves me with 1TB of drive space - compared to the 1.4TB I have now with unRAID. And to be honest, I don't think the system is going to be taxed that hard, compared with the amount of data I need to protect. On top of that, I plan to (and will) get offsite backups for really important data. And I do like that two drives need to fail in order for bad things to happen, but I think the parity option in unRAID should be fairly robust and cover that case... or do you think otherwise? I get that if I lose the two 600GB drives, I am pretty well "hosed"... but how often do two drives fail... nearly simultaneously?
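Just to put a very rough number on that "two drives at once" question, here's a back-of-the-envelope sketch. The failure rate and rebuild window below are pure assumptions on my part, and real drive failures tend to be correlated (same batch, same heat, same power event), so treat this as an optimistic lower bound:

```python
# Back-of-the-envelope odds of a second drive dying while the first is being
# rebuilt, assuming independent failures. All three inputs are assumptions.

AFR = 0.03           # assumed annualized failure rate per drive (~3%)
REBUILD_DAYS = 2     # assumed time to rebuild onto a replacement drive
OTHER_DRIVES = 3     # remaining data drives spinning during the rebuild

p_one = AFR * (REBUILD_DAYS / 365)        # one given drive fails in the window
p_any = 1 - (1 - p_one) ** OTHER_DRIVES   # at least one of the others fails
print(f"~{p_any:.4%} chance of a second failure during a rebuild")
```

Under those (independent-failure) assumptions it's a small fraction of a percent per rebuild, which is why single parity plus offsite backups feels like enough for my data.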
  18. "So, another day, another array." So the T310 is up, and I have all the HDDs running without any RAID configured in the PERC 6iR controller, and I'm currently building the parity disk on one of the two 600GB drives. Total time until the drives are ready in the array: about 1 hour 45 minutes. Here is what it currently looks like... and yes, I blurred out the drive serial numbers and the tower IP address. Call me "once bitten, twice shy" on computer security issues. This now gives me about 1.4TB of usable storage space to play with, and validates most of my thoughts regarding the way the drives would work. I'll let this "putter" for a couple of days, while I move on to trying my hand at building some VMs (I already have a nVidia GT 610 card installed) and a Windows 10 Pro license to load up. I also want to try my hand at a Docker app. After that, I will reflash the motherboard BIOS and see if the SATA interface can pick up any of the SATA drives I have. I did remove the RD1000 drive, put a Toshiba 128GB SSD on that SATA cable, and used its power cable, but the system didn't recognize the SSD. I'll need to investigate that later; it might become a first cache drive, if I get so motivated. The other interesting thing is that the system fan at first ran very high (with the case side panel off), but now appears to be operating much more quietly. Again, I'm not sure why, but I imagine it's a Dell T310 setting that I will need to investigate further (and read the manual!). More later, but at this point... I just need to move on and get some other things done around the house.
  19. Got it... thanks for sharing that perspective. I appreciate it better now... a lot!
  20. Ok, time for an update. Lots of lessons learned, some new hardware, and getting up to speed. For many this will probably seem elementary, but for me, this is a log of discovery. First... crud... I bought 8GB (2x 4GB) of DIMMs off eBay that won't work with the Dell T310. It seems the T310 is very particular about the type and density of the RAM chips used. Lesson learned: get and read the server manual. I will resell or use the memory elsewhere. But it also means I have to replace the current 8GB in order to go above 12GB on the T310. Moving to 12GB might be a good option, because I have a hard time coming up with reasons to need a VM with more than 8GB of memory. (4GB for unRAID is still huge from what I can read, and Windows 10 should play well with 8GB.) And hey, I'm not Linus Media Group... (thank goodness! Sorry Linus!) Hardware-wise, I got two additional drives on rails (450GB SAS, 10K) that I installed in the T310 and started to play with. The PERC 6iR SAS controller needs an update flash, and I was befuddled by the RAID configuration, as RAID 0 & 1 needed drive pairs to enable the virtual drives. So I set up two RAID 0 drives (600+450), ending up with 2x ~990GB virtual drives. Performance on my network still seemed very snappy and quick. Soooo... well, "duh." I didn't realize that you could also eliminate the virtual (RAID) drives on the 6iR and just address each drive from within unRAID individually. So that will be my next logical step (watch for the next update). But I was able to run drive checks on everything and got zero errors. (Yea!) The SAS 6iR is limited to 3Gb/s SAS and an individual HDD max size of about 2.2TB (I think; I still need to confirm that). The 3Gb/s speed alone is probably why most would move to a Dell H200 or H700 controller, as they run SAS or SATA at 6Gb/s and also allow for drives greater than 2.2TB. It also looks like they have an option for an onboard battery to maintain their on-card cache memory in case of power failures. 
So (academically speaking) having 2 drives in RAID 1 could provide the equivalent of 6Gb/sec throughput (on reads) plus redundancy. That "might" be a near-term "good enough" cache setup for a home server system, just keeping the 6iR. Plus potential replacements on eBay are cheap ($20-30) and plentiful. But the max on the 4 drive rails would be limited to 4TB of storage in RAID 1 (4x 2TB, at 50% usable for RAID 1). Great for reliability and speed, but cruddy in capacity. Yeah, I think I will need more than that. And while newer SSDs would be faster, especially with a faster controller... that will have to be a down-the-road "learn and burn" exercise, much like the RAM memory experience. (Maybe when 4TB SSDs become super cheap in 10 years... or quantum computing for Windows arrives!)

Even so, with just the 6iR, I should still be able to replace all the current drives with 2TB drives on unRAID and reach 6TB (with one 2TB acting as parity) without needing to buy anything more. The 6iR limits me to 2TB drives at the "top end," and moving to the H200 or H700 (I think) would allow me to use the 2x 4TB Seagate Ironwolf drives that I have in my current NAS - on the T310's hot-swap rail system. (And yes, I'd have to migrate that data!)

For me, as this is a home server, I wanted to dig a little further into the T310, as it also has 6x SATA connectors on the motherboard. These are also rated at 3Gb/sec throughput (I think this is a hardware limit on the motherboard, but I need to flash-upgrade the BIOS here too, including the integrated motherboard SATA drive controller). Currently, one goes to the DVD/RW drive and one to the RD1000 drive. So I "could" also put up to 5x SATA drives on the board (keeping the DVD/RW) and just sell the RD1000 drive. 
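Since I keep doing this capacity math in my head, here's a quick shell sanity check of the trade-off above - just arithmetic, using the four drive bays and the 2TB-per-drive ceiling of the 6iR:

```shell
#!/bin/sh
# Usable capacity: 4 drive bays, 2TB drives (the 6iR's practical max).

DRIVES=4
SIZE_TB=2

# RAID 1 mirrored pairs: only half the raw capacity is usable.
RAID1_TB=$(( DRIVES * SIZE_TB / 2 ))

# unRAID: one drive holds parity, everything else is usable data.
UNRAID_TB=$(( (DRIVES - 1) * SIZE_TB ))

echo "RAID 1 usable:  ${RAID1_TB} TB"   # 4 TB
echo "unRAID usable:  ${UNRAID_TB} TB"  # 6 TB
```

Same four bays, but unRAID's single-parity layout nets 2TB more usable space than mirroring everything - which is part of why I'm leaning that way.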
The cartridges for the RD1000, even on eBay, are not cheap at $250 each (none came with the system), and I'm thinking a hot-swappable "generic" drive tray with a SATA drive will be a better use of money for offsite (safe deposit box) storage of critical home & photo library files. (A good 8TB drive is less than $170!) Plus, if it takes a couple of days to make a backup... I am ok with that. (Reminder: I need to get a UPS.)

So I am going to *think* about my options, and look at the H700 ($40 on eBay) as a near-term option to let me use the current Dell HDD rails and ultimately go to a 4x 4TB hot-swappable 12TB NAS "on rails" configuration at 6Gb/sec - with offsite disk storage - without having to make up weird power cables. I might ultimately need a cache SSD... but there again, with unRAID, even if I start editing and creating home video (probably only HD 1080p) on the server, I might have enough throughput for most tasks, including VMs. (Have I mentioned I have an 8-core (2x 4-core) Xeon Mac Pro 5.1 that will likely have that dedicated task, as well as any audio work I need?) And if I did need SSDs for cache, I have PCIe slots to hang the newer M.2 SSDs off a single PCIe card. And I think... with no need for another power cable. (Win-win!) Now, I've typed enough for tonight... questions and comments welcomed!
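As a gut check on that "couple of days" guess, here's some rough transfer-time math - assuming a sustained ~150MB/sec sequential write to a single SATA disk, which is optimistic (network or controller limits could easily halve it):

```shell
#!/bin/sh
# Rough full-backup time estimate: data size / sustained write speed.

SIZE_TB=8         # the hypothetical 8TB offsite drive, filled completely
MB_PER_SEC=150    # optimistic sustained sequential write for one HDD

# 1 TB ~= 1,000,000 MB, so total seconds = TB * 1e6 / (MB/sec)
SECONDS_TOTAL=$(( SIZE_TB * 1000000 / MB_PER_SEC ))
echo "~$(( SECONDS_TOTAL / 3600 )) hours for a full ${SIZE_TB}TB pass"
```

So under a day in the best case; with real-world overhead, "a couple of days" for a full offsite pass sounds about right.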
  21. Hi, I'm a newbie on unRAID, just building my first system, so I'm a bit confused by the path you suggest - and hoping you can help me understand. You wrote: "The parity drive(s) in unRAID must be the largest or equal to the other drives in the system. You can't set up a parity protected system with your 2 4TB drives and add the 6TB drives later without rebuilding parity."

I guess what confused me is, given the existing data is 3.5TB, my path would be:
1) Back the data up on a 4TB drive (archived offsite copy), then
2) copy it (a working copy of all the data) to the second 4TB drive, then
3) take the SSD to make the unRAID cache,
4) use the 4x 6TB drives to make 4a) the unRAID parity drive (1x 6TB) and 4b) three data drives (3x 6TB) for the build.
5) Then, migrate the data from the "working copy" (4TB) drive. Once everything is confirmed as "working" -
6) reformat the 4TB "working copy" drive and add it to the unRAID array...

...making for a 22TB unRAID array (3x 6TB + 4TB), a 128GB SSD as cache, and a 4TB drive as offsite backup. Or do I have a flawed concept of how to do it best/safely? (I appreciate any direction/guidance... I'm still learning!) -Rollie in Washington DC USA (age 57)
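To double-check my own math on that last step, a quick shell sanity check - drive sizes as above, with unRAID's rule that parity must be at least as large as the biggest data drive:

```shell
#!/bin/sh
# Final array in the plan above: 1x 6TB parity, 3x 6TB data, plus the
# reclaimed 4TB "working copy" drive added as a fourth data disk later.

PARITY_TB=6                       # must be >= the largest data drive
DATA_TB=$(( 6 + 6 + 6 + 4 ))      # usable space = sum of data drives only

echo "Parity: ${PARITY_TB} TB"
echo "Usable: ${DATA_TB} TB"      # 22 TB
```

The 4TB drive can join the array later without touching parity sizing, since 4TB is less than the 6TB parity drive.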
  22. Yea... I am an old timer (does that make me a curmudgeon?)... (am currently 57). I started with a 110 baud teletype and FORTRAN IV on a PDP-8 (even did punch tape, then punch cards)... then worked on a Trash-80 and a Sinclair at home, then a TRS-80 Color Computer, and ordered one of the first IBM XTs (pre-8087 add-on) for my office. We went nuts with it in the engineering department. So I got the call to build the next big system (DEC VMS based, for finite element codes)... and then had to run it. 24/7 was a pain in the butt. But... yeah. I used to program matrix inversion subroutines in Pascal in college. It was an esoteric program the prof wanted, but it never worked quite right. Never understood why... but I aced the exams, so he had to pass me (with an A-). The T310 is probably overkill for what I am doing... but... heck... unRAID just looks cool (and better than any RAID options on a NAS).
  23. So, I've been a techie for a long time, sometimes a sysop (ran VAX/VMS 11/785 & 8800) - and so I was really interested in unRAID to replace my NAS for my home networking (mainly for semi-pro photography) and to run some VMs and apps. I found a good "excessed" Dell PowerEdge T310: single Xeon X3340 (4 cores) @ 2.53GHz with 8GB RAM (DDR3/1066), 2x 600GB Seagate Cheetah SAS 10K drives, and dual 400 watt power supplies (for less than $100). I adjusted the boot to do a BIOS boot from a 4GB SanDisk Cruzer Micro USB drive I had sitting in my pocket, and downloaded the unRAID v6.5 trial. The disk controller (PERC 6i) was set to RAID 1, but I booted the system the way it was. Adjusted the network IPv4 address, prepped the USB drive... booted... bang... up and running! Nice.

Things I want to do:

Hardware:
- Add 2x 4TB Ironwolf SATA drives (have these already in a current NAS)
- Add a 4TB or larger Ironwolf parity SATA drive
- Add +8GB RAM for VMs
- Add SSD cache SATA drives (2x 32 or 64GB)
- Add a GPU video card (nVidia?)

Stuff/automation - gotta have:
- Fileserver/NAS/personal cloud (primary use)
- Secure document archive (PDFs, etc)
- Media server (music, home videos & movies, Plex?)

Like to have:
- Photo website (Drupal, maybe)
- Run VMs for Win 98, XP, 7, 10 and Ubuntu Studio
- Remote Desktop into a VM
- Maybe a Minecraft server for the daughter...

Amazing if I could do it:
- Run apps in Docker to aggregate research (work) articles (IFTTT?)
- Scientific code runner (e.g. Blender 3D, finite element codes, etc)

Already pleased with the speed of the system. Flexibility. Updating it looks super simple.

Questions: [pointers to other best forum threads appreciated]
- The six SATAs on the motherboard... will they support 4TB or larger drives?
- Any way to check the life use of existing SAS drives?
- Replace the PERC 6i with an H700? Wondering if I should just add some 2TB SAS 7.2K enterprise drives from eBay instead of upgrading it. They look really cheap right now. (Have 4 slots, 2 occupied.)
- Have a DVD RW drive (one SATA port)... best movie transcoder pathway?
- The system has one Dell RD1000 drive slot. No drive in it. Seen many on eBay. Any real value, or sell it? Am thinking it might be a good backup at an offsite location (safe deposit box)... dunno.

Oh, and I have Verizon DSL (it beats Comcrap, which is what's available here). Thanks for reading... thoughts? Rollie
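PS - on my own SMART question above, here's a sketch of what I plan to try. It assumes the smartmontools package is installed, /dev/sda is just a placeholder device name, and drives hidden behind a RAID virtual disk may not pass SMART data through at all (one reason folks flash controllers to IT mode):

```shell
#!/bin/sh
# Pull SMART wear/health data from a drive (needs smartmontools).
# Usage: ./smart_check.sh /dev/sdX  -- /dev/sda below is only a default.

DEV="${1:-/dev/sda}"

# Bail out quietly if smartctl is missing or the device isn't present.
if ! command -v smartctl >/dev/null 2>&1 || [ ! -b "$DEV" ]; then
    echo "smartctl not installed or $DEV is not a block device" >&2
    exit 0
fi

smartctl -i "$DEV"    # identity: vendor, model, serial, capacity
smartctl -H "$DEV"    # overall health self-assessment

# Attributes worth watching for "drive life": hours, temps, bad sectors.
smartctl -A "$DEV" | grep -i -E 'power_on|temperature|realloc|pending'
```

Note that SAS drives report health through SCSI log pages, so the output layout differs from SATA attribute tables; `smartctl -a /dev/sdX` dumps everything either way.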