rollieindc

Members
  • Content Count

    24
  • Joined

  • Last visited

Community Reputation

1 Neutral

1 Follower

About rollieindc

  • Rank
    Member

  1. rollieindc

    NAS build for photo/video archive

    You'll like the flexibility and capabilities of unRAID. I've been impressed with all the tools available (as a semi-pro/amateur photographer), and I run multiple OSs (Win 7, Win 10, macOS, Linux, etc.). I like your specs, but I'd suggest adding a separate LSI SATA controller with cache on PCIe. I've never been happy with onboard SATA controllers, but that's just me. If a discrete card fails, you can simply replace it; if your onboard SATA fails, you're in deep trouble. If you're going to run 8x 3.5" HDDs with only moderate CPU loads, you should be OK with 550 watts. A lot of people say 500GB is enough cache, but given current prices and the 20-40TB of media you're talking about, I'd consider going to a 1TB drive, or maybe 2x 500GB SSDs if you already have them - especially if you're working with a lot of large (4K) video files. I like the compact Fractal Design Node 804 case, but make sure the fans provide sufficient airflow for both the drives and the CPU. Last point: make sure that 8TB parity drive is new and server/enterprise quality.
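A rough back-of-the-envelope check on the cache-sizing advice above. The bitrates are illustrative assumptions (roughly consumer 4K vs. high-bitrate acquisition 4K), not measurements:

```python
# Rough cache-sizing sketch for the 4K-video case discussed above.
# The bitrates below are illustrative assumptions, not measurements.

def hours_of_footage(cache_gb: float, bitrate_mbps: float) -> float:
    """How many hours of video fit in a cache of cache_gb gigabytes."""
    cache_bits = cache_gb * 1e9 * 8            # GB -> bits
    seconds = cache_bits / (bitrate_mbps * 1e6)
    return seconds / 3600

for cache in (500, 1000):
    for rate in (100, 400):                    # assumed 4K bitrates, Mb/s
        print(f"{cache}GB cache @ {rate} Mb/s: {hours_of_footage(cache, rate):.1f} h")
```

Even at a heavy 400 Mb/s, a 1TB cache holds several hours of footage before the mover has to flush, which is the practical argument for the larger drive.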
  2. rollieindc

    My first hobby “TOWER”

    So, just a boring update. Not much to report lately, as I've been traveling a lot for work and dealing with a "wonky" DSL connection at my house. The Dell T310 server has been running smoothly on unRAID 6.6.6, and hasn't had any real issues since I installed the H200 controller in it. My biggest quandary has been deciding whether to increase the cache drive size (500GB or 1TB) or bump the RAM to 32GB. To be honest, I don't need to do either at the moment, and my drive array appears to be running just fine. So, nice and quiet for me. Looking forward to the 6.7/6.8 update at some point; the new features seem really promising. I still need to work on some VMs - that's an ongoing project for me.
  3. rollieindc

    PERC H700 Raid controller?

    Jonathanm, just to follow up on this thread: I exchanged the H700 for an H200 flashed over to IT mode. I did this as much to get SMART information on all the attached drives - in the hope of increasing the reliability of the system through that knowledge - as anything else. I'm not sure whether the loss of the H700's 512MB of onboard cache will be a performance hit, but since this system will mostly be pulling NAS storage duty, I think the overall performance will be "good enough" for my needs. The ability to work with the drives individually, or replace parts in case of failure, should - as you put it - follow "the normal first steps for recovery", and lets me draw on the knowledge of others (more learned than myself) to increase the chances of any needed recovery. Thanks! - Rollie
  4. rollieindc

    My first hobby “TOWER”

    Update: 03DEC2018 - Replacing the H700 SAS controller with an H200 flashed into IT mode. New stats:
    Dell PowerEdge T310 (flashed to latest BIOS)
    RAM: 16GB ECC Quad Rank
    Controller: Dell H200 SAS flashed to IT mode, replacing the H700 (which was flashed to the latest BIOS, with all drives running as RAID 0)
    Drives: Seagate IronWolf 4TB SATA (1x parity, 3x data) + 2x 600GB Dell + 240GB SSD (for VMs). Note: I installed a three-drive 3.5" bay system (from StarTech.com) in the available full-height 5.25" drive slot. This gives me 7 accessible hot-swappable drives; I plan to populate 6 and leave one as a hot-swap bay.
    Video: onboard & nVidia GTX 610 card
    Soundcard: Creative Sound Blaster X-Fi Xtreme Audio PCIe x1 SB1040
    First things first, the really good news about the nVidia GTX 610 video card choice I made is that I should be able to build a Mac High Sierra VM now, and then Mojave later - once nVidia releases drivers for Mojave. The drivers are already out for High Sierra, and since I have a Mac Pro 5.1 running High Sierra that I plan to use for video and photo editing, this should be of great value to me later on. Next, I found a relatively cheap ($26 shipped) Dell H200 SAS RAID card on eBay, from China, and decided to get it. As I understand it, the Dell H200 is an LSI-based 9211-8i card with RAID firmware installed. Installing the LSI "IT" 2118it.bin firmware allows individual access to the drives and additional features, including SMART disk data. The latter is important for determining disk health and tracking things like temperature issues or bit/sector errors. Since this system is primarily to hold my personal and historical photo library and backups, I need to identify early "disk death" before it happens and swap out any drive before it fails completely. After two weeks of shipping time, it finally arrived from China and looked to be in decent shape.
(One of the SAS connector shields looked a little bent, but I was able to straighten it with my fingernails.) I then read through various descriptions of the process to change it to IT mode, and decided to go for "IT". The process was fairly straightforward, although a little daunting; instructions are available online. (See https://techmattr.wordpress.com/2016/04/11/updated-sas-hba-crossflashing-or-flashing-to-it-mode-dell-perc-h200-and-h310/ ) I used a separate HP DC9700 computer to flash the H200, since some people reported trouble using the T310 for this purpose. I had to remove the back card-edge holder, since that system is a small form factor, but the card sat nicely in the case for the time I needed to do the reflashing. I booted the HP computer into MS-DOS from a USB drive I had made with Rufus (https://rufus.ie/), started the diagnostics, found the SAS address, and loaded the IT firmware onto the card. The process was, again, fairly straightforward until I tried typing in the H200's SAS address at the line [ C:\> s2fp19.exe -o -sasadd 500xxxxxxxxxxxxx (replace this address with the one you wrote down in the first steps) ]. It took me a while to realize that the address needed to be 16 characters, all hexadecimal (0-9, A-F), and ALL UPPERCASE. The address from the previous steps had hyphens in it and was in lowercase, so I fumbled a bit until the flashing software gave me the clue that I needed "NO HYPHENS" and "ALL UPPER CASE". Duh - I felt stupid, but after that the process rolled along quickly without any further issues. If you feel comfortable reflashing computers or cards, this is very similar and should not pose any issues - just check your syntax and typing before hitting the Enter key. I can see how this could be a big issue, though, and how some people have probably "bricked" their cards by making address mistakes. But for me, the reflashing went fine.
After rebooting a few times during the process, I had a reflashed H200 card in IT mode. So I put it into the Dell T310, replacing the H700 with its 512MB of on-card cache. That part somewhat bums me out - the H700 has a nice cache on it, and the H200 is cache-less. I rebooted, and am now in the process of reformatting the array's drives. Yes, they all had to be reformatted. Ugh. This is the last change I am making to the controller. So far the difference in speed isn't really showing up (yet, if at all - they are both 6Gb/s cards), but the other SMART disk information already is. I can see drive info for any of the drives I want - SAS or SATA - in the drive-related plugins and other available diagnostics, and I can already read the temperature of every drive in the array from the dashboard. (I could not read temperatures with the H700 in its RAID 0 configuration.) The formatting process appeared to be noticeably faster too. Lastly, while traveling I went to a PC salvage store (hint: it was the "USED COMPUTER" store at 7122 Menaul Blvd NE, Albuquerque, NM 87110, hours 8am-7pm, phone (505) 889-0756) and picked up a used Creative Sound Blaster X-Fi Xtreme Audio PCIe x1 SB1040 sound card for $10. Plop and drop - nothing more was necessary for it to load up in unRAID. I haven't done anything with it yet, but it could help with audio files and the way the VMs run, and if I get really bored and turn the server into a home theater PC, that 5.1 sound option will be nice to have. Oh frack... the parity check is running again... that will be about 6 hours of disk spinning. But it should be the first real parity check that serves as a baseline - guess I should get used to it. Next up: building a bunch of VMs and starting to use the system for storing files and backups.
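The SAS address gotcha described above (16 hex digits, no hyphens, all uppercase) is easy to get wrong when retyping at a DOS prompt. A small sketch of the normalization step, based only on the formatting rules stated in the post (s2fp19.exe itself just takes the final string on the command line):

```python
import re

def normalize_sas_address(raw: str) -> str:
    """Normalize a SAS address for s2fp19.exe's -sasadd argument:
    strip hyphens/colons/spaces, uppercase, and require exactly
    16 hexadecimal digits - the format the flashing tool expects."""
    addr = re.sub(r"[-: ]", "", raw).upper()
    if not re.fullmatch(r"[0-9A-F]{16}", addr):
        raise ValueError(f"not a 16-digit hex SAS address: {addr!r}")
    return addr

# e.g. the lowercase, hyphenated form copied from the controller BIOS
# (a made-up example address, not a real card's):
print(normalize_sas_address("500a-0b1c-2d3e-4f56"))
```

Doing this normalization on paper before typing the flash command is cheap insurance against the address mistakes that have reportedly bricked cards.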
  5. rollieindc

    My first hobby “TOWER”

    Well, that's actually my point - that's not all I want. (And yes, I realize I may not get what I want.) My goals were to: 1) secure/encrypt the data pathway into the server, and 2) secure/hide the IP address of the home server (as much as possible) and close the data pathway to prevent tracerouting into the rest of the home network. (1) would be easy enough to do, just using a VPN tunnel from an external client into the server. But connecting into this tunnel directly requires an open, insecure port into my network from the router in order to establish the connection. Password- and SSL-protected, maybe, but it still leaves the port open, and other ports could be pinged and then interrogated by anyone on the internet. I trust my ISP and their router about as far as I could throw them. So if all I wanted was a direct data connection, that could be accomplished by opening the port on the router and hoping the open connection isn't found and hacked. To accomplish (2), my thought had been (and yes, I recognize I could be wrong) that I would need to establish a "closed" secure data path/route from the home server to a website subdomain using OpenVPN through NordVPN. The subdomain is one that I own/control and that is not on my home network. That way, a secure connection could be established from a client (e.g. my laptop) going through one secure VPN tunnel and connecting to the other tunnel, completing the connection through the subdomain name. Since no trace from the subdomain to the server could be completed without the data connection being established first, that should essentially "hide" any (secure) connected port on my home network. Maybe I am overthinking it... and there may be too many layers of encryption... but I am still thinking this out and looking for other, more knowledgeable ideas/views. My other idea would be to just build a VM.
  6. rollieindc

    My first hobby “TOWER”

    Thanks Jonathanm. I've been using NordVPN from my client side for a while now to connect to various servers. And yes - I was thinking that an OpenVPN docker would be the answer for keeping my home network (and home IP address) as secure as possible: it would connect my unRAID server to the NordVPN servers, letting me establish the most secure tunnel from my "offsite" client/laptop (at a coffee shop) into the server (sitting at home)... but perhaps I am misunderstanding something about the protocols(?). I really don't want an access point into my entire network (which I would get by going through the router); I only want the unRAID server "accessible", and preferably only by means of a good SSL connection. My other option would be going through a domain I have, making my server a subdomain - or going the route of connecting via DuckDNS. My concern with that route is that my home IP address would be traceable by pinging and tracing the subdomain. I thought (perhaps incorrectly) that the VPN docker would mask the IP address until a VPN connection was established. Dunno... confused now. Thankfully there is no rush, and I've been happily uploading lots of files to my "new" server. 😀
  7. rollieindc

    My first hobby “TOWER”

    Update: Thursday, 13SEP2018 & 15OCT2018. I have been running unRAID 6.4 (now 6.5, soon 6.6.1) for a while now on the Dell PowerEdge T310, but I've been doing some hardware upgrades. So let me see if I can show where I started and where I am going.
    Dell PowerEdge T310 (flashed to latest BIOS)
    RAM: was 8GB, now 16GB ECC Quad Rank
    Controller: was a SAS DRAC 6ir, now a SAS H700 (flashed to latest BIOS, all drives running as RAID 0)
    Drives: Seagate IronWolf 4TB SATA (1x parity, 2x - now up to 3x - data) + 2x 600GB Dell/Seagate SAS (was 1x) + 240GB SSD for VMs (was 120GB). I also installed a three-drive 3.5" bay system (StarTech.com) into the available full-height 5.25" drive slot. This gives me 7 accessible hot-swappable drives, which should be more than enough for me. I will be moving the VMs from the SSD to the 600GB Cheetah SAS drives, since the speed on those should be enough for anything I will be doing.
    Video: onboard & nVidia GTX 610 card (I wanted a PhysX/physics engine and some CUDA cores for digital rendering and transcoding)
    Fairly pleased with my overall stability. My next steps will be doing some configuration backups and network throughput testing, and adding a UPS for when power is lost. I will also need a way to access the system from offsite, preferably through my VPN service (NordVPN/OpenVPN).
  8. I could use some advice - so thanks upfront to anyone offering their opinions. I am a bit confused about the advantage/utility of an SSD cache drive. I have a small server for home use (3x 4TB data + 1x 4TB parity, all IronWolf SATA) and am definitely going to add an SSD (primarily for VMs). I also have a few small SAS drives (2x 600GB + 2x 450GB), which I will include for various small tasks and which will probably remain unassigned drives (for scratch files). The controller is a Dell H700 RAID adapter in a Dell PowerEdge T310 with 16GB of DDR3 RAM, and I am running gigabit (1Gb/s) ethernet. Let me say upfront: I am not concerned with losing a VM or what I call a "scratch file." I -am- concerned for my digital photo library, digital documents, and backups (most of which will be moved from a client/laptop onto the server), which will be on the parity-protected drives. I am not moving (or creating) lots of videos, have few media files (so far), and don't expect that to increase significantly for the next 3-5 years. (I do a little scientific computing and 3D graphics, but nothing like commercial groups do.) What I am unclear on is what value I would get out of an SSD used as disk cache under these conditions/use cases. As I understand it, the unRAID cache is "flushed" when the "mover" program runs (by default, once a night), and until "mover" runs, the files sit on the SSD. Is that correct? And yes, I would expect improved performance with cache when moving smaller files, but I don't do that all that often. I tend to build up a cluster of files and move them all at once from the client... and more often than not, I need to move files down from the server to work on them. I then move the modified/edited (photo) files back up to the server.
In fact, I often need to move larger file sets (backups, video files, digital documents, and clusters of digital photos, ~300-600GB in size), and those sets are larger than any SSD I would use as cache. But I also wouldn't care if they moved from the client to the server overnight. And if I am moving individual files, I'm more interested in them landing on the "NAS drive" than bouncing from a cache drive to the "NAS drive". So, I think I'm better off just sizing my SSD large enough for the number of VMs I plan to run, and running the array without cache. Should I rethink that idea? And is there anything else I should consider in sizing my SSD... like Nextcloud/ownCloud considerations, docker apps, or other apps/tools? To compound and confuse this discussion: I have a 120GB SSD now, but am thinking I need a 250-512GB SSD - and if I found a compelling reason, I'd just get a 1TB SSD. Also, based on the Dell T310 architecture, the max speed on any one drive is 6Gb/s, and NVMe is not supported as far as I know.
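The reasoning above can be put in rough numbers. Over gigabit ethernet, the network link, not the array or a cache SSD, is the bottleneck for client-to-server moves (the ~112 MB/s figure is an assumed practical throughput for a 1Gb/s link, not a measurement):

```python
def transfer_hours(size_gb: float, throughput_mb_s: float) -> float:
    """Hours to move size_gb gigabytes at a sustained throughput in MB/s."""
    return (size_gb * 1000) / throughput_mb_s / 3600

# Gigabit ethernet tops out around ~112 MB/s in practice - well below what
# either a spinning array or an SSD can absorb - so a cache SSD would not
# make these bulk client->server moves finish any sooner.
for size in (300, 600):
    print(f"{size}GB over 1Gb/s ethernet: ~{transfer_hours(size, 112):.1f} h")
```

A 600GB photo cluster takes about an hour and a half either way, which supports the "run the array without cache, let it land overnight" conclusion for this workload.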
  9. rollieindc

    Can't find on the network

    No, unRAID found my drives on my T310 with the H700, I just had to set them up individually as RAID 0. The configuration is not straightforward, but "do-able." After my experience, I would have gone with the H200 so that I could put it into an IT mode, but for now, I am fine with the H700.
  10. rollieindc

    PERC H700 Raid controller?

    Jonathan, first, you're awesome. Thank you for that "heads up" - it's much appreciated that others here are helpful and friendly like that. I hope I can return the favor someday. Second, I get it that I'm different. A Dell PowerEdge T310 running as a non-business NAS for photo storage with services (and VMs) in a personal server is the project challenge for me. I might regret some choices I make, but I'm looking for a low-end system (paid for out of my own pocket) that will meet my storage needs, be reliable and be secure.
  11. rollieindc

    PERC H700 Raid controller?

    Noted! I realize this is a potential risk, and probably a significant reason to use the H200 flashed into IT mode as an HBA (which I have read has been done successfully). For my part, I plan to take a snapshot of the RAID configuration with my cellphone and keep it in a safe spot. And just to follow up, Jonathan, and to be clear: *IF* the H700 were to fail, and I replaced it with another H700 adapter (flashed to the same A12 firmware), I could still assign the same HDDs (via their serial numbers) to the same RAID configuration (drive for drive) and still be able to recover the unRAID array - right? And even if an HDD were to fail, I could still rebuild the array through unRAID by replacing the drive (and initializing it as a new RAID 0 drive). I get that it's not as easy... but unless I missed something, it's still "doable."
  12. rollieindc

    PERC H700 Raid controller?

    So, I managed to update the firmware on the H700 RAID adapter in my PowerEdge T310. One interesting thing to realize is that there are different firmware updates for the ADAPTER card (like the one shown) and the INTERNAL cards used in other systems like the R710. I made a DOS-bootable USB stick, booted into DOS, expanded the software, transferred it to the USB, and ran the "update.bat" file to update my ADAPTER firmware to version A12. After flashing and rebooting, the T310 and H700 recognized the Seagate 4TB IronWolf SATA drives inserted into the front drive slots - and identified them as "ATA" drives. I am currently setting up each drive individually as RAID 0 through the H700 controller, initializing them, and planning to let unRAID handle the parity/redundancy as I did with the previous all-Dell-drive setup. That seemed to work OK in unRAID 6.4. I did see a few places that talked about the potential of flashing the H700 into "IT" mode, but I've not seen the ADAPTER version successfully flashed to "IT". More on the H700 cards/adapters here. To get the DOS firmware update tool, go here... (look for the ZPE version, e.g. SAS_RAID_H700A_12.10.6-0001_A12_ZPE.exe)
  13. rollieindc

    PERC H700 Raid controller?

    I've got an H700 in my PowerEdge T310. It worked with an all-Dell 4-drive SAS array, as long as I set each drive up as RAID 0. At that point unRAID was able to reach the drives, format them, and let me set up the array with parity. Everything seemed to work fine. I am currently replacing the lower-capacity drives with some 4TB SATA IronWolf drives - and while the H700 sees the drives, they currently show up as "Blocked". Some have posted elsewhere that the H700 can be reflashed (with a Dell PERC firmware update) to accept non-Dell drives, but I've not gotten that far yet. If I find more information, I'll post it in the thread I keep for my own unRAID system. Here is a recent posting on the topic: https://forums.servethehome.com/index.php?threads/dell-perc-h700-adapter.16158/
  14. rollieindc

    My first hobby “TOWER”

    Moving on... The H700 and SAS cable install went well in the T310. All the drives were relatively easy to add in the H700 settings and change over to RAID 0 in prep for unRAID, although I will want to check whether an IT mode is available with an updated H700 firmware load. The H700 was definitely more "zippy" moving files from my laptop into the server (I tested this using the ISOs for Win 10 Pro and Ubuntu). I've also been watching SPACEINVADERONE's video tutorials, and I have VMs for Win 10 and Ubuntu Studio 16.04 up and running. I need to redo the Win 10 (Pro x64) VM, as it has no internet connection; that VM is a lot trickier than the Ubuntu one, but it still works. So yes, watch those videos... they are quite good, and well done. (Thanks!!!) I did see the video on reverse proxies, and that seems like a good idea to implement on this server - I want it to be secure (with https access only, if that's possible!). I also picked up my third (new) 4TB IronWolf drive, which will become my new parity drive. I've not installed any of the IronWolf drives in the T310 yet, because I still wanted to tinker with the VMs beforehand. And I got a spare SATA power splitter cable, which I will likely use with SSDs (cache) only - should I still decide to install them. The other thing I noted was that there are power splitters available that run directly off the SAS drive power leads. I would need another SAS-to-SATA cable from the H700's B port to connect up to another 4 data drives - since I still think the 6 SATA connectors on the motherboard are limited to 1.5Gb/s, rather than the 6Gb/s I can get on the H700. About the only thing I may add in the future is more disk space, but I am not anticipating that soon - if at all. I may want to run the VMs off the SSDs, as is being suggested, since that would be a far better fit for any VNC/remote connections. But for now, I am thinking the VMs can sit on one of the 600GB SAS drives as a standalone drive, with the VMs backed up into the IronWolf disk array from time to time. And I never got to the Win 7 VM install, but I anticipate fewer issues with it, since I've done a number of those already.
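The 1.5Gb/s-versus-6Gb/s point above translates into usable bandwidth roughly as follows. SATA links use 8b/10b encoding, so usable throughput is about 80% of the raw line rate:

```python
def sata_usable_mb_s(line_rate_gbps: float) -> float:
    """Approximate usable SATA throughput in MB/s.
    8b/10b encoding means every 10 line bits carry 8 data bits."""
    return line_rate_gbps * 1e9 * (8 / 10) / 8 / 1e6

for rate in (1.5, 3.0, 6.0):   # SATA I / II / III link rates
    print(f"SATA {rate}Gb/s link: ~{sata_usable_mb_s(rate):.0f} MB/s usable")
```

So a 1.5Gb/s motherboard port caps out around 150 MB/s per drive versus roughly 600 MB/s on the H700's 6Gb/s ports - plenty for a single spinning disk either way, but the faster link matters for SSDs.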
  15. rollieindc

    My first hobby “TOWER”

    "Checking in, and bellying up." Yes, this will be a long and boring read for any experts... but I am writing it for anyone else who happens to be interested in doing something similar, and for my own "fun" of building up an "inexpensive" Xeon 3440-based Dell PowerEdge T310 server with unRAID. So the saga of the $99 Dell PowerEdge T310 continues. I spent enough time playing with the unRAID trial version to realize that I was in for the investment, and bought the H700 SAS controller to replace the PERC 6ir that came with the machine. I also bought the "Plus" version of unRAID (for up to 12 drives) at $89. To be honest, I went back and forth on this, but decided that limiting myself to 2TB drives as the 4 main HDDs in the system was not what I was interested in for my NAS replacement. I wanted to get to at least something more like 6 to 8TB, with some ability to do error correction or drive rebuilds. I also wanted the option of more than 6 drives (4 data HDD + 1 parity HDD + 1 cache SSD) in case performance became an issue, plus the flexibility to add a drive or two of separate storage for VMs or media cloud storage outside the main storage drives. The "Plus" version of unRAID gave me that flexibility, and since I don't expect to be running a huge server farm, the "Pro" version seemed excessive for my needs. After a few minutes at the payment website, I had my email with the updated URL, and unRAID was upgraded in place on the existing USB stick already installed on the motherboard. I rebooted the system just to be sure it took, but I don't think I needed to. (Kudos to the Lime Tech designers on that pathway!) I also carefully considered HDD size in my decision process. (Comments and other views welcome on this. And yes, I could have gone with WD or HGST drives, but I didn't...
You can also see why here: https://us.hardware.info/reviews/7265/16/nas-hdd-review-18-models-compared-conclusion ) The Seagate IronWolf 4TB SATA drives were running $124, while the 6TB version was running $184-190. So my choice was two 6TB or three 4TB, giving up one drive for parity. 2x 6TB => 6TB of storage would have cost me $368, while 3x 4TB => 8TB cost $372. And while, if I added another drive, the 6TB drives probably would have been a performance winner (3x 6TB => 12TB @ $552) over the 4TB (4x 4TB => 12TB @ $496), I think I made the better deal for cost, expandability, and reliability. (And we could probably argue over the WD Red 6TB drives, but I've already opened the IronWolf boxes... so let's not.) So, next to eBay I went, picking up a Dell PERC H700 and a SAS W846K cable kit to tie the existing SAS drive slots/backplane to the H700. (For those not aware, the PERC 6ir uses a special SAS cable to the backplane, allowing 4 drives in the T310.) One nice thing about the H700: I can add more drives (SATA or SAS) with another SAS-to-SATA (SFF-8087) cable set, as the H700 has two SAS connectors (A & B - note you have to use "A" with the first set of drives). The other nice change is that the H700 does full 6Gb/s transfer rates. Anyway, total spent for the eBay H700 + W846K cable was $35. The only other downsides I saw with the 6ir-to-H700 changeover are that I will need additional power splitters to get power to any additional SATA (or SAS) drives I add to the system, and that I had to reinitialize the existing SAS drives to use them with the new controller. That meant any data on those drives was gone; fortunately, I had not yet filled them with data. (I also found out that the two 450GB drives I picked up are only 3Gb/s SAS drives, so those will likely go to eBay at some point, along with the 6ir and the Dell RD1000 drive.)
This need to reformat probably wouldn't have arisen if I had been replacing the 6ir with another 6ir, or doing an H700-to-H700 swap, but going from the 6ir to the H700 meant reinitializing and reformatting the HDDs and losing the few files I had placed on them. In configuring the H700, each HDD has to be its own "RAID 0" volume for unRAID to be able to address it separately. Not too hard to do, once I deciphered the H700 firmware menu system. The good thing about this configuration on the Dell T310 is that the 4 main drives (SATA, 3x 4TB HDD initially, with 1 of those as parity) will still be (hot?) swappable. I am leaving one HDD bay/slot on the front panel unfilled for now, even though I do have a Dell 600GB SAS drive that I could put in it. I also went with brand-new HDDs, although I saw plenty of refurbished or "like new" 4TB SAS drive lots on eBay. But I don't want to be replacing bad drives in this system - I simply want it to work well and store lots of files (primarily my digital photo library, which is currently just over 1TB in size). At some point I will likely get one more 4TB IronWolf drive to act as a "hot spare" in case one of the drives fails later on. (Reminder: I need to read up more on adding drives to increase storage space, but I recall that is something unRAID is supposedly good at.) At present, I'm still on the fence about adding an SSD SATA cache drive (I currently have a 120GB SATA SSD on the motherboard's SATA B socket/header, but am not using it, since it seems to run at only 3Gb/s); since the PERC H700 came with 512MB of cache (RAM) on the card, a cache drive might not be necessary. I decided not to get the Dell battery add-on for the H700, partly because the system will live on a UPS and will be set to shut down if UPS power runs low.
After some more system burn-in with the existing drive array (2x 600GB + 2x 450GB SAS drives), I will load up the IronWolf drives and add a couple of VMs to give the system a good workout. I am really interested in seeing how it runs with a Windows 10 Pro VM, then a Windows 7 VM, with some video and photo capture and editing software. (I might want to use one of those 600GB drives for the VMs - dunno.) Later I'll add an Ubuntu Linux distro as well, likely with Docker. I'm also rebuilding a separate Apple Mac Pro 5.1 system, which will be networked in and used for editing video and editing and scanning photos. The two will be connected via a GigE switch, to make large video file access far less painful.
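The drive-cost comparison in the post above can be checked with a quick sketch (prices as quoted there; one drive in each set is given up to parity):

```python
def usable_cost(n_drives: int, size_tb: int, price_each: float):
    """Return (usable TB after one parity drive, total cost, $ per usable TB)."""
    usable_tb = (n_drives - 1) * size_tb
    total = n_drives * price_each
    return usable_tb, total, total / usable_tb

# Options discussed above: IronWolf 4TB @ $124, 6TB @ ~$184.
options = {
    "2x 6TB": (2, 6, 184),
    "3x 4TB": (3, 4, 124),
    "3x 6TB": (3, 6, 184),
    "4x 4TB": (4, 4, 124),
}
for name, args in options.items():
    tb, cost, per_tb = usable_cost(*args)
    print(f"{name}: {tb}TB usable, ${cost:.0f} total, ${per_tb:.0f}/TB")
```

On a dollars-per-usable-TB basis, the 3x 4TB starting point and the 4x 4TB growth path both come out ahead of the 6TB options, which matches the conclusion reached in the post.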