mproberts

Members • 8 posts

  1. Fixed! Removing the second node port from the Server and the single node port from the Node restored the containers!
  2. Thanks. Is it a virtual RAID 1 on top of a RAID 0 formatted/pooled set of drives? That's where my confusion lies: if this is some sort of virtual RAID 1, it isn't clear how it would provide RAID 1 protection (surviving one drive failure). If my two drives are wholly a RAID 0 pool, how do I end up with a (reported) RAID 1 for metadata and system?
  3. Quick question: I recently replaced the two SAS drives in my cache with SSDs. As these are enterprise SSDs, I went with RAID 0 (I know, still a risk): two 960GB drives in RAID 0. I followed the process for moving the original cache data to the array, replaced the original drives with the new ones, configured RAID 0 via the Balance pulldown, then kicked off the move back to cache. Everything went fine and works. Question: why does the BTRFS filesystem for my cache show RAID 1 for System and Metadata? I did run Perform Full Balance after the data was moved back as well. (A quick way to check, and optionally convert, those profiles is sketched after this list.)
  4. I have a recently built Unraid server (see specs below) and want to make a change to the cache drives. I built the cache with two 2TB 7.2k SAS drives as a pool, which is entirely too slow... I'm also finding I don't really need 2TB of cache space. My primary use for this server is Plex and my 3CX phone system. I'd like to replace the cache drives with two pooled 1TB SSDs. Questions and a possible issue I need clarity on:
     • What is the preferred method to replace cache drives with smaller-capacity replacements? The 8 drive slots on my T710 are all occupied: two 2TB SAS cache drives and six 6TB SAS drives (no room/slots/channels to add more SAS/SATA drives).
     • Since the drives are pooled, I understand I could pull one and rebuild on a new drive, but can this be done onto a smaller drive? (A quick capacity check is sketched after this list.)
     • If migrating/rebuilding to smaller drives is risky or overly complicated, I assume I can follow other documented processes to replace with (much more expensive) 2TB SSDs one at a time?
     Config:
     • Unraid 6.9.2
     • Dell T710 II w/ two X5690 CPUs
     • 96GB RAM
     • Dual 1100W power supplies
     • Nvidia Quadro P2000 video card for transcoding (love the T710 for this easy install as opposed to the R710)
     • Dell H310 (IT mode)
     • (6) Hitachi HUS726060AL5210 6TB SAS drives
     • (2) Hitachi HUS723020ALS640 2TB 6G SAS drives (cache)
     • Dell / Intel XYT17 / X520-DA2 10GB FH network adapter (not configured)
     • 4 network ports on the T710 running bonded, active-backup, 1Gb
     • Dedicated APC 1500VA UPS
     Thanks all!
  5. Thank you. Next question: when I have the SFP ports connected, what is the procedure to make them active on Unraid and then disable the copper interfaces? Eth0 is currently my primary interface, with eth1, eth2 and eth3 bonded (bond0) as active-backup interfaces. Eth4 is my dual-port fiber NIC. While I can work in the CLI, my concern is the order of operations, so maybe I can make the changes from the web interface?
     • Obtain connectivity to the switch via eth4
     • Use both fiber ports on the card and two switch ports?
     • Validate connectivity (how? a basic link check is sketched after this list)
     • Disable the bonded interface? Or keep one copper link as a failover?
     Thanks again.
  6. Let me start my question with a basic 'will it work' scenario, then I expect to have more questions... I'm running Unraid 6.9.2 on a Dell T710 with 4 gigabit interfaces bonded as active-backup. I also have an X520-DA2 10GB FH network adapter (2 ports) installed, currently in a down state (fiber cables not connected yet). Primary usage is Plex. I have a Unifi US-24-500W switch with two unused 10G SFP+ ports that I believe are normally used as uplinks to other switches/infrastructure. Can I use these to connect the Unraid server's fiber interfaces to the switch's SFP+ ports and serve all the other hosts on the switch? My goal is to have the two 10G links between the Unraid server and the Unifi switch serve all my clients, and potentially do away with my four Unraid 1Gb links. This is all on one private network at this point. Thanks
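
Regarding posts 2 and 3: seeing RAID 1 for System and Metadata alongside RAID 0 data is normal for a two-device btrfs pool; the RAID 0 conversion typically applies only to the data chunks, while metadata and system chunks keep a redundant profile. The commands below are a minimal sketch for inspecting and, optionally, converting those profiles from the Unraid console; /mnt/cache is assumed to be the pool's mount point, and converting metadata to RAID 0 is rarely worth it since the redundancy costs almost no space.

    # Show the allocation profile (raid0/raid1/single) for Data, System and Metadata
    btrfs filesystem df /mnt/cache

    # Optional: convert metadata and system chunks to raid0 as well.
    # -f is required because this reduces metadata redundancy.
    btrfs balance start -mconvert=raid0 -sconvert=raid0 -f /mnt/cache

    # Watch the balance progress
    btrfs balance status /mnt/cache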
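
On post 4: before moving to smaller drives, it helps to confirm that what is actually stored on the cache fits on the new capacity. A minimal sketch, again assuming the pool is mounted at /mnt/cache:

    # Per-device allocation and overall used space on the existing pool
    btrfs filesystem usage /mnt/cache

    # Simpler used/available summary
    df -h /mnt/cache

If the used figure fits comfortably within the new pool size, the move-to-array, rebuild-the-pool, move-back approach is the straightforward route; btrfs generally will not replace a device with a smaller one in place, which is why rebuilding the pool is usually preferred over a one-at-a-time swap to smaller drives.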
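
On posts 5 and 6: once the fiber is connected, a few standard commands can confirm the new links are up before the bonded copper interfaces are touched. The eth4 name comes from the post above; the 192.168.1.10 address is a placeholder, and iperf3 is not part of stock Unraid (it is commonly added via a plugin), so treat that last line as optional.

    # Confirm the SFP+ port has link and negotiated 10Gb/s
    ethtool eth4 | grep -E 'Speed|Link detected'

    # Quick state overview of every interface, including bond0
    ip -br link show

    # Basic reachability test forced out of the new interface
    ping -I eth4 -c 4 192.168.1.10

    # Optional throughput check between two hosts running iperf3
    iperf3 -c 192.168.1.10

Whether to drop all four 1Gb links or keep one in the bond as a failover is a judgment call; the checks above only show what the server negotiated, so it is worth confirming separately that the switch's SFP ports are in fact 10G-capable.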