pixels703

  1. I think we broke a cardinal rule. We rebooted to physically install the new drive but did not assign it to anything, just so it would be in the server once we decided on a path today. After the reboot, the drive changed to sdo1 and showed as online. My assumption is that it is sdo now because another drive was added and the device order shifted; we're new to this, so we're not sure. It's running a scrub now. We had problems with this drive in the array previously and thought it was a fluke: it went to the "x" status but came back online, so we assumed it was a port issue. What was sdm before the reboot is now sdo. My assumption is that it will error again when the scrub gets to that point, but we're in waiting mode. There is a mirror of all the data on a second server. mediaserver-diagnostics-20240323-1009.zip
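     A minimal way to confirm it is still the same physical disk (the sdX letters are assigned at boot and can shuffle when a drive is added), assuming a stock Unraid/ZFS setup:

        # List disks by persistent ID (serial/WWN) and see which sdX each maps to
        ls -l /dev/disk/by-id/ | grep -v part

        # ZFS tracks pool members by ID, so a shuffled sdX letter alone is not an error;
        # this also shows scrub progress and any read/write/checksum errors
        zpool status -v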
  2. General questions about replacing the drive, just to make sure I am sure. This is the first time we have replaced a ZFS pool drive. My assumption is as follows: the drive shows "REMOVED", so the server is not seeing it as ONLINE; this is not a swap of an operational drive, it's a bad drive, so it falls under process #2, the one where you "Cannot have both the old and new devices connected"...? Or does that only apply when there is a physical limitation, e.g. not enough SATA connectors? We don't have any hardware limitations. And by "Device" the docs mean the "Drive"? So I should (terminal sanity checks are sketched below):
     - Stop the array
     - Unassign the "REMOVED" drive (sdm1 in this case), i.e. change it to "No Device"
     - Start the array
     - Shut down
     - Physically remove the old drive (sdm1)
     - Physically add the new drive
     - Assign the new drive to the pool slot where sdm1 was
     - Start the array
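     For reference, a minimal way to sanity-check the pool from a terminal before and after the swap; "mypool" is a placeholder for the actual pool name, and on Unraid the replacement itself is driven from the GUI rather than from these commands:

        # Show pool health and which member is REMOVED/UNAVAIL
        zpool status -v mypool

        # After the new drive is assigned and the array is started,
        # the pool should show a resilver running onto the new device
        zpool status mypool | grep -E 'scan|resilver'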
  3. I suspect this is related to PCIe lanes. Check how many lanes each slot actually provides where your cards are plugged in. If a card is running at x1 even though it sits in a physical x16 slot, you're limiting throughput, and make sure the CPU has enough lanes to cover everything in use. Go to the motherboard's spec page and look at PCI_E1, PCI_E2, etc.; it lists the slot size and the lanes actually allocated, e.g. PCI_E1 x16 = 16 lanes, x4 = 4 lanes. Many boards have x16-sized slots that only run x1 electrically, and many route the M.2 slots through the chipset, which adds its own limits. Then look at the PCIe version: 2.0 runs roughly 500 MB/s per lane and 3.0 roughly 1 GB/s per lane, so an x4 link on a v2 card tops out around 2 GB/s. Spread that over the number of drives on the controller and that's why parity is slow. Just a suggestion; a quick way to check the negotiated link is below.
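     A quick way to see what a controller actually negotiated (needs root to read the link capabilities; the PCI address 01:00.0 is just an example):

        # Find the storage controller's PCI address
        lspci | grep -i -E 'sata|sas|raid'

        # Compare the slot's capability (LnkCap) with what was negotiated (LnkSta)
        lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'

        # Rough budget: lanes x per-lane speed / number of drives, e.g.
        # PCIe 2.0 x4 = 4 x 500 MB/s = 2 GB/s shared across 8 drives = ~250 MB/s each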
  4. Does anyone know why iperf would resolve to the IPv6 address when using DNS (despite "IPv4 only" being set)? Performance is fine when I use the IP directly; when it resolves via DNS it's a fraction of the speed. I don't think that's what is impacting performance, but I'm curious if anyone has seen this; my hope is that it's an obvious answer for a newbie. I'm pretty sure ZeroTier is grabbing the address for a virtual interface, however I can't figure out how to disable it (tried enabling IPv4 + IPv6 and then looking for something to uncheck, but nothing is there). I assume it's a config file that needs to be updated.
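     In case it helps anyone hitting the same thing, a few ways to see what the name resolves to and to pin iperf3 to IPv4; "server.local" and the address below are placeholders:

        # See which addresses the hostname resolves to (IPv6 vs IPv4)
        getent hosts server.local

        # Force iperf3 to use IPv4 regardless of what DNS prefers
        iperf3 -c server.local -4

        # Or skip DNS entirely and point at the IPv4 address
        iperf3 -c 192.168.1.50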
  5. Same issue on all 4 servers. After a reboot I have to add a character, remove it again, and hit apply, and then it's fixed.
  6. I've been tinkering with the Duplicati file upload since I have a large data set, and I'm curious whether anyone has any watch-outs for these settings, since I'm new to this. Settings and info:
     - LAN backup to a second Unraid server using Unassigned Disks
     - Stable 10GbE network between the servers, plugged into the same switch
     - Host server: AMD Ryzen 5 4500, 128GB RAM 3200, XFS; drives: (1) parity 16TB 7200, (2) 16TB [second parity is in UPS hands]
     - Destination server: dual Intel E5-2620 v2, 256GB RAM 1333, ZFS; drives: (2) parity 16TB 7200, (3) 16TB 7200
     Default settings were slow as molasses. Just curious if any of the following are red flags for speed or risk; since it's a stable LAN, I assume tinkering with the config is the best bet. My speeds with these settings are considerably faster, but nothing to write home about: I can see the file progress bar move and server load is heavier, with spikes to 1-6 Gbps every now and again while holding steady at 30-60 Mbps on the Unraid dashboard.
     - Set the file (remote volume) size to 250MB, since the Duplicati docs indicate that anything above 1GB slows things down. Many of my images are 80MB, so I assume each volume will group a couple of them, compressed.
     - Set 20 asynchronous uploads; the processor seems to hold between 30-70%, spiking to 90% max. Unsure if this is related to that or to the previous setting. Assuming I have the overhead in processor and RAM, these should help push backup speed, but that's based on reading rather than experience.
     This combination seems good. I'm tempted to try a 500MB or 750MB volume size, but transfer speeds are much better at the moment. Curious if anyone has expertise here, and whether any of my assumptions or settings are off. I didn't dig into the compression setting because I'm not worried about disk space; I assume it mostly affects space on the destination drives rather than speed, and as long as the server keeps the upload queue full that's a non-issue. Thank you for any thoughts; this seems like a reasonable speed for what it is. (A sketch of how I understand these settings map to Duplicati options follows below.)
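     For reference, a sketch of how I understand these settings map onto Duplicati's advanced options when driven from its command line; treat the option names and values as assumptions to verify against the Duplicati documentation, and the paths are placeholders for my shares:

        # Hypothetical duplicati-cli invocation mirroring the GUI settings above.
        # Destination is the Unassigned Disks mount of the second server,
        # source is the share being backed up.
        duplicati-cli backup \
          "file:///mnt/disks/backup-server/photos" \
          "/mnt/user/photos/" \
          --dblock-size=250MB \
          --asynchronous-upload-limit=20 \
          --zip-compression-level=1
        # --dblock-size: remote volume ("file") size
        # --asynchronous-upload-limit: volumes prepared ahead of the upload
        # --zip-compression-level: lower = less CPU spent compressing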
  7. Thank you, back to normal and will copy to XFS drives on another server. Appreciate the insights.
  8. We replaced a ZFS drive with a larger disk, changed the file system to XFS, and did a parity rebuild, mainly because my motherboard doesn't support ECC and we need to move to XFS. When the rebuild finished, the drive was blank. From what I gather from other posts, this was a cardinal violation of Unraid file systems: thou shalt not change the array file systems. I still have the old drive (the one we swapped out). If there is something we're overlooking, is it possible to put the old drive back where it was, click New Configuration, sort the drives into the same order, and restart the array? Or has parity already done its thing? The used and free sizes are unchanged for the other drives that stayed as-is. All the data is backed up elsewhere, so nothing is lost. We're still learning, so please be gentle; it's a journey. Appreciate the help and this group for teaching us newbies the hard lessons. TIA.
  9. Just posting a solution to an issue with installing an X540-DA2 in a Supermicro X9DRD-7LN4F. Plugged in the card, lights were on, but no connectivity.
     - Was able to open a terminal and run ifconfig ethX up (with the right interface number) to bring the card online while still connected with one cable at 1000; it started to show traffic on the dashboard but had no IP address.
     - Was able to ping server.local, however it returned the IPv6 address.
     - Stopped the array, went to Settings > Network Settings, and checked the boxes for eth4 & eth5 to add them to bond0.
     I am sure this is obvious, but not for a noob. See the image below and the terminal equivalent after this post. Hope this helps someone. Keywords: X540-DA2, X540-T2, Pinging Only IPv6, Unable to Ping Server with New Card
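     For anyone searching later, the terminal equivalent of the above; eth4/eth5 are the interface names on my system and may differ on yours:

        # Bring the new ports up and confirm they report a link
        ip link set eth4 up
        ip link set eth5 up
        ethtool eth4 | grep -E 'Speed|Link detected'

        # Confirm they actually joined the bond after checking the boxes in Network Settings
        cat /proc/net/bonding/bond0

        # Ping over IPv4 explicitly to avoid getting the IPv6 answer
        ping -4 server.local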
  10. Forgive me for the noob question. I have tried to search for how notifications work for offline pool drives (specifically a RAIDZ2 ZFS pool, running 6.12-rc5) with no success. The pool is 5 drives in "zfs" at "raidz2" in "1 group of 5." The SATA power connector on one of my drives came loose, and the only way I could see that it was offline was the "*" shown for its temperature. When drives in the array go offline, a flurry of notification emails gets sent; for the pool, I didn't receive anything. Mind you, it was probably only offline for an hour after I moved the server on the shelf, but regardless, hoping that someone might have a suggestion or thoughts. Thanks for any feedback.
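     In the meantime, a workaround sketch: a small script run on a schedule (e.g. via the User Scripts plugin) that checks pool health and raises an Unraid notification. The notify helper path and the "all pools are healthy" string are assumptions to verify on your install:

        #!/bin/bash
        # Alert if any ZFS pool is not healthy.
        STATUS=$(zpool status -x)
        if [ "$STATUS" != "all pools are healthy" ]; then
            /usr/local/emhttp/webGui/scripts/notify \
                -e "ZFS Pool Check" \
                -s "ZFS pool problem detected" \
                -d "$STATUS" \
                -i "alert"
        fi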