Everything posted by jpowell8672

  1. Settings > Network Settings. Enable bridging: Yes. Bridging members of br0: select which interfaces are members of the bridged interface. By default eth0 is a member, while other interfaces are optional.
  2. At the top right, click the ?. When the ? is underlined, inline help is always shown. Hope this helps, pun intended.
  3. Thanks go to @SSD for this info: There are two things at play here. One is the PCIe version number (1.0/1.1, 2.0, 3.0) and the other is the number of "lanes".

     PCIe 1.x is 250 MB/sec per lane
     PCIe 2.0 is 500 MB/sec per lane
     PCIe 3.0 is 1000 MB/sec per lane

     Each card's maximum number of lanes is determined by the card's physical design (literally the length of the PCIe bus connector). A 1-lane card is the shortest, and a 16-lane card is the longest. The motherboard and the card negotiate a specification based on the highest spec both the slot and the card support, so a PCIe 2.0 card in a PCIe 3.0 slot will run at PCIe 2.0 speed. Similarly, they agree on the number of lanes based on the "shortest" one, whether that is the card or the slot. Most disk controller cards are 1-lane, 4-lane, or 8-lane, often referred to as x1, x4, x8. If you put an x4 card in an x8 slot, you will only have 4 usable lanes, and if you put an x8 card in an x4 slot, you will also have 4 usable lanes. Putting an x8 card into an x4 slot is not always physically possible, because the x4 slot is too short, but some people have literally melted away the back end of the slot to accommodate the wider card, which is reported to work just fine. Making things a little more confusing, some motherboards have an x8 physical slot that is actually only wired for x4, so you can put a longer card in there with no melting, but it only uses 4 of the lanes.

     If you have, say, a PCIe 1.1 card with 1 lane, and it supports 4 drives, then your performance per drive would be determined by dividing the 250 MB/sec bandwidth by 4, giving ~62.5 MB/sec max speed if all four drives are running in parallel. Since many drives are capable of 2-3x that speed, you would be limited by the card. If the slot were a PCIe 2.0 slot, you'd have 500 MB/sec for 4 drives, meaning 125 MB/sec per drive. While drives can run faster on their outer cylinders, this would likely be acceptable speed, with only minor impact on parity check speeds. With a PCIe 3.0 slot, you'd have 250 MB/sec for each of the 4 drives, more than fast enough for any spinner, but maybe not quite fast enough for 4 fast SSDs all running at full speed at the same time. You might think of each step in PCIe spec as equivalent to doubling the number of lanes from a performance perspective, so a PCIe 1.1 x8 card would be roughly the same speed as a PCIe 2.0 x4 card.

     Hope that background allows you to answer most any question about controller speed. I should note that PCIe 1.x and 2.0 controller cards are the most popular, and as I said, x1, x4 and x8 are the most common widths. If you are looking at a 16-port card, and looking at the speed necessary to support 16 drives on a single controller:

     PCIe 1.1 at x4 = 1 GB/sec / 16 = 62.5 MB/sec per drive - significant performance impact with all drives driven
     PCIe 1.1 at x8 / PCIe 2.0 at x4 = 2 GB/sec / 16 = 125 MB/sec per drive - some performance impact with all drives driven
     PCIe 2.0 at x8 / PCIe 3.0 at x4 = 4 GB/sec / 16 = 250 MB/sec per drive - no performance limitations for spinning disks (at least today)
     PCIe 3.0 at x8 = 8 GB/sec / 16 = 500 MB/sec per drive - no performance limitations even for 16 SSDs

     The speeds listed are approximate, but close enough for government work (see the quick calculation sketch below). Keep in mind that it is very uncommon to drive all drives at max speed simultaneously, but the unRAID parity check does exactly that, and parity check speed is a common measure here. If you are willing to sacrifice parity check speed, a slower net controller speed will likely not hold you back for most non-parity-check operations. For 16 drives on one controller, I'd recommend a PCIe 2.0 slot at x8, for example an LSI SAS 9201-16i. Here is a pretty decent article on PCIe if you need more info: http://www.tested.com/tech/457440-theoretical-vs-actual-bandwidth-pci-express-and-thunderbolt/
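     As a quick calculation aid, here is a minimal Python sketch of the arithmetic above, assuming the approximate per-lane figures quoted in the post; the names (PER_LANE_MB_S, per_drive_bandwidth) are illustrative only and not from any Unraid or LSI tool.

     ```python
     # Rough per-drive bandwidth estimate, following the per-lane figures quoted above.
     PER_LANE_MB_S = {"1.x": 250, "2.0": 500, "3.0": 1000}  # approximate MB/s per PCIe lane

     def per_drive_bandwidth(card_gen, card_lanes, slot_gen, slot_lanes, drives):
         """Link runs at the lower common generation and the smaller lane count."""
         gen = min(card_gen, slot_gen, key=lambda g: PER_LANE_MB_S[g])
         lanes = min(card_lanes, slot_lanes)
         return PER_LANE_MB_S[gen] * lanes / drives

     # The 16-drive examples from the post:
     print(per_drive_bandwidth("1.x", 4, "1.x", 4, 16))   # 62.5 MB/s per drive
     print(per_drive_bandwidth("2.0", 4, "3.0", 16, 16))  # 125.0 MB/s per drive
     print(per_drive_bandwidth("2.0", 8, "3.0", 16, 16))  # 250.0 MB/s per drive
     print(per_drive_bandwidth("3.0", 8, "3.0", 8, 16))   # 500.0 MB/s per drive
     ```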
  4. https://wiki.unraid.net/Replace_A_Cache_Drive
  5. The diagnostics file you uploaded is not good; please try again.
  6. Ryzen is picky when it comes to RAM. It should run at 2933 if you set it manually, but if it doesn't, I would consider better RAM for Ryzen. Samsung B-die is the best; it is CL14 @ 3200, and most DDR4 rated above 3200 is usually B-die as well. 8 GB x4 single-rank CL14 DDR4-3200 works best on this main board. I did a lot of research on this before I purchased RAM for this main board with Ryzen Threadripper. https://benzhaomin.github.io/bdiefinder/

     Samsung has discontinued making it and it will be gone soon, but there is still some out there. https://wccftech.com/samsung-b-die-memory-production-ceased-replaced-by-samsung-a-die/ https://www.anandtech.com/show/14327/samsung-to-end-b-die-ddr4-memory

     This is the RAM I have, and it is still available at $119.99 x2, cheaper than the $134.99 x2 I paid: https://www.ebay.com/itm/G-SKILL-Ripjaws-V-Series-16GB-2-x-8GB-288-Pin-DDR4-SDRAM-DDR4-3200-PC4-25600/381512871667?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2057872.m2749.l2649 Last I looked, Newegg no longer had it on their site, but they have it on their eBay page, where I purchased it.

     Now, this is NO GUARANTEE this would fix your system or make it better; there are always different variables that can be in play. This is just a friendly suggestion/recommendation, so do your own research and don't hold me accountable. The upcoming Unraid 6.8 release, which will be out very soon, might also fix your problem.
  7. I would set the memory to the XMP default or 2933 in the BIOS; Ryzen benefits very much from increased RAM speed. I would also suggest isolating the last 6 cores and their matching threads for the VM under Settings > CPU Pinning (CPU Isolation at the bottom). If you built the system, I would also make sure you have a proper TR4 CPU cooler installed with premium thermal paste applied correctly, and good airflow in the case.
  8. I am running BIOS 3.60, and I did not have to change any defaults in the BIOS or add anything to the config. My system has been running rock solid with a Windows 10 VM that runs 24/7, and I am using it right at this moment. I had Plex running for months, but I am now using Jellyfin. I have many Docker containers running, as well as a pfSense VM firewall that also runs 24/7.
  9. I have the same main board and CPU. What BIOS version do you have installed? What memory do you have, and what is it set at? Since Unraid 6.7 there has been a problem with array read/write performance; Unraid 6.8 is just about to be released with the fix.
  10. https://wiki.unraid.net/Replacing_a_Data_Drive
  11. What size was the failed disk, and what size is the replacement disk? Is the replacement disk new, with nothing on it? Did you try to reboot and try again? I assume you removed the failed disk from the disk assignment and added the new disk?
  12. Do you have another PC you can test them on? Maybe something is going on with the PSU.
  13. What about the SATA ports on the main board? Have you tried them to test the drives?
  14. Yes, the drives should show up if the card is seeing them.
  15. I see you added the above, which I did not know about until tonight: "I also noticed that at the bottom of the webpage, there is a line that says 'Array Stopped•too many devices'" So there is a reason for that too, which is what I am trying to figure out. Did you try going to Tools > New Config and applying it to reset your drive configuration?
  16. You can also try going to Tools > New Config and applying it to reset your drive configuration. Make sure the license you purchased actually got applied to Unraid; it should say "Registration: Unraid OS Pro" at the top right of the screen.