Everything posted by shaunmccloud

  1. IPMI shows it as "Correctable Memory ECC @ DIMMC2(CPU1) - Asserted". I have more RAM I can use come Monday.
  2. I got the following in the Fix Common Problems plugin. "Your server has detected hardware errors. You should install mcelog via the NerdPack plugin, post your diagnostics and ask for assistance on the unRaid forums. The output of mcelog (if installed) has been logged". Diagnostics file is attached. bb-8-diagnostics-20201016-0745.zip
  3. I'll just do that with 3 passes then.
  4. Is there a chance you could add a DOD pre-clear as an option?
  5. Correct, the cache issue was due to a bad cable. The replacement cable will be arriving today, and I might even get time to install it, though likely not until this weekend (darn small kids). Running a cache scrub (I think it's now the 3rd or 4th one since I moved cables around) as well as a correcting parity check. Going to cross my fingers that everything is good.
  6. After having some issues with one of the cables in my server, I decided to run a non-correcting parity check. After 37.1% (of 10TB) it is showing 2864 errors detected. I'm not sure if I have another bad cable or if the onboard SAS controller on my motherboard is having issues. Diagnostics file is attached. bb-8-diagnostics-20200930-0824.zip
  7. Forgot to post last night: one port on my SFF-8087 to SFF-8482 cable is now bad. It probably got bumped while I was adding RAM. I'll be ordering a new cable today (thankfully I was able to move other cables around to free up an SFF-8482 end).
  8. I knew to do it because of the Linuxserver.io team.
  9. I had one of the disks in my cache array fail. No problem, it's RAID10, so I'll just swap it out and let it rebuild. After doing a full balance and converting to RAID10 again, I get the following status:
     Data, RAID10: total=222.00GiB, used=156.09GiB
     Data, single: total=3.00GiB, used=2.63GiB
     System, RAID10: total=64.00MiB, used=0.00B
     System, single: total=32.00MiB, used=48.00KiB
     Metadata, RAID10: total=2.00GiB, used=906.00MiB
     Metadata, single: total=1.00GiB, used=154.20MiB
     GlobalReserve, single: total=188.30MiB, used=4.33MiB
     I'm also being told my cache is ~2TB in size instead of ~1TB (4x 600GB 15k SAS drives). Anyone have a suggestion on what could be causing this?
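     A minimal cleanup sketch, assuming those leftover "single" chunks are stragglers from the rebuild and that the cache pool is mounted at /mnt/cache (the unRAID default); a filtered balance can convert just those chunks back to RAID10:

     ```bash
     # Re-check which profiles still hold chunks
     btrfs filesystem df /mnt/cache

     # Convert only the chunks that are not already RAID10 ("soft" skips the ones that are)
     btrfs balance start -dconvert=raid10,soft -mconvert=raid10,soft /mnt/cache

     # If a System,single chunk is still listed afterwards, a follow-up pass with
     # -sconvert=raid10 -f converts the system chunks as well
     btrfs filesystem df /mnt/cache
     ```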
  10. Cache drive is fixed (i.e. replaced) and I have the following docker networks: br0, bridge, host, mybridge, none. mybridge is the one I use for my containers because DNS works.
  11. unRAID is a little different. Once I finish tracking down the bad cache drive in my server, I'll verify that's what I did.
  12. Are you using the default bridge network or did you create your own? For pinging by name to work you need to create your own.
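      A minimal sketch of that setup, assuming a user-defined bridge named mybridge as in the posts above (the container names and images are placeholders); Docker's embedded DNS resolves container names only on user-defined networks, not on the default bridge:

      ```bash
      # Create a user-defined bridge network; Docker's embedded DNS serves it
      docker network create mybridge

      # Containers attached to the same user-defined network can reach each other by name
      docker run -d --name app1 --network mybridge nginx
      docker run --rm --network mybridge alpine ping -c 3 app1
      ```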
  13. Is there a way to display more than 4 sensors in the footer? I'd like to display my array fan & add-in card fan in addition to my CPU fans, but the CPU temps & CPU fan speeds already use the 4 slots the GUI shows.
  14. Right now my cache array is a single 512GB SSD, and I have acquired some 600GB SAS drives (10k or 15k, can't remember) that I want to use for my cache array. Do I just add one of the SAS drives to the array (after a pre-clear), wait for the initial sync to complete, and then swap out the 512GB SSD for a 600GB SAS drive? I don't really want to have to take everything offline and move it the way SpaceInvaderOne suggests.
  15. Ok, 1G NIC passthrough is not working. Anyone have ideas on how to pass a dual-port NIC through when there will be two cards in total with the same PCI ID in the system?
  16. I'm starting the process of passing NICs through to a VM so I can virtualize pfSense on my server. The problem is, when I set up unRAID to pass my older "Intel Corporation 82571EB/82571GB Gigabit Ethernet" (PCI ID 8086:105e) through to a VM, I am not able to access it on the network. unRAID is currently using an "Intel Corporation I350 Gigabit Network Connection" (PCI ID 8086:1521) for its connection to my network. The older NICs are each in their own IOMMU group (dual-port NICs) and the I350 NICs (again dual-port & onboard) show as one IOMMU group, so that should be fine. Once I get this figured out, my next hurdle is how to pass a single SolarFlare dual-port NIC through to the same VM while keeping the other SolarFlare dual-port NIC I will be installing for unRAID to use. Any help on either item is appreciated.
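      A rough sketch of one way around the identical-ID problem, using plain Linux vfio sysfs binding rather than anything unRAID-specific: since both cards report 8086:105e, they have to be told apart by PCI bus address instead of by vendor:device ID. The 0000:04:00.x addresses below are placeholders; substitute whatever lspci reports for the ports the VM should own:

      ```bash
      # List the 82571EB ports and their bus addresses
      lspci -nn -d 8086:105e

      # Hand this specific device (not every 8086:105e) to vfio-pci
      echo vfio-pci > /sys/bus/pci/devices/0000:04:00.0/driver_override
      echo 0000:04:00.0 > /sys/bus/pci/devices/0000:04:00.0/driver/unbind   # detach from its current driver
      echo 0000:04:00.0 > /sys/bus/pci/drivers_probe                        # vfio-pci claims it

      # Repeat for the second port of the same card
      echo vfio-pci > /sys/bus/pci/devices/0000:04:00.1/driver_override
      echo 0000:04:00.1 > /sys/bus/pci/devices/0000:04:00.1/driver/unbind
      echo 0000:04:00.1 > /sys/bus/pci/drivers_probe
      ```

      The same per-address binding should cover passing just one of the two SolarFlare cards later, provided its ports end up in their own IOMMU group.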
  17. Just realized I will be adding a second SolarFlare 10GbE dual-port NIC; how do I pass just one of them through to a VM (so 2 ports to the VM and 2 ports to unRAID)?
  18. I'd rather not take the chance on passing a single i350 through right now since my current switch won't link with my SolarFlare NICs if the connection is made with a 10GbE DAC (even when forced to 1GbE).
  19. Not a huge deal for me, I have another dual port NIC I can drop in until I get my switch with two SFP+ ports so I can use my SolarFlare NIC in a virtualized pfSense install.
  20. This might be answered already (I didn't see it, but I could be wrong): if I have a dual-port i350 NIC built into my server, I cannot pass just one of its ports through to a VM, correct?
  21. No problem. I'll be getting the SwOS only version of the switch and letting my pfSense box handle VLAN routing (so 10GbE to pfSense and 10GbE to unRAID).
  22. You can route between VLANs if you run RouterOS but it will not be anywhere near wire speed.
  23. Did this stick across a container restart for you? It doesn't for me.