rookie701010


Posts posted by rookie701010

  1. Not necessarily a dual adapter, but for various experiments this one has proven itself here: the EZDIY-FAB Dual M.2 Adapter. It has heatsinks, takes NVMe drives of all lengths up to 2280, and has a second slot for SATA (you have to connect a SATA cable for that, though). It uses a PCIe 4x slot. A search on Amazon for "pcie nvme adapter" turns up plenty. Usually there is nothing on these cards besides a voltage regulator. What you have to watch out for with the multi-drive versions: those are mostly quad adapters, and fairly large. They are designed for the big PCIe slot, but the board then has to support PCIe bifurcation, since each SSD can only use 4 lanes. I have one of those here too, populated it with 2 SSDs and switched the slot to 4+4+4+4 - that worked well. Your board should support this (not sure, I only have B550 and X570 here - AMD). Bifurcation normally only works on the 16x slots, so if no fat graphics card is sitting there, you can indeed run several SSDs at full speed (a quick check is sketched below).
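
    To verify the 4+4+4+4 split actually took, here is a minimal Python sketch (assuming Linux with the standard PCI sysfs attributes, e.g. from the Unraid shell); each SSD should report a width of 4:

    ```python
    # Minimal sketch, assuming Linux sysfs. Prints the negotiated PCIe
    # link width and speed for every NVMe controller; after bifurcation
    # to 4+4+4+4, each SSD should show x4.
    from pathlib import Path

    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
        pci = (ctrl / "device").resolve()   # PCI function behind the controller
        width = (pci / "current_link_width").read_text().strip()
        speed = (pci / "current_link_speed").read_text().strip()
        print(f"{ctrl.name}: x{width} @ {speed}")
    ```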

    • Like 1
  2. Update to add: The crashes kept coming, and as it almost always is in such cases, it was something hardware-related. The culprit here is the RAM (pretty sure); I just swapped it for a Kingston HyperX Fury/Renegade 3600 kit, also 128GB. Since the change required some disassembly, I also upgraded the CPU to a Ryzen 9 3950X. What's not to like :) Since this thing is running VMs and containers, more cores are a good thing.

    Why am I so sure about the RAM: I had similar issues with this kit in completely different hardware, after 18 months. So there appears to be a degradation issue. I changed everything (!) else in the box, same behaviour. Changed the RAM, stable... although the MSI boards seem to have ageing effects, too.

    I will close this issue now; it is at least linked to the hardware problem. There was no useful info in the rsyslog, btw.: the last entry was some hourly cron job, then the box was completely unresponsive.

  3. A RAM issue should show up as intermittent crashes, though. If the box is otherwise rock stable, the likely causes are the drive or the cable, or (with really old SATA controllers) hardware failure of the controller itself. I had some issues with earlier generations of AM2/AM3 boards where the SATA controllers gave up eventually; SB690/700 was OK, though. With the new boards (B350, B550/X570), SATA is stable. In your case I would re-check the SATA cable, and maybe the PSU if it is not brand new and generously specced. (A quick SMART-based cable check is sketched below.)
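
    A flaky SATA cable usually betrays itself in SMART attribute 199 (UDMA_CRC_Error_Count). A minimal Python sketch, assuming smartmontools is installed; /dev/sdb is a placeholder for the suspect drive:

    ```python
    # Minimal sketch, assuming smartmontools is installed and we run as root.
    # A non-zero, *growing* UDMA_CRC_Error_Count raw value points at the
    # cable (or connector), not the platters.
    import subprocess

    out = subprocess.run(
        ["smartctl", "-A", "/dev/sdb"],       # /dev/sdb: placeholder device
        capture_output=True, text=True, check=False,
    ).stdout

    for line in out.splitlines():
        if "UDMA_CRC_Error_Count" in line:
            print(line)
    ```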

  4. With "horribly wrong" I mean completely unresponsive: no network, no console. So... a hard reset is the only way to re-awaken the box. Maybe I can set up some forensics, like a dmesg -W in an SSH terminal on another server, and hope that something shows up (a small capture sketch is below). Right now, however, parity is rebuilding and the new disk is being precleared. I would need to reproduce this on another setup.
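
    To make that dmesg -W idea concrete: a minimal Python sketch, run from a second machine; the host name tower is a placeholder, and dmesg -W (util-linux) prints only messages that arrive after it starts:

    ```python
    # Minimal sketch: capture the crashing box's kernel log from a second
    # machine so the last lines survive the hard reset. "tower" is a
    # placeholder host; key-based SSH login is assumed.
    import datetime
    import subprocess

    proc = subprocess.Popen(
        ["ssh", "root@tower", "dmesg", "-W"],
        stdout=subprocess.PIPE, text=True,
    )
    with open("tower-dmesg.log", "a") as log:
        for line in proc.stdout:
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            log.write(f"{stamp} {line}")
            log.flush()   # flush every line: buffering defeats the purpose
    ```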

  5. 16 minutes ago, itimpi said:

    Rebuild overwrites all sectors so formatting a disk would be pointless.

    Then it shouldn't crash, and should handle it as a 4TB disk (not optimal, but okay). There was no data on the original, but the rebuild crashed... which poses some interesting questions. An erased and formatted disk just gets its parity rebuilt, with *exactly the same visuals* as restoring a disk. Or am I missing something? The I/O stats say the replaced disk is being written to, which would imply restoring the data and the 4TB file system. This looks inconsistent. (A toy parity illustration follows the screenshot below.)

    (attached screenshot: 2022-12-04 21_57_26-unraid_Main – Mozilla Firefox.png)
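
    For intuition on why a rebuild writes every sector: single parity is a per-sector XOR across the data disks, so the replaced disk is regenerated byte by byte from parity plus the surviving disks, regardless of any filesystem. A toy Python sketch (not Unraid code; the four-byte "disks" are made up):

    ```python
    # Toy illustration: parity = XOR of all data disks, per sector.
    # Rebuilding the missing disk is the same XOR with parity included,
    # so every sector gets written, formatted or not.
    from functools import reduce

    disk1 = bytes([0x11, 0x22, 0x33, 0x44])   # surviving data disk
    disk2 = bytes([0xA0, 0xB0, 0xC0, 0xD0])   # surviving data disk
    disk3 = bytes([0x05, 0x06, 0x07, 0x08])   # the disk being replaced

    def xor(*disks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

    parity = xor(disk1, disk2, disk3)    # computed during parity sync
    rebuilt = xor(parity, disk1, disk2)  # regenerate the replaced disk
    assert rebuilt == disk3
    ```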

  6. Hi there,

     

    this seems to be a GUI-related trap (bug???). I added a 4TB drive to my array; the parity disk is 18TB. It was zeroed, I formatted it (with XFS), and everything was okay. No user data on it. Then I decided to upgrade my SSD cache array (went fine) and to replace the disk with an 8TB one (plus an additional fan for better airflow). The system restarted, I replaced the disk in the array, and the rebuild started. No pre-clearing, no formatting beforehand. The information on unraid/Main said 8TB free on this disk, everything fine. The process crashed reliably, twice. The whole system went unresponsive: no screen output, not reachable over the network.

    The way out of this was to erase the disk (with the array stopped) and then start the array. The disk then gets formatted, and afterwards the rebuild starts.

    This seems like a handling issue: a rebuild onto a replacement disk of a different size should only start after formatting.

     

    I'm running Unraid version 6.11.3.

     

    With best regards,

     

    rookie701010

  7. Just installed https://raw.githubusercontent.com/docgyver/unraid-v6-plugins/master/ssh.plg on 6.10.3, chose the users, updated settings, restarted sshd, and it works. Thank you! I can understand from a security POV that this is off by default, but it is quite _useful_. Like SFTPing from the build machine into unraid to transfer the built packages - you don't want to do that as root. (A small transfer sketch is below.)
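
    What that non-root SFTP transfer can look like from the build machine, as a minimal Python sketch (using the third-party paramiko library; host, user and paths are placeholders):

    ```python
    # Minimal sketch: push a freshly built package to the Unraid box over
    # SFTP as a non-root user. Assumes paramiko (pip install paramiko)
    # and key-based auth; "tower.local", "builder" and the paths are
    # placeholders, not anything from the plugin.
    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect("tower.local", username="builder")

    sftp = client.open_sftp()
    sftp.put("dist/mypackage-1.0.txz", "/mnt/user/builds/mypackage-1.0.txz")
    sftp.close()
    client.close()
    ```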

    • Like 1