rookie701010

Everything posted by rookie701010

  1. Not necessarily a dual adapter, but for various experiments this one has proven itself here: EZDIY-FAB Dual M.2 Adapter. It has heatsinks, takes NVMe drives in all lengths up to 2280, plus a second slot for SATA (you have to hook up a SATA cable for that one, though). PCIe x4 connector. A search on Amazon for "pcie nvme adapter" turns up plenty. Usually there's nothing on these boards beyond a voltage regulator. What you have to watch out for with the multi-drive ones: those are mostly quad adapters, and fairly large. They're built for the big PCIe slot, but the board then has to support PCIe bifurcation, since each SSD can only use 4 lanes. I have one of those here as well, fitted it with 2 SSDs and switched the slot to 4+4+4+4 - worked fine. Your board should support that (not sure, I only have B550 and X570 here - AMD). Bifurcation normally only works on the x16 slots, so as long as no fat graphics card is sitting in there, you can actually run several SSDs at full speed.
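     If you want to verify that the board actually split the slot, a quick check from the shell is enough (the 01:00.0 address is just an example; look up yours in the lspci output):

        # Each SSD on the adapter should show up as its own NVMe controller
        # once 4+4+4+4 bifurcation is active.
        lspci | grep -i 'non-volatile'

        # The negotiated link width per SSD should be x4
        # (run as root to see the capability block).
        lspci -vv -s 01:00.0 | grep LnkSta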
  2. This thread helped me set up a Server 2019 that was still on the former boot NVMe drive of the unraid box. Worked nicely, thank you. Amazing what software can do these days.
  3. Just reassigned a miditower for experimental work and found out that 6.12 now has ZFS built in. Yay! Now, if we could also get MGLRU (cat /sys/kernel/mm/lru_gen/enabled), that would be GREAT! Looking forward to testing this out. Great job, keep up the good work.
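     For reference, checking and toggling MGLRU from the shell looks like this - a sketch assuming a kernel that ships the lru_gen interface:

        # 0x0007 means fully enabled, 0x0000 disabled; the file does not
        # exist at all if the kernel was built without MGLRU.
        cat /sys/kernel/mm/lru_gen/enabled

        # Enable all MGLRU components (needs root).
        echo y > /sys/kernel/mm/lru_gen/enabled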
  4. Update to add: The crashes kept coming, and as it almost always is in such cases, it's something hardware-related. Here the culprit is the RAM (pretty sure); I just swapped it for Kingston HyperX Fury/Renegade 3600, also 128GB. Since the change required some disassembly, I also swapped the CPU for a Ryzen 9 3950X. What's not to like? Since this thing is running VMs and containers, more cores are a good thing. Why am I so sure it's the RAM: I had similar issues with this kit in completely different hardware, after 18 months, so there appears to be a degradation issue. I changed everything (!) else in the box, same behaviour. Changed the RAM, stable... although the MSI boards seem to have ageing effects, too. I will close this issue now; it is at least linked to the hardware problem. There was no useful info in the rsyslog, btw: the last entry was some hourly cron job, then a completely unresponsive box.
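     For what it's worth, a quick sanity check on suspect RAM without taking the box offline for a full memtest86 run - assuming the memtester package is available - goes like this. It won't catch everything an offline test does, but a degrading kit usually trips it:

        # Lock and test 8 GB for 3 passes (needs root to mlock); pick a
        # size that leaves enough free memory for the running VMs and
        # containers.
        memtester 8G 3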
  5. A RAM issue should show up as intermittent crashes, though. If the box is rock stable, the likely causes are the drive or the cable. Or (with really old SATA controllers) hardware failure of the controller itself. I had some issues with earlier generations of AM2/AM3 boards where the SATA controllers eventually gave up. SB690/700 was okay, though. With the newer (B350, B550 / X570) boards, SATA is stable. In your case, I would re-check the SATA cable, and maybe the PSU if it's not spanking new and generously specced.
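     A cheap first check for the cable theory, assuming smartmontools is installed (the device name is an example):

        # A rising raw value of UDMA_CRC_Error_Count means transfer errors
        # between drive and controller - that points at the cable or the
        # connector, not the disk itself.
        smartctl -A /dev/sdb | grep -E 'UDMA_CRC|Reallocated|Pending'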
  6. ... aaand it worked. No idea what caused the hiccup, and unfortunately, no diagnostics of the crash. Maybe I can reproduce it on a different box. That needs to be set up first, though.
  7. Hmm, the parity check went through pretty fast. Now everything is normal. Next up: add the pre-cleared drive 😈
  8. Okay, rsyslog is enabled and appears to be working. The parity rebuild also resulted in a hard crash. Now unraid is in "zombie" mode with VMs running and a stale configuration, and the parity check is progressing. But now we have a log 👯‍♀️ The array shows as not started, but provides its services... anyway, let's see what the parity check will do.
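     For anyone wanting the same setup: the point is getting the log off the box before it dies. With plain rsyslog, forwarding to a second machine is a one-liner (the IP is a placeholder):

        # /etc/rsyslog.d/remote.conf - forward everything to another box
        # (@@ = TCP, single @ = UDP) so the last lines survive a hard crash.
        *.* @@192.168.1.50:514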
  9. With "horribly wrong" I mean completely unresponsive, no network, no console. So... hard reset is the way to re-awaken the box. Maybe I can set up forensics like a dmesg -W in a ssh terminal on another server and hope that something shows. However, now parity is rebuilding, and the new disk is getting precleaned, though. Would need to duplicate on another setup.
  10. Well. Something goes horribly wrong with that, now for the third time. Currently rebuilding parity after doing the drive removal the documented way. Will take some time, but three crashes in a row is a bit unsettling.
  11. Then it shouldn't crash, and it should handle it as a 4TB disk (not optimal, but okay). There was no data on the original, but the rebuild crashed... which poses some interesting questions. An erased and formatted disk just rebuilding parity shows *exactly the same visuals* as restoring a disk. Or am I missing something? The I/O stats say the replaced disk is being written to, which would imply restoring the data and the 4TB file system. This looks inconsistent.
  12. Hi there, this seems to be a GUI-related trap (bug???). I added a 4TB drive to my array; the parity disk is 18TB. It was zeroed, I formatted it (with XFS), and everything was okay. No user data on it. Then I decided to upgrade my SSD cache pool (went fine) and to replace the disk with an 8TB one (plus an additional fan for better airflow). The system restarted, I replaced the disk in the array, and the rebuild started. No pre-clearing, no formatting beforehand. The information on unraid/main says 8TB free on this disk, everything fine. The process crashed reliably, twice. The whole system went unresponsive: no screen output, not reachable over the network. The way out was to erase the disk (array not started) and then start the array. The disk then gets formatted, and afterwards the rebuild starts. This seems like a handling issue: a rebuild onto a replacement disk of a different size should only start after formatting. I'm running unraid version 6.11.3. With best regards, rookie701010
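      For the record, "erase" here means wiping the old signatures so unraid treats it as a blank drive - roughly along these lines (the device name is an example; double-check it before running):

         # Remove all filesystem and partition signatures from the
         # replacement disk. Destroys everything on it!
         wipefs -a /dev/sdX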
  13. Just installed https://raw.githubusercontent.com/docgyver/unraid-v6-plugins/master/ssh.plg on 6.10.3, chose the users, updated the settings, restarted sshd, and it works. Thank you! I can understand from a security POV that this is off by default; however, this is quite _useful_. Like, SFTPing from the build machine into unraid to transfer the built packages. You don't want to do that as root.
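      The use case in practice, with placeholder names and paths:

         # Batch-mode sftp: push freshly built packages as a normal user,
         # no root login on the unraid side (assumes key-based auth).
         echo 'put dist/*.txz /mnt/user/packages/' | sftp -b - builder@tower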
  14. Well, thank you for the hint. Discoverability of this is below zero. Sorted.