-Daedalus

Members
  • Content Count: 273
  • Joined
  • Last visited

Community Reputation: 28 Good

About -Daedalus
  • Rank: Advanced Member

Converted
  • Gender: Undisclosed


  1. I haven't read the thread fully, so apologies for that, but I'm curious: have you actually seen the CPU pegged at 100% in top/htop, or just in the GUI? I ask because the GUI's CPU figure also includes iowait, which spikes any time the system is waiting on I/O (i.e. the disks), so I'm wondering if you've got a dodgy HBA or similar causing crazy latency on your disks. That can look like high CPU, since the graphs max out and everything slows to a crawl, but really it's just that nothing can pull the data it needs. (There's a quick console check for this in the sketches at the end of this list.)
  2. Not to hijack, but as someone thinking about moving to 10GbE, is there a go-to recommendation for a no-fuss RJ45 card? The Intel ones seem to jump between in-tree and out-of-tree drivers a bit.
  3. From the console: "diagnostics" will create a ZIP in /logs on the boot USB. You can also do it from the GUI, somewhere in the Tools menu, if memory serves. (Rough console example in the sketches at the end of this list.)
  4. Just spit-balling here, but I seem to remember an issue with Samsung drives (mostly 850s at the time). Something to do with a non-standard starting block. I don't suppose anyone with this issue is using non-Samsung disks?
  5. My bad, didn't realise I needed to do it that way. Thanks!
  6. Thank you!! Edit: I should have checked the dependencies first. It apparently needs librsync as well: https://github.com/rdiff-backup/rdiff-backup (install sketch at the end of this list).
  7. Double-check what drives you get. I assume you're aware of the news about WD shoving SMR everywhere they can manage it (including a bunch of Red drives) recently.
  8. SMR would explain it. The parity operation runs fine while the drives are still writing into their pseudo-CMR buffer, then once they move onto the SMR tracks the performance falls off a cliff. If it were me, I'd replace them, but it depends on your use case. Those drives probably have 10-20GB worth of buffer on them, so if you can get past this initial massive write, and the rest of your writes to the array aren't going to be very big (less than the buffer size), then the performance won't be too bad. That said, as Johnnie points out, it will be unpredictable. (There's a rough way to watch the buffer run out in the sketches at the end of this list.)
  9. Absolutely. I in no way meant for that to come across as complaining (it may have, I apologise), more "passionate suggestion", shall we say. If anyone from the dev team ever decides to visit Ireland, I'll happily buy them a round. 🍻
  10. Thanks for the info, and for the work-around. I'm already back to an XFS cache, and I've spent a couple of days setting up backups and the like, so I'm not really bothered about moving back to BTRFS at this point, but it's wonderful if this works for more people. However, we shouldn't have to be hearing this from you. I'm sure Limetech are working on this, and I'm sure there's some sort of fix coming at some point, but radio silence on something this severe really shouldn't be the norm, especially if Limetech is shooting for a more official, polished image as a company. Even something simple, like: This actually tells us very little, other than not to expect a patch for 6.8, and that the release is only "soon", but at least it's something reassuring. I'm usually not the guy to advocate being seen to be doing something rather than actually doing the thing, but in this case I think a little more communication would have been warranted.
  11. Cheers, figured as much. I'm starting a copy of cache over now to convert it to XFS. The writes are to the point that they're saturating my SSDs' write buffers, causing massive performance issues for anything on cache. I'll be honest: I'll have a hard time going back to BTRFS after this. I think it'll be XFS and an hourly rsync or something (rough sketch at the end of this list) until such time as ZFS (hopefully) arrives to replace it. Edit: Moved from an unencrypted RAID1 pool (1TB + 2x500GB 850 Evos) to a single unencrypted 1TB drive, and the writes to Loop2 have gone from over 100GB in an hour to just over 100MB. All my containers and VMs are performing as expected now that the SSDs aren't choking on writes.
  12. Out of curiosity, has anyone seen this behaviour on 6.9b1?
  13. False alarm. Rebooted, and all is good. FYI, this happened right after I changed from eth0 and eth1 to bond0 in the network settings. That was the only change, though I didn't check the passphrase section before doing it, so maybe something in the diags will make it clearer.
  14. Hi all, Moved the guts of my server to new hardware today. Many, many reboots in, troubleshooting some stuff with HBAs not being recognised correctly. Did the final reboot, and I'm seeing the following: So it's acting like the array isn't encrypted, when it is. What's the protocol here? Do I enter the same passphrase I had previously? Is there something else I should do? Diags attached, thanks in advance. server-diagnostics-20200530-2202.zip
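
A few rough console sketches for the more technical posts above, in case they're useful to anyone landing here.

For the iowait point in post 1: a quick way to separate real CPU load from time spent waiting on disks, assuming the sysstat tools are available on the box (the device names below are placeholders):

    # A high %wa (iowait) with low %us/%sy points at storage, not CPU
    top -bn1 | head -5

    # Watch per-disk latency; sustained high await/%util on one drive
    # suggests a struggling disk, HBA, or cable
    iostat -dxm sdb sdc sdd 2

If %wa is what's pegged, the "100% CPU" in the GUI is really the array waiting on I/O.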
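
For post 3, roughly what the console route looks like (treat the exact output path as an assumption from memory):

    # Generate the diagnostics archive
    diagnostics

    # The ZIP should land in the logs folder on the boot USB
    ls /boot/logs/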
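
On the librsync dependency from post 6: a hedged sketch of checking for it and pulling both pieces in on a generic Debian-style box (package names vary by distro; on Unraid itself you'd normally get these through a plugin rather than apt):

    # Is the librsync shared library already present?
    ldconfig -p | grep librsync || echo "librsync not found"

    # Roughly, the pair would be:
    sudo apt-get install librsync-dev
    pip install rdiff-backup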
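
To make the buffer claim in post 8 testable: push a sustained sequential write larger than the suspected 10-20GB CMR-style cache and watch the throughput as it goes. The size and mount point below are assumptions; don't run this against a disk you can't afford to fill up temporarily.

    # Write ~40GB and report speed as it goes; on an SMR drive you'd expect
    # the MB/s to drop sharply once the internal buffer fills
    dd if=/dev/zero of=/mnt/disk3/smr-test.bin bs=1M count=40000 oflag=direct status=progress
    rm /mnt/disk3/smr-test.bin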
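
And for the "XFS plus an hourly rsync" idea in post 11, a minimal sketch of what I had in mind. The share names and the cron location are assumptions; on Unraid this would more likely live in the User Scripts plugin set to run hourly:

    #!/bin/bash
    # Mirror the cache pool to a backup share on the array,
    # removing anything that no longer exists on cache
    rsync -a --delete /mnt/cache/ /mnt/user0/cache-backup/ >> /var/log/cache-backup.log 2>&1

The trailing slash on /mnt/cache/ matters: it copies the contents of cache rather than creating a nested cache/ directory at the destination.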