-Daedalus

Everything posted by -Daedalus

  1. +1, though I've heard from podcasts and the like that it is something Limetech is interested in. I'd assume it's on the roadmap somewhere, but there's probably other stuff ahead of it.
  2. Thank you! I figured it was mostly to address the excess writes (I moved back to a single XFS drive from a pool because of this), just wasn't sure if there were any other effects as well.
  3. Can someone ELI5 the Docker changes in this version? How does the docker image get formatted with its own filesystem? Wouldn't it just inherit the filesystem of the drive it's living on? What sort of differences/impact might we expect from the bind mount vs. loopback? Just curious.
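     For anyone else wondering the same thing, a rough way to see the difference on a live system (a sketch only; the paths are assumptions based on a stock setup):
     ```bash
     # Loopback image: docker.img is its own filesystem (btrfs by default, I believe)
     # attached to a loop device, independent of what the cache drive is formatted as.
     losetup -a | grep docker.img
     mount | grep /var/lib/docker

     # Directory/bind mount: /var/lib/docker sits directly on the host filesystem,
     # so it really does inherit the drive's filesystem.
     df -hT /var/lib/docker
     ```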
  4. I haven't read the thread fully, so apologies for that, but I'm curious: Have you seen the 100% CPU in top/htop during one of these crashes, or just in the GUI? I ask because the GUI also takes iowait into account in the CPU usage. This will spike any time the system is waiting on I/O (i.e., disks), so I'm wondering if you've got a dodgy HBA or similar causing crazy latency on your disks. This can look like high CPU, because you'll see the graphs max out and everything will slow to a crawl, but it's actually just that nothing can pull the data it needs.
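     If it helps, a rough way to separate real CPU load from iowait at the console (standard Linux tools; iostat comes from sysstat, which may need installing via NerdPack or similar):
     ```bash
     # The 'wa' figure is time spent waiting on I/O, not actual CPU work.
     top -bn1 | grep 'Cpu(s)'

     # Per-disk latency: consistently huge await/%util on one disk points at
     # a struggling drive or HBA rather than the CPU.
     iostat -x 1 5
     ```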
  5. Not to hijack, but as someone thinking about moving to 10GbE, is there a go-to recommendation for a no-fuss RJ45 card? The Intel ones seem to jump between in-tree and out-of-tree drivers a bit.
  6. From the console: "diagnostics" will create a ZIP in /logs on the boot USB. You can also do it from the GUI somewhere in the Tools menu, if memory serves.
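     Roughly what that looks like (the filename will obviously differ):
     ```bash
     diagnostics
     # Creates something like <server>-diagnostics-YYYYMMDD-HHMM.zip in /boot/logs
     ls /boot/logs/
     ```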
  7. Just spit-balling here, but I seem to remember an issue with Samsung drives (mostly 850s at the time). Something to do with a non-standard starting block. I don't suppose anyone with this issue is using non-Samsung disks?
  8. My bad, didn't realise I needed to do it that way. Thanks!
  9. Thank you!! Edit: Should have checked dependencies. It apparently needs librsync as well: https://github.com/rdiff-backup/rdiff-backup
  10. Double-check what drives you get. I assume you're aware of the news about WD shoving SMR everywhere they can manage it (including a bunch of Red drives) recently.
  11. SMR would explain it. The parity runs fine while the drives are still using their pseudo-CMR buffer, then once they move to SMR tracks the performance falls off a cliff. If it were me, I'd replace them, but it depends on your use-case. Those drives probably have 10-20GB worth of buffer on them, so if you can get through this initial massive write, and the rest of your writes to the array aren't going to be very big (less than the buffer size), then the performance won't be too bad. That said, as Johnnie points out, it will be unpredictable.
  12. Absolutely. I in no way meant for that to come across as complaining (it may have, I apologise), more "passionate suggestion", shall we say. If anyone from the dev team ever decides to visit Ireland, I'll happily buy them a round. 🍻
  13. Thanks for the info, and for that work-around. I'm already back to an XFS cache, and spent a couple of days setting up backups and the like, so I'm not really bothered moving back to BTRFS at this point, but it's wonderful if this works for more people. However, we shouldn't have to hear this from you. I'm sure Limetech are working on this, and I'm sure there's some sort of fix coming at some point, but radio silence on something this severe really shouldn't be the norm, especially if Limetech is shooting for a more official, polished vibe as a company. Even something simple, like: This actually tells us very little, other than not to expect a patch for 6.8, and that the release is only "soon", but at least it's something reassuring. I'm usually not the guy to advocate being seen to be doing something rather than actually just doing the thing, but in this case I think a little more communication would have been warranted.
  14. Cheers, figured as much. I'm starting a copy over of cache to convert to XFS. The writes are to the point that they're saturating my SSDs' write buffer, causing massive performance issues for anything on cache. I'll be honest: I'll have a hard time going back to BTRFS after this. I think it'll be XFS and an hourly rsync or something until such time as ZFS (hopefully) arrives to replace it. Edit: Moved from an unencrypted RAID1 pool (1TB + 2x500GB 850 Evos) to a single 1TB unencrypted drive, and the writes to loop2 have gone from over 100GB in an hour to just over 100MB. All my containers and VMs are performing as expected now that the SSDs aren't choking on writes.
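      To be clear on what I mean by the hourly rsync: something like the below, run from cron or the User Scripts plugin (share names here are just examples):
      ```bash
      #!/bin/bash
      # Mirror the cache share to an array-only copy. /mnt/user0/ is the
      # array-only view of user shares, so this lands on spinning disks.
      # --delete keeps the copy exact; drop it if you'd rather keep extras.
      # Ideally run with containers stopped for a consistent appdata copy.
      rsync -a --delete /mnt/cache/appdata/ /mnt/user0/backup/appdata/
      ```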
  15. Out of curiosity, has anyone seen this behaviour on 6.9b1?
  16. False alarm. Rebooted, all good. FYI, this happened right after I changed from eth0 and eth1 to bond0 in network settings. That was the only change, though I didn't check the passphrase section before doing it, maybe something will appear in diags to make it more clear.
  17. Hi all, Moved the guts of my server to new hardware today. Many, many reboots in, troubleshooting some stuff with HBAs not being recognised correctly. Did the final reboot, and I'm seeing the following: So it's acting like the array isn't encrypted, when it is. What's the protocol here? Do I enter the same passphrase I had previously? Is there something else I should do? Diags attached, thanks in advance. server-diagnostics-20200530-2202.zip
  18. It's weird that it's there. I get it's supposed to be more representative of general system usage, but it creates ambiguity given that it's listed under "Processor" and "CPU load". It would be nice if the output here was the same as 'top', as some (myself included) might expect.
  19. It's write amplification. This means that any container that was writing a little bit, will still (relatively) only be writing a little bit. Any container that was writing a lot, will now still be writing a lot. It's not the fault of any one container, but Docker (or something else) itself.
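      If you want to see which containers are doing most of the underlying writing, something like this gives a rough picture (cumulative since each container started):
      ```bash
      # The BLOCK I/O column shows reads/writes per container.
      docker stats --no-stream --format "table {{.Name}}\t{{.BlockIO}}"
      ```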
  20. Fair points both. There are different ways to do each that don't require the current implementation, but options are always good.
  21. In my mind, this is what the existing "No" setting should do, evidenced by the number of people expecting it to do this and then being surprised that files are still on the cache. So it seems like, as long as there was a notification somewhere, you could just change the behaviour of the existing setting and leave it at that. I can't really envision a use-case where you'd want an array-only share but have certain files always on the cache, unaffected by the mover. If there is one, let me know.
  22. I would like this as well. +1 Currently, I have my documents and music shares set to cache-prefer so I don't have to wait for spinners to spin up: +1 to responsiveness, +1 to energy efficiency. But I manually back these shares up to a backup share on the array via rsync because, to be frank, while the cache pool is nice, I don't trust BTRFS to keep my data safe; it's probably fine in reality, but I've read a few too many corruption stories that end in "format cache drives and restore from backup". Could something like this be done behind the scenes? Have a "Clone/both/whatever" setting function like cache-only, but run an rsync once the mover is done?
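      Something like this is roughly what I have in mind, as a user script (share names are placeholders, and it assumes the mover shows up as a process called 'mover'):
      ```bash
      #!/bin/bash
      # Wait for the mover to finish, then sync the cache-prefer share to the array.
      while pgrep -x mover >/dev/null; do
          sleep 60
      done
      rsync -a /mnt/cache/music/ /mnt/user0/backup/music/
      ```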
  23. Thanks for the offer! I actually figured out pretty quickly after posting that they weren't PNGs; I should have guessed they'd be vector. I just fullscreened and zoomed on the two images I needed and mocked something up in Paint. Inelegant, but it does the job.