grigsby

Members
  • Content Count: 11
  • Joined
  • Last visited

Community Reputation

4 Neutral

About grigsby

  • Rank
    Newbie
  1. Not Dephcon, but I recognize the graph: it's Grafana. To get cool visualizations like that, you'll need Telegraf (data collection), InfluxDB (storage), and Grafana (visualization). It's super fun if you're into tinkering with stuff like this and monitoring everything on your network. I've attached a few images here of what some of my dashboards look like. Mine aren't super cool (yet!), but they're always evolving. The top one is part of my pfSense firewall dashboard; the rest (disks and docker containers) are from my Unraid server. (Also, the delta-data disk usage numbers are totally wrong. I'
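     If anyone wants to try a similar setup, the Telegraf side is just one config file. A minimal sketch looks roughly like this -- the InfluxDB hostname and database name below are placeholders for whatever you run on your own network:

     ```toml
     # Minimal telegraf.conf sketch: collect CPU, memory, disk, and Docker
     # container metrics and ship them to InfluxDB for Grafana to chart.
     [agent]
       interval = "10s"          # how often to sample

     [[inputs.cpu]]
       percpu = true
       totalcpu = true

     [[inputs.mem]]

     [[inputs.disk]]

     [[inputs.docker]]
       # Requires mounting the Docker socket into the telegraf container.
       endpoint = "unix:///var/run/docker.sock"

     [[outputs.influxdb]]
       urls = ["http://influxdb.local:8086"]   # example host -- adjust
       database = "telegraf"                   # example database name
     ```

     From there it's all Grafana: point a data source at the InfluxDB database and build panels off the cpu/mem/disk/docker measurements.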
  2. I'm curious to know if these "workarounds" in the beta releases are true bugfixes. There's a difference between, "We identified the bug and have fixed it," and "We have not been able to identify the bug, but if we do these non-standard things with dockers/filesystems/etc., things seem to get a little better?" Sort of just throwing spaghetti at the wall and seeing what sticks. I'm definitely more interested in a true bugfix than some sort of poorly-defined workaround that just appears to make things a little better while the source of the problem remains unknown.
  3. Yes, the SMART report does correlate correctly with the excessive writes. I think TexasUnraid has done a lot of helpful testing, but some of his terminology might be a bit confusing. Basically it comes down to this: an SSD cache drive formatted as btrfs = huge (unacceptable) amounts of write operations (gigabytes every hour) by the loop2 device; an SSD cache drive formatted as xfs = works normally. I currently have my cache drive formatted as xfs (so my SSDs don't get trashed) and it's working normally. The problem with this arrangement is that you can't have a
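     For anyone who wants to quantify the loop2 writes instead of eyeballing iotop, here's a small sketch that samples /proc/diskstats. The device name is whatever your system reports (loop2 here); note the kernel always counts diskstats in 512-byte sectors regardless of the drive's physical sector size:

     ```python
     # Sketch: measure how much data a block device (e.g. loop2) writes over
     # an interval by sampling /proc/diskstats twice. In each diskstats line,
     # field 3 is the device name and field 10 is total sectors written.

     SECTOR_BYTES = 512  # diskstats sectors are always 512 bytes

     def sectors_written(diskstats_text, device):
         """Return cumulative sectors written for `device` from diskstats text."""
         for line in diskstats_text.splitlines():
             fields = line.split()
             if len(fields) >= 10 and fields[2] == device:
                 return int(fields[9])
         raise ValueError(f"device {device!r} not found in diskstats")

     def mb_written(before, after):
         """Convert a delta between two sector counts to megabytes."""
         return (after - before) * SECTOR_BYTES / 1e6

     # Usage sketch (run on the Unraid box itself):
     #   before = sectors_written(open("/proc/diskstats").read(), "loop2")
     #   ... wait a minute or so ...
     #   after  = sectors_written(open("/proc/diskstats").read(), "loop2")
     #   print(f"loop2 wrote {mb_written(before, after):.1f} MB")
     ```

     On a healthy xfs cache you'd expect a few MB per minute; the btrfs problem shows up as hundreds of MB to GB over the same interval.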
  4. Well, I gotta say, LimeTech's response to this bug has been impressive -- in a not good way. This is a major, potentially catastrophic bug that could result in loss of data, time, and hardware/money that was first reported seven months ago, and the only two comments LimeTech makes about it are dismissing it as "tldr"? I first installed Unraid in May on a new server build and promptly purchased a license for $89. Obviously I don't have much history with Unraid or the company, but their total non-response to this bug report is disheartening.
  5. The bug was originally reported in 6.7.2. The thread title was changed to 6.8.3 when it was discovered that it still exists in the current release.
  6. Maybe, but you said the same thing about them working on 6.8 when this bug was reported back in version 6.7, and we still haven't heard anything from them about it. This is not a minor issue -- I suspect that it's actually happening on a LOT of installs, but most people don't know it's happening because it requires actually looking for it. This is a serious bug -- potentially costing users a lot of money in trashed SSDs. And this is commercial software -- we're paying for a license to use it, so it's not just a FOSS project where expectations should be low. In my opinion, LimeTech shou
  7. I am not using either nextcloud or mariadb. I wonder if there's a faster way to test this. I have two SSDs in the server, since the original plan was to have them both in a (btrfs) cache pool. I had to remove the pool, and reformat one of the drives as xfs, which is now running as a single cache drive. The second SSD is now just sitting idle, unformatted, not mounted, doing nothing. Is there an easy way to format the second SSD as btrfs, mirror the contents of the cache disk to the btrfs disk, and tell unraid to use SSD_2 as the cache? It might make it easier to switch
  8. Can we update the title of this report to [6.8.3], since it's still happening with this latest version? And I would personally consider this to be more severe than a "minor" bug -- I think it fits the category of "urgent" since it potentially leads to data loss if a cache pool is not a viable option.
  9. I'm seeing this behavior, too. New Unraid build (6.8.3), with two NVMe drives as a cache pool formatted btrfs without encryption. Numerous docker containers (all the fun stuff -- plex, sonarr, radarr, grafana, telegraf, bitwarden, etc.). iotop shows a huge amount of write activity from loop2 (GB after just a few minutes of watching). I removed the cache pool, removed one of the drives, and formatted one of the NVMe drives as xfs to use as a single cache drive, brought everything back online again, and now the I/O is at what I would consider normal levels (a few megabytes in a few minutes).
  10. Neither the USB creator app nor the manual install script works on macOS Catalina 10.15.4. SIP is disabled, and the app has full disk access. The USB drive is formatted FAT32 and named UNRAID. The make_bootable_mac script doesn't even look like it would work right -- it appears to be copying files to/from the wrong places. I'm pretty good with this stuff, but it's extremely difficult (impossible?) to make a bootable USB stick on a Mac running Catalina right now. I eventually gave up and used my kid's Windows box to make the USB drive. I'm still really looking forward to bringing my first unRAID