shadowbert

Everything posted by shadowbert

  1. That'd be right... kinda would have been nice if they hinted at it in the title. If it goes down again I'll certainly check it out though.
  2. Huh. Weird design choice, but that certainly explains it. At any rate, things are looking better with this half of the RAM. And it's entirely possible my old set might actually still be under warranty, so that might be handy...
  3. I got a screen connected. For some reason, memtest (from the unraid install) refuses to run. It just immediately restarts the computer. Ominous, but not conclusive. Going back to running unraid, with the monitor connected I can see that this shutdown is caused by a kernel panic. That certainly explains why the server simply "disappeared". Unfortunately (though unsurprisingly) the server was not kind enough to write the full details of the panic to the syslog, but it certainly helps confirm the ram theory. I've taken half of it out, to see if I can bisect which one is giving me grief. Fingers crossed. Thanks for pointing me in the right direction.
  4. That makes sense... though it is certainly concerning. Would the (now removed) SSD that was throwing those CRC be a possible cause?
  5. Really? I had assumed that a power outage that happened to occur when something is written to disk A but not disk B would cause that sort of issue... or is btrfs somehow smarter than that?
  6. I highly suspect it's more likely to be due to the sudden shutdowns than RAM. Running memtest is going to be tricky given that I don't have any display outputs on that machine... What should I do to clean up the corruption, assuming RAM isn't the issue?
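On the cleanup question: assuming the cache pool is btrfs in a mirror profile and mounted at /mnt/cache (the usual unraid path — both are assumptions about this particular setup), the standard tool is a scrub, which re-reads every block, verifies checksums, and repairs bad copies from the healthy mirror member. A rough sketch:

```shell
# Zero the per-device error counters first, so anything new stands out
btrfs device stats -z /mnt/cache

# Run a scrub in the foreground (-B = don't background); with a mirror
# profile it rewrites any block whose checksum fails using the good copy
btrfs scrub start -B /mnt/cache

# Check what (if anything) was found and repaired
btrfs scrub status /mnt/cache
btrfs device stats /mnt/cache
```

Worth noting a scrub can only fix what checksums catch; if bad RAM corrupted data before it was checksummed, a restore from backup may still be needed.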
  7. Alright, so I haven't had it completely lock up since posting, but I have had docker grind to a halt at least 3 times. Looks like I have BTRFS errors for days in the logs. So I guess there's something wrong with my cache. One of the drives does have a CRC error count value of 133... so I'm guessing I should try ripping that out and seeing if it helps.
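For reference, that counter is SMART attribute 199 (UDMA_CRC_Error_Count), and a rising raw value usually points at the SATA cable or link rather than the disk surface itself. A small sketch for pulling the raw value out of `smartctl` output — `/dev/sdX` is a placeholder, and the sample line below just mimics the usual smartctl attribute-table layout:

```shell
# On the live system (run as root; /dev/sdX is a placeholder):
#   smartctl -A /dev/sdX | awk '$2 == "UDMA_CRC_Error_Count" {print $NF}'

# The awk filter can be sanity-checked against a mocked-up table line:
line='199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 133'
echo "$line" | awk '$2 == "UDMA_CRC_Error_Count" {print $NF}'   # prints 133
```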
  8. Never mind, I worked it out. I had to set the server to use itself as the remote server.
  9. Fair call. Is this all I need to do for that? Nothing came up in the share yet...
  10. I don't even know where to start with this one, but it's a real pain. The server will simply stop responding to any traffic and, because it's headless (no gpu, no display ports on the motherboard), I have no choice but to cold restart it. You may note that drive 5 is missing. This drive had quite a lot of SMART errors, so I took it out to see if it was somehow causing issues. I would have thought a misbehaving drive might just get disabled (and certainly shouldn't take out the whole server), but removing it doesn't seem to have helped. lime-diagnostics-20231013-0844.zip
  11. Right... so I might be able to control it via the terminal if I match this name somehow? The specific stack I'm using (mailcow) has a special update script which does a little more than just pull/restart the containers, so I guess that means this plugin isn't going to work that well in my use case... Oh well, at least it gets compose installed in a nice and easy way.
  12. I'm a bit confused. Everything seems to be working well - but the UI and the command line don't seem to agree on which containers are running. That is, if I run `docker compose up -d`, the webui still thinks everything is down. If I press "up" on the webui, `docker compose logs -f` gives me nothing (as does `docker compose ps`). If I do both, I get two sets of the same containers. Is this a known thing? Or has something gone crazy wrong on my end?
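A likely explanation for the mismatch (hedged, since I can't see the plugin's internals): Compose v2 groups containers by project name, which defaults to the name of the directory holding the compose file, so a stack brought up from the CLI in one place and the same stack brought up by the plugin under a different project name are two independent projects. The project name can be pinned so both sides target the same stack — `mystack`, and whether the plugin honours it, are assumptions:

```shell
# Show every compose project the daemon knows about, with status
docker compose ls

# Pin the project name explicitly so CLI invocations target one stack
docker compose -p mystack up -d
docker compose -p mystack ps

# Same effect via the environment
COMPOSE_PROJECT_NAME=mystack docker compose up -d
```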
  13. I just installed this for the first time - and novnc doesn't start. The rest seems ok, but I'm unable to connect to port 8083. In the docker logs I note the following: > 2022-07-31 23:57:28,170 INFO exited: audiostream (exit status 111; not expected) > 2022-07-31 23:57:29,252 INFO exited: novnc (exit status 1; not expected) Everything else seems ok, but those two processes attempt to start and fail in the same way a few times until supervisord gives up on them. Anyone else seeing something like this?
  14. I'm setting this up for the first time, and I've noticed the links in the description are dead (specifically, the example config and the device-specific info). Does anyone have an up-to-date link for these? Can we get the description updated? Thanks.
  15. I've installed the pihole exporter - but the web port seems to be hardcoded to 80. Can this be exposed as a setting? As it stands it doesn't work for people who run pihole in a Docker container in bridge mode and use port 80 for other things (such as unraid itself)...
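If the container in question wraps ekofr/pihole-exporter, the upstream image appears to take the Pi-hole web port from a PIHOLE_PORT environment variable, so it may be workable to pass it manually even while the template hardcodes 80. A sketch — every value below is a placeholder, and the variable name itself is an assumption about that image:

```shell
# Hypothetical manual run; 9617 is the exporter's usual metrics port
docker run -d --name pihole-exporter \
  -e PIHOLE_HOSTNAME=192.168.1.2 \
  -e PIHOLE_PORT=8080 \
  -e PIHOLE_API_TOKEN=changeme \
  -p 9617:9617 \
  ekofr/pihole-exporter
```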
  16. For what it's worth, I meant to tick "snapshots" but I didn't see it until after I pressed submit. But yeah, +1 to that.
  17. I see - and that's the trick they used in the video above? Very interesting. When the cache pool gets flushed to disk - does the file also stay in the cache pool?
  18. Oh of course. The comment I made was not disregarding unraid as an option - I'm just trying to understand its limitations. It's easy to find information about the benefits - but getting a list of up-to-date limitations is not as easy. Though it would be nice if there was a way to set specific shares to hold multiple copies of a file on different disks, to boost read performance (and boost resiliency)...
  19. Sneaky sneaky... In that case, what kind of performance should one expect from an actual array? I assume it's just going to be the speed of the drive minus a bit for overhead? In other words, it's going to pretty much always be slower than RAID?
  20. This video shows Linus and his somewhat ridiculous file server. Although the capacity is something I don't need (at least at the moment), the read/write speeds have me very interested. In contrast to, say, RAID5 (which I currently use and am contemplating migrating away from), where a file is striped across multiple disks (meaning they can work together to boost performance), unraid stores a given file on one drive (plus parity), so any read or write speed is going to be limited by the performance of that drive, right? So how is it possible to reach those sorts of speeds on one file (he had a single ~80GB test file) on spindle disks? I tend to use cheaper disks (the "I" in RAID), so one of my major concerns with switching to something like unraid is performance...
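Roughly, yes, for a single large file on the array proper. A back-of-envelope comparison — the 150 MB/s per-drive figure is an illustrative assumption, not a benchmark:

```shell
# Assumed sequential throughput of one spinning disk (illustrative only)
per_disk=150   # MB/s

# RAID5 stripes each file across the members, so a 4-disk RAID5 can
# read one file at very roughly (n-1) times a single drive's speed
echo "4-disk RAID5 single-file read: ~$(( (4 - 1) * per_disk )) MB/s"

# unraid keeps each file on one data disk, so a single-file read tops
# out around one drive's speed no matter how many disks the array has
echo "unraid single-file read:       ~$(( per_disk )) MB/s"
```

The headline speeds in videos like that generally come from a fast SSD cache pool sitting in front of the spinning array rather than from the array itself.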