semioniy


  1. Got the same errors in my logs, but there was only one plugin I had installed recently: Unraid Connect. Removed it, no errors anymore.
  2. Thanks, but my SSDs are SATA, not NVMe. I'll add it regardless, maybe I'll install an NVMe drive in the future 🙂
  3. The issue of cache drives dropping offline has returned despite the UPS. The UPS added some stability for sure, but apparently it didn't solve the problem altogether.
  4. Sure, as I wrote: I accept it as part of the hobby, and I use a btrfs dev stats script that runs on a schedule myself (a sketch of such a check is at the end of this list). What I don't understand is the following: since UnRaid's primary purpose is to make NAS storage simple and abstract away the individual storage devices, why does it not do something when one of these storage devices gets disconnected on a live array? Like, at least detect that a /dev/sdx device can't be communicated with anymore and notify the user about it.
  5. Well, I ran the script from SpaceInvader1, and it didn't find any problems, and it could free a respectable 175 KB of space 🙂 So, I guess I'll assume it's all good. BTW, doing simple math: 14 GB would be 70% of the default 20 GB, and crossing this threshold would give you a utilization warning. With 13 GB used and 20 containers installed, 1 or 2 additional containers (or even a container updating) could already give you a warning, forcing you to either raise the utilization warning threshold or enlarge your Docker image. So, I think I'll chill and won't worry too much about not following the "20 GB should be enough" rule, given how much I love to play around with Docker 🙂
  6. TLDR: When a disk from the cache pool disconnects, UnRaid keeps the pool online, slowly letting btrfs errors accumulate, but doesn't notify the user about the drive being offline, and I don't quite understand why. I have 3 SSDs in my server, forming a btrfs cache pool. From time to time, whether the reason is vibration in the room, a problem with a SATA controller or cable, or whatever else, an SSD drops offline, and the cache pool starts accumulating btrfs errors. Knowing about this problem, and having encountered it previously, I run a `dev stats` script every hour that checks btrfs for errors and notifies me if it finds any (see the sketches after this list). I assume it's just part of the hobby, me being my own sysadmin and running a virtualization server at home. What I don't quite understand is the following: why doesn't UnRaid detect this and notify the user in some way? If I didn't run the script myself, I wouldn't know about the problem until the pool had accumulated enough errors that even shutting the machine down, re-plugging all the cables, turning it back on, and running a correcting scrub wouldn't help anymore, forcing me to reformat the drives and recreate the pool. Been there, done that. So, my question: why doesn't UnRaid probe the drives to notice when they drop offline? As far as I could see, even when I stopped the array, the drive in question seemed to be present, even though UnRaid clearly couldn't communicate with it, and only after a reboot could the system tell that the drive actually WAS missing. Is it a hardware limitation? SATA does support hot-plugging to some extent, so there probably is a way to know when a drive disconnects. P.S. Also, facing this problem last time, I noticed that running `btrfs scrub start /mnt/cache` from the console shows me errors, but running a scrub operation from the UI reported no errors whatsoever, so without using the command directly / running a scheduled script I wouldn't even know that something had happened, even if I was looking for an issue.
  7. For those who stumble upon this topic in the future, here's the video.
  8. Hi. @Hoopster, does this rule of thumb apply to everyone, or only to those who don't use UnRaid as a virtualization server with 25 containers running? 🙂
  9. So, I had been troubleshooting for a week after my last post, but couldn't find anything. I checked Cloudflare settings, nginx and whatnot. It did look as if the WebUI page would load but wouldn't open the web sockets that update all the live information, but I couldn't figure out why; then impatience took over and I gave up on it. Well, would you know, 2 months later it works perfectly, and all I did was a whole lot of nothing! I didn't touch anything, didn't update anything on the UnRaid side, but it suddenly started working properly again. So, attempting to isolate the issue: since I didn't touch UnRaid, there are 2 variables in the chain that might have changed. Cloudflare - they constantly update their services and silently change things in the background, as far as my understanding goes, so they might have repaired what they had previously broken. I kinda doubt it though, because half of the internet runs through Cloudflare, and if their web socket support were broken, there'd be a fire under their butt. Browser (Chrome in my case) - it has gone through a couple of versions since my last post, but I didn't pay too close attention. If it is a browser update breaking things though, I'll watch it more closely from now on, and will check before and after updates.
  10. Hi. I wanted to give a heads-up to anyone experiencing this problem from time to time. Problem: My cache pool consists of 3 SSDs, 2 of which are fairly old (they lived through 2 laptops), and I had a problem of one of the SSDs getting disconnected mid pool operation, with errors starting to accumulate until I rebooted the system and ran a scrub. Suggested solution / investigation: Some forum posts suggested that the issue lies in the SSDs themselves, the cables, or even the backplane of the motherboard. I swapped cables and tried 2 different PCIe-to-SATA adapters; nothing helped. Actual solution (in my specific case): TLDR - I bought and installed a UPS for the server. All the errors stopped appearing. Longer version - I noticed that my 3D printer sometimes had a horizontal shift in the prints in one of the directions. That's apparently a sign of a short power outage (possibly due to voltage fluctuations) - not so long that the printer would stop and force me to restart the print, but long enough for it to lose its position. I decided that losing my data because of a voltage spike would defeat the purpose of a NAS, so I bought a UPS (the first one I found that satisfied my wattage needs - APC Back UPS BX - BX950MI-GR - 950 VA). Errors haven't appeared since, and as an added benefit it has a battery, and UnRaid can safely shut down in the event of a power outage if you connect the UPS to the server via USB (a quick way to verify that link is sketched after this list).
  11. Having this problem as well, 1.5 years later. What's weird is that others started having this problem much earlier, while for me it started only a month ago, despite me being up to date with the latest UnRaid version as well as the nginx reverse proxy. Running through Cloudflare.
  12. @Easyacid here's a UML diagram of my setup. Hope it helps. Please note that, even though my setup works and seems relatively safe, I cannot guarantee that it's actually safe and not just a bad idea. I'm no security expert, nor a network engineer, and am just playing around in hopes of stumbling upon a solution 🙂
  13. Anyone care to share their config? For inspiration purposes only, I will definitely not copy-paste it 🙂
  14. After trying to make this work for a couple of days, I decided to go another route: my own domain name on Cloudflare, with all the bells and whistles of Cloudflare protection → my IP → forwarded to a reverse proxy → Unraid GUI + web GUIs of a couple of containers. Not the simplest setup, but it works well enough.
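
Since posts 4 and 6 reference a scheduled `dev stats` check, here is a minimal sketch of what such an hourly check could look like (e.g. as a user script). It assumes the pool is mounted at /mnt/cache, and that Unraid's notify helper lives at /usr/local/emhttp/webGui/scripts/notify - that path and its flags are an assumption from my own setup, so verify them before relying on this.

```bash
#!/bin/bash
# Hourly btrfs error check - a minimal sketch, e.g. for the User Scripts plugin.
# Assumptions: cache pool mounted at /mnt/cache; Unraid notify helper at the path below.

POOL="/mnt/cache"
NOTIFY="/usr/local/emhttp/webGui/scripts/notify"   # verify this path on your install

# 'btrfs device stats -c' prints per-device counters and exits non-zero
# if any of them (read/write/flush/corruption/generation errors) is non-zero.
if ! STATS=$(btrfs device stats -c "$POOL"); then
    echo "$STATS"
    "$NOTIFY" -s "btrfs errors on $POOL" \
              -d "Non-zero btrfs device stats detected - check the pool." \
              -i "warning"
fi
```

The whole point of post 6 is that nothing like this runs out of the box, so a check along these lines at least surfaces accumulating errors early.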
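On the P.S. in post 6 (a scrub from the console reporting errors the UI didn't show), this is roughly how I look at scrub results from a shell. Nothing here is Unraid-specific; these are plain btrfs-progs commands against the same /mnt/cache pool.

```bash
#!/bin/bash
# Checking scrub results from the console (plain btrfs-progs, pool at /mnt/cache).

btrfs scrub start -B /mnt/cache   # -B runs in the foreground and prints a summary with error counts
btrfs scrub status /mnt/cache     # summary of the last / currently running scrub
btrfs device stats /mnt/cache     # cumulative per-device error counters (reset only with -z)
```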
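And for post 10: Unraid's built-in UPS support uses apcupsd, so once the UPS is connected over USB and enabled in the UPS settings, a quick sanity check from the console is to query the daemon. A small sketch, assuming the standard apcupsd status fields:

```bash
#!/bin/bash
# Quick check that the UPS is actually talking to the server over USB (apcupsd).

apcaccess status                                       # full status report from the UPS daemon
apcaccess status | grep -E 'STATUS|BCHARGE|TIMELEFT'   # ONLINE?, battery %, estimated runtime
```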