Everything posted by DarkKnight

  1. I see that StorCLI (for LSI) has been requested a couple of times.
  2. It's not clear from the template instructions that you mean to literally 'REMOVE' those variables from the template instead of just removing the pre-populated value. That's not at all a typical requirement for a template to function. Wouldn't it make more sense just to leave the variables out of the template entirely and note in the instructions to add them if needed? At least then it would launch/run without giving some vague error message that leads to 30 minutes of looking for answers. At a bare minimum, the instructions should be explicit that the entire variable needs to be removed from the template.
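     For anyone else tripped up by this, a minimal sketch of what removing the variable means in practice, if you'd rather edit the XML directly. The variable name and template filename are placeholders; templates-user is where dockerMan normally keeps user templates.

        # Delete the whole <Config> element for the optional variable, not just its value.
        # Placeholder names throughout.
        sed -i '/<Config Name="OPTIONAL_VAR"/d' /boot/config/plugins/dockerMan/templates-user/my-template.xml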
  3. My Conan Exiles server seems to be crashing. Not much in the logs except an apparently common timezone error and a Key Distribution Service error. It runs for hours without problems, but overnight it dies and I can no longer connect the next day. Suggestions on where to look? Thanks.
  4. I don’t want to disable update checks entirely, just want the banner notifications gone. Seeing that container updates are available is useful when I'm in the Docker tab. There’s another thread about it here: https://forums.unraid.net/topic/129950-action-center-enabled
  5. Well, since Squid was being so intransigent about making the banners optional, and Limetech is not reacting at all to this, a long while ago I started using Adblock to just block the banners like you would any other annoyingly disruptive or badly designed element on a webpage. It seems ridiculous to need to rely on Adblock to solve problems in the Unraid GUI, but here we are. I wouldn’t call this ‘problem solved’, because the fact that people keep responding to this post months later makes it clear it’s irritating a lot of users. Making the banner dismissible isn’t a replacement for making it optional to begin with; it’s just the least that could be done. Like others, I don’t want to look at it at all. Nag warnings are not an improvement, and ‘action center enabled’ is extremely vague unless you already know what it means.
  6. Don’t give them any ideas. JFC, the whole point is that these are just minor updates. Since certain popular docker containers are automatically rebuilt every few days, this is basically all the time. The extra notifications are not necessary at all and, frankly, actively disliked.
  7. Is there a way to just completely disable banner notifications? I honestly think they're getting abused by plugins at this point. Every time I open a page in the WebGUI, some plugin or another has something so damned important it needs my attention right now, for a minor update that will have zero impact on me except to distract me from what I'm trying to do in that moment. Certain frequently updated plugins seem to be generating banner notifications several times a week. Reaching out to the community developers themselves feels, in my experience, like a waste of time, as they are pretty unreceptive to user requests for options to disable them. I don't want to call out anyone specific here, but it's more than one community developer over-relying on these notifications. In short, I'd like to know how to just turn them all off entirely for now. If that's not available, I'd definitely like it to be an option. Longer term, it seems ridiculous that I even have to make this request, but I'd like Limetech to consider putting in a permissions system that lets admins grant individual plugins access to WebGUI banner notifications. Understand that I'm deeply grateful for all the extra functionality the community developers provide, but I shouldn't have to choose between functionality and being frequently nagged to update something. Unraid was just fine without the banner notifications.
  8. There is an error in the docker setup screen. The folder paths must end in a /, or instead of folder/path/to/img you get folder/path/toimg. You can correct this manually in the XML (sketch below), or delete the appdata files, edit the container settings to add the trailing slash, and rerun it. That said, I have not been able to get it to boot even after partitioning and running a Big Sur install. Monterey just hangs on the Apple logo altogether.
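     If you go the XML route, the edit is just appending the missing slash to the path value. A hypothetical example; the path and template filename are placeholders:

        # Append the missing trailing slash to the offending path in the template XML,
        # anchored on the closing bracket so an already-correct path isn't touched.
        sed -i 's|appdata/my-container<|appdata/my-container/<|' /boot/config/plugins/dockerMan/templates-user/my-template.xml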
  9. Why doesn't the mover script error out on its own and leave a message in the GUI when there is no space left on a given device? It's currently stuck, and there doesn't seem to be a way out of it short of restarting my server. Somewhere in the GUI where we could watch the status of the mover would also be helpful. I'm not afraid of the CLI at all, but this is an important background function of Unraid, so viewing its status (beyond 'mover running') should probably be accessible from somewhere in the GUI. Related problem: a BTRFS cache pool drive replacement I did a while ago also changed the RAID mode to 1. I changed it back to 10 and did a balance (sketch below). The filesystem indicates that there are ~2.7TB used and ~9TB free, but anything that tries to write to the cache pool gets a 'device full' error. 2.7TB is suspiciously close to the capacity of a single 3TB drive from that pool of 8x 3TB drives (i.e. RAID1 behavior). In the GUI the BTRFS pool reads: Data, RAID10: total=2.77TiB, used=2.74TiB, but df -h reads: /dev/sde1 12T 2.8T 8.2T 26% /mnt/r10_cache. So, something is really wrong here. I don't really have the inclination to deep dive into why this happened; I'll just say it's growing pains for a new feature. My intention is to 1) never buy drives off of Facebook again, and 2) recreate the cache pool as soon as I can move the data off. It would be cool if there was just a 'dump cache to array' tool instead of dicking with the cache settings and having it completely (and silently) malfunction for 2 days because the data moved to the cache instead of the array. Edit: I apologize for any snark. It's late, I'm tired, and I have a cold.
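     For reference, the convert-and-balance step I mentioned was along these lines (my pool is mounted at /mnt/r10_cache; adjust for yours):

        # Convert data and metadata profiles back to RAID10, then verify the result.
        btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/r10_cache
        btrfs filesystem df /mnt/r10_cache   # should now report Data, RAID10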
  10. For the same reason that pop-up blockers are now standard on web browsers. With the greatest respect for all of your contributions to Unraid (which are many), the notification is just annoying to look at. There's a glowing red/black icon on the left. How much of my attention does it need, exactly? I honestly hope you don't make this the default. I'd be searching for something to revert the GUI, or I'd just avoid updating until I absolutely had to for functionality. I strongly disagree. CA is a great interface for finding new applications/plugins. It's not even close to dense enough for management, though. It's way too oversimplified to ever replace the Docker tab. It's like using File Explorer with extra-large icons: great for its specific use, which is browsing photos, but terribly inefficient for generally interacting with the filesystem. Docker has a lot more management to it than just functionality extension. Plugins don't need start/running status, port assignments, folder mounts, log file popups, etc. After CA's rich catalog and interface, the Docker management tab is without a doubt the 2nd most useful feature Unraid has. It's far, far superior to the Synology or TrueNAS implementations. I can say this with confidence because I run and interact with all 3 platforms daily. CA is a huge value-add to Unraid. The Docker tab is already the best GUI for managing single-app stacks I've used, and that includes front ends like Portainer. Squid, CA is (excepting the update nags) a great tool. It doesn't need to do everything. Speaking for me personally, the system we have for finding and managing apps is as close to perfect as I can think of.
  11. It’s kind of annoying. Please consider another solution?
  12. It fails on every docker update with 'command failed: container already exists'. The image updates, but the container is not relaunched correctly from the updated image.
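     In case it helps anyone debugging this, the shape of a manual workaround would be something like the following; the container name is a placeholder:

        # Remove the stale container left behind by the failed update, then
        # re-apply the template from the Docker tab to recreate it from the new image.
        docker rm -f my-container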
  13. I keep getting this banner at the top of my server on page refreshes/loads. I get it, it's enabled. How do I get the banner to stop showing up?
  14. The link you gave is for the Limetech support form, but I can find my way. Thanks for explaining.
  15. Anyone know what this notification is for? I honestly can't find any 'action center'; Google and forum searches come up empty. It comes up every time a page is refreshed. Kind of annoying.
  16. Just to follow up for all who are interested: I did file a support ticket. I was thinking about this today because my replacement drive just came in. This was the response to the ticket: I'd like to see some movement on it in the next update, but as always with LT, it's a waiting game. Hopefully the third of the 3 drives lasts until they fix this. Wiping out and recreating the cache pool twice every time I need to warranty a drive is pretty annoying. Putting this drive back in will be the 4th time I've needed to create the cache pool.
  17. Emailing this address has resulted in continuous mail delivery failures for several days now. This isn’t encouraging. Maybe the domain is incorrect? [email protected]?
  18. I don’t understand. If this is a known issue, there is no GUI recovery method available, and there’s not even minimal documentation, why did LT put this feature in stable? I just rechecked the announcement for multiple cache pools and there is zero warning that this feature is missing some really fundamental functionality. The only thing they give you is a YouTube video from a random YouTuber on how to set it up. I’d expect this kind of incomplete release from FOSS projects, not a mature paid product like this. I think LT should at least warn users that this is an incomplete feature for advanced users and that any problems with it will result in a trip to the forum — so basically a beta feature that doesn’t belong in a stable release.
  19. Bumping this, since I am experiencing another cache drive failure and this really important basic feature isn't working. I did pre-clear these drives, but they are a set of 3 used but in-warranty enterprise drives I bought from a guy on Facebook. He shipped them to me in a bubble-wrapped envelope. SMFH. Anyway, I'd really like to not have to nuke my cache pool or dip into the command line whenever I need to do a drive replacement. I took a look through all the 6.10-rc2 bug reports, and a lot of them look much more serious than this, but this is also a really basic feature. Seems like the secondary cache pool should have stayed a beta feature until recovery was correctly worked out, IMHO.
  20. In the GUI, I noticed I have a cache disk that wasn't spinning down. I checked the disk log, and it's not pretty:

        BTRFS warning (device sde1): lost page write due to IO error on /dev/sdk1 (-5)
        BTRFS error (device sde1): bdev /dev/sdk1 errs: wr 11963359, rd 8227683, flush 741, corrupt 0, gen 0

     Pages of this error. It literally says 'error' on it. The disk won't read, write, or respond to any commands. Feels like this should throw a red ball, or at a bare minimum increment the error counter. As it is, there seems to be no failed-disk detection.
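     For anyone comparing notes, btrfs keeps its own per-device error counters, which is presumably what the GUI ought to be surfacing; the mountpoint here is my pool's:

        # Print write/read/flush/corruption/generation error counts per device.
        btrfs device stats /mnt/r10_cache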
  21. It'd be great if the correct process for replacing a drive in a BTRFS secondary cache pool were documented, or, you know, if any documentation for replacing a cache drive existed: https://wiki.unraid.net/index.php/Replace_A_Cache_Drive I replaced one drive in an 8-drive RAID10 cache pool and, of course, instead of formatting just the one drive and rebalancing the RAID10 array like any normal single-drive replacement should do, it re-formatted the entire array and deleted the shares that existed only on that pool. Yes, there was a warning box listing all drives in the pool, and yes, I did tell it to proceed, suspecting it would re-format all those drives. That being said, since Google searches returned more useless results from Reddit than from this forum, and the actual documentation for this is non-existent, there wasn't much choice. I was smart and backed up the data from the cache pool elsewhere just in case something like this happened, but it kinda feels like if you were going to release the multiple cache pool 'feature', it'd either work intuitively or at least have whatever command-line-driven voodoo is necessary be documented. I've had to write documentation for software processes before (ISO-9001 requirements) and no, it's no fun. It is part of the job though. Please make the time to document how to do this properly without losing all the data on the array. Also, maybe consider extending the pool start script to handle this without users having to go to the command line and do it manually.
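     For the record, the generic upstream btrfs procedure for a single-member replacement is a one-liner; whether it's safe to run underneath Unraid's pool management is exactly the kind of thing the documentation should spell out. Device names here are placeholders:

        # Rebuild onto the new device in place of the old member, then watch progress.
        btrfs replace start /dev/sdX1 /dev/sdY1 /mnt/r10_cache
        btrfs replace status /mnt/r10_cache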
  22. Just wanted to update that I added 2 drives (8 total) and it is now displaying correctly.
  23. I created a BTRFS RAID10 cache pool and I'm having a display issue. At first, I thought it wasn't initializing the pool properly, as I created it with 4 disks and then added 2 more, but I verified in MC that the array is in fact ~9TB. In the UI, however, you can see that the bar is short by a third, and it reads only 6TB. I have removed and re-added 5 of the drives a few times, which was necessary just to get it to convert to RAID10 to begin with, but that has not fixed the issue. I believe it's somehow related to having initialized it with only the 4 drives the first time. It shows the same even after clearing the browser cache, and when checking from a completely different machine altogether.
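     For reference, the capacity I'm comparing against comes from btrfs itself rather than the UI; the mountpoint is a placeholder for wherever the pool is mounted:

        # Show raw capacity per device and allocation per profile; this is where
        # the ~9TB figure comes from, versus the 6TB the UI bar shows.
        btrfs filesystem usage /mnt/cache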
  24. For the second week in a row, the vast majority of my containers that are set to update late Sunday night using this plugin are just missing entirely on Monday morning. What steps can I take to track down why this is happening?