About NLS

  • Birthday 03/30/1974


  1. I can answer my own bonus question: if I edit the template and change the default app data path to the correct one (i.e. /mnt/user/config pointing to /mnt/user/appdata/technitium-dnsserver/config/), I can then add an extra path (I named it "app data 2") and point /mnt/user to /mnt/user/appdata/technitium-dnsserver/. This solves the issue and the container can now auto-update! (If I don't add the first path, the container does not assume that /mnt/user/config falls under /mnt/user as it was supposed to; instead it does NOT make it a bind path at all and points to the wrong location.)

     MY SOLUTION DOES NOT FIX THE BUG THOUGH! This is why I am not closing this report. I corrected my setup, so I no longer hit the bug, but the bug is still there and can probably hit others. The bug is that if the container has an issue after an update, something removes it from the container list! I suspect the culprit is somewhere in the update script, which removes the "old" container before verifying that the "new" one can actually be deployed. If the deployment fails, we are left with a removed container!
  2. I use Technitium DNS Server. This used to have a custom UNRAID container profile, but not any more, so now I install the generic container (found in Apps under the name "dns-server"). The default configuration does not work as-is: I need to edit it with Portainer to change a couple of paths, and only then does it start. These are the changes I make, both as bind mounts:

     /mnt/user/appdata/technitium-dnsserver/config -> /etc/dns/config
     /mnt/user/appdata/technitium-dnsserver -> /etc/dns

     Keeping the container out of auto-update, this used to work fine: I updated manually and could then re-do the changes; otherwise my whole LAN was left without DNS. BUT recently (and this is why I post here) the container seems to VANISH! I suspect it happens when UNRAID checks automatically for updates, even though I have set it not to auto-update. I mean, I have it running fine, then after a few hours I discover the container is not there any more! I have to go to "Previous Apps", re-add it (it will not start like that), and re-edit it (then it starts). This has happened two or three times in the last few days! It is really serious; what can possibly remove the container from Docker!? Previously, even if it tried to auto-update, at least it left me with a non-working but EXISTING container. This disappearance thing is new.

     Bonus question: since the change to get a working container is minor, how can I store it and re-use it instead of redoing it every time? Maybe even be able to re-enable auto-update to the CORRECT configuration?
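The two bind mounts described above can be written out as a single docker run command. A minimal sketch, assuming the Docker Hub image name technitium/dns-server and its usual ports (53 for DNS, 5380 for the web console); the script only prints the command so it can be reviewed before anything is run:

```shell
# Assumed host-side appdata location; adjust to your share layout.
APPDATA=/mnt/user/appdata/technitium-dnsserver

# Build the command as a string and print it for review (nothing is executed).
cmd="docker run -d --name technitium-dnsserver \
  -v $APPDATA/config:/etc/dns/config \
  -v $APPDATA:/etc/dns \
  -p 53:53/udp -p 53:53/tcp -p 5380:5380 \
  technitium/dns-server"
echo "$cmd"
```

On Unraid the same mappings can instead be saved as two Path entries in the container template, which is what survives updates; the docker run form is mainly useful for checking that the two binds match what Portainer ends up applying.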
  3. Then 10 minutes is definitely too much.
  4. All of a sudden, this is how it looks. Ctrl-F5 or a different browser didn't help. Latest UNRAID version, everything updated.
  5. You mean to start? What are your RAM and CPU?
  6. Seemingly the worst of the issues (containers not running) is my fault. I messed up the appdata permissions...
  7. I tried adding a new container and that worked. ALL the old ones show "bad parameter"!
  8. So, I have set up a few UNRAID servers, with my own running for (way) more than a decade. I have never seen this before... The server in question belongs to a friend; I set it up months ago (maybe a year) and it hasn't happened there before either.

     The server went down ungracefully because of a power outage beyond the UPS capacity. Normally this is no issue: the system does an extra parity check and all is OK. And indeed it was... except that people in that SOHO noticed the main share was not working any more. I quickly noticed I could access it fine using the IP instead of the hostname, so I gave them that as a temporary solution. Then SOME discovered they couldn't WRITE to the share!

     So I went deeper to see what the issue could be in UNRAID. The server seemed to run OK: latest version, everything mostly updated, containers (very few) and a VM (a Win11 that needs to run a couple of Windows-only things) running fine. Since this is a SOHO, there is no real granularity in access to the main share: it is set to private, but read/write for both the "advanced" user (the owner) and "user" (the rest of them). This is how it has always been.

     The first thing I noticed, which is WEIRD, is that the server had changed back to the default name "Tower"! First time I've ever seen this! It explained why they didn't see the server: it was not named as expected, so the mapping didn't work. Then I noticed that even the VM couldn't write to the share (although it could read it). I was forced to switch the share to "public" instead of secure! After I stopped the array and changed the name back, I rebooted the server (gracefully this time) and thought everything was OK now. But after the reboot NO container starts (although Docker is running), all failing with "bad parameter"!!!

     This last thing is the worst! I am not sure what to do!
  9. Please implement an "implicit no" for auto-updates, i.e. default auto-update to yes and allow setting one or a few containers specifically to no. Right now you only have "yes" (with no way to set one or some to no) or "no" (where you can manually set a few to yes). It should be "default yes" or "default no", and in both cases allow switching individual containers to the other option. Thanks.
  10. That was it! I used 5701 when I should have used 5901! Thanks!
  11. So, for some unknown reason, I cannot connect to KVM's own VNC when trying over VPN (!), while locally it works. The VPN works fine in all other respects. I am not actually looking into resolving that yet. Instead, I am trying to see how I can use my working Guacamole to make VNC connections to my VMs. Why? Because it works even if a VM's networking is down, which is the issue with one of my VMs. I know I can see the VM and fix it when I get home and reach the server from the LAN, but I would prefer to be able to do it remotely too. I can see the connection string that the "VNC connect" menu item creates, but I couldn't replicate it in a Guacamole connection... (I also tried the repeater and proxy fields.) Can it work?
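For pointing Guacamole at a VM's built-in VNC console, the port can be derived from what libvirt reports. A hedged sketch, assuming the VM is named "Windows11" (substitute your own VM name); the virsh line is commented out here and an example display value stands in for its output:

```shell
# On the Unraid host you would run (VM name is an assumption):
#   display=$(virsh vncdisplay Windows11)
display=":1"                      # example value; virsh prints e.g. ":1"

# Guacamole's VNC connection wants hostname = the Unraid host's LAN IP
# and port = 5900 + the display number.
port=$((5900 + ${display#:}))
echo "Guacamole VNC port: $port"
```

This matches the 5901-vs-5701 mix-up in the post above. Also worth checking is whether the VNC server is bound only to 127.0.0.1; in that case Guacamole has to run on the same host or reach it through a tunnel.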
  12. Yes, I mean folder, sorry. I have hundreds (maybe 250?) but I haven't noticed any performance difference. I do think ZFS might be overkill for this, though. Maybe I should revert to btrfs, although that is not very easy with all the VMs and containers.
  13. I do use a docker image. Is this an issue, or just cosmetic?
  14. So, as I said in an older thread, I converted my cache pool (a single M.2) to ZFS. Everything works fine (I had some minor issues, but OK). Cache usage shows normal, about what it was before going ZFS. But I did a "zfs list" and then verified with the ZFS Master plugin: in my pool I see my normal few folders (which are not datasets AFAIK), BUT I also see a few hundred (!!!) "legacy" datasets with names like "03747e08c1b7e6ab35ac74dc6c1538c83b1916185c5e4311e5899ec3d6911397"! They also seem to be... snapshots? (I never made snapshots myself.) What do I do, how do I clean those up? They don't show in a normal folder listing.
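One way to inventory those entries before touching anything is to list every dataset in the pool with its mountpoint and keep only the ones mounted as "legacy". A sketch assuming the pool is named cache (adjust to yours); hash-named datasets with a legacy mountpoint are the pattern Docker's zfs storage driver uses for image layers, so confirm what owns them before destroying any. Sample zfs output stands in here so the filter itself can be shown end-to-end:

```shell
# On the server this would be fed from real output:
#   zfs list -r -o name,mountpoint cache | awk '$2 == "legacy" { print $1 }'
# Two sample rows stand in for that output below.
legacy=$(printf '%s\n' \
  'cache/appdata  /mnt/cache/appdata' \
  'cache/03747e08c1b7e6ab35ac74dc6c1538c83b1916185c5e4311e5899ec3d6911397  legacy' |
  awk '$2 == "legacy" { print $1 }')
echo "$legacy"
```

Once verified as orphaned, an individual dataset can be removed with zfs destroy -r <name>, but only after checking nothing (e.g. a running Docker service) still references it.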
  15. OK. I will wait for the transfer to finish, reboot, and hope the system will work more normally afterwards. (Note that this system has worked fine for years.)