Everything posted by NLS

  1. You mean to start? What is your RAM and CPU?
  2. Seemingly the worst of the issues (containers not running) is my fault. I messed up the appdata permissions...
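(For anyone hitting the same thing: Unraid's Tools → New Permissions resets owner and modes across a share; below is a minimal shell sketch of the same idea, assuming the common group-writable convention. The path is illustrative, not a real appdata location, and on a real box you would run this as root against your actual appdata.)

```shell
# Illustrative newperms-style reset (assumption: your containers expect
# group-writable files under appdata; check each container's docs first).
APPDATA=/tmp/appdata-demo     # illustrative path, NOT a real appdata location

mkdir -p "$APPDATA/containerA"
touch "$APPDATA/containerA/config.ini"

# Directories 775, files 664 (explicit modes, so the result is deterministic).
find "$APPDATA" -type d -exec chmod 775 {} +
find "$APPDATA" -type f -exec chmod 664 {} +
# On a real Unraid box you would also: chown -R nobody:users "$APPDATA"

stat -c '%a %n' "$APPDATA/containerA" "$APPDATA/containerA/config.ini"
```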
  3. I tried adding a new container and that worked. ALL old ones show "bad parameter"!
  4. So I have set up a few UNRAID servers, with my own running for (way) more than a decade. Never seen this before... The server in question is for a friend; I set it up months ago (maybe a year). It hasn't happened to it before either.

The server went down ungracefully because of a power outage beyond UPS capacity. Normally this is no issue: the system does an extra parity check and all is ok. And indeed it was... except people in that SOHO noticed the main share was not working any more. I quickly noticed I could access it fine using the IP instead of the hostname, so I gave them that as a temporary solution. Then some of them discovered they couldn't WRITE to the share!

Then I went deeper to see in UNRAID what the issue could be. The server seemed to run ok: latest version, everything mostly updated, containers (very few) and a VM (a Win11 that needs to run a couple of Win-only things) running fine. Since this is a SOHO, there is no real granularity in the access of the main share; it is set to private, but read/write for both the "advanced" user (the owner) and "user" (the rest of them). This is how it has always been.

First thing I noticed, which is WEIRD, is that the server had changed back to the default name "Tower"! First time I've ever seen this! It explained why they didn't see the server: it was not named as expected, so the mapping didn't work. Then I noticed that even the VM couldn't write to the share (although it was able to read it). I was forced to switch the share to "public" instead of secure!

After I stopped the array and changed the name back, I rebooted the server (gracefully this time) and thought everything was ok. But after the reboot NO container starts (although docker is running), all failing with "bad parameter"! That last one is the worst. I am not sure what to do!
  5. Please implement an "implicit no" for auto updates (i.e. default auto update to yes, and set one or a few containers specifically to no). Right now you only have "yes" (with no way to exclude specific containers) or "no" (where you can manually set a few to yes). It should be "default yes" or "default no", and in both cases allow changing some to the other option. Thanks.
  6. That was it! I used 5701 when I should have used 5901! Thanks!
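(For anyone else confused by those numbers: by convention VNC display :N listens on TCP port 5900 + N, so display :1 is 5901 — which is where my 5701 guess went wrong. A trivial check:)

```shell
# VNC convention: TCP port = 5900 + display number.
display=1
port=$((5900 + display))
echo "display :$display -> port $port"   # display :1 -> port 5901
```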
  7. So, for some unknown reason, I cannot connect to KVM's own VNC when coming in over VPN (!), while locally it works. The VPN works fine in all other aspects. I am not actually looking into resolving this yet. Instead, I am trying to see how I can use my working guacamole to VNC-connect to my VMs. Why? Because it works even if a VM's networking is down, which is the issue with one of my VMs. I know I can see the VM and fix it when I go home and reach the server from the LAN, but I would prefer to be able to do it remotely too. I can see the connection string the "VNC connect" menu item creates, but I couldn't replicate it in a guacamole connection... (I also tried the repeater and proxy fields.) Can it work?
  8. Yes I mean folder, sorry. I have hundreds (maybe 250?) but I haven't noticed performance differences. I do think ZFS might be overkill for this though. Maybe I should revert to btrfs, although not very easy with all the VMs and containers.
  9. I do use a docker image. Is this an issue? Or just cosmetic?
  10. So, as I said in an older thread, I converted my cache pool (single M.2) to ZFS. Everything works fine (had some minor issues but ok). Cache usage shows normal, about what it was before going ZFS. But I did a "zfs list" and then verified with the ZFS Master plugin: in my pool I see my normal few folders (which are not datasets AFAIK), BUT I also see a few hundred (!!!) "legacy" datasets with names like "03747e08c1b7e6ab35ac74dc6c1538c83b1916185c5e4311e5899ec3d6911397"! They also seem to be... snapshots? (I never made snapshots myself.) What do I do, how do I clean those up? They don't show in a normal folder listing.
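(Those 64-hex-character names look like Docker image layers: when the docker directory lives on ZFS, Docker's zfs storage driver creates one legacy-mountpoint dataset, plus snapshots, per layer. A sketch of how to pick them out of a zfs listing — the listing below is simulated sample data so the sketch is self-contained, not my real pool:)

```shell
# On a real system you would start from: zfs list -H -o name -t filesystem
# Simulated sample output:
sample='cache
cache/appdata
cache/domains
cache/03747e08c1b7e6ab35ac74dc6c1538c83b1916185c5e4311e5899ec3d6911397'

# Docker zfs-driver layer datasets are exactly 64 hex chars after the pool name.
echo "$sample" | grep -E '/[0-9a-f]{64}$'
```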
  11. OK. I will wait for the transfer to finish, reboot and hope the system will work more normally afterwards. (note that this system worked fine for years)
  12. I did, after the issue. The problem is that now I DO have space in the cache (more than 100GB) and still the log is filling with those entries, ZFS reports 100%, containers run funny (depending on whether they need to write files) and the cache is probably read-only for some reason (although I am under the minimum free space threshold). Again: cache is at 89% (and getting lower) but ZFS (which is used only on the cache, no other drive) reports 100% (the bar under "memory" in the dashboard).
  13. So with 6.12 (now on 6.12.1), I changed my cache (single) disk to ZFS. It is a 1TB M.2, on a 32GB RAM server. Also, my docker is a folder, not an image.

Anyway, I had some huge file transfers today and my cache got full. I've seen before that this is critical (and UNRAID SHOULD protect itself more, actually), but this time it seems it is MORE critical. When the cache became full, my VM was paused and even dockers were not fully ok. I managed to get local access to my server, manually moved a few GB of data off and rebooted. The machine started working more or less ok (the VM was fine), but some dockers seemed off. SWAG keeps stopping, and doublecommander (which I planned to use to move more files off the cache) started with the VNC desktop giving me errors. Removed, reinstalled... that docker never managed to start again.

Then I noticed that although the cache had like 20GB free, there was an entry in the log that kept repeating, every single second: "shfs: share cache full"!!!

I used UNRAID's own GUI to start moving a few more GB off the server. I am now at around 70GB free (around 91% cache utilization), which should be plenty, and the log entry keeps going! Cache settings are now: 5% minimum free space (I changed that from a much smaller number), warning 70%, critical 90%. THEN I NOTICED in the dashboard... ZFS shows 100% (it even showed 101% briefly!). (Also, in the SWAG log I saw there was no space to create some files, which is why it stops.) HTF does this go down??? I am now thinking of reverting to btrfs, as ZFS seems to have overcomplicated things for no reason. Question is, what can I do now, how does this go down? (The plan right now is to reboot the server again when the move finishes, in about 2 more hours.)

Another thing I noticed is that CPU load is around 35% while the move (and the log entries) keep going. For a Ryzen 5600G (6 cores/12 threads) that is a bit heavy for doing nothing serious... but still better than right after I booted the server (after the initial small move to free a bit of cache), when it was a sine graph going up and down. At least now it is stable.
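(For context, the thresholds mentioned above work out like this on a nominal 1TB pool — illustrative arithmetic only, treating 1TB as 1000GB:)

```shell
size_gb=1000                        # nominal 1TB cache, as 1000GB
min_free_gb=$((size_gb * 5 / 100))  # 5% minimum free space
warn_gb=$((size_gb * 70 / 100))     # 70% warning threshold
crit_gb=$((size_gb * 90 / 100))     # 90% critical threshold
echo "min free ${min_free_gb}GB, warn at ${warn_gb}GB used, critical at ${crit_gb}GB used"
# -> min free 50GB, warn at 700GB used, critical at 900GB used
```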
  14. Don't know if it was reported above, but with 6.12.x the dashboard does not refresh unassigned devices that are no longer present (even after a manual page refresh), unless we visit the "Main" tab, which triggers the update.
  15. mover tuning? will try... that said, my server works fine WITH that
  16. I have two rather clean systems set up for friends, both on the latest 6.12.1, and I have noticed on both that mover has never moved anything! Shares are properly set to "primary cache, secondary array, move cache to array". Mover is scheduled to run every night. Both also have mover tuning installed, but with default settings, so mover should be running as scheduled. On my own server (which is way less clean, with numerous VMs, containers and plugins) mover works fine. Any ideas? Can I provide anything else to help you help me?
  17. So I have set up a server for a client (I have used UNRAID for years; first time I've seen this), just as a file server initially. No weird dockers or plugins, just "standard" stuff.

Yesterday we noticed that some files were missing and in their place was a txt file named like this: <original filename with extension>_Error.txt. Inside the txt there was simply text saying "file size exceeds the allowed limit". Not only have I not set any quotas (I am not even sure there is such a setting in UNRAID), we are also not talking about big files anyway... some PDFs, Excel sheets, docs... In the end it was around 300 files in around 100K, so at least the effect was "minor" and we had (sometimes older) versions of the files... but SERIOUS of course, and I need to know WHAT caused this. Any ideas? Does this ring any bells?
  18. This is why I changed the priority. But before finding the solution myself, it was urgent, at least to me, as NethServer (which is a full small-business server, so potentially important to some of us) was not running. No mail etc.
  19. So I have an obsolete VM whose vdisks I have already deleted manually. Clicking to remove the VM from the GUI results in an endless "wait" icon and it is never actually removed. Any way to force the removal? What do I delete, and where?
  20. Changed Status to Open. Changed Priority to Other. Pity nobody actually reacted while the priority was urgent.
  21. NEWER UPDATE, which seems to point to UNRAID as the problem: attempting a fresh install does NOT find any usable disk and does NOT find any usable network! It seems that with 6.12, CentOS 7 does not recognise ANY virtio device! So I changed the vdisk to SATA and the network to e1000 and it works! Definitely an issue!
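(For anyone needing the same workaround: the change is made in the VM's XML view. A sketch of the relevant fragments — the disk path, target dev and bridge name below are illustrative, not taken from my actual config:)

```xml
<!-- Disk: virtio -> SATA (bus and target dev changed) -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/user/domains/NethServer/vdisk1.qcow2'/> <!-- illustrative path -->
  <target dev='sda' bus='sata'/>
</disk>

<!-- Network: virtio -> e1000 -->
<interface type='bridge'>
  <source bridge='br0'/> <!-- illustrative bridge name -->
  <model type='e1000'/>
</interface>
```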
  22. The issue seems to be UNRAID. Aside from the attempts I have made to recover OS booting (I can provide the details), I have now tried a fresh install, from the same installer I used when I originally set up NethServer inside UNRAID years ago. The installer crashes, seemingly because of network issues, as it reports "couldn't open file /tmp-network-include".

Back on the existing installation, note that the qcow2 image itself is fine: I managed to connect it to UNRAID (the host), see the partitions inside (including the LVM partitions root and swap), and mount the root partition and see the files inside. So the disk image is ok. The problem started right after the 6.12 reboot. Something changed in kvm/libvirt/qemu/uefi that prevents NS7 from running.
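(The host-side inspection I mention can be done with qemu-nbd; a hedged sketch of the procedure — the image path, nbd device and VG/LV names are illustrative, and this needs root on the Unraid host:)

```shell
modprobe nbd max_part=8                    # expose qcow2 partitions as block devices
qemu-nbd --connect=/dev/nbd0 /mnt/user/domains/NethServer/vdisk1.qcow2  # illustrative path

vgscan                                     # find the LVM volumes inside the image
vgchange -ay                               # activate them

mkdir -p /mnt/ns7root
mount -o ro /dev/VolGroup/lv_root /mnt/ns7root   # VG/LV names illustrative; mount read-only

# ...inspect files, then tear everything down:
umount /mnt/ns7root
vgchange -an
qemu-nbd --disconnect /dev/nbd0
```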
  23. The VM fails to boot exactly after the graceful reboot to update to 6.12! Can it be related to the KVM version change? A Windows-based VM works fine. HELP!?
  24. I am still using your version. Is it possible to migrate everything to the official build??? EDIT: After resolving the issue with the official build (you need to change the path of the container's mount points), I had to manually copy the contents of the old install to the new folder. Better NOT to point it at the old one (for safety, in case you need to revert); restoring from a Technitium backup failed (because the paths were different). Works fine now.
  25. Messaged support. Let's see what happens... Thanks.