NLS

Everything posted by NLS

  1. So, for some unknown reason, I cannot connect to KVM's own VNC when trying from the VPN (!), while locally it works. The VPN works fine in all other respects. I am not actually looking into resolving that yet. Instead, I am trying to see how I can use my working Guacamole to VNC-connect to my VMs. Why? Because it works even if a VM's networking is down, which is the issue with one of my VMs. I know I can see the VM and fix it when I go home and reach the server from the LAN, but I would prefer to be able to do it remotely too. I can see the connection string that the "VNC connect" menu item creates, but I couldn't replicate it in a Guacamole connection... (I also tried the repeater and proxy fields.) Can it work?
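     For reference, a rough sketch of how the VNC endpoint can be read off the host (the VM name "MyVM" is a placeholder):
        virsh vncdisplay "MyVM"    # e.g. 127.0.0.1:0  ->  VNC port 5900 + 0
        virsh domdisplay "MyVM"    # e.g. vnc://127.0.0.1:0
     Note that by default the display listens only on 127.0.0.1, so a Guacamole container that cannot reach the host loopback may need the VM's graphics device set to listen on a reachable address.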
  2. Yes, I mean a folder, sorry. I have hundreds (maybe 250?), but I haven't noticed any performance difference. I do think ZFS might be overkill for this, though. Maybe I should revert to btrfs, although that is not very easy with all the VMs and containers.
  3. I do use a docker image. Is this an issue? Or just cosmetic?
  4. So as I said in an older thread, I converted my cache pool (a single M.2) to ZFS. Everything works fine (I had some minor issues, but OK). Cache usage shows as normal, about what it was before going ZFS. But I did a "zfs list" and then verified with the ZFS Master plugin, and in my pool I see my normal few folders (which are not datasets AFAIK), BUT also a few hundred (!!!) "legacy" datasets with names like "03747e08c1b7e6ab35ac74dc6c1538c83b1916185c5e4311e5899ec3d6911397"! They also seem to be... snapshots? (I never made snapshots myself.) What do I do, how do I clean those up? They don't show in a normal folder listing.
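     A hedged guess, plus a quick way to check (my pool is named "cache"): the hash-named "legacy" datasets look like the per-layer datasets Docker's ZFS storage driver creates when the docker data sits on a ZFS pool, rather than snapshots anyone took:
        zfs list -r -t filesystem,snapshot -o name,used,mountpoint cache | head
        docker info | grep -i "storage driver"    # "zfs" here would confirm the guess
     If that is what they are, they should not be destroyed by hand; they disappear as the corresponding images/containers are removed, or if the docker data is moved off ZFS.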
  5. OK. I will wait for the transfer to finish, reboot and hope the system will work more normally afterwards. (note that this system worked fine for years)
  6. I did, after the issue. The problem is that now I DO have space in the cache (more than 100GB) and still the log keeps filling with those entries, ZFS reports 100%, containers run funny (depending on whether they need to write files), and the cache is probably read-only for some reason (although I am under the minimum free space threshold). Again: the cache is at 89% (and getting lower) but ZFS (which is used only for the cache, no other drive) reports 100% (the bar under "memory" in the dashboard).
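     For reference, a rough sketch of how the two views can be compared from the console (my pool is named "cache"):
        df -h /mnt/cache    # what the share layer sees
        zpool list cache    # size / allocated / free / capacity as ZFS sees the pool
        zfs list cache      # dataset-level USED vs. AVAIL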
  7. So with 6.12 (now on 6.12.1), I changed my cache (single) disk to ZFS. It is a 1TB M.2, on a 32GB RAM server. Also, my docker is a folder, not an image. Anyway, I had some huge file transfers today and my cache got full. I've seen before that this is critical (and UNRAID SHOULD protect itself more, actually), but this time it seems it is MORE critical. When the cache became full, my VM got paused and even dockers were not fully OK. I managed to get local access to my server, manually moved a few GB of data and rebooted the server. The machine started working more or less OK (the VM was OK), but some dockers seemed not to work right. SWAG keeps stopping, and doublecommander (which I planned to use to move more files off the cache) started with the VNC desktop giving me errors. Removed it, reinstalled it... that docker never managed to start again. Then I noticed that, although the cache had like 20GB free, there was an entry that kept being added to the log, "shfs: share cache full"!!! ...every single second! I used UNRAID's own GUI to start moving a few more GB off the server. I am now around 70GB free (and around 91% cache utilization), which should be plenty, and the log entry keeps coming! Cache settings are now: 5% minimum free space (I changed that from a much smaller number), warning 70%, critical 90%. THEN I NOTICED in the dashboard... ZFS shows 100% (it even showed 101% briefly!)... (Also, in the SWAG log I saw that there was no space to create some files, which is why it stops.) HTF does this go down??? I am now thinking of reverting to btrfs, as ZFS seems to have overcomplicated things for no reason. The question is, what can I do now, and how does this number go down? (The plan right now is, when the move finishes in about 2 more hours, to reboot the server again.) Another thing I noticed is that CPU load is around 35% while the move (and the log entries) keep going, which for a Ryzen 5600G (6 cores/12 threads) is a bit heavy for doing nothing serious... but still better than right after I booted the server (after the initial small move to free a bit of cache), when it was a sine wave going up and down. At least now it is stable.
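     In case someone hits the same thing, a rough sketch of how I am watching whether space actually comes back while the move runs (my pool is named "cache"):
        tail -f /var/log/syslog | grep --line-buffered "share cache full"
        while true; do zfs list cache; sleep 30; done    # does AVAIL actually climb?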
  8. Don't know if it was reported above, but with 6.12.x the dashboard does not refresh (even after a manual page refresh) unassigned devices that are no longer present, unless we visit the "Main" tab, which triggers the update.
  9. mover tuning? will try... that said, my server works fine WITH that
  10. I have two rather clean systems set up for friends, on the latest 6.12.1, and I have noticed on both that mover has never moved anything! Shares are properly set ("primary: cache, secondary: array, move cache to array"). Mover is scheduled to run every night. Both also have Mover Tuning installed, but with default settings, so mover should be running as scheduled. On my own server (which is way less clean, with numerous VMs, containers and plugins) mover works fine. Any ideas? Can I provide anything else to help you help me?
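     From memory (so verify the path on your build), mover can be kicked off by hand from the console and its activity pulled out of syslog, which might at least show whether the schedule or mover itself is the problem:
        /usr/local/sbin/mover                       # path from memory, verify before relying on it
        grep -i mover /var/log/syslog | tail -n 20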
  11. So I have set up a server for a client (I have used UNRAID for years and this is the first time I have seen this), just as a file server initially. No weird dockers or plugins, just "standard" stuff. Yesterday we noticed that some files were missing and in their place was a txt file named like this: <original filename with extension>_Error.txt. Inside the txt there was simply text saying "file size exceeds the allowed limit". Not only have I not set any quotas (I am not even sure there is such a setting in UNRAID), but we are also not talking about big files anyway... some PDFs, Excel files, docs... In the end it was around 300 files in around 100K, so at least the effect was "minor" and we had, sometimes older, versions of the files... but it is SERIOUS of course, and I need to know WHAT caused this. Any ideas? Does this ring any bells?
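     If it helps to gauge the damage, a rough sketch of what can be run from the console (the share name is a placeholder):
        find /mnt/user/YOUR_SHARE -name '*_Error.txt' | wc -l                           # how many files were hit
        find /mnt/user/YOUR_SHARE -name '*_Error.txt' -printf '%T+ %p\n' | sort | head  # oldest first, to see when it started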
  12. This is why I changed the priority. But before finding the solution myself, it was urgent at least to me, as NethServer (which is a full small business server, so potentially important to some of us) was not running. No mail etc.
  13. So I have an obsolete VM whose vdisks I have already deleted manually. Clicking to remove the VM from the GUI results in an endless "wait" icon and it is never actually removed. Any way to force the removal? What do I delete, and where?
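     For reference, a hedged sketch of the console route I am considering (the VM name "StuckVM" is a placeholder; --nvram only matters for OVMF-based VMs):
        virsh list --all                   # get the exact name of the stuck VM
        virsh destroy "StuckVM"            # only if it still shows as running
        virsh undefine --nvram "StuckVM"   # drop the definition (and its NVRAM file)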
  14. Changed Status to Open. Changed Priority to Other. Pity nobody actually reacted while the priority was urgent.
  15. NEWER UPDATE, which seems to point to UNRAID as the problem: attempting a fresh install does NOT find any usable disk and does NOT find any usable network! It seems that with 6.12, CentOS 7 does not recognise ANY virtio device! So I changed the vdisk to SATA and the network to e1000, and it works! Definitely an issue!
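     For reference, the workaround boils down to two lines in the VM's XML; a rough sketch (the VM name is a placeholder, and the same change can be made from the VM template form):
        virsh edit "NethServer"                        # placeholder VM name
        #   disk:    <target dev='sda' bus='sata'/>    (was dev='vda' bus='virtio')
        #   network: <model type='e1000'/>             (was a virtio model)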
  16. The issue seems to be UNRAID. Aside from the attempts I have made to recover OS booting (I can provide the details), I have now tried a fresh install (with the same installer I used when I originally set up NethServer inside UNRAID years ago). The installer crashes, seemingly because of network issues, as it reports "couldn't open file /tmp-network-include". Back on the existing installation, note that the qcow2 image itself is fine: I managed to attach it to UNRAID (the host), see the partitions inside (including the LVM root and swap partitions), mount the root partition and see the files inside. So the disk image is OK. The problem started right after the 6.12 reboot. Something changed in KVM/libvirt/QEMU/UEFI that prevents NS7 from running.
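     In case it is useful, roughly how the image can be inspected from the host (paths, device and VG/LV names are placeholders, and this assumes the nbd kernel module is available):
        modprobe nbd max_part=8
        qemu-nbd --connect=/dev/nbd0 /mnt/user/domains/NethServer/vdisk1.qcow2
        lsblk /dev/nbd0                                        # partitions, including the LVM PV
        vgchange -ay centos                                    # activate the guest's volume group (placeholder name)
        mkdir -p /mnt/tmp && mount /dev/centos/root /mnt/tmp   # placeholder VG/LV names
        # ...inspect the files, then undo:
        umount /mnt/tmp; vgchange -an centos; qemu-nbd --disconnect /dev/nbd0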
  17. The VM fails to boot right after the graceful reboot to update to 6.12! Could it be related to the KVM version change? A Windows-based VM works fine. HELP!?
  18. I am still using your version. Is it possible to migrate everything to the official build??? EDIT: After resolving the issue with the official build (you need to change the path of the container's mount points), I had to copy the contents of the old install to the new folder manually. Better NOT to point it at the old folder (keep that one untouched for safety, in case you need to revert), and restoring from a Technitium backup failed (because the paths were different). It works fine now.
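     Roughly what the manual copy looked like for me; the container name and appdata paths are examples only, and the container-side mount point should be checked against the official template:
        docker stop technitium                                                   # example container name
        cp -a /mnt/user/appdata/technitium-old/. /mnt/user/appdata/technitium/   # example paths
        docker start technitium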
  19. Messaged support. Let's see what happens... Thanks.
  20. No, that is what I am saying: it was already registered with another account. I hope there is a way to "move" things around. There is my case, where it is an admin decision on how to manage things, and there are other, more "serious" cases where a server is sold to another entity as-is (legal issues). There has to be a way. Should I contact someone?
  21. So it will not be an issue that the server is already registered in another account?
  22. (EDIT: Since nobody has replied yet, let me enrich this.) (EDIT #2: Since nobody has replied still, I will update with my own discoveries.) (EDIT #3: This project looks abandoned. I never got replies to #2, #3 and #4, and I see other people's questions remain unanswered above. A pity, because the project looks all right and for me it seems to work - it still downloads things, so I will use it at least for this initial sync, but I am not sure I will keep the container, because an abandoned project is most of the time as good as dead.) 1) If I close the terminal WHILE it syncs, does it keep syncing? EDIT #2: Well, I actually noticed that after a few hours the terminal is at the prompt and NOT reporting syncing. I was actually scared it had stopped halfway (actually more like 25% of the way), but then I refreshed the folder size calculation in UNRAID a few times and it kept increasing. Plus the container's log still scrolls files being synced, so it DOES work! Of course I am not sure what happened that made the terminal return to the prompt (and I couldn't scroll upwards to see the window history, like it got "reset"). 2) Does it implement the "monitor" parameter (installing a daemon to actively sync)? 3) Starting according to the instructions (with the copy/paste of the login URI etc.), will it survive server reboots? Is it working from now on, or do I need to redo the whole thing after a reboot? 4) If I reboot the whole server before the initial sync finishes (and assuming it actually runs by itself after a reboot), will it resume OK afterwards?
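     For reference, how I reassured myself it was still syncing after the terminal dropped back to the prompt (the container name and share path are placeholders):
        docker logs -f --tail 50 onedrive    # placeholder container name; synced files should keep scrolling
        du -sh /mnt/user/YOUR_SYNC_SHARE     # run a few minutes apart; the size should keep growing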
  23. I've set up a server for a relative, which I will fully manage. At the time of the setup I thought that using an independent registration account would be better, but now (that My Servers is evolving and getting better) I want to add it to my "own" servers. How can I do that?
  24. Since 6.12 is coming, is there some "brief" on the current state of this and any guide on how to PROPERLY set it up?