olympia

Members
  • Posts

    458
  • Joined

  • Last visited

Posts posted by olympia

  1. I'm not so sure about this - I probably would have noticed it earlier.

     

    Recalling the adjustments I had made from the default settings wasn't trivial. I couldn't even tell which settings had been wiped out. I had to go through backups to retrieve my disk settings and recreate all the disk shares along with their permission settings. ...and now this is the third time in a row I'm going through this process; at least by now I remember the settings and don't need to look at the backups.

     

    It would be fantastic if removing a disk didn't entail all of that. The first time, I couldn't even understand why my array wasn't auto-starting after a reboot or why the disks weren't spinning down, until I realized that all these settings had been reset. Not to mention the disk shares disappearing or user shares not working.

  2. Currently, I'm in the process of upgrading to larger disks and subsequently phasing out smaller ones (replacing 2x8TB drives with an 18TB drive).

     

    The procedure I'm following is as follows:

    1) Swapping one of the 8TB drives with the 18TB drive and rebuilding from parity.

    2) Conducting a disk to disk transfer, copying all data from the second 8TB drive to the 18TB drive.

    3) Removing the second 8TB from the array

    4) Applying the "New Config" tool, since there is no other way to remove a disk from the array

    5) Rebuilding parity

     

    I'm currently in the third cycle of this process and I'm surprised to find that I've lost all my disk shares and disk settings during the "New Config" phase, despite selecting the option to preserve assignments. While the disk assignments (positions) remain intact, all disk shares are disabled by Unraid as a result of "New Config". Moreover, all settings configured under settings/disk are lost. Strangely, I even have to reinitialize the user shares, meaning although the user shares exist, I can't write anything to them until I modify a setting within the user share, click apply, revert the change, and then reapply to restore the desired configuration.

     

    Is this behavior intentional? It's been a while since I last used "New Config," but I don't recall it wiping all settings in this manner.

  3. Sure, I don't expect the vdisks to be restored; I have those. So if the plugin restores the content of /etc/libvirt/ back to /etc/libvirt/, the restored files are written directly into the mounted libvirt.img, so there is no need to rebuild it?

  4. 6 minutes ago, KluthR said:

    Libvirt is not being backed up, only some contents of its mounted filesystem (the vm definition). 

    I see that, but how can I restore the VMs from this content? I presume that ultimately we need to recreate libvirt.img from these files.

  5. Is there a guide somewhere on how to restore the libvirt.img file from the VM meta?

    If I restore the VM meta, I only get the content of /etc/libvirt back, but what would be the way to transform this into libvirt.img?
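
    In case it helps anyone searching later: as far as I understand, Unraid's libvirt.img is a loopback-mounted btrfs image holding /etc/libvirt, so a manual rebuild might look like the sketch below. The image size and paths are illustrative, and I have not verified this procedure - stop the VM service (Settings > VM Manager) before trying anything like it.

```shell
# create a fresh, empty image file (1G is an example size)
truncate -s 1G /mnt/user/system/libvirt/libvirt.img

# put a btrfs filesystem on it, as Unraid does for this image
mkfs.btrfs -f /mnt/user/system/libvirt/libvirt.img

# loop-mount it where Unraid expects the libvirt config tree
mount -o loop /mnt/user/system/libvirt/libvirt.img /etc/libvirt

# copy the restored backup of /etc/libvirt into the freshly mounted image
# (source path is a placeholder for wherever the plugin restored the files)
cp -a /path/to/restored/libvirt/. /etc/libvirt/
```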

  6. 38 minutes ago, dlandon said:

    Don't mount them and you'll not see them in /mnt/.  If they are mounted, they have to show in /mnt/.

    I mean I mount them under /mnt/disks and I don't use any of the other mount points: addons, remotes and rootshare.

    If I understand correctly, if I mount a disk to /mnt/disks, then the other mount points will also be created, and there is no way to control that.

     

    Again, I am fine with this, I just wanted to check if there is any option I missed regarding those.

  7. I'm wondering: if the parity drive is the largest, does the parity check need to keep running once all the data disks have completed?

     

    For example, with an 18TB parity drive and the second largest data drive currently at 10TB, it typically takes around 6-7 hours for the parity drive to complete the parity check after the last data disk finishes.

     

    Would it be reasonable to suggest a feature request to halt the parity check when all the data disks have been completed? Or are there valid reasons for keeping the parity disk running alone?

  8. On 3/6/2022 at 12:33 AM, Squid said:

    IIRC, if there is a user defined in ftp settings then the ftp service is enabled on boot up

     

    Unfortunately this is not the case. It doesn't get enabled for me even though I have a user permanently configured there.

    I am a bit disappointed not to have this freedom of choice.

     

    Many thanks for the responses!

  9. Is there any way to get vsftpd enabled when Unraid boots up? 

    Good old go file or something?

     

    Yes, I am aware of the security implications, and yet I would like to use this if I can, rather than using another Docker container.

     

    (By the way, the help text in the FTP server settings incorrectly says "By default the FTP server is enabled" - this is on 6.10-rc2.)
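
    For the record, the go-file approach I have in mind would look something like this, assuming the stock go file at /boot/config/go and Slackware's usual vsftpd location at /usr/sbin/vsftpd (both paths are assumptions on my side, not something I've confirmed on 6.10):

```shell
#!/bin/bash
# /boot/config/go - runs once at boot; the stock line below starts the webGui
/usr/local/sbin/emhttp &

# added: start the FTP server at boot
# (binary path is an assumption - adjust if vsftpd lives elsewhere)
/usr/sbin/vsftpd &
```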

  10. 10 hours ago, Linguafoeda said:

     

    Do you think i should change all my containers that have -e UMASK_SET=000 to -e UMASK=000? I gave that permission to all my containers that create new files (torrent clients, mkvtoolnix, handbrake etc.)

     

    I am not sure, but I don't think so. I think it was a container-specific update to qbittorrent that broke this. Unless your other containers are suffering from the same issue, I wouldn't change it.

  11. Wow, you did some really deep testing there! Thank you very much for your efforts! I really do appreciate it!

     

    It's getting late here; I will do the same, with screenshots, tomorrow, just for the record.

     

    Blue Iris is probably also behaving badly in this setup. It runs as a Windows service and constantly uses the iGPU, so it is not like video in a web browser, which should stop when you disconnect. 

     

    ...and to your question: "open" means I have an active RDP window open - no problem with that at all. 

    When I just close the RDP window (disconnect from the session, but leave the user logged in) - that's when the problem starts.

    If I reconnect to the session I left - again no issue while the window is open.

    If I leave the session by logging out, then it looks like the issue doesn't pop up - I still need to do some testing on this to confirm it 100%.

     

    And also for the record:

    there is no issue with RDP without iGPU passthrough.

     

    Let me ask a few quick questions:

     - is your VM configured as Q35-5.1 with OVMF BIOS?

     - did you do any tweaks to the video drivers in win10 (e.g. removing QXL video device manually from the XML suggested somewhere on the forums) other than what is in the second post? 

     - what unRAID version are you running? I don't recognize the chart for the CPU load. Is it in 6.10-rc1, or is it provided by some plugin?

     - do you have GuC/HuC loading disabled?

  12. 1 hour ago, alturismo said:

    so yes, RDP also causing issues here but ONLY when im over the gvt-g vGPU limits.

     

    ...but contrary to you, for me the issue is not when I have the RDP session open (which I also had at 1080p, so not beyond the limit), but when I disconnect RDP without logging out of Windows. If I log out, then there seems to be no issue.

  13. 2 hours ago, alturismo said:

    when you not connecting to the VM you wont have any load on the gpu, by forcing means, RDP Protocol by default is using the resolution from client side, so lets say you sit on a 4k monitor, now the session will open in 4k instead the max 1080p and you will force a mem page error fault ...

     

    and i dont think this is related here

     

    This is 100% RDP-related for me. Blue Iris in a Win10 VM with a constant 20% load ran nicely in parallel with the parity check for 8:30+ hours; then, after I logged in and disconnected (not logged out) using RDP, I got the memory page errors within 5 minutes and libvirt froze. 

     

    I also noted the following:

     - I have 4 cores allocated to the Win10 VM

     - the average CPU load on all 4 cores was around 15% when it was still running

     - If I logged in with RDP and just disconnected from the RDP session (leaving my user logged in), the CPU load more than doubled on all cores

     - If I logged (signed) out my user in Windows instead of just disconnecting from the RDP session, all 4 cores went back to the normal 15%, without the doubled-load issue

     

    Now the question is: why is this Windows 10 build 1903 bug from 2019 hitting me in 2021? :(

    Let me see if the 2019 workaround of disabling "Use WDDM graphics display driver for Remote Desktop Connections" helps me now...
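
    For anyone wanting to try the same workaround: the policy lives in gpedit.msc under Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Remote Session Environment. It can reportedly also be set from an elevated prompt inside the VM - the value name below is my best recollection, so verify it against gpedit before relying on it:

```shell
# disable the WDDM display driver for RDP sessions (0 = disabled);
# run in an elevated cmd/PowerShell inside the Windows VM;
# value name fEnableWddmDriver is an assumption - confirm via gpedit.msc first
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v fEnableWddmDriver /t REG_DWORD /d 0 /f
```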