olympia

Members · Posts: 458

Everything posted by olympia

  1. I'm not so sure about this - I probably would have noticed it earlier. Recalling the adjustments I had made to the default settings wasn't trivial; I couldn't even tell which settings had been wiped out. I had to go through backups to retrieve my disk settings and recreate all the disk shares along with their permission settings. ...and now this is the third time in a row I'm going through this process, although at least I remember the settings by now and don't need to look at the backups. It would be fantastic if removing a disk didn't entail all of that. The first time, I couldn't even understand why my array wasn't auto-starting after a reboot or why the disks weren't spinning down, until I realized that all these settings had been reset. Not to mention the disk shares disappearing or the user shares not working.
  2. Currently, I'm in the process of upgrading to larger disks and subsequently phasing out the smaller ones (replacing 2x8TB drives with an 18TB drive). The procedure I'm following is: 1) Swapping one of the 8TB drives with the 18TB drive and rebuilding from parity. 2) Doing a disk-to-disk transfer, copying all data from the second 8TB drive to the 18TB drive (roughly as sketched below). 3) Removing the second 8TB drive from the array. 4) Applying the "New Config" tool, since there is no other way to remove a disk from the array. 5) Rebuilding parity. I'm currently in the third cycle of this process and I'm surprised to find that I've lost all my disk shares and disk settings during the "New Config" phase, despite selecting the option to preserve assignments. While the disk assignments (positions) remain intact, all disk shares are disabled by Unraid as a result of "New Config". Moreover, all settings configured under Settings/Disk Settings are lost. Strangely, I even have to reinitialize the user shares: although the user shares still exist, I can't write anything to them until I modify a setting within the user share, click apply, then revert the change and apply again to restore the desired configuration. Is this behavior intentional? It's been a while since I last used "New Config", but I don't recall it wiping all settings in this manner.
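     For reference, the disk-to-disk transfer in step 2 is just a plain copy between the two disk shares from the console. A minimal sketch, assuming the old 8TB drive is disk2 and the new 18TB drive is disk3 (placeholder names):

          # copy everything from the old 8TB disk to the new 18TB disk, preserving
          # permissions, timestamps and extended attributes; safe to re-run if interrupted
          rsync -avPX /mnt/disk2/ /mnt/disk3/

          # rough sanity check that the used space matches before removing the old disk
          du -sh /mnt/disk2 /mnt/disk3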
  3. Sure, I don't expect the vdisks to be restored - I have those. So if the plugin restores the content of /etc/libvirt/ back to /etc/libvirt/, the restored files are written directly into the mounted libvirt.img, and there is no need to take care of rebuilding it?
  4. I see that, but how can I restore the VMs from this content? I presume ultimately we need to recreate libvirt.img from these files.
  5. Is there a guide somewhere on how to restore the libvirt.img file from the VM meta backup? If I do a restore of the VM meta, I only get the content of /etc/libvirt restored, but what would be the way to transform this into libvirt.img?
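     In case it clarifies what I'm after: since libvirt.img is a loopback image that gets mounted at /etc/libvirt, I imagine a manual rebuild would look roughly like the sketch below. The image path, size, filesystem and the /tmp/libvirt-restore staging directory are assumptions on my part, so please treat this only as a rough outline:

          # with the VM service stopped, create and format a fresh image
          truncate -s 1G /mnt/user/system/libvirt/libvirt.img
          mkfs.btrfs /mnt/user/system/libvirt/libvirt.img

          # loop-mount it and copy the restored /etc/libvirt content (VM XMLs, NVRAM, etc.) into it
          mkdir -p /mnt/libvirt-restore-target
          mount -o loop /mnt/user/system/libvirt/libvirt.img /mnt/libvirt-restore-target
          cp -a /tmp/libvirt-restore/. /mnt/libvirt-restore-target/
          umount /mnt/libvirt-restore-target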
  6. Yes, I was talking about those. Thank you for confirming, and apologies if I was not clear enough with my question.
  7. I mean I mount them under /mnt/disks and I don't use any of the other mount points: addons, remotes and rootshare. If I understand correctly, if I mount a disk to /mnt/disks, then the other mount points will also be created and there is no way to control that. Again, I am fine with this, I just wanted to check if there is any option I missed regarding those.
  8. Definitely not a huge issue. I was just wondering if it's normal to have those by default (even if they are not being used) or whether there is an option to disable them. Those rare moments when I am looking at /mnt directly would be less cluttered, but again, I am fine with this.
  9. ...but then I cannot mount a disk to /mnt/disks either. So does this mean that either all mount points are enabled/created by default when the mount button is enabled, or none of them when the button is disabled?
  10. Is there any option to disable mount points (specifically addons, remotes and rootshare) if they are not needed?
  11. Complementary to the "Preserve current assignments" option, it would be great if the disk export settings could also be preserved when New Config is triggered. Currently all the disk share settings default back to not exported, and the granular permission settings for all the disk shares are lost as well.
  12. I'm wondering: if the parity drive is the largest, is it necessary for the parity check to keep running once all the data disks have been fully checked? For example, with an 18TB parity drive and the second largest data drive currently at 10TB, it typically takes around 6-7 hours for the parity drive to complete the parity check after the last data disk finishes. Would it be reasonable to submit a feature request to halt the parity check when all the data disks have been completed? Or are there valid reasons for keeping the parity disk running alone?
  13. ...and me too. The only way to update to 1.29.2 was to remove the container and reinstall - but then again I receive the same error when checking for updates.
  14. Don't know if this has been requested previously, or whether there is intentionally no option for this in the display settings, but it would be great to have this sign-in hidden from the top header for those who don't use it.
  15. I have lots of dockers and need to scroll down to reach the control buttons on the Containers tab. Could moving the control buttons from the bottom to the top of the screen (above the list of containers) be considered?
  16. Unfortunately this is not the case. It doesn't get enabled for me even if I have a user configured permanently there. I am a bit disappointed not to have this freedom of choice. Many thanks for the responses!
  17. Is there any way to get vsftpd enabled when Unraid boots up? Good old go file or something? Yes, I am aware of the security implications, and yet I would like to use this if I can rather than using another docker. (By the way, the help text in the FTP server settings incorrectly says "By default the FTP server is enabled" - this is on 6.10-rc2.)
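     Just to illustrate what I mean by the go file approach - purely a sketch, since the exact way the stock FTP server has to be started (and whether it can be launched standalone like this at all, rather than via inetd) is an assumption on my part:

          # appended to /boot/config/go
          # start the built-in FTP server at boot -- assumed invocation, may well differ
          /usr/sbin/vsftpd &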
  18. I am not sure, but I don't think so. I think it was a container-specific update to qbittorrent that broke this. Unless your other containers are suffering from the same issue, I wouldn't change it.
  19. I believe "extra parameters: -e UMASK=000" will do the trick. @PSYCHOPATHiO I think it's not the 002 that made it work, but using UMASK instead of UMASK_SET.
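     For anyone running the container outside the Unraid template, the command-line equivalent would be roughly the following - the image tag and host paths here are just placeholders:

          # pass the UMASK environment variable to the container
          docker run -d \
            --name=qbittorrent \
            -e UMASK=000 \
            -v /mnt/user/appdata/qbittorrent:/config \
            -v /mnt/user/downloads:/downloads \
            linuxserver/qbittorrent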
  20. Wow, that's some really deep testing there! Thank you very much for your efforts, I really do appreciate this! Getting late here, I will do the same with the same screenshots tomorrow just for the sake of the record. Blue Iris is probably also behaving badly in this setup: it is running as a Windows service and constantly using the iGPU, so it is not like video in a web browser, which should stop when you disconnect. ...and your question: open means when I have an active RDP window open - no problem with this at all. When I just close the RDP window (disconnect from the session, but leave the user logged in) - that's when the problem starts. If I connect back to the session I left - no issue again while the window is open. If I leave the session by logging out, then it looks like the issue does not pop up - still need some testing on this to confirm 100%. And also for the record: there is no issue with RDP without iGPU passthrough. Let me ask a few quick questions:
     - is your VM configured as Q35-5.1 with OVMF BIOS?
     - did you do any tweaks to the video drivers in Win10 (e.g. removing the QXL video device manually from the XML, as suggested somewhere on the forums) other than what is in the second post?
     - what unRAID version are you running? I don't recognize the chart for the CPU load. Is this in 6.10-rc1 or is that provided by some plugin?
     - do you have GuC/HuC loading disabled? (see the quick check sketched below)
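     In case it helps, this is roughly how I check and disable it on my side - the modprobe.d path is how I persist it on 6.9+, but treat the exact location as an assumption for your setup:

          # check the current GuC/HuC setting of the i915 driver
          cat /sys/module/i915/parameters/enable_guc

          # disable GuC/HuC loading; persisted on the flash drive, takes effect after a reboot
          echo "options i915 enable_guc=0" >> /boot/config/modprobe.d/i915.conf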
  21. ...but contrary to you, the issue for me is not when I have the RDP session open (which I also had at 1080p, so not beyond the limit), but when I disconnect RDP without logging out from Windows. If I log out, then it seems like there is no issue.
  22. This is 100% RDP related for me. Blue Iris in a Win10 VM with a constant 20% load was running nicely in parallel with the parity check for 8:30+ hours; then, after I logged in and disconnected (not logged out) using RDP, I got the memory page errors within 5 minutes and libvirt froze. I also noted the following:
     - I have 4 cores allocated to the Win10 VM
     - the average CPU load on all 4 cores was around 15% when it was still running
     - if I logged in with RDP and just disconnected from the RDP session (leaving my user logged in), then the CPU load was more than double on all cores
     - if I logged (signed) out my user in Windows instead of just disconnecting from the RDP session, then all 4 cores went back to the normal ~15% and did not show the doubled load
     Now the question is why this Windows 10 build 1903 bug from 2019 is hitting me in 2021. Let me see if the workaround from 2019, disabling "Use WDDM graphics display driver for Remote Desktop Connections", helps me now...