peter76

Members
  • Posts: 44
Everything posted by peter76

  1. My ZFS HDD pool containing 3 HDDs works well and the HDDs sleep correctly. Problem: when I open the Unraid dashboard in the browser, all HDDs wake up instantly. => They should stay sleeping!!
     HOW TO REPRODUCE:
     - Close the browser
     - Wait until the HDDs sleep
     - Open the Web UI dashboard in the browser
     - The HDDs wake up instantly
     How I found the reason for the HDDs waking up:
     - Reboot the server in SAFE MODE => no problem, the HDDs stay spun down, even when loading the Web UI
     - Reboot the server in normal mode
     - Remove plugins, one by one
     ===>>> The ZFS Master and Dynamix File Manager plugins were the reason for spinning up the HDDs (every time the MAIN tab of the Web UI was loaded) <<<===
     tower-diagnostics-20231214-0951.zip
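The plugin-by-plugin hunt above can be sped up by logging each disk's power state before and after loading the dashboard; a minimal sketch, assuming a typical Linux/Unraid box where the disks appear as /dev/sd* and hdparm is available (the output file path is just an example):

```shell
#!/bin/sh
# Log the power state of every /dev/sd* disk; run once while the disks
# sleep, then reload the dashboard and run again to see which ones woke up.
for d in /dev/sd[a-z]; do
    [ -b "$d" ] || continue                       # skip if no such block device
    # hdparm -C prints "drive state is: standby" (sleeping) or "active/idle"
    state=$(hdparm -C "$d" 2>/dev/null | awk -F': *' '/drive state/ {print $2}')
    echo "$d: ${state:-unknown}"
done | tee /tmp/disk_states.txt
```

Comparing the two snapshots narrows the wake-up to a single page load instead of waiting through full spin-down cycles.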
  2. Limiting a share's size should be a basic feature on a data server. Please make this (simple) feature available ASAP!
  3. The user should be informed about this 10% (the 0 shown is not correct in this case). The user should also be informed about the reason when creating the share fails. The "Add Share" button should be deactivated when there is too little space available.
  4. Thank you for the hint, I think now I can explain what happened:
     Creating a new share on a zfs pool with low free space FAILS when:
     - "Minimum free space" in the zfs pool settings is set to 1 KB
     - "Minimum free space" in the share settings is left at the default value (0)
     Creating a new share on a zfs pool with low free space SUCCEEDS when:
     - "Minimum free space" in the zfs pool settings is set to 1 KB
     - "Minimum free space" in the share settings is set to a low value (1 KB)
     => Keeping in mind that I have to fill in a value other than 0, creating a new share succeeds now!
     => Maybe the default value "0" should be explained in the corresponding help text?
  5. In case "share floor" means "Minimum free space": I have set "Minimum free space" to 1 KB, and the free space on the corresponding zfs pool is 1.57 TB. Observation: when trying to create a new share there is no error message, but the status line of the Web UI shows "Array Started • Starting services...". I got rid of this message by creating a new share on a pool with much free space, so that the creation succeeds.
  6. Thanks for this information, now I understand what was happening; it makes sense. My learning: we have to be careful when operating manually in the depths of Unraid's filesystem 🙂
  7. I have a share "SysLog", shared via Samba and used for a syslog server. I accidentally created a folder "Syslog/backups" (with a lowercase "l", not "L"!) by calling "mkdir -p /mnt/user/Syslog/backups". Now there were two folders, "/mnt/user/SysLog" and "/mnt/user/Syslog", on disk and shown in the Shares tab. The Samba server now used the new (wrong) folder "Syslog" instead of "SysLog". This seems to be a bug and should be prevented in some way. My solution was to stop all services (Docker, VM); then I was able to delete "/mnt/user/Syslog" via the Shares tab (deleting via CLI was not possible for me). After restarting the services everything was fine again 🙂 tower-diagnostics-20230716-1734.zip
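The collision is easy to reproduce outside Unraid, because Linux filesystems are case-sensitive while Samba share names by default are not; a hypothetical demo under /tmp (paths are examples, not the real share location):

```shell
#!/bin/sh
# Linux treats these as two distinct directories, even though a
# case-insensitive Samba lookup resolves both names to one of them.
mkdir -p /tmp/demo/SysLog /tmp/demo/Syslog/backups
ls /tmp/demo        # both "SysLog" and "Syslog" exist side by side
```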
  8. I have the same issue on 6.12.3: creating a new share on a zfs pool with low free space (zfs_hdd in my case) is still not possible. Creating a new share on a zfs pool with much free space (zfs_ssd in my case) is no problem. tower-diagnostics-20230716-1734.zip
  9. Hello, I'm having this issue now on Unraid 6.10.0-rc2 and UD 2022.01.02a. After 5 minutes the Web UI is accessible. ryzen7-x5750g-diagnostics-20220102-2332.zip
  10. Bug report: on Unraid 6.9.2 the array doesn't start after replacing a (pre-attached) disk while preclear is running on another disk (I waited about 3 minutes). The server was not shut down; only the array was stopped while preclear was running on a non-array disk. After stopping preclear (killing the process via CLI) the array started correctly and the data rebuild started correctly. A manual restart of preclear was also successful.
      Suggested solutions:
      - Unraid should start the array even when a non-array disk is preclearing, AND/OR
      - the Preclear plugin should stop working when the array is stopped (or started).
      tower-diagnostics-20210819-0926.zip
  11. For me it's working now - all Dockers are listed. Thank you for fixing!
  12. No - this file doesn't exist, neither on my running server nor in older backups. 🙂
  13. I'm having an issue where some Docker containers are not listed in "Docker Auto Update Settings" - looking at "nextcloud_mariadb", which I had to update manually: => Why is "nextcloud_mariadb" not available in the list of "Docker Auto Update Settings" in the plugin? => To me it seems to be a bug. tower-diagnostics-20210808-1427.zip
  14. It looks like I'm having the same bug - I can't delete the "ugly" entries:
  15. Okay, that's true for 6.10.0-rc1, but in 6.9.2 I had user shares enabled - and they were disabled in 6.10.0-rc1. I could fix the problem - but less experienced users could run into the same issue, and they get Docker and libvirt (and maybe other) problems without a warning and without a hint how to fix it.
  16. I'm having an issue with non-existing file paths in the VM and Docker settings after upgrading to 6.10.0-rc1: after the upgrade was done and the reboot had finished, libvirt and Docker didn't start because the paths /mnt/user/... don't exist any more. I changed the paths in the VM and Docker settings to /mnt/disk1/..., and now libvirt and Docker start and run again. Hint: I'm running an array that contains only one disk - maybe this "special" case is not recognized in 6.10.0? ryzen7-5800x-diagnostics-20210808-1133.zip
  17. Hello JTok, I'm wondering if it would be possible to add a function to split the resulting *.img.zst file into parts of, let's say, 4 GB. In my case I run an automated backup job with rsnapshot to a remote server built with multiple hard drives using mergerfs - and smaller *.img.zst files would help me fill up those hard drives. (Right now I have to set minfreespace=520G to handle those big *.img.zst files.) Thanks a lot for your answer, peter76
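Until such a feature exists, the splitting can be done by hand after the backup with the standard split/cat round trip; a sketch where the file names and sizes are only examples (a small dummy file stands in for the real backup image, and -b 16K would be -b 4G in practice):

```shell
#!/bin/sh
set -e
# Stand-in for the real VM backup image (use your actual *.img.zst)
dd if=/dev/urandom of=/tmp/disk.img.zst bs=1024 count=64 2>/dev/null
# Split into fixed-size numbered parts; use -b 4G for real 4 GB chunks
split -b 16K -d /tmp/disk.img.zst /tmp/disk.img.zst.part-
# Reassembly before a restore is a plain concatenation
cat /tmp/disk.img.zst.part-* > /tmp/disk.img.zst.restored
cmp /tmp/disk.img.zst /tmp/disk.img.zst.restored
```

Because split produces byte-identical slices, rsnapshot can distribute the part-* files across drives and cmp verifies the concatenation restores the original exactly.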
  18. I'm running a Windows VM with GPU passthrough and without a monitor connected. RDP runs fine for most purposes. But there are situations where RDP does not work - e.g. some bigger Windows updates require multiple reboots, and the only way I got them running was using VNC. Now I have the following problem: activating VNC together with GPU passthrough results in an error in the Windows Device Manager (Code 12 - not enough resources...). So my passthrough GPU does not run when VNC is activated. How can I run both??
  19. Thanks a lot to dlandon, you put me on the right path. A few days ago I made some changes to my firewall, and afterwards pinging the gateway was no longer possible. I solved the problem by allowing pings to the gateway. Just one thing to think about: the Unraid GUI (without the UD plugin) starts without being able to ping the gateway. When the UD plugin is installed AND pinging the gateway is not possible, the user gets problems (as I did). In my "dummy user" opinion, pinging the gateway is maybe not the best way to check the network.
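The check described above can be reproduced by hand to see whether a firewall rule is in the way; a minimal sketch, assuming a typical Linux box with ip and ping available (the output file is just an example):

```shell
#!/bin/sh
# Find the default gateway and try a single ping, roughly the way a
# network-availability check might do it at boot.
GW=$(ip route 2>/dev/null | awk '/^default/ {print $3; exit}')
if [ -n "$GW" ] && ping -c 1 -W 2 "$GW" >/dev/null 2>&1; then
    echo "default gateway $GW: reachable"
else
    echo "default gateway ${GW:-unknown}: NOT reachable"
fi | tee /tmp/gw_check.txt
```

If this prints "NOT reachable" while the network otherwise works, an ICMP rule on the firewall is the likely culprit.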
  20. I have a problem since the last plugin update to 2021.04.03 on Unraid 6.9.1: Unassigned Devices gets a timeout when the Unraid server starts. Server log:
      Apr 5 17:35:40 Ryzen7-5800X unassigned.devices: Mounting 'Auto Mount' Remote Shares...
      Apr 5 17:41:10 Ryzen7-5800X unassigned.devices: Cannot 'Auto Mount' Remote Shares. Network not available!
      Pinging the server is possible the whole time, but the Unraid GUI is not reachable. After these 5 min 30 sec the Unraid GUI becomes reachable, and I can mount my SMB shares manually (by hitting the MOUNT button). For testing I deleted all my SMB shares: same timeout behaviour when rebooting the server. ryzen7-5800x-diagnostics-20210405-1815.zip
  21. Now, with VM Backup 2021.03.11, the installation runs smoothly 🙂 THANK YOU
  22. Thanks JTok - now I can run ... */3 ... PS: only "Disable custom cron validation?" has to be set to YES.
  23. Bug report: in "Settings" => "Custom Cron" it's not possible to enter the "/" symbol.
      => possible to enter e.g. "0 3 1 3,9 *" (run at 3:00 on 1 March and 1 September)
      => not possible to enter e.g. "0 3 1 */2 *" (run at 3:00 every second month)
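For reference, both expressions from the report in standard crontab notation; the "/" step syntax is part of standard cron, which is why the validator rejecting it looks like a bug:

```
# min  hour  dom  month  dow
  0    3     1    3,9    *      # 03:00 on 1 March and 1 September (accepted)
  0    3     1    */2    *      # 03:00 on the 1st of every second month (rejected)
```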