Kulisch

Members
  • Posts: 57
  • Joined
  • Last visited

Kulisch's Achievements

Rookie (2/14) · Reputation: 9 · Community Answers: 1

  1. Reserved post for possible upcoming information.
  2. Hello everyone, despite Nvidia's steep prices I decided to upgrade to a new GPU to open up more possibilities with AI. I therefore said goodbye to my Zotac GTX 1070 and bought an ASUS TUF 4070 Ti Super OC. No other GPUs or usable iGPUs are installed (Ryzen 3800XT).

     So that my Docker containers could use the new GPU, I wanted to install the latest Nvidia drivers. For some reason the download kept failing with errors, even after rebooting the host. So I removed the Nvidia plugin, reinstalled it, restarted Unraid, and the latest drivers then downloaded successfully. After another reboot, the Docker containers were able to work with the graphics card (provided the Docker settings are configured correctly).

     I also had difficulties with the VMs at the beginning. In my old setup with the 1070 I needed two things: a vBIOS file, which I got from the TechPowerUp website, and a user script that was executed when the array started:

         #!/bin/bash
         echo 0 > /sys/class/vtconsole/vtcon0/bind
         echo 0 > /sys/class/vtconsole/vtcon1/bind
         echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

     My vBIOS file: Zotac.GTX1070.8192.161103.rom

     Both are apparently no longer necessary. As I have read several times, the vBIOS file is not needed with the 4000 series because Nvidia has made changes to make this easier for the user. If I understand correctly, the script disconnected the GPU from the host system to make it "available" for the VMs; with the 1070 this was required.

     Even with both gone, I still had a black screen when booting up. While browsing the forum I read that it is also possible to use the VNC interface as a graphics card in addition to the Nvidia card. Do not forget to configure the corresponding sound device of the Nvidia graphics card as well. I therefore created a Windows 10 VM, carried out the installation via the web interface, and installed the virtio drivers for the network interface (NetKVM) and the storage (viostor) from the virtio ISO.

     After installing the OS, I created the offline account and finished the setup, then downloaded and installed the latest Nvidia drivers. Normally it is possible to get graphics drivers from Windows Update, but in my case none were offered. The physical display output appeared after the driver installation. Just to make sure this was not only temporary, I restarted the system via the OS, then shut it down and booted it up again... and it stayed that way.

     However, I noticed that the BIOS is only visible via VNC; on the physical screen, output only appears once the OS and the corresponding graphics drivers are loaded. After making sure that this works for the most part, I removed the VNC graphics card while the Nvidia card remains in the configuration. Started the VM again, and a few seconds after the OS fully booted, the Windows desktop appeared on the physical monitor.

     How exactly to get a permanent display output, so that the BIOS, running updates, or a disk check are visible, I cannot say at the moment. If anyone has similar experiences or even a solution to this problem, I would be very grateful if you could let me know. I hope I could help some of you with this information.
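     A side note on verifying the Docker part: a minimal sketch, assuming the Unraid Nvidia driver plugin is installed; the CUDA image tag is only an example.

         # on the Unraid host: confirm the driver sees the card
         nvidia-smi --list-gpus

         # throwaway container test; in an Unraid template this corresponds to
         # "--runtime=nvidia" in Extra Parameters plus the two variables below
         docker run --rm --runtime=nvidia \
           -e NVIDIA_VISIBLE_DEVICES=all \
           -e NVIDIA_DRIVER_CAPABILITIES=all \
           nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi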
  3. If I understand correctly... you want to use this Docker container to pull/synchronize mail from a Microsoft 365 mail server and store it on the Docker mail server? What speaks against using a mail client (Thunderbird, Outlook, Roundcube, etc.)?
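     If it really has to be a server-side copy rather than a client, imapsync is a commonly used tool for this; a hedged sketch, with hosts and credentials as placeholders:

         # copy a 365 mailbox into a docker-mailserver account via IMAP
         imapsync \
           --host1 outlook.office365.com --ssl1 \
           --user1 user@tenant.example --password1 'pass1' \
           --host2 mail.example.com --ssl2 \
           --user2 user@example.com --password2 'pass2'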
  4. I cannot recommend an admin web interface. I only use the "setup" command directly in the container (console or terminal):

         docker exec -it docker-mailserver bash

     To be honest, I'm also not sure there is one you can work with properly, as they advertise that everything is customized via configuration only. https://github.com/docker-mailserver/docker-mailserver/issues/1555#issue-650874945 With the "setup" command you can configure everything you need. As a web client I can recommend Roundcube; I downloaded it directly from Docker Hub and configured it there. See the configuration in the picture. I hope the information was valuable.
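     For reference, a few typical "setup" invocations inside the container; a minimal sketch, with addresses as examples:

         setup email add user@example.com 'S3cret!'         # create a mailbox
         setup email list                                   # list accounts
         setup alias add info@example.com user@example.com  # add an alias
         setup config dkim                                  # generate DKIM keys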
  5. What are you using for login? Username, password (special characters?), mail address, domain, SSL/TLS, port? (Please censor sensitive data.) Is the information you entered in iOS the same as in Outlook? Have you tried Thunderbird?
  6. I'm just here to say thank you. Just found this a few days ago and installed it successfully. Everything works great and I'm using this daily. 👍
  7. I guess this is a read/write permissions problem. Have you changed the permissions? If you are not sure, try a new container or a new path to see if the error happens again. If so, try the following command (not for production environments):

         chmod 777 /mnt/user/appdata/<dms path>
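     Before resorting to 777, it can also help to check who actually owns the files; a hedged sketch, assuming the container writes as nobody:users (uid 99 / gid 100), which many Unraid templates use:

         # show numeric owner, group and mode
         ls -ldn /mnt/user/appdata/<dms path>

         # tighter alternative to 777
         chown -R nobody:users /mnt/user/appdata/<dms path>
         chmod -R u=rwX,g=rwX /mnt/user/appdata/<dms path>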
  8. Hello everyone, I have been using Linux VMs for a while now, with my GTX 1070 as the GPU. There are hardly any problems using the VM with the GPU, except that after the VM has shut down, the last frame is still visible and my display keeps showing it. This does not happen with Windows VMs: when I shut those down, no image is output anymore and the display goes into standby. Has anyone had this problem, or does anyone know how to solve it? Does the solution have to happen in the VM or in Unraid?

     Additional info: the machine is a Ryzen 3800XT build with only one GTX 1070 (no second graphics card). In the VM I use the proprietary Nvidia drivers. The graphics card is not passed through via vfio-bind. Two user scripts related to the graphics card run on the Unraid host.

     Script 1: GPU Unbind - at first array start

         #!/bin/bash
         echo 0 > /sys/class/vtconsole/vtcon0/bind
         echo 0 > /sys/class/vtconsole/vtcon1/bind
         echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

     Script 2: GPU Powersave - hourly (by Spaceinvader One)

         #!/bin/bash
         # check for driver
         command -v nvidia-smi &> /dev/null || { echo >&2 "nvidia driver is not installed you will need to install this from community applications ... exiting."; exit 1; }
         echo "Nvidia drivers are installed"
         echo
         echo "I can see these Nvidia gpus in your server"
         echo
         nvidia-smi --list-gpus
         echo
         echo "-------------------------------------------------------------"
         # set persistence mode for gpus (when persistence mode is enabled the
         # NVIDIA driver remains loaded even with no active processes, which stops
         # modules being unloaded and settings changing when they are reloaded)
         nvidia-smi --persistence-mode=1
         # query power state
         gpu_pstate=$(nvidia-smi --query-gpu="pstate" --format=csv,noheader)
         # query pids of running processes using the gpu
         gpupid=$(nvidia-smi --query-compute-apps="pid" --format=csv,noheader)
         # check if pstate is P0 and no processes are running
         if [ "$gpu_pstate" == "P0" ] && [ -z "$gpupid" ]; then
             echo "No pid in string so no processes are running"
             fuser -kv /dev/nvidia*
             echo "Power state is"
             echo "$gpu_pstate"
         else
             echo "Power state is"
             echo "$gpu_pstate"
         fi
         echo
         echo "-------------------------------------------------------------"
         echo
         echo "Power draw is now"
         # check current power draw of the gpu
         nvidia-smi --query-gpu=power.draw --format=csv
         exit

     Many thanks in advance.
  9. Hello everybody, now that I have built myself a PiKVM to control the system completely remotely, I was also able to find a solution to my problem. Over these months I tested various settings and noted where problems occurred and where they did not. Now that I have found my optimal settings and tested them for weeks with success, I can close the topic.

         Power Supply Idle Control = Typical Current Idle
         PSS Support = disabled
         Global C-States = disabled
         Core Performance Boost (CPB) = disabled

     I used these settings before and after a BIOS upgrade (P2.20 -> P2.30, B550 Taichi) without crashes. In the meantime I also changed the USB stick (fresh install) and formatted the hard disks, so I strongly assume it is not an operating-system setting but purely the BIOS. The only thing that does not work, and will not, is hibernating a VM. With Global C-States enabled (if I remember correctly) I could wake VMs from hibernation, but since I had stability issues with that option as well, I had to disable it; now a hibernated VM cannot be woken and needs a force shutdown. I have therefore disabled hibernation in the guest operating system. I hope this info is helpful for one or the other. I am so relieved to have solved the problem now.
  10. Having a similar experience: my scanner is not supported out of the box, but after installing the driver, scanimage -L works as well. Unfortunately the scanner is still not found in the web interface. Is there anything that needs to be adjusted in the Docker variables?

          # install the Epson driver inside the container (temporary)
          docker exec -it scanservjs bash
          cp /app/config/iscan-gt-f720-bundle-2.30.4.x64.deb.tar.gz /tmp/
          cd /tmp/
          tar -xvf iscan-gt-f720-bundle-2.30.4.x64.deb.tar.gz
          cd /tmp/iscan-gt-f720-bundle-2.30.4.x64.deb/
          ./install.sh
          ...
          scanimage -L

      Output:

          Created directory: /var/lib/snmp/cert_indexes
          device `epkowa:interpreter:001:009' is a Epson Perfection V30 flatbed scanner
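      Note that anything installed via docker exec is lost when the container is recreated (e.g. on an update). One way to persist the driver is to snapshot the container as a local image; a hedged sketch, with the image tag as an example:

          # freeze the running container, driver included, into a local image
          docker commit scanservjs scanservjs-epson:local

          # then point the Unraid template's repository field at
          # scanservjs-epson:local instead of the upstream image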
  11. OK, I will try that and give feedback if everything works. Thank you.
  12. So if I understand that correctly... should I ignore the path /mnt/user/ everywhere and use /mnt/cache or /mnt/diskX instead?
  13. What about the Docker containers' /mnt/user/appdata/? Should I rsync the files to another place?
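      If the files do need to move, a common pattern is to stop the affected containers first and copy between the real mount points rather than through /mnt/user; a hedged sketch, with paths as examples:

          # with Docker stopped, copy disk-to-disk, then verify before deleting
          rsync -avh --progress /mnt/disk1/appdata/ /mnt/cache/appdata/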
  14. unraid-diagnostics-20220919-1331.zip

      As mentioned before: the VM or Unraid crashes if I use /mnt/user/domains; if I use /mnt/cache/domains, everything works like a charm. I guess the same problem appears with Docker containers: using docker-mailserver, postgres, and nextcloud is no problem, but using Swag gives me a crash after a while. So I'm guessing there is a problem in the /mnt/user/ directory. How can I fix this?

      Update: TestDisk and PhotoRec work. Thanks for the advice.
  15. After creating a Windows 11 VM and attaching the corrupted hard drive, the problem was detected during boot by the OS and repaired. Now I only have to sort out the permissions for my hard drive: I can only list the files in PowerShell as an administrator; Explorer says the permissions are missing, and running Explorer as admin doesn't change anything. I still have to repair the array, too. Is there something in Unraid I can use to fix this problem, such as setting permissions or moving the files? Or do I have to delete everything and rebuild the array from the beginning?
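      For the Unraid side of the permissions question, there is a built-in New Permissions tool (Tools > New Permissions) that resets a share to the default owner and modes; its effect can be approximated from the console. A hedged sketch, with the share name as a placeholder:

          # reset ownership and modes to Unraid's defaults
          chown -R nobody:users /mnt/user/<share>
          chmod -R u=rwX,g=rwX,o=rX /mnt/user/<share>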