Sulframus

Members
  • Posts: 21
Everything posted by Sulframus

  1. Okay, I've finally figured it out. There were issues on top of issues. The working config for Active Directory, using the proxyAddresses attribute, is as follows.
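     A minimal sketch of the kind of environment settings involved (a hedged example, not the exact config from this post; variable names follow the docker-mailserver LDAP docs, while the hostnames, DNs and filters are illustrative placeholders):

       ACCOUNT_PROVISIONER=LDAP            # ENABLE_LDAP=1 on older DMS releases
       LDAP_SERVER_HOST=ldaps://dc1.example.local
       LDAP_SEARCH_BASE=OU=Users,DC=example,DC=local
       LDAP_BIND_DN=CN=svc-mail,OU=Service,DC=example,DC=local
       LDAP_BIND_PW=changeme
       # match a mailbox on either the mail attribute or an smtp: proxyAddresses entry
       LDAP_QUERY_FILTER_USER=(&(objectClass=user)(|(mail=%s)(proxyAddresses=smtp:%s)))
       LDAP_QUERY_FILTER_ALIAS=(&(objectClass=user)(proxyAddresses=smtp:%s))
       LDAP_QUERY_FILTER_DOMAIN=(|(mail=*@%s)(proxyAddresses=smtp:*@%s))
       # let Dovecot authenticate by binding as the user rather than reading password hashes
       DOVECOT_AUTH_BIND=yes
       DOVECOT_USER_FILTER=(&(objectClass=user)(|(mail=%u)(proxyAddresses=smtp:%u)))
       DOVECOT_PASS_FILTER=(&(objectClass=user)(|(mail=%u)(proxyAddresses=smtp:%u)))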
  2. Has anybody been able to get AD LDAP working with DMS? I've followed their documentation for AD, but keep getting Dovecot auth issues. I've reviewed the Dovecot forums, and everything I have tried still didn't get it to work. Usernames, passwords, IPs and domains have been redacted.
  3. I keep having issues where my activity on most torrents stops being announced. It's usually fixable by forcing a recheck, but having to recheck about 20TB of files every few weeks is extremely annoying. Has anybody had this issue, or does anybody know a way to get it fixed? Just updating the tracker is not enough.
  4. It looks like the issue was with Thunderbird. When it autofills the IMAP information, it puts the username as just admin, not [email protected], and therefore the user is unknown. Receiving mail now works. Sending e-mails to Gmail, for example, doesn't work; this looks to be an issue caused by my ISP. I will try to call them, hopefully they will be able to help out.
  5. Not sure why it happens either, but thank you for your help and time. Your suggestions at least made me understand how the subdomains work. I will open an issue on the DMS GitHub; hopefully they will be able to help.
  6. I did have IPS on; I've disabled it now for testing, but still no change. No firewall rules. This was done multiple times during testing. The accounts do show up when running "setup email list"; I tried updating the passwords again to 123456, but the same error message about an unknown user still appears. Added it before, and tried adding it again; tested with domain.com and mail.domain.com. Recreated the whole DMS docker, but unfortunately I'm still stuck on the same issue. Went through the "Troubleshooting" article on the DMS GitHub page; the only relevant part is what is already shown in the docker logs in Unraid anyway.
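     For reference, the account commands run inside the container look roughly like this (container name, address and password are placeholders; subcommands per the DMS setup helper):

       docker exec -it mailserver setup email list                          # list existing accounts
       docker exec -it mailserver setup email update user@domain.com 123456 # reset a password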
  7. I think the response time is good for a forum, thank you! Unfortunately I am still having issues. I have decided to test with the least amount of security, just for testing purposes, to see whether I can reach the mail server from a VPN connection. I don't use NPM to port-forward any of the ports used by DMS; this is done by the router instead, which points to Unraid, where DMS is running. I have turned off the Cloudflare proxy for my A records and created them the same way you mentioned. I have turned off SSL to avoid certificate issues and recreated the mail accounts. Thunderbird finds the IMAP configuration, but after trying to log in it gets stuck on "checking password". Logs from DMS point to an account that isn't created, which is weird, as I added both [email protected] and [email protected]
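     One way to take Thunderbird out of the equation is to log in to IMAP by hand and see which username the server actually receives (hostname and credentials are placeholders; with TLS disabled, a plain "nc mail.domain.com 143" session works the same way):

       openssl s_client -connect mail.domain.com:143 -starttls imap
       a1 LOGIN user@domain.com password
       a2 LOGOUT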
  8. No warnings in Thunderbird, but even without sending anything I am being flooded in the DMS logs with connections that drop after EHLO, coming from a Mexican domain I don't recognize at all. After blocking that IP in the firewall, the flood stopped. At the time I send the mail I get the following logs:
     The certs in the template are set up as follows: This is where I store my Cloudflare-provided Edge certificates, which are in use by NPM, so they are valid.
     I was not aware that any subdomains were required, as I thought that an A record pointing to my public IP, an MX record pointing to the A record, and TXT records for SPF, DKIM and DMARC were enough. I do, however, have my A record proxied through Cloudflare and am wondering if this could cause any issues. Would you be able to tell me what the subdomains are required for and what they would need to point to?
     I thought that the ports on the firewall had to be routed because NPM only accepts HTTP and HTTPS requests. I see that you use webmail.domain.com for the IMAP and SMTP information in Thunderbird; how do you point it in NPM? I have a single cert, which I believe is either for the TLD or a wildcard.
     I believe I already did the DKIM config before, but have done it once again now just in case. However, I don't receive e-mails even internally. The hostname in the docker template was set up correctly.
     It seems the certificate error warning in Thunderbird only appears the first time a mail is sent from the account.
     EDIT: Just tested with a Thunderbird that is not running on the same host, and I get a configuration issue.
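     For the DNS side, my understanding of the minimum layout is roughly the following (names and IP are placeholders; the important part is that the record the MX points at must be DNS-only/grey-cloud, because Cloudflare's proxy only carries HTTP/HTTPS, not SMTP or IMAP):

       mail.domain.com.              A    203.0.113.10        ; DNS-only, not proxied
       domain.com.                   MX   10 mail.domain.com.
       domain.com.                   TXT  "v=spf1 mx -all"
       _dmarc.domain.com.            TXT  "v=DMARC1; p=none; rua=mailto:postmaster@domain.com"
       mail._domainkey.domain.com.   TXT  "v=DKIM1; k=rsa; p=<key printed by setup config dkim>"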
  9. To correct myself on the first part: it seems F2B was disabled, as that's the default in the template. Using the test account, I was able to connect in Thunderbird, and the same for the original mail account once I updated the password to something basic. Now I have a new problem, which seems quite stupid: in Thunderbird, when sending the first mail, I get the warning "Sending of the message failed. Peer's Certificate issuer is not recognized. The configuration related to mail.domain.com must be corrected." As mentioned in my previous comment, I am using manual certificates provided by Cloudflare. I have an MX record created with the name domain.com pointing to mail.domain.com. Sending mail between the two local accounts doesn't do anything, and trying to send a mail from a different SMTP server, such as Gmail, yields the error "mail.domain.com could not be found". I have also tried adding e-mail routing in Cloudflare, which I don't think should be necessary, but no change.
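     To see why Thunderbird distrusts the issuer, it helps to inspect the certificate the server actually presents; if these are Cloudflare Origin CA certificates, they are only trusted by Cloudflare's own proxy, not by mail clients, so a publicly trusted cert (e.g. Let's Encrypt) would be needed on the mail ports. A quick check, assuming implicit TLS on port 465 (hostname is a placeholder):

       openssl s_client -connect mail.domain.com:465 -servername mail.domain.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates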
  10. Thanks for the heads-up on F2B; I haven't decided to turn it off yet. Not sure if I may have gotten banned, as the logs are not showing any alerts; what is the default ban duration, or where are the bans located, so I can clear them out? I have used [email protected] for the login in Thunderbird, which was already created beforehand in the console of the docker by going to the bin folder and running "setup email add [email protected] password". I did, however, use special characters during the setup. I will try again later today with a second account and a basic password. Thank you.
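     For checking and clearing bans, something along these lines should work (container name and IP are placeholders; subcommands per the DMS setup helper, which wraps fail2ban-client):

       docker exec -it mailserver setup fail2ban              # list currently banned IPs
       docker exec -it mailserver setup fail2ban unban 203.0.113.50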
  11. Hi, I seem to be struggling to get some parts right. I use Cloudflare as my DNS provider, where I already have my MX and TXT records set up, and I have the .pem and .key locations set in the template for manual SSL. For testing purposes I have forwarded all the ports used by the docker onto the IP of the Unraid server for now. Thunderbird does recognize these records, as it autofills the configuration with IMAP. However, this is where things start falling apart: when trying to log in, I get a message saying "Unable to log in at server. Probably wrong configuration, username or password." I have checked the username and password, which were correct. As there is no specific error message, I am unsure what could be misconfigured. Would anybody happen to know what can be causing this issue and, possibly, the fix?
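     For context, the ports being forwarded are the usual ones the mailserver container exposes; a sketch of the mapping (adjust to however the Unraid template names them):

       25   SMTP  (server-to-server delivery)
       143  IMAP  (STARTTLS)
       465  SMTPS (implicit TLS submission)
       587  SMTP submission (STARTTLS)
       993  IMAPS (implicit TLS)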
  12. Hi, I have installed an Nvidia Quadro P2000 into the server. Everything was okay, but as soon as I installed nvidia-driver the system started behaving strangely. First of all, the nvidia-driver page was inaccessible; it just kept loading forever. When opening the terminal and running nvidia-smi, it looked like the terminal was trying to do something, but no output came out. I decided to reboot the server after installing the driver, but the system would not shut down no matter how long I waited, and I had to resort to forcefully shutting down the host. I have done some tests, and this whole situation only happens when both the driver and the GPU are installed. There is no issue with only the driver installed, and no issue with only the GPU installed. myserver-diagnostics-20220605-0156.zip
  13. Unfortunately I've since moved my Unraid server onto different hardware, so I wasn't able to do the GPU passthrough in the end.
  14. So now I'm able to create a VM with the GPU and remote into it with no problem, but the VM locks up after several minutes and goes back to the previous state of not being able to remote into it or shut it down gracefully.
  15. After dumping the VBIOS of my own GPU and fully reinstalling the VM, I was able to get it working. But after some time the VM froze up and is now doing the same thing as before. I'll retry tomorrow with a fresh image; hopefully the results will be good.
  16. I have tried Spaceinvader's script to dump the VBIOS in Unraid, but it failed with an error. I will put the GPU in another machine to dump the VBIOS.
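     In case it helps anyone else, the manual way to dump it on Linux is via sysfs (the PCI address is taken from the passthrough config in this thread, the output path is a placeholder, and this often fails for the boot/primary GPU, which may be why the script errored):

       cd /sys/bus/pci/devices/0000:09:00.0
       echo 1 > rom                                    # make the ROM readable
       cat rom > /mnt/user/isos/vbios/5700XT.rom
       echo 0 > rom                                    # disable it again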
  17. So I've now double-checked: the model is the same, but it's not the VBIOS for the specific card I have. I've now updated it with the one that is definitely mine, and nothing has changed. Attaching both of them (renamed for convenience of upload). 5700XT-new.rom 5700XT-old.rom
  18. Added the VBIOS for my GPU from TechPowerup, reapplied the multifunction setting, but still the same. Attaching the newest diagnostics. myserver-diagnostics-20211027-1801.zip
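     For reference, the VBIOS is attached in the VM XML by adding a rom element to the GPU's hostdev entry, roughly like this (the file path is a placeholder; the PCI addresses mirror the XML further below):

       <hostdev mode='subsystem' type='pci' managed='yes'>
         <driver name='vfio'/>
         <source>
           <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
         </source>
         <rom file='/mnt/user/isos/vbios/5700XT.rom'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
       </hostdev>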
  19. Unfortunately, no dice. Adding logs from the VM itself as well.

     -overcommit mem-lock=off \
     -smp 8,sockets=1,dies=1,cores=4,threads=2 \
     -uuid 3ca404d4-c486-5c66-7ff6-801fe07777ac \
     -display none \
     -no-user-config \
     -nodefaults \
     -chardev socket,id=charmonitor,fd=31,server,nowait \
     -mon chardev=charmonitor,id=monitor,mode=control \
     -rtc base=localtime \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
     -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
     -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
     -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
     -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 \
     -blockdev '{"driver":"file","filename":"/mnt/user/domains/Windows 10/vdisk1.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
     -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk2,bootindex=1,write-cache=on \
     -blockdev '{"driver":"file","filename":"/mnt/user/isos/Windows.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
     -device ide-cd,bus=ide.0,unit=0,drive=libvirt-2-format,id=ide0-0-0,bootindex=2 \
     -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.190-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
     -device ide-cd,bus=ide.0,unit=1,drive=libvirt-1-format,id=ide0-0-1 \
     -netdev tap,fd=33,id=hostnet0 \
     -device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:71:36:2a,bus=pci.0,addr=0x2 \
     -chardev pty,id=charserial0 \
     -device isa-serial,chardev=charserial0,id=serial0 \
     -chardev socket,id=charchannel0,fd=34,server,nowait \
     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
     -device usb-tablet,id=input0,bus=usb.0,port=1 \
     -device vfio-pci,host=0000:09:00.0,id=hostdev0,bus=pci.0,multifunction=on,addr=0x5 \
     -device vfio-pci,host=0000:09:00.1,id=hostdev1,bus=pci.0,addr=0x5.0x1 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on

     2021-10-27 15:44:48.825+0000: Domain id=1 is tainted: high-privileges
     2021-10-27 15:44:48.825+0000: Domain id=1 is tainted: host-cpu
     char device redirected to /dev/pts/0 (label charserial0)
  20. Unraid is booting with UEFI. I had previously added "video=efifb:off" to the GUI-mode boot entry by mistake; I have now moved it to the default entry and rebooted. Now I can see the VM on the router, but no ping, RDP or VNC is working yet.

     root@MyServer:~# cat /proc/iomem
     00000000-00000fff : Reserved
     00001000-0009ffff : System RAM
     000a0000-000fffff : Reserved
     000a0000-000bffff : PCI Bus 0000:00
     000c0000-000dffff : PCI Bus 0000:00
     000c0000-000cdfff : Video ROM
     000f0000-000fffff : System ROM
     00100000-09e0ffff : System RAM
     04000000-04a00816 : Kernel code
     04c00000-04e4afff : Kernel rodata
     05000000-05127f7f : Kernel data
     05471000-055fffff : Kernel bss
     09e10000-09ffffff : Reserved
     0a000000-0a1fffff : System RAM
     0a200000-0a20bfff : ACPI Non-volatile Storage
     0a20c000-0affffff : System RAM
     0b000000-0b01ffff : Reserved
     0b020000-c76cf017 : System RAM
     c76cf018-c76e7c57 : System RAM
     c76e7c58-c76e8017 : System RAM
     c76e8018-c76f6057 : System RAM
     c76f6058-d174dfff : System RAM
     d174e000-d176cfff : ACPI Tables
     d176d000-d81a0fff : System RAM
     d81a1000-d81a1fff : Reserved
     d81a2000-da60bfff : System RAM
     da60c000-da749fff : Reserved
     da74a000-da759fff : ACPI Tables
     da75a000-da861fff : System RAM
     da862000-dac21fff : ACPI Non-volatile Storage
     dac22000-db77efff : Reserved
     db77f000-ddffffff : System RAM
     de000000-dfffffff : Reserved
     e0000000-fec2ffff : PCI Bus 0000:00
     e0000000-f01fffff : PCI Bus 0000:07
     e0000000-f01fffff : PCI Bus 0000:08
     e0000000-f01fffff : PCI Bus 0000:09
     e0000000-efffffff : 0000:09:00.0
     e0000000-efffffff : vfio-pci
     f0000000-f01fffff : 0000:09:00.0
     f0000000-f01fffff : vfio-pci
     f8000000-fbffffff : PCI MMCONFIG 0000 [bus 00-3f]
     f8000000-fbffffff : Reserved
     f8000000-fbffffff : pnp 00:00
     fc600000-fc8fffff : PCI Bus 0000:0b
     fc600000-fc6fffff : 0000:0b:00.3
     fc600000-fc6fffff : xhci-hcd
     fc700000-fc7fffff : 0000:0b:00.1
     fc700000-fc7fffff : ccp
     fc800000-fc807fff : 0000:0b:00.4
     fc808000-fc809fff : 0000:0b:00.1
     fc808000-fc809fff : ccp
     fc900000-fcafffff : PCI Bus 0000:07
     fc900000-fc9fffff : PCI Bus 0000:08
     fc900000-fc9fffff : PCI Bus 0000:09
     fc900000-fc97ffff : 0000:09:00.0
     fc900000-fc97ffff : vfio-pci
     fc9a0000-fc9a3fff : 0000:09:00.1
     fc9a0000-fc9a3fff : vfio-pci
     fca00000-fca03fff : 0000:07:00.0
     fcb00000-fccfffff : PCI Bus 0000:02
     fcb00000-fcbfffff : PCI Bus 0000:03
     fcb00000-fcbfffff : PCI Bus 0000:05
     fcb00000-fcb03fff : 0000:05:00.0
     fcb04000-fcb04fff : 0000:05:00.0
     fcb04000-fcb04fff : r8169
     fcc00000-fcc7ffff : 0000:02:00.1
     fcc80000-fcc9ffff : 0000:02:00.1
     fcc80000-fcc9ffff : ahci
     fcca0000-fcca7fff : 0000:02:00.0
     fcca0000-fcca7fff : xhci-hcd
     fcd00000-fcdfffff : PCI Bus 0000:0d
     fcd00000-fcd007ff : 0000:0d:00.0
     fcd00000-fcd007ff : ahci
     fce00000-fcefffff : PCI Bus 0000:0c
     fce00000-fce007ff : 0000:0c:00.0
     fce00000-fce007ff : ahci
     fcf00000-fcffffff : PCI Bus 0000:01
     fcf00000-fcf03fff : 0000:01:00.0
     fcf00000-fcf03fff : nvme
     fd000000-fd0fffff : Reserved
     fd000000-fd0fffff : pnp 00:01
     fd500000-fd5fffff : Reserved
     fea00000-fea0ffff : Reserved
     feb80000-fec01fff : Reserved
     feb80000-febfffff : amd_iommu
     fec00000-fec003ff : IOAPIC 0
     fec01000-fec013ff : IOAPIC 1
     fec10000-fec10fff : Reserved
     fec10000-fec10fff : pnp 00:05
     fec30000-fec30fff : Reserved
     fec30000-fec30fff : AMDIF030:00
     fed00000-fed00fff : Reserved
     fed00000-fed003ff : HPET 0
     fed00000-fed003ff : PNP0103:00
     fed40000-fed44fff : Reserved
     fed80000-fed8ffff : Reserved
     fed81500-fed818ff : AMDI0030:00
     fedc0000-fedc0fff : pnp 00:05
     fedc2000-fedcffff : Reserved
     fedd4000-fedd5fff : Reserved
     fee00000-ffffffff : PCI Bus 0000:00
     fee00000-feefffff : Reserved
     fee00000-fee00fff : Local APIC
     fee00000-fee00fff : pnp 00:05
     ff000000-ffffffff : Reserved
     ff000000-ffffffff : pnp 00:05
     100000000-41f37ffff : System RAM
     41f380000-41fffffff : RAM buffer
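     For reference, the parameter goes on the append line of the default boot entry in syslinux.cfg, roughly like this (a sketch assuming the stock Unraid entry; only video=efifb:off is added):

       label Unraid OS
         menu default
         kernel /bzimage
         append video=efifb:off initrd=/bzroot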
  21. Hi, I'm new to Unraid and the community. I have reused parts from my old PC for Unraid and have been trying to get a VM working with my Gigabyte 5700XT, to no avail so far. I followed Spaceinvader's guide, parts 1 and 2, step by step. I was able to set up the VM and install the virtio drivers. The problems start as soon as I try to change the GPU from VNC to the dedicated GPU. On the first boot of the VM after the change, the VM is stuck on a black screen and, unable to shut down gracefully, has to be force-stopped. After that I don't get any more pings, can't see it on the router, and can't VNC into the VM while the dedicated GPU is selected, so I have to force-stop it. When I change the VM settings back to VNC, everything works as usual. I have now rebuilt the VM twice with the same results, tried all PCIe ACS override settings, and tried adding the VBIOS manually (downloaded from TechPowerup). Below is the XML of the VM after switching back and forth between VNC and the GPU.

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>Windows 10</name>
       <uuid>3ca404d4-c486-5c66-7ff6-801fe07777ac</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>8388608</memory>
       <currentMemory unit='KiB'>8388608</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='0'/>
         <vcpupin vcpu='1' cpuset='6'/>
         <vcpupin vcpu='2' cpuset='1'/>
         <vcpupin vcpu='3' cpuset='7'/>
         <vcpupin vcpu='4' cpuset='2'/>
         <vcpupin vcpu='5' cpuset='8'/>
         <vcpupin vcpu='6' cpuset='3'/>
         <vcpupin vcpu='7' cpuset='9'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/3ca404d4-c486-5c66-7ff6-801fe07777ac_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='4' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/Windows.iso'/>
           <target dev='hda' bus='ide'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
           <target dev='hdb' bus='ide'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'/>
         <controller type='ide' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:71:36:2a'/>
           <source bridge='br0'/>
           <model type='virtio-net'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
     </domain>

     myserver-diagnostics-20211027-1652.zip