Nano


Posts posted by Nano

  1. 17 hours ago, dedighar said:

    Hi,
I just purchased and shucked a WD80EZAZ-11TDBA0 8TB disk, but I am not able to get the Unraid box to recognize it correctly.

    See attached diagnostic zip file.

The dmesg log shows these related entries:
    [   23.373062] sd 7:0:0:0: [sdf] Attached SCSI removable disk
    [   29.214235] ata8: SATA link up 6.0 Gbps (SStatus 133 SControl 330)
    [   29.214581] ata8.00: supports DRM functions and may not be fully accessible
    [   29.214628] ata8.00: ATA-9: WDC WD80EZAZ-11TDBA0, 83.H0A83, max UDMA/133
    [   29.215030] ata8.00: failed to enable AA (error_mask=0x1)
    [   29.225580] ata8.00: Read log 0x00 page 0x00 failed, Emask 0x1
    [   29.225629] ata8.00: NCQ Send/Recv Log not supported
    [   29.225667] ata8.00: Read log 0x00 page 0x00 failed, Emask 0x40
    [   29.225706] ata8.00: NCQ Send/Recv Log not supported
    [   29.225742] ata8.00: Read log 0x00 page 0x00 failed, Emask 0x40
    [   29.225781] ata8.00: 15628053168 sectors, multi 0: LBA48 NCQ (depth 32)
    [   29.225822] ata8.00: Security Log not supported
    [   29.225867] ata8.00: failed to set xfermode (err_mask=0x40)
    [   29.225907] ata8: limiting SATA link speed to 3.0 Gbps

I used a Molex-to-SATA adapter to get around the 3.3 V pin issue, and I can hear the drive spin up, but the drive is not recognized.

    Does anyone have an idea or clue of what might cause this error?

    What do I need to do in order to make use of this disk in unraid?

    The data cable used was borrowed from another disk known to be working OK. 


     

tower-diagnostics-20231230-1718.zip

Have you got any black electrical tape? Taping over the 3.3 V pin is probably the next best option, since the problem could be the adapter itself; I also wouldn't trust those adapters, as they are known to burn.

  2. On 12/16/2023 at 6:57 PM, Squid said:

     

     

FWIW, I've switched to a complete Mac/Apple environment and cannot see any fundamental difference between a stock macOS (Sonoma) install and a Windows environment. I'm not saying there isn't one, but with how my server is organized I don't see any real difference.

It would be good to get your comments on the GIFs in my last post. Can you confirm whether that issue appears for you? If it doesn't, I'd love to see the specific settings you use, as I feel like I have tried everything.

  3. 7 hours ago, tjb_altf4 said:

That is a generic SMB problem; this post has some suggestions for improving performance.

     

Except that when I run a TrueNAS VM inside Unraid, the image previews load instantly on a test SMB share and the folder itself loads fast. Yet the same share served directly by Unraid has slow previews, and the files don't appear for ages. Here is an example I just captured, and this is one of the better ones.

CleanShot 2023-12-31 at 10.31.50.gif

Here is the TrueNAS VM share, which loads fast and includes preview images.

    CleanShot 2023-12-31 at 10.37.50.gif

  4. On 12/16/2023 at 6:57 PM, Squid said:

     

     

FWIW, I've switched to a complete Mac/Apple environment and cannot see any fundamental difference between a stock macOS (Sonoma) install and a Windows environment. I'm not saying there isn't one, but with how my server is organized I don't see any real difference.

I have a Mac with a 10 Gbps link, and transfer speed is fine. The issues appear when you have a folder with over 300 images. Could you confirm whether you see any issues in that scenario?

Love the plugin; it feels like it should be included by default. Are there any plans to add a search, or is anyone aware of a search Docker container? For example, it would be great to search for extensions like .exe or .mkv for mass deletion, etc.
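Until something like that exists in the plugin, a quick workaround from the Unraid terminal is a plain `find` one-liner; a minimal sketch, where the share path is just an example and needs replacing with your own:

```shell
# Hypothetical share path -- on Unraid this would be something like
# /mnt/user/Media. Defaults to the current directory elsewhere.
SHARE="${SHARE:-.}"

# List every .exe or .mkv file under the share, case-insensitively.
find "$SHARE" -type f \( -iname '*.exe' -o -iname '*.mkv' \)

# After reviewing the list, the same search can delete the matches:
# find "$SHARE" -type f \( -iname '*.exe' -o -iname '*.mkv' \) -delete
```

Listing first and only then adding `-delete` is the safe order; `find` with `-delete` removes files with no confirmation.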

There have been some major issues with SMB speed on Unraid for a while. Unfortunately, because not many users access their servers from Macs, it doesn't come up often. I have done some more testing today that may help.

     

Test folder with 300 images.

Unraid 6.12.6, fresh install with an NVMe SSD share = crap SMB speed loading photos.

TrueNAS SCALE, fresh bare-metal build with a simple one-drive SMB test = fast loading of photos, expected speed.

TrueNAS SCALE, fresh build as a VM within the same Unraid install, with a simple one-drive SMB test = fast loading of photos, expected speed.

What's going on?
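For anyone comparing notes: one commonly suggested (though by no means guaranteed) mitigation is based on the theory that Finder's slow listings come from Samba performing extra per-file macOS metadata lookups. Disabling those lookups in the share's extra Samba configuration is worth a try; the parameter names below are Samba's documented readdir_attr options, but whether they help will vary by setup and Samba version:

```
# Skip the per-file macOS metadata lookups that can slow Finder listings.
# All three default to "yes"; turning them off trades some Finder metadata
# for faster directory enumeration.
readdir_attr:aapl_rsize = no
readdir_attr:aapl_finder_info = no
readdir_attr:aapl_max_access = no
```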

Unraid seems to have what I can only describe as ghosts in the system, something that I don't get on Proxmox, for example.

     

It seems base Windows 11/Server 2022 VMs will freeze, or lock the entire Unraid system at 100% CPU, especially if left overnight.


CPU: 13600KF

     

  8. 3 minutes ago, SimonF said:

No, currently it only supports btrfs. ZFS Master allows snapshots for ZFS, but I don't think it has a schedule option.

Thank you, that's what I was looking for. You should team up with Unraid, because this plugin has the most obvious name for it!

I did move to Proxmox for the VM side and virtualised Unraid, but then had problems with Proxmox hanging on boot after upgrades, so it was easy to switch back to bare-metal Unraid and reassign the disks. I want to give Unraid another shot on the VM side, but on my first day a few issues stick out.

     

1. Does anyone get this error page when opening the VNC view from the GUI? It's really annoying. (Browser: Safari)

    image.thumb.png.9de0fc10cbcf9108c27a9f4c3afe984c.png

     

     

2. The VM crashes for some reason when VNCing to the machine from an iPad (RealVNC); very strangely, with RealVNC on desktop the VM does not crash. It's an instant crash. I'm really stumped by this one: is Unraid receiving some sort of signal, or is the VM itself crashing? It's hard to tell.

     

3. PCIe errors show when starting a VM with PCIe disks passed through shortly after stopping it. Is this the GUI saying the VM has stopped before it actually has? A second start attempt after the PCIe error message then starts the VM.

     

     

  10. On 3/8/2023 at 11:50 PM, petebnas said:

    Hi all..I've been using unraid forever as a HTPC target, Music/Video source, various unstructured data, etc.  It's always worked well for me, but lately I've been driven crazy trying to sort out slowness issues with my windows client machines hanging and doing all sorts of funny things.  I've been forced into spending hours trying to get apps to work differently so they won't try a lot of 'small file/quick access' tasks like reading cover art and whatever else, and I really have reached that point where I need to find another win-based solution for file access.   

     

I ran across some other folks with similar issues who spent even more time than I have, and besides playing around with RSS (which I've done) and chasing things with hardware, it seems like Unraid has some SMB access issues, or perhaps it's Samba... regardless, it's 2023 and I don't feel like messing around with Samba problems.

     

    I'll probably keep unraid running as a backup target, but don't need any docker tricks and just need something for my primary access that will be quick with small files like jpg cover art files, music, and music video collections.  

     

    Has anyone been down this path recently? With large drives being so cheap, my thought was to keep my primary data online with a windows box, backing up to unraid and also leaving unraid for very large video files like movie files, etc...which have always worked well...and obviously some of these are accessed via NFS and bypass the issues I see with my other clients.  

     

    I want to be clear...I've been nothing but satisfied with Unraid over the years..but I get this feeling that there's some underlying issues that aren't being addressed, and I'm not quite sure why.  I do know that when I'm spending hours packet capturing and trying to get my apps to do things differently to stay afloat, that it's not something I want to be doing! :)

     

    Any thoughts are appreciated.

    Pete

After some checking, NFS fixes these painful issues; feel free to try my guide here. It's over 3 times quicker via NFS.

     

I am no expert, but I wanted to share what worked, as most of what I found on Google was crap from 2010 that didn't work. I will also mention that for anyone accessing their Unraid server from a Mac, NFS is around 3 times faster in my testing, and all the SMB issues described above are simply not present with NFS.

     

First, enable NFS shares in "Settings".

Then go to Shares, click the share you want to export, and open the NFS section.

     

    Set Export to Yes

    Set Security to Private

Set the Rule to allow full access to a specific trusted IP. This could be a server or your main PC, e.g. 192.168.1.20(sec=sys,rw)
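For reference, the Rule field follows standard NFS exports syntax, so more than one client can be allowed; the addresses below are just examples for a hypothetical 192.168.1.0/24 LAN:

```
# Two trusted clients, read-write:
192.168.1.20(sec=sys,rw) 192.168.1.30(sec=sys,rw)

# Or an entire subnet, read-only:
192.168.1.0/24(sec=sys,ro)
```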

     

Proxmox/Linux mounting: I had zero issues here after setting the security rule. Simply press the + button, it found the share, and done.

     

macOS mounting: this was just a massive pain...

     

    1. Open Terminal

2. Run showmount -e localhost (replace localhost with your servername.local or the server's IP).

3. It should list the share name together with the client IP you allowed.

    4. Create a mount point (Basically make a folder on your Mac somewhere)

5. Mount the share onto that folder with the following command:

E.g.

sudo mount -t nfs -o resvport server_IP_or_hostname:/path/to/shared/folder /path/to/mount/point

sudo mount -t nfs -o resvport 192.168.1.20:/mnt/user/unraidshare /Users/yourname/yourfolderyoumade

6. Done; the share is now mounted and usable.
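One caveat: a manual mount like this does not survive a reboot. A sketch of making it persistent with macOS's built-in automounter, reusing the same example IP, share, and folder (after editing, reload the maps with sudo automount -cv):

```
# /etc/auto_master -- add this line:
/-  auto_nfs  -nobrowse,nosuid

# /etc/auto_nfs -- create this file, one line per share:
/Users/yourname/yourfolderyoumade  -fstype=nfs,resvport,rw  nfs://192.168.1.20/mnt/user/unraidshare
```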

Is it normal to have to click all the little boxes on restore? Clicking ~40 boxes manually is a bit of a drag. Secondly, is there any chance we could have a progress bar or percentage for each restored tar.gz, pretty please? I am doing a restore now and it's stuck on a small docker.

     

I just restored and my Docker tab is empty, despite restoring all the XMLs too.

    image.thumb.png.8d41ecdc84b436dea768a4d37c09b43b.png

     

I assume this is normal and a limitation; after picking each template again, everything worked fine.

It may be beyond me then; that's basically what I use and it works okay. My understanding is that the PCI device is simply passed through here. I assume you have installed all the drivers in Windows?

     

Right-click the Start button in Windows, open Device Manager, and make sure nothing is missing or flagged with a yellow exclamation mark. Otherwise, hopefully someone else can help.

  14. 29 minutes ago, Kodon said:

     

    I took this to start over from scratch. I reset the BIOS and reformatted the Unraid USB to get rid of all settings.

    Leaving everything at default and "ticking the boxes" and... still not working :c

     

    vfio-pci Log

     

    Loading config from /boot/config/vfio-pci.cfg
    BIND=0000:01:00.0|10de:2684 0000:01:00.1|10de:22ba
    ---
    Processing 0000:01:00.0 10de:2684
    Vendor:Device 10de:2684 found at 0000:01:00.0
    
    IOMMU group members (sans bridges):
    /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices/0000:01:00.0
    /sys/bus/pci/devices/0000:01:00.0/iommu_group/devices/0000:01:00.1
    
    Binding...
    Successfully bound the device 10de:2684 at 0000:01:00.0 to vfio-pci
    ---
    Processing 0000:01:00.1 10de:22ba
    Vendor:Device 10de:22ba found at 0000:01:00.1
    
    IOMMU group members (sans bridges):
    /sys/bus/pci/devices/0000:01:00.1/iommu_group/devices/0000:01:00.0
    /sys/bus/pci/devices/0000:01:00.1/iommu_group/devices/0000:01:00.1
    
    Binding...
    0000:01:00.0 already bound to vfio-pci
    0000:01:00.1 already bound to vfio-pci
    Successfully bound the device 10de:22ba at 0000:01:00.1 to vfio-pci
    ---
    vfio-pci binding complete
    
    Devices listed in /sys/bus/pci/drivers/vfio-pci:
    lrwxrwxrwx 1 root root    0 Jun  8 16:51 0000:01:00.0 -> ../../../../devices/pci0000:00/0000:00:01.0/0000:01:00.0
    lrwxrwxrwx 1 root root    0 Jun  8 16:51 0000:01:00.1 -> ../../../../devices/pci0000:00/0000:00:01.0/0000:01:00.1

     

     

    VM Log

     

    -device '{"driver":"ide-cd","bus":"ide.0","drive":"libvirt-2-format","id":"sata0-0-0","bootindex":2}' \
    -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.229-1.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
    -device '{"driver":"ide-cd","bus":"ide.1","drive":"libvirt-1-format","id":"sata0-0-1"}' \
    -netdev tap,fd=36,id=hostnet0 \
    -device '{"driver":"virtio-net","netdev":"hostnet0","id":"net0","mac":"52:54:00:0f:ce:cc","bus":"pci.1","addr":"0x0"}' \
    -chardev pty,id=charserial0 \
    -device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \
    -chardev socket,id=charchannel0,fd=34,server=on,wait=off \
    -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \
    -chardev socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/1-Games-swtpm.sock \
    -tpmdev emulator,id=tpm-tpm0,chardev=chrtpm \
    -device '{"driver":"tpm-tis","tpmdev":"tpm-tpm0","id":"tpm0"}' \
    -device '{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"}' \
    -audiodev '{"id":"audio1","driver":"none"}' \
    -vnc 0.0.0.0:0,websocket=5700,audiodev=audio1 \
    -k de \
    -device '{"driver":"qxl-vga","id":"video0","max_outputs":1,"ram_size":67108864,"vram_size":67108864,"vram64_size_mb":0,"vgamem_mb":16,"bus":"pcie.0","addr":"0x1"}' \
    -device '{"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.4","addr":"0x0"}' \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
    char device redirected to /dev/pts/1 (label charserial0)
    qxl_send_events: spice-server bug: guest stopped, ignoring
    2023-06-08T17:27:08.650464Z qemu-system-x86_64: terminating on signal 15 from pid 2869 (/usr/sbin/libvirtd)
    2023-06-08 17:27:08.879+0000: shutting down, reason=shutdown
    2023-06-08 17:28:10.970+0000: Starting external device: TPM Emulator
    /usr/bin/swtpm socket --ctrl type=unixio,path=/run/libvirt/qemu/swtpm/2-Games-swtpm.sock,mode=0600 --tpmstate dir=/var/lib/libvirt/swtpm/a2694598-c0a6-e077-916f-47926aa66729/tpm2,mode=0600 --log file=/var/log/swtpm/libvirt/qemu/Games-swtpm.log --terminate --tpm2
    2023-06-08 17:28:10.983+0000: starting up libvirt version: 8.7.0, qemu version: 7.1.0, kernel: 5.19.17-Unraid, hostname: ArcticTower
    LC_ALL=C \
    PATH=/bin:/sbin:/usr/bin:/usr/sbin \
    HOME=/var/lib/libvirt/qemu/domain-2-Games \
    XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-2-Games/.local/share \
    XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-2-Games/.cache \
    XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-2-Games/.config \
    /usr/local/sbin/qemu \
    -name guest=Games,debug-threads=on \
    -S \
    -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-2-Games/master-key.aes"}' \
    -blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
    -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/a2694598-c0a6-e077-916f-47926aa66729_VARS-pure-efi-tpm.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
    -machine pc-q35-7.1,usb=off,dump-guest-core=off,mem-merge=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
    -accel kvm \
    -cpu host,migratable=on,hv-time=on,hv-relaxed=on,hv-vapic=on,hv-spinlocks=0x1fff,hv-vendor-id=none,host-cache-info=on,l3-cache=off \
    -m 16384 \
    -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":17179869184}' \
    -overcommit mem-lock=off \
    -smp 16,sockets=1,dies=1,cores=8,threads=2 \
    -uuid a2694598-c0a6-e077-916f-47926aa66729 \
    -display none \
    -no-user-config \
    -nodefaults \
    -chardev socket,id=charmonitor,fd=35,server=on,wait=off \
    -mon chardev=charmonitor,id=monitor,mode=control \
    -rtc base=localtime \
    -no-hpet \
    -no-shutdown \
    -boot strict=on \
    -device '{"driver":"pcie-root-port","port":16,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x2"}' \
    -device '{"driver":"pcie-root-port","port":17,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x2.0x1"}' \
    -device '{"driver":"pcie-root-port","port":18,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x2.0x2"}' \
    -device '{"driver":"pcie-root-port","port":19,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x2.0x3"}' \
    -device '{"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x2.0x4"}' \
    -device '{"driver":"ich9-usb-ehci1","id":"usb","bus":"pcie.0","addr":"0x7.0x7"}' \
    -device '{"driver":"ich9-usb-uhci1","masterbus":"usb.0","firstport":0,"bus":"pcie.0","multifunction":true,"addr":"0x7"}' \
    -device '{"driver":"ich9-usb-uhci2","masterbus":"usb.0","firstport":2,"bus":"pcie.0","addr":"0x7.0x1"}' \
    -device '{"driver":"ich9-usb-uhci3","masterbus":"usb.0","firstport":4,"bus":"pcie.0","addr":"0x7.0x2"}' \
    -device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.2","addr":"0x0"}' \
    -blockdev '{"driver":"file","filename":"/mnt/user/domains/Games/vdisk1.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
    -blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
    -device '{"driver":"virtio-blk-pci","bus":"pci.3","addr":"0x0","drive":"libvirt-1-format","id":"virtio-disk2","bootindex":1,"write-cache":"on"}' \
    -netdev tap,fd=36,id=hostnet0 \
    -device '{"driver":"virtio-net","netdev":"hostnet0","id":"net0","mac":"52:54:00:0f:ce:cc","bus":"pci.1","addr":"0x0"}' \
    -chardev pty,id=charserial0 \
    -device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \
    -chardev socket,id=charchannel0,fd=34,server=on,wait=off \
    -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \
    -chardev socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/2-Games-swtpm.sock \
    -tpmdev emulator,id=tpm-tpm0,chardev=chrtpm \
    -device '{"driver":"tpm-tis","tpmdev":"tpm-tpm0","id":"tpm0"}' \
    -device '{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"}' \
    -audiodev '{"id":"audio1","driver":"none"}' \
    -device '{"driver":"vfio-pci","host":"0000:01:00.0","id":"hostdev0","bus":"pci.4","addr":"0x0"}' \
    -device '{"driver":"vfio-pci","host":"0000:01:00.1","id":"hostdev1","bus":"pci.5","addr":"0x0"}' \
    -device '{"driver":"usb-host","hostdevice":"/dev/bus/usb/001/002","id":"hostdev2","bus":"usb.0","port":"2"}' \
    -device '{"driver":"usb-host","hostdevice":"/dev/bus/usb/001/003","id":"hostdev3","bus":"usb.0","port":"3"}' \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
    char device redirected to /dev/pts/1 (label charserial0)

     

     

    Unraid Log

     

    Jun  8 10:27:08 ArcticTower  avahi-daemon[2328]: Interface vnet0.IPv6 no longer relevant for mDNS.
    Jun  8 10:27:08 ArcticTower  avahi-daemon[2328]: Leaving mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:fe0f:cecc.
    Jun  8 10:27:08 ArcticTower kernel: br0: port 2(vnet0) entered disabled state
    Jun  8 10:27:08 ArcticTower kernel: device vnet0 left promiscuous mode
    Jun  8 10:27:08 ArcticTower kernel: br0: port 2(vnet0) entered disabled state
    Jun  8 10:27:08 ArcticTower  avahi-daemon[2328]: Withdrawing address record for fe80::fc54:ff:fe0f:cecc on vnet0.
    Jun  8 10:28:10 ArcticTower kernel: br0: port 2(vnet1) entered blocking state
    Jun  8 10:28:10 ArcticTower kernel: br0: port 2(vnet1) entered disabled state
    Jun  8 10:28:10 ArcticTower kernel: device vnet1 entered promiscuous mode
    Jun  8 10:28:10 ArcticTower kernel: br0: port 2(vnet1) entered blocking state
    Jun  8 10:28:10 ArcticTower kernel: br0: port 2(vnet1) entered forwarding state
    Jun  8 10:28:12 ArcticTower kernel: vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
    Jun  8 10:28:12 ArcticTower kernel: vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
    Jun  8 10:28:12 ArcticTower kernel: vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x26@0xc1c
    Jun  8 10:28:12 ArcticTower kernel: vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x27@0xd00
    Jun  8 10:28:12 ArcticTower kernel: vfio-pci 0000:01:00.0: vfio_ecap_init: hiding ecap 0x25@0xe00
    Jun  8 10:28:12 ArcticTower kernel: vfio-pci 0000:01:00.1: enabling device (0000 -> 0002)
    Jun  8 10:28:12 ArcticTower kernel: vfio-pci 0000:01:00.1: vfio_ecap_init: hiding ecap 0x25@0x160
    Jun  8 10:28:12 ArcticTower  acpid: input device has been disconnected, fd 6
    Jun  8 10:28:12 ArcticTower  acpid: input device has been disconnected, fd 7
    Jun  8 10:28:12 ArcticTower  acpid: input device has been disconnected, fd 8
    Jun  8 10:28:12 ArcticTower  avahi-daemon[2328]: Joining mDNS multicast group on interface vnet1.IPv6 with address fe80::fc54:ff:fe0f:cecc.
    Jun  8 10:28:12 ArcticTower  avahi-daemon[2328]: New relevant interface vnet1.IPv6 for mDNS.
    Jun  8 10:28:12 ArcticTower  avahi-daemon[2328]: Registering new address record for fe80::fc54:ff:fe0f:cecc on vnet1.*.
    Jun  8 10:28:13 ArcticTower kernel: x86/split lock detection: #AC: CPU 1/KVM/12762 took a split_lock trap at address: 0x7fde008c
    Jun  8 10:28:13 ArcticTower kernel: x86/split lock detection: #AC: CPU 4/KVM/12765 took a split_lock trap at address: 0x7fde008c
    Jun  8 10:28:13 ArcticTower kernel: x86/split lock detection: #AC: CPU 3/KVM/12764 took a split_lock trap at address: 0x7fde008c
    Jun  8 10:28:13 ArcticTower kernel: x86/split lock detection: #AC: CPU 7/KVM/12768 took a split_lock trap at address: 0x7fde008c
    Jun  8 10:28:13 ArcticTower kernel: x86/split lock detection: #AC: CPU 9/KVM/12770 took a split_lock trap at address: 0x7fde008c
    Jun  8 10:28:13 ArcticTower kernel: x86/split lock detection: #AC: CPU 8/KVM/12769 took a split_lock trap at address: 0x7fde008c
    Jun  8 10:28:13 ArcticTower kernel: x86/split lock detection: #AC: CPU 11/KVM/12772 took a split_lock trap at address: 0x7fde008c
    Jun  8 10:28:13 ArcticTower kernel: x86/split lock detection: #AC: CPU 10/KVM/12771 took a split_lock trap at address: 0x7fde008c
    Jun  8 10:28:13 ArcticTower kernel: x86/split lock detection: #AC: CPU 13/KVM/12774 took a split_lock trap at address: 0x7fde008c
    Jun  8 10:28:13 ArcticTower kernel: x86/split lock detection: #AC: CPU 15/KVM/12776 took a split_lock trap at address: 0x7fde008c
    Jun  8 10:29:12 ArcticTower kernel: usb 1-7: USB disconnect, device number 2
    Jun  8 10:29:13 ArcticTower kernel: usb 1-7: new low-speed USB device number 14 using xhci_hcd
    Jun  8 10:29:13 ArcticTower kernel: input: PixArt USB Optical Mouse as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/0003:093A:2510.0007/input/input11
    Jun  8 10:29:13 ArcticTower kernel: hid-generic 0003:093A:2510.0007: input,hidraw0: USB HID v1.11 Mouse [PixArt USB Optical Mouse] on usb-0000:00:14.0-7/input0

     

     

    I really don't know what could be wrong.

     

What settings are you clicking on the VM create page? Create a new VM template and send a screenshot. You could also try an Ubuntu distro as a test. Are you connecting an actual monitor, or a headless HDMI dummy plug? Some of the symptoms sound like you need one of those HDMI headless adapters.

  15. On 6/5/2023 at 4:22 PM, mike_walker said:

Hi there. So I've been on to Surfshark (my VPN provider), and the chap told me my incoming port was 1196. I've completely reinstalled the container (as my settings had been changed so much I wanted a fresh install). Anyhow, it didn't work; nothing better than 1.2 MB/s.

BTW, I did a test without the VPN and got an amazing 50.2 MB/s.

    I'm not convinced the incoming port bit I was told was right (he said it was because I was using UDP).

     

    Any other suggestions - or advice on how to get wireguard working?

    Deluge Settings.png

Use a TCP VPN; UDP VPNs have been getting throttled by some ISPs.

I set up AutoGPT on a Windows server and wow, it's really awesome at first. Then you realise nothing got done and you spent $1 on API calls in the space of 30 minutes :D It's fun, but it's nothing groundbreaking.