casperse

Members
  • Posts: 800
  • Joined
  • Last visited

Converted

  • Gender: Male


casperse's Achievements

Collaborator (7/14)

Reputation: 35
Community Answers: 3

Community Answers

  1. Thanks, this post really helped me! Disabling it while transferring between servers stopped this. I am using the guide to set up Luckybackup for moving TBs of data.
  2. Okay, I went back to the PC mappings and found that it used a Windows user account matching an old Unraid one (from the old local VM PC setup), but after using this instead of the administrator account it connected. I also made sure that the two local root shares were renamed. Since server p2 is a clone of the old Unraid server, it had the same old accounts. I did find that the root share pool-shares was NOT listed on either server; I just had to type it manually in the UI of UAD. Looking at the logs I get this now: But it's working: I haven't found the difference between the two accounts; both have access to the same directories. The only difference is that the old one was created when I started using Unraid, and I can't be sure I didn't do something to this account such a long time ago. Thanks again for your help, this was a strange one 🙂
  3. Not sure I follow you? Yes, the share is enabled in the settings of the mount, and root access is possible from a Windows PC. Is there some other place I should check?
  4. You said you could map the rootshare in Unraid? But the rootshare path is never listed in UAD, only all the normal shares. I can confirm that UAD will list all the individual shares one by one, but what eludes me is the rootshare mapping between servers inside Unraid, which isn't listed as an option for a mapped share! UAD will never list the root access share, will it? I have created a separate user (administrator) for every shared folder, like you suggested, but I don't know how to enable this new user's access to the rootshare mapping? (Some command in a terminal? See the sketch below.) Again, thanks so much for helping out; having two servers is a lot of work when you can't move things between them more easily 🙂
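      A minimal sketch of one terminal-based way to do this, assuming the rootshare lives in Unraid's Samba extras file and that the share name (Shares-Pools) and user (administrator) match this thread; both are assumptions:

      # hypothetical rootshare definition appended to the Samba extras file
      # (the same file Settings > SMB > "Samba extra configuration" edits)
      cat >> /boot/config/smb-extra.conf <<'EOF'
      [Shares-Pools]
         path = /mnt/user
         browseable = yes
         valid users = administrator
         write list = administrator
      EOF
      # tell the running Samba to pick up the change
      smbcontrol all reload-config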
  5. So the only difference then is that if I install a new docker in the future and forget to change the path from /mnt/user/appdata, the exclusive share option will make sure it's running without FUSE?
  6. Hi all, I have read the nice write-up on the new "exclusive shares" feature here: https://reddthat.com/post/224445 I have sometimes forgotten to change the path when installing a new docker, so I would actually like to set up an exclusive share for my appdata cache. (I have plenty of spare cache space, and my cache pool is mirrored and set up to snapshot with ZFS to the array.) What I am missing is: do I have to change all my dockers back from /mnt/[poolname]/appdata/... to /mnt/user/appdata/...? And is there any other difference between enabling exclusive shares and having changed all the paths to /mnt/[poolname]/appdata/? I hope someone can answer my question; I didn't really get this from reading around the forum.
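      A quick way to confirm whether a share is actually exclusive (i.e. bypassing FUSE) is to check whether its /mnt/user entry has become a symlink to the pool; a minimal sketch, assuming Unraid 6.12+ and a pool named "cache":

      # an exclusive share shows up as a symlink straight to the pool
      ls -ld /mnt/user/appdata
      # exclusive: lrwxrwxrwx ... /mnt/user/appdata -> /mnt/cache/appdata
      # FUSE:      drwxrwxrwx ... /mnt/user/appdata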
  7. I might be doing it wrong, then? But doing the same on Unraid does not work. Do I need the full path? //192.168.0.14/mnt/rootshare/Shares-Pools I get an error when trying to mount the share. Diagnostics attached. diagnostics-20240326-1245.zip
  8. Thanks! I did this and all my shares are listed. But I can't map the SMB rootshare; I still get an error when trying the path Servername\Shares-Pools, and I don't want to map all 22 shares one by one. Is it only from Windows SMB that you can map the rootshare, and not between Unraid servers? Maybe there is some Linux magic you can do to enable a mount between them (see the sketch below)? I am also experimenting with 2 x 10Gbit LAN cards with a direct connection between eth2 and eth2, but for some reason I am also running into errors when trying this.
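      For the "Linux magic" route, a manual CIFS mount from one server's terminal is one option; a minimal sketch, where the IP, share name, credentials, and mount point are assumptions taken from this thread:

      # mount the other server's rootshare manually over SMB (run on server 1)
      mkdir -p /mnt/addons/rootshare
      mount -t cifs //192.168.11.14/Shares-Pools /mnt/addons/rootshare \
        -o username=administrator,password=SECRET,vers=3.0
      # unmount again with: umount /mnt/addons/rootshare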
  9. Hi, I really need some input on how to accomplish this using UAD. I have already enabled a rootshare on each of my Unraid servers, and I really want a rootshare mapped between my two Unraid servers in UAD (so far I keep getting errors). I would also like this to utilize the direct 10Gb LAN connection between them, 192.168.11.6 and 192.168.11.14, if possible? SMB or NFS? So far I haven't been able to accomplish this using UAD; do I need some Linux terminal commands to accomplish this? I am planning to use the "luckybackup" docker to move large amounts of data between them, but having a rootshare mount between them would be really helpful. UPDATE: Mapping the two rootshares on Windows works! But trying to mount an SMB rootshare between the Unraid servers does NOT work? Same SMB path \\SERVERNAME\Shares-Pools or \\IP\Shares-Pools Unraid errors
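      Before blaming UAD, it may be worth verifying that SMB is reachable at all over the direct link; a minimal sketch using the IPs from the post (the user name is an assumption):

      # run on 192.168.11.6: is the other end reachable on the direct link?
      ping -c 3 192.168.11.14
      # can we enumerate its SMB shares? the rootshare should appear here
      smbclient -L //192.168.11.14 -U administrator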
  10. So my troubleshooting has located my problem. The BIOS is always set to use the internal iGPU (correct), and I get all the startup output during boot on my monitor, but when it should start the GUI I get a prompt in the upper left corner. Removing my Nvidia card and placing my HBA controller in the first PCIe slot WORKED (removing all other cards), and after some time I finally got the GUI on my monitor. JUHU! I then tried placing my Nvidia GPU (NVIDIA GeForce RTX 3060) in the third PCIe slot (8x), and after boot I am back at the prompt with no UI? Something breaks during boot (the GPU has no output to my monitor). Any suggestions on what I should do next? UPDATE: I found a new BIOS from 2024 (my board is from 2019). I updated the BIOS and set everything up from scratch. Same thing: cursor in the top left corner of the monitor, no UI.
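      One thing that might narrow this down is checking (over SSH, once booted) which card the kernel actually bound a display driver to; a minimal sketch:

      # list GPUs and the kernel driver attached to each
      lspci -k | grep -iA3 'vga\|3d'
      # if the RTX 3060 (rather than the iGPU/i915) owns the framebuffer,
      # the GUI may be going to the Nvidia card's outputs, not the monitor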
  11. SOLVED: I am an IDIOT.... Checking the network settings on the Ubuntu VM: sudo nano /etc/netplan/00-installer-config.yaml Somehow the gateway was wrong? I also installed the guest service:
      Step 1: Log in using SSH. You must be logged in via SSH as a sudo or root user. Please read this article for instructions if you don't know how to connect.
      Step 2: Install the QEMU guest agent: apt update && apt -y install qemu-guest-agent
      Step 3: Enable and start the QEMU guest agent: systemctl enable qemu-guest-agent then systemctl start qemu-guest-agent
      Step 4: Verify that the QEMU guest agent is running: systemctl status qemu-guest-agent
  12. Hi all, all my VMs work except the AMP VM for my gaming server? I can see it doesn't get any IP? (This configuration is the same as my Windows VM, and that one worked perfectly after moving?) Configuration XML:
      <?xml version='1.0' encoding='UTF-8'?>
      <domain type='kvm'>
        <name>AMP_Game_server</name>
        <uuid>106257ad-bf64-1305-df79-880b565808af</uuid>
        <description>Ubuntu server 20.04LTS</description>
        <metadata>
          <vmtemplate xmlns="unraid" name="Ubuntu" icon="/mnt/user/domains/AMP/amp.png" os="ubuntu"/>
        </metadata>
        <memory unit='KiB'>16777216</memory>
        <currentMemory unit='KiB'>16777216</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>10</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='12'/>
          <vcpupin vcpu='1' cpuset='13'/>
          <vcpupin vcpu='2' cpuset='14'/>
          <vcpupin vcpu='3' cpuset='15'/>
          <vcpupin vcpu='4' cpuset='16'/>
          <vcpupin vcpu='5' cpuset='17'/>
          <vcpupin vcpu='6' cpuset='18'/>
          <vcpupin vcpu='7' cpuset='19'/>
          <vcpupin vcpu='8' cpuset='20'/>
          <vcpupin vcpu='9' cpuset='21'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/106257ad-bf64-1305-df79-880b565808af_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
        </features>
        <cpu mode='host-passthrough' check='none' migratable='on'>
          <topology sockets='1' dies='1' cores='5' threads='2'/>
          <cache mode='passthrough'/>
        </cpu>
        <clock offset='utc'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/user/domains/AMP/vdisk1.img'/>
            <target dev='hdc' bus='virtio'/>
            <boot order='1'/>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </disk>
          <controller type='usb' index='0' model='ich9-ehci1'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci1'>
            <master startport='0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci2'>
            <master startport='2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci3'>
            <master startport='4'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='1' port='0x10'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='2' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='2' port='0x11'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
          </controller>
          <controller type='pci' index='3' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='3' port='0x12'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
          </controller>
          <controller type='pci' index='4' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='4' port='0x13'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
          </controller>
          <controller type='pci' index='5' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='5' port='0x14'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </controller>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:84:10:41'/>
            <source bridge='br0'/>
            <model type='virtio'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='tablet' bus='usb'>
            <address type='usb' bus='0' port='1'/>
          </input>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='da'>
            <listen type='address' address='0.0.0.0'/>
          </graphics>
          <audio id='1' type='none'/>
          <video>
            <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
          </video>
          <memballoon model='virtio'>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </memballoon>
        </devices>
      </domain>
      I can connect to the webserver UI? But it has no internet connection, so strange.
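      Since the NIC is a virtio interface bridged on br0, a couple of host-side checks can show whether DHCP traffic is leaving the VM at all; a minimal sketch (interface names are assumptions):

      # on the Unraid host: is the bridge up, and does it carry the VM's tap?
      ip link show br0
      ip link show master br0
      # watch for the VM's DHCP requests while it boots
      tcpdump -ni br0 port 67 or port 68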
  13. This is actually strange: I cloned my Unraid flash for a second server (new license). On the new server it now works (with the settings above); I get the GUI output on the iGPU HDMI port (MB). BUT! The same cloned USB on my old server gives me a prompt with a blinking "_" in the top left corner of the monitor? On this MB it's a DisplayPort, again with iGPU output, and I get the full boot on the screen right up to the end? Both servers have the iGPU as the primary and only output!
  14. My problem is that it occurs every 10-12 hours, so with the amount of dockers I have, this would be very hard to do. Update: So this could be caused by a single docker with a memory limit that breaks it? Is there any way to identify the docker from the error message?
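      The kernel's OOM messages usually name the cgroup of the container that hit its limit, which can be matched back to a docker name; a minimal sketch:

      # pull the most recent OOM events from the kernel log
      dmesg | grep -i 'oom\|killed process' | tail
      # the cgroup path in those lines contains the full container ID;
      # match it to a container name like this:
      docker ps --no-trunc --format '{{.ID}}  {{.Names}}'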
  15. Please, can anyone help me? I have installed the swapfile plugin, and I have set a memory limit of 1G on all my dockers (if all dockers obey the limit, then I shouldn't see any more errors?). I have tried stopping all dockers, and only some of them, but I still get the memory error? Is there any way to find out what is causing this? Would syslog be able to find out? This happened again today: the systems are still running, but the error results in Unraid killing random processes?
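      For reference, a per-container limit on Unraid is typically set via the Extra Parameters field of the container template; a minimal sketch of the docker flags involved (the 1G value is the one mentioned above):

      # cap the container at 1 GiB of RAM; --memory-swap caps RAM + swap together
      --memory=1g --memory-swap=1g
      # the equivalent on a plain docker command line would be, e.g.:
      # docker run --memory=1g --memory-swap=1g <image>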