Can0n

Members
  • Posts

    542
  • Joined

  • Last visited

Everything posted by Can0n

  1. I am using OVMF with i440fx-5.1. I have to use OVMF because I am running bare-metal Windows on an NVMe drive, which means no support for legacy BIOS/SeaBIOS. 1909 ran great: all games ran perfectly, as did the OS. Then I got the notification that 1909 was no longer supported, so no more security updates would be coming through. Weirdly, Cyberpunk 2077 ran great until January 2021, then dropped to a max of 13 fps at all resolutions and settings; as soon as I shut down and boot directly to Windows off the NVMe, CP2077 runs fine. I chalk that up to an update from them that breaks the game under virtualization.

     I tried to update to 21H1 over 1909 but it kept failing, so after a few days I decided to wipe and load Windows from scratch. I installed it from a USB stick directly to the NVMe, with the array drives pulled and the unraid USB removed, like a normal Windows install. I got all my programs and drivers set up, then grabbed the UUID so I could update my VM config with it. I shut down, reconnected the drives, booted back into unraid, updated the UUID, and started the VM. The CPUs stay pinned for a long time after startup with nothing going on; Chrome is bad, pinning all 8 threads to 100% as long as it's running. Testing CP2077 I get 1-2 fps; Destiny 2 gets 13 fps (previously 50-60 on max settings at 2560x1440@60Hz). I went back to bare-metal Windows and tried both games again and they run perfectly.

     So I suspect Windows 21H1 breaks something in virtualization, and QEMU and Hyper-V will need an update to run smoothly with it. In the meantime I am in the process of moving all my data to my main unraid server, then shutting this one down and booting straight to Windows. The mouse lag is horrible right now, and the mouse is USB on a passed-through USB card, so I can't really use it to look for work or edit my resume as I was using it for.
  2. I should add that Windows performance itself is sluggish and slow too; there are significant mouse delays on OS menus when I drag the mouse back and forth. Again, none of that happens when I shut down unraid and boot directly to the NVMe.
  3. Hi, I recently had to reinstall Windows on my NVMe drive to get the latest update, 21H1. Since then, all the games on my passed-through RTX 2060 only get around 18 frames per second, where I was previously getting over 50 on Windows 10 v1909. I have Windows installed directly to the NVMe and point the VM configuration at that drive. I have not changed the allocated resources, other than throwing a couple of extra cores at it after I discovered massive input lag and high CPU usage from pretty much every piece of software installed. If I shut down and boot directly to Windows, everything is fast and snappy, so it leads me to believe there's something wrong with QEMU or the hypervisor on the latest M$ has to offer.

     My VM configuration is pretty straightforward: core 0/HT reserved for unraid; i7-8700 with cores 1-5 and their HT siblings; 16 GB of the 48 GB RAM; passthrough of a PCIe USB 3 card and the RTX 2060; the VM boots directly to the NVMe. Running unraid 6.9.2. Attached are the diagnostics: freya-diagnostics-20210525-1120.zip
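     For reference, the pinning described above would look roughly like this in the VM's XML (a sketch only, assuming unraid's usual core/HT pairing on the i7-8700, where core 0's HT sibling is cpu 6):

     ```xml
     <vcpu placement='static'>10</vcpu>
     <cputune>
       <!-- core 0 and its HT sibling (cpu 6) are left for unraid -->
       <vcpupin vcpu='0' cpuset='1'/>
       <vcpupin vcpu='1' cpuset='7'/>
       <vcpupin vcpu='2' cpuset='2'/>
       <vcpupin vcpu='3' cpuset='8'/>
       <vcpupin vcpu='4' cpuset='3'/>
       <vcpupin vcpu='5' cpuset='9'/>
       <vcpupin vcpu='6' cpuset='4'/>
       <vcpupin vcpu='7' cpuset='10'/>
       <vcpupin vcpu='8' cpuset='5'/>
       <vcpupin vcpu='9' cpuset='11'/>
     </cputune>
     ```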
  4. You should really remove user0 and make it /mnt/user/. The user0 method has been deprecated and could be removed at any time, and then you will have problems; not to mention /mnt/user0/ does not touch your cache pool.
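     To illustrate (the share and file names here are hypothetical), the change is just the path prefix:

     ```shell
     #!/bin/sh
     # Deprecated form: /mnt/user0/ sees only the array, skipping the cache pool.
     OLD="/mnt/user0/backups/flash.zip"   # hypothetical path
     # Preferred form: /mnt/user/ is the full user share, cache included.
     echo "$OLD" | sed 's|^/mnt/user0/|/mnt/user/|'
     # → /mnt/user/backups/flash.zip
     ```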
  5. Hi @linuxserver.io, I have been running Nextcloud with MariaDB on one of my servers and wanted to move it to the other one for my reverse proxy. I used the cp -arv command to move the appdata folders of both Nextcloud and MariaDB (both unraid servers use the same root password and profiles). I copied all the Nextcloud /data from the old server to the new one with the same command, and mirrored the /mnt/user/nextcloud share setup as it was on the original server. ****Edit: found that I had missed updating the server IP for both Nextcloud and MariaDB in the config.php file; it's working now.
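     In case it helps anyone else, the fix amounts to pointing config.php at the new address. A minimal sketch (run against a scratch copy here; the IPs are hypothetical, and on your server you would point CONFIG at the real file, e.g. somewhere under the nextcloud appdata share):

     ```shell
     #!/bin/sh
     # Demo on a scratch copy of a config.php fragment:
     CONFIG=/tmp/config.php
     cat > "$CONFIG" <<'EOF'
       'dbhost' => '192.168.1.10',
       'overwrite.cli.url' => 'http://192.168.1.10/',
     EOF
     OLD_IP="192.168.1.10"   # hypothetical old server address
     NEW_IP="192.168.1.20"   # hypothetical new server address
     # Replace every reference in place, then confirm the change took:
     sed -i "s/${OLD_IP}/${NEW_IP}/g" "$CONFIG"
     grep -n "$NEW_IP" "$CONFIG"
     ```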
  6. The other day I could ping it but not SSH in or wake my KVM-attached screen. So far, stopping and restarting the array every two days has kept it stable. I also stopped Ombi the other day and the server ran 3 days with no issue (it was only getting 2 days before locking up). I moved Ombi to the primary server tonight; we'll see how the secondary server holds up with no dockers started.
  7. On only one of my 2 servers, I am seeing kcompactd0 eat up 100% CPU approximately every two days. It makes the webgui completely unresponsive and I cannot get diagnostics at all (I let it run 24 hours before I had to power cycle the server); the kill command doesn't work, and I'm unable to do much else. It only started about a week ago, and I'm not sure of the cause since I can't run diagnostics.
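     Since diagnostics can't be grabbed once it locks up, one rough idea (a sketch only, the log path is just an example) is to append a timestamped load snapshot every minute from cron, so the last entries before the hang are there to inspect afterwards:

     ```shell
     #!/bin/sh
     # Append a timestamped load-average snapshot; cron this every minute so
     # the last entries survive the hang. /tmp is an example path — on unraid,
     # somewhere under /boot would persist across the power cycle.
     LOG=/tmp/load-snapshots.log
     { date; cat /proc/loadavg; } >> "$LOG"
     tail -n 2 "$LOG"   # show the snapshot just written
     ```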
  8. Mine is doing this now as well, since updating PuTTY.
  9. Hi @SpaceInvaderOne, I used your video here and it works great, thank you very much. I am wondering if you could point me in the right direction: is it even possible to get the same method to work when two NVMe drives are being used for VMs? I have a friend who will be using a Threadripper and a couple of NVMe drives to combine his kids' PCs into one server. Thanks in advance.
  10. I have noticed that since installing 6.9 Beta 35 I am getting "server refused our key" on both of my servers; it was working on Beta 29. I compared the authorized_keys file on both machines, and they match the key files I have on my Linux, Mac, and PuTTY setups at home, and no changes to the go file with the chmods were made. Anyone have an idea what might be causing this?
  11. Just reverse all the steps you did to install it. As far as paying upfront, you will have to reach out to Pulseway themselves; I had to do the same when I went from five systems down to three, and they just ended up extending my time for the difference.
  12. Did you check that the files are owned by root? chown -R root:root /root/.ssh — ultimately, once I rebooted the server that wasn't working, it is working now.
  13. This is what I did on the first server and it worked; the second is still refusing the key in PuTTY. For some reason there was a symlink I had in /boot/config/ssh/ called root. I deleted that, and it left a dead symlink in /root/ called .ssh. So if you do an ls -l /root/ and you have a dead .ssh link there, you need to delete it. Then I did: mkdir /root/.ssh, chown root:root /root/.ssh, chmod 700 /root/.ssh, chmod 600 /root/.ssh/authorized_keys. This allows me to use the 4096-bit ppk file in PuTTY for one server, but the other explicitly rejects it; not sure why.
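     For anyone following along, here is the whole sequence in one place (a sketch; on unraid the home directory is /root, so SSH_DIR resolves to /root/.ssh — sshd with StrictModes will silently ignore an authorized_keys file that has the wrong owner or group/world-writable permissions):

     ```shell
     #!/bin/sh
     # Recreate ~/.ssh with the ownership and permissions sshd insists on.
     SSH_DIR="$HOME/.ssh"
     mkdir -p "$SSH_DIR"
     touch "$SSH_DIR/authorized_keys"   # paste your public key into this file
     chown "$(id -un):$(id -gn)" "$SSH_DIR" "$SSH_DIR/authorized_keys"
     chmod 700 "$SSH_DIR"
     chmod 600 "$SSH_DIR/authorized_keys"
     ls -ld "$SSH_DIR" "$SSH_DIR/authorized_keys"   # verify drwx------ / -rw-------
     ```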
  14. I got one of my two servers working; the other one, on the same 6.9 Beta 29, is not. I even copied the authorized_keys file over and re-did the permissions just in case. Ultimately, for the first one I used WinSCP to delete the .ssh folder in /root/, then made a new one on the command line and applied the chmods, and got it working. I have not tried the delete-and-recreate on the secondary yet; will try now.
  15. Hi all, I had this working some time ago, but my keys in Windows disappeared. I am trying to use the PuTTYgen tool to generate a new public key for my unraid server. Here is what I'm doing; please enlighten me if I messed up somewhere: I run PuTTYgen to make a key and save it to a .ppk and a public key file. I copy the public key into my authorized_keys file on unraid at /root/.ssh/authorized_keys. I set up PuTTY to use the key file under SSH/Auth. I open PuTTY and it won't autofill anything and still requires a user and password to connect. My Linux laptop on Fedora 32 works without any issues using the ssh-copy-id command, but PuTTY doesn't support ssh-copy-id. Any help would be very much appreciated.
  16. Try this, as I had to do it too: chmod 700 /root/.ssh, then chmod 600 /root/.ssh/authorized_keys
  17. Hi everyone, I need some help here. I was playing with the MAC field in my Windows VM settings (on unRAID) and accidentally erased the MAC and saved it (hoping the new NIC I installed would pass its MAC through to the VM). Instead, the network section is completely gone now, and I can't see how to add it back at all, since I cannot even find the section in XML form either. *******Edit: I found a workaround, which was creating a new VM with an identical config. It might be a good idea to add a GUI for adding network interfaces to the VM setup/config.
  18. "I don't [think] qs is configured correctly. And I don't really have a way to test unfortunately. A PR on GitHub would be nice if anyone wants to make the necessary changes and test for me. They should only be a couple of lines." If anyone wants to make the PR to fix the broken Intel Quick Sync hardware encode, I have both an i5 8600K (Coffee Lake) and an i7 7700K (Kaby Lake); Iron Lake (Gen 5) through Coffee Lake (Gen 9.5) are all supported as per https://trac.ffmpeg.org/wiki/Hardware/QuickSync, and the driver is on GitHub at https://github.com/intel/intel-vaapi-driver
  19. "I don't [think] qs is configured correctly. And I don't really have a way to test unfortunately. A PR on GitHub would be nice if anyone wants to make the necessary changes and test for me. They should only be a couple of lines." Hi @Josh.5, I have used both and just can't get Intel Quick Sync accelerated transcoding to work. I'd be happy to test; I've got another 90TB or so of media to convert to H.265. Once Intel works, I'll see if I can get logs and stuff and post them to GitHub for you.
  20. Hi @Josh.5, I am using the dev-hw-encoding repository version and the previous current master one, but I can't seem to get Unmanic to hardware transcode using Intel Quick Sync and the HEVC codec. I tried both hevc_nvenc and nvenc_hevc and they all fail; however, when I use libx265 it processes using my CPU. A quick note on the Plex hardware transcode directions vs. yours (in the container install): for Plex, under extra parameters it should read --device /dev/dri:/dev/dri, while yours mentions --device /dev/dri. I have tried both, and neither is showing anything in my intel-gpu-tools, plus I can see high CPU usage in the docker tab. I am only encoding video, not stripping subtitles or doing anything with audio.
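     For what it's worth, docker's own docs say the two forms behave the same: when --device is given only a host path, it is exposed at the same path inside the container, so --device /dev/dri:/dev/dri is just the explicit spelling of --device /dev/dri. A sketch of the template setting and a quick sanity check:

     ```shell
     # Unraid Docker template, "Extra Parameters" field — either form should work:
     --device /dev/dri:/dev/dri
     # Then, from a console inside the container, the render nodes should be visible:
     # ls -l /dev/dri   (expect entries like card0 and renderD128; names vary by system)
     ```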
  21. Your go file should have this so you don't need to set permissions on each reboot:

     modprobe i915
     chown -R nobody:users /dev/dri
     chmod -R 777 /dev/dri
  22. @limetech and @testdasi Machine version Q35-5.0 has been solid for my small Fedora 32 server VM using br0, with a few dockers also using br0: nearly 48 hours of uptime and no kernel tun messages in the logs at all.
  23. So far, using br0 and Q35-5.0 is not producing the kernel tun errors. I'll keep an eye out; I will be working all night, so if it's going to happen, it will happen in the next 12 hours or so.
  24. I'm actually using Q35-4.1; I'll try 5.0 and br0 to see if that helps.