je82

Everything posted by je82

  1. Thanks for the quick reply, here's the diag file, cheers! nas-diagnostics-20220922-0905.zip
  2. I have this error message every time I boot my Unraid server; how can I fix it? (I have no issues whatsoever with the Nvidia driver; probably some leftover of an old uninstall that didn't complete?)
  3. Had to "umount -l /dev/loop2" to get the array shut down (sketch below). Not sure what happened here; maybe it has nothing to do with the Appdata cache name and more to do with me having some Dockers set to auto-start and shutting down the array before all the auto-starts had been processed, making the Docker process lock up?
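     For future reference, a minimal sketch of how one might find what is keeping the cache busy before resorting to a lazy unmount (the /mnt/cache path is an assumption based on this setup; /dev/loop2 is the docker image loop device from the post):

        # list the processes holding the cache mount open
        fuser -vm /mnt/cache
        lsof +f -- /mnt/cache
        # last resort: lazy-unmount the docker image loop device, as above
        umount -l /dev/loop2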
  4. Nvm, it looks like the shares are displaying these Linux core files, not just the new cache pool, but when I look in /mnt/ only the cache is mounted. The array cannot be stopped though; it just keeps saying the cache is busy. Trying to avoid a parity check here, how can I shut down the array?
  5. Pretty sure I broke my Unraid: I named my new appdata cache pool "Appdata" and set the Appdata folder to "Prefer" on cache pool Appdata, and now it looks like I cannot stop the array. When I look at what's on the new cache disk, it looks like core Unraid Linux files are being written to the root of the disk. What do I do?
  6. https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html (search for split_lock_detect). I guess I'll try both when I have a chance to reboot and see which one works.
  7. Interesting; I read up on what this means: essentially an atomic operation that straddles two cache lines, which forces a bus lock that can stall other cores. Would adding split_lock_detect=off to syslinux.cfg as shown below accomplish this?
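     A sketch of where that parameter would go, assuming the stock Unraid boot entry in /boot/syslinux/syslinux.cfg (your entry may differ):

        label Unraid OS
          menu default
          kernel /bzimage
          # split_lock_detect=off is the added parameter; the rest is stock
          append split_lock_detect=off initrd=/bzroot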
  8. I have this too; how do you disable it? No need to have it in the logs if it isn't affecting the system. It started when I ran a Windows VM on Unraid Version: 6.10.3.
  9. Interesting findings. This is probably not very Unraid-related, but I enabled BitLocker, which should result in significantly lower IO performance, and the Q32T1 write performance doubled for whatever reason while the read halved; so it swapped the two around, but they are essentially the same. Q1T1 was a little lower, but not by much.
  10. I realize now that KVM cannot pass through Gen4 PCIe devices. Is this because Unraid is not updated with "QEMU 4.0.0", or maybe I missed something? EDIT: Never mind, the QEMU version is 6.2.0, so it's definitely something missing in my config. EDIT 2: The port is actually PCIe Gen3, I realize now; never mind.
  11. After KVM optimizations, we're getting somewhere. Gonna try some CPU pinning now (sketch below) and see if that helps at all.
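     A minimal sketch of CPU pinning in the VM's libvirt XML, assuming a 4-vCPU guest; the host core numbers are placeholders, not this machine's actual topology:

        <vcpu placement='static'>4</vcpu>
        <cputune>
          <!-- pin each vCPU to a dedicated host core; on a hyperthreaded
               CPU, keep sibling threads paired on the same guest core -->
          <vcpupin vcpu='0' cpuset='4'/>
          <vcpupin vcpu='1' cpuset='5'/>
          <vcpupin vcpu='2' cpuset='6'/>
          <vcpupin vcpu='3' cpuset='7'/>
        </cputune>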
  12. More trial and error: it seems that whenever I install the Hyper-V role on the guest VM, random read/write performance on the NVMe drive drops to less than half. Is this working as intended, or is KVM just terrible at nesting Hyper-V? Or perhaps some issue with current KVM builds? Anyone know anything? It seems unreasonably slow to be "working as intended". (Screenshots: without Hyper-V role installed, then with Hyper-V role installed.)
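     One thing worth checking before nesting Hyper-V is whether nested virtualization is enabled on the KVM host at all. A sketch, assuming an Intel CPU (the kvm_amd module has the same parameter on AMD):

        # Y or 1 means nested virtualization is enabled on the host
        cat /sys/module/kvm_intel/parameters/nested
        # the guest CPU must also expose VMX, e.g. via host-passthrough mode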
  13. Ouch, after installing the Hyper-V service on the VM and rebooting, the machine feels very unresponsive. There are definitely some very strange driver problems here. What are your experiences virtualizing anything other than Windows? Is it this bad with Linux too? It looks like my sequential reads/writes are bottoming out again as well. Not sure if the Hyper-V service installation caused this or if it just comes and goes.
  14. OK, I've now passed through the NVMe by binding it to vfio at boot (sketch below), and speeds seem better; the VM feels all-around "more rapid". Speeds are not even near NVMe Gen4 though, but this is not supposed to be a performance monster, just equal to or better than my old Threadripper bare-metal host, so I can shut that down and run everything on Unraid to save some power. If you have any tips on why I seem limited to around 3500 MB/s on this NVMe when it easily does 6000+ on bare metal: is the overhead really that large? Not that sequential performance matters much on this server; it won't serve large files, and random read/write matters more for the workload it will run. Any tips to further optimize performance are welcome, cheers!
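     For reference, a sketch of how a device gets bound to vfio-pci at boot on Unraid 6.9+, via /boot/config/vfio-pci.cfg (normally generated from Tools > System Devices). The real file is just the BIND line; the PCI address and vendor:device IDs here are placeholders, not this system's values:

        # /boot/config/vfio-pci.cfg (address and IDs are placeholders)
        BIND=0000:04:00.0|144d:a80a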
  15. OK, performance numbers are back. I simply did a manual setup and pointed the VM to install on /dev/nvme0n1, which I guess is what passing through means? The performance "feels better" but is still far from anything useful as a hypervisor; I still feel like it's running at maybe 35% of actual performance. What more can I try? Do I need to install some specific drivers in Windows to utilize the NVMe bandwidth? Right now I see 2 undetected PCI devices in Device Manager, and the disk is called "QEMU HARDDISK". I did not install balloon, vioserial or viostor. I will try to run some Windows updates and see if drivers are found and installed.
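     Worth noting: pointing the VM at /dev/nvme0n1 hands the drive over as a raw block device, not as PCIe passthrough. A sketch of what that looks like in the libvirt XML (the device path is from the post; the driver tuning attributes are common suggestions, not this VM's actual config):

        <disk type='block' device='disk'>
          <!-- raw block passthrough of the NVMe; cache/io settings are assumptions -->
          <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
          <source dev='/dev/nvme0n1'/>
          <target dev='vdb' bus='virtio'/>
        </disk>

     A "QEMU HARDDISK" model name usually means the disk is attached through emulated SATA/IDE; with bus='virtio', Windows needs the viostor driver before it can see the disk at all, and typically shows it as a Red Hat VirtIO device once installed.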
  16. Thanks, testing this now. Do you know if I should still install balloon, vioserial and viostor? My guess is no. I just installed the NIC driver for now, and we'll see how performance is.
  17. Just for reference, the disk on bare-metal Windows, then on the Unraid Windows VM (screenshots). Something is definitely deeply wrong here; a driver issue, perhaps?
  18. Testing out running VMs in Unraid. 1. The cache pool where all vmdata is hosted is set to cache "Prefer" and has 2 TB free (Samsung 980 Pro Gen4 NVMe). 2. Installed Windows Server 2019 using the following settings (screenshot). 3. Testing performance via Windows Remote Desktop, the machine feels extremely sluggish; trying to install another nested VM inside this VM, I can see in the Unraid GUI that it is writing to disk at around 60 MB/s, when this disk should be good for well above 5000 MB/s. Any performance tips, or what have I done wrong? Or is it just this bad? (See the sketch below for one common tuning knob.)
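     One common first knob for sluggish Windows guests under KVM is the Hyper-V enlightenment set in the libvirt XML; a sketch, assuming a Windows Server 2019 guest (Unraid's Windows templates may already enable some of these):

        <features>
          <acpi/>
          <hyperv>
            <!-- paravirtual hints that cut Windows guest overhead -->
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
          </hyperv>
        </features>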
  19. Are there any issues or best practices regarding running a Windows Server machine as a VM in Unraid, then installing the Hyper-V role on that server and virtualizing more via Hyper-V? I understand passthrough may not work, but other than that, should I expect issues? Has anyone tried this?
  20. Hmm, I guess all the software I was using is now outdated. I updated PuTTY and now I can connect with it, so I guess the problem wasn't with Unraid after all. Solved.
  21. Tested cleaning out PuTTY's saved SSH host keys at Computer\HKEY_USERS\SOFTWARE\SimonTatham\PuTTY\SshHostKeys; didn't help.
  22. So I'm no expert on SSH, but since I have saved my server's fingerprints and the server is now on new hardware, I guess this may be the issue? Doing ssh -vv 1.1.1.7 generates a message like this. The question is how to clear the local cache to re-establish a new session and forget the old stuff (see the sketch below), or am I totally in the wrong here, barking up the wrong tree?
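     A sketch of forgetting the stale host key on the OpenSSH client side, using the IP from the post (PuTTY keeps its own cache in the registry, as noted above):

        # drop the cached host key for this address from known_hosts,
        # then reconnect and accept the server's new fingerprint
        ssh-keygen -R 1.1.1.7
        ssh -vv 1.1.1.7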
  23. Tested removing /boot/config/ssh and rebooting to see if regenerating new keys would solve it, but no luck.
  24. I know this, but I like to use it for my LAN; I've been doing it for 7+ years and it's not a problem, and I don't use Cloudflare's DNS services anyway. Now, why SSH is not working is the question.