Everything posted by btagomes

  1. I don't know if this is the exact same issue, but since I upgraded to 6.11 my Windows 11 VM started to have hiccups: it would freeze for a second from time to time, and then I started to have networking issues and could not get into my network shares. After several experiments I noticed that, in the Windows control panel, my (virtual) network adapters that are connected to some VLANs keep flipping on/off. They are connected to different subnets and have different MAC addresses (see the sketch below). If I leave one virtual network adapter connected and disable all the others, the VM becomes stable again. I had this setup running without any issues before; after upgrading, the issues started, and even downgrading again did not bring things back to how they were working before.
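     For context, this is roughly what that multi-VLAN setup looks like in the VM's XML. It is only a sketch: the bridge names (br0.10, br0.20) and the MAC addresses are placeholders for whatever your own Unraid VLAN bridges use.

        <!-- first virtual adapter, on the VLAN 10 bridge (placeholder name) -->
        <interface type='bridge'>
          <mac address='52:54:00:aa:bb:01'/>
          <source bridge='br0.10'/>
          <model type='virtio'/>
        </interface>
        <!-- second virtual adapter, on the VLAN 20 bridge, with its own MAC -->
        <interface type='bridge'>
          <mac address='52:54:00:aa:bb:02'/>
          <source bridge='br0.20'/>
          <model type='virtio'/>
        </interface>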
  2. @Squid So is what I asked before not possible?
  3. Dear Squid, thanks for your innumerable contributions to the community; your work is really important to everyone, me included. I once tried to ask this same question in the forum, but I guess no one understood the intention and I could not get an answer. My fourth Unraid box is a very old Core 2 Quad desktop with a proprietary motherboard that does not support VT-d, so I cannot pass through any GPU. That got me thinking about an alternative for that specific case: computers without VT-d, so I could still have a VM on the display. I know I can always boot into the GUI, navigate to the VM page and open a noVNC connection, but... what if I change the boot-GUI welcome page to go directly to the Guacamole login page? The goal is to make the VM available without the need to log in to the server GUI itself (imagine I let someone use this machine just to use Windows). I know I would need to install the GPU driver. That way, by changing the boot-GUI welcome page, I could sign in directly on the Guacamole page, or even go straight into the VM, and I would get the benefits of using RDP instead of noVNC (see the sketch below). Is that possible? Thanks
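     A minimal sketch of the idea, assuming the boot-GUI browser can simply be pointed at a replacement start page: the page itself only needs to be a redirect. The Guacamole URL below is a placeholder, and where exactly Unraid would pick such a page up is exactly the open question.

        <!DOCTYPE html>
        <html>
          <head>
            <!-- send the GUI-mode browser straight to the Guacamole login page -->
            <meta http-equiv="refresh" content="0; url=http://tower:8080/guacamole/">
          </head>
          <body>Redirecting to Guacamole...</body>
        </html>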
  4. Hi, I've had 2 x 4 TB Corsair MP510 NVMe SSDs as cache drives, formatted as BTRFS, but they started to throw a lot of errors, and since then I cannot format them as BTRFS or XFS; they no longer work as cache drives. Mine are installed on an MSI AERO 4x M.2 PCIe card together with 2 Samsung 950 NVMe SSDs; the Samsungs are working OK, so the problem is not in the AERO adapter. I think it must be something related to the NVMe controller itself, but I haven't had enough time to investigate.
  5. I faced the same problem... I racked my brain trying to understand what was wrong, and all the information available online was too generic for this Unraid installation. Finally, today I got it; look here: https://github.com/Tooa/paperless-ng/commit/2dcacaee147abfdccdca4e20262bae749c60be97 With these changes I have my paperless-ng processing office documents!
  6. I faced the same problem... I racked my brain trying to understand what was wrong, and all the information available online was too generic for this Unraid installation. Finally, today I got it; look here: https://github.com/Tooa/paperless-ng/commit/2dcacaee147abfdccdca4e20262bae749c60be97 With these changes I have my paperless-ng processing office documents! (See the sketch below.)
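     In case the link ever dies: my understanding is that the change comes down to enabling the Tika/Gotenberg parser that paperless-ng already supports, which on an Unraid Docker template means adding environment variables along these lines. This is a hedged sketch; it assumes separate Tika and Gotenberg containers are running, and the hostnames and ports are only examples.

        # extra environment variables for the paperless-ng container
        PAPERLESS_TIKA_ENABLED=1
        PAPERLESS_TIKA_ENDPOINT=http://tika:9998
        PAPERLESS_TIKA_GOTENBERG_ENDPOINT=http://gotenberg:3000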
  7. I've been following Maxim Devaev's work for quite some time and I've been using a couple of Pi-KVMs (DIY versions) since he made the information available. They have been working 24/7 without any hiccup, and Maxim has been really helpful with every question I had; I even made an interface to use Pi-KVM with a TESmart USB+HDMI switch and it worked very well. Pi-KVM is not just a project anymore; it's a product with lots of testing, debugging and a reasonable track record. There is a very active community that grows day by day, where new ideas appear and become part of the Pi-KVM project as Maxim continues to add new features. Right now it is on Kickstarter, so if any of you homelabbers want a very powerful gadget, I recommend backing this one: PiKVM v3 HAT, via @Kickstarter https://www.kickstarter.com/projects/mdevaev/pikvm-v3-hat?ref=android_project_share
  8. Hi guys, for the last month and a half I tried to use this plugin to "pass through" the GPU to one of my VMs. It was a nightmare because I was getting the damn write error every day, forcing me to reboot the entire server to have the VMs available again. My little server is a Partaker B18 mini PC with an Intel i7-8850H CPU and Intel UHD 630 GPU. I tried everything: every combination of VM BIOS, of GVT configuration, of server BIOS... and no results. But then something came to mind: I remembered that I had forgotten to install the Tips and Tweaks plugin. So I went ahead, installed it, and changed "vm.dirty_background_ratio" to 1% and "vm.dirty_ratio" to 2%, because I remembered that we should use those values if we get any out-of-memory errors (see the sketch below). The f*#@ VM has been working for 2 weeks now with no more write errors, even under load. I don't know if it is a coincidence, and I really don't know if this is the solution, but I believe anyone with the same issue would love a new idea to try for an error that was driving me mad. Hope it helps someone!
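     For anyone who wants to test the same values without the plugin, this is the equivalent change from a terminal. A minimal sketch: the settings do not survive a reboot unless you re-apply them, which is what the Tips and Tweaks plugin does for you.

        # start background writeback once dirty pages reach 1% of RAM
        sysctl -w vm.dirty_background_ratio=1
        # throttle writers once dirty pages reach 2% of RAM
        sysctl -w vm.dirty_ratio=2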
  9. Mine is stable too, same conditions: a couple of VMs, some with GPU passthrough, some with VNC, and some twenty Dockers running. The only issue now is the noVNC connection not working; all the other issues are gone and not a single crash so far.
  10. The part about not being able to update some Dockers, which show "not available" for the version.
  11. Open a terminal window, type "htop", and have a look to see if your CPUs are really at 100%.
  12. Have you checked with htop? I usually have that 100% indication jumping around cores, but when I check with htop it turns out to be a false reading...
  13. That happened to me before, and I solved it by recreating the "proxynet" network (created from SpaceInvader One's video instructions for the reverse proxy); see the sketch below.
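     For reference, recreating that user-defined bridge network is just the following. A quick sketch, assuming the containers attached to it are stopped first and re-attached to the network afterwards.

        # remove the old custom Docker network and create it again
        docker network rm proxynet
        docker network create proxynet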
  14. I don't know; I have no issues with HDD/SSD devices. I have SATA and NVMe SSDs and they are working great, same with NVMe passthrough, and the array disks are working fine too. The only issues I've had are: DVD-ROM passthrough to a guest VM; the npt and nrip-save features in the guest CPU definition; and the noVNC connection to guests (working fine over VNC through Guacamole). So far no more issues; my log files are clean...
  15. OK, my issue with devmapper and /dev/sr0 is solved. I changed the guest XML: instead of passing the CD-ROM through as a disk, I passed it through as a SCSI device and now it is working (see the sketch below). Before this upgrade to beta29, if I used the SCSI method to pass the CD-ROM through I could only read the first disc; if I changed discs, the guest would not read the second one. On beta29 this is not happening. Now I'll try to find a way around the VNC issue...
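     This is roughly what the SCSI entry looks like in the guest XML. A hedged sketch: the adapter name and the bus/target/unit numbers are placeholders; check where /dev/sr0 lives with "cat /proc/scsi/scsi" and adjust them to match.

        <hostdev mode='subsystem' type='scsi'>
          <source>
            <!-- placeholder: the host SCSI adapter and address of the DVD drive -->
            <adapter name='scsi_host1'/>
            <address bus='0' target='0' unit='0'/>
          </source>
          <readonly/>
          <!-- where the drive appears inside the guest -->
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </hostdev>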
  16. OK... after a bucket of coffee I tried adding these lines to the CPU config in the guest XML (full context in the sketch below): <feature policy='require' name='npt'/> <feature policy='require' name='nrip-save'/> All VMs booted except the one that had a DVD-ROM passed through; that one I had to disable until I find a workaround.
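     For context, the two feature lines go inside the guest's <cpu> block, something like this. A minimal sketch: the mode and topology shown are only examples, keep whatever your VM already uses.

        <cpu mode='host-passthrough' check='none'>
          <topology sockets='1' cores='4' threads='2'/>
          <!-- the two features that beta29 started complaining about -->
          <feature policy='require' name='npt'/>
          <feature policy='require' name='nrip-save'/>
        </cpu>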
  17. Hi all! I tried to upgrade to 6.9-beta29, but after the upgrade my VMs could not boot anymore. Every time I tried to boot one I got the message: "Operation failed: guest cpu doesn't match specification: extra features: npt,nrip-save" All my VMs were working fine before the upgrade. I'm running Unraid on a Threadripper 1950X CPU. Reverted back to 6.9-beta25 until further developments. Thanks!
  18. I forgot to mention, the idea is to dual-purpose that server, as a desktop and a Docker server.
  19. Maybe that's too confusing for some of the users (my wife, my sons, etc.). The goal is to put the VM in fullscreen directly from boot (either VNC or RDP). I mentioned RDP over Guacamole because, in my experience, that remote connection gave better results than VNC.
  20. Hi, I'm on my third Unraid build and day by day I'm more and more aware of the potential of Unraid... My third server is made from old parts, and the motherboard/CPU cannot cope with VT-d. That is sad, as I cannot use a display for those VMs... Then an idea came to mind: the Unraid GUI is like a webpage, right? What about a mod to connect directly to the Guacamole home page instead of the GUI? Even better if it could connect directly to a VM in fullscreen (RDP connection over Guacamole)... This could open doors to laptop and old-hardware use of Unraid...
  21. I'm still saying there should be some colour differentiation in the CPU pinning menu (based on the NUMA architecture obtained from numactl --hardware), so anyone could understand which CPUs are closer to each memory channel. I know it won't be easy to implement, but it would be a wonderful feature in the Threadripper era. I'm still battling the memory spills caused by data being transferred; in my case the memory spills create interference noise in my onboard sound card and cause micro-stutters on my GPU. It only happens after memory spills to the other node, and if I drop caches regularly it won't happen, but that would defeat the purpose of the cache and could put my cached data at risk (see the sketch below)...
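     For anyone who wants to see that layout on their own box, and the cache-drop workaround I mentioned, a small sketch (dropping caches throws away the page cache, so do it sparingly):

        # show which CPUs and how much free memory belong to each NUMA node
        numactl --hardware
        # flush dirty data, then drop the page cache so the RAM returns to the local node
        sync
        echo 3 > /proc/sys/vm/drop_caches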
  22. To better understand the particularities of NUMA assignment, this diagram helps a lot... Every time data travels through the Infinity Fabric it picks up latency. Inter-CCX latency is tolerable; inter-die latency is bad. For the 1950X: if a VM gets 4 cores from 1 CCX, it gets 1 memory channel and the best latency you can get from Threadripper (69 ns). If a VM gets 8 cores from 2 CCXs in the same die, it gets 2 memory channels, which means a bit more latency (73 ns) but double the memory bandwidth. If a VM gets more than 8 cores, or has cores from 2 different dies, you quadruple the memory bandwidth but the latency gets terrible (130 ns). If you plan to use cores from 2 different dies in the same VM, you will get better results if you set your memory mode to AUTO in the BIOS; that way the system manages the memory in use and you get bittersweet latency (100 ns). As for the "interleave", "strict" and "preferred" modes (see the sketch below): if you select "strict", your VM will only boot after all the memory gets allocated (if your RAM is full, it must be flushed before the VM starts), and I'm almost sure that if it cannot get enough memory it won't boot. The "interleave" and "preferred" modes are almost the same (interleave uses round-robin): they try to get memory from the designated node, but if they cannot, they spill to the other node, increasing latency. Sure, strict mode is nicer, but then sometimes the VM cannot boot... When you use strict mode, sometimes the machines get paused and won't resume (as they cannot access all the allocated memory).
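     Those memory modes are set per-VM in the guest XML, in the <numatune> block — a minimal sketch (nodeset='0' is just an example; pick the node that matches the cores you pinned):

        <numatune>
          <!-- mode can be 'strict', 'interleave' or 'preferred' -->
          <memory mode='strict' nodeset='0'/>
        </numatune>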
  23. Well... it depends on how you set your NUMA mode in the BIOS... You can select none/auto/die/socket/channel... I get the best results when I set it to channel, which means each CPU will want to talk to its respective memory channel before talking to the other channels. The 1950X has 16 cores (32 threads) per socket, 8 cores (16 threads) per die, and 4 cores (8 threads) per memory channel... If you set NUMA to "die", the 2 memory channels from the same die are handled as one channel... Latency will rise as data is transferred through the Infinity Fabric to the adjacent memory channel; still, this latency won't be as bad as having the cores of one die talking to the memory of the other die...
  24. Hi all, I'm a newbie in the Linux world; my first contact with Linux was with Unraid... Looking for a virtualization platform, I assembled a server with a Threadripper 1950X, 4x16 GB RAM, a GeForce GTX 1080 for the main VM (with screen, keyboard and mouse) and two GeForce GTX 1050s for secondary VMs with remote access (one is passed through to the same VM all the time, and the other is frequently passed through to whichever other VM I want to use). I have 2 NVMe SSDs for cache and 3 HDDs for the array; the main VM runs from a passed-through NVMe SSD while the other VMs run from another (unassigned) NVMe SSD.
     After playing around for some days, tweaking my Unraid while learning about Linux architecture, virtualization, networking and NUMA, my platform became more and more stable; every issue was sorted out one by one, and today I can say something about Threadripper and NUMA. I tried it every possible way and failed several times... then I started to understand that my RAM allocations were spilling into the wrong node, causing my latency to spike. Today I have all my Dockers waiting to start AFTER all the VMs have started, and that made a huge difference, even with CPUs 4-31 isolated.
     The memory mode='strict' is tricky because it only works right if you match the right cores with the respective memory channel; each NUMA node has 2 memory channels, and each channel matches 4 cores (8 threads)... For that reason, in 'strict' mode my VM performance was only right if I set it up to use 4 cores with <16 GB RAM or 8 cores with <32 GB RAM; if those RAM values were exceeded, the latency would spike. The GPU placement only matters if all the cores of your VM are located on the wrong NUMA node... If the VM has cores from 2 memory channels in the same node, you get double the memory bandwidth with almost the same latency as a single-channel placement, as long as they stay in the same node. So... for my main VM I use 8 cores and 16 GB RAM in interleave mode; I get good latency and good memory bandwidth. For the secondary VMs I use 4 cores and 8 GB RAM in strict mode; the latency is even better, but the memory bandwidth is half, as expected.
     I have my server working with S3 sleep (now that I've tweaked it a bit it works fine), waking from keyboard/mouse or WOL, but the Dockers need to be stopped before sleep so that, when the server wakes up, the sleep plugin makes them wait to start and the VMs get plenty of memory for their needs. Another thing I noticed: every time I transfer large amounts of data between network and cache, or cache and array, I get NUMA spills and the available RAM on each node gets crippled. That makes the latency spike too. For that reason I tweaked the mover script to drop caches after the move process ends; that way the RAM is freed again and the VMs can use it. Another weird event is that every time the server wakes from S3 sleep the CPU clock goes nuts, which causes bad performance in the VMs too. To solve that problem I added the "echo 1 > /sys/module/processor/parameters/ignore_ppc ; " command to the S3 post-run script (both tweaks are sketched below).
     Forgive my English (I'm from Portugal) and forgive me if I said something inaccurate; I just tried to share my small experience. I'm loving Unraid and I've already assembled a secondary backup server (old parts) with Unraid too. I would like to recommend an upgrade to the CPU pinning page: add different colours for the CPUs of each memory channel; that way it would be more intuitive to set up NUMA nodes correctly. My best regards, Bruno Gomes
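     The two little tweaks from the post above, roughly as I run them — a sketch: the first part is what I append after the mover finishes, the second goes in the S3 post-run script.

        # after the mover finishes: flush and drop the page cache so the freed RAM
        # goes back to the VMs instead of sitting spilled on the wrong NUMA node
        sync
        echo 3 > /proc/sys/vm/drop_caches

        # S3 post-run: stop the CPU clock from staying clamped after waking from sleep
        echo 1 > /sys/module/processor/parameters/ignore_ppc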