AppleJon

Members
  • Posts: 35

Everything posted by AppleJon

  1. Thank you, I didn't even know that was an option. Not sure how that happened, but it fixed it. Again, thank you!
  2. I've now added a second GPU with the same name and only see GPU 2?
  3. Hi, I had two GPUs inside and have now gone back to a single newer one that takes more slot space. After changing the checkbox for the new one, only an empty GPU 2 shows in the dashboard after a reboot. The new GPU is still selected in settings, but nothing displays in the dashboard now. Any ideas?
  4. Thank you both. I was actually looking for the reason I have one core at 100% on my EPYCD8-2T. My other machine with a ROMED8-2T does not seem to have that issue. Both systems had a card in slot 6. Moving my HBA off slot 6 of my EPYCD8-2T brought that one pegged core down from 100%. Weird, but it "fixed" the issue.
  5. Same for me. Does anyone have a quick explanation for noobs like us? Should we not be running these commands from the Unraid Docker shell?
  6. Thank you, the update got it working.
  7. Thanks for this plugin. I was wondering, is there a way to have the plugin display more than one GPU? Selecting more than one GPU does not seem to work for me.
  8. For others finding this thread, these links helped me figure things out. This one is more Proxmox-focused, but the section on vGPU, UUIDs, and mdev creation is helpful: https://wvthoog.nl/proxmox-7-vgpu/ And this one for the bottom section on mediated devices and getting them assigned to the VM: https://ubuntu.com/server/docs/gpu-virtualization-with-qemu-kvm
  9. Hi, has anyone gotten to the point where they can share how to set up the VM to actually use a vGPU device?
  10. Did you make any progress after this?
  11. Any info on these would be appreciated, e.g. if you want 4 or 8 instances of Windows, and then how do you pass it to the VM?

      ## Modify the following variables to suit your environment
      WIN="2b6976dd-8620-49de-8d8d-ae9ba47a50db" # do you need one per VM or just one?
      UBU="5fd6286d-06ac-4406-8b06-f26511c260d3"
      MDEVLIST="nvidia-65" # say you want mode 64, that is 4 instances with 2 GB of RAM each on a Tesla P4 with 8 GB

      Also, how do you configure the VM once the mdevs are created?

      Edit: From what I am reading, you need to manually edit the VM in XML view, probably best after installing Windows. The UUID is the UUID of the mdev you have created with the script. You need to create one mdev/UUID per VM, and all of them need to use the same mdev mode. So in my case I have modified the script to create 4 UUIDs that I use in my VMs. I am using a Tesla P4 and mode 64, but not using the unlock and not using the profile override, as the P4 is supported directly.

      <hostdev mode='subsystem' type='mdev' managed='yes' model='vfio-pci' display='off'>
        <source>
          <address uuid='2b6976dd-8620-49de-8d8d-ae9ba47a50d1'/>
        </source>
        <alias name='hostdev0'/>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </hostdev>
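      The "one mdev/UUID per VM, same mode for all" step above can be sketched as a small host-side script. This is only a hedged sketch: the PCI address 0000:04:00.0 and the nvidia-64 profile name are assumptions (check your own card's address and the contents of mdev_supported_types). With DRY_RUN=1 it only prints the sysfs writes instead of performing them:

      ```shell
      #!/bin/sh
      # Sketch: create one mediated device (mdev) per VM, all using the same profile.
      # ASSUMPTIONS: GPU at PCI address 0000:04:00.0, profile nvidia-64 (Tesla P4).
      GPU_ADDR="0000:04:00.0"
      PROFILE="nvidia-64"
      NUM_VMS=4
      DRY_RUN=1   # set to 0 on the real host to actually create the mdevs

      make_mdev() {
          # one fresh UUID per VM; this UUID goes into that VM's <hostdev> XML
          uuid="$(cat /proc/sys/kernel/random/uuid)"
          create="/sys/bus/pci/devices/$GPU_ADDR/mdev_supported_types/$PROFILE/create"
          if [ "$DRY_RUN" = 1 ]; then
              echo "would run: echo $uuid > $create" >&2
          else
              echo "$uuid" > "$create"
          fi
          echo "$uuid"
      }

      i=0
      while [ "$i" -lt "$NUM_VMS" ]; do
          make_mdev
          i=$((i + 1))
      done
      ```

      Each printed UUID would then be pasted into the uuid attribute of one VM's <hostdev> block, as in the XML above.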
  12. Can you share a bit more info on the versions used or other details? I seem to be getting errors.
  13. Sorry, I was slow to post this; I had issues merging my account for the forum. I was trying to get a new system online and was having a lot of issues with it. On some boots I could ping it once it got an address, but the GUI was not reachable. With the setup on a manual address I could always ping it but not get to the GUI, and some setups would sometimes not get an IP at all on DHCP. I tried a manual install and also the new flash creator; neither would work. I tried many different network cards (Intel, Broadcom), thinking it was a driver thing, and also swapped many USB drives. But when shutting down the system I noticed a message saying nginx was not running, and I remembered it would list all the cores in the dashboard. So after some sleep I just decided to limit the chip to one core per CCD and deactivate SMT, and that did the trick. Then I reactivated all the cores in the CCDs but left SMT off. As soon as I put SMT back on, I lose access to the GUI. Attached are the two diagnostic logs of the system: one showing the issue with SMT on, and one OK with SMT off in the BIOS. This is probably an edge case due to so many threads on the system? tower-diagnostics-20240217-1332.zip tower-diagnostics-20240217-1304.zip
  14. I wanted to post this yesterday but was having issues merging my account. How many cores and threads do you have on your machine? I have been scratching my head over a new install with the same issue: I changed network cards and reflashed multiple USB sticks both with the flasher and manually, to no avail. On shutting down I was seeing this error message about nginx not running. This morning I was thinking it could be a GUI thing crashing and started deactivating cores. I have a 7742 with 64 cores and 128 threads; deactivating SMT in the BIOS made things work, so you could try that. I was actually in the forum to see if someone else had seen similar issues.
  15. Strange: if your cache is a SATA SSD you seem to be able to benchmark it, but if it's NVMe you can't (via the benchmark-all option, that is; via the individual drive option it will not let you even with a SATA SSD).
  16. The array is back online and the web interface is running. Also, thanks for the help. I didn't figure auto-start would behave this way and block web access. Nice to know.
  17. OK, getting somewhere now. Running with -n gives me:

      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
              - scan filesystem freespace and inode maps...
      Metadata corruption detected at xfs_agf block 0x105fc7a89/0x200
      flfirst 118 in agf 3 too large (max = 118)
      agf 118 freelist blocks bad, skipping freelist scan
      agi unlinked bucket 20 is 1239857364 in ag 3 (inode=7682308308)
      sb_icount 173824, counted 173248
      sb_ifree 412, counted 654
      sb_fdblocks 363487512, counted 363879812
              - found root inode chunk
      Phase 3 - for each AG...
              - scan (but don't clear) agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 3
              - agno = 2
      No modify flag set, skipping phase 5
      Phase 6 - check inode connectivity...
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      disconnected inode 7682308308, would move to lost+found
      Phase 7 - verify link counts...
      would have reset inode 7682308308 nlinks from 0 to 1
      No modify flag set, skipping filesystem flush and exiting.

      And with -v:

      Phase 1 - find and verify superblock...
              - block cache size set to 9227624 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 1146861 tail block 1145904
      ERROR: The filesystem has valuable metadata changes in a log which needs to
      be replayed. Mount the filesystem to replay the log, and unmount it before
      re-running xfs_repair. If you are unable to mount the filesystem, then use
      the -L option to destroy the log and attempt a repair. Note that destroying
      the log may cause corruption -- please attempt a mount of the filesystem
      before doing this.
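      For anyone landing here with the same ERROR, the sequence the xfs_repair message itself asks for is: mount once so the kernel replays the dirty log, unmount cleanly, then run the repair; -L only as a last resort. A hedged sketch of that sequence (the /mnt/repair mountpoint is an assumption, and on your system the device name may differ from /dev/md1; with DRY_RUN=1 it only prints the commands):

      ```shell
      #!/bin/sh
      # Sketch of the log-replay-then-repair sequence for an XFS array device.
      # ASSUMPTIONS: device /dev/md1, scratch mountpoint /mnt/repair.
      DEV="/dev/md1"
      MNT="/mnt/repair"
      DRY_RUN=1   # set to 0 on the real host

      run() {
          if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
      }

      run mkdir -p "$MNT"
      run mount -t xfs "$DEV" "$MNT"   # mounting replays the dirty log
      run umount "$MNT"                # unmount cleanly before repairing
      run xfs_repair -v "$DEV"         # now runs against a clean log

      # Last resort only, if the mount itself fails (may lose recent metadata):
      # run xfs_repair -L "$DEV"
      ```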
  18. I can somewhat run it on other drives; see attached screen grab.
  19. Tried two other times and still get nothing multiple hours after I enter the command. Whether my array is in maintenance mode or not up, should I still be using the "md1" naming?
  20. xfs_repair -v /dev/md1 just leaves a blinking cursor after that. I can type stuff in the console, but Enter just jumps down a line; I can't really enter a new command after that, so all I can do is manually reboot the system. Could it be that the drive is not responding and locking up the machine?
  21. Thanks for pointing me in that direction. Running it now. It seems to be taking quite some time and I'm not getting any info as of yet. Will report later; I'll probably leave it to do its thing overnight.
  22. Running v6.2.4 with one VM (Windows) and two Docker containers (Emby + BitSync) that auto-run on startup.
  23. Hi, it seems like I lost access to shares and the web interface after a forced reboot of the machine. I have iKVM access to the command prompt. I can ping and see the machine on the network and browse the flash folder. Any idea where I should start?
  24. The latest Dynamix webGUI update seems to have fixed it for me. Dynamix webGui: 2016.07.27. It was probably not really an issue, as other tools were reporting the RAM as used for caching and such. Probably just a reporting thing in the GUI? Maybe due to me upgrading the RAM after the install?