xtrap225


Posts posted by xtrap225

  1. yes i understand, thanks. that might be a difficult feature to implement.

     

    the real feature request i would prefer is to have an option to not create shares from top-level folders automatically.

     

    it just clutters up the shares list with a bunch of 'shares' that aren't exports.

  2. as my subject states, 'mkdir folder in ssh is creating shares.' the strange thing, beyond the obvious strangeness of that, is that i created a folder directly on a cache drive, and the resulting automatically created share was an 'array'-only share.

     

    can anyone tell me, is this 'new' expected behaviour, and is there a place to change options for it?

     

    i don't want every folder i create to then create a share. i especially don't want automatically created shares to have 'incorrect' settings.

     

    i tried a search of the forums but i couldn't find anyone else dealing with this behaviour.

     

    cheers.

  3. i have my work pc on a dedicated 1TB NVMe drive passed through to a windows 11 vm, and i was having loads of issues getting my cpu usage down; it would always eventually creep up, and stay up. i tried the "Memory integrity" recommendation change even though i was probably not supposed to.

    what ended up working was disabling "Mitigations Settings" in Unraid Settings. so i just wanted to let folks know somewhere that it made a HUGE difference.

  4. do you need secure boot only, or bitlocker as well?

    i actually switched from using pass-through to this for bitlocker. but if you don't need bitlocker, then i would use either and just disable bitlocker.

    also, that is an example encryption secret from the webpage i got the xml info from; put your own in, obviously.

    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0'>
        <encryption secret='6dd3e4a5-1d76-44ce-961f-f119f5aad935'/>
        <active_pcr_banks>
          <sha256/>
        </active_pcr_banks>
      </backend>
    </tpm>

  5. sorry, i just wanted to come back and say all the moving of files etc is no longer necessary, as it is built into the latest q35 efi settings already on unraid. i tested the available procedure against what is just built into unraid, and i think you will find you can just change the setting to 'secure' and it will work.

     

    the thing that i needed to make it work for my work was to add one line to the <os>..</os> block. this took a ton of time, research, and many re-images.

    <smbios mode='host'/>

    which passes through block 0 and block 1 of the smbios, which is basically vendor, version, release, serial, manufacturer, etc.

     

    this allowed the Microsoft intune company portal to work even more properly, because it served my 'machine' certificate back to my certlm.msc>Personal>Certificates folder. this was a requirement to get my VPN working.

     

    also, i would decrypt my bitlocker, then run a company portal sync to get it to re-encrypt and no longer have to use the bitlocker recovery key.

     

    considering the above, this may NOT be necessary, but i also opted to pass through my tpm as follows

     

        <tpm model='tpm-tis'>
          <backend type='passthrough'>
            <device path='/dev/tpmrm0'/>
          </backend>
          <alias name='tpm0'/>
        </tpm>

  6. apparently it is working, because my cert came back; i just had to be more patient, which is not my strong suit when it comes to computers, especially since i still can't see the smbios sysinfo from the windows terminal.

     

    now i am doing hopefully my final decrypt and re-encrypt of bitlocker so i don't have to use my recovery key on each reboot.

     

    then i will just need to either get spice multi-monitor working properly, or get the AzureAD RDP bypass that works on my other bare-metal work machine (which i can't remember how i did) working on this vm.

     

    without multi-monitors what is the point :)

  7. according to the log it's working, but windows still won't show me the SerialNumber

     

    now, that is in host mode, which would be ideal, but i guess i can keep testing just in case the emulate mode works by fluke.

     

    i have a feeling it will work, but not as well. i really have a bad feeling i will get stuck here.

  8. okay that was very wrong.

     

    you cannot change the hyperv mode line, nor should you, i don't think.

     

    i changed the

     

        <smbios mode='sysinfo'/>

     

    to

     

        <smbios mode='host'/>

     

    and removed the entire <sysinfo type='smbios'> ... </sysinfo> block.

     

    if that fails, i will try again by setting the mode to 'emulate' and putting back the sysinfo lines with the bios and chassis info etc.

  9. found this and am going to try it.

     

    https://avdv.github.io/libvirt/formatdomain.html

     

    it's a bit clearer that i need to change

     

        <hyperv mode='custom'>

     

    from my xml to either 'host', to copy the 'real' info (sort of like a passthrough for smbios sysinfo),

     

    or 'emulate', to use the info i had described but not shown from the previous link, also shown in this new link.

     

    sorry for the lack of detail, but info like serials and whatnot is maybe a bit private.

     

    i will update this thread if i get it working, and as always any input is greatly appreciated.

  10. i am trying to get my work windows 11 image working as a vm.

     

    i have passed through my m.2 drive after imaging it as a bare metal machine.

    i am secure booting and passing through my /dev/tpmrm0 in tis mode. then i recover the bitlocker, then disable it in windows and allow the company policy to re-encrypt it.

     

    my intune company portal says i am compliant and is syncing ... however.

     

    i believe that, due to the lack of smbios serial information, my certlm>Personal>Certificates store is missing the machine certificate that allows my work vpn.

     

    this gets auto-synced when the system's service tag is detected properly.

     

    i tried to edit the xml file and add everything i could, using these instructions and dmidecode -s from the linux terminal on unraid:

     

    https://libvirt.org/formatdomain.html#smbios-system-information
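    a sketch of what i put together from that page, as a shell snippet. the entry names 'manufacturer' and 'serial' are real libvirt/dmidecode keywords, but the values here are stand-ins, since i'm not posting my real serials:

```shell
# Stand-in values; on the actual host these would come from dmidecode, e.g.
#   serial=$(dmidecode -s system-serial-number)
#   manufacturer=$(dmidecode -s system-manufacturer)
serial='ABC1234'
manufacturer='Dell Inc.'

# Build the <sysinfo> block the libvirt docs describe; it pairs with
# <smbios mode='sysinfo'/> inside the <os> element of the domain xml.
sysinfo_xml=$(cat <<EOF
<sysinfo type='smbios'>
  <system>
    <entry name='manufacturer'>$manufacturer</entry>
    <entry name='serial'>$serial</entry>
  </system>
</sysinfo>
EOF
)
echo "$sysinfo_xml"
```

    just a sketch of the shape; the full list of supported entry names is on the libvirt page linked above.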

     

    all my settings are accepted and the log seems okay, but i still can't see the serial in windows when i use either powershell's

    Get-WmiObject win32_bios | select SerialNumber

    or CMD's

    wmic bios get SerialNumber

     

    i believe if i can get this to work, then i will be 100% compliant and able to get my cert, and therefore my vpn, working.

     

    the libvirt.org page i linked above says the following

    Quote

     

    SMBIOS System Information

    Some hypervisors allow control over what system information is presented to the guest (for example, SMBIOS fields can be populated by a hypervisor and inspected via the dmidecode command in the guest). The optional sysinfo element covers all such categories of information. Since 0.8.7

     

     

    does anyone know if this is disabled in unraid's implementation of libvirt?

     

    the only other thing it might be, which i will test asap, is a couple of entries that existed in the example on the page but weren't set on my bare-metal system, so i left them blank like so

     

        <entry name='version'></entry>

     

    but i will test removing them completely from the xml instead, or even setting them to what their output was, which was 'Not Specified'.

     

    <entry name='version'>Not Specified</entry>

     

    any help would be greatly appreciated, if you have experience with this, or if you know that this feature has been removed from unraid's vm implementation.

     

  11. 1 hour ago, ghost82 said:

    When you passthrough the tpm device you need to choose a model.

    In this example:

    <devices>
      <tpm model='tpm-tis'>
        <backend type='passthrough'>
          <device path='/dev/tpm0'/>
        </backend>
      </tpm>
    </devices>

     

    you are passing through a tpm device located at /dev/tpm0 'tis' type.

    If the device is crb just use 'tpm-crb' instead of 'tpm-tis' for the model.

    how can you know which is correct?
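    a sketch of how one might decide (the dmesg/grep in the comment is the usual place the kernel names the interface; treat it as an assumption, not a guarantee). CRB is an interface defined only for TPM 2.0, while TIS works for both 1.2 and 2.0, so 'tpm-tis' is the safer default:

```shell
# Map the interface name the kernel reports to the libvirt tpm model.
# CRB exists only for TPM 2.0; TIS covers both 1.2 and 2.0, so it is
# the safer default when the kernel log is ambiguous.
pick_model() {
  case "$1" in
    *crb*) echo 'tpm-crb' ;;
    *)     echo 'tpm-tis' ;;
  esac
}

# On the host you would feed it something like (command is the common
# approach, adjust to taste):
#   pick_model "$(dmesg | grep -io 'tpm_crb\|tpm_tis' | head -n1)"
pick_model tpm_crb   # prints tpm-crb
pick_model tpm_tis   # prints tpm-tis
```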

  12. i got it to work, at least i thought i did ..

     

    oh first i should say i am doing the same thing.

     

    i was at first using a virtual TPM like in the instructions. i had to reset the bitlocker, and i am not sure if it was the virtual TPM, resetting the bitlocker, or something else like the system not being able to see the 'serial'/'servicetag', but although i could log in with my work account and almost everything worked, the system ripped out my 'personal' machine cert from certlm.msc, and that prevented my work vpn from working.

     

    as i was typing this i seem to remember a way to pass through the system serial? maybe i saw that in a video by 'spaceinvader one'?

    EDIT: found the thing i was thinking of and will add it to my next attempt; i hope it helps: 'wmic csproduct get UUID'

     

    i am going to try again, but this time pass through the TPM on first boot. oh, i forgot to mention i am passing through an nvme drive where this install is, and the install must be done on the bare metal; then i boot back into unraid.

     

    i am going to edit the xml and pass through the TPM without it ever seeing a virtual one, and try to do that serial thing, which i hope i am not misremembering. any idea if i should tell it that it is TIS or CRB?

     

    the only issue is i am going away for a week and a bit, but when i get back, if i can get it all going, i would certainly be happy to help you; that is, if you haven't already got it all figured out.

     

     

  13. i have another one. the mouse stops working after enough updates and whatnot go in on my windows 11 vm. it seems to be spice related, because vnc doesn't have that issue.

    i have tried installing the latest virtio-win-0.1.240-1.iso but that doesn't help.

     

    does anyone know how to get the mouse to behave properly? the strange thing is that after a fresh install it works, but as time goes on it just gets choppy on me and barely works at all; i have to move around the vm with the keyboard.

     

    if i use remote-viewer i can see this when i go in and out of the vm window (shift+F12).

     

    (remote-viewer:40158): GSpice-WARNING **: 19:21:59.987: Mouse acceleration code missing for your platform

     

  14. *I SOLVED IT!!* i had to set up virt-manager from my mac, then add the two extra QXL video cards that way. i will add what those looked like via xml at the bottom of the post.

     

    then i had to use remote-viewer and check each of the three monitors: that is the third icon from the left, the one that looks like a monitor; check Display 2 and 3 (obviously 1 is already checked).

    --------------------------------------------------------

     

    is it possible to start a spice viewer session and have it work fullscreen on three monitors? like how RDP can on windows 10/11 for example.

    do i just increase heads=1 to 2 or do i need to duplicate the following part of the xml and sort of bump up parts of it by one, like the alias and pci addressing?

    ultimately i want three monitors if possible.

     

        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <alias name='video0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>

     

     

    so i am relatively sure i just increase heads, but does anyone recommend more vgamem, or any of the others? i probably just want to do 1080p on all three monitors, but technically could do up to 4k on one of them.
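    as a back-of-envelope check on that (my own arithmetic, not from the spice docs): the qxl ram/vram/vgamem attributes are in KiB, so vram='65536' is 64 MiB, and a single framebuffer costs width x height x 4 bytes at 32-bit colour:

```shell
# Framebuffer cost in MiB (integer-truncated): width * height * 4 bytes.
echo $(( 1920 * 1080 * 4 / 1024 / 1024 ))   # one 1080p head: ~8 MiB
echo $(( 3840 * 2160 * 4 / 1024 / 1024 ))   # one 4k head: ~32 MiB
```

    so even the stock 64 MiB of vram comfortably covers one 4k head or three 1080p heads.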

     

    so i was wrong. according to the spice manual, you duplicate the video card for a windows vm (like mine), or increase the heads on a single video card for linux.

     

    unfortunately they don't show the details of what to make your second video card. also, when i update the vm and turn it on, unraid rips out the changes i made.

     

    here is what i thought would be correct: alias changed for each, slot changed for each, primary='yes' only on the first.

     

        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <alias name='video0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='no'/>
          <alias name='video1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </video>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='no'/>
          <alias name='video2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </video>

     

    SOLVED....

    <video>
      <model type="qxl" ram="65536" vram="65536" vgamem="16384" heads="1" primary="yes"/>
      <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
    </video>
    <video>
      <model type="qxl" ram="65536" vram="65536" vgamem="16384" heads="1"/>
      <address type="pci" domain="0x0000" bus="0x04" slot="0x01" function="0x0"/>
    </video>
    <video>
      <model type="qxl" ram="65536" vram="65536" vgamem="16384" heads="1"/>
      <address type="pci" domain="0x0000" bus="0x04" slot="0x02" function="0x0"/>
    </video>
     

     

     

     

  15. my upgrade went as smoothly as one could hope. i am not crazy about how shares seemed to come from nowhere when i added my zfs; it's a bit weird. i will need to stop the array when possible and clean that up. i will likely convert my multiple caches to zfs, and fix those shares.

     

    but other than that i am happy. now i will monitor for stability, and if i don't crash after, say, 1 month, i will mark this post as the solution to my crashing issue.

     

  16. indeed i was, but i put that script in place and had increased my /run size, so i don't think that is the reason for my crash anymore. it certainly wasn't helping, though.

     

    that was on or around June 5th, when i marked @Polar as solving it. my dumb solution was to increase the size of /run to 256MB, which is a bit of a waste of RAM. to be fair to myself, i did say in a post on March 29th that i was gonna clear the log on cron, but i didn't actually go through with it until Polar posted on June 5th. i have since shrunk /run to 64MB, but i do have plenty of ram. once i am convinced there is no longer an issue, i will disable the cron but leave it in place, and comment out the resizing of /run on boot.
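    for anyone following along, a config-fragment sketch of what those pieces look like as a /boot/config/go addition plus a cron line. this is my reconstruction of the steps above, not a verbatim copy of my config; the log path is the one shown earlier in this thread:

```shell
# /boot/config/go addition: resize the /run tmpfs at boot.
# 64m is where i ended up; 256m was the wasteful first attempt.
mount -o remount,size=64m /run

# root crontab entry: empty the runaway log.json files every hour.
# (kept commented here; it belongs in the crontab, not in go)
# 0 * * * * for f in /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/*/log.json; do : > "$f"; done
```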

  17. every couple of weeks or so my system crashes, and i have to hard reboot

     

    really hoping someone can read the diagnostics and help figure out why, before i blindly upgrade to the latest stable version that i just noticed is available today.

     

    i have been trying to read the syslog, which i have been mirroring to my usb since i noticed this issue.

     

    i had an issue with the tmpfs filling up, but that has since been mitigated and things did get a bit better. but still, i woke up this morning to a completely downed server, and i had to hard reboot using meshcommander talking to the intel amt i have set up on it.

     

    any help would be greatly appreciated.  please let me know if you require any further details or information at all.

     

    previous boot ... so the crash happened right before this

    system boot  2023-06-20 20:04

     

    this current boot was ... but i am unsure exactly when the system started having trouble as i was likely sleeping.

    system boot  2023-07-01 09:20

     

    dell-pc-diagnostics-20230701-1000.zip

  18. i don't know what is going on, but i am convinced it has to do with the nvidia drivers. i updated to the 'New Feature Branch'.

     

    just to change it up, and now it is no longer the same errors in the log, just the same line over and over:

     

     tail -F /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/501f72c7fc3a92557935aab9479c1fb048e40ac95c9833c44efb4ee18e671884/log.json
    {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:18-04:00"}
    {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:23-04:00"}
    {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:28-04:00"}
    {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:33-04:00"}
    {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:38-04:00"}
    {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:43-04:00"}
    {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:48-04:00"}
    {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-29T22:57:53-04:00"}

     

    it seems the nvidia is only working in my one plex docker; tdarr and handbrake for example don't seem to work. (i was wrong, plex and tdarr are working; it was just handbrake that wasn't, but i don't know what that is about, maybe it's a totally different issue.)

     

    does anyone know how to fix this? properly rip out the nvidia drivers and start from scratch, maybe?

  19. so /etc/nvidia-container-runtime/host-files-for-container.d doesn't exist.

     

    the only thing in that directory is /etc/nvidia-container-runtime/config.toml

     

    i also tried running 'runc list' and there was nothing. probably doing something wrong though:

     

    /usr/bin/runc list
    ID          PID         STATUS      BUNDLE      CREATED     OWNER

     

    i might have to just clear the log on cron for a while, until a new update comes for the nvidia driver and (hopefully) fixes the problem...

     

    not sure if anyone has any better thoughts/ideas?

  20. so it would seem that my plex container is filling up a log.json file with 'stuff' from the nvidia card i have passed through to it. it looks like the snippet below; while it is just a snippet, it does seem to just repeat over and over, and so far it's up to 9.8MB. i checked plex and don't have any debug or verbose logging enabled. i am running the Nvidia Driver Package on the Production Branch, which is currently v525.116.04.

     

    anyone recognize this issue? something about "NVIDIAContainerRuntimeConfig" and "MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\" and "Path\": \"nvidia-ctk\"

     

    i should probably also note that transcoding and whatnot seems to work fine; i tested using 'watch nvidia-smi' while purposefully forcing a transcode.

     

    i meant to hit submit on this last night, and in the meantime it's gone from just under 10MB to 16MB.

     

    tail -F /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/501f72c7fc3a92557935aab9479c1fb048e40ac95c9833c44efb4ee18e671884/log.json
    {"level":"info","msg":"Running with config:\n{\n  \"AcceptEnvvarUnprivileged\": true,\n  \"NVIDIAContainerCLIConfig\": {\n    \"Root\": \"\"\n  },\n  \"NVIDIACTKConfig\": {\n    \"Path\": \"nvidia-ctk\"\n  },\n  \"NVIDIAContainerRuntimeConfig\": {\n    \"DebugFilePath\": \"/dev/null\",\n    \"LogLevel\": \"info\",\n    \"Runtimes\": [\n      \"docker-runc\",\n      \"runc\"\n    ],\n    \"Mode\": \"auto\",\n    \"Modes\": {\n      \"CSV\": {\n        \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n      },\n      \"CDI\": {\n        \"SpecDirs\": null,\n        \"DefaultKind\": \"nvidia.com/gpu\",\n        \"AnnotationPrefixes\": [\n          \"cdi.k8s.io/\"\n        ]\n      }\n    }\n  },\n  \"NVIDIAContainerRuntimeHookConfig\": {\n    \"SkipModeDetection\": false\n  }\n}","time":"2023-05-28T01:03:23-04:00"}
    {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-28T01:03:23-04:00"}
    {"level":"info","msg":"Running with config:\n{\n  \"AcceptEnvvarUnprivileged\": true,\n  \"NVIDIAContainerCLIConfig\": {\n    \"Root\": \"\"\n  },\n  \"NVIDIACTKConfig\": {\n    \"Path\": \"nvidia-ctk\"\n  },\n  \"NVIDIAContainerRuntimeConfig\": {\n    \"DebugFilePath\": \"/dev/null\",\n    \"LogLevel\": \"info\",\n    \"Runtimes\": [\n      \"docker-runc\",\n      \"runc\"\n    ],\n    \"Mode\": \"auto\",\n    \"Modes\": {\n      \"CSV\": {\n        \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n      },\n      \"CDI\": {\n        \"SpecDirs\": null,\n        \"DefaultKind\": \"nvidia.com/gpu\",\n        \"AnnotationPrefixes\": [\n          \"cdi.k8s.io/\"\n        ]\n      }\n    }\n  },\n  \"NVIDIAContainerRuntimeHookConfig\": {\n    \"SkipModeDetection\": false\n  }\n}","time":"2023-05-28T01:03:28-04:00"}
    {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-28T01:03:28-04:00"}
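    if you just want to eyeball what those entries say, each line is JSON, so the msg field can be pulled out. a quick-and-dirty pure-shell sketch (jq would be the robust way; this just avoids needing it):

```shell
# One line lifted from the log above; the real file would be read with
# something like `tail -1 .../log.json`.
line='{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-28T01:03:28-04:00"}'

# Strip everything up to "msg":" and everything from ","time onward.
msg=${line#*\"msg\":\"}
msg=${msg%%\",\"time*}
echo "$msg"   # prints: Using low-level runtime /usr/bin/runc
```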

  21. thanks for the response, i am running the following for as long as it takes to hopefully figure this out.

     

    nohup watch -n600 '(df -h |grep /run; echo; echo) | tee -a /boot/run.filling_up.txt; (du -h --max-depth=1 /run; echo; echo) | tee -a /boot/run.filling_up.txt' &

     

    tail -F nohup.out