sarf

Members
  • Posts
    52
  • Joined

  • Last visited

About sarf

  • Birthday 12/19/1969

  • Gender
    Male
  • Location
    London


sarf's Achievements

Rookie (2/14)

Reputation: 4

  1. Will do! I booted this morning from power on, the drives all mounted and the VM started automatically. Thanks again...
  2. You got it! I rebooted twice to make sure, and both times all the partitions on the NVMe mounted automatically. I'll run some more tests from a cold boot tomorrow as well, but hopefully it's now fixed! Thank you for the fix and your awesome addon!!
  3. There you go. Diags attached, thanks for looking :) core-diagnostics-20230208-1422.zip
  4. Hey there, I have a very weird issue that is 100% reproducible. Basically, I have a 1 TB NVMe SSD which I occasionally use to boot Windows directly from machine boot, i.e. this is not a VM running on Unraid. The drive has the following partitions. Now, for some reason, when I set this drive to "Automount" from settings and reboot or start the server up from power down, this is what I see. All I need to do to fix it is click "Unmount" and then click "Mount" again, and all the partitions are then mounted as per the first shot. I also, and mostly, use this drive to store some VMs, so having it automount and then autostart my Linux VM is what I used to do until now. This used to work perfectly fine and without issue! It started to fail when I upgraded my GPU and forgot to turn off all the passthrough before switching the server back on. Adding the new GPU re-arranged all the devices and I ended up with a drive controller being passed through instead of the GPU. My fault, 100%, yes. So anyone wanting to upgrade anything in the future: reset all the passthrough and turn off all autostarts before you proceed! You will save yourself a lot of pain! (See the driver-binding check after this list.) Long story short, I managed to correct the server, reconfigure all the drives, and rebuild a drive from parity, so everything is now back up and working except for automounting this particular drive at boot! Please help me, hehe. Thanks in advance!
  5. I posted it via the Unraid server menu?!?!
  6. Bug report submitted! Still an issue for me using 9t as well... thanks for looking.
  7. Hi guys, I have an issue with virtiofs, as follows. Ubuntu 20.04.5 server VM. Added <memoryBacking> <source type='memfd'/> <access mode='shared'/> </memoryBacking> to the XML - no problem at all. Booted the VM with virtiofs and a mapped share - all worked perfectly. Mounted the share in the VM - no problem at all. But "ip address" at a terminal in the VM returns no IP address from the network. Shut down and edit the VM to remove the virtiofs shares, start the VM, run "ip address" at a terminal, and it connects to the network with a valid IP address. Tried it many times and it does the same thing every time. Hope this helps. (See the virtiofs sketch after this list.)
  8. Awesome, thank you for pointing me in the right direction. Added your settings and so far so good!
  9. Scrap the above. Same problem on 6.9.2...
  10. I reverted back to 6.9.2 after reading this and my domain user shares are now working. Having said that, it usually took some time before the domain users' home directories would start to appear as unknown in the new file manager; at that point the user would lose access to the server - not just the share, the server. "wbinfo -i user" would return an error. Hopefully this won't happen now that I'm back on 6.9.2. (See the winbind checks after this list.)
  11. VM Snapshot: Please add this! Pretty pretty pretty please. (See the snapshot sketch after this list.)
  12. Hi there, just wondering if at any time in the future you will implement live migration of VMs between Unraid and other servers. This functionality is already available within KVM using virt-manager, but it would be awesome to be able to do this from Unraid as well. Many thanks. (See the live-migration sketch after this list.)
  13. Just to confirm, this worked a treat. It cost me a few hours of messing around with the server, but I appreciate the decision to disable rather than corrupt! It definitely should have been made much clearer, in my opinion...
  14. The problem is: what if you are using the server for virtual machines? On this particular server there is no option to disable VT-d, so in my case should I use the IOMMU blacklist option? I don't need any hardware passthrough like GPUs, but I do need virtualisation. Any advice would be much appreciated. (See the IOMMU check below.)
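
A note on the passthrough mix-up in post 4: before and after any hardware swap it is worth checking which kernel driver each storage controller is bound to, since anything claimed by vfio-pci is reserved for passthrough and invisible to the host. A minimal shell sketch (the grep pattern is only an assumption about how the controllers are labelled):

    # List storage controllers and the driver currently bound to each one.
    # A controller showing "Kernel driver in use: vfio-pci" is being passed
    # through and will not be usable by the host for automounting.
    lspci -nnk | grep -iA3 'sata\|nvme\|non-volatile'

    # PCI addresses explicitly claimed by vfio-pci (empty output means none).
    ls /sys/bus/pci/drivers/vfio-pci/ 2>/dev/null | grep '^0000:'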
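
For the virtiofs report in post 7, this is roughly what the relevant libvirt setup and the in-guest mount look like. The VM name ubuntu2004, the share path /mnt/user/share, the tag sharetag and the mount point /mnt/share are placeholders, not values from the post:

    # Host side: edit the domain XML. The memoryBacking block is the one
    # quoted in the post; the filesystem device maps the share into the guest.
    virsh edit ubuntu2004
    #   <memoryBacking>
    #     <source type='memfd'/>
    #     <access mode='shared'/>
    #   </memoryBacking>
    #   <devices>
    #     <filesystem type='mount' accessmode='passthrough'>
    #       <driver type='virtiofs'/>
    #       <source dir='/mnt/user/share'/>
    #       <target dir='sharetag'/>
    #     </filesystem>
    #   </devices>

    # Guest side (Ubuntu 20.04): mount the share by its tag, then check
    # whether the network interface still picks up an address.
    mount -t virtiofs sharetag /mnt/share
    ip address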
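
For the domain-user symptom in post 10, a few winbind checks that help narrow down whether the domain join itself or user resolution is failing; DOMAIN\someuser is a placeholder:

    # Verify the machine trust account with the domain controller.
    wbinfo -t

    # List the domain users and groups winbind can currently see.
    wbinfo -u
    wbinfo -g

    # Resolve a single user - the call that was returning an error in the post.
    wbinfo -i 'DOMAIN\someuser'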
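
The snapshot feature requested in post 11 can already be tried from the command line with plain virsh (nothing Unraid-specific); myvm and the snapshot name are placeholders, and internal snapshots assume qcow2 disks:

    # Create a named snapshot of a VM.
    virsh snapshot-create-as myvm before-upgrade --description "pre-upgrade state"

    # List and revert snapshots later.
    virsh snapshot-list myvm
    virsh snapshot-revert myvm before-upgrade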
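
For the live-migration request in post 12, the underlying KVM/libvirt capability looks like this with plain virsh; the VM name and destination host are placeholders, and a straightforward live migration assumes both hosts can reach the same disk images:

    # Live-migrate a running VM to another libvirt host over SSH.
    virsh migrate --live --verbose myvm qemu+ssh://otherhost/system

    # Confirm where the VM ended up.
    virsh --connect qemu+ssh://otherhost/system list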
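
For the VT-d question in post 14, these read-only checks show whether the IOMMU is actually active and what sits in each group, without touching BIOS or boot options; nothing here is Unraid-specific:

    # Kernel messages confirming DMAR/IOMMU initialisation (VT-d active).
    dmesg | grep -i -e DMAR -e IOMMU

    # Show every IOMMU group and the devices inside it.
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        group=$(basename "$(dirname "$(dirname "$dev")")")
        echo "group $group: $(lspci -nns "$(basename "$dev")")"
    done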