
golli53

Members

  • Content Count: 12
  • Joined

Community Reputation: 0 Neutral

About golli53

  • Rank: Member
  1. I tried to see if I could move my 1TB Samsung NVME SSD from an unassigned device to cache seamlessly. It was formatted as LUKS-encrypted BTRFS (single partition). When it turned out to be unmountable, I immediately stopped the array. I did NOT hit format, so I didn't think any changes would be made. However, the device is now showing as unmountable, with no filesystem detected. Is there any way I can recover my data? All of my critical data (all my VM drives) lives on this device and, foolishly, I did not make backups.
     [edit] Before assigning it to cache, I had the cache set up as a RAID 1 LUKS-encrypted BTRFS pool. I assigned this drive as 1 of the 2 cache devices and left the other slot empty before hitting "start." Here is the relevant output from blkid:
     /dev/nvme1n1: PTUUID="b64f271a" PTTYPE="dos"
     /dev/nvme1n1p1: PARTUUID="b64f271a-01"
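     For reference, the first recovery steps I plan to try, assuming the LUKS header on /dev/nvme1n1p1 is still intact (the mapper name "recover_nvme" and mount point "/mnt/recover" are just placeholders):

        # check whether a LUKS header is still present on the partition
        cryptsetup isLuks /dev/nvme1n1p1 && echo "LUKS header found"
        cryptsetup luksDump /dev/nvme1n1p1
        # if the header is intact, open it manually and attempt a read-only mount
        cryptsetup luksOpen /dev/nvme1n1p1 recover_nvme
        mkdir -p /mnt/recover
        mount -o ro /dev/mapper/recover_nvme /mnt/recover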
  2. I also see many of my docker containers using libvirt.img when I check with fuser. I'm concerned this might be causing some of my Win10 hibernation issues. Can anyone confirm whether "fuser -c /mnt/user/system/libvirt/libvirt.img" should normally show docker pids?
  3. Would it work to assign both containers to the same user-defined docker network and reference the SNI Proxy container by name in a config file or environment variable (e.g. https://sniproxy)? Otherwise, if you really do need the same IP, you would need to write a new Dockerfile that combines the two containers into one.
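     A minimal sketch of what I mean, assuming the two containers are already running and named "sniproxy" and "myapp" (the network name "proxnet" is also just an example):

        # create a user-defined bridge network and attach both containers to it
        docker network create proxnet
        docker network connect proxnet sniproxy
        docker network connect proxnet myapp
        # on a user-defined network, containers resolve each other by name,
        # so inside "myapp" the proxy is reachable as https://sniproxy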
  4. I am trying to turn on/off certain samba shares on a schedule using cron. What's the best command to run from the cron job? All I can think of is using sed to edit /etc/samba/smb-shares.conf, but this seems really complicated and would break the webgui options that were set. Is there a command that mimics what the webgui is doing when it turns on/off samba export or changes security options for a given share?
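     To make the question concrete, the sed hack I'd like to avoid would look roughly like this in root's crontab (the share name "media" is just an example, and "available = no" / smbcontrol are plain Samba mechanisms, not anything Unraid-specific):

        # midnight: add "available = no" under the [media] share, then tell smbd to reload
        0 0 * * * sed -i '/^\[media\]$/a available = no' /etc/samba/smb-shares.conf && smbcontrol smbd reload-config
        # 6am: remove that line again to re-enable the share
        0 6 * * * sed -i '/^\[media\]$/{n;/^available = no$/d}' /etc/samba/smb-shares.conf && smbcontrol smbd reload-config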
  5. No, but I just got the chance to enable it and do some tests. Now libvirt simply fails to start when I hibernate my VM and stop+start my array. Here's the error message:
     Tower root: /mnt/disks/ssd/system/libvirt/libvirt.img is in-use, cannot mount
     When I use fuser and ps to find out which processes are locking it, I get the hibernated VM:
     /usr/bin/qemu-system-x86_64 -name guest=win10,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-win10/master-key.aes -machine pc-i440fx-3.1,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/2f4360e3-a8a0-ca29-a732-4f1191b06173_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 4096 -realtime mlock=off -smp 2,sockets=1,cores=1,threads=2 -uuid 2f4360e3-a8a0-ca29-a732-4f1191b06173 -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=26,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device nec-usb-xhci,p2=15,p3=15,id=usb,bus=pci.0,addr=0x7 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/mnt/disks/ssd/domains/win10/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1,write-cache=on -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:1a:cb:11,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,fd=31,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0,websocket=5700 -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
     (My docker containers were also listed as locking it, which I wasn't sure was normal, so I turned off autostart on all dockers to remove that as a source of noise.)
     Tests I've tried (all yield the same error as above):
     - Manually hibernating the VM
     - Leaving it running and letting unRAID hibernate it upon array stop
     - Increasing the time given to VM shutdown/hibernate to 3 minutes under VM settings
     - Rebooting and retrying the above
     This was a clean Win10 install with just qemu-img installed and no pass-through devices. So it looks like the VM hibernate option breaks libvirt when the VM resides on an Unassigned Device?
     PS Not sure if it matters, but the Unassigned Device holding libvirt.img and my VM images is formatted as btrfs-encrypted. Unraid 6.7.2. Unassigned Devices 2019.06.18.
  6. I'm not sure that's correct. My VMs currently run from an unassigned NVME drive. I can't stop my array without killing my VMs (the VM settings page becomes disabled and any previously started or paused VMs become stopped upon restart). Does it work differently for you?
  7. Oops, I forgot to mention that I tried this as well. Same thing - it works, but the drive then powers right back up.
  8. I often have to restart my array for various reasons, and since my family uses unRAID VMs as their go-to personal computers, having to close all their work and shut down every time is a big hassle. I've tried hibernation, either within the guest OS or using "virsh domsuspend" to disk. It appears to work, but the VMs show as "paused" rather than "stopped". I can resume them fine, but if I stop the array, the VMs crash and the hibernation state isn't preserved (one time I also had qemu block the array from stopping because it was still accessing the drives, and I had to kill the instance). I've also tested this on a brand-new Win10 VM install with qemu-agent running and virtio drivers installed. [edit] Btw, I can't use "virsh save" unfortunately because of GPU pass-through.
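     For reference, the virsh commands involved here (the domain name "win10" and the save path are just examples):

        virsh suspend win10                            # pauses the VM in RAM; it shows as "paused" and doesn't survive an array stop
        virsh dompmsuspend win10 disk                  # asks the guest (via the guest agent) to hibernate to disk instead
        virsh save win10 /mnt/user/vmstate/win10.sav   # full state save to a file, but not an option for me with GPU pass-through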
  9. I've found myself in the situation of having to remove specific drives from my array, which sits in a 5-in-3 hotswap bay. I want to spin down (and ideally power off) a drive before removing it, but the bay does NOT have individual power buttons for each drive. I stop the array and then power down the device using "hdparm -Y /dev/sdX", which works fine, but it automatically powers back on and spins up about 5 seconds later. Is there a way to stop it from powering back up (or a better approach to this situation), so I can hot-swap safely?
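     To be concrete, here's what I'm doing now plus the one idea I haven't tried yet (sdX is a placeholder for the drive being pulled, and the sysfs step is an untested assumption on my part):

        hdparm -Y /dev/sdX                      # put the drive to sleep... but it spins back up ~5 seconds later
        echo 1 > /sys/block/sdX/device/delete   # untested idea: detach the device from the kernel so nothing can wake it before I pull it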
  10. I just set up unRAID for the first time and I'm trying to clone my old Win10 install to an unRAID VM. Win10 currently sits on a 1TB SSD, and I plan to clone the drive to a 1TB NVME drive as an NTFS-formatted unassigned disk, then pass it through to the VM for better performance. One added wrinkle: I'd also like to add encryption (currently not using any). I've thought about encrypting the drive with VeraCrypt inside the guest, either before or after the move to the VM.
      1) Is it easiest to use a standard disk clone tool to copy the Win10 install to the NVME and then pass it through to a newly created unRAID VM? Or are there considerations I should take into account, given the drive contents will be used by a VM instead of bare-metal Win10?
      2) Is there an alternative for encryption that might perform better, e.g. using unRAID's native encryption outside of the VM - or is that not possible given I want to pass the disk through and must use NTFS?
      3) If using VeraCrypt, is it better to encrypt on my old bare-metal system before transferring, or to transfer first and then encrypt inside the VM?
      Would greatly appreciate pointers on any/all of these.
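      For 1), the kind of clone step I have in mind is just a raw block copy from a live environment (device names here are examples only - /dev/sdb for the old SSD, /dev/nvme0n1 for the new NVME):

        dd if=/dev/sdb of=/dev/nvme0n1 bs=4M status=progress conv=fsync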
  11. I've been thinking of a similar build with E-2146G. Where were you able to find the Gigabyte MX32-4L0? Haven't seen it on retail sites. Thinking about this or ASRock Rack C246M WS, both of which have IPMI but both seem unavailable.
  12. First-time NAS build (historically I've built an HTPC + USB external drives with manual backup). Is there a good CPU / motherboard combo + other HW that will let me use most of the 24 bays with my old SATA drives and also give me H.265 10-bit 4K transcoding with Quick Sync? Mostly for unRAID/Plex + a VM for some light work. For the CPU, I'm thinking Kaby Lake (7th gen), which is the minimum for that Quick Sync transcoding. Have looked at:
      Xeon E3-1225 v6 or Xeon E3-1230 v6
      Core i7-7700
      A bit clueless on which Supermicro/other mobos (and/or expansion cards) would support H.265 10-bit Quick Sync and give me enough SATA connections. Would appreciate any help. ECC would be a plus depending on how much it adds to the cost.
      [edit] Posted this in the Hardware section because I'm after specific advice on what overall setup works with the Norco, but please feel free to move it to CPU/Motherboard if appropriate.