htpcguru

  1. By reading the two posts above yours. Awesome, that was easy!
  2. root@UnRaid:~# docker inspect -f '{{ index .Config.Labels "build_version" }}' linuxserver/transmission
     Linuxserver.io version:- 126 Build-date:- June-01-2018-22:08:29-UTC
     Container version 126 contains Transmission 2.94. I need to downgrade Transmission to 2.93 as one of my trackers rejects the latest Transmission 2.94. How would I do that?
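     A minimal sketch of what a downgrade could look like, assuming an older tagged build of linuxserver/transmission that still ships 2.93 is published on Docker Hub (the "125" tag below is hypothetical, so check the Docker Hub tag list first; ports and paths are placeholders):
     docker stop transmission && docker rm transmission
     docker pull linuxserver/transmission:125          # hypothetical older tag still containing Transmission 2.93
     docker create --name=transmission \
       -p 9091:9091 -p 51413:51413 -p 51413:51413/udp \
       -v /mnt/user/appdata/transmission:/config \
       -v /mnt/user/downloads:/downloads \
       linuxserver/transmission:125
     docker start transmission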
  3. If I specify the vdisk1.qcow2 location in KVM/Qemu as /mnt/disk1/... rather than /mnt/user/..., the operations work as expected:
     root@OttSwServer1:~# virsh domblklist Ubuntu-OttSwVM01
     Target   Source
     ------------------------------------------------
     hdc      /mnt/disk1/domains/Ubuntu-OttSwVM01/vdisk1.qcow2
     hda      /mnt/user/isos/ubuntu-16.04.3-desktop-amd64.iso
     root@OttSwServer1:~# virsh snapshot-list Ubuntu-OttSwVM01
     Name     Creation Time     State
     ------------------------------------------------------------
     root@OttSwServer1:~# virsh snapshot-create-as --domain "Ubuntu-OttSwVM01" --quiesce --name backup --atomic --disk-only
     Domain snapshot backup created
     root@OttSwServer1:~# virsh domblklist Ubuntu-OttSwVM01
     Target   Source
     ------------------------------------------------
     hdc      /mnt/disk1/domains/Ubuntu-OttSwVM01/vdisk1.backup
     hda      /mnt/user/isos/ubuntu-16.04.3-desktop-amd64.iso
     root@OttSwServer1:~# virsh snapshot-list Ubuntu-OttSwVM01
     Name     Creation Time                State
     ------------------------------------------------------------
     backup   2018-04-10 08:46:49 -0700    disk-snapshot
     root@OttSwServer1:~# virsh blockcommit Ubuntu-OttSwVM01 /mnt/disk1/domains/Ubuntu-OttSwVM01/vdisk1.backup --active --verbose --pivot
     Block commit: [100 %]
     Successfully pivoted
     root@OttSwServer1:~# virsh domblklist Ubuntu-OttSwVM01
     Target   Source
     ------------------------------------------------
     hdc      /mnt/disk1/domains/Ubuntu-OttSwVM01/vdisk1.qcow2
     hda      /mnt/user/isos/ubuntu-16.04.3-desktop-amd64.iso
     root@OttSwServer1:~# virsh snapshot-list Ubuntu-OttSwVM01
     Name     Creation Time                State
     ------------------------------------------------------------
     backup   2018-04-10 08:46:49 -0700    disk-snapshot
     root@OttSwServer1:~#
     Looks like a bug in blockcommit?
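     The lingering entry in snapshot-list after the pivot may just be snapshot metadata, which blockcommit does not remove on its own. A sketch of the full backup pass this workflow is aiming for, assuming the same disk-path layout as above (the /mnt/user/vm_backup destination is a placeholder):
     # 1. Freeze guest I/O and redirect new writes into an overlay file
     virsh snapshot-create-as --domain Ubuntu-OttSwVM01 --quiesce --name backup --atomic --disk-only
     # 2. While the VM writes to vdisk1.backup, copy the now-quiescent base image
     cp --sparse=always /mnt/disk1/domains/Ubuntu-OttSwVM01/vdisk1.qcow2 /mnt/user/vm_backup/
     # 3. Merge the overlay back into the base image and pivot the VM onto it
     virsh blockcommit Ubuntu-OttSwVM01 /mnt/disk1/domains/Ubuntu-OttSwVM01/vdisk1.backup --active --verbose --pivot
     # 4. Drop the leftover snapshot metadata; the overlay file can then be deleted
     virsh snapshot-delete Ubuntu-OttSwVM01 backup --metadata
     rm /mnt/disk1/domains/Ubuntu-OttSwVM01/vdisk1.backup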
  4. I am able to create a snapshot of a running VM.
     root@OttSwServer1:/mnt/user/domains/Ubuntu-OttSwVM01# virsh snapshot-create-as --domain "Ubuntu-OttSwVM01" --quiesce --name backup --atomic --disk-only
     Domain snapshot backup created
     root@OttSwServer1:/mnt/user/domains/Ubuntu-OttSwVM01# ls
     vdisk1.backup  vdisk1.qcow2
     root@OttSwServer1:/mnt/user/domains/Ubuntu-OttSwVM01# virsh snapshot-list Ubuntu-OttSwVM01
     Name     Creation Time                State
     ------------------------------------------------------------
     backup   2018-04-10 08:18:15 -0700    disk-snapshot
     However, when I try to blockcommit the snapshot, it complains about the vdisk location:
     root@OttSwServer1:/mnt/user/domains/Ubuntu-OttSwVM01# virsh blockcommit Ubuntu-OttSwVM01 /mnt/user/domains/Ubuntu-OttSwVM01/vdisk1.backup --active --verbose --pivot
     error: internal error: qemu block name '/mnt/disk1/domains/Ubuntu-OttSwVM01/vdisk1.qcow2' doesn't match expected '/mnt/user/domains/Ubuntu-OttSwVM01/vdisk1.qcow2'
     It looks like the blockcommit command resolves the vdisk to its actual disk location rather than the user share mount (the vdisk1.qcow2 file physically lives on /mnt/disk1). The snapshot XML does have the correct base image path on the user share /mnt/user:
     root@OttSwServer1:/mnt/user/domains/Ubuntu-OttSwVM01# virsh snapshot-dumpxml Ubuntu-OttSwVM01 backup | grep disk1
       <source file='/mnt/user/domains/Ubuntu-OttSwVM01/vdisk1.backup'/>
       <source file='/mnt/user/domains/Ubuntu-OttSwVM01/vdisk1.qcow2'/>
     The vdisk file references also look good:
     root@OttSwServer1:/mnt/user/domains/Ubuntu-OttSwVM01# virsh domblklist Ubuntu-OttSwVM01
     Target   Source
     ------------------------------------------------
     hdc      /mnt/user/domains/Ubuntu-OttSwVM01/vdisk1.backup
     hda      /mnt/user/isos/ubuntu-16.04.3-desktop-amd64.iso
     Thoughts?
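     Two checks that might be worth trying here, as a sketch (whether this libvirt build behaves the same way is an assumption):
     # See which backing-file path QEMU itself recorded inside the overlay
     qemu-img info --backing-chain /mnt/user/domains/Ubuntu-OttSwVM01/vdisk1.backup
     # Refer to the disk by its target name instead of a path, which sidesteps the
     # /mnt/user vs /mnt/disk1 mismatch between the libvirt XML and QEMU's block name
     virsh blockcommit Ubuntu-OttSwVM01 hdc --active --verbose --pivot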
  5. UnRaid defaults to attaching a VNC display to each VM, so by default you would have a VNC console for every VM. This is also true if you pass through the GPU, unless you specifically remove the VNC display. As for the GPU-attached display, I can think of two possibilities: (1) use a KVM switch as you envision above; (2) use an HDMI dummy dongle that emulates an HDMI display (up to 4K 60Hz), which is particularly useful if you stream games to other displays on your LAN. I have used (2) for a gaming VM and it worked extremely well when streaming games to other displays in the house.
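     A quick way to confirm whether a VNC display is still attached alongside the passed-through GPU, as a sketch ("Gaming-VM" is a placeholder domain name):
     # List the graphics devices libvirt has defined for the VM
     virsh dumpxml Gaming-VM | grep -A2 "<graphics"
     # To drop the VNC console entirely, remove the <graphics type='vnc' .../> element
     virsh edit Gaming-VM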
  6. Thanks for the tip. I replaced the cable AND moved cache2 to another SATA port on the motherboard, as the new cable had the same issue. I had to re-install one of the Windows 10 VMs. The original ports were SATA1_2 (with 2 sitting on top of 1); cache1 is still on SATA1, and I've now moved cache2 to SATA3 of SATA3_4. Not sure if SATA2 was the culprit; it's also possible the cache2 SSD has some hardware issue.
  7. It has happened before: after a power cycle and re-seating the SSDs, the cache pool came back alright. Just now, one of the Windows 10 VMs prompted an update, which caused another running Windows 10 VM to crash. After a "forced" reboot of unRaid, the cache pool came back unmountable. As before, power-cycling the box and re-seating the 2 SSDs fixed the issue. Any help is appreciated!
     tower-diagnostics-20170914-2105.zip (cache pool good)
     tower-diagnostics-20170924-1601.zip (cache pool unmountable)
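     For reference, a few read-only checks that could be run before resorting to a power cycle, as a sketch (/dev/sdb and /dev/sdc are placeholders for the two cache SSDs):
     # SSD health and cabling-related error counters
     smartctl -a /dev/sdb | grep -iE "reallocat|pending|crc"
     smartctl -a /dev/sdc | grep -iE "reallocat|pending|crc"
     # Non-destructive filesystem check of the unmountable pool member
     btrfs check --readonly /dev/sdb1
     # Per-device error counters, once the pool mounts again
     btrfs device stats /mnt/cache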
  8. Yes, the log shows a tonne of read errors. Diagnostics attached. Pre-clearing the spare 2TB now.
  9. After converting all the data disks from Reiserfs to XFS, one of the disks exhibits Sync Errors and Pending Sector Counts. Please see the screenshots. I do have another 2TB disk (left over from the Reiserfs conversion process) that I can use to swap this one out. What would be the correct troubleshooting procedure?
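     For anyone looking at the same symptoms, the raw SMART data behind those screenshots can be pulled directly; a sketch, with /dev/sdX as a placeholder for the suspect disk:
     # Current reallocated / pending / uncorrectable sector counts
     smartctl -a /dev/sdX | grep -iE "reallocated|pending|uncorrect"
     # Kick off an extended self-test and read the result when it finishes
     smartctl -t long /dev/sdX
     smartctl -l selftest /dev/sdX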
  10. Thanks for the tip. Now things are much better:
      root@UnRaid:~# du /mnt/disk3/ -s
      1727843185      /mnt/disk3/
      root@UnRaid:~# du /mnt/disk4/ -s
      1725260516      /mnt/disk4/
      Moving on to the other Reiserfs disks...
  11. I believe I found where the problem is. Somehow, vdisk1.img is copied with a much bigger on-disk size:
      root@UnRaid:~# ls -l /mnt/disk4/vm_backup/Windows\ 10\ Workstation/
      total 398458884
      -rwxrwxrwx 1 root root 408021893120 Mar 1 2016 vdisk1.img*
      root@UnRaid:~# ls -l /mnt/disk3/vm_backup/Windows\ 10\ Workstation/
      total 40994716
      -rwxrwxrwx 1 root root 408021893120 Mar 1 2016 vdisk1.img*
      I know that KVM/Qemu allocates the VM disk dynamically (the image is sparse), which explains why on disk3 the space actually used (40994716 1K-blocks) is much smaller than the apparent size of 408021893120 bytes. However, I cannot explain why the copy on disk4 occupies nearly ten times as much space on disk.
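      A sketch of how the sparseness can be confirmed and the copy redone without expanding the holes (paths as above; the cp/rsync flags are standard GNU options):
      # Apparent size vs space actually allocated; a sparse file shows a large gap
      du -h --apparent-size "/mnt/disk3/vm_backup/Windows 10 Workstation/vdisk1.img"
      du -h "/mnt/disk3/vm_backup/Windows 10 Workstation/vdisk1.img"
      # Re-copy while punching the holes back into the destination file
      cp --sparse=always "/mnt/disk3/vm_backup/Windows 10 Workstation/vdisk1.img" \
         "/mnt/disk4/vm_backup/Windows 10 Workstation/vdisk1.img"
      # (equivalently: rsync -a --sparse SRC DST)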
  12. I have the same line of thinking - the difference is too much to consider normal.
      root@UnRaid:/mnt/disk3# find /mnt/disk3 -type f | wc -l
      8109
      root@UnRaid:/mnt/disk3# find /mnt/disk4 -type f | wc -l
      8109
  13. I followed the wiki and step 8 for the first Reiserfs disk is done. However, I notice on the unRaid main page that the used size of the destination XFS disk is larger than that of the source disk. On the ssh console:
      root@UnRaid:~# du -s /mnt/disk3
      1727843185      /mnt/disk3
      root@UnRaid:~# du -s /mnt/disk4
      2083663832      /mnt/disk4
      root@UnRaid:~# df /mnt/disk3
      Filesystem      1K-blocks       Used            Available       Use%    Mounted on
      /dev/md3        3906899292      1727914992      2178984300      45%     /mnt/disk3
      root@UnRaid:~# df /mnt/disk4
      Filesystem      1K-blocks       Used            Available       Use%    Mounted on
      /dev/md4        3905110812      2083701224      1821409588      54%     /mnt/disk4
      Is this expected? Does XFS use more space than Reiserfs? Please see the attached screenshot: the red circle is the used size of the XFS disk (which should have the same content as disk3), and the blue one is the used size of the Reiserfs source disk3.
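      A quick way to narrow down which directory tree accounts for the difference, as a sketch (both disks are assumed to be mounted as shown above):
      # Per-top-level-directory usage on each disk; compare the numbers to spot the tree holding the extra data
      du -s /mnt/disk3/*/ | sort -k2
      du -s /mnt/disk4/*/ | sort -k2
      # Then drill into the suspect directory the same way, one level at a time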
  14. I have now put in a small HDD as an unRaid data disk (as unRaid won't proceed without a data drive) and assigned the 2 SSDs as a RAID0 cache.
      root@Tower:/mnt# btrfs fi show /mnt/cache
      Label: none  uuid: 90ef06ec-e402-4b2f-9b86-71e9ac2034ca
              Total devices 2 FS bytes used 206.03GiB
              devid 1 size 447.13GiB used 107.03GiB path /dev/sdb1
              devid 2 size 447.13GiB used 107.03GiB path /dev/sdc1
      root@Tower:/mnt# btrfs fi df /mnt/cache
      Data, RAID0: total=212.00GiB, used=206.03GiB
      System, RAID1: total=32.00MiB, used=16.00KiB
      Metadata, RAID1: total=1.00GiB, used=592.00KiB
      GlobalReserve, single: total=16.00MiB, used=0.00B
      Any tips on benchmarking the throughput of the btrfs RAID0 implementation?
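      A simple throughput test that only needs dd, as a sketch (the test file name and size are arbitrary; if compression is enabled on the pool, writing zeros will overstate the numbers):
      # Sequential write, bypassing the page cache so the SSDs themselves are measured
      dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=8192 oflag=direct
      # Sequential read of the same file, again with O_DIRECT
      dd if=/mnt/cache/ddtest of=/dev/null bs=1M iflag=direct
      rm /mnt/cache/ddtest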
  15. I'm setting up a box purely for gaming VMs. I like UnRaid's VM management interface. I've tried Proxmox, which seems to be quite popular, but it is nowhere near UnRaid's flexibility and user-friendliness. There are no HDDs in this box, only SSDs. I can simply assign an SSD as a data drive, no parity, no cache, and UnRaid runs VMs happily. Now I've just added another same-sized SSD and set up an Intel X99 RAID0 volume with the 2 identical SSDs. UnRaid, however, cannot see a single Intel RAID0 volume; instead, it still sees the 2 individual SSDs. I've briefly searched the forum and came across this post. Can someone confirm whether UnRaid works with motherboard or Intel chipset H/W RAID?