coppit

Community Developer
  • Posts: 498
  • Joined
  • Last visited

Everything posted by coppit

  1. I started with a clean Windows 10 install, with UEFI this time, and I have normal network speeds again. I can only guess that some sort of cruft in my old Windows 10 installation was contributing to the problem.
  2. Yeah, I was talking about putting an SSD into the parity-protected array, then putting the vdisks on that drive. Has anyone done that?
  3. I'm paranoid about cache drive corruption or just a mistake hosing lots of VMs. Is it crazy to put my system/libvirt share on an SSD that is in my array? Ditto for my docker.img, I guess. Reads should happen at SSD speeds, and writes at HD speeds (since my parity drive is a hard drive). But since most disk I/O is reads, maybe it will be reasonably fast. My parity drive will be running nonstop. Is that a bad thing?
  4. Yes. Sorry, I said 10Mbps line speed when I meant 10Gbps line speed.
  5. Here's my whole Win10 XML, if it helps:
     <domain type='kvm' id='3'>
       <name>Windows 10 - David VDISK</name>
       <uuid>1cb37bd5-33cb-ff00-e4f6-56b4b6fbf08c</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>8388608</memory>
       <currentMemory unit='KiB'>8388608</currentMemory>
       <memoryBacking>
         <nosharepages/>
         <locked/>
       </memoryBacking>
       <vcpu placement='static'>4</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='2'/>
         <vcpupin vcpu='1' cpuset='3'/>
         <vcpupin vcpu='2' cpuset='6'/>
         <vcpupin vcpu='3' cpuset='7'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor id='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough'>
         <topology sockets='1' cores='2' threads='2'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/Windows 10 - David/vdisk1.img'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </disk>
         <disk type='block' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source dev='/dev/disk/by-id/ata-WDC_WD3200BEKT-00KA9T0_WD-WX11A11C5038'/>
           <backingStore/>
           <target dev='hdd' bus='sata'/>
           <alias name='sata0-0-3'/>
           <address type='drive' controller='0' bus='0' target='0' unit='3'/>
         </disk>
         <disk type='block' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source dev='/dev/disk/by-id/ata-Maxtor_6V300F0_V60GWTEG'/>
           <backingStore/>
           <target dev='hde' bus='sata'/>
           <alias name='sata0-0-4'/>
           <address type='drive' controller='0' bus='0' target='0' unit='4'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.118-2.iso'/>
           <backingStore/>
           <target dev='hdb' bus='ide'/>
           <readonly/>
           <alias name='ide0-0-1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='ide' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='sata0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:5b:70:b4'/>
           <source bridge='br0'/>
           <target dev='vnet1'/>
           <model type='virtio'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/1'/>
           <target port='0'/>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/1'>
           <source path='/dev/pts/1'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-Windows 10 - David VDISK/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x00' slot='0x1d' function='0x0'/>
           </source>
           <alias name='hostdev2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
         </hostdev>
         <memballoon model='virtio'>
           <alias name='balloon0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
         </memballoon>
       </devices>
     </domain>
  6. Possible bug: The vdisk1 setting doesn't seem to stick if I specify a manual path. It keeps wanting to flip back to automatic when I click edit again, then complains that I didn't specify a disk size.
  7. Hi all, I'm running 6.2 RC2. I have a Win7 VM that can run http://fast.com at 50 Mbps. This is using br0. I recently converted a physical Win10 machine into a VM. I installed the VirtIO drivers, and the adapter is described as "Red Hat VirtIO Ethernet Adapter #2", and its properties say that it has a 10 Gbps line speed. But http://fast.com only runs at 2 Mbps. Here's my Win7 config:
     <interface type='bridge'>
       <mac address='00:16:3e:4d:0f:48'/>
       <source bridge='br0'/>
       <target dev='vnet0'/>
       <model type='virtio'/>
       <alias name='net0'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
     </interface>
     and here's my Win10 config:
     <interface type='bridge'>
       <mac address='52:54:00:5b:70:b4'/>
       <source bridge='br0'/>
       <target dev='vnet1'/>
       <model type='virtio'/>
       <alias name='net0'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
     </interface>
     For the Win10 VM, I'm passing through the GPU and one USB controller. I have "Enable PCIe ACS Override" enabled. I also enabled MSI interrupts. I'm using the drivers from virtio-win-0.1.118-2.iso, if that matters. I'm stumped. Help?
  8. Actually, the wiki doesn't say anything about the 2nd volume being configured for virtio. It also doesn't mention flipping the OS disk to virtio after installing the drivers. Maybe that was part of my trouble. Let me try the process you've described... Works great! Thanks!
  9. Update: So I went through the wiki instructions a little more carefully. I noticed two things: 1) It says to add a second small virtual disk, install the drivers, then remove that disk 2) It doesn't say to change the vdisk1 from IDE to VirtIO. I did both of these, and the VM doesn't crash any more. It seems slow, but then again it's on my array. After I move it to SSD hopefully it will be back to normal. Does this seem right to people? I'm surprised by #2, and worried that even though device manager says I have the VirtIO devices, I haven't configured the disk properly to use them.
  10. I read here that this will be reverted by unRAID after a reboot? Or is that no longer the case? The feature request is also not answered. I'm terrified about it reverting after a reboot, and losing all my cache data... Especially since I have a vdisk1.img file that is larger than both of the SSDs.
  11. Hi all, As part of my move to unRAID 6.2 RC1, I thought I'd try moving my desktop into a VM. So I used "dd" to grab the system disk image, and used the nifty new Windows 10 template. I also used the nifty new VirtIO download feature to grab virtio-win-0.1.118-2.iso and mount it in the VM. I booted the VM and used VNC to install all the drivers. I didn't install the guest tools yet. I stopped the VM and changed the system disk to use VirtIO. But during boot, Windows fails with the error INACCESSIBLE_BOOT_DEVICE. Help? Is the problem this? https://support.microsoft.com/en-us/kb/2795397 So I stopped the VM, switched the system disk back to IDE, and started it. This time Windows tried to do startup repair, but I just selected the "reboot" option and the second time it came up clean. In a command prompt run as administrator, I ran the command:
      bcdedit /set {current} safeboot minimal
      as suggested by the Microsoft support site. This time the machine showed the automatic repair screen, followed immediately by SYSTEM_THREAD_EXCEPTION_NOT_HANDLED. This webpage suggests that I should try to get into the recovery console and run "bootrec /fixmbr". But how do I do that? Is that the right answer? Luckily, I made a backup of the image before doing anything. I'll restore it, and will try telling Windows to boot into safe mode after installing the VirtIO drivers. But I have no idea if I'm on the right track or not.
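      For anyone following along, the dd capture step can be sketched like this. The paths below are stand-ins so the sketch is runnable as written; on the real machine the source would be the raw system device (e.g. /dev/sda, booted from a live CD so Windows isn't running) and the destination a file on the array:

      ```shell
      # Stand-in paths for the demo. On real hardware, SRC would be the raw
      # system disk (e.g. /dev/sda from a live CD) and DST a file on the
      # unRAID array, e.g. /mnt/user/domains/myvm/vdisk1.img.
      SRC=/tmp/fake_disk.bin
      DST=/tmp/vdisk1.img

      # Create a small stand-in "disk" so this sketch runs anywhere:
      dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null

      # The actual capture: copy the device byte-for-byte into a vdisk image.
      # conv=sparse skips writing all-zero blocks, keeping the image small.
      dd if="$SRC" of="$DST" bs=1M conv=sparse 2>/dev/null

      cmp -s "$SRC" "$DST" && echo "identical"
      ```

      Capturing the whole device (rather than a single partition) matters, since it brings the boot sector and partition table along with it.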
  12. In the VM settings page, the handy little directory chooser for the ISOs isn't working. I can still paste the full path though. Is it just me?
  13. I'll mention this here. I don't think it's a bug per se, but rather an unexpected behavior of the new shares. You might want to warn people. As part of this upgrade, I'm doing a p2v transition for my desktop PC. The OS disk is 239 GB, and I made a backup. Now that unRAID has the new shares, I'm trying to use them. So I put my disk image and backup in /mnt/disk1/domains/myvm. My cache disk is a 128 GB SSD, so I didn't bother trying to put them into /mnt/cache/domains/myvm. I didn't realize that the "prefer" setting for the share would attempt to move the VM images to the cache drive, filling it up and hanging the mover. It would be nice if the mover were smart enough to not shoot itself in the foot with regard to the "prefer" option, for the scenario where the file exists in the array but not in the cache. You might also want to warn people about this. Previously, I was putting my VMs and docker data on a separately mounted SSD. Since we don't have a supported RAID 0 way to combine SSDs to increase the cache size, if I want to use the new shares with SSD performance, I'll have no choice but to buy an SSD that's twice the size, replacing my previous two SSDs. For now I've done the obvious thing and modified the share settings to prevent the domains share from using the cache drive. P.S. Is the mover safe to use on vdisk1.img files while the VM is running?
  14. I'm afraid I may have hosed docker. After the upgrade, emhttp wasn't responding for 15-20 minutes. I could see the process running. I issued a "reboot" command over ssh, and the server gracefully shut down. Once it came back up, emhttp was working okay, but the Docker tab is missing. The syslog shows my old docker image being mounted from /mnt/vms/docker.img, but a subsequent mount command fails:
      Jul 11 20:01:31 storage emhttp: shcmd (76): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/vms/docker.img' /var/lib/docker 20 |& logger
      Jul 11 20:01:31 storage kernel: BTRFS: device fsid 52c0aba5-9923-418c-9465-89971a7367f3 devid 1 transid 1124822 /dev/loop0
      Jul 11 20:01:31 storage kernel: BTRFS info (device loop0): disk space caching is enabled
      Jul 11 20:01:31 storage kernel: BTRFS: has skinny extents
      Jul 11 20:01:31 storage root: Resize '/var/lib/docker' of 'max'
      Jul 11 20:01:31 storage kernel: BTRFS info (device loop0): new size for /dev/loop0 is 21474836480
      Jul 11 20:01:31 storage emhttp: shcmd (78): /etc/rc.d/rc.docker start |& logger
      Jul 11 20:01:31 storage root: starting docker ...
      <snip>
      Jul 11 20:01:43 storage emhttp: shcmd (102): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/vms/docker.img' /var/lib/docker 20 |& logger
      Jul 11 20:01:43 storage root: /mnt/vms/docker.img is in-use, cannot mount
      Jul 11 20:01:43 storage emhttp: shcmd: shcmd (102): exit status: 1
      I can see that the image is mounted at /var/lib/docker, but lsof says that nothing is using that folder, and docker is not running. I'm worried that I interrupted the container update process, or somehow confused unRAID with the reboot. If I change the image location to /mnt/user/system/docker/docker.img and let unRAID create a new image file, Docker starts okay. But if I move my /mnt/vms/docker.img into that location, I get the same behavior. I guess I broke my docker.img file?
  15. I'm stumped too. What do you get when you run this sequence?
      cat /tmp/plugins/snmp/drive_temps.txt
      snmpwalk -v 2c localhost -c public 'NET-SNMP-EXTEND-MIB::nsExtendOutLine."disktemp"'
      cat /tmp/plugins/snmp/drive_temps.txt
      /usr/local/emhttp/plugins/snmp/drive_temps.sh
      cat /tmp/plugins/snmp/drive_temps.txt
      snmpwalk -v 2c localhost -c public 'NET-SNMP-EXTEND-MIB::nsExtendOutLine."disktemp"'
      cat /tmp/plugins/snmp/drive_temps.txt
      I'm wondering if there's an issue with the caching. Another thought: there's a "sleep 15" in the drive_temps.sh script. I don't remember why. Maybe removing it would allow the script to run faster, solving the problem for you? Try it and let me know what happens.
  16. It seems to have stabilized. I'm running 6.1.9 now. It may have stabilized in an earlier version. My machine is plugged into an enterprise-grade 16-port switch that has been solid for years now, so I doubt that it's a problem there. I'm guessing some sort of hardware+kernel issue that has since resolved itself.
  17. This works nicely, thank you. But is there an easy way to make it permanent so that it survives a reboot? What you could try is putting something like this into your go script: echo "$(/sbin/ifconfig br0 | grep inet | awk '{print $2}') storage" >> /etc/hosts Then use the advanced container options to map /etc/hosts from the docker host to the container. The container will think that localhost is your server, but maybe that's okay?
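      To make the go-script idea above concrete, here's a minimal, runnable sketch of the parsing step. The bridge_ip helper, the SAMPLE text, and the /tmp/hosts.demo path are stand-ins for the demo; the real go script would feed in the live output of /sbin/ifconfig br0 and append to /etc/hosts:

      ```shell
      # Extract the IPv4 address from ifconfig-style output. The awk field
      # assumes "inet 192.168.1.50 ..." formatting; older ifconfig prints
      # "inet addr:192.168.1.50", which would need a different field split.
      bridge_ip() {
        printf '%s\n' "$1" | grep 'inet ' | awk '{print $2}'
      }

      # Stand-in ifconfig output for the demo:
      SAMPLE='br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>
              inet 192.168.1.50  netmask 255.255.255.0'

      HOSTS=/tmp/hosts.demo   # stand-in for /etc/hosts
      : > "$HOSTS"            # start fresh for the demo
      echo "$(bridge_ip "$SAMPLE") storage" >> "$HOSTS"
      cat "$HOSTS"   # prints: 192.168.1.50 storage
      ```

      Since the go script runs on every boot, the appended line would come back after each reboot, which is what makes the trick survive.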
  18. Check out my FileBot container if you're looking for automated renaming, subtitle downloading, etc. Non-Linux apps can't be converted into docker containers or unraid plugins. Maybe later this year Windows apps might be containerizable though.
  19. It will report failure if there are no files to move. Very confusing, I know. The UI doesn't show the cause of the failure. Please log into your server and run "docker logs FileBot" (use whatever your container name is). If it says "No files selected for processing", then that's to be expected. BTW, I've filed a bug about the UI here: http://lime-technology.com/forum/index.php?topic=47011.0
  20. Sorry for being away. I've just pushed an update to the latest version, 1.2.12-1~ppa3~trusty1.
  21. This seems like a mis-feature of the upgradepkg tool. I add the flag --install-new so that it will install perl if it's not already on the system. But it seems to also force-install the older version if a newer version already exists. If this is a big issue, I suppose I could write my own install tool wrapper that checks versions properly... In the meantime I'll just bump the version of perl to match nerdtools. I doubt it will break things. Let me test it.
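      The version-checking wrapper I mentioned could look something like this sketch. The newer_than_installed helper is hypothetical (not part of upgradepkg), and it leans on GNU "sort -V" for version ordering:

      ```shell
      # Hypothetical wrapper idea: only install when the candidate version is
      # strictly newer than the installed one, instead of letting
      # "upgradepkg --install-new" force-install an older package.
      newer_than_installed() {
        installed=$1
        candidate=$2
        # Strictly newer: versions differ, and the candidate sorts last.
        [ "$installed" != "$candidate" ] &&
          [ "$(printf '%s\n%s\n' "$installed" "$candidate" | sort -V | tail -n1)" = "$candidate" ]
      }

      # Example: perl 5.22.2 installed, 5.24.0 available -> install it.
      if newer_than_installed "5.22.2" "5.24.0"; then
        echo "would run: upgradepkg --install-new <package>"
      else
        echo "skip: installed version is current or newer"
      fi
      ```

      Real Slackware package names embed the version, so a production wrapper would also need to parse the version out of the .txz filename, which this sketch skips.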
  22. Yeah, that's my setup too. You don't want the incomplete dir to be watched by FileBot, since it apparently will try to process the file before it's done. Looking at the monitor, I do have it treating "file close" as an event. That's different from "file close after write", so I presume that event would be a "close after read". I haven't watched it closely enough to see whether FileBot only processes the file after seeding is over, or at least after no seeding has happened for the stabilization time. You could try setting a shorter stabilization time, or setting your seeding ratio lower. You can get FileBot to move instead of copy, but that can be a bit dangerous, since it sometimes mis-detects the files.
  23. No, it doesn't exclude everything. It *processes* everything. As part of that work, it excludes the files it has processed. Are you sure you configured your output directory properly? Is it writable? If you don't see any files there, log into your server, run a command to get a shell in the container, and look at its /output dir:
      docker exec -it FileBot bash
      ls /output
      If the container's /output has files, but outside of the container there are no files, then you've misconfigured the output dir. Correct. It waits a bit for the directory to stabilize before it runs. See the docs: https://hub.docker.com/r/coppit/filebot/ Within a few minutes it will run. Most likely it did eventually run, but if your output dir was not set right, you might not notice.
  24. See the docs: https://hub.docker.com/r/coppit/filebot/ Specifically, the part about changing "copy" to "rename". Caveat emptor!
  25. Did you mess with the settings in filebot.conf? Here are the defaults:
      SETTLE_DURATION=10
      MAX_WAIT_TIME=01:00
      MIN_PERIOD=05:00
      Note that MIN_PERIOD is 5 minutes, meaning that FileBot will run at most once every 5 minutes.
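      For anyone curious how a MIN_PERIOD gate behaves, here's an illustrative sketch. It is not the container's actual script; maybe_run and the /tmp/filebot.laststamp state file are made up for the demo:

      ```shell
      # Sketch of a minimum-period gate: a run is skipped if the previous
      # one happened less than MIN_PERIOD ago.
      MIN_PERIOD_SECS=300            # 05:00 expressed in seconds
      STAMP=/tmp/filebot.laststamp   # stand-in state file for the demo

      maybe_run() {
        now=$(date +%s)
        last=$(cat "$STAMP" 2>/dev/null || echo 0)
        if [ $((now - last)) -ge "$MIN_PERIOD_SECS" ]; then
          echo "$now" > "$STAMP"   # record this run's time
          echo "run"
        else
          echo "skip"
        fi
      }

      rm -f "$STAMP"
      maybe_run   # first call: prints "run"
      maybe_run   # immediately after: prints "skip"
      ```

      So even if files land in the watch folder right after a run, processing waits until the period has elapsed, which is why the container can look idle for a few minutes.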