stottle

Members

  • Content Count: 143
  • Joined
  • Last visited

Community Reputation: 16 Good

About stottle

  • Rank: Member


  1. Thanks for all of the work here. I've got nextcloud/letsencrypt working with duckdns, which I wouldn't have tried without the support and tutorials here. One annoyance - is there an easy way to get unset URLs (https://mydomain.duckdns.org/random_garbage) to return a 404 instead of the default "Welcome to our server" page? Google searches for 404 and "welcome to our server" don't help...
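     One approach that might work: if the default site config uses "try_files $uri $uri/ /index.html;", every unknown URL falls back to the welcome page, and switching the fallback to =404 changes that. A sketch, assuming the linuxserver letsencrypt container keeps its config under /mnt/user/appdata/letsencrypt (adjust to your setup):

         # Hypothetical path -- check where your container stores its nginx site config
         CONF=/mnt/user/appdata/letsencrypt/nginx/site-confs/default

         # Replace the index.html fallback with a plain 404
         sed -i 's|try_files $uri $uri/ /index.html;|try_files $uri $uri/ =404;|' "$CONF"

         # Restart the container so nginx picks up the change
         docker restart letsencrypt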
  2. Balance failed and there are other errors. Here's a snippet:

         Feb 15 18:30:52 Tower2 emhttp: shcmd (147): set -o pipefail ; /sbin/btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache |& logger &
         Feb 15 18:30:52 Tower2 emhttp: shcmd (148): sync
         Feb 15 18:30:52 Tower2 kernel: BTRFS info (device sdb1): relocating block group 1937353211904 flags 1
         Feb 15 18:30:52 Tower2 emhttp: shcmd (149): mkdir /mnt/user0
         Feb 15 18:30:52 Tower2 emhttp: shcmd (150): /usr/local/sbin/shfs /mnt/user0 -disks 62 -o noatime,big_writes,allow_other |& logger
         Feb 15 18:30:52 Tower2 emhttp: s
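     The snippet cuts off before showing why the balance failed; two quick checks to see whether anything is still running and what the kernel logged:

         btrfs balance status /mnt/cache    # shows progress, or "No balance found" if nothing is running
         dmesg | tail -n 50                 # recent kernel messages, including any btrfs errors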
  3. Something seems to be wrong. The GUI is running very slowly (several-second waits to load webpages) and I've clicked "Balance" twice without it making any changes to the screen. I.e., it still looks like the image I sent previously, not showing that a balance operation is running. I'm seeing the following repeated in the logs:

         Feb 15 18:57:41 Tower2 root: ERROR: unable to resize '/var/lib/docker': Read-only file system
         Feb 15 18:57:41 Tower2 root: Resize '/var/lib/docker' of 'max'
         Feb 15 18:57:41 Tower2 emhttp: shcmd (461): /etc/rc.d/rc.docker start |& logger
         Feb 15 18:57:41 Tower2 root: st
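     The "Read-only file system" errors suggest btrfs remounted the docker image (or the cache underneath it) read-only after hitting errors. One way to confirm, as a sketch:

         findmnt -o TARGET,FSTYPE,OPTIONS /var/lib/docker   # "ro" in OPTIONS means it went read-only
         findmnt -o TARGET,FSTYPE,OPTIONS /mnt/cache
         dmesg | grep -i 'forced readonly'                  # btrfs logs this when it flips a filesystem ro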
  4. It doesn't look like a balance started automatically. Under Main -> Cache Devices, both SSDs were listed. When I ran blkdiscard and refreshed, Cache 2's icon turned from green to blue, so I started the array. Now the cache details look like the attached image, with no balance appearing to be running. Diagnostics attached as well. tower2-diagnostics-20170215-1843.zip
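     Two sanity checks after a blkdiscard, before re-adding the device (device name below is hypothetical):

         lsblk -f /dev/sdc                  # should show no filesystem signature after the discard
         btrfs filesystem show /mnt/cache   # lists the devices the pool currently contains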
  5. @johnnie.black - thanks for your help and patience. I've updated to 6.3.1, powered down, disconnected cache2 and restarted, then started the array. At that point, cache mounts to /mnt/cache. So far so good. Should I just add the 2nd drive and then balance, or do you have other suggestions in this case? FYI:

         root@Tower2:~# btrfs dev stats /mnt/cache
         [devid:1].write_io_errs   441396
         [devid:1].read_io_errs    407459
         [devid:1].flush_io_errs   2047
         [devid:1].corruption_errs 0
         [devid:1].generation_errs 0
         [/dev/sdb1].write_io_errs 0
         [/dev/sdb1].read_io_errs  0
         [/dev/sdb1].flush
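     If the plan is to re-add the second SSD, one common sequence (device name hypothetical; not a definitive recipe) is to zero the error counters first so any new errors stand out, then add the device and balance back to raid1:

         btrfs device stats -z /mnt/cache                                 # reset the per-device error counters
         btrfs device add -f /dev/sdc1 /mnt/cache                         # the re-attached SSD (adjust the device name)
         btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache   # restore raid1 across both devices
         btrfs dev stats /mnt/cache                                       # should stay at zero if cabling was the culprit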
  6. <swearing> So I tried changing the sata port for the problematic drive on my mobo. Array and cache drives looked OK on reboot, so I tried stats and got:

         root@Tower2:~# btrfs dev stats /mnt/cache
         ERROR: cannot check /mnt/cache: No such file or directory
         ERROR: '/mnt/cache' is not a mounted btrfs device

     Hmm, ok. Maybe the cache isn't available until I start the array, so I started the array. It now says there is an unmountable disk present:

         Cache • Samsung_SSD_850_EVO_500GB_S21HNXAGC11924P (sdb)

     I immediately turned off the array. Apologies, but I don't want to touch anything until
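     For inspection without risking writes, a btrfs raid1 pool can often be mounted read-only and degraded outside the array (a cautious sketch, not a fix):

         mkdir -p /mnt/test
         mount -o ro,degraded /dev/sdb1 /mnt/test   # read-only; tolerates a missing raid1 member
         btrfs filesystem show                      # list pools and which member devices are present
         umount /mnt/test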
  7. Not sure if this is progress or not. The above error looked like a passthrough issue, so I opened the edit window and changed the sound card from the nvidia GPU to "None". So VNC instead of GPU for audio/video, and I removed the passthrough of my pcie usb controller. With these changes, the VM will actually start, but it goes immediately into a BSOD ("Windows ran into a problem") in the VNC window. I would have thought Windows would have all the necessary drivers for VNC, so I'm not sure what the problem is here. Help? And to make matters worse, sdh is already showing new errors after t
  8. Ok, powered off and replaced the sata cables. I had tried starting the VM after running scrub with corrections enabled, but received the same error. Now, after powering back on after swapping cables, I'm getting a new message:

         Execution error
         internal error: qemu unexpectedly closed the monitor: 2017-02-13T00:07:35.264400Z qemu-system-x86_64: -device vfio-pci,host=01:00.1,id=hostdev0,bus=pci.0,addr=0x6: vfio: error, group 1 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
         2017-02-13T00:07:35.264413Z qemu-system-x86_64: -device vfio-
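     The "group 1 is not viable" part means some other device shares IOMMU group 1 and isn't bound to vfio. Listing the group's members shows what else lives in it (group number taken from the error):

         for dev in /sys/kernel/iommu_groups/1/devices/*; do
             lspci -nns "${dev##*/}"    # one line per PCI device in IOMMU group 1
         done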
  9. Hmm, I turned off all running dockers (the array is still running, but this is the cache drive) and tried running a 2nd read-only scrub. I was curious how repeatable it was. It actually has a few LESS errors.

         root@Tower2:/# btrfs scrub start -rdB /mnt/cache > /boot/logs/scrub_cache2.log
         root@Tower2:/# cat /boot/logs/scrub_cache2.log
         scrub device /dev/sdh1 (id 1) done
                 scrub started at Sun Feb 12 18:00:23 2017 and finished after 00:06:36
                 total bytes scrubbed: 75.84GiB with 175178 errors
                 error details: verify=679 csum=174499
                 corrected errors: 0, uncorrectable errors: 0, unverified errors
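     With both runs saved to files, diffing makes the drift between the two scrubs explicit, and the per-device counters show which SSD the errors are charged against:

         diff /boot/logs/scrub_cache.log /boot/logs/scrub_cache2.log
         btrfs dev stats /mnt/cache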
  10. Thanks for the help. Any suggestions for determining what is causing the errors?
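     Two common first checks, since btrfs checksum errors can come from the drive or from RAM (device name taken from the scrub output above):

         smartctl -a /dev/sdh        # full SMART attributes for the SSD reporting errors
         smartctl -t short /dev/sdh  # kick off a short drive self-test
         # Bad RAM also corrupts checksums; a Memtest86 pass from the unRAID boot menu is worth running too.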
  11. I'm trying to see if there is something else that might be causing the problem. I'm running a btrfs raid1 cache, but get the following:

         root@Tower2:/# btrfs scrub start -rdB /mnt/cache > /boot/logs/scrub_cache.log
         root@Tower2:/# vi /boot/logs/scrub_cache.log
         reading /boot/logs/scrub_cache.log
         Read /boot/logs/scrub_cache.log, 8 lines, 416 chars
         scrub device /dev/sdh1 (id 1) done
                 scrub started at Sun Feb 12 17:23:16 2017 and finished after 00:06:38
                 total bytes scrubbed: 75.88GiB with 175313 errors
                 error details:
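     Note that the -r flag makes the scrub read-only. On a raid1 pool, dropping it lets scrub rewrite bad copies from the good mirror (only worth doing once the underlying cause is addressed):

         btrfs scrub start -Bd /mnt/cache   # -B waits in the foreground, -d prints per-device stats; without -r it repairs
         btrfs scrub status /mnt/cache      # progress/summary if started without -B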
  12. Already tried that; it didn't help. The original xml was from before I tried the steps listed in the release notes. I tried those steps, with no luck, then tried disabling all passthrough devices as well. Same error message. My current xml is:

         <domain type='kvm'>
           <name>Win10</name>
           <uuid>449c8082-8631-ef95-bd97-1bdad139ddc7</uuid>
           <description>Windows 10</description>
           <metadata>
             <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
           </metadata>
           <memory unit='KiB'>8388608<
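     For iterating on the definition outside the unRAID form, virsh can round-trip the XML directly (VM name taken from the XML above):

         virsh dumpxml Win10 > /tmp/Win10.xml   # snapshot the current definition
         virsh edit Win10                       # edit in $EDITOR; libvirt validates on save
         virsh define /tmp/Win10.xml            # or re-import a saved copy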
  13. Note: This isn't the "Trying to start my VM gives an 'Invalid Machine Type' error" issue noted in the release notes. I've tried the suggestions listed there and they have no effect. Any other suggestions?
  14. If it helps, my VM xml is shown below. I do see:

         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>

     This is a file on unRAID dated Feb 2nd, so I am guessing it is part of the 6.3.0 update. I'm not sure what that means, but the error message says initialization of pflash failed...

         <domain type='kvm'>
           <name>Win10</name>
           <uuid>449c8082-8631-ef95-bd97-1bdad139ddc7</uuid>
           <description>Windows 10</description>
           <metadata>
             <vmtemplate xmlns="unraid" name="Wind
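     Since qemu says it failed to read the initial flash content, checking that the loader file the XML points at is present, non-empty and readable is a cheap first step:

         ls -l /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd    # exists, non-zero size, sane permissions?
         md5sum /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd   # compare against another 6.3.0 install if possible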
  15. I have a Win10 VM that I use GPU passthrough with. I upgraded to unRAID 6.3.0 (from 6.2.4), shut down the VM, and restarted unRAID. Now unRAID pops up a big error message when I try to start the VM:

         Execution error
         internal error: qemu unexpectedly closed the monitor: 2017-02-04T21:56:40.034390Z qemu-system-x86_64: Initialization of device cfi.pflash01 failed: failed to read the initial flash content

     Could be 6.3.0 related, but might not be. And I figured the message would make more sense to the people in this forum. I hope you guys have suggestions.
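     Besides the shared OVMF_CODE file, each OVMF VM also has a per-VM NVRAM vars file that pflash reads; on unRAID these typically sit under /etc/libvirt/qemu/nvram (that path is an assumption -- verify against the VM's own XML). If that file is missing or zero-length, this same error can appear:

         ls -l /etc/libvirt/qemu/nvram/         # assumed unRAID location of per-VM OVMF vars files
         virsh dumpxml Win10 | grep -i nvram    # shows the exact vars path this VM uses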