Struck

Members · 74 posts

  1. Please read the notes on updating from 0.5.x to 0.6.0 for the Unraid Docker: https://github.com/guydavis/machinaris/wiki/Unraid#how-do-i-update-from-v05x-to-v060-with-fork-support
  2. Thanks for the new version. With a little finagling I managed to get it to work with Flax.
  3. Yeah, I see I gained a block reward, so it must still be farming, but flaxdog is likely not running, and thus not providing alerts and daily updates. (This has never worked for Flax for me.) I am on Machinaris 0.5, which seems to be the newest at present.
  4. I get the following error from flaxdog. Is that something I should be concerned about? On the summary page, Flax and Chia are still listed as Farming: active. (A sketch of the failing config lookup follows below.)

     [2021-10-11 08:53:36] [ INFO] --- Starting Flaxdog (v0.6.0-3-g8867a71) (main.py:54)
     Traceback (most recent call last):
       File "/flaxdog/main.py", line 111, in <module>
         init(conf)
       File "/flaxdog/main.py", line 57, in init
         flax_logs_config = config.get_flax_logs_config()
       File "/flaxdog/src/config.py", line 35, in get_flax_logs_config
         return self._get_child_config("flax_logs")
       File "/flaxdog/src/config.py", line 22, in _get_child_config
         raise ValueError(f"Invalid config - cannot find {key} key")
     ValueError: Invalid config - cannot find flax_logs key
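     The traceback pins down the failure: flaxdog's Config helper looks up a top-level flax_logs section and aborts startup when it is absent. A minimal sketch of that lookup, reconstructed from the traceback alone (only the method names and the flax_logs key come from the log; the surrounding class is an assumption):

     # Sketch of the config lookup that fails in the traceback above.
     # Only _get_child_config, get_flax_logs_config, and the "flax_logs"
     # key are taken from the log; the rest is illustrative.
     class Config:
         def __init__(self, config_dict):
             self._config = config_dict  # parsed config file as nested dicts

         def _get_child_config(self, key):
             # The line that fires at config.py:22 - a missing section
             # aborts startup before any farming checks run.
             if key not in self._config:
                 raise ValueError(f"Invalid config - cannot find {key} key")
             return self._config[key]

         def get_flax_logs_config(self):
             return self._get_child_config("flax_logs")

     # Reproduces the reported error with a config lacking a flax_logs section:
     try:
         Config({}).get_flax_logs_config()
     except ValueError as e:
         print(e)  # Invalid config - cannot find flax_logs key

     In other words, the error points at the config file rather than the farmer: flaxdog wants a flax_logs section present before it will finish init.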
  5. Thank you! I don't really know how that part got in there, but I remember that at some point I tried directly attaching an SSD to a VM to improve performance, which did not work as intended, so I scrapped the idea. The SSD used was not my cache SSD but a secondary SSD that is no longer in the system. Something must have been left over from that specific configuration. Anyway, it seems to be working as intended after removing that piece of the config.
  6. Maybe it is just me not seeing it, but I see no indication of that in the XML. Below is my XML after a reboot of the machine. It is true that after I tried starting the VM, it seems the XML automatically includes the cache disk as a PCIe device passed through. (A short sketch for listing the passed-through devices follows after the XML.)

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm'>
       <name>Windows 10 v2</name>
       <uuid>e04fb216-5639-1938-322d-c7720c44bfaa</uuid>
       <description>Using QE35</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>8388608</memory>
       <currentMemory unit='KiB'>8388608</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='10'/>
         <vcpupin vcpu='1' cpuset='24'/>
         <vcpupin vcpu='2' cpuset='11'/>
         <vcpupin vcpu='3' cpuset='25'/>
         <vcpupin vcpu='4' cpuset='12'/>
         <vcpupin vcpu='5' cpuset='26'/>
         <vcpupin vcpu='6' cpuset='13'/>
         <vcpupin vcpu='7' cpuset='27'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/e04fb216-5639-1938-322d-c7720c44bfaa_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='4' threads='2'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/Windows 10 v2/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/Windows10_1903iso.iso'/>
           <target dev='hda' bus='sata'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
           <target dev='hdb' bus='sata'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x10'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x11'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0x12'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0x13'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0x14'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:47:5f:1c'/>
           <source bridge='br0'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='da'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
         </video>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <memballoon model='virtio'>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </memballoon>
       </devices>
     </domain>
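     For reference, the passed-through hardware is easy to pick out of a dump like this programmatically. A minimal sketch, assuming the XML above is saved as win10v2.xml (the filename is hypothetical; the <hostdev> element and its vfio driver come from the dump itself):

     # List PCI devices passed through to the guest in a libvirt domain XML.
     # Cross-reference the printed host addresses against lspci output to
     # see whether the cache SSD's controller is among them.
     import xml.etree.ElementTree as ET

     tree = ET.parse("win10v2.xml")  # hypothetical filename for the dump above
     for hostdev in tree.getroot().iter("hostdev"):
         if hostdev.get("type") != "pci":
             continue
         addr = hostdev.find("./source/address")
         print("passed through: {}:{}:{}.{}".format(
             addr.get("domain"), addr.get("bus"),
             addr.get("slot"), addr.get("function")))

     For the dump above this prints a single entry, 0x0000:0x01:0x00.0, which can be matched against the host's lspci listing to identify the device.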
  7. I have a Windows 10 VM that, when I try to start it, causes the cache disk to become read-only. I have used the VM before, but I cannot remember whether I have changed any of its configuration since. I tried looking it over, but cannot see any problems. I use these VMs regularly: Windows 10 and Ubuntu. The problematic one is Windows 10 v2. When started, it throws an execution error: "unable to open /mnt/user/domains/Windows 10 v2/vdisk1.img: Read-only file system". Up until that point everything works. After a server restart the cache is mounted and works perfectly, but a parity sync is started. (A quick host-side check follows below.) hotbox-diagnostics-20210927-1717.zip
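     One quick way to confirm the symptom from the host side is to test whether the filesystem holding the image still accepts writes. A minimal sketch (the path comes from the error message; everything else is illustrative, and it should only be run while the VM is stopped):

     # Check whether the filesystem holding the vdisk accepts writes.
     # If the cache pool has flipped to read-only, the open below fails
     # with EROFS ("Read-only file system"), matching the libvirt error.
     import errno
     import os

     path = "/mnt/user/domains/Windows 10 v2/vdisk1.img"
     try:
         fd = os.open(path, os.O_WRONLY | os.O_APPEND)  # no data is written
         os.close(fd)
         print("filesystem is writable")
     except OSError as e:
         if e.errno == errno.EROFS:
             print("read-only file system, as in the VM error")
         else:
             raise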
  8. Not long enough: two passes, about 4 hours.
  9. All drives are passed to the OS as individual drives, and SMART data is available. I have not tried running a RAID mode on the card.
  10. Memtest didn't find anything. The extended SMART test did not find any issues either. I have not tried replacing the drive yet; I will do that after the weekend, I guess.
  11. Okay, thanks. I will try it after the extended SMART test is done. As a side note, the cache disk mounted normally after a reboot.
  12. I will run an extended SMART test after the reboot. I have now inserted a new SSD that is supposed to replace the one I currently use. I will try memtest later, but I hadn't had any problems before I installed the SSD. The array seems to be unaffected by this problem.
  13. I used the instructions to restore the data: I formatted the drive and copied the data back afterwards. It worked for less than three days; now the issue is the same. The log is filled with entries like this (a per-device tally sketch follows after this post):

     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 4, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15759626240 csum 0x21417709 expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 5, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): parent transid verify failed on 8343076864 wanted 3304 found 3233
     Aug 31 04:29:37 ChiaTower kernel: BTRFS info (device sdm1): no csum found for inode 12305 start 15759699968
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15759699968 csum 0x108cc45f expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 6, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): parent transid verify failed on 8343076864 wanted 3304 found 3233
     Aug 31 04:29:37 ChiaTower kernel: BTRFS info (device sdm1): no csum found for inode 12305 start 15759708160
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15759708160 csum 0x7d0b155f expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 7, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): parent transid verify failed on 8343076864 wanted 3304 found 3233
     Aug 31 04:29:37 ChiaTower kernel: BTRFS info (device sdm1): no csum found for inode 12305 start 15759736832
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15759736832 csum 0xabb5631a expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 8, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): parent transid verify failed on 8343076864 wanted 3304 found 3233
     Aug 31 04:29:37 ChiaTower kernel: BTRFS info (device sdm1): no csum found for inode 12305 start 15760031744
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15760031744 csum 0xb842b40e expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 9, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): parent transid verify failed on 8343076864 wanted 3304 found 3233
     Aug 31 04:29:37 ChiaTower kernel: BTRFS info (device sdm1): no csum found for inode 12305 start 15759298560
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15759298560 csum 0xff2de314 expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 10, gen 0
     Aug 31 04:30:13 ChiaTower kernel: verify_parent_transid: 10 callbacks suppressed
     Aug 31 04:30:13 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:30:13 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:30:44 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:30:44 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:31:15 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:31:15 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:31:46 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:31:46 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:32:18 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:32:18 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:32:49 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:32:49 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:33:20 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:33:20 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:33:51 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764

     My guess is that if I try to reboot the machine, the cache drive partition cannot be mounted, even though I can access the cache drive fine before the reboot. Is the drive bad? I have several of these drives, so I can try replacing it. Would I have fewer issues if I ran multiple of them in the cache pool? chiatower-diagnostics-20210831-1815.zip
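     What stands out is the corrupt counter on sdm1 climbing from 4 to 10 within a single second, plus the repeating transid failures on loop2 (typically the Docker image loopback on Unraid). A minimal sketch for tallying these per device from a saved syslog (the syslog.txt filename is hypothetical; the message format is copied from the lines above):

     # Tally BTRFS kernel-log error/warning lines per device, to see which
     # device the corruption messages concentrate on (sdm1 vs. loop2 above).
     import re
     from collections import Counter

     pattern = re.compile(r"BTRFS (error|warning) \(device (\w+)\):")
     counts = Counter()

     with open("syslog.txt") as f:  # hypothetical capture of the log above
         for line in f:
             m = pattern.search(line)
             if m:
                 severity, device = m.groups()
                 counts[(device, severity)] += 1

     for (device, severity), n in sorted(counts.items()):
         print(f"{device}: {n} {severity} lines")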
  14. Diagnostics attached. The restart also triggered a parity sync; I don't know why, since the array seems to be unaffected by this problem. chiatower-diagnostics-20210829-1504.zip
  15. This morning the Docker service crashed with one container running; it had filled the log, so I tried to restart. Now it seems that my cache drive won't mount; the BTRFS filesystem appears unmountable. The cache drive log says this. How do I fix this problem? The cache drive was added less than one week ago.