Struck

Members · 88 posts

Everything posted by Struck

  1. Okay. Thank you for the answer. I look forward to a new release, whenever it is ready.
  2. I am having trouble updating to 2.2.0. I am still on 2.1.4, the Docker page shows that there is no update, and the Docker image is not available.
  3. I finally got a breakthrough. I tried many different settings, but when I switched the USB controller from 2.0 (EHCI) to 3.0 (qemu XHCI), the USB suddenly appeared as a bootable device. I hadn't tried this earlier, since I expected 2.0 to work better for boot than 3.0. Thanks for all your help so far.
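For anyone hitting the same wall: the controller change above corresponds to a one-line difference in the VM's libvirt XML. A sketch only — the controller index and any auto-added PCI address depend on your template:

```xml
<!-- qemu-xhci presents a USB 3.0 controller to the guest; with the
     ich9-ehci1 (USB 2.0) controller the passed-through flash drive
     was never offered as a boot device, with XHCI it was -->
<controller type='usb' index='0' model='qemu-xhci' ports='15'/>
```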
  4. Can't really find anything about the USB in the logs for the VM. The Unraid host logs tell me this:
  5. I tried creating the USB the manual way, but I still have the same error.
  6. I actually used the USB media creation tool to create the media, but altered the drive label and edited the files afterwards. Maybe this is not the way to do it. The EFI folder is named as it is by default when using the media creation tool.
  7. The host is using an HP w165 USB; the VM is using a SanDisk Extreme USB. So different vendor and model.
  8. I am using the USB Manager shown in the video. I have also tried to assign it using the normal method in the VM settings.
  9. Thanks for the information. This solved my initial problem. I followed the video, but the USB does not appear as a boot device in the VM. I edited the files on the USB drive and ran the make_bootable file without any problems.
  10. Hello everyone. I am trying to combine two Unraid machines I have been running. Since Unraid still does not support multiple arrays in a single install, I am trying to run the second Unraid in a VM. But when I plug a new Unraid USB into the already running system, the USB is not available to be assigned to a VM: the Unassigned Devices plugin says the USB is "array". The log tells me what happens: it seems like the existing Unraid picks up the Unraid USB and locks the device, as it is now the active license. The USB I am using has a new blank Unraid 6.10.3 install, until I know whether this will work or not. I already own the two Unraid Pro licenses. The host Unraid system is currently running Unraid 6.9.2. I can post diagnostics, but I don't know if that would help anyway.
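For reference, if the flash drive ever does become assignable, passing it to a guest by USB vendor/product ID is declared in the VM's libvirt XML roughly like this. A sketch only — the IDs below are illustrative SanDisk values; substitute whatever `lsusb` reports for the second Unraid stick:

```xml
<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <!-- illustrative IDs only; use the ones lsusb shows for your stick -->
    <vendor id='0x0781'/>
    <product id='0x5583'/>
  </source>
  <boot order='1'/>
</hostdev>
```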
  11. I realize I never replied to this topic. I tried reformatting the disk, and it seemed to be working. But I later changed the filesystem to XFS, and ever since I haven't had a problem with corrupt data or a full disk.
  12. Just to confirm: I don't HAVE to use a cold wallet address for Chia and alt-coins to work properly, right? My Flax farm hasn't caught any blocks for 5-6 weeks at this point. My estimated time to win is 8-10 days, but I know that is only an estimate and shouldn't count on it. My current setup does not use a cold address for rewards, and I have about 40 XFX already, but I haven't gained any for the last month and a half. Machinaris does not report any problems: only a few (0-3) skipped SPs a day, and a search time average under 0.5s.
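The odds of the dry spell above can be sanity-checked. Modelling block wins as a Poisson process with rate 1/ETW (a simplification — netspace drift changes the true rate over time), a minimal sketch:

```python
import math

def drought_probability(etw_days: float, elapsed_days: float) -> float:
    """Chance of winning zero blocks over `elapsed_days`, assuming wins
    arrive as a Poisson process with mean rate 1/etw_days."""
    return math.exp(-elapsed_days / etw_days)

# With an estimated time to win of ~9 days, a 6-week (42-day)
# dry spell is very unlikely by chance alone:
p = drought_probability(9, 42)  # roughly 0.009, i.e. under 1%
```

So a six-week drought at an 8-10 day ETW has under a 1% chance of being pure bad luck, which is why it is worth checking the farm rather than just waiting.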
  13. Hi. For the past two days I have gotten an error indicating my cache drive is out of space. This apparently causes the system to stop functioning correctly: the Docker service goes offline, and some of the CPU cores are pinned at 100%. The syslog says this:

Nov 4 07:59:24 ChiaTower kernel: BTRFS info (device sdo1): leaf 139039850496 gen 76585 total ptrs 0 free space 16283 owner 18446744073709551610
Nov 4 07:59:24 ChiaTower kernel: BTRFS critical (device sdo1): not enough freespace need 48992 have 16283
Nov 4 07:59:24 ChiaTower kernel: ------------[ cut here ]------------
Nov 4 07:59:24 ChiaTower kernel: kernel BUG at fs/btrfs/ctree.c:4814!
Nov 4 07:59:24 ChiaTower kernel: invalid opcode: 0000 [#1] SMP PTI
Nov 4 07:59:24 ChiaTower kernel: CPU: 11 PID: 28875 Comm: shfs Not tainted 5.10.28-Unraid #1
Nov 4 07:59:24 ChiaTower kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./X99M Extreme4, BIOS P3.10 06/13/2016
Nov 4 07:59:24 ChiaTower kernel: RIP: 0010:setup_items_for_insert+0xe1/0x294
Nov 4 07:59:24 ChiaTower kernel: Code: 39 f0 73 28 48 89 ef e8 f9 8c 00 00 48 89 ef e8 83 d9 ff ff 48 8b 7c 24 08 44 89 f2 48 c7 c6 de f3 d8 81 89 c1 e8 e3 82 49 00 <0f> 0b 48 8d 7c 24 40 48 89 ee e8 6b 97 ff ff 44 3b 64 24 20 0f 84
Nov 4 07:59:24 ChiaTower kernel: RSP: 0018:ffffc90002b03b10 EFLAGS: 00010296
Nov 4 07:59:24 ChiaTower kernel: RAX: 0000000000000000 RBX: 0000000000003f9b RCX: 0000000000000027
Nov 4 07:59:24 ChiaTower kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff88848fcd8920
Nov 4 07:59:24 ChiaTower kernel: RBP: ffff88810cd498c0 R08: 0000000000000000 R09: 00000000ffffefff
Nov 4 07:59:24 ChiaTower kernel: R10: ffffc90002b038c0 R11: ffffc90002b038b8 R12: 0000000000000000
Nov 4 07:59:24 ChiaTower kernel: R13: 000000000000ab10 R14: 000000000000bf60 R15: ffff8881037e0000
Nov 4 07:59:24 ChiaTower kernel: FS: 000014ece30f0700(0000) GS:ffff88848fcc0000(0000) knlGS:0000000000000000
Nov 4 07:59:24 ChiaTower kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 4 07:59:24 ChiaTower kernel: CR2: 00001527d203ca24 CR3: 000000014a2e4005 CR4: 00000000001706e0
Nov 4 07:59:24 ChiaTower kernel: Call Trace:
Nov 4 07:59:24 ChiaTower kernel: btrfs_insert_empty_items+0x65/0x77
Nov 4 07:59:24 ChiaTower kernel: copy_items.isra.0+0xe2/0x358
Nov 4 07:59:24 ChiaTower kernel: ? read_extent_buffer+0x1d/0x91
Nov 4 07:59:24 ChiaTower kernel: ? btrfs_set_path_blocking+0x1f/0x3e
Nov 4 07:59:24 ChiaTower kernel: ? btrfs_search_forward+0x23d/0x274
Nov 4 07:59:24 ChiaTower kernel: btrfs_log_inode+0x597/0xab6
Nov 4 07:59:24 ChiaTower kernel: btrfs_log_inode_parent+0x25b/0x901
Nov 4 07:59:24 ChiaTower kernel: ? slab_post_alloc_hook+0x1e/0x14a
Nov 4 07:59:24 ChiaTower kernel: ? _cond_resched+0x1b/0x1e
Nov 4 07:59:24 ChiaTower kernel: ? wait_current_trans+0xbc/0xda
Nov 4 07:59:24 ChiaTower kernel: ? kmem_cache_alloc+0x108/0x130
Nov 4 07:59:24 ChiaTower kernel: ? join_transaction+0x9d/0x3a3
Nov 4 07:59:24 ChiaTower kernel: btrfs_log_dentry_safe+0x36/0x4a
Nov 4 07:59:24 ChiaTower kernel: btrfs_sync_file+0x250/0x340
Nov 4 07:59:24 ChiaTower kernel: do_fsync+0x2a/0x44
Nov 4 07:59:24 ChiaTower kernel: __x64_sys_fsync+0xb/0xe
Nov 4 07:59:24 ChiaTower kernel: do_syscall_64+0x5d/0x6a
Nov 4 07:59:24 ChiaTower kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
Nov 4 07:59:24 ChiaTower kernel: RIP: 0033:0x14ecf13f5a8b
Nov 4 07:59:24 ChiaTower kernel: Code: 0f 05 48 3d 00 f0 ff ff 77 45 c3 0f 1f 40 00 48 83 ec 18 89 7c 24 0c e8 43 f7 ff ff 8b 7c 24 0c 41 89 c0 b8 4a 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 2f 44 89 c7 89 44 24 0c e8 81 f7 ff ff 8b 44
Nov 4 07:59:24 ChiaTower kernel: RSP: 002b:000014ece30efc70 EFLAGS: 00000297 ORIG_RAX: 000000000000004a
Nov 4 07:59:24 ChiaTower kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 000014ecf13f5a8b
Nov 4 07:59:24 ChiaTower kernel: RDX: 000014ece30efd60 RSI: 0000000000000001 RDI: 000000000000001e
Nov 4 07:59:24 ChiaTower kernel: RBP: 000014ece30efcc0 R08: 0000000000000001 R09: 000014ecc001c724
Nov 4 07:59:24 ChiaTower kernel: R10: 000000000046ab58 R11: 0000000000000297 R12: 000014ecc0010600
Nov 4 07:59:24 ChiaTower kernel: R13: 0000000000000001 R14: 00000000000000ac R15: 000014ece30efd60
Nov 4 07:59:24 ChiaTower kernel: Modules linked in: veth xt_nat xt_tcpudp xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs md_mod nct6775 hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables x_tables e1000e alx mdio intel_wmi_thunderbolt mxm_wmi x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper rapl intel_cstate intel_uncore i2c_i801 input_leds i2c_smbus i2c_core ahci led_class aacraid libahci wmi button [last unloaded: e1000e]
Nov 4 07:59:24 ChiaTower kernel: ---[ end trace d8547447692a8044 ]---
Nov 4 07:59:24 ChiaTower kernel: RIP: 0010:setup_items_for_insert+0xe1/0x294
Nov 4 07:59:24 ChiaTower kernel: Code: 39 f0 73 28 48 89 ef e8 f9 8c 00 00 48 89 ef e8 83 d9 ff ff 48 8b 7c 24 08 44 89 f2 48 c7 c6 de f3 d8 81 89 c1 e8 e3 82 49 00 <0f> 0b 48 8d 7c 24 40 48 89 ee e8 6b 97 ff ff 44 3b 64 24 20 0f 84
Nov 4 07:59:24 ChiaTower kernel: RSP: 0018:ffffc90002b03b10 EFLAGS: 00010296
Nov 4 07:59:24 ChiaTower kernel: RAX: 0000000000000000 RBX: 0000000000003f9b RCX: 0000000000000027
Nov 4 07:59:24 ChiaTower kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff88848fcd8920
Nov 4 07:59:24 ChiaTower kernel: RBP: ffff88810cd498c0 R08: 0000000000000000 R09: 00000000ffffefff
Nov 4 07:59:24 ChiaTower kernel: R10: ffffc90002b038c0 R11: ffffc90002b038b8 R12: 0000000000000000
Nov 4 07:59:24 ChiaTower kernel: R13: 000000000000ab10 R14: 000000000000bf60 R15: ffff8881037e0000
Nov 4 07:59:24 ChiaTower kernel: FS: 000014ece30f0700(0000) GS:ffff88848fcc0000(0000) knlGS:0000000000000000
Nov 4 07:59:24 ChiaTower kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 4 07:59:24 ChiaTower kernel: CR2: 00001527d203ca24 CR3: 000000014a2e4005 CR4: 00000000001706e0
Nov 4 08:59:23 ChiaTower emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog

In the first lines we see this:

Nov 4 07:59:24 ChiaTower kernel: BTRFS critical (device sdo1): not enough freespace need 48992 have 16283

which means my drive sdo does not have enough space. That drive is mapped to my cache drive, an Intel 480GB SSD, which ONLY holds my Docker and app data for 3 dockers. Two of these dockers have high disk usage, but nowhere near 480GB; they also write to the disk all the time, but not at full speed. The dockers are Krusader, Machinaris, and Machinaris-flax, which are used for Chia and Flax farming. Unraid says I have used 102GiB of space on the disk. Is the disk bad, or am I experiencing another problem? I have lots of spare disks of the same type and size, so I can swap it if needed. Currently I have 2 SSDs as UD, one of which I previously used as my cache disk, but it appeared to have some problems where btrfs would get corrupted a couple of times. That issue can be found here: But I have not experienced that issue with the replacement cache disk. chiatower-diagnostics-20211104-0902.zip
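A note for anyone finding this via search: a btrfs pool can report ENOSPC while the GUI shows plenty of free space, when all raw space has been allocated to data chunks and the metadata chunks fill up. A sketch of the usual checks (the mount point /mnt/cache is an assumption; adjust to your system):

```
# Show the data/metadata chunk split; near-zero "unallocated"
# with metadata almost full is the classic symptom
btrfs filesystem usage /mnt/cache

# Compact data chunks that are at most 50% full, returning their
# space to "unallocated" so metadata chunks can grow again
btrfs balance start -dusage=50 /mnt/cache
```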
  14. Please read the notes on updating from 0.5.x to 0.6.0 for the Unraid Docker. https://github.com/guydavis/machinaris/wiki/Unraid#how-do-i-update-from-v05x-to-v060-with-fork-support
  15. Thanks for the new version. With a little finagling I managed to get it to work with Flax.
  16. Yeah, I see I gained a block reward, so it must still be farming. But Flaxdog is likely not running, and also not providing alerts and daily updates (this has never worked for Flax, for me). I am on Machinaris 0.5, which seems to be the newest at this time.
  17. I get the following error from Flaxdog. Is that something I should be concerned about? On the summary page, Flax and Chia are still listed as Farming: active

[2021-10-11 08:53:36] [ INFO] --- Starting Flaxdog (v0.6.0-3-g8867a71) (main.py:54)
Traceback (most recent call last):
  File "/flaxdog/main.py", line 111, in <module>
    init(conf)
  File "/flaxdog/main.py", line 57, in init
    flax_logs_config = config.get_flax_logs_config()
  File "/flaxdog/src/config.py", line 35, in get_flax_logs_config
    return self._get_child_config("flax_logs")
  File "/flaxdog/src/config.py", line 22, in _get_child_config
    raise ValueError(f"Invalid config - cannot find {key} key")
ValueError: Invalid config - cannot find flax_logs key
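From the traceback, Flaxdog starts but its config has no `flax_logs` section, so it exits before it can watch the log. Flaxdog is derived from chiadog, whose config keys this sketch is modelled on — the key names and path below are assumptions to verify against the config.yaml shipped in the container:

```yaml
# Hypothetical fragment; verify key names against flaxdog's own config.yaml
flax_logs:
  file_log_consumer:
    enable: true
    file_path: '~/.flax/mainnet/log/debug.log'
```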
  18. Thank you! I don't really know how that part got in there, but I remember at some point trying to directly attach an SSD to a VM to improve performance, which did not work as intended, so I scrapped the idea. The SSD used was not my cache SSD, but a secondary SSD that is no longer in the system. Something must have been left over from that specific configuration. Anyway, it seems to be working as intended after removing this piece of code.
  19. Maybe it is just me not seeing it, but I see no indication of that in the XML. Below is a look at my XML after a reboot of the machine. It is true that after I tried starting the VM, the XML seems to automatically include the cache disk as a passed-through PCIe device.

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10 v2</name>
  <uuid>e04fb216-5639-1938-322d-c7720c44bfaa</uuid>
  <description>Using QE35</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='10'/>
    <vcpupin vcpu='1' cpuset='24'/>
    <vcpupin vcpu='2' cpuset='11'/>
    <vcpupin vcpu='3' cpuset='25'/>
    <vcpupin vcpu='4' cpuset='12'/>
    <vcpupin vcpu='5' cpuset='26'/>
    <vcpupin vcpu='6' cpuset='13'/>
    <vcpupin vcpu='7' cpuset='27'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/e04fb216-5639-1938-322d-c7720c44bfaa_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10 v2/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows10_1903iso.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.190-1.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:47:5f:1c'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='da'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>
  20. I have a Windows 10 VM that, when I try to start it, causes the cache disk to become read-only. I have used the VM before, but cannot remember if I have changed any of its configuration since. I tried looking it over, but cannot see any problems. I use these VMs regularly: [Name] Windows 10, Ubuntu. And the problematic one is this: Windows 10 v2. When started, it throws an execution error: Execution error: unable to open /mnt/user/domains/Windows 10 v2/vdisk1.img: Read-only file system. Up until that point it works. Upon server restart the cache is mounted and works perfectly, but a parity sync is started. hotbox-diagnostics-20210927-1717.zip
  21. Not long enough. 2 passes, like 4 hours.
  22. All drives are passed to the OS as individual drives, and SMART data is available. I have not tried running a RAID mode on the card.
  23. Memtest didn't find anything. The extended SMART test did not find any issues either. I have not tried replacing the drive yet; I will do that after the weekend, I guess.
  24. Okay. Thanks, I will try it after the extended SMART test is done. As a side note, the cache disk mounted normally after a reboot.