zeus83

Members · 62 posts
Everything posted by zeus83

  1. Hi, I recommend you first get familiar with this article: https://unraid.net/blog/zfs-guide Basically it all depends on your needs and use cases. raidz is a good choice when you favour disk space (which is typically what you need from a NAS, I assume) over IOPS. One piece of advice regarding enabling ZFS autotrim: be aware that autotrim doesn't come for free and may impact your pool latency and therefore performance. How bad the impact is depends on the SSD controller and its firmware. Some controllers drop performance significantly when doing TRIM, and latency will increase in any case; others handle it well. So you need to test your SSDs first to check that your disk controllers handle it fine. Use the command zpool iostat -vly <your pool name> 3 1000 to monitor your pool stats; you'll see the TRIM timings there. If the timings are bad or latency is critical for you, consider a scheduled TRIM instead of autotrim (see the sketch below). How often you need to TRIM your drives depends on your write workload, so it may be weekly or daily.
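     If you go the scheduled route instead of autotrim, here's a minimal sketch (assuming a pool named tank and something like cron or the User Scripts plugin to schedule it; adjust the frequency to your write workload):

         #!/bin/bash
         # start a manual TRIM of the whole pool; zpool trim returns immediately
         # and the actual trimming runs in the background
         zpool trim tank

         # check TRIM progress and status per vdev
         zpool status -t tank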
  2. Hi, I owned lots of GTX 1080s and none of them was incompatible. I'd say with the GTX 1080 / GTX 1080 Ti I had zero issues on passthrough. Did you follow the instructions on GPU passthrough precisely? Do you have these lines in your VM configuration?
     <features>
       ...
       <hyperv mode='custom'>
         ...
         <vendor_id state='on' value='1234567890ab'/>
         ...
       </hyperv>
       <kvm>
         <hidden state='on'/>
       </kvm>
       ...
     </features>
     Also, if this is the only GPU in your system, try putting this onto the boot line: video=efifb:off
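     For reference, on Unraid that parameter goes on the append line in syslinux.cfg (Main > Flash > Syslinux Configuration); a sketch, your existing append line may already carry other options:

         label Unraid OS
           menu default
           kernel /bzimage
           append video=efifb:off initrd=/bzroot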
  3. Hi,
     1) Yes, one of the options to consider is to pass an SSD/NVMe drive directly through to the VM and use it for game storage. It is a nice option: you'll get close to bare-metal performance from the given drive. I've been using this option for years with my Samsung NVMe drive with zero issues. The drawback is that the drive is not protected, and in case of drive failure you'll lose your data, which I think is not critical for games. Anyway, that's what you'd get on bare metal too.
     2) Another option is to use a BTRFS/ZFS pool and create a raw image drive for keeping your games. I would actually split your VM img and your game drive img for easier management (see the sketch below). With this option you get disk protection (with a mirror or raidz setup), instant snapshots, compression, L2ARC caching (at least with ZFS) and many nice-to-have 'enterprise' features. But you'll pay for it with drive capacity (50% in a mirror setup, ~33% in a raidz of 3 disks). Performance-wise things get complicated: you might see slightly degraded performance, the same performance, or even better performance. This all depends on your setup and tuning... and how many hours you want to spend on it.
     3) Using the array for a gaming VM is also a viable option, because the array gives you a lot of cheap HDD capacity. However, I would expect the performance to be the worst of all the options.
     All options will work out of the box for you. But since you're new to this, I'd recommend the first one. It's a no-brainer.
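     For option 2, a minimal sketch of splitting the game disk out as its own raw image on a pool (the path and size are just examples):

         # create a sparse 500G raw image for games, separate from the OS vdisk
         qemu-img create -f raw /mnt/cache/domains/win10/games.img 500G

         # then attach it to the VM as a second disk in the VM XML, e.g.:
         # <disk type='file' device='disk'>
         #   <driver name='qemu' type='raw' cache='writeback'/>
         #   <source file='/mnt/cache/domains/win10/games.img'/>
         #   <target dev='vdb' bus='virtio'/>
         # </disk>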
  4. I wish I had seen this topic earlier, but I'll put this here anyway; maybe it'll help someone. Whenever you see your GPU underutilized, the issue might be the timer. Your VM 'clock' section must look like this:
     <clock offset='localtime'>
       <timer name='hypervclock' present='yes'/>
       <timer name='hpet' present='no'/>
       <timer name='tsc' present='yes' mode='native'/>
     </clock>
     Also read this: [SOLVED] GEARS OF WAR 4/5 WINDOWS VM TERRIBLE PERFORMANCE. In a nutshell, you must ensure your host's TSC timer works correctly.
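     To check what the host is actually using, something along these lines (standard Linux sysfs paths; you want to see tsc as current):

         # the clocksource the kernel is currently using -- should print 'tsc'
         cat /sys/devices/system/clocksource/clocksource0/current_clocksource

         # what's available; if tsc is missing, the kernel likely marked it unstable
         cat /sys/devices/system/clocksource/clocksource0/available_clocksource

         # look for kernel complaints about the TSC
         dmesg | grep -i tsc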
  5. +1. Played Fortnite in a VM for years with zero issues until the recent update. Upset... Seems there is no workaround yet.
  6. Great news on Unraid support for ZFS pools! I've just recently migrated all my servers to ZFS with the help of community plugins, so I think system-level support for ZFS is a good start and the right direction for further Unraid development.
  7. Has anyone had success running an RTX 4090 in a Windows 11 VM? Whenever I run a Windows 11 VM for some period, the screen eventually goes black and never returns. The OS seems to keep running fine meanwhile, but there's no picture at all. Running the same setup in a Windows 10 VM works perfectly fine. I also tried the RTX 4090 on my bare-metal Windows 11 installation and there were troubles running games (they simply crash soon after start), but the screen didn't turn black there.
  8. I replaced the memory modules and there are no reboots now, and no kernel errors either. I'm still testing the old modules via memtest and they don't show any errors yet, so possibly some hardware incompatibility... Therefore I must admit that in my case this is not related to Unraid.
  9. I disabled my VM for a night and caught an error in the logs (the server didn't reboot this time):
     Jan 5 10:14:52 ares-unraid kernel: CPU: 6 PID: 0 Comm: swapper/6 Tainted: P O 5.19.17-Unraid #2
     Jan 5 10:14:52 ares-unraid kernel: Hardware name: ASUS System Product Name/ProArt X570-CREATOR WIFI, BIOS 0801 04/26/2022
     Jan 5 10:14:52 ares-unraid kernel: RIP: 0010:__nf_conntrack_confirm+0xa5/0x2cb [nf_conntrack]
     Jan 5 10:14:52 ares-unraid kernel: Code: c6 48 89 44 24 10 e8 dd e2 ff ff 8b 7c 24 04 89 da 89 c6 89 04 24 e8 56 e6 ff ff 84 c0 75 a2 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 8b 34 24 8b 7c 24 04 e8 16 de ff ff e8 2c e3 ff ff e9 7e 01
     Jan 5 10:14:52 ares-unraid kernel: RSP: 0018:ffffc900003788c8 EFLAGS: 00010202
     Jan 5 10:14:52 ares-unraid kernel: RAX: 0000000000000188 RBX: 0000000000000000 RCX: 74f8a55f28104df6
     Jan 5 10:14:52 ares-unraid kernel: RDX: 0000000000000000 RSI: 00000000000001db RDI: ffffffffa035bdc0
     Jan 5 10:14:52 ares-unraid kernel: RBP: ffff8889f30caf00 R08: 47ddfafba8ac2d75 R09: 25dae199f6406014
     Jan 5 10:14:52 ares-unraid kernel: R10: 300a12ed70ccfb20 R11: 44f4d35e05496612 R12: ffffffff82909480
     Jan 5 10:14:52 ares-unraid kernel: R13: 0000000000001ee0 R14: ffff8882ad763f00 R15: 0000000000000000
     Jan 5 10:14:52 ares-unraid kernel: FS: 0000000000000000(0000) GS:ffff88900e980000(0000) knlGS:0000000000000000
     Jan 5 10:14:52 ares-unraid kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Jan 5 10:14:52 ares-unraid kernel: CR2: 0000146d90e2fa60 CR3: 0000000109450000 CR4: 0000000000750ee0
     Jan 5 10:14:52 ares-unraid kernel: PKRU: 55555554
     Jan 5 10:14:52 ares-unraid kernel: Call Trace:
     Jan 5 10:14:52 ares-unraid kernel: <IRQ>
     Jan 5 10:14:52 ares-unraid kernel: nf_conntrack_confirm+0x25/0x54 [nf_conntrack]
     Jan 5 10:14:52 ares-unraid kernel: nf_hook_slow+0x3d/0x96
     Jan 5 10:14:52 ares-unraid kernel: ? ip_protocol_deliver_rcu+0x164/0x164
     Jan 5 10:14:52 ares-unraid kernel: NF_HOOK.constprop.0+0x79/0xd9
     Jan 5 10:14:52 ares-unraid kernel: ? ip_protocol_deliver_rcu+0x164/0x164
     Jan 5 10:14:52 ares-unraid kernel: ip_sabotage_in+0x4a/0x58 [br_netfilter]
     Jan 5 10:14:52 ares-unraid kernel: nf_hook_slow+0x3d/0x96
     Jan 5 10:14:52 ares-unraid kernel: ? ip_rcv_finish_core.constprop.0+0x3b7/0x3b7
     Jan 5 10:14:52 ares-unraid kernel: NF_HOOK.constprop.0+0x79/0xd9
     Jan 5 10:14:52 ares-unraid kernel: ? ip_rcv_finish_core.constprop.0+0x3b7/0x3b7
     Jan 5 10:14:52 ares-unraid kernel: __netif_receive_skb_one_core+0x77/0x9c
     Jan 5 10:14:52 ares-unraid kernel: netif_receive_skb+0xbf/0x127
     Jan 5 10:14:52 ares-unraid kernel: br_handle_frame_finish+0x476/0x4b0 [bridge]
     Jan 5 10:14:52 ares-unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
     Jan 5 10:14:52 ares-unraid kernel: br_nf_hook_thresh+0xe5/0x109 [br_netfilter]
     Jan 5 10:14:52 ares-unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
     Jan 5 10:14:52 ares-unraid kernel: br_nf_pre_routing_finish+0x2c1/0x2ec [br_netfilter]
     Jan 5 10:14:52 ares-unraid rsyslogd: action 'action-3-builtin:omfwd' resumed (module 'builtin:omfwd') [v8.2102.0 try https://www.rsyslog.com/e/2359 ]
     Jan 5 10:14:52 ares-unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
     Jan 5 10:14:52 ares-unraid kernel: ? NF_HOOK.isra.0+0xe4/0x140 [br_netfilter]
     Jan 5 10:14:52 ares-unraid kernel: ? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
     Jan 5 10:14:52 ares-unraid kernel: br_nf_pre_routing+0x226/0x23a [br_netfilter]
     Jan 5 10:14:52 ares-unraid kernel: ? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
     Jan 5 10:14:52 ares-unraid kernel: br_handle_frame+0x27f/0x2e7 [bridge]
     Jan 5 10:14:52 ares-unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
     Jan 5 10:14:52 ares-unraid kernel: __netif_receive_skb_core.constprop.0+0x4f9/0x6e3
     Jan 5 10:14:52 ares-unraid kernel: ? __alloc_skb+0xb2/0x15e
     Jan 5 10:14:52 ares-unraid kernel: ? __kmalloc_node_track_caller+0x1ae/0x1d9
     Jan 5 10:14:52 ares-unraid kernel: ? udp4_gro_receive+0x1b/0x20c
     Jan 5 10:14:52 ares-unraid kernel: ? inet_gro_receive+0x234/0x254
     Jan 5 10:14:52 ares-unraid kernel: __netif_receive_skb_list_core+0x8a/0x11e
     Jan 5 10:14:52 ares-unraid kernel: netif_receive_skb_list_internal+0x1d7/0x210
     Jan 5 10:14:52 ares-unraid kernel: gro_normal_list+0x1d/0x3f
     Jan 5 10:14:52 ares-unraid kernel: napi_complete_done+0x7b/0x11a
     Jan 5 10:14:52 ares-unraid kernel: aq_vec_poll+0x13c/0x187 [atlantic]
     Jan 5 10:14:52 ares-unraid kernel: __napi_poll.constprop.0+0x2b/0x124
     Jan 5 10:14:52 ares-unraid kernel: net_rx_action+0x159/0x24f
     Jan 5 10:14:52 ares-unraid kernel: __do_softirq+0x129/0x288
     Jan 5 10:14:52 ares-unraid kernel: __irq_exit_rcu+0x79/0xb8
     Jan 5 10:14:52 ares-unraid kernel: common_interrupt+0x9b/0xc1
     Jan 5 10:14:52 ares-unraid kernel: </IRQ>
     Jan 5 10:14:52 ares-unraid kernel: <TASK>
     Jan 5 10:14:52 ares-unraid kernel: asm_common_interrupt+0x22/0x40
     Jan 5 10:14:52 ares-unraid kernel: RIP: 0010:cpuidle_enter_state+0x11b/0x1e4
     Jan 5 10:14:52 ares-unraid kernel: Code: 5b fa a1 ff 45 84 ff 74 1b 9c 58 0f 1f 40 00 0f ba e0 09 73 08 0f 0b fa 0f 1f 44 00 00 31 ff e8 9d a9 a6 ff fb 0f 1f 44 00 00 <45> 85 ed 0f 88 9e 00 00 00 48 8b 04 24 49 63 cd 48 6b d1 68 49 29
     Jan 5 10:14:52 ares-unraid kernel: RSP: 0018:ffffc90000197e98 EFLAGS: 00000246
     Jan 5 10:14:52 ares-unraid kernel: RAX: ffff88900e980000 RBX: 0000000000000002 RCX: 0000000000000000
     Jan 5 10:14:52 ares-unraid kernel: RDX: 0000000000000006 RSI: ffffffff820d7be1 RDI: ffffffff820d80c1
     Jan 5 10:14:52 ares-unraid kernel: RBP: ffff888108fbac00 R08: 0000000000000002 R09: 0000000000000002
     Jan 5 10:14:52 ares-unraid kernel: R10: 0000000000000020 R11: 000000000001295c R12: ffffffff82318880
     Jan 5 10:14:52 ares-unraid kernel: R13: 0000000000000002 R14: 00001b8038bae1e9 R15: 0000000000000000
     Jan 5 10:14:52 ares-unraid kernel: ? cpuidle_enter_state+0xf5/0x1e4
     Jan 5 10:14:52 ares-unraid kernel: cpuidle_enter+0x2a/0x38
     Jan 5 10:14:52 ares-unraid kernel: do_idle+0x187/0x1f5
     Jan 5 10:14:52 ares-unraid kernel: cpu_startup_entry+0x1d/0x1f
     Jan 5 10:14:52 ares-unraid kernel: start_secondary+0xeb/0xeb
     Jan 5 10:14:52 ares-unraid kernel: secondary_startup_64_no_verify+0xce/0xdb
     Jan 5 10:14:52 ares-unraid kernel: </TASK>
     Jan 5 10:14:52 ares-unraid kernel: ---[ end trace 0000000000000000 ]---
     ares-unraid-diagnostics-20230105-1311.zip
  10. Got similar random reboots with a Windows 11 / Windows 10 VM. It may happen in an hour or in 24 hours, but it always happens, and the entire Unraid server crashes. No errors in the log upon reboot; it happens silently. I ran memtest for a few hours (zero errors) and ran a bare-metal Windows 10 configuration for a few days with no reboots, so this is not a hardware issue. I've no idea yet what causes it. I have an Unraid 6.9.2 box with a similar config that has been running alongside for years with zero issues.
  11. Well, this explains why autostart works fine on my registered Unraid and doesn't on the trial one. Thanks!
  12. It seems I have the same issue. My Unraid on 6.11.3 doesn't autostart the array even though I set it to.
  13. Yes, I only have a 6600 XT at my disposal right now. It works pretty well in a VM. I set up everything you did, but once ReBAR is enabled in the BIOS, it's a black screen when I start my VM, with no errors in the logs. I've also noticed that the VM manager stops after I shut down the VM; maybe there is some critical issue with that.
  14. How did you manage to do this? Whenever I enable Resizable BAR support in the BIOS, my VM starts with a black screen and no video output at all on my 6600 XT.
  15. Resizable BAR is not virtualized so far; it's disabled in QEMU: https://github.com/qemu/qemu/commit/3412d8ec9810b819f8b79e8e0c6b87217c876e32
  16. Overclocking won't work in this container, since the only way to overclock an NVIDIA GPU on Linux is nvidia-settings, which requires a running X server, and that is not the case in this docker. And it's not a trivial thing to set up (at least for me :D). You may try following this guide:
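     For the curious, a rough sketch of what that kind of setup involves on the bare host (not this container), assuming the proprietary driver; the Coolbits value and the performance-level index in brackets vary by GPU generation, so treat the exact names here as assumptions:

         # enable overclocking controls (Coolbits) in xorg.conf
         nvidia-xconfig --cool-bits=28

         # start a bare X server on display :0 (assumes no other X server is running)
         X :0 &

         # apply a +100 MHz graphics clock offset for performance level 3
         DISPLAY=:0 nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'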
  17. Thanks a lot for this.
  18. Hi, I've prepared the instructions in such a way that you shouldn't see that QXL paravirtual graphics card at all. If you still see it in the VM, then something went wrong. If you share your VM XML, I can be more specific.
  19. Hi, this helped me: ensure you have port 8444 forwarded, and then change the network type in the chia docker template from Host (as recommended) to Bridge. I've had zero sync issues since I did this.
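     To verify the forward itself, a quick check from outside your LAN (YOUR_WAN_IP is a placeholder):

         # succeeds only if port 8444 is reachable from the internet
         nc -zv YOUR_WAN_IP 8444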
  20. I've just built a local image in 5 minutes, no time to chill 🙂
  21. The docker builds chia-blockchain inside, and it takes the latest version by default if I understood it correctly. But for some reason they're not publishing new docker images, so I think the only way is to build the docker image manually.
  22. They've only added the tzdata package, nothing important. And anyway it won't build the way they did it. I've tried to build the docker image myself from the 1.1.6 tag specifically, but sync still doesn't work in the Linux docker version; on the Windows GUI it works perfectly. *UPDATE* I've changed the template network type from Host to Bridge and it started to sync immediately. I was able to sync 3k blocks in less than 5 minutes.
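     Roughly what the manual build looks like, assuming the image is built from the Chia-Network/chia-docker repo and that its Dockerfile accepts a BRANCH build arg (check the actual Dockerfile; this is from memory):

         git clone https://github.com/Chia-Network/chia-docker
         cd chia-docker
         # pin the in-image chia-blockchain checkout to the 1.1.6 tag
         docker build --build-arg BRANCH=1.1.6 -t chia:1.1.6-local .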
  23. I tried stopping the other two nodes and it didn't change anything, so I think there might be another reason this particular machine is syncing slowly. However, I can't figure out the root cause, because the setup looks similar to the other Unraid box, apart from the different hardware.
  24. After 200k blocks it goes awfully slow and may never sync up:
     root@chia-unraid:~# while true; do date +%x_%H:%M:%S:%N; docker exec -it chia venv/bin/chia show -s -c | grep 'to block'; sleep 300; done
     05/19/2021_22:16:25:559310495
     Current Blockchain Status: Full Node syncing to block 304564
     Currently synced to block: 234976
     05/19/2021_22:21:26:192509311
     Current Blockchain Status: Full Node syncing to block 304564
     Currently synced to block: 235402
     05/19/2021_22:26:27:030711131
     Current Blockchain Status: Full Node syncing to block 304564
     Currently synced to block: 236169
     05/19/2021_22:31:27:893406981
     Current Blockchain Status: Full Node syncing to block 304564
     Currently synced to block: 236561
     Weird thing: my other two full nodes (a Win10 GUI node and another Unraid server) are on the same local network and syncing perfectly fine. It's only this new dedicated chia box that I'm having trouble with.