Abzstrak
Everything posted by Abzstrak

  1. zram?

    Any reason zram isn't default on Unraid? Without real swap this seems like an obvious choice, unless I'm missing something.
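
    For anyone curious, enabling it by hand looks roughly like this; a minimal sketch, assuming the Unraid kernel ships the zram module (the compressor and size here are just example values):

      # load the module and set up one compressed swap device in RAM
      modprobe zram num_devices=1
      echo lzo > /sys/block/zram0/comp_algorithm   # or zstd on newer kernels
      echo 4G > /sys/block/zram0/disksize          # example size
      mkswap /dev/zram0
      swapon -p 100 /dev/zram0   # prefer it over any disk-backed swap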
  2. Yeah, I might... I just dislike the overhead of SMB by comparison. I might try the native mount support for KVM since it's there now too... that's probably the best idea.
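
    If the native mount support turns out to be virtiofs, then inside the guest it should be something like the lines below; a sketch assuming the share was exported with the tag "data" (the tag and mount point are placeholders):

      # mount a virtiofs share inside the VM (needs a guest kernel with virtiofs support)
      mkdir -p /mnt/unraid
      mount -t virtiofs data /mnt/unraid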
  3. So this has happened 3 times now, so I don't think it's a fluke. I never had issues on 6.9.x, and I skipped 6.10. The below is from 6.11.4 and seems to be a segfault created by libfuse3.so; I've had the same occur on 6.11.2 and 6.11.3. I cannot figure out a way to induce the problem. It seemingly happens randomly as far as I can tell: might be a couple of days of uptime, might be a week or more. In the logs, the event seems to have happened at Nov 22 16:18:05. I do see the call trace to nfsd, which could maybe be related. I do have one VM that does some downloads, and after downloading I use NFS to copy back to a folder on Unraid for Plex to see. I'm attaching diagnostics from before rebooting, and I've also attached the dmesg output (not really needed, but why not): dmesg-output-lubfuse3.so-segfault.text athena-diagnostics-20221122-1626.zip
  4. I updated a few hours ago and was watching Plex when everything froze... I checked on the box and the array seems angry. I checked dmesg before rebooting it and got this:

      [ 9927.692366] shfs[15681]: segfault at 10 ip 0000146e662715c2 sp 0000146e64c52c20 error 4 in libfuse3.so.3.12.0[146e6626d000+19000]
      [ 9927.692375] Code: f4 c8 ff ff 8b b3 08 01 00 00 85 f6 0f 85 46 01 00 00 4c 89 ee 48 89 df 45 31 ff e8 18 dc ff ff 4c 89 e7 45 31 e4 48 8b 40 20 <4c> 8b 68 10 e8 15 c2 ff ff 48 8d 4c 24 18 45 31 c0 31 d2 4c 89 ee
      [ 9927.708009] ------------[ cut here ]------------
      [ 9927.708011] nfsd: non-standard errno: -103
      [ 9927.708026] WARNING: CPU: 3 PID: 3616 at fs/nfsd/nfsproc.c:889 nfserrno+0x45/0x51 [nfsd]
      [ 9927.708051] Modules linked in: rpcsec_gss_krb5 xt_nat veth xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs nfsd auth_rpcgss oid_registry lockd grace sunrpc md_mod nct6775 nct6775_core hwmon_vid efivarfs ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge stp llc i915 iosf_mbi drm_buddy ttm x86_pkg_temp_thermal intel_powerclamp drm_display_helper coretemp kvm_intel drm_kms_helper kvm drm crct10dif_pclmul crc32_pclmul igb crc32c_intel ghash_clmulni_intel aesni_intel intel_gtt i2c_i801 crypto_simd wmi_bmof cryptd rapl intel_cstate intel_uncore e1000e nvme i2c_smbus nvme_core agpgart i2c_algo_bit ahci libahci i2c_core syscopyarea sysfillrect sysimgblt intel_pch_thermal fb_sys_fops wmi video backlight acpi_pad acpi_tad button unix
      [ 9927.708099] CPU: 3 PID: 3616 Comm: nfsd Not tainted 5.19.17-Unraid #2
      [ 9927.708101] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H370M-ITX/ac, BIOS P4.00 03/19/2019
      [ 9927.708102] RIP: 0010:nfserrno+0x45/0x51 [nfsd]
      [ 9927.708119] Code: c3 cc cc cc cc 48 ff c0 48 83 f8 26 75 e0 80 3d bb 47 05 00 00 75 15 48 c7 c7 17 64 86 a0 c6 05 ab 47 05 00 01 e8 42 47 fe e0 <0f> 0b b8 00 00 00 05 c3 cc cc cc cc 48 83 ec 18 31 c9 ba ff 07 00
      [ 9927.708121] RSP: 0000:ffffc90000647d68 EFLAGS: 00010286
      [ 9927.708122] RAX: 0000000000000000 RBX: ffff888103eb8030 RCX: 0000000000000027
      [ 9927.708123] RDX: 0000000000000001 RSI: ffffffff820d7be1 RDI: 00000000ffffffff
      [ 9927.708124] RBP: ffff88810433c000 R08: 0000000000000000 R09: ffffffff82244bd0
      [ 9927.708125] R10: 00007fffffffffff R11: ffffffff82874b5e R12: ffff8885b835f540
      [ 9927.708126] R13: ffff888532699a00 R14: ffff888104739180 R15: 0000000000000000
      [ 9927.708127] FS:  0000000000000000(0000) GS:ffff88884fd80000(0000) knlGS:0000000000000000
      [ 9927.708128] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 9927.708129] CR2: 0000000000460068 CR3: 00000005dd29a004 CR4: 00000000003726e0
      [ 9927.708131] Call Trace:
      [ 9927.708133]  <TASK>
      [ 9927.708134]  fh_verify+0x4e7/0x58d [nfsd]
      [ 9927.708151]  nfsd_unlink+0x5d/0x1b9 [nfsd]
      [ 9927.708168]  nfsd4_remove+0x4f/0x76 [nfsd]
      [ 9927.708187]  nfsd4_proc_compound+0x434/0x56c [nfsd]
      [ 9927.708205]  nfsd_dispatch+0x1a6/0x262 [nfsd]
      [ 9927.708222]  svc_process+0x3ee/0x5d6 [sunrpc]
      [ 9927.708255]  ? nfsd_svc+0x2b6/0x2b6 [nfsd]
      [ 9927.708271]  ? nfsd_shutdown_threads+0x5b/0x5b [nfsd]
      [ 9927.708287]  nfsd+0xd5/0x155 [nfsd]
      [ 9927.708303]  kthread+0xe4/0xef
      [ 9927.708306]  ? kthread_complete_and_exit+0x1b/0x1b
      [ 9927.708308]  ret_from_fork+0x1f/0x30
      [ 9927.708311]  </TASK>
      [ 9927.708312] ---[ end trace 0000000000000000 ]---
  5. You could get an external eSATA enclosure, assuming you have an eSATA port that supports port multipliers. Otherwise you could go with USB. You should be able to find enclosures for 4 or 5 drives in a reasonable price range.
  6. This is the way things should happen; this should NOT be changed. If you want to cancel it, then log in and cancel it... to your own detriment.
  7. Does anyone know a way I can get this container to always start jobs at a specific niceness, like 19? I should note that under Advanced on a specific backup job I've tried setting it to idle or lowest, but when the job kicks off, it's always at 0.
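
    In shell terms, what I'm after is the equivalent of this running automatically; "backup-job" is just a stand-in for whatever process name the container's jobs actually use:

      # renice any already-running job processes to the lowest priority
      for pid in $(pgrep -f backup-job); do   # "backup-job" is hypothetical
          renice -n 19 -p "$pid"
      done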
  8. Is anyone getting the option to upgrade to 21? Mine is at 20.0.7 and showing no updates available. I tried the beta channel, but that only offered the 20.0.8 RC...
  9. Buy a UPS and save yourself this sort of headache.
  10. I do something similar. I have a Dell R210 II box; I run a VM on an Unraid box (an i3-8300) all the time and only boot up the Dell box to fail over traffic when I need to work on the Unraid machine. This works fine, but it's a little tricky for VIP assignments depending on how your ISP handles WAN assignments. CARP will want 3 IPs: one for each physical box and one for the VIP, so you'll have to contend with that. You probably don't need to worry about state sync either, or you can... but it's kind of a pain, since it will throw a bunch of warnings when it can't reach the box that is off. I just turn state sync on when I boot the other box up, give it a minute, and then fail over. Also, AES-NI is nice and all for encryption... if you are doing a bunch of it (like VPN), then worry about it; if you are not doing a lot of encryption, don't. pfSense dropped the AES-NI requirement for newer versions; if they put it back in later, just switch to OPNsense.
  11. My guess would be that the two logical SMT cores for a real core are both at ~50%, but since they share a single real core, that real core is actually at 100%.
  12. I'd say grab an 8TB or so USB drive and use it for backups now and in the future... it runs about $150. Remember, redundancy is not backup. This will give you some margin for error if something goes wrong in your process and provide you with a place to run backups in the future.
  13. Yeah, I know it's 4-port; that's why I said I like it, but it only supports 4 drives. I appreciate the help, but if anyone has any knowledge of a really small LSI controller, please let me know. I'm not changing the ITX case for a few additional 2.5" drives :)
  14. I looked quickly, but I don't see whether nested virtualization is turned on by default in Unraid. In theory you could unload and reload the KVM module with nested support, as sketched below. The term you need to search for is "nested virtualization", though; maybe it will help you find what you need. Be aware that the only times I've played with it, it ran pretty slowly...
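
    A minimal sketch for an Intel host (kvm_amd takes the same nested parameter); all VMs have to be shut down first or the module won't unload:

      # reload KVM with nested virtualization enabled (Intel shown)
      modprobe -r kvm_intel
      modprobe kvm_intel nested=1
      cat /sys/module/kvm_intel/parameters/nested   # should print Y or 1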
  15. Did you install on 2.6? If not, reinstall on 2.6.
  16. The bad blocks seem high to me, unless I'm reading it wrong, for a drive that's been on less than 2 weeks. How's the cable? Is it new? I ask because cables are often overlooked and cause a lot of issues if bad or damaged.
  17. Set your machine type to q35-2.6; the issue is due to old QEMU drivers in pfSense.
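
    If you're doing it by hand rather than through the VM editor, it's one attribute in the domain XML; "pfSense" is just a placeholder VM name here:

      # open the VM definition for editing
      virsh edit pfSense
      # then set the machine attribute inside <os>, e.g.:
      #   <type arch='x86_64' machine='pc-q35-2.6'>hvm</type>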
  18. Does anyone know of a very small LSI controller that will support 8 SATA drives? I found the IBM 9211-4i ones on eBay; they look awesome, but only support 4 drives (https://www.ebay.com/itm/IBM-H1110-81Y4494-9211-4i-FW-LSI-9211-8i-P20-IT-Mode-for-ZFS-FreeNAS-unRAID/224011559187?_trkparms=aid%3D111001%26algo%3DREC.SEED%26ao%3D1%26asc%3D20160811114145%26meid%3D7d7010ad5e704b9cba023414e6d02868%26pid%3D100667%26rk%3D8%26rkt%3D8%26mehot%3Dlo%26sd%3D303521684314%26itm%3D224011559187%26pmt%3D0%26noa%3D1%26pg%3D2334524%26brand%3DIBM&_trksid=p2334524.c100667.m2042). I need it as small as possible, as I'll be using it on an M.2 GPU riser board and it's in a very small ITX case. I'm open to chipsets other than LSI, but no ASMedia/JMicron/SiI junk.
  19. Also updated from 6.8.2 -> 6.8.3 with no issues.
  20. You're gonna have to run top and figure out what exactly is pulling the CPU... just having high CPU isn't helpful without knowing the process.
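
    If you don't want to sit in top, something like this gives a one-shot answer:

      # header line plus the ten busiest processes by CPU
      ps aux --sort=-%cpu | head -n 11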
  21. Google something like "kvm snapshot"; there are lots of hits and lots of instructions out there that are better than anything I can write.
  22. As far as data goes, standard backup practice is the 3-2-1 rule: 3 copies, on 2 different kinds of media, and 1 offsite. I'd suggest a second machine to rsync data to, maybe built from scraps; it doesn't need to run 24/7, just when backups are needed. I say this because most of us computer geeks have extra parts lying around. Also, consider another drive in the machine just to keep an additional copy of the data on; again, rsync is your friend, and it's easy to schedule a script with the User Scripts plugin. qemu-img and virsh both seem to be on my server, so you could write a script to snapshot your VMs; a rough sketch is below.
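
    A rough sketch of that kind of script; the share path, backup drive, and VM name are all hypothetical, and the qemu-img line assumes the vdisk is qcow2 and the VM is shut down:

      #!/bin/bash
      # run on a schedule via the User Scripts plugin
      SRC=/mnt/user/data            # hypothetical share
      DEST=/mnt/disks/backupdrive   # hypothetical unassigned backup drive

      # mirror the share to the backup drive
      rsync -a --delete "$SRC/" "$DEST/data/"

      # internal snapshot of a VM disk (qcow2 only, VM powered off)
      qemu-img snapshot -c "nightly-$(date +%F)" /mnt/user/domains/MyVM/vdisk1.img

      # or, for a running VM, a disk-only external snapshot through libvirt:
      # virsh snapshot-create-as MyVM "nightly-$(date +%F)" --disk-only --atomic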
  23. Anyone have any ideas on why the mouse pointer in a GUI (in an Ubuntu VM, for example) would not be visible when using a Chromebook for access? It's visible if I use Cirrus instead of QXL... but I'd prefer QXL.