
Abzstrak

Members
  • Posts: 113
  • Joined
  • Last visited


Abzstrak's Achievements

Apprentice (3/14)

10 Reputation

  1. So I'm in the process of moving to larger drives and then reducing my drive count. I've migrated the data from two 4TB data drives to a larger 16TB drive, but the web GUI still shows 4.02GB used on each of the old drives, which definitely hold no data (du confirms 0 bytes). I just wanted a sanity check that this is normal and fine, and that I can just restart the array without these drives and let it do its thing for parity.
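
    For anyone running the same check, a minimal sanity check from the console (assuming the usual /mnt/diskN mount points; disk2 is a placeholder):

    # Confirm the disk holds no user data (replace disk2 with the actual disk)
    du -sh /mnt/disk2

    # Compare with what the filesystem reports as used; a few GB of
    # filesystem metadata on an otherwise empty disk is normal
    df -h /mnt/disk2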
  2. Cool, ty, that's what I was hoping. I've been lucky and have only had to replace a drive once, and it wasn't parity.
  3. So I ordered a couple of 16TB drives to replace a couple of my 4TB drives. I plan to replace the parity drive first: just power down, swap the drive, reassign parity to the new 16TB, and start the array. Do I have to do anything for all the space on the new drive to be used correctly on the parity or data drives? Also, the old drives are 512B-sector and the new ones are 4K, but given the way Unraid does the array I would think that wouldn't be an issue... right?
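
    (To double-check the sector-size question, the logical and physical sizes can be read directly; sdX is a placeholder:)

    blockdev --getss /dev/sdX    # logical sector size, e.g. 512
    blockdev --getpbsz /dev/sdX  # physical sector size, e.g. 4096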
  4. Is there any way to get minicom added? Or something else I can use to console over to my firewall?
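
    (If minicom can't be added, a sketch of an alternative, assuming a USB serial adapter showing up at /dev/ttyUSB0 and that screen is available, e.g. via a plugin:)

    # Serial console over a USB adapter; device and baud rate are examples
    screen /dev/ttyUSB0 115200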
  5. zram?

    Any reason zram isn't the default on Unraid? Without real swap this seems like an obvious choice, unless I'm missing something.
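    For reference, a rough sketch of what enabling a zram swap device by hand might look like (the size and compression algorithm are just examples):

    # Load the zram module with a single device
    modprobe zram num_devices=1

    # Choose a compression algorithm and a logical size (examples only)
    echo lz4 > /sys/block/zram0/comp_algorithm
    echo 4G > /sys/block/zram0/disksize

    # Format and enable it as high-priority swap
    mkswap /dev/zram0
    swapon -p 100 /dev/zram0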
  6. Yeah, I might... I just dislike the overhead of SMB by comparison. I might try the native mount stuff for KVM since it's there now too... that's probably the best idea.
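
    (If the KVM-native route means virtiofs, the mount inside the guest is one line; "myshare" stands in for whatever tag the VM config uses:)

    mount -t virtiofs myshare /mnt/myshare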
  7. So this has happened 3 times now, so I don't think it's a fluke. I never had issues on 6.9.x, and I skipped 6.10. The below is from 6.11.4 and seems to be a segfault in libfuse3.so; I've had the same occur on 6.11.2 and 6.11.3. I cannot figure out a way to induce the problem; it seemingly happens at random as far as I can tell. Might be a couple of days of uptime, might be a week or more. In the logs, the event seems to have happened at Nov 22 16:18:05. I do see the call trace into nfsd, which could maybe be related: I have one VM that does some downloads, and after downloading I use NFS to copy them back to an Unraid folder for Plex to see. I'm attaching diagnostics from before rebooting it, and I've also attached the dmesg output (not really needed, but why not). dmesg-output-lubfuse3.so-segfault.text athena-diagnostics-20221122-1626.zip
  8. I updated a few hours ago and was watching Plex when everything froze... I checked on the box and the array seems angry. I checked dmesg before rebooting it and got this:

    [ 9927.692366] shfs[15681]: segfault at 10 ip 0000146e662715c2 sp 0000146e64c52c20 error 4 in libfuse3.so.3.12.0[146e6626d000+19000]
    [ 9927.692375] Code: f4 c8 ff ff 8b b3 08 01 00 00 85 f6 0f 85 46 01 00 00 4c 89 ee 48 89 df 45 31 ff e8 18 dc ff ff 4c 89 e7 45 31 e4 48 8b 40 20 <4c> 8b 68 10 e8 15 c2 ff ff 48 8d 4c 24 18 45 31 c0 31 d2 4c 89 ee
    [ 9927.708009] ------------[ cut here ]------------
    [ 9927.708011] nfsd: non-standard errno: -103
    [ 9927.708026] WARNING: CPU: 3 PID: 3616 at fs/nfsd/nfsproc.c:889 nfserrno+0x45/0x51 [nfsd]
    [ 9927.708051] Modules linked in: rpcsec_gss_krb5 xt_nat veth xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs nfsd auth_rpcgss oid_registry lockd grace sunrpc md_mod nct6775 nct6775_core hwmon_vid efivarfs ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge stp llc i915 iosf_mbi drm_buddy ttm x86_pkg_temp_thermal intel_powerclamp drm_display_helper coretemp kvm_intel drm_kms_helper kvm drm crct10dif_pclmul crc32_pclmul igb crc32c_intel ghash_clmulni_intel aesni_intel intel_gtt i2c_i801 crypto_simd wmi_bmof cryptd rapl intel_cstate intel_uncore e1000e nvme i2c_smbus nvme_core agpgart i2c_algo_bit ahci libahci i2c_core syscopyarea sysfillrect sysimgblt intel_pch_thermal fb_sys_fops wmi video backlight acpi_pad acpi_tad button unix
    [ 9927.708099] CPU: 3 PID: 3616 Comm: nfsd Not tainted 5.19.17-Unraid #2
    [ 9927.708101] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H370M-ITX/ac, BIOS P4.00 03/19/2019
    [ 9927.708102] RIP: 0010:nfserrno+0x45/0x51 [nfsd]
    [ 9927.708119] Code: c3 cc cc cc cc 48 ff c0 48 83 f8 26 75 e0 80 3d bb 47 05 00 00 75 15 48 c7 c7 17 64 86 a0 c6 05 ab 47 05 00 01 e8 42 47 fe e0 <0f> 0b b8 00 00 00 05 c3 cc cc cc cc 48 83 ec 18 31 c9 ba ff 07 00
    [ 9927.708121] RSP: 0000:ffffc90000647d68 EFLAGS: 00010286
    [ 9927.708122] RAX: 0000000000000000 RBX: ffff888103eb8030 RCX: 0000000000000027
    [ 9927.708123] RDX: 0000000000000001 RSI: ffffffff820d7be1 RDI: 00000000ffffffff
    [ 9927.708124] RBP: ffff88810433c000 R08: 0000000000000000 R09: ffffffff82244bd0
    [ 9927.708125] R10: 00007fffffffffff R11: ffffffff82874b5e R12: ffff8885b835f540
    [ 9927.708126] R13: ffff888532699a00 R14: ffff888104739180 R15: 0000000000000000
    [ 9927.708127] FS: 0000000000000000(0000) GS:ffff88884fd80000(0000) knlGS:0000000000000000
    [ 9927.708128] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 9927.708129] CR2: 0000000000460068 CR3: 00000005dd29a004 CR4: 00000000003726e0
    [ 9927.708131] Call Trace:
    [ 9927.708133] <TASK>
    [ 9927.708134] fh_verify+0x4e7/0x58d [nfsd]
    [ 9927.708151] nfsd_unlink+0x5d/0x1b9 [nfsd]
    [ 9927.708168] nfsd4_remove+0x4f/0x76 [nfsd]
    [ 9927.708187] nfsd4_proc_compound+0x434/0x56c [nfsd]
    [ 9927.708205] nfsd_dispatch+0x1a6/0x262 [nfsd]
    [ 9927.708222] svc_process+0x3ee/0x5d6 [sunrpc]
    [ 9927.708255] ? nfsd_svc+0x2b6/0x2b6 [nfsd]
    [ 9927.708271] ? nfsd_shutdown_threads+0x5b/0x5b [nfsd]
    [ 9927.708287] nfsd+0xd5/0x155 [nfsd]
    [ 9927.708303] kthread+0xe4/0xef
    [ 9927.708306] ? kthread_complete_and_exit+0x1b/0x1b
    [ 9927.708308] ret_from_fork+0x1f/0x30
    [ 9927.708311] </TASK>
    [ 9927.708312] ---[ end trace 0000000000000000 ]---
  9. You could get an external eSATA enclosure, assuming you have an eSATA port that supports port multipliers. Otherwise you could go with USB. You should be able to find enclosures for 4 or 5 drives in a reasonable price range.
  10. This is the way things should happen, and it should NOT be changed. If you want to cancel it, then log in and cancel it... to your own detriment.
  11. Does anyone know a way I can get this container to always start jobs at a specific niceness, like 19? I should note that under Advanced on a specific backup job I've tried setting it to Idle or Lowest, but when the job kicks off, it's always at 0.
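
    As a host-side workaround sketch (the process name "backup" is a placeholder for whatever the container's job binary is actually called):

    # Renice the container's job processes to 19 after they start;
    # container processes are visible from the Unraid host
    pgrep -f backup | xargs -r renice -n 19 -p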
  12. Anyone getting the option to upgrade to 21? Mine is at 20.0.7 and showing no updates available. I tried beta, but that only offered a 20.0.8 RC...
  13. Buy a UPS and save yourself these sorts of headaches.
  14. I do something similar. I have a Dell R210 II box; I run a VM all the time on an Unraid box with an i3-8300, and only boot up the Dell box to fail traffic over when I need to work on the Unraid machine. This works fine, but it's a little tricky for VIP assignments depending on how your ISP handles WAN assignments: CARP will want 3 IPs, one for each physical box and one for the VIP, so you'll have to contend with that. You probably don't need to worry about state sync either; or you can, but it's kind of a pain since it will throw a bunch of warnings when it can't reach the box that is off. I just turn it on when I boot the other box up, give it a minute, and then fail over. Also, AES-NI is nice and all for encryption; if you are doing a bunch of it (like VPN), then worry about it, but if you are not doing a lot of encryption, don't. pfSense dropped the AES-NI requirement for newer versions; even if they put it back in later, just switch to OPNsense.
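
    To make the three-address point concrete, a hypothetical CARP WAN layout for a two-box pair (all addresses made up):

    203.0.113.2/29 -> primary firewall, physical WAN interface
    203.0.113.3/29 -> backup firewall, physical WAN interface
    203.0.113.1/29 -> shared CARP VIP that traffic actually uses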