Mizerka

Members
  • Posts

    75
  1. Hey, thanks for your work. For about a week now, the delugevpn docker in Unraid has been failing to start properly: it reports DNS resolution errors and loops through all the external IP identifiers and DNS resolvers until it eventually times out, then starts the webgui without an external connection, despite connecting to the VPN (tested) and logging no errors while doing so. The debug logs show nothing useful beyond the failed DNS resolutions. I tried swapping to Cloudflare, Google, and an internal Pi-hole with the same results, and it can ping external hosts just fine from the console. This started happening with the 6.9-rc2 update. Any ideas? Let me know if you need logs/more info. Edit: ignore me, I removed the .ovpn and .conf files and it just started to work.
  2. You still need to enable bifurcation on the right slot. You'll want 4x4x4x4 in most cases; if you split 8x8 you can end up not seeing both drives, depending on how the card splits lanes internally, and a single NVMe drive won't saturate a PCIe 3.0 x4 link anyway. For booting you'll need to use a custom BIOS, but if it's VMware, just stick it on USB and it'll be fine.
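A quick way to sanity-check the split from a running OS is to count how many NVMe controllers actually enumerated. A minimal dry-run sketch (the command is echoed rather than executed, and it assumes pciutils' lspci is available on the host):

```shell
#!/bin/sh
# Dry run: echo the check instead of running it, so nothing executes blind.
# lspci prints one "Non-Volatile memory controller" line per NVMe drive the
# slot actually enumerates; with 4x4x4x4 set correctly, the count should
# match the number of drives on the carrier card.
CHECK="lspci | grep -c 'Non-Volatile memory controller'"
echo "$CHECK"
```

If the count comes up short, the slot is likely still running 8x8 or bifurcation is disabled.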
  3. The BIOS firmware is modded to allow NVMe boot. If you're only planning to use NVMe for the cache pool (like I do), you can use the official BIOS. I never bothered with NVMe boot, and it's not needed for Unraid anyway, since we boot from USB. Supermicro ended up adding the official BIOS to their download pages, so you can get it legit from them if you don't trust my links.
  4. Just had a look: yeah, 3.4 is on their site now, and it's the same one I've been given. And yeah, I found that it fixes the bifurcation issues in previous versions, even though there's no mention of it in any patch notes.
  5. From Supermicro support. Since then I've actually got a newer 3.4 stable; same link, subfolder "3.4 official". Some people said they don't trust me, so I've included the email conversation with the tech in the folder. Also, yeah, I ran the 3.3 beta, which worked flawlessly with the M2 Hyper x4 card, and flashed to 3.4 without issues as well; two weeks of uptime without issues so far, with each M.2 drive saturating 1.2gbps reads (SN750 1TB). Also not sure if it affects NVMe, but these and the Samsung 250GBs I have tested don't break parity like some flash SSDs were reported to. I will add the 3.4 NVMe-boot modded version as well; I don't need it, but I know some do want the option.
  6. Confirmed working BIOS for the X9DR3(i)-F, beta dated Feb '20, also including an NVMe-modded version for bootable NVMe. https://mega.nz/folder/q0sWiAya#ibXw5vbz08m8RXbaS3IB1A
  7. Let's revive an old thread: can we have a global setting for this, with the manual change in the disk view acting as a per-disk override of the global? Or at least a multi-disk setting; having to click through 20+ disks is a pain.
  8. Makes sense, can confirm: --restart unless-stopped wasn't there. I've added it now and will see how it behaves. Thanks.
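For reference, the restart policy discussed above can also be applied to an already-created container without recreating it. A minimal sketch, assuming the container is named jackettvpn (a placeholder from this thread's context; use your own name), with the docker commands echoed as a dry run:

```shell
#!/bin/sh
# Placeholder container name; adjust to match your own container.
CONTAINER=jackettvpn

# Apply the restart policy in place, no recreation needed:
APPLY="docker update --restart unless-stopped $CONTAINER"

# Confirm which policy the container actually ended up with:
VERIFY="docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' $CONTAINER"

# Echoed as a dry run; drop the echoes to execute for real.
echo "$APPLY"
echo "$VERIFY"
```

With unless-stopped, the container restarts after daemon or host restarts unless it was explicitly stopped beforehand.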
  9. Hey, thanks for your work. Lately jackettvpn has been shutting itself down quite often with this error: 2020-08-08 17:04:58.977161 [ERROR] Network is down, exiting this Docker. Is this just down to the tun interface closing, so jackett forces a shutdown?
  10. Okay, I think I'm good now. I ended up booting back into the full array with md14 mounted, moved all the data off it without issues, then went back into maintenance mode and could now run -v. Once complete, I started the array again and it's been fine for the last 20 minutes or so; crisis averted for now. If -v hadn't worked, I'd probably have run -L and just reformatted the drive if it corrupted the filesystem.
  11. After looking around the forums a bit more, I came across a similar post where a mod advised running against /dev/mapper/md# if the drives are encrypted (all of mine are, btw), then to -L it. That spits out the same output as the webui. Clearly it wants me to run with -L, but that sounds destructive? It's a mostly filled 12TB drive and I'd really hate to lose it. At this point I'd almost be better off removing it, letting parity emulate it, and moving the data around before reformatting and adding it back to the array?
  12. Also attached diagnostics if you want to have a look, but I doubt there's anything interesting on the config side of this: nekounraid-diagnostics-20200623-2216.zip
  13. Running the webgui check with the -v flag gives this output:

Phase 1 - find and verify superblock...
        - block cache size set to 6097840 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 6247 tail block 6235
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
  14. Okay, yeah, that makes sense, so run it against md# instead. I've gone back into maintenance mode and I'm getting the errors in my edit: md14 says the drive is busy, and the webui refuses to run anything beyond -n/-nv. I've tried to run the repair, but it never got past the magic number failures and trying to find a secondary superblock, which outputs this if it helps:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
Metadata CRC error detected at 0x43c89d, xfs_bnobt block 0x3a381e28/0x1000
Metadata CRC error detected at 0x43c89d, xfs_bnobt block 0x74703c48/0x1000
btree block 1/1 is suspect, error -74
btree block 2/1 is suspect, error -74
bad magic # 0xdaa0086c in btbno block 1/1
bad magic # 0x2fdfba35 in btbno block 2/1
Metadata CRC error detected at 0x43c89d, xfs_cntbt block 0x3a381e30/0x1000
btree block 1/2 is suspect, error -74
bad magic # 0x419e48e9 in btcnt block 1/2
agf_freeblks 122094523, counted 0 in ag 1
agf_longest 122094523, counted 0 in ag 1
Metadata CRC error detected at 0x43c89d, xfs_cntbt block 0x74703c50/0x1000
btree block 2/2 is suspect, error -74
bad magic # 0xa8692ca5 in btcnt block 2/2
agf_freeblks 121856058, counted 0 in ag 2
agf_longest 121856058, counted 0 in ag 2
Metadata CRC error detected at 0x46ad5d, xfs_inobt block 0x3a381e38/0x1000
btree block 1/3 is suspect, error -74
Metadata CRC error detected at 0x46ad5d, xfs_inobt block 0x74703c58/0x1000
bad magic # 0x639e272e in inobt block 1/3
btree block 2/3 is suspect, error -74
bad magic # 0x796a2ce3 in inobt block 2/3
Metadata CRC error detected at 0x46ad5d, xfs_inobt block 0xaea85a78/0x1000
btree block 3/3 is suspect, error -74
bad magic # 0x15f1f03 in inobt block 3/3
sb_ifree 59, counted 44
sb_fdblocks 2926555418, counted 2681574888
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
        - agno = 22
        - agno = 23
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 4
        - agno = 3
        - agno = 14
        - agno = 22
        - agno = 8
        - agno = 9
        - agno = 5
        - agno = 6
        - agno = 10
        - agno = 12
        - agno = 15
        - agno = 16
        - agno = 13
        - agno = 17
        - agno = 18
        - agno = 2
        - agno = 21
        - agno = 7
        - agno = 19
        - agno = 20
        - agno = 23
        - agno = 11
No modify flag set, skipping phase 5
Inode allocation btrees are too corrupted, skipping phases 6 and 7
Maximum metadata LSN (904557511:-555599277) is ahead of log (1:6247).
Would format log to cycle 904557514.
No modify flag set, skipping filesystem flush and exiting.
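The order that eventually worked in this thread (check read-only, replay the journal by mounting, then repair, with -L strictly last) can be condensed into a sketch. The device path is the one from this thread and the mountpoint is a placeholder; the commands are echoed as a dry run, since xfs_repair -L zeroes the log and can discard recent metadata:

```shell
#!/bin/sh
# Encrypted Unraid array disk from this thread; substitute your own device.
DEV=/dev/mapper/md14
MNT=/mnt/tmp   # placeholder mountpoint

# 1. Read-only check. A dirty log makes -n report spurious inconsistencies.
STEP1="xfs_repair -n $DEV"

# 2. Mounting replays the journal; this is what cleared the
#    'valuable metadata changes in a log' error above.
STEP2="mount $DEV $MNT && umount $MNT"

# 3. With the log replayed (array back in maintenance mode), run the repair.
STEP3="xfs_repair -v $DEV"

# 4. Last resort only: -L zeroes the log and can lose recent metadata.
STEP4="xfs_repair -L $DEV"

# Echoed as a dry run; run each command by hand once you're sure.
for s in "$STEP1" "$STEP2" "$STEP3" "$STEP4"; do echo "$s"; done
```

On Unraid specifically, starting the array normally (as in post 10) performs the mount/unmount of step 2.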