About Mizerka


  1. Hey, thanks for your work. For about a week now (I haven't looked at it closely), the delugevpn docker on unraid has failed to start properly: it reports DNS resolution errors, loops through all the external IP identifiers and DNS resolvers until it eventually times out, and then starts the webgui without an external connection, despite connecting to the VPN (tested) and reporting no errors while doing so. The debug logs show nothing useful beyond the failing DNS resolutions. I tried swapping to Cloudflare, Google, and an internal Pi-hole, with the same results. It can ping external hosts just fine from the console.
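If it helps anyone hitting the same thing, a quick way to confirm the failure is name resolution inside the container rather than raw connectivity (a sketch; `binhex-delugevpn` is an assumed container name, substitute your own):

```shell
# Raw connectivity by IP - per the above, this should succeed:
docker exec binhex-delugevpn ping -c 1 8.8.8.8
# Name resolution - this is what appears to be failing
# (if nslookup isn't in the image, "getent hosts github.com" also works):
docker exec binhex-delugevpn nslookup github.com
# Show which resolvers the container is actually configured to use:
docker exec binhex-delugevpn cat /etc/resolv.conf
```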
  2. You still need to enable bifurcation on the right slot. You'll want 4x4x4x4 in most cases; if you split 8x8, you can end up not seeing both drives, depending on how the card splits lanes internally, and your NVMe won't saturate a PCIe 3.0 x4 link anyway. For booting you'll need a custom BIOS, but if it's VMware, just stick it on USB and it'll be fine.
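As a rough sanity check once bifurcation is set, you can confirm from a Linux host that every drive enumerates and negotiated its x4 link (a sketch; the bus address below is a placeholder, yours will differ):

```shell
# Count the NVMe controllers the host can see - with 4x4x4x4 set
# correctly, expect one line per populated M.2 slot:
lspci -nn | grep -i "non-volatile memory"
# Check the negotiated link width/speed for one of them (substitute
# the bus address reported by the previous command):
lspci -vv -s 01:00.0 | grep -i lnksta
```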
  3. The BIOS firmware is modded to allow NVMe boot; if you're only planning to use NVMe for the cache pool (like I do), you can use the official BIOS. I never bothered with NVMe boot, and it's not needed for unraid anyway since we boot from USB. After all this, Supermicro ended up adding the official BIOS to their download pages, so you can get it legit from them if you don't trust my links.
  4. Just had a look: yeah, 3.4 is on their site now, which is the same one I was given. And yeah, I found that it fixes the bifurcation issues in previous versions, even though there's no mention of it in any patch notes.
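For anyone wanting to confirm what's currently flashed before or after updating, the firmware's own version string is readable from a live Linux system (needs root; just a sketch):

```shell
# Report the BIOS version and build date as the firmware states them:
dmidecode -s bios-version
dmidecode -s bios-release-date
```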
  5. From Supermicro support. Since then I've actually got a newer 3.4 stable: same link, subfolder "3.4 official". Some people said they don't trust me or whatever, so I've included the email conversation with the tech in the folder. Also, yeah, I ran the 3.3 beta, which worked flawlessly with the Hyper M.2 x4 card; I flashed to 3.4 as well without issues, and I'm at two weeks of uptime so far, with each M.2 drive saturating 1.2gbps reads (SN750 1TB). Also, not sure if it affects NVMe, but these and the Samsung 250GB drives I've tested don't break parity like some flash SSDs were reported to. I will add the 3.
  6. Confirmed working BIOS for the X9DR3(i)-F, beta dated Feb '20, also including an NVMe-modded version for bootable NVMe. https://mega.nz/folder/q0sWiAya#ibXw5vbz08m8RXbaS3IB1A
  7. Let's revive an old thread: can we have a global setting for this, with the manual change through the disk view acting as an override of the global? Or at least a multiple-disk setting. Having to click through 20+ disks is a pain.
  8. Makes sense, can confirm: --restart unless-stopped wasn't there. I've added it now and will see how it behaves. Thanks.
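For reference, the restart policy can also be applied to an already-created container without recreating it, and then verified (a sketch; `my-container` is a placeholder name):

```shell
# Apply the restart policy to an existing container:
docker update --restart unless-stopped my-container
# Confirm it took effect:
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' my-container
```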
  9. Hey, thanks for your work. Lately jacketvpn has been turning itself off quite often with the error: 2020-08-08 17:04:58.977161 [ERROR] Network is down, exiting this Docker. Is this just down to the tun interface closing, so jacket is forced to shut down?
  10. Okay, so I think I'm good now. I ended up booting back into the full array with md14 mounted, moved all the data off it without issues, then went back into maintenance mode and could now run -v. Once it completed I started the array again, and it's seemed fine for the last 20 minutes or so; crisis averted for now. If -v hadn't worked, I'd probably have run -L and just reformatted the drive if it corrupted the filesystem.
  11. After looking around the forums a bit more, I came across a similar post where a mod advised running against /dev/mapper/md# if the drives are encrypted (all of mine are, btw), then to -L it, which spits out the same output as the webui. Clearly it wants me to run with -L, but that sounds destructive? It's a mostly-filled 12TB and I'd really hate to lose it. At this point, would I be better off removing it, letting parity emulate it, moving the data around, and then reformatting and adding it back to the array?
  12. Also attached diagnostics if you want to have a look, but I doubt there's anything interesting on the config side of this: nekounraid-diagnostics-20200623-2216.zip
  13. Running the webgui check with the -v flag gives this output:
      Phase 1 - find and verify superblock...
      - block cache size set to 6097840 entries
      Phase 2 - using internal log
      - zero log...
      zero_log: head block 6247 tail block 6235
      ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a moun
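For anyone following along, that error message is asking for a log replay before repair. Roughly, the sequence looks like this (a sketch, assuming an encrypted unraid disk exposed as /dev/mapper/md14 with the array in maintenance mode; adjust the device and mount point to your setup):

```shell
# 1. Mount so the kernel replays the XFS log, then unmount cleanly:
mkdir -p /mnt/tmpfix
mount -t xfs /dev/mapper/md14 /mnt/tmpfix
umount /mnt/tmpfix
# 2. Re-run the repair; -L is not needed if the log replayed:
xfs_repair -v /dev/mapper/md14
# 3. Only if the mount itself fails, fall back to -L, which zeroes
#    the log and can lose the most recent metadata changes:
# xfs_repair -L /dev/mapper/md14
```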
  14. okay, ye makes sense, so run it against md# instead, I've gone back to maintenance and I'm getting the errors in edit, md14 is saying drive busy and webui refuses to run beyond -n/-nv I've tried to run repair, but it never got past saying magic number failed and trying to find secondary superblock which outputs this if it helps Phase 1 - find and verify superblock... Phase 2 - using internal log - zero log... ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inco