Mizerka

Everything posted by Mizerka

  1. The BIOS firmware is modded to allow NVMe boot. If you're only planning to use NVMe for a cache pool (like I do), you can just use the official BIOS. I never bothered with NVMe boot; it's not needed for Unraid anyway, since we boot from USB. After all this, Supermicro ended up adding the official BIOS to their download pages, so you can get it legitimately from them if you don't trust my links.
  2. Just had a look; yeah, 3.4 is on their site now, and it's the same one I was given. And yeah, I found that it fixes the bifurcation issues in previous versions, even though there's no mention of it in any patch notes.
  3. From Supermicro support. Since then I've actually been given a newer 3.4 stable, same link, subfolder "3.4 official"; some people said they didn't trust me, so I've included the email conversation with the tech as well. Also, yeah, I ran the 3.3 beta, which worked flawlessly with the Hyper M.2 x4; I've since flashed to 3.4 without issues as well, two weeks of uptime so far, with each M.2 drive saturating 1.2 Gbps reads (SN750 1TB). Not sure if it affects NVMe, but these and the Samsung 250GBs I've tested don't break parity like some flash SSDs were reported to. I will add the 3.
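     A quick way to sanity-check sequential reads on an NVMe drive, if anyone wants to reproduce the numbers above (a sketch; /dev/nvme0n1 is a placeholder for your device, and iflag=direct bypasses the page cache so you measure the drive, not RAM):
         dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct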
  4. Confirmed working BIOS for the X9DR3(i)-F, beta dated Feb '20, also including an NVMe-modded version for bootable NVMe. https://mega.nz/folder/q0sWiAya#ibXw5vbz08m8RXbaS3IB1A
  5. Let's revive an old thread: can we have a global setting for this, with a manual change through the disk view acting as an override of the global? Or at least a multi-disk setting; having to click through 20+ disks is a pain.
  6. Makes sense, can confirm --restart unless-stopped wasn't there; I've added it now and will see how it behaves. Thanks.
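     For anyone else fixing this, a minimal sketch of applying that restart policy to an existing container without recreating it (the container name is a placeholder):
         docker update --restart unless-stopped <container-name>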
  7. Hey, thanks for your work. Lately JackettVPN has been turning itself off quite often with the error 2020-08-08 17:04:58.977161 [ERROR] Network is down, exiting this Docker. Is this just down to the tun device closing, so Jackett is forced to shut down?
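     One way to check whether the tunnel is actually up inside the container (a sketch; "jackettvpn" is the container name as used above, and it assumes the ip tool exists in the image):
         docker exec jackettvpn ip addr show tun0
         docker logs --tail 50 jackettvpn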
  8. Okay, so I think I'm good now. I ended up booting back into the full array with md14 mounted, moved all the data off it without issues, then went back into maintenance mode and could now run -v. Once it completed I started the array again and it's been fine for the last 20 mins or so; crisis averted for now. If -v hadn't worked I'd probably have run -L and just reformatted it if that corrupted the filesystem.
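     For the record, the sequence that worked here, roughly (device path per the encrypted-array advice below; the key point is that mounting the disk first lets XFS replay its log, so -L is never needed):
         # 1. start the array normally so md14 mounts and the XFS log is replayed
         # 2. copy the data off, then stop and restart the array in maintenance mode
         # 3. run the verbose repair:
         xfs_repair -v /dev/mapper/md14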
  9. After looking around the forums a bit more I came across a similar post where a mod advised running against /dev/mapper/md# if the drives are encrypted (all of mine are, btw), and then using -L, which spits out this output, same as the webui. Clearly it wants me to run with -L, but that sounds destructive? It's a 12TB drive, mostly full, and I'd really hate to lose it. At this point would I almost be better off removing it, letting parity emulate it, and moving the data around before reformatting and adding it back to the array?
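     On the -L question: -n is a read-only check that changes nothing, while -L zeroes the metadata log before repairing and can discard the most recent metadata updates, which is why mounting first (to replay the log) is the preferred route. Roughly, for an encrypted Unraid disk as advised above:
         xfs_repair -n /dev/mapper/md14    # dry run, report only
         xfs_repair -L /dev/mapper/md14    # last resort: zeroes the log first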
  10. Also attached diagnostics if you want to have a look, but I doubt there's anything interesting on the config side of this: nekounraid-diagnostics-20200623-2216.zip
  11. Running the webgui check with the -v flag gives this output:
      Phase 1 - find and verify superblock...
      - block cache size set to 6097840 entries
      Phase 2 - using internal log
      - zero log...
      zero_log: head block 6247 tail block 6235
      ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
  12. Okay, yeah, makes sense, so run it against md# instead. I've gone back to maintenance mode and I'm getting the errors in my edit: md14 says the drive is busy and the webui refuses to run anything beyond -n/-nv. I've tried to run a repair, but it never got past a failed magic number and trying to find a secondary superblock, which outputs this, if it helps:
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
      - zero log...
      ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
  13. I see, the docs specify either can be used; I'll try now with md# instead. To confirm: if I want to run it from the webui, I should just change -nv to -v? Trying to run from the webui now only displays this, not allowing any actions; using md# also returns "device is busy".
  14. Hey, quick one: small typo in the xfs_repair help, in the description of the -o subopts; -f is also missing a full stop.
  15. xfs_repair in the webgui, after a 2nd run of -nv, said "just start the array up bro, it'll be good". As doubtful as I was, I tried it and so far... it seems okay. But I'm scrubbing the btrfs cache as well, just in case. Edit: yeah okay, spoke too soon. Thoughts on the best action? I'll try to unmount again and repair, but I doubt it'll work.
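     For the cache scrub mentioned above, a minimal sketch (assuming the btrfs pool is mounted at /mnt/cache, as is typical on Unraid):
         btrfs scrub start /mnt/cache
         btrfs scrub status /mnt/cache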
  16. So I had some heat issues today; some disks hit 60C before I realised. Anyway, I sorted it, but found some strange behaviour, primarily IO write errors from SMB, so I loaded up the logs and found a lot of issues. Took the array down and up again, no go. Rebooted into maintenance mode and found disk14 reported xfs_check issues, but then after leaving it for a while and checking the logs, they're filled with the below. So... how bad is it? Looking at the docs I should run xfs_check -V /dev/sdX, which I tried with disk14, which was the only one that actually reported an issue in the webgui using xf
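     Worth noting, as an aside: recent xfsprogs releases dropped xfs_check in favour of a read-only dry run via xfs_repair, which on an Unraid disk in maintenance mode would look roughly like this (device number is this thread's disk14):
         xfs_repair -n /dev/md14    # -n = no modify, just report problems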
  17. Okay, yeah, so that worked; I didn't know the actual world name used during generation mattered. Got 3 instances running side by side now without issues. If you get some time, it'd be nice to sort out that downloading delay though: the world is 2MB, but it takes a good 10 mins before it proceeds to extract it.
  18. Hmm, so they do match; otherwise, as you said, it'd complain about it missing, but it still fails to start. FWIW I'm generating the worlds locally on a Windows client. If I just copy one over, it goes back to the same error as before; when I run straight parameters from the container instead of using -config x.txt, I get that weird "n world, d <number> delete world" prompt in the log. Left it for 30 mins and nothing happened.
  19. World generation still isn't working in any way; it only downloads a world once if the path is missing. The world I tried with did have a different name originally (I just renamed the file to match serverconfig.txt), so I'll give that a try.
  20. Thanks, however it's still not having it for me. Now it's stuck on downloading worlds; it looks like it pulled a complete .zip but then does nothing with it. Extracting it manually and restarting the container just goes back to the same error as before. New container from the template:
      ---Checking if UID: 99 matches user---
      ---Checking if GID: 100 matches user---
      ---Setting umask to 000---
      ---Checking for optional scripts---
      ---No optional script found, continuing---
      ---Starting...---
      ---Version Check---
      ---Terraria not found, downloading!---
      ---Successfull
  21. To confirm: if I take your template as-is, it works fine; it generates a 1.3.5.3 world, and I can then just upgrade the server to 1.4.0.5. But I'd like to generate a new world in 1.4 for a few reasons (and expert + medium/large, but that's something else; on a side topic, I assume master and journey worlds can also be created by incrementing the values further?). For now I'm just running it locally. I also tried creating the world locally and moving it over, which didn't work either, with the same error. And yeah, no rush, I appreciate your work on this.
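     On the world size and difficulty values, a hedged sketch of the relevant serverconfig.txt lines for a vanilla 1.4 dedicated server (the world path and name are placeholders; the value mapping is from the vanilla server's config format, not this container's docs):
         world=/world/MyWorld.wld
         worldname=MyWorld
         # autocreate: 1=small, 2=medium, 3=large
         autocreate=2
         # difficulty (1.4+): 0=classic, 1=expert, 2=master, 3=journey
         difficulty=1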
  22. This is vanilla. I've tried a few more versions since, but could only get it to generate a 1.3.5.3 world. Tried a few more modes, small/medium/large, all giving the same error. I'm not great at reading syntax, but it seems like it's just expecting an integer early on, probably from some variable, or, going by the initial error, some variable just isn't set at all.
  23. Hey, trying to run a Terraria server but having some issues. The docker installs and creates fine, but it creates a 1.3.5.3 world. I tried changing the version to 1.4.0.5, or blank, which updates fine to the latest, but it then fails to create a world and crashes with this error. I've tried creating multiple dockers, but could only get it to create a world when the default version value is used.
      ---Starting...---
      /opt/scripts/start-server.sh: line 5: [: : integer expression expected
      ------------------------------------------------------------------------------------
      ---------------------------------W
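     For context on that error, a generic bash sketch (not the actual start-server.sh): the test builtin prints "[: : integer expression expected" when a numeric comparison is handed an empty value, which fits the guess in the posts above that some version variable just isn't set:
         #!/bin/bash
         VERSION=""                        # imagine the script failed to detect a version
         if [ "$VERSION" -gt 1352 ]; then  # numeric test on an empty string fails like this
             echo "running a 1.4-era build"
         fi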
  24. Tested and works as expected for me, thanks.