Freppe

Members

  • Posts: 21

Freppe's Achievements: Noob (1/14)

Reputation: 1

  1. If this turns out to be the solution, then I guess what tricked me was that it only affected this one drive while the other drives powered through the same disk cage were fine. Getting different errors when the disk was connected through different adapters also made me think the problem was with the disk rather than the power. If this parity sync runs through, I'll look at the power connectors and try to balance things a bit. Splitters are probably still necessary, but I could do a better job of spreading the disks across the different lines from the PSU to even out the load. I have also tried to find adapters from PCIe power connectors to Molex or SATA, since that would let me use that cable from the PSU as well (there is no graphics card in this machine), but haven't found any suitable ones.
  2. New test started, which will be very interesting. All the talk of potential power problems made me double-check the power going to the Icy Dock cage with five drives, and I noticed that the two middle black wires in one Molex connector had been pushed out of place. That connector feeds two of the three power input ports on the cage. I pushed them back in, so we'll see if things are better now; if there are still problems I may also have to redo some other power wiring, as I have quite a lot of my drives hanging off one output from the power supply.
  3. I'm considering simply getting a new disk, just so I can hopefully get back to having two parity drives before I do more testing. I'll try switching drive slots first, though, in case that is the issue. The problem is that I'd be left without parity protection if another drive failed, which feels a bit unsafe.
  4. And now the parity sync stopped with the 4TB drive disabled: tower-diagnostics-20230112-1615.zip
  5. Parity sync has suddenly slowed down considerably, now with a new fancy error: tower-diagnostics-20230112-1553.zip
  6. OK, thanks for looking at it. I will wait for the parity sync to complete (20 h left), and once it's done I will try to move another drive over to the new LSI HBA, since I would prefer to run everything on that card; the popular opinion seems to be that cards like it are the ones to use with Unraid. I have already swapped cables multiple times, but the drive sits in an Icy Dock cage, so I will also try moving it to another slot to see if that does anything. Would it seem strange if I manage to get my other drives working on the LSI card but not this specific one? I do have other drives running on the onboard SAS ports, and they seem to be working fine.
  7. Good call there, I had connected it to one of the onboard SAS ports on my motherboard (an Asus KGPE-D16 with a PIKE RAID card installed). I have now moved the disk back to the old SATA card and started the array, and the parity sync is running at a more normal speed compared to the very slow speed of the previous configuration. I'm a bit reluctant to move all the other drives around, since I don't want to risk damaging the data on them. Could it be a case where the disk just doesn't work with SAS cards? But that doesn't explain why it also fails (just more slowly) when connected to the SATA card. Can the SAS card (and the onboard SAS ports) make the disk fail faster? Fresh diagnostics attached. tower-diagnostics-20230112-1434.zip
  8. So I've been having issues with my latest drive, which I bought in November. My other drives are a bunch of older 1TB drives and some newer 3TB drives, but when I had to replace a failed drive last time I bought a 4TB Seagate drive and installed it as parity 2. The drive seemed to work perfectly fine, but after some weeks it failed with errors. I pulled the drive and tested it in my Windows computer with a full read-write test, which passed without any errors. I reinstalled the drive and it worked fine in Unraid again, at least for a while, and then it failed again. This has repeated multiple times, but every time I test the drive in another computer I'm unable to find any issues.
     I've replaced SATA cables, thinking they might be the issue, but no luck there. I even recently purchased a proper LSI SAS 9201-16i, since from what I can read online the SATA cards I've been using don't seem to be recommended for Unraid, but the disk fails even faster on that card: it is recognized fine, but as soon as I start the parity sync I get write errors and it fails.
     I have attached three diagnostics. The first is from when the disk last failed, before my attempt to use the LSI HBA. The second is with the 4TB drive connected to the LSI HBA. The third is from my currently running system, where the 4TB drive is connected to a SATA port on the motherboard and the logs are full of errors related to that drive. The parity sync is currently running there, but it seems to be very slow.
     I'm at a bit of a loss for what to do here. Since I can't find any issues with the drive when it's connected to another system, it feels tricky to send it in for warranty replacement. Is there any way to see from the logs whether the problem is with my system or with the drive? Any way to properly prove that the drive is faulty so I can send it in under warranty? (See the SMART self-test sketch after this list.)
     The type of logs I see right now are like this: tower-diagnostics-20221230-1239.zip tower-diagnostics-20230112-1138.zip tower-diagnostics-20230112-1245.zip
     The logs from the initial failure were something like this:
     And the logs when attached to the LSI HBA were these:
  9. Glad to hear that the cause of the problem has been found. Personally I think I will just stick to the working -02 version for now until there is an official solution.
  10. Still the same. Also tried on a different non-Unraid machine (see the environment-variable sketch after this list):

          # docker run -d -p 8222:8222/tcp -p 25565:25565 --name=minecraftserver -v /apps/docker/minecraftserver:/config binhex/arch-minecraftserver:test2
          Unable to find image 'binhex/arch-minecraftserver:test2' locally
          test2: Pulling from binhex/arch-minecraftserver
          a80ea52ba1f5: Already exists
          cefdd3643fda: Already exists
          46803e3faf06: Already exists
          445bf2a9da6c: Already exists
          0d719328ea91: Already exists
          286698634699: Pull complete
          6cb8e617575a: Pull complete
          2fbb8c82c6ea: Pull complete
          447407499607: Pull complete
          Digest: sha256:7a00d597e19ed6bf833eb8b3d592fc1fe6c3851de1e468ae4c1a897605928872
          Status: Downloaded newer image for binhex/arch-minecraftserver:test2
          e0562763dbad5672dfb45535980c792132e0431d2376ef5bc8747ad184231c25
          # docker logs minecraftserver
          Created by...
          ___.   .__       .__
          \_ |__ |__| ____ |  |__   ____ ___  ___
           | __ \|  |/    \|  |  \_/ __ \\  \/  /
           | \_\ \  |   |  \   Y  \  ___/ >    <
           |___  /__|___|  /___|  /\___  >__/\_ \
               \/        \/     \/     \/      \/
             https://hub.docker.com/u/binhex/
          2021-03-26 11:06:41.136663 [info] System information Linux e0562763dbad 5.4.0-70-generic #78-Ubuntu SMP Fri Mar 19 13:29:52 UTC 2021 x86_64 GNU/Linux
          2021-03-26 11:06:41.175756 [info] OS_ARCH defined as 'x86-64'
          2021-03-26 11:06:41.213247 [warn] PUID not defined (via -e PUID), defaulting to '99'
          2021-03-26 11:06:41.464994 [warn] PGID not defined (via -e PGID), defaulting to '100'
          2021-03-26 11:06:41.676832 [warn] UMASK not defined (via -e UMASK), defaulting to '000'
          2021-03-26 11:06:41.714252 [info] Permissions already set for volume mappings
          2021-03-26 11:06:41.753249 [info] Deleting files in /tmp (non recursive)...
          2021-03-26 11:06:41.792767 [info] CREATE_BACKUP_HOURS not defined,(via -e CREATE_BACKUP_HOURS), defaulting to '12'
          2021-03-26 11:06:41.830753 [info] PURGE_BACKUP_DAYS not defined,(via -e PURGE_BACKUP_DAYS), defaulting to '14'
          2021-03-26 11:06:41.868449 [info] ENABLE_WEBUI_CONSOLE not defined,(via -e ENABLE_WEBUI_CONSOLE), defaulting to 'yes'
          2021-03-26 11:06:41.906617 [warn] ENABLE_WEBUI_AUTH not defined (via -e ENABLE_WEBUI_AUTH), defaulting to 'yes'
          2021-03-26 11:06:41.944594 [warn] WEBUI_USER not defined (via -e WEBUI_USER), defaulting to 'admin'
          2021-03-26 11:06:41.982307 [warn] WEBUI_PASS not defined (via -e WEBUI_PASS), using randomised password (password stored in '/config/minecraft/security/WEBUI_PASS')
          2021-03-26 11:06:42.023652 [info] WEBUI_CONSOLE_TITLE not defined,(via -e WEBUI_CONSOLE_TITLE), defaulting to 'Minecraft Java'
          2021-03-26 11:06:42.061628 [info] CUSTOM_JAR_PATH not defined,(via -e CUSTOM_JAR_PATH), defaulting to '/config/minecraft/minecraft_server.jar' (Mojang Minecraft Java)
          2021-03-26 11:06:42.100347 [info] JAVA_VERSION not defined,(via -e JAVA_VERSION), defaulting to '8'
          '/usr/lib/jvm/java-8-openjdk/jre' is not a valid Java environment path
  11. No, still the same unfortunately. Did the whole thing with removing the old image as well.
  12. OK, I'll stop trying to figure this out for now then. New attempt tomorrow, and thanks for the help.
  13. Actually, is this the change you were going for? It looks like you changed the location of Java 11, but not Java 8 (see the hypothetical two-branch sketch after this list):

              ln -fs /usr/lib/jvm/java-8-openjdk/jre/bin/java /usr/bin/java
              archlinux-java set java-8-openjdk/jre
          elif [[ "${JAVA_VERSION}" == "11" ]]; then
              ln -fs /usr/lib/jvm/java-11-openjdk/jre/bin/java /usr/bin/java
              ln -fs /usr/lib/jvm/java-11-openjdk/bin/java /usr/bin/java
              archlinux-java set java-11-openjdk
          else
              echo "[warn] Java version '${JAVA_VERSION}' not installed, defaulting to Java version 8" | ts '%Y-%m-%d %H:%M:%.S'

      Not the best quote there, but it's from the commit on GitHub: https://github.com/binhex/arch-minecraftserver/commit/1b38ba9537d32a6235b14ea632590ac20ae694cf
  14. Getting late here, but I will absolutely try again tomorrow.
  15. Can confirm that 1.16.5-02 is working fine, as was mentioned earlier.
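
On the warranty question in post 8: an Unraid diagnostics zip already includes SMART reports for each drive, and a failed SMART self-test is the kind of evidence vendors usually accept. The sketch below is a general approach, not something taken from the thread; /dev/sdX is a placeholder for the 4TB drive's actual device name, and a drive sitting behind a SAS HBA may additionally need the -d sat option.

    # Dump SMART health, attributes and the error log for the drive.
    smartctl -a /dev/sdX

    # Start the drive's built-in extended self-test
    # (runs in the background; takes several hours on a 4TB disk).
    smartctl -t long /dev/sdX

    # Once the test has finished, read the self-test log;
    # a logged failure with an LBA is solid warranty evidence.
    smartctl -l selftest /dev/sdX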
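On the container log in post 10: every [warn] line there is just an environment variable falling back to its default. Below is a minimal sketch of the same docker run with those variables set explicitly; the values are illustrative only, and setting them would silence the warnings but not the final Java path error, which comes from inside the image itself.

    docker run -d \
      -p 8222:8222/tcp -p 25565:25565 \
      --name=minecraftserver \
      -v /apps/docker/minecraftserver:/config \
      -e PUID=99 -e PGID=100 -e UMASK=000 \
      -e JAVA_VERSION=8 \
      binhex/arch-minecraftserver:test2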
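And on post 13: if the Java 8 package in the image also stopped shipping a jre/ subdirectory (an assumption on my part, not something the quoted commit confirms), a symmetric fix might look like the sketch below. This is a hypothetical reconstruction of the script's conditional, not the actual code from the repository.

    if [[ "${JAVA_VERSION}" == "8" ]]; then
        # hypothetical: mirror the Java 11 change, assuming the Java 8
        # package no longer has a jre/ subdirectory either
        ln -fs /usr/lib/jvm/java-8-openjdk/bin/java /usr/bin/java
        archlinux-java set java-8-openjdk
    elif [[ "${JAVA_VERSION}" == "11" ]]; then
        # this is the branch the commit actually changed
        ln -fs /usr/lib/jvm/java-11-openjdk/bin/java /usr/bin/java
        archlinux-java set java-11-openjdk
    else
        echo "[warn] Java version '${JAVA_VERSION}' not installed, defaulting to Java version 8" | ts '%Y-%m-%d %H:%M:%.S'
    fi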