SinoBreizh

Community Answers

  1. Hello everyone, I'm currently running 4 x 4TB IronWolf drives + 1 in parity + 1 Blu-ray reader. That maxes out the 6 SATA ports on my Z490 motherboard, which hosts an i3-10100. I want to increase the number of ports available to me, ideally without an HBA (airflow and mental health concerns, see TL;DR), and I plan on buying more, larger hard drives at the same time. The server mainly runs a Plex/Jellyfin library of 4K remuxes, which direct plays 95% of the time (so an iGPU is not an *absolute* must-have). It also hosts a few game servers with ~4 clients max each, and finally runs qBittorrent with a several-TB permaseed share. I do plan to add Home Assistant and maybe Nextcloud in the future. Here are the options I can think of, and I'd like your opinion on them:

  • An Adaptec 71605 HBA in HBA mode, which I found on a bargain. It could handle 16 SATA ports, but I absolutely hate it: it overheats insanely quickly, needs a ton of airflow, and forces me to boot in CSM rather than UEFI. Apparently 10th-gen Intel has issues with CSM or *something*: with CSM on I can no longer reach the BIOS through the iGPU, so the card effectively needs a dedicated GPU, which I do not have. I had to spend hours wedging in my 7900 XT, which now barely fits in the server, just to troubleshoot. The HBA works just fine in Unraid; I only lose access to the BIOS in the process. I honestly can't be bothered (at least on 10th-gen Intel; I have no guarantee it'll behave any better on other platforms). This is my least favorite option: it requires fixing issue after issue (overheating, no BIOS under CSM, configuring write cache since it has problems with that, etc.).

  • A generic ASMedia 1064 SATA controller: 4 x SATA in a Gen 3 x1 slot. It has honestly been rock solid performance-wise despite what people say about these cards, and I think it supports deep sleep and ASPM. *BUT* it suffers from the powertop autotune glitch: running powertop autotune causes drives to lose connection to the controller once it enters deep sleep. I had to parity-check the whole array once because of this. It doesn't affect my Blu-ray drive, but that's only one of the card's 4 SATA ports. Do I really need the powertop autotune command if the card already supports PCIe deep sleep (L0s/L1, etc.) and ASPM? If not, this could be a cheap, simple option (see the sketch after the TL;DR).

  • Upgrading to LGA1700. I've spotted a cheap motherboard with 8 x SATA and 3 x NVMe, not shared with anything else PCIe/IO-wise. Perfect. I could grab a 12th/13th-gen i3, or even an i5 if I want more cores and the UHD 770 with double Quick Sync. But upgrading for 2 extra SATA ports and an extra Quick Sync engine feels stupid.

  • Switching to AM4. My workstation currently runs a Ryzen 9 5950X, and I could reuse it in the server to give myself an excuse to upgrade my main rig. On one hand, I'd have *MUCH* more power to expand my activities for years (the i3-10100 already hiccups when software transcoding (thanks, Plex; take notes from Jellyfin's hardware transcoding) + seeding + game hosting), and there are plenty of cheap X570 boards with 8 x SATA and 2 x NVMe. On the other hand, I'd have to run headless and give up Quick Sync: not the end of the world since I almost always direct play, but I won't have access to the BIOS unless I buy a cheap GT 710 or something for troubleshooting.

  • A combination of the above: since the no-video-out/no-boot issue with CSM seems to be an Intel 10th-gen problem, maybe switching to LGA1700 or AM4 would let me use my Adaptec HBA without issues?
TL;DR: I need more SATA ports for my Unraid build, up to 11 max. I'm very interested in keeping power consumption down, through either efficient chipset SATA or PCIe controllers/HBAs with proper ASPM/deep sleep support. I want to keep things simple for my mental health's sake (I spent days troubleshooting the Adaptec 71605 HBA and it has honestly ruined my week).
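Answering my own ASPM question, here's a minimal sketch of how I'd check the link state and opt single devices out of runtime suspend, rather than letting powertop autotune touch the ASMedia controller at all (the 04:00.0 PCI address is a made-up example; find the real one with lspci):

    # Find the SATA controller's PCI address
    lspci | grep -i sata
    # Check whether ASPM is already negotiated on that link
    lspci -vv -s 04:00.0 | grep -i aspm
    # Run the usual tuning...
    powertop --auto-tune
    # ...then force the controller to stay active, so its drives
    # don't drop once the link power-manages itself
    echo on > /sys/bus/pci/devices/0000:04:00.0/power/control

If ASPM already shows as enabled on the card, autotune's extra runtime-PM toggling may well be unnecessary for it.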
  2. Hello everyone! I want to torrent onto an NVMe SSD share, to take full advantage of its write speed. However, I'd like to seed long-term from an HDD, where capacity is the priority. What I can think of is two shares: one SSD share for incomplete (leeching) downloads, and one HDD share for completed (seeding) downloads. But since you can only map one share to the container's /data path, how would I go about doing that? In qBittorrent's settings (Downloads -> Saving Management -> Default/Incomplete save path), qBittorrent will only ever see the one share I can map to it, not the second one, which stays outside the container unless mapped; and I don't know how to map that second share to qBittorrent. I've read the Trash Guides approach of downloading to an SSD and then having the mover move completed/seeding torrents. It's an alternative I'd like to avoid if possible: I want to keep my seeding pool and array separate, so my array doesn't get spun up; it's spun down about 90% of the time otherwise. I'm very new to Docker containers, and I assume there's a way to do what I describe (see the sketch below for what I mean), but I haven't found one myself, or I lack the knowledge to know what to look for. Thanks in advance!
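For reference, here's the kind of double mapping I mean, as a sketch (the share names are examples from my setup, and I've left out the VPN variables the binhex container needs): a container can take any number of path mappings, so both shares can appear under /data inside it.

    # Hypothetical example: expose the SSD share and the HDD share
    # as two subfolders of /data inside the container
    docker run -d --name qbittorrent \
      -v /mnt/user/downloads-ssd:/data/incomplete \
      -v /mnt/user/seeds-hdd:/data/complete \
      binhex/arch-qbittorrentvpn
    # Then, in qBittorrent: Downloads -> Saving Management
    #   Incomplete save path: /data/incomplete
    #   Default save path:    /data/complete

In the Unraid template, the same thing would be two Path entries rather than -v flags, if I understand correctly.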
  3. I don't have pterodactyl set up, so I'd be very interested to hear your results if you do test it! It could help narrow down the performance issue to the server itself (if it's as much of a resource hog with Proton) or the Wine overhead (if it runs better with Proton).
  4. I'm going on a tangent here, but has anyone else noticed extremely high CPU usage when running this server? I have an i3-10100, nothing else is using a significant amount of CPU (Plex, Jellyfin, IRC client, qBittorrent, etc. all barely register), and I'm seeing 10% CPU usage at idle and 55-60% with just 3 players on the server. top/htop seems to confirm this isn't some iowait "fake" CPU usage. Can anyone else confirm this behaviour? That way I'll know whether it's just me or a global issue. Is it the server app itself causing the load, or the Wine overhead? I have a dozen game servers running, many through Wine, and none go this high with so few players. Depending on the cause, any idea if this can be fixed? It pretty much doubles my power consumption (45W -> 85W). See screenshots for info.
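For anyone trying to reproduce, here's roughly how I've been splitting the load between the game binary and Wine itself (the container name is an example; use whatever yours is called):

    # Per-container CPU as Docker sees it
    docker stats --no-stream
    # Processes inside the container with their CPU share; this uses
    # the host's ps, so nothing needs to be installed in the image
    docker top icarus-server aux

If the game's .exe dominates, it's the server app itself; if wineserver does, it's the translation overhead.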
  5. No need to be sorry, don't be so harsh on yourself for the unpaid work you're putting in. I for one really appreciate what you're doing, making life easier for docker newbies like me, on a game that launched less than 48 hours ago of all things. Saves me from spinning up a VM to host servers "the old way" because I don't understand docker well enough to do things myself. Thanks to your docker and everyone's input in this thread, my server's up and friends are playing together - that's what matters! Cheers
  6. I'm updating this thread because it can prove useful to someone, even though it has spiralled out of control into troubleshooting hell. I've narrowed the issue down to my RAM.

I have two kits of identical RAM, bought at separate times. Both kits have the same model numbers, specs, and timings, and both pass MemTest without issue on their own. Put them together, however, and they fail MemTest. After further investigation, I noticed that while they share the same model numbers, frequencies, and timings, the older kit is actually single rank, while the newer kit is dual rank. I never knew the kits differed in this way (it is not advertised anywhere on the online shop or the manufacturer's website) and never expected it to be an issue - I had always read that the memory controller would "pick the slower of the two" kits and work from there. I also expected memory incompatibilities to be clear and evident, with the PC not booting at all; not this unexplainable crash situation. This is the only explanation I can find for otherwise identical kits of RAM passing MemTest individually but failing when put together, no matter which slots are used.

I am currently running only the dual-rank kit in dual channel, and I've had consistent uptime with no crashes for around a day now. I have also failed to cause crashes as I did before, by sending a huge 125GB file over SMB. I have even reverted qBittorrent back to libtorrent v2, with no issues so far. I'll update the post if I spoke too soon, but this seems to be it. For now I'm flairing this as the solution, in case it can help someone.

TL;DR: when encountering seemingly unexplainable crashes, running MemTest is a good first step to narrow the issue down.
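For reference, a quick way to check rank without pulling sticks or hunting for spec sheets, assuming dmidecode is available and your BIOS populates the field:

    # Each "Memory Device" block in the output reports the module's rank
    dmidecode -t memory | grep -E "Locator|Part Number|Rank"

Single-rank modules show "Rank: 1" and dual-rank ones "Rank: 2"; that mismatch is exactly the detail the shop listing never mentioned.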
  7. Never mind, it now crashes even with Docker disabled when sending the test file, so I guess it has to be a different issue then. It gives the same error though: "general protection fault, probably for non-canonical address". I give up for tonight; I'm going to bed and letting MemTest run overnight, just in case.

-------------------------

Edit: Welp, MemTest immediately failed with thousands of errors. That also explains why Unraid would crash when ingesting a big file. These sticks were working fine before and are months old, so I guess my "new year power brownout" theory is gaining traction. I'll test the modules individually tomorrow. But for fuck's sake; I was thinking of buying a UPS too, since I'm starting to use Unraid more. I'm not closing this thread as solved until I can confirm everything works with good RAM, hope you don't mind.
  8. I was about to say things were looking good since I switched qBittorrent to libtorrent v1 instead of v2, but I encountered a crash while doing something else, and this one is reproducible 100% of the time. I had attempted to copy a large (125GB) file over SMB from my NVMe-equipped desktop to my NVMe Unraid cache pool, to test the SSD's SLC cache limit. At some point during the copy, throughput falls to zero and Unraid crashes within seconds. I haven't included the syslog or diagnostics because there's nothing in them: no errors at all. Only the console hooked up to the server has info:

Crash 1 starts with: BUG: Bad page state in process shfs pfn:952a66
Crash 2 starts with: BUG: Bad page state in process smbd pfn:2a090b
Crash 3 starts with: general protection fault, probably for non-canonical address

Most crashes end with a kernel panic; others are an endless loop of trace calls. What makes me think it's related to the issue I originally posted about is that disabling Docker seems to solve it entirely. With Docker disabled I have not been able to reproduce the crash: I can copy the 125GB file over SMB back and forth with no issue, and it fully saturates my 10G network (1.2GB/s). Yet unlike in my previous comments, the syslog is completely clean. So it can't be the same issue? And what makes the issue reproducible every single time; why does a large file over SMB trigger the same kernel panic as qBittorrent? SMART data shows the SSD is only at 20TB written out of a rated 200TB endurance, and the few attributes I get seem to indicate all is normal. The Crucial P1 is known as a hot-running drive, but unreliable it is not. I will be purchasing another NVMe drive to rule out the hardware side - I needed a second drive for parity on the cache pool anyway. For now I'm just confused: I thought I had narrowed down the issue, but a whole other can of worms is now open. Any help is appreciated.
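For completeness, this is the kind of readout the SMART comments are based on (the device path is an example; adjust to your system):

    # Health and endurance counters for the NVMe drive, including
    # "Data Units Written" and "Percentage Used"
    smartctl -a /dev/nvme0

NVMe drives expose far fewer attributes than SATA SMART does, hence "the few attributes I get".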
  9. I've read your thread @JorgeB, and I've decided to try another option proposed there before I commit to disabling plugins I rely on. I've edited my binhex qbittorrent-vpn container to pull @binhex's libtorrent v1 build, and will run it as a daily driver to see if things improve. If no crashes or similar errors occur, then we can confirm your thread describes the issue, without having to resort to wiping my plugins. If it still crashes, I'll wipe the plugins as you asked. From my understanding of the thread you linked, this issue (if it is indeed what you suspect) should be solved with the release of 6.13, since it will run on a 6.5+ kernel. Correct? Anyway, thanks as usual for your time and help.
  10. First of all, happy new year! I didn't expect anyone to be online so soon in 2024. Got it, I'll dig in and report back as soon as my parity drive finishes rebuilding. Probably unrelated, but I woke up this morning to a disabled parity drive and a bunch of errors in syslog. I wasn't too fresh this morning and forgot to download the syslog and diagnostics, but the drive does pass the SMART test with no issues, and I checked all cable connections to make sure nothing was loose. So I don't think the problem comes from the drive's health or my SATA cables; all I can think of is a brownout or very short power cut over the new year. Either way, I don't think it's related to the main problem I posted here about.
  11. I've checked your post, and while some things are similar ("Call Trace", <TASK>, the general syntax of the errors), it's not exactly the same. For instance, in your post you say the errors should always start and end with "BUG: kernel NULL pointer dereference, address: blablabla", yet Ctrl+F in my syslog yields nothing of that kind. Same with "__filemap_get_folio+0x98/0x1ff": all I get is "filemap_migrate_folio+0x1b/0x62". I've also read that I should remove certain plugins you mentioned. I'm however reluctant to do so, as some of them, most notably Appdata Backup, are essential to backing up saves from the game servers I host. In your thread, is the top post updated with conclusions from further down, or should I browse the whole thread to see if some plugins end up confirmed as culprits?
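In case it helps anyone doing the same comparison, grepping the syslog directly is faster than Ctrl+F in the browser (the path is the Unraid default; adjust if your syslog lives elsewhere):

    # Search the live syslog for the signatures from the linked thread
    grep -E "NULL pointer dereference|__filemap_get_folio|filemap_migrate_folio" /var/log/syslog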
  12. Hello, I have recently installed the binhex qbittorrent-vpn docker container, and have been experiencing regular crashes ever since (around once a day). The system was perfectly stable previously, with an uptime of around 4 months. I believe I have narrowed the issue down to one mentioned in the 6.12.4 and 6.12.6 change logs, related to macvlan call traces. The problem itself is far beyond my knowledge level, but I have applied the fix suggested by Unraid in the change logs: I switched "Docker custom network type" from macvlan to ipvlan. Yet the crashes still occur, albeit with different errors in the logs. Therefore, here are my questions for those more knowledgeable than me:

  • I have a Realtek RTL8125B 2.5G NIC on my motherboard, but it is not in use (nothing plugged in), as I use a generic Chinese X520-DA1 10G NIC for connectivity instead, which has worked without issues for a year. The Fix Common Problems plugin warns that the RTL8125B is known to cause instability and that a plugin with an alternative driver is available. Can the Realtek chip cause instability even when it is not plugged in and not in use? Should I install the driver anyway? (See the sketch below for how I checked which driver is bound.)

  • I have, to the best of my knowledge, applied the recommendations found in the 6.12.4 changelog, yet the crashes still occur. What should I do next to attempt to fix the issue?

Please find attached the syslog of the latest crash, as well as the diagnostics obtained just after reboot. The syslog is full of errors mentioning "call trace" and "khugepaged Tainted", which is why I believe this is related to the macvlan call traces issue. Thanks in advance for your help, and I wish you all a (soon to be) happy new year! tower-diagnostics-20231231-1113.zip tower-syslog-previous-20231231-1012.zip
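For the first question, this is how I looked at whether the Realtek driver is loaded at all, link or no link (the interface name is an example; check yours in Network Settings):

    # Which kernel driver is bound to each network controller?
    lspci -k | grep -A 3 -i ethernet
    # Is the onboard port even link-up?
    ethtool eth1 | grep "Link detected"

As far as I understand, the driver binds as soon as the PCI device is enumerated, regardless of whether a cable is plugged in, which is why I'm unsure whether "not in use" actually rules it out.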
  13. Oh wow, I'll definitely check this out later today, thanks! Edit: works like a charm so far! 100%. Only posting to report back as promised with the Windows Event Viewer: if there are memory issues on both OSes, it doesn't take rocket science to figure out that the server app from the devs is the problem. Thanks again!
  14. Got it. It's just a bit frustrating, since my saves are backed up at 4am every day and the server often fails to restart properly afterwards; meaning that pretty much every single day I have to manually restart the server anywhere from one to a dozen times until the port checks go through. Unfortunately for me it fails the majority of the time, about 2 in 3 starts, and I'm curious what could cause that variation. I couldn't find any information from other people having this issue; so either the sample size is too small for there to be debate on the subject, or I'm really unlucky and it hits me harder for whatever reason. I remember from posts about another game server that the game had an option to skip the network checks - a sort of "I know it works, I don't need you to tell me if it doesn't" option. Is the port check part of the SOTF server client itself, or something added on top that could be disabled? (In the meantime I've been considering scripting around it; see the sketch below.)

On a side note, it seems you were right about Icarus. I finally had the time to run the native Windows server client to see if it leaks memory like the Wine variant in your docker container. And... well... I guess the Event Viewer speaks for itself: averaging a modest ~20K "Ran out of memory" errors per day of uptime. So the server client itself has issues. I don't know what difference between Windows and Linux (specifically Unraid) causes the latter to crash and not the former. Then again, the Windows client never crashed with an "Out of memory" log error like the docker did, but my guinea pig colleagues on the server did report getting repeatedly kicked, with no error reported in the logs.

Last point: you're also probably right that the Icarus docker command you explained to me earlier is a "dirty" fix. I've had unexplained crashes where the whole Unraid OS seizes up (unreachable dashboard, frozen terminal via HDMI, requiring a hard reset), ONLY when I apply that command. The syslog shows nothing that could explain it, so by process of elimination I assume it has to be that command. Do you think it could be the cause? Anyway, thanks again for the help and explanation!
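The scripting idea, for the record, is a dumb watchdog along these lines (the container name and the success string are assumptions; I'd have to check what a successful start actually prints to the log):

    #!/bin/bash
    # Restart the SOTF container until its log shows a successful start.
    # Run via the User Scripts plugin a few minutes after the 4am backup.
    CONTAINER="sons-of-the-forest"   # assumption: your container's name
    MARKER="Server started"          # assumption: whatever a good start logs
    for attempt in $(seq 1 12); do
        # Give the current start two minutes to get through the port checks
        sleep 120
        if docker logs --tail 100 "$CONTAINER" 2>&1 | grep -q "$MARKER"; then
            echo "Server up after $attempt attempt(s)"
            exit 0
        fi
        docker restart "$CONTAINER"
    done
    echo "Gave up after 12 attempts" >&2
    exit 1

Not pretty, but it would at least automate the dozen manual restarts.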
  15. Hello, I'm having a recurring issue with the Sons Of The Forest docker, where it gets stuck in the port verification stage. It tests the ports to see if they're open (and sees that they are), then just hangs there. This happens about 2/3 of the time, and it can take up to a dozen restarts for the server to launch successfully.

Stuck at port 8766:
#DSL [Self-Tests] [Config] Dedicated Server configuration file is valid.
#DSL [Self-Tests] [Networking] Testing public accessibility...
src\clientdll\steamengine.cpp (3003) : Assertion Failed: Attempt to call interface with invalid hSteamUser /+/ 1, appid=1326470, hpipe=131073, inprocess, thread=472
src\common\interfacemap.cpp (890) : Assertion Failed: IPC call to IClientHTTP::ReleaseHTTPRequest returned failure code 12
#DSL [Self-Tests] [Networking] Testing server ports against public ip <My IP>...
#DSL [Self-Tests] [Networking] UDP GamePort [8766] is open.

Stuck at port 27016:
#DSL [Self-Tests] [Config] Dedicated Server configuration file is valid.
#DSL [Self-Tests] [Networking] Testing public accessibility...
src\clientdll\steamengine.cpp (3003) : Assertion Failed: Attempt to call interface with invalid hSteamUser /+/ 1, appid=1326470, hpipe=131073, inprocess, thread=464
src\common\interfacemap.cpp (890) : Assertion Failed: IPC call to IClientHTTP::ReleaseHTTPRequest returned failure code 12
#DSL [Self-Tests] [Networking] Testing server ports against public ip <My IP>...
#DSL [Self-Tests] [Networking] UDP GamePort [8766] is open.
#DSL [Self-Tests] [Networking] UDP QueryPort [27016] is open.

What's strange is that when I host the server on a Windows VM, or natively on my Windows PC, it runs through the port checks 100% of the time. Moreover, once the docker is up, it's rock solid connectivity-wise. Both the Unraid server and my desktop are wired identically through an SFP+ switch, which is itself hooked up to the router via SFP+, so all this leads me to believe it can't be the router. It always hangs on either port 8766 (GamePort) or 27016 (QueryPort). Any ideas? Thanks in advance!