InternetD

Everything posted by InternetD

  1. Still no luck for me. It must be some deeper incompatibility issue specific to my mainboard/NVMe combination. Thank you anyway. If I'm not wrong, kernel.org must have an ASPM blacklist somewhere in the kernel code, since you can report buggy devices on the mailing list. It may be wise not to override that list until you get desperate. If you can get your hands on it, it may be good to implement/update it with each Unraid release, since your plugin might otherwise ignore it.
  2. Sadly it still doesn't work. Even with 0 the NVMe keeps crashing, so I guess it's a whole other issue with L1.1 and L1.2 that is PCIe related.
  3. nvme_core.default_ps_max_latency_us=0 as a boot flag should do the trick against buggy NVMe low-power states.
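     For reference, a minimal sketch of how that flag could be appended to the boot entry in /boot/syslinux/syslinux.cfg on Unraid (the label and the rest of the append line shown here are assumptions and may look different on your install):

       label Unraid OS
         menu default
         kernel /bzimage
         append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot

     After a reboot, cat /proc/cmdline shows whether the flag was actually picked up.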
  4. Thanks for the answer. I have been using amd_pstate for a while without any issue whatsoever on other devices with fitting hardware and, of course, a newer kernel. As for ASPM, I wasn't sure whether blacklisting is possible, since it seems it needs to be driver specific. For example, the amdgpu driver allows it with amdgpu.aspm=0. The NVMe is up to date, so no luck there. But it's nearing its end of life anyway, so I will pay attention with the next purchase. For example, the newer KIOXIA NVMe with TLC and DRAM officially supports L1.1 and L1.2.
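     In case anyone wants to check what their own device advertises before blacklisting anything, a rough sketch (01:00.0 is only a placeholder for the NVMe's PCI address):

       # which ASPM states the link supports and which are currently enabled
       lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkCtl|L1Sub'
       # the system-wide ASPM policy currently in effect
       cat /sys/module/pcie_aspm/parameters/policy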
  5. Also amd_pstate=active (with the upcoming 6.6 kernel) for Zen 2 and newer AMD CPUs. With that, the performance and especially the powersave governor work much better. Schedutil should also be preferred over ondemand/performance if amd_pstate is not available for your CPU. Also, the possibility to blacklist certain devices yourself. For example, my M.2 SSD doesn't like L1+ (L1.1 and L1.2), but the rest seems fine. PS: In case you want to blacklist that M.2 for everybody.
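     A quick sketch for verifying which scaling driver and governor are actually in use (standard cpufreq sysfs paths; cpu0 stands in for any core):

       # shows the active frequency scaling driver, e.g. amd-pstate or acpi-cpufreq, depending on kernel and boot flags
       cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
       # the current governor and the ones the driver offers
       cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
       cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors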
  6. Will try again once the parity is fully synced in about 24 hours, since I couldn't really check the logs and didn't save them elsewhere before the crash. Had to hard reset the system yesterday after it became unresponsive and wouldn't do a regular shutdown via the case power button. What I stated was only what I could perceive in the "surviving" unresponsive htop SSH session. After uninstalling the plugin the system is stable again. But since some of my workloads like a lot of RAM, I still need this plugin in the long run. PS: Nothing special in the logs; for now it seems to work again without any issues.
  7. Since the last two minor Unraid updates it seems that mkswap can't grab the created swap file anymore (on XFS), resulting in a massive CPU load (50 to over 100 load on a 16-thread CPU) and ending with the system becoming unresponsive.
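     For comparison while debugging, a rough sketch of how a swap file is normally set up by hand on XFS (/mnt/cache/swapfile is a made-up path; the plugin may handle this differently):

       # preallocate a 4 GiB file without holes, lock down permissions, then format and enable it
       dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=4096
       chmod 600 /mnt/cache/swapfile
       mkswap /mnt/cache/swapfile
       swapon /mnt/cache/swapfile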
  8. Hello, and first of all: thank you. Exactly this seems not to work. No matter if I change from "phlak/mumble:latest" to "phlak/mumble:1.3.2" or vice versa, 1.3.1 is pulled down. Maybe adding "murmur" to the description can resolve both issues? That way, people looking for either name should be able to find the Docker.
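     A quick way to double-check what each tag actually resolves to (plain docker CLI; the tag names are the ones from the post above):

       docker pull phlak/mumble:latest
       docker pull phlak/mumble:1.3.2
       # if both tags show the same ID/digest, they point at the same (1.3.1) image
       docker image inspect --format '{{.Id}} {{.RepoDigests}}' phlak/mumble:latest phlak/mumble:1.3.2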
  9. Hi @A75G, thank you for your work; so far I'm using your murmur/mumble Docker. Still, I have some suggestions for this particular Docker: rename it to include "murmur", since that is the real name of the server application for Mumble. The project page is not mumble.com but mumble.info. The latest and 1.3.2 tags don't pull 1.3.2 but 1.3.1. Greetings
  10. So, first time posting, I guess. What I like so far: user friendly in every way. What I want for 2020: built-in possibilities for external backup of important data (cloud, any kind of external drives or network storage).