Ixel

Members
  • Posts

    12
  • Joined

  • Last visited


Ixel's Achievements

Noob (1/14)

Reputation

2

  1. Thanks, the modinfo tip is very helpful. On Unraid it appears to be just modinfo corefreqk (without the .ko). I guessed the same might apply to insmod, but I hadn't tested it yet. EDIT: With insmod, neither corefreqk.ko nor corefreqk is found, so I guess the module is named or located differently on Unraid.
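For anyone hitting the same name-vs-path confusion, a quick sketch of how the three module tools differ (the module name and paths here are assumptions based on the CoreFreq build directory, not verified against the Unraid plugin's install location):

```shell
# Locate the CoreFreq kernel module on the running system; adjust
# the pattern to wherever the plugin actually installed it.
find /lib/modules/$(uname -r) -name 'corefreqk*' 2>/dev/null

# modinfo accepts either a bare module name (looked up under
# /lib/modules/$(uname -r)) or an explicit path to the .ko file:
modinfo corefreqk            # by name, if installed and depmod has run
modinfo ./corefreqk.ko       # by path, e.g. from the build directory

# insmod takes a *file path* only, while modprobe takes a *name*
# and resolves dependencies -- which is why a bare `insmod corefreqk`
# fails when the .ko isn't in the current directory:
insmod ./corefreqk.ko        # must be a path to the file (run as root)
modprobe corefreqk           # by name; needs the module indexed by depmod
```

These commands need root and the module file present, so treat them as a reference for the naming rules rather than a copy-paste script.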
  2. Thanks, hopefully it's not simply user error on my part (regarding the configuration not persisting). As for the governor/scaling matter, I recall using acpi-cpufreq before, as I'm on AMD (Threadripper 3995WX), so I'm guessing I'd just need to stop blacklisting acpi-cpufreq to restore the ondemand and conservative choices, and presumably not register the governor from CoreFreq (only the clock source, CPU idle and CPU freq drivers). I'll play around with it and see where it gets me.
  3. A very handy plugin! I have two questions though. 1) Is it possible to somehow 'persist' custom configuration between reboots in CoreFreq? For example, if I change the CPU's TDP and register the governor, CPU idle, etc. drivers in CoreFreq, those changes are forgotten when Unraid reboots. I thought it might be possible to reapply them using arguments to corefreq-cli (e.g. from a user script on first array startup), but I can't find arguments that would do that either. 2) With CoreFreq's governor etc. registered, the only two scaling choices I see are performance and powersave. Is it possible to still have ondemand or conservative as choices, or would I need to change back to the governor Unraid originally used? At least I assume it's the governor that's responsible for this, though I'm not 100% certain; I also presume these scaling choices disappeared because of the blacklisting-and-registering guide I (partially) followed from the GitHub wiki page. Thanks in advance.
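To see which scaling driver is active and which governors it actually exposes, the standard Linux cpufreq sysfs files can be checked directly. Whether ondemand/conservative appear depends on the registered driver (acpi-cpufreq offers them; some drivers only expose performance and powersave) — a sketch, assuming the generic sysfs paths:

```shell
#!/bin/sh
# Inspect the active cpufreq driver and its available governors.
CPU0=/sys/devices/system/cpu/cpu0/cpufreq

if [ -r "$CPU0/scaling_available_governors" ]; then
    echo "driver:    $(cat "$CPU0/scaling_driver")"
    echo "available: $(cat "$CPU0/scaling_available_governors")"
    echo "current:   $(cat "$CPU0/scaling_governor")"
else
    echo "no cpufreq sysfs interface exposed by the current driver"
fi

# Switching governor (as root), if it is listed as available:
#   echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```

Since these sysfs writes don't survive a reboot either, a user script run at array startup that echoes the desired governor per CPU is one plausible way to persist the choice.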
  4. According to Kingston it should be supported, however I've manually set them to 2666 now. Global C-states control is now disabled too. Fingers crossed it solves the problem.
  5. Hi, Thanks for replying. I did not, sorry. I have just read it now though and will see what I can find in the BIOS related to that and make the appropriate changes. I'll let you know how it goes, thanks! 👍
  6. Hi, In the last few weeks I've been having random crashes occur increasingly often. What started as perhaps every two weeks became every week and is now virtually every day. I've run a memtest with no errors, sadly. I temporarily removed the NVMe cache drives with no change; during some of the crashes I could only write to the HDDs and not the NVMe drives until a hard reset was done. I've upgraded to 6.10rc1 due to the macvlan crash, as I originally had host access to containers allowed (now disabled, and using ipvlan instead). The voltages shown on the IPMI seem fine, so I don't believe it's a developing power supply fault. At this point I'm stumped; the hardware (other than the hard drives) is pretty new and was reliable for quite some time. I've now enabled mirroring of the syslog to flash, and I've attached a snippet of what I was able to retrieve before needing to hard reset again; next time it crashes I'll be able to get a full syslog. I'm hoping someone has an idea of what might be causing this, beyond "it could be the motherboard, CPU, memory, hard drives or power supply", which sadly doesn't narrow things down much. Let me know if you have any questions. Thanks in advance. Basic summary of specs: AMD Threadripper Pro 3995WX 64-core CPU; 512GB DDR4 ECC RDIMM (8 x 64GB at 3200MHz), Kingston Server Premier; ASUS WRX80-E SAGE WiFi motherboard; Corsair AXi 1200 PSU; ASUS ROG 1080Ti OC GPU; 4 x Samsung 970 Pro 512GB NVMe; Western Digital Red NAS drives for general storage and parity (2 x 10TB and 3 x 4TB). EDIT: It looks like lowering the memory clock speed below the value Kingston officially lists as compatible with my motherboard, and/or disabling global C-states control, has solved the instability. I've had no issues since changing those settings. Thanks for the help! Fingers crossed it stays this way.
unraid_syslog_snippet.txt tower-diagnostics-20210901-1659.zip
  7. Want to say thanks for the work by those who fixed the current issue. It seems to be working fine now!
  8. I'm also having a similar problem where the data isn't automatically refreshing. There are no errors in the browser's console, and I can see it's no longer requesting new data about the PSU (it requests once and that's it). This seems to have started when I upgraded to 6.9 from, I think, 6.8.4.
  9. I know everyone has a life, but I'm equally sorry that it's apparently so difficult for someone active there (at the time) to type a few characters back, or even reply to a simple greeting 😉. Anyway, back on topic... I've resolved the problem. I'm not sure why, but after a few further attempts at removing and reinstalling the container, it's somehow now fully working. I wish I had a clear-cut answer to share with anyone else experiencing a similar issue, but good luck to anyone who does!
  10. I'm having the same problem as you and trying to figure out why. I've even tried giving the container its own dedicated LAN IP, but for some reason the TFTP server can't be reached from outside the container. Connecting to it from within the container works, even when using the LAN IP I assigned to the container, so it's like there's some kind of firewall in the way. According to netstat it's listening on port 69 correctly. I tried asking in the official linuxserver Discord, but it felt like I was invisible; not a soul even bothered to say "I don't know", just blissful silence, haha. I think I'll just set up my own small virtual machine and run the TFTP server and netboot.xyz files myself on this occasion.
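One thing worth checking with this "works inside, fails outside" symptom: TFTP replies come from a random high UDP port, not port 69, so plain NAT/bridge forwarding of 69 isn't enough — the kernel's TFTP connection-tracking helper has to be loaded on the host for replies to be routed back. A sketch of the checks (the IP and filename below are placeholders, not from the original setup):

```shell
# TFTP is UDP, so list UDP listeners (-u), not TCP:
ss -ulnp | grep ':69'

# From another machine, a quick transfer test with the busybox
# tftp client (filename and server IP are placeholder examples):
#   tftp -g -r netboot.xyz.kpxe 192.168.1.50

# The server's data reply comes from an ephemeral UDP port, so
# conntrack needs the TFTP helper to associate it with the request:
lsmod | grep nf_conntrack_tftp
modprobe nf_conntrack_tftp    # as root, if the helper isn't loaded
```

If the helper is missing, the initial request reaches the container but the reply from the ephemeral port is dropped, which matches the firewall-like behaviour described above.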
  11. Does anyone know whether limiting the CPU shares on the Docker container also ensures the virtual machines aren't impacted and take priority when they need the CPU? I tried this on Unraid some time ago but couldn't stop BOINC/F@H from causing lag on the game servers I run in a VM. I want the container to use only 'spare/idle' CPU capacity, as opposed to dedicating some cores to it.
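On that question: --cpu-shares is a relative weight that only takes effect under contention, and it weighs containers against each other within Docker's cgroup subtree — it doesn't directly guarantee a QEMU VM in a different cgroup wins the CPU. Two heavier-handed alternatives, sketched with placeholder container/image names (not from the original setup):

```shell
# 1) Give the container a minimal CFS weight and keep it off the
#    cores the VM is pinned to (core range is an example):
docker run -d --name boinc \
  --cpu-shares=2 \
  --cpuset-cpus=0-15 \
  linuxserver/boinc

# 2) Or demote an already-running container's processes to
#    SCHED_IDLE, so the kernel only schedules them when nothing
#    else wants the CPU (run as root; applies per listed process):
for pid in $(docker top boinc -eo pid | tail -n +2); do
    chrt -i -p 0 "$pid"
done
```

SCHED_IDLE is the closest match to "use only spare/idle capacity": the BOINC processes then yield to any normal-priority task, including the VM's qemu threads, without dedicating cores.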