meep

Members • 758 posts
Everything posted by meep

  1. Hi, sorry for the late reply. I'm not sure why I didn't receive a thread update notification. Yes, I still have the card, and I managed to get it working reliably in my system by shuffling some cards between slots. However, it's really just a nice-to-have, as I have lots of other USB controllers I'm not using, so I'd consider selling it.
  2. So after a full weekend, and after re-enabling all the various hardware and apps, the server has been stable. It's reasonable to say that disabling C-states was the fix for the issues I was encountering. C-states had been enabled since I built the server in August 2019, and the system worked perfectly right through to 6.11.5. Only when I upgraded to 6.12 did regular crashes start, and they seemed to escalate in frequency with every point release I installed. At least now, thanks to the exhaustive testing, I've identified and removed a few CPU-intensive plugins I didn't really need, and identified a faulty SSD, so there's that at least.
  3. I believe power supply idle control was already set to normal, but I'll double-check when I next reboot. I want to have the server running for 24 hours without issue before restarting and adding back some of my expansion cards.
  4. So we might have a winner here... it's an oldie, but a goldie. Working through any and all possible troubleshooting, I disabled C-states in my BIOS and have had my first error-free night in several weeks. Not fully confirmed yet, and it's super confusing why this would impact my system now and not several years ago, when this was actually a known issue. Will report back and hopefully close out the thread if this turns out to be the actual solution. (For anyone who'd rather test this from the OS first, see the sketch below.)
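     A rough sketch for checking and capping C-states from the OS before (or instead of) flipping the BIOS switch. The sysfs path is standard Linux; the append line lives in /boot/syslinux/syslinux.cfg on Unraid, and processor.max_cstate=1 is my assumption for an AMD board like mine, not a verified fix.

       # list the idle states the kernel is currently using
       cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
       # cap C-states with a kernel parameter instead of the BIOS toggle:
       # edit the append line in /boot/syslinux/syslinux.cfg, then reboot, e.g.
       #   append processor.max_cstate=1 initrd=/bzroot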
  5. That's the plan. Though I see quite a few similar reports both here and on Reddit. I think there's something fundamentally problematic in 6.12.x.
  6. I asked ChatGPT for an analysis. Here's what I got back...

     It seems like you're experiencing a recurring issue on your UNRAID server, and I'll walk you through the analysis of the provided logs. The logs indicate that there is a problem with your system's kernel, and it's encountering a "general protection fault." This is a type of error that occurs when the kernel detects that a process is attempting to access a part of memory it's not allowed to, often due to a software bug. Let's break down the information in the log:

     • Timestamp and Process Information: The log entry starts with a timestamp indicating when the issue occurred. It mentions the process name (disk_load) and the PID (Process ID) of the process encountering the issue.
     • Hardware Information: It provides information about the hardware, including the motherboard model and BIOS version. This is important because hardware-related issues can sometimes lead to kernel faults.
     • Stack Trace: The RIP line indicates the Instruction Pointer at which the issue occurred. In this case, it points to the list_del function in the kernel code. The Code section shows the assembly code at the RIP location. The RSP register contains the Stack Pointer at the time of the fault. Registers like RAX, RBX, RCX, etc., show the values stored in those registers when the fault happened.
     • Call Trace: This section provides a call stack trace, showing the sequence of function calls that led to the issue. It starts from the top-level function (list_del) and goes deeper into the kernel functions.
     • Modules Linked In: This lists the kernel modules that are currently loaded. Kernel modules are pieces of code that can be loaded into the kernel to add functionality or drivers.
     • Additional Information: The logs also indicate that this issue is affecting CPU 21. There are mentions of "rcu_preempt detected stalls." RCU (Read-Copy-Update) is a synchronization mechanism in the Linux kernel. These stalls might be related to the issue you're experiencing.

     Based on this information, it's clear that the issue is occurring in the kernel code, specifically in the list_del function, which is part of linked-list manipulation. The most likely cause of this issue is a software bug in the kernel or a kernel module. Kernel bugs can be very complex to diagnose and often require the expertise of kernel developers. Here are some steps you can take to further diagnose and potentially resolve the issue:

     1. Check for Updates: Ensure that your UNRAID server is running the latest kernel and kernel modules. Sometimes, updating the kernel can resolve known issues.
     2. Check for Specific Software: Determine if the issue started happening after installing or updating specific software or kernel modules. Try disabling or reverting those changes if possible.
     3. Kernel Debugging: Consider enabling kernel debugging and collecting more detailed logs. Kernel developers can use these logs to identify the exact cause of the issue.
     4. Community Forums: Check UNRAID or Linux kernel community forums for similar issues. Others may have encountered and solved similar problems.
     5. Contact Support: If the issue persists, consider contacting UNRAID support or the support channels for your specific hardware components. They may have specific insights or solutions.
     6. Hardware Testing: In some cases, hardware issues can manifest as kernel faults. Test your hardware components (e.g., RAM, CPU, motherboard) for any problems.
Keep in mind that diagnosing and resolving kernel-level issues can be challenging, and it's essential to take precautions to avoid data loss during troubleshooting. If you're not comfortable with these steps, seeking assistance from experienced professionals or support channels is advisable.
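     The advice above is fairly generic, so for the record, this is the plain grep I've been using to pull each fault and its call trace out of the syslog for comparison (standard log location on my box; nothing Unraid-specific):

       # show every general protection fault plus the call trace that follows it
       grep -n -A25 "general protection fault" /var/log/syslog | less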
  7. @Dimtar I'm on IPVlan (not macvlan). @SirLupus I haven't tried that, but like you I need SMB, so it's not really an option. I've now spent DAYS peeling back the onion, removing all cards etc. and adding them back in one at a time. I thought I'd solved it when I identified a bad SSD (that I wasn't even using) and removed it, and I seemed to gain some stability. However, overnight last night it all came tumbling down again (during a parity check). I'll paste a bit of the captured log below, but it looks like something is tripping up the kernel, which then continues to generate a kernel exception every 3 minutes exactly. In a previous round of this, I could see these were reporting issues in smartctl, which led me down the path to finding the bad drive, but now I have my 3-minute exceptions back with no smartctl references. Really stumped. Next step is to move back to 6.11.x, but I can't stay on that forever (assuming it even works). What's up here @unraid? Here's the start of the issue overnight. The third one just keeps repeating every 3 minutes exactly (see the timestamp check below) until the whole system locks up, or at the very least the GUI freezes and becomes unusable.
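     For anyone wanting to verify the cadence on their own logs rather than eyeball it, this one-liner pulls just the timestamps of the fault lines; it assumes the default syslog "Mmm dd HH:MM:SS" prefix:

       # print only the timestamps of the repeating kernel exceptions
       grep "general protection fault" /var/log/syslog | awk '{print $1, $2, $3}'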
  8. I would need to go all the way back to the last 6.11.x release, as that's the last time I had stability. I have the same inclination, which is why I'm currently focused on removing hardware; I'll look into RAM, CPU and drive connections next.
  9. So with Docker and VMs disabled, the system is still generating multiple GPFs and kernel crashes. I've attached today's syslog, which shows a boot-up sequence around 8:40, with GPFs starting after 10:00. I did manage to do a clean shutdown, so there's that. Next, I'm going to remove any additional non-essential hardware such as extra GPUs, the USB controller etc. If it's still problematic, I'll boot to safe mode to eliminate plugins, and after that it's going to be a CPU and RAM re-install. Arrggghhhh. syslog_Sept15
  10. I'm on a slow boat right now, switching off various bits of the system. I removed all bifurcation and it's still crashing. I removed the Cache Dirs plugin, which was pegging the CPU, and it's still crashing. Next, I'm switching off VM and Docker support. I'll grab a log after that if it's still crashing.
  11. No joy. Still crashing frequently today. I've disassembled and rebuilt with some config changes and compromises and will see if that works. (I eliminated my bifurcated setup.)
  12. I ran this on all my array drives and it did its cleanup. I ran it a second time and no further issues were addressed. Let's see if that made a difference. Thanks for taking a look. I appreciate it greatly.
  13. Shares came back after a reboot, but significant issues persist. I've made an issue thread. 6.12.4 is crashing much more frequently than 6.12.3 was (hourly rather than daily).
  14. Hi folks. Ever since upgrading to 6.12.x, I've been having nothing but issues on my server. I recently updated to 6.12.4, as 6.12.3 was hard-crashing every couple of days. Now it's crashing every couple of hours. Any thoughts, help or insights appreciated. There have been no changes to my system recently apart from an SSD swap to remove a flaky old drive, though the crash issues were occurring before this, and I did the swap as part of troubleshooting. I've attached my diagnostics pack as well as the most recent syslog (see the note below on keeping the log tail through a hard lock). The behaviour I see is a standard start-up, but after a few minutes or, at most, a couple of hours, I observe a cascade of kernel faults in the logs that ultimately results in a system lock-up necessitating a forced restart. The initial fault is never the same. I ran two passes of memtest all day yesterday with no faults. I have several times checked the SATA and power cables to the drives and have verified that all expansion devices are correctly seated. A fresh set of eyes would be appreciated as I'm at my wits' end here. syslog unraid-diagnostics-20230912-1557.zip
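     A note on keeping the syslog tail: a hard lock-up can take the last minutes of /var/log/syslog with it, so one rough way to preserve it is to mirror the log to the flash drive while testing. The /boot/logs path is just my choice, and constant writes add wear to the USB stick, so treat this as a temporary troubleshooting measure only:

       # keep a live copy of the syslog on the flash so a hard lock doesn't lose the tail
       mkdir -p /boot/logs
       tail -f /var/log/syslog >> /boot/logs/syslog-live.txt &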
  15. Updated to 6.12.4 as 6.12.3 was hard-crashing every few days (and 6.12.2 before it lost me days tracking down some kind of plugin corruption). Now all my shares have disappeared, meaning more hours of debugging and faffing around. What's happened, unRAID? I've used it for years and it's been rock solid. Now it's nothing but issues and problems and hassle.
  16. Reporting this, encountered for the first time today on 6.12.3. It manifests as empty pages on the Array, Pool and Boot device tabs. The log is just filled with this (see the config note below):
      Aug 8 13:45:08 UNRAID nginx: 2023/08/08 13:45:08 [error] 9990#9990: nchan: Out of shared memory while allocating message of size 19386. Increase nchan_max_reserved_memory.
      Aug 8 13:45:08 UNRAID nginx: 2023/08/08 13:45:08 [error] 9990#9990: *2742737 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
      Aug 8 13:45:08 UNRAID nginx: 2023/08/08 13:45:08 [error] 9990#9990: MEMSTORE:01: can't create shared message for channel /disks
      Aug 8 13:45:08 UNRAID nginx: 2023/08/08 13:45:08 [crit] 9990#9990: ngx_slab_alloc() failed: no memory
      Aug 8 13:45:08 UNRAID nginx: 2023/08/08 13:45:08 [error] 9990#9990: shpool alloc failed
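     The error text itself names the knob. I haven't confirmed the right place to set it on Unraid (the webgui regenerates its nginx config, so a manual edit may not survive a reboot), but as a sketch, the directive from the nchan docs looks like this:

       # find where the webgui's nginx config touches nchan
       grep -rn "nchan" /etc/nginx/
       # the log asks for a bigger reserve, e.g. in the http {} block (size is a guess):
       #   nchan_max_reserved_memory 64M;
       /etc/rc.d/rc.nginx restart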
  17. So I wiped my USB and reinstalled UNRAID. I copied over my config directory and the issue persisted (could not get the GUI in safe mode). I then embarked on copying my config sub-folders and files one at a time to see which one was the offender - a painstaking process of reboot after reboot (sketch below). And would you believe it - it was the VERY LAST FILE - the VFIO config. ARRRGGGGHHHHHH!! With a fresh UNRAID install plus all of my config folder except VFIO, which I rebuilt, I was finally able to create the ZFS array. It's chugging away now doing some test transfers so I can evaluate performance, but it seems roughly on a par with my main array. I'll use it as a secondary backup for now, so I don't commit any files I don't have copies of elsewhere. Thanks for looking.
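     For anyone facing the same bisect, the per-reboot cycle boiled down to the following (the backup location is hypothetical; /boot/config is where Unraid keeps its config):

       # restore saved config items one at a time onto the fresh install
       cp -r /mnt/user/backup/config/shares /boot/config/   # pick the next untested folder/file
       # reboot and test the GUI; if it still works, repeat with the next item
       # in my case the very last file, the VFIO config, reproduced the failure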
  18. In trying to troubleshoot my pool configuration problem, I've been trying to boot my 6.12.3 server into safe mode, but that's not working, so I'm hoping for a bit of guidance. Here's the link to the thread that gives rise to this one: Anyway, when I boot to SAFE GUI mode, the server boots up OK and I log in on the local display, but no GUI is displayed; Firefox says unable to connect. When I try to connect remotely, I get connection refused in my remote browser, and SSH reports that the key for [IP] had changed and that host key verification failed. I've read up on a bunch of the 'No GUI' threads, but found nothing helpful. The furthest I got was finding that nginx was not running. I started it and got a bad gateway. I restarted both it and rc.php-fpm (commands below) and was directed to a blank page at /Tools/registration (no other UI URLs would load either). I completely emptied my plugins directory, but had the same issues on reboot to safe mode. SSL is OFF. So I'm stuck. Diagnostics attached. Current advice on the other thread is to rebuild the USB. I would like to know what's wrong, though. unraid-diagnostics-20230801-1602.zip
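     For reference, these are the Slackware-style service scripts I was poking at from the local console (paths as shipped on my box; exact script options may vary by release):

       /etc/rc.d/rc.nginx start      # starting nginx alone got me a bad gateway
       /etc/rc.d/rc.php-fpm restart  # restarting both got me as far as /Tools/registration
       /etc/rc.d/rc.nginx restart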
  19. I'm trying with both the built-in GUI (monitor attached) and a remotely connected Chrome browser. Same result in either case. I tried booting safe mode, but there are problems there. In GUI safe mode, the browser won't load the UI. In GUI or standard mode, a remote browser won't connect, and even SSH fails to connect, with password problems. I guess I've got to fix that problem before I can progress with my pools issue.
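     The SSH part at least has a standard fix if it's the "host key verification failed" complaint from the other thread (a USB rebuild/reflash changes the server's key): clear the stale entry on the client and reconnect. The IP below is an example:

       # remove the cached host key for the server, then accept the new one on reconnect
       ssh-keygen -R 192.168.1.100
       ssh root@192.168.1.100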
  20. I can erase, but even after that the Apply button remains disabled.
  21. I am unable to configure new pools or modify existing ones under 6.12.3. What are the reasons for the 'Apply' button in pool configuration being disabled? Discussion here:
  22. No. If I create a new pool and add one or more devices (any device), I cannot apply settings. If I access an existing working pool and make a small change, the Apply button is still disabled. Essentially, I cannot make or edit pools unless I'm willing to accept default settings for a new pool.
  23. Folks, I'm really stuck here. In playing around with this, I find that I can neither set up and configure a new pool nor make any changes to existing pools while the array is stopped, as the Apply button remains greyed out on the pool settings page. Diagnostics attached. unraid-diagnostics-20230731-1155.zip unraid-syslog-20230731-1054.zip
  24. No joy. Didn't have an impact on the issue. I also remade the pool without the underscore character in the name, and reduced the number of drives to 3. But still the Apply button on pool editing remains resolutely disabled.
  25. I'm running 6.12.3. I have a bunch of 'OLD' (ex-array) 4TB drives I've reinstalled in my server and wanted to use to have a play with a ZFS pool. Here they are before I start: unformatted, no FS etc. I created a pool (I have 3 other pools previously set up) and added the disks; so far, so good. Now, I click the link 'Zfs_pool_a' beside the first disk (note, I did NOT capitalise that name; I was careful to name the pool all lower case). I set my file system type to ZFS and configured my settings; however, I cannot apply these settings. Note that the 'Apply' button remains stubbornly greyed out. In fact, it remains so regardless of what file system type I select. If I click 'Done' and start the array, it goes ahead and formats the pool BTRFS. I must be missing something obvious here, but what? A procedure? A setting? A checkbox somewhere? Pointers appreciated.