[Plugin] CA Fix Common Problems


Hi all, just upgraded to 6.9.0-rc2 two days ago. I am now getting this error, and I have no idea how to fix it. Anyone have the steps I can take to fix it?


Legacy PCI Stubbing found: vfio-pci.ids or xen-pciback.hide found within syslinux.cfg. For best results on Unraid 6.9+, it is recommended to remove those methods of isolating devices for use within a VM and instead utilize the options within Tools - System Devices.  
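For reference, both legacy mechanisms the warning names are kernel parameters on the `append` line of syslinux.cfg. A minimal sketch of what the check is presumably keyed on (the grep patterns and the sample PCI IDs are my assumptions from the warning text, not the plugin's actual code):

```shell
# Illustrative append line using legacy stubbing (PCI IDs are made up):
cfg='append initrd=/bzroot vfio-pci.ids=8086:1533,10de:1c82'

# The warning fires when either legacy keyword appears in syslinux.cfg:
if echo "$cfg" | grep -Eq 'vfio-pci\.ids|xen-pciback\.hide'; then
  status="legacy stubbing found"
else
  status="clean"
fi
echo "$status"
```

On 6.9+ the Tools - System Devices page handles isolation without touching the `append` line, which is why the plugin suggests moving off these parameters.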


Not sure if there is a better place to post this, but I wanted to report a minor typo in the plugin. For the warning about the parity check not being scheduled, the "Suggested Fix" starts with, "It is highliy recommended to schedule parity checks . . ." (note the misspelled "highly").


The exact warning text is "Scheduled Parity Checks are not enabled".


Thanks for your work.


Unsure if this has been asked before, as it's not coming up in any search I've tried:


I see that I can exclude folders from the scheduled scans, but can I exclude from the extended test? I have a "backup" share that my Windows machines point to, and there are many errors within those backups that Windows doesn't care about, but unRAID does: leading/trailing spaces, etc. Those items are beyond my control to correct, as they are the result of other apps' organization, or even Windows'. I have image backups and file history backups here, and they make "run extended tests" take FOREVER and generate numerous lines in the results that are completely irrelevant. I'd really like to exclude this Backup share from the extended tests.

Is there a way to do this?
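Not an answer from the plugin itself, but the kind of entries the extended test flags can be listed directly, which at least lets you confirm they all live under the one share. A sketch, using a temp directory as a stand-in for the Backup share:

```shell
# Create a stand-in for the Backup share with one problematic name:
share=$(mktemp -d)
touch "$share/image.vhdx" "$share/notes.txt "   # second name has a trailing space

# Names with leading or trailing spaces, the sort the extended test reports:
matches=$(find "$share" -name '* ' -o -name ' *')
count=$(printf '%s' "$matches" | grep -c .)
echo "$count problematic name(s)"
```

Run against `/mnt/user/Backup` it would enumerate the same items the extended test complains about, without scanning the rest of the array.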




1 hour ago, Squid said:

background garbage collection etc the data actually on the SSD may not be what is actually expected

We had this discussion here, and you can see it's relevant for TRIM, not GC. GC only combines pages from multiple blocks. The data itself stays the same: the LBA is redirected to a page in a new block, and the old blocks are marked as unused. Without TRIM, the pages of deleted files stay intact and are moved by GC as well. Of course this is inefficient and will slow down the SSD over time, but it does not affect data consistency.
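Whether a given drive even advertises discard (TRIM) support can be read from `lsblk -D`: the DISC-GRAN and DISC-MAX columns are zero when TRIM isn't available. A sketch parsing one sample output line (the device name and values are invented):

```shell
# Hypothetical 'lsblk -D' line, columns: NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
line='sdb        0      512B       2G         0'

# Field 3 is DISC-GRAN; a non-zero granularity means TRIM is advertised:
gran=$(echo "$line" | awk '{print $3}')
if [ "$gran" != "0" ] && [ "$gran" != "0B" ]; then
  trim_supported=yes
else
  trim_supported=no
fi
```

Note that even when the drive supports it, whether issuing TRIM is safe under Unraid parity is exactly the compatibility question being debated here.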


Conclusion: the warning should refer to TRIM, not GC. Maybe something like:


SSDs aren't fully supported yet. Do not TRIM your SSD before checking compatibility.


18 hours ago, mgutt said:

Do not TRIM your SSD before checking compatibility.

You can't trim an SSD assigned to the array; trim is disabled, on purpose I guess.


As for the warning, it's better to play it safe, but in my experience most SSDs should not cause any issues. The exception, of the many I tested, is the now-discontinued Kingston V300, which caused a couple of sync errors after a power cycle. I would just recommend frequent parity checks, at least in the beginning; if no sync errors are found, you're fine.



On 8/17/2019 at 8:49 AM, JorgeB said:

Kingston V300

Maybe a general problem with this model? Someone complained about a huge failure rate:


We rolled out about 80 PCs with 2x Kingston V300 60GB SSDs in RAID-1. On these installs we see a 10-15% failure rate over a period of max. 3 months, which is of course unacceptable. This is on a PC platform with an on-board Intel ICH7R SATA RAID controller.


And as a short reminder: Kingston was caught bait-and-switching components in the V300, and the A400 had freezes caused by buggy firmware.


As you said, it happened only when the server was shut down; that sounds like defective flash cells not being able to hold their state, and nothing related to GC.


Are there any other reports? To me it sounds like one buggy SSD model is being generalized into a total SSD incompatibility.

On 1/29/2021 at 8:08 AM, Squid said:

Remove the entries you made within syslinux.cfg to isolate your devices for VM passthrough, reboot (but first disable the VM service) and then go to Tools - System Devices and check off the devices you want isolated.

Thanks Squid... after sitting on the error for a bit, I worked up the courage to delete the lines from /flash/syslinux/syslinux.cfg, and it now looks like this, but I am still getting the error in Fix Common Problems... sorry to be a pain, but this looks like the default config. Thoughts?

default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot
label Unraid OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label Unraid OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Unraid OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest
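A quick way to sanity-check a pasted config like the one above is to grep its `append` lines for the strings the warning mentions; if nothing matches, syslinux.cfg is clean and the warning is stale (possibly cached until a rescan). Sketch, using the append lines from the config above:

```shell
# The append lines from the default config pasted above:
appends='append initrd=/bzroot
append initrd=/bzroot,/bzroot-gui
append initrd=/bzroot unraidsafemode
append initrd=/bzroot,/bzroot-gui unraidsafemode'

# If this finds nothing, the legacy stubbing entries really are gone:
if echo "$appends" | grep -Eq 'vfio-pci\.ids|xen-pciback\.hide'; then
  verdict="still stubbed"
else
  verdict="clean"
fi
echo "$verdict"
```

For completeness: as I understand it, on 6.9 the Tools - System Devices selections are stored in a separate file on the flash drive (`/boot/config/vfio-pci.cfg`, to the best of my knowledge) rather than in syslinux.cfg, so finding IDs there is expected and not an error.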



Interesting, I will reboot again, but just to let you know, I live in Houston, and the server has recently been rebooted multiple times due to the power outages here...


UPDATE: I have rebooted multiple times due to power outages in my area and the error is still showing. For now, since my syslinux.cfg is exactly like the default, I am ignoring it, but I would like to know where else this could be pulling from. I must be missing it somewhere...


Thanks for this plugin, very helpful. My only remaining warning is to install the Dynamix Trim plugin, but according to this thread:

I don't need to run TRIM if it's btrfs and in a pool. Is that correct? If so, could the plugin be tweaked to not give the warning in that case? Thx!


Getting this error recently 


Mar 8 11:52:56 unraid root: Fix Common Problems: Error: Multiple NICs on the same IPv4 network


My server has 2 NICs (well, 3 if you count the IPMI NIC) and they're both plugged into the same switch. Is this not the way most people have it set up? Is there something wrong with doing it this way?
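For context, the check is (as I understand it) about two interfaces holding addresses in the same IPv4 subnet, which can make return traffic leave an unpredictable NIC. A toy reconstruction of the condition for a /24, with invented addresses; this is my sketch, not the plugin's code:

```shell
# Two NICs on the same switch, both addressed inside 192.168.1.0/24:
eth0=192.168.1.10/24
eth1=192.168.1.11/24

# For a /24 the network part is simply the first three octets:
net0=$(echo "${eth0%/*}" | cut -d. -f1-3)
net1=$(echo "${eth1%/*}" | cut -d. -f1-3)

if [ "$net0" = "$net1" ]; then
  same_network=yes    # the condition Fix Common Problems warns about
else
  same_network=no
fi
```

The usual fixes are bonding the two NICs, or putting them in different subnets/VLANs; plugging both into one switch with addresses in one subnet is what triggers the warning.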

3 hours ago, Dovy6 said:

Getting this error recently 

My server has 2 nics (well 3 if you count the IPMI nic) and they're both plugged into the same switch. Is this not the way most people have it set up? Is there something wrong with doing it this way?


See my response here :) 


