Marc_G2

Everything posted by Marc_G2

  1. Ah yes, thanks. It'd been so long since I configured my VM, I completely forgot about that. The issue is resolved now. I just need to edit it the correct way
  2. So this is my syslinux.cfg file that currently works. I thought I just needed to delete vfio-pci.ids=1022:43c7,1b73:1100. Is that not right? I attached my broken .cfg file. I know using Notepad can cause issues, but I figured it would be fine for this.

     default menu.c32
     menu title Lime Technology, Inc.
     prompt 0
     timeout 50
     label Unraid OS
     menu default
     kernel /bzimage
     append initrd=/bzroot vfio-pci.ids=1022:43c7,1b73:1100
     label Unraid OS GUI Mode
     kernel /bzimage
     append initrd=/bzroot,/bzroot-gui
     label Unraid OS Safe Mode (no plugins, no GUI)
     kernel /bzimage
     appen
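For reference, clearing the Fix Common Problems warning only requires dropping the vfio-pci.ids=... token from the first append line; every other line stays as shipped. A sketch of the start of the edited file (the Safe Mode stanza, truncated above, is left untouched):

```
default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot
label Unraid OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
```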
  3. It turns out my issue was caused by something completely different. But I will keep that in mind going forward.
  4. At the moment it's set to 35 min after disks are spun down. I'll create a new thread if I can't figure it out in the next day or so.
  5. That's a good theory, but it looks like changing that setting didn't make a difference. I did recently change my default spin-down time from 60 to 30 minutes, but increasing it to 45 minutes didn't fix it either. For now I'll try a reboot to see if a setting isn't getting applied immediately or something. This was never a problem prior to updating to 6.9.2. I've only made a handful of small changes since that update, but I can't think of anything else that would cause this.
  6. So I was hoping this plugin could help me determine what was pinging my Cache SSD every 30 minutes (almost on the dot) which was preventing my system from going to sleep. But sadly the plugin doesn't detect anything. Do you have any advice? I already posted on the Dynamix thread. nas-ng-diagnostics-20210428-1559.zip
  7. Actually I found the exact cause. Apparently I can't competently delete something from the syslinux.cfg file. I got tired of seeing this warning from the Fix Common Problems tool: "vfio-pci.ids or xen-pciback.hide found within syslinux.cfg. For best results on Unraid 6.9+, it is recommended to remove those methods of isolating devices for use within a VM and instead utilize the options within Tools - System Devices. See HERE for more details." I tried following the directions I saw here twice, but apparently I don't know what I'm doing. https://forums.unraid.net/top
  8. Deleting all plugins didn't allow it to boot out of safe mode, so I guess that's ruled out.
  9. So did it only work for you after you tried a third USB stick? Did you ever try booting in safe mode?
  10. Title says it all. Here's a photo of where it hangs. I've tried flashing to a different USB stick (same model), but that didn't work. It does boot in safe mode, but where do I go from here? Is it possible to revert to 6.8.3? In unRAID it looks like there's an option, but it doesn't seem to be functional.
  11. I've just encountered this for the first time today. Still trying to deal with it. Could there be an issue with 6.9.2? It was working for me for a while, though.
  12. My server is set to go to sleep after an hour of inactivity. But for some reason the S3 plugin is detecting activity on one of my cache drives every 30 minutes like clockwork. This is a recent development, so I thought it may have been caused by a Docker program after I moved the docker.img file to my cache. But disabling the Docker service didn't fix the issue. Does anyone have an idea about what keeps pinging my cache drive? nas-ng-diagnostics-20210428-1559.zip
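One way to hunt for whatever keeps touching the cache drive is to scan /proc for processes holding files open under the cache mount. This is a hedged sketch, not an official Unraid tool; the /mnt/cache path and the helper name are assumptions, and it only catches processes holding a file open at the moment it runs (Linux only, run as root to see other users' processes):

```python
#!/usr/bin/env python3
"""Sketch: list processes with files open under a given path (Linux /proc)."""
import os
import sys

def processes_using(path):
    """Return {pid: process_name} for processes holding an fd under `path`."""
    path = os.path.realpath(path)
    found = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = os.path.join("/proc", pid, "fd")
        try:
            fds = os.listdir(fd_dir)
        except OSError:          # process exited, or insufficient privileges
            continue
        for fd in fds:
            try:
                target = os.readlink(os.path.join(fd_dir, fd))
            except OSError:
                continue
            if target == path or target.startswith(path + os.sep):
                try:
                    with open(os.path.join("/proc", pid, "comm")) as f:
                        found[int(pid)] = f.read().strip()
                except OSError:
                    found[int(pid)] = "?"
                break
    return found

if __name__ == "__main__":
    # Default mount point is an assumption; pass yours as the first argument.
    mount = sys.argv[1] if len(sys.argv) > 1 else "/mnt/cache"
    for pid, name in sorted(processes_using(mount).items()):
        print(pid, name)
```

Running it from cron every minute around the 30-minute mark and logging the output could catch the offender in the act.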
  13. I do have that checked. When it's checked, the 'preserve ownership, times' and 'preserve permissions' options become grayed out. You can toggle those two options if you switch off NTFS mode first, but doing that didn't make a difference.
  14. I think I've fixed the spindown issues I was having. Updating unRAID to 6.9 fixed the unassigned drive, and moving the docker vdisk to the cache allowed the array to spin down. Anyway, I'm backing up to an external NTFS drive and I'm getting a huge number of errors saying "failed to set times....... operation not permitted". Is there a way to fix this? Enabling the "attempt super-user activities" option didn't change anything.
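"failed to set times ... operation not permitted" is rsync's wording, so assuming the backup tool is an rsync front-end, the usual workaround is to stop asking rsync to replay metadata that NTFS can't store. A sketch of the flag set (the helper name and paths are placeholders, not the tool's actual configuration):

```python
#!/usr/bin/env python3
"""Sketch: build an rsync command suited to an NTFS destination."""
import shlex

# NTFS can't store POSIX owners/groups/permissions, and its timestamp
# granularity differs from ext4/xfs, hence these flags:
NTFS_SAFE_FLAGS = [
    "--recursive",
    "--times",            # keep file mtimes (NTFS does store these)
    "--omit-dir-times",   # directory times are where "failed to set times" usually hits
    "--no-perms",
    "--no-owner",
    "--no-group",
    "--modify-window=2",  # tolerate 2 s timestamp rounding on re-runs
]

def rsync_command(src, dest):
    # Hypothetical helper; returns the command string for inspection.
    return " ".join(["rsync", *NTFS_SAFE_FLAGS, shlex.quote(src), shlex.quote(dest)])

print(rsync_command("/mnt/user/backup/", "/mnt/disks/ntfs_external/"))
```

In a GUI front-end, the equivalent is unticking the preserve-permissions and preserve-ownership/times boxes rather than passing flags by hand.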
  15. So these errors are more concerning. Why am I getting drive errors right after they all spin down? The server is in maintenance mode at the moment; does that have something to do with it?
  16. I put the system to sleep and woke it again with the array stopped, and then tried starting it in maintenance mode. I'm not getting any faults so far. My guess is the BIOS update fixed the issue. I'd appreciate it if someone knowledgeable could look at this log after wake-up to see if there's anything concerning. The one concerning thing I saw was this warning: ata4: COMRESET failed (errno=-16) nas-ng-syslog-20210421-2123.zip
  17. Actually the April 17th log seems to show it did go to sleep for some reason (normally it would never go to sleep on a Saturday), so I'll focus on that area.
  18. That's something that crossed my mind. But I'm pretty sure the first time this issue happened was shortly after a boot-up, before it ever went to sleep. Later today I'm going to see if I can trigger the error by putting it to sleep and waking it up again, and just starting and stopping the array. Is there a way to configure unRAID to better handle this error? Could I make unRAID immediately stop the array once it starts seeing this particular fault? The way unRAID disables one of my disks after trying repeated resets is a major headache.
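As for reacting automatically: Unraid has no documented hook for this, but one hedged approach is a small watcher that scans the syslog stream for the COMRESET pattern seen in the logs above and triggers whatever response you choose. A sketch (the stop-array action is deliberately left out, since no official command is assumed here):

```python
#!/usr/bin/env python3
"""Sketch: flag ATA link-reset errors in a syslog stream."""
import re

# Matches lines like "ata4: COMRESET failed (errno=-16)" from the logs above.
ATA_RESET = re.compile(r"ata\d+: COMRESET failed \(errno=-\d+\)")

def is_link_reset(line):
    """True if the syslog line reports a failed ATA COMRESET."""
    return ATA_RESET.search(line) is not None

def watch(log_lines):
    """Yield each line that looks like an ATA link-reset failure."""
    for line in log_lines:
        if is_link_reset(line):
            # Here you could send a notification or invoke a stop-array
            # action of your choosing; no Unraid command is assumed.
            yield line

if __name__ == "__main__":
    sample = [
        "kernel: ata4: COMRESET failed (errno=-16)",
        "kernel: ata4: SATA link up 6.0 Gbps",
    ]
    for hit in watch(sample):
        print("link reset detected:", hit)
```

Feeding it `tail -f /var/log/syslog` (path may differ per system) would give a live alert well before repeated resets get a disk disabled.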
  19. After looking over the system logs, I'm now thinking the LSI card (or, less likely, the motherboard) is the problem. I don't think it's any of the disks. But if anyone else has additional theories or things to try, please share.
  20. My card is giving the same code. But it started happening right after switching to another motherboard, so I'm not sure the card is at fault. Also, in both instances the card was under hardly any load, so overheating doesn't seem likely either. https://forums.unraid.net/topic/106631-disk-read-errors-on-multiple-disk-need-help-diagnosing
  21. @unraid_chris did you ever figure out what the problem was exactly?
  22. The system has been running for a couple of hours. These are the only errors the system log is showing right now.
  23. The problem is there are no errors most of the time. So if the array isn't active, it seems especially unlikely for the error to occur. On what line in the system log did you see that the issue started with disk 3?
  24. That would show up as an error in the system log, right? The problem there is that my disks are getting disabled, which requires a full rebuild afterward. I swapped the SATA cables and I'm doing a rebuild right now. I haven't seen any errors yet.
  25. Before shutting the system down or anything, I started the array in maintenance mode and started a read check. So far it hasn't given any errors. So is it likely that the problem is that one HDD? Disk 1 was the drive that got disabled on both occasions. But if it's just that disk, does it make any sense for unRAID to report errors on the other disks? Also, the SMART stats for Disk 1 didn't indicate any issues either.