UncleStu

Members
  • Posts: 47
  • Joined
  • Last visited


UncleStu's Achievements

Rookie (2/14)

Reputation: 1
Community Answers

  1. I now have two containers that are displaying "Not Available" for the version. Both containers still appear to be using their previously defined repo too. Anything else that can help resolve this? I am on 6.12.6, so the patch is not an option either.
  2. Memtest found 2 of the 4 sticks had errors. I pulled those for now and started an RMA request with G.Skill. Memtest passed on the two sticks I left in too. I moved my system share and appdata into the same NVMe pool. Everything started up with no issues. Historically the docker image would throw errors when on the NVMe pool. No errors on the SSD pool or the array. I also added the 'nvme_core.default...' to my default boot, from this post. Time will tell at this point. Thank you @JorgeB for your assistance.
  3. Assuming 'device' could be as broad as hardware, as in the NVMe devices? How can I go about troubleshooting this further? Nothing has changed in the system for a couple of years, except that I now have the latest BIOS and still have the same issue. Oh, and I did swap out my pre-failing disk 5; the data rebuild is 90% complete, but I can't see how that would be an issue. I have 2 cache pools, NVMe and SSD. When the docker image is on the NVMe pool, along with my appdata, it corrupts. I have been using /mnt/user/system/docker/ as the path for the image, and of course when I move it, I stop the services. Last night, when I recreated the image again, I used /mnt/s-cache/system/docker instead, putting the image on the SSD cache. The appdata is still at /mnt/user/appdata/, with the appdata share using the cache as preferred and the array as backup. I have not had any errors since starting the dockers last night, but my Plex server UI did time out and required a restart of the docker. This is how it started when I first began to notice corruption. These are the last errors.
  4. I did, but didn't fully follow it. Either way, my docker image is still corrupting after formatting the cache pools. The extended SMART tests showed no errors either.
  5. I see the speed set to 3200 MT/s in the meminfo.txt file, but I couldn't find where you saw that it should be 2666. EDIT: I changed my RAM speed to auto and verified in the BIOS that it was 2666. I then erased and re-created both my cache pools. After mover finished moving things back to my NVMe cache pool, it still corrupted partway through starting dockers. How can I tell why/what is corrupting this? unraid-diagnostics-20240204-1904.zip
  6. I see the speed set to 3200 MT/s in the meminfo.txt file, but I couldn't find where you saw that it should be 2666.
  7. Overclocked RAM? I wasn't aware I had overclocked anything, as I know servers don't care for it. Mind sharing where in the diags that is, or how you came to see that I have overclocked RAM?
  8. A couple of days ago my Plex server started to stop responding. A restart would bring it back for roughly 25 hours before it would go down again. I enabled debug logging but saw nothing out of the ordinary. I decided to delete the docker and recreate it. The docker compose failed with "Failed to create btrfs snapshot: input/output error", leading me to believe this is a corrupt docker image. I deleted/recreated my docker image and re-downloaded all the dockers. After about the 10th image started up, I started getting loop2 WRITE errors again. I'm in the process of moving things off my NVMe cache pool, but I'm at a loss for where the image corruption could be coming from. A short self-test of each NVMe shows 0 errors, and the SMART attributes are not telling me much either. I do have a pre-failing disk 5, and the replacement is sitting here on my desk, but I cannot see how disk 5 would be related to my docker image corrupting so quickly after recreating it. Am I missing anything? Or could my NVMe drive(s) be failing? unraid-diagnostics-20240202-1308.zip EDIT: I am running the extended tests right now. I also see the zfs pool is showing an error. Any way to dig into that deeper to know which drive is bad? Or are both having issues?
  9. Same here. This started a few days ago for me. It's not the local cache, as I get the same issue while in Incognito mode. I did upgrade from 6.12.3 to 6.12.4 three days ago, but I feel like this plugin worked after the upgrade. But I could be mistaken. And unlike @yogy, I'm still getting it. Nothing is appearing within my logs either. EDIT: Does not appear to be related to the 6.12.4 upgrade. I just upgraded my secondary unRAID and it is still scanning, plus it's adding the Fix Common Problems line to the log: Sep 14 10:55:44 unRAID root: Fix Common Problems Version 2023.07.29 EDITED EDIT: A reboot appears to have resolved the issue. Time will tell now. As mentioned before, I don't recall seeing the issue immediately after the upgrade, and running this plugin is something I do right after an upgrade. But there could be a number of factors involved. I'm also trying to move VMs off my cache pool to reformat it and was getting errors with that, or the move was showing a state of completion at over 120%.
  10. Just upgraded from 6.12.3 to 6.12.4 and ran into this issue. EDIT: Probably stating the obvious. Yesterday I stopped and restarted the array, and both the Docker and VM services started up. Today I stopped the VM service and was unable to restart it. So once it's stopped, it's stopped, until you restart the array, or possibly stop the dockers and then start them up in reverse order. I have not tested this latter part yet.
  11. I recently shut down my Docker and VM services, transferred everything to my array, and reformatted my cache pools (1 NVMe and 1 SSD) from btrfs to zfs. Afterwards I moved everything back and restarted the services. I started getting weird things when the docker images would update, and it turns out there was some corruption in the docker image; I was getting /loop3/ error messages. I deleted and recreated the docker image and everything is fine there. However, my VMs will operate fine for 30 seconds or so and then not respond for 30 seconds or so, then catch up with the clicks/commands I was trying to run and become non-responsive again. I have updated the VirtIO drivers, and even completely uninstalled and reinstalled them within the VMs. No difference. The strange thing: if I build a brand new VM, there are no issues. No pausing, no lag. Any idea what could be the cause? I do have the libvirt error "virNetSocketReadWire:1791 : End of file while reading data: Input/output error" in my logs, but don't believe this is related. I am also not seeing any /loop2/ error messages, so I don't think my libvirt file has corruption. Plus any new VMs run fine. I will post a diag file here as soon as I can. I have a 41 GB syslog.1 log file from moving the data off my cache pools; I temporarily enabled mover logging so I could monitor the process and failed to delete this syslog file before downloading a new diag file. unraid-diagnostics-20230808-1757.zip
  12. Thank you. Any way to give them a unique SN? Copying within unRAID is SO much faster than Windows.
  13. I have two 16 GB USB sticks, from the same vendor, that I'm trying to copy files to. I insert the first stick and UD sees it as 'sdk'. As soon as I insert the second stick, which is seen as 'sdl', the previous one is no longer visible to mount; only the second stick is. In the image, both sticks are plugged in, but only sdl is shown. At the end of the log, after sdl is inserted, there is an error about getting the ID of sdk. Is this because they are from the same manufacturer? Debug Logs:
  14. Thanks. That's not a browser plugin but my firewall. Odd, and not odd, that the remote unRAID, which I'm accessing over a site-to-site tunnel, is trying to do something local to me. When my local browser accesses the remote Apps page, what is my local browser trying to access? Is it fetching the data like a proxy to the unRAID server? I tried connecting through unRAID Connect and I'm getting the same errors. I can't quite wrap my head around why my local firewall is giving an error on the remote server. EDIT: It has something to do with HTTPS inspection at the firewall. There are a number of URLs/domains that I pulled from that text. Is GitHub where the content is pulled from? I'm still looking through the logs (active and historical) to see where this is breaking. RESOLVED: Since the unRAID was at a remote site, and I was using SSL to communicate with it, the HTTPS inspection in the firewall itself was the problem, not the remote unRAID trying to proxy. I added an exclusion to bypass SSL inspection based on the destination of my remote unRAID, and that resolved the issue.
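A side note on the boot option mentioned in answer 2: an edit to syslinux.cfg only takes effect after a reboot, and it's easy to forget to verify it. Below is a minimal sketch of how to confirm a kernel parameter is actually active; `some_param=value` is a hypothetical placeholder (the real nvme_core option is truncated in the post), and a sample command line stands in for the live `/proc/cmdline`:

```shell
# Verify that a kernel boot parameter is active on the running system.
# "some_param=value" is a hypothetical placeholder for the truncated
# nvme_core option; substitute whatever you appended in syslinux.cfg.
check_cmdline() {
  # $1: parameter to look for; $2: kernel command line to search
  case " $2 " in
    *" $1 "*) echo "active" ;;
    *)        echo "missing" ;;
  esac
}

# On a live server you would pass "$(cat /proc/cmdline)" as the second
# argument; a sample command line is used here for illustration.
sample_cmdline="BOOT_IMAGE=/bzimage initrd=/bzroot some_param=value"
check_cmdline "some_param=value" "$sample_cmdline"   # prints "active"
```

On a live box, `check_cmdline "your_option" "$(cat /proc/cmdline)"` printing "missing" after a reboot would mean the syslinux.cfg edit didn't stick (e.g. it went into the wrong boot entry).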
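For the zfs question in answer 8, `zpool status` shows per-device READ/WRITE/CKSUM error counters, which is usually enough to tell which drive in the pool is erroring. A sketch that filters that output down to devices with nonzero counters; the pool layout shown is invented for illustration:

```shell
# List zpool devices with nonzero READ/WRITE/CKSUM counters.
# Pipe in real output with:  zpool status <pool> | flag_zpool_errors
flag_zpool_errors() {
  awk '$1 == "NAME" { in_cfg = 1; next }
       in_cfg && NF == 5 && ($3 > 0 || $4 > 0 || $5 > 0) { print $1 }'
}

# Invented sample `zpool status` output for illustration only.
sample_status='  pool: cache
 state: ONLINE
config:

        NAME         STATE     READ WRITE CKSUM
        cache        ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            nvme0n1  ONLINE       0     0     2
            nvme1n1  ONLINE       0     0     0

errors: No known data errors'
printf '%s\n' "$sample_status" | flag_zpool_errors   # prints "nvme0n1"
```

`zpool status -v` additionally lists files affected by permanent errors, which helps decide whether the corruption maps to the docker image or to something else on the pool.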
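On answers 12-13, Unassigned Devices keys disks by their device ID, so two sticks reporting identical serial numbers collide, which matches the "error getting the ID of sdk" in the log. A quick way to check is `lsblk -dno NAME,SERIAL`; the sketch below runs a duplicate-serial check over a made-up sample of that output:

```shell
# Flag disks that report the same serial number. On the server,
# generate the real input with:  lsblk -dno NAME,SERIAL
find_dup_serials() {
  awk 'NF == 2 { count[$2]++; devs[$2] = devs[$2] " " $1 }
       END { for (s in count) if (count[s] > 1) print s ":" devs[s] }'
}

# Invented sample: two sticks sharing one serial, plus an NVMe drive.
sample_lsblk='sdk 04019A6B
sdl 04019A6B
nvme0n1 S4EWNX0M123456'
printf '%s\n' "$sample_lsblk" | find_dup_serials   # prints "04019A6B: sdk sdl"
```

If this flags duplicates, the serial is burned into the stick's controller firmware by the vendor; it generally can't be changed from the host side, which is why the workaround is usually to use sticks from different vendors or mount one at a time.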