mlounsbury

Members
  • Content Count

    55
  • Joined

  • Last visited

Community Reputation

0 Neutral

About mlounsbury

  • Rank
    Newbie

Converted

  • Gender
    Undisclosed

  1. Not sure if you figured this out, but I had the same issue. After doing some searching, I found someone on the Home Assistant Community who said they needed to change to their FQDN, and that worked for them. They were using DuckDNS for their reverse proxy, and I have my own domain, but entering my FQDN fixed this for me (there's a quick curl sketch after this list showing why the full name matters).
  2. Thanks. The only reason I thought a memory issue might be causing the high CPU utilization is because of what Fix Common Problems was reporting. I did not see the same issue again this past Saturday, so I'm at a loss as to what could be causing it. I will keep watching it, and if it happens again I'll grab the diags while it is at 100% (the capture commands are sketched after this list).
  3. I noticed the last two Saturday mornings (8/29 and 9/5) that when I woke up my Unraid server had maxed out its memory and all cores/threads of my CPU (Ryzen 7 2700) were pegged at 100%. When it happened on 8/29, I checked the Unraid GUI and it appeared Mover was running (see the mover check sketched after this list). I have a directory for Windows PC backups, and I had been using the cache so the backups land there first and then move to the array. Once or twice in the past I noticed it takes a while to move the backups over, so I thought that might have been the problem. I stopped mover from the command line …
  4. Looks like the swag image is in the same directory as the LE png file: https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/swag.gif
  5. Not sure if you saw, but there is an issue for this in the paperless GitHub: https://github.com/the-paperless-project/paperless/issues/651. Looks like there is a pull request to fix it, but it has not yet been merged.
  6. Looks like I figured it out, and I don't get the error any longer. It had to do with line endings. For anyone else: the entrypoint needs to go into Extra Parameters. I was getting the following error when I had it in Extra Parameters: standard_init_linux.go:211: exec user process caused "no such file or directory" I found this post: https://forums.docker.com/t/standard-init-linux-go-175-exec-user-process-caused-no-such-file/20025/9 I had created the file in VS Code, where I needed to change the End of Line Sequence from CRLF to LF (a shell equivalent is sketched after this list).
  7. I'm having an issue getting the container to run with both webserver and consumer in one container using the script. This is what shows in the container log: Mapping UID and GID for paperless:paperless to 99:100 Operations to perform: Apply all migrations: admin, auth, contenttypes, documents, reminders, sessions Running migrations: No migrations to apply. Unknown command: '--entrypoint' Type 'manage.py help' for usage. I've got the script in Post Arguments. I've also tried it in Extra Parameters to no avail, and tried quotes as well (how those two fields map onto docker run is sketched after this list). I've changed the script …
  8. At what point is your traceroute dying? I am able to traceroute without an issue, and it's only one hop from my firewall to Amazon (a sample trace is sketched after this list). I am on the east coast of the United States; not sure where you're located. Sounds like it could be a regional issue.
  9. Forgot to come back to this. My solution was to remove the one 2TB drive with sector issues and connect my first cache drive to its SATA port. I brought up the array with the cache drive assigned, and all my Docker containers were back. I then used the shrink-array procedure to clear the other 2TB drive I wanted to remove from the array, went through the new configuration and reassigned my disks, and then ran a parity check to be sure everything was good, which came back with 0 errors. I shut down the array, pulled the 2TB drive out, and connected the other cache drive to its …
  10. Yeah, I don't think it will help either. I think my best shot is going to be to remove the card, remove the 2TB drive with the sector issues, connect the primary cache drive, boot it up, start the array, and make sure everything is good. Then do a new config, add the drives minus the other 2TB drive I want to remove, let it do a parity rebuild, shut down the server, remove that 2TB drive, and re-add the second cache drive. At least that way I'll get cache and Docker up and working tonight, just without the cache redundancy.
  11. @Benson thanks for the response. First off, I'll admit I'm not too slick when it comes to new PCI-E standards. I've been working on systems all the way back to the ISA days, and I haven't really fooled with add-on cards much in the last 10-15 years. To answer your first question, the ASM1062 worked with my previous hardware, although it was the only add-on card installed on that motherboard. As for the layout of my motherboard: I have my XFX GPU in slot 1, which is PCIEX16_1, and the ASM1062 is plugged into slot 5, which is PCIEX1_3. I had thought about moving the card …
  12. So recently I decided to upgrade the hardware of my unRAID server, since it is doing a lot more than I originally expected after the last upgrade I made 4+ years ago, and the AMD Ryzen sales looked pretty good. I bought a Ryzen 7 2700, an Asus ROG STRIX X470-F GAMING, 16GB of DDR4, and an XFX RX570. This weekend I made a backup of my appdata and USB and went ahead and upgraded the hardware. The hardware swap went without issue, albeit a little cramped in my Antec Nine Hundred. After booting up last night, I couldn't get unRAID to boot. After the boot menu, I …
  13. Non-correcting parity check found 0 errors. Looks like everything is good to go. Thanks again @johnnie.black
  14. Thanks, I thought that was weird but wanted to confirm. It's also possible this person meant to do this when replacing a known-good drive. I've done some more reading on this and have gone ahead and turned off correcting errors on my monthly checks. I just find it weird that it ran the check even though the text next to the button says it will initiate a clean reset. I've always stopped the array before rebooting or shutting down, but I figured since the text now says it's a clean shutdown I'd be okay. I went ahead and stopped the array and moved …
  15. I have a data drive in my array that started exhibiting read errors on Sunday night and reported current pending sectors overnight that same night. Luckily I had a new 8TB drive I was planning to install into my server so I could retire my oldest 2TB drives, and it just so happens that the drive reporting these errors is one of them. I installed the new drive on Monday morning and ran a preclear on it, which completed this morning. I've read in the wiki and a few other posts on the forum that replacing a failing drive is as simple as removing the failing …
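
For #1, a rough illustration of why the FQDN matters: a reverse proxy routes requests by the Host header, so the service generally only answers properly on the full domain name. A minimal check from any shell, with ha.example.com and 192.168.1.10 as placeholder addresses:

    # Through the proxy, using the FQDN the proxy config and certificate expect:
    curl -I https://ha.example.com
    # Hitting the bare LAN IP bypasses the proxy's host matching and will often
    # fail the certificate check or land on the wrong backend (-k ignores the cert):
    curl -kI https://192.168.1.10:8123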
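
For #2, a sketch of what I'd capture next time the CPU pegs, assuming console or SSH access while the problem is live; diagnostics is Unraid's built-in collector:

    ps aux --sort=-%cpu | head -n 15   # snapshot of the worst CPU offenders
    free -h                            # memory picture at the moment of the spike
    diagnostics                        # collects a support zip on the flash drive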
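
For #3, checking whether mover really is the culprit before stopping anything. Note that 'mover stop' is an assumption on my part; the stock mover script on newer Unraid builds accepts it, but older ones may not:

    ps aux | grep '[m]over'   # the [m] keeps grep itself out of the results
    mover stop                # assumed on newer builds; otherwise kill the PID from ps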
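
For #6, the same line-ending fix from the shell instead of VS Code; entrypoint.sh is a placeholder for whatever the script is called:

    file entrypoint.sh               # reports "... with CRLF line terminators" when affected
    sed -i 's/\r$//' entrypoint.sh   # strip the carriage returns (dos2unix does the same)
    chmod +x entrypoint.sh           # exec also fails if the script isn't executable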
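
For #7, how I understand the two Unraid template fields map onto docker run, which would explain the 'Unknown command' error; the image and script names here are placeholders:

    # Extra Parameters land BEFORE the image name, so docker itself consumes them:
    docker run --entrypoint /config/start.sh some/paperless-image
    # Post Arguments land AFTER the image name and are handed to the container's
    # default entrypoint, which is why manage.py saw '--entrypoint' as a command:
    docker run some/paperless-image --entrypoint /config/start.sh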
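
For #8, a sample trace; the target is just an example host, and -n skips reverse DNS so the hops are easier to read. Rows of '* * *' mark where replies stop coming back:

    traceroute -n s3.amazonaws.com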