
About mlounsbury


  1. Not sure if you saw, but there is an issue for this in the paperless GitHub repo: https://github.com/the-paperless-project/paperless/issues/651. It looks like there is a pull request to fix it, but it has not yet been merged.
  2. Looks like I figured it out, and I don't get the error any longer. It had to do with line endings. For anyone else: the entrypoint needs to go into Extra Parameters. Even with it there, I was getting the following error:

     standard_init_linux.go:211: exec user process caused "no such file or directory"

     I found this post: https://forums.docker.com/t/standard-init-linux-go-175-exec-user-process-caused-no-such-file/20025/9 I had created the file in VS Code, and I needed to change the End of Line Sequence from CRLF to LF.
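As a quick sketch of the same fix outside VS Code (assuming a Linux host with `sed` available; `entrypoint.sh` is a placeholder filename):

```shell
# Strip Windows-style carriage returns in place (CRLF -> LF).
# 'entrypoint.sh' is an example name; substitute your own script.
sed -i 's/\r$//' entrypoint.sh

# 'file' reports "with CRLF line terminators" if any remain.
file entrypoint.sh
```

`dos2unix entrypoint.sh` does the same thing where that tool is installed.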
  3. I'm having an issue getting the container to run with both the webserver and the consumer in one container using the script. This is what shows in the container log:

     Mapping UID and GID for paperless:paperless to 99:100
     Operations to perform:
       Apply all migrations: admin, auth, contenttypes, documents, reminders, sessions
     Running migrations:
       No migrations to apply.
     Unknown command: '--entrypoint'
     Type 'manage.py help' for usage.

     I've got the script in Post Arguments. I've also tried it in Extra Parameters, to no avail. Tried quotes also. I've changed the script like @dcoens mentions above as well, and the script is executable. Any ideas what I'm doing wrong here?
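For context on that "Unknown command" line: in a `docker run` invocation, anything after the image name is passed as arguments to the container's command (here, paperless's manage.py), while `--entrypoint` is an option to `docker run` itself and must come before the image name. A sketch of the distinction, with placeholder image and script names:

```shell
# Placeholder names throughout -- substitute your actual image
# and script path.

# Wrong: '--entrypoint' appears after the image name, so it is
# handed to the container's default command (manage.py), which
# rejects it with "Unknown command: '--entrypoint'".
docker run paperless-image --entrypoint /path/to/script.sh

# Right: as a docker-run option it goes before the image name --
# in unRAID terms, Extra Parameters rather than Post Arguments.
docker run --entrypoint /path/to/script.sh paperless-image
```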
  4. At what point is your traceroute dying? I am able to traceroute without an issue, and it's only one hop from my firewall to Amazon. I am on the east coast of the United States, not sure where you're located. Sounds like it could be a regional issue.
  5. Forgot to come back to this. So my solution was to remove the one 2TB drive with sector issues and connect my first cache drive to its SATA port. I brought up the array with the cache drive assigned, and all my Docker containers were back. I then used the shrink-array procedure to clear the other 2TB drive I wanted to remove that was part of the array, went through the new configuration and reassigned my disks, and then ran a parity check to be sure everything was good; it came back with 0 errors. I shut down the array, pulled the 2TB drive out, connected the other cache drive to its SATA port, booted up, assigned the second cache drive, and started the array, and everything is back to where it was before. Hopefully I'll be able to go to the M.2 SSD cache drives I was planning on in the future.
  6. Yeah, I don't think it will help either. I think my best shot is going to be to remove the card, remove the 2TB with the sector issues, connect the primary cache drive, boot it up, start the array and make sure everything is good. Then do a new config, add the drives minus the other 2TB I want to remove, let it do a parity rebuild, shut down the server and remove that 2TB drive and re-add the second cache drive. At least that way I'll get cache and docker up and working tonight, just without the cache redundancy.
  7. @Benson thanks for the response. First off, I will admit I am not too slick when it comes to new PCI-E standards. I've been working on systems all the way back to ISA days, and I haven't really fooled with add-on cards much in the last 10-15 years. To answer your first question, the ASM1062 worked with my previous hardware, although it was the only add-on card installed on that motherboard. Here is the layout of my motherboard: I have my XFX GPU in slot 1, which is PCIEX16_1. The ASM1062 is plugged into slot 5, which is PCIEX1_3. I had thought about moving the card to slot 3, which is PCIEX1_2, but wasn't sure if that was the right thing to do or not. Guess it couldn't hurt to try. Here's the BIOS information for the onboard devices: PCIEX16_1 bandwidth does not show for me since I have a Ryzen 2nd gen. PCIEX16_2 is set to 8X mode, and I set PCIEX16_3 4X-2X Switch to 2X. I attempted to set it to 4X mode, but when I did that the ASM1062 was not recognized at all by the motherboard.
  8. So recently I decided to upgrade the hardware of my unRAID server, since it is doing a lot more than I originally expected it would be doing after the last upgrade I made 4+ years ago, and the AMD Ryzen sales looked pretty good. I bought a Ryzen 7 2700, an Asus ROG STRIX X470-F GAMING, 16GB DDR4, and an XFX RX 570. This weekend I decided to make a backup of my appdata and USB and go ahead and upgrade the hardware. The hardware upgrade went without issue, albeit a little cramped in my Antec Nine Hundred.

     After booting up last night, I couldn't get unRAID to boot. After the boot menu, I would get an error that there wasn't enough memory to load the OS. After some reading, I found that the RAM I bought wasn't on Asus's qualified list for the motherboard. So I went over to Micro Center and picked up RAM that was on the list, popped it in, and we're getting further. I got into a boot loop when trying to boot into GUI mode, but booting into the regular unRAID OS works fine. So that's troubleshooting for another day.

     After I was able to get to the web GUI for my server, I went ahead and put my encryption key in and started the array. I went to the Docker tab and received an error that it couldn't start. I went into the Docker settings and it said it couldn't find the docker image file. Weird, since I knew my cache drives were connected. So I went to the main tab and, lo and behold, my cache drives weren't there.

     After much playing around in the BIOS and then much searching on the forums here, I found I was running into the dreaded Marvell PCI-to-SATA card issue. Both my cache drives are connected to a PCI-E-to-SATA card, specifically an ASM1061 chipset card. The card is plugged into the PCIe 2.0 x1_3 slot on the motherboard, and when the system boots up I can see the card recognized, and the two hard drives on it are recognized as well. I went into the BIOS and turned off IOMMU and SVM mode, but I am still not able to see the drives in my system.
I'm getting these errors in the syslog:

Apr 7 13:03:20 unRAID kernel: ahci 0000:05:00.0: AHCI controller unavailable!
Apr 7 13:03:20 unRAID kernel: ata9.00: failed to IDENTIFY (I/O error, err_mask=0x4)
Apr 7 13:03:20 unRAID kernel: ata9.00: revalidation failed (errno=-5)

So it looks like I'm not going to be able to get this card working easily unless there's another BIOS setting I can change. If I go to Tools => System Devices, unRAID can see the card:

[1b21:0612] 05:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev ff)
[1b21:1242] 06:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller
[8086:1539] 08:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
[1002:67df] 0a:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev ef)
[1002:aaf0] 0a:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 580]
[1022:145a] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
[1022:1456] 0b:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
[1022:145f] 0b:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] USB 3.0 Host controller
[1022:1455] 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
[1022:7901] 0c:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
[1022:1457] 0c:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller

Unless there's another fix I can try, here were my thoughts. I currently have 8 hard drives in my system.
Five are in the array connected to onboard SATA (8TB parity, 8TB data, 2x2TB data, 1x3TB data), two are cache drives connected to the PCI-E card, and one is a 2TB drive that recently had bad sectors, connected to the last onboard SATA connection. The drive with bad sectors has been removed from the array but I have not physically removed it from the server; it's still connected and just hanging out in Unassigned Devices. One of the other 2TB drives is as old as the drive with bad sectors, and I actually cleared all of the data off of it a week or so ago. I excluded it from all of my shares so nothing would be written to it either; however, it is still assigned a slot in the array but doing pretty much nothing.

I know I can remove the 2TB drive that is no longer in the array without issues and connect one of my cache drives there; however, I don't know the best way to go about removing the other 2TB drive from the array. Should I go through and use New Config to reassign my drives, leaving out the 2TB drive I want to remove, let parity rebuild, then shut down the server, remove the old drives, and connect the cache drives to the onboard SATA connectors? Or should I just shut down the system, remove both 2TB drives, connect my cache drives to the onboard SATA, bring up the array even though it will have an invalid config, and let parity rebuild that way? I've been without my docker containers for 24+ hours now and I'm itching to get them back up as fast as possible, but I also don't want to create more problems for myself.
  9. Non-correcting parity check found 0 errors. Looks like everything is good to go. Thanks again @johnnie.black
  10. Thanks, I thought that was weird but wanted to confirm. It's also possible that person was saying to do this when replacing a known-good drive. I've done some more reading on this and have gone ahead and turned off correcting errors on my monthly checks. I just find it weird that it ran the check, even when the text next to the button says it will initiate a clean reset. I've always stopped the array before rebooting or shutting down, but I figured since the text now says it's a clean shutdown, I'd be okay. I went ahead and stopped the array, moved the new disk into the slot where the failing disk was, and had it rebuild. It completed early this morning with no errors, and everything looks okay. I just started a non-correcting parity check to ensure everything is good. Thanks for the help!
  11. I have a data drive in my array that started exhibiting read errors on Sunday night, and it reported current pending sectors overnight that same night. Luckily I had a new 8TB drive I was planning on installing into my server so that I could retire my oldest 2TB drives; as it happens, the drive reporting these errors is one of those drives. I installed the new drive into the server on Monday morning and ran a preclear on it, which completed this morning.

     So I've read in the wiki and a few other posts on the forum that replacing a failing drive is as simple as removing the failing drive from the array, adding the new drive to the same slot, and telling the array to rebuild the drive using the data on the other disks plus parity. I also found a post where someone said they run a parity check before doing this to make sure they have good parity. I run a correcting parity check on the first of every month, and the last check on 3/1 found 2 errors. Prior to that, I rebooted my server on 2/28 without stopping the array (just hit the reboot button in the GUI), and when the server came up it performed a parity check for some reason. It also found 2 errors. I'm assuming the 2 errors found in that check were not corrected, since the next scheduled parity check on 3/1 also found 2 errors. Monthly parity checks prior to 2/28 found 0 errors.

     So my question is: should I just go ahead and follow the wiki procedure to replace the failing drive? Or would it be better to do something else, like move the data from the failing drive to the new drive using rsync or something similar? I know 2 errors is not a lot, but it's more than 0, and I would prefer not to risk losing any data on the drive.
  12. Ah, my bad. Totally forgot to check the GitHub repo. Thanks for the information and the update to the template. Much appreciated.
  13. So I've been able to get the BitWarden container installed and working via LE/reverse proxy. I was having the same issue as those on the previous page, so thanks for the tips on that. I was able to get a new user registered using Chrome as well.

     The one thing I was curious about: whenever you go to the URL for BitWarden, it brings up a login page where anyone can register. I'm not sure anyone will come across my instance, but anything is possible. I did some searching, and it is possible to turn off registrations: https://github.com/bitwarden/core/issues/131 I looked through some of the directories in the container, but I wasn't able to find the location or file they're referring to. Is it possible to do this in this docker container?

     Also, I was wondering if there was an admin portal, and I found another document that said it should be available at https://bitwarden-domain.com/admin; however, when I go to /admin I get a 404 error. I am assuming that was not included in the docker container? Is it possible to have it added?
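If this container is the bitwarden_rs image (an assumption on my part; these variables don't apply to the official upstream server), both behaviors are controlled by environment variables rather than a file inside the container. A minimal sketch:

```shell
# Sketch assuming the bitwarden_rs (now Vaultwarden) image.
# SIGNUPS_ALLOWED=false turns off open registration;
# setting ADMIN_TOKEN enables the /admin portal (use a long
# random value -- this one is a placeholder).
docker run -d --name bitwarden \
  -e SIGNUPS_ALLOWED=false \
  -e ADMIN_TOKEN='replace-with-a-long-random-string' \
  -p 8080:80 \
  -v /path/to/appdata/bitwarden:/data \
  bitwardenrs/server:latest
```

In the unRAID template, the two `-e` values would go in as additional environment variables on the container's edit page.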
  14. Thanks. I guess at this point, this issue is resolved. I'll have to do some further troubleshooting on the UDMA CRC errors, but the parity check issue is fixed/solved.
  15. So the first parity check completed, and found and corrected 206 errors. I ran a second one and it completed with no errors. The UDMA CRC errors incremented in both checks, but it seems like that is the only issue, and only on disk 1. I did have my SATA cables bundled prior to doing anything, and when I installed the 8TB drive, I unbundled them. The UDMA CRC errors I was getting prior to doing anything were on disk 1, so it seems there is either a bad connection or the cable is going bad. I am still kind of mystified why the sync errors went crazy when I installed the PCI SATA controller, since I had my cache drives connected to it. Am I correct in thinking that cache drives aren't taken into account in a parity sync? Maybe it was just the bad cable that was causing all of the issues and it presents itself differently depending on the hardware configuration?