Posts posted by Spyderturbo007

  1. I tried this migration about a year ago, had issues and rolled it back.  I forgot all about it until today.  I thought I'd give it another go and I still can't get it to work.  I followed the Spaceinvader One video and used the second option.

     

    1. Disable letsencrypt

    2. Install swag

    3. Copy over *.conf files

    4. Copy over dns-conf files

    5. Profit
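
    For steps 3 and 4, what I actually copied was roughly this (I'm going from memory, and the paths are just how my appdata shares happen to be laid out):

    # Roughly what I ran for steps 3 and 4, from memory; paths assume the usual
    # letsencrypt/swag appdata layout on my cache drive.
    cp /mnt/user/appdata/letsencrypt/nginx/proxy-confs/*.conf /mnt/user/appdata/swag/nginx/proxy-confs/
    cp /mnt/user/appdata/letsencrypt/dns-conf/* /mnt/user/appdata/swag/dns-conf/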

     

    Unfortunately, everything breaks when I turn on swag. All my subdomains give the following error.

     

    Secure Connection Failed

    An error occurred during a connection to emby.mydomain.com. PR_END_OF_FILE_ERROR

    Error code: PR_END_OF_FILE_ERROR

        The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
        Please contact the website owners to inform them of this problem.

     

    I see this line being repeated over and over and over again in the logs, but it's specific to bitwarden.  I don't see anything for the other subdomains, but they don't work either.

     

    nginx: [emerg] "resolver" directive is duplicate in /config/nginx/proxy-confs/bitwarden.subdomain.conf:7

     

    If I stop swag and start letsencrypt, everything starts working again.
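
    In case it matters, my plan is to hunt for the duplicate with something like this from the swag container's console (just a guess at where to look):

    # Just a guess at where to look: list every resolver line swag loads, since the
    # error says the one at bitwarden.subdomain.conf line 7 is a duplicate.
    grep -Rn "resolver" /config/nginx/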

     

    Thank you!

  2. Does anyone know if the bug that caused folder permission issues with Docker containers was fixed with the release of 6.11?  I know there were a bunch of people, myself included, who had to downgrade to 6.9 because of the bug.  I don't see anything in the release notes and was hoping it was fixed.  I wanted to be sure before I upgraded again.

     

    Just some examples. 

     

    https://forums.unraid.net/topic/127325-issues-with-file-permissions-on-610x/#comment-1160670

    https://forums.unraid.net/topic/116281-solved-610-rc2-dockers-permission-to-appdata-folder/#comment-1056850

    https://forums.unraid.net/bug-reports/stable-releases/docker-permission-issues-unraid-610-r1986/

    https://forums.unraid.net/topic/123671-unraid-610-permission-denied-from-docker-containers/#comment-1127794

     

  3. Thank you for your help.  Since it was the same disk last time, is there a way for me to swap it out with the unassigned 8TB?  Could I just stop the array, unassign the problematic disk, assign the unassigned disk, and then start the array?

     

    As for the upgrade, I did upgrade, but I had a pile of permission issues with my dockers that no one could help me resolve.  There were a few of us with the same issue, but we didn't receive any help.

     

    Thanks again for your time!

  4. I received a notification that one of my disks is in an error state after read errors.  The weird thing is that it just completed a successful parity check.  It is a Supermicro server chassis with multiple backplanes, so it's not a cabling issue.

     

    I have an unassigned drive that has been precleared and is larger than the drive in the error state.  I'm just not sure of the best way to proceed.

     

    Disk 15 (sds) is the one with errors.  I tried downloading the SMART data, but it says "No device found".  I'm assuming that's because it's been taken offline.  Diagnostics are attached.  Any help would be much appreciated.  Thanks!
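
    For what it's worth, I'd also try pulling the SMART report manually from a terminal with something like this (/dev/sds is just the device name the GUI shows for this disk, and it may not respond while the disk is offline):

    # Manual SMART pull I'd try from the console; /dev/sds is what the GUI reports
    # for Disk 15, and it may not answer while the disk is offline.
    smartctl -a /dev/sds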

     

    EDIT -> I remember running into this a while back, looked through my old posts, and oddly enough it was the same drive.  The last time it happened (12/21), I ran a few preclear cycles and it came back fine, so it was re-added to the array.  I'm thinking I should remove it and be done with it.

     

    What's the best way to remove it and rebuild the data onto the unassigned 8TB drive?  Thanks!

    tower-diagnostics-20221003-2136.zip

  5. On 8/19/2022 at 3:00 PM, Mr.meowgi said:

    I'm glad I'm not alone on this, I just wish they would acknowledge this bug in future releases. Maybe if more people with the same problem come out 

    I downgraded to 6.9.0-rc2 and everything works now.  I didn't have to do anything other than downgrade.  As soon as the dockers came back up, they moved the files into the correct locations and deleted the temp folders.  It required no interaction on my part.

     

    It's definitely an issue with 6.10.3.  I did absolutely nothing other than upgrade, which caused the problem, and did nothing but downgrade, which fixed it.

  6. I'm having the same issue, having just upgraded to 6.10.3 from a 6.9 release if I remember correctly.  I just noticed it today when I went to watch a TV show and my Emby dashboard was full of random stuff.  I was able to rename, copy, and paste manually.

     

    Both SABnzbd and Sonarr are throwing all sorts of permission errors.

     

    Import failed, path does not exist or is not accessible by Sonarr: /tv/Animal.Kingdom.US.S06E11.1080p.WEBRip.x264-CAKES/. Ensure the path exists and the user running Sonarr has the correct permissions to access this file/folder
     

    Couldn't import episode /tv/Only.Murders.in.the.Building.S02E09.2160p.WEB.H265-GGEZ/0a5a4f09cbe744628f5f919a2948ebf5.mkv: Access to the path '/tv/Only.Murders.in.the.Building.S02E09.2160p.WEB.H265-GGEZ/0a5a4f09cbe744628f5f919a2948ebf5.mkv' is denied.

     

    Nothing has changed other than the upgrade to 6.10.3. 
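
    Unless someone has a better idea, my plan is to reset ownership on the TV share from the Unraid terminal with something like this (the share path is just a guess at how mine is laid out, and I'm not positive this is the right fix):

    # Rough idea, not verified: reset ownership on the TV share so Sonarr and SAB
    # can write to it again.  /mnt/user/TV is a guess at my own share name.
    chown -R nobody:users /mnt/user/TV
    chmod -R u=rwX,g=rwX,o=rX /mnt/user/TV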

  7. 4 hours ago, ich777 said:

    Did you maybe reboot twice?

    Do you update your plugins on a regular basis, is it possible that you weren't on the latest plugin version?

     

    Do the following:

    1. Delete the plugin from the error tab
    2. Reboot
    3. Grab a fresh copy of the Nvidia Driver from the CA App
    4. Disable and enable the Docker service
    5. Go to your Docker page and click on Add Container at the bottom
    6. From the drop-down select your Emby container (everything should be populated and how it was before) and click Apply

     

    If you got any issues, feel free to contact me again.

    That worked perfectly.  Thanks so much for your help!  Emby is up and running again and everything seems to be working fine.


    I update about once a week so it should have been the most recent.  

  8. Everything was running fine on my server until I had to shut down and then restart.  We had a power outage and I turned off the server before my battery was exhausted.  When I rebooted, I noticed that Emby didn't start.  It had an update available, so I tried updating it, which failed, and somehow the Emby container disappeared.

     

    Then I noticed that the Nvidia plugin was also gone, and there is a single line in the "Plugin File Install Errors" section saying:

     

    Plugin File                                      Status

    /boot/config/plugins-error/nvidia-driver.plg     Error

     

    Can anyone point me in the right direction of what happened and how to fix it?

    syslog.txt

  9. I received an email last night that my docker image was filling up.  I didn't get a chance to log in to the server last night, but apparently all my containers stopped and the entire server crashed.  I couldn't get into the web GUI or even ping the server.  I have it hooked up to a rack mount KVM and couldn't even get a picture.

     

    I had to force reboot it, so the logs are all wiped.  Where would I start to try and track down what is causing the issue?  I've read that it is typically the fault of a container writing data inside the docker image, but I haven't changed anything in months.

     

    I'm attaching a screenshot of my current mappings along with the directory structure of my cache drive.

     

    Can someone point me in the right direction?  Thanks!

     

    Event: Docker critical image disk utilization

    Subject: Alert [TOWER] - Docker image disk utilization of 95%

    Description: Docker utilization of image file /mnt/cache/system/docker/docker.img

    Importance: alert
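
    Unless there's a better starting point, my plan once it's back up is to check which containers are writing a lot of data inside the image, with something like this from a terminal (just my guess at a first step):

    # Just my guess at a first step: list each container's writable-layer size, since
    # anything large here is being written inside docker.img rather than to a mapped path.
    docker ps --size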

    Directory Structure.PNG

    Volume Mappings.PNG

  10. I was moving some things around today using disk shares and I'm pretty sure I have my split level set incorrectly for my TV Shows folder.  Here is how my directories are set up.

     

    --TV Shows
    ----Ozark
    ------Season 1
    --------S01E02.mkv

     

    If I want to keep all episodes in a season together, would my split level be "Split only the top three directory levels"?  I don't mind if Season 1 is on one disk and Season 2 is on another disk.  If I change the split level, does unRAID attempt to move existing files to match the new setting, or does the change only affect future writes to the disks?

  11. I'm trying to find a compatible 4-port NIC for my server.  Does anyone have any suggestions?  I'm not looking to spend a lot of money because honestly it's just for me to play with right now.  I'm playing with VLANs and traffic shaping and thought it would be fun.

     

    I can find 2-port cards, but not 4-port cards.  I was leaning towards Intel because they seem to be the most compatible.

     

    Would something like this work?

     

    https://www.ebay.com/itm/133993864047?hash=item1f32a81b6f:g:IOIAAOSwER1h4LJR

  12. I'm hoping someone can help me with moving my AppData folders to another drive.  I just installed 2 more SSDs so that my AppData isn't on the cache drive and subjected to so many read/write events.  Is there an easy way to move the folders that are on the cache drive to another drive?  It's not as simple as renaming the current cache drive to AppData and then naming another drive Cache, is it?  I'm also assuming that if I just drag and drop my AppData folder to another drive, all my dockers are going to explode?

     

    So I'd need to move the AppData folder and then somehow fix all the docker containers?
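
    My rough, unverified guess is that it's a matter of stopping the Docker service, copying the data across, and then repointing the containers at the new location, something like:

    # Rough, unverified guess: with the Docker service stopped, copy appdata over to
    # the new pool and then repoint each container's appdata path.  "newpool" is just
    # a placeholder for whatever I end up naming the new SSD pool.
    rsync -avh /mnt/cache/appdata/ /mnt/newpool/appdata/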

  13. I actually have 2 SSDs lying around that I can use.  I was planning on using 1 for VMs, 1 for cache, and the smaller 512GB that is in there now for the AppData stuff.  It just seems like SAB writes a pile of data to the current cache drive that stores the docker containers.

     

    I also recently went through a drive dropping off the array, which you helped me resolve, so I have a 6TB spindle disk lying around that I was either going to use as a cache drive or add back to the array.


    So I have 1 x 6TB spindle, 1 x 512GB SSD (currently used as the cache drive) and 2 x 1TB SSDs sitting on the shelf that I can leverage.

     

    I'd appreciate any additional advice you'd like to share!

    tower-diagnostics-20211215-0859.zip

  14. Currently I have my array and a single 512GB SSD I use as a cache drive.  It also houses my docker containers.  I was watching a video the other day where the person said that you shouldn't have the docker container AppData folders on the cache drive because of all the abuse the cache drive takes when moving data to the array. 

     

    Is that correct?  Should I use a separate drive for cache to minimize writes on the SSD?  I have another SSD I can throw in there if needed.  I am using Crashplan to back up the AppData folder so it is protected, but I'd rather minimize my chances of having downtime.

  15. 1 hour ago, trurl said:

     

    Probably will be fine, not the extended test I was referring to. 

    I actually tried to run the extended SMART test, but shortly after starting the test, it appeared as though the drive went to sleep.  I refreshed the page and it said something like "No recent results".  I decided to do the preclear instead.  When I did and went back to the drive page, it showed that it apparently resumed the SMART test when the drive woke up.

     

    So from what I'm seeing, it's doing the preclear and the SMART test at the same time and has been for the last 36 hours.  The SMART test shows at 90%.  I guess I'm really stressing it out!  :)

  16. I threw the 6TB drive back in the server and have it set up to run through 2 preclear cycles.  It just finished the first one successfully.  If it passes the second, is it safe to use as an array drive?

     

    Event: Preclear on WD-WX51D6422029
    Subject: Disk WD-WX51D6422029 (sdp) PASSED cycle 1!
    Description: Preclear: Disk WD-WX51D6422029 (sdp) PASSED cycle 1!
    Importance: normal

    Disk sdp has successfully finished a preclear cycle!

    Finished Cycle 1 of 2 cycles.
    Last Cycle's Pre-Read Time: 12:36:26.
    Last Cycle's Zeroing Time: 12:03:36.
    Last Cycle's Post-Read Time: 12:35:27.
    Last Cycle's Elapsed Time: 37:15:38
    Disk Start Temperature: 26 C
    Disk Current Temperature: 31 C

    Starting a new cycle.

  17. I was able to get an 8TB external HDD from Best Buy, shucked it and slapped it in where Disk 12 lived.  It's in the process of a rebuild right now.  I got paranoid and wanted to get the disk replaced before anything else failed.

     

    Estimated completion is 1 day 2 hours.  Is it alright to use the array while the rebuild is progressing, or should I not be using it during the rebuild?

     

    I think this has taught me that I need a second parity drive and then another drive just sitting there waiting to go.  My critical data is backed up, but my media isn't.  It would just be a ton of work to rip all the discs again.  It's hard to find a cost-effective way to back up 61TB.
